| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
PokeAPI/pokeapi | api | 824 | Internal Server Error | We are aware and fixing it | closed | 2023-01-23T15:24:39Z | 2023-01-23T15:32:29Z | https://github.com/PokeAPI/pokeapi/issues/824 | [] | Naramsim | 0 |
luispedro/mahotas | numpy | 134 | Python 3.11 wheels | Hi,
Python 3.11 is already out. It would be nice if wheels for Python 3.11 were uploaded to PyPI.
I also see that you use NumPy as a build requirement, which means that building against the latest available NumPy may lead to errors such as those described in #132 (the author of that issue has too old a NumPy version).
I suggest switching to the [oldest-supported-numpy](https://pypi.org/project/oldest-supported-numpy/) metapackage, which installs the oldest supported NumPy for a given Python version. | closed | 2022-12-30T10:34:09Z | 2024-04-17T12:28:49Z | https://github.com/luispedro/mahotas/issues/134 | [] | Czaki | 1 |
mwaskom/seaborn | matplotlib | 3,721 | Legend Overlaps with X-Axis Labels After Using move_legend in Seaborn | **Description:**
I'm encountering an issue with Seaborn's `relplot` function when using `sns.move_legend`. After moving the legend to the center-bottom of the plot, it overlaps with the x-axis labels. I've tried adjusting the subplot parameters and using `plt.rcParams['figure.autolayout']`, but the legend still overlaps with the x-axis labels. Here is a minimal reproducible example:
**Code:**
```python
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# Generate sample data
np.random.seed(42)
time = np.linspace(0, 100, 300)
roi = ["Left Temporal", "Frontal", "Right Temporal"]
types = ["Oxyhemoglobin", "Deoxyhemoglobin", "Total Hemoglobin"]
data = []
for r in roi:
for t in types:
values = np.sin(time/10 + np.random.randn()) + np.random.normal(0, 0.1, len(time))
for i in range(len(time)):
data.append([time[i], values[i], r, t])
df_roi = pd.DataFrame(data, columns=["time", "value", "roi", "type"])
# Plotting
lineplot_device = sns.relplot(
data=df_roi, kind="line",
x="time", y="value", col="roi",
hue="type",
palette={"Oxyhemoglobin": "#FF847C", "Deoxyhemoglobin": "#82B3D0", "Total Hemoglobin": "#A2D5A2"},
facet_kws=dict(sharex=True),
legend="brief",
errorbar=None,
col_order=['Left Temporal', 'Frontal', 'Right Temporal'],
height=3,
aspect=0.8
)
sns.move_legend(lineplot_device, "center", bbox_to_anchor=(0.5, 0.05), ncol=3, title=None, frameon=False, numpoints=4)
lineplot_device.fig.subplots_adjust(bottom=0.25, right=1)
plt.rcParams['figure.autolayout'] = True
plt.show()
```
Expected Behavior:
The legend should be centered at the bottom of the plot without overlapping with the x-axis labels.
Actual Behavior:
The legend overlaps with the x-axis labels, making the labels difficult to read.
| closed | 2024-07-02T07:25:26Z | 2024-07-08T02:47:26Z | https://github.com/mwaskom/seaborn/issues/3721 | [] | hasibagen | 2 |
tensorlayer/TensorLayer | tensorflow | 376 | [Discussion] dim equal assert in ElementwiseLayer | `TensorFlow` supports broadcasting. I think it is not necessary to add the following assert in `ElementwiseLayer`:
```Python
assert str(self.outputs.get_shape()) == str(l.outputs.get_shape()), "Hint: the input shapes should be the same. %s != %s" % (self.outputs.get_shape() , str(l.outputs.get_shape()))
``` | closed | 2018-03-04T16:16:49Z | 2018-03-05T13:51:51Z | https://github.com/tensorlayer/TensorLayer/issues/376 | [] | auroua | 1 |
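For reference, the broadcasting rule the TensorLayer issue above appeals to can be checked with a small pure-Python helper (an illustrative sketch, not TensorLayer code): two shapes are broadcast-compatible if, aligned from the trailing dimension, each pair of sizes is equal or one of them is 1.

```python
def broadcast_compatible(shape_a, shape_b):
    """Return True if two shapes are broadcast-compatible (NumPy/TensorFlow rule)."""
    # Align shapes from the trailing dimension; missing leading dims count as 1.
    for a, b in zip(reversed(shape_a), reversed(shape_b)):
        if a != b and a != 1 and b != 1:
            return False
    return True
```

Under this rule, shapes like `(3, 1)` and `(3, 4)` are compatible even though they are not equal, which is exactly the case the strict string-equality assert would reject.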
google-research/bert | tensorflow | 491 | Is it possible to replicate the results on the XNLI dataset which are present on the "Multilingual README" page? | I am not able to replicate the results for the "BERT - Translate Train Cased" system on English. Does anybody know the hyperparameters that were used for fine-tuning **BERT-Base, Multilingual Cased (New, recommended)** on English? | open | 2019-03-10T14:00:24Z | 2019-07-22T17:55:01Z | https://github.com/google-research/bert/issues/491 | [] | gourango01 | 1 |
LibreTranslate/LibreTranslate | api | 14 | Translate complete websites? | How can I translate full websites?
It would be good to include examples of translating websites into any other language. | open | 2021-01-16T22:07:53Z | 2024-10-24T03:59:12Z | https://github.com/LibreTranslate/LibreTranslate/issues/14 | [
"enhancement"
] | wuniversales | 7 |
tiangolo/uwsgi-nginx-flask-docker | flask | 10 | can't connect mongodb | Why can't I connect to MongoDB?
e.g.:
`data = source_client.users.users.find({"_id": ObjectId('5840e3eaf1d30043c60cae53')})[0]`
Feedback:
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p> | closed | 2017-06-06T03:29:20Z | 2017-08-26T14:45:13Z | https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/10 | [] | habout632 | 3 |
KaiyangZhou/deep-person-reid | computer-vision | 339 | How can I change the size of the feature dim? | closed | 2020-05-13T01:54:44Z | 2020-05-18T10:09:08Z | https://github.com/KaiyangZhou/deep-person-reid/issues/339 | [] | qianjinhao | 4 | |
benbusby/whoogle-search | flask | 789 | [BUG] Images in search results are fetched using HTTP, even with HTTPS_ONLY=1 | **Describe the bug**
A clear and concise description of what the bug is.
Title
**To Reproduce**
Steps to reproduce the behavior:
1. Search anything
2. Look in the network tab
3. See that images are pulled in using HTTP
**Deployment Method**
- [ ] Heroku (one-click deploy)
- [ ] Docker
- [x] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [x] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [ ] Version [version number]
- [ ] Not sure
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
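For illustration only (this is not Whoogle's actual code), the behaviour expected from `HTTPS_ONLY=1` amounts to upgrading any proxied `http://` image URLs before they reach the page, which can be sketched with the standard library:

```python
from urllib.parse import urlsplit, urlunsplit

def force_https(url: str) -> str:
    """Upgrade an http:// URL to https://, leaving other schemes untouched."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)
```

Scheme-relative URLs (`//host/path`) and already-secure URLs pass through unchanged.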
| closed | 2022-06-15T03:42:39Z | 2022-06-27T13:41:02Z | https://github.com/benbusby/whoogle-search/issues/789 | [
"bug"
] | DUOLabs333 | 16 |
roboflow/supervision | computer-vision | 1,215 | Problems with tracker.update_with_detections(detections) | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
Somehow, I lose predicted bounding boxes in this line:
`tracker.update_with_detections(detections)`
In the plot from Ultralytics, everything is fine. However, after the line above executes, I lose some bounding boxes; in this example, I lose two.
That's the plot from Ultralytics, how it should be:

That's the plot after the Roboflow labeling; some predictions are missing:

Can somebody help me with this issue?
### Environment
- Supervision 0.20.0
- Python 3.12.3
- Ultralytics 8.2.18
### Minimal Reproducible Example
### Code:
```
import cv2
import supervision as sv
from ultralytics import YOLO
model_path = "path/to/your/model.pt"
video_path = "path/to/your/video.mp4"
cap = cv2.VideoCapture(video_path)
model = YOLO(model_path)
box_annotator = sv.BoundingBoxAnnotator()
label_annotator = sv.LabelAnnotator()
tracker = sv.ByteTrack()
while True:
ret, frame = cap.read()
results = model(frame, verbose=False)[0]
print(f"CLS_YOLO-model: {results.boxes.cls}")
results_2 = model.predict(frame,
show=True, # The plot from the Ultralytics library
conf = 0.5,
save = False,
)
detections = sv.Detections.from_ultralytics(results)
print(f"ClassID_Supervision_1: {detections.class_id}") # Between this and the next print, predictions are lost
detections = tracker.update_with_detections(detections) # The detections get lost here
labels = [
f"{results.names[class_id]} {confidence:0.2f}"
for confidence, class_id
in zip(detections.confidence, detections.class_id)
]
print(f"ClassID_Supervision_2: {detections.class_id}") # Here two predictions from the Ultralytics model are lost
annotated_frame = frame.copy()
annotated_frame = box_annotator.annotate(
annotated_frame,
detections
)
labeled_frame = label_annotator.annotate(
annotated_frame,
detections,
labels
)
print(f"ClassID_Supervision_3: {detections.class_id}")
print(f"{len(detections)} detections, Labels: {labels}", )
cv2.imshow('Predictions', labeled_frame) # The with Roboflow generated frame
cap.release()
cv2.destroyAllWindows()
```
### Prints in console:
CLS_YOLO-model: tensor([1., 1., 1., 1.], device='cuda:0') **--> Class ID's from the predicted bounding boxes**
ClassID_Supervision_1: [1 1 1 1] **--> Converted into Supervision**
ClassID_Supervision_2: [1 1] **--> After the tracker method class ID's are lost**
ClassID_Supervision_3: [1 1]
2 detections, Labels: ['Spot 0.87', 'Spot 0.86']
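One plausible explanation (an assumption, not verified against the ByteTrack source): the tracker only reports detections that clear its activation threshold and have accumulated enough matched frames, so low-confidence or brand-new detections are held back. The gating idea can be sketched in plain Python (illustrative only, not the supervision implementation):

```python
def gate_detections(confidences, activation_threshold=0.5):
    """Split detections into those a ByteTrack-like tracker would report
    immediately and those it would hold back (illustrative sketch only)."""
    reported = [c for c in confidences if c >= activation_threshold]
    held_back = [c for c in confidences if c < activation_threshold]
    return reported, held_back

# Four raw detections, but only two clear the threshold -- mirroring
# how 4 YOLO predictions can shrink to 2 tracked boxes.
reported, held_back = gate_detections([0.87, 0.86, 0.42, 0.31])
```

If this is the cause, lowering the tracker's activation threshold when constructing `sv.ByteTrack()` (parameter name varies by supervision version; `track_activation_threshold` in recent releases) should keep more detections.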
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2024-05-21T12:45:53Z | 2024-06-16T15:12:54Z | https://github.com/roboflow/supervision/issues/1215 | [
"bug"
] | CodingMechineer | 18 |
yunjey/pytorch-tutorial | deep-learning | 170 | Good tutorial and thank you! Very nice and simple! Can you update this to PyTorch 1.0? | In PyTorch 1.0, there are some changes, such as replacing Variable with Tensor. There must be some better practices. It would be nice to update this. | open | 2019-03-29T03:00:19Z | 2019-03-29T03:00:19Z | https://github.com/yunjey/pytorch-tutorial/issues/170 | [] | dayekuaipao | 0 |
ghtmtt/DataPlotly | plotly | 63 | Better error handling when subplotting not possible | https://github.com/ghtmtt/DataPlotly/blob/master/data_plotly_dialog.py#L1021
It would be great to add a manual page that explains which plot types are not compatible, and to add a direct link to that page in the messageBar. | closed | 2017-12-15T10:04:33Z | 2018-05-15T12:48:13Z | https://github.com/ghtmtt/DataPlotly/issues/63 | [
"enhancement",
"docs"
] | ghtmtt | 0 |
simple-login/app | flask | 2,408 | Job runner container rolling reboot - undefined db column | Please note that this is only for bug report.
For help on your account, please reach out to us at hi[at]simplelogin.io. Please make sure to check out [our FAQ](https://simplelogin.io/faq/) that contains frequently asked questions.
For feature request, you can use our [forum](https://github.com/simple-login/app/discussions/categories/feature-request).
For self-hosted question/issue, please ask in [self-hosted forum](https://github.com/simple-login/app/discussions/categories/self-hosting-question)
## Prerequisites
- [x] I have searched open and closed issues to make sure that the bug has not yet been reported.
## Bug report
**Describe the bug**
Job runner container is rebooting with the following error:
```console
>>> init logging <<<
2025-03-05 09:47:08,146 - SL - DEBUG - 1 - "/code/app/utils.py:16" - <module>() - - load words file: /code/local_data/words.txt
Traceback (most recent call last):
File "/code/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
self.dialect.do_execute(
File "/code/.venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.UndefinedColumn: column job.priority does not exist
LINE 1: ...ts AS job_attempts, job.taken_at AS job_taken_at, job.priori...
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/code/job_runner.py", line 390, in <module>
for job in get_jobs_to_run(taken_before_time):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/code/job_runner.py", line 350, in get_jobs_to_run
.all()
^^^^^
File "/code/.venv/lib/python3.12/site-packages/sqlalchemy/orm/query.py", line 3373, in all
return list(self)
^^^^^^^^^^
File "/code/.venv/lib/python3.12/site-packages/sqlalchemy/orm/query.py", line 3535, in __iter__
return self._execute_and_instances(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/code/.venv/lib/python3.12/site-packages/sqlalchemy/orm/query.py", line 3560, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/code/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1011, in execute
return meth(self, multiparams, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/code/.venv/lib/python3.12/site-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/code/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1124, in _execute_clauseelement
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File "/code/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1316, in _execute_context
self._handle_dbapi_exception(
File "/code/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1510, in _handle_dbapi_exception
util.raise_(
File "/code/.venv/lib/python3.12/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/code/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
self.dialect.do_execute(
File "/code/.venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedColumn) column job.priority does not exist
LINE 1: ...ts AS job_attempts, job.taken_at AS job_taken_at, job.priori...
^
[SQL: SELECT job.id AS job_id, job.created_at AS job_created_at, job.updated_at AS job_updated_at, job.name AS job_name, job.payload AS job_payload, job.taken AS job_taken, job.run_at AS job_run_at, job.state AS job_state, job.attempts AS job_attempts, job.taken_at AS job_taken_at, job.priority AS job_priority
FROM job
WHERE (job.state = %(state_1)s OR job.state = %(state_2)s AND job.taken_at < %(taken_at_1)s AND job.attempts < %(attempts_1)s) AND (job.run_at IS NULL OR job.run_at <= %(run_at_1)s) ORDER BY job.priority DESC, job.run_at ASC
LIMIT %(param_1)s]
[parameters: {'state_1': 0, 'state_2': 1, 'taken_at_1': datetime.datetime(2025, 3, 5, 9, 17, 11, 388963), 'attempts_1': 5, 'run_at_1': datetime.datetime(2025, 3, 5, 9, 57, 11, 389082), 'param_1': 50}]
(Background on this error at: http://sqlalche.me/e/13/f405)
```
I can see the following error logs in postgres container:
```console
2025-03-05 09:44:21.456 UTC [1] LOG: database system is ready to accept connections
2025-03-05 09:44:29.556 UTC [32] ERROR: column job.priority does not exist at character 278
2025-03-05 09:44:29.556 UTC [32] STATEMENT: SELECT job.id AS job_id, job.created_at AS job_created_at, job.updated_at AS job_updated_at, job.name AS job_name, job.payload AS job_payload, job.taken AS job_taken, job.run_at AS job_run_at, job.state AS job_state, job.attempts AS job_attempts, job.taken_at AS job_taken_at, job.priority AS job_priority
FROM job
WHERE (job.state = 0 OR job.state = 1 AND job.taken_at < '2025-03-05T09:14:29.464980'::timestamp AND job.attempts < 5) AND (job.run_at IS NULL OR job.run_at <= '2025-03-05T09:54:29.465104'::timestamp) ORDER BY job.priority DESC, job.run_at ASC
LIMIT 50
```
**Expected behavior**
No error logs and no rolling reboot.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (If applicable):**
- OS: Linux (docker)
- Browser: Edge
- Version: N/A
**Additional context**
Add any other context about the problem here.
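The traceback indicates the database schema is behind the application code: the ORM selects `job.priority`, but the table has no such column, which usually means a pending migration was not applied before the job runner started. The failure mode and its remedy can be reproduced as an analogy with the standard-library `sqlite3` module (illustrative only; SimpleLogin itself uses PostgreSQL and its own migration tooling):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE job (id INTEGER PRIMARY KEY, state INTEGER)")

# Old schema + new code -> "no such column", the sqlite analogue of
# psycopg2.errors.UndefinedColumn.
missing_column = False
try:
    con.execute("SELECT priority FROM job")
except sqlite3.OperationalError:
    missing_column = True

# Applying the schema migration makes the same query succeed.
con.execute("ALTER TABLE job ADD COLUMN priority INTEGER NOT NULL DEFAULT 50")
rows = con.execute("SELECT priority FROM job").fetchall()
```

In a containerized deployment this typically means the migration step (or init container) must complete before the job runner boots.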
| closed | 2025-03-05T09:49:27Z | 2025-03-05T10:15:18Z | https://github.com/simple-login/app/issues/2408 | [] | anarion80 | 1 |
microsoft/nni | tensorflow | 5,471 | Meet an error when we try to prune the yolov5s.pt | **Describe the bug**:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]:
Comparison exception: expected tensor shape torch.Size([1, 3, 1, 1, 2]) doesn't match with actual tensor shape torch.Size([])!
**Environment**:
- NNI version:2.10
- Training service (local|remote|pai|aml|etc):local
- Python version:3.8.13
- PyTorch version:1.9.0
- Cpu or cuda version:10.2
**Reproduce the problem**
- Code|Example:
import torch
from nni.compression.pytorch.pruning import L2NormPruner
from nni.compression.pytorch import ModelSpeedup
from models.experimental import attempt_load
##Load
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = attempt_load('yolov5s.pt', map_location=device)
##'attempt_load()' function is from yolov5-master.models.experimental.py. And it has been attached below.
##Create instance
config_list = [{
'sparsity': 0.2,
'op_types': ['Conv2d'], 'op_names': [
'model.0.conv',
'model.1.conv',
]
}]
pruner = L2NormPruner(model, config_list, mode='dependency_aware', dummy_input=torch.rand(1, 3, 32, 32).to(device))
##Prune
_, masks = pruner.compress()
pruner.export_model(model_path='../Myweights/deepsort_yolov5m.pt', mask_path='../Myweights/deepsort_mask.pt')
pruner.show_pruned_weights()
pruner._unwrap_model()
##Speedup
PR_model = ModelSpeedup(model, torch.rand(1, 3, 32, 32).to(device), masks_file="../Myweights/deepsort_mask.pt")
PR_model.speedup_model()
print("The number of model parameters after acceleration:", sum(p.numel() for p in model.parameters()))
- How to reproduce:
[models.zip](https://github.com/microsoft/nni/files/11047995/models.zip)
Thanks for any help! | closed | 2023-03-23T07:21:20Z | 2023-04-07T03:03:55Z | https://github.com/microsoft/nni/issues/5471 | [] | Ku-Buqi | 4 |
ultralytics/ultralytics | deep-learning | 19,513 | pt model to an ONNX model, different result | When using the official export tool to convert a PyTorch (.pt) model to ONNX, inference on the same image with the .pt model and the ONNX model gives inconsistent results. | open | 2025-03-04T09:16:58Z | 2025-03-12T22:44:15Z | https://github.com/ultralytics/ultralytics/issues/19513 | [
"non-reproducible",
"exports"
] | GXDD666 | 7 |
ydataai/ydata-profiling | data-science | 1,530 | No module named 'ydata_profiling' | ### Current Behaviour
After installing ydata-profiling using the following command:
`conda install -c conda-forge ydata-profiling`
I can use
`from ydata_profiling import ProfileReport`
in the Python command window. However, in the Jupyter notebook I get the following error:
`ModuleNotFoundError: No module named 'ydata_profiling'`
### Expected Behaviour
The import `from ydata_profiling import ProfileReport` should work in the Jupyter notebook.
### Data Description
NA
### Code that reproduces the bug
```Python
from ydata_profiling import ProfileReport
```
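A generic way to confirm that the Jupyter kernel is running from a different environment than the one where the package was installed (a diagnostic sketch, not specific to ydata-profiling):

```python
import importlib.util
import sys

def diagnose(module_name):
    """Report whether *this* interpreter can import module_name, and from where."""
    spec = importlib.util.find_spec(module_name)
    return {
        "python": sys.executable,   # interpreter the kernel actually runs
        "found": spec is not None,
        "location": getattr(spec, "origin", None),
    }

# Run diagnose("ydata_profiling") inside the notebook: if "found" is False,
# the kernel's "python" path is not the conda env that has the package, and
# registering that env as a kernel (e.g. via ipykernel) is the usual fix.
```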
### pandas-profiling version
ydata-profiling 4.1.1
### Dependencies
```Text
jupyter 1.0.0 pypi_0 pypi
jupyter-client 8.3.0 pypi_0 pypi
jupyter-console 6.6.3 pypi_0 pypi
jupyter-core 5.3.1 pypi_0 pypi
jupyter-events 0.6.3 pypi_0 pypi
jupyter-lsp 2.2.0 pypi_0 pypi
jupyter-server 2.7.0 pypi_0 pypi
jupyter-server-terminals 0.4.4 pypi_0 pypi
jupyterlab 4.0.3 pypi_0 pypi
ydata-profiling 4.1.1 py38haa95532_0
```
### OS
windows 10
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | closed | 2024-01-26T05:31:03Z | 2024-09-06T02:33:35Z | https://github.com/ydataai/ydata-profiling/issues/1530 | [
"needs-triage"
] | makgul1 | 4 |
dynaconf/dynaconf | django | 1,047 | Centralized config package | External hooks for `platformdirs`, __package__, package_dir, etc. | ## **Problem**
I am trying to set up a single configuration package that all of my other packages will "inherit" from (let's call it `confpkg`).
The idea is to be able to change the code in `confpkg` to add "standard" configuration values, which can then be overridden in whatever package is being developed (call it `yourpkg`), or some dependency that also uses `confpkg` (call it `mid_pkg`).
## **Desired Solution**
It is desired to have this work with absolute minimal user interaction, and to have a very nice api exposed in the code of `yourpkg`.
For example, in `yourpkg`:
```python
from confpkg import pkg_setup
CONF, CACHE, LOG = pkg_setup()
```
Those two lines are all that need to be included.
Now this package ('yourpkg') has these available:
- `CONF['data_dir'] == "/home/user/.local/share/yourpkg/data"` (Or w/e `platformdirs` outputs)
- `CONF['cache_dir'] == "/home/user/.cache/yourpkg"`
- etc - many different directories from package dirs
- `CONF['pkg_name'] == __package__` (The package name will be 'yourpkg' here, but will change for different packages)
- `CONF['pkg_dir'] ==` {some logic typically like: `os.path.dirname(os.path.realpath(__file__))`} (This should work to give the 'yourpkg' top-level package directory, automatically)
These values should of course be *overridden* by the 'yourpkg' directory's config.toml file.
This can allow easily setting things like a 'default' database filepath, etc.
(The "Another Example / Perspective" section covers how `confpkg` should handle other logic around this.)
---
Another example (from `yourpkg` still) using `mid_pkg`, to demonstrate this further:
```python
from mid_pkg import BigDatabase # NOTE: Also uses this same `confpkg` line within the `mid_pkg` code
from confpkg import pkg_setup
CONF, LOG = pkg_setup()
db = BigDatabase()
# DB is created using its package's code (where CONF from `confpkg` is called),
# which can contain a default `data_dir` computed by `platformdirs` - specific to the `mid_pkg`
db.query("select * from Docs")
```
However, it is specifically desired *not* to have the same `config_maker.py` script in packages A, B, C, etc., but rather just *one* `config_maker.py` or `config.py` in the `confpkg` package, which will load the directories and add the package name and dir - **for the package that is calling `CONF, LOG = pkg_setup()`**.
What is a good way to make this happen?
## Solution Attempts
I have solved some of this with the `inspect` package, or having to load the *string* of the file from a package, and then execute it from within the calling package.
The solution used functions similar to this: https://gitlab.com/chase_brown/myappname/-/blob/12113f788b84a4d642e4f7f275fe4200b15f0685/myappname/util.py#L15-41
(\*Yes, the package is basically useless and you shouldn't use it, but it illustrates the point)
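For concreteness, the `inspect`-based approach can be sketched as follows (illustrative only, not the linked package's exact code; the key trick is reading the caller's frame so `confpkg` learns which package invoked it):

```python
import inspect
import os

def pkg_setup():
    """Build a config dict seeded with the *calling* package's name and directory."""
    caller = inspect.stack()[1]
    module = inspect.getmodule(caller.frame)
    pkg_name = getattr(module, "__package__", None) or os.path.splitext(
        os.path.basename(caller.filename))[0]
    return {
        "pkg_name": pkg_name,
        "pkg_dir": os.path.dirname(os.path.realpath(caller.filename)),
        # platformdirs-style defaults (data_dir, cache_dir, ...) would be
        # derived from pkg_name here and then overridden by config.toml.
    }
```

Called from `yourpkg`, `pkg_setup()["pkg_name"]` resolves to "yourpkg"; called from `mid_pkg`, it resolves to "mid_pkg" - without copying any config code into either package.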
However, I just noticed `Dynaconf` exists, and it seems to be *very* close to what I need (if not a perfect fit - I am still learning this package).
So I figured I should not re-invent the wheel, and rather use the standard tools that are extremely popular in the language.
## Another Example / Perspective
A good example to illustrate the problem is the following:
- Pkg_A --(depends on)--> Pkg_B --(depends on)--> Pkg_C
- Pkg_C needs a `data_dir` which will hold, say **3TB** of space (to construct a database for **Pkg_A**).
This can be achieved with the system desired in this post - specifically a separate `confpkg` that can construct or append to a `config.toml`, (or just create entries in CONF without touching `config.toml`) for Pkg_A, Pkg_B, and Pkg_C, that has defaults from `platform_dirs`.
However, importantly here - the `confpkg` should check if that default can allow for this space, and ask the user for input to alter the `config.toml` **if and only if needed**.
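The capacity check described above can be sketched with the standard library's `shutil.disk_usage` (an illustrative helper; the prompt-the-user logic would live in `confpkg`):

```python
import shutil

def ensure_capacity(path, required_bytes):
    """Return True if the filesystem containing `path` has enough free space."""
    return shutil.disk_usage(path).free >= required_bytes

# confpkg would call this on the platformdirs default and only prompt the
# user for an alternative data_dir when it returns False.
```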
It's clear that copy/pasting this code (probably a few hundred lines) into *every single package created by the user* is not a viable solution, especially if an institution wants to make an addition for all of the packages that use `confpkg` (e.g. a standard 'welcome message' for a LOG or something).
---
So what is a good way to deal with this (extremely common) type of scenario? | open | 2024-02-01T20:36:52Z | 2024-07-08T18:37:53Z | https://github.com/dynaconf/dynaconf/issues/1047 | [
"Not a Bug",
"RFC"
] | chasealanbrown | 1 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,288 | the output only one second why | The code returns for one second and does not complete the text. Can anyone help me? | open | 2024-02-19T08:45:11Z | 2024-03-08T13:19:30Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1288 | [] | Aladdin30 | 1 |
vitalik/django-ninja | rest-api | 1,360 | [BUG] Unable to generate pydantic-core schema for <class 'ninja.orm.metaclass.ModelSchemaMetaclass'>. | **Describe the bug**
I have a model, a ModelSchema, and an API endpoint whose response type is this schema. `python manage.py runserver` fails with an internal Pydantic error:
```
pydantic.errors.PydanticSchemaGenerationError: Unable to generate pydantic-core schema for <class 'ninja.orm.metaclass.ModelSchemaMetaclass'>. Set `arbitrary_types_allowed=True` in the model_config to ignore this error or implement `__get_pydantic_core_schema__` on your type to fully support it.
If you got this error by calling handler(<some type>) within `__get_pydantic_core_schema__` then you likely need to call `handler.generate_schema(<some type>)` since we do not call `__get_pydantic_core_schema__` on `<some type>` otherwise to avoid infinite recursion.
```
<details>
<summary>Full stacktrace:</summary>
```
/Users/bozbalci/.venvs/not-cms/bin/python /Users/bozbalci/src/not-cms/manage.py runserver 8000
Watching for file changes with StatReloader
Performing system checks...
Exception in thread django-main-thread:
Traceback (most recent call last):
File "/opt/homebrew/Cellar/python@3.12/3.12.8/Frameworks/Python.framework/Versions/3.12/lib/python3.12/threading.py", line 1075, in _bootstrap_inner
self.run()
File "/opt/homebrew/Cellar/python@3.12/3.12.8/Frameworks/Python.framework/Versions/3.12/lib/python3.12/threading.py", line 1012, in run
self._target(*self._args, **self._kwargs)
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/django/core/management/commands/runserver.py", line 134, in inner_run
self.check(display_num_errors=True)
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/django/core/management/base.py", line 486, in check
all_issues = checks.run_checks(
^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/django/core/checks/registry.py", line 88, in run_checks
new_errors = check(app_configs=app_configs, databases=databases)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/django/core/checks/urls.py", line 136, in check_custom_error_handlers
handler = resolver.resolve_error_handler(status_code)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/django/urls/resolvers.py", line 732, in resolve_error_handler
callback = getattr(self.urlconf_module, "handler%s" % view_type, None)
^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/django/utils/functional.py", line 47, in __get__
res = instance.__dict__[self.name] = self.func(instance)
^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/django/urls/resolvers.py", line 711, in urlconf_module
return import_module(self.urlconf_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.8/Frameworks/Python.framework/Versions/3.12/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 999, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/Users/bozbalci/src/not-cms/notcms/urls.py", line 23, in <module>
api.add_router("/photos/", "photo.api.router")
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/ninja/main.py", line 389, in add_router
router = import_string(router)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/django/utils/module_loading.py", line 30, in import_string
return cached_import(module_path, class_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/django/utils/module_loading.py", line 15, in cached_import
module = import_module(module_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.8/Frameworks/Python.framework/Versions/3.12/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 999, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/Users/bozbalci/src/not-cms/photo/api.py", line 14, in <module>
@router.get("/{photo_id}")
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/ninja/router.py", line 268, in decorator
self.add_api_operation(
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/ninja/router.py", line 319, in add_api_operation
path_view.add_operation(
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/ninja/operation.py", line 426, in add_operation
operation = OperationClass(
^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/ninja/operation.py", line 82, in __init__
self.signature = ViewSignature(self.path, self.view_func)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/ninja/signature/details.py", line 87, in __init__
self.models: TModels = self._create_models()
^^^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/ninja/signature/details.py", line 171, in _create_models
model_cls = type(cls_name, (base_cls,), attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/pydantic/_internal/_model_construction.py", line 226, in __new__
complete_model_class(
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/pydantic/_internal/_model_construction.py", line 658, in complete_model_class
schema = cls.__get_pydantic_core_schema__(cls, handler)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/pydantic/main.py", line 702, in __get_pydantic_core_schema__
return handler(source)
^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/pydantic/_internal/_schema_generation_shared.py", line 84, in __call__
schema = self._handler(source_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py", line 612, in generate_schema
schema = self._generate_schema_inner(obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py", line 881, in _generate_schema_inner
return self._model_schema(obj)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py", line 693, in _model_schema
{k: self._generate_md_field_schema(k, v, decorators) for k, v in fields.items()},
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py", line 1073, in _generate_md_field_schema
common_field = self._common_field_schema(name, field_info, decorators)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py", line 1265, in _common_field_schema
schema = self._apply_annotations(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py", line 2062, in _apply_annotations
schema = get_inner_schema(source_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/pydantic/_internal/_schema_generation_shared.py", line 84, in __call__
schema = self._handler(source_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py", line 2043, in inner_handler
schema = self._generate_schema_inner(obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py", line 886, in _generate_schema_inner
return self.match_type(obj)
^^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py", line 997, in match_type
return self._unknown_type_schema(obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bozbalci/.venvs/not-cms/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py", line 515, in _unknown_type_schema
raise PydanticSchemaGenerationError(
pydantic.errors.PydanticSchemaGenerationError: Unable to generate pydantic-core schema for <class 'ninja.orm.metaclass.ModelSchemaMetaclass'>. Set `arbitrary_types_allowed=True` in the model_config to ignore this error or implement `__get_pydantic_core_schema__` on your type to fully support it.
If you got this error by calling handler(<some type>) within `__get_pydantic_core_schema__` then you likely need to call `handler.generate_schema(<some type>)` since we do not call `__get_pydantic_core_schema__` on `<some type>` otherwise to avoid infinite recursion.
For further information visit https://errors.pydantic.dev/2.10/u/schema-for-unknown-type
```
</details>
</summary>
Minimum reproducible code:
```python
# models.py
class Photo(models.Model):
id = models.AutoField(primary_key=True)
image_url = models.URLField()
thumbnail_url = models.URLField()
title = models.CharField(max_length=255, blank=True)
exif = models.JSONField(blank=True, null=True)
uploaded_at = models.DateTimeField(auto_now_add=True)
# schemas.py
class PhotoSchema(ModelSchema):
class Meta:
model = Photo
fields = ["id","image_url","thumbnail_url","title","exif","uploaded_at",]
# api.py
@router.get("/{photo_id}")
def photo_details(request, photo_id: int, response=PhotoSchema):
photo = Photo.objects.get(id=photo_id)
return photo
```
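The traceback above points at the view signature rather than the schema: `response=PhotoSchema` is written as a function-parameter default, so django-ninja's `ViewSignature` treats `response` as a request parameter whose "type" is the `ModelSchema` metaclass, which is exactly what the `PydanticSchemaGenerationError` complains about. The `response` option belongs on the route decorator. A sketch of the likely fix (my reading of the traceback, untested against this project):

```python
# api.py
@router.get("/{photo_id}", response=PhotoSchema)
def photo_details(request, photo_id: int):
    photo = Photo.objects.get(id=photo_id)
    return photo
```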
**Versions (please complete the following information):**
- Python version: 3.12.8
- Django version: 5.1.4
- Django-Ninja version: 1.3.0
- Pydantic version: 2.10.3
| closed | 2024-12-15T00:26:20Z | 2024-12-15T00:33:36Z | https://github.com/vitalik/django-ninja/issues/1360 | [] | bozbalci | 1 |
Kanaries/pygwalker | matplotlib | 248 | Pygwalker 0.3.7 is missing on conda-forge | https://github.com/conda-forge/pygwalker-feedstock/pull/40
Looks like the automatic release is broken due to errors in Azure
Could you please release it on Conda too? | closed | 2023-09-27T21:31:00Z | 2023-10-02T21:38:17Z | https://github.com/Kanaries/pygwalker/issues/248 | [] | ilyanoskov | 1 |
ARM-DOE/pyart | data-visualization | 866 | Issues with CyLP | We're having issues getting cylp to run on jupyter notebooks - it immediately restarts the kernel when any phase processing is done, namely:
reproc_phase, kdp = pyart.correct.phase_proc_lp(radar,0., refl_field='reflectivity',
fzl=13000.,
self_const=60000.0,
window_len=20,
low_z=10.,high_z=53.)
We've created multiple environments using the environment.yml file provided here, and none of our trials have succeeded in getting it to run. Sometimes a 'gcc' compiler error will pop up; other times cylp appears to install successfully, but the kernel continues to restart whenever we try phase processing.
Any advice on what to try next or something to check?
| closed | 2019-09-10T17:43:08Z | 2019-10-01T21:14:26Z | https://github.com/ARM-DOE/pyart/issues/866 | [] | saralytle | 5 |
google-research/bert | tensorflow | 964 | when we calculate loss, why do we just calculate one token's loss rather than summing all token losses in one example | open | 2019-12-19T03:17:08Z | 2019-12-19T03:20:17Z | https://github.com/google-research/bert/issues/964 | [] | xiongma | 1 |
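A general note on the question above (this describes standard masked-LM practice, not necessarily every detail of this repo's code): the pre-training loss is not taken from a single token. It is the cross-entropy at each masked position, averaged over the masked positions of the batch. A toy sketch:

```python
import math

# Probabilities the model assigns to the correct token at each masked
# position of one example (made-up numbers for illustration).
probs_at_masked_positions = [0.5, 0.25]

token_losses = [-math.log(p) for p in probs_at_masked_positions]  # per-token cross-entropy
example_loss = sum(token_losses) / len(token_losses)              # mean over masked positions
print(round(example_loss, 4))  # 1.0397
```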
drivendataorg/cookiecutter-data-science | data-science | 34 | Click seems to be flawed on Python 3 - Consider using docopt | I don't know much about click or docopt yet, so don't shoot me if I'm lost here. Click seems to be handling Python 3 a bit badly. Should we consider switching to http://docopt.org/ ?
By the way, I managed to get Click working by running `export LC_ALL=no_NO.utf-8` and `export LANG=no_NO.utf-8`. But it seems like there might be more issues with Click and Python 3:
http://click.pocoo.org/6/python3/
| closed | 2016-06-29T19:54:32Z | 2017-03-13T20:32:53Z | https://github.com/drivendataorg/cookiecutter-data-science/issues/34 | [
"needs-discussion"
] | ohenrik | 5 |
twopirllc/pandas-ta | pandas | 185 | VP indicator how width works? | **Which version are you running? The lastest version is on Github. Pip is for major releases.**
```python
import pandas_ta as ta
print(ta.version)
```
**Upgrade.**
```sh
$ pip install -U git+https://github.com/twopirllc/pandas-ta
```
I'm calculating VP over 50 candles of 1h.
I don't understand how width works in the VP indicator. I ran a test with different width values (1, 2, 3 and 10). Check my output and pay attention to "close high". Are these VP values right?
**To Reproduce**
```python
vp1 = df.ta.vp(width=1)
vp2 = df.ta.vp(width=2)
vp3 = df.ta.vp(width=3)
vp10 = df.ta.vp(width=10)

print(df)
print(vp1)
print(vp2)
print(vp3)
print(vp10)
```
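Not from the thread, but the usual reading (double-check against pandas-ta's `vp()` source): `width` is the number of equal-sized price bins the close range is split into, not a bin size. That is why `width=1` above yields a single row spanning the whole `low_close`/`high_close` range, and `width=3` yields three rows with different `high_close` values. A stdlib sketch of that binning:

```python
def volume_profile(closes, volumes, width):
    """Split the close range into `width` equal bins and sum volume per bin."""
    lo, hi = min(closes), max(closes)
    step = (hi - lo) / width
    bins = [0.0] * width
    for price, vol in zip(closes, volumes):
        idx = min(int((price - lo) / step), width - 1)  # clamp the top edge into the last bin
        bins[idx] += vol
    return bins

closes = [720.0, 730.0, 740.0, 750.0, 780.0]
volumes = [10.0, 20.0, 30.0, 40.0, 50.0]

print(volume_profile(closes, volumes, 1))  # [150.0] - one row covering the whole range
print(volume_profile(closes, volumes, 3))  # [30.0, 70.0, 50.0] - three price ranges
```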
**OUTPUT**
**timestamp open high low close volume**
0 2020-12-31 16:00:00.000000 739.88 742.00 737.36 738.83 14810.28022
1 2020-12-31 17:00:00.000000 738.76 744.50 737.61 743.96 14700.26345
2 2020-12-31 18:00:00.000000 743.96 745.00 736.52 738.24 18490.32151
3 2020-12-31 19:00:00.000000 738.23 742.50 735.88 740.48 11375.60510
4 2020-12-31 20:00:00.000000 740.46 741.70 731.83 736.42 19450.08215
5 2020-12-31 21:00:00.000000 736.42 739.00 729.33 734.07 27932.69884
6 2020-12-31 22:00:00.000000 734.08 749.00 733.37 748.28 52336.18779
7 2020-12-31 23:00:00.000000 748.27 749.00 742.27 744.06 33019.50100
8 2021-01-01 00:00:00.000000 744.06 747.23 743.10 744.82 17604.80859
9 2021-01-01 01:00:00.000000 744.87 747.09 739.30 742.29 18794.15424
10 2021-01-01 02:00:00.000000 742.34 743.23 739.50 740.65 14948.26447
11 2021-01-01 03:00:00.000000 740.72 743.25 737.04 739.97 17106.99495
12 2021-01-01 04:00:00.000000 739.87 740.51 734.40 737.38 21624.68945
13 2021-01-01 05:00:00.000000 737.37 738.48 725.10 730.07 52992.04892
14 2021-01-01 06:00:00.000000 730.07 734.77 728.77 733.68 22836.46973
15 2021-01-01 07:00:00.000000 733.69 738.87 733.27 736.81 22895.59711
16 2021-01-01 08:00:00.000000 736.82 741.76 736.11 738.85 29384.09871
17 2021-01-01 09:00:00.000000 738.85 743.33 732.12 733.19 50190.47228
18 2021-01-01 10:00:00.000000 733.19 741.99 733.00 740.08 23543.56362
19 2021-01-01 11:00:00.000000 740.08 742.67 736.46 738.00 22098.37395
20 2021-01-01 12:00:00.000000 738.01 740.00 733.50 735.39 25671.61199
21 2021-01-01 13:00:00.000000 735.39 737.73 733.01 735.83 22249.34205
22 2021-01-01 14:00:00.000000 735.76 736.45 726.57 727.94 40914.26505
23 2021-01-01 15:00:00.000000 727.94 731.80 714.29 724.60 66427.75355
24 2021-01-01 16:00:00.000000 724.57 728.28 720.95 725.34 26866.35862
25 2021-01-01 17:00:00.000000 725.34 730.71 722.50 729.48 22422.29118
26 2021-01-01 18:00:00.000000 729.48 730.61 726.69 728.23 14582.15316
27 2021-01-01 19:00:00.000000 728.23 731.97 725.65 728.90 14614.20353
28 2021-01-01 20:00:00.000000 728.97 730.58 727.65 728.91 14058.19051
29 2021-01-01 21:00:00.000000 728.91 730.66 716.26 719.90 46190.17553
30 2021-01-01 22:00:00.000000 719.90 731.75 714.91 729.38 37985.46001
31 2021-01-01 23:00:00.000000 729.39 734.40 729.05 729.44 21870.25220
32 2021-01-02 00:00:00.000000 729.45 731.96 728.40 730.39 13138.56186
33 2021-01-02 01:00:00.000000 730.39 730.67 726.26 729.45 13117.64933
34 2021-01-02 02:00:00.000000 729.45 733.40 729.22 731.99 19342.91315
35 2021-01-02 03:00:00.000000 731.99 740.49 730.53 737.04 41590.31856
36 2021-01-02 04:00:00.000000 737.08 738.66 733.07 735.12 23887.39609
37 2021-01-02 05:00:00.000000 735.12 738.35 728.21 733.84 32631.45363
38 2021-01-02 06:00:00.000000 733.84 734.64 723.06 725.41 38970.42653
39 2021-01-02 07:00:00.000000 725.24 733.93 723.01 728.47 34642.76548
40 2021-01-02 08:00:00.000000 728.55 731.10 727.17 729.70 14020.08545
41 2021-01-02 09:00:00.000000 729.70 757.00 728.25 752.38 134011.47560
42 2021-01-02 10:00:00.000000 752.39 772.80 749.11 770.77 154115.10137
43 2021-01-02 11:00:00.000000 770.77 771.28 753.00 760.94 88376.02018
44 2021-01-02 12:00:00.000000 760.99 768.99 760.86 768.43 51945.86127
45 2021-01-02 13:00:00.000000 768.45 782.98 764.50 781.94 145618.23911
46 2021-01-02 14:00:00.000000 781.94 784.66 774.04 778.84 74461.65986
47 2021-01-02 15:00:00.000000 778.77 784.39 772.00 780.01 57755.77853
48 2021-01-02 16:00:00.000000 780.00 787.69 776.77 784.79 56561.31490
49 2021-01-02 17:00:00.000000 784.79 785.48 750.12 756.55 122647.19925
**low_close mean_close high_close pos_volume neg_volume total_volume**
0 719.9 740.7906 784.79 1.228753e+06 748067.99059 1.976821e+06
**low_close mean_close high_close pos_volume neg_volume total_volume**
0 724.6 737.1692 748.28 479021.25676 209242.55058 6.882638e+05
1 719.9 744.4120 784.79 749731.50626 538825.44001 1.288557e+06
**low_close mean_close high_close pos_volume neg_volume total_volume**
0 730.07 739.344706 748.28 295817.15389 114484.91234 4.103021e+05
1 719.90 730.261765 740.08 244008.48193 231932.19649 4.759407e+05
2 725.41 753.513750 784.79 688927.12720 401650.88176 1.090578e+06
**low_close mean_close high_close pos_volume neg_volume total_volume**
0 736.42 739.586 743.96 52750.68388 26075.86855 78826.55243
1 734.07 742.704 748.28 99063.04087 50624.30959 149687.35046
2 730.07 736.350 740.65 91723.73332 37784.73420 129508.46752
3 733.19 737.386 740.08 102470.16810 45641.93757 148112.10567
4 724.60 729.820 735.83 133013.63059 49115.70067 182129.33126
5 719.90 727.084 729.48 60804.37906 51062.63485 111867.01391
6 729.38 730.130 731.99 19342.91315 86111.92340 105454.83655
7 725.41 731.976 737.04 113192.19872 58530.16157 171722.36029
8 729.70 756.444 770.77 288126.57697 154341.96690 442468.54387
9 756.55 776.426 784.79 268265.43836 188778.75329 457044.19165
| closed | 2021-01-07T01:32:45Z | 2021-01-17T17:39:22Z | https://github.com/twopirllc/pandas-ta/issues/185 | [
"enhancement",
"info"
] | casterock | 3 |
pyqtgraph/pyqtgraph | numpy | 2,372 | Typo? | in pyqtgraph/pgcollections.py:149:22:
```
def __init__(self, *args, **kwargs):
self.mutex = threading.RLock()
list.__init__(self, *args, **kwargs)
for k in self:
self[k] = mkThreadsafe(self[k])
^
```
should be
```
self[k] = makeThreadsafe(self[k])
```
in pyqtgraph/widget/DiffTreeWidget.py:126
```
def _compare(self, a, b):
```
should be
```
def _compare(self, info, expect):
```
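Worth noting why typos like `mkThreadsafe` survive until runtime: Python resolves names only when a line executes, so a misspelled call inside a method imports and defines cleanly and only raises `NameError` when that branch actually runs. A minimal illustration (hypothetical names, not pyqtgraph's actual code):

```python
def makeThreadsafe(obj):
    return obj  # stand-in for the real wrapper

class ThreadsafeList(list):
    def wrap_all(self):
        for i, item in enumerate(self):
            self[i] = mkThreadsafe(item)  # typo: should be makeThreadsafe

lst = ThreadsafeList([1, 2, 3])  # defining and constructing both succeed
try:
    lst.wrap_all()               # the typo only bites here
except NameError as exc:
    print(exc)                   # name 'mkThreadsafe' is not defined
```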
| closed | 2022-07-23T02:28:34Z | 2022-07-23T10:06:14Z | https://github.com/pyqtgraph/pyqtgraph/issues/2372 | [] | dingo9 | 2 |
benbusby/whoogle-search | flask | 363 | [FEATURE] Wikiless - wikipedia alternative | <!--
DO NOT REQUEST UI/THEME/GUI/APPEARANCE IMPROVEMENTS HERE
THESE SHOULD GO IN ISSUE #60
REQUESTING A NEW FEATURE SHOULD BE STRICTLY RELATED TO NEW FUNCTIONALITY
-->
**Describe the feature you'd like to see added**
Wikiless - privacy alternative to wikipedia
**Additional context**
https://codeberg.org/orenom/Wikiless
https://github.com/SimonBrazell/privacy-redirect/issues/232
| closed | 2021-06-19T09:39:19Z | 2022-01-14T16:59:04Z | https://github.com/benbusby/whoogle-search/issues/363 | [
"enhancement"
] | specter78 | 1 |
microsoft/JARVIS | deep-learning | 162 | Can't use gradio | Running the command per the README:
```
python run_gradio_demo.py --config config.yaml
```
Results in the following exception being thrown when submitting any prompts thru the Gradio page:
```
Traceback (most recent call last):
File "/home/ubuntu/.conda/envs/jarvis/lib/python3.8/site-packages/gradio/routes.py", line 393, in run_predict
output = await app.get_blocks().process_api(
File "/home/ubuntu/.conda/envs/jarvis/lib/python3.8/site-packages/gradio/blocks.py", line 1108, in process_api
result = await self.call_function(
File "/home/ubuntu/.conda/envs/jarvis/lib/python3.8/site-packages/gradio/blocks.py", line 915, in call_function
prediction = await anyio.to_thread.run_sync(
File "/home/ubuntu/.conda/envs/jarvis/lib/python3.8/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/home/ubuntu/.conda/envs/jarvis/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/home/ubuntu/.conda/envs/jarvis/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "run_gradio_demo.py", line 78, in bot
message = chat_huggingface(all_messages, OPENAI_KEY, "openai")["message"]
File "/home/ubuntu/JARVIS/server/awesome_chat.py", line 892, in chat_huggingface
task_str = parse_task(context, input, api_key, api_type)
File "/home/ubuntu/JARVIS/server/awesome_chat.py", line 344, in parse_task
return send_request(data)
File "/home/ubuntu/JARVIS/server/awesome_chat.py", line 204, in send_request
response = requests.post(endpoint, json=data, headers=HEADER, proxies=PROXY)
NameError: name 'endpoint' is not defined
``` | closed | 2023-04-17T22:11:31Z | 2023-04-18T03:19:25Z | https://github.com/microsoft/JARVIS/issues/162 | [] | tabrezm | 0 |
airtai/faststream | asyncio | 1,830 | Feature: Not update/create NATS streams/consumers | Hi, we are using NATS together with their k8s operator NACK which is used to provision stream & consumers (durable). Is there a way to disable create / updating of streams & consumers, and just rely them being provision by IAC. I haven't found this option, looking at example & docs, but I think it's a common use-case. Would also be willing to contribute, but any feedback & guidance would be helpful.
Thanks,
Mitja | closed | 2024-10-02T19:56:44Z | 2024-10-02T20:25:08Z | https://github.com/airtai/faststream/issues/1830 | [
"enhancement",
"NATS"
] | mkramb | 2 |
RomelTorres/alpha_vantage | pandas | 72 | alpha_vantage creates error | #source: https://quantdare.com/forecasting-sp-500-using-machine-learning/
from alpha_vantage.timeseries import TimeSeries
import pandas as pd
print('Pandas_Version: ' + pd.__version__)
symbol = 'GOOGL'
ts = TimeSeries(key='_my_key_', output_format='pandas')
close = ts.get_daily(symbol=symbol, outputsize='full')[0]['close'] # compact/full
direction = (close > close.shift()).astype(int)
target = direction.shift(-1).fillna(0).astype(int)
target.name = 'target'
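A likely cause, not confirmed in this thread: with `output_format='pandas'`, alpha_vantage keeps Alpha Vantage's raw JSON keys as column names (`'1. open'`, `'2. high'`, `'3. low'`, `'4. close'`, `'5. volume'`), so indexing with plain `'close'` raises the `KeyError` shown below. Print the DataFrame's columns to confirm on your version; a stdlib sketch of normalizing such names:

```python
# Column names as the pandas output typically returns them (assumed here;
# run print(df.columns) on your version to confirm).
raw_columns = ["1. open", "2. high", "3. low", "4. close", "5. volume"]

def normalize(name):
    """Strip a leading '<number>. ' prefix if present."""
    prefix, sep, rest = name.partition(". ")
    return rest if sep and prefix.isdigit() else name

print([normalize(c) for c in raw_columns])
# ['open', 'high', 'low', 'close', 'volume']
# With the real DataFrame: df.columns = [normalize(c) for c in df.columns]
```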
_____________________________________________________________________________________________________
results in error:
```
Pandas_Version: 0.23.0
Traceback (most recent call last):
File "<ipython-input-23-c6eeea939d68>", line 1, in <module>
runfile('F:/Eigene Dokumente_C/Documents/AI/Deep_Learning/test_stock_predictor.py', wdir='F:/Eigene Dokumente_C/Documents/AI/Deep_Learning')
File "C:\Users\Ackermann\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile
execfile(filename, namespace)
File "C:\Users\Ackermann\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "F:/Eigene Dokumente_C/Documents/AI/Deep_Learning/test_stock_predictor.py", line 16, in <module>
close = ts.get_daily(symbol=symbol, outputsize='full')[0]['close'] # compact/full
File "C:\Users\Ackermann\Anaconda3\lib\site-packages\pandas\core\frame.py", line 2685, in __getitem__
return self._getitem_column(key)
File "C:\Users\Ackermann\Anaconda3\lib\site-packages\pandas\core\frame.py", line 2692, in _getitem_column
return self._get_item_cache(key)
File "C:\Users\Ackermann\Anaconda3\lib\site-packages\pandas\core\generic.py", line 2486, in _get_item_cache
values = self._data.get(item)
File "C:\Users\Ackermann\Anaconda3\lib\site-packages\pandas\core\internals.py", line 4115, in get
loc = self.items.get_loc(item)
File "C:\Users\Ackermann\Anaconda3\lib\site-packages\pandas\core\indexes\base.py", line 3065, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas\_libs\index.pyx", line 140, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 162, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\hashtable_class_helper.pxi", line 1492, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas\_libs\hashtable_class_helper.pxi", line 1500, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'close'
```
| closed | 2018-05-22T22:57:33Z | 2018-05-24T07:02:17Z | https://github.com/RomelTorres/alpha_vantage/issues/72 | [] | amadeus22 | 5 |
davidsandberg/facenet | tensorflow | 561 | ValueError: There should not be more than one meta file in the model directory | open | 2017-12-01T03:12:15Z | 2022-01-05T19:39:58Z | https://github.com/davidsandberg/facenet/issues/561 | [] | ronyuzhang | 2 | |
Gozargah/Marzban | api | 1,127 | Config order gets scrambled in custom mode | After updating Marzban to versions 0.5.1 and 0.5.2, the order of the configs in custom mode changed from the last inbound to the first. | closed | 2024-07-17T08:23:51Z | 2024-07-19T22:38:16Z | https://github.com/Gozargah/Marzban/issues/1127 | [
"Bug"
] | dvltak | 4 |
mwaskom/seaborn | pandas | 3,058 | unable to plot with custom color ramp in seaborn 0.12.0 | I was trying to use seaborn kdeplot with a custom colour ramp; it works with seaborn version 0.11.0 but not with seaborn 0.12.0
**color ramp as below**
```
def make_Ramp( ramp_colors ):
from colour import Color
from matplotlib.colors import LinearSegmentedColormap
color_ramp = LinearSegmentedColormap.from_list( 'my_list', [ Color( c1 ).rgb for c1 in ramp_colors ] )
plt.figure( figsize = (15,3))
plt.imshow( [list(np.arange(0, len( ramp_colors ) , 0.1)) ] , interpolation='nearest', origin='lower', cmap= color_ramp )
plt.xticks([])
plt.yticks([])
return color_ramp
custom_ramp = make_Ramp( ['#0000ff','#00ffff','#ffff00','#ff0000' ] )
```
my data look like this
```
0 1 2
0 142.5705 38.5744 hairpins
1 281.0795 55.1900 hairpins
2 101.7282 49.5604 hairpins
3 59.8472 63.0699 hairpins
4 296.4381 44.8293 hairpins
.. ... ... ...
347 284.6841 51.7468 stems
348 288.7241 49.9322 stems
349 320.2972 41.5520 stems
350 302.6805 67.2658 stems
351 293.6837 52.0663 stems
[352 rows x 3 columns]
<class 'numpy.float64'>
```
**this is my code**
`ax = sns.kdeplot(data=df, x=df.loc[df[2] == "hairpins", 0], y=df.loc[df[2] == "hairpins", 1], fill=False, thresh=0, levels=20, cmap=custom_ramp, common_norm=True, cbar=True, )`
**with version 0.12.0 get this error**
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-6-88652c0ba7ce>](https://localhost:8080/#) in <module>
21 #/
22 plt.subplot(2,3,i+1)
---> 23 ax = sns.kdeplot(data=df, x=df.loc[df[2] == dimtypes[i], 0], y=df.loc[df[2] == dimtypes[i], 1], fill=False, thresh=0, levels=20, cmap=custom_ramp, common_norm=True, cbar=True, cbar_kws={'format': '%2.1e', 'label': 'kernel density'} )
24 plt.title("Chart {}: {}".format(i+1, dimtypes[i]), size=20)
25 plt.xlabel(str(xname), fontsize=12)
9 frames
[/usr/local/lib/python3.7/dist-packages/matplotlib/artist.py](https://localhost:8080/#) in update(self, props)
1065 func = getattr(self, f"set_{k}", None)
1066 if not callable(func):
-> 1067 raise AttributeError(f"{type(self).__name__!r} object "
1068 f"has no property {k!r}")
1069 ret.append(func(v))
AttributeError: 'Line2D' object has no property 'cmap'
```
**with version 0.11.0 get this plot**

basically, custom ramp is not woking with seaborn 0.12.0 , please update thanks
| closed | 2022-10-07T16:51:19Z | 2022-10-08T23:02:17Z | https://github.com/mwaskom/seaborn/issues/3058 | [
"bug",
"mod:distributions"
] | Niransha | 5 |
sergree/matchering | numpy | 45 | Music Production | No issue here- just tried starting the repo under a new list "music production" but accidentally clicked "issue". | closed | 2022-11-24T20:21:20Z | 2023-01-30T19:57:33Z | https://github.com/sergree/matchering/issues/45 | [] | ActivateLLC | 1 |
psf/black | python | 4,362 | Black v24.4.1 & v24.4.2 fails to format f-strings containing multi-line strings | Black versions 24.4.1 and 24.4.2 encounter an issue when formatting Python code containing an f-string that includes a multi-line string. Example:
```python
s = f"""{'''a
b
c'''}"""
print(s)
```
this is valid python syntax, as it is executable, but black cannot format it, as indicated below:
```bash
>>> cat black_sample.py
s = f"""{'''a
b
c'''}"""
print(s)
>>> python --version
Python 3.12.1
>>> python black_sample.py
a
b
c
>>> black --version
black, 24.4.2 (compiled: yes)
Python (CPython) 3.12.1
>>> black black_sample.py
error: cannot format black_sample.py: Cannot parse: 1:9: s = f"""{'''a
Oh no! 💥 💔 💥
1 file failed to reformat.
```
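A workaround sketch (assumption: the nested triple-quoted literal inside the f-string is what trips Black's parser): hoisting the inner literal out of the f-string preserves the output and also parses on older grammars.

```python
# Same output as the report's snippet, without nesting quotes in the f-string.
inner = """a
b
c"""
s = f"{inner}"
print(s)
```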
| closed | 2024-05-14T15:05:54Z | 2024-07-31T18:07:52Z | https://github.com/psf/black/issues/4362 | [
"T: bug"
] | gbatagian | 4 |
liangliangyy/DjangoBlog | django | 641 | Which configuration file handles the memcached integration? | <!--
If you don't carefully check the boxes below, I may close your Issue directly.
Before asking, it is recommended to read https://github.com/ruby-china/How-To-Ask-Questions-The-Smart-Way
-->
**I confirm that I have already checked** (mark `[ ]` as `[x]`)
- [x] [the DjangoBlog readme](https://github.com/liangliangyy/DjangoBlog/blob/master/README.md)
- [x] [the configuration instructions](https://github.com/liangliangyy/DjangoBlog/blob/master/bin/config.md)
- [x] [other Issues](https://github.com/liangliangyy/DjangoBlog/issues)
----
**I am requesting** (mark `[ ]` as `[x]`)
- [ ] Bug report
- [ ] A new feature or functionality
- [x] Technical support
| closed | 2023-03-19T09:51:22Z | 2023-03-31T03:01:52Z | https://github.com/liangliangyy/DjangoBlog/issues/641 | [] | txbxxx | 1 |
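Not an answer from the maintainers, but for orientation: in a standard Django project the memcached hookup lives in `settings.py` under the `CACHES` setting; whether DjangoBlog layers extra environment variables on top of this is something I can't confirm here. A typical sketch:

```python
# settings.py (standard Django wiring; adapt host/port to your deployment)
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        "LOCATION": "127.0.0.1:11211",
    }
}
```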
miguelgrinberg/Flask-SocketIO | flask | 829 | Unable to run the example code | I must be doing something fundamentally wrong, but I'm unable to see what. I just tried the example in the documentation, and for some reason, the server keeps answering `200 Ok` instead of `101 Switching protocols`.
I did the following:
1. Copied the example code from the documentation:
```python
from flask import Flask, render_template
from flask_socketio import SocketIO
app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'
socketio = SocketIO(app)
if __name__ == '__main__':
socketio.run(app, host="0.0.0.0")
```
2. Ran the code directly from python (3.6.6):
```
$ python example.py
```
3. Tried to connect from <http://www.websocket.org/echo.html> to `ws://<My_IP>:5000/socket.io/` (my server has a public IP, and port 5000 is not firewalled). The `ws` connection was immediately closed. Chrome complained (in the error console) that the response was 200 Ok, which it indeed was, as I could see in the network inspector.
4. Just in case, I also tried the following command on the same machine where the flask server is running:
```
curl --include \
--no-buffer \
--header "Connection: Upgrade" \
--header "Upgrade: websocket" \
--header "Host: mysite.com:5000" \
--header "Origin: http://mysite.com:80" \
--header "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==" \
--header "Sec-WebSocket-Version: 13" \
--output OUT.txt \
http://localhost:5000/socket.io/
```
The resulting `OUT.txt` also showed the response `200 Ok` (and, by the way, the first two websocket frames: the first one with some metadata like `{"sid":"303bf7a7abd4470784d70607c86e084a","upgrades":["websocket"],"pingTimeout":60000,"pingInterval":25000}`, and the next one with the bytes `00 02 ff 34 30`. Then the connection is closed).
5. Tried several different ways to start the server, such as the following ones:
```
$ FLASK_APP=example.py flask run --host 0.0.0.0 --port 5000
$ gunicorn --worker-class eventlet -w 1 example:app --bind 0.0.0.0:5000
$ gunicorn -k gevent -w 1 example:app --bind 0.0.0.0:5000
```
In all these cases the result is the same. `200 Ok` instead of `101`.
Finally, the closest I got to success was when the server was launched with:
```
$ gunicorn -k geventwebsocket.gunicorn.workers.GeventWebSocketWorker -w 1 example:app --bind 0.0.0.0:5000
```
In this case, the `curl` command receives a response `400 Bad request` and causes an exception in the server ("a bytes-like object is required, not 'str'") which aborts the transfer. But the site <http://www.websocket.org/echo.html> shows a successful connection (`101` at least), followed by a disconnection. Google Chrome's inspector shows no frames.
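One more observation (mine, not from the thread): a Socket.IO endpoint is not a plain WebSocket endpoint. Clients must speak the Engine.IO protocol, which is why generic testers like websocket.org only ever get the HTTP long-polling handshake (the `200` with the `sid` JSON above). A raw client has to request the websocket transport explicitly; with python-engineio 2.x the handshake URL looks roughly like:

```python
# Hypothetical host; EIO=3 matches the Engine.IO revision used by python-engineio 2.x.
host = "localhost:5000"
url = f"ws://{host}/socket.io/?EIO=3&transport=websocket"
print(url)  # ws://localhost:5000/socket.io/?EIO=3&transport=websocket
```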
These are my versions (output of `pip freeze`):
```txt
Click==7.0
dnspython==1.15.0
eventlet==0.24.1
Flask==1.0.2
Flask-SocketIO==3.0.2
gevent==1.3.7
gevent-websocket==0.10.1
greenlet==0.4.15
gunicorn==19.9.0
itsdangerous==1.1.0
Jinja2==2.10
MarkupSafe==1.1.0
monotonic==1.5
pkg-resources==0.0.0
python-engineio==2.3.2
python-socketio==2.0.0
redis==2.10.6
six==1.11.0
Werkzeug==0.14.1
```
| closed | 2018-11-08T18:57:44Z | 2019-04-07T10:08:15Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/829 | [
"question"
] | jldiaz | 3 |
plotly/dash-core-components | dash | 188 | Graph callbacks not always triggered by Dropdown value or Graph selectedData change events in 0.22.1 - maybe broken by initially returning empty dict | I have multiple apps that have Graph `figure` plotting callbacks triggered by multiple inputs, including Dropdown `value` events (both single and multi select) and `selectedData` events from other Graphs. As of version 0.22.1, none of my plots are triggered by these events initially. I can trigger the callbacks using sliders and checkboxes, but the title doesn't change to "Updating..." while the callback is running. Once I've triggered the callback one of these methods, the dropdowns suddenly start working again (although intermittently).
Rolling back to 0.22.0 solves the above problems. I've gone back and forth a few times to double check.
Something that may contribute to this is that when the app first loads, the graph callback is triggered, but there have been no valid user selections to define the plot so I abort the callback by returning an empty dict. If I raise an Exception it seems to work, but I'd prefer not to spit out a traceback to the console in this case. Is raising an Exception the only way to halt execution of a callback without plotting anything? | closed | 2018-04-20T14:09:45Z | 2023-01-28T14:25:37Z | https://github.com/plotly/dash-core-components/issues/188 | [] | slishak | 2 |
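On the closing question: in Dash releases after this report, `dash.exceptions.PreventUpdate` was added for exactly this, aborting a callback without updating outputs and without a traceback. A sketch (decorator arguments omitted, names hypothetical):

```python
from dash.exceptions import PreventUpdate

@app.callback(...)  # Output/Input wiring omitted
def update_figure(selection):
    if not selection:        # nothing valid chosen yet
        raise PreventUpdate  # leave the figure untouched, no traceback
    return build_figure(selection)
```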
CorentinJ/Real-Time-Voice-Cloning | pytorch | 733 | ImportError: Failed to import any qt binding | I face this error when running demo_toolbox.py:
```
(venv2) C:\Users\dev_user\PycharmProjects\Real-Time-Voice-Cloning>python demo_toolbox.py
Traceback (most recent call last):
File "demo_toolbox.py", line 2, in <module>
from toolbox import Toolbox
File "C:\Users\dev_user\PycharmProjects\Real-Time-Voice-Cloning\toolbox\__init__.py", line 1, in <module>
from toolbox.ui import UI
File "C:\Users\dev_user\PycharmProjects\Real-Time-Voice-Cloning\toolbox\ui.py", line 2, in <module>
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
File "C:\Users\dev_user\PycharmProjects\Real-Time-Voice-Cloning\venv2\lib\site-packages\matplotlib\backends\backend_qt5agg.py", line 11, in <module>
from .backend_qt5 import (
File "C:\Users\dev_user\PycharmProjects\Real-Time-Voice-Cloning\venv2\lib\site-packages\matplotlib\backends\backend_qt5.py", line 13, in <module>
import matplotlib.backends.qt_editor.figureoptions as figureoptions
File "C:\Users\dev_user\PycharmProjects\Real-Time-Voice-Cloning\venv2\lib\site-packages\matplotlib\backends\qt_editor\figureoptions.py", line 11, in <module>
from matplotlib.backends.qt_compat import QtGui
File "C:\Users\dev_user\PycharmProjects\Real-Time-Voice-Cloning\venv2\lib\site-packages\matplotlib\backends\qt_compat.py", line 179, in <module>
raise ImportError("Failed to import any qt binding")
ImportError: Failed to import any qt binding
```
Windows 10
Python 3.7.1
pip 21.0.1
"pip freeze" result:
appdirs==1.4.4
audioread==2.1.9
certifi==2020.12.5
cffi==1.14.5
chardet==4.0.0
cycler==0.10.0
decorator==5.0.6
dill==0.3.3
idna==2.10
inflect==5.3.0
joblib==1.0.1
jsonpatch==1.32
jsonpointer==2.1
kiwisolver==1.3.1
librosa==0.8.0
llvmlite==0.36.0
matplotlib==3.4.1
multiprocess==0.70.11.1
numba==0.53.1
numpy==1.19.3
packaging==20.9
Pillow==8.2.0
pooch==1.3.0
pycparser==2.20
pynndescent==0.5.2
pyparsing==2.4.7
PyQt5==5.15.4
PyQt5-Qt5==5.15.2
PyQt5-sip==12.8.1
python-dateutil==2.8.1
pyzmq==22.0.3
requests==2.25.1
resampy==0.2.2
scikit-learn==0.24.1
scipy==1.6.2
six==1.15.0
sounddevice==0.4.1
SoundFile==0.10.3.post1
threadpoolctl==2.1.0
torch==1.8.1+cpu
torchaudio==0.8.1
torchfile==0.1.0
torchvision==0.9.1+cpu
tornado==6.1
tqdm==4.60.0
typing-extensions==3.7.4.3
umap-learn==0.5.1
Unidecode==1.2.0
urllib3==1.26.4
visdom==0.1.8.9
websocket-client==0.58.0
| closed | 2021-04-11T14:46:26Z | 2021-04-22T20:18:15Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/733 | [] | 32r81b | 2 |
tartiflette/tartiflette | graphql | 292 | Cannot resolve extended types | With the schema
```
type Query {
hello: String
}
extend type Query {
world: String
}
```
I can see the extended fields in the GraphiQL schema, but as soon as I try to declare a resolver for the extended type like this
```
@Resolver('Query.world')
async def revolve_stuff(_, args, ctx, info):
return 'hi'
```
Baking the schema, I get the error:
```
Traceback (most recent call last):
File "/Users/morse/.pyenv/versions/3.7.2/lib/python3.7/site-packages/tartiflette/schema/schema.py", line 348, in get_field_by_name
return self.type_definitions[parent_name].find_field(field_name)
File "/Users/morse/.pyenv/versions/3.7.2/lib/python3.7/site-packages/tartiflette/types/object.py", line 125, in find_field
return self.implemented_fields[name]
KeyError: 'world'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/morse/.pyenv/versions/3.7.2/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/Users/morse/.pyenv/versions/3.7.2/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/Users/morse/Documents/GitHub/play-tartiflette/src/__main__.py", line 54, in <module>
run()
File "/Users/morse/Documents/GitHub/play-tartiflette/src/__main__.py", line 52, in run
web.run_app(app, port=8090,)
File "/Users/morse/.pyenv/versions/3.7.2/lib/python3.7/site-packages/aiohttp/web.py", line 415, in run_app
reuse_port=reuse_port))
File "/Users/morse/.pyenv/versions/3.7.2/lib/python3.7/asyncio/base_events.py", line 584, in run_until_complete
return future.result()
File "/Users/morse/.pyenv/versions/3.7.2/lib/python3.7/site-packages/aiohttp/web.py", line 287, in _run_app
await runner.setup()
File "/Users/morse/.pyenv/versions/3.7.2/lib/python3.7/site-packages/aiohttp/web_runner.py", line 203, in setup
self._server = await self._make_server()
File "/Users/morse/.pyenv/versions/3.7.2/lib/python3.7/site-packages/aiohttp/web_runner.py", line 302, in _make_server
await self._app.startup()
File "/Users/morse/.pyenv/versions/3.7.2/lib/python3.7/site-packages/aiohttp/web_app.py", line 389, in startup
await self.on_startup.send(self)
File "/Users/morse/.pyenv/versions/3.7.2/lib/python3.7/site-packages/aiohttp/signals.py", line 34, in send
await receiver(*args, **kwargs) # type: ignore
File "/Users/morse/.pyenv/versions/3.7.2/lib/python3.7/site-packages/tartiflette_aiohttp/__init__.py", line 97, in _cook_on_startup
sdl=sdl, schema_name=schema_name, modules=modules
File "/Users/morse/.pyenv/versions/3.7.2/lib/python3.7/site-packages/tartiflette/engine.py", line 250, in cook
schema_name, custom_default_resolver, custom_default_type_resolver
File "/Users/morse/.pyenv/versions/3.7.2/lib/python3.7/site-packages/tartiflette/schema/bakery.py", line 65, in bake
schema = SchemaBakery._preheat(schema_name)
File "/Users/morse/.pyenv/versions/3.7.2/lib/python3.7/site-packages/tartiflette/schema/bakery.py", line 39, in _preheat
obj.bake(schema)
File "/Users/morse/.pyenv/versions/3.7.2/lib/python3.7/site-packages/tartiflette/resolver/resolver.py", line 67, in bake
field = schema.get_field_by_name(self.name)
File "/Users/morse/.pyenv/versions/3.7.2/lib/python3.7/site-packages/tartiflette/schema/schema.py", line 351, in get_field_by_name
f"field `{name}` was not found in GraphQL schema."
tartiflette.types.exceptions.tartiflette.UnknownSchemaFieldResolver: field `Query.world` was not found in GraphQL schema.
```
I am using the last published 1.0.0rc1 together with tartiflette-aiohttp==1.0.0
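For anyone hitting the same error while this gets fixed: a temporary workaround (my suggestion, not an official fix) is to skip the `extend` mechanism and declare the field directly on the base type; the same `@Resolver('Query.world')` should then bake without the `UnknownSchemaFieldResolver` error:

```graphql
type Query {
  hello: String
  world: String
}
```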
| closed | 2019-09-14T12:10:09Z | 2019-09-15T07:34:53Z | https://github.com/tartiflette/tartiflette/issues/292 | [
"bug"
] | remorses | 1 |
aidlearning/AidLearning-FrameWork | jupyter | 62 | Not Opening anything |

When I click on any Icon it goes blank | closed | 2019-11-12T01:32:59Z | 2020-07-29T01:18:55Z | https://github.com/aidlearning/AidLearning-FrameWork/issues/62 | [
"duplicate"
] | zuhairabs | 4 |
rasbt/watermark | jupyter | 4 | Allow storing watermark info in metadata? | Watermark looks great for reproducibility.
It would be nice to have an option to (also) store this data in the notebook `metadata`:
``` json
{
"metadata": {
"watermark": {
"date": "2015-17-06T15:04:35",
"CPython": "3.4.3",
"IPython": "3.1.0",
"compiler": "GCC 4.2.1 (Apple Inc. build 5577)",
"system" : "Darwin",
"release" : "14.3.0",
"machine": "x86_64",
"processor" : "i386",
"CPU cores": "4",
"interpreter": "64bit"
}
}
}
}
```
Maybe some more hierarchy in there as well...
Since the kernel doesn't have any idea what's going on w/r/t notebooks, it would probably have to be done with a `display.Javascript`:
``` python
import json
from IPython.display import display, Javascript

if as_metadata:
    display(
        Javascript(
            "IPython.notebook.metadata.watermark = {}".format(json.dumps(the_data))
        )
    )
```
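On the kernel side, the payload itself is easy to assemble with the stdlib. A sketch (the key names mirror the JSON above and are my assumption, not an agreed schema; the IPython version would come from `IPython.__version__` and is omitted here to stay stdlib-only):

```python
import json
import multiprocessing
import platform
from datetime import datetime

def build_watermark_payload():
    """Collect the environment info shown above into a JSON-ready dict."""
    return {
        "date": datetime.now().strftime("%Y-%m-%dT%H:%M:%S"),
        "CPython": platform.python_version(),
        "compiler": platform.python_compiler(),
        "system": platform.system(),
        "release": platform.release(),
        "machine": platform.machine(),
        "processor": platform.processor(),
        "CPU cores": str(multiprocessing.cpu_count()),
        "interpreter": platform.architecture()[0],
    }

payload = build_watermark_payload()
print(json.dumps(payload, indent=2))
```

`json.dumps(payload)` is then exactly the string a `Javascript` display call would inject into the notebook metadata.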
Happy to help with a PR, if you think there is a place for this!
| open | 2015-09-01T19:44:53Z | 2018-09-24T15:56:10Z | https://github.com/rasbt/watermark/issues/4 | [
"enhancement"
] | bollwyvl | 7 |
MagicStack/asyncpg | asyncio | 320 | Race condition on __async_init__ when min_size > 0 | ```python
def get_pool(loop=None):
global pool
if pool is None:
pool = asyncpg.create_pool(**config.DTABASE, loop=loop)
return pool
```
Task1:
```python
pool = get_pool()
await pool # init
```
Task2:
```python
pool = get_pool()
await pool # init
```
Got `AssertionError` for asyncpg 0.15 and `InternalClientError` for asyncpg 0.16
```
File "asyncpg/pool.py", line 356, in _async__init__
await first_ch.connect()
File "asyncpg/pool.py", line 118, in connect
assert self._con is None
```
Both tasks try to connect the `PoolConnectionHolder`s to ensure that `min_size` connections will be ready.
1. Task1 create pool
2. Task1 await ensure min_size on __async_init__
3. Task2 get same pool
4. Task2 see initialization not done
5. Task1 done await
6. Task2 await ensure min_size on __async_init__ and failed
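That sequence can be avoided by serializing the first initialization behind an `asyncio.Lock`, so the second task simply reuses the ready pool. A stdlib sketch of the pattern (the `_create_pool` coroutine below is a stand-in for `asyncpg.create_pool(...)`, so the example stays runnable without a database):

```python
import asyncio

class PoolHolder:
    def __init__(self):
        self._pool = None
        self._lock = None
        self.init_calls = 0  # only to show that init runs exactly once

    async def _create_pool(self):
        # stand-in for: await asyncpg.create_pool(...)
        self.init_calls += 1
        await asyncio.sleep(0.01)  # simulate connecting min_size connections
        return object()

    async def get_pool(self):
        if self._pool is None:
            if self._lock is None:  # create the lock lazily, inside the running loop
                self._lock = asyncio.Lock()
            async with self._lock:
                if self._pool is None:  # re-check after acquiring the lock
                    self._pool = await self._create_pool()
        return self._pool

holder = PoolHolder()

async def main():
    p1, p2 = await asyncio.gather(holder.get_pool(), holder.get_pool())
    print(p1 is p2, holder.init_calls)  # prints: True 1

asyncio.run(main())
```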
Previously I tried to use the [cached-property](https://github.com/pydanny/cached-property) module, but this use case is broken there as well: for async properties it caches the Future and doesn't take any additional locks.
And if init fails, the exception is cached too.
This is why I don't await the pool when creating it, only when I need a connection.
A simple workaround is to set `min_size` to 0, then acquire and connect. Connection acquisition is safe. | closed | 2018-06-28T09:37:40Z | 2018-07-10T22:07:09Z | https://github.com/MagicStack/asyncpg/issues/320 | [
"enhancement"
] | spumer | 4 |
vitalik/django-ninja | pydantic | 1,085 | Need more comprehensive Docs/Tutorial like FastAPI | When I check out DRF / FastAPI docs, I find it quite comprehensive with examples given and explained.
They seem to be quite clear to understand.
It would be great if we could add a similar level of explanation in Django Ninja Docs.
PS: Just a beginner here, trying to use it over DRF for my project.
| open | 2024-02-16T19:22:50Z | 2024-03-24T21:02:35Z | https://github.com/vitalik/django-ninja/issues/1085 | [] | tushar9sk | 3 |
serengil/deepface | deep-learning | 1,451 | [FEATURE]: Add Angular Distance as a Distance Metric | ### Description
DeepFace currently supports cosine distance, Euclidean distance, and Euclidean L2 distance for face embedding comparisons. To enhance distance metric options, we should add angular distance, which is based on the angle between two embeddings. This metric is particularly useful for spherical embeddings and provides an alternative to cosine distance.
# Formula
```python
angular_distance = np.arccos(similarity) / math.pi
```
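To make the metric concrete, here is a dependency-free sketch of the full computation, from raw embeddings to angular distance (plain Python instead of NumPy, purely for illustration):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def angular_distance(a, b):
    # clamp to [-1, 1] to guard against floating-point drift before arccos
    similarity = max(-1.0, min(1.0, cosine_similarity(a, b)))
    return math.acos(similarity) / math.pi

print(angular_distance([1.0, 0.0], [1.0, 0.0]))   # identical embeddings -> 0.0
print(angular_distance([1.0, 0.0], [0.0, 1.0]))   # orthogonal -> 0.5
print(angular_distance([1.0, 0.0], [-1.0, 0.0]))  # opposite -> 1.0
```

The result lands in [0, 1], so a verification threshold tuned for cosine distance would need separate tuning for this metric.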
### Additional Info
_No response_ | open | 2025-03-10T12:55:48Z | 2025-03-19T11:49:13Z | https://github.com/serengil/deepface/issues/1451 | [
"enhancement"
] | serengil | 1 |
JaidedAI/EasyOCR | machine-learning | 1,005 | Text Detection training code(craft) has BUG | hi @gmuffiness and anyone who can help :)
I've tried to train CRAFT with the CRAFT training code provided by the EasyOCR repository, but I think the code has a bug. I trained with a single sample image from the ICDAR dataset (img_990.jpg; both my training and validation sets for fine-tuning consisted of just this one image) and used the pre-trained CRAFT model for fine-tuning. After 5 epochs the model learns to predict nothing (the list of predicted boxes becomes empty), yet the training loss keeps decreasing. How is that possible? I logged more info inside the code and print it below:
> results : [{'precision': 0.5, 'recall': 1.0, 'hmean': 0.6666666666666666, 'pairs': [{'gt': 0, 'det': 1}], 'iouMat': [[0.11546084772668544, 0.5038291537164455]], 'gtPolPoints': [array([[ 9, 55],
[168, 55],
[168, 19],
[ 9, 19]], dtype=int32)], 'detPolPoints': [array([[10.285714, 24. ],
[35.42857 , 24. ],
[35.42857 , 50.285713],
[10.285714, 50.285713]], dtype=float32), array([[ 56. , 24. ],
[165.71428 , 24. ],
[165.71428 , 50.285713],
[ 56. , 50.285713]], dtype=float32)], 'gtCare': 1, 'detCare': 2, 'gtDontCare': [], 'detDontCare': [], 'detMatched': 1, 'evaluationLog': 'GT polygons: 1\nDET polygons: 2\nMatch GT #0 with Det #1\n'}]
----------------------------------------
----------------------------------------
{'precision': 0.5, 'recall': 1.0, 'hmean': 0.6666666666666666, 'numGlobalCareDet': 2, 'numGlobalCareGt': 1, 'matchedSum': 1}
./data_root_dir/ch4_training_images/task_data_0.jpg
/usr/local/lib/python3.10/dist-packages/torch/nn/_reduction.py:42: UserWarning: size_average and reduce args will be deprecated, please use reduction='none' instead.
warnings.warn(warning.format(ret))
this is what is sending to batch_image_loss
loss_region:tensor([[[0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 1.8176e-04,
1.8178e-04, 1.8013e-04],
[0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 1.8177e-04,
1.8180e-04, 1.8281e-04],
[0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 1.8182e-04,
1.8186e-04, 1.9605e-04],
...,
[1.1542e-01, 1.1531e-01, 1.1127e-01, ..., 9.0923e-05,
9.0928e-05, 9.1294e-05],
[1.0864e-01, 1.1066e-01, 1.0959e-01, ..., 9.0938e-05,
9.1470e-05, 9.3088e-05],
[1.1095e-01, 1.0960e-01, 1.0875e-01, ..., 8.0241e-05,
8.8683e-05, 1.0420e-04]]], device='cuda:0', grad_fn=<MulBackward0>)
region_scores_label:tensor([[[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
...,
[0.4985, 0.4937, 0.4852, ..., 0.0000, 0.0000, 0.0000],
[0.4948, 0.4901, 0.4816, ..., 0.0000, 0.0000, 0.0000],
[0.4926, 0.4878, 0.4794, ..., 0.0000, 0.0000, 0.0000]]],
device='cuda:0')
neg_rto:0.3
n_min_neg:5000
Let's see what
here checking what is positive pixel number : 18774.0
here checking what is negative_pixel_number : 128682.0
positive loss is 0.10362127423286438
negative loss is 0.0016526866238564253
Let's see what
here checking what is positive pixel number : 26033.0
here checking what is negative_pixel_number : 121423.0
positive loss is 0.1118001639842987
negative loss is 0.0015156574081629515
lets see what is char_loss 0.10527396202087402 and affi_loss 0.11331582069396973
2023-05-01:14:19:05, training_step: 5|15, learning rate: 0.00010000, training_loss: 0.21859, avg_batch_time: 17.51073
Saving state, index: 5
100% 1/1 [00:00<00:00, 56.85it/s]
------------------------------------------------------------
------results : [{'precision': 0.0, 'recall': 0.0, 'hmean': 0, 'pairs': [], 'iouMat': [[0.4718834949057586]], 'gtPolPoints': [array([[ 9, 55],
[168, 55],
[168, 19],
[ 9, 19]], dtype=int32)], 'detPolPoints': [array([[ 57.142857, 24. ],
[164.57143 , 24. ],
[164.57143 , 49.142857],
[ 57.142857, 49.142857]], dtype=float32)], 'gtCare': 1, 'detCare': 1, 'gtDontCare': [], 'detDontCare': [], 'detMatched': 0, 'evaluationLog': 'GT polygons: 1\nDET polygons: 1\n'}]
----------------------------------------
----------------------------------------
{'precision': 0.0, 'recall': 0.0, 'hmean': 0, 'numGlobalCareDet': 1, 'numGlobalCareGt': 1, 'matchedSum': 0}
./data_root_dir/ch4_training_images/task_data_0.jpg
/usr/local/lib/python3.10/dist-packages/torch/nn/_reduction.py:42: UserWarning: size_average and reduce args will be deprecated, please use reduction='none' instead.
warnings.warn(warning.format(ret))
this is what is sending to batch_image_loss
loss_region:tensor([[[2.3100e-07, 4.2590e-06, 7.5753e-06, ..., 9.4357e-05,
9.4357e-05, 9.8660e-05],
[4.6670e-06, 6.8195e-06, 1.0874e-05, ..., 9.4345e-05,
9.4349e-05, 9.6100e-05],
[8.9083e-06, 1.1795e-05, 1.6938e-05, ..., 9.4362e-05,
9.4369e-05, 9.5385e-05],
...,
[1.7950e-04, 1.8878e-04, 1.8876e-04, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[1.9854e-04, 1.8880e-04, 1.8878e-04, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[2.0348e-04, 2.3584e-04, 2.0312e-04, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00]]], device='cuda:0', grad_fn=<MulBackward0>)
region_scores_label:tensor([[[0.0160, 0.0167, 0.0176, ..., 0.0000, 0.0000, 0.0000],
[0.0168, 0.0174, 0.0184, ..., 0.0000, 0.0000, 0.0000],
[0.0180, 0.0186, 0.0196, ..., 0.0000, 0.0000, 0.0000],
...,
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]]],
device='cuda:0')
neg_rto:0.3
n_min_neg:5000
Let's see what
here checking what is positive pixel number : 20353.0
here checking what is negative_pixel_number : 127103.0
positive loss is 0.06648588925600052
negative loss is 0.0022214415948837996
Let's see what
here checking what is positive pixel number : 41201.0
here checking what is negative_pixel_number : 106255.0
positive loss is 0.10165758430957794
negative loss is 0.0013221692061051726
lets see what is char_loss 0.06870733201503754 and affi_loss 0.10297975689172745
2023-05-01:14:19:07, training_step: 6|15, learning rate: 0.00010000, training_loss: 0.17169, avg_batch_time: 18.67954
Saving state, index: 6
100% 1/1 [00:00<00:00, 27.06it/s]
This is not correct. I don't know whether the problem is inside the implemented loss function or in some other part of the code.
I'll attach the image and the corresponding ground truth (gt) here:
[gt_img_990.txt](https://github.com/JaidedAI/EasyOCR/files/11379963/gt_img_990.txt)

and my config file is :
```yaml
wandb_opt: False
results_dir: "./exp/exp/"
vis_test_dir: "./exp/vis_result/"
data_root_dir: "./data_root_dir/"
score_gt_dir: None # "/data/ICDAR2015_official_supervision"
mode: "weak_supervision"

train:
  backbone: vgg
  use_synthtext: False # If you want to combine SynthText in train time as CRAFT did, you can turn on this option
  synth_data_dir: "/data/SynthText/"
  synth_ratio: 5
  real_dataset: custom
  ckpt_path: "./exp/CRAFT_clr_amp_29500.pth"
  eval_interval: 1
  batch_size: 5
  st_iter: 0
  end_iter: 25
  lr: 0.0001
  lr_decay: 7500
  gamma: 0.2
  weight_decay: 0.00001
  num_workers: 0 # On single gpu, train.py execution only works when num worker = 0 / On multi-gpu, you can set num_worker > 0 to speed up
  amp: True
  loss: 2
  neg_rto: 0.3
  n_min_neg: 5000
  data:
    vis_opt: True
    pseudo_vis_opt: True
    output_size: 768
    do_not_care_label: ['###', '']
    mean: [0.485, 0.456, 0.406]
    variance: [0.229, 0.224, 0.225]
    enlarge_region: [0.5, 0.5] # x axis, y axis
    enlarge_affinity: [0.5, 0.5]
    gauss_init_size: 200
    gauss_sigma: 40
    watershed:
      version: "skimage"
      sure_fg_th: 0.75
      sure_bg_th: 0.05
    syn_sample: -1
    custom_sample: -1
    syn_aug:
      random_scale:
        range: [1.0, 1.5, 2.0]
        option: False
      random_rotate:
        max_angle: 20
        option: False
      random_crop:
        version: "random_resize_crop_synth"
        option: True
      random_horizontal_flip:
        option: False
      random_colorjitter:
        brightness: 0.2
        contrast: 0.2
        saturation: 0.2
        hue: 0.2
        option: True
    custom_aug:
      random_scale:
        range: [1.0, 1.5, 2.0]
        option: False
      random_rotate:
        max_angle: 20
        option: True
      random_crop:
        version: "random_resize_crop"
        scale: [0.03, 0.4]
        ratio: [0.75, 1.33]
        rnd_threshold: 1.0
        option: True
      random_horizontal_flip:
        option: True
      random_colorjitter:
        brightness: 0.2
        contrast: 0.2
        saturation: 0.2
        hue: 0.2
        option: True

test:
  trained_model: './exp/exp/custom_data_train/CRAFT_clr_amp_25.pth'
  custom_data:
    test_set_size: 500
    test_data_dir: "./data_root_dir/"
    text_threshold: 0.75
    low_text: 0.5
    link_threshold: 0.2
    canvas_size: 2240
    mag_ratio: 1.75
    poly: False
    cuda: True
    vis_opt: True
```
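For readers puzzling over the `positive loss` / `negative loss` lines in the log above: they come from the trainer's hard-negative balancing, which is steered by the `neg_rto` and `n_min_neg` settings. A toy, dependency-free sketch of that balancing as I understand it (my reading of the mechanism, not the repository's exact code):

```python
def balanced_loss(pos_losses, neg_losses, neg_rto=0.3, n_min_neg=5000):
    """Average all positive pixel losses, but only the hardest negatives:
    keep max(n_min_neg, neg_rto * n_pos) negatives with the largest loss."""
    n_pos = len(pos_losses)
    n_keep = min(len(neg_losses), max(n_min_neg, int(neg_rto * n_pos)))
    hardest = sorted(neg_losses, reverse=True)[:n_keep]
    pos_term = sum(pos_losses) / max(n_pos, 1)
    neg_term = sum(hardest) / max(len(hardest), 1)
    return pos_term + neg_term

# magnitudes roughly matching the log: ~19k positive and ~128k negative pixels
pos = [0.1] * 19000
neg = [0.001] * 128000
print(round(balanced_loss(pos, neg), 4))  # -> 0.101
```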
| open | 2023-05-03T06:32:55Z | 2024-11-14T23:03:42Z | https://github.com/JaidedAI/EasyOCR/issues/1005 | [] | masoudMZB | 4 |
fastapi/sqlmodel | fastapi | 164 | Enum types on inherited models don't correctly create type in Postgres | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
import enum
import uuid
from sqlalchemy import Enum, Column, create_mock_engine
from sqlalchemy.sql.type_api import TypeEngine
from sqlmodel import SQLModel, Field
class MyEnum(enum.Enum):
A = 'A'
B = 'B'
class MyEnum2(enum.Enum):
C = 'C'
D = 'D'
class BaseModel(SQLModel):
id: uuid.UUID = Field(primary_key=True)
enum_field: MyEnum2 = Field(sa_column=Column(Enum(MyEnum2)))
class FlatModel(SQLModel, table=True):
id: uuid.UUID = Field(primary_key=True)
enum_field: MyEnum = Field(sa_column=Column(Enum(MyEnum)))
class InheritModel(BaseModel, table=True):
pass
def dump(sql: TypeEngine, *args, **kwargs):
dialect = sql.compile(dialect=engine.dialect)
sql_str = str(dialect).rstrip()
if sql_str:
print(sql_str + ';')
engine = create_mock_engine('postgresql://', dump)
SQLModel.metadata.create_all(bind=engine, checkfirst=False)
```
### Description
When executing the above example code, the output shows that only the enum from FlatModel is correctly created, while the enum from the inherited class is not:
```sql
CREATE TYPE myenum AS ENUM ('A', 'B');
-- There should be a TYPE def for myenum2 here, but there isn't
CREATE TABLE flatmodel (
enum_field myenum,
id UUID NOT NULL,
PRIMARY KEY (id)
);
CREATE INDEX ix_flatmodel_id ON flatmodel (id);
CREATE TABLE inheritmodel (
enum_field myenum2,
id UUID NOT NULL,
PRIMARY KEY (id)
);
CREATE INDEX ix_inheritmodel_id ON inheritmodel (id);
```
### Operating System
macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
3.9.7
### Additional Context
_No response_ | closed | 2021-11-23T21:32:35Z | 2023-10-23T12:32:36Z | https://github.com/fastapi/sqlmodel/issues/164 | [
"answered"
] | chriswhite199 | 11 |
piskvorky/gensim | nlp | 2,948 | rename 8.6 tag to 0.8.6 | #### Problem description
There is a tag "8.6" in the repository that is between 0.8.7 and 0.8.5 in terms of when it was created, but it is missing the 0 at the start.
#### Steps/code/corpus to reproduce
```
git clone https://github.com/RaRe-Technologies/gensim.git
cd gensim
git tag | grep -v '^[0-3]'
```
Or load the github tags page:
https://github.com/RaRe-Technologies/gensim/tags?after=0.8.9
#### Versions
0.8.6
#### Suggested fix
```
git tag 0.8.6 8.6
git tag -d 8.6
git push --delete origin 8.6
git push --tags
``` | closed | 2020-09-16T08:21:41Z | 2022-05-05T06:41:20Z | https://github.com/piskvorky/gensim/issues/2948 | [] | pabs3 | 11 |
ultralytics/ultralytics | computer-vision | 18,816 | looks like starting over again and again automatically??? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi, I encountered a problem when I read and run the source code of Ultralytics. I did not call it after pip install Ultralytics, but I downloaded code zip and read the source code and run it using the same code with pip install Ultralytics.
when it runs to self.run_callbacks("on_train_start") in ultralytics\engine\trainer.py, it looks like starting over again and again automatically for 6 times totally.
When I comment this line(self.run_callbacks("on_train_start") ), it is ok again.
When exit running before 'for i, batch in pbar:' in ultralytics\engine\trainer.py, it is ok. When exit running after 'for i, batch in pbar:' in ultralytics\engine\trainer.py (just exit in the next one line of it), it looks like starting over again and again automatically till out of memory or report errors like 'RuntimeError: DataLoader worker (pid(s) 11868, 6544) exited unexpectedly'. Sometimes Pycharm automatically exit and closed, maybe because of out of memory. Show the console in below(deleted many repetitive information for character length limit).
All the above problems occur in GPU environment. But it is ok in CPU.
Any friends can explain it and solve it? Thank you!
The code where occurs the problem shows below. It is in ultralytics\engine\trainer.py.
` if epoch == (self.epochs - self.args.close_mosaic):
self._close_dataloader_mosaic()
self.train_loader.reset()
if RANK in {-1, 0}:
LOGGER.info(self.progress_string())
pbar = TQDM(enumerate(self.train_loader), total=nb)
self.tloss = None
for i, batch in pbar: # 这行是分界,前面没问题后面有问题
print('YYYYYDS')
exit()`
```
Logging results to runs\detect\train389
Starting training for 100 epochs...
New https://pypi.org/project/ultralytics/8.3.65 available
Update with 'pip install -U ultralytics'
Ultralytics 8.3.59 Python-3.8.19 torch-2.2.2+cu121 CUDA:0 (NVIDIA GeForce RTX 3060, 12288MiB)
engine/trainer: task=detect, mode=train, model=yolo11n.yaml, data=coco8.yaml, epochs=100, time=None, patience=100, batch=16, imgsz=640, save=True, save_period=-1
New https://pypi.org/project/ultralytics/8.3.65 available
Update with 'pip install -U ultralytics'
Ultralytics 8.3.59 Python-3.8.19 torch-2.2.2+cu121 CUDA:0 (NVIDIA GeForce RTX 3060, 12288MiB)
engine/trainer: task=detect, mode=train, model=yolo11n.yaml, data=coco8.yaml, epochs=100, time=None, patience=100, batch=16, imgsz=640, save=True, save_period=-1
New https://pypi.org/project/ultralytics/8.3.65 available
Update with 'pip install -U ultralytics'
Ultralytics 8.3.59 Python-3.8.19 torch-2.2.2+cu121 CUDA:0 (NVIDIA GeForce RTX 3060, 12288MiB)
engine/trainer: task=detect, mode=train, model=yolo11n.yaml, data=coco8.yaml, epochs=100, time=None, patience=100, batch=16, imgsz=640, save=True, save_period=-1
New https://pypi.org/project/ultralytics/8.3.65 available
Update with 'pip install -U ultralytics'
Ultralytics 8.3.59 Python-3.8.19 torch-2.2.2+cu121 CUDA:0 (NVIDIA GeForce RTX 3060, 12288MiB)
engine/trainer: task=detect, mode=train, model=yolo11n.yaml, data=coco8.yaml, epochs=100, time=None, patience=100, batch=16, imgsz=640, save=True, save_period=-1
New https://pypi.org/project/ultralytics/8.3.65 available
Update with 'pip install -U ultralytics'
Ultralytics 8.3.59 Python-3.8.19 torch-2.2.2+cu121 CUDA:0 (NVIDIA GeForce RTX 3060, 12288MiB)
engine/trainer: task=detect, mode=train, model=yolo11n.yaml, data=coco8.yaml, epochs=100, time=None, patience=100, batch=16, imgsz=640, save=True, save_period=-1
New https://pypi.org/project/ultralytics/8.3.65 available
Update with 'pip install -U ultralytics'
Ultralytics 8.3.59 Python-3.8.19 torch-2.2.2+cu121 CUDA:0 (NVIDIA GeForce RTX 3060, 12288MiB)
engine/trainer: task=detect, mode=train, model=yolo11n.yaml, data=coco8.yaml, epochs=100, time=None, patience=100, batch=16, imgsz=640, save=True, save_period=-1
```
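For what it's worth, the symptom (the whole script apparently re-running once per DataLoader worker, Windows-style paths, fine on CPU) matches the classic Windows `spawn` start-method pitfall: worker processes re-import the launching module, so any top-level training code executes again unless it sits behind an `if __name__ == "__main__":` guard. A minimal stdlib illustration of the guard (my assumption about the cause, not a confirmed diagnosis):

```python
import multiprocessing as mp

def square(i):
    return i * i

if __name__ == "__main__":
    # "spawn" is the default start method on Windows: child processes
    # re-import this module, so unguarded top-level code would run again
    # in every worker, which is exactly a "starting over" symptom.
    mp.set_start_method("spawn", force=True)
    with mp.Pool(2) as pool:
        print(pool.map(square, range(4)))  # -> [0, 1, 4, 9]
```

If the script that drives the trainer has unguarded top-level code, moving the training call under the same guard would be the first thing I would try.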
### Additional
_No response_ | closed | 2025-01-22T08:11:36Z | 2025-03-13T09:09:31Z | https://github.com/ultralytics/ultralytics/issues/18816 | [
"question",
"detect"
] | AlbertMa123 | 19 |
pennersr/django-allauth | django | 3,405 | ModuleNotFoundError: No module named 'allauth.account.middleware' | Weird error won't let me import allauth.account.middleware
Any smart minds know what the issue could be? Can't find any similar.
Already tried: Upgrading to python3 and reinstalling django-allauth
Stack Overflow question link: https://stackoverflow.com/questions/77012106/django-allauth-modulenotfounderror-no-module-named-allauth-account-middlewar
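One thing worth checking first: as far as I know, `allauth.account.middleware` only exists from django-allauth 0.56.0 onward, so this exact `ModuleNotFoundError` usually means the interpreter is importing an older installed version. A quick stdlib check (a sketch):

```python
from importlib import metadata

try:
    print("django-allauth version:", metadata.version("django-allauth"))
except metadata.PackageNotFoundError:
    print("django-allauth is not installed in this environment")
```

If the printed version is older than 0.56.0, upgrading with `pip install -U django-allauth` in the same interpreter (note the Windows Store Python paths in the traceback, which often hide a second Python installation) would be my first step.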
settings.py:
```
"""
Django settings for youtube2blog2 project.
Generated by 'django-admin startproject' using Django 4.2.4.
For more information on this file, see
For the full list of settings and their values, see
"""
from pathlib import Path
import django
import os
import logging
import pyfiglet
import allauth
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# Quick-start development settings - unsuitable for production
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'omegalul'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# CUSTOM CODE
# os.environ['FFMPEG_PATH'] = '/third-party/ffmpeg.exe'
# os.environ['FFPROBE_PATH'] = '/third-party/ffplay.exe'
OFFLINE_VERSION = False
def offline_version_setup(databases):
if (OFFLINE_VERSION):
# WRITE CODE TO REPLACE DATABASES DICT DATA FOR OFFLINE SETUP HERE
return True
return
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
print("\n - CURRENT DJANGO VERSION: " + str(django.get_version()))
print("\n - settings.py: Current logger level is " + str(logger.getEffectiveLevel()))
logger.debug('settings.py: Logger is working.\n\n')
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
MEDIA_URL = '/media/'
AUTHENTICATION_BACKENDS = [
# Needed to login by username in Django admin, regardless of `allauth`
'django.contrib.auth.backends.ModelBackend',
# `allauth` specific authentication methods, such as login by email
'allauth.account.auth_backends.AuthenticationBackend',
]
'''
NEEDED SETUP FOR SOCIAL AUTH
REQUIRES DEVELOPER CREDENTIALS
ON PAUSE UNTIL MVP IS DONE
# Provider specific settings
SOCIALACCOUNT_PROVIDERS = {
'google': {
# For each OAuth based provider, either add a ``SocialApp``
# (``socialaccount`` app) containing the required client
# credentials, or list them here:
'APP': {
'client_id': '123',
'secret': '456',
'key': ''
}
}
'apple': {
}
'discord' {
}
}
'''
LOGIN_REDIRECT_URL = 'dashboard'
#
# Application definition
INSTALLED_APPS = [
# My Apps
'yt2b2',
'home',
'dashboard',
# Django Apps
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
# Downloaded Apps
'rest_framework',
'embed_video',
'allauth',
'allauth.account',
'allauth.socialaccount',
#'allauth.socialaccount.providers.google',
#'allauth.socialaccount.providers.apple',
#'allauth.socialaccount.providers.discord',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
# Downloaded Middleware
'allauth.account.middleware.AccountMiddleware',
]
ROOT_URLCONF = 'youtube2blog2.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'youtube2blog2.wsgi.application'
# Database
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3', # <--------- OFFLINE VERSION
# Consider masking these secret variables using a .env file to beef up your Django app's security. Besides, Vercel allows you to list your environment variables during deployment.
#'URL' : 'postgresql://postgres:oibkk5LL9sI5dzY5PAnj@containers-us-west-128.railway.app:5968/railway',
#'NAME' : 'railway',
#'USER' : 'postgres',
#'PASSWORD' : 'oibkk5LL9sI5dzY5PAnj',
#'HOST' : 'containers-us-west-128.railway.app',
#'PORT' : '5968'
}
}
# Password validation
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
STATIC_URL = '/static/' # the path in url
STATICFILES_DIRS = [
os.path.join(BASE_DIR, "static"),
]
# Default primary key field type
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
```
Error log:
```
System check identified no issues (0 silenced).
Exception in thread django-main-thread:
Traceback (most recent call last):
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\core\servers\basehttp.py", line 48, in get_internal_wsgi_application
return import_string(app_path)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\utils\module_loading.py", line 30, in import_string
return cached_import(module_path, class_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\utils\module_loading.py", line 15, in cached_import
module = import_module(module_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1520.0_x64__qbz5n2kfra8p0\Lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\Pedro\Documents\GITHUB\YT2B2-new-dev\YT2B2\youtube2blog2\youtube2blog2\wsgi.py", line 16, in <module>
application = get_wsgi_application()
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\core\wsgi.py", line 13, in get_wsgi_application
return WSGIHandler()
^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\core\handlers\wsgi.py", line 118, in __init__
self.load_middleware()
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\core\handlers\base.py", line 40, in load_middleware
middleware = import_string(middleware_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\utils\module_loading.py", line 30, in import_string
return cached_import(module_path, class_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\utils\module_loading.py", line 15, in cached_import
module = import_module(module_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1520.0_x64__qbz5n2kfra8p0\Lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1140, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'allauth.account.middleware'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1520.0_x64__qbz5n2kfra8p0\Lib\threading.py", line 1038, in _bootstrap_inner
self.run()
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1520.0_x64__qbz5n2kfra8p0\Lib\threading.py", line 975, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\utils\autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\core\management\commands\runserver.py", line 139, in inner_run
handler = self.get_handler(*args, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\contrib\staticfiles\management\commands\runserver.py", line 31, in get_handler
handler = super().get_handler(*args, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\core\management\commands\runserver.py", line 78, in get_handler
return get_internal_wsgi_application()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\core\servers\basehttp.py", line 50, in get_internal_wsgi_application
raise ImproperlyConfigured(
django.core.exceptions.ImproperlyConfigured: WSGI application 'youtube2blog2.wsgi.application' could not be loaded; Error importing module.
``` | closed | 2023-08-31T00:21:02Z | 2025-03-22T17:00:20Z | https://github.com/pennersr/django-allauth/issues/3405 | [] | pedro-santos21 | 5 |
electricitymaps/electricitymaps-contrib | data-visualization | 7,596 | CO2 net exchange chart is only presented for 24h and 72h - not for 30d, 12mo, all - missing `totalCo2Export` and `totalCo2Import` | ## Bug description / Feature request
Data (`totalCo2Export` and `totalCo2Import`) isn't available for the CO2 net exchange chart in 30d+ views, so the chart is only presented for the 24h and 72h views. This was surprising to me since `totalExport` and `totalImport` are around. Is the CO2 equivalent of `totalExport` and `totalImport` not being calculated due to a bug, or is it a feature not yet developed?
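For illustration, a minimal sketch of the kind of hourly-to-daily aggregation that appears to be missing; the field names come from above, but the data shape and the function itself are assumptions, not the project's code:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical hourly rows, roughly as a parser might produce them.
hourly = [
    {"datetime": "2024-12-20T00:00:00", "totalCo2Export": 120.0, "totalCo2Import": 80.0},
    {"datetime": "2024-12-20T01:00:00", "totalCo2Export": 100.0, "totalCo2Import": 90.0},
    {"datetime": "2024-12-21T00:00:00", "totalCo2Export": 50.0, "totalCo2Import": 60.0},
]

def aggregate_daily(rows):
    """Sum hourly CO2 exchange fields into per-day totals."""
    daily = defaultdict(lambda: {"totalCo2Export": 0.0, "totalCo2Import": 0.0})
    for row in rows:
        day = datetime.fromisoformat(row["datetime"]).date().isoformat()
        daily[day]["totalCo2Export"] += row["totalCo2Export"]
        daily[day]["totalCo2Import"] += row["totalCo2Import"]
    return dict(daily)

print(aggregate_daily(hourly))
```

The same grouping key swapped to month or year would give the 12mo and all-time aggregates.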
## Analysis
I'm not sure why `totalCo2Export` and `totalCo2Import` aren't calculated, because I don't yet understand where that calculation happens. I figure parsers need to acquire the info, which they do at the hourly level, and then the hourly data needs to be processed into daily / monthly / yearly aggregates, and that isn't getting done. | open | 2024-12-20T18:43:47Z | 2024-12-20T23:22:33Z | https://github.com/electricitymaps/electricitymaps-contrib/issues/7596 | [] | consideRatio | 2 |
onnx/onnx | scikit-learn | 6,239 | Create 1.16.2 release? | # Ask a Question
### Question
Should we cut a 1.16.2 release to pull in some commits from main or wait for 1.17.0?
### Further information
Primary PRs of interest to pull (will add to list if more are requested):
* https://github.com/onnx/onnx/pull/6164
* https://github.com/onnx/onnx/pull/6222
### Notes
I tried a smoke test cherry-picking the above PRs into a 1.16.1 based branch at https://github.com/onnx/onnx/pull/6238 but ran into issues.
The Linux CI tests look like they will be the biggest issue. They are failing for "Verify ONNX with the latest numpy", "Verify ONNX with the latest protobuf", and "Verify ONNX with the minimumly supported packages".
I'm guessing the numpy issue will be related to numpy 2.0 now being the latest. Support for numpy 2.0 was added to `main` just recently via https://github.com/onnx/onnx/pull/6196. However it's a large PR that touches a lot for a patch release. Also I tried to locally cherry-pick it into my smoke test branch but hit a bunch of conflicts.
So to create a 1.16.2, we'd have to either:
* cherry-pick numpy 2.0 support and an unknown number of other `main` commits so it can be cleanly picked
* cherry-pick numpy 2.0 and manually fix the conflicts.
* In a 1.16.2 branch, alter the CI so it doesn't pull in the packages that are causing the issues.
There were also Windows CI issues. At a glance that might be as easy as pulling in #6173 and #6179. They cherry-pick cleanly but I don't know if there would be more issues after pulling those in.
My bandwidth is currently limited so there's not much more I can do related to getting the CIs passing for a 1.16.2 release. I can create a 1.16.2 branch and handle straightforward cherry-picks, but I'd need someone else to get past the issues above. Once the CIs are passing I'd be able to finish the process and get it released if we decide to go forward with a 1.16.2. | open | 2024-07-19T19:22:24Z | 2024-07-30T15:32:49Z | https://github.com/onnx/onnx/issues/6239 | [
"question"
] | cjvolzka | 12 |
jupyterlab/jupyter-ai | jupyter | 1,052 | v3.0.0 roadmap & release plan | Attention Jupyter AI users! I have some exciting news to share in this issue. We are currently planning, designing, and building the next major release of Jupyter AI, v3.0.0.
This issue is a living document that lists features planned for v3.0.0, and publicly tracks progress on v3.0.0 development. The list of planned features is incomplete and will evolve in the coming months. I'm posting what we have planned so far to make our progress transparent & visible to the broader user community.
The list of issues being considered for v3.0.0 are also listed here in a dedicated GitHub milestone: https://github.com/jupyterlab/jupyter-ai/milestone/10
## Planned features
These are the features we are >95% confident should be developed as part of v3.0.0.
- [x] **Migration to Jupyter Chat (must be completed first)**
    - Context: [Jupyter Chat](https://github.com/jupyterlab/jupyter-chat/tree/main) is a new package that re-defines and provides the frontend components originally from Jupyter AI, while completely re-defining the backend model of chats. Chats will no longer be a simple list of messages stored in memory that vanishes on server restart; instead, they will be persisted as plaintext `*.chat` files and represented in-memory as a [Yjs CRDT document](https://github.com/yjs/yjs). Yjs is the same family of packages that powers [Jupyter Collaboration](https://github.com/jupyterlab/jupyter-collaboration), which provides RTC functionality in JupyterLab. @brichet (the lead dev behind Jupyter Chat) and I will be working closely to ensure that this migration will be seamless & painless for existing users & contributors.
- Motivation: This migration will 1) allow for multiple chats by creating a top-level abstraction for chats, 2) simplify Jupyter AI's backend by delegating chat state management & synchronization to Yjs, 3) allow for real-time editing of the chat history to enable features like message editing & re-ordering. This migration also moves most of Jupyter AI's frontend components to `@jupyter/chat`, allowing other extensions to re-use our code to build their own chat applications.
- Issue: https://github.com/jupyterlab/jupyter-ai/issues/785
- Issue: https://github.com/jupyterlab/jupyter-ai/issues/862
- PR: https://github.com/jupyterlab/jupyter-ai/pull/1043
- [x] **Multiple conversation management**
- Issue: https://github.com/jupyterlab/jupyter-ai/issues/813
- [x] **Message editing**
- Issue: https://github.com/jupyterlab/jupyter-ai/issues/339
- [x] **Migration to Pydantic v2 and LangChain >=0.3**
- Issue: https://github.com/jupyterlab/jupyter-ai/issues/1003
- [ ] **Unify authentication in chat & magics**
- Issue: https://github.com/jupyterlab/jupyter-ai/issues/1103
- There likely are a few more which should be added.
## Tentative features
These are features that may be developed as part of v3.0.0, but require further design, research, or feedback from the community.
- **Allow for dynamic installation of model dependencies**
- Issue: https://github.com/jupyterlab/jupyter-ai/issues/840
- Issue: https://github.com/jupyterlab/jupyter-ai/issues/680
- **Improve `/generate` by implementing it as an agentic workflow**
- Issue: https://github.com/jupyterlab/jupyter-ai/issues/1111
## Details on v3.0.0 development (for contributors)
- **The new `v3-dev` branch will track development on v3.0.0 until its initial release.**
- Until then, `main` will still track Jupyter AI v2.x.
- **From now on, newly-merged PRs should be backported to `v3-dev`.**
- Comment `@meeseeksdev please backport to v3-dev` on PRs after merging to have the bot automatically open a new backport PR against `v3-dev`.
- **We are targeting to have a pre-release ready by the end of December 2024.** The goal of this pre-release is to just achieve feature parity with v2 while migrating to `jupyterlab-chat`, i.e. all the features that work today for a single chat should work reasonably well for multiple chats.
- When `v3-dev` is ready for its initial release, a PR merging `v3-dev` into `main` will be opened, and be reviewed a final time.
- After that PR is merged, `main` will track v3.x, and a separate `2.x` branch will track Jupyter AI v2.x.
- We acknowledge that the Jupyter Chat migration requires contextual knowledge of Jupyter Chat & Yjs, which makes it difficult for others to contribute directly. This migration also changes the entire chat API, on both the frontend and the backend. @brichet and I are prioritizing reaching alignment on the new chat API that will be used in Jupyter AI v3.0.0 as quickly as possible, so other contributors can build freely using a (relatively) stable chat API (ETA: by end of Dec).
- Once that is complete, we will add a "Contributing" section to this issue that details how contributors can assign themselves issues & open PRs, and provides a summary of what is different in `v3-dev`.
- For now, we ask that those who wish to contribute do so by opening new issues, leaving feedback on existing ones listed here, and reviewing `v3-dev` PRs to stay in-the-loop on code changes.
| open | 2024-10-23T21:56:52Z | 2025-02-18T09:53:36Z | https://github.com/jupyterlab/jupyter-ai/issues/1052 | [
"enhancement"
] | dlqqq | 1 |
vanna-ai/vanna | data-visualization | 700 | Why do all routes point to index.html? | **Describe the bug**
all routes point to index.html
Once I specify index_html_path this happens
**To Reproduce**
Steps to reproduce the behavior:

**Expected behavior**
A clear and concise description of what you expected to happen.
**Error logs/Screenshots**
If applicable, add logs/screenshots to give more information about the issue.
**Desktop (please complete the following information where):**
- OS: [e.g. windows]
- Version: [e.g. 11]
- Python: [3.10]
- Vanna: [0.7.5]
**Additional context**
Add any other context about the problem here.
| closed | 2024-11-14T02:14:01Z | 2024-11-14T06:35:10Z | https://github.com/vanna-ai/vanna/issues/700 | [
"bug"
] | SharkSyl | 0 |
stanfordnlp/stanza | nlp | 778 | How to apply Stanza nlp tokenizer on dataframe? |
### **Code Below**
```
import stanza
nlp = stanza.Pipeline('ur', processors='tokenize',tokenize_no_ssplit=True)
def tokenizee(text):
text=nlp(text)
return text
```
```
import re
import string

from nltk.corpus import stopwords  # needed for stopwords.words('urdu.txt') below

import urduhack
from urduhack.preprocessing import replace_urls
from urduhack.preprocessing import remove_english_alphabets
from urduhack.preprocessing import normalize_whitespace
from urduhack.normalization import remove_diacritics
from urduhack import normalize
from urduhack.tokenization import word_tokenizer
from urduhack.normalization import normalize_characters
from urduhack.normalization import normalize_combine_characters

punctuations = '''`%÷×؛<>_()*&^%][ـ،/:"؟—.,'{}~¦+|!”…“–ـ''' + string.punctuation
#normalize text
def normalize_text(text):
text =normalize(text)
return text
#normalize_characters
def normalize_chars(text):
text =normalize_characters(text)
return text
#normalize_combine_characters
def normalize_combine_chars(text):
text =normalize_combine_characters(text)
return text
#remove urls
def remove_urls(text):
text =replace_urls(text)
return text
#remove diacritics
def remove_diacriticss(text):
text =remove_diacritics(text)
return text
#normalize whitespace
def normalize_white_space(text):
text = normalize_whitespace(text)
return text
#remove english letters
def remove_english_letters(text):
text = remove_english_alphabets(text)
return text
#remove punctuations
def remove_punctuations(text):
translator = str.maketrans('', '', punctuations)
text = text.translate(translator)
return text
# remove numbers
def remove_numbers(text):
result = re.sub(r'\d+', '', text)
return result
# tokenize
def tokenize(text):
text = word_tokenize(text)
return text
# remove stopwords
stop_words = set(stopwords.words('urdu.txt'))
def remove_stopwords(text):
text = [i for i in text if not i in stop_words]
return text
#Remove Emoji
def remove_emoji(text):
emoji_pattern = re.compile("["
u"\U0001F600-\U0001F64F" # emoticons
u"\U0001F300-\U0001F5FF" # symbols & pictographs
u"\U0001F680-\U0001F6FF" # transport & map symbols
u"\U0001F1E0-\U0001F1FF" # flags (iOS)
u"\U00002500-\U00002BEF" # chinese char
u"\U00002702-\U000027B0"
u"\U00002702-\U000027B0"
u"\U000024C2-\U0001F251"
u"\U0001f926-\U0001f937"
u"\U00010000-\U0010ffff"
u"\u2640-\u2642"
u"\u2600-\u2B55"
u"\u200d"
u"\u23cf"
u"\u23e9"
u"\u231a"
u"\ufe0f" # dingbats
u"\u3030"
"]+", flags=re.UNICODE)
return emoji_pattern.sub(r'', text)
def preprocess(text):
text=normalize_chars(text)
text=normalize_combine_chars(text)
text=remove_urls(text)
text=remove_emoji(text)
text=remove_diacriticss(text)
text=normalize_white_space(text)
text=remove_english_letters(text)
text=remove_punctuations(text)
text=remove_numbers(text)
text= nlp(text)
return text
```
`DAS['Text'][0:5].apply(preprocess)`
### **Output**
```
0    [\n  [\n    {\n      "id": 1,\n      "text": "...
1    [\n  [\n    {\n      "id": 1,\n      "text": "...
2    [\n  [\n    {\n      "id": 1,\n      "text": "...
3    [\n  [\n    {\n      "id": 1,\n      "text": "...
4    [\n  [\n    {\n      "id": 1,\n      "text": "...
Name: Text, dtype: object
```
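The rows above print as serialized stanza `Document` objects because `preprocess` returns `nlp(text)` directly, so the Series holds `Document`s rather than strings. Flattening `doc.sentences[*].tokens[*].text` gives plain token lists. A minimal sketch using stub classes in place of stanza (the attribute names `sentences`, `tokens` and `text` follow stanza's `Document` API, but the stub classes and the `tokens_of` helper are illustrative, not stanza code):

```python
class Token:
    def __init__(self, text):
        self.text = text

class Sentence:
    def __init__(self, tokens):
        self.tokens = [Token(t) for t in tokens]

class Document:  # stub with the same shape as a stanza Document
    def __init__(self, sentences):
        self.sentences = [Sentence(s) for s in sentences]

def tokens_of(doc):
    """Flatten a (stanza-shaped) Document into a list of token strings."""
    return [tok.text for sent in doc.sentences for tok in sent.tokens]

doc = Document([["میلان", "،", "انیس", "جنوری"]])
print(tokens_of(doc))
```

With real stanza, the final line of `preprocess` would become `return tokens_of(nlp(text))`, and `DAS['Text'].apply(preprocess)` would then yield lists of strings instead of serialized `Document` objects.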
| closed | 2021-07-27T12:52:08Z | 2021-07-27T18:23:33Z | https://github.com/stanfordnlp/stanza/issues/778 | [
"enhancement"
] | mahad-maqsood | 3 |
numpy/numpy | numpy | 27,934 | ENH: npy file format does not use standard JSON in header, change it to do so | ### Describe the issue:
I'm writing a loader for the npy file format in a non-Python language. Part of the header of this file is a JSON encoded dictionary. I am unable to parse it in my other language because the JSON numpy is generating does not conform to the JSON standard.
https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html
Specifically, the JSON standard requires:
- double quotes for string literals
- square brackets for arrays
- lower case for true and false boolean values
The .npy file header uses
- single quotes for string literals
- round brackets for arrays
- capitalized True and False for boolean values
Dictionary extracted from the .npy file I generated:
```
{'descr': '<f8', 'fortran_order': False, 'shape': (10, 50, 60, 70), }
```
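For illustration, the header above does parse cleanly as a Python literal (consistent with the header being a Python literal dict rather than JSON), so in Python it round-trips through `ast.literal_eval` even though standards-compliant JSON parsers in other languages reject it:

```python
import ast

# Header dict exactly as it appears in the .npy file above:
header = "{'descr': '<f8', 'fortran_order': False, 'shape': (10, 50, 60, 70), }"

meta = ast.literal_eval(header)  # safe evaluation of the Python literal
print(meta["descr"], meta["fortran_order"], meta["shape"])
# -> <f8 False (10, 50, 60, 70)
```

This is exactly what loaders in other languages cannot do without a small custom parser, which is the motivation for the request.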
### Reproduce the code example:
```python
import numpy as np
import opensimplex
feature_size = 10
size_x = 70
size_y = 60
size_z = 50
size_w = 10
ix = np.arange(0, feature_size, feature_size / size_x)
iy = np.arange(0, feature_size, feature_size / size_y)
iz = np.arange(0, feature_size, feature_size / size_z)
iw = np.arange(0, feature_size, feature_size / size_w)
arr = opensimplex.noise4array(ix, iy, iz, iw)
arr = (arr + 1) * .5
np.save('noise_data_4d.npy', arr)
```
### Error message:
_No response_
### Python and NumPy Versions:
2.1.3
3.11.3 (tags/v3.11.3:f3909b8, Apr 4 2023, 23:49:59) [MSC v.1934 64 bit (AMD64)]
### Runtime Environment:
_No response_
### Context for the issue:
The JSON dictionary should adhere to the JSON standard for compatibility with other programming languages. | open | 2024-12-08T11:44:21Z | 2025-01-04T04:45:14Z | https://github.com/numpy/numpy/issues/27934 | [
"01 - Enhancement"
] | blackears | 5 |
coqui-ai/TTS | pytorch | 4,017 | VITS model gives bad results (training an italian tts model) | ### Describe the bug
Hi everyone. I'm new to the world of ML, so I'm not used to training AI models...
I really want to create my own TTS model using coqui's VITS trainer, so I've done a lot of research about it. I configured some dataset parameters and configuration functions and then started training. For the training I used almost 10 hours of audio spoken in Italian. After training I tried the model but the result is not bad, it's FAIRLY bad... The model doesn't even "speak" a language. Here is an example of the sentence:
`"input_text": ""input_text": "Oh, finalmente sei arrivato fin qui. Non è affatto comune che un semplice essere umano riesca a penetrare così profondamente nella mia dimora. Scarlet Devil Mansion non è un posto per i deboli di cuore, lo sapevi?""`
(I do not recommend listening to the audio at full volume.)
https://github.com/user-attachments/assets/b4039119-2666-455f-8ed7-6a0b05179f8f
The voice in the audio is actually from an RVC model. I imported the model into a program that runs TTS first and then applies the weights of an RVC model to the generated audio. It's not an RVC problem, because I used this program with the same RVC and other TTS models (mostly in English and one in Italian) and they work well, especially the English ones.
### To Reproduce
Here's my configuration:
Dataset config:
> output_path = "/content/gdrive/MyDrive/tts"
> dataset_config` = BaseDatasetConfig(
formatter="ljspeech",
meta_file_train="test.txt",
path=os.path.join(output_path, "Dataset/"),
language="it"
> )
Dataset format:
```
wav_file|text|text
imalavoglia_00_verga_f000053|Milano, diciannove gennaio mille ottocento ottantuno.|Milano, diciannove gennaio mille ottocento ottantuno.
```
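As a quick sanity check, each metadata line splits into three pipe-separated fields, following the LJSpeech convention of id, raw transcription and normalized transcription (the column roles are my reading of the format, not taken from the report):

```python
line = ("imalavoglia_00_verga_f000053"
        "|Milano, diciannove gennaio mille ottocento ottantuno."
        "|Milano, diciannove gennaio mille ottocento ottantuno.")

wav_id, raw_text, norm_text = line.split("|")
print(wav_id)  # imalavoglia_00_verga_f000053
```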
Audio:
```python
audio_config = VitsAudioConfig(
    sample_rate=22050,
    win_length=1024,
    hop_length=256,
    num_mels=80,
    mel_fmin=0,
    mel_fmax=None,
)
```
Characters:
```python
character_config = CharactersConfig(
    characters_class="TTS.tts.models.vits.VitsCharacters",
    characters="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz1234567890àèìòùÀÈÌÒÙáéíóúÁÉÍÓÚî",
    punctuations=" !,.?-'",
    pad="<PAD>",
    eos="<EOS>",
    bos="<BOS>",
    blank="<BLNK>",
)
```
General config:
```python
config = VitsConfig(
    audio=audio_config,
    characters=character_config,
    run_name="vits_vctk",
    batch_size=16,
    eval_batch_size=4,
    num_loader_workers=4,
    num_eval_loader_workers=4,
    run_eval=True,
    test_delay_epochs=0,
    epochs=10,
    text_cleaner="multilingual_cleaners",
    use_phonemes=False,
    phoneme_language="it",
    phoneme_cache_path=os.path.join(output_path, "phoneme_cache"),
    compute_input_seq_cache=True,
    print_step=25,
    print_eval=False,
    save_best_after=1000,
    save_checkpoints=True,
    save_all_best=True,
    mixed_precision=True,
    max_text_len=250,
    output_path=output_path,
    datasets=[dataset_config],
    cudnn_benchmark=False,
    test_sentences=[
        "Qualcosa non va? Mi dispiace, hai voglia di parlarne a riguardo?",
        "Il mio nome è Remilia Scarlet. come posso aiutarti oggi?",
    ],
)
```
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
- TTS version: 0.22.0
- Python version: 3.10.9
- OS: Windows
- CUDA version: 11.8
- GPU: GTX 1650 with 4GB of VRAM
All the libraries were installed via pip command
```
### Additional context
Additionally, after a few days I tried to use espeak phonemes, but the trainer.fit() function gets stuck at the beginning with this output:
```
> EPOCH: 0/10
 --> /content/gdrive/MyDrive/tts/vits_vctk-October-09-2024_08+23PM-0000000
> DataLoader initialization
| > Tokenizer:
| > add_blank: True
| > use_eos_bos: False
| > use_phonemes: True
| > phonemizer:
| > phoneme language: it
| > phoneme backend: espeak
| > Number of instances : 5798
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:557: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  warnings.warn(_create_warning_msg(
> TRAINING (2024-10-09 20:23:45)
| > Preprocessing samples
| > Max text length: 167
| > Min text length: 12
| > Avg text length: 82.22473266643671
|
| > Max audio length: 183618.0
| > Min audio length: 24483.0
| > Avg audio length: 82634.87443946188
| > Num. instances discarded samples: 0
| > Batch group size: 0.
/usr/local/lib/python3.10/dist-packages/torch/functional.py:666: UserWarning: stft with return_complex=False is deprecated. In a future pytorch release, stft will return complex tensors for all inputs, and return_complex=False will raise an error.
Note: you can still call torch.view_as_real on the complex output to recover the old return format. (Triggered internally at ../aten/src/ATen/native/SpectralOps.cpp:873.)
  return _VF.stft(input, n_fft, hop_length, win_length, window,  # type: ignore[attr-defined]
```
| closed | 2024-10-09T20:29:06Z | 2024-12-28T11:58:24Z | https://github.com/coqui-ai/TTS/issues/4017 | [
"bug",
"wontfix"
] | iDavide | 6 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,280 | TypeError: melspectrogram() | Getting the mentioned error when running demo_cli.py (windows 10)
```
D:\Ai_audio\Real-Time-Voice-Cloning>python demo_cli.py
C:\Program Files\Python310\lib\site-packages\numpy\_distributor_init.py:30: UserWarning: loaded more than 1 DLL from .libs:
C:\Program Files\Python310\lib\site-packages\numpy\.libs\libopenblas.EL2C6PLE4ZYW3ECEVIV3OXXGRN2NRFM2.gfortran-win_amd64.dll
C:\Program Files\Python310\lib\site-packages\numpy\.libs\libopenblas64__v0.3.23-gcc_10_3_0.dll
warnings.warn("loaded more than 1 DLL from .libs:"
Arguments:
enc_model_fpath: saved_models\default\encoder.pt
syn_model_fpath: saved_models\default\synthesizer.pt
voc_model_fpath: saved_models\default\vocoder.pt
cpu: False
no_sound: False
seed: None
Running a test of your configuration...
Found 1 GPUs available. Using GPU 0 (NVIDIA GeForce RTX 3060) of compute capability 8.6 with 12.9Gb total memory.
Preparing the encoder, the synthesizer and the vocoder...
Loaded encoder "encoder.pt" trained to step 1564501
Synthesizer using device: cuda
Building Wave-RNN
Trainable Parameters: 4.481M
Loading model weights at saved_models\default\vocoder.pt
Testing your configuration with small inputs.
Testing the encoder...
Traceback (most recent call last):
File "D:\Ai_audio\Real-Time-Voice-Cloning\demo_cli.py", line 80, in <module>
encoder.embed_utterance(np.zeros(encoder.sampling_rate))
File "D:\Ai_audio\Real-Time-Voice-Cloning\encoder\inference.py", line 144, in embed_utterance
frames = audio.wav_to_mel_spectrogram(wav)
File "D:\Ai_audio\Real-Time-Voice-Cloning\encoder\audio.py", line 58, in wav_to_mel_spectrogram
frames = librosa.feature.melspectrogram(
TypeError: melspectrogram() takes 0 positional arguments but 2 positional arguments (and 2 keyword-only arguments) were given
```
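For context (this note is mine, not part of the original report): recent librosa releases made `melspectrogram`'s parameters keyword-only, which matches the "takes 0 positional arguments" message above. The mechanism, demonstrated with a hypothetical stand-in signature rather than librosa itself:

```python
# Hypothetical stand-in mirroring a keyword-only signature like newer librosa's:
def melspectrogram(*, y=None, sr=22050, n_fft=2048, hop_length=512, n_mels=128):
    return {"sr": sr, "n_mels": n_mels}

try:
    melspectrogram([0.0] * 16000, 16000)            # positional, like the old code
except TypeError as exc:
    print("positional call fails:", exc)

out = melspectrogram(y=[0.0] * 16000, sr=16000)     # keyword form works
print(out["sr"])
```

So pinning an older librosa, or changing the call in `encoder/audio.py` to pass `y=` and `sr=` by keyword, are the usual ways around this.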
| closed | 2024-01-03T19:41:48Z | 2024-01-03T19:46:17Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1280 | [] | stevens-Ai | 1 |
statsmodels/statsmodels | data-science | 8,951 | can I manually increase de number of iter, from 500 to 1000? or change the convergence criterion? | #### Is your feature request related to a problem? Please describe
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
#### Describe the solution you'd like
A clear and concise description of what you want to happen.
#### Describe alternatives you have considered
A clear and concise description of any alternative solutions or features you have considered.
#### Additional context
Add any other context about the feature request here. | closed | 2023-07-13T16:58:53Z | 2023-10-27T09:57:09Z | https://github.com/statsmodels/statsmodels/issues/8951 | [] | carloseduardosg | 0 |
cvat-ai/cvat | tensorflow | 8,354 | Custom model Fine tuned deployment - Using Hugging Face Sam Transformers | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
_No response_
### Expected Behavior
When I use my fine-tuned model it should generate masks; this works when choosing point coordinates in my Jupyter notebook. However, when I deploy my fine-tuned model in CVAT it shows me this:
This is from jupyter notebook - 
This is on cvat - 

In the function.yaml file, when I changed the `metadata: name` to match my custom model and also the `annotations: name:` to my custom name, I could not deploy my custom model in CVAT...
I read on this GitHub that the `metadata: name` and `annotations: name:` should stay as they are in the public model from CVAT..
My SAM weights for Hugging Face Transformers have a different architecture, and inference in the Jupyter notebook directly outputs probs and a mask...
Do I need to make my own .onnx model and change the client-side code to integrate the Hugging Face SAM model into CVAT for semi-automatic annotations?
### Possible Solution
_No response_
### Context
_No response_
### Environment
_No response_ | closed | 2024-08-27T11:29:23Z | 2024-11-14T19:08:24Z | https://github.com/cvat-ai/cvat/issues/8354 | [
"question"
] | venuss920 | 0 |
streamlit/streamlit | machine-learning | 10,017 | `st.segmented_control`: Add vertical option | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
The segmented_control buttons are great. I'm interested in creating a similar set of buttons but vertically arranged.
<img width="121" alt="Screenshot 2024-12-12 at 8 44 32 PM" src="https://github.com/user-attachments/assets/9afdd5e6-c8e2-4b3d-a849-159233f91e31" />
### Why?
I have a list of sections to render vertically. To change the order of the list items, I'm using two st.buttons stacked, but I like the aesthetic of the segmented control.
### How?
with kwarg:
```python
st.segmented_control(
label='foo',
options=[':material/arrow_upward:', ':material/arrow_downward:'],
vertical=True
)
```
or
allow a 2d array:
```python
st.segmented_control(
label='foo',
options=[
[':material/arrow_upward:'],
[':material/arrow_downward:']
]
)
```
### Additional Context
_No response_ | open | 2024-12-13T03:55:44Z | 2025-01-19T16:10:55Z | https://github.com/streamlit/streamlit/issues/10017 | [
"type:enhancement",
"feature:st.segmented_control"
] | olliepro | 3 |
plotly/dash-core-components | dash | 965 | [Bug] Upload component won't broken in Dash 1.20.0 | Hi,
I am trying to use the upload component in one of my apps. Recently upgraded to dash 1.20.0 and have found that the component doesn't work anymore. The debug server gives me an error `Cannot read property 'call' of undefined`. I have also tried running the examples at https://dash.plotly.com/dash-core-components/upload in a clean virtual environment with dash 1.20.0, pandas 1.2.4 and their associated dependencies installed and the examples also fail with the same error. Additionally I am using python 3.9.5 and Chrome 90.0.4430.93.
| open | 2021-05-06T21:00:35Z | 2021-05-11T18:48:10Z | https://github.com/plotly/dash-core-components/issues/965 | [] | NicholasChin | 1 |
dask/dask | numpy | 10,945 | Pandas read_sql vs dask read_sql issues | Hello guys,
Help here. The same command works on pandas but does not work on dask:
Pandas
```
import pandas as pd
sql = """SELECT t1.NR_SEQL_SLCT_CPR
FROM ORANDPOW0000.SLCT_CPR_PRD_PCR t1
WHERE ROWNUM <= 1000"""
pd.read_sql(sql = sql, con=oracle.uri, index_col = 'nr_seql_slct_cpr')
```
It works and returns the table (I don't know why I can't upload pictures here)
But if I try with dask, it does not find the target table.
```
import dask.dataframe as dd
sql = """SELECT t1.NR_SEQL_SLCT_CPR
FROM ORANDPOW0000.SLCT_CPR_PRD_PCR t1
WHERE ROWNUM <= 1000"""
dd.read_sql(sql = sql, con=oracle.uri, index_col = 'nr_seql_slct_cpr')
```
Got "NoSuchTableError"
```
---------------------------------------------------------------------------
NoSuchTableError                          Traceback (most recent call last)
Cell In[134], line 5
      1 import dask.dataframe as dd
      2 sql = """SELECT t1.NR_SEQL_SLCT_CPR
      3     FROM ORANDPOW0000.SLCT_CPR_PRD_PCR t1
      4     WHERE ROWNUM <= 1000"""
----> 5 dd.read_sql(sql = sql, con=oracle.uri, index_col = 'nr_seql_slct_cpr')

File c:\Users\F3164582\AppData\Local\Programs\Python\Python311\Lib\site-packages\dask\dataframe\io\sql.py:393, in read_sql(sql, con, index_col, **kwargs)
    360 """
    361 Read SQL query or database table into a DataFrame.
    362
    (...)
    390 read_sql_query : Read SQL query into a DataFrame.
    391 """
    392 if isinstance(sql, str):
--> 393     return read_sql_table(sql, con, index_col, **kwargs)
    394 else:
    395     return read_sql_query(sql, con, index_col, **kwargs)

File c:\Users\F3164582\AppData\Local\Programs\Python\Python311\Lib\site-packages\dask\dataframe\io\sql.py:314, in read_sql_table(table_name, con, index_col, divisions, npartitions, limits, columns, bytes_per_chunk, head_rows, schema, meta, engine_kwargs, **kwargs)
    312 m = sa.MetaData()
    313 if isinstance(table_name, str):
--> 314     table_name = sa.Table(table_name, m, autoload_with=engine, schema=schema)
...
File c:\Users\F3164582\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlalchemy\engine\reflection.py:1541
   1541 if _reflect_info.table_options:
NoSuchTableError: SELECT t1.NR_SEQL_SLCT_CPR
    FROM ORANDPOW0000.SLCT_CPR_PRD_PCR t1
    WHERE ROWNUM <= 1000
```
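Not part of the original report, but the traceback above shows the cause: dask's `read_sql` dispatches on the type of `sql`, and a plain string is forwarded to `read_sql_table`, so the whole SELECT is looked up as a table name, hence `NoSuchTableError`. A minimal stand-in reproduction of that dispatch (toy functions, not dask itself):

```python
def read_sql_table(table_name, con, index_col):
    return ("table path", table_name)

def read_sql_query(sql, con, index_col):
    return ("query path", sql)

def read_sql(sql, con, index_col):
    # Same branching as dask/dataframe/io/sql.py lines 392-395 in the traceback:
    if isinstance(sql, str):
        return read_sql_table(sql, con, index_col)
    return read_sql_query(sql, con, index_col)

path, arg = read_sql("SELECT 1 FROM t", "uri", "id")
print(path)  # -> table path: the whole SELECT string is treated as a table name
```

With real dask, a query apparently needs to go through `read_sql_query` with a SQLAlchemy selectable instead of a raw string, so that it takes the other branch.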
**Environment**:
- Dask version: '2023.11.0'
- Python version: 3.11.7
- Operating System: Windows 10
- Install method (conda, pip, source): PIP
| open | 2024-02-21T18:57:15Z | 2024-05-24T19:06:10Z | https://github.com/dask/dask/issues/10945 | [
"needs triage"
] | frbelotto | 3 |
marshmallow-code/marshmallow-sqlalchemy | sqlalchemy | 33 | Using marshmallow schema to restrict update fields. | I'm developing an api in Flask,
In my update functions I would like to restrict the fields that can be updated e.g. I don't want users to be able to change their email at the moment.
To achieve this I have set up a schema (UserSchema) with its fields restricted by a tuple (UserSchemaTypes.UPDATE_FIELDS). The tuple does not include email.
The problem I am having is that email is a required field for User rows in my database.
So when I create a User model object using the schema (`users_schema.load(user_json)`), an illegal object is added to the sqlalchemy session.
```
#schema to validate the posted fields against
users_schema = UserSchema(only=UserSchemaTypes.UPDATE_FIELDS)
#attempt to deserialize the posted json to a User model object using the schema
user_data = users_schema.load(user_json)
if not user_data.errors:  # update data passed validation
    user_update_obj = user_data.data
    User.update(user_id, vars(user_update_obj))
```
In my update function itself I then have to remove this illegal object from the session via db.session.expunge_all(), because if I do not, I receive an OperationalError.
```
@staticmethod
def update(p_id, data):
    db.session.expunge_all()  # hack I want to remove
    user = User.query.get(p_id)
    for k, v in data.iteritems():
        setattr(user, k, v)
    db.session.commit()
```
OperationalError received when db.session.expunge_all() is removed:
```
OperationalError: (raised as a result of Query-invoked autoflush; consider
using a session.no_autoflush block if this flush is occurring prematurely)
(_mysql_exceptions.OperationalError) (1048, "Column 'email' cannot be null") [SQL: u'INSERT INTO user (email, password, active, phone, current_login_at, last_login_at, current_login_ip, last_login_ip, login_count) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)'] [parameters: (None, None, 1, '0444', None, None, None, None, None)]
```
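One way around the half-built `User` entering the session (a sketch; the whitelist name and its fields below are hypothetical stand-ins for `UserSchemaTypes.UPDATE_FIELDS`) is to validate with the schema but apply the update from a plain dict of whitelisted keys, so no model object is ever constructed:

```python
# Hypothetical whitelist mirroring UserSchemaTypes.UPDATE_FIELDS (email excluded)
ALLOWED_UPDATE_FIELDS = {"phone", "active"}

def clean_update_payload(payload, allowed=ALLOWED_UPDATE_FIELDS):
    """Drop any keys that are not allowed to be updated."""
    return {k: v for k, v in payload.items() if k in allowed}

payload = {"email": "new@example.com", "phone": "0444", "active": True}
print(clean_update_payload(payload))  # {'phone': '0444', 'active': True}
```

Newer marshmallow versions also support `schema.load(data, partial=True)`, which skips required-field checks (like the email column) for update-style loads.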
| closed | 2015-10-09T13:08:42Z | 2015-10-20T09:15:42Z | https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/33 | [] | EdCampion | 2 |
saulpw/visidata | pandas | 2,165 | **DirSheet** makes changes without needing commit in v2.11 | ### Discussed in https://github.com/saulpw/visidata/discussions/2163
<div type='discussions-op-text'>
<sup>Originally posted by **proItheus** December 8, 2023</sup>
Any change in `dirsheet` is instantly committed to the filesystem, without my executing the `commit change` command.
[](https://asciinema.org/a/iEAmZYerrGDBdgYnkB3u8jDoI)
I'm not sure if it's a bug, or if there are some relevant options I missed?
The version is `v2.11.1`</div> | closed | 2023-12-08T19:37:27Z | 2023-12-08T19:56:20Z | https://github.com/saulpw/visidata/issues/2165 | [] | anjakefala | 1 |
jina-ai/serve | fastapi | 5,625 | Add to_docker_compose to Deployment | Add to_docker_compose to Deployment | closed | 2023-01-25T16:37:02Z | 2023-02-10T09:17:46Z | https://github.com/jina-ai/serve/issues/5625 | [] | alaeddine-13 | 0 |
ydataai/ydata-profiling | data-science | 1,719 | Bug Report - new release (4.13) breaks ProfileReport | ### Current Behaviour
```
pip install ydata-profiling
```
Then in Python:
```
from ydata_profiling import ProfileReport
```
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "[redacted]/lib/python3.8/site-packages/ydata_profiling/__init__.py", line 14, in <module>
from ydata_profiling.utils.information import display_banner
File "[redacted]/lib/python3.8/site-packages/ydata_profiling/utils/information.py", line 4, in <module>
from IPython.display import HTML, display
ModuleNotFoundError: No module named 'IPython'
```
### Expected Behaviour
Expected import to work
### Data Description
None
### Code that reproduces the bug
```Python
See description above
```
### pandas-profiling version
4.13
### Dependencies
```Text
annotated-types==0.7.0
attrs==25.1.0
certifi==2025.1.31
charset-normalizer==3.4.1
contourpy==1.1.1
cycler==0.12.1
dacite==1.9.2
fonttools==4.56.0
htmlmin==0.1.12
idna==3.10
ImageHash==4.3.1
importlib_metadata==8.5.0
importlib_resources==6.4.5
Jinja2==3.1.5
joblib==1.4.2
kiwisolver==1.4.7
llvmlite==0.41.1
MarkupSafe==2.1.5
matplotlib==3.7.5
multimethod==1.10
networkx==3.1
numba==0.58.1
numpy==1.24.4
packaging==24.2
pandas==2.0.3
patsy==1.0.1
phik==0.12.4
pillow==10.4.0
pydantic==2.10.6
pydantic_core==2.27.2
pyparsing==3.1.4
python-dateutil==2.9.0.post0
pytz==2025.1
PyWavelets==1.4.1
PyYAML==6.0.2
requests==2.32.3
scipy==1.10.1
seaborn==0.13.2
six==1.17.0
statsmodels==0.14.1
tqdm==4.67.1
typeguard==4.4.0
typing_extensions==4.12.2
tzdata==2025.1
urllib3==2.2.3
visions==0.7.6
wordcloud==1.9.4
ydata-profiling==4.13.0
zipp==3.20.2
```
### OS
Macos
### Checklist
- [x] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [x] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [x] The issue has not been resolved by the entries listed under [Common Issues](https://docs.profiling.ydata.ai/latest/support-contribution/contribution_guidelines/). | closed | 2025-03-05T16:19:06Z | 2025-03-11T15:16:03Z | https://github.com/ydataai/ydata-profiling/issues/1719 | [
"bug 🐛"
] | namiyousef | 2 |
LAION-AI/Open-Assistant | machine-learning | 2,690 | Clarify contributing frontend | This has picked up quite a bit recently: I've seen a lot of posts that seem to confuse the Chat section with the contributions section.
Example:

Loosely related, I've seen a massive uptick in synthetically generated content being contributed to the dataset. I personally think this could be curbed quite a bit if there were a more specific, obvious notice like "DO NOT USE CHATGPT OR OTHER LLMS IN RESPONSES".
For both of these issues, the underlying problem is ambiguity about exactly how to use the frontend when someone first logs in. I think the Messages and Chat icons are incredibly similar, and there isn't enough guidance on the webpage to explain exactly what's going on.
I think we should change the frontend so that it's much more direct in these areas.
"website",
"needs discussion"
] | luphoria | 0 |
ageitgey/face_recognition | python | 718 | How to mapping image encoding based on folder name | Hello,
I have a set of images for each person.
My image folder structure is like this:
```
Database/ Person1 / 1.jpg
Database/ Person1 / 2.jpg
Database/ Person1 / 3.jpg
Database/ Person1 / 4.jpg
....
Database/ Person9 / 1.jpg
Database/ Person9 / 2.jpg
Database/ Person9 / 3.jpg
Database/ Person9 / 4.jpg
```
How can I map each image encoding to its folder (person) name, given that I have multiple images per person?
Thanks.
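A common pattern (stdlib sketch below; the `face_recognition` calls at the end are the usual API but shown commented, since they need your real image files) is to map each folder name to its image paths first, then keep a list of encodings per person:

```python
import os
import tempfile
from collections import defaultdict

def collect_images_by_person(database_dir):
    """Return {person_folder_name: [image paths]} for Database/<Person>/<n>.jpg."""
    mapping = defaultdict(list)
    for person in sorted(os.listdir(database_dir)):
        person_dir = os.path.join(database_dir, person)
        if not os.path.isdir(person_dir):
            continue
        for fname in sorted(os.listdir(person_dir)):
            if fname.lower().endswith((".jpg", ".jpeg", ".png")):
                mapping[person].append(os.path.join(person_dir, fname))
    return dict(mapping)

# Quick demo on a throwaway folder:
demo = tempfile.mkdtemp()
os.makedirs(os.path.join(demo, "Person1"))
open(os.path.join(demo, "Person1", "1.jpg"), "w").close()
print(sorted(collect_images_by_person(demo)))  # ['Person1']

# With the mapping in hand, keep several encodings under the same person label
# (needs real image files, so left commented):
# import face_recognition
# known = {
#     person: [face_recognition.face_encodings(face_recognition.load_image_file(p))[0]
#              for p in paths]
#     for person, paths in collect_images_by_person("Database").items()
# }
```

At recognition time you would compare a probe encoding against every stored encoding (e.g. with `face_recognition.face_distance`) and report the person whose images match best.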
| open | 2019-01-15T12:57:49Z | 2019-01-17T11:40:58Z | https://github.com/ageitgey/face_recognition/issues/718 | [] | flyingduck92 | 1 |
scikit-optimize/scikit-optimize | scikit-learn | 678 | AttributeError: module 'skopt.callbacks' has no attribute 'CheckpointSaver' (dumping/loading results object) | Am I loading this incorrectly? It dumped without error but I'm not able to retrieve the results.
```python
skopt.__version__
'0.5.2'
```
```python
# Optimization
n_calls = 1000
kappa = 5.0
delta = 0.001
name = "forest.clustering"
callbacks = [skopt.callbacks.VerboseCallback(n_total=n_calls),
skopt.callbacks.CheckpointSaver(f"./bayesian_optimization/{name}.result.pkl"),
skopt.callbacks.DeltaYStopper(n_best=200, delta=delta)
]
res = skopt.forest_minimize(objective, dimensions=dimensions, random_state=random_state,
n_calls=n_calls, n_jobs=n_jobs, callback=callbacks, acq_func="LCB", kappa=kappa)
# Dumped results
skopt.dump(res, f"./bayesian_optimization/{name}.res.pkl.gz")
# Loaded results
res = skopt.load("./Data/Models/Bayesian_Clustering/forest.clustering.res.pkl.gz")
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<timed exec> in <module>()
~/anaconda/envs/python3/lib/python3.6/site-packages/skopt/utils.py in load(filename, **kwargs)
170 Reconstructed OptimizeResult instance.
171 """
--> 172 return load_(filename, **kwargs)
173
174
~/anaconda/envs/python3/lib/python3.6/site-packages/sklearn/externals/joblib/numpy_pickle.py in load(filename, mmap_mode)
576 return load_compatibility(fobj)
577
--> 578 obj = _unpickle(fobj, filename, mmap_mode)
579
580 return obj
~/anaconda/envs/python3/lib/python3.6/site-packages/sklearn/externals/joblib/numpy_pickle.py in _unpickle(fobj, filename, mmap_mode)
506 obj = None
507 try:
--> 508 obj = unpickler.load()
509 if unpickler.compat_mode:
510 warnings.warn("The file '%s' has been generated with a "
~/anaconda/envs/python3/lib/python3.6/pickle.py in load(self)
1048 raise EOFError
1049 assert isinstance(key, bytes_types)
-> 1050 dispatch[key[0]](self)
1051 except _Stop as stopinst:
1052 return stopinst.value
~/anaconda/envs/python3/lib/python3.6/pickle.py in load_global(self)
1336 module = self.readline()[:-1].decode("utf-8")
1337 name = self.readline()[:-1].decode("utf-8")
-> 1338 klass = self.find_class(module, name)
1339 self.append(klass)
1340 dispatch[GLOBAL[0]] = load_global
~/anaconda/envs/python3/lib/python3.6/pickle.py in find_class(self, module, name)
1390 return _getattribute(sys.modules[module], name)[0]
1391 else:
-> 1392 return getattr(sys.modules[module], name)
1393
1394 def load_reduce(self):
AttributeError: module 'skopt.callbacks' has no attribute 'CheckpointSaver'
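
# ---------------------------------------------------------------------------
# Editor's note (assumption): the AttributeError above is pickle resolving the
# checkpoint class by "module.attribute" name at load time, so it appears
# whenever the loading environment's skopt.callbacks does not define
# CheckpointSaver (e.g. a different skopt build). Self-contained illustration:
import pickle
import sys
import types

mod = types.ModuleType("fake_callbacks")
class Saver:
    pass
Saver.__module__ = "fake_callbacks"
Saver.__qualname__ = "Saver"
mod.Saver = Saver
sys.modules["fake_callbacks"] = mod
blob = pickle.dumps(Saver())

del mod.Saver  # simulate a module version without the class
try:
    pickle.loads(blob)
except AttributeError as exc:
    print(exc)  # module 'fake_callbacks' has no attribute 'Saver'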
``` | closed | 2018-05-14T17:58:50Z | 2019-03-18T20:36:38Z | https://github.com/scikit-optimize/scikit-optimize/issues/678 | [] | jolespin | 10 |
Lightning-AI/pytorch-lightning | deep-learning | 19,768 | Script freezes when Trainer is instantiated | ### Bug description
I can run a training script with pytorch-lightning once. However, after the training finishes, if I try to run it again, the code freezes when the `L.Trainer` is instantiated. There are no error messages.
Only if I shut down and restart can I run it once again, but then the problem persists the next time.
This happens to me with different scripts, even with the "lightning in 15 minutes" example.
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
```python
# Based on https://lightning.ai/docs/pytorch/stable/starter/introduction.html
import os
import torch
from torch import optim, nn, utils
from torchvision.datasets import MNIST
from torchvision.transforms import ToTensor
import pytorch_lightning as L
# define any number of nn.Modules (or use your current ones)
encoder = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 3))
decoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 28 * 28))
# define the LightningModule
class LitAutoEncoder(L.LightningModule):
def __init__(self, encoder, decoder):
super().__init__()
self.encoder = encoder
self.decoder = decoder
def training_step(self, batch, batch_idx):
# training_step defines the train loop.
# it is independent of forward
x, y = batch
x = x.view(x.size(0), -1)
        z = self.encoder(x)
        x_hat = self.decoder(z)
loss = nn.functional.mse_loss(x_hat, x)
# Logging to TensorBoard (if installed) by default
self.log("train_loss", loss)
        return loss
def configure_optimizers(self):
optimizer = optim.Adam(self.parameters(), lr=1e-3)
return optimizer
# init the autoencoder
autoencoder = LitAutoEncoder(encoder, decoder)
# setup data
dataset = MNIST(os.getcwd(), download=True, train=True, transform=ToTensor())
# use 20% of training data for validation
train_set_size = int(len(dataset) * 0.8)
valid_set_size = len(dataset) - train_set_size
seed = torch.Generator().manual_seed(42)
train_set, val_set = utils.data.random_split(dataset, [train_set_size, valid_set_size], generator=seed)
train_loader = utils.data.DataLoader(train_set, num_workers=15)
valid_loader = utils.data.DataLoader(val_set, num_workers=15)
print("Before instantiate Trainer")
# train the model (hint: here are some helpful Trainer arguments for rapid idea iteration)
trainer = L.Trainer(limit_train_batches=100, max_epochs=10, check_val_every_n_epoch=10, accelerator="gpu")
print("After instantiate Trainer")
```
### Error messages and logs
There are no error messages
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU:
- NVIDIA GeForce RTX 3080 Laptop GPU
- available: True
- version: 12.1
* Lightning:
- denoising-diffusion-pytorch: 1.5.4
- ema-pytorch: 0.2.1
- lightning-utilities: 0.11.2
- pytorch-fid: 0.3.0
- pytorch-lightning: 2.2.2
- torch: 2.2.2
- torchaudio: 2.2.2
- torchmetrics: 1.0.0
- torchvision: 0.17.2
* Packages:
- absl-py: 1.4.0
- accelerate: 0.17.1
- addict: 2.4.0
- aiohttp: 3.8.3
- aiosignal: 1.2.0
- antlr4-python3-runtime: 4.9.3
- anyio: 3.6.1
- appdirs: 1.4.4
- argon2-cffi: 21.3.0
- argon2-cffi-bindings: 21.2.0
- array-record: 0.4.0
- arrow: 1.2.3
- astropy: 5.2.1
- asttokens: 2.0.8
- astunparse: 1.6.3
- async-timeout: 4.0.2
- attrs: 23.1.0
- auditwheel: 5.4.0
- babel: 2.10.3
- backcall: 0.2.0
- beautifulsoup4: 4.11.1
- bleach: 5.0.1
- blinker: 1.6.2
- bqplot: 0.12.40
- branca: 0.6.0
- build: 1.2.1
- cachetools: 5.2.0
- carla: 0.9.14
- certifi: 2024.2.2
- cffi: 1.15.1
- chardet: 5.1.0
- charset-normalizer: 2.1.1
- click: 8.1.3
- click-plugins: 1.1.1
- cligj: 0.7.2
- cloudpickle: 3.0.0
- cmake: 3.26.1
- colossus: 1.3.1
- colour: 0.1.5
- contourpy: 1.0.7
- cycler: 0.11.0
- cython: 0.29.32
- dacite: 1.8.1
- dask: 2023.3.1
- dataclass-array: 1.4.1
- debugpy: 1.6.3
- decorator: 4.4.2
- deepspeed: 0.7.2
- defusedxml: 0.7.1
- denoising-diffusion-pytorch: 1.5.4
- deprecation: 2.1.0
- dill: 0.3.6
- distlib: 0.3.6
- dm-tree: 0.1.8
- docker-pycreds: 0.4.0
- docstring-parser: 0.15
- einops: 0.6.0
- einsum: 0.3.0
- ema-pytorch: 0.2.1
- etils: 1.3.0
- exceptiongroup: 1.2.0
- executing: 1.0.0
- farama-notifications: 0.0.4
- fastjsonschema: 2.16.1
- filelock: 3.8.0
- fiona: 1.9.3
- flask: 2.3.3
- flatbuffers: 24.3.25
- folium: 0.14.0
- fonttools: 4.37.1
- frozenlist: 1.3.1
- fsspec: 2022.8.2
- future: 1.0.0
- fvcore: 0.1.5.post20221221
- gast: 0.4.0
- gdown: 4.7.1
- geojson: 3.0.1
- geopandas: 0.12.2
- gitdb: 4.0.11
- gitpython: 3.1.43
- google-auth: 2.16.2
- google-auth-oauthlib: 0.4.6
- google-pasta: 0.2.0
- googleapis-common-protos: 1.63.0
- googledrivedownloader: 0.4
- gputil: 1.4.0
- gpxpy: 1.5.0
- grpcio: 1.62.1
- gunicorn: 20.0.4
- gym: 0.26.2
- gym-notices: 0.0.8
- gymnasium: 0.28.1
- h5py: 3.7.0
- haversine: 2.8.0
- hdf5plugin: 4.1.1
- hjson: 3.1.0
- humanfriendly: 10.0
- idna: 3.6
- imageio: 2.31.3
- imageio-ffmpeg: 0.4.7
- immutabledict: 2.2.0
- importlib-metadata: 4.12.0
- importlib-resources: 6.1.0
- imutils: 0.5.4
- invertedai: 0.0.8.post1
- iopath: 0.1.10
- ipyevents: 2.0.2
- ipyfilechooser: 0.6.0
- ipykernel: 6.15.3
- ipyleaflet: 0.17.4
- ipython: 8.5.0
- ipython-genutils: 0.2.0
- ipytree: 0.2.2
- ipywidgets: 8.0.2
- itsdangerous: 2.1.2
- jax-jumpy: 1.0.0
- jedi: 0.18.1
- jinja2: 3.1.2
- joblib: 1.4.0
- jplephem: 2.19
- json5: 0.9.10
- jsonargparse: 4.15.0
- jsonschema: 4.19.1
- jsonschema-specifications: 2023.7.1
- jstyleson: 0.0.2
- julia: 0.6.1
- jupyter: 1.0.0
- jupyter-client: 7.3.5
- jupyter-console: 6.4.4
- jupyter-core: 4.11.1
- jupyter-packaging: 0.12.3
- jupyter-server: 1.18.1
- jupyterlab: 3.4.7
- jupyterlab-pygments: 0.2.2
- jupyterlab-server: 2.15.1
- jupyterlab-widgets: 3.0.3
- keras: 2.11.0
- kiwisolver: 1.4.4
- lanelet2: 1.2.1
- lark: 1.1.9
- lazy-loader: 0.2
- leafmap: 0.27.0
- libclang: 14.0.6
- lightning-utilities: 0.11.2
- lit: 16.0.0
- llvmlite: 0.39.1
- locket: 1.0.0
- lunarsky: 0.2.1
- lxml: 4.9.1
- lz4: 4.3.3
- markdown: 3.4.1
- markdown-it-py: 2.2.0
- markupsafe: 2.1.1
- matplotlib: 3.6.1
- matplotlib-inline: 0.1.6
- mdurl: 0.1.2
- mistune: 2.0.4
- moviepy: 1.0.3
- mpi4py: 3.1.3
- mpmath: 1.3.0
- msgpack: 1.0.8
- multidict: 6.0.2
- munch: 2.5.0
- natsort: 8.2.0
- nbclassic: 0.4.3
- nbclient: 0.6.8
- nbconvert: 7.0.0
- nbformat: 5.5.0
- nest-asyncio: 1.5.5
- networkx: 2.8.6
- ninja: 1.10.2.3
- notebook: 6.4.12
- notebook-shim: 0.1.0
- numba: 0.56.4
- numpy: 1.24.4
- nvidia-cublas-cu11: 11.10.3.66
- nvidia-cublas-cu12: 12.1.3.1
- nvidia-cuda-cupti-cu11: 11.7.101
- nvidia-cuda-cupti-cu12: 12.1.105
- nvidia-cuda-nvrtc-cu11: 11.7.99
- nvidia-cuda-nvrtc-cu12: 12.1.105
- nvidia-cuda-runtime-cu11: 11.7.99
- nvidia-cuda-runtime-cu12: 12.1.105
- nvidia-cudnn-cu11: 8.5.0.96
- nvidia-cudnn-cu12: 8.9.2.26
- nvidia-cufft-cu11: 10.9.0.58
- nvidia-cufft-cu12: 11.0.2.54
- nvidia-curand-cu11: 10.2.10.91
- nvidia-curand-cu12: 10.3.2.106
- nvidia-cusolver-cu11: 11.4.0.1
- nvidia-cusolver-cu12: 11.4.5.107
- nvidia-cusparse-cu11: 11.7.4.91
- nvidia-cusparse-cu12: 12.1.0.106
- nvidia-nccl-cu11: 2.14.3
- nvidia-nccl-cu12: 2.19.3
- nvidia-nvjitlink-cu12: 12.4.127
- nvidia-nvtx-cu11: 11.7.91
- nvidia-nvtx-cu12: 12.1.105
- oauthlib: 3.2.2
- omegaconf: 2.3.0
- open-humans-api: 0.2.9
- opencv-python: 4.6.0.66
- openexr: 1.3.9
- opt-einsum: 3.3.0
- osmnx: 1.2.2
- p5py: 1.0.0
- packaging: 21.3
- pandas: 1.5.3
- pandocfilters: 1.5.0
- parso: 0.8.3
- partd: 1.4.1
- pep517: 0.13.0
- pickleshare: 0.7.5
- pillow: 9.2.0
- pint: 0.21.1
- pip: 24.0
- pkgconfig: 1.5.5
- pkgutil-resolve-name: 1.3.10
- platformdirs: 2.5.2
- plotly: 5.13.1
- plyfile: 0.8.1
- portalocker: 2.8.2
- powerbox: 0.7.1
- prettymapp: 0.1.0
- proglog: 0.1.10
- prometheus-client: 0.14.1
- promise: 2.3
- prompt-toolkit: 3.0.31
- protobuf: 3.19.6
- psutil: 5.9.2
- ptyprocess: 0.7.0
- pure-eval: 0.2.2
- py-cpuinfo: 8.0.0
- pyarrow: 10.0.0
- pyasn1: 0.4.8
- pyasn1-modules: 0.2.8
- pycocotools: 2.0
- pycosat: 0.6.3
- pycparser: 2.21
- pydantic: 1.10.9
- pydeprecate: 0.3.1
- pydub: 0.25.1
- pyelftools: 0.30
- pyerfa: 2.0.0.1
- pyfftw: 0.13.1
- pygame: 2.1.2
- pygments: 2.13.0
- pylians: 0.7
- pyparsing: 3.0.9
- pyproj: 3.5.0
- pyproject-hooks: 1.0.0
- pyquaternion: 0.9.9
- pyrsistent: 0.18.1
- pyshp: 2.3.1
- pysocks: 1.7.1
- pysr: 0.16.3
- pystac: 1.8.4
- pystac-client: 0.7.5
- python-box: 7.1.1
- python-dateutil: 2.8.2
- pytorch-fid: 0.3.0
- pytorch-lightning: 2.2.2
- pytz: 2022.2.1
- pywavelets: 1.4.1
- pyyaml: 6.0
- pyzmq: 23.2.1
- qtconsole: 5.3.2
- qtpy: 2.2.0
- ray: 2.10.0
- referencing: 0.30.2
- requests: 2.31.0
- requests-oauthlib: 1.3.1
- rich: 13.3.4
- rpds-py: 0.10.3
- rsa: 4.9
- rtree: 1.0.1
- ruamel.yaml: 0.17.21
- ruamel.yaml.clib: 0.2.7
- scikit-build-core: 0.8.2
- scikit-image: 0.20.0
- scikit-learn: 1.2.2
- scipy: 1.8.1
- scooby: 0.7.4
- seaborn: 0.12.2
- send2trash: 1.8.0
- sentry-sdk: 1.44.1
- setproctitle: 1.3.3
- setuptools: 67.6.0
- shapely: 1.8.0
- shellingham: 1.5.4
- six: 1.16.0
- sklearn: 0.0.post1
- smmap: 5.0.1
- sniffio: 1.3.0
- soupsieve: 2.3.2.post1
- spiceypy: 6.0.0
- stack-data: 0.5.0
- stravalib: 1.4
- swagger-client: 1.0.0
- sympy: 1.11.1
- tabulate: 0.9.0
- taichi: 1.5.0
- tenacity: 8.2.3
- tensorboard: 2.11.2
- tensorboard-data-server: 0.6.1
- tensorboard-plugin-wit: 1.8.1
- tensorboardx: 2.6.2.2
- tensorflow: 2.11.0
- tensorflow-addons: 0.21.0
- tensorflow-datasets: 4.9.0
- tensorflow-estimator: 2.11.0
- tensorflow-graphics: 2021.12.3
- tensorflow-io-gcs-filesystem: 0.29.0
- tensorflow-metadata: 1.13.0
- tensorflow-probability: 0.19.0
- termcolor: 2.1.1
- terminado: 0.15.0
- threadpoolctl: 3.1.0
- tifffile: 2023.3.21
- timm: 0.4.12
- tinycss2: 1.1.1
- toml: 0.10.2
- tomli: 2.0.1
- tomlkit: 0.11.4
- toolz: 0.12.1
- torch: 2.2.2
- torchaudio: 2.2.2
- torchmetrics: 1.0.0
- torchvision: 0.17.2
- tornado: 6.2
- tqdm: 4.66.2
- tr: 1.0.0.2
- trafficgen: 0.0.0
- traitlets: 5.4.0
- traittypes: 0.2.1
- trimesh: 4.3.0
- triton: 2.2.0
- typeguard: 2.13.3
- typer: 0.12.2
- typing-extensions: 4.11.0
- urllib3: 1.26.15
- virtualenv: 20.16.5
- visu3d: 1.5.1
- wandb: 0.16.5
- waymo-open-dataset-tf-2-11-0: 1.6.1
- wcwidth: 0.2.5
- webencodings: 0.5.1
- websocket-client: 1.4.1
- werkzeug: 2.3.7
- wheel: 0.37.1
- whitebox: 2.3.1
- whiteboxgui: 2.3.0
- widgetsnbextension: 4.0.3
- wrapt: 1.14.1
- xyzservices: 2023.7.0
- yacs: 0.1.8
- yapf: 0.30.0
- yarl: 1.8.1
- zipp: 3.8.1
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.8.19
- release: 5.15.0-102-generic
- version: #112~20.04.1-Ubuntu SMP Thu Mar 14 14:28:24 UTC 2024
</details>
### More info
_No response_ | closed | 2024-04-12T14:11:34Z | 2024-06-22T22:46:07Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19768 | [
"question"
] | PabloVD | 5 |
vitalik/django-ninja | pydantic | 405 | Custom ninja HttpRequest class, that includes an 'auth' property | **Is your feature request related to a problem? Please describe.**
All examples I saw use `django.http.HttpRequest` to annotate the request object in operations.
This is a problem when using authentication: I need to work with `request.auth`, but that property is not defined on the Django HttpRequest class, which breaks static type checking (Pyright in VS Code, for example, complains about an unknown attribute).
I scanned through the ninja codebase to find another request class to use for type hinting, but found only this:
`request.auth = result # type: ignore`
**Describe the solution you'd like**
It would be nice to have a ninja-specific request class to use for type hinting that will include an `auth` property. And maybe it could be used for other stuff in the future as well.
| closed | 2022-03-26T01:26:44Z | 2023-10-13T14:37:03Z | https://github.com/vitalik/django-ninja/issues/405 | [] | geeshta | 3 |
open-mmlab/mmdetection | pytorch | 12,055 | Quantization Aware Training | Are there any instructions on how to do quantization-aware training (QAT) in mmdetection? | open | 2024-12-02T07:24:44Z | 2024-12-10T01:59:58Z | https://github.com/open-mmlab/mmdetection/issues/12055 | [] | HanXuMartin | 2 |
pallets-eco/flask-wtf | flask | 173 | Can't disable csrf per form | Just tried something like this:
```
class MyForm(Form):
class Meta:
csrf = False
```
And a csrf not present error was thrown. I need to disable csrf so that my GET form can be accessed from outside with query parameters directly.
| closed | 2015-02-22T19:00:04Z | 2021-05-28T01:03:52Z | https://github.com/pallets-eco/flask-wtf/issues/173 | [
"todo"
] | italomaia | 5 |
keras-team/keras | machine-learning | 20,437 | Add `ifft2` method to ops | I'm curious why there is no `ops.ifft2`. Given that there are already `fft` and `fft2`, implementing one is trivial.
Here is an example of what an `ifft2` would look like:
```python
import keras
from keras import ops

def keras_ops_ifft2(fft_real, fft_imag):
"""
    Inputs are the real and imaginary parts of an array,
    each of shape [...,H,W], where the last two
    dimensions correspond to the image dimensions.
Returns tuple containing the real and imaginary parts
of the ifft2
Test:
from keras import ops
X = np.random.rand( 1,1,11,11 ).astype(np.float32)
X_real,X_imag = ops.real(X), ops.imag(X)
X_fft_real,X_fft_imag = keras.ops.fft2((X_real,X_imag))
X_recon,_ = keras_ops_ifft2(X_fft_real,X_fft_imag)
np.allclose(X,X_recon,atol=1e-6)
"""
H = ops.cast(ops.shape(fft_real)[-2],'float32') # height
W = ops.cast(ops.shape(fft_real)[-1],'float32') # width
# Conjugate the input
real_conj, imag_conj = fft_real, -fft_imag
# Compute FFT of conjugate
fft = ops.fft2((real_conj, imag_conj))
return fft[0] / (H*W), -fft[1] / (H*W)
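
# ---------------------------------------------------------------------------
# Editor's check (NumPy only; independent of Keras): the sketch above relies
# on the standard conjugation identity
#     ifft2(X) == conj(fft2(conj(X))) / (H * W)
import numpy as np
_X = np.random.rand(4, 4) + 1j * np.random.rand(4, 4)
_lhs = np.conj(np.fft.fft2(np.conj(_X))) / _X.size
assert np.allclose(_lhs, np.fft.ifft2(_X))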
``` | closed | 2024-11-01T18:32:58Z | 2024-11-05T00:14:56Z | https://github.com/keras-team/keras/issues/20437 | [
"stat:contributions welcome",
"type:feature"
] | markveillette | 1 |
lucidrains/vit-pytorch | computer-vision | 109 | DINO training... getting it to work? | Has anyone been able to get DINO to converge? I'm running the example code given and not seeing anything happening. Is there a set of hyperparameters that works for something like ImageNet?
widgetti/solara | flask | 633 | Documentation getting started introduction links lead to 404 not found | Hi,
Take note I just started looking at solara today. So total noob.
I was going through the [Introduction](https://github.com/widgetti/solara/blob/master/solara/website/pages/documentation/getting_started/content/01-introduction.md) and found most of the links are not working.
Is this the best place to start?
Kind regards | closed | 2024-05-06T14:06:28Z | 2024-05-07T15:14:33Z | https://github.com/widgetti/solara/issues/633 | [] | HugoP | 3 |
fugue-project/fugue | pandas | 128 | [FEATURE] Limit and Limit by Partition | Implement a new method/transformer to limit the number of rows.
Look at the Spark documentation: https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.DataFrame.limit
Pandas doesn't have one. Maybe the backend can use df.head() or df.sample if we want it to be random. | closed | 2020-12-19T17:00:28Z | 2021-01-11T04:12:16Z | https://github.com/fugue-project/fugue/issues/128 | [
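A minimal pandas-backed sketch of the semantics (the function name and signature here are hypothetical):

```python
import pandas as pd

def limit(df: pd.DataFrame, n: int, random: bool = False, seed=None) -> pd.DataFrame:
    """Take the first n rows, or a random sample of n rows when random=True."""
    n = min(n, len(df))
    return df.sample(n=n, random_state=seed) if random else df.head(n)

df = pd.DataFrame({"a": range(10)})
print(len(limit(df, 3)), len(limit(df, 3, random=True, seed=0)))  # 3 3
```

Limit by partition would then be a groupby-apply of the same function per partition key.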
"enhancement",
"high priority",
"programming interface",
"core feature"
] | kvnkho | 1 |
cobrateam/splinter | automation | 1,327 | can't control edge with debuggerAddress | Splinter can't control an Edge browser that was opened with `debuggerAddress`.
Line 57 in `splinter\driver\webdriver\edge.py` reads `options = Options() or options`;
maybe it should be `options = options or Options()`.
Test code (this opens a new Edge window, which is not what we expected):
```
from selenium.webdriver.edge.options import Options
edge_options = Options()
edge_options.add_experimental_option("debuggerAddress", "127.0.0.1:9223")
browser = Browser('edge',options=edge_options)
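
# Editor's note: in the library, `Options() or options` always short-circuits
# on the freshly created (truthy) Options object, discarding the caller's
# options, so the debuggerAddress above never reaches the driver.
# Quick demo of the `or` semantics:
_passed = object()
assert (object() or _passed) is not _passed   # a new object always wins
assert (_passed or object()) is _passed       # the fix keeps the passed object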
``` | open | 2025-02-11T08:15:45Z | 2025-02-11T08:15:45Z | https://github.com/cobrateam/splinter/issues/1327 | [] | chengair | 0 |
jeffknupp/sandman2 | rest-api | 25 | Flask-admin version | We seem to be using an old flask-admin version; is this intentional? The admin page looks very strange, and I wonder if I am using a vastly newer version, with regressions.
Jeff, which version are you on?
| closed | 2016-01-25T16:46:57Z | 2016-01-26T13:49:30Z | https://github.com/jeffknupp/sandman2/issues/25 | [] | filmackay | 2 |
newpanjing/simpleui | django | 475 | Setting two elements on the same line in fields has no effect | **Bug description**
Briefly describe the bug encountered:
I want to place two text fields on the same line with fields = (('t1', 't2'), 't3'),
but it doesn't take effect.
**Steps to reproduce**
1. Default Django admin theme, using fields = (('t1', 't2'), 't3'); the result is:

This is also the effect I want to achieve.
2. SimpleUI theme, using the same fields; the result is:

The two text fields are not shown on the same line, and they also sit quite close to each other.
3. SimpleUI theme, without using fields:

Displays normally.
**Environment**
1. Operating System: (Windows/Linux/MacOS)....
2. Python Version: 3.10.6
3. Django Version: 4.2.4
4. SimpleUI Version: 2023.3.1
| open | 2023-10-10T09:30:07Z | 2025-03-17T10:37:59Z | https://github.com/newpanjing/simpleui/issues/475 | [
"bug"
] | Bisns | 2 |
ARM-DOE/pyart | data-visualization | 1,535 | Ensure Xradar Data Model is Consistent in Py-ART | ### Description
Currently, the xradar --> Py-ART bridge requires fields not in the xradar data model
see https://github.com/openradar/xradar/issues/164
### What I Did
Tried using the Py-ART Xradar object with other datasets
| closed | 2024-03-25T13:38:41Z | 2024-04-05T15:54:33Z | https://github.com/ARM-DOE/pyart/issues/1535 | [
"Bug"
] | mgrover1 | 1 |
sinaptik-ai/pandas-ai | data-visualization | 591 | Code not showing on Databricks notebook | ### 🐛 Describe the bug
Hi, you can see in the image below that when I try to use `show_code = True`, there is no code shown in Databricks.
How could I solve this?
Thanks
Francesco

| closed | 2023-09-25T09:07:22Z | 2023-09-30T20:32:25Z | https://github.com/sinaptik-ai/pandas-ai/issues/591 | [] | FrancescoRettondini | 6 |
geopandas/geopandas | pandas | 2,540 | BUG: no matching CRS warning for overlay | When using `geopandas.overlay`, no warning is issued when the data frames do not use the same CRS.
```py
import dask_geopandas
import geopandas
import libpysal.examples as exp
NYC = exp.load_example('NYC Education')
NYC.get_file_list()
df1 = geopandas.read_file('NYC_2000Census.shp')
# NOTE: maybe the file location needs to be changed here
df2 = geopandas.read_file(
geopandas.datasets.get_path("nybb")
)
df2 = df2.to_crs(4326)
res = geopandas.overlay(df1, df2)
```
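For comparison, a sketch of the kind of guard one might expect `overlay` to perform (pure Python, with CRS values as plain strings here; the function name is hypothetical):

```python
import warnings

def check_crs_match(left_crs, right_crs):
    """Warn and return False when both CRS are set but differ."""
    if left_crs is not None and right_crs is not None and left_crs != right_crs:
        warnings.warn(f"CRS mismatch: {left_crs!r} vs {right_crs!r}")
        return False
    return True

print(check_crs_match("EPSG:2263", "EPSG:4326"))  # False (and a UserWarning)
print(check_crs_match("EPSG:4326", "EPSG:4326"))  # True
```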
| open | 2022-08-27T15:32:46Z | 2022-08-27T18:55:49Z | https://github.com/geopandas/geopandas/issues/2540 | [
"bug",
"needs triage"
] | slumnitz | 1 |
joerick/pyinstrument | django | 241 | Incorrect Session.program when running a module | Due to `sys.argv` modifications (in `__main__.py`?), the program name is stored incorrectly in `Session.program`:
actual output:
```console
$ pyinstrument -m mpi4py t.py -a
Program: mpi4py -a
```
expected output:
```console
$ pyinstrument -m mpi4py t.py -a
Program: mpi4py t.py -a
``` | closed | 2023-05-03T21:33:28Z | 2023-08-04T10:02:59Z | https://github.com/joerick/pyinstrument/issues/241 | [] | matthiasdiener | 3 |
autokey/autokey | automation | 60 | python3-xlib not found | Hi, after the most recent update 0.93.8-1 (from 0.93.7-1), autokey-gtk will no longer run. This is under Linux Mint 18.1/Ubuntu 16.04. Source is ppa:troxor/autokey
Autokey is a tremendous resource. Thank you for any help.
Traceback follows:
Traceback (most recent call last):
File "/usr/bin/autokey", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2927, in <module>
@_call_aside
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2913, in _call_aside
f(*args, **kwargs)
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2940, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 635, in _build_master
ws.require(__requires__)
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 943, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 829, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'python3-xlib' distribution was not found and is required by autokey
| closed | 2017-01-09T18:15:31Z | 2018-01-17T15:04:58Z | https://github.com/autokey/autokey/issues/60 | [
"bug"
] | ghost | 5 |
marimo-team/marimo | data-visualization | 4,139 | MultiIndex not shown correctly in mo.ui.table | ### Describe the bug
When I pass a multi-indexed pandas DataFrame into `marimo.ui.table` or `marimo.ui.dataframe`, the inner levels of the index are not shown correctly: the outermost index value is repeated multiple times instead. See the screenshot below.
### Environment
<details>
```
{
"marimo": "0.11.21",
"OS": "Darwin",
"OS Version": "24.3.0",
"Processor": "arm",
"Python Version": "3.11.11",
"Binaries": {
"Browser": "134.0.6998.89",
"Node": "--"
},
"Dependencies": {
"click": "8.1.8",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.31.0",
"packaging": "24.2",
"psutil": "7.0.0",
"pygments": "2.19.1",
"pymdown-extensions": "10.14.3",
"pyyaml": "6.0.2",
"ruff": "0.11.0",
"starlette": "0.46.1",
"tomlkit": "0.13.2",
"typing-extensions": "4.12.2",
"uvicorn": "0.34.0",
"websockets": "15.0.1"
},
"Optional Dependencies": {
"pandas": "2.2.3",
"polars": "1.24.0"
},
"Experimental Flags": {}
}
```
</details>
### Code to reproduce
```python
import marimo as mo
import pandas as pd
df = pd.concat({"a": pd.DataFrame({"foo":[1]}, index=["hello"]), "b": pd.DataFrame({"baz": [2.0]}, index=['world'])})
print(df)
mo.ui.table(df)
```
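As a stopgap until the table renders inner index levels, flattening the MultiIndex into ordinary columns first displays fine (a sketch using only pandas; `level_0`/`level_1` are pandas' default names for unnamed levels):

```python
import pandas as pd

df = pd.concat({
    "a": pd.DataFrame({"foo": [1]}, index=["hello"]),
    "b": pd.DataFrame({"baz": [2.0]}, index=["world"]),
})
flat = df.reset_index()  # surfaces both MultiIndex levels as ordinary columns
print(list(flat.columns))  # ['level_0', 'level_1', 'foo', 'baz']
```

Passing `flat` to `mo.ui.table` then shows every level.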
<img width="1103" alt="Image" src="https://github.com/user-attachments/assets/333f2ed4-0ddb-4430-8f8c-e8c5e9c5af29" /> | closed | 2025-03-18T01:39:15Z | 2025-03-24T08:56:51Z | https://github.com/marimo-team/marimo/issues/4139 | [
"bug"
] | andy-lz | 1 |
mckinsey/vizro | plotly | 315 | Research and execute on docs to address plotly FAQs | We routinely tackle questions about plotly and it would be useful to have some standard answers (e.g. FAQs) or a document that guides readers. We may also be able to better link through to plotly docs in our content.
## Task
1. Make a list of common queries and answer these as standard FAQ. This may go into docs or maybe just as a pinned post on our (internal) slack channel or elsewhere (e.g. in our repo).
2. Investigate plotly docs and see if we can better build on them to help our users understand their content.
3. Extension task: Research if there is any scope for a contribution back to plotly e.g. if we spot a way to improve their docs nav/discoverability? | closed | 2024-02-15T12:49:08Z | 2025-01-14T09:42:40Z | https://github.com/mckinsey/vizro/issues/315 | [
"Docs :spiral_notepad:"
] | stichbury | 1 |
sktime/sktime | scikit-learn | 7,725 | [BUG] post-fix issue shapelet transform: failing `test_st_on_unit_test` | `test_st_on_unit_test` is failing after the merge of https://github.com/sktime/sktime/pull/7499, on some PRs.
This did not seem to fail in #7499 itself, so it may be sporadic, but it is most likely connected to that PR.
After the fix, https://github.com/sktime/sktime/pull/7726 should be reverted.
FYI @fnhirwa. | open | 2025-01-30T20:24:26Z | 2025-01-30T20:27:38Z | https://github.com/sktime/sktime/issues/7725 | [
"bug",
"module:transformations"
] | fkiraly | 0 |
erdewit/ib_insync | asyncio | 48 | coroutine 'Watchdog.watchAsync' was never awaited | ```
/usr/lib/python3.6/socketserver.py:544: RuntimeWarning:
coroutine 'Watchdog.watchAsync' was never awaited
```
ib_insync 0.9.3
I am getting this warning when using new `Watchdog`. It uses one of 3 `IB`s that are connected all the time, to keep TWS alive with ib-controller.
`Watchdog` and other 2 `IB`s are run in separate `threading.Thread` that is the live algorithm using those clients.
First `IBController` is created:
```
self.controller = IBController(APP='GATEWAY', # 'TWS' or 'GATEWAY'
TWS_MAJOR_VRSN=config.get('TWSMajorVersion'),
TRADING_MODE=config.get('TWSTradingMode'),
IBC_INI=config.get('IBCIniPath'),
IBC_PATH=config.get('IBCPath'),
TWS_PATH=config.get('TWSPath'),
LOG_PATH=config.get('IBCLogPath'),
TWSUSERID='',
TWSPASSWORD='',
JAVA_PATH='',
TWS_CONFIG_PATH='')
```
Then `Watchdog` is initialized and started:
```
super().__init__(controller=self.controller,
host='127.0.0.1',
port='4002',
clientId=self.client_id,
connectTimeout=connect_timeout,
appStartupTime=app_startup_time,
appTimeout=app_timeout,
retryDelay=retry_delay)
super().start()
```
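One thing worth checking (my assumption, not verified against ib_insync internals): a plain `threading.Thread` has no running asyncio event loop, so coroutines the library schedules there are never awaited. Each worker thread needs its own loop, along these lines:

```python
import asyncio
import threading

results = []

async def watch():
    # stand-in for a library coroutine such as Watchdog.watchAsync
    await asyncio.sleep(0)
    results.append("watched")

def thread_main():
    # create and install an event loop for this thread, then
    # actually run the coroutine on it so it gets awaited
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        loop.run_until_complete(watch())
    finally:
        loop.close()

t = threading.Thread(target=thread_main)
t.start()
t.join()
print(results)  # ['watched']
```

With a loop installed and running inside the thread, scheduled coroutines actually get driven instead of triggering the "never awaited" warning.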
Is there anything else that I have to do? | closed | 2018-03-02T14:28:34Z | 2018-05-16T11:14:34Z | https://github.com/erdewit/ib_insync/issues/48 | [] | radekwlsk | 6 |
scikit-hep/awkward | numpy | 3,129 | Long time to error on incompatible shapes in numpy-broadcasting | ### Version of Awkward Array
2.6.4
### Description and code to reproduce
When attempting to incorrectly broadcast regular arrays (which follow right-justified shape broadcast semantics as opposed to left-justified for ragged arrays), I get an error as I should
```python
import awkward as ak
import numpy as np
n = 10
a = ak.zip({"a": np.ones((n, 3))}, depth_limit=1)
a.a * np.ones(n) # raises ValueError: cannot broadcast RegularArray of size 3 with RegularArray of size 10 in multiply
```
Great! But there is some very non-linear scaling with how long it takes to raise this error with n:
```python
import time
import awkward as ak
import numpy as np
for n in np.geomspace(1000, 100_000, 10).astype(int):
a = ak.zip({"a": np.ones((n, 3))}, depth_limit=1)
tic = time.monotonic()
try:
a.a * np.ones(n)
except ValueError:
pass
toc = time.monotonic()
print(f"Took {toc-tic:.4f}s for {n=}")
```
produces
```
Took 0.0022s for n=1000
Took 0.0102s for n=1668
Took 0.0142s for n=2782
Took 0.0401s for n=4641
Took 0.1293s for n=7742
Took 0.3004s for n=12915
Took 0.7906s for n=21544
Took 2.2169s for n=35938
Took 9.6671s for n=59948
Took 58.6853s for n=100000
```
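For comparison, plain NumPy rejects the equivalent right-justified shape mismatch in constant time, so the slowdown above presumably comes from Awkward's broadcasting machinery rather than the shape check itself (illustrative sketch, not from the original report):

```python
import numpy as np

a = np.ones((100_000, 3))
b = np.ones(100_000)
try:
    a * b  # shapes (100000, 3) and (100000,) do not right-align
    ok = False
except ValueError:
    ok = True  # NumPy raises immediately, independent of n
print(ok)
```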
| open | 2024-05-25T16:59:07Z | 2024-05-25T16:59:07Z | https://github.com/scikit-hep/awkward/issues/3129 | [
"performance"
] | nsmith- | 0 |
MagicStack/asyncpg | asyncio | 787 | asyncpg.exceptions.DataError: invalid input for query argument python | I have a problem inserting into PostgreSQL. The column I'm trying to insert into is of type JSONB. The type of the object is Counter().
In production it works, but locally I have a problem.
The error is: asyncpg.exceptions.DataError: invalid input for query argument $16: Counter({'clearmeleva': 1, 'cr7fragrance... (expected str, got Counter)
Why does it work in prod but throw this error locally?
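For what it's worth, asyncpg passes `jsonb` parameters as strings unless a codec is registered, so serializing the `Counter` yourself sidesteps the environment difference (a sketch; the `conn.execute` call and table name are illustrative only):

```python
import json
from collections import Counter

counts = Counter({"clearmeleva": 1, "cr7fragrance": 2})

# Counter is a dict subclass, so json.dumps handles it directly;
# the resulting str is what asyncpg expects for a jsonb parameter
payload = json.dumps(counts)
print(payload)

# hypothetical usage:
# await conn.execute("INSERT INTO t (col) VALUES ($1)", payload)
```

Alternatively, registering a codec with `Connection.set_type_codec('jsonb', encoder=json.dumps, decoder=json.loads, schema='pg_catalog')` lets asyncpg accept dict-like values directly (worth double-checking against your asyncpg version).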
Thank you! | open | 2021-07-27T09:06:41Z | 2023-08-04T06:15:33Z | https://github.com/MagicStack/asyncpg/issues/787 | [] | tomerTcm | 2 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 367 | RuntimeError: nonzero is not supported for tensors with more than INT_MAX elements | Hello, I ran into a problem when training a model with triplet-margin loss; the error message is as follows:
```
File "/home/bengui/miniconda3/envs/torch1.7/lib/python3.6/site-packages/pytorch_metric_learning-0.9.99-py3.6.egg/pytorch_metric_learning/losses/base_metric_loss_function.py", line 34, in forward
File "/home/bengui/miniconda3/envs/torch1.7/lib/python3.6/site-packages/pytorch_metric_learning-0.9.99-py3.6.egg/pytorch_metric_learning/losses/triplet_margin_loss.py", line 35, in compute_loss
File "/home/bengui/miniconda3/envs/torch1.7/lib/python3.6/site-packages/pytorch_metric_learning-0.9.99-py3.6.egg/pytorch_metric_learning/utils/loss_and_miner_utils.py", line 195, in convert_to_triplets
RuntimeError: nonzero is not supported for tensors with more than INT_MAX elements, file a support request
```
I found that this error happens in the function `convert_to_triplets`. The batch size is 600 with 200 images per class. Could you help me fix this error? Thanks a lot.
| closed | 2021-09-23T07:44:50Z | 2022-08-03T18:51:07Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/367 | [
"documentation",
"question"
] | PuNeal | 5 |
nschloe/tikzplotlib | matplotlib | 4 | Loglog plot produces erroneous tex code (pgfplots 1.4) | When using loglog plots with matplotlib, matplotlib2tikz produces
\begin{loglog} \end{loglog}
which is invalid (at least with pgfplots 1.4). The correct form is `\begin{loglogaxis} \end{loglogaxis}` (line 218)
(Sorry for opening two tickets at pretty much the same time :/)
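For reference, a minimal pgfplots snippet using the corrected environment (illustrative only, not taken from the report):

```latex
\begin{tikzpicture}
\begin{loglogaxis}
  \addplot coordinates {(1,1) (10,100) (100,10000)};
\end{loglogaxis}
\end{tikzpicture}
```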
| closed | 2010-10-14T11:06:17Z | 2010-10-14T11:38:59Z | https://github.com/nschloe/tikzplotlib/issues/4 | [] | foucault | 1 |
yeongpin/cursor-free-vip | automation | 275 | [Bug]: Is there a requirement for Google software? | ### Pre-submission checklist
- [x] I understand that Issues are for feedback and solving problems, not a comment section, and I will provide as much information as possible to help resolve the issue.
- [x] I have checked the pinned Issues and searched the existing [open Issues](https://github.com/yeongpin/cursor-free-vip/issues) and [closed Issues](https://github.com/yeongpin/cursor-free-vip/issues?q=is%3Aissue%20state%3Aclosed%20), and found no similar problem.
- [x] I filled in a short and clear title so that developers can quickly determine the general problem when browsing the Issue list, instead of something like "a suggestion" or "stuck".
### Platform
Windows x64
### Version
Latest version
### Error description

### Relevant log output
```shell
```
### Additional information
_No response_ | closed | 2025-03-17T12:43:00Z | 2025-03-19T06:42:35Z | https://github.com/yeongpin/cursor-free-vip/issues/275 | [
"bug"
] | Tryoe | 4 |
deepspeedai/DeepSpeed | pytorch | 6,981 | [REQUEST] Please share WIndows WHL files now | @loadams in https://github.com/microsoft/DeepSpeed/issues/6871 you said...
> I'm able to build a whl locally, and tests seem to be fine. Working on getting these published sometime this week. I'll probably start with a 0.15.0 whl built with python 3.10 to confirm you're seeing things work there. I would upload here to test but it seems we cannot upload whls.
Please do share your built WHL files. DeepSpeed 0.16.3 would be best for Python 3.10.x. I have hit a lot of dead ends on Windows AI systems due to DeepSpeed being such a pain to compile on Windows. I was able to find older WHLs but need the latest now. If you cannot host them in your repo I will happily host them on huggingface for you.
Please do share on any site you can and I will post them to huggingface as a more permanent location for people..
Thanks.
PS I would have added a comment to https://github.com/microsoft/DeepSpeed/issues/6871 but Furzan blocked me after I (and others) pointed out he should not use github issues as a place to advertise his paywalled scripts.
| closed | 2025-01-29T21:41:54Z | 2025-01-30T00:24:36Z | https://github.com/deepspeedai/DeepSpeed/issues/6981 | [
"enhancement",
"windows"
] | SoftologyPro | 2 |
pyro-ppl/numpyro | numpy | 1,726 | Support forward mode differentiation for SVI | Hello everybody. I am encountering a problem with the VonMises distribution, and in particular with its concentration parameter. I am trying to perform a very simple MLE of a hierarchical model.
```
def model(X):
plate = numpyro.plate("data", Nc)
kappa = numpyro.param("kappa", 1., constraint = numpyro.distributions.constraints.positive)
mu = numpyro.param("mu", 0., constraint = numpyro.distributions.constraints.interval(-np.pi, np.pi))
with plate:
phi = numpyro.sample("phi", numpyro.distributions.VonMises(mu, kappa))
with plate:
numpyro.sample("X", numpyro.distributions.Normal(phi, 1.), obs=X)
def guide(X):
pass
```
When I run SVI, I get:
`ValueError: Reverse-mode differentiation does not work for lax.while_loop or lax.fori_loop with dynamic start/stop values. Try using lax.scan, or using fori_loop with static start/stop.`
Playing around with the model I noticed that if I fix the kappa (concentration parameter) to constant, and optimize only the mean of the VonMises, it works.
Also following the [docs](https://num.pyro.ai/en/stable/reparam.html#numpyro.infer.reparam.CircularReparam) I added on top of my function:
`@handlers.reparam(config={"phi": CircularReparam()})`
which changes the error message to:
`NotImplementedError: `
I finally tried (based on this) changing my sample to:
```
with plate:
with handlers.reparam(config={'phi': CircularReparam()}):
phi = numpyro.sample("phi", numpyro.distributions.VonMises(mu, 2.0))
```
Which also ends up with the `NotImplementedError.`
Is there a trick, or the VonMises distribution is just not well implemented yet?
Have a good day!
PS. HMC works with for sampling the conc parameter
A | closed | 2024-01-30T16:28:39Z | 2024-02-08T17:58:35Z | https://github.com/pyro-ppl/numpyro/issues/1726 | [
"enhancement",
"good first issue"
] | AndreaSalati | 1 |
tflearn/tflearn | tensorflow | 602 | Key is_training not found in checkpoint tflearn | I am saving and restoring models I created using TFLearn with
```
optimizer = tf.train.AdamOptimizer().minimize(model.loss_op)
# Initializing the variables
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
avg_time = 0
with tf.Session(config = config) as sess:
tflearn.is_training(True)
merged = tf.summary.merge_all()
train_writer = tf.summary.FileWriter(save_dir + '/train',sess.graph)
sess.run(tf.local_variables_initializer())
sess.run(tf.global_variables_initializer())
coord = tf.train.Coordinator()
tf.train.start_queue_runners(sess, coord=coord)
saver = tf.train.Saver()
```
When I try to restore a model in a new process with
```
with tf.get_default_graph().as_default():
config = tf.ConfigProto()
config.gpu_options.allow_growth=True
avg_time = 0
model = get_model_with_placeholders(module, reuse=False, restore=True)
with tf.Session(config=config) as sess:
tflearn.is_training(False)
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver()
saver.restore(sess, tf.train.latest_checkpoint(os.path.dirname(model_file)))
```
I get the error:
`Key is_training not found in checkpoint tflearn`
| closed | 2017-02-11T16:34:59Z | 2017-02-12T11:22:31Z | https://github.com/tflearn/tflearn/issues/602 | [] | plooney | 2 |
PrefectHQ/prefect | automation | 17,334 | DaskTaskRunner tasks occasionally fail with `AttributeError: 'NoneType' object has no attribute 'address'` | ### Bug summary
Tasks launched via a `DaskTaskRunner` randomly fail with the following exception:
```python
File "/Users/kzvezdarov/git/prefect-dask-test/attr_err_flow.py", line 11, in load_dataframe
with get_dask_client():
^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^^^^
File "/Users/kzvezdarov/git/prefect-dask-test/.venv/lib/python3.12/site-packages/prefect_dask/utils.py", line 101, in get_dask_client
client_kwargs = _generate_client_kwargs(
^^^^^^^^^^^^^^^^^
File "/Users/kzvezdarov/git/prefect-dask-test/.venv/lib/python3.12/site-packages/prefect_dask/utils.py", line 29, in _generate_client_kwargs
address = get_client().scheduler.address
^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'address'
```
Minimum flow to reproduce this (somewhat reliably; executed on a local process workpool):
```python
from prefect import flow, task, serve
import dask.dataframe as dd
import numpy as np
from prefect.futures import PrefectFutureList
from prefect_dask.utils import get_dask_client
from prefect_dask.task_runners import DaskTaskRunner
@task
def load_dataframe() -> dd.DataFrame:
with get_dask_client():
return (
dd.DataFrame.from_dict(
{
"x": np.random.random(size=1_000),
"y": np.random.random(size=1_000),
}
)
.mean()
.compute()
)
@flow(task_runner=DaskTaskRunner())
def attr_err_flow():
tasks = PrefectFutureList()
for _ in range(10):
tasks.append(load_dataframe.submit())
return tasks.result()
if __name__ == "__main__":
attr_err_deploy = attr_err_flow.to_deployment(
name="attr-err-deployment", work_pool_name="local"
)
serve(attr_err_deploy)
```
This seems like some kind of race condition, because increasing the amount of work each
task has to do (via `size`) makes it less and less likely to manifest.
This appears to happen when using both `LocalCluster` and `DaskKubernetesOperator`
ephemeral clusters.
Finally, a fairly straightforward workaround seems to be simply retrying the task when
that exception is encountered.
### Version info
```Text
Version: 3.2.9
API version: 0.8.4
Python version: 3.12.9
Git commit: 27eb408c
Built: Fri, Feb 28, 2025 8:12 PM
OS/Arch: darwin/arm64
Profile: local
Server type: ephemeral
Pydantic version: 2.10.6
Server:
Database: sqlite
SQLite version: 3.49.1
Integrations:
prefect-dask: 0.3.3
```
### Additional context
Full flow run logs:
[vengeful-wildcat.csv](https://github.com/user-attachments/files/19041296/vengeful-wildcat.csv) | open | 2025-03-02T01:07:22Z | 2025-03-02T01:07:22Z | https://github.com/PrefectHQ/prefect/issues/17334 | [
"bug"
] | kzvezdarov | 0 |
seleniumbase/SeleniumBase | pytest | 3505 | CDP methods missing for element's parent | When using a UC CDP driver to find an element, I cannot call CDP methods on its parent.
Code to reproduce:
```python
from seleniumbase import Driver
driver = Driver(uc=True)
driver.uc_activate_cdp_mode("https://google.com")
element = driver.cdp.find_element("textarea[aria-label='Search']")
print('element:', element)
print('element get_attribute:', element.get_attribute, '\n')
parent = element.parent
print('parent:', parent)
print('parent get_attribute:', parent.get_attribute, '\n')
```
Outputs:
```cmd
element: <textarea silent="True" class="gLFyf" aria-controls="Alh6id" aria-owns="Alh6id" autofocus="" title="Search" value="" aria-label="Search" placeholder="" aria-autocomplete="both" aria-expanded="false" aria-haspopup="false" autocapitalize="off" autocomplete="off" autocorrect="off" id="APjFqb" maxlength="2048" name="q" role="combobox" rows="1" spellcheck="false" jsaction="paste:puy29d" data-ved="0ahUKEwjc67KJz7yLAxXeAPsDHUNLJS0Q39UDCAQ" clear_input="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a1b3ec0>" click="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fc0e0>" flash="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fc180>" focus="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fc220>" highlight_overlay="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fc2c0>" mouse_click="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fc360>" mouse_drag="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fc400>" mouse_move="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fc4a0>" query_selector="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fc540>" querySelector="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fc540>" query_selector_all="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fc5e0>" querySelectorAll="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fc5e0>" remove_from_dom="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fc680>" save_screenshot="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fc720>" save_to_dom="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fc7c0>" scroll_into_view="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fc860>" select_option="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fc900>" send_file="<function 
CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fc9a0>" send_keys="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fca40>" set_text="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fcae0>" set_value="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fcb80>" type="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fcc20>" get_position="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fccc0>" get_html="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fcd60>" get_js_attributes="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fce00>" get_attribute="<function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fcea0>"></textarea>
element get_attribute: <function CDPMethods.__add_sync_methods.<locals>.<lambda> at 0x10a2fcea0>
parent: <div silent="True" jscontroller="vZr2rb" jsname="gLFyf" class="a4bIc" data-hpmde="false" data-mnr="10" jsaction="h5M12e;input:d3sQLd;blur:jI3wzf"><style silent="True">.gLFyf,.YacQv{line-height:34px;font-size:16px;flex:100%;}textarea.gLFyf,.YacQv{font-family:Arial,sans-serif;line-height:22px;border-bottom:8px solid transparent;padding-top:11px;overflow-x:hidden}textarea.gLFyf{}.sbfc textarea.gLFyf{white-space:pre-line;overflow-y:auto}.gLFyf{resize:none;background-color:transparent;border:none;margin:0;padding:0;color:rgba(0,0,0,.87);word-wrap:break-word;outline:none;display:flex;-webkit-tap-highlight-color:transparent}.a4bIc{display:flex;flex-wrap:wrap;flex:1}.YacQv{color:transparent;white-space:pre;position:absolute;pointer-events:none}.YacQv span{text-decoration:#b3261e dotted underline}.gLFyf::placeholder{color:var(--IXoxUe)}</style><div silent="True" jsname="vdLsw" class="YacQv"></div><textarea silent="True" class="gLFyf" aria-controls="Alh6id" aria-owns="Alh6id" autofocus="" title="Search" value="" aria-label="Search" placeholder="" aria-autocomplete="both" aria-expanded="false" aria-haspopup="false" autocapitalize="off" autocomplete="off" autocorrect="off" id="APjFqb" maxlength="2048" name="q" role="combobox" rows="1" spellcheck="false" jsaction="paste:puy29d" data-ved="0ahUKEwjc67KJz7yLAxXeAPsDHUNLJS0Q39UDCAQ"></textarea></div>
parent get_attribute: None
``` | closed | 2025-02-11T22:01:17Z | 2025-02-12T02:23:49Z | https://github.com/seleniumbase/SeleniumBase/issues/3505 | [
"invalid usage",
"workaround exists",
"UC Mode / CDP Mode"
] | julesmcrt | 1 |
modelscope/modelscope | nlp | 1,045 | ModuleNotFoundError: No module named 'modelscope.models.cv.facial_68ldk_detection' | ## Question
The module exists in GitHub's modelscope-v1.19.1 source code at [modelscope/models/cv/facial_68ldk_detection](https://github.com/modelscope/modelscope/tree/master/modelscope/models/cv/facial_68ldk_detection), but is not found when installing with *pip install modelscope[cv]*.
## Environment
Python-3.10.14 torch-2.3.1 CUDA:0 (NVIDIA GeForce RTX 3090, 24260MiB)
modelscope-1.19.1
## Minimal Reproducible Example
```python
import cv2
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks
model_id = 'Damo_XR_Lab/cv_human_68-facial-landmark-detection'
estimator = pipeline(Tasks.facial_68ldk_detection, model=model_id)
Input_file = 'assets/sample.jpg'
results = estimator(input=Input_file)
landmarks = results['landmarks']
image_draw = cv2.imread(Input_file)
for num in range(landmarks.shape[0]):
cv2.circle(image_draw, (round(landmarks[num][0]), round(landmarks[num][1])), 2, (0, 255, 0), -1)
cv2.imwrite('result.png', image_draw)
```
## Error
ModuleNotFoundError: No module named 'modelscope.models.cv.facial_68ldk_detection' | closed | 2024-10-23T07:50:16Z | 2024-11-06T04:14:33Z | https://github.com/modelscope/modelscope/issues/1045 | [] | f549263766 | 1 |