| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
iterative/dvc | machine-learning | 10,428 | dvc exp run --run-all: One or two experiments are executed, then it hangs (JSONDecodeError) (similar to #10398) | # Bug Report
<!--
## Issue name
Issue names must follow the pattern `command: description` where the command is the dvc command that you are trying to run. The description should describe the consequence of the bug.
Example: `repro: doesn't detect input changes`
-->
## Description
When executing `dvc exp run --run-all`, the worker hangs at some point (after finishing a small number of experiments, right before starting a new one). Once this happened after two experiments, now after one.
<!--
A clear and concise description of what the bug is.
-->
### Reproduce
<!--
Step list of how to reproduce the bug
-->
<!--
Example:
1. dvc init
2. Copy dataset.zip to the directory
3. dvc add dataset.zip
4. dvc run -d dataset.zip -o model ./train.sh
5. modify dataset.zip
6. dvc repro
-->
1. Add multiple experiments to the queue with `dvc exp run --queue`
2. Run `dvc exp run --run-all`
### Expected
<!--
A clear and concise description of what you expect to happen.
-->
All experiments are executed.
### Environment information
<!--
This is required to ensure that we can reproduce the bug.
-->
I'm running this through GitHub Actions on a self-hosted runner, Ubuntu 22.04.
**Output of `dvc doctor`:**
DVC version: 3.50.2 (pip)
-------------------------
Platform: Python 3.11.9 on Linux-5.15.0-107-generic-x86_64-with-glibc2.35
Subprojects:
dvc_data = 3.15.1
dvc_objects = 5.1.0
dvc_render = 1.0.2
dvc_task = 0.4.0
scmrepo = 3.3.3
Supports:
http (aiohttp = 3.9.5, aiohttp-retry = 2.8.3),
https (aiohttp = 3.9.5, aiohttp-retry = 2.8.3),
s3 (s3fs = 2024.3.1, boto3 = 1.34.69)
Config:
Global: /github/home/.config/dvc
System: /etc/xdg/dvc
Cache types: <https://error.dvc.org/no-dvc-cache>
Caches: local
Remotes: s3
**Additional Information (if any):**
`cat .dvc/tmp/exps/celery/dvc-exp-worker-1.out` gives me this:
/app/venv/lib/python3.11/site-packages/celery/platforms.py:829: SecurityWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=0 euid=0 gid=0 egid=0
warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
[2024-05-15 16:42:53,662: WARNING/MainProcess] No hostname was supplied. Reverting to default 'localhost'
-------------- dvc-exp-0b0771-1@localhost v5.4.0 (opalescent)
--- ***** -----
-- ******* ---- Linux-5.15.0-107-generic-x86_64-with-glibc2.35 2024-05-15 16:42:53
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: dvc-exp-local:0x7fc7d38a2350
- ** ---------- .> transport: filesystem://localhost//
- ** ---------- .> results: file:///__w/equinor_pipeline_model/equinor_pipeline_model/.dvc/tmp/exps/celery/result
- *** --- * --- .> concurrency: 1 (thread)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. dvc.repo.experiments.queue.tasks.cleanup_exp
. dvc.repo.experiments.queue.tasks.collect_exp
. dvc.repo.experiments.queue.tasks.run_exp
. dvc.repo.experiments.queue.tasks.setup_exp
. dvc_task.proc.tasks.run
[2024-05-15 16:42:53,671: WARNING/MainProcess] /app/venv/lib/python3.11/site-packages/celery/worker/consumer/consumer.py:508: CPendingDeprecationWarning: The broker_connection_retry configuration setting will no longer determine
whether broker connection retries are made during startup in Celery 6.0 and above.
If you wish to retain the existing behavior for retrying connections on startup,
you should set broker_connection_retry_on_startup to True.
warnings.warn(
[2024-05-15 16:42:53,671: WARNING/MainProcess] No hostname was supplied. Reverting to default 'localhost'
[2024-05-15 16:42:53,671: INFO/MainProcess] Connected to filesystem://localhost//
[2024-05-15 16:42:53,673: INFO/MainProcess] dvc-exp-0b0771-1@localhost ready.
[2024-05-15 16:42:53,674: INFO/MainProcess] Task dvc.repo.experiments.queue.tasks.run_exp[093a3dbc-da8f-4222-a839-e015a20dd6c2] received
[2024-05-15 20:05:07,673: WARNING/MainProcess] No hostname was supplied. Reverting to default 'localhost'
[2024-05-15 20:26:58,967: CRITICAL/MainProcess] Unrecoverable error: JSONDecodeError('Expecting value: line 1 column 1 (char 0)')
Traceback (most recent call last):
File "/app/venv/lib/python3.11/site-packages/celery/worker/worker.py", line 202, in start
self.blueprint.start(self)
File "/app/venv/lib/python3.11/site-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/app/venv/lib/python3.11/site-packages/celery/bootsteps.py", line 365, in start
return self.obj.start()
^^^^^^^^^^^^^^^^
File "/app/venv/lib/python3.11/site-packages/celery/worker/consumer/consumer.py", line 340, in start
blueprint.start(self)
File "/app/venv/lib/python3.11/site-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/app/venv/lib/python3.11/site-packages/celery/worker/consumer/consumer.py", line 746, in start
c.loop(*c.loop_args())
File "/app/venv/lib/python3.11/site-packages/celery/worker/loops.py", line 130, in synloop
connection.drain_events(timeout=2.0)
File "/app/venv/lib/python3.11/site-packages/kombu/connection.py", line 341, in drain_events
return self.transport.drain_events(self.connection, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/venv/lib/python3.11/site-packages/kombu/transport/virtual/base.py", line 997, in drain_events
get(self._deliver, timeout=timeout)
File "/app/venv/lib/python3.11/site-packages/kombu/utils/scheduling.py", line 55, in get
return self.fun(resource, callback, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/venv/lib/python3.11/site-packages/kombu/transport/virtual/base.py", line 1035, in _drain_channel
return channel.drain_events(callback=callback, timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/venv/lib/python3.11/site-packages/kombu/transport/virtual/base.py", line 754, in drain_events
return self._poll(self.cycle, callback, timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/venv/lib/python3.11/site-packages/kombu/transport/virtual/base.py", line 414, in _poll
return cycle.get(callback)
^^^^^^^^^^^^^^^^^^^
File "/app/venv/lib/python3.11/site-packages/kombu/utils/scheduling.py", line 55, in get
return self.fun(resource, callback, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/venv/lib/python3.11/site-packages/kombu/transport/virtual/base.py", line 417, in _get_and_deliver
message = self._get(queue)
^^^^^^^^^^^^^^^^
File "/app/venv/lib/python3.11/site-packages/kombu/transport/filesystem.py", line 261, in _get
return loads(bytes_to_str(payload))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/venv/lib/python3.11/site-packages/kombu/utils/json.py", line 93, in loads
return _loads(s, object_hook=object_hook)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/json/__init__.py", line 359, in loads
return cls(**kw).decode(s)
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
very similar to https://github.com/iterative/dvc/issues/10398, though no solution was proposed there (as far as I can see)
The experiments that have run have been executed successfully.
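For context, the final frame of the traceback is kombu's filesystem transport JSON-decoding a queued message file, and an empty payload reproduces the exact message from the worker log, which suggests a truncated or partially written message file. A quick stdlib illustration:

```python
import json

# An empty payload produces the exact error text seen in the worker log,
# hinting that kombu read a queued message file before it was fully written.
try:
    json.loads("")
except json.JSONDecodeError as e:
    msg = str(e)

print(msg)  # Expecting value: line 1 column 1 (char 0)
```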
| open | 2024-05-16T08:37:39Z | 2024-05-19T23:40:20Z | https://github.com/iterative/dvc/issues/10428 | [
"A: experiments"
] | AljoSt | 0 |
pydantic/pydantic-ai | pydantic | 158 | Is this an alternative to the popular `guidance`, or can they work together? | Thanks for your lib! It looks very interesting. I'm wondering if it is possible to use it with https://github.com/guidance-ai/guidance, as the latter is great at response shaping, or do you aim to replace that lib too? | closed | 2024-12-06T18:06:43Z | 2024-12-08T12:47:02Z | https://github.com/pydantic/pydantic-ai/issues/158 | [
"question"
] | pySilver | 3 |
kornia/kornia | computer-vision | 3,026 | K.Resize() doesn't work on MPS devices | ### Describe the bug
K.Resize() doesn't work if the device is `torch.device("mps")`
```
File "/Users/.../miniforge3/envs/ftw4/lib/python3.12/site-packages/kornia/utils/helpers.py", line 232, in _torch_solve_cast
out = torch.linalg.solve(A.to(torch.float64), B.to(torch.float64))
^^^^^^^^^^^^^^^^^^^
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
```
Also mentioned in #1717
### Reproduction steps
```bash
1. Create `resizer = K.Resize(...).to("mps")`
2. Use the `resizer`
```
### Expected behavior
`K.Resize()` should resize, falling back to float32 if necessary.
### Environment
```shell
Unsure -- can find more details if necessary
```
### Additional context
_No response_ | open | 2024-09-28T17:15:13Z | 2024-09-29T09:49:31Z | https://github.com/kornia/kornia/issues/3026 | [
"bug :bug:",
"help wanted"
] | calebrob6 | 1 |
datapane/datapane | data-visualization | 5 | Fix windows support | Window support is currently broken, as we shell out to certain libraries (such as gzip). We should use Python alternatives instead. | closed | 2020-05-01T17:42:53Z | 2020-05-07T17:41:24Z | https://github.com/datapane/datapane/issues/5 | [
"bug"
] | lanthias | 1 |
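The fix suggested in the datapane issue above (use Python alternatives instead of shelling out) can be sketched with the stdlib; the function name here is illustrative, not datapane code:

```python
import gzip
import shutil

def gzip_file(src_path, dst_path):
    # Pure-Python replacement for shelling out to the external `gzip`
    # binary, so it also works on Windows where that binary is absent.
    with open(src_path, 'rb') as src, gzip.open(dst_path, 'wb') as dst:
        shutil.copyfileobj(src, dst)
```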
rthalley/dnspython | asyncio | 285 | Error in the example | ```
Python 3.6.3 (default, Oct 13 2017, 07:46:30)
[GCC 7.2.1 20170915 (Red Hat 7.2.1-2)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dns.resolver
>>> resolver = dns.resolver.Resolver(configure=False)
>>> resolver.nameservers = ['8.8.8.8']
>>> results = resolver.query("amazon.com", "NS")
20:29:03.302354 e4:b3:18:67:bd:03 > 0c:c4:7a:7f:b2:c1, ethertype IPv4 (0x0800), length 70: (tos 0x0, ttl 64, id 37110, offset 0, flags [DF], proto UDP (17), length 56)
192.168.157.165.50067 > 8.8.8.8.53: [udp sum ok] 15060+ NS? amazon.com. (28)
20:29:03.338403 0c:c4:7a:7f:b2:c1 > e4:b3:18:67:bd:03, ethertype IPv4 (0x0800), length 219: (tos 0x20, ttl 56, id 53638, offset 0, flags [none], proto UDP (17), length 205)
8.8.8.8.53 > 192.168.157.165.50067: [udp sum ok] 15060 q: NS? amazon.com. 6/0/0 amazon.com. [55m8s] NS pdns1.ultradns.net., amazon.com. [55m8s] NS ns4.p31.dynect.net., amazon.com. [55m8s] NS ns3.p31.dynect.net., amazon.com. [55m8s] NS ns2.p31.dynect.net., amazon.com. [55m8s] NS ns1.p31.dynect.net., amazon.com. [55m8s] NS pdns6.ultradns.co.uk. (177)
>>> results = dns.resolver.query("amazon.com", "NS")
20:29:12.017959 e4:b3:18:67:bd:03 > 0c:c4:7a:7f:b2:c1, ethertype IPv4 (0x0800), length 70: (tos 0x0, ttl 64, id 39224, offset 0, flags [DF], proto UDP (17), length 56)
192.168.157.165.42802 > 192.168.157.1.53: [udp sum ok] 17233+ NS? amazon.com. (28)
20:29:12.041484 0c:c4:7a:7f:b2:c1 > e4:b3:18:67:bd:03, ethertype IPv4 (0x0800), length 427: (tos 0x0, ttl 64, id 6016, offset 0, flags [none], proto UDP (17), length 413)
192.168.157.1.53 > 192.168.157.165.42802: [udp sum ok] 17233 q: NS? amazon.com. 6/0/10 amazon.com. [1h] NS ns3.p31.dynect.net., amazon.com. [1h] NS pdns6.ultradns.co.uk., amazon.com. [1h] NS pdns1.ultradns.net., amazon.com. [1h] NS ns4.p31.dynect.net., amazon.com. [1h] NS ns1.p31.dynect.net., amazon.com. [1h] NS ns2.p31.dynect.net. ar: ns1.p31.dynect.net. [8m26s] A 208.78.70.31, ns2.p31.dynect.net. [5h42m22s] A 204.13.250.31, ns3.p31.dynect.net. [11h46m15s] A 208.78.71.31, ns4.p31.dynect.net. [23h21m55s] A 204.13.251.31, pdns1.ultradns.net. [8m24s] A 204.74.108.1, pdns6.ultradns.co.uk. [16m29s] A 204.74.115.1, ns1.p31.dynect.net. [57s] AAAA 2001:500:90:1::31, ns3.p31.dynect.net. [51s] AAAA 2001:500:94:1::31, pdns1.ultradns.net. [39m16s] AAAA 2001:502:f3ff::1, pdns6.ultradns.co.uk. [25m35s] AAAA 2610:a1:1017::1 (385)
>>> quit()
``` | closed | 2017-11-15T01:31:54Z | 2018-02-20T19:54:58Z | https://github.com/rthalley/dnspython/issues/285 | [] | arcivanov | 0 |
miguelgrinberg/Flask-SocketIO | flask | 2,100 | Update 5.3.6 to 5.4.1 - test failed | Hello,
I'm trying to upgrade from 5.3.6 to 5.4.1 for openSUSE Tumbleweed (Rolling Release) and run into a test error:
[code]
```
[ 9s] + python3.10 -m unittest -v test_socketio.py
[ 10s] Traceback (most recent call last):
[ 10s] File "/usr/lib64/python3.10/runpy.py", line 196, in _run_module_as_main
[ 10s] return _run_code(code, main_globals, None,
[ 10s] File "/usr/lib64/python3.10/runpy.py", line 86, in _run_code
[ 10s] exec(code, run_globals)
[ 10s] File "/usr/lib64/python3.10/unittest/__main__.py", line 18, in <module>
[ 10s] main(module=None)
[ 10s] File "/usr/lib64/python3.10/unittest/main.py", line 100, in __init__
[ 10s] self.parseArgs(argv)
[ 10s] File "/usr/lib64/python3.10/unittest/main.py", line 147, in parseArgs
[ 10s] self.createTests()
[ 10s] File "/usr/lib64/python3.10/unittest/main.py", line 158, in createTests
[ 10s] self.test = self.testLoader.loadTestsFromNames(self.testNames,
[ 10s] File "/usr/lib64/python3.10/unittest/loader.py", line 220, in loadTestsFromNames
[ 10s] suites = [self.loadTestsFromName(name, module) for name in names]
[ 10s] File "/usr/lib64/python3.10/unittest/loader.py", line 220, in <listcomp>
[ 10s] suites = [self.loadTestsFromName(name, module) for name in names]
[ 10s] File "/usr/lib64/python3.10/unittest/loader.py", line 154, in loadTestsFromName
[ 10s] module = __import__(module_name)
[ 10s] File "/home/abuild/rpmbuild/BUILD/flask_socketio-5.4.1/test_socketio.py", line 11, in <module>
[ 10s] socketio = SocketIO(app)
[ 10s] File "/home/abuild/rpmbuild/BUILDROOT/python-Flask-SocketIO-5.4.1-0.x86_64/usr/lib/python3.10/site-packages/flask_socketio/__init__.py", line 187, in __init__
[ 10s] self.init_app(app, **kwargs)
[ 10s] File "/home/abuild/rpmbuild/BUILDROOT/python-Flask-SocketIO-5.4.1-0.x86_64/usr/lib/python3.10/site-packages/flask_socketio/__init__.py", line 243, in init_app
[ 10s] self.server = socketio.Server(**self.server_options)
[ 10s] File "/usr/lib/python3.10/site-packages/socketio/base_server.py", line 31, in __init__
[ 10s] self.eio = self._engineio_server_class()(**engineio_options)
[ 10s] File "/usr/lib/python3.10/site-packages/engineio/base_server.py", line 81, in __init__
[ 10s] raise ValueError('Invalid async_mode specified')
[ 10s] ValueError: Invalid async_mode specified
[ 10s] error: Bad exit status from /var/tmp/rpm-tmp.vl0P2D (%check)
```
Any idea?
Thanks | closed | 2024-10-04T17:31:02Z | 2024-11-08T10:24:45Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/2100 | [
"question"
] | coogor | 4 |
voxel51/fiftyone | computer-vision | 5,397 | [BUG] GraphQL Exception after pip install and running quickstart example |
### Describe the problem
I ran the pip install quickstart example and got this GraphQL exception:
```
GraphQL API Error
Cannot return null for non-nullable field Query.dataset.
```
Here is the exception stack:
```
File "/Users/cuongwilliams/anaconda3/envs/voxel51/lib/python3.10/site-packages/graphql/execution/execute.py", line 1038, in await_result
return build_response(await result, errors) # type: ignore
File "/Users/cuongwilliams/anaconda3/envs/voxel51/lib/python3.10/site-packages/graphql/execution/execute.py", line 453, in get_results
await gather(*(results[field] for field in awaitable_fields)),
File "/Users/cuongwilliams/anaconda3/envs/voxel51/lib/python3.10/site-packages/graphql/execution/execute.py", line 537, in await_result
self.handle_field_error(error, return_type)
File "/Users/cuongwilliams/anaconda3/envs/voxel51/lib/python3.10/site-packages/graphql/execution/execute.py", line 571, in handle_field_error
raise error
File "/Users/cuongwilliams/anaconda3/envs/voxel51/lib/python3.10/site-packages/graphql/execution/execute.py", line 529, in await_result
completed = self.complete_value(
File "/Users/cuongwilliams/anaconda3/envs/voxel51/lib/python3.10/site-packages/graphql/execution/execute.py", line 622, in complete_value
raise TypeError(
```
### Code to reproduce issue
I created a python=3.10 using conda and activated it.
Then I ran the following code:
```
import fiftyone as fo
import fiftyone.zoo as foz
dataset = foz.load_zoo_dataset("quickstart")
session = fo.launch_app(dataset)
```
### System information
- MacOSX Sequoia 15.2 on M2 MacBook
- Conda 24.1
- Python =3.10
- fiftyone ==1.2.0
- fiftyone-brain == 0.18.2
- fiftyone_db == 1.1.7
### Willingness to contribute
- [X] Yes. I can contribute a fix for this bug independently
- [X] Yes. I would be willing to contribute a fix for this bug with guidance
from the FiftyOne community
| closed | 2025-01-16T20:48:56Z | 2025-02-03T16:39:57Z | https://github.com/voxel51/fiftyone/issues/5397 | [
"bug"
] | sourcesync | 4 |
geex-arts/django-jet | django | 351 | Google analytics API request failed | Successfully setup django-jet and everything working. But after few hours google analytics data is not showing. And displaying API request failed.
Using django 2.0(django 2.1 has compatibility issue) with latest django-jet. | closed | 2018-09-05T03:43:41Z | 2018-09-07T20:38:04Z | https://github.com/geex-arts/django-jet/issues/351 | [] | nikolas-dev | 2 |
drivendataorg/cookiecutter-data-science | data-science | 3 | Add testing boilerplate/docs | closed | 2016-04-23T17:00:17Z | 2016-05-16T15:27:21Z | https://github.com/drivendataorg/cookiecutter-data-science/issues/3 | [] | pjbull | 1 | |
polarsource/polar | fastapi | 5,221 | Issue Funding: Script to remove badges | First run it on all issues without a pledge | closed | 2025-03-10T08:25:29Z | 2025-03-20T14:08:14Z | https://github.com/polarsource/polar/issues/5221 | [
"admin"
] | birkjernstrom | 0 |
unit8co/darts | data-science | 2,363 | hyperparameter TiDE and TFT no complete trials | Hi I dont think this is a bug, but I cant trace the error.
It might be due to the data it self as it is multivariate and grouped by ID.
score is returning [nan, nan, nan, nan...] leading to trial = study.best_trial
the val_loss=120 so it is very high, but i think it should still return validation score.
It might be the model is not correctly using all data and simply a series?
```
def objective(trial):
# Suggest values for the hyperparameters
decoder_output_dim = trial.suggest_int("decoder_output_dim", 15, 45, step=10)
hidden_size = trial.suggest_categorical("hidden_size", [64, 128, 256, 512])
dropout = trial.suggest_float("dropout", 0.05, 0.55, step=0.05)
# Define early stopping criteria
early_stop_callback = EarlyStopping(
monitor="val_loss", min_delta=1e-2, patience=5, verbose=True, mode="min"
)
# Learning rate scheduler
lr_scheduler_cls = torch.optim.lr_scheduler.ExponentialLR
lr_scheduler_kwargs = {
"gamma": 0.999,
}
# Update model args with suggested values
model_args = {
"input_chunk_length": 45,
"output_chunk_length": 5,
"decoder_output_dim": decoder_output_dim,
"hidden_size": hidden_size,
"dropout": dropout,
"random_state": 42,
"use_static_covariates": True,
"use_reversible_instance_norm": True,
"lr_scheduler_cls": lr_scheduler_cls,
"lr_scheduler_kwargs": lr_scheduler_kwargs,
"pl_trainer_kwargs": {
"gradient_clip_val": 1,
"accelerator": "auto",
"max_epochs": 60,
"callbacks": [early_stop_callback],
},
}
# Create and fit the model
model = TiDEModel(**model_args)
model.fit(series=train_scaled, val_series=val_scaled, verbose=True)
# Evaluate the model
val_pred = model.predict(n=len(val_scaled[0]), series=val_scaled, verbose=True)
# Inverse transform the predictions and actual values
val_pred_inverse = scaler.inverse_transform(val_pred)
val_actual_inverse = scaler.inverse_transform(val_scaled)
score = mse(val_actual_inverse, val_pred_inverse)
print(f"Validation MSE: {score}")
return score
# Create a study object and optimize the objective function
study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=30)
print("Number of trials:", len(study.trials))
try:
print("Best trial:")
trial = study.best_trial
print(" Value:", trial.value)
print(" Params:")
for key, value in trial.params.items():
print(f" {key}: {value}")
except ValueError:
print("No trials are completed yet.")
```
The TimeSeries objects are created with `.from_dataframe`, not `.from_group_dataframe`, as the latter was causing a few errors.
```
# Create a list of unique patient IDs
unique_ids = processed_df["unique_id"].unique()
# Get 40 unique IDs for testing
# unique_ids = unique_ids[:40]
# Split the unique IDs into training, validation, and test sets
train_ids, val_test_ids = train_test_split(unique_ids, test_size=0.2, random_state=42)
val_ids, test_ids = train_test_split(val_test_ids, test_size=0.5, random_state=42)
# Create TimeSeries objects for training, validation, and test sets
train_list = []
val_list = []
test_list = []
for unique_id in unique_ids:
patient_data = processed_df[processed_df["unique_id"] == unique_id]
patient_static_covariates = patient_data[["sex", "age"]].iloc[
0
] # Get the first row for the patient's static covariates
patient_series = TimeSeries.from_dataframe(
patient_data,
time_col=None, # Use the DataFrame's index as the time index
value_cols=component_names,
static_covariates=patient_static_covariates,
freq=1, # Specify the frequency as an integer value
)
if unique_id in train_ids:
train_list.append(patient_series)
elif unique_id in val_ids:
val_list.append(patient_series)
else:
test_list.append(patient_series)
# Convert the TimeSeries objects to float32
train_list = [ts.astype(np.float32) for ts in train_list]
val_list = [ts.astype(np.float32) for ts in val_list]
test_list = [ts.astype(np.float32) for ts in test_list]
print(f"Length of train_list: {len(train_list)}")
print(f"Length of val_list: {len(val_list)}")
print(f"Length of test_list: {len(test_list)}")
```
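As a side note, Optuna treats trials whose objective returns NaN as failed, which would explain `best_trial` raising. A tiny guard (a sketch; the name `guard` is illustrative) can fail fast with a clearer message instead of silently failing every trial:

```python
import math

def guard(score):
    # Raise early with a descriptive message instead of returning NaN,
    # which Optuna would otherwise record as a failed trial.
    if isinstance(score, float) and math.isnan(score):
        raise ValueError("objective produced NaN; check predictions/scaling")
    return score

print(guard(0.25))  # 0.25
```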
| closed | 2024-05-01T08:15:15Z | 2024-05-25T17:57:37Z | https://github.com/unit8co/darts/issues/2363 | [
"question"
] | flight505 | 3 |
sebp/scikit-survival | scikit-learn | 511 | brier_score and cumulative_dynamic_auc fail when there is a test time greater than a training time | I'm creating a new open issue related to [#478](https://github.com/sebp/scikit-survival/issues/478#issuecomment-2624334230)_ which is currently cosed because no reproducible example was provided.
Another user (@dpellow) reported that "brier_score" produces a ValueError when test time is greater than training time. I experience the same issue with "cumulative_dynamic_auc". This does not happen systematically but only in some cases.
The documentation sais that time points in "times" must be within the range of times in "survival_test", but says nothing about times in "survival_test" being within the range of times in "survival_train". In fact, this happens in many other examples and no error occurs.
Here I leave a reproducible example where this error happens (sksurv version 0.22.2): https://github.com/aliciaolivaresgil/Reproduce_errors/blob/main/Error_example.ipynb
I could not find any difference between this example and the others where survival_test is greater than training test but no error occurs. Am I doing something wrong in the example? Why is this error occurring?
| open | 2025-01-31T10:56:09Z | 2025-02-15T14:41:37Z | https://github.com/sebp/scikit-survival/issues/511 | [
"documentation"
] | aliciaolivaresgil | 2 |
mwaskom/seaborn | data-science | 3,204 | Boxplot rcparams not working in seaborn | I'm trying to create a custom style for my plots. In order to do so I'm modifying the rcParameters in order to have a uniform style while using matplotlib or seaborn. I created a style that satisfies me but when I use it with seaborn specific parameters are ignored. Example:
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
df_penguins = pd.read_csv(
"https://raw.githubusercontent.com/mwaskom/seaborn-data/master/penguins.csv"
)
from cycler import cycler
ex = {
"axes.spines.right": False,
"axes.spines.top": False,
"figure.dpi": 150,
"axes.prop_cycle": cycler(
"color",
[
"#6900D8",
"#00E2A7",
"#D095FF",
"#00AF78",
"#0047E5",
"#DB004F",
"#FF7F00",
],
),
"boxplot.boxprops.color": "darkgrey",
"boxplot.whiskerprops.color": "darkgrey",
"boxplot.capprops.color": "darkgrey",
"boxplot.medianprops.color": "C6",
"boxplot.notch": True,
"axes.axisbelow": True,
"axes.grid": True,
"grid.color": "#A4A5AE",
"grid.alpha": 0.5,
"grid.linestyle": "--",
"axes.grid.which": "major",
"legend.shadow": True,
"legend.facecolor": "inherit",
"legend.framealpha": 0.95,
"savefig.dpi": 320,
"savefig.bbox": "tight",
"scatter.edgecolors": "#343E3D",
"hist.bins": "auto",
"figure.constrained_layout.use": True,
}
sns.set_theme(rc=ex)
fig, (ax1, ax2) = plt.subplots(
ncols=2,
sharey=True,
figsize=plt.figaspect(0.5)
)
sns.boxplot(data=df_penguins, y="body_mass_g", ax=ax1)
ax2.boxplot(df_penguins.body_mass_g.dropna())
```
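One hedged workaround sketch (assumption: seaborn supplies its own artist properties when drawing, bypassing the `boxplot.*` rcParams, so the props can instead be passed per call, since seaborn forwards extra kwargs to `Axes.boxplot`; the data below is a small stand-in for the penguins column):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import seaborn as sns

data = [3500, 3800, 4200, 4700, 5400]  # stand-in for body_mass_g
ax = sns.boxplot(
    y=data,
    boxprops={"color": "darkgrey"},
    whiskerprops={"color": "darkgrey"},
    capprops={"color": "darkgrey"},
    medianprops={"color": "C6"},
)
```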
Here's the output:

As you can see the colors of the boxprops, whiskerprops, capprops, and medianprops are not the ones I specified in the dictionary. Is this behavior expected? | closed | 2022-12-29T11:42:46Z | 2023-01-01T19:22:43Z | https://github.com/mwaskom/seaborn/issues/3204 | [] | GiuseppeMinardiWisee | 2 |
iMerica/dj-rest-auth | rest-api | 419 | Settings variable for enabling/disabling use of allauth forms/emails | There are a number of challenges that arise when we have `allauth` in our `INSTALLED_APPS` but don't want to use its email templates/password reset functionality, for instance #367. Could we have a settings toggle that allows us to very easily fix all these issues?
Essentially, anywhere we have:
```
if 'allauth' in settings.INSTALLED_APPS:
foo()
```
...we should replace with:
```
if settings.REST_AUTH_USE_ALLAUTH and 'allauth' in settings.INSTALLED_APPS:
foo()
```
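A stdlib sketch of how the toggle could behave (the default of `True`, the helper name, and the fake settings object are all illustrative, not dj-rest-auth code):

```python
class FakeSettings:
    # Stand-in for django.conf.settings.
    INSTALLED_APPS = ['django.contrib.auth', 'allauth']
    REST_AUTH_USE_ALLAUTH = False

def use_allauth(settings):
    # Default to True so existing projects keep their current behavior
    # when the new setting is absent.
    return getattr(settings, 'REST_AUTH_USE_ALLAUTH', True) and \
        'allauth' in settings.INSTALLED_APPS

print(use_allauth(FakeSettings()))  # False
```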
Thoughts? | closed | 2022-07-14T23:51:28Z | 2022-09-01T06:40:25Z | https://github.com/iMerica/dj-rest-auth/issues/419 | [
"enhancement"
] | kut | 2 |
sgl-project/sglang | pytorch | 4,245 | [Bug] vllm vs sglang performance test comparison | ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [x] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
Why is the average time to first token for vLLM significantly shorter than SGLang's, by nearly 10 times? Could it be that the command parameters I used to start SGLang are incorrect?
### Reproduction
## test script
```python
evalscope perf \
--url "http://ip:31008/v1/chat/completions" \
--parallel 100 \
--model qwen2.5-32b \
--number 500\
--api openai \
--dataset openqa \
--stream
```
## vllm command
```python
CUDA_VISIBLE_DEVICES=4 vllm serve /data/chdmx/models/LLM_models/qwen/Qwen2.5/Qwen2.5-32B-Instruct --tensor-parallel-size 1 --gpu-memory-utilization 0.9 --max-model-len 20000 --trust-remote-code --host 0.0.0.0 --port 31008 --served-model-name qwen2.5-32b
```
**result:**

## sglang command
```python
CUDA_VISIBLE_DEVICES=4 python3 -m sglang.launch_server --model-path /data/chdmx/models/LLM_models/qwen/Qwen2.5/Qwen2.5-32B-Instruct --tp 1 --mem-fraction-static 0.9 --context-length 20000 --trust-remote-code --host 0.0.0.0 --port 31008
```
**result:**

### Environment
root@bms-schyjdmx03:/sgl-workspace# python3 -m sglang.check_env
2025-03-09 19:03:57,307 - INFO - flashinfer.jit: Prebuilt kernels not found, using JIT backend
INFO 03-09 19:04:03 __init__.py:190] Automatically detected platform cuda.
/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py:1964: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
warnings.warn(
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_config.py:345: UserWarning: Valid config keys have changed in V2:
* 'fields' has been removed
warnings.warn(message, UserWarning)
Python: 3.10.16 (main, Dec 4 2024, 08:53:37) [GCC 9.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA A100 80GB PCIe
GPU 0,1,2,3,4,5,6,7 Compute Capability: 8.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 550.54.15
PyTorch: 2.5.1+cu124
sgl_kernel: 0.0.4
flashinfer: 0.2.2.post1
triton: 3.1.0
transformers: 4.48.3
torchao: 0.8.0
numpy: 1.26.4
aiohttp: 3.11.11
fastapi: 0.115.6
hf_transfer: 0.1.9
huggingface_hub: 0.29.1
interegular: 0.3.3
modelscope: 1.22.3
orjson: 3.10.15
packaging: 24.2
psutil: 6.1.1
pydantic: 2.10.5
multipart: 0.0.12
zmq: 26.2.0
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.7.2
openai: 1.59.8
anthropic: 0.43.1
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X PIX PIX PIX SYS SYS SYS SYS SYS SYS 0-27,56-83 0 N/A
GPU1 PIX X PIX PIX SYS SYS SYS SYS SYS SYS 0-27,56-83 0 N/A
GPU2 PIX PIX X PIX SYS SYS SYS SYS SYS SYS 0-27,56-83 0 N/A
GPU3 PIX PIX PIX X SYS SYS SYS SYS SYS SYS 0-27,56-83 0 N/A
GPU4 SYS SYS SYS SYS X PIX PIX PIX SYS SYS 28-55,84-111 1 N/A
GPU5 SYS SYS SYS SYS PIX X PIX PIX SYS SYS 28-55,84-111 1 N/A
GPU6 SYS SYS SYS SYS PIX PIX X PIX SYS SYS 28-55,84-111 1 N/A
GPU7 SYS SYS SYS SYS PIX PIX PIX X SYS SYS 28-55,84-111 1 N/A
NIC0 SYS SYS SYS SYS SYS SYS SYS SYS X SYS
NIC1 SYS SYS SYS SYS SYS SYS SYS SYS SYS X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
ulimit soft: 1048576 | open | 2025-03-10T02:07:14Z | 2025-03-10T07:40:08Z | https://github.com/sgl-project/sglang/issues/4245 | [] | luhairong11 | 5 |
pytest-dev/pytest-django | pytest | 424 | feature request: user_client fixture | In multiple cases I would like to test different behavior between an admin and an authenticated user. For example, admins are allowed to delete/modify everyone's data while a user is only allowed to delete/edit his own data.
I think many pytest-django users could use a `user_client` fixture for a regular logged in user.
Is this something that already came up? Will you be willing to consider this?
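A minimal sketch of what such a fixture could look like (it assumes pytest-django's existing `db`, `client`, and `django_user_model` fixtures, mirroring the built-in `admin_client`; the credentials are illustrative):

```python
import pytest

@pytest.fixture
def user_client(db, client, django_user_model):
    # Sketch only: create a regular (non-admin) user and return a test
    # client logged in as that user.
    username, password = "user", "password"  # illustrative values
    django_user_model.objects.create_user(username=username, password=password)
    client.login(username=username, password=password)
    return client
```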
Thanks | closed | 2016-11-20T01:42:46Z | 2016-11-21T16:08:44Z | https://github.com/pytest-dev/pytest-django/issues/424 | [] | nirizr | 2 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 904 | Learning? | Hello,
Thank you for providing the code. I am using a custom dataset and generator, but my approach is based on pix2pix. I have noticed that `loss_D_fake` and `loss_D_real` never oscillate after the first epoch! The average of each loss is around 0.25 over 21,000 images. According to [soumith point 10](https://github.com/soumith/ganhacks), things are working when D's loss has low variance and goes down over time. My discriminator loss plots show no variance at all after 2 epochs.
Similarly, the generator's "MSE loss" is also stuck near 0.25 averaged over 21,000 images. However, the L1 loss is steadily decreasing: 1.6 after the 1st epoch and 1.27 after the 18th epoch.
1) Visually, the results seem fine, but I am concerned that the generator converges too quickly to a non-optimal result and is able to easily fool the discriminator. Any thoughts on this?
2) According to [Goodfellow](https://arxiv.org/pdf/1701.00160.pdf), the Nash equilibrium is around D(x) = 0.5 for all x. Given that my combined discriminator loss is stuck at 0.25, what could be done to alleviate the issue? I also tried to make the discriminator stronger by increasing the LR and the number of layers, but the loss still stayed near 0.25.
3) Also, since you have an "n_layers" implementation of your discriminator, how do you usually choose the depth? For example, I have 6 layers in my generator; what would be the ideal choice for the n_layers discriminator?
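To sanity-check question 2, here is the arithmetic (my own worked example, not from either paper) for the LSGAN losses at Goodfellow's equilibrium D(x) = 0.5: both the real and fake MSE terms land at exactly 0.25, which matches the plateau in my plots:

```python
# Discriminator stuck at the equilibrium prediction D(x) = 0.5
# (hypothetical outputs; batch of 4 patches)
d_real = [0.5, 0.5, 0.5, 0.5]  # predictions on real images, target 1.0
d_fake = [0.5, 0.5, 0.5, 0.5]  # predictions on fake images, target 0.0

def mse(preds, target):
    # same reduction as torch.nn.MSELoss with the default 'mean'
    return sum((p - target) ** 2 for p in preds) / len(preds)

loss_d_real = mse(d_real, 1.0)  # (0.5 - 1.0)^2 = 0.25
loss_d_fake = mse(d_fake, 0.0)  # (0.5 - 0.0)^2 = 0.25
print(loss_d_real, loss_d_fake)  # 0.25 0.25
```

So a flat 0.25 on both curves is exactly what a discriminator that always outputs 0.5 would produce; the open question is whether that is a healthy equilibrium or a saturated D.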
Training Details:
LR = 0.0001 or 0.00005
Learning Rate Decay = None
Dropout: None
No LeakyReLU's in Generator
BatchNorms = None in Generator
Loss: lsgan
BatchSize = 1
**Loss_D_Fake**

**Loss_D_Real**

**Loss_G_L1**

**Loss_G_GAN**

| open | 2020-01-23T13:12:41Z | 2020-01-27T18:45:37Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/904 | [] | DeepZono | 2 |
open-mmlab/mmdetection | pytorch | 11,245 | How to get GFL esk loss with segmentation masks | I have a heavily imbalanced dataset, GFL learns it really well, but I'm now in need of segmentation output and cascade mask rcnn seems to be struggling.
I'm a little limited in my choice because I want to be able to export to ONNX, so a change in loss would be ideal, but several other issues seemed to have had little success with this approach | open | 2023-12-03T16:14:02Z | 2023-12-04T10:54:15Z | https://github.com/open-mmlab/mmdetection/issues/11245 | [] | GeorgePearse | 1 |
Allen7D/mini-shop-server | sqlalchemy | 108 | sqlalchemy: a column default of UrlFromEnum.LOCAL fails on create | UrlFromEnum.LOCAL comes from a variable set in the config file. Source: 1 = local, 2 = public network.
```python
from app.libs.enums import UrlFromEnum

class File(Base):
    __tablename__ = 'file'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, comment='parent directory id')
    uuid_name = Column(String(100), comment='unique name')
    name = Column(String(100), nullable=False, comment='original name')
    path = Column(String(500), comment='path')
    extension = Column(String(50), comment='file extension')
    _from = Column('from', SmallInteger, default=file_from, comment='source: 1 = local, 2 = public network')
```
In the file upload endpoint:
```python
uploader = LocalUploader(files)  # or import and use QiniuUploader
uploader.locate(parent_id=id)
res = uploader.upload()
```
And in the uploader method:
```python
File.create(
    parent_id=self.parent_id,
    name=single.filename,
    uuid_name=uuid_filename,
    path=relative_path,
    extension=self._get_ext(single.filename),
    size=self._get_size(single),
    md5=file_md5
)
```
This fails with the error below:

The full error log is as follows:
127.0.0.1 - - [02/Mar/2021 11:57:34] "POST /cms/file/1 HTTP/1.1" 500 -
Traceback (most recent call last):
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\flask\app.py", line 2464, in __call__
return self.wsgi_app(environ, start_response)
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\werkzeug\middleware\proxy_fix.py", line 169, in __call__
return self.app(environ, start_response)
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\flask\app.py", line 2450, in wsgi_app
response = self.handle_exception(e)
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\flask_cors\extension.py", line 165, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\flask\app.py", line 1867, in handle_exception
reraise(exc_type, exc_value, tb)
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\flask\_compat.py", line 39, in reraise
raise value
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\flask\app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\flask\app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\flask_cors\extension.py", line 165, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\flask\app.py", line 1822, in handle_user_exception
return handler(e)
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\app\__init__.py", line 128, in framework_error
raise e
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\flask\app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\flask\app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\flasgger\utils.py", line 248, in wrapper
return function(*args, **kwargs)
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\app\extensions\api_docs\redprint.py", line 59, in wrapper
return f(*args, **kwargs)
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\app\api\cms\file.py", line 46, in upload_file
res = uploader.upload()
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\app\extensions\file\local_uploader.py", line 49, in upload
md5=file_md5
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\app\core\db.py", line 162, in create
return instance.save(commit)
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\app\core\db.py", line 176, in save
db.session.commit()
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\orm\scoping.py", line 163, in do
return getattr(self.registry(), name)(*args, **kwargs)
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\orm\session.py", line 1042, in commit
self.transaction.commit()
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\orm\session.py", line 504, in commit
self._prepare_impl()
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\orm\session.py", line 483, in _prepare_impl
self.session.flush()
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\orm\session.py", line 2536, in flush
self._flush(objects)
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\orm\session.py", line 2678, in _flush
transaction.rollback(_capture_exception=True)
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\util\langhelpers.py", line 70, in __exit__
with_traceback=exc_tb,
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\util\compat.py", line 182, in raise_
raise exception
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\orm\session.py", line 2638, in _flush
flush_context.execute()
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\orm\unitofwork.py", line 422, in execute
rec.execute(self)
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\orm\unitofwork.py", line 589, in execute
uow,
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\orm\persistence.py", line 245, in save_obj
insert,
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\orm\persistence.py", line 1136, in _emit_insert_statements
statement, params
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\engine\base.py", line 1011, in execute
return meth(self, multiparams, params)
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\sql\elements.py", line 298, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\engine\base.py", line 1130, in _execute_clauseelement
distilled_params,
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\engine\base.py", line 1317, in _execute_context
e, statement, parameters, cursor, context
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\engine\base.py", line 1514, in _handle_dbapi_exception
util.raise_(exc_info[1], with_traceback=exc_info[2])
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\util\compat.py", line 182, in raise_
raise exception
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\engine\base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\sqlalchemy\engine\default.py", line 593, in do_execute
cursor.execute(statement, parameters)
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\cymysql\cursors.py", line 117, in execute
escaped_args = tuple(conn.escape(arg) for arg in args)
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\cymysql\cursors.py", line 117, in <genexpr>
escaped_args = tuple(conn.escape(arg) for arg in args)
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\cymysql\connections.py", line 306, in escape
return escape_item(obj, self.charset, self.encoders)
File "C:\Users\57137\Desktop\微信小程序\小程序后端\mini-shop-server-dev\f-venv\lib\site-packages\cymysql\converters.py", line 327, in escape_item
return encoders[type(val)](val)
KeyError: <enum 'UrlFromEnum'>
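For what it's worth, my reading of the final `KeyError: <enum 'UrlFromEnum'>` (a guess, not verified against this repo): cymysql escapes parameters by looking up the Python type in its `encoders` dict, which has no entry for Enum subclasses, so the enum member itself cannot be inserted. Storing the enum's underlying integer instead avoids the failing lookup. A minimal stdlib sketch of the idea (the values 1/2 follow the column comment; the real enum lives in `app.libs.enums`):

```python
from enum import Enum

class UrlFromEnum(Enum):
    LOCAL = 1   # local upload
    PUBLIC = 2  # public network

# Passing UrlFromEnum.LOCAL as the column default is what cymysql
# chokes on; passing the plain int works with any DB-API driver.
file_from = UrlFromEnum.LOCAL.value
print(file_from)  # 1
```

Applied to the model, that would be `_from = Column('from', SmallInteger, default=UrlFromEnum.LOCAL.value, ...)`; a SQLAlchemy `TypeDecorator` that converts the enum on bind would also work, but is a bigger change.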
| closed | 2021-03-02T04:06:10Z | 2021-03-02T08:24:12Z | https://github.com/Allen7D/mini-shop-server/issues/108 | [] | TuLingZb | 0 |
streamlit/streamlit | deep-learning | 9,951 | Tabs dont respond when using nested cache functions | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
[](https://issues.streamlitapp.com/?issue=gh-9951)
I detected a critical problem with tabs when updating Streamlit to a version newer than 1.35.0 (1.36 and up all have this problem). I found the issue on mobile, but it also reproduces on PC.
In my app I have the following scenario:
- Multiple tabs
- Several of them call functions that are cached
- And those functions in turn call (sometimes several times) other nested cached functions.

In version 1.35 everything works fine on mobile and PC, but when I tried to update to a newer version, I noticed that switching between tabs stops working (they become extremely unresponsive and the app appears to crash). This is strange because my understanding was that switching tabs does not trigger any reruns/calculations.

If you remove all the `@st.cache_data` decorators from the reproducible code example, everything works just fine. So the problem seems to be that Streamlit is doing something with the cached data when I try to switch tabs.
### Reproducible Code Example
```Python
import streamlit as st
st.header(body = "Testing problem switching tabs")
@st.cache_data(ttl=None)
def cached_func_level4():
return "test"
@st.cache_data(ttl=None)
def cached_func_level3():
return cached_func_level4()
@st.cache_data(ttl=None)
def cached_func_level2():
return cached_func_level3()
@st.cache_data(ttl=None)
def cached_func_level1():
return cached_func_level2()
@st.cache_data(ttl=None)
def cached_func_level0():
    # If you iterate more than 2000 times, the tab problem is even bigger
for _ in range(2000):
x = cached_func_level1()
return x
# In this testing tabs I only print a value and execute the
# "root" cached function, which calls other cached funcs
admin_tabs = st.tabs(["test1", "test2"])
with admin_tabs[0]:
st.write("Hello")
val = cached_func_level0()
with admin_tabs[1]:
st.write("World!")
val = cached_func_level0()
```
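For scale, here is a back-of-the-envelope sketch (plain Python, my own analogy rather than Streamlit internals) of why the loop above is not free even when every call is a cache hit: each script run still pays one lookup per call site, so `cached_func_level0` alone triggers 2000 lookups per run:

```python
lookups = 0

def cache(fn):
    # toy stand-in for @st.cache_data: memoize by args, count lookups
    store = {}
    def wrapper(*args):
        global lookups
        lookups += 1          # a cache hit still costs a lookup
        if args not in store:
            store[args] = fn(*args)
        return store[args]
    return wrapper

@cache
def cached_func_level1():
    return "test"

def cached_func_level0():
    for _ in range(2000):
        x = cached_func_level1()
    return x

cached_func_level0()
print(lookups)  # 2000
```

If each real lookup also involves hashing or serialization, the cost per rerun multiplies accordingly, which could explain the regression if newer versions made per-hit work more expensive.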
### Steps To Reproduce
Just run streamlit and when the page renders try to switch between the tabs.
### Expected Behavior
The expected behavior would be to be able to switch tabs without delay
### Current Behavior
Now the tabs freeze when you try to switch between them, and the app either does not respond or responds extremely slowly.
### Is this a regression?
- [X] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.36 and up
- Python version: 3.11.5
- Operating System: Windows and iOS
- Browser: Testing in both safari and chrome
### Additional Information
_No response_ | closed | 2024-12-01T16:47:01Z | 2024-12-06T21:41:32Z | https://github.com/streamlit/streamlit/issues/9951 | [
"type:bug",
"feature:cache",
"priority:P3"
] | ricardorfe | 7 |
gunthercox/ChatterBot | machine-learning | 1,559 | why is there always an error while running `python manage.py runserver 0.0.0.0:8000`? | Although someone has asked this question before,
I am still confused about what is going wrong here:
NoReverseMatch at /
'chatterbot' is not a registered namespace
Request Method: GET
Request URL: http://127.0.0.1:8000/
Django Version: 2.1.5
Exception Type: NoReverseMatch
Exception Value:
'chatterbot' is not a registered namespace
Exception Location: /home/charliecheng/.local/lib/python3.5/site-packages/django/urls/base.py in reverse, line 86
Python Executable: /usr/bin/python3
Python Version: 3.5.2
Python Path:
['/home/charliecheng/Desktop/ChatterBot-master/examples/django_app',
'/usr/lib/python35.zip',
'/usr/lib/python3.5',
'/usr/lib/python3.5/plat-x86_64-linux-gnu',
'/usr/lib/python3.5/lib-dynload',
'/home/charliecheng/.local/lib/python3.5/site-packages',
'/usr/local/lib/python3.5/dist-packages',
'/usr/local/lib/python3.5/dist-packages/ChatterBot-1.0.0a3-py3.5.egg',
'/usr/local/lib/python3.5/dist-packages/SQLAlchemy-1.2.16-py3.5-linux-x86_64.egg',
'/usr/local/lib/python3.5/dist-packages/python_twitter-3.5-py3.5.egg',
'/usr/local/lib/python3.5/dist-packages/pymongo-3.7.2-py3.5-linux-x86_64.egg',
'/usr/local/lib/python3.5/dist-packages/chatterbot_corpus-1.2.0-py3.5.egg',
'/usr/local/lib/python3.5/dist-packages/requests_oauthlib-1.1.0-py3.5.egg',
'/usr/local/lib/python3.5/dist-packages/future-0.17.1-py3.5.egg',
'/usr/local/lib/python3.5/dist-packages/PyYAML-3.13-py3.5-linux-x86_64.egg',
'/usr/local/lib/python3.5/dist-packages/oauthlib-2.1.0-py3.5.egg',
'/usr/lib/python3/dist-packages']
Server time: Sat, 12 Jan 2019 14:42:56 +0000
Error during template rendering
In template /home/charliecheng/Desktop/ChatterBot-master/examples/django_app/example_app/templates/nav.html, error at line 21
'chatterbot' is not a registered namespace
11 <li class="nav-item">
12 <a class="nav-link" href="http://chatterbot.readthedocs.io/en/stable/">Documentation</a>
13 </li>
14 <li class="nav-item">
15 <a class="nav-link" href="https://github.com/gunthercox/ChatterBot">GitHub</a>
16 </li>
17 </ul>
18
19 <ul class="nav navbar-nav float-xs-right">
20 <li class="nav-item">
21 <a class="nav-link" href="{% url 'chatterbot:chatterbot' %}">API</a>
22 </li>
23 <li class="nav-item">
24 <a class="nav-link" href="{% url 'admin:index' %}">Admin</a>
25 </li>
26 </ul>
27 </nav>
Traceback
/home/charliecheng/.local/lib/python3.5/site-packages/django/urls/base.py in reverse
extra, resolver = resolver.namespace_dict[ns]
...
During handling of the above exception ('chatterbot'), another exception occurred:
/home/charliecheng/.local/lib/python3.5/site-packages/django/core/handlers/exception.py in inner
response = get_response(request)
...
/home/charliecheng/.local/lib/python3.5/site-packages/django/core/handlers/base.py in _get_response
response = self.process_exception_by_middleware(e, request)
...
/home/charliecheng/.local/lib/python3.5/site-packages/django/core/handlers/base.py in _get_response
response = response.render()
...
/home/charliecheng/.local/lib/python3.5/site-packages/django/template/response.py in render
self.content = self.rendered_content
...
/home/charliecheng/.local/lib/python3.5/site-packages/django/template/response.py in rendered_content
content = template.render(context, self._request)
...
/home/charliecheng/.local/lib/python3.5/site-packages/django/template/backends/django.py in render
return self.template.render(context)
...
/home/charliecheng/.local/lib/python3.5/site-packages/django/template/base.py in render
return self._render(context)
...
/home/charliecheng/.local/lib/python3.5/site-packages/django/template/base.py in _render
return self.nodelist.render(context)
...
/home/charliecheng/.local/lib/python3.5/site-packages/django/template/base.py in render
bit = node.render_annotated(context)
...
/home/charliecheng/.local/lib/python3.5/site-packages/django/template/base.py in render_annotated
return self.render(context)
...
/home/charliecheng/.local/lib/python3.5/site-packages/django/template/loader_tags.py in render
return template.render(context)
...
/home/charliecheng/.local/lib/python3.5/site-packages/django/template/base.py in render
return self._render(context)
...
/home/charliecheng/.local/lib/python3.5/site-packages/django/template/base.py in _render
return self.nodelist.render(context)
...
/home/charliecheng/.local/lib/python3.5/site-packages/django/template/base.py in render
bit = node.render_annotated(context)
...
/home/charliecheng/.local/lib/python3.5/site-packages/django/template/base.py in render_annotated
return self.render(context)
...
/home/charliecheng/.local/lib/python3.5/site-packages/django/template/defaulttags.py in render
url = reverse(view_name, args=args, kwargs=kwargs, current_app=current_app)
...
/home/charliecheng/.local/lib/python3.5/site-packages/django/urls/base.py in reverse
raise NoReverseMatch("%s is not a registered namespace" % key)
...
Request information
USER
[unable to retrieve the current user]
GET
No GET data
POST
No POST data
FILES
No FILES data
COOKIES
No cookie data
META
Variable Value
CLUTTER_IM_MODULE
'xim'
COMPIZ_CONFIG_PROFILE
'ubuntu'
CONTENT_LENGTH
''
CONTENT_TYPE
'text/plain'
DBUS_SESSION_BUS_ADDRESS
'unix:abstract=/tmp/dbus-zfV5ALKRKJ'
DEFAULTS_PATH
'/usr/share/gconf/ubuntu.default.path'
DESKTOP_SESSION
'ubuntu'
DISPLAY
':0'
DJANGO_SETTINGS_MODULE
'example_app.settings'
GATEWAY_INTERFACE
'CGI/1.1'
GDMSESSION
'ubuntu'
GDM_LANG
'en_US'
GNOME_DESKTOP_SESSION_ID
'this-is-deprecated'
GNOME_KEYRING_CONTROL
''
GNOME_KEYRING_PID
''
GPG_AGENT_INFO
'/home/charliecheng/.gnupg/S.gpg-agent:0:1'
GTK2_MODULES
'overlay-scrollbar'
GTK_IM_MODULE
'ibus'
GTK_MODULES
'gail:atk-bridge:unity-gtk-module'
HOME
'/home/charliecheng'
HTTP_ACCEPT
'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'
HTTP_ACCEPT_ENCODING
'gzip, deflate'
HTTP_ACCEPT_LANGUAGE
'en-US,en;q=0.5'
HTTP_CONNECTION
'keep-alive'
HTTP_HOST
'127.0.0.1:8000'
HTTP_USER_AGENT
'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0'
IM_CONFIG_PHASE
'1'
INSTANCE
''
JOB
'dbus'
LANG
'en_US.UTF-8'
LANGUAGE
'en_US'
LESSCLOSE
'/usr/bin/lesspipe %s %s'
LESSOPEN
'| /usr/bin/lesspipe %s'
LOGNAME
'charliecheng'
LS_COLORS
'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:'
MANDATORY_PATH
'/usr/share/gconf/ubuntu.mandatory.path'
PATH
'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'
PATH_INFO
'/'
PWD
'/home/charliecheng/Desktop/ChatterBot-master/examples/django_app'
QT4_IM_MODULE
'xim'
QT_ACCESSIBILITY
'1'
QT_IM_MODULE
'ibus'
QT_LINUX_ACCESSIBILITY_ALWAYS_ON
'1'
QT_QPA_PLATFORMTHEME
'appmenu-qt5'
QUERY_STRING
''
REMOTE_ADDR
'127.0.0.1'
REMOTE_HOST
''
REQUEST_METHOD
'GET'
RUN_MAIN
'true'
SCRIPT_NAME
''
SERVER_NAME
'localhost'
SERVER_PORT
'8000'
SERVER_PROTOCOL
'HTTP/1.1'
SERVER_SOFTWARE
'WSGIServer/0.2'
SESSION
'ubuntu'
SESSIONTYPE
'gnome-session'
SESSION_MANAGER
'local/ubuntu:@/tmp/.ICE-unix/1634,unix/ubuntu:/tmp/.ICE-unix/1634'
SHELL
'/bin/bash'
SHLVL
'1'
SSH_AUTH_SOCK
'/run/user/1000/keyring/ssh'
TERM
'xterm-256color'
TZ
'UTC'
UPSTART_SESSION
'unix:abstract=/com/ubuntu/upstart-session/1000/1392'
USER
'charliecheng'
VTE_VERSION
'4205'
WINDOWID
'54526215'
XAUTHORITY
'/home/charliecheng/.Xauthority'
XDG_CONFIG_DIRS
'/etc/xdg/xdg-ubuntu:/usr/share/upstart/xdg:/etc/xdg'
XDG_CURRENT_DESKTOP
'Unity'
XDG_DATA_DIRS
'/usr/share/ubuntu:/usr/share/gnome:/usr/local/share/:/usr/share/:/var/lib/snapd/desktop'
XDG_GREETER_DATA_DIR
'/var/lib/lightdm-data/charliecheng'
XDG_MENU_PREFIX
'gnome-'
XDG_RUNTIME_DIR
'/run/user/1000'
XDG_SEAT
'seat0'
XDG_SEAT_PATH
'/org/freedesktop/DisplayManager/Seat0'
XDG_SESSION_DESKTOP
'ubuntu'
XDG_SESSION_ID
'c2'
XDG_SESSION_PATH
'/org/freedesktop/DisplayManager/Session0'
XDG_SESSION_TYPE
'x11'
XDG_VTNR
'7'
XMODIFIERS
'@im=ibus'
_
'/usr/bin/python3'
wsgi.errors
<_io.TextIOWrapper name='<stderr>' mode='w' encoding='UTF-8'>
wsgi.file_wrapper
''
wsgi.input
<django.core.handlers.wsgi.LimitedStream object at 0x7ff7da66dcc0>
wsgi.multiprocess
False
wsgi.multithread
True
wsgi.run_once
False
wsgi.url_scheme
'http'
wsgi.version
(1, 0)
Settings
Using settings module example_app.settings
Setting Value
ABSOLUTE_URL_OVERRIDES
{}
ADMINS
[]
ALLOWED_HOSTS
[]
APPEND_SLASH
True
AUTHENTICATION_BACKENDS
['django.contrib.auth.backends.ModelBackend']
AUTH_PASSWORD_VALIDATORS
'********************'
AUTH_USER_MODEL
'auth.User'
BASE_DIR
'/home/charliecheng/Desktop/ChatterBot-master/examples/django_app'
CACHES
{'default': {'BACKEND': 'django.core.cache.backends.locmem.LocMemCache'}}
CACHE_MIDDLEWARE_ALIAS
'default'
CACHE_MIDDLEWARE_KEY_PREFIX
'********************'
CACHE_MIDDLEWARE_SECONDS
600
CHATTERBOT
{'django_app_name': 'django_chatterbot', 'name': 'Django ChatterBot Example'}
CSRF_COOKIE_AGE
31449600
CSRF_COOKIE_DOMAIN
None
CSRF_COOKIE_HTTPONLY
False
CSRF_COOKIE_NAME
'csrftoken'
CSRF_COOKIE_PATH
'/'
CSRF_COOKIE_SAMESITE
'Lax'
CSRF_COOKIE_SECURE
False
CSRF_FAILURE_VIEW
'django.views.csrf.csrf_failure'
CSRF_HEADER_NAME
'HTTP_X_CSRFTOKEN'
CSRF_TRUSTED_ORIGINS
[]
CSRF_USE_SESSIONS
False
DATABASES
{'default': {'ATOMIC_REQUESTS': False,
'AUTOCOMMIT': True,
'CONN_MAX_AGE': 0,
'ENGINE': 'django.db.backends.sqlite3',
'HOST': '',
'NAME': '/home/charliecheng/Desktop/ChatterBot-master/examples/django_app/db.sqlite3',
'OPTIONS': {},
'PASSWORD': '********************',
'PORT': '',
'TEST': {'CHARSET': None,
'COLLATION': None,
'MIRROR': None,
'NAME': None},
'TIME_ZONE': None,
'USER': ''}}
DATABASE_ROUTERS
[]
DATA_UPLOAD_MAX_MEMORY_SIZE
2621440
DATA_UPLOAD_MAX_NUMBER_FIELDS
1000
DATETIME_FORMAT
'N j, Y, P'
DATETIME_INPUT_FORMATS
['%Y-%m-%d %H:%M:%S',
'%Y-%m-%d %H:%M:%S.%f',
'%Y-%m-%d %H:%M',
'%Y-%m-%d',
'%m/%d/%Y %H:%M:%S',
'%m/%d/%Y %H:%M:%S.%f',
'%m/%d/%Y %H:%M',
'%m/%d/%Y',
'%m/%d/%y %H:%M:%S',
'%m/%d/%y %H:%M:%S.%f',
'%m/%d/%y %H:%M',
'%m/%d/%y']
DATE_FORMAT
'N j, Y'
DATE_INPUT_FORMATS
['%Y-%m-%d',
'%m/%d/%Y',
'%m/%d/%y',
'%b %d %Y',
'%b %d, %Y',
'%d %b %Y',
'%d %b, %Y',
'%B %d %Y',
'%B %d, %Y',
'%d %B %Y',
'%d %B, %Y']
DEBUG
True
DEBUG_PROPAGATE_EXCEPTIONS
False
DECIMAL_SEPARATOR
'.'
DEFAULT_CHARSET
'utf-8'
DEFAULT_CONTENT_TYPE
'text/html'
DEFAULT_EXCEPTION_REPORTER_FILTER
'django.views.debug.SafeExceptionReporterFilter'
DEFAULT_FILE_STORAGE
'django.core.files.storage.FileSystemStorage'
DEFAULT_FROM_EMAIL
'webmaster@localhost'
DEFAULT_INDEX_TABLESPACE
''
DEFAULT_TABLESPACE
''
DISALLOWED_USER_AGENTS
[]
EMAIL_BACKEND
'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST
'localhost'
EMAIL_HOST_PASSWORD
'********************'
EMAIL_HOST_USER
''
EMAIL_PORT
25
EMAIL_SSL_CERTFILE
None
EMAIL_SSL_KEYFILE
'********************'
EMAIL_SUBJECT_PREFIX
'[Django] '
EMAIL_TIMEOUT
None
EMAIL_USE_LOCALTIME
False
EMAIL_USE_SSL
False
EMAIL_USE_TLS
False
FILE_CHARSET
'utf-8'
FILE_UPLOAD_DIRECTORY_PERMISSIONS
None
FILE_UPLOAD_HANDLERS
['django.core.files.uploadhandler.MemoryFileUploadHandler',
'django.core.files.uploadhandler.TemporaryFileUploadHandler']
FILE_UPLOAD_MAX_MEMORY_SIZE
2621440
FILE_UPLOAD_PERMISSIONS
None
FILE_UPLOAD_TEMP_DIR
None
FIRST_DAY_OF_WEEK
0
FIXTURE_DIRS
[]
FORCE_SCRIPT_NAME
None
FORMAT_MODULE_PATH
None
FORM_RENDERER
'django.forms.renderers.DjangoTemplates'
IGNORABLE_404_URLS
[]
INSTALLED_APPS
('django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'chatterbot.ext.django_chatterbot',
'example_app')
INTERNAL_IPS
[]
LANGUAGES
[('af', 'Afrikaans'),
('ar', 'Arabic'),
('ast', 'Asturian'),
('az', 'Azerbaijani'),
('bg', 'Bulgarian'),
('be', 'Belarusian'),
('bn', 'Bengali'),
('br', 'Breton'),
('bs', 'Bosnian'),
('ca', 'Catalan'),
('cs', 'Czech'),
('cy', 'Welsh'),
('da', 'Danish'),
('de', 'German'),
('dsb', 'Lower Sorbian'),
('el', 'Greek'),
('en', 'English'),
('en-au', 'Australian English'),
('en-gb', 'British English'),
('eo', 'Esperanto'),
('es', 'Spanish'),
('es-ar', 'Argentinian Spanish'),
('es-co', 'Colombian Spanish'),
('es-mx', 'Mexican Spanish'),
('es-ni', 'Nicaraguan Spanish'),
('es-ve', 'Venezuelan Spanish'),
('et', 'Estonian'),
('eu', 'Basque'),
('fa', 'Persian'),
('fi', 'Finnish'),
('fr', 'French'),
('fy', 'Frisian'),
('ga', 'Irish'),
('gd', 'Scottish Gaelic'),
('gl', 'Galician'),
('he', 'Hebrew'),
('hi', 'Hindi'),
('hr', 'Croatian'),
('hsb', 'Upper Sorbian'),
('hu', 'Hungarian'),
('ia', 'Interlingua'),
('id', 'Indonesian'),
('io', 'Ido'),
('is', 'Icelandic'),
('it', 'Italian'),
('ja', 'Japanese'),
('ka', 'Georgian'),
('kab', 'Kabyle'),
('kk', 'Kazakh'),
('km', 'Khmer'),
('kn', 'Kannada'),
('ko', 'Korean'),
('lb', 'Luxembourgish'),
('lt', 'Lithuanian'),
('lv', 'Latvian'),
('mk', 'Macedonian'),
('ml', 'Malayalam'),
('mn', 'Mongolian'),
('mr', 'Marathi'),
('my', 'Burmese'),
('nb', 'Norwegian Bokmål'),
('ne', 'Nepali'),
('nl', 'Dutch'),
('nn', 'Norwegian Nynorsk'),
('os', 'Ossetic'),
('pa', 'Punjabi'),
('pl', 'Polish'),
('pt', 'Portuguese'),
('pt-br', 'Brazilian Portuguese'),
('ro', 'Romanian'),
('ru', 'Russian'),
('sk', 'Slovak'),
('sl', 'Slovenian'),
('sq', 'Albanian'),
('sr', 'Serbian'),
('sr-latn', 'Serbian Latin'),
('sv', 'Swedish'),
('sw', 'Swahili'),
('ta', 'Tamil'),
('te', 'Telugu'),
('th', 'Thai'),
('tr', 'Turkish'),
('tt', 'Tatar'),
('udm', 'Udmurt'),
('uk', 'Ukrainian'),
('ur', 'Urdu'),
('vi', 'Vietnamese'),
('zh-hans', 'Simplified Chinese'),
('zh-hant', 'Traditional Chinese')]
LANGUAGES_BIDI
['he', 'ar', 'fa', 'ur']
LANGUAGE_CODE
'en-us'
LANGUAGE_COOKIE_AGE
None
LANGUAGE_COOKIE_DOMAIN
None
LANGUAGE_COOKIE_NAME
'django_language'
LANGUAGE_COOKIE_PATH
'/'
LOCALE_PATHS
[]
LOGGING
{}
LOGGING_CONFIG
'logging.config.dictConfig'
LOGIN_REDIRECT_URL
'/accounts/profile/'
LOGIN_URL
'/accounts/login/'
LOGOUT_REDIRECT_URL
None
MANAGERS
[]
MEDIA_ROOT
''
MEDIA_URL
''
MESSAGE_STORAGE
'django.contrib.messages.storage.fallback.FallbackStorage'
MIDDLEWARE
[]
MIDDLEWARE_CLASSES
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware')
MIGRATION_MODULES
{}
MONTH_DAY_FORMAT
'F j'
NUMBER_GROUPING
0
PASSWORD_HASHERS
'********************'
PASSWORD_RESET_TIMEOUT_DAYS
'********************'
PREPEND_WWW
False
ROOT_URLCONF
'example_app.urls'
SECRET_KEY
'********************'
SECURE_BROWSER_XSS_FILTER
False
SECURE_CONTENT_TYPE_NOSNIFF
False
SECURE_HSTS_INCLUDE_SUBDOMAINS
False
SECURE_HSTS_PRELOAD
False
SECURE_HSTS_SECONDS
0
SECURE_PROXY_SSL_HEADER
None
SECURE_REDIRECT_EXEMPT
[]
SECURE_SSL_HOST
None
SECURE_SSL_REDIRECT
False
SERVER_EMAIL
'root@localhost'
SESSION_CACHE_ALIAS
'default'
SESSION_COOKIE_AGE
1209600
SESSION_COOKIE_DOMAIN
None
SESSION_COOKIE_HTTPONLY
True
SESSION_COOKIE_NAME
'sessionid'
SESSION_COOKIE_PATH
'/'
SESSION_COOKIE_SAMESITE
'Lax'
SESSION_COOKIE_SECURE
False
SESSION_ENGINE
'django.contrib.sessions.backends.db'
SESSION_EXPIRE_AT_BROWSER_CLOSE
False
SESSION_FILE_PATH
None
SESSION_SAVE_EVERY_REQUEST
False
SESSION_SERIALIZER
'django.contrib.sessions.serializers.JSONSerializer'
SETTINGS_MODULE
'example_app.settings'
SHORT_DATETIME_FORMAT
'm/d/Y P'
SHORT_DATE_FORMAT
'm/d/Y'
SIGNING_BACKEND
'django.core.signing.TimestampSigner'
SILENCED_SYSTEM_CHECKS
[]
STATICFILES_DIRS
('/home/charliecheng/Desktop/ChatterBot-master/examples/django_app/example_app/static',)
STATICFILES_FINDERS
['django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder']
STATICFILES_STORAGE
'django.contrib.staticfiles.storage.StaticFilesStorage'
STATIC_ROOT
None
STATIC_URL
'/static/'
TEMPLATES
[{'APP_DIRS': True,
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'OPTIONS': {'context_processors': ['django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages']}}]
TEST_NON_SERIALIZED_APPS
[]
TEST_RUNNER
'django.test.runner.DiscoverRunner'
THOUSAND_SEPARATOR
','
TIME_FORMAT
'P'
TIME_INPUT_FORMATS
['%H:%M:%S', '%H:%M:%S.%f', '%H:%M']
TIME_ZONE
'UTC'
USE_I18N
True
USE_L10N
True
USE_THOUSAND_SEPARATOR
False
USE_TZ
True
USE_X_FORWARDED_HOST
False
USE_X_FORWARDED_PORT
False
WSGI_APPLICATION
'example_app.wsgi.application'
X_FRAME_OPTIONS
'SAMEORIGIN'
YEAR_MONTH_FORMAT
'F Y'
You're seeing this error because you have DEBUG = True in your Django settings file. Change that to False, and Django will display a standard page generated by the handler for this status code.
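If it helps later readers: the usual cause of `'chatterbot' is not a registered namespace` (a hedged guess from the traceback, not a confirmed diagnosis of this example app) is that the project `urls.py` includes the ChatterBot routes without registering the `chatterbot` namespace that `{% url 'chatterbot:chatterbot' %}` in nav.html expects. For Django 2.1 the include would look something like:

```python
# example_app/urls.py (sketch; the route prefix is an assumption)
from django.urls import include, path

urlpatterns = [
    path('api/chatterbot/', include(
        ('chatterbot.ext.django_chatterbot.urls', 'chatterbot'),
        namespace='chatterbot',
    )),
]
```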
| closed | 2019-01-12T14:44:42Z | 2019-12-03T03:31:46Z | https://github.com/gunthercox/ChatterBot/issues/1559 | [
"answered"
] | chengtianyue | 11 |
pytorch/pytorch | machine-learning | 149,626 | `Segmentation Fault` in `torch.lstm_cell` | ### 🐛 Describe the bug
The following code snippet causes a `segmentation fault` when running torch.lstm_cell:
```python
import torch
inp = torch.full((0, 8), 0, dtype=torch.float)
hx = torch.full((0, 9), 0, dtype=torch.float)
cx = torch.full((0, 9), 0, dtype=torch.float)
w_ih = torch.full((1, 8), 1.251e+12, dtype=torch.float)
w_hh = torch.full((1, 9), 1.4013e-45, dtype=torch.float)
b_ih = None
b_hh = None
torch.lstm_cell(inp, (hx, cx), w_ih, w_hh, b_ih, b_hh)
```
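A defensive workaround on the calling side is possible; this is my own sketch (not a PyTorch fix, and `safe_lstm_cell` is a name I made up): skip the kernel entirely for zero-sized batches and hand back the (equally empty) hidden state unchanged:

```python
import torch

def safe_lstm_cell(inp, hidden, w_ih, w_hh, b_ih=None, b_hh=None):
    # hypothetical guard: a zero-sized batch has nothing to step,
    # so return the (empty) hidden state instead of calling the kernel
    if inp.shape[0] == 0:
        hx, cx = hidden
        return hx.clone(), cx.clone()
    return torch.lstm_cell(inp, hidden, w_ih, w_hh, b_ih, b_hh)
```

With the shapes from the repro above, the guard path is taken and the crashing kernel is never reached.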
### Versions
torch 2.6.0
cc @mikaylagawarecki | open | 2025-03-20T15:24:09Z | 2025-03-21T07:08:54Z | https://github.com/pytorch/pytorch/issues/149626 | [
"module: crash",
"module: rnn",
"triaged",
"bug",
"module: empty tensor",
"topic: fuzzer"
] | vwrewsge | 1 |
raphaelvallat/pingouin | pandas | 118 | Get residuals from anova | Hi, recently I have been playing with `statsmodels` and `pinguin` and I have not been able to figure out how to get residuals from pinguin. What I mean is (taking your example):
```python
import pingouin as pg
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

df = pg.read_dataset('anova2')

# Pingouin
df.anova(dv="Yield", between=["Blend", "Crop"]).round(3)

# statsmodels
model = ols('Yield ~ C(Blend) + C(Crop) + C(Blend):C(Crop)', df).fit()
aov_table = anova_lm(model, typ=2)
res = model.resid
pg.qqplot(res, dist='norm')
```
If I want to plot the residuals, how can I do that without calling `model.resid`?
Thanks.

| open | 2020-08-17T14:39:25Z | 2021-10-28T22:19:47Z | https://github.com/raphaelvallat/pingouin/issues/118 | [
"feature request :construction:"
] | jankaWIS | 1 |
iMerica/dj-rest-auth | rest-api | 326 | Rest-Auth+AllAuth+PhoneNumberField TypeError: 'PhoneNumber' object is not subscriptable | models.py:
```python
from django.contrib.auth.models import AbstractUser
from phonenumber_field.modelfields import PhoneNumberField


class CustomUser(AbstractUser):
    username = PhoneNumberField(unique=True)
```
payloads:
```json
{
    "username": "+8801700000000",
    "password1": "demo",
    "password2": "demo",
    "email": "demo@demo.com"
}
```
response:
```
Internal Server Error: /api/rest-auth/registration/
Traceback (most recent call last):
File "venv\lib\site-packages\django\core\handlers\exception.py", line 47, in inner
response = get_response(request)
File "venv\lib\site-packages\django\core\handlers\base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "venv\lib\site-packages\django\views\decorators\csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "venv\lib\site-packages\django\views\generic\base.py", line 70, in view
return self.dispatch(request, *args, **kwargs)
File "venv\lib\site-packages\django\utils\decorators.py", line 43, in _wrapper
return bound_method(*args, **kwargs)
File "venv\lib\site-packages\django\views\decorators\debug.py", line 89, in sensitive_post_parameters_wrapper
return view(request, *args, **kwargs)
File "venv\lib\site-packages\rest_auth\registration\views.py", line 46, in dispatch
return super(RegisterView, self).dispatch(*args, **kwargs)
File "venv\lib\site-packages\rest_framework\views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "venv\lib\site-packages\rest_framework\views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "venv\lib\site-packages\rest_framework\views.py", line 480, in raise_uncaught_exception
raise exc
File "venv\lib\site-packages\rest_framework\views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "venv\lib\site-packages\rest_framework\generics.py", line 190, in post
return self.create(request, *args, **kwargs)
File "venv\lib\site-packages\rest_auth\registration\views.py", line 65, in create
user = self.perform_create(serializer)
File "venv\lib\site-packages\rest_auth\registration\views.py", line 73, in perform_create
user = serializer.save(self.request)
File "venv\lib\site-packages\rest_auth\registration\serializers.py", line 210, in save
adapter.save_user(request, user, self)
File "venv\lib\site-packages\allauth\account\adapter.py", line 242, in save_user
self.populate_username(request, user)
File "venv\lib\site-packages\allauth\account\adapter.py", line 209, in populate_username
user_username(
File "venv\lib\site-packages\allauth\account\utils.py", line 120, in user_username
return user_field(user, app_settings.USER_MODEL_USERNAME_FIELD, *args)
File "venv\lib\site-packages\allauth\account\utils.py", line 110, in user_field
v = v[0:max_length]
TypeError: 'PhoneNumber' object is not subscriptable
``` | open | 2021-11-01T20:35:13Z | 2021-11-01T20:35:38Z | https://github.com/iMerica/dj-rest-auth/issues/326 | [] | nikolas-dev | 0 |
ultralytics/ultralytics | computer-vision | 19,821 | Train YOLOv11/v8 on Intel Arc Discrete GPU | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
What are the required changes to be made in ultralytics repository to train the YOLOv8/v11 model on Intel Arc Discrete GPUs?
### Additional
_No response_ | closed | 2025-03-22T09:03:51Z | 2025-03-23T14:04:53Z | https://github.com/ultralytics/ultralytics/issues/19821 | [
"enhancement",
"question"
] | ramesh-dev-code | 5 |
proplot-dev/proplot | matplotlib | 132 | There is a bug when using matplotlib after using proplot | Hey guys, I ran into a problem today.
First I ran the script below using proplot, and got normal results.
After that, I ran my second script with plain matplotlib; however, the resulting figure doesn't render correctly.
The first script is from the [proplot documentation](https://proplot.readthedocs.io/en/latest/subplots.html):
```python
import proplot as plot
import numpy as np

N = 50
M = 40
state = np.random.RandomState(51423)
colors = plot.Colors('grays_r', M, left=0.1, right=0.8)
datas = []
for scale in (1, 3, 7, 0.2):
    data = scale * (state.rand(N, M) - 0.5).cumsum(axis=0)[N//2:, :]
    datas.append(data)
for share in (0, ):
    f, axs = plot.subplots(
        ncols=4, aspect=1, axwidth=1.2,
        sharey=share, spanx=share//2
    )
    for ax, data in zip(axs, datas):
        ax.plot(data, cycle=colors)
    ax.format(
        suptitle=f'Axis-sharing level: {share}, spanning labels {["off","on"][share//2]}',
        grid=False, xlabel='spanning', ylabel='shared'
    )
```
That figure comes out fine:

My second codes are as followed:
```python
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure(figsize=(6, 2))
axes = fig.add_axes((0, 0, 0.8, 1))
x = np.linspace(-10, 10, 1000)
line1, = axes.plot(x, np.sin(x))
axes.set_xlim((-10, 10))
axes.set_ylim((-2, 2))
axes.spines['right'].set_color('none')
axes.spines['top'].set_color('none')
axes.xaxis.set_ticks_position('bottom')
axes.yaxis.set_ticks_position('left')
axes.spines['bottom'].set_position(('data', 1))
axes.spines['left'].set_position(('data', 2))
plt.show()
```
Interestingly, this is where the problem appears: after running the proplot script, the same code produces a figure like this:

If I restart Spyder and run only the second script, it renders correctly, as shown below. But if I run the first script and then run the second one again, the problem is still there. It really confuses me.

By the way, I ran the code in Spyder (version 4.0.1). I am not sure whether this is a Spyder problem or something caused by proplot.
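One thing that may be worth trying (an assumption on my part, not a confirmed fix): proplot configures matplotlib's global rcParams when it is imported, so restoring the defaults before running the second script could show whether shared rcParams state is the cause:

```python
import matplotlib
import matplotlib.pyplot as plt

# restore matplotlib's built-in defaults that proplot may have overridden
matplotlib.rcdefaults()
plt.style.use('default')
```

If the second figure renders correctly after this, the problem is the shared rcParams state rather than Spyder.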
Thanks.
| closed | 2020-03-12T14:37:45Z | 2020-05-09T12:00:44Z | https://github.com/proplot-dev/proplot/issues/132 | [
"support"
] | ZaamlamLeung | 1 |
mkhorasani/Streamlit-Authenticator | streamlit | 94 | Capturing username of failed login attempts | I would like to write code that blocks a user after 3 failed login attempts.
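The counting side of that is straightforward; here is a minimal illustrative sketch (my own code, not Streamlit-Authenticator API), assuming the attempted username can be captured from the login form:

```python
MAX_ATTEMPTS = 3
failed_attempts = {}  # username -> consecutive failure count

def record_login(username, authenticated):
    """Return True if the account should now be blocked."""
    if authenticated:
        failed_attempts.pop(username, None)  # reset the counter on success
        return False
    failed_attempts[username] = failed_attempts.get(username, 0) + 1
    return failed_attempts[username] >= MAX_ATTEMPTS
```

For this to work, the failed attempt has to report the real username rather than None.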
Currently, it seems like failed login attempts return the username None. | closed | 2023-10-12T19:52:14Z | 2024-01-25T20:55:07Z | https://github.com/mkhorasani/Streamlit-Authenticator/issues/94 | [] | IndigoJay | 1 |
biolab/orange3 | data-visualization | 6,485 | Pivot Table widget - wrong filtered data output | Hi there,
I am experiencing a problem with the Pivot Table widget, specifically with the "filtered data" output. I want to use it to select specific groups of my pivoted table to analyze separately. When I select one of the groups as intended, the routing to the widget's output does not work correctly. As seen in the picture, I select the group with 8 instances, but the widget outputs 23 instances, which is the group to its left.

When I select all of the groups, the number of output instances is also wrong: it doesn't include the rightmost column of groups (>=30). The total count of instances in the output should be 94 but is only 50, with 44 instances missing from the rightmost column.

It seems to me that the selection is simply taking all the instances belonging to the group left to the one I select with my mouse.
I'm using Orange 3.35 as Standalone on Windows 11.
Thanks a lot and all the best,
Merit | closed | 2023-06-21T07:13:27Z | 2023-07-04T11:34:41Z | https://github.com/biolab/orange3/issues/6485 | [
"bug report"
] | meritwagner | 2 |
chiphuyen/stanford-tensorflow-tutorials | tensorflow | 142 | 07_convnet_layers.py self.training should use tf.placeholder | If you use a Python bool, the value of `training` can never change, because the logits are already built before the session starts and you can't change the graph during a session.
The correct way should use tf.placeholder like this:
`self.training = tf.placeholder(tf.bool, name='is_train')`
And feed it at run time: `sess.run(init, feed_dict={self.training: True})` when training and `sess.run(init, feed_dict={self.training: False})` when evaluating.
huggingface/peft | pytorch | 1,636 | Error running the deepspeed qlora example | ### System Info
Hi,
I am trying to run a qlora finetuning experiment on deepspeed similar to the one given in the sft folder.
I use the same requirements.txt as given. However, I face an issue
_
```
Traceback (most recent call last):
File "/opt/ml/code/train_qlora.py", line 161, in <module>
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
File "/opt/conda/lib/python3.10/site-packages/transformers/hf_argparser.py", line 338, in parse_args_into_dataclasses
obj = dtype(**inputs)
File "<string>", line 123, in __init__
File "/opt/conda/lib/python3.10/site-packages/transformers/training_args.py", line 1528, in __post_init__
and (self.device.type != "cuda")
File "/opt/conda/lib/python3.10/site-packages/transformers/training_args.py", line 1995, in device
return self._setup_devices
File "/opt/conda/lib/python3.10/site-packages/transformers/utils/generic.py", line 56, in __get__
cached = self.fget(obj)
File "/opt/conda/lib/python3.10/site-packages/transformers/training_args.py", line 1931, in _setup_devices
self.distributed_state = PartialState(
File "/opt/conda/lib/python3.10/site-packages/accelerate/state.py", line 180, in __init__
from deepspeed import comm as dist
ImportError: cannot import name 'comm' from 'deepspeed' (/opt/conda/lib/python3.10/site-packages/deepspeed/__init__.py)
Traceback (most recent call last):
File "/opt/ml/code/train_qlora.py", line 161, in <module>
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
File "/opt/conda/lib/python3.10/site-packages/transformers/hf_argparser.py", line 338, in parse_args_into_dataclasses
obj = dtype(**inputs)
File "<string>", line 123, in __init__
File "/opt/conda/lib/python3.10/site-packages/transformers/training_args.py", line 1528, in __post_init__
and (self.device.type != "cuda")
File "/opt/conda/lib/python3.10/site-packages/transformers/training_args.py", line 1995, in device
return self._setup_devices
File "/opt/conda/lib/python3.10/site-packages/transformers/utils/generic.py", line 56, in __get__
cached = self.fget(obj)
File "/opt/conda/lib/python3.10/site-packages/transformers/training_args.py", line 1931, in _setup_devices
self.distributed_state = PartialState(
File "/opt/conda/lib/python3.10/site-packages/accelerate/state.py", line 180, in __init__
from deepspeed import comm as dist
ImportError
: cannot import name 'comm' from 'deepspeed' (/opt/conda/lib/python3.10/site-packages/deepspeed/__init__.py)
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
requirements.txt:
```
git+https://github.com/huggingface/transformers@v4.38.2
git+https://github.com/huggingface/accelerate@v0.28.0
git+https://github.com/huggingface/peft@v0.9.0
git+https://github.com/huggingface/trl@v0.7.11
deepspeed
PyGithub
flash-attn
huggingface-hub
evaluate
datasets
bitsandbytes==0.43.0
einops
wandb
tensorboard
tiktoken
pandas
numpy
scipy
matplotlib
sentencepiece
nltk
xformers
hf_transfer
```
train file:
`peft/examples/sft/train.py`
### Expected behavior
Training should run | closed | 2024-04-09T14:30:55Z | 2024-04-11T14:41:41Z | https://github.com/huggingface/peft/issues/1636 | [] | AnirudhVIyer | 2 |
kizniche/Mycodo | automation | 1,211 | DHT22 not working | I'm using the last version of mycodo with a raspberries 3, and I'm having issue connecting my DHT22. Ive install pigpiod and when I'm sending: <<sudo systemctl enable pigpiod>> I receive: <<Failed to enable unit: Refusing to operate on alias name or linked unit file: pigpiod.service>> . Ive tried to enable remote GPIO pin, purge pigpiod and reinstall it. Can you help me?
| closed | 2022-07-06T02:14:03Z | 2022-08-21T22:24:57Z | https://github.com/kizniche/Mycodo/issues/1211 | [] | Tommynator062 | 2 |
Kav-K/GPTDiscord | asyncio | 306 | System instruction setting guide | I've noticed that the `/gpt instruction` command doesn't have a guide on proper usage. @Hikari-Haru, would you be able to give me the main points of how it works? I'd be happy to add it to the docs.
| closed | 2023-05-08T18:33:03Z | 2023-06-16T02:14:41Z | https://github.com/Kav-K/GPTDiscord/issues/306 | [] | Raecaug | 2 |
miguelgrinberg/flasky | flask | 378 | Question about the function send_async_mail | Instead of passing the object app to the new thread, i passed the object context to the thread and it worked. But i am not sure if it is right to do this.
my code:
```python
def send_async_mail(msg, context):
    with context:
        mail.send(msg)

def send_mail(to, subject, template, **kwargs):
    context = current_app.app_context()
    msg = Message(subject=current_app.config['FLASKY_MAIL_SUBJECT_PREFIX'] + subject,
                  sender=current_app.config['FLASKY_MAIL_SENDER'],
                  recipients=[to])
    msg.body = render_template(template + '.txt', **kwargs)
    msg.html = render_template(template + '.html', **kwargs)
    thr = Thread(target=send_async_mail, args=[msg, context])
    thr.start()
    return thr
```
the original code:
```python
def send_async_email(app, msg):
    with app.app_context():
        mail.send(msg)

def send_email(to, subject, template, **kwargs):
    app = current_app._get_current_object()
    msg = Message(app.config['FLASKY_MAIL_SUBJECT_PREFIX'] + ' ' + subject,
                  sender=app.config['FLASKY_MAIL_SENDER'], recipients=[to])
    msg.body = render_template(template + '.txt', **kwargs)
    msg.html = render_template(template + '.html', **kwargs)
    thr = Thread(target=send_async_email, args=[app, msg])
    thr.start()
    return thr
```
| closed | 2018-08-17T02:44:06Z | 2018-08-17T08:06:43Z | https://github.com/miguelgrinberg/flasky/issues/378 | [
"question"
] | ChenYizhu97 | 2 |
keras-team/keras | machine-learning | 20,536 | BUG in load_weights_from_hdf5_group_by_name | https://github.com/keras-team/keras/blob/5d36ee1f219bb650dd108c35b257c783cd034ffd/keras/src/legacy/saving/legacy_h5_format.py#L521-L525
`model.trainable_weights + model.non_trainable_weights` references all of the model's weights instead of only the top-level ones. | closed | 2024-11-22T19:49:14Z | 2024-11-30T11:00:53Z | https://github.com/keras-team/keras/issues/20536 | [
"type:Bug"
] | edwardyehuang | 1 |
vaexio/vaex | data-science | 1,478 | [BUG-REPORT] Unable to export dataframe to hdf5 when it includes multibyte named column (on Windows OS) | Hi, everyone
**Description**
Runtime Error at h5py when saving dataframe includes many columns, and one of them contains Multibyte string.
It might only happen on Windows. (It didn't happen when I tried it on MacOS and Ubuntu )
```python
import vaex
source_dict = {"東京": [1, 2, 3], "a": [1, 2, 3], "b": [1,2,3], "c": [1, 2, 3], "d": [1, 2, 3], "a1": [1, 2, 3], "b2": [1,2,3], "c3": [1, 2, 3], "d4": [1, 2, 3]}
df = vaex.from_dict(source_dict)
df.export_hdf5("testfile.hdf5")
```
```
Traceback (most recent call last):
File ".\main.py", line 8, in <module>
df.export_hdf5("testfile.hdf5")
File "C:\Users\naohiro.heya\.pyenv\pyenv-win\versions\3.8.7\lib\site-packages\vaex\dataframe.py", line 6486, in export_hdf5
writer.write(self, chunk_size=chunk_size, progress=progress, column_count=column_count)
File "C:\Users\naohiro.heya\.pyenv\pyenv-win\versions\3.8.7\lib\site-packages\vaex\hdf5\writer.py", line 40, in __exit__
self.close()
File "C:\Users\naohiro.heya\.pyenv\pyenv-win\versions\3.8.7\lib\site-packages\vaex\hdf5\writer.py", line 31, in close
self.h5.close()
File "C:\Users\naohiro.heya\.pyenv\pyenv-win\versions\3.8.7\lib\site-packages\h5py\_hl\files.py", line 457, in close
h5i.dec_ref(id_)
File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py\h5i.pyx", line 153, in h5py.h5i.dec_ref
RuntimeError: Can't decrement id ref count (unable to extend file properly)
```
**Software information**
- Vaex version : {'vaex': '4.1.0', 'vaex-core': '4.3.0.post1', 'vaex-viz': '0.5.0', 'vaex-hdf5': '0.7.0', 'vaex-server': '0.4.1', 'vaex-astro': '0.8.2', 'vaex-jupyter': '0.6.0', 'vaex-ml': '0.11.1', 'vaex-arrow': '0.5.1'}
- Vaex was installed via: pip
- OS: Windows 10 Pro (10.0.19042 build 19042)
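A possible workaround while this stands (an assumption on my part, not a verified fix): rename non-ASCII columns to ASCII-safe aliases before exporting, e.g. by applying a mapping like the one below with vaex's `df.rename(old, new)`. The helper `ascii_safe_names` is my own illustrative code:

```python
def ascii_safe_names(names):
    # map multibyte column names to ASCII-safe aliases; ASCII names pass through
    mapping = {}
    for i, name in enumerate(names):
        mapping[name] = name if name.isascii() else f"col_{i}"
    return mapping

print(ascii_safe_names(["東京", "a", "b"]))  # {'東京': 'col_0', 'a': 'a', 'b': 'b'}
```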
Thanks!
| closed | 2021-07-28T09:42:58Z | 2021-08-04T14:01:45Z | https://github.com/vaexio/vaex/issues/1478 | [
"bug"
] | SyureNyanko | 7 |
microsoft/nni | pytorch | 5,130 | speedup_model() causes an error: “NotImplementedError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors)” | **Describe the issue**:
[2022-09-16 15:20:07] start to speedup the model
[2022-09-16 15:20:09] infer module masks...
[2022-09-16 15:20:09] Update mask for net.conv1.conv1.0
[2022-09-16 15:20:09] Update mask for net.aten::cat.87
Traceback (most recent call last):
File "network_prune.py", line 587, in <module>
m_Speedup.speedup_model()
File "/root/miniconda3/lib/python3.8/site-packages/nni/compression/pytorch/speedup/compressor.py", line 536, in speedup_model
self.infer_modules_masks()
File "/root/miniconda3/lib/python3.8/site-packages/nni/compression/pytorch/speedup/compressor.py", line 371, in infer_modules_masks
self.update_direct_sparsity(curnode)
File "/root/miniconda3/lib/python3.8/site-packages/nni/compression/pytorch/speedup/compressor.py", line 231, in update_direct_sparsity
_auto_infer = AutoMaskInference(
File "/root/miniconda3/lib/python3.8/site-packages/nni/compression/pytorch/speedup/infer_mask.py", line 80, in __init__
self.output = self.module(*dummy_input)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/root/miniconda3/lib/python3.8/site-packages/nni/compression/pytorch/speedup/jit_translate.py", line 561, in forward
return torch.cat(args, dim=self.cat_dim)
NotImplementedError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat. This usually means that this function requires a non-empty list of Tensors, or that you (the operator writer) forgot to register a fallback function. Available functions are [CPU, CUDA, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, AutocastCPU, Autocast, Batched, VmapMode, Functionalize].
**Environment**:
- NNI version: 2.8 and 2.9
- Training service (local|remote|pai|aml|etc): local and remote
- Client OS: ubuntu
- Server OS (for remote mode only):
 - Python version: Python 3.8
 - PyTorch/TensorFlow version: PyTorch 1.11.0, CUDA 11.3
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: no
| closed | 2022-09-16T07:45:10Z | 2023-03-08T08:52:52Z | https://github.com/microsoft/nni/issues/5130 | [
"support",
"ModelSpeedup",
"need more info"
] | missu123 | 3 |
flaskbb/flaskbb | flask | 345 | `flaskbb upgrade fixture` returns wrong number of settings | I just ran
` flaskbb upgrade --all --fixture settings` (to look into #336)
This returned
```
[+] Found config file 'flaskbb.cfg' in /home/student/Desktop/uni/coding/github/flaskbb
[+] Using config from: /home/student/Desktop/uni/coding/github/flaskbb/flaskbb.cfg
[+] Upgrading migrations to the latest version...
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
[+] Updating fixtures...
[+] 4 groups and 4 settings updated.
```
Apparently it outputs the same number twice, even though the number of individual settings should be much higher than 4.
"bug"
] | shunju | 2 |
uriyyo/fastapi-pagination | fastapi | 724 | TypeError("'ObjectId' object is not iterable") | Hi All
I'm trying to use this library to paginate data from a mongodb database using Motor.
I can get the `paginate()` query to work, and this returns the data from MongoDB/Motor - all good.
My issue comes when I try to then return this via fastAPI return_model and I get the following trace:
```python
Traceback (most recent call last):
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/fastapi/encoders.py", line 152, in jsonable_encoder
data = dict(obj)
^^^^^^^^^
TypeError: 'ObjectId' object is not iterable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/fastapi/encoders.py", line 157, in jsonable_encoder
data = vars(obj)
^^^^^^^^^
TypeError: vars() argument must have __dict__ attribute
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 428, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/fastapi/applications.py", line 284, in __call__
await super().__call__(scope, receive, send)
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
raise e
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 259, in app
content = await serialize_response(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 146, in serialize_response
return jsonable_encoder(
^^^^^^^^^^^^^^^^^
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/fastapi/encoders.py", line 66, in jsonable_encoder
return jsonable_encoder(
^^^^^^^^^^^^^^^^^
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/fastapi/encoders.py", line 117, in jsonable_encoder
encoded_value = jsonable_encoder(
^^^^^^^^^^^^^^^^^
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/fastapi/encoders.py", line 131, in jsonable_encoder
jsonable_encoder(
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/fastapi/encoders.py", line 117, in jsonable_encoder
encoded_value = jsonable_encoder(
^^^^^^^^^^^^^^^^^
File "/Users/myuser/GIT/fm_webCrawler/.venv/lib/python3.11/site-packages/fastapi/encoders.py", line 160, in jsonable_encoder
raise ValueError(errors) from e
ValueError: [TypeError("'ObjectId' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')]
```
I have the following setup to serialise the `ObjectId` to a string, and I can prove this works by just returning a `List[MyModel]` via FastAPI.
```python
class PyObjectId(ObjectId):
    @classmethod
    def __get_validators__(cls):
        yield cls.validate

    @classmethod
    def validate(cls, v):
        if not ObjectId.is_valid(v):
            raise ValueError("Invalid objectid")
        return ObjectId(v)

    @classmethod
    def __modify_schema__(cls, field_schema):
        field_schema.update(type="string")


class Result(BaseModel):
    name: Optional[str]
    videolinks: Optional[List] = Field(default_factory=list)
    longdesc: Optional[str]
    highlights: Optional[List] = Field(default_factory=list)
    spec: Optional[List] = Field(default_factory=list)

    class Config:
        allow_population_by_field_name = True
        arbitrary_types_allowed = True
        json_encoders = {ObjectId: str}


class AMSStandardProduct(BaseModel):
    id: PyObjectId = Field(default_factory=PyObjectId, alias="_id")
    mpn: Optional[str]
    status: str = 'NEW'  # Set to enum when done.
    supplier: str
    url: Optional[str]
    result: Optional[Result]

    class Config:
        allow_population_by_field_name = True
        arbitrary_types_allowed = True
        json_encoders = {ObjectId: str}


@app.get(
    "/", response_description="List all products", response_model=Page[AMSStandardProduct]
)
async def list_products():
    products = await paginate(collection=db['supplier'])
    return products


@app.get("/all", response_model=List[AMSStandardProduct])
async def list_all_products():
    products = await db['supplier'].find().to_list(10)
    return products
```
It appears to be around the fact that when my model is no longer the top level, the serialiser fails.
Does anyone have any good examples of this pattern working with Mongo and Motor?
I'm happy to write the examples into the docs etc once I find the answer out :)
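For what it's worth, the requirement can be shown without FastAPI at all. In the sketch below, `FakeObjectId` and `mongo_default` are stand-ins I made up for the example: whichever serializer walks the page must be given a fallback that turns `ObjectId` into `str` at every nesting level, which is what `json_encoders` is supposed to provide:

```python
import json

class FakeObjectId:  # stand-in for bson.ObjectId, only for this sketch
    def __init__(self, value):
        self.value = value
    def __str__(self):
        return self.value

def mongo_default(obj):
    # fallback used for any object json can't serialize natively
    if isinstance(obj, FakeObjectId):
        return str(obj)
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

page = {"items": [{"_id": FakeObjectId("64a1f0"), "mpn": "X1"}], "total": 1}
print(json.dumps(page, default=mongo_default))
# {"items": [{"_id": "64a1f0", "mpn": "X1"}], "total": 1}
```

The trace suggests `jsonable_encoder` never reaches such a fallback for the nested `ObjectId`, which matches the observation that it only fails when the model is not at the top level.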
| closed | 2023-06-29T10:51:39Z | 2023-07-03T09:24:09Z | https://github.com/uriyyo/fastapi-pagination/issues/724 | [
"question"
] | Bobspadger | 5 |
erdewit/ib_insync | asyncio | 417 | reqHistoricalData() raises a RuntimeError: There is no current event loop in thread 'Thread-1 (some_function)' | hi,
i'm trying to access the `reqHistoricalData()` api from `ib_insync` but i get a runtime error whny trying to run it async.
i'm using the `threading` module with `start()` and `join()` methods.
i also noticed that if i access `reqHistoricalDataAsync()` instead, it doesnt return a `BarDataList` anymore but instead it returns a `coroutine` object and can't extract any bar data from it anymore.
i've also followed [this youtube video](https://youtu.be/IEEhzQoKtQU?t=408) on how to call functions with multithreading and i was able to get a basic function to run but not the ibkr api.
here is the code that i run synchronously:
```
import threading
import concurrent.futures
import time
from ib_insync import *
import inspect
stock_input_list = ['A', 'AA', 'AAC', 'AACG', 'AACI', 'AACIU', 'AACIW', 'AADI', 'AAIC', 'AAIC PRB']
print("the stock input list is ", stock_input_list)
ibkr = IB()
ibkr.connect('127.0.0.1', 7496, clientId=1, readonly=True)
def some_function(list):
for item in list[:]:
list.remove(item)
contract = Stock(symbol=item, exchange="SMART", currency="USD")
ibkr_bar_list = ibkr.reqHistoricalData(contract, endDateTime='', durationStr='1 D', barSizeSetting='1 day', whatToShow='MIDPOINT', useRTH=True)
if ibkr_bar_list:
ibkr_closing_data = ibkr_bar_list[0].close
else:
ibkr_closing_data = 0.0
list.append(ibkr_closing_data)
return
start = time.perf_counter()
some_function(stock_input_list)
finish = time.perf_counter()
print("finished in", round(finish - start, 4), "seconds")
ibkr.disconnect()
print("the stock input list is ", stock_input_list)
```
and here is the code that i try to run asynchronously:
```
import threading
import concurrent.futures
import time
from ib_insync import *
import inspect
stock_input_list = ['A', 'AA', 'AAC', 'AACG', 'AACI', 'AACIU', 'AACIW', 'AADI', 'AAIC', 'AAIC PRB']
print("the stock input list is ", stock_input_list)
ibkr = IB()
ibkr.connect('127.0.0.1', 7496, clientId=1, readonly=True)
def some_function(list):
for item in list[:]:
list.remove(item)
contract = Stock(symbol=item, exchange="SMART", currency="USD")
ibkr_bar_list = ibkr.reqHistoricalDataAsync(contract, endDateTime='', durationStr='1 D', barSizeSetting='1 day', whatToShow='MIDPOINT', useRTH=True)
if ibkr_bar_list:
ibkr_closing_data = ibkr_bar_list[0].close
else:
ibkr_closing_data = 0.0
list.append(ibkr_closing_data)
return
start = time.perf_counter()
t1 = threading.Thread(target=some_function, args=[stock_input_list])
t1.start()
t1.join()
finish = time.perf_counter()
print("finished in", round(finish - start, 4), "seconds")
ibkr.disconnect()
print("the stock input list is ", stock_input_list)
```
and here are the exceptions i get:
```
Exception in thread Thread-1 (some_function):
Traceback (most recent call last):
...
raise RuntimeError('There is no current event loop in thread %r.'
RuntimeError: There is no current event loop in thread 'Thread-1 (some_function)'.
C:\Python310\lib\threading.py:1011: RuntimeWarning: coroutine 'IB.reqHistoricalDataAsync' was never awaited
self._invoke_excepthook(self)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
```
Do you have any code examples for calling these functions async? I only found [async streaming ticks](https://ib-insync.readthedocs.io/recipes.html#async-streaming-ticks), but that doesn't look like what I'm looking for.
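Not an ib_insync-specific answer, but the `RuntimeError` above happens because a freshly spawned thread has no asyncio event loop. Here is a minimal stdlib-only sketch (with `fake_req_historical` as a made-up stand-in for `reqHistoricalDataAsync`) of giving the worker thread its own loop and awaiting coroutines on it:

```python
import asyncio
import threading

results = []

async def fake_req_historical(symbol):
    # stand-in for an awaitable API call like ib.reqHistoricalDataAsync(...)
    await asyncio.sleep(0)
    return f"{symbol}: bars"

def worker(symbols):
    # a new thread has no event loop, so create and register one for it
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        for sym in symbols:
            # run_until_complete awaits the coroutine instead of leaving it unawaited
            results.append(loop.run_until_complete(fake_req_historical(sym)))
    finally:
        loop.close()

t = threading.Thread(target=worker, args=(["A", "AA"],))
t.start()
t.join()
print(results)  # ['A: bars', 'AA: bars']
```

The same pattern should let a worker thread await the real coroutines rather than dropping them unawaited; treat this only as an illustration of the event-loop requirement, not as ib_insync's recommended usage.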
thank you | closed | 2021-12-04T16:09:12Z | 2021-12-26T11:40:25Z | https://github.com/erdewit/ib_insync/issues/417 | [] | properchopsticks | 3 |
encode/httpx | asyncio | 3,176 | Stricter typing for request parameters. | Ref: https://github.com/encode/starlette/pull/2534#issuecomment-2067634124
This issue was opened to resolve all kinds of problems where third-party packages may need to use private imports. We should definitely expose them to avoid problems like https://github.com/encode/httpx/pull/3130#issuecomment-1974778017.
I will include the package name along with its private imports (only starlette for now), so we can clearly identify what needs to be exposed.
Note that:
- [ ] -> Not yet publicly exposed
- [x] -> Publicly exposed
We can also add other packages as well to track all of them in this issue.
# Starlette
- [ ] [httpx._types.CookieTypes](https://github.com/encode/starlette/blob/96c90f26622c8f243ad965371eae2c2028a518de/starlette/testclient.py#L431)
- [ ] [httpx._client.UseClientDefault](https://github.com/encode/starlette/blob/96c90f26622c8f243ad965371eae2c2028a518de/starlette/testclient.py#L475C17-L475C47)
- [x] [httpx._client.USE_CLIENT_DEFAULT](https://github.com/encode/starlette/blob/96c90f26622c8f243ad965371eae2c2028a518de/starlette/testclient.py#L477)
- [ ] [httpx._types.URLTypes](https://github.com/encode/starlette/blob/96c90f26622c8f243ad965371eae2c2028a518de/starlette/testclient.py#L497C14-L497C35)
- [ ] [httpx._types.RequestContent](https://github.com/encode/starlette/blob/96c90f26622c8f243ad965371eae2c2028a518de/starlette/testclient.py#L499C18-L499C45)
- [ ] [httpx._types.RequestFiles](https://github.com/encode/starlette/blob/96c90f26622c8f243ad965371eae2c2028a518de/starlette/testclient.py#L501C16-L501C41)
- [ ] [httpx._types.QueryParamTypes](https://github.com/encode/starlette/blob/96c90f26622c8f243ad965371eae2c2028a518de/starlette/testclient.py#L503C17-L503C45)
- [ ] [httpx._types.HeaderTypes](https://github.com/encode/starlette/blob/96c90f26622c8f243ad965371eae2c2028a518de/starlette/testclient.py#L504C18-L504C42)
- [ ] [httpx._types.CookieTypes](https://github.com/encode/starlette/blob/96c90f26622c8f243ad965371eae2c2028a518de/starlette/testclient.py#L505C18-L505C42)
- [ ] [httpx._types.AuthTypes](https://github.com/encode/starlette/blob/96c90f26622c8f243ad965371eae2c2028a518de/starlette/testclient.py#L506C15-L506C37)
- [ ] [httpx._types.TimeoutTypes](https://github.com/encode/starlette/blob/96c90f26622c8f243ad965371eae2c2028a518de/starlette/testclient.py#L510C18-L510C43) | open | 2024-04-21T09:09:58Z | 2024-09-27T11:18:25Z | https://github.com/encode/httpx/issues/3176 | [
"user-experience"
] | karpetrosyan | 12 |
holoviz/panel | matplotlib | 7,805 | Card layouts break and overlap when in a container of a constrained size and expanded | <details>
<summary>Software Version Info</summary>
```plaintext
panel == 1.6.1
```
</details>
#### Description of expected behavior and the observed behavior
I expect the cards to respect the overflow property of the container they are in and not overlap when expanded.
**Example 1 overflow: auto in column container**
https://github.com/user-attachments/assets/78a749ae-f82b-4836-9b04-85464a60210a
**Example 2 no overflow specified**
https://github.com/user-attachments/assets/d9d82ab0-a5c3-43f8-9925-07e996223b30
#### Complete, minimal, self-contained example code that reproduces the issue
**Example 1 overflow: auto in column container**
```python
import panel as pn
card1 = pn.layout.Card(pn.pane.Markdown("""
Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
"""), title="Card 1")
card2 = pn.layout.Card(pn.pane.Markdown("""
In a world where technology and nature coexist,
the balance between innovation and preservation becomes crucial.
As we advance into the future, we must remember the lessons of the past,
embracing sustainable practices that honor our planet.
Together, we can forge a path that respects both progress and the environment,
ensuring a brighter tomorrow for generations to come.
"""), title="Card 2")
pn.Column(card1, card2, height=200, styles={'overflow': 'auto'}).servable()
```
**Example 2 no overflow specified**
```python
import panel as pn
card1 = pn.layout.Card(pn.pane.Markdown("""
Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
"""), title="Card 1")
card2 = pn.layout.Card(pn.pane.Markdown("""
In a world where technology and nature coexist,
the balance between innovation and preservation becomes crucial.
As we advance into the future, we must remember the lessons of the past,
embracing sustainable practices that honor our planet.
Together, we can forge a path that respects both progress and the environment,
ensuring a brighter tomorrow for generations to come.
"""), title="Card 2")
pn.Column(card1, card2, height=200).servable()
```
I think this stems from the recalculation of this style, but I'm not quite sure how to get around it:
 | open | 2025-03-24T19:46:16Z | 2025-03-24T19:57:43Z | https://github.com/holoviz/panel/issues/7805 | [] | DmitriyLeybel | 2 |
flairNLP/flair | nlp | 3,292 | [Feature]: add BINDER | ### Problem statement
Extend the label verbalizer decoder to the [BINDER](https://openreview.net/forum?id=9EAQVEINuum) approach, which uses a contrastive / similarity loss, whereas the current implementation uses matrix multiplication.
### Solution
Write a new class that implements the BINDER paper.
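For illustration only (pure Python, neither Flair nor BINDER code): the existing decoder scores a span embedding against label embeddings with a raw dot product, while a BINDER-style objective would use a temperature-scaled cosine similarity, which is invariant to embedding magnitude. The temperature value here is a made-up hyperparameter:

```python
import math

def dot_score(span, label):
    # dot-product scoring, as in the current matrix-multiplication decoder
    return sum(s * l for s, l in zip(span, label))

def cosine_score(span, label, temperature=0.07):
    # temperature-scaled cosine similarity, as used in contrastive objectives
    norm = math.sqrt(sum(s * s for s in span)) * math.sqrt(sum(l * l for l in label))
    return dot_score(span, label) / (norm * temperature)

span = [0.5, 1.0]
label = [1.0, 2.0]              # same direction, twice the magnitude
print(dot_score(span, label))    # 2.5, grows with magnitude
print(round(cosine_score(span, label) * 0.07, 6))  # 1.0, pure direction
```

The new class would plug the second kind of score (plus the contrastive loss over in-batch negatives) into the existing label-verbalizer machinery.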
### Additional Context
_No response_ | open | 2023-08-06T10:36:21Z | 2023-08-06T10:36:22Z | https://github.com/flairNLP/flair/issues/3292 | [
"feature"
] | whoisjones | 0 |
google/trax | numpy | 1,331 | How to get SQuAD in form of text instead of stream? | Hi,
I can only get SQuAD as a **generator** via `trax.data.tf_inputs.data_stream()`.
How can I get it as **text** instead of a **generator**? Or at least, how can I convert it into text?
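Generically (stdlib only, not trax-specific), a generator can be materialized by iterating it; `data_stream` below is a stand-in for `trax.data.tf_inputs.data_stream(...)`:

```python
import itertools

def data_stream():  # stand-in for trax.data.tf_inputs.data_stream(...)
    for i in range(1000):
        yield f"example {i}"

first_ten = list(itertools.islice(data_stream(), 10))  # bounded prefix
print(first_ten[0])  # example 0
```

Note that `list(stream)` exhausts the whole generator, so for a large dataset take a bounded slice as above rather than materializing everything.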
| closed | 2020-12-21T04:17:36Z | 2020-12-24T06:18:55Z | https://github.com/google/trax/issues/1331 | [] | ngoquanghuy99 | 0 |
deepset-ai/haystack | nlp | 8,967 | Pipeline drawing: expose `timeout` parameter and increase the default | End-to-End Haystack tests (executed nightly) are failing due to [Mermaid timeouts](https://github.com/deepset-ai/haystack/actions/runs/13643217064/job/38137252561) during Pipeline drawing.
I can also reproduce this on Colab running one of our tutorials.
To resolve this before 2.11.0, I would:
- increase timeout (currently set to 10 seconds)
- expose a `timeout` parameter, to allow users to control this behavior
"P1"
] | anakin87 | 0 |
pallets/flask | python | 4,816 | Speed up Dataclass JSON. | https://github.com/pallets/flask/blob/c34c84b69085e6bce67d0701b8f8ba3145f42ff2/src/flask/json/provider.py#L116-L117
asdict is super slow and unneeded here: it makes a deep copy, but since the object is only being encoded, there is no need for one.
I can post some alternatives I have rattled around for this. If I remember correctly, it was roughly 10x faster to build a Mapping and convert that to a dictionary than to use asdict for a flat class of 4 immutable fields. The performance difference gets much larger the more complex the dataclass is. Unfortunately, Python doesn't support encoding a Mapping directly, so you still need to convert it into a dictionary.
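Roughly, the shallow approach looks like this (my own sketch, not Flask code): read each field once via `dataclasses.fields`, caching the field names and an `attrgetter` per class:

```python
from dataclasses import dataclass, fields
from functools import lru_cache
import operator

@lru_cache(maxsize=None)
def _getters(cls):
    # computed once per dataclass type, then cached
    names = tuple(f.name for f in fields(cls))
    return names, operator.attrgetter(*names)

def shallow_asdict(obj):
    # shallow, non-recursive alternative to dataclasses.asdict (no deep copy)
    names, getter = _getters(type(obj))
    values = getter(obj)
    if len(names) == 1:  # attrgetter returns a bare value for a single name
        values = (values,)
    return dict(zip(names, values))

@dataclass(frozen=True)
class Point:
    x: int
    y: int

print(shallow_asdict(Point(1, 2)))  # {'x': 1, 'y': 2}
```

Nested dataclasses would then be handled by the encoder's normal recursion rather than being copied up front.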
I can post some snippets for it. I know I had one that used caching and [this function](https://docs.python.org/3/library/dataclasses.html#dataclasses.fields) to cache the calls by type. | closed | 2022-09-12T18:48:43Z | 2022-09-28T00:12:03Z | https://github.com/pallets/flask/issues/4816 | [] | zrothberg | 1 |
kornia/kornia | computer-vision | 2,438 | Use Kornia Boxes utility to automate the boxes conversion | Use Kornia Boxes utility to automate the boxes conversion
_Originally posted by @edgarriba in https://github.com/kornia/kornia/pull/2363#discussion_r1257304152_
| open | 2023-07-08T16:18:50Z | 2023-07-08T16:19:14Z | https://github.com/kornia/kornia/issues/2438 | [
"enhancement :rocket:",
"module: contrib"
] | edgarriba | 0 |
ScottfreeLLC/AlphaPy | scikit-learn | 42 | Error while making a prediction with sflow | **Description of the error**
When following the NCAA Basketball Tutorial and trying to make a prediction on a date with the model created and trained, the system throws the following error:
"IndexError: boolean index did not match indexed array along dimension 1; dimension is 96 but corresponding boolean dimension is 106"
It is thrown at this part of the code:
Traceback (most recent call last):
File "/Users/alejandrovargasperez/opt/anaconda3/bin/sflow", line 8, in <module>
sys.exit(main())
File "/Users/alejandrovargasperez/opt/anaconda3/lib/python3.8/site-packages/alphapy/sport_flow.py", line 912, in main
model = main_pipeline(model)
File "/Users/alejandrovargasperez/opt/anaconda3/lib/python3.8/site-packages/alphapy/__main__.py", line 434, in main_pipeline
model = prediction_pipeline(model)
File "/Users/alejandrovargasperez/opt/anaconda3/lib/python3.8/site-packages/alphapy/__main__.py", line 364, in prediction_pipeline
X_all = create_interactions(model, X_all)
File "/Users/alejandrovargasperez/opt/anaconda3/lib/python3.8/site-packages/alphapy/features.py", line 1316, in create_interactions
pfeatures, pnames = get_polynomials(X[:, support], poly_degree)
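For context, this `IndexError` is NumPy's generic complaint when a boolean mask is longer (or shorter) than the axis it indexes. A minimal standalone reproduction (not AlphaPy code), which suggests the prediction data has fewer feature columns than the feature-selection mask saved during training:

```python
import numpy as np

X = np.zeros((4, 96))               # prediction data: 4 rows, 96 columns
support = np.ones(106, dtype=bool)  # feature-selection mask built for 106 columns

try:
    X[:, support]                   # mirrors get_polynomials(X[:, support], ...)
    mismatch = False
except IndexError as exc:
    mismatch = True                 # boolean index length does not match axis length
    print(exc)
```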
**To Reproduce**
Steps to reproduce the behavior:
1. Run the command: "sflow --pdate 2016-03-01"
2. When finished, run SportFlow in predict mode: "sflow --predict --pdate 2016-03-15"
3. See error | open | 2021-10-20T17:11:40Z | 2021-10-20T17:11:40Z | https://github.com/ScottfreeLLC/AlphaPy/issues/42 | [] | AlexVaPe | 0 |
quantmind/pulsar | asyncio | 243 | HTTPDigestAuth partially broken | If the credentials are incorrect, an HTTP request with HTTPDigestAuth seems to get stuck in a redirect loop that ends with:
```
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
Size of a request header field exceeds server limit.<br />
<pre>
Authorization
</pre>
</p>
<hr>
<address>Apache/2.4.7 (Ubuntu) Server at review.openstack.org Port 443</address>
</body></html>
```
``` python
from pulsar.apps import http
from pulsar.apps.http import HTTPDigestAuth
session = http.HttpClient(headers=[('Content-Type', 'application/json')])
resp = await session.get(
'https://review.openstack.org/a/projects/',
auth=HTTPDigestAuth('foo', 'bar')
)
print(resp.text())
print(resp.request.headers)
```
The `Authorization` header looks incorrect.
Everything works fine when the correct creds are passed.
| closed | 2016-09-08T13:38:12Z | 2016-09-09T09:47:06Z | https://github.com/quantmind/pulsar/issues/243 | [
"http",
"bug",
"tests required"
] | JordanP | 2 |
streamlit/streamlit | data-visualization | 10,828 | Upload files to disk with `st.file_uploader` | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Add a way for `st.file_uploader` to upload files directly to disk. This should not store the file in memory.
### Why?
This is great for large files that don't fit into memory. An alternative is uploading them to S3 or some other external file storage (see #10827).
### How?
Maybe add a parameter `destination="memory"|"disk"`. Or we could (additionally?) have a config option that sets the location for all file uploaders. If `"disk"` is set, `st.file_uploader` should return the file path of the uploaded file instead of an `UploadedFile` object.
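One common pattern behind such a feature (stdlib, purely illustrative of the idea rather than Streamlit's implementation) is to spool small uploads in memory and roll large ones over to disk automatically:

```python
import tempfile

# Spool the upload in memory up to a threshold, then transparently
# roll it over to a temporary file on disk.
with tempfile.SpooledTemporaryFile(max_size=1024) as f:
    f.write(b"x" * 2048)  # exceeds max_size, so the data lands on disk
    f.seek(0)
    data = f.read()

print(len(data))  # 2048
```

A config option could then control the threshold (or force `0`, i.e. always-on-disk) for all uploaders.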
### Additional Context
_No response_ | open | 2025-03-18T19:56:48Z | 2025-03-19T16:19:16Z | https://github.com/streamlit/streamlit/issues/10828 | [
"type:enhancement",
"feature:st.file_uploader"
] | jrieke | 5 |
onnx/onnxmltools | scikit-learn | 499 | Error converting deserialized xgboost Booster | Hello.
I am trying to convert existing xgboost model file (which is created by `xgboost.Booster.save_model`) to onnx. While doing that I am getting the following error:
`AttributeError: 'Booster' object has no attribute 'best_ntree_limit'`
my environment
* OS: Windows 10 Home
* xgboost==1.4.2
* onnxmltools==1.9.1
* onnx==1.10.1
reproduction code:
```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from xgboost import DMatrix, Booster, train as train_xgb
from onnxconverter_common.data_types import FloatTensorType
from onnxmltools.convert import convert_xgboost
# Train
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y)
dtrain = DMatrix(X_train, label=y_train)
param = {'objective': 'multi:softmax', 'num_class': 3}
bst_original = train_xgb(param, dtrain, 10)
initial_type = [('float_input', FloatTensorType([None, 4]))]
# Converting original Booster is OK.
onx = convert_xgboost(bst_original, initial_types=initial_type)
# Save and load model
bst_original.save_model('model.json')
bst_loaded = Booster()
bst_loaded.load_model('model.json')
# !!! Converting loaded Booster fails !!!
onx_loaded = convert_xgboost(bst_loaded, initial_types=initial_type)
```
stack trace:
```
[01:32:25] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.4.0/src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softmax' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
Traceback (most recent call last):
File "conv.py", line 25, in <module>
onx_loaded = convert_xgboost(bst_loaded, initial_types=initial_type)
File "C:\Users\macke\repos\onnx_sample\env\lib\site-packages\onnxmltools\convert\main.py", line 176, in convert_xgboost
return convert(*args, **kwargs)
File "C:\Users\macke\repos\onnx_sample\env\lib\site-packages\onnxmltools\convert\xgboost\convert.py", line 39, in convert
model = WrappedBooster(model)
File "C:\Users\macke\repos\onnx_sample\env\lib\site-packages\onnxmltools\convert\xgboost\_parse.py", line 85, in __init__
self.kwargs = _get_attributes(booster)
File "C:\Users\macke\repos\onnx_sample\env\lib\site-packages\onnxmltools\convert\xgboost\_parse.py", line 31, in _get_attributes
ntrees = booster.best_ntree_limit
AttributeError: 'Booster' object has no attribute 'best_ntree_limit'
```
According to https://github.com/dmlc/xgboost/issues/805, a loaded Booster doesn't have `best_ntree_limit`, which may be the cause of this error.
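A defensive fix on the converter side could be a `getattr` fallback, sketched here with a stand-in class (illustrative only; `num_boosted_rounds` does exist on recent xgboost Boosters, but everything else here is hypothetical):

```python
class LoadedBooster:
    """Stand-in for an xgboost.Booster restored via load_model()."""
    def num_boosted_rounds(self):
        return 10

booster = LoadedBooster()  # has no best_ntree_limit, unlike a freshly trained Booster
ntrees = getattr(booster, "best_ntree_limit", None)
if ntrees is None:
    ntrees = booster.num_boosted_rounds()  # fall back to the tree count
print(ntrees)  # 10
```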
| open | 2021-09-18T16:34:28Z | 2022-02-01T10:16:50Z | https://github.com/onnx/onnxmltools/issues/499 | [] | ide-an | 2 |
postmanlabs/httpbin | api | 98 | Missing dependencies | A few dependencies are still missing;
- blinker
- raven
- Sentry
| closed | 2013-06-06T00:18:45Z | 2018-04-26T17:50:59Z | https://github.com/postmanlabs/httpbin/issues/98 | [] | ticky | 1 |
AirtestProject/Airtest | automation | 1,189 | With the IDE open, the adb process path obtained during command-line execution is sometimes empty | **Describe the bug**
get_adb_path returns None
**To Reproduce**
Steps to reproduce the behavior:
1. With AirtestIDE open,
2. run the airtest script from the command line.
Not always reproducible; it happens occasionally. The adb process is found to be held by AirtestIDE, but its exe path is empty, as shown in the screenshot.
**Expected behavior**
**Screenshots**
Debug info:
<img width="1409" alt="image" src="https://github.com/AirtestProject/Airtest/assets/48306281/61fc769d-4164-4c71-b38f-137455b66d24">
Process info:
<img width="1045" alt="image" src="https://github.com/AirtestProject/Airtest/assets/48306281/4a4e1ffa-d717-43f2-9098-e2cad6a7dee4">
**python version:** `python3.8`
**airtest version:** `1.3.2`
| closed | 2024-01-05T07:14:29Z | 2024-02-02T07:50:03Z | https://github.com/AirtestProject/Airtest/issues/1189 | [] | lzus | 1 |
jupyter-widgets-contrib/ipycanvas | jupyter | 312 | debug_output only works if calling display(canvas) | I have a complicated layout with HBox and VBox that contains a canvas as one part of the whole thing.
When using debug_output, it forces me to call display(canvas), or else debug_output does not show. But that means none of the other components show.
If I call HBox then debug_output, only a horizontal line shows.
If I call HBox without debug_output, then all the components show, but no debug_output.
How can I make debug_output work with HBox? | closed | 2023-01-06T21:09:32Z | 2023-01-08T05:45:17Z | https://github.com/jupyter-widgets-contrib/ipycanvas/issues/312 | [] | bhomass | 1 |
chaos-genius/chaos_genius | data-visualization | 438 | [BUG] Snowflake connector mentions setting up with a hostname, where the hostname is actually not required | ## Describe the bug
When setting up the Snowflake connector, I thought I needed to type the whole domain. It isn't needed, just the first part.

## Explain the environment
- **Chaos Genius version**: 0.1.3-alpha
- **OS Version / Instance**: Archlinux host, using `docker compose -f docker-compose.latest.yml`
- **Deployment type**: docker
## Current behavior
Typing in the full host name, for example `foobar.ap-south-1.snowflakecomputing.com`, results in an error. Removing the domain suffix makes it work. This is good, as this is how other Snowflake integrations work.
## Expected behavior
It works.
## Screenshots

## Additional context
I think the best solution would be to rename the help text to:
```
Host domain of the snowflake instance (must include the account, region, cloud environment).
```
Maybe even an example? | closed | 2021-11-26T13:23:13Z | 2021-11-27T06:56:45Z | https://github.com/chaos-genius/chaos_genius/issues/438 | [] | joshuataylor | 1 |
opengeos/leafmap | jupyter | 118 | Add streamlit support for heremap module | To add streamlit support for the heremap plotting backend, we need to save the map as an HTML file. However, it seems the exported HTML loses the map controls (e.g., zoom control, fullscreen control). @sackh Any advice?
```
import leafmap.heremap as leafmap
from ipywidgets.embed import embed_minimal_html
m = leafmap.Map()
m
embed_minimal_html('heremap.html', views=[m])
```

The same method works fine for ipyleaflet. The exported HTML has the map controls.

| closed | 2021-10-02T19:28:56Z | 2021-10-18T01:26:06Z | https://github.com/opengeos/leafmap/issues/118 | [
"Feature Request"
] | giswqs | 2 |
modoboa/modoboa | django | 2,702 | OpenDKIM not sign mails | # Impacted versions
* OS Type: Ubuntu
* OS Version: Ubuntu 22.04.1 LTS
* Database Type: PostgreSQL
* Database version: 14
* Modoboa: 2.0.2
* installer used: No
* Webserver: Nginx
# Steps to reproduce
1. send some mail from any mail client
2. check DKIM sign with https://www.appmaildev.com/en/dkim
# Current behavior
No DKIM sign in mail
After every server restart, my opendkim service is not up, so I have to restart it manually, and I see this in syslog:
"mail opendkim[1213]: opendkim: /etc/opendkim.conf: dsn:pgsql://username:pass@127.0.0.1/modoboa/table=tablename?keycol=domain_name?datacol=id: dkimf_db_open():"
DB user has access to this table so I have no idea why opendkim has a problem with it
# Expected behavior
e-mails have a DKIM signature
# Video/Screenshot link (optional)

| closed | 2022-11-30T18:36:48Z | 2023-04-25T20:32:22Z | https://github.com/modoboa/modoboa/issues/2702 | [
"feedback-needed",
"stale"
] | Quiver92 | 5 |
OpenBB-finance/OpenBB | machine-learning | 6,924 | [🕹️] Starry-eyed supporter | ### What side quest or challenge are you solving?
asked five of my friends to star the repository
### Points
150
### Description
_No response_
### Provide proof that you've completed the task
1. https://github.com/amannegi?tab=stars

2. https://github.com/DhairyaMajmudar?tab=stars

3. https://github.com/kunal00000?tab=stars

4. https://github.com/narharim?tab=stars

5. https://github.com/VipinDevelops?tab=stars
 | closed | 2024-10-31T12:32:08Z | 2024-11-02T07:40:31Z | https://github.com/OpenBB-finance/OpenBB/issues/6924 | [] | Nabhag8848 | 1 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,095 | Numpy is not available | I have this error if I try to do anything. Does anyone know how I can fix this? Numpy is installed.

| closed | 2022-07-12T14:01:07Z | 2023-02-04T18:58:56Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1095 | [] | timmosaurusrex | 2 |
NullArray/AutoSploit | automation | 523 | Unhandled Exception (ac906a5c4) | Autosploit version: `3.0`
OS information: `Linux-4.18.0-kali2-amd64-x86_64-with-Kali-kali-rolling-kali-rolling`
Running context: `autosploit.py`
Error message: `argument of type 'NoneType' is not iterable`
Error traceback:
```
Traceback (most recent call):
File "/root/AutoSploit/autosploit/main.py", line 117, in main
terminal.terminal_main_display(loaded_tokens)
File "/root/AutoSploit/lib/term/terminal.py", line 474, in terminal_main_display
if "help" in choice_data_list:
TypeError: argument of type 'NoneType' is not iterable
```
Metasploit launched: `False`
| closed | 2019-03-02T05:11:53Z | 2019-03-03T03:31:53Z | https://github.com/NullArray/AutoSploit/issues/523 | [] | AutosploitReporter | 0 |
ludwig-ai/ludwig | data-science | 3,681 | RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper... | **Describe the bug**
I followed the steps from "how to contribute" and set up Ludwig locally for development. When I run pytest, I get:

**Environment (please complete the following information):**
- OS: Ubuntu
- Version: 20.04
- Python version: 3.10
- Ludwig version: 0.8
| closed | 2023-09-30T07:50:53Z | 2024-10-18T17:04:14Z | https://github.com/ludwig-ai/ludwig/issues/3681 | [] | karanjakhar | 2 |
RobertCraigie/prisma-client-py | pydantic | 33 | Improve type checking experience using a type checker without a plugin | ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
For example, using `pyright` to type check would result in false positive errors that are annoying to fix when including 1-to-many relational fields.
```py
user = await client.user.find_unique(where={'id': '1'}, include={'posts': True})
assert user is not None
for post in user.posts:
...
```
```
error: Object of type "None" cannot be used as iterable value (reportOptionalIterable)
```
This is a false positive as we are explicitly including `posts` they will never be `None`
> NOTE: false positive due to our types, not a bug in pyright
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
1-to-many relational fields should not be typed as optional; instead, they should default to an empty list. However, for supported type checkers, we should still error if the field is accessed without explicit inclusion.
```py
class User(BaseModel):
...
posts: List[Post] = Field(default_factory=list)
```
Should be noted that we cannot do this for 1-to-1 relational fields.
## Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
The issue can be circumvented, albeit with ugly / redundant methods
```py
user = await client.user.find_unique(where={'id': '1'}, include={'posts': True})
assert user is not None
for post in cast(List[Post], user.posts):
...
```
```py
user = await client.user.find_unique(where={'id': '1'}, include={'posts': True})
assert user is not None
assert user.posts is not None
for post in user.posts:
...
```
| open | 2021-07-02T13:42:25Z | 2022-02-01T15:31:51Z | https://github.com/RobertCraigie/prisma-client-py/issues/33 | [
"kind/improvement",
"topic: types",
"level/advanced",
"priority/low"
] | RobertCraigie | 0 |
vitalik/django-ninja | rest-api | 692 | [BUG] callable not working in schema Field definition using alias | **Describe the bug**
As mentioned in docs:
class TaskSchema(Schema):
type: str = Field(None)
type_display: str = Field(None, alias="get_type_display") # callable will be executed
When trying the same on a choice field, the callable for getting the display value is not executed, and I get None as the result.
**Versions (please complete the following information):**
- Python version: 3.8
- Django version: 4.0.4
- Django-Ninja version: 0.21.0
- Pydantic version: 1.10.2
Note you can quickly get this by runninng in `./manage.py shell` this line:
```
import django; import pydantic; import ninja; django.__version__; ninja.__version__; pydantic.__version__
```
| closed | 2023-03-06T06:45:49Z | 2023-03-15T12:31:04Z | https://github.com/vitalik/django-ninja/issues/692 | [] | sagar-punchh | 6 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,568 | WGAN produce very poor results | Hi
I am using CycleGAN for medical image translation; however, when I switched to WGAN, the results completely failed.
I have attached the original images and the generated results. I have trained for 400 epochs.
Does anyone have any idea why this happens?
Fig 1 Real A
Fig 2 Real B
Fig 3 fake A from unet128+wgan
Fig 4 fake A from res9+wgan
 


| open | 2023-05-09T21:45:16Z | 2023-05-09T21:47:53Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1568 | [] | xintian-99 | 0 |
litestar-org/litestar | asyncio | 4,057 | Bug: `Makefile` doesn't show install | ### Description
It is not clear when running `make` or `make help` how to install the project if you skip the docs.
### MCVE
```python
litestar on v3/n [🤷✓] via 🎁 v2.15.1 via pyenv took 5s
➜ make
Usage:
make <target>
help Display this help text for Makefile
upgrade Upgrade all dependencies to the latest stable versions
clean Cleanup temporary build artifacts
destroy Destroy the virtual environment
lock Rebuild lockfiles from scratch, updating all dependencies
mypy Run mypy
mypy-nocache Run Mypy without cache
pyright Run pyright
type-check Run all type checking
pre-commit Runs pre-commit hooks; includes ruff formatting and linting, codespell
slots-check Check for slots usage in classes
lint Run all linting
coverage Run the tests and generate coverage report
test Run the tests
test-examples Run the examples tests
test-all Run all tests
check-all Run all linting, tests, and coverage checks
docs-install Install docs dependencies
docs-clean Dump the existing built docs
docs-serve Serve the docs locally
docs Dump the existing built docs and rebuild them
docs-linkcheck Run the link check on the docs
docs-linkcheck-full Run the full link check on the docs
``` | open | 2025-03-18T16:54:37Z | 2025-03-18T16:55:01Z | https://github.com/litestar-org/litestar/issues/4057 | [
"Bug :bug:"
] | JacobCoffee | 0 |
Evil0ctal/Douyin_TikTok_Download_API | api | 163 | [BUG] An unexpected error occurred, the input value has been recorded. | closed | 2023-03-04T02:23:08Z | 2023-03-11T00:40:54Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/163 | [
"BUG",
"enhancement"
] | robbinhust | 2 | |
s3rius/FastAPI-template | graphql | 204 | Fast api | open | 2024-03-07T00:26:41Z | 2024-03-07T00:26:41Z | https://github.com/s3rius/FastAPI-template/issues/204 | [] | Jaewook-github | 0 | |
SYSTRAN/faster-whisper | deep-learning | 1,086 | CUDA compatibility with CTranslate2 | Hi Everyone,
as per @BBC-Esq's research, `ctranslate2>=4.5.0` uses cuDNN v9, which requires CUDA >= 12.3.
Since most issues stem from conflicting `torch` and `ctranslate2` installations, these are tested working combinations:
| Torch Version | CT2 Version |
|:-------------:|:-----------:|
| `2.*.*+cu121` | `<=4.4.0` |
| `2.*.*+cu124` | `>=4.5.0` |
| `>=2.4.0` | `>=4.5.0` |
| `<2.4.0` | `<4.5.0` |
For google colab users, the quick solution is to downgrade to `4.4.0` as of 24/10/2024 as colab uses `torch==2.5.0+cu12.1` | open | 2024-10-24T17:16:34Z | 2024-12-04T17:14:53Z | https://github.com/SYSTRAN/faster-whisper/issues/1086 | [] | MahmoudAshraf97 | 15 |
thp/urlwatch | automation | 542 | Matrix client dependency (migrate to matrix-nio) | When I look at the github page for the dependency of matrix_client at https://github.com/matrix-org/matrix-python-sdk it reads:
> We strongly recommend using the matrix-nio library rather than this sdk. It is both more featureful and more actively maintained.
Is there a plan to change to the matrix-nio library as a client dependency? | open | 2020-07-29T03:36:26Z | 2020-07-30T18:34:37Z | https://github.com/thp/urlwatch/issues/542 | [
"enhancement"
] | pandrews255 | 2 |
pydantic/FastUI | pydantic | 198 | option to set dev mode via `prebuilt_html` | open | 2024-02-17T17:15:18Z | 2024-02-17T17:15:19Z | https://github.com/pydantic/FastUI/issues/198 | [] | samuelcolvin | 0 | |
waditu/tushare | pandas | 1,509 | Data returned by the daily_basic API is inaccurate | Test steps are as follows:
>>>profit_fore = pro.forecast(ts_code='600138.sh', period='20210205', fields='net_profit_min,net_profit_max')
>>>profit_fore.net_profit_max
Series([], Name: net_profit_max, dtype: object)
>>>profit_fore.net_profit_min
Series([], Name: net_profit_min, dtype: object)
The actual earnings forecast found is shown in the attached image
id: 421774

| open | 2021-02-05T13:37:55Z | 2021-02-05T13:37:55Z | https://github.com/waditu/tushare/issues/1509 | [] | zjulkw | 0 |
google/trax | numpy | 1,463 | jaxboard_demo.py missing? | Cannot find the file jaxboard_demo.py as stated on line 18. Where is it located? Thanks
https://github.com/google/trax/blob/5b08d66a4e69cccbab5868697b207a8b71caa890/trax/jaxboard.py#L18 | open | 2021-02-15T17:29:20Z | 2022-03-04T22:15:43Z | https://github.com/google/trax/issues/1463 | [] | PizBernina | 1 |
ray-project/ray | python | 50,827 | CI test linux://python/ray/data:test_transform_pyarrow is flaky | CI test **linux://python/ray/data:test_transform_pyarrow** is flaky. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8496#01952c44-0d09-4aa4-b1f3-e432b7ebfca1
- https://buildkite.com/ray-project/postmerge/builds/8495#01952b30-22c6-4a0f-9857-59a7988f67d8
- https://buildkite.com/ray-project/postmerge/builds/8491#01952b00-e020-4d4e-b46a-209c0b3dbf5b
- https://buildkite.com/ray-project/postmerge/builds/8491#01952ad9-1225-449b-84d0-29cfcc6a048c
DataCaseName-linux://python/ray/data:test_transform_pyarrow-END
Managed by OSS Test Policy | closed | 2025-02-22T06:46:30Z | 2025-03-04T09:29:49Z | https://github.com/ray-project/ray/issues/50827 | [
"bug",
"triage",
"data",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] | can-anyscale | 31 |
long2ice/fastapi-cache | fastapi | 454 | How to correctly annotated the key builder in Pycharm | How do I annotate the `cache` decorator with a key builder? Even when I use the default builder, it still shows a warning in PyCharm.

The message:
```
Expected type 'KeyBuilder | None', got '(func: (...) -> Any, namespace: str, Any, request: Request | None, response: Response | None, args: tuple[Any, ...], kwargs: dict[str, Any]) -> str' instead
```

---
Python version: 3.11
fastapi-cache2 version: 0.2.2 | open | 2024-10-30T09:12:43Z | 2024-11-09T09:56:26Z | https://github.com/long2ice/fastapi-cache/issues/454 | [
"documentation",
"question"
] | allen0099 | 1 |
babysor/MockingBird | pytorch | 647 | Cannot open the program; missing modules reported. ffmpeg and webrtcvad were both downloaded correctly | Downloaded ffmpeg and webrtcvad successfully, with no errors. In the repository directory, running `python demo_toolbox.py -d .\samples` reports a missing module: PyQt5. After installing it myself and running again, more missing modules are reported: matplotlib, scipy, sklearn, scipy.stats._stats_py. All but the last one have been installed manually; I don't know how to install the last one, so I'm asking for help.
python, mockingbird, and ffmpeg are all placed on drive E, and the path used to call ffmpeg is correct.
Python version 3.9.13, ffmpeg version 4.4.1.
<img width="694" alt="problem" src="https://user-images.githubusercontent.com/82459666/179083099-f962fd4d-e7de-41c9-9414-c6cf85f0f39e.PNG">
| open | 2022-07-14T20:53:58Z | 2022-09-15T19:15:18Z | https://github.com/babysor/MockingBird/issues/647 | [] | Audbrem507 | 6 |
Yorko/mlcourse.ai | plotly | 756 | Proofread topic 5 | - Fix issues
- Fix typos
- Correct the translation where needed
- Add images where necessary | closed | 2023-10-24T07:41:21Z | 2024-08-25T07:50:33Z | https://github.com/Yorko/mlcourse.ai/issues/756 | [
"enhancement",
"wontfix",
"articles"
] | Yorko | 1 |
Miksus/rocketry | automation | 222 | Is there any way to schedule an async function using 'threads' or 'processes'? | Currently, I have some async functions that I want to schedule, but the only viable execution seems to be async. Although these async functions use an async API, they also include CPU-intensive code, and using a separate thread or process should provide better concurrency.
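For illustration (stdlib only, independent of Rocketry), the usual pattern for running an async function inside a worker thread is `asyncio.run` in the thread target:

```python
import asyncio
import threading

async def cpu_ish_task():  # stand-in for the async function you want to schedule
    await asyncio.sleep(0)
    return sum(range(1000))

out = []
worker = threading.Thread(target=lambda: out.append(asyncio.run(cpu_ish_task())))
worker.start()
worker.join()
print(out[0])  # 499500
```

The question is whether the scheduler's `execution="thread"` / `execution="process"` paths can wrap an async task this way.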
There are several ways to run an async function in a new thread, such as those described in these links: [Link 1](https://stackoverflow.com/questions/59645272/how-do-i-pass-an-async-function-to-a-thread-target-in-python), [Link 2](https://stackoverflow.com/questions/61151031/start-an-async-function-inside-a-new-thread), and [Link 3](https://gist.github.com/dmfigol/3e7d5b84a16d076df02baa9f53271058). None of them seem to work with the rocketry decorator.
tqdm/tqdm | pandas | 1,197 | Segmentation fault (core dumped)-using tqdm | - [ ] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [ ] visual output bug
- [ ] I have visited the [source website], and in particular
read the [known issues]
- [ ] I have searched through the [issue tracker] for duplicates
- [ ] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
| open | 2021-06-30T06:43:04Z | 2021-10-14T13:52:01Z | https://github.com/tqdm/tqdm/issues/1197 | [
"p0-bug-critical ☢",
"need-feedback 📢"
] | jingweirobot | 3 |
aleju/imgaug | machine-learning | 392 | Label Name | I was looking through the Bounding Boxes examples, and I noticed that there's no parameter for adding a name for each label.
Example: If I have a picture with two animals, I want to keep the labels animalA and animalB, not just that they are 2 different bounding boxes.
Example Code (that doesn't specify a label):
```
bbs = BoundingBoxesOnImage([
BoundingBox(x1=25, x2=75, y1=25, y2=75),
BoundingBox(x1=100, x2=150, y1=25, y2=75),
BoundingBox(x1=175, x2=225, y1=25, y2=75)
], shape=image.shape)
``` | open | 2019-08-23T06:24:47Z | 2019-08-24T15:46:33Z | https://github.com/aleju/imgaug/issues/392 | [] | MentalGear | 3 |
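On the imgaug label question above: imgaug's `BoundingBox` does accept an optional `label` argument (e.g. `BoundingBox(x1=25, y1=25, x2=75, y2=75, label="animalA")`), and augmenters carry it through unchanged. A stdlib-only sketch of that pattern (a mock, not imgaug itself), so the idea is visible without the library installed:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LabeledBox:
    """Mock of a bounding box that carries a class label."""
    x1: float
    y1: float
    x2: float
    y2: float
    label: Optional[str] = None

    def shift(self, dx: float, dy: float) -> "LabeledBox":
        # A geometric augmentation moves coordinates but keeps the label.
        return LabeledBox(self.x1 + dx, self.y1 + dy,
                          self.x2 + dx, self.y2 + dy, self.label)

boxes = [
    LabeledBox(25, 25, 75, 75, label="animalA"),
    LabeledBox(100, 25, 150, 75, label="animalB"),
]
shifted = [b.shift(10, 0) for b in boxes]  # labels survive the transform
```

After augmentation, each box's `label` still identifies which animal it belongs to, which is exactly what the issue asks for.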
microsoft/nni | tensorflow | 4,958 | HPO problem | When I set the search space to use quniform, like below, in HPO.
<img width="234" alt="image" src="https://user-images.githubusercontent.com/11451001/175040897-0a8cbe73-fbb9-4a9e-85ff-f8025bb16c1a.png">
The parameters often become long decimals; how can I fix it?
<img width="392" alt="image" src="https://user-images.githubusercontent.com/11451001/175040684-ea85af97-62c7-47f1-9314-5657e908737b.png">
| closed | 2022-06-22T13:29:47Z | 2022-06-27T03:05:20Z | https://github.com/microsoft/nni/issues/4958 | [
"question"
] | woocoder | 2 |
aiogram/aiogram | asyncio | 611 | [Tracking] Bot API 5.3 | ## Personalized Commands
- [x] Bots can now show lists of commands tailored to specific situations - including localized commands for users with different languages, as well as different commands based on chat type or for specific chats, and special lists of commands for chat admins.
- [x] Added the class [`BotCommandScope`](https://core.telegram.org/bots/api#botcommandscope), describing the scope to which bot commands apply.
- [x] Added the parameters `scope` and `language_code` to the method [`setMyCommands`](https://core.telegram.org/bots/api#setmycommands) to allow bots specify different commands for different chats and users.
- [x] Added the parameters `scope` and `language_code` to the method [`getMyCommands`](https://core.telegram.org/bots/api#getmycommands).
- [x] Added the method [`deleteMyCommands`](https://core.telegram.org/bots/api#deletemycommands) to allow deletion of the bot's commands for the given scope and user language.
- [x] Improved visibility of bot commands in Telegram apps with the new 'Menu' button in chats with bots, read more on the [blog](https://telegram.org/blog/animated-backgrounds#bot-menu).
## Custom Placeholders
- [x] Added the ability to specify a custom input field placeholder in the classes [`ReplyKeyboardMarkup`](https://core.telegram.org/bots/api#replykeyboardmarkup) and [`ForceReply`](https://core.telegram.org/bots/api#forcereply).
## And More
- [x] Improved documentation of the class [`ChatMember`](https://core.telegram.org/bots/api#chatmember) by splitting it into 6 subclasses.
- [x] Renamed the method `kickChatMember` to [`banChatMember`](https://core.telegram.org/bots/api#banchatmember). The old method name can still be used.
- [x] Renamed the method `getChatMembersCount` to [`getChatMemberCount`](https://core.telegram.org/bots/api#getchatmembercount). The old method name can still be used.
- [x] :warning: Values of the field `file_unique_id` in objects of the type [`PhotoSize`](https://core.telegram.org/bots/api#photosize) and of the fields `small_file_unique_id` and `big_file_unique_id` in objects of the type [`ChatPhoto`](https://core.telegram.org/bots/api#chatphoto) were changed.
> **:warning: WARNING! :warning:**
> After one of the upcoming Bot API updates, user identifiers will become bigger than `2**31 - 1` and it will be no longer possible to store them in a signed 32-bit integer type. User identifiers will have up to 52 significant bits, so a 64-bit integer or double-precision float type would still be safe for storing them. Please make sure that your code can correctly handle such user identifiers.
> This won't really affect you and your Python code as Python's `int` is big enough, but if you rely on other libraries/frameworks/code, make sure that Telegram User ID field has appropriate size, e.g. use `bigint` / `bigserial` instead of `int` / `integer` / `serial` in PostgreSQL.
| closed | 2021-06-26T23:45:43Z | 2021-07-04T20:52:56Z | https://github.com/aiogram/aiogram/issues/611 | [
"api"
] | evgfilim1 | 1 |
ultrafunkamsterdam/undetected-chromedriver | automation | 786 | Not working on Replit | Hello, I'm trying to use this on Replit but it doesn't seem to work, while the normal selenium works just fine.
Here's the code:
```python
import undetected_chromedriver as uc

driver = uc.Chrome()
driver.get('https://google.com')
```
I'm getting this error message:
```
Traceback (most recent call last):
  File "main.py", line 4, in <module>
    driver = uc.Chrome()
  File "/home/runner/fff/venv/lib/python3.8/site-packages/undetected_chromedriver/__init__.py", line 401, in __init__
    super(Chrome, self).__init__(
  File "/home/runner/fff/venv/lib/python3.8/site-packages/selenium/webdriver/chrome/webdriver.py", line 69, in __init__
    super().__init__(DesiredCapabilities.CHROME['browserName'], "goog",
  File "/home/runner/fff/venv/lib/python3.8/site-packages/selenium/webdriver/chromium/webdriver.py", line 89, in __init__
    self.service.start()
  File "/home/runner/fff/venv/lib/python3.8/site-packages/selenium/webdriver/common/service.py", line 98, in start
    self.assert_process_still_running()
  File "/home/runner/fff/venv/lib/python3.8/site-packages/selenium/webdriver/common/service.py", line 110, in assert_process_still_running
    raise WebDriverException(
selenium.common.exceptions.WebDriverException: Message: Service /home/runner/.local/share/undetected_chromedriver/be5abb7191dcb2e9_chromedriver unexpectedly exited. Status code was: 127
```
Please help me fix it, thanks so much!
| open | 2022-08-19T07:34:01Z | 2024-03-18T16:50:54Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/786 | [] | lengvietcuong | 2 |
ploomber/ploomber | jupyter | 296 | Document and automate conda forge build | closed | 2021-05-18T13:45:50Z | 2021-08-13T13:15:43Z | https://github.com/ploomber/ploomber/issues/296 | [
"good first issue"
] | edublancas | 2 | |
SYSTRAN/faster-whisper | deep-learning | 265 | Improve Language detection | Since Whisper detects the language based on the first 30 seconds of the audio, there are sometimes errors in language detection. For example,
[This video](https://www.youtube.com/watch?v=eGKFTUuJppU&pp=ygUPd2hvIGFtIGkgcGFydCAx ) is in english but both whisper and faster_whisper detects language as hindi "hi". There is a solution for this issue in whisper [here](https://github.com/openai/whisper/pull/676). Since I am new to Ctranslate2, I am having difficulty in cloning [this solution](https://github.com/openai/whisper/pull/676) to faster_whisper. Can someone help me on this? Thanks.
P.S. I have tested the solution in whisper and it works. | open | 2023-05-30T07:13:48Z | 2024-05-13T11:41:41Z | https://github.com/SYSTRAN/faster-whisper/issues/265 | [] | ab-pandey | 6 |
freqtrade/freqtrade | python | 11,280 | Freqtrade exited with code 0 | ## Describe your environment
* Operating system: Ubuntu 24.04
* Python Version: 3.12.3
* CCXT version: 4.4.43
* Freqtrade Version: 2024.12
## Describe the problem:
I have a FreqAI strategy and once every 2 hours I got this error without any message and then the bot restarts.
```
2025-01-24 07:31:18,113 - freqtrade.wallets - INFO - Wallets synced.
2025-01-24 07:31:18,114 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.12', state='RUNNING'
2025-01-24 07:32:18,116 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.12', state='RUNNING'
2025-01-24 07:33:18,118 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.12', state='RUNNING'
2025-01-24 07:34:18,129 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.12', state='RUNNING'
2025-01-24 07:35:18,308 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.12', state='RUNNING'
2025-01-24 07:36:18,352 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.12', state='RUNNING'
2025-01-24 07:37:18,395 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.12', state='RUNNING'
exited with code 0
```
```
2025-01-24 07:37:35,045 - freqtrade - INFO - freqtrade 2024.12
2025-01-24 07:37:35,845 - numexpr.utils - INFO - NumExpr defaulting to 4 threads.
2025-01-24 07:37:38,949 - freqtrade.worker - INFO - Starting worker 2024.12
2025-01-24 07:37:38,949 - freqtrade.configuration.load_config - INFO - Using config: /freqtrade/user_data/config.json ...
2025-01-24 07:37:38,952 - freqtrade.configuration.environment_vars - INFO - Loading variable 'FREQTRADE__API_SERVER_ENABLED'
2025-01-24 07:37:38,952 - freqtrade.configuration.environment_vars - INFO - Loading variable 'FREQTRADE__API_SERVER_LISTEN_IP_ADDRESS'
2025-01-24 07:37:38,952 - freqtrade.configuration.environment_vars - INFO - Loading variable 'FREQTRADE__DRY_RUN'
2025-01-24 07:37:38,952 - freqtrade.configuration.environment_vars - INFO - Loading variable 'FREQTRADE__TELEGRAM__ENABLED'
```
I tried it on bare metal Ubuntu, on WSL Windows the same is happening.
With docker and without it's the same also.
I believe there is a correlation with these settings:
```
"live_retrain_hours": 6,
"expired_hours": 6,
```
Thanks a lot for your help or ideas! | closed | 2025-01-24T10:16:55Z | 2025-01-24T15:25:32Z | https://github.com/freqtrade/freqtrade/issues/11280 | [
"Question",
"freqAI"
] | alfirin | 4 |
amidaware/tacticalrmm | django | 1,614 | Script Manager: Run As User flag is ignored when using environmental variables | **Server Info (please complete the following information):**
- OS: Ubuntu 20.04.4 LTS (Focal Fossa)
- Browser: Firefox 116.0.2 (64-bit)
- RMM Version (as shown in top left of web UI): v0.16.3
**Installation Method:**
- [x] Standard
- [ ] Docker
**Agent Info (please complete the following information):**
- Agent version (as shown in the 'Summary' tab of the agent from web UI): Agent v2.4.11
- Agent OS: Opensuse-Leap 15.4
**Describe the bug**
The scripting system ignores the Run As User flag when using environmental variables. Running `Get-ChildItem ENV:` produces identical output regardless of whether Run As User is checked, **IF** there are environmental variables.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a PowerShell script to dump all the environmental variables: `Get-ChildItem ENV:`
2. Do not check Run As User.
3. Add an environmental variable.
4. Run the script and save the output. Pay attention to the `USERNAME`, `USERPROFILE`, and other user specific variables.
5. Check Run As User.
6. Run the script and save the output. Pay attention to the `USERNAME`, `USERPROFILE`, and other user specific variables.
7. Notice the user specific variables did not change.
8. Leave Run As User checked and remove the environmental variable.
9. Run the script and save the output. Pay attention to the `USERNAME`, `USERPROFILE`, and other user specific variables.
10. Notice the user specific variables now reflect the logged in user.
**Expected behavior**
I expected the environmental variables to reflect the logged in user when Run As User is checked.
**Screenshots**
N/A
**Additional context**
N/A | closed | 2023-08-24T00:59:04Z | 2023-09-02T01:06:58Z | https://github.com/amidaware/tacticalrmm/issues/1614 | [
"bug"
] | NiceGuyIT | 2 |
KrishnaswamyLab/PHATE | data-visualization | 130 | API Documentation not shown | Hi,
It seems like autodoc was not run on https://phate.readthedocs.io/en/stable/api.html | closed | 2023-02-21T10:03:19Z | 2023-02-21T22:39:21Z | https://github.com/KrishnaswamyLab/PHATE/issues/130 | [
"bug"
] | gjhuizing | 1 |
pytest-dev/pytest-html | pytest | 418 | Remove phantomjs dependency | Since we're using chrome headless, we can remove the phantomjs dependency from package.js. | closed | 2020-12-13T13:02:13Z | 2020-12-15T23:15:01Z | https://github.com/pytest-dev/pytest-html/issues/418 | [
"packaging"
] | BeyondEvil | 0 |
plotly/dash | plotly | 2,495 | Creating two different scatter_mapbox, one often fails | I create two different scatter_mapbox with similar data.
Sample:
```
div_mapbox = html.Div([dcc.Graph(figure=figure, id='figmapbox', style={'width': '90vw', 'height': '90vh'})],
id = 'divmapbox',
style={'width': '90%', 'display': 'inline-block', 'height': '90%', 'position': 'absolute', 'z-index': '1'}
)
```
With figure either:
```
def get_fig_openseamap(wpts):
fig = px.scatter_mapbox(data_frame=wpts, lat="lat", lon="lon", size='size_marker', size_max=8, opacity=1.0, hover_data={'size_marker': False, 'lat':False, 'lon': False, 'energy': True, 'dist': True, 'eta': True})
fig2 = px.line_mapbox(data_frame=wpts, lat="lat", lon="lon")
fig.add_trace(fig2.data[0])
fig.update_layout(mapbox_style="open-street-map")
fig.update_layout(mapbox_layers=[source_openseamap])
fig.update({'layout':{'autosize':True,
'margin': {"t": 5,"b": 5,"l": 5,"r": 5}}})
return fig
```
Or:
```
def get_fig_northstar(wpts):
fig = px.line_mapbox(data_frame=wpts, lat="lat", lon="lon")
fig2 = px.scatter_mapbox(data_frame=wpts.query('ghost == 0'), lat="lat", lon="lon", opacity=1, hover_data={'lat': False, 'lon': False, 'energy': True, 'dist': True, 'eta': True})
fig.add_trace(fig2.data[0])
fig.update_layout(mapbox_style=mapbox_style_northstar)
fig.update({'layout':{'autosize':True,
'margin': {"t": 5,"b": 5,"l": 5,"r": 5}}})
return fig
```
Where openseamap layer is the openseamap marks tile.
Northstar is the official Mapbox style, which needs to be copied into a personal account to work.
I can run each of them on its own, no problems. Sometimes, running both works, and sometimes it fails.
The failure normally appears as one map working perfectly while the other has no graphics, BUT the hoverlayer works, so hovering over a point (that can't be seen) shows the hover data.
It seems a bit stochastic to me.
I can also replace the figure with callbacks using Output('figmapbox', 'figure')
As long as it's one at a time, it works fine. Having both can work but normally not.
Any suggestions how to debug this? I have suspected race conditions and tried delaying creating one of them but saw no improvement.
dash 2.8.1
dash-core-components 2.0.0
dash-daq 0.5.0
dash-html-components 2.0.0
OS: pop_os 22.04 (ubuntu 22.04)
Chrome 110.0.5481.96 (Official Build) (64-bit)
| open | 2023-04-04T10:59:45Z | 2024-08-13T19:30:49Z | https://github.com/plotly/dash/issues/2495 | [
"bug",
"P3"
] | jontis | 9 |
MaartenGr/BERTopic | nlp | 1,631 | Plotly graph produced shows datetime values as absolute numbers | <img width="1023" alt="image" src="https://github.com/MaartenGr/BERTopic/assets/147594747/05dbc751-e03c-4275-9be1-fa0e2d4bfb5f">
Hello BERTopic team, I am having an issue where the datetime values shown on the Plotly graph created from the visualize_topics_over_time function only appear as absolute numbers instead of being visualized in date format. I'm not sure how to solve this issue, as I have already tried upgrading the plotly package. I am using pip as my package manager and am currently running this app both locally and on Streamlit, with the same issue in each case.
Avaiga/taipy | data-visualization | 2,079 | [🐛 BUG] <write a small description here> | ### What went wrong? 🤔
e
### Expected Behavior
q
### Steps to Reproduce Issue
1. A code fragment
2. And/or configuration files or code
3. And/or Taipy GUI Markdown or HTML files
### Solution Proposed
d
### Screenshots

### Runtime Environment
Windows 11
### Browsers
Chrome
### OS
Windows
### Version of Taipy
3.0.0
### Additional Context
```bash
work more
```
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Create related issue in taipy-doc for documentation and Release Notes.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [X] I am willing to work on this issue (optional) | closed | 2024-10-17T07:01:16Z | 2024-10-17T07:45:13Z | https://github.com/Avaiga/taipy/issues/2079 | [
"💥Malfunction"
] | predator2k5 | 0 |
axnsan12/drf-yasg | rest-api | 8 | Update README and add real docs | - [x] update README and setup.py descriptions
- [x] add sphinx documentation
- [x] upload to readthedocs
- [x] add docs build target
- [x] add source code documentation to classes and methods where possible | closed | 2017-12-08T15:01:33Z | 2017-12-12T16:31:46Z | https://github.com/axnsan12/drf-yasg/issues/8 | [] | axnsan12 | 1 |
coqui-ai/TTS | deep-learning | 4,149 | [Bug] Runtime Error | ### Describe the bug
RuntimeError: Input and output sizes should be greater than 0, but got input (W: 1) and output (W: 0)
Has anyone run into this issue while inferencing the xttsv2 model ??
what could be the possible cause of running into this error ??
### To Reproduce
Model inference.
### Expected behavior
_No response_
### Logs
```shell
```
### Environment
```shell
xttsv2
```
### Additional context
_No response_ | closed | 2025-02-07T05:39:44Z | 2025-03-19T07:20:57Z | https://github.com/coqui-ai/TTS/issues/4149 | [
"bug",
"wontfix"
] | yushan1230 | 2 |
apache/airflow | machine-learning | 47,632 | Don't show task duration if we are missing start date | ### Apache Airflow version
main (development)
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
I noticed that I had a negative duration shown for a task when there isn't a start_date set. We should just show nothing in that instance.

### What you think should happen instead?
_No response_
### How to reproduce
You need a broken instance, so it's probably easier to just null out a start_date manually in the db to replicate the situation.
### Operating System
macos
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-11T15:44:42Z | 2025-03-15T13:01:45Z | https://github.com/apache/airflow/issues/47632 | [
"kind:bug",
"area:core",
"area:UI",
"needs-triage",
"affected_version:3.0.0beta"
] | jedcunningham | 1 |
BeanieODM/beanie | pydantic | 301 | Links don't fetch in nested documents | For this code snippet:
```python
class PlaylistEntry(BaseModel):
content: Link[FileRecord]
some_props: List[str]
class PlaylistAbout(BaseModel):
name: str
description: str
class Playlist(Document):
about: PlaylistAbout
entries: List[PlaylistEntry] = Field(default_factory=list)
tags: List[str] = Field(default_factory=list)
enabled: bool = False
```
The link from `PlaylistEntry` does not populate and returns a DBRef instead of the document itself. My guess is that `fetch_links` affects only root-level fields.
How can I help with implementing recursive link fetching? | closed | 2022-07-05T14:00:29Z | 2023-09-14T09:24:22Z | https://github.com/BeanieODM/beanie/issues/301 | [
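Independent of Beanie's internals, the idea behind recursive link fetching is a depth-first walk that swaps reference objects for the documents they point at, descending into nested models and lists. A toy sketch of that walk (the `Ref` class and in-memory `DB` are stand-ins, not Beanie APIs):

```python
class Ref:
    """Stand-in for a stored document reference (like a DBRef)."""
    def __init__(self, doc_id):
        self.doc_id = doc_id

# In-memory stand-in for the linked collection.
DB = {1: {"name": "song.mp3"}}

def resolve(value):
    # Depth-first walk: replace every Ref with its document,
    # recursing into lists and nested dicts/models.
    if isinstance(value, Ref):
        return resolve(DB[value.doc_id])
    if isinstance(value, list):
        return [resolve(v) for v in value]
    if isinstance(value, dict):
        return {k: resolve(v) for k, v in value.items()}
    return value

playlist = {"entries": [{"content": Ref(1), "some_props": ["a"]}]}
fetched = resolve(playlist)  # content is now the document, not a reference
```

A real implementation inside Beanie would additionally need to batch the database lookups and guard against reference cycles.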
"Stale"
] | kpostekk | 6 |
ageitgey/face_recognition | machine-learning | 1,266 | Doing face encoding in nodejs | Hi,
Is there a supporting library that can run `face_recognition.face_encodings` in nodejs?
I don't want to pass the source image to the python server. just the encoded numbers.
Possible?
Thanks | open | 2021-01-10T10:40:26Z | 2021-01-27T09:51:16Z | https://github.com/ageitgey/face_recognition/issues/1266 | [] | avifatal | 1 |
jupyter/nbgrader | jupyter | 1,662 | How to execute a command on Jupyter Notebook Server Terminal via the Rest API or any Method | I have deployed JupyterHub on Kubernetes following this guide <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/">link</a>, and I have set up nbgrader and ngshare on JupyterHub using this guide <a href="https://nbgrader.readthedocs.io/en/stable/configuration/jupyterhub_config.html">link</a>. I have a learning management system (LMS) similar to Moodle. I want to view the list of assignments for both instructors and students; I can do that by using the Jupyter Notebook REST API like this:
```
import requests
import json
api_url = 'http://35.225.169.22/user/kevin/api/contents/release'
payload = {'token': 'XXXXXXXXXXXXXXXXXXXXXXXXXX'}
r = requests.get(api_url,params = payload)
r.raise_for_status()
users = r.json()
print(json.dumps(users, indent = 1))
```
Now I want to grade all submitted assignments using the nbgrader command `nbgrader autograde "Assignment1"`. I can do that by logging into the instructor's notebook server, opening a terminal, and running the command, but I want to run this command on the notebook server terminal through the Jupyter Notebook server REST API: the instructor clicks a grade button on the LMS frontend, which sends a request to the LMS backend, which in turn sends a REST API request (containing the above command) to the Jupyter Notebook server, which runs the command in a terminal and returns the response to the LMS backend. I cannot find anything like this in the Jupyter Notebook API <a href="https://jupyter-server.readthedocs.io/en/latest/developers/rest-api.html">documentation</a>; there is an endpoint to start a terminal, but not one to run commands in it.
| open | 2022-09-07T23:04:46Z | 2024-02-23T12:26:56Z | https://github.com/jupyter/nbgrader/issues/1662 | [
"question"
] | adarshverma19 | 1 |
plotly/dash | plotly | 2,536 | [BUG] Turn twitter cards off by default to avoid security issues | - replace the result of `pip list | grep dash` below
```
dash 2.9.3
dash-auth 2.0.0
dash-bootstrap-components 1.4.1
dash-core-components 2.0.0
dash-extensions 0.1.13
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: OSX
- Browser Firefox
- Version 102.10
**Describe the bug**
Deploying a dash app with pages allows attackers to embed code into the webapp. This is a potential security vulnerability since it allows attackers to execute arbitrary code in the context of the dash sandbox.
Concretely, if use_pages is true, dash calls self._pages_meta_tags()
https://github.com/plotly/dash/blob/a8b3ddbec5a0c639d41230137c5e5744d5f43c8f/dash/dash.py#L968
which *always* adds the twitter and og meta tags
https://github.com/plotly/dash/blob/a8b3ddbec5a0c639d41230137c5e5744d5f43c8f/dash/dash.py#L906
The twitter meta tag includes the URL
```
<!-- Twitter Card data -->
<meta property="twitter:card" content="summary_large_image">
<meta property="twitter:url" content="{flask.request.url}">
<meta property="twitter:title" content="{title}">
<meta property="twitter:description" content="{description}">
<meta property="twitter:image" content="{image_url}">
```
So if the dash app is involved with a URL that includes a `<script>` tag, the script specified in the tag will be executed.
Example URL
```
[dash_app_base_url]/?'"--></style></scRipt><scRipt>netsparker(0x000F45)</scRipt>
```
This causes our dash app to fail cyber security/pen testing scans.
A workaround is to a custom `index_string` which removes all `meta` tags, but that has the disadvantage of not including any meta tags, even the ones that we might want.
A better option would be to make the twitter and og meta tags opt-in; it is not clear that specifying a twitter card is necessary for all deployers.
I am happy to submit a PR (`include_card_configs` property) if that would help
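Independent of which meta tags Dash emits, the core of the fix is to HTML-escape any request-derived value before interpolating it into markup. A small sketch of that idea (not Dash's actual code; `twitter_card_meta` is a made-up helper):

```python
import html

def twitter_card_meta(url: str, title: str) -> str:
    # Escaping request-derived values before interpolation closes the hole:
    # flask.request.url is attacker-controlled, so it must never be emitted verbatim.
    return (
        f'<meta property="twitter:url" content="{html.escape(url, quote=True)}">\n'
        f'<meta property="twitter:title" content="{html.escape(title, quote=True)}">'
    )

malicious = "https://app/?'\"--></style></scRipt><scRipt>attack()</scRipt>"
rendered = twitter_card_meta(malicious, "My App")
# The payload is inert: angle brackets and quotes arrive HTML-encoded.
```

This would neutralize the reflected payload shown in the report even if the card tags stay on by default, though making the tags opt-in is still the cleaner API.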
**Screenshots**
-----------
#### Vulnerability report
<img width="873" alt="Screenshot 2023-05-21 at 9 54 22 PM" src="https://github.com/plotly/dash/assets/2423263/64d70995-8819-4604-b3f0-aef0435d9eac">
-----------
#### Page source includes the embedded script
<img width="873" alt="Screenshot 2023-05-21 at 9 52 51 PM" src="https://github.com/plotly/dash/assets/2423263/b90e03f2-8bed-4528-aada-e77fc853373b">
-----------
#### Example of an embedded alert in firefox
<img width="878" alt="Screenshot 2023-05-21 at 9 57 17 PM" src="https://github.com/plotly/dash/assets/2423263/2aea626b-1480-4d0f-b356-4392a60ab2b6">
| closed | 2023-05-22T05:00:50Z | 2023-05-25T16:12:38Z | https://github.com/plotly/dash/issues/2536 | [] | shankari | 4 |
dynaconf/dynaconf | django | 614 | [RFC] merge strategies/deep merge strategies for lazy objects | **Is your feature request related to a problem? Please describe.**
when performing deep merges, dynaconf always eagerly evaluates lazy objects
this can in particular end bad if one wants to configure dynaconf for usage with a loader based on config data
**Describe the solution you'd like**
have a merge strategy that does not require to eagerly evaluate lazy objects
**Describe alternatives you've considered**
nothing comes to mind, the issues is a tricky
| closed | 2021-07-12T19:15:23Z | 2024-01-08T11:00:21Z | https://github.com/dynaconf/dynaconf/issues/614 | [
"wontfix",
"Not a Bug",
"RFC"
] | RonnyPfannschmidt | 1 |
Asabeneh/30-Days-Of-Python | matplotlib | 373 | DAY 1 VSCODE INSTALLATION | 
I dont know why the directory is so long. the underlined part in blue. how can i fix it | open | 2023-03-17T20:16:28Z | 2023-08-08T20:27:12Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/373 | [] | AishaBell | 2 |
lucidrains/vit-pytorch | computer-vision | 278 | Saving and loading model seems to be regressing to lower performance | Hi, it is a great experience working with your library so far. I wanted to ask what the best way is to save and load the model.
I save:
```python3
model = ViT(hyperparams)
train(model)
torch.save(model.state_dict(), save_loc + f"model_e{epoch+1}.pth")
```
I load:
```python3
model = ViT(hyperparams) # exact same
model.load_state_dict(torch.load(f"models/VITM/model_e13.pth"))
```
However, the loaded model has lower train and test scores on the exact same dataset. Are there other things that I need to save that I am missing? What could be the reason for this?
I am using 0.1 dropouts, but I do not expect that to cause this discrepancy. | closed | 2023-09-10T07:44:12Z | 2023-09-11T07:54:12Z | https://github.com/lucidrains/vit-pytorch/issues/278 | [] | aperiamegh | 1 |
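For the save/load question above: in PyTorch, the single most common cause of exactly this symptom is scoring the reloaded model without calling `model.eval()`, so the 0.1 dropout stays active at inference time. A stdlib-only sketch of the effect (a mock `Dropout`, not torch's):

```python
import random

class Dropout:
    """Toy dropout: zeroes activations in train mode, identity in eval mode."""
    def __init__(self, p: float):
        self.p = p
        self.training = True  # like torch, modules start in train mode

    def __call__(self, xs):
        if self.training:
            # Inverted dropout: drop with prob p, rescale survivors.
            return [0.0 if random.random() < self.p else x / (1 - self.p)
                    for x in xs]
        return list(xs)  # eval mode is deterministic

random.seed(0)
layer = Dropout(0.1)
noisy = layer([1.0] * 1000)   # "forgot model.eval()": ~10% of values zeroed
layer.training = False        # what model.eval() toggles
clean = layer([1.0] * 1000)   # deterministic, matches expected behavior
```

If that is the cause here, calling `model.eval()` right after `load_state_dict` (and wrapping scoring in `torch.no_grad()`) should restore the original numbers; saving the `state_dict` alone is otherwise sufficient for inference.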