repo_name stringlengths 9 75 | topic stringclasses 30
values | issue_number int64 1 203k | title stringlengths 1 976 | body stringlengths 0 254k | state stringclasses 2
values | created_at stringlengths 20 20 | updated_at stringlengths 20 20 | url stringlengths 38 105 | labels listlengths 0 9 | user_login stringlengths 1 39 | comments_count int64 0 452 |
|---|---|---|---|---|---|---|---|---|---|---|---|
tflearn/tflearn | data-science | 669 | Tflearn accuracy on a large input is always 1 | I'm training a regression DNN, and the output of the last layer should be a number between [-1,1]. It seems like the network is considering the output layer as categorical and always produces an output of either -1 or 1 and nothing in between. The acc number is always 1 (which is strange on a large input).
Here's the network:
```
import tensorflow as tf
import tflearn

with tf.Graph().as_default():
    tnorm = tflearn.initializations.uniform(minval=-1.0, maxval=1.0)
    net = tflearn.input_data(shape=[None, 128])
    net = tflearn.fully_connected(net, 64, activation='tanh', weights_init=tnorm)
    net = tflearn.fully_connected(net, 8, activation='tanh', weights_init=tnorm)
    net = tflearn.fully_connected(net, 1, activation='tanh', weights_init=tnorm)
    net = tflearn.regression(net, optimizer='momentum', loss='mean_square', learning_rate=0.001)
    model = tflearn.DNN(net, tensorboard_verbose=0, clip_gradients=1.0)
    model.fit(X_train, Y_train, n_epoch=10, validation_set=0.1, show_metric=True, run_id="Regression")
```
And here's the output of the network:
```
Training Step: 600 | total loss: 0.010
| Momentum | epoch: 003 | loss: 0.010 - acc: 1.0000 -- iter: 10000/10000
```
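For anyone hitting the same thing: I suspect the reported metric rather than the regression itself. If the default accuracy compares the `argmax` of predictions and targets (as categorical accuracy does), then with a single output unit the argmax is always index 0 on both sides, so acc is trivially 1.0. A stdlib-only sketch of that effect (my reconstruction, not tflearn's actual code):

```python
def argmax(vec):
    # index of the largest entry, like tf.argmax(x, axis=1) does per row
    return max(range(len(vec)), key=vec.__getitem__)

def categorical_accuracy(preds, targets):
    # fraction of rows whose argmax matches -- meaningless for 1-unit outputs
    hits = sum(argmax(p) == argmax(t) for p, t in zip(preds, targets))
    return hits / len(preds)

# single-output regression: every row has exactly one entry, so argmax is always 0
preds = [[0.31], [-0.87], [0.05]]
targets = [[0.99], [0.12], [-0.44]]
print(categorical_accuracy(preds, targets))  # → 1.0, no matter the values
```

If that is what's happening, passing an explicit regression metric (e.g. `tflearn.metrics.R2()`) to `tflearn.regression(...)` might give a more honest number — though I haven't verified that's the right API here.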
| open | 2017-03-18T05:23:14Z | 2017-03-18T05:24:13Z | https://github.com/tflearn/tflearn/issues/669 | [] | sauberf | 0 |
flasgger/flasgger | flask | 482 | Can't use an external file | I am getting the exception `AttributeError: 'dict' object has no attribute 'startswith'` despite passing a file path:
```
from os import path
from pathlib import Path

from flasgger import swag_from
from flask import request

file_dir = path.dirname(path.abspath(__file__))
schema_path = path.join(file_dir, "openapi.yaml")
schema = Path(schema_path)

@app.route("/ingestion-job", methods=["GET", "POST"])
@swag_from(schema, endpoint="ingestion-job", methods=["POST"], validation=True)
def ingestion_root():
    if request.method == "POST":
        manifest = request.json
        ...
```
Leads to:
```
[2021-05-24 09:21:50,640] [dataapi.def_controller] [DEBUG ] Rebuilding cache from git using profile: None
[2021-05-24 09:21:50,652] [dataapi.def_controller] [DEBUG ] Cached 1 ingestion jobs
[2021-05-24 09:21:50,895] [dataapi.app ] [ERROR ] Exception on /ingestion-job [POST]
Traceback (most recent call last):
File "/Users/tommycarpenter/Development/python-data-ingestion-api/.tox/py39/lib/python3.9/site-packages/flask/app.py", line 2070, in wsgi_app
response = self.full_dispatch_request()
File "/Users/tommycarpenter/Development/python-data-ingestion-api/.tox/py39/lib/python3.9/site-packages/flask/app.py", line 1515, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Users/tommycarpenter/Development/python-data-ingestion-api/.tox/py39/lib/python3.9/site-packages/flask/app.py", line 1513, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/tommycarpenter/Development/python-data-ingestion-api/.tox/py39/lib/python3.9/site-packages/flask/app.py", line 1499, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "/Users/tommycarpenter/Development/python-data-ingestion-api/.tox/py39/lib/python3.9/site-packages/flasgger/utils.py", line 266, in wrapper
validate(
File "/Users/tommycarpenter/Development/python-data-ingestion-api/.tox/py39/lib/python3.9/site-packages/flasgger/utils.py", line 382, in validate
if not filepath.startswith('/'):
AttributeError: 'dict' object has no attribute 'startswith'
```
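My guess (unverified) is that passing a `pathlib.Path` instead of a plain `str` sends flasgger down the wrong branch: an `isinstance(..., str)` check won't match a `Path`, so the spec can end up loaded and treated as a dict before it reaches `validate()`. A stdlib-only sketch of that kind of misdispatch (hypothetical, not flasgger's actual code):

```python
from pathlib import Path

def dispatch(specs):
    # a str path is handled as a file reference; anything else is assumed
    # to already be a parsed spec (dict) -- a Path object falls through
    if isinstance(specs, str):
        return "file"
    return "dict"

print(dispatch("openapi.yaml"))        # → file
print(dispatch(Path("openapi.yaml")))  # → dict
```

If that is the mechanism, passing `str(schema_path)` to `@swag_from` might avoid it, and `swag_from` could perhaps coerce `os.PathLike` inputs up front.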
It is unclear why `filepath` would be an instance of `dict` here. | open | 2021-05-24T13:23:21Z | 2022-03-18T07:21:26Z | https://github.com/flasgger/flasgger/issues/482 | [] | tommyjcarpenter | 2 |
miguelgrinberg/python-socketio | asyncio | 650 | Server "eio_sid" error when entering client into a room in non-default namespace | For python-socketio v5, there is an error when trying to enter a client into a room when the namespace parameter is non-default.
Following are the server and client code used to isolate this situation:
**Server**
```
import socketio
from aiohttp import web

sio = socketio.AsyncServer(async_mode='aiohttp')
app = web.Application()
sio.attach(app)

@sio.event
async def connect(sid, environ):
    print("connect ", sid)
    sio.enter_room(sid=sid, room='test_room', namespace='/test_ns')
    await sio.emit('message', data='ping', room='test_room', namespace='/test_ns')

if __name__ == '__main__':
    web.run_app(app=app, port=6000)
```
**Client**
```
import asyncio

import socketio

sio = socketio.AsyncClient()

@sio.event
async def connect():
    print('connection established')

@sio.event(namespace='/test_ns')
async def message(data):
    print(data)

async def main():
    await sio.connect('http://localhost:6000')
    await sio.wait()

if __name__ == '__main__':
    asyncio.run(main())
```
Traceback eventually ends up here:
```
File "[redacted]\venv\lib\site-packages\socketio\base_manager.py", line 115, in enter_room
    eio_sid = self.rooms[namespace][None][sid]
KeyError: None
```
**Probable Cause**
I believe the error is caused by two functions.
1. The `enter_room()` function in socketio/server.py (line 389 as of the time of writing), which calls `enter_room()` in base_manager, does not pass an `eio_sid`. This causes `eio_sid` in base_manager's `enter_room()` to always default to None.
2. This means that lines 114-115 are always triggered:
```
if eio_sid is None:
eio_sid = self.rooms[namespace][None][sid]
```
In the example, the namespace to be accessed would be '/test_ns', which does not exist in `self.rooms`, thus causing the KeyError.
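A stand-alone sketch of the failing lookup — the `rooms` structure here is my approximation of what base_manager keeps, not the real implementation:

```python
from collections import defaultdict

# rooms[namespace][room][sid] = eio_sid, with room None mapping sid -> eio_sid
rooms = defaultdict(dict)
rooms['/'][None] = {'sid123': 'eio456'}   # populated when the client connects

try:
    eio_sid = rooms['/test_ns'][None]['sid123']   # base_manager.py line 115
except KeyError as exc:
    print('KeyError:', exc)   # → KeyError: None
```

Room None was never populated for '/test_ns', so the inner lookup raises `KeyError: None`, matching the traceback above.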
**The Fix**
I believe that the namespace to be accessed when `eio_sid is None` should be '/'. Replacing `eio_sid = self.rooms[namespace][None][sid]` with `eio_sid = self.rooms['/'][None][sid]` resolved the issue. | closed | 2021-03-05T03:10:17Z | 2021-04-20T23:03:31Z | https://github.com/miguelgrinberg/python-socketio/issues/650 | [
"bug"
] | sd-zhang | 4 |
pyro-ppl/numpyro | numpy | 1,490 | Mixture: Intermediates cannot be coerced to bool | ```python
from jax import numpy as jnp
from jax.random import PRNGKey

from numpyro import distributions as dist
from numpyro import sample
from numpyro import handlers as hdl
from numpyro.infer.util import log_density

key = PRNGKey(0)

def model(toggle):
    d1 = dist.HalfNormal()
    d2 = dist.LogNormal()
    mixing = dist.Categorical(jnp.full((2,), 0.5))
    mixture = dist.Mixture(mixing, [d1, d2])
    if toggle:
        sample('s', mixture)
    else:
        sample('s', mixture, sample_shape=(2,))

with hdl.seed(rng_seed=key):
    l1 = log_density(model, [], {'toggle': True}, {})
    l2 = log_density(model, [], {'toggle': False}, {})
```
In the above, `l1` evaluates fine, as the intermediates returned in the model trace are a `DeviceArray` containing a single number, but when the sample/batch shape is anything other than `(1,)`, `intermediates` is an array with size > 1 and the following boolean coercion causes an error:

Comparing `_MixtureBase` to other distributions with `sample_with_intermediates`, it seems all other cases return a list of intermediates, whereas `_MixtureBase` simply returns a jax array directly. It appears the most logical fix would be for `_MixtureBase.sample_with_intermediates` to return the intermediates array wrapped in a list in order to comply with the current API. If this sounds like a reasonable fix, I would be happy to send a PR. | closed | 2022-10-25T10:27:15Z | 2022-10-26T09:35:50Z | https://github.com/pyro-ppl/numpyro/issues/1490 | [
"bug"
] | hessammehr | 2 |
sherlock-project/sherlock | python | 1,980 | Trying | closed | 2024-01-27T04:43:27Z | 2024-01-27T07:28:09Z | https://github.com/sherlock-project/sherlock/issues/1980 | [] | jerlee3131 | 1 | |
huggingface/text-generation-inference | nlp | 2,435 | PaliGemma detection task is failing | ### System Info
```
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.01 Driver Version: 535.183.01 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA A100-SXM4-40GB Off | 00000000:00:04.0 Off | 0 |
| N/A 33C P0 49W / 400W | 34683MiB / 40960MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 7589 G /usr/lib/xorg/Xorg 95MiB |
| 0 N/A N/A 7836 G /usr/bin/gnome-shell 12MiB |
| 0 N/A N/A 43844 C /opt/conda/bin/python3.10 34552MiB |
+---------------------------------------------------------------------------------------+
```
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
I'm running Google's PaliGemma 448-res model with Docker:
```
model=google/paligemma-3b-pt-448
volume=$PWD/data
docker run --gpus all --shm-size 1g -e HF_TOKEN=$token -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:2.2.0 --model-id $model
```
I'm trying to run the detection task using the example code [here](https://huggingface.co/docs/text-generation-inference/main/en/basic_tutorials/visual_language_models#hugging-face-hub-python-library):
```python
from huggingface_hub import InferenceClient

client = InferenceClient("http://127.0.0.1:8080")
image = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png"
prompt = f"![]({image})detect rabbit\n\n"
for token in client.text_generation(prompt, max_new_tokens=16, stream=True):
    print(token)
# This is a picture of an anthropomorphic rabbit in a space suit.
```
But I only get `<eos>` as an output.
I raised a similar issue in vLLM: https://github.com/vllm-project/vllm/issues/7115
I suspected that it had to do with the special `<loc>` tokens. In the HF implementation the extra loc tokens are added to the Gemma processor:
https://github.com/huggingface/transformers/blob/25245ec26dc29bcf6102e1b4ddd0dfd02e720cf5/src/transformers/models/paligemma/processing_paligemma.py#L38
I'm not sure if it could be that, though.
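For context, the linked processor code builds the extra location/segmentation tokens roughly like this (my paraphrase of the transformers snippet; the exact format is an assumption on my part):

```python
# 1024 detection location tokens plus 128 segmentation tokens, as I read
# the linked processing_paligemma.py
loc_tokens = [f"<loc{i:04d}>" for i in range(1024)]
seg_tokens = [f"<seg{i:03d}>" for i in range(128)]

print(loc_tokens[0], loc_tokens[-1])       # → <loc0000> <loc1023>
print(len(loc_tokens) + len(seg_tokens))   # → 1152
```

If TGI's tokenizer for this checkpoint doesn't treat these as single tokens, a `detect` prompt could tokenize differently than in the HF space, which might explain the immediate `<eos>` — just a hypothesis.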
### Expected behavior
I tried to run the same example using [this HF space](https://huggingface.co/spaces/big-vision/paligemma-hf) and it ran successfully:
<img width="520" alt="Screenshot 2024-08-19 at 6 37 14 p m" src="https://github.com/user-attachments/assets/265bdd41-0759-44bc-a044-07d1d1f3e807"> | open | 2024-08-20T00:42:48Z | 2024-12-08T16:22:09Z | https://github.com/huggingface/text-generation-inference/issues/2435 | [] | nph4rd | 3 |
healthchecks/healthchecks | django | 572 | Increase log length | I have a healthcheck that is pinged every minute. There were some failures last night so I checked the log to see if I needed to up the grace period but the log only went back to the start of today. If the log length could be increased to 5760 entries (the number of minutes in two days times two, because there's a started and ok event) then that would be great. | closed | 2021-10-14T16:57:06Z | 2022-09-12T16:45:09Z | https://github.com/healthchecks/healthchecks/issues/572 | [
"feature"
] | caleb15 | 5 |
ResidentMario/missingno | pandas | 40 | Fix bad column numeracy plotting in matplotlib 2.0 | There's an issue with the column counts on the right side of the (bar, possibly matrix) plot in matplotlib 2.0, needs fixing. | closed | 2017-11-09T20:18:58Z | 2018-02-03T22:03:22Z | https://github.com/ResidentMario/missingno/issues/40 | [] | ResidentMario | 2 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 666 | No gui? | i run python demo_toolbox.py and what is returned is:
```
(voice-clone) S:\path\path\path\path\Real-Time-Voice-Cloning-master>python demo_toolbox.py
S:\path\path\path\path\Real-Time-Voice-Cloning-master\encoder\audio.py:13: UserWarning: Unable to import 'webrtcvad'. This package enables noise removal and is recommended.
warn("Unable to import 'webrtcvad'. This package enables noise removal and is recommended.")
Arguments:
    datasets_root:  None
    enc_models_dir: encoder\saved_models
    syn_models_dir: synthesizer\saved_models
    voc_models_dir: vocoder\saved_models
    cpu:            False
    seed:           None
    no_mp3_support: False
Error: Model files not found. Follow these instructions to get and install the models:
https://github.com/CorentinJ/Real-Time-Voice-Cloning/wiki/Pretrained-models
```
which confuses me because no GUI launches and no traceback is shown either. | closed | 2021-02-17T00:06:59Z | 2021-02-17T20:22:19Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/666 | [] | ghost | 3 |
graphdeco-inria/gaussian-splatting | computer-vision | 470 | garbage code in in_frustum() ? | __forceinline__ __device__ bool in_frustum(int idx, //John: per-thread, per-gauss-point
const float* orig_points,
const float* viewmatrix,
const float* projmatrix,
bool prefiltered,
float3& p_view)
{
float3 p_orig = { orig_points[3 * idx], orig_points[3 * idx + 1], orig_points[3 * idx + 2] };
// Bring points to screen space
float4 p_hom = transformPoint4x4(p_orig, projmatrix); //here: garbage?
float p_w = 1.0f / (p_hom.w + 0.0000001f); //here: garbage?
float3 p_proj = { p_hom.x * p_w, p_hom.y * p_w, p_hom.z * p_w }; //here: garbage?
p_view = transformPoint4x3(p_orig, viewmatrix);
if (p_view.z <= 0.2f)// || ((p_proj.x < -1.3 || p_proj.x > 1.3 || p_proj.y < -1.3 || p_proj.y > 1.3)))
{
if (prefiltered)
{
printf("Point is filtered although prefiltered is set. This shouldn't happen!");
__trap();
}
return false;
}
return true;
} | closed | 2023-11-15T05:57:44Z | 2024-11-07T13:54:04Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/470 | [] | yuedajiong | 1 |
flairNLP/flair | nlp | 3,065 | [Question]: TUTORIAL_13 export_onnx model size and oom questions. | ### Question
I am currently trying to reduce model size and memory consumption of a fine-tuned model. To this end I am following the TUTORIAL_13. To check out the changes in model size and consumption I am using the pre-trained `flair/ner-english-large` model which is 2135.899MB in size (disk space). To find out the size I am using this code snippet:
```
param_size = 0
for param in model.parameters():
    param_size += param.nelement() * param.element_size()

buffer_size = 0
for buffer in model.buffers():
    buffer_size += buffer.nelement() * buffer.element_size()

size_all_mb = (param_size + buffer_size) / 1024**2
print('model size: {:.3f}MB'.format(size_all_mb))
```
After calling `model.embeddings = model.embeddings.export_onnx("flert-embeddings.onnx", sentences, providers=["CUDAExecutionProvider", "CPUExecutionProvider"])`, the model size reduces to 0.079MB, which seems veeeeery odd to me. Calling `model.predict` here works fine though. Any ideas whether this is legit?
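Regarding that first point, my own guess is that the snippet above only counts torch-side `Parameter`s, and after `export_onnx` the weights live in the `.onnx` file rather than as module parameters, so 0.079MB mostly reflects what's left in the wrapper. A crude cross-check would be to look at on-disk size instead (filename is whatever you exported to):

```python
import os

def file_size_mb(path):
    # size of the exported ONNX graph on disk, in MiB
    return os.path.getsize(path) / 1024 ** 2

# e.g. print(f"{file_size_mb('flert-embeddings.onnx'):.3f}MB")
```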
Another thing is that after calling
```
model.embeddings.optimize_model(
"flert-optimized-embeddings.onnx", opt_level=2, use_gpu=True, only_onnxruntime=True, use_external_data_format=True,
)
```
and then
```
model.embeddings.quantize_model(
"flert-quantized-embeddings.onnx", extra_options={"DisableShapeInference": True}, use_external_data_format=True
)
```
my Colab runs OOM, which also seems odd to me, since I am trying to reduce memory consumption. Or does the quantization process itself take a lot of memory while the embeddings end up with a smaller footprint?
Any hints to these points are highly appreciated!
| closed | 2023-01-23T16:04:55Z | 2023-08-12T19:59:59Z | https://github.com/flairNLP/flair/issues/3065 | [
"question",
"wontfix"
] | agademic | 4 |
zihangdai/xlnet | tensorflow | 194 | Why the pos_emb starts with `klen` and not `klen -1`? | Hi,
I've noticed that `klen -1` is commented out, but `klen` is used instead.
https://github.com/zihangdai/xlnet/blob/b4e33739b7df17af6f37a89af9a769a987711587/modeling.py#L205-L250
However, the original transformer-xl actually uses `klen -1`:
https://github.com/kimiyoung/transformer-xl/blob/44781ed21dbaec88b280f74d9ae2877f52b492a5/tf/model.py#L487
Is there a particular reason why this is the case?
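Working through the attainable offsets with toy numbers (my own sanity check):

```python
mlen, qlen = 4, 3
klen = mlen + qlen

# queries occupy the last qlen absolute positions; keys span all klen positions
offsets = {i - j for i in range(mlen, klen) for j in range(klen)}

print(max(offsets))  # → 6, i.e. klen - 1 (never klen)
print(min(offsets))  # → -2, i.e. -(qlen - 1)
```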
If the total length of the sequence is `klen = mlen + qlen`, then the `i-j` in `R_{i-j}` should never be `klen`, right? Why is the `klen` entry needed? | closed | 2019-07-28T00:39:19Z | 2019-07-28T01:09:17Z | https://github.com/zihangdai/xlnet/issues/194 | [] | shaform | 0 |
dpgaspar/Flask-AppBuilder | rest-api | 2,046 | Question: how to access userinfo or current user. | This is more of a question than issue and hopefully and add-on to your awesome docs, I would not normally if asked if I did not look and try before hand.
What I am trying to do is load a page baed on the role of the user when they are logged, however I am stamped at getting the current_user or the userinfo.
In short I am asking is there some global var or function to get the above info or do I pull it from the session cookies.
Pointing me in the right direction would be awesome. | open | 2023-05-19T15:46:32Z | 2023-05-28T15:41:19Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/2046 | [
"question"
] | awsumco | 2 |
deepspeedai/DeepSpeed | machine-learning | 5,828 | [BUG] In deepspeed Zero3, RuntimeError: still have inflight params | **Describe the bug**
When using ZeRO stage 3, an error occurs during training when only a subset of the model parameters is used, selected based on the data content. The error message is: `RuntimeError: still have inflight params`.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a python file
```
touch test_inflight.py
```
2. Paste the following code into test_inflight.py.
```
import argparse

import deepspeed
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.optim import AdamW
from transformers import get_scheduler
from transformers.deepspeed import HfDeepSpeedConfig


class MyNetwork(nn.Module):
    def __init__(self):
        super(MyNetwork, self).__init__()
        self.fc1 = nn.Linear(1024, 1024)
        self.fc2 = nn.Linear(1024, 1024)
        self.fc3 = nn.Linear(1024, 1)
        self.selu = nn.SELU()
        self.sigmoid = nn.Sigmoid()

    def forward(self, x, use_fc2):
        x = self.fc1(x)
        x = self.selu(x)
        if use_fc2:
            x = self.fc2(x)
            x = self.selu(x)
        x = self.fc3(x)
        x = self.sigmoid(x)
        return x


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument('--local_rank', type=int, default=-1)
    parser = deepspeed.add_config_arguments(parser)
    args = parser.parse_args()

    model = MyNetwork()

    deepspeed.init_distributed()
    torch.cuda.set_device(args.local_rank)
    device = torch.device('cuda', args.local_rank)
    args.device = device
    args.global_rank = dist.get_rank()
    dist.barrier()

    ds_config = {
        'train_batch_size': None,
        'train_micro_batch_size_per_gpu': 8,
        'gradient_accumulation_steps': 1,
        'steps_per_print': 10,
        'zero_optimization': {
            'stage': 3,
            'offload_param': {
                'device': 'none',
            },
            'offload_optimizer': {
                'device': 'none',
            },
            'param_persistence_threshold': 1e4,
            'max_live_parameters': 3e7,
            'prefetch_bucket_size': 3e7,
            'memory_efficient_linear': False,
            'gather_16bit_weights_on_model_save': True,
        },
        'gradient_clipping': 1.0,
        'prescale_gradients': False,
        'wall_clock_breakdown': False,
    }
    _dstchf = HfDeepSpeedConfig(ds_config)

    optimizer = AdamW(
        [{'params': list(model.parameters()), 'weight_decay': 0.0}],
        lr=1e-3,
        betas=(0.9, 0.95),
    )
    lr_scheduler = get_scheduler(
        name='cosine',
        optimizer=optimizer,
        num_warmup_steps=5,
        num_training_steps=100,
    )
    model, *_ = deepspeed.initialize(
        model=model,
        optimizer=optimizer,
        args=args,
        config=ds_config,
        lr_scheduler=lr_scheduler,
        dist_init_required=True,
    )

    # step 1: uses fc2
    inputs = torch.randn(8, 1024).to(device)
    predicts = torch.randn(8, 1).to(device)
    outputs = model(inputs, use_fc2=True)
    loss = nn.MSELoss()(outputs, predicts)
    model.backward(loss)
    model.step()

    # step 2: skips fc2 -- this is where the error shows up
    inputs = torch.randn(8, 1024).to(device)
    predicts = torch.randn(8, 1).to(device)
    outputs = model(inputs, use_fc2=False)
    loss = nn.MSELoss()(outputs, predicts)
    model.backward(loss)
    model.step()


if __name__ == '__main__':
    main()
```
3. Run the following command.
```
deepspeed --module test_inflight
```
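For what it's worth, my mental model of the failure as a stdlib-only sketch (an illustration of the suspected bookkeeping, not DeepSpeed's actual code): the prefetcher fetches parameters according to the recorded trace, but when the data-dependent branch skips `fc2`, the fetched handles are never consumed and stay "in flight".

```python
trace = ["fc1.weight", "fc2.weight", "fc3.weight"]   # recorded on step 1

def run_step(use_fc2):
    inflight = set(trace)            # prefetch everything the trace predicts
    used = ["fc1.weight"] + (["fc2.weight"] if use_fc2 else []) + ["fc3.weight"]
    for name in used:                # forward pass consumes what it touches
        inflight.discard(name)
    return inflight                  # anything left over triggers the error

print(run_step(use_fc2=True))    # → set()
print(run_step(use_fc2=False))   # → {'fc2.weight'}, "still have inflight params"
```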
**Expected behavior**
A clear and concise description of what you expected to happen.
**ds_report output**
Please run `ds_report` to give us details about your setup.
```
[2024-08-05 21:31:27,078] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Warning: The default cache directory for DeepSpeed Triton autotune, /home/yangyaodong/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.4
[WARNING] using untested triton version (3.0.0), only 1.0.0 is known to be compatible
/aifs4su/yaodong/miniconda3/envs/xuyao-multi-node/lib/python3.11/site-packages/deepspeed/runtime/zero/linear.py:47: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@autocast_custom_fwd
/aifs4su/yaodong/miniconda3/envs/xuyao-multi-node/lib/python3.11/site-packages/deepspeed/runtime/zero/linear.py:66: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
@autocast_custom_bwd
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
fp_quantizer ........... [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.4
[WARNING] using untested triton version (3.0.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/aifs4su/yaodong/miniconda3/envs/xuyao-multi-node/lib/python3.11/site-packages/torch']
torch version .................... 2.4.0+cu121
deepspeed install path ........... ['/aifs4su/yaodong/miniconda3/envs/xuyao-multi-node/lib/python3.11/site-packages/deepspeed']
deepspeed info ................... 0.14.4, unknown, unknown
torch cuda version ............... 12.1
torch hip version ................ None
nvcc version ..................... 12.2
deepspeed wheel compiled w. ...... torch 0.0, cuda 0.0
shared memory (/dev/shm) size .... 1007.78 GB
```
**Screenshots**
If applicable, add screenshots to help explain your problem.

**System info (please complete the following information):**
- OS: Ubuntu 22.04.2 LTS
- GPU count and types: one machine with x8 H800s
- Python version: Python 3.11.0
**Launcher context**
Are you launching your experiment with the `deepspeed` launcher, MPI, or something else?
```
deepspeed --module test_inflight
```
**Docker context**
Are you using a specific docker image that you can share?
```
No
```
**Additional context**
Add any other context about the problem here.
```
No
``` | closed | 2024-08-05T15:20:38Z | 2024-08-08T09:10:29Z | https://github.com/deepspeedai/DeepSpeed/issues/5828 | [
"bug",
"training"
] | XuyaoWang | 3 |
CorentinJ/Real-Time-Voice-Cloning | python | 560 | Error message keeps saying I need to download the pretrained models even though i already have? | 
| closed | 2020-10-16T00:58:35Z | 2020-10-16T05:24:14Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/560 | [] | icandancelikecool | 1 |
unit8co/darts | data-science | 2,296 | Usage and Interpretation of SHAP Values in Darts | I have a couple of questions regarding the usage of SHAP values within [darts ](https://unit8co.github.io/darts/generated_api/darts.explainability.shap_explainer.html#:~:text=It%20uses%20shap%20values%20to,model%20to%20produce%20its%20forecasts.).
1. From what I understand, it is common to fit the ShapExplainer first to the train_series and then use it to explain the test series to get an understanding of how the predictions are made. So after the fitting the model, e.g.
```
lr_model = LinearRegressionModel(lags = target_lags, lags_future_covariates= future_cov_lags, lags_past_covariates=past_cov_lags, output_chunk_length=forecast_horizon, n_jobs=-1, random_state=42, multi_models=True)
shap_explain = ShapExplainer(lr_model)
```
I can then use the _foreground_series_ parameter to explain the shap values for the test set, e.g.
```
target_series_test = target_series.slice(test_set_start_date, end_date)
future_cov_test = future_cov_series.slice(test_set_start_date, end_date)
results = shap_explain.explain(
    foreground_series=target_series_test,
    foreground_future_covariates=future_cov_test,
)
shap_explain_df = results.get_feature_values(horizon=1).pd_dataframe()
```
However, I noticed that the feature values that are returned seem to be the same regardless of which horizon I pass into get_feature_values:
For horizon = 1

For horizon = 2

**Is that expected?**
Also, since `multi_models` is _true_, shouldn't the same set of features in each row be used to predict each timestep in your forecast horizon by the corresponding submodel? I.e., if the forecast horizon is 24, the target_lag-24 of -5.17 and target_lag-23 of -1.07 should appear 24 times in the feature values: to predict, for example, 2023-01-02 00:00:00 in submodel1 but also 2023-01-02 01:00:00 in submodel2.
**2. Is there a way to return a summary plot for the test_set instead? The summary_plot() method does not seem to be able to take a foreground series.**
**3. What are the .base_values returned by the summary_plot() method?**
From the documentation, this is what I see:

If I run
`shap_explain_vals = shap_explain.summary_plot(horizons = [1])`
I get:

**Does `.base_values` represent the expected base value from which the SHAP values are then computed?** | open | 2024-03-27T18:49:39Z | 2024-08-28T07:18:49Z | https://github.com/unit8co/darts/issues/2296 | [
"question"
] | DataScientistET | 0 |
scikit-image/scikit-image | computer-vision | 7,067 | histogram_matching not use channel_axis | ### Description:
channel_axis indicates which axis of the array corresponds to channels, but in the code, `-1` is always used as the channel parameter.
### Way to reproduce:
Lines 67-76:
**source code**
```python
if channel_axis is not None:
    if image.shape[-1] != reference.shape[-1]:
        raise ValueError('Number of channels in the input image and '
                         'reference image must match!')

    matched = np.empty(image.shape, dtype=image.dtype)
    for channel in range(image.shape[-1]):
        matched_channel = _match_cumulative_cdf(image[..., channel],
                                                reference[..., channel])
        matched[..., channel] = matched_channel
```
**recommendations**
```python
if channel_axis is not None:
    if image.shape[channel_axis] != reference.shape[channel_axis]:
        raise ValueError('Number of channels in the input image and '
                         'reference image must match!')
```
The rest of the code also needs to index based on the channel position instead of hard-coding `-1`.
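For instance, the comparisons and the per-channel loop could first normalize the axis the way numpy does (a sketch of the normalization step only, not a full patch):

```python
def normalize_channel_axis(axis, ndim):
    # map e.g. axis=-1 with ndim=3 to 2, and reject out-of-range axes,
    # mirroring numpy's axis handling
    if not -ndim <= axis < ndim:
        raise ValueError(f"axis {axis} is out of bounds for ndim {ndim}")
    return axis % ndim

print(normalize_channel_axis(-1, 3))  # → 2
print(normalize_channel_axis(0, 3))   # → 0
```

With a normalized axis, `image.shape[channel_axis]` comparisons work, and something like `np.moveaxis(image, channel_axis, -1)` before the loop would cover arbitrary channel positions.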
### Version information:
```Shell
>>> import sys; print(sys.version)
3.7.10 (default, Feb 26 2021, 13:06:18) [MSC v.1916 64 bit (AMD64)]
>>> import platform; print(platform.platform())
Windows-10-10.0.22621-SP0
>>> import skimage; print(f'scikit-image version: {skimage.__version__}')
scikit-image version: 0.19.3
>>> import numpy; print(f'numpy version: {numpy.__version__}')
numpy version: 1.21.6
>>>
```
| closed | 2023-07-17T02:48:48Z | 2023-09-17T11:41:43Z | https://github.com/scikit-image/scikit-image/issues/7067 | [
":bug: Bug"
] | lazyn1997 | 1 |
OpenBB-finance/OpenBB | machine-learning | 6,826 | [🕹️] Follow on LinkedIn | ### What side quest or challenge are you solving?
Follow on LinkedIn
### Points
50
### Description
Follow on LinkedIn
### Provide proof that you've completed the task

| closed | 2024-10-20T13:32:27Z | 2024-10-21T12:58:44Z | https://github.com/OpenBB-finance/OpenBB/issues/6826 | [] | sateshcharan | 2 |
vitalik/django-ninja | pydantic | 1,041 | [BUG] Foreign keys in input ModelSchema seem broken | If I create an input schema from a model:
```python
class Foo(models.Model):
    id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
    name = models.TextField()
    bar = models.ForeignKey(Bar, on_delete=models.CASCADE)


class FooInSchema(ModelSchema):
    class Meta:
        model = Foo
        fields = ["name", "bar"]
```
and use it in a create endpoint:
```python
@router.post("", response=FooSchema)
def create_foo(
    request: AuthenticatedRequest,
    payload: FooInSchema,
):
    foo: Foo = Foo.objects.create(**payload.dict())
    return foo
```
the API docs show the following example:
```json
{
"name": "string",
"bar_id": "3fa85f64-5717-4562-b3fc-2c963f66afa6"
}
```
but when submitting a valid payload I get an exception:
```
ValueError: Cannot assign "UUID('45dfd214-b5c1-48a3-845c-d584f86a9447')": "Foo.bar" must be a "Bar" instance.
```
If I change the schema to set the bar relation manually, it works fine:
```python
class FooInSchema(ModelSchema):
    bar_id: UUID

    class Meta:
        model = Foo
        fields = ["name"]
```
**Versions (please complete the following information):**
- Python version: 3.11
- Django version: 4.2.9
- Django-Ninja version: 1.1.0
- Pydantic version: 2.5.3
| open | 2024-01-11T10:42:29Z | 2024-12-11T10:17:06Z | https://github.com/vitalik/django-ninja/issues/1041 | [] | jam13 | 2 |
satwikkansal/wtfpython | python | 30 | Nitpicking and minor suggestions | > `3 --0-- 5 == 8` and `--5 == 5` are both semantically correct statements and evaluate to True.
Might want to add that `++a` and `--a` are both valid Python (given `a` is a number), but aren't what you probably think they are (increment/decrement).
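A quick demonstration in plain CPython:

```python
a = 3
print(++a)   # → 3  (parsed as +(+a), not an increment)
print(--a)   # → 3  (parsed as -(-a))
print(+-+a)  # → -3 (three stacked unary operators)
```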
> `'a'[0][0][0][0][0]` is also a semantically correct statement as strings are iterable in Python.
This is possible because strings are [*sequences*](https://docs.python.org/3/glossary.html#term-sequence), not iterables (although they are iterables as well).
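Concretely, indexing a 1-character string returns another string, so the chain never bottoms out:

```python
s = 'a'
# Each subscript returns a length-1 str, which is itself subscriptable:
print(s[0][0][0][0][0])  # a
```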
> Multiple Python threads don't run concurrently (yes you heard it right!).
This is not accurate. Python threads do run concurrently, but can’t run *Python code* concurrently. A thread can execute if another is in the middle of non-Python code execution (e.g. waiting for the response of a network call). | closed | 2017-09-05T18:10:08Z | 2017-09-06T11:15:45Z | https://github.com/satwikkansal/wtfpython/issues/30 | [] | uranusjr | 2 |
Hironsan/BossSensor | computer-vision | 14 | Is there any way to improve the training accuracy? | My experiment was based on pictures of famous movie stars I gathered from the Internet. I could only get approximately 60% accuracy when training the model.
Is there any way to improve the accuracy? | open | 2017-01-30T15:13:03Z | 2017-02-20T00:37:15Z | https://github.com/Hironsan/BossSensor/issues/14 | [] | beef9999 | 5 |
ranaroussi/yfinance | pandas | 1,947 | missing data for private companies | ### Describe bug
Cannot use yfinance to scrape [limited] data from private companies.
### Simple code that reproduces your problem
`curl https://finance.yahoo.com/company/space-exploration-technologies?h=eyJlIjoic3BhY2UtZXhwbG9yYXRpb24tdGVjaG5vbG9naWVzIiwibiI6IlNwYWNlWCJ9` returns proper data, but
`spacex = yf.Ticker("Space Exploration Technologies")` yields an error.
### Debug log
404 Client Error: Not Found for url: https://query2.finance.yahoo.com/v10/finance/quoteSummary/SPACE%20EXPLORATION%20TECHNOLOGIES?modules=financialData%2CquoteType%2CdefaultKeyStatistics%2CassetProfile%2CsummaryDetail&corsDomain=finance.yahoo.com&formatted=false&symbol=SPACE+EXPLORATION+TECHNOLOGIEScrumb=FQPvdjmXdGN
{'trailingPegRatio': None}
### Bad data proof
_No response_
### `yfinance` version
latest
### Python version
3.12
### Operating system
Sonoma | closed | 2024-05-22T16:24:16Z | 2025-02-16T18:40:23Z | https://github.com/ranaroussi/yfinance/issues/1947 | [] | opinsky | 0 |
healthchecks/healthchecks | django | 1,095 | [Feature Request] Weekly Report E-Mail sort downtime | I like the weekly report function; it would be cool to sort the services by downtime (most downtimes / longest) at the top.
Maybe even an option to hide services with an "All Good" status.
This way it is easy to see which services had downtimes and for how long; currently I have to scroll through the list and filter it myself. | open | 2024-12-02T09:16:51Z | 2024-12-02T09:17:21Z | https://github.com/healthchecks/healthchecks/issues/1095 | [] | Klar | 0
collerek/ormar | pydantic | 778 | Update `aiomysql` dependency | The currently used `aiomysql` dependency still has some gnarly bugs when used with MySQL 8, which were fixed in `aiomysql 0.1.x`.
It would be fantastic if the dependency was updated! Thank you for making ormar! | closed | 2022-08-10T14:13:42Z | 2022-09-07T10:59:24Z | https://github.com/collerek/ormar/issues/778 | [
"bug"
] | JeppeKlitgaard | 1 |
dask/dask | pandas | 11,386 | Memory issues with slicing | **Describe the issue**:
Slicing (via `.loc`) and other subsetting operations run out of memory on a worker, even when the result should easily fit into memory.
**Minimal Complete Verifiable Example**:
The below example creates a .csv file with ~1B rows of numbers, each of which appears about 10k times. The entire file is about 6GB. So, the file cannot load into 1GB of RAM, but small batches should be able to, and the result can easily fit into the RAM for a worker.
```python
import random
import os
from dask.distributed import Client
import dask.dataframe as dd

LIMIT = int(1e9)  # 1B rows
LARGE_PRIME = 100003  # arbitrary large prime
BATCH_SIZE = 10000

### SETUP, this creates a large csv file with LIMIT rows containing the numbers 0-LARGE_PRIME each LIMIT/LARGE_PRIME times
if not os.path.isfile('big.csv'):
    with open("big.csv", "w") as f:
        i = 0
        f.write("numbers\n")
        while i < LIMIT:
            batch = "\n".join([str((x*LIMIT) % LARGE_PRIME) for x in range(i, i + BATCH_SIZE)])
            f.write(batch)
            f.write("\n")
            i += BATCH_SIZE
            if not (i % (BATCH_SIZE * int(LIMIT/BATCH_SIZE/100))):  # print progress 100 times
                print(i)
        f.write(str(((i+1)*LIMIT) % LARGE_PRIME))

print(LIMIT / LARGE_PRIME)  # about 10k copies of each value

dask_client = Client(n_workers=1, memory_limit='1GB')
big_df = dd.read_csv('big.csv', assume_missing=True, blocksize=25e6)
selection = big_df[big_df["numbers"] == 1]  # should be ~LIMIT/LARGE_PRIME rows
result = selection.compute()
```
Warnings from the `.compute()` statement:
```
...
2024-09-12 11:54:48,456 - distributed.worker.memory - WARNING - Worker is at 81% memory usage. Pausing worker. Process memory: 776.68 MiB -- Worker memory limit: 0.93 GiB
2024-09-12 11:54:51,377 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 706.12 MiB -- Worker memory limit: 0.93 GiB
...
```
Warnings from the `client`:
```
...
2024-09-12 11:54:55,773 - distributed.nanny.memory - WARNING - Worker tcp://127.0.0.1:39705 (pid=85889) exceeded 95% memory budget. Restarting...
2024-09-12 11:54:55,886 - distributed.scheduler - ERROR - Task ('read_csv-getitem-eq-getitem-4fbed9a3f4c5a565cc948840279308a4', 94) marked as failed because 4 workers died while trying to run it
...
```
**Anything else we need to know?**:
**Environment**:
- Dask version: 2024.8.2
- Python version: 3.11
- Operating System: Ubuntu 24.04
- Install method (conda, pip, source): pip 24.2
| closed | 2024-09-12T16:35:44Z | 2024-10-11T14:35:57Z | https://github.com/dask/dask/issues/11386 | [
"needs triage"
] | csbrown | 8 |
lexiforest/curl_cffi | web-scraping | 122 | BOM parsing issue | Hello,
**Expected**
Some utf-8 files begin with a BOM (https://stackoverflow.com/questions/50130605/python-2-7-csv-file-read-write-xef-xbb-xbf-code) like the webpage https://copytop.com/.
**Problem**
curl-cffi seems unable to decode it.
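For reference, Python's stdlib can strip the BOM via the `utf-8-sig` codec. A minimal sketch (this illustrates BOM handling only, and may not explain the invalid `0xe8` byte later in the body):

```python
raw = b'\xef\xbb\xbf<!DOCTYPE html>'  # body starting with a UTF-8 BOM

# Plain utf-8 keeps the BOM as U+FEFF at the start of the decoded text:
print(repr(raw.decode('utf-8')[0]))  # '\ufeff'

# utf-8-sig strips it transparently:
print(raw.decode('utf-8-sig')[:9])   # <!DOCTYPE
```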
```
File "/home/bader/code/python3/lib/python3.11/site-packages/curl_cffi/requests/cookies.py", line 47, in text
return self.content.decode(self.charset)
│ │ │ │ └ 'utf-8'
│ │ │ └ <curl_cffi.requests.cookies.Response object at 0x7f1885012790>
│ │ └ <method 'decode' of 'bytes' objects>
│ └ b'\xef\xbb\xbf<!DOCTYPE html>\r\n<html lang="fr">\r\n <head>\r\n <!--[if IE]>\r\n <meta http-equiv="X-UA-Compatib...
└ <curl_cffi.requests.cookies.Response object at 0x7f1885012790>
Exception: 'utf-8' codec can't decode byte 0xe8 in position 115419: invalid continuation byte
``` | closed | 2023-09-10T01:14:25Z | 2023-11-02T10:09:50Z | https://github.com/lexiforest/curl_cffi/issues/122 | [] | baderdean | 2 |
Avaiga/taipy | automation | 1,500 | Add Python API when creating an app through *taipy create* | ### Description
Add a question asking whether the user wants to use the Python API or Markdown syntax for their application, and change the template accordingly.
`taipy create --template default`
A question should appear asking whether the user wants to use the Python API or Markdown syntax.
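The prompt flow could look roughly like this (an illustrative sketch only; the function and answer values are not the actual Taipy CLI code):

```python
def ask_gui_style(answer=None):
    """Return which GUI style the generated template should use."""
    if answer is None:
        answer = input("Use the Python API or Markdown syntax? [python/markdown] ")
    # Accept "p", "Python", "python", etc.; anything else means Markdown.
    return "python" if answer.strip().lower().startswith("p") else "markdown"

print(ask_gui_style("Python"))    # python
print(ask_gui_style("markdown"))  # markdown
```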
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Propagate any change on the demos and run all of them to ensure there is no breaking change.
- [ ] Ensure any change is well documented.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2024-07-10T14:17:36Z | 2025-01-14T15:03:25Z | https://github.com/Avaiga/taipy/issues/1500 | [
"📈 Improvement",
"🖧 Devops",
"🟨 Priority: Medium",
"🔒 Staff only",
"💬 Discussion"
] | FlorianJacta | 4 |
modelscope/modelscope | nlp | 854 | Tasks.image_segmentation Error | python=3.10
pytorch=2.1.2+cu118
modelscope=1.14.0
1. panoptic-segmentation
```python
model_id = "damo/cv_r50_panoptic-segmentation_cocopan"
pipe = pipeline(Tasks.image_segmentation, model=model_id)
```
Error Info: 'image-panoptic-segmentation-easycv is not in the pipelines registry group image-segmentation. Please make sure the correct version of ModelScope library is used.'
2. semantic-segmentation
```python
model_id = 'damo/cv_segformer-b0_image_semantic-segmentation_coco-stuff164k'
pipe = pipeline(Tasks.image_segmentation, model=model_id)
```
Error Info: 'easycv-segmentation is not in the pipelines registry group image-segmentation. Please make sure the correct version of ModelScope library is used.' | closed | 2024-05-15T03:04:39Z | 2024-05-15T08:06:34Z | https://github.com/modelscope/modelscope/issues/854 | [] | xuhzyy | 1
jumpserver/jumpserver | django | 14,322 | [Question] When connecting to Windows via the JumpServer client, mstsc's "choose connection speed" option cannot be used | ### Product version
V3.10.13
### Version type
- [ ] Community Edition
- [ ] Enterprise Edition
- [X] Enterprise Trial Edition
### Installation method
- [ ] Online installation (one-click command)
- [X] Offline package installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source installation
### Environment info
JumpServer deployed standalone on CentOS, accessed through Google Chrome version 129 (latest).
### 🤔 Problem description
When connecting to Windows through the JumpServer client and opening industrial CAD software, the session is quite laggy.
When connecting with Windows' native mstsc, adjusting Experience, then "Choose your connection speed to optimize performance", then "Low-speed broadband", makes the remote session smooth.
<img width="575" alt="image" src="https://github.com/user-attachments/assets/0bead924-51a4-4ca2-8e78-1fa7a1fd5748">
When connecting through the JumpServer client, that selection screen never appears; the connection is established directly, so graphics operations are laggy.
<img width="701" alt="image" src="https://github.com/user-attachments/assets/20b1203d-9c4a-4014-8705-573ebcae4d50">
I have adjusted the RDP resolution in the advanced options, but it made little difference.
<img width="719" alt="image" src="https://github.com/user-attachments/assets/a6851498-9387-46b4-a1d1-4a9da6f7350a">
Downloading the RDP file does reach the "choose connection speed to optimize performance" screen, but having to download it for every Windows connection is inconvenient.
Could some mstsc options, such as the performance options under Experience, be selectable directly at the client stage?
### Expected result
How can this be configured so that mstsc's Experience performance options are selectable when connecting through the client?
If this is not currently possible, could it be added to the feature backlog for a future iteration?
### Additional information
_No response_ | closed | 2024-10-17T07:18:03Z | 2024-12-19T10:48:15Z | https://github.com/jumpserver/jumpserver/issues/14322 | [
"⏳ Pending feedback",
"🤔 Question",
"📦 z~release:v4.5.0"
] | haipeng-fit2 | 8 |
mlfoundations/open_clip | computer-vision | 254 | adding coca-pytorch to open_clip | In [this comment](https://github.com/lucidrains/CoCa-pytorch/issues/2#issuecomment-1320707789), @rom1504 mentioned the possibility of adding [coca-pytorch](https://github.com/lucidrains/CoCa-pytorch/) to this repo to train a version of it. If nobody is doing it and it is still of interest, I could try to set it up. Would that be ok? | closed | 2022-11-25T02:09:47Z | 2023-01-29T00:48:23Z | https://github.com/mlfoundations/open_clip/issues/254 | [
"new feature",
"important"
] | gpucce | 3 |
httpie/cli | api | 619 | Httpie can't resolve "localhost" in an Akka HTTP RESTful service | I have a demo service like this:
```scala
localhost:8080/item/{id}
```
But when I test it with Httpie, I get the following message:
```
$ http localhost:8080/item/3
http: error: ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',)) while doing GET request to URL: http://localhost:8080/item/3
```
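One possibly relevant detail (my guess, not confirmed): `localhost` can resolve to IPv6 `::1` before IPv4 `127.0.0.1`, which matters if the server only listens on IPv4:

```python
import socket

# All addresses "localhost" resolves to on this machine:
addrs = {info[4][0] for info in socket.getaddrinfo("localhost", 8080)}
print(addrs)  # typically {'127.0.0.1', '::1'}
```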
I tried 127.0.0.1 instead of localhost, and it works as expected:
```
$ http 127.0.0.1:8080/item/3
HTTP/1.1 404 Not Found
Content-Length: 83
Content-Type: text/plain; charset=UTF-8
Date: Sat, 07 Oct 2017 08:55:38 GMT
Server: akka-http/10.0.9
The requested resource could not be found but may be available again in the future.
``` | closed | 2017-10-07T08:56:54Z | 2020-06-08T12:36:05Z | https://github.com/httpie/cli/issues/619 | [] | kun-song | 3 |
nerfstudio-project/nerfstudio | computer-vision | 2,986 | Why does the nerf rebuild not work well with the blender data I created myself? | Hello, everyone. I used Blender to create a set of reed Blender data, as shown in image01 and image02.


The command to use is:
`ns-train instant-ngp-bounded --pipeline.model.background-color white --experiment-name luwei blender-data --data data/blender/luwei --scale-factor 0.23 . `
The result is shown in image03. But there is no color. And the exported point cloud is shown in image04 and image05, with a circle of something like a camera path. Does anyone know why that is? What should be done to fix it? I would be very grateful for your answer!



| open | 2024-03-07T05:26:25Z | 2024-04-08T13:42:02Z | https://github.com/nerfstudio-project/nerfstudio/issues/2986 | [] | ATing0203 | 2 |
albumentations-team/albumentations | machine-learning | 2,072 | out of range when using bounding boxes in object detection | ```python
import os
import torch
from torch.utils.data import Dataset
import albumentations as A
from albumentations.pytorch import ToTensorV2
import cv2
import numpy as np
import matplotlib.pyplot as plt


class CropDiseaseDataset(Dataset):
    def __init__(self, df, img_dir, transforms=None):
        self.df = df.reset_index(drop=True)  # Reset index to handle indexing correctly
        self.img_dir = img_dir
        self.transforms = transforms  # Augmentations should be passed during dataset instantiation
        self.file_names = self.df['Image_ID'].values  # Assuming 'Image_ID' column for image paths
        self.targets = self.df['class'].values  # Assuming 'class' column for labels
        self.bboxes = self.df[['xmin', 'ymin', 'xmax', 'ymax']].values  # Assuming columns for bounding boxes

    def __len__(self):
        return len(self.df)

    def __getitem__(self, index):
        # Construct image path
        img_path = os.path.join(self.img_dir, self.file_names[index])
        img = cv2.imread(img_path)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # Convert BGR (OpenCV) to RGB format
        target = self.targets[index]
        bbox = self.bboxes[index].astype(float)  # Convert bbox to float

        # Check if bbox is valid
        if bbox.shape[0] != 4 or any(bbox < 0):
            print(f"Warning: Invalid bbox format at index {index}. Skipping sample.")
            return None

        # Apply transformations if they exist
        if self.transforms is not None:
            transformed = self.transforms(image=img, bboxes=[bbox], labels=[target])
            # If transformation removes bounding box or any issue, skip the sample
            if len(transformed['bboxes']) == 0:
                print(f"Warning: Transformation removed bbox at index {index}. Skipping sample.")
                return None
            img = transformed['image']
            bbox = transformed['bboxes'][0]  # Extract transformed bbox as single bbox
            target = transformed['labels'][0]

        return {
            'image': img,
            'bbox': bbox,
            'label': target
        }


# Define the transformations
train_transform = A.Compose([
    A.RandomResizedCrop(224, 224),
    A.HorizontalFlip(p=0.5),
    A.RandomGamma(gamma_limit=(80, 120), p=0.5),
    A.RandomBrightnessContrast(p=0.5),
    A.CLAHE(clip_limit=4.0, tile_grid_size=(8, 8), p=0.5),
    A.ShiftScaleRotate(shift_limit=0.05, scale_limit=0.05, rotate_limit=15, p=0.5),
    A.RGBShift(r_shift_limit=15, g_shift_limit=15, b_shift_limit=15, p=0.5),
    A.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ToTensorV2(),
], bbox_params=A.BboxParams(format='pascal_voc', min_visibility=0.05, label_fields=['labels']))

val_transform = A.Compose([
    A.Resize(256, 256),
    A.CenterCrop(224, 224),
    A.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ToTensorV2(),
], bbox_params=A.BboxParams(format='pascal_voc', min_visibility=0.05, label_fields=['labels']))


def visualize_dataset_samples(dataset, num_samples=5):
    """
    Visualize a set number of samples from the dataset to check for issues.
    """
    for idx in range(num_samples):
        print(f"\nVisualizing sample {idx}:")
        try:
            sample = dataset[idx]
            if sample is None:
                print(f"Sample {idx} is None. Skipping.")
                continue
            img = sample['image'].permute(1, 2, 0).numpy()  # Convert to HWC format for display
            bbox = sample['bbox']
            label = sample['label']

            # Draw bounding box on the image
            x_min, y_min, x_max, y_max = map(int, bbox)
            img = cv2.rectangle(img, (x_min, y_min), (x_max, y_max), (255, 0, 0), 2)  # Draw bounding box
            plt.imshow(img)
            plt.title(f"Label: {label}")
            plt.axis('off')
            plt.show()
        except IndexError as e:
            print(f"Error visualizing sample {idx}: {e}")
        except Exception as e:
            print(f"An unexpected error occurred for sample {idx}: {e}")


# Usage example (assuming your DataFrame df and image directory IMAGE_DIR are set up)
dataset = CropDiseaseDataset(df=df, img_dir=IMAGE_DIR, transforms=train_transform)
visualize_dataset_samples(dataset, num_samples=5)
```
The error: Visualizing sample 1:
Warning: Transformation removed bbox at index 1. Skipping sample.
Sample 1 is None. Skipping.
it reads some images and also show this error in some too? | closed | 2024-11-08T02:25:48Z | 2024-11-11T15:18:04Z | https://github.com/albumentations-team/albumentations/issues/2072 | [
"question",
"Need more info"
] | Rexedoziem | 19 |
dagster-io/dagster | data-science | 27,746 | Regression in 1.9.12 broke `BackfillPolicy.multi_run` by mixing `Multi-Partitions` | ### What's the issue?
**Description:**
In version 1.9.12, `BackfillPolicy.multi_run` no longer correctly separates multi-partitions when launching runs. Previously, with a `MultiPartitionsDefinition` consisting of a time window and static partitions (e.g., multiple tables with daily partitions), each static partition would trigger separate runs correctly. Now, all partitions are mixed together in a single run, breaking the expected behavior.
**Expected behavior:**
Each table's partitions should be correctly grouped into separate runs.
**Actual behavior:**
Partitions across different tables are now incorrectly merged into a single run.
**Steps to reproduce:**
1. Define a `MultiPartitionsDefinition` with a daily time window and multiple static partitions (e.g., three tables).
2. Trigger a backfill using `BackfillPolicy.multi_run()`.
3. Observe that instead of separate runs per table, all partitions are incorrectly mixed into a single run.
### **Before (correct behavior, separate runs per table)**
This would correctly create three runs, one for each table:
**Run 1:**
```
dagster/asset_partition_range_start:table_one|2025-02-04
dagster/asset_partition_range_start:table_one|2025-02-11
```
**Run 2:**
```
dagster/asset_partition_range_start:table_two|2025-02-04
dagster/asset_partition_range_start:table_two|2025-02-11
```
**Run 3:**
```
dagster/asset_partition_range_start:table_three|2025-02-04
dagster/asset_partition_range_start:table_three|2025-02-11
```
### **Now (incorrect behavior in 1.9.12, single run)**
Instead of separate runs, it incorrectly combines partitions from different tables into one mixed run:
```
dagster/asset_partition_range_start:table_one|2025-02-04
dagster/asset_partition_range_start:table_three|2025-02-11
```
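The expected grouping reduces to splitting partition keys on their static dimension. A plain-Python sketch of that expectation (illustration only, not Dagster internals):

```python
from itertools import groupby

keys = [
    "table_one|2025-02-04", "table_one|2025-02-11",
    "table_two|2025-02-04", "table_two|2025-02-11",
    "table_three|2025-02-04", "table_three|2025-02-11",
]

def static_dim(key):
    # The static part of a "static|time" multi-partition key.
    return key.split("|")[0]

# Expected multi_run behavior: one run per static partition (per table).
runs = {table: list(group) for table, group in
        groupby(sorted(keys, key=static_dim), key=static_dim)}
print(len(runs))  # 3
```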
### What did you expect to happen?
_No response_
### How to reproduce?
_No response_
### Dagster version
1.9.12
### Deployment type
None
### Deployment details
_No response_
### Additional information
_No response_
### Message from the maintainers
Impacted by this issue? Give it a 👍! We factor engagement into prioritization. | open | 2025-02-11T12:32:03Z | 2025-02-13T18:04:32Z | https://github.com/dagster-io/dagster/issues/27746 | [
"type: bug",
"area: backfill",
"area: partitions"
] | chrishiste | 10 |
jina-ai/clip-as-service | pytorch | 623 | question about the demo | Hi, I am using this Python server demo to build a server. My configs are as below:
Server:
```python
from bert_serving.server.helper import get_args_parser
from bert_serving.server import BertServer

args = get_args_parser().parse_args(['-model_dir', './pretrained/',
                                     '-config_name', 'bert_config.json',
                                     '-tuned_model_dir', './finetune/',
                                     '-ckpt_name', 'model.ckpt-21903',
                                     '-port', '5555',
                                     '-port_out', '5556'])
server = BertServer(args)
server.start()
```
Client:
```python
from bert_serving.client import BertClient

bc = BertClient(port=5555, port_out=5556)
vec = bc.encode(['hey you'])
print("print the shape of the vec", vec.shape)
print(vec[0][0])
```
However, the result is not what I expected:
```
print the shape of the vec (1, 768)
0.56098175
```
It seems something is wrong with the shape. I don't know where the max_seq_length went. Could you please help? Thank you.
| closed | 2021-03-22T09:13:37Z | 2021-03-22T21:47:51Z | https://github.com/jina-ai/clip-as-service/issues/623 | [] | 652994331 | 1 |
cobrateam/splinter | automation | 757 | FindLinks broken ? | Hi! I'm receiving an exception when I try to use the links property like this:
```python
elements = browser.find_by_css(".link").links.find_by_partial_href(URL)
return [a["href"] for a in elements]
```
Here's the relevant part of the traceback:
```
File "/home/edited_out/venv/lib/python3.8/site-packages/splinter/driver/webdriver/__init__.py", line 205, in find_by_partial_href
return self.parent.find_by_xpath(
TypeError: find_by_xpath() got an unexpected keyword argument 'original_find'
```
https://github.com/cobrateam/splinter/blob/3bee3089e299f50ef7062b2e1892cd14f03a11f8/splinter/driver/webdriver/__init__.py#L207
I tried to remove all **original_find** and **original_query** from the FindLinks class and now it works fine!
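The failure reduces to a plain keyword-argument mismatch. A minimal reproduction outside splinter (hypothetical function bodies mirroring the traceback):

```python
def find_by_xpath(xpath):  # does not accept original_find / original_query
    return xpath

def find_by_partial_href(href, **kwargs):
    # Forwarding kwargs the callee never declared raises TypeError:
    return find_by_xpath(f'//a[contains(@href, "{href}")]', **kwargs)

try:
    find_by_partial_href("example.com", original_find="link")
except TypeError as exc:
    print(type(exc).__name__)  # TypeError
```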
Python 3.8
splinter 0.13.0
Firefox 72.0.2
geckodriver 0.26.0
Similar issue : #426 | closed | 2020-02-03T15:19:59Z | 2020-02-28T21:40:04Z | https://github.com/cobrateam/splinter/issues/757 | [] | ShellCode33 | 4 |
graphql-python/graphene-django | django | 1,134 | Loading `graphene_django.fields` takes too long | **Note: for support questions, please use stackoverflow**. This repository's issues are reserved for feature requests and bug reports.
* **What is the current behavior?**
Importing `graphene_django.fields` takes 2.5s, of which 1.8s is taken by `graphql_relay.connection.arrayconnection`. See the hierarchy generated using:
`PYTHONPROFILEIMPORTTIME=1 ./manage.py 2>&1 | tee ~/Desktop/importtimes`
`tuna ~/Desktop/importtimes`

* **If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem** via
a github repo, https://repl.it or similar (you can use this template as a starting point: https://repl.it/@jkimbo/Graphene-Django-Example).
* **What is the expected behavior?**
Likely lazily importing relay or rx related stuff to speed things up.
* **What is the motivation / use case for changing the behavior?**
Starting the dev environment now takes multiple seconds, which can be frustrating in the long run.
* **Please tell us about your environment:**
- Version: 2.15.0
- Platform: macOS Catalina
* **Other information** (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow)
See screenshot above.
Thanks!
| closed | 2021-02-23T03:55:40Z | 2021-02-23T04:13:00Z | https://github.com/graphql-python/graphene-django/issues/1134 | [
"🐛bug"
] | elchiapp | 1 |
aiortc/aioquic | asyncio | 222 | Connection close received (certificate unknown) | Hi, I'm currently trying to deploy a QUIC server, where the client is an Android app using cronet.
When I run the following command to start the QUIC server (the http3-server from the examples):
python server.py -c certs/test.pem -k certs/test.key --port 7443
the QUIC server logs the following:
2021-09-03 21:37:57,765 INFO quic [bb545e91d4e68f20] Connection close received (code 0x12E, reason 199:TLS handshake failure (ENCRYPTION_HANDSHAKE) 46: certificate unknown)
It seems the certificate itself is fine, since I ran an HTTPS nginx server with the same pem and key files.
Hope someone can answer my question. Thanks in advance. | closed | 2021-09-03T12:52:56Z | 2023-08-22T01:18:42Z | https://github.com/aiortc/aioquic/issues/222 | [] | lgs96 | 3
oegedijk/explainerdashboard | dash | 270 | Autogluon and explainerdashboard integration | Is there a way to load autogluon models into explainer dashboard? | open | 2023-06-14T15:18:20Z | 2023-07-09T10:21:11Z | https://github.com/oegedijk/explainerdashboard/issues/270 | [] | apavlo89 | 4 |
manrajgrover/halo | jupyter | 176 | Bug: halo not showing properly in Jenkins CI | <!-- Please use the appropriate issue title format:
BUG REPORT
Bug: {Short description of bug}
SUGGESTION
Suggestion: {Short description of suggestion}
OTHER
{Question|Discussion|Whatever}: {Short description} -->
## Description
Halo doesn't display text properly in Jenkins CI. The animations and emojis display as question mark characters, and the colors don't display at all.
### System settings
- Operating System: Unix
- Terminal in use: Jenkins
- Python version: 3.9
- Halo version: 0.0.31
- `pip freeze` output:
```
aiohttp==3.8.4
aiosignal==1.3.1
ansible==5.9.0
ansible-core==2.12.6
apache-libcloud==3.6.0
async-timeout==4.0.2
attrs==22.2.0
autopage==0.5.1
bcrypt==3.2.2
boto==2.49.0
bottle==0.12.21
certifi==2022.12.7
cffi==1.14.4
chardet==3.0.4
charset-normalizer==3.0.1
cheroot==8.6.0
click==8.1.3
cliff==3.10.1
cmd2==2.4.1
colorama==0.4.6
commonmark==0.9.1
configobj==5.0.6
cryptography==3.2.1
cycler==0.10.0
defusedxml==0.7.1
edgegrid-python==1.1.1
frozenlist==1.3.3
ghp-import==2.1.0
girok==0.1.14
halo==0.0.31
httpie==2.3.0
httpie-edgegrid==1.0.6
hvac==0.11.2
idna==3.4
importlib-metadata==4.13.0
jaraco.functools==3.5.0
Jinja2==3.1.2
joblib==1.2.0
kiwisolver==1.3.1
linkify-it-py==2.0.0
log-symbols==0.0.14
Markdown==3.3.7
markdown-it-py==2.2.0
MarkupSafe==2.1.2
matplotlib==3.4.2
mdit-py-plugins==0.3.5
mdurl==0.1.2
mergedeep==1.3.4
mkdocs==1.4.2
mkdocs-exclude==1.0.2
more-itertools==8.13.0
msgpack==1.0.5
multidict==6.0.4
ndg-httpsclient==0.5.1
netaddr==0.8.0
nltk==3.7
numpy==1.24.2
packaging==23.0
pandas==1.3.0
paramiko==2.11.0
pbr==5.9.0
pexpect==4.8.0
Pillow==8.3.1
ply==3.11
polling==0.3.2
prettytable==3.3.0
ptyprocess==0.7.0
pyasn1==0.4.8
pycparser==2.20
Pygments==2.14.0
pyjq==2.6.0
pylighthouse==0.1.0
PyNaCl==1.5.0
pyOpenSSL==20.0.0
pyparsing==2.4.7
pyperclip==1.8.2
PySocks==1.7.1
python-dateutil==2.8.2
pytz==2021.1
PyYAML==6.0
pyyaml_env_tag==0.1
RADL==1.2.0
regex==2022.10.31
requests==2.28.2
requests-toolbelt==0.9.1
resolvelib==0.5.4
retrying==1.3.4
rich==13.3.2
scp==0.14.4
shellingham==1.5.0.post1
six==1.16.0
sortedcontainers==2.4.0
spinners==0.0.24
stevedore==3.5.0
subprocess.run==0.0.8
suds-py3==1.4.5.0
termcolor==2.3.0
textual==0.14.0
tosca-parser==2.6.0
tqdm==4.64.0
typer==0.7.0
typing_extensions==4.5.0
uc-micro-py==1.0.1
urllib3==1.26.14
watchdog==2.3.1
wcwidth==0.2.5
yarl==1.8.2
zipp==3.15.0
```
### Error
<!-- Put error here. Exceptions, and full traceback if possible. -->
Example of output below in Jenkins CI:
`��� 1���������� 2023-05-08 18:31:54.006033 Preparing to initialize AniClient class (...)`
### Expected behaviour
<!-- Put expected behaviour here -->
Expected output:
`⠇ 1️⃣🥚 2023-05-08 15:19:02.836543 Preparing to initialize AniClient class with args`
## Steps to recreate
<!-- Describe the steps here -->
Run Halo in a Jenkins CI environment
## People to notify
<!-- Please @mention relevant people here:-->
## Potential solution
Ora deals with this scenario by allowing you to set `enabled` to `False`, which disables the spinner, emojis, and colors but still prints the raw text normally. Whereas with Halo, setting `enabled` to `False` completely disables everything, including printing the raw text. So, I propose that setting `enabled` to `False` should still log the raw text normally.
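A sketch of the proposed fallback, written as plain Python rather than Halo's actual API (the function and frame prefix are illustrative):

```python
import sys

def render(text, enabled=None):
    """Return what would be written: a spinner frame in a TTY, plain text otherwise."""
    if enabled is None:
        enabled = sys.stdout.isatty()  # Jenkins pipes stdout, so this is False there
    return f"- {text}" if enabled else text

print(render("Preparing to initialize AniClient class", enabled=False))
```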
| open | 2023-05-08T19:23:26Z | 2024-06-16T08:14:10Z | https://github.com/manrajgrover/halo/issues/176 | [] | omartoutounji | 2 |
browser-use/browser-use | python | 509 | Support for UI TARS | ### Problem Description
Support for UI TARS
### Proposed Solution
Support for UI TARS
### Alternative Solutions
_No response_
### Additional Context
_No response_ | open | 2025-02-01T20:07:56Z | 2025-03-14T14:08:57Z | https://github.com/browser-use/browser-use/issues/509 | [
"enhancement"
] | nuclear-gandhi | 3 |
miguelgrinberg/flasky | flask | 44 | Categories for posts | So I am trying to add a category model for the posts by doing the following:
```
class Post(db.Model):
    __tablename__ = 'posts'
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(80))
    body = db.Column(db.Text)
    timestamp = db.Column(db.DateTime, index=True, default=datetime.utcnow)
    author_id = db.Column(db.Integer, db.ForeignKey('users.id'))
    body_html = db.Column(db.Text)
    comments = db.relationship('Comment', backref='post', lazy='dynamic')
    category_id = db.Column(db.Integer, db.ForeignKey('category.category_id'))
    category = db.relationship('Category')

    def __init__(self, title, category):
        self.title = title
        self.category = category

    @staticmethod
    def on_changed_body(target, value, oldvalue, initiator):
        allowed_tags = ['a', 'abbr', 'acronym', 'b', 'blockquote', 'code', 'em', 'i', 'li', 'ol', 'pre', 'strong',
                        'ul', 'h1', 'h2', 'h3', 'p']
        target.body_html = bleach.linkify(bleach.clean(
            markdown(value, output_format='html'),
            tags=allowed_tags, strip=True))

    def __repr__(self):
        return '<Category %r>' % self.id


db.event.listen(Post.body, 'set', Post.on_changed_body)
```
First question: am I doing it right? My guess is it's one-to-many, since it's one category for many posts. And if I do the following I get errors in main/views.py:
```
post = Post(body=form.body.data,
            author=current_user._get_current_object())
```
The error (shown by the PyCharm IDE) on `body=form.body.data, author=current_user._get_current_object()` is "Unexpected argument".
Any ideas what the right way is?
| closed | 2015-05-08T15:20:48Z | 2015-05-10T11:48:29Z | https://github.com/miguelgrinberg/flasky/issues/44 | [
"question"
] | varqasim | 4 |
tfranzel/drf-spectacular | rest-api | 469 | Oauth2 provider: TokenMatchesOASRequirements doesn't pass validation | **Describe the bug**
I have a ModelViewSet with `permission_classes = [TokenMatchesOASRequirements]`, and I generate the schema with `--validate`.
Using django-oauth-toolkit.
```
jsonschema.exceptions.ValidationError: {'GET': [['v2:integration:account:client:read']], 'POST': [], 'PUT': [['v2:integration:account:client:update']], 'PATCH': [['v2:integration:account:client:update']], 'DELETE': []} is not of type 'array'
Failed validating 'type' in schema['properties']['paths']['patternProperties']['^\\/']['patternProperties']['^(get|put|post|delete|options|head|patch|trace)$']['properties']['security']['items']['additionalProperties']:
{'items': {'type': 'string'}, 'type': 'array'}
On instance['paths']['/api/v2/integration/account/clients/']['get']['security'][0]['oauth2']:
{'DELETE': [],
'GET': [['v2:integration:account:client:read']],
'PATCH': [['v2:integration:account:client:update']],
'POST': [],
'PUT': [['v2:integration:account:client:update']]}
```
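For context, a valid OpenAPI security requirement maps each scheme name to a flat array of scope strings, so the per-method dict would need flattening per operation. A rough plain-Python sketch (not drf-spectacular's actual code):

```python
required_alternate_scopes = {
    "GET": [["v2:integration:account:client:read"]],
    "PUT": [["v2:integration:account:client:update"]],
}

def security_for(method):
    # One OpenAPI security entry per alternative scope list for this HTTP method.
    return [{"oauth2": scopes} for scopes in required_alternate_scopes.get(method, [[]])]

print(security_for("GET"))  # [{'oauth2': ['v2:integration:account:client:read']}]
```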
**To Reproduce**
Create a ModelViewset with permission_classes=[TokenMatchesOASRequirements], generate schema with --validate
**Expected behavior**
Schema file generated without validation errors.
| closed | 2021-07-27T10:47:33Z | 2022-08-25T21:40:17Z | https://github.com/tfranzel/drf-spectacular/issues/469 | [
"bug",
"fix confirmation pending"
] | tomasgarzon | 8 |
schemathesis/schemathesis | pytest | 1,799 | Avoid showing the whole payload in the error details | Now Schemathesis displays the whole response payload which could be too much. For raw data, we might want to display first N symbols, for JSON, maybe only the relevant part (if the error is related to JSON schema), or maybe show N bytes around the place of the error | closed | 2023-10-07T21:36:00Z | 2023-11-09T08:57:10Z | https://github.com/schemathesis/schemathesis/issues/1799 | [
"Priority: Medium",
"Type: Feature",
"UX: Reporting",
"Status: Needs Design"
] | Stranger6667 | 0 |
huggingface/datasets | numpy | 7,467 | load_dataset with streaming hangs on parquet datasets | ### Describe the bug
When I try to load a dataset with parquet files (e.g. "bigcode/the-stack"), the dataset loads, but the Python interpreter can't exit and hangs.
### Steps to reproduce the bug
```python3
import datasets
print('Start')
dataset = datasets.load_dataset("bigcode/the-stack", data_dir="data/yaml", streaming=True, split="train")
it = iter(dataset)
next(it)
print('Finish')
```
The program prints finish but doesn't exit and hangs indefinitely.
I tried this on two different machines and several datasets.
### Expected behavior
The program exits successfully
### Environment info
datasets==3.4.1
Python 3.12.9.
MacOS and Ubuntu Linux | open | 2025-03-18T23:33:54Z | 2025-03-18T23:33:54Z | https://github.com/huggingface/datasets/issues/7467 | [] | The0nix | 0 |
pydantic/pydantic-ai | pydantic | 607 | Validation with nested Pydantic models (ollama, llama3.1) | When using the `pydantic_ai` library with a nested `BaseModel`, an `UnexpectedModelBehavior` error occurs, despite the underlying model (e.g., `ollama:llama3.1`) being capable of handling the requested structure and providing valid output.
The example here [https://ai.pydantic.dev/examples/pydantic-model/#running-the-example](https://ai.pydantic.dev/examples/pydantic-model/#running-the-example) is fully functional with `ollama:llama3.1`, but this slight modification to include nested models fails to work:
```python
import os
from typing import cast
import logfire
from pydantic import BaseModel
from pydantic_ai import Agent
from pydantic_ai.models import KnownModelName
logfire.configure(send_to_logfire='if-token-present')
class Country(BaseModel):
name: str
class MyModel(BaseModel):
city: str
country: Country
model = cast(KnownModelName, os.getenv('PYDANTIC_AI_MODEL', 'ollama:llama3.1'))
print(f'Using model: {model}')
agent = Agent(model, result_type=MyModel)
if __name__ == '__main__':
result = agent.run_sync('The windy city in the US of A.')
print(result.data)
print(result.usage())
```
I get this error:
```
UnexpectedModelBehavior: Exceeded maximum retries (1) for result validation
```
It seems that the model fails to generate the expected nested structure.
**Validation of the Model's Capability:**
To confirm that the underlying model (`ollama:llama3.1`) supports this functionality, the following custom implementation was tested:
```python
import os
from typing import Type, TypeVar, cast

from ollama import chat, Options
from pydantic import BaseModel
from pydantic_ai.models import KnownModelName
T = TypeVar('T', bound=BaseModel)
class Agent:
def __init__(self, model: str, result_type: Type[T], options: Options = None):
self.model = model.split(":")[1]
self.result_type = result_type
self.options = options
def run_sync(self, prompt: str) -> T:
response = chat(
messages=[
{
'role': 'user',
'content': prompt
}
],
model=self.model,
format=self.result_type.model_json_schema(),
options=self.options
)
if response.message.content is None:
raise Exception("No response from the model")
else:
o_data: str = response.message.content
o: T = self.result_type.model_validate_json(o_data)
return o
class Country(BaseModel):
name: str
class MyModel(BaseModel):
city: str
country: Country
model = cast(KnownModelName, os.getenv('PYDANTIC_AI_MODEL', 'ollama:llama3.1'))
print(f'Using model: {model}')
agent = Agent(model, result_type=MyModel)
if __name__ == '__main__':
result = agent.run_sync('The windy city in the US of A.')
print(result)
```
This implementation returned the expected result without any errors:
```
city='Chicago' country=Country(name='United States')
```
**Environment:**
- Python version: 3.10+
- `pydantic_ai[examples]>=0.0.17`
**Could I be misunderstanding or misusing the library?**
| closed | 2025-01-03T16:13:25Z | 2025-01-08T19:33:59Z | https://github.com/pydantic/pydantic-ai/issues/607 | [] | Ynn | 8 |
ultralytics/yolov5 | machine-learning | 13,189 | pip dependencies | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
# I am encountering this error when I am running the code in Colab:
------------------------------------------------------------------------------------------------------------------
!git clone https://github.com/ultralytics/yolov5.git
%cd yolov5
!pip install -qr requirements.txt
import torch
from IPython.display import Image, clear_output
print('Setup complete. Using torch %s %s' % (torch.__version__, torch.cuda.get_device_properties(0) if torch.cuda.is_available() else 'CPU'))
------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
google-colab 1.0.0 requires requests==2.31.0, but you have requests 2.32.3 which is incompatible.
imageio 2.31.6 requires pillow<10.1.0,>=8.3.2, but you have pillow 10.4.0 which is incompatible.
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager, possibly rendering your system unusable.It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv. Use the --root-user-action option if you know what you are doing and want to suppress this warning.
Setup complete. Using torch 2.3.0+cu121 _CudaDeviceProperties(name='Tesla T4', major=7, minor=5, total_memory=15102MB, multi_processor_count=40)
------------------------------------------------------------------------------------------------------------------
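For what it's worth, re-pinning the two packages the resolver flags (versions copied from the error message above) silences the conflict in Colab; a requirements-style fragment:

```
requests==2.31.0
pillow>=8.3.2,<10.1.0
```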
### Additional
_No response_ | open | 2024-07-15T07:43:00Z | 2024-10-20T19:50:06Z | https://github.com/ultralytics/yolov5/issues/13189 | [
"question",
"Stale"
] | karmakaragradwip02 | 3 |
httpie/cli | python | 940 | Add setup instructions for windows | When I tried to set up my development environment for HTTPie on a Windows machine, I wasn't able to run the ``make`` command even after installing some third-party software. It would be great to add specific steps for Windows setup. I discussed this with @jakubroztocil and he confirmed the issue.
Current guidelines provided for setup are located in [CONTRIBUTING.rst](https://github.com/jakubroztocil/httpie/blob/master/CONTRIBUTING.rst)
I will work on this since I have already identified the steps needed for windows setup. | closed | 2020-06-25T15:41:00Z | 2020-06-26T15:25:27Z | https://github.com/httpie/cli/issues/940 | [] | ovezovs | 0 |
encode/databases | sqlalchemy | 127 | Async sqlite? | I was curious what it means to present an async interface to Sqlite. My thought is that since Sqlite is an embedded database, there's no socket i/o, which is where one would typically see an event loop entering the picture. There's the additional caveat that only a single connection can hold the write lock at any given time, so you need to either share your connection across threads (adding significant complexity to the atomicity and size of your transactions, since transactions are bound to the connection), or handle the inevitable OperationalErrors from trying to write from multiple connections concurrently.
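For concreteness, what most "async sqlite" layers actually do is hand each blocking call to a worker thread so the event loop stays free; a stdlib-only sketch (function names are illustrative):

```python
import asyncio
import sqlite3

def _write(db_path: str, value: str) -> None:
    # Plain blocking sqlite3 calls; a short-lived connection per write
    # keeps the time the write lock is held to a minimum.
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # one transaction, committed on success
            conn.execute("CREATE TABLE IF NOT EXISTS kv (v TEXT)")
            conn.execute("INSERT INTO kv (v) VALUES (?)", (value,))
    finally:
        conn.close()

async def write_async(db_path: str, value: str) -> None:
    # The "async" part is just keeping the blocking work off the loop.
    await asyncio.to_thread(_write, db_path, value)
```

On CPython this buys real concurrency for the blocking calls too, since (as noted) the stdlib driver releases the GIL around them; concurrent writers then contend on sqlite's own lock, with the default 5-second busy timeout papering over most of it.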
So what use-case does async sqlite intend to make better? I guess if you had a scraper or something and were hucking the responses from many asynchronous requests into a sqlite db, you could see some improvement (since the stdlib sqlite3 driver releases the GIL in the appropriate places)? | closed | 2019-07-17T20:10:41Z | 2019-07-18T10:45:42Z | https://github.com/encode/databases/issues/127 | [] | coleifer | 2 |
explosion/spaCy | machine-learning | 13,064 | sqlite3.OperationalError: unable to open database file | Hello, I ran into a bug: "sqlite3.OperationalError: unable to open database file".
My code is here:
```python
nlp = spacy.load("en_core_web_md")
nlp.add_pipe("entityLinker", last=True)
doc = nlp(text)
```
and the error is:
```
doc = nlp(text)
File "/home/jxk/anaconda3/envs/KALA/lib/python3.6/site-packages/spacy/language.py", line 1001, in __call__
error_handler(name, proc, [doc], e)
File "/home/jxk/anaconda3/envs/KALA/lib/python3.6/site-packages/spacy/util.py", line 1486, in raise_error
raise e
File "/home/jxk/anaconda3/envs/KALA/lib/python3.6/site-packages/spacy/language.py", line 996, in __call__
doc = proc(doc, **component_cfg.get(name, {}))
File "/home/jxk/anaconda3/envs/KALA/lib/python3.6/site-packages/spacy_entity_linker/EntityLinker.py", line 24, in __call__
entityCandidates = termCandidates.get_entity_candidates()
File "/home/jxk/anaconda3/envs/KALA/lib/python3.6/site-packages/spacy_entity_linker/TermCandidate.py", line 26, in get_entity_candidates
wikidata_instance = get_wikidata_instance()
File "/home/jxk/anaconda3/envs/KALA/lib/python3.6/site-packages/spacy_entity_linker/DatabaseConnection.py", line 23, in get_wikidata_instance
wikidata_instance = WikidataQueryController()
File "/home/jxk/anaconda3/envs/KALA/lib/python3.6/site-packages/spacy_entity_linker/DatabaseConnection.py", line 39, in __init__
self.init_database_connection()
File "/home/jxk/anaconda3/envs/KALA/lib/python3.6/site-packages/spacy_entity_linker/DatabaseConnection.py", line 52, in init_database_connection
self.conn = sqlite3.connect(path)
sqlite3.OperationalError: unable to open database file
```
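For what it's worth, `unable to open database file` from `sqlite3.connect()` usually means the file's directory doesn't exist or isn't writable, rather than a corrupt database; a quick stdlib check of the path the failing `connect(path)` call receives (helper name is mine):

```python
import os

def diagnose_sqlite_path(path: str) -> dict:
    # sqlite needs the parent directory to exist and, for writes,
    # to be writable; the file itself may be created on demand.
    parent = os.path.dirname(path) or "."
    return {
        "file_exists": os.path.exists(path),
        "parent_exists": os.path.isdir(parent),
        "parent_writable": os.access(parent, os.W_OK),
    }
```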
| closed | 2023-10-14T07:32:56Z | 2023-11-16T00:02:11Z | https://github.com/explosion/spaCy/issues/13064 | [
"third-party",
"feat / nel"
] | jiangxinke | 2 |
erdewit/ib_insync | asyncio | 701 | request live data with different time resolution | Hi, I want to fetch live data with 5 min resolution, 15 min resolution, etc. How can I do that?

It is mentioned here that the bar size must be 5 sec. | open | 2024-02-27T07:28:16Z | 2024-02-27T10:53:24Z | https://github.com/erdewit/ib_insync/issues/701 | [] | krishdotn1 | 1 |
jonaswinkler/paperless-ng | django | 739 | [BUG] Main Search Bar doesn't show document with title matching | **Describe the bug**
When looking for a particular document whose title I remember, I can't find it with the search bar at the top of the page.
But I can find it with the document filter "title&content".
**To Reproduce**
Rename (title) a doc with a word not present in the PDF content.
Search it with "document filter" -> works
Search it via the main search bar -> doesn't work.
**Expected behavior**
I assume the main search bar has the widest search coverage, so at least title + content.
**Relevant information**
docker installation, 1.3.0. | closed | 2021-03-11T08:51:45Z | 2021-04-10T09:26:10Z | https://github.com/jonaswinkler/paperless-ng/issues/739 | [] | xavgra2 | 3 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,582 | [Bug]: "Generate" Button does not work, unless I am too dumb to use it. | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
txt2img does not do anything after pressing Generate
### Steps to reproduce the problem
Extract the sd.webui.zip, run update.bat, run run.bat; the webui opens automatically. Try to generate an image.
### What should have happened?
The WebUI should have generated an image.
### What browsers do you use to access the UI ?
Microsoft Edge, Other
### Sysinfo
[sysinfo-2024-10-23-21-39.json](https://github.com/user-attachments/files/17498553/sysinfo-2024-10-23-21-39.json)
### Console logs
```Shell
g: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Loading weights [6ce0161689] from D:\.D Dokuments\Stable diff\stable diff1\webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: D:\.D Dokuments\Stable diff\stable diff1\webui\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
D:\.D Dokuments\Stable diff\stable diff1\system\python\lib\site-packages\huggingface_hub\file_download.py:797: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Startup time: 13.0s (prepare environment: 3.4s, import torch: 4.4s, import gradio: 1.0s, setup paths: 1.0s, initialize shared: 0.3s, other imports: 0.5s, list SD models: 0.3s, load scripts: 1.3s, create ui: 0.5s, gradio launch: 0.6s).
Applying attention optimization: Doggettx... done.
Model loaded in 4.0s (load weights from disk: 0.6s, create model: 0.6s, apply weights to model: 2.5s, calculate empty prompt: 0.2s).
```
### Additional information
_No response_ | closed | 2024-10-23T21:40:10Z | 2024-10-24T15:00:36Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16582 | [
"bug-report"
] | NikkuIsRyu | 2 |
taverntesting/tavern | pytest | 206 | pytest is now version 4.0.0, which breaks the setup.py install | Steps to reproduce
-------------------
```
$ virtualenv tavern_pytest_4
Using base prefix '/Library/Frameworks/Python.framework/Versions/3.7'
New python executable in /Users/user/tavern_pytest_4/bin/python3
Also creating executable in /Users/user/tavern_pytest_4/bin/python
Installing setuptools, pip, wheel...done.
$ source tavern_pytest_4/bin/activate
(tavern_pytest_4) $ mkdir test
(tavern_pytest_4) $ cd test
(tavern_pytest_4) $ echo tavern >> requirements.txt
(tavern_pytest_4) $ echo pytest >> requirements.txt
(tavern_pytest_4) $ pip install -r requirements.txt
Collecting tavern (from -r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/79/47/a40cacce646c47ab79e74fa000768d03ef27d16517de81d72c884646b9f7/tavern-0.19.1-py2.py3-none-any.whl (55kB)
100% |████████████████████████████████| 61kB 257kB/s
Collecting pytest (from -r requirements.txt (line 2))
Downloading https://files.pythonhosted.org/packages/bb/d5/7601c468ded9a59478dcb39d21e24d58bb375681c64a06fbb629d2bc2ac3/pytest-4.0.0-py2.py3-none-any.whl (217kB)
100% |████████████████████████████████| 225kB 834kB/s
Collecting python-box (from tavern->-r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/42/ec/61f0b95d4b81fe674e8a3f49379e0cacc296c6e2bbb8a1096891f7e24b42/python_box-3.2.3-py3-none-any.whl
Collecting pykwalify>=1.6.1 (from tavern->-r requirements.txt (line 1))
Using cached https://files.pythonhosted.org/packages/36/9f/612de8ca540bd24d604f544248c4c46e9db76f6ea5eb75fb4244da6ebbf0/pykwalify-1.7.0-py2.py3-none-any.whl
Collecting contextlib2 (from tavern->-r requirements.txt (line 1))
Using cached https://files.pythonhosted.org/packages/a2/71/8273a7eeed0aff6a854237ab5453bc9aa67deb49df4832801c21f0ff3782/contextlib2-0.5.5-py2.py3-none-any.whl
Collecting requests (from tavern->-r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/ff/17/5cbb026005115301a8fb2f9b0e3e8d32313142fe8b617070e7baad20554f/requests-2.20.1-py2.py3-none-any.whl (57kB)
100% |████████████████████████████████| 61kB 17.6MB/s
Collecting pyjwt (from tavern->-r requirements.txt (line 1))
Using cached https://files.pythonhosted.org/packages/93/d1/3378cc8184a6524dc92993090ee8b4c03847c567e298305d6cf86987e005/PyJWT-1.6.4-py2.py3-none-any.whl
Collecting jmespath (from tavern->-r requirements.txt (line 1))
Using cached https://files.pythonhosted.org/packages/b7/31/05c8d001f7f87f0f07289a5fc0fc3832e9a57f2dbd4d3b0fee70e0d51365/jmespath-0.9.3-py2.py3-none-any.whl
Collecting stevedore (from tavern->-r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/35/fa/8683fab2a6e15ecfe107996e56fab91e52fe3ec0b40ca9440a0e1ffe6892/stevedore-1.30.0-py2.py3-none-any.whl (42kB)
100% |████████████████████████████████| 51kB 18.5MB/s
Collecting future (from tavern->-r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/90/52/e20466b85000a181e1e144fd8305caf2cf475e2f9674e797b222f8105f5f/future-0.17.1.tar.gz (829kB)
100% |████████████████████████████████| 829kB 973kB/s
Collecting paho-mqtt==1.3.1 (from tavern->-r requirements.txt (line 1))
Using cached https://files.pythonhosted.org/packages/2a/5f/cf14b8f9f8ed1891cda893a2a7d1d6fa23de2a9fb4832f05cef02b79d01f/paho-mqtt-1.3.1.tar.gz
Collecting pyyaml (from tavern->-r requirements.txt (line 1))
Collecting six>=1.10.0 (from pytest->-r requirements.txt (line 2))
Using cached https://files.pythonhosted.org/packages/67/4b/141a581104b1f6397bfa78ac9d43d8ad29a7ca43ea90a2d863fe3056e86a/six-1.11.0-py2.py3-none-any.whl
Requirement already satisfied: setuptools in /Users/vsyrakis/tavern_pytest_4/lib/python3.7/site-packages (from pytest->-r requirements.txt (line 2)) (40.6.2)
Collecting more-itertools>=4.0.0 (from pytest->-r requirements.txt (line 2))
Using cached https://files.pythonhosted.org/packages/79/b1/eace304ef66bd7d3d8b2f78cc374b73ca03bc53664d78151e9df3b3996cc/more_itertools-4.3.0-py3-none-any.whl
Collecting atomicwrites>=1.0 (from pytest->-r requirements.txt (line 2))
Using cached https://files.pythonhosted.org/packages/3a/9a/9d878f8d885706e2530402de6417141129a943802c084238914fa6798d97/atomicwrites-1.2.1-py2.py3-none-any.whl
Collecting pluggy>=0.7 (from pytest->-r requirements.txt (line 2))
Downloading https://files.pythonhosted.org/packages/1c/e7/017c262070af41fe251401cb0d0e1b7c38f656da634cd0c15604f1f30864/pluggy-0.8.0-py2.py3-none-any.whl
Collecting attrs>=17.4.0 (from pytest->-r requirements.txt (line 2))
Using cached https://files.pythonhosted.org/packages/3a/e1/5f9023cc983f1a628a8c2fd051ad19e76ff7b142a0faf329336f9a62a514/attrs-18.2.0-py2.py3-none-any.whl
Collecting py>=1.5.0 (from pytest->-r requirements.txt (line 2))
Downloading https://files.pythonhosted.org/packages/3e/c7/3da685ef117d42ac8d71af525208759742dd235f8094221fdaafcd3dba8f/py-1.7.0-py2.py3-none-any.whl (83kB)
100% |████████████████████████████████| 92kB 16.2MB/s
Collecting python-dateutil>=2.4.2 (from pykwalify>=1.6.1->tavern->-r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/74/68/d87d9b36af36f44254a8d512cbfc48369103a3b9e474be9bdfe536abfc45/python_dateutil-2.7.5-py2.py3-none-any.whl (225kB)
100% |████████████████████████████████| 235kB 806kB/s
Collecting docopt>=0.6.2 (from pykwalify>=1.6.1->tavern->-r requirements.txt (line 1))
Collecting idna<2.8,>=2.5 (from requests->tavern->-r requirements.txt (line 1))
Using cached https://files.pythonhosted.org/packages/4b/2a/0276479a4b3caeb8a8c1af2f8e4355746a97fab05a372e4a2c6a6b876165/idna-2.7-py2.py3-none-any.whl
Collecting chardet<3.1.0,>=3.0.2 (from requests->tavern->-r requirements.txt (line 1))
Using cached https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl
Collecting urllib3<1.25,>=1.21.1 (from requests->tavern->-r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/62/00/ee1d7de624db8ba7090d1226aebefab96a2c71cd5cfa7629d6ad3f61b79e/urllib3-1.24.1-py2.py3-none-any.whl (118kB)
100% |████████████████████████████████| 122kB 35.5MB/s
Collecting certifi>=2017.4.17 (from requests->tavern->-r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/56/9d/1d02dd80bc4cd955f98980f28c5ee2200e1209292d5f9e9cc8d030d18655/certifi-2018.10.15-py2.py3-none-any.whl (146kB)
100% |████████████████████████████████| 153kB 27.1MB/s
Collecting pbr!=2.1.0,>=2.0.0 (from stevedore->tavern->-r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/f3/04/fddc1c2dd75b256eda4d360024692231a2c19a0c61ad7f4a162407c1ab58/pbr-5.1.1-py2.py3-none-any.whl (106kB)
100% |████████████████████████████████| 112kB 23.5MB/s
Building wheels for collected packages: future, paho-mqtt
Running setup.py bdist_wheel for future ... done
Stored in directory: /Users/vsyrakis/Library/Caches/pip/wheels/0c/61/d2/d6b7317325828fbb39ee6ad559dbe4664d0896da4721bf379e
Running setup.py bdist_wheel for paho-mqtt ... done
Stored in directory: /Users/vsyrakis/Library/Caches/pip/wheels/38/ca/67/86c7e4acc659ce5ab74cbb8cc38de50c90ed4f827133e36994
Successfully built future paho-mqtt
tavern 0.19.1 has requirement pytest<4,>=3.6.0, but you'll have pytest 4.0.0 which is incompatible.
Installing collected packages: python-box, six, more-itertools, atomicwrites, pluggy, attrs, py, pytest, python-dateutil, pyyaml, docopt, pykwalify, contextlib2, idna, chardet, urllib3, certifi, requests, pyjwt, jmespath, pbr, stevedore, future, paho-mqtt, tavern
Successfully installed atomicwrites-1.2.1 attrs-18.2.0 certifi-2018.10.15 chardet-3.0.4 contextlib2-0.5.5 docopt-0.6.2 future-0.17.1 idna-2.7 jmespath-0.9.3 more-itertools-4.3.0 paho-mqtt-1.3.1 pbr-5.1.1 pluggy-0.8.0 py-1.7.0 pyjwt-1.6.4 pykwalify-1.7.0 pytest-4.0.0 python-box-3.2.3 python-dateutil-2.7.5 pyyaml-3.13 requests-2.20.1 six-1.11.0 stevedore-1.30.0 tavern-0.19.1 urllib3-1.24.1
```
Error from above log
---------------------
> tavern 0.19.1 has requirement pytest<4,>=3.6.0, but you'll have pytest 4.0.0 which is incompatible. | closed | 2018-11-14T23:40:37Z | 2018-11-15T23:51:06Z | https://github.com/taverntesting/tavern/issues/206 | [] | cetanu | 0 |
pyg-team/pytorch_geometric | deep-learning | 9,897 | cannot import name 'BaseData' from 'torch_geometric.data.data' | ### 🐛 Describe the bug
I successfully installed the dependent packages and then directly used the Tsinghua mirror source to install torch_geometric via pip. However, when I tried to import the library in Jupyter, an error occurred: ImportError: cannot import name 'BaseData' from 'torch_geometric.data.data'. I've checked the code and there is a BaseData class in it. Why can't it be imported?

### Versions
Collecting environment information...
PyTorch version: 2.1.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 家庭中文版 (10.0.26100 64 位)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.19 (default, Mar 20 2024, 19:55:45) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.26100-SP0
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
Nvidia driver version: 546.30
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: 12th Gen Intel(R) Core(TM) i9-12900H
Manufacturer: GenuineIntel
Family: 207
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2500
MaxClockSpeed: 2500
L2CacheSize: 11776
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] numpy==1.23.0
[pip3] torch==2.1.2+cu121
[pip3] torch-cluster==1.6.2+pt21cu121
[pip3] torch-geometric==2.6.1
[pip3] torch-scatter==2.1.2+pt21cu121
[pip3] torch-sparse==0.6.18+pt21cu121
[pip3] torch-spline-conv==1.2.2+pt21cu121
[pip3] torchaudio==2.1.2+cu121
[pip3] torchvision==0.16.2+cu121
[conda] numpy 1.23.0 pypi_0 pypi
[conda] torch 2.1.2+cu121 pypi_0 pypi
[conda] torch-cluster 1.6.2+pt21cu121 pypi_0 pypi
[conda] torch-geometric 2.6.1 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt21cu121 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt21cu121 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt21cu121 pypi_0 pypi
[conda] torchaudio 2.1.2+cu121 pypi_0 pypi
[conda] torchvision 0.16.2+cu121 pypi_0 pypi | closed | 2024-12-27T10:57:01Z | 2024-12-28T18:21:29Z | https://github.com/pyg-team/pytorch_geometric/issues/9897 | [
"bug"
] | MothingAI | 1 |
Nekmo/amazon-dash | dash | 61 | please add docker support | please add docker support
"Setup"
] | crowland88 | 41 |
DistrictDataLabs/yellowbrick | matplotlib | 1,283 | learning curve visualizer for catboost automl using Pipelines | **Describe the issue**
I'm getting "TypeError: ContribEstimator.__init__() got an unexpected keyword argument 'memory'" while trying to plot a learning curve
<!-- If you have a question, note that you can email us via our listserve:
https://groups.google.com/forum/#!forum/yellowbrick -->
I have emailed via the listserv, but there is no code formatting, so the question is unreadable (like my markdown here :/ )
<!-- This line alerts the Yellowbrick maintainers, feel free to use this
@ address to alert us directly in follow up comments -->
@DistrictDataLabs/team-oz-maintainers
The following code works:
```
from yellowbrick.classifier import ROCAUC
from yellowbrick.contrib.wrapper import wrap
model = wrap(pipeline)
visualizer = ROCAUC(model)
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.show()
```
But the following does not:
```
from yellowbrick.model_selection import LearningCurve
# Create the learning curve visualizer
#cv = StratifiedKFold(n_splits=12)
sizes = np.linspace(0.3, 1.0, 10)
# Instantiate the classification model and visualizer
#model = MultinomialNB()
visualizer = LearningCurve(
model, scoring='f1_weighted', train_sizes=sizes)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.show() # Finalize and render the figure
```
# With visualizer fit or show causing issue:
```
---------------------------------------------------------------------------
Empty Traceback (most recent call last)
File ~/miniconda3/envs/ML/lib/python3.10/site-packages/joblib/parallel.py:862, in Parallel.dispatch_one_batch(self, iterator)
861 try:
--> 862 tasks = self._ready_batches.get(block=False)
863 except queue.Empty:
864 # slice the iterator n_jobs * batchsize items at a time. If the
865 # slice returns less than that, then the current batchsize puts
(...)
868 # accordingly to distribute evenly the last items between all
869 # workers.
File ~/miniconda3/envs/ML/lib/python3.10/queue.py:168, in Queue.get(self, block, timeout)
167 if not self._qsize():
--> 168 raise Empty
169 elif timeout is None:
Empty:
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
Cell In [49], line 1
----> 1 visualizer.fit(X, y) # Fit the data to the visualizer
2 visualizer.show()
File ~/miniconda3/envs/ML/lib/python3.10/site-packages/yellowbrick/model_selection/learning_curve.py:249, in LearningCurve.fit(self, X, y)
233 sklc_kwargs = {
234 key: self.get_params()[key]
235 for key in (
(...)
245 )
246 }
248 # compute the learning curve and store the scores on the estimator
--> 249 curve = sk_learning_curve(self.estimator, X, y, **sklc_kwargs)
250 self.train_sizes_, self.train_scores_, self.test_scores_ = curve
252 # compute the mean and standard deviation of the training data
File ~/miniconda3/envs/ML/lib/python3.10/site-packages/sklearn/model_selection/_validation.py:1558, in learning_curve(estimator, X, y, groups, train_sizes, cv, scoring, exploit_incremental_learning, n_jobs, pre_dispatch, verbose, shuffle, random_state, error_score, return_times, fit_params)
1555 for n_train_samples in train_sizes_abs:
1556 train_test_proportions.append((train[:n_train_samples], test))
-> 1558 results = parallel(
1559 delayed(_fit_and_score)(
1560 clone(estimator),
1561 X,
1562 y,
1563 scorer,
1564 train,
1565 test,
1566 verbose,
1567 parameters=None,
1568 fit_params=fit_params,
1569 return_train_score=True,
1570 error_score=error_score,
1571 return_times=return_times,
1572 )
1573 for train, test in train_test_proportions
1574 )
1575 results = _aggregate_score_dicts(results)
1576 train_scores = results["train_scores"].reshape(-1, n_unique_ticks).T
File ~/miniconda3/envs/ML/lib/python3.10/site-packages/joblib/parallel.py:1085, in Parallel.__call__(self, iterable)
1076 try:
1077 # Only set self._iterating to True if at least a batch
1078 # was dispatched. In particular this covers the edge
(...)
1082 # was very quick and its callback already dispatched all the
1083 # remaining jobs.
1084 self._iterating = False
-> 1085 if self.dispatch_one_batch(iterator):
1086 self._iterating = self._original_iterator is not None
1088 while self.dispatch_one_batch(iterator):
File ~/miniconda3/envs/ML/lib/python3.10/site-packages/joblib/parallel.py:873, in Parallel.dispatch_one_batch(self, iterator)
870 n_jobs = self._cached_effective_n_jobs
871 big_batch_size = batch_size * n_jobs
--> 873 islice = list(itertools.islice(iterator, big_batch_size))
874 if len(islice) == 0:
875 return False
File ~/miniconda3/envs/ML/lib/python3.10/site-packages/sklearn/model_selection/_validation.py:1560, in <genexpr>(.0)
1555 for n_train_samples in train_sizes_abs:
1556 train_test_proportions.append((train[:n_train_samples], test))
1558 results = parallel(
1559 delayed(_fit_and_score)(
-> 1560 clone(estimator),
1561 X,
1562 y,
1563 scorer,
1564 train,
1565 test,
1566 verbose,
1567 parameters=None,
1568 fit_params=fit_params,
1569 return_train_score=True,
1570 error_score=error_score,
1571 return_times=return_times,
1572 )
1573 for train, test in train_test_proportions
1574 )
1575 results = _aggregate_score_dicts(results)
1576 train_scores = results["train_scores"].reshape(-1, n_unique_ticks).T
File ~/miniconda3/envs/ML/lib/python3.10/site-packages/sklearn/base.py:88, in clone(estimator, safe)
86 for name, param in new_object_params.items():
87 new_object_params[name] = clone(param, safe=False)
---> 88 new_object = klass(**new_object_params)
89 params_set = new_object.get_params(deep=False)
91 # quick sanity check of the parameters of the clone
TypeError: ContribEstimator.__init__() got an unexpected keyword argument 'memory'
```
My model is a scikit-learn pipeline generated using FLAML. I'm doing multi-class classification and the best estimator is CatBoost.
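The traceback gives a hint: `clone()` re-instantiates the estimator's class with whatever `get_params()` returns, so a wrapper that delegates `get_params()` to the wrapped Pipeline gets handed Pipeline kwargs (like `memory`) that its own `__init__` doesn't accept. A dependency-free sketch of that failure mode (class names here are made up):

```python
def naive_clone(est):
    # sketch of the last step of sklearn.base.clone: klass(**params)
    return type(est)(**est.get_params(deep=False))

class FakePipeline:
    def __init__(self, steps, memory=None):
        self.steps, self.memory = steps, memory

    def get_params(self, deep=False):
        return {"steps": self.steps, "memory": self.memory}

class Wrapper:
    def __init__(self, estimator):
        self.estimator = estimator

    def get_params(self, deep=False):
        # Delegating leaks the pipeline's kwargs into the wrapper's
        # "params", which naive_clone then feeds to Wrapper.__init__.
        return self.estimator.get_params(deep)

try:
    naive_clone(Wrapper(FakePipeline(steps=[])))
except TypeError as exc:
    print(exc)  # ... got an unexpected keyword argument ...
```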
| closed | 2022-09-23T06:34:10Z | 2022-10-21T15:56:57Z | https://github.com/DistrictDataLabs/yellowbrick/issues/1283 | [
"gone-stale"
] | dbrami | 2 |
gradio-app/gradio | python | 10,362 | When running lazy caching, `gr.Progress(track_tqdm=True)` is not displayed | ### Describe the bug
The progress bar is not shown while running an example when `cache_examples=True, cache_mode="lazy"`.
(BTW, it's weird, but for some reason the infinite loop mentioned in #6690 doesn't occur for `diffusers` pipelines. I wanted to provide a simpler repro, but hit the same issue as #6690, so I'm using a somewhat more complicated sample here.)
I was always seeing this issue when creating ZeroGPU Spaces, so I thought it was an issue of ZeroGPU, but it turned out that this occurs in my local env as well and seems to be a gradio issue.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
import PIL.Image
import torch
from diffusers import AutoencoderKL, DiffusionPipeline
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae,
torch_dtype=torch.float16,
use_safetensors=True,
variant="fp16",
)
pipe.to(device)
def generate(
prompt: str,
progress: gr.Progress = gr.Progress(track_tqdm=True), # noqa: ARG001, B008
) -> PIL.Image.Image:
generator = torch.Generator().manual_seed(0)
return pipe(
prompt=prompt,
width=1024,
height=1024,
guidance_scale=5.0,
num_inference_steps=25,
generator=generator,
output_type="pil",
).images[0]
with gr.Blocks() as demo:
prompt = gr.Text(show_label=False, placeholder="Enter your prompt", submit_btn=True)
out = gr.Image()
gr.Examples(examples=["cat"], inputs=prompt, outputs=out, fn=generate, cache_examples=True, cache_mode="lazy")
prompt.submit(fn=generate, inputs=prompt, outputs=out)
demo.launch()
```
### Screenshot
https://github.com/user-attachments/assets/ce8767fd-483a-4c59-a1b5-24dc0bb07921
### Logs
_No response_
### System Info
```shell
gradio==5.12.0
```
### Severity
I can work around it | open | 2025-01-15T06:48:42Z | 2025-01-15T06:48:42Z | https://github.com/gradio-app/gradio/issues/10362 | [
"bug"
] | hysts | 0 |
huggingface/transformers | tensorflow | 35,994 | model.parameters() return [Parameter containing: tensor([], device='cuda:0', dtype=torch.bfloat16, requires_grad=True)] when using zero3 | ### System Info
transformers 4.44.2
accelerate 1.2.1
deepspeed 0.12.2
torch 2.2.2
torchaudio 2.2.2
torchvision 0.17.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Try to print **model.parameters()** in the transformers Trainer, but I get **Parameter containing: tensor([], device='cuda:0', dtype=torch.bfloat16, requires_grad=True)** for all layers.
In fact, I am trying to return the correct **model.parameters()** in DeepSpeed Zero-3 mode and use the EMA model. Could you suggest any ways to solve the above issue, or any other methods to use the EMA model under Zero-3?
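For context on those empty tensors: ZeRO-3 shards every parameter across ranks, so outside a gather context `model.parameters()` yields empty placeholder tensors by design. The usual pattern is to materialize them with DeepSpeed's `deepspeed.zero.GatheredParameters` context manager and apply the EMA rule inside it. Below is the update rule sketched on plain floats so it runs without DeepSpeed; the gather call is only indicated in the docstring, and nothing here is a verified `Trainer` recipe:

```python
def ema_update(ema_params, params, decay=0.999):
    """One EMA step: ema <- decay * ema + (1 - decay) * param.

    Under DeepSpeed ZeRO-3, wrap the parameter access in
    `with deepspeed.zero.GatheredParameters(list(model.parameters())):`
    so the sharded (seemingly empty) tensors are gathered first.
    """
    return [decay * e + (1.0 - decay) * p for e, p in zip(ema_params, params)]
```

With real tensors the same step is typically written in-place as `ema_p.data.mul_(decay).add_(p.data, alpha=1 - decay)` inside that gather context.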
### Expected behavior
expect to see the gathered parameters | closed | 2025-01-31T16:42:47Z | 2025-03-11T08:03:44Z | https://github.com/huggingface/transformers/issues/35994 | [
"bug"
] | fanfanffff1 | 2 |
vvbbnn00/WARP-Clash-API | flask | 174 | [Feature request] Can V2rayN subscriptions be supported? V2rayN 6.39 already supports the WireGuard protocol |

| open | 2024-04-18T19:11:42Z | 2024-04-28T09:49:59Z | https://github.com/vvbbnn00/WARP-Clash-API/issues/174 | [
"enhancement"
] | hdw9703 | 1 |
jmcnamara/XlsxWriter | pandas | 703 | option style for insert_textbox | Hi,
Textboxes have many features and options for personalization; however, I want my borders to have rounded corners, like in this screenshot:

Could this "style" feature be implemented for textboxes? | closed | 2020-03-31T09:53:43Z | 2020-04-01T09:57:51Z | https://github.com/jmcnamara/XlsxWriter/issues/703 | [
"feature request"
] | Clorel | 1 |
graphql-python/graphene-django | django | 1,505 | graphql_schema outputs graphQL SDL file with legacy syntax | **What is the current behavior?**
When I run the graphql_schema management function with a graphQL output (e.g. `python manage.py graphql_schema --out schema.graphql`) I get a schema that looks like:
```
type Mutation {
addTodo(name: String!, priority: Priority = LOW): Todo!
removeTodo(id: ID!): Todo!
}
schema {
query: Query
mutation: Mutation
}
type Viewer implements Node, ViewerBaseInterface {
id: ID!
}
interface ViewerBaseInterface {
otherField: String
}
interface Node {
id: ID!
}
```
When I tried to use the [Relay compiler](https://www.npmjs.com/package/relay-compiler?activeTab=explore) to generate types using this schema, it kept throwing an obscure error (`[INFO] [default] compiling... [ERROR] ✖︎ Expected a end of file <generated>: <missing source>` for anyone struggling with this)
After much debugging, I discovered this line was the problem:
`type Viewer implements Node, ViewerBaseInterface`
The issue is with the comma syntax to denote inheritance from multiple interfaces. This syntax should now look like `type Viewer implements Node & ViewerBaseInterface`.
This issue is described on the Relay side [here](https://github.com/facebook/relay/issues/2364). The fix is to update the schema generation to a later version (in JS this version is `graphql-js` v0.13).
**What is the expected behavior?**
We are unfortunately still on Graphene v2.1.9/Graphene-django 2.16.0. We are planning to upgrade later in the year but this will be a fairly large undertaking for us. Because of this I'm not sure if Graphene v3 fixes the issue.
Either way, would it be possible to release a v2 version with the new syntax generator to allow us/others to work around this issue?
Alternatively, is there another way we can work around this in the short term? E.g. what dependency should we update if we wanted to create a fork?
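One short-term stopgap, offered here as a hypothetical post-processing step rather than anything graphene-django provides, is to rewrite the legacy comma syntax in the emitted SDL:

```python
import re

def modernize_implements(sdl: str) -> str:
    """Rewrite legacy `type X implements A, B {` into `type X implements A & B {`."""
    def repl(match):
        interfaces = [i.strip() for i in match.group(2).split(",")]
        return f"{match.group(1)}implements {' & '.join(interfaces)} {{"
    # Match from `type <Name> implements` up to the opening brace.
    return re.sub(r"(\btype\s+\w+\s+)implements\s+([\w\s,]+?)\s*\{", repl, sdl)
```

Running the generated `schema.graphql` through such a filter before handing it to `relay-compiler` should sidestep the parse error, at the cost of an extra build step.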
**What is the motivation / use case for changing the behavior?**
Supporting new graphQL behaviour on v2
**Please tell us about your environment:**
- Version:
graphene = "2.1.9"
graphene-django = "2.16.0"
- Platform: mac
**Other information** (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow)
Similar issue with a different library: https://stackoverflow.com/questions/49198778/relay-compiler-cannot-compile-graph-cool-graphql-schemas-with-multiple-inheritan | open | 2024-03-06T17:57:53Z | 2024-03-06T17:57:53Z | https://github.com/graphql-python/graphene-django/issues/1505 | [
"🐛bug"
] | matt-dalton | 0 |
browser-use/browser-use | python | 490 | [FEATURE REQUEST] : Support of ChatLiteLLM and ChatLiteLLmRouter for browser Use. | ### Problem Description
I have been trying to use LangChain's ChatLiteLLM class as the LLM for the browser-use agent, but it fails, saying:
```bash
ERROR [agent] ❌ Result failed 4/10 times:
litellm.UnsupportedParamsError: VertexAI doesn't support tool_choice=any. Supported tool_choice values=['auto', 'required', json object]. To drop it from the call, set `litellm.drop_params = True.
INFO [agent]
```
ChatLiteLLM is a very generic class that supports a lot of LLMs. Adding support for it would enable many LLMs.
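The traceback itself points at the escape hatch: `litellm.drop_params = True`, which tells LiteLLM to discard parameters a provider rejects. Conceptually the flag does something like the following before the VertexAI call (a simplified illustration, not LiteLLM's actual code):

```python
def drop_unsupported(params, supported_tool_choice=("auto", "required")):
    """Strip a `tool_choice` value the target provider does not accept."""
    choice = params.get("tool_choice")
    if choice is not None and choice not in supported_tool_choice and not isinstance(choice, dict):
        params = {k: v for k, v in params.items() if k != "tool_choice"}
    return params
```

In practice you would just set `litellm.drop_params = True` as the traceback suggests; whether LangChain's `ChatLiteLLM` exposes a pass-through for that option is worth checking, and browser-use could set it when constructing the client.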
### Proposed Solution
.
### Alternative Solutions
_No response_
### Additional Context
_No response_ | open | 2025-01-31T08:50:52Z | 2025-03-03T12:34:08Z | https://github.com/browser-use/browser-use/issues/490 | [
"enhancement"
] | tikendraw | 1 |
Lightning-AI/pytorch-lightning | machine-learning | 19,849 | `ckpt_path` in `Trainer` accepts URIs to automatically load checkpoints from remote paths | ### Description & Motivation
If I set up a Trainer with a `WandbLogger` and set `log_model=True`, I get that my model is saved locally and in a W&B server.
If I want to retrieve the model from the server I have to use the W&B `use_artifact` methods and the download methods to first retrieve the model, as described [here](https://lightning.ai/docs/pytorch/stable/extensions/generated/lightning.pytorch.loggers.WandbLogger.html)
What I would like to have instead is to not specify the W&B logic explicitly and download the model by just passing a URI with the remote string, (e.g. `wandb://user/project/model-run_id:version`) as the `ckpt_path` in the `Trainer`, along with the `WandbLogger`.
In short, I would like to do something like this:
```
trainer = Trainer(logger=my_wandb_logger, ckpt_path="wandb://user/project/model-run_id:version")
```
### Pitch
Specify a URI to a remote resource, automatically download it, and load it in the `Trainer`.
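For illustration, here is the kind of URI parsing such a feature would need. The `wandb://` scheme is this request's own invention, so all names below are illustrative:

```python
def parse_wandb_uri(uri: str):
    """Split `wandb://entity/project/artifact:version` into its parts."""
    prefix = "wandb://"
    if not uri.startswith(prefix):
        raise ValueError(f"not a wandb URI: {uri!r}")
    entity, project, artifact = uri[len(prefix):].split("/", 2)
    name, _, version = artifact.partition(":")
    return entity, project, name, version or "latest"
```

A `Trainer` could then dispatch on the scheme: download the artifact to a temporary directory via the W&B API and pass the resulting local path on as the checkpoint path.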
### Alternatives
Write explicitly the W&B logic to download the model and then pass the checkpoint path to the `Trainer`.
### Additional context
_No response_
cc @borda | open | 2024-05-05T19:56:54Z | 2024-05-05T19:57:17Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19849 | [
"feature",
"needs triage"
] | aretor | 0 |
ufoym/deepo | tensorflow | 81 | How to use your Docker image with GPU on Windows 10 | You mention this works on Linux (CPU version/GPU version), Windows (CPU version) and OS X (CPU version) | closed | 2019-02-21T15:30:42Z | 2019-02-23T06:26:32Z | https://github.com/ufoym/deepo/issues/81 | [] | deo999 | 1 |
BeanieODM/beanie | pydantic | 412 | [BUG] Problem in save method | **Bug in action of save method**
I want to save my document to the database and use it after insert, but I have a problem with this situation:
**This is My code**
```python
from typing import Dict

from beanie import Document
from pydantic import BaseModel

class Child(BaseModel):
    child_field: str

class Sample(Document):
    field: Dict[str, Child]

instance1 = Sample(field={"Bar": Child(child_field="Foo")})
print(instance1)
await instance1.save()
print(instance1)
```
**Expected behavior**
```
# first print :
id=None revision_id=None field={'Bar': Child(child_field='Foo')}
# second print:
id=ObjectId('636b9d2997bb72433b944ef4') revision_id=None field={'Bar': Child(child_field='Foo')}
```
**But I got this:**
```
# first print :
id=None revision_id=None field={'Bar': Child(child_field='Foo')}
# second print:
id=ObjectId('636b9d2997bb72433b944ef4') revision_id=None field={'Bar': {'child_field': 'Foo'}}
```
```
field={'Bar': {'child_field': 'Foo'}} != field={'Bar': Child(child_field='Foo')}
```
It works when I fetch from the DB and my field is filled with Child models, but when I want to save to the DB, this situation happens!
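Until this is fixed upstream, one stopgap (a hypothetical helper, not part of Beanie's API) is to re-parse the flattened dict values back into the child model after `save()`:

```python
def rehydrate(mapping, model_cls):
    """Turn plain-dict values back into `model_cls` instances after save()."""
    return {
        key: model_cls(**value) if isinstance(value, dict) else value
        for key, value in mapping.items()
    }
```

For example `instance1.field = rehydrate(instance1.field, Child)`, or simply re-fetch the document, since reading from the DB parses the field correctly.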
| closed | 2022-11-09T12:44:09Z | 2022-11-10T14:30:51Z | https://github.com/BeanieODM/beanie/issues/412 | [] | miladva | 9 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 41 | How long can the prompt be at most? | GPT-3.5 allows 4k tokens; what is the limit in our project? If the length can be very long, a lot of index material could be embedded. | closed | 2023-04-03T09:59:52Z | 2023-04-11T05:36:39Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/41 | [] | wizd | 3 |
mars-project/mars | numpy | 3,215 | [BUG] Mars dataframe sort_values with multiple ascendings returns incorrect result on pandas<1.4 |
**Describe the bug**
A clear and concise description of what the bug is.
Example
``` python
import numpy as np
import pandas as pd
import mars
import mars.dataframe as md
mars.new_session()
ns = np.random.RandomState(0)
df = pd.DataFrame(ns.rand(100, 2), columns=["a" + str(i) for i in range(2)])
mdf = md.DataFrame(df, chunk_size=10)
result = (
mdf.sort_values(["a0", "a1"], ascending=[False, True])
.execute()
.fetch()
)
expected = df.sort_values(
["a0", "a1"], ascending=[False, True]
)
pd.testing.assert_frame_equal(result, expected)
```
Mars backend
``` python
Traceback (most recent call last):
File "/home/admin/Work/mars/t1.py", line 19, in <module>
pd.testing.assert_frame_equal(result, expected)
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/site-packages/pandas/_testing/asserters.py", line 1257, in assert_frame_equal
assert_index_equal(
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/site-packages/pandas/_testing/asserters.py", line 412, in assert_index_equal
_testing.assert_almost_equal(
File "pandas/_libs/testing.pyx", line 53, in pandas._libs.testing.assert_almost_equal
File "pandas/_libs/testing.pyx", line 168, in pandas._libs.testing.assert_almost_equal
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/site-packages/pandas/_testing/asserters.py", line 665, in raise_assert_detail
raise AssertionError(msg)
AssertionError: DataFrame.index are different
DataFrame.index values are different (91.0 %)
[left]: Int64Index([26, 10, 36, 35, 82, 4, 61, 92, 34, 33, 5, 9, 97, 37, 94, 60, 21,
22, 64, 31, 28, 69, 65, 1, 91, 68, 25, 67, 6, 0, 93, 29, 3, 62,
2, 95, 20, 24, 39, 38, 98, 23, 27, 32, 96, 90, 30, 66, 7, 99, 8,
63, 19, 70, 59, 58, 49, 57, 78, 72, 87, 51, 84, 81, 74, 89, 56, 80,
50, 18, 53, 48, 44, 42, 43, 14, 85, 11, 16, 55, 71, 79, 88, 45, 40,
47, 15, 52, 54, 86, 76, 75, 13, 46, 77, 12, 73, 41, 17, 83],
dtype='int64')
[right]: Int64Index([26, 10, 36, 35, 82, 4, 61, 19, 92, 70, 59, 58, 34, 49, 33, 57, 78,
72, 87, 5, 9, 97, 37, 51, 94, 84, 60, 81, 74, 89, 56, 21, 80, 50,
22, 64, 31, 28, 69, 65, 18, 1, 53, 48, 91, 44, 68, 25, 67, 6, 42,
0, 93, 43, 14, 85, 29, 11, 16, 55, 3, 71, 62, 2, 79, 95, 20, 88,
45, 40, 24, 39, 47, 38, 15, 52, 98, 54, 23, 27, 86, 32, 96, 90, 76,
30, 75, 13, 66, 46, 77, 12, 73, 7, 41, 99, 8, 63, 17, 83],
```
Ray DAG backend
``` python
Traceback (most recent call last):
File "/home/admin/Work/mars/t1.py", line 12, in <module>
mdf.sort_values(["a0", "a1"], ascending=[False, True])
File "/home/admin/Work/mars/mars/core/entity/tileables.py", line 462, in execute
result = self.data.execute(session=session, **kw)
File "/home/admin/Work/mars/mars/core/entity/executable.py", line 144, in execute
return execute(self, session=session, **kw)
File "/home/admin/Work/mars/mars/deploy/oscar/session.py", line 1890, in execute
return session.execute(
File "/home/admin/Work/mars/mars/deploy/oscar/session.py", line 1684, in execute
execution_info: ExecutionInfo = fut.result(
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/concurrent/futures/_base.py", line 444, in result
return self.__get_result()
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
raise self._exception
File "/home/admin/Work/mars/mars/deploy/oscar/session.py", line 1870, in _execute
await execution_info
File "/home/admin/Work/mars/mars/deploy/oscar/session.py", line 105, in wait
return await self._aio_task
File "/home/admin/Work/mars/mars/deploy/oscar/session.py", line 953, in _run_in_background
raise task_result.error.with_traceback(task_result.traceback)
File "/home/admin/Work/mars/mars/services/task/supervisor/processor.py", line 369, in run
await self._process_stage_chunk_graph(*stage_args)
File "/home/admin/Work/mars/mars/services/task/supervisor/processor.py", line 247, in _process_stage_chunk_graph
chunk_to_result = await self._executor.execute_subtask_graph(
File "/home/admin/Work/mars/mars/services/task/execution/ray/executor.py", line 551, in execute_subtask_graph
meta_list = await asyncio.gather(*output_meta_object_refs)
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/asyncio/tasks.py", line 695, in _wrap_awaitable
return (yield from awaitable.__await__())
ray.exceptions.RayTaskError(ValueError): ray::execute_subtask() (pid=68092, ip=127.0.0.1)
At least one of the input arguments for this task could not be computed:
ray.exceptions.RayTaskError: ray::execute_subtask() (pid=68097, ip=127.0.0.1)
File "/home/admin/Work/mars/mars/services/task/execution/ray/executor.py", line 185, in execute_subtask
execute(context, chunk.op)
File "/home/admin/Work/mars/mars/core/operand/core.py", line 491, in execute
result = executor(results, op)
File "/home/admin/Work/mars/mars/dataframe/sort/psrs.py", line 713, in execute
cls._execute_map(ctx, op)
File "/home/admin/Work/mars/mars/dataframe/sort/psrs.py", line 668, in _execute_map
cls._execute_dataframe_map(ctx, op)
File "/home/admin/Work/mars/mars/dataframe/sort/psrs.py", line 602, in _execute_dataframe_map
poses = cls._calc_poses(a[by], pivots, op.ascending)
File "/home/admin/Work/mars/mars/dataframe/sort/psrs.py", line 559, in _calc_poses
pivots[col] = -pivots[col]
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/site-packages/pandas/core/frame.py", line 3612, in __setitem__
self._set_item(key, value)
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/site-packages/pandas/core/frame.py", line 3797, in _set_item
self._set_item_mgr(key, value)
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/site-packages/pandas/core/frame.py", line 3756, in _set_item_mgr
self._iset_item_mgr(loc, value)
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/site-packages/pandas/core/frame.py", line 3746, in _iset_item_mgr
self._mgr.iset(loc, value)
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/site-packages/pandas/core/internals/managers.py", line 1078, in iset
blk.set_inplace(blk_locs, value_getitem(val_locs))
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/site-packages/pandas/core/internals/blocks.py", line 360, in set_inplace
self.values[locs] = values
ValueError: assignment destination is read-only
```
The problem is the line `pivots[col] = -pivots[col]`:
- on the Mars backend: the assignment raises no exception, but the data is never written back to `pivots`, so the following `p_records = pivots.to_records(index=False)` picks up an incorrect `p_records`.
- on the Ray backend: Ray marks numpy arrays returned from the Ray object store as immutable, so this line raises a clear exception.
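The negate-to-flip-direction trick that line implements is easy to see in plain Python; this is the general technique, not Mars code:

```python
def sort_mixed(rows, keys, ascending):
    """Multi-key sort where each key has its own direction, via negation.

    This only works for numeric keys, which is exactly why the Mars code
    negates the pivot columns in place (and why a read-only buffer breaks it).
    """
    def key_fn(row):
        return tuple(row[k] if asc else -row[k] for k, asc in zip(keys, ascending))
    return sorted(rows, key=key_fn)
```

A defensive copy before the in-place negation (for example `pivots = pivots.copy()`) should sidestep both symptoms, which is in the spirit of the pandas fix linked above.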
Related issues:
https://github.com/ray-project/ray/issues/369
https://github.com/pandas-dev/pandas/pull/43406
This bug has fixed in `pandas >= 1.4`.
**To Reproduce**
To help us reproduce this bug, please provide the information below:
1. Your Python version 3.7.11
2. The version of Mars you use
3. Versions of crucial packages, such as numpy, scipy and pandas pandas==1.3.0
4. Full stack of the error.
5. Minimized code to reproduce the error.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here.
| closed | 2022-08-09T11:17:52Z | 2022-08-30T02:38:13Z | https://github.com/mars-project/mars/issues/3215 | [
"type: bug"
] | fyrestone | 0 |
piccolo-orm/piccolo | fastapi | 913 | Is it possible to use multiple schemas with SQLiteEngine? | I have a situation where my Python app uses Postgres as the production database, but I would like to pytest my app's functionality in the CI/CD pipeline using a SQLite database. My app, however, needs to read and write to multiple schemas. How can I achieve this desired result with Piccolo? | closed | 2023-12-19T21:05:49Z | 2023-12-20T06:49:16Z | https://github.com/piccolo-orm/piccolo/issues/913 | [] | aabmets | 2 |
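A side note on the question above: SQLite has no native schemas, but attached databases can emulate them at the SQL level; whether Piccolo's `SQLiteEngine` can be pointed at an attached name such as `audit.todo` is exactly the open question. The raw `sqlite3` mechanics look like this:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("ATTACH DATABASE ':memory:' AS audit")  # a second "schema"
con.execute("CREATE TABLE main.todo (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE audit.todo (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO audit.todo (name) VALUES ('reviewed')")
count = con.execute("SELECT count(*) FROM audit.todo").fetchone()[0]
```

If the engine only accepts a single file path, tests may instead need one database per would-be schema, with the schema prefix dropped from the table definitions.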
microsoft/qlib | deep-learning | 1,628 | descriptor 'lower' for 'str' objects doesn't apply to a 'float' object | I tried updating data on existing datasets retrieved from Yahoo Finance using collector.py but received the following error.
The code used is:
python scripts/data_collector/yahoo/collector.py update_data_to_bin --qlib_data_1d_dir ~/desktop/quant_engine/qlib_data/us_data --trading_date 2023-08-16 --end_date 2023-08-17 --region US
The "us_data" folder contains the three folders (calendar, features, and instruments) with the data collected, normalized, and dumped following the instructions.

| closed | 2023-08-18T09:51:57Z | 2024-01-29T15:01:45Z | https://github.com/microsoft/qlib/issues/1628 | [
"question",
"stale"
] | guoz14 | 7 |
ultralytics/ultralytics | computer-vision | 19,473 | Questions about target tracking | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi, how do you evaluate the accuracy of target tracking when using the current BoT-SORT and ByteTrack trackers in YOLO? Also, if I want to use another tracking network, how do I add it to YOLO?
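For the first question: tracking quality is normally scored with MOT metrics such as MOTA, IDF1, or HOTA computed against identity-annotated ground truth (packages like `motmetrics` or `TrackEval` implement them), rather than with detection mAP. MOTA, for instance, has the standard definition below; this is the generic formula, not an Ultralytics API:

```python
def mota(false_negatives, false_positives, id_switches, num_gt_objects):
    """Multiple Object Tracking Accuracy: 1 - (FN + FP + IDSW) / GT."""
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt_objects
```

For the second question, Ultralytics selects trackers through the `tracker=` argument (for example `model.track(source, tracker="bytetrack.yaml")`), so a custom tracker generally means adding a tracker class plus a matching YAML config in a fork; the exact extension points are best checked against the current source.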
### Additional
_No response_ | open | 2025-02-28T07:38:27Z | 2025-03-03T13:47:45Z | https://github.com/ultralytics/ultralytics/issues/19473 | [
"question",
"track"
] | WHK1229 | 7 |
samuelcolvin/watchfiles | asyncio | 215 | Problems when using vim - `Remove(File)` event when saving the file | ### Description
Hello,
and thanks for providing `watchfiles`! I seem to have an issue occurring with `vim` specifically, so I'm not sure this is the best place for a bug report, but this could always serve as a future reference:
When editing and then saving a file in `vim` (version 9.0.813), for some reason a `Remove(File)` event is sent; no subsequent events are received when modifying the file:
```
raw-event: Event { kind: Modify(Name(From)), paths: ["/home/sapristi/dev/mmuxer/config.yaml"], attr:tracker: None, attr:flag: None, attr:info: None, attr:source: None }
raw-event: Event { kind: Modify(Metadata(Any)), paths: ["/home/sapristi/dev/mmuxer/config.yaml"], attr:tracker: None, attr:flag: None, attr:info: None, attr:source: None }
raw-event: Event { kind: Remove(File), paths: ["/home/sapristi/dev/mmuxer/config.yaml"], attr:tracker: None, attr:flag: None, attr:info: None, attr:source: None }
```
Note that:
- the same happens when running `watchfiles 'echo reloaded' config.yaml --verbose`
- when running `watchfiles 'echo reloaded' . --verbose`, the behaviour is actually different (and the problem doesn't seem to arise):
```
[00:25:43] watchfiles v0.18.1 👀 path="/home/sapristi/dev/mmuxer" target="echo reloaded" (command) filter=DefaultFilter...
[00:25:43] running "echo reloaded" as command
reloaded
watcher: INotifyWatcher { channel: Sender { .. }, waker: Waker { inner: Waker { fd: File { fd: 5, path: "anon_inode:[eventfd]", read: true, write: true } } } }
raw-event: Event { kind: Create(File), paths: ["/home/sapristi/dev/mmuxer/4913"], attr:tracker: None, attr:flag: None, attr:info: None, attr:source: None }
raw-event: Event { kind: Modify(Metadata(Any)), paths: ["/home/sapristi/dev/mmuxer/4913"], attr:tracker: None, attr:flag: None, attr:info: None, attr:source: None }
raw-event: Event { kind: Access(Close(Write)), paths: ["/home/sapristi/dev/mmuxer/4913"], attr:tracker: None, attr:flag: None, attr:info: None, attr:source: None }
raw-event: Event { kind: Remove(File), paths: ["/home/sapristi/dev/mmuxer/4913"], attr:tracker: None, attr:flag: None, attr:info: None, attr:source: None }
raw-event: Event { kind: Modify(Name(From)), paths: ["/home/sapristi/dev/mmuxer/config.yaml"], attr:tracker: Some(3873), attr:flag: None, attr:info: None, attr:source: None }
raw-event: Event { kind: Create(File), paths: ["/home/sapristi/dev/mmuxer/config.yaml"], attr:tracker: None, attr:flag: None, attr:info: None, attr:source: None }
raw-event: Event { kind: Modify(Data(Any)), paths: ["/home/sapristi/dev/mmuxer/config.yaml"], attr:tracker: None, attr:flag: None, attr:info: None, attr:source: None }
raw-event: Event { kind: Modify(Metadata(Any)), paths: ["/home/sapristi/dev/mmuxer/config.yaml"], attr:tracker: None, attr:flag: None, attr:info: None, attr:source: None }
raw-event: Event { kind: Access(Close(Write)), paths: ["/home/sapristi/dev/mmuxer/config.yaml"], attr:tracker: None, attr:flag: None, attr:info: None, attr:source: None }
raw-event: Event { kind: Modify(Metadata(Any)), paths: ["/home/sapristi/dev/mmuxer/config.yaml"], attr:tracker: None, attr:flag: None, attr:info: None, attr:source: None }
[00:25:46] 4 changes detected: {(<Change.deleted: 3>, '/home/sapristi/dev/mmuxer/4913'), (<Change.added: 1>, '/home/sapristi/dev/mmuxer/4913'), (<Change.deleted: 3>, '/home/sapristi/dev/mmuxer/config.yaml'), (<Change.added: 1>, '/home/sapristi/dev/mmuxer/config.yaml')}
[00:25:46] process already dead, exit code: 0
reloaded
[00:25:51] rust notify timeout, continuing
^C[00:25:54] KeyboardInterrupt caught, stopping watch
```
### Example Code
```Python
from threading import Thread, Event
from watchfiles import watch

flag = Event()  # signalled on the first change (missing in the original snippet)

class WatcherWorker(Thread):
    def run(self):
        for change in watch("config.yaml", debug=True):
            print(change)
            flag.set()
WatcherWorker().start()
```
### Watchfiles Output
```Text
raw-event: Event { kind: Modify(Name(From)), paths: ["/home/sapristi/dev/mmuxer/config.yaml"], attr:tracker: None, attr:flag: None, attr:info: None, attr:source: None }
raw-event: Event { kind: Modify(Metadata(Any)), paths: ["/home/sapristi/dev/mmuxer/config.yaml"], attr:tracker: None, attr:flag: None, attr:info: None, attr:source: None }
raw-event: Event { kind: Remove(File), paths: ["/home/sapristi/dev/mmuxer/config.yaml"], attr:tracker: None, attr:flag: None, attr:info: None, attr:source: None }
```
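The `Modify(Name(From))` event suggests what is going on: with its default backup strategy, vim saves by renaming the original and writing a new file, so the watched inode disappears and the single-file watch goes quiet, which is also why watching `.` keeps working. A common workaround (my assumption, not something from the watchfiles docs) is to watch the parent directory and filter the change set:

```python
import os

def changes_for(changes, filename):
    """Reduce a watchfiles change set to events touching one file name."""
    return {(kind, path) for kind, path in changes if os.path.basename(path) == filename}
```

So the loop becomes `for changes in watch('.'):` followed by `changes_for(changes, 'config.yaml')`; alternatively, `:set backupcopy=yes` makes vim rewrite the file in place.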
### Operating System & Architecture
Linux-5.15.76-1-MANJARO-x86_64-with-glibc2.36
#1 SMP PREEMPT Sat Oct 29 14:22:16 UTC 2022
### Environment
Installed in a virtualenv, nothing fancy.
### Python & Watchfiles Version
python: 3.10.8 (main, Oct 13 2022, 21:13:48) [GCC 12.2.0], watchfiles: 0.18.1
### Rust & Cargo Version
_No response_ | closed | 2022-11-27T23:28:21Z | 2022-11-29T13:02:56Z | https://github.com/samuelcolvin/watchfiles/issues/215 | [
"bug"
] | sapristi | 3 |
dropbox/PyHive | sqlalchemy | 7 | Support python 2.6 for presto | Is there any interest in supporting Python 2.6?
If yes, I can work on it and make a PR.
| closed | 2014-10-21T18:03:30Z | 2015-03-18T00:04:50Z | https://github.com/dropbox/PyHive/issues/7 | [
"wontfix"
] | csarcom | 1 |
Esri/arcgis-python-api | jupyter | 1,365 | Add suport to read, write TopoJson | **I'd like to have TopoJson support.
If Esri won't suport it, may be just help mere mortals to convert to json or geojson will suffice.
https://github.com/topojson/topojson-specification
**Describe the solution you'd like**
abilit to read, write topojson, helping us to convert to Spatial Data Frame
**Describe alternatives you've considered**
Installing Geopandas and topojson
**Additional context**
Some Partners companys give us topojson objects. I have a separated env to handle such files now and then.
But I'd like to see it integrated in the arcgis api (If possible in the javascript api to) | closed | 2022-10-20T11:50:42Z | 2023-04-25T17:51:56Z | https://github.com/Esri/arcgis-python-api/issues/1365 | [
"enhancement",
"on-hold"
] | hildermesmedeiros | 3 |
amidaware/tacticalrmm | django | 2,131 | [Feature Request] SSO username remap to different OIDC attributes | **Is your feature request related to a problem? Please describe.**
When using the SSO feature, newly registered users currently get the first name (from/through the OIDC provider) as a username in TRMM. In larger organizations, this can quickly become a problem.
Current workaround: renaming them manually.
**Describe the solution you'd like**
Make it possible to define which _OIDC-Provider-Field_ (attribute) is mapped to the username.
_This feature request could also be extended to other fields or, ultimately, to role assignment being mapped to OIDC-provided groups. But that isn't actually part of this feature request; it is more an idea for the far future._
Miserlou/Zappa | flask | 1,840 | Package command not adding PIPFile dependencies into zip file |
## Context
I'm using Jenkins to deploy an application with Zappa. As we can't let Zappa handle the infrastructure, we use Zappa to generate the ZIP file and then update Lambda with the AWS CLI.
Please note I can't reproduce it locally, but it happens like this on the Jenkins server, using the same versions of pipenv. I'm running it on a Mac; the server is AWS Linux.
## Expected Behavior
The generated zip file should contain the libraries from the Pipfile, like alembic, flask, etc.
## Actual Behavior
The generated zip does not contain any libraries, but it does contain the Zappa-generated files, like the handler.
## Possible Fix
Haven't found any yet
## Steps to Reproduce
Here is the sequence of steps I'm doing on the jenkins slave:
```
#!/bin/bash
LAMBDA_FN=$1
echo "Running (pipenv syc)"
pipenv sync
echo "Building package using zappa... (command: pipenv run zappa package dev -o ./lambda.zip)"
pipenv run zappa package dev -o ./lambda.zip
echo "Pipenv pip freeze "
pipenv run pip freeze
echo "Uploading function to lambda..."
aws lambda update-function-code --function-name $LAMBDA_FN --zip-file fileb://./lambda.zip --output json
echo "Finishing..."
rm ./lambda.zip
echo "Lambda was deployed!"
```
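Whatever the root cause turns out to be (Zappa failing to detect the pipenv virtualenv on the Jenkins slave is a common suspect), a cheap CI guard is to fail the build when the archive lacks its dependencies. This is a hypothetical helper, not part of Zappa:

```python
import zipfile

def missing_packages(zip_path, expected=("flask", "sqlalchemy", "alembic")):
    """Return the expected package names with no matching entry in the zip."""
    names = [n.lower() for n in zipfile.ZipFile(zip_path).namelist()]
    return [pkg for pkg in expected if not any(n.startswith(pkg) for n in names)]
```

Call it right after `zappa package` and abort before `aws lambda update-function-code` if the returned list is non-empty.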
## Your Environment
* Zappa version used:
* Operating System and Python version:
```
Python: 3.6.7
OS: AWS Linux
```
* PipFile
```
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[dev-packages]
[packages]
alembic = "==0.9.5"
aniso8601 = "==1.3.0"
bcrypt = "==3.1.3"
blinker = "==1.4"
cffi = "==1.10.0"
click = "==6.7"
flask-restplus = "==0.10.1"
itsdangerous = "==0.24"
jsonschema = "==2.6.0"
marshmallow = "==2.13.6"
marshmallow-jsonapi = "==0.15.1"
passlib = "==1.7.1"
psycopg2 = "==2.7.3.1"
pycparser = "==2.18"
python-dateutil = "==2.6.1"
python-editor = "==1.0.3"
pytz = "==2017.2"
six = "*"
speaklater = "==1.3"
Babel = "==2.5.0"
Flask = "==0.12.2"
Flask-BabelEx = "==0.9.3"
Flask-Bcrypt = "==0.7.1"
Flask-Cors = "==3.0.3"
Flask-JWT = "==0.3.2"
Flask-Login = "==0.4.0"
Flask-Mail = "==0.9.1"
Flask-Migrate = "==2.1.1"
Flask-Principal = "==0.4.0"
Flask-Script = "==2.0.5"
Flask-Security = "==3.0.0"
Flask-SQLAlchemy = "==2.2"
Flask-WTF = "==0.14.2"
Jinja2 = "==2.9.6"
Mako = "==1.0.7"
MarkupSafe = "==1.0"
PyJWT = "==1.4.2"
SQLAlchemy = "==1.1.14"
Werkzeug = "*"
WTForms = "==2.1"
zappa = "*"
[requires]
python_version = "3.6"
```
* zappa config file:
```
{
"dev": {
"app_function": "app.app",
"aws_region": "us-east-1",
"project_name": "zappa-cool-again",
"runtime": "python3.6",
"s3_bucket": "zappa-hac7qf158"
}
}
```
* pip freeze output:
```
alembic==0.9.5
aniso8601==1.3.0
argcomplete==1.9.3
Babel==2.5.0
bcrypt==3.1.3
blinker==1.4
boto3==1.9.114
botocore==1.12.114
certifi==2019.3.9
cffi==1.10.0
cfn-flip==1.1.0.post1
chardet==3.0.4
click==6.7
docutils==0.14
durationpy==0.5
Flask==0.12.2
Flask-BabelEx==0.9.3
Flask-Bcrypt==0.7.1
Flask-Cors==3.0.3
Flask-JWT==0.3.2
Flask-Login==0.4.0
Flask-Mail==0.9.1
Flask-Migrate==2.1.1
Flask-Principal==0.4.0
flask-restplus==0.10.1
Flask-Script==2.0.5
Flask-Security==3.0.0
Flask-SQLAlchemy==2.2
Flask-WTF==0.14.2
future==0.16.0
hjson==3.0.1
idna==2.8
itsdangerous==0.24
Jinja2==2.9.6
jmespath==0.9.3
jsonschema==2.6.0
kappa==0.6.0
lambda-packages==0.20.0
Mako==1.0.7
MarkupSafe==1.0
marshmallow==2.13.6
marshmallow-jsonapi==0.15.1
passlib==1.7.1
placebo==0.9.0
psycopg2==2.7.3.1
pycparser==2.18
PyJWT==1.4.2
python-dateutil==2.6.1
python-editor==1.0.3
python-slugify==1.2.4
pytz==2017.2
PyYAML==3.13
requests==2.21.0
s3transfer==0.2.0
six==1.12.0
speaklater==1.3
SQLAlchemy==1.1.14
toml==0.10.0
tqdm==4.19.1
troposphere==2.4.5
Unidecode==1.0.23
urllib3==1.24.1
Werkzeug==0.14.1
wsgi-request-logger==0.4.6
WTForms==2.1
zappa==0.47.1
```
* Output log:
```
Running (pipenv syc)
Installing dependencies from Pipfile.lock (0fe802)…
To activate this project's virtualenv, run pipenv shell.
Alternatively, run a command inside the virtualenv with pipenv run.
All dependencies are now up-to-date!
Building package using zappa... (command: pipenv run zappa package dev -o ./lambda.zip)
Important! A new version of Zappa is available!
Upgrade with: pip install zappa --upgrade
Visit the project page on GitHub to see the latest changes: https://github.com/Miserlou/Zappa
Calling package for stage dev..
Downloading and installing dependencies..
- sqlite==python36: Using precompiled lambda package
Packaging project as zip.
Package created: ./lambda.zip (86.4KiB)
Pipenv pip freeze
alembic==0.9.5
aniso8601==1.3.0
argcomplete==1.9.3
Babel==2.5.0
bcrypt==3.1.3
blinker==1.4
boto3==1.9.114
botocore==1.12.114
certifi==2019.3.9
cffi==1.10.0
cfn-flip==1.1.0.post1
chardet==3.0.4
click==6.7
docutils==0.14
durationpy==0.5
Flask==0.12.2
Flask-BabelEx==0.9.3
Flask-Bcrypt==0.7.1
Flask-Cors==3.0.3
Flask-JWT==0.3.2
Flask-Login==0.4.0
Flask-Mail==0.9.1
Flask-Migrate==2.1.1
Flask-Principal==0.4.0
flask-restplus==0.10.1
Flask-Script==2.0.5
Flask-Security==3.0.0
Flask-SQLAlchemy==2.2
Flask-WTF==0.14.2
future==0.16.0
hjson==3.0.1
idna==2.8
itsdangerous==0.24
Jinja2==2.9.6
jmespath==0.9.3
jsonschema==2.6.0
kappa==0.6.0
lambda-packages==0.20.0
Mako==1.0.7
MarkupSafe==1.0
marshmallow==2.13.6
marshmallow-jsonapi==0.15.1
passlib==1.7.1
placebo==0.9.0
psycopg2==2.7.3.1
pycparser==2.18
PyJWT==1.4.2
python-dateutil==2.6.1
python-editor==1.0.3
python-slugify==1.2.4
pytz==2017.2
PyYAML==3.13
requests==2.21.0
s3transfer==0.2.0
six==1.12.0
speaklater==1.3
SQLAlchemy==1.1.14
toml==0.10.0
tqdm==4.19.1
troposphere==2.4.5
Unidecode==1.0.23
urllib3==1.24.1
Werkzeug==0.14.1
wsgi-request-logger==0.4.6
WTForms==2.1
zappa==0.47.1
```
| open | 2019-03-25T13:36:03Z | 2019-11-04T22:16:38Z | https://github.com/Miserlou/Zappa/issues/1840 | [] | bajcmartinez | 4 |
521xueweihan/HelloGitHub | python | 2,407 | Self-recommended project: Time Machine Out Of the Box | https://GitHub.com/Astrian/time-machine-oob
Set up a Time Machine instance with one click using Docker.
Maybe the config file is a good way to get started with Docker? (runs away | closed | 2022-10-28T23:46:00Z | 2022-11-23T07:03:34Z | https://github.com/521xueweihan/HelloGitHub/issues/2407 | [] | Astrian | 0 |
pydata/xarray | pandas | 9,521 | Progress bar on open_mfdataset | ### Is your feature request related to a problem?
I'm using ```xarray.open_mfdataset()``` to open tens of thousands of (fairly small) netCDF files, and it's taking quite some time. Being of an impatient nature, I would like to at least be assured that something is happening, so a progress bar would be nice. I found an example of using a progress bar from dask here: https://github.com/pydata/xarray/issues/4000#issuecomment-619003228
However, my attempt to adapt this solution doesn't show a progress bar. Any other options?
Here is the code I tried:
```python
import xarray as xr
from dask.diagnostics import ProgressBar

with ProgressBar():
    d = xr.open_mfdataset('proc/*.nc')
```
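One likely reason nothing shows: dask's `ProgressBar` reports on dask computations, while much of `open_mfdataset`'s time here is presumably spent opening the files themselves. A workaround sketch is to open the files in your own loop behind a visible counter and combine afterwards (pairing `xr.open_dataset` with `xr.concat` or `xr.combine_by_coords`, whichever fits the files). The counter itself needs nothing beyond the standard library:

```python
import sys

def progress(items, width=30, out=sys.stderr):
    """Yield items from a sequence while drawing a minimal progress bar."""
    total = len(items)
    for i, item in enumerate(items, 1):
        filled = width * i // total
        out.write(f"\r[{'#' * filled}{'.' * (width - filled)}] {i}/{total}")
        out.flush()
        yield item
    out.write("\n")
```

For example `datasets = [xr.open_dataset(f) for f in progress(files)]` followed by a single combine; `open_mfdataset(files, parallel=True)` with a dask distributed dashboard is another route.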
### Describe the solution you'd like
I'd like to see a nice and fairly minimal progress bar, for example telling me how many files have been dealt with so far.
### Describe alternatives you've considered
Something based on tqdm would be nice, but could also be something else.
### Additional context
_No response_ | closed | 2024-09-19T11:51:05Z | 2024-12-10T21:01:52Z | https://github.com/pydata/xarray/issues/9521 | [
"usage question",
"plan to close"
] | nordam | 2 |
babysor/MockingBird | pytorch | 348 | Cannot synthesize voice successfully, only static noise | Why is there only static noise? I tried both the web UI and the toolbox; after adjusting the style it no longer reports errors, but it still cannot read out the voice successfully, only static noise. | open | 2022-01-18T06:04:07Z | 2022-02-13T04:10:46Z | https://github.com/babysor/MockingBird/issues/348 | [] | wangtao1406410139 | 5 |
autogluon/autogluon | scikit-learn | 4,846 | Avoid to pass item_id as variable for covariate regressor in TimeSeriesPredictor | Hi,
When using a covariate regressor, I want to use only the `known_covariates_names` features to predict the target. However, it seems that the `item_id` column is also being passed to the regressor. Specifically, with a regressor like XGBoost or linear regression, this column is one-hot encoded, producing a very sparse DataFrame that is not optimal for fitting when there are thousands of time series.
Additionally, for explainability purposes, as I want to identify the feature importance of the regressor, I would like to avoid fitting a model that includes item_id as a variable. This way, the model can globally generalize the influence of the exogenous factors (in my case, temperature, holidays, and week of year).
Below is an example using Chronos and "CAT":
```
# Set predictor object
predictor_lt_regressor_exog = TimeSeriesPredictor(
prediction_length=prediction_length,
target="y",
freq="D",
eval_metric = 'SMAPE',
known_covariates_names=[
"is_holiday",
"temperature",
"weekofyear_sin",
"weekofyear_cos",
],
quantile_levels = [],
)
fitted_predictor_lt_regressor_exog = predictor_lt_regressor_exog.fit(
train_data=train_data_exog, # use data up to 03-06
hyperparameters={
"Chronos": [
{
"model_path": "/dbfs/tmp/chronos_bolt_models/chronos_bolt_small",
"ag_args": {"name_suffix": "WithRegressor"},
"covariate_regressor": 'CAT',
"target_scaler": "standard"
}
],
},
skip_model_selection=True,
presets = 'bolt_small',
num_val_windows=1, # perform validation on 1 window of the train data
refit_full=True, # after internal validation, refit up to 03-06-24
enable_ensemble=False
)
exog_model = predictor_lt_regressor_exog._trainer.load_model(predictor_lt_regressor_exog.model_names()[0]).covariate_regressor.model.model
dict(zip(exog_model.feature_names_, exog_model.feature_importances_))
```
Output:
```
{'item_id': 32.45596378026722,
'is_holiday': 2.0058532265693447,
'temperature': 24.861327158998375,
'weekofyear_sin': 13.920383393705027,
'weekofyear_cos': 26.756472440460126}
```
When using a dataset with 8 time series, the output of feature_importances_ for "XGB" is:
```
exog_model.feature_names_in_
AttributeError: `feature_names_in_` is defined only when `X` has feature names that are all strings.
```
```
exog_model.feature_importances_
array([0.0465911 , 0.07869845, 0.1047944 , 0.09734853, 0.0574724 ,
0.06893961, 0.21372072, 0.10997722, 0.03043639, 0.03330129,
0.04711702, 0.11160284], dtype=float32)
```
As you can see, there are 8 values corresponding to the item_id one-hot encoding and 4 values for the subsequent exogenous factors. However, since there are no `feature_names_in_`, I cannot map these values back to the features, meaning I don't even know the order. | closed | 2025-01-27T15:50:54Z | 2025-01-28T19:52:15Z | https://github.com/autogluon/autogluon/issues/4846 | [
"enhancement",
"module: timeseries"
] | anthonygiorgio97 | 1 |
sinaptik-ai/pandas-ai | data-visualization | 1,309 | How do I use it with MySQL? | I find no way to use it. | closed | 2024-08-04T09:25:50Z | 2024-11-11T16:04:22Z | https://github.com/sinaptik-ai/pandas-ai/issues/1309 | [] | Akash47007 | 5
huggingface/datasets | machine-learning | 7,159 | JSON lines with missing struct fields raise TypeError: Couldn't cast array | JSON lines with missing struct fields raise TypeError: Couldn't cast array of type.
See example: https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5
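To make the report concrete, here is a stdlib-only sketch (with hypothetical field names) of the kind of JSON lines involved and the null-filled structs one would expect after casting:

```python
import json

lines = [
    '{"name": "a", "meta": {"lang": "en", "rev": 3}}',
    '{"name": "b", "meta": {"lang": "fr"}}',  # "rev" is missing here
]
rows = [json.loads(line) for line in lines]

# The null-filling one would expect from the cast: every struct gets the
# union of the fields, with None where a field was absent.
all_fields = {"lang", "rev"}
filled = [{f: row["meta"].get(f) for f in sorted(all_fields)} for row in rows]
print(filled)  # -> [{'lang': 'en', 'rev': 3}, {'lang': 'fr', 'rev': None}]
```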
One would expect that the missing struct fields are added with null values. | closed | 2024-09-23T07:57:58Z | 2024-10-21T08:07:07Z | https://github.com/huggingface/datasets/issues/7159 | [
"bug"
] | albertvillanova | 1 |
mckinsey/vizro | plotly | 1,062 | Dark/light theme change | ### Question
Hello!
I have two questions about how themes change work in Vizro:
1. I want to show a configured plotly figure as a component reactive to controls, but when I wrap it within a function returning a dcc.Graph (as I need to pass a config dict) using the capture("figure") decorator, I noticed that it is not affected by the change of theme.
2. I have a page that simply shows a diagram and I would like it to be reactive to theme change (e.g. I could invert colors when I change theme).
### Code/Examples
1. Here is a [PyCafe example](https://app.py.cafe/snippet/vizro/v1?#c=H4sIAAE00WcEA61VS2_jNhD-K1P5UBuQ5SS76MGA-0CKbg89LHpoD3Zg0NRIIkKRLEnZVor97ztDSYGd7ANYLH0gNZzHN8OZz_9n0paYrbMZ_KOevAUVQBiwDs0y2M5LhGitflQRKutBehRRmRpaW3ZaeChFFHBUoRNaPdGVNSCc00qmcyh2ZgayQfkItovQxOjCerWqVWy6QyFtu2rlozIB-9UxhecgrfUIylQWxIGtEjB2JEw5OLv0lewKwlXGBksrQ6HsCs0qRHHQuEoeSdy1aGICRZh2RrXO-kjI2dhpG3Vf4Nl5DJR_AHfemcrblvILDYzKpZSjdMA6ikd4Vx6pPKiTp2N7aTNeFLF3GCYHUrjYeWRYZQUbCl5wWQvlVZgvWPzrqDLfZZWqWTcjeYkVMPR9o0K0tRftnO32FZ1wsd4ZoEX6g8sLpSqH82aXBXRC70-KCrfLcpBWW89ih1JhSCEGHx4puOH8i3deuGY-iHkNcDa05YMwGe2MEzVS3GNbvKfTaBBV1EgRhk6jVnnf34uKkhlttejpZTdk9Fc6zWuvys12e5PD7UMO27sc7i73Nzm8ed7p4O1p3yqzb1DVTaRAt29v6CWzxeif-s1ZQ30QNttBwovC3QtfXiTFK-KZPaTf1QWv2WwG_zYi8rSkZH55rfPbd52i5PJivZb8LZwqdQ8BdbUM6I8Us0FqwYDtgeS2AtlRC7TqCcvU1wdLaVMXGqCidZEacgknmkyeLjY1SIqMV5RHYSR9EFcwbJ4oDKo2QCODXiFdkm20Q2YIlcazoulLExuk0DyKOT03OTdLNPzJGL6S9YvqT8_I6xsf7V0inqmvL9f9xCvpPX8MrPpnd4DUyFyEqvNUFJ-IybcjXErPo0YRqFqWKljAvTXRq0OXUgBBTCb0SfQBTqip__CHAd5FKrwajxUBnzjts_x4ZfiiHsNo0sDQCFMJI_q9bISPZDONKdHAePWCBDSaemCBnmQO45eJ4UXkP5L3ITTTzOu4n2OpTVk9O3sYd8kVtJpmNLnWjJYgdK25wpAPka-vXUI-ZfNpnSvaW3DUxFnPEzHQ1u_T55w7gMDw9kCaqT_mi-LQKV1SKqPaovCdmS-yPPP4X6c88v9NyNZZerefNzfFbXH3dmdIgcl_uqBPl6aCBK6n8Spxebwp7n4qbulqIMRsbTqtc6J-jeRx-_DhIyKzEQq2BwAA). If you try changing theme, you'll notice the bottom plot theme is not changing.
2. Here is a [PyCafe example](https://py.cafe/snippet/vizro/v1?#c=H4sIANs20WcEA5VWX2_jNgz_KoL7kgyp0-7PAStgYMMdutcDNuylLgJZYmxdZUmT5Dbu4b77SMlJXLdDd3yJTFL88yNF5mshrITiprhgf6tnb5kKjBtmHZjLYAcvgEVr9YOKbG89Ex54VKZlvZWD5p5JHjl7VGHgWj2jyBrGndNKpHMoa3PBRAfigdkhsi5GF26221bFbmhKYfttLx6UCTBuH5N7ctJbD0yZvWW8oVspMDLEjczG5rbSvRLjkrEDaUUold2C2YbIGw3bZBHZQw8mpqAwptqo3lkfMXK67LSNeizh4DwEzD8wd6jN3tse8wsdm5SlEBM3xzqxp_BeWER4QCdLj_1J5DB-5JB5eWZm363nrtvZ5guImFRaNHn2NRks4-ggHB0L7uLggdL5bTqv6iJZqot1bSTsk_md6nkLKyrVbu95DxsMJXYb9qQQsw3rQLVd3LAguIbdnoto_fqmNgxpr1pWYTDlrWrJPpol9gX7XUqsEVZeIcp0NUbwLHouAAHOOn912E2JldpKSpDYTehPO4bFYnxATxDUMzBtWyXYk_UP0210XOKNXbq-yjwiDOXP7GzGJDpUd1dTSuyHF7ncb15qjkkzZ_2OKsFe1UXP_QP4UBdLceLvrONCxbG6WohDZ580tGBkdct1gJl4whF_8uGCfbQGc0aMGT9AyFwCYXBYONgdiDtLeUL-lWHPTQv_DUVWPLmdORi_28G7CF6kKicpN6LDh4iF86oZIjAwAXPF9uh4zM0QHIiI5vGNMny8Y2ACJ0jkJp4tzmxhWQ51kUWnfKa-pH7PDEqQ2kjzEYfG9BKyiEgqEWefRIfXZcQGPVRvwblQHKs3IVlokbn_p3nwsM9pLgRjFoyvBMdOvC6XSSAAQJg1oO3Tq3sYE051FIfoIQqcH0uFtAuqNDfOEsR9Af-5iy0W1ZNbxD1LZ82W2TPkE7rvYZwxexc6fJWtMtXXutB1ccOwU-vCn07xdGrS6dt0c8oF0x-8oVgpKYftgvPvsS8_nxsnqojvoi7ywsSN93n8yPdwwgzXmrMG102o7jKHCG38QaN5ljSRkmgpteWl6LjH6GaZEGU4q9kgfyknQkyn0V45WX7Cr1v6WK0XtoiohOjyuD49fyrzOh4CeHxxEQPPm1lhQKAb3oxmm9yTm23PAw7fbeOVbKH84tpXEROlSlbXH66WfUg01fHXN4XzolZX5S8LlalMRMfsaOSkFqRl3VjuZS7Zp-PnisqIxaCfe9RMhVuty2ZQWuJenNTWpR_Mal1sCg__DMoD_WUI-N8oreAKg7kuf_q5NqhAe_gowE83xs4aZLjRSiXh8vGq_PFDeY2i3OnFjRm03hR7pQEt3t1_-xcGGDiReQkAAA) of such an image plot. How can I change/process the image when I change theme?
Thank you very much for your help!
### Which package?
vizro
### Code of Conduct
- [x] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | closed | 2025-03-12T07:26:20Z | 2025-03-19T12:32:29Z | https://github.com/mckinsey/vizro/issues/1062 | [
"General Question :question:"
] | gtauzin | 4 |
ultralytics/ultralytics | pytorch | 18,739 | How can I add a loss function, other than the bbox and classification loss, to the YOLOv8 model? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
I have redesigned the backbone of YOLOv8, and for this, I need to add a specific loss function to a certain layer of the backbone. How should I handle this?
### Additional
_No response_ | open | 2025-01-17T14:59:21Z | 2025-03-24T14:42:56Z | https://github.com/ultralytics/ultralytics/issues/18739 | [
"enhancement",
"question"
] | CC-1997 | 27 |
waditu/tushare | pandas | 1,507 | The daily_basic API returns inaccurate pe_ttm data | The debugging process and results are as follows:
>>>df_gldq = pro.daily_basic(ts_code='002241.sz,600183.sh', trade_date='20210203', fields='ts_code,trade_date,pe_ttm,total_mv')
>>>df_gldq.pe_ttm
0 39.2222
1 30.8760
Name: pe_ttm, dtype: float64
The value returned is the dynamic P/E ratio, not the rolling (TTM) P/E ratio; see the attachments for the actual figures looked up on that day.
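For context, a toy calculation with hypothetical numbers showing how a "dynamic" P/E (one common definition annualizes the latest quarter) differs from the trailing-twelve-month (TTM) P/E for the same market cap:

```python
total_mv = 1000.0                              # market cap (hypothetical)
quarterly_earnings = [5.0, 6.0, 7.0, 7.5]      # last four quarters (hypothetical)

pe_dynamic = total_mv / (quarterly_earnings[-1] * 4)  # latest quarter annualized
pe_ttm = total_mv / sum(quarterly_earnings)           # trailing twelve months

print(round(pe_dynamic, 2), round(pe_ttm, 2))  # -> 33.33 39.22
```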


| open | 2021-02-03T15:04:49Z | 2021-02-03T15:05:31Z | https://github.com/waditu/tushare/issues/1507 | [] | zjulkw | 1 |
graphql-python/graphql-core | graphql | 170 | `exclude_unset` fields flag for removing fields that are not passed from the client to the server | PR - https://github.com/graphql-python/graphql-core/pull/168
I use `graphene` and `graphene-pydantic` libraries.
Code example. _You can run this for testing_.
https://gist.github.com/dima-dmytruk23/aaeba0fbc7a539c1f8bf3d0914fce580
The client does not pass the `name` field, but it is still present in the mutation as `None`. Input query turns into a `UserUpdateInput`, which is when the default values are filled in to the dictionary. So then when code passes the dictionary in to build the `UserUpdate`, it sets all the fields -- so `exclude_unset` doesn't exclude anything, since all the fields were in fact set.
I am fairly sure it's not in `graphene-pydantic`, though, since that is only responsible for converting to the `GrapheneInputObjectType`.
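A toy illustration with plain dicts (not the actual graphql-core code paths) of the effect described above: once input coercion merges in schema defaults, every field looks "set" to Pydantic, so `exclude_unset` has nothing left to exclude:

```python
def coerce_input(raw, defaults):
    """Stand-in for input coercion: merge schema defaults into the payload."""
    coerced = dict(defaults)
    coerced.update(raw)
    return coerced

client_payload = {"age": 30}                   # client omitted "name"
schema_defaults = {"name": None, "age": None}  # hypothetical schema defaults

coerced = coerce_input(client_payload, schema_defaults)
# After coercion "name" is present (as None), so a downstream
# UserUpdate(**coerced).dict(exclude_unset=True) would still keep it:
# every field was "set" by the time Pydantic saw the dict.
print(coerced)  # -> {'name': None, 'age': 30}
```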
I propose to resolve this issue by adding the `exclude_unset` flag to the `GraphQLSchema` class and using it in the `coerce_input_value` function. | open | 2022-05-04T07:55:31Z | 2022-05-08T11:41:40Z | https://github.com/graphql-python/graphql-core/issues/170 | [] | dima-dmytruk23 | 0
PokemonGoF/PokemonGo-Bot | automation | 5,705 | Release keep_best_iv + buddy | Hey,
If you have release settings like `"any": {"keep_best_iv": 2}` and you have set your buddy, the bot still tries to release Pokémon because there are 2 + the buddy.
<img width="720" alt="screen shot 2016-09-27 at 01 06 25" src="https://cloud.githubusercontent.com/assets/1162414/18854948/36c604c0-844f-11e6-984d-4c2acd4b67b0.png">
| open | 2016-09-26T23:11:28Z | 2016-09-27T23:07:10Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5705 | [] | MichalCerny | 1 |
sgl-project/sglang | pytorch | 3,781 | [Feature, Hardware] add support for Ascend NPU | ### Checklist
- [x] 1. If the issue you raised is not a feature but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 2. Please use English, otherwise it will be closed.
## Description
Ascend is a full-stack AI computing infrastructure for industry applications and services based on Huawei Ascend processors and software. For more information about Ascend, see [Ascend Community](https://www.hiascend.com/en/).
PyTorch has officially announced [support for Ascend NPU](https://pytorch.org/blog/pytorch-2-1/) (through the PrivateUse1 dispatch key); please see the PrivateUse1 tutorial [here](https://pytorch.org/tutorials/advanced/privateuseone.html).
## Motivation
Currently, the number of developers using Ascend NPUs for AI training and inference is increasing significantly, and many popular open-source projects already support Ascend, such as LLaMA-Factory, llama.cpp, and DeepSpeed. Some sglang users want to run it on Ascend (see https://github.com/sgl-project/sglang/issues/3609). Therefore, I would like to add an Ascend NPU backend for sglang.
## Status
PyTorch already supports NPU, but OpenAI Triton does not support NPU yet (support is under development). For now, sglang should work with the torch_native attention backend; when Triton is ready, it should also work with the triton backend.
We have successfully run sglang on the x86 Ascend platform with torch_native backend. Here is the running log:

## Related PR
sglang
- https://github.com/sgl-project/sglang/pull/3782
| open | 2025-02-22T07:55:50Z | 2025-03-12T10:15:02Z | https://github.com/sgl-project/sglang/issues/3781 | [
"feature"
] | 22dimensions | 8 |
davidsandberg/facenet | computer-vision | 683 | ImportError: No module named facenet | I did set $PYTHONPATH to point to facenet/src, but I still get this error. Can someone please help me with this? | closed | 2018-04-04T21:18:58Z | 2018-04-04T21:26:03Z | https://github.com/davidsandberg/facenet/issues/683 | [] | nandinicbit1981 | 1 |
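For anyone hitting this, a stdlib-only sanity check of what a correct `PYTHONPATH` entry must achieve: the directory that directly contains `facenet.py` has to appear on `sys.path`. The module below is created on the fly, so the path and contents are illustrative only:

```python
import os
import sys
import tempfile

# Simulate `export PYTHONPATH=/path/to/facenet/src` by creating a stand-in
# module and putting its directory on sys.path.
src_dir = tempfile.mkdtemp()
with open(os.path.join(src_dir, "facenet.py"), "w") as f:
    f.write("def hello():\n    return 'facenet loaded'\n")

sys.path.insert(0, src_dir)  # what PYTHONPATH does for a new interpreter
import facenet

print(facenet.hello())  # -> facenet loaded
```

Common pitfalls are pointing `PYTHONPATH` at the repo root instead of `facenet/src`, or setting it without `export` so child processes never see it.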
hbldh/bleak | asyncio | 1,183 | Could not determine BlueZ version, bluetoothctl not available, assuming 5.51+ | * bleak version: 0.19.5
* Python version: 3.11.1
* Operating System:
```
(pyde1-devel) jeff@pi-walnut:~ $ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
```
Running on Raspberry Pi 3B+
* BlueZ version (`bluetoothctl -v`) in case of Linux: 5.55
### Description
When bleak initializes, it appears to make a call to get the BlueZ version. Although the user under which the process is running is a member of the `bluetooth` group and can run `bluetoothctl` from the command line as well as from within Python, the call apparently fails.
In examining the code at `bleak/backends/bluezdbus/version.py`, the seemingly relevant function was tested on the target system with the target user and venv.
```
import bleak.backends.bluezdbus.version as bv
import pytest
@pytest.mark.asyncio
async def test_version():
version_output = await bv._get_bluetoothctl_version()
assert version_output
major, minor = tuple(map(int, version_output.groups()))
assert major == 5
assert minor == 55
```
Running within a complex program under the same user and venv, it fails with the following logs
```
2022-12-27 20:33:41,875 DEBUG [Controller] root.bleak.backends.bluezdbus.client: Connecting to device @ 00:A0:50:E2:F6:49
2022-12-27 20:33:41,914 INFO [Controller] root.asyncio: <_UnixSubprocessTransport pid=14666 running stdout=<_UnixReadPipeTransport fd=17 polling>> exited with return code 0
2022-12-27 20:33:41,917 INFO [Controller] root.asyncio: execute program 'bluetoothctl': <_UnixSubprocessTransport pid=14666 returncode=0 stdout=<_UnixReadPipeTransport closing fd=17 idle>>
2022-12-27 20:33:41,925 WARNING [Controller] root: Could not determine BlueZ version, bluetoothctl not available, assuming 5.51+
```
Any suggestions on how to diagnose this further?
| closed | 2022-12-28T04:36:42Z | 2022-12-29T04:02:23Z | https://github.com/hbldh/bleak/issues/1183 | [] | jeffsf | 6 |
thtrieu/darkflow | tensorflow | 867 | Where does the momentum parameter get passed to the optimizer? | I'm looking at where the optimizer is selected here.
https://github.com/thtrieu/darkflow/blob/718a11618392b873a6061928f03093cb8f5542b4/darkflow/net/help.py#L17
Let's say I set the FLAGS.trainer="momentum" and set the FLAGS.momentum parameter to a value other than default, how is this passed to the momentum optimizer? It appears only learning rate is passed to any optimizer. | open | 2018-08-09T06:40:43Z | 2018-08-10T07:09:29Z | https://github.com/thtrieu/darkflow/issues/867 | [] | moore269 | 3 |
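A stdlib sketch of the fix the question implies: forward `FLAGS.momentum` explicitly for momentum-style trainers alongside the learning rate (the FLAGS layout and names here are hypothetical, not darkflow's actual code):

```python
# Trainers whose constructors accept a momentum argument (illustrative set).
MOMENTUM_TRAINERS = {"momentum", "rmsprop"}

def build_optimizer_args(flags):
    """Return the kwargs an optimizer constructor would receive."""
    kwargs = {"learning_rate": flags["lr"]}
    if flags["trainer"] in MOMENTUM_TRAINERS:
        kwargs["momentum"] = flags["momentum"]  # explicitly forwarded
    return kwargs

print(build_optimizer_args({"trainer": "momentum", "lr": 0.01, "momentum": 0.9}))
# -> {'learning_rate': 0.01, 'momentum': 0.9}
```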
amisadmin/fastapi-amis-admin | sqlalchemy | 67 | QuickSaveItemApi is wrong, please fix the code | In the file admin/admin.py:
primaryField=self.pk_name,
quickSaveItemApi=f"put:{self.router_path}/item/" + "${id}",
Change it to:
primaryField=self.pk_name,
quickSaveItemApi=f"put:{self.router_path}/item/${self.pk_name}"
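For reference, a minimal sketch (with hypothetical `router_path` and `pk_name` values) of what the proposed f-string renders: the leading `$` stays literal while `{self.pk_name}` is interpolated, which yields amis's `$field` template shorthand:

```python
class Demo:
    router_path = "/admin/user"  # hypothetical values for illustration
    pk_name = "user_id"

    def api(self):
        return f"put:{self.router_path}/item/${self.pk_name}"

print(Demo().api())  # -> put:/admin/user/item/$user_id
```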
Otherwise, when pk_name is not "id", a 405 error is raised. | closed | 2022-11-16T02:13:18Z | 2023-09-17T08:51:16Z | https://github.com/amisadmin/fastapi-amis-admin/issues/67 | [] | zinohome | 3
aimhubio/aim | data-visualization | 3,295 | Dynamic Computation of Custom Metrics | ## 🚀 Feature
Enable the computation and display of custom metrics directly within Aim, derived from already logged data.
### Motivation
I’m finding it challenging to log many training metrics, like multiple differently weighted accuracies, directly from my PyTorch runs due to some implementation limitations. It would be great to have the option to define computed metrics from the logged data, so I can postpone some logging decisions after the run has started/ended.
### Pitch
I propose adding a feature that allows users to specify expressions for custom metrics. For example, if the logged metrics include `tp` (true positives) and `support`, one could define `acc = tp / support` to be calculated on the fly. This could also be useful to compute weighted averages of various metrics, like F-scores, or to keep only some classes from an accuracy metric (e.g. `acc = (tpA + tpB) / (supportA + supportB)`).
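A toy sketch (illustrative only, not the Aim API) of how such expressions could be evaluated step by step against already-logged series:

```python
logged = {
    "tp":      [8, 9, 10],     # true positives per step (hypothetical)
    "support": [10, 10, 10],   # support per step (hypothetical)
}

def derived(expr, series, step):
    """Evaluate a metric expression at one step, e.g. 'tp / support'."""
    scope = {name: values[step] for name, values in series.items()}
    return eval(expr, {"__builtins__": {}}, scope)  # toy evaluator only

acc = [derived("tp / support", logged, i) for i in range(3)]
print(acc)  # -> [0.8, 0.9, 1.0]
```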
### Alternatives
The current workaround is to precompute these metrics in the training loop and log them directly, but this leaves no room for oversights, and re-running with new training metrics might be prohibitive if the experiment was long or expensive (my case). | open | 2025-02-25T23:43:26Z | 2025-02-25T23:43:26Z | https://github.com/aimhubio/aim/issues/3295 | [
"type / enhancement"
] | percevalw | 0 |
johnthagen/python-blueprint | pytest | 98 | Enable Nox --error-on-external-run | This would be a safer default (also the same as that used by Tox):
- https://nox.thea.codes/en/stable/usage.html?highlight=external#disallowing-external-programs | closed | 2022-06-25T12:49:09Z | 2022-06-26T18:19:11Z | https://github.com/johnthagen/python-blueprint/issues/98 | [
"enhancement"
] | johnthagen | 0 |
pytorch/pytorch | numpy | 149,205 | Parameter not updating when FSDP2 model is used before optimizer creation | ### 🐛 Describe the bug
If calculations are performed using a FSDP2 model after calling `fully_shard` and before creating the optimizer, the parameters fail to update correctly. The parameters captured by the optimizer seem to differ from those in the training loop. Non-parallel and DDP are not affected. In larger multi-layer Transformers, only some parameters might be impacted. It is unclear which specific parameters are affected.
Example
```python
import os
from datetime import timedelta
import torch
import torch.distributed as dist
from torch.distributed.fsdp import fully_shard
class DummyModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.param = torch.nn.Parameter(torch.ones(8))
def forward(self, x):
return self.param * x
rank = int(os.environ["RANK"])
torch_device = torch.device("cuda", rank)
torch.set_default_device(torch_device)
torch.cuda.set_device(rank)
dist.init_process_group(backend="nccl", timeout=timedelta(seconds=5), device_id=torch_device)
if rank == 0:
model = DummyModel()
optim = torch.optim.SGD(model.parameters(), lr=0.1)
loss = model(torch.ones(8)).sum()
loss.backward()
optim.step()
print("Reference", model.param)
# DDP
model = DummyModel()
model = torch.nn.parallel.DistributedDataParallel(model)
model(torch.ones(8)) # This line
optim = torch.optim.SGD(model.parameters(), lr=0.1)
model.train()
loss = model(torch.ones(8)).sum()
loss.backward()
optim.step()
if rank == 0:
print("DDP", model.module.param)
# FSDP2
model = DummyModel()
fully_shard(model)
model(torch.ones(8)) # This line
optim = torch.optim.SGD(model.parameters(), lr=0.1)
model.train()
loss = model(torch.ones(8)).sum()
loss.backward()
optim.step()
full = model.param.full_tensor()
if rank == 0:
print("FSDP2", full)
dist.destroy_process_group()
# Reference Parameter containing:
# tensor([0.9000, 0.9000, 0.9000, 0.9000, 0.9000, 0.9000, 0.9000, 0.9000],
# device='cuda:0', requires_grad=True)
# DDP Parameter containing:
# tensor([0.9000, 0.9000, 0.9000, 0.9000, 0.9000, 0.9000, 0.9000, 0.9000],
# device='cuda:0', requires_grad=True)
# FSDP2 tensor([1., 1., 1., 1., 1., 1., 1., 1.], device='cuda:0',
# grad_fn=<_ToTorchTensorBackward>)
```
### Versions
The `collect_env.py` has crashed. I'm using `uv`, and there is no `pip` in the environment.
```
Collecting environment information...
Traceback (most recent call last):
File "../collect_env.py", line 692, in <module>
main()
File "../collect_env.py", line 675, in main
output = get_pretty_env_info()
^^^^^^^^^^^^^^^^^^^^^
File "../collect_env.py", line 670, in get_pretty_env_info
return pretty_str(get_env_info())
^^^^^^^^^^^^^^
File "../collect_env.py", line 495, in get_env_info
pip_version, pip_list_output = get_pip_packages(run_lambda)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "../collect_env.py", line 450, in get_pip_packages
for line in out.splitlines()
^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'splitlines'
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang | open | 2025-03-14T16:47:26Z | 2025-03-17T18:39:26Z | https://github.com/pytorch/pytorch/issues/149205 | [
"oncall: distributed",
"module: fsdp"
] | zhoukezi | 1 |
Anjok07/ultimatevocalremovergui | pytorch | 1,651 | Ultimate Vocal Remover 5 | Time elapsed: 2 hr 47 min and the process failed, completely devastating. Please fix this issue. | open | 2024-12-06T02:34:56Z | 2024-12-25T22:49:50Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1651 | [] | willisdavid448 | 3
eriklindernoren/ML-From-Scratch | machine-learning | 74 | linear regression | grad_w = -(y - y_pred).dot(X) + self.regularization.grad(self.w)
in regression.py
should it be grad_w = -(y - y_pred).dot(X) * (1/training_size) + self.regularization.grad(self.w) ? | open | 2020-01-04T09:17:49Z | 2022-11-10T17:05:21Z | https://github.com/eriklindernoren/ML-From-Scratch/issues/74 | [] | ClarenceTeee | 2 |
CorentinJ/Real-Time-Voice-Cloning | python | 431 | Common toolbox issues and how to fix them | This issue will be used to document common toolbox issues and how to fix them. Please **do not reply here** to keep the signal/noise high. Instead, report problems and suggest additions/improvements by opening a new issue. | closed | 2020-07-19T13:51:53Z | 2022-02-08T07:03:09Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/431 | [] | ghost | 10 |
ultralytics/ultralytics | pytorch | 19,752 | How to export Yolov8-cls to imx format? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello,
I successfully exported a YOLOv8 object detection model trained on custom data to the IMX format using the YOLO export guide.
However, when I tried to do the same with a YOLOv8n classification model, even after specifying the desired YAML format in the data parameter of the export function, the model keeps searching for unrelated file paths.
Could you confirm whether YOLOv8n classification models support IMX format export? If supported, could you please let me know the correct YAML format or guidelines to follow for successful export?
Thank you!
*I used this YAML format for the classification model:
```yaml
path: ../datasets/compare_data
train: train
val: test
test: test

# Classes
nc: 2  # number of classes
names: ['good', 'bad']  # class names
```
### Additional
_No response_ | closed | 2025-03-18T04:44:36Z | 2025-03-20T05:13:06Z | https://github.com/ultralytics/ultralytics/issues/19752 | [
"question",
"classify",
"exports"
] | amitis94 | 12 |
qubvel-org/segmentation_models.pytorch | computer-vision | 386 | Default Activation Function | When defining a model, "activation" is not a required argument. When looking through the Unet model.py file I noticed the activation is set to None. When "activation" is not defined, what is the default activation function? | closed | 2021-04-21T19:58:13Z | 2021-04-26T18:58:07Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/386 | [] | kangakid | 2