| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
microsoft/nni | machine-learning | 5,685 | Removing redundant string format in the final experiment log | ### This is a small but very simple request.
In the final experiment summary JSON generated through the NNI WebUI, there are some fields that were originally dictionaries that have been reformatted into strings. This is a small but annoying detail and probably easy to fix.
Most notably, this happens for values in the entry 'finalMetricData', which contains the default metric for the trial. When more than just the default metric is being tracked, however, for example when a dictionary of metrics is added at each intermediate and final metric recording, the value of the 'finalMetricData' field may look something like this:
`'"{\\"train_loss\\": 1.2782151699066162, \\"test_loss\\": 0.9486784338951111, \\"default\\": 0.5564953684806824}"'`
when it should simply be
```
{'train_loss': '1.2782151699066162',
'test_loss': '0.9486784338951111',
'default': '0.5564953684806824'}
```
I've reformatted it with these two simple lines:
```
keys_values = log['trialMessage'][0]['finalMetricData'][0]['data'].replace('"', '').replace(': ', '').replace(', ', '').strip('{}').split('\\')
reformatted = {k: v for k, v in zip(keys_values[1::2], keys_values[2::2])}
```
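For what it's worth, a more robust workaround — assuming the field really is a dict that has been JSON-encoded twice, as the repr above suggests — is to decode it twice instead of string-munging:

```python
import json

# Hypothetical stand-in for log['trialMessage'][0]['finalMetricData'][0]['data'],
# i.e. a dict that was passed through json.dumps twice (assumption).
raw = json.dumps(json.dumps(
    {"train_loss": 1.2782151699066162,
     "test_loss": 0.9486784338951111,
     "default": 0.5564953684806824}))

inner = json.loads(raw)      # first decode: the stringified dict
metrics = json.loads(inner)  # second decode: a real dict with float values
print(metrics["default"])    # -> 0.5564953684806824
```

Unlike the replace-chain above, this keeps the metric values as floats.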
It would be quite nice and save unnecessary reprocessing if this could just be a regular JSON dictionary and not a stringified dictionary :)
### Reproducing this:
After downloading the experiment summary as a JSON file, the following code reproduces the above behavior (if the trial includes a multitude of metrics collected in a dict, as opposed to just the default metric being recorded):
```
with open('path_to_experiment_json') as f:
log = json.load(f)
print(log['trialMessage'][0]['finalMetricData'][0]['data'])
> '"{\\"train_loss\\": 1.2782151699066162, \\"test_loss\\": 0.9486784338951111, \\"default\\": 0.5564953684806824}"'
```
A similar thing goes for the `hyperParameters` field in each trial message, which is also a stringified dictionary.
```
log['trialMessage'][0]['hyperParameters']
> ['{"parameter_id":0,"parameter_source":"algorithm","parameters":{"batch_size":64,"seed":2,"steps":5000,"n_batches":1000,"linear_out1":512,"linear_out2":128,"conv2d_ks":2,"conv2d_out_channels":1},"parameter_index":0}']
``` | open | 2023-09-26T12:19:49Z | 2023-09-26T12:28:22Z | https://github.com/microsoft/nni/issues/5685 | [] | olive004 | 0 |
jupyter/nbgrader | jupyter | 1,209 | Manual grading: assignment fails to load. It hangs with msg "Loading, Please wait..." |
### Ubuntu 18.04
### `nbgrader --version`
nbgrader version 0.5.5
### `jupyterhub --version` (if used with JupyterHub)
### `jupyter notebook --version`
TLJH v.1
### Expected behavior
When accessing an assignment via manual grading (after autograde has run successfully), the assignment/submission ID should load.
### Actual behavior
The "Loading, Please wait..." message is all that shows.
### Steps to reproduce the behavior
1. As Admin, create an assignment and release it.
2. Log in as test_student, complete and submit the assignment.
3. Log in as Admin and collect the assignment.
4. Autograde the assignment.
5. Go to Manual Grading, click on the assignment, and wait.

That being said, if I go through Manage Students, click on the assignment, then click on the assignment ID and notebook ID, it resolves to the submitted assignment. Just not when I go through Manual Grading.
Note: we are using GitHub OAuth | closed | 2019-08-29T20:49:00Z | 2019-11-02T11:36:43Z | https://github.com/jupyter/nbgrader/issues/1209 | [
"question"
] | alvinhuff | 3 |
horovod/horovod | machine-learning | 3,790 | No NCCL INFO logs visible on training | **Environment:**
1. Framework: (TensorFlow, Keras, PyTorch, MXNet) PyTorch
2. Framework version: 1.13.0
3. Horovod version: 0.26.0
4. MPI version:4.1.4
5. CUDA version: 11.2
6. NCCL version: 2.14
7. Python version: 2.9
8. Spark / PySpark version: NA
9. Ray version: NA
10. OS and version: Ubuntu 20
11. GCC version: 9.4.0
12. CMake version: 3.22.3
**Checklist:**
1. Did you search issues to find if somebody asked this question before? Yes
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if your question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)? Yes
**Bug report:**
When I run horovod command, I don't see any NCCL INFO logs in the training logs. Can you please help with this? Am I missing any env var or am I installing horovod incorrectly?
Command:
```
horovodrun -np 1 --verbose -H <IP>:1 --mpi-args="-x NCCL_IB_DISABLE=1 -x NCCL_SHM_DISABLE=1 -x NCCL_DEBUG=INFO" python horovod_mnist.py --epochs 1
Filtering local host names.
Remote host found:
All hosts are local, finding the interfaces with address 127.0.0.1
Local interface found lo
mpirun --allow-run-as-root --tag-output -np 1 -H <IP>:1 -bind-to none -map-by slot -mca pml ob1 -mca btl ^openib -mca btl_tcp_if_include lo -x NCCL_SOCKET_IFNAME=lo -x BASH_ENV -x CONDA_DEFAULT_ENV -x CONDA_EXE -x CONDA_MKL_INTERFACE_LAYER_BACKUP -x CONDA_PREFIX -x CONDA_PROMPT_MODIFIER -x CONDA_PYTHON_EXE -x CONDA_SHLVL -x DBUS_SESSION_BUS_ADDRESS -x ENV -x FI_EFA_USE_DEVICE_RDMA -x FI_PROVIDER -x GSETTINGS_SCHEMA_DIR -x GSETTINGS_SCHEMA_DIR_CONDA_BACKUP -x HOME -x LANG -x LD_LIBRARY_PATH -x LESSCLOSE -x LESSOPEN -x LOADEDMODULES -x LOGNAME -x LS_COLORS -x MANPATH -x MKL_INTERFACE_LAYER -x MODULEPATH -x MODULEPATH_modshare -x MODULESHOME -x MODULES_CMD -x MOTD_SHOWN -x NCCL_DEBUG -x NCCL_PROTO -x PATH -x PWD -x SHELL -x SHLVL -x SSH_CLIENT -x SSH_CONNECTION -x SSH_TTY -x TERM -x USER -x XDG_DATA_DIRS -x XDG_RUNTIME_DIR -x XDG_SESSION_CLASS -x XDG_SESSION_ID -x XDG_SESSION_TYPE -x _ -x _CE_CONDA -x _CE_M -x NCCL_IB_DISABLE=1 -x NCCL_SHM_DISABLE=1 -x NCCL_DEBUG=INFO python horovod_mnist.py --epochs 1
[1,0]<stdout>:printing num proc None
[1,0]<stdout>:printing comm None
[1,0]<stdout>:INFO
[1,0]<stdout>:None 0
[1,0]<stderr>:/home/ubuntu/horovod_mnist.py:70: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
[1,0]<stderr>: return F.log_softmax(x)
[1,0]<stdout>:Train Epoch: 1 [0/60000 (0%)] Loss: 2.319920
[1,0]<stdout>:Train Epoch: 1 [640/60000 (1%)] Loss: 2.322078
[1,0]<stdout>:Train Epoch: 1 [1280/60000 (2%)] Loss: 2.313202
[1,0]<stdout>:Train Epoch: 1 [1920/60000 (3%)] Loss: 2.290503
[1,0]<stdout>:Train Epoch: 1 [2560/60000 (4%)] Loss: 2.284245
[1,0]<stdout>:Train Epoch: 1 [3200/60000 (5%)] Loss: 2.306246
[1,0]<stdout>:Train Epoch: 1 [3840/60000 (6%)] Loss: 2.284169
[1,0]<stdout>:Train Epoch: 1 [4480/60000 (7%)] Loss: 2.256588
[1,0]<stdout>:Train Epoch: 1 [5120/60000 (9%)] Loss: 2.296885
```
Expected example NCCL INFO logs (not actual):
```
[1,0]<stdout>:ip-172-31-0-63:2120:2135 [0] NCCL INFO ....
[1,0]<stdout>:ip-172-31-0-63:2120:2135 [0] NCCL INFO ....
[1,0]<stdout>:NCCL version 2.14.8+cuda12.1
```
Horovod install command:
```
HOROVOD_CMAKE=/opt/conda/envs/pytorch/bin/cmake \
pip install --no-cache-dir --upgrade --upgrade-strategy only-if-needed "horovod>=0.26.1"
```
| closed | 2022-12-06T00:29:52Z | 2023-03-25T03:25:25Z | https://github.com/horovod/horovod/issues/3790 | [
"question",
"wontfix"
] | jeet4320 | 3 |
ultralytics/yolov5 | deep-learning | 12,571 | package yolov5 use pyinstaller,could not use cuda | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I have 2 computers, A and B.
I packaged the following code on A using PyInstaller. The cmd is `pyinstaller test.py`.
```
import torch
import os
import platform
print(torch.__file__)
def select_device(device='', batch_size=0, newline=True):
# device = None or 'cpu' or 0 or '0' or '0,1,2,3'
# s = f'YOLOv5 🚀 {git_describe() or file_date()} Python-{platform.python_version()} torch-{torch.__version__} '
s = f'YOLOv5 🚀 Python-{platform.python_version()} torch-{torch.__version__} '
device = str(device).strip().lower().replace('cuda:', '').replace('none', '') # to string, 'cuda:0' to '0'
print('s',s)
print('device:',device)
cpu = device == 'cpu'
mps = device == 'mps' # Apple Metal Performance Shaders (MPS)
if cpu or mps:
os.environ['CUDA_VISIBLE_DEVICES'] = '-1' # force torch.cuda.is_available() = False
elif device: # non-cpu device requested
os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable - must be before assert is_available()
print('torch cuda:',torch.cuda.is_available())
print('count:', torch.cuda.device_count())
print('len:', len(device.replace(',', '')))
assert torch.cuda.is_available() and torch.cuda.device_count() >= len(device.replace(',', '')), \
f"Invalid CUDA '--device {device}' requested, use '--device cpu' or pass valid CUDA device(s)"
if not cpu and not mps and torch.cuda.is_available(): # prefer GPU if available
devices = device.split(',') if device else '0' # range(torch.cuda.device_count()) # i.e. 0,1,6,7
n = len(devices) # device count
if n > 1 and batch_size > 0: # check batch_size is divisible by device_count
assert batch_size % n == 0, f'batch-size {batch_size} not multiple of GPU count {n}'
space = ' ' * (len(s) + 1)
for i, d in enumerate(devices):
p = torch.cuda.get_device_properties(i)
s += f"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / (1 << 20):.0f}MiB)\n" # bytes to MB
arg = 'cuda:0'
elif mps and getattr(torch, 'has_mps', False) and torch.backends.mps.is_available(): # prefer MPS if available
s += 'MPS\n'
arg = 'mps'
else: # revert to CPU
s += 'CPU\n'
arg = 'cpu'
if not newline:
s = s.rstrip()
print('ss',s)
return torch.device(arg)
DEVICE = '0'
device = select_device(DEVICE)
print('device:',device)
```
The result of running the generated file on A is:
<img width="540" alt="f5997e1ad76b3baa86ca5068a3cb281" src="https://github.com/ultralytics/yolov5/assets/38728358/bb402e6e-bfbc-49e8-85da-23b3d4600fa7">
But when I move it to computer B, the result is:
<img width="803" alt="09bff37a8c38ea081c74a24997c6d36" src="https://github.com/ultralytics/yolov5/assets/38728358/a71d06b4-a108-4105-8261-6e2a09832d0c">
A and B both have a 3090, and in the base env `torch.cuda.is_available()` is true.
Can anyone help solve this problem? Thank you!
### Additional
_No response_ | closed | 2024-01-03T02:08:57Z | 2024-01-03T07:30:11Z | https://github.com/ultralytics/yolov5/issues/12571 | [
"question"
] | jo-dean | 0 |
CPJKU/madmom | numpy | 434 | Log filtered spectrogram height | I was under the impression that only changing the frame size for a LogarithmicFilteredSpectrogram wouldn't change the height of the spectrogram, but it does. Also with a lower frame size, low frequencies aren't being shown. Anyone know why this happens?
```
frame_size = 8392
fsig_proc = FramedSignalProcessor(frame_size=frame_size, fps=120, hop_size=frame_size // 2, origin='future')
spec_proc = LogarithmicFilteredSpectrogramProcessor(LogarithmicFilterbank, num_bands=32, fmin=1, fmax=8000)
processor = SequentialProcessor([fsig_proc, spec_proc])
comp_spec = processor(Signal(np.array(y), sample_rate=fs)).T
plt.imshow(comp_spec[:, 3500:4500], cmap='magma', interpolation='nearest', aspect='auto', origin='lower')
plt.show()
frame_size = 2048
fsig_proc = FramedSignalProcessor(frame_size=frame_size, fps=120, hop_size=frame_size // 2, origin='future')
spec_proc = LogarithmicFilteredSpectrogramProcessor(LogarithmicFilterbank, num_bands=32, fmin=1, fmax=8000)
processor = SequentialProcessor([fsig_proc, spec_proc])
comp_spec = processor(Signal(np.array(y), sample_rate=fs)).T
plt.imshow(comp_spec[:, 3500:4500], cmap='magma', interpolation='nearest', aspect='auto', origin='lower')
plt.show()
```
Top graph is a spectrogram with frame size 8392, bottom has frame size 2048.

Thank you
| closed | 2019-06-16T22:30:17Z | 2019-07-17T07:13:25Z | https://github.com/CPJKU/madmom/issues/434 | [] | andrewpeng02 | 6 |
tensorpack/tensorpack | tensorflow | 1,003 | What does this Error mean? | Hi, I tried to load an LMDB dataset of ImageNet; it is very large, about 140G, and there are 1281167 instances in the data. But I met this error. It seems to fail to load LMDB files that are too big.
```
Traceback (most recent call last):
File "main.py", line 388, in <module>
main()
File "main.py", line 154, in main
num_workers=args.workers,)
File "/home/jcz/github/pytorch_examples/imagenet/sequential_imagenet_dataloader/imagenet_seq/data.py", line 166, in __init__
ds = td.LMDBData(lmdb_loc, shuffle=False)
File "/home/jcz/github/tensorpack/tensorpack/dataflow/format.py", line 91, in __init__
self._set_keys(keys)
File "/home/jcz/github/tensorpack/tensorpack/dataflow/format.py", line 109, in _set_keys
self.keys = loads(self.keys)
File "/home/jcz/github/tensorpack/tensorpack/utils/serialize.py", line 29, in loads_msgpack
return msgpack.loads(buf, raw=False, max_bin_len=1000000000)
File "/home/jcz/Venv/pytorch/lib/python3.5/site-packages/msgpack_numpy.py", line 214, in unpackb
return _unpackb(packed, **kwargs)
File "msgpack/_unpacker.pyx", line 187, in msgpack._cmsgpack.unpackb
ValueError: 1281167 exceeds max_array_len(131072)
```
Python env: 3.5
How can I fix this error? | closed | 2018-12-08T04:33:42Z | 2018-12-08T08:47:55Z | https://github.com/tensorpack/tensorpack/issues/1003 | [
"upstream issue"
] | Juicechen95 | 3 |
piskvorky/gensim | data-science | 3,422 | Gensim LdaMulticore can't work on cloud function | #### Problem description
I want to use the gensim LDA module on a cloud function, but it times out and shows "/layers/google.python.pip/pip/lib/python3.8/site-packages/past/builtins/misc.py:45: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses".
But the same code worked on Colab (Python 3.8.16) and I didn't find any bug in it. It can print 'LDA1' and 'LDA2', then it times out.
#### Steps/code/corpus to reproduce
1. I have tried different Python versions like 3.10, 3.8, 3.7.
2. Added `import warnings` and `warnings.filterwarnings("ignore", category=DeprecationWarning)`.
3. It works on Colab, where 300 texts take just 10 sec, but I need it to work on a cloud function.
```python
def LDA(corpus, dictionary, NumTopic):
print('LDA1')
time1 = time.time()
print('LDA2')
lda = gensim.models.LdaMulticore(corpus=corpus, id2word=dictionary, num_topics=NumTopic, chunksize=1000, iterations=200, passes=20, per_word_topics=False, random_state=100)
print('LDA3')
corpus_lda = lda[corpus]
print("LDA takes %2.2f seconds." % (time.time() - time1))
return lda, corpus_lda
```
#### Versions
Please provide the output of:
```python
from __future__ import unicode_literals
import base64
import importlib
import re
import os
import sys
import numpy as np
import pandas as pd
import gensim
import gensim.corpora as corpora
from gensim.utils import simple_preprocess
from gensim.models import CoherenceModel
from gensim import corpora, models, similarities
from google.cloud import bigquery
import pandas_gbq
import requests
import tqdm
import json
import pyLDAvis
import pyLDAvis.gensim_models
import matplotlib.pyplot as plt
import logging
import time
```
| closed | 2023-01-06T10:07:11Z | 2023-01-10T08:56:55Z | https://github.com/piskvorky/gensim/issues/3422 | [] | tinac5519 | 2 |
ibis-project/ibis | pandas | 10,236 | bug: Generated SQL for Array Aggregations with Order By doesn't work in BigQuery | Edited to include an end-to-end reproduction and more detail.
### What happened?
Consider
```
CREATE TABLE `my-project.my_dataset.colors` AS (
SELECT 1 AS id, 'red' AS color
UNION ALL
SELECT 2 AS id, 'red' AS color
UNION ALL
SELECT 3 AS id, 'blue' AS color
)
```
and
```
import ibis
con = ibis.bigquery.connect(project_id="my-project", dataset_id="my_dataset")
colors = con.table("colors")
table = colors.group_by("color").aggregate(
ids=colors.id.collect(order_by=colors.id),
)
table.execute()
```
This gives
```
BadRequest: 400 POST https://bigquery.googleapis.com/bigquery/v2/projects/my-project/queries?prettyPrint=false: NULLS LAST not supported with ascending sort order in aggregate functions.
```
If I check the output of `print(table.compile(pretty=True))` I see
```
SELECT
`t0`.`color`,
ARRAY_AGG(`t0`.`id` IGNORE NULLS ORDER BY `t0`.`id` ASC NULLS LAST) AS `ids`
FROM `i-amlg-dev`.`archipelago`.`colors` AS `t0`
GROUP BY
1
```
Looks like `NULLS LAST` isn't allowed. Without `order_by`, there's no `NULLS LAST`.
### What version of ibis are you using?
9.5.0. I'm also using sqlglot 25.20.2
### What backend(s) are you using, if any?
BigQuery
### Relevant log output
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | closed | 2024-09-26T17:34:56Z | 2024-09-27T01:40:13Z | https://github.com/ibis-project/ibis/issues/10236 | [
"bug"
] | yjabri | 2 |
robotframework/robotframework | automation | 4,455 | Standard libraries don't support `pathlib.Path` objects | The OperatingSystem library should also support Python Path objects. Currently, OperatingSystem library keywords which take a path or file path as an argument expect the argument to be a string. Would it be possible to enhance the library keywords to also support Python [Path](https://docs.python.org/3/library/pathlib.html) objects as arguments?
This could be useful when a library keyword returns a Path object and that object is, for example, used in [Get File](https://robotframework.org/robotframework/latest/libraries/OperatingSystem.html#Get%20File). Currently users are required to do:
```robot framework
${path} Library Keyword Return Path Object
${path} Convert To String ${path}
${data} Get File ${path}
```
But it could be handy to just do
```robot framework
${path} Library Keyword Return Path Object
${data} Get File ${path}
```
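For illustration, a library-side fix could normalize the argument with `os.fspath`, which accepts both strings and `Path` objects (a minimal sketch, not the actual OperatingSystem implementation):

```python
import os

def get_file(path, encoding="utf-8"):
    # os.fspath turns a pathlib.Path into a str and leaves str untouched,
    # so the keyword would accept either type transparently
    path = os.fspath(path)
    with open(path, encoding=encoding) as f:
        return f.read()
```

Any object implementing `__fspath__` would work as well.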
| closed | 2022-09-07T19:26:05Z | 2022-09-21T18:12:37Z | https://github.com/robotframework/robotframework/issues/4455 | [
"bug",
"priority: medium",
"beta 2"
] | aaltat | 5 |
Lightning-AI/pytorch-lightning | machine-learning | 19,940 | Custom batch selection for logging | ### Description & Motivation
Need to be able to select the same batch in every logging cycle. For generation pipelines similar to Stable Diffusion, it is very hard to gauge performance over training if we continue to choose random batches.
### Pitch
The user should be able to select the batch to log, and it should stay constant across all logging cycles.
### Alternatives
It's possible to load the data again in train_batch_end() or validation_batch_end(), and call logging.
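As a rough, framework-agnostic sketch of the requested behavior (all names hypothetical, not Lightning's API): pick the monitored samples once and reuse them at every logging cycle:

```python
class FixedBatchLogger:
    """Caches one fixed batch so every logging cycle sees the same inputs."""

    def __init__(self, dataset, indices):
        # indices are chosen once by the user and never change
        self.batch = [dataset[i] for i in indices]

    def on_log(self, step, generate_fn):
        # generate_fn stands in for the model's generation call
        return [(step, generate_fn(sample)) for sample in self.batch]
```

Because `self.batch` never changes, outputs at different steps are directly comparable.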
### Additional context
_No response_
cc @borda | open | 2024-06-04T10:29:40Z | 2024-06-08T11:03:18Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19940 | [
"feature",
"needs triage"
] | bhosalems | 3 |
PhantomInsights/subreddit-analyzer | matplotlib | 7 | Just wondering about the recent Reddit API changes (paywall)... | Has this code been tested lately? | open | 2023-11-14T07:29:16Z | 2023-11-14T14:29:07Z | https://github.com/PhantomInsights/subreddit-analyzer/issues/7 | [] | champlainmarketing | 1 |
qwj/python-proxy | asyncio | 154 | How do i use authentication with my https server | hello i need help recently got this working with python on my Xbox one i wanted to make a proxy for peer2profit but i want it to be like this ussername:pass@IP:RandomPort for it to work with peer2profit help is very much appreciated! | open | 2022-08-17T19:22:35Z | 2022-08-17T19:22:35Z | https://github.com/qwj/python-proxy/issues/154 | [] | PurpleVoidEpic | 0 |
ipython/ipython | data-science | 14,638 | Crash if malformed `os.environ` | # Description
Setting `os.environ` to an invalid data structure crashes IPython. The following is obviously not correct, but while testing something else, I mistyped `[]` instead of `{}`:
```python
$ ipython
Python 3.13.1 (main, Dec 12 2024, 16:35:44) [GCC 11.4.0]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.31.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import os
In [2]: os.environ = [] # <-------------- stupid human typo!
Traceback (most recent call last):
File "/tmp/test-py313/bin/ipython", line 8, in <module>
sys.exit(start_ipython())
~~~~~~~~~~~~~^^
File "/tmp/test-py313/lib/python3.13/site-packages/IPython/__init__.py", line 130, in start_ipython
return launch_new_instance(argv=argv, **kwargs)
File "/tmp/test-py313/lib/python3.13/site-packages/traitlets/config/application.py", line 1075, in launch_instance
app.start()
~~~~~~~~~^^
File "/tmp/test-py313/lib/python3.13/site-packages/IPython/terminal/ipapp.py", line 317, in start
self.shell.mainloop()
~~~~~~~~~~~~~~~~~~~^^
File "/tmp/test-py313/lib/python3.13/site-packages/IPython/terminal/interactiveshell.py", line 926, in mainloop
self.interact()
~~~~~~~~~~~~~^^
File "/tmp/test-py313/lib/python3.13/site-packages/IPython/terminal/interactiveshell.py", line 911, in interact
code = self.prompt_for_code()
File "/tmp/test-py313/lib/python3.13/site-packages/IPython/terminal/interactiveshell.py", line 854, in prompt_for_code
text = self.pt_app.prompt(
default=default,
inputhook=self._inputhook,
**self._extra_prompt_options(),
)
File "/tmp/test-py313/lib/python3.13/site-packages/prompt_toolkit/shortcuts/prompt.py", line 1031, in prompt
if self._output is None and is_dumb_terminal():
~~~~~~~~~~~~~~~~^^
File "/tmp/test-py313/lib/python3.13/site-packages/prompt_toolkit/utils.py", line 325, in is_dumb_terminal
return is_dumb_terminal(os.environ.get("TERM", ""))
^^^^^^^^^^^^^^
AttributeError: 'list' object has no attribute 'get'
If you suspect this is an IPython 8.31.0 bug, please report it at:
https://github.com/ipython/ipython/issues
or send an email to the mailing list at ipython-dev@python.org
You can print a more detailed traceback right now with "%tb", or use "%debug"
to interactively debug it.
Extra-detailed tracebacks for bug-reporting purposes can be enabled via:
%config Application.verbose_crash=True
$ echo $?
1
```
When doing the same in the standard Python REPL, Python 3.13 crashes for a slightly different reason, but Python 3.10 through Python 3.12 handle it gracefully:
```python
$ python3.12
Python 3.12.8 (main, Dec 12 2024, 16:33:58) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.environ = []
>>>
$ python3.13
Python 3.13.1 (main, Dec 12 2024, 16:35:44) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.environ = []
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File ".../.local/Python3.13.1/lib/python3.13/_pyrepl/__main__.py", line 6, in <module>
__pyrepl_interactive_console()
File ".../.local/Python3.13.1/lib/python3.13/_pyrepl/main.py", line 59, in interactive_console
run_multiline_interactive_console(console)
File ".../local/Python3.13.1/lib/python3.13/_pyrepl/simple_interact.py", line 151, in run_multiline_interactive_console
statement = multiline_input(more_lines, ps1, ps2)
File ".../local/Python3.13.1/lib/python3.13/_pyrepl/readline.py", line 389, in multiline_input
return reader.readline()
File ".../local/Python3.13.1/lib/python3.13/_pyrepl/reader.py", line 795, in readline
self.prepare()
File ".../local/Python3.13.1/lib/python3.13/_pyrepl/historical_reader.py", line 302, in prepare
super().prepare()
File ".../local/Python3.13.1/lib/python3.13/_pyrepl/reader.py", line 635, in prepare
self.console.prepare()
File ".../local/Python3.13.1/lib/python3.13/_pyrepl/unix_console.py", line 349, in prepare
self.height, self.width = self.getheightwidth()
File ".../local/Python3.13.1/lib/python3.13/_pyrepl/unix_console.py", line 451, in getheightwidth
return int(os.environ["LINES"]), int(os.environ["COLUMNS"])
TypeError: list indices must be integers or slices, not str
$ echo $?
1
```
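As an aside for anyone experimenting with this: a safer way to mutate `os.environ` temporarily is `unittest.mock.patch.dict`, which restores the real mapping afterwards (standard-library pattern, unrelated to the crash itself):

```python
import os
from unittest.mock import patch

before = dict(os.environ)
with patch.dict(os.environ, {"TERM": "dumb"}, clear=True):
    # inside the block only the patched variables are visible
    assert list(os.environ) == ["TERM"]
# on exit the original environment is restored intact
assert dict(os.environ) == before
```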
I am fully aware that "one shouldn't do this at all." For example, use `os.environ.clear()`, `os.environb` is linked/a view of `os.environ`, etc. So, I'd understand if this ticket was just closed — this is clearly a low priority bug. However, a crash is a crash, which is why I reported it at all. | open | 2025-01-03T14:47:27Z | 2025-01-07T10:56:49Z | https://github.com/ipython/ipython/issues/14638 | [] | khk-globus | 2 |
quantmind/pulsar | asyncio | 235 | Remove wait from test classes | No longer needed, one can do an async assertion with the following syntax
``` python
self.assertEqual(await async_func(), 'foo')
```
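For reference, the modern standard library supports this directly via `unittest.IsolatedAsyncioTestCase` (Python 3.8+); `async_func` below is a placeholder:

```python
import asyncio
import unittest

async def async_func():
    await asyncio.sleep(0)  # stands in for real async work
    return "foo"

class ExampleTest(unittest.IsolatedAsyncioTestCase):
    async def test_async_assertion(self):
        self.assertEqual(await async_func(), "foo")

if __name__ == "__main__":
    unittest.main()
```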
| closed | 2016-07-28T08:50:04Z | 2016-10-10T15:25:15Z | https://github.com/quantmind/pulsar/issues/235 | [
"test",
"enhancement"
] | lsbardel | 0 |
alteryx/featuretools | scikit-learn | 2,457 | `NumWords` returns wrong answer when text with multiple spaces is passed in | ```
import pandas as pd
from featuretools.primitives import NumWords

NumWords().get_function()(pd.Series(["hello world"]))
```
Returns 4. Adding another space would return 5.
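A minimal sketch of the mechanism — assuming the counter splits on a literal space — compared with the no-argument `str.split()`, which collapses consecutive whitespace:

```python
def num_words_buggy(text):
    return len(text.split(" "))  # "hello  world" -> ['hello', '', 'world']

def num_words_fixed(text):
    return len(text.split())     # no-arg split drops empty tokens

print(num_words_buggy("hello  world"))  # -> 3
print(num_words_fixed("hello  world"))  # -> 2
```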
The issue is with how the number of words is counted. Consecutive spaces should be collapsed into one. | closed | 2023-01-19T22:02:47Z | 2023-02-09T18:30:43Z | https://github.com/alteryx/featuretools/issues/2457 | [
"bug"
] | sbadithe | 0 |
gradio-app/gradio | machine-learning | 10,699 | Unable to change user interface for all users without restarting | ### Describe the bug
Let's assume one user creates something through a Gradio interface that we want to make available to all other users. A simple way is to add this new user content to a user interface element such as a Dropdown box. Unfortunately, every change made to the user interface while the application is running impacts only the current user. Is there a way to force reloading the demo so that all the elements are updated?
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
no error
```
### Severity
I can work around it | closed | 2025-02-28T18:01:10Z | 2025-03-05T13:33:35Z | https://github.com/gradio-app/gradio/issues/10699 | [
"bug"
] | deepbeepmeep | 2 |
jupyter-incubator/sparkmagic | jupyter | 66 | wait for state doesn't return immediately if state is final | When state goes into a final state, like error, wait for state should immediately return.
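A minimal sketch of the intended behavior (names hypothetical, not sparkmagic's actual API): polling should stop as soon as any final state is observed, not only the desired one:

```python
FINAL_STATES = {"success", "error", "dead"}

def wait_for_state(get_state, desired, max_polls=100):
    for _ in range(max_polls):
        state = get_state()
        # return immediately on the desired state OR any final state
        if state == desired or state in FINAL_STATES:
            return state
    raise TimeoutError("state never became final")

states = iter(["starting", "error"])
print(wait_for_state(lambda: next(states), desired="idle"))  # -> error
```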
| closed | 2015-12-12T05:31:22Z | 2015-12-19T09:01:23Z | https://github.com/jupyter-incubator/sparkmagic/issues/66 | [
"kind:bug"
] | aggFTW | 2 |
ray-project/ray | data-science | 51,261 | [core][gpu-objects] CollectiveExecutor | ### Description
`CollectiveExecutor` is responsible for executing collective calls in order specified by the driver. This issue relies on #51260.
### Use case
_No response_ | open | 2025-03-11T18:36:03Z | 2025-03-11T22:06:09Z | https://github.com/ray-project/ray/issues/51261 | [
"enhancement",
"P0",
"core",
"gpu-objects"
] | kevin85421 | 0 |
xlwings/xlwings | automation | 1,885 | Google Sheets/Excel on the web bug: formats "1" etc as date | closed | 2022-04-01T08:07:21Z | 2022-04-01T09:19:12Z | https://github.com/xlwings/xlwings/issues/1885 | [
"bug"
] | fzumstein | 0 | |
Lightning-AI/pytorch-lightning | deep-learning | 19,799 | parsing issue with `save_last` parameter of `ModelCheckpoint` | ### Bug description
Cannot pass a boolean to the `save_last` parameter of the `ModelCheckpoint` callback using `LightningCLI`.
All parameters work fine except for `save_last`. I think `jsonargparse` is having trouble with the validation of the annotation of `save_last` which is currently `Optional[Literal[True, False, 'link']]`.
Ideally, this should work like any other boolean flag, e.g. `--my_model_checkpoint.verbose=false`.
I have already forked the project and proposed a solution together with tests in the relevant directory. I am ready to submit a PR if you think this might be useful.
### What version are you seeing the problem on?
master
### How to reproduce the bug
```python
import inspect
import jsonargparse
from lightning.pytorch.callbacks import ModelCheckpoint
val = 'true'
annot = inspect.signature(ModelCheckpoint).parameters["save_last"].annotation
parser = jsonargparse.ArgumentParser()
parser.add_argument("--a", type=annot)
args = parser.parse_args(["--a", val])
```
### Error messages and logs
```
error: Parser key "a":
Does not validate against any of the Union subtypes
Subtypes: (typing.Literal[True, False, 'link'], <class 'NoneType'>)
Errors:
- Expected a typing.Literal[True, False, 'link']
- Expected a <class 'NoneType'>. Got value: True
Given value type: <class 'str'>
Given value: true
```
### Environment
<details>
<summary>Current environment</summary>
```
* CUDA:
- GPU: None
- available: False
- version: None
* Lightning:
- lightning-utilities: 0.11.2
- torch: 2.2.2+cpu
- torchmetrics: 1.2.1
- torchvision: 0.17.2+cpu
* Packages:
- aiohttp: 3.9.5
- aiosignal: 1.3.1
- antlr4-python3-runtime: 4.9.3
- attrs: 23.2.0
- certifi: 2024.2.2
- cfgv: 3.4.0
- charset-normalizer: 3.3.2
- cloudpickle: 2.2.1
- distlib: 0.3.8
- filelock: 3.13.4
- frozenlist: 1.4.1
- fsspec: 2023.4.0
- identify: 2.5.36
- idna: 3.7
- iniconfig: 2.0.0
- jinja2: 3.1.2
- jsonargparse: 4.28.0
- lightning-utilities: 0.11.2
- markupsafe: 2.1.3
- mpmath: 1.3.0
- multidict: 6.0.5
- networkx: 3.2.1
- nodeenv: 1.8.0
- numpy: 1.26.3
- omegaconf: 2.3.0
- packaging: 23.1
- pillow: 10.2.0
- pip: 24.0
- platformdirs: 4.2.0
- pluggy: 1.5.0
- pre-commit: 3.7.0
- pytest: 7.4.0
- pyyaml: 6.0.1
- requests: 2.31.0
- setuptools: 68.2.2
- sympy: 1.12
- torch: 2.2.2+cpu
- torchmetrics: 1.2.1
- torchvision: 0.17.2+cpu
- tqdm: 4.66.2
- typing-extensions: 4.8.0
- urllib3: 2.2.1
- virtualenv: 20.25.3
- wheel: 0.41.2
- yarl: 1.9.4
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.11.7
- release: 5.15.0-102-generic
- version: #112~20.04.1-Ubuntu SMP Thu Mar 14 14:28:24 UTC 2024
```
</details>
### More info
The solution is to change the typing annotation of the `save_last` parameter in the constructor of `ModelCheckpoint`.
I have made a draft PR and added a test to check that the bug is fixed <a href="https://github.com/Lightning-AI/pytorch-lightning/pull/19808">here</a> fore reference.
> The tests are passing:

cc @carmocca @awaelchli | closed | 2024-04-22T15:30:35Z | 2024-06-06T00:25:28Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19799 | [
"bug",
"callback: model checkpoint",
"ver: 2.2.x"
] | mariovas3 | 0 |
autokey/autokey | automation | 951 | cyrillic symbols not shown - space added instead | ### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
Xorg
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Bug
### Choose one or more terms that describe this issue:
- [ ] autokey triggers
- [ ] autokey-gtk
- [X] autokey-qt
- [ ] beta
- [X] bug
- [ ] critical
- [ ] development
- [ ] documentation
- [ ] enhancement
- [ ] installation/configuration
- [ ] phrase expansion
- [ ] scripting
- [ ] technical debt
- [ ] user interface
### Other terms that describe this issue if not provided above:
_No response_
### Which Linux distribution did you use?
ubuntu 22
### Which AutoKey GUI did you use?
Qt
### Which AutoKey version did you use?
0.95.10
### How did you install AutoKey?
apt
### Can you briefly describe the issue?
cyrillic symbols not shown when posting a phrase, only ANSI-symbols shown
### Can the issue be reproduced?
Always
### What are the steps to reproduce the issue?
1. make phrase on cyrillic
2. make hotkey
3. use
### What should have happened?
cyrillic phrase shown
### What actually happened?
spaces instead of cyrillic symbols
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
_No response_ | open | 2024-07-19T21:03:20Z | 2024-07-21T08:03:58Z | https://github.com/autokey/autokey/issues/951 | [
"duplicate",
"wontfix",
"phrase expansion",
"user support"
] | AlexeyRoza | 1 |
miguelgrinberg/flasky | flask | 434 | Passing data from form to send to API sandbox | Hello Sir,
It's always an honor to send you a message when I have an issue. I have been building a RESTful app with Flask following your guidelines, although I am stuck. I cannot understand why my code cannot pass the data from the form to an external API. If I type the values for the different keys in the JSON body it sends fine, but if I try obtaining the data from the form it does not. Please check my code.
Thank you in advance
```
@app.route('/payrequest/<int:idf>', methods=['GET', 'POST'])
@login_required
def payrequest(idf):
    if current_user.is_authenticated:
        error = None
        form = PayForm(request.form)
        if form.validate_on_submit():
            amount = form.get_json('Amount')
            mobile = form.get_json('Mobile')
            # timestamp = datetime.now()
            external_id = form.get_json('External_ID')
            note = form.get_json('Note')
            message = form.get_json('Message')
            currency = form.get_json('Currency')
            userid = Collect.query.filter_by(client_id=idf).first_or_404()
            collect_id = userid.user_id
            apikey = userid.api_key
            client = Collection({
                "COLLECTION_USER_ID": collect_id,
                "COLLECTION_API_SECRET": apikey,
                "COLLECTION_PRIMARY_KEY": '1e5ceced566042c4b01a97e3400cedb1',
            })
            try:
                resp = client.requestToPay(
                    mobile='{%s}'.format(mobile), amount='{%s}'.format(amount), external_id='{%s}'.format(external_id), payee_note='{%s}'.fomat(note), payer_message='{%s}'.format(message), currency='{%s}'.format(currency)
                )
                return jsonify(resp)
            except Exception as e:
                raise e
        error = 'Wrong data'
        return render_template('collection.html', error=error, form=form)
```
This is my template
```html
<p>Fill the form below to submit Payment as disbursement</p>
<div class="container">
<form action="{{ url_for('payrequest', idf=current_user.id)}}">
<div class="row">
<div class="col-25">
{{ form.Mobile.label }}
</div>
<div class="col-75">
{{form.Mobile(placeholder = "e.g 256781234567")}}
</div>
</div>
<div class="row">
<div class="col-25">
{{ form.Amount.label }}
</div>
<div class="col-75">
{{form.Amount(placeholder = "e.g 1000")}}
</div>
</div>
<div class="row">
<div class="col-25">
{{ form.External_ID.label }}
</div>
<div class="col-75">
{{form.External_ID(placeholder = "e.g 12345")}}
</div>
</div>
<div class="row">
<div class="col-25">
{{ form.Note.label }}
</div>
<div class="col-75">
{{form.Note(height=80, placeholder="Write Something")}}
</div>
</div>
<div class="row">
<div class="col-25">
{{ form.Message.label }}
</div>
<div class="col-75">
{{form.Note(height=80, placeholder="Write Something")}}
</div>
</div>
<div class="row">
<div class="col-25">
{{ form.Currency.label }}
</div>
<div class="col-75">
<select id="country" name="currency">
<option value="eur">EUR</option>
</select>
</div>
</div>
</div>
<div class="row">
<input type="submit" value="Submit">
</div>
</form>
</div>
``` | closed | 2019-08-16T14:47:09Z | 2019-08-19T14:10:42Z | https://github.com/miguelgrinberg/flasky/issues/434 | [
"question"
] | OwiyeD | 10 |
vitalik/django-ninja | rest-api | 1,393 | [BUG] Servers widget appears with empty select box when "ninja" app added to INSTALLED_APPS | I'm adding `'ninja'` to `INSTALLED_APPS` in settings in order to use Django-hosted static files rather than rely on the CDN. When doing so, I find an unpopulated select form element pop up for Swagger server configuration, like below:

Presumably the `<div class="opblock-section operation-servers">` is being accidentally rendered in templates when it's static vs. loaded from the CDN - just a wildly unfounded/uneducated guess as to what might be happening. I will see if I can dedicate some time to fixing this, but I'm opening up a bug for the broader audience in case this is tracked in another issue (I was unable to find anything) or someone has an easy fix.
**Versions**
Python 3.12.7
Django==5.1.4
django-ninja==1.3.0
pydantic==2.10.4
| open | 2025-01-17T21:32:45Z | 2025-01-17T21:33:29Z | https://github.com/vitalik/django-ninja/issues/1393 | [] | angstwad | 0 |
deepspeedai/DeepSpeed | machine-learning | 5,553 | Failed to install Fused_adam op on CPU | Hello, I am struggling to install the pre-built `fused_adam` op of DeepSpeed.
I have found nothing that solves my problem.
Here is the situation:
```
DS_BUILD_FUSED_ADAM=1 pip install deepspeed
ds_report
```
still shows the same results.
How can I successfully install and use `fused_adam`?
```
[2024-05-21 11:34:41,285] [WARNING] [real_accelerator.py:162:get_accelerator] Setting accelerator to CPU. If you have GPU or other accelerator, we were unable to detect it.
[2024-05-21 11:34:41,287] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cpu (auto detect)
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
deepspeed_not_implemented [NO] ....... [OKAY]
deepspeed_ccl_comm ..... [NO] ....... [OKAY]
deepspeed_shm_comm ..... [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['TORCH_INSTALL_PATH']
torch version .................... 2.1.2+cu121
deepspeed install path ........... ['DEEPSPEED_INSTALL_PATH']
deepspeed info ................... 0.14.2+cu118torch2.0, unknown, unknown
deepspeed wheel compiled w. ...... torch 2.0
shared memory (/dev/shm) size .... 125.67 GB
```
Thanks for reading my question!
P.S. It is not only `fused_adam`; none of the op builds work:
```
DS_BUILD_OPS=1 pip install deepspeed
``` | closed | 2024-05-21T02:39:40Z | 2024-05-28T16:24:35Z | https://github.com/deepspeedai/DeepSpeed/issues/5553 | [
"build"
] | daehuikim | 9 |
scikit-optimize/scikit-optimize | scikit-learn | 920 | ImportError: cannot import name 'Log10' from 'skopt.space.transformers' | Hi all,
I have tried to run the sklearn_examples
https://github.com/HunterMcGushion/hyperparameter_hunter/blob/master/examples/sklearn_examples/classification.py
I got the error below:
ImportError: cannot import name 'Log10' from 'skopt.space.transformers' (/root/miniconda3/envs/psi4/lib/python3.7/site-packages/skopt/space/transformers.py) | open | 2020-07-02T12:12:06Z | 2020-07-02T12:14:06Z | https://github.com/scikit-optimize/scikit-optimize/issues/920 | [] | chrinide | 1 |
strawberry-graphql/strawberry | fastapi | 2,914 | First-Class support for `@stream`/`@defer` in strawberry | ## First-Class support for `@stream`/`@defer` in strawberry
This issue is going to collect all necessary steps for an awesome stream and defer devX in Strawberry 🍓
First steps collected today together in discovery session with @patrick91 @bellini666
#### ToDos for an initial support:
- [ ] Add support for `[async/sync]` generators in return types
- [ ] Make sure `GraphQLDeferDirective`, `GraphQLStreamDirective`, are in the GQL-Core schema
Flag in Schema(query=Query, config={enable_stream_defer: False}) - default true
- [ ] Add incremental delivery support to all the views
- [ ] FastAPI integration -> Maybe in the Async Base View?
- [ ] Explore Sync View Integration
### long term goals
_incomplete list of problems / design improvement potential of the current raw implementation_
#### Problem: streaming / n+1 -> first-level dataloaders are no longer working as every instance is resolved 1:1
#### Possible solutions
- dig deeper into https://github.com/robrichard/defer-stream-wg/discussions/40
- custom query execution plan engine
- Add @streamable directive to schema fields automatically if field is streamable, including custom validation rule
Some playground code: https://gist.github.com/erikwrede/993e1fc174ee75b11c491210e4a9136b | open | 2023-07-02T21:32:42Z | 2025-03-20T15:56:16Z | https://github.com/strawberry-graphql/strawberry/issues/2914 | [] | erikwrede | 9 |
lucidrains/vit-pytorch | computer-vision | 54 | Problem with ResNet | Hi! I would like to train ViT using the distiller on a dataset with grayscale images, but I am having problems with the ResNet, since it expects inputs with 3 channels and my images have only 1. Do you have any suggestions? Thanks!
This is the error:
`RuntimeError: Given groups=1, weight of size [64, 3, 7, 7], expected input[64, 1, 224, 224] to have 3 channels, but got 1 channels instead` | closed | 2020-12-28T14:07:25Z | 2021-01-02T09:44:28Z | https://github.com/lucidrains/vit-pytorch/issues/54 | [] | doglab753 | 2 |
ray-project/ray | machine-learning | 51,593 | [cgraph] Support function nodes | ### Description
Currently Ray compiled graphs only support actor method nodes. There is a [TODO](https://github.com/ray-project/ray/blob/master/python/ray/dag/compiled_dag_node.py#L1167-L1170) in the source code to add support for non-actor tasks, but I haven't seen a related issue. Many of the docs I've read on compiled graphs refer to running in the background thread of actors so I'm sure there is some complexity involved with supporting nodes unrelated to actors.
### Use case
One of our primary use cases for Ray involves executing a task graph on some interval. For example, every minute we take some inputs, run them through a task graph of hundreds of nodes (5-10 distinct functions), and extract the output. These graphs often pass large Arrow tables between tasks and the overall memory usage can be 50-100GB. Because of a long-standing, critical [bug](https://github.com/ray-project/ray/issues/47920) in the original ray.dag interface, we are forced to reconstruct the task graph every minute rather than reusing it. For this application, we run a Ray cluster on a single large server to minimize communication overhead.
Compiled graphs have been on our radar for a long time as a much more efficient way to run this application. Both the reduced scheduling overhead, and better passing of large objects between nodes are very appealing, as execution latency is a large concern of ours.
For these task graphs, we don't really care which worker/CPU the individual tasks run on, just that they run as soon as their dependencies are available. In order to fit this use case into an actor-method compiled graph, I suspect we would need to create some generic "runner" actor for every available CPU and simply distribute tasks in a round-robin fashion for each generation in the DAG. My concern here is that we may introduce artificial bottlenecks because we don't know a priori how long each task will take to run. Supporting function nodes where the task is run on the first available worker would be ideal for us. | open | 2025-03-21T14:29:40Z | 2025-03-21T16:07:48Z | https://github.com/ray-project/ray/issues/51593 | [
"enhancement",
"triage",
"core",
"compiled-graphs"
] | b-phi | 0 |
lundberg/respx | pytest | 208 | Regression of ASGI mocking after `respx == 0.17.1` and `httpx == 0.19.0` | We run code like this:
```python
import httpx
import pytest
import respx
from respx.mocks import HTTPCoreMocker
@pytest.mark.asyncio
async def test_asgi():
try:
HTTPCoreMocker.add_targets(
"httpx._transports.asgi.ASGITransport",
"httpx._transports.wsgi.WSGITransport",
)
async with respx.mock:
async with httpx.AsyncClient(app="fake-asgi") as client:
url = "https://foo.bar/"
jzon = {"status": "ok"}
headers = {"X-Foo": "bar"}
request = respx.get(url) % dict(
status_code=202, headers=headers, json=jzon
)
response = await client.get(url)
assert request.called is True
assert response.status_code == 202
assert response.headers == httpx.Headers(
{
"Content-Type": "application/json",
"Content-Length": "16",
**headers,
}
)
assert response.json() == {"status": "ok"}
finally:
HTTPCoreMocker.remove_targets(
"httpx._transports.asgi.ASGITransport",
"httpx._transports.wsgi.WSGITransport",
)
```
It works perfectly fine using `respx == 0.17.1` and `httpx == 0.19.0`, as can be seen by:
```bash
$> pytest test.py
=================== test session starts ===================
platform linux -- Python 3.10.5, pytest-7.1.2, pluggy-1.0.0
plugins: asyncio-0.19.0, respx-0.17.1, anyio-3.6.1
asyncio: mode=strict
collected 1 item
test.py . [100%]
==================== 1 passed in 0.00s ====================
```
However, upgrading to `httpx == 0.20.0` yields:
```python
.../lib/python3.10/site-packages/respx/mocks.py:179: in amock
request = cls.to_httpx_request(**kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'respx.mocks.HTTPCoreMocker'>, kwargs = {'request': <Request('GET', 'https://foo.bar/')>}
@classmethod
def to_httpx_request(cls, **kwargs):
"""
Create a `HTTPX` request from transport request args.
"""
request = (
> kwargs["method"],
kwargs["url"],
kwargs.get("headers"),
kwargs.get("stream"),
)
E KeyError: 'method'
.../lib/python3.10/site-packages/respx/mocks.py:288: KeyError
```
Meanwhile, trying to upgrade to `respx == 0.18.0` yields a package resolution error:
```console
SolverProblemError
Because respx (0.18.0) depends on httpx (>=0.20.0)
and respxbug depends on httpx (^0.19.0), respx is forbidden.
So, because respxbug depends on respx (0.18.0), version solving failed.
```
Upgrading both yields an error like the one for just upgrading `respx`.
Running with `respx == 0.19.2` and `httpx == 0.23.0` (the newest version at the time of writing) yields:
```python
.../lib/python3.10/site-packages/respx/mocks.py:186: in amock
request = cls.to_httpx_request(**kwargs))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'respx.mocks.HTTPCoreMocker'>, kwargs = {'request': <Request('GET', 'https://foo.bar/')>}, request = <Request('GET', 'https://foo.bar/')>
@classmethod
def to_httpx_request(cls, **kwargs):
"""
Create a `HTTPX` request from transport request arg.
"""
request = kwargs["request"]
raw_url = (
request.url.scheme,
request.url.host,
request.url.port,
> request.url.target,
)
E AttributeError: 'URL' object has no attribute 'target'
.../lib/python3.10/site-packages/respx/mocks.py:302: AttributeError
```
I have attempted to debug the issue, and it seems there's a difference in the incoming `request.url` object type during ASGI and non-ASGI mocking (see the example below).
With ASGI mocking:
```python
(Pdb) pp request.url
URL('https://foo.bar/')
(Pdb) pp type(request.url)
<class 'httpx.URL'>
```
Without ASGI mocking (i.e. ordinary mocking):
```python
(Pdb) pp request.url
URL(scheme=b'https', host=b'foo.bar', port=None, target=b'/')
(Pdb) type(request.url)
<class 'httpcore.URL'>
```
The non-ASGI mocking case was produced with the exact same code as above, but by changing:
```python
async with httpx.AsyncClient(app="fake-asgi") as client:
```
to:
```python
async with httpx.AsyncClient() as client:
```
I have not studied the flow of `respx`'s mocking well enough to know how to implement an appropriate fix, but the problem seems to arise from the fact that `.target` is not a documented member of `httpx`'s [`URL` object](https://github.com/encode/httpx/blob/0.23.0/httpx/_urls.py); however, it does have code similar to the above, [here](https://github.com/encode/httpx/blob/0.23.0/httpx/_urls.py#L325-L338), which utilizes `.raw_path` instead of `.target`.
So maybe the code ought to branch on the incoming `URL` type and provide `url.raw` if the type is a `httpx.URL`, unless it can be passed on directly?
Finally it seems like the test validating that `ASGI` mocking works was removed in this commit: 47c0b935176e081a3aa7886aed8b8ed31c0e9457, while the core functionality was added in this PR: #131 (along with said test).
As the code stands now it only seems to test that the `"httpx._transports.asgi.ASGITransport"` can be added and removed using `HTTPCoreMocker.add_targets` and `HTTPCoreMocker.remove_targets` not that mocking with the `ASGITransport` actually works. | closed | 2022-07-26T09:11:51Z | 2022-08-25T20:25:16Z | https://github.com/lundberg/respx/issues/208 | [] | Skeen | 5 |
Nemo2011/bilibili-api | api | 527 | [Feature request] Add a follow event for live rooms | As the title says, the existing EVENTs don't seem to include a follow event, and `room.on("ALL")` doesn't seem to catch the event of someone following the live room either.
How can I get follow events? | closed | 2023-10-13T07:49:19Z | 2023-10-14T14:36:37Z | https://github.com/Nemo2011/bilibili-api/issues/527 | [
"need",
"solved"
] | iAilu | 1 |
automl/auto-sklearn | scikit-learn | 1,474 | Refactor the concept of public and private test set | Auto-sklearn was built for the AutoML challenges where there were public and private test sets by the names "validation set" and "test set". Since these challenges we have substantially refactored Auto-sklearn and no longer allow the user to pass in the public test set aka "validation set". This leads to confusion as "validation set" is really a misnomer, as inside a machine learning system it usually denotes the dataset on which a model is selected. Therefore we should drop all references that still exist to this old concept of a "validation set", as already being done by @eddiebergman in #1434, which should only be in the `evaluation` submodule`. | closed | 2022-05-13T08:01:49Z | 2022-06-17T12:26:13Z | https://github.com/automl/auto-sklearn/issues/1474 | [
"maintenance"
] | mfeurer | 0 |
predict-idlab/plotly-resampler | data-visualization | 175 | About plotly graph exporting to HTML file | In my case, I want to export my plotly graph into HTML for later use, and I just run the following code:
`fig.write_html('fig1.html', include_plotlyjs='cdn')`
The good news is I do get a complete figure with basic interactive features (zoom, hover, etc.), but it loses the advanced features like **auto resampling** on zoom in/out that work in Jupyter.
I've been looking everywhere for similar problems, but I don't think I found any direct, detailed explanation or possible solution for this.
Maybe it's an inherent limitation of static HTML; hopefully someone can help me understand or even solve this.
| closed | 2023-03-03T06:55:27Z | 2023-03-03T12:49:47Z | https://github.com/predict-idlab/plotly-resampler/issues/175 | [
"documentation"
] | buptycz | 4 |
mwaskom/seaborn | matplotlib | 3,660 | JointPlot allow changing positions of the marginal histograms | Hi,
Amazing package! I was wondering if it is possible to add a change that would allow joint plots to specify the positioning of the marginal histogram? Kind of like in https://stackoverflow.com/questions/55111214/change-position-of-marginal-axis-in-seaborn-jointplot
| closed | 2024-03-21T19:40:08Z | 2025-01-26T15:43:23Z | https://github.com/mwaskom/seaborn/issues/3660 | [] | adam2392 | 1 |
paperless-ngx/paperless-ngx | machine-learning | 7,436 | [BUG] During the file upload on web site the status window is not closing | ### Description
While uploading a file via the web upload window, the status window about the process gets stuck on the screen. The process itself runs well and finishes, but the status stays stuck in this position and only disappears after reloading the web page.

### Steps to reproduce
1. upload a file on dashboard Upload new document window
2. check the status window
### Webserver logs
```bash
[2024-08-10 15:53:26,307] [INFO] [paperless.consumer] Document 2024-08-10 Projektek Tantusz Invest bemutatkozas consumption finished
[2024-08-10 15:53:26,311] [INFO] [paperless.tasks] ConsumeTaskPlugin completed with: Success. New document id 4 created
[2024-08-10 16:05:00,107] [DEBUG] [paperless.classifier] Gathering data from database...
[2024-08-10 16:05:00,122] [DEBUG] [paperless.classifier] 2 documents, 1 tag(s), 1 correspondent(s), 1 document type(s). 0 storage path(es)
[2024-08-10 16:05:00,122] [DEBUG] [paperless.classifier] Vectorizing data...
[2024-08-10 16:05:00,133] [DEBUG] [paperless.classifier] Training tags classifier...
[2024-08-10 16:05:00,171] [DEBUG] [paperless.classifier] Training correspondent classifier...
[2024-08-10 16:05:00,208] [DEBUG] [paperless.classifier] Training document type classifier...
[2024-08-10 16:05:00,240] [DEBUG] [paperless.classifier] There are no storage paths. Not training storage path classifier.
[2024-08-10 16:05:00,243] [INFO] [paperless.tasks] Saving updated classifier model to /usr/src/paperless/data/classification_model.pickle...
```
### Browser logs
_No response_
### Paperless-ngx version
2.11.2
### Host OS
Ubuntu 20.04
### Installation method
Docker - official image
### System status
_No response_
### Browser
Arc
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-08-10T14:06:11Z | 2024-09-10T03:06:42Z | https://github.com/paperless-ngx/paperless-ngx/issues/7436 | [
"bug",
"frontend"
] | elpi07 | 6 |
allenai/allennlp | nlp | 5,602 | Unable to import Predictor from allennlp.predictors.predictor @ Apple Silicon Mac |
## Checklist
<!-- To check an item on the list replace [ ] with [x]. -->
- [X] I have verified that the issue exists against the `main` branch of AllenNLP.
- [X] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/main/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [X] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [X] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [X] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/main/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/main) to find out if the bug was already fixed in the main branch.
- [X] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [X] I have included in the "Related issues or possible duplicates" section beloew all related issues and possible duplicate issues (If there are none, check this box anyway).
- [X] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [X] I have included in the "Environment" section below the output of `pip freeze`.
- [X] I have included in the "Steps to reproduce" section below a minimally reproducible example.
## Description
ImportError: cannot import name 'ProcessGroup' from 'torch.distributed' (/Users/xxxxxxx/miniconda3/envs/allennlp_env/lib/python3.8/site-packages/torch/distributed/__init__.py)
<details>
<summary><b>Python traceback:</b></summary>
<p>----> 7 from allennlp.predictors.predictor import Predictor
8 import allennlp_models.tagging
10 predictor = Predictor.from_path("https://storage.googleapis.com/allennlp-public-models/ner-elmo.2021-02-12.tar.gz")
File ~/miniconda3/envs/allennlp_env/lib/python3.8/site-packages/allennlp/predictors/__init__.py:9, in <module>
1 """
2 A `Predictor` is
3 a wrapper for an AllenNLP `Model`
(...)
7 a `Predictor` that wraps it.
8 """
----> 9 from allennlp.predictors.predictor import Predictor
10 from allennlp.predictors.sentence_tagger import SentenceTaggerPredictor
11 from allennlp.predictors.text_classifier import TextClassifierPredictor
File ~/miniconda3/envs/allennlp_env/lib/python3.8/site-packages/allennlp/predictors/predictor.py:18, in <module>
16 from allennlp.data import DatasetReader, Instance
17 from allennlp.data.batch import Batch
---> 18 from allennlp.models import Model
19 from allennlp.models.archival import Archive, load_archive
20 from allennlp.nn import util
File ~/miniconda3/envs/allennlp_env/lib/python3.8/site-packages/allennlp/models/__init__.py:6, in <module>
1 """
2 These submodules contain the classes for AllenNLP models,
3 all of which are subclasses of `Model`.
4 """
----> 6 from allennlp.models.model import Model
7 from allennlp.models.archival import archive_model, load_archive, Archive
8 from allennlp.models.basic_classifier import BasicClassifier
File ~/miniconda3/envs/allennlp_env/lib/python3.8/site-packages/allennlp/models/model.py:22, in <module>
20 from allennlp.nn import util
21 from allennlp.nn.module import Module
---> 22 from allennlp.nn.parallel import DdpAccelerator
23 from allennlp.nn.regularizers import RegularizerApplicator
25 logger = logging.getLogger(__name__)
File ~/miniconda3/envs/allennlp_env/lib/python3.8/site-packages/allennlp/nn/parallel/__init__.py:7, in <module>
1 from allennlp.nn.parallel.sharded_module_mixin import ShardedModuleMixin
2 from allennlp.nn.parallel.ddp_accelerator import (
3 DdpAccelerator,
4 DdpWrappedModel,
5 TorchDdpAccelerator,
6 )
----> 7 from allennlp.nn.parallel.fairscale_fsdp_accelerator import (
8 FairScaleFsdpAccelerator,
9 FairScaleFsdpWrappedModel,
10 )
File ~/miniconda3/envs/allennlp_env/lib/python3.8/site-packages/allennlp/nn/parallel/fairscale_fsdp_accelerator.py:4, in <module>
1 import os
2 from typing import Tuple, Union, Optional, TYPE_CHECKING, List, Any, Dict, Sequence
----> 4 from fairscale.nn import FullyShardedDataParallel as FS_FSDP
5 from fairscale.nn.wrap import enable_wrap, wrap
6 from fairscale.nn.misc import FlattenParamsWrapper
File ~/miniconda3/envs/allennlp_env/lib/python3.8/site-packages/fairscale/__init__.py:12, in <module>
1 # Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
2 #
3 # This source code is licensed under the BSD license found in the
(...)
7 # Import most common subpackages
8 ################################################################################
10 from typing import List
---> 12 from . import nn
13 from .version import __version_tuple__
15 __version__ = ".".join([str(x) for x in __version_tuple__])
File ~/miniconda3/envs/allennlp_env/lib/python3.8/site-packages/fairscale/nn/__init__.py:9, in <module>
6 from typing import List
8 from .checkpoint import checkpoint_wrapper
----> 9 from .data_parallel import FullyShardedDataParallel, ShardedDataParallel
10 from .misc import FlattenParamsWrapper
11 from .moe import MOELayer, Top2Gate
File ~/miniconda3/envs/allennlp_env/lib/python3.8/site-packages/fairscale/nn/data_parallel/__init__.py:8, in <module>
1 # Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
2 #
3 # This source code is licensed under the BSD license found in the
4 # LICENSE file in the root directory of this source tree.
6 from typing import List
----> 8 from .fully_sharded_data_parallel import FullyShardedDataParallel, OffloadConfig, TrainingState, auto_wrap_bn
9 from .sharded_ddp import ShardedDataParallel
11 __all__: List[str] = []
File ~/miniconda3/envs/allennlp_env/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py:38, in <module>
36 from torch.autograd import Variable
37 import torch.distributed as dist
---> 38 from torch.distributed import ProcessGroup
39 import torch.nn as nn
40 import torch.nn.functional as F
ImportError: cannot import name 'ProcessGroup' from 'torch.distributed' (/Users/simontse/miniconda3/envs/allennlp_env/lib/python3.8/site-packages/torch/distributed/__init__.py)
</p>
</details>
## Related issues or possible duplicates
- None
## Environment
OS: macOS Monterey ver12.3
Python version: 3.8.12
<details>
<summary><b>Output of <code>pip freeze</code>:</b></summary>
<p>
```
aiohttp @ file:///Users/runner/miniforge3/conda-bld/aiohttp_1637087375815/work
aiosignal @ file:///home/conda/feedstock_root/build_artifacts/aiosignal_1636093929600/work
alembic @ file:///home/conda/feedstock_root/build_artifacts/alembic_1647367721563/work
allennlp @ file:///Users/runner/miniforge3/conda-bld/allennlp_1644183594868/work
allennlp-models @ file:///Users/runner/miniforge3/conda-bld/allennlp-models_1644193900256/work
allennlp-optuna @ file:///home/conda/feedstock_root/build_artifacts/allennlp-optuna_1637742042512/work
allennlp-semparse @ file:///Users/runner/miniforge3/conda-bld/allennlp-semparse_1644289991832/work
allennlp-server @ file:///Users/runner/miniforge3/conda-bld/allennlp-server_1644211316665/work
appnope @ file:///Users/runner/miniforge3/conda-bld/appnope_1635819899231/work
argon2-cffi @ file:///home/conda/feedstock_root/build_artifacts/argon2-cffi_1640817743617/work
argon2-cffi-bindings @ file:///Users/runner/miniforge3/conda-bld/argon2-cffi-bindings_1640885719931/work
asttokens @ file:///home/conda/feedstock_root/build_artifacts/asttokens_1618968359944/work
async-timeout @ file:///home/conda/feedstock_root/build_artifacts/async-timeout_1640026696943/work
attrs @ file:///home/conda/feedstock_root/build_artifacts/attrs_1640799537051/work
autopage @ file:///home/conda/feedstock_root/build_artifacts/autopage_1642834347039/work
backcall @ file:///home/conda/feedstock_root/build_artifacts/backcall_1592338393461/work
backports.csv==1.0.7
backports.functools-lru-cache @ file:///home/conda/feedstock_root/build_artifacts/backports.functools_lru_cache_1618230623929/work
base58 @ file:///home/conda/feedstock_root/build_artifacts/base58_1635724186165/work
beautifulsoup4 @ file:///home/conda/feedstock_root/build_artifacts/beautifulsoup4_1631087867185/work
bleach @ file:///home/conda/feedstock_root/build_artifacts/bleach_1629908509068/work
blis @ file:///Users/runner/miniforge3/conda-bld/cython-blis_1645002545531/work
boto3 @ file:///home/conda/feedstock_root/build_artifacts/boto3_1647500875427/work
botocore @ file:///home/conda/feedstock_root/build_artifacts/botocore_1647478405006/work
brotlipy @ file:///Users/runner/miniforge3/conda-bld/brotlipy_1636012322014/work
cached-path @ file:///Users/runner/miniforge3/conda-bld/cached_path_1646363831472/work
cached-property @ file:///home/conda/feedstock_root/build_artifacts/cached_property_1615209429212/work
cachetools @ file:///home/conda/feedstock_root/build_artifacts/cachetools_1640686991047/work
catalogue @ file:///Users/runner/miniforge3/conda-bld/catalogue_1638867620917/work
certifi==2021.10.8
cffi @ file:///Users/runner/miniforge3/conda-bld/cffi_1636046166270/work
chardet @ file:///Users/runner/miniforge3/conda-bld/chardet_1635814976389/work
charset-normalizer @ file:///home/conda/feedstock_root/build_artifacts/charset-normalizer_1644853463426/work
checklist @ file:///home/conda/feedstock_root/build_artifacts/checklist_1626355406222/work
cheroot @ file:///home/conda/feedstock_root/build_artifacts/cheroot_1641335003286/work
CherryPy @ file:///Users/runner/miniforge3/conda-bld/cherrypy_1643789202126/work
click==7.1.2
cliff @ file:///home/conda/feedstock_root/build_artifacts/cliff_1645470499396/work
cmaes @ file:///home/conda/feedstock_root/build_artifacts/cmaes_1613785714721/work
cmd2 @ file:///Users/runner/miniforge3/conda-bld/cmd2_1644164388812/work
colorama @ file:///home/conda/feedstock_root/build_artifacts/colorama_1602866480661/work
colorlog==6.6.0
configparser @ file:///home/conda/feedstock_root/build_artifacts/configparser_1638573090458/work
conllu @ file:///home/conda/feedstock_root/build_artifacts/conllu_1629103029427/work
cryptography @ file:///Users/runner/miniforge3/conda-bld/cryptography_1639699343700/work
cycler @ file:///home/conda/feedstock_root/build_artifacts/cycler_1635519461629/work
cymem @ file:///Users/runner/miniforge3/conda-bld/cymem_1636053450611/work
dataclasses @ file:///home/conda/feedstock_root/build_artifacts/dataclasses_1628958434797/work
datasets @ file:///home/conda/feedstock_root/build_artifacts/datasets_1647378244171/work
debugpy @ file:///Users/runner/miniforge3/conda-bld/debugpy_1636043378262/work
decorator @ file:///home/conda/feedstock_root/build_artifacts/decorator_1641555617451/work
defusedxml @ file:///home/conda/feedstock_root/build_artifacts/defusedxml_1615232257335/work
dill @ file:///home/conda/feedstock_root/build_artifacts/dill_1623610058511/work
docker-pycreds==0.4.0
editdistance @ file:///Users/runner/miniforge3/conda-bld/editdistance_1636224171992/work
entrypoints @ file:///home/conda/feedstock_root/build_artifacts/entrypoints_1643888246732/work
executing @ file:///home/conda/feedstock_root/build_artifacts/executing_1646044401614/work
fairscale @ file:///Users/runner/miniforge3/conda-bld/fairscale_1644056098310/work
feedparser @ file:///home/conda/feedstock_root/build_artifacts/feedparser_1624371037925/work
filelock @ file:///home/conda/feedstock_root/build_artifacts/filelock_1641470428964/work
Flask @ file:///home/conda/feedstock_root/build_artifacts/flask_1644887820294/work
Flask-Cors @ file:///home/conda/feedstock_root/build_artifacts/flask-cors_1622383494577/work
flit_core @ file:///home/conda/feedstock_root/build_artifacts/flit-core_1645629044586/work/source/flit_core
fonttools @ file:///Users/runner/miniforge3/conda-bld/fonttools_1646922287558/work
frozenlist @ file:///Users/runner/miniforge3/conda-bld/frozenlist_1643222648494/work
fsspec @ file:///home/conda/feedstock_root/build_artifacts/fsspec_1645566723803/work
ftfy @ file:///home/conda/feedstock_root/build_artifacts/ftfy_1647200718722/work
future @ file:///Users/runner/miniforge3/conda-bld/future_1635819654955/work
gevent @ file:///Users/runner/miniforge3/conda-bld/gevent_1639267879746/work
gitdb @ file:///home/conda/feedstock_root/build_artifacts/gitdb_1635085722655/work
GitPython @ file:///home/conda/feedstock_root/build_artifacts/gitpython_1645531658201/work
google-api-core @ file:///home/conda/feedstock_root/build_artifacts/google-api-core-split_1644877687275/work
google-auth @ file:///home/conda/feedstock_root/build_artifacts/google-auth_1644503159426/work
google-cloud-core @ file:///home/conda/feedstock_root/build_artifacts/google-cloud-core_1642607638110/work
google-cloud-storage @ file:///home/conda/feedstock_root/build_artifacts/google-cloud-storage_1644876711050/work
google-crc32c @ file:///Users/runner/miniforge3/conda-bld/google-crc32c_1636020985630/work
google-resumable-media @ file:///home/conda/feedstock_root/build_artifacts/google-resumable-media_1635195007097/work
googleapis-common-protos @ file:///Users/runner/miniforge3/conda-bld/googleapis-common-protos-feedstock_1647557644942/work
greenlet @ file:///Users/runner/miniforge3/conda-bld/greenlet_1635837112391/work
grpcio @ file:///Users/runner/miniforge3/conda-bld/grpcio_1645230394268/work
h5py @ file:///Users/runner/miniforge3/conda-bld/h5py_1637964070496/work
huggingface-hub @ file:///home/conda/feedstock_root/build_artifacts/huggingface_hub_1641988520462/work
idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1642433548627/work
importlib-metadata @ file:///Users/runner/miniforge3/conda-bld/importlib-metadata_1647210434605/work
importlib-resources @ file:///home/conda/feedstock_root/build_artifacts/importlib_resources_1635615662634/work
ipykernel @ file:///Users/runner/miniforge3/conda-bld/ipykernel_1647271194783/work/dist/ipykernel-6.9.2-py3-none-any.whl
ipython @ file:///Users/runner/miniforge3/conda-bld/ipython_1646324756182/work
ipython-genutils==0.2.0
ipywidgets @ file:///home/conda/feedstock_root/build_artifacts/ipywidgets_1647456365981/work
iso-639 @ file:///home/conda/feedstock_root/build_artifacts/iso-639_1626355260505/work
itsdangerous @ file:///home/conda/feedstock_root/build_artifacts/itsdangerous_1646849180040/work
jaraco.classes @ file:///home/conda/feedstock_root/build_artifacts/jaraco.classes_1619298134024/work
jaraco.collections @ file:///home/conda/feedstock_root/build_artifacts/jaraco.collections_1641469018844/work
jaraco.context @ file:///home/conda/feedstock_root/build_artifacts/jaraco.context_1646657544740/work
jaraco.functools @ file:///home/conda/feedstock_root/build_artifacts/jaraco.functools_1641071972629/work
jaraco.text @ file:///Users/runner/miniforge3/conda-bld/jaraco.text_1646672054767/work
jedi @ file:///Users/runner/miniforge3/conda-bld/jedi_1637175378067/work
Jinja2 @ file:///home/conda/feedstock_root/build_artifacts/jinja2_1636510082894/work
jmespath @ file:///home/conda/feedstock_root/build_artifacts/jmespath_1647416812516/work
joblib @ file:///home/conda/feedstock_root/build_artifacts/joblib_1633637554808/work
jsonnet @ file:///Users/runner/miniforge3/conda-bld/jsonnet_1644086886800/work
jsonschema @ file:///home/conda/feedstock_root/build_artifacts/jsonschema-meta_1642000296051/work
jupyter @ file:///Users/runner/miniforge3/conda-bld/jupyter_1637233406932/work
jupyter-client @ file:///home/conda/feedstock_root/build_artifacts/jupyter_client_1642858610849/work
jupyter-console @ file:///home/conda/feedstock_root/build_artifacts/jupyter_console_1646669715337/work
jupyter-core @ file:///Users/runner/miniforge3/conda-bld/jupyter_core_1645024702831/work
jupyterlab-pygments @ file:///home/conda/feedstock_root/build_artifacts/jupyterlab_pygments_1601375948261/work
jupyterlab-widgets @ file:///home/conda/feedstock_root/build_artifacts/jupyterlab_widgets_1647446862951/work
kiwisolver @ file:///Users/runner/miniforge3/conda-bld/kiwisolver_1647351843120/work
langcodes @ file:///home/conda/feedstock_root/build_artifacts/langcodes_1636741340529/work
lmdb @ file:///Users/runner/miniforge3/conda-bld/python-lmdb_1644189524859/work
lxml @ file:///Users/runner/miniforge3/conda-bld/lxml_1645124877356/work
Mako @ file:///home/conda/feedstock_root/build_artifacts/mako_1646959760357/work
MarkupSafe @ file:///Users/runner/miniforge3/conda-bld/markupsafe_1647364592705/work
matplotlib @ file:///Users/runner/miniforge3/conda-bld/matplotlib-suite_1639359034653/work
matplotlib-inline @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-inline_1631080358261/work
mistune @ file:///Users/runner/miniforge3/conda-bld/mistune_1635845001896/work
more-itertools @ file:///home/conda/feedstock_root/build_artifacts/more-itertools_1637732846337/work
multidict @ file:///Users/runner/miniforge3/conda-bld/multidict_1643055408799/work
multiprocess @ file:///Users/runner/miniforge3/conda-bld/multiprocess_1635876223414/work
munch==2.5.0
munkres==1.1.4
murmurhash @ file:///Users/runner/miniforge3/conda-bld/murmurhash_1636019736584/work
mysqlclient @ file:///Users/runner/miniforge3/conda-bld/mysqlclient_1639024612770/work
nbclient @ file:///home/conda/feedstock_root/build_artifacts/nbclient_1646999386773/work
nbconvert @ file:///Users/runner/miniforge3/conda-bld/nbconvert_1647040578847/work
nbformat @ file:///home/conda/feedstock_root/build_artifacts/nbformat_1646951096007/work
nest-asyncio @ file:///home/conda/feedstock_root/build_artifacts/nest-asyncio_1638419302549/work
networkx @ file:///home/conda/feedstock_root/build_artifacts/networkx_1646497321764/work
nltk @ file:///home/conda/feedstock_root/build_artifacts/nltk_1633955089856/work
notebook @ file:///home/conda/feedstock_root/build_artifacts/notebook_1647377876077/work
numpy @ file:///Users/runner/miniforge3/conda-bld/numpy_1646717493174/work
optuna @ file:///home/conda/feedstock_root/build_artifacts/optuna_1633337702246/work
packaging @ file:///home/conda/feedstock_root/build_artifacts/packaging_1637239678211/work
pandas==1.4.1
pandocfilters @ file:///home/conda/feedstock_root/build_artifacts/pandocfilters_1631603243851/work
parsimonious==0.8.1
parso @ file:///home/conda/feedstock_root/build_artifacts/parso_1638334955874/work
pathtools==0.1.2
pathy @ file:///home/conda/feedstock_root/build_artifacts/pathy_1635227809952/work
Pattern @ file:///home/conda/feedstock_root/build_artifacts/pattern_1588682046427/work
pbr @ file:///home/conda/feedstock_root/build_artifacts/pbr_1644225887826/work
pdfminer.six @ file:///home/conda/feedstock_root/build_artifacts/pdfminer.six_1634369700996/work
pexpect @ file:///home/conda/feedstock_root/build_artifacts/pexpect_1602535608087/work
pickleshare @ file:///home/conda/feedstock_root/build_artifacts/pickleshare_1602536217715/work
Pillow @ file:///Users/runner/miniforge3/conda-bld/pillow_1645323199912/work
portend @ file:///home/conda/feedstock_root/build_artifacts/portend_1614149298816/work
preshed @ file:///Users/runner/miniforge3/conda-bld/preshed_1636077826592/work
prettytable @ file:///home/conda/feedstock_root/build_artifacts/prettytable_1646674402880/work
prometheus-client @ file:///home/conda/feedstock_root/build_artifacts/prometheus_client_1643395600215/work
promise @ file:///Users/runner/miniforge3/conda-bld/promise_1644078593510/work
prompt-toolkit @ file:///home/conda/feedstock_root/build_artifacts/prompt-toolkit_1644497866770/work
protobuf==3.19.4
psutil @ file:///Users/runner/miniforge3/conda-bld/psutil_1640887165910/work
ptyprocess @ file:///home/conda/feedstock_root/build_artifacts/ptyprocess_1609419310487/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
pure-eval @ file:///home/conda/feedstock_root/build_artifacts/pure_eval_1642875951954/work
py-rouge @ file:///home/conda/feedstock_root/build_artifacts/py-rouge_1611141518214/work
pyarrow==7.0.0
pyasn1==0.4.8
pyasn1-modules==0.2.7
pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1636257122734/work
pydantic @ file:///Users/runner/miniforge3/conda-bld/pydantic_1636021450594/work
Pygments @ file:///home/conda/feedstock_root/build_artifacts/pygments_1641580240686/work
pyOpenSSL @ file:///home/conda/feedstock_root/build_artifacts/pyopenssl_1643496850550/work
pyparsing @ file:///home/conda/feedstock_root/build_artifacts/pyparsing_1642753572664/work
pyperclip @ file:///home/conda/feedstock_root/build_artifacts/pyperclip_1622337600177/work
pyrsistent @ file:///Users/runner/miniforge3/conda-bld/pyrsistent_1642534457653/work
PySocks @ file:///Users/runner/miniforge3/conda-bld/pysocks_1635862741516/work
python-dateutil @ file:///home/conda/feedstock_root/build_artifacts/python-dateutil_1626286286081/work
python-docx @ file:///home/conda/feedstock_root/build_artifacts/python-docx_1622121039670/work
pytz @ file:///home/conda/feedstock_root/build_artifacts/pytz_1633452062248/work
pyu2f @ file:///home/conda/feedstock_root/build_artifacts/pyu2f_1604248910016/work
PyYAML @ file:///Users/runner/miniforge3/conda-bld/pyyaml_1636139931219/work
pyzmq @ file:///Users/runner/miniforge3/conda-bld/pyzmq_1635877710701/work
regex @ file:///Users/runner/miniforge3/conda-bld/regex_1647399893594/work
repoze.lru==0.7
requests @ file:///home/conda/feedstock_root/build_artifacts/requests_1641580202195/work
responses @ file:///home/conda/feedstock_root/build_artifacts/responses_1643839609465/work
Routes @ file:///home/conda/feedstock_root/build_artifacts/routes_1604230639459/work
rsa @ file:///home/conda/feedstock_root/build_artifacts/rsa_1637781155505/work
s3transfer @ file:///home/conda/feedstock_root/build_artifacts/s3transfer_1645745825648/work
sacremoses @ file:///home/conda/feedstock_root/build_artifacts/sacremoses_1647361442468/work
scikit-learn @ file:///Users/runner/miniforge3/conda-bld/scikit-learn_1640464197451/work
scipy @ file:///Users/runner/miniforge3/conda-bld/scipy_1644357749526/work
Send2Trash @ file:///home/conda/feedstock_root/build_artifacts/send2trash_1628511208346/work
sentencepiece==0.1.96
sentry-sdk @ file:///home/conda/feedstock_root/build_artifacts/sentry-sdk_1646753508615/work
setproctitle @ file:///Users/runner/miniforge3/conda-bld/setproctitle_1635864315706/work
sgmllib3k @ file:///home/conda/feedstock_root/build_artifacts/sgmllib3k_1600021450347/work
shellingham @ file:///home/conda/feedstock_root/build_artifacts/shellingham_1612179560728/work
shortuuid @ file:///Users/runner/miniforge3/conda-bld/shortuuid_1644056369207/work
simplejson @ file:///Users/runner/miniforge3/conda-bld/simplejson_1637177164728/work
six @ file:///home/conda/feedstock_root/build_artifacts/six_1620240208055/work
smart-open @ file:///home/conda/feedstock_root/build_artifacts/smart_open_1630238320325/work
smmap @ file:///home/conda/feedstock_root/build_artifacts/smmap_1611376390914/work
soupsieve @ file:///home/conda/feedstock_root/build_artifacts/soupsieve_1638550740809/work
spacy @ file:///Users/runner/miniforge3/conda-bld/spacy_1644658008331/work
spacy-legacy @ file:///home/conda/feedstock_root/build_artifacts/spacy-legacy_1645713043381/work
spacy-loggers @ file:///home/conda/feedstock_root/build_artifacts/spacy-loggers_1634809367310/work
SQLAlchemy @ file:///Users/runner/miniforge3/conda-bld/sqlalchemy_1646615332152/work
sqlparse @ file:///home/conda/feedstock_root/build_artifacts/sqlparse_1631317292236/work
srsly @ file:///Users/runner/miniforge3/conda-bld/srsly_1638879679486/work
stack-data @ file:///home/conda/feedstock_root/build_artifacts/stack_data_1644872665635/work
stevedore @ file:///Users/runner/miniforge3/conda-bld/stevedore_1639542690056/work
tempora @ file:///home/conda/feedstock_root/build_artifacts/tempora_1643789373873/work
tensorboardX @ file:///home/conda/feedstock_root/build_artifacts/tensorboardx_1645578792360/work
termcolor==1.1.0
terminado @ file:///Users/runner/miniforge3/conda-bld/terminado_1646684761186/work
testpath @ file:///home/conda/feedstock_root/build_artifacts/testpath_1645693042223/work
thinc @ file:///Users/runner/miniforge3/conda-bld/thinc_1647363132934/work
thinc-apple-ops==0.0.5
threadpoolctl @ file:///home/conda/feedstock_root/build_artifacts/threadpoolctl_1643647933166/work
tokenizers @ file:///Users/runner/miniforge3/conda-bld/tokenizers_1632285718817/work
torch @ file:///Users/runner/miniforge3/conda-bld/pytorch-recipe_1643987637853/work
torchvision @ file:///Users/runner/miniforge3/conda-bld/torchvision-split_1644148546672/work
tornado @ file:///Users/runner/miniforge3/conda-bld/tornado_1635819723809/work
tqdm @ file:///home/conda/feedstock_root/build_artifacts/tqdm_1646031859244/work
traitlets @ file:///home/conda/feedstock_root/build_artifacts/traitlets_1635260543454/work
transformers @ file:///home/conda/feedstock_root/build_artifacts/transformers_1640232623006/work
typer @ file:///home/conda/feedstock_root/build_artifacts/typer_1630326630489/work
typing_extensions @ file:///home/conda/feedstock_root/build_artifacts/typing_extensions_1644850595256/work
unicodedata2 @ file:///Users/runner/miniforge3/conda-bld/unicodedata2_1640031423081/work
Unidecode @ file:///home/conda/feedstock_root/build_artifacts/unidecode_1646918762405/work
urllib3 @ file:///home/conda/feedstock_root/build_artifacts/urllib3_1647489083693/work
wandb @ file:///home/conda/feedstock_root/build_artifacts/wandb_1646271144123/work
wasabi @ file:///home/conda/feedstock_root/build_artifacts/wasabi_1638865582891/work
wcwidth @ file:///home/conda/feedstock_root/build_artifacts/wcwidth_1600965781394/work
webencodings==0.5.1
Werkzeug @ file:///home/conda/feedstock_root/build_artifacts/werkzeug_1644332431572/work
widgetsnbextension @ file:///Users/runner/miniforge3/conda-bld/widgetsnbextension_1647446968519/work
word2number==1.1
xxhash @ file:///Users/runner/miniforge3/conda-bld/python-xxhash_1646085210894/work
yarl @ file:///Users/runner/miniforge3/conda-bld/yarl_1636047129772/work
yaspin @ file:///home/conda/feedstock_root/build_artifacts/yaspin_1630004424954/work
zc.lockfile==2.0
zipp @ file:///home/conda/feedstock_root/build_artifacts/zipp_1643828507773/work
zope.event @ file:///home/conda/feedstock_root/build_artifacts/zope.event_1600479883063/work
zope.interface @ file:///Users/runner/miniforge3/conda-bld/zope.interface_1635859682970/work
```
</p>
</details>
## Steps to reproduce
1. Create a separate env for using: conda install -c conda-forge python=3.8 allennlp
2. Install additional libraries accordingly: conda install -c conda-forge allennlp-models allennlp-semparse allennlp-server allennlp-optuna
3. Then I also install thing for apple silicon: pip install thinc-apple-ops
<details>
<summary><b>
from allennlp.predictors.predictor import Predictor
import allennlp_models.tagging
predictor = Predictor.from_path("https://storage.googleapis.com/allennlp-public-models/ner-elmo.2021-02-12.tar.gz")</b></summary>
<p>
```
from allennlp.predictors.predictor import Predictor
import allennlp_models.tagging
predictor = Predictor.from_path("https://storage.googleapis.com/allennlp-public-models/ner-elmo.2021-02-12.tar.gz")
```
</p>
</details>
[allennlp.pdf](https://github.com/allenai/allennlp/files/8308431/allennlp.pdf)
| closed | 2022-03-19T02:19:25Z | 2022-08-09T15:00:57Z | https://github.com/allenai/allennlp/issues/5602 | [
"bug"
] | ghostintheshellarise | 7 |
QuivrHQ/quivr | api | 2,703 | [Bug]: non-latin letters in file names are stripped | ### What happened?
Non-ASCII letters are removed from file names.

It's caused by using the file name as the key for storing the upload (https://github.com/QuivrHQ/quivr/blob/main/backend/modules/upload/service/upload_file.py#L81), and upload keys are restricted (https://github.com/supabase/storage/issues/133). Thus Quivr removes the non-ASCII characters due to #1728
I made an attempt to fix it, but I'm not pleased with the result. I think it's worth adding an `original_file_name` column to the `knowledge` table and identifying file uploads by UUID. WDYT?
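A minimal sketch of that direction, assuming uploads are keyed by an opaque UUID while the original name is kept for display. The `make_storage_key` helper and the `original_file_name` column are hypothetical, not existing Quivr code:

```python
import uuid

def make_storage_key(original_file_name: str) -> tuple[str, str]:
    """Return (storage_key, original_file_name).

    The storage key is an opaque UUID, so it is always ASCII-safe for the
    storage backend, while the original (possibly non-latin) file name is
    kept verbatim so it can be stored in a hypothetical `original_file_name`
    column of the `knowledge` table and shown back to the user.
    """
    extension = original_file_name.rsplit(".", 1)[-1] if "." in original_file_name else ""
    storage_key = f"{uuid.uuid4().hex}.{extension}" if extension else uuid.uuid4().hex
    return storage_key, original_file_name

key, display_name = make_storage_key("отчёт.pdf")
```

The UI would then render `display_name` while all storage and retrieval calls use `key`.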
### Relevant log output
_No response_
### Twitter / LinkedIn details
_No response_ | closed | 2024-06-22T20:43:34Z | 2024-09-26T00:24:01Z | https://github.com/QuivrHQ/quivr/issues/2703 | [
"bug",
"Stale",
"area: backend"
] | mkhludnev | 2 |
modin-project/modin | pandas | 7,020 | BUG: Excessive log file generation when using Modin[ray] with Parquet files and DataFrame operations | ### Modin version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest released version of Modin.
- [X] I have confirmed this bug exists on the main branch of Modin. (In order to do this you can follow [this guide](https://modin.readthedocs.io/en/stable/getting_started/installation.html#installing-from-the-github-master-branch).)
### Reproducible Example
```python
import modin.pandas as pd
df = pd.read_parquet('data.parquet')
train = df.query('date>2023')
train_value = train.values
```
### Issue Description
When using Modin[ray] to read a 300GB Parquet file with the `read_parquet` function, followed by minimal query and DataFrame operations (e.g., `dataframe.values`), an excessively large log file of approximately 600GB is generated. This excessive log file size can quickly consume available disk space and potentially cause system instability or crashes.

### Expected Behavior
The `read_parquet` function should read the Parquet file, and the subsequent DataFrame operations should execute without generating an excessively large log file.
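If the disk usage turns out to come from Ray's session directory, one stopgap is to configure Ray explicitly before Modin touches it. This is only a sketch of a possible mitigation: `log_to_driver` and `_temp_dir` are real `ray.init()` parameters, but the path is hypothetical and whether they curb the ~600GB of files in this report is an assumption, not verified:

```python
# Possible mitigation sketch, not a fix: initialize Ray yourself before the
# first Modin operation so its session/log files land on a volume with room.
import ray

ray.init(
    log_to_driver=False,            # stop streaming every worker log line to the driver
    _temp_dir="/mnt/big_disk/ray",  # hypothetical path: put Ray session files on a large disk
)

import modin.pandas as pd
```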
### Error Logs
<details>
```python-traceback
Replace this line with the error backtrace (if applicable).
```
</details>
### Installed Versions
<details>


</details>
| open | 2024-03-07T02:15:59Z | 2024-08-05T17:13:28Z | https://github.com/modin-project/modin/issues/7020 | [
"bug 🦗",
"External"
] | xixibaobei | 26 |
areed1192/interactive-broker-python-api | rest-api | 3 | The requested URL /v1/portal/calendar/events/ was not found on this server. | I am having problem having access to the web api despite that i follow the instructions on https://interactivebrokers.github.io/cpwebapi/ and got `Client login succeeds`. please help. the error message is: `The requested URL /v1/portal/calendar/events/ was not found on this server.` | closed | 2020-04-21T19:59:53Z | 2020-05-01T21:23:38Z | https://github.com/areed1192/interactive-broker-python-api/issues/3 | [] | wuliwei9278 | 7 |
CorentinJ/Real-Time-Voice-Cloning | python | 559 | Generate a speech which still keep the speaker's speaking rate | I've gotten some good results with this project. It is really amazing!
However, as the title asks, is it achievable to keep the speaker's speaking rate during generation?
I know that there's an optimum length of input text (too short: the voice will be stretched out with pauses; too long: the voice will be rushed).
e.g., there are 5 people A, B, C, D, and E. Their speaking rates are different.
#### My input sentences are:
>Please call Stella. Ask her to bring these things with her from the store:
>Six spoons of fresh snow peas, five thick slabs of blue cheese, and maybe a snack for her brother Bob.
>We also need a small plastic snake and a big toy frog for the kids.
>She can scoop these things into three red bags, and we will go meet her Wednesday at the train station.
I expected to get 5 different length of output voice.
However, all of the output utterances are 17-19 seconds.
That's the reason why I'm curious. Is it possible to keep speaker's speaking rate?
If it is, could anyone tell me how to make it or give me a hint?
Should I change the encoder for capturing more speaker's features, or I need to modify on synthesizer?
Thanks in advance for anyone's reply.
| closed | 2020-10-15T15:39:40Z | 2020-10-16T16:00:24Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/559 | [] | CYT823 | 4 |
tflearn/tflearn | tensorflow | 355 | Display predicted classes for a input image with Alexnet | I have trained Alexnet model with my dataset. The dataset has 6 classes. Next I want to use the model for predictions, so that it displays the top 3 classes predicted for a given input image.
For this I am using `model.predict()` method. For a single input image the output of predict() is-
`[[0.38271743059158325, 6.913009656273061e-06, 0.01705058105289936, 0.5627931952476501, 0.002256882842630148, 0.0018134841229766607]`
Reverse sort on the output gives the following:
`[0.5627931952476501, 0.38271743059158325, 0.01705058105289936, 0.002256882842630148, 0.0018134841229766607, 6.913009656273061e-06]`
The original indices corresponding to these are - `[3,0,2,4,5,1]`. Is there some way to map the class indices to their names? Please advise.
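For the question above, a minimal sketch that maps the sorted indices back to class names. The `class_names` list is a hypothetical placeholder for the 6 classes in the dataset:

```python
# Scores returned by model.predict() for one image, copied from above.
probs = [0.38271743059158325, 6.913009656273061e-06, 0.01705058105289936,
         0.5627931952476501, 0.002256882842630148, 0.0018134841229766607]

# Hypothetical label list; replace with the real names of your 6 classes,
# in the same order as the training labels.
class_names = ["cat", "dog", "bird", "car", "tree", "house"]

# Indices sorted by descending score, then mapped to (name, score) pairs.
top3 = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:3]
top3_named = [(class_names[i], probs[i]) for i in top3]
# top3 is [3, 0, 2], matching the reverse sort above.
```

The same pattern extends to any top-k by changing the `[:3]` slice.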
| open | 2016-09-23T10:20:12Z | 2017-07-23T15:31:54Z | https://github.com/tflearn/tflearn/issues/355 | [] | deepali-c | 6 |
mljar/mercury | jupyter | 455 | Add file sorting in output directory view | Please sort files by name in output files view. Right now, files are displayed in random order. | closed | 2024-06-18T12:31:17Z | 2024-07-08T14:38:18Z | https://github.com/mljar/mercury/issues/455 | [
"enhancement"
] | pplonski | 1 |
comfyanonymous/ComfyUI | pytorch | 6,487 | Request to add a shortcut key for "Execute selected node" | ### Feature Idea
Request to add a shortcut key for "Execute selected node". There is currently no shortcut key for it, and it does not support customization. However, this action is used very frequently, so I hope the author can add such a function. Thank you.
### Existing Solutions
_No response_
### Other
_No response_ | open | 2025-01-16T08:37:50Z | 2025-01-17T09:07:58Z | https://github.com/comfyanonymous/ComfyUI/issues/6487 | [
"Feature"
] | 13426447442 | 5 |
gto76/python-cheatsheet | python | 141 | Mario pygame code snippet returns None to 'pressed' disabling controls | In the Mario game code snippet, the run function currently passes None into 'pressed', disabling controls. The revised run function below fixes this. Per the Pygame documentation, get_pressed() isn't the best way to get text entry from the user: https://www.pygame.org/docs/ref/key.html#pygame.key.get_pressed
```
RUNNING = True

def run(screen, images, mario, tiles):
    clock = pg.time.Clock()
    global RUNNING
    current_key = None
    while RUNNING:
        keys = {pg.K_UP: D.n, pg.K_RIGHT: D.e, pg.K_DOWN: D.s, pg.K_LEFT: D.w}
        for event in pg.event.get():
            if event.type == pg.QUIT:
                RUNNING = False
            if event.type == pg.KEYDOWN:
                current_key = event.key
            else:
                current_key = None
        pressed = {keys.get(current_key)}
        update_speed(mario, tiles, pressed)
        update_position(mario, tiles)
        draw(screen, images, mario, tiles, pressed)
        clock.tick(28)

# Old version:
# def run(screen, images, mario, tiles):
#     clock = pg.time.Clock()
#     while all(event.type != pg.QUIT for event in pg.event.get()):
#         keys = {pg.K_UP: D.n, pg.K_RIGHT: D.e, pg.K_DOWN: D.s, pg.K_LEFT: D.w}
#         pressed = {keys.get(ch) for ch, is_prsd in enumerate(pg.key.get_pressed()) if is_prsd}
#         update_speed(mario, tiles, pressed)
#         update_position(mario, tiles)
#         draw(screen, images, mario, tiles, pressed)
#         clock.tick(28)
```
| closed | 2022-12-27T16:13:41Z | 2022-12-27T21:39:59Z | https://github.com/gto76/python-cheatsheet/issues/141 | [] | ccozort | 2 |
JoshuaC215/agent-service-toolkit | streamlit | 31 | Streamlit app is ending with no error | When I run the streamlit app, it is abruptly ending at the following line in the handle_feedback() function.
feedback = st.feedback("stars", key=latest_run_id)
Any idea? | closed | 2024-09-05T22:11:02Z | 2024-09-17T04:24:10Z | https://github.com/JoshuaC215/agent-service-toolkit/issues/31 | [] | ramkipalle | 2 |
zappa/Zappa | flask | 534 | [Migrated] Make an HTTPS request to endpoint instead of touch after deploy/update | Originally from: https://github.com/Miserlou/Zappa/issues/1414 by [mcrowson](https://github.com/mcrowson)
<!--- Provide a general summary of the issue in the Title above -->
## Context
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 -->
When a zappa project is updated/deployed, zappa by default will "touch" the lambda to warm it up. This is helpful for speeding up initial requests.
Instead of doing a touch, zappa could make a GET request to some endpoint as specified in the zappa_settings.json (defaults to '/'). This would have the same effect as the touch to warm up the lambda, but the added benefit is that zappa could catch 500 errors right away. These 500 errors are currently one of the most common problems for migrating to zappa and often indicate missing or incompatible packages in the project for the lambda environment.
## Expected Behavior
<!--- Tell us what should happen -->
Zappa informs users when their app returns a 500 error after deployment.
## Actual Behavior
Users must visit their app to initiate any errors.
## Possible Fix
Instead of the touch functionality, make a request to a "touch_endpoint" setting, defaults to '/'. Zappa would only alert users if a 500 error is returned.
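A rough sketch of what that could look like. The function names, the warning text, and the `touch_endpoint` handling here are assumptions, not Zappa's actual implementation:

```python
import urllib.request
import urllib.error
from typing import Optional

def warn_for_status(code: int, url: str) -> Optional[str]:
    """Return a warning message for 5xx responses, otherwise None."""
    if code >= 500:
        return (f"Warning! Your application returned a {code} on {url}. "
                "Check your logs (e.g. `zappa tail`) for missing packages "
                "or other import-time errors.")
    return None

def check_deploy(url: str, timeout: int = 30) -> int:
    """GET the configured touch endpoint instead of a bare Lambda touch.

    This warms the Lambda just like the current touch, but also surfaces
    5xx errors to the user right after deploy/update.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            code = resp.status
    except urllib.error.HTTPError as e:  # 4xx/5xx responses raise HTTPError
        code = e.code
    message = warn_for_status(code, url)
    if message:
        print(message)
    return code
```

After a deploy, Zappa would call `check_deploy` on the stage URL joined with the `touch_endpoint` setting (default `'/'`).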
| closed | 2021-02-20T12:22:23Z | 2022-08-18T13:12:17Z | https://github.com/zappa/Zappa/issues/534 | [
"enhancement",
"feature-request",
"good-idea"
] | jneves | 2 |
waditu/tushare | pandas | 1,493 | [Feature]请加入特定类型的Exception | 目前见到的异常都是基本Exception类型。建议为不同情景下的Exception提供具体的继承,方便调试和稳定性保障 | open | 2021-01-15T13:17:10Z | 2021-01-15T13:17:10Z | https://github.com/waditu/tushare/issues/1493 | [] | Vargnatt | 0 |
pandas-dev/pandas | python | 60,672 | BUG: read_csv: Columns Silently Forced into Multi Index | ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
data = """13.9,130,200,1000,2,1
12.1,120,200,1001,2,2
0.7,110,200,1000,2,3
"""
df = pd.read_csv(
pd.io.common.StringIO(data),
header=None,
names=["test1","test2","test3"],
sep=",",
encoding="utf-8-sig",
index_col=None
)
print(df.shape)
print(df)
```
### Issue Description
When calling `pd.read_csv` with the `names=...` argument, if a user supplies fewer names than there are columns in the dataset, the names are applied to the final columns of the data and the first ones are silently placed into a MultiIndex.
In my example, a clean CSV with 6 columns is read using `read_csv` with only 3 names provided. The result is that the final 3 columns (3, 4, 5) receive the provided names and the first 3 are made into a MultiIndex. No warning or error is raised.
### Expected Behavior
My understanding is that the expected behavior is that the first columns will receive naming priority and the following ones will be named 'unnamed_column' or something along those lines.
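Until that is decided, a simple workaround is to pad the name list to the full column width before calling `read_csv`. The `unnamed_N` scheme below is just one possible convention, not pandas' own behavior:

```python
import pandas as pd
from io import StringIO

data = """13.9,130,200,1000,2,1
12.1,120,200,1001,2,2
0.7,110,200,1000,2,3
"""

# Pad the name list to the full column count so nothing is silently
# pushed into a MultiIndex.
names = ["test1", "test2", "test3"]
n_cols = len(data.splitlines()[0].split(","))
padded = names + [f"unnamed_{i}" for i in range(len(names), n_cols)]

df = pd.read_csv(StringIO(data), header=None, names=padded, index_col=False)
```

With the padded list, `df.shape` is `(3, 6)` and the first three columns carry the user-supplied names, which matches the expected behavior described above.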
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.11
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : AMD64 Family 25 Model 97 Stepping 2, AuthenticAMD
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.2.3
numpy : 2.2.1
pytz : 2024.1
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : None
sphinx : None
IPython : 8.31.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.5
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : 0.23.0
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| closed | 2025-01-07T18:11:06Z | 2025-01-07T21:56:25Z | https://github.com/pandas-dev/pandas/issues/60672 | [
"Bug",
"IO CSV"
] | cdelisle95 | 2 |
Miserlou/Zappa | flask | 1,963 | Issue with slim handler deploys and lambda aliases for blue-green deploys | Hello!
We're using zappa for our api gateway/lambda deploys. We're using the fantastic slim handler option.
Recently we've moved to using lambda aliasing to attempt to get blue-green style deploys. In this situation, we have the gateway point to an alias that serves the active traffic (blue), while we build the new lambda (green). When static files have been collected and migrations run, we then switch the alias from blue to green.
However, we've hit a problem if there is a cold start during a deploy. The cold-started lambdas in the blue environment (currently serving production traffic) starts running new code (from the green environment). This causes our app to crash because of missing manifest entries and migrations that have not yet run.
We've traced it back to this code: https://github.com/Miserlou/Zappa/blob/60fbb55fffa762a85e79e756f2a1373832d78320/zappa/cli.py#L2376
The current project archive file name is unique for a function and a stage, but it is not unique for a particular deploy. This means that a cold-started lambda will always use the code in this file name regardless of whether it was part of this deploy or not. | open | 2019-11-15T02:16:20Z | 2019-11-15T02:32:32Z | https://github.com/Miserlou/Zappa/issues/1963 | [] | bxjx | 0 |
vaexio/vaex | data-science | 1,884 | [BUG-REPORT] `multiprocessing` flag in `register_function` throws pickle error | It seems that when setting the `multiprocessing` flag to `True` when registering a function, a pickle error is always thrown during invocation.
**Example**
```
import numpy as np
import vaex
df = vaex.from_dict({
"id":list(range(100_000)),
"vals":np.random.rand(100_000)
})
display(df)
@vaex.register_function(multiprocessing=True)
def do_mult(a):
return a*5
df["mult"] = df.vals.do_mult()
df
```
Switching that flag to `False` yields expected behavior
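For context on the kind of serialization constraint that `multiprocessing=True` introduces (a general-Python sketch, not a diagnosis of this particular vaex bug): functions sent to worker processes must be picklable, and pickle can only serialize functions it can reference by an importable name.

```python
import pickle

def module_level(a):
    # Defined at module top level, so pickle can reference it by name
    return a * 5

restored = pickle.loads(pickle.dumps(module_level))
print(restored(3))  # 15

try:
    pickle.dumps(lambda a: a * 5)  # anonymous function: no importable name
    print("lambda pickled")
except Exception as exc:
    print(type(exc).__name__)  # PicklingError
```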
**Software information**
- Vaex version (`import vaex; vaex.__version__)`: `{'vaex-core': '4.7.0.post1', 'vaex-hdf5': '0.11.1'}`
- Vaex was installed via: pip / conda-forge / from source `pip`
- OS: MacOS Monterey (Intel chip)
| open | 2022-02-04T16:25:04Z | 2022-02-09T11:55:29Z | https://github.com/vaexio/vaex/issues/1884 | [
"bug"
] | Ben-Epstein | 0 |
adamerose/PandasGUI | pandas | 156 | Allow for drag-drop and CMD opening of pkl files | In run_with_args.py I modified starting at line 13, and in store.py at line 831.
Works with context menu opening as well.
[pandasgui.zip](https://github.com/adamerose/PandasGUI/files/6792641/pandasgui.zip) | closed | 2021-07-09T16:17:30Z | 2021-07-10T07:03:20Z | https://github.com/adamerose/PandasGUI/issues/156 | [
"enhancement"
] | rjsdotorg | 1 |
InstaPy/InstaPy | automation | 6451 | Get followers returns spam:true |
## Expected Behavior
Unfollow should get all of the followers, then unfollow them
## Current Behavior
When attempting to scrape followers from my own profile, I receive a message in Selenium that says `spam:true`, then I scrape 0 users
## Possible Solution (optional)
## InstaPy configuration
| closed | 2022-01-03T17:06:27Z | 2022-01-03T21:00:05Z | https://github.com/InstaPy/InstaPy/issues/6451 | [] | Killerherts | 1 |
gunthercox/ChatterBot | machine-learning | 2,046 | Issue with time.clock | Got an exception -
```
chatbot = ChatBot('Ron Obvious')
Traceback (most recent call last):
  File "<ipython-input-6-a1741da6c5bd>", line 1, in <module>
    chatbot = ChatBot('Ron Obvious')
  File "C:\Anaconda\lib\site-packages\chatterbot\chatterbot.py", line 34, in __init__
    self.storage = utils.initialize_class(storage_adapter, **kwargs)
  File "C:\Anaconda\lib\site-packages\chatterbot\utils.py", line 54, in initialize_class
    return Class(*args, **kwargs)
  File "C:\Anaconda\lib\site-packages\chatterbot\storage\sql_storage.py", line 22, in __init__
    from sqlalchemy import create_engine
  File "C:\Anaconda\lib\site-packages\sqlalchemy\__init__.py", line 8, in <module>
    from . import util as _util  # noqa
  File "C:\Anaconda\lib\site-packages\sqlalchemy\util\__init__.py", line 14, in <module>
    from ._collections import coerce_generator_arg  # noqa
  File "C:\Anaconda\lib\site-packages\sqlalchemy\util\_collections.py", line 16, in <module>
    from .compat import binary_types
  File "C:\Anaconda\lib\site-packages\sqlalchemy\util\compat.py", line 264, in <module>
    time_func = time.clock
AttributeError: module 'time' has no attribute 'clock'
```
I have changed the code to this as a temporary fix, because here `win32 == True` but `time.clock` no longer exists:

```python
if win32 or jython:
    try:
        time_func = time.clock
    except AttributeError:
        time_func = time.time
else:
    time_func = time.time
```
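A more compact variant of the same fallback (a sketch — `time.clock` was removed in Python 3.8, and `time.perf_counter` is the documented replacement for timing code):

```python
import time

# Use time.clock where it still exists, otherwise fall back to perf_counter
time_func = getattr(time, "clock", time.perf_counter)

start = time_func()
elapsed = time_func() - start
print(elapsed >= 0.0)  # True
```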
| closed | 2020-09-22T06:53:17Z | 2025-02-17T21:31:52Z | https://github.com/gunthercox/ChatterBot/issues/2046 | [] | ayushmodi-038 | 2 |
ultralytics/ultralytics | python | 19,250 | mode.val(save_json=True),COCO API AssertionError: Results do not correspond to current coco set. | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
train.py
```
if __name__ == '__main__':
from ultralytics import YOLO
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
model.train(data="coco8.yaml",device=0,batch=-1)
```
yolo2json.py
```
import os
import json
from PIL import Image
# Set dataset paths
output_dir = "D:\YOLOv11\datasets\coco8" # change to the YOLO-format dataset path
dataset_path = "D:\YOLOv11\datasets\coco8" # change to the desired COCO-format output path
images_path = os.path.join(dataset_path,"images")
labels_path = os.path.join(dataset_path,"labels")
# Category mapping
categories = [
{"id": 0, "name": "person"},
{"id": 1, "name": "bicycle"},
{"id": 2, "name": "car"},
{"id": 3, "name": "motorcycle"},
{"id": 4, "name": "airplane"},
{"id": 5, "name": "bus"},
{"id": 6, "name": "train"},
{"id": 7, "name": "truck"},
{"id": 8, "name": "boat"},
{"id": 9, "name": "traffic light"},
{"id": 10, "name": "fire hydrant"},
{"id": 11, "name": "stop sign"},
{"id": 12, "name": "parking meter"},
{"id": 13, "name": "bench"},
{"id": 14, "name": "bird"},
    {"id": 15, "name": "cat"}, # modified here
{"id": 16, "name": "dog"},
{"id": 17, "name": "horse"},
{"id": 18, "name": "sheep"},
{"id": 19, "name": "cow"},
{"id": 20, "name": "elephant"},
{"id": 21, "name": "bear"},
{"id": 22, "name": "zebra"},
{"id": 23, "name": "giraffe"},
{"id": 24, "name": "backpack"},
{"id": 25, "name": "umbrella"},
{"id": 26, "name": "handbag"},
{"id": 27, "name": "tie"},
{"id": 28, "name": "suitcase"},
{"id": 29, "name": "frisbee"},
{"id": 30, "name": "skis"},
{"id": 31, "name": "snowboard"},
{"id": 32, "name": "sports ball"},
{"id": 33, "name": "kite"},
{"id": 34, "name": "baseball bat"},
{"id": 35, "name": "baseball glove"},
{"id": 36, "name": "skateboard"},
{"id": 37, "name": "surfboard"},
{"id": 38, "name": "tennis racket"},
{"id": 39, "name": "bottle"},
{"id": 40, "name": "wine glass"},
{"id": 41, "name": "cup"},
{"id": 42, "name": "fork"},
{"id": 43, "name": "knife"},
{"id": 44, "name": "spoon"},
{"id": 45, "name": "bowl"},
{"id": 46, "name": "banana"},
{"id": 47, "name": "apple"},
{"id": 48, "name": "sandwich"},
{"id": 49, "name": "orange"},
{"id": 50, "name": "broccoli"},
{"id": 51, "name": "carrot"},
{"id": 52, "name": "hot dog"},
{"id": 53, "name": "pizza"},
{"id": 54, "name": "donut"},
{"id": 55, "name": "cake"},
{"id": 56, "name": "chair"},
{"id": 57, "name": "couch"},
{"id": 58, "name": "potted plant"},
{"id": 59, "name": "bed"},
{"id": 60, "name": "dining table"},
{"id": 61, "name": "toilet"},
{"id": 62, "name": "tv"},
{"id": 63, "name": "laptop"},
{"id": 64, "name": "mouse"},
{"id": 65, "name": "remote"},
{"id": 66, "name": "keyboard"},
{"id": 67, "name": "cell phone"},
{"id": 68, "name": "microwave"},
{"id": 69, "name": "oven"},
{"id": 70, "name": "toaster"},
{"id": 71, "name": "sink"},
{"id": 72, "name": "refrigerator"},
{"id": 73, "name": "book"},
{"id": 74, "name": "clock"},
{"id": 75, "name": "vase"},
{"id": 76, "name": "scissors"},
{"id": 77, "name": "teddy bear"},
{"id": 78, "name": "hair drier"},
{"id": 79, "name": "toothbrush"}
]
# Function to convert YOLO-format boxes to COCO format
def convert_yolo_to_coco(x_center, y_center, width, height, img_width, img_height):
x_min = (x_center - width / 2) * img_width
y_min = (y_center - height / 2) * img_height
width = width * img_width
height = height * img_height
return [x_min, y_min, width, height]
# Initialize the COCO data structure
def init_coco_format():
return {
"images": [],
"annotations": [],
"categories": categories
}
# Process each dataset split
for split in ['train', 'val']: #'test'
coco_format = init_coco_format()
annotation_id = 1
for img_name in os.listdir(os.path.join(images_path, split)):
if img_name.lower().endswith(('.png', '.jpg', '.jpeg')):
img_path = os.path.join(images_path, split, img_name)
label_path = os.path.join(labels_path, split, img_name.replace("jpg", "txt"))
img = Image.open(img_path)
img_width, img_height = img.size
image_info = {
"file_name": img_name,
"id": len(coco_format["images"]) + 1,
"width": img_width,
"height": img_height
}
coco_format["images"].append(image_info)
if os.path.exists(label_path):
with open(label_path, "r") as file:
for line in file:
category_id, x_center, y_center, width, height = map(float, line.split())
bbox = convert_yolo_to_coco(x_center, y_center, width, height, img_width, img_height)
annotation = {
"id": annotation_id,
"image_id": image_info["id"],
"category_id": int(category_id) + 1,
"bbox": bbox,
"area": bbox[2] * bbox[3],
"iscrowd": 0
}
coco_format["annotations"].append(annotation)
annotation_id += 1
    # Save a JSON file for each split
with open(os.path.join(output_dir, f"{split}_coco_format.json"), "w") as json_file:
json.dump(coco_format, json_file, indent=4)
```
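As a quick sanity check of the `convert_yolo_to_coco` helper above (restated here so the snippet is self-contained):

```python
def convert_yolo_to_coco(x_center, y_center, width, height, img_width, img_height):
    # YOLO stores a normalized center and size; COCO wants absolute [x_min, y_min, w, h]
    x_min = (x_center - width / 2) * img_width
    y_min = (y_center - height / 2) * img_height
    return [x_min, y_min, width * img_width, height * img_height]

# A box centered in a 100x80 image, covering half of each dimension:
print(convert_yolo_to_coco(0.5, 0.5, 0.5, 0.5, 100, 80))  # [25.0, 20.0, 50.0, 40.0]
```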
vail.py
```
if __name__ == '__main__':
from ultralytics import YOLO
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
model = YOLO("runs/detect/train11/weights/best.pt") # load a pretrained model (recommended for training)
results=model.val(data="coco8.yaml",save_json=True,device=0,batch=1)
anno = COCO("D:/YOLOv11/datasets/coco8/val_coco_format.json") # Load your JSON annotations
pred = anno.loadRes(f"{results.save_dir}/predictions.json") # Load predictions.json
val = COCOeval(anno, pred, "bbox")
val.evaluate()
val.accumulate()
val.summarize()
```
vail.py error output:
```
(yolov11) D:\YOLOv11>python vail.py
Ultralytics 8.3.18 🚀 Python-3.11.7 torch-2.6.0+cu126 CUDA:0 (NVIDIA GeForce RTX 4060 Ti, 16380MiB)
YOLO11n summary (fused): 238 layers, 2,616,248 parameters, 0 gradients, 6.5 GFLOPs
val: Scanning D:\YOLOv11\datasets\coco8\labels\val.cache... 4 images, 0 backgrounds, 0 corrupt: 100%|██████████| 4/4 [00:00<?, ?it/s]
Class Images Instances Box(P R mAP50 mAP50-95): 100%|██████████| 4/4 [00:01<00:00, 2.88it/s]
all 4 17 0.802 0.66 0.864 0.593
person 3 10 0.82 0.461 0.695 0.347
dog 1 1 0.707 1 0.995 0.697
horse 1 2 0.835 1 0.995 0.473
elephant 1 2 0.779 0.5 0.508 0.153
umbrella 1 1 0.669 1 0.995 0.995
potted plant 1 1 1 0 0.995 0.895
Speed: 2.2ms preprocess, 26.6ms inference, 0.0ms loss, 14.8ms postprocess per image
Saving runs\detect\val18\predictions.json...
Results saved to runs\detect\val18
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
Loading and preparing results...
Traceback (most recent call last):
File "D:\YOLOv11\vail.py", line 56, in <module>
pred = anno.loadRes(f"{results.save_dir}/predictions.json") # Load predictions.json
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ProgramData\anaconda3\Lib\site-packages\pycocotools\coco.py", line 327, in loadRes
assert set(annsImgIds) == (set(annsImgIds) & set(self.getImgIds())), \
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Results do not correspond to current coco set
```
### Additional
I don't know why there's an error | closed | 2025-02-14T15:42:45Z | 2025-02-18T11:19:04Z | https://github.com/ultralytics/ultralytics/issues/19250 | [
"question",
"detect"
] | SDIX-7 | 7 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 76 | Tutorial is incorrect | python scripts/merge_llama_with_chinese_lora.py \
--base_model path_to_original_llama_hf_dir \
--lora_model path_to_chinese_llama_or_alpaca_lora \
--model_type 7B \
--output_dir path_to_output_dir
In this snippet, `--model_type` should actually be `--model_size`. | closed | 2023-04-07T21:02:15Z | 2023-04-08T00:03:22Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/76 | [] | ghost | 1 |
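Presumably the corrected invocation for the report above would then be (the `--model_size` flag name is taken from the report; the paths are the tutorial's placeholders):

```
python scripts/merge_llama_with_chinese_lora.py \
    --base_model path_to_original_llama_hf_dir \
    --lora_model path_to_chinese_llama_or_alpaca_lora \
    --model_size 7B \
    --output_dir path_to_output_dir
```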
biolab/orange3 | scikit-learn | 6519 | New version of cython was released, installation does not work with it | On 17 Jul Cython 3.0 was released. Orange cannot be installed from source with it.
In the long term, we should fix it. For now, I changed the installer to require an older Cython (#6518).
"bug"
] | markotoplak | 0 |
sanic-org/sanic | asyncio | 2,220 | Sanic not raising ConnectionClosed on Python 3.7 | Hi,
I've been using Sanic and now I want to determine whether the WebSocket is closing normally or abnormally. The problem I'm currently facing is that whenever the client closes a connection abnormally (network issue), no exception is raised, although I do get the logs `server - event = connection_lost(None)`, `server - state = CLOSED`, `server x code = 1006, reason = [no reason]`, `server! failing CLOSED WebSocket connection with code 1006`.
OS: ubuntu - 16
Sanic version: 20.6.3
WebSockets version: 8.1 | closed | 2021-08-17T14:16:30Z | 2021-10-03T13:55:57Z | https://github.com/sanic-org/sanic/issues/2220 | [] | priyesh-kashyap | 6 |
mlfoundations/open_clip | computer-vision | 109 | Hyperparameters to replicate the Conceptual Captions RN50x4 experiments. | My run on 4 GPUs with batch size 256 using default parameters cannot replicate the 22.2% top-1 zero-shot val accuracy on ImageNet-1k (claimed in the README) with the code out of the box.
Also, which ImageNet-1k are we using as the reference here: 2012, 2017, or the latest one with people's faces masked off?
detailed cmd:
`training/main.py --train-data /data/cc/Train_GCC-training_output.csv --dataset-type csv --batch-size 256 --precision amp --workers 8 --imagenet-val /data/imagenet/val/ --report-to wandb --model RN50x4`

| closed | 2022-06-15T15:57:31Z | 2022-06-16T21:44:32Z | https://github.com/mlfoundations/open_clip/issues/109 | [] | kyleliang919 | 4 |
JaidedAI/EasyOCR | deep-learning | 447 | Number of iterations for models | Hi @rkcosmos
Great work! For how many approx. iterations did you train your language-specific models and how much data did you use for training?
Thanks | closed | 2021-06-04T09:55:37Z | 2022-03-02T09:25:00Z | https://github.com/JaidedAI/EasyOCR/issues/447 | [] | iknoorjobs | 0 |
allure-framework/allure-python | pytest | 807 | pytest-xdist run tests with --clean-alluredir may fail when faker installed. |
#### I'm submitting a ...
- [x] bug report
- [ ] feature request
- [ ] support request => Please do not submit support request here, see note at the top of this template.
#### What is the current behavior?
pytest-xdist run tests with --clean-alluredir may fail. Because the {alluredir} already deleted by another woeker.
#### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
create a test file :
test.py
```
def test_01():
assert True
```
Install pytest, allure-pytest, pytest-xdist, and faker:
```
pip install pytest allure-pytest pytest-xdist faker
```
Run tests multiple times
`pytest test.py -n 10 --clean-alluredir --alluredir=/tmp/allure-results`
And you will get an error:
```
pytest test.py -n 10 --clean-alluredir --alluredir=/tmp/allure-results
============================================================================================== test session starts ===============================================================================================
platform darwin -- Python 3.10.13, pytest-8.1.1, pluggy-1.4.0
rootdir: /private/tmp/allure-test
plugins: allure-pytest-2.13.5, Faker-24.11.0, xdist-3.5.0
initialized: 7/10 workersINTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/_pytest/main.py", line 281, in wrap_session
INTERNALERROR> config._do_configure()
INTERNALERROR> File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/_pytest/config/__init__.py", line 1121, in _do_configure
INTERNALERROR> self.hook.pytest_configure.call_historic(kwargs=dict(config=self))
INTERNALERROR> File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/pluggy/_hooks.py", line 523, in call_historic
INTERNALERROR> res = self._hookexec(self.name, self._hookimpls.copy(), kwargs, False)
INTERNALERROR> File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/pluggy/_manager.py", line 119, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/pluggy/_callers.py", line 138, in _multicall
INTERNALERROR> raise exception.with_traceback(exception.__traceback__)
INTERNALERROR> File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/pluggy/_callers.py", line 102, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/allure_pytest/plugin.py", line 167, in pytest_configure
INTERNALERROR> file_logger = AllureFileLogger(report_dir, clean)
INTERNALERROR> File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/allure_commons/logger.py", line 18, in __init__
INTERNALERROR> shutil.rmtree(self._report_dir)
INTERNALERROR> File "/opt/homebrew/Cellar/python@3.10/3.10.13_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/shutil.py", line 721, in rmtree
INTERNALERROR> onerror(os.open, path, sys.exc_info())
INTERNALERROR> File "/opt/homebrew/Cellar/python@3.10/3.10.13_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/shutil.py", line 718, in rmtree
INTERNALERROR> fd = os.open(path, os.O_RDONLY)
INTERNALERROR> FileNotFoundError: [Errno 2] No such file or directory: '/tmp/allure-results'
initialized: 10/10 workersINTERNALERROR> def worker_internal_error(self, node, formatted_error):
INTERNALERROR> """
INTERNALERROR> pytest_internalerror() was called on the worker.
INTERNALERROR>
INTERNALERROR> pytest_internalerror() arguments are an excinfo and an excrepr, which can't
INTERNALERROR> be serialized, so we go with a poor man's solution of raising an exception
INTERNALERROR> here ourselves using the formatted message.
INTERNALERROR> """
INTERNALERROR> self._active_nodes.remove(node)
INTERNALERROR> try:
INTERNALERROR> > assert False, formatted_error
INTERNALERROR> E AssertionError: Traceback (most recent call last):
INTERNALERROR> E File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/_pytest/main.py", line 281, in wrap_session
INTERNALERROR> E config._do_configure()
INTERNALERROR> E File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/_pytest/config/__init__.py", line 1121, in _do_configure
INTERNALERROR> E self.hook.pytest_configure.call_historic(kwargs=dict(config=self))
INTERNALERROR> E File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/pluggy/_hooks.py", line 523, in call_historic
INTERNALERROR> E res = self._hookexec(self.name, self._hookimpls.copy(), kwargs, False)
INTERNALERROR> E File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/pluggy/_manager.py", line 119, in _hookexec
INTERNALERROR> E return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> E File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/pluggy/_callers.py", line 138, in _multicall
INTERNALERROR> E raise exception.with_traceback(exception.__traceback__)
INTERNALERROR> E File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/pluggy/_callers.py", line 102, in _multicall
INTERNALERROR> E res = hook_impl.function(*args)
INTERNALERROR> E File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/allure_pytest/plugin.py", line 167, in pytest_configure
INTERNALERROR> E file_logger = AllureFileLogger(report_dir, clean)
INTERNALERROR> E File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/allure_commons/logger.py", line 18, in __init__
INTERNALERROR> E shutil.rmtree(self._report_dir)
INTERNALERROR> E File "/opt/homebrew/Cellar/python@3.10/3.10.13_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/shutil.py", line 721, in rmtree
INTERNALERROR> E onerror(os.open, path, sys.exc_info())
INTERNALERROR> E File "/opt/homebrew/Cellar/python@3.10/3.10.13_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/shutil.py", line 718, in rmtree
INTERNALERROR> E fd = os.open(path, os.O_RDONLY)
INTERNALERROR> E FileNotFoundError: [Errno 2] No such file or directory: '/tmp/allure-results'
INTERNALERROR> E assert False
INTERNALERROR>
INTERNALERROR> .venv/lib/python3.10/site-packages/xdist/dsession.py:200: AssertionError
[gw0] node down: Not properly terminated
replacing crashed worker gw0
initialized: 11/11 workersINTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/_pytest/main.py", line 285, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
INTERNALERROR> File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/_pytest/main.py", line 339, in _main
INTERNALERROR> config.hook.pytest_runtestloop(session=session)
INTERNALERROR> File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/pluggy/_hooks.py", line 501, in __call__
INTERNALERROR> return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
INTERNALERROR> File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/pluggy/_manager.py", line 119, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/pluggy/_callers.py", line 138, in _multicall
INTERNALERROR> raise exception.with_traceback(exception.__traceback__)
INTERNALERROR> File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/pluggy/_callers.py", line 121, in _multicall
INTERNALERROR> teardown.throw(exception) # type: ignore[union-attr]
INTERNALERROR> File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/_pytest/logging.py", line 806, in pytest_runtestloop
INTERNALERROR> return (yield) # Run all the tests.
INTERNALERROR> File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/pluggy/_callers.py", line 102, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/xdist/dsession.py", line 123, in pytest_runtestloop
INTERNALERROR> self.loop_once()
INTERNALERROR> File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/xdist/dsession.py", line 148, in loop_once
INTERNALERROR> call(**kwargs)
INTERNALERROR> File "/private/tmp/allure-test/.venv/lib/python3.10/site-packages/xdist/dsession.py", line 238, in worker_errordown
INTERNALERROR> self._active_nodes.remove(node)
INTERNALERROR> KeyError: <WorkerController gw0>
============================================================================================= no tests ran in 0.81s ==============================================================================================
```
#### What is the expected behavior?
No error should be raised.
#### What is the motivation / use case for changing the behavior?
#### Please tell us about your environment:
- Allure version: 2
- Test framework: pytest@3.10.13
- Allure adaptor: allure-pytest@2.13.5
- Faker-24.11.0
- xdist-3.5.0
#### Other information
| open | 2024-04-18T12:25:04Z | 2024-04-18T12:25:04Z | https://github.com/allure-framework/allure-python/issues/807 | [] | hokor | 0 |
davidsandberg/facenet | tensorflow | 484 | Step how to apply this library | Can someone step by step teach me how to use this library to detect face from beginning by using python and code using visual code. Sorry, because I am a beginner and I was interested on it. Thank you very much | closed | 2017-10-12T17:10:48Z | 2017-10-21T11:28:33Z | https://github.com/davidsandberg/facenet/issues/484 | [] | sy0209 | 1 |
okken/pytest-check | pytest | 6 | add Travis testing | closed | 2019-02-19T05:27:23Z | 2019-05-13T15:58:42Z | https://github.com/okken/pytest-check/issues/6 | [
"enhancement"
] | okken | 1 | |
vitalik/django-ninja | django | 533 | [BUG] Need to bundle js.map files for django 4+ | **Collectstatic fails on django 4+ if self hosting JS files**
See https://code.djangoproject.com/ticket/33353
Adding `ninja` to `INSTALLED_APPS` causes collectstatic to fail when using whitenoise
```
#0 0.428 Post-processing 'ninja/redoc@2.0.0-rc.66.standalone.js' failed!
#0 0.428
#0 0.428 Traceback (most recent call last):
#0 0.428 File "/opt/app/manage.py", line 22, in <module>
#0 0.428 main()
#0 0.428 File "/opt/app/manage.py", line 18, in main
#0 0.428 execute_from_command_line(sys.argv)
#0 0.428 File "/opt/venv/lib/python3.10/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
#0 0.428 utility.execute()
#0 0.428 File "/opt/venv/lib/python3.10/site-packages/django/core/management/__init__.py", line 440, in execute
#0 0.428 self.fetch_command(subcommand).run_from_argv(self.argv)
#0 0.428 File "/opt/venv/lib/python3.10/site-packages/django/core/management/base.py", line 402, in run_from_argv
#0 0.428 self.execute(*args, **cmd_options)
#0 0.428 File "/opt/venv/lib/python3.10/site-packages/django/core/management/base.py", line 448, in execute
#0 0.428 output = self.handle(*args, **options)
#0 0.428 File "/opt/venv/lib/python3.10/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 209, in handle
#0 0.429 collected = self.collect()
#0 0.429 File "/opt/venv/lib/python3.10/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 154, in collect
#0 0.429 raise processed
#0 0.429 whitenoise.storage.MissingFileError: The file 'ninja/redoc.standalone.js.map' could not be found with <whitenoise.storage.CompressedManifestStaticFilesStorage object at 0xffff9d419960>.
#0 0.429
#0 0.429 The JS file 'ninja/redoc@2.0.0-rc.66.standalone.js' references a file which could not be found:
#0 0.429 ninja/redoc.standalone.js.map
#0 0.429
#0 0.429 Please check the URL references in this JS file, particularly any
#0 0.429 relative paths which might be pointing to the wrong location.
#0 0.429
```
**Versions (please complete the following information):**
- Python version: 3.10
- Django version: 4.1
- Django-Ninja version: 0.19.1
| closed | 2022-08-18T16:38:50Z | 2023-07-19T09:16:53Z | https://github.com/vitalik/django-ninja/issues/533 | [] | cblakkan | 8 |
twopirllc/pandas-ta | pandas | 500 | How to ignore `nan` when using `cross_value`? | Or specify a default value as the result when comparing against a `nan`. | open | 2022-03-10T13:41:26Z | 2022-03-14T21:24:41Z | https://github.com/twopirllc/pandas-ta/issues/500 | [
"enhancement"
] | GF-Huang | 5 |
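Regarding the pandas-ta `cross_value` NaN question above, one generic pandas workaround sketch (not pandas-ta API — it just pre-decides what a NaN should compare as before applying any crossing logic):

```python
import pandas as pd

s = pd.Series([1.0, float("nan"), 3.0])
threshold = 2.0

# Treat NaN as "equal to the threshold", i.e. never strictly above it
above = s.fillna(threshold).gt(threshold)
print(list(above))  # [False, False, True]
```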
numba/numba | numpy | 9,413 | Python 3.13 | This issue is to track tasks and issues with Python 3.13.
Numba support status:
- Started early work on Python3.13.0a3. So far, there is minimal bytecode changes (WIP branch: https://github.com/numba/numba/compare/main...sklam:numba:wip/py3.13). ~We got `numba/tests/test_usecases.py` to pass. However, more involved tests are blocked by an issue with metaclass (https://github.com/python/cpython/issues/114806). As a result, `Type._code` is not set properly. (temporary debug commit related to the issue: https://github.com/numba/numba/commit/4b6cc78a79530e41e414326fbb7f255d05ee5506)~
- Added development setup using hatchery: https://github.com/numba/numba-hatchery/pull/8
- Work is halted until 3.13.0a4 is available. Expected on Feb 13 2024.
- Work resumes using 3.13.0a6 as https://github.com/python/cpython/issues/114806 is resolved. Continues to look into try-except changes 7/33 failures in `test_try_except.py` | closed | 2024-01-31T16:12:29Z | 2025-02-10T18:24:27Z | https://github.com/numba/numba/issues/9413 | [
"Python 3.13"
] | sklam | 14 |
plotly/dash | data-visualization | 2,467 | [BUG] allow_duplicate not working with clientside_callback |
Thanks for allow_duplicate, it's a very nice addition.
Everything works well, except for **clientside_callback**. Was clientside_callback supposed to be supported? When adding allow_duplicate, an error occurs:

If you specify several outputs (with and without allow_duplicate), then even though there will be an error, the value will be updated for the output without allow_duplicate, but not for the output with allow_duplicate. Example:

Tell me, am I doing something wrong? Thank you very much.
Full code sample (with one callback, but I think this is enough to show the error):
```python
import dash
from dash import Dash, html, Input, Output
app = Dash(__name__)
app.layout = html.Div(
children=[
html.Div(
children=['Last pressed button: ', html.Span(id='span', children='empty')]
),
html.Button(
id='button-right',
children='right'
)
]
)
# NOT WORKING FOR "SPAN", BUT WARKING FOR BUTTON-RIGHT
dash.clientside_callback(
"""
function(n_clicks){
return ["right", `right ${n_clicks}`];
}
""",
[
Output('span', 'children', allow_duplicate=True),
Output('button-right', 'children')
],
Input('button-right', 'n_clicks'),
prevent_initial_call=True
)
# WORKING EXAMPLE TO UNDERSTAND HOW IT SHOULD BE (THE ONLY DIFFERENCE IS THAT NO ALLOW_DUPLICATE)
# dash.clientside_callback(
# """
# function(n_clicks){
# return ["right", `right ${n_clicks}`];
# }
# """,
# [
# Output('span', 'children'),
# Output('button-right', 'children')
# ],
# Input('button-right', 'n_clicks'),
# prevent_initial_call=True
# )
if __name__ == '__main__':
app.run_server(debug=True, port=2414)
```
_Originally posted by @FatHare in https://github.com/plotly/dash/issues/2414#issuecomment-1473088764_
| closed | 2023-03-17T19:00:31Z | 2023-04-07T14:30:34Z | https://github.com/plotly/dash/issues/2467 | [] | T4rk1n | 1 |
joerick/pyinstrument | django | 298 | Example on how to disable profiling in FastAPI tests | I have a FastAPI application, and have added Pyinstrument instrumentation to it using code similar to the code linked [here]( https://pyinstrument.readthedocs.io/en/latest/guide.html#profile-a-web-request-in-fastapi).
The code suggests to have profiling configurable via a Settings object, such as one provided in Pydantic.
The problem with this is that in my case I have a Settings object which looks like this:
```python
class Settings(BaseSettings):
class Config:
extra = Extra.allow
my_url: AnyHttpUrl
foo: Optional[str] = None
ENABLE_PROFILING: bool = False
```
So when I try to do something like this inside my app factory function:
```python
def register_profiling_middleware(app: FastAPI):
if get_settings().ENABLE_PROFILING:
@app.middleware("http")
async def profile_request(request: Request, call_next: Callable):
...
```
This runs into Pydantic settings-validation issues when running pytest tests, because some of the environment variables are not yet set when a test client is created. Currently my lazy solution in the app factory function is to add a parameter for enabling/disabling profiling:
```python
# conftest.py
@pytest.fixture
def client(tmp_path):
...
app = create_app(enable_profiling=False)
# app.py
def create_app(enable_profiling: bool = True) -> FastAPI:
app = FastAPI()
...
register_middlewares(app=app, enable_profiling=enable_profiling)
```
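One environment-driven alternative I've considered is a sketch like the following (`Settings` here is a plain stand-in for the pydantic `BaseSettings` above, reading the flag manually so the snippet is self-contained):

```python
import os

class Settings:
    # Stand-in for the pydantic BaseSettings shown earlier
    def __init__(self):
        self.ENABLE_PROFILING = os.environ.get("ENABLE_PROFILING", "0") == "1"

os.environ["ENABLE_PROFILING"] = "0"   # e.g. exported by conftest.py or CI
print(Settings().ENABLE_PROFILING)     # False

os.environ["ENABLE_PROFILING"] = "1"
print(Settings().ENABLE_PROFILING)     # True
```

That way tests could disable profiling purely through the environment, with no app-factory parameter.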
Is this what is recommended? Is there a better way? Ideally I wouldn't have profiling in the tests, or at least would want that to be determined by passing an environment variable myself, rather than having to make code changes. Specifically, if there are any examples in the docs I can look at that would be much appreciated. Thanks! | open | 2024-03-22T12:10:56Z | 2024-08-25T11:25:58Z | https://github.com/joerick/pyinstrument/issues/298 | [] | daniel-soutar-ki | 1 |
deepinsight/insightface | pytorch | 1929 | Can you provide the pytorch pretrained model of mobileface on the glint360k dataset? arcface or cosineface | open | 2022-03-07T09:45:12Z | 2022-03-07T09:45:12Z | https://github.com/deepinsight/insightface/issues/1929 | [] | momohuangsha | 0 |
thunlp/OpenPrompt | nlp | 242 | 'PrefixTuningTemplate' object has no attribute 'n_head' | Here is my code trying to use `PrefixTuningTemplate`:
```
import torch
from openprompt.data_utils.conditional_generation_dataset import WebNLGProcessor
from openprompt.plms import load_plm
from openprompt.prompts.prefix_tuning_template import PrefixTuningTemplate
plm, tokenizer, model_config, WrapperClass = load_plm('opt', "facebook/opt-125m")
mytemplate = PrefixTuningTemplate(model=plm, tokenizer=tokenizer, text=' {"placeholder":"text_a"} {"special": "<eos>"} {"mask"} ', using_decoder_past_key_values=True)
```
An error is encountered:
```
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <module> │
│ │
│ 1 from openprompt.prompts.prefix_tuning_template import PrefixTuningTemplate │
│ ❱ 2 mytemplate = PrefixTuningTemplate(model=plm, tokenizer=tokenizer, text=' {"placeholder" │
│ 3 │
│ │
│ /root/gpt_exp/OpenPrompt/openprompt/prompts/prefix_tuning_template.py:77 in __init__ │
│ │
│ 74 │ │ │ self.n_head = self.config.n_head │
│ 75 │ │ │ self.match_n_decoder_layer = self.n_decoder_layer │
│ 76 │ │ self.mid_dim = mid_dim │
│ ❱ 77 │ │ self.match_n_head = self.n_head │
│ 78 │ │ self.match_n_embd = self.n_embd // self.n_head │
│ 79 │ │ self.prefix_dropout = prefix_dropout │
│ 80 │ │ self.dropout = nn.Dropout(self.prefix_dropout) │
│ │
│ /home/kg/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py:1208 in __getattr__ │
│ │
│ 1205 │ │ │ if name in modules: │
│ 1206 │ │ │ │ return modules[name] │
│ 1207 │ │ raise AttributeError("'{}' object has no attribute '{}'".format( │
│ ❱ 1208 │ │ │ type(self).__name__, name)) │
│ 1209 │ │
│ 1210 │ def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None: │
│ 1211 │ │ def remove_from(*dicts_or_sets): │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
``` | open | 2023-02-28T08:29:52Z | 2023-03-30T06:49:06Z | https://github.com/thunlp/OpenPrompt/issues/242 | [] | SuperChanS | 2 |
jacobgil/pytorch-grad-cam | computer-vision | 371 | AssertionError of FullGrad for the inception_v3 model | For the `inception_v3` model in `torchvision.models`, FullGrad attribution raises the AssertionError about "assert(len(self.bias_data) == len(grads_list))"; I find that `len(self.bias_data)` is 96 while `len(grads_list)` is just 94 when stepping into the function.
It comes from normal usage of the function,
> model = torchvision.models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
fg = FullGrad(model, [], use_cuda=True) # FullGrad will ignore the given target_layers, so here it is an empty list
attr = fg(input_tensor=x.to(device), targets=[ClassifierOutputTarget(tar_clsidx)])
Does anyone also encounter such a problem? Or any suggestions? @jacobgil | open | 2022-12-22T15:39:36Z | 2022-12-23T02:24:41Z | https://github.com/jacobgil/pytorch-grad-cam/issues/371 | [] | wenchieh | 0 |
piskvorky/gensim | nlp | 2,950 | Overflow error after unicode errors when loading a 'large' model built with gensim | #### Problem description
What are you trying to achieve?
I am loading a `fasttext` model built with `gensim`, using `gensim.models.fasttext.load_facebook_model` so I can use the model.
What is the expected result?
The model loads correctly.
What are you seeing instead?
Overflow error, preceded by unicode parsing errors.
#### Steps/code/corpus to reproduce
I get an overflow error when I try to load a `fasttext` model which I built with `gensim`. I have tried with versions 3.8.3 and then rebuild and load with the head of the code 4.0.0-dev as of yesterday. It's not reproducible because I cannot share the corpus.
Here is the stack trace:
```
In [21]: ft = load_facebook_model('data/interim/ft_model.bin')
2020-09-16 15:59:59,526 : MainThread : INFO : loading 582693 words for fastText model from data/interim/ft_model.bin
2020-09-16 15:59:59,626 : MainThread : ERROR : failed to decode invalid unicode bytes b'\x8a\x08'; replacing invalid characters, using '\\x8a\x08'
2020-09-16 15:59:59,684 : MainThread : ERROR : failed to decode invalid unicode bytes b'\xb0\x03'; replacing invalid characters, using '\\xb0\x03'
2020-09-16 15:59:59,775 : MainThread : ERROR : failed to decode invalid unicode bytes b'\xb5\x01'; replacing invalid characters, using '\\xb5\x01'
2020-09-16 15:59:59,801 : MainThread : ERROR : failed to decode invalid unicode bytes b'\x99\xe9\xa2\x9d'; replacing invalid characters, using '\\x99额'
---------------------------------------------------------------------------
OverflowError                             Traceback (most recent call last)
<ipython-input-21-3b4a7ad71a41> in <module>
----> 1 ft = load_facebook_model('data/interim/ft_model.bin')

/m/virtualenvs/<snip>/lib/python3.7/site-packages/gensim/models/fasttext.py in load_facebook_model(path, encoding)
   1140
   1141     """
-> 1142     return _load_fasttext_format(path, encoding=encoding, full_model=True)
   1143
   1144

/m/virtualenvs/<snip>/lib/python3.6/site-packages/gensim/models/fasttext.py in _load_fasttext_format(model_file, encoding, full_model)
   1220     """
   1221     with gensim.utils.open(model_file, 'rb') as fin:
-> 1222         m = gensim.models._fasttext_bin.load(fin, encoding=encoding, full_model=full_model)
   1223
   1224     model = FastText(

/m/virtualenvs/<snip>/python3.6/site-packages/gensim/models/_fasttext_bin.py in load(fin, encoding, full_model)
    342     model.update(raw_vocab=raw_vocab, vocab_size=vocab_size, nwords=nwords, ntokens=ntokens)
    343
--> 344     vectors_ngrams = _load_matrix(fin, new_format=new_format)
    345
    346     if not full_model:

/m/virtualenvs/<snip>/lib/python3.6/site-packages/gensim/models/_fasttext_bin.py in _load_matrix(fin, new_format)
    276         matrix = _fromfile(fin, _FLOAT_DTYPE, count)
    277     else:
--> 278         matrix = np.fromfile(fin, _FLOAT_DTYPE, count)
    279
    280     assert matrix.shape == (count,), 'expected (%r,), got %r' % (count, matrix.shape)

OverflowError: Python int too large to convert to C ssize_t
```
* There are no errors or warnings in the model building.
* A quick check showed there are no unicode errors in the input file, but it is quite possible that there are Chinese characters.
* The `count` variable is calculated as `count = num_vectors * dim`. Both of these are astronomical at 10^23, `dim` should be 100, so there must be some unpacking problem here already. The unpacking of model params pre vocab looks ok.
* The input dataset is somewhat large at 26 GB, one epoch is sufficient.
* The build and load works with a truncated file which is 4.8 GB. So both the size and the corpus changed -- it could be that the problematic input is not included.
* The same input file works when running with the python `fasttext` module, so I have a workaround.
The counts of the erroneous words are also off the scale:
```
In [41]: raw_vocab['\\x8a\x08']
Out[41]: 7088947288457871360

In [42]: raw_vocab['\\xb0\x03']
Out[42]: 3774297962713186304

In [43]: raw_vocab['\\xb5\x01']
Out[43]: 7092324988178399232
```
I saw that there were many changes from `int` to `long long` both in 3.8.3 and also in 4.0.0-dev so my hypothesis was that it would be resolved when updating but I got the same error.
I don't know if this is sufficient information to go on in order to pin it down; please let me know if I can help with more information.
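For reference, the numbers above are consistent with a corrupted header rather than a genuine size limit: a count around 10^23 cannot fit in a signed 64-bit C `ssize_t`, which is exactly what `np.fromfile` rejects, while the bogus per-word counts still (just barely) fit. A quick arithmetic check:

```python
# Maximum value of a signed 64-bit C ssize_t.
SSIZE_T_MAX = 2**63 - 1

# count = num_vectors * dim; both were reported around 1e23, so even one of
# them alone overflows ssize_t -- hence the OverflowError in np.fromfile.
bogus_count = 10**23
print(bogus_count > SSIZE_T_MAX)  # True

# The absurd per-word counts from raw_vocab, by contrast, still fit:
bogus_word_counts = [7088947288457871360, 3774297962713186304, 7092324988178399232]
print(all(c <= SSIZE_T_MAX for c in bogus_word_counts))  # True
```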
#### Versions
Please provide the output of:
```python
>>> import platform; print(platform.platform())
Linux-2.6.32-754.3.5.el6.x86_64-x86_64-with-centos-6.10-Final
>>> import sys; print("Python", sys.version)
Python 3.6.10 (default, Jul 8 2020, 16:15:16)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-23)]
>>> import struct; print("Bits", 8 * struct.calcsize("P"))
Bits 64
>>> import numpy; print("NumPy", numpy.__version__)
NumPy 1.19.2
>>> import scipy; print("SciPy", scipy.__version__)
SciPy 1.5.2
>>> import gensim; print("gensim", gensim.__version__)
gensim 3.8.3
>>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
FAST_VERSION 1
```
| open | 2020-09-16T18:10:33Z | 2020-09-16T20:12:36Z | https://github.com/piskvorky/gensim/issues/2950 | [] | svenski | 3 |
apify/crawlee-python | automation | 248 | Sitemap-based request provider | similar to what we're implementing in JS crawlee | open | 2024-06-28T10:58:00Z | 2024-07-15T10:15:45Z | https://github.com/apify/crawlee-python/issues/248 | [
"enhancement",
"t-tooling"
] | janbuchar | 0 |
saulpw/visidata | pandas | 2,159 | [parquet] can't load parquet directory anymore: `IsADirectoryError` | **Small description**
Hi @saulpw @anjakefala @takacsd - it seems that forcing the path to be opened as a file with `.open()` - introduced with #2133 - breaks the use case where multiple parquet files are stored in a directory, and this directory is then read by visidata. This is common with Hive partitioning or when working with spark. A simple fix would be to check if the path is a directory with `os.path.is_dir()` and then retain the old behavior of passing it as a string to `read_table()`. If it is not an existing directory, we move to the new way of opening as a binary buffer.
I have already added this workaround to my clone of visidata, and it fixes my issue, but maybe you have better ideas on how to handle it than an `if-else` statement in the `ParquetSheet`.
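The if/else dispatch described above can be sketched as follows — the return values are placeholders standing in for "pass the path string to `pyarrow.parquet.read_table()`" and "open a binary buffer", not visidata's actual loader code:

```python
import os
import tempfile

def open_parquet_source(path):
    """Dispatch described in the report: directories (e.g. Hive-partitioned
    datasets) go to read_table() as a path string, so pyarrow can discover
    the fragment files itself; single files keep the new buffer behavior."""
    if os.path.isdir(path):
        return "directory-path"   # would be passed as str to read_table()
    return "binary-buffer"        # would be path.open('rb')

# usage sketch against a throwaway directory
with tempfile.TemporaryDirectory() as d:
    single = os.path.join(d, "part-0.parquet")
    open(single, "wb").close()
    kinds = (open_parquet_source(d), open_parquet_source(single))

print(kinds)  # ('directory-path', 'binary-buffer')
```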
**Expected result**
```bash
vd -f parquet parquet_dir
```
should load a parquet into visidata
**Actual result with screenshot**

**Additional context**
```bash
# freshest develop
visidata@9fd728b72c115e50e99c24b455caaf020381b48e
pyarrow==12.0.0
python 3.10.2
```
| closed | 2023-12-07T12:59:35Z | 2023-12-07T20:44:20Z | https://github.com/saulpw/visidata/issues/2159 | [
"bug",
"fixed"
] | mr-majkel | 1 |
deeppavlov/DeepPavlov | tensorflow | 829 | Difference between ELMo embedding for sentence/separate tokens | Hello,
I am trying to use the ELMo embedder for a further text classification task. I cannot find information on this, so I am forced to ask a question here.
What is the conceptual difference between this
`elmo([['вопрос', 'жизни', 'и' ,'смерти']])`
and this
`elmo([['вопрос жизни и смерти']])`
The embedding size is the same, but the values are different.
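A plausible reading (my interpretation of the API, not something confirmed by the DeepPavlov docs): the difference is purely in tokenization. In the first call the sentence is already split into four tokens, so ELMo's character encoder produces one vector per word; in the second call the whole string is a single "token", so the encoder sees one long character sequence, spaces included. The shapes hint at this:

```python
# Pre-tokenized: ELMo would see four tokens, one embedding per word,
# each built from that word's characters.
tokens = ['вопрос', 'жизни', 'и', 'смерти']

# Raw string in a single-element list: ELMo would see ONE token whose
# characters are the entire sentence, spaces included.
pseudo_tokens = ['вопрос жизни и смерти']

print(len(tokens))         # 4
print(len(pseudo_tokens))  # 1

# A sentence-level embedding (e.g. a mean over token vectors) has the same
# dimensionality either way -- which would explain why the sizes match
# while the values differ.
```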
Thanks in advance. | closed | 2019-05-05T10:23:41Z | 2019-05-14T18:57:31Z | https://github.com/deeppavlov/DeepPavlov/issues/829 | [] | joistick11 | 2 |
mouredev/Hello-Python | fastapi | 538 | 网赌网址不给出款该怎么办 | 被黑了不能出款找我们大力出黑团队多年经验,如果你被黑了请联系我们帮忙把损失降到最低/微x:<>《微信zdn200手机号18469854912》前期出款不收任何费用,我们团队都是先出款/后收费。不成功不收费、
当网赌赢了不能提款的问题怎么办呢?
首先我们应该保持冷静,如果提现被拒绝就不重复点了,切记不能跟平台客服或者称的代理人有任何争执,一旦激怒对方,极有可能造成账号冻结之类的情况,这样问题就很难处理得到了,这个时候对方的理由或者借口我们要表示相信,并希望尽快得到处理,在稳定住对方后。
第一时间联系专业出黑团队,通过藏分锁卡等手段分批出款,这样问题顺利第一款解决了,如果您目前正遭遇网赢钱不能提款的,请及时联系我们专业出黑团队为您处理↓↓↓↓↓
 | closed | 2025-03-19T05:10:15Z | 2025-03-19T06:31:42Z | https://github.com/mouredev/Hello-Python/issues/538 | [] | wdbhgzmb | 0 |
nerfstudio-project/nerfstudio | computer-vision | 3,555 | Unable to use my own point cloud and camera poses | Unable to use my own point cloud and camera pose.
I am trying to splatfacto. However, I do not want to use colmap. I have my own camera parameters and point cloud for each image. These are ground truth parameters. The colmap prediction is obviously not as good as ground truth. Consequently the reconstruction is not satisfactory. This I want to understand how to use my own parameters to train splat-facto. | open | 2024-12-22T10:53:31Z | 2024-12-22T10:53:31Z | https://github.com/nerfstudio-project/nerfstudio/issues/3555 | [] | engs2570 | 0 |
pydata/xarray | numpy | 9,550 | `rolling(...).construct(...)` blows up chunk size | ### What happened?
When using `rolling(...).construct(...) in https://github.com/coiled/benchmarks/pull/1552, I noticed that my Dask workers died running out of memory because the chunk sizes get blown up.
### What did you expect to happen?
Naively, I would expect `rolling(...).construct(...)` to try and keep chunk sizes constant instead of blowing them up quadratic to the window size.
### Minimal Complete Verifiable Example
```Python
import dask.array as da
import xarray as xr
# Construct dataset with chunk size of (400, 400, 1) or 1.22 MiB
ds = xr.Dataset(
dict(
foo=(
["latitute", "longitude", "time"],
da.random.random((400, 400, 400), chunks=(-1, -1, 1)),
),
)
)
# Dataset now has chunks of size (400, 400, 100 100) or 11.92 GiB
ds = ds.rolling(time=100, center=True).construct("window")
```
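The sizes quoted in the comments follow directly from the shapes: a float64 chunk of `(400, 400, 1)` is ~1.22 MiB, and after `construct` with `window=100` each chunk is `(400, 400, 100, 100)` — a 10,000× blow-up. A plain arithmetic check (no xarray involved):

```python
FLOAT64 = 8  # bytes per element

before = 400 * 400 * 1 * FLOAT64          # one (400, 400, 1) chunk
after = 400 * 400 * 100 * 100 * FLOAT64   # one (400, 400, 100, 100) chunk

mib = before / 2**20
gib = after / 2**30
print(round(mib, 2))    # 1.22
print(round(gib, 2))    # 11.92
print(after // before)  # 10000 -- window size times the merged time chunk
```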
### MVCE confirmation
- [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [X] Complete example — the example is self-contained, including all data and the text of any traceback.
- [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
_No response_
### Anything else we need to know?
_No response_
### Environment
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.12.6 | packaged by conda-forge | (main, Sep 11 2024, 04:55:15) [Clang 17.0.6 ]
python-bits: 64
OS: Darwin
OS-release: 23.6.0
machine: arm64
processor: arm
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: None
libnetcdf: None
xarray: 2024.7.0
pandas: 2.2.2
numpy: 1.26.4
scipy: 1.14.0
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
zarr: 2.18.2
cftime: 1.6.4
nc_time_axis: None
iris: None
bottleneck: 1.4.0
dask: 2024.9.0
distributed: 2024.9.0
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2024.6.1
cupy: None
pint: None
sparse: 0.15.4
flox: 0.9.9
numpy_groupies: 0.11.2
setuptools: 73.0.1
pip: 24.2
conda: 24.7.1
pytest: 8.3.3
mypy: None
IPython: 8.27.0
sphinx: None
</details> | closed | 2024-09-26T11:18:09Z | 2024-11-18T18:11:23Z | https://github.com/pydata/xarray/issues/9550 | [
"topic-dask",
"upstream issue",
"topic-rolling"
] | hendrikmakait | 6 |
redis/redis-om-python | pydantic | 91 | EmbeddedJsonModel not working within a JsonModel | ```python3
class Address(EmbeddedJsonModel):
street: str
class Contact(JsonModel):
name: str
address: Address
data = {
"name": "First",
"address": {"street": "Main Rd"}
}
try:
c = Contact(**data)
except Exception as e:
print(e)
```
Always gets an error of Nonetype:
```
1 validation error for Contact
address
'NoneType' object is not subscriptable (type=type_error)
```
If I change Address(EmbeddedJsonModel) -> Address(HashModel), it will work and embed it but the downside is that it creates a *pk* in each nested section.
What am I doing wrong?
Thanks | closed | 2022-01-15T02:15:59Z | 2022-01-15T19:23:46Z | https://github.com/redis/redis-om-python/issues/91 | [] | jctissier | 0 |
manrajgrover/halo | jupyter | 32 | Jupyter Notebooks don't support Halo | While Halo works perfectly fine in the terminal, I can't get Halo to work inside a Jupyter notebook. I am running the following code in the first cell of the notebook:
```
from halo import Halo
from time import sleep
with Halo(text='Loading', spinner='dots'):
sleep(5)
```
The program runs through without any errors, but no spinner is shown at all.
I figured out that if you write something to `stdout` with `sys.stdout` prior to calling `Halo`, then the spinner is shown, but each spinner frame and its corresponding text are shown next to each other, i.e. clearing the current line doesn't work.
```
from halo import Halo
import sys
from time import sleep
sys.stdout.write('test\n')
with Halo(text='Loading', spinner='dots'):
sleep(5)
```
returns the output
```
test
⠋ Loading⠙ Loading⠹ Loading⠸ Loading⠼ Loading⠴ Loading⠦ Loading⠧ Loading⠇ Loading⠏
Loading⠋ Loading⠙ Loading⠹ Loading⠸ Loading⠼ Loading⠴ Loading⠦ Loading⠧ Loading⠇
Loading⠏ Loading⠋ Loading⠙ Loading⠹ Loading⠸ Loading⠼ Loading⠴ Loading⠦ Loading⠧
Loading⠇ Loading⠏ Loading⠋ Loading⠙ Loading⠹ Loading⠸ Loading⠼ Loading⠴ Loading⠦
Loading⠧ Loading⠇ Loading⠏ Loading⠋ Loading⠙ Loading⠹ Loading⠸ Loading⠼ Loading⠴
Loading⠦ Loading⠧ Loading⠇ Loading⠏ Loading⠋ Loading⠙ Loading⠹ Loading⠸ Loading⠼
Loading⠴ Loading⠦ Loading⠧ Loading
```
Is there something wrong on my end, and do you have an idea how to fix that?
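For what it's worth, the "frames shown next to each other" symptom is consistent with the output stream not honoring carriage returns: terminal spinners typically redraw by writing `\r` and overwriting the current line, whereas a renderer that ignores `\r` just appends every frame. A small illustration of that mechanism — plain Python, not Halo's actual internals:

```python
import io

def spin(stream, frames):
    # Terminal-style redraw: '\r' moves the cursor to the start of the
    # line so the next frame overwrites the previous one.
    for frame in frames:
        stream.write('\r' + frame + ' Loading')
        stream.flush()

buf = io.StringIO()
spin(buf, ['⠋', '⠙', '⠹'])

raw = buf.getvalue()
# A real terminal interprets '\r' and displays only the last segment;
# a renderer that ignores '\r' shows every frame concatenated -- exactly
# like the Jupyter output above.
print(raw.count('\r'))      # 3
print(raw.split('\r')[-1])  # '⠹ Loading' -- what a terminal would display
```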
It would be really nice if Halo worked with Jupyter too. Thanks for your efforts and for sharing this library! | closed | 2017-10-26T11:19:32Z | 2018-05-06T18:49:55Z | https://github.com/manrajgrover/halo/issues/32 | [
"help wanted",
"new feature",
"hacktoberfest",
"feature request",
"ipython"
] | HBadertscher | 12 |
miguelgrinberg/flasky | flask | 473 | After launching to heroku, local app with flask run or heroku start broken | After I launched last night I went back to my local development and tried registering a new user. When I did, the following trace snippet occurred.
```
File "/Users/richardkhillah/Developer/levelup/app/models/models.py", line 112, in __init__
self.follow(self)
File "/Users/richardkhillah/Developer/levelup/app/models/models.py", line 207, in follow
if not self.is_following(user):
File "/Users/richardkhillah/Developer/levelup/app/models/models.py", line 226, in is_following
followed_id=user.id).first() is not None
File "/Users/richardkhillah/Developer/levelup/venv/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 1897, in filter_by
return self.filter(*clauses)
File "<string>", line 2, in filter
File "/Users/richardkhillah/Developer/levelup/venv/lib/python3.7/site-packages/sqlalchemy/orm/base.py", line 224, in generate
self = args[0]._clone()
File "/Users/richardkhillah/Developer/levelup/venv/lib/python3.7/site-packages/sqlalchemy/orm/dynamic.py", line 349, in _clone
% (orm_util.instance_str(instance), self.attr.key)
sqlalchemy.orm.exc.DetachedInstanceError: Parent instance <User at 0x10df53160> is not bound to a Session, and no contextual session is established; lazy load operation of attribute 'followed' cannot proceed (Background on this error at: http://sqlalche.me/e/bhk3)
```
Nothing in the auth module changed between development and deployment, so I'm confused what's going on.
Locally, I'm using Postgres (if that's helpful at all) | closed | 2020-06-15T17:43:50Z | 2020-06-17T08:43:07Z | https://github.com/miguelgrinberg/flasky/issues/473 | [
"question"
] | richardkhillah | 3 |
huggingface/diffusers | pytorch | 10,314 | HunyuanVideoPipeline produces NaN values | ### Describe the bug
Running `diffusers.utils.export_to_video()` on the output of `HunyuanVideoPipeline` results in
```
/app/diffusers/src/diffusers/image_processor.py:147: RuntimeWarning: invalid value encountered in cast
images = (images * 255).round().astype("uint8")
```
After adding some checks to `numpy_to_pil()` in `image_processor.py` I have confirmed that the output contains `NaN` values
```
File "/app/pipeline.py", line 37, in <module>
output = pipe(
^^^^^
File "/usr/local/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/app/diffusers/src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video.py", line 677, in __call__
video = self.video_processor.postprocess_video(video, output_type=output_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/diffusers/src/diffusers/video_processor.py", line 103, in postprocess_video
batch_output = self.postprocess(batch_vid, output_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/diffusers/src/diffusers/image_processor.py", line 823, in postprocess
return self.numpy_to_pil(image)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/diffusers/src/diffusers/image_processor.py", line 158, in numpy_to_pil
raise ValueError("Image array contains NaN values")
ValueError: Image array contains NaN values
```
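The debugging guard described above can be reproduced with a few lines of numpy: casting NaN to `uint8` is what triggers the `RuntimeWarning`, so checking before the cast localizes the failure. This is a sketch of the check, not diffusers' actual `numpy_to_pil` code:

```python
import numpy as np

def numpy_to_uint8(images):
    # Same cast as image_processor.py; the guard surfaces NaNs explicitly
    # instead of letting them silently become garbage uint8 values.
    if np.isnan(images).any():
        raise ValueError("Image array contains NaN values")
    return (images * 255).round().astype("uint8")

ok = numpy_to_uint8(np.array([[0.0, 0.5, 1.0]]))
print(ok.tolist())  # [[0, 128, 255]]

try:
    numpy_to_uint8(np.array([[0.1, float("nan")]]))
except ValueError as e:
    print(e)  # Image array contains NaN values
```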
### Reproduction
```python
import os
import time
import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video
from huggingface_hub import snapshot_download
from torch.profiler import ProfilerActivity, profile, record_function
os.environ["TOKENIZERS_PARALLELISM"] = "false"
MODEL_ID = "tencent/HunyuanVideo"
PROMPT = "a whale shark floating through outer space"
profile_dir = os.environ.get("PROFILE_OUT_PATH", "./")
profile_file_name = os.environ.get("PROFILE_OUT_FILE_NAME", "hunyuan_profile.json")
profile_path = os.path.join(profile_dir, profile_file_name)
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
MODEL_ID, subfolder="transformer", torch_dtype=torch.float16, revision="refs/pr/18"
)
pipe = HunyuanVideoPipeline.from_pretrained(
MODEL_ID, transformer=transformer, torch_dtype=torch.float16, revision="refs/pr/18"
)
pipe.vae.enable_tiling()
pipe.to("cuda")
print(f"\nStarting profiling of {MODEL_ID}\n")
with profile(
activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA], record_shapes=True
) as prof:
with record_function("model_inference"):
output = pipe(
prompt=PROMPT,
height=320,
width=512,
num_frames=61,
num_inference_steps=30,
)
# Export and print profiling results
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
prof.export_chrome_trace(profile_path)
print(f"{profile_file_name} ready")
# export video
video = output.frames[0]
print(" ====== raw video matrix =====")
print(video)
print()
print(" ====== Exporting video =====")
export_to_video(video, "hunyuan_example.mp4", fps=15)
print()
```
### Logs
_No response_
### System Info
GPU: AMD MI300X
```dockerfile
ARG BASE_IMAGE=python:3.11-slim
FROM ${BASE_IMAGE}
ENV PYTHONBUFFERED=true
ENV CUDA_VISIBLE_DEVICES=0
WORKDIR /app
# Install tools
RUN apt-get update && \
apt-get install -y --no-install-recommends \
git \
libgl1-mesa-glx \
libglib2.0-0 \
libsm6 \
libxext6 \
libxrender-dev \
libfontconfig1 \
ffmpeg \
build-essential && \
rm -rf /var/lib/apt/lists/*
# install ROCm pytorch and python dependencies
RUN python -m pip install --no-cache-dir \
torch torchvision --index-url https://download.pytorch.org/whl/rocm6.2 && \
python -m pip install --no-cache-dir \
accelerate transformers sentencepiece protobuf opencv-python imageio imageio-ffmpeg
# install diffusers from source to include newest pipeline classes
COPY diffusers diffusers
RUN cd diffusers && \
python -m pip install -e .
# Copy the profiling script
ARG PIPELINE_FILE
COPY ${PIPELINE_FILE} pipeline.py
# run the script
CMD ["python", "pipeline.py"]
```
### Who can help?
@DN6 @a-r-r-o-w | closed | 2024-12-20T06:32:30Z | 2025-01-20T07:10:00Z | https://github.com/huggingface/diffusers/issues/10314 | [
"bug"
] | smedegaard | 19 |
jupyterhub/jupyterhub-deploy-docker | jupyter | 46 | R Kernel | Hi, would you be so kind as to add some documentation on how to add/enable R kernels for all users? I tried to adapt https://github.com/jupyter/docker-stacks/blob/master/r-notebook/Dockerfile without success. To do so, I copied the fixed-permissions and the R pre-requisites & R packages except USER $NB_USER. Fixed-permissions did not work because of missing permissions, and it does not work without it.
EDIT:
Or would it be possible to activate the conda-interface somehow? Even as admin I am not able to see it on a fresh setup. | closed | 2017-09-20T20:12:55Z | 2017-10-21T07:07:43Z | https://github.com/jupyterhub/jupyterhub-deploy-docker/issues/46 | [] | inkrement | 2 |
exaloop/codon | numpy | 597 | Why Does the Instance in Codon Script Never Get Terminated After Deletion? | I've written a Codon script that includes a class and a function to be exported:
```python
class Foo:
def __init__(self):
print('ctor')
def __del__(self):
print('dtor')
@export
def func():
foo = Foo()
## before
del foo
## after
```
I noticed something strange: the instance created never gets properly terminated. It shows only `ctor`. After printing the `__raw__` and `__ptr__` values both before and after calling `del foo`, I found that while the `__raw__` value changes slightly:
```
foo.__raw__(): 0x7f6d1f342a80 -> 0x7f6d1f342a40
```
The `__ptr__` value remains the same:
```
__ptr__(foo): 0x7f6c59c94a10 (still unchanged)
```
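For contrast, here is what the same shape of code does under CPython, where reference counting runs `__del__` deterministically as soon as the last reference is dropped. Codon uses a tracing garbage collector, so finalizers are not guaranteed to run promptly — that difference is my assumption about the cause, not a confirmed answer:

```python
events = []

class Foo:
    def __init__(self):
        events.append('ctor')
    def __del__(self):
        events.append('dtor')

def func():
    foo = Foo()
    # In CPython, `del` drops the only reference, the refcount hits zero,
    # and __del__ fires immediately. Under a tracing GC the object merely
    # becomes unreachable and may never be finalized before exit.
    del foo

func()
print(events)  # ['ctor', 'dtor']
```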
Could you help me understand and resolve this issue? | closed | 2024-10-08T20:30:46Z | 2024-11-11T18:54:17Z | https://github.com/exaloop/codon/issues/597 | [] | gyuro | 3 |
python-visualization/folium | data-visualization | 1,762 | matplotlib colormap for Choropleth(fill_color=map) and HeatMap() | **Is your feature request related to a problem? Please describe.**
See the maps on this demo:
https://www.kaggle.com/code/alexisbcook/interactive-maps
It looks okay with `Choropleth(fill_color='YlGnBu')` but it would look better with a proper perceptually uniform colormap.
Same for `HeatMap()`.
**Describe the solution you'd like**
I would like to plug `matplotlib.cm.viridis` into the color arguments for these functions.
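In the meantime, a matplotlib colormap can be flattened into the hex-color structures folium already understands — for instance a `{position: '#rrggbb'}` dict for `HeatMap`'s `gradient` argument. The conversion side is shown below; actually feeding the result to `HeatMap(gradient=...)` is the untested assumption here:

```python
from matplotlib import colormaps
from matplotlib.colors import to_hex

def cmap_to_gradient(cmap, n=6):
    """Sample a matplotlib colormap into a {position: hex} dict of the
    shape folium.plugins.HeatMap accepts for its `gradient` argument."""
    return {i / (n - 1): to_hex(cmap(i / (n - 1))) for i in range(n)}

gradient = cmap_to_gradient(colormaps["viridis"])
print(gradient[0.0])  # '#440154' -- the dark-purple end of viridis
print(gradient[1.0])  # '#fde725' -- the yellow end
```

The same hex list (`list(gradient.values())`) could in principle be handed to anything that takes discrete color steps, which is the narrow format Choropleth seems to want.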
**Describe alternatives you've considered**
I've tried to build a Python function that would give Choropleth the color format it needs, but it looks like it only accepts a narrow and rigid scheme of colors. | closed | 2023-05-19T20:35:38Z | 2023-05-22T09:40:06Z | https://github.com/python-visualization/folium/issues/1762 | [] | FlorinAndrei | 1 |
thtrieu/darkflow | tensorflow | 1,055 | How to convert our weights to Darknet weights | I have got a better result than Darknet, so how can I convert our format weights to Darknet?
Thanks a lot! | open | 2019-06-24T07:32:22Z | 2019-07-10T18:20:47Z | https://github.com/thtrieu/darkflow/issues/1055 | [] | tianfengyijiu | 1
flasgger/flasgger | rest-api | 448 | Buggy resolve_path function when using decorators | The "resolve_path" function inside the "swag_from" function has really weird behavior when used with an endpoint function that uses a decorator declared elsewhere. Because the origin of a file is read via
`os.path.abspath(obj.__globals__['__file__'])`, it will return the filename of where the decorator is located, not the filename of where the endpoint is implemented.
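One possible direction — valid only under the assumption that decorators apply `functools.wraps` — is to unwrap the endpoint before reading `__globals__['__file__']`, since `inspect.unwrap` follows the `__wrapped__` chain back to the original function, whose globals point at the file that defined it. A sketch of the mechanism, not flasgger's code:

```python
import functools
import inspect

def logged(func):
    @functools.wraps(func)  # records the original in wrapper.__wrapped__
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@logged
def endpoint():
    return "ok"

# When the decorator lives in another module, the wrapper's __globals__
# point at that module's file; unwrapping first recovers the endpoint
# itself, whose __globals__['__file__'] is the file that defined it.
original = inspect.unwrap(endpoint)
print(original is endpoint.__wrapped__)  # True
print(original.__name__)                 # 'endpoint'
```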
I'm not really sure how to fix that at the moment. | open | 2020-12-28T20:30:17Z | 2020-12-28T20:30:17Z | https://github.com/flasgger/flasgger/issues/448 | [] | mjurenka | 0 |
holoviz/panel | matplotlib | 7,220 | build-docs errors because of missing firefox or geckodriver | I've followed the new developer guide. When I run `docs-build` I see
```bash
Used existing FileDownload thumbnail
getting thumbnail code for /home/jovyan/repos/private/panel/examples/reference/widgets/FileDropper.ipynb
Path exists True
Traceback (most recent call last):
File "/tmp/tmpgivt71lq", line 67, in <module>
from nbsite.gallery.thumbnailer import thumbnail;thumbnail(file_dropper, '/home/jovyan/repos/private/panel/doc/reference/widgets/thumbnails/FileDropper')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel/.pixi/envs/docs/lib/python3.11/site-packages/nbsite/gallery/thumbnailer.py", line 133, in thumbnail
obj.save(basename+'.png')
File "/home/jovyan/repos/private/panel/panel/viewable.py", line 964, in save
return save(
^^^^^
File "/home/jovyan/repos/private/panel/panel/io/save.py", line 272, in save
return save_png(
^^^^^^^^^
File "/home/jovyan/repos/private/panel/panel/io/save.py", line 85, in save_png
state.webdriver = webdriver_control.create()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel/.pixi/envs/docs/lib/python3.11/site-packages/bokeh/io/webdriver.py", line 180, in create
driver = self._create(kind, scale_factor=scale_factor)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel/.pixi/envs/docs/lib/python3.11/site-packages/bokeh/io/webdriver.py", line 198, in _create
raise RuntimeError("Neither firefox and geckodriver nor a variant of chromium browser and " \
RuntimeError: Neither firefox and geckodriver nor a variant of chromium browser and chromedriver are available on system PATH. You can install the former with 'conda install -c conda-forge firefox geckodriver'.
FileDropper thumbnail export failed
```
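Bokeh's PNG export looks for either firefox+geckodriver or a chromium-family browser plus chromedriver on `PATH`. A quick stdlib check of what a docs environment is missing, loosely mirroring the fallback order in the error message (the exact browser names bokeh probes for are an assumption here):

```python
import shutil

def available_webdriver():
    # Firefox/geckodriver pair first, then a chromium variant + chromedriver.
    if shutil.which("firefox") and shutil.which("geckodriver"):
        return "firefox"
    for browser in ("chromium", "chromium-browser", "google-chrome"):
        if shutil.which(browser) and shutil.which("chromedriver"):
            return "chromium"
    return None

# In an environment without the packages installed this returns None,
# which is the condition that makes bokeh raise the RuntimeError above.
print(available_webdriver())
```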
I guess the workaround is to make sure all thumbnails are created. But the real solution would to have pixi install the firefox or geckodriver as a part of the docs installation? | open | 2024-09-01T04:22:34Z | 2025-02-20T15:04:51Z | https://github.com/holoviz/panel/issues/7220 | [] | MarcSkovMadsen | 2 |
Zeyi-Lin/HivisionIDPhotos | fastapi | 206 | Calling /idphoto returns a 500 internal server error | 

| open | 2024-11-11T01:48:31Z | 2024-12-13T11:49:33Z | https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/206 | [] | wangjun87 | 1 |
vitalik/django-ninja | rest-api | 767 | [BUG] Union does not work properly | Here is my code required for minimal setup for the app:
models.py
```py
from django.db import models
from polymorphic.models import PolymorphicModel
class Page(models.Model):
name = models.CharField(max_length=50, unique=True)
class Section(PolymorphicModel):
page = models.ForeignKey(Page, on_delete=models.CASCADE)
class SimpleSection(Section):
text = models.CharField(max_length=255,blank=True)
class FeatureSection(Section):
name = models.CharField(max_length=50)
class FeatureCell(models.Model):
feature_section = models.ForeignKey(FeatureSection, on_delete=models.CASCADE, related_name='cells')
text = models.TextField()
class GallerySection(Section):
name = models.CharField(max_length=50)
class GalleryImage(models.Model):
gallery_section = models.ForeignKey(GallerySection, on_delete=models.CASCADE, related_name='images')
image = models.ImageField()
```
schema.py
```py
from typing import List, Union
from ninja import ModelSchema
from .models import (FeatureCell, FeatureSection, GalleryImage, GallerySection,
Page, Section, SimpleSection)
class SimpleSectionSchema(ModelSchema):
kind: str
class Config:
model = SimpleSection
model_fields = ['id', 'page', 'text']
@staticmethod
def resolve_kind(obj: Section) -> str:
return obj.__class__.__name__
class FeatureCellSchema(ModelSchema):
class Config:
model = FeatureCell
model_fields = ['id', 'text']
class FeatureSectionSchema(ModelSchema):
cells: List['FeatureCellSchema']
kind: str
class Config:
model = FeatureSection
model_fields = ['id', 'page', 'name']
@staticmethod
def resolve_kind(obj: Section) -> str:
return obj.__class__.__name__
class GalleryImageSchema(ModelSchema):
class Config:
model = GalleryImage
model_fields = ['id', 'image']
class GallerySectionSchema(ModelSchema):
images: List['GalleryImageSchema']
kind: str
class Config:
model = GallerySection
model_fields = ['id', 'name']
@staticmethod
def resolve_kind(obj: Section) -> str:
return obj.__class__.__name__
class PageSchema(ModelSchema):
sections: List[Union[SimpleSectionSchema, FeatureSectionSchema, GallerySectionSchema]]
class Config:
model = Page
model_fields = ['id', 'name']
@staticmethod
def resolve_sections(obj: Page) -> List[Union[SimpleSectionSchema, FeatureSectionSchema, GallerySectionSchema]]:
return obj.section_set.all()
```
schema.py
```py
from typing import List
from django.shortcuts import get_object_or_404
from ninja import Router
from .models import Page
from .schemas import PageSchema
router = Router()
@router.get('/{page_id}', response=PageSchema)
def get_page(request, page_id: int):
return get_object_or_404(Page, id=page_id)
```
Django Ninja seems to get the schema very right:

However when I query I get response for all the records as they were `SimpleSection` records (by inspecting the fields visible in the response). There are fields missing for `FeatureSection` and `GallerySection` kinds - `cells` and `images` respectively.
```json
{
"id": 1,
"name": "Test Page",
"sections": [
{
"id": 1,
"page": 1,
"text": "text goes here",
"kind": "SimpleSection"
},
{
"id": 2,
"page": 1,
"text": null,
"kind": "FeatureSection"
},
{
"id": 3,
"page": 1,
"text": null,
"kind": "GallerySection"
}
]
}
```
I swear to God it was working just a day ago, but now it stopped. I have no idea what has changed. I have all package versions locked in requirements.txt file so there is no way there was some upgrade to any of the packages.
To me it looks like maybe some caching issue for the schema generation process, though I have not dived into the source code just yet.
Can you tell whether I am doing anything wrong, or whether this is indeed a bug?
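For context, one plausible mechanism (an assumption on my part, not verified against the Django Ninja source): Pydantic v1 coerces `Union` members left to right and keeps the first one that validates, so every section row can collapse into the first schema in the list. A library-free sketch of that first-match behavior, using names that mirror the schemas above:

```python
# Hypothetical, simplified model of left-to-right Union coercion;
# this is NOT Django Ninja/Pydantic code, just an illustration.
def coerce(row: dict, candidates: list[dict]) -> str:
    for schema in candidates:  # tried in declaration order, first match wins
        if schema["required"].issubset(row):
            return schema["name"]
    return "unmatched"

simple = {"name": "SimpleSectionSchema", "required": {"id", "page"}}
feature = {"name": "FeatureSectionSchema", "required": {"id", "page", "cells"}}

# A FeatureSection row still satisfies SimpleSection's fields, so it wins first:
row = {"id": 2, "page": 1, "cells": ["c1", "c2"]}
print(coerce(row, [simple, feature]))  # -> SimpleSectionSchema
```

If this is the cause, declaring the most specific schemas first in the `Union` would be one thing to try.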
**Versions (please complete the following information):**
- Python version: 3.9
- Django version: 4.2
- Django-Ninja version: 0.21.0
- Pydantic version: 1.10.7
- Django-Polymorphic: 3.1.0
| closed | 2023-05-26T21:52:14Z | 2024-03-18T09:28:49Z | https://github.com/vitalik/django-ninja/issues/767 | [] | an0o0nym | 4 |
ydataai/ydata-profiling | jupyter | 1,739 | Generating report looks broken (visually) | ## Generating report looks broken (visually)
See screenshot:

## Reproducible example:
[example.zip](https://github.com/user-attachments/files/19427955/example.zip)
**Full conda environment: `conda env export` (zipped):**
[conda_env.yaml.zip](https://github.com/user-attachments/files/19428004/conda_env.yaml.zip)
## Expected Behaviour
- There should be no red rectangles indicating error(s)
- The link should ideally go away, as it mistakenly suggests that we are using some old version and should upgrade (not to mention that the link is broken)
### Data Description
Attached zip with MRE - minimalistic fully reproducible example.
### Code that reproduces the bug
```Python
# The minimalistic code & data are in the attached *.zip example.
from ydata_profiling import ProfileReport

pp = ProfileReport(df)
pp.to_file("Report.html")
```
### pandas-profiling version
ydata-profiling=4.15.1
### Dependencies
```Text
**Libs:**
ipywidgets=8.1.5
jupyterlab=4.3.5
ydata-profiling=4.15.1
```
Full conda environment: `conda env export` (zipped):
https://github.com/user-attachments/files/19428004/conda_env.yaml.zip
### OS
MacOS: Sequoia 15.3.2
### Checklist
- [x] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [x] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [x] The issue has not been resolved by the entries listed under [Common Issues](https://docs.profiling.ydata.ai/latest/support-contribution/contribution_guidelines/). | open | 2025-03-24T10:44:24Z | 2025-03-24T10:48:00Z | https://github.com/ydataai/ydata-profiling/issues/1739 | [
"needs-triage"
] | stefansimik | 0 |
hyperspy/hyperspy | data-visualization | 2,518 | Example files are not copied to site-packages when installing | In https://github.com/hyperspy/hyperspy/pull/2478, the example datasets were renamed to have `.hspy` extension, but in `setup.py` these files are not included under `package_data`, so they are not copied to the install directory. This results in a `ValueError` when trying to use these files:
```python
>>> hs.datasets.example_signals.EDS_SEM_Spectrum()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.8/site-packages/hyperspy/misc/example_signals_loading.py", line 35, in load_1D_EDS_SEM_spectrum
return load(file_path)
File "/usr/lib/python3.8/site-packages/hyperspy/io.py", line 275, in load
raise ValueError('No filename matches this pattern')
ValueError: No filename matches this pattern
``` | closed | 2020-09-01T04:54:50Z | 2021-01-05T13:19:54Z | https://github.com/hyperspy/hyperspy/issues/2518 | [
"type: bug",
"type: regression"
] | jat255 | 0 |
graphistry/pygraphistry | jupyter | 538 | [BUG] feat and umap cache incomplete runs | **Describe the bug**
When canceling (or getting an exception) during feat/umap, that gets cached for subsequent runs. We expect them to be not cached, e.g., recomputed, on future runs.
**To Reproduce**
**Expected behavior**
The caching flow should catch the interrupt/exn, ensure no caching, and rethrow.
**Actual behavior**
**Screenshots**
**Browser environment (please complete the following information):**
**Graphistry GPU server environment**
**PyGraphistry API client environment**
**Additional context**
| open | 2024-01-11T10:04:57Z | 2024-01-11T10:05:10Z | https://github.com/graphistry/pygraphistry/issues/538 | [
"bug",
"help wanted",
"good-first-issue"
] | lmeyerov | 0 |
sinaptik-ai/pandas-ai | pandas | 968 | SmartDataframe API Key Error Depending on Prompt | ### System Info
OS version: `macOS 14.2.1 (23C71)`
Python version: `3.11.8`
`pandasai` version: `1.5.20`
### 🐛 Describe the bug
When calling the chat method of `SmartDataframe`, it may throw the following error depending on the prompt:
```
"Unfortunately, I was not able to answer your question, because of the following error:\n\nError code: 401 - {'error': {'message': 'Incorrect API key provided: ***************************************. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}\n"
```
### Reproduction steps
Run this code:
```python
import pandas as pd
from pandasai import SmartDataframe
from pandasai.prompts import AbstractPrompt


class MyCustomPrompt(AbstractPrompt):
    @property
    def template(self):
        return """
You are provided with a dataset that contains sales data by brand across various regions. Here's the metadata for the given pandas DataFrames:
{dataframes}
Given this data, please follow these steps by Yassin:
0. Acknowledge the user's query and provide context for the analysis.
1. **Data Analysis**: < custom instructions >
2. **Opportunity Identification**: < custom instructions >
3. **Reasoning**: < custom instructions >
4. **Recommendations**: < custom instructions >
5. **Output**: Return a dictionary with:
- type (possible values: "text", "number", "dataframe", "plot")
- value (can be a string, a dataframe, or the path of the plot, NOT a dictionary)
Example: {{ "type": "text", "value": < custom instructions > }}
``python
def analyze_data(dfs: list[pd.DataFrame]) -> dict:
    # Code goes here (do not add comments)

# Declare a result variable
result = analyze_data(dfs)
``
Using the provided dataframes (`dfs`), update the Python code based on the user's query:
{conversation}
# Updated code:
# """


df = pd.DataFrame({
    "brand": ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"],
    "region": ["North America", "North America", "North America", "North America", "North America", "Europe", "Europe", "Europe", "Europe", "Europe"],
    "sales": [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
})

sdf = SmartDataframe(df,
                     name="df",
                     config={
                         "custom_prompts": {
                             "generate_python_code": MyCustomPrompt()
                         }
                     })
```
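As a side note on templates like the one above: judging by the `{dataframes}`/`{conversation}` placeholders and the doubled braces, the template appears to be filled with `str.format`-style substitution (an assumption about this PandasAI version), so literal braces must be doubled (`{{ }}`), as the example dictionary in the prompt already does. A quick stdlib illustration of why:

```python
# Doubled braces survive substitution as literal braces:
template = 'Return {{ "type": "text", "value": ... }} for: {conversation}'
filled = template.format(conversation="what is the most popular brand")
print(filled)
# -> Return { "type": "text", "value": ... } for: what is the most popular brand

# A single, unescaped brace would instead blow up at substitution time:
try:
    'Return { "type": "text" } for: {conversation}'.format(conversation="x")
except (KeyError, ValueError) as exc:
    print(type(exc).__name__)  # a formatting error, not an OpenAI call
```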
This prompt will work:
```python
sdf.chat("What is the average sales by continent?")
```
This prompt will error:
```python
sdf.chat("what is the most popular brand")
```
Example:
<img width="689" alt="Screenshot 2024-02-28 at 10 15 57 PM" src="https://github.com/Sinaptik-AI/pandas-ai/assets/108594964/90f19f78-3e62-4ec8-b12d-ea9c0d01b119"> | closed | 2024-02-29T06:16:49Z | 2024-03-19T07:29:11Z | https://github.com/sinaptik-ai/pandas-ai/issues/968 | [] | yassinkortam | 1 |
saulpw/visidata | pandas | 2,444 | [loader] for linters: pylint, ruff, and more | People using linters would find Visidata very useful. Linter output is often long, with a lot of tabulated results that users would like to filter on. A perfect use for visidata!
So for future reference, here is a list of some Python linters that have JSON output:
```
pylint --output-format=json
pylint --output-format=json2
ruff check --output-format json
ruff check --output-format json-lines
```
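Once parsed, that JSON is exactly the kind of tabulated output VisiData filters well. A small sketch of the structure (field names assumed from ruff's documented JSON output, trimmed to a few fields):

```python
import json

# Hypothetical ruff `--output-format json` payload (shape assumed, not verified):
raw = """
[
  {"code": "F401", "filename": "app.py", "location": {"row": 1, "column": 8},
   "message": "`os` imported but unused"},
  {"code": "E501", "filename": "app.py", "location": {"row": 40, "column": 89},
   "message": "line too long"}
]
"""
findings = json.loads(raw)

# The kind of filter a user would otherwise do interactively in VisiData:
unused_imports = [f for f in findings if f["code"] == "F401"]
print([f["location"]["row"] for f in unused_imports])  # -> [1]
```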
If anyone knows of popular linters for other languages than Python, please reply so we can make a list. Especially if the linters already have output in formats that are easy to handle, like JSON or JSONL. | closed | 2024-07-09T04:48:03Z | 2024-09-22T05:15:59Z | https://github.com/saulpw/visidata/issues/2444 | [
"wishlist",
"loader"
] | midichef | 2 |
horovod/horovod | machine-learning | 3,884 | Reporting a vulnerability | Hello!
I hope you are doing well!
We are a security research team. Our tool automatically detected a vulnerability in this repository. We want to disclose it responsibly. GitHub has a feature called **Private vulnerability reporting**, which enables security research to privately disclose a vulnerability. Unfortunately, it is not enabled for this repository.
Can you enable it, so that we can report it?
Thanks in advance!
PS: you can read about how to enable private vulnerability reporting here: https://docs.github.com/en/code-security/security-advisories/repository-security-advisories/configuring-private-vulnerability-reporting-for-a-repository | closed | 2023-04-10T11:49:29Z | 2023-12-15T04:10:49Z | https://github.com/horovod/horovod/issues/3884 | [
"wontfix"
] | igibek | 2 |
robusta-dev/robusta | automation | 875 | dynamic slack sink configuration for channel | **Is your feature request related to a problem?**
We would like to be able to configure the slack sink to match a label on a Finding so the slack sink can infer what channel to send a finding to.
**Describe the solution you'd like**
something like:
```yaml
- slack_sink:
    name: slack-sink
    slack_channel: alerts
    api_key: xyz-abc
    slack_channel_label_re_match: (team_slack_channel|slack_channel)
```
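A rough sketch of how a `slack_channel_label_re_match` option could be interpreted on the sink side (the function name and semantics here are guesses for illustration, not Robusta internals):

```python
import re

def resolve_channel(labels: dict, default_channel: str, pattern: str) -> str:
    """Return the value of the first finding label whose key matches pattern,
    falling back to the sink's configured default channel."""
    rx = re.compile(pattern)
    for key, value in labels.items():
        if rx.fullmatch(key):
            return value
    return default_channel

finding_labels = {"app": "checkout", "team_slack_channel": "payments-alerts"}
channel = resolve_channel(finding_labels, "alerts", r"(team_slack_channel|slack_channel)")
print(channel)  # -> payments-alerts
```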
**Describe alternatives you've considered**
Specifying a sink for every team's Slack channel, or creating a custom sink that can make the API call to Slack with a dynamic channel.
**Additional context**
none.
| closed | 2023-05-16T14:13:19Z | 2023-08-28T11:08:08Z | https://github.com/robusta-dev/robusta/issues/875 | [] | mitch-mckenzie | 1 |
FactoryBoy/factory_boy | django | 739 | Difference factory_boy vs model bakery? | Hey guys,
why would I use factory boy over model bakery or vice versa? Could anyone shed some light on the similarities and differences? I tried to look it up but really can't find anything in either docs and also not on the internets.
Your help is greatly appreciated. | open | 2020-05-31T16:54:40Z | 2023-11-09T22:46:27Z | https://github.com/FactoryBoy/factory_boy/issues/739 | [
"Q&A",
"Doc",
"Django"
] | lggwettmann | 4 |
minivision-ai/photo2cartoon | computer-vision | 23 | How much GPU memory is needed for training? | Error: Tried to allocate 20.00 MiB (GPU 0; 5.94 GiB total capacity; 5.22 GiB already allocated; 20.81 MiB free; 5.42 GiB reserved in total by PyTorch
Is it because my GPU memory is too small? T.T | closed | 2020-05-18T11:28:19Z | 2020-05-26T01:03:08Z | https://github.com/minivision-ai/photo2cartoon/issues/23 | [] | fire717 | 1 |
developmentseed/lonboard | jupyter | 581 | Remove pyarrow as hard dependency | **Is your feature request related to a problem? Please describe.**
pyarrow is a massive, monolithic dependency. It can be hard to install in some places, and can't currently be installed in Pyodide. It's certainly a monumental effort to get it to work in Pyodide, but I think it would be valuable for lonboard to wean off of pyarrow.
The core enabling factor here is the [Arrow PyCapsule Interface](https://arrow.apache.org/docs/format/CDataInterface/PyCapsuleInterface.html). It allows Python Arrow libraries to exchange Arrow data _at the C level_ at no cost. This means that we can interface at no cost with any user who's already using pyarrow, but not be required to use pyarrow ourselves. I've been promoting its use throughout the Python Arrow ecosystem (https://github.com/apache/arrow/issues/39195#issuecomment-2245718008), and hoping this grows into something as core to tabular data processing as the buffer protocol is to numpy.
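The key mechanic is that the interface is a duck-typed protocol rather than a shared Python dependency: any object exposing the `__arrow_c_schema__`/`__arrow_c_array__`/`__arrow_c_stream__` dunders can be consumed by any Arrow library. A minimal sketch (the dunder names are real; the classes are made up):

```python
class MiniArrowArray:
    """Stand-in for any library's array type that speaks the protocol."""

    def __arrow_c_array__(self, requested_schema=None):
        # A real implementation returns (schema_capsule, array_capsule)
        # built via the Arrow C Data Interface; elided here.
        raise NotImplementedError

def accepts_arrow_data(obj) -> bool:
    # Consumers check for the protocol instead of isinstance(pyarrow.Array),
    # so pyarrow, arro3, nanoarrow, etc. interoperate at no conversion cost.
    return hasattr(obj, "__arrow_c_array__") or hasattr(obj, "__arrow_c_stream__")

print(accepts_arrow_data(MiniArrowArray()))  # -> True
print(accepts_arrow_data([1, 2, 3]))         # -> False
```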
As part of working to build the ecosystem, I created [arro3](https://github.com/kylebarron/arro3), a new, very minimal Python Arrow implementation that wraps the Rust Arrow implementation.
I think that it should be possible to swap out pyarrow for arro3, which is about 1% of the normal pyarrow installation size.
It's also symbiotic for the ecosystem if Lonboard shows the benefits of modular Arrow libraries in Python.
**Describe the solution you'd like**
We'll keep pyarrow as a required dependency for GeoPandas/Pandas interop. pyarrow has implemented `pyarrow.Table.from_pandas` and that's not something I want to even think about replicating.
But aside from that, pretty much everything is doable in arro3 and geoarrow-rust.
- [ ] [`pa.Table.from_arrays`](https://github.com/developmentseed/lonboard/blob/7eb19e09e0fd83b7f71b6cc745e1ea42bb410083/lonboard/_cli.py#L56)
- [ ] [Construct a table from named columns](https://github.com/developmentseed/lonboard/blob/12300ad5f1bbea6e0d696e6bcf68403ca7e5186b/lonboard/_serialization.py#L54)
- [ ] [Write Parquet with specified compression and compression level](https://github.com/developmentseed/lonboard/blob/12300ad5f1bbea6e0d696e6bcf68403ca7e5186b/lonboard/_serialization.py#L33-L38)
- [x] [Access column from table](https://github.com/developmentseed/lonboard/blob/95dbd52330f33c2d93ad5f9504e9b9907b8ab62a/lonboard/_layer.py#L228-L229), positionally
- [x] [Access Schema from table and field from schema, positionally](https://github.com/developmentseed/lonboard/blob/95dbd52330f33c2d93ad5f9504e9b9907b8ab62a/lonboard/_layer.py#L228-L229)
- [x] [Access individual arrays from a chunked array](https://github.com/developmentseed/lonboard/blob/53f0d59364a1a7c8023787d3d06abad538399584/lonboard/_geoarrow/ops/bbox.py#L64)
- [ ] [arr.flatten() and to_numpy()](https://github.com/developmentseed/lonboard/blob/53f0d59364a1a7c8023787d3d06abad538399584/lonboard/_geoarrow/ops/bbox.py#L56)
- [x] [Access metadata on field](https://github.com/developmentseed/lonboard/blob/b2ca1929c4de491c2347455c0b8717de6fc78fb1/lonboard/_geoarrow/ops/coord_layout.py#L38-L39)
- [ ] [Construct a FixedSizeListArray from numpy coords and a list size](https://github.com/developmentseed/lonboard/blob/b2ca1929c4de491c2347455c0b8717de6fc78fb1/lonboard/_geoarrow/ops/coord_layout.py#L79) (this is a bit harder, but is also doing a geoarrow operation that I should be able to do in geoarrow-rs anyways)
- [x] [Constructor for ChunkedArray](https://github.com/developmentseed/lonboard/blob/b2ca1929c4de491c2347455c0b8717de6fc78fb1/lonboard/_geoarrow/ops/coord_layout.py#L68) from a Python iterable of array objects
- [x] [Access field metadata](https://github.com/developmentseed/lonboard/blob/95dbd52330f33c2d93ad5f9504e9b9907b8ab62a/lonboard/_utils.py#L26)
CLI only:
- [ ] [Read ParquetFile and access geo metadata](https://github.com/developmentseed/lonboard/blob/7eb19e09e0fd83b7f71b6cc745e1ea42bb410083/lonboard/_cli.py#L65-L66)
Other notes:
- Add numpy as direct dependency | closed | 2024-07-24T21:29:32Z | 2024-08-27T14:58:34Z | https://github.com/developmentseed/lonboard/issues/581 | [] | kylebarron | 2 |
JaidedAI/EasyOCR | pytorch | 519 | Bad case for Chinese character detection | Using params `langs =['ch_sim', 'en']` , I test my own image. However the results aren't satisfied.
input image:

output result:

In the next scene, it works.

| closed | 2021-08-17T06:34:48Z | 2021-10-06T08:59:39Z | https://github.com/JaidedAI/EasyOCR/issues/519 | [] | HuitMahoon | 3 |
sqlalchemy/alembic | sqlalchemy | 625 | add virtualenv basics to the tutorial | I set up Alembic per the instructions in the [tutorial](https://alembic.sqlalchemy.org/en/latest/tutorial.html), but when I run `alembic revision --autogenerate` I get a `ModuleNotFoundError` when my base `MetaData` object is imported into `env.py`.
My project is structured like this:
```
myproj
alembic.ini
alembic/
env.py
versions/
myapp/
__init__.py
db/
__init__.py
models.py
```
In `alembic/env.py` I have set up the `target_metadata` like this:
```py
from myapp.db.models import Base
target_metadata = Base.metadata
```
If I run `alembic revision -m "Add a table"`, it works fine, but if I run `alembic revision --autogenerate -m "Add a table"` I get `ModuleNotFoundError: No module named 'myapp'`.
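For what it's worth, the usual cause of this symptom is that the project root isn't on `sys.path` when Alembic executes `env.py` (plain `alembic revision` doesn't run `env.py` by default, while `--autogenerate` does, which would explain why only the latter fails). A common workaround, sketched under the assumption of the layout above, is to prepend the project root inside `alembic/env.py` before importing the models:

```python
import os
import sys

# In alembic/env.py, the project root is one directory above env.py:
#     project_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
#     sys.path.insert(0, project_root)
# after which `from myapp.db.models import Base` resolves.

# The same path arithmetic, demonstrated on the layout from the report:
env_py = os.path.join("myproj", "alembic", "env.py")
project_root = os.path.dirname(os.path.dirname(os.path.abspath(env_py)))
print(project_root.endswith("myproj"))  # -> True
```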
#### Environment
<dl>
<dt>Python</dt>
<dd>3.6.9</dd>
<dt>Alembic</dt>
<dd>1.3.1</dd>
</dl> | closed | 2019-11-15T22:55:58Z | 2019-11-16T15:18:11Z | https://github.com/sqlalchemy/alembic/issues/625 | [
"question",
"autogenerate - detection",
"documentation"
] | darthmall | 8 |