| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
amidaware/tacticalrmm | django | 1,861 | Tray Icon for TRMM | I'd love a TRMM tray icon feature to be implemented in Tactical, where a user can right-click the tray icon and be presented with the following options:
- Identify My PC: this gives them an output of their hostname.
- Submit a support request: this gives them a text box where they can type in an issue they are experiencing; it then either sends an email to the helpdesk or generates an alert in the dashboard.
- Show support contact: an editable field showcasing an email and phone number, or whatever is deemed necessary.
- Go to the support website: a button that opens a web browser to a user-defined website.
I know there is an alternative below. But it would be really nice if this was built into the agent or a feature you could enable for agents.
https://github.com/conlan0/Trayicon
| closed | 2024-05-02T06:03:26Z | 2024-05-02T07:01:52Z | https://github.com/amidaware/tacticalrmm/issues/1861 | [] | screwlooseit | 1 |
davidsandberg/facenet | tensorflow | 1,133 | TypeError: '<=' not supported between instances of 'NoneType' and 'int' | File "src/train_softmax.py", line 308, in train
    if lr<=0:
TypeError: '<=' not supported between instances of 'NoneType' and 'int'
------------------------------------------------------------------------------------------------------------------------------------
I got this error after the 275th epoch, because of the learning rate schedule in learning_rate_schedule_classifier_vggface2.txt:
# Learning rate schedule
# Maps an epoch number to a learning rate
0: 0.05
100: 0.005
200: 0.0005
276: -1
What should I do? Thanks for the help.
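For context, here is a pure-Python sketch (hypothetical, not facenet's actual schedule parser) of a step-schedule lookup plus a guard that handles both the `-1` "stop" sentinel and a `None` fallthrough, which would avoid the `NoneType` comparison error:

```python
def lookup_rate(schedule, epoch):
    """Return the rate for the greatest schedule key <= epoch, else None."""
    rate = None
    for start_epoch in sorted(schedule):
        if epoch >= start_epoch:
            rate = schedule[start_epoch]
    return rate

schedule = {0: 0.05, 100: 0.005, 200: 0.0005, 276: -1}

lr = lookup_rate(schedule, 275)   # 0.0005, still training
lr = lookup_rate(schedule, 276)   # -1 signals "stop training"

# Defensive guard covering both the sentinel and a missing entry:
if lr is None or lr <= 0:
    pass  # stop training instead of comparing None with an int
```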
| open | 2020-01-31T13:45:55Z | 2020-07-01T08:46:38Z | https://github.com/davidsandberg/facenet/issues/1133 | [] | pasa13142 | 2 |
vaexio/vaex | data-science | 2,234 | [FEATURE-REQUEST] The join "how" doesn't support "cross" option | Hi, I have been using vaex recently. Vaex is an awesome package, but I find that it doesn't support the "cross" option of the join API. I want to cross-join two dataframes like [pandas](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html?highlight=merge#pandas.DataFrame.merge).
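A cross join is just the Cartesian product of the two row sets. A pure-Python sketch of the semantics (the usual workaround in libraries without a cross option is to add a constant key column to both frames and inner-join on it):

```python
import itertools

def cross_join(left, right):
    # Every left row paired with every right row: what how="cross" returns.
    return [{**lrow, **rrow} for lrow, rrow in itertools.product(left, right)]

rows = cross_join([{"a": 1}, {"a": 2}], [{"b": "x"}, {"b": "y"}])
# 2 x 2 = 4 combined rows
```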
Can you help me? I really need the feature to build my cryptography system. | closed | 2022-10-18T09:16:36Z | 2022-10-20T03:48:46Z | https://github.com/vaexio/vaex/issues/2234 | [] | hewittzgh | 2 |
albumentations-team/albumentations | deep-learning | 1,588 | [Speed up] Currently Gaussian Noise is not optimized for separate uint8 and float32 treatment | It could happen that
```python
@clipped
def gauss_noise(image: np.ndarray, gauss: np.ndarray) -> np.ndarray:
    image = image.astype("float32")
    return image + gauss
```
could be optimized with something like:
```python
def gauss_noise_optimized(image: np.ndarray, gauss: np.ndarray) -> np.ndarray:
    if image.dtype == np.float32:
        gauss = gauss.astype(np.float32)
        noisy_image = cv2.add(image, gauss)
    elif image.dtype == np.uint8:
        gauss = np.clip(gauss, 0, 255).astype(np.uint8)
        noisy_image = cv2.add(image, gauss)
    else:
        raise TypeError("Unsupported image dtype. Expected uint8 or float32.")
    return noisy_image
```
This requires benchmarking, but technically it is a one-function-replacement pull request, which makes it a good first issue.
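For intuition on why the uint8 path in the longer snippet below can win: a uint8 pixel has only 256 possible values, so the operation can be precomputed once as a lookup table (`cv2.LUT` applies such a table in C). A pure-Python sketch of the idea:

```python
def shift_uint8_lut(pixels, shift):
    # Precompute the clipped result for all 256 possible uint8 values,
    # then map each pixel through the table: O(256 + n) work instead of
    # clipping every pixel individually.
    lut = [min(max(v + shift, 0), 255) for v in range(256)]
    return [lut[p] for p in pixels]

shifted = shift_uint8_lut([0, 100, 250], 10)  # [10, 110, 255]
```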
Although a more involved approach could be taken if it makes it even faster:
```python
@clipped
def _shift_rgb_non_uint8(img: np.ndarray, r_shift: float, g_shift: float, b_shift: float) -> np.ndarray:
    if r_shift == g_shift == b_shift:
        return img + r_shift

    result_img = np.empty_like(img)
    shifts = [r_shift, g_shift, b_shift]
    for i, shift in enumerate(shifts):
        result_img[..., i] = img[..., i] + shift

    return result_img


def _shift_image_uint8(img: np.ndarray, value: np.ndarray) -> np.ndarray:
    max_value = MAX_VALUES_BY_DTYPE[img.dtype]
    lut = np.arange(0, max_value + 1).astype("float32")
    lut += value
    lut = np.clip(lut, 0, max_value).astype(img.dtype)
    return cv2.LUT(img, lut)


@preserve_shape
def _shift_rgb_uint8(img: np.ndarray, r_shift: ScalarType, g_shift: ScalarType, b_shift: ScalarType) -> np.ndarray:
    if r_shift == g_shift == b_shift:
        height, width, channels = img.shape
        img = img.reshape([height, width * channels])
        return _shift_image_uint8(img, r_shift)

    result_img = np.empty_like(img)
    shifts = [r_shift, g_shift, b_shift]
    for i, shift in enumerate(shifts):
        result_img[..., i] = _shift_image_uint8(img[..., i], shift)

    return result_img


def shift_rgb(img: np.ndarray, r_shift: ScalarType, g_shift: ScalarType, b_shift: ScalarType) -> np.ndarray:
    if img.dtype == np.uint8:
        return _shift_rgb_uint8(img, r_shift, g_shift, b_shift)

    return _shift_rgb_non_uint8(img, r_shift, g_shift, b_shift)
``` | closed | 2024-03-16T00:52:09Z | 2024-10-31T02:20:47Z | https://github.com/albumentations-team/albumentations/issues/1588 | [
"good first issue",
"Speed Improvements"
] | ternaus | 3 |
activeloopai/deeplake | computer-vision | 2,977 | [BUG] Deeplake dataset row access fails under multiprocessing | ### Severity
P0 - Critical breaking issue or missing functionality
### Current Behavior
Accessing deeplake dataset rows under a multiprocessing framework such as `concurrent.futures` results in an error.
Consider the following script, which creates a dummy deeplake dataset and tries to access it with multiprocessing:
```
import concurrent.futures  # "import concurrent" alone does not expose .futures
from functools import partial

import deeplake
from deeplake import Dataset

DS_PATH = "/tmp/test_deeplake"

def create_deeplake_ds():
    ds = deeplake.empty(DS_PATH, overwrite=True)
    with ds:
        ds.create_tensor("dummy", htype="text")
        ds.dummy.append("dummy_test")

def worker(idx: int, ds: Dataset) -> None:
    print("Row", ds[idx])

if __name__ == "__main__":
    use_multi = True
    create_deeplake_ds()
    ds = deeplake.load(DS_PATH, read_only=True)
    if use_multi:
        with concurrent.futures.ProcessPoolExecutor() as executor:
            results = list(executor.map(partial(worker, ds=ds), [0]))
    else:
        results = worker(0, ds=ds)
```
With deeplake 3.9.26, this gives the following error:
```
concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/Users/abhay/miniconda3/envs/test/lib/python3.10/site-packages/deeplake/core/dataset/dataset.py", line 1380, in __getattr__
return self.__getitem__(key)
File "/Users/abhay/miniconda3/envs/test/lib/python3.10/site-packages/deeplake/core/dataset/dataset.py", line 582, in __getitem__
raise TensorDoesNotExistError(item)
deeplake.util.exceptions.TensorDoesNotExistError: "Tensor 'index_params' does not exist."
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/abhay/miniconda3/envs/test/lib/python3.10/concurrent/futures/process.py", line 243, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/Users/abhay/miniconda3/envs/test/lib/python3.10/concurrent/futures/process.py", line 202, in _process_chunk
return [fn(*args) for args in chunk]
File "/Users/abhay/miniconda3/envs/test/lib/python3.10/concurrent/futures/process.py", line 202, in <listcomp>
return [fn(*args) for args in chunk]
File "/Users/abhay/deep_test/deep.py", line 19, in worker
print("Row", ds[idx])
File "/Users/abhay/miniconda3/envs/test/lib/python3.10/site-packages/deeplake/core/dataset/dataset.py", line 653, in __getitem__
index_params=self.index_params,
File "/Users/abhay/miniconda3/envs/test/lib/python3.10/site-packages/deeplake/core/dataset/dataset.py", line 1382, in __getattr__
raise AttributeError(
AttributeError: '<class 'deeplake.core.dataset.dataset.Dataset'>' object has no attribute 'index_params'
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/abhay/deep_test/deep.py", line 29, in <module>
results = list(executor.map(partial(worker, ds=ds), [0]))
File "/Users/abhay/miniconda3/envs/test/lib/python3.10/concurrent/futures/process.py", line 567, in _chain_from_iterable_of_lists
for element in iterable:
File "/Users/abhay/miniconda3/envs/test/lib/python3.10/concurrent/futures/_base.py", line 608, in result_iterator
yield fs.pop().result()
File "/Users/abhay/miniconda3/envs/test/lib/python3.10/concurrent/futures/_base.py", line 445, in result
return self.__get_result()
File "/Users/abhay/miniconda3/envs/test/lib/python3.10/concurrent/futures/_base.py", line 390, in __get_result
raise self._exception
AttributeError: '<class 'deeplake.core.dataset.dataset.Dataset'>' object has no attribute 'index_params'
```
### Steps to Reproduce
See description in current behavior
### Expected/Desired Behavior
Either it should be documented that accessing dataset under multiprocessing is not allowed or the access should not throw the error that is seen
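A common workaround pattern (a sketch, not official deeplake guidance) is to pass the dataset *path* to workers and reopen the dataset lazily inside each process, instead of pickling the `Dataset` object across the process boundary; `opener` below is a stand-in for `deeplake.load`:

```python
_CACHE = {}

def get_dataset(path, opener):
    # Open once per worker process instead of shipping the object
    # through pickle to the child process.
    if path not in _CACHE:
        _CACHE[path] = opener(path)
    return _CACHE[path]

def worker(idx, path, opener):
    ds = get_dataset(path, opener)
    return ds[idx]

# Stand-in opener so this sketch runs without deeplake installed:
def fake_open(path):
    return ["row0", "row1"]
```

Each process would then run `worker(idx, DS_PATH, deeplake.load)`-style code via `executor.map`, so the dataset handle never crosses the process boundary.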
### Python Version
Python 3.10.0
### OS
MacOS Ventura 13.5
### IDE
Terminal
### Packages
deeplake==3.9.26
### Additional Context
_No response_
### Possible Solution
_No response_
### Are you willing to submit a PR?
- [ ] I'm willing to submit a PR (Thank you!) | closed | 2024-10-25T22:03:10Z | 2024-11-08T03:45:47Z | https://github.com/activeloopai/deeplake/issues/2977 | [
"bug"
] | abhayv | 3 |
guohongze/adminset | django | 99 | Regular users cannot view monitoring | 
As shown in the screenshot, the "test" user cannot view monitoring information. | closed | 2019-03-05T12:00:13Z | 2019-03-06T03:51:00Z | https://github.com/guohongze/adminset/issues/99 | [] | DaRingLee | 2 |
CPJKU/madmom | numpy | 422 | Installation issue, maybe wheels-related | ### Expected behaviour
`pip install madmom==0.16.1`
should just work.
### Actual behaviour
```
>>> import madmom
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/stefan/miniconda3/envs/asr/lib/python3.6/site-packages/madmom/__init__.py", line 24, in <module>
from . import audio, evaluation, features, io, ml, models, processors, utils
File "/home/stefan/miniconda3/envs/asr/lib/python3.6/site-packages/madmom/audio/__init__.py", line 27, in <module>
from . import comb_filters, filters, signal, spectrogram, stft
File "__init__.pxd", line 918, in init madmom.audio.comb_filters
ValueError: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216 from C header, got 192 from PyObject
>>>
```
### Fix
Install `madmom` without using binaries/wheels:
`pip install madmom --no-binary :all:`
I'm not too familiar with PyPI and wheels and where they come from, but maybe the provided wheel was built with an older `numpy` version?
Maybe my case is also special since I have to use `numpy=1.15.4` in order to work with `theano`...
### Information about installed software
Ubuntu 18.04
Miniconda on Python 3.6
```
>>> np.__version__
'1.15.4'
>>> scipy.__version__
'1.1.0'
```
| closed | 2019-03-13T14:23:07Z | 2019-03-15T09:38:15Z | https://github.com/CPJKU/madmom/issues/422 | [] | stefan-balke | 2 |
marcomusy/vedo | numpy | 752 | Cannot load texture | The rendered result does not show the texture. Why?
Here is result:

Here is code:
```
import sys
import os
import numpy as np
from tqdm import tqdm
from vedo import *

def render(mesh_file, output_path):
    os.makedirs(output_path, exist_ok=True)
    vp = Plotter(axes=0, size=(800, 800), interactive=0, offscreen=0)
    mesh = load(mesh_file)
    mesh.computeNormals()
    mesh.phong()
    mesh.lighting("glossy")
    vp += mesh
    step = 3
    for i in tqdm(range(int(360 / step)), ncols=100):
        vp.show(zoom=1.0, bg="white", interactive=0)
        vp.camera.Azimuth(step)
        screenshot(os.path.join(output_path, str(10000 + (i + 0) * step).zfill(5) + ".png"))
    vp.show(zoom=1.0, bg="white", interactive=1)
    vp.close()

def render_with_texture(mesh_file, texture_file, output_path):
    os.makedirs(output_path, exist_ok=True)
    vp = Plotter(axes=0, size=(800, 800), interactive=0, offscreen=0)
    mesh = load(mesh_file)
    mesh.texture(texture_file)
    mesh.computeNormals()
    mesh.phong()
    mesh.lighting("glossy")
    vp += mesh
    step = 5
    for i in tqdm(range(int(360 / step)), ncols=100):
        vp.show(zoom=1.0, bg="white", interactive=0)
        vp.camera.Azimuth(step)
        screenshot(os.path.join(output_path, str(10000 + (i + 0) * step).zfill(5) + ".png"))
    vp.show(zoom=1.0, bg="white", interactive=1)
    vp.close()

if __name__ == "__main__":
    # render("./Armadillo.ply", "./output")
    render_with_texture("./0002_reconstruction.obj", "./0002_reconstruction.jpg", "./output")
```
Here is data:
https://drive.google.com/drive/folders/1ts7y8HG5X6FNASL_iMJ253jug6qtUlni?usp=share_link | closed | 2022-12-12T11:17:58Z | 2022-12-12T11:29:47Z | https://github.com/marcomusy/vedo/issues/752 | [] | rlczddl | 1 |
koxudaxi/datamodel-code-generator | fastapi | 1,850 | null bytes (`\u0000`) not correctly escaped in generated code | **Describe the bug**
A schema containing a NUL character `\u0000` becomes a literal (i.e. not escaped) NUL in the generated Python file. This is a SyntaxError.
> SyntaxError: source code cannot contain null bytes
**To Reproduce**
Example schema:
```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "properties": {
    "bug": {
      "type": "string",
      "enum": ["\u0000"]
    }
  },
  "type": "object"
}
```
Used commandline:
```
$ datamodel-codegen --input-file-type jsonschema --input schema.json --output-model-type pydantic_v2.BaseModel --output model.py
$ python -i model.py
```
**Expected behavior**
Bytes invalid in source code should use the appropriate escape sequence.
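One possible approach (a sketch, not the project's actual emitter): Python's built-in `ascii()` yields a quoted literal in which every non-printable and non-ASCII character is backslash-escaped, so it is always safe to embed in generated source:

```python
def escape_for_source(value: str) -> str:
    # ascii() backslash-escapes non-printable / non-ASCII characters,
    # so no raw NUL byte ever reaches the generated file.
    return ascii(value)

literal = escape_for_source("\x00")        # "'\\x00'"
generated = f"ENUM_VALUES = [{literal}]"

namespace = {}
exec(generated, namespace)                 # compiles; a raw NUL would not
```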
**Version:**
- OS: macOS 13.0.1
- Python version: 3.11
- datamodel-code-generator version: 0.25.3
**Additional context**
I first encountered it with regex pattern properties, but it appears to be a general issue with just strings.
Notably this applies to all output-model-types with the exception of `typing.TypedDict`, where it's correctly escaped to `'\x00'` in Literal. | closed | 2024-02-09T11:58:51Z | 2025-01-12T15:31:11Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1850 | [
"bug"
] | tim-timman | 0 |
2noise/ChatTTS | python | 742 | Why is MPS acceleration on a Mac slower than the CPU? | closed | 2024-09-04T03:33:46Z | 2024-09-09T00:18:21Z | https://github.com/2noise/ChatTTS/issues/742 | [
"documentation"
] | wuhongsheng | 3 | |
supabase/supabase-py | fastapi | 1,001 | Insert request made using service_role key still unable to bypass RLS | # Bug report
- [x] I confirm this is a bug with Supabase, not with my own application.
- [x] I confirm I have searched the [Docs](https://docs.supabase.com), GitHub [Discussions](https://github.com/supabase/supabase/discussions), and [Discord](https://discord.supabase.com).
## Describe the bug
I am making an insert request into my table. On my local machine with supabase 2.6.0 it works as intended, i.e. it inserts records as service_role, which should bypass RLS. But on staging I have supabase 2.10.0, and there the records are inserted as the authenticated role with the user_id I specified in the user_id column, even though I have no login code and the key is the service_role key, leading to an RLS error.
## To Reproduce
Steps to reproduce the behavior, please provide code snippets or a repository:
Create a table with RLS
Enable RLS
**Insert a record in the table with column user_id of an actual user**
## Expected behavior
The records should have been inserted as service_role without triggering RLS policy.
## Screenshots
From the logs,
```
"authorization": [
{
"invalid": null,
"payload": [
{
"algorithm": "HS256",
"issuer": "https://xxxxxxxxxxxxus.supabase.co/auth/v1", (hidden for security reasons)
"key_id": "gnKsQ+2zzxM9F1RT",
"role": "authenticated",
"signature_prefix": "qnJVPy",
"subject": "<UUID user_id of the column user_id is being shown here which I removed>"
```
Part of the code I used and also you can make out the DB schema.
```
data_dict = {
    "user_id": user_id,
    "organization_id": organization_id,
    "source": source,
    "review": review,
    "author": author,
    "author_email": author_email,
    "company_name": company_name,
    "timestamp": timestamp,
    "rating": rating,
    "tags": tags,
    "status": "unprocessed"
}

# Store the data in the 'unprocessed_reviews' table
result, error = supabase.table('unprocessed_reviews').insert([data_dict]).execute()
if error[1] is not None:
    raise HTTPException(status_code=500, detail="Error storing data")

return {"message": "Data stored successfully", "success": True}
```
Code error:
```
result, error = supabase.table('unprocessed_reviews').insert([data_dict]).execute()
File "/usr/local/lib/python3.10/site-packages/postgrest/_sync/request_builder.py", line 78, in execute
raise APIError(r.json())
postgrest.exceptions.APIError: {'code': '42501', 'details': None, 'hint': None, 'message': 'new row violates row-level security policy for table "unprocessed_reviews"'}
```
## System information
- OS: MacOS latest
- Browser (if applies) NA
- Version of supabase-py: 2.10.0
- Version of Node.js: NA
## Additional context
The problem is that the requests are being executed as the **authenticated** role with the **user_id** I added in the column.
| closed | 2024-11-22T11:50:16Z | 2024-11-22T16:11:57Z | https://github.com/supabase/supabase-py/issues/1001 | [
"bug"
] | akarshghale | 4 |
scanapi/scanapi | rest-api | 431 | add type hints to the project | ## Feature request
### Description of the feature
I think it would be nice to add type hints to the codebase, as it helps with documentation, catching bugs, and to maintain the whole project.
### Is your feature request related to a problem?
Nope
### Do you have any suggestions on how to add this feature to scanapi?
Well, you just need to specify the type of every variable/parameter. I would be glad to do it myself, if you guys agree. | closed | 2021-07-28T22:46:59Z | 2021-08-03T12:26:49Z | https://github.com/scanapi/scanapi/issues/431 | [
"Code Quality",
"Multi Contributors",
"Needs Design Discussion"
] | sleao | 7 |
python-restx/flask-restx | flask | 285 | Namespace error handlers broken when propagate_exceptions=True | ### Details
When an `errorhandler` is registered on a namespace, and `PROPAGATE_EXCEPTIONS` is set to `True` in the Flask app, then the namespace handler will not catch the exceptions. It looks like this is due to the `handle_error` function not checking the error handlers that exist in any child classes.
### **Code**
`api.py:653`
```python
if (
    not isinstance(e, HTTPException)
    and current_app.propagate_exceptions
    and not isinstance(e, tuple(self.error_handlers.keys()))
):
```
Should check for potential error handlers in the class and child classes:
```python
if (
    not isinstance(e, HTTPException)
    and current_app.propagate_exceptions
    and not isinstance(e, tuple(self._own_and_child_error_handlers.keys()))
):
```
### **Repro Steps** (if applicable)
1. Set `propagate_exceptions=True` in the app
2. Create a namespace, and register it to the API
3. Add a `@namespace.errorhandler` function
4. Raise error in a route, which won't get caught by namespace's error handler
### **Expected Behavior**
Error handler defined on a namespace should still catch exceptions when `propagate_exceptions` is `True`. | closed | 2021-02-20T21:35:45Z | 2022-03-01T16:38:24Z | https://github.com/python-restx/flask-restx/issues/285 | [
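Simplified to its essence (hypothetical names, and omitting the `HTTPException` check), the decision the patched condition makes can be sketched in isolation:

```python
def should_reraise(exc, propagate_exceptions, own_handlers, child_handlers):
    # Propagate only when no handler, API-level or namespace-level,
    # is registered for this exception type.
    handled = tuple(own_handlers) + tuple(child_handlers)
    return propagate_exceptions and not isinstance(exc, handled)

class MyError(Exception):
    pass
```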
"bug"
] | mjreiss | 0 |
pydata/xarray | pandas | 9,595 | Weighting a datatree by a tree of dataarrays | ### What is your issue?
This isn't currently possible - but it's not simple to imagine how it might work. See https://github.com/xarray-contrib/datatree/issues/193 for previous discussion. | open | 2024-10-08T16:43:01Z | 2024-10-08T16:43:01Z | https://github.com/pydata/xarray/issues/9595 | [
"API design",
"topic-groupby",
"topic-DataTree"
] | TomNicholas | 0 |
holoviz/panel | matplotlib | 7,272 | Inconsistent handling of (start, end, value) in DatetimeRangeSlider and DatetimeRangePicker widget | #### ALL software version info
<details>
<summary>Software Version Info</summary>
```plaintext
panel 1.4.5
param 2.1.1
```
</details>
#### Description of expected behavior and the observed behavior
* DatetimeRangePicker should allow changing `start` and `end` without raising out-of-bound exception
* `value` of DatetimeRange* widgets is always between `start` and `end` parameter or an Exception is raised
* same behavior of DatetimeRangeSlider and DatetimeRangePicker widget on this issue
#### Complete, minimal, self-contained example code that reproduces the issue
```python
import panel as pn
import datetime as dt
dtmin = dt.datetime(1000, 1, 1)
dtlow = dt.datetime(2000, 1, 1)
dtmax = dt.datetime(3000, 1, 1)
# increasing (start, end) and set value=(start, end) SHOULD WORK !
sel_dtrange = pn.widgets.DatetimeRangeSlider(start=dtmin, end=dtlow, value=(dtmin, dtlow))
sel_dtrange.param.update(start=dtmin, end=dtmax, value=(dtmin, dtmax)) # OK
sel_dtrange = pn.widgets.DatetimeRangePicker(start=dtmin, end=dtlow, value=(dtmin, dtlow))
sel_dtrange.param.update(start=dtmin, end=dtmax, value=(dtmin, dtmax)) # ERROR
sel_dtrange.param.update(start=dtmin, end=dtmax) # increasing (start, end) without setting value works
```
#### Stack traceback and/or browser JavaScript console output
```
---> [12] sel_dtrange.param.update(start=dtmin, end=dtmax, value=(dtmin, dtmax)) # ERROR
ValueError: DateRange parameter 'DatetimeRangePicker.value' upper bound must be in range [1000-01-01 00:00:00, 2000-01-01 00:00:00], not 3000-01-01 00:00:00.
```
#### Additional Info
On the contrary, the `DatetimeRangeSlider` does not raise an exception although `value` is out of bounds, which might also not be expected by the user.
```python
import panel as pn
import datetime as dt
dtmin = dt.datetime(1000, 1, 1)
dtlow = dt.datetime(2000, 1, 1)
dtmax = dt.datetime(3000, 1, 1)
# reducing (start, end) without correcting out-of-range value SHOULD FAIL !
sel_dtrange = pn.widgets.DatetimeRangeSlider(start=dtmin, end=dtmax, value=(dtmin, dtmax))
sel_dtrange.param.update(start=dtmin, end=dtlow) # ERROR as value is out of bounds and should raise
sel_dtrange = pn.widgets.DatetimeRangePicker(start=dtmin, end=dtmax, value=(dtmin, dtmax))
#sel_dtrange.param.update(start=dtmin, end=dtlow) # OK, fails as value is out of bounds
sel_dtrange.param.update(start=dtmin, end=dtlow, value=(dtmin, dtlow)) # OK, setting value to reduced bounds works
```
| open | 2024-09-13T13:52:03Z | 2024-09-13T19:23:43Z | https://github.com/holoviz/panel/issues/7272 | [] | rhambach | 0 |
autokey/autokey | automation | 353 | Manually added phrases work, but do not show up in GUI | ## Classification: UI/Usability
## Reproducibility: Always
## Version
AutoKey version: 0.95.9-0
Used GUI (Gtk, Qt, or both): Qt
Installed via: (deb file).
Linux Distribution: Kubuntu
## Summary
Manually created JSON and TXT files in "My Phrases" folder do not show up in GUI, BUT the phrases work.
## Steps to Reproduce (if applicable)
- In "My Phrases" I manually create a TXT and a JSON file, exactly copying the code of the others and changing only the relevant info, to create a new phrase outside the GUI.
- I restart the GUI.
## Expected Results
- This should allow me to insert a phrase via the new abbreviation I created.
- The new phrase should show up in the GUI when Autokey restarts.
## Actual Results
- While the new phrase gets inserted when I type the abbreviation I manually made, it does not show up in the GUI.
## Notes
I am doing this because I am moving the phrases from my previous text expander to AutoKey; if I can get this to work, I will be able to streamline the process of dropping the relevant info into the JSON code template for phrases. | closed | 2020-01-10T22:01:23Z | 2023-04-29T19:13:41Z | https://github.com/autokey/autokey/issues/353 | [
"user interface"
] | dulawcdo | 11 |
gradio-app/gradio | deep-learning | 9,964 | Custom Loading UI for `gr.render` | - [x] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
Currently, when using `gr.render()` for dynamic rendering, there is no support for custom loading UI. The default loading indicator does not meet specific design needs and may not align with the overall UI style, which can impact the user experience, especially in more complex applications.
**Describe the solution you'd like**
I would like `gr.render()` to support custom loading UIs. This would allow users to implement a loading indicator or animation that fits their design, instead of being limited to the default one.
**Additional context**
For example, it would be helpful if we could pass a custom component or loading animation as an argument when calling `gr.render()`, which would replace the default loading state display. This would greatly enhance flexibility for developers and improve UI consistency. | open | 2024-11-15T09:53:30Z | 2024-11-16T00:15:52Z | https://github.com/gradio-app/gradio/issues/9964 | [
"enhancement"
] | KeJunMao | 3 |
ultralytics/yolov5 | pytorch | 12,754 | --image-weights and background images | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
If I use --image-weights, does the training process ignore background images?
In train.py, I noticed the following code snippet:
```python
if opt.image_weights:
    cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 / nc  # class weights
    iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw)  # image weights
    dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n)  # rand weighted idx
```
It seems that for a background image, its corresponding weight will always be 0. Consequently, it won't be selected during the training process.
Is the description correct?
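A pure-Python sketch of the weighting logic (illustrative, not the actual `labels_to_image_weights` implementation) supports that reading: a label-less background image sums to weight 0, and `random.choices` never draws a zero-weight index:

```python
import random
from collections import Counter

def image_weights(labels_per_image, class_weights):
    # Each image's weight is the sum of the class weights of its labels,
    # so a background image (no labels) gets weight 0.
    weights = []
    for labels in labels_per_image:
        counts = Counter(labels)
        weights.append(sum(class_weights[c] * n for c, n in counts.items()))
    return weights

iw = image_weights([[0, 0], [1], []], {0: 0.5, 1: 2.0})
picks = random.choices(range(3), weights=iw, k=1000)
# index 2 (the background image) is never selected
```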
### Additional
_No response_ | closed | 2024-02-22T09:46:24Z | 2024-04-07T00:23:30Z | https://github.com/ultralytics/yolov5/issues/12754 | [
"question",
"Stale"
] | tino926 | 4 |
polarsource/polar | fastapi | 4,497 | Root API endpoint (`/`) 404 vs. offers direction | ### Description
https://api.polar.sh is broken
### Current Behavior
If you visit https://docs.polar.sh/api#feedback and click https://api.polar.sh/, a page appears saying "details not found".
### Expected Behavior
Link should work
### Screenshots

### Environment:
- Operating System: Windows 11
- Browser: Chrome
| open | 2024-11-19T14:59:30Z | 2024-11-23T11:18:32Z | https://github.com/polarsource/polar/issues/4497 | [] | hlevring | 2 |
plotly/dash | data-science | 3,196 | customdata is missing in clickData for Scattergl objects | **Describe your context**
I've made a dash application that presents some data from an SQLite database both in a graph and an AG Grid table. I'm using scattergl to create a figure with scattergl traces. Clicking a point on the scattergl traces fires a dash callback that should highlight the relevant data in the table. The mapping of the data between the figure and the table relies on the *customdata* variable of the scattergl class, and that the clickData passed by the click event carries with it the customdata data. This has been working until I recently upgraded my environment.
- Result of `pip list | grep dash`:
```
- Installing dash-core-components (2.0.0)
- Installing dash-html-components (2.0.0)
- Installing dash-table (5.0.0)
- Installing dash (2.18.2)
- Installing dash-bootstrap-components (1.7.1)
- Installing dash-ag-grid (31.3.0)
- Installing dash-bootstrap-templates (2.1.0)
```
- if frontend related, tell us your Browser, Version and OS
- Windows 10
- Edge
- 133.0.3065.69
**Describe the bug**
The bug is the same as that described in https://github.com/plotly/dash/issues/2493, except that I'm experiencing this with Scattergl objects (added to a plotly.graph_objs._figure object with add_traces). Note that this worked just fine before I recreated my environment (and updated many of my packages). Unfortunately I don't have the package list of the old environment.
**Expected behavior**
I expected the data in the customdata variable of the Scattergl trace to be present in the clickData object I receive in my figure callback, just like before I upgraded my environment.
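For reference, here is a hand-written illustration of the payload shape I expect (not captured from a live app), plus a defensive lookup that returns `None` when the field is missing, which is the symptom reported here:

```python
def extract_customdata(click_data):
    # Returns None when points or customdata are absent.
    if not click_data or not click_data.get("points"):
        return None
    return click_data["points"][0].get("customdata")

with_field = {"points": [{"curveNumber": 0, "pointIndex": 3, "customdata": ["row-42"]}]}
without_field = {"points": [{"curveNumber": 0, "pointIndex": 3}]}
```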
**Screenshots of actual behaviour**
Firstly showing created Scattergl objects containing data in its *customdata* variable.

Secondly showing the click_data dict received in my dash callback after clicking a point on the scatter graph, which does not contain 'customdata'.

| open | 2025-02-28T09:40:31Z | 2025-02-28T17:43:58Z | https://github.com/plotly/dash/issues/3196 | [
"bug",
"P2"
] | rra88 | 0 |
LAION-AI/Open-Assistant | python | 3,248 | I can't get inside it. | It doesn't respond to me. | closed | 2023-05-28T21:30:50Z | 2023-05-29T09:59:25Z | https://github.com/LAION-AI/Open-Assistant/issues/3248 | [] | tom999663 | 1 |
litestar-org/litestar | pydantic | 3,455 | Enhancement: allow finer tuning `LoggingConfig` for exception logging | ### Summary
Setting `LoggingConfig.log_exceptions` to `"always"` is good to get more data about uncaught exceptions in a `debug=False` environment, but it can get pretty noisy, since we'll also log common errors for schema validation, auth, etc., when the developer might really only be interested in uncaught internal server errors.
It would be helpful if we could configure this according to HTTP status codes (or perhaps `Union[int, type[Exception]]`, similar to the keys of `ExceptionHandlersMap`?) and only log when raising for these status codes/types.
### Basic Example
Something like
`LoggingConfig(exceptions={422, 500, MyCustomException})`
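Purely illustrative (not Litestar API), the matching for such a mixed set of status codes and exception types could look like:

```python
def should_log(exc, status_code, configured):
    for item in configured:
        if isinstance(item, int) and item == status_code:
            return True
        if isinstance(item, type) and isinstance(exc, item):
            return True
    return False

class MyCustomException(Exception):
    pass
```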
### Drawbacks and Impact
N/A
### Unresolved questions
_No response_ | open | 2024-04-30T14:33:44Z | 2025-03-20T15:54:39Z | https://github.com/litestar-org/litestar/issues/3455 | [
"Enhancement"
] | LonelyVikingMichael | 1 |
microsoft/MMdnn | tensorflow | 941 | Error when converting a model from IR to PyTorch | I got this error when trying to convert a model from IR to PyTorch:
RuntimeError: output with shape [96] doesn't match the broadcast shape [1, 1, 1, 96]
The IR files come from a Caffe model, so the complete process is to convert a Caffe model to PyTorch.
| open | 2023-02-09T02:02:28Z | 2023-02-09T02:02:28Z | https://github.com/microsoft/MMdnn/issues/941 | [] | luismadhe | 0 |
desec-io/desec-stack | rest-api | 154 | Upon Account Creation, Send Only One Email | Currently, when an account is opened and locked, we send two emails. Merge them and send only one. | closed | 2019-04-12T08:07:27Z | 2019-04-12T08:39:58Z | https://github.com/desec-io/desec-stack/issues/154 | [
"enhancement",
"prio: low"
] | nils-wisiol | 1 |
flairNLP/flair | pytorch | 3,620 | [Bug]: Language Order in OpusParallelCorpus | ### Describe the bug
When instantiating an OpusParallelCorpus with language pairs such as ("de", "en") or ("en", "de"), the resulting corpus consistently has German as the first language and English as the second language, regardless of the input order.
Upon reviewing the source code of OpusParallelCorpus, it appears that the [following code](https://github.com/flairNLP/flair/blob/c6b053d69fc5f5676e49e06f7354bd7757864685/flair/datasets/text_text.py#L83):
```
if l1 > l2:
    l1, l2 = l2, l1
```
forces the languages to be ordered lexicographically based on their language codes. This results in German being treated as the first language in both ("de", "en") and ("en", "de").
### To Reproduce
```python
from flair.datasets import OpusParallelCorpus
corpus_de_en = OpusParallelCorpus(
dataset="tatoeba",
l1="de",
l2="en",
max_tokens_per_doc=512,
)
corpus_en_de = OpusParallelCorpus(
dataset="tatoeba",
l1="en",
l2="de",
max_tokens_per_doc=512,
)
# Both corpora consist of (German, English) pairs
print(corpus_de_en.train[0])
>> DataPair: 'Sentence[5]: "Ich muss schlafen gehen."' + 'Sentence[7]: "I have to go to sleep."'
print(corpus_en_de.train[0])
>> DataPair: 'Sentence[5]: "Ich muss schlafen gehen."' + 'Sentence[7]: "I have to go to sleep."'
```
### Expected behavior
Sentence pairs in `corpus_de_en` are (German, English) while sentence pairs in `corpus_en_de` are (English, German)
### Logs and Stack traces
```stacktrace
```
### Screenshots
_No response_
### Additional Context
_No response_
### Environment
#### Versions:
##### Flair
0.15.1
##### Pytorch
2.6.0+cu124
##### Transformers
4.49.0
#### GPU
False | closed | 2025-02-21T23:24:40Z | 2025-03-17T01:29:08Z | https://github.com/flairNLP/flair/issues/3620 | [
"bug"
] | chelseagzr | 0 |
mailgyc/doudizhu | sqlalchemy | 17 | Can't connect to MySQL server on'localhost' | Python: 3.6.3
MySQL: 8.0.17
你好,我按教程进行操作,进入8080后输入完账号密码,点击注册按钮时,游戏界面红色字体报错:
`pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on'localhost'")`
以下是我的操作记录:
```
E:\Output\Python_output\CCP\doudizhu>net start mysql
mysql 服务正在启动 ..
mysql 服务已经启动成功。
E:\Output\Python_output\CCP\doudizhu>mysql --user=root -p < schema.sql
Enter password: *****
E:\Output\Python_output\CCP\doudizhu>cd doudizhu
E:\Output\Python_output\CCP\doudizhu\doudizhu>python app.py --password=mysql
2019-08-31 18:13 server on http://127.0.0.1:8080
```
请问如何解决,谢谢! | closed | 2019-08-31T10:21:56Z | 2019-09-03T15:36:19Z | https://github.com/mailgyc/doudizhu/issues/17 | [] | HaozhengLi | 4 |
mwaskom/seaborn | data-science | 2,927 | Consider Plot.tweak method for accepting function that operates on matplotlib figure | In some cases, it will be easy for users to achieve some fussy customization by writing imperative matplotlib code rather than trying to fit it into the declarative seaborn spec.
We could/should make it possible to access the matplotlib figure / axes, so the plot could be compiled and then tweaked. Currently, `Plot._figure` is private, although you can use `Plot.on` to have a reference to the relevant matplotlib object, like
```python
Plot(...).add(...).on(ax:=plt.gca()).plot()
ax.set(...)
```
But both patterns are still a little cumbersome as you need to (1) trigger the plot to compile by calling `.plot()` (2) catch the return value, fiddle with the matplotlib objects, and (3) show/display the plot.
An alternative pattern would be to have `Plot.tweak`, a method that accepts a callable function, where that function should consume a matplotlib figure and operate on it. It would be used toward the end of the plotting pipeline. (Although perhaps that should be flexible? e.g. allow `.tweak(f, before=True)` to do some custom setup?)
One thing to consider is that having the passed function consume a figure is the most general approach, but you're typically going to want to operate on the axes. Should that be allowed and if so, how? (e.g. `.tweak(f, axes=True)`)?
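A rough sketch of how such a method could hang together (hypothetical API, not seaborn code; `before=True` runs the callback before compilation, mirroring the `.tweak(f, before=True)` idea above):

```python
class PlotSketch:
    """Minimal stand-in for Plot; only the tweak bookkeeping is shown."""

    def __init__(self):
        self._tweaks = []

    def tweak(self, func, before=False):
        self._tweaks.append((func, before))
        return self  # keep the declarative chain going

    def plot(self, figure):
        for func, early in self._tweaks:
            if early:
                func(figure)  # custom setup before compilation
        # ... compile marks/scales onto `figure` here ...
        for func, early in self._tweaks:
            if not early:
                func(figure)  # post-compile fiddling
        return figure


order = []
(PlotSketch()
 .tweak(lambda fig: order.append("after"))
 .tweak(lambda fig: order.append("before"), before=True)
 .plot(figure=None))
print(order)  # ['before', 'after']
```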
"objects-plot",
"feature"
] | mwaskom | 0 |
onnx/onnx | tensorflow | 6,634 | Link broken in CONTRIBUTING.md | # Bug Report
### Is the issue related to model conversion?
No
### Describe the bug
<!-- Please describe the bug clearly and concisely -->
I think under the Development section, there is a broken link directing to https://github.com/onnx/onnx#build-onnx-from-source, but that section does not exist.
| closed | 2025-01-09T04:39:02Z | 2025-01-19T16:51:12Z | https://github.com/onnx/onnx/issues/6634 | [
"topic: documentation"
] | harshil1973 | 1 |
Urinx/WeixinBot | api | 230 | 新注册微信号无法登录网页版 | 
Is the bot no longer usable? | open | 2017-09-15T06:46:49Z | 2017-12-17T13:10:29Z | https://github.com/Urinx/WeixinBot/issues/230 | [] | chunyong1991 | 4 |
mljar/mercury | jupyter | 151 | Windows path output handling | Hi.
I'm getting errors when I try to output to a Windows directory. I can actually get the application to write to the filesystem, but Mercury throws errors preventing the app from running.
Here's my relevant yaml:
```yaml
params:
output_dir:
output: dir
```
and my relevant python:
`output_dir = r'C:\tmp\output'`
Finally, error output:
```python
File "C:\Users\tfluhr\AppData\Local\Temp/ipykernel_14616/3641552627.py", line 2
output_dir = "C:\Users\tfluhr\Anaconda3\Lib\site-packages\mercury\media\d40481ea-0cba-4fce-925e-4ec54b9b68a6\output_106"
^
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape
```
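My reading of the traceback (not confirmed against Mercury's source): the generated cell embeds the chosen directory as a plain double-quoted string, so `\U` in `C:\Users\...` starts a `\UXXXXXXXX` unicode escape. In plain Python, raw strings or forward slashes sidestep that:

```python
from pathlib import Path

# "C:\Users\..." in a normal string literal fails: \U opens a \UXXXXXXXX escape.
raw = r"C:\tmp\output"   # raw string: backslashes stay literal
fwd = "C:/tmp/output"    # Windows APIs accept forward slashes too

print(len(raw), raw.count("\\"), Path(fwd).name)  # 13 2 output
```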
| closed | 2022-07-28T14:16:30Z | 2022-07-28T16:54:57Z | https://github.com/mljar/mercury/issues/151 | [
"bug"
] | tfluhr | 6 |
python-gitlab/python-gitlab | api | 2,618 | CLI cannot handle server response | ## Description of the problem, including code/CLI snippet
`gitlab -c ./gitlab-cli.cfg project-merge-request get --iid 98765 --project-id 12345`
with config file content being
```
[global]
default = foo
ssl_verify = False
timeout = 5
api_version = 4
[foo]
url=https://some-project.url
private_token=some-token-here
```
results in
```
Attempted to initialize RESTObject with a non-dictionary value: <Response [200]>
This likely indicates an incorrect or malformed server response.
```
## Expected Behavior
Since I can successfully interact with the same server programmatically from Python (i.e. I am getting a valid server JSON response), I suspect this is a problem with the CLI.
The server response is 200, which typically indicates OK, so it will have sent some payload back.
I expect identical behaviour of package and CLI.
## Actual Behavior
## Specifications
- python-gitlab version: 3.15.0
- API version you are using (v3/v4): v4
- Gitlab server version (or gitlab.com): 14.9.5
| closed | 2023-07-18T09:35:18Z | 2024-10-13T07:13:22Z | https://github.com/python-gitlab/python-gitlab/issues/2618 | [
"need info",
"stale"
] | twil69 | 11 |
flairNLP/flair | nlp | 3,017 | Discrepancy when de-serializing TARSTagger in master branch vs last release | When de-serializing the tars-ner model, the word_dropout is missing from the model. In version 0.11.3, it is still there.
To reproduce, execute the following code in `master` branch and in `v0.11.3`:
```python
tars: TARSTagger = TARSTagger.load('tars-ner')
tagger = tars.tars_model
print(tagger)
```
In `master`, it prints:
```
)
)
(locked_dropout): LockedDropout(p=0.5)
(linear): Linear(in_features=1024, out_features=5, bias=True)
(loss_function): CrossEntropyLoss()
)
```
In `v0.11.3`, it prints:
```
)
)
(word_dropout): WordDropout(p=0.05)
(locked_dropout): LockedDropout(p=0.5)
(linear): Linear(in_features=1024, out_features=5, bias=True)
(loss_function): CrossEntropyLoss()
)
```
@helpmefindaname can you take a look? This may have something to do with the changes in de-serialization logic. | closed | 2022-12-11T09:03:35Z | 2023-06-11T11:25:40Z | https://github.com/flairNLP/flair/issues/3017 | [
"wontfix"
] | alanakbik | 2 |
fastapi/sqlmodel | pydantic | 377 | How to dynamically create tables by sqlmodel? | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
class DeviceStore(SQLModel):
"""
Devices of one batch
"""
# __abstract__ = True
id: Optional[int] = Field(None, primary_key=True,
sa_column_kwargs={"autoincrement": True})
name: str
class DeviceBatch01(DeviceStore, table=True):  # table=True makes this an actual table
    __tablename__ = "devicebatch01"
class DeviceBatch02(DeviceStore, table=True):
    __tablename__ = "devicebatch02"
```
### Description
I'm new here and learning to use SQLModel; it's really great. Now I have a question: how can I use SQLModel to create tables dynamically? All my tables have the same format; only the table names differ, like logtable_2205, logtable_2206, logtable_2207, and so on. Could you guys provide some ideas? Thanks a lot.
### Operating System
Windows
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
Python 3.8.10
### Additional Context
_No response_ | open | 2022-07-14T03:55:05Z | 2023-07-28T16:32:25Z | https://github.com/fastapi/sqlmodel/issues/377 | [
"question"
] | jaytang0923 | 6 |
elliotgao2/toapi | flask | 54 | Setting and updating of storage and cache issues. | If sending fails, don't set storage.
If parsing fails, don't set the cache. | closed | 2017-12-12T15:34:33Z | 2017-12-14T03:21:29Z | https://github.com/elliotgao2/toapi/issues/54 | [] | elliotgao2 | 0 |
scrapehero-code/amazon-scraper | web-scraping | 3 | Can we use this for wine products? | I am looking for a scraper for Amazon Vine products
[https://www.amazon.it/vine/vine-items?queue=last_chance&size=60](this a example link, but you must be vine)
| closed | 2020-09-19T11:49:57Z | 2023-02-23T06:34:47Z | https://github.com/scrapehero-code/amazon-scraper/issues/3 | [] | realtebo | 0 |
graphql-python/graphene-django | django | 744 | Better error messages for model_operations | Currently it throws this error when performing a create mutation whose `model_operations` doesn't include create:
Invalid update operation. Input parameter "id" required. | closed | 2019-08-12T08:06:37Z | 2019-10-25T10:09:20Z | https://github.com/graphql-python/graphene-django/issues/744 | [
"wontfix"
] | dan-klasson | 1 |
pytest-dev/pytest-flask | pytest | 103 | indeterminate live_server segfaults on macos (workaround: threading.Thread) | _Not sure this is short-run actionable, and I don't think it's actually a pytest-flask bug. If others encounter this it may still make sense to add something like the workaround I identified as an optional run mode or automatic fallback. Opening an issue in case it helps others or attracts more information/reports. The lack of similar reports makes me suspect this might come down to the OS or specific dependency versions--but I haven't had time to fiddle with them._
We had some trouble this fall with flaky test runs using live_server and selenium for end-to-end tests. I'll describe everything I can recall in case it helps someone find the issue; scroll to the end for the workaround. :)
----
## Initial encounter/debug
The issue initially manifested as inconsistent test/fixture failures at the first location in a test that depended on finding/interacting with anything specific loaded (because the server never started and the browser has a blank page loaded). When I first encountered this, the test exception/assertion errors were all I got.
I eventually figured out that `live_server._process` could silently fail to start during setup. You can detect this with something like:
```python
live_server.start()
if not live_server._process.is_alive():
...
```
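Jumping ahead to the workaround named in the title: running the server in a `threading.Thread` inside the test process avoids the fork entirely. A stdlib sketch of the pattern, with `http.server` standing in for Flask's `app.run`:

```python
# Stand-in for the threading.Thread workaround: run a server in a daemon
# thread in-process (http.server here instead of Flask's app.run).
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep request logging out of test output
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)       # port 0 = pick a free port
thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()

port = server.server_address[1]
body = urlopen(f"http://127.0.0.1:{port}/").read()
server.shutdown()
thread.join(timeout=2)
server.server_close()
print(body.decode())  # ok
```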
While most of my test runs fail (running 2 parameterized tests 8x each), there's usually only one failure across the run. It often (but not always) fails on the same test. Any remaining tests usually run fine. Playing around for a bit (different numbers of tests, turning each test into a simple pass, renaming them to force different orders) made me think that where it breaks is somewhat correlated with how much work the tests make the server do.
Stepping through the code isolated the problem to when `app.run(host=host, port=port, use_reloader=False, threaded=True)` is called inside the worker/target function passed to `multiprocessing.Process`. At the time, I had trouble introspecting the child process (see below), but I did notice the crash produces OS crash reports. Here's an example:
<details>
<summary>segfault crash report</summary>
```
Process: python3.6 [92689]
Path: /nix/*/python3.6
Identifier: python3.6
Version: ???
Code Type: X86-64 (Native)
Parent Process: python3.6 [92491]
Responsible: python3.6 [92689]
User ID: 501
Date/Time: 2019-12-30 11:58:18.934 -0600
OS Version: Mac OS X 10.14.6 (18G103)
Report Version: 12
Bridge OS Version: 3.6 (16P6571)
Anonymous UUID: 73B82B43-D894-E9FF-58D2-C4D60BD5AEFB
Sleep/Wake UUID: 6A0E1A0F-A8D2-4968-9C72-E38508FDB072
Time Awake Since Boot: 550000 seconds
Time Since Wake: 8300 seconds
System Integrity Protection: enabled
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x000000010b060a3a
Exception Note: EXC_CORPSE_NOTIFY
VM Regions Near 0x10b060a3a:
VM_ALLOCATE 000000010abe0000-000000010b060000 [ 4608K] rw-/rwx SM=COW
-->
VM_ALLOCATE 000000010b0a0000-000000010b2a0000 [ 2048K] rw-/rwx SM=COW
Application Specific Information:
crashed on child side of fork pre-exec
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 libsystem_kernel.dylib 0x00007fff6ac462c6 __pthread_kill + 10
1 libsystem_pthread.dylib 0x00007fff6ad01bf1 pthread_kill + 284
2 libsystem_c.dylib 0x00007fff6ab63d8a raise + 26
3 libsystem_platform.dylib 0x00007fff6acf6b5d _sigtramp + 29
4 ??? 0x00007fffa13ad000 0 + 140735898374144
5 libsystem_trace.dylib 0x00007fff6ad1a13d os_log_type_enabled + 627
6 libsystem_info.dylib 0x00007fff6ac2c709 si_destination_compare_statistics + 1993
7 libsystem_info.dylib 0x00007fff6ac2b1a5 si_destination_compare_internal + 661
8 libsystem_info.dylib 0x00007fff6ac2ad3f si_destination_compare + 559
9 libsystem_info.dylib 0x00007fff6ac096df _gai_addr_sort + 111
10 libsystem_c.dylib 0x00007fff6abb3e5b _isort + 193
11 libsystem_c.dylib 0x00007fff6abb3d88 _qsort + 2125
12 libsystem_info.dylib 0x00007fff6ac00f2d _gai_sort_list + 781
13 libsystem_info.dylib 0x00007fff6abff885 si_addrinfo + 2021
14 libsystem_info.dylib 0x00007fff6abfef77 _getaddrinfo_internal + 231
15 libsystem_info.dylib 0x00007fff6abfee7d getaddrinfo + 61
16 _socket.cpython-36m-darwin.so 0x0000000107e4649e setipaddr + 494
17 _socket.cpython-36m-darwin.so 0x0000000107e45d3b getsockaddrarg + 539
18 _socket.cpython-36m-darwin.so 0x0000000107e43044 sock_bind + 52
19 libpython3.6m.dylib 0x00000001064ab852 _PyCFunction_FastCallDict + 610
20 libpython3.6m.dylib 0x000000010653f57a call_function + 602
21 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
22 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
23 libpython3.6m.dylib 0x000000010653f549 call_function + 553
24 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
25 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
26 libpython3.6m.dylib 0x000000010653f549 call_function + 553
27 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
28 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
29 libpython3.6m.dylib 0x000000010654097b fast_function + 411
30 libpython3.6m.dylib 0x000000010653f549 call_function + 553
31 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
32 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
33 libpython3.6m.dylib 0x0000000106540d6f _PyFunction_FastCallDict + 607
34 libpython3.6m.dylib 0x0000000106459aa6 _PyObject_FastCallDict + 182
35 libpython3.6m.dylib 0x0000000106459c3c _PyObject_Call_Prepend + 156
36 libpython3.6m.dylib 0x00000001064598e5 PyObject_Call + 101
37 libpython3.6m.dylib 0x00000001064c548f slot_tp_init + 159
38 libpython3.6m.dylib 0x00000001064c1224 type_call + 292
39 libpython3.6m.dylib 0x0000000106459b31 _PyObject_FastCallDict + 321
40 libpython3.6m.dylib 0x0000000106459f55 _PyObject_FastCallKeywords + 197
41 libpython3.6m.dylib 0x000000010653f4f2 call_function + 466
42 libpython3.6m.dylib 0x000000010653c438 _PyEval_EvalFrameDefault + 26808
43 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
44 libpython3.6m.dylib 0x000000010654097b fast_function + 411
45 libpython3.6m.dylib 0x000000010653f549 call_function + 553
46 libpython3.6m.dylib 0x000000010653c438 _PyEval_EvalFrameDefault + 26808
47 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
48 libpython3.6m.dylib 0x000000010654097b fast_function + 411
49 libpython3.6m.dylib 0x000000010653f549 call_function + 553
50 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
51 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
52 libpython3.6m.dylib 0x0000000106535b37 PyEval_EvalCodeEx + 55
53 libpython3.6m.dylib 0x0000000106487a0f function_call + 399
54 libpython3.6m.dylib 0x00000001064598e5 PyObject_Call + 101
55 libpython3.6m.dylib 0x000000010653c5b0 _PyEval_EvalFrameDefault + 27184
56 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
57 libpython3.6m.dylib 0x000000010654097b fast_function + 411
58 libpython3.6m.dylib 0x000000010653f549 call_function + 553
59 libpython3.6m.dylib 0x000000010653c438 _PyEval_EvalFrameDefault + 26808
60 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
61 libpython3.6m.dylib 0x0000000106535b37 PyEval_EvalCodeEx + 55
62 libpython3.6m.dylib 0x0000000106487a0f function_call + 399
63 libpython3.6m.dylib 0x00000001064598e5 PyObject_Call + 101
64 libpython3.6m.dylib 0x000000010653c5b0 _PyEval_EvalFrameDefault + 27184
65 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
66 libpython3.6m.dylib 0x000000010653f549 call_function + 553
67 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
68 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
69 libpython3.6m.dylib 0x000000010653f549 call_function + 553
70 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
71 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
72 libpython3.6m.dylib 0x000000010653f549 call_function + 553
73 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
74 libpython3.6m.dylib 0x0000000106540f0b _PyFunction_FastCallDict + 1019
75 libpython3.6m.dylib 0x0000000106459aa6 _PyObject_FastCallDict + 182
76 libpython3.6m.dylib 0x0000000106459c3c _PyObject_Call_Prepend + 156
77 libpython3.6m.dylib 0x00000001064598e5 PyObject_Call + 101
78 libpython3.6m.dylib 0x00000001064c548f slot_tp_init + 159
79 libpython3.6m.dylib 0x00000001064c1224 type_call + 292
80 libpython3.6m.dylib 0x0000000106459b31 _PyObject_FastCallDict + 321
81 libpython3.6m.dylib 0x000000010653f4f2 call_function + 466
82 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
83 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
84 libpython3.6m.dylib 0x000000010653f549 call_function + 553
85 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
86 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
87 libpython3.6m.dylib 0x000000010653f549 call_function + 553
88 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
89 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
90 libpython3.6m.dylib 0x000000010653f549 call_function + 553
91 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
92 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
93 libpython3.6m.dylib 0x000000010653f549 call_function + 553
94 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
95 libpython3.6m.dylib 0x000000010647e2dc gen_send_ex + 252
96 libpython3.6m.dylib 0x0000000106533bae builtin_next + 110
97 libpython3.6m.dylib 0x00000001064ab6a4 _PyCFunction_FastCallDict + 180
98 libpython3.6m.dylib 0x000000010653f57a call_function + 602
99 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
100 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
101 libpython3.6m.dylib 0x000000010653f549 call_function + 553
102 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
103 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
104 libpython3.6m.dylib 0x0000000106535b37 PyEval_EvalCodeEx + 55
105 libpython3.6m.dylib 0x0000000106487a0f function_call + 399
106 libpython3.6m.dylib 0x00000001064598e5 PyObject_Call + 101
107 libpython3.6m.dylib 0x000000010653c5b0 _PyEval_EvalFrameDefault + 27184
108 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
109 libpython3.6m.dylib 0x000000010654097b fast_function + 411
110 libpython3.6m.dylib 0x000000010653f549 call_function + 553
111 libpython3.6m.dylib 0x000000010653c438 _PyEval_EvalFrameDefault + 26808
112 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
113 libpython3.6m.dylib 0x000000010654097b fast_function + 411
114 libpython3.6m.dylib 0x000000010653f549 call_function + 553
115 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
116 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
117 libpython3.6m.dylib 0x000000010653f549 call_function + 553
118 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
119 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
120 libpython3.6m.dylib 0x0000000106540d6f _PyFunction_FastCallDict + 607
121 libpython3.6m.dylib 0x0000000106459aa6 _PyObject_FastCallDict + 182
122 libpython3.6m.dylib 0x0000000106459c3c _PyObject_Call_Prepend + 156
123 libpython3.6m.dylib 0x00000001064598e5 PyObject_Call + 101
124 libpython3.6m.dylib 0x00000001064c4499 slot_tp_call + 153
125 libpython3.6m.dylib 0x0000000106459b31 _PyObject_FastCallDict + 321
126 libpython3.6m.dylib 0x0000000106459f55 _PyObject_FastCallKeywords + 197
127 libpython3.6m.dylib 0x000000010653f4f2 call_function + 466
128 libpython3.6m.dylib 0x000000010653c438 _PyEval_EvalFrameDefault + 26808
129 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
130 libpython3.6m.dylib 0x000000010654097b fast_function + 411
131 libpython3.6m.dylib 0x000000010653f549 call_function + 553
132 libpython3.6m.dylib 0x000000010653c438 _PyEval_EvalFrameDefault + 26808
133 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
134 libpython3.6m.dylib 0x000000010653f549 call_function + 553
135 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
136 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
137 libpython3.6m.dylib 0x000000010653f549 call_function + 553
138 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
139 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
140 libpython3.6m.dylib 0x000000010654097b fast_function + 411
141 libpython3.6m.dylib 0x000000010653f549 call_function + 553
142 libpython3.6m.dylib 0x000000010653c438 _PyEval_EvalFrameDefault + 26808
143 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
144 libpython3.6m.dylib 0x000000010653f549 call_function + 553
145 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
146 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
147 libpython3.6m.dylib 0x000000010653f549 call_function + 553
148 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
149 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
150 libpython3.6m.dylib 0x000000010654097b fast_function + 411
151 libpython3.6m.dylib 0x000000010653f549 call_function + 553
152 libpython3.6m.dylib 0x000000010653c438 _PyEval_EvalFrameDefault + 26808
153 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
154 libpython3.6m.dylib 0x000000010653f549 call_function + 553
155 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
156 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
157 libpython3.6m.dylib 0x000000010653f549 call_function + 553
158 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
159 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
160 libpython3.6m.dylib 0x000000010653f549 call_function + 553
161 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
162 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
163 libpython3.6m.dylib 0x000000010653f549 call_function + 553
164 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
165 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
166 libpython3.6m.dylib 0x000000010653f549 call_function + 553
167 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
168 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
169 libpython3.6m.dylib 0x000000010654097b fast_function + 411
170 libpython3.6m.dylib 0x000000010653f549 call_function + 553
171 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
172 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
173 libpython3.6m.dylib 0x000000010653f549 call_function + 553
174 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
175 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
176 libpython3.6m.dylib 0x0000000106535b37 PyEval_EvalCodeEx + 55
177 libpython3.6m.dylib 0x0000000106487a0f function_call + 399
178 libpython3.6m.dylib 0x00000001064598e5 PyObject_Call + 101
179 libpython3.6m.dylib 0x000000010653c5b0 _PyEval_EvalFrameDefault + 27184
180 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
181 libpython3.6m.dylib 0x000000010654097b fast_function + 411
182 libpython3.6m.dylib 0x000000010653f549 call_function + 553
183 libpython3.6m.dylib 0x000000010653c438 _PyEval_EvalFrameDefault + 26808
184 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
185 libpython3.6m.dylib 0x000000010654097b fast_function + 411
186 libpython3.6m.dylib 0x000000010653f549 call_function + 553
187 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
188 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
189 libpython3.6m.dylib 0x000000010653f549 call_function + 553
190 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
191 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
192 libpython3.6m.dylib 0x0000000106540d6f _PyFunction_FastCallDict + 607
193 libpython3.6m.dylib 0x0000000106459aa6 _PyObject_FastCallDict + 182
194 libpython3.6m.dylib 0x0000000106459c3c _PyObject_Call_Prepend + 156
195 libpython3.6m.dylib 0x00000001064598e5 PyObject_Call + 101
196 libpython3.6m.dylib 0x00000001064c4499 slot_tp_call + 153
197 libpython3.6m.dylib 0x00000001064598e5 PyObject_Call + 101
198 libpython3.6m.dylib 0x000000010653c5b0 _PyEval_EvalFrameDefault + 27184
199 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
200 libpython3.6m.dylib 0x000000010654097b fast_function + 411
201 libpython3.6m.dylib 0x000000010653f549 call_function + 553
202 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
203 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
204 libpython3.6m.dylib 0x000000010654097b fast_function + 411
205 libpython3.6m.dylib 0x000000010653f549 call_function + 553
206 libpython3.6m.dylib 0x000000010653c438 _PyEval_EvalFrameDefault + 26808
207 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
208 libpython3.6m.dylib 0x0000000106535b37 PyEval_EvalCodeEx + 55
209 libpython3.6m.dylib 0x0000000106487a0f function_call + 399
210 libpython3.6m.dylib 0x00000001064598e5 PyObject_Call + 101
211 libpython3.6m.dylib 0x000000010653c5b0 _PyEval_EvalFrameDefault + 27184
212 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
213 libpython3.6m.dylib 0x000000010654097b fast_function + 411
214 libpython3.6m.dylib 0x000000010653f549 call_function + 553
215 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
216 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
217 libpython3.6m.dylib 0x000000010654097b fast_function + 411
218 libpython3.6m.dylib 0x000000010653f549 call_function + 553
219 libpython3.6m.dylib 0x000000010653c438 _PyEval_EvalFrameDefault + 26808
220 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
221 libpython3.6m.dylib 0x0000000106535b37 PyEval_EvalCodeEx + 55
222 libpython3.6m.dylib 0x0000000106487a0f function_call + 399
223 libpython3.6m.dylib 0x00000001064598e5 PyObject_Call + 101
224 libpython3.6m.dylib 0x000000010653c5b0 _PyEval_EvalFrameDefault + 27184
225 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
226 libpython3.6m.dylib 0x000000010654097b fast_function + 411
227 libpython3.6m.dylib 0x000000010653f549 call_function + 553
228 libpython3.6m.dylib 0x000000010653c438 _PyEval_EvalFrameDefault + 26808
229 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
230 libpython3.6m.dylib 0x000000010654097b fast_function + 411
231 libpython3.6m.dylib 0x000000010653f549 call_function + 553
232 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
233 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
234 libpython3.6m.dylib 0x000000010653f549 call_function + 553
235 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
236 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
237 libpython3.6m.dylib 0x0000000106540d6f _PyFunction_FastCallDict + 607
238 libpython3.6m.dylib 0x0000000106459aa6 _PyObject_FastCallDict + 182
239 libpython3.6m.dylib 0x0000000106459c3c _PyObject_Call_Prepend + 156
240 libpython3.6m.dylib 0x00000001064598e5 PyObject_Call + 101
241 libpython3.6m.dylib 0x00000001064c4499 slot_tp_call + 153
242 libpython3.6m.dylib 0x0000000106459b31 _PyObject_FastCallDict + 321
243 libpython3.6m.dylib 0x0000000106459f55 _PyObject_FastCallKeywords + 197
244 libpython3.6m.dylib 0x000000010653f4f2 call_function + 466
245 libpython3.6m.dylib 0x000000010653c438 _PyEval_EvalFrameDefault + 26808
246 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
247 libpython3.6m.dylib 0x0000000106535b37 PyEval_EvalCodeEx + 55
248 libpython3.6m.dylib 0x0000000106487a0f function_call + 399
249 libpython3.6m.dylib 0x00000001064598e5 PyObject_Call + 101
250 libpython3.6m.dylib 0x000000010653c5b0 _PyEval_EvalFrameDefault + 27184
251 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
252 libpython3.6m.dylib 0x000000010654097b fast_function + 411
253 libpython3.6m.dylib 0x000000010653f549 call_function + 553
254 libpython3.6m.dylib 0x000000010653c438 _PyEval_EvalFrameDefault + 26808
255 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
256 libpython3.6m.dylib 0x000000010654097b fast_function + 411
257 libpython3.6m.dylib 0x000000010653f549 call_function + 553
258 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
259 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
260 libpython3.6m.dylib 0x000000010653f549 call_function + 553
261 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
262 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
263 libpython3.6m.dylib 0x0000000106540d6f _PyFunction_FastCallDict + 607
264 libpython3.6m.dylib 0x0000000106459aa6 _PyObject_FastCallDict + 182
265 libpython3.6m.dylib 0x0000000106459c3c _PyObject_Call_Prepend + 156
266 libpython3.6m.dylib 0x00000001064598e5 PyObject_Call + 101
267 libpython3.6m.dylib 0x00000001064c4499 slot_tp_call + 153
268 libpython3.6m.dylib 0x0000000106459b31 _PyObject_FastCallDict + 321
269 libpython3.6m.dylib 0x0000000106459f55 _PyObject_FastCallKeywords + 197
270 libpython3.6m.dylib 0x000000010653f4f2 call_function + 466
271 libpython3.6m.dylib 0x000000010653c438 _PyEval_EvalFrameDefault + 26808
272 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
273 libpython3.6m.dylib 0x000000010653f549 call_function + 553
274 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
275 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
276 libpython3.6m.dylib 0x000000010653f549 call_function + 553
277 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
278 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
279 libpython3.6m.dylib 0x0000000106535b37 PyEval_EvalCodeEx + 55
280 libpython3.6m.dylib 0x0000000106487a0f function_call + 399
281 libpython3.6m.dylib 0x00000001064598e5 PyObject_Call + 101
282 libpython3.6m.dylib 0x000000010653c5b0 _PyEval_EvalFrameDefault + 27184
283 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
284 libpython3.6m.dylib 0x000000010654097b fast_function + 411
285 libpython3.6m.dylib 0x000000010653f549 call_function + 553
286 libpython3.6m.dylib 0x000000010653c438 _PyEval_EvalFrameDefault + 26808
287 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
288 libpython3.6m.dylib 0x000000010654097b fast_function + 411
289 libpython3.6m.dylib 0x000000010653f549 call_function + 553
290 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
291 libpython3.6m.dylib 0x0000000106540a19 fast_function + 569
292 libpython3.6m.dylib 0x000000010653f549 call_function + 553
293 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
294 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
295 libpython3.6m.dylib 0x0000000106540d6f _PyFunction_FastCallDict + 607
296 libpython3.6m.dylib 0x0000000106459aa6 _PyObject_FastCallDict + 182
297 libpython3.6m.dylib 0x0000000106459c3c _PyObject_Call_Prepend + 156
298 libpython3.6m.dylib 0x00000001064598e5 PyObject_Call + 101
299 libpython3.6m.dylib 0x00000001064c4499 slot_tp_call + 153
300 libpython3.6m.dylib 0x0000000106459b31 _PyObject_FastCallDict + 321
301 libpython3.6m.dylib 0x0000000106459f55 _PyObject_FastCallKeywords + 197
302 libpython3.6m.dylib 0x000000010653f4f2 call_function + 466
303 libpython3.6m.dylib 0x000000010653c438 _PyEval_EvalFrameDefault + 26808
304 libpython3.6m.dylib 0x0000000106540ad9 fast_function + 761
305 libpython3.6m.dylib 0x000000010653f549 call_function + 553
306 libpython3.6m.dylib 0x000000010653c3a5 _PyEval_EvalFrameDefault + 26661
307 libpython3.6m.dylib 0x0000000106540163 _PyEval_EvalCodeWithName + 2883
308 libpython3.6m.dylib 0x0000000106535af0 PyEval_EvalCode + 48
309 libpython3.6m.dylib 0x000000010657074e PyRun_FileExFlags + 174
310 libpython3.6m.dylib 0x000000010656fc45 PyRun_SimpleFileExFlags + 277
311 libpython3.6m.dylib 0x000000010658d80a Py_Main + 3866
312 python3.6 0x00000001061d9db8 main + 248
313 python3.6 0x00000001061d9cb4 start + 52
Thread 0 crashed with X86 Thread State (64-bit):
rax: 0x0000000000000000 rbx: 0x0000000111f6f5c0 rcx: 0x00007ffee9a105b8 rdx: 0x0000000000000000
rdi: 0x0000000000000203 rsi: 0x000000000000000b rbp: 0x00007ffee9a105f0 rsp: 0x00007ffee9a105b8
r8: 0x00007ffee9a10ab8 r9: 0x33bb6fc8d10fca0a r10: 0x0000000111f6f66c r11: 0x0000000000000287
r12: 0x0000000000000203 r13: 0x0000000000000000 r14: 0x000000000000000b r15: 0x000000000000002d
rip: 0x00007fff6ac462c6 rfl: 0x0000000000000286 cr2: 0x0000000107fbac98
Logical CPU: 0
Error Code: 0x02000148
Trap Number: 133
Binary Images:
0x1061d9000 - 0x1061d9ff7 +python3.6 (???) <35FF5575-D6AC-3DA6-B015-B42B69210AF7> /nix/*/python3.6
0x1061de000 - 0x106362fff +CoreFoundation (0) <16A969D9-5137-3572-8A2E-4AD27F8E2A69> /nix/*/CoreFoundation.framework/Versions/A/CoreFoundation
0x10644b000 - 0x10665aff7 +libpython3.6m.dylib (3.6) <78A76B4A-DDDF-3426-96F3-9E4708F9FA45> /nix/*/libpython3.6m.dylib
0x10672f000 - 0x10672ffff +libSystem.B.dylib (1226.10.1) <3F5A1DEE-940A-365E-BC6D-312CF83AFCF1> /nix/*/libSystem.B.dylib
0x106731000 - 0x106731fff +grp.cpython-36m-darwin.so (???) <42F483BE-8B8F-3BA6-B313-FD618A10CB25> /nix/*/grp.cpython-36m-darwin.so
0x106735000 - 0x10677cffb +libncursesw.6.dylib (0) <8E490522-234B-37BD-AAE0-B11B86076995> /nix/*/libncursesw.6.dylib
0x106793000 - 0x106976ff7 +libicucore.A.dylib (0) <CDC07E9B-217D-3EE2-9530-E557530AF481> /nix/*/libicucore.A.dylib
0x106a6a000 - 0x106ad7ff7 +libcurl.4.dylib (0) <436D2AC4-CCEF-31DB-BFA4-D524313B5AC2> /nix/*/libcurl.4.dylib
0x106aec000 - 0x106aecfff +_bisect.cpython-36m-darwin.so (???) <CF29A42C-F9FA-3F9B-A726-B40D582E213C> /nix/*/_bisect.cpython-36m-darwin.so
0x106aef000 - 0x106c2cfff +libxml2.2.dylib (0) <09A7EBAA-06E5-3E0D-9B25-E149FF192BED> /nix/*/libxml2.2.dylib
0x106c5d000 - 0x106c5efff +_random.cpython-36m-darwin.so (???) <EC7A5AEB-7F0F-3A81-9DB4-3E29F14C2B8E> /nix/*/_random.cpython-36m-darwin.so
0x106c61000 - 0x106cddff7 +libc++.1.0.dylib (0) <C7A1C95D-2474-362F-A9C2-027706C00B56> /nix/*/libc++.1.0.dylib
0x106d39000 - 0x106d58ff7 +libc++abi.dylib (0) <D716EE50-B468-385A-BE45-5D06E86BA151> /nix/*/libc++abi.dylib
0x106d72000 - 0x106d72fff +libsystem_c.dylib (0) <C32D34C8-BA8B-3354-8D1C-B3CFC3111E8A> /nix/*/libsystem_c.dylib
0x106d8a000 - 0x106d8afff +libsystem_kernel.dylib (0) <0550AE42-4BC3-3B1E-8C21-3D3E5486B4FE> /nix/*/libsystem_kernel.dylib
0x106da7000 - 0x106da7fff +libSystem_internal.dylib (0) <393A2DB2-3E05-3B6E-8B26-043AAF9EE831> /nix/*/libSystem_internal.dylib
0x106da9000 - 0x106daaff7 +_heapq.cpython-36m-darwin.so (???) <BC86CB15-974D-3908-B3E0-DE5F21E1AC78> /nix/*/_heapq.cpython-36m-darwin.so
0x106daf000 - 0x106dccff7 +libnghttp2.14.dylib (0) <CE050852-0C1D-3198-9646-C2EFC0E5406F> /nix/*/libnghttp2.14.dylib
0x106dd8000 - 0x106e07ff3 +libssh2.1.dylib (0) <3B2E5A9F-543B-3180-9DF3-1227D5FDE2DE> /nix/*/libssh2.1.dylib
0x106e13000 - 0x106e71ff3 +libssl.1.1.dylib (0) <549B9F8D-6C27-36D2-90F2-B2D23DFEDE44> /nix/*/libssl.1.1.dylib
0x106e9a000 - 0x10707e37b +libcrypto.1.1.dylib (0) <A632411B-2AB0-3553-ABF5-8FBCBF7E7C53> /nix/*/libcrypto.1.1.dylib
0x107110000 - 0x107141ff7 +libgssapi_krb5.2.2.dylib (0) <A80FEF43-EB00-3F5F-A001-FB48744215AD> /nix/*/libgssapi_krb5.2.2.dylib
0x107152000 - 0x107176ff3 +libresolv.9.dylib (0) <99B3A32A-295C-3078-8543-498FED1EDBCE> /nix/*/libresolv.9.dylib
0x10717f000 - 0x107180fff +_bz2.cpython-36m-darwin.so (???) <A36D9617-BCC4-3B15-8325-278718DB9531> /nix/*/_bz2.cpython-36m-darwin.so
0x107185000 - 0x107199ff3 +libz.dylib (0) <948CB931-36A0-3203-AE70-A077681EEE58> /nix/*/libz.dylib
0x10719f000 - 0x10722dff7 +libkrb5.3.3.dylib (0) <54778E86-DE3C-3480-9256-3F0ECD9A370D> /nix/*/libkrb5.3.3.dylib
0x107265000 - 0x107266fff +fcntl.cpython-36m-darwin.so (???) <A6551221-5D1E-3B4F-BE7C-A30A00CE4EF3> /nix/*/fcntl.cpython-36m-darwin.so
0x10726b000 - 0x107292fff +libk5crypto.3.1.dylib (0) <AB4E33BD-FD3B-3265-9D13-4DB7BFC71AAF> /nix/*/libk5crypto.3.1.dylib
0x10729a000 - 0x10729afff +_opcode.cpython-36m-darwin.so (???) <94BA3E1E-D3CD-3B58-8428-3E701B41093A> /nix/*/_opcode.cpython-36m-darwin.so
0x10729d000 - 0x10729effb +libcom_err.3.0.dylib (0) <C8990D19-55DA-3411-8895-CC7D8C35138F> /nix/*/libcom_err.3.0.dylib
0x1072a1000 - 0x1072a2fff +_posixsubprocess.cpython-36m-darwin.so (???) <047B8A3C-C916-3B38-8130-3154F509BEFF> /nix/*/_posixsubprocess.cpython-36m-darwin.so
0x1072a7000 - 0x1072adffb +libkrb5support.1.1.dylib (0) <E4B7CAA7-2ED2-348E-9043-62DA461C7C91> /nix/*/libkrb5support.1.1.dylib
0x10773a000 - 0x107740fff +_struct.cpython-36m-darwin.so (???) <5C4B2605-7ABA-3FB5-B0C8-B720DC8CE4A7> /nix/*/_struct.cpython-36m-darwin.so
0x1077d0000 - 0x1077d4fff +zlib.cpython-36m-darwin.so (???) <085D2657-6491-3A68-98E8-F582D28211D0> /nix/*/zlib.cpython-36m-darwin.so
0x1077d9000 - 0x1077e9fff +libbz2.1.dylib (0) <89006BA2-B7C7-352A-9ED8-72FC93F39E1F> /nix/*/libbz2.1.dylib
0x10782c000 - 0x107830ff7 +_lzma.cpython-36m-darwin.so (???) <2265DBA0-CCC6-3AAE-8D32-183F691F676E> /nix/*/_lzma.cpython-36m-darwin.so
0x107835000 - 0x107854fff +liblzma.5.dylib (0) <E5EE127A-B99F-3674-AC53-2053AADDD343> /nix/*/liblzma.5.dylib
0x10785a000 - 0x107861ff7 +math.cpython-36m-darwin.so (???) <86DDDBD9-1155-3FC5-B875-1BCCB3009CC1> /nix/*/math.cpython-36m-darwin.so
0x107866000 - 0x107869fff +_hashlib.cpython-36m-darwin.so (???) <574759B3-042A-3082-9C35-A0076E1CE1E5> /nix/*/_hashlib.cpython-36m-darwin.so
0x10786d000 - 0x1078ccff7 +libssl.1.1.dylib (0) <68719DC9-69D4-3ABD-92A0-79FE627AEF9D> /nix/*/libssl.1.1.dylib
0x1078f5000 - 0x107ae6c1f +libcrypto.1.1.dylib (0) <B7825AD6-BDE6-3186-A6AC-3D1293EB47EA> /nix/*/libcrypto.1.1.dylib
0x107b76000 - 0x107b7cfff +_blake2.cpython-36m-darwin.so (???) <33957568-2523-3567-906D-665E2A08132B> /nix/*/_blake2.cpython-36m-darwin.so
0x107b80000 - 0x107b91ff7 +_sha3.cpython-36m-darwin.so (???) <CC16A512-9E8C-3796-9941-33ACAA4A9E43> /nix/*/_sha3.cpython-36m-darwin.so
0x107c6a000 - 0x107c6dff7 +select.cpython-36m-darwin.so (???) <7178C79B-738D-3DDA-A45D-D1DA3BEED3DF> /nix/*/select.cpython-36m-darwin.so
0x107cf2000 - 0x107cf5fff +_csv.cpython-36m-darwin.so (???) <368596C0-BF25-3723-9809-D25BCA9B2EEB> /nix/*/_csv.cpython-36m-darwin.so
0x107d3a000 - 0x107d3dfff +binascii.cpython-36m-darwin.so (???) <995E612B-0897-353A-9958-65FEA02032F8> /nix/*/binascii.cpython-36m-darwin.so
0x107e41000 - 0x107e4bfff +_socket.cpython-36m-darwin.so (???) <F21DA00E-DFD5-34C0-8ACA-725927AA6D66> /nix/*/_socket.cpython-36m-darwin.so
0x107e95000 - 0x107ea4ff7 +_datetime.cpython-36m-darwin.so (???) <503C48E6-2FE2-35AB-9A1A-AE4F7D7E81A9> /nix/*/_datetime.cpython-36m-darwin.so
0x1081bd000 - 0x108200fff +_decimal.cpython-36m-darwin.so (???) <00FAD6BB-6338-380C-9E6F-B7BD99C47086> /nix/*/_decimal.cpython-36m-darwin.so
0x108313000 - 0x10831aff7 +_json.cpython-36m-darwin.so (???) <41AE9698-8233-378F-A0BA-7AE9947FC245> /nix/*/_json.cpython-36m-darwin.so
0x10835e000 - 0x108434fff +unicodedata.cpython-36m-darwin.so (???) <FB0AC214-5CC6-3565-BB5F-B60F046192F7> /nix/*/unicodedata.cpython-36m-darwin.so
0x1084f9000 - 0x10850dfff +_pickle.cpython-36m-darwin.so (???) <5D4ADF61-DA1B-381C-9FD7-91B6561063B7> /nix/*/_pickle.cpython-36m-darwin.so
0x108517000 - 0x108519fff +tracer.cpython-36m-darwin.so (0) <C1659AF7-3C88-3951-8824-DBADC13466DB> /nix/*/tracer.cpython-36m-darwin.so
0x1085dd000 - 0x1085e4fff +array.cpython-36m-darwin.so (???) <FED1D7BF-4AB2-3D1A-AAF3-6F27AE2AE9BE> /nix/*/array.cpython-36m-darwin.so
0x1086eb000 - 0x1086f9ff7 +_ssl.cpython-36m-darwin.so (???) <58C6A9A5-9083-3B51-B098-D0A86AEB38EE> /nix/*/_ssl.cpython-36m-darwin.so
0x108746000 - 0x108747fff +_scproxy.cpython-36m-darwin.so (???) <2530E5AD-804C-3DBE-A53F-04EAF643B7CC> /nix/*/_scproxy.cpython-36m-darwin.so
0x10874a000 - 0x108796ff3 com.apple.SystemConfiguration (1.12.2 - 1.12.2) <B35616CA-8780-34DB-89FE-A5CEEA47A90D> /nix/*/SystemConfiguration.framework/SystemConfiguration
0x108917000 - 0x108918ff3 +_speedups.cpython-36m-darwin.so (0) <A256DCFA-2FFC-3C5D-9BC6-E202EA2B0949> /nix/*/_speedups.cpython-36m-darwin.so
0x1089e5000 - 0x1089e6ff7 +_multiprocessing.cpython-36m-darwin.so (???) <BA3CD43B-A3E4-3A81-93A3-71CA828AB78F> /nix/*/_multiprocessing.cpython-36m-darwin.so
0x108a29000 - 0x108a2eff7 +_asyncio.cpython-36m-darwin.so (???) <2D03A676-BE5C-30AA-9777-248D3DD3E784> /nix/*/_asyncio.cpython-36m-darwin.so
0x108df9000 - 0x108dfaffb +cprocessors.cpython-36m-darwin.so (0) <692A51F8-F468-3281-A0FD-39C3952BB3D7> /nix/*/cprocessors.cpython-36m-darwin.so
0x108dfd000 - 0x108dfdffb +cutils.cpython-36m-darwin.so (0) <8C6ABB63-7D01-3380-A6DC-47B3E74989CC> /nix/*/cutils.cpython-36m-darwin.so
0x1090ee000 - 0x1090efffb +cresultproxy.cpython-36m-darwin.so (0) <396D6918-E57D-3603-8C08-C11D41A3A0C3> /nix/*/cresultproxy.cpython-36m-darwin.so
0x1098d3000 - 0x1098e6ff7 +_ctypes.cpython-36m-darwin.so (???) <8AFA4D0C-7EA1-36D2-8874-472DFD0361D2> /nix/*/_ctypes.cpython-36m-darwin.so
0x1098f1000 - 0x1098f6ff7 +libffi.6.dylib (0) <06D7A7A5-FB71-373F-A23F-9BF3B0DC2BC8> /nix/*/libffi.6.dylib
0x109957000 - 0x109958ff3 +_constant_time.abi3.so (0) <4EF6F8C1-4952-3404-A608-E157533E1FF8> /nix/*/_constant_time.abi3.so
0x10995b000 - 0x109977ff3 +_cffi_backend.cpython-36m-darwin.so (0) <52529F8B-0C8F-3760-B9F5-AAE33B8B069D> /nix/*/_cffi_backend.cpython-36m-darwin.so
0x1099e8000 - 0x1099e9fff +_rjsmin.cpython-36m-darwin.so (0) <212A3869-4C24-377C-85B9-5601F020B8E7> /nix/*/_rjsmin.cpython-36m-darwin.so
0x1099ec000 - 0x1099edfff +termios.cpython-36m-darwin.so (???) <22773E18-8F4E-353E-9443-484684932FED> /nix/*/termios.cpython-36m-darwin.so
0x1099f1000 - 0x1099f2ffb +_padding.abi3.so (0) <F648E0C8-3CAA-39EB-B06D-38FEB4767B29> /nix/*/_padding.abi3.so
0x109a85000 - 0x109ae8ffb +_openssl.abi3.so (0) <564479B8-33DC-3BAD-9826-D57D9E0C8198> /nix/*/_openssl.abi3.so
0x109eae000 - 0x109ebeff7 +_hoedown.abi3.so (0) <4572C822-D79A-3338-92EF-C1361946612C> /nix/*/_hoedown.abi3.so
0x10a1ae000 - 0x10a1cdfff +_psycopg.cpython-36m-darwin.so (0) <A4BBCD35-AF8E-3D24-9490-DFA856F5409B> /nix/*/_psycopg.cpython-36m-darwin.so
0x10a1de000 - 0x10a214ffb +libpq.5.dylib (0) <A57F3AAE-D0B1-311C-8855-8A9866F2425A> /nix/*/libpq.5.dylib
0x10a3e0000 - 0x10a3ecfff +pyexpat.cpython-36m-darwin.so (???) <BFF922CC-2485-3790-87AF-2C9D9C6760AA> /nix/*/pyexpat.cpython-36m-darwin.so
0x10a3f3000 - 0x10a414ff7 +libexpat.1.dylib (0) <98357E18-8248-34D3-B2D1-19F487954805> /nix/*/libexpat.1.dylib
0x111ecd000 - 0x111f3770f dyld (655.1.1) <DFC3C4AF-6F97-3B34-B18D-7DCB23F2A83A> /usr/lib/dyld
0x7fff65828000 - 0x7fff65829fff com.apple.TrustEvaluationAgent (2.0 - 31.200.1) <15DF9C73-54E4-3C41-BCF4-378338C55FB4> /System/Library/PrivateFrameworks/TrustEvaluationAgent.framework/Versions/A/TrustEvaluationAgent
0x7fff67af2000 - 0x7fff67af3ffb libSystem.B.dylib (1252.250.1) <B1006948-7AD0-3CA9-81E0-833F4DD6BFB4> /usr/lib/libSystem.B.dylib
0x7fff67d35000 - 0x7fff67d88ff7 libc++.1.dylib (400.9.4) <9A60A190-6C34-339F-BB3D-AACE942009A4> /usr/lib/libc++.1.dylib
0x7fff67d89000 - 0x7fff67d9eff7 libc++abi.dylib (400.17) <38C09CED-9090-3719-90F3-04A2749F5428> /usr/lib/libc++abi.dylib
0x7fff681f4000 - 0x7fff682ecff7 libcrypto.35.dylib (22.260.1) <91C3D71A-4D1D-331D-89CC-67863DF10574> /usr/lib/libcrypto.35.dylib
0x7fff69329000 - 0x7fff69aaefdf libobjc.A.dylib (756.2) <7C312627-43CB-3234-9324-4DEA92D59F50> /usr/lib/libobjc.A.dylib
0x7fff6a98e000 - 0x7fff6a992ff3 libcache.dylib (81) <1987D1E1-DB11-3291-B12A-EBD55848E02D> /usr/lib/system/libcache.dylib
0x7fff6a993000 - 0x7fff6a99dff3 libcommonCrypto.dylib (60118.250.2) <1765BB6E-6784-3653-B16B-CB839721DC9A> /usr/lib/system/libcommonCrypto.dylib
0x7fff6a99e000 - 0x7fff6a9a5ff7 libcompiler_rt.dylib (63.4) <5212BA7B-B7EA-37B4-AF6E-AC4F507EDFB8> /usr/lib/system/libcompiler_rt.dylib
0x7fff6a9a6000 - 0x7fff6a9afff7 libcopyfile.dylib (146.250.1) <98CD00CD-9B91-3B5C-A9DB-842638050FA8> /usr/lib/system/libcopyfile.dylib
0x7fff6a9b0000 - 0x7fff6aa34fc3 libcorecrypto.dylib (602.260.2) <01464D24-570C-3B83-9D18-467769E0FCDD> /usr/lib/system/libcorecrypto.dylib
0x7fff6aabb000 - 0x7fff6aaf4ff7 libdispatch.dylib (1008.270.1) <97273678-E94C-3C8C-89F6-2E2020F4B43B> /usr/lib/system/libdispatch.dylib
0x7fff6aaf5000 - 0x7fff6ab21ff7 libdyld.dylib (655.1.1) <002418CC-AD11-3D10-865B-015591D24E6C> /usr/lib/system/libdyld.dylib
0x7fff6ab22000 - 0x7fff6ab22ffb libkeymgr.dylib (30) <0D0F9CA2-8D5A-3273-8723-59987B5827F2> /usr/lib/system/libkeymgr.dylib
0x7fff6ab30000 - 0x7fff6ab30ff7 liblaunch.dylib (1336.261.2) <2B07E27E-D404-3E98-9D28-BCA641E5C479> /usr/lib/system/liblaunch.dylib
0x7fff6ab31000 - 0x7fff6ab36fff libmacho.dylib (927.0.3) <A377D608-77AB-3F6E-90F0-B4F251A5C12F> /usr/lib/system/libmacho.dylib
0x7fff6ab37000 - 0x7fff6ab39ffb libquarantine.dylib (86.220.1) <6D0BC770-7348-3608-9254-F7FFBD347634> /usr/lib/system/libquarantine.dylib
0x7fff6ab3a000 - 0x7fff6ab3bff7 libremovefile.dylib (45.200.2) <9FBEB2FF-EEBE-31BC-BCFC-C71F8D0E99B6> /usr/lib/system/libremovefile.dylib
0x7fff6ab3c000 - 0x7fff6ab53ff3 libsystem_asl.dylib (356.200.4) <A62A7249-38B8-33FA-9875-F1852590796C> /usr/lib/system/libsystem_asl.dylib
0x7fff6ab54000 - 0x7fff6ab54ff7 libsystem_blocks.dylib (73) <A453E8EE-860D-3CED-B5DC-BE54E9DB4348> /usr/lib/system/libsystem_blocks.dylib
0x7fff6ab55000 - 0x7fff6abdcfff libsystem_c.dylib (1272.250.1) <7EDACF78-2FA3-35B8-B051-D70475A35117> /usr/lib/system/libsystem_c.dylib
0x7fff6abdd000 - 0x7fff6abe0ffb libsystem_configuration.dylib (963.270.3) <2B4A836D-68A4-33E6-8D48-CD4486B03387> /usr/lib/system/libsystem_configuration.dylib
0x7fff6abe1000 - 0x7fff6abe4ff7 libsystem_coreservices.dylib (66) <719F75A4-74C5-3BA6-A09E-0C5A3E5889D7> /usr/lib/system/libsystem_coreservices.dylib
0x7fff6abe5000 - 0x7fff6abebfff libsystem_darwin.dylib (1272.250.1) <EC9B39A5-9592-3577-8997-7DC721D20D8C> /usr/lib/system/libsystem_darwin.dylib
0x7fff6abec000 - 0x7fff6abf2ff7 libsystem_dnssd.dylib (878.270.2) <E9A5ACCF-E35F-3909-AF0A-2A37CD217276> /usr/lib/system/libsystem_dnssd.dylib
0x7fff6abf3000 - 0x7fff6ac3effb libsystem_info.dylib (517.200.9) <D09D5AE0-2FDC-3A6D-93EC-729F931B1457> /usr/lib/system/libsystem_info.dylib
0x7fff6ac3f000 - 0x7fff6ac67ff7 libsystem_kernel.dylib (4903.271.2) <EA204E3C-870B-30DD-B4AF-D1BB66420D14> /usr/lib/system/libsystem_kernel.dylib
0x7fff6ac68000 - 0x7fff6acb3ff7 libsystem_m.dylib (3158.200.7) <F19B6DB7-014F-3820-831F-389CCDA06EF6> /usr/lib/system/libsystem_m.dylib
0x7fff6acb4000 - 0x7fff6acdefff libsystem_malloc.dylib (166.270.1) <011F3AD0-8E6A-3A89-AE64-6E5F6840F30A> /usr/lib/system/libsystem_malloc.dylib
0x7fff6acdf000 - 0x7fff6ace9ff7 libsystem_networkextension.dylib (767.250.2) <FF06F13A-AEFE-3A27-A073-910EF78AEA36> /usr/lib/system/libsystem_networkextension.dylib
0x7fff6acea000 - 0x7fff6acf1fff libsystem_notify.dylib (172.200.21) <145B5CFC-CF73-33CE-BD3D-E8DDE268FFDE> /usr/lib/system/libsystem_notify.dylib
0x7fff6acf2000 - 0x7fff6acfbfef libsystem_platform.dylib (177.270.1) <9D1FE5E4-EB7D-3B3F-A8D1-A96D9CF1348C> /usr/lib/system/libsystem_platform.dylib
0x7fff6acfc000 - 0x7fff6ad06ff7 libsystem_pthread.dylib (330.250.2) <2D5C08FF-484F-3D59-9132-CE1DCB3F76D7> /usr/lib/system/libsystem_pthread.dylib
0x7fff6ad07000 - 0x7fff6ad0aff7 libsystem_sandbox.dylib (851.270.1) <9494594B-5199-3186-82AB-5FF8BED6EE16> /usr/lib/system/libsystem_sandbox.dylib
0x7fff6ad0b000 - 0x7fff6ad0dff3 libsystem_secinit.dylib (30.260.2) <EF1EA47B-7B22-35E8-BD9B-F7003DCB96AE> /usr/lib/system/libsystem_secinit.dylib
0x7fff6ad0e000 - 0x7fff6ad15ff3 libsystem_symptoms.dylib (820.267.1) <03F1C2DD-0F5A-3D9D-88F6-B26C0F94EB52> /usr/lib/system/libsystem_symptoms.dylib
0x7fff6ad16000 - 0x7fff6ad2bfff libsystem_trace.dylib (906.260.1) <FC761C3B-5434-3A52-912D-F1B15FAA8EB2> /usr/lib/system/libsystem_trace.dylib
0x7fff6ad2c000 - 0x7fff6ad2cff7 libunc.dylib (30) <946AD970-D655-3526-AB11-F4FE52222E0B> /usr/lib/system/libunc.dylib
0x7fff6ad2d000 - 0x7fff6ad32ffb libunwind.dylib (35.4) <24A97A67-F017-3CFC-B0D0-6BD0224B1336> /usr/lib/system/libunwind.dylib
0x7fff6ad33000 - 0x7fff6ad62fff libxpc.dylib (1336.261.2) <7DEE2300-6D8E-3C00-9C63-E3E80D56B0C4> /usr/lib/system/libxpc.dylib
External Modification Summary:
Calls made by other processes targeting this process:
task_for_pid: 0
thread_create: 0
thread_set_state: 0
Calls made by this process:
task_for_pid: 0
thread_create: 0
thread_set_state: 0
Calls made by all processes on this machine:
task_for_pid: 51493004
thread_create: 0
thread_set_state: 0
VM Region Summary:
ReadOnly portion of Libraries: Total=256.9M resident=0K(0%) swapped_out_or_unallocated=256.9M(100%)
Writable regions: Total=107.2M written=0K(0%) resident=0K(0%) swapped_out=0K(0%) unallocated=107.2M(100%)
VIRTUAL REGION
REGION TYPE SIZE COUNT (non-coalesced)
=========== ======= =======
Activity Tracing 256K 1
Kernel Alloc Once 8K 1
MALLOC 49.8M 35
MALLOC guard page 16K 4
MALLOC_LARGE (reserved) 384K 3 reserved VM address space (unallocated)
STACK GUARD 56.0M 1
Stack 8192K 1
VM_ALLOCATE 48.5M 51
__DATA 4412K 123
__LINKEDIT 226.7M 78
__TEXT 30.2M 117
__UNICODE 560K 1
shared memory 12K 3
=========== ======= =======
TOTAL 424.8M 419
TOTAL, minus reserved VM space 424.4M 419
```
</details>
Today I disabled my workaround to confirm that the issue still occurs, and noticed that something in my environment has shifted (maybe pytest: I was using 4.x at the time and now use 5.x). The test run now prints helpful Python segfault messages and full stack traces. Here's an example:
<details>
<summary>segfault stack trace</summary>
```
Fatal Python error: Segmentation fault
Current thread 0x00000001121c05c0 (most recent call first):
File "/nix/store/ajn7df20f65rb00pjkayr82dppyszsn8-python3-3.6.9/lib/python3.6/socketserver.py", line 470 in server_bind
File "/nix/store/ajn7df20f65rb00pjkayr82dppyszsn8-python3-3.6.9/lib/python3.6/http/server.py", line 136 in server_bind
File "/nix/store/ajn7df20f65rb00pjkayr82dppyszsn8-python3-3.6.9/lib/python3.6/socketserver.py", line 456 in __init__
File "/nix/store/ai2cvzkgzgjq4j38rwvgym81jj8vqs1r-python3.6-Werkzeug-0.12.2/lib/python3.6/site-packages/werkzeug/serving.py", line 504 in __init__
File "/nix/store/ai2cvzkgzgjq4j38rwvgym81jj8vqs1r-python3.6-Werkzeug-0.12.2/lib/python3.6/site-packages/werkzeug/serving.py", line 587 in make_server
File "/nix/store/ai2cvzkgzgjq4j38rwvgym81jj8vqs1r-python3.6-Werkzeug-0.12.2/lib/python3.6/site-packages/werkzeug/serving.py", line 699 in inner
File "/nix/store/ai2cvzkgzgjq4j38rwvgym81jj8vqs1r-python3.6-Werkzeug-0.12.2/lib/python3.6/site-packages/werkzeug/serving.py", line 739 in run_simple
File "/nix/store/2wsznll072jgsaqp3ypmd354s9yqw9vw-python3.6-Flask-0.12.2/lib/python3.6/site-packages/flask/app.py", line 841 in run
File "/nix/store/dsdjvybj8bp9cpqw3hzl2fjd0gns0p8d-python3.6-pytest-flask-0.14.0/lib/python3.6/site-packages/pytest_flask/fixtures.py", line 67 in worker
File "/nix/store/ajn7df20f65rb00pjkayr82dppyszsn8-python3-3.6.9/lib/python3.6/multiprocessing/process.py", line 93 in run
File "/nix/store/ajn7df20f65rb00pjkayr82dppyszsn8-python3-3.6.9/lib/python3.6/multiprocessing/process.py", line 258 in _bootstrap
File "/nix/store/ajn7df20f65rb00pjkayr82dppyszsn8-python3-3.6.9/lib/python3.6/multiprocessing/popen_fork.py", line 73 in _launch
File "/nix/store/ajn7df20f65rb00pjkayr82dppyszsn8-python3-3.6.9/lib/python3.6/multiprocessing/popen_fork.py", line 19 in __init__
File "/nix/store/ajn7df20f65rb00pjkayr82dppyszsn8-python3-3.6.9/lib/python3.6/multiprocessing/context.py", line 277 in _Popen
File "/nix/store/ajn7df20f65rb00pjkayr82dppyszsn8-python3-3.6.9/lib/python3.6/multiprocessing/context.py", line 223 in _Popen
File "/nix/store/ajn7df20f65rb00pjkayr82dppyszsn8-python3-3.6.9/lib/python3.6/multiprocessing/process.py", line 105 in start
File "/nix/store/dsdjvybj8bp9cpqw3hzl2fjd0gns0p8d-python3.6-pytest-flask-0.14.0/lib/python3.6/site-packages/pytest_flask/fixtures.py", line 72 in start
File "/Users/abathur/<intentionally snipped>/tests/conftest.py", line 963 in browser
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/fixtures.py", line 775 in call_fixture_func
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/fixtures.py", line 949 in pytest_fixture_setup
File "/nix/store/njj7nw68w2kxf8z6d2s1b5zw8l2dzw3m-python3.6-pluggy-0.13.0/lib/python3.6/site-packages/pluggy/callers.py", line 187 in _multicall
File "/nix/store/njj7nw68w2kxf8z6d2s1b5zw8l2dzw3m-python3.6-pluggy-0.13.0/lib/python3.6/site-packages/pluggy/manager.py", line 86 in <lambda>
File "/nix/store/njj7nw68w2kxf8z6d2s1b5zw8l2dzw3m-python3.6-pluggy-0.13.0/lib/python3.6/site-packages/pluggy/manager.py", line 92 in _hookexec
File "/nix/store/njj7nw68w2kxf8z6d2s1b5zw8l2dzw3m-python3.6-pluggy-0.13.0/lib/python3.6/site-packages/pluggy/hooks.py", line 286 in __call__
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/fixtures.py", line 900 in execute
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/fixtures.py", line 571 in _compute_fixture_value
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/fixtures.py", line 490 in _get_active_fixturedef
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/fixtures.py", line 880 in execute
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/fixtures.py", line 571 in _compute_fixture_value
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/fixtures.py", line 490 in _get_active_fixturedef
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/fixtures.py", line 880 in execute
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/fixtures.py", line 571 in _compute_fixture_value
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/fixtures.py", line 490 in _get_active_fixturedef
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/fixtures.py", line 474 in getfixturevalue
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/fixtures.py", line 464 in _fillfixtures
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/fixtures.py", line 291 in fillfixtures
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/python.py", line 1427 in setup
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/runner.py", line 366 in prepare
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/runner.py", line 118 in pytest_runtest_setup
File "/nix/store/njj7nw68w2kxf8z6d2s1b5zw8l2dzw3m-python3.6-pluggy-0.13.0/lib/python3.6/site-packages/pluggy/callers.py", line 187 in _multicall
File "/nix/store/njj7nw68w2kxf8z6d2s1b5zw8l2dzw3m-python3.6-pluggy-0.13.0/lib/python3.6/site-packages/pluggy/manager.py", line 86 in <lambda>
File "/nix/store/njj7nw68w2kxf8z6d2s1b5zw8l2dzw3m-python3.6-pluggy-0.13.0/lib/python3.6/site-packages/pluggy/manager.py", line 92 in _hookexec
File "/nix/store/njj7nw68w2kxf8z6d2s1b5zw8l2dzw3m-python3.6-pluggy-0.13.0/lib/python3.6/site-packages/pluggy/hooks.py", line 286 in __call__
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/runner.py", line 201 in <lambda>
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/runner.py", line 229 in from_call
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/runner.py", line 201 in call_runtest_hook
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/runner.py", line 176 in call_and_report
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/runner.py", line 89 in runtestprotocol
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/runner.py", line 80 in pytest_runtest_protocol
File "/nix/store/njj7nw68w2kxf8z6d2s1b5zw8l2dzw3m-python3.6-pluggy-0.13.0/lib/python3.6/site-packages/pluggy/callers.py", line 187 in _multicall
File "/nix/store/njj7nw68w2kxf8z6d2s1b5zw8l2dzw3m-python3.6-pluggy-0.13.0/lib/python3.6/site-packages/pluggy/manager.py", line 86 in <lambda>
File "/nix/store/njj7nw68w2kxf8z6d2s1b5zw8l2dzw3m-python3.6-pluggy-0.13.0/lib/python3.6/site-packages/pluggy/manager.py", line 92 in _hookexec
File "/nix/store/njj7nw68w2kxf8z6d2s1b5zw8l2dzw3m-python3.6-pluggy-0.13.0/lib/python3.6/site-packages/pluggy/hooks.py", line 286 in __call__
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/main.py", line 256 in pytest_runtestloop
File "/nix/store/njj7nw68w2kxf8z6d2s1b5zw8l2dzw3m-python3.6-pluggy-0.13.0/lib/python3.6/site-packages/pluggy/callers.py", line 187 in _multicall
File "/nix/store/njj7nw68w2kxf8z6d2s1b5zw8l2dzw3m-python3.6-pluggy-0.13.0/lib/python3.6/site-packages/pluggy/manager.py", line 86 in <lambda>
File "/nix/store/njj7nw68w2kxf8z6d2s1b5zw8l2dzw3m-python3.6-pluggy-0.13.0/lib/python3.6/site-packages/pluggy/manager.py", line 92 in _hookexec
File "/nix/store/njj7nw68w2kxf8z6d2s1b5zw8l2dzw3m-python3.6-pluggy-0.13.0/lib/python3.6/site-packages/pluggy/hooks.py", line 286 in __call__
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/main.py", line 235 in _main
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/main.py", line 191 in wrap_session
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/main.py", line 228 in pytest_cmdline_main
File "/nix/store/njj7nw68w2kxf8z6d2s1b5zw8l2dzw3m-python3.6-pluggy-0.13.0/lib/python3.6/site-packages/pluggy/callers.py", line 187 in _multicall
File "/nix/store/njj7nw68w2kxf8z6d2s1b5zw8l2dzw3m-python3.6-pluggy-0.13.0/lib/python3.6/site-packages/pluggy/manager.py", line 86 in <lambda>
File "/nix/store/njj7nw68w2kxf8z6d2s1b5zw8l2dzw3m-python3.6-pluggy-0.13.0/lib/python3.6/site-packages/pluggy/manager.py", line 92 in _hookexec
File "/nix/store/njj7nw68w2kxf8z6d2s1b5zw8l2dzw3m-python3.6-pluggy-0.13.0/lib/python3.6/site-packages/pluggy/hooks.py", line 286 in __call__
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/lib/python3.6/site-packages/_pytest/config/__init__.py", line 90 in main
File "/nix/store/j44z35nkdkn527j7r93iajm0sv6h0678-python3.6-pytest-5.2.1/bin/.pytest-wrapped", line 11 in <module>
```
</details>
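Both tracebacks bottom out in `multiprocessing/popen_fork.py`, i.e. the crash happens in a child created with `fork()`. Assuming the crash really is fork-related, one alternative worth noting is multiprocessing's `"spawn"` start method, which launches a fresh interpreter instead of forking the test runner. This is only a hedged sketch (`make_server_process` is a hypothetical helper, not pytest-flask API), and `spawn` requires the target and its kwargs to be picklable, which a Flask app often is not; that is part of why I ended up using threads instead:

```python
import multiprocessing as mp

def make_server_process(target, **kwargs):
    # "spawn" starts a fresh interpreter instead of fork()-ing the test
    # runner, sidestepping fork-unsafety in macOS system frameworks.
    # Caveat: target and kwargs must be picklable, unlike with fork.
    ctx = mp.get_context("spawn")
    return ctx.Process(target=target, kwargs=kwargs, daemon=True)
```

The process is created but not started; you would still call `.start()` on it as usual.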
## Workaround
At least in our case, threads proved a viable workaround. I achieved this by subclassing `LiveServer` and overriding the `live_server` fixture in our `conftest.py`:
```python
import socket
import time
from threading import Thread

import pytest
from pytest_flask.fixtures import LiveServer, _rewrite_server_name

try:
    from urllib2 import URLError, urlopen
except ImportError:
    from urllib.error import URLError
    from urllib.request import urlopen


class PatchedLiveServer(LiveServer):
    def start(self):
        """Start the application in a separate daemon thread."""
        self._process = Thread(
            target=self.app.run,
            kwargs=dict(
                host=self.host, port=self.port, use_reloader=False, threaded=False
            ),
            daemon=True,
        )
        self._process.start()

        # We must wait for the server to start listening, with a maximum
        # timeout of 5 seconds.
        timeout = 5
        while timeout > 0:
            time.sleep(1)
            try:
                urlopen(self.url())
                timeout = 0
            except URLError:
                timeout -= 1

    def stop(self):
        # The inherited stop() would break here: a Thread has no
        # terminate(), and join() may hang, so rely on daemon=True.
        pass


@pytest.fixture(scope="function")
def live_server(request, app, monkeypatch, pytestconfig):
    port = pytestconfig.getvalue("live_server_port")

    if port == 0:
        # Bind to an open port
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(("", 0))
        port = s.getsockname()[1]
        s.close()

    host = pytestconfig.getvalue("live_server_host")

    # Explicitly set application ``SERVER_NAME`` for the test suite
    # and restore the original value on test teardown.
    server_name = app.config["SERVER_NAME"] or "localhost"
    monkeypatch.setitem(
        app.config, "SERVER_NAME", _rewrite_server_name(server_name, str(port))
    )

    clean_stop = request.config.getvalue("live_server_clean_stop")
    server = PatchedLiveServer(app, host, port, clean_stop)
    if request.config.getvalue("start_live_server"):
        server.start()

    request.addfinalizer(server.stop)
    return server
``` | open | 2019-12-30T20:27:52Z | 2021-11-04T17:32:30Z | https://github.com/pytest-dev/pytest-flask/issues/103 | [
"stale"
] | abathur | 0 |
ijl/orjson | numpy | 370 | Preparing metadata (pyproject.toml) did not run successfully: Cargo, the Rust package manager, is not installed or is not on PATH. | ```
#0 0.956 Downloading orjson-3.8.9.tar.gz (657 kB)
#0 1.147 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 657.1/657.1 kB 3.4 MB/s eta 0:00:00
#0 1.203 Installing build dependencies: started
#0 3.158 Installing build dependencies: finished with status 'done'
#0 3.159 Getting requirements to build wheel: started
#0 3.191 Getting requirements to build wheel: finished with status 'done'
#0 3.192 Preparing metadata (pyproject.toml): started
#0 3.223 Preparing metadata (pyproject.toml): finished with status 'error'
#0 3.226 error: subprocess-exited-with-error
#0 3.226
#0 3.226 × Preparing metadata (pyproject.toml) did not run successfully.
#0 3.226 │ exit code: 1
#0 3.226 ╰─> [6 lines of output]
#0 3.226
#0 3.226 Cargo, the Rust package manager, is not installed or is not on PATH.
#0 3.226 This package requires Rust and Cargo to compile extensions. Install it through
#0 3.226 the system's package manager or via https://rustup.rs/
#0 3.226
#0 3.226 Checking for Rust toolchain....
#0 3.226 [end of output]
```
Seen on Mac M1 and Ubuntu.
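Since pip only needs Cargo when it must compile from source, a quick stdlib check tells the two situations apart (a sketch; the `orjson==3.8.8` pin mirrors the fallback described in this report, which sidesteps the source build):

```python
import shutil

def can_build_rust_extensions() -> bool:
    """True if Cargo (the Rust package manager) is on PATH."""
    return shutil.which("cargo") is not None

if not can_build_rust_extensions():
    # Either install a Rust toolchain (e.g. via rustup.rs) so pip can
    # compile orjson, or pin a release that ships pre-built wheels.
    print("cargo not found: pip will fail to build orjson from source")
```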
Only appears in the latest release. Falling back to 3.8.8 solved it for me. | closed | 2023-03-28T15:56:07Z | 2023-04-09T15:09:12Z | https://github.com/ijl/orjson/issues/370 | [] | btseytlin | 5 |
yeongpin/cursor-free-vip | automation | 287 | [Bug]: After running the full reset command, Cursor was uninstalled; reinstalling Cursor will not start and always reports a permissions error | ### Pre-submission checklist
- [x] I understand that Issues are for reporting and resolving problems rather than a comment section for complaints, and I will provide as much information as possible to help solve the problem.
- [x] I have checked the pinned Issues and searched the existing [open Issues](https://github.com/yeongpin/cursor-free-vip/issues) and [closed Issues](https://github.com/yeongpin/cursor-free-vip/issues?q=is%3Aissue%20state%3Aclosed%20), and found no similar problem.
- [x] I have written a short and clear title so that developers can quickly identify the general problem when scanning the Issue list, rather than "a suggestion", "stuck", etc.
### Platform
Linux ARM64
### Version
1.7.0
### Error description
[Bug]: After running the full reset command, Cursor was uninstalled; reinstalling Cursor will not start and always reports a permissions error

### Relevant log output
```shell
```
### Additional information
_No response_ | open | 2025-03-18T02:39:51Z | 2025-03-19T01:25:07Z | https://github.com/yeongpin/cursor-free-vip/issues/287 | [
"bug"
] | jeffreycool | 2 |
jupyter/nbviewer | jupyter | 177 | Download an ipynb file on a windows machine | When trying to download an ipynb file by clicking the button displayed in nbviewer, Windows adds an extra extension, .txt. If I choose to save all file types, the .txt is not added, but when I try to open the file on my computer by invoking ipython notebook, an error message is displayed:
Error loading notebook
Unreadable Notebook: Notebook does not appear to be JSON: '\n
| open | 2014-01-20T19:33:08Z | 2018-07-16T13:19:57Z | https://github.com/jupyter/nbviewer/issues/177 | [
"type:Bug"
] | empet | 4 |
erdewit/ib_insync | asyncio | 4 | Unfilled params | IntelliJ IDEA code inspection indicates incorrect call arguments for the following two methods:
- `def exerciseOptions` (ib.py, line 798): parameter 'override' unfilled
- `def reqHistogramDataAsync` (ib.py, line 1007): parameter 'timePeriod' unfilled | closed | 2017-08-11T09:40:53Z | 2017-08-12T12:24:39Z | https://github.com/erdewit/ib_insync/issues/4 | [] | Elektra58 | 3 |
pywinauto/pywinauto | automation | 1,170 | Unable to interact with grid/table | ## Expected Behavior
I would like to be able to read items, select items, etc. from this table/grid. It doesn't seem to be of any type that I can work with, as it is only labeled as a pane type. It only has one child, which is the scroll bar. Is there any possible way to work with this?
Any advice or suggestions much appreciated.
## Actual Behavior

Inspect.exe Output
```
How found: Selected from tree...
Name: "ABONNER"
ControlType: UIA_PaneControlTypeId (0xC371)
LocalizedControlType: "pane"
IsEnabled: true
IsOffscreen: true
IsKeyboardFocusable: true
HasKeyboardFocus: false
AccessKey: ""
ProcessId: 5744
RuntimeId: [2A.20BE8]
AutomationId: "grdUser"
FrameworkId: "WinForm"
ClassName: "WindowsForms10.Window.8.app.0.13965fa_r7_ad1"
NativeWindowHandle: 0x20BE8
ProviderDescription: "[pid:7136,providerId:0x20BE8 Main:Nested [pid:5744,providerId:0x20BE8 Main(parent link):Microsoft: MSAA Proxy (unmanaged:UIAutomationCore.dll)]; Hwnd(parent link):Microsoft: HWND Proxy (unmanaged:uiautomationcore.dll)]"
IsPassword: false
HelpText: ""
IsDialog: false
LegacyIAccessible.ChildId: 0
LegacyIAccessible.DefaultAction: ""
LegacyIAccessible.Description: ""
LegacyIAccessible.Help: ""
LegacyIAccessible.KeyboardShortcut: ""
LegacyIAccessible.Name: "ABONNER"
LegacyIAccessible.Role: client (0xA)
LegacyIAccessible.State: focusable (0x100000)
LegacyIAccessible.Value: ""
IsAnnotationPatternAvailable: false
IsDragPatternAvailable: false
IsDockPatternAvailable: false
IsDropTargetPatternAvailable: false
IsExpandCollapsePatternAvailable: false
IsGridItemPatternAvailable: false
IsGridPatternAvailable: false
IsInvokePatternAvailable: false
IsItemContainerPatternAvailable: false
IsLegacyIAccessiblePatternAvailable: true
IsMultipleViewPatternAvailable: false
IsObjectModelPatternAvailable: false
IsRangeValuePatternAvailable: false
IsScrollItemPatternAvailable: false
IsScrollPatternAvailable: false
IsSelectionItemPatternAvailable: false
IsSelectionPatternAvailable: false
IsSpreadsheetItemPatternAvailable: false
IsSpreadsheetPatternAvailable: false
IsStylesPatternAvailable: false
IsSynchronizedInputPatternAvailable: false
IsTableItemPatternAvailable: false
IsTablePatternAvailable: false
IsTextChildPatternAvailable: false
IsTextEditPatternAvailable: false
IsTextPatternAvailable: false
IsTextPattern2Available: false
IsTogglePatternAvailable: false
IsTransformPatternAvailable: false
IsTransform2PatternAvailable: false
IsValuePatternAvailable: false
IsVirtualizedItemPatternAvailable: false
IsWindowPatternAvailable: false
IsCustomNavigationPatternAvailable: false
IsSelectionPattern2Available: false
FirstChild: "" scroll bar
LastChild: "" scroll bar
Next: [null]
Previous: [null]
Other Props: Object has no additional properties
Children: "" scroll bar
Ancestors: "Panel5" pane
"" pane
"Panel1" pane
"" pane
"Panel1" pane
"" window
"Desktop 1" pane
[ No Parent ]
```
## Specifications
- Pywinauto version: 0.6.8
- Python version and bitness: 3.10 64bit
- Platform and OS: Win10 x64
| open | 2022-01-14T18:19:51Z | 2022-01-14T18:19:51Z | https://github.com/pywinauto/pywinauto/issues/1170 | [] | vectar7 | 0 |
pytorch/pytorch | machine-learning | 149,094 | How to skip backward specific steps in torch.compile | ### 🐛 Describe the bug
I couldn't find much documentation on how to skip backward for specific steps in torch.compile/AOT autograd.
Some info would be helpful.
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu | open | 2025-03-13T02:12:44Z | 2025-03-17T23:55:31Z | https://github.com/pytorch/pytorch/issues/149094 | [
"triaged",
"oncall: pt2"
] | janak2 | 3 |
nteract/papermill | jupyter | 379 | ImportError when Running Code off python.exe | Hi,
When I run the following code, it works as intended.
```
import papermill as pm
pm.execute_notebook('test.ipynb',
'test.ipynb',
parameters=dict())
```
However if I run the code off cmd like so:
`C:\Users\me> python.exe code_above.py`
I get
```
raise ImportError: 'nbconvert --execute' requires the jupyter_client package: 'pip install jupyter_client'
```
So I went to install the client, but it was already installed in my environment. I uninstalled and reinstalled it, with the same error.
The reason why I need python.exe to run it is because I am hoping to put it in windows scheduler.
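A common culprit with this pattern (works in the editor, fails from `cmd`) is that `python.exe` on PATH resolves to a different environment than the one where `jupyter_client` lives. A stdlib check you can run from both places:

```python
import sys
import importlib.util

# Compare these two lines between the working and failing invocations.
print("interpreter:", sys.executable)
print("jupyter_client importable:", importlib.util.find_spec("jupyter_client") is not None)
```

If the interpreter paths differ, pointing Windows Task Scheduler at the full path of the correct environment's `python.exe` should line the two up.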
Any help is appreciated.
Thanks. | closed | 2019-06-13T20:50:29Z | 2019-07-05T20:17:35Z | https://github.com/nteract/papermill/issues/379 | [] | GXAB | 2 |
torchbox/wagtail-grapple | graphql | 134 | Move away from accessing stream_data directly | Ref: https://github.com/wagtail/wagtail/pull/6485
tl;dr "external code should not be using stream_data" | closed | 2020-11-04T19:44:15Z | 2021-08-19T06:54:09Z | https://github.com/torchbox/wagtail-grapple/issues/134 | [
"type: Refactor"
] | zerolab | 3 |
amdegroot/ssd.pytorch | computer-vision | 109 | ValueError: optimizing a parameter that doesn't require gradients | I wanted to freeze the first two layers of the network. Based on [this](http://pytorch.org/docs/master/notes/autograd.html?#excluding-requires-grad)
I wrote code to freeze the first two layers like this, placed before the optimizer creation at line 105 in [train.py](https://github.com/amdegroot/ssd.pytorch/blob/master/train.py).
Here's the code
```python
# Freeze weights
for layer, param in enumerate(net.parameters()):
    if layer == 1 or layer == 2:
        param.requires_grad = False
    else:
        param.requires_grad = True
```
I'm getting this error on this line
`optimizer = optim.SGD(net.parameters(), lr=args.lr,momentum=args.momentum, weight_decay=args.weight_decay)`
```
  File "train.py", line 155, in <module>
    optimizer = optim.SGD(net.parameters(), lr=args.lr,momentum=args.momentum, weight_decay=args.weight_decay)
  File "/Users/name/.virtualenvs/test/lib/python3.6/site-packages/torch/optim/sgd.py", line 57, in __init__
    super(SGD, self).__init__(params, defaults)
  File "/Users/name/.virtualenvs/test/lib/python3.6/site-packages/torch/optim/optimizer.py", line 39, in __init__
    self.add_param_group(param_group)
  File "/Users/name/.virtualenvs/test/lib/python3.6/site-packages/torch/optim/optimizer.py", line 153, in add_param_group
    raise ValueError("optimizing a parameter that doesn't require gradients")
ValueError: optimizing a parameter that doesn't require gradients
```
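For context, the usual fix for this ValueError is to hand the optimizer only the parameters that still require gradients, e.g. `optim.SGD(filter(lambda p: p.requires_grad, net.parameters()), ...)`. The selection step itself is plain Python, sketched here with stand-in objects rather than real tensors:

```python
from types import SimpleNamespace

def trainable_parameters(params):
    """Keep only the parameters the optimizer is allowed to update."""
    return [p for p in params if p.requires_grad]

# Stand-ins for net.parameters(): layers 1 and 2 are frozen as in the snippet above.
params = [SimpleNamespace(layer=i, requires_grad=(i not in (1, 2))) for i in range(5)]
print([p.layer for p in trainable_parameters(params)])  # [0, 3, 4]
```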
What's wrong any help would be appreciated. I'm stuck | open | 2018-02-21T05:58:18Z | 2018-02-22T09:10:30Z | https://github.com/amdegroot/ssd.pytorch/issues/109 | [] | santhoshdc1590 | 1 |
tensorflow/tensor2tensor | deep-learning | 1,917 | Question about bleu evaluation | Hi, I am a little confused about why we should set `REFERENCE_TEST_TRANSLATE_DIR=t2t_local_exp_runs_dir_master/t2t_datagen/dev/newstest2014-deen-ref.en.sgm`, because in my mind the reference should be `de.sgm`. Do you have any idea? Thanks!
| open | 2022-10-20T03:42:52Z | 2022-10-20T10:16:18Z | https://github.com/tensorflow/tensor2tensor/issues/1917 | [] | shizhediao | 1 |
pytest-dev/pytest-selenium | pytest | 194 | get_cookies() is empty | Hi All
I have a test suite which is working nicely when using standard desktop browser settings. (Chrome)
When I try to test as a mobile using the following options, pytest-selenium returns no cookies.
```python
@pytest.fixture()
def chrome_options(chrome_options):
mobile_emulation = { "deviceName": "iPhone 6/7/8" }
chrome_options.add_experimental_option('mobileEmulation', mobile_emulation)
return chrome_options
```
Am I missing something? (I've even tried just opening `google.com` and nothing is shown, but on inspection in the web inspector I can see cookies.) | closed | 2018-09-24T16:22:28Z | 2019-04-29T20:18:05Z | https://github.com/pytest-dev/pytest-selenium/issues/194 | [] | Bobspadger | 13
lexiforest/curl_cffi | web-scraping | 451 | Automatic decoding of the link, resulting in an error request | When a session sends a request through the requests-style interface, the URL is automatically decoded before being sent, which causes some requests to fail.
For example, this URL:
q8gMIv%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FARAEGgw5Mjc
is decoded into:
q8gMIv///////////ARAEGgw5Mjc
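For reference, the two strings round-trip with the standard library's percent-encoding helpers, which is handy if you need to re-encode before handing the URL to the client (a stdlib sketch; whether the session then preserves the encoded form is exactly the question here):

```python
from urllib.parse import quote, unquote

original = "q8gMIv%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FARAEGgw5Mjc"
decoded = unquote(original)            # the '/' form shown above
re_encoded = quote(decoded, safe="")   # '/' becomes %2F again, restoring the original
print(decoded)
print(re_encoded == original)  # True
```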
This causes the request to fail. How can I submit the original URL instead of the automatically decoded version? | closed | 2024-12-02T14:51:31Z | 2024-12-03T09:19:43Z | https://github.com/lexiforest/curl_cffi/issues/451 | [
"bug"
] | zdoek001 | 2 |
allenai/allennlp | nlp | 5,276 | Add label smoothing to CopyNetSeq2Seq | **Is your feature request related to a problem? Please describe.**
I am wondering if it is possible to add label smoothing to [`CopyNetSeq2Seq`](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/generation/models/copynet_seq2seq.py). Label smoothing is implemented for the other allennlp-models under the [`generation`](https://github.com/allenai/allennlp-models/tree/main/allennlp_models/generation/models) module via [`sequence_cross_entropy_with_logits`](https://github.com/allenai/allennlp/blob/cf113d705b9054d329c67cf9bb29cbc3f191015d/allennlp/nn/util.py#L706).
**Describe the solution you'd like**
I think ideally, `CopyNetSeq2Seq` would be updated to use `sequence_cross_entropy_with_logits`, which would enable label smoothing in addition to some other features (like `alpha` and `gamma`).
**Describe alternatives you've considered**
An alternative solution would be to skip `sequence_cross_entropy_with_logits` and just add label smoothing directly to `CopyNetSeq2Seq`, with a new parameter: `label_smoothing`.
**Additional context**
I've looked through the source code of `CopyNetSeq2Seq`, but I can't quite figure out how to implement either solution, or if they are even feasible given the complications that the copy mechanism introduces.
I would be happy to take a crack at this but I think I might need more guidance (if it's feasible and if so where to start).
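As a reference point, the smoothed target distribution itself is simple to write down: spread `eps / num_classes` across every class and give the gold class the remainder (this matches my reading of `sequence_cross_entropy_with_logits`, but treat the exact convention as an assumption; the helper name is made up):

```python
def smoothed_targets(num_classes: int, gold: int, eps: float) -> list:
    """Soft target for one token: uniform eps mass plus the gold remainder."""
    base = eps / num_classes
    dist = [base] * num_classes
    dist[gold] = 1.0 - eps + base
    return dist

dist = smoothed_targets(num_classes=4, gold=2, eps=0.1)
print(dist)  # gold class gets 0.925, every other class 0.025; sums to 1.0
```

The copy mechanism is what makes this hard for CopyNetSeq2Seq: the distribution above assumes a fixed vocabulary, while CopyNet's loss mixes generation and copy scores per token.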
| closed | 2021-06-22T00:44:59Z | 2021-08-05T17:48:21Z | https://github.com/allenai/allennlp/issues/5276 | [
"Contributions welcome"
] | JohnGiorgi | 5 |
google/seq2seq | tensorflow | 128 | UnicodeEncodeError when doing En-De prediction | After training an En-De model, I tried running the inference task and got a UnicodeEncodeError:
My training and prediction scripts are attached.
Error:
```
Traceback (most recent call last):
  File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/home/okuchaiev/repos/seq2seq/bin/infer.py", line 129, in <module>
    tf.app.run()
  File "/home/okuchaiev/repos/tensorflow/_python_build/tensorflow/python/platform/app.py", line 44, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "/home/okuchaiev/repos/seq2seq/bin/infer.py", line 125, in main
    sess.run([])
  File "/home/okuchaiev/repos/tensorflow/_python_build/tensorflow/python/training/monitored_session.py", line 462, in run
    run_metadata=run_metadata)
  File "/home/okuchaiev/repos/tensorflow/_python_build/tensorflow/python/training/monitored_session.py", line 786, in run
    run_metadata=run_metadata)
  File "/home/okuchaiev/repos/tensorflow/_python_build/tensorflow/python/training/monitored_session.py", line 744, in run
    return self._sess.run(*args, **kwargs)
  File "/home/okuchaiev/repos/tensorflow/_python_build/tensorflow/python/training/monitored_session.py", line 899, in run
    run_metadata=run_metadata))
  File "/home/okuchaiev/repos/seq2seq/seq2seq/tasks/decode_text.py", line 188, in after_run
    print(sent)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf6' in position 6: ordinal not in range(128)
```
************
My prediction script:
```shell
export CUDA_VISIBLE_DEVICES=0
export MODEL_DIR=/media/okuchaiev/D2/Workspace/seq2seq/Exp1_toy/nmt_tutorial_small
export PRED_DIR=${MODEL_DIR}/pred
export DATA_PATH=/home/okuchaiev/nmt_data/wmt16_de_en
export DEV_SOURCES=${DATA_PATH}/newstest2013.tok.bpe.32000.en

mkdir -p ${PRED_DIR}

python -m ${HOME}/repos/seq2seq/bin/infer \
  --tasks "
    - class: DecodeText" \
  --model_dir $MODEL_DIR \
  --input_pipeline "
    class: ParallelTextInputPipeline
    params:
      source_files:
        - $DEV_SOURCES" \
  > ${PRED_DIR}/predictions.txt
```
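For what it's worth, the crash is the classic Python 2 pattern of printing a non-ASCII character (here ö, `u'\xf6'`) to an ASCII-configured stdout; common workarounds are running with `PYTHONIOENCODING=utf-8` set, or encoding before printing. A minimal Python 3 illustration of the encode step (in the Python 2 code at `decode_text.py` the equivalent would be `print(sent.encode('utf-8'))`):

```python
# U+00F6 (ö) is the character the traceback complains about.
sent = "f\xf6r"
encoded = sent.encode("utf-8")  # b'f\xc3\xb6r': safe to write to any byte stream
print(encoded)
```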
| closed | 2017-03-29T18:24:57Z | 2017-03-29T18:39:41Z | https://github.com/google/seq2seq/issues/128 | [] | okuchaiev | 2 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 599 | [Feature request] Hello author, could you consider adding scraping of TikTok video data via keyword search? | open | 2025-03-23T07:04:51Z | 2025-03-23T07:04:51Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/599 | [
"enhancement"
] | palp1233211 | 0 | |
Nemo2011/bilibili-api | api | 419 | [Question] The live danmaku interface only receives the first danmaku | **Python version:** 3.11.4
**Module version:** 15.5.3
**Runtime environment:** Windows
<!-- Be sure to provide the module version and make sure it is the latest -->
---
Today the live danmaku WebSocket suddenly stopped receiving subsequent danmaku; it only receives the very first danmaku or the entry notifications. I have tried multiple accounts and IP addresses without success, and LiveDanmaku is currently initialized with a Credential. | closed | 2023-08-10T08:43:11Z | 2023-09-01T23:53:56Z | https://github.com/Nemo2011/bilibili-api/issues/419 | [
"bug",
"solved"
] | xqe2011 | 29 |
pywinauto/pywinauto | automation | 507 | Getting/Setting slider values on OBS Studio 64 | I'm trying to get/set the range of a volume slider on OBS Studio 64 bit. https://obsproject.com/download
I'm on the latest version `21.1.2`. Here is my code:
```python
from pywinauto.application import Application
app = Application(backend='uia').connect(path='obs64.exe')
# Mixers area
mixers = app.top_window().child_window(title="Mixer", control_type="Window")
# the volume slider
slider = mixers.child_window(
title_re="Volume slider for 'Desktop Audio'",
control_type="Slider"
)
slider_wrapper = slider.wrapper_object()
print(slider_wrapper.min_value())
```
Here's the exception:
```
Traceback (most recent call last):
File "C:\Users\glenbot\AppData\Local\Programs\Python\Python36\lib\site-packages\pywinauto\uia_defines.py", line 232, in get_elem_interface
iface = cur_ptrn.QueryInterface(cls_name)
File "C:\Users\glenbot\AppData\Local\Programs\Python\Python36\lib\site-packages\comtypes\__init__.py", line 1158, in QueryInterface
self.__com_QueryInterface(byref(iid), byref(p))
ValueError: NULL COM pointer access
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:/Users/glenbot/Documents/code/simple-stream-deck/test.py", line 55, in <module>
print(slider_wrapper.min_value())
File "C:\Users\glenbot\AppData\Local\Programs\Python\Python36\lib\site-packages\pywinauto\controls\uia_controls.py", line 434, in min_value
return self.iface_range_value.CurrentMinimum
File "C:\Users\glenbot\AppData\Local\Programs\Python\Python36\lib\site-packages\pywinauto\controls\uiawrapper.py", line 131, in __get__
value = self.fget(obj)
File "C:\Users\glenbot\AppData\Local\Programs\Python\Python36\lib\site-packages\pywinauto\controls\uiawrapper.py", line 258, in iface_range_value
return uia_defs.get_elem_interface(elem, "RangeValue")
File "C:\Users\glenbot\AppData\Local\Programs\Python\Python36\lib\site-packages\pywinauto\uia_defines.py", line 234, in get_elem_interface
raise NoPatternInterfaceError()
pywinauto.uia_defines.NoPatternInterfaceError
```
I have attempted to set the slider using coordinates but the resolution of the click matched to decibel value gets difficult to predict. I would rather call `set_value()` but I get the same error. Any help would be appreciated :)
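When `NoPatternInterfaceError` just means the control does not implement the RangeValue pattern, a workaround some people use is keyboard nudging: `set_focus()` on the slider wrapper, then `type_keys()` with arrow keys. The only real logic is converting a decibel delta into a press count, which is plain arithmetic (a sketch; the 0.5 dB-per-press step is an assumption about OBS, not something I verified):

```python
def presses_needed(current: float, target: float, step: float) -> int:
    """Arrow-key presses to move a slider; positive means RIGHT/UP."""
    return round((target - current) / step)

n = presses_needed(current=-10.0, target=-4.0, step=0.5)
print(n)  # 12
# e.g.: slider_wrapper.set_focus(); slider_wrapper.type_keys("{RIGHT}" * n)
```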
| closed | 2018-06-17T18:41:52Z | 2021-10-08T07:51:29Z | https://github.com/pywinauto/pywinauto/issues/507 | [
"enhancement",
"UIA-related",
"good first issue",
"refactoring_critical"
] | glenbot | 5 |
ethanopp/fitly | plotly | 14 | Trouble connecting oura | Hi, I was having trouble trying to connect my Oura account. I hit the 'connect oura' button on the settings page, was redirected to the Oura page to grant access to the app, accepted the grant, and was redirected back to the settings page with the 'connect oura' button still there. In the debug output I see:
```
Exception on /_dash-update-component [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/usr/local/lib/python3.7/site-packages/dash/dash.py", line 1076, in dispatch
    response.set_data(func(*args, outputs_list=outputs_list))
  File "/usr/local/lib/python3.7/site-packages/dash/dash.py", line 1007, in add_context
    output_value = func(*args, **kwargs)  # %% callback invoked %%
  File "/app/src/fitly/pages/settings.py", line 1093, in update_tokens
    oura_auth_client.fetch_access_token(parse_qs(query_params.query)['code'][0])
KeyError: 'code'
```
It turns out the callback URL to Fitly contained an additional "?" (after `/settings?oura`); converting the second "?" to an "&" and then re-submitting the callback URL connected the Oura account.
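The manual fix described above (every "?" after the first becomes "&") is mechanical enough to automate before re-submitting the callback URL; a stdlib sketch:

```python
def normalize_query(url: str) -> str:
    """Turn any '?' after the first into '&' so the query string parses."""
    head, sep, rest = url.partition("?")
    return head + sep + rest.replace("?", "&")

print(normalize_query("https://example.test/settings?oura?code=abc"))
# https://example.test/settings?oura&code=abc
```

The hostname here is a placeholder; the real callback target is whatever Fitly registered with Oura.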
Not sure if there's anything you can do about it, but might help someone else if they see this. | closed | 2021-01-05T04:51:12Z | 2021-01-09T19:36:15Z | https://github.com/ethanopp/fitly/issues/14 | [] | spawn-github | 4 |
python-gitlab/python-gitlab | api | 2,401 | Difficulties to print trace properly | Hello everybody,
I'm having trouble displaying the logs properly in the terminal. The aim is to allow non-gitlab users to launch pipelines from their machines (could be Shell or Powershell, so we write it in Python and wrap it).
I would like this to be as close as possible to what can be found on Gitlab interface.
Ex :
- User launch script (it triggers a 3 jobs pipeline)
- It shows logs of first job, then logs of second job, then third job (no concurent jobs)
- At the end it displays pipeline status and exit
Here is what I have for the moment, but I'm sure there is a better way to do it
```python
import gitlab
import time
import os

gl = gitlab.Gitlab(url='HIDDEN')

trigger_token = HIDDEN
variables={HIDDEN}
project_id = HIDDEN

project = gl.projects.get(project_id)
pipeline = project.trigger_pipeline('main', trigger_token, variables=variables)

jobs = pipeline.jobs.list()
job_list = []
for x in jobs:
    job_list.append(x.id)
job_list.sort()

while pipeline.finished_at is None:
    pipeline.refresh()
    for i in job_list:
        job = project.jobs.get(i)
        while job.finished_at is None:
            os.system('cls' if os.name == 'nt' else 'clear')
            print("Job :", job.id)
            print("")
            print(job.trace())
            time.sleep(15)
```
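One refinement over clearing the screen on every poll is printing only the part of the trace that is new since the last iteration. The bookkeeping is library-independent (a sketch; inside the loop you would feed it `job.trace()`, decoding to `str` first if your python-gitlab version returns bytes):

```python
def emit_new_output(trace: str, printed: int) -> int:
    """Print only the unseen tail of `trace` and return the new offset."""
    if len(trace) > printed:
        print(trace[printed:], end="")
    return max(printed, len(trace))

offset = 0
for snapshot in ["step 1\n", "step 1\nstep 2\n"]:  # successive trace snapshots
    offset = emit_new_output(snapshot, offset)      # each line is printed once
```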
Thanks in advance for your help | closed | 2022-11-29T05:53:19Z | 2023-12-11T01:17:45Z | https://github.com/python-gitlab/python-gitlab/issues/2401 | [
"need info",
"support"
] | d3ployment | 2 |
clovaai/donut | nlp | 14 | How to perform text reading task | Hi, thanks for the great project!
I am excited to integrate the model into my document understanding project, and I want to implement the text reading task.
I have one question:
- According to my understanding, I should download the pretrained model from "naver-clova-ix/donut-base", but what would be the prompt token fed into the decoder? | closed | 2022-08-08T15:10:17Z | 2022-08-15T02:34:49Z | https://github.com/clovaai/donut/issues/14 | [] | mike132129 | 1
deezer/spleeter | tensorflow | 290 | Command not found: Spleeter | <img width="568" alt="Screen Shot 2020-03-12 at 9 33 25 AM" src="https://user-images.githubusercontent.com/58147163/76526825-87602400-6444-11ea-9ec2-a16ea279bd02.png">
Any ideas? I followed all the instructions. | closed | 2020-03-12T13:34:49Z | 2020-05-25T19:25:21Z | https://github.com/deezer/spleeter/issues/290 | [
"bug",
"invalid"
] | chrisgauthier9 | 2 |
serengil/deepface | machine-learning | 538 | Install issues | pip install deepface is resulting in the following error
"ImportError: cannot import name 'DeepFace' from partially initialized module 'deepface' (most likely due to a circular import)"
I have deleted and recreated the environment multiple times and continue to get this message. Running the same install command two months ago on a different system did not produce this error. | closed | 2022-08-19T21:14:00Z | 2022-08-19T21:44:58Z | https://github.com/serengil/deepface/issues/538 | [
"question"
] | kg6kvq | 5 |
ray-project/ray | tensorflow | 51,514 | [Autoscaler] Add Support for BatchingNodeProvider in Autoscaler Config Option | ### Description
[KubeRay](https://docs.ray.io/en/latest/cluster/kubernetes/user-guides/configuring-autoscaling.html#overview) currently uses the BatchingNodeProvider to manage clusters externally (using the KubeRay operator), which enables users to interact with external cluster management systems. However, to support custom providers with the BatchingNodeProvider, users must implement a module and integrate it as an external type provider, which leads to inconvenience.
On the other hand, [LocalNodeProvider](https://github.com/ray-project/ray/tree/master/python/ray/autoscaler/_private/local) offers the CoordinatorSenderNodeProvider to manage clusters externally through a coordinator server, [but the local type provider currently does not support updates for clusters](https://github.com/ray-project/ray/issues/39565).
To simplify custom cluster management, adding the BatchingNodeProvider and BatchingSenderNodeProvider would be highly beneficial. This would significantly assist users who wish to customize and use their own providers for managing clusters (on-premises or multi cloud environments).
For example, the following configuration could be used to add the BatchingNodeProvider to the provider type:
```yaml
provider:
type: batch
coordinator_address: "127.0.0.1:8000"
```
This would allow users to easily configure external cluster management with the BatchingNodeProvider, enhancing the flexibility and usability of the system.
### Use case
https://github.com/ray-project/ray/blob/8773682e49876627b9b4e10e2d2f4f32d961c0c9/python/ray/autoscaler/_private/providers.py#L184-L197
If the 'batch' type is additionally supported in the provider configuration, users will be able to manage the creation and deletion of cluster nodes externally in the coordinator server. | open | 2025-03-19T06:51:24Z | 2025-03-19T22:23:54Z | https://github.com/ray-project/ray/issues/51514 | [
"enhancement",
"P2",
"core"
] | nadongjun | 0 |
jupyterhub/repo2docker | jupyter | 852 | RShiny bus-dashboard example returns 500 | <!-- Thank you for contributing. These HTML comments will not render in the issue, but you can delete them once you've read them if you prefer! -->
### Bug description
<!-- Use this section to clearly and concisely describe the bug. -->
Running [RShiny bus-dashboard example](https://github.com/binder-examples/r) with repo2docker is timing out and reporting "500 : internal server error" on loading.
#### Expected behaviour
<!-- Tell us what you thought would happen. -->
`jupyter-repo2docker rshiny-bus` builds a container and runs it.
Visiting `http://127.0.0.1:<port>?token=<token>` in browser brings you to Jupyter.
Visiting `http://127.0.0.1:<port>/shiny/bus-dashboard` runs the example shiny app.
#### Actual behaviour
<!-- Tell us what you actually happens. -->
Visiting `http://127.0.0.1:<port>/shiny/bus-dashboard` reports "500 : internal server error".
In the console this traceback is logged:
[E 00:31:47.512 NotebookApp] Uncaught exception GET /shiny/bus-dashboard (172.17.0.1)
HTTPServerRequest(protocol='http', host='127.0.0.1:49558', method='GET', uri='/shiny/bus-dashboard', version='HTTP/1.1', remote_ip='172.17.0.1')
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/tornado/web.py", line 1699, in _execute
result = await result
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/jupyter_server_proxy/websocket.py", line 94, in get
return await self.http_get(*args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/jupyter_server_proxy/handlers.py", line 391, in http_get
return await self.proxy(self.port, path)
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/jupyter_server_proxy/handlers.py", line 387, in proxy
return await super().proxy(self.port, path)
File "/srv/conda/envs/notebook/lib/python3.7/site-packages/jupyter_server_proxy/handlers.py", line 209, in proxy
response = await client.fetch(req, raise_error=False)
tornado.simple_httpclient.HTTPTimeoutError: Timeout during request
### Your personal set up
<!-- Tell us a little about the system you're using. You can see the guidelines for setting up and reporting this information at https://repo2docker.readthedocs.io/en/latest/contributing/contributing.html#setting-up-for-local-development. -->
- OS: OSX
- Docker version 19.03.5
- repo2docker version repo2docker version 0.11.0 | closed | 2020-02-28T01:33:33Z | 2020-03-01T21:06:03Z | https://github.com/jupyterhub/repo2docker/issues/852 | [] | supern8ent | 5 |
coqui-ai/TTS | python | 3,099 | KeyError: 'xtts_v1' | Hey, when I run the following Python API call I encounter KeyError: 'xtts_v1'.
```
import torch
from TTS.api import TTS
# Get device
device = "cuda" if torch.cuda.is_available() else "cpu"
# List available 🐸TTS models
print(TTS().list_models())
# Init TTS
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v1").to(device)
# Run TTS
# ❗ Since this model is multi-lingual voice cloning model, we must set the target speaker_wav and language
# Text to speech list of amplitude values as output
wav = tts.tts(text="Hello world!", speaker_wav="my/cloning/audio.wav", language="en")
# Text to speech to a file
tts.tts_to_file(text="Hello world!", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav")
```
### The Error
```
No API token found for 🐸Coqui Studio voices - https://coqui.ai
Visit 🔗https://app.coqui.ai/account to get one.
Set it as an environment variable `export COQUI_STUDIO_TOKEN=<token>`
['tts_models/multilingual/multi-dataset/your_tts', 'tts_models/bg/cv/vits', 'tts_models/cs/cv/vits', 'tts_models/da/cv/vits', 'tts_models/et/cv/vits', 'tts_models/ga/cv/vits', 'tts_models/en/ek1/tacotron2', 'tts_models/en/ljspeech/tacotron2-DDC', 'tts_models/en/ljspeech/tacotron2-DDC_ph', 'tts_models/en/ljspeech/glow-tts', 'tts_models/en/ljspeech/speedy-speech', 'tts_models/en/ljspeech/tacotron2-DCA', 'tts_models/en/ljspeech/vits', 'tts_models/en/ljspeech/vits--neon', 'tts_models/en/ljspeech/fast_pitch', 'tts_models/en/ljspeech/overflow', 'tts_models/en/ljspeech/neural_hmm', 'tts_models/en/vctk/vits', 'tts_models/en/vctk/fast_pitch', 'tts_models/en/sam/tacotron-DDC', 'tts_models/en/blizzard2013/capacitron-t2-c50', 'tts_models/en/blizzard2013/capacitron-t2-c150_v2', 'tts_models/en/multi-dataset/tortoise-v2', 'tts_models/en/jenny/jenny', 'tts_models/es/mai/tacotron2-DDC', 'tts_models/es/css10/vits', 'tts_models/fr/mai/tacotron2-DDC', 'tts_models/fr/css10/vits', 'tts_models/uk/mai/glow-tts', 'tts_models/uk/mai/vits', 'tts_models/zh-CN/baker/tacotron2-DDC-GST', 'tts_models/nl/mai/tacotron2-DDC', 'tts_models/nl/css10/vits', 'tts_models/de/thorsten/tacotron2-DCA', 'tts_models/de/thorsten/vits', 'tts_models/de/thorsten/tacotron2-DDC', 'tts_models/de/css10/vits-neon', 'tts_models/ja/kokoro/tacotron2-DDC', 'tts_models/tr/common-voice/glow-tts', 'tts_models/it/mai_female/glow-tts', 'tts_models/it/mai_female/vits', 'tts_models/it/mai_male/glow-tts', 'tts_models/it/mai_male/vits', 'tts_models/ewe/openbible/vits', 'tts_models/hau/openbible/vits', 'tts_models/lin/openbible/vits', 'tts_models/tw_akuapem/openbible/vits', 'tts_models/tw_asante/openbible/vits', 'tts_models/yor/openbible/vits', 'tts_models/hu/css10/vits', 'tts_models/el/cv/vits', 'tts_models/fi/css10/vits', 'tts_models/hr/cv/vits', 'tts_models/lt/cv/vits', 'tts_models/lv/cv/vits', 'tts_models/mt/cv/vits', 'tts_models/pl/mai_female/vits', 'tts_models/pt/cv/vits', 'tts_models/ro/cv/vits', 'tts_models/sk/cv/vits', 
'tts_models/sl/cv/vits', 'tts_models/sv/cv/vits', 'tts_models/ca/custom/vits', 'tts_models/fa/custom/glow-tts', 'tts_models/bn/custom/vits-male', 'tts_models/bn/custom/vits-female']
Traceback (most recent call last):
File "d:/liveManga/work.py", line 11, in <module>
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v1").to(device)
File "C:\Users\smart\AppData\Local\Programs\Python\Python37\lib\site-packages\TTS\api.py", line 289, in __init__
self.load_tts_model_by_name(model_name, gpu)
File "C:\Users\smart\AppData\Local\Programs\Python\Python37\lib\site-packages\TTS\api.py", line 386, in load_tts_model_by_name
model_name
File "C:\Users\smart\AppData\Local\Programs\Python\Python37\lib\site-packages\TTS\api.py", line 348, in download_model_by_name
model_path, config_path, model_item = self.manager.download_model(model_name)
File "C:\Users\smart\AppData\Local\Programs\Python\Python37\lib\site-packages\TTS\utils\manage.py", line 287, in download_model
model_item, model_full_name, model = self._set_model_item(model_name)
File "C:\Users\smart\AppData\Local\Programs\Python\Python37\lib\site-packages\TTS\utils\manage.py", line 269, in _set_model_item
model_item = self.models_dict[model_type][lang][dataset][model]
KeyError: 'xtts_v1'
```
### Environment
```shell
TTS version 0.14.3
python 3.7
cuda 12
windows 11
```
| closed | 2023-10-22T02:52:05Z | 2023-10-22T19:16:14Z | https://github.com/coqui-ai/TTS/issues/3099 | [] | a-3isa | 1 |
huggingface/transformers | nlp | 36,296 | tensor parallel training bug | ### System Info
transformers:4.45.dev0
python:3.11
linux
### Who can help?
#34194
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```bash
torchrun --nnodes 1 --nproc_per_node 2 --master_port 27654 run_clm.py \
    --model_name_or_path TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --do_train \
    --do_eval \
    --tp_size 2 \
    --output_dir /tmp/test-clm
```
**unexpected behavior:**
RuntimeError: aten._foreach_norm_Scalar: got mixed torch.Tensor and DTensor, need to convert all torch.Tensor to DTensor before calling distributed operators.
### Expected behavior
autoTP training | open | 2025-02-20T08:15:10Z | 2025-03-23T08:03:34Z | https://github.com/huggingface/transformers/issues/36296 | [
"bug"
] | iMountTai | 4 |
CorentinJ/Real-Time-Voice-Cloning | python | 289 | How do I use my own mp3? | I'm playing with the demo, and I only have an option to record. How do I import an audio file?
Thanks.
| closed | 2020-02-26T00:24:56Z | 2020-07-04T22:35:07Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/289 | [] | orenong | 10 |
horovod/horovod | deep-learning | 3,027 | CMake Error in horovod/torch/CMakeLists.txt: Target "pytorch" requires the language dialect "CXX14" , but CMake does not know the compile flags to use to enable it. | **Environment:**
1. Framework: (PyTorch,)
2. Framework version:
3. Horovod version:
4. MPI version:
5. CUDA version:
6. NCCL version:
7. Python version:
8. Spark / PySpark version:
9. Ray version:
10. OS and version:
11. GCC version:
12. CMake version:
-- Configuring done
CMake Error in horovod/torch/CMakeLists.txt:
Target "pytorch" requires the language dialect "CXX14" , but CMake does not
know the compile flags to use to enable it.
-- Generating done
CMake Generate step failed. Build files cannot be regenerated correctly.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-3j8y4qov/horovod_155b0d6aeac74d1899be1b6ff9cb8742/setup.py", line 199, in <module>
'horovodrun = horovod.runner.launch:run_commandline'
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/site-packages/setuptools/__init__.py", line 163, in setup
return distutils.core.setup(**attrs)
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/site-packages/wheel/bdist_wheel.py", line 299, in run
self.run_command('build')
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 87, in run
_build_ext.run(self)
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "/tmp/pip-install-3j8y4qov/horovod_155b0d6aeac74d1899be1b6ff9cb8742/setup.py", line 95, in build_extensions
cwd=cmake_build_dir)
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/subprocess.py", line 363, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '/tmp/pip-install-3j8y4qov/horovod_155b0d6aeac74d1899be1b6ff9cb8742', '-DCMAKE_BUILD_TYPE=RelWithDebInfo', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELWITHDEBINFO=/tmp/pip-install-3j8y4qov/horovod_155b0d6aeac74d1899be1b6ff9cb8742/build/lib.linux-x86_64-3.7', '-DPYTHON_EXECUTABLE:FILEPATH=/home/xx/.conda/envs/dalle_test/bin/python3.7']' returned non-zero exit status 1.
----------------------------------------
ERROR: Failed building wheel for horovod
| closed | 2021-07-08T08:03:34Z | 2021-08-03T10:13:52Z | https://github.com/horovod/horovod/issues/3027 | [
"bug"
] | Junzh821 | 1 |
google-research/bert | tensorflow | 1,074 | Are adam weights and variances necessary to continue pretraining? | Before continuing pre-training from one of the checkpoints provided in the readme page, I reduced the size of the checkpoint by removing Adam weights and parameters, keeping only the model weights.
Do you think this might affect the performance of continuing the pre-training? (and/or even fine-tuning?)
In other words, are those parameters necessary to continue the pre-training in the correct way?
Thank you very much in advance for any comment and suggestion!
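For context: BERT's `AdamWeightDecayOptimizer` stores its slot variables with `/adam_m` and `/adam_v` name suffixes, so the stripping described above is usually a name filter applied over `tf.train.list_variables`. A minimal sketch of just the filter (the variable names below are illustrative):

```python
def keep_variable(name):
    # BERT's AdamWeightDecayOptimizer names its slots
    # "<param_name>/adam_m" and "<param_name>/adam_v".
    return not name.endswith(("/adam_m", "/adam_v"))

names = [
    "bert/embeddings/word_embeddings",
    "bert/embeddings/word_embeddings/adam_m",
    "bert/embeddings/word_embeddings/adam_v",
    "global_step",
]
print([n for n in names if keep_variable(n)])
# → ['bert/embeddings/word_embeddings', 'global_step']
```

The kept variables can then be re-saved with a `tf.train.Saver` restricted to that list.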
| open | 2020-04-27T16:04:10Z | 2020-04-27T16:04:37Z | https://github.com/google-research/bert/issues/1074 | [] | pretidav | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 773 | Slow training speed? | Thanks for your great work on CycleGAN. However, when I train it on my data, the training process seems very slow, almost 7 epochs a day. My training data contains 22000 images of size 256x256 in trainA, and the loss curves show no obvious changes. | closed | 2019-09-20T07:05:43Z | 2019-09-23T01:48:01Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/773 | [] | JerryLeolfl | 7 |
jina-ai/serve | fastapi | 6,030 | Flow with http doesn't support docarray float attribute | **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
The Flow will raise an error when sending a float over HTTP; gRPC works fine.
```python
from typing import Optional

from docarray import BaseDoc, DocList
from jina import Flow, Executor, requests


class DummyDoc(BaseDoc):
    number: float


class DummyExecutor(Executor):
    @requests
    def foo(self, docs: DocList[DummyDoc], **kwargs) -> DocList[DummyDoc]:
        result: DocList[DummyDoc] = DocList[DummyDoc]()
        for d in docs:
            result.append(DummyDoc(number=d.number * 2))
        return result


with Flow(protocol='http').add(uses=DummyExecutor) as f:
    result = f.post(on='/foo', inputs=DocList([DummyDoc(number=0.5)]))
    print(result[0].number)
```
Throws
```
Traceback (most recent call last):
File "/opt/anaconda3/envs/py310/lib/python3.10/site-packages/jina/serve/runtimes/worker/request_handling.py", line 1049, in process_data
result = await self.handle(
File "/opt/anaconda3/envs/py310/lib/python3.10/site-packages/jina/serve/runtimes/worker/request_handling.py", line 647, in handle
len_docs = len(requests[0].docs) # TODO we can optimize here and access the
File "/opt/anaconda3/envs/py310/lib/python3.10/site-packages/jina/types/request/data.py", line 278, in docs
return self.data.docs
File "/opt/anaconda3/envs/py310/lib/python3.10/site-packages/jina/types/request/data.py", line 47, in docs
self._loaded_doc_array = self.document_array_cls.from_protobuf(
File "/opt/anaconda3/envs/py310/lib/python3.10/site-packages/docarray/array/doc_list/doc_list.py", line 310, in from_protobuf
return super().from_protobuf(pb_msg)
File "/opt/anaconda3/envs/py310/lib/python3.10/site-packages/docarray/array/doc_list/io.py", line 122, in from_protobuf
return cls(cls.doc_type.from_protobuf(doc_proto) for doc_proto in pb_msg.docs)
File "/opt/anaconda3/envs/py310/lib/python3.10/site-packages/docarray/array/doc_list/doc_list.py", line 130, in __init__
super().__init__(docs)
File "/opt/anaconda3/envs/py310/lib/python3.10/site-packages/docarray/array/doc_list/doc_list.py", line 157, in _validate_docs
for doc in docs:
File "/opt/anaconda3/envs/py310/lib/python3.10/site-packages/docarray/array/doc_list/io.py", line 122, in <genexpr>
return cls(cls.doc_type.from_protobuf(doc_proto) for doc_proto in pb_msg.docs)
File "/opt/anaconda3/envs/py310/lib/python3.10/site-packages/docarray/base_doc/mixins/io.py", line 250, in from_protobuf
return cls(**fields)
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for DummyDoc
number
none is not an allowed value (type=type_error.none.not_allowed)
```
Deployment doesn't have such issues:
```
from jina import Deployment

with Deployment(protocol='http', uses=DummyExecutor) as f:
    result = f.post(on='/foo', inputs=DocList([DummyDoc(number=0.5)]))
```
**Describe how you solve it**
<!-- copy past your code/pull request link -->
---
<!-- Optional, but really help us locate the problem faster -->
**Environment**
<!-- Run `jina --version-full` and copy paste the output here -->
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. --> | closed | 2023-08-17T10:41:38Z | 2024-03-18T10:30:14Z | https://github.com/jina-ai/serve/issues/6030 | [] | ZiniuYu | 0 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,840 | [Feature Request]: Option to save original mask image when inpainting | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
To be able to reproduce results, there should be an option to save the original mask image, which can later be used in the `inpaint upload` tab.
Currently there is already an option `save_mask` (For inpainting, save a copy of the greyscale mask). However, this option only saves the processed mask.
Originally, this wasn't much of an issue because the only processing applied to the mask was the mask blur. To reproduce a result, you could simply use the already-blurred mask in `inpaint upload` and then simply set mask blur 0 to to prevent the mask from being blurred twice.
However, with the introduction of soft inpainting, this approach no longer works, because for every seed/result there will be a different mask image.
Optionally, this original mask image could be saved only once prior to generating results, similar to init images.
### Proposed workflow
Add an option in settings > saving images/grids, the description could be something like `For inpainting, save a copy of the original mask`.
### Additional information
_No response_ | open | 2024-05-19T16:22:55Z | 2024-05-19T16:22:55Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15840 | [
"enhancement"
] | Drake53 | 0 |
huggingface/datasets | tensorflow | 6,721 | Hi, do you know how to load the dataset from a local file now? | Hi, if I want to load the dataset from a local file, how do I specify the configuration name?
_Originally posted by @WHU-gentle in https://github.com/huggingface/datasets/issues/2976#issuecomment-1333455222_
| open | 2024-03-07T13:58:40Z | 2024-03-31T08:09:25Z | https://github.com/huggingface/datasets/issues/6721 | [] | Gera001 | 3 |
reloadware/reloadium | pandas | 179 | VSCode plugin roadmap | Dear Reloadium project maintainers,
I am captivated by the features of Reloadium as presented on your official website. Unfortunately, I use VSCode as my primary production tool, rather than PyCharm. Thus, I would like to inquire whether you could provide a more detailed roadmap for this project. Thank you! | closed | 2024-02-08T14:46:19Z | 2024-02-17T14:25:02Z | https://github.com/reloadware/reloadium/issues/179 | [] | Mirac-Le | 1 |
Johnserf-Seed/TikTokDownload | api | 229 | Videos downloaded with TikTokTool and TikTokDownload have different resolutions. | All videos batch-downloaded with TikTokTool.exe are 1080p,
while a single video downloaded with TikTokDownload.exe is 720p. Could it be upgraded to 1080p as well?
Test Douyin profile page: https://v.douyin.com/MdRBCPk/ | closed | 2022-10-07T08:29:37Z | 2022-11-27T11:49:27Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/229 | [
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] | happyaguang | 2 |
tensorpack/tensorpack | tensorflow | 1,121 | Assign model to my graph | When I try to use tensorpack as part of my code to get the accuracy of a model, I define a Keras model outside of it, but there turns out to be a conflict between the two models. When I add
```
child_graph = tf.Graph()
with child_graph.as_default():
```
before the tensorpack training, the conflict is solved. I wonder if there is any way to put the training on my own graph or session. | closed | 2019-03-27T11:00:43Z | 2019-04-03T06:39:01Z | https://github.com/tensorpack/tensorpack/issues/1121 | [
"unrelated"
] | Guocode | 1 |
pyeventsourcing/eventsourcing | sqlalchemy | 284 | Exclude unnecessary packages (tests, examples) from distribution | When installing the `eventsourcing` package, unnecessary directories like `tests` and `examples` are included in the package by default. These folders are not required for production use and add extra size to the installation.
To improve the package structure and reduce its size, please adjust the build configuration to exclude these directories from the default distribution.
**Steps to Reproduce**:
1. Install the package via pip:
```bash
pip install eventsourcing
```
2. Observe that `tests` and `examples` folders are included in the installed package.
**Expected Behavior**:
Only the essential source files are included in the package, excluding `tests`, `examples`, and any other non-essential directories.
**Suggested Solution**:
Update the `MANIFEST.in` or the setup configuration to exclude these unnecessary directories from the package.
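For illustration, a sketch of how the setuptools side could look with `find_packages(exclude=...)`; the throwaway directory layout below exists only to demonstrate the filter:

```python
import os
import tempfile

from setuptools import find_packages

# Build a throwaway layout: one real package plus tests/ and examples/.
root = tempfile.mkdtemp()
for pkg in ("eventsourcing", "tests", "examples"):
    os.makedirs(os.path.join(root, pkg))
    open(os.path.join(root, pkg, "__init__.py"), "w").close()

found = find_packages(
    where=root,
    exclude=["tests", "tests.*", "examples", "examples.*"],
)
print(found)  # → ['eventsourcing']
```

For sdists, the equivalent `MANIFEST.in` lines would be `prune tests` and `prune examples`.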
**Environment**:
- Python version: 3.11
- eventsourcing version: 9.3.2 | closed | 2024-11-05T11:47:43Z | 2024-11-08T07:34:40Z | https://github.com/pyeventsourcing/eventsourcing/issues/284 | [] | vmorugin | 2 |
jina-ai/clip-as-service | pytorch | 386 | how could I support concurrency upto 50 a seconds? | **Prerequisites**
> Please fill in by replacing `[ ]` with `[x]`.
* [yes ] Are you running the latest `bert-as-service`?
* [yes ] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [yes ] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [yes ] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
**System information**
> Some of this information can be collected via [this script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh).
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):windows10
- TensorFlow installed from (source or binary):aliyun
- TensorFlow version:tensorflow-gpu==1.12.0
- Python version:python3.5.4
- `bert-as-service` version: 1.9.1
- GPU model and memory:GTX 1060
- CPU model and memory:i7 7700k
---
### Description
> Please replace `YOUR_SERVER_ARGS` and `YOUR_CLIENT_ARGS` accordingly. You can also write your own description for reproducing the issue.
I'm using this command to start the server:
```bash
bert-serving-start -model_dir D:\NLU\chinese_L-12_H-768_A-12 -tuned_model_dir D:\NLU\rasa_model_output -ckpt_name=model.ckpt-597
```
and calling the server via:
```python
all_tokens = []
for msg in message:
    msg_tokens = []
    for t in msg.get("tokens"):
        text = self._replace_number_blank(t.text)
        if text != '':
            msg_tokens.append(text)
    a = str(msg_tokens)
    a = a.replace('[', '')
    a = a.replace(']', '')
    a = a.replace(',', '')
    a = a.replace('\'', '')
    a = a.replace(' ', '')
    all_tokens.append(list(a))
    #all_tokens.append(a)
logger.info("bert vectors featurizer finished")
try:
    bert_embedding = self.bc.encode(all_tokens, is_tokenized=True)
    bert_embedding = np.squeeze(bert_embedding)
```
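(For reference, the `replace` chain in the snippet above amounts to joining the tokens and splitting the result into single characters; a standalone sketch of that equivalence, with made-up tokens:)

```python
msg_tokens = ["你好", "世界"]

# What the str()/replace() chain computes:
a = str(msg_tokens)
for ch in ("[", "]", ",", "'", " "):
    a = a.replace(ch, "")

# The same thing, directly:
direct = list("".join(msg_tokens))
print(list(a) == direct, direct)  # → True ['你', '好', '世', '界']
```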
Then this issue shows up:
I want to increase the concurrency performance in the production environment,
where each user inputs one sentence at a time.
I used JMeter to test: the service handles about 10 requests per second,
but when the load goes up to 20 per second, this call:
`bert_embedding = self.bc.encode(all_tokens, is_tokenized=True)`
blocks and costs a lot of time.
How could I improve the concurrency up to 50 requests per second?
Should I use this parameter in the server-side config?
`-http_max_connect 50`
Thanks
weizhen
... | open | 2019-06-19T02:55:47Z | 2019-06-20T02:37:56Z | https://github.com/jina-ai/clip-as-service/issues/386 | [] | weizhenzhao | 1 |
itamarst/eliot | numpy | 75 | setup.py points to "hybridlogic" github url instead of "hybridcluster" | Fortunately the former is a redirect to the latter. Still would be better to point directly at the right page though.
| closed | 2014-05-15T15:49:58Z | 2018-09-22T20:59:13Z | https://github.com/itamarst/eliot/issues/75 | [
"documentation"
] | exarkun | 1 |
streamlit/streamlit | streamlit | 10,452 | v1.42.0 introduces call to asyncio.get_event_loop().is_running() which sometimes throws RuntimeError | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Developer of aider here. After upgrading my streamlit dependency from 1.41.1 -> 1.42.0 I started receiving a high rate of exception reports from end users. See the stack trace below showing `RuntimeError: There is no current event loop in thread 'MainThread'.`.
Looking at the streamlit repo, I see the line causing the exception was introduced in 1.42.0.
https://github.com/streamlit/streamlit/blob/54c7a880c46a67b2a8ed1cfe7c10492f97135ffb/lib/streamlit/web/bootstrap.py#L344
I have pinned aider to use 1.41.1 for now, but wanted to make you aware.
Here is the main [aider issue](https://github.com/Aider-AI/aider/issues/3221) where these exceptions have been reported. It is linked with a dozen or more other issues reporting the same problem.
```
cli.main(st_args)
File "core.py", line 1161, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "core.py", line 1082, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "core.py", line 1697, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "core.py", line 1443, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "core.py", line 788, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "cli.py", line 240, in main_run
_main_run(target, args, flag_options=kwargs)
File "cli.py", line 276, in _main_run
bootstrap.run(file, is_hello, args, flag_options)
File "bootstrap.py", line 344, in run
if asyncio.get_event_loop().is_running():
^^^^^^^^^^^^^^^^^^^^^^^^
File "events.py", line 702, in get_event_loop
raise RuntimeError('There is no current event loop in thread %r.'
RuntimeError: There is no current event loop in thread 'MainThread'.
```
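For context, the same `RuntimeError` can be reproduced without Streamlit, since `asyncio.run()` clears the thread's event loop on exit and a later `asyncio.get_event_loop()` then raises instead of creating a new loop (a sketch of the behavior on CPython 3.10+):

```python
import asyncio

async def noop():
    pass

asyncio.run(noop())  # on exit, asyncio.run() sets the thread's loop to None

try:
    asyncio.get_event_loop()
    outcome = "created a loop"
except RuntimeError:
    outcome = "RuntimeError"
print(outcome)  # → RuntimeError
```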
### Reproducible Code Example
```Python
I have no ability to reproduce. Reporting numerous exceptions seen in the wild with end users.
```
### Steps To Reproduce
_No response_
### Expected Behavior
_No response_
### Current Behavior
_No response_
### Is this a regression?
- [x] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.42.0.
- Python version: 3.12.x
- Operating System: Many
- Browser: Many
### Additional Information
_No response_ | closed | 2025-02-19T22:00:35Z | 2025-03-24T02:24:06Z | https://github.com/streamlit/streamlit/issues/10452 | [
"type:bug",
"status:confirmed",
"priority:P1"
] | paul-gauthier | 5 |
autogluon/autogluon | data-science | 3,905 | Multilabel Predictor Issue | I have trained a model 'Multilabel Predictor' in my local computer. I need to run a airflow pipeline to predict the data and store predictions in a table in redshift. The issue with the model stored in my computer is that the pickle file has the hardcore path of my computer (screenshot 1: first line of the pickle file), so when airflow tries to predict, theres an error that the path cannot be recognized. Due this situation, i've trained the same model in SageMaker and i stored it in a path of S3. When i try to predict the model (the one stored in s3), theres another error that botocore cant locate the credentials. (screenshot 2: logs error airflow).
Please, can you provide me any information of what can i do to do a airflow pipeline with the multilabel predictor of autogluon, i already did this for tabular predictor and it worked perfect.

Screenshot 1

Screenshot 2
| open | 2024-02-06T18:57:17Z | 2024-11-25T22:47:10Z | https://github.com/autogluon/autogluon/issues/3905 | [
"bug",
"module: tabular",
"priority: 1"
] | YilanHipp | 0 |
BeanieODM/beanie | asyncio | 1,103 | [BUG] Inconsistent `before_event` trigger behavior | **Describe the bug**
I have a `before_event` annotation to update a field on a document every time it is updated. The behavior the event handling is inconsistent based on what update method I use. I have a full reproduction that shows updating 4 documents in 4 different ways, with differing results:
1. `find_one(...).update(...)` - Not triggered, tag field NOT set in db
2. `find_one(...); result.update(...)` - Triggered ONCE, tag field NOT set in db
3. `find_one(...); result.set(...)` - Triggered ONCE, tag field NOT set in db
4. `find_one(...); result.field = val; result.save()` - Triggered TWICE; tag field SET in db
**To Reproduce**
```python
import asyncio

from beanie import Document, Replace, Save, SaveChanges, Update, before_event
from beanie import init_beanie
from motor.motor_asyncio import AsyncIOMotorClient
from beanie.odm.operators.update.general import Set
from beanie.odm.queries.update import UpdateResponse


class Person(Document):
    name: str
    tag: str | None = None

    @before_event(Save, Update, Replace, SaveChanges)
    def set_tag(self):
        print(" before_event TRIGGERED")
        self.tag = "updated"


async def main():
    client = AsyncIOMotorClient("mongodb://localhost:27017")
    client.get_io_loop = asyncio.get_running_loop
    await init_beanie(
        client["mydb"],
        document_models=[
            Person,
        ],
    )

    print("=== create")
    await Person(name="Alice").insert()
    await Person(name="Bob").insert()
    await Person(name="Charlie").insert()
    await Person(name="Dan").insert()

    print("=== find_one.update")
    result = await Person.find_one(Person.name == "Alice").update(Set({"name": "Alicia"}), response_type=UpdateResponse.NEW_DOCUMENT)
    print(f" result: {result}")

    print("=== find_one; update")
    result = await Person.find_one(Person.name == "Bob")
    result = await result.update(Set({"name": "Bobby"}))
    print(f" result: {result}")

    print("=== find_one; set")
    result = await Person.find_one(Person.name == "Charlie")
    result = await result.set({"name": "Charles"})
    print(f" result: {result}")

    print("=== find_one; save")
    result = await Person.find_one(Person.name == "Dan")
    result.name = "Daniel"
    await result.save()
    print(f" result: {result}")

    print("=== close")
    client.close()


if __name__ == "__main__":
    asyncio.run(main())
```
**Expected behavior**
I'm unsure of whether or not the `before_event` should be triggered twice during the 4th case (find, update, save()), but for all 4 cases I would expect the `before_event` to get triggered at least once, and I would expect the final value in the DB to have a `"tag": "updated"` value.
**Additional context**
I'm particularly interested in the behavior of the first case, where `.update()` is called directly on the result of `find_one()` - I would like to use this pattern with a `before_event` annotation to automatically set an `updatedAt` field on my documents.
The repro code can be run with beanie installed and an empty mongodb instance (I used the `mongo:5` docker image, but I suspect a local instance would work as well).
Python version: 3.10.10
Beanie version: 1.28.0
| open | 2025-01-03T18:40:13Z | 2025-03-16T14:48:44Z | https://github.com/BeanieODM/beanie/issues/1103 | [
"bug"
] | paulpage | 4 |
flavors/django-graphql-jwt | graphql | 250 | Does it really support Graphene V3 ? | Hello everyone !
According to [this commit](https://github.com/flavors/django-graphql-jwt/commit/d50a533e26f1509828ef9fc804b195ebdfc1c04e), Graphene V3 should be supported.
However if I use :
```
django==3.1.5
psycopg2==2.8.6
graphene-django==3.0.0b7
django-graphql-jwt==0.3.1
PyJWT>=1.5.0,<2
pyyaml==5.3.1
gunicorn==20.0.4
```
as requirements, I have errors :
```
Exception in thread django-main-thread:
Traceback (most recent call last):
File "C:\Program Files\Python37\lib\threading.py", line 926, in _bootstrap_inner
self.run()
File "C:\Program Files\Python37\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "C:\Program Files\Python37\lib\site-packages\django\utils\autoreload.py", line 53, in wrapper
fn(*args, **kwargs)
File "C:\Program Files\Python37\lib\site-packages\django\core\management\commands\runserver.py", line 118, in inner_run
self.check(display_num_errors=True)
File "C:\Program Files\Python37\lib\site-packages\django\core\management\base.py", line 396, in check
databases=databases,
File "C:\Program Files\Python37\lib\site-packages\django\core\checks\registry.py", line 70, in run_checks
new_errors = check(app_configs=app_configs, databases=databases)
File "C:\Program Files\Python37\lib\site-packages\django\core\checks\urls.py", line 40, in check_url_namespaces_unique
all_namespaces = _load_all_namespaces(resolver)
File "C:\Program Files\Python37\lib\site-packages\django\core\checks\urls.py", line 57, in _load_all_namespaces
url_patterns = getattr(resolver, 'url_patterns', [])
File "C:\Program Files\Python37\lib\site-packages\django\utils\functional.py", line 48, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Program Files\Python37\lib\site-packages\django\urls\resolvers.py", line 589, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "C:\Program Files\Python37\lib\site-packages\django\utils\functional.py", line 48, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Program Files\Python37\lib\site-packages\django\urls\resolvers.py", line 582, in urlconf_module
return import_module(self.urlconf_name)
File "C:\Program Files\Python37\lib\importlib\__init__.py", line 127, in
import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "E:\Documents\Projects\myproject\myproject\backend\urls.py", line 22, in <module>
from graphql_jwt.decorators import jwt_cookie
File "C:\Program Files\Python37\lib\site-packages\graphql_jwt\__init__.py", line 1, in <module>
from . import relay
File "C:\Program Files\Python37\lib\site-packages\graphql_jwt\relay.py",
line 5, in <module>
from . import mixins
File "C:\Program Files\Python37\lib\site-packages\graphql_jwt\mixins.py", line 8, in <module>
from .decorators import csrf_rotation, ensure_token, setup_jwt_cookie
File "C:\Program Files\Python37\lib\site-packages\graphql_jwt\decorators.py", line 10, in <module>
from graphql.execution.base import ResolveInfo
ModuleNotFoundError: No module named 'graphql.execution.base'
``` | closed | 2021-01-18T08:43:43Z | 2021-01-27T10:06:06Z | https://github.com/flavors/django-graphql-jwt/issues/250 | [] | laurent-brisbois | 4 |
automl/auto-sklearn | scikit-learn | 1,439 | Non-breaking ERROR printed: "init_dgesdd failed init" while running AutoSklearnClassifier | ## Describe the bug ##
Hi,
I am getting the Error 'init_dgesdd failed init' when I perform longer runs (> 46000 s) of the AutoSklearnClassifier. My dataset has the shape (42589, 26). The call still results in an optimized model. So I suspect that only some optimization runs are failing. The AutoSklearnClassifier is called like this:
```
estim = AutoSklearnClassifier(
    n_jobs = 6,
    ensemble_size = 1,
    memory_limit = 8000,
    time_left_for_this_task = 46000,
    metric = autosklearn.metrics.f1,
    max_models_on_disc = 1
)
```
The error also occurs if n_jobs is reduced to 2 and the memory limit is increased, and when running my code on an instance with more total memory.
Googling this issue suggested that this happens in NumPy if the memory limits are exceeded. Do you have an idea why and where this could happen? Thanks a lot for your help!
## To Reproduce ##
Steps to reproduce the behavior:
I was using this script
1. Go to https://github.com/smautner/biofilm/blob/master/biofilm/biofilm-optimize6.py
calling it like this:
```
python -W ignore -m biofilm.biofilm-optimize6 --infile /home/uhlm/Dokumente/Teresa/build_model_with_graph_featurs//PARIS_human_RBP/feature_files/training_data_PARIS_human_RBP_context_150 --featurefile /home/uhlm/Dokumente/Teresa/build_model_with_graph_featurs/PARIS_human_RBP//model//features/PARIS_human_RBP_context_150 --memoryMBthread 10000 --folds 0 --out /home/uhlm/Dokumente/Teresa/build_model_with_graph_featurs/PARIS_human_RBP/model/optimized/PARIS_human_RBP_context_150 --preprocess True --n_jobs 6 --time 50000
```
## Expected behavior ##
I am not sure if this error is due to my OS. I did test it on 3 different instances two with 64 GB and one with 128 GB memory.
## Actual behavior, stacktrace or logfile ##
my output:
```
ERROR: init_dgesdd failed init
init_dgesdd failed init
init_dgesdd failed init
init_dgesdd failed init
init_dgesdd failed init
init_dgesdd failed init
Traceback (most recent call last):
File "/home/uhlm/Progs/anaconda3/envs/cherri/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/uhlm/Progs/anaconda3/envs/cherri/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/uhlm/Progs/anaconda3/envs/cherri/lib/python3.8/site-packages/biofilm/biofilm-optimize6.py", line 82, in <module>
main()
File "/home/uhlm/Progs/anaconda3/envs/cherri/lib/python3.8/site-packages/biofilm/biofilm-optimize6.py", line 78, in main
print('\n',pipeline.steps[2][1].choice.preprocessor.get_support())
AttributeError: 'PolynomialFeatures' object has no attribute 'get_support'
For call: python -W ignore -m biofilm.biofilm-optimize6 --infile /home/uhlm/Dokumente/Teresa/build_model_with_graph_featurs//PARIS_human_RBP/feature_files//training_data_PARIS_human_RBP_context_150 --featurefile /home/uhlm/Dokumente/Teresa/build_model_with_graph_featurs//PARIS_human_RBP//model//features/PARIS_human_RBP_context_150 --memoryMBthread 10000 --folds 0 --out /home/uhlm/Dokumente/Teresa/build_model_with_graph_featurs//PARIS_human_RBP//model//optimized/PARIS_human_RBP_context_150 --preprocess True --n_jobs 6 --time 50000
adding .npz to filename
optimization datatype: <class 'numpy.ndarray'>
[WARNING] [2022-04-07 11:03:42,882:Client-AutoML(1):9aaebae0-b651-11ec-8fe3-901b0eb924fa] Capping the per_run_time_limit to 24999.0 to have time for a least 2 models in each process.
adding .npz to filename
```
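(Regarding the final `AttributeError` in the log: `get_support()` is only defined on feature-selection steps, not on every preprocessor auto-sklearn may pick; a minimal check of that assumption:)

```python
from sklearn.preprocessing import PolynomialFeatures
from sklearn.feature_selection import SelectPercentile

# Only selector-style steps implement get_support(); PolynomialFeatures
# (which auto-sklearn chose in this run) does not.
print(hasattr(SelectPercentile(), "get_support"))    # → True
print(hasattr(PolynomialFeatures(), "get_support"))  # → False
```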
## Environment and installation: ##
I am running the code on Ubuntu 20.04.4 LTS and a conda enviormat with:
python 3.8.12 ha38a3c6_3_cpython conda-forge
auto-sklearn 0.14.2 pyhd8ed1ab_0 conda-forge
| closed | 2022-04-13T13:16:07Z | 2022-04-24T13:37:14Z | https://github.com/automl/auto-sklearn/issues/1439 | [
"documentation"
] | teresa-m | 2 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,290 | Poor results for GTA to Cityscapes translation using CycleGAN | Hello!
I want to generate more realistic images from the GTA dataset,
so I use CycleGAN to translate GTA to Cityscapes, but it gives me poor results.
Can anyone help me fix that?

| open | 2021-06-21T13:27:00Z | 2021-11-04T06:41:13Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1290 | [] | soufianeao | 1 |
dynaconf/dynaconf | flask | 341 | 'box_it_up' key in dict | **Describe the bug**
Loading a dictionary from a YAML config adds an additional pair: 'box_it_up': True
**To Reproduce**
1. create the following settings.yaml:
```yaml
development:
  WEEK_DAYS:
    FRI: false
    MON: false
    SAT: false
    SUN: false
    THU: false
    TUE: false
    WED: false
```
2. Read it using `all_days = settings.WEEK_DAYS`
3. The result is as follows:
```python
all_days = {'FRI': False, 'MON': True, 'SAT': False, 'SUN': False, 'THU': False, 'TUE': False, 'WED': False, 'box_it_up': True}
```
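As a workaround until this is fixed (a sketch on my side, not an official dynaconf API), the stray key can simply be stripped before the dict is used:

```python
# Observed value from step 3 above; 'box_it_up' is the stray internal key.
all_days = {'FRI': False, 'MON': True, 'SAT': False, 'SUN': False,
            'THU': False, 'TUE': False, 'WED': False, 'box_it_up': True}

# Keep only the real day entries.
week_days = {k: v for k, v in all_days.items() if k != 'box_it_up'}
print(week_days)
```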
**Environment (please complete the following information):**
- OS: Windows 10
- Python 3.8.2
- Dynaconf Version 2.2.3
| closed | 2020-05-15T22:29:35Z | 2020-07-27T19:45:24Z | https://github.com/dynaconf/dynaconf/issues/341 | [
"bug",
"Pending Release"
] | And2Devel | 2 |
guohongze/adminset | django | 2 | Suggestions for adminset | I tried out adminset and the experience was quite good: clean, clear, and easy to use. I have a few small suggestions from using it, as follows:
1. Asset management displays relatively little data, as shown in the screenshot below:

For example, it would be better to also show OS version, CPU model, disk size, and similar fields;
2. Add a record of user operations;
3. Provide some explanation of how user permissions are added, for example:

I have never understood the URL involved in adding permissions;
That is all I can think of for now. Many thanks for this project;
paperless-ngx/paperless-ngx | machine-learning | 7,253 | [BUG] Table view columns are messed up on smaller screen sizes | ### Description
I updated to 2.11 today.
I don't know if this was previously working.
In the table view on iPad Safari, columns are missing and column titles are not positioned correctly.

### Steps to reproduce
Go to a view
Select table view
Select the columns to show
### Webserver logs
```bash
none
```
### Browser logs
_No response_
### Paperless-ngx version
2.11
### Host OS
Proxmox, Intel
### Installation method
Docker - official image
### System status
_No response_
### Browser
Safari
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-07-15T13:10:38Z | 2024-08-15T03:04:08Z | https://github.com/paperless-ngx/paperless-ngx/issues/7253 | [
"bug",
"frontend"
] | umtauscher | 5 |
skypilot-org/skypilot | data-science | 4,433 | [Jobs/Serve] Warning for credentials that requires reauth | <!-- Describe the bug report / feature request here -->
Cloud credentials such as AWS and GCP credentials may expire (e.g. AWS SSO or gcloud reauth), which can break jobs and services launched from the controller if we use these local credentials on the controller. Multiple users have encountered this.
A UX fix could be: raise an error or print a warning before starting the job/service if we detect that the credential may require reauthorization (or, more aggressively, use a service account for the controller by default).
<!-- If relevant, fill in versioning info to help us troubleshoot -->
_Version & Commit info:_
* `sky -v`: PLEASE_FILL_IN
* `sky -c`: PLEASE_FILL_IN
| closed | 2024-12-03T18:36:24Z | 2025-01-09T22:14:50Z | https://github.com/skypilot-org/skypilot/issues/4433 | [
"P0"
] | Michaelvll | 2 |
Miserlou/Zappa | flask | 1,711 | How to call "xxx.so" file | Dear Zappa developers,
thanks for providing such a wonderful solution for deploying Python web services to Lambda.
I am trying to use Zappa to migrate my current Django project to Lambda. The problem is that some of my third-party packages are not pure Python (for example, the "levenshtein" package); the build output is an "xxxxxx.so" file. It works locally but reports an "ImportError" when I deploy to Lambda.
I'm new to Lambda; sorry for bothering you, but could you please give me some suggestions on how to call a ".so" file from Lambda?
Thanks in advance. | closed | 2018-11-28T07:29:43Z | 2018-12-04T02:28:36Z | https://github.com/Miserlou/Zappa/issues/1711 | [] | wally-yu | 2 |
gradio-app/gradio | python | 10,557 | Add an option to remove line numbers in gr.Code | - [X ] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
`gr.Code()` always displays line numbers.
**Describe the solution you'd like**
I propose to add an option `show_line_numbers = True | False` to display or hide the line numbers. The default should be `True` for compatibility with the current behaviour.
| closed | 2025-02-10T11:38:07Z | 2025-02-21T22:11:43Z | https://github.com/gradio-app/gradio/issues/10557 | [
"enhancement",
"good first issue"
] | altomani | 1 |
d2l-ai/d2l-en | data-science | 2,076 | MaskedSoftmaxCELoss code wrong | this line:
weighted_loss = (unweighted_loss * weights).mean(dim=1)
should be corrected to:
weighted_loss = (unweighted_loss * weights).sum(dim=1)
reason:
When using `mean`, the padding locations are counted in the denominator, dragging the loss down. In that case the model learns to cheat by predicting sequences that are as long as possible (as a result, no `eos` is ever generated); I have tested this.
Using `mean` is also inconsistent with the downstream `loss` calculation: downstream, `loss` is divided by `num_token`, so the CE loss ends up divided by the square of `num_token`, which makes no sense.
When using `sum`, the padding locations contribute nothing (all zeros), and in total the `loss` is divided by `num_token` once, not by its square.
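A toy numeric sketch (hypothetical per-token losses, plain Python instead of tensors) showing how `mean` lets the padded position dilute the loss while `sum` plus division by the real token count does not:

```python
# Per-token unweighted losses for one sequence of length 3;
# the last position is padding (weight 0).
unweighted = [2.0, 4.0, 6.0]
weights = [1.0, 1.0, 0.0]

masked = [l * w for l, w in zip(unweighted, weights)]  # [2.0, 4.0, 0.0]

# mean divides by the padded length (3), diluting the loss
mean_loss = sum(masked) / len(masked)   # 2.0

# sum ignores padding; dividing by the number of real tokens downstream
# gives the true per-token loss
sum_loss = sum(masked)                  # 6.0
per_token = sum_loss / sum(weights)     # 3.0
print(mean_loss, sum_loss, per_token)
```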
thanks.
| open | 2022-03-21T16:13:03Z | 2022-05-19T23:42:53Z | https://github.com/d2l-ai/d2l-en/issues/2076 | [] | Y-H-Joe | 2 |
feature-engine/feature_engine | scikit-learn | 764 | radial basis function | https://youtu.be/68ABAU_V8qI?feature=shared&t=373
Useful for time series. Need to investigate a bit more, leaving a note here | open | 2024-05-16T12:34:38Z | 2024-08-24T15:59:43Z | https://github.com/feature-engine/feature_engine/issues/764 | [] | solegalli | 1 |
holoviz/panel | matplotlib | 7,459 | Make it possible to ignore caching when using `--autoreload`. | When running Panel in production we would not expect source files css, html and Graphic Walker spec files to change. For performance reasons we would like to read and cache these. But during development with hot reload/ `--dev` we would like files to be reread.
Its not clear how this should be implemented. But I believe we are missing an option
```python
@pn.cache(..., ignore_when_dev_mode=True)
def func_that_reads(...):
....
```
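A minimal sketch of how such dev-mode-aware caching could behave (assumptions on my side: `cache_unless_dev` is a hypothetical name, not a Panel API, and the `BOKEH_DEV` environment variable stands in for however Panel would signal `--dev` / autoreload mode):

```python
import functools
import os


def cache_unless_dev(func):
    """Cache results, except when a dev-mode flag is set in the environment."""
    cached = {}

    @functools.wraps(func)
    def wrapper(*args):
        if os.environ.get("BOKEH_DEV"):  # dev mode: always re-read
            return func(*args)
        if args not in cached:           # production: read once, then cache
            cached[args] = func(*args)
        return cached[args]

    return wrapper


@cache_unless_dev
def read_spec(path):
    # placeholder for the real file read
    read_spec.calls += 1
    return f"spec from {path}"


os.environ.pop("BOKEH_DEV", None)  # ensure prod mode for this demo
read_spec.calls = 0
read_spec("a.json")
read_spec("a.json")
print(read_spec.calls)  # 1 -- the second call was served from the cache
```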
------------
For example, in panel-graphic-walker the caching should be applied when reading `spec` files, but not when hot reloading.

------
A built-in feature would be nice, but just describing how this should be implemented, or pointing to a reference example, would also help.
**How do I determine if we are developing with hot reload?** | open | 2024-11-03T09:37:28Z | 2024-11-03T09:38:30Z | https://github.com/holoviz/panel/issues/7459 | [] | MarcSkovMadsen | 0 |
numba/numba | numpy | 9,826 | RecursionError from ssa.py due to repeated calls to a jitted function. |
## Reporting a bug
When running the same relatively simple function, jitted using `nb.njit(inline="always")`, a recursion error is raised by `numba/core/ssa.py`.
In the attached runnable, this bug occurs with 150 repeated calls to the function.
If additional type complexity is introduced to the file, the bug is possible to trigger with fewer invocations (especially as many different jitted functions are included).
The traceback points to `/numba/core/ssa.py", line 460, in _find_def_from_bottom`, raising a `RecursionError: maximum recursion depth exceeded`.
In another example, without the while loop but with a much larger number of invocations, the recursion error is instead raised from `/numba/core/ir_utils.py", line 1176, in _dfs_rec`
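As a stopgap (an assumption on my side, not a fix for the underlying compiler issue), raising Python's recursion limit before triggering compilation lets deep-but-legitimate recursion in these passes complete:

```python
import sys

# The default limit is typically 1000; give the compiler's recursive
# passes more headroom before invoking the jitted function many times.
sys.setrecursionlimit(10_000)
print(sys.getrecursionlimit())  # 10000
```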
The code to reproduce is attached as a zip file due to its length when embedded: [bug.zip](https://github.com/user-attachments/files/18011291/bug.zip) | open | 2024-12-04T16:25:08Z | 2024-12-05T21:07:39Z | https://github.com/numba/numba/issues/9826 | [
"bug - failure to compile"
] | DSchab | 3 |
keras-team/keras | machine-learning | 20,118 | Testing functional models as layers | In keras V2 it was possible to test functional models as layers with TestCase.run_layer_test
But in keras V3 it is not due to an issue with deserialization https://colab.research.google.com/drive/1OUnnbeLOvI7eFnWYDvQiiZKqPMF5Rl0M?usp=sharing
The root issue is input_shape type in model config is a list, while layers expect a tuple.
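A minimal illustration of that mismatch (plain `json`, no Keras needed): a JSON round-trip silently turns tuples into lists.

```python
import json

config = {"input_shape": (None, 4)}        # tuple, as layers expect
restored = json.loads(json.dumps(config))  # what a save/load cycle does

print(restored["input_shape"])  # [None, 4] -- now a list
```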
As far as i understand the root issue is json dump/load in serialization test. Can we omit this step? | closed | 2024-08-14T08:42:59Z | 2024-10-21T06:37:10Z | https://github.com/keras-team/keras/issues/20118 | [
"type:Bug"
] | shkarupa-alex | 3 |
jmcnamara/XlsxWriter | pandas | 1,069 | Chart: in a discontinuous series, the data label isn't displayed | ### Question
hello,
I have read the [chart-series-option-data-labels](https://xlsxwriter.readthedocs.io/working_with_charts.html#chart-series-option-data-labels) document
try to add data label in a discontinuous series, but it's not work.
thanks a lot
**example as follow**
version: 3.2.0
```
import xlsxwriter
print(xlsxwriter.__version__)
workbook = xlsxwriter.Workbook("chart.xlsx")
worksheet = workbook.add_worksheet()
# Create a new Chart object.
chart = workbook.add_chart({"type": "column"})
# Write some data to add to plot on the chart.
index = ["A", "B", "C", "D", "E", "F", "G"]
data = [1, 2, 3, 4, 5, 6, 7]
worksheet.write_column("A1", index)
worksheet.write_column("B1", data)
worksheet.write_string("D1", "tag1")
worksheet.write_string("D2", "tag2")
# Configure the charts. In simplest case we just add some data series.
tag1_labels = [
{'delete': True},
{'delete': True},
{"value": "=Sheet1!$D$1"}, # tag1
{'delete': True},
{'delete': True},
{'delete': True},
{'delete': True},
]
tag2_labels = [
{'delete': True},
{'delete': True},
{'delete': True},
{'delete': True},
{'delete': True},
{"value": "=Sheet1!$D$2"}, # tag2
{'delete': True},
]
chart.add_series({
"name": "First",
"categories": "=Sheet1!$A$1:$A$7",
"values": "=(Sheet1!$B$1:$B$4,Sheet1!$C$1:$C$3)", # non-contiguous ranges
"data_labels": {
"value": True,
"custom": tag1_labels
}
})
chart.add_series({
"name": "Second",
"categories": "=Sheet1!$A$1:$A$7",
"values": "=(Sheet1!$C$1:$C$4,Sheet1!$B$5:$B$7)", # non-contiguous ranges
"data_labels": {
"value": True,
"custom": tag2_labels
}
})
# Insert the chart into the worksheet.
worksheet.insert_chart("A10", chart)
workbook.close()
```
| closed | 2024-05-15T02:08:44Z | 2024-05-15T09:14:14Z | https://github.com/jmcnamara/XlsxWriter/issues/1069 | [
"question",
"awaiting user feedback"
] | youth54 | 3 |
Ehco1996/django-sspanel | django | 860 | feature request: support uploading node load when syncing traffic | ## Feature Request
**Is your feature request related to a problem? Please describe:**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Describe the feature you'd like:**
<!-- A clear and concise description of what you want to happen. -->
**Describe alternatives you've considered:**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
| closed | 2023-08-16T06:30:33Z | 2023-08-25T00:22:23Z | https://github.com/Ehco1996/django-sspanel/issues/860 | [] | Ehco1996 | 1 |
MilesCranmer/PySR | scikit-learn | 173 | [BUG] PySR sometimes fails without internet | `update=False` should be set to a new default: `update=None`, and updates should only be performed if the internet is connected. | closed | 2022-08-04T17:44:35Z | 2022-11-28T23:22:00Z | https://github.com/MilesCranmer/PySR/issues/173 | [
"bug"
] | MilesCranmer | 1 |
hayabhay/frogbase | streamlit | 20 | CPU Dynamic Quantization | Would it be possible for you guys to add an option to enable dynamic quantization of the model when it's being run on a CPU? This would greatly improve the run-time performance of the OpenAI Whisper model (CPU-only) with minimal to no loss in performance.
The benchmarks for this are available [here](https://github.com/MiscellaneousStuff/openai-whisper-cpu).
The implementation only requires adding a few lines of code using features which are already built into PyTorch.
# Implementation
Quantization of the Whisper model requires changing the `Linear()`
layers within the model to `nn.Linear()`. This is because you need
to specifiy which layer types to dynamically quantize, such as:
```python
quantized_model = torch.quantization.quantize_dynamic(
model_fp32, {torch.nn.Linear}, dtype=torch.qint8
)
```
However the whisper model is designed to be adaptable, i.e.
it can run at different precisions, so the `Linear()` layer contains
custom code to account for this. However, this is not required for
the quantized model. You can either change the `Linear()` layers in
"/whisper/whisper/model.py" yourself (i.e. create a fork of OpenAI-Whisper
which would be compatible with future merges), or you can use
mine from [here](https://github.com/MiscellaneousStuff/whisper/tree/e87e7c46466505688119011f9190f7eb8c437b53). | closed | 2023-02-09T02:25:26Z | 2023-05-24T18:18:20Z | https://github.com/hayabhay/frogbase/issues/20 | [] | MiscellaneousStuff | 5 |
KaiyangZhou/deep-person-reid | computer-vision | 467 | feature_extractor + cuda error: out of memory | Hello Kaiyang, thank you for your amazing work on OSNet.
I am using torchreid as a feature extractor in my own project following the API given in your documentation and using an OSNet pretrained model (osnet_x0_25_imagenet) . When performing this inference using cpu, I am able to extract features of the query and gallery images. However, when using cuda, I am hit with "ERROR: RuntimeError: CUDA error: out of memory".
As I am using a pretrained model, I am not able to change the batch_size and the feature_extrator.py script also transforms images to the required (256, 128) size.
Are there any avenues that I can explore to overcome this issue while using a pretrained model?
I am using CUDA11.3 with torch version: 1.11.0.dev20211020+cu113 and running on Windows10 with a graphics card model: Nvidia GeForce MX150 with 2GB memory.
Thank you for your help on this matter. | closed | 2021-10-21T14:36:38Z | 2021-12-15T02:18:48Z | https://github.com/KaiyangZhou/deep-person-reid/issues/467 | [] | jovi-s | 0 |
vllm-project/vllm | pytorch | 14,507 | [Usage]: The example of using microsoft/Phi-4-multimodal-instruct audio | ### Your current environment
How do I use microsoft/Phi-4-multimodal-instruct audio with vLLM?
[Here](https://github.com/vllm-project/vllm/pull/14343/files#diff-068f76c074ff2ec408347e0b9ff0b8ce78b75048a83343b71d684b68511480aa)
I can see an example of using vision, but how do I use audio? Please help!

### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | closed | 2025-03-09T01:47:33Z | 2025-03-10T05:40:46Z | https://github.com/vllm-project/vllm/issues/14507 | [
"usage"
] | engchina | 5 |
davidteather/TikTok-Api | api | 206 | [BUG] - wrong answer after several requests | After several requests to tiktok (getTikTokbyId in my case), TikTok begins to return only
{"code": "10000", "from": "", "type": "verify", "version": "", "region": "va", "subtype": "slide", "detail": "xEjm*7jBPnMllKzEUYW8xSJ-ivjFjq65ZCYvEfIj3pI1Z3VtwL-uBL4JnDUOshgBEzoHt7mfm6YHTheB0ulhLuQchGb6mFdS1tvGxJA08J9k8x37-lFZS-pmCl4PMOANhq32MJ7GwBNl4wJDcEAdTM-bqP4gVZgcrdE3QWBRGu0LJ0akzT1OIeFsuVNzmxjWDG2aTIXtAaNvtEakfXSR*kgFNSUI3AI.", "verify_event": "", "fp": "verify_551e7a5c9e57bd01a65a2b32ad396360"}
to each request. Is it indefeatable ban by IP or something we're able to cope with? | closed | 2020-07-31T14:44:04Z | 2020-08-01T02:29:57Z | https://github.com/davidteather/TikTok-Api/issues/206 | [
"bug"
] | tarkhil | 6 |