| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
pywinauto/pywinauto | automation | 1,369 | Using element wrapper_object in multiple systems generated using pywinauto | I got a wrapper while performing an action using pywinauto.
Now I want to replay it on a different system, on the same application, using the same wrapper.
Is it possible?
I am creating the wrappers using the code below:
## Short Example of Code to Demonstrate the Problem
from ctypes.wintypes import tagPOINT
import pywinauto
import time
time.sleep(2)
def get_ElementFromPoint(x, y):
    elem = pywinauto.uia_defines.IUIA().iuia.ElementFromPoint(tagPOINT(x, y))
    element = pywinauto.uia_element_info.UIAElementInfo(elem)
    wrapper = pywinauto.controls.uiawrapper.UIAWrapper(element)
    return wrapper
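A live `UIAWrapper` holds process-local COM pointers, so the object itself generally cannot be reused on another machine. A common workaround (a sketch, not guidance taken from this issue) is to extract serializable identifying properties and re-locate the element on the target system. The helper below only assumes the wrapper exposes an `element_info` with `name`, `automation_id`, and `control_type` attributes, which `UIAWrapper` does:

```python
def element_descriptor(wrapper):
    """Extract serializable identifying properties from a UIA wrapper.

    A live wrapper wraps process-local COM pointers and cannot cross
    machines; these plain strings can, assuming the target UI matches.
    """
    info = wrapper.element_info
    return {
        "name": info.name,
        "automation_id": info.automation_id,
        "control_type": info.control_type,
    }
```

On the second system such a dictionary could then be fed to a search like `Desktop(backend="uia").window(...)`, though how well that works depends on the application exposing stable automation IDs.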
## Specifications
- Pywinauto version: 0.6.8
- Python version and bitness: 3.11
- Platform and OS: WinOS
| open | 2024-01-15T12:43:25Z | 2024-01-16T08:19:41Z | https://github.com/pywinauto/pywinauto/issues/1369 | [] | Roboflex30 | 0 |
microsoft/nlp-recipes | nlp | 453 | [BUG] Mismatch in fit method | ### Description
In most models we have `def fit(num_gpus=None, ...)` however in https://github.com/microsoft/nlp/blob/staging/utils_nlp/models/bert/sequence_encoding.py#L37 and https://github.com/microsoft/nlp/blob/staging/utils_nlp/models/xlnet/sequence_classification.py#L32 this parameter is in the init.
### How do we replicate the bug?
### Expected behavior (i.e. solution)
The parameters related to training should be in the fit method, not in the initializer.
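The suggested convention can be sketched as follows (illustrative only; the class and parameter names below are not the repository's actual code). Construction takes configuration of the model itself, while anything that only affects training, such as `num_gpus`, moves to `fit`:

```python
class SequenceClassifier:
    """Sketch of the proposed convention: training-related
    parameters live on fit(), not on __init__()."""

    def __init__(self, language="english", cache_dir="."):
        # configuration of the model itself
        self.language = language
        self.cache_dir = cache_dir

    def fit(self, X, y, num_gpus=None, num_epochs=1):
        # parameters that only affect training are accepted here
        self.num_gpus_ = num_gpus
        self.num_epochs_ = num_epochs
        return self
```

This also keeps the constructor cheap, so the same instance can be re-fit with different training resources.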
### Other Comments
| closed | 2019-10-25T13:50:25Z | 2019-11-25T18:05:15Z | https://github.com/microsoft/nlp-recipes/issues/453 | [
"bug"
] | miguelgfierro | 1 |
healthchecks/healthchecks | django | 1,048 | Feature requests: Markdown Descriptions and Search by Slug | There are so many little details about healthchecks.io that show you care about your work and that make the app a delight to use:
- When sorting by name, numbers are sorted numerically not alphabetically (so I don't have to write 02 to make 2 show up before 10)
- Tags are flexible and the tag buttons up the top are colour coded
- Click to copy ping URL
- Ability to bulk select/deselect checks per integration (without having to individually deselect the integration in all 97 checks)
- Recently used timezones when configuring cron schedules
- Thorough documentation
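The numeric ordering praised in the first bullet is usually achieved with a "natural sort" key rather than plain lexicographic sorting. A minimal sketch (not healthchecks' actual implementation):

```python
import re

def natural_key(name):
    """Split a name into text and number runs so 'check 2' sorts before 'check 10'."""
    return [int(tok) if tok.isdigit() else tok.lower() for tok in re.split(r"(\d+)", name)]

names = ["check 10", "check 2", "check 1"]
print(sorted(names))                   # lexicographic: ['check 1', 'check 10', 'check 2']
print(sorted(names, key=natural_key))  # natural: ['check 1', 'check 2', 'check 10']
```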
Two things that would make the app better (in my opinion) are:
- being able to use Markdown in the description field
- being able to search the checks by both name **and** slug | open | 2024-08-15T06:47:05Z | 2024-08-15T11:32:02Z | https://github.com/healthchecks/healthchecks/issues/1048 | [] | matt17r | 1 |
CorentinJ/Real-Time-Voice-Cloning | python | 584 | Pytorch not working on RTX 30X0 nvidia GPU (CUDA capability sm_86) | Hi everyone,
I just bought an Nvidia RTX 3070 and I was thrilled to do some training with it, but it appears PyTorch doesn't work with my new GPU...
Here is the message I get; there is no help about it on the PyTorch webpage.
"
GeForce RTX 3070 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
If you want to use the GeForce RTX 3070 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
"
Is there anyone out here using RTX 30 series GPU that could help me?
Thanks a lot | closed | 2020-11-02T13:09:21Z | 2020-11-02T17:06:25Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/584 | [] | rallandr | 2 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 591 | hello i have issues starting the program | Hello, I have issues starting the program; I got this error:
ImportError: cannot import name '_imaging'
Any idea? | closed | 2020-11-08T07:39:57Z | 2020-11-09T19:39:31Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/591 | [] | Z3ugm4 | 1 |
JaidedAI/EasyOCR | machine-learning | 449 | In the case of identity card recognition, sometimes it infers the last 'x' to '8' | Thanks for your great work; it performs with high accuracy, but I found that in identity card recognition it sometimes recognizes the final 'x' as '8'.
How can I increase the accuracy of digit recognition? | closed | 2021-06-07T01:45:22Z | 2022-03-02T09:25:01Z | https://github.com/JaidedAI/EasyOCR/issues/449 | [] | dyther | 1 |
microsoft/nni | deep-learning | 5,683 | warnings.warn(warning_message, RuntimeWarning) Traceback (most recent call last): File "H:\MAGNN-main913\train.py", line 243, in <module> main(params) File "H:\MAGNN-main913\train.py", line 160, in main dropout = params['dropout'] KeyError: 'dropout' | parser = argparse.ArgumentParser(description='PyTorch Time series forecasting')
parser.add_argument('--data', type=str, default='multivariate-time-series-data/exchange_rate/exchange_rate.txt',
                    help='location of the multivariate-time-series-data file')
parser.add_argument('--log_interval', type=int, default=2000, metavar='N', help='report interval')
parser.add_argument('--save', type=str, default='model/model.pt',
                    help='path to save the final model')
parser.add_argument('--optim', type=str, default='adam')
parser.add_argument('--L1Loss', type=bool, default=True)
parser.add_argument('--normalize', type=int, default=2)
parser.add_argument('--device',type=str,default='cuda:0',help='')
parser.add_argument('--gcn_depth',type=int,default=2,help='graph convolution depth')
parser.add_argument('--num_nodes',type=int,default=137,help='number of nodes/variables')
parser.add_argument('--dropout',type=float,default=0.3,help='dropout rate')
parser.add_argument('--subgraph_size',type=int,default=20,help='k')
parser.add_argument('--node_dim',type=int,default=40,help='dim of nodes')
parser.add_argument('--conv_channels',type=int,default=16,help='convolution channels')
parser.add_argument('--scale_channels',type=int,default=32,help='scale channels')
parser.add_argument('--end_channels',type=int,default=64,help='end channels')
parser.add_argument('--in_dim',type=int,default=1,help='inputs dimension')
parser.add_argument('--seq_in_len',type=int,default=24*7,help='input sequence length')
parser.add_argument('--seq_out_len',type=int,default=1,help='output sequence length')
parser.add_argument('--horizon', type=int, default=3)
parser.add_argument('--layers',type=int,default=3,help='number of layers')
parser.add_argument('--batch_size',type=int,default=32,help='batch size')
parser.add_argument('--lr',type=float,default=0.0001,help='learning rate')
parser.add_argument('--weight_decay',type=float,default=0.00001,help='weight decay rate')
parser.add_argument('--clip',type=int,default=5,help='clip')
parser.add_argument('--propalpha',type=float,default=0.05,help='prop alpha')
parser.add_argument('--tanhalpha',type=float,default=3,help='tanh alpha')
parser.add_argument('--epochs',type=int,default=1,help='')
parser.add_argument('--num_split',type=int,default=1,help='number of splits for graphs')
parser.add_argument('--step_size',type=int,default=100,help='step_size')
parser.add_argument('--dynamic_graph',type=bool,default=False,help='whether to construct dynamic graph')
# parser.add_argument('--data', type=str, default='H:/个人研究/多尺度时间/MAGNN-main/data/traffic.txt',
#                     help='location of the data file')
args = parser.parse_args()
device = torch.device(args.device)
torch.set_num_threads(3)
def main(params):
    dropout = params['dropout']
    subgraph_size = params['subgraph_size']
    conv_channels = params['conv_channels']
    scale_channels = conv_channels
    gnn_channels = conv_channels
    # data_dir = "multivariate-time-series-data/" + args.data
    data_dir = args.data
    Data = DataLoaderS(data_dir, 0.6, 0.2, device, args.horizon, args.seq_in_len, args.normalize)
    model = magnn(args.gcn_depth, args.num_nodes,
                  device, node_dim=args.node_dim, subgraph_size=subgraph_size, dropout=dropout, conv_channels=conv_channels,
                  scale_channels=scale_channels, end_channels=args.end_channels, gnn_channels=gnn_channels,
                  seq_length=args.seq_in_len, in_dim=args.in_dim, out_dim=args.seq_out_len,
                  layers=args.layers, propalpha=args.propalpha, tanhalpha=args.tanhalpha,
                  single_step=True, dynamic_graph=args.dynamic_graph)
    model = model.to(device)
    print(args)
    nParams = sum([p.nelement() for p in model.parameters()])
    print('Number of model parameters is', nParams, flush=True)
    # for p in model.parameters():
    #     if p.requires_grad:
    #         print(p.nelement())
    # summary(model, torch.zeros((4, 1, 137, 168)).to(device))
    if args.L1Loss:
        criterion = nn.L1Loss(size_average=False).to(device)
    else:
        criterion = nn.MSELoss(size_average=False).to(device)
    evaluateL2 = nn.MSELoss(size_average=False).to(device)
    evaluateL1 = nn.L1Loss(size_average=False).to(device)
    best_val = 10000000
    optim = Optim(model.parameters(), args.optim, args.lr, args.clip, lr_decay=args.weight_decay)
    # At any point you can hit Ctrl + C to break out of training early.
    try:
        print('begin training')
        for epoch in range(1, args.epochs + 1):
            epoch_start_time = time.time()
            train_loss = train(Data, Data.train[0], Data.train[1], model, criterion, optim, args.batch_size)
            val_loss, val_rae, val_corr, val_mae, val_rmse = evaluate(Data, Data.valid[0], Data.valid[1], model, evaluateL2, evaluateL1,
                                                                      args.batch_size)
            print(
                '| end of epoch {:3d} | time: {:5.2f}s | train_loss {:5.4f} | valid rse {:5.4f} | valid rae {:5.4f} | valid corr {:5.4f} | valid mae {:5.4f} | valid rmse {:5.4f}'.format(
                    epoch, (time.time() - epoch_start_time), train_loss, val_loss, val_rae, val_corr, val_mae, val_rmse), flush=True)
            # Save the model if the validation loss is the best we've seen so far.
            if val_loss < best_val:
                with open(args.save, 'wb') as f:
                    torch.save(model, f)
                best_val = val_loss
            if epoch % 5 == 0:
                test_acc, test_rae, test_corr, test_mae, test_rmse = evaluate(Data, Data.test[0], Data.test[1], model, evaluateL2, evaluateL1,
                                                                              args.batch_size)
                print("test rse {:5.4f} | test rae {:5.4f} | test corr {:5.4f} | test mae {:5.4f} | test rmse {:5.4f}".format(test_acc, test_rae, test_corr, test_mae, test_rmse), flush=True)
                nni.report_intermediate_result(float(test_acc))
    except KeyboardInterrupt:
        print('-' * 89)
        print('Exiting from training early')
    # Load the best saved model.
    with open(args.save, 'rb') as f:
        model = torch.load(f)
    vtest_acc, vtest_rae, vtest_corr, vtest_mae, vtest_rmse = evaluate(Data, Data.valid[0], Data.valid[1], model, evaluateL2, evaluateL1,
                                                                       args.batch_size)
    test_acc, test_rae, test_corr, test_mae, test_rmse = evaluate(Data, Data.test[0], Data.test[1], model, evaluateL2, evaluateL1,
                                                                  args.batch_size)
    print("final test rse {:5.4f} | test rae {:5.4f} | test corr {:5.4f} | test mae {:5.4f} | test mae {:5.4f} | test rmse {:5.4f}".format(test_acc, test_rae, test_corr, test_mae, test_mae, test_rmse))
    nni.report_final_result(float(test_acc))
    return vtest_acc, vtest_rae, vtest_corr, vtest_mae, vtest_rmse, test_acc, test_rae, test_corr, test_mae, test_rmse


if __name__ == "__main__":
    params = nni.get_next_parameter()
    main(params)
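The KeyError itself usually means `nni.get_next_parameter()` returned an empty dict, which happens when the script is run directly instead of through an NNI experiment. A common guard (a sketch of one possible fix, not official NNI usage) is to fall back to the argparse defaults for any key the tuner did not supply:

```python
def merge_params(tuned, defaults):
    """Overlay NNI-tuned parameters on top of argparse defaults.

    `tuned` may be empty (or None) when the script runs outside an
    NNI experiment; in that case every default survives, so
    params['dropout'] can no longer raise KeyError.
    """
    merged = dict(defaults)
    merged.update(tuned or {})
    return merged

defaults = {"dropout": 0.3, "subgraph_size": 20, "conv_channels": 16}
print(merge_params({}, defaults)["dropout"])                # 0.3 (standalone run)
print(merge_params({"dropout": 0.5}, defaults)["dropout"])  # 0.5 (tuned run)
```

In this script that would mean calling `main(merge_params(nni.get_next_parameter(), vars(args)))`, assuming the tuned keys match the argparse names.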
| closed | 2023-09-21T07:05:54Z | 2023-09-21T07:09:56Z | https://github.com/microsoft/nni/issues/5683 | [] | lifhdf | 0 |
explosion/spaCy | nlp | 13,449 | SpaCy is not building today | <!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
## How to reproduce the behaviour
<!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. -->
I am building the devcontainer for https://github.com/lovellbrian/cpu and spaCy is not building. may be due to cpdef instead of cdef usage.
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System: Ubuntu 22.04
* Python Version Used: 3.10
* spaCy Version Used: spacy-3.0.6.tar.gz
* Environment Information:
* Downloading spacy-3.0.6.tar.gz (7.1 MB)
1055.3 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.1/7.1 MB 6.2 MB/s eta 0:00:00
1056.4 Installing build dependencies: started
1074.9 Installing build dependencies: finished with status 'done'
1074.9 Getting requirements to build wheel: started
1079.7 Getting requirements to build wheel: finished with status 'error'
1079.8 error: subprocess-exited-with-error
1079.8
1079.8 × Getting requirements to build wheel did not run successfully.
1079.8 │ exit code: 1
1079.8 ╰─> [164 lines of output]
1079.8
1079.8 Error compiling Cython file:
1079.8 ------------------------------------------------------------
1079.8 ...
1079.8 int length
1079.8
1079.8
1079.8 cdef class Vocab:
1079.8 cdef Pool mem
1079.8 cpdef readonly StringStore strings
1079.8 ^
1079.8 ------------------------------------------------------------
1079.8
1079.8 spacy/vocab.pxd:28:10: Variables cannot be declared with 'cpdef'. Use 'cdef' instead.
1079.8
1079.8 Error compiling Cython file:
1079.8 ------------------------------------------------------------ | closed | 2024-04-20T04:59:31Z | 2024-07-06T00:02:30Z | https://github.com/explosion/spaCy/issues/13449 | [
"install"
] | lovellbrian | 18 |
gee-community/geemap | streamlit | 597 | in google colab there is no toolbox displayed as it appear in jupyter notebook? | in google colab there is no toolbox displayed as it appears in jupyter notebook?
I want to draw a polygon using a toolbox and drawing polygon is not available in google colab so is there any other way to do that. | closed | 2021-07-23T19:48:09Z | 2021-07-25T01:01:22Z | https://github.com/gee-community/geemap/issues/597 | [] | pawansingh1610 | 1 |
aeon-toolkit/aeon | scikit-learn | 1,750 | [BUG] CollectionTransformers with `fit_is_empty` is true | ### Describe the bug
If `fit_is_empty` is true, `is_fitted` is false and the function returns. This means that `reset()` is not called. This is in principle fine, because reset only resets variables with the suffix `_`. Note you cannot call transform without first calling fit (that is a separate conversation).
The issue here arises with `self.metadata_`. Currently this is set in `_preprocess_collection` if the current metadata is empty. This means that if you call `fit_transform` twice, first with equal-length data then with unequal-length data, it crashes because the metadata is not overwritten.
This is simply solved in one of two ways:
1. reset the metadata before the `fit_is_empty` shortcut in `fit`
2. don't store metadata when calling from `transform` (e.g. pass a boolean)
I prefer (1), because the whole idea was ultimately to allow metadata to be passed through kwargs.
### Steps/Code to reproduce the bug
```python
from aeon.testing.data_generation import make_example_3d_numpy, make_example_3d_numpy_list
from aeon.transformations.collection.compose._identity import CollectionId
t = CollectionId.create_test_instance()
X, y = make_example_3d_numpy(n_cases=10, n_channels=4, n_timepoints=30)
t.fit(X, y)
t.transform(X)
X2 = t.fit_transform(X, y)
X, y = make_example_3d_numpy_list(
n_cases=10, n_channels=1, min_n_timepoints=20, max_n_timepoints=30
)
t.fit(X, y)
t.transform(X)
X2 = t.fit_transform(X, y)
```
it does not overwrite the previous metadata.
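The stale-metadata mechanism, and proposed fix (1), can be illustrated with a stripped-down mock; this is a sketch of the behaviour, not aeon's actual base class:

```python
class MockCollectionTransformer:
    """Caches input metadata like the real base class; without an
    explicit reset, metadata from a previous equal-length fit
    survives into the next fit."""

    def __init__(self, reset_in_fit=False):
        self.metadata_ = {}
        self.reset_in_fit = reset_in_fit

    def _preprocess(self, X):
        if not self.metadata_:  # current behaviour: only set when empty
            self.metadata_ = {"unequal_length": len({len(x) for x in X}) > 1}
        return self.metadata_

    def fit(self, X):
        if self.reset_in_fit:   # proposed fix (1)
            self.metadata_ = {}
        self._preprocess(X)
        return self

equal = [[1, 2], [3, 4]]
unequal = [[1, 2], [3, 4, 5]]

buggy = MockCollectionTransformer().fit(equal).fit(unequal)
print(buggy.metadata_)   # {'unequal_length': False} -- stale!

fixed = MockCollectionTransformer(reset_in_fit=True).fit(equal).fit(unequal)
print(fixed.metadata_)   # {'unequal_length': True}
```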
### Expected results
It should work: either do not store metadata when called from transform, or reset the metadata in fit.
### Actual results
```python-traceback
File "C:\Code\aeon\aeon\local\transform_debug.py", line 43, in <module>
t.transform(X)
File "C:\Code\aeon\aeon\transformations\collection\base.py", line 154, in transform
X_inner = self._preprocess_collection(X)
File "C:\Code\aeon\aeon\base\_base_collection.py", line 83, in _preprocess_collection
X = self._convert_X(X)
File "C:\Code\aeon\aeon\base\_base_collection.py", line 208, in _convert_X
X = convert_collection(X, inner_type)
File "C:\Code\aeon\aeon\utils\conversion\_convert_collection.py", line 570, in convert_collection
return convert_dictionary[(input_type, output_type)](X)
File "C:\Code\aeon\aeon\utils\conversion\_convert_collection.py", line 178, in _from_np_list_to_numpy3d
raise TypeError("Cannot convert unequal length to numpy3D")
TypeError: Cannot convert unequal length to numpy3D
```
### Versions
_No response_ | closed | 2024-07-03T09:46:17Z | 2024-07-03T21:22:22Z | https://github.com/aeon-toolkit/aeon/issues/1750 | [
"bug",
"transformations"
] | TonyBagnall | 2 |
microsoft/qlib | machine-learning | 1,795 | Question about the `signal` kwarg under `strategy` in `port_analysis_config` | In the yaml config files for each model under the benchmarks folder, `signal: <PRED>` is generally used:
port_analysis_config: &port_analysis_config
strategy:
class: TopkDropoutStrategy
module_path: qlib.contrib.strategy
kwargs:
signal: <PRED>
topk: 50
n_drop: 5
In workflow__by_code.py, `"signal": (model, dataset)` is used instead:
port_analysis_config = {.....
"strategy": {
"class": "TopkDropoutStrategy",
"module_path": "qlib.contrib.strategy.signal_strategy",
"kwargs": {
"signal": (model, dataset),
"topk": 50,
"n_drop": 5,
},
},
My question is: what does the PRED in "PRED" (it should be <PRED>) actually refer to? I did not find any documentation about it. Also, what is the difference between these two approaches? Thanks, everyone. | open | 2024-05-24T01:07:17Z | 2024-06-24T13:39:57Z | https://github.com/microsoft/qlib/issues/1795 | [
"question"
] | semiparametric | 1 |
NullArray/AutoSploit | automation | 728 | Divided by zero exception48 | Error: Attempted to divide by zero.48 | closed | 2019-04-19T15:59:40Z | 2019-04-19T16:38:39Z | https://github.com/NullArray/AutoSploit/issues/728 | [] | AutosploitReporter | 0 |
mirumee/ariadne-codegen | graphql | 18 | Handle extra configuration options | The package should read and handle more optional parameters, which are currently hardcoded as default values.
List of parameters:
- name of generated client class
- name of file that contains types generated from schema
- name and path from which to copy the base client class | closed | 2022-10-20T11:10:56Z | 2022-11-02T14:34:28Z | https://github.com/mirumee/ariadne-codegen/issues/18 | [
"enhancement"
] | mat-sop | 1 |
PablocFonseca/streamlit-aggrid | streamlit | 230 | StopIteration due to columnDefs | Error Trace:
```
File "/opt/homebrew/Caskroom/miniconda/base/envs/snowpark/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/Users/<redacted>/sfc-workspace/streamlit-uar/uar/1_Manager_App.py", line 41, in <module>
options.configure_selection(
File "/opt/homebrew/Caskroom/miniconda/base/envs/snowpark/lib/python3.10/site-packages/st_aggrid/grid_options_builder.py", line 258, in configure_selection
first_key = next(iter(self.__grid_options["columnDefs"].keys()))
```
Hi Team,
I'm using the following code but I constantly get the above error. Has anyone faced this before or know what I'm doing wrong?
```python
df_filtered = df_reviews.filter(
items=['col1', 'col2', 'col3', 'col4', 'col5']
)
options = GridOptionsBuilder.from_dataframe(
df_filtered,
enableRowGroup=True,
enableValue=True,
enablePivot=True,
)
options.configure_pagination(
paginationAutoPageSize=False,
paginationPageSize=10,
)
options.configure_selection(
selection_mode='multiple',
use_checkbox=True,
header_checkbox=True,
pre_selected_rows=st.session_state.pre_selected_rows,
)
grid_return = AgGrid(
df_filtered,
fit_columns_on_grid_load=True,
columns_auto_size_mode=ColumnsAutoSizeMode.NO_AUTOSIZE,
data_return_mode=DataReturnMode.FILTERED_AND_SORTED,
gridOptions=options.build(),
theme=AgGridTheme.STREAMLIT,
key='my_grid',
)
```
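Judging from the line shown in the trace, `configure_selection` reads the first key of `self.__grid_options["columnDefs"]` with `next(iter(...))`, which raises `StopIteration` when that mapping is empty (for example, if `from_dataframe` produced no column definitions). A standalone sketch of the failing pattern and a defensive check (an assumption about the cause, not a confirmed fix):

```python
grid_options = {"columnDefs": {}}  # what an empty/filtered-away DataFrame could yield

def first_column_key(options):
    """Mimic the failing line, but return None instead of raising
    StopIteration when there are no column definitions."""
    return next(iter(options.get("columnDefs", {}).keys()), None)

print(first_column_key(grid_options))                  # None
print(first_column_key({"columnDefs": {"col1": {}}}))  # col1
```

So it may be worth printing `df_filtered.columns` just before building the options to confirm the filter is not dropping every column.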
| open | 2023-08-15T14:23:37Z | 2024-03-21T17:26:52Z | https://github.com/PablocFonseca/streamlit-aggrid/issues/230 | [
"help wanted"
] | sfc-gh-pkommini | 1 |
encode/httpx | asyncio | 3,092 | Problem with proxy and streaming | I am trying to process a streaming response (returning it to the customer in a chat). I need to use a proxy. The problem is that the response is not streamed when a proxy is used: all the text arrives at once after everything has been processed, with no incremental typing effect.
```python
import asyncio
from typing import Optional
from httpx import AsyncClient
from openai import AsyncStream, AsyncOpenAI
from openai.types.chat import ChatCompletionChunk
async def get_openai_stream_agenerator() -> AsyncStream[ChatCompletionChunk]:
client = AsyncOpenAI(
http_client=AsyncClient(
# when I comment these two lines streaming is ok
proxy="http://localhost:8080", # I'm using mitmproxy with basic configuration
verify=False,
)
)
messages = [
{"role": "system", "content": "Return details about asking person"},
{"role": "user", "content": "Iga Świątek"},
]
response: AsyncStream[ChatCompletionChunk] = await client.chat.completions.create(
model='gpt-4-0613',
messages=messages,
stream=True,
) # type: ignore
return response
def get_delta_argument(chunk: ChatCompletionChunk) -> Optional[str]:
if len(chunk.choices) > 0:
return chunk.dict()['choices'][0]['delta']['content']
else:
return None
async def get_response_generator() -> None:
async for it in await get_openai_stream_agenerator():
value = get_delta_argument(it)
if value:
print(value, end="")
print()
if __name__ == '__main__':
asyncio.run(get_response_generator())
```
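To isolate whether the buffering comes from httpx plus the proxy rather than from the OpenAI client, a minimal probe like the one below can help; the target URL is only an example, and `httpx` is imported lazily so the sketch reads standalone. With mitmproxy specifically, its default behaviour of buffering full response bodies for inspection is a plausible suspect (body streaming can be enabled via its `stream_large_bodies` option).

```python
import asyncio
import time

async def probe(url, proxy=None):
    """Print the arrival time of each chunk; if timestamps only appear
    after the whole body is done, something on the path is buffering."""
    import httpx  # imported lazily so the sketch can be read without httpx installed

    async with httpx.AsyncClient(proxy=proxy, verify=False, timeout=None) as client:
        async with client.stream("GET", url) as response:
            start = time.monotonic()
            async for chunk in response.aiter_bytes():
                print(f"+{time.monotonic() - start:.2f}s  {len(chunk)} bytes")

# Example run (URL is illustrative):
# asyncio.run(probe("https://httpbin.org/drip?numbytes=10&duration=5", proxy="http://localhost:8080"))
```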
OS: macOS
Python version: Python v3.11.7
Library version: openai 1.12.0, httpx 0.26.0 | closed | 2024-02-13T01:54:51Z | 2024-02-13T11:24:28Z | https://github.com/encode/httpx/issues/3092 | [] | markowanga | 0 |
tensorflow/tensor2tensor | deep-learning | 1,477 | RFC: What do you think about TRAX? How do we make the next T2T really good? | ### Description
| closed | 2019-03-07T01:57:11Z | 2019-03-07T01:57:22Z | https://github.com/tensorflow/tensor2tensor/issues/1477 | [] | lukaszkaiser | 0 |
raphaelvallat/pingouin | pandas | 377 | pandas.DataFrame.iteritems deprecated since pandas 1.5: pingouin.plot_rm_corr fails on example dataset | Thank you for adding the plotting function for the repeated measures correlation! Previously, I had to switch to R for that...
Unfortunately, it does not work for me. Even when using the example dataset from the [function description](https://pingouin-stats.org/build/html/generated/pingouin.plot_rm_corr.html#pingouin.plot_rm_corr) I get the following error:
```Python
import pingouin as pg
df = pg.read_dataset('rm_corr')
g = pg.plot_rm_corr(data=df, x='pH', y='PacO2', subject='Subject')
```
```Python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[/var/folders/1y/49sjn_6j1_sgl474yd2zw6pc0000gn/T/ipykernel_73405/2687313090.py](https://file+.vscode-resource.vscode-cdn.net/var/folders/1y/49sjn_6j1_sgl474yd2zw6pc0000gn/T/ipykernel_73405/2687313090.py) in ?()
2 df = pg.read_dataset('rm_corr')
3 print(pg.rm_corr(data=df, x='pH', y='PacO2', subject='Subject'))
4
5
----> 6 g = pg.plot_rm_corr(data=df, x='pH', y='PacO2', subject='Subject')
[~/anaconda3/envs/localglobal/lib/python3.11/site-packages/pingouin/plotting.py](https://file+.vscode-resource.vscode-cdn.net/Users/moritzgerster/Library/CloudStorage/Dropbox/Code/BIDS_LocalGlobal/notebooks/~/anaconda3/envs/localglobal/lib/python3.11/site-packages/pingouin/plotting.py) in ?(data, x, y, subject, legend, kwargs_facetgrid, kwargs_line, kwargs_scatter)
1012 kwargs_facetgrid["palette"] = sns.hls_palette(data[subject].nunique())
1013
1014 # Start plot
1015 g = sns.FacetGrid(data, hue=subject, **kwargs_facetgrid)
-> 1016 g = g.map(sns.regplot, x, "pred", scatter=False, ci=None, truncate=True, line_kws=kwargs_line)
1017 g = g.map(sns.scatterplot, x, y, **kwargs_scatter)
1018
1019 if legend:
[~/anaconda3/envs/localglobal/lib/python3.11/site-packages/seaborn/axisgrid.py](https://file+.vscode-resource.vscode-cdn.net/Users/moritzgerster/Library/CloudStorage/Dropbox/Code/BIDS_LocalGlobal/notebooks/~/anaconda3/envs/localglobal/lib/python3.11/site-packages/seaborn/axisgrid.py) in ?(self, func, *args, **kwargs)
673 # Get the actual data we are going to plot with
674 plot_data = data_ijk[list(args)]
675 if self._dropna:
676 plot_data = plot_data.dropna()
--> 677 plot_args = [v for k, v in plot_data.iteritems()]
...
5987 ):
5988 return self[name]
-> 5989 return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'iteritems'
``` | closed | 2023-09-21T09:26:56Z | 2023-11-11T18:52:03Z | https://github.com/raphaelvallat/pingouin/issues/377 | [
"bug :boom:",
"URGENT :warning:"
] | moritz-gerster | 4 |
sqlalchemy/sqlalchemy | sqlalchemy | 11,586 | Get DDL compiler instance within SQL expression compiling construct | ### Describe the use case
Provide access to a DDL compiler instance from within a SQL expression compiling construct, just as DDL compiling constructs can reach a SQL compiler through DDLCompiler's `sql_compiler` accessor. This is useful for cases where inline schemas can be provided in SQL queries, such as SQL Server's OPENJSON or MySQL's JSON_TABLE.
### Databases / Backends / Drivers targeted
All databases. All drivers.
### Example Use
This is an example of use if ddl_compiler accessor would be present in SQLCompiler instance.
```
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.functions import GenericFunction
import sqlalchemy as sa
class json_table(GenericFunction):
inherit_cache = True
@compiles(json_table, 'mssql')
def _compile_json_table(element, compiler, **kw):
return "OPENJSON({}) WITH ({})".format(
compiler.process(element.clauses.clauses[0], **kw),
",".join(compiler.ddl_compiler.process(clause, **kw) for clause in element.clauses.clauses[1:])
)
@compiles(json_table, 'mysql')
def _compile_json_table(element, compiler, **kw):
return "JSON_TABLE({}, '$.*' COLUMNS({})) AS {}".format(
compiler.process(element.clauses.clauses[0], **kw),
",".join(compiler.ddl_compiler.process(clause, **kw) for clause in element.clauses.clauses[1:]),
element.clauses.clauses[0].name # In MySQL JSON_TABLE result is required an alias
)
select(
func.json_table(
my_table.my_json_column,
# JSON schema
sa.Column('my_column', sa.String()),
sa.Column('my_other_column', sa.Integer())
)
)
```
### Additional context
_No response_ | closed | 2024-07-09T06:42:23Z | 2024-07-09T13:20:42Z | https://github.com/sqlalchemy/sqlalchemy/issues/11586 | [
"use case"
] | apabolleta-dasnano | 0 |
babysor/MockingBird | deep-learning | 272 | Where is the requirements.txt mentioned in the ffmpeg installation step? | Install ffmpeg: 1) Download: open the link and download the Windows build; 2) Unzip the ffmpeg-xxxx.zip file to a chosen directory; 3) Add the bin directory (which contains ffmpeg.exe) from the extracted folder to the PATH environment variable; 4) Open cmd and run ffmpeg -version to verify that the system recognizes ffmpeg and to check its version.
Run `pip install -r requirements.txt` to install the remaining required packages. | closed | 2021-12-15T08:47:23Z | 2021-12-26T03:18:53Z | https://github.com/babysor/MockingBird/issues/272 | [] | jasonyun | 2 |
pytorch/pytorch | deep-learning | 149,301 | Unexpected results w/ LayerNorm -- suspecting possible memory issue? | ### 🐛 Describe the bug
I'm noticing an interesting behaviour of LayerNorm when applied to large 4d tensors (bf16) when normalized shape is an int (i.e., normalizing over the final dimension of the tensor).
What I'm seeing is that the size of the first dimension (batch size) can impact the normalized values of existing samples. To be specific, if A and B are 2 input tensors (4d), where B = A[:-1], then after passing both through the LayerNorm layer, there's a difference between A[:-1] and B even though B is a subset of A. It's almost as if LayerNorm has some memory-access issue?
This does not happen if I run smaller tensors through this operation or if I run this through a manual normalization (using torch.mean() and unbiased torch.var()).
Code that reproduces this on an A100/40GB would be something like:
```python
import torch
import torch.nn.functional as F
torch.manual_seed(1337)
device = torch.device('cuda')
USE_MANUAL_LAYERNORM = False
ln1 = torch.nn.LayerNorm(768).to(device).to(torch.bfloat16)
ln1.eval()
@torch.inference_mode()
def test():
x = torch.rand((24, 493, 768), device=device, dtype=torch.bfloat16)
x1 = x[:-1]
print('-----------------------')
print(f'x shape: {x.shape} | x1 shape: {x1.shape}')
print('> Max Diff at input:\n', (x[:-1]-x1[:]).abs().max())
x = torch.tanh(x)
x1 = torch.tanh(x1)
x = x[:, :, None, :] + x[:, None, :, :]
x1 = x1[:, :, None, :] + x1[:, None, :, :]
print('> Max Diff after broadcast:\n', (x[:-1]-x1[:]).abs().max())
x = F.gelu(x)
x1 = F.gelu(x1)
print('> Max Diff after non-linearity:\n', (x[:-1]-x1[:]).abs().max())
_x = ln1(x[:, :, 0, :])
_x1 = ln1(x1[:, :, 0, :])
print('> Max Diff after 3d layernorm:\n', (_x[:-1]-_x1[:]).abs().max())
if USE_MANUAL_LAYERNORM:
x = (x - x.mean(dim=-1, keepdim=True)) / torch.sqrt(torch.var(x, dim=-1, keepdim=True, correction=1)+1e-8)
x1 = (x1 - x1.mean(dim=-1, keepdim=True)) / torch.sqrt(torch.var(x1, dim=-1, keepdim=True, correction=1)+1e-8)
print(x[0,:2, :2, :2])
print(x1[0,:2, :2, :2])
print('> Max Diff after manual 4d layernorm:\n', (x[:-1]-x1[:]).abs().max())
else:
x = ln1(x)
x1 = ln1(x1)
print(x[0,:2, :2, :2])
print(x1[0,:2, :2, :2])
print('> Max Diff after 4d layernorm:\n', (x[:-1]-x1[:]).abs().max())
test()
```
Which yields, for a large tensor:
```
x shape: torch.Size([24, 493, 768]) | x1 shape: torch.Size([23, 493, 768])
> Max Diff at input:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
> Max Diff after broadcast:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
> Max Diff after non-linearity:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
> Max Diff after 3d layernorm:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
[...]
> Max Diff after 4d layernorm:
tensor(0.6172, device='cuda:0', dtype=torch.bfloat16)
```
Doing this with `USE_MANUAL_LAYERNORM = True` gives:
```
x shape: torch.Size([24, 493, 768]) | x1 shape: torch.Size([23, 493, 768])
> Max Diff at input:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
> Max Diff after broadcast:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
> Max Diff after non-linearity:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
> Max Diff after 3d layernorm:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
[...]
> Max Diff after manual 4d layernorm:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
```
Also, for a smaller tensor (i.e., x.shape = (24, 200, 768)):
```
x shape: torch.Size([24, 200, 768]) | x1 shape: torch.Size([23, 200, 768])
> Max Diff at input:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
> Max Diff after broadcast:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
> Max Diff after non-linearity:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
> Max Diff after 3d layernorm:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
[...]
> Max Diff after 4d layernorm:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
```
Please let me know if there's any mistake in my understanding of this.
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.11.11 | packaged by conda-forge | (main, Mar 3 2025, 20:43:55) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.10.0-34-cloud-amd64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2200.240
BogoMIPS: 4400.48
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB
L1i cache: 192 KiB
L2 cache: 6 MiB
L3 cache: 38.5 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
``` | open | 2025-03-17T07:51:14Z | 2025-03-18T13:53:01Z | https://github.com/pytorch/pytorch/issues/149301 | [
"module: numerical-stability",
"triaged",
"module: norms and normalization"
] | Apex95 | 3 |
kubeflow/katib | scikit-learn | 1,734 | Add `shellcheck` to CI | /kind feature
**Describe the solution you'd like**
[A clear and concise description of what you want to happen.]
Add [`shellcheck`](https://github.com/koalaman/shellcheck) to verify all shell scripts in this repository.
Ref: https://github.com/kubeflow/katib/pull/1731#discussion_r751174167
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
| closed | 2021-11-17T13:57:54Z | 2022-05-03T02:42:39Z | https://github.com/kubeflow/katib/issues/1734 | [
"kind/feature",
"lifecycle/frozen"
] | tenzen-y | 12 |
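As a sketch, a minimal GitHub Actions job for this could look like the following (workflow name and file layout are illustrative assumptions; `shellcheck` comes preinstalled on the `ubuntu-latest` runners):

```yaml
# .github/workflows/shellcheck.yaml (hypothetical)
name: Shellcheck
on: [push, pull_request]
jobs:
  shellcheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Lint all shell scripts
        run: find . -name '*.sh' -type f -print0 | xargs -0 -r shellcheck
```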
rougier/scientific-visualization-book | matplotlib | 65 | Code for Figure 2.3 from the book doesn't match the image (Chapter 2, pages 22-23) | Hi @rougier,
If we use the code for the Figure 2.3 from the book, then the border will be inside data coordinates, not outside as expected, and with different tick labels (specifying the code from the book before resolving #63):
```python
fig = plt.figure(figsize=(5, 5), dpi=100)
ax = fig.add_subplot(1, 1, 1, projection='polar')
FC_to_DC = ax.transData.inverted().transform
NDC_to_FC = ax.transAxes.transform
NDC_to_DC = lambda x: FC_to_DC(NDC_to_FC(x))
P = NDC_to_DC([[0,0], [1,0], [1,1], [0,1], [0,0]])
plt.plot(P[:,0], P[:,1], clip_on=False, zorder=-10
color="k", linewidth=1.0, linestyle="--", )
plt.scatter(P[:-1,0], P[:-1,1],
clip_on=False, facecolor="w", edgecolor="k")
plt.show()
```
But the code in [Python file](https://github.com/rougier/scientific-visualization-book/blob/master/code/coordinates/transforms-polar.py) is correct.
I compared the code and find that for displaying the figure there are 2 additional lines in Python file:
https://github.com/rougier/scientific-visualization-book/blob/a8eeebb08d443caba6b1ad4d9b18e4a449a41f06/code/coordinates/transforms-polar.py#L12
https://github.com/rougier/scientific-visualization-book/blob/a8eeebb08d443caba6b1ad4d9b18e4a449a41f06/code/coordinates/transforms-polar.py#L30
And the first line (setting the limits for y-axis and specifying tick labels) is required to get expected figure.
Thank you. | closed | 2022-07-12T12:07:41Z | 2022-07-25T07:35:46Z | https://github.com/rougier/scientific-visualization-book/issues/65 | [] | labdmitriy | 1 |
d2l-ai/d2l-en | deep-learning | 2,560 | A question about 4.7.3.3. Label Shift Correction | I'm confused about the equation $\sum_jc_{ij}p(y_j)=\mu(\hat y_i)$ and the definition of confusion matrix $C$ above.
As I understand it, the equation is based on the law of total probability, $$\sum_jP(\hat y=y_i|y=y_j)P(y=y_j)=P(\hat y=y_i)$$ where $\hat{y}$ stands for the predicted label of $x$ and $y$ stands for the true label of $x$. To link the two equations together, I matched $P(\hat y=y_i)$ with $\mu(\hat y_i)$ and $P(y=y_j)$ with $p(y_j)$. So the confusion-matrix element $c_{ij}$ needs to be a conditional probability, while according to the definition above, $c_{ij}$ is actually a joint probability estimated on the training distribution. My question is
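To make the two readings concrete (restating my own understanding only; notation as above):

```latex
% Reading required for the equation to be the law of total probability:
%   c_{ij} = p(\hat{y} = y_i \mid y = y_j)   (conditional)
\sum_j p(\hat{y} = y_i \mid y = y_j)\, p(y = y_j) = p(\hat{y} = y_i) = \mu(\hat{y}_i)

% Reading from the confusion-matrix definition as I understood it:
%   c_{ij} = p(\hat{y} = y_i,\, y = y_j)     (joint, on the training distribution)
```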
* Am I thinking wrong?
* or are we using the joint probability to calculate the target label distribution approximately while never precisely?
Looking forward to your reply! | open | 2023-10-07T04:22:40Z | 2023-10-07T04:25:29Z | https://github.com/d2l-ai/d2l-en/issues/2560 | [] | OneCoin123 | 1 |
huggingface/datasets | machine-learning | 7,196 | concatenate_datasets does not preserve shuffling state | ### Describe the bug
After concatenating iterable datasets, the shuffling state is destroyed, similar to #7156.
This means concatenation can't be used to resolve uneven numbers of samples across devices when using iterable datasets in a distributed setting, as discussed in #6623.
I also noticed that the number of shards is the same after concatenation, which I found surprising, but I don't understand the internals well enough to know whether this is actually surprising or not.
### Steps to reproduce the bug
```python
import datasets
import torch.utils.data
def gen(shards):
yield {"shards": shards}
def main():
dataset1 = datasets.IterableDataset.from_generator(
gen, gen_kwargs={"shards": list(range(25))} # TODO: how to understand this?
)
dataset2 = datasets.IterableDataset.from_generator(
gen, gen_kwargs={"shards": list(range(25, 50))} # TODO: how to understand this?
)
dataset1 = dataset1.shuffle(buffer_size=1)
dataset2 = dataset2.shuffle(buffer_size=1)
print(dataset1.n_shards)
print(dataset2.n_shards)
dataset = datasets.concatenate_datasets(
[dataset1, dataset2]
)
print(dataset.n_shards)
# dataset = dataset1
dataloader = torch.utils.data.DataLoader(
dataset,
batch_size=8,
num_workers=0,
)
for i, batch in enumerate(dataloader):
print(batch)
print("\nNew epoch")
dataset = dataset.set_epoch(1)
for i, batch in enumerate(dataloader):
print(batch)
if __name__ == "__main__":
main()
```
### Expected behavior
Shuffling state should be preserved
### Environment info
Latest datasets | open | 2024-10-03T14:30:38Z | 2025-03-18T10:56:47Z | https://github.com/huggingface/datasets/issues/7196 | [] | alex-hh | 1 |
scikit-optimize/scikit-optimize | scikit-learn | 646 | Dimensions: boolean np.array not supported | Despite the fact that numpy.array are supported types for dimensions, the built-in Python bool and the numpy.bool_ have very different semantics from each other (check [this](https://github.com/numpy/numpy/issues/9646) numpy issue), meaning `isinstance(np.array([True])[0], bool)` returns `False` as the bool value is implicitly casted to `numpy.bool_`
For this reason, `any([isinstance(d, (str, bool)) for d in dimension])` always returns `False` if dimension is a np.array with boolean values, which makes skopt's `check_dimensions` function raise a ValueError with the message "Invalid dimension {}. Read the documentation for supported types."
| closed | 2018-03-13T13:49:24Z | 2018-03-13T15:07:46Z | https://github.com/scikit-optimize/scikit-optimize/issues/646 | [] | carlosdanielcsantos | 0 |
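The behavior being described can be checked directly (a minimal repro; `skopt` itself is not needed):

```python
import numpy as np

arr = np.array([True, False])

# elements of a boolean ndarray are numpy.bool_, not the built-in bool,
# so the str/bool isinstance check in check_dimensions never matches
print(type(arr[0]))                  # a NumPy scalar type, not <class 'bool'>
print(isinstance(arr[0], bool))      # False
print(isinstance(arr[0], np.bool_))  # True

# a plain Python list keeps the built-in bool
print(isinstance([True, False][0], bool))  # True
```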
davidsandberg/facenet | tensorflow | 867 | Hello, do you have any model trained on triplet loss? | Hi, I'm currently working on a private video-based roll-in (attendance) system. Dlib did not perform well in real scenes and produced a lot of errors.
Although I've changed the way correct predictions are selected, the result is still not usable.
I found that triplet loss is the closest match to my current selection method, so I'd like a trained model to test against.
Does anyone have such a model?
I'm not sure how long it would take to train on VGG2 with two 1080 Ti cards.
Or could someone estimate how long it would take to train such a model? | open | 2018-09-11T06:01:12Z | 2018-09-11T06:01:12Z | https://github.com/davidsandberg/facenet/issues/867 | [] | Heermosi | 0 |
tqdm/tqdm | jupyter | 1,411 | Add support for more time formats in `rate_inv_fmt` | - [X] I have marked all applicable categories:
+ [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [X] new feature request
- [X] I have visited the [source website], and in particular
read the [known issues]
- [X] I have searched through the [issue tracker] for duplicates
- [ ] I have mentioned version numbers, operating system and
environment, where applicable:
When setting [rate_inv_fmt](https://github.com/tqdm/tqdm/blob/87d253f65621884c9a4020fecabc7824029e2358/tqdm/std.py#L447) in `format_meter` function, it would be nice to use different time formats when `unit_scale` is `True` (maybe create a `format_rate` function to use here instead of `format_sizeof`?).
This way, we could see information like `1.35min/it`, `8.55h/it`, or even `1.5d/it`. | open | 2022-12-27T19:37:06Z | 2022-12-27T19:45:50Z | https://github.com/tqdm/tqdm/issues/1411 | [] | george-gca | 0 |
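As a rough sketch of what such a helper could look like (the name `format_rate` and the thresholds are illustrative assumptions, not existing tqdm API):

```python
def format_rate(seconds_per_item: float) -> str:
    """Format a slow iteration rate with a human-friendly time unit."""
    if seconds_per_item < 60:
        return f"{seconds_per_item:.2f}s/it"
    minutes = seconds_per_item / 60
    if minutes < 60:
        return f"{minutes:.2f}min/it"
    hours = minutes / 60
    if hours < 24:
        return f"{hours:.2f}h/it"
    return f"{hours / 24:.2f}d/it"

print(format_rate(81))      # 81 s/it   -> 1.35min/it
print(format_rate(30780))   # 8.55 h/it -> 8.55h/it
print(format_rate(129600))  # 1.5 days  -> 1.50d/it
```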
coqui-ai/TTS | pytorch | 2,406 | docker image for mac m1 chip | ### Describe the bug
no matching manifest for linux/arm64/v8 in the manifest list entries
### To Reproduce
have a Mac M1 with an Apple chip
run `docker pull ghcr.io/coqui-ai/tts-cpu`
> docker pull ghcr.io/coqui-ai/tts-cpu
>
> Using default tag: latest
> latest: Pulling from coqui-ai/tts-cpu
> no matching manifest for linux/arm64/v8 in the manifest list entries
### Expected behavior
pull the image
### Logs
_No response_
### Environment
```shell
Mac with M1 Apple chip
```
### Additional context
_No response_ | closed | 2023-03-10T17:08:23Z | 2024-01-10T02:43:04Z | https://github.com/coqui-ai/TTS/issues/2406 | [
"good first issue",
"wontfix",
"feature request"
] | ans1genie | 10 |
pennersr/django-allauth | django | 3,073 | Idea: django-oauth-toolkit provider | I have the need to implement a custom OAuth2 provider for a service that uses [django-oauth-toolkit](https://django-oauth-toolkit.readthedocs.io/en/latest/). Instead of having only my custom provider (that is already implemented here [techmatters/terraso-allauth](https://github.com/techmatters/terraso-allauth/)), I thought that it could be interesting for the `django-allauth` project to have a generic provider that work well with any Django service using `django-oauth-toolkit`.
It seems this approach is a bit different from the current way of writing providers (to a specific library instead of to a service). So, it would be good to know if it looks like something acceptable for the project. If it's something acceptable, I can work on this front and submit a PR.
Of course, if you guys have experience doing this integration and have better approaches in mind, it would be great to hear more about :wink: | closed | 2022-04-14T20:59:09Z | 2023-07-06T05:49:18Z | https://github.com/pennersr/django-allauth/issues/3073 | [] | caiocarrara | 1 |
ultralytics/yolov5 | deep-learning | 13,010 | stuck training on NVIDIA H100 | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I am training my custom dataset on an NVIDIA H100 (80GB HBM3, 81008MiB), single GPU only, but training gets stuck after the model summary.
It works well on NVIDIA GeForce RTX 2080 Ti, RTX 3090.
I don't know why it does not work on H100.
I need your help.
Training command:
`
root@548fdf5867cc:/usr/src/app# python train.py
train: weights=yolov5s.pt, cfg=, data=data/coco128.yaml, hyp=data/hyps/hyp.scratch-low.yaml, epochs=100, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, evolve_population=data/hyps, resume_evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs/train, name=exp, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest, ndjson_console=False, ndjson_file=False
github: up to date with https://github.com/ultralytics/yolov5 ✅
YOLOv5 🚀 v7.0-312-g1bcd17ee Python-3.10.9 torch-2.0.0 CUDA:0 (NVIDIA H100 80GB HBM3, 81008MiB)
hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0
Comet: run 'pip install comet_ml' to automatically track and visualize YOLOv5 🚀 runs in Comet
TensorBoard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/
Dataset not found ⚠️, missing paths ['/usr/src/datasets/coco128/images/train2017']
Downloading https://ultralytics.com/assets/coco128.zip to coco128.zip...
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6.66M/6.66M [00:01<00:00, 6.83MB/s]
Dataset download success ✅ (3.4s), saved to /usr/src/datasets
from n params module arguments
0 -1 1 3520 models.common.Conv [3, 32, 6, 2, 2]
1 -1 1 18560 models.common.Conv [32, 64, 3, 2]
2 -1 1 18816 models.common.C3 [64, 64, 1]
3 -1 1 73984 models.common.Conv [64, 128, 3, 2]
4 -1 2 115712 models.common.C3 [128, 128, 2]
5 -1 1 295424 models.common.Conv [128, 256, 3, 2]
6 -1 3 625152 models.common.C3 [256, 256, 3]
7 -1 1 1180672 models.common.Conv [256, 512, 3, 2]
8 -1 1 1182720 models.common.C3 [512, 512, 1]
9 -1 1 656896 models.common.SPPF [512, 512, 5]
10 -1 1 131584 models.common.Conv [512, 256, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 1 361984 models.common.C3 [512, 256, 1, False]
14 -1 1 33024 models.common.Conv [256, 128, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 1 90880 models.common.C3 [256, 128, 1, False]
18 -1 1 147712 models.common.Conv [128, 128, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 1 296448 models.common.C3 [256, 256, 1, False]
21 -1 1 590336 models.common.Conv [256, 256, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 1 1182720 models.common.C3 [512, 512, 1, False]
24 [17, 20, 23] 1 229245 models.yolo.Detect [80, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model summary: 214 layers, 7235389 parameters, 7235389 gradients, 16.6 GFLOPs
`
### Additional
_No response_ | closed | 2024-05-14T06:57:16Z | 2024-10-20T19:45:53Z | https://github.com/ultralytics/yolov5/issues/13010 | [
"question",
"Stale"
] | SoraJung | 5 |
InstaPy/InstaPy | automation | 6,532 | Bot not working on any computer - why? | Hello - I have the following script, which works fine on one Win10 client but fails on another Win10 client:
```
import os
from instapy import InstaPy
session = InstaPy(username=INSTA_USER,
password=INSTA_PW,
headless_browser= True)
session.login()
session.set_comments(listComments)
session.like_by_tags(listTags, amount=likeCount)
session.set_dont_like(listNotTags)
session.set_do_follow(True, percentage=100)
session.set_do_comment(True, percentage=100)
session.end()
```
On the first client it runs through without problems, and on the second one I get this error, right at the login I would say:
```
$ python instaBot.py
InstaPy Version: 0.6.16
._. ._. ._. ._. ._. ._. ._. ._. ._.
Workspace in use: "C:/Users/WRSPOL/InstaPy"
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
INFO [2022-03-02 20:56:30] [aunikat_wien] Session started!
oooooooooooooooooooooooooooooooooooooooooooooooooooooo
INFO [2022-03-02 20:56:30] [aunikat_wien] -- Connection Checklist [1/2] (Internet Connection Status)
INFO [2022-03-02 20:56:30] [aunikat_wien] - Internet Connection Status: ok
INFO [2022-03-02 20:56:30] [aunikat_wien] - Current IP is "185.17.14.8" and it's from "Austria/AT"
INFO [2022-03-02 20:56:30] [aunikat_wien] -- Connection Checklist [2/2] (Hide Selenium Extension)
INFO [2022-03-02 20:56:31] [aunikat_wien] - window.navigator.webdriver response: True
WARNING [2022-03-02 20:56:31] [aunikat_wien] - Hide Selenium Extension: error
INFO [2022-03-02 20:56:34] [aunikat_wien] - Cookie file not found, creating cookie...
INFO [2022-03-02 20:57:18] [aunikat_wien] Timed out with failure while explicitly waiting until visibility of element located!
Traceback (most recent call last):
File "C:\DEV\Fiverr\TRY\littlescreamer\instaBot.py", line 50, in <module>
File "C:\DEV\.venv\instapy\lib\site-packages\instapy\login_util.py", line 385, in login_user
input_username = browser.find_element(By.XPATH, input_username_XP)
File "C:\DEV\.venv\instapy\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 1248, in find_element
return self.execute(Command.FIND_ELEMENT, {
  File "C:\DEV\.venv\instapy\lib\site-packages\selenium\webdriver\remote\webdriver.py", in execute
    self.error_handler.check_response(response)
  File "C:\DEV\.venv\instapy\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 247, in check_response
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: //input[@name='username']
Stacktrace:
NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.jsm:393:5
element.find/</<@chrome://remote/content/marionette/element.js:305:16
```
What could be the reason for this?
Does anything special need to be installed (Chrome, Firefox, etc.)?
| open | 2022-03-02T20:02:45Z | 2022-03-26T19:14:37Z | https://github.com/InstaPy/InstaPy/issues/6532 | [] | Rapid1898-code | 8 |
WZMIAOMIAO/deep-learning-for-image-processing | pytorch | 265 | Results on VOC07 are about 7% lower in mAP[0.5] than in the original paper (sorry, I didn't see your earlier reply before the issue was closed) | Original question:
Training on the VOC2007 trainval images and testing on the VOC2007 test images gives 63% mAP[0.5], but the Faster R-CNN paper reports a result close to 70% (though the paper uses 300 proposals). Could this be because some settings differ? For example, freezing some backbone layers; but if the VOC data is insufficient, shouldn't freezing actually help?
Your reply was:
Is the backbone you are using VGG? Are you sure you modified the backbone correctly? Are you sure you loaded the correct pretrained backbone weights?
My reply:
I am using VGG. For the backbone, I used the part you commented out in train_mobilenetv2.py and commented out the mobilenetv2 part instead; the backbone weights I load are the official VGG weights downloaded from PyTorch.

Also, there is another phenomenon that may or may not be the cause: when I set batch size = 1, VGG reaches 68 mAP, but with batch size = 8 it only reaches 63...
Finally, thank you for your code and the quick replies!
| closed | 2021-05-22T05:02:25Z | 2021-08-10T04:27:02Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/265 | [] | LinfengYuan1997 | 4 |
davidsandberg/facenet | computer-vision | 1,198 | Regarding creating custom model | Hi
I would like to create a custom model for my own dataset, rather than a classifier on top of an existing model, say for example 20180402-114759.
Can you help me outline the steps needed to create a custom model?
I would much prefer triplet loss (anchor, pos, neg) for this.
Thanks
chadnra | open | 2021-04-20T05:45:53Z | 2021-04-20T05:45:53Z | https://github.com/davidsandberg/facenet/issues/1198 | [] | ghost | 0 |
opengeos/leafmap | streamlit | 435 | Error after upgrading leafmap to 0.20.0 | The following script was working fine with leafmap 0.19.0.
```python
import leafmap, os, time

m = leafmap.Map()
m.add_basemap('SATELLITE')
m.add_shp(in_shp=extent_vector, layer_name="Extent shapefile")
m
```
But with version 0.20.0, the following error pops up:
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
File ~\miniconda3\envs\gee\lib\site-packages\leafmap\common.py:1443, in shp_to_geojson(in_shp, out_json, encoding, **kwargs)
1442 try:
-> 1443 import geopandas as gpd
1445 except Exception:
File ~\miniconda3\envs\gee\lib\site-packages\geopandas\__init__.py:1
----> 1 from geopandas._config import options # noqa
3 from geopandas.geoseries import GeoSeries # noqa
File ~\miniconda3\envs\gee\lib\site-packages\geopandas\_config.py:109
104 compat.set_use_pygeos(value)
107 use_pygeos = Option(
108 key="use_pygeos",
--> 109 default_value=_default_use_pygeos(),
110 doc=(
111 "Whether to use PyGEOS to speed up spatial operations. The default is True "
112 "if PyGEOS is installed, and follows the USE_PYGEOS environment variable "
113 "if set."
114 ),
115 validator=_validate_bool,
116 callback=_callback_use_pygeos,
117 )
120 options = Options({"display_precision": display_precision, "use_pygeos": use_pygeos})
File ~\miniconda3\envs\gee\lib\site-packages\geopandas\_config.py:95, in _default_use_pygeos()
94 def _default_use_pygeos():
---> 95 import geopandas._compat as compat
97 return compat.USE_PYGEOS
File ~\miniconda3\envs\gee\lib\site-packages\geopandas\_compat.py:9
8 import pandas as pd
----> 9 import pyproj
10 import shapely
File ~\miniconda3\envs\gee\lib\site-packages\pyproj\__init__.py:49
47 import warnings
---> 49 import pyproj.network
50 from pyproj._datadir import ( # noqa: F401 pylint: disable=unused-import
51 _pyproj_global_context_initialize,
52 set_use_global_context,
53 )
File ~\miniconda3\envs\gee\lib\site-packages\pyproj\network.py:10
8 import certifi
---> 10 from pyproj._network import ( # noqa: F401 pylint: disable=unused-import
11 _set_ca_bundle_path,
12 is_network_enabled,
13 set_network_enabled,
14 )
17 def set_ca_bundle_path(ca_bundle_path: Union[Path, str, bool, None] = None) -> None:
ImportError: DLL load failed while importing _network: The specified module could not be found.
During handling of the above exception, another exception occurred:
ImportError Traceback (most recent call last)
File ~\miniconda3\envs\gee\lib\site-packages\leafmap\common.py:1446, in shp_to_geojson(in_shp, out_json, encoding, **kwargs)
1445 except Exception:
-> 1446 raise ImportError(
1447 "Geopandas is required to perform reprojection of the data. See https://geopandas.org/install.html"
1448 )
1450 try:
ImportError: Geopandas is required to perform reprojection of the data. See https://geopandas.org/install.html
During handling of the above exception, another exception occurred:
Exception Traceback (most recent call last)
Cell In[3], line 4
1 m = leafmap.Map()
2 m.add_basemap('SATELLITE')
----> 4 m.add_shp(in_shp=extent_vector, layer_name="Extent shapefile")
6 m
File ~\miniconda3\envs\gee\lib\site-packages\leafmap\leafmap.py:2153, in Map.add_shp(self, in_shp, layer_name, style, hover_style, style_callback, fill_colors, info_mode, encoding)
2150 if not os.path.exists(in_shp):
2151 raise FileNotFoundError("The provided shapefile could not be found.")
-> 2153 geojson = shp_to_geojson(in_shp, encoding=encoding)
2154 self.add_geojson(
2155 geojson,
2156 layer_name,
(...)
2162 encoding,
2163 )
File ~\miniconda3\envs\gee\lib\site-packages\leafmap\common.py:1475, in shp_to_geojson(in_shp, out_json, encoding, **kwargs)
1472 return out_dict
1474 except Exception as e:
-> 1475 raise Exception(e)
Exception: Geopandas is required to perform reprojection of the data. See https://geopandas.org/install.html | closed | 2023-04-25T04:59:49Z | 2023-04-25T15:06:53Z | https://github.com/opengeos/leafmap/issues/435 | [
"bug"
] | ravishbapna | 2 |
nschloe/tikzplotlib | matplotlib | 424 | Out-of-order subplots don't work | If subplots are generated in a different order (e.g., 2 before 1), then tikzplotlib incorrectly generates multiple groupplot environments, breaking the placement.
```py
from matplotlib import pyplot as plt
import tikzplotlib
import numpy as np
plt.subplot(2, 1, 2) # Note: subplot 2 is accessed before subplot 1
plt.plot([0, 0, 0])
plt.subplot(2, 1, 1)
plt.plot([3, 4, 5])
print(tikzplotlib.get_tikz_code())
```
Original Matplotlib PDF:

Resulting TeX:

```tex
python3 temp.py
% This file was created by tikzplotlib v0.9.2.
\begin{tikzpicture}
\definecolor{color0}{rgb}{0.12156862745098,0.466666666666667,0.705882352941177}
\begin{groupplot}[group style={group size=1 by 2}]
\nextgroupplot[
tick align=outside,
tick pos=left,
x grid style={white!69.0196078431373!black},
xmin=-0.1, xmax=2.1,
xtick style={color=black},
y grid style={white!69.0196078431373!black},
ymin=-0.055, ymax=0.055,
ytick style={color=black}
]
\addplot [semithick, color0]
table {%
0 0
1 0
2 0
};
\end{groupplot}
\begin{groupplot}[group style={group size=1 by 2}]
\nextgroupplot[
tick align=outside,
tick pos=left,
x grid style={white!69.0196078431373!black},
xmin=-0.1, xmax=2.1,
xtick style={color=black},
y grid style={white!69.0196078431373!black},
ymin=2.9, ymax=5.1,
ytick style={color=black}
]
\addplot [semithick, color0]
table {%
0 3
1 4
2 5
};
\end{groupplot}
\end{tikzpicture}
```
Expected result: The same, but without `\end{groupplot} \begin{groupplot}[...]` in the middle.
| open | 2020-07-22T09:16:13Z | 2020-07-22T09:16:13Z | https://github.com/nschloe/tikzplotlib/issues/424 | [] | MaxGaukler | 0 |
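For comparison, the expected output (assembled by hand from the generated TeX above, axis options elided) would keep both plots in a single groupplot environment:

```latex
\begin{groupplot}[group style={group size=1 by 2}]
\nextgroupplot[
  % ... options of the first subplot ...
]
\addplot [semithick, color0]
table {%
0 0
1 0
2 0
};
\nextgroupplot[
  % ... options of the second subplot ...
]
\addplot [semithick, color0]
table {%
0 3
1 4
2 5
};
\end{groupplot}
```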
dropbox/PyHive | sqlalchemy | 2 | Support Presto query cancellation | Send an HTTP DELETE to the "nextUri".
| closed | 2014-03-05T22:53:36Z | 2018-08-01T23:46:45Z | https://github.com/dropbox/PyHive/issues/2 | [
"enhancement"
] | jingw | 2 |
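A minimal sketch of what that could look like from the client side (illustrative only, not PyHive's eventual implementation; `next_uri` is assumed to hold the `nextUri` value from the last poll response):

```python
import urllib.request

def cancel_presto_query(next_uri: str) -> int:
    """Cancel a running Presto query by sending HTTP DELETE to its nextUri."""
    req = urllib.request.Request(next_uri, method="DELETE")
    with urllib.request.urlopen(req) as resp:
        # returns the HTTP status code, e.g. 204 No Content on success
        return resp.status
```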
paperless-ngx/paperless-ngx | django | 7,988 | [BUG] Getting error while running the pipenv install --dev | ### Description
I am getting the below error after running the `pipenv install --dev` command:
pipenv install --dev
Loading .env environment variables...
Creating a virtualenv for this project...
Pipfile: /home/intern/paperless-ngx/Pipfile
Using default python from /usr/bin/python3 (3.12.3) to create virtualenv...
⠙ Creating virtual environment...created virtual environment CPython3.12.3.final.0-64 in 96ms
creator CPython3Posix(dest=/home/intern/.local/share/virtualenvs/paperless-ngx-go2CHL1_, clear=False, no_vcs_ignore=False, global=False)
seeder FromAppData(download=False, pip=bundle, via=copy, app_data_dir=/home/intern/.local/share/virtualenv)
added seed packages: pip==24.0
activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator
✔ Successfully created virtual environment!
Virtualenv location: /home/intern/.local/share/virtualenvs/paperless-ngx-go2CHL1_
Installing dependencies from Pipfile.lock (7c0b8a)...
[pipenv.exceptions.InstallError]: Looking in indexes: https://pypi.python.org/simple
[pipenv.exceptions.InstallError]: Ignoring async-timeout: markers 'python_full_version < "3.11.3"' don't match your environment
[pipenv.exceptions.InstallError]: Ignoring exceptiongroup: markers 'python_version < "3.11"' don't match your environment
[pipenv.exceptions.InstallError]: Ignoring typing-extensions: markers 'python_version < "3.11"' don't match your environment
[pipenv.exceptions.InstallError]: Collecting amqp==5.2.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 1))
[pipenv.exceptions.InstallError]: Using cached amqp-5.2.0-py3-none-any.whl (50 kB)
[pipenv.exceptions.InstallError]: Collecting anyio==4.6.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 2))
[pipenv.exceptions.InstallError]: Using cached anyio-4.6.0-py3-none-any.whl (89 kB)
[pipenv.exceptions.InstallError]: Collecting asgiref==3.8.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 3))
[pipenv.exceptions.InstallError]: Using cached asgiref-3.8.1-py3-none-any.whl (23 kB)
[pipenv.exceptions.InstallError]: Collecting billiard==4.2.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 5))
[pipenv.exceptions.InstallError]: Using cached billiard-4.2.1-py3-none-any.whl (86 kB)
[pipenv.exceptions.InstallError]: Collecting bleach==6.1.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 6))
[pipenv.exceptions.InstallError]: Using cached bleach-6.1.0-py3-none-any.whl (162 kB)
[pipenv.exceptions.InstallError]: Collecting brotli==1.1.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 7))
[pipenv.exceptions.InstallError]: Using cached Brotli-1.1.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.9 MB)
[pipenv.exceptions.InstallError]: Collecting celery==5.4.0 (from celery[redis]==5.4.0->-r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 8))
[pipenv.exceptions.InstallError]: Using cached celery-5.4.0-py3-none-any.whl (425 kB)
[pipenv.exceptions.InstallError]: Collecting certifi==2024.8.30 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 9))
[pipenv.exceptions.InstallError]: Using cached certifi-2024.8.30-py3-none-any.whl (167 kB)
[pipenv.exceptions.InstallError]: Collecting cffi==1.17.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 10))
[pipenv.exceptions.InstallError]: Using cached cffi-1.17.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (479 kB)
[pipenv.exceptions.InstallError]: Collecting channels==4.1.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 11))
[pipenv.exceptions.InstallError]: Using cached channels-4.1.0-py3-none-any.whl (30 kB)
[pipenv.exceptions.InstallError]: Collecting channels-redis==4.2.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 12))
[pipenv.exceptions.InstallError]: Using cached channels_redis-4.2.0-py3-none-any.whl (18 kB)
[pipenv.exceptions.InstallError]: Collecting charset-normalizer==3.3.2 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 13))
[pipenv.exceptions.InstallError]: Using cached charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (141 kB)
[pipenv.exceptions.InstallError]: Collecting click==8.1.7 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 14))
[pipenv.exceptions.InstallError]: Using cached click-8.1.7-py3-none-any.whl (97 kB)
[pipenv.exceptions.InstallError]: Collecting click-didyoumean==0.3.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 15))
[pipenv.exceptions.InstallError]: Using cached click_didyoumean-0.3.1-py3-none-any.whl (3.6 kB)
[pipenv.exceptions.InstallError]: Collecting click-plugins==1.1.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 16))
[pipenv.exceptions.InstallError]: Using cached click_plugins-1.1.1-py2.py3-none-any.whl (7.5 kB)
[pipenv.exceptions.InstallError]: Collecting click-repl==0.3.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 17))
[pipenv.exceptions.InstallError]: Using cached click_repl-0.3.0-py3-none-any.whl (10 kB)
[pipenv.exceptions.InstallError]: Collecting concurrent-log-handler==0.9.25 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 18))
[pipenv.exceptions.InstallError]: Using cached concurrent_log_handler-0.9.25-py3-none-any.whl (25 kB)
[pipenv.exceptions.InstallError]: Collecting cryptography==43.0.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 19))
[pipenv.exceptions.InstallError]: Using cached cryptography-43.0.1-cp39-abi3-manylinux_2_28_x86_64.whl (4.0 MB)
[pipenv.exceptions.InstallError]: Collecting dateparser==1.2.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 20))
[pipenv.exceptions.InstallError]: Using cached dateparser-1.2.0-py2.py3-none-any.whl (294 kB)
[pipenv.exceptions.InstallError]: Collecting deprecated==1.2.14 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 21))
[pipenv.exceptions.InstallError]: Using cached Deprecated-1.2.14-py2.py3-none-any.whl (9.6 kB)
[pipenv.exceptions.InstallError]: Collecting deprecation==2.1.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 22))
[pipenv.exceptions.InstallError]: Using cached deprecation-2.1.0-py2.py3-none-any.whl (11 kB)
[pipenv.exceptions.InstallError]: Collecting django==5.1.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 23))
[pipenv.exceptions.InstallError]: Using cached Django-5.1.1-py3-none-any.whl (8.2 MB)
[pipenv.exceptions.InstallError]: Collecting django-allauth==65.0.2 (from django-allauth[socialaccount]==65.0.2->-r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 24))
[pipenv.exceptions.InstallError]: Using cached django_allauth-65.0.2.tar.gz (1.3 MB)
[pipenv.exceptions.InstallError]: Installing build dependencies: started
[pipenv.exceptions.InstallError]: Installing build dependencies: finished with status 'done'
[pipenv.exceptions.InstallError]: Getting requirements to build wheel: started
[pipenv.exceptions.InstallError]: Getting requirements to build wheel: finished with status 'done'
[pipenv.exceptions.InstallError]: Preparing metadata (pyproject.toml): started
[pipenv.exceptions.InstallError]: Preparing metadata (pyproject.toml): finished with status 'done'
[pipenv.exceptions.InstallError]: Collecting django-auditlog==3.0.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 25))
[pipenv.exceptions.InstallError]: Using cached django_auditlog-3.0.0-py3-none-any.whl (35 kB)
[pipenv.exceptions.InstallError]: Collecting django-celery-results==2.5.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 26))
[pipenv.exceptions.InstallError]: Using cached django_celery_results-2.5.1-py3-none-any.whl (36 kB)
[pipenv.exceptions.InstallError]: Collecting django-compression-middleware==0.5.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 27))
[pipenv.exceptions.InstallError]: Using cached django_compression_middleware-0.5.0-py2.py3-none-any.whl (8.2 kB)
[pipenv.exceptions.InstallError]: Collecting django-cors-headers==4.4.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 28))
[pipenv.exceptions.InstallError]: Using cached django_cors_headers-4.4.0-py3-none-any.whl (12 kB)
[pipenv.exceptions.InstallError]: Collecting django-extensions==3.2.3 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 29))
[pipenv.exceptions.InstallError]: Using cached django_extensions-3.2.3-py3-none-any.whl (229 kB)
[pipenv.exceptions.InstallError]: Collecting django-filter==24.3 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 30))
[pipenv.exceptions.InstallError]: Using cached django_filter-24.3-py3-none-any.whl (95 kB)
[pipenv.exceptions.InstallError]: Collecting django-guardian==2.4.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 31))
[pipenv.exceptions.InstallError]: Using cached django_guardian-2.4.0-py3-none-any.whl (106 kB)
[pipenv.exceptions.InstallError]: Collecting django-multiselectfield==0.1.13 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 32))
[pipenv.exceptions.InstallError]: Using cached django_multiselectfield-0.1.13-py3-none-any.whl (14 kB)
[pipenv.exceptions.InstallError]: Collecting django-soft-delete==1.0.15 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 33))
[pipenv.exceptions.InstallError]: Using cached django_soft_delete-1.0.15-py3-none-any.whl (10 kB)
[pipenv.exceptions.InstallError]: Collecting djangorestframework==3.15.2 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 34))
[pipenv.exceptions.InstallError]: Using cached djangorestframework-3.15.2-py3-none-any.whl (1.1 MB)
[pipenv.exceptions.InstallError]: Collecting djangorestframework-guardian==0.3.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 35))
[pipenv.exceptions.InstallError]: Using cached djangorestframework_guardian-0.3.0-py2.py3-none-any.whl (6.9 kB)
[pipenv.exceptions.InstallError]: Collecting drf-writable-nested==0.7.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 36))
[pipenv.exceptions.InstallError]: Using cached drf_writable_nested-0.7.0-py3-none-any.whl (10 kB)
[pipenv.exceptions.InstallError]: Collecting filelock==3.16.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 38))
[pipenv.exceptions.InstallError]: Using cached filelock-3.16.1-py3-none-any.whl (16 kB)
[pipenv.exceptions.InstallError]: Collecting flower==2.0.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 39))
[pipenv.exceptions.InstallError]: Using cached flower-2.0.1-py2.py3-none-any.whl (383 kB)
[pipenv.exceptions.InstallError]: Collecting gotenberg-client==0.6.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 40))
[pipenv.exceptions.InstallError]: Using cached gotenberg_client-0.6.0-py3-none-any.whl (22 kB)
[pipenv.exceptions.InstallError]: Collecting gunicorn==23.0.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 41))
[pipenv.exceptions.InstallError]: Using cached gunicorn-23.0.0-py3-none-any.whl (85 kB)
[pipenv.exceptions.InstallError]: Collecting h11==0.14.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 42))
[pipenv.exceptions.InstallError]: Using cached h11-0.14.0-py3-none-any.whl (58 kB)
[pipenv.exceptions.InstallError]: Collecting h2==4.1.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 43))
[pipenv.exceptions.InstallError]: Using cached h2-4.1.0-py3-none-any.whl (57 kB)
[pipenv.exceptions.InstallError]: Collecting hiredis==3.0.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 44))
[pipenv.exceptions.InstallError]: Using cached hiredis-3.0.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (169 kB)
[pipenv.exceptions.InstallError]: Collecting hpack==4.0.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 45))
[pipenv.exceptions.InstallError]: Using cached hpack-4.0.0-py3-none-any.whl (32 kB)
[pipenv.exceptions.InstallError]: Collecting httpcore==1.0.6 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 46))
[pipenv.exceptions.InstallError]: Using cached httpcore-1.0.6-py3-none-any.whl (78 kB)
[pipenv.exceptions.InstallError]: Collecting httptools==0.6.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 47))
[pipenv.exceptions.InstallError]: Using cached httptools-0.6.1-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (344 kB)
[pipenv.exceptions.InstallError]: Collecting httpx==0.27.2 (from httpx[http2]==0.27.2->-r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 48))
[pipenv.exceptions.InstallError]: Using cached httpx-0.27.2-py3-none-any.whl (76 kB)
[pipenv.exceptions.InstallError]: Collecting httpx-oauth==0.15.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 49))
[pipenv.exceptions.InstallError]: Using cached httpx_oauth-0.15.1-py3-none-any.whl (37 kB)
[pipenv.exceptions.InstallError]: Collecting humanize==4.10.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 50))
[pipenv.exceptions.InstallError]: Using cached humanize-4.10.0-py3-none-any.whl (126 kB)
[pipenv.exceptions.InstallError]: Collecting hyperframe==6.0.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 51))
[pipenv.exceptions.InstallError]: Using cached hyperframe-6.0.1-py3-none-any.whl (12 kB)
[pipenv.exceptions.InstallError]: Collecting idna==3.10 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 52))
[pipenv.exceptions.InstallError]: Using cached idna-3.10-py3-none-any.whl (70 kB)
[pipenv.exceptions.InstallError]: Collecting imap-tools==1.7.3 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 53))
[pipenv.exceptions.InstallError]: Using cached imap_tools-1.7.3-py3-none-any.whl (33 kB)
[pipenv.exceptions.InstallError]: Collecting img2pdf==0.5.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 54))
[pipenv.exceptions.InstallError]: Using cached img2pdf-0.5.1.tar.gz (104 kB)
[pipenv.exceptions.InstallError]: Installing build dependencies: started
[pipenv.exceptions.InstallError]: Installing build dependencies: finished with status 'done'
[pipenv.exceptions.InstallError]: Getting requirements to build wheel: started
[pipenv.exceptions.InstallError]: Getting requirements to build wheel: finished with status 'done'
[pipenv.exceptions.InstallError]: Preparing metadata (pyproject.toml): started
[pipenv.exceptions.InstallError]: Preparing metadata (pyproject.toml): finished with status 'done'
[pipenv.exceptions.InstallError]: Collecting inotify-simple==1.3.5 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 55))
[pipenv.exceptions.InstallError]: Using cached inotify_simple-1.3.5.tar.gz (9.7 kB)
[pipenv.exceptions.InstallError]: Installing build dependencies: started
[pipenv.exceptions.InstallError]: Installing build dependencies: finished with status 'done'
[pipenv.exceptions.InstallError]: Getting requirements to build wheel: started
[pipenv.exceptions.InstallError]: Getting requirements to build wheel: finished with status 'done'
[pipenv.exceptions.InstallError]: Preparing metadata (pyproject.toml): started
[pipenv.exceptions.InstallError]: Preparing metadata (pyproject.toml): finished with status 'done'
[pipenv.exceptions.InstallError]: Collecting inotifyrecursive==0.3.5 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 56))
[pipenv.exceptions.InstallError]: Using cached inotifyrecursive-0.3.5-py3-none-any.whl (8.0 kB)
[pipenv.exceptions.InstallError]: Collecting jinja2==3.1.4 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 57))
[pipenv.exceptions.InstallError]: Using cached jinja2-3.1.4-py3-none-any.whl (133 kB)
[pipenv.exceptions.InstallError]: Collecting joblib==1.4.2 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 58))
[pipenv.exceptions.InstallError]: Using cached joblib-1.4.2-py3-none-any.whl (301 kB)
[pipenv.exceptions.InstallError]: Collecting kombu==5.4.2 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 59))
[pipenv.exceptions.InstallError]: Using cached kombu-5.4.2-py3-none-any.whl (201 kB)
[pipenv.exceptions.InstallError]: Collecting langdetect==1.0.9 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 60))
[pipenv.exceptions.InstallError]: Using cached langdetect-1.0.9.tar.gz (981 kB)
[pipenv.exceptions.InstallError]: Installing build dependencies: started
[pipenv.exceptions.InstallError]: Installing build dependencies: finished with status 'done'
[pipenv.exceptions.InstallError]: Getting requirements to build wheel: started
[pipenv.exceptions.InstallError]: Getting requirements to build wheel: finished with status 'done'
[pipenv.exceptions.InstallError]: Preparing metadata (pyproject.toml): started
[pipenv.exceptions.InstallError]: Preparing metadata (pyproject.toml): finished with status 'done'
[pipenv.exceptions.InstallError]: Collecting lxml==5.3.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 61))
[pipenv.exceptions.InstallError]: Using cached lxml-5.3.0-cp312-cp312-manylinux_2_28_x86_64.whl (4.9 MB)
[pipenv.exceptions.InstallError]: Collecting markdown-it-py==3.0.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 62))
[pipenv.exceptions.InstallError]: Using cached markdown_it_py-3.0.0-py3-none-any.whl (87 kB)
[pipenv.exceptions.InstallError]: Collecting markupsafe==2.1.5 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 63))
[pipenv.exceptions.InstallError]: Using cached MarkupSafe-2.1.5-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (28 kB)
[pipenv.exceptions.InstallError]: Collecting mdurl==0.1.2 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 64))
[pipenv.exceptions.InstallError]: Using cached mdurl-0.1.2-py3-none-any.whl (10.0 kB)
[pipenv.exceptions.InstallError]: Collecting msgpack==1.1.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 65))
[pipenv.exceptions.InstallError]: Using cached msgpack-1.1.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (401 kB)
[pipenv.exceptions.InstallError]: Collecting mysqlclient==2.2.4 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 66))
[pipenv.exceptions.InstallError]: Using cached mysqlclient-2.2.4.tar.gz (90 kB)
[pipenv.exceptions.InstallError]: Installing build dependencies: started
[pipenv.exceptions.InstallError]: Installing build dependencies: finished with status 'done'
[pipenv.exceptions.InstallError]: Getting requirements to build wheel: started
[pipenv.exceptions.InstallError]: Getting requirements to build wheel: finished with status 'error'
[pipenv.exceptions.InstallError]: error: subprocess-exited-with-error
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: × Getting requirements to build wheel did not run successfully.
[pipenv.exceptions.InstallError]: │ exit code: 1
[pipenv.exceptions.InstallError]: ╰─> [30 lines of output]
[pipenv.exceptions.InstallError]: /bin/sh: 1: pkg-config: not found
[pipenv.exceptions.InstallError]: /bin/sh: 1: pkg-config: not found
[pipenv.exceptions.InstallError]: /bin/sh: 1: pkg-config: not found
[pipenv.exceptions.InstallError]: Trying pkg-config --exists mysqlclient
[pipenv.exceptions.InstallError]: Command 'pkg-config --exists mysqlclient' returned non-zero exit status 127.
[pipenv.exceptions.InstallError]: Trying pkg-config --exists mariadb
[pipenv.exceptions.InstallError]: Command 'pkg-config --exists mariadb' returned non-zero exit status 127.
[pipenv.exceptions.InstallError]: Trying pkg-config --exists libmariadb
[pipenv.exceptions.InstallError]: Command 'pkg-config --exists libmariadb' returned non-zero exit status 127.
[pipenv.exceptions.InstallError]: Traceback (most recent call last):
[pipenv.exceptions.InstallError]: File "/usr/lib/python3/dist-packages/pipenv/patched/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
[pipenv.exceptions.InstallError]: main()
[pipenv.exceptions.InstallError]: File "/usr/lib/python3/dist-packages/pipenv/patched/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
[pipenv.exceptions.InstallError]: json_out['return_val'] = hook(**hook_input['kwargs'])
[pipenv.exceptions.InstallError]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[pipenv.exceptions.InstallError]: File "/usr/lib/python3/dist-packages/pipenv/patched/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
[pipenv.exceptions.InstallError]: return hook(config_settings)
[pipenv.exceptions.InstallError]: ^^^^^^^^^^^^^^^^^^^^^
[pipenv.exceptions.InstallError]: File "/tmp/pip-build-env-3j8n0n4z/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 332, in get_requires_for_build_wheel
[pipenv.exceptions.InstallError]: return self._get_build_requires(config_settings, requirements=[])
[pipenv.exceptions.InstallError]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[pipenv.exceptions.InstallError]: File "/tmp/pip-build-env-3j8n0n4z/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 302, in _get_build_requires
[pipenv.exceptions.InstallError]: self.run_setup()
[pipenv.exceptions.InstallError]: File "/tmp/pip-build-env-3j8n0n4z/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 318, in run_setup
[pipenv.exceptions.InstallError]: exec(code, locals())
[pipenv.exceptions.InstallError]: File "<string>", line 155, in <module>
[pipenv.exceptions.InstallError]: File "<string>", line 49, in get_config_posix
[pipenv.exceptions.InstallError]: File "<string>", line 28, in find_package_name
[pipenv.exceptions.InstallError]: Exception: Can not find valid pkg-config name.
[pipenv.exceptions.InstallError]: Specify MYSQLCLIENT_CFLAGS and MYSQLCLIENT_LDFLAGS env vars manually
[pipenv.exceptions.InstallError]: [end of output]
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: note: This error originates from a subprocess, and is likely not a problem with pip.
[pipenv.exceptions.InstallError]: error: subprocess-exited-with-error
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: × Getting requirements to build wheel did not run successfully.
[pipenv.exceptions.InstallError]: │ exit code: 1
[pipenv.exceptions.InstallError]: ╰─> See above for output.
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Couldn't install package: {}
Package installation failed...
/usr/lib/python3.12/subprocess.py:1127: ResourceWarning: subprocess 29162 is still running
_warn("subprocess %s is still running" % self.pid,
ResourceWarning: Enable tracemalloc to get the object allocation traceback
sys:1: ResourceWarning: unclosed file <_io.TextIOWrapper name=4 encoding='utf-8'>
ResourceWarning: Enable tracemalloc to get the object allocation traceback
sys:1: ResourceWarning: unclosed file <_io.TextIOWrapper name=7 encoding='utf-8'>
ResourceWarning: Enable tracemalloc to get the object allocation traceback
How can I resolve this error?
Thanks in advance.
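For reference, the traceback above shows the `mysqlclient` source build failing because `pkg-config` is not on `PATH` and none of the probed packages (`mysqlclient`, `mariadb`, `libmariadb`) can be found. A likely fix (assuming a Debian/Ubuntu host, which the `apt`-style environment in the log suggests) is to install the build prerequisites before re-running `pipenv install --dev`:

```shell
# Install pkg-config plus the MySQL client development headers that
# mysqlclient's setup.py probes for (package names assume Debian/Ubuntu;
# on other distros the dev package is named differently).
sudo apt-get update
sudo apt-get install -y pkg-config default-libmysqlclient-dev build-essential

# Alternatively, as the error message itself suggests, the compiler and
# linker flags can be supplied manually instead of relying on pkg-config:
#   export MYSQLCLIENT_CFLAGS="-I/usr/include/mysql"
#   export MYSQLCLIENT_LDFLAGS="-L/usr/lib/x86_64-linux-gnu -lmysqlclient"
```

After installing the headers, `pkg-config --exists mysqlclient` should exit 0 and the wheel build should succeed.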
### Steps to reproduce
1. Go to the paperless-ngx root folder (or the `src` folder under it)
2. Run `pipenv install --dev`
3. The installation fails with the error below
### Webserver logs
```bash
pipenv install --dev
Loading .env environment variables...
Creating a virtualenv for this project...
Pipfile: /home/intern/paperless-ngx/Pipfile
Using default python from /usr/bin/python3 (3.12.3) to create virtualenv...
⠙ Creating virtual environment...created virtual environment CPython3.12.3.final.0-64 in 96ms
creator CPython3Posix(dest=/home/intern/.local/share/virtualenvs/paperless-ngx-go2CHL1_, clear=False, no_vcs_ignore=False, global=False)
seeder FromAppData(download=False, pip=bundle, via=copy, app_data_dir=/home/intern/.local/share/virtualenv)
added seed packages: pip==24.0
activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator
✔ Successfully created virtual environment!
Virtualenv location: /home/intern/.local/share/virtualenvs/paperless-ngx-go2CHL1_
Installing dependencies from Pipfile.lock (7c0b8a)...
[pipenv.exceptions.InstallError]: Looking in indexes: https://pypi.python.org/simple
[pipenv.exceptions.InstallError]: Ignoring async-timeout: markers 'python_full_version < "3.11.3"' don't match your environment
[pipenv.exceptions.InstallError]: Ignoring exceptiongroup: markers 'python_version < "3.11"' don't match your environment
[pipenv.exceptions.InstallError]: Ignoring typing-extensions: markers 'python_version < "3.11"' don't match your environment
[pipenv.exceptions.InstallError]: Collecting amqp==5.2.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 1))
[pipenv.exceptions.InstallError]: Using cached amqp-5.2.0-py3-none-any.whl (50 kB)
[pipenv.exceptions.InstallError]: Collecting anyio==4.6.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 2))
[pipenv.exceptions.InstallError]: Using cached anyio-4.6.0-py3-none-any.whl (89 kB)
[pipenv.exceptions.InstallError]: Collecting asgiref==3.8.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 3))
[pipenv.exceptions.InstallError]: Using cached asgiref-3.8.1-py3-none-any.whl (23 kB)
[pipenv.exceptions.InstallError]: Collecting billiard==4.2.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 5))
[pipenv.exceptions.InstallError]: Using cached billiard-4.2.1-py3-none-any.whl (86 kB)
[pipenv.exceptions.InstallError]: Collecting bleach==6.1.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 6))
[pipenv.exceptions.InstallError]: Using cached bleach-6.1.0-py3-none-any.whl (162 kB)
[pipenv.exceptions.InstallError]: Collecting brotli==1.1.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 7))
[pipenv.exceptions.InstallError]: Using cached Brotli-1.1.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.9 MB)
[pipenv.exceptions.InstallError]: Collecting celery==5.4.0 (from celery[redis]==5.4.0->-r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 8))
[pipenv.exceptions.InstallError]: Using cached celery-5.4.0-py3-none-any.whl (425 kB)
[pipenv.exceptions.InstallError]: Collecting certifi==2024.8.30 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 9))
[pipenv.exceptions.InstallError]: Using cached certifi-2024.8.30-py3-none-any.whl (167 kB)
[pipenv.exceptions.InstallError]: Collecting cffi==1.17.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 10))
[pipenv.exceptions.InstallError]: Using cached cffi-1.17.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (479 kB)
[pipenv.exceptions.InstallError]: Collecting channels==4.1.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 11))
[pipenv.exceptions.InstallError]: Using cached channels-4.1.0-py3-none-any.whl (30 kB)
[pipenv.exceptions.InstallError]: Collecting channels-redis==4.2.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 12))
[pipenv.exceptions.InstallError]: Using cached channels_redis-4.2.0-py3-none-any.whl (18 kB)
[pipenv.exceptions.InstallError]: Collecting charset-normalizer==3.3.2 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 13))
[pipenv.exceptions.InstallError]: Using cached charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (141 kB)
[pipenv.exceptions.InstallError]: Collecting click==8.1.7 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 14))
[pipenv.exceptions.InstallError]: Using cached click-8.1.7-py3-none-any.whl (97 kB)
[pipenv.exceptions.InstallError]: Collecting click-didyoumean==0.3.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 15))
[pipenv.exceptions.InstallError]: Using cached click_didyoumean-0.3.1-py3-none-any.whl (3.6 kB)
[pipenv.exceptions.InstallError]: Collecting click-plugins==1.1.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 16))
[pipenv.exceptions.InstallError]: Using cached click_plugins-1.1.1-py2.py3-none-any.whl (7.5 kB)
[pipenv.exceptions.InstallError]: Collecting click-repl==0.3.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 17))
[pipenv.exceptions.InstallError]: Using cached click_repl-0.3.0-py3-none-any.whl (10 kB)
[pipenv.exceptions.InstallError]: Collecting concurrent-log-handler==0.9.25 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 18))
[pipenv.exceptions.InstallError]: Using cached concurrent_log_handler-0.9.25-py3-none-any.whl (25 kB)
[pipenv.exceptions.InstallError]: Collecting cryptography==43.0.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 19))
[pipenv.exceptions.InstallError]: Using cached cryptography-43.0.1-cp39-abi3-manylinux_2_28_x86_64.whl (4.0 MB)
[pipenv.exceptions.InstallError]: Collecting dateparser==1.2.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 20))
[pipenv.exceptions.InstallError]: Using cached dateparser-1.2.0-py2.py3-none-any.whl (294 kB)
[pipenv.exceptions.InstallError]: Collecting deprecated==1.2.14 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 21))
[pipenv.exceptions.InstallError]: Using cached Deprecated-1.2.14-py2.py3-none-any.whl (9.6 kB)
[pipenv.exceptions.InstallError]: Collecting deprecation==2.1.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 22))
[pipenv.exceptions.InstallError]: Using cached deprecation-2.1.0-py2.py3-none-any.whl (11 kB)
[pipenv.exceptions.InstallError]: Collecting django==5.1.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 23))
[pipenv.exceptions.InstallError]: Using cached Django-5.1.1-py3-none-any.whl (8.2 MB)
[pipenv.exceptions.InstallError]: Collecting django-allauth==65.0.2 (from django-allauth[socialaccount]==65.0.2->-r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 24))
[pipenv.exceptions.InstallError]: Using cached django_allauth-65.0.2.tar.gz (1.3 MB)
[pipenv.exceptions.InstallError]: Installing build dependencies: started
[pipenv.exceptions.InstallError]: Installing build dependencies: finished with status 'done'
[pipenv.exceptions.InstallError]: Getting requirements to build wheel: started
[pipenv.exceptions.InstallError]: Getting requirements to build wheel: finished with status 'done'
[pipenv.exceptions.InstallError]: Preparing metadata (pyproject.toml): started
[pipenv.exceptions.InstallError]: Preparing metadata (pyproject.toml): finished with status 'done'
[pipenv.exceptions.InstallError]: Collecting django-auditlog==3.0.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 25))
[pipenv.exceptions.InstallError]: Using cached django_auditlog-3.0.0-py3-none-any.whl (35 kB)
[pipenv.exceptions.InstallError]: Collecting django-celery-results==2.5.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 26))
[pipenv.exceptions.InstallError]: Using cached django_celery_results-2.5.1-py3-none-any.whl (36 kB)
[pipenv.exceptions.InstallError]: Collecting django-compression-middleware==0.5.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 27))
[pipenv.exceptions.InstallError]: Using cached django_compression_middleware-0.5.0-py2.py3-none-any.whl (8.2 kB)
[pipenv.exceptions.InstallError]: Collecting django-cors-headers==4.4.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 28))
[pipenv.exceptions.InstallError]: Using cached django_cors_headers-4.4.0-py3-none-any.whl (12 kB)
[pipenv.exceptions.InstallError]: Collecting django-extensions==3.2.3 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 29))
[pipenv.exceptions.InstallError]: Using cached django_extensions-3.2.3-py3-none-any.whl (229 kB)
[pipenv.exceptions.InstallError]: Collecting django-filter==24.3 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 30))
[pipenv.exceptions.InstallError]: Using cached django_filter-24.3-py3-none-any.whl (95 kB)
[pipenv.exceptions.InstallError]: Collecting django-guardian==2.4.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 31))
[pipenv.exceptions.InstallError]: Using cached django_guardian-2.4.0-py3-none-any.whl (106 kB)
[pipenv.exceptions.InstallError]: Collecting django-multiselectfield==0.1.13 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 32))
[pipenv.exceptions.InstallError]: Using cached django_multiselectfield-0.1.13-py3-none-any.whl (14 kB)
[pipenv.exceptions.InstallError]: Collecting django-soft-delete==1.0.15 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 33))
[pipenv.exceptions.InstallError]: Using cached django_soft_delete-1.0.15-py3-none-any.whl (10 kB)
[pipenv.exceptions.InstallError]: Collecting djangorestframework==3.15.2 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 34))
[pipenv.exceptions.InstallError]: Using cached djangorestframework-3.15.2-py3-none-any.whl (1.1 MB)
[pipenv.exceptions.InstallError]: Collecting djangorestframework-guardian==0.3.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 35))
[pipenv.exceptions.InstallError]: Using cached djangorestframework_guardian-0.3.0-py2.py3-none-any.whl (6.9 kB)
[pipenv.exceptions.InstallError]: Collecting drf-writable-nested==0.7.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 36))
[pipenv.exceptions.InstallError]: Using cached drf_writable_nested-0.7.0-py3-none-any.whl (10 kB)
[pipenv.exceptions.InstallError]: Collecting filelock==3.16.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 38))
[pipenv.exceptions.InstallError]: Using cached filelock-3.16.1-py3-none-any.whl (16 kB)
[pipenv.exceptions.InstallError]: Collecting flower==2.0.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 39))
[pipenv.exceptions.InstallError]: Using cached flower-2.0.1-py2.py3-none-any.whl (383 kB)
[pipenv.exceptions.InstallError]: Collecting gotenberg-client==0.6.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 40))
[pipenv.exceptions.InstallError]: Using cached gotenberg_client-0.6.0-py3-none-any.whl (22 kB)
[pipenv.exceptions.InstallError]: Collecting gunicorn==23.0.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 41))
[pipenv.exceptions.InstallError]: Using cached gunicorn-23.0.0-py3-none-any.whl (85 kB)
[pipenv.exceptions.InstallError]: Collecting h11==0.14.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 42))
[pipenv.exceptions.InstallError]: Using cached h11-0.14.0-py3-none-any.whl (58 kB)
[pipenv.exceptions.InstallError]: Collecting h2==4.1.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 43))
[pipenv.exceptions.InstallError]: Using cached h2-4.1.0-py3-none-any.whl (57 kB)
[pipenv.exceptions.InstallError]: Collecting hiredis==3.0.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 44))
[pipenv.exceptions.InstallError]: Using cached hiredis-3.0.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (169 kB)
[pipenv.exceptions.InstallError]: Collecting hpack==4.0.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 45))
[pipenv.exceptions.InstallError]: Using cached hpack-4.0.0-py3-none-any.whl (32 kB)
[pipenv.exceptions.InstallError]: Collecting httpcore==1.0.6 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 46))
[pipenv.exceptions.InstallError]: Using cached httpcore-1.0.6-py3-none-any.whl (78 kB)
[pipenv.exceptions.InstallError]: Collecting httptools==0.6.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 47))
[pipenv.exceptions.InstallError]: Using cached httptools-0.6.1-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (344 kB)
[pipenv.exceptions.InstallError]: Collecting httpx==0.27.2 (from httpx[http2]==0.27.2->-r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 48))
[pipenv.exceptions.InstallError]: Using cached httpx-0.27.2-py3-none-any.whl (76 kB)
[pipenv.exceptions.InstallError]: Collecting httpx-oauth==0.15.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 49))
[pipenv.exceptions.InstallError]: Using cached httpx_oauth-0.15.1-py3-none-any.whl (37 kB)
[pipenv.exceptions.InstallError]: Collecting humanize==4.10.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 50))
[pipenv.exceptions.InstallError]: Using cached humanize-4.10.0-py3-none-any.whl (126 kB)
[pipenv.exceptions.InstallError]: Collecting hyperframe==6.0.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 51))
[pipenv.exceptions.InstallError]: Using cached hyperframe-6.0.1-py3-none-any.whl (12 kB)
[pipenv.exceptions.InstallError]: Collecting idna==3.10 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 52))
[pipenv.exceptions.InstallError]: Using cached idna-3.10-py3-none-any.whl (70 kB)
[pipenv.exceptions.InstallError]: Collecting imap-tools==1.7.3 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 53))
[pipenv.exceptions.InstallError]: Using cached imap_tools-1.7.3-py3-none-any.whl (33 kB)
[pipenv.exceptions.InstallError]: Collecting img2pdf==0.5.1 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 54))
[pipenv.exceptions.InstallError]: Using cached img2pdf-0.5.1.tar.gz (104 kB)
[pipenv.exceptions.InstallError]: Installing build dependencies: started
[pipenv.exceptions.InstallError]: Installing build dependencies: finished with status 'done'
[pipenv.exceptions.InstallError]: Getting requirements to build wheel: started
[pipenv.exceptions.InstallError]: Getting requirements to build wheel: finished with status 'done'
[pipenv.exceptions.InstallError]: Preparing metadata (pyproject.toml): started
[pipenv.exceptions.InstallError]: Preparing metadata (pyproject.toml): finished with status 'done'
[pipenv.exceptions.InstallError]: Collecting inotify-simple==1.3.5 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 55))
[pipenv.exceptions.InstallError]: Using cached inotify_simple-1.3.5.tar.gz (9.7 kB)
[pipenv.exceptions.InstallError]: Installing build dependencies: started
[pipenv.exceptions.InstallError]: Installing build dependencies: finished with status 'done'
[pipenv.exceptions.InstallError]: Getting requirements to build wheel: started
[pipenv.exceptions.InstallError]: Getting requirements to build wheel: finished with status 'done'
[pipenv.exceptions.InstallError]: Preparing metadata (pyproject.toml): started
[pipenv.exceptions.InstallError]: Preparing metadata (pyproject.toml): finished with status 'done'
[pipenv.exceptions.InstallError]: Collecting inotifyrecursive==0.3.5 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 56))
[pipenv.exceptions.InstallError]: Using cached inotifyrecursive-0.3.5-py3-none-any.whl (8.0 kB)
[pipenv.exceptions.InstallError]: Collecting jinja2==3.1.4 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 57))
[pipenv.exceptions.InstallError]: Using cached jinja2-3.1.4-py3-none-any.whl (133 kB)
[pipenv.exceptions.InstallError]: Collecting joblib==1.4.2 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 58))
[pipenv.exceptions.InstallError]: Using cached joblib-1.4.2-py3-none-any.whl (301 kB)
[pipenv.exceptions.InstallError]: Collecting kombu==5.4.2 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 59))
[pipenv.exceptions.InstallError]: Using cached kombu-5.4.2-py3-none-any.whl (201 kB)
[pipenv.exceptions.InstallError]: Collecting langdetect==1.0.9 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 60))
[pipenv.exceptions.InstallError]: Using cached langdetect-1.0.9.tar.gz (981 kB)
[pipenv.exceptions.InstallError]: Installing build dependencies: started
[pipenv.exceptions.InstallError]: Installing build dependencies: finished with status 'done'
[pipenv.exceptions.InstallError]: Getting requirements to build wheel: started
[pipenv.exceptions.InstallError]: Getting requirements to build wheel: finished with status 'done'
[pipenv.exceptions.InstallError]: Preparing metadata (pyproject.toml): started
[pipenv.exceptions.InstallError]: Preparing metadata (pyproject.toml): finished with status 'done'
[pipenv.exceptions.InstallError]: Collecting lxml==5.3.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 61))
[pipenv.exceptions.InstallError]: Using cached lxml-5.3.0-cp312-cp312-manylinux_2_28_x86_64.whl (4.9 MB)
[pipenv.exceptions.InstallError]: Collecting markdown-it-py==3.0.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 62))
[pipenv.exceptions.InstallError]: Using cached markdown_it_py-3.0.0-py3-none-any.whl (87 kB)
[pipenv.exceptions.InstallError]: Collecting markupsafe==2.1.5 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 63))
[pipenv.exceptions.InstallError]: Using cached MarkupSafe-2.1.5-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (28 kB)
[pipenv.exceptions.InstallError]: Collecting mdurl==0.1.2 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 64))
[pipenv.exceptions.InstallError]: Using cached mdurl-0.1.2-py3-none-any.whl (10.0 kB)
[pipenv.exceptions.InstallError]: Collecting msgpack==1.1.0 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 65))
[pipenv.exceptions.InstallError]: Using cached msgpack-1.1.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (401 kB)
[pipenv.exceptions.InstallError]: Collecting mysqlclient==2.2.4 (from -r /tmp/pipenv-uj12p2r5-requirements/pipenv-vmhht2qy-hashed-reqs.txt (line 66))
[pipenv.exceptions.InstallError]: Using cached mysqlclient-2.2.4.tar.gz (90 kB)
[pipenv.exceptions.InstallError]: Installing build dependencies: started
[pipenv.exceptions.InstallError]: Installing build dependencies: finished with status 'done'
[pipenv.exceptions.InstallError]: Getting requirements to build wheel: started
[pipenv.exceptions.InstallError]: Getting requirements to build wheel: finished with status 'error'
[pipenv.exceptions.InstallError]: error: subprocess-exited-with-error
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: × Getting requirements to build wheel did not run successfully.
[pipenv.exceptions.InstallError]: │ exit code: 1
[pipenv.exceptions.InstallError]: ╰─> [30 lines of output]
[pipenv.exceptions.InstallError]: /bin/sh: 1: pkg-config: not found
[pipenv.exceptions.InstallError]: /bin/sh: 1: pkg-config: not found
[pipenv.exceptions.InstallError]: /bin/sh: 1: pkg-config: not found
[pipenv.exceptions.InstallError]: Trying pkg-config --exists mysqlclient
[pipenv.exceptions.InstallError]: Command 'pkg-config --exists mysqlclient' returned non-zero exit status 127.
[pipenv.exceptions.InstallError]: Trying pkg-config --exists mariadb
[pipenv.exceptions.InstallError]: Command 'pkg-config --exists mariadb' returned non-zero exit status 127.
[pipenv.exceptions.InstallError]: Trying pkg-config --exists libmariadb
[pipenv.exceptions.InstallError]: Command 'pkg-config --exists libmariadb' returned non-zero exit status 127.
[pipenv.exceptions.InstallError]: Traceback (most recent call last):
[pipenv.exceptions.InstallError]: File "/usr/lib/python3/dist-packages/pipenv/patched/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
[pipenv.exceptions.InstallError]: main()
[pipenv.exceptions.InstallError]: File "/usr/lib/python3/dist-packages/pipenv/patched/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
[pipenv.exceptions.InstallError]: json_out['return_val'] = hook(**hook_input['kwargs'])
[pipenv.exceptions.InstallError]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[pipenv.exceptions.InstallError]: File "/usr/lib/python3/dist-packages/pipenv/patched/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
[pipenv.exceptions.InstallError]: return hook(config_settings)
[pipenv.exceptions.InstallError]: ^^^^^^^^^^^^^^^^^^^^^
[pipenv.exceptions.InstallError]: File "/tmp/pip-build-env-3j8n0n4z/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 332, in get_requires_for_build_wheel
[pipenv.exceptions.InstallError]: return self._get_build_requires(config_settings, requirements=[])
[pipenv.exceptions.InstallError]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[pipenv.exceptions.InstallError]: File "/tmp/pip-build-env-3j8n0n4z/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 302, in _get_build_requires
[pipenv.exceptions.InstallError]: self.run_setup()
[pipenv.exceptions.InstallError]: File "/tmp/pip-build-env-3j8n0n4z/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 318, in run_setup
[pipenv.exceptions.InstallError]: exec(code, locals())
[pipenv.exceptions.InstallError]: File "<string>", line 155, in <module>
[pipenv.exceptions.InstallError]: File "<string>", line 49, in get_config_posix
[pipenv.exceptions.InstallError]: File "<string>", line 28, in find_package_name
[pipenv.exceptions.InstallError]: Exception: Can not find valid pkg-config name.
[pipenv.exceptions.InstallError]: Specify MYSQLCLIENT_CFLAGS and MYSQLCLIENT_LDFLAGS env vars manually
[pipenv.exceptions.InstallError]: [end of output]
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: note: This error originates from a subprocess, and is likely not a problem with pip.
[pipenv.exceptions.InstallError]: error: subprocess-exited-with-error
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: × Getting requirements to build wheel did not run successfully.
[pipenv.exceptions.InstallError]: │ exit code: 1
[pipenv.exceptions.InstallError]: ╰─> See above for output.
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Couldn't install package: {}
Package installation failed...
/usr/lib/python3.12/subprocess.py:1127: ResourceWarning: subprocess 29162 is still running
_warn("subprocess %s is still running" % self.pid,
ResourceWarning: Enable tracemalloc to get the object allocation traceback
sys:1: ResourceWarning: unclosed file <_io.TextIOWrapper name=4 encoding='utf-8'>
ResourceWarning: Enable tracemalloc to get the object allocation traceback
sys:1: ResourceWarning: unclosed file <_io.TextIOWrapper name=7 encoding='utf-8'>
ResourceWarning: Enable tracemalloc to get the object allocation traceback
```
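For what it's worth, the root cause in the log above is the missing `pkg-config` binary and MySQL/MariaDB client headers during the mysqlclient source build. A sketch of the usual fix follows (Debian/Ubuntu package names assumed; the include/lib paths in the exports are illustrative and vary by system):

```shell
# Usual fix on Debian/Ubuntu (needs root, so shown commented out):
#   apt-get update
#   apt-get install -y pkg-config default-libmysqlclient-dev build-essential

# Alternative suggested by the error message itself: bypass pkg-config and
# provide the compiler/linker flags manually before re-running the install.
export MYSQLCLIENT_CFLAGS="-I/usr/include/mysql"
export MYSQLCLIENT_LDFLAGS="-L/usr/lib/x86_64-linux-gnu -lmysqlclient"
# pipenv install   # re-run the failing install with the flags in place
```

With either route in place, the `Can not find valid pkg-config name` error should no longer abort the wheel build.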
### Browser logs
_No response_
### Paperless-ngx version
development
### Host OS
localhost
### Installation method
Docker - official image
### System status
_No response_
### Browser
chrome
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-10-23T14:50:52Z | 2024-10-23T14:52:41Z | https://github.com/paperless-ngx/paperless-ngx/issues/7988 | [
"not a bug"
] | ajinkyajoshicyret | 0 |
comfyanonymous/ComfyUI | pytorch | 6,569 | Unexpected Color Artifacts with Euler Ancestral Sampler in v0.3.12 | ### Expected Behavior
The generated image should accurately reflect the prompt without including extraneous colored blocks.
### Actual Behavior
When using ComfyUI v0.3.12 with the Euler Ancestral sampler, unexpected color artifacts appear in the generated image. These artifacts include random colored blocks that were not specified in the prompt. This issue persists across multiple runs and does not appear with other samplers under the same conditions.

### Steps to Reproduce
1. Open ComfyUI v0.3.12.
2. Set the sampler to Euler Ancestral.
3. Use any valid prompt without specifying additional colors or artifacts.
4. Run the generation process.
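For context on why this may be sampler-specific: unlike plain Euler, euler_ancestral injects fresh noise at every step. Below is a minimal sketch of the ancestral step-size split, adapted from the k-diffusion sampling code that ComfyUI's implementation is based on (an illustration of the formula, not ComfyUI's exact code):

```python
import math

def get_ancestral_step(sigma_from: float, sigma_to: float, eta: float = 1.0):
    """Split a sigma_from -> sigma_to transition into a deterministic part
    (sigma_down) and a freshly injected noise part (sigma_up)."""
    if not eta:
        return sigma_to, 0.0
    sigma_up = min(
        sigma_to,
        eta * math.sqrt(sigma_to**2 * (sigma_from**2 - sigma_to**2) / sigma_from**2),
    )
    sigma_down = math.sqrt(sigma_to**2 - sigma_up**2)
    return sigma_down, sigma_up

# With eta=1 a large share of each step is re-noised; artifacts that appear
# only with euler_ancestral usually trace back to this injected-noise term.
down, up = get_ancestral_step(14.6, 10.0)
```

The two components always satisfy `sigma_down**2 + sigma_up**2 == sigma_to**2`, so any color blocks unique to this sampler point at the noise-injection path rather than the deterministic Euler update.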
### Debug Logs
```powershell
ComfyUI Launcher Diagnostic File
Date: 2025-01-23 02:28:30
Launcher Version: 2.8.12.407
Data File Version: 2024-12-23 12:30
ComfyUI Version: ee9547ba31f5f2c1de0211a09c3fb829bd8e25e6 (2024-12-26 20:18:49)
Working Directory: E:\ComfyUI-aki-v1.4
App Directory: E:\ComfyUI-aki-v1.4
------------------------
System Information:
OS: Microsoft Windows NT 10.0.26100.0
CPU: 20 cores
Memory Size: 16384 MB Total, 2809 MB Free
Allocated Page File Size: 23552 MB Total, 22254 MB Free
Page File Settings: 0 MB Initial, 0 MB Maximum
NVIDIA Management Library:
NVIDIA Driver Version: 566.36
NVIDIA Management Library Version: 12.566.36
CUDA Driver:
Version: 12070
Devices:
00000000:01:00.0 0: NVIDIA GeForce RTX 4060 Laptop GPU [89] 7 GB
NvApi:
Version: 56636 r566_31
HIP Driver:
Not Available
DirectML Driver:
Devices:
10400 0: NVIDIA GeForce RTX 4060 Laptop GPU 7 GB
Intel Level Zero Driver:
Not Available
------------------------
Environment Variables:
ALLUSERSPROFILE=C:\ProgramData
EFC_16768=1
SESSIONNAME=Console
HOMEDRIVE=C:
ZES_ENABLE_SYSMAN=1
LOCALAPPDATA=C:\Users\17973\AppData\Local
AV_APPDATA=C:\Users\17973\AppData\Roaming
DriverData=C:\Windows\System32\Drivers\DriverData
FPS_BROWSER_USER_PROFILE_STRING=Default
PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 191 Stepping 2, GenuineIntel
IGCCSVC_DB=AQAAANCMnd8BFdERjHoAwE/Cl+sBAAAAqpeCuty8yEGD7p+k1qTcuAQAAAACAAAAAAAQZgAAAAEAACAAAACLEv8iFIzGsvkjVnbLSjnkFK6ylI+k+4OUvCYZBz5t6QAAAAAOgAAAAAIAACAAAACD6HZzA+UeYm1p/T9f4yXJZvvtifL4rAPkZneFT/3xY2AAAACD9xKI6wXYyO3Esv3dwcV/As2GcAEGQkeJmPlakgn0t+t/D5Xym1UCArawH8OJgb4wHHspc/ie9NJHlSiimiezppzjShehzGrhzAen1juLhIh6HU1CHfyh59d1r9GWX/xAAAAAC49dcEc1cHfqTHUBVnLt/JyENGkd3QqiPgjBpbl/UJqbzCiaUjO/Fk0sboIWC3GvzRpEghxoLbc/sHD6tXsn7w==
USERPROFILE=C:\Users\17973
ComSpec=C:\WINDOWS\system32\cmd.exe
CommonProgramW6432=C:\Program Files\Common Files
SystemDrive=C:
COMPUTERNAME=DESKTOP-EIKRAF1
OneDrive=C:\Users\17973\OneDrive
USERDOMAIN_ROAMINGPROFILE=DESKTOP-EIKRAF1
PROCESSOR_REVISION=bf02
PROCESSOR_LEVEL=6
ProgramFiles(x86)=C:\Program Files (x86)
CommonProgramFiles=C:\Program Files\Common Files
PUBLIC=C:\Users\Public
TMP=C:\Users\17973\AppData\Local\Temp
PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC
APPDATA=C:\Users\17973\AppData\Roaming
TEMP=C:\Users\17973\AppData\Local\Temp
Path=C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;C:\Users\Administrator\AppData\Local\Microsoft\WindowsApps;C:\Recovery\OEM\Backup\;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files\dotnet\;C:\ProgramData\Eastmoney\Choice\ExcelAddinSSL;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;C:\Program Files\NVIDIA Corporation\NVIDIA app\NvDLISR;C:\Program Files\Git\cmd;C:\Program Files\nodejs\;C:\Users\17973\AppData\Local\Microsoft\WindowsApps;C:\Users\17973\AppData\Roaming\TinyTeX\bin\windows;C:\Users\17973\AppData\Roaming\npm
PROCESSOR_ARCHITECTURE=AMD64
HOMEPATH=\Users\17973
ProgramW6432=C:\Program Files
windir=C:\WINDOWS
FPS_BROWSER_APP_PROFILE_STRING=Internet Explorer
RTOOLS43_HOME=C:\rtools43
PSModulePath=C:\Program Files\WindowsPowerShell\Modules;C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules
NUMBER_OF_PROCESSORS=20
CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
USERDOMAIN=DESKTOP-EIKRAF1
OneDriveConsumer=C:\Users\17973\OneDrive
SystemRoot=C:\WINDOWS
ProgramFiles=C:\Program Files
LOGONSERVER=\\DESKTOP-EIKRAF1
ProgramData=C:\ProgramData
OS=Windows_NT
USERNAME=17973
------------------------
Paths:
Python: E:\ComfyUI-aki-v1.4\python\python.exe
- Python Paths:
- E:\ComfyUI-aki-v1.4\python\DLLs
- E:\ComfyUI-aki-v1.4\python\lib
- E:\ComfyUI-aki-v1.4\python
- E:\ComfyUI-aki-v1.4\python\lib\site-packages
- E:\ComfyUI-aki-v1.4\python\lib\site-packages\win32
- E:\ComfyUI-aki-v1.4\python\lib\site-packages\win32\lib
- E:\ComfyUI-aki-v1.4\python\lib\site-packages\Pythonwin
- Python Packages:
- absl_py 2.0.0
- accelerate 0.32.1
- addict 2.4.0
- aggdraw 1.3.19
- aiofiles 23.2.1
- aiohttp 3.8.5
- aiosignal 1.3.1
- albucore 0.0.16
- albumentations 1.3.1
- aliyun_python_sdk_alimt 3.2.0
- aliyun_python_sdk_core 2.13.10
- annotated_types 0.7.0
- antlr4_python3_runtime 4.9.3
- anyio 4.2.0
- argostranslate 1.9.1
- arrow 1.3.0
- async_timeout 4.0.3
- attrs 23.1.0
- audioread 3.0.1
- beautifulsoup4 4.12.2
- binaryornot 0.4.4
- bitsandbytes 0.45.0
- blend_modes 2.2.0
- blind_watermark 0.4.4
- boltons 23.0.0
- boto3 1.34.86
- botocore 1.34.162
- cachetools 5.3.2
- certifi 2023.7.22
- cffi 1.16.0
- chardet 5.2.0
- charset_normalizer 3.2.0
- click 8.1.7
- clip_interrogator 0.6.0
- cmake 3.27.7
- colorama 0.4.6
- coloredlogs 15.0.1
- colorlog 6.8.0
- color_matcher 0.5.0
- colour_science 0.4.6
- compel 2.0.3
- contourpy 1.1.1
- cookiecutter 2.6.0
- cryptography 42.0.5
- cssselect2 0.7.0
- ctranslate2 3.20.0
- cycler 0.12.1
- Cython 3.0.8
- ddt 1.7.1
- decorator 5.1.1
- deepdiff 6.7.1
- deep_translator 1.11.4
- Deprecated 1.2.14
- diffusers 0.29.1
- dill 0.3.9
- diskcache 5.6.3
- distro 1.9.0
- docopt 0.6.2
- docutils 0.20.1
- einops 0.6.1
- embreex 2.17.7.post4
- exceptiongroup 1.2.0
- fairscale 0.4.13
- fal_client 0.5.6
- fastapi 0.115.5
- filelock 3.12.4
- flatbuffers 24.3.25
- flet 0.24.1
- flet_core 0.24.1
- flet_runtime 0.24.1
- fonttools 4.43.1
- frozenlist 1.4.0
- fsspec 2023.9.1
- ftfy 6.1.1
- gdown 5.2.0
- gitdb 4.0.11
- GitPython 3.1.40
- googleapis_common_protos 1.66.0
- google_ai_generativelanguage 0.6.10
- google_api_core 2.24.0
- google_api_python_client 2.156.0
- google_auth 2.37.0
- google_auth_httplib2 0.2.0
- google_generativeai 0.8.3
- gradio 5.9.1
- gradio_client 1.5.2
- grpcio 1.68.1
- grpcio_status 1.48.2
- h11 0.14.0
- h2 4.1.0
- hpack 4.0.0
- httpcore 1.0.2
- httplib2 0.22.0
- httptools 0.6.4
- httpx 0.27.0
- httpx_sse 0.4.0
- huggingface_hub 0.27.0
- humanfriendly 10.0
- hydra_core 1.3.2
- hyperframe 6.0.1
- idna 3.4
- imageio 2.31.5
- importlib_metadata 6.8.0
- insightface 0.7.3
- intel_openmp 2021.4.0
- Jinja2 3.1.2
- jiter 0.8.2
- jmespath 0.10.0
- joblib 1.3.2
- jsonschema 4.20.0
- jsonschema_specifications 2023.7.1
- kiwisolver 1.4.5
- kornia 0.7.1
- lark_parser 0.12.0
- lazy_loader 0.3
- librosa 0.10.1
- lightning_utilities 0.11.9
- llama_cpp_python 0.3.4
- llvmlite 0.41.1
- loguru 0.7.3
- lxml 4.9.3
- mapbox_earcut 1.0.1
- markdown_it_py 3.0.0
- MarkupSafe 2.1.3
- matplotlib 3.8.0
- matrix_client 0.4.0
- mdurl 0.1.1
- mediapipe 0.10.7
- mkl 2021.4.0
- mpmath 1.3.0
- msgpack 1.0.7
- mss 9.0.1
- multidict 6.0.4
- multiprocess 0.70.17
- networkx 3.1
- numba 0.58.1
- numexpr 2.8.8
- numpy 1.26.4
- oauthlib 3.2.2
- omegaconf 2.3.0
- onnx 1.15.0
- onnxruntime 1.17.0
- onnxruntime_gpu 1.19.0
- openai 1.58.1
- opencv_contrib_python 4.10.0.84
- opencv_contrib_python_headless 4.10.0.84
- opencv_python 4.10.0.84
- opencv_python_headless 4.10.0.84
- open_clip_torch 2.24.0
- ordered_set 4.1.0
- orjson 3.10.13
- packaging 23.2
- pandas 2.1.3
- pathos 0.3.3
- peft 0.13.2
- piexif 1.1.3
- pilgram 1.2.1
- pillow 10.3.0
- pip 23.0.1
- platformdirs 3.11.0
- pooch 1.8.0
- portalocker 2.8.2
- pox 0.3.5
- ppft 1.7.6.9
- prettytable 3.9.0
- protobuf 3.20.3
- proto_plus 1.25.0
- psd_tools 1.10.4
- psutil 5.9.5
- pyasn1 0.6.1
- pyasn1_modules 0.4.1
- pycparser 2.21
- pydantic 2.10.1
- pydantic_core 2.27.1
- pydub 0.25.1
- pygit2 1.15.1
- PyGithub 2.3.0
- pygments 2.18.0
- PyJWT 2.8.0
- PyMatting 1.1.12
- PyNaCl 1.5.0
- pynvml 11.5.0
- pyparsing 3.1.1
- pypng 0.20220715.0
- pyreadline3 3.4.1
- PySocks 1.7.1
- python_dateutil 2.8.2
- python_dotenv 1.0.1
- python_multipart 0.0.20
- python_slugify 8.0.4
- pytorch_lightning 2.3.3
- pytz 2023.3.post1
- pywavelets 1.5.0
- pywin32 306
- PyYAML 6.0.1
- pyzbar 0.1.9
- py_cpuinfo 9.0.0
- qrcode 7.4.2
- qudida 0.0.4
- redis 5.2.0
- referencing 0.30.2
- regex 2023.8.8
- rembg 2.0.52
- repath 0.9.0
- replicate 1.0.3
- reportlab 4.0.6
- requests 2.31.0
- rich 13.7.1
- rpds_py 0.12.0
- rsa 4.9
- Rtree 1.1.0
- ruamel.yaml 0.18.7
- ruamel.yaml.clib 0.2.12
- ruff 0.8.4
- s3transfer 0.10.4
- safehttpx 0.1.6
- safetensors 0.4.2
- scikit_image 0.20.0
- scikit_learn 1.3.2
- scipy 1.12.0
- seaborn 0.13.0
- segment_anything 1.0
- semantic_version 2.10.0
- Send2Trash 1.8.3
- sentencepiece 0.1.99
- setuptools 65.5.0
- shapely 2.0.2
- shellingham 1.5.4
- simpleeval 0.9.13
- six 1.16.0
- smmap 5.0.1
- sniffio 1.3.0
- sounddevice 0.4.6
- soundfile 0.12.1
- soupsieve 2.5
- soxr 0.3.7
- spandrel 0.3.4
- stanza 1.1.1
- starlette 0.41.3
- surrealist 1.0.5
- svg.path 6.3
- sympy 1.13.2
- tabulate 0.9.0
- tbb 2021.13.1
- termcolor 2.3.0
- text_unidecode 1.3
- thop 0.1.1.post2209072238
- threadpoolctl 3.2.0
- tifffile 2023.9.26
- timm 1.0.7
- tinycss2 1.2.1
- tipo_kgen 0.1.8
- tokenizers 0.20.3
- tomli 2.0.1
- tomlkit 0.13.2
- torch 2.3.1+cu121
- torchaudio 2.3.1+cu121
- torchmetrics 1.6.0
- torchsde 0.2.5
- torchvision 0.18.1+cu121
- tqdm 4.66.1
- trampoline 0.1.2
- transformers 4.46.3
- transparent_background 1.3.3
- trimesh 4.0.9
- typer 0.12.3
- typer_config 1.4.2
- types_python_dateutil 2.9.0.20241003
- typing_extensions 4.12.2
- tzdata 2023.3
- ultralytics 8.3.40
- ultralytics_thop 2.0.13
- uritemplate 4.1.1
- urllib3 1.26.18
- uvicorn 0.32.1
- vhacdx 0.0.5
- watchdog 4.0.2
- watchfiles 1.0.0
- wcwidth 0.2.8
- webcolors 24.11.1
- webencodings 0.5.1
- websockets 14.1
- websocket_client 1.8.0
- win32_setctime 1.2.0
- wrapt 1.16.0
- xformers 0.0.27
- xxhash 3.4.1
- yacs 0.1.8
- yapf 0.40.2
- yarl 1.9.2
- zhipuai 2.1.5.20241204
- zipp 3.17.0
- cstr 0.1.0
- easydict 1.11
- ffmpy 0.3.0
- fvcore 0.1.5.post20221221
- googletrans_py 4.0.0
- img2texture 1.0.6
- iopath 0.1.10
- pixeloe 0.0.10
- pycollada 0.8
- PyExecJS 1.5.1
- sacremoses 0.0.53
- svglib 1.5.1
- wget 3.2
Git: E:\ComfyUI-aki-v1.4\git\cmd\git.exe
Shell: C:\WINDOWS\system32\cmd.exe
Cache Path: E:\ComfyUI-aki-v1.4\.cache
------------------------
Engine Validator:
PyTorch: 2.3.1+cu121 C(12010)___
OnnxRuntime: CUDA(12020)
------------------------
Port Info:
Exclusion List:
Non-administered:
Administered:
In Use (v4):
135 => 1752
139 => 4
445 => 4
4301 => 19688
4310 => 19688
5037 => 13124
5040 => 3216
5283 => 19688
7890 => 18244
8188 => 20676
9010 => 12376
9080 => 12376
9100 => 5220
9180 => 5220
9210 => 19688
15292 => 13376
15393 => 13376
16494 => 13376
19292 => 19660
45654 => 12376
49664 => 1488
49665 => 1384
49668 => 3048
49669 => 3416
49676 => 4468
49680 => 1460
53000 => 15880
58342 => 10116
58343 => 18244
In Use (v6):
135 => 1752
445 => 4
49664 => 1488
49665 => 1384
49668 => 3048
49669 => 3416
49676 => 4468
49677 => 5280
49680 => 1460
------------------------
Config:
Audience Type: Beginner
Lock Engine: True
Engine: CUDA GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU (8 GB) [0]
VRAM Optimization: Auto [Auto]
Port: 8188 [8188]
XAttn Optimization: Xformers [Xformers]
CPU VAE: False [False]
Upcast Attention: True [True]
Precision: Auto [Auto]
Text Encoder Precision: Auto [Auto]
UNet Precision: Auto [Auto]
VAE Precision: Auto [Auto]
Force Channels Last: False [False]
Preview Method: Auto [Auto]
Smart Memory: True [True]
Deterministic: False [False]
Multi User: False [False]
Fast: False [False]
Listen: False [False]
Server Name: []
HF Offline Mode: False [False]
Cuda Allocator Backend: CudaMallocAsync [CudaMallocAsync]
Prevent Sysmem Fallback: True
Extra Args:
------------------------
Network Preferences:
Proxy Address: http://127.0.0.1:12334
Proxy Git: False
Proxy Pip: False
Proxy Model Download: False
Proxy Env: True
Mirror Pypi: True
Mirror Git: True
Mirror ExtensionList: True
Mirror Huggingface: True
Github Acceleration: False
------------------------
Log:
Adding extra search path checkpoints E:/SD/models/Stable-diffusion
Adding extra search path configs E:/SD/models/Stable-diffusion
Adding extra search path vae E:/SD/models/VAE
Adding extra search path loras E:/SD/models/Lora
Adding extra search path loras E:/SD/models/LyCORIS
Adding extra search path upscale_models E:/SD/models/ESRGAN
Adding extra search path upscale_models E:/SD/models/RealESRGAN
Adding extra search path upscale_models E:/SD/models/SwinIR
Adding extra search path embeddings E:/SD/embeddings
Adding extra search path hypernetworks E:/SD/models/hypernetworks
Adding extra search path controlnet E:/SD/models/ControlNet
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2025-01-23 02:15:08.957788
** Platform: Windows
** Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
** Python executable: E:\ComfyUI-aki-v1.4\python\python.exe
** ComfyUI Path: E:\ComfyUI-aki-v1.4
** Log path: E:\ComfyUI-aki-v1.4\comfyui.log
Prestartup times for custom nodes:
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\rgthree-comfy
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Easy-Use
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Marigold
9.7 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Manager
Total VRAM 8188 MB, total RAM 16176 MB
pytorch version: 2.3.1+cu121
xformers version: 0.0.27
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4060 Laptop GPU : cudaMallocAsync
Using xformers attention
[Prompt Server] web root: E:\ComfyUI-aki-v1.4\web
Error: [WinError 1314] A required privilege is not held by the client.: 'E:\\ComfyUI-aki-v1.4\\custom_nodes\\ComfyLiterals\\js' -> 'E:\\ComfyUI-aki-v1.4\\web\\extensions\\ComfyLiterals'
Failed to create symlink to E:\ComfyUI-aki-v1.4\web\extensions\ComfyLiterals. Please copy the folder manually.
Source: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyLiterals\js
Target: E:\ComfyUI-aki-v1.4\web\extensions\ComfyLiterals
[AnimateDiffEvo] - ERROR - No motion models found. Please download one and place in: ['E:\\ComfyUI-aki-v1.4\\custom_nodes\\ComfyUI-AnimateDiff-Evolved\\models', 'E:\\ComfyUI-aki-v1.4\\models\\animatediff_models']
[Crystools INFO] Crystools version: 1.21.0
[Crystools INFO] CPU: 13th Gen Intel(R) Core(TM) i5-13500HX - Arch: AMD64 - OS: Windows 10
[Crystools INFO] Pynvml (Nvidia) initialized.
[Crystools INFO] GPU/s:
[Crystools INFO] 0) NVIDIA GeForce RTX 4060 Laptop GPU
[Crystools INFO] NVIDIA Driver: 566.36
[ComfyUI-Easy-Use] server: v1.2.6 Loaded
[ComfyUI-Easy-Use] web root: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Easy-Use\web_version/v2 Loaded
### Loading: ComfyUI-Impact-Pack (V8.1.6)
[Impact Pack] Wildcards loading done.
### Loading: ComfyUI-Impact-Subpack (V1.1)
[Impact Subpack] ultralytics_bbox: E:\ComfyUI-aki-v1.4\models\ultralytics\bbox
[Impact Subpack] ultralytics_segm: E:\ComfyUI-aki-v1.4\models\ultralytics\segm
### Loading: ComfyUI-Inspire-Pack (V1.9.1)
Total VRAM 8188 MB, total RAM 16176 MB
pytorch version: 2.3.1+cu121
xformers version: 0.0.27
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4060 Laptop GPU : cudaMallocAsync
### Loading: ComfyUI-Manager (V2.55.5)
### ComfyUI Version: v0.3.10 | Released on '2024-12-26'
E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-segment-anything-2\sam2\modeling\sam\transformer.py:20: UserWarning: Flash Attention is disabled as it requires a GPU with Ampere (8.0) CUDA capability.
OLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = get_sdpa_settings()
(pysssss:WD14Tagger) [DEBUG] Available ORT providers: TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider
(pysssss:WD14Tagger) [DEBUG] Using ORT providers: CUDAExecutionProvider, CPUExecutionProvider
Workspace manager - Openning file hash dict
🦄🦄Loading: Workspace Manager (V2.1.0)
------------------------------------------
Comfyroll Studio v1.76 : 175 Nodes Loaded
------------------------------------------
** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
------------------------------------------
[comfyui_controlnet_aux] | INFO -> Using ckpts path: E:\ComfyUI-aki-v1.4\custom_nodes\comfyui_controlnet_aux\ckpts
[comfyui_controlnet_aux] | INFO -> Using symlinks: False
[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
DWPose: Onnxruntime with acceleration providers detected
### [START] ComfyUI AlekPet Nodes v1.0.37 ###
Exception in thread Thread-13 (<lambda>):
Traceback (most recent call last):
File "E:\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\connector.py", line 980, in _wrap_create_connection
return await self._loop.create_connection(*args, **kwargs) # type: ignore[return-value] # noqa
File "E:\ComfyUI-aki-v1.4\python\lib\asyncio\base_events.py", line 1076, in create_connection
raise exceptions[0]
File "E:\ComfyUI-aki-v1.4\python\lib\asyncio\base_events.py", line 1060, in create_connection
sock = await self._connect_sock(
File "E:\ComfyUI-aki-v1.4\python\lib\asyncio\base_events.py", line 969, in _connect_sock
await self.sock_connect(sock, address)
File "E:\ComfyUI-aki-v1.4\python\lib\asyncio\proactor_events.py", line 709, in sock_connect
return await self._proactor.connect(sock, address)
File "E:\ComfyUI-aki-v1.4\python\lib\asyncio\windows_events.py", line 826, in _poll
value = callback(transferred, key, ov)
File "E:\ComfyUI-aki-v1.4\python\lib\asyncio\windows_events.py", line 613, in finish_connect
ov.getresult()
ConnectionRefusedError: [WinError 1225] The remote computer refused the network connection.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "E:\ComfyUI-aki-v1.4\python\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "<enhanced_experience vendors.sentry_sdk.integrations.threading>", line 99, in run
File "<enhanced_experience vendors.sentry_sdk.integrations.threading>", line 94, in _run_old_run_func
File "<enhanced_experience vendors.sentry_sdk.utils>", line 1649, in reraise
File "<enhanced_experience vendors.sentry_sdk.integrations.threading>", line 92, in _run_old_run_func
File "E:\ComfyUI-aki-v1.4\python\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Manager\glob\manager_server.py", line 1373, in <lambda>
threading.Thread(target=lambda: asyncio.run(default_cache_update())).start()
File "E:\ComfyUI-aki-v1.4\python\lib\asyncio\runners.py", line 44, in run
return loop.run_until_complete(main)
File "E:\ComfyUI-aki-v1.4\python\lib\asyncio\base_events.py", line 649, in run_until_complete
return future.result()
File "E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Manager\glob\manager_server.py", line 1370, in default_cache_update
await asyncio.gather(a, b, c, d, e)
File "E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Manager\glob\manager_server.py", line 1357, in get_cache
json_obj = await core.get_data(uri, True)
File "E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Manager\glob\manager_core.py", line 662, in get_data
async with session.get(uri, headers=headers) as resp:
File "E:\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\client.py", line 1141, in __aenter__
self._resp = await self._coro
File "E:\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\client.py", line 536, in _request
conn = await self._connector.connect(
File "E:\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\connector.py", line 540, in connect
proto = await self._create_connection(req, traces, timeout)
File "E:\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\connector.py", line 899, in _create_connection
_, proto = await self._create_proxy_connection(req, traces, timeout)
File "E:\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\connector.py", line 1234, in _create_proxy_connection
transport, proto = await self._create_direct_connection(
File "E:\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\connector.py", line 1209, in _create_direct_connection
raise last_exc
File "E:\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\connector.py", line 1178, in _create_direct_connection
transp, proto = await self._wrap_create_connection(
File "E:\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\connector.py", line 988, in _wrap_create_connection
raise client_error(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientProxyConnectionError: Cannot connect to host 127.0.0.1:12334 ssl:default [远程计算机拒绝网络连接。]
Node -> ChatGLMNode: ChatGLM4TranslateCLIPTextEncodeNode, ChatGLM4TranslateTextNode, ChatGLM4InstructNode, ChatGLM4InstructMediaNode [Loading]
Node -> ArgosTranslateNode: ArgosTranslateCLIPTextEncodeNode, ArgosTranslateTextNode [Loading]
Node -> DeepTranslatorNode: DeepTranslatorCLIPTextEncodeNode, DeepTranslatorTextNode [Loading]
Node -> GoogleTranslateNode: GoogleTranslateCLIPTextEncodeNode, GoogleTranslateTextNode [Loading]
Node -> ExtrasNode: PreviewTextNode, HexToHueNode, ColorsCorrectNode [Loading]
Node -> PoseNode: PoseNode [Loading]
Node -> IDENode: IDENode [Loading]
Node -> PainterNode: PainterNode [Loading]
### [END] ComfyUI AlekPet Nodes ###
FizzleDorf Custom Nodes: Loaded
# 😺dzNodes: LayerStyle -> Cannot import name 'guidedFilter' from 'cv2.ximgproc'
A few nodes cannot works properly, while most nodes are not affected. Please REINSTALL package 'opencv-contrib-python'.
For detail refer to https://github.com/chflame163/ComfyUI_LayerStyle/issues/5
# 😺dzNodes: LayerStyle -> Cannot import name 'guidedFilter' from 'cv2.ximgproc'
A few nodes cannot works properly, while most nodes are not affected. Please REINSTALL package 'opencv-contrib-python'.
For detail refer to https://github.com/chflame163/ComfyUI_LayerStyle/issues/5
[tinyterraNodes] Loaded
Efficiency Nodes: Attempting to add Control Net options to the 'HiRes-Fix Script' Node (comfyui_controlnet_aux add-on)...Success!
Patching UNetModel.forward
UNetModel.forward has been successfully patched.
[Power Noise Suite]: 🦚🦚🦚 Squeaa-squee!!! 🦚🦚🦚
[Power Noise Suite]: Tamed 11 wild nodes.
[rgthree-comfy] Loaded 42 fantastic nodes. 🎉
WeiLinComfyUIPromptAllInOne 请求安装依赖中.......
WeiLinComfyUIPromptAllInOne 请求安装依赖成功 =======
WeiLinComfyUIPromptAllInOne background API service started successfully.
WeiLinComfyUIPromptAllInOne 插件API已成功启动!
====== WeiLin prompt-all-in-one =====
APP is running WeiLin prompt-all-in-one!.
WeiLinComfyUIPromptAllInOne 节点已启动成功!.
[TIPO-KGen]-|02:15:30|-INFO: Using model dir: E:\ComfyUI-aki-v1.4\models\kgen
Import times for custom nodes:
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\websocket_image_save.py
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\AIGODLIKE-ComfyUI-Translation
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ControlNet-LLLite-ComfyUI
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\FreeU_Advanced
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_TiledKSampler
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\stability-ComfyUI-nodes
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\cg-image-picker
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-WD14-Tagger
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_JPS-Nodes
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\PowerNoiseSuite
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_experiments
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyLiterals
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_IPAdapter_plus
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-SuperBeasts
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\images-grid-comfy-plugin
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_UltimateSDUpscale
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\comfy-image-saver
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_essentials
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\Derfuu_ComfyUI_ModdedNodes
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Custom-Scripts
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\efficiency-nodes-comfyui
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\rgthree-comfy
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Marigold
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_Comfyroll_CustomNodes
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-KJNodes
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_smZNodes
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\comfyui-workspace-manager
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_tinyterraNodes
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\comfyui_controlnet_aux
0.0 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-AnimateDiff-Evolved
0.1 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Inspire-Pack
0.1 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-segment-anything-2
0.1 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Crystools
0.1 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\comfyui-tensorops
0.1 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\z-tipo-extension
0.2 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Impact-Pack
0.3 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_LayerStyle_Advance
0.3 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_FizzNodes
0.4 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_LayerStyle
0.4 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Manager
0.6 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Easy-Use
2.1 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\WeiLin-ComfyUI-prompt-all-in-one
2.4 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Impact-Subpack
3.2 seconds: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_Custom_Nodes_AlekPet
Starting server
To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
FETCH DATA from: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
got prompt
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: [Errno 2] No such file or directory: 'E:\\ComfyUI-aki-v1.4\\input\\ComfyUI_04180_.png'
model weight dtype torch.float16, manual cast: None
model_type EPS
Using xformers attention in VAE
Using xformers attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Requested to load SDXLClipModel
loaded completely 9.5367431640625e+25 1560.802734375 True
CLIP model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16
Warning torch.load doesn't support weights_only on this pytorch version, loading unsafely.
Loads SAM model: E:\ComfyUI-aki-v1.4\models\sams\sam_vit_b_01ec64.pth (device:Prefer GPU)
[rgthree-comfy][Power Lora Loader] Matched Lora input "artist\nyaliaXL_il_lokr_V531-2.safetensors" to "nyaliaXL_il_lokr_V531-2.safetensors".
Requested to load SDXLClipModel
loaded completely 9.5367431640625e+25 2139.802734375 True
E:\ComfyUI-aki-v1.4\comfy\ldm\modules\attention.py:431: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
Requested to load SDXL
loaded completely 9.5367431640625e+25 4897.0483474731445 True
Requested to load AutoencoderKL
loaded completely 9.5367431640625e+25 159.55708122253418 True
loaded completely 9.5367431640625e+25 4897.0483474731445 True
Requested to load AutoencoderKL
loaded completely 9.5367431640625e+25 159.55708122253418 True
Processing interrupted
Prompt executed in 45.95 seconds
got prompt
2025-01-23 02:19:26.1442415 [E:onnxruntime:Default, provider_bridge_ort.cc:1992 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1637 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "E:\ComfyUI-aki-v1.4\python\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
2025-01-23 02:19:26.1451738 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:965 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Require cuDNN 9.* and CUDA 12.*, and the latest MSVC runtime. Please install all dependencies as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
plana \(blue archive\), 1girl, solo, long hair, looking at viewer, skirt, long sleeves, closed mouth, very long hair, sitting, underwear, school uniform, panties, white hair, pink hair, braid, multicolored hair, pantyhose, pleated skirt, hairband, choker, serafuku, black skirt, sailor collar, hair over one eye, black eyes, halo, feet, coat, black pantyhose, legs, toes, black choker, soles, no shoes, black hairband, knees up, colored inner hair, black coat, black sailor collar, panties under pantyhose, thighband pantyhose, black serafuku, foot focus, hugging own legs, pink halo, red pupils
Prompt executed in 5.99 seconds
got prompt
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: [Errno 2] No such file or directory: 'E:\\ComfyUI-aki-v1.4\\input\\ComfyUI_04180_.png'
model weight dtype torch.float16, manual cast: None
model_type EPS
Using xformers attention in VAE
Using xformers attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Requested to load SDXLClipModel
loaded completely 9.5367431640625e+25 1560.802734375 True
CLIP model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16
Warning torch.load doesn't support weights_only on this pytorch version, loading unsafely.
Loads SAM model: E:\ComfyUI-aki-v1.4\models\sams\sam_vit_b_01ec64.pth (device:Prefer GPU)
[rgthree-comfy][Power Lora Loader] Matched Lora input "artist\nyaliaXL_il_lokr_V531-2.safetensors" to "nyaliaXL_il_lokr_V531-2.safetensors".
Requested to load SDXLClipModel
loaded completely 9.5367431640625e+25 2139.802734375 True
Requested to load SDXL
loaded completely 9.5367431640625e+25 4897.0483474731445 True
Requested to load AutoencoderKL
loaded completely 9.5367431640625e+25 159.55708122253418 True
loaded completely 9.5367431640625e+25 4897.0483474731445 True
Requested to load AutoencoderKL
loaded completely 9.5367431640625e+25 159.55708122253418 True
FETCH DATA from: E:\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
Processing interrupted
Prompt executed in 135.59 seconds
got prompt
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: [Errno 2] No such file or directory: 'E:\\ComfyUI-aki-v1.4\\input\\ComfyUI_04180_.png'
Requested to load SDXLClipModel
loaded completely 9.5367431640625e+25 2139.802734375 True
loaded completely 9.5367431640625e+25 4897.0483474731445 True
Requested to load AutoencoderKL
loaded completely 9.5367431640625e+25 159.55708122253418 True
loaded completely 9.5367431640625e+25 4897.0483474731445 True
Requested to load AutoencoderKL
loaded completely 9.5367431640625e+25 159.55708122253418 True
Processing interrupted
Prompt executed in 31.15 seconds
got prompt
2025-01-23 02:23:01.7625792 [E:onnxruntime:Default, provider_bridge_ort.cc:1992 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1637 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "E:\ComfyUI-aki-v1.4\python\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
2025-01-23 02:23:01.7643851 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:965 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Require cuDNN 9.* and CUDA 12.*, and the latest MSVC runtime. Please install all dependencies as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
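The onnxruntime errors in this log say that `onnxruntime_providers_cuda.dll` failed to load (error 126) and that the CUDA execution provider wants cuDNN 9.* and CUDA 12.*. A small diagnostic, run in the ComfyUI Python environment, shows which execution providers are actually usable (it only assumes onnxruntime may or may not import):

```python
# Diagnostic sketch: list onnxruntime's usable execution providers.
# If "CUDAExecutionProvider" is absent, inference silently falls back to CPU.
try:
    import onnxruntime as ort
    providers = ort.get_available_providers()
    print("onnxruntime", ort.__version__, "providers:", providers)
except ImportError as exc:
    providers = None
    print("onnxruntime is not importable:", exc)
```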
got prompt
arona \(blue archive\), plana \(blue archive\), long hair, looking at viewer, smile, short hair, blue eyes, shirt, skirt, multiple girls, long sleeves, ribbon, closed mouth, 2girls, very long hair, sitting, blue hair, school uniform, full body, white hair, pink hair, grey hair, braid, hair ribbon, multicolored hair, pantyhose, pleated skirt, hairband, open clothes, choker, barefoot, serafuku, black skirt, sailor collar, hair over one eye, black eyes, two-tone hair, yuri, halo, feet, coat, black pantyhose, grey eyes, legs, neckerchief, toes, single braid, black choker, soles, no shoes, white skirt, blue shirt, black hairband, eyes visible through hair, colored inner hair, toenails, black coat, black sailor collar, open coat, black serafuku, white hairband, blue halo, hand on another's face, white choker, bow hairband, white neckerchief, hand on another's cheek, red pupils
Prompt executed in 5.38 seconds
got prompt
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: [Errno 2] No such file or directory: 'E:\\ComfyUI-aki-v1.4\\input\\ComfyUI_04180_.png'
model weight dtype torch.float16, manual cast: None
model_type EPS
Using xformers attention in VAE
Using xformers attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Requested to load SDXLClipModel
loaded completely 9.5367431640625e+25 1560.802734375 True
CLIP model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16
Warning torch.load doesn't support weights_only on this pytorch version, loading unsafely.
Loads SAM model: E:\ComfyUI-aki-v1.4\models\sams\sam_vit_b_01ec64.pth (device:Prefer GPU)
[rgthree-comfy][Power Lora Loader] Matched Lora input "artist\nyaliaXL_il_lokr_V531-2.safetensors" to "nyaliaXL_il_lokr_V531-2.safetensors".
Requested to load SDXLClipModel
loaded completely 9.5367431640625e+25 2139.802734375 True
Requested to load SDXL
loaded completely 9.5367431640625e+25 4897.0483474731445 True
Requested to load AutoencoderKL
loaded completely 9.5367431640625e+25 159.55708122253418 True
loaded completely 9.5367431640625e+25 4897.0483474731445 True
Requested to load AutoencoderKL
loaded completely 9.5367431640625e+25 159.55708122253418 True
Processing interrupted
Prompt executed in 142.22 seconds
got prompt
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
WARNING: [Errno 2] No such file or directory: 'E:\\ComfyUI-aki-v1.4\\input\\ComfyUI_04180_.png'
Requested to load SDXLClipModel
loaded completely 9.5367431640625e+25 2139.802734375 True
loaded completely 9.5367431640625e+25 4897.0483474731445 True
Requested to load AutoencoderKL
loaded completely 9.5367431640625e+25 159.55708122253418 True
loaded completely 9.5367431640625e+25 4897.0483474731445 True
Requested to load AutoencoderKL
loaded completely 9.5367431640625e+25 159.55708122253418 True
------------------------
Fault Traceback:
Not Available
```
### Other
[Diagnostics-1737570506.log](https://github.com/user-attachments/files/18510080/Diagnostics-1737570506.log) | closed | 2025-01-22T18:29:12Z | 2025-02-26T08:51:42Z | https://github.com/comfyanonymous/ComfyUI/issues/6569 | [
"Potential Bug"
] | TomXPRIME | 3 |
xmu-xiaoma666/External-Attention-pytorch | pytorch | 118 | DMSANet: Dual Multi Scale Attention Network | 作者可以把这个注意力机制加上去嘛?
| open | 2024-09-03T10:44:17Z | 2024-09-03T10:44:17Z | https://github.com/xmu-xiaoma666/External-Attention-pytorch/issues/118 | [] | wuxiaohui0 | 0 |
httpie/cli | api | 1,411 | Help option crash | ## Checklist
- [X] I've searched for similar issues.
- [X] I'm using the latest version of HTTPie.
---
## Minimal reproduction code and steps
1. install httpie latest version on debian
2. run the command `http --help`
## Current result
```text
Traceback (most recent call last):
File "http_cli.py", line 5, in <module>
File "httpie/__main__.py", line 9, in main
File "httpie/core.py", line 162, in main
File "httpie/core.py", line 77, in raw_main
File "httpie/cli/argparser.py", line 159, in parse_args
File "argparse.py", line 1869, in parse_known_args
File "argparse.py", line 2078, in _parse_known_args
File "argparse.py", line 2018, in consume_optional
File "argparse.py", line 1946, in take_action
File "argparse.py", line 1110, in __call__
File "argparse.py", line 2566, in print_help
File "httpie/cli/argparser.py", line 125, in _print_message
File "argparse.py", line 2572, in _print_message
UnicodeEncodeError: 'ascii' codec can't encode character '\u2019' in position 10678: ordinal not in range(128)
[19189] Failed to execute script 'http_cli' due to unhandled exception!
```
## Expected result
The help menu from httpie.
---
## Debug output
Please re-run the command with `--debug`, then copy the entire command & output and paste both below:
```bash
$ http --debug --help
HTTPie 3.2.1
Requests 2.27.1
Pygments 2.12.0
Python 3.9.12 (main, Apr 16 2022, 19:31:36)
[GCC 7.5.0]
/usr/bin/http
Linux 4.19.0-16-cloud-amd64
<Environment {'apply_warnings_filter': <function Environment.apply_warnings_filter at 0x7f1a16b09a60>,
'args': Namespace(),
'as_silent': <function Environment.as_silent at 0x7f1a16b09940>,
'colors': 256,
'config': {'__meta__': {'about': 'HTTPie configuration file',
'help': 'https://httpie.org/docs#config',
'httpie': '0.9.8'},
'default_options': []},
'config_dir': PosixPath('/home/debian/.httpie'),
'devnull': <property object at 0x7f1a16b0c1d0>,
'is_windows': False,
'log_error': <function Environment.log_error at 0x7f1a16b099d0>,
'program_name': 'http',
'quiet': 0,
'rich_console': <functools.cached_property object at 0x7f1a16b02a00>,
'rich_error_console': <functools.cached_property object at 0x7f1a16af5400>,
'show_displays': True,
'stderr': <_io.TextIOWrapper name='<stderr>' mode='w' encoding='ascii'>,
'stderr_isatty': True,
'stdin': <_io.TextIOWrapper name='<stdin>' mode='r' encoding='ascii'>,
'stdin_encoding': 'ascii',
'stdin_isatty': True,
'stdout': <_io.TextIOWrapper name='<stdout>' mode='w' encoding='ascii'>,
'stdout_encoding': 'ascii',
'stdout_isatty': True}>
<PluginManager {'adapters': [],
'auth': [<class 'httpie.plugins.builtin.BasicAuthPlugin'>,
<class 'httpie.plugins.builtin.DigestAuthPlugin'>,
<class 'httpie.plugins.builtin.BearerAuthPlugin'>],
'converters': [],
'formatters': [<class 'httpie.output.formatters.headers.HeadersFormatter'>,
<class 'httpie.output.formatters.json.JSONFormatter'>,
<class 'httpie.output.formatters.xml.XMLFormatter'>,
<class 'httpie.output.formatters.colors.ColorFormatter'>]}>
Traceback (most recent call last):
File "http_cli.py", line 5, in <module>
File "httpie/__main__.py", line 9, in main
File "httpie/core.py", line 162, in main
File "httpie/core.py", line 77, in raw_main
File "httpie/cli/argparser.py", line 159, in parse_args
File "argparse.py", line 1869, in parse_known_args
File "argparse.py", line 2078, in _parse_known_args
File "argparse.py", line 2018, in consume_optional
File "argparse.py", line 1946, in take_action
File "argparse.py", line 1110, in __call__
File "argparse.py", line 2566, in print_help
File "httpie/cli/argparser.py", line 125, in _print_message
File "argparse.py", line 2572, in _print_message
UnicodeEncodeError: 'ascii' codec can't encode character '\u2019' in position 10678: ordinal not in range(128)
[19180] Failed to execute script 'http_cli' due to unhandled exception!
```
## Additional information, screenshots, or code examples
I am on a cloud linux vps with debian 10. I used the pip version before but I uninstalled it to be sure.
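A possible workaround, based on the `stdout_encoding: 'ascii'` visible in the debug output above (an assumption on my side, not a confirmed fix): force Python's I/O encoding to UTF-8 before invoking httpie.

```shell
# Assumes the crash comes from the ASCII locale shown by --debug.
export PYTHONIOENCODING=utf-8
# Verify the interpreter now sees UTF-8 on stdout:
python3 -c 'import sys; print(sys.stdout.encoding)'   # → utf-8
# then retry:  http --help
```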
| open | 2022-06-08T12:13:36Z | 2022-06-09T13:34:19Z | https://github.com/httpie/cli/issues/1411 | [
"bug"
] | troplolBE | 0 |
sammchardy/python-binance | api | 1,550 | Can't retrieve order or trade history from more than 2 years ago | Hi,
`client.get_my_trades(symbol=pair)` returns only the trade history of the last 2 years.
Same for `client.get_all_orders(symbol=pair)`
Adding the parameters `startTime` or `fromId` has no effect.
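For reference, here is a sketch of how `fromId` pagination is normally expected to behave. The `fetch` callable stands in for `client.get_my_trades(symbol=pair, fromId=..., limit=...)`; all names here are illustrative, not python-binance internals:

```python
def fetch_all_trades(fetch, page_size=1000):
    """Page through trades starting at id 0, the way fromId is meant to work.

    fetch(from_id, limit) stands in for
    client.get_my_trades(symbol=pair, fromId=from_id, limit=limit).
    """
    trades, next_id = [], 0
    while True:
        batch = fetch(from_id=next_id, limit=page_size)
        if not batch:
            break
        trades.extend(batch)
        next_id = batch[-1]["id"] + 1  # resume after the last trade seen
        if len(batch) < page_size:
            break
    return trades

# Fake exchange with 2500 trades, to show the loop paging past one "page".
_history = [{"id": i} for i in range(2500)]

def _fake_fetch(from_id, limit):
    return [t for t in _history if t["id"] >= from_id][:limit]

all_trades = fetch_all_trades(_fake_fetch)
print(len(all_trades))  # → 2500
```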
How can I get the trade and order history from the beginning?
Thanks | open | 2025-02-05T04:06:26Z | 2025-02-26T12:32:17Z | https://github.com/sammchardy/python-binance/issues/1550 | [
"question"
] | waynongithub | 3 |
gunthercox/ChatterBot | machine-learning | 1,409 | What machine learning algorithm is used in this project? | First of all, thanks for this wonderful work.
I read everything in the README; it says this project uses machine learning to give the bot more realistic responses.
Could you tell me how this machine learning works? For example, a simple explanation or the name of the algorithm used, so I can build better training data.
Thanks.
| closed | 2018-09-18T09:21:13Z | 2019-07-15T09:36:45Z | https://github.com/gunthercox/ChatterBot/issues/1409 | [] | sherry0429 | 2 |
Layout-Parser/layout-parser | computer-vision | 1 | prediction script and reading-order | @RosenZhang @lolipopshock
- Does it predict the reading order of the regions?
- A prediction script to predict all images in a folder. | closed | 2020-06-25T17:47:07Z | 2020-09-18T10:27:16Z | https://github.com/Layout-Parser/layout-parser/issues/1 | [] | ghost | 0 |
modelscope/modelscope | nlp | 919 | Bug in the StructBERT question detection (spoken Chinese, general domain) project | ## Problem
In the [StructBERT question detection for spoken Chinese (general domain)](https://www.modelscope.cn/models/iic/nlp_structbert_qd_spoken_chinese-base?spm=a2c6h.13066369.question.1.65e750ed4SSbfR) project, using the free notebook environment provided by ModelScope, I tried all the available images and every one of them hits this problem:
**TypeError: PreTrainedTokenizerFast._batch_encode_plus() got an unexpected keyword argument 'label2id'**
## Reproduction
Reproducing this problem is very simple. Open the project page above, then open the [ModelScope free instances](https://modelscope.cn/my/mynotebook/preset?spm=a2c6h.13066369.question.2.65e750ed4SSbfR) page, create an instance, and run the code below in the notebook. This is also the demo given on the project page.
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks
from modelscope.outputs import OutputKeys
text_classification = pipeline(Tasks.text_classification, model='damo/nlp_structbert_alimeeting_action-classification_chinese-base')
output = text_classification("今天会议的第一个结论是明天先收集用户的需求吗?")
```
## What I have tried
- Tried all the environment images offered by the notebook, including python=3.7, 3.8, 3.10, etc.
- Searched Google and GitHub for this problem; no directly identical issue turned up, but based on the search results I judge it is related to the transformers library version
- Tried various versions of the transformers library
- Opened a related issue on the [StructBERT question detection for spoken Chinese (general domain)](https://www.modelscope.cn/models/iic/nlp_structbert_qd_spoken_chinese-base?spm=a2c6h.13066369.question.3.65e750ed4SSbfR) project page, but it has gone unanswered for half a year | closed | 2024-07-20T01:39:16Z | 2024-07-27T12:56:33Z | https://github.com/modelscope/modelscope/issues/919 | [] | lqzzy | 2 |
- 在[StructBERT问句识别-中文口语-通用领域](https://www.modelscope.cn/models/iic/nlp_structbert_qd_spoken_chinese-base?spm=a2c6h.13066369.question.3.65e750ed4SSbfR)项目中提出了相关的issue,但是半年没有人回答 | closed | 2024-07-20T01:39:16Z | 2024-07-27T12:56:33Z | https://github.com/modelscope/modelscope/issues/919 | [] | lqzzy | 2 |
numpy/numpy | numpy | 28,119 | ENH,MAINT: Remove maxarg usages/workaround | With `nditer`/`NpyIter` now supporting an arbitrary number of operands, we can remove a few work-arounds and work on improving support for arbitrary number of arguments in general.
The main work-around in Python is code using `nditer` (or avoiding it due to the limit):
* usage in `numpy/lib/_stride_tricks_impl.py` for broadcasting, which may need some thinking but should be able to make use of it to avoid existing limitations.
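For context, the Python-side workaround referred to here is the chunking inside `np.broadcast_shapes`, which already lets the public API accept more shapes than the old `np.broadcast` operand cap; with `nditer` taking arbitrary operands, that chunking could be dropped. A quick illustration:

```python
import numpy as np

# 80 shape tuples, well beyond the old 32-operand limit of np.broadcast.
shapes = [(3, 1), (1, 4)] * 40
print(np.broadcast_shapes(*shapes))  # → (3, 4)
```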
On the C-side, we have the limitation for:
* einsum
* ufuncs
Both could be removed, both have a decent number of places where stack-allocations are used, though.
There are a few other places where `NPY_MAXARGS` can be removed, e.g.:
* `compiled_base.c` uses it, but really should phrase any limit in terms of `NPY_MAXDIMS` now (probably no point in removing it). | open | 2025-01-07T11:36:14Z | 2025-01-09T15:05:06Z | https://github.com/numpy/numpy/issues/28119 | [] | seberg | 1 |
deepset-ai/haystack | nlp | 8,062 | docs: clean up the first batch of docstrings | The goal is to clean up and clarify the docstrings:
- The introduction will explain what the component does in a concise way.
- Use more concise language throughout the docstrings.
The first batch of components were identified based on their most-often usage in pipelines according to telemetry:
- [x] #8063
- [x] #8064
- [x] #8065
- [x] #8067
- [x] #8080
- [x] #8083
- [x] #8104
- [x] #8107
- [x] #8110
- [x] #8084
- [x] #8093
- [x] #8114
- [x] #8116
- [x] #8118
- [x] #8097
- [x] #8119
- [x] #8121
| closed | 2024-07-24T08:52:11Z | 2024-07-31T09:44:17Z | https://github.com/deepset-ai/haystack/issues/8062 | [] | dfokina | 1 |
aws/aws-sdk-pandas | pandas | 2,274 | timestream 'list_tables' method typo in 'NextToken' | ### Describe the bug
Hi,
The code in the package:

The string 'nextToken' should be 'NextToken' when looking it up in the response inside the while loop. (The JSON response has no 'nextToken' key; the key is 'NextToken'.)
Otherwise it does not return more than 20 table names.
So the code should be like the following:

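In case the screenshots above don't render, here is a text sketch of the fix (names paraphrased from the description; the one change is reading `'NextToken'` instead of `'nextToken'` from the response):

```python
def list_tables(client, database):
    """Sketch of the corrected loop: page using the 'NextToken' response key."""
    response = client.list_tables(DatabaseName=database, MaxResults=20)
    tables = [t["TableName"] for t in response["Tables"]]
    while "NextToken" in response:  # was: "nextToken", which never matched
        response = client.list_tables(
            DatabaseName=database, MaxResults=20, NextToken=response["NextToken"]
        )
        tables += [t["TableName"] for t in response["Tables"]]
    return tables
```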
### How to Reproduce
Just use the 'list_tables' method on a Timestream database that contains more than 20 tables. You will see that the method cannot list more than 20 tables, because the response key is read as 'nextToken' instead of 'NextToken'.
Python 3.10, awswrangler 3.0.0
### Expected behavior
_No response_
### Your project
_No response_
### Screenshots
_No response_
### OS
Windows
### Python version
3.10
### AWS SDK for pandas version
3.0.0
### Additional context
_No response_ | closed | 2023-05-15T12:09:37Z | 2023-05-15T14:03:44Z | https://github.com/aws/aws-sdk-pandas/issues/2274 | [
"bug"
] | SukruHan | 1 |
horovod/horovod | deep-learning | 3,121 | horovod.common.exceptions.HorovodInternalError: Broadcast is not supported with Join at this time. | **Environment:**
1. Framework: Pytorch (Lightning)
2. Framework version: 1.9.0+cu102 (and 1.4.2 for pytorch lightning)
3. Horovod version: 0.22.1
4. MPI version: 2.1.1
5. CUDA version: 10.2
6. NCCL version: 2708
7. Python version: 3.9.6
8. Spark / PySpark version: -
9. Ray version: -
10. OS and version: Ubuntu 18.04
11. GCC version: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
12. CMake version: cmake version 3.10.2
**Bug report:**
When running my training loop using horovod as an accelerator in pytorch lightning I encounter the following error after the 2nd epoch in all of my 8 workers:
```
[0]<stderr>: File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 816, in file_exists
[0]<stderr>: return trainer.training_type_plugin.broadcast(exists)
[0]<stderr>: │ │ └ False
[0]<stderr>: │ └ <property object at 0x7fa7182721d0>
[0]<stderr>: └ <pytorch_lightning.trainer.trainer.Trainer object at 0x7fa708526fa0>
[0]<stderr>:
[0]<stderr>: File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/plugins/training_type/horovod.py", line 129, in broadcast
[0]<stderr>: obj = hvd.broadcast_object(obj, src)
[0]<stderr>: │ │ │ └ 0
[0]<stderr>: │ │ └ False
[0]<stderr>: │ └ <function broadcast_object at 0x7fa72eb0c4c0>
[0]<stderr>: └ <module 'horovod.torch' from '/usr/local/lib/python3.9/dist-packages/horovod/torch/__init__.py'>
[0]<stderr>:
[0]<stderr>: File "/usr/local/lib/python3.9/dist-packages/horovod/torch/functions.py", line 218, in broadcast_object
[0]<stderr>: broadcast_(sz, root_rank, name + '.sz')
[0]<stderr>: │ │ │ └ 'bool'
[0]<stderr>: │ │ └ 0
[0]<stderr>: │ └ tensor([4], dtype=torch.int32)
[0]<stderr>: └ <function broadcast_ at 0x7fa72eb0a040>
[0]<stderr>:
[0]<stderr>: File "/usr/local/lib/python3.9/dist-packages/horovod/torch/mpi_ops.py", line 741, in broadcast_
[0]<stderr>: return synchronize(handle)
[0]<stderr>: │ └ 69713
[0]<stderr>: └ <function synchronize at 0x7fa72eb0a4c0>
[0]<stderr>:
[0]<stderr>: File "/usr/local/lib/python3.9/dist-packages/horovod/torch/mpi_ops.py", line 882, in synchronize
[0]<stderr>: raise HorovodInternalError(e)
[0]<stderr>: └ <class 'horovod.common.exceptions.HorovodInternalError'>
[0]<stderr>:horovod.common.exceptions.HorovodInternalError: Broadcast is not supported with Join at this time.
```
I'm not quite sure how to avoid the broadcast in the Join since this is all code internal to pytorch lightning. Do you have any idea what might be causing this? If you think this is an issue with pytorch lightning I'm happy to open an issue there.
Thanks a lot!
| closed | 2021-08-19T20:13:24Z | 2021-10-06T05:53:45Z | https://github.com/horovod/horovod/issues/3121 | [
"bug"
] | b-hahn | 2 |
biolab/orange3 | numpy | 6,574 | Datasets widget settings are incompatible between Windows and Linux/Mac | **What's wrong?**
Here are two workflows: [datasets-platform.zip](https://github.com/biolab/orange3/files/12608329/datasets-platform.zip). Both were saved with "adult" selected and, in both, Datasets should output adult when opened.
On Windows, only the Windows workflow will work. On Linux (and probably Mac), only the other one will.
The problem is how the selected dataset is saved. Windows saves it as `'selected_id': 'core\\adult.tab'` while Linux saves `'selected_id': 'core/adult.tab'`. | closed | 2023-09-14T11:49:05Z | 2023-09-15T20:26:44Z | https://github.com/biolab/orange3/issues/6574 | [
"bug"
] | markotoplak | 0 |
browser-use/browser-use | python | 198 | [Feature request] Agent returns JSON | It would be cool if ActionResult could also return JSON in addition to plain text. Sure, I can feed text into GPT-4o, but when I am using OpenAI, the newer models use JSON responses, which is super handy. | open | 2025-01-10T11:32:50Z | 2025-01-15T23:06:30Z | https://github.com/browser-use/browser-use/issues/198 | [] | detrin | 2 |
fastapi/sqlmodel | fastapi | 853 | Internal link failed at create-db-and-table.md | ### Privileged issue
- [X] I'm @tiangolo or he asked me directly to create an issue here.
### Issue Content
In the md file `\sqlmodel\docs\tutorial\create-db-and-table.md`, at line 485, there is a broken internal link.
```
Remember that [each SQL Database has some different variations in what they support?](../databases/#sql-the-language){.internal-link target=_blank}
```
Should be:
```
Remember that [each SQL Database has some different variations in what they support?](../databases.md#sql-the-language){.internal-link target=_blank}
``` | closed | 2024-03-21T03:04:02Z | 2024-06-21T02:16:58Z | https://github.com/fastapi/sqlmodel/issues/853 | [] | chinabiue | 0 |
coqui-ai/TTS | deep-learning | 3,039 | [Feature request] [SSML] Manual Stress Control | *The following FR applies mostly to XTTS, but it could be extended to other multilingual models.*
**🚀 Feature Description**
In non-English models (e.g. Russian), stress can be assigned incorrectly. In some cases this can drastically alter the word's meaning. For instance, the word "замок" is a homograph that has two meanings depending on stress: "за́мок" (a castle) or "замо́к" (a lock). Currently there is no means of determining the right stress placement according to context.
**Solution**
An implementation of Speech Synthesis Markup Language (SSML) would help mitigate this issue without the need to retrain existing models.
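For illustration, the W3C SSML spec already has an element that covers this exact case: `<phoneme>` pins a pronunciation via an IPA transcription, which disambiguates the "замок" example above. The transcriptions below are rough, illustrative guesses, not authoritative:

```xml
<speak>
  <!-- "castle": stress on the first syllable -->
  <phoneme alphabet="ipa" ph="ˈzamək">замок</phoneme>
  <!-- "lock": stress on the second syllable -->
  <phoneme alphabet="ipa" ph="zɐˈmok">замок</phoneme>
</speak>
```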
*Referring to prior issues regarding SSML: #670 #752 #1452 * | closed | 2023-10-07T06:43:31Z | 2025-03-24T10:50:59Z | https://github.com/coqui-ai/TTS/issues/3039 | [
"wontfix",
"feature request"
] | Th3rdSergeevich | 7 |
plotly/dash | dash | 2,475 | Allow modification of position/direction and style of dash_table tooltips | **Context**
- The tooltip is always positioned under its corresponding cell, except in the last rows where it's positioned on top. This automatic behaviour cannot be modified.
- Right now it's only possible to modify the _general_ style of _all_ of the tooltips with the `css` argument of `dash_table.DataTable`
**Describe the solution you'd like**
Add an argument similar to `tooltip_position` and a `style` key in `tooltip_conditional`, which could be used like:
```
dash_table.DataTable(
...,
tooltip_position='top',
    tooltip_conditional=[
        {
            'if': {
                'filter_query': '{Region} contains "New"'
            },
            'type': 'markdown',
            'value': 'This row is significant.',
            'style': {'background-color': 'red', 'max-width': '100px'}
        }
    ]
)
```
**Describe alternatives you've considered**
The tooltip is not a container per cell but a general container that covers the whole table; I guess it somehow gets the mouse position and calculates the appropriate position for the visible hover div (the position is specified in CSS with `position: absolute` and then `top: XXXpx; left: XXXpx`).
I have explored solutions with the different tooltip properties of dash_table (https://dash.plotly.com/datatable/tooltips#conditional-tooltips) but there are no keys in the tooltip dict to specify the position/direction.
I've explored a workaround by including the table as the children of a [dmc.Tooltip](https://www.dash-mantine-components.com/components/tooltip) and modifying its position based on the hover info of the table, but it didn't work. I will open a feature request so that the Product Team takes this into account for future developments.
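For completeness, the global styling mentioned in the Context section looks roughly like the sketch below; the `.dash-table-tooltip` selector is the one used in the Dash tooltip docs (treat it as an assumption), and it necessarily applies to every tooltip at once, which is exactly the limitation this request is about:

```python
# Today's workaround: one style for ALL tooltips via DataTable's `css`
# argument (selector name assumed from the Dash tooltip docs).
# A per-tooltip `style` key, as requested above, is not possible this way.
tooltip_css = [
    {
        "selector": ".dash-table-tooltip",
        "rule": "background-color: red; max-width: 100px;",
    }
]
# dash_table.DataTable(..., css=tooltip_css)  # how it would be passed
```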
| open | 2023-03-22T12:02:07Z | 2024-08-13T19:29:34Z | https://github.com/plotly/dash/issues/2475 | [
"feature",
"dash-data-table",
"P3"
] | celia-lm | 0 |
ymcui/Chinese-BERT-wwm | nlp | 170 | How to use BERT's tokenizer with a WWM-trained model? | Hello, I have a few questions I'd like to ask:
1. When pre-training Chinese with WWM, "公司" is tokenized into ["公", "##司"], and when masking, "公司" is masked as [[MASK], [MASK]]. In this case, is the label ["公", "司"] or ["公", "##司"]?
2. If the label used for training is ["公", "##司"], then what gets trained is "##司" rather than "司". When using the model afterwards, BERT's BertTokenizer will only tokenize "公司" into ["公", "司"], but according to the WWM idea it should be tokenized into ["公", "##司"]. The embeddings of "##司" and "司" are different. Does downstream use of the model also require word segmentation?
Could you please help answer these!!! Thanks | closed | 2021-01-28T09:30:41Z | 2021-01-29T06:57:06Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/170 | [] | nathinal | 9 |
modelscope/modelscope | nlp | 885 | py_sound_connect cannot be installed | modelscope version: 1.15.0
Python version: 3.8.0
Running the CTC demo program:
```
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks
kws = pipeline(
Tasks.keyword_spotting,
model='damo/speech_dfsmn_kws_char_farfield_16k_nihaomiya')
# you can also use local file path
result = kws('https://modelscope.oss-cn-beijing.aliyuncs.com/test/audios/3ch_nihaomiya10.wav')
print(result)
```
The error:
```
2024-06-17 16:26:25,011 - modelscope - INFO - PyTorch version 1.10.2+cu102 Found.
2024-06-17 16:26:25,014 - modelscope - INFO - TensorFlow version 2.13.0 Found.
2024-06-17 16:26:25,015 - modelscope - INFO - Loading ast index from C:\Users\24115\.cache\modelscope\ast_indexer
2024-06-17 16:26:25,129 - modelscope - INFO - Loading done! Current index file version is 1.15.0, with md5 81203c215ac348dce49ba68f25d9e3f8 and a total number of 980 components indexed
2024-06-17 16:26:27,142 - modelscope - WARNING - Model revision not specified, use revision: v1.1.3
2024-06-17 16:26:27,551 - modelscope - INFO - initiate model from C:\Users\24115\.cache\modelscope\hub\damo\speech_dfsmn_kws_char_farfield_16k_nihaomiya
2024-06-17 16:26:27,551 - modelscope - INFO - initiate model from location C:\Users\24115\.cache\modelscope\hub\damo\speech_dfsmn_kws_char_farfield_16k_nihaomiya.
2024-06-17 16:26:27,553 - modelscope - INFO - initialize model from C:\Users\24115\.cache\modelscope\hub\damo\speech_dfsmn_kws_char_farfield_16k_nihaomiya
Traceback (most recent call last):
File "C:\Users\24115\.conda\envs\torch1\lib\site-packages\modelscope\utils\registry.py", line 210, in build_from_cfg
return obj_cls._instantiate(**args)
File "C:\Users\24115\.conda\envs\torch1\lib\site-packages\modelscope\models\base\base_model.py", line 67, in _instantiate
return cls(**kwargs)
File "C:\Users\24115\.conda\envs\torch1\lib\site-packages\modelscope\models\audio\kws\farfield\model.py", line 49, in __init__
import py_sound_connect
ModuleNotFoundError: No module named 'py_sound_connect'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\24115\.conda\envs\torch1\lib\site-packages\modelscope\utils\registry.py", line 212, in build_from_cfg
return obj_cls(**args)
File "C:\Users\24115\.conda\envs\torch1\lib\site-packages\modelscope\pipelines\audio\kws_farfield_pipeline.py", line 38, in __init__
super().__init__(model=model, **kwargs)
File "C:\Users\24115\.conda\envs\torch1\lib\site-packages\modelscope\pipelines\base.py", line 100, in __init__
self.model = self.initiate_single_model(model, **kwargs)
File "C:\Users\24115\.conda\envs\torch1\lib\site-packages\modelscope\pipelines\base.py", line 53, in initiate_single_model
return Model.from_pretrained(
File "C:\Users\24115\.conda\envs\torch1\lib\site-packages\modelscope\models\base\base_model.py", line 183, in from_pretrained
model = build_model(model_cfg, task_name=task_name)
File "C:\Users\24115\.conda\envs\torch1\lib\site-packages\modelscope\models\builder.py", line 35, in build_model
model = build_from_cfg(
File "C:\Users\24115\.conda\envs\torch1\lib\site-packages\modelscope\utils\registry.py", line 215, in build_from_cfg
raise type(e)(f'{obj_cls.__name__}: {e}')
ModuleNotFoundError: FSMNSeleNetV2Decorator: No module named 'py_sound_connect'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:/Users/24115/Desktop/学习/ctc/1.py", line 5, in <module>
kws = pipeline(
File "C:\Users\24115\.conda\envs\torch1\lib\site-packages\modelscope\pipelines\builder.py", line 170, in pipeline
return build_pipeline(cfg, task_name=task)
File "C:\Users\24115\.conda\envs\torch1\lib\site-packages\modelscope\pipelines\builder.py", line 65, in build_pipeline
return build_from_cfg(
File "C:\Users\24115\.conda\envs\torch1\lib\site-packages\modelscope\utils\registry.py", line 215, in build_from_cfg
raise type(e)(f'{obj_cls.__name__}: {e}')
ModuleNotFoundError: KWSFarfieldPipeline: FSMNSeleNetV2Decorator: No module named 'py_sound_connect'
```
It seems the py_sound_connect library needs to be installed, but I cannot install it with pip install py_sound_connect | closed | 2024-06-17T08:27:41Z | 2024-07-25T01:53:13Z | https://github.com/modelscope/modelscope/issues/885 | [
"Stale"
] | KAWAKO-in-GAYHUB | 3 |
matterport/Mask_RCNN | tensorflow | 2,580 | Comparing Tensorboard Log Losses between RGB, Grayscale and RGB-D | I have completed training Mask_RCNN on a very small dataset of 22 training images and 6 validation images, which I took with my RealSense depth camera. I trained in the 3 different modes listed below and compared the log-loss graphs on TensorBoard to see if RGB-Depth performs better. However, I have trouble interpreting the graphs shown below. Can somebody help me? Thank you.
1) RGB mode 3 channel
2) Depth mode 1 channel (16bit)
3) RGB-Depth mode 4 channel

| open | 2021-05-31T10:01:10Z | 2021-05-31T10:01:10Z | https://github.com/matterport/Mask_RCNN/issues/2580 | [] | maohong30 | 0 |
schemathesis/schemathesis | graphql | 1,869 | [BUG] Max examples is not respected in stateful tests | ### Checklist
- [x] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation
- [x] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues)
- [x] I am using the latest version of Schemathesis
### Describe the bug
Hi, I'm trying to run stateful tests where each test runs with a maximum of 1 example. But when I run the tests, they end up running forever. I'm following the documentation itself, but with no luck.
Here is the code
```python
from hypothesis import Phase, settings
import requests
import schemathesis
schema = schemathesis.from_path(
"testing_oas.yaml",
base_url="/api/v1/",
)
class APIWorkflow(schema.as_state_machine()):
headers: dict
def setup(self):
# Make a login request
response = requests.post(
"/api/v1/login",
json={
"username": "xxxx",
"password": "xxxx"
},
verify=False
)
response.raise_for_status()
token = response.json()["token"]
self.headers = {"Authorization": f"Bearer {token}"}
def get_call_kwargs(self, case):
# Use stored headers
return {
"headers": self.headers,
"verify": False
}
statefulTests = APIWorkflow()
statefulTests.run(
settings=settings(
max_examples=1
)
)
```
Clearly describe the issue you're facing.
### To Reproduce
🚨 **Mandatory** 🚨: Steps to reproduce the behavior:
1. Run the above code
2. Process does not terminate
Please include a minimal API schema causing this issue:
```yaml
paths:
/tenants/contact:
post:
tags:
- Contact
operationId: create_contact_api
description: "Create a contact"
summary: Create a contact
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/ContactModel"
responses:
"200":
description: Successful creation of a resource
content:
"text/plain; charset=utf-8":
schema:
type: object
properties:
contact_id:
type: string
links:
GetContactById:
operationId: get_contact_by_id_api
parameters:
contact_id: '$response.body#/contact_id'
400:
$ref: "#/components/responses/invalid_params"
401:
$ref: "#/components/responses/unauthorized"
500:
$ref: "#/components/responses/internal_server_error"
/tenants/contact/{contact_id}:
get:
tags:
- Contact
summary: Get contact by Id
operationId: get_contact_by_id_api
description: Get a contact by contact Id
parameters:
- name: contact_id
required: true
in: path
description: contact_id to get the contact
schema:
type: string
format: uuid
responses:
"200":
description: Get contacts
content:
application/json:
schema:
type: object
properties:
contact_id:
type: string
example: "c453e2a1-773b-4d51-a8ba-c92f2b2e9fbd"
first_name:
type: string
example: "ubuntu"
last_name:
type: string
example: "testing"
email_id:
type: string
example: "ubuntu@gmail.com"
is_global:
type: boolean
mobile_no:
type: string
example: "23213213231"
organization:
type: string
example: "nouveau labs"
location:
type: string
example: "Bangalore"
designation:
type: string
example: "Software Engineer"
department:
type: string
example: "engineering"
tags:
type: array
items:
type: string
dvc_user_id:
type: string
enum:
- "email_id"
- "mobile_no"
- "both"
404:
$ref: "#/components/responses/not_found"
401:
$ref: "#/components/responses/unauthorized"
500:
$ref: "#/components/responses/internal_server_error"
```
### Expected behavior
Each API test should run once
### Environment
```
- OS: Linux
- Python version: 3.10
- Schemathesis version: 3.20.0
- Spec version: Open API 3.0
```
### Additional context
Include any other relevant details, like logs or screenshots.
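As a possibly related aside (assumptions, not a confirmed workaround): besides `max_examples`, Hypothesis has stateful-specific settings that bound how long a state-machine run can take, for example:

```python
# Sketch of bounding a Hypothesis state-machine run. Whether these settings
# fully bound Schemathesis' generated state machine is an assumption.
from hypothesis import Phase, settings

run_settings = settings(
    max_examples=1,           # number of generated call *sequences*
    stateful_step_count=10,   # cap on steps within each sequence
    phases=[Phase.generate],  # skip shrinking, which replays many cases
)
```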
| closed | 2023-11-02T11:30:35Z | 2024-06-13T13:03:07Z | https://github.com/schemathesis/schemathesis/issues/1869 | [
"Type: Bug",
"Status: Needs more info"
] | lchauhan21 | 3 |
yt-dlp/yt-dlp | python | 12,458 | malicious spam by a blocked user | this was malicious spam and stephablouin6 is now blocked
| closed | 2025-02-23T15:25:30Z | 2025-02-23T18:17:08Z | https://github.com/yt-dlp/yt-dlp/issues/12458 | [
"spam"
] | stephablouin6 | 0 |
fastapi/sqlmodel | fastapi | 483 | Is it normal to be unable to use autocomplete for SQLModel class fields? | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import Optional
from sqlmodel import Field, SQLModel

class Hero1(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
name: str
secret_name: str
age: Optional[int] = None
class Hero2():
id: Optional[int] = Field(default=None, primary_key=True)
name: str
secret_name: str
age: Optional[int] = None
```
### Description
I'm using PyCharm.
When I write code like,
`select(Hero1).where(Hero1.age > 35)`
I expect the IDE to show the field 'age' when I type 'Hero1.', but the IDE autocomplete displays no fields of Hero1.
With Hero2 (which does not inherit from SQLModel), it works fine.
Is this normal, or is it only a problem with my IDE?
### Operating System
Windows
### Operating System Details
windows 10
Pycharm 2022
### SQLModel Version
0.0.8
### Python Version
3.10.6
### Additional Context


| closed | 2022-11-03T09:43:55Z | 2024-08-09T07:26:23Z | https://github.com/fastapi/sqlmodel/issues/483 | [
"question"
] | ggree1 | 14 |
deezer/spleeter | deep-learning | 301 | [Discussion] Latest update news? | Anyone care to explain what improvements TF 1.15 brings to Spleeter? | closed | 2020-03-24T17:28:08Z | 2020-03-28T01:07:37Z | https://github.com/deezer/spleeter/issues/301 | [
"question"
] | aidv | 1 |
aiogram/aiogram | asyncio | 1,246 | Incorrect Type Annotations - InlineKeyboardButton | Hello, I'm one of the developers who use pyright as a type checker, and with aiogram's current type annotations I've got a lot of type errors. Of course everything works at runtime, but it's really annoying. You could say "don't use pyright then", but I hope you'll find a better solution. As an example I chose the InlineKeyboardButton class; here you can see the current and expected annotations:
## Current Annotations
[source](https://github.com/aiogram/aiogram/blob/88baf0b5828fe35805a58bc48b63615a906f6ea6/aiogram/types/inline_keyboard.py#L100C1-L109)
```python
class InlineKeyboardButton(base.TelegramObject):
"""..."""
...
def __init__(self, text: base.String,
url: base.String = None,
login_url: LoginUrl = None,
callback_data: base.String = None,
switch_inline_query: base.String = None,
switch_inline_query_current_chat: base.String = None,
callback_game: CallbackGame = None,
pay: base.Boolean = None,
web_app: WebAppInfo = None,
**kwargs):
...
```
## Expected Annotations
```python
from typing import Optional
...
class InlineKeyboardButton(base.TelegramObject):
"""..."""
...
def __init__(self, text: base.String,
url: Optional[base.String] = None,
login_url: Optional[LoginUrl] = None,
callback_data: Optional[base.String] = None,
switch_inline_query: Optional[base.String] = None,
switch_inline_query_current_chat: Optional[base.String] = None,
callback_game: Optional[CallbackGame] = None,
pay: Optional[base.Boolean] = None,
web_app: Optional[WebAppInfo] = None,
**kwargs):
...
```
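The checker complaint can be reproduced outside aiogram with a minimal sketch: `x: str = None` is an implicit `Optional`, which pyright rejects by default, while runtime behavior is identical either way:

```python
# Minimal repro of the complaint, independent of aiogram.
from typing import Optional

def before(url: str = None):  # flagged by pyright: None is not a `str`
    return url

def after(url: Optional[str] = None):  # accepted by type checkers
    return url
```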
## Context
* aiogram version: 2.25.1 | closed | 2023-08-05T19:11:47Z | 2023-08-06T14:21:45Z | https://github.com/aiogram/aiogram/issues/1246 | [] | ruslan-korneev | 3 |
Josh-XT/AGiXT | automation | 836 | Chain/Task Chain/Smart Task Chain Not Creating Chain | ### Description
The chain is not being created (or something like that), so it says "not found".
[logs.txt](https://github.com/Josh-XT/AGiXT/files/11998248/logs.txt)

### Steps to Reproduce the Bug
Run a Task Chain or Smart Task Chain and the error will show.
### Expected Behavior
Supposed to create Chain so it can run X chain
### Operating System
- [X] Linux
- [ ] Microsoft Windows
- [ ] Apple MacOS
- [ ] Android
- [ ] iOS
- [ ] Other
### Python Version
- [ ] Python <= 3.9
- [X] Python 3.10
- [ ] Python 3.11
### Environment Type - Connection
- [X] Local - You run AGiXT in your home network
- [ ] Remote - You access AGiXT through the internet
### Runtime environment
- [X] Using docker compose
- [ ] Using local
- [ ] Custom setup (please describe above!)
### Acknowledgements
- [X] I have searched the existing issues to make sure this bug has not been reported yet.
- [X] I am using the latest version of AGiXT.
- [X] I have provided enough information for the maintainers to reproduce and diagnose the issue. | closed | 2023-07-10T05:57:15Z | 2023-07-11T11:18:04Z | https://github.com/Josh-XT/AGiXT/issues/836 | [
"type | report | bug",
"needs triage"
] | birdup000 | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,397 | --phase train: AttributeError: 'CycleGANModel' object has no attribute 'real_A' | Hello! I am training cycle_gan with _dataset_mode_ equal to _unaligned_. My environment is:
Python -> 3.8.10
torch -> 1.8.1+cu102
OS -> Linux Mint 20.2 Cinnamon
I passed the parameter _--phase train_; however, I am facing the error **AttributeError: 'CycleGANModel' object has no attribute 'real_A'**.
The complete stack trace was:
Traceback (most recent call last):
  File "train.py", line 43, in <module>
    model.optimize_parameters()
  File "/home/aim-beast/Desktop/hamza/venv/pix2pix/models/cycle_gan_model.py", line 184, in optimize_parameters
    self.forward()  # compute fake images and reconstruction images.
  File "/home/aim-beast/Desktop/hamza/venv/pix2pix/models/cycle_gan_model.py", line 115, in forward
    self.fake_B = self.netG_A(self.real_A)  # G_A(A)
AttributeError: 'CycleGANModel' object has no attribute 'real_A'
I tracked the file, and I see that in `train.py`, at line 43, the function `model.optimize_parameters()` is called before `model.set_input()` is called, while `model.set_input()` initializes `self.real_A` and `model.optimize_parameters()` uses `self.real_A`.
Am I making a mistake, or is it a bug?
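The ordering dependency described above can be sketched independently of the framework (a hypothetical stripped-down model, not the real `CycleGANModel`):

```python
# Hypothetical sketch: forward() reads self.real_A, which only exists
# after set_input() has run, mirroring the AttributeError above.
class ModelSketch:
    def set_input(self, data):
        self.real_A = data["A"]

    def forward(self):
        return self.real_A  # AttributeError if set_input() never ran

    def optimize_parameters(self):
        return self.forward()

m = ModelSketch()
try:
    m.optimize_parameters()   # wrong order: set_input() was never called
    failed = False
except AttributeError:
    failed = True

m.set_input({"A": "batch"})
ok = m.optimize_parameters()  # correct order: works
```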
| closed | 2022-03-15T13:00:03Z | 2023-06-17T08:20:48Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1397 | [] | MasterHM-ml | 2 |
geopandas/geopandas | pandas | 3,089 | ENH: add a convenience function to remove Z coordinates from *Z geometry types | Sometimes, you get PolygonZ or similar geometries with all Z being actually 0. It is often useful to just get rid of those coordinates and downcast geoms to Polygon. I remember we talked about that earlier but couldn't find anything.
Right now, one way is to use `shapely.transform` like this
```py
shapely.transform(gdf.geometry, lambda x: x)
```
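Expanded into a runnable form (assuming shapely >= 2.0, where `transform()` passes 2D coordinate arrays by default and therefore rebuilds geometries without Z):

```python
# Sketch of the transform() trick above on a single geometry.
# include_z defaults to False, so the identity transformation
# reconstructs the geometry with the Z coordinate dropped.
import shapely
from shapely.geometry import Point

pt = Point(1.0, 2.0, 0.0)
flat = shapely.transform(pt, lambda coords: coords)
```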
but it may be useful to have a method like `drop_z()` because transform is not very straightforward. | closed | 2023-11-28T18:45:03Z | 2023-11-28T19:20:52Z | https://github.com/geopandas/geopandas/issues/3089 | [] | martinfleis | 2 |
aidlearning/AidLearning-FrameWork | jupyter | 45 | apt update error: E: Sub-process /usr/bin/dpkg returned an error code (1) | Phone: HUAWEI P8, Android OS 6
When I run the upgrade/install commands, errors occur:
```
root@localhost:/home# apt install mime
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package mime
Reading package lists...
.....
errors were encountered while processing: /var/cache/apt/archives/unzip_6.0-21+deb9u2_arm64.deb
```
| closed | 2019-09-09T03:14:20Z | 2020-08-03T09:08:50Z | https://github.com/aidlearning/AidLearning-FrameWork/issues/45 | [] | dobefore | 1 |
Guovin/iptv-api | api | 610 | Channel name issue | Some of the channel names in the added subscription sources do not match the names in the template.
Is there a way to configure channel aliases? For example, how can CCTV5, CCTV-5, CCTV5-体育, and CCTV-5 体育频道 be merged into CCTV-5? | closed | 2024-12-02T08:15:29Z | 2024-12-02T11:23:03Z | https://github.com/Guovin/iptv-api/issues/610 | [
"question"
] | Vendong | 1 |
amisadmin/fastapi-amis-admin | sqlalchemy | 195 | Thanks for the update | Very happy to see the author's update, and thank you very much!
| open | 2025-03-20T04:59:03Z | 2025-03-20T05:53:26Z | https://github.com/amisadmin/fastapi-amis-admin/issues/195 | [] | hezuogongying | 0 |
piskvorky/gensim | data-science | 3,489 | Windows wheel broken for Python 3.10 | https://github.com/RaRe-Technologies/gensim/actions/runs/5949723110/job/16137526110 | open | 2023-08-23T11:14:01Z | 2023-08-23T11:14:01Z | https://github.com/piskvorky/gensim/issues/3489 | [] | mpenkov | 0 |
encode/databases | asyncio | 534 | RFE: please provde update for `sqlalchemy` 2.x | https://github.com/encode/databases/blob/b6eba5f7a19eaf8966e3821f44fe00f4770cb822/setup.py#L50 | closed | 2023-02-15T18:50:43Z | 2023-02-28T21:17:28Z | https://github.com/encode/databases/issues/534 | [] | kloczek | 4 |
pytest-dev/pytest-xdist | pytest | 501 | fixtures with class scope not working as expected with --dist=loadscope option | I need to run about 10 tests that depend on a setup done with a fixture; it all works fine without parallelism using a class fixture and tests within a class.
Once I add the xdist parallelism I experience the following:
When I do not use loadscope, all the tests are sent to their own worker. I do not want this because I would like to only build the fixture once and use it for all related class tests.
When I use loadscope, all the tests are executed against gw0 and I am not getting any parallelism.
This is exactly the same as experienced by someone else in this thread:
https://stackoverflow.com/questions/51756594/pytest-xdist-indirect-fixtures-with-class-scope
Is there another syntax or way of achieving a class-level group of tests (that depends on one session-level fixture) to make sure only one worker will run through them?
The specific use case is having to process each file in a session-fixture list and, based on the output of this processing, run 10x tests. I can't afford for xdist to give some of these 10x tests to one worker and some to another, as they would both end up processing the same file again. I also cannot afford to move this to conftest and make it session-scoped, as each worker would pay the penalty of processing every file.
The wiki says that tests are grouped by class scope when using this loadscope option, so failing to understand why this is not working. Is there an issue/collision that the class refers to another session level fixture (not stand alone)? | closed | 2020-02-09T12:51:46Z | 2020-02-09T23:58:54Z | https://github.com/pytest-dev/pytest-xdist/issues/501 | [] | fpiccione | 4 |
litestar-org/litestar | asyncio | 3,937 | Bug: CI builds fail due to deprecated Github action | ### Description
The currently used Github upload action has been deprecated and should be upgraded.
### URL to code causing the issue
https://github.com/litestar-org/litestar/actions/runs/12687347653/job/35387778036?pr=3935#step:1:36
### MCVE
_No response_
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
latest
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [X] Other (Please specify in the description above) | closed | 2025-01-09T17:51:53Z | 2025-01-10T02:47:26Z | https://github.com/litestar-org/litestar/issues/3937 | [
"Bug :bug:"
] | cofin | 1 |
twopirllc/pandas-ta | pandas | 520 | Study's failing when ta kind doesn't have enough data | **Which version are you running? The latest version is on Github. Pip is for major releases.**
latest dev branch
**Do you have _TA Lib_ also installed in your environment?**
yes
**Describe the bug**
When running a Study
**To Reproduce**
Use a dataframe that only has 10-30 rows of data and run the below.
```python
MyStudy = ta.Study(
name="DCSMA10",
description="SMA 50,200, BBANDS, RSI, MACD and Volume SMA 20",
cores=0,
ta=[
{"kind": "sma", "length": 20},
{"kind": "sma", "length": 50},
{"kind": "sma", "length": 100},
{"kind": "sma", "length": 200},
{"kind": "ema", "length": 12},
{"kind": "ema", "length": 26},
{"kind": "ema", "length": 50},
{"kind": "ema", "length": 200},
{"kind": "macd", "fast": 12, "slow": 26, "signal": 9},
{"kind": "rsi", "length": 14},
{"kind": "percent_return", "length": 5},
{"kind": "percent_return", "length": 20},
{"kind": "percent_return", "length": 50},
],
)
# Run it
df = df.ta.study(MyStudy, returns=True)
```
**Expected behavior**
I would expect this to simply skip that ta and not add a column to the DF, but it errors out with the below.
```sh
File ~/anaconda3/envs/obb_api/lib/python3.9/site-packages/pandas_ta/core.py:731, in AnalysisIndicators.study(self, *args, **kwargs)
729 for ind in ta:
730 params = ind["params"] if "params" in ind and isinstance(ind["params"], tuple) else tuple()
--> 731 getattr(self, ind["kind"])(*params, **{**ind, **kwargs})
732 else:
733 if Imports["tqdm"] and verbose:
File ~/anaconda3/envs/obb_api/lib/python3.9/site-packages/pandas_ta/core.py:1003, in AnalysisIndicators.macd(self, fast, slow, signal, offset, **kwargs)
1001 def macd(self, fast=None, slow=None, signal=None, offset: Int = None, **kwargs: DictLike):
1002 close = self._get_column(kwargs.pop("close", "close"))
-> 1003 result = macd(close=close, fast=fast, slow=slow, signal=signal, offset=offset, **kwargs)
1004 return self._post_process(result, **kwargs)
File ~/anaconda3/envs/obb_api/lib/python3.9/site-packages/pandas_ta/momentum/macd.py:64, in macd(close, fast, slow, signal, talib, offset, **kwargs)
62 macd, signalma, histogram = MACD(close, fast, slow, signal)
63 else:
---> 64 fastma = ema(close, length=fast, talib=mode_tal)
65 slowma = ema(close, length=slow, talib=mode_tal)
67 macd = fastma - slowma
File ~/anaconda3/envs/obb_api/lib/python3.9/site-packages/pandas_ta/overlap/ema.py:82, in ema(close, length, talib, presma, offset, **kwargs)
80 sma_nth = close[0:length].mean()
81 close[:length - 1] = nan
---> 82 close.iloc[length - 1] = sma_nth
83 ema = close.ewm(span=length, adjust=adjust).mean()
85 # Offset
File ~/anaconda3/envs/obb_api/lib/python3.9/site-packages/pandas/core/indexing.py:720, in _LocationIndexer.__setitem__(self, key, value)
718 key = com.apply_if_callable(key, self.obj)
719 indexer = self._get_setitem_indexer(key)
--> 720 self._has_valid_setitem_indexer(key)
722 iloc = self if self.name == "iloc" else self.obj.iloc
723 iloc._setitem_with_indexer(indexer, value, self.name)
File ~/anaconda3/envs/obb_api/lib/python3.9/site-packages/pandas/core/indexing.py:1461, in _iLocIndexer._has_valid_setitem_indexer(self, indexer)
1459 elif is_integer(i):
1460 if i >= len(ax):
-> 1461 raise IndexError("iloc cannot enlarge its target object")
1462 elif isinstance(i, dict):
1463 raise IndexError("iloc cannot enlarge its target object")
IndexError: iloc cannot enlarge its target object
```
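The final frame of the traceback reproduces in isolation, independent of pandas-ta; with fewer rows than `length`, the write at `iloc[length - 1]` is out of range and pandas refuses to enlarge the Series (a sketch):

```python
# Root-cause sketch: fewer rows than `length` makes iloc[length - 1]
# an out-of-range position, and pandas setitem will not enlarge.
import pandas as pd

close = pd.Series([1.0, 2.0, 3.0])   # only 3 rows of data
length = 12                          # e.g. an EMA length from the study
raised, message = False, ""
try:
    close.iloc[length - 1] = close[:length].mean()
except IndexError as err:
    raised, message = True, str(err)
```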
**Additional context**
Add any other context about the problem here.
Thanks for using Pandas TA!
| closed | 2022-04-21T17:47:59Z | 2023-03-16T19:01:50Z | https://github.com/twopirllc/pandas-ta/issues/520 | [
"bug"
] | andrewkenreich | 2 |
litestar-org/litestar | asyncio | 3,727 | Enhancement: Switch to official msgpack media type | ### Summary
As of May 2024, msgpack has an officially registered IANA media type: `application/vnd.msgpack`. Litestar should switch to using that in msgpack responses (and accepting it in request bodies), probably in v3.0, as changing the response media type is a breaking change.
Reference: https://www.iana.org/assignments/media-types/application/vnd.msgpack
### Basic Example
_No response_
### Drawbacks and Impact
It would be great to also accept the old media type in requests but only output the new one.
### Unresolved questions
It's not obvious to me how to accept two different media types as msgpack data. | closed | 2024-09-11T09:12:36Z | 2025-03-20T15:54:54Z | https://github.com/litestar-org/litestar/issues/3727 | [
"Enhancement"
] | agronholm | 1 |
python-gitlab/python-gitlab | api | 3,147 | FYI: gql 3.5.1 breaks testing with Python 3.14 | Reported issue upstream at: https://github.com/graphql-python/gql/issues/534
Example failure seen in: https://github.com/python-gitlab/python-gitlab/pull/3146 | closed | 2025-03-06T00:15:06Z | 2025-03-06T14:40:00Z | https://github.com/python-gitlab/python-gitlab/issues/3147 | [] | JohnVillalovos | 1 |
supabase/supabase-py | fastapi | 872 | Error with RPC Calls and select queries with postgrest v0.16.9 | # Bug report
## Describe the bug
This started arising with postgrest v0.16.9 only; it was working perfectly with version v0.16.8.
## To Reproduce
Steps to reproduce the behavior, please provide code snippets or a repository:
1. Make a database function with parameter
2. run it with supabase.rpc in v2.6
3. Nothing will be returned from the db, probably malformed parameter
4. See error
## Expected behavior
It should work, and a proper response from the db should be returned.
## Screenshots
## System information
- OS: docker
- Version of supabase-py: v2.5+
| closed | 2024-07-25T09:37:43Z | 2024-08-23T08:46:19Z | https://github.com/supabase/supabase-py/issues/872 | [
"bug"
] | 97k | 3 |
davidteather/TikTok-Api | api | 946 | [BUG] - TikTok blocks this request displaying a Captcha |
I'm trying to get the video URLs from an account and I keep running into this error. I've included my cookies with the ms token and have read through others facing the same issue, but none of the solutions seem to work. Any help would be much appreciated.
This is the code I've got atm:
```py
from TikTokApi import TikTokApi
with TikTokApi() as api:
ms_token = 'mUAXXtTLDEriJ7YzOHczfy_ClOU_ypBNadE2lUMPfA61eJeG6546cfQ5xqlp8spFSJLA59CGK3s0xPwG4S2aT2sVhKZOz7FWUV475oEdG4tH4xwX'
api = TikTokApi(custom_verify_fp=ms_token, use_api_endpoints=True)
user = api.user(username="nottooshabbycakes")
for video in user.videos():
print(video.id)
for u_vid in video.author.videos():
print(u_vid.id)
```
Is it the case of an incorrect cookie?
- OS: Mac
- TikTokApi Version 5.2.2
| closed | 2022-09-07T14:51:35Z | 2023-08-08T22:04:25Z | https://github.com/davidteather/TikTok-Api/issues/946 | [
"bug"
] | blithum | 2 |
sigmavirus24/github3.py | rest-api | 1,054 | RTD stuck on master branch? | https://github3.readthedocs.io seems to redirect to https://github3.readthedocs.io/en/master/, but those docs are outdated (see e.g. [release notes](https://github3.readthedocs.io/en/master/release-notes/index.html) which are missing v2.0.0 and v3.0.0).
Trying to look at https://github3.readthedocs.io/en/main/ instead leads to a 404. | closed | 2021-11-01T07:42:20Z | 2021-11-01T11:43:00Z | https://github.com/sigmavirus24/github3.py/issues/1054 | [] | The-Compiler | 0 |
Gozargah/Marzban | api | 1,725 | Lack of access to uuid when migrating to Merzban, and you must give the new Merzban link to the user |
There is no way to access or set user UUIDs when migrating to Marzban, so you must give every user a new Marzban link.

We should be able to change users' UUIDs. A UUID option should be added to the panel so that we can edit them; then, when migrating from other panels to Marzban, we would not need to give users new links.
| closed | 2025-03-24T07:37:08Z | 2025-03-24T08:29:17Z | https://github.com/Gozargah/Marzban/issues/1725 | [
"Invalid"
] | Networkhealth | 1 |
jupyterhub/zero-to-jupyterhub-k8s | jupyter | 2,631 | AKS Documentation outdated? | <!-- Thank you for contributing. These HTML comments will not render in the issue, but you can delete them once you've read them if you prefer! -->
### Bug description
<!-- Use this section to clearly and concisely describe the bug. -->
Recently had to upgrade our running JupyterHub deployment on AKS. Found that the public IP for the existing deployment was no longer accessible from the internet. Have tried to redeploy a completely fresh install, fresh resource group etc. following the documentation. Still unable to access the new public ip.
#### Expected behaviour
<!-- Tell us what you thought would happen. -->
Access base jupyterhub image at AKS public IP.
#### Actual behaviour
<!-- Tell us what actually happens. -->
IP address is unavailable, browser times out when trying to access IP.
### How to reproduce
<!-- Use this section to describe the steps that a user would take to experience this bug. -->
Follow documentation for deploying jupyterhub on AKS, navigate to public IP.
### Your personal set up
<!--
Tell us a little about the system you're using.
Please include information about how you installed,
e.g. are you using a distribution such as zero-to-jupyterhub or the-littlest-jupyterhub.
-->
- OS:
<!-- [e.g. ubuntu 20.04, macOS 11.0] -->
- Version(s):
<!-- e.g. jupyterhub --version, python --version --->
- <details><summary>Full environment</summary>
<!-- For reproduction, it's useful to have the full environment. For example, the output of `pip freeze` or `conda list` --->
```
# paste output of `pip freeze` or `conda list` here
```
</details>
- <details><summary>Configuration</summary>
<!--
For JupyterHub, especially include information such as what Spawner and Authenticator are being used.
Be careful not to share any sensitive information.
You can paste jupyterhub_config.py below.
To exclude lots of comments and empty lines from auto-generated jupyterhub_config.py, you can do:
grep -v '\(^#\|^[[:space:]]*$\)' jupyterhub_config.py
-->
```python
# jupyterhub_config.py
```
</details>
- <details><summary>Logs</summary>
<!--
Errors are often logged by jupytehub. How you get logs depends on your deployment.
With kubernetes it might be:
kubectl get pod # hub pod name starts with hub...
kubectl logs hub-...
# or for a single-user server
kubectl logs jupyter-username
Or the-littlest-jupyterhub:
journalctl -u jupyterhub
# or for a single-user server
journalctl -u jupyter-username
-->
```
# paste relevant logs here, if any
```
</details>
| closed | 2022-03-21T23:16:06Z | 2022-03-29T06:25:51Z | https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/2631 | [
"bug"
] | AlexChung1995 | 20 |
RobertCraigie/prisma-client-py | pydantic | 167 | Time parameters should accept `datetime.timedelta` instead of `int` | ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Currently, timeout arguments are ambiguous, as it is not clear what the value passed corresponds to. Is it seconds? Milliseconds?
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
We should refactor time parameters to accept a `datetime.timedelta` instance. This would also give control back to the user, allowing them to ergonomically use the units of their choice.
We should also still accept integers but raise a deprecation warning and then remove them in the next release.
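A minimal sketch of what such a shim could look like (my own illustration, not prisma's actual code; it assumes the legacy integer values were meant as seconds):

```python
import warnings
from datetime import timedelta


def normalize_timeout(timeout):
    """Accept a timedelta, or (deprecated) an int interpreted as seconds."""
    if isinstance(timeout, timedelta):
        return timeout
    if isinstance(timeout, int):
        warnings.warn(
            "Passing an int timeout is deprecated; pass datetime.timedelta instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return timedelta(seconds=timeout)
    raise TypeError(f"Expected timedelta or int, got {type(timeout).__name__}")
```

Callers could then write e.g. `timeout=timedelta(milliseconds=500)` and the unit is unambiguous.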
| closed | 2021-12-06T16:05:32Z | 2024-08-04T16:23:25Z | https://github.com/RobertCraigie/prisma-client-py/issues/167 | [
"kind/improvement",
"good first issue",
"level/beginner",
"priority/low"
] | RobertCraigie | 0 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 785 | CSR (client side rendering) web pages don't work ! | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| closed | 2024-11-03T22:25:28Z | 2024-11-04T08:20:37Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/785 | [] | obscuredotsh | 0 |
plotly/dash-table | dash | 656 | MaterialCSS not working with data-table selectable | I tried to import material css, however this causes the "selectable" rows to malfunction in the data-table. It is not showing up and cannot be ticked.
```
import dash
from dash.dependencies import Input, Output
import dash_table
import dash_core_components as dcc
import dash_html_components as html
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/gapminder2007.csv')
external_stylesheets = ['https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0/css/materialize.min.css', 'https://fonts.googleapis.com/icon?family=Material+Icons']
external_scripts = ['https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0/js/materialize.min.js']
app = dash.Dash(__name__,
external_stylesheets=external_stylesheets,
external_scripts=external_scripts
)
app.layout = html.Div([
dash_table.DataTable(
id='datatable-interactivity',
columns=[
{"name": i, "id": i, "deletable": True, "selectable": True} for i in df.columns
],
data=df.to_dict('records'),
editable=True,
filter_action="native",
sort_action="native",
sort_mode="multi",
column_selectable="single",
row_selectable="multi",
row_deletable=True,
selected_columns=[],
selected_rows=[],
page_action="native",
page_current= 0,
page_size= 10,
),
html.Div(id='datatable-interactivity-container')
])
@app.callback(
Output('datatable-interactivity', 'style_data_conditional'),
[Input('datatable-interactivity', 'selected_columns')]
)
def update_styles(selected_columns):
return [{
'if': { 'column_id': i },
'background_color': '#D2F3FF'
} for i in selected_columns]
@app.callback(
Output('datatable-interactivity-container', "children"),
[Input('datatable-interactivity', "derived_virtual_data"),
Input('datatable-interactivity', "derived_virtual_selected_rows")])
def update_graphs(rows, derived_virtual_selected_rows):
# When the table is first rendered, `derived_virtual_data` and
# `derived_virtual_selected_rows` will be `None`. This is due to an
# idiosyncracy in Dash (unsupplied properties are always None and Dash
# calls the dependent callbacks when the component is first rendered).
# So, if `rows` is `None`, then the component was just rendered
# and its value will be the same as the component's dataframe.
# Instead of setting `None` in here, you could also set
# `derived_virtual_data=df.to_rows('dict')` when you initialize
# the component.
if derived_virtual_selected_rows is None:
derived_virtual_selected_rows = []
dff = df if rows is None else pd.DataFrame(rows)
colors = ['#7FDBFF' if i in derived_virtual_selected_rows else '#0074D9'
for i in range(len(dff))]
return [
dcc.Graph(
id=column,
figure={
"data": [
{
"x": dff["country"],
"y": dff[column],
"type": "bar",
"marker": {"color": colors},
}
],
"layout": {
"xaxis": {"automargin": True},
"yaxis": {
"automargin": True,
"title": {"text": column}
},
"height": 250,
"margin": {"t": 10, "l": 10, "r": 10},
},
},
)
# check if column exists - user may have deleted it
# If `column.deletable=False`, then you don't
# need to do this check.
for column in ["pop", "lifeExp", "gdpPercap"] if column in dff
]
if __name__ == '__main__':
app.run_server(debug=True)
``` | closed | 2019-12-03T17:36:56Z | 2019-12-06T18:05:28Z | https://github.com/plotly/dash-table/issues/656 | [] | bangxiangyong | 2 |
ivy-llc/ivy | tensorflow | 28,044 | Wrong key-word argument `name` in `ivy.remainder()` function call | In the following line, the name argument is passed,
https://github.com/unifyai/ivy/blob/bec4752711c314f01298abc3845f02c24a99acab/ivy/functional/frontends/tensorflow/variable.py#L191
From the actual function definition, there is no such argument
https://github.com/unifyai/ivy/blob/8ff497a8c592b75f010160b313dc431218c2b475/ivy/functional/ivy/elementwise.py#L5415-L5422 | closed | 2024-01-25T14:03:42Z | 2024-01-25T14:51:02Z | https://github.com/ivy-llc/ivy/issues/28044 | [] | Sai-Suraj-27 | 0 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,711 | [Bug]: Fail to install requirements.txt | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Clone repository, create venv, activate venv.
pip install -r requirements.txt
install fails at building wheel for tokenizers.
### Steps to reproduce the problem
Clone repository, create venv, activate venv.
pip install -r requirements.txt
install fails at building wheel for tokenizers.
### What should have happened?
Should have installed correctly
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
unable, cannot fully install
### Console logs
```Shell
https://gist.github.com/vvvilife/f5499dfbc8810caec328d9f962305bc9
```
### Additional information
_No response_ | open | 2024-05-04T20:47:48Z | 2024-05-31T16:58:31Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15711 | [
"bug-report"
] | vvvilife | 2 |
pytest-dev/pytest-xdist | pytest | 695 | Worker startup performances | I have a computer with a AMD Ryzen 7 1800X with 8 cores (16 logical CPUs) and 16Go of ram. My project is a flask website with ~400 tests, most of them testing the generated pages with webtest, or the models with an in-memory temporary database. There is almost no i/o.
Here are the results for launching one test or all of my tests, with `-n0`, `-n1` or `-nauto`. The single test takes ~0.7s and is the slowest one. All the tests pass. The table below shows the duration reported by pytest, then the duration reported by the `time` command (i.e. `time pytest -nauto`).
command|`pytest -n0`|`pytest -n1`|`pytest -nauto/-n16`
-------------|------:|----------:|-------------------:|
1 test|pytest: 0.94s<br>time: 4.4s|pytest: 4.51s<br>time: 8.04s|pytest: 13.38s<br>time: 16.85s|
400 tests|pytest: 60.70s<br>time: 62.28s|pytest: 63.92s<br>time: 67.45s|pytest: 21.72s<br>time: 25,36s|
What I read in those data:
- There is a difference of ~4s between the duration announced by pytest, and the duration announced by the system. I suppose it is not really pytest-xdist, but what is the difference due to?
- Passing from `-n0` to `-n1` costs between ~3.5s and ~5s. I understand that spawning processes can be costly, but 3.5s seems a lot, especially when only a few tests are ran.
- With 1 test, `-n16` is ~8s slower than `-n1`. As 15 workers won't run any test, I guess that the 8 additional seconds are lost by spawning the 15 useless workers. By why spawning n workers is so much longer than spawning a single worker? Maybe related to #272
- With 400 tests `-n16` is not 16 times faster than `-n1`, but only ~2.7 times faster. I understand that there will never be a 16x improvement, but 2.7 is a bit disappointing. Maybe related to #657
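A rough way to separate raw interpreter start-up cost from pytest-xdist's own overhead (a sketch of my own; it only measures how long fresh Python interpreters take to launch, and the numbers will vary per machine):

```python
import subprocess
import sys
import time


def interpreter_spawn_cost(n):
    """Wall-clock seconds to launch and reap n fresh Python interpreters."""
    start = time.perf_counter()
    # Each child just runs "pass", so only process/interpreter startup is timed.
    procs = [subprocess.Popen([sys.executable, "-c", "pass"]) for _ in range(n)]
    for p in procs:
        p.wait()
    return time.perf_counter() - start
```

Comparing `interpreter_spawn_cost(1)` with `interpreter_spawn_cost(16)` would show how much of the `-n1`/`-n16` startup time is bare interpreter startup versus pytest-xdist's own work (worker bootstrapping, plugin imports, per-worker collection).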
The most annoying point I found is the ~3.5s spent spawning the first worker. Is there some way this could be accelerated (either by configuration, or by a patch)?
For instance, what would you think of some kind of pytest agent? That is: a pytest agent running in the background that keeps idle workers alive; when a user launches tests, they are sent to those workers, and the workers do not stop afterwards.
What do you think? | open | 2021-08-17T08:50:46Z | 2021-08-21T11:12:41Z | https://github.com/pytest-dev/pytest-xdist/issues/695 | [] | azmeuk | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 899 | About Error | Hi!
I am in trouble because I got an error and could not solve it.
Please tell me how to resolve.
The details of the error are described below.
Error during test.
Traceback (most recent call last):
File "test.py", line 60, in <module>
model.test() # run inference
File "C:\Users\migita\pytorch-CycleGAN-and-pix2pix\models\base_model.py", line 105, in test
self.forward()
File "C:\Users\migita\pytorch-CycleGAN-and-pix2pix\models\pix2pix_model.py", line 88, in forward
self.fake_B = self.netG(self.real_A) # G(A)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\parallel\data_parallel.py", line 150, in forward
return self.module(*inputs[0], **kwargs[0])
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\migita\pytorch-CycleGAN-and-pix2pix\models\networks.py", line 465, in forward
return self.model(input)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\migita\pytorch-CycleGAN-and-pix2pix\models\networks.py", line 533, in forward
return self.model(x)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\container.py", line 92, in forward
input = module(input)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\migita\pytorch-CycleGAN-and-pix2pix\models\networks.py", line 535, in forward
return torch.cat([x, self.model(x)], 1)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\container.py", line 92, in forward
input = module(input)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\migita\pytorch-CycleGAN-and-pix2pix\models\networks.py", line 535, in forward
return torch.cat([x, self.model(x)], 1)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\container.py", line 92, in forward
input = module(input)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\migita\pytorch-CycleGAN-and-pix2pix\models\networks.py", line 535, in forward
return torch.cat([x, self.model(x)], 1)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\container.py", line 92, in forward
input = module(input)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\migita\pytorch-CycleGAN-and-pix2pix\models\networks.py", line 535, in forward
return torch.cat([x, self.model(x)], 1)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\container.py", line 92, in forward
input = module(input)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\migita\pytorch-CycleGAN-and-pix2pix\models\networks.py", line 535, in forward
return torch.cat([x, self.model(x)], 1)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\container.py", line 92, in forward
input = module(input)
File "C:\Users\migita\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\migita\pytorch-CycleGAN-and-pix2pix\models\networks.py", line 535, in forward
return torch.cat([x, self.model(x)], 1)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 14 and 15 in dimension 3 at C:/w/1/s/windows/pytorch/aten/src\THC/generic/THCTensorMath.cu:71
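Not a confirmed diagnosis, only a common cause of this exact `torch.cat` failure in U-Net skip connections: if an input height or width is not divisible by 2**(number of downsamplings), each stride-2 halving floors the size and the matching upsampling doubles it back to a different value (15 becomes 7, then 14), so the skip tensor and the upsampled tensor no longer match. The arithmetic can be checked without PyTorch:

```python
def unet_roundtrip(size, depth):
    """Follow one spatial dimension through `depth` stride-2 downsamplings
    (integer halving) and the matching upsamplings (doubling)."""
    s = size
    for _ in range(depth):
        s //= 2  # each stride-2 convolution floors odd sizes
    for _ in range(depth):
        s *= 2   # each transposed convolution doubles
    return s
```

`unet_roundtrip(15, 1)` returns 14 (the 14-vs-15 mismatch in the traceback above), while any size divisible by 2**depth survives the round trip, which is why resizing or padding the input to such a size usually resolves this error.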
| closed | 2020-01-16T05:38:23Z | 2020-01-16T10:10:49Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/899 | [] | hanatyaki6 | 1 |
ExpDev07/coronavirus-tracker-api | rest-api | 200 | Recovered time series are gone | If you check now the api returns 0 recovered and empty lists in the json for time series of recovered. | open | 2020-03-26T12:06:34Z | 2020-03-27T11:04:36Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/200 | [
"question",
"Frequently Asked",
"source: jhu"
] | paolotamag | 5 |
dsdanielpark/Bard-API | nlp | 167 | Error on get answer | bardapi.core.Bard(token=cookie, language='it').get_answer(search)['content']
get_answer -> list out of range | closed | 2023-08-18T14:39:12Z | 2023-08-18T16:54:12Z | https://github.com/dsdanielpark/Bard-API/issues/167 | [] | sapycola | 1 |
deepset-ai/haystack | nlp | 8,497 | @component decorator does not detect extra parameters from other decorators | **Describe the bug**
When a components `run` method has a decorator, extra parameters are not detected by the `@component` decorator.
E.g:
```python
from haystack import Document, component, Pipeline
from pathlib import Path
from typing import Callable
from functools import wraps
def cache(
directory: Path,
):
def decorator(func: Callable):
@wraps(func)
def wrapper(
self,
documents: list[Document],
*args,
force_recompute: bool = False,
**kwargs,
) -> dict:
return {"documents": documents}
return wrapper
return decorator
@component
class SomeComponent:
@component.output_types(documents=list[Document])
@cache(directory=Path())
def run(
self,
documents: list[Document],
) -> dict:
        return {"documents": documents}
p = Pipeline()
p.add_component("c", SomeComponent())
p.run({"c": {"documents": [], "force_recompute": True}})
```
**Error message**
ValueError: Input <decorator_param> not found in component <component>.
`ValueError: Input force_recompute not found in component c.`
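A plausible mechanism, though I have not verified it against Haystack's source: `functools.wraps` sets `__wrapped__` on the wrapper, and `inspect.signature` follows `__wrapped__` by default, so signature-based introspection sees only the undecorated `run` and never the wrapper's extra parameters. A stdlib-only demonstration:

```python
import inspect
from functools import wraps


def adds_param(func):
    @wraps(func)  # copies metadata and sets wrapper.__wrapped__ = func
    def wrapper(*args, force_recompute=False, **kwargs):
        return func(*args, **kwargs)
    return wrapper


@adds_param
def run(documents):
    return {"documents": documents}


# By default inspect.signature follows __wrapped__ and hides the extra
# parameter; follow_wrapped=False inspects the wrapper itself:
hidden = inspect.signature(run)
visible = inspect.signature(run, follow_wrapped=False)
```

If that is indeed the cause, introspecting with `follow_wrapped=False` (or removing `__wrapped__` in the decorator) would make `force_recompute` visible.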
**Expected behavior**
I want to be able to add extra parameters to "run" using decorators
**Additional context**
Add any other context about the problem here, like document types / preprocessing steps / settings of reader etc.
**To Reproduce**
Steps to reproduce the behavior
**FAQ Check**
- [x] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- Haystack version (commit or version number): haystack-ai = "2.6.1" | closed | 2024-10-28T11:21:17Z | 2024-11-27T10:51:38Z | https://github.com/deepset-ai/haystack/issues/8497 | [
"P2"
] | tsoernes | 1 |
predict-idlab/plotly-resampler | plotly | 300 | [BUG] Using gunicorn to deploy a dash app with plotly-resampler in Linux | **Describe the bug** :crayon:
>I am trying to use plotly-resampler to generate some figures in Dash app with big data.
step1. Use gunicorn to deploy it in Linux OS and configure the number of workers more than 1
step2. Scale the figure by dragging the mouse more than 10 times
step3. Click the top-right home icon button to reset the plotly figure, it cannot reset the figure like before.
**Reproducing the bug** :mag:
app.py
```
import numpy as np
import plotly.graph_objects as go
from dash import dcc, Dash, html
from trace_updater import TraceUpdater
from plotly_resampler import FigureResampler
app = Dash(__name__)
server = app.server
fig = FigureResampler()
x = np.arange(1_000_000)
sin = (3 + np.sin(x / 200) + np.random.randn(len(x)) / 10) * x / 1_000
fig.add_trace(
go.Scattergl(
x=x,
y=sin,
name='demo',
mode='lines+markers'
),
max_n_samples=int(len(x) * 0.2)
)
app.layout = html.Div(
[
dcc.Graph(id='demo-graph', figure=fig),
TraceUpdater(id='demo-trace-updater', gdID='demo-graph')
]
)
fig.register_update_graph_callback(app, 'demo-graph')
if __name__ == '__main__':
app.run(debug=True)
```
deploy script with gunicorn in Linux:
```
gunicorn --workers=2 app:server -b :8090
```
step1. Execute deployment script in Linux OS and visit the Dash app
step2. Scale the figure by dragging the mouse more than 10 times
step3. Click the top-right home icon button to reset the plotly figure, it cannot reset the figure like before.
**Expected behavior** :wrench:
> Scaling the figure and clicking the reset button can reset the figure before
**Environment information**: (please complete the following information)
- OS: Linux
- Python environment:
- Python version: 3.10.x
- plotly-resampler environment: Dash web app (Chrome)
- plotly-resampler version: 0.9.2
Thx for your attention. | open | 2024-03-11T16:30:11Z | 2024-04-09T15:02:54Z | https://github.com/predict-idlab/plotly-resampler/issues/300 | [
"bug",
"works-on-main"
] | SMOKTEA | 7 |
LibrePhotos/librephotos | django | 818 | Document secret.key | closed | 2023-04-13T08:23:47Z | 2023-05-31T09:23:12Z | https://github.com/LibrePhotos/librephotos/issues/818 | [
"documentation",
"enhancement"
] | derneuere | 1 | |
napari/napari | numpy | 7,351 | Access Violation - Error drawing visual - vispy [WINDOWS] | ### 🐛 Bug Report
After using napari for a while in a desktop application we are building to open and process digital whole-slide images, an access violation is raised in the vispy module. It happens only on Windows machines (Win10 and Win11); on a MacBook Pro the exception has never been raised so far.
The error output is:

```python
OSError: exception: access violation reading 0x000000000000001C
WARNING: Error drawing visual <Image at 0x29c2e9391b0>
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
File C:\ProgramData\miniconda3\envs\condavenv2\lib\site-packages\vispy\app\backends\_qt.py:928, in CanvasBackendDesktop.paintGL(self=<vispy.app.backends._qt.CanvasBackendDesktop object>)
926 # (0, 0, self.width(), self.height()))
927 self._vispy_canvas.set_current()
--> 928 self._vispy_canvas.events.draw(region=None)
self._vispy_canvas = <NapariSceneCanvas (PyQt5) at 0x29bd85cb0a0>
self._vispy_canvas.events.draw = <vispy.util.event.EventEmitter object at 0x0000029BD85D8E20>
self = <vispy.app.backends._qt.CanvasBackendDesktop object at 0x0000029BD85CDB40>
self._vispy_canvas.events = <vispy.util.event.EmitterGroup object at 0x0000029BD85D8D90>
930 # Clear the alpha channel with QOpenGLWidget (Qt >= 5.4), otherwise the
931 # window is translucent behind non-opaque objects.
932 # Reference: MRtrix3/mrtrix3#266
933 if QT5_NEW_API or PYSIDE6_API or PYQT6_API:
File C:\ProgramData\miniconda3\envs\condavenv2\lib\site-packages\vispy\util\event.py:453, in EventEmitter.__call__(self=<vispy.util.event.EventEmitter object>, *args=(), **kwargs={'region': None})
450 if self._emitting > 1:
451 raise RuntimeError('EventEmitter loop detected!')
--> 453 self._invoke_callback(cb, event)
event = <DrawEvent blocked=False handled=False native=None region=None source=None sources=[] type=draw>
self = <vispy.util.event.EventEmitter object at 0x0000029BD85D8E20>
cb = <bound method SceneCanvas.on_draw of <NapariSceneCanvas (PyQt5) at 0x29bd85cb0a0>>
454 if event.blocked:
455 break
File C:\ProgramData\miniconda3\envs\condavenv2\lib\site-packages\vispy\util\event.py:471, in EventEmitter._invoke_callback(self=<vispy.util.event.EventEmitter object>, cb=<bound method SceneCanvas.on_draw of <NapariSceneCanvas (PyQt5)>>, event=<DrawEvent blocked=False handled=False native=None region=None source=None sources=[] type=draw>)
469 cb(event)
470 except Exception:
--> 471 _handle_exception(self.ignore_callback_errors,
self = <vispy.util.event.EventEmitter object at 0x0000029BD85D8E20>
cb = <bound method SceneCanvas.on_draw of <NapariSceneCanvas (PyQt5) at 0x29bd85cb0a0>>
event = <DrawEvent blocked=False handled=False native=None region=None source=None sources=[] type=draw>
(cb, event) = (<bound method SceneCanvas.on_draw of <NapariSceneCanvas (PyQt5) at 0x29bd85cb0a0>>, <DrawEvent blocked=False handled=False native=None region=None source=None sources=[] type=draw>)
472 self.print_callback_errors,
473 self, cb_event=(cb, event))
File C:\ProgramData\miniconda3\envs\condavenv2\lib\site-packages\vispy\util\event.py:469, in EventEmitter._invoke_callback(self=<vispy.util.event.EventEmitter object>, cb=<bound method SceneCanvas.on_draw of <NapariSceneCanvas (PyQt5)>>, event=<DrawEvent blocked=False handled=False native=None region=None source=None sources=[] type=draw>)
467 def _invoke_callback(self, cb, event):
468 try:
--> 469 cb(event)
cb = <bound method SceneCanvas.on_draw of <NapariSceneCanvas (PyQt5) at 0x29bd85cb0a0>>
event = <DrawEvent blocked=False handled=False native=None region=None source=None sources=[] type=draw>
470 except Exception:
471 _handle_exception(self.ignore_callback_errors,
472 self.print_callback_errors,
473 self, cb_event=(cb, event))
File C:\ProgramData\miniconda3\envs\condavenv2\lib\site-packages\vispy\scene\canvas.py:219, in SceneCanvas.on_draw(self=<NapariSceneCanvas (PyQt5)>, event=<DrawEvent blocked=False handled=False native=None region=None source=None sources=[] type=draw>)
216 # Now that a draw event is going to be handled, open up the
217 # scheduling of further updates
218 self._update_pending = False
--> 219 self._draw_scene()
self = <NapariSceneCanvas (PyQt5) at 0x29bd85cb0a0>
File C:\ProgramData\miniconda3\envs\condavenv2\lib\site-packages\vispy\scene\canvas.py:278, in SceneCanvas._draw_scene(self=<NapariSceneCanvas (PyQt5)>, bgcolor=array([0., 0., 0., 1.], dtype=float32))
276 bgcolor = self._bgcolor
277 self.context.clear(color=bgcolor, depth=True)
--> 278 self.draw_visual(self.scene)
self = <NapariSceneCanvas (PyQt5) at 0x29bd85cb0a0>
File C:\ProgramData\miniconda3\envs\condavenv2\lib\site-packages\vispy\scene\canvas.py:316, in SceneCanvas.draw_visual(self=<NapariSceneCanvas (PyQt5)>, visual=<SubScene>, event=None)
314 else:
315 if hasattr(node, 'draw'):
--> 316 node.draw()
node = <Image at 0x29c2e9391b0>
317 prof.mark(str(node))
318 else:
File C:\ProgramData\miniconda3\envs\condavenv2\lib\site-packages\vispy\scene\visuals.py:106, in VisualNode.draw(self=<Image>)
104 if self.picking and not self.interactive:
105 return
--> 106 self._visual_superclass.draw(self)
self = <Image at 0x29c2e9391b0>
self._visual_superclass = <class 'vispy.visuals.image.ImageVisual'>
File C:\ProgramData\miniconda3\envs\condavenv2\lib\site-packages\vispy\visuals\visual.py:514, in Visual.draw(self=<Image>)
512 self._configure_gl_state()
513 try:
--> 514 self._program.draw(self._vshare.draw_mode,
self._vshare.draw_mode = 'triangles'
self = <Image at 0x29c2e9391b0>
self._vshare = <vispy.visuals.visual.VisualShare object at 0x0000029C38C63760>
self._program = <vispy.visuals.shaders.program.ModularProgram object at 0x0000029C5F5EF880>
self._vshare.index_buffer = None
515 self._vshare.index_buffer)
516 except Exception:
517 logger.warning("Error drawing visual %r" % self)
File C:\ProgramData\miniconda3\envs\condavenv2\lib\site-packages\vispy\visuals\shaders\program.py:102, in ModularProgram.draw(self=<vispy.visuals.shaders.program.ModularProgram object>, *args=('triangles', None), **kwargs={})
100 self.build_if_needed()
101 self.update_variables()
--> 102 Program.draw(self, *args, **kwargs)
self = <vispy.visuals.shaders.program.ModularProgram object at 0x0000029C5F5EF880>
args = ('triangles', None)
kwargs = {}
File C:\ProgramData\miniconda3\envs\condavenv2\lib\site-packages\vispy\gloo\program.py:543, in Program.draw(self=<vispy.visuals.shaders.program.ModularProgram object>, mode='triangles', indices=None, check_error=True)
539 raise TypeError("Invalid index: %r (must be IndexBuffer)" %
540 indices)
542 # Process GLIR commands
--> 543 canvas.context.flush_commands()
canvas = <NapariSceneCanvas (PyQt5) at 0x29bd85cb0a0>
File C:\ProgramData\miniconda3\envs\condavenv2\lib\site-packages\vispy\gloo\context.py:172, in GLContext.flush_commands(self=<GLContext>, event=None)
170 fbo = 0
171 self.shared.parser.parse([('CURRENT', 0, fbo)])
--> 172 self.glir.flush(self.shared.parser)
self = <GLContext at 0x29bd85d8d60>
File C:\ProgramData\miniconda3\envs\condavenv2\lib\site-packages\vispy\gloo\glir.py:584, in GlirQueue.flush(self=<vispy.gloo.glir.GlirQueue object>, parser=<vispy.gloo.glir.GlirParser object>)
582 def flush(self, parser):
583 """Flush all current commands to the GLIR interpreter."""
--> 584 self._shared.flush(parser)
parser = <vispy.gloo.glir.GlirParser object at 0x0000029BD85D90C0>
self._shared = <vispy.gloo.glir._GlirQueueShare object at 0x0000029BD85D9180>
self = <vispy.gloo.glir.GlirQueue object at 0x0000029BD85D9150>
File C:\ProgramData\miniconda3\envs\condavenv2\lib\site-packages\vispy\gloo\glir.py:506, in _GlirQueueShare.flush(self=<vispy.gloo.glir._GlirQueueShare object>, parser=<vispy.gloo.glir.GlirParser object>)
504 show = self._verbose if isinstance(self._verbose, str) else None
505 self.show(show)
--> 506 parser.parse(self._filter(self.clear(), parser))
self = <vispy.gloo.glir._GlirQueueShare object at 0x0000029BD85D9180>
parser = <vispy.gloo.glir.GlirParser object at 0x0000029BD85D90C0>
File C:\ProgramData\miniconda3\envs\condavenv2\lib\site-packages\vispy\gloo\glir.py:824, in GlirParser.parse(self=<vispy.gloo.glir.GlirParser object>, commands=[('FUNC', 'glClearColor', 0.0, 0.0, 0.0, 1.0), ('FUNC', 'glClear', 17664), ('FUNC', 'glDisable', 'cull_face'), ('FUNC', 'glDisable', 'depth_test'), ('FUNC', 'glEnable', 'blend'), ('FUNC', 'glBlendFuncSeparate', 'src_alpha', 'one_minus_src_alpha', 'one', 'one'), ('FUNC', 'glBlendEquationSeparate', 'func_add', 'func_add'), ('DRAW', 2215, 'triangles', (0, 6), 1)])
821 self._objects.pop(id_)
823 for command in commands:
--> 824 self._parse(command)
command = ('DRAW', 2215, 'triangles', (0, 6), 1)
self = <vispy.gloo.glir.GlirParser object at 0x0000029BD85D90C0>
File C:\ProgramData\miniconda3\envs\condavenv2\lib\site-packages\vispy\gloo\glir.py:786, in GlirParser._parse(self=<vispy.gloo.glir.GlirParser object>, command=('DRAW', 2215, 'triangles', (0, 6), 1))
783 # Triage over command. Order of commands is set so most
784 # common ones occur first.
785 if cmd == 'DRAW': # Program
--> 786 ob.draw(*args)
args = ('triangles', (0, 6), 1)
ob = <GlirProgram 2215 at 0x29c5f5efb20>
787 elif cmd == 'TEXTURE': # Program
788 ob.set_texture(*args)
File C:\ProgramData\miniconda3\envs\condavenv2\lib\site-packages\vispy\gloo\glir.py:1365, in GlirProgram.draw(self=<GlirProgram 2215>, mode=GL_TRIANGLES, selection=(0, 6), instances=1)
1363 gl.glDrawArraysInstanced(mode, first, count, instances)
1364 else:
-> 1365 gl.glDrawArrays(mode, first, count)
mode = GL_TRIANGLES
count = 6
gl = <module 'vispy.gloo.gl' from 'C:\\ProgramData\\miniconda3\\envs\\condavenv2\\lib\\site-packages\\vispy\\gloo\\gl\\__init__.py'>
first = 0
1366 # Wrap up
1367 gl.check_error('Check after draw')
File C:\ProgramData\miniconda3\envs\condavenv2\lib\site-packages\vispy\gloo\gl\_gl2.py:414, in glDrawArrays(mode=GL_TRIANGLES, first=0, count=6)
412 except AttributeError:
413 nativefunc = glDrawArrays._native = _get_gl_func("glDrawArrays", None, (ctypes.c_uint, ctypes.c_int, ctypes.c_int,))
--> 414 nativefunc(mode, first, count)
nativefunc = <_FuncPtr object at 0x0000029C39B5B920>
mode = GL_TRIANGLES
first = 0
count = 6
OSError: exception: access violation reading 0x000000000000001C
```
### 💡 Steps to Reproduce
open a whole slide using dask:

```python
store = imread(filepath, aszarr=True)
grp = zarr.open(store, mode="r")
datasets = grp.attrs["multiscales"][0]["datasets"]
pyramid = [da.from_zarr(store, component=d["path"]) for d in datasets]
```

and add to an image layer along with some metadata:

```python
meta = {...}
self.baselayer = self.viewer.add_image(pyramid, name=basename, metadata=meta)
```

after some zooming and panning, the error is raised.
### 💡 Expected Behavior
No exception should be raised and the application should not crash.
### 🌎 Environment
napari: 0.5.1
Platform: Windows-10-10.0.19045-SP0
Python: 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Qt: 5.15.2
PyQt5: 5.15.11
NumPy: 1.23.5
SciPy: 1.14.0
Dask: 2024.7.1
VisPy: 0.14.3
magicgui: 0.9.0
superqt: 0.6.7
in-n-out: 0.2.1
app-model: 0.2.8
npe2: 0.7.7
OpenGL:
- GL version: 4.6.0 NVIDIA 546.09
- MAX_TEXTURE_SIZE: 32768
- GL_MAX_3D_TEXTURE_SIZE: 16384
Screens:
- screen 1: resolution 2560x1440, scale 1.0
Optional:
- numba not installed
- triangle not installed
- napari-plugin-manager not installed
Settings path:
- C:\Users\Margherita Mottola\AppData\Local\napari\condavenv2_b7b979b9265c6f9544f62d60c95e9fdeac189a62\settings.yaml
Plugins:
- napari: 0.5.1 (81 contributions)
- napari-console: 0.0.9 (0 contributions)
- napari-svg: 0.2.0 (2 contributions)
--- Other windows machine:
WINDOWS 11
cuda Cuda compilation tools, release 12.1, V12.1.66
CONDA env Python version: 3.10.6
CUDA + CUDNN: cuda_12.1.r12.1 - cudnn 8
absl-py==2.1.0
alabaster==1.0.0
annotated-types==0.7.0
app-model==0.2.8
appdirs==1.4.4
asciitree==0.3.3
asttokens==2.4.1
attrs==24.1.0
Babel==2.15.0
build==1.2.1
cachey==0.2.1
certifi==2024.7.4
charset-normalizer==3.3.2
click==8.1.7
cloudpickle==3.0.0
colorama==0.4.6
comm==0.2.2
contourpy==1.2.1
cycler==0.12.1
dask==2024.7.1
debugpy==1.8.7
decorator==5.1.1
docstring_parser==0.16
docutils==0.21.2
exceptiongroup==1.2.2
executing==2.0.1
fasteners==0.19
filelock==3.13.1
flexcache==0.3
flexparser==0.3.1
fonttools==4.53.1
freetype-py==2.4.0
fsspec==2024.6.1
grpcio==1.66.1
HeapDict==1.0.1
hsluv==5.0.4
idna==3.7
imagecodecs==2024.6.1
imageio==2.34.2
imagesize==1.4.1
importlib_metadata==8.2.0
in-n-out==0.2.1
ipykernel==6.29.5
ipython==8.26.0
jedi==0.19.1
Jinja2==3.1.4
joblib==1.4.2
jsonschema==4.23.0
jsonschema-specifications==2023.12.1
jupyter_client==8.6.2
jupyter_core==5.7.2
kiwisolver==1.4.5
lazy_loader==0.4
locket==1.0.0
magicgui==0.9.0
Markdown==3.7
markdown-it-py==3.0.0
MarkupSafe==2.1.5
matplotlib==3.9.0
matplotlib-inline==0.1.7
mdurl==0.1.2
mpmath==1.3.0
napari==0.5.1
napari-console==0.0.9
napari-plugin-engine==0.2.0
napari-svg==0.2.0
nest-asyncio==1.6.0
networkx==3.3
npe2==0.7.7
numcodecs==0.13.0
numpy==1.23.5
numpydoc==1.7.0
opencv-python==4.10.0.84
openslide-python==1.3.1
packaging==24.1
pandas==2.2.2
parso==0.8.4
partd==1.4.2
pillow==10.4.0
Pint==0.24.3
platformdirs==4.2.2
pooch==1.8.2
prompt_toolkit==3.0.47
protobuf==5.28.0
psutil==6.0.0
psygnal==0.11.1
pure_eval==0.2.3
pyconify==0.1.6
pydantic==2.8.2
pydantic-compat==0.1.2
pydantic_core==2.20.1
Pygments==2.18.0
PyOpenGL==3.1.7
pyparsing==3.1.2
pyproject_hooks==1.1.0
PyQt5==5.15.11
PyQt5-Qt5==5.15.2
PyQt5_sip==12.15.0
python-dateutil==2.9.0.post0
pytz==2024.1
pywin32==306
PyYAML==6.0.1
pyzmq==26.1.0
qtconsole==5.5.2
QtPy==2.4.1
referencing==0.35.1
requests==2.32.3
rich==13.7.1
rpds-py==0.19.1
scikit-image==0.24.0
scikit-learn==1.5.1
scipy==1.14.0
seaborn==0.13.2
segment-anything==1.0
shapely==2.0.5
shellingham==1.5.4
six==1.16.0
snowballstemmer==2.2.0
Sphinx==8.0.2
sphinxcontrib-applehelp==2.0.0
sphinxcontrib-devhelp==2.0.0
sphinxcontrib-htmlhelp==2.1.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==2.0.0
sphinxcontrib-serializinghtml==2.0.0
stack-data==0.6.3
superqt==0.6.7
sympy==1.13.1
tabulate==0.9.0
tensorboard==2.17.1
tensorboard-data-server==0.7.2
threadpoolctl==3.5.0
tifffile==2024.7.24
tomli==2.0.1
tomli_w==1.0.0
toolz==0.12.1
torch==2.5.0+cu121
torchvision==0.20.0+cu121
tornado==6.4.1
tqdm==4.66.5
traitlets==5.14.3
typer==0.12.3
typing_extensions==4.12.2
tzdata==2024.1
urllib3==2.2.2
vispy==0.14.3
wcwidth==0.2.13
Werkzeug==3.0.4
wrapt==1.16.0
zarr==2.18.2
zipp==3.19.2
### 💡 Additional Context
The problem is happening on different Windows machines, with different Nvidia graphics cards. I have not tested on Linux machines yet, only on macOS 14.6.1, where the exception is never raised.
"bug",
"os:windows"
] | alghera | 10 |
keras-team/keras | data-science | 20,833 | Keras 2.15 is unable to load "h5" dumps created by itself (but can load models made in 2.12) | Using keras 2.15 installed with tensorflow 2.15, I'm taking a sample code from keras documentation: https://keras.io/guides/serialization_and_saving/ with the only change - I'm saving "h5" file instead of "keras".
Sample code produces output:
```
numpy: 1.26.4
tensorflow: 2.15.1
keras: 2.15.0
TypeError: Error when deserializing class 'Dense' using config={'name': 'dense', 'trainable': True, 'dtype': 'float32', 'units': 1, 'activation': {'module': 'builtins', 'class_name': 'function', 'config': 'my_package>custom_fn', 'registered_name': 'function'}, 'use_bias': True, 'kernel_initializer': {'module': 'keras.initializers', 'class_name': 'GlorotUniform', 'config': {'seed': None}, 'registered_name': None}, 'bias_initializer': {'module': 'keras.initializers', 'class_name': 'Zeros', 'config': {}, 'registered_name': None}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}.
Exception encountered: Unknown activation function: 'function'. Please ensure you are using a `keras.utils.custom_object_scope` and that this object is included in the scope. See https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object for details.
```
Sample code:
```python
import numpy as np
import tensorflow as tf
import keras
print("numpy:", np.__version__)
print("tensorflow:", tf.__version__)
print("keras:", keras.__version__)
keras.saving.get_custom_objects().clear()
@keras.saving.register_keras_serializable(package="MyLayers")
class CustomLayer(keras.layers.Layer):
def __init__(self, factor):
super().__init__()
self.factor = factor
def call(self, x):
return x * self.factor
def get_config(self):
return {"factor": self.factor}
@keras.saving.register_keras_serializable(package="my_package", name="custom_fn")
def custom_fn(x):
return x**2
# Create the model.
def get_model():
inputs = keras.Input(shape=(4,))
mid = CustomLayer(0.5)(inputs)
outputs = keras.layers.Dense(1, activation=custom_fn)(mid)
model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop", loss="mean_squared_error")
return model
# Train the model.
def train_model(model):
input = np.random.random((4, 4))
target = np.random.random((4, 1))
model.fit(input, target)
return model
if __name__ == "__main__":
    # This is the only difference with the documentation
# when using "keras", loading succeeds.
file_format = "h5"
file_name = f"custom_model_reg.{file_format}"
model = get_model()
model = train_model(model)
model.save(file_name)
# Raises error
reconstructed_model = keras.models.load_model(file_name)
```
If I create this model in keras 2.12, loading succeeds.
Comparing metadata for this model, created in 2.12 and 2.15, there is a certain difference:
Here is 2.12 metadata:
```json
{
"class_name": "Dense",
"config": {
"name": "dense",
"trainable": true,
"dtype": "float32",
"units": 1,
"activation": "custom_fn",
...
```
and here is 2.15:
```json
"class_name": "Dense",
"config": {
"name": "dense",
"trainable": true,
"dtype": "float32",
"units": 1,
"activation": {
"module": "builtins",
"class_name": "function",
"config": "custom_fn",
"registered_name": "function"
},
...
```
2.15 changed "activation" definition from string to dictionary.
Further debugging shows that when we try to load the "h5" file, execution eventually reaches the function `keras.src.saving.legacy.serialization.class_and_config_for_serialized_keras_object`, which uses only "class_name" to resolve the object and, naturally, fails, because class_name is "function":
```python
class_name = config["class_name"]
cls = object_registration.get_registered_object(
class_name, custom_objects, module_objects
)
if cls is None:
raise ValueError(
f"Unknown {printable_module_name}: '{class_name}'. "
```
So the question is - is there a way to fix this, or at least work around it?
tensorflow 2.15 is the highest version available to me.
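Since the only structural difference between the loadable 2.12 config and the failing 2.15 config is the shape of the `activation` entry, one possible workaround direction is to collapse the dict-form activation back to the legacy string form before the config reaches the legacy loader. This is only a sketch on plain dicts (patching the `model_config` attribute inside the H5 file itself is untested here):

```python
# Activation entry as written by Keras 2.15 (taken from the error above).
new_style = {
    "module": "builtins",
    "class_name": "function",
    "config": "my_package>custom_fn",
    "registered_name": "function",
}

def legacy_activation(entry):
    """Collapse a Keras 2.15 dict-form activation to the 2.12 string form."""
    if isinstance(entry, dict) and entry.get("class_name") == "function":
        # "config" may carry a package prefix, e.g. "my_package>custom_fn".
        return entry["config"].split(">")[-1]
    return entry
```

The resulting string form (`"custom_fn"`) matches what the 2.12 metadata contains, which the legacy deserializer accepts.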
| closed | 2025-01-31T12:51:41Z | 2025-03-06T02:04:46Z | https://github.com/keras-team/keras/issues/20833 | [
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] | nchaly | 3 |
MagicStack/asyncpg | asyncio | 629 | feature request: query logging | I could not find any issue/question/pr related to this, so I'm starting a new one.
It would be great to have query logging implemented inside `asyncpg`:
- to see all executed queries and their parameters
- especially nice would be query cache logging, to see cache hits/misses
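asyncpg does not currently expose such a hook, so as a stopgap one can wrap the connection in a thin proxy that logs each statement before delegating. A minimal sketch (the `FakeConnection` is a stand-in so this runs without a server; cache hit/miss logging would need access to asyncpg's internal statement cache and is not shown):

```python
import asyncio
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("asyncpg.query")

class LoggingConnection:
    """Thin proxy that logs query text and arguments before delegating."""
    def __init__(self, conn):
        self._conn = conn

    async def fetch(self, query, *args):
        log.debug("query=%r args=%r", query, args)
        return await self._conn.fetch(query, *args)

# Stand-in connection so the sketch runs without a database; with asyncpg
# you would wrap the object returned by asyncpg.connect() instead.
class FakeConnection:
    async def fetch(self, query, *args):
        return [{"echo": args}]

rows = asyncio.run(LoggingConnection(FakeConnection()).fetch("SELECT $1::int", 42))
```

The same pattern extends to `execute`, `fetchrow`, etc., one logging wrapper method per call you want traced.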
| closed | 2020-09-30T10:01:20Z | 2023-10-09T20:17:44Z | https://github.com/MagicStack/asyncpg/issues/629 | [] | dmig-alarstudios | 6 |
huggingface/datasets | computer-vision | 6,899 | List of dictionary features get standardized | ### Describe the bug
Hi, I'm trying to create a HF dataset from a list using `Dataset.from_list`.
Each sample in the list is a dict with the same keys (which will be my features). The values for each feature are a list of dictionaries, and each such dictionary has a different set of keys. However, the datasets library standardizes all dictionaries under a feature and adds all possible keys (with None value) from all the dictionaries under that feature.
How can I keep the same set of keys as in the original list for each dictionary under a feature?
### Steps to reproduce the bug
```
from datasets import Dataset
# Define a function to generate a sample with "tools" feature
def generate_sample():
# Generate random sample data
sample_data = {
"text": "Sample text",
"feature_1": []
}
# Add feature_1 with random keys for this sample
feature_1 = [{"key1": "value1"}, {"key2": "value2"}] # Example feature_1 with random keys
sample_data["feature_1"].extend(feature_1)
return sample_data
# Generate multiple samples
num_samples = 10
samples = [generate_sample() for _ in range(num_samples)]
# Create a Hugging Face Dataset
dataset = Dataset.from_list(samples)
dataset[0]
```
```{'text': 'Sample text', 'feature_1': [{'key1': 'value1', 'key2': None}, {'key1': None, 'key2': 'value2'}]}```
### Expected behavior
```{'text': 'Sample text', 'feature_1': [{'key1': 'value1'}, {'key2': 'value2'}]}```
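The key unification comes from Arrow's type system: a list-of-struct column must have one fixed struct schema, so `datasets` pads every dict to the union of keys with `None`. If ragged key sets must survive round-trips, one workaround (a sketch, not an official `datasets` feature) is to store each dict as a JSON string and decode on access:

```python
import json

samples = [
    {"text": "Sample text",
     "feature_1": [{"key1": "value1"}, {"key2": "value2"}]},
]

# Encode: each inner dict becomes a JSON string, so the inferred Arrow schema
# is just list-of-string and no key unification can happen.
encoded = [
    {**s, "feature_1": [json.dumps(d) for d in s["feature_1"]]}
    for s in samples
]
# Dataset.from_list(encoded) would go here.

# Decode on access: the original per-item key sets come back intact.
decoded_feature = [json.loads(d) for d in encoded[0]["feature_1"]]
```

The cost is that `feature_1` is opaque to Arrow-level filtering, but the per-item keys are preserved exactly.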
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-5.15.0-1040-nvidia-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.23.0
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | open | 2024-05-15T14:11:35Z | 2024-05-15T14:11:35Z | https://github.com/huggingface/datasets/issues/6899 | [] | sohamparikh | 0 |
SYSTRAN/faster-whisper | deep-learning | 143 | add details about segments generator in README | since it confuses many people (#67, #117, #141, #142; it also confused me at first), please add details about the segments generator to the README (e.g. how to measure execution time accurately) | closed | 2023-04-12T18:14:59Z | 2023-04-13T07:50:55Z | https://github.com/SYSTRAN/faster-whisper/issues/143 | [] | phineas-pta | 1 |
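The confusion behind those linked issues is that the segments object returned by `transcribe()` is a lazy generator: nothing is actually transcribed until it is iterated, so timing the call alone measures almost nothing. A stand-alone sketch of the pattern (a stub generator replaces the real model so this runs anywhere):

```python
import time

def transcribe_stub():
    """Stand-in for model.transcribe()'s segments: work happens lazily."""
    for i in range(3):
        time.sleep(0.05)  # pretend decoding effort per segment
        yield f"segment {i}"

t0 = time.perf_counter()
segments = transcribe_stub()   # returns almost instantly: nothing decoded yet
creation_time = time.perf_counter() - t0

t0 = time.perf_counter()
texts = list(segments)         # forcing the generator does the real work
consumption_time = time.perf_counter() - t0
```

To time real transcription, wrap the `list(segments)` (or the consuming loop), not the `transcribe()` call.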
qubvel-org/segmentation_models.pytorch | computer-vision | 415 | Negative dice loss and IOU score more than 1 | Hi, I followed the CamVid example and used the exact same code for the whole training process. However, the dice loss is negative, the IOU score is more than 1 and in the 100s, Fscore is also more than 1. I am unable to spot what is wrong. Please help!
```
loss = smp.utils.losses.DiceLoss()
metrics = [
smp.utils.metrics.IoU(threshold=0.5),
smp.utils.metrics.Fscore()
]
optimizer = torch.optim.Adam([
dict(params=model.parameters(), lr=0.0001),
])
max_score = 0
for i in range(0, 30):
print('\nEpoch: {}'.format(i))
train_logs = train_epoch.run(train_loader)
valid_logs = valid_epoch.run(valid_loader)
# do something (save model, change lr, etc.)
if max_score < valid_logs['iou_score']:
max_score = valid_logs['iou_score']
torch.save(model, f'/content/unet_{ENCODER}.pth')
print('Model saved!')
if i == 25:
optimizer.param_groups[0]['lr'] = 1e-5
print('Decrease decoder learning rate to 1e-5!')
```
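One classic cause of IoU > 1 and negative Dice with this setup (an assumption, since the mask preprocessing isn't shown above) is ground-truth masks encoded as 0/255 instead of 0/1: the soft intersection/union arithmetic then blows past 1, and `1 - dice` goes negative. A numeric sketch of the effect:

```python
def soft_iou(pr, gt, eps=1e-7):
    """Soft IoU in the style of smp.utils metrics: elementwise products,
    with no thresholding applied to the ground truth."""
    inter = sum(p * g for p, g in zip(pr, gt))
    union = sum(pr) + sum(gt) - inter
    return inter / (union + eps)

pred = [1.0, 1.0, 0.0, 0.0]

iou_ok = soft_iou(pred, [1, 1, 0, 0])        # masks in {0, 1}   -> ~1.0
iou_bad = soft_iou(pred, [255, 255, 0, 0])   # masks in {0, 255} -> ~255
```

If this is the cause, dividing the mask array by 255 (or comparing against a class index) before training restores scores to the [0, 1] range.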

| closed | 2021-06-06T20:21:07Z | 2021-07-04T15:46:56Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/415 | [] | cherinae | 2 |
svc-develop-team/so-vits-svc | pytorch | 23 | M1 mac 安装依赖失败 | 由于`numpy==1.20.3`还未支持m1 的mac导致报错`ERROR: Failed building wheel for numpy`,
去numpy仓库下看到在1.21.4版本后就可以解决,同时还可以解决x86_64及pipenv的一些安装问题
https://github.com/numpy/numpy/issues/17784#issuecomment-966444334
请问锁1.20是因为哪个包的依赖,可否麻烦升级项目numpy版本呢 | closed | 2023-03-14T08:16:47Z | 2023-03-14T08:24:35Z | https://github.com/svc-develop-team/so-vits-svc/issues/23 | [] | Chrosea | 0 |
paulpierre/RasaGPT | fastapi | 30 | sqlalchemy.exc.DataError: (psycopg2.errors.InvalidParameterValue) dimensions for type vector cannot exceed 1024 | How to resolve the problem? thanks! | open | 2023-06-12T07:27:55Z | 2023-12-26T03:41:44Z | https://github.com/paulpierre/RasaGPT/issues/30 | [] | lwzh | 1 |
wkentaro/labelme | deep-learning | 516 | Support for freehand area drawing | This is important for image segmentation related machine learning projects. | closed | 2019-11-21T22:52:09Z | 2020-01-27T01:49:20Z | https://github.com/wkentaro/labelme/issues/516 | [] | drestion | 0 |