repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
|---|---|---|---|---|---|---|---|---|---|---|---|
trevismd/statannotations | seaborn | 171 | Mann-Whitney | Does it only support the two-sided Mann-Whitney-Wilcoxon test, with no one-tailed option? | closed | 2025-03-13T04:20:01Z | 2025-03-13T04:48:20Z | https://github.com/trevismd/statannotations/issues/171 | [] | minnyqiu | 1 |
custom-components/pyscript | jupyter | 489 | [Feature] Web UI for configuration | Especially while on mobile I'd find it really useful to have a simple UI that would allow you to modify app or global settings. | closed | 2023-07-14T18:34:36Z | 2023-07-14T18:55:40Z | https://github.com/custom-components/pyscript/issues/489 | [] | tal | 0 |
d2l-ai/d2l-en | deep-learning | 1,737 | Search doesn't appear to work | Currently, the search page shows no results and just "Preparing search"...
http://d2l.ai/search.html?q=transformer

Possibly related to this error in the console:

| closed | 2021-04-26T22:16:35Z | 2021-05-17T03:12:17Z | https://github.com/d2l-ai/d2l-en/issues/1737 | [
"bug"
] | indigoviolet | 3 |
scikit-multilearn/scikit-multilearn | scikit-learn | 221 | Faiss for Multi Label Approximate Nearest Neighbours | Would it be reasonably easy to adapt your MLkNN implementation to use FAISS as the ANN engine? | open | 2021-10-06T18:09:42Z | 2021-10-06T18:09:42Z | https://github.com/scikit-multilearn/scikit-multilearn/issues/221 | [] | GeorgePearse | 0 |
dgtlmoon/changedetection.io | web-scraping | 2,869 | "Send test notification" - does not alert user on error | **Describe the bug**
When Change Detection delivers a notification, a failing HTTP response status such as [404](https://httpstatuses.io/404) is treated as success and silently disregarded.
**Version**
v0.48.05
**How did you install?**
Docker Compose
**To Reproduce**
Steps to reproduce the behavior:
1. Go to the notification settings for a watch
2. Supply a failing URL, such as `posts://httpbin.org/status/404`
3. Push Send test notification, then open the notification log
4. See that there is no error or other indicator of failure provided. The only log entry shows `SENDING`.
Example watch: https://changedetection.io/share/0nFoikTeP3Ua
**Expected behavior**
Some sort of indication of failure in the log or watch should be shown to the user.
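As a sketch of the behaviour I would expect (hypothetical code, not changedetection.io's actual notification handler), any non-2xx delivery response should be surfaced as a failure rather than logged only as `SENDING`:

```python
def delivery_failed(status_code: int) -> bool:
    """Treat any non-2xx HTTP response from the notification target as a failure."""
    return not (200 <= status_code < 300)

# A 404 from posts://httpbin.org/status/404 should surface an error:
print(delivery_failed(404))  # True
print(delivery_failed(200))  # False
```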
**Desktop (please complete the following information):**
- OS: Ubuntu 22.04.3 LTS
- Browser: Vivaldi
- Version 7.0
**Additional context**
This can arise when Change Detection mangles your notification URL, as in #2868.
| closed | 2024-12-29T02:39:22Z | 2025-01-08T13:35:42Z | https://github.com/dgtlmoon/changedetection.io/issues/2869 | [
"bug",
"Notifications systems",
"user-interface"
] | duozmo | 5 |
tflearn/tflearn | tensorflow | 986 | ValueError: Tag: acc:0 cannot be found in summaries list. | Hi, when I run the tflearn Trainer example:
```
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
# ----------------------------
# Utils: Using TFLearn Trainer
# ----------------------------
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
with tf.Graph().as_default():
    X = tf.placeholder('float', [None, 784])
    Y = tf.placeholder('float', [None, 10])
    W1 = tf.Variable(tf.random_normal([784, 256]))
    W2 = tf.Variable(tf.random_normal([256, 256]))
    W3 = tf.Variable(tf.random_normal([256, 10]))
    b1 = tf.Variable(tf.random_normal([256]))
    b2 = tf.Variable(tf.random_normal([256]))
    b3 = tf.Variable(tf.random_normal([10]))

    def dnn(x):
        # Multilayer perceptron
        x = tf.nn.tanh(tf.add(tf.matmul(x, W1), b1))
        x = tf.nn.tanh(tf.add(tf.matmul(x, W2), b2))
        x = tf.add(tf.matmul(x, W3), b3)
        return x

    net = dnn(X)
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=net, labels=Y))
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
    accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(net, 1), tf.argmax(Y, 1)), tf.float32), name='acc')

    # Using TFLearn Trainer
    # Define a training op (op for backprop, only need 1 in this model)
    trainop = tflearn.TrainOp(loss=loss, optimizer=optimizer, metric=accuracy, batch_size=128)

    # Create Trainer, providing all training ops. Tensorboard logs stored
    # in /tmp/tflearn_logs/. It is possible to change verbose level for more
    # detailed logs about gradients, variables etc...
    trainer = tflearn.Trainer(train_ops=trainop, tensorboard_verbose=0)
    trainer.fit(feed_dicts={X: trainX, Y: trainY}, val_feed_dicts={X: testX, Y: testY}, n_epoch=10, show_metric=True)
```
I encounter this problem :
```
Traceback (most recent call last):
File "C:/Users/Administrator/Desktop/machine learning/DL/tflearn/trainer.py", line 43, in <module>
trainer.fit( feed_dicts={X: trainX, Y: trainY}, val_feed_dicts={X: testX, Y: testY}, n_epoch=10, show_metric=True)
File "D:\Anaconda2\envs\py3\lib\site-packages\tflearn\helpers\trainer.py", line 339, in fit
show_metric)
File "D:\Anaconda2\envs\py3\lib\site-packages\tflearn\helpers\trainer.py", line 829, in _train
sname, train_summ_str)
File "D:\Anaconda2\envs\py3\lib\site-packages\tflearn\summaries.py", line 192, in get_value_from_summary_string
raise ValueError("Tag: " + tag + " cannot be found in summaries list.")
ValueError: Tag: acc:0 cannot be found in summaries list.
```
But when I use
`trainer.fit( feed_dicts={X: trainX, Y: trainY}, val_feed_dicts={X: testX, Y: testY}, n_epoch=10)`
instead of
`trainer.fit( feed_dicts={X: trainX, Y: trainY}, val_feed_dicts={X: testX, Y: testY}, n_epoch=10, show_metric=True)`
the error disappears.
Any advice on why this happens?
Environment: Win7, 64, tensorflow 1.4, python 3.5 | open | 2017-12-30T08:07:27Z | 2017-12-30T08:12:12Z | https://github.com/tflearn/tflearn/issues/986 | [] | yuzhou164 | 0 |
jupyterlab/jupyter-ai | jupyter | 298 | Make the default chat window memory size configurable |
### Problem
A chat conversation memory window of 2 is a bit small, especially as people are used to long memories from their experience with ChatGPT.
### Proposed Solution
* Make the `k` param in the ConversationBufferWindowMemory of the DefaultChatHandler configurable.
* Preferably as a setting in the UI.
* But short-term fix of setting as some global constant like MEMORY_K = 2 which I am able to configure would be fine in the meantime.
* Can consider as provider-level configuration with provider-level default if differing context windows sizes are an issue.
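To illustrate what the `k` parameter controls, here is a minimal stdlib sketch of a window memory (an illustration only, not jupyter-ai's or LangChain's actual implementation):

```python
from collections import deque

MEMORY_K = 2  # the constant this request asks to make configurable

class WindowMemory:
    """Keeps only the last k question/answer exchanges, like a buffer window memory."""
    def __init__(self, k: int):
        self.turns = deque(maxlen=k)

    def add(self, human: str, ai: str) -> None:
        self.turns.append((human, ai))

mem = WindowMemory(MEMORY_K)
for i in range(5):
    mem.add(f"question {i}", f"answer {i}")

# With k=2, only the two most recent exchanges survive:
print(list(mem.turns))  # [('question 3', 'answer 3'), ('question 4', 'answer 4')]
```

Making `MEMORY_K` user-configurable would just mean plumbing that integer through to wherever the window is constructed.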
### Additional context
| closed | 2023-07-27T15:28:55Z | 2025-02-04T23:13:28Z | https://github.com/jupyterlab/jupyter-ai/issues/298 | [
"enhancement",
"scope:chat-ux",
"scope:settings"
] | michaelchia | 3 |
explosion/spaCy | data-science | 12,332 | If a dictionary contains double-byte hyphens (ー), it is not registered in the user dictionary. | ## How to reproduce the behaviour
If I register a word in the CSV that contains a double-byte hyphen, the registration is not reflected and the word is split up. Words that do not contain double-byte hyphens are reflected correctly.
The procedure is as follows.
1. Input the word in the csv
2. Update the dic in sudachipy
```
!sudachipy ubuild -s '{site_package}/sudachidict_core/resources/system.dic' ./.sudachi/user_dic.csv -o ./.sudachi/user.dic
```
3. Change the path in sudachi.json ("userDict")
4. The json file is then changed to utf-8.
After this, it does not work; for example, if I run the following on the command line:
```
echo "ビューティーキラー" | sudachipy
```
the word is split at the double-byte hyphen.
After that, for some reason it works in other environments, but not in this one.
## My Environment
* Operating System: Windows 10 Pro
* Python Version Used: Python 3.10.5
* spaCy Version Used: spacy 3.5.0
* Environment Information: Intel(R) Core(TM) i7-7700 CPU
Other library info:
spacy-alignments 0.9.0
spacy-legacy 3.0.11
spacy-loggers 1.0.4
spacy-transformers 1.1.9
SudachiDict-core 20221021
SudachiPy 0.6.6
SudachiTra 0.1.7
| closed | 2023-02-25T00:49:40Z | 2023-04-09T00:02:28Z | https://github.com/explosion/spaCy/issues/12332 | [
"third-party",
"lang / ja"
] | Dormir30 | 8 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 4,438 | Mail after user registration / password reset are not coming | ### What version of GlobaLeaks are you using?
5.0.63
### What browser(s) are you seeing the problem on?
All
### What operating system(s) are you seeing the problem on?
Windows
### Describe the issue
Mail messages after user registration / password reset are not arriving.
Sometimes rebooting the server fixes part of the problem, but not always.
### Proposed solution
_No response_ | open | 2025-03-21T09:09:51Z | 2025-03-21T10:42:18Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/4438 | [
"T: Bug",
"Triage"
] | maelaborazioni | 1 |
jazzband/django-oauth-toolkit | django | 1,206 | How to activate translations for oauth2 views | I am unable to display oauth pages in anything other than English.
I import the urls using
```python
path("o/", include("oauth2_provider.urls", namespace="oauth2_provider")),
```
And I have specified the language and i18n settings in my project's `settings.py`:
```python
LANGUAGE_CODE = "fr-fr"
TIME_ZONE = "Europe/Paris"
USE_I18N = True
USE_L10N = True
USE_TZ = True
```
However, my `/o` pages are always in English.
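For reference, this is the middleware ordering my reading of Django's i18n docs suggests is needed for per-request translations (an assumption on my part — a sketch, not my full settings, and I am not sure whether oauth2_provider needs anything beyond this):

```python
# settings.py sketch: Django only performs per-request translation when
# LocaleMiddleware is enabled, after SessionMiddleware and before CommonMiddleware.
MIDDLEWARE = [
    "django.contrib.sessions.middleware.SessionMiddleware",
    "django.middleware.locale.LocaleMiddleware",
    "django.middleware.common.CommonMiddleware",
]
```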
The only i18n entry I found in the docs is about contributing translation files, not about how to use them. | closed | 2022-09-27T14:38:36Z | 2023-10-04T14:29:40Z | https://github.com/jazzband/django-oauth-toolkit/issues/1206 | [
"question"
] | alemangui | 1 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 305 | audioread.exceptions.NoBackendError comes up whenever I try to load my own data :( | Please help!
Whenever I try to load in my own dataset to clone from, the toolbox itself says:
```
exception:
expected "str, byte..."
```
When I try to load from my root with a saved WAV file (`demo_toolbox.py -d <datasets_root>`), the terminal says `audioread.exceptions.NoBackendError`.
| closed | 2020-03-30T14:43:35Z | 2022-09-20T06:43:01Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/305 | [] | Virus32wasmyname | 8 |
onnx/onnx | tensorflow | 6,152 | Missing type support in parser for various types (float16, bfloat16, ...) | # Bug Report
### Is the issue related to model conversion?
No.
### Describe the bug
The parser has only partial support for data type parsing: https://github.com/onnx/onnx/blob/093a8d335a66ea136eb1f16b3a1ce6237ee353ab/onnx/defs/parser.cc#L436
The missing types are:
```
TensorProto_DataType_FLOAT16
TensorProto_DataType_BFLOAT16
TensorProto_DataType_FLOAT8E4M3FN
TensorProto_DataType_FLOAT8E4M3FNUZ
TensorProto_DataType_FLOAT8E5M2
TensorProto_DataType_FLOAT8E5M2FNUZ
TensorProto_DataType_COMPLEX64
TensorProto_DataType_COMPLEX128
```
### System information
System independent.
### Reproduction instructions
Find a sample model and instructions for float16 below; note, however, that it would be great to support all the types, not just float16. The error will be the same for the other types.
Sample model:
[gemmfloat16.onnxtxt](https://github.com/user-attachments/files/15513562/gemmfloat16.onnxtxt.txt)
Sample instructions:
```
import onnx
from pathlib import Path
onnx.parser.parse_model(Path("gemmfloat16.onnxtxt.txt").read_text())
```
Result:
`onnx.parser.ParseError: b'[ParseError at position (...)]\nError context: <float16[4, 4] weight = {...}, float16[4] bias = {...}>\nUnhandled type: %d10'`
(Note: The `.onnxtxt` format is not allowed to be uploaded therefore the sample is `.onnxtxt.txt`)
### Expected behavior
ONNX model is parsed successfully. | open | 2024-05-31T12:50:30Z | 2025-01-24T15:02:17Z | https://github.com/onnx/onnx/issues/6152 | [
"bug",
"module: parser",
"contributions welcome"
] | TinaAMD | 1 |
FactoryBoy/factory_boy | sqlalchemy | 302 | Passing Trait Parameter Value to SubFactory as factory.SelfAttribute Always Evaluates as True | If a trait parameter is passed to `SubFactory` not as a literal `True` or `False` value but as a `SelfAttribute`, it always evaluates as if `True` were passed.
This issue can be reproduced easily with this example:
``` python
class Foo(Base):
    """ Base model """
    __tablename__ = "foo"
    id_ = Column(Integer, primary_key=True, autoincrement=True)
    status = Column(Boolean)
    value = Column(String)

    def __str__(self):
        return "<Foo: id_: \"%d\", status\"%s\", value=\"%s\"" % (self.id_, self.status, self.value)


class FooFactory(Factory):
    """ Factory for a base model """
    class Meta:
        sqlalchemy_session = session
        model = Foo
        force_flush = True

    class Params:
        status_active = factory.Trait(
            status=True,
            value="active!"
        )

    # id_ generated automatically
    status = False
    value = "not active"


class Bar(Base):
    """ Model with a foreign key to base model `Foo` """
    __tablename__ = "bar"
    id_ = Column(Integer, primary_key=True, autoincrement=True)
    foo_id = Column(ForeignKey(Foo.id_))
    ref_foo = relation(Foo)


class BarFactory(Factory):
    """ Factory for a Bar model, that passes a trait to a Foo subfactory based on a parametric value """
    class Meta:
        sqlalchemy_session = session
        model = Bar

    class Params:
        foo_status_active = False

    # id_ generated automatically
    ref_foo = factory.SubFactory(FooFactory, status_active=factory.SelfAttribute("..foo_status_active"))
    foo_id = factory.SelfAttribute(".ref_foo.id_")


print(BarFactory.create(foo_status_active=True).ref_foo)
print(BarFactory.create(foo_status_active=False).ref_foo)
```
(full gist at https://gist.github.com/Snork2/d88977f5bf66157f721c69363c2449f6)
This example prints
```
# <Foo: id_: "1", status"True", value="active!"
# <Foo: id_: "2", status"True", value="active!"
```
while
```
# <Foo: id_: "1", status"True", value="active!"
# <Foo: id_: "2", status"False", value="not active"
```
were expected
Environment:
`Python 3.5.1 (v3.5.1:37a07cee5969, Dec 6 2015, 01:38:48) [MSC v.1900 32 bit (Intel)] on win32`
`SQLAlchemy==1.0.12`
`factory-boy==2.7.0`
| open | 2016-05-16T08:19:57Z | 2018-01-03T19:10:37Z | https://github.com/FactoryBoy/factory_boy/issues/302 | [
"SQLAlchemy"
] | 14droplets | 2 |
proplot-dev/proplot | data-visualization | 333 | Bug: pplt.rc.load does not work properly |
### Description
pplt.rc.load does not work properly.
I found the problem when I was playing with `~/.proplotrc`. The same problem appears as in the following example.
### Steps to reproduce
I save a rc file as follows
```python
import proplot as pplt
pplt.rc['font.size'] = 10
pplt.rc['axes.labelsize'] = 'med-large'
pplt.rc.save('proplotrc', comment=True, backup=False, description=True)
pplt.rc['font.size'],pplt.rc['axes.labelsize'], pplt.rc['label.size']
```
It prints `(10.0, 'med-large', 'med-large')`
The file is indeed updated with the lines
```
# Changed settings
label.size: med-large
axes.labelsize: med-large
font.size: 10.0
```
Then I **restart** the session and load the same file as follows
```python
import proplot as pplt
pplt.rc.load('proplotrc')
pplt.rc['font.size'],pplt.rc['axes.labelsize'], pplt.rc['label.size']
```
It prints `(10.000000000000002, 'medium', 'medium')`.
### Proplot version
Paste the results of `import matplotlib; print(matplotlib.__version__); import proplot; print(proplot.version)` here.
3.4.3
0.9.5.post259 | closed | 2022-01-29T18:46:20Z | 2022-01-30T21:26:01Z | https://github.com/proplot-dev/proplot/issues/333 | [
"bug"
] | syrte | 5 |
localstack/localstack | python | 11,538 | bug: ECS SSM param env var with a leading slash fails | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When an ECS task has an env var with its value stored in an SSM param whose name starts with a `/` (e.g. `/test/some-config`), starting the task fails with the following error:
```
botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the GetParameter operation: Parameter name: can't be prefixed with "ssm" (case-insensitive). If formed as a path, it can consist of sub-paths divided by slash symbol; each sub-path can be formed as a mix of letters, numbers and the following 3 symbols .-_
```
Checking the request logs, it looks like GetParameter is called without the leading slash:
```
{
"Name": "test/some-config",
"WithDecryption": true
}
```
### Expected Behavior
ECS should try to get the param `/test/some-config` not `test/some-config`
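To make the slash handling concrete, here is a quick sketch in plain Python (with a hypothetical ARN; this is an illustration, not LocalStack's actual code):

```python
arn = "arn:aws:ssm:us-east-1:000000000000:parameter/test/some-config"

# Splitting on ":parameter/" drops the leading slash from the name:
broken = arn.split(":parameter/")[1]
print(broken)  # test/some-config

# Splitting on ":parameter" keeps the slash that GetParameter needs:
fixed = arn.split(":parameter")[1]
print(fixed)   # /test/some-config
```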
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
- create an SSM param `/test/some-config`
- create an ECS service and add this ssm param as an env var to the container
- deploying the service fails
### Environment
```markdown
- OS: Mac OS 14
- LocalStack:
LocalStack version: 3.7.2
LocalStack Docker image sha: sha256:6121fe59d0c05ccc30bee5d4f24fce23c3e3148a4438551982a5bf60657a9e8d
LocalStack build date: 2024-09-06
LocalStack build git hash: 854016a0
```
### Anything else?
I also see this in the logs:
```
File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/pro/core/services/ecs/secrets_resolver.py.enc", line 24, in resolve_secrets
if':ssm:'in B:O=B.split(':parameter/')[1];A=N.get_parameter(Name=O,WithDecryption=True)['Parameter']['Value']
```
This looks like it could be the issue, since it splits the ARN by `:parameter/` to get the name of the SSM param. Example SSM param from my AWS account (real account, not LocalStack):
```
Name: /cdk-bootstrap/.../version
ARN: arn:aws:ssm:us-east-1:...:parameter/cdk-bootstrap/.../version | open | 2024-09-18T19:28:21Z | 2025-01-30T13:40:00Z | https://github.com/localstack/localstack/issues/11538 | [
"type: bug",
"status: resolved/fixed",
"aws:ecs",
"aws:ssm",
"area: integration/aws-sdk-python"
] | arshsingh | 4 |
biolab/orange3 | numpy | 6,480 | Forward selection in feature suggestion | I really like the feature suggestion in _Linear projection_ and _Radviz_, but finding a combination of as few as 4 features among 100+ attributes takes forever. I tried to use _Rank_ to reduce their number, but often cannot go so low that it would be worth the wait (also, measuring feature importance independently does not give the same result as measuring it in combination).
My idea is that a forward selection strategy would be a good alternative to brute-forcing all combinations. It can easily find a combination of 10-20 features much faster. | closed | 2023-06-17T14:03:00Z | 2023-06-23T10:36:51Z | https://github.com/biolab/orange3/issues/6480 | [] | processo | 1 |
NVIDIA/pix2pixHD | computer-vision | 281 | The D_real and D_fake drop fast | The D_real and D_fake losses drop to very small values after several steps; is this normal?

| open | 2021-11-12T07:17:13Z | 2024-03-16T15:05:11Z | https://github.com/NVIDIA/pix2pixHD/issues/281 | [] | WayneCho | 1 |
pytorch/vision | machine-learning | 8,083 | `affine` creates artefacts on the edges of the image | ### 🐛 Describe the bug
When employing the affine functional operation (in both v1 and v2), it's evident that black borders are introduced around the image, even when the fill value matches the image content. These black margins are observable when using both uint8 and float32 data types, and this phenomenon occurs consistently on both Ubuntu and Mac M1.
Upon comparing the implementation of the 'affine' operation in torchvision with that in Kornia, I am uncertain whether the interpolation issue is limited to the image edges. Notably, when utilizing Kornia, the output appears to be more visually appealing when applied to an image.
```python
import torch
import torchvision
from torchvision.transforms.v2.functional import affine
from torchvision.tv_tensors import Image
from torchvision.transforms.v2.functional._geometry import _get_inverse_affine_matrix
from kornia.geometry.transform import get_affine_matrix2d, warp_affine
from torchvision.transforms import InterpolationMode
image = Image(128 * torch.ones((3, 240, 200), dtype=torch.float))
angle = 30
trans = (0, 0)
scale = 1.0
shear = (0, 0)
center = (image.shape[-1] / 2, image.shape[-2] / 2)
inter = InterpolationMode.BILINEAR
fill = [128, 128, 128]

M = get_affine_matrix2d(
    torch.Tensor(trans),
    torch.Tensor([center]),
    torch.Tensor([[scale, scale]]),
    torch.Tensor([angle]),
    torch.Tensor([shear[0]]),
    torch.Tensor([shear[1]]),
)
kn_img = warp_affine(
    image.unsqueeze(0),
    M[:, :2],
    image.shape[-2:],
    mode="bilinear",
    padding_mode="fill",
    fill_value=torch.tensor(fill),
    align_corners=False,
)
tv_img = affine(
    image,
    angle=angle,
    translate=trans,
    scale=scale,
    shear=shear,
    fill=fill,
    interpolation=inter,
)
torchvision.io.write_png(kn_img[0].to(dtype=torch.uint8), "affine_kornia.png")
torchvision.io.write_png(tv_img.to(dtype=torch.uint8), "affine_torchvision.png")
```
Kornia | Torchvision
:--:|:--:
 | 
### Versions
PyTorch version: 2.1.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.6 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.0.40.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.9 (main, Jun 29 2023, 12:23:23) [Clang 14.0.3 (clang-1403.0.22.14.1)] (64-bit runtime)
Python platform: macOS-13.6-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] mypy==1.6.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.2
[pip3] onnx==1.14.1
[pip3] pytorch-lightning==2.0.9
[pip3] pytorch-ranger==0.1.1
[pip3] torch==2.1.0
[pip3] torch-optimizer==0.3.0
[pip3] torchdata==0.6.1
[pip3] torchmetrics==1.0.3
[pip3] torchtext==0.15.2
[pip3] torchvision==0.16.0
cc @vfdev-5 | open | 2023-10-30T18:46:17Z | 2025-01-11T08:09:57Z | https://github.com/pytorch/vision/issues/8083 | [
"module: transforms"
] | antoinebrl | 7 |
plotly/dash | plotly | 3,154 | dcc.send_data_frame polars support | Currently, dcc.send_data_frame only supports pandas writers. I was wondering if we can add support for [polars](https://github.com/pola-rs/polars) as well. Previously, I had to do all my operations in polars and then convert to pandas at the end to use dcc.send_data_frame. For now, I made a workaround which I am using.
```
import polars as pl
from dash import Dash, html, dcc, callback, Output, Input
import io
import base64


def polars_to_send_data_frame(df: pl.DataFrame, filename: str, **csv_kwargs):
    buffer = io.StringIO()
    df.write_csv(buffer, **csv_kwargs)
    return {
        'content': base64.b64encode(buffer.getvalue().encode('utf-8')).decode('utf-8'),
        'filename': filename,
        'type': 'text/csv',
        'base64': True
    }


app = Dash(__name__)

# Sample data
df = pl.DataFrame({
    'A': range(5),
    'B': ['foo', 'bar', 'baz', 'qux', 'quux'],
    'C': [1.1, 2.2, 3.3, 4.4, 5.5]
})

app.layout = html.Div([
    html.Button("Download CSV", id="btn"),
    dcc.Download(id="download")
])


@callback(
    Output("download", "data"),
    Input("btn", "n_clicks"),
    prevent_initial_call=True
)
def download_csv(n_clicks):
    return polars_to_send_data_frame(df, "data.csv")


if __name__ == '__main__':
    app.run(debug=True)
```
I am wondering if I can make a PR to add support for polars! | closed | 2025-02-06T23:39:03Z | 2025-02-06T23:39:56Z | https://github.com/plotly/dash/issues/3154 | [] | omarirfa | 1 |
clovaai/donut | nlp | 41 | linux or windows ? | Can we run the train script on Windows? | closed | 2022-09-01T13:34:06Z | 2023-11-23T00:59:01Z | https://github.com/clovaai/donut/issues/41 | [] | trikiamine23 | 4 |
openapi-generators/openapi-python-client | rest-api | 907 | Error Reference schema are not supported | **Describe the bug**
I am trying to generate the client and this error is shown:
```
$ openapi-python-client generate --path <path_to_.gen.yaml>
Generating client
Warning(s) encountered while generating. Client was generated, but some pieces may be missing
Unable to parse this part of your OpenAPI document:
Reference schemas are not supported.
Reference(ref='./<folder>/<schema>.yaml')
```
**OpenAPI Spec File**
The spec file has the components defined this way:
```yaml
components:
  schemas:
    SchemaName:
      $ref: "./<folder>/<schema>.yaml"
```
**Desktop (please complete the following information):**
- OS: macOS
- Python Version: 3.11.3
- openapi-python-client version: 0.14.1
**Additional context**
| closed | 2023-12-15T12:55:11Z | 2023-12-15T16:38:21Z | https://github.com/openapi-generators/openapi-python-client/issues/907 | [] | antoneladestito | 1 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 685 | The split of the Stanford Cars dataset. | Thank you for your commendable efforts in your work. I have a question regarding the split of the Stanford Cars dataset, which comprises 16,185 images representing 196 car models.
In most metric-learning literature, the dataset split is described as follows: "The first 98 classes (8,054 images) are used for training, and the remaining 98 classes (8,131 images) are held out for testing."
However, the split mentioned in the [Torchvision](https://pytorch.org/vision/stable/_modules/torchvision/datasets/stanford_cars.html#StanfordCars) documentation states that "The data is split into 8,144 training images and 8,041 testing images, with an approximately 50-50 split for each class.", which differs from the training/testing split used in the current metric-learning community.
Unfortunately, the [official website](https://ai.stanford.edu/~jkrause/cars/car_dataset.html) is currently inaccessible, leaving me uncertain about the specific split used in this implementation.
Could you kindly provide me with a detailed split list (rather than the raw images) used in your implementation of the Stanford Cars dataset?
Thank you for your attention to this matter. | closed | 2024-02-15T17:25:16Z | 2024-02-20T17:16:14Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/685 | [] | ppanzx | 1 |
deepfakes/faceswap | machine-learning | 1,206 | Convert invokes FFmpeg with redundant & conflicting arguments |
**Describe the bug**
FaceSwap convert invokes FFmpeg on the writer side with 2 sets of conflicting output codec options. The first set is generated by write_frames in imageio-ffmpeg, the second by output_params in convert's ffmpeg module.
/mnt/data/homedir/miniconda3/envs/faceswap/bin/ffmpeg -y -f rawvideo -vcodec rawvideo -s 3840x2160 -pix_fmt rgb24 -r 29.97 -i - -an **-vcodec libx264 -pix_fmt yuv420p -crf 25** -v error -vf scale=3840:2160 **-c:v libx264 -crf 23 -preset medium** /mnt/data/workspace/18/output.mp4
https://github.com/deepfakes/faceswap/blob/183aee37e93708c0ae73845face5b4469319ebd3/plugins/convert/writer/ffmpeg.py#L95
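ffmpeg generally resolves a repeated output option by applying the last occurrence (with a warning), so the writer's values win here. A small illustrative sketch of reading the effective `-crf` from the combined argument list (not faceswap's actual code):

```python
imageio_args = ["-vcodec", "libx264", "-pix_fmt", "yuv420p", "-crf", "25"]
writer_args = ["-c:v", "libx264", "-crf", "23", "-preset", "medium"]

args = imageio_args + writer_args
# Collect every value that follows a "-crf" flag:
crf_values = [args[i + 1] for i, a in enumerate(args) if a == "-crf"]
# ffmpeg warns about the duplicate and applies the last value:
effective_crf = crf_values[-1]
print(effective_crf)  # 23
```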
**To Reproduce**
Steps to reproduce the behavior:
1. Run a convert
2. Inspect ffmpeg arguments with `ps aux | grep ffmpeg`
**Expected behavior**
FFmpeg invocation should not have redundant/conflicting arguments.
**Desktop (please complete the following information):**
- OS: CentOS 8
- Python Version 3.6.8
- Conda Version [e.g. 4.5.12]
- Commit ID 09c7d8aca3c608d1afad941ea78e9fd9b64d9219
| closed | 2022-01-22T06:49:00Z | 2022-05-16T00:25:05Z | https://github.com/deepfakes/faceswap/issues/1206 | [] | HG4554 | 1 |
tensorflow/tensor2tensor | machine-learning | 1,707 | Transformer-XL gets unhappy with unexpected batch sizes | ### Description
When training a Transformer-xl (model: transformer_memory, hyperparameters:transformer_wikitext103_l4k_memory_v0), if the transformer encounters an unexpected batch size, it halts training. At least, I think that's what's happening. When I set the max_length=batch_size, and choose max_length so that every example fits exactly one batch, this error goes away.
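A sketch of where the odd batch likely comes from (my reading of the traceback below, not t2t's actual batching code): when the dataset size is not a multiple of the batch size, the final batch is smaller, and memory tensors built for the full batch dimension no longer concatenate with it.

```python
n_examples, batch_size = 11, 3

# Leading dimension of each batch produced by simple fixed-size batching:
batch_dims = [min(batch_size, n_examples - i) for i in range(0, n_examples, batch_size)]
print(batch_dims)  # [3, 3, 3, 2]

# The trailing 2 is the "[2, 64, 256] vs [3, 64, 256]" shape mismatch from
# the ConcatOp error; padding every batch up to batch_size avoids it.
```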
### Environment information
```
OS: Ubuntu 18
$ pip freeze | grep tensor
mesh-tensorflow==0.0.5
tensor2tensor==1.14.0
tensorboard==1.14.0
tensorflow-datasets==1.2.0
tensorflow-estimator==1.14.0
tensorflow-gan==1.0.0.dev0
tensorflow-gpu==1.14.0
tensorflow-metadata==0.14.0
tensorflow-model-optimization==0.1.3
tensorflow-probability==0.7.0
$ python -V
Python 2.7.16
$ python3 -V
Python 3.7.3
Error happens in both Python2 and Python3. I haven't tried with TF2 yet.
```
### For bugs: reproduction and error logs
```
# Steps to reproduce:
I fed in a new Problem with sequence length between 2k-3k tokens. I set the max_length and batch_size to 5k. Received the Traceback below.
```
```
# Error logs:
Traceback (most recent call last):
File "/home/tom/.local/bin/t2t-trainer", line 33, in <module>
tf.app.run()
File "/home/tom/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/home/tom/.local/lib/python2.7/site-packages/absl/app.py", line 299, in run
_run_main(main, args)
File "/home/tom/.local/lib/python2.7/site-packages/absl/app.py", line 250, in _run_main
sys.exit(main(argv))
File "/home/tom/.local/bin/t2t-trainer", line 28, in main
t2t_trainer.main(argv)
File "/home/tom/.local/lib/python2.7/site-packages/tensor2tensor/bin/t2t_trainer.py", line 412, in main
execute_schedule(exp)
File "/home/tom/.local/lib/python2.7/site-packages/tensor2tensor/bin/t2t_trainer.py", line 367, in execute_schedule
getattr(exp, FLAGS.schedule)()
File "/home/tom/.local/lib/python2.7/site-packages/tensor2tensor/utils/trainer_lib.py", line 456, in continuous_train_and_eval
self._eval_spec)
File "/home/tom/.local/lib/python2.7/site-packages/tensorflow_estimator/python/estimator/training.py", line 473, in train_and_evaluate
return executor.run()
File "/home/tom/.local/lib/python2.7/site-packages/tensorflow_estimator/python/estimator/training.py", line 613, in run
return self.run_local()
File "/home/tom/.local/lib/python2.7/site-packages/tensorflow_estimator/python/estimator/training.py", line 714, in run_local
saving_listeners=saving_listeners)
File "/home/tom/.local/lib/python2.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 367, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/home/tom/.local/lib/python2.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1158, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/home/tom/.local/lib/python2.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1192, in _train_model_default
saving_listeners)
File "/home/tom/.local/lib/python2.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1484, in _train_with_estimator_spec
_, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
File "/home/tom/.local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 754, in run
run_metadata=run_metadata)
File "/home/tom/.local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 1252, in run
run_metadata=run_metadata)
File "/home/tom/.local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 1353, in run
raise six.reraise(*original_exc_info)
File "/home/tom/.local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 1338, in run
return self._sess.run(*args, **kwargs)
File "/home/tom/.local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 1411, in run
run_metadata=run_metadata)
File "/home/tom/.local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 1169, in run
return self._sess.run(*args, **kwargs)
File "/home/tom/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 950, in run
run_metadata_ptr)
File "/home/tom/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1173, in _run
feed_dict_tensor, options, run_metadata)
File "/home/tom/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1350, in _do_run
run_metadata)
File "/home/tom/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1370, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: ConcatOp : Dimensions of inputs should match: shape[0] = [2,64,256] vs. shape[1] = [3,64,256]
[[node transformer_memory/parallel_0_5/transformer_memory/transformer_memory/body/decoder/layer_0/self_attention/multihead_attention/concat (defined at home/tom/.local/lib/python2.7/site-packages/tensor2tensor/layers/transformer_memory.py:137) ]]
[[transformer_memory/parallel_0_5/transformer_memory/transformer_memory/body/decoder/layer_7/self_attention/multihead_attention/Pad/_2497]]
(1) Invalid argument: ConcatOp : Dimensions of inputs should match: shape[0] = [2,64,256] vs. shape[1] = [3,64,256]
```
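For anyone skimming the traceback: the root error is the standard concat shape rule, with dim 0 disagreeing (2 vs. 3). A minimal plain-Python illustration of the constraint the error enforces (no TensorFlow involved; `can_concat` is just an illustrative helper, not part of the codebase):

```python
# Concatenating along one axis requires every other axis to match exactly.
def can_concat(shape_a, shape_b, axis):
    return all(a == b
               for i, (a, b) in enumerate(zip(shape_a, shape_b))
               if i != axis)

print(can_concat([2, 64, 256], [3, 64, 256], axis=1))  # False: dim 0 differs (2 vs 3)
print(can_concat([2, 64, 256], [2, 64, 300], axis=2))  # True: only the concat axis differs
```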
| open | 2019-09-22T20:23:32Z | 2019-09-22T20:36:51Z | https://github.com/tensorflow/tensor2tensor/issues/1707 | [] | tomweingarten | 1 |
roboflow/supervision | pytorch | 1,513 | Looking for a Model or Dataset for Detecting Objects Held in Hand | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Hi,
I’m trying to detect objects held in hand. Do you know of any models or datasets that are well-suited for this task?
If labeling is required, would it be better to use `YOLO-world` for bbox grounding?
Additionally, there are a large number of product classes involved. I’m wondering if it would be better to only detect objects and handle class recognition through retrieval methods.
Thank you!
### Additional
_No response_ | closed | 2024-09-13T23:56:54Z | 2024-09-14T19:59:05Z | https://github.com/roboflow/supervision/issues/1513 | [
"question"
] | YoungjaeDev | 1 |
mljar/mljar-supervised | scikit-learn | 23 | Add support for new data types | Right now there is support for numerical and categorical data types. There is a need to support more data types:
- [x] text (#128)
- [x] dates (#122)
- [ ] IP
- [ ] geo locations | open | 2019-05-22T12:35:14Z | 2020-07-20T13:27:02Z | https://github.com/mljar/mljar-supervised/issues/23 | [
"enhancement",
"help wanted"
] | pplonski | 0 |
ibis-project/ibis | pandas | 10,636 | docs: Cloud backend support policy duplicated in dropdown menu | ### Please describe the issue
I think we can move the [cloud_support_policy.qmd](https://github.com/ibis-project/ibis/blob/main/docs/backends/cloud_support_policy.qmd) file into **ibis/blob/main/docs/backends/support** so that it renders only once.
<div style="display: flex; justify-content: space-between;">
<img src="https://github.com/user-attachments/assets/3b779efd-6a11-4aa8-b701-abb76fa3a1e4" alt="Image 1" style="width: 45%; margin-right: 5%;">
<img src="https://github.com/user-attachments/assets/8e16bc07-cd95-4515-bac7-b4fafedc96a7" alt="Image 2" style="width: 45%;">
</div>
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | closed | 2024-12-31T14:19:27Z | 2024-12-31T15:46:41Z | https://github.com/ibis-project/ibis/issues/10636 | [
"docs"
] | IndexSeek | 1 |
KaiyangZhou/deep-person-reid | computer-vision | 490 | Unable to reproduce market1501 when training from scratch | Hi, I am modifying the models based on OSNet, so I decided to train OSNet from scratch to compare apples to apples. I am trying to reproduce the performance on market1501 as stated in the paper, and get the result below:

It is far from what is stated in the paper (mAP 81.0), so I am wondering if I have missed anything in the configuration
```
adam:
beta1: 0.9
beta2: 0.999
cuhk03:
classic_split: False
labeled_images: False
use_metric_cuhk03: False
data:
combineall: False
height: 256
k_tfm: 1
load_train_targets: False
norm_mean: [0.485, 0.456, 0.406]
norm_std: [0.229, 0.224, 0.225]
root: data
save_dir: log/osnet_x1_0_market1501_softmax_sgd
sources: ['market1501']
split_id: 0
targets: ['market1501']
transforms: ['random_flip']
type: image
width: 128
workers: 4
loss:
name: softmax
softmax:
label_smooth: True
triplet:
margin: 0.3
weight_t: 1.0
weight_x: 0.0
market1501:
use_500k_distractors: False
model:
fusion:
load_weights:
name: osnet_x1_0
pretrained: False
resume:
rmsprop:
alpha: 0.99
sampler:
num_cams: 1
num_datasets: 1
num_instances: 4
train_sampler: RandomSampler
train_sampler_t: RandomSampler
sgd:
dampening: 0.0
momentum: 0.9
nesterov: False
test:
batch_size: 300
dist_metric: euclidean
eval_freq: 10
evaluate: False
normalize_feature: False
ranks: [1, 5, 10, 20]
rerank: False
start_eval: 0
visrank: False
visrank_topk: 10
train:
base_lr_mult: 0.1
batch_size: 64
fixbase_epoch: 0
gamma: 0.1
lr: 0.065
lr_scheduler: multi_step
max_epoch: 350
new_layers: ['classifier']
open_layers: ['classifier']
optim: sgd
print_freq: 20
seed: 1
staged_lr: False
start_epoch: 0
stepsize: [150, 225, 300]
weight_decay: 0.0005
use_gpu: True
video:
pooling_method: avg
sample_method: evenly
seq_len: 15
``` | closed | 2022-02-01T18:37:57Z | 2022-02-02T06:25:38Z | https://github.com/KaiyangZhou/deep-person-reid/issues/490 | [] | zye1996 | 1 |
graphql-python/graphene-django | graphql | 503 | DjangoObjectType interprets string choices fields as required in schema even with blank=True attribute | To reproduce this problem, create a Django model with a choices field with `blank=True`:
    class MyModel(models.Model):
        PERIODIC_INTERVAL_CHOICES = (('Weekly', 'Weekly'),
                                     ('Bi-Weekly', 'Bi-Weekly'),
                                     ('Monthly', 'Monthly'),
                                     ('Quarterly', 'Quarterly'),
                                     ('Semi-Annually', 'Semi-Annually'),
                                     ('Annually', 'Annually'))

        payment_frequency = models.CharField(
            blank=True,
            choices=PERIODIC_INTERVAL_CHOICES,
            max_length=13)
The schema generated by DjangoObjectType will incorrectly show that this field `payment_frequency` is required. This is wrong -- the expected behavior is that it is NOT required.
I have been able to fix this issue by patching how the value of required is determined in `graphene_django.converter.convert_django_field_with_choices` (shown for graphene-django 2.0.0):
    def convert_django_field_with_choices(field, registry=None):
        # Modified from graphene_django.converter import convert_django_field_with_choices
        # to adjust "required"
        choices = getattr(field, 'choices', None)
        if choices:
            meta = field.model._meta
            name = to_camel_case('{}_{}'.format(meta.object_name, field.name))
            choices = list(get_choices(choices))
            named_choices = [(c[0], c[1]) for c in choices]
            named_choices_descriptions = {c[0]: c[2] for c in choices}

            class EnumWithDescriptionsType(object):
                @property
                def description(self):
                    return named_choices_descriptions[self.name]

            enum = Enum(name, list(named_choices), type=EnumWithDescriptionsType)
            required = not (field.blank or field.null or field.default)  # MODIFIED FROM ORIGINAL
            return enum(description=field.help_text, required=required)
        return convert_django_field(field, registry)
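To make the effect of the modified line concrete, here is a minimal self-contained illustration (the `FakeField` class below is a stand-in for a Django field, not real Django):

```python
class FakeField:
    """Stand-in with the three attributes the patched line inspects."""
    def __init__(self, blank=False, null=False, default=None):
        self.blank = blank
        self.null = null
        self.default = default

def is_required(field):
    # Mirrors the modified line: blank=True, null=True, or a default
    # all make the field optional in the generated schema.
    return not (field.blank or field.null or field.default)

print(is_required(FakeField(blank=True)))  # False: blank=True -> not required
print(is_required(FakeField()))            # True: nothing set -> required
```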
| closed | 2018-08-23T15:07:14Z | 2020-06-16T19:07:15Z | https://github.com/graphql-python/graphene-django/issues/503 | [] | picturedots | 8 |
yt-dlp/yt-dlp | python | 12,220 | Can't get "chapter", "chapter_number" fields when using --split-chapters | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm asking a question and **not** reporting a bug or requesting a feature
- [x] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
Command:
```
yt-dlp -f 399+251 --split-chapters -o chapter:"%(chapter_number)s-%(chapter)s - %(title)s.%(ext)s" -P chapter:chapters -vU "https://www.youtube.com/watch?v=93Sfl5rRBGw"
```
Output:
```
NA-NA - You Ghost My Heart.webm
```
Can't get chapter and chapter_number fields.
Is it my fault or a bug?
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-f', '399+251', '--split-chapters', '-o', 'chapter:%(chapter_number)s-%(chapter)s - %(title)s.%(ext)s', '-P', 'chapter:chapters', '-vU', 'https://www.youtube.com/watch?v=93Sfl5rRBGw']
[debug] Encodings: locale cp932, fs utf-8, pref cp932, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2025.01.26.034637 from yt-dlp/yt-dlp-nightly-builds [3b4531934] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg n7.1-184-gdc07f98934-20250127 (setts), ffprobe n7.1-184-gdc07f98934-20250127
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.2
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2025.01.26.034637 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2025.01.26.034637 from yt-dlp/yt-dlp-nightly-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=93Sfl5rRBGw
[youtube] 93Sfl5rRBGw: Downloading webpage
[youtube] 93Sfl5rRBGw: Downloading tv client config
[youtube] 93Sfl5rRBGw: Downloading player 1080ef44
[youtube] 93Sfl5rRBGw: Downloading tv player API JSON
[youtube] 93Sfl5rRBGw: Downloading ios player API JSON
[debug] Loading youtube-nsig.1080ef44 from cache
[debug] [youtube] Decrypted nsig uQxkG8oW96B7jVVR => QwEpmQsgY2YrMQ
[debug] Loading youtube-nsig.1080ef44 from cache
[debug] [youtube] Decrypted nsig J9PwcH1xPFJEzPgQ => PwH1BK3Nm-U6cQ
[debug] [youtube] 93Sfl5rRBGw: ios client https formats require a GVS PO Token which was not provided. They will be skipped as they may yield HTTP Error 403. You can manually pass a GVS PO Token for this client with --extractor-args "youtube:po_token=ios.gvs+XXX". For more information, refer to https://github.com/yt-dlp/yt-dlp/wiki/PO-Token-Guide . To enable these broken formats anyway, pass --extractor-args "youtube:formats=missing_pot"
[youtube] 93Sfl5rRBGw: Downloading m3u8 information
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[info] 93Sfl5rRBGw: Downloading 1 format(s): 399+251
[download] You Ghost My Heart [93Sfl5rRBGw].webm has already been downloaded
[SplitChapters] Splitting video by chapters; 15 chapters found
[SplitChapters] Chapter 001; Destination: chapters\NA-NA - You Ghost My Heart.webm
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -ss 0.0 -t 115.0 -i "file:You Ghost My Heart [93Sfl5rRBGw].webm" -map 0 -dn -ignore_unknown -c copy -movflags +faststart "file:chapters\NA-NA - You Ghost My Heart.webm"
[SplitChapters] Chapter 002; Destination: chapters\NA-NA - You Ghost My Heart.webm
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -ss 115.0 -t 101.0 -i "file:You Ghost My Heart [93Sfl5rRBGw].webm" -map 0 -dn -ignore_unknown -c copy -movflags +faststart "file:chapters\NA-NA - You Ghost My Heart.webm"
[SplitChapters] Chapter 003; Destination: chapters\NA-NA - You Ghost My Heart.webm
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -ss 216.0 -t 168.0 -i "file:You Ghost My Heart [93Sfl5rRBGw].webm" -map 0 -dn -ignore_unknown -c copy -movflags +faststart "file:chapters\NA-NA - You Ghost My Heart.webm"
[SplitChapters] Chapter 004; Destination: chapters\NA-NA - You Ghost My Heart.webm
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -ss 384.0 -t 149.0 -i "file:You Ghost My Heart [93Sfl5rRBGw].webm" -map 0 -dn -ignore_unknown -c copy -movflags +faststart "file:chapters\NA-NA - You Ghost My Heart.webm"
[SplitChapters] Chapter 005; Destination: chapters\NA-NA - You Ghost My Heart.webm
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -ss 533.0 -t 97.0 -i "file:You Ghost My Heart [93Sfl5rRBGw].webm" -map 0 -dn -ignore_unknown -c copy -movflags +faststart "file:chapters\NA-NA - You Ghost My Heart.webm"
[SplitChapters] Chapter 006; Destination: chapters\NA-NA - You Ghost My Heart.webm
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -ss 630.0 -t 159.0 -i "file:You Ghost My Heart [93Sfl5rRBGw].webm" -map 0 -dn -ignore_unknown -c copy -movflags +faststart "file:chapters\NA-NA - You Ghost My Heart.webm"
[SplitChapters] Chapter 007; Destination: chapters\NA-NA - You Ghost My Heart.webm
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -ss 789.0 -t 93.0 -i "file:You Ghost My Heart [93Sfl5rRBGw].webm" -map 0 -dn -ignore_unknown -c copy -movflags +faststart "file:chapters\NA-NA - You Ghost My Heart.webm"
[SplitChapters] Chapter 008; Destination: chapters\NA-NA - You Ghost My Heart.webm
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -ss 882.0 -t 209.0 -i "file:You Ghost My Heart [93Sfl5rRBGw].webm" -map 0 -dn -ignore_unknown -c copy -movflags +faststart "file:chapters\NA-NA - You Ghost My Heart.webm"
[SplitChapters] Chapter 009; Destination: chapters\NA-NA - You Ghost My Heart.webm
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -ss 1091.0 -t 208.0 -i "file:You Ghost My Heart [93Sfl5rRBGw].webm" -map 0 -dn -ignore_unknown -c copy -movflags +faststart "file:chapters\NA-NA - You Ghost My Heart.webm"
[SplitChapters] Chapter 010; Destination: chapters\NA-NA - You Ghost My Heart.webm
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -ss 1299.0 -t 157.0 -i "file:You Ghost My Heart [93Sfl5rRBGw].webm" -map 0 -dn -ignore_unknown -c copy -movflags +faststart "file:chapters\NA-NA - You Ghost My Heart.webm"
[SplitChapters] Chapter 011; Destination: chapters\NA-NA - You Ghost My Heart.webm
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -ss 1456.0 -t 104.0 -i "file:You Ghost My Heart [93Sfl5rRBGw].webm" -map 0 -dn -ignore_unknown -c copy -movflags +faststart "file:chapters\NA-NA - You Ghost My Heart.webm"
[SplitChapters] Chapter 012; Destination: chapters\NA-NA - You Ghost My Heart.webm
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -ss 1560.0 -t 104.0 -i "file:You Ghost My Heart [93Sfl5rRBGw].webm" -map 0 -dn -ignore_unknown -c copy -movflags +faststart "file:chapters\NA-NA - You Ghost My Heart.webm"
[SplitChapters] Chapter 013; Destination: chapters\NA-NA - You Ghost My Heart.webm
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -ss 1664.0 -t 95.0 -i "file:You Ghost My Heart [93Sfl5rRBGw].webm" -map 0 -dn -ignore_unknown -c copy -movflags +faststart "file:chapters\NA-NA - You Ghost My Heart.webm"
[SplitChapters] Chapter 014; Destination: chapters\NA-NA - You Ghost My Heart.webm
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -ss 1759.0 -t 71.0 -i "file:You Ghost My Heart [93Sfl5rRBGw].webm" -map 0 -dn -ignore_unknown -c copy -movflags +faststart "file:chapters\NA-NA - You Ghost My Heart.webm"
[SplitChapters] Chapter 015; Destination: chapters\NA-NA - You Ghost My Heart.webm
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -ss 1830.0 -t 1829.0 -i "file:You Ghost My Heart [93Sfl5rRBGw].webm" -map 0 -dn -ignore_unknown -c copy -movflags +faststart "file:chapters\NA-NA - You Ghost My Heart.webm"
``` | closed | 2025-01-28T08:25:10Z | 2025-01-28T21:57:07Z | https://github.com/yt-dlp/yt-dlp/issues/12220 | [
"question"
] | safethumb | 6 |
vaexio/vaex | data-science | 1687 | Parquet to csv export slow performance | Hi, I am currently doing a PoC to see if Vaex can fit my use case. I found Vaex lightning fast at reading parquet; however, when I need to unload parquet to csv it takes a long time. I tried different small chunk sizes (100k to 1m) on 4 million rows with 266 columns and a compressed file size of ~2.3gb.
So the use case is to unload parquet to csv with a 1m chunksize, which splits evenly into 4 csv files; however, it takes 19mins from read to export.
**System configuration:**
- EC2 r6.4xlarge (16 cores, 128gb), 400gb SSD
- vaex package 4.5
- Parquet File location S3 (if process locally it will take 15mins instead of 19mins)
- CSV files location S3
- Data: string, float, int (mix) 30% string
Code:
    import vaex as vx
    df = vx.open('s3://path/to/file.parquet')
    df.export_many('s3://path/to/file-{i:02}.csv', chunksize=100_000)
I am looking to improve performance. I tried looking through other closed and open issues but haven't got to the bottom of whether there is a way to improve the timing and utilise the maximum available capacity on the server:
- Not all cores are being used (only 1-2 out of 16) - is there an option to force vaex to use all cores?
- how to reduce processing time on export from Parquet?
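One workaround sketch I would try (an untested assumption on my side, not an official vaex recipe): compute the chunk boundaries manually and export each slice from its own process, so more than one or two cores do work. The vaex calls in `export_chunk` mirror the snippet above; `chunk_ranges` is plain Python.

```python
from multiprocessing import Pool

def chunk_ranges(n_rows, chunksize):
    """(start, stop) row ranges covering n_rows in chunksize pieces."""
    return [(i, min(i + chunksize, n_rows)) for i in range(0, n_rows, chunksize)]

def export_chunk(args):
    start, stop, idx = args
    import vaex as vx  # assumption: each worker re-opens the dataset itself
    df = vx.open('s3://path/to/file.parquet')
    df[start:stop].export_csv(f's3://path/to/file-{idx:02}.csv')

if __name__ == '__main__':
    ranges = chunk_ranges(4_000_000, 1_000_000)
    print(ranges)  # 4 even (start, stop) ranges of 1m rows each
    # Pool(4).map(export_chunk, [(a, b, i) for i, (a, b) in enumerate(ranges)])
```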
Hopefully someone has already gone through such a scenario; any help is much appreciated.
| open | 2021-11-09T12:56:21Z | 2021-11-17T09:23:29Z | https://github.com/vaexio/vaex/issues/1687 | [] | ighori | 1 |
thtrieu/darkflow | tensorflow | 936 | asking about Box color and meta file | When I test my trained model, I find that the bounding box color is white.

How can I change the color?
And can I get the box data (like xml or txt)?
Last question: only a few meta files remain (only 20?).
How can I save all of them?
Thank you for reading. | open | 2018-11-20T04:48:30Z | 2018-11-23T05:10:19Z | https://github.com/thtrieu/darkflow/issues/936 | [] | murras | 2 |
feature-engine/feature_engine | scikit-learn | 287 | feat: Custom threshold in SmartCorrelatedFeatures | **Is your feature request related to a problem? Please describe.**
Currently `SmartCorrelatedFeatures` takes correlation measures that have a similar range (between -1 and +1) and selects features by a fixed threshold value (defaults to 0.8).
**Describe the solution you'd like**
Extend the `threshold` selection to a custom function in order to be able to apply other correlation measures that have a different range, such as mutual information. Other measures to consider can be found [here](https://m-clark.github.io/docs/CorrelationComparison.pdf).
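To illustrate the idea with plain Python (hypothetical names, not the actual feature_engine API): make the threshold rule a callable alongside the metric, so measures with any range plug in:

```python
def correlated_pairs(features, metric, is_correlated):
    """features: dict name -> values; is_correlated: callable on the metric value."""
    names = list(features)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if is_correlated(metric(features[a], features[b])):
                pairs.append((a, b))
    return pairs

# Toy metric standing in for any association measure (e.g. mutual information).
toy = lambda x, y: 1.0 if x == y else 0.0
data = {"f1": [1, 2, 3], "f2": [1, 2, 3], "f3": [9, 9, 9]}
print(correlated_pairs(data, toy, lambda v: v >= 0.8))  # [('f1', 'f2')]
```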
**Describe alternatives you've considered**
An alternative is to create a separate class. | open | 2021-07-15T13:08:33Z | 2021-08-09T08:48:21Z | https://github.com/feature-engine/feature_engine/issues/287 | [
"new transformer"
] | TremaMiguel | 9 |
ClimbsRocks/auto_ml | scikit-learn | 272 | create advanced scoring logging for multi-class classification | open | 2017-07-05T22:26:48Z | 2017-07-14T15:24:20Z | https://github.com/ClimbsRocks/auto_ml/issues/272 | [] | ClimbsRocks | 1 | |
jonaswinkler/paperless-ng | django | 1,508 | [Feature Request] Allow RO source folder for Consume | I currently have a folder on my NAS which has all of my documents etc in it; I dump stuff in there, in the right folder structure, from my laptop, and it all gets backed up to B2 etc.
I'd really like to use this as the source folder for Paperless-ng to consume from. However, I absolutely _don't_ want the files in there to be altered, moved or deleted. So what I'd really like to do is set up a read-only consume folder, where Paperless picks up new or altered files, and consumes them as normal, but leaving them alone rather than the usual behaviour of moving/deleting them after consumption.
I gather from a [reddit thread](https://www.reddit.com/r/selfhosted/comments/rmv2nt/paperlessng_using_an_ro_share_as_the_consume/) that this isn't possible, because if I start up my docker container with the consume folder mount set to RO, Paperless won't start.
Is this something that can be configured or added as a new feature? | open | 2021-12-24T10:01:43Z | 2022-11-15T04:44:54Z | https://github.com/jonaswinkler/paperless-ng/issues/1508 | [] | Webreaper | 1 |
aidlearning/AidLearning-FrameWork | jupyter | 233 | Meizu seems unusable | As soon as I open aidlux it starts throwing errors and then enters error mode. SSH also cannot connect to the desktop on localhost. I checked the ports and it seems none of them are open; what is going on? Is there a fix? | closed | 2023-12-26T16:27:49Z | 2024-04-19T20:18:25Z | https://github.com/aidlearning/AidLearning-FrameWork/issues/233 | [] | Pinglewin | 1 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 81 | Add CURL loss? | I came across an interesting looking loss in this [paper](https://arxiv.org/abs/1902.09229) that the authors call [CURL](https://arxiv.org/abs/1902.09229) (not to be confused with this other [CURL](https://arxiv.org/abs/2004.04136)). I was wondering if this idea could be ported to the library?

The loss is given in equation 12 under section 6.3; it is used to improve performance over strong baselines in both image and text representation learning. It looks a little like the InfoNCE/NT-Xent losses, but there are two sums introduced over "blocks" of positives and "blocks" of negatives.
I thought about trying to implement it myself and open a pull request but I wanted to first get the repo maintainers' opinion on whether or not this would be possible, and if so what the best way to approach it might be. Thanks!
> I reached out to the authors [via their blog](http://disq.us/p/29008i9) to see if they have an implementation somewhere.
| closed | 2020-05-01T00:08:25Z | 2020-05-07T14:18:46Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/81 | [
"new algorithm request"
] | JohnGiorgi | 2 |
pyqtgraph/pyqtgraph | numpy | 3,194 | Internal C++ object (AxisItem) already deleted | ### Short description
I'm running pyqtgraph inside a python app created for a FEA simulation program. The program often crashes if many updates are triggered in the plot.
### Code to reproduce
<img width="931" alt="image" src="https://github.com/user-attachments/assets/66136c25-a41c-44a3-94b5-33eb9e69e231">
| open | 2024-12-03T16:32:42Z | 2024-12-26T15:08:05Z | https://github.com/pyqtgraph/pyqtgraph/issues/3194 | [] | FOkigami | 3 |
modelscope/modelscope | nlp | 709 | MsDataset: loading different chunk at a time. | Hi,
I'm facing a memory error when trying to load a large dataset (AI-ModelScope/stack-exchange-paired) using ModelScope. Here's the code:
```
from modelscope.msdatasets import MsDataset
ds = MsDataset.load('AI-ModelScope/stack-exchange-paired', subset_name='finetune', split='train')
```
Is there a way to load this dataset in smaller chunks to avoid this error?
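A generic chunked-iteration pattern (plain Python, independent of the ModelScope API; `stream` below is a stand-in iterable) for processing a fixed number of records at a time:

```python
from itertools import islice

def iter_chunks(records, chunk_size):
    """Yield lists of up to chunk_size items from any iterable."""
    it = iter(records)
    while chunk := list(islice(it, chunk_size)):
        yield chunk

stream = range(10)  # stand-in for a streamed record source
for chunk in iter_chunks(stream, 4):
    print(chunk)  # [0, 1, 2, 3] then [4, 5, 6, 7] then [8, 9]
```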
**i.e., I want to load one chunk, then another chunk, and another, instead of loading them all at once. Is there a way for me to do this?** | closed | 2024-01-07T05:01:36Z | 2024-06-06T01:53:58Z | https://github.com/modelscope/modelscope/issues/709 | [
"Stale"
] | candygocandy | 2 |
sqlalchemy/alembic | sqlalchemy | 422 | Support Comments on Table / Columns | **Migrated issue, originally created by Brice Maron ([@emerzh](https://github.com/emerzh))**
Hi,
it seems that sqlalchemy supports comments on objects
(https://bitbucket.org/zzzeek/sqlalchemy/issues/1546/feature-request-commenting-db-objects)
which is awesome!
it could be really cool if it could be integrated into alembic as well
| closed | 2017-03-17T21:20:16Z | 2019-01-10T02:09:50Z | https://github.com/sqlalchemy/alembic/issues/422 | [
"feature",
"autogenerate - rendering"
] | sqlalchemy-bot | 18 |
google-research/bert | tensorflow | 1069 | pretraining BERT CASED model gives lower accuracy than UNCASED | I pretrained both BERT uncased and BERT cased models using the same hyperparameters (those for the uncased model) on Wikipedia and BookCorpus, but the BERT cased models perform worse than the Google checkpoints on downstream tasks. Did you pretrain the cased models differently? Could you share the hyperparameters?
Thanks! | open | 2020-04-21T23:07:16Z | 2020-04-22T00:27:34Z | https://github.com/google-research/bert/issues/1069 | [] | yzhang123 | 1 |
flasgger/flasgger | flask | 359 | I'd like to test flasgger compiles correctly | Hi! I'd like to write unit tests to ensure my endpoints are properly documented. I was wondering what sort of introspection is available? If not much, maybe I could contribute something? | open | 2020-01-24T19:05:24Z | 2020-05-06T07:23:37Z | https://github.com/flasgger/flasgger/issues/359 | [] | alexjdw | 1 |
Kanaries/pygwalker | matplotlib | 13 | Add to Vega-Lite ecosystem page | Since you already added graphic walker, you could also add this to https://vega.github.io/vega-lite/ecosystem.html. | closed | 2023-02-21T04:26:42Z | 2023-02-21T21:01:13Z | https://github.com/Kanaries/pygwalker/issues/13 | [] | domoritz | 1 |
benbusby/whoogle-search | flask | 1195 | [BUG] PermissionError: [Errno 13] Permission denied: '/whoogle/app/static/css/input.css' -> '/whoogle/app/static/build/input.61ccbb50.css' | **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'docker logs'
2. Click on 'enter'
3. Scroll down to 'the whole page'
4. See error
**Deployment Method**
- [ ] Heroku (one-click deploy)
- [x] Docker
- [ ] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [x] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [ ] Version [version number]
- [ ] Not sure
**Desktop (please complete the following information):**
- OS: [e.g. iOS] Windows 11
- Browser [e.g. chrome, safari]: Chrome, Edge
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6] none
- OS: [e.g. iOS8.1] none
- Browser [e.g. stock browser, safari] none
- Version [e.g. 22] none
**Additional context**
I overrode the HTML file, but I got the PermissionError: [Errno 13] Permission denied: '/whoogle/app/static/css/input.css' -> '/whoogle/app/static/build/input.61ccbb50.css'. Could anyone help with this issue? | closed | 2024-11-03T11:54:05Z | 2025-01-22T19:19:09Z | https://github.com/benbusby/whoogle-search/issues/1195 | [
"bug"
] | lyknny | 2 |
slackapi/python-slack-sdk | asyncio | 1450 | Add "slack_file" properties to "image" blocks/elements under slack_sdk.models | The "image" blocks and block elements can now have "slack_file", which refers to an uploaded image file within Slack, instead of "image_url", which must be a publicly hosted one. The Block Kit class representations in this SDK should add support for these new options.
References:
* https://api.slack.com/reference/block-kit/blocks#image_fields
* https://api.slack.com/reference/block-kit/block-elements#image
* https://api.slack.com/reference/block-kit/composition-objects#slack_file
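For reference, the payload shape the model classes would need to emit looks roughly like this (the file id is a made-up placeholder; the linked composition-object docs are authoritative):

```python
# An "image" block referencing an uploaded Slack file instead of a public URL.
# Per the docs, the slack_file object carries either "id" or "url".
image_block = {
    "type": "image",
    "alt_text": "an uploaded image",
    "slack_file": {"id": "F0123456789"},  # placeholder id
}

assert "image_url" not in image_block  # slack_file replaces the public URL
print(image_block["slack_file"]["id"])  # F0123456789
```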
### Category (place an `x` in each of the `[ ]`)
- [ ] **slack_sdk.web.WebClient (sync/async)** (Web API client)
- [ ] **slack_sdk.webhook.WebhookClient (sync/async)** (Incoming Webhook, response_url sender)
- [x] **slack_sdk.models** (UI component builders)
- [ ] **slack_sdk.oauth** (OAuth Flow Utilities)
- [ ] **slack_sdk.socket_mode** (Socket Mode client)
- [ ] **slack_sdk.audit_logs** (Audit Logs API client)
- [ ] **slack_sdk.scim** (SCIM API client)
- [ ] **slack_sdk.rtm** (RTM client)
- [ ] **slack_sdk.signature** (Request Signature Verifier)
### Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2024-01-23T08:15:42Z | 2024-01-31T08:38:34Z | https://github.com/slackapi/python-slack-sdk/issues/1450 | [
"enhancement",
"web-client",
"Version: 3x"
] | seratch | 0 |
onnx/onnxmltools | scikit-learn | 677 | UNABLE_TO_INFER_SCHEMA on pyspark | I get the error below when converting a sparkml model to ONNX:
```
An error was encountered:
AnalysisException
[Traceback (most recent call last):
, File "/tmp/spark-591fcd26-f35c-4194-9d93-9e4fa0b7a634/shell_wrapper.py", line 113, in exec
self._exec_then_eval(code)
, File "/tmp/spark-591fcd26-f35c-4194-9d93-9e4fa0b7a634/shell_wrapper.py", line 106, in _exec_then_eval
exec(compile(last, '<string>', 'single'), self.globals)
, File "<string>", line 1, in <module>
, File "/home/user/work/.python_libs/lib/python3.10/site-packages/onnxmltools/convert/main.py", line 302, in convert_sparkml
return convert(
, File "/home/user/work/.python_libs/lib/python3.10/site-packages/onnxmltools/convert/sparkml/convert.py", line 101, in convert
onnx_model = convert_topology(
, File "/home/user/work/.python_libs/lib/python3.10/site-packages/onnxconverter_common/topology.py", line 776, in convert_topology
get_converter(operator.type)(scope, operator, container)
, File "/home/user/work/.python_libs/lib/python3.10/site-packages/onnxmltools/convert/sparkml/operator_converters/random_forest_regressor.py", line 31, in convert_random_forest_regressor
tree_df = save_read_sparkml_model_data(
, File "/home/user/work/.python_libs/lib/python3.10/site-packages/onnxmltools/convert/sparkml/operator_converters/tree_ensemble_common.py", line 113, in save_read_sparkml_model_data
df = spark.read.parquet(os.path.join(path, "data"))
, File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 531, in parquet
return self._df(self._jreader.parquet(_to_seq(self._spark._sc, paths)))
, File "/opt/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/java_gateway.py", line 1322, in __call__
return_value = get_return_value(
, File "/opt/spark/python/lib/pyspark.zip/pyspark/errors/exceptions/captured.py", line 175, in deco
raise converted from None
, pyspark.errors.exceptions.captured.AnalysisException: [UNABLE_TO_INFER_SCHEMA] Unable to infer schema for Parquet. It must be specified manually.
]
```
print(initial_types)
```
[('some_column_name', StringTensorType(shape=[None, 1])), ('var1', StringTensorType(shape=[None, 1])), ('var2', FloatTensorType(shape=[None, 1])), ('var3', FloatTensorType(shape=[None, 1])), ('var4', FloatTensorType(shape=[None, 1])), ('var5', FloatTensorType(shape=[None, 1])), ('var6', FloatTensorType(shape=[None, 1])), ('var7', FloatTensorType(shape=[None, 1])), ('var9', FloatTensorType(shape=[None, 1]))]
```
### The model I use is from a pipeline RandomForest
onnx_model = convert_sparkml(model, 'pyspark test', initial_types, spark_session = spark)
| open | 2024-01-18T08:22:08Z | 2024-01-18T08:24:57Z | https://github.com/onnx/onnxmltools/issues/677 | [] | cometta | 0 |
google/seq2seq | tensorflow | 174 | KeyError: 'attention_scores' when setting unk_replace to True | I am getting the following error when I ran decoding with `unk_replace`. The training was done with `nmt_medium.yml`, which is using `AttentionSeq2Seq` model.
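Editor's hedged aside before the traceback: the failing line in `seq2seq/tasks/decode_text.py` does a hard `fetches["attention_scores"]` lookup. A defensive sketch of that lookup (my workaround idea, not the project's fix; it assumes the restored model's prediction dict simply lacks the key):

```python
# Stand-in for the `fetches` dict that the after_run() hook receives;
# in this bug it has no "attention_scores" entry, so indexing raises.
fetches = {"predicted_tokens": [["hello", "world"]]}

attention_scores = fetches.get("attention_scores")  # None instead of KeyError
if attention_scores is None:
    print("attention_scores missing; skipping unk_replace for this batch")
```

Whether the scores can be recovered at all presumably depends on the model actually exposing attention in its predictions.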
```
Traceback (most recent call last):
File "/Users/png/.pyenv/versions/3.5.3/lib/python3.5/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/Users/png/.pyenv/versions/3.5.3/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/Users/png/repos/seq2seq/bin/infer.py", line 129, in <module>
tf.app.run()
File "/Users/png/.pyenv/versions/tf-seq2seq/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 44, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "/Users/png/repos/seq2seq/bin/infer.py", line 125, in main
sess.run([])
File "/Users/png/.pyenv/versions/tf-seq2seq/lib/python3.5/site-packages/tensorflow/python/training/monitored_session.py", line 462, in run
run_metadata=run_metadata)
File "/Users/png/.pyenv/versions/tf-seq2seq/lib/python3.5/site-packages/tensorflow/python/training/monitored_session.py", line 786, in run
run_metadata=run_metadata)
File "/Users/png/.pyenv/versions/tf-seq2seq/lib/python3.5/site-packages/tensorflow/python/training/monitored_session.py", line 744, in run
return self._sess.run(*args, **kwargs)
File "/Users/png/.pyenv/versions/tf-seq2seq/lib/python3.5/site-packages/tensorflow/python/training/monitored_session.py", line 899, in run
run_metadata=run_metadata))
File "/Users/png/repos/seq2seq/seq2seq/tasks/decode_text.py", line 172, in after_run
attention_scores = fetches["attention_scores"]
KeyError: 'attention_scores'
``` | open | 2017-04-18T07:46:12Z | 2017-08-01T11:11:59Z | https://github.com/google/seq2seq/issues/174 | [] | pnpnpn | 10 |
nltk/nltk | nlp | 2,637 | SentimentIntensityAnalyzer() from nltk.sentiment.vader does not respond to hashtags. | Example:
```
SentimentIntensityAnalyzer().polarity_scores('Strings with hashtag #stupid #useless #BAD')
# -> {'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound': 0.0}
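# --- hedged editor's sketch, appended to the report (assumption: VADER's
# lexicon keys carry no '#', so '#stupid' never matches 'stupid').
# Stripping the '#' prefix before scoring side-steps that:
import re
cleaned = re.sub(r'#(\w+)', r'\1', 'Strings with hashtag #stupid #useless #BAD')
# cleaned -> 'Strings with hashtag stupid useless BAD'
# ...then score `cleaned` with the analyzer as above.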
``` | closed | 2020-12-08T18:09:16Z | 2022-12-13T22:42:45Z | https://github.com/nltk/nltk/issues/2637 | [] | neldivad | 0 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,111 | Attempt to get page source on `dl-protect.net` with `driver.page_source` and ublock turned on results in unknown error. Getting page_source with uBlock turned off works fine. | Hi guys.
Today I've encountered a very strange bug: any attempt to get the page source with `driver.page_source` stops the script with the following error:
example URL = https://dl-protect.net/da4e1680
EDIT:
I've dug a little deeper and figured out that if I disable uBlock I can get `page_source` without any issues, but I still have no idea why an ad blocker would cause such an issue.
```
Message: unknown error: Runtime.evaluate threw exception: ReferenceError: nceuevuho
at get (<anonymous>:268:31)
at new CacheWithUUID (<anonymous>:94:17)
at getPageCache (<anonymous>:222:18)
at callFunction (<anonymous>:433:17)
at <anonymous>:461:23
at <anonymous>:462:3
(Session info: chrome=111.0.5563.65)
Stacktrace:
Backtrace:
(No symbol) [0x003FDCE3]
(No symbol) [0x003939D1]
(No symbol) [0x002A4DA8]
(No symbol) [0x002AC7F8]
(No symbol) [0x002A71C0]
(No symbol) [0x002A6D01]
(No symbol) [0x002A756C]
(No symbol) [0x002A7850]
(No symbol) [0x002FEDE5]
(No symbol) [0x002EAECC]
(No symbol) [0x002FD57C]
(No symbol) [0x002EACC6]
(No symbol) [0x002C6F68]
(No symbol) [0x002C80CD]
GetHandleVerifier [0x00673832+2506274]
GetHandleVerifier [0x006A9794+2727300]
GetHandleVerifier [0x006AE36C+2746716]
GetHandleVerifier [0x004A6690+617600]
(No symbol) [0x0039C712]
(No symbol) [0x003A1FF8]
(No symbol) [0x003A20DB]
(No symbol) [0x003AC63B]
BaseThreadInitThunk [0x766700F9+25]
RtlGetAppContainerNamedObjectPath [0x76EC7BBE+286]
RtlGetAppContainerNamedObjectPath [0x76EC7B8E+238]
```
If anybody knows how to deal with that I will be really grateful. | open | 2023-03-08T10:21:49Z | 2023-03-09T17:29:16Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1111 | [] | danielrzad | 0 |
ets-labs/python-dependency-injector | asyncio | 821 | a minimal FastAPI app where DI does not work | I've been playing with the lib for a while, but I cannot make it work :(
Python 3.12.3
dependency-injector = "^4.42.0"
fastapi = "^0.110.1"
uvicorn = "^0.29.0"
```
import uvicorn
from dependency_injector import containers, providers
from dependency_injector.wiring import Provide, inject
from fastapi import FastAPI, Depends
class X:
def x(self) -> int:
return 1
class Container(containers.DeclarativeContainer):
wiring_config = containers.WiringConfiguration(modules=[__name__])
x = providers.Factory(X)
app = FastAPI()
app.container = Container()
@app.get("/")
@inject
def root(foo: X = Depends(Provide[Container.x])):
print(foo.x())
if __name__ == "__main__":
uvicorn.run(
app,
host="127.0.0.1",
port=5000,
)
```
the error that I get:
```
File "src/dependency_injector/_cwiring.pyx", line 28, in dependency_injector._cwiring._get_sync_patched._patched
File "/home/sergey/.config/JetBrains/PyCharm2024.2/scratches/scratch_309.py", line 30, in root
print(foo.x())
^^^^^
AttributeError: 'Provide' object has no attribute 'x'
```
what am I doing wrong?
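A hedged guess at the cause (an editor's sketch, not a confirmed answer): with `wiring_config`, the wiring runs when `Container()` is instantiated, and at that point `root` has not been defined yet, so it never gets patched and receives the raw `Provide` marker. Creating the container (or calling `container.wire()` explicitly) only after the routes are declared may resolve it:

```python
import uvicorn
from dependency_injector import containers, providers
from dependency_injector.wiring import Provide, inject
from fastapi import FastAPI, Depends


class X:
    def x(self) -> int:
        return 1


class Container(containers.DeclarativeContainer):
    x = providers.Factory(X)


app = FastAPI()


@app.get("/")
@inject
def root(foo: X = Depends(Provide[Container.x])):
    return {"value": foo.x()}


# Wire only after every @inject endpoint exists, so they can be patched.
app.container = Container()
app.container.wire(modules=[__name__])

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=5000)
```

If this guess is right, the equivalent minimal change is simply moving the original `app.container = Container()` line below the route definitions.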
| closed | 2024-10-02T20:50:24Z | 2024-10-03T08:30:55Z | https://github.com/ets-labs/python-dependency-injector/issues/821 | [] | antonio-antuan | 1 |
miguelgrinberg/Flask-SocketIO | flask | 1,039 | Client Receiving 400 error | Hello,
I set up a very simple Flask-SocketIO script to test. This is purely a development environment; there's no reverse proxy (Apache/Nginx/etc.) or SSL in front of it. I have installed eventlet for it to use.
```python
from flask import Flask, jsonify, request
from flask_socketio import SocketIO, emit
HOST = '0.0.0.0'
PORT = 7000
app = Flask(__name__)
app.config['SECRET_KEY'] = 'fjsalfj'
socketio = SocketIO(app)
notifications_feed = {}
@socketio.on('connect', namespace='/notifications')
def notifications_on_connect():
notifications_feed[request.sid] = request.args['username']
@socketio.on('disconnect', namespace='/notifications')
def notifications_on_disconnect():
notifications_feed.pop(request.sid, None)
if __name__ == '__main__':
socketio.run(app, port=PORT, host=HOST)
```
And on my client Node.js application I have:
```javascript
this.webSocket = io('localhost:7000/notifications', { transports: ['websocket'], query: {username: 'Test User'}});
this.webSocket.on('citations', (msg: any) => {
console.log(msg);
});
```
Here is the request

The really weird part is that it works only **one** of my development computers -- but every single other computer i have tried it on receives this error in their development environment. I install eventlet, flask-socketio, flask - im not sure if i am missing something here?
| closed | 2019-08-14T15:11:42Z | 2019-08-14T17:01:12Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1039 | [
"question"
] | Shamim56 | 6 |
nschloe/tikzplotlib | matplotlib | 584 | axis values not showing when using .save() | ```
import seaborn as sns
import matplotlib.pyplot as plt
import tikzplotlib

# c_t_pop30 is my DataFrame (definition not shown)
sns.lineplot(data=c_t_pop30, x="year", y="dif_growth")
plt.axvline(1949)
plt.axvline(1990)
tikzplotlib.save(r"/Users/a/Library/Mobile Documents/com~apple~CloudDocs/Courses/Economics/problem_set/manuscript/src/figs/fig4c_alone.tex")
```
Python shows the values for both axes, as expected

Actual output in LaTeX:
<img width="303" alt="fail_a" src="https://github.com/nschloe/tikzplotlib/assets/78081516/6d87dc0b-7d7c-427b-857c-5796a5345a62">
Here is the latex code that was saved:
```
% This file was created with tikzplotlib v0.10.1.
\begin{tikzpicture}
\definecolor{darkslategray38}{RGB}{38,38,38}
\definecolor{lavender234234242}{RGB}{234,234,242}
\definecolor{steelblue31119180}{RGB}{31,119,180}
\begin{axis}[
axis background/.style={fill=lavender234234242},
axis line style={white},
tick align=outside,
x grid style={white},
xlabel=\textcolor{darkslategray38}{year},
xmajorgrids,
xmajorticks=false,
xmin=1921.15, xmax=2005.85,
xtick style={color=darkslategray38},
y grid style={white},
ylabel=\textcolor{darkslategray38}{dif\_growth},
ymajorgrids,
ymajorticks=false,
ymin=-0.118489124518529, ymax=0.0844196238924438,
ytick style={color=darkslategray38}
]
\addplot [semithick, steelblue31119180]
table {%
1925 -0.0213444461404777
1933 -0.0566859111009346
1939 0.0751964989646723
1950 0.0192261391483122
1960 -0.109265999590757
1970 -0.0633964941647206
1980 -0.0537741982727188
1988 -0.00858158243166562
1992 -0.000585163199995131
2002 0.00798771051790126
};
\addplot [semithick, steelblue31119180]
table {%
1949 -0.118489124518529
1949 0.0844196238924437
};
\addplot [semithick, steelblue31119180]
table {%
1990 -0.118489124518529
1990 0.0844196238924437
};
\end{axis}
\end{tikzpicture}
```
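A hedged workaround (editor's note, grounded only in the generated options above): the saved axis contains `xmajorticks=false` / `ymajorticks=false`, which is what hides the tick labels; seaborn's style appears to be exported that way (an assumption). Flipping those keys back on may restore the labels — either by hand-editing the generated `.tex`, or by re-saving with `extra_axis_parameters`, assuming later pgfplots keys override earlier ones:

```python
import tikzplotlib

# Hypothetical: re-save with explicit tick keys appended to the axis options.
tikzplotlib.save(
    "fig4c_alone.tex",
    extra_axis_parameters=[
        "xmajorticks=true",
        "ymajorticks=true",
    ],
)
```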
| open | 2023-05-12T17:11:15Z | 2023-05-24T11:17:53Z | https://github.com/nschloe/tikzplotlib/issues/584 | [] | alexgunsberg | 2 |
pennersr/django-allauth | django | 3,271 | E-Mail-Address change workflow (Feature request?) | Hi,
As I understand it, there is currently no workflow to directly change the e-mail address.
The current way is to use /accounts/email/ with the following steps:
1. Add new E-Mail
2. Verify E-mail
3. Mark the new email as a primary E-mail
4. Delete the old one
Is it possible to just change the E-Mail without doing the other steps?
For example:
1. The user enters a new e-mail address in a form.
2. They immediately receive a confirmation e-mail, and once they click it and accept, the old e-mail is deleted right away and the new one is marked directly as the primary.
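A hedged sketch of how that second flow could be wired (editor's addition; it uses allauth's documented signal and model API, but treat the exact flow as an assumption to verify):

```python
from allauth.account.models import EmailAddress
from allauth.account.signals import email_confirmed
from django.dispatch import receiver


def start_email_change(request, user, new_email):
    # Sends the confirmation mail; the address stays unverified until clicked.
    EmailAddress.objects.add_email(request, user, new_email, confirm=True)


@receiver(email_confirmed)
def finish_email_change(request, email_address, **kwargs):
    # Once the user confirms, make it primary and delete the old address(es).
    email_address.set_as_primary()
    EmailAddress.objects.filter(user=email_address.user).exclude(
        pk=email_address.pk
    ).delete()
```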
Thank you | closed | 2023-03-03T15:28:45Z | 2023-07-19T22:30:37Z | https://github.com/pennersr/django-allauth/issues/3271 | [] | sowinski | 2 |
ageitgey/face_recognition | machine-learning | 1561 | Will your face_recognition application allow me to store pictures with names tied to them, and later submit a new photo of someone I don't remember storing, and have the application identify the person because it is the exact person I stored under their name in a different saved photo? | Here is what I'm hoping this application can do. Let's say I have 5 images of 5 different people: Zack Evan, Damian Blue, Eric Blake, Jacob King, and Henry Alek,
and I store each of these pictures in my "known" file path, with each person's name as the image filename.
A month later I'm on the internet, come across someone, and save their picture to find out who they are.
So I feed that exact unknown photo to the application, and if its facial features match a photo of the same person I have stored,
it comes back with the result: identical person, different picture, with the name.
That is what I want this application to do.
I don't want to have to remember the known photo's information in order to submit the unknown photo's information.
I want this application to let me store pictures with each person's name on them;
then, sometime later, when I happen to observe a particular person in a picture, I save the picture,
I send it to the application, and boom: either it comes back with a known result (a picture of that person I stored some time ago but forgot), or it comes back as unknown, meaning I never stored this person.
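The library's examples do cover this kind of known/unknown matching. A hedged sketch of just the matching logic (editor's addition; the tiny vectors are made up — in practice each encoding would come from `face_recognition.face_encodings()` on a stored image, and the names from the image filenames):

```python
import math

# Hypothetical stored encodings, keyed by the name saved with each photo.
known = {"zack evan": [0.1, 0.2], "eric blake": [0.9, 0.7]}

def identify(unknown, tolerance=0.3):
    """Return the stored name whose encoding is close enough, else 'unknown'."""
    for name, enc in known.items():
        if math.dist(enc, unknown) <= tolerance:
            return name
    return "unknown"

identify([0.12, 0.22])  # -> 'zack evan'
identify([0.5, 0.0])    # -> 'unknown'
```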
| open | 2024-04-18T05:43:22Z | 2024-08-21T10:51:09Z | https://github.com/ageitgey/face_recognition/issues/1561 | [] | olstice | 1 |
STVIR/pysot | computer-vision | 84 | ConnectionResetError: [Errno 104] Connection reset by peer | Why does this happen when training reaches 100%?
[2019-07-03 02:04:41,443-rk0-log_helper.py#105] Progress: 142840 / 142840 [100%], Speed: 2.148 s/iter, ETA 0:00:00 (D:H:M)
Process Process-1:
Process Process-1:
Traceback (most recent call last):
File "/home/wudi/anaconda3/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/home/wudi/anaconda3/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/wudi/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 96, in _worker_loop
r = index_queue.get(timeout=MANAGER_STATUS_CHECK_INTERVAL)
File "/home/wudi/anaconda3/lib/python3.6/multiprocessing/queues.py", line 113, in get
return _ForkingPickler.loads(res)
File "/home/wudi/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 151, in rebuild_storage_fd
fd = df.detach()
File "/home/wudi/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py", line 57, in detach
with _resource_sharer.get_connection(self._id) as conn:
File "/home/wudi/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py", line 87, in get_connection
c = Client(address, authkey=process.current_process().authkey)
File "/home/wudi/anaconda3/lib/python3.6/multiprocessing/connection.py", line 494, in Client
deliver_challenge(c, authkey)
File "/home/wudi/anaconda3/lib/python3.6/multiprocessing/connection.py", line 722, in deliver_challenge
response = connection.recv_bytes(256) # reject large message
File "/home/wudi/anaconda3/lib/python3.6/multiprocessing/connection.py", line 216, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/home/wudi/anaconda3/lib/python3.6/multiprocessing/connection.py", line 407, in _recv_bytes
buf = self._recv(4)
File "/home/wudi/anaconda3/lib/python3.6/multiprocessing/connection.py", line 379, in _recv
chunk = read(handle, remaining)
ConnectionResetError: [Errno 104] Connection reset by peer
| open | 2019-07-03T01:40:39Z | 2019-07-03T01:56:40Z | https://github.com/STVIR/pysot/issues/84 | [
"bug"
] | ghost | 1 |
PaddlePaddle/models | nlp | 4981 | PaddlePaddleNLP issue | How can I use the underlying model to train on my own data and run prediction, when I only need to label proper nouns/terms?
In addition, can incremental training be continued on top of the underlying model? The model is https://github.com/PaddlePaddle/models/tree/release/1.8/PaddleNLP/lexical_analysis. Thanks!
"paddlenlp"
] | FYF1997 | 4 |
Miserlou/Zappa | flask | 1,399 | I Need Help Triaging All These Tickets | There are too many untriaged tickets!
I am going to get to it but if anybody wants to help, let me know!
| open | 2018-02-15T23:48:01Z | 2019-05-03T22:37:38Z | https://github.com/Miserlou/Zappa/issues/1399 | [
"help wanted"
] | Miserlou | 11 |
albumentations-team/albumentations | deep-learning | 2,397 | [New feature] Add apply_to_images to ColorJitter | open | 2025-03-11T01:00:38Z | 2025-03-11T01:00:45Z | https://github.com/albumentations-team/albumentations/issues/2397 | [
"enhancement",
"good first issue"
] | ternaus | 0 | |
learning-at-home/hivemind | asyncio | 302 | More detailed installation guide | Currently, our installation guide is a sub-section within quickstart. It does not cover libp2p or non-linux OS
Thanks to @yhn112 's recent investigation, we can now run gpu-enabled hivemind on windows through WSL.
The goal of this issue is to
- add a detailed installation page in the docs
- [ ] in the Windows section, explain how to set up through WSL
- [ ] in the Linux section, explain build vs download libp2p
- [ ] on Mac OS, explain how to build libp2p from source
- if some tests fail (e.g. #143 ), this is okay for now
- [ ] refer to this page in the installation section [here](https://learning-at-home.readthedocs.io/en/latest/user/quickstart.html#installation)
- state that the short guide is only good for linux users, windows/mac *should* use the detailed guide
As usual, feel free to adjust the goal as necessary, split into multiple PRs, or recruit help (e.g. from me). | open | 2021-07-01T16:14:43Z | 2021-07-01T16:17:11Z | https://github.com/learning-at-home/hivemind/issues/302 | [
"enhancement",
"help wanted"
] | justheuristic | 0 |
autokey/autokey | automation | 62 | Autokey crashes because python3-xlib is actually python-xlib under Arch Linux | Classification: Crash
Reproducibility: Always
## Summary
After installing Autokey-py3 under Arch Linux, it crashes with the following error message:
## Version
0.93.7
Distro: Arch Linux
```
$ /usr/bin/autokey-gtk
Traceback (most recent call last):
File "/usr/bin/autokey-gtk", line 6, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3019, in <module>
@_call_aside
File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3003, in _call_aside
f(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3032, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 655, in _build_master
ws.require(__requires__)
File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 963, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 849, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'python3-xlib' distribution was not found and is required by autokey
```
Maintainer dark-saber mentioned this in his notes:
> dark-saber commented on 2017-01-14 15:20
> The recent problems were introduced by this commit: https://github.com/autokey-py3/autokey/commit/2250d90d31aec0fac029ea2f9c2f1c71cfd09daf and in fact they consist of two parts:
> 1. 'python3-xlib' in Ubuntu is 'python-xlib' in Arch, this patches easily.
> 2. 'dbus-python' is somehow not found in Arch, even if python-dbus and python-dbus-common are installed. Other aliases, like 'dbus', 'python-dbus' etc. don't work too. I just patched this dep out of setup.py, especially since we still control dependencies by our distribution tools. But I still wonder what's wrong with python-dbus, some people had the same problem with blockify (https://aur.archlinux.org/packages/blockify/?comments=all), but that was resolved when this dep was removed from upstream.
>pip install would work, of course, but using that we're mixing Arch and python's package managers, and get all kinds of problems on further updates/removes. It would be wiser to just install autokey via pip if we're going this way.
>autokey-py3 0.93.9-2 (2017-01-14 15:04)
And dark-saber's patch does indeed work but I'm hoping this can be fixed in the autokey itself so every update doesn't break autokey and we don't have to wait day(s) until it's patched by someone.
Thank you so much for a great tool and I hope this issue can be fixed. | closed | 2017-01-14T21:12:49Z | 2017-01-17T06:00:46Z | https://github.com/autokey/autokey/issues/62 | [] | nick-s-b | 1 |
moshi4/pyCirclize | data-visualization | 78 | ZeroDivisionError: division by zero | I am trying to create a circular plot, and my matrix contains zero values as well. It spits out ZeroDivisionError: division by zero.
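One hedged workaround — assuming the zero entries are what trip the plot's normalization (an all-zero row or link dividing by a zero total) — is to filter them out before plotting. The filtering step, sketched with plain dicts standing in for the matrix:

```python
# Hypothetical from-to matrix; 'C' is an all-zero row.
matrix = {
    "A": {"B": 10, "C": 0},
    "B": {"A": 0, "C": 5},
    "C": {"A": 0, "B": 0},
}

# Drop zero links, then drop sources left with no links at all.
filtered = {
    src: {dst: v for dst, v in row.items() if v > 0}
    for src, row in matrix.items()
}
filtered = {src: row for src, row in filtered.items() if row}
filtered  # -> {'A': {'B': 10}, 'B': {'C': 5}}
```

The cleaned matrix would then be passed to the chord-diagram call as before.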
How should I handle this one? | closed | 2024-10-30T02:38:24Z | 2025-01-11T16:46:15Z | https://github.com/moshi4/pyCirclize/issues/78 | [
"question"
] | kjrathore | 1 |
RobertCraigie/prisma-client-py | pydantic | 497 | Prisma Register Error | I was trying to build a simple REST API with prisma-client-py and Flask. I believe I have the configuration right, but I keep getting a register error from prisma-client-py.
here is the code
<img width="751" alt="Screenshot 2022-09-30 at 12 10 21 PM" src="https://user-images.githubusercontent.com/40169444/193257687-74f25ca8-d826-4543-86af-2d02832ed0bf.png">
here is the error output
<img width="850" alt="Screenshot 2022-09-30 at 12 09 52 PM" src="https://user-images.githubusercontent.com/40169444/193257757-cb031aac-3dab-4cba-acbe-0911d566c35d.png">
| closed | 2022-09-30T11:13:54Z | 2022-10-01T12:29:23Z | https://github.com/RobertCraigie/prisma-client-py/issues/497 | [
"kind/question"
] | ifeanyidotdev | 3 |
giotto-ai/giotto-tda | scikit-learn | 128 | Extend plotting functionality to include arbitrary quantities of interest | #### Description
Currently, users of our static or interactive visualisation functions can pass an argument `columns_to_color` to colour Mapper nodes by the average value of one of the columns of the original data. In static mode, an argument `node_color` can also be added if the Mapper nodes are already known and the user wishes to hard-code colour values for each node. While the former can be used in interactive mode, the latter cannot be used in interactive mode.
An alternative design choice might proceed loosely as follows:
- Allow the user to pass a `y` argument, where `y` stands for a "quantity of interest" which may either be an array with the same length as the input data array, or a pandas Series [see (#125)]
- Allow the user to further pass a `y_rule` argument specifying how the data from `y` pertaining to each node is to be aggregated to return a single scalar.
- Note that passing `y=data[:, i]` where `i` is a column index would also cover the case in which `y` is simply a column of `data`.
- If we decide to go ahead with pandas support as per (#125), we could even imagine accepting string values for `y`, and interpreting them as column names for `data`.
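A rough sketch of the proposed `y` / `y_rule` semantics (the names are this proposal's, not an existing API): aggregate the quantity of interest over the data points belonging to each Mapper node.

```python
from statistics import mean

def node_colors(y, node_elements, y_rule=mean):
    """node_elements: one list of data-point indices per Mapper node."""
    return [y_rule([y[i] for i in idxs]) for idxs in node_elements]

y = [0.0, 2.0, 4.0, 10.0]
nodes = [[0, 1], [2, 3]]
node_colors(y, nodes)  # -> [1.0, 7.0]
```

Passing `y = data[:, i]` then recovers the current column-colouring behaviour as a special case.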
Finally, the current and proposed approaches do not address the question of how to allow the user to colour by filter value in a simple way. Perhaps one could have a `color_by_filter` boolean argument which, when set to `True`, would override anything passed to `y` and would internally split `fit_transform` into two steps to avoid recomputation?
"enhancement",
"discussion",
"mapper"
] | ulupo | 1 |
axnsan12/drf-yasg | django | 328 | How do I sort the API list? | Swagger sorts the API list alphabetically by default;
I want to sort by operation_id instead. | closed | 2019-03-09T11:42:06Z | 2023-05-25T09:04:56Z | https://github.com/axnsan12/drf-yasg/issues/328 | [] | KimSoungRyoul | 2 |
vimalloc/flask-jwt-extended | flask | 53 | Is there a way to revoke both the refresh token and the access token on logout? | I create both a refresh token and an access token at login. On logout, however, both tokens should be revoked at the same time, without affecting the user's other tokens.
I looked at the docs; for example:
```
# Endpoint for revoking the current users access token
@app.route('/logout', methods=['POST'])
@jwt_required
def logout():
try:
_revoke_current_token()
except KeyError:
return jsonify({
'msg': 'Access token not found in the blacklist store'
}), 500
return jsonify({"msg": "Successfully logged out"}), 200
# Endpoint for revoking the current users refresh token
@app.route('/logout2', methods=['POST'])
@jwt_refresh_token_required
def logout2():
try:
_revoke_current_token()
except KeyError:
return jsonify({
'msg': 'Refresh token not found in the blacklist store'
}), 500
return jsonify({"msg": "Successfully logged out"}), 200
```
Is there a way to revoke both?
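A hedged sketch of one approach (editor's addition, not the extension's API): the extension revokes one token per call, but nothing stops you from storing both `jti` values together at login and blocklisting the pair in a single logout step. The `jti`s would come from the decoded tokens (e.g. `get_raw_jwt()['jti']` in this era's API); plain strings stand in here.

```python
blocklist = set()
session_tokens = {}  # user_id -> (access_jti, refresh_jti)

def login(user_id, access_jti, refresh_jti):
    session_tokens[user_id] = (access_jti, refresh_jti)

def logout(user_id):
    """Revoke BOTH tokens of this session, leaving other users untouched."""
    for jti in session_tokens.pop(user_id, ()):
        blocklist.add(jti)

login("alice", "a-1", "r-1")
login("bob", "a-2", "r-2")
logout("alice")
sorted(blocklist)  # -> ['a-1', 'r-1']
```

The blocklist check in the JWT verification callback then rejects either token of the logged-out session.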
| closed | 2017-06-14T09:29:48Z | 2023-02-02T08:12:09Z | https://github.com/vimalloc/flask-jwt-extended/issues/53 | [] | alexcc4 | 15 |
marcomusy/vedo | numpy | 1,170 | Creating a plot with objects out of scene, seems to break calls to render | Hi!
I think we have found a weird bug, the steps to reproduce it are shown below:
1. Create a plotter instance with a mesh out of camera using `show` (thus with an empty screenshot).
2. Move it back to camera range and render the plot using `render(resetcam=False)`.
3. Take a screenshot, which should display the object, however the screenshot is empty.
The problem seems to disappear if the plotter instance is created with the object in camera range (`show_problem=False`)
You can download the mesh in the example from: [https://github.com/user-attachments/files/16443482/canonical_face.zip](https://github.com/user-attachments/files/16443482/canonical_face.zip)
```python
import vedo
import cv2
import numpy as np
# Transform that moves the object out of the camera range
t = np.eye(4)
t[2, 3] = -500
inv_t = t.copy()
inv_t[2, 3] = 500
mesh = vedo.load("canonical_face.obj")
# Whether to show the bug or not
show_problem = True
## Part: 1
plt = mesh.show(
offscreen=False,
interactive=False,
camera=vedo.utils.oriented_camera(),
bg="black",
)
img1 = plt.screenshot(asarray=True)
print("Out of camera range? (It should be False): ", img1.max() == 0)
cv2.imwrite("image0.png", img1)
## Part: 2
if not show_problem:
mesh = mesh.apply_transform(t)
plt.render(resetcam=False)
else:
mesh = mesh.apply_transform(t)
plt = mesh.show(
offscreen=False,
interactive=False,
camera=vedo.utils.oriented_camera(),
bg="black",
)
img1 = plt.screenshot(asarray=True)
print("Out of camera range? (It should be True): ", img1.max() == 0)
cv2.imwrite("image1.png", img1)
## Part: 3
# Return to original position
mesh = mesh.apply_transform(inv_t)
# Preserve camera position
plt.render(resetcam=False)
img2 = plt.screenshot(asarray=True)
print("Out of camera range? (It should be False): ", img2.max() == 0)
cv2.imwrite("image2.png", img2)
```
Thank you for your time :smile: | closed | 2024-07-31T14:51:34Z | 2024-08-19T18:03:40Z | https://github.com/marcomusy/vedo/issues/1170 | [] | xehartnort | 1 |
ultralytics/ultralytics | pytorch | 18,733 | Evaluating a model that does multiple tasks | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
If a model is used for different things, lets say I have a pose estimation model, that I use to detect the bounding boxes of a person that works in a factory. Then I use the same model to detect if those persons are sitting or standing in their working space - for this I only care when they are around the desk, not while they are in other places in a factory.
Should I have different test sets for this model? The first test set I would use to evaluate the keypoints and bounding boxes of the person, regardless of the category; the second test set I would use to evaluate how well it classifies between the person-sitting and person-standing classes. The second test set should only contain images from the places in the factory that have desks.
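To make the two-test-set idea concrete, two separate evaluations could be run roughly like this (editor's sketch; the dataset YAML names and weights file are hypothetical, `model.val()` is the real Ultralytics call):

```python
from ultralytics import YOLO

model = YOLO("my-pose-model.pt")  # hypothetical weights file

# Test set 1: whole-factory images -> keypoint / box quality, class-agnostic.
pose_metrics = model.val(data="factory_all.yaml", split="test")

# Test set 2: desk-area images only -> sitting vs. standing classification.
desk_metrics = model.val(data="desk_area_only.yaml", split="test")
```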
### Additional
_No response_ | closed | 2025-01-17T10:46:52Z | 2025-01-20T08:19:59Z | https://github.com/ultralytics/ultralytics/issues/18733 | [
"question",
"pose"
] | uran-lajci | 4 |
davidsandberg/facenet | tensorflow | 768 | Is this model using SVM on top of landmark for face recognition ? | Hi,
Is this model using SVM on top of landmark for face recognition ?
Thanks ! | open | 2018-05-29T13:00:32Z | 2018-06-05T00:43:52Z | https://github.com/davidsandberg/facenet/issues/768 | [] | ashuezy | 1 |
xlwings/xlwings | automation | 1,722 | pywintypes.com_error: (-2147023266, '这个类型的数据不受支持。', None, None) | #### OS (Windows7)
#### Versions of xlwings, Excel and Python (0.24.9, Office2013, Python 3.8.1)
#### Describe your issue (incl. Traceback!)
(The Chinese text in the error below, 这个类型的数据不受支持。, means "This type of data is not supported.")
```python
File "D:\Program Files\PythonProgram\practise_excel.py", line 3, in <module>
app=xw.App(visible=True,add_book=False)
File "D:\Program Files\python\lib\site-packages\xlwings\main.py", line 219, in __init__
self.impl = xlplatform.App(spec=spec, add_book=add_book, visible=visible)
File "D:\Program Files\python\lib\site-packages\xlwings\_xlwindows.py", line 317, in __init__
self._xl = COMRetryObjectWrapper(DispatchEx('Excel.Application'))
File "D:\Program Files\python\lib\site-packages\win32com\client\__init__.py", line 113, in DispatchEx
dispatch = pythoncom.CoCreateInstanceEx(clsid, None, clsctx, serverInfo, (pythoncom.IID_IDispatch,))[0]
pywintypes.com_error: (-2147023266, '这个类型的数据不受支持。', None, None)
```
#### Include a minimal code sample to reproduce the issue (and attach a sample workbook if required!)
```python
import xlwings as xw
app=xw.App(visible=True,add_book=False)
``` | closed | 2021-09-28T14:52:10Z | 2021-09-28T15:03:36Z | https://github.com/xlwings/xlwings/issues/1722 | [] | running1st | 1 |
google-research/bert | tensorflow | 1,217 | How to kill bad starts when pre-training from scratch | Hi!
I am pre-training a model from scratch and was wondering about the possibility of killing bad starts. Because the model will be initiated with random weights when pre-training from scratch, and these initial weights might influence the performance of the final model, I want to do my best to at least not get the worst weight initialization. I have heard that it might be a possibility to calculate perplexity and let that score be decisive of whether to kill the training process or not. Does anyone have experience with how to do this, or does someone have a better idea to review weight initialization and kill bad starts? | open | 2021-04-10T15:01:33Z | 2021-04-10T15:01:33Z | https://github.com/google-research/bert/issues/1217 | [] | StellaVerkijk | 0 |
RomelTorres/alpha_vantage | pandas | 275 | Add tests for extended intraday | closed | 2020-12-21T02:26:07Z | 2021-11-19T18:46:01Z | https://github.com/RomelTorres/alpha_vantage/issues/275 | [
"good first issue"
] | PatrickAlphaC | 1 | |
explosion/spaCy | deep-learning | 13,772 | In requirements.txt thinc>=8.3.4,<8.4.0,which was not found so I changed it to thinc>=8.3.4,<8.4.0 but it is giving error that failed building wheel for thinc |
(dlenv) [manshika@lappy spaCy]$ pip install -r requirements.txt
Collecting spacy-legacy<3.1.0,>=3.0.11 (from -r requirements.txt (line 2))
Using cached spacy_legacy-3.0.12-py2.py3-none-any.whl.metadata (2.8 kB)
Collecting spacy-loggers<2.0.0,>=1.0.0 (from -r requirements.txt (line 3))
Using cached spacy_loggers-1.0.5-py3-none-any.whl.metadata (23 kB)
Collecting cymem<2.1.0,>=2.0.2 (from -r requirements.txt (line 4))
Using cached cymem-2.0.11-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (8.5 kB)
Collecting preshed<3.1.0,>=3.0.2 (from -r requirements.txt (line 5))
Using cached preshed-3.0.9-cp313-cp313-linux_x86_64.whl
ERROR: Ignored the following yanked versions: 6.10.4.dev0, 7.4.4
ERROR: Could not find a version that satisfies the requirement thinc<8.4.0,>=8.3.4 (from versions: 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.41, 1.42, 1.60, 1.61, 1.62, 1.63, 1.64, 1.65, 1.66, 1.67, 1.68, 1.69, 1.70, 1.71, 1.72, 1.73, 1.74, 1.75, 1.76, 2.0, 3.0, 3.1, 3.2, 3.3, 3.4.1, 4.0.0, 4.1.0, 4.2.0, 5.0.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.5, 5.0.6, 5.0.7, 5.0.8, 6.0.0, 6.1.0, 6.1.1, 6.1.2, 6.1.3, 6.2.0, 6.3.0, 6.4.0, 6.5.0, 6.5.2, 6.6.0, 6.7.0, 6.7.1, 6.7.2, 6.7.3, 6.8.0, 6.8.1, 6.8.2, 6.9.0, 6.10.0, 6.10.1.dev0, 6.10.1, 6.10.2.dev0, 6.10.2.dev1, 6.10.2, 6.10.3.dev0, 6.10.3.dev1, 6.10.3, 6.11.0.dev2, 6.11.1.dev0, 6.11.1.dev1, 6.11.1.dev2, 6.11.1.dev3, 6.11.1.dev4, 6.11.1.dev6, 6.11.1.dev7, 6.11.1.dev10, 6.11.1.dev11, 6.11.1.dev12, 6.11.1.dev13, 6.11.1.dev15, 6.11.1.dev16, 6.11.1.dev17, 6.11.1.dev18, 6.11.1.dev19, 6.11.1.dev20, 6.11.1, 6.11.2.dev0, 6.11.2, 6.11.3.dev1, 6.11.3.dev2, 6.12.0, 6.12.1, 7.0.0.dev0, 7.0.0.dev1, 7.0.0.dev2, 7.0.0.dev3, 7.0.0.dev4, 7.0.0.dev5, 7.0.0.dev6, 7.0.0.dev8, 7.0.0, 7.0.1.dev0, 7.0.1.dev1, 7.0.1.dev2, 7.0.1, 7.0.2, 7.0.3, 7.0.4.dev0, 7.0.4, 7.0.5.dev0, 7.0.5, 7.0.6, 7.0.7, 7.0.8, 7.1.0.dev0, 7.1.0, 7.1.1, 7.2.0.dev3, 7.2.0, 7.3.0.dev0, 7.3.0, 7.3.1, 7.4.0.dev0, 7.4.0.dev1, 7.4.0.dev2, 7.4.0, 7.4.1, 7.4.2, 7.4.3, 7.4.5, 7.4.6, 8.0.0.dev0, 8.0.0.dev2, 8.0.0.dev4, 8.0.0a0, 8.0.0a1, 8.0.0a2, 8.0.0a3, 8.0.0a6, 8.0.0a8, 8.0.0a9, 8.0.0a11, 8.0.0a12, 8.0.0a13, 8.0.0a14, 8.0.0a16, 8.0.0a17, 8.0.0a18, 8.0.0a19, 8.0.0a20, 8.0.0a21, 8.0.0a22, 8.0.0a23, 8.0.0a24, 8.0.0a25, 8.0.0a26, 8.0.0a27, 8.0.0a28, 8.0.0a29, 8.0.0a30, 8.0.0a31, 8.0.0a32, 8.0.0a33, 8.0.0a34, 8.0.0a35, 8.0.0a36, 8.0.0a40, 8.0.0a41, 8.0.0a42, 8.0.0a43, 8.0.0a44, 8.0.0rc0, 8.0.0rc1, 8.0.0rc2, 8.0.0rc3, 8.0.0rc4, 8.0.0rc5, 8.0.0rc6.dev0, 8.0.0rc6, 8.0.0, 8.0.1, 8.0.2, 8.0.3, 8.0.4, 8.0.5, 8.0.6, 8.0.7, 8.0.8, 8.0.9, 8.0.10, 8.0.11, 8.0.12, 8.0.13, 8.0.14.dev0, 8.0.14, 8.0.15, 8.0.16, 8.0.17, 8.1.0.dev0, 8.1.0.dev1, 8.1.0.dev2, 8.1.0.dev3, 8.1.0, 8.1.1, 8.1.2, 8.1.3, 8.1.4, 8.1.5, 8.1.6, 
8.1.7, 8.1.8, 8.1.9, 8.1.10, 8.1.11, 8.1.12, 8.2.0, 8.2.1, 8.2.2, 8.2.3, 8.2.4, 8.2.5, 8.3.0, 8.3.1, 8.3.2, 9.0.0.dev0, 9.0.0.dev1, 9.0.0.dev2, 9.0.0.dev3, 9.0.0.dev4, 9.0.0.dev5, 9.0.0, 9.1.0, 9.1.1)
ERROR: No matching distribution found for thinc<8.4.0,>=8.3.4
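The resolver error above means pip found no thinc release satisfying `<8.4.0,>=8.3.4` that is installable on the running interpreter — here Python 3.13 (the `cp313` tags later in the log), for which matching thinc wheels may not have been published when this log was captured. When scripting an environment setup, one can gate the install on the interpreter version up front; this is a minimal sketch, and the supported range used here is an assumption for illustration, not thinc's actual metadata:

```python
import sys

# Hypothetical supported range (major, minor) for illustration only;
# check the package's classifiers / published wheels on PyPI for real bounds.
SUPPORTED_LO = (3, 9)
SUPPORTED_HI = (3, 12)

def interpreter_supported(version_info=sys.version_info,
                          lo=SUPPORTED_LO, hi=SUPPORTED_HI):
    """Return True if the running Python's major.minor falls inside [lo, hi]."""
    major_minor = (version_info[0], version_info[1])
    return lo <= major_minor <= hi

if not interpreter_supported():
    # Outside the assumed range, pip may fall back to an sdist build or
    # fail resolution outright, as seen in the log above.
    print(f"Python {sys.version_info[0]}.{sys.version_info[1]} is outside the "
          f"assumed supported range {SUPPORTED_LO}..{SUPPORTED_HI}")
```

Running such a check before `pip install -r requirements.txt` turns a late, noisy resolver failure into an early, explicit one.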
(dlenv) [manshika@lappy spaCy]$ pip install -r requirements.txt
Collecting spacy-legacy<3.1.0,>=3.0.11 (from -r requirements.txt (line 2))
Using cached spacy_legacy-3.0.12-py2.py3-none-any.whl.metadata (2.8 kB)
Collecting spacy-loggers<2.0.0,>=1.0.0 (from -r requirements.txt (line 3))
Using cached spacy_loggers-1.0.5-py3-none-any.whl.metadata (23 kB)
Collecting cymem<2.1.0,>=2.0.2 (from -r requirements.txt (line 4))
Using cached cymem-2.0.11-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (8.5 kB)
Collecting preshed<3.1.0,>=3.0.2 (from -r requirements.txt (line 5))
Using cached preshed-3.0.9-cp313-cp313-linux_x86_64.whl
Collecting thinc<8.4.0,>=8.3.0 (from -r requirements.txt (line 6))
Using cached thinc-8.3.2.tar.gz (193 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting ml_datasets<0.3.0,>=0.2.0 (from -r requirements.txt (line 7))
Using cached ml_datasets-0.2.0-py3-none-any.whl.metadata (7.5 kB)
Collecting murmurhash<1.1.0,>=0.28.0 (from -r requirements.txt (line 8))
Using cached murmurhash-1.0.12-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.1 kB)
Collecting wasabi<1.2.0,>=0.9.1 (from -r requirements.txt (line 9))
Using cached wasabi-1.1.3-py3-none-any.whl.metadata (28 kB)
Collecting srsly<3.0.0,>=2.4.3 (from -r requirements.txt (line 10))
Using cached srsly-2.5.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (19 kB)
Collecting catalogue<2.1.0,>=2.0.6 (from -r requirements.txt (line 11))
Using cached catalogue-2.0.10-py3-none-any.whl.metadata (14 kB)
Collecting typer<1.0.0,>=0.3.0 (from -r requirements.txt (line 12))
Using cached typer-0.15.2-py3-none-any.whl.metadata (15 kB)
Collecting weasel<0.5.0,>=0.1.0 (from -r requirements.txt (line 13))
Using cached weasel-0.4.1-py3-none-any.whl.metadata (4.6 kB)
Requirement already satisfied: numpy<3.0.0,>=2.0.0 in /home/manshika/.virtualenvs/dlenv/lib/python3.13/site-packages (from -r requirements.txt (line 15)) (2.2.4)
Collecting requests<3.0.0,>=2.13.0 (from -r requirements.txt (line 16))
Using cached requests-2.32.3-py3-none-any.whl.metadata (4.6 kB)
Collecting tqdm<5.0.0,>=4.38.0 (from -r requirements.txt (line 17))
Using cached tqdm-4.67.1-py3-none-any.whl.metadata (57 kB)
Collecting pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4 (from -r requirements.txt (line 18))
Using cached pydantic-2.10.6-py3-none-any.whl.metadata (30 kB)
Collecting jinja2 (from -r requirements.txt (line 19))
Using cached jinja2-3.1.6-py3-none-any.whl.metadata (2.9 kB)
Collecting langcodes<4.0.0,>=3.2.0 (from -r requirements.txt (line 20))
Using cached langcodes-3.5.0-py3-none-any.whl.metadata (29 kB)
Requirement already satisfied: setuptools in /home/manshika/.virtualenvs/dlenv/lib/python3.13/site-packages (from -r requirements.txt (line 22)) (76.1.0)
Collecting packaging>=20.0 (from -r requirements.txt (line 23))
Using cached packaging-24.2-py3-none-any.whl.metadata (3.2 kB)
Collecting pre-commit>=2.13.0 (from -r requirements.txt (line 25))
Using cached pre_commit-4.1.0-py2.py3-none-any.whl.metadata (1.3 kB)
Collecting cython<3.0,>=0.25 (from -r requirements.txt (line 26))
Using cached Cython-0.29.37-py2.py3-none-any.whl.metadata (3.1 kB)
Collecting pytest!=7.1.0,>=5.2.0 (from -r requirements.txt (line 27))
Using cached pytest-8.3.5-py3-none-any.whl.metadata (7.6 kB)
Collecting pytest-timeout<2.0.0,>=1.3.0 (from -r requirements.txt (line 28))
Using cached pytest_timeout-1.4.2-py2.py3-none-any.whl.metadata (11 kB)
Collecting mock<3.0.0,>=2.0.0 (from -r requirements.txt (line 29))
Using cached mock-2.0.0-py2.py3-none-any.whl.metadata (3.2 kB)
Collecting flake8<6.0.0,>=3.8.0 (from -r requirements.txt (line 30))
Using cached flake8-5.0.4-py2.py3-none-any.whl.metadata (4.1 kB)
Collecting hypothesis<7.0.0,>=3.27.0 (from -r requirements.txt (line 31))
Using cached hypothesis-6.129.4-py3-none-any.whl.metadata (4.4 kB)
Collecting mypy<1.6.0,>=1.5.0 (from -r requirements.txt (line 32))
Using cached mypy-1.5.1-py3-none-any.whl.metadata (1.7 kB)
Collecting types-mock>=0.1.1 (from -r requirements.txt (line 33))
Using cached types_mock-5.2.0.20250306-py3-none-any.whl.metadata (2.0 kB)
Collecting types-setuptools>=57.0.0 (from -r requirements.txt (line 34))
Using cached types_setuptools-76.0.0.20250313-py3-none-any.whl.metadata (2.2 kB)
Collecting types-requests (from -r requirements.txt (line 35))
Using cached types_requests-2.32.0.20250306-py3-none-any.whl.metadata (2.3 kB)
Collecting black==22.3.0 (from -r requirements.txt (line 37))
Using cached black-22.3.0-py3-none-any.whl.metadata (45 kB)
Collecting cython-lint>=0.15.0 (from -r requirements.txt (line 38))
Using cached cython_lint-0.16.6-py3-none-any.whl.metadata (4.9 kB)
Collecting isort<6.0,>=5.0 (from -r requirements.txt (line 39))
Using cached isort-5.13.2-py3-none-any.whl.metadata (12 kB)
Collecting click>=8.0.0 (from black==22.3.0->-r requirements.txt (line 37))
Using cached click-8.1.8-py3-none-any.whl.metadata (2.3 kB)
Collecting platformdirs>=2 (from black==22.3.0->-r requirements.txt (line 37))
Using cached platformdirs-4.3.6-py3-none-any.whl.metadata (11 kB)
Collecting pathspec>=0.9.0 (from black==22.3.0->-r requirements.txt (line 37))
Using cached pathspec-0.12.1-py3-none-any.whl.metadata (21 kB)
Collecting mypy-extensions>=0.4.3 (from black==22.3.0->-r requirements.txt (line 37))
Using cached mypy_extensions-1.0.0-py3-none-any.whl.metadata (1.1 kB)
Collecting blis<1.1.0,>=1.0.0 (from thinc<8.4.0,>=8.3.0->-r requirements.txt (line 6))
Using cached blis-1.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (7.6 kB)
Collecting confection<1.0.0,>=0.0.1 (from thinc<8.4.0,>=8.3.0->-r requirements.txt (line 6))
Using cached confection-0.1.5-py3-none-any.whl.metadata (19 kB)
Collecting numpy<3.0.0,>=2.0.0 (from -r requirements.txt (line 15))
Using cached numpy-2.0.2-cp313-cp313-linux_x86_64.whl
Requirement already satisfied: typing-extensions>=3.7.4.3 in /home/manshika/.virtualenvs/dlenv/lib/python3.13/site-packages (from typer<1.0.0,>=0.3.0->-r requirements.txt (line 12)) (4.12.2)
Collecting shellingham>=1.3.0 (from typer<1.0.0,>=0.3.0->-r requirements.txt (line 12))
Using cached shellingham-1.5.4-py2.py3-none-any.whl.metadata (3.5 kB)
Collecting rich>=10.11.0 (from typer<1.0.0,>=0.3.0->-r requirements.txt (line 12))
Using cached rich-13.9.4-py3-none-any.whl.metadata (18 kB)
Collecting cloudpathlib<1.0.0,>=0.7.0 (from weasel<0.5.0,>=0.1.0->-r requirements.txt (line 13))
Using cached cloudpathlib-0.21.0-py3-none-any.whl.metadata (14 kB)
Collecting smart-open<8.0.0,>=5.2.1 (from weasel<0.5.0,>=0.1.0->-r requirements.txt (line 13))
Using cached smart_open-7.1.0-py3-none-any.whl.metadata (24 kB)
Collecting charset-normalizer<4,>=2 (from requests<3.0.0,>=2.13.0->-r requirements.txt (line 16))
Using cached charset_normalizer-3.4.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (35 kB)
Collecting idna<4,>=2.5 (from requests<3.0.0,>=2.13.0->-r requirements.txt (line 16))
Using cached idna-3.10-py3-none-any.whl.metadata (10 kB)
Collecting urllib3<3,>=1.21.1 (from requests<3.0.0,>=2.13.0->-r requirements.txt (line 16))
Using cached urllib3-2.3.0-py3-none-any.whl.metadata (6.5 kB)
Collecting certifi>=2017.4.17 (from requests<3.0.0,>=2.13.0->-r requirements.txt (line 16))
Using cached certifi-2025.1.31-py3-none-any.whl.metadata (2.5 kB)
Collecting annotated-types>=0.6.0 (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->-r requirements.txt (line 18))
Using cached annotated_types-0.7.0-py3-none-any.whl.metadata (15 kB)
Collecting pydantic-core==2.27.2 (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->-r requirements.txt (line 18))
Using cached pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.6 kB)
Collecting MarkupSafe>=2.0 (from jinja2->-r requirements.txt (line 19))
Using cached MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.0 kB)
Collecting language-data>=1.2 (from langcodes<4.0.0,>=3.2.0->-r requirements.txt (line 20))
Using cached language_data-1.3.0-py3-none-any.whl.metadata (4.3 kB)
Collecting cfgv>=2.0.0 (from pre-commit>=2.13.0->-r requirements.txt (line 25))
Using cached cfgv-3.4.0-py2.py3-none-any.whl.metadata (8.5 kB)
Collecting identify>=1.0.0 (from pre-commit>=2.13.0->-r requirements.txt (line 25))
Using cached identify-2.6.9-py2.py3-none-any.whl.metadata (4.4 kB)
Collecting nodeenv>=0.11.1 (from pre-commit>=2.13.0->-r requirements.txt (line 25))
Using cached nodeenv-1.9.1-py2.py3-none-any.whl.metadata (21 kB)
Collecting pyyaml>=5.1 (from pre-commit>=2.13.0->-r requirements.txt (line 25))
Using cached PyYAML-6.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.1 kB)
Collecting virtualenv>=20.10.0 (from pre-commit>=2.13.0->-r requirements.txt (line 25))
Using cached virtualenv-20.29.3-py3-none-any.whl.metadata (4.5 kB)
Collecting iniconfig (from pytest!=7.1.0,>=5.2.0->-r requirements.txt (line 27))
Using cached iniconfig-2.0.0-py3-none-any.whl.metadata (2.6 kB)
Collecting pluggy<2,>=1.5 (from pytest!=7.1.0,>=5.2.0->-r requirements.txt (line 27))
Using cached pluggy-1.5.0-py3-none-any.whl.metadata (4.8 kB)
Collecting pbr>=0.11 (from mock<3.0.0,>=2.0.0->-r requirements.txt (line 29))
Using cached pbr-6.1.1-py2.py3-none-any.whl.metadata (3.4 kB)
Collecting six>=1.9 (from mock<3.0.0,>=2.0.0->-r requirements.txt (line 29))
Using cached six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB)
Collecting mccabe<0.8.0,>=0.7.0 (from flake8<6.0.0,>=3.8.0->-r requirements.txt (line 30))
Using cached mccabe-0.7.0-py2.py3-none-any.whl.metadata (5.0 kB)
Collecting pycodestyle<2.10.0,>=2.9.0 (from flake8<6.0.0,>=3.8.0->-r requirements.txt (line 30))
Using cached pycodestyle-2.9.1-py2.py3-none-any.whl.metadata (31 kB)
Collecting pyflakes<2.6.0,>=2.5.0 (from flake8<6.0.0,>=3.8.0->-r requirements.txt (line 30))
Using cached pyflakes-2.5.0-py2.py3-none-any.whl.metadata (3.8 kB)
Collecting attrs>=22.2.0 (from hypothesis<7.0.0,>=3.27.0->-r requirements.txt (line 31))
Using cached attrs-25.3.0-py3-none-any.whl.metadata (10 kB)
Collecting sortedcontainers<3.0.0,>=2.1.0 (from hypothesis<7.0.0,>=3.27.0->-r requirements.txt (line 31))
Using cached sortedcontainers-2.4.0-py2.py3-none-any.whl.metadata (10 kB)
Collecting tokenize-rt>=3.2.0 (from cython-lint>=0.15.0->-r requirements.txt (line 38))
Using cached tokenize_rt-6.1.0-py2.py3-none-any.whl.metadata (4.1 kB)
Collecting marisa-trie>=1.1.0 (from language-data>=1.2->langcodes<4.0.0,>=3.2.0->-r requirements.txt (line 20))
Using cached marisa_trie-1.2.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (9.0 kB)
Collecting markdown-it-py>=2.2.0 (from rich>=10.11.0->typer<1.0.0,>=0.3.0->-r requirements.txt (line 12))
Using cached markdown_it_py-3.0.0-py3-none-any.whl.metadata (6.9 kB)
Collecting pygments<3.0.0,>=2.13.0 (from rich>=10.11.0->typer<1.0.0,>=0.3.0->-r requirements.txt (line 12))
Using cached pygments-2.19.1-py3-none-any.whl.metadata (2.5 kB)
Collecting wrapt (from smart-open<8.0.0,>=5.2.1->weasel<0.5.0,>=0.1.0->-r requirements.txt (line 13))
Using cached wrapt-1.17.2-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.4 kB)
Collecting distlib<1,>=0.3.7 (from virtualenv>=20.10.0->pre-commit>=2.13.0->-r requirements.txt (line 25))
Using cached distlib-0.3.9-py2.py3-none-any.whl.metadata (5.2 kB)
Collecting filelock<4,>=3.12.2 (from virtualenv>=20.10.0->pre-commit>=2.13.0->-r requirements.txt (line 25))
Using cached filelock-3.18.0-py3-none-any.whl.metadata (2.9 kB)
Collecting mdurl~=0.1 (from markdown-it-py>=2.2.0->rich>=10.11.0->typer<1.0.0,>=0.3.0->-r requirements.txt (line 12))
Using cached mdurl-0.1.2-py3-none-any.whl.metadata (1.6 kB)
Using cached black-22.3.0-py3-none-any.whl (153 kB)
Using cached spacy_legacy-3.0.12-py2.py3-none-any.whl (29 kB)
Using cached spacy_loggers-1.0.5-py3-none-any.whl (22 kB)
Using cached cymem-2.0.11-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (222 kB)
Using cached ml_datasets-0.2.0-py3-none-any.whl (15 kB)
Using cached murmurhash-1.0.12-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (133 kB)
Using cached wasabi-1.1.3-py3-none-any.whl (27 kB)
Using cached srsly-2.5.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.1 MB)
Using cached catalogue-2.0.10-py3-none-any.whl (17 kB)
Using cached typer-0.15.2-py3-none-any.whl (45 kB)
Using cached weasel-0.4.1-py3-none-any.whl (50 kB)
Using cached requests-2.32.3-py3-none-any.whl (64 kB)
Using cached tqdm-4.67.1-py3-none-any.whl (78 kB)
Using cached pydantic-2.10.6-py3-none-any.whl (431 kB)
Using cached pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.0 MB)
Using cached jinja2-3.1.6-py3-none-any.whl (134 kB)
Using cached langcodes-3.5.0-py3-none-any.whl (182 kB)
Using cached packaging-24.2-py3-none-any.whl (65 kB)
Using cached pre_commit-4.1.0-py2.py3-none-any.whl (220 kB)
Using cached Cython-0.29.37-py2.py3-none-any.whl (989 kB)
Using cached pytest-8.3.5-py3-none-any.whl (343 kB)
Using cached pytest_timeout-1.4.2-py2.py3-none-any.whl (10 kB)
Using cached mock-2.0.0-py2.py3-none-any.whl (56 kB)
Using cached flake8-5.0.4-py2.py3-none-any.whl (61 kB)
Using cached hypothesis-6.129.4-py3-none-any.whl (489 kB)
Using cached mypy-1.5.1-py3-none-any.whl (2.5 MB)
Using cached types_mock-5.2.0.20250306-py3-none-any.whl (10 kB)
Using cached types_setuptools-76.0.0.20250313-py3-none-any.whl (65 kB)
Using cached types_requests-2.32.0.20250306-py3-none-any.whl (20 kB)
Using cached cython_lint-0.16.6-py3-none-any.whl (12 kB)
Using cached isort-5.13.2-py3-none-any.whl (92 kB)
Using cached annotated_types-0.7.0-py3-none-any.whl (13 kB)
Using cached attrs-25.3.0-py3-none-any.whl (63 kB)
Using cached blis-1.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (9.2 MB)
Using cached certifi-2025.1.31-py3-none-any.whl (166 kB)
Using cached cfgv-3.4.0-py2.py3-none-any.whl (7.2 kB)
Using cached charset_normalizer-3.4.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (144 kB)
Using cached click-8.1.8-py3-none-any.whl (98 kB)
Using cached cloudpathlib-0.21.0-py3-none-any.whl (52 kB)
Using cached confection-0.1.5-py3-none-any.whl (35 kB)
Using cached identify-2.6.9-py2.py3-none-any.whl (99 kB)
Using cached idna-3.10-py3-none-any.whl (70 kB)
Using cached language_data-1.3.0-py3-none-any.whl (5.4 MB)
Using cached MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23 kB)
Using cached mccabe-0.7.0-py2.py3-none-any.whl (7.3 kB)
Using cached mypy_extensions-1.0.0-py3-none-any.whl (4.7 kB)
Using cached nodeenv-1.9.1-py2.py3-none-any.whl (22 kB)
Using cached pathspec-0.12.1-py3-none-any.whl (31 kB)
Using cached pbr-6.1.1-py2.py3-none-any.whl (108 kB)
Using cached platformdirs-4.3.6-py3-none-any.whl (18 kB)
Using cached pluggy-1.5.0-py3-none-any.whl (20 kB)
Using cached pycodestyle-2.9.1-py2.py3-none-any.whl (41 kB)
Using cached pyflakes-2.5.0-py2.py3-none-any.whl (66 kB)
Using cached PyYAML-6.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (759 kB)
Using cached rich-13.9.4-py3-none-any.whl (242 kB)
Using cached shellingham-1.5.4-py2.py3-none-any.whl (9.8 kB)
Using cached six-1.17.0-py2.py3-none-any.whl (11 kB)
Using cached smart_open-7.1.0-py3-none-any.whl (61 kB)
Using cached sortedcontainers-2.4.0-py2.py3-none-any.whl (29 kB)
Using cached tokenize_rt-6.1.0-py2.py3-none-any.whl (6.0 kB)
Using cached urllib3-2.3.0-py3-none-any.whl (128 kB)
Using cached virtualenv-20.29.3-py3-none-any.whl (4.3 MB)
Using cached iniconfig-2.0.0-py3-none-any.whl (5.9 kB)
Using cached distlib-0.3.9-py2.py3-none-any.whl (468 kB)
Using cached filelock-3.18.0-py3-none-any.whl (16 kB)
Using cached marisa_trie-1.2.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.4 MB)
Using cached markdown_it_py-3.0.0-py3-none-any.whl (87 kB)
Using cached pygments-2.19.1-py3-none-any.whl (1.2 MB)
Using cached wrapt-1.17.2-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (89 kB)
Using cached mdurl-0.1.2-py3-none-any.whl (10.0 kB)
Building wheels for collected packages: thinc
Building wheel for thinc (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for thinc (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [380 lines of output]
Cythonizing sources
running bdist_wheel
running build
running build_py
creating build/lib.linux-x86_64-cpython-313/thinc
copying thinc/util.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/types.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/schedules.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/optimizers.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/mypy.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/model.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/loss.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/initializers.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/config.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/compat.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/api.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/about.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc
creating build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/util.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_util.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_types.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_serialize.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_schedules.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_optimizers.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_loss.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_initializers.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_indexing.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_import__all__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_examples.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_config.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/strategies.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/enable_tensorflow.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/enable_mxnet.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/conftest.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
creating build/lib.linux-x86_64-cpython-313/thinc/shims
copying thinc/shims/torchscript.py -> build/lib.linux-x86_64-cpython-313/thinc/shims
copying thinc/shims/tensorflow.py -> build/lib.linux-x86_64-cpython-313/thinc/shims
copying thinc/shims/shim.py -> build/lib.linux-x86_64-cpython-313/thinc/shims
copying thinc/shims/pytorch_grad_scaler.py -> build/lib.linux-x86_64-cpython-313/thinc/shims
copying thinc/shims/pytorch.py -> build/lib.linux-x86_64-cpython-313/thinc/shims
copying thinc/shims/mxnet.py -> build/lib.linux-x86_64-cpython-313/thinc/shims
copying thinc/shims/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/shims
creating build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_signpost_interval.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_reshape.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_ragged.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_padded.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_nvtx_range.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_list.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_getitem.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_flatten_v2.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_flatten.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_debug.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_cpu.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_array2d.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_array.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/uniqued.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/tuplify.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/torchscriptwrapper.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/tensorflowwrapper.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/swish.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/strings2arrays.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/softmax_activation.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/softmax.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/sigmoid_activation.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/sigmoid.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/siamese.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/resizable.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/residual.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/remap_ids.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/relu.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/reduce_sum.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/reduce_mean.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/reduce_max.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/reduce_last.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/reduce_first.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/ragged2list.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/pytorchwrapper.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/parametricattention_v2.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/parametricattention.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/padded2list.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/noop.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/mxnetwrapper.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/multisoftmax.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/mish.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/maxout.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/map_list.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/lstm.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/logistic.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/list2ragged.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/list2padded.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/list2array.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/linear.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/layernorm.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/hashembed.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/hard_swish_mobilenet.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/hard_swish.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/gelu.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/expand_window.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/embed.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/dropout.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/dish.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/concatenate.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/clone.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/clipped_linear.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/chain.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/cauchysimilarity.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/bidirectional.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/array_getitem.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/add.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
creating build/lib.linux-x86_64-cpython-313/thinc/extra
copying thinc/extra/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/extra
creating build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/ops.py -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/mps_ops.py -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/cupy_ops.py -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/_param_server.py -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/_custom_kernels.py -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/_cupy_allocators.py -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/backends
creating build/lib.linux-x86_64-cpython-313/thinc/tests/shims
copying thinc/tests/shims/test_pytorch_grad_scaler.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/shims
copying thinc/tests/shims/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/shims
creating build/lib.linux-x86_64-cpython-313/thinc/tests/regression
copying thinc/tests/regression/test_issue564.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/regression
copying thinc/tests/regression/test_issue208.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/regression
copying thinc/tests/regression/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/regression
creating build/lib.linux-x86_64-cpython-313/thinc/tests/mypy
copying thinc/tests/mypy/test_mypy.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy
copying thinc/tests/mypy/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy
creating build/lib.linux-x86_64-cpython-313/thinc/tests/model
copying thinc/tests/model/test_validation.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/model
copying thinc/tests/model/test_model.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/model
copying thinc/tests/model/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/model
creating build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_with_transforms.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_with_flatten.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_with_debug.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_uniqued.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_transforms.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_torchscriptwrapper.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_tensorflow_wrapper.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_sparse_linear.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_softmax.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_shim.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_resizable.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_reduce.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_pytorch_wrapper.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_parametric_attention_v2.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_mxnet_wrapper.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_mnist.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_mappers.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_lstm.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_linear.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_layers_api.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_hash_embed.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_feed_forward.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_combinators.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_basic_tagger.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
creating build/lib.linux-x86_64-cpython-313/thinc/tests/extra
copying thinc/tests/extra/test_beam_search.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/extra
copying thinc/tests/extra/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/extra
creating build/lib.linux-x86_64-cpython-313/thinc/tests/backends
copying thinc/tests/backends/test_ops.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/backends
copying thinc/tests/backends/test_mem.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/backends
copying thinc/tests/backends/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/backends
creating build/lib.linux-x86_64-cpython-313/thinc/tests/regression/issue519
copying thinc/tests/regression/issue519/test_issue519.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/regression/issue519
copying thinc/tests/regression/issue519/program.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/regression/issue519
copying thinc/tests/regression/issue519/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/regression/issue519
creating build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/modules
copying thinc/tests/mypy/modules/success_plugin.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/modules
copying thinc/tests/mypy/modules/success_no_plugin.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/modules
copying thinc/tests/mypy/modules/fail_plugin.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/modules
copying thinc/tests/mypy/modules/fail_no_plugin.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/modules
copying thinc/tests/mypy/modules/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/modules
creating build/lib.linux-x86_64-cpython-313/thinc/extra/tests
copying thinc/extra/tests/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/extra/tests
running egg_info
writing thinc.egg-info/PKG-INFO
writing dependency_links to thinc.egg-info/dependency_links.txt
writing entry points to thinc.egg-info/entry_points.txt
writing requirements to thinc.egg-info/requires.txt
writing top-level names to thinc.egg-info/top_level.txt
dependency /tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/numpy/_core/include/numpy/arrayobject.h won't be automatically included in the manifest: the path must be relative
dependency /tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/numpy/_core/include/numpy/arrayscalars.h won't be automatically included in the manifest: the path must be relative
dependency /tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/numpy/_core/include/numpy/ndarrayobject.h won't be automatically included in the manifest: the path must be relative
dependency /tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/numpy/_core/include/numpy/ndarraytypes.h won't be automatically included in the manifest: the path must be relative
dependency /tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/numpy/_core/include/numpy/ufuncobject.h won't be automatically included in the manifest: the path must be relative
dependency /usr/include/python3.13/Python.h won't be automatically included in the manifest: the path must be relative
dependency /tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/numpy/_core/include/numpy/arrayobject.h won't be automatically included in the manifest: the path must be relative
dependency /tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/numpy/_core/include/numpy/arrayscalars.h won't be automatically included in the manifest: the path must be relative
dependency /tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/numpy/_core/include/numpy/ndarrayobject.h won't be automatically included in the manifest: the path must be relative
dependency /tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/numpy/_core/include/numpy/ndarraytypes.h won't be automatically included in the manifest: the path must be relative
dependency /tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/numpy/_core/include/numpy/ufuncobject.h won't be automatically included in the manifest: the path must be relative
dependency /usr/include/python3.13/Python.h won't be automatically included in the manifest: the path must be relative
reading manifest file 'thinc.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
no previously-included directories found matching 'tmp'
adding license file 'LICENSE'
writing manifest file 'thinc.egg-info/SOURCES.txt'
/tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/setuptools/command/build_py.py:212: _Warning: Package 'thinc.tests.mypy.configs' is absent from the `packages` configuration.
!!
********************************************************************************
############################
# Package would be ignored #
############################
Python recognizes 'thinc.tests.mypy.configs' as an importable package[^1],
but it is absent from setuptools' `packages` configuration.
This leads to an ambiguous overall configuration. If you want to distribute this
package, please make sure that 'thinc.tests.mypy.configs' is explicitly added
to the `packages` configuration field.
Alternatively, you can also rely on setuptools' discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" on setuptools documentation page:
- https://setuptools.pypa.io/en/latest/userguide/package_discovery.html
If you don't want 'thinc.tests.mypy.configs' to be distributed and are
already explicitly excluding 'thinc.tests.mypy.configs' via
`find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`,
you can try to use `exclude_package_data`, or `include-package-data=False` in
combination with a more fine grained `package-data` configuration.
You can read more about "package data files" on setuptools documentation page:
- https://setuptools.pypa.io/en/latest/userguide/datafiles.html
[^1]: For Python, any directory (with suitable naming) can be imported,
even if it does not contain any `.py` files.
On the other hand, currently there is no concept of package data
directory, all directories are treated like packages.
********************************************************************************
!!
check.warn(importable)
/tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/setuptools/command/build_py.py:212: _Warning: Package 'thinc.tests.mypy.outputs' is absent from the `packages` configuration.
!!
********************************************************************************
############################
# Package would be ignored #
############################
Python recognizes 'thinc.tests.mypy.outputs' as an importable package[^1],
but it is absent from setuptools' `packages` configuration.
This leads to an ambiguous overall configuration. If you want to distribute this
package, please make sure that 'thinc.tests.mypy.outputs' is explicitly added
to the `packages` configuration field.
Alternatively, you can also rely on setuptools' discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" on setuptools documentation page:
- https://setuptools.pypa.io/en/latest/userguide/package_discovery.html
If you don't want 'thinc.tests.mypy.outputs' to be distributed and are
already explicitly excluding 'thinc.tests.mypy.outputs' via
`find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`,
you can try to use `exclude_package_data`, or `include-package-data=False` in
combination with a more fine grained `package-data` configuration.
You can read more about "package data files" on setuptools documentation page:
- https://setuptools.pypa.io/en/latest/userguide/datafiles.html
[^1]: For Python, any directory (with suitable naming) can be imported,
even if it does not contain any `.py` files.
On the other hand, currently there is no concept of package data
directory, all directories are treated like packages.
********************************************************************************
!!
check.warn(importable)
copying thinc/__init__.pxd -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/py.typed -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/layers/premap_ids.pyx -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/sparselinear.pyx -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/extra/__init__.pxd -> build/lib.linux-x86_64-cpython-313/thinc/extra
copying thinc/extra/search.pxd -> build/lib.linux-x86_64-cpython-313/thinc/extra
copying thinc/extra/search.pyx -> build/lib.linux-x86_64-cpython-313/thinc/extra
copying thinc/backends/__init__.pxd -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/_custom_kernels.cu -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/_murmur3.cu -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/cblas.pxd -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/cblas.pyx -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/cpu_kernels.hh -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/linalg.pxd -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/linalg.pyx -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/numpy_ops.pxd -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/numpy_ops.pyx -> build/lib.linux-x86_64-cpython-313/thinc/backends
creating build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/configs
copying thinc/tests/mypy/configs/mypy-default.ini -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/configs
copying thinc/tests/mypy/configs/mypy-plugin.ini -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/configs
creating build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/outputs
copying thinc/tests/mypy/outputs/fail-no-plugin.txt -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/outputs
copying thinc/tests/mypy/outputs/fail-plugin.txt -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/outputs
copying thinc/tests/mypy/outputs/success-no-plugin.txt -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/outputs
copying thinc/tests/mypy/outputs/success-plugin.txt -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/outputs
copying thinc/extra/tests/c_test_search.pyx -> build/lib.linux-x86_64-cpython-313/thinc/extra/tests
running build_ext
building 'thinc.backends.cblas' extension
creating build/temp.linux-x86_64-cpython-313/thinc/backends
g++ -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O3 -Wall -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=3 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -ffat-lto-objects -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=3 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=3 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -fPIC -I/tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/numpy/_core/include -I/usr/include/python3.13 -I/home/manshika/.virtualenvs/dlenv/include -I/usr/include/python3.13 -c thinc/backends/cblas.cpp -o build/temp.linux-x86_64-cpython-313/thinc/backends/cblas.o -O3 -Wno-strict-prototypes -Wno-unused-function -std=c++11
cc1plus: warning: command-line option ‘-Wno-strict-prototypes’ is valid for C/ObjC but not for C++
thinc/backends/cblas.cpp:871:72: warning: ‘Py_UNICODE’ is deprecated [-Wdeprecated-declarations]
871 | static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) {
| ^
In file included from /usr/include/python3.13/unicodeobject.h:1014,
from /usr/include/python3.13/Python.h:79,
from thinc/backends/cblas.cpp:24:
/usr/include/python3.13/cpython/unicodeobject.h:10:37: note: declared here
10 | Py_DEPRECATED(3.13) typedef wchar_t Py_UNICODE;
| ^~~~~~~~~~
thinc/backends/cblas.cpp: In function ‘size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE*)’:
thinc/backends/cblas.cpp:872:23: warning: ‘Py_UNICODE’ is deprecated [-Wdeprecated-declarations]
872 | const Py_UNICODE *u_end = u;
| ^~~~~
/usr/include/python3.13/cpython/unicodeobject.h:10:37: note: declared here
10 | Py_DEPRECATED(3.13) typedef wchar_t Py_UNICODE;
| ^~~~~~~~~~
thinc/backends/cblas.cpp: In function ‘int __Pyx_PyList_Extend(PyObject*, PyObject*)’:
thinc/backends/cblas.cpp:1908:22: error: ‘_PyList_Extend’ was not declared in this scope; did you mean ‘PyList_Extend’?
1908 | PyObject* none = _PyList_Extend((PyListObject*)L, v);
| ^~~~~~~~~~~~~~
| PyList_Extend
thinc/backends/cblas.cpp: In function ‘void __Pyx_init_assertions_enabled()’:
thinc/backends/cblas.cpp:1946:39: error: ‘_PyInterpreterState_GetConfig’ was not declared in this scope; did you mean ‘PyInterpreterState_GetID’?
1946 | __pyx_assertions_enabled_flag = ! _PyInterpreterState_GetConfig(__Pyx_PyThreadState_Current->interp)->optimization_level;
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| PyInterpreterState_GetID
thinc/backends/cblas.cpp: In function ‘int __Pyx_PyInt_As_int(PyObject*)’:
thinc/backends/cblas.cpp:20354:46: error: too few arguments to function ‘int _PyLong_AsByteArray(PyLongObject*, unsigned char*, size_t, int, int, int)’
20354 | int ret = _PyLong_AsByteArray((PyLongObject *)v,
| ~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~
20355 | bytes, sizeof(val),
| ~~~~~~~~~~~~~~~~~~~
20356 | is_little, !is_unsigned);
| ~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/python3.13/longobject.h:107,
from /usr/include/python3.13/Python.h:81:
/usr/include/python3.13/cpython/longobject.h:111:17: note: declared here
111 | PyAPI_FUNC(int) _PyLong_AsByteArray(PyLongObject* v,
| ^~~~~~~~~~~~~~~~~~~
thinc/backends/cblas.cpp: In function ‘long int __Pyx_PyInt_As_long(PyObject*)’:
thinc/backends/cblas.cpp:20550:46: error: too few arguments to function ‘int _PyLong_AsByteArray(PyLongObject*, unsigned char*, size_t, int, int, int)’
20550 | int ret = _PyLong_AsByteArray((PyLongObject *)v,
| ~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~
20551 | bytes, sizeof(val),
| ~~~~~~~~~~~~~~~~~~~
20552 | is_little, !is_unsigned);
| ~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/python3.13/cpython/longobject.h:111:17: note: declared here
111 | PyAPI_FUNC(int) _PyLong_AsByteArray(PyLongObject* v,
| ^~~~~~~~~~~~~~~~~~~
thinc/backends/cblas.cpp: In function ‘char __Pyx_PyInt_As_char(PyObject*)’:
thinc/backends/cblas.cpp:20822:46: error: too few arguments to function ‘int _PyLong_AsByteArray(PyLongObject*, unsigned char*, size_t, int, int, int)’
20822 | int ret = _PyLong_AsByteArray((PyLongObject *)v,
| ~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~
20823 | bytes, sizeof(val),
| ~~~~~~~~~~~~~~~~~~~
20824 | is_little, !is_unsigned);
| ~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/python3.13/cpython/longobject.h:111:17: note: declared here
111 | PyAPI_FUNC(int) _PyLong_AsByteArray(PyLongObject* v,
| ^~~~~~~~~~~~~~~~~~~
error: command '/usr/bin/g++' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for thinc
Failed to build thinc
ERROR: Failed to build installable wheels for some pyproject.toml based projects (thinc)
## Your Environment
* Operating System: Arch Linux
* Python Version Used: Python 3.13.2
* spaCy Version Used: Latest
* Environment Information: virtual environment
| open | 2025-03-18T12:44:49Z | 2025-03-18T12:44:49Z | https://github.com/explosion/spaCy/issues/13772 | [] | manshika13 | 0 |
davidsandberg/facenet | computer-vision | 616 | How to submit LFW test result to lfw webpage? | I want to submit LFW test results to the LFW results webpage (http://vis-www.cs.umass.edu/lfw/results.html). However, I could not find any submission portal, page, or instructions. Could anybody give me some advice? | closed | 2018-01-17T11:38:37Z | 2018-10-19T09:34:08Z | https://github.com/davidsandberg/facenet/issues/616 | [] | shanren7 | 3 |
okken/pytest-check | pytest | 65 | document maxfail behavior | Related to issue #64 | closed | 2021-08-02T17:05:56Z | 2021-09-12T17:13:56Z | https://github.com/okken/pytest-check/issues/65 | [
"documentation"
] | okken | 1 |
sqlalchemy/alembic | sqlalchemy | 343 | Initialise from existing schema (v0.7.3) | **Migrated issue, originally created by Tom Dalton ([@tom-dalton-fanduel](https://github.com/tom-dalton-fanduel))**
Apologies if this isn't the correct place to ask this - I can't see the answer in the docs (http://alembic.readthedocs.org/en/latest/tutorial.html#running-our-first-migration), nor see a mailing list that I can ask this.
I have a system that has an existing DB, that existed pre-alembic. We now [want to] use alembic to manage the migrations, and I have an initial migration for the current schema. I have a second migration to make some changes.
In development, I can run `alembic upgrade head` on an empty DB and it all works as expected. However, in production the DB already exists and is at version 1, but the alembic metadata (the `alembic_version` table) isn't there.
Is it possible to run something like `alembic upgrade --force --from 1 --to 2` which will create the alembic version table, set the current version to 1, and then run the 1->2 migration as normal?
I'm currently using 0.7.3 but can use a newer version if necessary.
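Edit: it looks like Alembic's `stamp` command may cover this — it writes a revision into the `alembic_version` table (creating the table if missing) without running any migration code. A possible sequence, with an illustrative revision id placeholder (check `alembic history` for the real one):

```shell
# Record that the existing production schema is already at revision 1;
# this creates the alembic_version table but runs no migrations.
alembic stamp <revision_1_id>

# Then apply only the newer migrations (1 -> 2) as usual.
alembic upgrade head
```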
| closed | 2015-12-09T18:24:01Z | 2015-12-09T23:18:35Z | https://github.com/sqlalchemy/alembic/issues/343 | [] | sqlalchemy-bot | 7 |
rthalley/dnspython | asyncio | 482 | 2.0.0 incompatibility: str(dnssec.algorithm_from_text(x)) | Discussion: do we want to keep this incompatible change?
`dnssec.algorithm_from_text()` now returns an enum, which is a change from the plain integer it returned in versions < 2.0.0.
This breaks code which uses the result of `dnssec.algorithm_from_text()` to assemble the text representation of RRs, e.g. DS or DNSKEY records.
Example of old code:
```python
alg = dnssec.algorithm_from_text(user_input_algo)  # accepts `10` and also `RSASHA512`
ds_txt = f'... {alg} ...'
rd = dns.rdata.from_text(rr.rdclass, rr.rdtype, ds_txt)
```
Under 2.0.0 this code raises an exception:
```
pydnstest/scenario.py:253: in process_sections
rd = dns.rdata.from_text(rr.rdclass, rr.rdtype, ' '.join(
../../python-dns/git/dns/rdata.py:454: in from_text
return cls.from_text(rdclass, rdtype, tok, origin, relativize,
../../python-dns/git/dns/rdtypes/dsbase.py:49: in from_text
algorithm = tok.get_uint8()
../../python-dns/git/dns/tokenizer.py:491: in get_uint8
value = self.get_int()
../../python-dns/git/dns/tokenizer.py:479: in get_int
raise dns.exception.SyntaxError('expecting an integer')
E dns.exception.SyntaxError: expecting an integer
```
The reason is that `str(alg)` now returns `Algorithm.RSASHA512`.
Maybe this is a corner case, I don't know. A possible compatibility interface would be a `__str__` method on `dns.dnssec.Algorithm` that returns `Algorithm.value` as a string.
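For illustration, a minimal stdlib-only sketch (a stand-in `IntEnum`, not the real `dns.dnssec.Algorithm`) of the behavior and two portable workarounds:

```python
from enum import IntEnum

# Stand-in for dns.dnssec.Algorithm (an IntEnum in dnspython >= 2.0).
class Algorithm(IntEnum):
    RSASHA512 = 10

alg = Algorithm.RSASHA512

# On Python < 3.11, str(alg) yields "Algorithm.RSASHA512", which is what
# breaks zone-file assembly; Python 3.11 changed IntEnum.__str__ to the
# numeric form, so str() output is version-dependent either way.
# Portable ways to get the numeric text regardless of Python version:
print(f"{alg:d}")     # the "d" format spec delegates to int.__format__
print(str(int(alg)))  # explicit conversion through int
```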
Ideas? | closed | 2020-05-25T10:30:14Z | 2020-05-27T07:08:25Z | https://github.com/rthalley/dnspython/issues/482 | [] | pspacek | 3 |
marshmallow-code/flask-smorest | rest-api | 267 | Multiple argument schemas not possible with `location='json'` | Hey, first thank you guys for making a really nice framework. I've used marshmallow / webargs for some time, and I really think making API documentation using flask-smorest is a breeze when all these frameworks are combined.
I saw that it's possible to use multiple arguments schemas [(docs here)](https://flask-smorest.readthedocs.io/en/latest/arguments.html#multiple-arguments-schemas) by stacking several `@blp.arguments` on top of each other. This is convenient and makes sense. However, it seems that only the innermost decorator is included in the documentation if the location is `'json'`.
Here's a minimal example that reproduces for me:
```python
import flask
import marshmallow as ma
import flask_smorest
# flask_smorest version: 0.31.2
# flask: 2.0.1
# marshmallow: 3.10.0
blp = flask_smorest.Blueprint('bp', __name__)
class Schema1(ma.Schema):
field1 = ma.fields.String()
class Schema2(ma.Schema):
field2 = ma.fields.String()
@blp.route('/foo')
@blp.arguments(Schema1, location='json', as_kwargs=True) # change both to location='query' and it works as expected
@blp.arguments(Schema2, location='json', as_kwargs=True)
@blp.response(200)
def foo(field1, field2):
return f"received {field1} and {field2} :-)"
app = flask.Flask(__name__)
app.config["API_TITLE"] = "rvs API"
app.config["API_VERSION"] = "v1"
app.config["OPENAPI_VERSION"] = "3.0.2"
app.config["OPENAPI_URL_PREFIX"] = "/"
app.config["OPENAPI_RAPIDOC_PATH"] = "/docs"
app.config["OPENAPI_RAPIDOC_URL"] = "https://mrin9.github.io/RapiDoc/rapidoc-min.js"
api = flask_smorest.Api(app)
api.register_blueprint(blp)
```
Next I do `FLASK_APP=main.py flask run` and then go to `localhost:5000/docs`
**Expected**:
Both `field1` and `field2` are shown in the API docs
**Actual**
Only `field2` is shown. Here's a screenshot for me:

Notice that if we replace `location='json'` with `location='query'` it works as expected.
Edit: Also looks like e.g. doing `http GET :5000/foo field1="field1" field2="field2"` on the terminal does not work, which indicates that using multiple `@blp.arguments` does not work if the `'json'` location is used. | closed | 2021-08-06T14:18:22Z | 2021-08-09T09:08:05Z | https://github.com/marshmallow-code/flask-smorest/issues/267 | [
"question"
] | kvalv | 2 |
JaidedAI/EasyOCR | pytorch | 611 | RuntimeError: Error(s) in loading state_dict for Model: | Missing key(s) in state_dict: "FeatureExtraction.ConvNet.0.weight", "FeatureExtraction.ConvNet.0.bias", "FeatureExtraction.ConvNet.3.weight", "FeatureExtraction.ConvNet.3.bias", "FeatureExtraction.ConvNet.6.weight", "FeatureExtraction.ConvNet.6.bias", "FeatureExtraction.ConvNet.8.weight", "FeatureExtraction.ConvNet.8.bias", "FeatureExtraction.ConvNet.11.weight", "FeatureExtraction.ConvNet.12.weight", "FeatureExtraction.ConvNet.12.bias", "FeatureExtraction.ConvNet.12.running_mean", "FeatureExtraction.ConvNet.12.running_var", "FeatureExtraction.ConvNet.14.weight", "FeatureExtraction.ConvNet.15.weight", "FeatureExtraction.ConvNet.15.bias", "FeatureExtraction.ConvNet.15.running_mean", "FeatureExtraction.ConvNet.15.running_var", "FeatureExtraction.ConvNet.18.weight", "FeatureExtraction.ConvNet.18.bias".
Unexpected key(s) in state_dict: "FeatureExtraction.ConvNet.conv0_1.weight", "FeatureExtraction.ConvNet.bn0_1.weight", "FeatureExtraction.ConvNet.bn0_1.bias", "FeatureExtraction.ConvNet.bn0_1.running_mean", "FeatureExtraction.ConvNet.bn0_1.running_var", "FeatureExtraction.ConvNet.bn0_1.num_batches_tracked", "FeatureExtraction.ConvNet.conv0_2.weight", "FeatureExtraction.ConvNet.bn0_2.weight", "FeatureExtraction.ConvNet.bn0_2.bias", "FeatureExtraction.ConvNet.bn0_2.running_mean", "FeatureExtraction.ConvNet.bn0_2.running_var", "FeatureExtraction.ConvNet.bn0_2.num_batches_tracked", "FeatureExtraction.ConvNet.layer1.0.conv1.weight", "FeatureExtraction.ConvNet.layer1.0.bn1.weight", "FeatureExtraction.ConvNet.layer1.0.bn1.bias", "FeatureExtraction.ConvNet.layer1.0.bn1.running_mean", "FeatureExtraction.ConvNet.layer1.0.bn1.running_var", "FeatureExtraction.ConvNet.layer1.0.bn1.num_batches_tracked", "FeatureExtraction.ConvNet.layer1.0.conv2.weight", "FeatureExtraction.ConvNet.layer1.0.bn2.weight", "FeatureExtraction.ConvNet.layer1.0.bn2.bias", "FeatureExtraction.ConvNet.layer1.0.bn2.running_mean", "FeatureExtraction.ConvNet.layer1.0.bn2.running_var", "FeatureExtraction.ConvNet.layer1.0.bn2.num_batches_tracked", "FeatureExtraction.ConvNet.layer1.0.downsample.0.weight", "FeatureExtraction.ConvNet.layer1.0.downsample.1.weight", "FeatureExtraction.ConvNet.layer1.0.downsample.1.bias", "FeatureExtraction.ConvNet.layer1.0.downsample.1.running_mean", "FeatureExtraction.ConvNet.layer1.0.downsample.1.running_var", "FeatureExtraction.ConvNet.layer1.0.downsample.1.num_batches_tracked", "FeatureExtraction.ConvNet.conv1.weight", "FeatureExtraction.ConvNet.bn1.weight", "FeatureExtraction.ConvNet.bn1.bias", "FeatureExtraction.ConvNet.bn1.running_mean", "FeatureExtraction.ConvNet.bn1.running_var", "FeatureExtraction.ConvNet.bn1.num_batches_tracked", "FeatureExtraction.ConvNet.layer2.0.conv1.weight", "FeatureExtraction.ConvNet.layer2.0.bn1.weight", 
"FeatureExtraction.ConvNet.layer2.0.bn1.bias", "FeatureExtraction.ConvNet.layer2.0.bn1.running_mean", "FeatureExtraction.ConvNet.layer2.0.bn1.running_var", "FeatureExtraction.ConvNet.layer2.0.bn1.num_batches_tracked", "FeatureExtraction.ConvNet.layer2.0.conv2.weight", "FeatureExtraction.ConvNet.layer2.0.bn2.weight", "FeatureExtraction.ConvNet.layer2.0.bn2.bias", "FeatureExtraction.ConvNet.layer2.0.bn2.running_mean", "FeatureExtraction.ConvNet.layer2.0.bn2.running_var", "FeatureExtraction.ConvNet.layer2.0.bn2.num_batches_tracked", "FeatureExtraction.ConvNet.layer2.0.downsample.0.weight", "FeatureExtraction.ConvNet.layer2.0.downsample.1.weight", "FeatureExtraction.ConvNet.layer2.0.downsample.1.bias", "FeatureExtraction.ConvNet.layer2.0.downsample.1.running_mean", "FeatureExtraction.ConvNet.layer2.0.downsample.1.running_var", "FeatureExtraction.ConvNet.layer2.0.downsample.1.num_batches_tracked", "FeatureExtraction.ConvNet.layer2.1.conv1.weight", "FeatureExtraction.ConvNet.layer2.1.bn1.weight", "FeatureExtraction.ConvNet.layer2.1.bn1.bias", "FeatureExtraction.ConvNet.layer2.1.bn1.running_mean", "FeatureExtraction.ConvNet.layer2.1.bn1.running_var", "FeatureExtraction.ConvNet.layer2.1.bn1.num_batches_tracked", "FeatureExtraction.ConvNet.layer2.1.conv2.weight", "FeatureExtraction.ConvNet.layer2.1.bn2.weight", "FeatureExtraction.ConvNet.layer2.1.bn2.bias", "FeatureExtraction.ConvNet.layer2.1.bn2.running_mean", "FeatureExtraction.ConvNet.layer2.1.bn2.running_var", "FeatureExtraction.ConvNet.layer2.1.bn2.num_batches_tracked", "FeatureExtraction.ConvNet.conv2.weight", "FeatureExtraction.ConvNet.bn2.weight", "FeatureExtraction.ConvNet.bn2.bias", "FeatureExtraction.ConvNet.bn2.running_mean", "FeatureExtraction.ConvNet.bn2.running_var", "FeatureExtraction.ConvNet.bn2.num_batches_tracked", "FeatureExtraction.ConvNet.layer3.0.conv1.weight", "FeatureExtraction.ConvNet.layer3.0.bn1.weight", "FeatureExtraction.ConvNet.layer3.0.bn1.bias", 
"FeatureExtraction.ConvNet.layer3.0.bn1.running_mean", "FeatureExtraction.ConvNet.layer3.0.bn1.running_var", "FeatureExtraction.ConvNet.layer3.0.bn1.num_batches_tracked", "FeatureExtraction.ConvNet.layer3.0.conv2.weight", "FeatureExtraction.ConvNet.layer3.0.bn2.weight", "FeatureExtraction.ConvNet.layer3.0.bn2.bias", "FeatureExtraction.ConvNet.layer3.0.bn2.running_mean", "FeatureExtraction.ConvNet.layer3.0.bn2.running_var", "FeatureExtraction.ConvNet.layer3.0.bn2.num_batches_tracked", "FeatureExtraction.ConvNet.layer3.0.downsample.0.weight", "FeatureExtraction.ConvNet.layer3.0.downsample.1.weight", "FeatureExtraction.ConvNet.layer3.0.downsample.1.bias", "FeatureExtraction.ConvNet.layer3.0.downsample.1.running_mean", "FeatureExtraction.ConvNet.layer3.0.downsample.1.running_var", "FeatureExtraction.ConvNet.layer3.0.downsample.1.num_batches_tracked", "FeatureExtraction.ConvNet.layer3.1.conv1.weight", "FeatureExtraction.ConvNet.layer3.1.bn1.weight", "FeatureExtraction.ConvNet.layer3.1.bn1.bias", "FeatureExtraction.ConvNet.layer3.1.bn1.running_mean", "FeatureExtraction.ConvNet.layer3.1.bn1.running_var", "FeatureExtraction.ConvNet.layer3.1.bn1.num_batches_tracked", "FeatureExtraction.ConvNet.layer3.1.conv2.weight", "FeatureExtraction.ConvNet.layer3.1.bn2.weight", "FeatureExtraction.ConvNet.layer3.1.bn2.bias", "FeatureExtraction.ConvNet.layer3.1.bn2.running_mean", "FeatureExtraction.ConvNet.layer3.1.bn2.running_var", "FeatureExtraction.ConvNet.layer3.1.bn2.num_batches_tracked", "FeatureExtraction.ConvNet.layer3.2.conv1.weight", "FeatureExtraction.ConvNet.layer3.2.bn1.weight", "FeatureExtraction.ConvNet.layer3.2.bn1.bias", "FeatureExtraction.ConvNet.layer3.2.bn1.running_mean", "FeatureExtraction.ConvNet.layer3.2.bn1.running_var", "FeatureExtraction.ConvNet.layer3.2.bn1.num_batches_tracked", "FeatureExtraction.ConvNet.layer3.2.conv2.weight", "FeatureExtraction.ConvNet.layer3.2.bn2.weight", "FeatureExtraction.ConvNet.layer3.2.bn2.bias", 
"FeatureExtraction.ConvNet.layer3.2.bn2.running_mean", "FeatureExtraction.ConvNet.layer3.2.bn2.running_var", "FeatureExtraction.ConvNet.layer3.2.bn2.num_batches_tracked", "FeatureExtraction.ConvNet.layer3.3.conv1.weight", "FeatureExtraction.ConvNet.layer3.3.bn1.weight", "FeatureExtraction.ConvNet.layer3.3.bn1.bias", "FeatureExtraction.ConvNet.layer3.3.bn1.running_mean", "FeatureExtraction.ConvNet.layer3.3.bn1.running_var", "FeatureExtraction.ConvNet.layer3.3.bn1.num_batches_tracked", "FeatureExtraction.ConvNet.layer3.3.conv2.weight", "FeatureExtraction.ConvNet.layer3.3.bn2.weight", "FeatureExtraction.ConvNet.layer3.3.bn2.bias", "FeatureExtraction.ConvNet.layer3.3.bn2.running_mean", "FeatureExtraction.ConvNet.layer3.3.bn2.running_var", "FeatureExtraction.ConvNet.layer3.3.bn2.num_batches_tracked", "FeatureExtraction.ConvNet.layer3.4.conv1.weight", "FeatureExtraction.ConvNet.layer3.4.bn1.weight", "FeatureExtraction.ConvNet.layer3.4.bn1.bias", "FeatureExtraction.ConvNet.layer3.4.bn1.running_mean", "FeatureExtraction.ConvNet.layer3.4.bn1.running_var", "FeatureExtraction.ConvNet.layer3.4.bn1.num_batches_tracked", "FeatureExtraction.ConvNet.layer3.4.conv2.weight", "FeatureExtraction.ConvNet.layer3.4.bn2.weight", "FeatureExtraction.ConvNet.layer3.4.bn2.bias", "FeatureExtraction.ConvNet.layer3.4.bn2.running_mean", "FeatureExtraction.ConvNet.layer3.4.bn2.running_var", "FeatureExtraction.ConvNet.layer3.4.bn2.num_batches_tracked", "FeatureExtraction.ConvNet.conv3.weight", "FeatureExtraction.ConvNet.bn3.weight", "FeatureExtraction.ConvNet.bn3.bias", "FeatureExtraction.ConvNet.bn3.running_mean", "FeatureExtraction.ConvNet.bn3.running_var", "FeatureExtraction.ConvNet.bn3.num_batches_tracked", "FeatureExtraction.ConvNet.layer4.0.conv1.weight", "FeatureExtraction.ConvNet.layer4.0.bn1.weight", "FeatureExtraction.ConvNet.layer4.0.bn1.bias", "FeatureExtraction.ConvNet.layer4.0.bn1.running_mean", "FeatureExtraction.ConvNet.layer4.0.bn1.running_var", 
"FeatureExtraction.ConvNet.layer4.0.bn1.num_batches_tracked", "FeatureExtraction.ConvNet.layer4.0.conv2.weight", "FeatureExtraction.ConvNet.layer4.0.bn2.weight", "FeatureExtraction.ConvNet.layer4.0.bn2.bias", "FeatureExtraction.ConvNet.layer4.0.bn2.running_mean", "FeatureExtraction.ConvNet.layer4.0.bn2.running_var", "FeatureExtraction.ConvNet.layer4.0.bn2.num_batches_tracked", "FeatureExtraction.ConvNet.layer4.1.conv1.weight", "FeatureExtraction.ConvNet.layer4.1.bn1.weight", "FeatureExtraction.ConvNet.layer4.1.bn1.bias", "FeatureExtraction.ConvNet.layer4.1.bn1.running_mean", "FeatureExtraction.ConvNet.layer4.1.bn1.running_var", "FeatureExtraction.ConvNet.layer4.1.bn1.num_batches_tracked", "FeatureExtraction.ConvNet.layer4.1.conv2.weight", "FeatureExtraction.ConvNet.layer4.1.bn2.weight", "FeatureExtraction.ConvNet.layer4.1.bn2.bias", "FeatureExtraction.ConvNet.layer4.1.bn2.running_mean", "FeatureExtraction.ConvNet.layer4.1.bn2.running_var", "FeatureExtraction.ConvNet.layer4.1.bn2.num_batches_tracked", "FeatureExtraction.ConvNet.layer4.2.conv1.weight", "FeatureExtraction.ConvNet.layer4.2.bn1.weight", "FeatureExtraction.ConvNet.layer4.2.bn1.bias", "FeatureExtraction.ConvNet.layer4.2.bn1.running_mean", "FeatureExtraction.ConvNet.layer4.2.bn1.running_var", "FeatureExtraction.ConvNet.layer4.2.bn1.num_batches_tracked", "FeatureExtraction.ConvNet.layer4.2.conv2.weight", "FeatureExtraction.ConvNet.layer4.2.bn2.weight", "FeatureExtraction.ConvNet.layer4.2.bn2.bias", "FeatureExtraction.ConvNet.layer4.2.bn2.running_mean", "FeatureExtraction.ConvNet.layer4.2.bn2.running_var", "FeatureExtraction.ConvNet.layer4.2.bn2.num_batches_tracked", "FeatureExtraction.ConvNet.conv4_1.weight", "FeatureExtraction.ConvNet.bn4_1.weight", "FeatureExtraction.ConvNet.bn4_1.bias", "FeatureExtraction.ConvNet.bn4_1.running_mean", "FeatureExtraction.ConvNet.bn4_1.running_var", "FeatureExtraction.ConvNet.bn4_1.num_batches_tracked", "FeatureExtraction.ConvNet.conv4_2.weight", 
"FeatureExtraction.ConvNet.bn4_2.weight", "FeatureExtraction.ConvNet.bn4_2.bias", "FeatureExtraction.ConvNet.bn4_2.running_mean", "FeatureExtraction.ConvNet.bn4_2.running_var", "FeatureExtraction.ConvNet.bn4_2.num_batches_tracked".
size mismatch for SequenceModeling.0.rnn.weight_ih_l0: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for SequenceModeling.0.rnn.weight_hh_l0: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for SequenceModeling.0.rnn.bias_ih_l0: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for SequenceModeling.0.rnn.bias_hh_l0: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for SequenceModeling.0.rnn.weight_ih_l0_reverse: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for SequenceModeling.0.rnn.weight_hh_l0_reverse: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for SequenceModeling.0.rnn.bias_ih_l0_reverse: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for SequenceModeling.0.rnn.bias_hh_l0_reverse: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for SequenceModeling.0.linear.weight: copying a param with shape torch.Size([512, 1024]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for SequenceModeling.0.linear.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for SequenceModeling.1.rnn.weight_ih_l0: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for SequenceModeling.1.rnn.weight_hh_l0: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for SequenceModeling.1.rnn.bias_ih_l0: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for SequenceModeling.1.rnn.bias_hh_l0: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for SequenceModeling.1.rnn.weight_ih_l0_reverse: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for SequenceModeling.1.rnn.weight_hh_l0_reverse: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for SequenceModeling.1.rnn.bias_ih_l0_reverse: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for SequenceModeling.1.rnn.bias_hh_l0_reverse: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for SequenceModeling.1.linear.weight: copying a param with shape torch.Size([512, 1024]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for SequenceModeling.1.linear.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for Prediction.weight: copying a param with shape torch.Size([188, 512]) from checkpoint, the shape in current model is torch.Size([188, 256]). | closed | 2021-12-08T04:01:42Z | 2024-07-18T01:59:39Z | https://github.com/JaidedAI/EasyOCR/issues/611 | [] | jitesh-rathod | 4 |
davidsandberg/facenet | computer-vision | 1,064 | Someone tried using VGGFACE2 or CASIA-WebFace hyperparameters with INCEPTION_RESNET_V2 | open | 2019-08-06T13:22:46Z | 2019-08-06T13:22:46Z | https://github.com/davidsandberg/facenet/issues/1064 | [] | hsm4703 | 0 | |
ckan/ckan | api | 8,089 | SERVER GOT HACKED TO MAKE GAMBLING ONLINE AND SPAM |

Hello sir.
I would like to inform you that there are other Indonesian government sites that have been hacked and phished to be turned into online gambling portals and there are lots of them, you can check them on Google with the keywords:
site:http://103.143.152.165/uploads/user/
To access the link, you can only click on it from Google and from the mobile/handphone version, this is the hacker's way now so that they are not easily detected. You can open an example of one of the links from the mobile/handphone version:
http://103.143.152.165/uploads/user/2024-02-26-124625.444989pajaktoto.html/
http://103.143.152.165/uploads/user/2024-02-26-121433.938501ladangtoto.html/
http://103.143.152.165/uploads/user/2024-02-27-023121.310552boss88.html/
If you click on it from the mobile version of Google, the button inside will lead to online gambling sites such as:
https://lbfjhonmusic.shop/
https://asli-makbeti168.lat/
This is clearly very insulting and degrading to the Indonesian government site. Please find the perpetrator and then arrest and imprison him in accordance with applicable laws and articles.
As an Indonesian citizen, I would like to emphasize that online gambling is clearly prohibited by Indonesian law. In accordance with Article 27 paragraph (2) of the ITE Law, this act is a prohibited act. Therefore, individuals are prohibited from distributing, transmitting or making accessible gambling content online.
I have attached clear photographic evidence related to this case. My request is that you immediately take the necessary action to delete this sub-domain and online gambling page, or at least suspend access immediately so that online gambling content cannot be accessed. I hope that firm steps will be taken to ensure the integrity of the father's name is maintained and to avoid potential legal problems in the future. | closed | 2024-02-27T12:03:09Z | 2024-02-27T22:34:50Z | https://github.com/ckan/ckan/issues/8089 | [] | OLIV853 | 2 |
miguelgrinberg/microblog | flask | 334 | Ch4: flask application context | I have followed the tutorial to Chapter 4, but when I run
`db.session.add(u)`
I get:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/dev/code/tutorial/venv/lib/python3.10/site-packages/sqlalchemy/orm/scoping.py", line 361, in add
    return self._proxied.add(instance, _warn=_warn)
  File "/home/dev/code/tutorial/venv/lib/python3.10/site-packages/sqlalchemy/orm/scoping.py", line 188, in _proxied
    return self.registry()
  File "/home/dev/code/tutorial/venv/lib/python3.10/site-packages/sqlalchemy/util/_collections.py", line 639, in __call__
    key = self.scopefunc()
  File "/home/dev/code/tutorial/venv/lib/python3.10/site-packages/flask_sqlalchemy/session.py", line 81, in _app_ctx_id
    return id(app_ctx._get_current_object())  # type: ignore[attr-defined]
  File "/home/dev/code/tutorial/venv/lib/python3.10/site-packages/werkzeug/local.py", line 513, in _get_current_object
    raise RuntimeError(unbound_message) from None
RuntimeError: Working outside of application context.
This typically means that you attempted to use functionality that needed
the current application. To solve this, set up an application context
with app.app_context(). See the documentation for more information.
```
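The traceback itself points at the fix: run the session calls inside an application context. A minimal self-contained sketch of the pattern (plain Flask shown here; in the tutorial the same `with app.app_context():` block would go around `db.session.add(u)` and `db.session.commit()`):

```python
from flask import Flask, current_app

app = Flask(__name__)

# Outside a request (e.g. a plain "python" shell), anything that needs
# the app -- including Flask-SQLAlchemy's db.session -- must run inside
# an explicit application context:
with app.app_context():
    print(current_app.name)  # works; without the context, touching
                             # current_app raises the same RuntimeError
```

So in the shell session above, `db.session.add(u)` and `db.session.commit()` would be placed inside such a `with app.app_context():` block.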
I have deleted the migrations folder and app.db and run through it again, still the same result. I have checked the Ch4 github branch and there are no code diffs between mine and the source. I have looked at the Application Context docs; the recommendations differ from the tutorial code. I have also looked at SO examples, which align with the docs but again differ from the tutorial code. I am stuck; any help would be appreciated. | closed | 2023-01-31T21:50:34Z | 2023-02-01T09:52:18Z | https://github.com/miguelgrinberg/microblog/issues/334 | [
"question"
] | ghost | 3 |
graphdeco-inria/gaussian-splatting | computer-vision | 963 | The SIBR viewer discards some gaussians near the cameras | Hi guys! I'm using another implementation of gaussian-splatting to train the gaussians, and I convert them into the SIBR style to use the SIBR_viewer. But the rendered images look weird: the SIBR viewer drops some gaussians in the "Splats" model, but works normally in the other two models, "Initial Points" and "Ellipsoids". I don't know what's going on. The following are some examples when snapping to camera 0:
### in Splats model
<img width="1532" alt="image" src="https://github.com/user-attachments/assets/431e6884-1c4a-4a45-b058-0a5eee114054">
### in Initial Points model
<img width="1532" alt="image" src="https://github.com/user-attachments/assets/15abb653-ea89-49bd-869d-6bf50254de7c">
### in Ellipsoids model
<img width="1532" alt="image" src="https://github.com/user-attachments/assets/d901d0c1-d6b3-4eff-b770-eba3ad485eec">
cc @Snosixtyboo @graphdeco @gdrett | closed | 2024-08-31T01:04:42Z | 2024-11-28T04:35:36Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/963 | [] | Master-cai | 12 |
nteract/papermill | jupyter | 729 | --report-mode bug | ## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
I am using Python 3.9 to call subprocess.run with a premade command; when --report-mode is set, the output notebook still has the cell tagged as injected-parameters.
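The invocation described above might be sketched like this (the command builder is a hypothetical helper; `-p` and `--report-mode` are papermill CLI flags):

```python
import subprocess

# Hypothetical helper that assembles the papermill command line; the
# credential is injected as a notebook parameter via -p.
def papermill_cmd(input_nb, output_nb, **params):
    cmd = ["papermill", input_nb, output_nb, "--report-mode"]
    for name, value in params.items():
        cmd += ["-p", name, str(value)]
    return cmd

cmd = papermill_cmd("input.ipynb", "output.ipynb", token="SECRET")
print(cmd)
# subprocess.run(cmd, check=True)  # uncommented, this runs papermill
```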
Or maybe the flag was not meant for that, but either way I need some way to hide the credentials passed in. | open | 2023-08-16T18:45:31Z | 2023-08-16T18:45:31Z | https://github.com/nteract/papermill/issues/729 | [
"bug",
"help wanted"
] | whylovegithub | 0 |
aleju/imgaug | machine-learning | 278 | How to use imgaug with Detectron ? | Any example or suggestion ? | open | 2019-03-03T21:06:10Z | 2022-03-07T08:15:41Z | https://github.com/aleju/imgaug/issues/278 | [] | qpoisson | 2 |
strawberry-graphql/strawberry | fastapi | 3,400 | Errors when closing a subscription after authentication failure | <!-- Provide a general summary of the bug in the title above. -->
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Describe the Bug
When a permissions class's `has_permission` method returns False for a subscription, strawberry subsequently throws some errors while closing the connection.
<!-- A clear and concise description of what the bug is. -->
## System Information
- Operating system: Ubuntu 22.04
- Strawberry version (if applicable): 0.216.1 (and 0.171.1, 0.219)
## Additional Context
When a subscription fails due to an authentication failure, we see log outputs that look like this
```
< TEXT '{"type":"connection_init","payload":{"Authoriza...bGt1hupQ9QVf1VYivKHw"}}' [844 bytes]
> TEXT '{"type": "connection_ack"}' [26 bytes]
< TEXT '{"id":"5","type":"start","payload":{"variables"...ypename\\n }\\n}\\n"}}' [3034 bytes]
> TEXT '{"type": "error", "id": "5", "payload": {"messa...buildingDataChanges"]}}' [144 bytes]
Not Authorized
GraphQL request:2:3
...
raise PermissionError(message)
PermissionError: Not Authorized
< TEXT '{"id":"5","type":"stop"}' [24 bytes]
Exception in ASGI application
Traceback (most recent call last):
...
File ".../strawberry/subscriptions/protocols/graphql_ws/handlers.py", line 193, in cleanup_operation
await self.subscriptions[operation_id].aclose()
KeyError: '5'
= connection is CLOSING
> CLOSE 1000 (OK) [2 bytes]
= connection is CLOSED
! failing connection with code 1006
closing handshake failed
Traceback (most recent call last):
...
File ".../websockets/legacy/protocol.py", line 935, in ensure_open
raise self.connection_closed_exc()
websockets.exceptions.ConnectionClosedError: sent 1000 (OK); no close frame received
connection closed
```
This seems to indicate that the permissions failure prevents the subscription from being created, which causes cleanup to fail since it assumes the subscription exists. If I modify https://github.com/strawberry-graphql/strawberry/blob/808d898a9041caffe74e4364314b585413e4e5e2/strawberry/subscriptions/protocols/graphql_ws/handlers.py#L192 to check for `operation_id` in `self.subscriptions` and `self.tasks` before accessing them, then both the `KeyError` and `closing handshake failed` errors go away.
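The guard just described, as a standalone sketch (a simplified stand-in for the handler state, not the actual strawberry source):

```python
import asyncio

# Simplified stand-in for the graphql_ws handler state. The point: a
# failed permission check means no subscription or task was ever
# registered under this operation id, so clean up defensively.
class HandlerSketch:
    def __init__(self):
        self.subscriptions = {}
        self.tasks = {}

    async def cleanup_operation(self, operation_id):
        subscription = self.subscriptions.pop(operation_id, None)
        if subscription is not None:
            await subscription.aclose()
        task = self.tasks.pop(operation_id, None)
        if task is not None:
            task.cancel()

handler = HandlerSketch()
# "5" was never registered (permission denied) -- no KeyError now:
asyncio.run(handler.cleanup_operation("5"))
```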
Since an expired token can cause rapid subscription retries and failures, this can produce quite a lot of log spam.
<!-- Add any other relevant information about the problem here. --> | open | 2024-02-27T16:48:04Z | 2025-03-20T15:56:37Z | https://github.com/strawberry-graphql/strawberry/issues/3400 | [
"bug"
] | wlaub | 1 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 818 | File "C:\Real-Time-Voice-Cloning-master\venv\lib\site-packages\torch\__init__.py", line 81, in <module> from torch._C import * ImportError: DLL load failed: The specified procedure could not be found. (venv) C:\Real-Time-Voice-Cloning-master> | Having an issue with windows installation when attemping to do "run python demo_toolbox.py" in an admin CMD.
As far as I can tell, I have installed all of the requirements. And it tells me as much.
```
C:\Real-Time-Voice-Cloning-master>venv\Scripts\activate.bat

(venv) C:\Real-Time-Voice-Cloning-master>python demo_toolbox.py
Traceback (most recent call last):
  File "demo_toolbox.py", line 2, in <module>
    from toolbox import Toolbox
  File "C:\Real-Time-Voice-Cloning-master\toolbox\__init__.py", line 1, in <module>
    from toolbox.ui import UI
  File "C:\Real-Time-Voice-Cloning-master\toolbox\ui.py", line 6, in <module>
    from encoder.inference import plot_embedding_as_heatmap
  File "C:\Real-Time-Voice-Cloning-master\encoder\inference.py", line 2, in <module>
    from encoder.model import SpeakerEncoder
  File "C:\Real-Time-Voice-Cloning-master\encoder\model.py", line 5, in <module>
    from torch.nn.utils import clip_grad_norm_
  File "C:\Real-Time-Voice-Cloning-master\venv\lib\site-packages\torch\__init__.py", line 81, in <module>
    from torch._C import *
ImportError: DLL load failed: The specified procedure could not be found.

(venv) C:\Real-Time-Voice-Cloning-master>
```
assistance would be appreciated.
| closed | 2021-08-12T19:29:53Z | 2021-08-12T23:10:12Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/818 | [] | ghost | 0 |
albumentations-team/albumentations | deep-learning | 1,874 | Enhance augmentation objects with references to a random state. | ## Suggested Improvement
Looking at the current code, to draw random samples augmentation objects are using the global random state in the `random` module. This is ideal for maximally random pipelines that are impacted by any other use of the global random state outside of albumentations itself.
This is not ideal for cases where the subcomponents of a system want their random generators to be seeded and not impacted by other components. For instance, right now there is no way for me to define a seeded augmentation pipeline that does not interfere with any other usage of the global random state.
I suggest adding a parameter to each augmentation class called: `seed`, `random_state`, or `rng` that defaults to None. When it is `None`, the it gets resolved to the global random state, which keeps the current behavior.
If it is an integer, then it would create a new `random.Random` object, and if `rng` is already a `random.Random` object, then it keeps it as-is, which allows augmentation pipelines to be independent of the global random state, but use an internally consistent random state.
## Potential Benefits
* Default behavior is unchanged
* Makes it easy to test augmentation pipelines without modifying the global state
* Makes it possible to set up a highly random, but consistent augmentation pipeline independent of any global random usage.
## Additional Information
This is how the (now defuct) [imgaug](https://github.com/aleju/imgaug/blob/master/imgaug/augmenters/convolutional.py#L407) library handled randomness, where random states are explicitly passed and maintained.
I see there is a [random_utils](https://github.com/albumentations-team/albumentations/blob/7eda70e01e7f3bb31c1085e4fa473089b5a468be/albumentations/random_utils.py) module which somewhat handles this, but only for numpy random states, but as documented in CONTRIBUTING, it is only to ensure that any numpy.random usage is impacting the global Python random state.
I've written a function that I widely use called [ensure_rng](https://kwarray.readthedocs.io/en/release/kwarray.util_random.html#kwarray.util_random.ensure_rng) that handles the resolution of an argument to a valid random state object. In fact, it can also convert between the stdlib random.Random and np.random.RandomState objects. This might be useful here, although it doesn't exactly handle what is done in `random_utils.get_random_state`, but it is compatible with it.
I also see that [ReplayCompose](https://albumentations.ai/docs/examples/replay/) is a good solution to the problem of creating reproducible pipelines, but I believe maintaining a random state in each augmentation instance is complementary, especially in the realm of testing. | closed | 2024-08-12T22:00:47Z | 2024-10-26T00:29:44Z | https://github.com/albumentations-team/albumentations/issues/1874 | [
"enhancement"
] | Erotemic | 2 |
pydantic/FastUI | pydantic | 111 | DOCS: Create documentation for `ModelForm` | ## Intro
If I understand `c.ModelForm` correctly, `submit_url='/decisions/'` should post to `"/decisions/"`.
This may lead to the following errors:
- `405 Method Not Allowed`, if the post route does not exist
- `422 Unprocessable Entity`, if the payload or response doesn't have the right format (Pydantic model)
However, in the below MRE I don't understand why I get a `422`.
## Questions
- [ ] (How) Can I get more information about the 422 error?
- [ ] How to leverage `PageEvent`?
- [ ] Is the request or the response model the problem?
## Suggested Steps
- [ ] Root Cause Analysis in this issue
- [ ] PR to improve documentation (e.g. docstring for `ModelForm`)
- [ ] PR to improve error messages related to 422 error messages
## Further Information
https://github.com/pydantic/FastUI/blob/6b7c7cba5250eaf044bc312199fd877878334087/src/python-fastui/fastui/components/forms.py#L97
https://github.com/pydantic/FastUI/blob/6b7c7cba5250eaf044bc312199fd877878334087/src/python-fastui/fastui/events.py#L9
## Minimal Reproducible Example (MRE)
<details>
```python
from fastapi import FastAPI
from fastapi.responses import HTMLResponse
from fastui import AnyComponent, FastUI, prebuilt_html
from fastui import components as c
from fastui.events import PageEvent
from sqlmodel import SQLModel
app = FastAPI()
class Decision(SQLModel):
name: str = "example decision"
state_emotional: str = "curious"
situation: str = "I have a decision to make"
@app.get("/api/new", response_model=FastUI, response_model_exclude_none=True)
def new_decision() -> list[AnyComponent]:
return [
c.Heading(text="New Decision", level=2),
c.Paragraph(text="Create a new decision."),
c.ModelForm[Decision](
submit_url="/decisions/",
# success_event=PageEvent(),
),
]
@app.post("/decisions/", response_model=Decision)
async def create_decision(
*,
decision: Decision,
# session: Session = Depends(get_session),
):
print(decision)
# db_decision = Decision.from_orm(decision)
# session.add(db_decision)
# session.commit()
# session.refresh(db_decision)
return decision
@app.get("/{path:path}")
async def html_landing() -> HTMLResponse:
"""Simple HTML page which serves the React app, comes last as it matches all paths."""
return HTMLResponse(prebuilt_html(title="FastUI: Decisions"))
```
</details>
| open | 2023-12-19T21:54:10Z | 2023-12-21T07:49:53Z | https://github.com/pydantic/FastUI/issues/111 | [] | Zaubeerer | 2 |
matterport/Mask_RCNN | tensorflow | 2,197 | How to calculate the Dice loss ? | Hi, I am trying to calculate the Dice loss for a test set but I don't know how, and I am also new to this library. Can anyone help me ? Thank you very much. | open | 2020-05-21T20:41:52Z | 2020-05-22T20:49:35Z | https://github.com/matterport/Mask_RCNN/issues/2197 | [] | dangmanhtruong1995 | 1 |
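For the Dice loss question above: a minimal NumPy sketch of the soft Dice loss between a predicted and a ground-truth binary mask (the helper name is hypothetical, not part of the Mask_RCNN API):

```python
import numpy as np

def dice_loss(pred_mask, true_mask, eps=1e-7):
    """Soft Dice loss for binary masks: 1 - 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred_mask, dtype=np.float64).ravel()
    true = np.asarray(true_mask, dtype=np.float64).ravel()
    intersection = (pred * true).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)
    return 1.0 - dice

# Identical masks give a loss near 0; disjoint masks give a loss near 1.
a = np.array([[1, 0], [0, 1]])
print(dice_loss(a, a))      # ~0.0
print(dice_loss(a, 1 - a))  # ~1.0
```

For a test set, this would be averaged over all predicted/ground-truth mask pairs.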
schemathesis/schemathesis | graphql | 1,800 | Improve failure representation | It might be better to display some parts (like status code, title, etc) as bold + lets think about some other minor visual improvements | closed | 2023-10-07T21:38:32Z | 2023-11-09T08:57:11Z | https://github.com/schemathesis/schemathesis/issues/1800 | [
"Priority: Medium",
"Type: Feature",
"UX: Reporting",
"Status: Needs Design"
] | Stranger6667 | 0 |
capitalone/DataProfiler | pandas | 650 | Typo in structured_profilers.py | **Please provide the issue you face regarding the documentation**
Need typo fix `appliied` -> `applied` in `Differences in Data` section
https://github.com/capitalone/DataProfiler/blob/main/examples/structured_profilers.ipynb | closed | 2022-09-20T17:18:49Z | 2022-10-15T01:01:28Z | https://github.com/capitalone/DataProfiler/issues/650 | [
"Documentation",
"Help Wanted",
"good_first_issue"
] | JGSweets | 0 |
sunscrapers/djoser | rest-api | 453 | Pagination may yield inconsistent results with an unordered object_list | Enable PageNumberPagination
```python
REST_FRAMEWORK = {
"DEFAULT_PAGINATION_CLASS": "rest_framework.pagination.PageNumberPagination",
....
```
Make API request get list of users, in logs:
```
rest_framework/pagination.py:200: UnorderedObjectListWarning: Pagination may yield inconsistent results with an unordered object_list: <class 'tfi.users.models.User'> QuerySet.
paginator = self.django_paginator_class(queryset, page_size)
```
My solution:
```python
class UserViewSet(viewsets.ModelViewSet):
serializer_class = settings.SERIALIZERS.user
queryset = User.objects.order_by("pk").all() # <--- add .order_by("pk")
``` | open | 2019-12-24T06:36:56Z | 2019-12-24T06:36:56Z | https://github.com/sunscrapers/djoser/issues/453 | [] | llybin | 0 |
seleniumbase/SeleniumBase | pytest | 3,010 | How to access Browser from host when using Docker? | First off - very cool project!
I took a look at the docker docs here https://seleniumbase.io/integrations/docker/ReadMe/#1-install-the-docker-desktop and was able to spin up the container and pass the test.
How do I go about accessing the GUI and running in non-headless mode so I can see the browser? Is there a port that just needs to be mapped and is then accessible at hostip:port?
thanks in advance! | closed | 2024-08-09T22:57:11Z | 2024-08-14T21:01:23Z | https://github.com/seleniumbase/SeleniumBase/issues/3010 | [
"question"
] | ttraxxrepo | 10 |
gradio-app/gradio | machine-learning | 10,375 | Using S3 presigned URL's with gr.Video or gr.Model3D fails | ### Describe the bug
#### Description
Using presigned URLs with certain Gradio components like `gr.Video` or `gr.Model3D` fails, resulting in the following error:
```
OSError: [Errno 22] Invalid argument: 'C:\\Users\\<username>\\AppData\\Local\\Temp\\gradio\\<hashed_filename>\\.mp4?response-content-disposition=inline&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Security-Token=<truncated_token>&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=<truncated_credential>&X-Amz-Date=20250116T133649Z&X-Amz-Expires=1800&X-Amz-SignedHeaders=host&X-Amz-Signature=<truncated_signature>'
```
#### Steps to Reproduce
1. Generate a presigned URL for a file stored in S3.
2. Use the presigned URL as input to a Gradio component, such as `gr.Video` or `gr.Model3D`.
3. Observe the error during file handling.
#### Expected Behavior
Gradio components should seamlessly handle presigned URLs and load the respective content without issues. For example this GitHub's RAW filepath does work: https://github.com/XnetLoL/test/raw/813ce5b531308d88f1a0c4256849fefe024c4d9e/breakdance.glb
#### Actual Behavior
The presigned URL, which includes query parameters, causes an invalid file path error when Gradio tries to process it.
#### Possible Cause
It seems Gradio attempts to interpret the entire presigned URL (including query parameters) as a file path, which results in an invalid argument error.
#### Environment
- Gradio version: 5.12.0
- OS: Windows 11
- Python version: 3.13
#### Additional Notes
Maybe this issue could be resolved by properly handling URLs with query parameters in Gradio components.
Thanks.
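Until then, one possible workaround (an assumption on my part, not an official gradio fix) is to download the presigned object to a local temp file first, so the component only ever sees a clean local path with a real extension, never the `?X-Amz-...` query string:

```python
import os
import tempfile
import urllib.request

# Fetch the presigned URL into a temp file whose name ends with a
# clean suffix, then pass that local path to the gradio component.
def localize(url: str, suffix: str = ".mp4") -> str:
    fd, path = tempfile.mkstemp(suffix=suffix)
    os.close(fd)
    urllib.request.urlretrieve(url, path)
    return path

# gr.Video(localize(presigned_url))
```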
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
# Replace this with an actual presigned S3 URL
presigned_url = "https://your-bucket.s3.amazonaws.com/sample.mp4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=..."
with gr.Blocks() as demo:
gr.Markdown("### Video Test with Presigned URL")
    gr.Video(presigned_url)
if __name__ == "__main__":
demo.launch()
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.12.0
gradio_client version: 1.5.4
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts: 0.2.1
fastapi: 0.115.6
ffmpy: 0.5.0
gradio-client==1.5.4 is not installed.
httpx: 0.28.1
huggingface-hub: 0.27.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 2.2.1
orjson: 3.10.13
packaging: 24.2
pandas: 2.2.3
pillow: 10.4.0
pydantic: 2.10.4
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.8.6
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit: 0.12.0
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.12.0
httpx: 0.28.1
huggingface-hub: 0.27.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
I can work around it | closed | 2025-01-16T13:44:51Z | 2025-01-23T16:35:55Z | https://github.com/gradio-app/gradio/issues/10375 | [
"bug",
"good first issue",
"python"
] | XnetLoL | 1 |
TencentARC/GFPGAN | pytorch | 298 | Using 4-channels images | Hello :)
Thank you very much for your repo, it's amazing!
I have a question - is there any simple way to use 4-channel arrays (512 x 512 x 4)? Or should I change all the networks and losses?
Thank you :) | closed | 2022-10-27T09:19:52Z | 2022-11-24T20:09:17Z | https://github.com/TencentARC/GFPGAN/issues/298 | [] | MDYLL | 2 |
lgienapp/aquarel | matplotlib | 36 | Arial font not included in the package? | `WARNING:matplotlib.font_manager:findfont: Generic family 'sans-serif' not found because none of the following families were found: Arial` | closed | 2024-09-10T13:23:11Z | 2024-12-10T14:44:53Z | https://github.com/lgienapp/aquarel/issues/36 | [] | realliyifei | 3 |
python-gino/gino | asyncio | 825 | Is this project dead? | We are stopping to use GitHub Issues for questions, please go to the GitHub [Discussions Q&A](https://github.com/python-gino/gino/discussions?discussions_q=category%3AQ%26A) instead.
| open | 2024-06-17T15:32:30Z | 2024-06-17T16:48:52Z | https://github.com/python-gino/gino/issues/825 | [
"question"
] | erhuabushuo | 1 |
geex-arts/django-jet | django | 242 | Autocomplete assumes integer pk's | Python==3.6.1
Django==1.11.4
django-jet==1.0.6
We use django.utils.crypto.get_random_string values for all our PK's so we can use them in url's etc without being concerned about someone walking them.
However as currently implemented Jet's autocomplete assumes id's are integers and silently fails if they are not.
This is due to this line:
https://github.com/geex-arts/django-jet/blob/dev/jet/forms.py#L101
Changing it to a CharField seems to resolve the matter.
| open | 2017-08-08T11:37:30Z | 2018-03-14T11:53:48Z | https://github.com/geex-arts/django-jet/issues/242 | [] | tolomea | 5 |
sktime/sktime | scikit-learn | 7,287 | [ENH] Bias correction for box-cox and logarithm transform - as a composite | Multiple requests have been made to enable bias correction in box-cox and logarithm transformation (the latter being a special case).
A reference for that is here: https://otexts.com/fpp2/transformations.html
Generic feature request: https://github.com/sktime/sktime/issues/2391
The problem is that the inversion requires forecast variance, i.e., the output of `predict_var` of a forecaster `f` used in the pipeline `boxcox_trafo * f`. Therefore, this is no longer mappable on the transformation interface, for instance, a parameter of the box-cox or log transformer, as we have recognized in this PR: https://github.com/sktime/sktime/pull/7268
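For concreteness, the back-transform with the bias term from the fpp2 reference above, written as a standalone function (a sketch of the math only, not sktime code):

```python
import math

def inv_boxcox_bias_adjusted(w: float, sigma2: float, lmbda: float) -> float:
    """Invert a Box-Cox-scale forecast w whose forecast variance is
    sigma2, including the fpp2 bias correction; lmbda=0 is the log case."""
    if lmbda == 0:
        return math.exp(w) * (1.0 + sigma2 / 2.0)
    base = lmbda * w + 1.0
    naive = base ** (1.0 / lmbda)
    return naive * (1.0 + sigma2 * (1.0 - lmbda) / (2.0 * base ** 2))

# sigma2 here is exactly what predict_var supplies -- hence the need
# for a composite that can see both predict and predict_var.
```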
This issue is to discuss possible solutions before implementation. The solution must map onto one of the unified API points of `sktime` and use only public interface points, e.g., "reaching into the pipeline" might be a workaround hack, but it cannot be the ultimate solution.
So far I can think of two potential solutions:
1. forecaster wrapper. Because `predict_var` is needed, we may have to write a forecaster wrapping version of the box-cox transformer.
This would look like `BoxCoxBiasAdjustedForecaster(forecaster)`, which behaves like one would want `boxcox_trafo * f` to behave. But in `predict`, it has access to `forecaster.predict_var`, so can carry out the bias adjustment.
We also need to think what this would do if the `predict_var` or `predict_proba` of the composite is called - the adjustment should perhaps also push through to the proba methods. If we cannot come up with anything canonical, perhaps we just patch the proba methods through.
2. composite of transformation and forecast. I think there is a more general bias adjustment algorithm here - formula to be discussed - where *any transformation with an inverse transform* can be bias adjusted. This would be
`BiasAdjustedPipe(transformer, forecaster)`, and in `predict` it does something more general that, as a special case, reduces to the well-known bias adjustment if `transformer=BoxCoxTransformer()` or `transformer=LogTransformer()`.
Similar as in 1, we have the problem of having to think about the proba methods of `BiasAdjustedPipe`. | open | 2024-10-17T10:04:34Z | 2024-10-22T21:53:56Z | https://github.com/sktime/sktime/issues/7287 | [
"API design",
"implementing algorithms",
"module:forecasting",
"module:transformations",
"enhancement"
] | fkiraly | 17 |
koxudaxi/fastapi-code-generator | fastapi | 227 | No Request parameter generated for `requestBody` with `multipart/form-data` | I am describing a schema for uploading files, and hence using `multipart/form-data` content type for `requestBody`. The generated function has no arguments. Note that the `request: Request` is correctly generated for `application/x-www-form-urlencoded` content type.
```yaml
paths:
/foo:
put:
summary: Bar
requestBody:
content:
"multipart/form-data":
schema:
type: object
properties:
metadata:
type: string
format: binary
payload:
type: string
format: binary
``` | open | 2021-12-08T19:37:30Z | 2023-01-25T11:57:34Z | https://github.com/koxudaxi/fastapi-code-generator/issues/227 | [] | olivergondza | 2 |