repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
liangliangyy/DjangoBlog | django | 647 | Cfe | closed | 2023-03-30T12:33:13Z | 2023-03-31T03:01:43Z | https://github.com/liangliangyy/DjangoBlog/issues/647 | [] | networknice | 0 | |
iperov/DeepFaceLab | deep-learning | 5,228 | "data_src faceset extract" Failing | Choose one or several GPU idxs (separated by comma).
[CPU] : CPU
[0] : GeForce GTX 960M
[0] Which GPU indexes to choose? :
0
[wf] Face type ( f/wf/head ?:help ) :
wf
[0] Max number of faces from image ( ?:help ) :
0
[512] Image size ( 256-2048 ?:help ) :
512
[90] Jpeg quality ( 1-100 ?:help ) :
90
[n] Write debug images to aligned_debug? ( y/n ) :
n
Extracting faces...
Caching GPU kernels...
Error while subprocess initialization: Traceback (most recent call last):
File "E:\DeepFaceLab_NVIDIA - Copy\_internal\DeepFaceLab\core\joblib\SubprocessorBase.py", line 62, in _subprocess_run
self.on_initialize(client_dict)
File "E:\DeepFaceLab_NVIDIA - Copy\_internal\DeepFaceLab\mainscripts\Extractor.py", line 68, in on_initialize
nn.initialize (device_config)
File "E:\DeepFaceLab_NVIDIA - Copy\_internal\DeepFaceLab\core\leras\nn.py", line 113, in initialize
nn.tf_sess = tf.Session(config=nn.tf_sess_config)
File "E:\DeepFaceLab_NVIDIA - Copy\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1596, in __init__
super(Session, self).__init__(target, graph, config=config)
File "E:\DeepFaceLab_NVIDIA - Copy\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 711, in __init__
self._session = tf_session.TF_NewSessionRef(self._graph._c_graph, opts)
tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: initialization error
Traceback (most recent call last):
File "E:\DeepFaceLab_NVIDIA - Copy\_internal\DeepFaceLab\main.py", line 324, in <module>
arguments.func(arguments)
File "E:\DeepFaceLab_NVIDIA - Copy\_internal\DeepFaceLab\main.py", line 45, in process_extract
force_gpu_idxs = [ int(x) for x in arguments.force_gpu_idxs.split(',') ] if arguments.force_gpu_idxs is not None else None,
File "E:\DeepFaceLab_NVIDIA - Copy\_internal\DeepFaceLab\mainscripts\Extractor.py", line 853, in main
device_config=device_config).run()
File "E:\DeepFaceLab_NVIDIA - Copy\_internal\DeepFaceLab\core\joblib\SubprocessorBase.py", line 210, in run
raise Exception ( "Unable to start subprocesses." )
Exception: Unable to start subprocesses.
Press any key to continue . . . | open | 2021-01-02T11:18:19Z | 2023-06-08T21:53:08Z | https://github.com/iperov/DeepFaceLab/issues/5228 | [] | adam-eme | 2 |
polakowo/vectorbt | data-visualization | 40 | Returning None from order_func_nb causes numba TypingError | Error:
```
TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Failed in nopython mode pipeline (step: nopython frontend)
Unknown attribute 'size' of type none
```
Code (taken from example with modified order_func_nb)
```python
import vectorbt as vbt
from vectorbt.portfolio.enums import SizeType, AccumulateExitMode, ConflictMode
import numpy as np
import pandas as pd
from datetime import datetime, timedelta
from numba import njit, f8, i8, b1, optional
from datetime import datetime
price = pd.Series([1, 2, 3, 2, 1], index=pd.Index([
datetime(2020, 1, 1),
datetime(2020, 1, 2),
datetime(2020, 1, 3),
datetime(2020, 1, 4),
datetime(2020, 1, 5)
]))
entry_price = price * 0.9
exit_price = price * 1.1
@njit
def order_func_nb(order_context, price, fees, fixed_fees, slippage):
# i = order_context.i
# col = order_context.col
# size = col + 1
# if i % 2 == 1:
# size *= -1
# return vbt.portfolio.nb.Order(
# size, SizeType.Shares, price[i, col], fees[i, col], fixed_fees[i, col], slippage[i, col])
return None
portfolio = vbt.Portfolio.from_order_func(
price,
order_func_nb,
price.values[:, None],
np.full(price.shape, 0.01)[:, None],
np.full(price.shape, 1)[:, None],
np.full(price.shape, 0.01)[:, None]
)
``` | closed | 2020-08-26T03:04:49Z | 2020-09-16T22:31:56Z | https://github.com/polakowo/vectorbt/issues/40 | [] | Ziink | 5 |
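The TypingError above is expected under numba's nopython mode: a jitted function must always return values of one consistent type, so `None` cannot be mixed with `Order` tuples. The usual workaround is a typed "no order" sentinel instead of `None`; here is a plain-Python sketch of that pattern (the `Order` namedtuple and NaN sentinel are illustrative — if vectorbt ships its own no-order sentinel, that should be preferred):

```python
import math
from collections import namedtuple

Order = namedtuple("Order", ["size", "price"])
NO_ORDER = Order(math.nan, math.nan)  # same type as a real order, so a jit compiler can type it

def order_func(i, price):
    # Place an order on odd bars; otherwise return the sentinel, never None
    if i % 2 == 1:
        return Order(1.0, price)
    return NO_ORDER
```

Downstream code then checks for the NaN size (NaN compares unequal to itself) instead of checking for `None`.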
keras-team/keras | data-science | 20,259 | ValueError: File not found: filepath=Backend\C4__256g_000040000.keras. Please ensure the file is an accessible `.keras` zip file. | Error:
Please ensure the file is an accessible '.keras' zip file
Keras Version: 3.5.0
tensorflow Version: 2.16.1
I don't have a GPU. Please, I need a solution.
Code:
import numpy as np
# from keras.models import load_model
# from keras.models import load_model
import matplotlib.pyplot as plt
from numpy import vstack
from tensorflow.keras.utils import img_to_array
from tensorflow.keras.preprocessing.image import load_img
from skimage.metrics import structural_similarity as ssim
from skimage.metrics import peak_signal_noise_ratio as psnr
from skimage.color import rgb2lab
import os
from tensorflow.keras.models import load_model
import tensorflow as tf
import keras
model_path = r'Backend\C4__256g_000040000.keras'  # raw string so the backslash is not read as an escape
model = load_model(model_path)
# model = keras.models.load_model(model_path,compile=False)
height, width = 256, 256
os.environ['TF_ENABLE_ONEDNN_OPTS'] = '0'  # must be set before tensorflow is imported to take effect
print("Model input shape:", model.input_shape)
def plot_images(src_img, gen_img, tar_img=None):
if tar_img is not None:
images = [src_img, gen_img, tar_img]
titles = ['Source', 'Generated', 'Expected']
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
else:
images = [src_img, gen_img]
titles = ['Source', 'Generated']
fig, axs = plt.subplots(1, 2, figsize=(10, 5))
for i, (img, title) in enumerate(zip(images, titles)):
img = np.squeeze(img)
img = (img + 1) / 2.0
img = np.clip(img, 0, 1)
if img.ndim == 2 or (img.ndim == 3 and img.shape[-1] == 1):
axs[i].imshow(img, cmap='gray')
else:
axs[i].imshow(img)
axs[i].axis('off')
axs[i].set_title(title)
plt.tight_layout()
return fig
def preprocess_data(data):
if isinstance(data, list):
X1, X2 = data
X1 = (X1 - 127.5) / 127.5
X2 = (X2 - 127.5) / 127.5
return [X1, X2]
else:
return (data - 127.5) / 127.5
def calculate_metrics(generated_image, target_image):
generated_image = (generated_image + 1) / 2.0
target_image = (target_image + 1) / 2.0
if generated_image.ndim == 4 and generated_image.shape[-1] == 3:
generated_image = np.mean(generated_image, axis=-1)
target_image = np.mean(target_image, axis=-1)
generated_image = np.squeeze(generated_image)
target_image = np.squeeze(target_image)
min_dim = min(generated_image.shape)
win_size = min_dim if min_dim % 2 != 0 else min_dim - 1
ssim_value = ssim(generated_image, target_image, win_size=win_size, data_range=1.0)
psnr_value = psnr(target_image, generated_image, data_range=1.0)
return ssim_value, psnr_value
def process_images(src_path, tar_path):
src_image = load_img(src_path, target_size=(height, width), color_mode='rgb')
src_image = img_to_array(src_image)
src_image = np.expand_dims(src_image, axis=0)
tar_image = load_img(tar_path, target_size=(height, width), color_mode='rgb')
tar_image = img_to_array(tar_image)
tar_image = np.expand_dims(tar_image, axis=0)
src_image, tar_image = preprocess_data([src_image, tar_image])
gen_image = model.predict(src_image)
ssim_value, psnr_value = calculate_metrics(gen_image, tar_image)
fig = plot_images(src_image[0], gen_image[0], tar_image[0])
return fig, ssim_value, psnr_value, src_image[0], gen_image[0], tar_image[0]
def plot_histogram(image, ax, title):
colors = ('r', 'g', 'b')
for i, color in enumerate(colors):
hist, bins = np.histogram(image[:, :, i].flatten(), bins=256, range=[0, 1])
ax.plot(bins[:-1], hist, color=color, alpha=0.7)
ax.set_title(title)
ax.set_xlabel('Pixel Intensity')
ax.set_ylabel('Count')
def plot_difference_map(original, generated):
difference = np.abs(original - generated)
# Enhance the difference for visibility
difference = np.power(difference, 0.5)
fig, ax = plt.subplots(figsize=(5, 5))
im = ax.imshow(difference, cmap='viridis')
ax.set_title('Difference Map')
fig.colorbar(im, ax=ax, label='Absolute Difference (sqrt-scaled)')
return fig, np.mean(difference)
def colorize_image(image_file):
# Load the image
src_image = load_img(image_file, target_size=(height, width), color_mode='rgb')
src_image = img_to_array(src_image)
# Convert to grayscale
src_image = np.mean(src_image, axis=-1, keepdims=True)
# Repeat the grayscale channel to create a 3-channel image
src_image = np.repeat(src_image, 3, axis=-1)
# Normalize the image
src_image = (src_image - 127.5) / 127.5
# Add batch dimension
src_image = np.expand_dims(src_image, axis=0)
# Ensure the input shape is correct (batch, height, width, channels)
expected_shape = (1, height, width, 3)
if src_image.shape != expected_shape:
raise ValueError(f"Expected input shape {expected_shape}, but got {src_image.shape}")
# Generate the colorized image
gen_image = model.predict(src_image)
# Remove the batch dimension
src_image = np.squeeze(src_image, axis=0)
gen_image = np.squeeze(gen_image, axis=0)
return src_image, gen_image
def plot_color_channels(image, title):
fig, axes = plt.subplots(1, 3, figsize=(15, 5))
for i, channel in enumerate(['Red', 'Green', 'Blue']):
axes[i].imshow(image[:,:,i], cmap='gray')
axes[i].set_title(f'{channel} Channel')
axes[i].axis('off')
plt.suptitle(title)
return fig
def plot_lab_channels(image, title):
lab_image = rgb2lab(image)
fig, axes = plt.subplots(1, 3, figsize=(15, 5))
channels = ['L (Lightness)', 'a (Green-Red)', 'b (Blue-Yellow)']
for i, channel in enumerate(channels):
im = axes[i].imshow(lab_image[:,:,i], cmap='gray')
axes[i].set_title(channel)
axes[i].axis('off')
plt.colorbar(im, ax=axes[i])
plt.suptitle(title)
return fig
# Print model summary for debugging
model.summary()
__all__ = ['process_images', 'plot_histogram', 'plot_difference_map', 'plot_color_channels', 'plot_lab_channels']
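The `File not found` error is usually a working-directory problem rather than a Keras one: a relative path like `Backend\C4__256g_000040000.keras` is resolved against wherever the script is launched from, not against the script's own folder. A sketch of a loader-side check (hypothetical helper, names illustrative) that resolves the path against a known base and fails early with the absolute path it actually tried:

```python
from pathlib import Path

def resolve_model_path(relative_path, base_dir):
    """Resolve a model path against a known base directory and verify it exists."""
    path = (Path(base_dir) / relative_path).resolve()
    if not path.is_file():
        raise FileNotFoundError(f"No model file at {path}")
    return path
```

Called as, e.g., `resolve_model_path('Backend/C4__256g_000040000.keras', Path(__file__).resolve().parent)` before `load_model`; forward slashes also sidestep Windows backslash-escape surprises.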
| open | 2024-09-15T07:32:16Z | 2024-09-20T18:02:08Z | https://github.com/keras-team/keras/issues/20259 | [
"type:Bug"
] | ananthanarayanan431 | 6 |
ivy-llc/ivy | tensorflow | 28,621 | Fix Frontend Failing Test: torch - search.paddle.argsort | To-do List: https://github.com/unifyai/ivy/issues/27498 | closed | 2024-03-17T14:16:17Z | 2024-03-25T12:44:27Z | https://github.com/ivy-llc/ivy/issues/28621 | [
"Sub Task"
] | ZJay07 | 0 |
kizniche/Mycodo | automation | 797 | Error setup -> data after removing pi default admin user | Changed raspberrypi admin user
Traceback (most recent call last):
File "/home/[user]/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/[user]/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/[user]/Mycodo/env/lib/python3.7/site-packages/flask_restx/api.py", line 639, in error_router
return original_handler(e)
File "/home/[user]/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/[user]/Mycodo/env/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/[user]/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/[user]/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/[user]/Mycodo/env/lib/python3.7/site-packages/flask_login/utils.py", line 272, in decorated_view
return func(*args, **kwargs)
File "/home/[user]/Mycodo/mycodo/mycodo_flask/routes_page.py", line 1965, in page_data
elif not dpkg_package_exists('ow-shell'):
File "/home/[user]/Mycodo/mycodo/utils/system_pi.py", line 83, in dpkg_package_exists
_, _, stat = cmd_output(cmd)
File "/home/[user]/Mycodo/mycodo/utils/system_pi.py", line 249, in cmd_output
pw_record = pwd.getpwnam(user)
KeyError: "getpwnam(): name not found: 'pi'" | closed | 2020-07-25T01:11:46Z | 2020-07-27T18:29:22Z | https://github.com/kizniche/Mycodo/issues/797 | [] | stardawg | 1 |
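The traceback above ends in `pwd.getpwnam('pi')` because `cmd_output` assumes the default `pi` account still exists. A sketch of a fallback (illustrative only, not the project's actual fix) that degrades to the current user when the configured account has been removed:

```python
import getpass
import pwd

def resolve_run_user(preferred="pi"):
    """Return the preferred account name if it exists, else the current user."""
    try:
        return pwd.getpwnam(preferred).pw_name
    except KeyError:
        # Account was renamed or deleted; run commands as the invoking user
        return getpass.getuser()
```

A configuration option for the run-as user would make this explicit instead of hardcoding `'pi'`.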
chezou/tabula-py | pandas | 237 | Read_PDF generating single item list | <!--- Provide a general summary of your changes in the Title above -->
When using tabula.read_pdf on an online PDF file, I expected a DataFrame output but instead received a list containing the entire table data as a single item.
<!-- Write the summary of your issue here -->
# Check list before submit
<!--- Write and check the following questionnaires. -->
- [ x] Did you read [FAQ](https://tabula-py.readthedocs.io/en/latest/faq.html)?
- [x ] (Optional, but really helpful) Your PDF URL:
https://www.nar.realtor/sites/default/files/documents/ehs-03-2020-overview-2020-04-21.pdf
- [x ] Paste the output of `import tabula; tabula.environment_info()` on Python REPL: ?
```Python version:
3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)]
Java version:
java version "1.8.0_251"
Java(TM) SE Runtime Environment (build 1.8.0_251-b08)
Java HotSpot(TM) Client VM (build 25.251-b08, mixed mode)
tabula-py version: 2.1.0
platform: Windows-10-10.0.18362-SP0
uname:
uname_result(system='Windows', node='SENE01-VM02', release='10', version='10.0.18362', machine='AMD64', processor='Intel64 Family 6 Model 63 Stepping 2, GenuineIntel')
linux_distribution: ('', '', '')
mac_ver: ('', ('', '', ''), '')
```
If not possible to execute `tabula.environment_info()`, please answer following questions manually.
- [ ] Paste the output of `python --version` command on your terminal: ?
- [ ] Paste the output of `java -version` command on your terminal: ?
- [ ] Does `java -h` command work well?; Ensure your java command is included in `PATH`
- [ ] Write your OS and it's version: ?
# What did you do when you faced the problem?
I tried to manually convert the tabula.read_pdf() output to a DataFrame via pd.DataFrame(output), but the output was a single-cell df that showed a truncated part of the first text extracted from the pdf ('Unnam...').
Tried searching StackOverflow for similar issues but did not find any.
## Code:
```
import tabula
import pandas as pd

df = tabula.read_pdf("https://www.nar.realtor/sites/default/files/documents/ehs-03-2020-overview-2020-04-21.pdf")
print(df)
## output not as expected, used pd.DataFrame() in attempt to achieve expected df
df2 = pd.DataFrame(df)
```
## Expected behavior:
Expected a table output similar to the examples provided on GitHub.
https://github.com/chezou/tabula-py/blob/master/examples/tabula_example.ipynb
## Actual behavior:
A single cell dataframe that doesn't display any data as expected via the examples provided in the linked GitHub example python notebook.
## Related Issues:
| closed | 2020-04-27T15:56:10Z | 2020-06-04T12:08:09Z | https://github.com/chezou/tabula-py/issues/237 | [
"not a bug"
] | clarakheinz | 6 |
pallets-eco/flask-sqlalchemy | flask | 384 | Provide a max_per_page parameter for pagination | Right now the default for per_page is 20 unless otherwise specified. I also see that the code makes sure it is no larger than the total items.
To prevent a request asking for too many items at once, it would be convenient if Pagination could be initialized with a maximum items per page (max_per_page). Then the supplied per_page value could use the ceil() of total and max_per_page.
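A minimal sketch of the proposed capping behavior (a hypothetical helper, not Flask-SQLAlchemy's actual API): the client-supplied per_page is clamped to max_per_page before the page count is derived from the total.

```python
import math

def clamp_pagination(total, per_page, max_per_page=100):
    """Cap a client-supplied per_page and derive the resulting page count."""
    per_page = max(1, min(per_page, max_per_page))
    pages = math.ceil(total / per_page) if total else 0
    return per_page, pages
```

With max_per_page=100, a request for per_page=500 over 250 items would be served as 100 items per page across 3 pages, so a single request can never ask for an unbounded number of items.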
| closed | 2016-03-25T20:30:51Z | 2020-12-05T20:55:31Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/384 | [
"pagination"
] | dmulter | 1 |
stanfordnlp/stanza | nlp | 784 | [QUESTION] How to evaluate pre-trained NER model on my domain specific text? | I am trying to get F1 scores for the pre-trained english model on my specific text domain without doing any training.
The docs mention the following command:
```
python -m stanza.utils.training.run_ete ${corpus} --score_${split}
```
However, as I don't want to do any training, how can I evaluate the model as-is?
I've got an annotated dataset for my domain in BIO format.
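Independent of Stanza's training scripts, entity-level F1 on a BIO-annotated dataset can be computed directly from gold and predicted tag sequences. A self-contained sketch (it assumes you can already obtain predicted BIO tags for your tokens, e.g. by running the pretrained pipeline and mapping its entities back to token tags):

```python
def bio_to_spans(tags):
    """Convert one BIO tag sequence to a set of (start, end, type) spans."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or (tag.startswith("I-") and etype != tag[2:]):
            if start is not None:
                spans.append((start, i, etype))
            start, etype = i, tag[2:]
        elif tag == "O":
            if start is not None:
                spans.append((start, i, etype))
            start, etype = None, None
    if start is not None:
        spans.append((start, len(tags), etype))
    return set(spans)

def entity_prf(gold_seqs, pred_seqs):
    """Micro-averaged entity-level precision/recall/F1 over sentences."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_seqs, pred_seqs):
        g, p = bio_to_spans(gold), bio_to_spans(pred)
        tp += len(g & p)
        fp += len(p - g)
        fn += len(g - p)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```

An entity counts as correct only if its boundaries and type both match, which is the usual CoNLL-style scoring convention.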
| closed | 2021-08-08T13:18:37Z | 2021-08-11T12:09:16Z | https://github.com/stanfordnlp/stanza/issues/784 | [
"question"
] | Pwgeorge-Py | 2 |
pytest-dev/pytest-django | pytest | 696 | DB fails to clean up after tests, says db is used by another user | This is basically a repost of my question from stackoverflow. I hope it will be more useful here.
I have a Django application, and I'm trying to test it using pytest and pytest-django. However, quite often, when the tests finish running, I get the error that the database failed to be deleted: DETAIL: There is 1 other session using the database.
Basically, the minimum test code that I could narrow it down to is:
```python
import uuid

import pytest
from hamcrest import assert_that, has_length

from myapp.models import MyUser  # app-specific import, path illustrative

@pytest.fixture
def make_bundle():
a = MyUser.objects.create(key_id=uuid.uuid4())
return a
class TestThings:
def test_it(self, make_bundle):
all_users = list(MyUser.objects.all())
assert_that(all_users, has_length(1))
```
Every now and again the tests will fail with the above error. Is there something I am doing wrong? Or how can I fix this?
The database that I am using is PostgreSQL 9.6.
And here is the dirty fix I seem to have found for this:
```python
import pytest
from django.db.backends.base import creation

def _destroy_test_db(self, test_database_name, verbosity):
"""
Internal implementation - remove the test db tables.
"""
# Remove the test database to clean up after
# ourselves. Connect to the previous database (not the test database)
# to do so, because it's not allowed to delete a database while being
# connected to it.
with self.connection._nodb_connection.cursor() as cursor:
cursor.execute(
"SELECT pg_terminate_backend(pg_stat_activity.pid) "
"FROM pg_stat_activity "
"WHERE pg_stat_activity.datname = '{}' "
"AND pid <> pg_backend_pid();".format(test_database_name)
)
cursor.execute("DROP DATABASE %s"
% self.connection.ops.quote_name(test_database_name))
@pytest.fixture(autouse=True)
def patch_db_cleanup():
creation.BaseDatabaseCreation._destroy_test_db = _destroy_test_db
```
Is there a better solution for the situation? Or is it a bug in pytest-django? | open | 2019-01-28T08:00:40Z | 2024-03-27T15:21:28Z | https://github.com/pytest-dev/pytest-django/issues/696 | [] | ibolit | 1 |
allenai/allennlp | data-science | 4,816 | Make image sets easily accessible | We don't want to automatically download large image sets with `cached_path()`, but we at least want to make it very easy to download them manually (download one archive, extract it, set the image path).
- [x] VQA
- [x] GQA
- [x] SNLI-VE | closed | 2020-11-24T00:27:16Z | 2021-01-04T21:39:11Z | https://github.com/allenai/allennlp/issues/4816 | [] | dirkgr | 2 |
explosion/spaCy | deep-learning | 13,707 | numpy requirements/compatibility | I noticed that conda-forge is struggling with some of the current numpy specifications to the point that they're having to patch the requirements (https://github.com/conda-forge/thinc-feedstock/pull/123).
To improve this and restore numpy v1 compatibility, could you consider using numpy's suggested build+install requirements as described here? https://numpy.org/doc/stable/dev/depending_on_numpy.html#numpy-2-0-specific-advice
For python 3.9+ only, I think this could look like:
```
[build-system]
requires = [
"numpy>=2.0.0", # or additionally with <3.0 if you want
]
build-backend = "setuptools.build_meta"
```
```
install_requires =
numpy>=1.19.3
```
And if you wanted to support earlier python, you could use `oldest-supported-numpy` for simplicity (which is less concerning now that it's effectively archived/frozen, plus restricted to EOL python):
```
[build-system]
requires = [
"oldest-supported-numpy; python_version < '3.9'",
"numpy>=2.0.0; python_version >= '3.9'",
]
build-backend = "setuptools.build_meta"
```
(And if you're already taking a look at requirements, I think you could also consider restricting blis to `<0.9` for windows only? The segfaults are a known issue and we found some but not all of the related bugs back then, which is why we reverted all the attempted blis upgrades after a short time.) | open | 2024-12-05T09:13:23Z | 2024-12-16T06:31:18Z | https://github.com/explosion/spaCy/issues/13707 | [] | adrianeboyd | 2 |
JaidedAI/EasyOCR | pytorch | 1,166 | Bug in craft training in make_char_box.py | I am hitting a divide by zero exception in `make_char_box.py` when training the craft recognition model where it tries to crop an image by the bounding box, but the bounding box has an area of zero.
Stack trace below:
```
> /home/connoourke/bin/src/EasyOCRDev/EasyOCR/trainer/craft/data/pseudo_label/make_charbox.py(36)crop_image_by_bbox()
35
---> 36 one_char_ratio = min(h, w) / (max(h, w) / len(word))
37
ipdb> w
/home/connoourke/bin/src/EasyOCRDev/EasyOCR/trainer/craft/train.py(479)<module>()
477
478 if __name__ == "__main__":
--> 479 main()
/home/connoourke/bin/src/EasyOCRDev/EasyOCR/trainer/craft/train.py(472)main()
471 trainer = Trainer(config, 0, mode)
--> 472 trainer.train(buffer_dict)
473
/home/connoourke/bin/src/EasyOCRDev/EasyOCR/trainer/craft/train.py(239)train()
238 while train_step < whole_training_step:
--> 239 for (
240 index,
/home/connoourke/bin/src/EasyOCRDev/EasyOCR/.venv/lib/python3.10/site-packages/torch/utils/data/dataloader.py(630)__next__()
629 self._reset() # type: ignore[call-arg]
--> 630 data = self._next_data()
631 self._num_yielded += 1
/home/connoourke/bin/src/EasyOCRDev/EasyOCR/.venv/lib/python3.10/site-packages/torch/utils/data/dataloader.py(674)_next_data()
673 index = self._next_index() # may raise StopIteration
--> 674 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
675 if self._pin_memory:
/home/connoourke/bin/src/EasyOCRDev/EasyOCR/.venv/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py(51)fetch()
50 else:
---> 51 data = [self.dataset[idx] for idx in possibly_batched_index]
52 else:
/home/connoourke/bin/src/EasyOCRDev/EasyOCR/.venv/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py(51)<listcomp>()
50 else:
---> 51 data = [self.dataset[idx] for idx in possibly_batched_index]
52 else:
/home/connoourke/bin/src/EasyOCRDev/EasyOCR/trainer/craft/data/dataset.py(150)__getitem__()
149 words,
--> 150 ) = self.make_gt_score(index)
151 else:
/home/connoourke/bin/src/EasyOCRDev/EasyOCR/trainer/craft/data/dataset.py(457)make_gt_score()
456 horizontal_text_bools,
--> 457 ) = self.load_data(index)
458 img_h, img_w, _ = image.shape
/home/connoourke/bin/src/EasyOCRDev/EasyOCR/trainer/craft/data/dataset.py(424)load_data()
423 horizontal_text_bool,
--> 424 ) = self.pseudo_charbox_builder.build_char_box(
425 self.net, self.gpu, image, word_bboxes[i], words[i], img_name=img_name
/home/connoourke/bin/src/EasyOCRDev/EasyOCR/trainer/craft/data/pseudo_label/make_charbox.py(209)build_char_box()
208 def build_char_box(self, net, gpu, image, word_bbox, word, img_name=""):
--> 209 word_image, M, horizontal_text_bool = self.crop_image_by_bbox(
210 image, word_bbox, word
> /home/connoourke/bin/src/EasyOCRDev/EasyOCR/trainer/craft/data/pseudo_label/make_charbox.py(36)crop_image_by_bbox()
35
---> 36 one_char_ratio = min(h, w) / (max(h, w) / len(word))
37
ipdb> a
self = <data.pseudo_label.make_charbox.PseudoCharBoxBuilder object at 0x7f68ded91030>
image = array([[[125, 141, 166],
[153, 170, 196],
[182, 201, 231],
...,
[227, 247, 254],
[227, 247, 254],
[227, 247, 254]],
[[140, 156, 181],
[154, 171, 197],
[168, 187, 217],
...,
[227, 247, 254],
[227, 247, 254],
[227, 247, 254]],
[[155, 171, 197],
[157, 174, 200],
[157, 176, 206],
...,
[228, 248, 255],
[227, 247, 254],
[227, 247, 254]],
...,
[[ 57, 71, 71],
[ 55, 67, 65],
[ 57, 67, 66],
...,
[190, 182, 159],
[191, 183, 160],
[205, 197, 174]],
[[ 56, 72, 72],
[ 60, 76, 75],
[ 79, 93, 93],
...,
[142, 130, 108],
[152, 140, 118],
[197, 185, 163]],
[[ 84, 102, 102],
[103, 121, 121],
[139, 155, 154],
...,
[119, 104, 83],
[119, 104, 83],
[166, 151, 130]]], dtype=uint8)
box = array([[322., 506.],
[322., 506.],
[322., 506.],
[322., 506.]], dtype=float32)
word = 'af'
ipdb>
```
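In the arguments above all four box corners coincide at (322, 506), so h = w = 0, `max(h, w) / len(word)` evaluates to 0, and the outer division raises. A defensive guard, sketched as a hypothetical patch around the failing expression (not the project's actual fix), would let the caller skip degenerate boxes instead of crashing:

```python
def safe_one_char_ratio(h, w, word):
    """Guarded version of the failing expression in crop_image_by_bbox.

    Returns None for degenerate (zero-area) boxes or empty words so the
    caller can skip the word box instead of raising ZeroDivisionError.
    """
    if not word or min(h, w) <= 0:
        return None
    return min(h, w) / (max(h, w) / len(word))
```

Filtering out zero-area word boxes when the annotations are loaded would address the same failure one step earlier.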
| open | 2023-11-16T12:02:51Z | 2024-01-31T18:31:01Z | https://github.com/JaidedAI/EasyOCR/issues/1166 | [] | connorourke | 1 |
awesto/django-shop | django | 193 | Error in docs | Hi! I'm new to django shop, and I think I've found an error in the docs.
At http://django-shop.readthedocs.org/en/latest/getting-started.html#adding-taxes the code says def add_extra_cart_price_field(self, cart): but it should say def get_extra_cart_price_field(self, cart):
If it isn't an error I would love to hear an explanation, because I've had an issue with it.
| closed | 2012-11-03T00:44:19Z | 2012-12-06T13:07:26Z | https://github.com/awesto/django-shop/issues/193 | [] | singold | 0 |
mwaskom/seaborn | data-visualization | 2,886 | Area mark raises with log x scale | ```python
so.Plot([1, 10, 100, 1000], [1, 2, 4, 3]).add(so.Area()).scale(x="log")
```
Raises:
<details>
```python-traceback
/Users/mwaskom/miniconda3/envs/seaborn-py39-latest/lib/python3.9/site-packages/matplotlib/transforms.py:2664: RuntimeWarning: invalid value encountered in double_scalars
self._mtx = np.array([[x_scale, 0.0 , (-inl*x_scale)],
---------------------------------------------------------------------------
LinAlgError Traceback (most recent call last)
File ~/miniconda3/envs/seaborn-py39-latest/lib/python3.9/site-packages/IPython/core/formatters.py:343, in BaseFormatter.__call__(self, obj)
341 method = get_real_method(obj, self.print_method)
342 if method is not None:
--> 343 return method()
344 return None
345 else:
File ~/code/seaborn/seaborn/_core/plot.py:224, in Plot._repr_png_(self)
222 def _repr_png_(self) -> tuple[bytes, dict[str, float]]:
--> 224 return self.plot()._repr_png_()
File ~/code/seaborn/seaborn/_core/plot.py:635, in Plot.plot(self, pyplot)
632 plotter._layers = layers
634 for layer in layers:
--> 635 plotter._plot_layer(self, layer)
637 plotter._make_legend()
639 # TODO this should be configurable
File ~/code/seaborn/seaborn/_core/plot.py:1142, in Plotter._plot_layer(self, p, layer)
1137 grouping_vars = mark._grouping_props + default_grouping_vars
1138 split_generator = self._setup_split_generator(
1139 grouping_vars, df, subplots
1140 )
-> 1142 mark._plot(split_generator, scales, orient)
1144 # TODO is this the right place for this?
1145 for view in self._subplots:
File ~/code/seaborn/seaborn/_marks/area.py:47, in AreaBase._plot(self, split_gen, scales, orient)
44 kws[ax]["linestyle"].append(resolved["edgestyle"])
46 for ax, ax_kws in kws.items():
---> 47 ax.add_collection(mpl.collections.PolyCollection(**ax_kws))
File ~/miniconda3/envs/seaborn-py39-latest/lib/python3.9/site-packages/matplotlib/axes/_base.py:2248, in _AxesBase.add_collection(self, collection, autolim)
2244 if autolim:
2245 # Make sure viewLim is not stale (mostly to match
2246 # pre-lazy-autoscale behavior, which is not really better).
2247 self._unstale_viewLim()
-> 2248 datalim = collection.get_datalim(self.transData)
2249 points = datalim.get_points()
2250 if not np.isinf(datalim.minpos).all():
2251 # By definition, if minpos (minimum positive value) is set
2252 # (i.e., non-inf), then min(points) <= minpos <= max(points),
2253 # and minpos would be superfluous. However, we add minpos to
2254 # the call so that self.dataLim will update its own minpos.
2255 # This ensures that log scales see the correct minimum.
File ~/miniconda3/envs/seaborn-py39-latest/lib/python3.9/site-packages/matplotlib/collections.py:295, in Collection.get_datalim(self, transData)
288 offsets = offsets.filled(np.nan)
289 # get_path_collection_extents handles nan but not masked arrays
290 # collections that are just in data units (like quiver)
291 # can properly have the axes limits set by their shape +
292 # offset. LineCollections that have no offsets can
293 # also use this algorithm (like streamplot).
294 return mpath.get_path_collection_extents(
--> 295 transform.get_affine() - transData, paths,
296 self.get_transforms(),
297 transOffset.transform_non_affine(offsets),
298 transOffset.get_affine().frozen())
300 # NOTE: None is the default case where no offsets were passed in
301 if self._offsets is not None:
302 # this is for collections that have their paths (shapes)
303 # in physical, axes-relative, or figure-relative units
304 # (i.e. like scatter). We can't uniquely set limits based on
305 # those shapes, so we just set the limits based on their
306 # location.
File ~/miniconda3/envs/seaborn-py39-latest/lib/python3.9/site-packages/matplotlib/transforms.py:1470, in Transform.__sub__(self, other)
1468 # if we have got this far, then there was no shortcut possible
1469 if other.has_inverse:
-> 1470 return self + other.inverted()
1471 else:
1472 raise ValueError('It is not possible to compute transA - transB '
1473 'since transB cannot be inverted and there is no '
1474 'shortcut possible.')
File ~/miniconda3/envs/seaborn-py39-latest/lib/python3.9/site-packages/matplotlib/transforms.py:2452, in CompositeGenericTransform.inverted(self)
2449 def inverted(self):
2450 # docstring inherited
2451 return CompositeGenericTransform(
-> 2452 self._b.inverted(), self._a.inverted())
File ~/miniconda3/envs/seaborn-py39-latest/lib/python3.9/site-packages/matplotlib/transforms.py:2452, in CompositeGenericTransform.inverted(self)
2449 def inverted(self):
2450 # docstring inherited
2451 return CompositeGenericTransform(
-> 2452 self._b.inverted(), self._a.inverted())
File ~/miniconda3/envs/seaborn-py39-latest/lib/python3.9/site-packages/matplotlib/transforms.py:1890, in Affine2DBase.inverted(self)
1888 if self._shorthand_name:
1889 shorthand_name = '(%s)-1' % self._shorthand_name
-> 1890 self._inverted = Affine2D(inv(mtx), shorthand_name=shorthand_name)
1891 self._invalid = 0
1892 return self._inverted
File <__array_function__ internals>:180, in inv(*args, **kwargs)
File ~/miniconda3/envs/seaborn-py39-latest/lib/python3.9/site-packages/numpy/linalg/linalg.py:552, in inv(a)
550 signature = 'D->D' if isComplexType(t) else 'd->d'
551 extobj = get_linalg_error_extobj(_raise_linalgerror_singular)
--> 552 ainv = _umath_linalg.inv(a, signature=signature, extobj=extobj)
553 return wrap(ainv.astype(result_t, copy=False))
File ~/miniconda3/envs/seaborn-py39-latest/lib/python3.9/site-packages/numpy/linalg/linalg.py:89, in _raise_linalgerror_singular(err, flag)
88 def _raise_linalgerror_singular(err, flag):
---> 89 raise LinAlgError("Singular matrix")
LinAlgError: Singular matrix
```
</details>
Appears to be specific to the x scale, and not to the orient dimension. | closed | 2022-07-05T00:28:58Z | 2022-07-14T00:15:25Z | https://github.com/mwaskom/seaborn/issues/2886 | [
"bug",
"objects-mark"
] | mwaskom | 1 |
zappa/Zappa | django | 473 | [Migrated] Add documentation to the README for Cognito triggers | Originally from: https://github.com/Miserlou/Zappa/issues/1268 by [millarm](https://github.com/millarm)
<!--
Before you submit this PR, please make sure that you meet these criteria:
* Did you read the [contributing guide](https://github.com/Miserlou/Zappa/#contributing)?
* If this is a non-trivial commit, did you **open a ticket** for discussion?
* Did you **put the URL for that ticket in a comment** in the code?
* If you made a new function, did you **write a good docstring** for it?
* Did you avoid putting "_" in front of your new function for no reason?
* Did you write a test for your new code?
* Did the Travis build pass?
* Did you improve (or at least not significantly reduce) the amount of code test coverage?
* Did you **make sure this code actually works on Lambda**, as well as locally?
* Did you test this code with both **Python 2.7** and **Python 3.6**?
If so, awesome! If not, please try to fix those issues before submitting your Pull Request.
Thank you for your contribution!
-->
## Description
Detailed documentation for cognito triggers
## GitHub Issues
<!-- Proposed changes should be discussed in an issue before submitting a PR. -->
<!-- Link to relevant tickets here. -->
| closed | 2021-02-20T08:35:20Z | 2024-04-13T16:18:40Z | https://github.com/zappa/Zappa/issues/473 | [
"needs-user-testing",
"no-activity",
"auto-closed"
] | jneves | 2 |
tensorflow/tensor2tensor | deep-learning | 1,310 | How to evaluate the transformer on the summarization task using tensor2tensor? | ### Description
e.g. CNN/dailymail dataset. I know how to train the transformer on the dataset, but I don't know how to evaluate the performance of the transformer on the task.
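The training side is covered by the tutorials; for the evaluation side, summarization quality is usually scored with ROUGE against reference summaries. As a hedged, framework-agnostic sketch (this is not the tensor2tensor-specific decode step, and it computes ROUGE-1 recall only), unigram overlap between a generated and a reference summary can be measured like this:

```python
from collections import Counter

def rouge1_recall(reference: str, generated: str) -> float:
    """ROUGE-1 recall: fraction of reference unigrams found in the generated summary."""
    ref_counts = Counter(reference.lower().split())
    gen_counts = Counter(generated.lower().split())
    overlap = sum(min(count, gen_counts[tok]) for tok, count in ref_counts.items())
    total = sum(ref_counts.values())
    return overlap / total if total else 0.0

print(rouge1_recall("the cat sat on the mat", "the cat lay on the mat"))  # prints 0.8333333333333334
```

In practice one would decode the test split to a file first and then run a full ROUGE implementation (including ROUGE-2 and ROUGE-L) over the decoded and reference files.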
| open | 2018-12-18T05:46:58Z | 2019-07-25T08:57:18Z | https://github.com/tensorflow/tensor2tensor/issues/1310 | [] | zhaoguangxiang | 5 |
sloria/TextBlob | nlp | 75 | pip install fails with latest setuptools version | A recent change in `setuptools` causes `nltk` install to fail (see https://github.com/nltk/nltk/issues/824).
Temporary workaround for users who have `setuptools>=10.0` installed in their python environment (default for all newly created environments using `get_pip.py`):
```
pip install setuptools==9.1
# this will automatically uninstall newer/older setuptools versions first
pip install -U textblob
```
To check for `setuptools` version, type `pip list`:
```
pip list
# result in clean virtual environment
pip (6.0.3)
setuptools (10.1)
```
| closed | 2015-01-01T15:08:55Z | 2015-01-13T18:41:38Z | https://github.com/sloria/TextBlob/issues/75 | [] | markuskiller | 1 |
PokeAPI/pokeapi | api | 188 | evolution-chain uses inconsistent schema for evolution_details | Best example to compare is evolution-chain/1 and evolution-chain/34.
The evolution_details property for evolves_to can be null, a single value, or an array. I propose wrapping all values in an array so that the expected type remains consistent.
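Until the API itself is changed, clients can defensively normalize the field to the proposed always-an-array convention. A minimal sketch (`normalize` is a hypothetical client-side helper, not part of PokeAPI):

```python
def normalize(value):
    """Coerce an evolution_details-style field to a list: None -> [], scalar -> [scalar]."""
    if value is None:
        return []
    if isinstance(value, list):
        return value
    return [value]

print(normalize(None))                      # []
print(normalize({"trigger": "level-up"}))   # [{'trigger': 'level-up'}]
print(normalize([{"trigger": "trade"}]))    # [{'trigger': 'trade'}] (already a list, unchanged)
```

Applying this to every `evolution_details` value in `evolves_to` gives clients one consistent type regardless of which chain they fetch.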
| closed | 2016-05-12T13:29:12Z | 2016-05-24T13:56:16Z | https://github.com/PokeAPI/pokeapi/issues/188 | [] | zberk | 4 |
netbox-community/netbox | django | 18,318 | Mentioning of manufacturer is doubled in "Module Types/Interface" | ### Deployment Type
NetBox Cloud
### Triage priority
N/A
### NetBox Version
v4.2.0
### Python Version
3.10
### Steps to Reproduce
1. Open A module type that contains interfaces. ex: [https://demo.netbox.dev/dcim/module-types/1/](https://demo.netbox.dev/dcim/module-types/1/)
2. Click on the interface tab: [https://demo.netbox.dev/dcim/module-types/1/interfaces/](https://demo.netbox.dev/dcim/module-types/1/interfaces/)
### Expected Behavior
Manufacturer is only mentioned once

### Observed Behavior
Manufacturer is mentioned twice
 | closed | 2025-01-07T10:39:29Z | 2025-01-07T15:28:28Z | https://github.com/netbox-community/netbox/issues/18318 | [
"type: bug",
"status: accepted",
"severity: low"
] | elixdreamer | 0 |
litestar-org/litestar | asyncio | 3,481 | Bug: test failures | ### Description
The tests ran right around midnight UTC time.
```
________________ test_default_serializer[v1-condate-2024-05-08] ________________
[gw0] linux -- Python 3.8.18 /home/runner/work/litestar/litestar/.venv/bin/python
model = ModelV1(custom_str='', custom_int=0, custom_float=0.0, custom_list=[], custom_set=CustomSet(), custom_frozenset=Custom...scheme='some', host='example.org', tld='org', host_type='domain', path='/'), http_url=HttpUrl('http://example.org/', ))
attribute_name = 'condate', expected = '2024-05-08'
@pytest.mark.parametrize(
"attribute_name, expected",
[
("path", "example"),
("email_str", "info@example.org"),
("name_email", "info <info@example.org>"),
("color", "white"),
("bytesize", 100),
("secret_str", "**********"),
("secret_bytes", "**********"),
("payment_card_number", "4000000000000002"),
("constr", "hello"),
("conbytes", b"hello"),
("condate", datetime.date.today().isoformat()),
("condecimal", 3.14),
("conset", {1}),
("confrozenset", frozenset([1])),
("conint", 1),
("url", "some://example.org/"),
("http_url", "http://example.org/"),
],
)
def test_default_serializer(model: ModelV1 | ModelV2, attribute_name: str, expected: Any) -> None:
> assert serializer(getattr(model, attribute_name)) == expected
E AssertionError: assert '2024-05-09' == '2024-05-08'
E - 2024-05-08
E ? ^
E + 2024-05-09
E ? ^
tests/unit/test_contrib/test_pydantic/test_plugin_serialization.py:203: AssertionError
________________ test_default_serializer[v2-condate-2024-05-08] ________________
[gw1] linux -- Python 3.8.18 /home/runner/work/litestar/litestar/.venv/bin/python
model = ModelV2(conset={1}, confrozenset=frozenset({1}), conlist=[1], path=PosixPath('example'), email_str='info@example.org',...ondecimal=Decimal('3.14'), confloat=1.0, conint=1, url=Url('some://example.org/'), http_url=Url('http://example.org/'))
attribute_name = 'condate', expected = '2024-05-08'
@pytest.mark.parametrize(
"attribute_name, expected",
[
("path", "example"),
("email_str", "info@example.org"),
("name_email", "info <info@example.org>"),
("color", "white"),
("bytesize", 100),
("secret_str", "**********"),
("secret_bytes", "**********"),
("payment_card_number", "4000000000000002"),
("constr", "hello"),
("conbytes", b"hello"),
("condate", datetime.date.today().isoformat()),
("condecimal", 3.14),
("conset", {1}),
("confrozenset", frozenset([1])),
("conint", 1),
("url", "some://example.org/"),
("http_url", "http://example.org/"),
],
)
def test_default_serializer(model: ModelV1 | ModelV2, attribute_name: str, expected: Any) -> None:
> assert serializer(getattr(model, attribute_name)) == expected
E AssertionError: assert '2024-05-09' == '2024-05-08'
E - 2024-05-08
E ? ^
E + 2024-05-09
E ? ^
tests/unit/test_contrib/test_pydantic/test_plugin_serialization.py:203: AssertionError
```
and
```
_______________ test_default_serializer[v1-condate-2024-05-08] ________________
[gw0] win32 -- Python 3.12.3 D:\a\litestar\litestar\.venv\Scripts\python.exe
model = ModelV1(custom_str='', custom_int=0, custom_float=0.0, custom_list=[], custom_set=CustomSet(), custom_frozenset=Custom...scheme='some', host='example.org', tld='org', host_type='domain', path='/'), http_url=HttpUrl('http://example.org/', ))
attribute_name = 'condate', expected = '2024-05-08'
@pytest.mark.parametrize(
"attribute_name, expected",
[
("path", "example"),
("email_str", "info@example.org"),
("name_email", "info <info@example.org>"),
("color", "white"),
("bytesize", 100),
("secret_str", "**********"),
("secret_bytes", "**********"),
("payment_card_number", "4000000000000002"),
("constr", "hello"),
("conbytes", b"hello"),
("condate", datetime.date.today().isoformat()),
("condecimal", 3.14),
("conset", {1}),
("confrozenset", frozenset([1])),
("conint", 1),
("url", "some://example.org/"),
("http_url", "http://example.org/"),
],
)
def test_default_serializer(model: ModelV1 | ModelV2, attribute_name: str, expected: Any) -> None:
> assert serializer(getattr(model, attribute_name)) == expected
E AssertionError: assert '2024-05-09' == '2024-05-08'
E - 2024-05-08
E ? ^
E + 2024-05-09
E ? ^
tests\unit\test_contrib\test_pydantic\test_plugin_serialization.py:203: AssertionError
_______________ test_default_serializer[v2-condate-2024-05-08] ________________
[gw0] win32 -- Python 3.12.3 D:\a\litestar\litestar\.venv\Scripts\python.exe
model = ModelV2(conset={1}, confrozenset=frozenset({1}), conlist=[1], path=WindowsPath('example'), email_str='info@example.org...ondecimal=Decimal('3.14'), confloat=1.0, conint=1, url=Url('some://example.org/'), http_url=Url('http://example.org/'))
attribute_name = 'condate', expected = '2024-05-08'
@pytest.mark.parametrize(
"attribute_name, expected",
[
("path", "example"),
("email_str", "info@example.org"),
("name_email", "info <info@example.org>"),
("color", "white"),
("bytesize", 100),
("secret_str", "**********"),
("secret_bytes", "**********"),
("payment_card_number", "4000000000000002"),
("constr", "hello"),
("conbytes", b"hello"),
("condate", datetime.date.today().isoformat()),
("condecimal", 3.14),
("conset", {1}),
("confrozenset", frozenset([1])),
("conint", 1),
("url", "some://example.org/"),
("http_url", "http://example.org/"),
],
)
def test_default_serializer(model: ModelV1 | ModelV2, attribute_name: str, expected: Any) -> None:
> assert serializer(getattr(model, attribute_name)) == expected
E AssertionError: assert '2024-05-09' == '2024-05-08'
E - 2024-05-08
E ? ^
E + 2024-05-09
E ? ^
tests\unit\test_contrib\test_pydantic\test_plugin_serialization.py:203: AssertionError
============================== warnings summary ===============================
tests/unit/test_concurrency.py::test_sync_to_thread_trio
D:\a\litestar\litestar\.venv\Lib\site-packages\trio\_core\_wakeup_socketpair.py:59: RuntimeWarning: It looks like Trio's signal handling code might have collided with another library you're using. If you're running Trio in guest mode, then this might mean you should set host_uses_signal_set_wakeup_fd=True. Otherwise, file a bug on Trio and we'll help you figure out what's going on.
warnings.warn(
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ===========================
FAILED tests/unit/test_contrib/test_pydantic/test_plugin_serialization.py::test_default_serializer[v1-condate-2024-05-08] - AssertionError: assert '2024-05-09' == '2024-05-08'
- 2024-05-08
? ^
+ 2024-05-09
? ^
FAILED tests/unit/test_contrib/test_pydantic/test_plugin_serialization.py::test_default_serializer[v2-condate-2024-05-08] - AssertionError: assert '2024-05-09' == '2024-05-08'
- 2024-05-08
? ^
+ 2024-05-09
? ^
= 2 failed, 5043 passed, 169 skipped, 7 xfailed, 5 xpassed, 1 warning in 72.62s (0:01:12) =
```
https://github.com/litestar-org/litestar/actions/runs/9010091640/job/24755515232
https://github.com/litestar-org/litestar/actions/runs/9010091640/job/24755515608
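A plausible reading of the failures above (an assumption, not confirmed in this thread): `datetime.date.today().isoformat()` inside the `parametrize` list is evaluated once at collection time, while the serializer runs later in the test body, so a run that straddles midnight UTC compares two different days. A minimal sketch of the mismatch, with explicit dates standing in for the collection-time and execution-time clocks:

```python
import datetime

def expected_param(collection_day: datetime.date) -> str:
    # evaluated once, when pytest collects the parametrize list
    return collection_day.isoformat()

def serialize_condate(execution_day: datetime.date) -> str:
    # evaluated later, when the test body actually runs
    return execution_day.isoformat()

before = datetime.date(2024, 5, 8)  # collection just before midnight UTC
after = datetime.date(2024, 5, 9)   # execution just after midnight UTC

print(expected_param(before) == serialize_condate(before))  # True: same day, test passes
print(expected_param(before) == serialize_condate(after))   # False: midnight rollover, test fails
```

Computing the expected value inside the test body (or freezing the clock) would remove the race.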
### URL to code causing the issue
_No response_
### MCVE
```python
# Your MCVE code here
```
### Steps to reproduce
```bash
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
```
### Screenshots
```bash
""
```
### Logs
_No response_
### Litestar Version
main
### Platform
- [X] Linux
- [ ] Mac
- [X] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-05-09T00:21:59Z | 2025-03-20T15:54:42Z | https://github.com/litestar-org/litestar/issues/3481 | [
"Bug :bug:"
] | peterschutt | 0 |
Neoteroi/BlackSheep | asyncio | 128 | Typo in the documentation (middleware) | Hey @RobertoPrevato, in the middlewares documentation we should fix the code example
https://www.neoteroi.dev/blacksheep/middlewares/
Handler `home` should receive `request` argument. | closed | 2021-05-11T08:13:07Z | 2021-05-11T08:35:39Z | https://github.com/Neoteroi/BlackSheep/issues/128 | [] | myusko | 2 |
marshmallow-code/marshmallow-sqlalchemy | sqlalchemy | 159 | SQLAlchemy Sessions Not Passed to Nested Fields | https://marshmallow-sqlalchemy.readthedocs.io/en/latest/recipes.html#smart-nested-field
Using this recipe as a base, I was writing a function to assist with serialization/de-serialization that looks somewhat like the below snippet:
```python
class MySchema(ModelSchema):
children = SmartNested(ChildModel)
class Meta:
model = Model
...
schema_obj = MySchema(session=session)
schema_obj.dump(data)
```
For the above snippet, data is an object with a children list that is eagerly loaded. Additionally, I don't have a global/thread local SQLAlchemy session object available as I'm using raw SQLAlchemy rather than flask-sqlalchemy because I'm a masochist who likes to reinvent the wheel. The above code (rightfully) throws an exception of "ValueError: Deserialization requires a session", as the SmartNested field's schema doesn't appear to have access to the parent schema object. I went digging through the code and I'm sure this is a non-trivial issue to address, but I admittedly have only been looking at this code base for a day.
Is there a workaround for this or do I have to just bite the bullet and set up a thread local scoped session? | closed | 2018-11-25T00:06:43Z | 2019-08-15T14:24:43Z | https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/159 | [] | ducharmemp | 2 |
flairNLP/flair | nlp | 2,690 | [Feature/enhancement] Documentation | **Is your feature/enhancement request related to a problem? Please describe.**
After reading the FLERT paper I attempted to implement it on my own data/labels. The quick start guide & tutorials provide a good starting point but lack significant documentation. Very few classes/functions have any documentation at all, forcing me to look into the source code before understanding what a function does and which parameters are available.
Some specific instances that confuse me:
- The expected data format for training a custom model. The tutorials ([TUTORIAL_6_CORPUS.md](https://github.com/flairNLP/flair/blob/master/resources/docs/TUTORIAL_6_CORPUS.md)) show how to load from a set of .txt files, sure, but what format is expected for the FLERT model? Should Sentence objects be ordered to use context? Also, is it possible to create a Corpus object from a list of Sentence objects in Python? I found out that the latter is possible after delving into the Corpus source code.
-----
- The `use_context` parameter in `TransformerWordEmbeddings`. The [current documentation](https://github.com/flairNLP/flair/blob/master/resources/docs/embeddings/TRANSFORMER_EMBEDDINGS.md) states:
> Set to True to include context outside of sentences. This can greatly increase accuracy on some tasks, but slows down embedding generation
- This does not describe anything about how this works, which format is expected, etc. A more thorough description would be great here. The FLERT paper obviously describes the methodology behind this functionality, however it is not stated in the context of the Flair framework. Is this handled automatically? Does it use tokens from separating Sentence objects in the Corpus sets (train, dev, test)?
-----
**Describe the solution you'd like**
The scenarios above are only examples. In general, I feel that Flair is lacking a thorough documentation as is standard with almost all Python packages. I really think Flair is great, so it's really unfortunate that I have to give up on using the framework because I need some functionality that is a bit more advanced than the provided examples. It seems like the functionality I need is already implemented, just poorly documented!
| closed | 2022-03-28T10:51:52Z | 2022-09-09T02:02:40Z | https://github.com/flairNLP/flair/issues/2690 | [
"wontfix"
] | torbenal | 1 |
polakowo/vectorbt | data-visualization | 399 | How to plot indicator? | 
| closed | 2022-03-01T11:58:56Z | 2023-04-28T06:32:46Z | https://github.com/polakowo/vectorbt/issues/399 | [] | GF-Huang | 3 |
robotframework/robotframework | automation | 5,121 | Request to set implicit wait as a basic parameter to a keyword | Our team uses 1.5 min as a basic implicit wait. Sometimes, we need to get results immediately (e.g., is there an element on the page). Since implicit wait is set globally we are failing to structurize scripts nicely and are forced to proceed with the following workaround where we change the global wait, run the keyword, and set it back:
```
${orig}= Get Selenium Implicit Wait
Set Browser Implicit Wait 1s
${check_abc}= Run Keyword And Return Status Element Should Be Visible xpath=%xpath%
Set Browser Implicit Wait ${orig}
```
or
```
Set Browser Implicit Wait 1s
${status}= Run Keyword And Return Status Page Contains Text IMAGE_1.png
Set Browser Implicit Wait ${orig}
```
Can you please suggest ways to accomplish the same logic without waiting for 1.5 minutes and not playing with that "Set Browser Implicit Wait"? Or maybe there is a possibility to add an implicit wait for a specific keyword execution?
Thank you in advance!
| closed | 2024-04-25T14:19:15Z | 2024-04-26T10:54:36Z | https://github.com/robotframework/robotframework/issues/5121 | [] | vyvy3 | 1 |
vllm-project/vllm | pytorch | 15,144 | [Bug] Mismatch between `get_multimodal_embedding` output and `PlaceholderRange` | In V1, we expect the output of `get_multimodal_embedding` to correspond to the `PlaceholderRange`, which is in turn constructed based on `PromptUpdateDetails.features`. However, the current V1 code doesn't validate this, causing the model to crash during inference when under high load (e.g. #14897, #14963).
From a quick look at the code, these models output embedding sizes which are inconsistent with the placeholder range:
- [ ] Fuyu
- [x] Gemma3 (fixed by #14980)
- [ ] Idefics3
- [x] InternVL-based models (fixed by #15086)
- [ ] MiniCPM-V (does not support V1 yet)
(Basically, any model that has image newline/column tokens after applying HF processor needs a mask to map image patch features to image embeddings, as described below.)
To fix this, we can follow these steps:
1. Update the multi-modal processor to output a mask to indicate which positions in the `PlaceholderRange`-aligned embeddings should the patch features (outputted by vision encoder) be assigned to. This mask can be called `embed_is_patch`.
2. Use `scatter_patch_features` to scatter the patch features into the image embedding tensor.
3. When merging multimodal embeddings, use `select_patch_features` to recover the patch features from the image embeddings. The number of patch features should correspond to the number of image tokens (which is a subset of the feature tokens in `PromptUpdateDetails`).
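A minimal, framework-free sketch of the masking idea in steps 1-3 (the names `scatter_patch_features` and `select_patch_features` follow the text, but this list-based implementation is illustrative, not vLLM's actual tensor code):

```python
def scatter_patch_features(patch_features, embed_is_patch, filler=None):
    """Place patch features at the True positions of the PlaceholderRange-aligned mask."""
    assert sum(embed_is_patch) == len(patch_features)
    it = iter(patch_features)
    return [next(it) if is_patch else filler for is_patch in embed_is_patch]

def select_patch_features(embeddings, embed_is_patch):
    """Recover exactly the patch features from the full image embeddings."""
    return [e for e, is_patch in zip(embeddings, embed_is_patch) if is_patch]

# e.g. two rows of 3 image patches, each row followed by a newline-token slot:
mask = [True, True, True, False, True, True, True, False]
patches = ["p0", "p1", "p2", "p3", "p4", "p5"]
full = scatter_patch_features(patches, mask, filler="newline")
print(full)                                           # patches interleaved with newline slots
print(select_patch_features(full, mask) == patches)   # True: the mask round-trips
```

The key invariant is that `len(full)` matches the `PlaceholderRange` length while the selected features match the vision encoder's patch output.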
Follow-up work:
- [ ] Update model development docs for Fuyu (assigned to @DarkLight1337)
- [ ] Add validation in V1 engine (assigned to @ywang96)
| open | 2025-03-19T16:53:23Z | 2025-03-24T13:56:02Z | https://github.com/vllm-project/vllm/issues/15144 | [
"bug",
"help wanted",
"v1",
"multi-modality"
] | DarkLight1337 | 0 |
zalandoresearch/fashion-mnist | computer-vision | 59 | Benchmark: Conv Net - Accuracy: 92.56% | Tried this network topology that can be summarized as follows:
- Convolutional layer with 32 feature maps of size 5×5.
- Pooling layer taking the max over 2×2 patches.
- Convolutional layer with 64 feature maps of size 5×5.
- Pooling layer taking the max over 2×2 patches.
- Convolutional layer with 128 feature maps of size 1×1.
- Pooling layer taking the max over 2×2 patches.
- Flatten layer.
- Fully connected layer with 1024 neurons and rectifier activation.
- Dropout layer with a probability of 50%.
- Fully connected layer with 510 neurons and rectifier activation.
- Dropout layer with a probability of 50%.
- Output layer.
I used Normalization as Preprocessing and 5-fold cross-validation to evaluate the model.
Accuracy scores: [0.92433, 0.92133, 0.923581, 0.92391, 0.92466]
Mean Accuracy: 0.923567
Stdev Accuracy: 0.001175
Final Accuracy: 92.56%
You can find the code [here](https://github.com/umbertogriffo/Fashion-mnist-cnn-keras). | closed | 2017-09-07T09:28:48Z | 2017-09-07T10:50:17Z | https://github.com/zalandoresearch/fashion-mnist/issues/59 | [
"benchmark"
] | umbertogriffo | 2 |
kaliiiiiiiiii/Selenium-Driverless | web-scraping | 85 | cdp_socket.exceptions.CDPError: {'code': -32000, 'message': 'Could not find node with given id'} | > ```python
> from selenium_driverless import webdriver
> from selenium_driverless.types.by import By
> import asyncio
> async def main():
> options = webdriver.ChromeOptions()
> async with webdriver.Chrome(options=options) as driver:
> await driver.get('https://www.google.com/', wait_load=True)
> elements = await driver.find_element(By.CSS_SELECTOR,'#APjFqb')
> print(elements)
> await elements.write('forex')
> elements1 = await driver.find_element(By.CSS_SELECTOR, 'body > div.L3eUgb > div.o3j99.ikrT4e.om7nvf > form > div:nth-child(1) > div.A8SBwf > div.FPdoLc.lJ9FBc > center > input.gNO89b',timeout=300)
> await elements1.click()
> elements2 = await driver.find_element(By.CSS_SELECTOR, '#rso > div:nth-child(2) > div > div > div > div > div > div > div > div.yuRUbf > div > span > a > h3',timeout=300)
> #await elements2.click()
> input('a: ')
>
> asyncio.run(main())
> ```
> ```
> C:\Users\PycharmProjects\new\venv\Scripts\python.exe C:/Users/samm/PycharmProjects/untitled/mm.py
> WebElement("None", obj_id=None, node_id="59", backend_node_id=None, context_id=None)
> Traceback (most recent call last):
> File "C:\Users\PycharmProjects\untitled\mm.py", line 17, in <module>
> asyncio.run(main())
> File "C:\Users\AppData\Local\Programs\Python\Python39\lib\asyncio\runners.py", line 44, in run
> return loop.run_until_complete(main)
> File "C:\Users\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 642, in run_until_complete
> return future.result()
> File "C:\Users\PycharmProjects\untitled\mm.py", line 13, in main
> elements2 = await driver.find_element(By.CSS_SELECTOR, '#rso > div:nth-child(2) > div > div > div > div > div > div > div > div.yuRUbf > div > span > a > h3',timeout=300)
> File "C:\Users\PycharmProjects\new\venv\lib\site-packages\selenium_driverless\webdriver.py", line 662, in find_element
> return await target.find_element(by=by, value=value, parent=parent, timeout=timeout)
> File "C:\Users\PycharmProjects\new\venv\lib\site-packages\selenium_driverless\types\target.py", line 547, in find_element
> return await parent.find_element(by=by, value=value, timeout=timeout)
> File "C:\Users\PycharmProjects\new\venv\lib\site-packages\selenium_driverless\types\webelement.py", line 245, in find_element
> elems = await self.find_elements(by=by, value=value)
> File "C:\Users\PycharmProjects\new\venv\lib\site-packages\selenium_driverless\types\webelement.py", line 280, in find_elements
> res = await self.__target__.execute_cdp_cmd("DOM.querySelectorAll", {"nodeId": node_id,
> File "C:\Users\PycharmProjects\new\venv\lib\site-packages\selenium_driverless\types\target.py", line 742, in execute_cdp_cmd
> result = await self.socket.exec(method=cmd, params=cmd_args, timeout=timeout)
> File "C:\Users\PycharmProjects\new\venv\lib\site-packages\cdp_socket\socket.py", line 69, in exec
> return await asyncio.wait_for(self._responses[_id], timeout=timeout)
> File "C:\Users\AppData\Local\Programs\Python\Python39\lib\asyncio\tasks.py", line 481, in wait_for
> return fut.result()
> cdp_socket.exceptions.CDPError: {'code': -32000, 'message': 'Could not find node with given id'}
> ```
How can I fix this error: `cdp_socket.exceptions.CDPError: {'code': -32000, 'message': 'Could not find node with given id'}`? Please help.
AntonOsika/gpt-engineer | python | 530 | Using gpt-engineer with Azure OpenAI |
Hi, I am trying to test gpt-engineer using Azure OpenAI, but I am getting an authentication error. I have added all the additional details that are required for Azure OpenAI, like the api_base URL, model, etc., in the Python file ai.py in the gpt_engineer folder. Am I missing something? Can you please help me out with this issue?
I have set the OpenAI API key as a Windows environment variable. The rest of the steps were followed according to the readme file.
<img width="946" alt="image" src="https://github.com/AntonOsika/gpt-engineer/assets/53396422/d3657e3b-1e49-4f6c-adac-18125ee1f29f">
| closed | 2023-07-13T11:01:49Z | 2023-10-05T07:44:45Z | https://github.com/AntonOsika/gpt-engineer/issues/530 | [] | RenukaMane | 6 |
huggingface/datasets | pytorch | 7,473 | Webdataset data format problem | ### Describe the bug
Please see https://huggingface.co/datasets/ejschwartz/idioms/discussions/1
Error code: FileFormatMismatchBetweenSplitsError
All three splits, train, test, and validation, use webdataset. But only the train split has more than one file. How can I force the other two splits to also be interpreted as being the webdataset format? (I don't think there is currently a way, but happy to be told that I am wrong.)
### Steps to reproduce the bug
```
import datasets
datasets.load_dataset("ejschwartz/idioms")
```

### Expected behavior
The dataset loads. Alternatively, there is a YAML syntax for manually specifying the format.
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.28.1
- PyArrow version: 19.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0 | closed | 2025-03-21T17:23:52Z | 2025-03-21T19:19:58Z | https://github.com/huggingface/datasets/issues/7473 | [] | edmcman | 1 |
onnx/onnx | scikit-learn | 6,131 | Model zoo test failures | https://github.com/onnx/onnx/actions/runs/8962117051/job/24610534787#step:7:943
```
--------------Time used: 0.31854987144470215 secs-------------
In all 184 models, 4 models failed, 25 models were skipped
ResNet-preproc failed because: Field 'type' of 'value_info' is required but missing.
VGG 16-bn failed because: /Users/runner/work/onnx/onnx/onnx/version_converter/adapters/transformers.h:35: operator(): Assertion `node->i(attr) == value` failed: Attribute spatial must have value 1
VGG 19-bn failed because: /Users/runner/work/onnx/onnx/onnx/version_converter/adapters/transformers.h:35: operator(): Assertion `node->i(attr) == value` failed: Attribute spatial must have value 1
SSD-MobilenetV1 failed because: [ShapeInferenceError] Inference error(s): (op_type:Loop, node name: generic_loop_Loop__48): [ShapeInferenceError] Inference error(s): (op_type:If, node name: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond_If__115): [ShapeInferenceError] Inference error(s): (op_type:Concat, node name: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/concat): [ShapeInferenceError] All inputs to Concat must have same rank. Input 1 has rank 1 != 2
```
cc @ramkrishna2910 | open | 2024-05-06T16:23:02Z | 2024-05-07T23:43:57Z | https://github.com/onnx/onnx/issues/6131 | [] | justinchuby | 1 |
OpenGeoscience/geonotebook | jupyter | 129 | Annotation bounds passed to index are always in WGS84 | These need to be reprojected to the native SRS of the image before being passed to Rasterio's index function. | closed | 2017-07-19T19:19:31Z | 2017-07-21T14:35:10Z | https://github.com/OpenGeoscience/geonotebook/issues/129 | [] | dorukozturk | 0 |
pytorch/vision | machine-learning | 8,956 | `all_ops` argument of `AugMix()` accepts many types of values | ### 🐛 Describe the bug
[The doc](https://pytorch.org/vision/main/generated/torchvision.transforms.v2.AugMix.html) of `AugMix()` says that `all_ops` parameter is `bool` as shown below:
> Parameters:
> ...
> - all_ops ([bool](https://docs.python.org/3/library/functions.html#bool), optional) – Use all operations (including brightness, contrast, color and sharpness). Default is True.
But `all_ops` argument accepts many types of values as shown below:
```python
from torchvision.datasets import OxfordIIITPet
from torchvision.transforms.v2 import AugMix
my_data = OxfordIIITPet(
root="data",
transform=AugMix(all_ops=3.+0.j)
# transform=AugMix(all_ops="Hello")
# transform=AugMix(all_ops=[])
# transform=AugMix(all_ops=None)
)
my_data[0]
# (<PIL.Image.Image image mode=RGB size=394x500>, 0)
```
### Versions
```python
import torchvision
torchvision.__version__ # '0.20.1'
``` | open | 2025-03-07T14:28:44Z | 2025-03-07T14:28:44Z | https://github.com/pytorch/vision/issues/8956 | [] | hyperkai | 0 |
stanford-oval/storm | nlp | 99 | Fixed Timeout in WebPageHelper Could Lead to Incomplete Data Retrieval | # Fixed Timeout in WebPageHelper Could Lead to Incomplete Data Retrieval
## Description
In the file `utils.py`, the `WebPageHelper` class uses a fixed timeout of 4 seconds for all HTTP requests:
```python
res = self.httpx_client.get(url, timeout=4)
```
This fixed timeout can lead to issues with data retrieval, especially when dealing with varying network conditions and server response times.
## Why this is problematic
1. **Incomplete Data Retrieval**: A fixed 4-second timeout might be too short for some servers or under certain network conditions such as satellite, mobile networks, etc., leading to incomplete data retrieval. This could result in partial or missing information in the knowledge base. Could also be related to issue #88
2. **Inconsistent Performance**: The timeout doesn't account for the variability in server response times. Some requests might fail unnecessarily, while others might take longer than needed.
3. **Inefficient Resource Usage**: A fixed timeout doesn't allow for optimizing resource usage based on the specific requirements of different requests or the current system load.
4. **Poor Adaptability**: The current implementation doesn't adapt to changing network conditions or server responsiveness, which could lead to suboptimal performance in dynamic environments.
5. **Potential Data Bias**: If certain types of content consistently take longer to retrieve, a fixed timeout could inadvertently introduce bias into the collected data by systematically excluding this content.
## How it affects knowledge curation
1. **Incomplete Knowledge Base**: Incomplete data retrieval can lead to gaps in the knowledge base, affecting the quality and comprehensiveness of the curated information.
2. **Unreliable Information Gathering**: Inconsistent retrieval of information can lead to unreliable or inconsistent knowledge curation results.
3. **Reduced Efficiency**: Unnecessary timeouts on faster responses and premature timeouts on slower but valid responses can significantly reduce the overall efficiency of the knowledge curation process.
## Proposed Solution
Implement a more flexible and adaptive timeout strategy:
1. **Dynamic Timeout**: Implement a dynamic timeout that adjusts based on factors such as:
- The average response time of the server
- The size of the expected response
- The current network conditions
- The importance or priority of the request
2. **Retry Mechanism**: Implement a retry mechanism with exponential backoff for failed requests. This can help handle temporary network issues or server hiccups.
3. **Timeout Configuration**: Allow the timeout to be configurable, either through environment variables or a configuration file. This enables easy adjustment without code changes.
4. **Adaptive Timeout**: Implement an adaptive timeout system that learns from past request performance and adjusts accordingly.
## Example Implementation
Here's a basic example of how this could be implemented:
```python
import backoff
import httpx
class WebPageHelper:
def __init__(self, base_timeout=4, max_timeout=30):
self.base_timeout = base_timeout
self.max_timeout = max_timeout
self.httpx_client = httpx.Client()
@backoff.on_exception(backoff.expo, httpx.TimeoutException, max_time=300)
def get_with_retry(self, url):
timeout = min(self.base_timeout * 2, self.max_timeout) # Double the timeout, but cap it
return self.httpx_client.get(url, timeout=timeout)
def download_webpage(self, url):
try:
res = self.get_with_retry(url)
if res.status_code >= 400:
res.raise_for_status()
return res.content
except httpx.HTTPError as exc:
print(f"Error while requesting {exc.request.url!r} - {exc!r}")
return None
```
This implementation uses a base timeout that can be doubled (up to a maximum limit) and includes a retry mechanism with exponential backoff.
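For the retry part specifically, the delay schedule behind exponential backoff can be sketched in plain Python (illustrative only; the real `backoff` library also applies random jitter by default, and its parameter names differ slightly):

```python
def expo_delays(attempts, base=1.0, factor=2.0, max_value=None):
    """Exponential backoff delays: base * factor**attempt, optionally capped at max_value."""
    delays = []
    for attempt in range(attempts):
        delay = base * factor ** attempt
        if max_value is not None:
            delay = min(delay, max_value)
        delays.append(delay)
    return delays

print(expo_delays(5))               # [1.0, 2.0, 4.0, 8.0, 16.0]
print(expo_delays(5, max_value=6))  # [1.0, 2.0, 4.0, 6, 6]
```

Capping the delay (as `max_value` does here) plays the same role as the `max_timeout` attribute in the proposed `WebPageHelper`: it keeps a few slow servers from stalling the whole crawl.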
## Action Items
- [ ] Implement a dynamic timeout mechanism in the `WebPageHelper` class
- [ ] Add a retry mechanism with exponential backoff for failed requests
- [ ] Make the timeout configurable through environment variables or a config file
- [ ] Update the documentation to reflect the new timeout behavior
- [ ] Add logging to track timeout-related issues and adjust the strategy if needed | closed | 2024-07-22T06:03:30Z | 2025-03-08T09:09:48Z | https://github.com/stanford-oval/storm/issues/99 | [] | rmcc3 | 1 |
tensorflow/tensor2tensor | machine-learning | 1,770 | Multiple GPUs are visible but not allocated and used | ### Description
I am trying to start the quick-start ImageNet run with resnet_101 on 3 GPUs. However, the second and third GPUs are not used; the model is not allocated to them. All GPUs are visible (`$CUDA_VISIBLE_DEVICES=0,1,2`), as seen in the logs.
### Environment information
OS: Ubuntu 18.04
$ pip freeze | grep tensor
```
mesh-tensorflow==0.1.7
tensor2tensor==1.15.2
tensorboard==1.14.0
tensorflow-datasets==1.3.2
tensorflow-estimator==1.14.0
tensorflow-gan==2.0.0
tensorflow-gpu==1.14.0
tensorflow-hub==0.7.0
tensorflow-metadata==0.15.1
tensorflow-probability==0.7.0
```
$ python -V
```
Python 3.7.5
```
### For bugs: reproduction and error logs
```
INFO:tensorflow:Transforming body output with class_label_modality_10_64.top
I1213 18:41:40.599883 140097934563136 t2t_model.py:2261] Transforming body output with class_label_modality_10_64.top
WARNING:tensorflow:From /home/chris/envs/deep/lib/python3.7/site-packages/tensor2tensor/layers/modalities.py:946: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dense instead.
W1213 18:41:40.600962 140097934563136 deprecation.py:323] From /home/chris/envs/deep/lib/python3.7/site-packages/tensor2tensor/layers/modalities.py:946: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dense instead.
WARNING:tensorflow:Entity <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x7f6aaef45f10>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x7f6aaef45f10>>: AssertionError: Bad argument number for Name: 3, expecting 4
W1213 18:41:40.687236 140097934563136 ag_logging.py:145] Entity <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x7f6aaef45f10>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x7f6aaef45f10>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:From /home/chris/envs/deep/lib/python3.7/site-packages/tensor2tensor/utils/learning_rate.py:120: The name tf.train.get_or_create_global_step is deprecated. Please use tf.compat.v1.train.get_or_create_global_step instead.
W1213 18:41:40.742912 140097934563136 deprecation_wrapper.py:119] From /home/chris/envs/deep/lib/python3.7/site-packages/tensor2tensor/utils/learning_rate.py:120: The name tf.train.get_or_create_global_step is deprecated. Please use tf.compat.v1.train.get_or_create_global_step instead.
INFO:tensorflow:Applying exp learning rate warmup for 100 steps
I1213 18:41:40.745574 140097934563136 learning_rate.py:205] Applying exp learning rate warmup for 100 steps
INFO:tensorflow:Applying learning rate decay: cosine.
I1213 18:41:40.750162 140097934563136 learning_rate.py:163] Applying learning rate decay: cosine.
WARNING:tensorflow:From /home/chris/envs/deep/lib/python3.7/site-packages/tensor2tensor/utils/learning_rate.py:112: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
W1213 18:41:40.756382 140097934563136 deprecation.py:323] From /home/chris/envs/deep/lib/python3.7/site-packages/tensor2tensor/utils/learning_rate.py:112: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
INFO:tensorflow:Base learning rate: 0.400000
I1213 18:41:40.756893 140097934563136 learning_rate.py:114] Base learning rate: 0.400000
INFO:tensorflow:Applying weight decay, decay_rate: 0.00010
I1213 18:41:40.759765 140097934563136 optimize.py:294] Applying weight decay, decay_rate: 0.00010
INFO:tensorflow:Trainable Variables Total size: 42514378
I1213 18:41:41.002760 140097934563136 optimize.py:335] Trainable Variables Total size: 42514378
INFO:tensorflow:Non-trainable variables Total size: 105349
I1213 18:41:41.004743 140097934563136 optimize.py:335] Non-trainable variables Total size: 105349
INFO:tensorflow:Using optimizer Momentum
I1213 18:41:41.005156 140097934563136 optimize.py:190] Using optimizer Momentum
WARNING:tensorflow:From /home/chris/envs/deep/lib/python3.7/site-packages/tensor2tensor/utils/registry.py:460: The name tf.logging.warning is deprecated. Please use tf.compat.v1.logging.warning instead.
W1213 18:41:41.005277 140097934563136 deprecation_wrapper.py:119] From /home/chris/envs/deep/lib/python3.7/site-packages/tensor2tensor/utils/registry.py:460: The name tf.logging.warning is deprecated. Please use tf.compat.v1.logging.warning instead.
WARNING:tensorflow:optimizer names now keyed by snake_case names. Please update `registry.optimizer` callsite (likely due to a `HParams.optimizer` value)
W1213 18:41:41.005335 140097934563136 registry.py:461] optimizer names now keyed by snake_case names. Please update `registry.optimizer` callsite (likely due to a `HParams.optimizer` value)
WARNING:tensorflow:From /home/chris/envs/deep/lib/python3.7/site-packages/tensor2tensor/utils/optimize.py:135: The name tf.train.MomentumOptimizer is deprecated. Please use tf.compat.v1.train.MomentumOptimizer instead.
W1213 18:41:41.005416 140097934563136 deprecation_wrapper.py:119] From /home/chris/envs/deep/lib/python3.7/site-packages/tensor2tensor/utils/optimize.py:135: The name tf.train.MomentumOptimizer is deprecated. Please use tf.compat.v1.train.MomentumOptimizer instead.
INFO:tensorflow:Done calling model_fn.
I1213 18:41:44.708933 140097934563136 estimator.py:1147] Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
I1213 18:41:44.710017 140097934563136 basic_session_run_hooks.py:541] Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
I1213 18:41:47.335859 140097934563136 monitored_session.py:240] Graph was finalized.
2019-12-13 18:41:47.337449: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:0b:00.0
2019-12-13 18:41:47.337526: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-13 18:41:47.338237: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 1 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:41:00.0
2019-12-13 18:41:47.338302: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-13 18:41:47.339012: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 2 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:42:00.0
2019-12-13 18:41:47.339057: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.1
2019-12-13 18:41:47.339068: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10
2019-12-13 18:41:47.339077: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10
2019-12-13 18:41:47.339086: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10
2019-12-13 18:41:47.339095: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10
2019-12-13 18:41:47.339104: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10
2019-12-13 18:41:47.339114: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-12-13 18:41:47.341557: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-13 18:41:47.342539: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-13 18:41:47.344254: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-13 18:41:47.345240: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-13 18:41:47.345944: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0, 1, 2
2019-12-13 18:41:47.346025: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-12-13 18:41:47.346034: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0 1 2
2019-12-13 18:41:47.346041: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N Y Y
2019-12-13 18:41:47.346045: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 1: Y N Y
2019-12-13 18:41:47.346049: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 2: Y Y N
2019-12-13 18:41:47.347136: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-13 18:41:47.348120: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-13 18:41:47.349842: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10481 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:0b:00.0, compute capability: 6.1)
2019-12-13 18:41:47.349905: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-13 18:41:47.350852: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10481 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:41:00.0, compute capability: 6.1)
2019-12-13 18:41:47.350908: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-13 18:41:47.351615: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 10479 MB memory) -> physical GPU (device: 2, name: GeForce GTX 1080 Ti, pci bus id: 0000:42:00.0, compute capability: 6.1)
2019-12-13 18:41:49.156848: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
INFO:tensorflow:Running local_init_op.
I1213 18:41:50.544704 140097934563136 session_manager.py:500] Running local_init_op.
INFO:tensorflow:Done running local_init_op.
I1213 18:41:50.711471 140097934563136 session_manager.py:502] Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into /home/chris/output/model.ckpt.
I1213 18:41:55.744890 140097934563136 basic_session_run_hooks.py:606] Saving checkpoints for 0 into /home/chris/output/model.ckpt.
2019-12-13 18:42:01.270528: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10
2019-12-13 18:42:04.607807: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
INFO:tensorflow:loss = 12.15226, step = 0
I1213 18:42:05.817591 140097934563136 basic_session_run_hooks.py:262] loss = 12.15226, step = 0
INFO:tensorflow:global_step/sec: 8.8183
I1213 18:42:17.157002 140097934563136 basic_session_run_hooks.py:692] global_step/sec: 8.8183
INFO:tensorflow:loss = 14.065001, step = 100 (11.341 sec)
I1213 18:42:17.158270 140097934563136 basic_session_run_hooks.py:260] loss = 14.065001, step = 100 (11.341 sec)
INFO:tensorflow:global_step/sec: 12.5375
I1213 18:42:25.133048 140097934563136 basic_session_run_hooks.py:692] global_step/sec: 12.5375
INFO:tensorflow:loss = 3.8954778, step = 200 (7.976 sec)
I1213 18:42:25.134311 140097934563136 basic_session_run_hooks.py:260] loss = 3.8954778, step = 200 (7.976 sec)
```
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.44 Driver Version: 440.44 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 108... Off | 00000000:0B:00.0 Off | N/A |
| 40% 76C P2 220W / 250W | 10899MiB / 11178MiB | 90% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 108... Off | 00000000:41:00.0 Off | N/A |
| 32% 59C P8 13W / 250W | 147MiB / 11178MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 GeForce GTX 108... Off | 00000000:42:00.0 Off | N/A |
| 35% 61C P8 22W / 250W | 147MiB / 11176MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 18199 C /home/chris/envs/deep/bin/python 10889MiB |
| 1 18199 C /home/chris/envs/deep/bin/python 137MiB |
| 2 18199 C /home/chris/envs/deep/bin/python 137MiB |
+-----------------------------------------------------------------------------+
```
# Steps to reproduce:
```
t2t-trainer \
--generate_data \
--data_dir=~/data \
--output_dir=~/output \
--problem=image_mnist \
--model=resnet \
--hparams_set=resnet_101 \
--train_steps=200000 \
--eval_steps=5000 \
--worker-gpu=3
```
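A quick sanity check worth running before digging into t2t itself (plain stdlib, nothing t2t-specific) is to confirm the process really sees all three indices. Note also that the t2t flag is spelled `--worker_gpu` with an underscore, so the hyphenated spelling above may be worth double-checking against the flag parser:

```python
import os

def visible_gpu_indices(env=None):
    """Parse CUDA_VISIBLE_DEVICES the way a quick sanity check would."""
    env = os.environ if env is None else env
    raw = env.get("CUDA_VISIBLE_DEVICES", "")
    return [int(tok) for tok in raw.split(",") if tok.strip().isdigit()]

print(visible_gpu_indices({"CUDA_VISIBLE_DEVICES": "0,1,2"}))  # -> [0, 1, 2]
```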
# Error logs:
No errors...
| closed | 2019-12-13T17:51:11Z | 2019-12-13T18:30:23Z | https://github.com/tensorflow/tensor2tensor/issues/1770 | [] | cgebe | 1 |
voxel51/fiftyone | computer-vision | 5,220 | [BUG] AttributeError: 'FiftyOneRTDETRModelConfig' object has no attribute 'model' for models rtdetr-l-coco-torch and rtdetr-x-coco-torch | ### Describe the problem
The `rtdetr-l-coco-torch` and `rtdetr-x-coco-torch` models (https://docs.voxel51.com/model_zoo/models.html#rtdetr-l-coco-torch) do not load anymore. Running the provided notebook from the docs gives
```
Traceback (most recent call last):
File "<ipython-input-10-c73e2e7b9249>", line 11, in <cell line: 9>
model = foz.load_zoo_model(model_name)
File "/usr/local/lib/python3.10/dist-packages/fiftyone/zoo/models/__init__.py", line 308, in load_zoo_model
model = fom.load_model(config_dict, model_path=model_path, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/fiftyone/core/models.py", line 1916, in load_model
return config.build()
File "/usr/local/lib/python3.10/dist-packages/eta/core/learning.py", line 296, in build
return self._model_cls(self.config)
File "/usr/local/lib/python3.10/dist-packages/fiftyone/utils/ultralytics.py", line 482, in __init__
self.model = self._load_model(config)
File "/usr/local/lib/python3.10/dist-packages/fiftyone/utils/ultralytics.py", line 485, in _load_model
if config.model is not None:
AttributeError: 'FiftyOneRTDETRModelConfig' object has no attribute 'model'
```
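The traceback suggests `FiftyOneRTDETRModelConfig` simply never defines the `model` attribute that `_load_model` expects. A defensive lookup would avoid this class of crash; it is sketched here with a made-up stand-in class, not FiftyOne's real config:

```python
class StandInConfig:
    """Placeholder for a config object that may lack a ``model`` field."""
    pass

def resolve_model_path(config):
    # Treat a missing attribute the same as an explicit None.
    return getattr(config, "model", None)
```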
### Code to reproduce issue
```
import fiftyone as fo
import fiftyone.zoo as foz
dataset = foz.load_zoo_dataset(
"coco-2017",
split="validation",
dataset_name=fo.get_default_dataset_name(),
max_samples=50,
shuffle=True,
)
model = foz.load_zoo_model("rtdetr-l-coco-torch")
dataset.apply_model(model, label_field="predictions")
session = fo.launch_app(dataset)
```
### System information
- **OS Platform and Distribution** (e.g., Linux Ubuntu 22.04): Google Colab T4 Instance
- **Python version** (`python --version`): Python 3.10.12
- **FiftyOne version** (`fiftyone --version`): FiftyOne v1.0.2, Voxel51, Inc.
- **FiftyOne installed from** (pip or source): pip
### Willingness to contribute
The FiftyOne Community encourages bug fix contributions. Would you or another
member of your organization be willing to contribute a fix for this bug to the
FiftyOne codebase?
- [ ] Yes. I can contribute a fix for this bug independently
- [x] Yes. I would be willing to contribute a fix for this bug with guidance
from the FiftyOne community
- [ ] No. I cannot contribute a bug fix at this time
| closed | 2024-12-05T15:14:13Z | 2024-12-09T16:38:51Z | https://github.com/voxel51/fiftyone/issues/5220 | [
"bug"
] | daniel-bogdoll | 0 |
sqlalchemy/alembic | sqlalchemy | 491 | drop python 2.6 support and bump version to 1.0 | **Migrated issue, originally created by Michael Bayer ([@zzzeek](https://github.com/zzzeek))**
The argparse requirement in setup.py can be removed once we support only 2.7 and above. That in turn would make a universal wheel package on PyPI possible.
| closed | 2018-04-21T13:14:57Z | 2018-06-30T01:09:24Z | https://github.com/sqlalchemy/alembic/issues/491 | [
"bug",
"installation"
] | sqlalchemy-bot | 3 |
JaidedAI/EasyOCR | deep-learning | 1,131 | Letter 'o' is wrongly interpreted as 0 (zero) | Portuguese uses the letter 'o' a lot as a standalone word in sentences. It is almost always recognized as 0 (zero). How can I overcome this issue?

| open | 2023-09-04T13:48:19Z | 2023-09-30T02:13:30Z | https://github.com/JaidedAI/EasyOCR/issues/1131 | [] | bilalsattar | 4 |
Anjok07/ultimatevocalremovergui | pytorch | 1,642 | resources to understand which model to pick ? | hi, are there any resources to understood which models to pick, and which are the most advanced ? I use hq 5 but idk if thats' the best at this time | open | 2024-11-29T22:01:49Z | 2024-11-29T22:01:49Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1642 | [] | avichou | 0 |
waditu/tushare | pandas | 1,701 | 利润表下载数据有重复 | 例如:000001.sz pro.income(ts_code=‘000001.sz ’, start_date=‘20000101’, end_date=‘20230418’, repot_type=1) 得到的数据,多处重复
ID:280603 | open | 2023-04-19T08:41:39Z | 2023-04-19T08:43:38Z | https://github.com/waditu/tushare/issues/1701 | [] | YellowStarr | 0 |
ydataai/ydata-profiling | pandas | 825 | Phi K correlation variable order | For me all correlation plots show variables in the (domain-specific sensible) order of the columns in my data frame.
Only Phi K shows them in some other order.
Is this a bug or a feature?
Is there a setting to get the "good" order?
This is with pandas 1.3 and pandas-profiling 3.0.0
<img width="879" alt="Screenshot 2021-09-05 at 21 43 55" src="https://user-images.githubusercontent.com/852409/132139566-ba92033b-98fb-4b3d-a869-6c096ed294a1.png">
<img width="907" alt="Screenshot 2021-09-05 at 21 43 45" src="https://user-images.githubusercontent.com/852409/132139567-22e2d9ce-cdc8-4b95-93b2-7445a78ed397.png">
| closed | 2021-09-05T19:46:25Z | 2021-09-16T08:31:52Z | https://github.com/ydataai/ydata-profiling/issues/825 | [
"bug 🐛",
"help wanted 🙋"
] | cdeil | 5 |
huggingface/datasets | pytorch | 6,595 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2 | ### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So I:
1. Map dataset
2. Save to disk
3. Try to upload:
```
import datasets
from datasets import load_from_disk
dataset = load_from_disk("ds")
datasets.config.DEFAULT_MAX_BATCH_SIZE = 1
dataset.push_to_hub("kopyl/ds", private=True, max_shard_size="500MB")
```
And i get this error:
`pyarrow.lib.ArrowNotImplementedError: Unhandled type for Arrow to Parquet schema conversion: halffloat`
Full traceback:
```
>>> dataset.push_to_hub("kopyl/3M_icons_monochrome_only_no_captioning_mapped-for-SDXL-2", private=True, max_shard_size="500MB")
Map: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1451/1451 [00:00<00:00, 6827.40 examples/s]
Uploading the dataset shards: 0%| | 0/2099 [00:00<?, ?it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py", line 1705, in push_to_hub
split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
File "/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py", line 5208, in _push_parquet_shards_to_hub
shard.to_parquet(buffer)
File "/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py", line 4931, in to_parquet
return ParquetDatasetWriter(self, path_or_buf, batch_size=batch_size, **parquet_writer_kwargs).write()
File "/usr/local/lib/python3.10/dist-packages/datasets/io/parquet.py", line 129, in write
written = self._write(file_obj=self.path_or_buf, batch_size=batch_size, **self.parquet_writer_kwargs)
File "/usr/local/lib/python3.10/dist-packages/datasets/io/parquet.py", line 141, in _write
writer = pq.ParquetWriter(file_obj, schema=schema, **parquet_writer_kwargs)
File "/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/core.py", line 1016, in __init__
self.writer = _parquet.ParquetWriter(
File "pyarrow/_parquet.pyx", line 1869, in pyarrow._parquet.ParquetWriter.__cinit__
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Unhandled type for Arrow to Parquet schema conversion: halffloat
```
Smaller datasets saved and pushed the same way work fine; big ones do not.
I'm currently trying to upload the dataset like this:
`HfApi().upload_folder...`
But I'm not sure that `load_dataset` would work well in this case.
Setting `num_shards` does not help either:
```
dataset.push_to_hub("kopyl/3M_icons_monochrome_only_no_captioning_mapped-for-SDXL-2", private=True, num_shards={'train': 500})
```
Tried 3000, 500, 478, 100
Also do you know if it's possible to push a dataset with multiple processes? It would take an eternity pushing 1TB...
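For anyone else hitting this: `halffloat` is Arrow's name for float16, which the Parquet writer here cannot serialize. Upcasting the offending float16 features to float32 before pushing should sidestep it (my guess at the call is `dataset.cast_column("col", datasets.Value("float32"))`; I have not run it on this dataset). The upcast itself is lossless, as this stdlib check illustrates:

```python
import struct

def widen_half(value):
    """Round a float to IEEE-754 half precision, then re-encode as single.

    Every half-precision value is exactly representable in single
    precision, so float16 -> float32 loses nothing.
    """
    half = struct.unpack("e", struct.pack("e", value))[0]
    return struct.unpack("f", struct.pack("f", half))[0]
```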
### Steps to reproduce the bug
Described above
### Expected behavior
Should be able to upload...
### Environment info
Total dataset size: 978G
Amount of `.arrow` files: 2101
Each `.arrow` file size: 477M (I know 477 megabytes * 2101 does not equal 978G, but I only checked the size of a couple of `.arrow` files; I don't know whether some have a different size)
Some files:
- "ds/train/state.json": https://pastebin.com/tJ3ZLGAg
- "ds/train/dataset_info.json": https://pastebin.com/JdXMQ5ih | closed | 2024-01-16T02:03:09Z | 2024-01-27T18:26:33Z | https://github.com/huggingface/datasets/issues/6595 | [] | kopyl | 14 |
napari/napari | numpy | 7,226 | layers.events.changed not calling | ### 🐛 Bug Report
The callback for layers.events.changed doesn't seem to be working, although this may be due to confusion over how it's meant to work thanks to the ongoing issues with events documentation in Napari.
At least to me, the name implies that this event is called every time any change to the layerlist occurs (add, remove, reorder).
Originally used 0.4.19 because of a bug I reported in 0.5.2 that was affecting my plugin, but the issue seems to persist in 0.5.2.
### 💡 Steps to Reproduce
Implement a function tagged with `@layers.events.changed`.
The function is never called.
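For what it's worth, the behaviour would be consistent with `changed` meaning "an item was replaced in place" rather than "the list changed in any way": adding or removing layers would then emit `inserted`/`removed` instead, and a callback connected only to `changed` stays silent. That reading of napari's event semantics is an assumption on my part, sketched here with a toy class:

```python
class ToyEventedList(list):
    """Toy model: append emits 'inserted', item replacement emits 'changed'."""

    def __init__(self):
        super().__init__()
        self.emitted = []

    def append(self, item):
        super().append(item)
        self.emitted.append("inserted")

    def __setitem__(self, index, item):
        super().__setitem__(index, item)
        self.emitted.append("changed")
```

If that model holds, connecting to `layers.events.inserted` (and `removed`) as well should catch the additions.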
### 💡 Expected Behavior
_No response_
### 🌎 Environment
napari: 0.5.2
Platform: Windows-10-10.0.19045-SP0
Python: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:40:08) [MSC v.1938 64 bit (AMD64)]
Qt: 5.15.2
PyQt5: 5.15.11
NumPy: 1.26.4
SciPy: 1.14.0
Dask: 2024.8.1
VisPy: 0.14.3
magicgui: 0.9.1
superqt: 0.6.7
in-n-out: 0.2.1
app-model: 0.2.8
npe2: 0.7.7
OpenGL:
- GL version: 4.6.14761 Compatibility Profile Context 21.30.02.01 30.0.13002.1001
- MAX_TEXTURE_SIZE: 16384
- GL_MAX_3D_TEXTURE_SIZE: 2048
Screens:
- screen 1: resolution 1920x1080, scale 1.0
Optional:
- numba: 0.60.0
- triangle: 20230923
- napari-plugin-manager: 0.1.0
Settings path:
- C:\Users\josep\AppData\Local\napari\napari-env_c9b7736bcca0cbc8c73003efd88a2db25209893b\settings.yaml
### 💡 Additional Context
_No response_ | closed | 2024-08-29T15:42:48Z | 2025-02-19T06:59:03Z | https://github.com/napari/napari/issues/7226 | [
"documentation"
] | Joseph-Garvey | 7 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,676 | Repeated Pattern in generated images | I'm using the pix2pix model to generate images with some datasets I collected, but the model keeps generating images with a repeated pattern in the middle.
I'm using scale_width_and_crop for the preprocess option.
Does anyone know why?
This is a sample image i generated.

| open | 2024-09-23T00:51:16Z | 2024-09-23T00:51:16Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1676 | [] | Jiwno | 0 |
zama-ai/concrete-ml | scikit-learn | 380 | Accuracy mismatch between clear and FHE predictions in CIFAR-10 example | I am trying to run the VGG like NN on CIFAR-10 example found at https://github.com/zama-ai/concrete-ml/tree/main/use_case_examples/cifar/cifar_brevitas_training.
When I run the example once, I get correct results: the validation accuracy is similar for clear and FHE predictions. However, when I run the same code multiple times with the same parameters, I get a huge difference between the accuracy of clear and FHE predictions. For example, the clear prediction accuracy is around 97% and remains similar across iterations, while the FHE prediction accuracy drops to around 40% to 50% and varies in that range across iterations. I am executing the scripts as obtained from the repository, only changing the number of training epochs from the default 1000 to 10.
I am unable to understand why this is happening. While it is understandable that multiple training runs will produce models with varied accuracy, such a significant drop in accuracy, and only in FHE prediction, does not make sense to me. Please let me know whether any other code base for CIFAR-10 is available.
"bug"
] | bhuvneshchaturvedi2512 | 4 |
Johnserf-Seed/TikTokDownload | api | 482 | [BUG] | Hi, I've found that downloading TikTok video collections fails with an error every time, possibly because TikTok profile links differ from Douyin's. I tried downloading a personal collection from Douyin and had no problem. The TikTok profile link is https://www.tiktok.com/@soon_ne; I also tried the link provided on your profile page and it doesn't work either. Could you please help me figure this out? Thanks.
Traceback (most recent call last):
File "C:\Users\XTY\Desktop\test\TikTokTool.py", line 32, in <module>
profile.getProfile(cmd.setting())
File "C:\Users\XTY\Desktop\test\Util\Profile.py", line 72, in getProfile
print('[ 提示 ]:用户的sec_id=%s\r' % self.sec)
AttributeError: 'Profile' object has no attribute 'sec' | open | 2023-07-29T03:49:08Z | 2023-08-03T15:01:13Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/482 | [
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] | xty8623 | 2 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 428 | Windows 10 - DLL Error | I've been stuck on this step for 2 days, and I know people have posted this problem before, but the answers have not been too helpful to me, considering I am hopeless, technologically speaking. Anyway, when I type:
> python demo_cli.py
I am returned with:
> Traceback (most recent call last):
> File "C:\Users\hugow\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
> from tensorflow.python.pywrap_tensorflow_internal import *
> File "C:\Users\hugow\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
> _pywrap_tensorflow_internal = swig_import_helper()
> File "C:\Users\hugow\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
> _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
> File "C:\Users\hugow\AppData\Local\Programs\Python\Python37\lib\imp.py", line 242, in load_module
> return load_dynamic(name, filename, file)
> File "C:\Users\hugow\AppData\Local\Programs\Python\Python37\lib\imp.py", line 342, in load_dynamic
> return _load(spec)
> ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
> File "demo_cli.py", line 4, in <module>
> from synthesizer.inference import Synthesizer
> File "C:\Users\hugow\Downloads\Real-Time-Voice-Cloning-master\Real-Time-Voice-Cloning-master\synthesizer\inference.py", line 1, in <module>
> from synthesizer.tacotron2 import Tacotron2
> File "C:\Users\hugow\Downloads\Real-Time-Voice-Cloning-master\Real-Time-Voice-Cloning-master\synthesizer\tacotron2.py", line 3, in <module>
> from synthesizer.models import create_model
> File "C:\Users\hugow\Downloads\Real-Time-Voice-Cloning-master\Real-Time-Voice-Cloning-master\synthesizer\models\__init__.py", line 1, in <module>
> from .tacotron import Tacotron
> File "C:\Users\hugow\Downloads\Real-Time-Voice-Cloning-master\Real-Time-Voice-Cloning-master\synthesizer\models\tacotron.py", line 1, in <module>
> import tensorflow as tf
> File "C:\Users\hugow\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\__init__.py", line 99, in <module>
> from tensorflow_core import *
> File "C:\Users\hugow\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\__init__.py", line 28, in <module>
> from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
> File "C:\Users\hugow\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\__init__.py", line 50, in __getattr__
> module = self._load()
> File "C:\Users\hugow\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\__init__.py", line 44, in _load
> module = _importlib.import_module(self.__name__)
> File "C:\Users\hugow\AppData\Local\Programs\Python\Python37\lib\importlib\__init__.py", line 127, in import_module
> return _bootstrap._gcd_import(name[level:], package, level)
> File "C:\Users\hugow\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\__init__.py", line 49, in <module>
> from tensorflow.python import pywrap_tensorflow
> File "C:\Users\hugow\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 74, in <module>
> raise ImportError(msg)
> ImportError: Traceback (most recent call last):
> File "C:\Users\hugow\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
> from tensorflow.python.pywrap_tensorflow_internal import *
> File "C:\Users\hugow\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
> _pywrap_tensorflow_internal = swig_import_helper()
> File "C:\Users\hugow\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
> _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
> File "C:\Users\hugow\AppData\Local\Programs\Python\Python37\lib\imp.py", line 242, in load_module
> return load_dynamic(name, filename, file)
> File "C:\Users\hugow\AppData\Local\Programs\Python\Python37\lib\imp.py", line 342, in load_dynamic
> return _load(spec)
> ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.
>
>
> Failed to load the native TensorFlow runtime.
>
> See https://www.tensorflow.org/install/errors
>
> for some common reasons and solutions. Include the entire stack trace
> above this error message when asking for help.
>
Please, if you have anything that might help, can you spell it out for me literally? I cannot stress enough that I have no idea what I'm doing. Thanks. | closed | 2020-07-17T16:32:39Z | 2020-07-27T00:40:13Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/428 | [] | gfhsrtras | 3
CorentinJ/Real-Time-Voice-Cloning | python | 285 | AssertionError | While synthesizer_preprocess_audio.py was running, it was interrupted by an assertion error at this point every time. Can anyone tell me what is happening?
AssertionError
LibriSpeech: 21%|███▊ | 251/1172 [48:17<2:57:10, 11.54s/speakers] | closed | 2020-02-19T06:37:12Z | 2020-07-04T22:38:57Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/285 | [] | AmitBDA | 1 |
numba/numba | numpy | 9,398 | CUDA: Providing a signature for a function reference when eagerly compiling does not work | ## Reporting a bug
- [x] I have tried using the latest released version of Numba (the most recent
is visible in the [release notes](https://numba.readthedocs.io/en/stable/release-notes-overview.html)).
- [x] I have included a self-contained code sample to reproduce the problem,
i.e. it's possible to run as `python bug.py`.
Minimal example:
```python
import numba as nb
from numba import cuda
import numpy as np
@cuda.jit("f8(f8)", device = True, inline = True)
def foo(x):
return x + 1
@cuda.jit("f8(FunctionType(f8(f8)),f8)", device = True, inline = True) # ISSUE HERE
def demo(func, x):
return func(x) * 2
@nb.vectorize("f8(f8)", target = "cuda", nopython = True)
def demo_vec(x):
return demo(foo, x)
X = np.arange(10., 20.)
print(demo_vec(X))
```
On CPUs, i.e. using `numba.jit`, I could provide a signature for `demo` as follows: `f8(FunctionType(f8(f8)),f8)`. This appears to not work for CUDA.
Workaround: Do not provide a signature for `demo`.
Providing `CUDADispatcher` instead of `FunctionType` does not work either. Is there a different (undocumented) notation or is there simply not a solution for this in `numba` at the moment?
For more context / my use case, see [here](https://numba.discourse.group/t/cuda-device-function-pointers-re-implementing-scipy-integrate-solve-ivp-for-numba-cuda/2342). | open | 2024-01-19T07:51:29Z | 2024-02-27T07:07:54Z | https://github.com/numba/numba/issues/9398 | [
"feature_request",
"CUDA"
] | s-m-e | 3 |
TracecatHQ/tracecat | pydantic | 15 | Readme: add definition of SOAR | **Is your feature request related to a problem? Please describe.**
When someone stumbles across this repo and is not familiar with the acronym, why should they care or read any further?
**Describe the solution you'd like**
A clear and concise description of SOAR, e.g.:
**SOAR** ([Security Orchestration, Automation and Response](https://www.gartner.com/en/information-technology/glossary/security-orchestration-automation-response-soar)) refers to technologies that enable organizations to collect inputs monitored by the security operations team. | closed | 2024-03-28T10:48:48Z | 2024-03-28T17:37:40Z | https://github.com/TracecatHQ/tracecat/issues/15 | [
"documentation"
] | cleder | 0 |
pyeve/eve | flask | 1,348 | Fix simple typo: wether -> whether | There is a small typo in eve/io/base.py.
Should read `whether` rather than `wether`.
| closed | 2020-01-26T11:28:58Z | 2020-02-01T08:31:49Z | https://github.com/pyeve/eve/issues/1348 | [] | timgates42 | 1 |
pykaldi/pykaldi | numpy | 303 | Install kaldi script fails with missing folder | The `./install_kaldi.sh` script mentioned in the README fails with the following error:
```
All done OK.
Configuring KALDI to use MKL.
Checking compiler g++ ...
Checking OpenFst library in /home/chris/pykaldi/tools/kaldi/tools/openfst-1.6.7 ...
Performing OS specific configuration ...
On Linux: Checking for linear algebra header files ...
Configuring MKL library directory: ***configure failed: Could not find the MKL library directory.
Please use the switch --mkl-root and/or --mkl-libdir if you have MKL installed,
or try another math library, e.g. --mathlib=OPENBLAS (Kaldi may be slower). ***
```
Presumably, I'm missing the undocumented library MKL. Where/how should I install that? | closed | 2022-05-28T23:08:39Z | 2023-09-14T21:43:16Z | https://github.com/pykaldi/pykaldi/issues/303 | [] | chrisspen | 10 |
modin-project/modin | pandas | 6,729 | Use custom pytest mark instead of `--extra-test-parameters` option | closed | 2023-11-09T01:28:33Z | 2023-12-08T14:42:27Z | https://github.com/modin-project/modin/issues/6729 | [
"Code Quality 💯",
"Testing 📈"
] | anmyachev | 0 | |
neuml/txtai | nlp | 692 | Create temporary tables once per database session | Currently, databases check whether they need to create temporary tables before every `createbatch` and `createscores` call. Temporary database tables should instead be created when the database connection is created, as they have a lifespan of the session. | closed | 2024-04-17T12:12:08Z | 2024-04-18T17:18:16Z | https://github.com/neuml/txtai/issues/692 | [] | davidmezzetti | 0
oegedijk/explainerdashboard | plotly | 289 | skorch models raising The SHAP explanations do not sum up to the model's output | Seems related to this bug: https://github.com/shap/shap/issues/3363
This can be avoided by passing `shap_kwargs=dict(check_additivity=False)` to the explainer, but then you might get inaccurate SHAP values.
Added check_additivity=False param to the skorch tests for now.
| open | 2023-12-17T12:20:51Z | 2023-12-17T12:20:51Z | https://github.com/oegedijk/explainerdashboard/issues/289 | [] | oegedijk | 0 |
amisadmin/fastapi-amis-admin | sqlalchemy | 110 | Error when using postgresql asyncpg |
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/uvicorn/protocols/http/httptools_impl.py", line 371, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 59, in __call__
return await self.app(scope, receive, send)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/fastapi/applications.py", line 271, in __call__
await super().__call__(scope, receive, send)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/applications.py", line 118, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/middleware/base.py", line 109, in __call__
await response(scope, receive, send)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/responses.py", line 277, in __call__
await wrap(partial(self.listen_for_disconnect, receive))
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 662, in __aexit__
raise exceptions[0]
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/responses.py", line 273, in wrap
await func()
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/middleware/base.py", line 134, in stream_response
return await super().stream_response(send)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/responses.py", line 262, in stream_response
async for chunk in self.body_iterator:
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/middleware/base.py", line 98, in body_stream
raise app_exc
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/middleware/base.py", line 70, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/routing.py", line 706, in __call__
await route.handle(scope, receive, send)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/routing.py", line 443, in handle
await self.app(scope, receive, send)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/fastapi/applications.py", line 271, in __call__
await super().__call__(scope, receive, send)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/applications.py", line 118, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/routing.py", line 706, in __call__
await route.handle(scope, receive, send)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/fastapi/routing.py", line 237, in app
raw_response = await run_endpoint_function(
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/fastapi/routing.py", line 163, in run_endpoint_function
return await dependant.call(**values)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/fastapi_amis_admin/crud/_sqlmodel.py", line 471, in route
data.total = await self.db.async_scalar(
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/sqlalchemy/ext/asyncio/session.py", line 241, in scalar
result = await self.execute(
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/sqlalchemy/ext/asyncio/session.py", line 215, in execute
result = await greenlet_spawn(
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 126, in greenlet_spawn
result = context.throw(*sys.exc_info())
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/sqlmodel/orm/session.py", line 101, in execute
return super().execute( # type: ignore
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1712, in execute
result = conn._execute_20(statement, params or {}, execution_options)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 333, in _execute_on_connection
return connection._execute_clauseelement(
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement
ret = self._execute_context(
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context
self._handle_dbapi_exception(
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2124, in _handle_dbapi_exception
util.raise_(
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 208, in raise_
raise exception
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
self.dialect.do_execute(
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 479, in execute
self._adapt_connection.await_(
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 68, in await_only
return current.driver.switch(awaitable)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 121, in greenlet_spawn
value = await result
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 454, in _prepare_and_execute
self._handle_exception(error)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 389, in _handle_exception
self._adapt_connection._handle_exception(error)
File "/Users/yu/.conda/envs/turingmodel/lib/python3.8/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 682, in _handle_exception
raise translated_error from error
sqlalchemy.exc.DBAPIError: (sqlalchemy.dialects.postgresql.asyncpg.Error) <class 'asyncpg.exceptions.InFailedSQLTransactionError'>: current transaction is aborted, commands ignored until end of transaction block
[SQL: SELECT count(%s) AS count_1
FROM (SELECT "user".id AS id
FROM "user") AS anon_1]
[parameters: ('*',)]
(Background on this error at: https://sqlalche.me/e/14/dbapi) | open | 2023-07-08T15:40:36Z | 2023-07-08T15:40:36Z | https://github.com/amisadmin/fastapi-amis-admin/issues/110 | [] | yxlwfds | 0 |
JaidedAI/EasyOCR | deep-learning | 1,361 | Fine-Tune training for new and rare Chinese characters in Traditional Chinese | Hello
Can I ask a question?
How do I fine-tune training for new and rare Traditional Chinese characters?
For rare characters that are not in the built-in vocabulary, I currently follow the official tutorial, with the config.yaml parameters
FT: True, new_prediction: True
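A sketch of how those two flags might sit in the trainer's config.yaml — apart from `FT` and `new_prediction`, every key and value below is a placeholder assumption for illustration, not something taken from this report:

```yaml
# Hypothetical fine-tuning fragment; only FT and new_prediction come from
# the report above, the other keys/values are illustrative assumptions.
experiment_name: zh_tra_rare_chars       # assumed experiment name
saved_model: saved_models/zh_sim_g2.pth  # assumed path to the pre-trained model
FT: True              # fine-tune starting from the saved model
new_prediction: True  # rebuild the prediction layer for the new character set
```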
A small training/validation set is generated, at first with only one font but with varied background colors and tilts. Finally, the pre-trained model is used for fine-tuning, but the results are almost always worse than those of the original pre-trained model.
In other words, the original official Traditional Chinese model worked well, but after these training steps, text that could previously be recognized correctly now comes out with errors | open | 2025-01-08T10:36:00Z | 2025-01-08T10:41:00Z | https://github.com/JaidedAI/EasyOCR/issues/1361 | [] | 6692a | 0
amfoss/cms | graphql | 23 | Broken link for amfoss linkedIn account. | **Describe the bug**
LinkedIn link is broken.
**Screenshots**

| closed | 2019-03-12T08:22:56Z | 2019-03-13T12:24:53Z | https://github.com/amfoss/cms/issues/23 | [] | harshithpabbati | 0 |
jupyter-book/jupyter-book | jupyter | 1,782 | Make Pyppeteer configurable for pdfhtml builder | ### Context
Similar to Sphinx, Pyppeteer also has a bunch of configs, as described [here](https://miyakogi.github.io/pyppeteer/reference.html#launcher). However, JB does not allow these default configs to be overridden while converting HTML to PDF.
For example, pyppeteer attempts to download and install chromium into the default download folder when used for the first time (if not overridden via pyppeteer options). There is no way to skip this installation and point to another chromium installation sitting somewhere else. For my use case, I am unable to download chromium from an external download host due to my company firewall. I have to make the change below in `pdf.py` to make the builder work:
`browser = await launch(executablePath="path/to/my/chromium", args=["--no-sandbox"])`
There are some other options/arguments that might be useful for others as well.
### Proposal
I think the best way to allow pyppeteer configs would be to have a new optional section in `_config.yml` as below:
```
#######################################################################################
# pyppeteer-specific settings to be used for pdfhtml builder only
# See all options available at https://miyakogi.github.io/pyppeteer/reference.html#launcher
pyppeteer:
executablePath: path/to/my/chromium
```
This can then be parsed inside `config.py` and used specifically inside `if builder == "pdfhtml":` block of `builder_specific_actions` in `main.py`.
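A rough, dependency-free sketch of that parsing/merging step — the function name and default arguments below are illustrative assumptions, not Jupyter Book's actual internals:

```python
# Hypothetical sketch: merge a user-supplied `pyppeteer:` section from
# _config.yml into the keyword arguments passed to pyppeteer's launch().
DEFAULT_LAUNCH_KWARGS = {"args": ["--no-sandbox"]}

def build_launch_kwargs(user_pyppeteer_config=None):
    """Return launch() kwargs, letting any user-provided keys override defaults."""
    merged = dict(DEFAULT_LAUNCH_KWARGS)
    merged.update(user_pyppeteer_config or {})
    return merged

# The pdfhtml branch could then call (illustrative only):
#   browser = await launch(**build_launch_kwargs(config.get("pyppeteer")))
```

With `executablePath` supplied this way from `_config.yml`, the download-on-first-use step would be avoided, since pyppeteer launches the given binary instead.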
Open for thoughts; I can help contribute towards this if that's okay.
### Tasks and updates
_No response_ | open | 2022-07-17T17:51:42Z | 2022-09-14T18:49:19Z | https://github.com/jupyter-book/jupyter-book/issues/1782 | [
"enhancement"
] | SimplyOm | 2 |
tensorflow/tensor2tensor | machine-learning | 1,096 | AttributeError: 'RunConfig' object has no attribute 'data_parallelism' t2t=1.9, TPU TF version 1.9 | ### Description
Hi
I am getting this error while running translation workloads on TPU.
AttributeError: 'RunConfig' object has no attribute 'data_parallelism'
...
### Environment information
Tensor2Tensor =1.9
TPU tensorflow=1.9
```
OS: <your answer here>
$ pip freeze | grep tensor
tensor2tensor==1.9.0
tensorboard==1.9.0
tensorflow==1.9.0
$ python -V
Python 2.7.13
```
### For bugs: reproduction and error logs
```
# Steps to reproduce:
Use steps to train a transformer https://cloud.google.com/tpu/docs/tutorials/transformer
```
```
# Error logs:
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 3209, in while_loop
result = loop_context.BuildLoop(cond, body, loop_vars, shape_invariants)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 2941, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 2878, in _BuildLoop
body_result = body(*packed_vars_for_body)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/training_loop.py", line 120, in bo
dy_wrapper
outputs = body(*(inputs + dequeue_ops))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/training_loop.py", line 203, in bo
dy_wrapper
return [i + 1] + _convert_to_list(body(*args))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 1166, in t
rain_step
self._call_model_fn(features, labels))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 1337, in _
call_model_fn
estimator_spec = self._model_fn(features=features, **kwargs)
File "/home/anurupa_sarma/.local/lib/python2.7/site-packages/tensor2tensor/utils/t2t_model.py", line 1225, in wra
pping_model_fn
decode_hparams=decode_hparams)
File "/home/anurupa_sarma/.local/lib/python2.7/site-packages/tensor2tensor/utils/t2t_model.py", line 1260, in est
imator_model_fn
data_parallelism = config.data_parallelism
AttributeError: 'RunConfig' object has no attribute 'data_parallelism'
```
| open | 2018-09-25T21:51:40Z | 2018-09-25T21:51:40Z | https://github.com/tensorflow/tensor2tensor/issues/1096 | [] | nicks165 | 0 |
pyro-ppl/numpyro | numpy | 1,467 | Leak when running examples with JAX_CHECK_TRACER_LEAKS=1 | Hi
When I run ar2.py from your examples (or bnn.py; I did not try the others) with the environment variable JAX_CHECK_TRACER_LEAKS=1, they fail. (I had to use it to try to find an issue with a function I had written.)
The exception raised is:
Exception: Leaked level MainTrace(1,DynamicJaxprTrace). Leaked tracer(s): [Traced<ShapedArray(int32[])>with<DynamicJaxprTrace(level=1/0)>].
Here is the complete log:
Traceback (most recent call last):
File "/home/ffranco/Downloads/ar2.py", line 138, in <module>
main(args)
File "/home/ffranco/Downloads/ar2.py", line 117, in main
run_inference(model, args, rng_key, y)
File "/home/ffranco/Downloads/ar2.py", line 96, in run_inference
mcmc.run(rng_key, y=y)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/mcmc.py", line 593, in run
states_flat, last_state = partial_map_fn(map_args)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/mcmc.py", line 386, in _single_chain_mcmc
model_kwargs=kwargs,
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/hmc.py", line 707, in init
rng_key_init_model, model_args, model_kwargs, init_params
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/hmc.py", line 659, in _init_state
forward_mode_differentiation=self._forward_mode_differentiation,
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/util.py", line 606, in initialize_model
) = _get_model_transforms(substituted_model, model_args, model_kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/util.py", line 404, in _get_model_transforms
model_trace = trace(model).get_trace(*model_args, **model_kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/handlers.py", line 171, in get_trace
self(*args, **kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
File "/home/ffranco/Downloads/ar2.py", line 67, in ar2_scan
scan(transition, init, timesteps)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/contrib/control_flow/scan.py", line 438, in scan
msg = apply_stack(initial_msg)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py", line 53, in apply_stack
default_process_message(msg)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py", line 28, in default_process_message
msg["value"] = msg["fn"](*msg["args"], **msg["kwargs"])
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/contrib/control_flow/scan.py", line 306, in scan_wrapper
body_fn, wrapped_carry, xs, length=length, reverse=reverse
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/traceback_util.py", line 162, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/lax/control_flow.py", line 1345, in scan
init_flat, carry_avals, carry_avals_out, init_tree, *rest = _create_jaxpr(init)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/lax/control_flow.py", line 1332, in _create_jaxpr
f, in_tree, carry_avals + x_avals, "scan")
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/util.py", line 185, in wrapper
return f(*args, **kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/lax/control_flow.py", line 78, in _initial_style_jaxpr
fun, in_tree, in_avals, primitive_name)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/util.py", line 185, in wrapper
return f(*args, **kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/lax/control_flow.py", line 71, in _initial_style_open_jaxpr
jaxpr, _, consts = pe.trace_to_jaxpr_dynamic(wrapped_fun, in_avals, debug)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/interpreters/partial_eval.py", line 1511, in trace_to_jaxpr_dynamic
del main, fun
File "/home/ffranco/anaconda3/lib/python3.7/contextlib.py", line 119, in __exit__
next(self.gen)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/core.py", line 810, in new_main
raise Exception(f'Leaked level {t()}. Leaked tracer(s): {leaked_tracers}.')
jax._src.traceback_util.UnfilteredStackTrace: Exception: Leaked level MainTrace(1,DynamicJaxprTrace). Leaked tracer(s): [Traced<ShapedArray(int32[])>with<DynamicJaxprTrace(level=1/0)>].
The stack trace below excludes JAX-internal frames.
The preceding is the original exception that occurred, unmodified.
--------------------
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ffranco/Downloads/ar2.py", line 138, in <module>
main(args)
File "/home/ffranco/Downloads/ar2.py", line 117, in main
run_inference(model, args, rng_key, y)
File "/home/ffranco/Downloads/ar2.py", line 96, in run_inference
mcmc.run(rng_key, y=y)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/mcmc.py", line 593, in run
states_flat, last_state = partial_map_fn(map_args)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/mcmc.py", line 386, in _single_chain_mcmc
model_kwargs=kwargs,
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/hmc.py", line 707, in init
rng_key_init_model, model_args, model_kwargs, init_params
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/hmc.py", line 659, in _init_state
forward_mode_differentiation=self._forward_mode_differentiation,
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/util.py", line 606, in initialize_model
) = _get_model_transforms(substituted_model, model_args, model_kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/util.py", line 404, in _get_model_transforms
model_trace = trace(model).get_trace(*model_args, **model_kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/handlers.py", line 171, in get_trace
self(*args, **kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
File "/home/ffranco/Downloads/ar2.py", line 67, in ar2_scan
scan(transition, init, timesteps)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/contrib/control_flow/scan.py", line 438, in scan
msg = apply_stack(initial_msg)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py", line 53, in apply_stack
default_process_message(msg)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py", line 28, in default_process_message
msg["value"] = msg["fn"](*msg["args"], **msg["kwargs"])
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/contrib/control_flow/scan.py", line 306, in scan_wrapper
body_fn, wrapped_carry, xs, length=length, reverse=reverse
File "/home/ffranco/anaconda3/lib/python3.7/contextlib.py", line 119, in __exit__
next(self.gen)
Exception: Leaked level MainTrace(1,DynamicJaxprTrace). Leaked tracer(s): [Traced<ShapedArray(int32[])>with<DynamicJaxprTrace(level=1/0)>]. | closed | 2022-08-12T08:54:00Z | 2022-08-23T13:34:29Z | https://github.com/pyro-ppl/numpyro/issues/1467 | [
"bug"
] | hyperfra | 4 |
yunjey/pytorch-tutorial | deep-learning | 37 | [image captioning] training /test/validation data? | Training/test/validation data? I saw in image captioning and the other examples that there are no training/test/validation splits. Should the examples have these to promote best practices? If I wrote code to add it, would it be merged in? | closed | 2017-05-26T17:28:12Z | 2017-05-28T11:21:12Z | https://github.com/yunjey/pytorch-tutorial/issues/37 | [] | jtoy | 1
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1594 | pix2pix training doesn't go past "create web directory ./checkpoints\cartoonify_pix2pix\web..." | I am training a pix2pix model on my own dataset. I trained for 170 epochs at first and then for another 20 epochs. Now when I try to continue training, it loads the model and does everything normally but stops execution without giving any error. This issue occurred before, but doing random things like restarting the PC or Jupyter and changing the values of n_epochs and n_epochs_decay used to resolve it. Now it's not training and always stops after the step mentioned above.
Below are my training command and its output:
```
!python train.py --dataroot ../processed2 --continue_train --n_epochs 70 --n_epochs_decay 70 \
--name cartoonify_pix2pix --model pix2pix --dataset_mode aligned \
--batch_size 8 --load_size 512 --crop_size 256 --preprocess scale_width_and_crop \
--display_id 0 --netG resnet_9blocks --epoch_count 201
```
```
----------------- Options ---------------
batch_size: 8 [default: 1]
beta1: 0.5
checkpoints_dir: ./checkpoints
continue_train: True [default: False]
crop_size: 256
dataroot: ../processed2 [default: None]
dataset_mode: aligned
direction: AtoB
display_env: main
display_freq: 400
display_id: 0 [default: 1]
display_ncols: 4
display_port: 8097
display_server: http://localhost/
display_winsize: 256
epoch: latest
epoch_count: 201 [default: 1]
gan_mode: vanilla
gpu_ids: 0
init_gain: 0.02
init_type: normal
input_nc: 3
isTrain: True [default: None]
lambda_L1: 100.0
load_iter: 0 [default: 0]
load_size: 512 [default: 286]
lr: 0.0002
lr_decay_iters: 50
lr_policy: linear
max_dataset_size: inf
model: pix2pix [default: cycle_gan]
n_epochs: 50 [default: 100]
n_epochs_decay: 50 [default: 100]
n_layers_D: 3
name: cartoonify_pix2pix [default: experiment_name]
ndf: 64
netD: basic
netG: resnet_9blocks [default: unet_256]
ngf: 64
no_dropout: False
no_flip: False
no_html: False
norm: batch
num_threads: 4
output_nc: 3
phase: train
pool_size: 0
preprocess: scale_width_and_crop [default: resize_and_crop]
print_freq: 100
save_by_iter: False
save_epoch_freq: 5
save_latest_freq: 5000
serial_batches: False
suffix:
update_html_freq: 1000
use_wandb: False
verbose: False
wandb_project_name: CycleGAN-and-pix2pix
----------------- End -------------------
dataset [AlignedDataset] was created
The number of training images = 904
initialize network with normal
initialize network with normal
model [Pix2PixModel] was created
loading the model from ./checkpoints\cartoonify_pix2pix\latest_net_G.pth
loading the model from ./checkpoints\cartoonify_pix2pix\latest_net_D.pth
---------- Networks initialized -------------
[Network G] Total number of parameters : 11.383 M
[Network D] Total number of parameters : 2.769 M
-----------------------------------------------
create web directory ./checkpoints\cartoonify_pix2pix\web...
```
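One detail visible in the dump itself: the echoed options show `n_epochs: 50` and `n_epochs_decay: 50` (not the 70s passed on the command line), and with `epoch_count: 201` the training loop in the repo's train.py, `for epoch in range(opt.epoch_count, opt.n_epochs + opt.n_epochs_decay + 1)`, is empty, so the run would end right after creating the web directory. A minimal check of that arithmetic (the loop bound is assumed to be the one in train.py):

```python
epoch_count = 201                  # value echoed in the options above
n_epochs, n_epochs_decay = 50, 50  # values echoed in the options above

# train.py-style epoch range: continue_train resumes at epoch_count
epochs = list(range(epoch_count, n_epochs + n_epochs_decay + 1))
assert epochs == []                # empty loop: training exits right after setup
```

If that is the cause, resuming with a smaller `--epoch_count`, or raising `--n_epochs`/`--n_epochs_decay` so their sum reaches past 201, would give the loop a non-empty range.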
@taesungp any solution please? | closed | 2023-09-01T06:16:56Z | 2024-06-25T02:19:59Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1594 | [] | TehreemFarooqi | 2 |
vimalloc/flask-jwt-extended | flask | 418 | AssertionError: View function mapping is overwriting an existing endpoint function: wrapper | Hello, I'm coding a REST API for a website and I ran into this problem when I use @jwt_required. I couldn't find any solutions on the internet, so I thought asking here would be the only option.
My Code:
```python
@app.route('/login', methods=['POST'])
def login():
email = request.json.get('email', None)
password = request.json.get('password', None)
try:
user = mydb["User"].find_one({"Email": email})
if user is None:
            raise Exception("User doesn't exist")
valid = check_password_hash(user['password'], password)
if not valid:
raise Exception('Password not correct')
access_token = create_access_token(
identity=user['id'], expires_delta=None)
refresh_token = create_refresh_token(
identity=user['id'], expires_delta=None)
ret = {
'access_token': access_token,
'refresh_token': refresh_token
}
return jsonify(ret), 200
except Exception as e:
return jsonify({'Error': repr(e)})
@app.route('/logout', methods=['POST'])
@jwt_required
def logout():
jti = get_jwt()['jti']
blacklist.add(jti)
return jsonify({"message": "Successfully logged out"}), 200
@app.route('/protected', methods=['POST'])
@jwt_required
def protected():
username = get_jwt_identity()
return jsonify(logged_in_as=username), 200
@app.route('/refresh_token', methods=['POST'])
@jwt_required
def refresh():
current_user = get_jwt_identity()
ret = {
'access_token': create_access_token(identity=current_user, expires_delta=None)
}
return jsonify(ret), 200
if __name__ == '__main__':
app.run(debug=True, port=80)
```
ERROR:
> ```
> Traceback (most recent call last):
> self.add_url_rule(rule, endpoint, f, **options)
> File "C:\Users\VRX\AppData\Local\Programs\Python\Python38\lib\site-packages\flask\app.py", line 98, in wrapper_func
> return f(self, *args, **kwargs)
> File "C:\Users\VRX\AppData\Local\Programs\Python\Python38\lib\site-packages\flask\app.py", line 1282, in add_url_rule
> raise AssertionError(
> AssertionError: View function mapping is overwriting an existing endpoint function: wrapper
> ```
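For context, the endpoint name in the message is literally `wrapper`, which is what you get when a decorator factory is applied without being called: in flask-jwt-extended 4.x, `jwt_required` must be written as `@jwt_required()` (with parentheses), otherwise every view becomes the factory's inner function named `wrapper`, and Flask's default endpoint (taken from `view_func.__name__`) then collides across routes. A self-contained sketch of that mechanism, using a hypothetical factory rather than the library's real code:

```python
import functools

def jwt_required_factory(optional=False):   # shaped like flask-jwt-extended 4.x
    def wrapper(fn):
        @functools.wraps(fn)
        def decorated(*args, **kwargs):
            return fn(*args, **kwargs)
        return decorated
    return wrapper

@jwt_required_factory        # missing (): the view IS the inner function "wrapper"
def logout():
    return "logout"

@jwt_required_factory        # second view also named "wrapper" -> endpoint collision
def protected():
    return "protected"

assert logout.__name__ == protected.__name__ == "wrapper"

@jwt_required_factory()      # correct usage: call the factory first
def refresh():
    return "refresh"

assert refresh.__name__ == "refresh"
```

Flask registers one endpoint per `view_func.__name__` by default, so two views both named `wrapper` trigger exactly the `AssertionError` above; writing `@jwt_required()` on each route avoids it.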
Thanks in advance. | closed | 2021-04-24T17:20:33Z | 2021-04-25T03:40:07Z | https://github.com/vimalloc/flask-jwt-extended/issues/418 | [] | GoekhanDev | 2 |
exaloop/codon | numpy | 25 | PyObjects in `if`-statements fail | ```python
from python import foo
if foo() > 1:
pass
```
gives
```
Trunc only operates on integer
%262 = trunc i8* %261 to i1, !dbg !24241
Assert failed: module broken
Expression: !broken
Source: /Users/arshajii/Documents/workspace/codon/codon/sir/llvm/optimize.cpp:232
zsh: abort build/codon run -release build/scratch.codon
``` | closed | 2022-04-21T15:03:58Z | 2022-07-22T16:50:44Z | https://github.com/exaloop/codon/issues/25 | [] | arshajii | 1 |
ets-labs/python-dependency-injector | flask | 424 | Erroneous initialization of Resources | After upgrading from `4.10.1` to `4.29.2`, I noticed that some Resources which remained uninitialized in the prior version unexpectedly started to initialize during the `init_resources()` call. The **Selector** provider initializes all nested resources disregarding the selector key:
```python
from dependency_injector import containers, providers
class PostgresAdapter:
def __init__(self, host, port):
print('Postgres initialized:', host, port)
class SQLiteAdapter:
def __init__(self, db_file):
print('Sqlite initialized:', db_file)
def setup_db_adapter(klass, **kwargs):
yield klass(**kwargs)
print('close')
class Container(containers.DeclarativeContainer):
config = providers.Configuration()
database = providers.Selector(
config.db_type,
postgres=providers.Resource(
setup_db_adapter,
klass=PostgresAdapter,
host=config.db_host,
port=config.db_port,
),
sqlite=providers.Resource(
setup_db_adapter,
klass=SQLiteAdapter,
db_file=config.db_file,
),
)
if __name__ == '__main__':
container = Container(
config={
'db_type': 'postgres',
'db_host': 'localhost',
'db_port': 5432,
}
)
container.init_resources()
container.shutdown_resources()
```
Output:
```
Postgres initialized: localhost 5432
Sqlite initialized: None
close
close
```
Expected output:
```
Postgres initialized: localhost 5432
close
```
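The over-eager behavior can be modeled in a few lines of plain Python; this is a toy illustration of "walk every nested provider" versus "honor the selector key", not dependency_injector's actual code:

```python
initialized = []

# stand-ins for the two Resource providers nested under the Selector
providers = {
    "postgres": lambda: initialized.append("postgres"),
    "sqlite": lambda: initialized.append("sqlite"),
}

def init_all(resources):            # what 4.29.2 appears to do
    for factory in resources.values():
        factory()

def init_selected(resources, key):  # the expected, selector-aware behavior
    resources[key]()

init_all(providers)
assert initialized == ["postgres", "sqlite"]   # both sides initialized

initialized.clear()
init_selected(providers, "postgres")
assert initialized == ["postgres"]             # only the selected one
```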
Both resources are initialized even if `db_type` is unspecified, which is nonsense:
```python
container = Container(config={})
```
Output:
```
Postgres initialized: None None
Sqlite initialized: None
close
close
``` | open | 2021-03-11T11:34:59Z | 2022-04-05T09:41:48Z | https://github.com/ets-labs/python-dependency-injector/issues/424 | [] | atten | 9 |
jonaswinkler/paperless-ng | django | 1,594 | [BUG] Installed from script and Gotenburg and Tika not working? | Hello, thanks for this great work!
I am new to paperless-ng and do not normally use Docker, so I may be doing something wrong.
My paperless works well, but when I try to import a .docx file, for example, it fails with `Error while converting document to PDF: 404 Client Error: Not Found for url: http://gotenberg:3000/convert/office`
I installed using the script, and specified to enable Tika.
Gotenberg and Tika are running according to `docker ps`
```
paperless@docker ~/paperless-ng$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8a20a33aefa6 jonaswinkler/paperless-ng:latest "/sbin/docker-entryp…" 2 minutes ago Up 2 minutes (healthy) 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp paperless-webserver-1
b4d6babc41a2 postgres:13 "docker-entrypoint.s…" 24 minutes ago Up 23 minutes 5432/tcp paperless-db-1
ed4b52bfb5a4 redis:6.0 "docker-entrypoint.s…" 24 minutes ago Up 23 minutes 6379/tcp paperless-broker-1
d8bf67ec76c5 thecodingmachine/gotenberg "/usr/bin/tini -- go…" 24 minutes ago Up 23 minutes 3000/tcp paperless-gotenberg-1
85843f762418 apache/tika "/bin/sh -c 'exec ja…" 24 minutes ago Up 23 minutes 9998/tcp paperless-tika-1
```
```
paperless@docker ~/paperless-ng$ docker-compose up
[+] Running 5/5
⠿ Container paperless-tika-1 Running 0.0s
⠿ Container paperless-gotenberg-1 Running 0.0s
⠿ Container paperless-db-1 Running 0.0s
⠿ Container paperless-broker-1 Running 0.0s
⠿ Container paperless-webserver-1 Created 9.2s
Attaching to paperless-broker-1, paperless-db-1, paperless-gotenberg-1, paperless-tika-1, paperless-webserver-1
paperless-webserver-1 | Paperless-ng docker container starting...
paperless-webserver-1 | Creating directory /tmp/paperless
paperless-webserver-1 | Adjusting permissions of paperless files. This may take a while.
paperless-webserver-1 | Waiting for PostgreSQL to start...
paperless-webserver-1 | Apply database migrations...
paperless-webserver-1 | Operations to perform:
paperless-webserver-1 | Apply all migrations: admin, auth, authtoken, contenttypes, django_q, documents, paperless_mail, sessions
paperless-webserver-1 | Running migrations:
paperless-webserver-1 | No migrations to apply.
paperless-webserver-1 | Executing /usr/local/bin/supervisord -c /etc/supervisord.conf
paperless-webserver-1 | 2022-02-01 11:22:15,874 INFO Set uid to user 0 succeeded
paperless-webserver-1 | 2022-02-01 11:22:15,875 INFO supervisord started with pid 1
paperless-webserver-1 | 2022-02-01 11:22:16,877 INFO spawned: 'consumer' with pid 36
paperless-webserver-1 | 2022-02-01 11:22:16,879 INFO spawned: 'gunicorn' with pid 37
paperless-webserver-1 | 2022-02-01 11:22:16,881 INFO spawned: 'scheduler' with pid 38
paperless-webserver-1 | [2022-02-01 12:22:17 +0100] [37] [INFO] Starting gunicorn 20.1.0
paperless-webserver-1 | [2022-02-01 12:22:17 +0100] [37] [INFO] Listening at: http://0.0.0.0:8000 (37)
paperless-webserver-1 | [2022-02-01 12:22:17 +0100] [37] [INFO] Using worker: paperless.workers.ConfigurableWorker
paperless-webserver-1 | [2022-02-01 12:22:17 +0100] [37] [INFO] Server is ready. Spawning workers
paperless-webserver-1 | 12:22:17 [Q] INFO Q Cluster romeo-idaho-nine-diet starting.
paperless-webserver-1 | [2022-02-01 12:22:17,742] [INFO] [paperless.management.consumer] Using inotify to watch directory for changes: /usr/src/paperless/src/../consume
paperless-webserver-1 | 12:22:17 [Q] INFO Process-1:1 ready for work at 61
paperless-webserver-1 | 12:22:17 [Q] INFO Process-1:2 ready for work at 62
paperless-webserver-1 | 12:22:17 [Q] INFO Process-1:3 monitoring at 63
paperless-webserver-1 | 12:22:17 [Q] INFO Process-1 guarding cluster romeo-idaho-nine-diet
paperless-webserver-1 | 12:22:17 [Q] INFO Process-1:4 pushing tasks at 64
paperless-webserver-1 | 12:22:17 [Q] INFO Q Cluster romeo-idaho-nine-diet running.
paperless-webserver-1 | 2022-02-01 11:22:18,836 INFO success: consumer entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
paperless-webserver-1 | 2022-02-01 11:22:18,836 INFO success: gunicorn entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
paperless-webserver-1 | 2022-02-01 11:22:18,836 INFO success: scheduler entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
paperless-webserver-1 | 12:22:47 [Q] INFO Enqueued 1
paperless-webserver-1 | 12:22:47 [Q] INFO Process-1 created a task from schedule [Check all e-mail accounts]
paperless-webserver-1 | 12:22:47 [Q] INFO Process-1:1 processing [lithium-edward-diet-utah]
paperless-webserver-1 | /usr/local/lib/python3.9/site-packages/imap_tools/mailbox.py:214: UserWarning: seen method are deprecated and will be removed soon, use flag method instead
paperless-webserver-1 | warnings.warn('seen method are deprecated and will be removed soon, use flag method instead')
paperless-webserver-1 | 12:22:50 [Q] INFO Process-1:1 stopped doing work
paperless-webserver-1 | 12:22:50 [Q] INFO Processed [lithium-edward-diet-utah]
paperless-webserver-1 | 12:22:50 [Q] INFO recycled worker Process-1:1
paperless-webserver-1 | 12:22:50 [Q] INFO Process-1:5 ready for work at 77
paperless-broker-1 | 1:M 01 Feb 2022 11:23:06.030 * 100 changes in 300 seconds. Saving...
paperless-broker-1 | 1:M 01 Feb 2022 11:23:06.031 * Background saving started by pid 20
paperless-broker-1 | 20:C 01 Feb 2022 11:23:06.044 * DB saved on disk
paperless-broker-1 | 20:C 01 Feb 2022 11:23:06.044 * RDB: 0 MB of memory used by copy-on-write
paperless-broker-1 | 1:M 01 Feb 2022 11:23:06.132 * Background saving terminated with success
paperless-webserver-1 | [2022-02-01 12:24:01,094] [WARNING] [django.security.SuspiciousSession] Session data corrupted
paperless-webserver-1 | [2022-02-01 12:24:01,184] [WARNING] [django.security.SuspiciousSession] Session data corrupted
paperless-webserver-1 | [2022-02-01 12:24:04,271] [WARNING] [django.security.SuspiciousSession] Session data corrupted
paperless-webserver-1 | 12:24:14 [Q] INFO Enqueued 1
paperless-webserver-1 | 12:24:14 [Q] INFO Process-1:2 processing [Dear Facilitators.docx]
paperless-webserver-1 | [2022-02-01 12:24:15,000] [INFO] [paperless.consumer] Consuming Dear Facilitators.docx
paperless-webserver-1 | [2022-02-01 12:24:15,008] [INFO] [paperless.parsing.tika] Sending /tmp/paperless/paperless-upload-zf1ilcyo to Tika server
paperless-tika-1 | INFO [qtp2128195220-23] 11:24:15,195 org.apache.tika.server.resource.RecursiveMetadataResource rmeta/text (autodetecting type)
paperless-webserver-1 | [2022-02-01 12:24:15,631] [INFO] [paperless.parsing.tika] Converting /tmp/paperless/paperless-upload-zf1ilcyo to PDF as /tmp/paperless/paperless-agiq8vzt/convert.pdf
paperless-gotenberg-1 | {"level":"error","ts":1643714655.6423903,"logger":"api","msg":"code=404, message=Not Found","trace":"8662f7e2-1acd-4f7b-bfe0-fd235b6c1f59","remote_ip":"172.23.0.6","host":"gotenberg:3000","uri":"/convert/office","method":"POST","path":"/convert/office","referer":"","user_agent":"python-requests/2.26.0","status":404,"latency":2408520,"latency_human":"2.40852ms","bytes_in":31351,"bytes_out":9}
paperless-webserver-1 | [2022-02-01 12:24:15,647] [ERROR] [paperless.consumer] Error while consuming document Dear Facilitators.docx: Error while converting document to PDF: 404 Client Error: Not Found for url: http://gotenberg:3000/convert/office
paperless-webserver-1 | Traceback (most recent call last):
paperless-webserver-1 | File "/usr/src/paperless/src/paperless_tika/parsers.py", line 79, in convert_to_pdf
paperless-webserver-1 | response.raise_for_status() # ensure we notice bad responses
paperless-webserver-1 | File "/usr/local/lib/python3.9/site-packages/requests/models.py", line 953, in raise_for_status
paperless-webserver-1 | raise HTTPError(http_error_msg, response=self)
paperless-webserver-1 | requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http://gotenberg:3000/convert/office
paperless-webserver-1 |
paperless-webserver-1 | During handling of the above exception, another exception occurred:
paperless-webserver-1 |
paperless-webserver-1 | Traceback (most recent call last):
paperless-webserver-1 | File "/usr/src/paperless/src/documents/consumer.py", line 248, in try_consume_file
paperless-webserver-1 | document_parser.parse(self.path, mime_type, self.filename)
paperless-webserver-1 | File "/usr/src/paperless/src/paperless_tika/parsers.py", line 65, in parse
paperless-webserver-1 | self.archive_path = self.convert_to_pdf(document_path, file_name)
paperless-webserver-1 | File "/usr/src/paperless/src/paperless_tika/parsers.py", line 81, in convert_to_pdf
paperless-webserver-1 | raise ParseError(
paperless-webserver-1 | documents.parsers.ParseError: Error while converting document to PDF: 404 Client Error: Not Found for url: http://gotenberg:3000/convert/office
paperless-webserver-1 | 12:24:15 [Q] INFO Process-1:2 stopped doing work
paperless-webserver-1 | 12:24:15 [Q] INFO recycled worker Process-1:2
paperless-webserver-1 | 12:24:15 [Q] INFO Process-1:6 ready for work at 123
paperless-webserver-1 | 12:24:15 [Q] ERROR Failed [Dear Facilitators.docx] - Dear Facilitators.docx: Error while consuming document Dear Facilitators.docx: Error while converting document to PDF: 404 Client Error: Not Found for url: http://gotenberg:3000/convert/office : Traceback (most recent call last):
paperless-webserver-1 | File "/usr/src/paperless/src/paperless_tika/parsers.py", line 79, in convert_to_pdf
paperless-webserver-1 | response.raise_for_status() # ensure we notice bad responses
paperless-webserver-1 | File "/usr/local/lib/python3.9/site-packages/requests/models.py", line 953, in raise_for_status
paperless-webserver-1 | raise HTTPError(http_error_msg, response=self)
paperless-webserver-1 | requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http://gotenberg:3000/convert/office
paperless-webserver-1 |
paperless-webserver-1 | During handling of the above exception, another exception occurred:
paperless-webserver-1 |
paperless-webserver-1 | Traceback (most recent call last):
paperless-webserver-1 | File "/usr/local/lib/python3.9/site-packages/asgiref/sync.py", line 288, in main_wrap
paperless-webserver-1 | raise exc_info[1]
paperless-webserver-1 | File "/usr/src/paperless/src/documents/consumer.py", line 248, in try_consume_file
paperless-webserver-1 | document_parser.parse(self.path, mime_type, self.filename)
paperless-webserver-1 | File "/usr/src/paperless/src/paperless_tika/parsers.py", line 65, in parse
paperless-webserver-1 | self.archive_path = self.convert_to_pdf(document_path, file_name)
paperless-webserver-1 | File "/usr/src/paperless/src/paperless_tika/parsers.py", line 81, in convert_to_pdf
paperless-webserver-1 | raise ParseError(
paperless-webserver-1 | documents.parsers.ParseError: Error while converting document to PDF: 404 Client Error: Not Found for url: http://gotenberg:3000/convert/office
paperless-webserver-1 |
paperless-webserver-1 | During handling of the above exception, another exception occurred:
paperless-webserver-1 |
paperless-webserver-1 | Traceback (most recent call last):
paperless-webserver-1 | File "/usr/local/lib/python3.9/site-packages/django_q/cluster.py", line 432, in worker
paperless-webserver-1 | res = f(*task["args"], **task["kwargs"])
paperless-webserver-1 | File "/usr/src/paperless/src/documents/tasks.py", line 74, in consume_file
paperless-webserver-1 | document = Consumer().try_consume_file(
paperless-webserver-1 | File "/usr/src/paperless/src/documents/consumer.py", line 266, in try_consume_file
paperless-webserver-1 | self._fail(
paperless-webserver-1 | File "/usr/src/paperless/src/documents/consumer.py", line 70, in _fail
paperless-webserver-1 | raise ConsumerError(f"{self.filename}: {log_message or message}")
paperless-webserver-1 | documents.consumer.ConsumerError: Dear Facilitators.docx: Error while consuming document Dear Facilitators.docx: Error while converting document to PDF: 404 Client Error: Not Found for url: http://gotenberg:3000/convert/office
paperless-webserver-1 |
paperless-webserver-1 | [2022-02-01 12:24:17 +0100] [37] [CRITICAL] WORKER TIMEOUT (pid:40)
paperless-webserver-1 | [2022-02-01 12:24:17 +0100] [37] [WARNING] Worker with pid 40 was terminated due to signal 6
```
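For reference, a 404 on `/convert/office` is what the Gotenberg 7.x API returns, since that generation moved office conversion to `/forms/libreoffice/convert`, while paperless-ng still calls the 6.x route; because the compose file below pulls the image without a tag (so `latest`), a version mismatch is the first thing to rule out. A small sketch of the mismatch (routes per the two Gotenberg API generations; the `6.4.4` tag mentioned in the comments is an assumption):

```python
# Office-conversion route by Gotenberg API generation:
OFFICE_ROUTE = {6: "/convert/office", 7: "/forms/libreoffice/convert"}

endpoint = "http://gotenberg:3000"       # PAPERLESS_TIKA_GOTENBERG_ENDPOINT
requested = endpoint + OFFICE_ROUTE[6]   # what paperless_tika/parsers.py calls
assert requested == "http://gotenberg:3000/convert/office"

# A 404 from that URL therefore suggests the container is serving the 7.x API,
# i.e. the untagged image resolved to a 7.x build; pinning a 6.x tag such as
# thecodingmachine/gotenberg:6.4.4 in docker-compose.yml restores the old route.
assert OFFICE_ROUTE[7] != OFFICE_ROUTE[6]
```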
```
paperless@docker ~/paperless-ng$ cat docker-compose.yml
# docker-compose file for running paperless from the Docker Hub.
# This file contains everything paperless needs to run.
# Paperless supports amd64, arm and arm64 hardware.
#
# All compose files of paperless configure paperless in the following way:
#
# - Paperless is (re)started on system boot, if it was running before shutdown.
# - Docker volumes for storing data are managed by Docker.
# - Folders for importing and exporting files are created in the same directory
# as this file and mounted to the correct folders inside the container.
# - Paperless listens on port 8000.
#
# In addition to that, this docker-compose file adds the following optional
# configurations:
#
# - Instead of SQLite (default), PostgreSQL is used as the database server.
# - Apache Tika and Gotenberg servers are started with paperless and paperless
# is configured to use these services. These provide support for consuming
# Office documents (Word, Excel, Power Point and their LibreOffice counter-
# parts.
#
# To install and update paperless with this file, do the following:
#
# - Copy this file as 'docker-compose.yml' and the files 'docker-compose.env'
# and '.env' into a folder.
# - Run 'docker-compose pull'.
# - Run 'docker-compose run --rm webserver createsuperuser' to create a user.
# - Run 'docker-compose up -d'.
#
# For more extensive installation and update instructions, refer to the
# documentation.
version: "3.4"
services:
broker:
image: redis:6.0
restart: unless-stopped
db:
image: postgres:13
restart: unless-stopped
volumes:
- pgdata:/var/lib/postgresql/data
environment:
POSTGRES_DB: paperless
POSTGRES_USER: paperless
POSTGRES_PASSWORD: paperless
webserver:
image: jonaswinkler/paperless-ng:latest
restart: unless-stopped
depends_on:
- db
- broker
- gotenberg
- tika
ports:
- 8000:8000
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8000"]
interval: 30s
timeout: 10s
retries: 5
volumes:
- data:/usr/src/paperless/data
- media:/usr/src/paperless/media
- ./export:/usr/src/paperless/export
- /home/paperless/paperless-ng/consume:/usr/src/paperless/consume
env_file: docker-compose.env
environment:
PAPERLESS_REDIS: redis://broker:6379
PAPERLESS_DBHOST: db
PAPERLESS_TIKA_ENABLED: 1
PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
PAPERLESS_TIKA_ENDPOINT: http://tika:9998
gotenberg:
image: thecodingmachine/gotenberg
restart: unless-stopped
environment:
DISABLE_GOOGLE_CHROME: 1
tika:
image: apache/tika
restart: unless-stopped
volumes:
data:
media:
pgdata:
``` | open | 2022-02-01T11:56:09Z | 2022-04-19T11:13:49Z | https://github.com/jonaswinkler/paperless-ng/issues/1594 | [] | 2600box | 10 |
yzhao062/pyod | data-science | 274 | AutoEncoder.fit() times ever increasing | While working with the AutoEncoder, we noticed that times required to fit a model are increasing a bit every time AutoEncoder.fit() is invoked.
As an example, if the following is executed
```python
import time
import numpy as np
import pandas as pd
from pyod.models.auto_encoder import AutoEncoder

F1 = list(np.random.normal(0, 0.2, 250))
F1.extend(list(np.random.normal(1, 0.2, 50)))
F2 = list(np.random.normal(0, 0.2, 250))
F2.extend(list(np.random.normal(1, 0.2, 50)))
df = pd.DataFrame(list(zip(F1, F2)), columns = ["F1", "F2"])
tms = []
for i in range(10):
start = time.time()
lc = AutoEncoder(epochs=2, hidden_neurons=[len(df.columns)-1], contamination=0.01, verbose = 0)
lc.fit(X = df)
t = time.time() - start
print("{} -- {:8.4f}s [address of AE: {}]".format(i, t, hex(id(lc))))
tms.append(t)
```
the output would be:
```
0 -- 2.2708s [address of AE: 0x7f4548a1c7b8]
1 -- 1.4591s [address of AE: 0x7f4543b18e10]
2 -- 1.3210s [address of AE: 0x7f4530102cf8]
3 -- 1.5623s [address of AE: 0x7f44c7e135c0]
4 -- 1.5779s [address of AE: 0x7f44c78702e8]
5 -- 1.9374s [address of AE: 0x7f44c6d4fda0]
6 -- 1.4910s [address of AE: 0x7f44c6709978]
7 -- 1.8271s [address of AE: 0x7f44c5ea86d8]
8 -- 2.0997s [address of AE: 0x7f44c51def28]
9 -- 2.2106s [address of AE: 0x7f44c4901e48]
```
I was not expecting this, since a new AutoEncoder is instantiated at every round in the for loop.
Is this a known issue or can this be explained somehow?
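The shape of the numbers (growth across brand-new instances, a plateau when nothing new is added) is consistent with state accumulating in a process-global Keras/TF graph rather than in the AutoEncoder objects themselves; calling `tf.keras.backend.clear_session()` between fits is the usual remedy. A toy, library-free sketch of that mechanism (an assumption about the cause, not pyod's internals):

```python
GLOBAL_GRAPH = []                          # stand-in for the process-wide TF graph

class ToyAutoEncoder:
    def fit(self):
        GLOBAL_GRAPH.extend(["op"] * 10)   # each new model adds ops to the same graph
        return len(GLOBAL_GRAPH)           # stand-in for per-fit cost

costs = [ToyAutoEncoder().fit() for _ in range(3)]
assert costs == [10, 20, 30]               # cost grows even though each instance is new

def clear_session():                       # analogue of tf.keras.backend.clear_session()
    GLOBAL_GRAPH.clear()

clear_session()
assert ToyAutoEncoder().fit() == 10        # back to baseline after clearing
```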
Thanks. | open | 2021-01-15T13:23:39Z | 2021-01-27T09:11:18Z | https://github.com/yzhao062/pyod/issues/274 | [] | fgabbaninililly | 1 |
microsoft/nni | tensorflow | 5,483 | TypeError: __new__() missing 1 required positional argument: 'task' | **Describe the issue**: Hello, developers. I'm a newbie just getting started learning nas. When I run the tutorial file in the notebook, I get an error, I hope you can help me to solve this problem:
File "search_2.py", line 59, in <module>
max_epochs=5)
File "D:\anaconda3\envs\venv_copy\lib\site-packages\nni\nas\evaluator\pytorch\lightning.py", line 372, in __init__
weight_decay=weight_decay, optimizer=optimizer, export_onnx=export_onnx)
File "D:\anaconda3\envs\venv_copy\lib\site-packages\nni\common\serializer.py", line 473, in new_init
**{kw: _argument_processor(arg) for kw, arg in kwargs.items()}
File "D:\anaconda3\envs\venv_copy\lib\site-packages\nni\nas\evaluator\pytorch\lightning.py", line 310, in __init__
export_onnx=export_onnx)
File "D:\anaconda3\envs\venv_copy\lib\site-packages\nni\nas\evaluator\pytorch\lightning.py", line 224, in __init__
self.metrics = nn.ModuleDict({name: cls() for name, cls in metrics.items()})
File "D:\anaconda3\envs\venv_copy\lib\site-packages\nni\nas\evaluator\pytorch\lightning.py", line 224, in <dictcomp>
self.metrics = nn.ModuleDict({name: cls() for name, cls in metrics.items()})
TypeError: __new__() missing 1 required positional argument: 'task'
Here is the demo code of Retiarii_example_multi-trial_NAS. The error is reported when calling pl.Classification().
```python
# imports as in the NNI 2.x Retiarii examples (paths may differ across NNI versions)
import torch.nn.functional as F
import nni.retiarii.nn.pytorch as nn
import nni.retiarii.strategy as strategy
import nni.retiarii.evaluator.pytorch.lightning as pl
from nni.retiarii import model_wrapper, serialize
from nni.retiarii.experiment.pytorch import RetiariiExperiment, RetiariiExeConfig
from torchvision import transforms
from torchvision.datasets import CIFAR10

@model_wrapper
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.LayerChoice([nn.Conv2d(3, 6, 3, padding=1), nn.Conv2d(3, 6, 5, padding=2)])# input[3,32,32] output[6,32,32]
self.pool = nn.MaxPool2d(2, 2) #output[6,16,16]
self.conv2 = nn.LayerChoice([nn.Conv2d(6, 16, 3, padding=1), nn.Conv2d(6, 16, 5, padding=2)]) #output[16,16,16]
self.conv3 = nn.Conv2d(16, 16, 1) #output[16,16,16]
self.skipconnect = nn.InputChoice(n_candidates=2)
self.bn = nn.BatchNorm2d(16)
self.gap = nn.AdaptiveAvgPool2d(4) #output[16,4,4]
self.fc1 = nn.Linear(16 * 4 * 4, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
bs = x.size(0)
x = self.pool(F.relu(self.conv1(x)))
x0 = F.relu(self.conv2(x))
x1 = F.relu(self.conv3(x0))
x1 = self.skipconnect([x1, x1+x0])
x = self.pool(self.bn(x1))
x = self.gap(x).view(bs, -1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
model = Net()
simple_strategy = strategy.Random() # choice: Random, GridSearch, RegularizedEvolution, TPEStrategy
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
train_dataset = serialize(CIFAR10, root="./data_cifar10", train=True, download=True, transform=transform)
test_dataset = serialize(CIFAR10, root="./data_cifar10", train=False, download=True, transform=transform)
trainer = pl.Classification(train_dataloader=pl.DataLoader(train_dataset, batch_size=16),
val_dataloaders=pl.DataLoader(test_dataset, batch_size=16),
max_epochs=5, gpus=[0])
if __name__ == '__main__':
exp = RetiariiExperiment(model, trainer, [], simple_strategy)
exp_config = RetiariiExeConfig('local')
exp_config.experiment_name = 'search darts example'
exp_config.trial_concurrency = 1
exp_config.max_trial_number = 10
exp_config.trial_gpu_number = 1
exp_config.max_experiment_duration = '10m'
exp_config.execution_engine = 'base'
exp_config.training_service.use_active_gpu = True
exp.run(exp_config, 8745)
print('Final model:')
for model_code in exp.export_top_models():
print(model_code)
exp.stop()
```
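The traceback points at `self.metrics = nn.ModuleDict({name: cls() for name, cls in metrics.items()})`, i.e. a metric class instantiated with no arguments; torchmetrics 0.11 made the `task` argument to `Accuracy` required, which matches the missing-`task` error, so a torchmetrics version newer than what NNI 2.10 expects is a likely culprit (pinning `torchmetrics<0.11` is the usual workaround). A toy sketch of the failure mode, with hypothetical classes standing in for torchmetrics:

```python
class OldAccuracy:                 # pre-0.11 style: no required arguments
    def __init__(self, task=None):
        self.task = task

class NewAccuracy:                 # 0.11+ style: `task` is now required
    def __init__(self, task):
        self.task = task

def build_metrics(classes):
    # NNI's zero-argument pattern from the traceback above
    return {name: cls() for name, cls in classes.items()}

assert build_metrics({"acc": OldAccuracy})["acc"].task is None
try:
    build_metrics({"acc": NewAccuracy})
except TypeError as e:
    assert "task" in str(e)        # same missing-'task' TypeError as reported
```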
**Environment**:
- NNI version:2.10
- Training service (local|remote|pai|aml|etc):local
- Client OS:
- Server OS (for remote mode only):
- Python version:3.7.11
- PyTorch/TensorFlow version:pytorch 1.11
- Is conda/virtualenv/venv used?: yes
- Is running in Docker?: no
| closed | 2023-03-28T01:18:25Z | 2023-03-29T02:38:04Z | https://github.com/microsoft/nni/issues/5483 | [] | zzfer490 | 3 |
jupyterlab/jupyter-ai | jupyter | 978 | Cannot import name 'RunnableWithMessageHistory' from 'langchain_core.runnables' | ## Description
https://github.com/jupyterlab/jupyter-ai/pull/943 replaced import:
```diff
- from langchain_core.runnables.history import RunnableWithMessageHistory
+ from langchain_core.runnables import ConfigurableFieldSpec, RunnableWithMessageHistory
```
This errors out on specific versions of `langchain_core` (e.g. 0.1.38) with:
```
cannot import name 'RunnableWithMessageHistory' from 'langchain_core.runnables'
```
but required versions were not bumped:
https://github.com/jupyterlab/jupyter-ai/blob/4b45ec23f3e16a0fa29718eefaf37064f59cafe6/packages/jupyter-ai-magics/pyproject.toml#L27-L28
and that name remains defined in `langchain_core.runnables.history` as of the latest version: https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html
## Reproduce
1. Install older version of langchain
2. Start JupyterLab
3. See error:
```
[W 2024-09-07 10:40:06.488 ServerApp] jupyter_ai | error adding extension (enabled: True): The module 'jupyter_ai' could not be found (cannot import name 'RunnableWithMessageHistory' from 'langchain_core.runnables' (site-packages/langchain_core/runnables/__init__.py)). Are you sure the extension is installed?
```
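A version-tolerant import along these lines would sidestep the mismatch; the helper is invented, and only the two `langchain_core` paths come from the diff above:

```python
import importlib

def import_first(modules, attr):
    """Return `attr` from the first module in `modules` that provides it (sketch)."""
    for name in modules:
        try:
            return getattr(importlib.import_module(name), attr)
        except (ImportError, AttributeError):
            continue
    raise ImportError(f"{attr} not found in any of {modules}")

# For the issue above this would be:
#   RunnableWithMessageHistory = import_first(
#       ["langchain_core.runnables", "langchain_core.runnables.history"],
#       "RunnableWithMessageHistory")

# Demonstrated here with a stdlib pair so it runs anywhere:
import collections
OrderedDict = import_first(["collections.abc", "collections"], "OrderedDict")
assert OrderedDict is collections.OrderedDict   # fell through to the second module
```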
## Expected behavior
`jupyter-ai` works
## Context
2.22.0 | closed | 2024-09-07T09:41:39Z | 2024-09-09T20:20:29Z | https://github.com/jupyterlab/jupyter-ai/issues/978 | [
"bug"
] | krassowski | 1 |
microsoft/MMdnn | tensorflow | 834 | [Caffe2Keras] converted keras gives wrong output value | Hi,
I have a Caffe model and I converted it to a Keras model (Caffe - IR - Keras), but when testing with the same input, the Keras model gave a different output value, although the dimensions are correct. What should I do?
The caffe model: https://github.com/CongWeilin/mtcnn-caffe/tree/master/12net
Thanks | open | 2020-05-11T15:41:37Z | 2020-05-16T07:41:46Z | https://github.com/microsoft/MMdnn/issues/834 | [] | nhatuan84 | 1 |
Guovin/iptv-api | api | 479 | open_m3u_result set to False but the output is still m3u | docker-compose.yml:
```yaml
services:
  tv-requests:
    image: guovern/tv-requests:latest
    ports:
      - "8000:8000"
    environment:
      open_m3u_result: False
    volumes:
      - /volume1/IPTV/config:/tv-requests/config
      - /volume1/IPTV/output:/tv-requests/output
    restart: always
```
Visiting http://IP:8000/result still returns m3u content

| closed | 2024-10-29T13:26:27Z | 2024-10-29T15:46:57Z | https://github.com/Guovin/iptv-api/issues/479 | [
"invalid"
] | iFoxox | 3 |
postmanlabs/httpbin | api | 359 | How to debug issues in URL? | I'm trying to debug an encoding issue in the URL. I want to quickly test an HTTP request library, and I want to see the results of:
```
request.get("http://httpbin.org/get/test");
request.get("http://httpbin.org/get/café");
request.get("http://httpbin.org/get/🐶");
```
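Worth noting: httpbin's `/anything/<path>` route echoes the request back (URL included), which covers this use case, and the expected percent-escaping can be checked locally with `urllib.parse` (a sketch):

```python
from urllib.parse import quote

# what correct UTF-8 percent-escaping should produce for the examples above
assert quote("café") == "caf%C3%A9"
assert quote("🐶") == "%F0%9F%90%B6"

# httpbin's catch-all endpoint accepts arbitrary trailing paths and echoes the URL:
url = "http://httpbin.org/anything/" + quote("café")
assert url == "http://httpbin.org/anything/caf%C3%A9"
```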
I want to see if the library encodes the URLs using percent-escaping correctly or not. It would be very useful if httpbin.org would let me use custom URLs like that. | open | 2017-05-30T09:50:39Z | 2018-04-26T17:51:14Z | https://github.com/postmanlabs/httpbin/issues/359 | [] | Flimm | 3 |
Gozargah/Marzban | api | 731 | Random SNI in Reality | Friends, is there a way, for Reality, to put the servername / sni in the config file with a *, so that Marzban generates a random value and xray accepts it?
I brought up a site with TLS 1.3 and configured it on all my subdomains, and the site comes up on all of them, but the problem is that when I put *.domain.com in the xray configuration file and a user connects with the random link generated by Marzban (which becomes e.g. 68464.domain.com), xray core doesn't accept it; the value has to match exactly what was entered in the configuration file. In other words, only when the SNI is exactly the *.domain.com I entered does traffic pass and Reality work, and when I leave serverName empty in the config file, or remove its tag so that it would accept whatever I put, xray core throws an error and won't run.
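For what it's worth, xray's `realitySettings` takes an explicit `serverNames` list rather than wildcards, so one workaround is to pre-generate a batch of random subdomains and list them all. A hypothetical helper (the field name `serverNames` comes from xray's Reality config; everything else here is invented):

```python
import random
import string

def random_server_names(domain: str, n: int = 20, length: int = 5) -> list[str]:
    """Generate n random numeric subdomains to paste into realitySettings.serverNames."""
    return [
        "".join(random.choices(string.digits, k=length)) + "." + domain
        for _ in range(n)
    ]

names = random_server_names("domain.com", n=3)
assert len(names) == 3
assert all(name.endswith(".domain.com") for name in names)
```

Each generated name (e.g. 68464.domain.com) would then be a valid SNI for both the Marzban link and the xray config, as long as the wildcard TLS certificate already covers it.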
P.S.: I know this isn't the right place to raise the question, but in the group it's hard to get anywhere or get an answer; here at least there may be a few proper pointers. | closed | 2024-01-05T22:15:44Z | 2024-01-09T21:58:09Z | https://github.com/Gozargah/Marzban/issues/731 | [
"Bug"
] | lvlrf | 3 |
Sanster/IOPaint | pytorch | 513 | [Feature Request] Add Zits-PlusPlus model | A new version of the Zits model [has been released](https://github.com/ewrfcas/ZITS-PlusPlus)
The inference code and models have been released, so I think it should be possible to add it. Seems to be a fairly strong model that should be able to outperform a lot of the models in IOPaint. | closed | 2024-04-24T19:38:01Z | 2025-03-13T20:43:27Z | https://github.com/Sanster/IOPaint/issues/513 | [
"stale"
] | mn7216 | 3 |
scikit-tda/kepler-mapper | data-visualization | 169 | Ayasdi's Neighborhood Lens 1 and 2? | I feel embarrassed asking this here and if it's not appropriate then please delete, but I have not been able to find the answer anywhere...
(Some of ?) the lenses that Ayasdi uses are listed in [this](https://platform.ayasdi.com/sdkdocs/userdoc/lenses_supp_overview.html) page (scroll to the bottom of the page). The names are self-explanatory, except for the Neighborhood Lens 1 and Neighborhood Lens 2.
Does anyone know exactly what the maps are to define those two lenses? One possibility I can think of is the distance to the first and the second nearest neighbors, for the choice of metric. Does this sound reasonable?
Unfortunately for me, two papers I am reading used Ayasdi and these lenses for their analysis, but of course they did not explain what they are (and to learn about them you have to be able to log into Ayasdi (at a price of $100k/year...))
Many thanks | closed | 2019-04-18T10:48:25Z | 2021-04-15T17:12:24Z | https://github.com/scikit-tda/kepler-mapper/issues/169 | [] | karinsasaki | 4 |
erdewit/ib_insync | asyncio | 180 | decode with correct encoding | utils.py --> def decode()
return s.decode(errors='backslashreplace') -->> some problem here.
s.decode(encoding="GBK") # non-English locales have to specify the correct encoding, matching the one set in TWS.
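If TWS's language setting can't be queried directly, one pragmatic fallback is the host locale; a sketch (`locale.getpreferredencoding` is only an approximation of whatever encoding TWS was configured with, not a TWS API):

```python
import locale

def decode(s: bytes) -> str:
    """Decode with the host's preferred encoding, falling back to backslashreplace."""
    enc = locale.getpreferredencoding(False) or "utf-8"
    try:
        return s.decode(enc)
    except UnicodeDecodeError:
        return s.decode(errors="backslashreplace")

assert decode(b"abc") == "abc"   # ASCII survives any common locale encoding
```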
So another question is: how do you get the language that is set in TWS? | closed | 2019-08-25T19:20:33Z | 2019-08-29T10:32:31Z | https://github.com/erdewit/ib_insync/issues/180 | [] | viponedream | 1 |
roboflow/supervision | machine-learning | 1,463 | Notebook not found: Serialise Detections to a CSV File | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
The Colab in this [cookbook](https://supervision.roboflow.com/develop/notebooks/serialise-detections-to-csv/) is not found.
<img width="1118" alt="Screenshot 2024-08-19 at 1 19 21 PM" src="https://github.com/user-attachments/assets/07b23e28-0ccc-456d-a496-631e3600bb57">
```
Notebook not found
There was an error loading this notebook. Ensure that the file is accessible and try again.
Ensure that you have permission to view this notebook in GitHub and authorize Colab to use the GitHub API.
https://github.com/roboflow/supervision/blob/develop/docs/notebooks/detections-to-jsonsink.ipynb
Could not find detections-to-jsonsink.ipynb in https://api.github.com/repos/roboflow/supervision/contents/docs/no
```
### Environment
Browser only error: https://supervision.roboflow.com/develop/notebooks/serialise-detections-to-csv/
### Minimal Reproducible Example
Steps:
1. Open the cookbook https://supervision.roboflow.com/develop/notebooks/serialise-detections-to-csv/
2. Click on "Open in Colab"
3. Get the 404 error
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2024-08-19T20:22:59Z | 2024-08-20T14:08:37Z | https://github.com/roboflow/supervision/issues/1463 | [
"bug"
] | ediardo | 2 |
PaddlePaddle/models | nlp | 5,293 | Suspected memory leak in the OCR interface in paddlepaddle 2.0.0 / paddlehub 2.0.0 | The OCR interface appears to have a memory leak; please take a look at where the problem is, thanks.
1. Versions:
paddlepaddle 2.0.0
paddlehub 2.0.0
Running the CPU version
Server memory: 16 GB
2. Symptoms
a. A single Python process calls the ocr.recognize_text interface to read 100 different images from a given directory for text recognition. Memory usage grows by more than 200 MB per recognized image and keeps growing as more images are recognized, until memory is exhausted and the process is killed by the operating system
b. If the same image is recognized repeatedly, memory stops growing after a certain amount
3. Code suspected of causing the leak
Analyzing the call chain eventually pinpointed the PD_PredictorZeroCopyRun function in the C++ library as the suspected source of the leak; the leaking statement is
output_i.shape = new int[output_shape.size()];
The memory allocated with new here is never freed
Source file containing PD_PredictorZeroCopyRun: Paddle/paddle/fluid/inference/capi/pd_predictor.cc
Call chain: ocr.recognize_text -> self._recognize_text -> self.rec_predictor.zero_copy_run() -> PD_PredictorRun
File containing ocr.recognize_text: .paddlehub/modules/chinese_ocr_db_crnn_server/module.py
File containing self._recognize_text: .paddlehub/modules/chinese_ocr_db_crnn_server/module.py
File containing zero_copy_run: Paddle/paddle/fluid/inference/tests/api/analyzer_capi_tester.cc
File containing PD_PredictorZeroCopyRun: Paddle/paddle/fluid/inference/capi/pd_predictor.cc
4. Test code that reproduces the problem
import paddlehub as hub
import cv2
import os, sys
ocr = hub.Module(name="chinese_ocr_db_crnn_server")
for i in range(100):
    rootdir = '/home/test/img'
    names = os.listdir(rootdir)
    for name in names:
        f = os.path.join(rootdir, name)
        res = ocr.recognize_text(images=[cv2.imread(f)])
| closed | 2021-03-20T02:46:24Z | 2021-03-22T02:31:38Z | https://github.com/PaddlePaddle/models/issues/5293 | [] | weihyemail | 0 |
huggingface/datasets | deep-learning | 7,363 | ImportError: To support decoding images, please install 'Pillow'. | ### Describe the bug
Following this tutorial locally on a MacBook with VSCode: https://huggingface.co/docs/diffusers/en/tutorials/basic_training
This line of code: for i, image in enumerate(dataset[:4]["image"]):
throws: ImportError: To support decoding images, please install 'Pillow'.
Pillow is installed.
### Steps to reproduce the bug
Run the tutorial
### Expected behavior
Images should be rendered
### Environment info
MacBook, VSCode | open | 2025-01-08T02:22:57Z | 2025-02-07T07:30:33Z | https://github.com/huggingface/datasets/issues/7363 | [] | jamessdixon | 3 |
sebp/scikit-survival | scikit-learn | 278 | Confusion with dynamic AUC | I am not able to reproduce the AUC that is returned by `cumulative_dynamic_auc()`. I am pretty certain that this is due to my lack of understanding, but would like some clarification if possible. Below is a reproducible example, mostly derived from the documentation example:
```
# Most imports
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sksurv.datasets import load_flchain, load_gbsg2
from sksurv.functions import StepFunction
from sksurv.linear_model import CoxPHSurvivalAnalysis, CoxnetSurvivalAnalysis
from sksurv.metrics import (
concordance_index_censored,
concordance_index_ipcw,
cumulative_dynamic_auc,
integrated_brier_score,
)
from sksurv.nonparametric import kaplan_meier_estimator
from sksurv.preprocessing import OneHotEncoder, encode_categorical
from sksurv.util import Surv
# Load data and split
from sksurv.datasets import load_veterans_lung_cancer
va_x, va_y = load_veterans_lung_cancer()
va_x_train, va_x_test, va_y_train, va_y_test = train_test_split(
va_x, va_y, test_size=0.2, stratify=va_y["Status"], random_state=0
)
# train RSF
from sksurv.ensemble import RandomSurvivalForest
rsf = make_pipeline(
OneHotEncoder(),
RandomSurvivalForest(n_estimators=100, min_samples_leaf=7, random_state=0)
)
rsf.fit(va_x_train, va_y_train)
# compute dynamic AUC
va_times = np.arange(8, 184, 7)
rsf_chf_funcs = rsf.predict_cumulative_hazard_function(
va_x_test, return_array=False)
rsf_risk_scores = np.row_stack([chf(va_times) for chf in rsf_chf_funcs])
rsf_auc, rsf_mean_auc = cumulative_dynamic_auc(
va_y_train, va_y_test, rsf_risk_scores, va_times
)
# plot
plt.plot(va_times, rsf_auc, "o-", label="RSF (mean AUC = {:.3f})".format(rsf_mean_auc))
plt.xlabel("days from enrollment")
plt.ylabel("time-dependent AUC")
plt.legend(loc="lower center")
plt.grid(True)
# Plot shows that AUC is around 79% on day 71
# Create a data frame with risk scores from day 71 along with actual event and times, and percentile rank and predicted events
risks_day_71 = rsf_risk_scores[:, np.where(va_times==71)[0]].squeeze().tolist()
events = []
times = []
for event, time in va_y_test:
events.append(event)
times.append(time)
metricsDf = pd.DataFrame(
{"Risk_score": risks_day_71, "Actual_event":events, "Time_of_event":times}
)
metricsDf = metricsDf.sort_values('Risk_score', ascending=False)
metricsDf['Percentile_rank'] = metricsDf['Risk_score'].rank(pct=True)
metricsDf['Predicted_Event'] = np.where(metricsDf['Percentile_rank'] > 0.5, True, False)
# Use actual events and percentile rank to compute AUC
from sklearn.metrics import roc_auc_score
roc_auc_score(metricsDf.Actual_event, metricsDf.Percentile_rank)
# AUC is larger than what the dynamic AUC suggested
```
I am confused as to how to utilize the fact that AUC is larger at day 71 when using this model on new data. First, is my use of percentile rank incorrect? Second, how then do you provide the AUC with your predicted probabilities?
I apologize for my naïvety and appreciate any help. Thank you. | closed | 2022-07-01T23:43:54Z | 2022-07-03T08:20:10Z | https://github.com/sebp/scikit-survival/issues/278 | [] | Dekermanjian | 1 |
CTFd/CTFd | flask | 2,550 | [Question] Timing's Impact on CTF Ranking | It's often observed that when two teams amass equal points, the team which solves challenges later tends to be ranked higher on the scoreboard.
Why does the timing of challenge solutions become a determining factor in team rankings?
To shed light on this, consider a scenario where two teams complete all challenges but in a different order, resulting in one team being ranked higher due to its later completions. Similarly, in another instance, two teams might be tied in points, yet the team that finishes solving challenges closer to the end of the competition secures a better position.
Why not keep the team that solved first higher on the scoreboard, awarding "first blood"? | closed | 2024-06-02T09:59:09Z | 2024-06-07T13:23:33Z | https://github.com/CTFd/CTFd/issues/2550 | [] | mosheDO | 2 |
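The tie-break the question above asks for can be expressed as a two-key sort: points descending, then time of the last score-changing solve ascending ("first to the score wins"). A small sketch with made-up team data (names and timestamps are illustrative only; this is the questioner's proposed rule, not necessarily CTFd's implementation):

```python
teams = [
    # (name, points, unix time of last score-changing solve)
    ("late_finishers", 500, 1_700_003_600),
    ("early_finishers", 500, 1_700_000_000),
    ("low_score", 300, 1_699_990_000),
]

# Higher points first; among ties, the team that reached the score
# earlier ranks higher.
ranked = sorted(teams, key=lambda t: (-t[1], t[2]))
print([name for name, _, _ in ranked])
```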
rthalley/dnspython | asyncio | 1,028 | 2.5.0 Release | I just looked at the amount of unreleased stuff we have and concluded it's enough for a release. So, I plan to start the release process in early 2024 with the usual RC and release following two weeks or so later if all goes well. | closed | 2023-12-24T22:23:54Z | 2024-01-20T13:32:45Z | https://github.com/rthalley/dnspython/issues/1028 | [] | rthalley | 2 |
opengeos/leafmap | streamlit | 715 | Incorrect data labels with `.add_data()`? | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- leafmap version: leafmap==0.31.7
- Python version: 3.10.12
- Operating System: Ubuntu Linux
### Description
### What I Did
I use `add_data()` to map color fill to a categorical variable. This seems to result in incorrect labeling:
```python
import leafmap.foliumap as leafmap
sf_url = "/vsicurl/https://dsl.richmond.edu/panorama/redlining/static/citiesData/CASanFrancisco1937/geojson.json"
sf = gpd.read_file(sf_url)
sf = sf[sf.residential]
m = leafmap.Map(draw_control=False, measure_control=False, center=[37.75, -122.45], zoom=13)
m.add_data(sf,
column="grade",
layer_name = "Grade",
colors = ["green", "blue", "orange", "red"],
k=4)
m
```

The legend suggests that there is no grade "D" (red) polygons. Mousing over the leafmap, we can see all the orange polygons are in fact grade "D", not "C" like the legend says. (A and B have both been colored green, and C has been colored blue).
This may be a user error in that I have not specified the correct "scheme" to map to data, or that `add_data()` schemes implicitly applies only to floating point / numerical data? I think plotting by data columns and handling both categorical and numerical variables is a common use case.
Some related questions which maybe should be separate issues:
- I looked at doing this using `add_geojson()` instead. I read #28, but it doesn't seem to cover the concept of mapping data columns to aesthetics (like fill color) -- it looks like I would have to manually divide the data ahead of time?
- I also could not figure out how to add opacity / transparency when using `add_data()`. Most other `.add_*` methods support an `opacity` argument, but I don't see how to do this with `add_data()`.
| closed | 2024-04-12T22:35:14Z | 2024-04-14T18:44:10Z | https://github.com/opengeos/leafmap/issues/715 | [
"bug"
] | cboettig | 2 |
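For categorical columns like the redlining grades above, one workaround is to map categories to colors explicitly instead of relying on a k-class numeric scheme. A sketch of such a style function (the function could then be passed as a `style_function` to `folium.GeoJson`, which leafmap's folium backend builds on; the names here are illustrative):

```python
GRADE_COLORS = {"A": "green", "B": "blue", "C": "orange", "D": "red"}

def style_for(feature, fill_opacity=0.5):
    # Look the category up directly so each grade gets exactly one
    # color, sidestepping any numeric binning of categorical data.
    grade = feature["properties"].get("grade")
    return {
        "fillColor": GRADE_COLORS.get(grade, "gray"),
        "fillOpacity": fill_opacity,
        "color": "black",
        "weight": 1,
    }
```

The `fill_opacity` parameter also covers the transparency question, since the style dict carries it per feature.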
horovod/horovod | pytorch | 3,347 | GPU Head tests for elastic ray are flaky in CI | Elastic ray tests(scale up and down) become flaky in GPU Heads. The following tests fail:
1. test_ray_elastic_v2::test_fault_tolerance_hosts_remove_and_add
2. test_ray_elastic_v2::test_fault_tolerance_hosts_added_and_removed
3. test_ray_elastic_v2::test_fault_tolerance_hosts_remove_and_add_cooldown
| open | 2022-01-06T00:17:49Z | 2022-01-06T00:17:49Z | https://github.com/horovod/horovod/issues/3347 | [
"bug"
] | ashahab | 0 |
scrapy/scrapy | web-scraping | 6,590 | Path to removal of spider middleware iterable downgrading | We added support for async spider middlewares in 2.7 (In October 2022), and [mixing sync and async middlewares](https://docs.scrapy.org/en/latest/topics/coroutines.html#sync-async-spider-middleware) was intended to be temporary and eventually deprecated and removed, after (most?) 3rd-party middlewares are converted. The initial discussion of this was at https://github.com/scrapy/scrapy/pull/4978#discussion_r645987466 and then at https://github.com/scrapy/scrapy/pull/4978#issuecomment-944238855 and below. I felt from the beginning that there is no definite path to the expected destination, because things can and should work until we disable them, and also because for many 3rd-party middlewares the only reason to change them would be other 3rd-party middlewares gaining async support, but I now wonder how we can make at least some next steps to the destination.
We expect one of two things from 3rd-party middleware authors: either make the middleware universal, adding an `async def process_spider_output_async` method, or make the middleware async-only, making an `async def process_spider_output` method. If neither is done, and either an async callback or an async-only middleware (called before this one in the stack) is used, there will be a warning in the spider logs. It doesn't say anything about deprecation though, just a warning that an async iterable was downgraded when passed to this middleware. And if everything else is sync or universal, there will be no warning.
There is also another future problem: if you make your middleware universal, then further in the future you will need to change it again to make it async-only. I don't think this is clear to a random maintainer, and I don't know if we should make it more clear, but I think this change should be mostly trivial anyway.
So maybe as the first steps that can be done right now we should:
- make the downgrading warning say that this behavior is deprecated
- make the downgrading warning say how can it be fixed, and link to the docs
- at the load step emit a warning if a spider middleware has only the sync method, saying this is deprecated (will be deprecated? do we emit such advance warnings?)?
Should we try to check which popular middlewares are still not updated? Should we at least do this for Zyte-written ones (public and/or internal?)
Anything else we can do? | open | 2024-12-24T18:44:15Z | 2025-02-08T13:17:53Z | https://github.com/scrapy/scrapy/issues/6590 | [
"discuss",
"cleanup",
"asyncio"
] | wRAR | 5 |
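The "universal" option discussed above boils down to providing both a sync and an async variant of the output hook, so the framework never has to downgrade an async iterable. A minimal skeleton (a sketch of the shape only; a real Scrapy middleware would transform items and requests inside the loops):

```python
class UniversalSpiderMiddleware:
    """Skeleton of a universal spider middleware: the sync method
    handles plain iterables, the *_async variant handles async
    iterables, so async callback output is never downgraded."""

    def process_spider_output(self, response, result, spider):
        for item_or_request in result:
            yield item_or_request  # transform or filter here

    async def process_spider_output_async(self, response, result, spider):
        async for item_or_request in result:
            yield item_or_request  # same logic, async iteration
```

Making the middleware async-only would instead mean keeping a single `async def process_spider_output` with the `async for` body.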
modelscope/modelscope | nlp | 1,029 | [Bug] Models are downloaded repeatedly; weight files already uploaded to the cache folder are not recognized correctly | Thanks for your error report and we appreciate it a lot.
**Checklist**
* I have searched the tutorial on modelscope [doc-site](https://modelscope.cn/docs)
* I have searched related issues but cannot get the expected help.
* The bug has not been fixed in the latest version.
**Describe the bug**
For a model whose weight download is prone to disconnection, I first downloaded the weights offline separately and uploaded them to the cache folder `.cache/modelscope/hub/<org_name>/<model_name>/`. Running `modelscope download --model <org_name>/<model_name>` again re-downloads the model weights; the weight files that were already downloaded offline and uploaded are not recognized.
**To Reproduce**
1. First, download all of the model's files from the [web page](https://modelscope.cn/models/X-D-Lab/MindChat-Qwen-7B-v2/files) and upload them to `.cache/modelscope/hub/X-D-Lab/MindChat-Qwen-7B-v2` on the server
2. Run `modelscope download --model X-D-Lab/MindChat-Qwen-7B-v2`
3. modelscope does not recognize the already-uploaded weight .bin files and still re-downloads the existing weights from the network, instead of promptly reporting that the download is already complete
**Your Environments (__required__)**
* OS: Linux ld-PT6620W 6.8.0-40-generic #40~22.04.3-Ubuntu SMP PREEMPT_DYNAMIC Tue Jul 30 17:30:19 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
* CPU: Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
* modelscope version 1.18.1
Please @ corresponding people according to your problem:
Model hub related: @liuyhwangyh @tastelikefeet @wangxingjun778
Finetune related: @tastelikefeet @Jintao-Huang
Pipeline related: @tastelikefeet @wangxingjun778
| closed | 2024-10-18T06:52:07Z | 2024-10-21T09:34:38Z | https://github.com/modelscope/modelscope/issues/1029 | [] | zydmtaichi | 10 |
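The behavior reported above suggests the downloader is not checking for pre-seeded cache files before fetching. One way a downloader can honor such files is a skip-if-present check; this is only a sketch of the idea (real modelscope cache validation may also compare hashes or hub metadata, which this omits):

```python
import os

def needs_download(path, expected_size=None):
    # Treat the file as already downloaded only when it exists and,
    # if a size is known from the hub's file listing, the size matches.
    if not os.path.isfile(path):
        return True
    if expected_size is not None and os.path.getsize(path) != expected_size:
        return True
    return False
```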
quantumlib/Cirq | api | 6,727 | Incorrect behaviour when inserting classical control circuit element, causing `ValueError` | **Description of the issue**
Under specific circumstances, it seems that classical controls are inserted wrongly and causing errors when getting measurement keys.
**How to reproduce the issue**
Run the following code:
```py
from cirq import *
from cirq.transformers import *
import numpy as np
all_passes = { "align_left": align_left, "align_right": align_right}
def individual_pass(circ : Circuit, circ_num : int, pass_to_do : str):
simulator = Simulator()
shots = 1024
# Takes the modified copy of circ
if pass_to_do == "merge_k_qubit_unitaries":
circ_new = all_passes[pass_to_do](circ, k=np.random.randint(1, 5))
else :
circ_new = all_passes[pass_to_do](circ)
c_orig = simulator.run(circ, repetitions=shots).histogram(key="results")
c_new = simulator.run(circ_new, repetitions=shots).histogram(key='results')
# Adding qubits
qubits = NamedQubit.range(6, prefix="q")
main_circ = Circuit()
main_circ.append(measure(qubits[4], key="cbit0"))
main_circ.append(ry(0.645000).on(qubits[1]).with_classical_controls('cbit0'), strategy=InsertStrategy.INLINE) #Comment me out
# main_circ.append(ry(0.645000).on(qubits[1]).with_classical_controls('cbit0'), strategy=InsertStrategy.EARLIEST) #Uncomment me for working circuit
main_circ.append(measure(qubits, key="results"))
print(main_circ)
individual_pass(main_circ, 26, "align_right")
#individual_pass(main_circ, 26, "align_left") #Or uncomment me
```
The code above will run into an issue where the `ry` gate with classical controls will return an error where it cannot find the key `cbit0`, even though it was inserted before it.
Changing the circuit such that the passes use an insert strategy of `EARLIEST` or `NEW_THEN_INLINE` would make it work. Changing the pass to apply on the circuit to `align_left` instead would also make the circuit not throw an exception, however the resulting circuit would look the same as the wrong one. The commented lines of code above can be uncommented to demonstrate this.
<details>
Circuit that doesn't work looks like this:
```sh
┌───────────┐
q0: ──────────────────────M('results')───
│
q1: ────────Ry(0.205π)────M──────────────
║ │
q2: ────────╫─────────────M──────────────
║ │
q3: ────────╫─────────────M──────────────
║ │
q4: ───────M╫─────────────M──────────────
║║ │
q5: ───────╫╫─────────────M──────────────
║║
cbit0: ════@^════════════════════════════
└───────────┘
```
This also throws an error:
```sh
ValueError: Measurement key cbit0 missing when testing classical control
```
Circuit that works looks like this:
```sh
q0: ───────────────────────M('results')───
│
q1: ──────────Ry(0.205π)───M──────────────
║ │
q2: ──────────╫────────────M──────────────
║ │
q3: ──────────╫────────────M──────────────
║ │
q4: ──────M───╫────────────M──────────────
║ ║ │
q5: ──────╫───╫────────────M──────────────
║ ║
cbit0: ═══@═══^═══════════════════════════
```
Using `align right` would give the same looking circuit without any exceptions thrown:
```sh
┌───────────┐
q0: ──────────────────────M('results')───
│
q1: ────────Ry(0.205π)────M──────────────
║ │
q2: ────────╫─────────────M──────────────
║ │
q3: ────────╫─────────────M──────────────
║ │
q4: ───────M╫─────────────M──────────────
║║ │
q5: ───────╫╫─────────────M──────────────
║║
cbit0: ════@^════════════════════════════
└───────────┘
```
</details>
**Cirq version**
1.4.1
| closed | 2024-09-13T14:22:33Z | 2025-01-15T17:07:11Z | https://github.com/quantumlib/Cirq/issues/6727 | [
"good first issue",
"kind/bug-report",
"triage/accepted"
] | Bennybenassius | 3 |
neuml/txtai | nlp | 163 | Make adding pipelines to API easier | Currently, the list of available pipelines and endpoints is hard coded in the API. This should be modified to dynamically detect pipelines within the txtai.pipeline package. This will make all txtai.pipelines available via API workflows.
Exposing API endpoints for each pipeline still needs to be explicitly set but this change will also make that easier. All that will be needed is a new router class in txtai.api.routers. With this change, the router will automatically be detected and loaded.
Not all pipelines should be available via a direct endpoint and there is non-trival parameter mapping necessary, so this is a good balance. | closed | 2021-11-28T00:38:46Z | 2021-11-28T00:40:41Z | https://github.com/neuml/txtai/issues/163 | [] | davidmezzetti | 0 |
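The dynamic detection described above can be done with `inspect`: scan a module (or package namespace) for concrete subclasses of a common pipeline base class. A sketch of the idea (names are illustrative, not txtai's actual implementation):

```python
import inspect

def discover_pipelines(module, base_class):
    """Return {name: class} for every concrete subclass of base_class
    defined in (or imported into) the given module."""
    return {
        name: obj
        for name, obj in inspect.getmembers(module, inspect.isclass)
        if issubclass(obj, base_class) and obj is not base_class
    }
```

The explicit-router requirement then falls out naturally: only pipelines that also have a router class in the routers package get HTTP endpoints.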
quantmind/pulsar | asyncio | 172 | HTTPException message is not escaped | When an HTTPException is raised, and [render_error()](https://github.com/quantmind/pulsar/blob/74d1e6e58095679291c14c826ce63f6312a93256/pulsar/apps/wsgi/utils.py#L262) decide to return HTML, the exception message should be escaped to prevent [XSS](https://en.wikipedia.org/wiki/Cross-site_scripting).
[OWASP recommends it](https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet#RULE_.231_-_HTML_Escape_Before_Inserting_Untrusted_Data_into_HTML_Element_Content) and [Werkzeug does it.](https://github.com/mitsuhiko/werkzeug/blob/d9e7736de41e7cbf5de70a15e28449b4dd4adab0/werkzeug/exceptions.py#L113)
| closed | 2015-10-30T10:56:12Z | 2016-01-06T14:55:06Z | https://github.com/quantmind/pulsar/issues/172 | [
"enhancement",
"security"
] | msornay | 2 |
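The fix the report asks for is standard HTML-escaping of the untrusted message before interpolation. A minimal sketch using the stdlib (the function name and markup here are illustrative, not pulsar's actual `render_error()`):

```python
from html import escape

def render_error_html(status, message):
    # Escape the untrusted exception message before interpolating it
    # into the HTML body, per OWASP XSS prevention rule #1.
    return "<h1>{}</h1><p>{}</p>".format(status, escape(str(message)))

print(render_error_html(400, "<script>alert(1)</script>"))
```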
getsentry/sentry | django | 87,029 | Search functionality for schema hints slideout | same search functionality as breadcrumbs. AKA local filtering logic. | closed | 2025-03-13T20:43:29Z | 2025-03-18T18:16:40Z | https://github.com/getsentry/sentry/issues/87029 | [] | nikkikapadia | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 890 | No models saved in Colab | I was using the PyTorch Colab notebook of CycleGAN. I trained the network using `!python train.py --dataroot ./datasets/horse2zebra --name horse2zebra --model cycle_gan --display_id -1 --n_epochs 5 --n_epochs_decay 5` but it didn't saved any model (I'm using default settings to save every 5000 iterations, in fact it printed several times that something was saved to `checkpoint` folder, but the folder was empty). | closed | 2020-01-03T01:59:05Z | 2022-04-28T21:33:35Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/890 | [] | domef | 6 |
automagica/automagica | automation | 1 | Math functions | Add math functionality to activities.py to support native math operations as described in the documentation | closed | 2018-05-14T14:57:58Z | 2018-10-04T09:45:02Z | https://github.com/automagica/automagica/issues/1 | [
"enhancement"
] | tvturnhout | 0 |
ultralytics/yolov5 | machine-learning | 13,304 | hyp.finetune.yaml missing | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi, I cannot find hyp.finetune.yaml in this repo, even though it is mentioned in the model fine-tuning docs in the Ultralytics documentation. Did you change hyp.finetune.yaml into hyp.VOC.yaml? Thank you!
### Additional
_No response_ | closed | 2024-09-11T02:27:47Z | 2024-09-13T05:58:50Z | https://github.com/ultralytics/yolov5/issues/13304 | [
"question"
] | spacewalk01 | 3 |
iperov/DeepFaceLab | machine-learning | 5,598 | 3070TI (laptop) shows better compatibility with DX12 build compared to 3000 build (More memoryerrors) | First of all, I want to ask that, theoretically, should 3000 build provide a better performance or not? If not, maybe I will stop trying to figure out how to run 3000 build properly.
OK, now the description of the problems:
In general, for training, 3000 build always crashes, and usually with a MemoryError or low paging file message. (There were many times just nothing happened after finished loading samples, I searched the Issues here, it seems most ppl think it's due to low paging file)
I tried to increase the paging file, 32-42G for the C drive, and 32-64G for the drive where the project locate in. (Which is the one the codes really use? The system drive C: in this case?) Only in very very rare case, this works for very light load training like batch size 2 and lowest for all parameters.
The errors and where the codes crash are different almost every time, the error messages are very similar to #5524 . So I tried his method. by reducing the number of worker, change the Model.py 669 to:
cpu_count = multiprocessing.cpu_count() // 2
I did this to both AMP and SAEHD training codes. The good thing is, the AMP works for most time, but the SAEHD still crashes.
So for SAEHD, if I start a new model training with a batch size of 2, lowest for all parameters, it can run. But if I increase the batch size to 4, no other changes, it crashes. With DX12 build, I can train with a pretrained model (res: 256, dims: 256/64/64/22) bs=8.
The followings are images to show that 3000 build only recognizes 5+G VRAM even when the AMP works, while the DX12 recognized 6+ available, and also the latest error message I got when I ran the SAEHD training codes:



`Error:
Traceback (most recent call last):
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
return fn(*args)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
target_list, run_metadata)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.
(0) Internal: Attempting to perform BLAS operation using StreamExecutor without BLAS support
[[{{node MatMul}}]]
[[Sigmoid_5/_821]]
(1) Internal: Attempting to perform BLAS operation using StreamExecutor without BLAS support
[[{{node MatMul}}]]
0 successful operations.
0 derived errors ignored.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\ModelBase.py", line 263, in update_sample_for_preview
self.get_history_previews()
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\ModelBase.py", line 383, in get_history_previews
return self.onGetPreview (self.sample_for_preview, for_history=True)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 789, in onGetPreview
S, D, SS, DD, DDM, SD, SDM = [ np.clip( nn.to_data_format(x,"NHWC", self.model_data_format), 0.0, 1.0) for x in ([target_src,target_dst] + self.AE_view (target_src, target_dst) ) ]
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 611, in AE_view
self.warped_dst:warped_dst})
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
run_metadata_ptr)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
feed_dict_tensor, options, run_metadata)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
run_metadata)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
raise type(e)(node_def, op, message) # pylint: disable=no-value-for-parameter
tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.
(0) Internal: Attempting to perform BLAS operation using StreamExecutor without BLAS support
[[node MatMul (defined at A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Dense.py:66) ]]
[[Sigmoid_5/_821]]
(1) Internal: Attempting to perform BLAS operation using StreamExecutor without BLAS support
[[node MatMul (defined at A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Dense.py:66) ]]
0 successful operations.
0 derived errors ignored.
Errors may have originated from an input operation.
Input Source operations connected to node MatMul:
inter_AB/dense1/weight/read (defined at A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Dense.py:47)
mul_6 (defined at A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\ops\__init__.py:400)
Input Source operations connected to node MatMul:
inter_AB/dense1/weight/read (defined at A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Dense.py:47)
mul_6 (defined at A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\ops\__init__.py:400)
Original stack trace for 'MatMul':
File "threading.py", line 884, in _bootstrap
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
debug=debug)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\ModelBase.py", line 193, in __init__
self.on_initialize()
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 416, in on_initialize
gpu_src_inter_AB_code = self.inter_AB (gpu_src_code)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
return self.forward(*args, **kwargs)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 151, in forward
x = self.dense1(x)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
return self.forward(*args, **kwargs)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Dense.py", line 66, in forward
x = tf.matmul(x, weight)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
return target(*args, **kwargs)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 3655, in matmul
a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 5713, in mat_mul
name=name)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
op_def=op_def)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
self._traceback = tf_stack.extract_stack_for_node(self._c_op)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
debug=debug)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\ModelBase.py", line 193, in __init__
self.on_initialize()
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 698, in on_initialize
self.update_sample_for_preview(force_new=True)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\ModelBase.py", line 265, in update_sample_for_preview
self.sample_for_preview = self.generate_next_samples()
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\ModelBase.py", line 461, in generate_next_samples
sample.append ( generator.generate_next() )
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\samplelib\SampleGeneratorBase.py", line 21, in generate_next
self.last_generation = next(self)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 112, in __next__
return next(generator)
File "A:\AIprojects\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\joblib\SubprocessGenerator.py", line 73, in __next__
gen_data = self.cs_queue.get()
File "multiprocessing\queues.py", line 94, in get
File "multiprocessing\connection.py", line 216, in recv_bytes
File "multiprocessing\connection.py", line 318, in _recv_bytes
File "multiprocessing\connection.py", line 340, in _get_more_data
MemoryError` | closed | 2022-12-10T23:50:51Z | 2022-12-18T08:40:35Z | https://github.com/iperov/DeepFaceLab/issues/5598 | [] | JulesLiu | 0 |
ivy-llc/ivy | numpy | 28,090 | Fix Frontend Failing Test: paddle - math.paddle.pow | To-do List: https://github.com/unifyai/ivy/issues/27500 | closed | 2024-01-27T16:36:15Z | 2024-01-29T13:11:44Z | https://github.com/ivy-llc/ivy/issues/28090 | [
"Sub Task"
] | Sai-Suraj-27 | 0 |
plotly/dash | data-visualization | 3,076 | add support for ignoring specific folders under the assets directory | The existing parameter `assets_ignore` can ignore files under the assets folder that match a specified regular expression pattern, but it cannot ignore specific folders under the assets folder as a whole, which would be useful in many scenarios.
| open | 2024-11-15T01:58:40Z | 2024-11-20T13:30:18Z | https://github.com/plotly/dash/issues/3076 | [
"feature",
"P3"
] | CNFeffery | 3 |
cobrateam/splinter | automation | 387 | browser.get_alert() does not return "None" when there is no alert | When I call `browser.get_alert()` on a page without an alert, instead of `None`, it returns `NoAlertPresentException: Message: No alert is present`-- contrary to the documentation.
Ie.,
```
from splinter import Browser
browser = Browser()
browser.visit('https://www.google.com')
browser.get_alert()
```
It'd be nice for this to work as documented, since then I'd have a simple way to check that there isn't an active alert on the page, although it's not a huge problem.
Is the problem with the code, or with the documentation? It looks like `.get_alert()` is using a deprecated Selenium method, though I'm not sure if that's the cause of the issue or not.
| closed | 2015-04-06T20:46:31Z | 2019-10-02T20:48:42Z | https://github.com/cobrateam/splinter/issues/387 | [
"NeedsInvestigation"
] | njbennett | 2 |
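Until the library matches its documentation, a small wrapper can restore the documented contract of returning `None`. This is only a sketch: in real Splinter code you would pass selenium's `NoAlertPresentException` instead of the broad default used here for illustration.

```python
def get_alert_or_none(browser, no_alert_exceptions=(Exception,)):
    """Return the active alert, or None when none is present.

    Pass the concrete "no alert" exception type(s) for your driver;
    catching bare Exception is only a placeholder for the sketch.
    """
    try:
        return browser.get_alert()
    except no_alert_exceptions:
        return None
```

This also gives a simple way to assert "no active alert on the page": `assert get_alert_or_none(browser, (NoAlertPresentException,)) is None`.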
dinoperovic/django-salesman | rest-api | 19 | item_validate is called twice per item | validate_basket_item is called twice per item | closed | 2022-04-25T19:03:24Z | 2022-04-25T19:18:55Z | https://github.com/dinoperovic/django-salesman/issues/19 | [] | IncreaseComputers | 1 |