| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
kaliiiiiiiiii/Selenium-Driverless | web-scraping | 310 | await driver.close() closes the entire window instead of current Tab | I expected await driver.close() to close only the current tab, but it closes the entire browser window instead.
**Steps to Reproduce**
1. Open multiple tabs using execute_script("window.open(...);").
2. Switch to the new tab.
3. Call await driver.close().

Instead of closing only the new tab, the entire browser session terminates.

**Expected Behavior**
await driver.close() should close only the currently active tab.
The browser window should remain open with other tabs intact.

**Actual Behavior**
The entire browser window closes instead of just the active tab.
```python
for idx, h2 in enumerate(job_titles, start=1):
    try:
        # Find <a> inside <h2>
        link = await h2.find_element(By.CSS_SELECTOR, "a")
        job_link = await link.get_attribute("href")
        logging.info(f"Opening job {idx}: {job_link}")

        # Open new tab for the job link
        await driver.execute_script(f"window.open('{job_link}', '_blank');")
        await asyncio.sleep(1)  # Short delay for tab update

        # Get the current tab handles
        all_tabs = await driver.window_handles

        # Ensure there's more than one tab
        if len(all_tabs) <= 1:
            logging.warning("No new tab detected, skipping...")
            continue

        # Identify the new tab (the last handle in the list)
        new_tab = all_tabs[-1]

        # Switch to new tab
        await driver.switch_to.window(new_tab)

        # Wait for page to load
        try:
            await driver.find_element(By.TAG_NAME, 'body')
            logging.info("Page loaded successfully")
        except Exception as e:
            logging.error(f"Error waiting for page load: {e}")

        await asyncio.sleep(3)  # Let content load

        # Print page source before closing tab
        page_source = await driver.page_source
        logging.info(f"Page Source for job {idx}:\n{page_source[:100]}")  # First 100 chars

        # Close the new tab only if there are multiple tabs open
        remaining_tabs = await driver.window_handles
        if len(remaining_tabs) > 1:
            await driver.close()  # Close the current tab
            logging.info("New tab closed successfully")

            # Switch back to main tab
            remaining_tabs = await driver.window_handles
            if main_tab in remaining_tabs:
                await driver.switch_to.window(main_tab)
            else:
                logging.error("Main tab closed unexpectedly!")
        else:
            logging.warning("Skipping tab close: Only one tab left!")
    except Exception as e:
        logging.error(f"Error processing job {idx}: {e}")
```
| open | 2025-02-02T15:07:32Z | 2025-02-02T22:10:13Z | https://github.com/kaliiiiiiiiii/Selenium-Driverless/issues/310 | [
"invalid"
] | bizjaya | 1 |
apachecn/ailearning | nlp | 564 | Almost all of the images are broken... | Almost all of the images are broken... | closed | 2019-12-23T03:32:12Z | 2019-12-30T02:28:29Z | https://github.com/apachecn/ailearning/issues/564 | [] | llwinner | 2 |
sqlalchemy/alembic | sqlalchemy | 1,098 | Postgres index with operators doesn't autogenerate | **Describe the bug**
I'm using a PostgreSQL index with an inet_ops operator. The autogenerated code does not include the index.
**Expected behavior**
Autogenerated code should include the index.
**To Reproduce**
**env.py**
```py
from logging.config import fileConfig

from sqlalchemy import Column, Index, MetaData, Table, engine_from_config, literal_column
from sqlalchemy.dialects.postgresql import INET
from sqlalchemy import pool

from alembic import context

config = context.config

if config.config_file_name is not None:
    fileConfig(config.config_file_name)

target_metadata = MetaData()
Table(
    "t", target_metadata,
    Column("addr", INET),
    Index(
        "ix_1",
        literal_column("addr inet_ops"),
        postgresql_using="GiST",
    ),
)

def run_migrations_offline() -> None:
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url, target_metadata=target_metadata,
        literal_binds=True, dialect_opts={"paramstyle": "named"}
    )
    with context.begin_transaction():
        context.run_migrations()

def run_migrations_online() -> None:
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.", poolclass=pool.NullPool,
    )
    with connectable.connect() as connection:
        context.configure(connection=connection, target_metadata=target_metadata)
        with context.begin_transaction():
            context.run_migrations()

if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
```
**Error**
```
% alembic revision --autogenerate -m "Add inet ops table/index"
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.autogenerate.compare] Detected added table 't'
/home/gary/dev/alembic-inet-ops-mcve/ve/lib64/python3.10/site-packages/alembic/ddl/postgresql.py:252: UserWarning: autogenerate skipping functional index ix_1; not supported by SQLAlchemy reflection
util.warn(
Generating /home/gary/dev/alembic-inet-ops-mcve/alembic/versions/7361d9f63fd9_add_inet_ops_table_index.py ... done
```
**Versions.**
- OS: Fedora 36
- Python: Python 3.10.7
- Alembic: 1.8.1
- SQLAlchemy: 1.4.41
- Database: postgresql
- DBAPI: psycopg2-binary==2.9.4
**Have a nice day!**
P.S. pull request to follow shortly. | open | 2022-10-12T11:25:37Z | 2022-12-01T16:19:52Z | https://github.com/sqlalchemy/alembic/issues/1098 | [
"autogenerate - detection",
"postgresql",
"use case"
] | garyvdm | 4 |
django-import-export/django-import-export | django | 1,812 | Can't import file without an `id` column in V4 | **Describe the bug**
In V4 I get an error attempting to import data without an `id` column.
**To Reproduce**
When updating to V4 it seems I'm no longer able to import data without an `id` field present in the import data. My expectation (and I think this was the V3 behavior?) is that if `id` is present it is used to check for an existing record but if it's not present a new record is added.
Example configuration:
```python
class ImportExportResourceBase(resources.ModelResource):
    id = fields.Field(attribute="id")
    child = fields.Field(attribute="child_id", column_name="child_id")
    child_first_name = fields.Field(attribute="child__first_name", readonly=True)
    child_last_name = fields.Field(attribute="child__last_name", readonly=True)

    class Meta:
        clean_model_instances = True
        exclude = ("duration",)


class WeightImportExportResource(ImportExportResourceBase):
    class Meta:
        model = models.Weight


@admin.register(models.Weight)
class WeightAdmin(ImportExportMixin, ExportActionMixin, admin.ModelAdmin):
    list_display = (
        "child",
        "weight",
        "date",
    )
    list_filter = ("child", "tags")
    search_fields = (
        "child__first_name",
        "child__last_name",
        "weight",
    )
    resource_class = WeightImportExportResource
    import_error_display = ("message", "traceback", "row")
```
And import file:
```csv
child_id,weight,date,notes
1,10.45,2020-02-14,
1,10.17,2020-02-07,Medical that leader line general marriage plan present. Good garden suggest no. Section than order little share property why. Prepare production imagine commercial seek teach over anything. Reason late consumer.
1,10.04,2020-01-31,Drive usually end than forget. Option value either building car minute the. After resource worker response if. Include second property finish. Music event pick give wind.
1,9.89,2020-01-24,
1,9.68,2020-01-17,
```
When I try to import this file I get errors (for all rows):
```
Errors

The following fields are declared in 'import_id_fields' but are not present in the file headers: id

Traceback (most recent call last):
  File "/workspaces/babybuddy/.venv/lib/python3.12/site-packages/import_export/resources.py", line 871, in import_data_inner
    self._check_import_id_fields(dataset.headers)
  File "/workspaces/babybuddy/.venv/lib/python3.12/site-packages/import_export/resources.py", line 1144, in _check_import_id_fields
    raise exceptions.FieldError(
import_export.exceptions.FieldError: The following fields are declared in 'import_id_fields' but are not present in the file headers: id

Line number: 1 - "Column 'id' not found in dataset. Available columns are: ['child_id', 'weight', 'date', 'notes']"
1, 10.45, 2020-02-14,

Traceback (most recent call last):
  File "/workspaces/babybuddy/.venv/lib/python3.12/site-packages/import_export/fields.py", line 82, in clean
    value = row[self.column_name]
            ~~~^^^^^^^^^^^^^^^^^^
KeyError: 'id'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/workspaces/babybuddy/.venv/lib/python3.12/site-packages/import_export/resources.py", line 707, in import_row
    instance, new = self.get_or_init_instance(instance_loader, row)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspaces/babybuddy/.venv/lib/python3.12/site-packages/import_export/resources.py", line 177, in get_or_init_instance
    instance = self.get_instance(instance_loader, row)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspaces/babybuddy/.venv/lib/python3.12/site-packages/import_export/resources.py", line 170, in get_instance
    return instance_loader.get_instance(row)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspaces/babybuddy/.venv/lib/python3.12/site-packages/import_export/instance_loaders.py", line 29, in get_instance
    params[field.attribute] = field.clean(row)
                              ^^^^^^^^^^^^^^^^
  File "/workspaces/babybuddy/.venv/lib/python3.12/site-packages/import_export/fields.py", line 84, in clean
    raise KeyError(
KeyError: "Column 'id' not found in dataset. Available columns are: ['child_id', 'weight', 'date', 'notes']"
```
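Not part of the original report, but one possible workaround sketch (an assumption on my part, not a confirmed v4 recommendation) is to declare `import_id_fields` using columns that actually exist in the file, so the v4 header check passes and existing rows are matched on those columns instead of `id`:

```python
# Sketch only: the model and field names are taken from the example above;
# whether this is the intended v4 mechanism is an assumption worth verifying.
class WeightImportExportResource(ImportExportResourceBase):
    class Meta:
        model = models.Weight
        # Look up existing rows by columns present in the CSV instead of "id".
        import_id_fields = ("child_id", "date")
```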
**Versions (please complete the following information):**
- Django Import Export: 4.0.0
- Python: 3.12
- Django: 5.0.4
**Expected behavior**
If there is no `id` column in the data to import, new entries are added to the database for each row.
| closed | 2024-05-03T05:04:53Z | 2024-05-08T08:10:49Z | https://github.com/django-import-export/django-import-export/issues/1812 | [
"bug"
] | cdubz | 4 |
desec-io/desec-stack | rest-api | 752 | Webapp: refactor form inputs | ## Problem
Some form inputs are copy-and-pasted and the code is redundant.
Affected inputs:
- [x] E-Mail
- [x] Password
- [ ] Captcha
## Solution
Refactor to generic component.
Implementation already done, but must be evaluated. PR in some days... | open | 2023-07-06T07:08:56Z | 2023-09-16T11:43:37Z | https://github.com/desec-io/desec-stack/issues/752 | [] | Rotzbua | 1 |
open-mmlab/mmdetection | pytorch | 12,020 | ImportError: cannot import name 'make_res_layer' from 'mmdet.models.backbones' | >>> import mmdet
>>> mmdet.__version__
'1.0.rc0+34e9957' | open | 2024-10-29T12:08:51Z | 2024-10-29T12:09:07Z | https://github.com/open-mmlab/mmdetection/issues/12020 | [] | jiyuwangbupt | 0 |
CorentinJ/Real-Time-Voice-Cloning | python | 1,189 | Support for Arabic language. | Hello, I'm working on an Arabic model for this project. I'm planning on adding Arabic characters support as well as having the software synthesize audio that speaks Arabic. So far I'm trying to do the following :
1. Downloading the corpus Arabic dataset linked here : http://en.arabicspeechcorpus.com/arabic-speech-corpus.zip
2. Modifying synthesizer/utils/symbols.py to include Arabic characters and numbers and I must have successfully done that.
3. Running encoder_train, vocoder_train and synthesizer_train with the corpus dataset to generate .pt files for each, which will serve as the new pretrained model for the Arabic language
4. The final step which consists of running the toolbox to check if the software functions well.
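Step 2 (extending `synthesizer/utils/symbols.py`) could be sketched roughly as follows; the variable names and the exact character inventory here are my assumptions, not the project's actual code:

```python
# Hypothetical sketch of an extended symbol set; names mirror common
# Tacotron-style symbols.py layouts, not necessarily this repo's exact ones.
_pad = "_"
_eos = "~"
_characters = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!'(),-.:;? "
# Arabic letters, a few diacritics, and Arabic-Indic digits (assumed additions).
_arabic = "ءآأؤإئابةتثجحخدذرزسشصضطظعغفقكلمنهوىيًٌٍَُِّْ٠١٢٣٤٥٦٧٨٩"

symbols = [_pad, _eos] + list(_characters) + list(_arabic)
```

The synthesizer's text cleaners would also need to pass Arabic text through unchanged (e.g. avoid an ASCII-only cleaner), which is a separate change from the symbol list itself.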
Is there anything else that needs to be done? | open | 2023-04-12T11:48:25Z | 2023-04-12T11:48:25Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1189 | [] | ghost | 0 |
ultralytics/yolov5 | pytorch | 13,330 | about save-txt in yolov5-seg | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
When activating "save-txt" in yolov5-seg.py, a txt file with the coordinates of the predicted region is saved, but I found that the coordinates do not seem to be saved in order (that is, they do not trace around the segmented region). When I use fillPoly in OpenCV, the coordinates do not form a polygon like the predicted one. Is there a way to output the coordinates in order?
### Additional
_No response_ | open | 2024-09-23T09:40:49Z | 2024-10-27T13:30:36Z | https://github.com/ultralytics/yolov5/issues/13330 | [
"question"
] | Powerfulidot | 5 |
Yorko/mlcourse.ai | data-science | 757 | Proofread topic 6 | - Fix issues
- Fix typos
- Correct the translation where needed
- Add images where necessary | closed | 2023-10-24T07:41:40Z | 2024-08-25T08:08:32Z | https://github.com/Yorko/mlcourse.ai/issues/757 | [
"enhancement",
"articles"
] | Yorko | 0 |
Anjok07/ultimatevocalremovergui | pytorch | 1,496 | Low VRAM error | I'm using the OpenCL version with a 1GB VRAM Radeon RX 580, so definitely at the low end of the low-VRAM range. I was trying to use Ensemble Mode, but it didn't run. I have 16GB of RAM, though, so it would be lovely if it could use my RAM instead, but that's a PyTorch problem, isn't it? Could you guys create a low_vram mode akin to what ComfyUI did? Some nodes there also have a feature called tiling, where they process small chunks one at a time to limit how much VRAM is in use at any moment. While I _can_ run in CPU Only mode, it is way too slow compared to what I can do in ComfyUI using DirectML and their -low_vram argument.
The error thrown is pasted below:
```
RuntimeError: "Could not allocate tensor with 402653184 bytes. There is not enough GPU video memory available!"
Traceback Error: "
File "UVR.py", line 6638, in process_start
File "separate.py", line 1054, in seperate
File "separate.py", line 1182, in inference_vr
File "separate.py", line 1149, in _execute
File "lib_v5\vr_network\nets.py", line 161, in predict_mask
File "lib_v5\vr_network\nets.py", line 137, in forward
File "lib_v5\vr_network\nets.py", line 45, in __call__
File "lib_v5\vr_network\layers.py", line 73, in __call__
File "torch\nn\functional.py", line 3959, in interpolate
"
Error Time Stamp [2024-08-06 07:52:49]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 5
window_size: 512
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: Choose Model
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_use_opencl: True
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: WAV
wav_type_set: PCM_16
device_set: Radeon RX 580 Series:0
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 10
demucs_stems: All Stems
mdx_stems: All Stems
``` | open | 2024-08-06T11:09:11Z | 2024-08-06T11:31:05Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1496 | [] | RutraNickers | 1 |
alpacahq/alpaca-trade-api-python | rest-api | 276 | paper trading api keys don't authenticate on StreamConn | I'm using my paper trading keys to instantiate a StreamConn because I want to receive trade updates on my paper account, but it looks like StreamConn only authenticates with live trading keys. | closed | 2020-07-24T03:00:14Z | 2020-07-24T03:34:28Z | https://github.com/alpacahq/alpaca-trade-api-python/issues/276 | [] | connor-roche | 0 |
ultralytics/yolov5 | deep-learning | 12,815 | load images in batch size | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi, I would like to ask: when using torch.hub.load to load a model, is there a way to process images in batches (for example, when 3 or 5 images arrive simultaneously), allowing parallel execution instead of queuing?
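Not part of the original question, but a minimal sketch of one approach: collect incoming images into fixed-size batches and pass the whole batch to the model in a single call. The torch.hub usage below is commented out and assumed (YOLOv5 hub models accept a list of images as one batched forward pass); only the batching helper runs here:

```python
from itertools import islice

def chunked(items, size):
    """Yield successive batches of at most `size` items."""
    it = iter(items)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# Assumed usage with a hub model (not executed in this sketch):
#   model = torch.hub.load("ultralytics/yolov5", "yolov5s")
#   for batch in chunked(image_paths, 5):
#       results = model(batch)  # one forward pass over the whole batch
batches = list(chunked(["a.jpg", "b.jpg", "c.jpg", "d.jpg", "e.jpg"], 2))
```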
### Additional
_No response_ | closed | 2024-03-12T01:59:59Z | 2024-05-09T00:21:09Z | https://github.com/ultralytics/yolov5/issues/12815 | [
"question",
"Stale"
] | KnightInsight | 6 |
Miserlou/Zappa | flask | 2,233 | ImportError: cannot import name 'Flask' from 'flask' (unknown location) | I am trying to deploy a simple application using zappa which uses flask and moviepy.
## Context
Neither the folder nor any file is named flask.
Locally `python ./app.py` is just working fine.
I am running on python 3.9
## Expected Behavior
To run normally on lambda.
## Actual Behavior
When trying to do `zappa deploy`, everything works fine except it ends with a `502`.
Then, when I tried to debug with `zappa tail dev`, I found this:
```bash
Instancing..
[1639313514980] Failed to find library: libmysqlclient.so.18...right filename?
[1639313515099] [ERROR] ImportError: cannot import name 'Flask' from 'flask' (unknown location)
Traceback (most recent call last):
File "/var/task/handler.py", line 657, in lambda_handler
return LambdaHandler.lambda_handler(event, context)
File "/var/task/handler.py", line 251, in lambda_handler
handler = cls()
File "/var/task/handler.py", line 148, in __init__
self.app_module = importlib.import_module(self.settings.APP_MODULE)
File "/var/lang/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/tmp/transcripter/app.py", line 1, in <module>
from flask import Flask, render_template, request, redirect
```
## Steps to Reproduce
1. zappa deploy
2. zappa tail dev
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.54.1
* Operating System and Python version: Windows 10 with python 3.9
* The output of `pip freeze`:
```txt
certifi==2021.10.8
charset-normalizer==2.0.9
click==8.0.3
colorama==0.4.4
decorator==4.4.2
Flask==2.0.2
Flask-Cors==3.0.10
idna==3.3
imageio==2.13.3
imageio-ffmpeg==0.4.5
itsdangerous==2.0.1
Jinja2==3.0.3
MarkupSafe==2.0.1
moviepy==1.0.3
numpy==1.21.4
Pillow==8.4.0
proglog==0.1.9
requests==2.26.0
six==1.16.0
SpeechRecognition==3.8.1
tqdm==4.62.3
urllib3==1.26.7
Werkzeug==2.0.2
```
* Your `zappa_settings.json`:
```json
{
"dev": {
"app_function": "app.app",
"profile_name": null,
"aws_region": "ap-south-1",
"project_name": "transcripter",
"runtime": "python3.9",
"s3_bucket": "zappa-e5ola6g1j",
"slim_handler": true,
"include": []
}
}
```
Additionally, I don't understand why this `Failed to find library: libmysqlclient.so.18...right filename?` message keeps appearing.
| open | 2021-12-12T13:24:19Z | 2022-08-22T13:27:45Z | https://github.com/Miserlou/Zappa/issues/2233 | [] | Shubham-Kumar-2000 | 2 |
allenai/allennlp | nlp | 5,018 | Visual Genome Question Answering | Visual Genome has a set of questions for every image, 1.7M in total. This task is about `Step` that produces a `DatasetDict` for this task, and then some more steps to connect it to the model for [VQA](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/vision/dataset_readers/vqav2.py) and [GQA](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/vision/dataset_readers/gqa.py), and train it.
Note that this will not solve the task completely. Some of the Visual Genome answers consist of multiple tokens, which cannot be handled by the existing model that's there. The best models achieve about 36% accuracy on this task, but we will need the result of #5003 to get there. As a result, achieving any accuracy scores above 30% would be a success.
Writing one of these dataset steps is a little complex, because it has to deal with pre-processing the images. But on the plus side, you don't have to write the model yourself. | open | 2021-02-24T22:40:24Z | 2021-08-28T00:25:23Z | https://github.com/allenai/allennlp/issues/5018 | [
"Contributions welcome",
"Models",
"medium"
] | dirkgr | 0 |
flairNLP/flair | nlp | 2,953 | How to use the trained model for named entity recognition | I've trained a named-entity model; now how can I use the model to recognize named entities in my own sentences? Can you provide relevant examples? I looked at this case:
```python
model = SequenceTagger.load('resources/taggers/example-upos/final-model.pt')
sentence = Sentence('I love Berlin')
model.predict(sentence)
print(sentence.to_tagged_string())
```
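Not an official answer, but a hedged sketch of how NER results are typically read in Flair (the model path is a placeholder, and `get_spans('ner')` assumes the model was trained with the `ner` tag type):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Placeholder path: point this at your own trained NER model.
tagger = SequenceTagger.load("resources/taggers/example-ner/final-model.pt")

sentence = Sentence("George Washington went to Washington.")
tagger.predict(sentence)

# Iterate over predicted entity spans instead of printing the tagged string.
for entity in sentence.get_spans("ner"):
    print(entity.text, entity.tag, entity.score)
```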
But the results seemed different from what I wanted | closed | 2022-10-01T03:10:17Z | 2023-04-02T16:54:25Z | https://github.com/flairNLP/flair/issues/2953 | [
"question",
"wontfix"
] | yaoysyao | 1 |
jina-ai/serve | deep-learning | 6,168 | Engine consumes a large amount of memory. What methods can be used to optimize memory usage? | jina version: 3.25.1 (latest)
When using an empty example to observe the idle memory usage without adding any executor, the situation is as follows:
flow.py
```python
from jina import Flow
f = Flow()
with f:
f.block()
```
In the x86 environment:
```scss
python(16985)───python(17051)─┬─{default-executo}(17054)
├─{event_engine}(17058)
├─{event_engine}(17059)
├─{event_engine}(17060)
├─{event_engine}(17061)
├─{event_engine}(17077)
├─{event_engine}(17080)
├─{grpc_global_tim}(17056)
├─{python}(17057)
├─{resolver-execut}(17055)
└─{timer_manager}(17081)
```
PID `16985` occupies `80M`, PID `17051` occupies `66M`.
In the ARM environment:
```scss
python(3740)───python(8001)─┬─{python}(8014)
├─{python}(8015)
├─{python}(8016)
├─{python}(8017)
├─{python}(8020)
├─{python}(8021)
├─{python}(8022)
├─{python}(8023)
├─{python}(8024)
├─{python}(8025)
└─{python}(8026)
```
PID `3740` occupies `308M`, PID `8001` occupies `214M`.
The executor is designed as multiple processes. We have also measured the memory usage of each process at around `250M-350M`.
When all processes are combined in a flow, total usage easily exceeds `3-4G`.
There is no mention of memory in the entire documentation. Is there any solution to this problem? | closed | 2024-05-16T07:00:52Z | 2024-08-29T00:21:17Z | https://github.com/jina-ai/serve/issues/6168 | [
"Stale"
] | Janus-Xu | 8 |
ansible/ansible | python | 83,915 | local_action does not delegate to controller when remote is localhost | ### Summary
When port-forwarding a remote SSH server to a local port and using that local port in Ansible, delegation to the controller node does not seem to be possible with either `local_action` or `delegate_to`, as in both cases Ansible runs delegated tasks on the remote node.
While it does make sense to me that `delegate_to` would use whatever is defined in the inventory, I assumed that `local_action` (per its name) would run the module *locally* - on the controller node.
### Issue Type
Bug Report
### Component Name
local_action,delegate_to
### Ansible Version
```console
$ ansible --version
ansible [core 2.17.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ubuntu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tmp/repro/.venv/lib/python3.10/site-packages/ansible
ansible collection location = /home/ubuntu/.ansible/collections:/usr/share/ansible/collections
executable location = .venv/bin/ansible
python version = 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (/tmp/repro/.venv/bin/python)
jinja version = 3.1.4
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
EDITOR(env: EDITOR) = subl -w
```
### OS / Environment
Running Ubuntu 22.04 on both the controller node and the remote.
### Steps to Reproduce
In a real environment, I forward the SSH port of a different machine on the same network as the machine I SSH into (i.e. a bastion setup), but this example highlights the issue as well and is simple to reproduce.
1. Create a file `playbook.yaml` with the following contents:
```yaml
---
- hosts: all
tasks:
- name: Runs on remote node as expected
ansible.builtin.file:
path: /tmp/file1
state: touch
- name: Runs on remote node which I guess makes sense but I don't know how to delegate to controller node in this case
ansible.builtin.file:
path: /tmp/file2
state: touch
delegate_to: localhost
- name: Runs on remote node which is unexpected
local_action:
module: ansible.builtin.file
path: /tmp/file3
state: touch
```
2. Create a file `inventory.yaml` with the following contents (replace `<user>` with the user to connect with):
```
all:
hosts:
127.0.0.1:
ansible_connection: ssh
ansible_port: 1234
ansible_user: <user>
```
3. Run `ssh <user>@<host> -L 127.0.0.1:1234:127.0.0.1:22` in another terminal window.
4. Run `ansible-playbook playbook.yaml -i inventory.yaml`.
5. Validate that all of the files have been created on the remote node and that none of the files (namely /tmp/file2 and /tmp/file3) have been created on the controller node.
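Not part of the report, but a possible workaround sketch (an assumption on my part): give the tunnelled host an inventory alias so that the name `127.0.0.1` is not redefined, which should let `delegate_to: localhost` resolve to the implicit localhost:

```yaml
all:
  hosts:
    remote_via_tunnel:           # hypothetical alias; any non-localhost name
      ansible_host: 127.0.0.1
      ansible_connection: ssh
      ansible_port: 1234
      ansible_user: <user>
```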
### Expected Results
For comparison to how I would expect it work, you can change the `inventory.yaml` file to:
```yaml
all:
hosts:
<host>:
ansible_connection: ssh
ansible_port: 22
ansible_user: <user>
```
You'll see that in this case, the `/tmp/file2` and `/tmp/file3` files are created on the controller node, while `/tmp/file1` is created on the remote node.
### Actual Results
```console
Too long to fit into the issue, here's the gist: https://gist.githubusercontent.com/Jackenmen/fa7ddca29eaf9704dd65d57d1b7511b9/raw/c8e530b9071a00768dc92fa830003d537b2ffa3e/actual_result_83915.log
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | closed | 2024-09-07T00:30:35Z | 2024-09-21T13:00:02Z | https://github.com/ansible/ansible/issues/83915 | [
"bug",
"affects_2.17"
] | Jackenmen | 5 |
vitalik/django-ninja | pydantic | 977 | [BUG] pagination does not support async view | **Describe the bug**
I created async APIs with Ninja pagination, and the error below occurs.
The qs passed into the pagination class is a coroutine.
**Versions (please complete the following information):**
- Python version: 3.11
- Django version: 4.2
- Django-Ninja version: 1.0.1
- Pydantic version: 2.1.1
| closed | 2023-12-04T07:01:13Z | 2023-12-04T08:39:03Z | https://github.com/vitalik/django-ninja/issues/977 | [] | samuelchen | 1 |
numpy/numpy | numpy | 27,679 | BUG: Inconsistent casting error for masked-arrays in-place operations | ### Describe the issue:
The changes in scalar casting of 2.0 seem to affect NumPy arrays and masked-arrays differently for in-place operations, resulting in unexpected errors for masked-arrays. See example below (which only occurs in NumPy 2.X, but not on 1.26.4).
Maybe linked to https://github.com/numpy/numpy/issues/7991 and https://github.com/numpy/numpy/issues/27029.
EDIT: Also have cases with the same error happening not in-place.
### Reproduce the code example:
```python
import numpy as np
test = np.array([1, 2], dtype="uint8")
test += 5
test2 = np.ma.masked_array([1, 2], dtype="uint8")
test2 += 5
```
### Error message:
```shell
Traceback (most recent call last):
File "/home/atom/miniconda3/envs/geoutils-dev/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3577, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-5-14912cbe9744>", line 1, in <module>
test2 += 5
File "/home/atom/miniconda3/envs/geoutils-dev/lib/python3.11/site-packages/numpy/ma/core.py", line 4422, in __iadd__
self._data.__iadd__(other_data)
numpy._core._exceptions._UFuncOutputCastingError: Cannot cast ufunc 'add' output from dtype('int64') to dtype('uint8') with casting rule 'same_kind'
```
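Not part of the report, but a possible interim workaround (my assumption, worth verifying): making the scalar's dtype explicit keeps the ufunc result in `uint8`, so the masked-array path behaves like the plain-array path:

```python
import numpy as np

test2 = np.ma.masked_array([1, 2], dtype="uint8")
# An explicit uint8 scalar avoids the int64 intermediate that trips the
# same_kind casting check inside MaskedArray.__iadd__ on NumPy 2.x.
test2 += np.uint8(5)
result = test2.tolist()
```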
### Python and NumPy Versions:
2.1.2
3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0]
### Runtime Environment:
[{'numpy_version': '2.1.2',
'python': '3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) '
'[GCC 13.3.0]',
'uname': uname_result(system='Linux', node='pop-os', release='6.8.0-76060800daily20240311-generic', version='#202403110203~1711393930~22.04~331756a SMP PREEMPT_DYNAMIC Mon M', machine='x86_64')},
{'simd_extensions': {'baseline': ['SSE', 'SSE2', 'SSE3'],
'found': ['SSSE3',
'SSE41',
'POPCNT',
'SSE42',
'AVX',
'F16C',
'FMA3',
'AVX2'],
'not_found': ['AVX512F',
'AVX512CD',
'AVX512_KNL',
'AVX512_KNM',
'AVX512_SKX',
'AVX512_CLX',
'AVX512_CNL',
'AVX512_ICL',
'AVX512_SPR']}},
{'architecture': 'Haswell',
'filepath': '/home/atom/miniconda3/envs/geoutils-dev/lib/libopenblasp-r0.3.28.so',
'internal_api': 'openblas',
'num_threads': 20,
'prefix': 'libopenblas',
'threading_layer': 'pthreads',
'user_api': 'blas',
'version': '0.3.28'}]
### Context for the issue:
I work with classes composed of NumPy masked-arrays stored in a `.data` attribute (a bit like for Xarray `.data`, except that it is a masked-array). Not being able to update those with simple in-place manipulation is quite limiting. | open | 2024-10-30T23:00:40Z | 2024-11-01T18:31:09Z | https://github.com/numpy/numpy/issues/27679 | [
"00 - Bug",
"component: numpy.ma"
] | rhugonnet | 3 |
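A workaround for the masked-array case above is to keep the right-hand operand in the array's own dtype, so the in-place add never has to down-cast from int64. A minimal sketch (hedged: this sidesteps the reported `np.ma` behavior rather than fixing it):

```python
import numpy as np

a = np.ma.masked_array([1, 2, 3], mask=[False, True, False], dtype=np.uint8)

# `a += 5` can trip the same-kind casting check inside np.ma on NumPy 2.x,
# because the scalar is materialized as int64 before the in-place add.
a += np.uint8(5)  # operand already in the target dtype: no down-cast needed

print(a)        # [6 -- 8]
print(a.dtype)  # uint8
```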
gradio-app/gradio | deep-learning | 9,992 | Update the hugging face space header | - [x] I have searched to see if a similar issue already exists.
We need to update the Huggingface space header version to latest to pull in some bug fixes. | open | 2024-11-19T03:49:34Z | 2024-11-25T23:32:04Z | https://github.com/gradio-app/gradio/issues/9992 | [
"good first issue",
"svelte"
] | pngwn | 1 |
tortoise/tortoise-orm | asyncio | 1,109 | HOW to use 'using_db' in 'in_transaction' when using 'bulk_create' | tortoise-orm==0.18.1
When I use 'in_transaction', I want to use 'MyModel.bulk_create', but it has no 'using_db' option, so I can't bind it to the transaction connection.
```
async with in_transaction('default') as tconn:
objects = [
MyModel(...) for _ in range(10)
]
    await MyModel.bulk_create(objects) ## bulk_create has no 'using_db' option here; how do I use 'tconn'?
...
```
How can I pass 'tconn' here, the same way I can with:
```
await MyModel.all().using_db(tconn).update(num=1)
```
| closed | 2022-04-21T08:16:32Z | 2022-04-21T08:36:31Z | https://github.com/tortoise/tortoise-orm/issues/1109 | [] | wchpeng | 3 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 440 | Poor results after pre-training; questions about the process | My training process was as follows:
1. Download chinese_llama_plus_lora_7b and chinese_alpaca_plus_lora_7b, then merge them with this script:
python scripts/merge_llama_with_chinese_lora.py \
--base_model /home/dell/project/Chinese-LLaMA-Alpaca/models/llama_hf_models \
--lora_model /home/dell/project/Chinese-LLaMA-Alpaca/models/chinese_llama_plus_lora_7b,/home/dell/project/Chinese-LLaMA-Alpaca/models/chinese_alpaca_plus_lora_7b \
--output_type huggingface \
--output_dir /home/dell/project/Chinese-LLaMA-Alpaca/models/merge_llama_lora
2. Pre-train on top of the merged chinese_llama_plus_lora_7b / chinese_alpaca_plus_lora_7b result by launching run_pt.sh, with these path settings:
pretrained_model=/home/dell/project/Chinese-LLaMA-Alpaca/models/merge_llama_lora
chinese_tokenizer_path=/home/dell/project/Chinese-LLaMA-Alpaca/models/merge_llama_lora
dataset_dir=/home/dell/project/Chinese-LLaMA-Alpaca/data
data_cache=/home/dell/project/Chinese-LLaMA-Alpaca/cache_data
output_dir=/home/dell/project/Chinese-LLaMA-Alpaca/models/stage2
3. After training finishes, merge the results; the file copies are:
cd stage2_models
cp /home/dell/project/Chinese-LLaMA-Alpaca/models/stage2/pytorch_model.bin adapter_model.bin
cp /home/dell/project/Chinese-LLaMA-Alpaca/models/chinese_llama_plus_lora_7b/adapter_config.json .
cp /home/dell/project/Chinese-LLaMA-Alpaca/models/chinese_llama_plus_lora_7b/*token* .
The merge script:
python scripts/merge_llama_with_chinese_lora.py \
--base_model /home/dell/project/Chinese-LLaMA-Alpaca/models/merge_llama_lora \
--lora_model /home/dell/project/Chinese-LLaMA-Alpaca/models/stage2_models \
--output_type pth \
--output_dir /home/dell/project/Chinese-LLaMA-Alpaca/models/merge_llama_final
4. Following the wiki steps, copy the merge_llama_final output into zh-models under llama.cpp and run:
python convert.py zh-models/7B/
./quantize ./zh-models/7B/ggml-model-f16.bin ./zh-models/7B/ggml-model-q5_0.bin q5_0
./main -m zh-models/7B/ggml-model-q5_0.bin --color -f prompts/alpaca.txt -ins -c 51200 --temp 0 -n 2048 --repeat_penalty 1.1
Situation:
1. My training data is about 1.1 GB, consisting of professional books, related papers, and domain Q&A data, one paragraph per line, saved as txt files in the /home/dell/project/Chinese-LLaMA-Alpaca/data directory;
2. After launching llama.cpp from the command line, the answers to my questions are essentially the same as the results without pre-training on my data.
Questions:
1. Is there something wrong with my process?
2. Given my situation, what training approach or process would work better?
Thanks to all contributors for their selfless dedication!
### Required checks (of the first three, keep only what you are asking about)
- [x] **Base model**: LLaMA-Plus / Alpaca-Plus
- [x] **Operating system**: Ubuntu
- [x] **Issue category**: Model training and fine-tuning | closed | 2023-05-27T05:34:13Z | 2023-06-19T22:02:44Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/440 | [
"stale"
] | chyyf006 | 30 |
benbusby/whoogle-search | flask | 416 | [FEATURE] <Autocomplete URL for browsers> | Since whoogle has autocomplete functionality, what about adding an autocomplete URL for the browsers that support it?
`https://<instance>/auto?q=` something like this?
This is vivaldi and you can add a "suggest URL thing"

| closed | 2021-09-05T06:00:14Z | 2021-09-06T04:55:42Z | https://github.com/benbusby/whoogle-search/issues/416 | [
"enhancement"
] | Albonycal | 2 |
deepspeedai/DeepSpeed | machine-learning | 6,853 | [BUG] Unable to Use `quantization_setting` for Customizing MoQ in DeepSpeed Inference | **Describe the bug**
Unable to customize MoQ using `quantization_setting` with DeepSpeed inference.
**To Reproduce**
Follow the example from the [DeepSpeed inference tutorial on datatypes and quantized models](https://www.deepspeed.ai/tutorials/inference-tutorial/#datatypes-and-quantized-models).
Below is the full script to reproduce the issue:
```Python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
import deepspeed
# Load T5 model and tokenizer
model_name = "t5-small" # You can change this to other T5 models like 't5-base' or 't5-large'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
# Define quantization settings
quantize_groups = 8 # Example setting; adjust as needed
mlp_extra_grouping = True # Example setting; adjust as needed
# Initialize DeepSpeed inference with quantization
model = deepspeed.init_inference(
model=model,
mp_size=1, # Model parallel size (1 if no model parallelism is used)
quantization_setting=(quantize_groups, mlp_extra_grouping)
)
# Tokenize input text
input_text = "Translate English to French: Hello, how are you?"
inputs = tokenizer(input_text, return_tensors="pt")
# Perform inference
outputs = model.generate(**inputs)
output_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Print the result
print("Input:", input_text)
print("Output:", output_text)
```
**Expected behavior**
The script should take the input in English and produce the French translation using the T5 model. However, an error is raised:
```bash
pydantic_core._pydantic_core.ValidationError: 1 validation error for DeepSpeedInferenceConfig
quantization_setting
Extra inputs are not permitted [type=extra_forbidden, input_value=(8, True), input_type=tuple]
For further information visit https://errors.pydantic.dev/2.10/v/extra_forbidden
```
**ds_report output**
```bash
[2024-12-11 10:01:03,448] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda (auto detect)
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
[WARNING] FP Quantizer is using an untested triton version (3.1.0), only 2.3.(0, 1) and 3.0.0 are known to be compatible with these kernels
fp_quantizer ........... [NO] ....... [NO]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
/opt/conda/compiler_compat/ld: /usr/local/cuda/lib64/libcufile.so: undefined reference to `dlvsym'
/opt/conda/compiler_compat/ld: /usr/local/cuda/lib64/libcufile.so: undefined reference to `dlopen'
/opt/conda/compiler_compat/ld: /usr/local/cuda/lib64/libcufile.so: undefined reference to `dlclose'
/opt/conda/compiler_compat/ld: /usr/local/cuda/lib64/libcufile.so: undefined reference to `dlerror'
/opt/conda/compiler_compat/ld: /usr/local/cuda/lib64/libcufile.so: undefined reference to `dlsym'
collect2: error: ld returned 1 exit status
gds .................... [NO] ....... [NO]
transformer_inference .. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.5
[WARNING] using untested triton version (3.1.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/opt/conda/lib/python3.10/site-packages/torch']
torch version .................... 2.5.1+cu124
deepspeed install path ........... ['/opt/conda/lib/python3.10/site-packages/deepspeed']
deepspeed info ................... 0.16.1, unknown, unknown
torch cuda version ............... 12.4
torch hip version ................ None
nvcc version ..................... 12.4
deepspeed wheel compiled w. ...... torch 2.5, cuda 12.4
shared memory (/dev/shm) size .... 188.94 GB
```
**Screenshots**
I will provide the full terminal output running my provided script on my machine:
```bash
python3 test_moq.py
[2024-12-11 09:56:28,176] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda (auto detect)
You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
[2024-12-11 09:56:30,285] [INFO] [logging.py:128:log_dist] [Rank -1] DeepSpeed info: version=0.16.1, git-hash=unknown, git-branch=unknown
Traceback (most recent call last):
File "/home/chenyuxu/platform/ml/hhemv2/experiments/precision_test/ds_quant_test/test_moq.py", line 22, in <module>
model = deepspeed.init_inference(
File "/opt/conda/lib/python3.10/site-packages/deepspeed/__init__.py", line 362, in init_inference
ds_inference_config = DeepSpeedInferenceConfig(**config_dict)
File "/opt/conda/lib/python3.10/site-packages/deepspeed/runtime/config_utils.py", line 57, in __init__
super().__init__(**data)
File "/opt/conda/lib/python3.10/site-packages/pydantic/main.py", line 214, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for DeepSpeedInferenceConfig
quantization_setting
Extra inputs are not permitted [type=extra_forbidden, input_value=(8, True), input_type=tuple]
For further information visit https://errors.pydantic.dev/2.10/v/extra_forbidden
```
**System info (please complete the following information):**
- OS: Debian GNU/Linux 11 (bullseye)
- GPU count and types: 8x L4
- Interconnects (if applicable): Just one machine
- Python version: 3.10.15
- Any other relevant info about your setup: Nothing else for now
| open | 2024-12-11T10:08:14Z | 2024-12-19T17:17:56Z | https://github.com/deepspeedai/DeepSpeed/issues/6853 | [
"bug",
"compression"
] | cyx96 | 3 |
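The `extra_forbidden` error is pydantic rejecting a keyword the current `DeepSpeedInferenceConfig` no longer declares; the tutorial's `quantization_setting` kwarg apparently predates the pydantic-based config schema in 0.16.x. The mechanism can be reproduced with plain pydantic v2 (the field names below are illustrative, not DeepSpeed's actual schema):

```python
from pydantic import BaseModel, ConfigDict, ValidationError


class InferenceConfig(BaseModel):
    """Stand-in for DeepSpeedInferenceConfig (illustrative fields only)."""
    model_config = ConfigDict(extra="forbid")  # unknown kwargs are rejected
    mp_size: int = 1


try:
    InferenceConfig(mp_size=1, quantization_setting=(8, True))
except ValidationError as err:
    error_type = err.errors()[0]["type"]
    print(error_type)  # extra_forbidden
```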
NVIDIA/pix2pixHD | computer-vision | 64 | Is it possible to freeze the model in pytorch ? | Thanks to this code, I succeeded in training on my custom data. As I set `--ngf=64`, I ended up with a very heavy .pth model: about **700 MB**.
I would like to ask whether there is a way to freeze the model in PyTorch (I previously worked with TensorFlow, where you can freeze a ckpt into a .pb and end up with a much lighter model for inference). Does PyTorch have something similar? I have tried converting .pth to .onnx, but the resulting model is not any lighter.
Thanks in advance for any advice (I'm really new to PyTorch). | open | 2018-09-26T13:17:06Z | 2018-09-26T13:17:06Z | https://github.com/NVIDIA/pix2pixHD/issues/64 | [] | chenyuZha | 0 |
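PyTorch has no direct analogue of TensorFlow's freeze-to-.pb, but two common ways to shrink a checkpoint are saving only the `state_dict` (which drops any optimizer state) and casting the weights to fp16, which roughly halves the file; TorchScript (`torch.jit.trace`) is the closest thing to freezing, though it does not shrink the weights by itself. A hedged sketch on a stand-in network (not pix2pixHD's actual generator):

```python
import os
import tempfile

import torch
from torch import nn

net = nn.Sequential(nn.Conv2d(3, 64, 3), nn.Conv2d(64, 3, 3))  # stand-in generator

tmpdir = tempfile.mkdtemp()
fp32_path = os.path.join(tmpdir, "g_fp32.pth")
fp16_path = os.path.join(tmpdir, "g_fp16.pth")

torch.save(net.state_dict(), fp32_path)   # weights only, fp32
net.half()                                # cast parameters to fp16
torch.save(net.state_dict(), fp16_path)   # roughly half the size

print(os.path.getsize(fp32_path), os.path.getsize(fp16_path))
```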
ets-labs/python-dependency-injector | flask | 821 | a simplest fastapi app, di does not work | I've been playing with the lib for a while. Cannot make it work :(
Python 3.12.3
dependency-injector = "^4.42.0"
fastapi = "^0.110.1"
uvicorn = "^0.29.0"
```
import uvicorn
from dependency_injector import containers, providers
from dependency_injector.wiring import Provide, inject
from fastapi import FastAPI, Depends
class X:
def x(self) -> int:
return 1
class Container(containers.DeclarativeContainer):
wiring_config = containers.WiringConfiguration(modules=[__name__])
x = providers.Factory(X)
app = FastAPI()
app.container = Container()
@app.get("/")
@inject
def root(foo: X = Depends(Provide[Container.x])):
print(foo.x())
if __name__ == "__main__":
uvicorn.run(
app,
host="127.0.0.1",
port=5000,
)
```
the error that I get:
```
File "src/dependency_injector/_cwiring.pyx", line 28, in dependency_injector._cwiring._get_sync_patched._patched
File "/home/sergey/.config/JetBrains/PyCharm2024.2/scratches/scratch_309.py", line 30, in root
print(foo.x())
^^^^^
AttributeError: 'Provide' object has no attribute 'x'
```
what am I doing wrong?
| closed | 2024-10-02T20:50:24Z | 2024-10-03T08:30:55Z | https://github.com/ets-labs/python-dependency-injector/issues/821 | [] | antonio-antuan | 1 |
neuml/txtai | nlp | 626 | QUESTION. Debugging, tracing, observability? | Hey, I like how minimalistic txtai is; it's a much easier way to get started than its competitors.
Thinking about debugging, I can't find a txtai equivalent of [LangChain debugging](https://python.langchain.com/docs/guides/debugging) or [LlamaIndex tracing](https://docs.llamaindex.ai/en/stable/understanding/tracing_and_debugging/tracing_and_debugging.html).
Does txtai provide something like advanced logging to see all the steps in a Workflow, Pipeline, etc.? | closed | 2023-12-31T23:46:09Z | 2024-11-19T12:49:55Z | https://github.com/neuml/txtai/issues/626 | [] | 4l1fe | 5 |
onnx/onnx | machine-learning | 6,583 | ImportError when importing onnx in python script packed in snap on arm machine | # Bug Report
### Is the issue related to model conversion?
No
### Describe the bug
There is an ImportError when trying to import onnx into a python program when the python file is packed into a snap package and run on an arm device. The error is: `ImportError: cannot import name 'ONNX_ML' from 'onnx.onnx_cpp2py_export' (unknown location)`.
### Reproduction instructions
I'm trying to pack onnx into a snap package and run it on an arm device. This is my project folder I used for testing:
[onnx_test.zip](https://github.com/user-attachments/files/18129005/onnx_test.zip)
The folder contains the following:
- 2 scripts to build a snap for arm and amd architecture:
```
#!/usr/bin/env bash
set -e
sudo snapcraft clean
sudo snapcraft --build-for=arm64 --verbosity=verbose
```
and
```
#!/usr/bin/env bash
set -e
sudo snapcraft clean
sudo snapcraft --build-for=amd64 --verbosity=verbose
```
- main.py file that tries to import onnx:
```
#!/usr/bin/env python3
print("now importing...")
import onnx
print("Import done")
```
- setup.cfg file (tbh no idea what it does):
```
[metadata]
description-file = README.md
[flake8]
max-line-length = 150
```
- setup.py (some metadata and it links to main.py file):
```
from setuptools import setup
setup(
name='test-app',
version='2.4.0',
description='Test onnx import',
author='me',
scripts=['main.py']
)
```
- snap folder with snapcraft.yaml:
```
name: test-app
title: ONNX test app
version: 1.0.0
summary: test app with onnx
description: |
Test the onnx import
base: core22
confinement: strict
grade: stable
architectures:
- build-on: [amd64, arm64]
build-for: [amd64]
- build-on: [amd64, arm64]
build-for: [arm64]
apps:
provider:
command: bin/main.py
daemon: simple
restart-condition: always
passthrough:
restart-delay: 30s
parts:
provider:
plugin: python
source: .
python-packages:
- onnx
```
1. I created the snaps by running the build-snap scripts for arm and amd: `./build-snap-arm64.sh` and `./build-snap-amd64.sh`
2. I installed the snaps on their respective system archticture:
`sudo snap install test-app_1.0.0_arm64.snap --dangerous` and `sudo snap install test-app_1.0.0_amd64.snap --dangerous`.
3. I run the app with:
`sudo snap run test-app.provider`
To see the logs and errors, I typed `sudo snap logs test-app.provider` on both systems.
On the amd system, there were no errors:
```2024-12-13T17:27:34+01:00 systemd[1]: Started snap.test-app.provider.service - Service for snap application test-app.provider.
2024-12-13T17:27:36+01:00 test-app.provider[68868]: now importing...
2024-12-13T17:27:36+01:00 test-app.provider[68868]: Import done
2024-12-13T17:27:36+01:00 systemd[1]: snap.test-app.provider.service: Deactivated successfully.
2024-12-13T17:27:36+01:00 systemd[1]: snap.test-app.provider.service: Consumed 1.924s CPU time.
```
On the arm system on the other hand:
```2024-12-13T17:54:17+01:00 systemd[1]: Started Service for snap application test-app.provider.
2024-12-13T17:54:17+01:00 test-app.provider[17552]: now importing...
2024-12-13T17:54:17+01:00 test-app.provider[17552]: Traceback (most recent call last):
2024-12-13T17:54:17+01:00 test-app.provider[17552]: File "/snap/test-app/x1/bin/main.py", line 3, in <module>
2024-12-13T17:54:17+01:00 test-app.provider[17552]: import onnx
2024-12-13T17:54:17+01:00 test-app.provider[17552]: File "/snap/test-app/x1/lib/python3.10/site-packages/onnx/__init__.py", line 77, in <module>
2024-12-13T17:54:17+01:00 test-app.provider[17552]: from onnx.onnx_cpp2py_export import ONNX_ML
2024-12-13T17:54:17+01:00 test-app.provider[17552]: ImportError: cannot import name 'ONNX_ML' from 'onnx.onnx_cpp2py_export' (unknown location)
2024-12-13T17:54:17+01:00 systemd[1]: snap.test-app.provider.service: Main process exited, code=exited, status=1/FAILURE
2024-12-13T17:54:17+01:00 systemd[1]: snap.test-app.provider.service: Failed with result 'exit-code'.
```
I saw issues where others had the same error, but it was always in the development process where onnx was compiled from source. In my case on the other hand, I get onnx directly from pip so I can't change anything suggested in the other issues. The error message says the location is unknown, but the file path "/snap/test-app/x1/lib/python3.10/site-packages/onnx/__init__.py" exists and has "ONNX_ML" in it, so I don't know where the problem comes from. If this is an issue with snap, let me know and I'll try it there.
### System information
- OS Platform and Distribution: tried it both on Ubuntu Core 22.04 (arm) and Debian 12 (arm)
- ONNX version (*e.g. 1.13*): not specified in snapcraft.yaml, so should be the latest from pip
- Python version: Python 3.9.2
| open | 2024-12-13T17:21:28Z | 2025-02-19T17:32:44Z | https://github.com/onnx/onnx/issues/6583 | [
"bug"
] | BjGoCraft | 0 |
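This symptom typically means the compiled `onnx_cpp2py_export` extension inside the snap is missing or was built for the wrong CPU architecture (for example, an amd64 wheel ending up in the arm64 snap, since both snaps were built on amd64). A hedged way to check from the arm device (paths follow this snap's layout and may need adjusting):

```shell
# Host architecture:
uname -m

# Which compiled extension did the snap actually ship?
ls /snap/test-app/current/lib/python3.10/site-packages/onnx/ 2>/dev/null \
  | grep -i cpp2py || echo "compiled extension not found"
```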
chatanywhere/GPT_API_free | api | 286 | Both the Zotero translation plugin and iTerm return 405 errors | **Describe the bug**
I have confirmed that the configuration is correct, but zotero-translator and iTerm both return a 405 error. Immersive Translate works fine. The problem occurs with both the .tech and .cn endpoints.
**Screenshots**
<img width="467" alt="image" src="https://github.com/user-attachments/assets/49d4df98-e63b-4fad-9b10-209dfe1875a0">
<img width="248" alt="image" src="https://github.com/user-attachments/assets/fae94560-42bb-4a17-9364-3ab1f1d0fa13">
| closed | 2024-08-24T02:56:21Z | 2024-08-26T05:50:40Z | https://github.com/chatanywhere/GPT_API_free/issues/286 | [] | JYDAG | 5 |
python-gino/gino | asyncio | 26 | Table inheritance? | open | 2017-08-04T05:48:11Z | 2018-05-21T07:15:24Z | https://github.com/python-gino/gino/issues/26 | [
"help wanted",
"question"
] | fantix | 2 | |
pallets-eco/flask-sqlalchemy | flask | 820 | README example throws error | If I try to run the example from the repository README, it throws this error:
> python3 app.py/home/username/.local/lib/python3.8/site-packages/flask_sqlalchemy/__init__.py:834: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
> warnings.warn(FSADeprecationWarning(
> Traceback (most recent call last):
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1283, in _execute_context
> self.dialect.do_execute(
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 590, in do_execute
> cursor.execute(statement, parameters)
> sqlite3.OperationalError: no such table: user
>
> The above exception was the direct cause of the following exception:
>
> Traceback (most recent call last):
> File "app.py", line 16, in <module>
> db.session.commit()
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/orm/scoping.py", line 163, in do
> return getattr(self.registry(), name)(*args, **kwargs)
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1042, in commit
> self.transaction.commit()
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 504, in commit
> self._prepare_impl()
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 483, in _prepare_impl
> self.session.flush()
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 2523, in flush
> self._flush(objects)
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 2664, in _flush
> transaction.rollback(_capture_exception=True)
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__
> compat.raise_(
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 178, in raise_
> raise exception
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 2624, in _flush
> flush_context.execute()
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 422, in execute
> rec.execute(self)
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 586, in execute
> persistence.save_obj(
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 239, in save_obj
> _emit_insert_statements(
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1135, in _emit_insert_statements
> result = cached_connections[connection].execute(
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1020, in execute
> return meth(self, multiparams, params)
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
> return connection._execute_clauseelement(self, multiparams, params)
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1133, in _execute_clauseelement
> ret = self._execute_context(
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1323, in _execute_context
> self._handle_dbapi_exception(
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1517, in _handle_dbapi_exception
> util.raise_(
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 178, in raise_
> raise exception
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1283, in _execute_context
> self.dialect.do_execute(
> File "/home/username/.local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 590, in do_execute
> cursor.execute(statement, parameters)
> sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: user
> [SQL: INSERT INTO user (username, email) VALUES (?, ?)]
> [parameters: ('Flask', 'example@example.com')]
> (Background on this error at: http://sqlalche.me/e/e3q8)
Is it expected? | closed | 2020-05-15T20:37:49Z | 2020-12-05T19:58:35Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/820 | [] | rafaellehmkuhl | 2 |
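The root cause in the traceback is "no such table: user": the README snippet inserts before the tables exist, so calling `db.create_all()` inside an application context before the first insert fixes it. The failure and the fix reduce to this stdlib-only sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

try:
    conn.execute("INSERT INTO user (username) VALUES ('Flask')")
except sqlite3.OperationalError as err:
    print(err)  # no such table: user

conn.execute("CREATE TABLE user (username TEXT)")  # the db.create_all() step
conn.execute("INSERT INTO user (username) VALUES ('Flask')")
count = conn.execute("SELECT COUNT(*) FROM user").fetchone()[0]
print(count)  # 1
```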
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,062 | synthesizer params vary for same input audio and text?? | If I run demo_cli.py for the same audio and text multiple times, I can see variation in the synthesizer output:
EX-Synthesizing the waveform:
{| ████████████████ 57000/57600 | Batch Size: 6 | Gen Rate: 5.1kHz | }float64
Synthesizing the waveform:
{| ████████████████ 47500/48000 | Batch Size: 5 | Gen Rate: 4.3kHz | }float64
Synthesizing the waveform:
{| ████████████████ 47500/48000 | Batch Size: 5 | Gen Rate: 4.2kHz | }float64
Synthesizing the waveform:
{| ████████████████ 57000/57600 | Batch Size: 6 | Gen Rate: 5.2kHz | }float64
Same audio and text, but the output varies?
Do you know what the reason could be?
| open | 2022-05-05T12:15:12Z | 2022-05-25T20:19:15Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1062 | [] | ayush431 | 8 |
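The `Gen Rate`/`Batch Size` numbers are throughput statistics, so some variation between runs is expected even for identical input; if the synthesized audio itself differs, the usual cause is unseeded stochastic sampling in the synthesizer/vocoder. Pinning every RNG makes runs reproducible; a hedged sketch (the torch line applies when PyTorch is in use):

```python
import random

import numpy as np


def seed_everything(seed: int) -> None:
    random.seed(seed)
    np.random.seed(seed)
    # torch.manual_seed(seed)  # add this line when PyTorch is involved


seed_everything(0)
first = np.random.rand(3)
seed_everything(0)
second = np.random.rand(3)
print(np.allclose(first, second))  # True
```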
huggingface/datasets | pytorch | 6,801 | got fileNotFound | ### Describe the bug
When I use load_dataset to load the nyanko7/danbooru2023 dataset, the cache is accessed through a symlink. There seems to be a problem during arrow_dataset initialization, and I get FileNotFoundError: [Errno 2] No such file or directory: '2945000.jpg'
### Steps to reproduce the bug
# code shown below
from datasets import load_dataset
data = load_dataset("nyanko7/danbooru2023",cache_dir=<symlink>)
data["train"][0]
### Expected behavior
I should get this result:
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=365x256 at 0x7FB730CB4070>, 'label': 0}
### Environment info
datasets==2.12.0
python==3.10.14
| closed | 2024-04-11T04:57:41Z | 2024-04-12T16:47:43Z | https://github.com/huggingface/datasets/issues/6801 | [] | laoniandisko | 2 |
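The error shows a bare relative filename ('2945000.jpg') being opened, which only resolves if the right base directory is joined in first; with a symlinked cache_dir, whether paths are resolved against the link or its target also matters. The principle in stdlib terms (paths are hypothetical stand-ins for the cache layout):

```python
import os
import tempfile

cache = tempfile.mkdtemp()                        # stand-in for the cache_dir
name = "2945000.jpg"
with open(os.path.join(cache, name), "wb") as f:  # stand-in cache entry
    f.write(b"\xff\xd8")

joined = os.path.join(cache, name)
print(os.path.exists(joined))      # True: base dir joined in
print(os.path.realpath(joined))    # symlinks resolved to the real target
```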
recommenders-team/recommenders | data-science | 1,929 | [BUG] News recommendation method: npa_MIND.ipynb cannot run properly! | ### Description
I'm running `recommenders/examples/00_quick_start/npa_MIND.ipynb`, and I encountered the following error message when I ran `print(model.run_eval(valid_news_file, valid_behaviors_file))`:
> File ~/anaconda3/envs/tf2/lib/python3.8/site-packages/tensorflow/python/client/session.py:1480, in BaseSession._Callable.__call__(self, *args, **kwargs)
> 1478 try:
> 1479 run_metadata_ptr = tf_session.TF_NewBuffer() if run_metadata else None
> -> 1480 ret = tf_session.TF_SessionRunCallable(self._session._session,
> 1481 self._handle, args,
> 1482 run_metadata_ptr)
> 1483 if run_metadata:
> 1484 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
>
> InvalidArgumentError: indices[0,0] = 173803 is not in [0, 94057)
> [[{{node news_encoder/embedding_1/embedding_lookup}}]]
The reason for this error is that the index is out of bounds. When we try to access an index that does not exist in an array or matrix, this error occurs. In this case, the value of the index [0,0] is 173803, but it should be within the range of [0, 94057). This may be due to incorrect handling or allocation of the index values, or some issues in the dataset.
In the MIND_type option, I chose 'small'. I noticed that there is a similar issue at https://github.com/microsoft/recommenders/issues/1291, but there haven't been any recent updates, so I'm not sure whether the issue has been resolved.
### In which platform does it happen?
Tensorflow: 2.6.1
Python: 3.7.11
Linux Ubuntu: 18.04.6
### How do we replicate the issue?
Just run `recommenders/examples/00_quick_start/npa_MIND.ipynb`
### Expected behavior (i.e. solution)
Print evaluation metrics, like as:
{'group_auc': 0.5228, 'mean_mrr': 0.2328, 'ndcg@5': 0.2377, 'ndcg@10': 0.303}
### Other Comments
Thank you to everyone who has helped and contributed to the community. | closed | 2023-05-08T08:06:59Z | 2023-05-13T09:02:39Z | https://github.com/recommenders-team/recommenders/issues/1929 | [
"bug"
] | SnowyMeteor | 1 |
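The out-of-range index (173803 against an embedding table of 94057 rows) means the word dictionary used to tokenize the news and the pretrained embedding matrix come from different artifacts (for example, MIND-small utils mixed with another download). A cheap sanity check to run before training (variable names are illustrative):

```python
import numpy as np


def check_vocab(token_ids: np.ndarray, embedding: np.ndarray) -> None:
    """Raise early if any token id falls outside the embedding table."""
    vocab_size = embedding.shape[0]
    max_id = int(token_ids.max())
    if max_id >= vocab_size:
        raise ValueError(
            f"token id {max_id} out of range for vocab size {vocab_size}; "
            "word_dict and embedding files are likely mismatched"
        )


embedding = np.zeros((94057, 300), dtype=np.float32)  # rows = vocab size
good_ids = np.array([[12, 7, 94056]])
bad_ids = np.array([[12, 173803, 7]])

check_vocab(good_ids, embedding)  # passes silently
try:
    check_vocab(bad_ids, embedding)
except ValueError as err:
    print(err)
```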
okken/pytest-check | pytest | 125 | Using pytest_check asserts that there will be errors | My error is shown below
```shell
../venv/lib/python3.11/site-packages/_pytest/config/__init__.py:1173
/Users/shanyingqing/code/weeeTest/venv/lib/python3.11/site-packages/_pytest/config/__init__.py:1173: PytestAssertRewriteWarning: Module already imported so cannot be rewritten: pytest_check
self._mark_plugins_for_rewrite(hook)
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
``` | closed | 2023-04-04T08:35:40Z | 2023-06-06T21:03:58Z | https://github.com/okken/pytest-check/issues/125 | [
"need more info"
] | Yingqingshan | 2 |
Gozargah/Marzban | api | 1,141 | raw client config template! | as client configs are so complicated, some users needs a lot of things like External Config (https://github.com/Gozargah/Marzban/pull/1118) & fake config (to show user expire time and else) without creating additional inbound & many options like tcpFastOpen that is not usable with Marzban currently
I have a idea to solve all of them
we can create multiple template files
this templates must contain something like
`v2ray.template` :
```
vless://<vless_uuid>@example.com:443?type=ws&path=...
vmess://{"v":"2","add":"<server_ipv4>","port":2096,"id":"<vmess_uuid>"...}
```
then replace variables with actual data, base64 the vmess data and done
this can get done for all other types of configs like Clash.Meta / Sing-Box / v2ray-json & etc.
we can add variable with a list of random data like `<["server1.com", "server2.com"]>` useful for random user-agent or list of random IPs for Address or random port with every sub update
advantages is that (for example) you can add hy2 and other type of unsupported configs with detour, and you can use all possible options in different clients
but you will lose some options like randomize configs (can get implemented for old v2ray share links), and Hosts of GUI panel will be ignored
as this feature will be optional, so will not hurt anyone | closed | 2024-07-19T10:07:16Z | 2024-07-20T21:07:53Z | https://github.com/Gozargah/Marzban/issues/1141 | [
"Feature"
] | fodhelper | 1 |
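The substitution step described above (replace variables, then base64-encode the vmess JSON) is straightforward with the standard library; a sketch, where the template syntax, variable names, and endpoints are all hypothetical rather than Marzban's actual schema:

```python
import base64
import json
from string import Template

# Hypothetical template lines, e.g. loaded from v2ray.template:
vless_tmpl = Template("vless://${vless_uuid}@example.com:443?type=ws&path=/ws")
vmess_tmpl = {"v": "2", "add": "${server_ipv4}", "port": 2096, "id": "${vmess_uuid}"}

values = {
    "vless_uuid": "11111111-2222-3333-4444-555555555555",
    "server_ipv4": "203.0.113.10",
    "vmess_uuid": "66666666-7777-8888-9999-000000000000",
}

vless_link = vless_tmpl.substitute(values)

# Substitute inside the vmess dict's string values, then base64 the JSON:
filled = {k: Template(v).substitute(values) if isinstance(v, str) else v
          for k, v in vmess_tmpl.items()}
vmess_link = "vmess://" + base64.b64encode(json.dumps(filled).encode()).decode()

print(vless_link)
print(vmess_link)
```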
NullArray/AutoSploit | automation | 1,229 | Unhandled Exception (09a4fcccc) | Autosploit version: `2.2.3`
OS information: `Linux-3.18.130-dieD-Rebase-armv8l-with-libc`
Running context: `/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/autosploit.py`
Error meesage: `[Errno 2] No such file or directory: 'host'`
Error traceback:
```
Traceback (most recent call):
File "/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/autosploit/main.py", line 123, in main
terminal.terminal_main_display(loaded_exploits)
File "/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/lib/term/terminal.py", line 331, in terminal_main_display
self.custom_host_list(loaded_mods)
File "/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/lib/term/terminal.py", line 277, in custom_host_list
self.exploit_gathered_hosts(mods, hosts=provided_host_file)
File "/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/lib/term/terminal.py", line 215, in exploit_gathered_hosts
host_file = lib.exploitation.exploiter.whitelist_wash(open(hosts).readlines(), whitelist_file)
IOError: [Errno 2] No such file or directory: 'host'
```
Metasploit launched: `False`
| closed | 2019-12-31T10:53:45Z | 2020-02-02T01:20:00Z | https://github.com/NullArray/AutoSploit/issues/1229 | [] | AutosploitReporter | 0 |
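The traceback is an unguarded `open()` on the user-supplied host-file path (here the literal string 'host', which doesn't exist). A defensive pattern for that spot, sketched generically rather than as AutoSploit's actual code:

```python
import os
import tempfile


def read_host_list(path: str) -> list:
    """Return non-empty, stripped lines; fail with a clear error if missing."""
    if not os.path.isfile(path):
        raise FileNotFoundError(f"host file not found: {path!r}")
    with open(path) as handle:
        return [line.strip() for line in handle if line.strip()]


with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("203.0.113.5\n203.0.113.6\n")

hosts = read_host_list(tmp.name)
print(hosts)  # ['203.0.113.5', '203.0.113.6']
```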
deezer/spleeter | tensorflow | 667 | [Bug] Windows CLI: 2.3.0 crashes when working in GPU mode with RTX3060 laptop 6GB VRAM | - [Y] I didn't find a similar issue already open.
- [Y] I read the documentation (README AND Wiki)
- [Y] I have installed FFMpeg
- [Y] My problem is related to Spleeter only, not a derivative product (such as Webapplication, or GUI provided by others)
## Description
I use spleeter 2.3.0 in Windows CLI mode working on the GPU, and then it crashes (CPU 5900HS; GPU RTX 3060 laptop, 6GB VRAM).
I discovered that the VRAM occupied by the python process increased continuously up to 5.6GB before the crash happened.
## Step to reproduce
1. Installed cuda(11.1 11.2 11.4) and proper version of cudnn, pip install spleeter
2. Run in Windows CLI window:
D:\python>spleeter separate -p spleeter:2stems -o output spleeter/audio_example.mp3
3. Got error below:
2021-09-26 15:03:29.181055: F tensorflow/stream_executor/cuda/cuda_fft.cc:439] failed to initialize batched cufft plan with customized allocator: Failed to make cuFFT batched plan.
## Output
2021-09-26 15:03:29.181055: F tensorflow/stream_executor/cuda/cuda_fft.cc:439] failed to initialize batched cufft plan with customized allocator: Failed to make cuFFT batched plan.
```bash
D:\python>spleeter separate -p spleeter:2stems -o output spleeter/audio_example.mp3
2021-09-26 15:03:29.181055: F tensorflow/stream_executor/cuda/cuda_fft.cc:439] failed to initialize batched cufft plan with customized allocator: Failed to make cuFFT batched plan.
```
## Environment
| | |
| ----------------- | ------------------------------- |
| OS | Windows 10 |
| Installation type | pip |
| RAM available | 32GB RAM, 6GB VRAM |
| Hardware spec | CPU 5900HS with 32GB memory; GPU RTX3060 laptop with 6GB VRAM |
## Additional context
NONE
| open | 2021-09-26T10:58:56Z | 2021-12-21T10:29:52Z | https://github.com/deezer/spleeter/issues/667 | [
"bug",
"invalid"
] | ths0013 | 3 |
ipyflow/ipyflow | jupyter | 116 | add observable-style minimap to display cell dependencies | open | 2022-12-21T19:25:23Z | 2023-08-13T04:32:18Z | https://github.com/ipyflow/ipyflow/issues/116 | [
"enhancement"
] | smacke | 1 | |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,218 | demo_cli melspectrogram error | I get this error when running demo_cli.py:
```
File "/home/carlos/Downloads/Real-Time-Voice-Cloning-master/encoder/audio.py", line 58, in wav_to_mel_spectrogram
    frames = librosa.feature.melspectrogram(
TypeError: melspectrogram() takes 0 positional arguments but 2 positional arguments (and 2 keyword-only arguments) were given
```
Any ideas as to why that may be?
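For what it's worth, newer librosa releases made `melspectrogram` keyword-only, which produces exactly this TypeError for positional calls; the fix is presumably to pass `y=` and `sr=` as keywords. A minimal self-contained sketch of the same failure mode, using a stand-in function rather than librosa itself:

```python
# Stand-in for librosa.feature.melspectrogram, which newer librosa
# versions define with keyword-only parameters (note the bare *).
def melspectrogram(*, y=None, sr=22050, n_fft=2048, hop_length=512):
    return f"mel({len(y)} samples @ {sr} Hz)"

wav = [0.0] * 16000

# Positional call: raises TypeError, matching the traceback above.
try:
    melspectrogram(wav, 16000)
except TypeError as e:
    print("positional:", e)

# Keyword call: works.
print("keyword:", melspectrogram(y=wav, sr=16000, n_fft=400, hop_length=160))
```

So changing the call in `encoder/audio.py` to `librosa.feature.melspectrogram(y=wav, sr=sampling_rate, ...)` is likely the fix (or pinning an older librosa).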
Thanks | open | 2023-05-14T19:40:25Z | 2023-05-17T07:20:26Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1218 | [] | Noobos100 | 1 |
pydata/pandas-datareader | pandas | 931 | RemoteDataError | Yesterday I found this error.
I suppose Yahoo Finance has changed its HTML structure.
```
[/usr/local/lib/python3.7/dist-packages/pandas_datareader/base.py](https://localhost:8080/#) in _get_response(self, url, params, headers)
179 msg += "\nResponse Text:\n{0}".format(last_response_text)
180
--> 181 raise RemoteDataError(msg)
182
``` | open | 2022-04-24T23:08:59Z | 2022-04-24T23:08:59Z | https://github.com/pydata/pandas-datareader/issues/931 | [] | higebobo | 0 |
roboflow/supervision | pytorch | 748 | [JSONSink] - allowing to serialise Detections to a JSON file | ### Description
The `JSONSink` class should be designed to efficiently convert and store detection data into a JSON file format. This class should be capable of handling bounding box coordinates in the `xyxy` format and converting them into a JSON structure with fields: `x_min`, `y_min`, `x_max`, `y_max`. It should also support fields like `confidence`, `class_id`, and `tracker_id`. The JSONSink should allow for the inclusion of additional custom_data along with the `Detections.data` field, providing flexibility for extensive data logging.
### Scaffold API
```python
import json
from typing import Any, Dict, List, Optional, TextIO


class JSONSink:
    def __init__(self, filename: str = 'output.json'):
        self.filename: str = filename
        self.file: Optional[TextIO] = None
        self.data: List[Dict[str, Any]] = []

    def __enter__(self) -> 'JSONSink':
        self.open()
        return self

    def __exit__(self, exc_type: Optional[type], exc_val: Optional[Exception], exc_tb: Optional[Any]) -> None:
        self.write_and_close()

    def open(self) -> None:
        self.file = open(self.filename, 'w')

    def write_and_close(self) -> None:
        if self.file:
            json.dump(self.data, self.file, indent=4)
            self.file.close()

    def append(self, detections: Detections, custom_data: Optional[Dict[str, Any]] = None) -> None:
        # logic for saving detection data to a JSON file
        ...
```
### Usage example
- With 'with' statement
```python
with JSONSink() as sink:
    detections = Detections(...)
    custom_data = {'frame_number': 42}
    sink.append(detections, custom_data)
```
- With manual open and close
```python
sink = JSONSink()
sink.open()
detections = Detections(...)
sink.append(detections, {'frame_number': 42})
sink.write_and_close()
```
### Input / Output examples
- input
```python
import numpy as np

# First detection instance
detections = Detections(
    xyxy=np.array([
        [10, 20, 30, 40],
        [50, 60, 70, 80],
    ]),
    confidence=np.array([0.7, 0.8]),
    class_id=np.array([0, 0]),
    tracker_id=np.array([0, 1]),
    data={
        'class_name': np.array(['person', 'person'])
    }
)

# Second detection instance
second_detections = Detections(
    xyxy=np.array([
        [15, 25, 35, 45],
        [55, 65, 75, 85],
    ]),
    confidence=np.array([0.6, 0.9]),
    class_id=np.array([1, 1]),
    tracker_id=np.array([2, 3]),
    data={
        'class_name': np.array(['car', 'car'])
    }
)

# Custom data for each detection instance
custom_data = {'frame_number': 42}
second_custom_data = {'frame_number': 43}
```
- output
```json
[
{
"x_min": 10, "y_min": 20, "x_max": 30, "y_max": 40,
"class_id": 0, "confidence": 0.7, "tracker_id": 0, "class_name": "person",
"frame_number": 42
},
{
"x_min": 50, "y_min": 60, "x_max": 70, "y_max": 80,
"class_id": 0, "confidence": 0.8, "tracker_id": 1, "class_name": "person",
"frame_number": 42
},
{
"x_min": 15, "y_min": 25, "x_max": 35, "y_max": 45,
"class_id": 1, "confidence": 0.6, "tracker_id": 2, "class_name": "car",
"frame_number": 43
},
{
"x_min": 55, "y_min": 65, "x_max": 75, "y_max": 85,
"class_id": 1, "confidence": 0.9, "tracker_id": 3, "class_name": "car",
"frame_number": 43
}
]
```
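A minimal, dependency-free sketch of what `append` could flatten each detection into, following the input/output examples above (plain lists stand in for the NumPy arrays; this is an illustration, not an official implementation):

```python
import json

def detections_to_records(xyxy, confidence, class_id, tracker_id, data, custom_data=None):
    """Flatten one Detections-like instance into JSON-ready dicts."""
    records = []
    for i, (x_min, y_min, x_max, y_max) in enumerate(xyxy):
        record = {
            "x_min": x_min, "y_min": y_min,
            "x_max": x_max, "y_max": y_max,
            "class_id": class_id[i],
            "confidence": confidence[i],
            "tracker_id": tracker_id[i],
        }
        # Per-detection extras from Detections.data (e.g. class_name).
        for key, values in (data or {}).items():
            record[key] = values[i]
        # Shared custom_data (e.g. frame_number) applied to every row.
        record.update(custom_data or {})
        records.append(record)
    return records

rows = detections_to_records(
    xyxy=[[10, 20, 30, 40], [50, 60, 70, 80]],
    confidence=[0.7, 0.8],
    class_id=[0, 0],
    tracker_id=[0, 1],
    data={"class_name": ["person", "person"]},
    custom_data={"frame_number": 42},
)
print(json.dumps(rows, indent=4))
```

Inside `append`, the result would simply be extended onto `self.data` so that `write_and_close` can dump everything at once.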
### Additional
- Note: Please share a Google Colab with minimal code to test the new feature. We know it's additional work, but it will definitely speed up the review process. Each change must be tested by the reviewer. Setting up a local environment to do this is time-consuming. Please ensure that Google Colab can be accessed without any issues (make it public). Thank you! 🙏🏻 | closed | 2024-01-18T20:25:20Z | 2024-03-01T00:23:47Z | https://github.com/roboflow/supervision/issues/748 | [
"enhancement",
"Q1.2024"
] | SkalskiP | 0 |
adbar/trafilatura | web-scraping | 42 | Replace requests with bare urllib3 | The current use of `requests` sessions in `cli_utils.py` doesn't appear to be thread-safe (https://github.com/psf/requests/issues/2766).
The full functionality of the module isn't really needed here and a change would help reducing the total number of dependencies as mentioned in https://github.com/adbar/trafilatura/issues/41. | closed | 2020-12-15T16:19:35Z | 2020-12-16T15:45:29Z | https://github.com/adbar/trafilatura/issues/42 | [
"enhancement"
] | adbar | 1 |
lucidrains/vit-pytorch | computer-vision | 130 | Transformer in Convolutional Neural Networks [Paper Implementation] | Hi @lucidrains
Are you considering implementing the titled "Transformer in Convolutional Neural Networks" paper in your repo?
https://arxiv.org/pdf/2106.03180.pdf
Regards,
Khawar | open | 2021-07-04T07:18:25Z | 2021-07-04T07:18:25Z | https://github.com/lucidrains/vit-pytorch/issues/130 | [] | khawar-islam | 0 |
huggingface/datasets | pytorch | 6,796 | CI is broken due to hf-internal-testing/dataset_with_script | CI is broken for test_load_dataset_distributed_with_script. See: https://github.com/huggingface/datasets/actions/runs/8614926216/job/23609378127
```
FAILED tests/test_load.py::test_load_dataset_distributed_with_script[None] - assert False
+ where False = all(<generator object test_load_dataset_distributed_with_script.<locals>.<genexpr> at 0x7f0c741de3b0>)
FAILED tests/test_load.py::test_load_dataset_distributed_with_script[force_redownload] - assert False
+ where False = all(<generator object test_load_dataset_distributed_with_script.<locals>.<genexpr> at 0x7f0be45f6ea0>)
``` | closed | 2024-04-10T06:56:02Z | 2024-04-12T09:02:13Z | https://github.com/huggingface/datasets/issues/6796 | [
"bug"
] | albertvillanova | 4 |
JaidedAI/EasyOCR | pytorch | 579 | Failed to detect and determine text with confidence. | (photo of the LED display omitted)
I have fairly uniform writing on an LED screen.
Two problems:
- changing light
- black and green backgrounds alternating row by row
With an adaptive threshold and by analyzing the rows one by one, we should be able to get good images with text that is easy to detect.
We get some good results:
(screenshots of the good results omitted)
And some bad results:
(screenshots of the bad results omitted)
I feel like the neural network could use some new characters and noise. Is there an easy way to add images on top of a pre-trained model, maybe? Or are there some tricks that I didn't see for improving the detection?
[stackoverflow link](https://stackoverflow.com/questions/69705008/difficulty-to-use-cv2-adaptivethreshold-with-the-reflection-problematic) | closed | 2021-10-28T15:01:51Z | 2021-12-15T16:30:05Z | https://github.com/JaidedAI/EasyOCR/issues/579 | [] | AvekIA | 2 |
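The row-by-row adaptive-threshold idea from the issue above can be sketched without OpenCV: per row, binarize against that row's own mean so the alternating black/green backgrounds do not share one global threshold (in practice one would reach for `cv2.adaptiveThreshold` or per-row Otsu instead; this is only an illustration of the approach):

```python
def binarize_per_row(image):
    """image: 2D list of grayscale values; threshold each row at its own mean."""
    out = []
    for row in image:
        mean = sum(row) / len(row)
        out.append([255 if px > mean else 0 for px in row])
    return out

# Two rows with very different backgrounds: a single global threshold
# would fail on one of them, but per-row means isolate the glyph in both.
dark_row = [10, 10, 200, 10, 10]        # bright glyph on dark background
bright_row = [120, 120, 250, 120, 120]  # brighter row, same glyph pattern
print(binarize_per_row([dark_row, bright_row]))
# → [[0, 0, 255, 0, 0], [0, 0, 255, 0, 0]]
```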
oegedijk/explainerdashboard | plotly | 212 | get_feature_names_out() not working with FunctionTransformer | Cheers,
If you use a FunctionTransformer in a ColumnTransformer, explainerdashboard does not recognize the column names.
I think this is because you use get_feature_names_out() in your code rather than the attribute.
Lightning-AI/pytorch-lightning | machine-learning | 20,348 | Add support S3 as a storage option for profiling results | ### Description & Motivation
Description: Currently, when using the <code>default_root_dir</code> parameter to specify an S3 bucket as the storage location for profiling results, the profiler fails to work due to lack of support for S3 I/O. This limitation prevents users from storing profiling results in a scalable and durable storage solution.
Motivation: As a user of the Lightning library, I would like to be able to store profiling results in an S3 bucket to take advantage of its scalability, durability, and ease of access. This would enable me to easily collect and analyze profiling data from multiple runs, and share the results with others.
### Pitch
I propose that the Lightning library be extended to support S3 as a storage option for profiling results. This could be achieved by adding S3 I/O support to the profiler, allowing users to specify an S3 bucket as the <code>default_root_dir</code>. This would enable users to store profiling results in a flexible and scalable storage solution, and would greatly enhance the usability and usefulness of the profiling feature.
### Alternatives
_No response_
### Additional context
_No response_
cc @borda | open | 2024-10-18T15:23:04Z | 2024-10-18T15:23:27Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20348 | [
"feature",
"needs triage"
] | kimminw00 | 0 |
custom-components/pyscript | jupyter | 439 | Strange behaviour of @event_trigger | Hi,
I just found some strange behaviour when using the event_trigger.
Having something like this:
```python
@event_trigger("some_event", "action == 'some_action' and state=='finished'")
def dummy(**kwargs):
    log.debug(f"{kwargs}")
```

Triggering the event with `event.fire("some_event")` leads to the correct result: the function dummy isn't called and thus there is no log output.
Triggering the event with `event.fire("some_event", action="some_action",state="finished")` also works correctly and leads to a log output like this:
`{'trigger_type': 'event', 'event_type': 'some_event', 'context': <homeassistant.core.Context object at 0x7f7ced65ab40>, 'action': 'some_action', 'state': 'finished'}`
BUT, each subsequent event without any arguments (`event.fire("some_event")`) now also triggers the function and shows an output like this:
`{'trigger_type': 'event', 'event_type': 'some_event', 'context': <homeassistant.core.Context object at 0x7f7ce9657e00>}`
It seems that the str_expr is not evaluated anymore. Giving some random arguments to the event like this
`event.fire("some_event", x="s")`
also triggers the function.
But providing at least one of the arguments to be checked in the str_expr like here
`event.fire("some_event", action="s")`
prevents the function from being called.
That's pretty strange. I would expect the str_expr to return False if the arguments are not provided in the event, instead of ignoring this and evaluating to True.
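The semantics the reporter expects, where an undefined name in str_expr makes the trigger evaluate to False rather than being skipped, can be sketched like this (illustrative only: pyscript's real evaluator is its own AST interpreter, not Python's `eval`):

```python
def guard_passes(str_expr, event_kwargs):
    """Evaluate a trigger guard; treat missing event arguments as a failed match."""
    try:
        return bool(eval(str_expr, {"__builtins__": {}}, dict(event_kwargs)))
    except NameError:
        # e.g. 'action' or 'state' was not supplied with the event.
        return False

expr = "action == 'some_action' and state == 'finished'"
print(guard_passes(expr, {"action": "some_action", "state": "finished"}))  # True
print(guard_passes(expr, {}))               # False (no kwargs at all)
print(guard_passes(expr, {"x": "s"}))       # False (unrelated kwarg)
print(guard_passes(expr, {"action": "s"}))  # False (wrong value)
```

This reproduces all four behaviours described above, except that the no-kwargs and unrelated-kwargs cases correctly come out False instead of triggering.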
| closed | 2023-02-23T10:57:29Z | 2023-04-08T21:17:12Z | https://github.com/custom-components/pyscript/issues/439 | [] | Michael-CGN | 3 |
deepfakes/faceswap | deep-learning | 1,085 | Being informed on manual preview refresh | On slower hardware and with demanding model configurations it can take several minutes until a manual preview refresh actually completes.
For that reason I suggest that another message "Refresh preview done" will be added, so that the user can focus on other things in the meantime and still reliably tell whether the refresh has completed or not. | closed | 2020-11-10T10:37:25Z | 2021-05-30T10:48:41Z | https://github.com/deepfakes/faceswap/issues/1085 | [
"feature"
] | OreSeq | 1 |
SYSTRAN/faster-whisper | deep-learning | 941 | can't get it to use GPU | So I've been trying to use whisperX but can't get it to work, so I decided to test with faster-whisper, since whisperX is built on it.
But I still can't get it working; it only uses the CPU.
I've also downloaded the CUDA 11 and 12 files from https://github.com/Purfview/whisper-standalone-win/releases/tag/libs and pasted them into the bin directory.
(screenshot omitted)
```# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: win-64
aiohttp=3.9.5=pypi_0
aiosignal=1.3.1=pypi_0
alembic=1.13.2=pypi_0
antlr4-python3-runtime=4.9.3=pypi_0
asteroid-filterbanks=0.4.0=pypi_0
asttokens=2.4.1=pyhd8ed1ab_0
async-timeout=4.0.3=pypi_0
attrs=23.2.0=pypi_0
audioread=3.0.1=pypi_0
av=12.3.0=pypi_0
blas=1.0=mkl
brotli-python=1.1.0=py310h00ffb61_1
bzip2=1.0.8=h2466b09_7
ca-certificates=2024.7.4=h56e8100_0
certifi=2024.7.4=pyhd8ed1ab_0
cffi=1.16.0=py310h8d17308_0
charset-normalizer=3.3.2=pyhd8ed1ab_0
click=8.1.7=pypi_0
colorama=0.4.6=pyhd8ed1ab_0
coloredlogs=15.0.1=pypi_0
colorlog=6.8.2=pypi_0
comm=0.2.2=pyhd8ed1ab_0
contourpy=1.2.1=pypi_0
ctranslate2=4.3.1=pypi_0
cuda-cccl=12.5.39=0
cuda-cccl_win-64=12.5.39=0
cuda-cudart=12.1.105=0
cuda-cudart-dev=12.1.105=0
cuda-cupti=12.1.105=0
cuda-libraries=12.1.0=0
cuda-libraries-dev=12.1.0=0
cuda-nvrtc=12.1.105=0
cuda-nvrtc-dev=12.1.105=0
cuda-nvtx=12.1.105=0
cuda-opencl=12.5.39=0
cuda-opencl-dev=12.5.39=0
cuda-profiler-api=12.5.39=0
cuda-runtime=12.1.0=0
cuda-version=12.5=3
cycler=0.12.1=pypi_0
debugpy=1.8.2=py310h9e98ed7_0
decorator=5.1.1=pyhd8ed1ab_0
docopt=0.6.2=pypi_0
einops=0.8.0=pypi_0
exceptiongroup=1.2.2=pyhd8ed1ab_0
executing=2.0.1=pyhd8ed1ab_0
faster-whisper=1.0.3=pypi_0
filelock=3.15.4=pyhd8ed1ab_0
flatbuffers=24.3.25=pypi_0
fonttools=4.53.1=pypi_0
freetype=2.12.1=hdaf720e_2
frozenlist=1.4.1=pypi_0
fsspec=2024.6.1=pypi_0
greenlet=3.0.3=pypi_0
h2=4.1.0=pyhd8ed1ab_0
hpack=4.0.0=pyh9f0ad1d_0
huggingface-hub=0.24.2=pypi_0
humanfriendly=10.0=pypi_0
hyperframe=6.0.1=pyhd8ed1ab_0
hyperpyyaml=1.2.2=pypi_0
idna=3.7=pyhd8ed1ab_0
importlib-metadata=8.2.0=pyha770c72_0
importlib_metadata=8.2.0=hd8ed1ab_0
intel-openmp=2024.2.0=h57928b3_980
ipykernel=6.29.5=pyh4bbf305_0
ipython=8.26.0=pyh7428d3b_0
jedi=0.19.1=pyhd8ed1ab_0
jinja2=3.1.4=pyhd8ed1ab_0
joblib=1.4.2=pypi_0
julius=0.2.7=pypi_0
jupyter_client=8.6.2=pyhd8ed1ab_0
jupyter_core=5.7.2=py310h5588dad_0
kiwisolver=1.4.5=pypi_0
krb5=1.21.3=hdf4eb48_0
lazy-loader=0.4=pypi_0
lcms2=2.16=h67d730c_0
lerc=4.0.0=h63175ca_0
libblas=3.9.0=1_h8933c1f_netlib
libcblas=3.9.0=5_hd5c7e75_netlib
libcublas=12.1.0.26=0
libcublas-dev=12.1.0.26=0
libcufft=11.0.2.4=0
libcufft-dev=11.0.2.4=0
libcurand=10.3.6.82=0
libcurand-dev=10.3.6.82=0
libcusolver=11.4.4.55=0
libcusolver-dev=11.4.4.55=0
libcusparse=12.0.2.55=0
libcusparse-dev=12.0.2.55=0
libdeflate=1.20=hcfcfb64_0
libffi=3.4.2=h8ffe710_5
libhwloc=2.11.1=default_h8125262_1000
libiconv=1.17=hcfcfb64_2
libjpeg-turbo=3.0.0=hcfcfb64_1
liblapack=3.9.0=5_hd5c7e75_netlib
libnpp=12.0.2.50=0
libnpp-dev=12.0.2.50=0
libnvjitlink=12.1.105=0
libnvjitlink-dev=12.1.105=0
libnvjpeg=12.1.1.14=0
libnvjpeg-dev=12.1.1.14=0
libpng=1.6.43=h19919ed_0
librosa=0.10.2.post1=pypi_0
libsodium=1.0.18=h8d14728_1
libsqlite=3.46.0=h2466b09_0
libtiff=4.6.0=hddb2be6_3
libuv=1.48.0=hcfcfb64_0
libwebp-base=1.4.0=hcfcfb64_0
libxcb=1.16=hcd874cb_0
libxml2=2.12.7=h0f24e4e_4
libzlib=1.3.1=h2466b09_1
lightning=2.3.3=pypi_0
lightning-utilities=0.11.6=pypi_0
llvmlite=0.43.0=pypi_0
m2w64-gcc-libgfortran=5.3.0=6
m2w64-gcc-libs=5.3.0=7
m2w64-gcc-libs-core=5.3.0=7
m2w64-gmp=6.1.0=2
m2w64-libwinpthread-git=5.0.0.4634.697f757=2
mako=1.3.5=pypi_0
markdown-it-py=3.0.0=pypi_0
markupsafe=2.1.5=py310h8d17308_0
matplotlib=3.9.1=pypi_0
matplotlib-inline=0.1.7=pyhd8ed1ab_0
mdurl=0.1.2=pypi_0
mkl=2023.1.0=h6a75c08_48682
mpmath=1.3.0=pyhd8ed1ab_0
msgpack=1.0.8=pypi_0
msys2-conda-epoch=20160418=1
multidict=6.0.5=pypi_0
nest-asyncio=1.6.0=pyhd8ed1ab_0
networkx=3.3=pyhd8ed1ab_1
nltk=3.8.1=pypi_0
numba=0.60.0=pypi_0
numpy=1.26.4=pypi_0
omegaconf=2.3.0=pypi_0
onnxruntime=1.18.1=pypi_0
openjpeg=2.5.2=h3d672ee_0
openssl=3.3.1=h2466b09_2
optuna=3.6.1=pypi_0
packaging=24.1=pyhd8ed1ab_0
pandas=2.2.2=pypi_0
parso=0.8.4=pyhd8ed1ab_0
pickleshare=0.7.5=py_1003
pillow=10.4.0=py310h3e38d90_0
pip=24.0=pyhd8ed1ab_0
platformdirs=4.2.2=pyhd8ed1ab_0
pooch=1.8.2=pypi_0
primepy=1.3=pypi_0
prompt-toolkit=3.0.47=pyha770c72_0
protobuf=5.27.2=pypi_0
psutil=6.0.0=py310ha8f682b_0
pthread-stubs=0.4=hcd874cb_1001
pthreads-win32=2.9.1=hfa6e2cd_3
pure_eval=0.2.3=pyhd8ed1ab_0
pyannote-audio=3.1.1=pypi_0
pyannote-core=5.0.0=pypi_0
pyannote-database=5.1.0=pypi_0
pyannote-metrics=3.2.1=pypi_0
pyannote-pipeline=3.0.1=pypi_0
pycparser=2.22=pyhd8ed1ab_0
pygments=2.18.0=pyhd8ed1ab_0
pyparsing=3.1.2=pypi_0
pyreadline3=3.4.1=pypi_0
pysocks=1.7.1=pyh0701188_6
python=3.10.14=h4de0772_0_cpython
python-dateutil=2.9.0.post0=pypi_0
python_abi=3.10=4_cp310
pytorch=2.4.0=py3.10_cuda12.1_cudnn9_0
pytorch-cuda=12.1=hde6ce7c_5
pytorch-lightning=2.3.3=pypi_0
pytorch-metric-learning=2.6.0=pypi_0
pytorch-mutex=1.0=cuda
pytz=2024.1=pypi_0
pywin32=306=py310h00ffb61_2
pyyaml=6.0.1=py310h8d17308_1
pyzmq=26.0.3=py310h656833d_0
regex=2024.7.24=pypi_0
requests=2.32.3=pyhd8ed1ab_0
rich=13.7.1=pypi_0
ruamel-yaml=0.18.6=pypi_0
ruamel-yaml-clib=0.2.8=pypi_0
safetensors=0.4.3=pypi_0
scikit-learn=1.5.1=pypi_0
scipy=1.14.0=pypi_0
semver=3.0.2=pypi_0
sentencepiece=0.2.0=pypi_0
setuptools=71.0.4=pyhd8ed1ab_0
shellingham=1.5.4=pypi_0
six=1.16.0=pyh6c4a22f_0
sortedcontainers=2.4.0=pypi_0
soundfile=0.12.1=pypi_0
soxr=0.4.0=pypi_0
speechbrain=1.0.0=pypi_0
sqlalchemy=2.0.31=pypi_0
stack_data=0.6.2=pyhd8ed1ab_0
sympy=1.13.0=pyh04b8f61_3
tabulate=0.9.0=pypi_0
tbb=2021.12.0=hc790b64_3
tensorboardx=2.6.2.2=pypi_0
threadpoolctl=3.5.0=pypi_0
tk=8.6.13=h5226925_1
tokenizers=0.19.1=pypi_0
torch-audiomentations=0.11.1=pypi_0
torch-pitch-shift=1.2.4=pypi_0
torchaudio=2.4.0=pypi_0
torchmetrics=1.4.0.post0=pypi_0
torchvision=0.19.0=pypi_0
tornado=6.4.1=py310ha8f682b_0
tqdm=4.66.4=pypi_0
traitlets=5.14.3=pyhd8ed1ab_0
transformers=4.43.3=pypi_0
typer=0.12.3=pypi_0
typing_extensions=4.12.2=pyha770c72_0
tzdata=2024.1=pypi_0
ucrt=10.0.22621.0=h57928b3_0
urllib3=2.2.2=pyhd8ed1ab_1
vc=14.3=h8a93ad2_20
vc14_runtime=14.40.33810=ha82c5b3_20
vs2015_runtime=14.40.33810=h3bf8584_20
wcwidth=0.2.13=pyhd8ed1ab_0
wheel=0.43.0=pyhd8ed1ab_1
whisperx=3.1.1=dev_0
win_inet_pton=1.1.0=pyhd8ed1ab_6
xorg-libxau=1.0.11=hcd874cb_0
xorg-libxdmcp=1.1.3=hcd874cb_0
xz=5.2.6=h8d14728_0
yaml=0.2.5=h8ffe710_2
yarl=1.9.4=pypi_0
zeromq=4.3.5=he1f189c_4
zipp=3.19.2=pyhd8ed1ab_0
zstandard=0.23.0=py310he5e10e1_0
zstd=1.5.6=h0ea2cb4_0
``` | open | 2024-07-29T12:32:01Z | 2024-08-13T08:46:00Z | https://github.com/SYSTRAN/faster-whisper/issues/941 | [] | M2ATrail | 4 |
yunjey/pytorch-tutorial | deep-learning | 72 | RuntimeError: invalid argument 2: out of range | Hi @jtoy @hunkim @Kongsea @DingKe @JayParks,
I ran into this error when running the image captioning sample:
kraken@devBox1:~/pytorch-tutorial/tutorials/03-advanced/image_captioning$ sudo python3 sample.py --image='./png/example.png'
[sudo] password for kraken:
Traceback (most recent call last):
File "sample.py", line 97, in <module>
main(args)
File "sample.py", line 61, in main
sampled_ids = decoder.sample(feature)
File "/home/kraken/pytorch-tutorial/tutorials/03-advanced/image_captioning/model.py", line 68, in sample
sampled_ids = torch.cat(sampled_ids, 1) # (batch_size, 20)
File "/usr/local/lib/python3.5/dist-packages/torch/autograd/variable.py", line 897, in cat
return Concat.apply(dim, *iterable)
File "/usr/local/lib/python3.5/dist-packages/torch/autograd/_functions/tensor.py", line 316, in forward
ctx.input_sizes = [i.size(dim) for i in inputs]
File "/usr/local/lib/python3.5/dist-packages/torch/autograd/_functions/tensor.py", line 316, in <listcomp>
ctx.input_sizes = [i.size(dim) for i in inputs]
RuntimeError: invalid argument 2: out of range at /pytorch/torch/lib/THC/generic/THCTensor.c:23
kraken@devBox1:~/pytorch-tutorial/tutorials/03-advanced/image_captioning$
What am I doing wrong? | closed | 2017-10-13T07:34:32Z | 2018-05-10T08:58:41Z | https://github.com/yunjey/pytorch-tutorial/issues/72 | [] | bemoregt | 2 |
huggingface/datasets | nlp | 6,585 | losing DatasetInfo in Dataset.map when num_proc > 1 | ### Describe the bug
Hello and thanks for developing this package!
When I process a Dataset with the map function using multiple processors some set attributes of the DatasetInfo get lost and are None in the resulting Dataset.
### Steps to reproduce the bug
```python
from datasets import Dataset, DatasetInfo
def run_map(num_proc):
dataset = Dataset.from_dict(
{"col1": [0, 1], "col2": [3, 4]},
info=DatasetInfo(
dataset_name="my_dataset",
),
)
ds = dataset.map(lambda x: x, num_proc=num_proc)
print(ds.info.dataset_name)
run_map(1)
run_map(2)
```
This puts out:
```bash
Map: 100%|██████████| 2/2 [00:00<00:00, 724.66 examples/s]
my_dataset
Map (num_proc=2): 100%|██████████| 2/2 [00:00<00:00, 18.25 examples/s]
None
```
### Expected behavior
I expect the DatasetInfo to be kept as it was and there should be no difference in the output of running map with num_proc=1 and num_proc=2.
Expected output:
```bash
Map: 100%|██████████| 2/2 [00:00<00:00, 724.66 examples/s]
my_dataset
Map (num_proc=2): 100%|██████████| 2/2 [00:00<00:00, 18.25 examples/s]
my_dataset
```
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.17
- Python version: 3.8.18
- `huggingface_hub` version: 0.20.2
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
- `fsspec` version: 2023.9.2 | open | 2024-01-12T13:39:19Z | 2024-01-12T14:08:24Z | https://github.com/huggingface/datasets/issues/6585 | [] | JochenSiegWork | 2 |
encode/httpx | asyncio | 3,203 | Authentification docs reference non-existing class | The [auth docs](https://www.python-httpx.org/advanced/authentication/) reference `httpx.BasicAuthentication()` 3 times, but this class does not exist, it should be replaced with `httpx.BasicAuth()`. | closed | 2024-05-17T07:53:03Z | 2024-08-27T12:10:34Z | https://github.com/encode/httpx/issues/3203 | [] | alexprengere | 2 |
vastsa/FileCodeBox | fastapi | 54 | After bringing it up with Docker, how do I enable HTTPS? Just put a reverse proxy in front? Using Cloudflare's CDN gives a 525 error | I don't know where to start troubleshooting the error. | closed | 2023-02-28T06:27:37Z | 2024-04-29T15:08:36Z | https://github.com/vastsa/FileCodeBox/issues/54 | [] | asseywang | 2 |
quantmind/pulsar | asyncio | 126 | WsgiHandler doesn't chain response middlewares | According to the documentation the `response_middleware` parameter in the `WsgiHandler` supports a list of functions:
``` python
middleware = [...]
response_middleware = [
    CustomMiddleware(),
    GZipMiddleware()
]
return wsgi.WsgiHandler(middleware=middleware, response_middleware=response_middleware)
```
However, it seems that only the first response middleware is called and all the others are ignored. Intention or bug?
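For reference, chaining would simply fold every middleware over the response, something like this simplified sketch (not pulsar's actual internals):

```python
def apply_response_middleware(environ, response, middleware_list):
    # Each middleware may return a new response or None (mutate in place);
    # the result is threaded through ALL entries, not just the first.
    for middleware in middleware_list:
        response = middleware(environ, response) or response
    return response

def add_header(name, value):
    def middleware(environ, response):
        response.setdefault("headers", []).append((name, value))
        return response
    return middleware

resp = apply_response_middleware(
    {}, {"headers": []},
    [add_header("X-Custom", "1"), add_header("Content-Encoding", "gzip")],
)
print(resp["headers"])  # [('X-Custom', '1'), ('Content-Encoding', 'gzip')]
```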
| closed | 2014-08-12T13:55:24Z | 2014-10-14T13:51:09Z | https://github.com/quantmind/pulsar/issues/126 | [
"wsgi"
] | sekrause | 11 |
piskvorky/gensim | machine-learning | 3,051 | Release 3.8.3: vector_size in docs doesn't match release constructor argument for word2vec and fasttext | A fix seems to be currently in the develop branch, but the pypi release branch (release-3.8.3) [Word2Vec](https://github.com/RaRe-Technologies/gensim/blob/release-3.8.3/gensim/models/word2vec.py#L477) and [FastText](https://github.com/RaRe-Technologies/gensim/blob/release-3.8.3/gensim/models/fasttext.py#L355) files use 'size' as their constructor argument not 'vector_size'. [doc link here](https://radimrehurek.com/gensim/models/word2vec.html)
Marginally related, the [developer's guide](https://github.com/RaRe-Technologies/gensim/wiki/Developer-page#git-flow) suggests the master branch head is the latest release, but it looks like release-3.8.3 is the one I would get from `pip install gensim` (note, the [master branch](https://github.com/RaRe-Technologies/gensim/blob/8624aa2822f885c56996f9a2f84490c9166c84ca/gensim/models/fasttext.py#L274) correctly uses vector_size).
Minimal code to test:
```
from gensim.test.utils import common_texts
from gensim.models import Word2Vec
# this is currently on the doc website. Will fail for the pip install gensim version
model = Word2Vec(sentences=common_texts, vector_size=100, window=5, min_count=1, workers=4)
# Using 'size' instead will pass for the default release
model = Word2Vec(sentences=common_texts, size=100, window=5, min_count=1, workers=4)
```
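Until the rename ships in a release, code that has to run on both the 3.8.3 branch and the develop branch can pick the keyword at runtime. A hedged sketch using stdlib introspection (the two dummy constructors stand in for the legacy and new `Word2Vec`; they are not gensim code):

```python
import inspect

def size_kwarg(callable_obj, value):
    """Return {'vector_size': value} if supported, else the legacy {'size': value}."""
    params = inspect.signature(callable_obj).parameters
    key = "vector_size" if "vector_size" in params else "size"
    return {key: value}

# Stand-ins for the 3.8.3 and develop-branch constructors.
def word2vec_legacy(sentences=None, size=100, window=5):
    return ("legacy", size)

def word2vec_new(sentences=None, vector_size=100, window=5):
    return ("new", vector_size)

print(word2vec_legacy(**size_kwarg(word2vec_legacy, 100)))  # ('legacy', 100)
print(word2vec_new(**size_kwarg(word2vec_new, 100)))        # ('new', 100)
```

The same `size_kwarg(Word2Vec.__init__, 100)` call should work against the real class, whichever branch is installed.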
In case it helps:
Linux-5.8.0-43-generic-x86_64-with-glibc2.29
Python 3.8.5 (default, Jul 28 2020, 12:59:40)
[GCC 9.3.0]
Bits 64
NumPy 1.19.2
SciPy 1.4.1
gensim 3.8.3
FAST_VERSION 1
| closed | 2021-02-25T22:25:57Z | 2021-02-26T07:47:47Z | https://github.com/piskvorky/gensim/issues/3051 | [] | hakunanatasha | 1 |
ultralytics/yolov5 | deep-learning | 13,246 | divide the objects into small and large categories based on the size of the bounding boxes | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello. I had a question that was not similar. I want to divide the objects into small and large categories based on the size of the bonding boxes that are produced during training. I want the threshold that I define to be different for each of these categories. How can I access the bonding boxes that are generated during the training? In which module are network predicates generated on training data?
Thank you
### Additional
_No response_ | open | 2024-08-06T10:53:11Z | 2024-10-20T19:51:27Z | https://github.com/ultralytics/yolov5/issues/13246 | [
"question"
] | EmmaLevine94 | 8 |
PaddlePaddle/PaddleHub | nlp | 2,144 | PaddleHub one-click OCR Chinese recognition - V. Deploying the server ("五、部署服务器") | Sir, thanks for your guidance. I have completed online image recognition according to the article
https://aistudio.baidu.com/aistudio/projectdetail/5121515
However, I am unable to figure out how to deploy the server through the notebook. I would appreciate it if you could give me detailed steps for deploying the server ("五、部署服务器") using this notebook.
Thanks for reading.
| open | 2022-11-28T05:34:47Z | 2022-12-01T02:36:20Z | https://github.com/PaddlePaddle/PaddleHub/issues/2144 | [] | MaxokDavid | 9 |
prkumar/uplink | rest-api | 61 | Code suggestion: use abc instead of raising NotImplementedError in interfaces | ``abc`` module provides the infrastructure for defining abstract base classes (ABCs) in Python, as outlined in PEP 3119. The point is that not fully defined successors of abstract base class cannot be instantiated unlike objects with methods raising ``NotImplementedError``. | open | 2018-02-01T13:32:23Z | 2018-02-01T19:08:52Z | https://github.com/prkumar/uplink/issues/61 | [
"Needs Maintainer Input"
] | daa | 0 |
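The difference described in the uplink suggestion above can be shown directly: with `abc`, an incomplete subclass fails at instantiation time, while the `NotImplementedError` style fails only when the missing method is finally called.

```python
import abc

class AbcInterface(abc.ABC):
    @abc.abstractmethod
    def request(self):
        ...

class NieInterface:
    def request(self):
        raise NotImplementedError

class IncompleteAbc(AbcInterface):   # forgot to implement request()
    pass

class IncompleteNie(NieInterface):   # same mistake
    pass

try:
    IncompleteAbc()                  # fails immediately
except TypeError as e:
    print("abc:", e)

obj = IncompleteNie()                # instantiates fine...
try:
    obj.request()                    # ...and only blows up on first use
except NotImplementedError:
    print("NotImplementedError: deferred failure")
```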
aleju/imgaug | deep-learning | 90 | Make the bounding box more tighten for the object after image rotation | I use this code for image and bounding box rotation:
```
import imgaug as ia
import imgaug.augmenters as iaa

ia.seed(1)
image = ...  # read image here
bbs = ia.BoundingBoxesOnImage(
    [ia.BoundingBox(x1=95, y1=52, x2=250, y2=245)],
    shape=image.shape)
seq = iaa.Sequential([
    iaa.Multiply((1.2, 1.5)),  # change brightness, doesn't affect BBs
    iaa.Affine(
        rotate=45.0,
    )
])
seq_det = seq.to_deterministic()
image_aug = seq_det.augment_images([image])[0]
bbs_aug = seq_det.augment_bounding_boxes([bbs])[0]
```
The results are:
(screenshot of the rotated image with the enlarged bounding box omitted)
As you can see, the bounding box became larger after rotation.
Is it possible to make the bounding box tighter around the object after image rotation? | closed | 2018-01-10T13:42:46Z | 2020-11-05T17:42:34Z | https://github.com/aleju/imgaug/issues/90 | [] | panovr | 5 |
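For context on why the box in the imgaug issue above grows: rotating only the four corners and re-taking min/max necessarily yields the loose axis-aligned envelope; a tight box needs the object's rotated mask or polygon (e.g. imgaug's keypoint or polygon augmentation) rather than the box corners. A quick sketch with the coordinates from the snippet, using a hypothetical image center of (160, 160):

```python
import math

def rotate_box_corners(x1, y1, x2, y2, degrees, cx, cy):
    """Rotate the 4 box corners about (cx, cy) and return the enclosing
    axis-aligned box, i.e. what box augmentation effectively computes."""
    theta = math.radians(degrees)
    xs, ys = [], []
    for x, y in [(x1, y1), (x2, y1), (x2, y2), (x1, y2)]:
        dx, dy = x - cx, y - cy
        xs.append(cx + dx * math.cos(theta) - dy * math.sin(theta))
        ys.append(cy + dx * math.sin(theta) + dy * math.cos(theta))
    return min(xs), min(ys), max(xs), max(ys)

def area(box):
    return (box[2] - box[0]) * (box[3] - box[1])

orig = (95, 52, 250, 245)  # the box from the snippet above
rot = rotate_box_corners(*orig, degrees=45, cx=160, cy=160)
print(f"area before: {area(orig):.0f}, after: {area(rot):.0f}")
```

For a 45-degree rotation the enclosing box roughly doubles in area, which matches the screenshot described in the issue.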
noirbizarre/flask-restplus | api | 438 | Refactor for accepting Marshmallow and other marshaling libraries | This is less of an issue and more of an **intent to implement** :shipit:
## Result
The result of this work should be these two main things:
- It should be possible to build alternative request parsing implementations.
- It should be possible to use Marshmallow as easily as the current request parsing implementation.
I intend to try doing this without losing backward compatibility of public APIs, so that most applications can update without having to rewrite their marshaling implementations. No promises of course about the internal API.
## Planning
My plan of attack is as follows (may later be filled out with "subtasks"):
- [ ] **Design initial additions/changes to the public API through a simple "sample app"**
I'm simultaneously building a real world product with RestPlus and Marshmallow (hacked in) for my employer @l1ndanl. This should help a ton, since I can my experiences and "wish list" from that to design the public API.
This implementation might be useful later for testing the implementation, but it shouldn't be set in stone. Implementation details may have to change the actual public API.
- [ ] **Locate and split out all touch points of the internal APIs with request parsing**
Before I design the abstraction layer I'd like to make sure I know exactly what the internal API needs to be able to do and where it's used. This will require some more extensive knowledge of RestPlus internals and since these uses will have to be split out anyway eventually, doing this before designing the abstraction should make it a lot easier for me.
- [ ] **Design the abstraction layer**
- [ ] **Implement the abstraction layer for the "legacy" request parsing**
- [ ] **Implement an abstraction layer for Marshmallow and/or [webargs](http://webargs.readthedocs.io/en/latest/)**
## Methodology
I'll obviously be developing in my own fork of RestPlus in a special branch, that can then eventually be merged through a PR.
**I'm also very interested in suggestions/ideas/insider info/similar projects + shows of support 🎉 .**
## Related reading...
- https://github.com/noirbizarre/flask-restplus/issues/410
- https://github.com/noirbizarre/flask-restplus/issues/317
- https://github.com/noirbizarre/flask-restplus/issues/9
- The warning on this page: http://flask-restplus.readthedocs.io/en/stable/parsing.html | closed | 2018-05-15T17:40:18Z | 2019-12-30T10:26:45Z | https://github.com/noirbizarre/flask-restplus/issues/438 | [] | martijnarts | 24 |
Gerapy/Gerapy | django | 233 | Deploying the project by cloning the repo: it runs, but I can't log in | **Describe the bug**
The project was deployed by cloning the repository; it runs, but logging in fails.
**To Reproduce**
Steps to reproduce the behavior:
1. Clone the repository locally via git clone
2. Install the libraries required by requirements.txt, and set up a run configuration for `*\Gerapy\gerapy\cmd\__init__.py` in PyCharm
3. Start scrapyd, run the `*\Gerapy\gerapy\cmd\__init__.py` file, and start the frontend with `npm run serve` under the gerapy/client folder
4. The console reports errors
**Traceback**
Paste the traceback shown in the console here:
```python
E:\PycharmProjects\Gerapy\venv\Scripts\python.exe E:/PycharmProjects/Gerapy/gerapy/cmd/__init__.py runserver 0.0.0.0:5000
Watching for file changes with StatReloader
Performing system checks...
DEBUG - 2022-04-02 09:53:53,684 - process: 6920 - scheduler.py - gerapy.server.core.scheduler - 91 - scheduler - syncing jobs from tasks configured...
DB error executing '_get_jobs' (no such table: django_apscheduler_djangojob). Retrying with a new DB connection...
--- Logging error ---
Error getting due jobs from job store 'default': no such table: django_apscheduler_djangojob
System check identified no issues (0 silenced).
Traceback (most recent call last):
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\sqlite3\base.py", line 383, in execute
return Database.Cursor.execute(self, query, params)
sqlite3.OperationalError: no such table: core_task
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Python\Python39\lib\logging\__init__.py", line 1083, in emit
msg = self.format(record)
File "C:\Python\Python39\lib\logging\__init__.py", line 927, in format
return fmt.format(record)
File "C:\Python\Python39\lib\logging\__init__.py", line 663, in format
record.message = record.getMessage()
File "C:\Python\Python39\lib\logging\__init__.py", line 367, in getMessage
msg = msg % self.args
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\models\query.py", line 250, in __repr__
data = list(self[:REPR_OUTPUT_SIZE + 1])
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\models\query.py", line 256, in __len__
self._fetch_all()
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\models\query.py", line 1242, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\models\query.py", line 55, in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\models\sql\compiler.py", line 1142, in execute_sql
cursor.execute(sql, params)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\utils.py", line 99, in execute
return super().execute(sql, params)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\utils.py", line 67, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\utils.py", line 76, in _execute_with_wrappers
return executor(sql, params, many, context)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\sqlite3\base.py", line 383, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: no such table: core_task
Call stack:
File "C:\Python\Python39\lib\threading.py", line 912, in _bootstrap
self._bootstrap_inner()
File "C:\Python\Python39\lib\threading.py", line 954, in _bootstrap_inner
self.run()
File "E:\PycharmProjects\Gerapy\gerapy\server\core\scheduler.py", line 160, in run
self.sync_jobs(force=True)
File "E:\PycharmProjects\Gerapy\gerapy\server\core\scheduler.py", line 93, in sync_jobs
logger.debug('get realtime tasks %s', tasks)
Unable to print the message and arguments - possible formatting error.
Use the traceback above to help find the error.
--- Logging error ---
Traceback (most recent call last):
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\sqlite3\base.py", line 383, in execute
return Database.Cursor.execute(self, query, params)
sqlite3.OperationalError: no such table: core_task
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Python\Python39\lib\logging\__init__.py", line 1083, in emit
msg = self.format(record)
File "C:\Python\Python39\lib\logging\__init__.py", line 927, in format
return fmt.format(record)
File "C:\Python\Python39\lib\logging\__init__.py", line 663, in format
record.message = record.getMessage()
File "C:\Python\Python39\lib\logging\__init__.py", line 367, in getMessage
msg = msg % self.args
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\models\query.py", line 250, in __repr__
data = list(self[:REPR_OUTPUT_SIZE + 1])
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\models\query.py", line 256, in __len__
self._fetch_all()
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\models\query.py", line 1242, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\models\query.py", line 55, in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\models\sql\compiler.py", line 1142, in execute_sql
cursor.execute(sql, params)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\utils.py", line 99, in execute
return super().execute(sql, params)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\utils.py", line 67, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\utils.py", line 76, in _execute_with_wrappers
return executor(sql, params, many, context)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\sqlite3\base.py", line 383, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: no such table: core_task
Call stack:
File "C:\Python\Python39\lib\threading.py", line 912, in _bootstrap
self._bootstrap_inner()
File "C:\Python\Python39\lib\threading.py", line 954, in _bootstrap_inner
self.run()
File "E:\PycharmProjects\Gerapy\gerapy\server\core\scheduler.py", line 160, in run
self.sync_jobs(force=True)
File "E:\PycharmProjects\Gerapy\gerapy\server\core\scheduler.py", line 93, in sync_jobs
logger.debug('get realtime tasks %s', tasks)
Unable to print the message and arguments - possible formatting error.
Use the traceback above to help find the error.
Exception in thread Thread-1:
Traceback (most recent call last):
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\sqlite3\base.py", line 383, in execute
return Database.Cursor.execute(self, query, params)
sqlite3.OperationalError: no such table: core_task
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Python\Python39\lib\threading.py", line 954, in _bootstrap_inner
self.run()
File "E:\PycharmProjects\Gerapy\gerapy\server\core\scheduler.py", line 160, in run
self.sync_jobs(force=True)
File "E:\PycharmProjects\Gerapy\gerapy\server\core\scheduler.py", line 94, in sync_jobs
for task in tasks:
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\models\query.py", line 274, in __iter__
self._fetch_all()
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\models\query.py", line 1242, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\models\query.py", line 55, in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\models\sql\compiler.py", line 1142, in execute_sql
cursor.execute(sql, params)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\utils.py", line 99, in execute
return super().execute(sql, params)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\utils.py", line 67, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\utils.py", line 76, in _execute_with_wrappers
return executor(sql, params, many, context)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "E:\PycharmProjects\Gerapy\venv\lib\site-packages\django\db\backends\sqlite3\base.py", line 383, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: no such table: core_task
You have 38 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, authtoken, contenttypes, core, django_apscheduler, sessions.
Run 'python manage.py migrate' to apply them.
April 02, 2022 - 09:53:53
Django version 2.2.27, using settings 'gerapy.server.server.settings'
Starting development server at http://0.0.0.0:5000/
Quit the server with CTRL-BREAK.
DB error executing '_get_jobs' (no such table: django_apscheduler_djangojob). Retrying with a new DB connection...
Error getting due jobs from job store 'default': no such table: django_apscheduler_djangojob
DB error executing '_get_jobs' (no such table: django_apscheduler_djangojob). Retrying with a new DB connection...
Error getting due jobs from job store 'default': no such table: django_apscheduler_djangojob
DB error executing '_get_jobs' (no such table: django_apscheduler_djangojob). Retrying with a new DB connection...
Error getting due jobs from job store 'default': no such table: django_apscheduler_djangojob
DB error executing '_get_jobs' (no such table: django_apscheduler_djangojob). Retrying with a new DB connection...
Error getting due jobs from job store 'default': no such table: django_apscheduler_djangojob
DB error executing '_get_jobs' (no such table: django_apscheduler_djangojob). Retrying with a new DB connection...
Error getting due jobs from job store 'default': no such table: django_apscheduler_djangojob
```
**Expected behavior**
I expect to run the project successfully and be able to log in.
**Screenshots**
[](https://imgtu.com/i/qIEAsA)
**Environment:**
- OS: [Windows 10]
- Browser [Chrome 99.0.4844.84]
- Python Version [3.9.4]
- Gerapy Version [not installed]
**Additional context**
There is no database file under the `*\Gerapy\gerapy\dbs` folder.
| closed | 2022-04-02T02:47:05Z | 2022-04-29T19:40:37Z | https://github.com/Gerapy/Gerapy/issues/233 | [
"bug"
] | kesyupeng | 2 |
deepspeedai/DeepSpeed | deep-learning | 5,793 | [BUG] Excessive CPU and GPU Memory Usage with Multi-GPU Inference Using DeepSpeed | I am experiencing excessive CPU and GPU memory usage when running multi-GPU inference with DeepSpeed. Specifically, the memory usage does not scale as expected when increasing the number of GPUs. Below is the code I am using for inference:
```python
import os
import torch
import deepspeed
import time
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig
from deepspeed.runtime.zero.config import DeepSpeedZeroConfig
from deepspeed.inference.config import DeepSpeedTPConfig
from deepspeed.runtime.utils import see_memory_usage
local_rank = int(os.getenv("LOCAL_RANK", "0"))
world_size = int(os.getenv("WORLD_SIZE", "1"))
model_dir = "/mnt/sgnfsdata/tolo-03-97/pretrained_models/internlm2-chat-20b"
trust_remote_code = True
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=trust_remote_code)
config = AutoConfig.from_pretrained(model_dir, trust_remote_code=trust_remote_code)
model = AutoModelForCausalLM.from_pretrained(model_dir,
torch_dtype=torch.bfloat16,
trust_remote_code=trust_remote_code
)
model = model.eval()
see_memory_usage("After load model", force=True)
tp_config = DeepSpeedTPConfig(tp_size=world_size)
zero_config = DeepSpeedZeroConfig(stage=3,
model_persistence_threshold=0,
max_live_parameters=0,
mics_shard_size=world_size
)
ds_engine = deepspeed.init_inference(model=model,
tensor_parallel=tp_config,
dtype=torch.bfloat16,
zero=zero_config,
max_out_tokens=1024,
replace_method="auto",
replace_with_kernel_inject=True)
see_memory_usage("After DS-inference init", force=True)
model = ds_engine.module
print("device: ", model.device)
prompt = "what is deepspeed?"
t0 = time.time()
response = model.chat(tokenizer=tokenizer,
query=prompt,
history=[],
max_new_tokens=1024,
do_sample=True,
temperature=0.8,
top_p=0.8
)
t1 = time.time()
print(response)
print('=' * 100)
print("inference time: ", t1 - t0)
print('=' * 100)
```
Steps to Reproduce:
1. Run the script with 2 GPUs:
```bash
deepspeed --num_gpus 2 main.py --ds_inference
```


2. Run the script with 4 GPUs:
```bash
deepspeed --num_gpus 4 main.py --ds_inference
```


Expected Behavior:
I expected that using 4 GPUs would reduce the memory usage per GPU, ideally halving the GPU memory usage compared to running with 2 GPUs.
Actual Behavior:
- With 2 GPUs: CPU virtual memory 92.87 GB; each GPU 37.74 GB
- With 4 GPUs: CPU virtual memory 162.92 GB (significantly higher than expected); each GPU 37.74 GB (no reduction)
Questions:
Why does the CPU virtual memory usage increase significantly when using more GPUs?
How can I reduce the memory usage per GPU when scaling up the number of GPUs?
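For what it's worth, the reported numbers are consistent with each launcher process holding its own full bf16 copy of the model in host memory, since every rank runs `from_pretrained` before `init_inference` shards anything. A back-of-the-envelope check (the ~20B parameter count is an approximation; the gap between the predicted and reported figures would be per-process overhead):

```python
# Rough host-memory model: every launcher rank loads the full checkpoint first.
params = 20e9           # internlm2-chat-20b: roughly 20 billion parameters (approx.)
bytes_per_param = 2     # torch.bfloat16
full_copy_gb = params * bytes_per_param / 1e9    # about 40 GB per process

for n_gpus, reported_gb in [(2, 92.87), (4, 162.92)]:
    predicted_gb = n_gpus * full_copy_gb         # lower bound, ignoring overhead
    print(f"{n_gpus} GPUs: >= {predicted_gb:.0f} GB predicted, {reported_gb} GB reported")
```

If that reading is right, host usage grows roughly linearly with world size by construction, and the flat per-GPU figure suggests the weights are not actually being sharded across ranks under this configuration; materializing the checkpoint on fewer ranks (or on the meta device, where supported for the model) would be the direction to investigate.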
System Info:
DeepSpeed version: 0.14.4
PyTorch version: 2.3.1
Transformers version: 4.42.3
Python version: 3.10
OS: ubuntu 24.04
Additional Context:
Any insights or suggestions on how to optimize the memory usage for multi-GPU inference with DeepSpeed would be greatly appreciated. Thank you! | open | 2024-07-23T07:55:48Z | 2024-10-10T03:02:31Z | https://github.com/deepspeedai/DeepSpeed/issues/5793 | [
"bug",
"inference"
] | gawain000000 | 3 |
JaidedAI/EasyOCR | machine-learning | 775 | Tips and tricks | Hi,
could you share some tips and tricks about creating the data and config files
for training an Arabic model?
thanks | closed | 2022-07-04T16:08:56Z | 2022-11-18T11:00:19Z | https://github.com/JaidedAI/EasyOCR/issues/775 | [] | uniquefan | 4 |
ndleah/python-mini-project | data-visualization | 270 | Add Docstring in the Caesar Cipher file | # Description
In this issue, I want to add docstrings to the function in the Caesar Cipher code. The purpose is to have a clear and well documented code that will help us to understand it better.
<!-- Please include a summary of the issue.-->
## Type of issue
- [ ] Feature (New Script)
- [ ] Bug
- [x] Documentation
## Checklist:
- [ ] I have read the project guidelines.
- [x] I have checked previous issues to avoid duplicates.
- [ ] This issue will be meaningful for the project.
## Details
The functions in the Caesar Cipher code currently lack docstrings. We need to add detailed docstrings to explain:
- The purpose of each function.
- The parameters each function, including their types and descriptions.
- The return values, including their types and descriptions.
<!-- Uncomment this in case you have a issue related to a bug in existing code.-->
<!--
- [ ] I have added screenshots of the bug
- [ ] I have added steps to reproduce the bug
- [ ] I have proposed a possible solution for the bug
-->
| open | 2024-06-07T08:33:33Z | 2024-06-09T08:20:29Z | https://github.com/ndleah/python-mini-project/issues/270 | [] | Gabriela20103967 | 0 |
eamigo86/graphene-django-extras | graphql | 152 | DurationField is incorrectly converted as float | Hi!
Doing [this](https://github.com/eamigo86/graphene-django-extras/blob/master/graphene_django_extras/converter.py#L264):
```python
@convert_django_field.register(models.DurationField)
def convert_field_to_float(field, registry=None, input_flag=None, nested_field=False):
    return Float(
        description=field.help_text or field.verbose_name,
        required=is_required(field) and input_flag == "create",
    )
```
results in this:
```
float() argument must be a string or a number, not 'datetime.timedelta'.
```
Upon `save()`, I just found a workaround: override the `save()` method and create a timedelta from the float manually there. Can you see any workaround for loading?
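For the loading direction, the conversion itself is tiny; a standard-library sketch (the helper name is mine, not part of graphene-django-extras):

```python
from datetime import timedelta

def float_to_duration(value):
    """Map the Float the schema produces back to the timedelta Django's DurationField expects."""
    if isinstance(value, bool):          # bool is an int subclass; pass it through untouched
        return value
    if isinstance(value, (int, float)):  # float seconds from the GraphQL layer
        return timedelta(seconds=value)
    return value                          # already a timedelta, or None

print(float_to_duration(90.0))            # 0:01:30
```

Calling this on the incoming value before the model is instantiated mirrors the `save()` workaround on the read side; a proper fix in the converter would presumably map DurationField to a string-based scalar rather than Float.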
Thanks!
| open | 2020-09-18T12:42:00Z | 2020-09-18T12:42:32Z | https://github.com/eamigo86/graphene-django-extras/issues/152 | [] | karlosss | 0 |
tflearn/tflearn | tensorflow | 1,075 | Calling regression() with parameter loss='weighted_crossentropy' | Hello, I am currently also trying to implement a weighted CE loss function. I'd really appreciate some guidance on how to call this function from the `loss=` parameter of the `tflearn.regression()` function.
The following attempt to use the above method in my code yields:
```
net_2 = net = tflearn.input_data(shape=[None, n_features])
net_2 = tflearn.fully_connected(net_2, 16, activation='relu')
net_2 = tflearn.dropout(net_2, 0.8)
net_2 = tflearn.fully_connected(net_2, 32, activation='relu')
net_2 = tflearn.dropout(net_2, 0.8)
net_2 = tflearn.fully_connected(net_2, 64, activation='relu')
net_2 = tflearn.dropout(net_2, 0.8)
net_2 = tflearn.fully_connected(net_2, 64, activation='relu')
net_2 = tflearn.dropout(net_2, 0.8)
net_2 = tflearn.fully_connected(net_2, 2, activation='softmax')
from tflearn.objectives import weighted_crossentropy
net_2 = tflearn.regression(net, optimizer='adam', loss=lambda data, target: weighted_crossentropy(data, target, weight=.5))
```
<img width="704" alt="screen shot 2018-07-19 at 3 19 07 pm" src="https://user-images.githubusercontent.com/17347282/42965146-250f5ab2-8b67-11e8-8f15-06ad377677af.png">
| closed | 2018-07-19T22:15:44Z | 2018-07-20T00:26:44Z | https://github.com/tflearn/tflearn/issues/1075 | [] | tnightengale | 0 |
hbldh/bleak | asyncio | 754 | Collect notify data from two devices (different MACs) with same UUIDs + Identify devices | * bleak version: 0.14.2
* Python version: 3.9.7
* Operating System: Ubuntu 21.10
* BlueZ version (`bluetoothctl -v`) in case of Linux: 5.60
### Description
*I apologize in advance, I am sure the library is working fine here, this is just my limited BLE knowledge causing issues*
- I will hopefully repay any help here with an example / documentation update
#### My use case / setup
I have two Lithionics batteries, each with a different Bluetooth MAC address, that I wish to pull telemetry from. I wish to store it in a way that lets me create counters in Prometheus for each respective device. They send ASCII strings that I can split into a data class and then export to Prometheus (a time-series DB) via its exporter API.
When I try to create a client per device via `BleakClient(mac_address)`, the second device always fails to connect / stream data. It seems to just sit there, blocked. So I then thought the one notify callback must get the data from both batteries, but that does not seem to be the case either, as I always get the same "sender id" sent to the handler:
```
[2022-01-30 17:09:10,294] INFO: Received Li3 telementary from 17 (li3.py:50)
```
- Always client 17
My code connecting to the two devices can be seen here: https://github.com/cooperlees/vanD/blob/main/src/vand/li3.py#L71
- The two devices UUIDs etc. can be found in the config: https://github.com/cooperlees/vanD/blob/main/vand.json#L9
What's the right way to get data and identify each battery (device) here so I can collect respective statistics? Do I need separate processes for each device?
<img width="526" alt="Screen Shot 2022-01-30 at 8 57 01 AM" src="https://user-images.githubusercontent.com/3005596/151710020-69825208-1ff5-490b-b426-410e69d2f4bb.png">
### What I Did
I run a daemon that listens and converts the ascii bluetooth data into a data class and then exposes via the Prometheus Export API. For one device this is working, but I can't get it to work for two.
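For the identification half of the question: in bleak 0.14.x the `sender` passed to the callback is the characteristic handle, so two identical batteries can both report `17`. One plain-Python way around that is to bind a label into each device's callback, e.g. with `functools.partial` (the labels here are made up; in real code each callback would be passed to that device's own `client.start_notify(...)`):

```python
from functools import partial

def on_telemetry(device_label, sender, data):
    # `sender` is the characteristic handle (identical on identical batteries),
    # so the bound device_label is what actually tells the two apart.
    return f"{device_label}: handle={sender} payload={data.decode()}"

# One callback per battery, each with its own label baked in.
front_cb = partial(on_telemetry, "battery_front")
rear_cb = partial(on_telemetry, "battery_rear")

print(front_cb(17, b"13.2V"))  # battery_front: handle=17 payload=13.2V
print(rear_cb(17, b"12.9V"))   # battery_rear: handle=17 payload=12.9V
```

With that in place, one `BleakClient` per MAC can share the same handler body while the exporter keeps separate counters per label; the second connection blocking would then be a separate adapter/connection issue.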
Any tips / tricks / info here would be appreciated. I thank you for this library and again, apologize, as I bet this is just all my limited knowledge of Bluetooth LE. This is my first time ever working with it. | closed | 2022-01-30T17:22:33Z | 2022-01-30T21:56:58Z | https://github.com/hbldh/bleak/issues/754 | [] | cooperlees | 4 |
neuml/txtai | nlp | 827 | How to index() or upsert() only on specific index? | Heyo,
Love txtai, amazing work.
I have two subindexes and want to perform an `upsert()` on only one of the subindexes:
Basically I'd want something like this:
```python
embeddings = Embeddings({
    "indexes": {
        "raw": CONFIG,
        "llm": CONFIG
    }
})

embeddings.upsert((id, text, tags), index='raw')
```
Is this something that's supported? I don't see anything in the docs about it. | closed | 2024-12-02T18:18:48Z | 2025-02-23T14:54:34Z | https://github.com/neuml/txtai/issues/827 | [] | byt3bl33d3r | 6 |
plotly/dash | data-visualization | 2,925 | _validate.py "RuntimeError: dictionary changed size during iteration" | **Environment**
```
dash 2.17.1
dash-bootstrap-components 1.5.0
dash-bootstrap-templates 1.0.8
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-loading-spinners 1.0.2
dash-table 5.0.0
```
**Describe the bug**
When starting a large Dash app (couple hundred callbacks, large callback chains) I receive the following error on startup and access to the application for the first time.
```
[2024-07-20 11:22:42,611] ERROR in app: Exception on /auth/dashboards/_dash-component-suites/dash/deps/react-dom@16.v2_17_1m1721474206.14.0.min.js [GET]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/flask/app.py", line 1473, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.10/site-packages/flask/app.py", line 882, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.10/site-packages/flask/app.py", line 878, in full_dispatch_request
rv = self.preprocess_request()
File "/usr/local/lib/python3.10/site-packages/flask/app.py", line 1253, in preprocess_request
rv = self.ensure_sync(before_func)()
File "/usr/local/lib/python3.10/site-packages/dash/dash.py", line 1430, in _setup_server
_validate.validate_long_callbacks(self.callback_map)
File "/usr/local/lib/python3.10/site-packages/dash/_validate.py", line 528, in validate_long_callbacks
for callback in callback_map.values():
RuntimeError: dictionary changed size during iteration
```
This prevents the page from loading; however, I can perform a full refresh (Ctrl-F5 on Firefox) and everything then loads correctly. This only occurs once, and only for the first user visiting the app, though it happens most times after starting the app. I have one callback which uses Celery, but other than that there are no other long_callbacks in the app. It seems to occur more frequently as the overall number of callbacks in the application increases.
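For context, the RuntimeError itself is ordinary CPython dict behavior: `validate_long_callbacks` iterates `callback_map` while, presumably, another request path is still registering callbacks. The mechanism reproduces in plain Python (no Dash involved):

```python
def iterate_while_mutating():
    d = {i: i for i in range(3)}
    try:
        for _ in d:              # stands in for: for callback in callback_map.values()
            d[len(d)] = None     # stands in for a concurrent callback registration
    except RuntimeError as exc:
        return str(exc)
    return None

print(iterate_while_mutating())  # dictionary changed size during iteration
```

That also fits the observed behavior: once the first full pass over the map completes, the map stops changing and later loads succeed. Snapshotting (`list(callback_map.values())`) or locking inside the validation would presumably avoid it.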
| open | 2024-07-20T11:34:29Z | 2024-08-13T19:55:53Z | https://github.com/plotly/dash/issues/2925 | [
"bug",
"P3"
] | mbworth | 1 |
Guovin/iptv-api | api | 226 | Question about running with Docker | I suggest keeping the generated files and the configuration files in separate subfolders, for example:
/tv-driver/output
/tv-driver/config
That way the volume mapping only needs to map the subfolders, and the "script does not exist" prompt will no longer appear.
```
Keeps host files in sync with container files; templates, configuration, and fetched result files can then be edited directly in the host folders
Note: to run the container with this command, be sure to clone this project to the host first
``` | closed | 2024-08-04T05:27:09Z | 2024-08-14T10:01:37Z | https://github.com/Guovin/iptv-api/issues/226 | [
"enhancement"
] | QAQQL | 3 |
numba/numba | numpy | 9,994 | Numba spams output on startup | <!--
Thanks for opening an issue! To help the Numba team handle your information
efficiently, please first ensure that there is no other issue present that
already describes the issue you have
(search at https://github.com/numba/numba/issues?&q=is%3Aissue).
-->
## Reporting a bug
<!--
Before submitting a bug report please ensure that you can check off these boxes:
-->
- [x] I have tried using the latest released version of Numba (most recent is
visible in the release notes
(https://numba.readthedocs.io/en/stable/release-notes-overview.html).
- [x] I have included a self contained code sample to reproduce the problem.
i.e. it's possible to run as 'python bug.py'.
<!--
Please include details of the bug here, including, if applicable, what you
expected to happen!
-->
Numba prints the following message to the console the first time it compiles a function:
```
.../lib/python3.10/site-packages/numba/cpython/old_hashing.py:477: UserWarning: FNV hashing is not implemented in Numba. See PEP 456 https://www.python.org/dev/peps/pep-0456/ for rationale over not using FNV. Numba will continue to work, but hashes for built in types will be computed using siphash24. This will permit e.g. dictionaries to continue to behave as expected, however anything relying on the value of the hash opposed to hash as a derived property is likely to not work as expected.
```
The following program demonstrates this:
```python
import numba

@numba.jit
def f():
    pass

f()
```
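Incidentally, a workaround that only suppresses the message (the warning itself is arguably still worth removing upstream) is to install a filter before the first `import numba`; a standard-library sketch, with a self-check that the filter matches the message in question:

```python
import warnings

# Must run before the first `import numba` so the filter is in place
# when old_hashing.py emits its UserWarning.
warnings.filterwarnings("ignore", message="FNV hashing is not implemented")

# Self-check: a warning starting with the filtered message is not recorded.
with warnings.catch_warnings(record=True) as caught:
    warnings.filterwarnings("ignore", message="FNV hashing is not implemented")
    warnings.warn("FNV hashing is not implemented in Numba. (...)", UserWarning)
print(len(caught))  # 0
```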
Expected output: Nothing
Actual output: The above log message, which I do not care about. | open | 2025-03-17T19:54:27Z | 2025-03-19T14:19:02Z | https://github.com/numba/numba/issues/9994 | [
"needtriage"
] | JC3 | 7 |
pytorch/vision | machine-learning | 8,819 | torchvision' object has no attribute '_cuda_version' | ### 🐛 Describe the bug
I am using a MacBook 15.1.1 (24B91) running Python 3.10.
I installed torch and torchvision through pip with `pip install -U torch torchvision` and it gave the following output:
```
Installing collected packages: torch, torchvision
Attempting uninstall: torch
Found existing installation: torch 1.13.1
Uninstalling torch-1.13.1:
Successfully uninstalled torch-1.13.1
Attempting uninstall: torchvision
Found existing installation: torchvision 0.14.0
Uninstalling torchvision-0.14.0:
Successfully uninstalled torchvision-0.14.0
Successfully installed torch-2.5.1 torchvision-0.20.1
```
After that I go into a terminal in venv and run
`import torch`-> works as expected
`import torchvision`-> Gives the following error:
```
>>> import torchvision
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/flippchen/project/venv/lib/python3.10/site-packages/torchvision/__init__.py", line 9, in <module>
from .extension import _HAS_OPS # usort:skip
File "/Users/flippchen/project/venv/lib/python3.10/site-packages/torchvision/extension.py", line 92, in <module>
_check_cuda_version()
File "/Users/flippchen/project/venv/lib/python3.10/site-packages/torchvision/extension.py", line 65, in _check_cuda_version
_version = torch.ops.torchvision._cuda_version()
File "/Users/flippchen/project/venv/lib/python3.10/site-packages/torch/_ops.py", line 1225, in __getattr__
raise AttributeError(
AttributeError: '_OpNamespace' 'torchvision' object has no attribute '_cuda_version'
```
I have tried several torch and torchvision combinations from the website, but none of them work.
### Versions
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.1.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: version 3.30.4
Libc version: N/A
Python version: 3.10.10 (v3.10.10:aad5f6a891, Feb 7 2023, 08:47:40) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)
Python platform: macOS-15.1.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Pro
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] onnx==1.17.0
[pip3] onnx-graphsurgeon==0.5.2
[pip3] onnx2tf==1.22.3
[pip3] onnxruntime==1.20.1
[pip3] onnxslim==0.1.44
[pip3] optree==0.13.1
[pip3] sng4onnx==1.0.4
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
[conda] Could not collect
| closed | 2024-12-19T21:53:57Z | 2025-01-22T20:17:24Z | https://github.com/pytorch/vision/issues/8819 | [] | Flippchen | 2 |
recommenders-team/recommenders | data-science | 1,861 | [ASK] Error in NCFDataset creation | ### Description
Hello all,
I'm trying to use the NCF_deep_dive notebook with my own data,
which has the following structure:
|   | usr_id | code_id | amt_trx | bestelldatum |
| --- | --- | --- | --- | --- |
| 0 | 0 | 35 | 1 | 2022-03-01 |
| 1 | 0 | 2 | 1 | 2022-03-01 |
| 2 | 0 | 18 | 1 | 2022-03-01 |
| 3 | 0 | 9 | 1 | 2022-03-01 |
| 4 | 0 | 0 | 1 | 2022-03-01 |
When I try to create the dataset, I get the following error:
`data = NCFDataset(train_file=train_file,
test_file=leave_one_out_test_file,
seed=SEED,
overwrite_test_file_full=True,
col_user='usr_id',
col_item='code_id',
col_rating='amt_trx',
binary=False)`
```
---------------------------------------------------------------------------
MissingUserException Traceback (most recent call last)
Cell In [39], line 1
----> 1 data = NCFDataset(train_file=train_file,
2 test_file=leave_one_out_test_file,
3 seed=SEED,
4 overwrite_test_file_full=True,
5 col_user='usr_id',
6 col_item='code_id',
7 col_rating='amt_trx',
8 binary=False)
File /anaconda/envs/recsys/lib/python3.8/site-packages/recommenders/models/ncf/dataset.py:376, in Dataset.__init__(self, train_file, test_file, test_file_full, overwrite_test_file_full, n_neg, n_neg_test, col_user, col_item, col_rating, binary, seed, sample_with_replacement, print_warnings)
374 self.test_file_full = os.path.splitext(self.test_file)[0] + "_full.csv"
375 if self.overwrite_test_file_full or not os.path.isfile(self.test_file_full):
--> 376 self._create_test_file()
377 self.test_full_datafile = DataFile(
378 filename=self.test_file_full,
379 col_user=self.col_user,
(...)
383 binary=self.binary,
384 )
385 # set random seed
File /anaconda/envs/recsys/lib/python3.8/site-packages/recommenders/models/ncf/dataset.py:417, in Dataset._create_test_file(self)
415 if user in train_datafile.users:
416 user_test_data = test_datafile.load_data(user)
--> 417 user_train_data = train_datafile.load_data(user)
418 # for leave-one-out evaluation, exclude items seen in both training and test sets
419 # when sampling negatives
420 user_positive_item_pool = set(
421 user_test_data[self.col_item].unique()
422 ).union(user_train_data[self.col_item].unique())
File /anaconda/envs/recsys/lib/python3.8/site-packages/recommenders/models/ncf/dataset.py:194, in DataFile.load_data(self, key, by_user)
192 while (self.line_num == 0) or (self.row[key_col] != key):
193 if self.end_of_file:
--> 194 raise MissingUserException("User {} not in file {}".format(key, self.filename))
195 next(self)
196 # collect user/test batch data
MissingUserException: User 58422 not in file ./train_new.csv
```
I made some checks
print(train.usr_id.nunique()) --> output: 81062
print(test.usr_id.nunique()) --> output: 81062
print(leave.usr_id.nunique()) --> output: 81062
I also checked by hand, and user 58422 is in all the files. The types are also the same: I'm using int64 for usr_id, code_id and amt_trx, like the MovieLens dataset.
I can't understand the error, could you help me please?
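From the `DataFile.load_data` frames in the traceback, the reader advances line by line until `self.row[key_col]` equals the requested user, and raises at end-of-file. So even though user 58422 exists, the exception can fire if the rows in `train_new.csv` are not grouped/sorted by user. A standard-library sketch for regrouping a ratings CSV before handing the file to `NCFDataset` (the helper is mine; column names follow the issue):

```python
import csv
import io

def sort_rows_by_user(csv_text, user_col="usr_id"):
    """Rewrite a ratings CSV so each user's rows are contiguous and ascending."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    rows.sort(key=lambda r: int(r[user_col]))     # stable: keeps per-user row order
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(rows[0]), lineterminator="\n")
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

raw = "usr_id,code_id,amt_trx\n1,10,1\n0,35,1\n1,2,1\n0,9,1\n"
print(sort_rows_by_user(raw))
```

Only worth trying if the files really are unsorted by usr_id; all three files (train, test, leave-one-out) would need the same ordering treatment before being passed in.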
### Update
If I remove the **overwrite_test_file_full** parameter, the dataset is created, but then I can't make predictions because the dataset object didn't create the user2id mapping:
```
data = NCFDataset(train_file=train_file,
test_file=leave_one_out_test_file,
seed=SEED,
col_user='usr_id',
col_item='code_id',
col_rating='amt_trx',
print_warnings=True)
model = NCF (
n_users=data.n_users,
n_items=data.n_items,
model_type="NeuMF",
n_factors=4,
layer_sizes=[16,8,4],
n_epochs=EPOCHS,
batch_size=BATCH_SIZE,
learning_rate=1e-3,
verbose=99,
seed=SEED
)
predictions = [[row.usr_id, row.code_id, model.predict(row.usr_id, row.code_id)]
for (_, row) in test.iterrows()]
predictions = pd.DataFrame(predictions, columns=['usr_id', 'code_id', 'prediction'])
predictions.head()
```
```
AttributeError Traceback (most recent call last)
Cell In [38], line 1
----> 1 predictions = [[row.usr_id, row.code_id, model.predict(row.usr_id, row.code_id)]
2 for (_, row) in test.iterrows()]
5 predictions = pd.DataFrame(predictions, columns=['usr_id', 'code_id', 'prediction'])
6 predictions.head()
Cell In [38], line 1, in <listcomp>(.0)
----> 1 predictions = [[row.usr_id, row.code_id, model.predict(row.usr_id, row.code_id)]
2 for (_, row) in test.iterrows()]
5 predictions = pd.DataFrame(predictions, columns=['usr_id', 'code_id', 'prediction'])
6 predictions.head()
File /anaconda/envs/recsys/lib/python3.8/site-packages/recommenders/models/ncf/ncf_singlenode.py:434, in NCF.predict(self, user_input, item_input, is_list)
431 return list(output.reshape(-1))
433 else:
--> 434 output = self._predict(np.array([user_input]), np.array([item_input]))
435 return float(output.reshape(-1)[0])
File /anaconda/envs/recsys/lib/python3.8/site-packages/recommenders/models/ncf/ncf_singlenode.py:440, in NCF._predict(self, user_input, item_input)
437 def _predict(self, user_input, item_input):
438
439 # index converting
--> 440 user_input = np.array([self.user2id[x] for x in user_input])
441 item_input = np.array([self.item2id[x] for x in item_input])
443 # get feed dict
File /anaconda/envs/recsys/lib/python3.8/site-packages/recommenders/models/ncf/ncf_singlenode.py:440, in <listcomp>(.0)
437 def _predict(self, user_input, item_input):
438
439 # index converting
--> 440 user_input = np.array([self.user2id[x] for x in user_input])
441 item_input = np.array([self.item2id[x] for x in item_input])
443 # get feed dict
AttributeError: 'NCF' object has no attribute 'user2id'
```
| open | 2022-11-28T09:13:17Z | 2022-11-28T14:20:49Z | https://github.com/recommenders-team/recommenders/issues/1861 | [
"help wanted"
] | mrcmoresi | 0 |
adbar/trafilatura | web-scraping | 245 | Use a class to gather all extraction settings | Implementing an extraction class instead of passing series of arguments to extraction functions would simplify the code. | closed | 2022-09-08T10:32:21Z | 2022-09-12T15:12:58Z | https://github.com/adbar/trafilatura/issues/245 | [
"enhancement"
] | adbar | 1 |
vimalloc/flask-jwt-extended | flask | 525 | Signature verification failed with just generated tokens | I have flask-jwt-extended configured with the following settings, with an app running on a docker container behind nginx:
```
JWT_SECRET_KEY = secrets.token_urlsafe(24)
JWT_ACCESS_TOKEN_EXPIRES = timedelta(hours=1)
JWT_TOKEN_LOCATION = ['cookies']
JWT_COOKIE_CSRF_PROTECT = False
JWT_CSRF_CHECK_FORM = False
JWT_COOKIE_SECURE = True
JWT_COOKIE_SAMESITE = "Strict"
```
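For completeness, here is a stdlib sketch of one failure mode I want to rule out (this is an assumption about my deployment, not something the logs confirm): since `JWT_SECRET_KEY = secrets.token_urlsafe(24)` runs at import time, every worker process that loads the settings gets its own key, and a token minted by one worker would fail verification in another.

```python
import base64
import hashlib
import hmac
import json
import secrets

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict, key: str) -> str:
    # Minimal HS256-style token (header.payload.signature), for illustration only.
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64(json.dumps(payload).encode())
    sig = _b64(hmac.new(key.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify(token: str, key: str) -> bool:
    header, body, sig = token.split(".")
    expected = _b64(hmac.new(key.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

# Each worker process that imports the settings module gets its own key:
key_worker_a = secrets.token_urlsafe(24)
key_worker_b = secrets.token_urlsafe(24)

token = sign({"sub": "user"}, key_worker_a)
print(verify(token, key_worker_a))  # True  (same worker that minted it)
print(verify(token, key_worker_b))  # False (another worker: "Signature verification failed")
```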
In my locust test I have the following:
```python
def on_start(self):
response = self.client.get('http://127.0.0.1/api/csrf')
self.headers = {'X-CSRF-Token': response.json()['csrf']}
self.cookies = dict(response.cookies.iteritems())
response = self.client.post('http://127.0.0.1/api/auth/login', json=user_credentials.pop(), headers=self.headers, cookies=self.cookies)
if response.status_code != HTTPStatus.OK:
raise RuntimeError('Authentication did not succeed')
self.cookies |= dict(response.cookies.iteritems())
# get all the lists for the user
lists = self.client.get('http://127.0.0.1/api/todolists', headers=self.headers, cookies=self.cookies)
self.cookies |= dict(lists.cookies.iteritems())
# request their deletion
for l in lists.json():
response = self.client.get('http://127.0.0.1/api/csrf', cookies=self.cookies)
self.headers |= {'X-CSRF-Token': response.json()['csrf']}
self.cookies |= dict(response.cookies.iteritems())
response = self.client.delete(f'http://127.0.0.1/api/todolists/{l["id"]}', headers=self.headers, cookies=self.cookies)
self.cookies |= dict(response.cookies.iteritems())
```
When running this test against the server, I see the following:
```
nginx-1 | 172.19.0.1 - - [23/Sep/2023:19:14:20 +0000] "GET /api/csrf HTTP/1.1" 200 103 "-" "python-requests/2.31.0" "-"
nginx-1 | 172.19.0.1 - - [23/Sep/2023:19:14:20 +0000] "POST /api/auth/login HTTP/1.1" 200 0 "-" "python-requests/2.31.0" "-"
nginx-1 | 172.19.0.1 - - [23/Sep/2023:19:14:20 +0000] "GET /api/todolists HTTP/1.1" 200 172 "-" "python-requests/2.31.0" "-"
nginx-1 | 172.19.0.1 - - [23/Sep/2023:19:14:20 +0000] "GET /api/csrf HTTP/1.1" 200 103 "-" "python-requests/2.31.0" "-"
nginx-1 | 172.19.0.1 - - [23/Sep/2023:19:14:20 +0000] "DELETE /api/todolists/2c16ce48-3e7e-4e46-8982-5ae64d418d56 HTTP/1.1" 200 0 "-" "python-requests/2.31.0" "-"
nginx-1 | 172.19.0.1 - - [23/Sep/2023:19:14:20 +0000] "GET /api/csrf HTTP/1.1" 200 103 "-" "python-requests/2.31.0" "-"
nginx-1 | 172.19.0.1 - - [23/Sep/2023:19:14:20 +0000] "DELETE /api/todolists/90f863aa-4178-4295-8ef8-2b03898fcbb6 HTTP/1.1" 200 0 "-" "python-requests/2.31.0" "-"
nginx-1 | 172.19.0.1 - - [23/Sep/2023:19:14:20 +0000] "GET /api/csrf HTTP/1.1" 200 103 "-" "python-requests/2.31.0" "-"
nginx-1 | 172.19.0.1 - - [23/Sep/2023:19:14:20 +0000] "POST /api/todolists/add HTTP/1.1" 201 85 "-" "python-requests/2.31.0" "-"
nginx-1 | 172.19.0.1 - - [23/Sep/2023:19:14:23 +0000] "GET /api/csrf HTTP/1.1" 200 103 "-" "python-requests/2.31.0" "-"
internal-1 | Signature verification failed
nginx-1 | 172.19.0.1 - - [23/Sep/2023:19:14:23 +0000] "POST /api/todolists/add HTTP/1.1" 302 199 "-" "python-requests/2.31.0" "-"
internal-1 | Signature verification failed
```
The first GET for todolists as well as the DELETEs are OK, but when the next phase goes on (getting another CSRF token and trying to POST a request to /api/todolists/add), I get "Signature verification failed". A few queries later the verification succeeds, and then fails again.
For the verification I am doing the following:
```python
@app.before_request
def before_request():
# the only routes that do not require authentication are the endpoint to login and to retrieve the csrf
if request.path in [url_for("api.auth.login"), url_for("api.get_csrf")]:
return None
try:
verify_jwt_in_request()
except (NoAuthorizationError, ExpiredSignatureError, InvalidSignatureError) as _exc:
# return static files
return None
```
and for the refreshing of the token I have the following:
```python
@app.after_request
def refresh_expiring_jwts(response):
if not request.blueprint or not request.blueprint.startswith('api'):
return response
try:
verify_jwt_in_request(refresh=True)
access_token = create_access_token(identity=get_jwt_identity())
set_access_cookies(response, access_token)
return response
except (RuntimeError, NoAuthorizationError, InvalidSignatureError):
pass
finally:
return response
```
Am I doing something wrong? | closed | 2023-09-23T19:30:19Z | 2023-09-24T19:23:16Z | https://github.com/vimalloc/flask-jwt-extended/issues/525 | [] | flixman | 1 |
benbusby/whoogle-search | flask | 303 | [BUG] Slow search on Raspberry Pi and incorrect env variables | **Describe the bug**
* Whoogle takes a long time to return search results (between 5-15s)
* Autocomplete doesn't work.
* whoogle.env seems to be ignored.
**Deployment Method**
- [ ] Heroku (one-click deploy)
- [x] Docker -- buildx/experimental
- [ ] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [ ] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [x] Version [0.4.1]
- [ ] Not sure
**Server**
Raspberry Pi 4 4GB
`Linux alarm 5.11.4-1-ARCH #1 SMP Sun Mar 7 23:46:10 UTC 2021 aarch64 GNU/Linux`
**Additional context**
It started a few days ago and I don't think I've made any drastic changes. I tried recreating containers, restarting system, etc. Nothing has worked so far.
docker-compose.yml
```
# cant use mem_limit in a 3.x docker-compose file in non swarm mode
# see https://github.com/docker/compose/issues/4513
version: "2.4"
services:
whoogle-search:
image: benbusby/whoogle-search:buildx-experimental
container_name: whoogle
volumes:
- /etc/localtime:/etc/localtime:ro
- ./whoogle.env:/whoogle/whoogle.env:ro
restart: on-failure:5
mem_limit: 256mb
memswap_limit: 256mb
# user debian-tor from tor package
user: '102'
security_opt:
- no-new-privileges
cap_drop:
- ALL
read_only: true
tmpfs:
- /config/:size=10M,uid=102,gid=102,mode=1700
- /var/lib/tor/:size=10M,uid=102,gid=102,mode=1700
- /run/tor/:size=1M,uid=102,gid=102,mode=1700
environment:
- WHOOGLE_DOTENV=1
ports:
- 5000:5000
restart: unless-stopped
```
whoogle.env
```
WHOOGLE_CONFIG_ALTS=1
WHOOGLE_ALT_TW=nitter.42l.fr
WHOOGLE_ALT_YT=invidious.tube
WHOOGLE_ALT_IG=bib.actionsack.com/u
WHOOGLE_ALT_RD=libredd.it
WHOOGLE_CONFIG_LANGUAGE=lang_en
WHOOGLE_CONFIG_DARK=1
WHOOGLE_CONFIG_NEW_TAB=1 # Open results in new tab
# WHOOGLE_CONFIG_COUNTRY=countryGB # See app/static/settings/
# WHOOGLE_CONFIG_SEARCH_LANGUAGE=lang_en
# WHOOGLE_USER=""
# WHOOGLE_PASS=""
# WHOOGLE_PROXY_USER=""
# WHOOGLE_PROXY_PASS=""
# WHOOGLE_PROXY_TYPE=""
# WHOOGLE_PROXY_LOC=""
# HTTPS_ONLY=1
#
# WHOOGLE_CONFIG_DISABLE=1 # Disables changing of config from client
# WHOOGLE_CONFIG_SAFE=1 # Safe searches
# WHOOGLE_CONFIG_TOR=1 # Use Tor if available
# WHOOGLE_CONFIG_GET_ONLY=1 # Search using GET requests only
# WHOOGLE_CONFIG_URL=https://<whoogle url>/
# WHOOGLE_CONFIG_STYLE=":root { /* LIGHT THEME COLORS */ --whoogle-background: #d8dee9; --whoogle-accent: #2e3440; --whoogle-text: #3B4252; --whoogle-contrast-text: #eceff4; --whoogle-secondary-text: #70757a; --whoogle-result-bg: #fff; --whoogle-result-title: #4c566a; --whoogle-result-url: #81a1c1; --whoogle-result-visited: #a3be8c; /* DARK THEME COLORS */ --whoogle-dark-background: #222; --whoogle-dark-accent: #685e79; --whoogle-dark-text: #fff; --whoogle-dark-contrast-text: #000; --whoogle-dark-secondary-text: #bbb; --whoogle-dark-result-bg: #000; --whoogle-dark-result-title: #1967d2; --whoogle-dark-result-url: #4b11a8; --whoogle-dark-result-visited: #bbbbff; }"
```
Logs seem to be normal except for one:
```
ERROR:app:Exception on /autocomplete [POST]
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/urllib3/connection.py", line 159, in _new_conn
conn = connection.create_connection(
File "/usr/local/lib/python3.8/site-packages/urllib3/util/connection.py", line 61, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/usr/local/lib/python3.8/socket.py", line 918, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -3] Temporary failure in name resolution
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 670, in urlopen
httplib_response = self._make_request(
File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 381, in _make_request
self._validate_conn(conn)
File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 976, in _validate_conn
conn.connect()
File "/usr/local/lib/python3.8/site-packages/urllib3/connection.py", line 308, in connect
conn = self._new_conn()
File "/usr/local/lib/python3.8/site-packages/urllib3/connection.py", line 171, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0xffff88442550>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/requests/adapters.py", line 439, in send
resp = conn.urlopen(
File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 724, in urlopen
retries = retries.increment(
File "/usr/local/lib/python3.8/site-packages/urllib3/util/retry.py", line 439, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='suggestqueries.google.com', port=443): Max retries exceeded with url: /complete/search?client=toolbar&hl=&q=i (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffff88442550>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2446, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1951, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1820, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1949, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1935, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/whoogle/app/routes.py", line 182, in autocomplete
g.user_request.autocomplete(q) if not g.user_config.tor else []
File "/whoogle/app/request.py", line 187, in autocomplete
response = self.send(base_url=AUTOCOMPLETE_URL,
File "/whoogle/app/request.py", line 247, in send
response = requests.get(
File "/usr/local/lib/python3.8/site-packages/requests/api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/lib/python3.8/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.8/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='suggestqueries.google.com', port=443): Max retries exceeded with url: /complete/search?client=toolbar&hl=&q=i (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffff88442550>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'))
ERROR:app:Exception on /autocomplete [POST]
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/urllib3/connection.py", line 159, in _new_conn
conn = connection.create_connection(
File "/usr/local/lib/python3.8/site-packages/urllib3/util/connection.py", line 61, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/usr/local/lib/python3.8/socket.py", line 918, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -3] Temporary failure in name resolution
```
Doing an `nslookup suggestqueries.google.com` on the host works fine, and whoogle has worked fine in the past, so I don't know what happened.
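To check resolution from inside the container rather than on the host, here is a small stdlib sketch (running it via `docker exec` into the whoogle container is my assumption of how you'd invoke it):

```python
import socket

def can_resolve(host: str, port: int = 443) -> bool:
    """Mirror the getaddrinfo call that requests/urllib3 performs."""
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        return False

# Run this inside the container and compare with the host's nslookup result:
print(can_resolve("suggestqueries.google.com"))
```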
Moreover (maybe I'm doing it wrong), whoogle doesn't seem to pick up the whoogle.env values. Doing an `echo $WHOOGLE_ALT_TW` prints the default value. Doing a `cat whoogle.env` inside the container reveals the custom settings, so I don't know why whoogle is picking up the default values.
"bug"
] | accountForIssues | 10 |
huggingface/datasets | deep-learning | 6,842 | Datasets with files with colon : in filenames cannot be used on Windows | ### Describe the bug
Datasets (such as https://huggingface.co/datasets/MLCommons/peoples_speech) cannot be used on Windows because Windows does not allow colons (":") in filenames. These should be converted into alternative strings.
### Steps to reproduce the bug
1. Attempt to run load_dataset on MLCommons/peoples_speech
### Expected behavior
Does not crash during extraction
### Environment info
Windows 11, NTFS filesystem, Python 3.12
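For illustration, one possible sanitization approach (just a sketch; the replacement character and the exact forbidden set are my assumptions, not how `datasets` would necessarily implement it):

```python
import re

# Characters the Windows/NTFS file-name layer forbids.
_WIN_FORBIDDEN = '<>:"/\\|?*'

def sanitize_for_windows(name: str, replacement: str = "_") -> str:
    """Replace characters that Windows does not allow in file names."""
    return re.sub("[" + re.escape(_WIN_FORBIDDEN) + "]", replacement, name)

print(sanitize_for_windows("audio:segment:01.wav"))  # audio_segment_01.wav
```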
| open | 2024-04-26T00:14:16Z | 2024-04-26T00:14:16Z | https://github.com/huggingface/datasets/issues/6842 | [] | jacobjennings | 0 |
dmlc/gluon-nlp | numpy | 1,034 | Error when using fp16 trainer | ## Description
I use the fp16 trainer in fp16_utils.py to train a model, and I got the following error.
@eric-haibin-lin
### Error Message
```
File "/home/ec2-user/project/src/deep/utils/fp16_utils.py", line 179, in step
overflow = self._scaler.has_overflow(self.fp32_trainer._params)
File "/home/ec2-user/project/src/deep/utils/fp16_utils.py", line 195, in has_overflow
is_not_finite += mx.nd.contrib.isnan(grad).sum()
File "/apollo/env/project/lib/python3.6/site-packages/mxnet/ndarray/ndarray.py", line 217, in __iadd__
return op.broadcast_add(self, other, out=self)
File "<string>", line 56, in broadcast_add
File "/apollo/env/project/lib/python3.6/site-packages/mxnet/_ctypes/ndarray.py", line 92, in _imperative_invoke
ctypes.byref(out_stypes)))
File "/apollo/env/project/lib/python3.6/site-packages/mxnet/base.py", line 253, in check_call
raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [22:57:10] /opt/brazil-pkg-cache/packages/DeepMXNet/DeepMXNet-1.5.x.1353.0/AL2012/generic-flavor/src/src/io/../operator/elemwise_op_common.h:135: Check failed: assign(&dattr, vec.at(i)): Incompatible attr in node at 1-th input: expected float16, got float32
```
| closed | 2019-12-03T01:04:51Z | 2019-12-05T02:48:29Z | https://github.com/dmlc/gluon-nlp/issues/1034 | [
"bug"
] | rich-junwang | 0 |
raphaelvallat/pingouin | pandas | 204 | Homoscedasticity is Incorrectly Calculated for Wide-Format DataFrames | Hello,
First, thanks for this great package! It's been great to use in place of scipy.stats and statsmodels.
It seems the following part of the homoscedasticity function (lines 342 - 347 in distribution.py) performs incorrectly:
```python
if dv is None and group is None:
# Wide-format
# Get numeric data only
numdata = data._get_numeric_data()
assert numdata.shape[1] > 1, 'Data must have at least two columns.'
statistic, p = func(*numdata.to_numpy())
```
It passes the data to SciPy in the incorrect shape: for a DataFrame with M rows and N columns, it passes M arrays of length N instead of N arrays of length M.
I believe this can easily be fixed by passing a transposed version of `numdata`, like so:
```python
if dv is None and group is None:
# Wide-format
# Get numeric data only
numdata = data._get_numeric_data()
assert numdata.shape[1] > 1, 'Data must have at least two columns.'
statistic, p = func(*numdata.to_numpy().T)
```
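To make the shape problem concrete, here is a dependency-free sketch of the unpacking (the numbers are made up):

```python
# Wide-format data: M = 3 observations (rows) for N = 2 groups (columns).
rows = [[1.0, 10.0],
        [2.0, 20.0],
        [3.0, 30.0]]

# What homoscedasticity() currently does -- func(*numdata.to_numpy())
# unpacks ROWS, so scipy.stats.levene would receive 3 "groups" of 2 values:
as_rows = rows

# What it should do -- func(*numdata.to_numpy().T) unpacks COLUMNS,
# one array of M observations per group:
as_columns = [list(col) for col in zip(*rows)]

print(len(as_rows), len(as_rows[0]))        # 3 2  (wrong: 3 groups of size 2)
print(len(as_columns), len(as_columns[0]))  # 2 3  (right: 2 groups of size 3)
```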
The current version of the function can also be used with a transposed version of the DataFrame for now, as a workaround:
```python
pg.homoscedasticity(df.T)
```
And this seems to function properly. It also allows arrays of integers to be handled correctly; at the moment, for some reason, they trigger the following error from SciPy:
> RuntimeWarning: divide by zero encountered in double_scalars
> W = numer / denom
Although I haven't had time to dig into why that happens yet... Hopefully it won't matter though, since a simple `.T` should fix this!
Best,
Stephen | closed | 2021-10-20T22:41:18Z | 2021-10-28T22:11:51Z | https://github.com/raphaelvallat/pingouin/issues/204 | [
"bug :boom:",
"URGENT :warning:"
] | StephenB1289 | 4 |
pytorch/vision | machine-learning | 8,902 | Different Behaviors of tranforms.ToTensor and transforms.v2.ToTensor | ## 🐛 Describe the bug
In the [docs](https://pytorch.org/vision/stable/transforms.html#conversion) it says
> Deprecated
>
> | Func | Desc |
> |---|---|
> | v2.ToTensor() | [DEPRECATED] Use v2.Compose([v2.ToImage(), v2.ToDtype(torch.float32, scale=True)]) instead. |
But when using the suggested code, the values are slightly different.
## Test code
```python
from pprint import pprint
import torch
import numpy as np
import torchvision.transforms.v2 as v2
from torchvision import transforms as v1
from PIL.Image import Image, fromarray
np_image = np.array(
[[[100, 150, 200]], [[25, 75, 125]]],
dtype=np.uint8,
)
pil_image = fromarray(np_image) # like done for CIFAR10
class ToTensorV2:
# As of https://pytorch.org/vision/stable/transforms.html#conversion
to_tensor = v2.Compose(
[
v2.ToImage(),
v2.ToDtype(torch.float32, scale=True),
]
)
def __call__(self, inpt: torch.Tensor | Image | np.ndarray) -> torch.Tensor:
return self.to_tensor(inpt)
original_transform = v1.ToTensor()
custom_transform = ToTensorV2()
result1 = original_transform(pil_image)
result2 = custom_transform(pil_image)
# Print results
print("Original image (numpy array):")
print(np_image)
print("\nShape:", np_image.shape)
print("dtype:", np_image.dtype)
print("\nv1.ToTensor() result:")
print(result1)
pprint(result1.tolist())
print("\nShape:", result1.shape)
print("dtype:", result1.dtype)
print("\nToTensorV2 result:")
print(result2)
pprint(result2.tolist())
print("\nShape:", result2.shape)
print("dtype:", result2.dtype)
print("\nDiff:")
print(result1 == result2)
```
### Example output
```
Original image (numpy array):
[[[100 150 200]]
[[ 25 75 125]]]
Shape: (2, 1, 3)
dtype: uint8
v1.ToTensor() result:
tensor([[[0.3922],
[0.0980]],
[[0.5882],
[0.2941]],
[[0.7843],
[0.4902]]])
[[[0.3921568691730499], [0.09803921729326248]],
[[0.5882353186607361], [0.29411765933036804]],
[[0.7843137383460999], [0.4901960790157318]]]
Shape: torch.Size([3, 2, 1])
dtype: torch.float32
ToTensorV2 result:
Image([[[0.3922],
[0.0980]],
[[0.5882],
[0.2941]],
[[0.7843],
[0.4902]]], )
[[[0.3921568989753723], [0.09803922474384308]],
[[0.5882353186607361], [0.29411765933036804]],
[[0.7843137979507446], [0.4901961088180542]]]
Shape: torch.Size([3, 2, 1])
dtype: torch.float32
Diff:
tensor([[[False],
[False]],
[[ True],
[ True]],
[[False],
[False]]])
```
## Versions
```
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: version 3.31.5
Libc version: N/A
Python version: 3.12.3 | packaged by conda-forge | (main, Apr 15 2024, 18:35:20) [Clang 16.0.6 ] (64-bit runtime)
Python platform: macOS-15.3-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M4 Pro
Versions of relevant libraries:
[pip3] mypy==1.15.0
[pip3] mypy_extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] pytorch-lightning==2.5.0.post0
[pip3] pytorch-metric-learning==2.8.1
[pip3] torch==2.5.1
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.20.1
[conda] faiss-cpu 1.9.0 py3.12_hbe593ad_0_cpu pytorch
[conda] libfaiss 1.9.0 hcb8d3e5_0_cpu pytorch
[conda] numpy 1.26.4 py312h8442bc7_0 conda-forge
[conda] pytorch 2.5.1 py3.12_0 pytorch
[conda] pytorch-lightning 2.5.0.post0 pyh101cb37_0 conda-forge
[conda] pytorch-metric-learning 2.8.1 pyh101cb37_1 conda-forge
[conda] torchmetrics 1.6.1 pyhd8ed1ab_0 conda-forge
[conda] torchvision 0.20.1 py312_cpu pytorch
``` | closed | 2025-02-07T21:36:36Z | 2025-02-19T16:34:40Z | https://github.com/pytorch/vision/issues/8902 | [] | jneuendorf | 1 |
gto76/python-cheatsheet | python | 126 | Python | thank you | closed | 2022-03-10T18:52:28Z | 2022-12-14T05:30:29Z | https://github.com/gto76/python-cheatsheet/issues/126 | [] | SouthKoreanLee | 1 |
Zeyi-Lin/HivisionIDPhotos | machine-learning | 1 | When changing outfits, is the background replaced as well? | closed | 2023-07-03T03:18:08Z | 2023-07-04T05:14:23Z | https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/1 | [] | hai411741962 | 1 |
python-visualization/folium | data-visualization | 1,142 | Working on a Colab Notebook (link included) to help teachers to use in the classroom | As a teacher, I find folium a great tool. Sometimes students and teachers do not have enough coding knowledge to work with packages. Therefore, by writing it in a Colab, teachers can easily edit specific areas to use in the classroom. I just started this and I am working through the documentation in getting started.
https://colab.research.google.com/drive/1RMj3_iAdc-NUR8kvACdzQBpaNWlv8_pw
| closed | 2019-05-04T21:17:20Z | 2022-11-26T16:56:49Z | https://github.com/python-visualization/folium/issues/1142 | [
"MentoredSprintsPyCon2019"
] | KellyPared | 0 |
JaidedAI/EasyOCR | deep-learning | 404 | Tajik Language | Here's the needed data. Can you tell me when your OCR can learn it, so I can use it? Thank you!
[easyocr.zip](https://github.com/JaidedAI/EasyOCR/files/6220472/easyocr.zip)
| closed | 2021-03-29T08:59:30Z | 2021-03-31T01:36:32Z | https://github.com/JaidedAI/EasyOCR/issues/404 | [] | KhayrulloevDD | 1 |
jupyterhub/zero-to-jupyterhub-k8s | jupyter | 3,545 | hook and continuous image pullers' DaemonSets: configuring k8s ServiceAccount - yes or no? | ## Update
My take is that we should help people pass some compliance tests etc, even though it doesn't improve security in this case. There could be some edge case where use of service accounts can be relevant still, such as when PSPs were around or similar.
## Background
There are two sets of machinery to pull images to k8s nodes:
- A temporary one-off machinery _before_ `helm upgrade` proceeds in full using a [`pre-upgrade` helm hook](https://helm.sh/docs/topics/charts_hooks/), referred to as "hook image pulling" or "hook pre-puller".
- A continuous machinery _after_ `helm upgrade`, referred to as "continuous image pulling" or "continuous pre-puller"
Both involve a k8s DaemonSet that schedules a pod on each node and has that pod start up containers referencing the images to "pre-pull", which makes the k8s nodes pull the images.
However, the hook image puller DaemonSet resource is paired with the `hook-image-awaiter` k8s Job, that in turn have misc permissions to inspect k8s DaemonSets - and by doing that can know if the pulling has completed. When pulling is completed, the `pre-upgrade` helm hook can be considered completed, and the `helm upgrade` command can proceed. This is why the `hook-image-awaiter` k8s Job needs a k8s ServiceAccount for itself, so it can be granted permissions to ask the k8s api-server about DaemonSet resources.
## Question
- For the k8s DaemonSets resources created for the hook and continuous image puller machineries, is there a need to configure a k8s ServiceAccount by declaring a `serviceAccountName`?
- For the k8s DaemonSets resources created for the hook and continuous image puller machineries, is there a need to have the chart create a k8s ServiceAccount resource to use? | closed | 2024-10-15T09:20:01Z | 2025-01-12T15:26:20Z | https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/3545 | [
"new"
] | consideRatio | 11 |
plotly/dash | plotly | 2,710 | [Feature Request] support multiple URL path levels in path template | I'd like to suggest the following behavior for interpreting path templates as part of the pages feature.
The following example can illustrate the requested behavior:
```
dash.register_page("reports", path_template="/reports/<product>/<feature>/<report_type>/<data_version>")
def layout(product: str | None = None, feature: str | None = None, report_type: str | None = None, data_version: str | None = None) -> Any:
return html.Div(f"{product} {feature} {report_type} {data_version}")
```
For '/reports' layout will be called with None for all input arguments.
For '/reports/spaceship' layout will be called with 'spaceship' for product and None for the rest.
Etc.
A template may also combine arguments and static parts. For instance the following two templates may both be supported:
```
"/reports/<product>/<feature>/types/<report_type>/<data_version>"
"/reports/<product>/<feature>/sub_features/<sub_feature>/<report_type>/<data_version>"
```
When registering the pages, conflicts should be checked and an error raised upon conflict. The rule is that one template should not be a superset of the other. For example, the following templates conflict:
```
"/reports/<product>/<feature>/types/<report_type>/<data_version>"
"/reports/<product>/<feature>/types/<report_type>"
```
Here's my suggested python code for checking a conflict between two templates:
```
def is_variable(var: str) -> bool:
    # startswith/endswith so the empty segment from the leading '/' doesn't raise IndexError
    return var.startswith('<') and var.endswith('>')

def is_template_conflict(tpl1: str, tpl2: str) -> bool:  # return True if there is a conflict
    vars1 = tpl1.split('/')
    vars2 = tpl2.split('/')
    for ind in range(min(len(vars1), len(vars2))):
        if is_variable(vars1[ind]) != is_variable(vars2[ind]):
            return False
        if not is_variable(vars1[ind]) and vars1[ind] != vars2[ind]:
            return False  # both are static and not equal
    return True
```
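A self-contained sanity check of the conflict rule on the example templates above (the helper is re-implemented inline so it runs standalone; guarding against the empty segment produced by the leading '/' is my addition):

```python
def _is_var(seg: str) -> bool:
    return seg.startswith('<') and seg.endswith('>')

def templates_conflict(tpl1: str, tpl2: str) -> bool:
    segs1, segs2 = tpl1.split('/'), tpl2.split('/')
    for a, b in zip(segs1, segs2):
        if _is_var(a) != _is_var(b):
            return False
        if not _is_var(a) and a != b:
            return False  # both static and different
    return True  # one template is a compatible prefix of the other

print(templates_conflict(
    "/reports/<product>/<feature>/types/<report_type>/<data_version>",
    "/reports/<product>/<feature>/types/<report_type>",
))  # True: the shorter template is a superset of the longer one -> conflict

print(templates_conflict(
    "/reports/<product>/<feature>/types/<report_type>/<data_version>",
    "/reports/<product>/<feature>/sub_features/<sub_feature>/<report_type>/<data_version>",
))  # False: 'types' vs 'sub_features' disambiguates them
```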
| closed | 2023-12-10T07:45:07Z | 2024-05-31T20:14:05Z | https://github.com/plotly/dash/issues/2710 | [] | yreiss | 1 |
alteryx/featuretools | data-science | 2,611 | Update for compatibility with pyarrow 13.0.0 in Featuretools | Pyarrow v13.0.0 appears to have introduced changes that are causing some unit tests to fail. These failures should be investigated and, once resolved, the upper version restriction on pyarrow in pyproject.toml should be removed (both in test requirements and spark requirements). | closed | 2023-09-06T14:13:10Z | 2024-02-15T21:58:20Z | https://github.com/alteryx/featuretools/issues/2611 | [] | thehomebrewnerd | 1 |
sqlalchemy/alembic | sqlalchemy | 575 | Insert values to created table | How can I insert values into created tables?
When I create them and commit, it raises an error that the table is not created | closed | 2019-06-05T07:41:47Z | 2019-06-05T07:45:13Z | https://github.com/sqlalchemy/alembic/issues/575 | [] | mrquokka | 0 |
2noise/ChatTTS | python | 155 | What is going on with this error? | To create a public link, set `share=True` in `launch()`.
INFO:ChatTTS.core:All initialized.
WARNING:ChatTTS.core:Package WeTextProcessing not found! Run: conda install -c conda-forge pynini=2.1.5 && pip install WeTextProcessing
Traceback (most recent call last):
File "C:\Users\20514\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\queueing.py", line 521, in process_events
response = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\20514\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\route_utils.py", line 276, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\20514\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\blocks.py", line 1945, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\20514\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\blocks.py", line 1513, in call_function
prediction = await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\20514\AppData\Local\Programs\Python\Python311\Lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\20514\AppData\Local\Programs\Python\Python311\Lib\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "C:\Users\20514\AppData\Local\Programs\Python\Python311\Lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\20514\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\utils.py", line 831, in wrapper
response = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "d:\桌面\ChatTTS-main\webui.py", line 35, in generate_audio
text = chat.infer(text,
^^^^^^^^^^^^^^^^^
File "d:\桌面\ChatTTS-main\ChatTTS\core.py", line 146, in infer
self.init_normalizer(_lang)
File "d:\桌面\ChatTTS-main\ChatTTS\core.py", line 192, in init_normalizer
self.normalizer[lang] = Normalizer().normalize
^^^^^^^^^^
UnboundLocalError: cannot access local variable 'Normalizer' where it is not associated with a value | closed | 2024-05-31T22:57:23Z | 2024-06-24T08:29:28Z | https://github.com/2noise/ChatTTS/issues/155 | [
"bug"
] | iscc-top | 4 |
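The `UnboundLocalError` in the traceback above is typically a symptom of the earlier warning (`Package WeTextProcessing not found!`): when an optional dependency's import happens inside a `try/except` that only logs a warning, the name `Normalizer` is never bound in `init_normalizer`. A minimal sketch of the pattern (the control flow here is illustrative, not the actual ChatTTS source):

```python
def init_normalizer(have_dep: bool):
    # Mimics the failure mode: the optional import is wrapped in try/except,
    # and the except branch only warns instead of raising or falling back.
    try:
        if not have_dep:
            raise ImportError("WeTextProcessing not installed")
        Normalizer = dict  # stand-in for the real imported class
    except ImportError:
        print("WARNING: Package WeTextProcessing not found!")
    # If the import failed, `Normalizer` was never assigned in this scope:
    return Normalizer()  # raises UnboundLocalError when have_dep is False
```

Installing the dependency named in the warning (`pynini` plus `WeTextProcessing`) should make the error disappear.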
microsoft/nni | data-science | 5,350 | . | closed | 2023-02-14T11:44:10Z | 2023-02-15T10:20:42Z | https://github.com/microsoft/nni/issues/5350 | [] | Nafees-060 | 6 | |
onnx/onnx | tensorflow | 6,577 | ONNX produces different results compared to torch.geometric | # Bug Report
### Is the issue related to model conversion?
I have a torch model that I need to convert to ONNX to run in C++. However, I cannot produce the same output from the torch model and the ONNX one.
### Describe the bug
I think the SAGEConv layer causes problems when converted to ONNX. The model exports and loads correctly, but the outputs of torch and ONNX differ.
### System information
- OS Platform and Distribution: Linux Mint 21.3
- ONNX version : 1.17.0
- Python version: Python 3.11.7
- GCC/Compiler version : gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
### Reproduction instructions
CASE 1: torch
```python
# Instantiate the model
gnn_model = GNNModel(
in_dim_classifier=400,
h_feat=400,
hidden_dim_classifier=200,
device="cpu", #cuda
)
gnn_model.eval();
index = 200000
threshold = 0.5
in_dim_classifier, h_feat = 400, 400
hidden_dim_classifier = 200
df = make_dataframe(index)
g = create_input_tensors(df, "fully_connected")
output_torch = gnn_model(g.x, g.edge_index)
```
CASE 2: ONNX
```python
# Export the model to ONNX
torch.onnx.export(
gnn_model, # Your model
(g.x, g.edge_index), # Separate inputs
"gnn_model.onnx", # Output file
input_names=["node_features", "edge_index"], # Names of the inputs
output_names=["edge_predictions"], # Name of the output
dynamic_axes={
"node_features": {0: "num_nodes"}, # Dynamic number of nodes
"edge_index": {1: "num_edges"} # Dynamic number of edges
},
opset_version=15 ,# Adjust as needed
)
input_dict = {
"node_features": g.x.cpu().numpy().astype(np.float32),
"edge_index": g.edge_index.cpu().numpy().astype(np.int64)
}
# Load ONNX model
onnx_session = onnxruntime.InferenceSession("gnn_model.onnx")
# Run inference
outputs = onnx_session.run(
["edge_predictions"], # Names of the outputs
input_dict # Inputs to the model
)
output_onnx = onnx_session.run(["edge_predictions"], {"node_features": g.x.cpu().numpy().astype(np.float32), "edge_index": g.edge_index.cpu().numpy().astype(np.int64),})
```
The two outputs (`output_torch` and `output_onnx`) have the same dimensions. But printing, for example,
```python
print(output_torch[i:i+10,0])
print(torch.tensor(output_onnx[0])[i:i+10,0])
```
I have
```text
tensor([0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.5077, 0.0000, 0.0000, 0.0000,
0.0000], grad_fn=<SelectBackward0>)
tensor([0.7381, 0.3218, 0.3255, 0.0000, 0.8052, 1.5875, 0.8589, 0.4503, 0.4391,
0.0794])
```
The model is
```python
class GNNModel(nn.Module):
    def __init__(self, in_dim_classifier=400, h_feat=400, hidden_dim_classifier=200, device="cpu", mode="test"):
        super(GNNModel, self).__init__()
        self.in_dim_classifier = in_dim_classifier
        self.h_feat = h_feat
        self.hidden_dim_classifier = hidden_dim_classifier
        self.model = GCN_BN(in_feats=4, h_feats=self.h_feat, preprocessing="all_connection")
        self.edge_classifier = MultiEdgeClassifier(in_dim=self.in_dim_classifier, hidden_dim=self.hidden_dim_classifier)
        self.device = device
        # Load pre-trained weights
        self.model.load_state_dict(torch.load(<checkpoint_path>))
        self.model.to(self.device)
        if mode == "test":
            self.model.eval()

    def forward(self, x, edge_index):
        # Custom preprocessing
        x[:, 2] = (x[:, 2] - x[:, 7]).to(self.device)  # Example preprocessing step: cl_time - KTAGTime
        # Forward pass through the GCN model
        gnn_output = self.model(x[:, :4].to(self.device), edge_index.to(self.device))
        return gnn_output
```
where
```python
class GCN_BN(nn.Module):
    def __init__(self, in_feats, h_feats, preprocessing):
        self.preprocessing = preprocessing
        super(GCN_BN, self).__init__()
        self.bn_input = nn.BatchNorm1d(in_feats)
        self.conv1 = SAGEConv(in_feats, h_feats, 'mean')
        # self.conv1 = GCNConv(in_feats, h_feats)
        # Batch Normalization for GCN
        self.bn_gcn = nn.BatchNorm1d(h_feats)
        if self.preprocessing == 'adjacent':
            self.conv2 = SAGEConv(h_feats, h_feats, 'mean')
            # self.conv2 = GCNConv(h_feats, h_feats)
            self.bn_gcn2 = nn.BatchNorm1d(h_feats)
            self.conv3 = SAGEConv(h_feats, h_feats, 'mean')
            # self.conv3 = GCNConv(h_feats, h_feats)
            self.bn_gcn3 = nn.BatchNorm1d(h_feats)

    def forward(self, x, edge_index):
        # Normalization of the features
        x = self.bn_input(x)
        x = self.conv1(x, edge_index)
        x = self.bn_gcn(x)
        x = F.relu(x)
        if self.preprocessing == 'adjacent':
            x = self.conv2(x, edge_index)
            x = self.bn_gcn2(x)
            x = F.relu(x)
            x = self.conv3(x, edge_index)
            x = self.bn_gcn3(x)
            x = F.relu(x)
        return x
```
I think that SAGEConv is causing the problem, but I cannot figure out how to solve it.
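Before pinning this on SAGEConv, two things are worth ruling out (both are assumptions on my part, not confirmed from the model). First, `forward` mutates its input in place (`x[:, 2] = x[:, 2] - x[:, 7]`), so running the torch model first hands the ONNX session an already-modified `g.x`; compare on a cloned copy of the input. Second, compare with a tolerance rather than eyeballing printed slices, since small float drift between backends is expected. A small helper for the comparison:

```python
import numpy as np

def outputs_match(a, b, rtol=1e-3, atol=1e-5):
    """Compare two model outputs; returns (match, max_abs_difference)."""
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    if a.shape != b.shape:
        return False, float("inf")
    max_diff = float(np.max(np.abs(a - b)))
    return bool(np.allclose(a, b, rtol=rtol, atol=atol)), max_diff
```

For example, `outputs_match(output_torch.detach().numpy(), output_onnx[0])`; if the outputs match once each backend gets its own clone of `g.x`, the in-place preprocessing was the culprit rather than the layer.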
### Expected behavior
The two tensors should have the same numbers
| open | 2024-12-06T16:04:24Z | 2024-12-06T16:05:12Z | https://github.com/onnx/onnx/issues/6577 | [
"bug"
] | PliniLeonardo | 0 |
tortoise/tortoise-orm | asyncio | 1,744 | Robyn register_tortoise, but unable to close the Robyn program properly | I tried using tortoise-orm with Robyn, referencing the registration method from Sanic. When I kept the `@app.shutdown_handler` function, I couldn't properly close the Robyn program with Ctrl+C; after removing the `@app.shutdown_handler` function, it closed normally.
```python
from robyn import Robyn, jsonify
from tortoise import Tortoise, connections
from tortoise.log import logger
from models import Users
from typing import Optional, Dict, Iterable, Union
from types import ModuleType, FunctionType


def register_tortoise(
    app: Robyn,
    config: Optional[dict] = None,
    config_file: Optional[str] = None,
    db_url: Optional[str] = None,
    modules: Optional[Dict[str, Iterable[Union[str, ModuleType]]]] = None,
    generate_schemas: bool = False,
    startu_up_function: FunctionType = None,
):
    async def tortoise_init() -> None:
        await Tortoise.init(config=config, config_file=config_file, db_url=db_url, modules=modules)
        logger.info(
            "Tortoise-ORM started, %s, %s", connections._get_storage(), Tortoise.apps
        )  # pylint: disable=W0212

    @app.startup_handler
    async def init_orm():  # pylint: disable=W0612
        if startu_up_function:
            await startu_up_function()
        await tortoise_init()
        if generate_schemas:
            logger.info("Tortoise-ORM generating schema")
            await Tortoise.generate_schemas()

    @app.shutdown_handler
    async def shutdown_orm():  # pylint: disable=W0612
        await Tortoise.close_connections()
        logger.info("Tortoise-ORM connections closed")


app = Robyn(__file__)


@app.get("/")
async def h(request):
    users = await Users.all()
    return jsonify({"users": [str(user) for user in users]})


@app.get("/user")
async def add_user(request):
    user = await Users.create(name="New User")
    return jsonify({"user": str(user)})


async def my_startup_function():
    print("Starting up my app")


register_tortoise(
    app, db_url="sqlite://:memory:", modules={"models": ["models"]}, generate_schemas=True,
    startu_up_function=my_startup_function,
)

app.start(port=8080)
```
Although we can forcefully close the program in other ways, I would like to understand what is happening here.
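One way to keep a graceful close from wedging shutdown (a workaround sketch, not a diagnosis of Robyn's signal handling) is to bound the cleanup with `asyncio.wait_for`, so `Tortoise.close_connections()` cannot block Ctrl+C indefinitely:

```python
import asyncio

async def close_with_timeout(close_coro, timeout=5.0):
    """Await a cleanup coroutine, but give up after `timeout` seconds."""
    try:
        await asyncio.wait_for(close_coro, timeout=timeout)
        return True
    except asyncio.TimeoutError:
        # The connections did not close in time; let shutdown proceed anyway.
        return False
```

In the shutdown handler this would read `await close_with_timeout(Tortoise.close_connections())`.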
| closed | 2024-10-22T08:57:00Z | 2024-11-26T12:23:53Z | https://github.com/tortoise/tortoise-orm/issues/1744 | [] | JiaLiangChen99 | 1 |
jina-ai/clip-as-service | pytorch | 359 | server does not start successfully | **Prerequisites**
> Please fill in by replacing `[ ]` with `[x]`.
* [x] Are you running the latest `bert-as-service`?
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
**System information**
> Some of this information can be collected via [this script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh).
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.1 LTS
- TensorFlow installed from (source or binary): binary (via conda command)
- TensorFlow version: tensorflow-1.13.1
- Python version: python 3.7
- `bert-as-service` version: 1.9.1
- GPU model and memory: no dedicated nvidia GPU
- CPU model and memory: Intel core i7, memory: 16gb
---
### Description
> Please replace `YOUR_SERVER_ARGS` and `YOUR_CLIENT_ARGS` accordingly. You can also write your own description for reproducing the issue.
I'm using this command to start the server:
```bash
bert-serving-start -model_dir /home/mainul/Downloads/uncased_L-24_H-1024_A-16/ -cpu -num_worker=4
```
In the terminal, there is no "ready and listening" message. Which means I think the server has not started successfully after more than 3 hours of waiting. Here is the log from my terminal:
usage: /home/mainul/anaconda3/bin/bert-serving-start -model_dir /home/mainul/Downloads/uncased_L-24_H-1024_A-16/ -cpu -num_worker=4
ARG VALUE
__________________________________________________
ckpt_name = bert_model.ckpt
config_name = bert_config.json
cors = *
cpu = True
device_map = []
do_lower_case = True
fixed_embed_length = False
fp16 = False
gpu_memory_fraction = 0.5
graph_tmp_dir = None
http_max_connect = 10
http_port = None
mask_cls_sep = False
max_batch_size = 256
max_seq_len = 25
model_dir = /home/mainul/Downloads/uncased_L-24_H-1024_A-16/
num_worker = 4
pooling_layer = [-2]
pooling_strategy = REDUCE_MEAN
port = 5555
port_out = 5556
prefetch_size = 10
priority_batch_size = 16
show_tokens_to_client = False
tuned_model_dir = None
verbose = False
xla = False
I:VENTILATOR:[__i:__i: 66]:freeze, optimize and export graph, could take a while...
I:GRAPHOPT:[gra:opt: 52]:model config: /home/mainul/Downloads/uncased_L-24_H-1024_A-16/bert_config.json
I:GRAPHOPT:[gra:opt: 55]:checkpoint: /home/mainul/Downloads/uncased_L-24_H-1024_A-16/bert_model.ckpt
I:GRAPHOPT:[gra:opt: 59]:build graph...
WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.
OMP: Info #212: KMP_AFFINITY: decoding x2APIC ids.
OMP: Info #210: KMP_AFFINITY: Affinity capable, using global cpuid leaf 11 info
OMP: Info #154: KMP_AFFINITY: Initial OS proc set respected: 0-7
OMP: Info #156: KMP_AFFINITY: 8 available OS procs
OMP: Info #157: KMP_AFFINITY: Uniform topology
OMP: Info #179: KMP_AFFINITY: 1 packages x 4 cores/pkg x 2 threads/core (4 total cores)
OMP: Info #214: KMP_AFFINITY: OS proc to physical thread map:
OMP: Info #171: KMP_AFFINITY: OS proc 0 maps to package 0 core 0 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 4 maps to package 0 core 0 thread 1
OMP: Info #171: KMP_AFFINITY: OS proc 1 maps to package 0 core 1 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 5 maps to package 0 core 1 thread 1
OMP: Info #171: KMP_AFFINITY: OS proc 2 maps to package 0 core 2 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 6 maps to package 0 core 2 thread 1
OMP: Info #171: KMP_AFFINITY: OS proc 3 maps to package 0 core 3 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 7 maps to package 0 core 3 thread 1
OMP: Info #250: KMP_AFFINITY: pid 28407 tid 28407 thread 0 bound to OS proc set 0
I:GRAPHOPT:[gra:opt:128]:load parameters from checkpoint...
I:GRAPHOPT:[gra:opt:132]:optimize...
I:GRAPHOPT:[gra:opt:140]:freeze...
OMP: Info #250: KMP_AFFINITY: pid 28407 tid 30426 thread 1 bound to OS proc set 1
OMP: Info #250: KMP_AFFINITY: pid 28407 tid 30427 thread 2 bound to OS proc set 2
OMP: Info #250: KMP_AFFINITY: pid 28407 tid 30428 thread 3 bound to OS proc set 3
I:GRAPHOPT:[gra:opt:145]:write graph to a tmp file: /tmp/tmpmn6qhm2f
I:VENTILATOR:[__i:__i: 74]:optimized graph is stored at: /tmp/tmpmn6qhm2f
I:VENTILATOR:[__i:_ru:128]:bind all sockets
I:VENTILATOR:[__i:_ru:132]:open 8 ventilator-worker sockets
I:VENTILATOR:[__i:_ru:135]:start the sink
I:SINK:[__i:_ru:305]:ready
I:VENTILATOR:[__i:_ge:221]:get devices
I:VENTILATOR:[__i:_ge:254]:device map:
worker 0 -> cpu
worker 1 -> cpu
worker 2 -> cpu
worker 3 -> cpu
I:WORKER-0:[__i:_ru:517]:use device cpu, load graph from /tmp/tmpmn6qhm2f
I:WORKER-1:[__i:_ru:517]:use device cpu, load graph from /tmp/tmpmn6qhm2f
I:WORKER-2:[__i:_ru:517]:use device cpu, load graph from /tmp/tmpmn6qhm2f
I:WORKER-3:[__i:_ru:517]:use device cpu, load graph from /tmp/tmpmn6qhm2f
OMP: Info #212: KMP_AFFINITY: decoding x2APIC ids.
OMP: Info #212: KMP_AFFINITY: decoding x2APIC ids.
OMP: Info #210: KMP_AFFINITY: Affinity capable, using global cpuid leaf 11 info
OMP: Info #154: KMP_AFFINITY: Initial OS proc set respected: 0-7
OMP: Info #156: KMP_AFFINITY: 8 available OS procs
OMP: Info #157: KMP_AFFINITY: Uniform topology
OMP: Info #179: KMP_AFFINITY: 1 packages x 4 cores/pkg x 2 threads/core (4 total cores)
OMP: Info #214: KMP_AFFINITY: OS proc to physical thread map:
OMP: Info #171: KMP_AFFINITY: OS proc 0 maps to package 0 core 0 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 4 maps to package 0 core 0 thread 1
OMP: Info #171: KMP_AFFINITY: OS proc 1 maps to package 0 core 1 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 5 maps to package 0 core 1 thread 1
OMP: Info #171: KMP_AFFINITY: OS proc 2 maps to package 0 core 2 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 6 maps to package 0 core 2 thread 1
OMP: Info #171: KMP_AFFINITY: OS proc 3 maps to package 0 core 3 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 7 maps to package 0 core 3 thread 1
OMP: Info #250: KMP_AFFINITY: pid 30438 tid 30438 thread 0 bound to OS proc set 0
OMP: Info #210: KMP_AFFINITY: Affinity capable, using global cpuid leaf 11 info
OMP: Info #154: KMP_AFFINITY: Initial OS proc set respected: 0-7
OMP: Info #156: KMP_AFFINITY: 8 available OS procs
OMP: Info #157: KMP_AFFINITY: Uniform topology
OMP: Info #179: KMP_AFFINITY: 1 packages x 4 cores/pkg x 2 threads/core (4 total cores)
OMP: Info #214: KMP_AFFINITY: OS proc to physical thread map:
OMP: Info #171: KMP_AFFINITY: OS proc 0 maps to package 0 core 0 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 4 maps to package 0 core 0 thread 1
OMP: Info #171: KMP_AFFINITY: OS proc 1 maps to package 0 core 1 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 5 maps to package 0 core 1 thread 1
OMP: Info #171: KMP_AFFINITY: OS proc 2 maps to package 0 core 2 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 6 maps to package 0 core 2 thread 1
OMP: Info #171: KMP_AFFINITY: OS proc 3 maps to package 0 core 3 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 7 maps to package 0 core 3 thread 1
OMP: Info #250: KMP_AFFINITY: pid 30444 tid 30444 thread 0 bound to OS proc set 0
OMP: Info #250: KMP_AFFINITY: pid 30444 tid 30523 thread 1 bound to OS proc set 1
... | open | 2019-05-22T11:25:45Z | 2020-08-19T17:54:34Z | https://github.com/jina-ai/clip-as-service/issues/359 | [] | mainulquraishi | 2 |
OpenInterpreter/open-interpreter | python | 1,049 | UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte | ### Describe the bug
(oi) C:\Users\Matas>interpreter
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\Matas\anaconda3\envs\oi\Scripts\interpreter.exe\__main__.py", line 4, in <module>
File "C:\Users\Matas\anaconda3\envs\oi\Lib\site-packages\interpreter\__init__.py", line 1, in <module>
from .core.core import OpenInterpreter
File "C:\Users\Matas\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 11, in <module>
from ..terminal_interface.start_terminal_interface import start_terminal_interface
File "C:\Users\Matas\anaconda3\envs\oi\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 16, in <module>
from .validate_llm_settings import validate_llm_settings
File "C:\Users\Matas\anaconda3\envs\oi\Lib\site-packages\interpreter\terminal_interface\validate_llm_settings.py", line 5, in <module>
import litellm
File "C:\Users\Matas\anaconda3\envs\oi\Lib\site-packages\litellm\__init__.py", line 10, in <module>
dotenv.load_dotenv()
File "C:\Users\Matas\anaconda3\envs\oi\Lib\site-packages\dotenv\main.py", line 356, in load_dotenv
return dotenv.set_as_environment_variables()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Matas\anaconda3\envs\oi\Lib\site-packages\dotenv\main.py", line 92, in set_as_environment_variables
if not self.dict():
^^^^^^^^^^^
File "C:\Users\Matas\anaconda3\envs\oi\Lib\site-packages\dotenv\main.py", line 76, in dict
self._dict = OrderedDict(resolve_variables(raw_values, override=self.override))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Matas\anaconda3\envs\oi\Lib\site-packages\dotenv\main.py", line 238, in resolve_variables
for (name, value) in values:
File "C:\Users\Matas\anaconda3\envs\oi\Lib\site-packages\dotenv\main.py", line 84, in parse
for mapping in with_warn_for_invalid_lines(parse_stream(stream)):
File "C:\Users\Matas\anaconda3\envs\oi\Lib\site-packages\dotenv\main.py", line 26, in with_warn_for_invalid_lines
for mapping in mappings:
File "C:\Users\Matas\anaconda3\envs\oi\Lib\site-packages\dotenv\parser.py", line 173, in parse_stream
reader = Reader(stream)
^^^^^^^^^^^^^^
File "C:\Users\Matas\anaconda3\envs\oi\Lib\site-packages\dotenv\parser.py", line 64, in __init__
self.string = stream.read()
^^^^^^^^^^^^^
File "<frozen codecs>", line 322, in decode
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
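`0xff` as the very first byte is the start of a UTF-16-LE byte-order mark, so the likeliest culprit is a `.env` file in the working directory that was saved as UTF-16 (PowerShell's `>` redirection does this by default) and that `python-dotenv` then tries to read as UTF-8. A hedged helper to detect and re-encode such a file (the `.env` path is an assumption; check the directory you run `interpreter` from):

```python
import codecs

def detect_env_encoding(path=".env"):
    """Guess the encoding of a dotenv file from its byte-order mark."""
    with open(path, "rb") as f:
        head = f.read(4)
    if head.startswith(codecs.BOM_UTF16_LE) or head.startswith(codecs.BOM_UTF16_BE):
        return "utf-16"
    if head.startswith(codecs.BOM_UTF8):
        return "utf-8-sig"
    return "utf-8"

def reencode_to_utf8(path=".env"):
    """Rewrite the file as plain UTF-8 if it was saved in another encoding."""
    enc = detect_env_encoding(path)
    if enc != "utf-8":
        with open(path, "r", encoding=enc) as f:
            text = f.read()
        with open(path, "w", encoding="utf-8") as f:
            f.write(text)
    return enc
```

Alternatively, re-save the offending `.env` as UTF-8 in any editor.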
### Reproduce
1. Go to cmd.
2. Run `interpreter`.
3. See the error.
### Expected behavior
When running `interpreter` or any related command, it should work, but it doesn't, and I always get this error. Please help me fix this problem.
### Screenshots
_No response_
### Open Interpreter version
latest
### Python version
3.11
### Operating System name and version
windows 11
### Additional context
_No response_ | open | 2024-03-01T19:38:53Z | 2024-03-01T19:38:53Z | https://github.com/OpenInterpreter/open-interpreter/issues/1049 | [
"Bug"
] | Politas380 | 0 |
ludwig-ai/ludwig | computer-vision | 3,759 | test issue notification | closed | 2023-10-30T23:36:37Z | 2023-10-30T23:37:12Z | https://github.com/ludwig-ai/ludwig/issues/3759 | [] | geoffreyangus | 1 | |
marshmallow-code/flask-marshmallow | sqlalchemy | 194 | URLFor and many relationships | I haven't found much luck with the following.
I'd like to use URLFor to describe a URL for a relationship. I am rendering the child model `Lease`, which back_populates `resources`.
When I am deserializing the `Lease` object and `links` is generated, the parent `Resource` object is desereialized as well, but since it is a list, I am unsure how I can use that relationship to generate a URL to self, which requires looking at `lease.relationships[0].discriminator`. See `LeaseSchema` at bottom.
### Resource Model
```
class Resource(base_mixin.BaseMixin, db.Base):
__tablename__ = 'resources'
id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True)
name = sqlalchemy.Column(sqlalchemy.String(32), unique=True)
labels = sqlalchemy.Column(sqlalchemy.dialects.postgresql.JSONB)
lease_id = sqlalchemy.Column(
sqlalchemy.Integer, sqlalchemy.ForeignKey('leases.id')
)
lease = sqlalchemy.orm.relationship('Lease', back_populates='resources')
discriminator = sqlalchemy.Column('type', sqlalchemy.String(50))
__mapper_args__ = {'polymorphic_on': discriminator}
```
### Sapro Model
```
class Sapro(resource.Resource):
__mapper_args__ = {
'polymorphic_identity': 'sapro',
}
def __init__(self, *args, **kwargs):
super(Sapro, self).__init__(*args, **kwargs)
```
### Lease Model
```
class Lease(base_mixin.BaseMixin, db.Base):
__tablename__ = 'leases'
id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True)
resources = sqlalchemy.orm.relationship(
'Resource', back_populates='lease', cascade='all,delete'
)
```
### Lease Schema
```
class LeaseSchema(ma.Schema):
class Meta:
fields = (
'created_at',
'updated_at',
'id',
'resources',
'links',
)
resources = marshmallow.fields.Nested(sapro.SaproSchema)
links = ma.Hyperlinks(
{
# Unsure how to pass the discriminator from resources to URLFor.
#'self': ma.URLFor(
# 'lease_v1_bp.test_get_leases_by_discriminator',
# resources='<resources>',
#),
'collection': ma.URLFor('lease_v1_bp.get_leases'),
}
``` | open | 2020-06-19T22:36:35Z | 2020-06-19T22:36:57Z | https://github.com/marshmallow-code/flask-marshmallow/issues/194 | [] | retr0h | 0 |
StackStorm/st2 | automation | 5,787 | Concurrency issues with workflow or action | Scene:
Receive and consume messages in bulk using Kafka.
Issue:
The workflow is simple (it just prints messages from Kafka), but action creation is very slow. With more data, or a more complex workflow, processing would become very difficult. Does anyone know what causes this?
We use `with items` for concurrent operations. There is no limit on the number of concurrent operations, but we cap the number of items Kafka consumes at a time at around 20. It feels like a performance problem, but I can't find the cause. I wonder if you can help me. | open | 2022-10-26T03:13:54Z | 2022-10-26T03:13:54Z | https://github.com/StackStorm/st2/issues/5787 | [] | simonli866 | 0 |
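If I recall the orquesta spec correctly, `with: items` supports a per-task `concurrency` attribute, which is usually the first thing to try before throttling on the Kafka side. The underlying pattern is a bounded fan-out, sketched here generically in Python (an illustration of the semantics, not st2 code):

```python
import asyncio

async def run_with_concurrency(items, worker, limit=20):
    """Fan `worker` out over `items`, never more than `limit` in flight."""
    sem = asyncio.Semaphore(limit)

    async def bounded(item):
        async with sem:
            return await worker(item)

    # gather preserves input order in its results
    return await asyncio.gather(*(bounded(i) for i in items))
```

Bounding concurrency this way keeps a burst of 20 Kafka messages from spawning 20 simultaneous heavy action executions at once.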
jupyter/nbviewer | jupyter | 820 | nbviewer.jupyter.org: External images are not updated | If you have links to externalized images in a notebook (but located in the same git repo), when those images change, they are not updated by https://nbviewer.jupyter.org/, even when adding `?flush_cache=true`.
**To Reproduce**
Compare these 2 URLs:
* https://nbviewer.jupyter.org/github/jhermann/jupyter-by-example/blob/master/how-tos/cleanup.ipynb
* https://github.com/jhermann/jupyter-by-example/blob/master/how-tos/cleanup.ipynb
Note the different images – at least at the time of this writing, and yes, also with flushed caches and after Shift-Ctrl-R.
**Expected behavior**
Update the git clone, *at least* when explicit cache flushing is requested. | closed | 2019-03-11T13:52:05Z | 2019-03-18T11:12:52Z | https://github.com/jupyter/nbviewer/issues/820 | [] | jhermann | 2 |