repo_name
stringlengths
9
75
topic
stringclasses
30 values
issue_number
int64
1
203k
title
stringlengths
1
976
body
stringlengths
0
254k
state
stringclasses
2 values
created_at
stringlengths
20
20
updated_at
stringlengths
20
20
url
stringlengths
38
105
labels
listlengths
0
9
user_login
stringlengths
1
39
comments_count
int64
0
452
aminalaee/sqladmin
fastapi
594
Invalid time picker behaviour
### Checklist - [X] The bug is reproducible against the latest release or `master`. - [x] There are no similar issues or pull requests to fix it yet. ### Describe the bug The `TimeField` from WTForms accepts time only in the `"%H:%M"` format, but the UI presents time in a different format, `"H:i:s"`. So attempts to set a time field cause an error (the UI imposes an invalid format). ### Steps to reproduce the bug ```python from datetime import time from sqlalchemy import create_engine from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column import uvicorn from fastapi import FastAPI from sqladmin import Admin, ModelView engine = create_engine('sqlite:///test.db') class Base(DeclarativeBase): pass class Test(Base): __tablename__ = 'test_t' id: Mapped[int] = mapped_column(primary_key=True) test_c: Mapped[time] = mapped_column() class TestView(ModelView, model=Test): column_list = [Test.test_c] Base.metadata.create_all(engine) app = FastAPI() admin = Admin(app, engine=engine) admin.add_view(TestView) uvicorn.run(app) ``` ### Expected behavior I can create a new record in the `test_t` table. ### Actual behavior I can't. ### Debugging material <img width="1217" alt="Снимок экрана 2023-08-29 в 02 21 30" src="https://github.com/aminalaee/sqladmin/assets/67328068/571bd66d-a37d-418c-b108-4fa5caaaf524"> ### Environment ```txt Python 3.9.7 (v3.9.7:1016ef3790, Aug 30 2021, 16:39:15) [Clang 6.0 (clang-600.0.57)] on darwin ``` ```txt sqladmin==0.14.1 WTForms==3.0.1 SQLAlchemy==2.0.17 uvicorn==0.21.1 ``` ### Additional context _No response_
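The format mismatch above can be reproduced with the standard library alone: WTForms' `TimeField` parses the submitted string via `datetime.strptime` against a default `"%H:%M"` format, while the UI's `"H:i:s"` notation includes seconds. A minimal sketch (the helper name is illustrative; in sqladmin, passing a seconds-aware `format` through `form_args` is a plausible workaround, but treat that exact key as an assumption):

```python
from datetime import datetime

def parse_time(raw: str, fmt: str):
    """Parse a time string the way a strptime-backed form field would."""
    return datetime.strptime(raw, fmt).time()

# The UI submits seconds ("H:i:s" in its notation)...
submitted = "02:21:30"

# ...but the field's default format has no seconds, so parsing fails
# with "unconverted data remains".
try:
    parse_time(submitted, "%H:%M")
except ValueError as exc:
    print("default format rejects it:", exc)

# Widening the format to include seconds accepts the submitted value.
print(parse_time(submitted, "%H:%M:%S"))  # 02:21:30
```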
closed
2023-08-28T23:36:32Z
2023-08-30T17:15:45Z
https://github.com/aminalaee/sqladmin/issues/594
[]
TheArcherST
2
ageitgey/face_recognition
machine-learning
789
How to get 68-point landmark results as in dlib
### Description How can I get 68-point landmark results, as dlib produces? I don't want to use `import dlib; etc.. ` directly, because it takes too much time. When I tried to get landmarks, I used `api.face_landmarks(image)`, but what I got was 72 points. So I tried to manually map the results to the 68 points dlib uses. This is how I transformed 72 points to 68 points: ``` python def compatibleToDlib(arr): results = [] # 0 - 54 for i1 in range(55): results.append(arr[i1]) # 55,56,57,58,59 <- arr[61-65] for i2 in range(61,66): results.append(arr[i2]) # 60 <- arr[59] results.append(arr[59]) # 61,62,63 <- arr[58-56] for i3 in range(0,3): results.append(arr[-i3+58]) # 64 <- arr[55] results.append(arr[55]) # 65,66,67 <- arr[70,69,68] for i4 in range(0,3): results.append(arr[-i4+70]) # print(len(results)) return results ``` But it didn't work. The project I need it for is `deepfakes`, and when I tried the conversion afterwards it reported a misalignment: `shapes (2,51) and (55,2) not aligned: 51 (dim 1) != 55 (dim 0)`. When I instead replaced the landmarks with ones created by dlib, it worked. ### So what I want to know is: how can I get the same aligned 68-point landmark results as dlib by using only this repo, face_recognition? Thanks so much!!
open
2019-03-29T02:47:39Z
2022-03-15T16:47:47Z
https://github.com/ageitgey/face_recognition/issues/789
[]
zoeleesss
1
miguelgrinberg/Flask-SocketIO
flask
849
Can't tell if it's a client or server issue with a simple app.
I'm having trouble with my on_...._response handlers. They don't seem to get any of the data being sent by the server. I think I have the server configured properly; it's based on the sample programs. I'm sure I'm doing something wrong and was hoping you could help. The code is here: https://gist.github.com/ilovetogetspamed/b4e013fd860a8d3cf753205747bb9cb6 I tried it two ways, with similar results. Thanks
closed
2018-11-30T21:57:23Z
2018-12-03T17:06:57Z
https://github.com/miguelgrinberg/Flask-SocketIO/issues/849
[ "question" ]
ilovetogetspamed
3
scikit-learn/scikit-learn
data-science
30,832
Numpy Array Error when Training MultiOutputClassifier with LogisticRegressionCV with Underrepresented Classes
### Describe the bug When I train a MultiOutputClassifier with LogisticRegressionCV on data with underrepresented classes, I get the numpy error below. I think this is connected to issues #28178 and #26401. ### Steps/Code to Reproduce ```python import sklearn print(sklearn.__version__) from sklearn.linear_model import LogisticRegressionCV from sklearn.multioutput import MultiOutputClassifier import numpy as np n, m = 20, 5 model = MultiOutputClassifier(LogisticRegressionCV()) X = np.random.randn(n, m) y = np.concatenate([[np.random.randint(0, 2, n), np.random.randint(0, 5, n)]], axis=0).T y[-3:, 0] = [3, 4, 5] model.fit(X, y) ``` ### Expected Results 1.6.1 ### Actual Results 1.6.1 ```pytb .venv/lib/python3.12/site-packages/sklearn/model_selection/_split.py:805: UserWarning: The least populated class in y has only 1 members, which is less than n_splits=5. warnings.warn( Traceback (most recent call last): File "error_skitlearn.py", line 14, in <module> model.fit(X, y) File ".venv/lib/python3.12/site-packages/sklearn/multioutput.py", line 543, in fit super().fit(X, Y, sample_weight=sample_weight, **fit_params) File ".venv/lib/python3.12/site-packages/sklearn/base.py", line 1389, in wrapper return fit_method(estimator, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".venv/lib/python3.12/site-packages/sklearn/multioutput.py", line 274, in fit self.estimators_ = Parallel(n_jobs=self.n_jobs)( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".venv/lib/python3.12/site-packages/sklearn/utils/parallel.py", line 77, in __call__ return super().__call__(iterable_with_config) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".venv/lib/python3.12/site-packages/joblib/parallel.py", line 1918, in __call__ return output if self.return_generator else list(output) ^^^^^^^^^^^^ File ".venv/lib/python3.12/site-packages/joblib/parallel.py", line 1847, in _get_sequential_output res = func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File 
".venv/lib/python3.12/site-packages/sklearn/utils/parallel.py", line 139, in __call__ return self.function(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".venv/lib/python3.12/site-packages/sklearn/multioutput.py", line 63, in _fit_estimator estimator.fit(X, y, **fit_params) File ".venv/lib/python3.12/site-packages/sklearn/base.py", line 1389, in wrapper return fit_method(estimator, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".venv/lib/python3.12/site-packages/sklearn/linear_model/_logistic.py", line 2038, in fit coefs_paths = np.reshape( ^^^^^^^^^^^ File ".venv/lib/python3.12/site-packages/numpy/_core/fromnumeric.py", line 324, in reshape return _wrapfunc(a, 'reshape', shape, order=order) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".venv/lib/python3.12/site-packages/numpy/_core/fromnumeric.py", line 54, in _wrapfunc return _wrapit(obj, method, *args, **kwds) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".venv/lib/python3.12/site-packages/numpy/_core/fromnumeric.py", line 42, in _wrapit conv = _array_converter(obj) ^^^^^^^^^^^^^^^^^^^^^ ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (5, 10) + inhomogeneous part. 
``` ### Versions ```shell System: python: 3.12.3 (main, Jan 17 2025, 18:03:48) [GCC 13.3.0] executable: .venv/bin/python machine: Linux-6.8.0-52-generic-x86_64-with-glibc2.39 Python dependencies: sklearn: 1.6.1 pip: 25.0.1 setuptools: 75.8.0 numpy: 2.2.3 scipy: 1.15.1 Cython: None pandas: 2.2.3 matplotlib: 3.10.0 joblib: 1.4.2 threadpoolctl: 3.5.0 Built with OpenMP: True threadpoolctl info: user_api: blas internal_api: openblas num_threads: 8 prefix: libscipy_openblas filepath: .venv/lib/python3.12/site-packages/numpy.libs/libscipy_openblas64_-6bb31eeb.so version: 0.3.28 threading_layer: pthreads architecture: Haswell user_api: blas internal_api: openblas num_threads: 8 prefix: libscipy_openblas filepath: .venv/lib/python3.12/site-packages/scipy.libs/libscipy_openblas-68440149.so version: 0.3.28 threading_layer: pthreads architecture: Haswell user_api: openmp internal_api: openmp num_threads: 8 prefix: libgomp filepath: .venv/lib/python3.12/site-packages/scikit_learn.libs/libgomp-a34b3233.so.1.0.0 version: None ```
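Until the underlying reshape bug is fixed, a hedged workaround is to detect (and drop or merge) classes with fewer members than `n_splits` before fitting, since those singleton classes are what break the stratified CV folds and, downstream, LogisticRegressionCV's per-class coefficient-path reshape. A stdlib-only sketch of the check (all names are illustrative):

```python
from collections import Counter

def too_rare(labels, n_splits=5):
    """Return the sorted classes with fewer than n_splits members --
    the ones that trigger the UserWarning and break stratified CV."""
    counts = Counter(labels)
    return sorted(c for c, n in counts.items() if n < n_splits)

# Mimics the first output column of the reproducer: mostly binary labels,
# with three singleton classes injected at the end.
y0 = [0, 1] * 8 + [0, 1, 3, 4, 5]
print(too_rare(y0))  # [3, 4, 5]
```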
open
2025-02-14T10:34:16Z
2025-03-10T09:46:04Z
https://github.com/scikit-learn/scikit-learn/issues/30832
[ "Bug" ]
lionelkusch
4
developmentseed/lonboard
jupyter
7
Integrate with ipywidgets ColorPicker widget
https://ipywidgets.readthedocs.io/en/stable/reference/ipywidgets.html#ipywidgets.widgets.widget_color.ColorPicker <img width="579" alt="image" src="https://github.com/developmentseed/lonboard/assets/15164633/4ab5c646-6820-474d-9596-78b2496f5a1f"> Just need to be able to parse the hex to RGB
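The hex-to-RGB parsing the issue asks for is small enough to sketch with the standard library; the function name below is illustrative, not lonboard's API:

```python
def hex_to_rgb(value: str) -> tuple:
    """Parse '#RRGGBB' (or 'RRGGBB', or short '#RGB') into (r, g, b) ints 0-255."""
    value = value.lstrip("#")
    if len(value) == 3:  # short form like '#f0a' -> 'ff00aa'
        value = "".join(ch * 2 for ch in value)
    if len(value) != 6:
        raise ValueError(f"not a hex color: {value!r}")
    return tuple(int(value[i:i + 2], 16) for i in range(0, 6, 2))

print(hex_to_rgb("#ff8800"))  # (255, 136, 0)
```

ipywidgets' `ColorPicker.value` holds exactly such a hex string, so a parser like this is the whole bridge to an RGB color array.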
closed
2023-09-29T18:09:23Z
2023-10-27T18:48:56Z
https://github.com/developmentseed/lonboard/issues/7
[ "python" ]
kylebarron
0
labmlai/annotated_deep_learning_paper_implementations
machine-learning
101
Switch Transformer with Mixture-of-Experts Output
If a Switch Transformer is used for transfer learning, is the output a dense or a sparse vector, and of what size?
closed
2021-11-02T14:44:57Z
2021-11-03T12:28:48Z
https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/101
[ "question" ]
dineshbvadhia
5
Gozargah/Marzban
api
1,480
notifications bot
Greetings, I hope all is well. How can I receive a notification before the data limit or the expiry date is reached? Customize all notifications NOTIFY_STATUS_CHANGE = True NOTIFY_USER_CREATED = false NOTIFY_USER_UPDATED = false NOTIFY_USER_DELETED = false NOTIFY_USER_DATA_USED_RESET = false NOTIFY_USER_SUB_REVOKED = True NOTIFY_IF_DATA_USAGE_PERCENT_REACHED = True NOTIFY_IF_DAYS_LEF_REACHED = True NOTIFY_LOGIN = false NOTIFY_DAYS_LEFT = 3 NOTIFY_REACHED_USAGE_PERCENT = 85 I put these options in the .env file, but the bot does not notify me before the data limit or the expiry date is reached.
closed
2024-12-02T18:46:47Z
2024-12-07T07:54:04Z
https://github.com/Gozargah/Marzban/issues/1480
[]
mohsenzt
2
litestar-org/polyfactory
pydantic
41
Feature: access already generated fields in custom generation
It would be a nice feature to have access to the already generated field values in a customised generator scenario, e.g. ```python from typing import Any from pydantic_factories import ModelFactory class CustomFactory(ModelFactory[Any]): """Tweak the ModelFactory to add our custom mocks.""" @classmethod def get_mock_value(cls, field_type: Any, previous_fields: dict[str, Any]) -> Any: """Add our custom mock value.""" if str(field_type) == "my_dependant_field" and previous_fields["my_relying_on"] == 'special_value': return cls._get_faker().date_time_between() return super().get_mock_value(field_type) ``` I could even imagine some decorator or annotation based solution to the same problem, e.g. ```python class MyFactory(ModelFactory): __model__ = MyModel @depends('field_relying_on') def dependant_field_name(self, current_value: Any, other_value: Any): return 'special_value_generated_based_on_context' ```
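The dependency-ordered generation the issue proposes can be sketched in plain Python, independent of the library's actual API (all names below are illustrative, not pydantic_factories'): generators without declared dependencies run first, and `@depends`-marked generators then receive the already-built values.

```python
import random

def depends(*field_names):
    """Mark a generator method as depending on earlier fields (illustrative)."""
    def wrap(fn):
        fn._depends_on = field_names
        return fn
    return wrap

class TinyFactory:
    """Toy factory: plain gen_* methods run first; @depends-marked ones
    then receive the already-generated values -- the requested behavior."""
    def build(self):
        built, plain, dependent = {}, [], []
        for name in dir(self):
            if not name.startswith("gen_"):
                continue
            fn = getattr(self, name)
            (dependent if hasattr(fn, "_depends_on") else plain).append((name, fn))
        for name, fn in plain:
            built[name[4:]] = fn()          # strip the 'gen_' prefix
        for name, fn in dependent:
            built[name[4:]] = fn(**{d: built[d] for d in fn._depends_on})
        return built

class UserFactory(TinyFactory):
    def gen_kind(self):
        return random.choice(["admin", "guest"])

    @depends("kind")
    def gen_badge(self, kind):
        return "gold" if kind == "admin" else "gray"

print(UserFactory().build())
```

A real implementation would need a topological sort for chains of dependent fields; this sketch only shows the one-hop case the issue describes.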
closed
2022-05-05T14:43:03Z
2022-05-27T16:13:02Z
https://github.com/litestar-org/polyfactory/issues/41
[ "enhancement" ]
blagasz
17
jschneier/django-storages
django
628
Adding handling of media files to the documentation
Would it be possible to add documentation on handling media files? It would have been really helpful in my case. From @elnygren's post on Stack Overflow (https://stackoverflow.com/questions/34247702/configure-django-and-google-cloud-storage) and the answer to https://github.com/jschneier/django-storages/issues/491: I struggled a lot with handling media files between a Django website on App Engine and static files in Google Cloud Storage (both static markup files and media files). After much trying, I finally succeeded with the following settings. Credit to @elnygren! In my settings.py I include the following: ``` DEFAULT_FILE_STORAGE = 'config.storage_backends.GoogleCloudMediaStorage' STATICFILES_STORAGE = 'config.storage_backends.GoogleCloudStaticStorage' GS_PROJECT_ID = '<project-id>' GS_MEDIA_BUCKET_NAME = '<media-bucket-name>' GS_STATIC_BUCKET_NAME = '<static-bucket-name>' STATIC_URL = 'https://storage.googleapis.com/{}/'.format(GS_STATIC_BUCKET_NAME) MEDIA_URL = 'https://storage.googleapis.com/{}/'.format(GS_MEDIA_BUCKET_NAME) GS_DEFAULT_ACL = 'private' # makes the files private ``` In a folder in the root directory named 'config' I inserted the following 'storage_backends.py' file: ``` """ GoogleCloudStorage extensions suitable for handling Django's Static and Media files. 
Requires following settings: MEDIA_URL, GS_MEDIA_BUCKET_NAME STATIC_URL, GS_STATIC_BUCKET_NAME In addition to https://django-storages.readthedocs.io/en/latest/backends/gcloud.html """ from django.conf import settings from storages.backends.gcloud import GoogleCloudStorage from storages.utils import setting from urllib.parse import urljoin class GoogleCloudMediaStorage(GoogleCloudStorage): """GoogleCloudStorage suitable for Django's Media files.""" def __init__(self, *args, **kwargs): if not settings.MEDIA_URL: raise Exception('MEDIA_URL has not been configured') kwargs['bucket_name'] = setting('GS_MEDIA_BUCKET_NAME', strict=True) super(GoogleCloudMediaStorage, self).__init__(*args, **kwargs) def url(self, name): """.url that doesn't call Google.""" return urljoin(settings.MEDIA_URL, name) class GoogleCloudStaticStorage(GoogleCloudStorage): """GoogleCloudStorage suitable for Django's Static files""" def __init__(self, *args, **kwargs): if not settings.STATIC_URL: raise Exception('STATIC_URL has not been configured') kwargs['bucket_name'] = setting('GS_STATIC_BUCKET_NAME', strict=True) super(GoogleCloudStaticStorage, self).__init__(*args, **kwargs) def url(self, name): """.url that doesn't call Google.""" return urljoin(settings.STATIC_URL, name) ``` App Engine already instantiates GOOGLE_APPLICATION_CREDENTIALS, so for development I added: `os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service_account.json"` in my "dev settings"
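One detail worth noting about the `url` override above: `urljoin` only appends the object name when the base URL ends with a slash, which the `MEDIA_URL`/`STATIC_URL` settings shown here do. A quick stdlib check of both behaviors (bucket names are placeholders):

```python
from urllib.parse import urljoin

MEDIA_URL = "https://storage.googleapis.com/media-bucket/"

# With the trailing slash, names join as expected:
print(urljoin(MEDIA_URL, "uploads/avatar.png"))
# https://storage.googleapis.com/media-bucket/uploads/avatar.png

# Without it, urljoin REPLACES the last path segment -- a common pitfall:
print(urljoin("https://storage.googleapis.com/media-bucket", "avatar.png"))
# https://storage.googleapis.com/avatar.png
```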
closed
2018-11-04T21:41:52Z
2019-01-11T18:42:14Z
https://github.com/jschneier/django-storages/issues/628
[]
kasperj93
8
MycroftAI/mycroft-core
nlp
2,973
Fails to install packages on Arch
I had to replace `yes | sudo pacman -S ...` with `sudo pacman -S --noconfirm --needed ...`, because the former was failing to install the packages on Arch Linux.
closed
2021-08-11T00:30:36Z
2021-08-21T17:58:23Z
https://github.com/MycroftAI/mycroft-core/issues/2973
[]
ghost
2
littlecodersh/ItChat
api
87
Adding a friend: the info is received, but the request is not auto-accepted
> @itchat.msg_register(FRIENDS) > def add_friend(msg): > print msg > itchat.add_friend(**msg['Text']) # this call automatically records the new friend's info, so there is no need to reload the contact list > itchat.send_msg('Nice to meet you!', msg['RecommendInfo']['UserName']) The new friend's JSON info is printed, but the friend request is not automatically accepted.
closed
2016-09-26T16:02:06Z
2016-09-27T03:47:04Z
https://github.com/littlecodersh/ItChat/issues/87
[ "bug" ]
poach123
1
ivy-llc/ivy
pytorch
28,608
Fix Frontend Failing Test: tensorflow - non_linear_activations.jax.nn.softplus
closed
2024-03-14T22:13:36Z
2024-03-16T18:20:21Z
https://github.com/ivy-llc/ivy/issues/28608
[ "Sub Task" ]
samthakur587
0
langmanus/langmanus
automation
98
Constant errors in the front-end console, not sure what's going on
It runs normally, but the front-end console keeps printing error messages and I don't know why. No errors appear in the terminal. ![Image](https://github.com/user-attachments/assets/b370bea7-74f7-45e3-a05f-d8f15765e96f) ![Image](https://github.com/user-attachments/assets/fcdade36-8bea-4417-8eaf-f39304b80bb9) # Console output below (proxied model requests all return 200; details redacted) 2025-03-21 17:19:02,647 - src.service.workflow_service - INFO - Starting workflow with user input: [{'role': 'user', 'content': '我是做私域方面ai培训的, 请提供话术与直播脚本,文案,素材等, 供我直播时吸引别人观'}] 2025-03-21 17:19:02,654 - src.graph.nodes - INFO - Coordinator talking. 2025-03-21 17:19:05,672 - src.graph.nodes - INFO - Planner generating full plan 2025-03-21 17:19:53,267 - src.graph.nodes - INFO - Supervisor evaluating next action 2025-03-21 17:19:55,500 - src.graph.nodes - INFO - Supervisor delegating to: researcher 2025-03-21 17:19:55,504 - src.graph.nodes - INFO - Research agent starting task Warning: node executable not found, reverting to pure-Python mode. Install Node.js v10 or newer to use Readability.js. (this warning repeated 8 times) 
2025-03-21 17:20:15,625 - src.graph.nodes - INFO - Research agent completed task 2025-03-21 17:20:15,629 - src.graph.nodes - INFO - Supervisor evaluating next action 2025-03-21 17:20:16,641 - src.graph.nodes - INFO - Supervisor delegating to: reporter 2025-03-21 17:20:16,644 - src.graph.nodes - INFO - Reporter write final report 2025-03-21 17:20:23,258 - src.graph.nodes - INFO - Supervisor evaluating next action 2025-03-21 17:20:24,410 - src.graph.nodes - INFO - Workflow completed # Error messages reported in this issue ## Error 1 Error: Maximum update depth exceeded. This can happen when a component calls setState inside useEffect, but useEffect either doesn't have a dependency array, or one of the dependencies changes on every render. at createUnhandledError (http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:879:71) at handleClientError (http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:1052:56) at console.error (http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:1191:56) at getRootForUpdatedFiber (http://localhost:3000/_next/static/chunks/2756c_next_dist_compiled_0fbf8836._.js:4702:143) at enqueueConcurrentRenderForLane (http://localhost:3000/_next/static/chunks/2756c_next_dist_compiled_0fbf8836._.js:4689:16) at forceStoreRerender (http://localhost:3000/_next/static/chunks/2756c_next_dist_compiled_0fbf8836._.js:5854:20) at http://localhost:3000/_next/static/chunks/2756c_next_dist_compiled_0fbf8836._.js:5840:45 at http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:12095:39 at Array.forEach (<anonymous>) at _devbuildindicator.devBuildIndicator.hide (http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:12095:19) at handleDevBuildIndicatorHmrEvents (http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:14491:54) at WebSocket.handler (http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:15040:88) ## Error 2 
Error: parsed json with extra tokens: {} at createUnhandledError (http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:879:71) at handleClientError (http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:1052:56) at console.error (http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:1191:56) at push.[project]/node_modules/.pnpm/best-effort-json-parser@1.1.3/node_modules/best-effort-json-parser/dist/parse.js [app-client] (ecmascript).parse.onExtraToken (http://localhost:3000/_next/static/chunks/node_modules__pnpm_b85362bd._.js:16263:17) at parse (http://localhost:3000/_next/static/chunks/node_modules__pnpm_b85362bd._.js:16254:23) at PlanTaskView.useMemo[plan] (http://localhost:3000/_next/static/chunks/src_a58de9b6._.js:2000:283) at updateMemo (http://localhost:3000/_next/static/chunks/2756c_next_dist_compiled_0fbf8836._.js:6225:21) at Object.useMemo (http://localhost:3000/_next/static/chunks/2756c_next_dist_compiled_0fbf8836._.js:15198:24) at exports.useMemo (http://localhost:3000/_next/static/chunks/2756c_next_dist_compiled_0fbf8836._.js:1672:36) at PlanTaskView (http://localhost:3000/_next/static/chunks/src_a58de9b6._.js:1989:318) at http://localhost:3000/_next/static/chunks/src_a58de9b6._.js:1899:556 at Array.map (<anonymous>) at http://localhost:3000/_next/static/chunks/src_a58de9b6._.js:1899:156 at Array.map (<anonymous>) at WorkflowProgressView (http://localhost:3000/_next/static/chunks/src_a58de9b6._.js:1880:45) at MessageView (http://localhost:3000/_next/static/chunks/src_a58de9b6._.js:2211:345) at http://localhost:3000/_next/static/chunks/src_a58de9b6._.js:2153:366 at Array.map (<anonymous>) at MessageHistoryView (http://localhost:3000/_next/static/chunks/src_a58de9b6._.js:2153:22) at HomePage (http://localhost:3000/_next/static/chunks/src_a58de9b6._.js:2301:360) at ClientPageRoot (http://localhost:3000/_next/static/chunks/2756c_next_dist_f0270881._.js:2053:50) ## Error 3 Error: parsed 
json with extra tokens: {} at createUnhandledError (http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:879:71) at handleClientError (http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:1052:56) at console.error (http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:1191:56) at push.[project]/node_modules/.pnpm/best-effort-json-parser@1.1.3/node_modules/best-effort-json-parser/dist/parse.js [app-client] (ecmascript).parse.onExtraToken (http://localhost:3000/_next/static/chunks/node_modules__pnpm_b85362bd._.js:16263:17) at parse (http://localhost:3000/_next/static/chunks/node_modules__pnpm_b85362bd._.js:16254:23) at PlanTaskView.useMemo[plan] (http://localhost:3000/_next/static/chunks/src_a58de9b6._.js:2000:283) at updateMemo (http://localhost:3000/_next/static/chunks/2756c_next_dist_compiled_0fbf8836._.js:6229:17) at Object.useMemo (http://localhost:3000/_next/static/chunks/2756c_next_dist_compiled_0fbf8836._.js:15198:24) at exports.useMemo (http://localhost:3000/_next/static/chunks/2756c_next_dist_compiled_0fbf8836._.js:1672:36) at PlanTaskView (http://localhost:3000/_next/static/chunks/src_a58de9b6._.js:1989:318) at http://localhost:3000/_next/static/chunks/src_a58de9b6._.js:1899:556 at Array.map (<anonymous>) at http://localhost:3000/_next/static/chunks/src_a58de9b6._.js:1899:156 at Array.map (<anonymous>) at WorkflowProgressView (http://localhost:3000/_next/static/chunks/src_a58de9b6._.js:1880:45) at MessageView (http://localhost:3000/_next/static/chunks/src_a58de9b6._.js:2211:345) at http://localhost:3000/_next/static/chunks/src_a58de9b6._.js:2153:366 at Array.map (<anonymous>) at MessageHistoryView (http://localhost:3000/_next/static/chunks/src_a58de9b6._.js:2153:22) at HomePage (http://localhost:3000/_next/static/chunks/src_a58de9b6._.js:2301:360) at ClientPageRoot (http://localhost:3000/_next/static/chunks/2756c_next_dist_f0270881._.js:2053:50) ## Error 4 Error: Maximum update depth 
exceeded. This can happen when a component calls setState inside useEffect, but useEffect either doesn't have a dependency array, or one of the dependencies changes on every render. at createUnhandledError (http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:879:71) at handleClientError (http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:1052:56) at console.error (http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:1191:56) at getRootForUpdatedFiber (http://localhost:3000/_next/static/chunks/2756c_next_dist_compiled_0fbf8836._.js:4702:143) at enqueueConcurrentRenderForLane (http://localhost:3000/_next/static/chunks/2756c_next_dist_compiled_0fbf8836._.js:4689:16) at forceStoreRerender (http://localhost:3000/_next/static/chunks/2756c_next_dist_compiled_0fbf8836._.js:5854:20) at http://localhost:3000/_next/static/chunks/2756c_next_dist_compiled_0fbf8836._.js:5840:45 at http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:12091:39 at Array.forEach (<anonymous>) at _devbuildindicator.devBuildIndicator.show (http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:12091:19) at handleDevBuildIndicatorHmrEvents (http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:14487:54) at WebSocket.handler (http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:15040:88) ## Error 5 Error: Maximum update depth exceeded. This can happen when a component repeatedly calls setState inside componentWillUpdate or componentDidUpdate. React limits the number of nested updates to prevent infinite loops. 
at getRootForUpdatedFiber (http://localhost:3000/_next/static/chunks/2756c_next_dist_compiled_0fbf8836._.js:4701:171) at enqueueConcurrentHookUpdate (http://localhost:3000/_next/static/chunks/2756c_next_dist_compiled_0fbf8836._.js:4685:16) at dispatchSetStateInternal (http://localhost:3000/_next/static/chunks/2756c_next_dist_compiled_0fbf8836._.js:6448:22) at dispatchSetState (http://localhost:3000/_next/static/chunks/2756c_next_dist_compiled_0fbf8836._.js:6421:9) at http://localhost:3000/_next/static/chunks/2756c_next_dist_client_198cdfc1._.js:13911:17
open
2025-03-21T09:32:04Z
2025-03-21T09:32:04Z
https://github.com/langmanus/langmanus/issues/98
[ "bug" ]
Amon1412
0
ultralytics/ultralytics
python
18,727
Performance Drop When Using OBB
### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions. ### Question I have compared the performance of OBB and HBB for object detection using YOLOv8. Why is the performance of my OBB implementation significantly worse than that of HBB (both precision and recall)? Precision is around 0.57 and recall around 0.6 for OBB, while precision and recall are both around 0.75 for HBB. OBB should in principle perform better than HBB, shouldn't it? ### Additional _No response_
open
2025-01-17T06:48:53Z
2025-01-17T07:30:32Z
https://github.com/ultralytics/ultralytics/issues/18727
[ "question", "OBB" ]
KHC1234
10
pytorch/pytorch
numpy
149,718
Constraints for distributions with mixed support
### 🚀 The feature, motivation and pitch I'd like to implement a joint distribution of both discrete and continuous parameters, and would like to be able to define a constraint that indicates that the support is mixed and which parameters are continuous. A potential use-case is representing an approximate posterior distribution (with continuous _and_ discrete parameters) with a distribution, to use it as a prior when new data comes in in a Pyro model. `Distribution.support` is required to be a child of `Constraint`, and `Constraint.is_discrete` returns a `bool`. `constraints._Cat`, for example, returns `True` for `is_discrete` if _any_ of the component constraints are discrete, i.e. PyTorch treats distributions with any discrete support as having a support that is entirely discrete. I can think of 2 solutions: 1. Allow `Constraint.is_discrete` to return a `Tensor` of `Bool` indicating which parameters are discrete. 2. Allow `Distribution.support` to return a `Tensor` of `Constraint` indicating which constraint applies to which parameter. ### Alternatives _No response_ ### Additional context _No response_ cc @fritzo @neerajprad @alicanb @nikitaved
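Option 1 can be sketched without torch: a support object whose `is_discrete` is a per-parameter sequence of flags rather than a single bool (a pure-Python stand-in, not the `torch.distributions.constraints` API):

```python
class MixedSupport:
    """Sketch of option 1: per-parameter discreteness flags instead of a
    single bool, so a joint distribution can report a mixed support."""
    def __init__(self, flags):
        self.flags = list(flags)  # True = discrete, False = continuous

    @property
    def is_discrete(self):
        return self.flags

    def split_indices(self):
        """Indices of the discrete vs continuous parameters."""
        disc = [i for i, f in enumerate(self.flags) if f]
        cont = [i for i, f in enumerate(self.flags) if not f]
        return disc, cont

# One discrete parameter followed by two continuous ones:
support = MixedSupport([True, False, False])
print(support.split_indices())  # ([0], [1, 2])
```

With flags like these, downstream code (e.g. a guide that enumerates discrete sites and uses gradients for continuous ones) can dispatch per parameter instead of treating the whole support as discrete.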
open
2025-03-21T09:03:28Z
2025-03-21T15:07:28Z
https://github.com/pytorch/pytorch/issues/149718
[ "module: distributions", "triaged" ]
sethaxen
0
ansible/awx
django
15,486
Error when installing AWX using playbook
### Please confirm the following - [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html). - [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates. - [X] I understand that AWX is open source software provided for free and that I might not receive a timely response. - [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.) ### Bug Summary I'm working on my final-year project using Ansible AWX, but I get an error when installing AWX using the playbook. ### AWX version Ansible(core 2.17.30) ### Select the relevant components - [ ] UI - [ ] UI (tech preview) - [ ] API - [X] Docs - [ ] Collection - [ ] CLI - [X] Other ### Installation method docker development environment ### Modifications no ### Ansible version Ansible(core 2.17.30) ### Operating system Ubuntu 22.04 ### Web browser Chrome, Edge ### Steps to reproduce I get the error below when running the playbook `ansible-playbook -i inventory install.yml`. python3 is already installed on AWS Ubuntu 22.04. I followed the installation steps from this website: https://medium.com/@sahildrive007/easily-install-ansible-awx-on-ubuntu-a-step-by-step-guide-b391c783d54c ### Expected results I expected it to run, but it didn't. I'm not sure how to set the interpreter; if you know, please explain in detail. I'm new to working with Ubuntu. ### Actual results I get the error below when running the playbook. [DEPRECATION WARNING]: community.docker.docker_compose has been deprecated. This module uses docker-compose v1, which is End of life since july 2022. Please migrate to community.docker.docker_compose_v2. this feature will be removed from community.docker in version 4.0.0. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. 
PLAY [Build and deploy AWX] ************************************************************ TASK [Gathering Facts] ***************************************************************** fatal: [localhost]: FAILED! => {"ansible_facts": {}, "changed": false, "failed_modules": {"ansible.legacy.setup": {"failed": true, "module_stderr": "/bin/sh: 1: /usr/bin/env python3: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}}, "msg": "the following module failed to execute: ansible.legacy.setup\n"} PLAY RECAP ************************************************************************** ### Additional information _No response_
closed
2024-09-03T18:55:54Z
2024-09-04T19:03:01Z
https://github.com/ansible/awx/issues/15486
[ "type:bug", "component:docs", "needs_triage", "community" ]
Manimala94
1
huggingface/transformers
python
36,096
flex_attention does not output the full attention_weights with output_attention option
### System Info transformers==4.45.1 torch==2.6.0 ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ```python attention_interface: Callable = eager_attention_forward if self.config._attn_implementation != "eager": if self.config._attn_implementation == "sdpa" and kwargs.get("output_attentions", False): logger.warning_once( "`torch.nn.functional.scaled_dot_product_attention` does not support `output_attentions=True`. Falling back to " 'eager attention. This warning can be removed using the argument `attn_implementation="eager"` when loading the model.' ) else: attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation] attn_output, attn_weights = attention_interface( self, query_states, key_states, value_states, attention_mask, dropout=0.0 if not self.training else self.attention_dropout, scaling=self.scaling, **kwargs, ) ``` ### Expected behavior The `output_attentions` option (e.g. in [llama](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L232)) is supposed to return the full attention weights of shape (bsize, nheads, len_q, len_k). Currently, transformers supports this option with eager attention and flex_attention, but it looks like what [flex_attention](https://github.com/huggingface/transformers/blob/main/src/transformers/integrations/flex_attention.py) returns is just the logsumexp scores of shape (bsize, nheads, len_q). It seems flex_attention does not support outputting the full attention weights either.
closed
2025-02-07T19:24:12Z
2025-03-18T08:04:25Z
https://github.com/huggingface/transformers/issues/36096
[ "bug" ]
gblackout
5
onnx/onnx
machine-learning
6,422
The opposite
https://github.com/onnx/onnx/blob/56cf02beb2331a75f2a119f8b6d64e9d460d3a98/onnx/onnx-ml.proto3#L320 I think it is not the gradient of "x" w.r.t. a chosen loss, but rather the gradient of a chosen loss w.r.t. "x".
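The distinction shows up concretely in the shapes: the gradient of a scalar loss with respect to "x" has the same shape as "x". A small finite-difference sketch (an illustrative example, not ONNX code):

```python
import numpy as np

def loss(x):
    # Scalar loss: sum of squares (an arbitrary illustrative choice).
    return float((x ** 2).sum())

def grad_loss_wrt_x(x, eps=1e-6):
    """Central finite-difference gradient of the scalar loss w.r.t. x."""
    g = np.zeros_like(x)
    for idx in np.ndindex(x.shape):
        d = np.zeros_like(x)
        d[idx] = eps
        g[idx] = (loss(x + d) - loss(x - d)) / (2 * eps)
    return g

x = np.array([[1.0, 2.0], [3.0, 4.0]])
g = grad_loss_wrt_x(x)
print(g.shape)  # (2, 2): same shape as x, consistent with "gradient of loss w.r.t. x"
```

The reverse reading ("gradient of x w.r.t. a loss") would not even be shape-consistent for a scalar loss, which supports the proposed wording fix.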
open
2024-10-03T13:58:19Z
2024-10-03T15:00:14Z
https://github.com/onnx/onnx/issues/6422
[]
JeanLoupFarges
1
jupyter-book/jupyter-book
jupyter
1,328
Embed a hyperlink in the sidebar logo
I saw some feature requests about this (especially #79), but it seems like there is no working workaround for this? A simple option like `link: www.myhomepage.tz` added to the `_config.yml` file would be nice. Thus, when a user clicks on the logo they will be redirected (or a new tab is opened) to the corresponding website.
open
2021-05-12T17:46:44Z
2021-05-13T21:40:45Z
https://github.com/jupyter-book/jupyter-book/issues/1328
[ ":label: sphinx-book-theme" ]
hackenjoe
1
coleifer/sqlite-web
flask
106
Edit Table Data?
Great project! Is it possible to ask for the ability to directly EDIT table data? I.e. change data in a table and ADD or DELETE a row to a table? Thanks for your efforts and consideration!
closed
2022-08-12T05:00:45Z
2024-04-09T07:17:57Z
https://github.com/coleifer/sqlite-web/issues/106
[]
randyg3000
2
automl/auto-sklearn
scikit-learn
1,741
Install error on ubuntu / window - pip and conda
I ran into this issue installing the library on both Windows and Ubuntu 24.04; the output below is from Ubuntu. Please help! I also tried other libraries like TPOT; there the install even succeeds, but I still cannot import it.

With pip, the error is:

```
File "<string>", line 293, in setup_package
ModuleNotFoundError: No module named 'numpy.distutils'
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```

With conda, the error is:

```
conda install conda-forge::auto-sklearn
Channels:
 - defaults
 - conda-forge
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: / warning libmamba Added empty dependency for problem type SOLVER_RULE_UPDATE failed

LibMambaUnsatisfiableError: Encountered problems while solving:
  - package auto-sklearn-0.12.5-pyhd8ed1ab_0 requires pyrfr >=0.8.1,<0.9, but none of the providers can be installed

Could not solve for environment specs
The following packages are incompatible
├─ auto-sklearn is installable and it requires
│  └─ pyrfr >=0.8.1,<0.9 with the potential options
│     ├─ pyrfr [0.8.1|0.8.2] would require
│     │  └─ python >=3.6,<3.7.0a0 , which can be installed;
│     ├─ pyrfr [0.8.1|0.8.2|0.8.3] would require
│     │  └─ python >=3.7,<3.8.0a0 , which can be installed;
│     ├─ pyrfr [0.8.1|0.8.2|0.8.3] would require
│     │  └─ python >=3.8,<3.9.0a0 , which can be installed;
│     ├─ pyrfr [0.8.1|0.8.2|0.8.3] would require
│     │  └─ python >=3.9,<3.10.0a0 , which can be installed;
│     ├─ pyrfr [0.8.2|0.8.3] would require
│     │  └─ python >=3.10,<3.11.0a0 , which can be installed;
│     └─ pyrfr 0.8.3 would require
│        └─ python >=3.11,<3.12.0a0 , which can be installed;
└─ pin-1 is not installable because it requires
   └─ python 3.12.* , which conflicts with any installable versions previously reported.
```
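The pip failure points at `numpy.distutils`, which was deprecated and removed upstream (it is gone entirely on Python 3.12), so setup.py-based builds that depend on it fail on newer environments. A quick stdlib check to confirm whether the current environment still provides it (a diagnostic sketch, not a fix):

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if `name` is importable in the current environment."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # The parent package itself is missing (e.g. numpy not installed).
        return False

# numpy.distutils is absent on Python >= 3.12 / recent NumPy, which matches
# the "No module named 'numpy.distutils'" error during metadata generation.
print(has_module("numpy.distutils"))
```

If this prints False, an older Python (<= 3.11) with an older NumPy is one way to reproduce the environment the package's build expects.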
open
2024-09-23T11:03:59Z
2025-01-30T13:16:26Z
https://github.com/automl/auto-sklearn/issues/1741
[]
ladylazy9x
9
google-research/bert
nlp
515
Is there a larger unreleased BERT model?
GPT-2 has a 1.5 billion parameter model that is unreleased, probably to prevent competition now that OpenAI has decided to pursue profitability. Is there a larger BERT model that is unreleased for similar reasons?
open
2019-03-22T18:37:19Z
2019-03-22T18:37:19Z
https://github.com/google-research/bert/issues/515
[]
abhishmitra
0
tensorpack/tensorpack
tensorflow
1,032
What does @layer_register(log_shape=True) do before a layer?
As the title says, I'm curious what `@layer_register(log_shape=True)` does when placed before a layer, such as:

```python
@layer_register(log_shape=True)
def Conv2D(x, out_channel, kernel_shape, padding='SAME', stride=1,
           W_init=None, b_init=None, nl=tf.nn.relu, split=1, use_bias=True):
```
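Roughly speaking, `layer_register` wraps a layer function so that it is managed by tensorpack (variable scoping, argument handling), and `log_shape=True` additionally logs the layer's input/output shapes when the graph is built. A stripped-down pure-Python sketch of just the shape-logging part (not tensorpack's actual implementation):

```python
import functools

import numpy as np

def layer_register(log_shape=True):
    """Toy decorator: optionally log the input/output shapes of a layer fn."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(x, *args, **kwargs):
            out = fn(x, *args, **kwargs)
            if log_shape:
                print(f"{fn.__name__}: input {x.shape} -> output {out.shape}")
            return out
        return wrapper
    return decorator

@layer_register(log_shape=True)
def Flatten(x):
    return x.reshape(x.shape[0], -1)

y = Flatten(np.zeros((4, 8, 8, 3)))
print(y.shape)  # (4, 192)
```

The decorated function behaves exactly like the undecorated one from the caller's point of view; the decorator only adds bookkeeping around the call.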
closed
2019-01-05T21:01:50Z
2019-01-11T16:53:47Z
https://github.com/tensorpack/tensorpack/issues/1032
[ "usage" ]
john81923
1
browser-use/browser-use
python
211
please merge os x chrome profile fix from web-ui into main repo
in order for os x users to have `browser-use` access their chrome profile, a workaround was needed. this now works in the `web-ui` repo. please merge this into the main project so we can use without the web-ui. https://github.com/browser-use/web-ui/pull/83
closed
2025-01-11T16:51:34Z
2025-01-13T13:09:05Z
https://github.com/browser-use/browser-use/issues/211
[]
rawwerks
2
Gozargah/Marzban
api
1,029
How to enable fragment for a single inbound
Greetings. How can fragment be enabled only for one specific inbound, rather than for all configs?
closed
2024-06-03T12:41:46Z
2024-07-15T20:54:13Z
https://github.com/Gozargah/Marzban/issues/1029
[]
rbsdotnet
0
mkhorasani/Streamlit-Authenticator
streamlit
87
Azure AAD Oauth2 Support
Hello, I want to add user authorization system to the application with Azure AD. How can I do that. Does this module have such support? I would appreciate your help.
closed
2023-09-08T12:41:28Z
2024-09-30T06:49:27Z
https://github.com/mkhorasani/Streamlit-Authenticator/issues/87
[ "enhancement" ]
cnhsn
4
SYSTRAN/faster-whisper
deep-learning
1,176
Poor performance for Vietnamese diarization
closed
2024-11-27T10:25:12Z
2024-11-27T10:25:30Z
https://github.com/SYSTRAN/faster-whisper/issues/1176
[]
dohuyduc2002
0
globaleaks/globaleaks-whistleblowing-software
sqlalchemy
3,038
Getting error email multiple times : Error Message: TypeError: Cannot read property 'split' of undefined
Hello, I am getting 4 emails at a time stating "Error Message: TypeError: Cannot read property 'split' of undefined". I started getting this after the update from v4.2.12 to v4.2.13. Note: I have done an apt-update and apt-upgrade on Ubuntu, and I am using version 20 LTS. Any idea why this error message appears?

Version: 4.2.13
URL: /status/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36
Error Message: TypeError: Cannot read property 'split' of undefined

Stacktrace:

```json
[
  { "columnNumber": 240267, "lineNumber": 64, "fileName": "http://complaints.icac.mu/js/scripts.min.js", "source": " at http://complaints.icac.mu/js/scripts.min.js:64:240267" },
  { "columnNumber": 114346, "lineNumber": 8, "fileName": "http://complaints.icac.mu/js/scripts.min.js", "functionName": "Array.<anonymous>", "source": " at Array.<anonymous> (http://complaints.icac.mu/js/scripts.min.js:8:114346)" },
  { "columnNumber": 114327, "lineNumber": 8, "fileName": "http://complaints.icac.mu/js/scripts.min.js", "source": " at http://complaints.icac.mu/js/scripts.min.js:8:114327" },
  { "columnNumber": 93072, "lineNumber": 8, "fileName": "http://complaints.icac.mu/js/scripts.min.js", "source": " at http://complaints.icac.mu/js/scripts.min.js:8:93072" },
  { "columnNumber": 101303, "lineNumber": 8, "fileName": "http://complaints.icac.mu/js/scripts.min.js", "functionName": "d.$digest", "source": " at d.$digest (http://complaints.icac.mu/js/scripts.min.js:8:101303)" },
  { "columnNumber": 103100, "lineNumber": 8, "fileName": "http://complaints.icac.mu/js/scripts.min.js", "functionName": "d.$apply", "source": " at d.$apply (http://complaints.icac.mu/js/scripts.min.js:8:103100)" },
  { "columnNumber": 66702, "lineNumber": 8, "fileName": "http://complaints.icac.mu/js/scripts.min.js", "functionName": "_", "source": " at _ (http://complaints.icac.mu/js/scripts.min.js:8:66702)" },
  { "columnNumber": 68927, "lineNumber": 8, "fileName": "http://complaints.icac.mu/js/scripts.min.js", "functionName": "C", "source": " at C (http://complaints.icac.mu/js/scripts.min.js:8:68927)" },
  { "columnNumber": 68274, "lineNumber": 8, "fileName": "http://complaints.icac.mu/js/scripts.min.js", "functionName": "XMLHttpRequest.b.onload", "source": " at XMLHttpRequest.b.onload (http://complaints.icac.mu/js/scripts.min.js:8:68274)" }
]
```
open
2021-08-30T05:01:52Z
2021-08-31T10:52:36Z
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3038
[]
divium-singh
5
keras-team/keras
pytorch
20,955
Error with variable.regularizer in keras.layers.TFSMLayer
I created a universal sentence encoder layer as an instance of `keras.layers.TFSMLayer` and used it in a binary classifier. Training the model results in the following error:

```
Epoch 1/10
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
[<ipython-input-7-d7863a64cd24>](https://localhost:8080/#) in <cell line: 0>()
      2 y_train = keras.ops.array([[1], [0], [0]])
      3
----> 4 nn_model_history = nn_model.fit(
      5     x = x_train,
      6     y = y_train,

1 frames
[/usr/local/lib/python3.11/dist-packages/keras/src/layers/layer.py](https://localhost:8080/#) in _get_regularization_losses(self)
   1152         weight_regularization_losses = []
   1153         for variable in self.trainable_weights:
-> 1154             if variable.regularizer is None:
   1155                 continue
   1156             if backend.in_stateless_scope() and not in_symbolic_scope():

AttributeError: 'UninitializedVariable' object has no attribute 'regularizer'
```

The [Keras layers code](https://github.com/keras-team/keras/blob/master/keras/src/layers/layer.py) fails when the trainable weights lack a `regularizer` attribute. I've reproduced the error in [this Colab notebook](https://colab.research.google.com/drive/1F-RsSZNrr0bH4EUwkjkm0LE8XyVg6IMB?usp=sharing).
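One defensive pattern that would avoid this class of crash (an illustrative sketch of the pattern, not a patch against the Keras code base) is to treat variables without a `regularizer` attribute as unregularized via `getattr` with a default:

```python
class UninitializedVariable:
    """Stand-in for a restored variable that has no `regularizer` attribute."""
    pass

class RegularizedVariable:
    def __init__(self, regularizer):
        self.regularizer = regularizer

def regularization_losses(variables):
    losses = []
    for variable in variables:
        # getattr with a default avoids the AttributeError in the traceback above,
        # unlike accessing variable.regularizer directly.
        regularizer = getattr(variable, "regularizer", None)
        if regularizer is None:
            continue
        losses.append(regularizer(variable))
    return losses

variables = [UninitializedVariable(), RegularizedVariable(lambda v: 0.01)]
print(regularization_losses(variables))  # [0.01]
```

With the direct attribute access, the first variable in the list would raise exactly the reported `AttributeError`.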
closed
2025-02-24T21:04:34Z
2025-03-08T18:22:39Z
https://github.com/keras-team/keras/issues/20955
[ "type:Bug" ]
rlcauvin
9
strawberry-graphql/strawberry
fastapi
3,347
Different contexts getters depending on the query or mutation
## Feature Request Type

- [ ] Core functionality
- [X] Alteration (enhancement/optimization) of existing feature(s)
- [ ] New behavior

## Description

Currently the only way of doing context getters with the FastAPI integration is through multiple routers, but then I would need to have different paths, which I think would be weird in the GraphQL world. So I would like to be able to change the context based on the resolver instead of setting it only in the main router.
open
2024-01-18T17:30:58Z
2025-03-20T15:56:34Z
https://github.com/strawberry-graphql/strawberry/issues/3347
[]
Focadecombate
2
pydantic/pydantic-core
pydantic
1,071
`pydantic_core-2.14.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl` is missing.
In version 2.10.1, this wheel exists, but it is missing in latest release(2.14.1). See https://pypi.org/project/pydantic-core/2.14.1/#files See https://pypi.org/project/pydantic-core/2.10.1/#files P.S. Not only this one wheel is missing, but also some others are missing.
closed
2023-11-14T05:15:27Z
2023-11-14T14:12:22Z
https://github.com/pydantic/pydantic-core/issues/1071
[]
overcat
1
streamlit/streamlit
streamlit
10,419
Add support for more currency formats to `column_config.NumberColumn`
### Checklist - [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests. - [x] I added a descriptive title and summary to this issue. ### Summary In https://github.com/streamlit/streamlit/pull/10179 we added support for a couple of pre-defined formats. This also includes `dollar` and `euro` to format values as currencies. Should we add more formats to cover other currencies as well? Maybe we can just support the full list of currencies supported by the [Intl.NumberFormat API](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl/NumberFormat/NumberFormat#currency_2). ### Why? _No response_ ### How? _No response_ ### Additional Context _No response_
open
2025-02-17T17:11:07Z
2025-02-17T17:11:21Z
https://github.com/streamlit/streamlit/issues/10419
[ "type:enhancement", "feature:st.column_config" ]
lukasmasuch
1
aminalaee/sqladmin
asyncio
218
`sourceMappingURL` triggers access to non-existent file
### Checklist

- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.

### Describe the bug

The .js files specify `sourceMappingURL` but the `.map` files are not included in the package. So the browser tries to fetch it and gets a 404 error. We should add the source map files or remove the following lines: https://github.com/aminalaee/sqladmin/blob/daea8c9c385794605ddeb02b7105d71ca9a9823f/sqladmin/statics/js/bootstrap.min.js#L7

### Steps to reproduce the bug

_No response_

### Expected behavior

_No response_

### Actual behavior

_No response_

### Debugging material

```
INFO: 127.0.0.1:8000 - "GET /js/bootstrap.min.js.map HTTP/1.1" 404 Not Found
```

### Environment

- Ubuntu 20.04.4 LTS
- Python 3.9.12
- SQLAdmin 0.1.11

### Additional context

_No response_
closed
2022-06-27T04:30:26Z
2022-07-08T07:23:01Z
https://github.com/aminalaee/sqladmin/issues/218
[ "good first issue" ]
okapies
2
Gozargah/Marzban
api
991
وارپ با استفاده از هسته در Core SettingXray
Hello and regards. For WARP using the core, in the Xray Core Settings I have:

```json
{
  "tag": "warp",
  "protocol": "wireguard",
  "settings": {
    "secretKey": "Your_Secret_Key",
    "DNS": "1.1.1.1",
    "address": ["172.16.0.2/32", "2606:4700:110:8756:9135:af04:3778:40d9/128"],
    "peers": [
      {
        "publicKey": "bmXOC+F1FxEMF9dyiK2H5/1SUtzH0JuVo51h2wPfgyo=",
        "endpoint": "engage.cloudflareclient.com:2408"
      }
    ],
    "kernelMode": false
  }
}
```

The items present in the wgcf-profile.conf file are:

- access_token
- device_id
- license_key
- private_key

In which part of the Xray Core Settings should each of the items above be placed?
closed
2024-05-18T20:37:05Z
2024-05-21T05:28:51Z
https://github.com/Gozargah/Marzban/issues/991
[ "Duplicate", "Invalid" ]
yaramahmadi
2
plotly/plotly.py
plotly
4,791
make package versions explicit in packages/python/plotly/optional-requirements.txt
As the title says, `packages/python/plotly/optional-requirements.txt` should specify versions or version ranges for pandas, scipy, etc.
open
2024-10-09T13:06:20Z
2024-10-09T13:06:21Z
https://github.com/plotly/plotly.py/issues/4791
[ "feature", "P3" ]
gvwilson
0
wkentaro/labelme
deep-learning
499
labelme2voc.py file wrongly puts annotations on the image
Hi, after creating .json files of segmentation annotations, I tried labelme2voc.py to save the original images, the segmentation masks, and a visualization of the segmentation annotations on the original images. But for some reason, the annotation masks, and hence the visualization, do not match the actual objects in the original image. E.g., this is the original image.

![PLATE_FFS-12_2013_A_005](https://user-images.githubusercontent.com/5428153/67136328-bea21000-f1d9-11e9-9a47-f942b4f9d0c2.jpg)

And this is the image for the annotation masks.

![PLATE_FFS-12_2013_A_005](https://user-images.githubusercontent.com/5428153/67136338-d7122a80-f1d9-11e9-914d-5ed0f5e46f73.png)

The objects that I drew polygons on were the 3 red blobs on the top side of the original image, but somehow they appear on the left, as shown below.

![PLATE_FFS-12_2013_A_005](https://user-images.githubusercontent.com/5428153/67136347-ebeebe00-f1d9-11e9-9e0c-cb9ab142c9cc.jpg)

Any idea what might be the reason? I am working on Ubuntu.
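One common cause of exactly this symptom (an educated guess, not a confirmed diagnosis for this report) is EXIF orientation: the JPEG carries a rotation flag that some viewers honor and some processing scripts ignore, so masks drawn on the displayed image end up rotated relative to the raw pixel data. Baking the orientation into the pixels before annotating and converting rules this out:

```python
from PIL import Image, ImageOps

def load_normalized(path):
    """Open an image and bake any EXIF orientation into the pixel data."""
    img = Image.open(path)
    # exif_transpose returns a copy with the orientation flag applied.
    return ImageOps.exif_transpose(img)

# e.g. save a normalized copy before running labelme / labelme2voc.py:
# load_normalized("PLATE_FFS-12_2013_A_005.jpg").save("normalized.jpg")
```

If the rotation between the mask and the image is a multiple of 90 degrees, this is a strong hint that the orientation flag is the culprit.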
closed
2019-10-19T02:03:32Z
2019-10-20T02:07:03Z
https://github.com/wkentaro/labelme/issues/499
[]
sangramkapre
1
microsoft/Bringing-Old-Photos-Back-to-Life
pytorch
225
module 'models.networks' has no attribute 'modify_commandline_options'
Running Stage 3: Face Enhancement

```
Traceback (most recent call last):
  File "test_face.py", line 15, in <module>
    opt = TestOptions().parse()
  File "E:\picture_fix\Bringing-Old-Photos-Back-to-Life\Face_Enhancement\options\base_options.py", line 262, in parse
    opt = self.gather_options()
  File "E:\picture_fix\Bringing-Old-Photos-Back-to-Life\Face_Enhancement\options\base_options.py", line 197, in gather_options
    parser = model_option_setter(parser, self.isTrain)
  File "E:\picture_fix\Bringing-Old-Photos-Back-to-Life\Face_Enhancement\models\pix2pix_model.py", line 12, in modify_commandline_options
    networks.modify_commandline_options(parser, is_train)
AttributeError: module 'models.networks' has no attribute 'modify_commandline_options'
```

Finish Stage 3 ...
open
2022-03-17T01:41:33Z
2022-04-26T00:23:55Z
https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/225
[]
bill1969
2
CorentinJ/Real-Time-Voice-Cloning
python
1,267
Kılıcdaroğlu
Kzkgzpux
open
2023-10-25T11:50:31Z
2023-10-25T11:50:58Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1267
[]
nejatsamedov
0
joeyespo/grip
flask
27
shutdown grip server
Is it possible to shutdown the grip server in the CLI? If not, would it be hard to add this feature?
closed
2013-09-09T20:00:20Z
2013-09-10T19:22:17Z
https://github.com/joeyespo/grip/issues/27
[ "question" ]
kindrowboat
4
lepture/authlib
flask
235
Support for EdDSA Algorithm
It looks like this is one of the new algorithms, and is listed on jwt.io. It's the only thing on there that you don't support. Keep it up!
closed
2020-05-23T20:01:00Z
2020-05-24T07:10:36Z
https://github.com/lepture/authlib/issues/235
[ "spec", "feature request" ]
benwis
2
twopirllc/pandas-ta
pandas
503
Ray/Trendline/Linear Regression of two points
Say I have two points on the DataFrame (price and UNIX timestamp) and I want to draw a ray (an endless line) that goes through these two points. Does Pandas TA or pandas have a function for this? Is there an easy way to draw a linear regression ray? I just have the two points, and I want to draw a ray through them and get all the data points (price and timestamp of each), say 1 week into the future, to be predicted. Any example of how to use "linear_regression"?
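For exactly two known points the line is fully determined, so it can be extrapolated directly without any fitting (an illustrative stdlib helper, not part of Pandas TA; Pandas TA's rolling regression indicators fit over a window rather than a two-point ray):

```python
def ray_through(t0, p0, t1, p1, timestamps):
    """Evaluate the line through (t0, p0) and (t1, p1) at the given timestamps."""
    slope = (p1 - p0) / (t1 - t0)
    return [p0 + slope * (t - t0) for t in timestamps]

# Two anchor points: (UNIX timestamp, price); project one week ahead in daily steps.
t0, p0 = 1_600_000_000, 100.0
t1, p1 = 1_600_086_400, 102.0          # one day later
day = 86_400
future = [t1 + day * k for k in range(1, 8)]
print(ray_through(t0, p0, t1, p1, future)[:2])  # ~[104.0, 106.0]
```

The resulting (timestamp, price) pairs can be put straight into a DataFrame column and plotted alongside the price series.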
closed
2022-03-15T18:34:43Z
2022-05-02T22:12:39Z
https://github.com/twopirllc/pandas-ta/issues/503
[ "help wanted", "good first issue", "info" ]
hadialaddin
3
huggingface/datasets
pandas
6,778
Dataset.to_csv() missing commas in columns with lists
### Describe the bug

The `to_csv()` method does not output commas in lists. So when the Dataset is loaded back in, the data structure of the column with a list is not correct. An example is given in the steps below.

Obviously, it's not as trivial as inserting commas in the list, since it's a comma-separated file. But hopefully there's a way to export the list in a way that it'll be imported by `load_dataset()` correctly.

### Steps to reproduce the bug

Here's some code to reproduce the bug:

```python
from datasets import Dataset

ds = Dataset.from_dict(
    {
        "pokemon": ["bulbasaur", "squirtle"],
        "type": ["grass", "water"]
    }
)

def ascii_to_hex(text):
    return [ord(c) for c in text]

ds = ds.map(lambda x: {"int": ascii_to_hex(x['pokemon'])})
ds.to_csv('../output/temp.csv')
```

temp.csv then contains the actual output shown below.

### Expected behavior

ACTUAL OUTPUT:

```
pokemon,type,int
bulbasaur,grass,[ 98 117 108 98 97 115 97 117 114]
squirtle,water,[115 113 117 105 114 116 108 101]
```

EXPECTED OUTPUT:

```
pokemon,type,int
bulbasaur,grass,[98, 117, 108, 98, 97, 115, 97, 117, 114]
squirtle,water,[115, 113, 117, 105, 114, 116, 108, 101]
```

or probably something more like this since it's a CSV file:

```
pokemon,type,int
bulbasaur,grass,"[98, 117, 108, 98, 97, 115, 97, 117, 114]"
squirtle,water,"[115, 113, 117, 105, 114, 116, 108, 101]"
```

### Environment info

Package: datasets, version 2.16.1

Python version: 3.10.12

OS info:

```
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
...
UBUNTU_CODENAME=jammy
```
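Until the exporter handles list columns, one workaround (a stdlib-only sketch, not the datasets API) is to serialize list columns to JSON strings before writing: `json.dumps` keeps the commas, and the CSV writer adds the quoting, so the list round-trips cleanly.

```python
import csv
import io
import json

rows = [
    {"pokemon": "bulbasaur", "type": "grass", "int": [98, 117, 108]},
    {"pokemon": "squirtle", "type": "water", "int": [115, 113, 117]},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["pokemon", "type", "int"])
writer.writeheader()
for row in rows:
    # json.dumps keeps the commas; the csv module quotes the field for us.
    writer.writerow({**row, "int": json.dumps(row["int"])})

text = buf.getvalue()
print(text.splitlines()[1])  # bulbasaur,grass,"[98, 117, 108]"

# The list survives a round trip through the CSV:
back = next(iter(csv.DictReader(io.StringIO(text))))
print(json.loads(back["int"]))  # [98, 117, 108]
```

In datasets this would correspond to mapping the list column through `json.dumps` before `to_csv()` and through `json.loads` after loading.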
open
2024-04-04T16:46:13Z
2024-04-08T15:24:41Z
https://github.com/huggingface/datasets/issues/6778
[]
mpickard-dataprof
1
KevinMusgrave/pytorch-metric-learning
computer-vision
647
Can MatchFinder implement the query library?
Hello! Thank you for developing this great library. It helps me try metric learning a lot. I am conducting a binary classification task. I want to use feature vectors for similarity calculation to assist in the final test results. I want to create a feature vector query library. My model has already obtained a set of labeled feature vectors, and I want to use MatchFinder to select the most representative feature vectors as the query library. Each time a test feature vector is obtained, it will be queried against the query library to determine the classification. I am unsure how to implement this. Could you provide a simple example, or explain how MatchFinder can be used in this way? As a beginner, I hope you don't mind if I have any questions.
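The core idea of a labeled query library can be sketched framework-free with cosine similarity (an illustrative sketch; the names here are made up, and the library itself ships inference utilities that serve the same purpose more efficiently):

```python
import numpy as np

class QueryLibrary:
    """Tiny gallery of labeled embedding vectors, queried by cosine similarity."""

    def __init__(self, vectors, labels):
        v = np.asarray(vectors, dtype=float)
        # Pre-normalize so a dot product equals cosine similarity.
        self.vectors = v / np.linalg.norm(v, axis=1, keepdims=True)
        self.labels = list(labels)

    def classify(self, query):
        q = np.asarray(query, dtype=float)
        q = q / np.linalg.norm(q)
        sims = self.vectors @ q
        return self.labels[int(np.argmax(sims))]

# Gallery embeddings for the two classes of a binary task.
lib = QueryLibrary([[1.0, 0.0], [0.0, 1.0]], labels=[0, 1])
print(lib.classify([0.9, 0.1]))  # 0
```

Each test embedding is compared against the gallery and takes the label of its nearest neighbor; "most representative" gallery vectors could be chosen, for example, as per-class centroids of the labeled embeddings.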
closed
2023-07-10T09:22:40Z
2023-07-24T16:01:27Z
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/647
[ "question" ]
ZH-UCAS
1
MilesCranmer/PySR
scikit-learn
339
[Feature]: Add complexity calculation for user defined expression
### Feature Request Hi. I've recently started using PySR and I would like to suggest a new feature that I think would make the code even more user-friendly. Would it be possible to have more direct access to the function that computes the complexity such that one can compare expressions found by PySR and those found in the literature? For example: `model.complexity('1 + x_0 + x_1**2')` This would allow the user to easily map the expressions found in the literature on the complexity vs accuracy plots. Thank you in advance.
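PySR's complexity is essentially a node count over the expression tree (optionally weighted per operator or constant). A rough stdlib approximation for scoring literature expressions on the same axis (an illustrative proxy, not PySR's own complexity machinery) is to count operator and operand nodes with `ast`:

```python
import ast

def expr_complexity(expr: str) -> int:
    """Count operator/operand nodes in an expression, as a proxy for complexity."""
    tree = ast.parse(expr, mode="eval")
    counted = (ast.BinOp, ast.UnaryOp, ast.Call, ast.Name, ast.Constant)
    return sum(isinstance(node, counted) for node in ast.walk(tree))

print(expr_complexity("1 + x_0 + x_1**2"))  # 7
```

Here the count is 3 binary operators plus 2 names plus 2 constants; with a score like this, hand-picked expressions can be placed on the same complexity-vs-accuracy plot as PySR's Pareto front, with the caveat that the exact numbers need not match PySR's internal counting.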
closed
2023-05-28T10:07:22Z
2024-03-24T23:45:31Z
https://github.com/MilesCranmer/PySR/issues/339
[ "enhancement", "priority: mid" ]
OsAmaro
6
rougier/from-python-to-numpy
numpy
39
Incorrect Python version
Section 1.2 declares Python version 3.5.2; however, section 2.1 "vectorized approach" uses the random.choices() function, which was introduced in Python 3.6. https://docs.python.org/3.6/library/random.html#random.choices
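For what it's worth, on 3.5 the same weighted sampling can be emulated with stdlib pieces that predate 3.6 (a minimal sketch of what `random.choices` does internally):

```python
import bisect
import itertools
import random

def choices_35(population, weights, k=1):
    """Weighted sampling with replacement, compatible with Python 3.5."""
    cum = list(itertools.accumulate(weights))
    total = cum[-1]
    return [population[bisect.bisect(cum, random.random() * total)]
            for _ in range(k)]

sample = choices_35(["a", "b", "c"], weights=[1, 1, 8], k=5)
print(len(sample), set(sample) <= {"a", "b", "c"})  # 5 True
```

So the book could either bump the declared Python version to 3.6 or swap in a helper like this for the vectorized-approach example.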
closed
2017-01-10T12:26:00Z
2017-01-10T13:02:02Z
https://github.com/rougier/from-python-to-numpy/issues/39
[]
reidfaiv
2
deepfakes/faceswap
machine-learning
1,255
How to transfer facial expression of source image and other body gestures and posture of destination image.
Hi, I really like your work. I wanted to know whether there is any option to transfer the facial expression of the source along with the face to the destination. In simple words, I want a complete face projection to the destination frame, so that my facial expressions are also swapped along with my face. Kindly let me know.
closed
2022-08-12T09:24:11Z
2022-08-12T09:32:00Z
https://github.com/deepfakes/faceswap/issues/1255
[]
rohaantahir
2
junyanz/pytorch-CycleGAN-and-pix2pix
computer-vision
1,313
Blurred results using cycleGAN on depthmaps
Hello everybody,

I want to use CycleGAN on depthmaps for domain adaptation. We are currently training the CycleGAN with two data sets of 2000 images each, but our results are blurred.

![results](https://user-images.githubusercontent.com/89905275/132120069-f8252970-b34e-4e74-b1a9-90a47e79a452.png)

Training configuration:

- image_size = 128
- input_channels = 1
- output_channels = 1
- norm = instance_norm
- lr = 0.0002
- beta_1 = 0.5
- cycle_loss_weight = 10
- batch_size = 1
- 9 res_blocks

### Model

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()
        self.main = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.InstanceNorm2d(128),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1),
            nn.InstanceNorm2d(256),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(256, 512, 4, padding=1),
            nn.InstanceNorm2d(512),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(512, 1, 4, padding=1),
        )

    def forward(self, x):
        x = self.main(x)
        x = F.avg_pool2d(x, x.size()[2:])
        x = torch.flatten(x, 1)
        return x


class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self.main = nn.Sequential(
            # Initial convolution block
            nn.ReflectionPad2d(3),
            nn.Conv2d(1, 64, 7),
            nn.InstanceNorm2d(64),
            nn.ReLU(inplace=True),
            # Downsampling
            nn.Conv2d(64, 128, 3, stride=2, padding=1),
            nn.InstanceNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1),
            nn.InstanceNorm2d(256),
            nn.ReLU(inplace=True),
            # Residual blocks
            ResidualBlock(256),
            ResidualBlock(256),
            ResidualBlock(256),
            ResidualBlock(256),
            ResidualBlock(256),
            ResidualBlock(256),
            ResidualBlock(256),
            ResidualBlock(256),
            ResidualBlock(256),
            # Upsampling
            nn.ConvTranspose2d(256, 128, 3, stride=2, padding=1, output_padding=1),
            nn.InstanceNorm2d(128),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1),
            nn.InstanceNorm2d(64),
            nn.ReLU(inplace=True),
            # Output layer
            nn.ReflectionPad2d(3),
            nn.Conv2d(64, 1, 7),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.main(x)


class ResidualBlock(nn.Module):
    def __init__(self, in_channels):
        super(ResidualBlock, self).__init__()
        self.res = nn.Sequential(
            nn.ReflectionPad2d(1),
            nn.Conv2d(in_channels, in_channels, 3),
            nn.InstanceNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1),
            nn.Conv2d(in_channels, in_channels, 3),
            nn.InstanceNorm2d(in_channels),
        )

    def forward(self, x):
        return x + self.res(x)
```

### Train

```python
import argparse
import itertools
import os
import random

import torch.backends.cudnn as cudnn
import torch.utils.data
import torchvision.transforms as transforms
import torchvision.utils as vutils
from PIL import Image
from tqdm import tqdm
import matplotlib.pylab as plt

from cyclegan_pytorch import DecayLR
from cyclegan_pytorch import Discriminator
from cyclegan_pytorch import Generator
from cyclegan_pytorch import ImageDataset
from cyclegan_pytorch import ReplayBuffer
from cyclegan_pytorch import weights_init

parser = argparse.ArgumentParser(
    description="PyTorch implements `Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks`")
parser.add_argument("--dataroot", type=str, default="./data",
                    help="path to datasets. (default:./data)")
parser.add_argument("--dataset", type=str, default="depthmaps_white",
                    help="dataset name. (default:`horse2zebra`)"
                         "Option: [apple2orange, summer2winter_yosemite, horse2zebra, monet2photo, "
                         "cezanne2photo, ukiyoe2photo, vangogh2photo, maps, facades, selfie2anime, "
                         "iphone2dslr_flower, ae_photos, ]")
parser.add_argument("--epochs", default=200, type=int, metavar="N",
                    help="number of total epochs to run")
parser.add_argument("--decay_epochs", type=int, default=100,
                    help="epoch to start linearly decaying the learning rate to 0. (default:100)")
parser.add_argument("-b", "--batch-size", default=1, type=int, metavar="N",
                    help="mini-batch size (default: 1), this is the total "
                         "batch size of all GPUs on the current node when "
                         "using Data Parallel or Distributed Data Parallel")
parser.add_argument("--lr", type=float, default=0.0002,
                    help="learning rate. (default:0.0002)")
parser.add_argument("-p", "--print-freq", default=100, type=int, metavar="N",
                    help="print frequency. (default:100)")
parser.add_argument("--cuda", action="store_true", help="Enables cuda")
parser.add_argument("--netG_A2B", default="", help="path to netG_A2B (to continue training)")
parser.add_argument("--netG_B2A", default="", help="path to netG_B2A (to continue training)")
parser.add_argument("--netD_A", default="", help="path to netD_A (to continue training)")
parser.add_argument("--netD_B", default="", help="path to netD_B (to continue training)")
parser.add_argument("--image-size", type=int, default=128,
                    help="size of the data crop (squared assumed). (default:128)")
parser.add_argument("--outf", default="./outputs",
                    help="folder to output images. (default:`./outputs`).")
parser.add_argument("--manualSeed", type=int,
                    help="Seed for initializing training. (default:none)")
args = parser.parse_args()
print(args)

try:
    os.makedirs(args.outf)
except OSError:
    pass

try:
    os.makedirs("weights")
except OSError:
    pass

if args.manualSeed is None:
    args.manualSeed = random.randint(1, 10000)
print("Random Seed: ", args.manualSeed)
random.seed(args.manualSeed)
torch.manual_seed(args.manualSeed)

cudnn.benchmark = True

if torch.cuda.is_available() and not args.cuda:
    print("WARNING: You have a CUDA device, so you should probably run with --cuda")

# Dataset
dataset = ImageDataset(root=os.path.join(args.dataroot, args.dataset),
                       transform=transforms.Compose([
                           transforms.Resize(int(args.image_size * 1.12), Image.BICUBIC),
                           transforms.RandomCrop(args.image_size),
                           # transforms.RandomHorizontalFlip(),
                           transforms.ToTensor()]),
                       # transforms.Grayscale(1),
                       # transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]),
                       # transforms.Normalize((0.5), (0.5))])
                       unaligned=True)

dataloader = torch.utils.data.DataLoader(dataset, batch_size=args.batch_size,
                                         shuffle=True, pin_memory=True)

try:
    os.makedirs(os.path.join(args.outf, args.dataset, "Sim"))
    os.makedirs(os.path.join(args.outf, args.dataset, "Real"))
except OSError:
    pass

try:
    os.makedirs(os.path.join("weights", args.dataset))
except OSError:
    pass

device = torch.device("cuda:0" if args.cuda else "cpu")

# create model
netG_A2B = Generator().to(device)
netG_B2A = Generator().to(device)
netD_A = Discriminator().to(device)
netD_B = Discriminator().to(device)

netG_A2B.apply(weights_init)
netG_B2A.apply(weights_init)
netD_A.apply(weights_init)
netD_B.apply(weights_init)

if args.netG_A2B != "":
    netG_A2B.load_state_dict(torch.load(args.netG_A2B))
if args.netG_B2A != "":
    netG_B2A.load_state_dict(torch.load(args.netG_B2A))
if args.netD_A != "":
    netD_A.load_state_dict(torch.load(args.netD_A))
if args.netD_B != "":
    netD_B.load_state_dict(torch.load(args.netD_B))

# define loss function (adversarial_loss) and optimizer
cycle_loss = torch.nn.L1Loss().to(device)
identity_loss = torch.nn.L1Loss().to(device)
adversarial_loss = torch.nn.MSELoss().to(device)

# Optimizers
optimizer_G = torch.optim.Adam(itertools.chain(netG_A2B.parameters(), netG_B2A.parameters()),
                               lr=args.lr, betas=(0.5, 0.999))
optimizer_D_A = torch.optim.Adam(netD_A.parameters(), lr=args.lr, betas=(0.5, 0.999))
optimizer_D_B = torch.optim.Adam(netD_B.parameters(), lr=args.lr, betas=(0.5, 0.999))

lr_lambda = DecayLR(args.epochs, 0, args.decay_epochs).step
lr_scheduler_G = torch.optim.lr_scheduler.LambdaLR(optimizer_G, lr_lambda=lr_lambda)
lr_scheduler_D_A = torch.optim.lr_scheduler.LambdaLR(optimizer_D_A, lr_lambda=lr_lambda)
lr_scheduler_D_B = torch.optim.lr_scheduler.LambdaLR(optimizer_D_B, lr_lambda=lr_lambda)

g_losses = []
d_losses = []
identity_losses = []
gan_losses = []
cycle_losses = []

fake_A_buffer = ReplayBuffer()
fake_B_buffer = ReplayBuffer()

for epoch in range(0, args.epochs):
    progress_bar = tqdm(enumerate(dataloader), total=len(dataloader))
    for i, data in progress_bar:
        # get batch size data
        real_image_A = data["Sim"].to(device)
        real_image_B = data["Real"].to(device)
        batch_size = real_image_A.size(0)

        # real data label is 1, fake data label is 0.
        real_label = torch.full((batch_size, 1), 1, device=device, dtype=torch.float32)
        fake_label = torch.full((batch_size, 1), 0, device=device, dtype=torch.float32)

        ##############################################
        # (1) Update G network: Generators A2B and B2A
        ##############################################

        # Set G_A and G_B's gradients to zero
        optimizer_G.zero_grad()

        # Identity loss
        # G_B2A(A) should equal A if real A is fed
        identity_image_A = netG_B2A(real_image_A)
        loss_identity_A = identity_loss(identity_image_A, real_image_A) * 1.0
        # G_A2B(B) should equal B if real B is fed
        identity_image_B = netG_A2B(real_image_B)
        loss_identity_B = identity_loss(identity_image_B, real_image_B) * 1.0

        # GAN loss
        # GAN loss D_A(G_A(A))
        fake_image_A = netG_B2A(real_image_B)
        fake_output_A = netD_A(fake_image_A)
        loss_GAN_B2A = adversarial_loss(fake_output_A, real_label)
        # GAN loss D_B(G_B(B))
        fake_image_B = netG_A2B(real_image_A)
        fake_output_B = netD_B(fake_image_B)
        loss_GAN_A2B = adversarial_loss(fake_output_B, real_label)

        # Cycle loss
        recovered_image_A = netG_B2A(fake_image_B)
        loss_cycle_ABA = cycle_loss(recovered_image_A, real_image_A) * 10.0
        recovered_image_B = netG_A2B(fake_image_A)
        loss_cycle_BAB = cycle_loss(recovered_image_B, real_image_B) * 10.0

        # Combined loss and calculate gradients
        errG = loss_GAN_A2B + loss_GAN_B2A + loss_cycle_ABA + loss_cycle_BAB
        # errG = loss_identity_A + loss_identity_B + loss_GAN_A2B + loss_GAN_B2A + loss_cycle_ABA + loss_cycle_BAB

        # Calculate gradients for G_A and G_B
        errG.backward()
        # Update G_A and G_B's weights
        optimizer_G.step()

        ##############################################
        # (2) Update D network: Discriminator A
        ##############################################

        # Set D_A gradients to zero
        optimizer_D_A.zero_grad()

        # Real A image loss
        real_output_A = netD_A(real_image_A)
        errD_real_A = adversarial_loss(real_output_A, real_label)

        # Fake A image loss
        fake_image_A = fake_A_buffer.push_and_pop(fake_image_A)
        fake_output_A = netD_A(fake_image_A.detach())
        errD_fake_A = adversarial_loss(fake_output_A, fake_label)

        # Combined loss and calculate gradients
        errD_A = (errD_real_A + errD_fake_A) / 2

        # Calculate gradients for D_A
        errD_A.backward()
        # Update D_A weights
        optimizer_D_A.step()

        ##############################################
        # (3) Update D network: Discriminator B
        ##############################################

        # Set D_B gradients to zero
        optimizer_D_B.zero_grad()

        # Real B image loss
        real_output_B = netD_B(real_image_B)
        errD_real_B = adversarial_loss(real_output_B, real_label)

        # Fake B image loss
        fake_image_B = fake_B_buffer.push_and_pop(fake_image_B)
        fake_output_B = netD_B(fake_image_B.detach())
        errD_fake_B = adversarial_loss(fake_output_B, fake_label)

        # Combined loss and calculate gradients
        errD_B = (errD_real_B + errD_fake_B) / 2

        # Calculate gradients for D_B
        errD_B.backward()
        # Update D_B weights
        optimizer_D_B.step()

        progress_bar.set_description(
            f"[{epoch}/{args.epochs - 1}][{i}/{len(dataloader) - 1}] "
            f"Loss_D: {(errD_A + errD_B).item():.4f} "
            f"Loss_G: {errG.item():.4f} "
            f"Loss_G_identity: {(loss_identity_A + loss_identity_B).item():.4f} "
            f"loss_G_GAN: {(loss_GAN_A2B + loss_GAN_B2A).item():.4f} "
            f"loss_G_cycle: {(loss_cycle_ABA + loss_cycle_BAB).item():.4f}")

        if i % args.print_freq == 0:
            vutils.save_image(real_image_A, f"{args.outf}/{args.dataset}/Sim/real_samples.png", normalize=True)
            vutils.save_image(real_image_B, f"{args.outf}/{args.dataset}/Real/real_samples.png", normalize=True)
            # print('Real Image A Shape:', real_image_A.shape)
            # print('Real Image B Shape:', real_image_B.shape)

            fake_image_A = 0.5 * (netG_B2A(real_image_B).data + 1.0)
            fake_image_B = 0.5 * (netG_A2B(real_image_A).data + 1.0)
            # print('Fake Image A Shape:', fake_image_A.shape)
            # print('Fake Image B Shape:', fake_image_B.shape)

            vutils.save_image(fake_image_A.detach(), f"{args.outf}/{args.dataset}/Sim/fake_samples.png", normalize=True)
            vutils.save_image(fake_image_B.detach(), f"{args.outf}/{args.dataset}/Real/fake_samples.png", normalize=True)
```
transformed_image_A = 0.5 * (netG_B2A(fake_image_B).data + 1.0) transformed_image_B = 0.5 * (netG_A2B(fake_image_A).data + 1.0) #print('Transformed Image A Shape:',transformed_image_A.shape) #print('Transformed Image B Shape:',transformed_image_B.shape) vutils.save_image(transformed_image_A.detach(), f"{args.outf}/{args.dataset}/Sim/transformed_samples.png", normalize=True) vutils.save_image(transformed_image_B.detach(), f"{args.outf}/{args.dataset}/Real/transformed_samples.png", normalize=True) # merge results new_img = Image.new('RGB', (args.image_size*3,args.image_size*2),'white') real_sample_A = Image.open(f"{args.outf}/{args.dataset}/Sim/real_samples.png") fake_sample_B = Image.open(f"{args.outf}/{args.dataset}/Real/fake_samples.png") transformed_image_A = Image.open(f"{args.outf}/{args.dataset}/Sim/transformed_samples.png") transformed_image_B = Image.open(f"{args.outf}/{args.dataset}/Real/transformed_samples.png") real_sample_B = Image.open(f"{args.outf}/{args.dataset}/Real/real_samples.png") fake_sample_A = Image.open(f"{args.outf}/{args.dataset}/Sim/fake_samples.png") new_img.paste(real_sample_A,(0,0,args.image_size,args.image_size)) new_img.paste(fake_sample_B,(args.image_size,0,args.image_size*2,args.image_size)) new_img.paste(transformed_image_A,(args.image_size*2,0,args.image_size*3,args.image_size)) new_img.paste(real_sample_B,(0,args.image_size,args.image_size,args.image_size*2)) new_img.paste(fake_sample_A,(args.image_size,args.image_size,args.image_size*2,args.image_size*2)) new_img.paste(transformed_image_B,(args.image_size*2,args.image_size,args.image_size*3,args.image_size*2)) new_img.save(f"{args.outf}/{args.dataset}/results_at_epoch_{epoch}_idx_{i}.png") # do check pointing torch.save(netG_A2B.state_dict(), f"weights/{args.dataset}/netG_A2B_epoch_{epoch}.pth") torch.save(netG_B2A.state_dict(), f"weights/{args.dataset}/netG_B2A_epoch_{epoch}.pth") torch.save(netD_A.state_dict(), f"weights/{args.dataset}/netD_A_epoch_{epoch}.pth") 
torch.save(netD_B.state_dict(), f"weights/{args.dataset}/netD_B_epoch_{epoch}.pth") # Update learning rates lr_scheduler_G.step() lr_scheduler_D_A.step() lr_scheduler_D_B.step() # save last check pointing torch.save(netG_A2B.state_dict(), f"weights/{args.dataset}/netG_A2B.pth") torch.save(netG_B2A.state_dict(), f"weights/{args.dataset}/netG_B2A.pth") torch.save(netD_A.state_dict(), f"weights/{args.dataset}/netD_A.pth") torch.save(netD_B.state_dict(), f"weights/{args.dataset}/netD_B.pth")
open
2021-09-05T08:12:41Z
2021-10-27T20:17:57Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1313
[]
ghost
1
jupyterhub/repo2docker
jupyter
476
Julia support does not require `environment.yml`
The documentation for how to use Julia states that you need a `REQUIRE` file as well as an `environment.yml`. This is no longer correct. Should be fixed by removing the comment about the `environment.yml`. https://repo2docker.readthedocs.io/en/latest/config_files.html#require-install-a-julia-environment This got changed in #103 but docs didn't get updated.
closed
2018-11-24T15:34:19Z
2018-11-27T07:40:56Z
https://github.com/jupyterhub/repo2docker/issues/476
[]
betatim
0
Netflix/metaflow
data-science
1,513
Support explicit job queues for @kubernetes
Tracking integration work with Kueue, Volcano, and YuniKorn.

cc @rparundekar
open
2023-08-29T16:34:20Z
2023-08-29T16:34:20Z
https://github.com/Netflix/metaflow/issues/1513
[]
savingoyal
0
sqlalchemy/sqlalchemy
sqlalchemy
12,042
Support adding where condition to UniqueConstraint
### Describe the use case

I want to add a where condition to UniqueConstraint so that the constraint can be applied to a selected set of records. I know this can be done using Index, but I want to defer this constraint until the transaction is completed. I cannot defer the Index, nor can I add a where condition to UniqueConstraint. Django seems to support adding a where condition to UniqueConstraint. I would really appreciate having this feature in SQLAlchemy since it solves a common problem in an efficient way.

### Databases / Backends / Drivers targeted

I'm looking at this for PostgreSQL.

### Example Use

In Django, this is supported like this:

```
models.UniqueConstraint(
    fields=["email", "tenant"],
    condition=Q(deleted_at=None),
    name="users_email_tenant_id_deleted_at_null_uniq",
    deferrable=Deferrable.DEFERRED,
),
```

Here, I'm applying the constraint to all the records which are not deleted, and also deferring the check so that I can do bulk updates.

### Additional context

_No response_
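For reference, the conditional-uniqueness half of this (though not the deferrable half) can already be expressed in SQLAlchemy today with a partial unique `Index` via `postgresql_where`. A sketch, with illustrative table and column names:

```python
from sqlalchemy import Column, DateTime, Index, Integer, MetaData, String, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateIndex

metadata = MetaData()
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("email", String),
    Column("tenant_id", Integer),
    Column("deleted_at", DateTime),
)

# Partial unique index: uniqueness is enforced only for rows that are not
# soft-deleted. Note this is an index, so it cannot be DEFERRABLE.
uq = Index(
    "users_email_tenant_id_deleted_at_null_uniq",
    users.c.email,
    users.c.tenant_id,
    unique=True,
    postgresql_where=users.c.deleted_at.is_(None),
)

ddl = str(CreateIndex(uq).compile(dialect=postgresql.dialect()))
print(ddl)
```

This emits `CREATE UNIQUE INDEX ... WHERE deleted_at IS NULL`, matching the Django example except for the deferrable behavior requested here.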
closed
2024-10-29T17:24:54Z
2024-10-29T17:36:49Z
https://github.com/sqlalchemy/sqlalchemy/issues/12042
[ "postgresql" ]
cankush625
1
wandb/wandb
tensorflow
9,425
[Bug]: Logging does not update progress bar
### Describe the bug

The logs interface does not update `tqdm` progress bars.

### Reproducing

With the following code,

```python
import wandb
import tqdm
import time

run = wandb.init(project="tqdm-test")
for i in tqdm.tqdm(range(1000), ncols=88, ascii=True):
    run.log({"acc": i / 1000})
    time.sleep(10)
run.finish()
```

the progress bar looks like

```
1%|6 | 13/1000 [02:10<2:44:31, 10.00s/it]
```

But the logs tab in the web interface always shows

```
1 0%| | 0/1000 [00:00<?, ?it/s]
```

> See https://wandb.ai/francois-rozet/tqdm-test/runs/rlqhqf9v/logs
open
2025-02-06T14:35:20Z
2025-02-26T16:12:26Z
https://github.com/wandb/wandb/issues/9425
[ "ty:bug", "a:app" ]
francois-rozet
10
modelscope/modelscope
nlp
438
img2video
Does ModelScope do img2video, similar to Runway Gen-2?
closed
2023-08-03T20:26:02Z
2023-09-18T11:24:09Z
https://github.com/modelscope/modelscope/issues/438
[]
almfahd41
3
iperov/DeepFaceLab
deep-learning
5,734
Eye and mouth priority bad after 144k
Expected behavior: eyes and mouth conform to the src image.
Actual: the generated face keeps reverting to a generalized face, often not even close to the direction of the src. It will start looking like what it is supposed to, but then revert. I am at 144k iterations and my faces are wrong.

Expected: src & dst images to become defined and clear to see in training.
Actual: after 144k iterations, the src & dst models are still very blurry and occasionally warp to odd dimensions that carry over to the combined image.

Dims are all default values, mask training on, eye and mouth priority on, warp on.
open
2023-10-07T13:04:04Z
2023-10-07T13:10:48Z
https://github.com/iperov/DeepFaceLab/issues/5734
[]
D-reads
0
torchbox/wagtail-grapple
graphql
69
Confusing docstring in PageInterface
The docstring of `resolve_children()` reads: "Resolves a list of live children of this page with `show_in_menus` set." (I guess [this](https://docs.wagtail.io/en/stable/reference/pages/queryset_reference.html#examples) is the source of the docstring). The function, however, does not filter live child pages, nor does it filter on `show_in_menus`. I wouldn't mind making a PR to resolve this issue, but I don't know if this function should be returning live pages only (filtering for `show_in_menus` does not make sense here IMO). Is the docstring or the query invalid?

**Useful links:**

- https://github.com/torchbox/wagtail-grapple/blob/master/grapple/types/pages.py#L65-L70
closed
2020-04-30T14:22:58Z
2020-08-10T21:25:40Z
https://github.com/torchbox/wagtail-grapple/issues/69
[]
leewesleyv
1
tatsu-lab/stanford_alpaca
deep-learning
128
Lightweight, personalized ChatGPT
open
2023-03-23T02:42:45Z
2023-03-23T02:42:45Z
https://github.com/tatsu-lab/stanford_alpaca/issues/128
[]
vampirelzy
0
httpie/cli
api
1,036
Following Link-Header / MultiPage-Requests (RFC5988)
Please implement the RFC 5988 Link header: https://tools.ietf.org/html/rfc5988#section-5

Also see: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Link

I often need this functionality. And yes, I am aware of https://github.com/httpie/httpie/issues/323.
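A sketch of what such support could build on: requests, which HTTPie uses under the hood, already ships a parser for this header format in `requests.utils.parse_header_links`. The URLs below are made up for illustration:

```python
import requests.utils

# A Link header in the shape RFC 5988 describes (example values):
link_header = ('<https://api.example.com/items?page=2>; rel="next", '
               '<https://api.example.com/items?page=5>; rel="last"')

# Parse it into {rel: url} so a client could follow "next" for pagination.
links = {d.get("rel"): d.get("url")
         for d in requests.utils.parse_header_links(link_header)}
print(links["next"])  # → https://api.example.com/items?page=2
```

So the parsing side is essentially free; the feature request is really about the follow-the-`next`-relation loop on top of it.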
open
2021-02-18T08:52:42Z
2021-12-28T10:38:33Z
https://github.com/httpie/cli/issues/1036
[ "enhancement", "needs product design", "deferred" ]
blurayne
2
ipython/ipython
jupyter
13,913
Changing `c.TerminalInteractiveShell.autosuggestions_provider` in the configuration file will cause IPython shell to fail to start
IPython version: 8.9.0, Python version: CPython 3.10.9

The default value of the configuration item `c.TerminalInteractiveShell.autosuggestions_provider` is `'NavigableAutoSuggestFromHistory'`, and the documentation says:

> \## Specifies from which source automatic suggestions are provided. Can be set to
> \# ``'NavigableAutoSuggestFromHistory'`` (:kbd:`up` and :kbd:`down` swap
> \# suggestions), ``'AutoSuggestFromHistory'``, or ``None`` to disable automatic
> \# suggestions. Default is ``'NavigableAutoSuggestFromHistory'``.
> \# Default: 'NavigableAutoSuggestFromHistory'

But setting it to `'AutoSuggestFromHistory'` or `None` causes the IPython shell to fail to start (extra-detailed tracebacks enabled):

```
$ > ipython
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python3.10/pathlib.py", line 1305, in is_dir
    return S_ISDIR(self.stat().st_mode)
AttributeError: 'str' object has no attribute 'stat'

Original exception was:
Traceback (most recent call last):
  File "/usr/bin/ipython", line 8, in <module>
    sys.exit(start_ipython())
  File "/usr/lib/python3.10/site-packages/IPython/__init__.py", line 124, in start_ipython
    return launch_new_instance(argv=argv, **kwargs)
  File "/usr/lib/python3.10/site-packages/traitlets/config/application.py", line 1040, in launch_instance
    app.initialize(argv)
  File "/usr/lib/python3.10/site-packages/traitlets/config/application.py", line 113, in inner
    return method(app, *args, **kwargs)
  File "/usr/lib/python3.10/site-packages/IPython/terminal/ipapp.py", line 279, in initialize
    self.init_shell()
  File "/usr/lib/python3.10/site-packages/IPython/terminal/ipapp.py", line 293, in init_shell
    self.shell = self.interactive_shell_class.instance(parent=self,
  File "/usr/lib/python3.10/site-packages/traitlets/config/configurable.py", line 551, in instance
    inst = cls(*args, **kwargs)
  File "/usr/lib/python3.10/site-packages/IPython/terminal/interactiveshell.py", line 687, in __init__
    super(TerminalInteractiveShell, self).__init__(*args, **kwargs)
  File "/usr/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 560, in __init__
    super(InteractiveShell, self).__init__(**kwargs)
  File "/usr/lib/python3.10/site-packages/traitlets/config/configurable.py", line 109, in __init__
    self.config = config
  File "/usr/lib/python3.10/site-packages/traitlets/traitlets.py", line 732, in __set__
    self.set(obj, value)
  File "/usr/lib/python3.10/site-packages/traitlets/traitlets.py", line 721, in set
    obj._notify_trait(self.name, old_value, new_value)
  File "/usr/lib/python3.10/site-packages/traitlets/traitlets.py", line 1505, in _notify_trait
    self.notify_change(
  File "/usr/lib/python3.10/site-packages/traitlets/traitlets.py", line 1517, in notify_change
    return self._notify_observers(change)
  File "/usr/lib/python3.10/site-packages/traitlets/traitlets.py", line 1564, in _notify_observers
    c(event)
  File "/usr/lib/python3.10/site-packages/traitlets/traitlets.py", line 1146, in compatible_observer
    return func(self, change)
  File "/usr/lib/python3.10/site-packages/traitlets/config/configurable.py", line 218, in _config_changed
    self._load_config(change.new, traits=traits, section_names=section_names)
  File "/usr/lib/python3.10/site-packages/traitlets/config/configurable.py", line 167, in _load_config
    with self.hold_trait_notifications():
  File "/usr/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/usr/lib/python3.10/site-packages/traitlets/traitlets.py", line 1502, in hold_trait_notifications
    self.notify_change(change)
  File "/usr/lib/python3.10/site-packages/traitlets/traitlets.py", line 1517, in notify_change
    return self._notify_observers(change)
  File "/usr/lib/python3.10/site-packages/traitlets/traitlets.py", line 1564, in _notify_observers
    c(event)
  File "/usr/lib/python3.10/site-packages/IPython/terminal/interactiveshell.py", line 417, in _autosuggestions_provider_changed
    self._set_autosuggestions(provider)
  File "/usr/lib/python3.10/site-packages/IPython/terminal/interactiveshell.py", line 399, in _set_autosuggestions
    if self.auto_suggest and isinstance(
AttributeError: 'TerminalInteractiveShell' object has no attribute 'auto_suggest'
```
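For anyone trying to reproduce this quickly, a minimal profile config (e.g. in `~/.ipython/profile_default/ipython_config.py`; `get_config()` is injected by IPython's config loader, so this fragment only runs in that context) that triggers the startup failure:

```python
# ipython_config.py: per the report, either of these settings triggers the crash
c = get_config()
c.TerminalInteractiveShell.autosuggestions_provider = "AutoSuggestFromHistory"
# or:
# c.TerminalInteractiveShell.autosuggestions_provider = None
```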
closed
2023-01-28T16:42:07Z
2023-01-30T13:02:04Z
https://github.com/ipython/ipython/issues/13913
[ "bug", "autosuggestions" ]
nukemiko
1
awesto/django-shop
django
236
How to integrate it to django-cms?
Sorry to interrupt you. I searched the web but didn't find an answer to this question: how can I make it an extension of django CMS? Thanks in advance!
closed
2013-06-26T14:05:39Z
2016-02-02T13:58:57Z
https://github.com/awesto/django-shop/issues/236
[]
educate
4
NullArray/AutoSploit
automation
1,067
What's wrong?
```
[+] attempting to load API keys
[+] Shodan API token loaded from /root/Desktop/AutoSploit-master/etc/tokens/shodan.key
[+] Censys API token loaded from /root/Desktop/AutoSploit-master/etc/tokens/censys.key
[-] no arguments have been parsed at run time, dropping into terminal session.
to get help type `help`
to quit type `exit/quit`
to get help on a specific command type `command help`
```
closed
2019-05-01T14:46:45Z
2019-05-02T18:12:55Z
https://github.com/NullArray/AutoSploit/issues/1067
[]
lmx5200410
4
gradio-app/gradio
machine-learning
10,663
Search functionality doesn't work for gr.Dataframe
### Describe the bug

Notice another bug for gr.Dataframe, i.e., the search & filter functionality doesn't work for text cells.

<img width="1920" alt="Image" src="https://github.com/user-attachments/assets/8f8f180d-451f-4b1c-842d-525cefee8f37" />
<img width="1920" alt="Image" src="https://github.com/user-attachments/assets/89a99660-7b4a-4af3-8cb4-f3e105c37456" />
<img width="1920" alt="Image" src="https://github.com/user-attachments/assets/c3292e86-e5d1-4c7b-8b32-f2ee4f0d66eb" />

### Have you searched existing issues? 🔎

- [x] I have searched and found no existing issues

### Reproduction

```python
import pandas as pd
import gradio as gr

# Creating a sample dataframe
def run():
    df = pd.DataFrame({
        "A": ["Apparel & Accessories", "Home & Garden", "Health & Beauty", "Cameras & Optics", "Apparel & Accessories"],
        "B": [6, 2, 54, 3, 2],
        "C": [3, 20, 7, 3, 8],
        "D": [2, 3, 6, 2, 6],
        "E": [-1, 45, 64, 32, 23],
    })
    df = df.style.map(color_num, subset=["E"])
    return df

# Function to apply text color
def color_num(value: float) -> str:
    color = "red" if value >= 0 else "green"
    color_style = "color: {}".format(color)
    return color_style

# Displaying the styled dataframe in Gradio
with gr.Blocks() as demo:
    gr.Textbox("{}".format(gr.__version__))
    a = gr.DataFrame(show_search="search")
    b = gr.Button("run")
    b.click(run, outputs=a)

demo.launch()
```

### Screenshot

_No response_

### Logs

```shell
```

### System Info

```shell
gradio = 5.17.1
```

### Severity

Blocking usage of gradio
closed
2025-02-24T02:54:23Z
2025-03-10T18:14:59Z
https://github.com/gradio-app/gradio/issues/10663
[ "bug", "💾 Dataframe" ]
jamie0725
4
laughingman7743/PyAthena
sqlalchemy
493
Add support for Spark calculations
# Description

Add support to run Spark calculations using any cursor.

# Related docs

1. Implementation in awswrangler - https://github.com/aws/aws-sdk-pandas/blob/main/awswrangler/athena/_spark.py
2. Official AWS docs - https://docs.aws.amazon.com/athena/latest/ug/notebooks-spark.html
3. MR in dbt-athena-community - https://github.com/dbt-athena/dbt-athena/pull/248

# Comments

I am currently working on adding support to run Python models using the dbt-athena-community adapter, and it would be much easier to accomplish if the PyAthena library supported this first. I don't think mock_athena supports these APIs yet, so testing is actually much more difficult than I thought.
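For the shape of the underlying calls: Athena's Spark support goes through the `StartSession` and `StartCalculationExecution` APIs. A sketch of the request payloads a cursor would issue (names and values are illustrative, and nothing here actually calls AWS):

```python
# Hypothetical payloads for the Athena Spark APIs; "spark-workgroup" must be
# a Spark-enabled workgroup, and the code block is whatever PySpark snippet
# the caller wants to run.
start_session_request = {
    "WorkGroup": "spark-workgroup",
    "EngineConfiguration": {"MaxConcurrentDpus": 4},
}

start_calculation_request = {
    "SessionId": "<session-id returned by StartSession>",
    "CodeBlock": "spark.range(10).count()",
}

print(sorted(start_session_request))
```

A cursor implementation would then poll `GetCalculationExecution` until the calculation finishes, similar to how query execution is polled today.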
closed
2023-11-23T17:29:36Z
2024-01-09T05:50:02Z
https://github.com/laughingman7743/PyAthena/issues/493
[]
Avinash-1394
8
msoedov/langcorn
rest-api
20
langchain Agents
Thanks for the great repository. Does langcorn support langchain agents?
open
2023-06-20T10:36:55Z
2023-08-21T15:05:44Z
https://github.com/msoedov/langcorn/issues/20
[]
zlapp
7
SYSTRAN/faster-whisper
deep-learning
781
Problem with audio
When I cut out only 5 minutes of the movie, everything worked correctly (screen #1). The one-hour portion of the movie caused the error (screen #2).

![Безымянный3](https://github.com/SYSTRAN/faster-whisper/assets/166128009/b7dabb5c-7734-42d2-ac2e-4dc5772b7b07)
![Безымянный1](https://github.com/SYSTRAN/faster-whisper/assets/166128009/d8fd6a06-d075-4631-9c54-248001bf97c3)

```
General
Unique ID                      : 80764340276249783376247501835629204513 (0x3CC2A569474575CF3E3F1C6E312F9421)
Complete name                  : C:\Whisper-Faster\55776.mkv
Format                         : Matroska
Format version                 : Version 4
File size                      : 9.92 GiB
Duration                       : 1 h 0 min
Overall bit rate mode          : Variable
Overall bit rate               : 23.7 Mb/s
Encoded date                   : UTC 2024-04-05 09:25:18
Writing application            : mkvmerge v82.0 ('I'm The President') 64-bit
Writing library                : libebml v1.4.5 + libmatroska v1.7.1

Video
ID                             : 1
Format                         : AVC
Format/Info                    : Advanced Video Codec
Format profile                 : High@L4.1
Format settings                : CABAC / 2 Ref Frames
Format settings, CABAC         : Yes
Format settings, Reference fra : 2 frames
Format settings, GOP           : M=1, N=10
Codec ID                       : V_MPEG4/ISO/AVC
Duration                       : 1 h 0 min
Bit rate mode                  : Variable
Bit rate                       : 19.9 Mb/s
Width                          : 1 920 pixels
Height                         : 1 080 pixels
Display aspect ratio           : 16:9
Frame rate mode                : Constant
Frame rate                     : 23.976 (23976/1000) FPS
Original frame rate            : 23.976 (24000/1001) FPS
Color space                    : YUV
Chroma subsampling             : 4:2:0
Bit depth                      : 8 bits
Scan type                      : Progressive
Bits/(Pixel*Frame)             : 0.400
Stream size                    : 8.34 GiB (84%)
Default                        : Yes
Forced                         : No

Audio #1
ID                             : 2
Format                         : DTS XLL
Format/Info                    : Digital Theater Systems
Commercial name                : DTS-HD Master Audio
Codec ID                       : A_DTS
Duration                       : 1 h 0 min
Bit rate mode                  : Variable
Bit rate                       : 1 144 kb/s
Channel(s)                     : 2 channels
Channel layout                 : L R
Sampling rate                  : 48.0 kHz
Frame rate                     : 93.750 FPS (512 SPF)
Bit depth                      : 24 bits
Compression mode               : Lossless
Stream size                    : 491 MiB (5%)
Default                        : Yes
Forced                         : No

Audio #2
ID                             : 3
Format                         : DTS XLL
Format/Info                    : Digital Theater Systems
Commercial name                : DTS-HD Master Audio
Codec ID                       : A_DTS
Duration                       : 1 h 0 min
Bit rate mode                  : Variable
Bit rate                       : 2 496 kb/s
Channel(s)                     : 6 channels
Channel layout                 : C L R Ls Rs LFE
Sampling rate                  : 48.0 kHz
Frame rate                     : 93.750 FPS (512 SPF)
Bit depth                      : 24 bits
Compression mode               : Lossless
Stream size                    : 1.05 GiB (11%)
Default                        : Yes
Forced                         : No

Text #1
ID                             : 4
Format                         : PGS
Muxing mode                    : zlib
Codec ID                       : S_HDMV/PGS
Codec ID/Info                  : Picture based subtitle format used on BDs/HD-DVDs
Duration                       : 59 min 30 s
Bit rate                       : 209 kb/s
Frame rate                     : 1.482 FPS
Count of elements              : 5291
Stream size                    : 89.0 MiB (1%)
Default                        : Yes
Forced                         : No

Text #2
ID                             : 5
Format                         : PGS
Muxing mode                    : zlib
Codec ID                       : S_HDMV/PGS
Codec ID/Info                  : Picture based subtitle format used on BDs/HD-DVDs
Duration                       : 59 min 30 s
Bit rate                       : 35.3 kb/s
Frame rate                     : 0.459 FPS
Count of elements              : 1638
Stream size                    : 15.0 MiB (0%)
Language                       : Chinese
Default                        : Yes
Forced                         : No

Text #3
ID                             : 6
Format                         : PGS
Muxing mode                    : zlib
Codec ID                       : S_HDMV/PGS
Codec ID/Info                  : Picture based subtitle format used on BDs/HD-DVDs
Duration                       : 59 min 30 s
Bit rate                       : 35.7 kb/s
Frame rate                     : 0.459 FPS
Count of elements              : 1638
Stream size                    : 15.2 MiB (0%)
Language                       : Chinese
Default                        : Yes
Forced                         : No
```
closed
2024-04-05T11:41:34Z
2024-11-14T13:32:46Z
https://github.com/SYSTRAN/faster-whisper/issues/781
[]
Herzfrequenz21
5
onnx/onnx
deep-learning
6,301
Split onnx model in architecture and weights
## Context

Among others, "THE MODEL OPENNESS FRAMEWORK: PROMOTING COMPLETENESS AND OPENNESS FOR REPRODUCIBILITY, TRANSPARENCY, AND USABILITY IN ARTIFICIAL INTELLIGENCE" (https://arxiv.org/pdf/2403.13784) is in favor of the separation between architecture and model weights (https://arxiv.org/pdf/2403.13784 p.8 l.10).

As model parameters are in fact data, there is discussion whether data-related licenses such as CDLA-Permissive-2.0 might be better suited for the weights (https://arxiv.org/pdf/2403.13784 p.8 l.3).

## Question

Regardless of this, the question for me is whether there is a converter or workflow with which I can, for example, split a finished ONNX model into a file that only describes the architecture (pbtxt?) and the weights? Similar to Intel OpenVINO (https://docs.openvino.ai/nightly/documentation/openvino-ir-format.html).

If not, what is missing at the moment?
open
2024-08-16T17:39:53Z
2024-08-26T12:13:22Z
https://github.com/onnx/onnx/issues/6301
[]
andife
2
plotly/dash
dash
2,696
[Feature Request] Make `dcc.Loading` toggle-able attribute for enabled/disabled
I have created an application where various `dcc.Store` elements are used to hold user-imported hash tables for images of varying sizes. These images can then be blended together and viewed in a `dcc.Graph`. Based on the size and number of images, I would like to be able to wrap these store elements in `dcc.Loading` components whose visibility/functionality is toggle-able i.e. if the user wishes, he/she can either enable or disable the loading visibility for the stores using a Dash toggle switch. Right now, I don't see an option for this because there is no `visibility` or `enabled` attribute for the Loading component. The result is that the visibility for the stores must be set when the app is executed but cannot be dynamically changed within the session. It would be very beneficial to have an attribute for enabled or visible, similar to how some other dash components have, in order to dynamically toggle the visibility of loadings within the app on the fly.
closed
2023-11-20T18:30:42Z
2024-06-03T17:32:26Z
https://github.com/plotly/dash/issues/2696
[ "feature", "P3" ]
matt-sd-watson
8
tensorflow/tensor2tensor
deep-learning
1,077
How to run t2t-decode with mnist model?
### Description

I have trained an MNIST model following the quick start guide successfully. Now I want to test it with my own test images, but I don't know how to write the t2t-decoder invocation. I used

```
t2t-decoder \
  --data_dir=~/t2t_data \
  --problem=image_mnist \
  --model=shake_shake \
  --hparams_set=shake_shake_quick \
  --output_dir=~/t2t_train/mnist \
  --decode_from_dataset=/home/admin1/t2t/test/t10k-images.idx3-ubyte \
  --decode_to_file=image.label
```

and

```
t2t-decoder \
  --data_dir=~/t2t_data \
  --problem=image_mnist \
  --model=shake_shake \
  --hparams_set=shake_shake_quick \
  --output_dir=~/t2t_train/mnist \
  --decode_from_file=/home/admin1/t2t/test/502.bmp \
  --decode_to_file=image.label
```

but both invocations failed. Could anyone provide the right one to run? Thanks.
open
2018-09-19T07:49:59Z
2018-09-19T07:49:59Z
https://github.com/tensorflow/tensor2tensor/issues/1077
[]
strongwdl
0
521xueweihan/HelloGitHub
python
2,193
Ultra-lightweight front-end framework: Lightue
## Project recommendation

- Project address: https://github.com/smalllong/lightue
- Category: JS
- Planned follow-up updates:
  - Create official, detailed getting-started documentation
  - Continue adding framework-level features such as lifecycle hooks and a companion router
- Project description:
  - Lightue is an ultra-lightweight front-end framework, similar to Vue and React, but under 2 KB (min+br). It lets you build pages the modern way: declare state, and the framework automatically updates the DOM when the state changes; split the page into many function components for reuse; and so on. The learning and usage cost is extremely low, and its object-style template declaration lets you write pages in a jsx-like way without any compilation step.
  - It is not yet mature enough to support large projects, but it can be used for small personal projects, or locally within parts of a large project. Beginners can quickly build their own pages after learning it. The framework source is only about 300 lines, so beginners can also read the source to learn how a framework works.
- Reason for recommendation: lightweight and compact, easy to pick up, innovative design
- Sample code: (optional) length: 1-20 lines

```js
// specify application state
var S = Lightue.useState({
  text: 'Hello world!'
})
// create Lightue instance
var vm = Lightue({
  // for static content just specify statically
  hi: 'Hi world!',
  // for dynamic content that depends on state, use a state function (a function that uses state)
  hello: () => S.text
}, {
  el: '#app' // append to <div id='app'></div>
})
// change the state after 2 seconds and the DOM automatically gets updated
setTimeout(function() {
  S.text = 'Hello again!'
}, 2000)
```

- Other examples
  - https://codepen.io/lxl898/pen/vYyooWK
  - https://smalllong.github.io/spreadsheet/index.html
closed
2022-05-07T12:38:33Z
2022-05-25T09:40:51Z
https://github.com/521xueweihan/HelloGitHub/issues/2193
[ "JavaScript 项目" ]
smalllong
1
pytest-dev/pytest-selenium
pytest
9
Review Opera driver support
Opera has changed substantially since I last tested it, and I suspect that we may need to improve the experience. At very least we should provide a decent guide to getting Opera configured for running tests. We should also see if there's a way to get the Opera tests running on Travis-CI.
closed
2015-06-11T10:38:36Z
2015-06-26T16:10:12Z
https://github.com/pytest-dev/pytest-selenium/issues/9
[]
davehunt
2
errbotio/errbot
automation
736
No error message on plugin configuration exception
When plugin configuration fails (`check_configuration` raises an `errbot.utils.ValidationException`), I still get the message "Plugin configuration done.". There should be an error message to the user instead.

Example:

```
>>> !plugin config exec {'command': 'failure'}
20:42:25 INFO errbot.plugins.ACLS Matching ACL {} against username gbin@localhost for command Plugins:plugin_config
20:42:25 INFO errbot.plugins.ACLS Check if plugin_config is admin only command.
20:42:25 INFO errbot.errBot Processing command 'plugin_config' with parameters 'exec {'command': 'failure'}' from gbin@localhost
20:42:26 INFO errbot.plugin_manager Activating exec with min_err_version = 4.0.0 and max_version = 4.0.99
20:42:26 ERROR errbot.plugin_manager Something is wrong with the configuration of the plugin exec
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/errbot/plugin_manager.py", line 277, in activate_plugin_with_version_check
    obj.check_configuration(config)
  File "/home/cweiske/Dev/tools/xmppexecbot/exec.py", line 44, in check_configuration
    self.executable_exists(configuration['command'])
  File "/home/cweiske/Dev/tools/xmppexecbot/exec.py", line 31, in executable_exists
    raise ValidationException('Command not in PATH')
ValidationException: Command not in PATH
20:42:26 ERROR errbot.plugin_manager Error loading exec
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/errbot/plugin_manager.py", line 452, in activate_plugin
    self.activate_plugin_with_version_check(name, self.get_plugin_configuration(name))
  File "/usr/local/lib/python2.7/dist-packages/errbot/plugin_manager.py", line 283, in activate_plugin_with_version_check
    raise PluginConfigurationException(unicode(ex))
PluginConfigurationException: Command not in PATH
╌╌[MD ]╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌
Plugin configuration done.
```
closed
2016-05-04T18:50:54Z
2016-05-06T23:48:56Z
https://github.com/errbotio/errbot/issues/736
[ "feature: plugins", "#usability" ]
cweiske
2
iperov/DeepFaceLab
machine-learning
5,536
Question about image loading and SampleLoader.py
I'm trying to modify the RGB data that is loaded into DFL slightly, e.g. add some text overlay etc. I found SampleLoader.py to be the ideal candidate, but I fail to understand where exactly the RGB data is. I see this code here:

```python
def process_data(self, data):
    idx, filename = data
    io.log_info("process_data: " + filename)
    dflimg = DFLIMG.load(Path(filename))

    if dflimg is None or not dflimg.has_data():
        self.log_err(f"FaceSamplesLoader: (unknown) is not a dfl image file.")
        data = None
    else:
        data = (dflimg.get_face_type(),
                dflimg.get_shape(),
                dflimg.get_landmarks(),
                dflimg.get_seg_ie_polys(),
                dflimg.get_xseg_mask_compressed(),
                dflimg.get_eyebrows_expand_mod(),
                dflimg.get_source_filename())

    return idx, data
```

but the return value `data` does not contain the RGB data of the file, or am I missing something? DFLJPG has several fields: `img`, `chunks`, `data`, but none of them are used here... I'm trying to display some text in the trainer, and I thought it would be easiest to modify the images on load for that.
closed
2022-07-15T21:12:43Z
2022-07-17T13:02:13Z
https://github.com/iperov/DeepFaceLab/issues/5536
[]
zqueezy
1
gunthercox/ChatterBot
machine-learning
1,935
django.db.utils.OperationalError: (1071, 'Specified key was too long; max key length is 767 bytes')
I am using Python 3.8 on Ubuntu 18.04. While running `sudo python3 manage.py migrate` I am getting the following error. The other tables are migrated successfully, except django_chatterbot:

```
Applying django_chatterbot.0010_statement_text...Traceback (most recent call last):
  ...
  File "/usr/local/lib/python3.8/site-packages/MySQLdb/connections.py", line 239, in query
    _mysql.connection.query(self, query)
django.db.utils.OperationalError: (1071, 'Specified key was too long; max key length is 767 bytes')
```

[error.txt](https://github.com/gunthercox/ChatterBot/files/4373537/error.txt)
open
2020-03-24T06:33:03Z
2020-03-24T06:33:03Z
https://github.com/gunthercox/ChatterBot/issues/1935
[]
bktbunu
0
Teemu/pytest-sugar
pytest
186
No instant stack trace displayed when test fails and tests are run with Gitlab CI
Env issue
closed
2020-02-25T14:03:31Z
2020-02-27T10:03:30Z
https://github.com/Teemu/pytest-sugar/issues/186
[]
fbarbu15
1
pydata/xarray
numpy
9,478
TreeNode constructor should not modify children in-place
### What is your issue?

The in-place modification issue described in #9196 was fixed for `DataTree`, but actually still exists for its private base class `TreeNode`.

```python
In [1]: from xarray.core.treenode import TreeNode
   ...:
   ...: child = TreeNode()
   ...: root = TreeNode(children={'child': child})
   ...: print(child.parent)
TreeNode(children={'child': TreeNode(children={})})
```

(The issue here being that the constructor has returned a new object `root` but also modified the object `child` in-place.)

We do test `TreeNode` directly (in `test_treenode.py`), which I think is useful as it internally separates tree structure operations from data manipulation operations. But if you copy the [new test](https://github.com/pydata/xarray/blob/781877cb76dd2806dbefded817ce7e012f5a4c2e/xarray/tests/test_datatree.py#L48) added in #9196 to `treenode.py` it will currently fail.

Trying to fix this reveals a rabbit hole of ways in which the `DataTree`/`TreeNode` relationship should be refactored:

1. `DataTree.__init__` should call `TreeNode.__init__` to assign `children`,
2. `TreeNode.__init__` should `.copy()` children (i.e. move the solution to #9196 down into `TreeNode`),
3. This requires `.copy()` to be defined on `TreeNode` rather than on `DataTree`, with only `._copy_node` overridden to also copy `.data`,
4. That requires `._copy_subtree()` to use only methods available to the `TreeNode` class, to iterate over the subtree efficiently,
5. That might require using some methods that are currently only defined on the `NamedNode` class, particularly `.relative_to` (not totally sure about that yet),
6. `.relative_to` requires knowing the node `.name`, which implies perhaps we should merge `TreeNode` and `NamedNode` (which was suggested previously by @shoyer anyway).
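Independent of xarray's actual internals, the copy-on-construct semantics being asked for boil down to something like this toy sketch (not xarray's real implementation):

```python
class Node:
    """Toy node: the constructor copies its children, so the caller's
    objects are never reparented in-place."""

    def __init__(self, children=None):
        self.parent = None
        self.children = {}
        for name, child in (children or {}).items():
            child = child.copy()  # copy, so the caller's node keeps parent=None
            child.parent = self
            self.children[name] = child

    def copy(self):
        # Children are re-copied recursively by __init__.
        return Node(self.children)

child = Node()
root = Node(children={"child": child})
print(child.parent)                            # → None (caller's node untouched)
print(root.children["child"].parent is root)   # → True
```

The failing behavior in the report is exactly the absence of that `.copy()` call in `TreeNode.__init__`.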
closed
2024-09-11T14:41:53Z
2024-09-12T19:13:10Z
https://github.com/pydata/xarray/issues/9478
[ "topic-internals", "topic-DataTree" ]
TomNicholas
0
koxudaxi/fastapi-code-generator
pydantic
251
Illegal FastAPI constructor generated (AttributeError: 'str' object has no attribute 'get': server_data.get("url"))
Using default main template: ```jinja2 app = FastAPI( {% if info %} {% for key,value in info.items() %} {{ key }} = "{{ value }}", {% endfor %} {% endif %} ) ``` Generates invalid `FastAPI` constructor when servers are used: ```python app = FastAPI( title="App title", description="App desc", version="0.1", servers="[{'url': 'http://api.example.com/v1', 'description': 'App desc'}, {'url': 'http://staging-api.example.com', 'description': 'App desc'}]", ) ``` (note the `servers` key value is actually a string). Full stacktrace: ``` File "/CODE/repo/openapi/main.py", line 23, in <module> app = FastAPI( File "/CODE/.repo-venv/lib/python3.10/site-packages/fastapi/applications.py", line 146, in __init__ self.setup() File "/CODE/.repo-venv/lib/python3.10/site-packages/fastapi/applications.py", line 216, in setup server_urls = {url for url in urls if url} File "/CODE/.repo-venv/lib/python3.10/site-packages/fastapi/applications.py", line 216, in <setcomp> server_urls = {url for url in urls if url} File "/CODE/.repo-venv/lib/python3.10/site-packages/fastapi/applications.py", line 215, in <genexpr> urls = (server_data.get("url") for server_data in self.servers) AttributeError: 'str' object has no attribute 'get' ``` Removing the `"` from the template in `{{ key }} = "{{ value }}"` does not help as it breaks the scalar keys. What do help is removing `servers` from OpenApi spec. Version 0.3.5.
closed
2022-05-17T11:17:20Z
2023-08-16T11:27:59Z
https://github.com/koxudaxi/fastapi-code-generator/issues/251
[]
olivergondza
3
graphdeco-inria/gaussian-splatting
computer-vision
464
SIBR_gaussianViewer got crash
I got a crash when using this viewer. Can anybody help, please? Thank you so much ![image](https://github.com/graphdeco-inria/gaussian-splatting/assets/68176850/b4b85485-9b25-4081-bd44-19e6e8c63aef)
closed
2023-11-12T16:51:36Z
2023-11-13T17:52:41Z
https://github.com/graphdeco-inria/gaussian-splatting/issues/464
[]
ryudokid
4
ageitgey/face_recognition
machine-learning
1,483
New release?
I think it might be time for a new release.
open
2023-03-28T17:30:09Z
2023-03-28T17:30:09Z
https://github.com/ageitgey/face_recognition/issues/1483
[]
MatthijsBurgh
0
marimo-team/marimo
data-science
3,688
marimo.chat: ChatModelConfig maxTokens
### Documentation is - [ ] Missing - [ ] Outdated - [x] Confusing - [ ] Not sure? ### Explain in Detail In https://docs.marimo.io/api/inputs/chat/#supported-model-providers the parameter "max_tokens" of the class ChatModelConfig isn't working with the OpenAI LLMs. Obviously, there is a misspelling. If I'm using the parameter "maxTokens" then everything works correctly. ### Your Suggestion for Changes Either the implementation is changed so that "max_tokens" is accepted. I would prefer this option. Or the documentation is changed.
closed
2025-02-04T20:39:06Z
2025-02-05T01:08:18Z
https://github.com/marimo-team/marimo/issues/3688
[ "documentation" ]
netzrac
1
AUTOMATIC1111/stable-diffusion-webui
pytorch
16,876
[Bug]: function parse_generation_parameter removes lastline if multiple loras are embedded in final text
### Checklist - [x] The issue exists after disabling all extensions - [x] The issue exists on a clean installation of webui - [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui - [x] The issue exists in the current version of the webui - [x] The issue has not been reported before recently - [ ] The issue has been reported before but has not been fixed yet ### What happened? File: modules/infotext_utils.py Line(s): 255 -> if len(re_param.findall(lastline)) < 3: The function parse_generation_parameters from modules/infotext_utils.py has a built in functionality to ignore the last line of a text passed in if len(re_param.findall(lastline)) < 3, when this criteria is met the last line will not be added to the lines variable list. ### Steps to reproduce the problem Under the scenario a standalone prompt text such as this below is passed in, the last line will be ignored: promptDescription1, promptDescription2, <lora: loraname1 v1:1>, <lora: loraname2 v1:1>, <lora: loraname3 v1:1>, extraDescription, etc ### What should have happened? I believe the loras format <lora: loraname> should be included in the regex so it does not ignore it when multiple ones are called in the last line. ### What browsers do you use to access the UI ? Mozilla Firefox ### Sysinfo Could not generate file ### Console logs ```Shell no console log errors for this bug ``` ### Additional information More curious as to if this is intended to not detect multiple lora formats for a last line in text since most geninfo are not composed of only the prompt.
open
2025-03-02T09:50:19Z
2025-03-06T13:10:36Z
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16876
[ "bug-report" ]
thundaga
6
matplotlib/matplotlib
matplotlib
29,216
[Bug]:
### Bug summary scatter with small s value leads to hollow point ### Code for reproduction ```Python import matplotlib.pyplot as plt import numpy as np fig, ax = plt.subplots() x = np.random.randn(10) y = np.random.randn(10) plt.scatter(x, y, s=0.1) plt.savefig('scatter.pdf') ``` ### Actual outcome ![8b04a39d583f67769c40d680ba54f0d2](https://github.com/user-attachments/assets/d39ee714-4146-484c-a5c3-a7ea4d835971) ### Expected outcome the scatter should be not hollow, even for small s value ### Additional information _No response_ ### Operating system _No response_ ### Matplotlib Version 3.9.2 ### Matplotlib Backend _No response_ ### Python version _No response_ ### Jupyter version _No response_ ### Installation None
closed
2024-12-02T03:44:19Z
2024-12-03T00:29:34Z
https://github.com/matplotlib/matplotlib/issues/29216
[ "status: needs clarification" ]
ZhenyuanJin
5
recommenders-team/recommenders
deep-learning
1,968
[ASK] No module named 'recommenders'
### Description Hi, I try to from recommenders.models.tfidf.tfidf_utils import TfidfRecommender, then get error: No module named 'recommenders', So I use :!pip install git+https://github.com/microsoft/recommenders.git, in google colab get error again : ERROR: Package 'recommenders' requires a different Python: 3.10.12 not in '<3.10,>=3.6', so I change environment in anaconda, get another error: ERROR: Could not build wheels for safetensors, which is required to install pyproject.toml-based projects. Seems can't !pip in google colab and anaconda, is anyone has same problem like me? ### Other Comments
closed
2023-08-16T04:40:12Z
2023-08-17T18:51:55Z
https://github.com/recommenders-team/recommenders/issues/1968
[ "help wanted", "duplicate" ]
shasha920
2
deeppavlov/DeepPavlov
nlp
1,626
ModuleNotFoundError: No module named 'torch'
Want to contribute to DeepPavlov? Please read the [contributing guideline](http://docs.deeppavlov.ai/en/master/devguides/contribution_guide.html) first. Please enter all the information below, otherwise your issue may be closed without a warning. **DeepPavlov version** deeppavlov-1.0.2 **Python version** 3.9.9: **Operating system** windows: **Issue**: Error: ModuleNotFoundError: No module named 'torch'. Obviously the fix is simply to install torch, but Python users are by now used to dependencies being pulled in automatically by pip install.
closed
2023-02-14T04:26:07Z
2023-03-15T15:43:24Z
https://github.com/deeppavlov/DeepPavlov/issues/1626
[ "bug" ]
Xtreemrus
1
scrapy/scrapy
web-scraping
6,630
build error
<!-- build error --> ### Description myenvdeveloper@developerdeMac-mini fanyi % scrapy crawl fy1 2025-01-24 11:26:25 [scrapy.utils.log] INFO: Scrapy 2.12.0 started (bot: fanyi) 2025-01-24 11:26:25 [scrapy.utils.log] INFO: Versions: lxml 5.3.0.0, libxml2 2.12.9, cssselect 1.2.0, parsel 1.10.0, w3lib 2.2.1, Twisted 24.11.0, Python 3.11.6 (main, Jan 24 2025, 10:24:22) [Clang 16.0.0 (clang-1600.0.26.6)], pyOpenSSL 25.0.0 (OpenSSL 3.4.0 22 Oct 2024), cryptography 44.0.0, Platform macOS-15.2-arm64-arm-64bit 2025-01-24 11:26:25 [scrapy.addons] INFO: Enabled addons: [] 2025-01-24 11:26:25 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor 2025-01-24 11:26:25 [scrapy.extensions.telnet] INFO: Telnet Password: 1969f05accbbc054 2025-01-24 11:26:25 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.memusage.MemoryUsage', 'scrapy.extensions.logstats.LogStats'] 2025-01-24 11:26:25 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'fanyi', 'CONCURRENT_REQUESTS': 4, 'CONCURRENT_REQUESTS_PER_DOMAIN': 2, 'DOWNLOAD_DELAY': 3, 'DOWNLOAD_TIMEOUT': 86400, 'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter', 'HTTPCACHE_STORAGE': 'scrapy_splash.SplashAwareFSCacheStorage', 'NEWSPIDER_MODULE': 'fanyi.spiders', 'SPIDER_MODULES': ['fanyi.spiders']} 2025-01-24 11:26:25 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.offsite.OffsiteMiddleware', 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'fanyi.middlewares.MyUserAgentMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'fanyi.middlewares.FanyiDownloaderMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 'scrapy_splash.SplashCookiesMiddleware', 'scrapy_splash.SplashMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats'] 2025-01-24 11:26:25 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy_splash.SplashDeduplicateArgsMiddleware', 'fanyi.middlewares.FanyiSpiderMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware'] 2025-01-24 11:26:25 [scrapy.middleware] INFO: Enabled item pipelines: ['fanyi.pipelines.FanyiDownloadFilePipeline', 'fanyi.pipelines.FanyiPipeline'] 2025-01-24 11:26:25 [scrapy.core.engine] INFO: Spider opened Unhandled error in Deferred: 2025-01-24 11:26:25 [twisted] CRITICAL: Unhandled error in Deferred: Traceback (most recent call last): File "/Users/developer/.pyenv/versions/3.11.6/envs/myenv311/lib/python3.11/site-packages/twisted/internet/defer.py", line 2017, in _inlineCallbacks result = context.run(gen.send, result) File "/Users/developer/.pyenv/versions/3.11.6/envs/myenv311/lib/python3.11/site-packages/scrapy/crawler.py", line 154, in crawl yield self.engine.open_spider(self.spider, start_requests) File "/Users/developer/.pyenv/versions/3.11.6/envs/myenv311/lib/python3.11/site-packages/twisted/internet/defer.py", line 2017, in _inlineCallbacks result = context.run(gen.send, result) File "/Users/developer/.pyenv/versions/3.11.6/envs/myenv311/lib/python3.11/site-packages/scrapy/core/engine.py", line 386, in open_spider scheduler = build_from_crawler(self.scheduler_cls, self.crawler) File "/Users/developer/.pyenv/versions/3.11.6/envs/myenv311/lib/python3.11/site-packages/scrapy/utils/misc.py", line 187, 
in build_from_crawler instance = objcls.from_crawler(crawler, *args, **kwargs) # type: ignore[attr-defined] File "/Users/developer/.pyenv/versions/3.11.6/envs/myenv311/lib/python3.11/site-packages/scrapy/core/scheduler.py", line 208, in from_crawler dupefilter=build_from_crawler(dupefilter_cls, crawler), File "/Users/developer/.pyenv/versions/3.11.6/envs/myenv311/lib/python3.11/site-packages/scrapy/utils/misc.py", line 187, in build_from_crawler instance = objcls.from_crawler(crawler, *args, **kwargs) # type: ignore[attr-defined] File "/Users/developer/.pyenv/versions/3.11.6/envs/myenv311/lib/python3.11/site-packages/scrapy/dupefilters.py", line 96, in from_crawler return cls._from_settings( File "/Users/developer/.pyenv/versions/3.11.6/envs/myenv311/lib/python3.11/site-packages/scrapy/dupefilters.py", line 109, in _from_settings return cls(job_dir(settings), debug, fingerprinter=fingerprinter) builtins.TypeError: SplashAwareDupeFilter.__init__() got an unexpected keyword argument 'fingerprinter' 2025-01-24 11:26:25 [twisted] CRITICAL: Traceback (most recent call last): File "/Users/developer/.pyenv/versions/3.11.6/envs/myenv311/lib/python3.11/site-packages/twisted/internet/defer.py", line 2017, in _inlineCallbacks result = context.run(gen.send, result) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/developer/.pyenv/versions/3.11.6/envs/myenv311/lib/python3.11/site-packages/scrapy/crawler.py", line 154, in crawl yield self.engine.open_spider(self.spider, start_requests) File "/Users/developer/.pyenv/versions/3.11.6/envs/myenv311/lib/python3.11/site-packages/twisted/internet/defer.py", line 2017, in _inlineCallbacks result = context.run(gen.send, result) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/developer/.pyenv/versions/3.11.6/envs/myenv311/lib/python3.11/site-packages/scrapy/core/engine.py", line 386, in open_spider scheduler = build_from_crawler(self.scheduler_cls, self.crawler) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/Users/developer/.pyenv/versions/3.11.6/envs/myenv311/lib/python3.11/site-packages/scrapy/utils/misc.py", line 187, in build_from_crawler instance = objcls.from_crawler(crawler, *args, **kwargs) # type: ignore[attr-defined] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/developer/.pyenv/versions/3.11.6/envs/myenv311/lib/python3.11/site-packages/scrapy/core/scheduler.py", line 208, in from_crawler dupefilter=build_from_crawler(dupefilter_cls, crawler), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/developer/.pyenv/versions/3.11.6/envs/myenv311/lib/python3.11/site-packages/scrapy/utils/misc.py", line 187, in build_from_crawler instance = objcls.from_crawler(crawler, *args, **kwargs) # type: ignore[attr-defined] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/developer/.pyenv/versions/3.11.6/envs/myenv311/lib/python3.11/site-packages/scrapy/dupefilters.py", line 96, in from_crawler return cls._from_settings( ^^^^^^^^^^^^^^^^^^^ File "/Users/developer/.pyenv/versions/3.11.6/envs/myenv311/lib/python3.11/site-packages/scrapy/dupefilters.py", line 109, in _from_settings return cls(job_dir(settings), debug, fingerprinter=fingerprinter) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: SplashAwareDupeFilter.__init__() got an unexpected keyword argument 'fingerprinter' ### Steps to Reproduce 1. pip install scrapy scrapy-splash 2. scrapy crawl fy1 ### Versions myenvdeveloper@developerdeMac-mini fanyi % scrapy version --verbose Scrapy : 2.12.0 lxml : 5.3.0.0 libxml2 : 2.12.9 cssselect : 1.2.0 parsel : 1.10.0 w3lib : 2.2.1 Twisted : 24.11.0 Python : 3.11.6 (main, Jan 24 2025, 10:24:22) [Clang 16.0.0 (clang-1600.0.26.6)] pyOpenSSL : 25.0.0 (OpenSSL 3.4.0 22 Oct 2024) cryptography : 44.0.0 Platform : macOS-15.2-arm64-arm-64bit
closed
2025-01-24T03:29:38Z
2025-01-24T07:35:16Z
https://github.com/scrapy/scrapy/issues/6630
[]
plkgq
1
bigscience-workshop/petals
nlp
8
[DESIGN] auction-like priorities for servers
[for the record: this was proposed by @TimDettmers ] Currently, hivemind-server treats all requests on a first come first served basis. If we want to reward active participants with faster inference/training, we could change that into an auction. Here's how client-server interaction looks like: - server gives client its stats, the current-highest bid, and maybe some metadata for bidding, e.g. the lowest serviced bids over last T seconds - client makes a bid - and signs it in such a way that it becomes a commitment ( see #6 ) - [in TaskPool.iterate_minibatches](https://github.com/learning-at-home/hivemind/blob/master/hivemind/moe/server/task_pool.py#L124), server will now generate minibatches in the order of highest bid first - [in TaskPool.priority](https://github.com/learning-at-home/hivemind/blob/master/hivemind/moe/server/task_pool.py#L175), server will now set pool's priority based on highest bid in the pool, instead of wait time As suggested by @GreenFatGuy , we need to think through how to deal with situations when low bids on high-demand servers won't ever be processed, and will hence take up memory on both client and server. First order solution: add absolute expiration time to each request, drop requests that hit expiration time.
closed
2022-06-12T04:12:30Z
2023-01-02T19:14:32Z
https://github.com/bigscience-workshop/petals/issues/8
[]
justheuristic
3
donnemartin/system-design-primer
python
626
Large scale system
open
2021-12-05T11:19:43Z
2022-04-23T13:17:56Z
https://github.com/donnemartin/system-design-primer/issues/626
[ "needs-review" ]
abdou-c
1
CorentinJ/Real-Time-Voice-Cloning
deep-learning
409
Toolbox: Disable "Load" button when dataset is not loaded
We should also disable the "Load" button in the GUI when a dataset is not loaded. New users click the Load button and are confused when it throws an exception. This has happened several times, most recently in #407 . Current interface as of #402: ![test](https://user-images.githubusercontent.com/24435787/86547699-2dfa9400-bf10-11ea-8b96-be357b74491b.png)
closed
2020-07-09T03:00:59Z
2020-07-10T07:55:24Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/409
[]
ghost
1
slackapi/bolt-python
fastapi
1,105
Scaling with socket mode
How can I scale the application connected using socket mode? Based on the documentation, it looks like I can have up to 10 connections open without any way of scaling more horizontally or vertically. ### The page URLs * https://api.slack.com/apis/socket-mode ## Requirements Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
closed
2024-07-02T10:57:23Z
2024-07-04T23:18:13Z
https://github.com/slackapi/bolt-python/issues/1105
[ "duplicate", "question" ]
Keeo
2
deezer/spleeter
deep-learning
707
[Bug] If the -s is greater than 2 hours, the output file is only 1kb and cannot be played.
- [x] I didn't find a similar issue already open. - [x] I read the documentation (README AND Wiki) - [x] I have installed FFMpeg - [ ] My problem is related to Spleeter only, not a derivative product (such as Webapplication, or GUI provided by others) ## Description <!-- Give us a clear and concise description of the bug you are reporting. --> ## Step to reproduce <!-- Indicates clearly steps to reproduce the behavior: --> 1. Installed using `pip` 2. Run as `python3.9.7` `SpleeterGui` 3. set -s greater than 2h,SpleeterGui will happen greater than 12h 4. output files cannot play and only 1kb ## Output ```bash INFO:spleeter:File .\462124074-1-30232/accompaniment.wav written succesfully INFO:spleeter:File .\462124074-1-30232/vocals.wav written succesfully ``` ## Environment <!-- Fill the following table --> | | | | ----------------- | ------------------------------- | | OS | Windows 10 | | Installation type | pip | | RAM available| 16GB | | Hardware spec | G3 3500china Intel-i7-10870H nvidia RTX-2060 | ## Additional context I need audio files over 1day Length to train speaker model,but files too big.
closed
2022-01-07T22:29:00Z
2022-01-07T23:45:00Z
https://github.com/deezer/spleeter/issues/707
[ "bug", "invalid" ]
EnderOcelot
0
graphql-python/graphene
graphql
1,404
Found different types with the same name in the schema
Bear with me please Let's assume I have a type called `Product` in schema1 and a type with the same name in schema2 and I want to expose __both__ schemas via the same API, I get an error saying that I can't have two types with the same name, even though I'm using TOTALLY different types from different files ``` # file class Product(...): pass # another file class Product(...): pass ``` The workaround is to rename the type to something like ProductX or smth, but I want to know why does this specific behavior happen, is there any kind of _global_ registry that I'm not aware of? I've ran through the code but I found nothing
closed
2022-01-30T02:37:22Z
2024-06-22T10:22:58Z
https://github.com/graphql-python/graphene/issues/1404
[ "🐛 bug" ]
LeOndaz
3
Layout-Parser/layout-parser
computer-vision
172
How I can extract Titles, Headers , Photos and respective article information from Newspaper?
Hi, I have been trying to implement the Newspaper navigator model for my application. However, it is able to detect the regions like title or whole article. But I want to extract title and its below paragraphs for my usecase. How I can do that? Please help me to resolve this issue. Is their any tutorial available to guide on it? Thanks
open
2023-03-11T09:30:15Z
2023-10-04T09:32:18Z
https://github.com/Layout-Parser/layout-parser/issues/172
[]
karndeepsingh
1
django-cms/django-cms
django
7,732
[BUG] Click "New Draft" cause error UNIQUE constraint failed: cms_pagecontent.language, cms_pagecontent.page_id
Click "New Draft" cause error UNIQUE constraint failed: cms_pagecontent.language, cms_pagecontent.page_id Django Version: | 4.2.9 -- | -- IntegrityError UNIQUE constraint failed: cms_pagecontent.language, cms_pagecontent.page_id \.env\Lib\site-packages\django\db\backends\sqlite3\base.py, line 328, in execute djangocms_versioning.admin.edit_redirect_view \.env\Scripts\python.exe 3.11.7
closed
2024-01-03T16:31:56Z
2025-02-03T06:59:21Z
https://github.com/django-cms/django-cms/issues/7732
[ "4.1" ]
TLuesebrinck
15
xinntao/Real-ESRGAN
pytorch
144
Link to realesrgan-ncnn-vulkan from README.md
I used the link from the README to download `realesrgan-ncnn-vulkan`. To my confusion, the `realesrgan-x4plus-anime.bin` and `realesrgan-x4plus-anime.param` files are nowhere to be found. It's because the link in the README points to an old version, v0.1.2.
open
2021-11-02T06:16:33Z
2021-11-02T09:36:03Z
https://github.com/xinntao/Real-ESRGAN/issues/144
[]
justUmen
0
autogluon/autogluon
data-science
4,778
[BUG] leaderboard don't have score_test column
**Describe the bug** leaderboard don't have score_test column this is my score: ``` ,model,score_test,score_val,pred_time_test,pred_time_val,fit_time_marginal,fit_order,MAE,MAPE,MASE,MSE,RMSLE,RMSSE,SMAPE,WAPE 0,WeightedEnsemble,,-0.12049423885675616,1509.9086158275604,1127.7490582466125,6.181342363357544,13,,,,,,,, 1,TiDE,,-0.12476309357860434,118.49078869819641,36.36528301239014,611.4046132564545,12,,,,,,,, 2,TemporalFusionTransformer,,-0.13627036644945328,100.78860259056091,23.59303569793701,206.16117525100708,9,,,,,,,, 3,SeasonalNaive,,-0.660805435228755,33.18238949775696,41.93700838088989,15.594502210617065,1,,,,,,,, 4,RecursiveTabular,,-0.15038114804071967,122.86139273643494,7.5854315757751465,563.4693095684052,2,,,,,,,, 5,PatchTST,,-0.1542016456512023,132.4857325553894,16.203494548797607,85.02262091636658,11,,,,,,,, 6,NPTS,,-0.6918072100454226,311.7031497955322,38.06069493293762,15.793499231338501,4,,,,,,,, 7,DynamicOptimizedTheta,,-0.12378074038959759,293.6147825717926,158.2078046798706,16.762980699539185,5,,,,,,,, 8,DirectTabular,,-0.13990292055189926,67.13186550140381,13.225001811981201,374.5207350254059,3,,,,,,,, 9,DeepAR,,-0.15540650115048688,82.5639443397522,26.561711311340332,283.0971758365631,10,,,,,,,, 10,ChronosZeroShot[bolt_base],,-0.12829275030407586,134.53621864318848,11.037777662277222,0.0044155120849609375,7,,,,,,,, 11,ChronosFineTuned[bolt_small],,-0.12416203701720428,83.0649847984314,6.950927972793579,77.18166732788086,8,,,,,,,, 12,AutoETS,,-0.122765501797385,319.86651253700256,808.5811042785645,17.70991277694702,6,,,,,,,, ``` **Expected behavior** give me score_test **To Reproduce** ```python TimeSeriesPredictor.load(f"model").leaderboard( test_data, extra_metrics=[ "MAE", "MAPE", "MASE", "MSE", "RMSLE", "RMSSE", "SMAPE", "WAPE", ], ).to_csv(f"score.csv") ``` **Screenshots / Logs** nolog **Installed Versions** <!-- Please run the following code snippet: --> <details> ```python INSTALLED VERSIONS ------------------ date : 2025-01-08 time : 
22:15:58.292015 python : 3.12.3.final.0 OS : Linux OS-release : 5.15.0-60-generic Version : #66-Ubuntu SMP Fri Jan 20 14:29:49 UTC 2023 machine : x86_64 processor : x86_64 num_cores : 128 cpu_ram_mb : 1031709.55859375 cuda version : 12.535.104.12 num_gpus : 1 gpu_ram_mb : [79943] avail_disk_size_mb : 1335921416 accelerate : 0.34.2 autogluon : 1.2 autogluon.common : 1.2 autogluon.core : 1.2 autogluon.features : 1.2 autogluon.multimodal : 1.2 autogluon.tabular : 1.2 autogluon.timeseries : 1.2 boto3 : 1.35.82 catboost : 1.2.7 coreforecast : 0.0.12 defusedxml : 0.7.1 einops : 0.8.0 evaluate : 0.4.3 fastai : 2.7.18 fugue : 0.9.1 gluonts : 0.16.0 huggingface-hub : 0.27.0 hyperopt : 0.2.7 imodels : None jinja2 : 3.1.4 joblib : 1.4.2 jsonschema : 4.21.1 lightgbm : 4.5.0 lightning : 2.4.0 matplotlib : 3.10.0 mlforecast : 0.13.4 networkx : 3.4.2 nlpaug : 1.1.11 nltk : 3.8.1 numpy : 1.26.4 nvidia-ml-py3 : 7.352.0 omegaconf : 2.2.3 onnx : None onnxruntime : None onnxruntime-gpu : None openmim : 0.3.9 optimum : None optimum-intel : None orjson : 3.10.12 pandas : 2.2.3 pdf2image : 1.17.0 Pillow : 11.0.0 psutil : 6.1.0 pyarrow : 18.1.0 pytesseract : 0.3.10 pytorch-lightning : 2.4.0 pytorch-metric-learning: 2.3.0 ray : 2.39.0 requests : 2.32.3 scikit-image : 0.24.0 scikit-learn : 1.5.2 scikit-learn-intelex : None scipy : 1.14.1 seqeval : 1.2.2 skl2onnx : None spacy : 3.7.5 statsforecast : 1.7.8 tabpfn : None tensorboard : 2.18.0 text-unidecode : 1.3 timm : 1.0.3 torch : 2.5.1 torchmetrics : 1.2.1 torchvision : 0.20.1 tqdm : 4.67.1 transformers : 4.47.0 utilsforecast : 0.2.4 vowpalwabbit : None xgboost : 2.1.3 ``` </details>
closed
2025-01-08T13:19:31Z
2025-01-10T08:37:29Z
https://github.com/autogluon/autogluon/issues/4778
[ "bug: unconfirmed", "Needs Triage", "module: timeseries" ]
ghost
6
idealo/image-super-resolution
computer-vision
14
Inference of large images
I think it would be a great feature to have some option for prediction of larger images, by applying sr to tiles of the main image and stitching the results, this is after enabling gpu inference of course. Thanks!
open
2019-04-01T18:11:51Z
2019-08-29T12:11:34Z
https://github.com/idealo/image-super-resolution/issues/14
[ "enhancement" ]
ontheway16
16
Gozargah/Marzban
api
932
Json Subscription link does not work on Hiddify apk
The Hiddify app does not support Marzban's JSON subscription link. I don't know whether the problem is with the Hiddify app or with the subscription link, but please coordinate with the Hiddify team and resolve the issue. Thanks.
closed
2024-04-09T21:09:34Z
2024-04-09T21:54:01Z
https://github.com/Gozargah/Marzban/issues/932
[ "Invalid" ]
farshadl
1
dynaconf/dynaconf
django
414
Update the docs version on every release automatically
Right now the version shown in the docs page is taken from: https://github.com/rochacbruno/dynaconf/blob/master/mkdocs.yml#L1 https://github.com/rochacbruno/dynaconf/blob/9f86c52ea644862d326079589dfd9c6fd02eecf3/mkdocs.yml#L1 We need to edit the `./release.sh` so it can update that version on the mkdocs.yml the same way it does with VERSION file.
closed
2020-09-16T15:38:34Z
2020-09-17T06:06:28Z
https://github.com/dynaconf/dynaconf/issues/414
[ "help wanted", "Not a Bug", "RFC" ]
rochacbruno
1
ageitgey/face_recognition
python
1,203
Why is recognition performance poor for children?
Is it because face alignment does not work well on children's faces?
open
2020-08-20T10:02:36Z
2020-08-20T10:02:36Z
https://github.com/ageitgey/face_recognition/issues/1203
[]
yfq512
0
strawberry-graphql/strawberry
fastapi
3,703
Support for nullable Connection
The field using `strawberry.connection` should be able to be nullable. ## Feature Request Type - [ ] Core functionality - [x] Alteration (enhancement/optimization) of existing feature(s) - [x] New behavior ## Description This is needed in case if connection field would return an error e.g. from `PermissionExtension` and we don't want to error out the whole query. Right now when the field has a declared type of `strawberry.relay.ListConnection[...] | None` the following error is thrown ``` strawberry.relay.exceptions.RelayWrongAnnotationError: Wrong annotation used on field "...". It should be annotated with a "Connection" subclass. ```
closed
2024-11-18T14:39:09Z
2025-03-20T15:56:56Z
https://github.com/strawberry-graphql/strawberry/issues/3703
[]
marmor157
3
anselal/antminer-monitor
dash
73
Add SNMP
I don't even know if this is possible, but could a custom SNMP table be added to allow monitoring via a third party SNMP server? Essentially using AntminerMonitor as a SNMP proxy for use with PRTG or other monitoring software?
closed
2018-02-19T21:55:55Z
2018-07-26T02:33:07Z
https://github.com/anselal/antminer-monitor/issues/73
[ ":x: invalid" ]
dmelvin
1
jacobgil/pytorch-grad-cam
computer-vision
60
get_gradients() is empty
When extracting the gradient values in the call method of the GradCam class: ```python self.extractor.get_gradients()[-1].cpu().data.numpy() ``` the get_gradients() call returns an empty list, and therefore the [-1] index produces an IndexError exception. The only thing I am doing differently is using a custom ResNet18 model that works with grayscale images that do not follow the standard (224, 224) size. Has anyone faced this problem before?
closed
2021-02-26T12:14:33Z
2021-03-03T15:27:11Z
https://github.com/jacobgil/pytorch-grad-cam/issues/60
[]
josepdecid
1
ray-project/ray
python
51,167
[telemetry] RLlib telemetry prior to `ray.init` is not reported
See this test case: https://github.com/ray-project/ray/pull/51161/files#diff-223a4454f64b8d38669989a2f91f3d9c9910c58d1690ca05541485342c2d52b3R32
open
2025-03-07T17:39:16Z
2025-03-07T17:40:53Z
https://github.com/ray-project/ray/issues/51167
[ "P1", "rllib" ]
edoakes
0