Dataset columns:
id: string, length 4 to 10
text: string, length 4 to 2.14M
source: string, 2 classes
created: timestamp[s], from 2001-05-16 21:05:09 to 2025-01-01 03:38:30
added: string date, from 2025-04-01 04:05:38 to 2025-04-01 07:14:06
metadata: dict
2589134912
chore: update submodules - 2024-10-15 15:26 This PR updates the following submodules: Remote Repository Submodule Path Change LedgerHQ/clear-signing-erc7730-registry tests/registries/clear-signing-erc7730-registry df3157b...b4139cf This PR description was generated by sgoudham/update-git-submodules. Done in https://github.com/LedgerHQ/python-erc7730/pull/87
gharchive/pull-request
2024-10-15T15:26:51
2025-04-01T04:32:43.394082
{ "authors": [ "fsamier", "ldg-github-ci" ], "repo": "LedgerHQ/python-erc7730", "url": "https://github.com/LedgerHQ/python-erc7730/pull/89", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1707263967
🛑 CompanyWise3 is down In ee82963, CompanyWise3 (https://comp3.wisereport.co.kr/servercheck.aspx) was down: HTTP code: 0 Response time: 0 ms Resolved: CompanyWise3 is back up in 056225d.
gharchive/issue
2023-05-12T09:14:16
2025-04-01T04:32:43.396522
{ "authors": [ "LeeYoungJin" ], "repo": "LeeYoungJin/fg_upptime", "url": "https://github.com/LeeYoungJin/fg_upptime/issues/2477", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1729642502
🛑 CompanyWise1 is down In 51f73d9, CompanyWise1 (https://comp1.wisereport.co.kr/servercheck.aspx) was down: HTTP code: 0 Response time: 0 ms Resolved: CompanyWise1 is back up in 90105bb.
gharchive/issue
2023-05-28T19:52:11
2025-04-01T04:32:43.399049
{ "authors": [ "LeeYoungJin" ], "repo": "LeeYoungJin/fg_upptime", "url": "https://github.com/LeeYoungJin/fg_upptime/issues/3735", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2255893743
🛑 WiseReport1 is down In ab7d026, WiseReport1 (https://www1.wisereport.co.kr/servercheck.aspx) was down: HTTP code: 0 Response time: 0 ms Resolved: WiseReport1 is back up in 2feb2e3 after 20 minutes.
gharchive/issue
2024-04-22T08:17:13
2025-04-01T04:32:43.401375
{ "authors": [ "LeeYoungJin" ], "repo": "LeeYoungJin/fg_upptime", "url": "https://github.com/LeeYoungJin/fg_upptime/issues/7610", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2341620311
🛑 MobileETFGlobal2 is down In 6e010a6, MobileETFGlobal2 (https://mglobaletf2.wisereport.co.kr/Home/ServerCheck) was down: HTTP code: 0 Response time: 0 ms Resolved: MobileETFGlobal2 is back up in 2c9e728 after 7 minutes.
gharchive/issue
2024-06-08T11:50:52
2025-04-01T04:32:43.403721
{ "authors": [ "LeeYoungJin" ], "repo": "LeeYoungJin/fg_upptime", "url": "https://github.com/LeeYoungJin/fg_upptime/issues/7754", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1759616528
LockedNFT Refactoring + 721Psi Separated code into distinct files AccessControl Roles Legitimate specific state variables (i.e. servicePeriod and transferLock) Added 721Psi implementation @danielduan i think this is ready for final review. there are a few more changes that I think we should make, mainly around emitting events for unlocking, but let's get this in first? Looks really good, thanks for this!
gharchive/pull-request
2023-06-15T22:10:46
2025-04-01T04:32:43.471292
{ "authors": [ "danielduan", "thecalvinchan" ], "repo": "LegitimateTech/lgt-phygital-nft-v3", "url": "https://github.com/LegitimateTech/lgt-phygital-nft-v3/pull/17", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1857709605
id: late fees explanation comes after the question. reverse order Fixed
gharchive/issue
2023-08-19T13:20:05
2025-04-01T04:32:43.483019
{ "authors": [ "nonprofittechy", "tobyfey" ], "repo": "LemmaLegalConsulting/docassemble-MOHUDEvictionProject", "url": "https://github.com/LemmaLegalConsulting/docassemble-MOHUDEvictionProject/issues/317", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1630405612
Would you consider adding a Prompt Store? Are you considering adding a Prompt Store, i.e. pre-stored conversation prompts, similar to the one in the https://github.com/Chanzhaoyu/chatgpt-web project? Are you considering adding a Prompt Store, i.e. pre-stored conversation prompts, similar to the one in the https://github.com/Chanzhaoyu/chatgpt-web project? I'll look into it when I have time 😁
gharchive/issue
2023-03-18T16:21:40
2025-04-01T04:32:43.518065
{ "authors": [ "LiangYang666", "imClumsyPanda" ], "repo": "LiangYang666/ChatGPT-Web", "url": "https://github.com/LiangYang666/ChatGPT-Web/issues/8", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1451169460
The tap targets of the "Confirm" and "Cancel" buttons in the BrnMultiDataPicker popup are too small; please enlarge them, since a finger sometimes cannot hit the button, which is very frustrating. Optimization suggestion. Content of the suggestion: the tap area of the confirm/cancel buttons in all Picker-style popups is too small. It looks as if the tap handler is attached only to the Text widget; the Text should be wrapped in a Container, the tap handler attached to the Container, and the Container made a bit wider and taller so the touch target is larger. Otherwise the confirm/cancel buttons of every Picker-style popup are sometimes impossible to hit with a fingertip. The report should include: 1. The tap area of the confirm/cancel buttons in all Picker-style popups is too small; the tap handler seems to be attached only to the Text widget, so wrap the Text in a Container, attach the handler to the Container, and enlarge the Container to widen the touch target. 2. Related components: BrnMultiDataPicker, etc. 3. Runtime environment (optional): device Huawei nova 9, system HarmonyOS 3.0.0, Bruno version 3.1.0, Flutter Doctor information. 5. Additional information. Agreed; we will confirm the change with the interaction designers. #383 A fix has been submitted and will ship in an upcoming release; closing this issue for now. Thanks for the feedback!
gharchive/issue
2022-11-16T08:58:30
2025-04-01T04:32:43.521925
{ "authors": [ "violinday", "yaochangliang159", "zhoujuanjuan" ], "repo": "LianjiaTech/bruno", "url": "https://github.com/LianjiaTech/bruno/issues/369", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
257419785
Add function to run analysis for all results I'll handle this in https://github.com/LibCrowds/libcrowds-analyst/issues/31, just adding this here for reference. Note that this one doesn't really need manual testing as such - we have a full set of unit tests for the above module. What you can do is check https://backend.libcrowds.com/api/result?project_id=65?limit=100 It's just a block of JSON but our annotations are in there (first results, yay :smile:)
gharchive/issue
2017-09-13T15:15:54
2025-04-01T04:32:43.524018
{ "authors": [ "alexandermendes" ], "repo": "LibCrowds/vue-pybossa-frontend", "url": "https://github.com/LibCrowds/vue-pybossa-frontend/issues/266", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1843132501
Worker timeout and service 502 on api/upload/complete call The service seems to die when i try to complete an upload. backend | [2023-08-09 12:14:55 +0000] [108] [CRITICAL] WORKER TIMEOUT (pid:170) backend | [2023-08-09 12:14:57 +0000] [108] [ERROR] Worker (pid:170) was sent SIGKILL! Perhaps out of memory? backend | [2023-08-09 12:14:57 +0000] [182] [INFO] Booting worker with pid: 182 I have checked that the system has enough free ram btw. Full logs: [2023-08-09 09:14:21 +0000] [108] [INFO] Starting gunicorn 21.2.0 [2023-08-09 09:14:21 +0000] [108] [INFO] Listening at: http://0.0.0.0:8001 (108) [2023-08-09 09:14:21 +0000] [108] [INFO] Using worker: gevent [2023-08-09 09:14:21 +0000] [110] [INFO] Booting worker with pid: 110 [2023-08-09 09:14:21 +0000] [111] [INFO] Booting worker with pid: 111 use SECRET_KEY from env use SECRET_KEY from env Unauthorized: /api/albums/date/list/ Unauthorized: /api/albums/date/1508/ 10:54:07 [Q] INFO Enqueued [DjangORM] 19 Internal Server Error: /api/photos/baf52319d94bbecd131ee440559429b41/summary/ Traceback (most recent call last): File "/usr/local/lib/python3.11/dist-packages/django/core/handlers/exception.py", line 55, in inner response = get_response(request) ^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/django/core/handlers/base.py", line 197, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/django/views/decorators/csrf.py", line 56, in wrapper_view return view_func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/viewsets.py", line 125, in view return self.dispatch(request, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/views.py", line 509, in dispatch response = self.handle_exception(exc) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/views.py", line 469, in handle_exception self.raise_uncaught_exception(exc) File "/usr/local/lib/python3.11/dist-packages/rest_framework/views.py", line 480, in raise_uncaught_exception raise exc File "/usr/local/lib/python3.11/dist-packages/rest_framework/views.py", line 506, in dispatch response = handler(request, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/code/api/views/photos.py", line 375, in summary return Response(serializer.data) ^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/serializers.py", line 555, in data ret = super().data ^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/serializers.py", line 253, in data self._data = self.to_representation(self.instance) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/serializers.py", line 522, in to_representation ret[field.field_name] = field.to_representation(attribute) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/fields.py", line 1838, in to_representation return method(value) ^^^^^^^^^^^^^ File "/code/api/serializers/photos.py", line 144, in get_photo_summary return PhotoSummarySerializer(obj.get()).data ^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/django/db/models/query.py", line 637, in get raise self.model.DoesNotExist( api.models.photo.Photo.DoesNotExist: Photo matching query does not exist. 
Unauthorized: /api/photos/baf52319d94bbecd131ee440559429b41/summary/ Unauthorized: /api/photos/baf52319d94bbecd131ee440559429b41/summary/ Not Found: /api/static/rest_framework/js/ajax-form.js Not Found: /api/static/rest_framework/css/bootstrap-tweaks.css Not Found: /api/static/rest_framework/js/bootstrap.min.js Not Found: /api/static/rest_framework/css/prettify.css Not Found: /api/static/rest_framework/css/bootstrap.min.css Not Found: /api/static/rest_framework/css/default.css Not Found: /api/static/rest_framework/js/default.js Not Found: /api/static/rest_framework/js/jquery-3.5.1.min.js Not Found: /api/static/rest_framework/js/csrf.js Not Found: /api/static/rest_framework/js/prettify-min.js Unauthorized: /api/user/1/ Unauthorized: /api/user/1/ Unauthorized: /api/user/1/ Unauthorized: /api/user/1/ Unauthorized: /api/user/1/ Unauthorized: /api/user/1/ Unauthorized: /api/user/1/ Unauthorized: /api/user/1/ 11:11:43 [Q] INFO Enqueued [DjangORM] 20 Internal Server Error: /api/photos/8e26638d1515b7406abcf14d648440871/summary/ Traceback (most recent call last): File "/usr/local/lib/python3.11/dist-packages/django/core/handlers/exception.py", line 55, in inner response = get_response(request) ^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/django/core/handlers/base.py", line 197, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/django/views/decorators/csrf.py", line 56, in wrapper_view return view_func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/viewsets.py", line 125, in view return self.dispatch(request, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/views.py", line 509, in dispatch response = self.handle_exception(exc) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/views.py", line 469, in handle_exception self.raise_uncaught_exception(exc) File "/usr/local/lib/python3.11/dist-packages/rest_framework/views.py", line 480, in raise_uncaught_exception raise exc File "/usr/local/lib/python3.11/dist-packages/rest_framework/views.py", line 506, in dispatch response = handler(request, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/code/api/views/photos.py", line 375, in summary return Response(serializer.data) ^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/serializers.py", line 555, in data ret = super().data ^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/serializers.py", line 253, in data self._data = self.to_representation(self.instance) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/serializers.py", line 522, in to_representation ret[field.field_name] = field.to_representation(attribute) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/fields.py", line 1838, in to_representation return method(value) ^^^^^^^^^^^^^ File "/code/api/serializers/photos.py", line 144, in get_photo_summary return PhotoSummarySerializer(obj.get()).data ^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/django/db/models/query.py", line 637, in get raise self.model.DoesNotExist( api.models.photo.Photo.DoesNotExist: Photo matching query does not exist. 
11:23:14 [Q] INFO Enqueued [DjangORM] 21 Internal Server Error: /api/photos/269c5a5c84dd91bf2310bc9e1bba367f1/summary/ Traceback (most recent call last): File "/usr/local/lib/python3.11/dist-packages/django/core/handlers/exception.py", line 55, in inner response = get_response(request) ^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/django/core/handlers/base.py", line 197, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/django/views/decorators/csrf.py", line 56, in wrapper_view return view_func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/viewsets.py", line 125, in view return self.dispatch(request, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/views.py", line 509, in dispatch response = self.handle_exception(exc) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/views.py", line 469, in handle_exception self.raise_uncaught_exception(exc) File "/usr/local/lib/python3.11/dist-packages/rest_framework/views.py", line 480, in raise_uncaught_exception raise exc File "/usr/local/lib/python3.11/dist-packages/rest_framework/views.py", line 506, in dispatch response = handler(request, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/code/api/views/photos.py", line 375, in summary return Response(serializer.data) ^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/serializers.py", line 555, in data ret = super().data ^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/serializers.py", line 253, in data self._data = self.to_representation(self.instance) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/serializers.py", line 522, in to_representation ret[field.field_name] = field.to_representation(attribute) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/fields.py", line 1838, in to_representation return method(value) ^^^^^^^^^^^^^ File "/code/api/serializers/photos.py", line 144, in get_photo_summary return PhotoSummarySerializer(obj.get()).data ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/serializers.py", line 555, in data ret = super().data ^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/serializers.py", line 253, in data self._data = self.to_representation(self.instance) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/serializers.py", line 522, in to_representation ret[field.field_name] = field.to_representation(attribute) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/rest_framework/fields.py", line 1838, in to_representation return method(value) ^^^^^^^^^^^^^ File "/code/api/serializers/photos.py", line 82, in get_type if obj.main_file.embedded_media.count() > 0: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'embedded_media' Unauthorized: /api/user/1/ Unauthorized: /api/user/1/ Unauthorized: /api/albums/date/list/ Unauthorized: /api/albums/date/1471/ Unauthorized: /api/albums/date/1529/ [2023-08-09 12:05:20 +0000] [108] [CRITICAL] WORKER TIMEOUT (pid:110) [2023-08-09 12:05:25 +0000] [108] [ERROR] Worker (pid:110) was 
sent SIGKILL! Perhaps out of memory? [2023-08-09 12:05:25 +0000] [170] [INFO] Booting worker with pid: 170 use SECRET_KEY from env [2023-08-09 12:13:12 +0000] [108] [CRITICAL] WORKER TIMEOUT (pid:111) [2023-08-09 12:13:17 +0000] [108] [ERROR] Worker (pid:111) was sent SIGKILL! Perhaps out of memory? [2023-08-09 12:13:17 +0000] [176] [INFO] Booting worker with pid: 176 use SECRET_KEY from env [2023-08-09 12:14:55 +0000] [108] [CRITICAL] WORKER TIMEOUT (pid:170) [2023-08-09 12:14:57 +0000] [108] [ERROR] Worker (pid:170) was sent SIGKILL! Perhaps out of memory? [2023-08-09 12:14:57 +0000] [182] [INFO] Booting worker with pid: 182 use SECRET_KEY from env [2023-08-09 12:17:41 +0000] [108] [CRITICAL] WORKER TIMEOUT (pid:176) [2023-08-09 12:18:43 +0000] [108] [ERROR] Worker (pid:176) was sent SIGKILL! Perhaps out of memory? [2023-08-09 12:18:43 +0000] [188] [INFO] Booting worker with pid: 188 use SECRET_KEY from env [2023-08-09 12:22:40 +0000] [108] [CRITICAL] WORKER TIMEOUT (pid:182) [2023-08-09 12:23:06 +0000] [108] [ERROR] Worker (pid:182) was sent SIGKILL! Perhaps out of memory? [2023-08-09 12:23:06 +0000] [194] [INFO] Booting worker with pid: 194 use SECRET_KEY from env i will give the next dev build a try and report back latest dev seems to have fixed this
gharchive/issue
2023-08-09T12:27:47
2025-04-01T04:32:43.571442
{ "authors": [ "savvasdalkitsis" ], "repo": "LibrePhotos/librephotos-docker", "url": "https://github.com/LibrePhotos/librephotos-docker/issues/104", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
777103608
Volumes defined in docker-compose.yaml but unused The section volumes: librephotos-data: media: at the end of the compose file defines two volumes, though they are never used. The postgres service has a bind mount with a similar name: $HOME/librephotos_data:/var/lib/postgresql/data It would probably be a good idea to use the volumes in the appropriate mounts or remove them from the compose file. Thanks, I removed the unused volumes! :+1:
gharchive/issue
2020-12-31T18:14:53
2025-04-01T04:32:43.573794
{ "authors": [ "Craumix", "derneuere" ], "repo": "LibrePhotos/librephotos", "url": "https://github.com/LibrePhotos/librephotos/issues/69", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1426922460
Class for simulating a mechanism under control Added sensors for the robot: the sum of the normal contact force for each body block; the absolute coordinates of the body blocks; the number of contact surfaces of the body (for now it detects the fact of contact with a surface, but not the count); the values of the rotational joint angles. Added dataclasses for the data coming out of the system simulation. Added a class for simulating the system inside the control-optimization algorithm. Added a flag class for stopping the simulation based on simulation time. Let's do the merge properly after the 31st. Added a set of rules for the examples. Added an example with binding of control actions that automatically adapts to the number of actuators. Fixed a bug with the stop flag. Fixed a bug with the returned CoG value.
gharchive/pull-request
2022-10-28T08:52:21
2025-04-01T04:32:43.583602
{ "authors": [ "Huowl", "ZharkovKirill" ], "repo": "LicAiBeerLab/graph_assembler", "url": "https://github.com/LicAiBeerLab/graph_assembler/pull/19", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1322513649
Error setting custom BuildConfig to the LightningWork object 🐛 Bug Following the example for using a public docker image results in an error. I modified the work slightly just to print hi. To Reproduce Component.py from lightning_app import LightningWork, BuildConfig class MyWork(LightningWork): def __init__(self): super().__init__() # Using a publicly hosted docker image: self.cloud_build_config = BuildConfig( # This is one of the base images Lightning uses by default image="ghcr.io/gridai/base-images:v1.8-gpu" ) def run(self): print('hi') app.py import lightning as L from lightning_app.storage import Drive from components.fail import MyWork class LitApp(L.LightningFlow): def __init__(self) -> None: super().__init__() self.work = MyWork() def run(self): self.work.run() app = L.LightningApp(LitApp()) error Your Lightning App is starting. This won't take long. ERROR: Found an exception when loading your application from fail.py. Please, resolve it to run your app. Traceback (most recent call last): File "fail.py", line 15, in <module> app = L.LightningApp(LitApp()) File "fail.py", line 9, in __init__ self.work = MyWork() File "C:\Users\Olufemi Ojo\Documents\MlFlowComponent\components\fail.py", line 8, in __init__ self.cloud_build_config = BuildConfig( File "C:\Users\Olufemi Ojo\Documents\envs\network\lib\site-packages\lightning_app\core\work.py", line 300, in __setattr__ setattr_fn(name, value) File "C:\Users\Olufemi Ojo\Documents\envs\network\lib\site-packages\lightning_app\core\work.py", line 365, in _default_setattr raise AttributeError( AttributeError: Only JSON-serializable attributes are currently supported (str, int, float, bool, tuple, list, dict etc.) to be part of <components.fail.MyWork object at 0x000002D6446D0070> state. Found the attribute cloud_build_config with BuildConfig(requirements=None, dockerfile=None, image='ghcr.io/gridai/base-images:v1.8-gpu') instead. HINT: Private attributes defined as follows `self._x = y` won't be shared between components and therefore don't need to be JSON-serializable. If you need to include non-JSON serializable objects in the state, you can use the `lightning_app.storage.Payload` API. Expected behavior no error following the docs Environment CUDA: - GPU: - available: False - version: None Packages: - lightning: 2022.7.18 - lightning_app: 0.5.2 - numpy: 1.23.1 - pyTorch_debug: False - pyTorch_version: 1.12.0+cpu - pytorch-lightning: 1.6.5 - tqdm: 4.64.0 System: - OS: Windows - architecture: - 64bit - WindowsPE - processor: Intel64 Family 6 Model 140 Stepping 1, GenuineIntel - python: 3.9.7 - version: 10.0.22000 cc @tchaton @rohitgr7 A simple fix for this issue seems to be adding a __json__ attribute to BuildConfig class. The __json__ attribute would look like this, def __json__(self): return self.to_dict() If this solution seems right, I could quickly make PR to fix this issue. Any comments/suggestions would be really helpful! @manskx @akihironitta @tchaton Out of curiosity, how did you arrive at that solution? On Sat, Aug 6, 2022, 8:32 AM Pranjal Datta @.***> wrote: A simple fix for this issue seems to be adding a json attribute to BuildConfig class https://github.com/Lightning-AI/lightning/blob/26d69ceada7f4ad1632e70df6414348170e85574/src/lightning_app/utilities/packaging/build_config.py#L49 . The json attribute would look like this, def __json__(self): return self.to_dict() If this solution seems right, I could quickly make PR to fix this issue. Any comments/suggestions would be really helpful! 
@manskx https://github.com/manskx @akihironitta https://github.com/akihironitta @tchaton https://github.com/tchaton — Reply to this email directly, view it on GitHub https://github.com/Lightning-AI/lightning/issues/13934#issuecomment-1207206699, or unsubscribe https://github.com/notifications/unsubscribe-auth/ALHYMCXXMIBXZ56IMBH6BVDVXZLNTANCNFSM55BOGX4A . You are receiving this because you authored the thread.Message ID: @.***> Out of curiosity, how did you arrive at that solution? … On Sat, Aug 6, 2022, 8:32 AM Pranjal Datta @.> wrote: A simple fix for this issue seems to be adding a json attribute to BuildConfig class https://github.com/Lightning-AI/lightning/blob/26d69ceada7f4ad1632e70df6414348170e85574/src/lightning_app/utilities/packaging/build_config.py#L49 . The json attribute would look like this, def json(self): return self.to_dict() If this solution seems right, I could quickly make PR to fix this issue. Any comments/suggestions would be really helpful! @manskx https://github.com/manskx @akihironitta https://github.com/akihironitta @tchaton https://github.com/tchaton — Reply to this email directly, view it on GitHub <#13934 (comment)>, or unsubscribe https://github.com/notifications/unsubscribe-auth/ALHYMCXXMIBXZ56IMBH6BVDVXZLNTANCNFSM55BOGX4A . You are receiving this because you authored the thread.Message ID: @.> So the AttributeError is being thrown here since, _is_json_serializable returns False as this function in turns tries to do a json.dumps here using a custom JSON Encoder class defined here. It is this encoder class looks for a __json__ attribute. So the issue here is caused by the asymmetry between python's __getattr__ and __setattr__ behaviour. The former is called only if the attribute is not found but the latter is called regardless. The work class already has a property.setter for build configs defined which is supposed to be called when you set the build config and this error would not have raised. @cloud_build_config.setter def cloud_build_config(self, build_config: BuildConfig) -> None: self._cloud_build_config = build_config self._cloud_build_config.on_work_init(self, cloud_compute=self._cloud_compute) But instead what happens is that the custom __setattr__ defined is taking priority over the property.setter which checks for the object to be json serializable or not. Probably a solution here would be to fix the __setattr__ function to check if the incoming name has a property setter defined or not. @pranjaldatta Would it be something you are interested in contributing to? @oojo12 I have slightly modified the title. As a workaround to your problem, can you try passing the build config as an argument to the init call of super class. from lightning_app import LightningWork, BuildConfig class MyWork(LightningWork): def __init__(self): super().__init__(cloud_build_config=BuildConfig(image="ghcr.io/gridai/base-images:v1.8-gpu")) def run(self): print('hi') So the issue here is caused by the asymmetry between python's __getattr__ and __setattr__ behaviour. The former is called only if the attribute is not found but the latter is called regardless. The work class already has a property.setter for build configs defined which is supposed to be called when you set the build config and this error would not have raised. 
@cloud_build_config.setter def cloud_build_config(self, build_config: BuildConfig) -> None: self._cloud_build_config = build_config self._cloud_build_config.on_work_init(self, cloud_compute=self._cloud_compute) But instead what happens is that the custom __setattr__ defined is taking priority over the property.setter which checks for the object to be json serializable or not. Probably a solution here would be to fix the __setattr__ function to check if the incoming name has a property setter defined or not. @pranjaldatta Would it be something you are interested in contributing to? @hhsecond, I would love to contribute to this issue! I tried to make some quick changes and it seems like this solution works. If this is cool with you, I could make a PR in a day or two @pranjaldatta fantastic. Let's get this one fixed. Thanks a lot for taking it up. @pranjaldatta circling back to see if you have got some time to spare on this one? Please don't hesitate to ask questions or seek help incase you need @pranjaldatta circling back to see if you have got some time to spare on this one? Please don't hesitate to ask questions or seek help incase you need Hi @hhsecond , so sorry about the delay! I have the code ready. It is just that I haven't had the time to make the PR due to some prior commitments. I will definitely try to to finish the PR by today.
gharchive/issue
2022-07-29T17:32:28
2025-04-01T04:32:43.658823
{ "authors": [ "hhsecond", "oojo12", "pranjaldatta" ], "repo": "Lightning-AI/lightning", "url": "https://github.com/Lightning-AI/lightning/issues/13934", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
964323044
Record iteration speed with Trainer and automatically pass to loggers 🚀 Feature Currently, users are required to track iteration speed themselves. It seems like a common use case to want to display iteration speed/sec in the logging UI of choice (ex. tensorboard), so it would be very nice if the Trainer could automatically record the iteration speed/sec and pass it to a logger. Motivation I am currently implementing a callback to track iteration speed and it is quite a lot of code for such a small task. Pitch Add a flag to the Trainer to automatically track iteration speed and pass this information to the specified logger. Ex: trainer = Trainer(..., log_iteration_speed = True, logger = tb_logger) and then the Tensorboard would have plots for iter:sec/train, iter:sec/validation, iter:sec/test, etc. I recommend iter:sec vs. iter/sec because tensorboard groups plots into sections if there are /. The user can then choose to add these metrics to the hparams section of the tensorboard. This would require that the keys used for logging be exported for public consumption. Alternatives Instead of logging to a tensorboad, the Trainer could print out an average iter/sec per split at the end of training. However, this is an issue if the user manually kills the fitting process and then the average iter/sec won't be printed. Additional context https://pytorch-lightning.slack.com/archives/CRBLFHY79/p162843593629850 any update about this feature?
gharchive/issue
2021-08-09T19:58:15
2025-04-01T04:32:43.663285
{ "authors": [ "EricWiener", "dongzhuoyao" ], "repo": "Lightning-AI/lightning", "url": "https://github.com/Lightning-AI/lightning/issues/8817", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1294185022
Revert "Fix mypy typing errors in pytorch_lightning/strategies/single_tpu.py" Reverts Lightning-AI/lightning#13534 In the previous PR I explained that the condition is_overridden("on_post_move_to_device", self.lightning_module) can not be True. This is incorrect. I missed the heritage of LightningModule with ModelHooks. Indeed, ModelHooks implements on_post_move_to_device. I am very sorry for this. I think I did not see it in the unit tests because this function has been deprecated in v1.5 and will be removed in v1.7, therefore it is not tested. According to this test: https://github.com/Lightning-AI/lightning/blob/61c28cb428a13c2aea6d7f3f55e0f00431a4ea4e/tests/tests_pytorch/deprecated_api/test_remove_1-7.py#L64 Probably no need to revert it, as #13534 will be merged to 1.7 only and won't be included in 1.6.X @Cyprien-Ricque Thank you for your action! The change shouldn't have been part of your previous PR #13534, but as mentioned above, there's no need to revert the change because my PR #13548 will remove it anyway as part of #12521 and the change won't be included in 1.6.x. Could you close this PR?
gharchive/pull-request
2022-07-05T11:43:20
2025-04-01T04:32:43.667156
{ "authors": [ "Cyprien-Ricque", "akihironitta", "justusschock" ], "repo": "Lightning-AI/lightning", "url": "https://github.com/Lightning-AI/lightning/pull/13544", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1867775916
Add CodeLlama configs Pull Request Description: This pull request introduces support for various CodeLlama models and their configurations. The following CodeLlama models have been integrated into our system: Base Models: CodeLlama-7b-hf: This model's configuration can be found here. CodeLlama-13b-hf: This model's configuration can be found here. CodeLlama-34b-hf: This model's configuration can be found here. Python Models: CodeLlama-7b-Python-hf: This model's configuration can be found here. CodeLlama-13b-Python-hf: This model's configuration can be found here. CodeLlama-34b-Python-hf: This model's configuration can be found here. Instruct Models: CodeLlama-7b-Instruct-hf: This model's configuration can be found here. CodeLlama-13b-Instruct-hf: This model's configuration can be found here. CodeLlama-34b-Instruct-hf: This model's configuration can be found here. Related Issue: #464 Thanks @m0saan , I am really excited about this! Was just trying out the CodeLlama-7b-Python-hf model with generate/base.py and am getting a RuntimeError: Error(s) in loading state_dict for GPT: size mismatch for lm_head.weight: copying a param with shape torch.Size([32000, 4096]) from checkpoint, the shape in current model is torch.Size([32016, 4096]). size mismatch for transformer.wte.weight: copying a param with shape torch.Size([32000, 4096]) from checkpoint, the shape in current model is torch.Size([32016, 4096]). Have you had any luck with the other models? @rasbt hmmm, let me check again, last time I check I did not get that error! @rasbt can you check now! Thanks, that seems to load fine now. The generation is still not working well though. I tried the prompt "write a function for binary search". chat/base.py seems to repeat itself and generate/base.py also doesn't seem to produce code. I can try to download and use the larger model later. Do you have a prompt that worked well for you? The vocabulary sizes for Instruct models are incorrect, they should be 32016, not 32000. The vocabulary sizes for Instruct models are incorrect, they should be 32016, not 32000. Thank you :3 Unfortunately I get an error when trying to train with LoRA: File "/home/ntoxeg/llama-finetuning/lit_gpt/rmsnorm.py", line 19, in forward norm_x = torch.mean(x * x, dim=self.dim, keepdim=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ../aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [991,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ../aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [991,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed. […] That is with the 7B models, both base and instruct. I didn’t check others. Actually, disregard the above as I’m just getting this with LLaMA models for some reason :/ so probably unrelated. @ntoxeg You are not the first one :) #454 Another thing, for CodeLlama-34b-Instruct-hf (and probably others) you need n_query_groups=8. 
Just tested it and it seems the updated configs work now: 7B Python model With generate/base.py and python generate/base.py \ --checkpoint_dir checkpoints/codellama/CodeLlama-7b-Python-hf/ \ --prompt '"""function for binary search"""' I get """function for binary search""" left, mid, right = 0, 0, len(arr) - 1 while left <= right: mid = math.floor((right - left) / 2) if arr[mid Finetuning via finetune/lora.py works too. 34B Python model Can't run on 1 GPU or 8x A100 w/o running out of memory. However, I got it to work with "bnb.nf4" quantization on a single GPU. I.e., python generate/base.py --checkpoint_dir checkpoints/codellama/CodeLlama-34b-Python-hf/ --prompt '"""function for binary search"""' --precision bf16-true --quantize "bnb.nf4" gives """function for binary search""" l, r = 0, len(nums) - 1 while l <= r: if nums[l] == target: return l if nums[r] == target I also run out of RAM (LoRA with 472 GB and 4 GPUs) with the 34B model but quantization on a single GPU works. Tested the other checkpoints as well, and everything seems to be working. Imho this PR should be good to go 7B python generate/base.py \ --checkpoint_dir checkpoints/codellama/CodeLlama-7b-hf \ --prompt '"""function for binary search"""' """function for binary search""" l, mid, r = INF, 0, len(arr), INF while l <= r: mid = (l + r) // 2 if arr[mid] == k: Time for inference 1: 2.69 sec total, 18.59 tokens/sec Memory used: 14.02 GB 13B python generate/base.py \ --checkpoint_dir checkpoints/codellama/CodeLlama-13b-hf \ --prompt '"""function for binary search"""' """function for binary search""" if n >= len(a): return None mid = (low + high) // 2 if a[mid] == n: # Found return mid elif a[mid] > Time for inference 1: 2.12 sec total, 23.61 tokens/sec Memory used: 26.70 GB 34B python generate/base.py \ --checkpoint_dir checkpoints/codellama/CodeLlama-34b-hf \ --quantize "bnb.nf4" \ --prompt '"""function for binary search"""' """function for binary search""" def wrapper(func): def inner(arr, key): """setting mid as start""" start = 0 """setting mid as end""" end = len(arr) - Time for inference 1: 3.01 sec total, 16.59 tokens/sec Memory used: 20.81 GB 7B Python see previous comment 13B Python python generate/base.py \ --checkpoint_dir checkpoints/codellama/CodeLlama-13b-Python-hf \ --prompt '"""function for binary search"""' """function for binary search""" l,r = 0, len(nums)-1 while l<r: m = (l+r)//2 if nums[m]==target: # we just check Time for inference 1: 2.28 sec total, 21.88 tokens/sec Memory used: 26.70 GB 34B Python see previous comment 7B Instruct python generate/base.py \ --checkpoint_dir checkpoints/codellama/CodeLlama-7b-Instruct-hf \ --prompt '"""function for binary search"""' """function for binary search""" left, mid, right = INF, 0, -INF while left <= right: mid = (left + right) // 2 # integer division if arr[mid] == k: Time for inference 1: 3.13 sec total, 15.99 tokens/sec Memory used: 14.02 GB 13B Instruct python generate/base.py \ --checkpoint_dir checkpoints/codellama/CodeLlama-13b-Instruct-hf \ --prompt '"""function for binary search"""' """function for binary search""" if start >= end: # if condition satisfied return -1 mid = (start + end) // 2 if arr[mid] == target: return mid elif arr[ Time for inference 1: 2.65 sec total, 18.89 tokens/sec Memory used: 26.22 GB 34B Instruct python generate/base.py \ --checkpoint_dir checkpoints/codellama/CodeLlama-34b-Instruct-hf \ --quantize "bnb.nf4" \ --prompt '"""function for binary search"""' """function for binary search""" def wrapper(func): def inner(arr, key, low, 
high): if high <= low: return -1 mid = (high + low) // 2 if key > mid Time for inference 1: 3.00 sec total, 16.66 tokens/sec Memory used: 20.81 GB Just tested it and it seems the updated configs work now: @rasbt, what was the fix for the messy generations? @m0saan I think the updated vocabulary size may have fixed it. @m0saan I think the updated vocabulary size may have fixed it. hmmm. I wonder! bc that was the case for the CodeLlama-34b-Instruct-hf model
gharchive/pull-request
2023-08-25T23:17:54
2025-04-01T04:32:43.689016
{ "authors": [ "Andrei-Aksionov", "m0saan", "ntoxeg", "rasbt" ], "repo": "Lightning-AI/lit-gpt", "url": "https://github.com/Lightning-AI/lit-gpt/pull/472", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1372412180
Slower performance for Torchmetrics PESQ 🐛 Bug Multiprocessing capability is missing for Torchmetrics PESQ while the underlying library ludlows/python-pesq have that option. As a result, the torchmetrics version of PESQ performs slower than ludlows/python-pesq. To Reproduce Run PESQ for a batch of 100 audios with a minimum duration of 10 seconds for each audio. Code sample Python notebook code attachment here: analysis.zip Expected behavior Both the Torchmetrics PESQ and ludlows/python-pesq should take almost the same time to compute the PESQ score. Environment Python - 3.10.4 (installed using conda) OS: Ubuntu 20.04.4 LTS Below libraries installed using pip TorchMetrics - 0.9.3 PyTorch - 1.12.1 Torchaudio - 0.12.1 PESQ - 0.0.4 Additional context References ludlows/python-pesq: https://github.com/ludlows/PESQ Hi @ashinkajay, thank you for your post. As far as I know, the original PESQ code has a pretty restricting license regarding the way how PESQ can be implemented in torchmetrics (see discussion in #726) cc: @Borda @SkafteNicki Have you had any discussion about this besides the issue attached? :] Yes their license is very strict so we cannot re-implement. However, it seems like we could probably call pesq_batch from https://github.com/ludlows/PESQ instead of pesq to run the calculation in parallel. We would just need to add an n_processor argument to the modular and functional implementations. Yes their license is very strict so we cannot re-implement. However, it seems like we could probably call pesq_batch from https://github.com/ludlows/PESQ instead of pesq to run the calculation in parallel. We would just need to add an n_processor argument to the modular and functional implementations. @ashinkajay is this what you are thinking about when raising this issue?
gharchive/issue
2022-09-14T06:23:22
2025-04-01T04:32:43.697032
{ "authors": [ "SkafteNicki", "ashinkajay", "stancld" ], "repo": "Lightning-AI/metrics", "url": "https://github.com/Lightning-AI/metrics/issues/1223", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2128625687
ImportError: cannot import name 'DepthAnything' from 'depth_anything.dpt' When using run.py Hey everyone ! Trying to use Depth anything for the first time, after I install everything I finally try it out, with the following command python run.py --encoder vitl --img-path "F:\Dossiers\Depth Anything\Depth-Anything\Images" --outdir "F:\Dossiers\Depth Anything\Depth-Anything\Images\output" --pred-only --grayscale Unfortunately, I just get this: Traceback (most recent call last): File "F:\Dossiers\Depth Anything\Depth-Anything\run.py", line 10, in <module> from depth_anything.dpt import DepthAnything ImportError: cannot import name 'DepthAnything' from 'depth_anything.dpt' (F:\Dossiers\Depth Anything\Depth-Anything\depth_anything\dpt.py) I am on Windows 10 with an RTX GPU, using python 3.10.6 Do you have any suggestions ? I'd love to use Depth-Anything for video compositing but I'm currently stuck Thanks! (Realized I somehow had an old version of dpt.py mixed in with a new run.py) what‘s your torch version? i got an error Traceback (most recent call last): File "run.py", line 34, in depth_anything = DepthAnything.from_pretrained('LiheYoung/depth_anything_{}14'.format(args.encoder)).to(DEVICE).eval() File "/root/miniconda3/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) File "/root/miniconda3/lib/python3.8/site-packages/huggingface_hub/hub_mixin.py", line 558, in from_pretrained instance = cls._from_pretrained( File "/root/miniconda3/lib/python3.8/site-packages/huggingface_hub/hub_mixin.py", line 778, in _from_pretrained model = cls(**model_kwargs) TypeError: init() missing 1 required positional argument: 'config' how to solve?
gharchive/issue
2024-02-10T17:46:44
2025-04-01T04:32:43.704633
{ "authors": [ "FoxTrotte", "Guansiyu-glitch" ], "repo": "LiheYoung/Depth-Anything", "url": "https://github.com/LiheYoung/Depth-Anything/issues/89", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1074075990
A question about the labels Hi, while reading your source code there are a few places I don't quite understand, and I'd appreciate your guidance: 1. In change_detection.py, CLASSES = ['未变化区域', '水体', '地面', '低矮植被', '树木', '建筑物', '运动场'] (unchanged area, water, ground, low vegetation, trees, buildings, playground), and the labels provided by SenseTime likewise state that unchanged areas are labelled 0, water 1, ground 2, low vegetation 3, trees 4, buildings 5 and playgrounds 6. However, in the following piece of code: if self.mode == 'train': gt_mask1 = np.array(Image.open(os.path.join(self.root, 'label1', id))) mask_bin = np.zeros_like(gt_mask1) mask_bin[gt_mask1 == 0] = 1 mask_bin = Image.fromarray(mask_bin) why is mask_bin set to 1 wherever the label is 0? Doesn't label 0 mean that the region has not changed? 2. In the training code train.py: out1, out2, out_bin = self.model(img1, img2) loss1 = self.criterion(out1, mask1 - 1) loss2 = self.criterion(out2, mask2 - 1) when computing the loss, why do mask1 and mask2 need to be decremented by 1? In our code, 0 denotes changed and 1 denotes unchanged, and the corresponding inversion is applied at test time. Of course, you could also follow the original dataset and skip the inversion. Because the indices of the outputs out1 and out2 start from 0 while the label indices of mask1 and mask2 start from 1, they need to be brought into agreement. Regarding question 2, there is still something I don't quite understand: with CLASSES = ['未变化区域', '水体', '地面', '低矮植被', '树木', '建筑物', '运动场'], out1 and out2 should be plain semantic segmentation, so the unchanged area should also be trained as a class (background?) with index 0; colourizing the official labels label1 and label2 and inspecting them, the indices appear to start from 0. The semantic segmentation task only contains the classes ['水体', '地面', '低矮植被', '树木', '建筑物', '运动场'] (water, ground, low vegetation, trees, buildings, playground); there is no "background" or "unchanged area" class, because the "unchanged area" class cannot be determined from a single image, while our semantic segmentation is performed on a single input image. But the label1 and label2 values in the official data I downloaded really do run from 0 to 6, not 1 to 6; could it be that the labels I downloaded are not the official originals? In the dataset, 0 denotes the unchanged area, and this class is not used in semantic segmentation. In semantic segmentation, 0-5 represent 1-6 of the original dataset, which is why 1 is subtracted. May I ask whether the labels for the val set could also be provided? Sorry, only the training-set labels were provided in the competition.
gharchive/issue
2021-12-08T06:57:48
2025-04-01T04:32:43.712784
{ "authors": [ "LiheYoung", "albert28wen", "shanhuhaifeng" ], "repo": "LiheYoung/SenseEarth2020-ChangeDetection", "url": "https://github.com/LiheYoung/SenseEarth2020-ChangeDetection/issues/15", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2762945509
🛑 Siyntra Minecraft Profile is down In 53e2a8d, Siyntra Minecraft Profile (https://auth.siyntrastudios.com/session/minecraft/profile) was down: HTTP code: 0 Response time: 0 ms Resolved: Siyntra Minecraft Profile is back up in e384cc6 after 23 minutes.
gharchive/issue
2024-12-30T09:50:32
2025-04-01T04:32:43.721625
{ "authors": [ "IkyMax" ], "repo": "Limbo-Studios/siyntra-status-page", "url": "https://github.com/Limbo-Studios/siyntra-status-page/issues/101", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1471437861
ltfs_ordered_copy LTFS check error When I try running ltfs_ordered_copy to copy files to an LTFS formatted tape I get the following error: Check destination:startswith first arg must be bytes or a tuple of bytes, not str Copying to non-LTFS locations seems to work as expected. If I print the value that xattr returns from an LTFS tape I get b'LTFS LE'. Should xattr be returning a string here? When running just the code related to generating and checking sig, I don't get an error if I convert sig to a string before checking if it starts with "LTFS" or if I change if sig.startswith("LTFS"): to if sig.startswith(b"LTFS"):. I'm running the 2.4.5.0 release version on Fedora 36. Thank you for your info. I confirmed.
gharchive/issue
2022-12-01T14:43:07
2025-04-01T04:32:43.740870
{ "authors": [ "jycm205", "piste-jp-ibm" ], "repo": "LinearTapeFileSystem/ltfs", "url": "https://github.com/LinearTapeFileSystem/ltfs/issues/370", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1228739422
🛑 MortyProxy is down In 505a6d7, MortyProxy (https://proxy.linerly.tk) was down: HTTP code: 0 Response time: 0 ms Resolved: MortyProxy is back up in 4ba7436.
gharchive/issue
2022-05-07T22:46:16
2025-04-01T04:32:43.743223
{ "authors": [ "Linerly" ], "repo": "Linerly/status", "url": "https://github.com/Linerly/status/issues/1346", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
388821136
Implement monitoring of new emails Resolves #4 This PR implements notifications of unread emails. The emails are marked as seen when discovered, because you would not want to be notified every minute that there is new mail. The default check my email intent will not mark them as seen. The monitoring is implemented via repeating events and will therefore look for new mail every 60 seconds. When stopping mycroft and reopening again, the notification settings are preserved and the skill will continue to look for new mail if it was instructed to do so. It's also possible to supplement an email-address or name and the skill will only notify of emails by that sender. However you have to speak very clearly to get a good speech to text translation for emails. Signed-off-by: florian floriankothmeier@web.de It's not possible to collect arguments with intent_file_handlers, e.g. "tell me when I get an email from bob" is impossible without intent builders Will change that so it can accept emails and sender names. They should be matched case insensitive and when it's part of a name, e.g. Bob Smith will be matched by bob. What is your preferred method for doing that? I don't know of another way than polling. I don't know when I'll be able to do this, but will probably start working on this tomorrow. Thanks for responding. Yes it is: In a .intent file: tell me when I get email from {name} Then, in the intent: name = message.data.get("name") The fuzzy matching method might help with this: Mycroft Core docs Sorry for not making this clear - polling is a great way to find unread email - I just didn't think about enabling polling and having an intent for that - just about tell me when I have email from bob without the enable polling intents I hope I addressed most of your concerns. However I still could not figure out how you expected the intent to work? If you've only got tell me when I have email from bob, how would you turn it off, if you don't want to be notified anymore? Oh, sorry I forgot about that. Keep the intents - just take out the parts of looking for all new messages. Thanks! I'd like to get notified of every email (I get very few mail). What is the reason you want it removed? If you'd want it to be more explicit (e.g. Hey mycroft, look for **all** new mail), that would be possible too. Okay, thanks I'll merge this now, but I'm going to merge it into a feature branch, because I want to clean a few things up before I upload it to the skill store. Thanks again for this!
gharchive/pull-request
2018-12-07T21:31:07
2025-04-01T04:32:43.763719
{ "authors": [ "Dragoncraft89", "LinusS1" ], "repo": "LinusS1/email-skill", "url": "https://github.com/LinusS1/email-skill/pull/8", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
942480661
Issue#110 Handle empty telecom correctly Small change and test to ensure empty telecom information processes correctly -- it should create no telecom section at all. (There are only four actual lines of code changes in the .yaml. It looks like more because I added a test, and added hints to the Techniques documentation) Updated documentation to be clearer. @LisaWellman confirms one reviewer sufficient for this one.
gharchive/pull-request
2021-07-12T21:53:55
2025-04-01T04:32:43.773406
{ "authors": [ "cragun47" ], "repo": "LinuxForHealth/hl7v2-fhir-converter", "url": "https://github.com/LinuxForHealth/hl7v2-fhir-converter/pull/128", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1092752188
🛑 MARC is down In 8674bf2, MARC (https://marc.flitswallet.app/api) was down: HTTP code: 502 Response time: 1009 ms Resolved: MARC is back up in 424a098.
gharchive/issue
2022-01-03T19:14:15
2025-04-01T04:32:43.775761
{ "authors": [ "Liquid369" ], "repo": "Liquid369/FlitsUptime", "url": "https://github.com/Liquid369/FlitsUptime/issues/395", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1325296163
id serialization I have made an interface for entities (kotlinx serialization) interface WithId { val id: String //I use a value class here actually } All other entities inherit it. How do I deal with _id globally, without @SerialName("_id") everywhere? I would say this is more a kotlinx.serialization question (see https://github.com/Kotlin/kotlinx.serialization/issues/33). I think it's not feasible for now.
gharchive/issue
2022-08-02T05:12:01
2025-04-01T04:32:43.835186
{ "authors": [ "Lewik", "zigzago" ], "repo": "Litote/kmongo", "url": "https://github.com/Litote/kmongo/issues/359", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1434161921
New CLI Note: I don't yet know how to include custom hooks yet. The underlying library provides 4 pairs of before/after events you can hook to. Suggestions would be appreciated. import Add an existing market to the manager Arguments: one of: file.{json, yaml} --stdin --{json,yaml} default: try import.yaml, import.json account REQUIRED create Create and add a market to the manager Arguments: see above --queue-if-no-funds --queue Note it should be a priority queue, ideally of priority / cost. Queue is stored in the database, and on a run this is iterated through until the next item can't be paid for quick-create Lets you build a market from command line using JSON to initialize the resolution rules. Best for simpler markets in interactive sessions Arguments: type REQUIRED choices=(binary, pseudo-numeric, free-response, multiple-choice) --resolve-when {rule name} {JSON init dict} nargs=2 action=append if not given, do on market closs --resolve-to {rule name} {JSON init dict} nargs=2 action=append REQUIRED --notes str --close-on datestring REQUIRED --account {id or Manifold username} REQUIRED scan Arguments: --disable-all (default) --enable-all --enable-[scanner] run Hopefully using the actual event loop Arguments: --daemon also scanner arguments --scan-period (if daemon) edit --{json, yaml, repl} The idea is to: if repl, open a repl to edit the market manually otherwise, dump market in import format to temp file open editor on close, if edited, try to "import" new market remove Arguments: id nargs=+ --assume-yes (-y) list Have options for various filters, verbosity of listings, format Bounty market: https://manifold.markets/LivInTheLookingGlass/bounty-which-of-your-suggestions-wi Geez, just the parsing for this is ~100 lines. Not even with help texts This is a work in progress, so I'm not committing yet """Runner script for ManifoldMarketManager. Includes a variety of command line options which can be explored by invoking with the `--help` flag. Note that the behavior of this runner script is not yet stable. Many changes are going to occur between its current state and the desired production behavior. These changes include: - [ ] Multiple Account Support - [ ] Create markets using JSON - [ ] Import markets using JSON - [ ] Queue markets to be created in the future - [ ] Run hooks on various Markets, ex: when they are - [ ] queued - [ ] created - [ ] resolved - [ ] cancelled - [ ] Use an event loop (maybe asyncio) rather that a sleep loop - [ ] Allow rules to store data in the db (and clean up after them) """ from __future__ import annotations from argparse import ArgumentParser from logging import DEBUG, INFO, basicConfig, getLogger from os import getenv from . 
import consts from .application import (create_command, edit_command, import_command, list_command, loop_command, quick_create_command, remove_command, run_command, scan_command) # Enable logging basicConfig( format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", level=(INFO if not getenv("DEBUG") else DEBUG), filename=getenv("LogFile"), ) logger = getLogger(__name__) main_parser = ArgumentParser() main_parser.add_argument('-v', '--verbose', action='count', default=0) subparsers = main_parser.add_subparsers() import_parser = subparsers.add_parser('import') import_parser.add_argument('account', action='store', type=str) import_parser.add_argument('file', action='store', type=str, nargs='?') import_parser.add_argument('--interactive', action='store_true') group = import_parser.add_mutually_exclusive_group(required=False) group.add_argument('--yaml', action='store_true') group.add_argument('--json', action='store_true') group.add_argument('--repl', action='store_true') import_parser.set_defaults(func=import_command) # TODO: add templates here # must finish import_parser first create_parser = subparsers.add_parser('create', parents=[import_parser], add_help=False) create_parser.add_argument('--queue-if-no-funds', action='store_true') create_parser.add_argument('--queue', action='store_true') create_parser.set_defaults(func=create_command) quick_create_parser = subparsers.add_parser('quick-create') quick_create_parser.add_argument( 'type', type=str, choices=["BINARY", "PSEUDO_NUMERIC", "FREE_RESPONSE", "MULTIPLE_CHOICE"] ) quick_create_parser.add_argument('account', action='store', type=str) quick_create_parser.add_argument('close-on', action='store', type=str) quick_create_parser.add_argument( '--resolve-when', nargs=2, action='append', help="Should be a qualified rule name, followed by a JSON string of its initializers" ) quick_create_parser.add_argument( '--resolve-to', nargs=2, action='append', required=True, help="Should be a qualified rule name, followed by a JSON string of its initializers" ) quick_create_parser.add_argument('--notes', type=str, action='store', default='') quick_create_parser.set_defaults(func=quick_create_command) scan_parser = subparsers.add_parser('scan') scan_parser.add_argument('--disable-all', action='store_false', dest='all_scanners', default=True) for scanner in consts.AVAILABLE_SCANNERS: scan_parser.add_argument( f'--enable-{scanner.replace(".", "-")}', dest='scanners', action='append_const', const=scanner ) scan_parser.set_defaults(func=scan_command) run_parser = subparsers.add_parser('run') run_parser.add_argument('--disable-all', action='store_false', dest='all_scanners', default=True) for scanner in consts.AVAILABLE_SCANNERS: run_parser.add_argument(f'--enable-{scanner}', dest='scanners', action='append_const', const=scanner) run_parser.set_defaults(func=run_command) loop_parser = subparsers.add_parser('loop', parents=[run_parser], add_help=False) loop_parser.set_defaults(func=loop_command) edit_parser = subparsers.add_parser('edit') edit_parser.add_argument('id', nargs='+', type=str) edit_parser.set_defaults(func=edit_command) remove_parser = subparsers.add_parser('remove') remove_parser.add_argument('id', nargs='+', type=str) remove_parser.add_argument('--assume-yes', '-y', action='store_true') remove_parser.set_defaults(func=remove_command) list_parser = subparsers.add_parser('list') list_parser.add_argument('--stats', action='store_true') list_parser.set_defaults(func=list_command) args = main_parser.parse_args() exit(args.func(**vars(args))) 
Okay, it's at the point where it is probably at feature parity. I have limited ability to test, but the only failure I've seen is that the MTG market got improperly loaded.
Current complaints with this implementation:
- Needs a lot more help text
- The loop command should be an event-loop version
- list needs more levels of verbosity
- list needs actual statistics to print
- import, create and quick-create can't be made until #10
- edit won't be implemented for quite some time
- scan won't do anything until #11
I guess I should make each of these into separate issues, then close this one?
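Since one of the open items above is turning the loop command into a real event loop, here is a minimal sketch of what an asyncio-based version could look like. It is illustrative only: load_tracked_markets and process_market are invented placeholders, not functions that exist in this package.

```python
# Sketch of an asyncio-based `loop` command (illustrative, not package code).
import asyncio
from typing import Any, List


def load_tracked_markets() -> List[Any]:
    """Hypothetical stand-in for reading the tracked markets from the database."""
    return []


async def process_market(market: Any) -> None:
    """Hypothetical stand-in for checking one market's rules and resolving it."""
    await asyncio.sleep(0)  # placeholder for real async I/O (API calls, db writes)


async def run_forever(scan_period: float) -> None:
    # Re-scan all markets every `scan_period` seconds without blocking the loop,
    # instead of sleeping in a plain while/sleep loop.
    while True:
        markets = load_tracked_markets()
        await asyncio.gather(*(process_market(m) for m in markets))
        await asyncio.sleep(scan_period)


if __name__ == "__main__":
    asyncio.run(run_forever(scan_period=60.0))
```

The same coroutine could back both the run-once and the daemon (--scan-period) variants by wrapping or skipping the outer while loop.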
gharchive/issue
2022-11-03T06:10:25
2025-04-01T04:32:43.852407
{ "authors": [ "LivInTheLookingGlass" ], "repo": "LivInTheLookingGlass/ManifoldMarketManager", "url": "https://github.com/LivInTheLookingGlass/ManifoldMarketManager/issues/31", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1581099627
Add a way to detect key release in livesplit-hotkey Currently is only possible to detect keys pressed, but not the release event. Had a look at the linux implementations and should be simple (didn't check mac/win/wasm), but I don't think that there's a way to make this backwards compatible other than creating another register/unregister methods. Would you consider something like this? what would be a good api? new methods? an argument in the callback with some enum with KeyDown/KeyUp? another optional callback for the release? Mmh, I'm not sure this makes sense for our use case, cause it's not a generic keyboard API and instead specifically a hotkey listener where you want to know when the hotkey is pressed, especially in our case where the timing is really important.
gharchive/issue
2023-02-12T03:06:21
2025-04-01T04:32:43.858458
{ "authors": [ "CryZe", "Roger" ], "repo": "LiveSplit/livesplit-core", "url": "https://github.com/LiveSplit/livesplit-core/issues/633", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1289619726
🛑 invest is down In d0d88f5, invest (https://blog-pos-17/) was down: HTTP code: 0 Response time: 0 ms Resolved: invest is back up in e197b4d.
gharchive/issue
2022-06-30T05:49:42
2025-04-01T04:32:43.894812
{ "authors": [ "LloydGain" ], "repo": "LloydGain/UppTIME", "url": "https://github.com/LloydGain/UppTIME/issues/1763", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
630928556
14.2 I only found this: Source :+1:
gharchive/issue
2020-06-04T15:25:40
2025-04-01T04:32:43.902785
{ "authors": [ "LoDThe", "shoraii" ], "repo": "LoDThe/hse-tex", "url": "https://github.com/LoDThe/hse-tex/issues/47", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1102565900
Updated to print outstanding balance for each invoice Sample C# program now prints outstanding balance for each invoice (in addition to invoice id and company name). Formatted outstanding balance amount to be more reader-friendly. Screenshot (console): LGTM!
gharchive/pull-request
2022-01-13T23:24:22
2025-04-01T04:32:43.907447
{ "authors": [ "nicoleajoy", "tspence" ], "repo": "Lockstep-Network/lockstep-sdk-examples", "url": "https://github.com/Lockstep-Network/lockstep-sdk-examples/pull/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1643135451
🛑 SentinelDB is down In ee0292f, SentinelDB (https://db.logsentinel.com) was down: HTTP code: 502 Response time: 321 ms Resolved: SentinelDB is back up in dddff7f.
gharchive/issue
2023-03-28T03:25:05
2025-04-01T04:32:43.924024
{ "authors": [ "Glamdring" ], "repo": "LogSentinel/status", "url": "https://github.com/LogSentinel/status/issues/874", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
215940791
Strange "object does not exist error" for SWI-Prolog 7 For following code, I got a strange object 'http_server' does not exist error in SWI-Prolog 7.5.2-16-gdc20d77 (also in 7.2.3), but it's ok in SWI 6.6.6. % cat sources/loader.lgt :- use_module(library(http/thread_httpd), []). :- use_module(library(http/http_dispatch), []). :- initialization(( logtalk_load(http_server) )). % cat sources/http_server.lgt :- object(http_server). :- public(run/0). run :- thread_httpd:http_server( http_dispatch:http_dispatch, [port('0.0.0.0':8000)]). :- end_object. % cat run.sh #!/bin/sh ROOT="$(dirname "$(readlink -f "$0")")" export LOGTALKHOME="$ROOT/../logtalk3" export LOGTALKUSER="$ROOT/../logtalk3" GOAL="{'$ROOT/sources/loader.lgt'}, http_server::run" export PATH="$HOME/prolog/swi/master/bin/:$PATH" exec "$LOGTALKHOME"/integration/swilgt.sh -q -g "$GOAL" % ./run.sh Warning: /home/dram/logtalk/http-server/sources/http_server.lgt:10: /home/dram/logtalk/http-server/sources/.lgt_tmp/loader_10247014_lgt.pl:6: Initialization goal failed ERROR: /home/dram/logtalk/http-server/sources/http_server.lgt:1: -g {'/home/dram/logtalk/http-server/sources/loader.lgt'}, http_server::run: object `http_server' does not exist Sorry, forgot to mention Logtalk version, I'm using master branch, Logtalk 3.10.4-rc1. I can reproduce the loading failure of the http_server object. It seems to be an issue with the compilation of the qualified modules calls in the body of the run/0 predicate. That the code works on SWI-Prolog 6.6.6 but not in 7.5.2 suggest a change in the meta-predicate templates of the HTTP module predicates. I'm looking into it. As a temporary workaround, use the {}/1 compiler bypass control construct in the definition of the run/0 predicate: run :- {thread_httpd:http_server( http_dispatch:http_dispatch, [port('0.0.0.0':8000)])}. In SWI-Prolog 7.5.2 we get: ?- predicate_property(thread_httpd:http_server(_,_), meta_predicate(M)). M = http_server(1, :). The second meta-argument in this meta-predicate template is ambiguous (:). Thus, Logtalk doesn't have enough information to compile the argument. But the compiler is expected to print an error in this case and not just fail to compile the call silently. That's definitely a bug. Fixed in 79dacb08758f3b19dccaeda0914815cc00e1d7cb. Thanks for the bug report. The compiler nows reports the expected error. The best solution is to use {}/1 control construct as explained above. An alternative solution would be to override the meta-predicate template for the module predicate. Fix confirmed, thanks!
gharchive/issue
2017-03-22T03:14:26
2025-04-01T04:32:43.941994
{ "authors": [ "dram", "pmoura" ], "repo": "LogtalkDotOrg/logtalk3", "url": "https://github.com/LogtalkDotOrg/logtalk3/issues/51", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1469075334
🛑 jpvir is down In df484e2, jpvir ($JPVIR) was down: HTTP code: 0 Response time: 0 ms Resolved: jpvir is back up in f6868de.
gharchive/issue
2022-11-30T05:37:02
2025-04-01T04:32:44.013825
{ "authors": [ "LonelyJupiter" ], "repo": "LonelyJupiter/UPPTIME", "url": "https://github.com/LonelyJupiter/UPPTIME/issues/1929", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1584587503
🛑 jpvir is down In 4695b31, jpvir ($JPVIR) was down: HTTP code: 0 Response time: 0 ms Resolved: jpvir is back up in e231724.
gharchive/issue
2023-02-14T17:49:53
2025-04-01T04:32:44.016121
{ "authors": [ "LonelyJupiter" ], "repo": "LonelyJupiter/UPPTIME", "url": "https://github.com/LonelyJupiter/UPPTIME/issues/3602", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1445818943
Add per display resolution and viewcount baselines Instead of a slider the resolution in the config should be set based on the connected Looking Glass display, with options for increased or decreased quality. In addition, the default view counts for each display should be set based on the connected display +1. Some thought: I think there would also be ways to provide these "good defaults" to the browser without having to install the "Websocket Driver" which I feel is kind of problematic in corporate environments and such. For example, if LG would also advertise itself as Gamepad, the existance of it and some "default for this device" values could be transmitted without even having to install a driver and without the user having to choose anything.
gharchive/issue
2022-11-11T17:54:14
2025-04-01T04:32:44.031686
{ "authors": [ "BryanChrisBrown", "hybridherbst" ], "repo": "Looking-Glass/looking-glass-webxr", "url": "https://github.com/Looking-Glass/looking-glass-webxr/issues/16", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
716204415
Dollar value NIOX token Dollar value of the NIOX token is always at 0.00 Could you provide a screenshot? sure
gharchive/issue
2020-10-07T05:19:12
2025-04-01T04:32:44.059239
{ "authors": [ "koermiz", "xiaowheat" ], "repo": "Loopring/dexwebapp", "url": "https://github.com/Loopring/dexwebapp/issues/201", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1786657142
🛑 Link Shortener is down In adf6a0e, Link Shortener (https://lr-link.vercel.app) was down: HTTP code: 504 Response time: 10820 ms Resolved: Link Shortener is back up in cb3cf6e.
gharchive/issue
2023-07-03T18:44:35
2025-04-01T04:32:44.113963
{ "authors": [ "LordRonz" ], "repo": "LordRonz/status", "url": "https://github.com/LordRonz/status/issues/3337", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1921024010
🛑 Link Shortener is down In 65bfe73, Link Shortener (https://lr-link.vercel.app) was down: HTTP code: 504 Response time: 10639 ms Resolved: Link Shortener is back up in edbf38e after 2 minutes.
gharchive/issue
2023-10-01T23:14:03
2025-04-01T04:32:44.116334
{ "authors": [ "LordRonz" ], "repo": "LordRonz/status", "url": "https://github.com/LordRonz/status/issues/4236", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2052046130
🛑 Link Shortener is down In d6b4f09, Link Shortener (https://lr-link.vercel.app) was down: HTTP code: 504 Response time: 10794 ms Resolved: Link Shortener is back up in 15ec69a after 7 minutes.
gharchive/issue
2023-12-21T09:30:19
2025-04-01T04:32:44.118635
{ "authors": [ "LordRonz" ], "repo": "LordRonz/status", "url": "https://github.com/LordRonz/status/issues/4302", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1764059299
Dev@prototype 1.0.0 The basic idea and banner layout are created, ready to refactor on future PRs and commits. This pull request has been deployed to Vercel. Latest commit: 584e8a4 ✅ Preview: https://marknow-lored-n6imxvo0g-guz013.vercel.app 🔍 Inspect: https://vercel.com/guz013/marknow-lored/GmeiRx2kQBGy1bhDqHcfFq6hA9m9 View Workflow Logs
gharchive/pull-request
2023-06-19T18:58:40
2025-04-01T04:32:44.121007
{ "authors": [ "Guz013" ], "repo": "LoredDev/MarkNow", "url": "https://github.com/LoredDev/MarkNow/pull/4", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1869029853
System.NullReferenceException: Object reference not set to an instance of an object.
************** Exception Text **************
System.NullReferenceException: Object reference not set to an instance of an object.
at VPet_Simulator.Core.Main.Display(IGraph graph, Action EndAction)
at VPet_Simulator.Core.Main.DisplayNomal()
at VPet_Simulator.Windows.MainWindow.<>c__DisplayClass118_0.b__25(Object x, EventArgs y)
at System.Windows.Forms.MenuItem.OnClick(EventArgs e)
at System.Windows.Forms.MenuItem.MenuItemData.Execute()
at System.Windows.Forms.Command.Invoke()
at System.Windows.Forms.NotifyIcon.WndProc(Message& msg)
at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
The character does not appear on screen right after startup; it seems to show up only once it performs its first animation.
I don't quite follow, how is this triggered?
The character does not appear at first, and when I click "reset state" this error pops up. The character only shows up once it plays an animation (for example crouching or crawling), or when an animation is played from the developer console.
After going over it again, I think the problem is not caused by the waiting but by a missing system FIPS component, as described here: https://blog.csdn.net/m0_46868092/article/details/117436852 (following it up to part five is enough).
gharchive/issue
2023-08-28T06:00:21
2025-04-01T04:32:44.127256
{ "authors": [ "LorisYounger", "zhuhiki" ], "repo": "LorisYounger/VPet", "url": "https://github.com/LorisYounger/VPet/issues/133", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
743033170
Email Server: Investigate why Email server is not working all the time
The email backup at 1&1 is no longer available (check with Gerald).
We incorrectly trigger SMTP backup usage when ev-server sends an email to an email account that no longer exists. The workaround, until SMTP error handling is added to ev-server, is either:
- to delete the matching account on the tenant, or
- to disable notifications on the tenant's matching account.
The SMTP backup has been moved to AWS as part of this issue.
If we can detect this at 100%, we can mark the user as inactive and disable all the notifications in his profile.
Maybe not the user; we can't say the user is inactive because of SMTP error 5.1.1, he may just have moved his email address. But you can for sure detect if the error is repeated, then mark the email as invalid, stop sending email notifications, and warn the user with a new dedicated push notification for an invalid email.
Ok, then it's a dedicated dev, new issue.
Yes, I will only add SMTP error handling to solve this issue and use it to decide whether to trigger SMTP backup usage.
@jerome-benoit All your PRs are 'draft' except the mobile one, is it on purpose?
Yes, the mobile app needs to handle both notification names first and be deployed in production. Once done, the renaming will have no impact on it. The rest is going to be deployed side by side at the same time.
Both solutions should work, we should not wait. Just provide a comment to remove it after deployment.
I can duplicate the notification code server side and send them with both names. The mobile app will then not need to be updated.
If you send the notif twice, the user will receive 2 notifs on his mobile app, and if you touch one you will navigate to the app where the processing happens. I just checked the mobile code and there are no actions related to this notif, it only opens the app. So the new notif will do the same, and unknown notif IDs are currently ignored. So you can deploy this backend version. You see now how complicated it can be to handle a simple renaming of a variable.
gharchive/issue
2020-11-14T16:24:25
2025-04-01T04:32:44.220802
{ "authors": [ "LucasBrazi06", "jerome-benoit" ], "repo": "LucasBrazi06/ev-dashboard", "url": "https://github.com/LucasBrazi06/ev-dashboard/issues/2090", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
771139571
Allow for a fixed color for icons Currently, icons are linked to a SV tile and changes color based on thresholds. Allow for a fixed color override. Use case is for icons like a help icon that you want as a fixed color and not based on a SV tile and thresholds. Or an icon that links to another dashboard. You would need to make the Link and Base options optional. Currently I have: !PU(svg):icon=help-bubble;link=lnk4;base=low;warn=0;crit=3;url=#dashboard;id=0ac9409a-3e01-456b-b2ff-1e401695dbd5 Maybe allow this: !PU(svg):icon=help-bubble;color=blue;url=#dashboard;id=0ac9409a-3e01-456b-b2ff-1e401695dbd5 Released in 1.36
gharchive/issue
2020-12-18T20:05:17
2025-04-01T04:32:44.223657
{ "authors": [ "LucasHocker", "TechShady" ], "repo": "LucasHocker/DynatraceDashboardPowerups", "url": "https://github.com/LucasHocker/DynatraceDashboardPowerups/issues/18", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2321552809
LuckPerms not working with PlaceholderAPI Description I used /papi ecloud download LuckPerms followed by /papi reload as per the wiki and receive this error: Failed to load expansion class LuckPermsExpansion (Is a dependency missing?) Reproduction Steps Using PlaceholderAPI file PlaceholderAPI-2.11.6.jar and LuckPerms file LuckPerms-Bukkit-5.4.128.jar all in plugins file running /papi ecloud download LuckPerms followed by /papi reload Expected Behaviour [PlaceholderAPI] Placeholder expansion registration initializing... [PlaceholderAPI] Fetching available expansion information... [PlaceholderAPI] Successfully registered external expansion: (name of expansion) Server Details paper-496(MC: 1.20.4) LuckPerms Version 5.4.128 Logs and Configs papi ecloud download LuckPerms [18:37:12 INFO]: Successfully downloaded expansion LuckPerms [5.4-R2] to file: Expansion-luckperms.jar Make sure to type /papi reload to enable your new expansion! [18:37:12 INFO]: [PlaceholderAPI] Fetching available expansion information... papi reload [18:37:20 INFO]: [PlaceholderAPI] Placeholder expansion registration initializing... [18:37:20 INFO]: [PlaceholderAPI] Fetching available expansion information... [18:37:20 INFO]: [PlaceholderAPI] Successfully registered external expansion: essentials [1.5.2] [18:37:20 ERROR]: [PlaceholderAPI] Failed to load expansion class LuckPermsExpansion (Is a dependency missing?) java.lang.NoClassDefFoundError: me/lucko/luckperms/placeholders/LPPlaceholderProvider at me.lucko.luckperms.placeholders.LuckPermsExpansion.register(LuckPermsExpansion.java:58) ~[?:?] at me.clip.placeholderapi.expansion.manager.LocalExpansionManager.register(LocalExpansionManager.java:193) ~[PlaceholderAPI-2.11.6.jar:?] at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) ~[?:?] at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179) ~[?:?] at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1708) ~[?:?] at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?] at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?] at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) ~[?:?] at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?] at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) ~[?:?] at me.clip.placeholderapi.expansion.manager.LocalExpansionManager.lambda$registerAll$4(LocalExpansionManager.java:366) ~[PlaceholderAPI-2.11.6.jar:?] at me.clip.placeholderapi.util.Futures.lambda$null$0(Futures.java:46) ~[PlaceholderAPI-2.11.6.jar:?] at org.bukkit.craftbukkit.v1_20_R3.scheduler.CraftTask.run(CraftTask.java:101) ~[paper-1.20.4.jar:git-Paper-496] at org.bukkit.craftbukkit.v1_20_R3.scheduler.CraftScheduler.mainThreadHeartbeat(CraftScheduler.java:482) ~[paper-1.20.4.jar:git-Paper-496] at net.minecraft.server.MinecraftServer.tickChildren(MinecraftServer.java:1646) ~[paper-1.20.4.jar:git-Paper-496] at net.minecraft.server.dedicated.DedicatedServer.tickChildren(DedicatedServer.java:447) ~[paper-1.20.4.jar:git-Paper-496] at net.minecraft.server.MinecraftServer.tickServer(MinecraftServer.java:1525) ~[paper-1.20.4.jar:git-Paper-496] at net.minecraft.server.MinecraftServer.runServer(MinecraftServer.java:1226) ~[paper-1.20.4.jar:git-Paper-496] at net.minecraft.server.MinecraftServer.lambda$spin$0(MinecraftServer.java:319) ~[paper-1.20.4.jar:git-Paper-496] at java.lang.Thread.run(Thread.java:1583) ~[?:?] 
Caused by: java.lang.ClassNotFoundException: me.lucko.luckperms.placeholders.LPPlaceholderProvider
at java.net.URLClassLoader.findClass(URLClassLoader.java:445) ~[?:?]
at java.lang.ClassLoader.loadClass(ClassLoader.java:593) ~[?:?]
at java.lang.ClassLoader.loadClass(ClassLoader.java:526) ~[?:?]
... 20 more
Extra Details
I was able to add the EssentialsX placeholder using the same method with positive results, working as expected. I am logging this here because of PlaceholderAPI's common issues page: https://wiki.placeholderapi.com/common-issues/
Upload your full latest.log to a paste site such as pastes.dev
This should work, I dropped it in here: https://gist.github.com/SulkyWhale/36730035173135ab878ad0cc96dd7ac2
Follow the steps below to resolve the issue:
1. Stop your server.
2. Delete the /plugins/LuckPerms/libs/ directory.
3. Restart your server.
I just tried that and it does not work: when starting the server, the directory is added back at the line [LuckPerms] Enabling LuckPerms v5.4.128, and the error reoccurs.
gharchive/issue
2024-05-28T17:09:37
2025-04-01T04:32:44.244479
{ "authors": [ "ImDarkLaw", "SulkyWhale", "powercasgamer" ], "repo": "LuckPerms/LuckPerms", "url": "https://github.com/LuckPerms/LuckPerms/issues/3900", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
727022892
Update 10057.cpp Added comments. @LCH030 Good Job!👍 I've merged the PR! Congratulations on completing your first PR!🎉🎉 @all-contributors please add @LCH030 for code
gharchive/pull-request
2020-10-22T03:53:58
2025-04-01T04:32:44.246266
{ "authors": [ "LCH030", "LuckyPigeon" ], "repo": "LuckyPigeon/CPE_Previous_Questions", "url": "https://github.com/LuckyPigeon/CPE_Previous_Questions/pull/55", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
456711849
2.0.0 Release Documentation This commit updates the documentation to reflect the additional methods and request queue added for 2.0.0. Coverage increased (+0.3%) to 100.0% when pulling 57f767fd359e741838d0812219b8e93e11ae5dde on version-two-documentation into 5a41f6d529d6dcae18dc0da1aa695fb9a76eecbf on master.
gharchive/pull-request
2019-06-17T02:10:57
2025-04-01T04:32:44.261993
{ "authors": [ "Luidog", "coveralls" ], "repo": "Luidog/fms-api-client", "url": "https://github.com/Luidog/fms-api-client/pull/45", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2377688382
feat(cursorhold): #4 added cursorhold event
Closes: #4
I really don't know how to do this resolve stuff and reviewing in a PR, because this is my first time.
No worries xD I'll look at your comments and changes in a few hours :) But thanks a lot for the effort!
What you did is quite similar, I can delete this patch if you say so.
However you like! I don't mind deleting my PR and merging yours. But if you'd prefer if we merge mine, then that's also okay for me.
You can check what you want, it's your repo, and yeah I also don't mind becoming a contributor. Btw did you check: https://github.com/lewis6991/hover.nvim (I remember the UI was quite similar) boo: hover:
Hey there, what happened?
Oh, I'm so sorry! I completely forgot this PR. I just started writing my master's thesis and I'm currently a bit short on time. But I'll try my best to look into your comments and code on Wednesday.
Oh no problemo, I will be on that train in a few months too, though mine will be a bachelor's thesis. Good luck!
gharchive/pull-request
2024-06-27T09:49:42
2025-04-01T04:32:44.268261
{ "authors": [ "LukasPietzschmann", "daUnknownCoder" ], "repo": "LukasPietzschmann/boo.nvim", "url": "https://github.com/LukasPietzschmann/boo.nvim/pull/6", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
116411519
Content blacked out on expansion in Firefox 42, OSX 10.10.5 Love this accordion but seeing an issue in Firefox 42 on OSX 10.10.5. Not sure if there is a difference in Windows or not but better to be explicit than not. When expanding content all the text gets a black highlight over top the text making it unreadable. See attached image. This bug is also present viewing the demo page. Override the will-change attribute of .vAccordion--default v-pane-content > div to will-change: auto @kylealwyn made the change v-pane-content > div { padding-bottom: $v-accordion-spacing; will-change: auto; opacity: 0; transform: translate3d(0, 30px, 0); transition: transform $v-pane-expand-duration, opacity $v-pane-expand-duration; } The solid black overlay is removed but there is still a flicker. When the menu expands it will flicker the black overlay than it will be removed.
gharchive/issue
2015-11-11T20:06:59
2025-04-01T04:32:44.271428
{ "authors": [ "kylealwyn", "wsfuller" ], "repo": "LukaszWatroba/v-accordion", "url": "https://github.com/LukaszWatroba/v-accordion/issues/30", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
141279768
Hi, how can i change + & - sign with up and down arrow? Your v-accordion is awesome thanks ! Take a look at #31 Thanks, I tried to fix this problem from yesterday. I would like to obtain these arrows. Can you help me ?
gharchive/issue
2016-03-16T13:44:30
2025-04-01T04:32:44.273346
{ "authors": [ "LukaszWatroba", "christianpanea" ], "repo": "LukaszWatroba/v-accordion", "url": "https://github.com/LukaszWatroba/v-accordion/issues/49", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1722810987
Update to opentelemetry 0.19 Issue #, if available: https://github.com/LukeMathWalker/tracing-actix-web/issues/103 Description of changes: Update dependencies, introduce a new flag for 0.19 Thanks!
gharchive/pull-request
2023-05-23T21:29:02
2025-04-01T04:32:44.276344
{ "authors": [ "LukeMathWalker", "asonix" ], "repo": "LukeMathWalker/tracing-actix-web", "url": "https://github.com/LukeMathWalker/tracing-actix-web/pull/105", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1071743103
🛑 Bank of China Australia is down In 52660e3, Bank of China Australia (https://obdevp.bank-of-china.net.au/cds-au/v1/banking/products) was down: HTTP code: 0 Response time: 0 ms Resolved: Bank of China Australia is back up in fc1ea94.
gharchive/issue
2021-12-06T05:22:23
2025-04-01T04:32:44.278792
{ "authors": [ "LukePrior" ], "repo": "LukePrior/OpenBankingUptime", "url": "https://github.com/LukePrior/OpenBankingUptime/issues/1767", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2596422573
matterbridge-zigbee2mqtt issue receiving invalid Z2M message When matterbridge-zigbee2mqtt receives an invalid Z2M message as per issue reported (https://github.com/Luligu/zigbee2mqtt-automations/issues/5). It ends up trying to convert the message to Matter and then fails. [InteractionServer] Subscription 825015395 for Session 27795: Error while sending initial data reports The number NaN cannot be converted to a BigInt because it is not an integer at BigInt (<anonymous>) at toBigInt (file:///usr/local/lib/node_modules/matterbridge/node_modules/@project-chip/matter.js/dist/esm/util/Number.js:25:38) at DataWriter.writeInt64 (file:///usr/local/lib/node_modules/matterbridge/node_modules/@project-chip/matter.js/dist/esm/util/DataWriter.js:57:53) at TlvCodec.writeUInt (file:///usr/local/lib/node_modules/matterbridge/node_modules/@project-chip/matter.js/dist/esm/tlv/TlvCodec.js:375:23) at TlvCodec.writePrimitive (file:///usr/local/lib/node_modules/matterbridge/node_modules/@project-chip/matter.js/dist/esm/tlv/TlvCodec.js:301:21) at TlvByteArrayWriter.writePrimitive (file:///usr/local/lib/node_modules/matterbridge/node_modules/@project-chip/matter.js/dist/esm/tlv/TlvSchema.js:70:14) at file:///usr/local/lib/node_modules/matterbridge/node_modules/@project-chip/matter.js/dist/esm/tlv/TlvAny.js:57:18 at Array.forEach (<anonymous>) at AnySchema.encodeTlvInternal (file:///usr/local/lib/node_modules/matterbridge/node_modules/@project-chip/matter.js/dist/esm/tlv/TlvAny.js:37:15) at file:///usr/local/lib/node_modules/matterbridge/node_modules/@project-chip/matter.js/dist/esm/tlv/TlvArray.js:25:51 This repeats 3 times, after which the controller is unsubscribed: [SubscriptionHandler] Sending update failed 3 times in a row, canceling subscription 825015377 and let controller subscribe again. Either this requires a manual MatterBridge addon restart or some amount of time for the devices to be available again on Matter. Hi, I need a full log to understand where to put a filter in the plugin to filter wrong values that can come from anywhere. Subscribe again is the typical controller behaviour... after a while the controller subscribe again.
gharchive/issue
2024-10-18T05:03:19
2025-04-01T04:32:44.307187
{ "authors": [ "Luligu", "robvanoostenrijk" ], "repo": "Luligu/matterbridge-zigbee2mqtt", "url": "https://github.com/Luligu/matterbridge-zigbee2mqtt/issues/78", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2467622403
issue with model training
When I issue the training function:
model = modeling.create_and_train_model(
    loaded_sim_lib,
    n_epochs=100,
    hidden_layer_units_list=[10, 128, 512, 512, 512, 128, 10],
    activation_list=['relu', 'tanh', 'relu', 'relu', 'relu', 'tanh', 'tanh'],
    train_batch_size=100,
)
it does not work, but gives the following error:
ValueError: Exception encountered when calling ComplexLayer.call(). Could not automatically infer the output shape / dtype of 'complex_layer' (of type ComplexLayer). Either the ComplexLayer.call() method is incorrect, or you need to implement the ComplexLayer.compute_output_spec() / compute_output_shape() method. Error encountered: Invalid dtype: complex64
Arguments received by ComplexLayer.call():
• args=('<KerasTensor shape=(None, 4), dtype=float32, sparse=False, name=keras_tensor_9>',)
• kwargs=<class 'inspect._empty'>
Thanks, I will check this.
I don't remember exactly, but it seems I had such a problem when I tried to use Python 3.12 as an interpreter. Everything started working successfully on Python 3.8.
Thanks. Can we design a doublet with the metabox package, for example a doublet with good performance over a field of view and with achromatic performance?
It is possible to make an assembly from several components here. The author shows in the examples the possibilities of using this program for conventional refractive optics, but some data will have to be taken from Zemax. Perhaps you should wait for the author's response; I have not used this package for conventional optics yet.
metabox worked fine with Python 3.8, but not with 3.11.
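For reference, the ValueError above is Keras 3 saying it could not infer an output spec for the custom layer and suggesting compute_output_spec()/compute_output_shape(). The toy layer below only illustrates that mechanism; it is not metabox code, and newer Keras versions may still reject complex64 even with such a method, which is why pinning the older (Python 3.8-era) environment was the workaround that actually helped here.

```python
# Generic illustration of the method the error message asks for.
# NOT metabox code; the layer and its behaviour are invented for the example.
import keras
from keras import layers, ops


class ScaleLayer(layers.Layer):
    """Toy custom layer used only to show compute_output_shape."""

    def call(self, inputs):
        return ops.multiply(inputs, 2.0)

    def compute_output_shape(self, input_shape):
        # Same shape as the input; Keras falls back to this when it cannot
        # trace call() to infer the output spec on its own.
        return input_shape


layer = ScaleLayer()
print(layer.compute_output_shape((None, 4)))  # (None, 4)
```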
gharchive/issue
2024-08-15T08:16:01
2025-04-01T04:32:44.366187
{ "authors": [ "MasSciencer", "mnadeemakram" ], "repo": "Luochenghuang/metabox", "url": "https://github.com/Luochenghuang/metabox/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1325785629
Add my first file tuki Test
gharchive/pull-request
2022-08-02T12:32:00
2025-04-01T04:32:44.374585
{ "authors": [ "Lupama2" ], "repo": "Lupama2/introduction_to_GitHub", "url": "https://github.com/Lupama2/introduction_to_GitHub/pull/1", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
162467824
Create tests using NIST Standard Reference Simulation Calculations
We should build tests for the systems/states for the NIST Simulation Benchmarks. These tests should include evaluations of single configurations:
- [ ] LJ reference calculations for system energy and virial
- [ ] Ewald-Summation for SPC/E water
as well as simulation runs in different ensembles and potentials:
- [ ] LJ
- [ ] Alkanes (TraPPE)
- [ ] SPC/TIP4P Water
SPC/TIP4P Water
These can only be done using Monte-Carlo simulations, because there is no code for rigid molecules in the MD part.
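For the LJ items, the reference quantities in question are the usual pair sums; cutoff and tail-correction conventions are whatever the NIST tables specify for each configuration. Writing the pair virial as w(r) = -r dU/dr:

```latex
U(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right],
\qquad
U_\mathrm{tot} = \sum_{i<j} U(r_{ij}),
\qquad
W = \sum_{i<j} 24\,\varepsilon\left[2\left(\frac{\sigma}{r_{ij}}\right)^{12} - \left(\frac{\sigma}{r_{ij}}\right)^{6}\right].
```

With this sign convention for W, the pressure of the configuration follows as P = rho k_B T + W / (3V), which is the quantity the reference tables can be compared against.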
gharchive/issue
2016-06-27T14:38:39
2025-04-01T04:32:44.379740
{ "authors": [ "Luthaf", "g-bauer" ], "repo": "Luthaf/cymbalum", "url": "https://github.com/Luthaf/cymbalum/issues/27", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
2647626682
🛑 Official(泠泫凝的异次元空间-主页) is down In 16929a0, Official(泠泫凝的异次元空间-主页) (https://lxnchan.cn) was down: HTTP code: 0 Response time: 0 ms Resolved: Official(泠泫凝的异次元空间-主页) is back up in 8e4ae49 after 6 minutes.
gharchive/issue
2024-11-10T20:34:36
2025-04-01T04:32:44.384456
{ "authors": [ "LxnChan" ], "repo": "LxnChan/status", "url": "https://github.com/LxnChan/status/issues/10648", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2664706637
🛑 Official(泠泫凝的异次元空间-主页) is down In 2a9e349, Official(泠泫凝的异次元空间-主页) (https://lxnchan.cn) was down: HTTP code: 0 Response time: 0 ms Resolved: Official(泠泫凝的异次元空间-主页) is back up in f298f1c after 5 minutes.
gharchive/issue
2024-11-16T17:38:18
2025-04-01T04:32:44.387217
{ "authors": [ "LxnChan" ], "repo": "LxnChan/status", "url": "https://github.com/LxnChan/status/issues/10960", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1587070810
🛑 115Wiki is down In e47f721, 115Wiki (https://115.anavi.cn) was down: HTTP code: 0 Response time: 0 ms Resolved: 115Wiki is back up in c5a1fb8.
gharchive/issue
2023-02-16T05:58:15
2025-04-01T04:32:44.389548
{ "authors": [ "LxnChan" ], "repo": "LxnChan/status", "url": "https://github.com/LxnChan/status/issues/112", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2756506072
🛑 EnderChest(末影箱-云盘) is down In e19bc0c, EnderChest(末影箱-云盘) (https://enderchest.anavi.cn) was down: HTTP code: 0 Response time: 0 ms Resolved: EnderChest(末影箱-云盘) is back up in f05e241 after 1 hour, 1 minute.
gharchive/issue
2024-12-23T18:00:37
2025-04-01T04:32:44.391980
{ "authors": [ "LxnChan" ], "repo": "LxnChan/status", "url": "https://github.com/LxnChan/status/issues/12789", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2420547027
🛑 EnderChest(末影箱-云盘) is down In 89b4b02, EnderChest(末影箱-云盘) (https://enderchest.anavi.cn) was down: HTTP code: 0 Response time: 0 ms Resolved: EnderChest(末影箱-云盘) is back up in a85939a after 15 minutes.
gharchive/issue
2024-07-20T04:13:43
2025-04-01T04:32:44.394661
{ "authors": [ "LxnChan" ], "repo": "LxnChan/status", "url": "https://github.com/LxnChan/status/issues/3937", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2474867123
🛑 Official(泠泫凝的异次元空间-主页) is down In 387b823, Official(泠泫凝的异次元空间-主页) (https://lxnchan.cn) was down: HTTP code: 0 Response time: 0 ms Resolved: Official(泠泫凝的异次元空间-主页) is back up in 6b2a354 after 26 minutes.
gharchive/issue
2024-08-20T06:58:49
2025-04-01T04:32:44.397065
{ "authors": [ "LxnChan" ], "repo": "LxnChan/status", "url": "https://github.com/LxnChan/status/issues/5964", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
201385815
Automation results submission for the pizza task
Google checks the submitted result files online. So, to avoid writing unit tests, we will write a test with Selenium WebDriver that sends the results to the submission page and parses the scores.
I finished up with Selenium IDE and Firefox. Here is my test case (substitute the file paths with the appropriate ones):
type //input[@type="file"][1] /Users/greg/Desktop/Hamcrest-1.3.pdf
pause 1000
type //input[@type="file"][1] /Users/greg/Desktop/Hamcrest-1.3.pdf
pause 1000
type //input[@type="file"][1] /Users/greg/Desktop/Hamcrest-1.3.pdf
pause 1000
type //input[@type="file"][1] /Users/greg/Desktop/Hamcrest-1.3.pdf
pause 1000
type //input[@type="file"][1] /Users/greg/Desktop/Hamcrest-1.3.pdf
pause 1000
click xpath=(//button[@type='button'])[25]
Final solution scripts:
- For zipping the source code (bash)
- To submit the source code archive and output files via a web form (Selenium)
I think a better option is to migrate the Selenium script to Java, but that way we will encounter a problem with authentication or connecting to an open browser. See the branch automationResultsSubmission.
I'm finishing the Selenium code (but you should be logged in with your Google account in this Chrome browser); when it works I'll commit it.
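For illustration only, here is a rough WebDriver equivalent of the IDE steps above, shown in Python for brevity (the thread contemplates a Java port instead). The submission URL is a placeholder, the locators are copied from the IDE script, and looping over every file input is an approximation of typing into the first input five times.

```python
# Rough WebDriver translation of the Selenium IDE steps (illustrative only).
from selenium import webdriver
from selenium.webdriver.common.by import By
import time

SUBMISSION_URL = "https://example.com/submission-form"  # placeholder, not the real page
RESULT_FILE = "/Users/greg/Desktop/Hamcrest-1.3.pdf"

driver = webdriver.Firefox()
try:
    driver.get(SUBMISSION_URL)
    # Each <input type="file"> gets the path typed into it, as in the IDE script.
    for file_input in driver.find_elements(By.XPATH, '//input[@type="file"]'):
        file_input.send_keys(RESULT_FILE)
        time.sleep(1)  # mirrors the `pause 1000` steps
    # The IDE script clicked the 25th button on the page to submit.
    driver.find_element(By.XPATH, "(//button[@type='button'])[25]").click()
finally:
    driver.quit()
```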
gharchive/issue
2017-01-17T19:43:17
2025-04-01T04:32:44.400086
{ "authors": [ "LyashenkoGS", "seajer" ], "repo": "LyashenkoGS/GoogleHashCode2017", "url": "https://github.com/LyashenkoGS/GoogleHashCode2017/issues/2", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1104330475
Markdown support Fixes https://github.com/LycheeOrg/Lychee/issues/1189 This adds support for markdown in photo and album descriptions. I used the marked parser as it was the smallest actively supported one I could find (<50 KB). I modified the editing of descriptions to provide a multi-line textarea rather than a single-line input; this required some tweaks as textareas are poorly supported in basicModal and such. I added support for displaying album descriptions right under the header. This is enabled by default (though configurable) because I felt that not having it displayed anywhere was an omission in our UI, but I can be persuaded otherwise. Much of the work went on the CSS side since we simply didn't support many of the HTML tags that markdown uses in the contexts in which descriptions are displayed. I made sure that basic markdown syntax renders nicely, such as headings/bold/italic/blockquotes/lists/code/rules/links; no guarantees about anything more than that, though it should be easy to extend should the need arise. There is a corresponding trivial server PR that adds two config variables, one for displaying the album descriptions, the other for enabling markdown. The PR includes two unrelated fixes: The separate view mode was crapping out on opening the sidebar because photo.updateSizeLivePhotoDuringAnimation is not defined in it. If the last visited album had an info sidebar open, it would reopen when going to any of the Settings/Logs/Diagnostics/etc. This was because view.photo.hide() was being called unconditionally but it should only be called in photo view. SonarCloud fails because CSS uses font-family: revert to request going back to browser-defaults for backquotes and such; I consider that to be a bug in SonarCloud, not in my code, though I suppose we could request some specific fonts instead... I have merged consistent_json_api into master and rebased your branch markdown-support as markdown-support_rebased onto the new master. To update your PR, I believe the following commands should be the the right ones Attention: I haven't tested them. Commands: git checkout master # Checkout the current master of Lychee-front, the only important thing is that your are not on the branch `markdown-support` git branch -D markdown-support # Delete your local branch `markdown-support` git checkout -b markdown-support --track origin/markdown-support_rebased # Checkout a new local branch `markdown-support` from `markdown-support_rebased` git pull # Pull it git branch -u origin/markdown-support # Change upstream back to `markdown-support` git push # Try to push it, this is expected to fail, but make sure that GIT tries to push to the correct branch git push --force # Now push forcefully Better integration directly in #1303
gharchive/pull-request
2022-01-15T03:47:11
2025-04-01T04:32:44.406646
{ "authors": [ "ildyria", "kamil4", "nagmat84" ], "repo": "LycheeOrg/Lychee-front", "url": "https://github.com/LycheeOrg/Lychee-front/pull/280", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2262323243
SpiceWeaver goals? Greetings! I'm working on a C# client for SpiceDb and I sorely would love to automate the process of making relationships and permissions strongly-typed and happened upon your repo. Hand-writing everything requires constant referencing of the schema and I'd rather not do that if possible. My repo is at https://github.com/JalexSocial/SpiceDb I was wondering what your project goals were if you would be willing to share. Ah that's how your named looked familiar. I didn't put the two together. Since the recent major refactor you can now supply a ChannelBase directly if so desired for the grpc channel. Try out 1.5.1 if you haven't yet. Relationships can also include caveats now when you create them. I also reorganized a lot of code to clean things up a bit and started work on a documentation project. If we could unite the two I think that would be great and since you are using it for work it could give you an opportunity to shape the library as well. Honestly it feels like a bit of a lonely island working on the client because it's tough to tell how much SpiceDb is even adopted by the .net community. TBH I saw your comment on the spice2json library so it appears it isn't just limited to .net. I made the .net client out of necessity originally though because I wanted to use it from c# first of all, there wasn't a lot to choose from client-wise, and I wanted to create a wrapper for possibly more than SpiceDb (which is why some of the types don't match exactly in the client) but I've since mostly steered exclusively towards SpiceDb. I do want to be able to provide better quality of life c# code for developers though who want to work with the database though which means if I can create a useful layer on top of the grpc client that could be useful. One of the aspects I found off-putting with libraries like the Java client is how much code it takes to just check a permission. For your project, what if you still used spice2json but eliminated the executable dependency (sort of) by just requiring that schemas be run through spice2json first and named with .zedj (or similar) before adding to the project? Then have the source generator just transform that specific file? That way you can just check in the transformed schema directly into your source. It requires an extra step but the ability to use it doesn't require much setup work (especially on remote build servers). Let me know your thoughts. Does Spice2Json pick up on caveats as well? A caveat has a name and context, but the context keys (user_ip and allowed_range) are also well-defined. ex. caveat has_valid_ip(user_ip ipaddress, allowed_range string) { user_ip.in_cidr(allowed_range) } For simplicity sake you could create a class called Caveat that contains all the caveats. If the caveats themselves were classes that inherited SpiceDb.net Caveat that would make working with them pretty simple. new Caveat.HasValidIp { AllowedRange = "10.20.30.0" } That would be something like: public class Caveat { public class HasValidIp : Caveat { public HasValidIp () { this.Name = "has_valid_ip"; } public IpAddress? UserIp { get { return this.Context.ContainsKey("user_ip") ? this.Context["user_ip"].ToIpAddress() : null } set { this.Context["user_ip"] = value; } } public string AllowedRange { get { return this.Context.ContainsKey("allowed_range") ? 
this.Context["allowed_range"] : null; } set { this.Context["allowed_range"] = value; } } } } ``` csharp Note that allowed_range would be supplied when creating the relationship caveat and user_ip could be supplied during permission checks. For your project, what if you still used spice2json but eliminated the executable dependency (sort of) by just requiring that schemas be run through spice2json first and named with .zedj (or similar) before adding to the project? I've already taken this into account with an option on the additional file, was just missing documentation to explain An issue was just raised in the SpiceDB repo about having an official .NET client: https://github.com/authzed/spicedb/issues/1877 Be interesting to see what the response to this is. TBH it's pretty easy to just generate a c# client off the protobuf definitions but I could definitely see how it's easier to have faith in a company if the client is supported by that company. You just have to deal with all the custom google grpc data structures. I wouldn't have bothered making what I did if there was something official. I'm playing around with the source generator and it's working well. I created a new Relationship object as follows: using Arch = Archimedes.Schema.Definitions; new Relationship($"{Arch.Platform.Name}:abc123", Arch.Platform.Relations.Administrator, $"{Arch.User.Name}:efg456"); Using the definition with an id is extremely common.. maybe add an additional "WithId" method to each definition as a helper? Something generated like this: public static class User { public const string Name = "user"; public static string WithId (string id) => $"user:{id}"; } Then code to use it is: using Arch = Archimedes.Schema.Definitions; new Relationship(Arch.Platform.WithId("abc123"), Arch.Platform.Relations.Administrator, Arch.User.WithId("efg456")); Thoughts? I like it, if you throw that in a new issue I can take a look tomorrow - should be pretty trivial
gharchive/issue
2024-04-24T23:00:47
2025-04-01T04:32:44.423034
{ "authors": [ "LykaiosNZ", "tanczosm" ], "repo": "LykaiosNZ/SpiceWeaver", "url": "https://github.com/LykaiosNZ/SpiceWeaver/issues/18", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
993919329
🛑 Edge Compute Cluster - Asia Pacific is down In 39dcb68, Edge Compute Cluster - Asia Pacific (https://apsoutheast1-vega.lyrid.io/version) was down: HTTP code: 0 Response time: 0 ms Resolved: Edge Compute Cluster - Asia Pacific is back up in 63b484f.
gharchive/issue
2021-09-11T19:16:48
2025-04-01T04:32:44.427212
{ "authors": [ "soemarko" ], "repo": "LyridInc/statuspage", "url": "https://github.com/LyridInc/statuspage/issues/39", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
114244479
Avoid unserialize issues Sometimes we get this error : Catchable Fatal Error: Argument 2 passed to GuzzleHttp\Psr7\Response::__construct() must be of the type array, null given, called in [...]vendor/m6web/guzzle-http-bundle/src/Handler/CacheTrait.php on line 140 and defined There might be an issue when data are unserialized from cache. That's why I add a check on unserialized data. :+1: :+1: :+1: :metal:
gharchive/pull-request
2015-10-30T10:40:22
2025-04-01T04:32:44.444445
{ "authors": [ "cedvan", "jdlabails", "mojoLyon", "omansour", "t-geindre" ], "repo": "M6Web/GuzzleHttpBundle", "url": "https://github.com/M6Web/GuzzleHttpBundle/pull/21", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
184117694
About the "FAQ" page
Based on the results from October 18, I will summarize the frequently asked questions.
Q. When I enter a building, the distance and compass information become wrong.
A. GPS does not work well inside buildings, so the navigation screen can sometimes behave strangely.
Q. I entered the correct 4 digits for the check-in task, but I cannot check in.
A. Please enter them as half-width alphanumeric characters.
The answers are the replies I gave on the day, so please correct them if they are wrong. Since there is not that much material, I agree with Prof. Nakajima and Prof. Hishiyama that there is no need to create a new page. Thank you.
@ieiri0104 Regarding "Q. I entered the correct 4 digits for the check-in task, but I cannot check in. A. Please enter half-width alphanumeric characters": could you tell me on which device this happened? Since type="number" is specified, nothing other than half-width digits should be accepted. It did not occur on Chrome.
This happened on Safari on iOS; the device was an iPhone 6.
@ieiri0104 Understood. It seems iOS handles the number input type oddly. Addressed in #169: whitespace and full-width digits can now be handled.
Given that this question still comes up even though "A. Please enter half-width alphanumeric characters" is already displayed right in the input field, creating this Q&A would probably end up being pointless.
gharchive/issue
2016-10-20T02:03:39
2025-04-01T04:32:44.452780
{ "authors": [ "ieiri0104", "yuu-nkjm" ], "repo": "MAGCruise/magcruise-citywalk", "url": "https://github.com/MAGCruise/magcruise-citywalk/issues/161", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2305687776
Add a Dedicated Product Details Page
Is your feature request related to a problem? Please describe.
Users are currently unable to view detailed information about specific products on the website. This absence of a dedicated product details page makes it difficult for users to make informed purchasing decisions. The lack of detailed product information leads to a frustrating and suboptimal shopping experience.
Describe the solution you'd like
Implement a dedicated product details page for each product. This page should include:
- High-quality images of the product from multiple angles.
- Comprehensive product descriptions.
- Customer reviews and ratings.
- Price and availability information.
- Related products or recommendations.
Additional context
Including a dedicated product details page will enhance the user experience by providing all necessary information in one place.
Please assign me this issue, I want to work on it. If you have any design in mind, please also share a sample design.
See this design.
We already have that.
gharchive/issue
2024-05-20T10:41:38
2025-04-01T04:32:44.463744
{ "authors": [ "CoderSwarup", "MAVRICK-1" ], "repo": "MAVRICK-1/e-commerce_website", "url": "https://github.com/MAVRICK-1/e-commerce_website/issues/195", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
185041674
SSL with client certificate Our jenkins master server is hidden behind a Proxy which terminates SSL requiring a client certificate to authenticate. Unfortunately this server is not reachable with this plugin. Is this still a feature on the roadmap @dboissier ? Hi Yes but not as fisrt enhancement planned but i look how much time it will take to implement. In last release we use the IDE proxy settings for proxies. But I think there is no setting to use cert to client auth the proxy. But maybe there is a solution. Is your server public or can you grant access for testing against because setup all avalaible enviroments tales also time
gharchive/issue
2016-10-25T07:43:37
2025-04-01T04:32:44.479698
{ "authors": [ "MCMicS", "Willdotwhite", "georgengel" ], "repo": "MCMicS/jenkins-control-plugin", "url": "https://github.com/MCMicS/jenkins-control-plugin/issues/130", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2474268257
[Feature] Improve Docker instance creation
Description: Allow providing the Docker configuration directly when an instance is created, and also support importing a docker run command directly.
Reason: Creating Docker instances is cumbersome; it takes at least 5 clicks just to open the Docker (containerization) configuration.
The quick-setup wizard has a "Create instance from a Docker image" option; when creating, you can fill in an existing Docker Hub image directly, and after installation docker pull is executed automatically to fetch the image. You can give that a try. MCSM uses a third-party module rather than the docker run command to run containers, so parsing a docker run command into run parameters is not supported for now.
That option does not expose many adjustable parameters, such as environment variables, container name, port mappings, or directory mounts.
Then just create the instance first and edit it afterwards; there is no need to add such a complex configuration UI to the creation flow alone.
gharchive/issue
2024-08-19T21:48:23
2025-04-01T04:32:44.486943
{ "authors": [ "duncai233", "huangsijun17", "unitwk" ], "repo": "MCSManager/MCSManager", "url": "https://github.com/MCSManager/MCSManager/issues/1346", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1183136063
Sync with the releases from mlplatform.org Releases on mlplatforms have documented revisions/git-SHAs in a json file. The release action should digest this to determine the version of source repositories. Start with a release 1.xx to reflect this. e.g. 1.22.02 with versions from https://review.mlplatform.org/plugins/gitiles/ml/ethos-u/ethos-u/+/refs/heads/master/22.02.json Covered by new release process.
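As a sketch of what "digesting" such a manifest could look like: the key names used below (components, name, revision) are assumptions, since the actual schema of the 22.02.json file is not reproduced in this issue.

```python
# Illustrative only: read a release manifest like 22.02.json and print the
# pinned git revision per component. The JSON key names are assumptions.
import json
from pathlib import Path
from typing import Dict


def load_pinned_revisions(manifest_path: str) -> Dict[str, str]:
    data = json.loads(Path(manifest_path).read_text())
    revisions = {}
    # Assumed layout: a list of components, each with a name and a git SHA.
    for component in data.get("components", []):
        revisions[component["name"]] = component["revision"]
    return revisions


if __name__ == "__main__":
    for name, sha in load_pinned_revisions("22.02.json").items():
        print(f"{name}: {sha}")
```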
gharchive/issue
2022-03-28T09:16:14
2025-04-01T04:32:44.503364
{ "authors": [ "MatthiasHertel80" ], "repo": "MDK-Packs/tensorflow-pack", "url": "https://github.com/MDK-Packs/tensorflow-pack/issues/4", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2394899187
Files are missing, so it cannot be used.
Just remove the related files from the .sln; those files have indeed been renamed or removed.
gharchive/issue
2024-07-08T07:45:10
2025-04-01T04:32:44.540080
{ "authors": [ "chenyunshan", "orca-zhang" ], "repo": "MFCer/AutoUpdate", "url": "https://github.com/MFCer/AutoUpdate/issues/2", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
239998714
Support granularity when specifying which season to download Feature request Description Hi, forgive me if this is already possible, but I would like to easily download a full season of a show, e.g. Season 3, without having to add each episode individually. Is this possible? This is not currently possible. I will see if I can add it in a future version.
gharchive/issue
2017-07-02T09:13:16
2025-04-01T04:32:44.541525
{ "authors": [ "MGaetan89", "danpilch" ], "repo": "MGaetan89/ShowsRage", "url": "https://github.com/MGaetan89/ShowsRage/issues/139", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
109648043
Larger target area for episode menus on Coming Episodes It can be difficult to precisely hit the "...", and it has reserved space on both the left and right sides that currently links to the episode. That space would be better used to link to the menu, making it easier to access. I fixed this. It will be available in the next version of ShowsRage.
gharchive/issue
2015-10-03T21:59:39
2025-04-01T04:32:44.542794
{ "authors": [ "MGaetan89", "OIK2" ], "repo": "MGaetan89/ShowsRage", "url": "https://github.com/MGaetan89/ShowsRage/issues/23", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1168035256
BUG Report When attacking with some Layer 4 methods, I get ERROR Cannot Create Raw Socket. You need a spoofable server.
gharchive/issue
2022-03-14T08:20:49
2025-04-01T04:32:44.543731
{ "authors": [ "SudoLite", "mcnxzijfe" ], "repo": "MHProDev/MHDDoS", "url": "https://github.com/MHProDev/MHDDoS/issues/242", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2216312939
utcfromtimestamp deprecated https://github.com/MITHaystack/srt-py/blob/master/srt/dashboard/layouts/graphs.py#L266 is deprecated since Python 3.12. It can be replaced by fromtimestamp:
from datetime import datetime, timezone
ts = 1571595618.0
x = datetime.utcfromtimestamp(ts)
print(x)
y = datetime.fromtimestamp(ts, tz=timezone.utc)
print(y)
2019-10-20 18:20:18
2019-10-20 18:20:18+00:00
Fixed: https://github.com/AlexKurek/srt-py/commit/45a309e7dc55f8f638ec38d5eb9dfa3b6a407b50
gharchive/issue
2024-03-30T07:45:15
2025-04-01T04:32:44.595840
{ "authors": [ "AlexKurek" ], "repo": "MITHaystack/srt-py", "url": "https://github.com/MITHaystack/srt-py/issues/26", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
780735635
ClkDependentInit() - Reinitializes BLE stack ClkDependentInit() completely resets BLE; it should only be doing low-level init. Fixed. Fixed.
gharchive/issue
2021-01-06T17:48:19
2025-04-01T04:32:44.607487
{ "authors": [ "MJurczak-PMarchut" ], "repo": "MJurczak-PMarchut/remote-measurement-station", "url": "https://github.com/MJurczak-PMarchut/remote-measurement-station/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
119305304
Default values in setReward for ABC test should be zero
Seems that the following in "ABC test" setReward()
n = self.get_theta().get("n",1) + 1
self.set_theta({"n":n})
# If still running increase action count
if N==0 or n<N:
    d = self.get_theta(action={'choice':self.action['choice']})
    s = d.get("s",1) + self.reward
    n = d.get("n",2) + 1
    self.set_theta({"s":s, "n":n}, action={'choice':self.action['choice']})
should be
n = self.get_theta().get("n",0) + 1
self.set_theta({"n":n})
# If still running increase action count
if N==0 or n<N:
    d = self.get_theta(action={'choice':self.action['choice']})
    s = d.get("s",0) + self.reward
    n = d.get("n",0) + 1
    self.set_theta({"s":s, "n":n}, action={'choice':self.action['choice']})
Otherwise, the n counter after the first choice will be 2, not 1, for theta, and 3 instead of 1 for theta:choice. Also, the reward will start at 1 plus the reward of a particular choice, instead of just the reward value of that particular choice.
I would say both are incorrect; we should make use of the count and mean objects in base, and use their updates... :S
Agreed, though that is a change at a different level of abstraction. The suggested change does count correctly :) Also, I see that work is being done on these files in dev, so this issue might be superfluous anyway.
The start value for counts is tricky; you don't want to divide by 0 when computing a mean... The default base mean starts with n=2, s=1, reflecting a uniform prior on a proportion... But we should be using our libs; if we don't, then who will ;)?
Same goes for something like the import of the base lib in lm.py - it states
from list.base import *
which obviously should be
from libs.base import *
yet I see that a recent commit states that "Big update, lm.py is not finished yet.", so I might just leave this code alone and concentrate on my own LiF code?
Jules is working on the examples and finishing base and lm; paper deadline is coming Monday, and then it should move to the master branch... (And hence be working. And using the libs in the AB test example ;)) Thanks for spotting that one!
I agree there, so I am happy to report that I have been using the libs :)
Also, if we don't use them because they are not useful, then we should fix that ;)
I guess this discussion is relevant, but we should revisit it when reconsidering the libs (before pushing them to the main branch). I am going to close the issue for now.
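For illustration only (this is a sketch, not streamingbandit code; the class and method names are assumptions), an incremental count/mean object initialized with the uniform prior mentioned above could look like this:

class PriorMean:
    # Proportion estimate with a uniform prior: start from n=2 observations and
    # s=1 successes, so mean() is 0.5 before any data and never divides by zero.
    def __init__(self, n=2, s=1.0):
        self.n = n
        self.s = s

    def update(self, reward):
        # One observation with the given reward (e.g. 0 or 1 for a proportion).
        self.n += 1
        self.s += reward

    def mean(self):
        return self.s / self.n

arm = PriorMean()
arm.update(1.0)
print(arm.mean())  # 2/3: the prior plus one observed success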
gharchive/issue
2015-11-28T16:32:31
2025-04-01T04:32:44.621730
{ "authors": [ "MKaptein", "robinvanemden" ], "repo": "MKaptein/streamingbandit", "url": "https://github.com/MKaptein/streamingbandit/issues/14", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2296989733
infer poetry install is pending indefinitely
When I build the infer project I get something pending. Also there is a warning about improper dependencies. When I ran this the first time it finished 4 lines and then was pending at the 5th
(aframe-env) [vasileios.skliris@ldas-pcdev2 infer]$ poetry install
Installing dependencies from lock file
Warning: poetry.lock is not consistent with pyproject.toml. You may be getting improper dependencies. Run `poetry lock [--no-update]` to fix it.
Package operations: 136 installs, 42 updates, 0 removals
• Updating six (1.16.0 /home/conda/feedstock_root/build_artifacts/six_1620240208055/work -> 1.16.0): Pending...
Hm thanks for this pointer. It looks like our poetry.lock is not consistent with the pyproject.toml. Let me re-pull the repo and try to build.
@VasSkliris Can you pull the latest changes from main and try again?
I did and it is now pending again, but without the warning. A few steps further
Installing dependencies from lock file
Package operations: 134 installs, 44 updates, 0 removals
• Updating rpds-py (0.17.1 -> 0.18.1)
• Updating referencing (0.33.0 -> 0.35.1)
• Updating six (1.16.0 /home/conda/feedstock_root/build_artifacts/six_1620240208055/work -> 1.16.0): Pending...
@VasSkliris Can you try removing your poetry cache dir and trying again? I think there's something stale in there. You can find it by running poetry config cache-dir
It worked after I deleted that directory. The installation had some exceptions that might be of interest, but it completed
(aframe-env) [vasileios.skliris@ldas-pcdev12 infer]$ poetry install
Installing dependencies from lock file
Package operations: 134 installs, 42 updates, 0 removals
• Updating six (1.16.0 /home/conda/feedstock_root/build_artifacts/six_1620240208055/work -> 1.16.0)
• Updating jsonschema-specifications (2023.12.1 /tmp/tmpkv1z7p57/src -> 2023.12.1)
• Updating platformdirs (4.2.0 /home/conda/feedstock_root/build_artifacts/platformdirs_1706713388748/work -> 4.2.2)
• Installing python-dateutil (2.9.0.post0)
• Installing traitlets (5.14.3)
• Installing types-python-dateutil (2.9.0.20240316)
• Installing arrow (1.3.0)
• Installing fastjsonschema (2.19.1)
• Updating jsonschema (4.21.1 /home/conda/feedstock_root/build_artifacts/jsonschema-meta_1705707496704/work -> 4.22.0)
• Installing jupyter-core (5.7.2)
• Updating pycparser (2.21 /home/conda/feedstock_root/build_artifacts/pycparser_1636257122734/work -> 2.22)
• Installing pyzmq (26.0.3)
• Installing tornado (6.4)
• Updating cffi (1.16.0 /home/conda/feedstock_root/build_artifacts/cffi_1696001684923/work -> 1.16.0)
• Installing fqdn (1.5.1)
• Updating idna (3.6 /home/conda/feedstock_root/build_artifacts/idna_1701026962277/work -> 3.7)
• Installing isoduration (20.11.0)
• Updating jsonpointer (2.4 /home/conda/feedstock_root/build_artifacts/jsonpointer_1695397238043/work -> 2.4)
• Installing jupyter-client (8.6.1)
• Updating markupsafe (2.1.5 /home/conda/feedstock_root/build_artifacts/markupsafe_1706899921127/work -> 2.1.5)
• Installing nbformat (5.10.4)
• Updating ptyprocess (0.7.0 /home/vasileios.skliris/.local/lib/python3.10/site-packages -> 0.7.0)
• Installing rfc3339-validator (0.1.4)
• Installing rfc3986-validator (0.1.1)
• Updating soupsieve (2.5 /home/conda/feedstock_root/build_artifacts/soupsieve_1693929250441/work -> 2.5)
• Installing uri-template (1.3.0)
• Installing webcolors (1.13)
• Updating webencodings (0.5.1 /home/vasileios.skliris/.local/lib/python3.10/site-packages -> 0.5.1)
• Installing argon2-cffi-bindings (21.2.0)
• Installing asttokens (2.4.1)
•
Updating beautifulsoup4 (4.12.3 /home/conda/feedstock_root/build_artifacts/beautifulsoup4_1705564648255/work -> 4.12.3): Installing... • Installing bleach (6.1.0): Installing... • Installing defusedxml (0.7.1): Installing... • Updating pygments (2.17.2 /home/conda/feedstock_root/build_artifacts/pygments_1700607939962/work -> 2.18.0): Installing... • Installing executing (2.0.1) • Updating jinja2 (3.1.3 /home/conda/feedstock_root/build_artifacts/jinja2_1704966972576/work -> 3.1.4): Installing... • Installing jupyterlab-pygments (0.3.0) • Installing mistune (3.0.2): Installing... • Installing nbclient (0.10.0) • Updating packaging (24.0 /home/conda/feedstock_root/build_artifacts/packaging_1710075952259/work -> 24.0): Failed CalledProcessError Command '['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl']' returned non-zero exit status 1. at /cvmfs/software.igwn.org/conda/lib/python3.10/subprocess.py:526 in run 522│ # We don't call process.wait() as .__exit__ does that for us. 523│ raise 524│ retcode = process.poll() 525│ if check and retcode: → 526│ raise CalledProcessError(retcode, process.args, 527│ output=stdout, stderr=stderr) 528│ return CompletedProcess(process.args, retcode, stdout, stderr) 529│ 530│ The following error occurred when trying to handle this error: EnvCommandError Command ['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl'] errored with the following return code 1, and output: Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Processing /home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl packaging is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel. ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/home/vasileios.skliris/.conda/envs/aframe-env/lib/python3.10/site-packages/~xceptiongroup-1.2.0.dist-info' at ~/.local/lib/python3.10/site-packages/poetry/utils/env.py:1476 in _run 1472│ output = subprocess.check_output( 1473│ command, stderr=subprocess.STDOUT, env=env, **kwargs 1474│ ) 1475│ except CalledProcessError as e: • Updating pygments (2.17.2 /home/conda/feedstock_root/build_artifacts/pygments_1700607939962/work -> 2.18.0): Installing... • Installing executing (2.0.1) • Updating jinja2 (3.1.3 /home/conda/feedstock_root/build_artifacts/jinja2_1704966972576/work -> 3.1.4): Installing... • Installing jupyterlab-pygments (0.3.0) • Installing mistune (3.0.2): Installing... 
• Installing nbclient (0.10.0) • Updating packaging (24.0 /home/conda/feedstock_root/build_artifacts/packaging_1710075952259/work -> 24.0): Failed CalledProcessError Command '['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl']' returned non-zero exit status 1. at /cvmfs/software.igwn.org/conda/lib/python3.10/subprocess.py:526 in run 522│ # We don't call process.wait() as .__exit__ does that for us. 523│ raise 524│ retcode = process.poll() 525│ if check and retcode: → 526│ raise CalledProcessError(retcode, process.args, 527│ output=stdout, stderr=stderr) 528│ return CompletedProcess(process.args, retcode, stdout, stderr) 529│ 530│ The following error occurred when trying to handle this error: EnvCommandError Command ['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl'] errored with the following return code 1, and output: Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Processing /home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl packaging is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel. ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/home/vasileios.skliris/.conda/envs/aframe-env/lib/python3.10/site-packages/~xceptiongroup-1.2.0.dist-info' at ~/.local/lib/python3.10/site-packages/poetry/utils/env.py:1476 in _run 1472│ output = subprocess.check_output( 1473│ command, stderr=subprocess.STDOUT, env=env, **kwargs 1474│ ) 1475│ except CalledProcessError as e: • Installing defusedxml (0.7.1) • Updating pygments (2.17.2 /home/conda/feedstock_root/build_artifacts/pygments_1700607939962/work -> 2.18.0): Installing... • Installing executing (2.0.1) • Updating jinja2 (3.1.3 /home/conda/feedstock_root/build_artifacts/jinja2_1704966972576/work -> 3.1.4): Installing... • Installing jupyterlab-pygments (0.3.0) • Installing mistune (3.0.2): Installing... • Installing nbclient (0.10.0) • Updating packaging (24.0 /home/conda/feedstock_root/build_artifacts/packaging_1710075952259/work -> 24.0): Failed CalledProcessError Command '['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl']' returned non-zero exit status 1. at /cvmfs/software.igwn.org/conda/lib/python3.10/subprocess.py:526 in run 522│ # We don't call process.wait() as .__exit__ does that for us. 
523│ raise 524│ retcode = process.poll() 525│ if check and retcode: → 526│ raise CalledProcessError(retcode, process.args, 527│ output=stdout, stderr=stderr) 528│ return CompletedProcess(process.args, retcode, stdout, stderr) 529│ 530│ The following error occurred when trying to handle this error: EnvCommandError Command ['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl'] errored with the following return code 1, and output: Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Processing /home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl packaging is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel. ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/home/vasileios.skliris/.conda/envs/aframe-env/lib/python3.10/site-packages/~xceptiongroup-1.2.0.dist-info' at ~/.local/lib/python3.10/site-packages/poetry/utils/env.py:1476 in _run 1472│ output = subprocess.check_output( 1473│ command, stderr=subprocess.STDOUT, env=env, **kwargs 1474│ ) 1475│ except CalledProcessError as e: • Installing nbclient (0.10.0) • Updating packaging (24.0 /home/conda/feedstock_root/build_artifacts/packaging_1710075952259/work -> 24.0): Failed CalledProcessError Command '['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl']' returned non-zero exit status 1. at /cvmfs/software.igwn.org/conda/lib/python3.10/subprocess.py:526 in run 522│ # We don't call process.wait() as .__exit__ does that for us. 523│ raise 524│ retcode = process.poll() 525│ if check and retcode: → 526│ raise CalledProcessError(retcode, process.args, 527│ output=stdout, stderr=stderr) 528│ return CompletedProcess(process.args, retcode, stdout, stderr) 529│ 530│ The following error occurred when trying to handle this error: EnvCommandError Command ['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl'] errored with the following return code 1, and output: Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Processing /home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl packaging is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel. 
ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/home/vasileios.skliris/.conda/envs/aframe-env/lib/python3.10/site-packages/~xceptiongroup-1.2.0.dist-info' at ~/.local/lib/python3.10/site-packages/poetry/utils/env.py:1476 in _run 1472│ output = subprocess.check_output( 1473│ command, stderr=subprocess.STDOUT, env=env, **kwargs 1474│ ) 1475│ except CalledProcessError as e: • Installing mistune (3.0.2) • Installing nbclient (0.10.0) • Updating packaging (24.0 /home/conda/feedstock_root/build_artifacts/packaging_1710075952259/work -> 24.0): Failed CalledProcessError Command '['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl']' returned non-zero exit status 1. at /cvmfs/software.igwn.org/conda/lib/python3.10/subprocess.py:526 in run 522│ # We don't call process.wait() as .__exit__ does that for us. 523│ raise 524│ retcode = process.poll() 525│ if check and retcode: → 526│ raise CalledProcessError(retcode, process.args, 527│ output=stdout, stderr=stderr) 528│ return CompletedProcess(process.args, retcode, stdout, stderr) 529│ 530│ The following error occurred when trying to handle this error: EnvCommandError Command ['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl'] errored with the following return code 1, and output: Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Processing /home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl packaging is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel. ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/home/vasileios.skliris/.conda/envs/aframe-env/lib/python3.10/site-packages/~xceptiongroup-1.2.0.dist-info' at ~/.local/lib/python3.10/site-packages/poetry/utils/env.py:1476 in _run 1472│ output = subprocess.check_output( 1473│ command, stderr=subprocess.STDOUT, env=env, **kwargs 1474│ ) 1475│ except CalledProcessError as e: • Installing bleach (6.1.0): Installing... • Installing defusedxml (0.7.1) • Updating pygments (2.17.2 /home/conda/feedstock_root/build_artifacts/pygments_1700607939962/work -> 2.18.0): Installing... • Installing executing (2.0.1) • Updating jinja2 (3.1.3 /home/conda/feedstock_root/build_artifacts/jinja2_1704966972576/work -> 3.1.4): Installing... 
• Installing jupyterlab-pygments (0.3.0) • Installing mistune (3.0.2) • Installing nbclient (0.10.0) • Updating packaging (24.0 /home/conda/feedstock_root/build_artifacts/packaging_1710075952259/work -> 24.0): Failed CalledProcessError Command '['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl']' returned non-zero exit status 1. at /cvmfs/software.igwn.org/conda/lib/python3.10/subprocess.py:526 in run 522│ # We don't call process.wait() as .__exit__ does that for us. 523│ raise 524│ retcode = process.poll() 525│ if check and retcode: → 526│ raise CalledProcessError(retcode, process.args, 527│ output=stdout, stderr=stderr) 528│ return CompletedProcess(process.args, retcode, stdout, stderr) 529│ 530│ The following error occurred when trying to handle this error: EnvCommandError Command ['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl'] errored with the following return code 1, and output: Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Processing /home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl packaging is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel. ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/home/vasileios.skliris/.conda/envs/aframe-env/lib/python3.10/site-packages/~xceptiongroup-1.2.0.dist-info' at ~/.local/lib/python3.10/site-packages/poetry/utils/env.py:1476 in _run 1472│ output = subprocess.check_output( 1473│ command, stderr=subprocess.STDOUT, env=env, **kwargs 1474│ ) 1475│ except CalledProcessError as e: • Updating beautifulsoup4 (4.12.3 /home/conda/feedstock_root/build_artifacts/beautifulsoup4_1705564648255/work -> 4.12.3) • Installing bleach (6.1.0): Installing... • Installing defusedxml (0.7.1) • Updating pygments (2.17.2 /home/conda/feedstock_root/build_artifacts/pygments_1700607939962/work -> 2.18.0): Installing... • Installing executing (2.0.1) • Updating jinja2 (3.1.3 /home/conda/feedstock_root/build_artifacts/jinja2_1704966972576/work -> 3.1.4): Installing... • Installing jupyterlab-pygments (0.3.0) • Installing mistune (3.0.2) • Installing nbclient (0.10.0) • Updating packaging (24.0 /home/conda/feedstock_root/build_artifacts/packaging_1710075952259/work -> 24.0): Failed CalledProcessError Command '['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl']' returned non-zero exit status 1. at /cvmfs/software.igwn.org/conda/lib/python3.10/subprocess.py:526 in run 522│ # We don't call process.wait() as .__exit__ does that for us. 
523│ raise 524│ retcode = process.poll() 525│ if check and retcode: → 526│ raise CalledProcessError(retcode, process.args, 527│ output=stdout, stderr=stderr) 528│ return CompletedProcess(process.args, retcode, stdout, stderr) 529│ 530│ The following error occurred when trying to handle this error: EnvCommandError Command ['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl'] errored with the following return code 1, and output: Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Processing /home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl packaging is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel. ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/home/vasileios.skliris/.conda/envs/aframe-env/lib/python3.10/site-packages/~xceptiongroup-1.2.0.dist-info' at ~/.local/lib/python3.10/site-packages/poetry/utils/env.py:1476 in _run 1472│ output = subprocess.check_output( 1473│ command, stderr=subprocess.STDOUT, env=env, **kwargs 1474│ ) 1475│ except CalledProcessError as e: • Installing defusedxml (0.7.1) • Updating pygments (2.17.2 /home/conda/feedstock_root/build_artifacts/pygments_1700607939962/work -> 2.18.0): Installing... • Installing executing (2.0.1) • Updating jinja2 (3.1.3 /home/conda/feedstock_root/build_artifacts/jinja2_1704966972576/work -> 3.1.4): Installing... • Installing jupyterlab-pygments (0.3.0) • Installing mistune (3.0.2) • Installing nbclient (0.10.0) • Updating packaging (24.0 /home/conda/feedstock_root/build_artifacts/packaging_1710075952259/work -> 24.0): Failed CalledProcessError Command '['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl']' returned non-zero exit status 1. at /cvmfs/software.igwn.org/conda/lib/python3.10/subprocess.py:526 in run 522│ # We don't call process.wait() as .__exit__ does that for us. 
523│ raise 524│ retcode = process.poll() 525│ if check and retcode: → 526│ raise CalledProcessError(retcode, process.args, 527│ output=stdout, stderr=stderr) 528│ return CompletedProcess(process.args, retcode, stdout, stderr) 529│ 530│ The following error occurred when trying to handle this error: EnvCommandError Command ['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl'] errored with the following return code 1, and output: Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Processing /home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl packaging is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel. ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/home/vasileios.skliris/.conda/envs/aframe-env/lib/python3.10/site-packages/~xceptiongroup-1.2.0.dist-info' at ~/.local/lib/python3.10/site-packages/poetry/utils/env.py:1476 in _run 1472│ output = subprocess.check_output( 1473│ command, stderr=subprocess.STDOUT, env=env, **kwargs 1474│ ) 1475│ except CalledProcessError as e: • Installing bleach (6.1.0) • Installing defusedxml (0.7.1) • Updating pygments (2.17.2 /home/conda/feedstock_root/build_artifacts/pygments_1700607939962/work -> 2.18.0): Installing... • Installing executing (2.0.1) • Updating jinja2 (3.1.3 /home/conda/feedstock_root/build_artifacts/jinja2_1704966972576/work -> 3.1.4): Installing... • Installing jupyterlab-pygments (0.3.0) • Installing mistune (3.0.2) • Installing nbclient (0.10.0) • Updating packaging (24.0 /home/conda/feedstock_root/build_artifacts/packaging_1710075952259/work -> 24.0): Failed CalledProcessError Command '['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl']' returned non-zero exit status 1. at /cvmfs/software.igwn.org/conda/lib/python3.10/subprocess.py:526 in run 522│ # We don't call process.wait() as .__exit__ does that for us. 
523│ raise 524│ retcode = process.poll() 525│ if check and retcode: → 526│ raise CalledProcessError(retcode, process.args, 527│ output=stdout, stderr=stderr) 528│ return CompletedProcess(process.args, retcode, stdout, stderr) 529│ 530│ The following error occurred when trying to handle this error: EnvCommandError Command ['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl'] errored with the following return code 1, and output: Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Processing /home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl packaging is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel. ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/home/vasileios.skliris/.conda/envs/aframe-env/lib/python3.10/site-packages/~xceptiongroup-1.2.0.dist-info' at ~/.local/lib/python3.10/site-packages/poetry/utils/env.py:1476 in _run 1472│ output = subprocess.check_output( 1473│ command, stderr=subprocess.STDOUT, env=env, **kwargs 1474│ ) 1475│ except CalledProcessError as e: • Installing jupyterlab-pygments (0.3.0) • Installing mistune (3.0.2) • Installing nbclient (0.10.0) • Updating packaging (24.0 /home/conda/feedstock_root/build_artifacts/packaging_1710075952259/work -> 24.0): Failed CalledProcessError Command '['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl']' returned non-zero exit status 1. at /cvmfs/software.igwn.org/conda/lib/python3.10/subprocess.py:526 in run 522│ # We don't call process.wait() as .__exit__ does that for us. 523│ raise 524│ retcode = process.poll() 525│ if check and retcode: → 526│ raise CalledProcessError(retcode, process.args, 527│ output=stdout, stderr=stderr) 528│ return CompletedProcess(process.args, retcode, stdout, stderr) 529│ 530│ The following error occurred when trying to handle this error: EnvCommandError Command ['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl'] errored with the following return code 1, and output: Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Processing /home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl packaging is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel. 
ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/home/vasileios.skliris/.conda/envs/aframe-env/lib/python3.10/site-packages/~xceptiongroup-1.2.0.dist-info' at ~/.local/lib/python3.10/site-packages/poetry/utils/env.py:1476 in _run 1472│ output = subprocess.check_output( 1473│ command, stderr=subprocess.STDOUT, env=env, **kwargs 1474│ ) 1475│ except CalledProcessError as e: • Updating jinja2 (3.1.3 /home/conda/feedstock_root/build_artifacts/jinja2_1704966972576/work -> 3.1.4) • Installing jupyterlab-pygments (0.3.0) • Installing mistune (3.0.2) • Installing nbclient (0.10.0) • Updating packaging (24.0 /home/conda/feedstock_root/build_artifacts/packaging_1710075952259/work -> 24.0): Failed CalledProcessError Command '['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl']' returned non-zero exit status 1. at /cvmfs/software.igwn.org/conda/lib/python3.10/subprocess.py:526 in run 522│ # We don't call process.wait() as .__exit__ does that for us. 523│ raise 524│ retcode = process.poll() 525│ if check and retcode: → 526│ raise CalledProcessError(retcode, process.args, 527│ output=stdout, stderr=stderr) 528│ return CompletedProcess(process.args, retcode, stdout, stderr) 529│ 530│ The following error occurred when trying to handle this error: EnvCommandError Command ['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl'] errored with the following return code 1, and output: Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Processing /home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl packaging is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel. 
ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/home/vasileios.skliris/.conda/envs/aframe-env/lib/python3.10/site-packages/~xceptiongroup-1.2.0.dist-info' at ~/.local/lib/python3.10/site-packages/poetry/utils/env.py:1476 in _run 1472│ output = subprocess.check_output( 1473│ command, stderr=subprocess.STDOUT, env=env, **kwargs 1474│ ) 1475│ except CalledProcessError as e: • Installing executing (2.0.1) • Updating jinja2 (3.1.3 /home/conda/feedstock_root/build_artifacts/jinja2_1704966972576/work -> 3.1.4) • Installing jupyterlab-pygments (0.3.0) • Installing mistune (3.0.2) • Installing nbclient (0.10.0) • Updating packaging (24.0 /home/conda/feedstock_root/build_artifacts/packaging_1710075952259/work -> 24.0): Failed CalledProcessError Command '['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl']' returned non-zero exit status 1. at /cvmfs/software.igwn.org/conda/lib/python3.10/subprocess.py:526 in run 522│ # We don't call process.wait() as .__exit__ does that for us. 523│ raise 524│ retcode = process.poll() 525│ if check and retcode: → 526│ raise CalledProcessError(retcode, process.args, 527│ output=stdout, stderr=stderr) 528│ return CompletedProcess(process.args, retcode, stdout, stderr) 529│ 530│ The following error occurred when trying to handle this error: EnvCommandError Command ['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl'] errored with the following return code 1, and output: Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Processing /home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl packaging is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel. 
ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/home/vasileios.skliris/.conda/envs/aframe-env/lib/python3.10/site-packages/~xceptiongroup-1.2.0.dist-info' at ~/.local/lib/python3.10/site-packages/poetry/utils/env.py:1476 in _run 1472│ output = subprocess.check_output( 1473│ command, stderr=subprocess.STDOUT, env=env, **kwargs 1474│ ) 1475│ except CalledProcessError as e: • Updating pygments (2.17.2 /home/conda/feedstock_root/build_artifacts/pygments_1700607939962/work -> 2.18.0) • Installing executing (2.0.1) • Updating jinja2 (3.1.3 /home/conda/feedstock_root/build_artifacts/jinja2_1704966972576/work -> 3.1.4) • Installing jupyterlab-pygments (0.3.0) • Installing mistune (3.0.2) • Installing nbclient (0.10.0) • Updating packaging (24.0 /home/conda/feedstock_root/build_artifacts/packaging_1710075952259/work -> 24.0): Failed CalledProcessError Command '['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl']' returned non-zero exit status 1. at /cvmfs/software.igwn.org/conda/lib/python3.10/subprocess.py:526 in run 522│ # We don't call process.wait() as .__exit__ does that for us. 523│ raise 524│ retcode = process.poll() 525│ if check and retcode: → 526│ raise CalledProcessError(retcode, process.args, 527│ output=stdout, stderr=stderr) 528│ return CompletedProcess(process.args, retcode, stdout, stderr) 529│ 530│ The following error occurred when trying to handle this error: EnvCommandError Command ['/home/vasileios.skliris/.conda/envs/aframe-env/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--prefix', '/home/vasileios.skliris/.conda/envs/aframe-env', '--upgrade', '--no-deps', '/home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl'] errored with the following return code 1, and output: Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Processing /home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl packaging is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel. 
ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/home/vasileios.skliris/.conda/envs/aframe-env/lib/python3.10/site-packages/~xceptiongroup-1.2.0.dist-info' at ~/.local/lib/python3.10/site-packages/poetry/utils/env.py:1476 in _run 1472│ output = subprocess.check_output( 1473│ command, stderr=subprocess.STDOUT, env=env, **kwargs 1474│ ) 1475│ except CalledProcessError as e: → 1476│ raise EnvCommandError(e, input=input_) 1477│ 1478│ return decode(output) 1479│ 1480│ def execute(self, bin: str, *args: str, **kwargs: Any) -> int: The following error occurred when trying to handle this error: PoetryException Failed to install /home/vasileios.skliris/.cache/pypoetry/artifacts/16/b0/e3/9de32eb3cee4faf411c88d49bc4a86b90d4470081e4ad5f2f90478909e/packaging-24.0-py3-none-any.whl at ~/.local/lib/python3.10/site-packages/poetry/utils/pip.py:51 in pip_install 47│ 48│ try: 49│ return environment.run_pip(*args) 50│ except EnvCommandError as e: → 51│ raise PoetryException(f"Failed to install {path.as_posix()}") from e 52│ • Installing pandocfilters (1.5.1) • Installing parso (0.8.4) • Installing pure-eval (0.2.2) • Installing python-json-logger (2.0.7) • Updating pyyaml (6.0.1 /home/conda/feedstock_root/build_artifacts/pyyaml_1695373428874/work -> 6.0.1) • Updating exceptiongroup (1.2.0 /home/conda/feedstock_root/build_artifacts/exceptiongroup_1704921103267/work -> 1.2.1) • Updating sniffio (1.3.1 /home/conda/feedstock_root/build_artifacts/sniffio_1708952932303/work -> 1.3.1) • Installing terminado (0.18.1) • Installing tinycss2 (1.3.0) • Updating typing-extensions (4.11.0 /home/conda/feedstock_root/build_artifacts/typing_extensions_1712329955671/work -> 4.11.0) • Updating wcwidth (0.2.13 /home/conda/feedstock_root/build_artifacts/wcwidth_1704731205417/work -> 0.2.13)
gharchive/issue
2024-05-15T06:36:42
2025-04-01T04:32:44.638969
{ "authors": [ "EthanMarx", "VasSkliris" ], "repo": "ML4GW/aframev2", "url": "https://github.com/ML4GW/aframev2/issues/153", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1259143708
Hanna Started working on the responsiveness of the profile page. Would love to hear feedback and discuss resolving the conflicts with others' modifications Looks a lot better! I think we should finish making the profile image responsive before we merge. We should also take care of aligning all the boxes. I'm working on the profile image right now, as well as the general margins between the cards. Trying to make sure that everything looks fine on wider screens as well.
gharchive/pull-request
2022-06-03T01:20:12
2025-04-01T04:32:44.646921
{ "authors": [ "Hgersten-hash", "jescalada" ], "repo": "MLH-Fellowship/project-team-pythonic", "url": "https://github.com/MLH-Fellowship/project-team-pythonic/pull/22", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
664220520
State Management for bookmarks
Contributes to #158 Fixes #157
Changelog
Note - (Android only) Copied Async Storage for Android
State management for bookmarks
Persisting bookmarks using async storage
Add bookmark screen (would need refactoring later, code copied from ExampleList)
Add a button on the Example List screen to navigate to the Bookmark screen (need to remove this later)
Test Plan
@YashKumarVerma can you check the metro.config file and see that it is linked to the React Native npm package, since I remember linking wasn't really working for you?
As of now, it's linked to the react-native npm package: const reactNativePath = path.resolve(__dirname, 'node_modules', 'react-native');
@anku255 On iOS, it would give errors because we have removed AsyncStorage. There are changes required in RNTesterApp.ios.js just like RNTesterApp.android.js.
@YashKumarVerma Strange, how's that side drawer with lists coming?
@manyaagarwal Have you tested these changes locally?
@YashKumarVerma we would remove the drawer (#170). Can you try adding bookmarks from the main ExampleList screen?
@chirag-singhal Tested the changes. LGTM!
@anku255 I made changes for iOS, but we can't test them now. So should we keep it on hold or merge this?
gharchive/pull-request
2020-07-23T05:53:43
2025-04-01T04:32:44.652217
{ "authors": [ "YashKumarVerma", "chirag-singhal", "manyaagarwal" ], "repo": "MLH-Fellowship/react-native", "url": "https://github.com/MLH-Fellowship/react-native/pull/162", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
676046135
Use theme for colors Summary Using theme instead of hardcoded color values. I have tested it on both Android and iOS. I am going to merge this now. I will migrate the mlh-fellowship-ui-changes branch to rn-ios-migration-final and I need the former to be as updated as possible.
gharchive/pull-request
2020-08-10T10:54:16
2025-04-01T04:32:44.653629
{ "authors": [ "anku255" ], "repo": "MLH-Fellowship/react-native", "url": "https://github.com/MLH-Fellowship/react-native/pull/197", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
245156755
Share database accounts for multi-pools. Hi, I would like to be able to share the accounts database with other servers running MPOS and other cryptocoins for a multi-pool. Any suggestions on how to do it? Thanks. Best regards. I found it, sorry: https://github.com/MPOS/php-mpos/issues/735
gharchive/issue
2017-07-24T17:44:44
2025-04-01T04:32:44.663721
{ "authors": [ "minerpool" ], "repo": "MPOS/php-mpos", "url": "https://github.com/MPOS/php-mpos/issues/2582", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1498412302
report email needs some corrections
"finished" typo
link to GitHub is wrong
write and attach module log file
closed with commits 82faa98, d9c2512
gharchive/issue
2022-12-15T13:11:57
2025-04-01T04:32:44.664875
{ "authors": [ "m-jahn" ], "repo": "MPUSP/snakemake-ms-proteomics", "url": "https://github.com/MPUSP/snakemake-ms-proteomics/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
110437204
os_image_label parameter returns old ubuntu version
When I set:
"os_image_label": "Ubuntu Server 14.04 LTS"
and run packer build, I notice that the image source used is not the latest one:
azure: Image source is OS image "b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04-LTS-amd64-server-20140724-en-us-30GB"
When I try to create a VM via the Azure interface with an Ubuntu image, there is an option which allows you to choose the version release date, and currently the latest one is 9/9/2015 (that is the date I can choose via the interface). The one below is returned by the azure vm image list command:
b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_3-LTS-amd64-server-20150908-en-us-30GB
Is it possible to set it somewhere, or has that option been missed?
I've just changed it to "os_image_label": "Ubuntu Server 14.04.3-LTS" and now I get the latest one.
Indeed, it does an exact match on label: https://github.com/MSOpenTech/packer-azure/blob/master/packer/builder/azure/FindImage.go#L43 Does that resolve your issue?
@paulmey - yes, using "os_image_label": "Ubuntu Server 14.04.3-LTS" resolves my issue. It would be nice to mention that somewhere in a doc, because it is not so obvious that you need to use a hyphen in the label, especially when the example shows "os_image_label": "Ubuntu Server 14.04 LTS", the label without a hyphen. Nothing big, but confusing.
Agreed. I've been thinking for a while about what the best way would be to select the OS image. The name is a bit obscure, I'd never make someone type it, but the label doesn't always work well either, such as in this case. BTW, these names are created by the publishers, so things like this hyphen are not something that is under central control...
Hi @paulmey, I ran into a similar case. My use case is that I want to be able to build an image that is immutable in time (for debugging purposes, for instance). As an example, I have these indeed obscure names:
5112500ae3b842c8b9c604889f8753c3__OpenLogic-CentOS-71-20150605
5112500ae3b842c8b9c604889f8753c3__OpenLogic-CentOS-71-20150731
If I have built my user image with the 20150605 version, and so configure the packer file with an exhaustive identifier of the OS version, I'd like to rebuild it without getting the 20150731 one. Yet, as the label is the same for both ("OpenLogic 7.1"), it is not usable in my case. When I check the list from azure-cli, the long hashed prefix which makes the name obscure doesn't bring any info. So, wouldn't it be possible to have an "os_image_name" parameter based on the last part of the name, i.e. in my case OpenLogic-CentOS-71-20150605? It would make the name less obscure and more precise than the label.
I like the suggestion. However, I don't want to rely on the format of the name that specifically (it's not guaranteed). But maybe a substring match or regex match would be a workable solution? You could still use the value you propose above, "OpenLogic-CentOS-71-20150605", to match the name "5112500ae3b842c8b9c604889f8753c3__OpenLogic-CentOS-71-20150605". If you have time, feel free to send a PR. I'm afraid I won't get to this in the next week or so...
That's exactly what I had in mind. It is, by the way, how it is done with Google Compute Engine. So, I'll open a PR. No problem for the time, I have a workaround to solve my use case (store an OS image version as a custom image in storage to have it under control). Thanks for your work
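A minimal sketch of the substring/regex matching idea discussed above (this is illustrative Python, not the packer-azure Go code; the function name and the data shape are assumptions):

import re

def find_images_by_name(images, name_pattern):
    # Return every image whose name matches the given substring or regular
    # expression, regardless of the hashed publisher prefix.
    pattern = re.compile(name_pattern)
    return [image for image in images if pattern.search(image["name"])]

images = [
    {"name": "5112500ae3b842c8b9c604889f8753c3__OpenLogic-CentOS-71-20150605", "label": "OpenLogic 7.1"},
    {"name": "5112500ae3b842c8b9c604889f8753c3__OpenLogic-CentOS-71-20150731", "label": "OpenLogic 7.1"},
]

# Both images share the label, but a name fragment pins the exact dated build.
print(find_images_by_name(images, "OpenLogic-CentOS-71-20150605"))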
gharchive/issue
2015-10-08T12:24:22
2025-04-01T04:32:44.682426
{ "authors": [ "paulmey", "ravbaba", "rchalumeau" ], "repo": "MSOpenTech/packer-azure", "url": "https://github.com/MSOpenTech/packer-azure/issues/108", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
113082490
Redis Slave encounters Out of memory allocation and stops
I have 1 master redis cache and 2 slaves running on separate Win2012 R2 servers. The Redis master is working perfectly, but the slaves encounter an out-of-memory allocation and stop every 3 or 4 days. I set the maxheap size to 1gb on all 3 instances. Normally there are about 2000 items at peak time, taking about 100MB on the redis master. Below is the memory usage of the redis master and its cache items
Memory
used_memory:74411880
used_memory_human:70.96M
used_memory_rss:74378240
used_memory_peak:265625808
used_memory_peak_human:253.32M
used_memory_lua:33792
mem_fragmentation_ratio:1.00
mem_allocator:dlmalloc-2.8
Keyspace
db0:keys=13,expires=13,avg_ttl=86386815
db15:keys=214,expires=214,avg_ttl=18559118
db16:keys=412,expires=412,avg_ttl=5878930
db52:keys=774,expires=774,avg_ttl=3192163
db53:keys=87,expires=87,avg_ttl=7578893
Below are the statuses of the two slaves.
Slave1
Memory
used_memory:409188320
used_memory_human:390.23M
used_memory_rss:409187480
used_memory_peak:410208960
used_memory_peak_human:391.21M
used_memory_lua:36864
mem_fragmentation_ratio:1.00
mem_allocator:dlmalloc-2.8
Keyspace
db0:keys=13,expires=13,avg_ttl=0
db15:keys=13638,expires=13638,avg_ttl=0
db16:keys=1796,expires=1796,avg_ttl=0
db52:keys=8087,expires=8087,avg_ttl=0
db53:keys=2917,expires=2917,avg_ttl=0
Slave2
Memory
used_memory:321751264
used_memory_human:306.85M
used_memory_rss:321717624
used_memory_peak:322745704
used_memory_peak_human:307.79M
used_memory_lua:33792
mem_fragmentation_ratio:1.00
mem_allocator:dlmalloc-2.8
Keyspace
db0:keys=13,expires=13,avg_ttl=0
db15:keys=9918,expires=9918,avg_ttl=0
db16:keys=1406,expires=1406,avg_ttl=0
db52:keys=6048,expires=6048,avg_ttl=0
db53:keys=2120,expires=2120,avg_ttl=0
You can see that memory usage on the slaves keeps going up, and the number of cached items is also far higher than on the master. After another 2 or 3 days the memory usage on the slaves will reach the maxheap limit and they will crash. How can I configure the slaves so that they do not crash and run out of memory? Thanks, Zaw
Hi @zawaung1973, can you please provide the configuration files and the log files? Thank you.
Closing this issue for inactivity. Please feel free to reopen it if needed. Thank you.
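As a side note, a small monitoring sketch (not from this thread; hostnames and ports are assumptions) showing how the numbers above could be collected periodically across the master and the slaves with redis-py:

import redis

# Placeholder hostnames/ports for the three Windows instances.
nodes = {
    "master": ("master-host", 6379),
    "slave1": ("slave1-host", 6379),
    "slave2": ("slave2-host", 6379),
}

for name, (host, port) in nodes.items():
    client = redis.StrictRedis(host=host, port=port)
    memory = client.info("memory")
    keyspace = client.info("keyspace")
    total_keys = sum(db["keys"] for db in keyspace.values())
    print(name, memory["used_memory_human"], total_keys, "keys")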
gharchive/issue
2015-10-23T19:12:34
2025-04-01T04:32:44.691739
{ "authors": [ "enricogior", "zawaung1973" ], "repo": "MSOpenTech/redis", "url": "https://github.com/MSOpenTech/redis/issues/346", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1267935098
[Windows] Slow internet speed or Stuck at recaptcha; this issue wasn't present in 1.7.2...
Is there an existing issue for this? [X] I have searched the existing issues
I'm submitting a ... [X] bug report [ ] feature request [ ] support request --> Contact me over mail for support https://github.com/MShawon
Description
When using 1.7.3 or 1.7.4 there is a bug when trying to use YouTube Viewer: "Slow internet speed to Stuck at recaptcha". The program tries to open a Chrome window but fails. On macOS the program runs fine with no issue.
Environment
- OS : Windows 10 21H2
- Python : 3.9.10
- Script version : 1.7.3 or 1.7.4
config.json
{
"http_api": { enabled: false host: 0.0.0.0 port: 5000 }
database: true
views: 50000
minimum: 85.0
maximum: 95.0
"proxy": { category: p proxy_type: http filename: proxy.txt authentication: true proxy_api: false refresh: 0.0 }
"background": false
"bandwidth": true
"playback_speed": 3
"max_threads": 5
"min_threads": 2
}
Well, if you set an incorrect user/pass combo for the proxy (in my case, proxy.txt), it will not load any page.
Tried after resetting the PC to Ubuntu Linux (seems to run better than Windows). Also I modified a small line to have it running on Python 3.10, since Ubuntu ships with Python 3.10.
gharchive/issue
2022-06-10T19:41:42
2025-04-01T04:32:44.700844
{ "authors": [ "vrdevelopersco" ], "repo": "MShawon/YouTube-Viewer", "url": "https://github.com/MShawon/YouTube-Viewer/issues/389", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
579850163
Include country information in the IP lookup response { "ip":"(---)", "province":"", "provinceId":999999, "city":"", "cityId":0, "isp":"美国", "desc":" 美国" } As the title says: if I want to determine whether a client IP is domestic (mainland China) or foreign, the response does not contain usable information for that; the isp/desc fields above only carry "美国" (United States), and with a Japanese IP I also get the same kind of data as above. For a future update, could you consider adding support for identifying the country? @youyinnn It will be considered.
gharchive/issue
2020-03-12T10:47:12
2025-04-01T04:32:44.738229
{ "authors": [ "MZCretin", "youyinnn" ], "repo": "MZCretin/RollToolsApi", "url": "https://github.com/MZCretin/RollToolsApi/issues/40", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
477163417
[BUG] A non well formed numeric value encountered "php": "^7.1.3", "fideloper/proxy": "^4.0", "laravel/framework": "5.8.*", "laravel/tinker": "^1.0", "maatwebsite/excel": "2.*" Regarding "maatwebsite/excel": "2.*": Laravel Excel version 2 is no longer supported. See Supported versions
gharchive/issue
2019-08-06T04:29:49
2025-04-01T04:32:44.764385
{ "authors": [ "GlennM", "pankajthakur007" ], "repo": "Maatwebsite/Laravel-Excel", "url": "https://github.com/Maatwebsite/Laravel-Excel/issues/2318", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
386428781
Permissions Can anyone use the /py command? If so, it seems like a bad idea to use this plugin, because anyone could hack my server. By default, yes, though this can be disabled in the config, and the server ports can be put behind an authentication mechanism. There isn't much support out of the box for authentication; I might think about adding this later. The new release contains permissions for the chat commands (set to ops by default). The Telnet / WebSocket interface is left as is (as these can be disabled / proxied, which for now is my preferred way to go about this).
gharchive/issue
2018-12-01T06:20:24
2025-04-01T04:32:44.772281
{ "authors": [ "Macuyiko", "pitust" ], "repo": "Macuyiko/minecraft-python", "url": "https://github.com/Macuyiko/minecraft-python/issues/19", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
970071854
BetterBeds & FastChest not removed in 2.1.0-beta.3 Describe the bug In the changelog for 2.1.0-beta.3, BetterBeds & FastChests were supposedly removed. However, they're both still present in the Packwiz config. From what I can tell, this doesn't cause any issues, but should still be addressed. This issue is similar to #97. Expected behavior BetterBeds & FastChest were both removed. Observed/actual behavior BetterBeds & FastChest are still present. Steps To Reproduce Install 2.1.0-beta.3 Modpack version 2.1.0-beta.3 Launcher MultiMC Install method Fresh install/new profile Additional context No response Thank you. I guess I should clear up the packwiz mods folder every time before adding new ones 😄
gharchive/issue
2021-08-13T05:19:59
2025-04-01T04:32:44.775557
{ "authors": [ "Madis0", "nchristopher" ], "repo": "Madis0/fabulously-optimized", "url": "https://github.com/Madis0/fabulously-optimized/issues/107", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1858005915
🛑 Libreddit (libreddit.spike.codes) is down In 63fad39, Libreddit (libreddit.spike.codes) (https://libreddit.spike.codes) was down: HTTP code: 0 Response time: 0 ms Resolved: Libreddit (libreddit.spike.codes) is back up in 7a82597 after 592 days, 12 hours, 47 minutes.
gharchive/issue
2023-08-20T07:01:36
2025-04-01T04:32:44.783920
{ "authors": [ "Magic-Services-Account" ], "repo": "Magic-Services/upptime", "url": "https://github.com/Magic-Services/upptime/issues/6008", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1969399168
🛑 Libreddit (libreddit.spike.codes) is down In 94cf4fe, Libreddit (libreddit.spike.codes) (https://libreddit.spike.codes) was down: HTTP code: 502 Response time: 22804 ms Resolved: Libreddit (libreddit.spike.codes) is back up in 426bfe1 after 1 hour, 29 minutes.
gharchive/issue
2023-10-30T23:51:06
2025-04-01T04:32:44.786270
{ "authors": [ "Magic-Services-Account" ], "repo": "Magic-Services/upptime", "url": "https://github.com/Magic-Services/upptime/issues/7193", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1990035110
🛑 Libreddit (libreddit.spike.codes) is down In fbcf6fa, Libreddit (libreddit.spike.codes) (https://libreddit.spike.codes) was down: HTTP code: 404 Response time: 726 ms Resolved: Libreddit (libreddit.spike.codes) is back up in d239057 after 56 minutes.
gharchive/issue
2023-11-13T07:27:48
2025-04-01T04:32:44.788904
{ "authors": [ "Magic-Services-Account" ], "repo": "Magic-Services/upptime", "url": "https://github.com/Magic-Services/upptime/issues/7484", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2225161031
short dayname with time Hi, I am not a native English speaker ... this is a modification request, not a bug. My next calendar event is Saturday at 15:30 (German: Samstag um 15.30 Uhr) ... both words are very long (the German one is much longer) and too long for the available cell ... is it possible to shorten this day name? There is a thread in the forum (https://forum.magicmirror.builders/topic/18610/set-the-calendar-day-to-a-short-version?page=1) with the suggestion to open an issue ... here https://momentjscom.readthedocs.io/en/latest/moment/04-displaying/01-format/ is the needed format. I think I need Day of Week > "ddd", and I think the "problem" is in calendar.js:455 > nextWeek "dddd", but I don't know if that is really the problem ... also nextDay with "TOMORROW" is too long when a time is shown. I need "Sat at 15:30" instead of "Saturday at 15.30" ... I need a short day name of the week, but only if a time is shown ... same with tomorrow and other long names combined with a time (in German, tomorrow (morgen) is a lot shorter). I've changed line 455 from nextWeek: "dddd" (full day name) to nextWeek: "ddd" (short day name) and restarted MM, but nothing changed :( I am not a programmer ... can anyone help? That code path is only for full-day events, with no time, AND only with timeFormat: "relative". I changed mine to ddd and created an event for Monday; it stopped at that code and produced 'Mon'. Events without a time work fine with "ddd" ... but how can I change it for events with a time? The same goes for tomorrow (with and without a time). An option in config.js would be a good idea ... but I don't think this is important enough to justify a change ... The code says the config.js dateFormat property will be used for this. It's not, sorry ... or I am blind ... please show me :)
gharchive/issue
2024-04-04T11:00:20
2025-04-01T04:32:44.793579
{ "authors": [ "sdetweil", "skyzuma" ], "repo": "MagicMirrorOrg/MagicMirror", "url": "https://github.com/MagicMirrorOrg/MagicMirror/issues/3421", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2210091567
Support for Claude 3 models Currently it looks like logs (captured requests and responses) for the new Claude 3 models don't show up at https://promptlayer.com/workspace/.... Logs for Claude 2 show up fine. To test Claude 2 I used promptlayer.anthropic.Anthropic.completions.create and it worked (logs showed up). To test Claude 3 I use promptlayer.anthropic.AsyncAnthropic.messages.create/promptlayer.anthropic.AsyncAnthropic.messages.stream and the logs don't show up in PromptLayer. I did follow all the instructions to configure the Python promptlayer client properly. @teremterem I can confirm that I am seeing this bug with AsyncAnthropic, looking into a solution. Hey @teremterem can you pull the most recent version of the Python library and try again? Hey @Jped thank you for such a quick reaction! Unfortunately the logs don't show up with 0.5.3 either. It gives me this warning when I try: WARNING: While logging your request PromptLayer had the following error: Object of type AsyncMessageStreamManager is not JSON serializable. The logs do show up when I turn off token streaming, though. So the problem remains only when token streaming is on. @teremterem my bad, forgot to check for streaming. Looking into that right now. Hi @Jped I will keep using Anthropic in streaming mode (because it contributes to better UX) and for a while I'll be OK without the logs, but there will be a point in time when having logs is important to me again. I don't have a specific timeline just yet and I also don't want to put pressure on you, so please take your time. Thanks for reacting to my issue so soon in the first place! Hey @teremterem just pushed an update and it works in my testing. It does not work for the text_stream example Anthropic gives in their docs, but it does work for this code snippet:
    async def main() -> None:
        stream = await client.messages.create(
            max_tokens=1024,
            messages=[
                {
                    "role": "user",
                    "content": "Hello, Claude",
                }
            ],
            model="claude-3-opus-20240229",
            stream=True,
        )
        async for event in stream:
            # print(event)
            pass
LMK if this works. Hey @Jped, I finally had a chance to check it out. Yes, it works, thank you! Would you by any chance be able to support the text_stream version too some time in the future? text_stream-based code looks cleaner because there is no need to think about the different event types and which ones to filter out (plus, since text_stream is in the docs, it will most likely be the first thing everyone tries anyway, only to discover that it is not logged in your platform). I haven't checked how easy it is to get text-generation metadata (token count etc.) with the non-text_stream version yet, but if that happens to be messier too, I'll probably have to go back to text_stream.
gharchive/issue
2024-03-27T07:42:44
2025-04-01T04:32:44.819693
{ "authors": [ "Jped", "teremterem" ], "repo": "MagnivOrg/prompt-layer-library", "url": "https://github.com/MagnivOrg/prompt-layer-library/issues/126", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
509979571
MAISTRA-1005 Do not expect namespaces to have a stable ordering The ServiceMeshMemberRoll controller was previously checking the equality of the existing list of namespaces against the incoming one using a simple for-loop that would report inequality in case of different ordering. This became a problem when a bug in the operator caused it to repeatedly reconcile the MemberRoll, changing the order of namespaces in the process. Note that this increases the complexity of the equality check by quite a margin, but the changes in https://github.com/Maistra/istio/pull/51 could counteract this. LGTM, but you could also shorten this a lot by using sets.NewString() and HasAll() or Equal() (see vendor/k8s.io/apimachinery/pkg/util/sets/string.go). Good call. I'm using sets now. I tried to make the change as non-invasive as possible; I will include a refactored version in my WIP PR that doesn't rebuild the sets on every comparison but instead stores a set rather than a []string in its struct. Was able to confirm that this change will at least keep the control plane sane when the operator goes into the endless reconciliation loop. Still looking at how to fix the operator side.
gharchive/pull-request
2019-10-21T13:54:51
2025-04-01T04:32:44.849275
{ "authors": [ "dgn" ], "repo": "Maistra/istio", "url": "https://github.com/Maistra/istio/pull/59", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
873951403
Please merge on 3-05-2020, morning. Updated tubing on the FreeCAD assembly; staging more commits. Okay, standing by.
gharchive/pull-request
2021-05-02T14:07:01
2025-04-01T04:32:44.868749
{ "authors": [ "Anool", "argo-1" ], "repo": "MakersAsylumIndia/M19_OxiKit", "url": "https://github.com/MakersAsylumIndia/M19_OxiKit/pull/8", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }