repo_name: string (length 9-75)
topic: string (30 classes)
issue_number: int64 (1 to 203k)
title: string (length 1-976)
body: string (length 0-254k)
state: string (2 classes)
created_at: string (length 20)
updated_at: string (length 20)
url: string (length 38-105)
labels: list (length 0-9)
user_login: string (length 1-39)
comments_count: int64 (0 to 452)
axnsan12/drf-yasg
rest-api
722
[Custom schema generation] for request_body not working
Hi, I was working with custom schema generation for the request body. The requests are going through. ![Screenshot from 2021-06-05 21-47-01](https://user-images.githubusercontent.com/14337121/120898334-0cd28400-c648-11eb-9994-2cacf0b70420.png) I have defined the email field as required, but it's not working: the requests are still going through, resulting in an error. My aim is to throw an error {"This field is required"} if the data is not provided. ``` @swagger_auto_schema( method='post', request_body=openapi.Schema( type=openapi.TYPE_OBJECT, title="Request OTP", required=["email"], properties={ "email": openapi.Schema( title="Email", type=openapi.TYPE_STRING, format=openapi.FORMAT_EMAIL, ), }, ), responses={ 200: "Success", 400: "Invalid Request" }) ``` Thanks for the help.
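A note for readers of this report: `swagger_auto_schema` only shapes the generated OpenAPI document; it does not validate incoming requests, so `required=["email"]` changes the docs but not runtime behavior. Enforcement has to happen in the view, typically via a DRF serializer. Below is a minimal stdlib sketch of the check such a serializer performs (the function name is illustrative, not a drf-yasg API):

```python
def validate_required(payload, required):
    """Return a DRF-style error dict for any missing required fields."""
    return {
        field: ["This field is required."]
        for field in required
        if payload.get(field) in (None, "")
    }

# The 400-style error the reporter wants when "email" is absent:
assert validate_required({}, ["email"]) == {"email": ["This field is required."]}
assert validate_required({"email": "a@b.co"}, ["email"]) == {}
```

In a real DRF view this check would be `serializer.is_valid(raise_exception=True)` on a serializer whose `email` field is required.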
open
2021-06-05T16:21:50Z
2025-03-07T12:13:03Z
https://github.com/axnsan12/drf-yasg/issues/722
[ "triage" ]
BradPrit09
0
PrefectHQ/prefect
automation
17,449
multiprocessing library inconsistencies on Deployment Flow runs
### Bug summary Utilizing the Multiprocessing library within a prefect flow run seems to work locally but causes failures or remains stuck indefinitely when triggering deployed runs with the same code. The following code executes successfully when run locally but when triggered as part of a deployment the run appears to be stuck indefinitely, only observed from a Kubernetes deployment, or fails with a pickling error seemingly from the multiprocessing library when run using .serve() ```python from prefect import flow, task from multiprocessing import Process # from prefect.runner.storage import GitRepository def print_func(continent='Asia'): print('The name of continent is : ', continent) @task(log_prints=True) def run_print_func(): print("This task is to print out given continents.") names = ['America', 'Europe', 'Africa'] procs = [] proc = Process(target=print_func) # instantiating without any argument procs.append(proc) proc.start() # instantiating process with arguments for name in names: # print(name) proc = Process(target=print_func, args=(name,)) procs.append(proc) proc.start() # complete the processes for proc in procs: proc.join() @flow(name="test-multiprocessing") def test_multiprocessing_flow(): run_print_func() print("Multiprocessing works!") if __name__ == "__main__": test_multiprocessing_flow.serve() ``` ### Version info ```Text Version: 3.2.11 API version: 0.8.4 Python version: 3.12.0 Git commit: 9481694f Built: Wed, Mar 5, 2025 10:00 PM OS/Arch: darwin/arm64 Profile: masonsandbox Server type: cloud Pydantic version: 2.10.6 ``` ### Additional context Traceback from a served flow ``` 13:16:42.878 | ERROR | prefect.engine - Execution of flow run '3f700320-83c2-404f-a4eb-d8865d587fa1' exited with unexpected exception Traceback (most recent call last): File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/site-packages/prefect/engine.py", line 57, in handle_engine_signals yield File 
"/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/site-packages/prefect/engine.py", line 124, in <module> run_flow(flow, flow_run=flow_run, error_logger=run_logger) File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/site-packages/prefect/flow_engine.py", line 1530, in run_flow ret_val = run_flow_sync(**kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/site-packages/prefect/flow_engine.py", line 1375, in run_flow_sync return engine.state if return_type == "state" else engine.result() ^^^^^^^^^^^^^^^ File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/site-packages/prefect/flow_engine.py", line 350, in result raise self._raised File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/site-packages/prefect/flow_engine.py", line 765, in run_context yield self File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/site-packages/prefect/flow_engine.py", line 1373, in run_flow_sync engine.call_flow_fn() File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/site-packages/prefect/flow_engine.py", line 785, in call_flow_fn result = call_with_parameters(self.flow.fn, self.parameters) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/site-packages/prefect/utilities/callables.py", line 208, in call_with_parameters return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/Users/masonmenges/Repos/git_hub_repos/mm2-sanbox/flows/multiprocessing_test2.py", line 32, in test_multiprocessing_flow run_print_func() File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/site-packages/prefect/tasks.py", line 1033, in __call__ return run_task( ^^^^^^^^^ File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/site-packages/prefect/task_engine.py", line 1576, in run_task 
return run_task_sync(**kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/site-packages/prefect/task_engine.py", line 1389, in run_task_sync return engine.state if return_type == "state" else engine.result() ^^^^^^^^^^^^^^^ File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/site-packages/prefect/task_engine.py", line 482, in result raise self._raised File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/site-packages/prefect/task_engine.py", line 805, in run_context yield self File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/site-packages/prefect/task_engine.py", line 1387, in run_task_sync engine.call_task_fn(txn) File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/site-packages/prefect/task_engine.py", line 828, in call_task_fn result = call_with_parameters(self.task.fn, parameters) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/site-packages/prefect/utilities/callables.py", line 208, in call_with_parameters return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/Users/masonmenges/Repos/git_hub_repos/mm2-sanbox/flows/multiprocessing_test2.py", line 16, in run_print_func proc.start() File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) ^^^^^^^^^^^^^^^^^ File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/multiprocessing/context.py", line 224, in _Popen return _default_context.get_context().Process._Popen(process_obj) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/multiprocessing/context.py", line 289, in _Popen return Popen(process_obj) ^^^^^^^^^^^^^^^^^^ File 
"/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/opt/homebrew/Caskroom/miniconda/base/envs/prefect_cloud/lib/python3.12/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) _pickle.PicklingError: Can't pickle <function print_func at 0x112eb8fe0>: import of module '__prefect_loader_4615899616__' failed 13:16:43.467 | WARNING | EventsWorker - Still processing items: 3 items remaining... 13:16:45.970 | ERROR | prefect.flow_runs.runner - Process for flow run 'spiffy-gecko' exited with status code: 1 ```
closed
2025-03-11T19:23:30Z
2025-03-20T18:07:30Z
https://github.com/PrefectHQ/prefect/issues/17449
[ "bug" ]
masonmenges
0
django-import-export/django-import-export
django
1,027
Export doesn't work for querysets using the values() method
Hello! Exporting fields doesn't work when the queryset is used with the values() method. This is because when we iterate over the queryset we get dicts rather than model instances. I think the code in resources.py should be like this: ```python def export_field(self, field, obj): if isinstance(obj, dict): return obj.get(field.attribute) field_name = self.get_field_name(field) method = getattr(self, 'dehydrate_%s' % field_name, None) if method is not None: return method(obj) return field.export(obj) ``` We can't use the queryset without values() because it is used for GROUP BY, so I think import_export should support it.
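The dispatch the reporter proposes can be shown with a runnable toy version; the `Field` class here is a stand-in for the real `import_export.fields.Field`, just enough to demonstrate the dict-vs-instance branch:

```python
class Field:
    """Stand-in for import_export.fields.Field with just enough behavior."""
    def __init__(self, attribute):
        self.attribute = attribute

    def export(self, obj):
        return getattr(obj, self.attribute)

def export_field(field, obj):
    # .values() querysets yield dicts, not model instances,
    # so fall back to key lookup before attribute access.
    if isinstance(obj, dict):
        return obj.get(field.attribute)
    return field.export(obj)

class FakeModel:
    name = "widget"

# Works for both a values() row (dict) and a model instance:
assert export_field(Field("name"), {"name": "widget"}) == "widget"
assert export_field(Field("name"), FakeModel()) == "widget"
```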
closed
2019-11-06T15:02:06Z
2024-09-10T12:52:48Z
https://github.com/django-import-export/django-import-export/issues/1027
[ "stale" ]
CrazySky2121
3
axnsan12/drf-yasg
django
600
How to write @swagger_auto_schema for uploading a file?
Hello! I simply want to do file uploading. After reading through the documentation and the issues here, I came up with the following code; still, it doesn't seem to work as intended. I decided to use manual_parameters because, as the docs say, a parameter can describe file uploads via type = file, but only as part of a form operation. So I simply use openapi.Parameter and make my view use the FormParser class. ``` @swagger_auto_schema( operation_description='Upload container excel, if the columns and data are valid. Containers will be created. ' 'If container with such name already exists, it will be update instead', operation_id='Upload container excel', tags=[tag], manual_parameters=[openapi.Parameter( name="file", in_=openapi.IN_FORM, type=openapi.TYPE_FILE, required=True, description="Document" )], responses={400: 'Invalid data in uploaded file', 200: 'Success'}, ) @action(detail=False, methods=['post'], parser_classes=(FormParser, ), name='upload-excel', url_path='upload-excel') def upload_excel(self, request): file = request.FILES.get('file') try: df = pd.read_excel(file) ...omitted code... ``` However, I then received this error: `drf_yasg.errors.SwaggerGenerationError: cannot instantiate nested serializer as Parameter` There is #503 discussing this, but I'm not sure how to resolve it with my method. I did try adding ``` def get_parsers(self): if getattr(self, 'swagger_fake_view', False): return [] return super().get_parsers() ``` to my view. The problem still persists. I also tried removing the parser classes and overriding auto_schema with my own, simply ``` class ExcelUploadAutoSchema(SwaggerAutoSchema): def get_consumes(self): return ['multipart/form-data'] ``` and ``` @swagger_auto_schema( auto_schema=ExcelUploadAutoSchema, operation_description='Upload container excel, if the columns and data are valid. Containers will be created. ' 'If container with such name already exists, it will be update instead', ..... ``` Still producing the same error. 
Did I miss anything? I would really appreciate the help!
closed
2020-06-01T08:20:41Z
2021-12-11T14:32:06Z
https://github.com/axnsan12/drf-yasg/issues/600
[]
Atidhaya
3
pydantic/pydantic-settings
pydantic
543
Can we store JSON extras?
`pydantic-settings` provides a type-safe setting access. A comparison with .NET is `IConfiguration` stores everything with the format "MySetting:MyNested:...", and you can create a class (`class MyConfiguration`) and access settings in a type-safe way if you want so. But sometimes we need a centralized place (`IConfiguration`) just to save settings to be managed by libraries. For example, Azure App Configuration can return settings (that we'll save in a Pydantic model), but also can return feature flags. Feature flags can be only True/False or complex. For example, these are two (simple) feature flags returned by the Azure service: ```json { "feature_management": { "feature_flags": [ { "id": "MyFeatureFlag1", "description": "", "enabled": true, "conditions": { "client_filters": [] }, "display_name": null }, { "id": "MyFeatureFlag2", "description": "", "enabled": true, "conditions": { "client_filters": [ { "name": "Microsoft.TimeWindow", "parameters": { "Start": "Sun, 16 Feb 2025 23:00:00 GMT", "End": "Tue, 18 Feb 2025 14:41:00 GMT", "Recurrence": { "Pattern": { "Type": "Daily", "Interval": 1 }, "Range": { "Type": "EndDate", "EndDate": "Thu, 11 Dec 2025 23:00:00 GMT" } } } } ] }, "display_name": null } ] } } ``` Creating Pydantic models for that is absurd, so we have to store it in a centralized place. 
After a research, I tried to store them allowing extras: ```python from collections.abc import Mapping from typing import Any from pydantic_settings import ( BaseSettings, EnvSettingsSource, PydanticBaseSettingsSource, SettingsConfigDict, ) feature_management: dict[str, Any] = { "feature_management": { "feature_flags": [ { "id": "MyFeatureFlag1", "description": "", "enabled": True, "conditions": {"client_filters": []}, "display_name": None, }, { "id": "mytwo", "description": "", "enabled": True, "conditions": { "client_filters": [ { "name": "Microsoft.TimeWindow", "parameters": { "Start": "Sun, 16 Feb 2025 23:00:00 GMT", "End": "Tue, 18 Feb 2025 14:41:00 GMT", "Recurrence": { "Pattern": {"Type": "Daily", "Interval": 1}, "Range": { "Type": "EndDate", "EndDate": "Thu, 11 Dec 2025 23:00:00 GMT", }, }, }, } ] }, "display_name": None, }, ] } } class CustomSettingsSource(EnvSettingsSource): def __init__( self, settings_cls: type[BaseSettings], ) -> None: super().__init__( settings_cls, case_sensitive=True, env_prefix=None, env_nested_delimiter=":", env_ignore_empty=False, env_parse_none_str=None, env_parse_enums=None, ) def _load_env_vars(self) -> Mapping[str, str | None]: return feature_management def __repr__(self) -> str: return "CustomSettingsSource" class ApplicationSettings(BaseSettings): name: str model_config = SettingsConfigDict(extra="allow") @classmethod def settings_customise_sources( cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, # noqa: ARG003 dotenv_settings: PydanticBaseSettingsSource, # noqa: ARG003 file_secret_settings: PydanticBaseSettingsSource, # noqa: ARG003 ) -> tuple[PydanticBaseSettingsSource, ...]: custom = CustomSettingsSource( settings_cls, ) return (init_settings, custom) application_settings = ApplicationSettings(name="Alex") # type: ignore[reportCallIssue] print(application_settings) print("----") print(application_settings.__pydantic_extra__) ``` Result: ``` 
name='Alex' ---- {} ``` Expected result: ``` name='Alex' ---- {'feature_management': ...} ``` This is important for us because we should reload feature flags after X seconds or immediately, and the class `FeatureManager` (https://learn.microsoft.com/en-us/azure/azure-app-configuration/feature-management-python-reference) use them: ```python feature_manager = FeatureManager(feature_management) ```
closed
2025-02-19T12:47:35Z
2025-02-27T13:51:48Z
https://github.com/pydantic/pydantic-settings/issues/543
[ "unconfirmed" ]
AndreuCodina
3
graphdeco-inria/gaussian-splatting
computer-vision
989
How does cuda code work when rasterization is performed ?
Hi, Thanks a lot for sharing your great work. I have some questions about the process of differentiable Gaussian rasterization. 1. ![image](https://github.com/user-attachments/assets/42e23ed0-156c-4fbb-afae-68304372fe9c) What is the meaning of "from . import _C"? Does "_C" refer to the gaussian rasterization folder containing the CUDA code? 2. ![image](https://github.com/user-attachments/assets/52339597-138a-472b-8785-f695fc813954) In which file of the rasterize_gaussians folder above (containing the CUDA code) should I look for the corresponding **rasterize_gaussians** function when I perform rasterization, i.e. when the code reaches the line `num_rendered, color, objects, radii, geomBuffer, binningBuffer, imgBuffer = _C.rasterize_gaussians(*args)`? 3. ![image](https://github.com/user-attachments/assets/994d06c1-1b2e-4fcc-946d-35b633851de9) There are many code files in this folder; do they have an **order of execution**? I don't know where to start reading them. And how is this CUDA code connected to the corresponding parts of Python: is it by the mechanism in question 1? 4. I am new to this field. I would like to add a new attribute to the Gaussians and modify the code in the rasterization section accordingly; which CUDA file would you recommend starting with? Sorry for asking so many questions, looking forward to your reply! My best regards
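On question 1: in a PyTorch project, `_C` is conventionally the compiled C++/CUDA extension module, and the binding between the `.cu`/`.cpp` files and the Python package is declared in the extension's `setup.py`. A sketch of that mechanism follows; the source file names here are illustrative, not the repo's exact file list:

```python
# Sketch of how a PyTorch CUDA extension exposes a compiled submodule named _C.
# After `pip install .`, Python code in the package can do `from . import _C`
# and call the functions registered in the C++ sources via pybind11.
from setuptools import setup
from torch.utils.cpp_extension import CUDAExtension, BuildExtension

setup(
    name="diff_gaussian_rasterization",
    ext_modules=[
        CUDAExtension(
            # importable afterwards as: from diff_gaussian_rasterization import _C
            name="diff_gaussian_rasterization._C",
            sources=["ext.cpp", "rasterize_points.cu"],
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
```

So there is no Python-level "order of execution" among the CUDA files: the build compiles them all into one shared library, and the entry points are whatever functions the C++ binding file registers.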
open
2024-09-15T08:00:08Z
2024-10-09T14:43:43Z
https://github.com/graphdeco-inria/gaussian-splatting/issues/989
[]
jingyuzhao1010
3
Ehco1996/django-sspanel
django
239
Single user with multiple ports cannot be used
日志如下: 启动服务了!!端口:1025 密码:b'OSgVSYqCpf91' 配置:{'password': b'OSgVSYqCpf91', 'obfs': b'http_simple_compatible', 'protocol': b'auth_chain_a', 'method': b'aes-128-ctr'} 2019-07-16 17:24:26 WARNING common.py:238 unsupported addrtype 69, maybe wrong password or encryption method 2019-07-16 17:24:26 WARNING tcprelay.py:540 Protocol ERROR, TCP ogn data b'35e8ff3ff2091b8feae1ea916f8e58b81b384738003956863655949456130e3ac2235ec1c48abc82d23a08165472852b2fe791091ee80d7b91c49ad29071d9e800fc3e2308cdfe10485b4942589e75b37bef73ca80a47300bf2d06de2f972e17eabebc0b640f9a7da873630cf34a8f73b7c8eed1e2fd8c0e3c2ae09791b2ea21927f7d71182f27bbb49927974a1019623958d40d20063ad88b5d7b70eee7559343d24a0e70027b4d1192d1aa994f4edf4d6e7f772faa7bdce420bb57a03db6177c90901aa7c4c5973ce2cc29ee1d6e0859dcc650feaf4d0e0040f35d210954e5e768605869caff55b2bb4d31b2fb937afcd5621716b05538666460a4867e6644c11c2c54ab4ef2d90ed0f58ac2d8ecbbc914cbc93ed3688a39b3bdfa13a768af8a2809f6419a057b693a948d0e4efc82670ef6eb4ffd5c83407d4da6d4a030665d1cc665d4ec1a7fb39cba95b0ec6aaf0ef734e1368bc8059b5e1e980958c71b86f5b397a54121d23a60ef8b21d265200421e6d37257a8e24757e24466d144497b4abbb56019aee4a88994dd97ae8747c918f8a3f4ab4591b8faba8a7318003c58098681b10ed18c1b0d282e3e95f3e553553a04f9dea24f9dce10cae4593257cfea829a35c700aa33eed4611c2724efb6efcd3a07b787e16216a389f414a92885d5389f93a65c39035862f8911334ad4bcaf89e4bef86edfc002a3e50f86b82dea7decf9e222ea812c028c688ed52874ee69b46327964a31718db32ba03b6d428ab08e716b1898b4a54c7c2e68db5dab867b6fd915842023e409154660479750c22ece976ba050b881293f404c0b567481d6773835908ac221cf842d5ae6621702a15ac04897b250f823e58800fd3684dfdf93c78130ecf9ea0706edaf723eb1710457cbcc193efb69e0f226f78cdc829b4785301ab5144ab4dc4bd899c0c198d95880eada983e962f75df4d89c3a1175c4a90ff6901e008d7e07caac3656a93c96ae7b627e5889cdc4a5e2b628538d250780d0cdc8d4011039c7fa69f4bfbde7a556f3ededede47b93dc1faa2ed12be6890f2500eb2ae5b4fa7488a7da11f396185df771b48b54a73df6cbdd0d37ac23a86777fa712b85e25d71e69859745b5a8f02575d6cdc9fa0d3b917283c723846c723535f67dae0425f5
456e3fa4acd37d53c7f9087a28518c319a615ab613302a3d1ed0d07da01876dbc32fc8c99a749559faf639d716d7015a5540cbd49ed94cb09f088a79aeebab1ea2640053b2a3ff89f35abd486769b8b5c7bbef991705a137e5b04a2be8262ad78041a75b66980311a7f584396476412bbe43491c2f357fdd78ca4e79de01b27feb77979f975f26c7ea7fc863f6f03adc7f77906a321aff47c022d106a6c0d7cf5ba4c37fe833fe052fd557ef1f90a317904ed00a7f04041409acd79b65e578892e6b176b17344270dbd6164a70da0fa5ce5d0e405e7e48768178f1cccbb357199b3221bf7602c134f9ac41901c23c1c64b77c3534433392df68d336bfd7853d2ae561d86835d68c26d13227675a5d122c4bca9978a652478d2c20b4956bbe7228f4981359ea2e1ae2377c150360b9a466eeb542e12027bb85ed81ca4470fa8f42cd37c19f37ebf5ed5c9093c16fcc69e979d5bad374cd62a8bf592ea9a76a6c86737eedc8649f69042312cb7514004c9fb36f6018a3215dc37f337d7b98f62077c68313351b90788cdae32689f2b295dfe7293a598ea4fd9fcadea50e23d147546d94fee7fae75db7c6019be6d7ba65e04abfca4167c9a84642a8e0b74af3804201ef484471c7d8ffd3de39c3197dc1bed9e55c3019a1f96506f4c476b6413668e2f5e068488eea9a12ff20447aa637c2ed107c1adb63bffebafc79ff6df6822b69d5807b1b87f38029c85ca5c8cd358b2101203fef1438370cc7589b4578afb219480bacc15a476d32d68ba8c2d45ab641b0aecfa31b866953918180475bf0c7e825b39212d1e564559a05f52c5af2f9e4a56e5b09dd5c5c153bff41d1e4dee787190f8da450dea55fb1f58f15094acebee0e4e44886e995fc2a420d2da7b39f45ef7411f94350971a3ce092fdd81166ef9cc26f7564a9a2a9ca2b4df2df5861562649527a1298c7292182e5830fae3f20ddce0948ee6797ce3e03df87952a75d1e71beb2a6338b82adf60be759282cdcd19362f428721002d0e2d8af319b05f81546808ea274be084f8edf2a16c5c9285aad19dffe584238118e35c71d70b1e5d157672bdf9d61c829789fe6bd8f6d290cdb41e40698e50a2ec07abba4e67599129f49e0a5ec687ab7ca95106f94a6900d7e1a09744c6484b54811c134113a9eff09a89a113a7d95fe26e3ba37a88194d809a4b476782a161aac80dd5304b354457a7ce44e5edd803631a6ee2566c6830ee2076e27a37c5f730b3c0d1a4e769070826f6830c5c2625ea33f55f4f2df6626cea79f9f99f8dcb499146e46f50f381c7e5be84e7b11cd94faac28d132e3ba3c833e853a47c5d73c62fb84c8661931541b468c8f360a25e34df951c119ff008c2994afa01bb6da88d92dc7e10ec2399ac64483327d56
419d1c3d6bd858dbb5dd85a55b86ee08eeea3ae3a7ed7bb14411cdac1e7811469b0c68e22f58d65e6bdb46d04092d4ae1e3143121bdf280291c714ff25499cb172a4cb9a51b664044ad6288f223dc6eb11807875d298da2e4a65ac9d89b6a8c1941cc046e93c5b988d7adb80913c4dd4c19f8fde6bc1ab5c1a824c62bd0d26890ca02f5889b2e4fe102e7fe7f9b' from ::ffff:192.168.14.135:51740 via port 1025 by UID 1025 2019-07-16 17:24:26 ERROR tcprelay.py:1150 can not parse header when handling connection from ::ffff:192.168.14.135:51740
closed
2019-07-16T09:28:39Z
2019-08-13T12:50:09Z
https://github.com/Ehco1996/django-sspanel/issues/239
[]
CL545740896
1
pytest-dev/pytest-flask
pytest
61
LiveServer requires the Flask app to have a root path
If the server is not ready, it will keep retrying for up to 5 seconds. The problem appears when the application does not implement '/': when the app returns 404, LiveServer considers it a failure and tries again, so startup always takes the full 5 seconds.
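One way to decouple readiness from the presence of a '/' route is to probe the TCP socket instead of expecting a successful HTTP response. This is a sketch of that idea, not pytest-flask's actual implementation:

```python
import socket

def port_is_open(host, port, timeout=0.5):
    """Readiness probe that only requires the socket to accept connections,
    so an app whose '/' returns 404 still counts as up."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstrate against a throwaway listening socket:
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
_, port = server.getsockname()
assert port_is_open("127.0.0.1", port)
server.close()
assert not port_is_open("127.0.0.1", port)
```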
closed
2017-02-17T13:50:53Z
2020-11-08T18:06:13Z
https://github.com/pytest-dev/pytest-flask/issues/61
[]
serathius
1
dynaconf/dynaconf
django
1,073
[RFC] Add `as_dict` alias to `to_dict` for `DynaBox` for consistency between `LazySettings` and `DynaBox` objs
**Is your feature request related to a problem? Please describe.** It's possible to convert a `LazySettings` object to a Python dictionary with `LazySettings.as_dict()`, and there is an alias to `to_dict` for backwards compatibility. The `DynaBox` class inherits from the `Box` class which comes from a vendor. There is the method `Box.to_dict()` but not `.as_dict()`, so there is an inconsistency for the user if the user accesses a sub-box of settings and then tries to convert to a dictionary - see context. **Describe the solution you'd like** Add an alias to `DynaBox`: ```python class DynaBox(Box): ... as_dict = to_dict ``` The opposite of what is done here: https://github.com/dynaconf/dynaconf/blob/4ab518393a1f7aa72e353a485aebea2852561120/dynaconf/base.py#L424 **Describe alternatives you've considered** | No | Alternative | Drawback | |----|---------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------| | 1 | Using a combination of `as_dict()` and `to_dict()` | confusing for the user | | 2 | Using `to_dict()` for both `LazySettings` and `DynaBox` | seems like this is the older method and the current guidance is to use `as_dict()` | | 3 | Doing only `LazySettings.as_dict()` and then using keys to get to dict level we want | takes away the flexibility of using dot notation to get to sub-levels of config | **Additional context** #### nested_settings.toml ```toml [hyperparameters] max_depth = 10 n_estimators = 20 ``` #### Examples ```python from dynaconf import Dynaconf settings = Dynaconf(settings_files=['nested_settings.toml']) print(settings.as_dict()) >>> {'HYPERPARAMETERS': {'max_depth': 10, 'n_estimators': 20}} print(settings.hyperparameters.as_dict()) >>> dynaconf.vendor.box.exceptions.BoxKeyError: "'DynaBox' object has no attribute 'as_dict'" print(settings.hyperparameters.to_dict()) >>> {'max_depth': 10, 'n_estimators': 20} ```
closed
2024-03-05T19:46:16Z
2024-07-08T18:37:14Z
https://github.com/dynaconf/dynaconf/issues/1073
[ "Not a Bug", "RFC" ]
mitches-got-glitches
2
dunossauro/fastapi-do-zero
pydantic
250
Remove the multipart installation
Now that we are using the standard installation, this no longer needs to be installed!
closed
2024-10-03T02:36:21Z
2024-10-03T03:17:00Z
https://github.com/dunossauro/fastapi-do-zero/issues/250
[]
dunossauro
0
FactoryBoy/factory_boy
django
135
Metaclass for BaseFactory too aggressive in removing attrs
I would like to extend a Factory subclass to add a custom "build"-style classmethod. I've got a model structure which (simplified) looks like: `A <- B <- D -> C -> A` The FKs through B and C should always point at the same A for a given D. As this requires about three lines of typing (there's more than one model at B and C), the idea was to build: ``` class Dfactory(factory.Factory): @classmethod def for_a(cls, a, **kwargs): kwargs.update(b__a=a, c__a=a) return cls(**kwargs) ``` Unfortunately, the metaclass on BaseFactory throws away the classmethod without raising any errors. This is not expected Python behaviour.
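The reported behavior is easy to reproduce with a toy metaclass that, like factory_boy's at the time, pops class attributes into its own registry; user-defined classmethods then vanish without an error. This is a sketch of the failure mode, not factory_boy's actual metaclass:

```python
class CollectingMeta(type):
    """Toy metaclass that hoovers non-dunder attributes into a registry,
    mimicking how BaseFactory's metaclass consumed declarations."""
    def __new__(mcs, name, bases, attrs):
        declarations = {k: attrs.pop(k) for k in list(attrs)
                        if not k.startswith("__")}
        cls = super().__new__(mcs, name, bases, attrs)
        cls._declarations = declarations
        return cls

class DFactory(metaclass=CollectingMeta):
    @classmethod
    def for_a(cls, a):
        return a

# The classmethod was silently swallowed into the registry,
# exactly the surprise the reporter describes:
assert "for_a" in DFactory._declarations
assert "for_a" not in DFactory.__dict__
```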
closed
2014-02-22T11:44:56Z
2015-10-20T21:52:08Z
https://github.com/FactoryBoy/factory_boy/issues/135
[ "NeedInfo" ]
mjtamlyn
2
pandas-dev/pandas
data-science
60,909
BUG: pandas.read_excel returns dict type if sheet_name=None
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd df = pd.read_excel("test_data/test.xlsx", sheet_name=None) print(type(df)) ``` ### Issue Description The function `pandas.read_excel`, when the parameter `sheet_name` is set to `None`, returns a dict object instead of a `pandas.core.frame.DataFrame`. ### Expected Behavior It should return a DataFrame ### Installed Versions pandas==2.2.3
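This is documented behavior rather than a bug (the issue's "Docs" label points the same way): with `sheet_name=None`, `read_excel` returns a dict mapping sheet names to DataFrames. The sketch below builds that structure directly (no real .xlsx file, since the report's `test_data/test.xlsx` is not available) and shows the two common ways to consume it, assuming pandas is installed:

```python
import pandas as pd

# The shape read_excel(path, sheet_name=None) is documented to return:
sheets = {
    "Sheet1": pd.DataFrame({"a": [1, 2]}),
    "Sheet2": pd.DataFrame({"a": [3]}),
}

# Either pick one sheet explicitly...
df_one = sheets["Sheet1"]
# ...or stack all sheets into a single DataFrame:
df_all = pd.concat(sheets.values(), ignore_index=True)

assert isinstance(sheets, dict)
assert list(df_all["a"]) == [1, 2, 3]
```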
closed
2025-02-11T09:28:31Z
2025-02-13T17:41:27Z
https://github.com/pandas-dev/pandas/issues/60909
[ "Docs", "IO Excel" ]
Filip-Regenczuk
3
ultralytics/yolov5
pytorch
12,737
'segment/train.py' runs normally, but 'classify/train.py' does not produce the expected output
version:yolov5-7.0 segment ![capture_20240216210927568](https://github.com/ultralytics/yolov5/assets/51474963/3e075ad0-a018-402c-86fe-c113a35fd499) ![capture_20240216211023037](https://github.com/ultralytics/yolov5/assets/51474963/1a8afbda-b17e-4185-98c5-280ee9400cd2) classify ![capture_20240216211153625](https://github.com/ultralytics/yolov5/assets/51474963/c8c9e5c1-07ad-42ea-874a-d35921cb75b7)
closed
2024-02-16T13:17:58Z
2024-10-20T19:39:47Z
https://github.com/ultralytics/yolov5/issues/12737
[ "Stale" ]
weijintaocode
4
graphql-python/graphql-core
graphql
51
Scalar variables failing validation
When I validate [these queries](https://github.com/go-build-it/gqlmod/blob/master/testmod/queries.gql) against [this schema](https://github.com/go-build-it/gqlmod/blob/master/gqlmod_starwars/schema.py) (via [this code](https://github.com/go-build-it/gqlmod/blob/6fe690f7ffb0634f4522ded57be8f40b21205c52/gqlmod/importer.py#L94-L103)), I'm getting this error: ``` graphql.error.graphql_error.GraphQLError: Unknown type 'Int'. /home/astraluma/code/gobuildit/gqlmod/testmod/queries.gql:15:30 14 | 15 | query HeroComparison($first: Int = 3) { | ^ 16 | leftComparison: hero(episode: EMPIRE) { ``` The results of `pip freeze`: ``` astor==0.8.0 -e git+git@github.com:go-build-it/gqlmod.git@6fe690f7ffb0634f4522ded57be8f40b21205c52#egg=gqlmod graphql-core==3.0.0a2 import-x==0.1.0 pkg-resources==0.0.0 ``` I'm pretty new to GraphQL, but as I understand it, `Int` is a builtin scalar that should always be available?
closed
2019-08-29T19:34:30Z
2019-09-14T19:43:09Z
https://github.com/graphql-python/graphql-core/issues/51
[]
AstraLuma
6
OWASP/Nettacker
automation
632
Move issue_template and pull request template to .github directory
We can clean up the main page of the repository by moving the template files into the hidden .github folder.
closed
2022-12-07T09:02:53Z
2023-03-15T01:24:15Z
https://github.com/OWASP/Nettacker/issues/632
[ "good first issue" ]
kailashchoudhary11
1
OFA-Sys/Chinese-CLIP
computer-vision
245
After fine-tuning I obtained epoch_latest.pt, but why can't I run inference after training? I tried several servers and none of them worked
--------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) /tmp/ipykernel_25611/2218594588.py in <module> 13 model_state_dic = torch.load(model_path) ---> 15 model.load_state_dict(model_state_dict) 16 model.eval() 17 ~/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict) 1480 1481 if len(error_msgs) > 0: -> 1482 raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( 1483 self.__class__.__name__, "\n\t".join(error_msgs))) 1484 return _IncompatibleKeys(missing_keys, unexpected_keys)
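A common cause of this `load_state_dict` failure (whether `epoch_latest.pt` uses this exact layout is an assumption): training checkpoints often nest the weights under a `"state_dict"` key and prefix every key with `"module."` from DistributedDataParallel, so a plain `model.load_state_dict()` reports missing/unexpected keys. (The pasted snippet also mixes `model_state_dic` and `model_state_dict`, though the traceback points at key mismatches.) Normalizing the checkpoint before loading usually fixes it:

```python
def unwrap_checkpoint(checkpoint):
    """Unnest a training checkpoint and strip DistributedDataParallel's
    'module.' prefix so plain model.load_state_dict() accepts it."""
    state_dict = checkpoint.get("state_dict", checkpoint)
    return {
        key[len("module."):] if key.startswith("module.") else key: value
        for key, value in state_dict.items()
    }

# Toy checkpoint shaped like a DDP training dump:
ckpt = {"state_dict": {"module.visual.proj": 1, "module.logit_scale": 2}}
assert unwrap_checkpoint(ckpt) == {"visual.proj": 1, "logit_scale": 2}
```

With torch this would be used as `model.load_state_dict(unwrap_checkpoint(torch.load(model_path, map_location="cpu")))`.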
closed
2024-01-09T12:48:05Z
2024-01-22T10:21:27Z
https://github.com/OFA-Sys/Chinese-CLIP/issues/245
[]
Luhuanz
2
mage-ai/mage-ai
data-science
5,004
GCP Cloud Run Executor CPU and Memory Configuration
**Is your feature request related to a problem? Please describe.** I saw that ECS, k8s and Azure Container Instance have a configuration option to set memory and CPU. As I deploy Mage on GCP, I found that the Cloud Run job's memory is not enough to run my job. Right now, the Cloud Run Jobs CPU and memory configuration copies the config of the Cloud Run service if Mage is deployed on a Cloud Run service. So to increase the resource allocation, we need to set the service's resource allocation higher, while that might not be necessary. Also, each block might need a different resource allocation. **Describe the solution you'd like** Add a configuration option to set memory and CPU for the Cloud Run executor. **Describe alternatives you've considered** Another option is to add a generic config for Cloud Run as a mapping to be passed to the Cloud Run API function. That way, if GCP adds a new feature to their API, we can just pass the keyword argument in `metadata.yaml` and the executor will pass it to the GCP function. **Additional context** https://docs.mage.ai/production/configuring-production-settings/compute-resource#azure-container-instance-executor
open
2024-04-29T11:58:04Z
2024-05-09T19:15:26Z
https://github.com/mage-ai/mage-ai/issues/5004
[ "enhancement" ]
FaisalDoowii
0
jupyterhub/repo2docker
jupyter
871
--env option semantics differ from docker's
### Bug description The semantics of the `--env` option to `repo2docker` differ from the same option in the `docker run` subcommand. #### Expected behaviour I expected the options to behave identically, but they do not in the case of a bare environment variable name. For example, to pass the value of a secret to a docker instance (so it does not appear on the command line), you can do: ```bash SEKRT=fred docker run --env SEKRT ... ``` and the value of `SEKRT` inside the container is "fred". See also the [docker documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file) #### Actual behaviour For `repo2docker`: ```bash SEKRT=fred repo2docker --env SEKRT ... ``` and `SEKRT` is undefined inside the container. ### How to reproduce 1. Download [this notebook](https://gist.github.com/34dd525848df78a5adc101b0da0ac3eb) to an empty directory 2. run `SEKRT=sam repo2docker --env SEKRT -E .` 1. connect to server and run notebook cells 1. note output -- this is observed 1. stop server 3. Get docker image just created: `image=$(docker images --quiet | head -1)` 1. run `SEKRT=sam docker run --rm --env SEKRT $image` 1. connect to server and run notebook cells 1. note output -- this is expected 1. stop server ### Your personal set up - OS: Ubuntu 18.04.4 LTS - Docker version: 18.06.1-ce, build e68fc7a - repo2docker version: 0.11.0
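The Docker semantics being requested can be modeled in a few lines. This is a sketch of the behavior, not repo2docker's code: `NAME=value` sets the variable explicitly, while a bare `NAME` forwards the value from the caller's environment and is dropped when unset.

```python
def resolve_env_args(env_args, environ):
    """Docker-style --env resolution: 'NAME=value' sets explicitly;
    a bare 'NAME' forwards the caller's value (dropped if unset)."""
    resolved = {}
    for arg in env_args:
        if "=" in arg:
            name, _, value = arg.partition("=")
            resolved[name] = value
        elif arg in environ:
            resolved[arg] = environ[arg]
    return resolved

# The secret never appears on the command line, matching `docker run --env SEKRT`:
assert resolve_env_args(["SEKRT"], {"SEKRT": "fred"}) == {"SEKRT": "fred"}
# Explicit assignments still work, and unset bare names are dropped:
assert resolve_env_args(["A=1", "MISSING"], {}) == {"A": "1"}
```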
closed
2020-04-10T23:27:37Z
2020-04-23T04:55:06Z
https://github.com/jupyterhub/repo2docker/issues/871
[]
hwine
2
miguelgrinberg/flasky
flask
81
flash message shows up again using back button
According to Flask docs: "The flashing system basically makes it possible to record a message at the end of a request and access it next request and only next request." When I change my profile in Flasky, I see the following flash message: ![screenshot 2015-10-10 06 20 14](https://cloud.githubusercontent.com/assets/595772/10410683/46f2a39c-6f17-11e5-8814-093ec7abb639.png) Then I click Home and use the browser's back button to go back; the message is still there (or shows up again), which is not correct. Is there a way to force the message to show up only once? Thanks!!
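A common mitigation for the back-button case is to mark the page as uncacheable, so the browser re-fetches it instead of replaying a cached copy in which the flash was still present. A sketch of the header logic, which in a Flask app would typically be applied from an `after_request` hook (the helper name is made up):

```python
def add_no_store_headers(headers):
    """Mark a response as uncacheable so the back button re-fetches it.

    In a Flask app this would run in an @app.after_request hook:
        @app.after_request
        def no_store(response):
            response.headers.update(add_no_store_headers({}))
            return response
    """
    headers = dict(headers)
    headers["Cache-Control"] = "no-store, no-cache, must-revalidate, max-age=0"
    headers["Pragma"] = "no-cache"  # for HTTP/1.0 intermediaries
    headers["Expires"] = "0"
    return headers
```

On the re-fetch, `get_flashed_messages()` has already consumed the message, so it no longer renders.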
closed
2015-10-10T10:24:34Z
2017-03-17T18:54:00Z
https://github.com/miguelgrinberg/flasky/issues/81
[]
harrywang
8
streamlit/streamlit
machine-learning
10,706
Limit max number of uploaded files in st.file_uploader and st.chat_input
### Checklist - [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests. - [x] I added a descriptive title and summary to this issue. ### Summary Add an argument `max_files` in `st.file_uploader` and `st.chat_input`, or a configuration option similar to `server.maxUploadSize`, that limits the number of files that can be uploaded in these widgets. This is very useful in the case of `st.chat_input`, as it will limit the number of files while interacting with an LLM. This should be applicable if the user is uploading more than 1 file. ### Why? _No response_ ### How? _No response_ ### Additional Context _No response_
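Until such an argument exists, the limit can be enforced in user code right after the upload; a sketch with a hypothetical helper (the `max_files` name mirrors the proposal):

```python
def check_max_files(files, max_files):
    """Return (ok, message) for a list of uploaded files.

    In a Streamlit app this would wrap st.file_uploader:
        files = st.file_uploader("Upload", accept_multiple_files=True)
        ok, msg = check_max_files(files or [], max_files=3)
        if not ok:
            st.error(msg)
            st.stop()
    """
    if len(files) > max_files:
        return False, f"Please upload at most {max_files} files (got {len(files)})."
    return True, ""
```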
closed
2025-03-10T11:42:24Z
2025-03-10T15:14:34Z
https://github.com/streamlit/streamlit/issues/10706
[ "type:enhancement" ]
amanchaudhary-95
3
miguelgrinberg/microblog
flask
359
Github links in chapters start with wrong tag (one-off)
Hi, at the beginning of chapter one the links to github tags (browse, zip, diff) start with v0.1, but I think they should start with tag v0.0. Otherwise you browse the github repo from the next chapter. For example, in chapter one the browse link points to tag v0.1, which already has the jinja2 template from chapter two.
closed
2023-12-06T12:15:47Z
2023-12-06T19:54:18Z
https://github.com/miguelgrinberg/microblog/issues/359
[]
oddfellow
2
proplot-dev/proplot
data-visualization
448
Set markercolor for scatter plots with a substring from another columns using .map() and colordict
<!-- Thanks for helping us make proplot a better package! If this is a bug report, please use the template provided below. If this is a feature request, you can delete the template text (just try to be descriptive with your request). --> ### Description I have a pandas DataFrame with 3 columns: 2 are numerical and 1 is categorical (string); below is a subset of the data. ``` Site2 MOB insituOxy '1244' 0.792353 14.757724 '1244' 0.724254 14.757724 '1244' 0.753294 14.757724 'BAR94-24' 0.106508 77.748306 'BAR94-24' 0.153819 77.748306 ``` I would like to color the scatter plot using the following syntax: ```python color_site_dict = { '1244': 'blue6', '850': 'red6', } fig, ax = plot.subplots() ax.scatter(plot_data.insituOxy, plot_data.MOB, marker='o', c=plot_data.Site2.map(color_site_dict).fillna('gray6'), ) ``` Doing so, I got the error below (abridged traceback):
```
TypeError                                 Traceback (most recent call last)
proplot/internals/process.py:284, in _preprocess_args.<locals>.decorator.<locals>._redirect_or_standardize
    return func(self, *args, **kwargs)
proplot/axes/plot.py:3259, in PlotAxes.scatter
    return self._apply_scatter(*args, **kwargs)
proplot/axes/plot.py:3220, in PlotAxes._apply_scatter
    cc, kw = self._parse_color(xs, ys, cc, inbounds=inbounds, apply_cycle=False, infer_rgb=infer_rgb, **kw)
proplot/axes/plot.py:2082, in PlotAxes._parse_color
    kwargs = self._parse_cmap(x, y, c, plot_lines=True, default_discrete=False, **kwargs)
proplot/internals/warnings.py:96, in _rename_kwargs.<locals>.decorator.<locals>._deprecate_kwargs
    return func_orig(*args, **kwargs)
proplot/axes/plot.py:2645, in PlotAxes._parse_cmap
    vmin, vmax, kwargs = self._parse_vlim(*args, vmin=vmin, vmax=vmax, **kwargs)
proplot/axes/plot.py:2158, in PlotAxes._parse_vlim
    imin, imax = process._safe_range(z, pmin, pmax)
proplot/internals/process.py:490, in _safe_range
    data, units = _to_masked_array(data)
proplot/internals/process.py:143, in _to_masked_array
    data = ma.masked_invalid(data, copy=copy)
numpy/ma/core.py:2360, in masked_invalid
    res = masked_where(~(np.isfinite(a)), a, copy=copy)
TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
```
### Equivalent steps in matplotlib However, the same syntax works with matplotlib (pyplot).
```python # your code here, if applicable import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.scatter(plot_data.insituOxy,plot_data.MOB, marker='o', c=plot_data.Site2.map(color_site_dict).fillna('gray6'), ) ``` Here is the expected output: ![image](https://github.com/proplot-dev/proplot/assets/55888172/21dc13b3-c089-49c1-ba87-bee142398282) ### Proplot version Paste the results of `import matplotlib; print(matplotlib.__version__); import proplot; print(proplot.version)` here. matplotlib version: 3.4.3 proplot version: 0.9.7
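A workaround that sidesteps proplot's colormap parsing here is to pass `c` as a plain list of color strings rather than a pandas Series; the mapping itself is just a dict lookup with a default. A sketch using the reporter's `color_site_dict` (the helper name is made up):

```python
color_site_dict = {'1244': 'blue6', '850': 'red6'}

def colors_for(sites, mapping, default='gray6'):
    """Map each categorical label to a color string, falling back to a default.

    Passing the resulting list as c= avoids proplot treating the Series
    as numeric colormap data:
        ax.scatter(x, y, c=colors_for(plot_data.Site2, color_site_dict))
    """
    return [mapping.get(site, default) for site in sites]
```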
open
2024-02-07T20:26:59Z
2024-02-08T07:50:48Z
https://github.com/proplot-dev/proplot/issues/448
[]
PaleoLipidRR
1
rthalley/dnspython
asyncio
1,074
dnspython 2.x DNS lookup failed with "The DNS operation timed out" error over on Redhat
**Describe the bug** We are having trouble getting dnspython 2.x DNS lookup to work over Python3 on Redhat. DNS lookup failed with "The DNS operation timed out" error. In order to find out the root cause, we've done similar tests on different platforms, Python and dnspython versions. The same test_lookup.py script was used for the DNS lookup test via the same DNS name server over the same A record. Sample script provided below. The interesting finding is: - DNS lookup only failed over Python3 dnspython 2.x on Redhat - DNS lookup works over Python3 dnspython 2.x on Ubuntu and Windows - DNS lookup works over Python2 dnspython 1.x on Redhat Any idea what we can do to further troubleshoot this issue? - DNS lookup failed over Python3 dnspython 2.x on Redhat dnspython lookup success - Windows 10 - Python3 dnspython 2.x ------------------------------------------------------------- C:\> python.exe test_lookup.py Platform info: Windows 10 10.0.19041 Python version: 3.9 dnspython version: 2.5.0 A hkl20162177.hk.hsbc. 205 IN A 130.50.128.132 dnspython lookup success - Ubuntu 20.04 - Python3 dnspython 2.x --------------------------------------------------------------- $ python3 test_lookup.py Platform info: Linux 5.10.124-linuxkit #1 SMP Thu Jun 30 08:19:10 UTC 2022 Python version: 3.8 dnspython version: 2.3.0 A hkl20162177.hk.hsbc. 238 IN A 130.50.128.132 dnspython lookup success - Redhat 8 - Python2 dnspython 1.x ----------------------------------------------------------- $ python2 test_lookup.py Platform info: Linux 4.18.0-513.18.1.el8_9.x86_64 #1 SMP Thu Feb 1 03:51:05 EST 2024 Python version: 2.7 dnspython version: 1.16.0 A hkl20162177.hk.hsbc. 285 IN A 130.50.128.132 dnspython lookup success - Redhat 7 - Python2 dnspython 1.x ----------------------------------------------------------- $ python2 test_lookup.py Platform info: Linux 3.10.0-1160.114.2.el7.x86_64 #1 SMP Sun Mar 3 08:18:39 EST 2024 Python version: 2.7 dnspython version: 1.16.0 A hkl20162177.hk.hsbc.
300 IN A 130.50.128.132 dnspython lookup failed - Redhat 7 - Python3 dnspython 2.x ----------------------------------------------------------- $ python3.6 test_lookup.py Platform info: Linux 3.10.0-1160.114.2.el7.x86_64 #1 SMP Sun Mar 3 08:18:39 EST 2024 Python version: 3.6 dnspython version: 2.2.1 A The resolution lifetime expired after 5.108 seconds: Server 130.45.33.26 UDP port 53 answered The DNS operation timed out.; Server 130.45.33.26 UDP port 53 answered The DNS operation timed out.; Server 130.45.33.26 UDP port 53 answered The DNS operation timed out.; Server 130.45.33.26 UDP port 53 answered The DNS operation timed out.; Server 130.45.33.26 UDP port 53 answered The DNS operation timed out.; Server 130.45.33.26 UDP port 53 answered The DNS operation timed out. dnspython lookup failed - Redhat 8 - Python3.6 dnspython 2.2.1 -------------------------------------------------------------- $ python3.6 test_lookup.py Platform info: Linux 4.18.0-513.18.1.el8_9.x86_64 #1 SMP Thu Feb 1 03:51:05 EST 2024 Python version: 3.6 dnspython version: 2.2.1 A The resolution lifetime expired after 5.109 seconds: Server 130.45.33.26 UDP port 53 answered The DNS operation timed out.; Server 130.45.33.26 UDP port 53 answered The DNS operation timed out.; Server 130.45.33.26 UDP port 53 answered The DNS operation timed out.; Server 130.45.33.26 UDP port 53 answered The DNS operation timed out.; Server 130.45.33.26 UDP port 53 answered The DNS operation timed out.; Server 130.45.33.26 UDP port 53 answered The DNS operation timed out. 
dnspython lookup failed - Redhat 8 - Python3.11 dnspython 2.6.1 --------------------------------------------------------------- $ python3.11 test_lookup.py Platform info: Linux 4.18.0-513.18.1.el8_9.x86_64 #1 SMP Thu Feb 1 03:51:05 EST 2024 Python version: 3.11 dnspython version: 2.6.1 A The resolution lifetime expired after 5.103 seconds: Server Do53:130.45.33.26@53 answered The DNS operation timed out.; Server Do53:130.45.33.26@53 answered The DNS operation timed out.; Server Do53:130.45.33.26@53 answered The DNS operation timed out.; Server Do53:130.45.33.26@53 answered The DNS operation timed out.; Server Do53:130.45.33.26@53 answered The DNS operation timed out.; Server Do53:130.45.33.26@53 answered The DNS operation timed out. **To Reproduce** Sample test_lookup.py - the name server and A record need to be changed for your test environment ---------------------------------------------------------------------------------------------------------------- ``` import dns.resolver import dns.version import sys import platform import warnings warnings.filterwarnings("ignore", category=DeprecationWarning) print("Platform info: " + platform.system() + " " + platform.release() + " " + platform.version()) print("Python version: " + str(sys.version_info[0]) + "." + str(sys.version_info[1])) print("dnspython version: " + dns.version.version) my_resolver = dns.resolver.Resolver() my_resolver.nameservers = ['130.45.33.26'] name = 'hkl20162177.hk.hsbc' for qtype in 'A': print(qtype) try: answer = my_resolver.query(name, qtype, raise_on_no_answer=False) if answer.rrset is not None: print(answer.rrset) except Exception as e: print(e) pass ``` **Context (please complete the following information):** - dnspython version 2.x - Python version 3.x - OS: Redhat 7, 8
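To help narrow this down, the same query can be sent with nothing but the standard library; if this raw probe also times out on the Red Hat hosts, the problem is likely in the network path (firewall or source-port filtering) rather than in dnspython 2.x itself. A minimal sketch following the RFC 1035 wire format (the server address and name are taken from the report):

```python
import socket
import struct

def build_query(name, qtype=1, query_id=0x1234):
    """Build a minimal DNS query packet (RFC 1035): qtype 1 = A, class IN."""
    # Header: id, flags (RD set), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)

def probe(server, name, timeout=5.0):
    """Send the query over UDP and return the raw response bytes."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_query(name), (server, 53))
        data, _ = s.recvfrom(4096)
        return data

# On an affected host:
# probe('130.45.33.26', 'hkl20162177.hk.hsbc')
```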
closed
2024-04-09T07:15:39Z
2024-04-19T16:34:54Z
https://github.com/rthalley/dnspython/issues/1074
[ "Cannot Reproduce" ]
leungwh
7
flasgger/flasgger
flask
179
Validation with definitions mixed in with allOf
I'm struggling to make flasgger validation work with allOf'd definitions pulled into my resource definition. Using `validation=True` seems to always pass, and using `validate()` seems to not like traversing definitions via allOf. I've created a sample project to illustrate: https://github.com/robertlagrant/flasgger_validation
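The failure mode described, where the validator does not traverse definitions pulled in via `allOf`, usually comes down to the validator seeing the unresolved composite schema. The flattening step a validator needs can be sketched as follows (this assumes `$ref`s have already been resolved; the helper name is hypothetical):

```python
def merge_all_of(schema):
    """Flatten a JSON-Schema object's allOf into a single schema dict.

    Properties are merged and 'required' lists are unioned; subschemas
    are flattened recursively. $ref entries must be resolved beforehand.
    """
    if "allOf" not in schema:
        return schema
    merged = {k: v for k, v in schema.items() if k != "allOf"}
    merged.setdefault("properties", {})
    required = set(merged.get("required", []))
    for sub in schema["allOf"]:
        sub = merge_all_of(sub)  # allOf may nest
        merged["properties"].update(sub.get("properties", {}))
        required.update(sub.get("required", []))
    if required:
        merged["required"] = sorted(required)
    return merged
```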
open
2018-02-14T11:46:30Z
2018-10-01T17:31:19Z
https://github.com/flasgger/flasgger/issues/179
[ "bug", "hacktoberfest" ]
robertlagrant
3
amidaware/tacticalrmm
django
1,528
Privacy policy
Hi, can someone please post the links to the privacy policy and to the security and compliance policy? I went through the documentation but was not able to locate them.
closed
2023-06-02T14:24:37Z
2023-06-02T14:37:40Z
https://github.com/amidaware/tacticalrmm/issues/1528
[]
ashishbarmase
0
vitalik/django-ninja
pydantic
903
When having a discriminated union as request body schema the schema validation error should not show all the union components
``` class A(ninja.Schema): type: typing.Literal['A'] class B(ninja.Schema): type: typing.Literal['B'] class C(ninja.Schema): type: typing.Literal['C'] class D(ninja.Schema): type: typing.Literal['D'] something: conlist(str, min_items=2) def handler( request: http.HttpRequest, body: A | B | C | D, ): pass ``` The response for a request with type `D` and only 1 item in `something` is: ``` { "detail": [ { "loc": [ "body", "body", "type" ], "msg": "unexpected value; permitted: 'A'", "type": "value_error.const", "ctx": { "given": "D", "permitted": [ "A" ] } }, { "loc": [ "body", "body", "type" ], "msg": "unexpected value; permitted: 'B'", "type": "value_error.const", "ctx": { "given": "D", "permitted": [ "B" ] } }, { "loc": [ "body", "body", "type" ], "msg": "unexpected value; permitted: 'C'", "type": "value_error.const", "ctx": { "given": "D", "permitted": [ "C" ] } }, { "loc": [ "body", "body", "something" ], "msg": "ensure this value has at least 2 items", "type": "value_error.list.min_items", "ctx": { "limit_value": 2 } } ] } ``` I'd like it to be: ``` { "detail": [ { "loc": [ "body", "body", "something" ], "msg": "ensure this value has at least 2 items", "type": "value_error.list.min_items", "ctx": { "limit_value": 2 } } ] } ```
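Independent of how django-ninja wires this up, the requested behaviour is exactly what discriminator-first validation gives you: inspect `type` first, select the single matching branch, and report only that branch's errors (pydantic v1 exposes this via `Field(discriminator='type')` on an annotated union). A pure-Python sketch of the idea, with hypothetical stand-in validators:

```python
def validate_d(body):
    """Stand-in for model D: 'something' must be a list of at least 2 items."""
    errors = []
    something = body.get("something")
    if not isinstance(something, list) or len(something) < 2:
        errors.append({"loc": ["body", "something"],
                       "msg": "ensure this value has at least 2 items"})
    return errors

VALIDATORS = {"A": lambda b: [], "B": lambda b: [], "C": lambda b: [],
              "D": validate_d}

def validate_union(body):
    """Dispatch on the discriminator, then validate only the matching branch."""
    kind = body.get("type")
    if kind not in VALIDATORS:
        return [{"loc": ["body", "type"],
                 "msg": f"unexpected value; permitted: {sorted(VALIDATORS)}"}]
    return VALIDATORS[kind](body)
```

With this shape, the `type: D` request above would produce only the `min_items` error, never the three spurious `value_error.const` entries.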
closed
2023-11-04T12:58:29Z
2025-01-03T23:54:02Z
https://github.com/vitalik/django-ninja/issues/903
[]
Vulwsztyn
4
piskvorky/gensim
data-science
2,977
Improve/prune docs/tutorial of TranslationMatrix functionality
The [concerning test failure](https://github.com/RaRe-Technologies/gensim/pull/2944#issuecomment-704512389) at #2944 now seems to me to be a false alarm. With more testing across many seeds, it appears the extremely flimsy `BackMappingTranslationTest.test_infer_vector()` was only passing in the base case (`float64` randoms downcast to `float32`s) due to a lucky seeding, and only failing in the changed case due to unlucky seeding of the slightly-different stream of (`float32` from the start) random numbers. I've disabled the flimsy test, and it's questionable whether the `BackMappingTranslationMatrix` should even exist. It's perhaps 10 lines of *using* (not specializing-via-subclass) the actual `TranslationMatrix` class, and over-specialized on `Doc2Vec` models – whereas the `TranslationMatrix` functionality could and should be general to any vector-set, requiring just a few lines to apply to word-vectors, doc-vectors, or others. (And, calling the translation/projection `infer_vector` is unnecessarily prone to confusion with the different 'inference' that's native to `Doc2Vec`.) I still think the `TranslationMatrix` itself is an under-appreciated bit of functionality, and I even strongly suspect – subject to experimentation – it could be part of a recommended solution for evolving a model to include more words that's far more robust/theoretically-defensible/performant than the `build_vocab(..., update=True)` & then incrementally `.train()` approach. But, it'll need at the very least better docs/tutorial examples. The existing `docs/notebook/tranlsation_matrix.ipynb` is muddled & hard to run. (The test data it's using links to an all-in-Chinese Baidu download page that seems to require a login before raw `.txt` download.) It demos the `BackmappingTranslationMatrix` class in a later 'experimental' area I have trouble following even though it reuses some of the IMDB-dataset `Doc2Vec` tutorial I wrote. 
I only have time to disable the `BackMappingTranslationTest.test_infer_vector` test right now, and this is pretty fringe functionality, so there's no urgency to clean it up - but this issue is to keep it under consideration for when the right person comes along.
open
2020-10-08T22:57:57Z
2020-10-09T08:04:49Z
https://github.com/piskvorky/gensim/issues/2977
[ "bug", "documentation", "testing" ]
gojomo
0
OpenInterpreter/open-interpreter
python
1,146
Always continuing
### Describe the bug The run seems to keep continuing indefinitely when I execute the command `interpreter --model ollama/mistral` ![image](https://github.com/OpenInterpreter/open-interpreter/assets/44490800/ba48370f-c419-449f-929c-1820d898dbb3) ### Reproduce interpreter --model ollama/mistral ### Expected behavior Enter interactive mode ### Screenshots _No response_ ### Open Interpreter version 0.2.4 ### Python version 3.9.0 ### Operating System name and version macOS13 ### Additional context _No response_
closed
2024-03-28T04:31:21Z
2024-03-28T07:53:11Z
https://github.com/OpenInterpreter/open-interpreter/issues/1146
[]
An-Jhon
2
HumanSignal/labelImg
deep-learning
684
Where is the view packages and control packages?
I am following the MVC model. Can you tell me which code belongs to the view package and which to the control package, more specifically for the scrollable list?
open
2020-12-16T17:02:46Z
2020-12-16T17:02:46Z
https://github.com/HumanSignal/labelImg/issues/684
[]
SrutiBh
0
JaidedAI/EasyOCR
pytorch
668
unable to detect simple texts from parts of a ship image
I just took a simple image from the internet showing the parts of a ship and tried to extract the part names from the image. It was only able to extract the ship name, which was on the ship itself; the names of the parts were not detected at all.
closed
2022-02-16T05:38:06Z
2022-08-25T10:51:53Z
https://github.com/JaidedAI/EasyOCR/issues/668
[]
Jalmoru
1
0b01001001/spectree
pydantic
404
feat: Support unions or list of models for validating response
### Describe the feature Pydantic supports specifying unions of fields/models within models using the | operator. There are certain situations where validating against more than one model can be pretty useful. For example, there are certain APIs that return code 200 even if there is an error but have an error field in the response (and no other fields). In such a scenario, it would make sense to validate against 2 models for the HTTP 200 return code (working response model and error model). Something like (Falcon example): ```python @spec.validate( json=RequestModel, resp=Response(HTTP_200=Union[ResponseModel, ErrorResponse]), tags=["api"], ) def on_post(self, req, resp): ... ``` or with the | syntax as a shortcut for Union. ### Additional context _No response_
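Until spectree supports this natively, the semantics being requested amount to first-match-wins across the union members; a pure-Python sketch of that dispatch (the validator functions are hypothetical stand-ins for pydantic models, which in pydantic v1 could be driven by `parse_obj_as(Union[...], data)`):

```python
def validate_against_union(data, validators):
    """Try each candidate validator in order; return (model_name, errors).

    Each validator returns a list of error dicts (empty means valid);
    they are stand-ins for pydantic model validation. The first branch
    that validates wins; otherwise all branches' errors are returned.
    """
    all_errors = {}
    for name, validator in validators:
        errors = validator(data)
        if not errors:
            return name, []
        all_errors[name] = errors
    return None, all_errors

def response_ok(data):
    return [] if "result" in data else [{"loc": ["result"], "msg": "field required"}]

def error_resp(data):
    return [] if "error" in data else [{"loc": ["error"], "msg": "field required"}]
```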
open
2025-03-01T06:40:03Z
2025-03-04T08:45:07Z
https://github.com/0b01001001/spectree/issues/404
[ "enhancement" ]
Some7hing0riginal
3
grok-ai/nn-template
streamlit
94
Update LightningDataModule example usage
I believe [L217](https://github.com/grok-ai/nn-template/blob/8471d9a36c2e73356196e30fd58bcedac43613fe/%7B%7B%20cookiecutter.repository_name%20%7D%7D/src/%7B%7B%20cookiecutter.package_name%20%7D%7D/data/datamodule.py#L217C14-L217C14) in `datamodule.py` should be changed from ``` _: pl.LightningDataModule = hydra.utils.instantiate(cfg.data.datamodule, _recursive_=False) ``` to ``` _: pl.LightningDataModule = hydra.utils.instantiate(cfg.nn.data, _recursive_=False) ``` Thanks for the great project, keep up the good work! @lucmos
closed
2023-09-12T21:39:59Z
2023-10-12T19:58:54Z
https://github.com/grok-ai/nn-template/issues/94
[ "solved" ]
LeonardoEmili
1
strawberry-graphql/strawberry
asyncio
3,347
Different contexts getters depending on the query or mutation
## Feature Request Type - [ ] Core functionality - [X] Alteration (enhancement/optimization) of existing feature(s) - [ ] New behavior ## Description Currently, the only way of doing context getters with the FastAPI integration is through multiple routers, but then I would need to have different paths, which I think would be weird in the GraphQL world. So I would like to be able to change the context based on the resolver instead of setting it only in the main router.
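One way to approximate this today is to keep a registry of per-operation context factories and have the single root context getter dispatch on the requested operation name; a framework-free sketch (the names are made up, and in Strawberry the resolver would then read the result from `info.context`):

```python
def make_context(operation_name, factories, default_factory=dict):
    """Build a request context by dispatching on the GraphQL operation name.

    factories maps operation names (e.g. 'createUser') to zero-argument
    callables producing that operation's extra context; unknown operations
    fall back to default_factory.
    """
    factory = factories.get(operation_name, default_factory)
    context = {"operation": operation_name}
    context.update(factory())
    return context
```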
open
2024-01-18T17:30:58Z
2025-03-20T15:56:34Z
https://github.com/strawberry-graphql/strawberry/issues/3347
[]
Focadecombate
2
localstack/localstack
python
12,144
bug: Secrets Manager is not properly using --rotation-lambda-arn and --rotation-rules
### Is there an existing issue for this? - [x] I have searched the existing issues ### Current Behavior (There were similar previous issues--but they were all closed as fixed.) I create a secret key in Secrets Manager, set up a lambda to trigger on secret rotation, including all the appropriate IAM, etc. I can manually invoke the lambda and it works fine. My start up script includes as its last command: ``` aws secretsmanager rotate-secret \ --region $AWS_DEFAULT_REGION \ --endpoint-url $AWS_ENDPOINT_URL \ --secret-id MySecretKey \ --rotation-lambda-arn $ROTATION_LAMBDA_ARN \ --rotation-rules AutomaticallyAfterDays=30 ``` This completes successfully with a 200 in the logs and appropriate/expected output in the lambda's logs. However... if, after this script is done, I immediately issue this command: ``` aws secretsmanager rotate-secret \ --region $AWS_DEFAULT_REGION \ --endpoint-url $AWS_ENDPOINT_URL \ --secret-id MySecretKey ``` Then I see this in the logs: ``` 2025-01-20T04:06:19.586 INFO --- [et.reactor-0] localstack.request.aws : AWS secretsmanager.RotateSecret => 400 (ResourceNotFoundException) ``` And there's zero output in a well-logged lambda, indicating it was not called. The only difference is the last 2 params, and if I manually call that command again (the first one) it works. If I do a aws secretsmanager describe-secret... 
I get appropriate output: ``` { "ARN": "arn:aws:secretsmanager:us-east-1:000000000000:secret:MySecretKey-MVFUZI", "Name": "MySecretKey", "RotationEnabled": true, "RotationLambdaARN": "arn:aws:lambda:us-east-1:000000000000:function:RotateSecretFunction", "RotationRules": { "AutomaticallyAfterDays": 30 }, "LastRotatedDate": "2025-01-19T22:17:41-06:00", "LastChangedDate": "2025-01-19T22:17:41.089000-06:00", "LastAccessedDate": "2025-01-19T18:00:00-06:00", "VersionIdsToStages": { "03ee0074-71e7-4129-a4ae-a694d2055401": [ "AWSPREVIOUS" ], "d5c6f636-aa06-4813-a849-dfdfb371cf6f": [ "AWSPENDING", "AWSCURRENT" ] }, "CreatedDate": "2025-01-19T22:06:02.182794-06:00" } ``` Notice that both fields RotationLambdaARN and RotationRules are set, but subsequent invocations of rotate-secret aren't using them. What would AWS do? Would it require both fields on every call? Or remember and use the earlier set values? ### Expected Behavior I would expect (if AWS does this) that once set, Secrets Manager would remember RotationLambdaARN and RotationRules fields and utilize their values on subsequent rotate-secret calls w/o having to always specify them. ### How are you starting LocalStack? 
With the `localstack` script ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) I was using a docker-compose but to get more granular and remove one moving piece I cloned the localstack repo and used python to run locally, ie install all the dependencies and then run: ``` localstack --debug start ``` #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) The commands in question are shown in detail above, but here is my start script: ``` #!/bin/bash # Set up dummy AWS credentials and region export AWS_ACCESS_KEY_ID=test export AWS_SECRET_ACCESS_KEY=test export AWS_DEFAULT_REGION=us-east-1 export AWS_ENDPOINT_URL=http://localhost:4566 # Create the secret in Secrets Manager echo ">> Creating Secret in Secrets Manager" SECRET_ARN=$(aws --endpoint-url=$AWS_ENDPOINT_URL secretsmanager create-secret \ --name MySecretKey \ --secret-string "initialValue" \ --region $AWS_DEFAULT_REGION \ --query "ARN" --output text) echo "Created Secret ARN: $SECRET_ARN" sleep 1 aws --endpoint-url=$AWS_ENDPOINT_URL secretsmanager update-secret \ --secret-id MySecretKey \ --secret-string "secondKey" \ --region $AWS_DEFAULT_REGION # Create an SNS Topic and set raw delivery (JSON only) echo ">> Creating SNS Topic" SNS_TOPIC_ARN=$(aws sns create-topic --name SecretKeyRotation --endpoint-url=$AWS_ENDPOINT_URL --query "TopicArn" --output text) # Create an IAM Role for Lambda echo ">> Creating IAM Role" cat <<EOF > trust-policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "lambda.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } EOF aws --endpoint-url $AWS_ENDPOINT_URL iam create-role \ --role-name MyLambdaRole \ --region $AWS_DEFAULT_REGION \ --assume-role-policy-document file://trust-policy.json rm trust-policy.json echo ">> Attaching IAM Policy to Role" aws --endpoint-url $AWS_ENDPOINT_URL iam attach-role-policy \ --role-name MyLambdaRole 
\ --region $AWS_DEFAULT_REGION \ --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole # Grant Lambda permission to publish to SNS aws --endpoint-url $AWS_ENDPOINT_URL iam put-role-policy \ --role-name MyLambdaRole \ --region $AWS_DEFAULT_REGION \ --policy-name PublishToSNS \ --policy-document "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": \"sns:Publish\", \"Resource\": \"$SNS_TOPIC_ARN\" } ] }" # Retrieve Role ARN ROLE_ARN=$(aws --endpoint-url $AWS_ENDPOINT_URL iam get-role \ --region $AWS_DEFAULT_REGION \ --role-name MyLambdaRole \ --query 'Role.Arn' --output text) echo "Role ARN: $ROLE_ARN" # Create the rotation Lambda function echo ">> Creating Rotation Lambda Function" ROTATION_LAMBDA_ARN=$(aws lambda create-function \ --endpoint-url $AWS_ENDPOINT_URL \ --region $AWS_DEFAULT_REGION \ --function-name RotateSecretFunction \ --runtime python3.9 \ --role "$ROLE_ARN" \ --handler rotationLambda.lambda_handler \ --zip-file fileb://scripts/rotationLambda.zip \ --query "FunctionArn" --output text) aws --endpoint-url $AWS_ENDPOINT_URL lambda wait function-active-v2 --function-name RotateSecretFunction echo "ROTATION_LAMBDA_ARN: $ROTATION_LAMBDA_ARN" aws lambda add-permission \ --region $AWS_DEFAULT_REGION \ --endpoint-url $AWS_ENDPOINT_URL \ --function-name RotateSecretFunction \ --statement-id "allow-secrets-manager" \ --action lambda:InvokeFunction \ --principal secretsmanager.amazonaws.com \ --source-arn $SECRET_ARN # Attach the rotation Lambda to the secret (THIS WORKS...NO ERROR) echo ">> Setting up Rotation Lambda for MySecretKey" aws secretsmanager rotate-secret \ --region $AWS_DEFAULT_REGION \ --endpoint-url $AWS_ENDPOINT_URL \ --secret-id MySecretKey \ --rotation-lambda-arn $ROTATION_LAMBDA_ARN \ --rotation-rules AutomaticallyAfterDays=30 echo ">> Done!" 
``` ### Environment ```markdown - OS: MacOS - LocalStack: LocalStack version:LocalStack CLI 1.1.0 LocalStack Docker image sha: n/a LocalStack build date: latest? -- cloned the repo today 1/19/25, using main branch LocalStack build git hash: 4832016 ``` ### Anything else? _No response_
open
2025-01-20T04:32:53Z
2025-03-12T15:46:11Z
https://github.com/localstack/localstack/issues/12144
[ "type: bug", "aws:secretsmanager", "status: backlog" ]
gzoller
1
lorien/grab
web-scraping
274
Can not install grab
```
pip install -U pip setuptools
pip install -U grab
Collecting grab
  Using cached grab-0.6.38.tar.gz
Collecting weblib>=0.1.23 (from grab)
  Using cached weblib-0.1.24.tar.gz
Complete output from command python setup.py egg_info:
error in weblib setup command: 'extras_require' requirements cannot include environment markers, in 'full': 'lxml; platform_system != "Windows"'
```
closed
2017-07-19T09:46:25Z
2018-04-15T16:59:05Z
https://github.com/lorien/grab/issues/274
[ "bug" ]
scorday
3
sanic-org/sanic
asyncio
2,436
Use SANIC_NO_UJSON during runtime also
**Is your feature request related to a problem? Please describe.**

I would like to use Sanic without ujson (I want to use the standard json module, for backward compatibility reasons). My problem is that the only way to not install ujson is by installing Sanic with the `--no-binary` option. I can't do that because I have a large microservices system that is based on regular requirements files, which are installed at build time with a standard `pip install` command, and using pre-built wheel files is very important to us.

**Describe the solution you'd like**

I would like every place that does this:

```
try:
    from ujson import loads as json_loads  # type: ignore
except ImportError:
    from json import loads as json_loads  # type: ignore
```

to also do something like this:

```
import os
from distutils.util import strtobool

try:
    if not strtobool(os.environ.get("SANIC_NO_UJSON", "no")):
        from ujson import loads as json_loads  # type: ignore
    else:
        from json import loads as json_loads  # type: ignore
except ImportError:
    from json import loads as json_loads  # type: ignore
```

for both `json.dumps` and `json.loads`. If this idea sounds good, I can also do a PR to add this.
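For reference, here is a self-contained sketch of the proposed fallback using only the standard library (the `SANIC_NO_UJSON` variable name comes from this request; the set of accepted truthy strings is an assumption, mirroring `strtobool`):

```python
import json
import os

# Values treated as "true" for the flag; an assumption modeled on strtobool.
_TRUTHY = {"1", "true", "yes", "y", "on"}


def pick_json_loads():
    """Return ujson.loads unless SANIC_NO_UJSON is truthy or ujson is missing."""
    if os.environ.get("SANIC_NO_UJSON", "no").strip().lower() not in _TRUTHY:
        try:
            from ujson import loads  # type: ignore
            return loads
        except ImportError:
            pass  # ujson not installed; fall through to the stdlib module
    return json.loads


json_loads = pick_json_loads()
```

Both backends parse the same documents, so existing callers would be unaffected; the environment flag only swaps which implementation is bound at import time.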
closed
2022-04-24T07:53:08Z
2022-07-04T10:52:06Z
https://github.com/sanic-org/sanic/issues/2436
[]
azimovMichael
6
apache/airflow
python
47,703
Implement Automated CLI Authentication Flow for Token Retrieval
### Description We should include login with username and password whilst automating the token save to keyring for further CLI use. ### Use case/motivation Follow up from AIP-81. ### Are you willing to submit a PR? - [x] Yes I am willing to submit a PR! ### Code of Conduct - [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
open
2025-03-12T22:47:36Z
2025-03-12T22:58:32Z
https://github.com/apache/airflow/issues/47703
[ "area:CLI", "kind:feature", "area:auth" ]
bugraoz93
0
viewflow/viewflow
django
90
BPMN Editor
Could you consider including a BPMN editor like http://demo.bpmn.io/?
closed
2015-06-10T19:47:33Z
2015-06-11T05:50:16Z
https://github.com/viewflow/viewflow/issues/90
[ "request/question" ]
rcorzogutierrez
1
jofpin/trape
flask
63
Says 'no module named requests
Says 'no module named colorama'

```
python trape.py -h
Traceback (most recent call last):
  File "trape.py", line 23, in <module>
    from core.utils import utils #
  File "E:\cmder\trape\core\utils.py", line 22, in <module>
    from colorama import init , Style,Fore
ModuleNotFoundError: No module named 'colorama'
```
closed
2018-11-25T20:03:04Z
2018-12-01T12:21:14Z
https://github.com/jofpin/trape/issues/63
[]
rayy101
1
huggingface/diffusers
pytorch
10,142
Requirements for advanced_diffusion_training incomplete
### Describe the bug When following the https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/README.md I also had to install / do those things to make it work: ``` !pip install wandb prodigyopt datasets !pip install --upgrade peft ``` - login to huggingface - setup and login to wandb Those things are not mentioned in the Readme / not updated in the requirements.txt which makes it hard to get started ### Reproduction Execute README preparation things on a new machine ``` cd diffusers pip install -e . cd examples/advanced_diffusion_training pip install -r requirements.txt accelerate config default ``` download dataset ``` from huggingface_hub import snapshot_download local_dir = "./data/3d_icon" snapshot_download( "LinoyTsaban/3d_icon", local_dir=local_dir, repo_type="dataset", ignore_patterns=".gitattributes", ) ``` run script ``` MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0" DATASET_NAME="./data/3d_icon" OUTPUT_DIR="3d-icon-SDXL-LoRA" VAE_PATH="madebyollin/sdxl-vae-fp16-fix" !accelerate train_dreambooth_lora_sdxl_advanced.py \ --pretrained_model_name_or_path="$MODEL_NAME" \ --pretrained_vae_model_name_or_path="$VAE_PATH" \ --dataset_name="$DATASET_NAME" \ --instance_prompt="3d icon in the style of TOK" \ --validation_prompt="a TOK icon of an astronaut riding a horse, in the style of TOK" \ --output_dir="$OUTPUT_DIR" \ --caption_column="prompt" \ --mixed_precision="fp16" \ --resolution=1024 \ --train_batch_size=3 \ --repeats=1 \ --report_to="wandb"\ --gradient_accumulation_steps=1 \ --gradient_checkpointing \ --learning_rate=1.0 \ --text_encoder_lr=1.0 \ --optimizer="prodigy"\ --train_text_encoder_ti\ --train_text_encoder_ti_frac=0.5\ --snr_gamma=5.0 \ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ --rank=8 \ --max_train_steps=1000 \ --checkpointing_steps=2000 \ --seed="0" \ --push_to_hub ``` ### Logs _No response_ ### System Info - 🤗 Diffusers version: 0.32.0.dev0 - Platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.39 
- Running on Google Colab?: No - Python version: 3.11.10 - PyTorch version (GPU?): 2.5.1+cu124 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Huggingface_hub version: 0.26.3 - Transformers version: 4.46.3 - Accelerate version: 1.1.1 - PEFT version: 0.13.2 - Bitsandbytes version: not installed - Safetensors version: 0.4.5 - xFormers version: not installed - Accelerator: NVIDIA RTX 5000 Ada Generation, 32760 MiB - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @sayakpaul
open
2024-12-06T10:21:00Z
2025-01-30T15:03:25Z
https://github.com/huggingface/diffusers/issues/10142
[ "bug", "stale", "training", "contributions-welcome" ]
teshi24
6
AUTOMATIC1111/stable-diffusion-webui
deep-learning
16,916
[Bug]: Getting requirements to build wheel: finished with status 'error'
### Checklist - [ ] The issue exists after disabling all extensions - [ ] The issue exists on a clean installation of webui - [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui - [ ] The issue exists in the current version of the webui - [x] The issue has not been reported before recently - [ ] The issue has been reported before but has not been fixed yet ### What happened? WebUI didn't start successfully. ### Steps to reproduce the problem 1. Run webui-user.bat with COMMANDLINE_ARGS=--skip-torch-cuda-test ### What should have happened? WebUI should have started successfully. ### What browsers do you use to access the UI ? _No response_ ### Sysinfo N/A ### Console logs ```Shell venv "C:\Users\mabrchaouen\Desktop\stable-diffusion-webui\venv\Scripts\Python.exe" Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] Version: v1.10.1 Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2 Installing requirements Traceback (most recent call last): File "C:\Users\mabrchaouen\Desktop\stable-diffusion-webui\launch.py", line 48, in <module> main() File "C:\Users\mabrchaouen\Desktop\stable-diffusion-webui\launch.py", line 39, in main prepare_environment() File "C:\Users\mabrchaouen\Desktop\stable-diffusion-webui\modules\launch_utils.py", line 423, in prepare_environment run_pip(f"install -r \"{requirements_file}\"", "requirements") File "C:\Users\mabrchaouen\Desktop\stable-diffusion-webui\modules\launch_utils.py", line 144, in run_pip return run(f'"{python}" -m pip {command} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}", live=live) File "C:\Users\mabrchaouen\Desktop\stable-diffusion-webui\modules\launch_utils.py", line 116, in run raise RuntimeError("\n".join(error_bits)) RuntimeError: Couldn't install requirements. 
Command: "C:\Users\mabrchaouen\Desktop\stable-diffusion-webui\venv\Scripts\python.exe" -m pip install -r "requirements_versions.txt" --prefer-binary Error code: 1 stdout: Collecting setuptools==69.5.1 (from -r requirements_versions.txt (line 1)) Using cached setuptools-69.5.1-py3-none-any.whl.metadata (6.2 kB) Collecting GitPython==3.1.32 (from -r requirements_versions.txt (line 2)) Using cached GitPython-3.1.32-py3-none-any.whl.metadata (10.0 kB) Collecting Pillow==9.5.0 (from -r requirements_versions.txt (line 3)) Using cached Pillow-9.5.0-cp310-cp310-win_amd64.whl.metadata (9.7 kB) Collecting accelerate==0.21.0 (from -r requirements_versions.txt (line 4)) Using cached accelerate-0.21.0-py3-none-any.whl.metadata (17 kB) Collecting blendmodes==2022 (from -r requirements_versions.txt (line 5)) Using cached blendmodes-2022-py3-none-any.whl.metadata (12 kB) Collecting clean-fid==0.1.35 (from -r requirements_versions.txt (line 6)) Using cached clean_fid-0.1.35-py3-none-any.whl.metadata (36 kB) Collecting diskcache==5.6.3 (from -r requirements_versions.txt (line 7)) Using cached diskcache-5.6.3-py3-none-any.whl.metadata (20 kB) Collecting einops==0.4.1 (from -r requirements_versions.txt (line 8)) Using cached einops-0.4.1-py3-none-any.whl.metadata (10 kB) Collecting facexlib==0.3.0 (from -r requirements_versions.txt (line 9)) Using cached facexlib-0.3.0-py3-none-any.whl.metadata (4.6 kB) Collecting fastapi==0.94.0 (from -r requirements_versions.txt (line 10)) Using cached fastapi-0.94.0-py3-none-any.whl.metadata (25 kB) Collecting gradio==3.41.2 (from -r requirements_versions.txt (line 11)) Using cached gradio-3.41.2-py3-none-any.whl.metadata (17 kB) Collecting httpcore==0.15 (from -r requirements_versions.txt (line 12)) Using cached httpcore-0.15.0-py3-none-any.whl.metadata (15 kB) Collecting inflection==0.5.1 (from -r requirements_versions.txt (line 13)) Using cached inflection-0.5.1-py2.py3-none-any.whl.metadata (1.7 kB) Collecting jsonmerge==1.8.0 (from -r 
requirements_versions.txt (line 14)) Using cached jsonmerge-1.8.0.tar.gz (26 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting kornia==0.6.7 (from -r requirements_versions.txt (line 15)) Using cached kornia-0.6.7-py2.py3-none-any.whl.metadata (12 kB) Collecting lark==1.1.2 (from -r requirements_versions.txt (line 16)) Using cached lark-1.1.2-py2.py3-none-any.whl.metadata (1.7 kB) Collecting numpy==1.26.2 (from -r requirements_versions.txt (line 17)) Using cached numpy-1.26.2-cp310-cp310-win_amd64.whl.metadata (61 kB) Collecting omegaconf==2.2.3 (from -r requirements_versions.txt (line 18)) Using cached omegaconf-2.2.3-py3-none-any.whl.metadata (3.9 kB) Collecting open-clip-torch==2.20.0 (from -r requirements_versions.txt (line 19)) Using cached open_clip_torch-2.20.0-py3-none-any.whl.metadata (46 kB) Collecting piexif==1.1.3 (from -r requirements_versions.txt (line 20)) Using cached piexif-1.1.3-py2.py3-none-any.whl.metadata (3.7 kB) Requirement already satisfied: protobuf==3.20.0 in c:\users\mabrchaouen\desktop\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 21)) (3.20.0) Collecting psutil==5.9.5 (from -r requirements_versions.txt (line 22)) Using cached psutil-5.9.5-cp36-abi3-win_amd64.whl.metadata (21 kB) Collecting pytorch_lightning==1.9.4 (from -r requirements_versions.txt (line 23)) Using cached pytorch_lightning-1.9.4-py3-none-any.whl.metadata (22 kB) Collecting resize-right==0.0.2 (from -r requirements_versions.txt (line 24)) Using cached resize_right-0.0.2-py3-none-any.whl.metadata (551 bytes) Collecting safetensors==0.4.2 (from -r requirements_versions.txt (line 25)) Using cached safetensors-0.4.2-cp310-none-win_amd64.whl.metadata (3.9 kB) 
Collecting scikit-image==0.21.0 (from -r requirements_versions.txt (line 26)) Using cached scikit_image-0.21.0-cp310-cp310-win_amd64.whl.metadata (14 kB) Collecting spandrel==0.3.4 (from -r requirements_versions.txt (line 27)) Using cached spandrel-0.3.4-py3-none-any.whl.metadata (14 kB) Collecting spandrel-extra-arches==0.1.1 (from -r requirements_versions.txt (line 28)) Using cached spandrel_extra_arches-0.1.1-py3-none-any.whl.metadata (3.0 kB) Collecting tomesd==0.1.3 (from -r requirements_versions.txt (line 29)) Using cached tomesd-0.1.3-py3-none-any.whl.metadata (9.1 kB) Requirement already satisfied: torch in c:\users\mabrchaouen\desktop\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 30)) (2.1.2+cu121) Collecting torchdiffeq==0.2.3 (from -r requirements_versions.txt (line 31)) Using cached torchdiffeq-0.2.3-py3-none-any.whl.metadata (488 bytes) Collecting torchsde==0.2.6 (from -r requirements_versions.txt (line 32)) Using cached torchsde-0.2.6-py3-none-any.whl.metadata (5.3 kB) Collecting transformers==4.30.2 (from -r requirements_versions.txt (line 33)) Using cached transformers-4.30.2-py3-none-any.whl.metadata (113 kB) Collecting httpx==0.24.1 (from -r requirements_versions.txt (line 34)) Using cached httpx-0.24.1-py3-none-any.whl.metadata (7.4 kB) Collecting pillow-avif-plugin==1.4.3 (from -r requirements_versions.txt (line 35)) Using cached pillow_avif_plugin-1.4.3-cp310-cp310-win_amd64.whl.metadata (1.7 kB) Collecting gitdb<5,>=4.0.1 (from GitPython==3.1.32->-r requirements_versions.txt (line 2)) Using cached gitdb-4.0.12-py3-none-any.whl.metadata (1.2 kB) Requirement already satisfied: packaging>=20.0 in c:\users\mabrchaouen\desktop\stable-diffusion-webui\venv\lib\site-packages (from accelerate==0.21.0->-r requirements_versions.txt (line 4)) (24.2) Requirement already satisfied: pyyaml in c:\users\mabrchaouen\desktop\stable-diffusion-webui\venv\lib\site-packages (from accelerate==0.21.0->-r 
requirements_versions.txt (line 4)) (6.0.2) Collecting aenum<4,>=3.1.7 (from blendmodes==2022->-r requirements_versions.txt (line 5)) Using cached aenum-3.1.15-py3-none-any.whl.metadata (3.7 kB) Collecting deprecation<3,>=2.1.0 (from blendmodes==2022->-r requirements_versions.txt (line 5)) Using cached deprecation-2.1.0-py2.py3-none-any.whl.metadata (4.6 kB) Requirement already satisfied: torchvision in c:\users\mabrchaouen\desktop\stable-diffusion-webui\venv\lib\site-packages (from clean-fid==0.1.35->-r requirements_versions.txt (line 6)) (0.16.2+cu121) Collecting scipy>=1.0.1 (from clean-fid==0.1.35->-r requirements_versions.txt (line 6)) Using cached scipy-1.15.2-cp310-cp310-win_amd64.whl.metadata (60 kB) Requirement already satisfied: tqdm>=4.28.1 in c:\users\mabrchaouen\desktop\stable-diffusion-webui\venv\lib\site-packages (from clean-fid==0.1.35->-r requirements_versions.txt (line 6)) (4.67.1) Requirement already satisfied: requests in c:\users\mabrchaouen\desktop\stable-diffusion-webui\venv\lib\site-packages (from clean-fid==0.1.35->-r requirements_versions.txt (line 6)) (2.32.3) Collecting filterpy (from facexlib==0.3.0->-r requirements_versions.txt (line 9)) Using cached filterpy-1.4.5.zip (177 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'error' stderr: error: subprocess-exited-with-error Getting requirements to build wheel did not run successfully. 
exit code: 1 [27 lines of output] Traceback (most recent call last): File "C:\Users\mabrchaouen\Desktop\stable-diffusion-webui\venv\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 389, in <module> main() File "C:\Users\mabrchaouen\Desktop\stable-diffusion-webui\venv\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 373, in main json_out["return_val"] = hook(**hook_input["kwargs"]) File "C:\Users\mabrchaouen\Desktop\stable-diffusion-webui\venv\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 143, in get_requires_for_build_wheel return hook(config_settings) File "C:\Users\mabrchaouen\AppData\Local\Temp\pip-build-env-xj8idsk9\overlay\Lib\site-packages\setuptools\build_meta.py", line 334, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=[]) File "C:\Users\mabrchaouen\AppData\Local\Temp\pip-build-env-xj8idsk9\overlay\Lib\site-packages\setuptools\build_meta.py", line 304, in _get_build_requires self.run_setup() File "C:\Users\mabrchaouen\AppData\Local\Temp\pip-build-env-xj8idsk9\overlay\Lib\site-packages\setuptools\build_meta.py", line 522, in run_setup super().run_setup(setup_script=setup_script) File "C:\Users\mabrchaouen\AppData\Local\Temp\pip-build-env-xj8idsk9\overlay\Lib\site-packages\setuptools\build_meta.py", line 320, in run_setup exec(code, locals()) File "<string>", line 12, in <module> File "C:\Users\mabrchaouen\AppData\Local\Temp\pip-build-env-xj8idsk9\overlay\Lib\site-packages\setuptools\__init__.py", line 116, in setup _install_setup_requires(attrs) File "C:\Users\mabrchaouen\AppData\Local\Temp\pip-build-env-xj8idsk9\overlay\Lib\site-packages\setuptools\__init__.py", line 87, in _install_setup_requires dist.parse_config_files(ignore_option_errors=True) File "C:\Users\mabrchaouen\AppData\Local\Temp\pip-build-env-xj8idsk9\overlay\Lib\site-packages\setuptools\dist.py", line 730, in parse_config_files 
self._parse_config_files(filenames=inifiles) File "C:\Users\mabrchaouen\AppData\Local\Temp\pip-build-env-xj8idsk9\overlay\Lib\site-packages\setuptools\dist.py", line 599, in _parse_config_files opt = self._enforce_underscore(opt, section) File "C:\Users\mabrchaouen\AppData\Local\Temp\pip-build-env-xj8idsk9\overlay\Lib\site-packages\setuptools\dist.py", line 629, in _enforce_underscore raise InvalidConfigError( setuptools.errors.InvalidConfigError: Invalid dash-separated key 'description-file' in 'metadata' (setup.cfg), please use the underscore name 'description_file' instead. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error Getting requirements to build wheel did not run successfully. exit code: 1 See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. Press any key to continue . . . ``` ### Additional information _No response_
open
2025-03-24T18:25:34Z
2025-03-24T18:25:34Z
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16916
[ "bug-report" ]
rhnaxifg4y
0
yihong0618/running_page
data-visualization
120
增加 favicon
Running Page seems to have never had a favicon. I took the LOGO you designed and made a simple favicon out of it for my own Running Page: ![running_page_logo](https://user-images.githubusercontent.com/53750381/115108576-d65f7d80-9fa3-11eb-85fd-4cd466f0f3ca.png) Since the original image is not square, I cropped it slightly; if you're not satisfied with it, treat it as a temporary fallback. Here is how it looks in practice: ([click to preview](https://mfydev.run)) ![image](https://user-images.githubusercontent.com/53750381/115109062-4bcc4d80-9fa6-11eb-97f6-be9764a56368.png) I also found a few nice running icons on icons8 and flaticon, all free. Any of these could serve as design references for the Running Page logo, **though the current LOGO already looks great (thanks @shaonianche)**; it's just that as a favicon it seems a bit blurry, and a high-resolution SVG icon would be ideal (just saying): ![running](https://user-images.githubusercontent.com/53750381/115108815-0b200480-9fa5-11eb-8fd4-9583ae4c2c2e.png): I personally think this one looks best ![icons8](https://img.icons8.com/color/48/000000/sports-mode.png): this color scheme seems to match Running Page quite well, haha ![icons8](https://img.icons8.com/color/48/000000/trainers.png): a skeuomorphic running shoe ![icons8](https://img.icons8.com/cotton/64/000000/trainers.png): this one feels a bit girly LOL ![icons8](https://img.icons8.com/cotton/64/000000/sneakers--v2.png): another skeuomorphic running shoe, whose colors also seem to match
closed
2021-04-17T09:32:06Z
2021-04-22T11:31:31Z
https://github.com/yihong0618/running_page/issues/120
[]
MFYDev
5
horovod/horovod
machine-learning
3,852
elastic job:success count == 2 -> stop running
**Environment:**
1. Framework: (TensorFlow, Keras, PyTorch, MXNet) tensorflow
2. Framework version: 1.15
3. Horovod version: 0.27.0
5. MPI version:
6. CUDA version: 11.6
7. NCCL version: 2.12
8. Python version: 3.6
9. Spark / PySpark version:
10. Ray version:
11. OS and version:
12. GCC version:
13. CMake version:

**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if your question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?

**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.

I run the elastic job in k8s with this command. The job runs normally, but when I delete one worker from the job, the job sometimes fails.

```
horovodrun -np 2 --min-np 2 --max-np 8 --log-level DEBUG --host-discovery-script /wangdk/discover_hosts.sh python /wangdk/elasticjob/tensorflow/tensorflow_keras_mnist_elastic.py --epochs=10000 --data-dir /wangdk/mnist
```

The failed job's log:

```
[0]<stdout>:166/166 [==============================] - 4s 25ms/step - loss: 0.0301 - acc: 0.9908
[0]<stdout>:Epoch 10/10000
[0]<stdout>:166/166 [==============================] - 4s 26ms/step - loss: 0.0276 - acc: 0.9914
[0]<stdout>:Epoch 11/10000
[2]<stderr>:Connection to 172.23.141.131 closed by remote host.- loss: 0.0293 - acc: 0.9903
Process 2 exit with status code 255.
INFO:root:record state: 172.23.141.131[0] = FAILURE
[0]<stderr>:[2023-02-17 11:02:28.524148: E /tmp/pip-install-re1ggobk/horovod/horovod/common/operations.cc:697] [0]: Horovod background loop uncaught exception: [/tmp/pip-install-re1ggobk/horovod/third_party/compatible_gloo/gloo/transport/tcp/unbound_buffer.cc:84] Timed out waiting 30000ms for recv operation to complete
[3]<stderr>:[2023-02-17 11:02:28.532151: E /tmp/pip-install-re1ggobk/horovod/horovod/common/operations.cc:697] [2]: Horovod background loop uncaught exception: [/tmp/pip-install-re1ggobk/horovod/third_party/compatible_gloo/gloo/transport/tcp/unbound_buffer.cc:136] Timed out waiting 30000ms for send operation to complete
[0]<stdout>:106/166 [==================>...........] - ETA: 1s - loss: 0.0292 - acc: 0.9903[2023-02-17 11:02:28.556292: D /tmp/pip-install-re1ggobk/horovod/horovod/common/operations.cc:709] [0]: Shutting down background thread
[3]<stdout>:[2023-02-17 11:02:28.571005: D /tmp/pip-install-re1ggobk/horovod/horovod/common/operations.cc:709] [2]: Shutting down background thread
INFO:root:record state: 172.21.114.71[0] = SUCCESS
INFO:root:record state: 172.21.114.76[0] = SUCCESS
INFO:root:all 3 workers recorded
INFO:root:success count == 2 -> stop running
DEBUG:root:adding results for 172.21.114.76[0]: (0, 1676631750.8236825)
DEBUG:root:adding results for 172.23.141.131[0]: (255, 1676631718.5624497)
DEBUG:root:adding results for 172.21.114.71[0]: (0, 1676631750.5931416)
Traceback (most recent call last):
  File "/usr/local/bin/horovodrun", line 8, in <module>
    sys.exit(run_commandline())
  File "/usr/local/lib/python3.6/dist-packages/horovod/runner/launch.py", line 837, in run_commandline
    _run(args)
  File "/usr/local/lib/python3.6/dist-packages/horovod/runner/launch.py", line 825, in _run
    return _run_elastic(args)
  File "/usr/local/lib/python3.6/dist-packages/horovod/runner/launch.py", line 738, in _run_elastic
    return gloo_run_elastic(settings, env, args.run_func if args.run_func else
args.command, executable)
  File "/usr/local/lib/python3.6/dist-packages/horovod/runner/gloo_run.py", line 380, in gloo_run_elastic
    return launch_gloo_elastic(command_or_func, exec_command, settings, env, get_common_interfaces, rendezvous, executable)
  File "/usr/local/lib/python3.6/dist-packages/horovod/runner/gloo_run.py", line 354, in launch_gloo_elastic
    .format(name=name, code=exit_code))
RuntimeError: Horovod detected that one or more processes exited with non-zero status, thus causing the job to be terminated. The first process to do so was:
Process name: 172.23.141.131[0]
Exit code: 255
```

I checked the Horovod code and found what may be the cause: the code below stops the job as soon as the success worker count is greater than zero. I don't understand the reason for this behavior. Can anyone give me a suggestion? Thanks.

```
def _on_workers_recorded(self):
    logging.info('all {} workers recorded'.format(self.size()))

    # Check for success state, if any process succeeded, shutdown all other processes
    if self.count(SUCCESS) > 0:
        logging.info('success count == {} -> stop running'.format(self.count(SUCCESS)))
        self._driver.stop()
        return

    # Check that all processes failed, indicating that processing should stop
    if self.count(FAILURE) == self._size:
        logging.error('failure count == {} -> stop running'.format(self._size))
        self._driver.stop()
        return

    # Check for failures, and add them to the blacklisted hosts list
    failures = self.get(FAILURE)
    for host, slot in failures:
        self._host_manager.blacklist(host)
```
open
2023-02-17T13:21:25Z
2023-02-17T14:00:06Z
https://github.com/horovod/horovod/issues/3852
[ "bug" ]
davidstack
0
tqdm/tqdm
jupyter
1,161
Poor time estimation for concurrent processing
When using `tqdm.contrib.concurrent.process_map` to execute some longer-running tasks in parallel, the default remaining time and iterations per second estimation can be quite far off the mark. Consider running ten workers in parallel, who all sleep one second: ```Python from tqdm.contrib.concurrent import process_map import time NUM_WORKERS = 10 NUM_TASKS = 100 def sleep(time_to_sleep): time.sleep(time_to_sleep) if __name__ == "__main__": time_to_sleep_args = [1 for i in range(NUM_TASKS)] process_map(sleep, time_to_sleep_args, max_workers=NUM_WORKERS) ``` The processing time should be a little over ten seconds total. On my machine (tqdm 4.50.2 3.8.5 (default, Sep 4 2020, 02:22:02) / [Clang 10.0.0 ] / darwin, MBP 2020), the progress bar will update after task 1, task 11, task 21, ... . As the first ten tasks finish around the same time but the estimations being done after task 1 finishes, the estimations for remaining time and iterations per second are very far off, approaching the correct values only towards the end. The first time estimation is for > 100 seconds, while it actually finishes arounds at around 10 seconds. Consider running the same number of tasks and adding more realism by having the tasks differ in length a little bit, sleeping between 1 and 1.2 seconds: ```Python import random random.seed(42) time_to_sleep_args = [1 + random.random() / 5 for i in range(NUM_TASKS)] process_map(sleep, time_to_sleep_args, max_workers=NUM_WORKERS) ``` On my machine, the general behavior is the same (remaing time and it/s far off after estimating after first iteration, slowly recovering), but additionally, tqdm will start overestimating the number of iterations per second towards the end to values > 10it/s, which is not actually possible. This is what I can observe in practice as well: when using tasks of multiple seconds length, the estimated remaining time and iterations per second are all over the place, even though all tasks have nearly the same length. 
When several tasks happen to finish around the same time, the remaining time estimation is much too low. The issue seems to be that the interval/timing code does not consider that these tasks are running in parallel. One solution could be to default to `miniters=max_workers`, if neither `miniters`, `mininterval` or `maxinterval` are given in the call to process_map, and if the number of total iterations is larger than `max_workers`. This would result in both samples above producing output after every group of workers is done, having a good estimate after every group of 10 tasks done. (Of course this could be done by the user/caller as well, but I would argue that they should not have to work around a broken parallel estimation, rather that this uses a sensible default). This solution should fix the issue for any use case where the tasks have similar length. If the tasks have widely varying length, then the resulting estimations are not worse, they just appear a little later. As a downside, if the `max_workers` parameter is not given in the call to `process_map`, it would have to be estimated by duplicating the [code in CPython](https://github.com/python/cpython/blob/2a3f4899c63806439e5bcea0c30f7e6a6295a763/Lib/concurrent/futures/process.py#L596-L607). Also, if the number of actual spawned workers were much smaller than max_workers, there would be a delay until the first update of the progress bar. But I would consider this to happen much less often, and the cases above being the default use case for users of process_map. I'd be happy to provide a pull request with an implementation if the solution is deemed acceptable. [source website]: https://github.com/tqdm/tqdm/ [known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues [issue tracker]: https://github.com/tqdm/tqdm/issues?q= [StackOverflow#tqdm]: https://stackoverflow.com/questions/tagged/tqdm
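The mis-estimation described above can be reproduced without tqdm at all. The sketch below (stdlib only, timings illustrative) extrapolates a serial-style ETA right after the first completion and compares it with the actual wall-clock time of the parallel run:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def work(t):
    time.sleep(t)
    return t

n_workers, n_tasks, duration = 4, 8, 0.05
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=n_workers) as ex:
    results = ex.map(work, [duration] * n_tasks)
    first = next(iter(results))  # the first task finishes after ~duration
    # A serial estimator extrapolates from one completed task:
    naive_eta = (time.perf_counter() - start) * (n_tasks - 1)
    list(results)  # drain the remaining tasks
true_wall = time.perf_counter() - start

# With 4 workers the run takes ~2*duration of wall-clock time, but the
# naive ETA after the first completion predicts ~7*duration, i.e. roughly
# n_workers times too long.
print(naive_eta > true_wall)
```

With the proposed default of `miniters=max_workers`, the first update would only happen after a full group of workers finished, so the extrapolation already averages over the parallelism.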
open
2021-04-22T18:02:31Z
2024-02-02T15:17:13Z
https://github.com/tqdm/tqdm/issues/1161
[ "p3-enhancement 🔥", "submodule ⊂", "synchronisation ⇶", "c1-quick 🕐" ]
w-m
6
rthalley/dnspython
asyncio
351
dns.query.xfr leaks an fd on an exception
As reported recently on dnspython-users, dns.query.xfr leaks an fd on an exception.
closed
2019-02-15T18:24:27Z
2020-04-01T20:16:18Z
https://github.com/rthalley/dnspython/issues/351
[ "Bug" ]
rthalley
2
nvbn/thefuck
python
711
Command selection of `ctrl+n` and `ctrl+p` instead of `↑` and `↓`
Feature request. It's convenient to select commands by `ctrl+n` and `ctrl+p` instead of `↑` and `↓`. ``` $ gitstatus -bash: gitstatus: command not found $ fuck gstat [enter/↑/↓/ctrl+c] # <= here, selection can be changed by ctrl+p or ctrl+n ```
closed
2017-10-16T03:21:39Z
2017-10-20T00:46:06Z
https://github.com/nvbn/thefuck/issues/711
[ "next release", "hacktoberfest" ]
ikuwow
1
biolab/orange3
pandas
6,380
Signals in example workflows are identified by strings
Before releasing the next version, somebody should load and save example workflows, so the signals are identified by id's and can be loaded by Slovenian Orange. And (s)he should do so on the latest master which includes #6346, Data Table with a single input.
closed
2023-03-31T10:15:17Z
2023-04-12T09:10:13Z
https://github.com/biolab/orange3/issues/6380
[]
janezd
0
dunossauro/fastapi-do-zero
sqlalchemy
277
Sharing my repo
| Project link | Your git @ | Comment (optional) | |------------------ |:------------------:|:----------------------------------:| |[fastpy_todo](https://github.com/pLogicador/fastpy_todo) | [@pLogicador](https://github.com/pLogicador) | First steps with FastAPI, interesting so far! |
closed
2025-01-08T02:45:23Z
2025-02-02T10:00:21Z
https://github.com/dunossauro/fastapi-do-zero/issues/277
[]
pLogicador
2
cobrateam/splinter
automation
1,044
DeprecationWarning: firefox_profile has been deprecated, please use an Options object
Using `Browser('firefox')` results in a DeprecationWarning: ``` ~/Vcs/splinter/splinter/browser.py:113: in Browser return get_driver(driver, retry_count=retry_count, *args, **kwargs) ~/Vcs/splinter/splinter/browser.py:84: in get_driver return driver(*args, **kwargs) ~/Vcs/splinter/splinter/driver/webdriver/firefox.py:32: in __init__ firefox_profile = FirefoxProfile(profile) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <selenium.webdriver.firefox.firefox_profile.FirefoxProfile object at 0x7f886862ff70> profile_directory = None def __init__(self, profile_directory=None): """ Initialises a new instance of a Firefox Profile :args: - profile_directory: Directory of profile that you want to use. If a directory is passed in it will be cloned and the cloned directory will be used by the driver when instantiated. This defaults to None and will create a new directory when object is created. """ > warnings.warn('firefox_profile has been deprecated, please use an Options object', DeprecationWarning, stacklevel=2) E DeprecationWarning: firefox_profile has been deprecated, please use an Options object .venv/lib/python3.10/site-packages/selenium/webdriver/firefox/firefox_profile.py:58: DeprecationWarning: firefox_profile has been deprecated, please use an Options object ``` Using selenium==4.2.0,
closed
2022-06-10T10:30:07Z
2022-06-21T03:05:37Z
https://github.com/cobrateam/splinter/issues/1044
[ "easy", "help wanted", "good first issue" ]
blueyed
3
pmaji/crypto-whale-watching-app
dash
11
App looks like it's doing nothing on initial load
I've been wondering how to fix this myself but I don't know enough about these libraries to figure it out. When you load the app it looks like it's doing nothing for 10 seconds until it makes the first requests to the server to get the data. It seems to be because of the interval being set to 10 seconds, it waits for the first 10 seconds to tick and then it loads the data. What would be better is if it loaded the graphs immediately on the page's first load and then updated them every 10 seconds, but I can't see how to do this from the library being used.
closed
2018-02-11T20:48:28Z
2018-02-11T23:01:26Z
https://github.com/pmaji/crypto-whale-watching-app/issues/11
[]
CrackLord
3
vvbbnn00/WARP-Clash-API
flask
148
Generate a yaml file
Hoping that after manually selecting the preferred endpoints, a subscription yaml file can be generated locally.
closed
2024-03-14T16:01:22Z
2024-04-16T05:19:41Z
https://github.com/vvbbnn00/WARP-Clash-API/issues/148
[]
jimmylzt188
0
deeppavlov/DeepPavlov
tensorflow
934
Training function without config
Hello! The function `train_evaluate_model_from_config` works fine, but sometimes one wants to train a pipeline without a config. I still do not understand how to build it except in the following way: 1. build a pipeline class which has `__call__`, `train_on_batch` or `fit`, `save` and `load` methods. 2. send this class instance to some `train_pipeline` function along with additional training parameters (such as number of epochs, metrics, etc.). Maybe you can suggest another way to build it. Thank you in advance.
closed
2019-07-19T12:53:59Z
2020-05-13T09:42:31Z
https://github.com/deeppavlov/DeepPavlov/issues/934
[]
dilyararimovna
1
gradio-app/gradio
deep-learning
10,430
Update Deprecated Arguments in StableDiffusionPipeline
### Describe the bug ### Overview The `run.py` script in `demo/stable-diffusion` uses deprecated arguments (use_auth_token, revision="fp16") when setting up the `StableDiffusionPipeline`. This results in warnings during runtime. Additionally, the script does not handle the case where the Hugging Face access token (`auth_token`) is missing from the environment variables, leaving users confused and potentially stuck in the UI when trying to generate an inference. ### Impact: 1. The use of deprecated arguments leads to warnings and does not comply with the latest diffusers API, reducing code maintainability. 2. Without the token validation at runtime, users experience misleading behavior in the UI without a clear error message explaining how to resolve the issue. ### Suggested Improvements 1. Replace deprecated arguments: - Update `use_auth_token` to `token`. - Replace revision="fp16" with variant="fp16" in the StableDiffusionPipeline setup. 2. Add token validation: - Check for the presence of `auth_token` environment variable at runtime. - Display a clear error message in the console and terminate execution if the token is missing. ### Have you searched existing issues? 
🔎 - [x] I have searched and found no existing issues ### Reproduction ```python import gradio as gr import torch from diffusers import StableDiffusionPipeline # type: ignore from PIL import Image import os auth_token = os.getenv("auth_token") model_id = "CompVis/stable-diffusion-v1-4" device = "cpu" pipe = StableDiffusionPipeline.from_pretrained( model_id, use_auth_token=auth_token, revision="fp16", torch_dtype=torch.float16 ) pipe = pipe.to(device) def infer(prompt, samples, steps, scale, seed): generator = torch.Generator(device=device).manual_seed(seed) images_list = pipe( # type: ignore [prompt] * samples, num_inference_steps=steps, guidance_scale=scale, generator=generator, ) images = [] safe_image = Image.open(r"unsafe.png") for i, image in enumerate(images_list["sample"]): # type: ignore if images_list["nsfw_content_detected"][i]: # type: ignore images.append(safe_image) else: images.append(image) return images block = gr.Blocks() with block: with gr.Group(): with gr.Row(): text = gr.Textbox( label="Enter your prompt", max_lines=1, placeholder="Enter your prompt", container=False, ) btn = gr.Button("Generate image") gallery = gr.Gallery( label="Generated images", show_label=False, elem_id="gallery", columns=[2], ) advanced_button = gr.Button("Advanced options", elem_id="advanced-btn") with gr.Row(elem_id="advanced-options"): samples = gr.Slider(label="Images", minimum=1, maximum=4, value=4, step=1) steps = gr.Slider(label="Steps", minimum=1, maximum=50, value=45, step=1) scale = gr.Slider( label="Guidance Scale", minimum=0, maximum=50, value=7.5, step=0.1 ) seed = gr.Slider( label="Seed", minimum=0, maximum=2147483647, step=1, randomize=True, ) gr.on([text.submit, btn.click], infer, inputs=[text, samples, steps, scale, seed], outputs=gallery) advanced_button.click( None, [], text, ) block.launch() ``` ### Screenshot _No response_ ### Logs ```shell Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. 
It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for`float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference. ``` ### System Info ```shell Gradio Environment Information: ------------------------------ Operating System: Darwin gradio version: 5.13.1 gradio_client version: 1.6.0 ------------------------------------------------ gradio dependencies in your environment: aiofiles: 23.2.1 anyio: 4.8.0 audioop-lts is not installed. fastapi: 0.115.7 ffmpy: 0.5.0 gradio-client==1.6.0 is not installed. httpx: 0.28.1 huggingface-hub: 0.27.1 jinja2: 3.1.5 markupsafe: 2.1.5 numpy: 2.2.2 orjson: 3.10.15 packaging: 24.2 pandas: 2.2.3 pillow: 11.1.0 pydantic: 2.10.6 pydub: 0.25.1 python-multipart: 0.0.20 pyyaml: 6.0.2 ruff: 0.9.3 safehttpx: 0.1.6 semantic-version: 2.10.0 starlette: 0.45.2 tomlkit: 0.13.2 typer: 0.15.1 typing-extensions: 4.12.2 urllib3: 2.3.0 uvicorn: 0.34.0 authlib; extra == 'oauth' is not installed. itsdangerous; extra == 'oauth' is not installed. gradio_client dependencies in your environment: fsspec: 2024.12.0 httpx: 0.28.1 huggingface-hub: 0.27.1 packaging: 24.2 typing-extensions: 4.12.2 websockets: 14.2 ``` ### Severity I can work around it
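The suggested token validation can run before the pipeline is ever constructed. A minimal sketch (the `auth_token` variable name follows the script above; the error-message wording is an assumption):

```python
import os

def require_auth_token(env_var: str = "auth_token") -> str:
    """Fail fast with a clear message when the Hugging Face token is missing."""
    token = os.getenv(env_var)
    if not token:
        raise SystemExit(
            f"Missing Hugging Face access token: set the '{env_var}' "
            "environment variable before starting the demo."
        )
    return token

# With a valid token, the deprecated arguments can then be replaced
# (assumes a recent diffusers release):
# pipe = StableDiffusionPipeline.from_pretrained(
#     model_id,
#     token=require_auth_token(),   # was: use_auth_token=auth_token
#     variant="fp16",               # was: revision="fp16"
#     torch_dtype=torch.float16,
# )
```

This way the script exits in the console with an actionable message instead of leaving the user stuck in the UI.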
closed
2025-01-24T09:38:40Z
2025-01-25T00:51:05Z
https://github.com/gradio-app/gradio/issues/10430
[ "bug" ]
ddayto21
0
MorvanZhou/tutorials
numpy
66
LSTM: About the data serialization problem in 7-RNN_Classifier_example.py?
Now the dataset **X**={**x1,x2,x3...,xn**} has `shape=[n,m]`, and **x1,x2,...,xn** are samples of **X**. The label data has `y.shape=[n,k]`. If I use a time window with length of 2, then after reshaping with `X = tf.reshape(X,[int(n/2), 2, m])` I get `X.shape=[n/2, 2, m]`. But I have a problem getting the cost by the formula `cost_rnn = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_ , labels=y))` because X and y now have different batch sizes. Does anybody know how to solve this problem? ___________________________________________________ There is a dataset **X**={**x1,x2,x3...,xn**}, `shape=[n,m]`, where each **x1** contains multiple variables, `shape=[m]`. For example, **X**=[[1,10,100],[2,20,200],[3,30,300]] can be seen as X composed of multiple samples x1, x2, .... The label samples are **y**={**y1,y2,...yn**}, `shape=[n,k]`, for example **Y**=[[1,0,0],[0,1,0],[0,0,1]]. If this LSTM serializes the data with a time window `time_step=2`, `X = tf.reshape(X,[int(n/2), 2, m])`, then after serialization **X** only has **n/2** samples left, `shape=[n/2, 2, m]`. Since the dimensions are now different, the `cost` cannot be computed: `cost_rnn = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_rnn, labels=y))`. How should this kind of dataset be handled?
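One common way out (a sketch of an assumption, not the tutorial's own fix) is to group the labels into the same windows as the features, for example keeping the label of the last step in each window, so both batch dimensions become n/2:

```python
import numpy as np

n, m, k, time_step = 6, 3, 2, 2
X = np.arange(n * m, dtype=float).reshape(n, m)   # n samples, m features
y = np.eye(k)[np.arange(n) % k]                   # one-hot labels, shape (n, k)

# Group features into windows of length time_step, exactly as in the question.
X_seq = X.reshape(n // time_step, time_step, m)

# Group the labels the same way and keep the last label of each window
# (summing or averaging per window are alternatives, depending on the task).
y_seq = y.reshape(n // time_step, time_step, k)[:, -1, :]

print(X_seq.shape, y_seq.shape)  # (3, 2, 3) (3, 2)
```

With matching batch dimensions, the cross-entropy cost can be computed against the per-window logits.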
open
2018-04-27T15:31:34Z
2018-04-27T15:33:40Z
https://github.com/MorvanZhou/tutorials/issues/66
[]
rosefun
0
piskvorky/gensim
machine-learning
3,315
evaluate_word_pairs gives ValueError: x and y must have length at least 2.
<!-- **IMPORTANT**: - Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports. - Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers. Github bug reports that do not include relevant information and context will be closed without an answer. Thanks! --> #### Problem description I am trying to run an evaluation on a pair of trained vectors on a datafile, namely [this one](https://github.com/tca19/near-lossless-binarization/blob/master/datasets/MEN.txt). However, when using the function `wordembeddingmodel.evaluate_word_pairs(datapath("MEN.txt"))` (it is stored in the datapath folder) I get the error `ValueError: x and y must have length at least 2.`. I suspect this has to do with the formatting, but I cannot figure out what the issue is. #### Steps/code/corpus to reproduce Traceback: ```bash File "/.../src/evaluate.py", line 25, in evaluate_wordsimilarity ] = wordembeddingmodel.evaluate_word_pairs(datapath(worddataset))[0][0] File "/home/users/filip/.conda/envs/sskgnn/lib/python3.9/site-packages/gensim/models/keyedvectors.py", line 1321, in evaluate_word_pairs pearson = stats.pearsonr(similarity_gold, similarity_model) File "/home/users/filip/.conda/envs/sskgnn/lib/python3.9/site-packages/scipy/stats/stats.py", line 4016, in pearsonr raise ValueError('x and y must have length at least 2.') ValueError: x and y must have length at least 2. ``` Dataset: Downloaded from [here](https://raw.githubusercontent.com/tca19/near-lossless-binarization/master/datasets/MEN.txt). 
I downloaded it and created it using: ```python3 import requests from gensim.test.utils import datapath import os.path as osp fpath = datapath("") file = "https://raw.githubusercontent.com/tca19/near-lossless-binarization/master/datasets/MEN.txt" r = requests.get(file) filename = file.split("/")[-1] with open(osp.join(fpath,filename), "wb") as f: f.write(r.text.encode("utf-8").strip()) ``` ```python >>> print(wordembeddingmodel.lifecycle_events) *** AttributeError: 'Word2VecKeyedVectors' object has no attribute 'lifecycle_events' ``` #### Versions Please provide the output of: ```python >>> import platform; print(platform.platform()) Linux-5.4.0-90-generic-x86_64-with-glibc2.31 >>> import sys; print("Python", sys.version) Python 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] >>> import struct; print("Bits", 8 * struct.calcsize("P")) Bits 64 >>> import numpy; print("NumPy", numpy.__version__) NumPy 1.21.2 >>> import scipy; print("SciPy", scipy.__version__) SciPy 1.7.1 >>> import gensim; print("gensim", gensim.__version__) gensim 3.8.3 >>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION) FAST_VERSION 1 ``` Any help is greatly appreciated! Thank you :) Cheers Filip
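The formatting suspicion can be checked without gensim: the MEN.txt rows are space-separated, while `evaluate_word_pairs` defaults to splitting each row on tabs, so every row would be skipped and the similarity lists handed to `pearsonr` stay empty. A stdlib sketch of that parsing difference (the sample row is illustrative):

```python
# A row in the MEN.txt format (illustrative values):
line = "sun sunlight 50.000000"

tab_fields = line.split("\t")   # tab-delimited parsing finds a single field
space_fields = line.split()     # whitespace parsing finds the expected triple

print(len(tab_fields), len(space_fields))  # 1 3
```

If that is indeed the cause, passing `delimiter=' '` to `evaluate_word_pairs` (or converting the file to tabs) should yield non-empty similarity lists.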
closed
2022-03-30T22:07:51Z
2022-04-08T09:12:53Z
https://github.com/piskvorky/gensim/issues/3315
[]
Filco306
5
paperless-ngx/paperless-ngx
django
8,746
[BUG] Value field is disappearing when choosing logical conidatin equal
### Description When i open the custom fields to filter documents, i am choosing my field (which is a date field), then changing "exists" to "is equal" the value field disappears so i cant enter anything. --- ![pic01](https://github.com/user-attachments/assets/af0e9cb5-02e7-4238-92c0-11a67a2abf3f) --- ![pic02](https://github.com/user-attachments/assets/f3f85af9-4dd7-42d3-be1e-ebbdda72b133) --- ![pic03](https://github.com/user-attachments/assets/146cf47e-4442-4f86-ba8b-52055e144bda) --- ![pic04](https://github.com/user-attachments/assets/4ccfb17b-bd99-4b06-9c33-18668e3dab19) Strange thing: i have another date field which does not show this behavior, but the name of the field is quite short, much shorter as shown in the screenshots. ### Steps to reproduce Click on Customfields Choose a date field change exists to equl ### Webserver logs ```bash No logs as no actions are performed, seems to be a GUI Problem ``` ### Browser logs _No response_ ### Paperless-ngx version 2.14.2 ### Host OS Synolgy ### Installation method Docker - official image ### System status ```json { "pngx_version": "2.14.2", "server_os": "Linux-4.4.302+-x86_64-with-glibc2.36", "install_type": "docker", "storage": { "total": 3829392842752, "available": 3811269746688 }, "database": { "type": "postgresql", "url": "paperless", "status": "OK", "error": null, "migration_status": { "latest_migration": "mfa.0003_authenticator_type_uniq", "unapplied_migrations": [] } }, "tasks": { "redis_url": "redis://broker:6379", "redis_status": "OK", "redis_error": null, "celery_status": "OK", "index_status": "OK", "index_last_modified": "2025-01-15T00:00:19.365251+01:00", "index_error": null, "classifier_status": "OK", "classifier_last_trained": "2025-01-15T09:05:03.020041Z", "classifier_error": null } } ``` ### Browser chrome firefox edge safari ### Configuration changes _No response_ ### Please confirm the following - [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to 
my installation. - [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools. - [X] I have already searched for relevant existing issues and discussions before opening this report. - [X] I have updated the title field above with a concise description.
closed
2025-01-15T09:40:55Z
2025-02-15T03:05:59Z
https://github.com/paperless-ngx/paperless-ngx/issues/8746
[ "bug", "frontend" ]
dochuebi
3
tensorpack/tensorpack
tensorflow
1,493
Some questions about ZMQInput
Are there any examples of using ZMQInput or zmq_ops? Now I have a requirement: use zmq on the client to send data, and use ZMQInput on the server to continuously receive data and perform inference work. In the process of using these two interface, I encountered some problems. I can continue to receive data in the while loop and convert it to tf.dataset, but when I tried to print it, something went wrong. The logic of my print function is as follows: ``` def print_dataset(data_set): iterator = data_set.make_initializable_iterator() next_element = iterator.get_next() num_batch = 0 with tf.train.MonitoredTrainingSession() as sess: sess.run(iterator.initializer) while True: try: value = sess.run(next_element) print("Num Batch: ", num_batch) print("Batch value: ", value) num_batch += 1 except tf.errors.OutOfRangeError: break ``` When this function is not called, I can receive tensor continuously. When this function is called, it can only output success at the first time. I am a newbie in this area, and it may be the problem caused by my improper use, so I would like to ask you for some suggestions or examples. Thank you very much and look forward to receiving your reply. 
By the way, the error is as follows ``` terminate called after throwing an instance of 'zmq::error_t' what(): Address already in use *** Received signal 6 *** *** BEGIN MANGLED STACK TRACE *** /usr/local/lib/python3.7/dist-packages/tensorflow_core/python/../libtensorflow_framework.so.1(+0x10708ae)[0x7fb1f783c8ae] /lib/x86_64-linux-gnu/libpthread.so.0(+0x12730)[0x7fb2cde38730] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x10b)[0x7fb2cd8b27bb] /lib/x86_64-linux-gnu/libc.so.6(abort+0x121)[0x7fb2cd89d535] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x8c983)[0x7fb229dae983] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x928c6)[0x7fb229db48c6] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x92901)[0x7fb229db4901] /usr/local/lib/python3.7/dist-packages/tensorflow_core/python/../libtensorflow_framework.so.1(+0x16cc53e)[0x7fb1f7e9853e] /lib/x86_64-linux-gnu/libpthread.so.0(+0x7fa3)[0x7fb2cde2dfa3] /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7fb2cd9744cf] *** END MANGLED STACK TRACE *** *** Begin stack trace *** tensorflow::CurrentStackTrace() gsignal abort clone *** End stack trace *** Aborted (core dumped) ```
closed
2020-11-11T11:44:05Z
2020-11-12T03:59:57Z
https://github.com/tensorpack/tensorpack/issues/1493
[]
LMLzz
1
plotly/dash-component-boilerplate
dash
39
react and react-dom should be in dev-dependencies
react and react-dom are supplied by `dash-renderer`. Also we need to update our Component Author documentation to make this clear (in case it isn't).
closed
2018-11-12T20:41:58Z
2018-11-16T16:39:34Z
https://github.com/plotly/dash-component-boilerplate/issues/39
[]
bpostlethwaite
3
huggingface/datasets
nlp
7,461
List of images behave differently on IterableDataset and Dataset
### Describe the bug This code: ```python def train_iterable_gen(): images = np.array(load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg").resize((128, 128))) yield { "images": np.expand_dims(images, axis=0), "messages": [ { "role": "user", "content": [{"type": "image", "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" }] }, { "role": "assistant", "content": [{"type": "text", "text": "duck" }] } ] } train_ds = Dataset.from_generator(train_iterable_gen, features=Features({ 'images': [datasets.Image(mode=None, decode=True, id=None)], 'messages': [{'content': [{'text': datasets.Value(dtype='string', id=None), 'type': datasets.Value(dtype='string', id=None) }], 'role': datasets.Value(dtype='string', id=None)}] } ) ) ``` works as I'd expect; if I iterate the dataset then the `images` column returns a `List[PIL.Image.Image]`, i.e. `'images': [<PIL.PngImagePlugin.PngImageFile image mode=RGB size=128x128 at 0x77EFB7EF4680>]`. 
But if I change `Dataset` to `IterableDataset`, the `images` column changes into `'images': [{'path': None, 'bytes': ..]` ### Steps to reproduce the bug The code above + ```python def load_image(url): response = requests.get(url) image = Image.open(io.BytesIO(response.content)) return image ``` I'm feeding it to SFTTrainer ### Expected behavior Dataset and IterableDataset would behave the same ### Environment info ```yaml requires-python = ">=3.12" dependencies = [ "av>=14.1.0", "boto3>=1.36.7", "datasets>=3.3.2", "docker>=7.1.0", "google-cloud-storage>=2.19.0", "grpcio>=1.70.0", "grpcio-tools>=1.70.0", "moviepy>=2.1.2", "open-clip-torch>=2.31.0", "opencv-python>=4.11.0.86; sys_platform == 'darwin'", "opencv-python-headless>=4.11.0.86; sys_platform == 'linux'", "pandas>=2.2.3", "pillow>=10.4.0", "plotly>=6.0.0", "py-spy>=0.4.0", "pydantic>=2.10.6", "pydantic-settings>=2.7.1", "pymysql>=1.1.1", "ray[data,default,serve,train,tune]>=2.43.0", "torch>=2.6.0", "torchmetrics>=1.6.1", "torchvision>=0.21.0", "transformers[torch]@git+https://github.com/huggingface/transformers", "wandb>=0.19.4", # https://github.com/Dao-AILab/flash-attention/issues/833 "flash-attn @ https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.3/flash_attn-2.7.3+cu12torch2.6cxx11abiFALSE-cp312-cp312-linux_x86_64.whl; sys_platform == 'linux'", "trl@https://github.com/huggingface/trl.git", "peft>=0.14.0", ] ```
closed
2025-03-17T15:59:23Z
2025-03-18T08:57:17Z
https://github.com/huggingface/datasets/issues/7461
[]
FredrikNoren
2
Miksus/rocketry
automation
191
@app.task('daily.at("21:00") before Friday')
**Describe the bug** The statement above results in rocketry.parse.utils.exception.ParserError: Could not find parser for string 'daily.at'. **To Reproduce** Steps to reproduce the behavior. **Expected behavior** The statement should run every Mon, Tue, Wed, Thu and Fri at 21:00. **Desktop (please complete the following information):** - OS: Linux - Python version: 3.8
open
2023-02-09T10:34:58Z
2023-02-09T13:44:12Z
https://github.com/Miksus/rocketry/issues/191
[ "bug" ]
faulander
1
stanfordnlp/stanza
nlp
1,431
TypeError: expected np.ndarray (got Tensor)
**Describe the bug** Was trying to use the pretrained model https://huggingface.co/stanfordnlp/stanza-lt With a lot of issues, like stanza.download("lt") constantly crashing, I was forced to do it manually. So, I installed and downloaded everything and used the following piece of code to get the bug: ``` import stanza config = { 'processors': 'tokenize,pos', 'lang': 'lt', 'tokenize_model_path': './stanza_resources/lt/tokenize/alksnis.pt', 'pos_model_path': './stanza_resources/lt/pos/alksnis_nocharlm.pt', 'pos_pretrain_path': './stanza_resources/lt/pretrain/fasttextwiki.pt', 'tokenize_pretokenized': True, 'download_method': None } nlp = stanza.Pipeline(**config) # initialize neural pipeline doc = nlp("Kur einam mes su Knysliuku, didžiulė paslaptis") # run annotation over a sentence print(doc) ``` **Expected behavior** The result should be obvious: > [ > [ > { > "id": 1, > "text": "Kur", > "upos": "ADV", > "xpos": "prm.l.lrgin.", > "feats": "Degree=Pos|PronType=Int,Rel", > "misc": "", > "start_char": 0, > "end_char": 3 > }, > ... > ] **Environment (please complete the following information):** - OS: Windows 10 - Python 3.10.5 - stanza 1.9.2 - numpy 2.1.2 **Additional context** At least it works after patching the code in file stanza/models/pos/model.py at around line 90, changing self.add_unsaved_module('pretrained_emb', nn.Embedding.from_pretrained(torch.from_numpy(emb_matrix), freeze=True)) to ``` if type(emb_matrix) == torch.Tensor: self.add_unsaved_module('pretrained_emb', nn.Embedding.from_pretrained(emb_matrix, freeze=True)) else: self.add_unsaved_module('pretrained_emb', nn.Embedding.from_pretrained(torch.from_numpy(emb_matrix), freeze=True)) ``` Not sure which is the culprit - the library or the model.
closed
2024-11-06T06:02:26Z
2024-12-31T05:38:25Z
https://github.com/stanfordnlp/stanza/issues/1431
[ "bug" ]
topl0305
13
huggingface/transformers
tensorflow
36,749
SFTConfig.__init__() got an unexpected keyword argument 'optimizers'
closed
2025-03-16T10:03:46Z
2025-03-16T10:12:03Z
https://github.com/huggingface/transformers/issues/36749
[]
Sneakr
0
paulbrodersen/netgraph
matplotlib
100
get_geometric_layout results in missing required arguments for get_layout_for_multiple_components()
In a jupyter notebook I plot the graph first … ``` import matplotlib.pyplot as plt from netgraph import Graph # pip install netgraph OR conda install -c conda-forge netgraph # right triangle edge_length = { (0, 1) : 1.41, (0, 2) : 2.77, (2, 3) : 2.46, (0, 4) : 2.43, (1, 2) : 1.79, (1, 3) : 1.55, (3, 4) : 1.07, } edges = list(edge_length.keys()) fig, ax = plt.subplots() Graph(edges, edge_labels=edge_length, node_layout='geometric', node_layout_kwargs=dict(edge_length=edge_length), ax=ax, scale=(3, 3), tol=0.000000001) ax.set_aspect('equal') plt.show() ``` … and then try to get the positions ``` from netgraph import get_geometric_layout pos = get_geometric_layout(edges, edge_length) ``` This results in the following TypeError ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[63], line 3 1 from netgraph import get_geometric_layout ----> 3 pos = get_geometric_layout(edges, edge_length) File ~/.cache/pypoetry/virtualenvs/vermessung-2oHJ2gDh-py3.12/lib/python3.12/site-packages/netgraph/_node_layout.py:67, in _handle_multiple_components.<locals>.wrapped_layout_function(edges, nodes, *args, **kwargs) 64 return get_layout_for_multiple_components( 65 edges, components, layout_function, mode='side-by-side', *args, **kwargs) 66 else: ---> 67 return get_layout_for_multiple_components( 68 edges, components, layout_function, mode='packed', *args, **kwargs) 69 else: 70 return layout_function(edges, *args, **kwargs) TypeError: get_layout_for_multiple_components() missing 2 required positional arguments: 'origin' and 'scale' ```
closed
2024-10-15T11:08:05Z
2024-10-18T09:56:31Z
https://github.com/paulbrodersen/netgraph/issues/100
[]
white-gecko
6
awesto/django-shop
django
637
AttributeError: module 'shop.models.customer' has no attribute 'CustomerStateField'
When upgrading to version `0.11.dev0` I received this error due to my `0001_initial.py` migration referring to the `recognized` field according to django-SHOP version 0.10.2. I was forced to modify my `0001_initial.py` migration with this: `('recognized', shop.models.fields.ChoiceEnumField(enum_type=shop.models.customer.CustomerState, help_text='Designates the state the customer is recognized as.', verbose_name='Recognized as'))` They are compatible configurations since both fields derive from `models.PositiveSmallIntegerField` and CustomerState has not changed
closed
2017-08-23T15:24:49Z
2017-08-23T15:25:50Z
https://github.com/awesto/django-shop/issues/637
[]
racitup
0
aleju/imgaug
machine-learning
817
Script takes over one hour to install through pycharm on 224GB 24CPUs DSVM
Looks like the script is bad or buggy; it takes an enormous amount of time to install and is still unfixed.
open
2022-04-22T15:45:33Z
2022-04-22T15:48:38Z
https://github.com/aleju/imgaug/issues/817
[]
ffsEveryNameIsTaken
1
PeterL1n/BackgroundMattingV2
computer-vision
108
Minimum pixel value of the "pha" images from the provided dataset is 1, not 0.
closed
2021-05-27T02:20:25Z
2021-07-11T20:25:30Z
https://github.com/PeterL1n/BackgroundMattingV2/issues/108
[]
semchan
1
ploomber/ploomber
jupyter
438
Do not connect to client if tasks previously run
For some of my projects, I need to connect to 3+ databases. When I run `ploomber build`, it connects to the client each time -- even if the task is not outdated. It would be nice if ploomber could recognize that these tasks have already been run without invoking the client. That way I could continue developing locally without internet connectivity (and save a few seconds on each build, since I wouldn't need to connect to the clients).
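The requested behavior can be sketched as lazy client connection: only open the connection the first time an outdated task actually needs it. The names below are hypothetical, stdlib-only stand-ins -- this is not ploomber's API, just an illustration of the pattern being asked for.

```python
# Sketch: connect to a client lazily, only when an outdated task uses it.
# All class and attribute names here are hypothetical, not ploomber's API.

class LazyClient:
    def __init__(self, name):
        self.name = name
        self.connected = False

    def connect(self):
        self.connected = True  # stand-in for the expensive network handshake
        return self

class Task:
    def __init__(self, name, client, outdated):
        self.name, self.client, self.outdated = name, client, outdated

    def run(self):
        if not self.outdated:
            return "skipped"           # up-to-date: never touch the client
        if not self.client.connected:  # connect on first real use only
            self.client.connect()
        return "ran"

db = LazyClient("warehouse")
tasks = [Task("load", db, outdated=False), Task("clean", db, outdated=False)]
results = [t.run() for t in tasks]
print(results, db.connected)  # all tasks up-to-date -> no connection opened
```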
closed
2021-12-16T16:45:14Z
2022-02-14T01:07:51Z
https://github.com/ploomber/ploomber/issues/438
[]
reesehopkins
7
ScrapeGraphAI/Scrapegraph-ai
machine-learning
377
'SmartScraperGraph' object has no attribute 'model_token'
**Describe the bug** Hi, I am trying to scrape webpage using SmartScraperGraph, but am constantly getting the following error- 'SmartScraperGraph' object has no attribute 'model_token' **To Reproduce** This is the code- ``` repo_id = "meta-llama/Llama-2-7b-hf" llm_model_instance = HuggingFaceEndpoint( repo_id=repo_id, max_length=128, temperature=0.5, huggingfacehub_api_token='MY_API_TOKEN') embedder_model_instance = HuggingFaceInferenceAPIEmbeddings( api_key='MY_API_TOKEN', model_name="sentence-transformers/all-MiniLM-l6-v2" ) from scrapegraphai.graphs import SmartScraperGraph graph_config = { "llm": {"model_instance": llm_model_instance}, "embeddings": {"model_instance": embedder_model_instance} } smart_scraper_graph = SmartScraperGraph( prompt="List me all the events, with the following fields: company_name, event_name, event_start_date, event_start_time, event_end_date, event_end_time, location, event_mode, event_category, third_party_redirect, no_of_days, time_in_hours, hosted_or_attending, refreshments_type, registration_available, registration_link", # also accepts a string with the already downloaded HTML code source="https://www.hmhco.com/event", config=graph_config ) result = smart_scraper_graph.run() print(result) ```
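A workaround commonly suggested for this exact error is to declare the model's context-window size explicitly via a `"model_tokens"` entry in the `llm` config, since the graph cannot infer it from a raw `model_instance`. The exact key name and token value below are assumptions -- verify them against the installed scrapegraphai version.

```python
# Hedged workaround sketch for the 'model_token' AttributeError: add an
# explicit "model_tokens" entry to the llm config. The value 4096 is an
# assumed context size for Llama-2-7b; adjust for the model actually used.

graph_config = {
    "llm": {
        "model_instance": None,   # placeholder for llm_model_instance above
        "model_tokens": 4096,     # assumed context size, not auto-detected
    },
    "embeddings": {
        "model_instance": None,   # placeholder for embedder_model_instance
    },
}

print("model_tokens" in graph_config["llm"])
```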
closed
2024-06-13T01:34:37Z
2024-09-26T08:48:15Z
https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/377
[ "bug" ]
Naman-Bhrgv
15
pytorch/vision
computer-vision
8,695
Pytorch build with rocm failed, execution_policy.h' file not found
### 🐛 Describe the bug FAILED: caffe2/CMakeFiles/torch_hip.dir/__/aten/src/ATen/native/hip/torch_hip_generated_CumprodKernel.hip.o /home/hmsjwzb/work/pytorch/build/caffe2/CMakeFiles/torch_hip.dir/__/aten/src/ATen/native/hip/torch_hip_generated_CumprodKernel.hip.o cd /home/hmsjwzb/work/pytorch/build/caffe2/CMakeFiles/torch_hip.dir/__/aten/src/ATen/native/hip && /home/hmsjwzb/miniconda3/bin/cmake -E make_directory /home/hmsjwzb/work/pytorch/build/caffe2/CMakeFiles/torch_hip.dir/__/aten/src/ATen/native/hip/. && /home/hmsjwzb/miniconda3/bin/cmake -D verbose:BOOL=OFF -D build_configuration:STRING=RELEASE -D generated_file:STRING=/home/hmsjwzb/work/pytorch/build/caffe2/CMakeFiles/torch_hip.dir/__/aten/src/ATen/native/hip/./torch_hip_generated_CumprodKernel.hip.o -P /home/hmsjwzb/work/pytorch/build/caffe2/CMakeFiles/torch_hip.dir/__/aten/src/ATen/native/hip/torch_hip_generated_CumprodKernel.hip.o.cmake In file included from /home/hmsjwzb/work/pytorch/aten/src/ATen/native/hip/CumprodKernel.hip:3: In file included from /home/hmsjwzb/work/pytorch/aten/src/ATen/core/TensorBase.h:6: In file included from /home/hmsjwzb/work/pytorch/c10/core/ScalarType.h:8: In file included from /home/hmsjwzb/work/pytorch/c10/util/Float8_e5m2.h:17: In file included from /home/hmsjwzb/work/pytorch/c10/util/Half.h:16: In file included from /home/hmsjwzb/work/pytorch/c10/util/complex.h:8: In file included from /usr/include/thrust/complex.h:1030: In file included from /usr/include/thrust/detail/complex/complex.inl:22: In file included from /usr/include/thrust/type_traits/is_trivially_relocatable.h:19: In file included from /usr/include/thrust/type_traits/is_contiguous_iterator.h:27: In file included from /usr/include/thrust/detail/type_traits/pointer_traits.h:23: In file included from /usr/include/thrust/iterator/iterator_traits.h:62: /usr/include/thrust/iterator/detail/device_system_tag.h:23:10: fatal error: 'thrust/system/__THRUST_DEVICE_SYSTEM_NAMESPACE/detail/execution_policy.h' file not found 
23 | #include __THRUST_DEVICE_SYSTEM_TAG_HEADER | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /usr/include/thrust/iterator/detail/device_system_tag.h:22:43: note: expanded from macro '__THRUST_DEVICE_SYSTEM_TAG_HEADER' 22 | #define __THRUST_DEVICE_SYSTEM_TAG_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/execution_policy.h> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <scratch space>:42:1: note: expanded from here 42 | <thrust/system/__THRUST_DEVICE_SYSTEM_NAMESPACE/detail/execution_policy.h> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1 error generated when compiling for host. failed to execute:/opt/rocm/llvm/bin/clang++ --offload-arch=gfx1100 --cuda-host-only -O3 -M -x hip /home/hmsjwzb/work/pytorch/aten/src/ATen/native/hip/CumprodKernel.hip -o "/home/hmsjwzb/work/pytorch/build/caffe2/CMakeFiles/torch_hip.dir/__/aten/src/ATen/native/hip/torch_hip_generated_CumprodKernel.hip.o.depend.pre" -fclang-abi-compat=17 -DUSE_NCCL -DUSE_ROCM -D__HIP_PLATFORM_AMD__ -DUSE_FLASH_ATTENTION -DUSE_MEM_EFF_ATTENTION -DUSE_C10D_NCCL -DTORCH_HIP_BUILD_MAIN_LIB -DROCM_VERSION=60202 -DTORCH_HIP_VERSION=602 -DONNX_ML=1 -DONNXIFI_ENABLE_EXT=1 -DONNX_NAMESPACE=onnx_torch -DHAVE_MMAP=1 -D_FILE_OFFSET_BITS=64 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DHAVE_MALLOC_USABLE_SIZE=1 -DUSE_EXTERNAL_MZCRC -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DFLASHATTENTION_DISABLE_ALIBI -D__HIP_PLATFORM_AMD__=1 -DUSE_PROF_API=1 -DAT_PER_OPERATOR_HEADERS -DUSE_DISTRIBUTED -DUSE_C10D_GLOO -DUSE_RPC -DUSE_TENSORPIPE -D__HIP_PLATFORM_AMD__ -DFMT_HEADER_ONLY=1 -fPIC -D__HIP_PLATFORM_AMD__=1 -DCUDA_HAS_FP16=1 -DUSE_ROCM -D__HIP_NO_HALF_OPERATORS__=1 -D__HIP_NO_HALF_CONVERSIONS__=1 -DTORCH_HIP_VERSION=602 -Wno-shift-count-negative -Wno-shift-count-overflow -Wno-duplicate-decl-specifier -DCAFFE2_USE_MIOPEN -DTHRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_HIP -std=c++17 -DHIPBLAS_V2 -DHIP_NEW_TYPE_ENUMS -fno-gpu-rdc -I/home/hmsjwzb/work/pytorch/build/aten/src 
-I/home/hmsjwzb/work/pytorch/aten/src -I/home/hmsjwzb/work/pytorch/build -I/home/hmsjwzb/work/pytorch -I/opt/rocm-6.2.2/include -I/home/hmsjwzb/work/pytorch/build/third_party/gloo -I/home/hmsjwzb/work/pytorch/cmake/../third_party/gloo -I/home/hmsjwzb/work/pytorch/cmake/../third_party/tensorpipe/third_party/libuv/include -I/home/hmsjwzb/work/pytorch/cmake/../third_party/googletest/googlemock/include -I/home/hmsjwzb/work/pytorch/cmake/../third_party/googletest/googletest/include -I/home/hmsjwzb/work/pytorch/third_party/protobuf/src -I/home/hmsjwzb/work/pytorch/third_party/XNNPACK/include -I/home/hmsjwzb/work/pytorch/cmake/../third_party/benchmark/include -I/home/hmsjwzb/work/pytorch/third_party/ittapi/include -I/home/hmsjwzb/work/pytorch/cmake/../third_party/eigen -I/home/hmsjwzb/work/pytorch/third_party/onnx -I/home/hmsjwzb/work/pytorch/build/third_party/onnx -I/home/hmsjwzb/work/pytorch/third_party/ideep/mkl-dnn/include/oneapi/dnnl -I/home/hmsjwzb/work/pytorch/third_party/ideep/include -I/home/hmsjwzb/work/pytorch/third_party/ideep/mkl-dnn/include/oneapi/dnnl -I/home/hmsjwzb/work/pytorch/nlohmann -I/home/hmsjwzb/work/pytorch/INTERFACE -I/home/hmsjwzb/work/pytorch/third_party/nlohmann/include -I/opt/rocm/include -I/opt/rocm/hcc/include -I/opt/rocm/rocblas/include -I/opt/rocm/hipsparse/include -I/opt/rocm/include/rccl/ -I/home/hmsjwzb/work/pytorch/aten/src/THH -I/home/hmsjwzb/work/pytorch/aten/src/ATen/hip -I/home/hmsjwzb/work/pytorch/aten/src/ATen/../../../third_party/composable_kernel/include -I/home/hmsjwzb/work/pytorch/aten/src/ATen/../../../third_party/composable_kernel/library/include -I/home/hmsjwzb/work/pytorch/third_party/fmt/include -I/home/hmsjwzb/work/pytorch/aten/src -I/home/hmsjwzb/work/pytorch/build/caffe2/aten/src -I/home/hmsjwzb/work/pytorch/build/aten/src -I/home/hmsjwzb/work/pytorch/aten/src -I/home/hmsjwzb/work/pytorch/aten/src/ATen/.. 
-I/home/hmsjwzb/work/pytorch/torch/include -I/opt/rocm-6.2.2/include -I/opt/rocm/include -I/home/hmsjwzb/work/pytorch/c10/hip/../.. -I/home/hmsjwzb/work/pytorch/build -I/home/hmsjwzb/work/pytorch/c10/../ -I/home/hmsjwzb/work/pytorch/build -I/home/hmsjwzb/work/pytorch/torch/csrc/api -I/home/hmsjwzb/work/pytorch/torch/csrc/api/include -I/home/hmsjwzb/work/pytorch/third_party/protobuf/src -I/opt/rocm-6.2.2/include -I/opt/rocm/include -I/opt/rocm-6.2.2/include -I/opt/rocm-6.2.2/include -I/opt/rocm-6.2.2/include -I/opt/rocm-6.2.2/include -I/opt/rocm-6.2.2/include -I/opt/rocm-6.2.2/include -I/opt/rocm-6.2.2/include/hiprand -I/opt/rocm-6.2.2/include -I/opt/rocm-6.2.2/include -I/opt/rocm-6.2.2/include -I/opt/rocm-6.2.2/include -I/opt/rocm-6.2.2/include -I/opt/rocm/include -I/home/hmsjwzb/work/pytorch/build/third_party/gloo/hip -I/home/hmsjwzb/miniconda3/include -I/home/hmsjwzb/work/pytorch/build/aten/src -I/home/hmsjwzb/work/pytorch/aten/src -I/home/hmsjwzb/work/pytorch/build -I/home/hmsjwzb/work/pytorch -I/opt/rocm-6.2.2/include -I/home/hmsjwzb/work/pytorch/build/third_party/gloo -I/home/hmsjwzb/work/pytorch/cmake/../third_party/gloo -I/home/hmsjwzb/work/pytorch/cmake/../third_party/tensorpipe/third_party/libuv/include -I/home/hmsjwzb/work/pytorch/cmake/../third_party/googletest/googlemock/include -I/home/hmsjwzb/work/pytorch/cmake/../third_party/googletest/googletest/include -I/home/hmsjwzb/work/pytorch/third_party/protobuf/src -I/home/hmsjwzb/work/pytorch/third_party/XNNPACK/include -I/home/hmsjwzb/work/pytorch/cmake/../third_party/benchmark/include -I/home/hmsjwzb/work/pytorch/third_party/ittapi/include -I/home/hmsjwzb/work/pytorch/cmake/../third_party/eigen -I/home/hmsjwzb/work/pytorch/third_party/onnx -I/home/hmsjwzb/work/pytorch/build/third_party/onnx -I/home/hmsjwzb/work/pytorch/third_party/ideep/mkl-dnn/include/oneapi/dnnl -I/home/hmsjwzb/work/pytorch/third_party/ideep/include -I/home/hmsjwzb/work/pytorch/nlohmann -I/home/hmsjwzb/work/pytorch/INTERFACE 
-I/home/hmsjwzb/work/pytorch/third_party/nlohmann/include CMake Error at torch_hip_generated_CumprodKernel.hip.o.cmake:146 (message): Error generating /home/hmsjwzb/work/pytorch/build/caffe2/CMakeFiles/torch_hip.dir/__/aten/src/ATen/native/hip/./torch_hip_generated_CumprodKernel.hip.o ninja: build stopped: subcommand failed. Building wheel torch-2.6.0a0+gitd8f22a1 -- Building version 2.6.0a0+gitd8f22a1 cmake --build . --target install --config Release ### Versions PyTorch version: 2.5.0+rocm6.2 Is debug build: False CUDA used to build PyTorch: N/A ROCM used to build PyTorch: 6.2.41133-dd7f95766 OS: Ubuntu 22.04.2 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: 18.1.8 (/home/hmsjwzb/code/llvm-project/clang 443e23eed24d9533566f189ef25154263756a36d) CMake version: version 3.26.4 Libc version: glibc-2.35 Python version: 3.10.15+ (heads/3.10:0c5fc272175, Sep 24 2024, 11:33:24) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-6.8.0-47-generic-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: Radeon RX 7900 XTX (gfx1100) Nvidia driver version: Could not collect cuDNN version: Could not collect HIP runtime version: 6.2.41133 MIOpen runtime version: 3.2.0 Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 32 On-line CPU(s) list: 0-31 Vendor ID: GenuineIntel Model name: 13th Gen Intel(R) Core(TM) i9-13900 CPU family: 6 Model: 183 Thread(s) per core: 2 Core(s) per socket: 24 Socket(s): 1 Stepping: 1 CPU max MHz: 5600.0000 CPU min MHz: 800.0000 BogoMIPS: 3993.60 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni 
pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities Virtualization: VT-x L1d cache: 896 KiB (24 instances) L1i cache: 1.3 MiB (24 instances) L2 cache: 32 MiB (12 instances) L3 cache: 36 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-31 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Reg file data sampling: Mitigation; Clear Register File Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==1.26.3 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] 
nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] pytorch-triton-rocm==3.1.0 [pip3] torch==2.5.0+rocm6.2 [pip3] torchaudio==2.5.0+rocm6.2 [pip3] torchvision==0.20.0+rocm6.2 [pip3] triton==3.1.0 [conda] magma-cuda121 2.6.1 1 pytorch cc @jeffdaily @jithunnair-amd
open
2024-10-24T10:41:05Z
2024-10-28T15:57:20Z
https://github.com/pytorch/vision/issues/8695
[ "module: rocm" ]
FlintWangacc
1
piskvorky/gensim
data-science
3,521
Where are the pre-trained doc2vec models for recent versions of Gensim?
closed
2024-03-31T03:42:38Z
2024-03-31T03:50:06Z
https://github.com/piskvorky/gensim/issues/3521
[]
jobs-git
1
Lightning-AI/pytorch-lightning
data-science
20,665
Logging Cause XLA Graph Recompile on Every Epochs
### Bug description I am using wandb logger on single TPU core, if I invoke `self.log` in `training_step` or `validation_step`, a XLA graph recompile will be triggered. The behavior is observed by setting `PT_XLA_DEBUG=1` and notice many `Compilation Cause: most likely user code trying to access tensor value before mark_step` in the log. I have attached a sample code for those two functions, with out any `self.log` the code runs fine. ### What version are you seeing the problem on? master ### How to reproduce the bug ```python def training_step(self, batch: tuple[torch.Tensor, torch.Tensor]) -> torch.Tensor: sample, target = batch pred = self(sample) loss = self.train_loss(pred, target) self.log("Training Loss", loss) return loss def validation_step(self, batch: tuple[torch.Tensor, torch.Tensor]) -> torch.Tensor: sample, target = batch pred = self(sample) loss = self.valid_loss(pred, target) self.log( "Validation Accuracy Top 1", self.valid_acc_top_1(pred, target) ) self.log( "Validation Accuracy Top 5", self.valid_acc_top_5(pred, target) ) self.log("Validation Loss", loss) return loss ``` ### Error messages and logs ``` INFO: GPU available: False, used: False INFO:lightning.pytorch.utilities.rank_zero:GPU available: False, used: False INFO: TPU available: True, using: 1 TPU cores INFO:lightning.pytorch.utilities.rank_zero:TPU available: True, using: 1 TPU cores INFO: HPU available: False, using: 0 HPUs INFO:lightning.pytorch.utilities.rank_zero:HPU available: False, using: 0 HPUs wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information. wandb: (1) Create a W&B account wandb: (2) Use an existing W&B account wandb: (3) Don't visualize my results wandb: Enter your choice: 2 wandb: You chose 'Use an existing W&B account' wandb: Logging into wandb.ai. 
(Learn how to deploy a W&B server locally: https://wandb.me/wandb-server) wandb: You can find your API key in your browser here: https://wandb.ai/authorize wandb: Paste an API key from your profile and hit enter, or press ctrl+c to quit: wandb: No netrc file found, creating one. wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc wandb: Currently logged in as: catalpa to https://api.wandb.ai/. Use `wandb login --relogin` to force relogin wandb: Tracking run with wandb version 0.19.8 wandb: Run data is saved locally in ./wandb/run-20250322_222633-ttri785i wandb: Run `wandb offline` to turn off syncing. wandb: Syncing run morning-feather-1 wandb: ⭐️ View project at https://wandb.ai/catalpa/Theia wandb: 🚀 View run at https://wandb.ai/catalpa/Theia/runs/ttri785i | Name | Type | Params | Mode ------------------------------------------------------------------- 0 | stem | Sequential | 38.7 K | train 1 | blocks | ModuleList | 12.1 M | train 2 | classifier | Sequential | 775 K | train 3 | train_loss | SoftTargetCrossEntropy | 0 | train 4 | valid_loss | CrossEntropyLoss | 0 | train 5 | valid_acc_top_1 | MulticlassAccuracy | 0 | train 6 | valid_acc_top_5 | MulticlassAccuracy | 0 | train ------------------------------------------------------------------- 12.9 M Trainable params 0 Non-trainable params 12.9 M Total params 51.709 Total estimated model params size (MB) 339 Modules in train mode 0 Modules in eval mode Compilation Analysis: ================================================================================ Compilation Analysis: Compilation Cause Compilation Analysis: most likely user code trying to access tensor value before mark_step Compilation Analysis: Graph Info: Compilation Analysis: Graph Hash: b9a9a96de5d2e35627c50c1849afaa51 Compilation Analysis: Number of Graph Inputs: 1 Compilation Analysis: Number of Graph Outputs: 1 Compilation Analysis: Python Frame Triggered Execution: Compilation Analysis: has_len_all_ranks 
(/usr/local/lib/python3.11/dist-packages/pytorch_lightning/utilities/data.py:105) Compilation Analysis: setup_data (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/fit_loop.py:265) Compilation Analysis: run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/fit_loop.py:208) Compilation Analysis: _run_stage (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1056) Compilation Analysis: _run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1012) Compilation Analysis: _fit_impl (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:599) Compilation Analysis: _call_and_handle_interrupt (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/call.py:48) Compilation Analysis: fit (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:561) Compilation Analysis: .......... Compilation Analysis: -------------------------------------------------------------------------------- Compilation Analysis: ================================================================================ Post Compilation Analysis: ================================================================================ Post Compilation Analysis: Graph input size: 0.000002 GB Post Compilation Analysis: Graph output size: 0.000002 GB Post Compilation Analysis: Aliased Input size: 0.000000 GB Post Compilation Analysis: Intermediate tensor size: 0.000000 GB Post Compilation Analysis: Compiled program size: 0.000025 GB Post Compilation Analysis: -------------------------------------------------------------------------------- Post Compilation Analysis: ================================================================================ Execution Analysis: ================================================================================ Execution Analysis: Execution Cause Execution Analysis: most likely user code trying to access tensor value before mark_step Execution Analysis: 
Graph Info: Execution Analysis: Graph Hash: b9a9a96de5d2e35627c50c1849afaa51 Execution Analysis: Number of Graph Inputs: 1 Execution Analysis: Number of Graph Outputs: 1 Execution Analysis: Python Frame Triggered Execution: Execution Analysis: has_len_all_ranks (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/utilities/data.py:105) Execution Analysis: setup_data (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/fit_loop.py:265) Execution Analysis: run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/fit_loop.py:208) Execution Analysis: _run_stage (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1056) Execution Analysis: _run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1012) Execution Analysis: _fit_impl (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:599) Execution Analysis: _call_and_handle_interrupt (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/call.py:48) Execution Analysis: fit (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:561) Execution Analysis: .......... 
Execution Analysis: -------------------------------------------------------------------------------- Execution Analysis: ================================================================================ Compilation Analysis: ================================================================================ Compilation Analysis: Compilation Cause Compilation Analysis: most likely user code trying to access tensor value before mark_step Compilation Analysis: Graph Info: Compilation Analysis: Graph Hash: adfea99482db8ca265e5f21e69b2412c Compilation Analysis: Number of Graph Inputs: 1 Compilation Analysis: Number of Graph Outputs: 1 Compilation Analysis: Python Frame Triggered Execution: Compilation Analysis: has_len_all_ranks (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/utilities/data.py:110) Compilation Analysis: setup_data (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/fit_loop.py:265) Compilation Analysis: run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/fit_loop.py:208) Compilation Analysis: _run_stage (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1056) Compilation Analysis: _run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1012) Compilation Analysis: _fit_impl (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:599) Compilation Analysis: _call_and_handle_interrupt (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/call.py:48) Compilation Analysis: fit (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:561) Compilation Analysis: .......... 
Compilation Analysis: -------------------------------------------------------------------------------- Compilation Analysis: ================================================================================ Post Compilation Analysis: ================================================================================ Post Compilation Analysis: Graph input size: 0.000002 GB Post Compilation Analysis: Graph output size: 0.000002 GB Post Compilation Analysis: Aliased Input size: 0.000000 GB Post Compilation Analysis: Intermediate tensor size: 0.000000 GB Post Compilation Analysis: Compiled program size: 0.000025 GB Post Compilation Analysis: -------------------------------------------------------------------------------- Post Compilation Analysis: ================================================================================ Execution Analysis: ================================================================================ Execution Analysis: Execution Cause Execution Analysis: most likely user code trying to access tensor value before mark_step Execution Analysis: Graph Info: Execution Analysis: Graph Hash: adfea99482db8ca265e5f21e69b2412c Execution Analysis: Number of Graph Inputs: 1 Execution Analysis: Number of Graph Outputs: 1 Execution Analysis: Python Frame Triggered Execution: Execution Analysis: has_len_all_ranks (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/utilities/data.py:110) Execution Analysis: setup_data (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/fit_loop.py:265) Execution Analysis: run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/fit_loop.py:208) Execution Analysis: _run_stage (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1056) Execution Analysis: _run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1012) Execution Analysis: _fit_impl (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:599) Execution Analysis: 
_call_and_handle_interrupt (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/call.py:48) Execution Analysis: fit (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:561) Execution Analysis: .......... Execution Analysis: -------------------------------------------------------------------------------- Execution Analysis: ================================================================================ Execution Analysis: ================================================================================ Execution Analysis: Execution Cause Execution Analysis: most likely user code trying to access tensor value before mark_step Execution Analysis: Graph Info: Execution Analysis: Graph Hash: b9a9a96de5d2e35627c50c1849afaa51 Execution Analysis: Number of Graph Inputs: 1 Execution Analysis: Number of Graph Outputs: 1 Execution Analysis: Python Frame Triggered Execution: Execution Analysis: has_len_all_ranks (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/utilities/data.py:105) Execution Analysis: setup_data (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/fit_loop.py:278) Execution Analysis: run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/fit_loop.py:208) Execution Analysis: _run_stage (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1056) Execution Analysis: _run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1012) Execution Analysis: _fit_impl (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:599) Execution Analysis: _call_and_handle_interrupt (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/call.py:48) Execution Analysis: fit (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:561) Execution Analysis: .......... 
Execution Analysis: -------------------------------------------------------------------------------- Execution Analysis: ================================================================================ Execution Analysis: ================================================================================ Execution Analysis: Execution Cause Execution Analysis: most likely user code trying to access tensor value before mark_step Execution Analysis: Graph Info: Execution Analysis: Graph Hash: adfea99482db8ca265e5f21e69b2412c Execution Analysis: Number of Graph Inputs: 1 Execution Analysis: Number of Graph Outputs: 1 Execution Analysis: Python Frame Triggered Execution: Execution Analysis: has_len_all_ranks (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/utilities/data.py:110) Execution Analysis: setup_data (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/fit_loop.py:278) Execution Analysis: run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/fit_loop.py:208) Execution Analysis: _run_stage (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1056) Execution Analysis: _run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1012) Execution Analysis: _fit_impl (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:599) Execution Analysis: _call_and_handle_interrupt (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/call.py:48) Execution Analysis: fit (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:561) Execution Analysis: .......... 
Execution Analysis: -------------------------------------------------------------------------------- Execution Analysis: ================================================================================ Execution Analysis: ================================================================================ Execution Analysis: Execution Cause Execution Analysis: most likely user code trying to access tensor value before mark_step Execution Analysis: Graph Info: Execution Analysis: Graph Hash: b9a9a96de5d2e35627c50c1849afaa51 Execution Analysis: Number of Graph Inputs: 1 Execution Analysis: Number of Graph Outputs: 1 Execution Analysis: Python Frame Triggered Execution: Execution Analysis: has_len_all_ranks (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/utilities/data.py:105) Execution Analysis: setup_data (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/evaluation_loop.py:202) Execution Analysis: on_run_start (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/fit_loop.py:414) Execution Analysis: run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/fit_loop.py:212) Execution Analysis: _run_stage (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1056) Execution Analysis: _run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1012) Execution Analysis: _fit_impl (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:599) Execution Analysis: _call_and_handle_interrupt (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/call.py:48) Execution Analysis: .......... 
Execution Analysis: -------------------------------------------------------------------------------- Execution Analysis: ================================================================================ Execution Analysis: ================================================================================ Execution Analysis: Execution Cause Execution Analysis: most likely user code trying to access tensor value before mark_step Execution Analysis: Graph Info: Execution Analysis: Graph Hash: adfea99482db8ca265e5f21e69b2412c Execution Analysis: Number of Graph Inputs: 1 Execution Analysis: Number of Graph Outputs: 1 Execution Analysis: Python Frame Triggered Execution: Execution Analysis: has_len_all_ranks (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/utilities/data.py:110) Execution Analysis: setup_data (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/evaluation_loop.py:202) Execution Analysis: on_run_start (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/fit_loop.py:414) Execution Analysis: run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/fit_loop.py:212) Execution Analysis: _run_stage (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1056) Execution Analysis: _run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1012) Execution Analysis: _fit_impl (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:599) Execution Analysis: _call_and_handle_interrupt (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/call.py:48) Execution Analysis: .......... 
Execution Analysis: --------------------------------------------------------------------------------
Execution Analysis: ================================================================================
Epoch 0: 0% 0/73 [00:00<?, ?it/s]
Compilation Analysis: ================================================================================
Compilation Analysis: Compilation Cause
Compilation Analysis: user mark_step
Compilation Analysis: Graph Info:
Compilation Analysis: Graph Hash: fbba2ba25aff1cda058689b761890f05
Compilation Analysis: Number of Graph Inputs: 323
Compilation Analysis: Number of Graph Outputs: 1138
Compilation Analysis: Python Frame Triggered Execution:
Compilation Analysis: mark_step (/usr/local/lib/python3.11/dist-packages/torch_xla/core/xla_model.py:1061)
Compilation Analysis: optimizer_step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/plugins/precision/xla.py:75)
Compilation Analysis: optimizer_step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/strategies/strategy.py:239)
Compilation Analysis: step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/core/optimizer.py:154)
Compilation Analysis: optimizer_step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/core/module.py:1302)
Compilation Analysis: _call_lightning_module_hook (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/call.py:176)
Compilation Analysis: _optimizer_step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/optimization/automatic.py:270)
Compilation Analysis: run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/optimization/automatic.py:192)
Compilation Analysis: ..........
Compilation Analysis: --------------------------------------------------------------------------------
Compilation Analysis: ================================================================================
Post Compilation Analysis: ================================================================================
Post Compilation Analysis: Graph input size: 0.072663 GB
Post Compilation Analysis: Graph output size: 0.222699 GB
Post Compilation Analysis: Aliased Input size: 0.048717 GB
Post Compilation Analysis: Intermediate tensor size: 4.298608 GB
Post Compilation Analysis: Compiled program size: 0.093338 GB
Post Compilation Analysis: --------------------------------------------------------------------------------
Post Compilation Analysis: ================================================================================
Execution Analysis: ================================================================================
Execution Analysis: Execution Cause
Execution Analysis: user mark_step
Execution Analysis: Graph Info:
Execution Analysis: Graph Hash: fbba2ba25aff1cda058689b761890f05
Execution Analysis: Number of Graph Inputs: 323
Execution Analysis: Number of Graph Outputs: 1138
Execution Analysis: Python Frame Triggered Execution:
Execution Analysis: mark_step (/usr/local/lib/python3.11/dist-packages/torch_xla/core/xla_model.py:1061)
Execution Analysis: optimizer_step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/plugins/precision/xla.py:75)
Execution Analysis: optimizer_step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/strategies/strategy.py:239)
Execution Analysis: step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/core/optimizer.py:154)
Execution Analysis: optimizer_step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/core/module.py:1302)
Execution Analysis: _call_lightning_module_hook (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/call.py:176)
Execution Analysis: _optimizer_step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/optimization/automatic.py:270)
Execution Analysis: run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/optimization/automatic.py:192)
Execution Analysis: ..........
Execution Analysis: --------------------------------------------------------------------------------
Execution Analysis: ================================================================================
pt-xla-profiler: TransferFromDeviceTime too frequent: 6 counts during 1 steps
Epoch 0: 1% 1/73 [01:37<1:56:46, 97.31s/it, v_num=785i]
Compilation Analysis: ================================================================================
Compilation Analysis: Compilation Cause
Compilation Analysis: user mark_step
Compilation Analysis: Graph Info:
Compilation Analysis: Graph Hash: e61678c26ffda4f9f726a4af30dae523
Compilation Analysis: Number of Graph Inputs: 830
Compilation Analysis: Number of Graph Outputs: 1135
Compilation Analysis: Python Frame Triggered Execution:
Compilation Analysis: mark_step (/usr/local/lib/python3.11/dist-packages/torch_xla/core/xla_model.py:1061)
Compilation Analysis: optimizer_step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/plugins/precision/xla.py:75)
Compilation Analysis: optimizer_step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/strategies/strategy.py:239)
Compilation Analysis: step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/core/optimizer.py:154)
Compilation Analysis: optimizer_step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/core/module.py:1302)
Compilation Analysis: _call_lightning_module_hook (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/call.py:176)
Compilation Analysis: _optimizer_step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/optimization/automatic.py:270)
Compilation Analysis: run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/optimization/automatic.py:192)
Compilation Analysis: ..........
Compilation Analysis: --------------------------------------------------------------------------------
Compilation Analysis: ================================================================================
Post Compilation Analysis: ================================================================================
Post Compilation Analysis: Graph input size: 0.169890 GB
Post Compilation Analysis: Graph output size: 0.222694 GB
Post Compilation Analysis: Aliased Input size: 0.145945 GB
Post Compilation Analysis: Intermediate tensor size: 4.341870 GB
Post Compilation Analysis: Compiled program size: 0.095390 GB
Post Compilation Analysis: --------------------------------------------------------------------------------
Post Compilation Analysis: ================================================================================
Execution Analysis: ================================================================================
Execution Analysis: Execution Cause
Execution Analysis: user mark_step
Execution Analysis: Graph Info:
Execution Analysis: Graph Hash: e61678c26ffda4f9f726a4af30dae523
Execution Analysis: Number of Graph Inputs: 830
Execution Analysis: Number of Graph Outputs: 1135
Execution Analysis: Python Frame Triggered Execution:
Execution Analysis: mark_step (/usr/local/lib/python3.11/dist-packages/torch_xla/core/xla_model.py:1061)
Execution Analysis: optimizer_step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/plugins/precision/xla.py:75)
Execution Analysis: optimizer_step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/strategies/strategy.py:239)
Execution Analysis: step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/core/optimizer.py:154)
Execution Analysis: optimizer_step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/core/module.py:1302)
Execution Analysis: _call_lightning_module_hook (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/call.py:176)
Execution Analysis: _optimizer_step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/optimization/automatic.py:270)
Execution Analysis: run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/optimization/automatic.py:192)
Execution Analysis: ..........
Execution Analysis: --------------------------------------------------------------------------------
Execution Analysis: ================================================================================
pt-xla-profiler: TransferFromDeviceTime too frequent: 6 counts during 2 steps
Epoch 0: 3% 2/73 [03:17<1:56:50, 98.75s/it, v_num=785i]
SKIPPING TO THE END OF THE EPOCH
Epoch 0: 99% 72/73 [04:04<00:03, 3.40s/it, v_num=785i]
Execution Analysis: ================================================================================
Execution Analysis: Execution Cause
Execution Analysis: user mark_step
Execution Analysis: Graph Info:
Execution Analysis: Graph Hash: e61678c26ffda4f9f726a4af30dae523
Execution Analysis: Number of Graph Inputs: 830
Execution Analysis: Number of Graph Outputs: 1135
Execution Analysis: Python Frame Triggered Execution:
Execution Analysis: mark_step (/usr/local/lib/python3.11/dist-packages/torch_xla/core/xla_model.py:1061)
Execution Analysis: optimizer_step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/plugins/precision/xla.py:75)
Execution Analysis: optimizer_step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/strategies/strategy.py:239)
Execution Analysis: step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/core/optimizer.py:154)
Execution Analysis: optimizer_step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/core/module.py:1302)
Execution Analysis: _call_lightning_module_hook (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/call.py:176)
Execution Analysis: _optimizer_step (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/optimization/automatic.py:270)
Execution Analysis: run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/optimization/automatic.py:192)
Execution Analysis: ..........
Execution Analysis: --------------------------------------------------------------------------------
Execution Analysis: ================================================================================
Epoch 0: 100% 73/73 [04:05<00:00, 3.36s/it, v_num=785i]
Validation: | | 0/? [00:00<?, ?it/s]
Validation: 0% 0/31 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/31 [00:00<?, ?it/s]
Validation DataLoader 0: 3% 1/31 [00:01<00:39, 1.33s/it]
Validation DataLoader 0: 6% 2/31 [00:01<00:27, 1.05it/s]
Validation DataLoader 0: 10% 3/31 [00:02<00:21, 1.28it/s]
Validation DataLoader 0: 13% 4/31 [00:02<00:19, 1.41it/s]
Validation DataLoader 0: 16% 5/31 [00:03<00:16, 1.55it/s]
Validation DataLoader 0: 19% 6/31 [00:03<00:15, 1.66it/s]
Validation DataLoader 0: 23% 7/31 [00:04<00:13, 1.75it/s]
Validation DataLoader 0: 26% 8/31 [00:04<00:12, 1.84it/s]
Validation DataLoader 0: 29% 9/31 [00:04<00:11, 1.94it/s]
Validation DataLoader 0: 32% 10/31 [00:05<00:10, 2.00it/s]
Validation DataLoader 0: 35% 11/31 [00:05<00:09, 2.06it/s]
Validation DataLoader 0: 39% 12/31 [00:05<00:08, 2.13it/s]
Validation DataLoader 0: 42% 13/31 [00:05<00:08, 2.19it/s]
Validation DataLoader 0: 45% 14/31 [00:06<00:07, 2.22it/s]
Validation DataLoader 0: 48% 15/31 [00:06<00:07, 2.26it/s]
Validation DataLoader 0: 52% 16/31 [00:06<00:06, 2.31it/s]
Validation DataLoader 0: 55% 17/31 [00:07<00:05, 2.35it/s]
Validation DataLoader 0: 58% 18/31 [00:07<00:05, 2.37it/s]
Validation DataLoader 0: 61% 19/31 [00:07<00:04, 2.40it/s]
Validation DataLoader 0: 65% 20/31 [00:08<00:04, 2.44it/s]
Validation DataLoader 0: 68% 21/31 [00:08<00:04, 2.47it/s]
Validation DataLoader 0: 71% 22/31 [00:08<00:03, 2.50it/s]
Validation DataLoader 0: 74% 23/31 [00:09<00:03, 2.52it/s]
Validation DataLoader 0: 77% 24/31 [00:09<00:02, 2.54it/s]
Validation DataLoader 0: 81% 25/31 [00:09<00:02, 2.57it/s]
Validation DataLoader 0: 84% 26/31 [00:10<00:01, 2.59it/s]
Validation DataLoader 0: 87% 27/31 [00:10<00:01, 2.61it/s]
Validation DataLoader 0: 90% 28/31 [00:10<00:01, 2.64it/s]
Validation DataLoader 0: 94% 29/31 [00:10<00:00, 2.65it/s]
Validation DataLoader 0: 97% 30/31 [00:11<00:00, 2.65it/s]
Validation DataLoader 0: 100% 31/31 [00:11<00:00, 2.64it/s]
Compilation Analysis: ================================================================================
Compilation Analysis: Compilation Cause
Compilation Analysis: most likely user code trying to access tensor value before mark_step
Compilation Analysis: Graph Info:
Compilation Analysis: Graph Hash: 58a97574047f2c30444efbb2217f260d
Compilation Analysis: Number of Graph Inputs: 360
Compilation Analysis: Number of Graph Outputs: 1
Compilation Analysis: Python Frame Triggered Execution:
Compilation Analysis: to_item (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:134)
Compilation Analysis: <dictcomp> (/usr/local/lib/python3.11/dist-packages/lightning_utilities/core/apply_func.py:72)
Compilation Analysis: apply_to_collection (/usr/local/lib/python3.11/dist-packages/lightning_utilities/core/apply_func.py:72)
Compilation Analysis: convert_tensors_to_scalars (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:136)
Compilation Analysis: log_metrics (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:106)
Compilation Analysis: log_eval_end_metrics (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:151)
Compilation Analysis: on_run_end (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/evaluation_loop.py:306)
Compilation Analysis: run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/evaluation_loop.py:152)
Compilation Analysis: ..........
Compilation Analysis: --------------------------------------------------------------------------------
Compilation Analysis: ================================================================================
Post Compilation Analysis: ================================================================================
Post Compilation Analysis: Graph input size: 2.256581 GB
Post Compilation Analysis: Graph output size: 0.000002 GB
Post Compilation Analysis: Aliased Input size: 0.000000 GB
Post Compilation Analysis: Intermediate tensor size: 0.905377 GB
Post Compilation Analysis: Compiled program size: 0.043901 GB
Post Compilation Analysis: --------------------------------------------------------------------------------
Post Compilation Analysis: ================================================================================
Execution Analysis: ================================================================================
Execution Analysis: Execution Cause
Execution Analysis: most likely user code trying to access tensor value before mark_step
Execution Analysis: Graph Info:
Execution Analysis: Graph Hash: 58a97574047f2c30444efbb2217f260d
Execution Analysis: Number of Graph Inputs: 360
Execution Analysis: Number of Graph Outputs: 1
Execution Analysis: Python Frame Triggered Execution:
Execution Analysis: to_item (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:134)
Execution Analysis: <dictcomp> (/usr/local/lib/python3.11/dist-packages/lightning_utilities/core/apply_func.py:72)
Execution Analysis: apply_to_collection (/usr/local/lib/python3.11/dist-packages/lightning_utilities/core/apply_func.py:72)
Execution Analysis: convert_tensors_to_scalars (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:136)
Execution Analysis: log_metrics (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:106)
Execution Analysis: log_eval_end_metrics (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:151)
Execution Analysis: on_run_end (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/evaluation_loop.py:306)
Execution Analysis: run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/evaluation_loop.py:152)
Execution Analysis: ..........
Execution Analysis: --------------------------------------------------------------------------------
Execution Analysis: ================================================================================
Compilation Analysis: ================================================================================
Compilation Analysis: Compilation Cause
Compilation Analysis: most likely user code trying to access tensor value before mark_step
Compilation Analysis: Graph Info:
Compilation Analysis: Graph Hash: 5c3d7a91c419c821771bf8f85eeb7e0a
Compilation Analysis: Number of Graph Inputs: 360
Compilation Analysis: Number of Graph Outputs: 1
Compilation Analysis: Python Frame Triggered Execution:
Compilation Analysis: to_item (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:134)
Compilation Analysis: <dictcomp> (/usr/local/lib/python3.11/dist-packages/lightning_utilities/core/apply_func.py:72)
Compilation Analysis: apply_to_collection (/usr/local/lib/python3.11/dist-packages/lightning_utilities/core/apply_func.py:72)
Compilation Analysis: convert_tensors_to_scalars (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:136)
Compilation Analysis: log_metrics (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:106)
Compilation Analysis: log_eval_end_metrics (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:151)
Compilation Analysis: on_run_end (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/evaluation_loop.py:306)
Compilation Analysis: run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/evaluation_loop.py:152)
Compilation Analysis: ..........
Compilation Analysis: --------------------------------------------------------------------------------
Compilation Analysis: ================================================================================
Post Compilation Analysis: ================================================================================
Post Compilation Analysis: Graph input size: 2.256593 GB
Post Compilation Analysis: Graph output size: 0.000002 GB
Post Compilation Analysis: Aliased Input size: 0.000000 GB
Post Compilation Analysis: Intermediate tensor size: 0.909032 GB
Post Compilation Analysis: Compiled program size: 0.044282 GB
Post Compilation Analysis: --------------------------------------------------------------------------------
Post Compilation Analysis: ================================================================================
Execution Analysis: ================================================================================
Execution Analysis: Execution Cause
Execution Analysis: most likely user code trying to access tensor value before mark_step
Execution Analysis: Graph Info:
Execution Analysis: Graph Hash: 5c3d7a91c419c821771bf8f85eeb7e0a
Execution Analysis: Number of Graph Inputs: 360
Execution Analysis: Number of Graph Outputs: 1
Execution Analysis: Python Frame Triggered Execution:
Execution Analysis: to_item (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:134)
Execution Analysis: <dictcomp> (/usr/local/lib/python3.11/dist-packages/lightning_utilities/core/apply_func.py:72)
Execution Analysis: apply_to_collection (/usr/local/lib/python3.11/dist-packages/lightning_utilities/core/apply_func.py:72)
Execution Analysis: convert_tensors_to_scalars (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:136)
Execution Analysis: log_metrics (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:106)
Execution Analysis: log_eval_end_metrics (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:151)
Execution Analysis: on_run_end (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/evaluation_loop.py:306)
Execution Analysis: run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/evaluation_loop.py:152)
Execution Analysis: ..........
Execution Analysis: --------------------------------------------------------------------------------
Execution Analysis: ================================================================================
Compilation Analysis: ================================================================================
Compilation Analysis: Compilation Cause
Compilation Analysis: most likely user code trying to access tensor value before mark_step
Compilation Analysis: Graph Info:
Compilation Analysis: Graph Hash: 56dedd005204d85a65e1f9272d88cc0b
Compilation Analysis: Number of Graph Inputs: 358
Compilation Analysis: Number of Graph Outputs: 1
Compilation Analysis: Python Frame Triggered Execution:
Compilation Analysis: to_item (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:134)
Compilation Analysis: <dictcomp> (/usr/local/lib/python3.11/dist-packages/lightning_utilities/core/apply_func.py:72)
Compilation Analysis: apply_to_collection (/usr/local/lib/python3.11/dist-packages/lightning_utilities/core/apply_func.py:72)
Compilation Analysis: convert_tensors_to_scalars (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:136)
Compilation Analysis: log_metrics (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:106)
Compilation Analysis: log_eval_end_metrics (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:151)
Compilation Analysis: on_run_end (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/evaluation_loop.py:306)
Compilation Analysis: run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/evaluation_loop.py:152)
Compilation Analysis: ..........
Compilation Analysis: --------------------------------------------------------------------------------
Compilation Analysis: ================================================================================
Post Compilation Analysis: ================================================================================
Post Compilation Analysis: Graph input size: 2.256577 GB
Post Compilation Analysis: Graph output size: 0.000002 GB
Post Compilation Analysis: Aliased Input size: 0.000000 GB
Post Compilation Analysis: Intermediate tensor size: 0.904372 GB
Post Compilation Analysis: Compiled program size: 0.043720 GB
Post Compilation Analysis: --------------------------------------------------------------------------------
Post Compilation Analysis: ================================================================================
Execution Analysis: ================================================================================
Execution Analysis: Execution Cause
Execution Analysis: most likely user code trying to access tensor value before mark_step
Execution Analysis: Graph Info:
Execution Analysis: Graph Hash: 56dedd005204d85a65e1f9272d88cc0b
Execution Analysis: Number of Graph Inputs: 358
Execution Analysis: Number of Graph Outputs: 1
Execution Analysis: Python Frame Triggered Execution:
Execution Analysis: to_item (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:134)
Execution Analysis: <dictcomp> (/usr/local/lib/python3.11/dist-packages/lightning_utilities/core/apply_func.py:72)
Execution Analysis: apply_to_collection (/usr/local/lib/python3.11/dist-packages/lightning_utilities/core/apply_func.py:72)
Execution Analysis: convert_tensors_to_scalars (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:136)
Execution Analysis: log_metrics (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:106)
Execution Analysis: log_eval_end_metrics (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:151)
Execution Analysis: on_run_end (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/evaluation_loop.py:306)
Execution Analysis: run (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/evaluation_loop.py:152)
Execution Analysis: ..........
Execution Analysis: --------------------------------------------------------------------------------
Execution Analysis: ================================================================================
Epoch 1: 0% 0/73 [00:00<?, ?it/s, v_num=785i]INFO: Detected KeyboardInterrupt, attempting graceful shutdown ...
INFO:lightning.pytorch.utilities.rank_zero: Detected KeyboardInterrupt, attempting graceful shutdown ...
Compilation Analysis: ================================================================================
Compilation Analysis: Compilation Cause
Compilation Analysis: most likely user code trying to access tensor value before mark_step
Compilation Analysis: Graph Info:
Compilation Analysis: Graph Hash: a0fddc367f604e909016927c79f693e5
Compilation Analysis: Number of Graph Inputs: 316
Compilation Analysis: Number of Graph Outputs: 1
Compilation Analysis: Python Frame Triggered Execution:
Compilation Analysis: batch_to (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:104)
Compilation Analysis: apply_to_collection (/usr/local/lib/python3.11/dist-packages/lightning_utilities/core/apply_func.py:66)
Compilation Analysis: move_data_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:110)
Compilation Analysis: _optimizer_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/optimizer.py:41)
Compilation Analysis: _optimizers_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/optimizer.py:27)
Compilation Analysis: teardown (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/strategies/strategy.py:532)
Compilation Analysis: teardown (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/strategies/single_xla.py:121)
Compilation Analysis: _teardown (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1035)
Compilation Analysis: ..........
Compilation Analysis: --------------------------------------------------------------------------------
Compilation Analysis: ================================================================================
Post Compilation Analysis: ================================================================================
Post Compilation Analysis: Graph input size: 0.072671 GB
Post Compilation Analysis: Graph output size: 0.000018 GB
Post Compilation Analysis: Aliased Input size: 0.000000 GB
Post Compilation Analysis: Intermediate tensor size: 4.321018 GB
Post Compilation Analysis: Compiled program size: 0.084074 GB
Post Compilation Analysis: --------------------------------------------------------------------------------
Post Compilation Analysis: ================================================================================
Execution Analysis: ================================================================================
Execution Analysis: Execution Cause
Execution Analysis: most likely user code trying to access tensor value before mark_step
Execution Analysis: Graph Info:
Execution Analysis: Graph Hash: a0fddc367f604e909016927c79f693e5
Execution Analysis: Number of Graph Inputs: 316
Execution Analysis: Number of Graph Outputs: 1
Execution Analysis: Python Frame Triggered Execution:
Execution Analysis: batch_to (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:104)
Execution Analysis: apply_to_collection (/usr/local/lib/python3.11/dist-packages/lightning_utilities/core/apply_func.py:66)
Execution Analysis: move_data_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:110)
Execution Analysis: _optimizer_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/optimizer.py:41)
Execution Analysis: _optimizers_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/optimizer.py:27)
Execution Analysis: teardown (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/strategies/strategy.py:532)
Execution Analysis: teardown (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/strategies/single_xla.py:121)
Execution Analysis: _teardown (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1035)
Execution Analysis: ..........
Execution Analysis: --------------------------------------------------------------------------------
Execution Analysis: ================================================================================
Compilation Analysis: ================================================================================
Compilation Analysis: Compilation Cause
Compilation Analysis: most likely user code trying to access tensor value before mark_step
Compilation Analysis: Graph Info:
Compilation Analysis: Graph Hash: 49ba580755d1d589f2d6554e98e81d23
Compilation Analysis: Number of Graph Inputs: 316
Compilation Analysis: Number of Graph Outputs: 1
Compilation Analysis: Python Frame Triggered Execution:
Compilation Analysis: batch_to (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:104)
Compilation Analysis: apply_to_collection (/usr/local/lib/python3.11/dist-packages/lightning_utilities/core/apply_func.py:66)
Compilation Analysis: move_data_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:110)
Compilation Analysis: _optimizer_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/optimizer.py:41)
Compilation Analysis: _optimizers_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/optimizer.py:27)
Compilation Analysis: teardown (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/strategies/strategy.py:532)
Compilation Analysis: teardown (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/strategies/single_xla.py:121)
Compilation Analysis: _teardown (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1035)
Compilation Analysis: ..........
Compilation Analysis: --------------------------------------------------------------------------------
Compilation Analysis: ================================================================================
Post Compilation Analysis: ================================================================================
Post Compilation Analysis: Graph input size: 0.072672 GB
Post Compilation Analysis: Graph output size: 0.000018 GB
Post Compilation Analysis: Aliased Input size: 0.000000 GB
Post Compilation Analysis: Intermediate tensor size: 4.321048 GB
Post Compilation Analysis: Compiled program size: 0.084075 GB
Post Compilation Analysis: --------------------------------------------------------------------------------
Post Compilation Analysis: ================================================================================
Execution Analysis: ================================================================================
Execution Analysis: Execution Cause
Execution Analysis: most likely user code trying to access tensor value before mark_step
Execution Analysis: Graph Info:
Execution Analysis: Graph Hash: 49ba580755d1d589f2d6554e98e81d23
Execution Analysis: Number of Graph Inputs: 316
Execution Analysis: Number of Graph Outputs: 1
Execution Analysis: Python Frame Triggered Execution:
Execution Analysis: batch_to (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:104)
Execution Analysis: apply_to_collection (/usr/local/lib/python3.11/dist-packages/lightning_utilities/core/apply_func.py:66)
Execution Analysis: move_data_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:110)
Execution Analysis: _optimizer_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/optimizer.py:41)
Execution Analysis: _optimizers_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/optimizer.py:27)
Execution Analysis: teardown (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/strategies/strategy.py:532)
Execution Analysis: teardown (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/strategies/single_xla.py:121)
Execution Analysis: _teardown (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1035)
Execution Analysis: ..........
Execution Analysis: --------------------------------------------------------------------------------
Execution Analysis: ================================================================================
Compilation Analysis: ================================================================================
Compilation Analysis: Compilation Cause
Compilation Analysis: most likely user code trying to access tensor value before mark_step
Compilation Analysis: Graph Info:
Compilation Analysis: Graph Hash: bd36ce82dec854af372a639cd429280e
Compilation Analysis: Number of Graph Inputs: 316
Compilation Analysis: Number of Graph Outputs: 1
Compilation Analysis: Python Frame Triggered Execution:
Compilation Analysis: batch_to (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:104)
Compilation Analysis: apply_to_collection (/usr/local/lib/python3.11/dist-packages/lightning_utilities/core/apply_func.py:66)
Compilation Analysis: move_data_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:110)
Compilation Analysis: _optimizer_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/optimizer.py:41)
Compilation Analysis: _optimizers_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/optimizer.py:27)
Compilation Analysis: teardown (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/strategies/strategy.py:532)
Compilation Analysis: teardown (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/strategies/single_xla.py:121)
Compilation Analysis: _teardown (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1035)
Compilation Analysis: ..........
Compilation Analysis: --------------------------------------------------------------------------------
Compilation Analysis: ================================================================================
Post Compilation Analysis: ================================================================================
Post Compilation Analysis: Graph input size: 0.072928 GB
Post Compilation Analysis: Graph output size: 0.000276 GB
Post Compilation Analysis: Aliased Input size: 0.000000 GB
Post Compilation Analysis: Intermediate tensor size: 4.321064 GB
Post Compilation Analysis: Compiled program size: 0.084111 GB
Post Compilation Analysis: --------------------------------------------------------------------------------
Post Compilation Analysis: ================================================================================
Execution Analysis: ================================================================================
Execution Analysis: Execution Cause
Execution Analysis: most likely user code trying to access tensor value before mark_step
Execution Analysis: Graph Info:
Execution Analysis: Graph Hash: bd36ce82dec854af372a639cd429280e
Execution Analysis: Number of Graph Inputs: 316
Execution Analysis: Number of Graph Outputs: 1
Execution Analysis: Python Frame Triggered Execution:
Execution Analysis: batch_to (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:104)
Execution Analysis: apply_to_collection (/usr/local/lib/python3.11/dist-packages/lightning_utilities/core/apply_func.py:66)
Execution Analysis: move_data_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:110)
Execution Analysis: _optimizer_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/optimizer.py:41)
Execution Analysis: _optimizers_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/optimizer.py:27)
Execution Analysis: teardown (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/strategies/strategy.py:532)
Execution Analysis: teardown (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/strategies/single_xla.py:121)
Execution Analysis: _teardown (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1035)
Execution Analysis: ..........
Execution Analysis: --------------------------------------------------------------------------------
Execution Analysis: ================================================================================
Compilation Analysis: ================================================================================
Compilation Analysis: Compilation Cause
Compilation Analysis: most likely user code trying to access tensor value before mark_step
Compilation Analysis: Graph Info:
Compilation Analysis: Graph Hash: 89e1a79ffb6333962ca03038f475745d
Compilation Analysis: Number of Graph Inputs: 316
Compilation Analysis: Number of Graph Outputs: 1
Compilation Analysis: Python Frame Triggered Execution:
Compilation Analysis: batch_to (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:104)
Compilation Analysis: apply_to_collection (/usr/local/lib/python3.11/dist-packages/lightning_utilities/core/apply_func.py:66)
Compilation Analysis: move_data_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/apply_func.py:110)
Compilation Analysis: _optimizer_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/optimizer.py:41)
Compilation Analysis: _optimizers_to_device (/usr/local/lib/python3.11/dist-packages/lightning_fabric/utilities/optimizer.py:27)
Compilation Analysis: teardown
(/usr/local/lib/python3.11/dist-packages/pytorch_lightning/strategies/strategy.py:532) Compilation Analysis: teardown (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/strategies/single_xla.py:121) Compilation Analysis: _teardown (/usr/local/lib/python3.11/dist-packages/pytorch_lightning/trainer/trainer.py:1035) Compilation Analysis: .......... Compilation Analysis: -------------------------------------------------------------------------------- Compilation Analysis: ================================================================================ MANY COMPILATIONS FOLLOW ``` ### Environment <details> <summary>Current environment</summary> ``` #- PyTorch Lightning Version (e.g., 2.5.0): 2.5.1 #- PyTorch Version (e.g., 2.5): 2.6 #- Python version (e.g., 3.12): 3.11 #- OS (e.g., Linux): Linux #- CUDA/cuDNN version: N/A #- GPU models and configuration: Colab TPU v2-8 #- How you installed Lightning(`conda`, `pip`, source): pip ``` </details> ### More info _No response_
closed
2025-03-22T22:45:34Z
2025-03-24T03:32:26Z
https://github.com/Lightning-AI/pytorch-lightning/issues/20665
[ "bug", "needs triage", "ver: 2.5.x" ]
catalpaaa
1
google-deepmind/sonnet
tensorflow
88
Too limited documentation about DeepRNN
Dear Deepminder, Since there is too limited documentation of DeepRNN, could you please tell me what's wrong with my following way of using DeepRNN? I would guess it is a typical useage, however it always complains: ``` import tensorflow as tf import sonnet as snt class Cell(snt.RNNCore): def __init__(self, num_channels, spatial_size, filter_size = 3, name = "cell"): """ Args: num_channels: the number of output channels in the layer. spatial_size: spatial size of the input, [H, W] filter_size: the shape of the each convolution filter. """ super(Cell, self).__init__(name = name) self._num_channels = num_channels self._spatial_size = spatial_size self._state_size = tf.TensorShape(spatial_size + [num_channels]) self._output_size = tf.TensorShape(spatial_size + [num_channels]) with self._enter_variable_scope(): self._conv2d = snt.Conv2D(num_channels, [filter_size, filter_size]) def inital_state(self, batch_size, spatial_size, state_initializer = tf.zeros_initializer(), dtype = tf.float32): return state_initializer([batch_size] + self._spatial_size + [self._num_channels], dtype = dtype) def _build(self, inputs, state): """ Basic recurrent network cell, with 2D convolution connections. Args: inputs: input Tensor, 4D, batch x height x width x channels. state: state Tensor, 4D, batch x height x width x channels. Returns: a tuple of tensors representing output and the new state. 
""" inputs.get_shape().assert_has_rank(4) state.get_shape().assert_has_rank(4) inputs_h = tf.concat(axis = 3, values = [inputs, state]) new_h = self._conv2d(inputs_h) return new_h, new_h @property def state_size(self): return self._state_size @property def output_size(self): return self._output_size def test(): import numpy as np t = tf.constant(np.ones([4, 8, 32, 32, 3]), tf.float32) # batch_size = 4 # sequence_length = 8 # spatial_size = [32, 32] # num_channels = 3 c1 = Cell(4, [32, 32]) c2 = Cell(8, [32, 32]) rnn = snt.DeepRNN([c1, c2], skip_connections = True, concat_final_output_if_skip = False) initial_state = tuple([c1.initial_state(4), c2.initial_state(4)]) output_sequence, final_state = tf.nn.dynamic_rnn(rnn, rnn, initial_state = initial_state, time_major = False) config = tf.ConfigProto(log_device_placement = False) with tf.Session(config = config) as sess: sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()]) s, l = sess.run([output_sequence, final_state]) print(s) if __name__ == '__main__': test() ``` This is the output when I try to run the code: ``` (tensorflow)[yuming@cibci1 working-files]$ python deeprnn_test.py WARNING:tensorflow:The `skip_connections` argument will be deprecated. Please use snt.SkipConnectionCore instead. Traceback (most recent call last): File "deeprnn_test.py", line 80, in <module> test() File "deeprnn_test.py", line 67, in test rnn = snt.DeepRNN([c1, c2], skip_connections = True, concat_final_output_if_skip = False) File "/home/yuming/tensorflow/lib/python2.7/site-packages/sonnet/python/modules/basic_rnn.py", line 289, in __init__ self._check_cores_output_sizes() File "/home/yuming/tensorflow/lib/python2.7/site-packages/sonnet/python/modules/basic_rnn.py", line 307, in _check_cores_output_sizes "has size %s" % (first_core_list, i, core_list)) ValueError: The outputs of the provided cores are not able to be concatenated along the first feature dimension. 
Core 0 has size [32, 4], whereas Core 0 has size [32, 32, 8] (tensorflow)[yuming@cibci1 working-files]$ ``` I am grateful for your help, since I think this is a typical use case.
closed
2018-06-03T13:08:23Z
2018-06-04T12:52:31Z
https://github.com/google-deepmind/sonnet/issues/88
[]
mingyr
2
jpadilla/django-rest-framework-jwt
django
97
OperationalError: no such table: auth_user
I cloned the project and ran the tests as follows, getting the error below. Can anyone shed some light on the issue? $ ./runtests.py ============================== test session starts ============================== platform linux2 -- Python 2.7.2 -- py-1.4.26 -- pytest-2.6.4 collected 43 items tests/test_authentication.py FFFFFFFFFFss tests/test_serializers.py FFsFFF tests/test_utils.py FFFFFFFFFF tests/test_views.py FFFFFFFFFFFFFFF ============================== FAILURES ============================== ______________ JSONWebTokenAuthenticationTests.test_different_auth_header_prefix ______________ self = <tests.test_authentication.JSONWebTokenAuthenticationTests testMethod=test_different_auth_header_prefix> ``` def setUp(self): self.csrf_client = APIClient(enforce_csrf_checks=True) self.username = 'jpueblo' self.email = 'jpueblo@example.com' ``` > ``` > self.user = User.objects.create_user(self.username, self.email) > ``` tests/test_authentication.py:79: --- ../../apps/python-2.7.2/lib/python2.7/site-packages/django/contrib/auth/models.py:187: in create_user *_extra_fields) ../../apps/python-2.7.2/lib/python2.7/site-packages/django/contrib/auth/models.py:182: in _create_user user.save(using=self._db) ../../apps/python-2.7.2/lib/python2.7/site-packages/django/db/models/base.py:710: in save force_update=force_update, update_fields=update_fields)
../../apps/python-2.7.2/lib/python2.7/site-packages/django/db/models/base.py:738: in save_base updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields) ../../apps/python-2.7.2/lib/python2.7/site-packages/django/db/models/base.py:822: in _save_table result = self._do_insert(cls._base_manager, using, fields, update_pk, raw) ../../apps/python-2.7.2/lib/python2.7/site-packages/django/db/models/base.py:861: in _do_insert using=using, raw=raw) ../../apps/python-2.7.2/lib/python2.7/site-packages/django/db/models/manager.py:127: in manager_method return getattr(self.get_queryset(), name)(_args, **kwargs) ../../apps/python-2.7.2/lib/python2.7/site-packages/django/db/models/query.py:920: in _insert return query.get_compiler(using=using).execute_sql(return_id) ../../apps/python-2.7.2/lib/python2.7/site-packages/django/db/models/sql/compiler.py:963: in execute_sql cursor.execute(sql, params) ../../apps/python-2.7.2/lib/python2.7/site-packages/django/db/backends/utils.py:64: in execute return self.cursor.execute(sql, params) ../../apps/python-2.7.2/lib/python2.7/site-packages/django/db/utils.py:97: in __exit__ six.reraise(dj_exc_type, dj_exc_value, traceback) ../../apps/python-2.7.2/lib/python2.7/site-packages/django/db/backends/utils.py:64: in execute return self.cursor.execute(sql, params) --- self = <django.db.backends.sqlite3.base.SQLiteCursorWrapper object at 0x2321510>, query = 'INSERT INTO "auth_user" ("password", "last_login", "is_superuser", "username", "first_name", "last_name", "email", "is_staff", "is_active", "date_joined") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)' params = ['!PFhnSZujHPvSBGh0Fi0a5cmMhZKvPzYfu3f4gzJO', None, False, 'jpueblo', '', '', ...] ``` def execute(self, query, params=None): if params is None: return Database.Cursor.execute(self, query) query = self.convert_query(query) ``` > ``` > return Database.Cursor.execute(self, query, params) > ``` > > E OperationalError: no such table: auth_user
closed
2015-04-10T18:53:04Z
2015-07-26T19:41:40Z
https://github.com/jpadilla/django-rest-framework-jwt/issues/97
[]
caot
1
miguelgrinberg/microblog
flask
305
Error running first example in Chapter 22
I'm running under macOS 10.14.6. In Chapter 22 when I tried the example under **Executing Tasks** I'd get the error: >+[__NSPlaceholderDictionary initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to debug. I found [this answer on Stack Overflow](https://stackoverflow.com/a/52230415/76810) in reply to someone having the same issue. Instead of adding `OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES` to my .bash_profile, I did what [this comment](https://stackoverflow.com/questions/50168647/multiprocessing-causes-python-to-crash-and-gives-an-error-may-have-been-in-progr#comment92429088_52230415) suggested and changed my command line to `OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES rq worker microblog-tasks`. After this change the example worked as expected. It might be worth adding a note regarding this for users running macOS >= High Sierra. Thanks.
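For completeness, the session-scoped variant of the workaround can be sketched like this (a minimal shell sketch; the `rq worker microblog-tasks` command is the one from the chapter and is left commented out so the snippet stands alone):

```shell
# Scope the Objective-C fork-safety override to the current shell session
# instead of setting it globally in ~/.bash_profile.
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
echo "$OBJC_DISABLE_INITIALIZE_FORK_SAFETY"

# With the variable exported, start the worker as usual:
# rq worker microblog-tasks
```

Exporting in the session (or prefixing the single command, as in the comment above) avoids changing the behavior of every other program started from the login shell.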
closed
2021-09-30T19:39:20Z
2023-12-03T10:56:57Z
https://github.com/miguelgrinberg/microblog/issues/305
[ "question" ]
SSteve
5
jina-ai/clip-as-service
pytorch
571
No response after running for a long time
**Prerequisites** > Please fill in by replacing `[ ]` with `[x]`. * [x] Are you running the latest `bert-as-service`? * [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`? * [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)? * [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)? ) ![686fb4f76180dd0f589ca752c402b8a](https://user-images.githubusercontent.com/39673281/86461617-d8ca5100-bd5c-11ea-9fdd-db68b745fcf8.png) **System information** > Some of this information can be collected via [this script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh). - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): - TensorFlow installed from (source or binary): - TensorFlow version: - Python version: - `bert-as-service` version: - GPU model and memory: - CPU model and memory: --- ### Description > Please replace `YOUR_SERVER_ARGS` and `YOUR_CLIENT_ARGS` accordingly. You can also write your own description for reproducing the issue. I'm using this command to start the server: ```bash bert-serving-start YOUR_SERVER_ARGS ``` and calling the server via: ```python bc = BertClient(YOUR_CLIENT_ARGS) bc.encode() ``` Then this issue shows up: ...
open
2020-07-03T10:42:25Z
2020-11-18T13:01:55Z
https://github.com/jina-ai/clip-as-service/issues/571
[]
strawberrylunar
1
flairNLP/flair
pytorch
2,877
Extract weights from ner model
Hello, I want to build a model in Spanish. I have seen that there is a Spanish NER model in flair, and I would like to know whether it is possible to extract the weights of that model and load them into a PyTorch model that I will then fine-tune. Thanks.
closed
2022-07-29T00:41:53Z
2023-01-07T13:48:07Z
https://github.com/flairNLP/flair/issues/2877
[ "question", "wontfix" ]
fmafelipe
1
fastapi-users/fastapi-users
asyncio
595
Same email can register multiple times if email contains modifiers
If I have a user already registered with king.arthur@camelot.bt, I can register another user with the same email but with a modifier king.arthur+fake@camelot.bt Even though the emails look different, they are actually the same email address.
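One common mitigation — a sketch, not part of fastapi-users, and a deliberate policy choice since `+tag` sub-addressing is a mail-provider convention (Gmail-style) rather than an RFC-level equivalence — is to normalize the address before the uniqueness check:

```python
def normalize_email(email: str) -> str:
    """Collapse 'local+tag@domain' to 'local@domain' (Gmail-style sub-addressing)."""
    local, _, domain = email.partition("@")
    local = local.split("+", 1)[0]          # drop everything after the first '+'
    return f"{local}@{domain}".lower()

# Both spellings now collide on registration:
print(normalize_email("king.arthur+fake@camelot.bt"))  # → king.arthur@camelot.bt
```

Registering and looking up users by the normalized form makes the two spellings in the report map to the same account.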
closed
2021-04-11T13:24:32Z
2021-04-12T07:05:02Z
https://github.com/fastapi-users/fastapi-users/issues/595
[ "bug" ]
alexferrari88
2
datadvance/DjangoChannelsGraphqlWs
graphql
41
how do i track the users subscribed (online) to a group?
Please help. I was using [django-channels-presence](https://django-channels-presence.readthedocs.io/en/latest/index.html), but the problem is that I cannot get the subscription query that was sent inside the consumer, and I am not able to get the channel_name in the subscription class. I need both in order to create a Room instance. Thanks in advance.
closed
2020-05-03T18:24:14Z
2020-12-06T00:23:07Z
https://github.com/datadvance/DjangoChannelsGraphqlWs/issues/41
[ "question" ]
isinghmitesh
11
WZMIAOMIAO/deep-learning-for-image-processing
deep-learning
58
RuntimeError: Arguments for call are not valid.
Thanks for your code. when i run the code 'python train_res50_fpn.py',i meet this problem.How can i resolve it? Traceback (most recent call last): File "train_res50_fpn.py", line 3, in <module> from network_files.faster_rcnn_framework import FasterRCNN, FastRCNNPredictor File "/home/yzhou/IpFPN/faster_rcnn/network_files/faster_rcnn_framework.py", line 4, in <module> from network_files.rpn_function import AnchorsGenerator, RPNHead, RegionProposalNetwork File "/home/yzhou/IpFPN/faster_rcnn/network_files/rpn_function.py", line 6, in <module> from network_files import det_utils File "/home/yzhou/IpFPN/faster_rcnn/network_files/det_utils.py", line 16, in <module> class BalancedPositiveNegativeSampler(object): File "/home/yzhou/.pyenv/versions/anaconda3-5.3.1/envs/CenterNet/lib/python3.6/site-packages/torch/jit/__init__.py", line 1274, in script _compile_and_register_class(obj, _rcb, qualified_name) File "/home/yzhou/.pyenv/versions/anaconda3-5.3.1/envs/CenterNet/lib/python3.6/site-packages/torch/jit/__init__.py", line 1115, in _compile_and_register_class _jit_script_class_compile(qualified_name, ast, rcb) RuntimeError: Arguments for call are not valid. The following variants are available: aten::index_put_(Tensor(a!) self, Tensor?[] indices, Tensor values, bool accumulate=False) -> (Tensor(a!)): Expected a value of type 'Tensor' for argument 'values' but instead found type 'int'. aten::index_put_(Tensor(a!) self, Tensor[] indices, Tensor values, bool accumulate=False) -> (Tensor(a!)): Expected a value of type 'List[Tensor]' for argument 'indices' but instead found type 'List[Optional[Tensor]]'. The original call is: File "/home/yzhou/IpFPN/faster_rcnn/network_files/det_utils.py", line 85 ) pos_idx_per_image_mask[pos_idx_per_image] = 1 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE neg_idx_per_image_mask[neg_idx_per_image] = 1
closed
2020-09-29T06:17:08Z
2020-10-12T08:53:39Z
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/58
[]
zZhouzhiYing
1
wkentaro/labelme
deep-learning
376
What does image data consist of ?
I thought that imageData consists of the image in bytes or string form, so I tried to write it out as an image file: ``` red = open(annotation_img_file,'r') data = json.load(red) img = data['imageData'] rm = open('img.jpg','w') rm.write(img) rm.close() ``` I have checked the resulting image file, and it shows that the image is corrupted. I have also tried to read the image data another way, assuming it is the original image: ``` image = Image.open(io.BytesIO(img)) ``` This is also throwing an error.
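For reference, `imageData` in a labelme JSON file is the image encoded as base64 text, so it has to be decoded back to bytes and written in binary mode; a minimal sketch (using a fabricated byte string in place of a real annotation file):

```python
import base64

# Fabricated stand-in for a real labelme annotation dict.
fake_bytes = b"\xff\xd8 not a real JPEG, just illustrative bytes"
annotation = {"imageData": base64.b64encode(fake_bytes).decode("ascii")}

img = base64.b64decode(annotation["imageData"])   # base64 str -> raw image bytes
with open("img.jpg", "wb") as f:                  # "wb", not "w"
    f.write(img)
```

Writing the base64 string directly with `open('img.jpg','w')` (as in the snippet above) produces a "corrupted" file because the text was never decoded.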
closed
2019-04-17T15:03:56Z
2019-04-21T10:05:29Z
https://github.com/wkentaro/labelme/issues/376
[]
shubhampateliitm
1
jina-ai/serve
fastapi
6,084
Topology graph Key Error (After Jina 3.21.0)
**Bug Description** There is a KeyError: 'type' in the File "/home/lzh/.conda/envs/DIMA/lib/python3.8/site-packages/jina/serve/runtimes/gateway/graph/topology_graph.py", line 90. ![image](https://github.com/jina-ai/jina/assets/25542404/70147ad3-b2f9-4cb2-a068-dee6a4f6688c) **Solution** I have printed the schema_1_properties[property_1], the result is as follows: ![image](https://github.com/jina-ai/jina/assets/25542404/836bf1a0-c58d-488e-822d-e04c110be96d) There is no key named 'type' for Env Info Here is the code for Env Info: ```python class EnvInfo(BaseDoc): env_memory: ShortTermMemory = ShortTermMemory() # Interaction information of all agent history: str = '' ``` ```python class ShortTermMemory(DocList[Info]): def add(self, info: Info): if info in self: return self.append(info) def add_batch(self, infos: DocList[Info]): for info in infos: self.add(info) def remember(self, k=0) -> DocList[Info]: """Return the most recent k memories, return all when k=0""" return self[-k:] def remember_news(self, observed: DocList[Info], k=0) -> DocList[Info]: """remember the most recent k memories from observed Messages, return all when k=0""" already_observed = self.remember(k) news = DocList[Info]() for i in observed: if i.id in already_observed.id: continue news.append(i) return news def remember_by_action(self, action: str) -> DocList[Info]: """Return all messages triggered by a specified Action""" storage_index = InMemoryExactNNIndex[Info]() storage_index.index(self) query = {'cause_by': {'$eq': action}} content = storage_index.filter(query) return content def remember_by_actions(self, actions: Iterable[str]) -> DocList[Info]: """Return all messages triggered by specified Actions""" contents = DocList[Info]() for action in actions: storage_index = InMemoryExactNNIndex[Info]() storage_index.index(self) query = {'cause_by': {'$eq': action}} contents = contents + storage_index.filter(query) # become a list after + operation return DocList[Info](contents) ``` ```python class 
Info(BaseDoc): content: List = [] instruction: str = '' agent_id: str = '' # the profile of the agent role: str = 'user' # system / user / assistant cause_by: str = '' @property def Info_str(self): # prefix = '-'.join([self.role, str(self.cause_by)]) return f"{self.role}: {self.content}" def Info_str_repr(self): return self.Info_str() def to_dict(self) -> dict: return { "role": self.role, "content": self.content } ``` **Environment** Python 3.8 Jina>3.21.0
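For anyone hitting this before a fix lands: the crash is a plain `KeyError` on a JSON-schema property that has no top-level `"type"` key (nested docarray models like `ShortTermMemory` are emitted via `$ref`/`allOf` instead), so a defensive comparison with `.get()` sidesteps it. This is an illustrative sketch of the pattern, not the actual jina code:

```python
def types_compatible(prop_a: dict, prop_b: dict) -> bool:
    # prop["type"] raises KeyError for $ref/allOf schemas; .get() returns None
    # instead, so two "typeless" (nested-model) properties compare as compatible.
    return prop_a.get("type") == prop_b.get("type")

nested = {"allOf": [{"$ref": "#/definitions/ShortTermMemory"}]}  # no "type" key
plain = {"type": "string"}
```

With `["type"]` the `nested` schema above reproduces the `KeyError`; with `.get("type")` both comparisons complete.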
closed
2023-10-15T16:04:03Z
2023-10-21T09:53:11Z
https://github.com/jina-ai/serve/issues/6084
[]
ZhihaoAIRobotic
5
blb-ventures/strawberry-django-plus
graphql
172
precommit broken - isort version needs to be bumped
With poetry 1.5 an additional validation is introduced which breaks isort < 5.11.4
closed
2023-02-01T14:53:00Z
2023-02-01T16:57:01Z
https://github.com/blb-ventures/strawberry-django-plus/issues/172
[]
devkral
1
AUTOMATIC1111/stable-diffusion-webui
pytorch
16,264
[minor Feature Request]: checkmark in model list
### Is there an existing issue for this? - [X] I have searched the existing issues and checked the recent builds/commits ### What would your feature do? Show a checkmark on (or invert) the currently selected model in the model list. ### Proposed workflow If I have a list of models, choose one, generate an image, and then want to choose another model, the old model is checkmarked in the list ... so far so good. BUT if I have a model that has no generated hash and do the same, then look in the list after generating the first image, the model is not checkmarked ... ### Additional information _No response_
closed
2024-07-26T07:56:49Z
2024-07-28T08:45:05Z
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16264
[ "enhancement" ]
kalle07
7
autogluon/autogluon
data-science
3,869
Tabular: Benchmark scikit-learn 1.4 RandomForest missing value support
scikit-learn 1.4 [adds native missing value handling for RandomForest](https://scikit-learn.org/stable/auto_examples/release_highlights/plot_release_highlights_1_4_0.html#missing-value-support-for-random-forest), which may lead to better predictive quality. We should test it out:
open
2024-01-21T20:50:31Z
2024-11-12T17:25:44Z
https://github.com/autogluon/autogluon/issues/3869
[ "enhancement", "module: tabular" ]
Innixma
1
sqlalchemy/sqlalchemy
sqlalchemy
10,844
Default values not respected in sqlalchemy
### Describe the bug I set up my environment to use SQLAlchemy for my next project, but then something goes wrong. I used the `fastapi` example to get started and then tried to add some default values to the Column declarations. Here is my table example: ```py marketers = sqlalchemy.Table( 'Users', metadata, Column('id', Integer, primary_key=True, index=True), Column('username', String(32), index=True, unique=True), Column('password', String(255)), Column('email', String(255), unique=True, index=True, nullable=True), Column('phone', String(11), unique=True, index=True, nullable=True), Column('create_time', DOUBLE, default=datetime.datetime.now().timestamp()), Column('update_time', DOUBLE, default=datetime.datetime.now().timestamp()), Column('is_active', Boolean, default=True), ) ``` The problem is that when I call this method, only username and password are inserted; `create_time`, `update_time`, and `is_active` are inserted as `null`. fastapi example: [reference](https://fastapi.tiangolo.com/how-to/async-sql-encode-databases/#import-and-set-up-sqlalchemy) ### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected _No response_ ### SQLAlchemy Version in Use 1.4.42 ### DBAPI (i.e. the database driver) pymysql ### Database Vendor and Major Version mysql 8 ### Python Version 3.10 ### Operating system OSX ### To Reproduce ```python @router.post("/") async def create_marketer(marketer: NewMarketer): hashed_password = bcrypt.kdf( bytes(marketer.password, 'utf-8'), desired_key_bytes=32, rounds=100, salt=bytes(config.PASSWORD_SALT, 'utf-8') ) query = Marketers.create().values(username=marketer.username, password=hashed_password.hex()) print(query) try: last_record_id = await database.execute(query) return {'result': True, 'id': last_record_id} except pymysql.IntegrityError: raise HTTPException(status_code=422, detail='username is already full') ``` ### Error ``` Default values won't be inserted.
``` ### Additional context Maybe I'm misusing the API, but can you see how this happens and how I can fix it?
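Two things worth checking (my reading, not a confirmed diagnosis): first, `default=datetime.datetime.now().timestamp()` calls the function once at import time and freezes that single value — a callable (e.g. a lambda) should be passed instead; second, Python-side `default=` is applied by SQLAlchemy itself when it compiles the INSERT, so a wrapper that issues the statement outside that layer (the `databases` package used here) may never apply it. A `server_default` lives in the table schema and works no matter who runs the INSERT. A stdlib-sqlite3 sketch of that idea (SQLite stands in for MySQL purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id          INTEGER PRIMARY KEY,
        username    TEXT,
        is_active   INTEGER DEFAULT 1,                      -- like server_default='1'
        create_time REAL    DEFAULT (strftime('%s','now'))  -- default set by the DB itself
    )
""")
# An INSERT that names only some columns, like the view above does:
conn.execute("INSERT INTO users (username) VALUES ('jpueblo')")
is_active, create_time = conn.execute(
    "SELECT is_active, create_time FROM users"
).fetchone()
```

Because the defaults are part of the schema, the omitted columns are filled in by the database engine rather than coming back `null`.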
closed
2024-01-08T15:25:10Z
2024-01-08T16:10:02Z
https://github.com/sqlalchemy/sqlalchemy/issues/10844
[]
hasanparasteh
0
ray-project/ray
machine-learning
51,574
[CG, Core] Add Ascend NPU Support for RCCL and CG
### Description This RFC proposes to provide initial support for RCCL and CG on Ascend NPU. Original work by [@Bye-legumes](https://github.com/ray-project/ray/pull/47658) and [@hipudding](https://github.com/ray-project/ray/pull/51032). However, we need to decouple them into several PRs with minor modifications and set an example for further hardware support. ## Notes: - I previously submitted a PR in September 2024 to support HCCL and refactor NCCL into a communicator, but the feedback was that it was too large and complicated and we should decouple into some PR with minor modification. - We should avoid adding additional C code into Ray, as that would influence the build stage. ## Plan for Decoupling into Several Stages: ### **PR1: Support RCCL on NPU** Ray Core supports scheduling on Ascend NPU devices, but the Ray Collective API does not yet support communication between NPUs using HCCL. 🔗 [PR #50790](https://github.com/ray-project/ray/pull/50790) 👤 @liuxsh9 ### **PR2: Refactor CG to Support Multiple Devices** We can refer to [this PR](https://github.com/ray-project/ray/pull/44086) to decouple device-related modules. Move cupy dependency, support rank mapping or different progress group. 👤 @hipudding ### **PR3: CG Support for NPU** CG support will be added after RCCL is merged, utilizing the RCCL API from [PR #47658](https://github.com/ray-project/ray/pull/47658). 👤 @Bye-legumes ### **Merge Strategy** - PR2 and PR3 can be merged independently. - PR3 will adjust accordingly based on PR2. ### CANN+torch Version Based on vLLM or latest? ### Use case Support vllm-ascend https://github.com/vllm-project/vllm-ascend
open
2025-03-21T02:09:40Z
2025-03-21T23:37:13Z
https://github.com/ray-project/ray/issues/51574
[ "enhancement", "core", "compiled-graphs" ]
Bye-legumes
0
pyro-ppl/numpyro
numpy
1,488
Inconsistent MCMC Results Based On * Operator
## Issue I receive inconsistent MCMC results based simply on using the * operator or not. Model1 (correct result) and Model2 (incorrect result) give different answers despite the mathematical descriptions being equivalent. The only difference between model2 and model1 is the * operator. ``` (jnp.exp(-(jnp.power(t2*lam, k) - jnp.power(t1*lam, k)))) # gives the wrong answer (jnp.exp(-(jnp.power(t2, k) - jnp.power(t1, k))*jnp.power(lam,k) ))) # correct answer ``` ## Package Versions * numpy=1.23.3 * numpyro=0.10.1 * jax=0.3.23 * torch=1.12.1 ## OS Edition Windows 10 Home Version 21H2 Installed on 12/8/2021 OS build 19044.2006 Experience Windows Feature Experience Pack 120.2212.4180.0 ## Code ``` # -*- coding: utf-8 -*- import numpy as np import torch from torch.distributions import Weibull from jax import random import jax.numpy as jnp import jax from numpyro.infer import MCMC, NUTS, Predictive from numpyro.handlers import trace import numpyro.distributions as dist import numpyro from time import time import random as rnd numpyro.enable_x64() cpu_cores = 2 numpyro.set_host_device_count(cpu_cores) torch.manual_seed(0) rnd.seed(0) np.random.seed(0) print(np.__version__) print(numpyro.__version__) print(jax.__version__) print(torch.__version__) def generate_data_no_features(N,k,lam,intervals=6,intervals_length=4): print("Shape Parameter",k) print("Rate Parameter",lam) weibull = Weibull(torch.ones(N,1)*1/lam,torch.ones(N,1)*k) t_fail = weibull.sample() t1 = []; t2 = []; y = []; print(f"Number of intervals: {intervals}") print(f"Length of intervals: {intervals_length}") for i in torch.arange(intervals): t_start = intervals_length*i t_end = intervals_length*(i+1) failed = torch.logical_and(t_fail <= t_end,t_fail > t_start).type(torch.float) y.append(failed.ravel()) t1.append(t_start*torch.ones_like(failed).ravel()) t2.append(t_end*torch.ones_like(failed).ravel()) if len(failed) == 0: break t_fail = t_fail[np.logical_not(failed).type(torch.bool)] t1 =
torch.hstack(t1).unsqueeze(1) t2 = torch.hstack(t2).unsqueeze(1) y = torch.hstack(y).squeeze() print(f"The timeline ends at {t_end}") return t1.type(torch.float).squeeze().numpy(),t2.type(torch.float).squeeze().numpy(),y.type(torch.float).squeeze().numpy() tfix = 5 #(jnp.exp(-(jnp.power(t2*lam, k) - jnp.power(t1*lam, k)))) gives the wrong answer #(jnp.exp(-(jnp.power(t2, k) - jnp.power(t1, k))*jnp.power(lam,k) ))) ## correct answer def model1(t1, t2, y=None): k = numpyro.sample("k", dist.LogNormal(0,1)) r = numpyro.sample("r", dist.Uniform(0,1)) lam = numpyro.deterministic("lam", jnp.power(-jnp.log(r), 1 / (k)) / tfix) p = 1.0 - jnp.exp(-(jnp.power(t2, k) - jnp.power(t1, k))*jnp.power(lam,k) ) numpyro.sample("likelihood", dist.Bernoulli(probs=p), obs=y) def model2(t1, t2, y=None): k = numpyro.sample("k", dist.LogNormal(0,1)) r = numpyro.sample("r", dist.Uniform(0,1)) lam = numpyro.deterministic("lam", jnp.power(-jnp.log(r), 1 / (k)) / tfix) p = 1.0 - jnp.exp(-(jnp.power(t2*lam, k) - jnp.power(t1*lam, k)) ) numpyro.sample("likelihood", dist.Bernoulli(probs=p), obs=y) if __name__ == "__main__": t1,t2,y = generate_data_no_features(100,1.1,.01,30,4) rng_keys = jax.random.split(random.PRNGKey(123), cpu_cores) print("Model 1") nuts_kernel = NUTS(model1) mcmc = MCMC(nuts_kernel, num_warmup=1000, num_samples=1000, num_chains=cpu_cores, chain_method='parallel') with numpyro.handlers.seed(rng_seed=1): exec_trace = trace(model1).get_trace(t1, t2, y) print(numpyro.util.format_shapes(exec_trace, compute_log_prob=True)) start = time() mcmc.run(rng_keys, t1=t1, t2=t2, y=y) end = time() print(mcmc.print_summary(exclude_deterministic=False)) print("Compile time + Sampling Time {:.4f}".format(end - start)) print("Model 2") nuts_kernel = NUTS(model2) mcmc = MCMC(nuts_kernel, num_warmup=1000, num_samples=1000, num_chains=cpu_cores, chain_method='parallel') with numpyro.handlers.seed(rng_seed=1): exec_trace = trace(model2).get_trace(t1,t2,y) 
print(numpyro.util.format_shapes(exec_trace,compute_log_prob=True)) start = time() mcmc.run(rng_keys, t1=t1, t2=t2, y=y) end = time() print("Compile time + Sampling Time {:.4f}".format(end - start)) print(mcmc.print_summary(exclude_deterministic=False)) ```
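As a side note, the two spellings are algebraically identical but need not be bitwise identical in floating point, since each form rounds differently; even ULP-level differences in the log-density can be enough to perturb a gradient-based sampler's trajectories. A minimal, numpyro-free sketch (values are hypothetical, chosen to match the scale of the simulation above):

```python
import math

# Hypothetical values on the scale used by the simulation above
# (interval endpoints up to ~120, lam = 0.01, k = 1.1).
t1, t2, lam, k = 4.0, 8.0, 0.01, 1.1

# Form used in model2: scale inside the power.
a = math.exp(-((t2 * lam) ** k - (t1 * lam) ** k))
# Form used in model1: factor lam**k out.
b = math.exp(-((t2 ** k - t1 ** k) * lam ** k))

# Mathematically (t*lam)**k == t**k * lam**k, so a == b on paper,
# but the two evaluation orders can round differently.
print(a, b, a - b)
assert math.isclose(a, b, rel_tol=1e-12)
assert 0.0 < a < 1.0
```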
closed
2022-10-20T14:59:05Z
2022-10-29T13:45:40Z
https://github.com/pyro-ppl/numpyro/issues/1488
[]
mlpotter
4
ultrafunkamsterdam/undetected-chromedriver
automation
1,006
_hook_remove_cdc_props still detectable.
`_hook_remove_cdc_props` is still detected by hCaptcha. When I solve the captcha it freezes, and the request returns an invalid response. #986
closed
2023-01-23T20:17:58Z
2023-02-04T21:29:38Z
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1006
[]
mrafieefard
0
seleniumbase/SeleniumBase
web-scraping
2,126
`--rcs` (reuse class session) has the same effect as `--rs` for `sb` pytest fixture tests
## `--rcs` (reuse class session) has the same effect as `--rs` for `sb` pytest fixture tests `--rcs` (reuse class session) is keeping the browser session open between tests of different classes when using the `sb` pytest fixture. That mode should only be reusing the browser session between tests of the same class. (`--rcs` is working normally for tests that inherit `BaseCase`. This issue is specific to the `sb` pytest fixture.) This behavior is expected to be different from `pytest --rs`, where all tests (regardless of class) should be reusing the same browser session.
closed
2023-09-21T16:25:25Z
2023-09-26T01:34:45Z
https://github.com/seleniumbase/SeleniumBase/issues/2126
[ "bug" ]
mdmintz
1
allenai/allennlp
data-science
4,930
Finish the migration guide for 2.0
closed
2021-01-25T19:00:14Z
2021-01-29T17:18:32Z
https://github.com/allenai/allennlp/issues/4930
[ "Feature request" ]
schmmd
1
mwaskom/seaborn
data-science
2,760
Suggestion: Add option for non-square plots to JointGrid
I'm using [`JointGrid`](https://seaborn.pydata.org/generated/seaborn.JointGrid.html#seaborn.JointGrid) for a scatterplot with a KDE on the margins. The simplest MWE would be straight from the documentation:

```
penguins = sns.load_dataset('penguins')

g = sns.JointGrid()
x, y = penguins["bill_length_mm"], penguins["bill_depth_mm"]
sns.scatterplot(x=x, y=y, ec="b", fc="none", s=100, linewidth=1.5, ax=g.ax_joint)
sns.kdeplot(x=x, fill=False, linewidth=2, ax=g.ax_marg_x)
sns.kdeplot(y=y, linewidth=2, ax=g.ax_marg_y)
```

For this example, the square scatterplot works perfectly fine (x and y are both in mm, with a similar range). However, when working with stable isotopes of water, the y axis (Deuterium) tends to span the range from -250 to 0 while the x axis (d18O) is between -30 and 0; accordingly, most publications on this subject have their scatterplots in a non-square aspect ratio (see e.g. [this wikipedia article](https://en.wikipedia.org/wiki/Global_meteoric_water_line)).

I can get a non-square plot with pure `sns.scatterplot` (and I can do a lot of great things with `hue`, `style` and so on...), but additionally having the KDE on the margins, as `JointGrid` easily allows, is really helpful. Since every isotope plot I know of is non-square, mine probably shouldn't be square either.

The documentation states

> The figure will always be square (unless you resize it at the matplotlib layer)

but doesn't give specifics, and I'm probably not the only one who's using seaborn to somewhat avoid the *matplotlib layer*. Searching for it produces some solutions, e.g. [here](https://stackoverflow.com/questions/29909515/how-to-plot-non-square-seaborn-jointplot-or-jointgrid) or [there](https://www.tutorialspoint.com/how-to-plot-a-non-square-seaborn-jointplot-or-jointgrid-matplotlib), but adding them to the MWE appears to do nothing, so they're apparently outdated:

```
#plt.rcParams["figure.figsize"] = [20, 10]
#plt.rcParams["figure.autolayout"] = True

g = sns.JointGrid()
g.fig.set_figwidth=(20)
g.fig.set_figheight=(10)
x, y = penguins["bill_length_mm"], penguins["bill_depth_mm"]
sns.scatterplot(x=x, y=y, ec="b", fc="none", s=100, linewidth=1.5, ax=g.ax_joint)
sns.kdeplot(x=x, fill=False, linewidth=2, ax=g.ax_marg_x)
sns.kdeplot(y=y, linewidth=2, ax=g.ax_marg_y)
#plt.show()
```

(same result when uncommenting the additions from the second link)

**So is it possible to add an option for non-square plots into `JointGrid` or to add some more info to the documentation?**
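One likely reason the second snippet "appears to do nothing": `g.fig.set_figwidth=(20)` *assigns* rather than *calls* — `(20)` is just the int 20, so the line shadows the method on the instance and the figure is never resized. A minimal stand-in class (no seaborn or matplotlib needed) demonstrates the pitfall:

```python
class FakeFigure:
    """Minimal stand-in for a matplotlib Figure (hypothetical, for illustration)."""
    def __init__(self):
        self.width = 6.0

    def set_figwidth(self, w):
        self.width = w

fig = FakeFigure()
fig.set_figwidth = (20)     # assignment, not a call: (20) is just the int 20,
                            # the method is shadowed, and nothing resizes
assert fig.width == 6.0     # width unchanged

fig2 = FakeFigure()
fig2.set_figwidth(20)       # the intended call
assert fig2.width == 20
```

With seaborn itself, calling `g.fig.set_figwidth(20)` and `g.fig.set_figheight(10)` (rather than assigning) should resize the grid, which is presumably the "matplotlib layer" escape hatch the docs allude to.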
closed
2022-03-15T13:15:03Z
2022-03-16T10:18:57Z
https://github.com/mwaskom/seaborn/issues/2760
[]
joha1
2
DistrictDataLabs/yellowbrick
matplotlib
1,047
RFECV is much slower than Sklearn's implementation
I am aware that yellowbrick is using RFE and CV separately to produce the visualiser, but the approach is several times slower than sklearn's implementation of RFECV. Running the following in a jupyter notebook:

```
import yellowbrick
print('yellowbrick version: ', yellowbrick.__version__)
import sklearn
print('sklearn version: ', sklearn.__version__)
```

_yellowbrick version: 1.1_
_sklearn version: 0.22.1_

```
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV as skrfecv
from yellowbrick.model_selection import RFECV
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression

# Build a classification task using 4 out of 50 informative features
X, y = make_classification(n_samples=200, n_features=50, n_informative=4,
                           n_redundant=2, n_repeated=0, n_classes=4,
                           n_clusters_per_class=1, random_state=0)

log_reg = LogisticRegression()

def rfe_time_test(yb=True):
    if yb:
        rfecv = RFECV(log_reg, step=1, cv=StratifiedKFold(5), scoring='accuracy')
    else:
        rfecv = skrfecv(log_reg, step=1, cv=StratifiedKFold(5), scoring='accuracy')
    _ = rfecv.fit(X, y)

%timeit rfe_time_test(yb=True)
```

_1min 23s ± 8.18 s per loop (mean ± std. dev. of 7 runs, 1 loop each)_

```
%timeit rfe_time_test(yb=False)
```

_3.73 s ± 430 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)_

If this is unavoidable due to using CV separately to get the full scores, then it would be nice to note in the documentation, so that you could use sklearn's RFECV to drop the bottom ~50% of features before running the visualiser.

This got me interested so I did some digging into what might affect the difference between sklearn and yellowbrick's RFECV:

```
import matplotlib.pyplot as plt
import numpy as np

def plot_timings(x_range, yb_timings, sk_timings, x_axis, titles):
    f, ax = plt.subplots(1, 2)
    s_times = np.array([t.average for t in sk_timings])
    y_times = np.array([t.average for t in yb_timings])
    ax[0].plot(x_range, y_times, 'ro-')
    ax[0].plot(x_range, s_times, 'bo-')
    ax[0].legend(['yellowbrick', 'sklearn'])
    ax[0].set_ylabel('Time (seconds)')
    ax[1].set_ylabel('YB time / SK time')
    ratio = y_times / s_times
    ax[1].plot(x_range, ratio, 'og-')
    for i, title in enumerate(titles):
        ax[i].set_title(title)
        ax[i].set_xlabel(x_axis)
    f.subplots_adjust(wspace=0.25)
    f.set_size_inches(10, 6)
    plt.show()
    return f

yb_timings = []
sk_timings = []
n_obs = [i for i in range(200, 1001, 100)]
for i in n_obs:
    # Build a classification task using 4 informative features
    X, y = make_classification(n_samples=i, n_features=10, n_informative=4,
                               n_redundant=2, n_repeated=0, n_classes=4,
                               n_clusters_per_class=1, random_state=0)
    yb_time = %timeit -o rfe_time_test(yb=True)
    yb_timings.append(yb_time)
    sk_time = %timeit -o rfe_time_test(yb=False)
    sk_timings.append(sk_time)

obs = plot_timings(n_obs, yb_timings, sk_timings,
                   x_axis='Number of observations', titles=['Timings', 'Ratio'])
```

![Timings and observations](https://user-images.githubusercontent.com/28015710/75980692-e79e1580-5eda-11ea-9d91-593c8e51eccf.png)

The ratio of the time difference is fairly stable over the number of observations.

```
yb_timings = []
sk_timings = []
n_feats = [i for i in range(10, 51, 10)]
for i in n_feats:
    # Build a classification task using 4 informative features
    X, y = make_classification(n_samples=200, n_features=i, n_informative=4,
                               n_redundant=2, n_repeated=0, n_classes=4,
                               n_clusters_per_class=1, random_state=0)
    yb_time = %timeit -o rfe_time_test(yb=True)
    yb_timings.append(yb_time)
    sk_time = %timeit -o rfe_time_test(yb=False)
    sk_timings.append(sk_time)

feats = plot_timings(n_feats, yb_timings, sk_timings,
                     x_axis='Number of input features', titles=['Timings', 'Ratio'])
```

![Timings and features](https://user-images.githubusercontent.com/28015710/75980850-29c75700-5edb-11ea-834d-c204e6c9846f.png)

As the number of starting features increases, YB becomes even slower relative to sklearn.

```
# Build a classification task using 4 informative features
X, y = make_classification(n_samples=200, n_features=10, n_informative=4,
                           n_redundant=2, n_repeated=0, n_classes=4,
                           n_clusters_per_class=1, random_state=0)

log_reg = LogisticRegression()

yb_timings = []
sk_timings = []
cvs = [i for i in range(2, 11, 2)]
for i in cvs:
    def rfe_time_test(yb=True):
        if yb:
            rfecv = RFECV(log_reg, step=1, cv=StratifiedKFold(i), scoring='accuracy')
        else:
            rfecv = skrfecv(log_reg, step=1, cv=StratifiedKFold(i), scoring='accuracy')
        _ = rfecv.fit(X, y)

    yb_time = %timeit -o rfe_time_test(yb=True)
    yb_timings.append(yb_time)
    sk_time = %timeit -o rfe_time_test(yb=False)
    sk_timings.append(sk_time)

cv = plot_timings(cvs, yb_timings, sk_timings,
                  x_axis='Number of CV folds', titles=['Timings', 'Ratio'])
```

![Timings and CV folds](https://user-images.githubusercontent.com/28015710/75981029-888cd080-5edb-11ea-8203-0afca01dadea.png)

YB becomes slower with increasing number of folds too!
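A back-of-envelope model is consistent with the feature-count trend. Assuming YB's RFECV runs a cross-validated RFE at every candidate subset size k (so each RFE itself performs n-k+1 fits per fold), the total number of model fits grows quadratically in the number of features, whereas sklearn's RFECV runs one full RFE per fold (linear in features). This is only a sketch of a plausible mechanism, not a reading of YB's source; notably it predicts the ratio is roughly flat in the number of folds, so the measured fold dependence above suggests additional per-fold overhead as well.

```python
def sklearn_rfecv_fits(n_features, n_folds):
    # One full RFE per CV fold (fits at n, n-1, ..., 1 features, step=1),
    # plus one final refit on the selected subset.
    return n_folds * n_features + 1

def yb_rfecv_fits(n_features, n_folds):
    # Assumption: one cross-validated RFE per candidate subset size k,
    # where RFE(n_features_to_select=k) performs n-k+1 fits per fold.
    return sum(n_folds * (n_features - k + 1) for k in range(1, n_features + 1))

for n in (10, 30, 50):
    sk = sklearn_rfecv_fits(n, 5)
    yb = yb_rfecv_fits(n, 5)
    print(f"n_features={n}: sklearn={sk} fits, yb={yb} fits, ratio={yb / sk:.1f}")

# The predicted ratio grows roughly like (n_features + 1) / 2,
# matching the upward trend in the measured ratio plot.
assert yb_rfecv_fits(50, 5) / sklearn_rfecv_fits(50, 5) > \
       yb_rfecv_fits(10, 5) / sklearn_rfecv_fits(10, 5)
```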
open
2020-03-05T12:24:07Z
2021-02-23T15:43:58Z
https://github.com/DistrictDataLabs/yellowbrick/issues/1047
[ "priority: high", "type: technical debt" ]
jc639
10
scikit-learn/scikit-learn
python
30,133
`check_estimator` to return structured info
From https://github.com/scikit-learn/scikit-learn/issues/29951#issuecomment-2383536734 (@ogrisel) > Somehow related, side note: maybe check_estimator could be made to return a structured result object that is programmatically introspectable. > > This would allow third-party libraries to build and publish a scikit-learn compliance table to be integrated as part of their documentation. In case of XFAILed checks, the reason could be displayed as part of the report. > > Currently, users would have to dig into CI logs and grep the pytest output, assuming those projects use the parametrize_with_checks as part of a pytest test suite instead of just calling check_estimator. > > Thinking about, we could even start by eating our own dog food: we have no simple summary of all the XFAILed/XPASSed cases for scikit-learn estimators at the moment. I like the idea, but in order not to forget about it, creating its own issue.
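To make the idea concrete, a minimal sketch of the kind of structured, programmatically introspectable report such an API might return — using generic stand-in checks rather than real scikit-learn checks, so all names here are hypothetical:

```python
def run_checks(estimator, checks):
    """Run each check and collect a structured record per check."""
    results = []
    for check in checks:
        record = {"check": check.__name__, "status": "passed", "reason": None}
        try:
            check(estimator)
        except AssertionError as exc:        # check ran and failed
            record["status"] = "failed"
            record["reason"] = str(exc)
        except Exception as exc:             # check errored (or xfail-style skip)
            record["status"] = "error"
            record["reason"] = f"{type(exc).__name__}: {exc}"
        results.append(record)
    return results

# Hypothetical stand-in checks, not real scikit-learn estimator checks.
def check_has_fit(est):
    assert hasattr(est, "fit"), "estimator has no fit method"

def check_has_predict(est):
    assert hasattr(est, "predict"), "estimator has no predict method"

class Dummy:
    def fit(self, X, y=None):
        return self

report = run_checks(Dummy(), [check_has_fit, check_has_predict])
for r in report:
    print(r)
```

A third-party library could render such a list of records as a compliance table in its docs, with the `reason` field surfacing XFAIL explanations.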
closed
2024-10-22T14:53:52Z
2024-11-08T16:28:04Z
https://github.com/scikit-learn/scikit-learn/issues/30133
[ "Developer API" ]
adrinjalali
0
napari/napari
numpy
7,526
Inherit overlay (i.e. scale bar) font size from appearance settings
## 🚀 Feature Use the font size that the user sets in appearance settings (or, its default) as the base for determining font size for overlays. Update 2025/01/15: At the napari community meeting we discussed separating overlay font sizes from the current font size setting in appearances. This OP has been updated to match. ## Motivation I've been thinking it might be smart to base font size [of the scale_bar] on what's set in appearance. Would help someone who might need a larger base font size, like 15, because of visual impairment or a screen farther away. Though, it should maybe be a multiple of that font size, like 1.3 (so it would actually be larger, since I think default is 9 without checking). _Originally posted by @TimMonko in https://github.com/napari/napari/issues/7018#issuecomment-2588487447_ At the 2025/01/15 community meeting, the attendees endorsed the value of adding font_size (and axes length) to the axes. ## Pitch 1. Use a new 'overlay font size' appearance setting for determining scale bar and other text overlay font defaults 2. Increase the current default UI overlay font sizes 3. Add font_size to overlay.axes in addition to the currently available scale_bar and text_overlay Use this framework for future overlays (like proposed in #7321 ) ## Alternatives 1. Continue defining these values with specific font sizes in mind. 2. Inherit font size from the current appearance settings font_size, only. Do not create a new setting/widget.
open
2025-01-14T04:16:43Z
2025-01-16T05:09:29Z
https://github.com/napari/napari/issues/7526
[]
TimMonko
2
numba/numba
numpy
9,635
Can Numba use c++functions?
We are using CoolProp in Python, but unfortunately this package is not supported by Numba, and we cannot overload/overwrite it through numpy/python. So we are thinking: can we write a Python function that calls CoolProp's C++ code and then use that function inside Numba? But we can't find a way to call a C++ function from Numba in the docs. Can you tell us how to do this? Thanks a lot!!!
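One documented route is declaring the foreign function through `ctypes` (or `cffi`), which Numba's nopython mode can call directly. For C++ code you would first compile an `extern "C"` wrapper into a shared library, since C++ name mangling otherwise prevents `ctypes` from finding the symbols. The sketch below uses the C math library instead of CoolProp so it runs anywhere; the `@njit` part is shown in a comment and assumes Numba is installed:

```python
import ctypes
import ctypes.util
import math

# Load a shared library. For your own C++ code, compile an extern "C"
# wrapper into a .so/.dll and load that path here instead; the C math
# library is used so this sketch is self-contained.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

c_cos = libm.cos
c_cos.restype = ctypes.c_double        # declare the C signature:
c_cos.argtypes = [ctypes.c_double]     # double cos(double)

assert abs(c_cos(0.5) - math.cos(0.5)) < 1e-12

# Numba's nopython mode can call ctypes-declared functions directly,
# provided restype/argtypes are set, e.g.:
#
#     from numba import njit
#
#     @njit
#     def f(x):
#         return c_cos(x)
```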
closed
2024-06-29T12:16:03Z
2024-08-11T01:55:17Z
https://github.com/numba/numba/issues/9635
[ "question", "stale" ]
chen-erqi
7
clovaai/donut
nlp
131
How to get a confidence score from Donut?
open
2023-02-01T09:50:43Z
2023-02-01T21:22:17Z
https://github.com/clovaai/donut/issues/131
[]
mohsin7822
1
allure-framework/allure-python
pytest
401
Attachment path
When I use: `allure.attach.file('./screen.png', name="Screenshot1", attachment_type=allure.attachment_type.PNG)` I would like to get back the relative path where the attachment will be created. I need it in order to create a link reference from another test step. Is it possible to get that path at runtime? Riccardo ![2019-07-04_174020](https://user-images.githubusercontent.com/6887824/60677846-ee274980-9e82-11e9-80e5-400efdcfd7aa.jpg)
closed
2019-07-04T15:41:35Z
2024-01-08T17:02:35Z
https://github.com/allure-framework/allure-python/issues/401
[ "task:new feature" ]
ric79
1
codertimo/BERT-pytorch
nlp
13
Question about random sampling.
https://github.com/codertimo/BERT-pytorch/blob/7efd2b5a631f18ebc83cd16886b8c6ee77a40750/bert_pytorch/dataset/dataset.py#L50-L64 Well, seems `random.random()` always returns a positive number, so `prob >= prob * 0.9` will always be true?
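The questioner's observation can be checked directly: `random.random()` returns a float in [0.0, 1.0), so `prob >= prob * 0.9` holds for every sampled value (with equality only at 0). A small sketch, including a constant-threshold comparison of the kind presumably intended for the usual 80/10/10 BERT masking split (the exact thresholds here are illustrative):

```python
import random

random.seed(0)

# random.random() is uniform on [0.0, 1.0), so for any non-negative
# prob the comparison prob >= prob * 0.9 is trivially true.
samples = [random.random() for _ in range(10_000)]
assert all(0.0 <= p < 1.0 for p in samples)
assert all(p >= p * 0.9 for p in samples)

# Comparing against fixed constants behaves as presumably intended:
p = random.random()
if p < 0.8:
    branch = "mask"
elif p < 0.9:
    branch = "random-token"
else:
    branch = "keep"
assert branch in {"mask", "random-token", "keep"}
```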
closed
2018-10-20T07:35:51Z
2018-10-22T22:58:53Z
https://github.com/codertimo/BERT-pytorch/issues/13
[]
SongRb
3
plotly/dash-bio
dash
125
Clustergram label is obscure and lengthy.
Hi @shammamah, As I was linting [app_clustergram.py](https://github.com/plotly/dash-bio/blob/master/tests/dash/app_clustergram.py), I noticed some heavy and confusing wording, lines 162-164: ```py 'Header of row labels column in uploaded dataset', title='If a dataset was uploaded, enter the header of' + 'the column that contains the title of each row.', ``` Even when I view it in [context](https://dash-gallery.plotly.host/dash-bio/clustergram), it's unclear what this label means (I mean 'label' in the context of an app layout). In Pandas language, when talking about a DataFrame, row labels are referred to as the **index** and column labels as **columns**. Reference: https://pandas.pydata.org/pandas-docs/stable/dsintro.html Maybe we could follow this convention, instead of torturing our brains?
closed
2019-01-24T16:22:07Z
2019-01-25T15:40:44Z
https://github.com/plotly/dash-bio/issues/125
[]
mkcor
0
ivy-llc/ivy
tensorflow
28,615
[Bug]: ivy.array() RecursionError: maximum recursion depth exceeded
### Bug Explanation

When I create an array using `ivy.array()` and then try to access the array using indexing or slicing, a `RecursionError: maximum recursion depth exceeded` is raised in a Colab notebook.

### Steps to Reproduce Bug

```
import ivy

arr = ivy.array([1, 2, 4, 5])
arr1 = ivy.array([[1], [2], [3]])
print(arr[0])
print(arr1[0])
```

### Environment

colab

### Ivy Version

0.0.8.0

### Backend

- [ ] NumPy
- [ ] TensorFlow
- [ ] PyTorch
- [ ] JAX

### Device

T4
open
2024-03-16T20:01:25Z
2024-04-08T07:01:48Z
https://github.com/ivy-llc/ivy/issues/28615
[ "Bug Report" ]
marvlyngkhoi
1