| DEBUG 2026-01-28 12:42:22 d_metavar.py:22 Metavar for type None: None |
| DEBUG 2026-01-28 12:42:22 formatter.py:52 action type: None, Result: , nargs: 0, default metavar: None |
| DEBUG 2026-01-28 12:42:22 d_metavar.py:22 Metavar for type <class 'str'>: str |
| DEBUG 2026-01-28 12:42:22 formatter.py:52 action type: <class 'str'>, Result: str, nargs: None, default metavar: None |
| DEBUG 2026-01-28 12:42:22 _wrapper.py:168 wrapped field at host has a default value of localhost |
| DEBUG 2026-01-28 12:42:22 _wrapper.py:168 wrapped field at port has a default value of 8080 |
| DEBUG 2026-01-28 12:42:22 _wrapper.py:168 wrapped field at fps has a default value of 30 |
| DEBUG 2026-01-28 12:42:22 _wrapper.py:168 wrapped field at inference_latency has a default value of 0.03333333333333333 |
| DEBUG 2026-01-28 12:42:22 _wrapper.py:168 wrapped field at obs_queue_timeout has a default value of 2 |
| DEBUG 2026-01-28 12:42:22 _wrapper.py:309 Arg options for field 'host': {'required': False, 'dest': 'host', 'default': 'localhost', 'help': 'Networking configuration', 'type': <class 'str'>} |
| DEBUG 2026-01-28 12:42:22 _wrapper.py:309 Arg options for field 'port': {'required': False, 'dest': 'port', 'default': 8080, 'help': ' ', 'type': <class 'int'>} |
| DEBUG 2026-01-28 12:42:22 _wrapper.py:309 Arg options for field 'fps': {'required': False, 'dest': 'fps', 'default': 30, 'help': 'Timing configuration', 'type': <class 'int'>} |
| DEBUG 2026-01-28 12:42:22 ocstring.py:253 Warning: Unable to parse attribute docstring: default=DEFAULT_INFERENCE_LATENCY, metadata={"help": "Target inference latency in seconds"} |
| DEBUG 2026-01-28 12:42:22 _wrapper.py:309 Arg options for field 'inference_latency': {'required': False, 'dest': 'inference_latency', 'default': 0.03333333333333333, 'help': ' ', 'type': <class 'float'>} |
| DEBUG 2026-01-28 12:42:22 ocstring.py:253 Warning: Unable to parse attribute docstring: default=DEFAULT_OBS_QUEUE_TIMEOUT, metadata={"help": "Timeout for observation queue in seconds"} |
| DEBUG 2026-01-28 12:42:22 _wrapper.py:309 Arg options for field 'obs_queue_timeout': {'required': False, 'dest': 'obs_queue_timeout', 'default': 2, 'help': '\n', 'type': <class 'float'>} |
| DEBUG 2026-01-28 12:42:22 gparsing.py:143 |
| POST PROCESSING |
| DEBUG 2026-01-28 12:42:22 gparsing.py:144 (raw) parsed args: Namespace(port='5555') |
| DEBUG 2026-01-28 12:42:22 decoding.py:112 from_dict for <class 'lerobot.async_inference.configs.PolicyServerConfig'> |
| DEBUG 2026-01-28 12:42:22 decoding.py:132 Decode name = port, type = <class 'int'> |
| INFO 2026-01-28 12:42:22 y_server.py:420 {'fps': 30, |
| 'host': 'localhost', |
| 'inference_latency': 0.03333333333333333, |
| 'obs_queue_timeout': 2, |
| 'port': 5555} |
| INFO 2026-01-28 12:42:22 y_server.py:430 PolicyServer started on localhost:5555 |
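The config resolution traced above (dataclass defaults, one CLI override `Namespace(port='5555')`, then type coercion during decoding) can be sketched as follows. Note this is a minimal illustration, not the library's actual machinery: `PolicyServerConfig` here is a hypothetical mirror reconstructed from the defaults printed in the log, and `from_dict` is an illustrative stand-in for the `decoding.py` step, not `lerobot`'s real API.

```python
from dataclasses import dataclass, fields

# Hypothetical mirror of PolicyServerConfig; defaults copied from the log above.
@dataclass
class PolicyServerConfig:
    host: str = "localhost"
    port: int = 8080
    fps: int = 30
    inference_latency: float = 1 / 30  # 0.03333333333333333, i.e. one frame period at 30 fps
    obs_queue_timeout: float = 2

def from_dict(raw: dict) -> PolicyServerConfig:
    # Illustrative stand-in for the decoding step: coerce each raw CLI value
    # with the field's annotated type (the "Decode name = port,
    # type = <class 'int'>" line), then fill everything else from defaults.
    coerced = {
        f.name: f.type(raw[f.name])
        for f in fields(PolicyServerConfig)
        if f.name in raw
    }
    return PolicyServerConfig(**coerced)

cfg = from_dict({"port": "5555"})  # mirrors the raw parsed args in the log
```

Applied to `{"port": "5555"}`, this reproduces the final dict dumped at `y_server.py:420`: `port` comes from the CLI (as an `int`), all other fields from defaults.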
| INFO 2026-01-28 12:43:28 y_server.py:112 Client ipv4:127.0.0.1:43896 connected and ready |
| INFO 2026-01-28 12:43:28 y_server.py:138 Receiving policy instructions from ipv4:127.0.0.1:43896 | Policy type: act | Pretrained name or path: /home/dobot/dobot/x-trainer/ckpts/act_sort2/checkpoints/last/pretrained_model | Actions per chunk: 16 | Device: cuda |
| DEBUG 2026-01-28 12:43:28 d_metavar.py:22 Metavar for type None: None |
| DEBUG 2026-01-28 12:43:28 formatter.py:52 action type: None, Result: , nargs: 0, default metavar: None |
| DEBUG 2026-01-28 12:43:28 d_metavar.py:22 Metavar for type <class 'str'>: str |
| DEBUG 2026-01-28 12:43:28 formatter.py:52 action type: <class 'str'>, Result: str, nargs: None, default metavar: None |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at n_obs_steps has a default value of 1 |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:201 wrapped field at input_features has a default value of {} |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:201 wrapped field at output_features has a default value of {} |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:201 wrapped field at device has a default value of None |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at use_amp has a default value of False |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at push_to_hub has a default value of True |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:201 wrapped field at repo_id has a default value of None |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:201 wrapped field at private has a default value of None |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:201 wrapped field at tags has a default value of None |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:201 wrapped field at license has a default value of None |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:201 wrapped field at pretrained_path has a default value of None |
| DEBUG 2026-01-28 12:43:28 ocstring.py:253 Warning: Unable to parse attribute docstring: current step and additional steps going back). |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:309 Arg options for field 'n_obs_steps': {'required': False, 'dest': 'n_obs_steps', 'default': 1, 'help': ' ', 'type': <class 'int'>} |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:309 Arg options for field 'input_features': {'required': False, 'dest': 'input_features', 'default': {}, 'help': ' ', 'type': typing.Dict[str, lerobot.configs.types.PolicyFeature]} |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:309 Arg options for field 'output_features': {'required': False, 'dest': 'output_features', 'default': {}, 'help': ' ', 'type': typing.Dict[str, lerobot.configs.types.PolicyFeature]} |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:309 Arg options for field 'device': {'required': False, 'dest': 'device', 'default': None, 'help': 'e.g. "cuda", "cuda:0", "cpu", or "mps"', 'type': typing.Optional[str]} |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:309 Arg options for field 'use_amp': {'required': False, 'dest': 'use_amp', 'default': False, 'help': '`use_amp` determines whether to use Automatic Mixed Precision (AMP) for training and evaluation. With AMP,\nautomatic gradient scaling is used.', 'type': <class 'bool'>} |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:309 Arg options for field 'push_to_hub': {'required': False, 'dest': 'push_to_hub', 'default': True, 'help': 'type: ignore[assignment] # TODO: use a different name to avoid override', 'type': <class 'bool'>} |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:309 Arg options for field 'repo_id': {'required': False, 'dest': 'repo_id', 'default': None, 'type': typing.Optional[str]} |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:309 Arg options for field 'private': {'required': False, 'dest': 'private', 'default': None, 'help': 'Upload on private repository on the Hugging Face hub.', 'type': typing.Optional[bool]} |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:309 Arg options for field 'tags': {'required': False, 'dest': 'tags', 'default': None, 'help': 'Add tags to your policy on the hub.', 'type': typing.Optional[typing.List[str]]} |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:309 Arg options for field 'license': {'required': False, 'dest': 'license', 'default': None, 'help': 'Add tags to your policy on the hub.', 'type': typing.Optional[str]} |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:309 Arg options for field 'pretrained_path': {'required': False, 'dest': 'pretrained_path', 'default': None, 'help': 'Either the repo ID of a model hosted on the Hub or a path to a directory containing weights\nsaved using `Policy.save_pretrained`. If not provided, the policy is initialized from scratch.', 'type': typing.Optional[pathlib.Path]} |
| DEBUG 2026-01-28 12:43:28 gparsing.py:143 |
| POST PROCESSING |
| DEBUG 2026-01-28 12:43:28 gparsing.py:144 (raw) parsed args: Namespace() |
| DEBUG 2026-01-28 12:43:28 decoding.py:112 from_dict for <class 'lerobot.policies.act.configuration_act.ACTConfig'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = n_obs_steps, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = input_features, type = dict[str, lerobot.configs.types.PolicyFeature] |
| DEBUG 2026-01-28 12:43:28 decoding.py:261 Decoding a Dict field: dict[str, lerobot.configs.types.PolicyFeature] |
| DEBUG 2026-01-28 12:43:28 decoding.py:112 from_dict for <class 'lerobot.configs.types.PolicyFeature'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = type, type = <enum 'FeatureType'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = shape, type = tuple[int, ...] |
| DEBUG 2026-01-28 12:43:28 decoding.py:275 Decoding a Tuple field: tuple[int, ...] |
| DEBUG 2026-01-28 12:43:28 decoding.py:112 from_dict for <class 'lerobot.configs.types.PolicyFeature'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = type, type = <enum 'FeatureType'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = shape, type = tuple[int, ...] |
| DEBUG 2026-01-28 12:43:28 decoding.py:112 from_dict for <class 'lerobot.configs.types.PolicyFeature'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = type, type = <enum 'FeatureType'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = shape, type = tuple[int, ...] |
| DEBUG 2026-01-28 12:43:28 decoding.py:112 from_dict for <class 'lerobot.configs.types.PolicyFeature'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = type, type = <enum 'FeatureType'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = shape, type = tuple[int, ...] |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = output_features, type = dict[str, lerobot.configs.types.PolicyFeature] |
| DEBUG 2026-01-28 12:43:28 decoding.py:112 from_dict for <class 'lerobot.configs.types.PolicyFeature'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = type, type = <enum 'FeatureType'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = shape, type = tuple[int, ...] |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = device, type = typing.Optional[str] |
| DEBUG 2026-01-28 12:43:28 decoding.py:293 Decoding a Union field: typing.Optional[str] |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = use_amp, type = <class 'bool'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = push_to_hub, type = <class 'bool'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = repo_id, type = typing.Optional[str] |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = private, type = typing.Optional[bool] |
| DEBUG 2026-01-28 12:43:28 decoding.py:293 Decoding a Union field: typing.Optional[bool] |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = tags, type = typing.Optional[list[str]] |
| DEBUG 2026-01-28 12:43:28 decoding.py:293 Decoding a Union field: typing.Optional[list[str]] |
| DEBUG 2026-01-28 12:43:28 decoding.py:282 Decoding a List field: list[str] |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = license, type = typing.Optional[str] |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = pretrained_path, type = typing.Optional[pathlib.Path] |
| DEBUG 2026-01-28 12:43:28 decoding.py:293 Decoding a Union field: typing.Optional[pathlib.Path] |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = chunk_size, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = n_action_steps, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = normalization_mapping, type = dict[str, lerobot.configs.types.NormalizationMode] |
| DEBUG 2026-01-28 12:43:28 decoding.py:261 Decoding a Dict field: dict[str, lerobot.configs.types.NormalizationMode] |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = vision_backbone, type = <class 'str'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = pretrained_backbone_weights, type = typing.Optional[str] |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = replace_final_stride_with_dilation, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = pre_norm, type = <class 'bool'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = dim_model, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = n_heads, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = dim_feedforward, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = feedforward_activation, type = <class 'str'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = n_encoder_layers, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = n_decoder_layers, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = use_vae, type = <class 'bool'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = latent_dim, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = n_vae_encoder_layers, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = temporal_ensemble_coeff, type = typing.Optional[float] |
| DEBUG 2026-01-28 12:43:28 decoding.py:293 Decoding a Union field: typing.Optional[float] |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = dropout, type = <class 'float'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = kl_weight, type = <class 'float'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = optimizer_lr, type = <class 'float'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = optimizer_weight_decay, type = <class 'float'> |
| DEBUG 2026-01-28 12:43:28 decoding.py:132 Decode name = optimizer_lr_backbone, type = <class 'float'> |
| DEBUG 2026-01-28 12:43:28 d_metavar.py:22 Metavar for type None: None |
| DEBUG 2026-01-28 12:43:28 formatter.py:52 action type: None, Result: , nargs: 0, default metavar: None |
| DEBUG 2026-01-28 12:43:28 d_metavar.py:22 Metavar for type <class 'str'>: str |
| DEBUG 2026-01-28 12:43:28 formatter.py:52 action type: <class 'str'>, Result: str, nargs: None, default metavar: None |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at n_obs_steps has a default value of 1 |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:201 wrapped field at input_features has a default value of {} |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:201 wrapped field at output_features has a default value of {} |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:201 wrapped field at device has a default value of None |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at use_amp has a default value of False |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at push_to_hub has a default value of True |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:201 wrapped field at repo_id has a default value of None |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:201 wrapped field at private has a default value of None |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:201 wrapped field at tags has a default value of None |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:201 wrapped field at license has a default value of None |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:201 wrapped field at pretrained_path has a default value of None |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at chunk_size has a default value of 100 |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at n_action_steps has a default value of 100 |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:201 wrapped field at normalization_mapping has a default value of {'VISUAL': <NormalizationMode.MEAN_STD: 'MEAN_STD'>, 'STATE': <NormalizationMode.MEAN_STD: 'MEAN_STD'>, 'ACTION': <NormalizationMode.MEAN_STD: 'MEAN_STD'>} |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at vision_backbone has a default value of resnet18 |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:201 wrapped field at pretrained_backbone_weights has a default value of ResNet18_Weights.IMAGENET1K_V1 |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at replace_final_stride_with_dilation has a default value of False |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at pre_norm has a default value of False |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at dim_model has a default value of 512 |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at n_heads has a default value of 8 |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at dim_feedforward has a default value of 3200 |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at feedforward_activation has a default value of relu |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at n_encoder_layers has a default value of 4 |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at n_decoder_layers has a default value of 1 |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at use_vae has a default value of True |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at latent_dim has a default value of 32 |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at n_vae_encoder_layers has a default value of 4 |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:201 wrapped field at temporal_ensemble_coeff has a default value of None |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at dropout has a default value of 0.1 |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at kl_weight has a default value of 10.0 |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at optimizer_lr has a default value of 1e-05 |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at optimizer_weight_decay has a default value of 0.0001 |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:168 wrapped field at optimizer_lr_backbone has a default value of 1e-05 |
| DEBUG 2026-01-28 12:43:28 ocstring.py:253 Warning: Unable to parse attribute docstring: current step and additional steps going back). |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:309 Arg options for field 'n_obs_steps': {'required': False, 'dest': 'n_obs_steps', 'default': 1, 'help': ' ', 'type': <class 'int'>} |
| DEBUG 2026-01-28 12:43:28 _wrapper.py:309 Arg options for field 'input_features': {'required': False, 'dest': 'input_features', 'default': {}, 'help': ' ', 'type': typing.Dict[str, lerobot.configs.types.PolicyFeature]} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'output_features': {'required': False, 'dest': 'output_features', 'default': {}, 'help': ' ', 'type': typing.Dict[str, lerobot.configs.types.PolicyFeature]} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'device': {'required': False, 'dest': 'device', 'default': None, 'help': 'e.g. "cuda", "cuda:0", "cpu", or "mps"', 'type': typing.Optional[str]} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'use_amp': {'required': False, 'dest': 'use_amp', 'default': False, 'help': '`use_amp` determines whether to use Automatic Mixed Precision (AMP) for training and evaluation. With AMP,\nautomatic gradient scaling is used.', 'type': <class 'bool'>} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'push_to_hub': {'required': False, 'dest': 'push_to_hub', 'default': True, 'help': 'type: ignore[assignment] # TODO: use a different name to avoid override', 'type': <class 'bool'>} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'repo_id': {'required': False, 'dest': 'repo_id', 'default': None, 'type': typing.Optional[str]} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'private': {'required': False, 'dest': 'private', 'default': None, 'help': 'Upload on private repository on the Hugging Face hub.', 'type': typing.Optional[bool]} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'tags': {'required': False, 'dest': 'tags', 'default': None, 'help': 'Add tags to your policy on the hub.', 'type': typing.Optional[typing.List[str]]} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'license': {'required': False, 'dest': 'license', 'default': None, 'help': 'Add tags to your policy on the hub.', 'type': typing.Optional[str]} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'pretrained_path': {'required': False, 'dest': 'pretrained_path', 'default': None, 'help': 'Either the repo ID of a model hosted on the Hub or a path to a directory containing weights\nsaved using `Policy.save_pretrained`. If not provided, the policy is initialized from scratch.', 'type': typing.Optional[pathlib.Path]} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'chunk_size': {'required': False, 'dest': 'chunk_size', 'default': 100, 'help': ' ', 'type': <class 'int'>} |
| DEBUG 2026-01-28 12:43:29 ocstring.py:253 Warning: Unable to parse attribute docstring: This should be no greater than the chunk size. For example, if the chunk size size 100, you may |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'n_action_steps': {'required': False, 'dest': 'n_action_steps', 'default': 100, 'help': ' ', 'type': <class 'int'>} |
| DEBUG 2026-01-28 12:43:29 ocstring.py:253 Warning: Unable to parse attribute docstring: default_factory=lambda: { |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'normalization_mapping': {'required': False, 'dest': 'normalization_mapping', 'default': {'VISUAL': <NormalizationMode.MEAN_STD: 'MEAN_STD'>, 'STATE': <NormalizationMode.MEAN_STD: 'MEAN_STD'>, 'ACTION': <NormalizationMode.MEAN_STD: 'MEAN_STD'>}, 'help': ' ', 'type': typing.Dict[str, lerobot.configs.types.NormalizationMode]} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'vision_backbone': {'required': False, 'dest': 'vision_backbone', 'default': 'resnet18', 'help': ' ', 'type': <class 'str'>} |
| DEBUG 2026-01-28 12:43:29 ocstring.py:253 Warning: Unable to parse attribute docstring: `None` means no pretrained weights. |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'pretrained_backbone_weights': {'required': False, 'dest': 'pretrained_backbone_weights', 'default': 'ResNet18_Weights.IMAGENET1K_V1', 'help': ' ', 'type': typing.Optional[str]} |
| DEBUG 2026-01-28 12:43:29 ocstring.py:253 Warning: Unable to parse attribute docstring: convolution. |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'replace_final_stride_with_dilation': {'required': False, 'dest': 'replace_final_stride_with_dilation', 'default': False, 'help': ' ', 'type': <class 'int'>} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'pre_norm': {'required': False, 'dest': 'pre_norm', 'default': False, 'help': ' ', 'type': <class 'bool'>} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'dim_model': {'required': False, 'dest': 'dim_model', 'default': 512, 'help': ' ', 'type': <class 'int'>} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'n_heads': {'required': False, 'dest': 'n_heads', 'default': 8, 'help': ' ', 'type': <class 'int'>} |
| DEBUG 2026-01-28 12:43:29 ocstring.py:253 Warning: Unable to parse attribute docstring: layers. |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'dim_feedforward': {'required': False, 'dest': 'dim_feedforward', 'default': 3200, 'help': ' ', 'type': <class 'int'>} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'feedforward_activation': {'required': False, 'dest': 'feedforward_activation', 'default': 'relu', 'help': ' ', 'type': <class 'str'>} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'n_encoder_layers': {'required': False, 'dest': 'n_encoder_layers', 'default': 4, 'help': ' ', 'type': <class 'int'>} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'n_decoder_layers': {'required': False, 'dest': 'n_decoder_layers', 'default': 1, 'help': ' ', 'type': <class 'int'>} |
| DEBUG 2026-01-28 12:43:29 ocstring.py:253 Warning: Unable to parse attribute docstring: which is used as the VAE's encoder (not to be confused with the transformer encoder - see |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'use_vae': {'required': False, 'dest': 'use_vae', 'default': True, 'help': ' ', 'type': <class 'bool'>} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'latent_dim': {'required': False, 'dest': 'latent_dim', 'default': 32, 'help': '\n', 'type': <class 'int'>} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'n_vae_encoder_layers': {'required': False, 'dest': 'n_vae_encoder_layers', 'default': 4, 'help': ' ', 'type': <class 'int'>} |
| DEBUG 2026-01-28 12:43:29 ocstring.py:253 Warning: Unable to parse attribute docstring: ensembling. Defaults to None which means temporal ensembling is not used. `n_action_steps` must be |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'temporal_ensemble_coeff': {'required': False, 'dest': 'temporal_ensemble_coeff', 'default': None, 'type': typing.Optional[float]} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'dropout': {'required': False, 'dest': 'dropout', 'default': 0.1, 'help': '\n\n', 'type': <class 'float'>} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'kl_weight': {'required': False, 'dest': 'kl_weight', 'default': 10.0, 'help': ' ', 'type': <class 'float'>} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'optimizer_lr': {'required': False, 'dest': 'optimizer_lr', 'default': 1e-05, 'help': 'Training preset', 'type': <class 'float'>} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'optimizer_weight_decay': {'required': False, 'dest': 'optimizer_weight_decay', 'default': 0.0001, 'help': ' ', 'type': <class 'float'>} |
| DEBUG 2026-01-28 12:43:29 _wrapper.py:309 Arg options for field 'optimizer_lr_backbone': {'required': False, 'dest': 'optimizer_lr_backbone', 'default': 1e-05, 'help': ' ', 'type': <class 'float'>} |
| DEBUG 2026-01-28 12:43:29 gparsing.py:143 |
| POST PROCESSING |
| DEBUG 2026-01-28 12:43:29 gparsing.py:144 (raw) parsed args: Namespace() |
| DEBUG 2026-01-28 12:43:29 decoding.py:112 from_dict for <class 'lerobot.policies.act.configuration_act.ACTConfig'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = n_obs_steps, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = input_features, type = dict[str, lerobot.configs.types.PolicyFeature] |
| DEBUG 2026-01-28 12:43:29 decoding.py:112 from_dict for <class 'lerobot.configs.types.PolicyFeature'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = type, type = <enum 'FeatureType'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = shape, type = tuple[int, ...] |
| DEBUG 2026-01-28 12:43:29 decoding.py:112 from_dict for <class 'lerobot.configs.types.PolicyFeature'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = type, type = <enum 'FeatureType'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = shape, type = tuple[int, ...] |
| DEBUG 2026-01-28 12:43:29 decoding.py:112 from_dict for <class 'lerobot.configs.types.PolicyFeature'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = type, type = <enum 'FeatureType'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = shape, type = tuple[int, ...] |
| DEBUG 2026-01-28 12:43:29 decoding.py:112 from_dict for <class 'lerobot.configs.types.PolicyFeature'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = type, type = <enum 'FeatureType'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = shape, type = tuple[int, ...] |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = output_features, type = dict[str, lerobot.configs.types.PolicyFeature] |
| DEBUG 2026-01-28 12:43:29 decoding.py:112 from_dict for <class 'lerobot.configs.types.PolicyFeature'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = type, type = <enum 'FeatureType'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = shape, type = tuple[int, ...] |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = device, type = typing.Optional[str] |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = use_amp, type = <class 'bool'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = push_to_hub, type = <class 'bool'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = repo_id, type = typing.Optional[str] |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = private, type = typing.Optional[bool] |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = tags, type = typing.Optional[list[str]] |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = license, type = typing.Optional[str] |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = pretrained_path, type = typing.Optional[pathlib.Path] |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = chunk_size, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = n_action_steps, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = normalization_mapping, type = dict[str, lerobot.configs.types.NormalizationMode] |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = vision_backbone, type = <class 'str'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = pretrained_backbone_weights, type = typing.Optional[str] |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = replace_final_stride_with_dilation, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = pre_norm, type = <class 'bool'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = dim_model, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = n_heads, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = dim_feedforward, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = feedforward_activation, type = <class 'str'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = n_encoder_layers, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = n_decoder_layers, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = use_vae, type = <class 'bool'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = latent_dim, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = n_vae_encoder_layers, type = <class 'int'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = temporal_ensemble_coeff, type = typing.Optional[float] |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = dropout, type = <class 'float'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = kl_weight, type = <class 'float'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = optimizer_lr, type = <class 'float'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = optimizer_weight_decay, type = <class 'float'> |
| DEBUG 2026-01-28 12:43:29 decoding.py:132 Decode name = optimizer_lr_backbone, type = <class 'float'> |
| INFO 2026-01-28 12:43:29 y_server.py:171 Time taken to put policy on cuda: 0.5942 seconds |
| DEBUG 2026-01-28 12:43:29 y_server.py:178 Receiving observations from ipv4:127.0.0.1:43896 |
| INFO 2026-01-28 12:43:29 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver |
| DEBUG 2026-01-28 12:43:29 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:29 ort/utils.py:85 <Logger policy_server (NOTSET)> Received data at step 0 |
| DEBUG 2026-01-28 12:43:29 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:29 ort/utils.py:93 <Logger policy_server (NOTSET)> Received data at step end size 2765657 |
| DEBUG 2026-01-28 12:43:29 y_server.py:188 Received observation #1 |
| DEBUG 2026-01-28 12:43:29 y_server.py:196 Received observation #1 | Avg FPS: 0.00 | Target: 30.00 | One-way latency: 1.51ms |
| DEBUG 2026-01-28 12:43:29 y_server.py:203 Server timestamp: 1769575409.927519 | Client timestamp: 1769575409.926011 | Deserialization time: 0.008081s |
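The 1.51 ms one-way latency reported two lines up is consistent with the raw timestamps printed here, assuming it is simply server receive time minus client send time (both clocks are on the same machine, 127.0.0.1, so they are directly comparable):

```python
server_ts = 1769575409.927519  # server timestamp from the log line above
client_ts = 1769575409.926011  # client timestamp from the same line

# One-way latency in milliseconds, as reported at y_server.py:196.
one_way_ms = (server_ts - client_ts) * 1000  # ≈ 1.51 ms
```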
| DEBUG 2026-01-28 12:43:29 y_server.py:298 Enqueuing observation. Must go: False | Last processed obs: None |
| DEBUG 2026-01-28 12:43:29 y_server.py:220 Client ipv4:127.0.0.1:43896 connected for action streaming |
| INFO 2026-01-28 12:43:29 y_server.py:226 Running inference for observation #1 (must_go: False) |
| INFO 2026-01-28 12:43:30 y_server.py:361 Preprocessing and inference took 0.1883s, action shape: torch.Size([1, 16, 16]) |
| DEBUG 2026-01-28 12:43:30 y_server.py:382 Postprocessed action shape: torch.Size([16, 16]) |
| INFO 2026-01-28 12:43:30 y_server.py:391 Observation 1 | Total time: 201.59ms |
| DEBUG 2026-01-28 12:43:30 y_server.py:396 Observation 1 | Prepare time: 0.91ms | Preprocessing time: 10.82ms | Inference time: 188.28ms | Postprocessing time: 1.45ms | Total time: 201.59ms |
| INFO 2026-01-28 12:43:30 y_server.py:244 Action chunk #1 generated | Total time: 202.52ms |
| DEBUG 2026-01-28 12:43:30 y_server.py:249 Action chunk #1 generated | Inference time: 0.20s |Serialize time: 0.00s |Total time: 0.20s |
| DEBUG 2026-01-28 12:43:30 y_server.py:178 Receiving observations from ipv4:127.0.0.1:43896 |
| INFO 2026-01-28 12:43:30 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver |
| DEBUG 2026-01-28 12:43:30 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:30 ort/utils.py:85 <Logger policy_server (NOTSET)> Received data at step 0 |
| DEBUG 2026-01-28 12:43:30 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:30 ort/utils.py:93 <Logger policy_server (NOTSET)> Received data at step end size 2765657 |
| DEBUG 2026-01-28 12:43:30 y_server.py:188 Received observation #2 |
| DEBUG 2026-01-28 12:43:30 y_server.py:196 Received observation #2 | Avg FPS: 1.05 | Target: 30.00 | One-way latency: 0.98ms |
| DEBUG 2026-01-28 12:43:30 y_server.py:203 Server timestamp: 1769575410.880227 | Client timestamp: 1769575410.879248 | Deserialization time: 0.004448s |
| DEBUG 2026-01-28 12:43:30 y_server.py:298 Enqueuing observation. Must go: False | Last processed obs: 1 |
| DEBUG 2026-01-28 12:43:30 y_server.py:220 Client ipv4:127.0.0.1:43896 connected for action streaming |
| INFO 2026-01-28 12:43:30 y_server.py:226 Running inference for observation #2 (must_go: False) |
| INFO 2026-01-28 12:43:30 y_server.py:361 Preprocessing and inference took 0.0070s, action shape: torch.Size([1, 16, 16]) |
| DEBUG 2026-01-28 12:43:30 y_server.py:382 Postprocessed action shape: torch.Size([16, 16]) |
| INFO 2026-01-28 12:43:30 y_server.py:391 Observation 2 | Total time: 9.91ms |
| DEBUG 2026-01-28 12:43:30 y_server.py:396 Observation 2 | Prepare time: 1.44ms | Preprocessing time: 0.97ms | Inference time: 6.99ms | Postprocessing time: 0.41ms | Total time: 9.91ms |
| INFO 2026-01-28 12:43:30 y_server.py:244 Action chunk #2 generated | Total time: 11.26ms |
| DEBUG 2026-01-28 12:43:30 y_server.py:249 Action chunk #2 generated | Inference time: 0.01s |Serialize time: 0.00s |Total time: 0.01s |
| DEBUG 2026-01-28 12:43:31 y_server.py:178 Receiving observations from ipv4:127.0.0.1:43896 |
| INFO 2026-01-28 12:43:31 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver |
| DEBUG 2026-01-28 12:43:31 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:31 ort/utils.py:85 <Logger policy_server (NOTSET)> Received data at step 0 |
| DEBUG 2026-01-28 12:43:31 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:31 ort/utils.py:93 <Logger policy_server (NOTSET)> Received data at step end size 2765657 |
| DEBUG 2026-01-28 12:43:31 y_server.py:188 Received observation #3 |
| DEBUG 2026-01-28 12:43:31 y_server.py:196 Received observation #3 | Avg FPS: 1.15 | Target: 30.00 | One-way latency: 0.99ms |
| DEBUG 2026-01-28 12:43:31 y_server.py:203 Server timestamp: 1769575411.672275 | Client timestamp: 1769575411.671285 | Deserialization time: 0.009056s |
| DEBUG 2026-01-28 12:43:31 y_server.py:298 Enqueuing observation. Must go: False | Last processed obs: 2 |
| DEBUG 2026-01-28 12:43:31 y_server.py:220 Client ipv4:127.0.0.1:43896 connected for action streaming |
| INFO 2026-01-28 12:43:31 y_server.py:226 Running inference for observation #3 (must_go: False) |
| INFO 2026-01-28 12:43:31 y_server.py:361 Preprocessing and inference took 0.0071s, action shape: torch.Size([1, 16, 16]) |
| DEBUG 2026-01-28 12:43:31 y_server.py:382 Postprocessed action shape: torch.Size([16, 16]) |
| INFO 2026-01-28 12:43:31 y_server.py:391 Observation 3 | Total time: 9.32ms |
| DEBUG 2026-01-28 12:43:31 y_server.py:396 Observation 3 | Prepare time: 0.72ms | Preprocessing time: 0.98ms | Inference time: 7.11ms | Postprocessing time: 0.41ms | Total time: 9.32ms |
| INFO 2026-01-28 12:43:31 y_server.py:244 Action chunk #3 generated | Total time: 10.70ms |
| DEBUG 2026-01-28 12:43:31 y_server.py:249 Action chunk #3 generated | Inference time: 0.01s |Serialize time: 0.00s |Total time: 0.01s |
| DEBUG 2026-01-28 12:43:32 y_server.py:178 Receiving observations from ipv4:127.0.0.1:43896 |
| INFO 2026-01-28 12:43:32 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver |
| DEBUG 2026-01-28 12:43:32 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:32 ort/utils.py:85 <Logger policy_server (NOTSET)> Received data at step 0 |
| DEBUG 2026-01-28 12:43:32 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:32 ort/utils.py:93 <Logger policy_server (NOTSET)> Received data at step end size 2765657 |
| DEBUG 2026-01-28 12:43:32 y_server.py:188 Received observation #4 |
| DEBUG 2026-01-28 12:43:32 y_server.py:196 Received observation #4 | Avg FPS: 1.18 | Target: 30.00 | One-way latency: 0.90ms |
| DEBUG 2026-01-28 12:43:32 y_server.py:203 Server timestamp: 1769575412.464869 | Client timestamp: 1769575412.463971 | Deserialization time: 0.003966s |
| DEBUG 2026-01-28 12:43:32 y_server.py:298 Enqueuing observation. Must go: False | Last processed obs: 3 |
| DEBUG 2026-01-28 12:43:32 y_server.py:220 Client ipv4:127.0.0.1:43896 connected for action streaming |
| INFO 2026-01-28 12:43:32 y_server.py:226 Running inference for observation #4 (must_go: False) |
| INFO 2026-01-28 12:43:32 y_server.py:361 Preprocessing and inference took 0.0068s, action shape: torch.Size([1, 16, 16]) |
| DEBUG 2026-01-28 12:43:32 y_server.py:382 Postprocessed action shape: torch.Size([16, 16]) |
| INFO 2026-01-28 12:43:32 y_server.py:391 Observation 4 | Total time: 9.00ms |
| DEBUG 2026-01-28 12:43:32 y_server.py:396 Observation 4 | Prepare time: 0.71ms | Preprocessing time: 0.97ms | Inference time: 6.83ms | Postprocessing time: 0.40ms | Total time: 9.00ms |
| INFO 2026-01-28 12:43:32 y_server.py:244 Action chunk #4 generated | Total time: 10.46ms |
| DEBUG 2026-01-28 12:43:32 y_server.py:249 Action chunk #4 generated | Inference time: 0.01s |Serialize time: 0.00s |Total time: 0.01s |
| DEBUG 2026-01-28 12:43:33 y_server.py:178 Receiving observations from ipv4:127.0.0.1:43896 |
| INFO 2026-01-28 12:43:33 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver |
| DEBUG 2026-01-28 12:43:33 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:33 ort/utils.py:85 <Logger policy_server (NOTSET)> Received data at step 0 |
| DEBUG 2026-01-28 12:43:33 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:33 ort/utils.py:93 <Logger policy_server (NOTSET)> Received data at step end size 2765657 |
| DEBUG 2026-01-28 12:43:33 y_server.py:188 Received observation #5 |
| DEBUG 2026-01-28 12:43:33 y_server.py:196 Received observation #5 | Avg FPS: 1.20 | Target: 30.00 | One-way latency: 0.91ms |
| DEBUG 2026-01-28 12:43:33 y_server.py:203 Server timestamp: 1769575413.250932 | Client timestamp: 1769575413.250022 | Deserialization time: 0.004107s |
| DEBUG 2026-01-28 12:43:33 y_server.py:298 Enqueuing observation. Must go: False | Last processed obs: 4 |
| DEBUG 2026-01-28 12:43:33 y_server.py:220 Client ipv4:127.0.0.1:43896 connected for action streaming |
| INFO 2026-01-28 12:43:33 y_server.py:226 Running inference for observation #5 (must_go: False) |
| INFO 2026-01-28 12:43:33 y_server.py:361 Preprocessing and inference took 0.0098s, action shape: torch.Size([1, 16, 16]) |
| DEBUG 2026-01-28 12:43:33 y_server.py:382 Postprocessed action shape: torch.Size([16, 16]) |
| INFO 2026-01-28 12:43:33 y_server.py:391 Observation 5 | Total time: 12.78ms |
| DEBUG 2026-01-28 12:43:33 y_server.py:396 Observation 5 | Prepare time: 0.96ms | Preprocessing time: 0.97ms | Inference time: 9.83ms | Postprocessing time: 0.87ms | Total time: 12.78ms |
| INFO 2026-01-28 12:43:33 y_server.py:244 Action chunk #5 generated | Total time: 13.56ms |
| DEBUG 2026-01-28 12:43:33 y_server.py:249 Action chunk #5 generated | Inference time: 0.01s |Serialize time: 0.00s |Total time: 0.01s |
| DEBUG 2026-01-28 12:43:34 y_server.py:178 Receiving observations from ipv4:127.0.0.1:43896 |
| INFO 2026-01-28 12:43:34 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver |
| DEBUG 2026-01-28 12:43:34 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:34 ort/utils.py:85 <Logger policy_server (NOTSET)> Received data at step 0 |
| DEBUG 2026-01-28 12:43:34 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:34 ort/utils.py:93 <Logger policy_server (NOTSET)> Received data at step end size 2765657 |
| DEBUG 2026-01-28 12:43:34 y_server.py:188 Received observation #6 |
| DEBUG 2026-01-28 12:43:34 y_server.py:196 Received observation #6 | Avg FPS: 1.21 | Target: 30.00 | One-way latency: 0.91ms |
| DEBUG 2026-01-28 12:43:34 y_server.py:203 Server timestamp: 1769575414.043910 | Client timestamp: 1769575414.042996 | Deserialization time: 0.006037s |
| DEBUG 2026-01-28 12:43:34 y_server.py:298 Enqueuing observation. Must go: False | Last processed obs: 5 |
| DEBUG 2026-01-28 12:43:34 y_server.py:220 Client ipv4:127.0.0.1:43896 connected for action streaming |
| INFO 2026-01-28 12:43:34 y_server.py:226 Running inference for observation #6 (must_go: False) |
| INFO 2026-01-28 12:43:34 y_server.py:361 Preprocessing and inference took 0.0073s, action shape: torch.Size([1, 16, 16]) |
| DEBUG 2026-01-28 12:43:34 y_server.py:382 Postprocessed action shape: torch.Size([16, 16]) |
| INFO 2026-01-28 12:43:34 y_server.py:391 Observation 6 | Total time: 9.61ms |
| DEBUG 2026-01-28 12:43:34 y_server.py:396 Observation 6 | Prepare time: 0.78ms | Preprocessing time: 1.05ms | Inference time: 7.29ms | Postprocessing time: 0.41ms | Total time: 9.61ms |
| INFO 2026-01-28 12:43:34 y_server.py:244 Action chunk #6 generated | Total time: 11.15ms |
| DEBUG 2026-01-28 12:43:34 y_server.py:249 Action chunk #6 generated | Inference time: 0.01s |Serialize time: 0.00s |Total time: 0.01s |
| DEBUG 2026-01-28 12:43:34 y_server.py:178 Receiving observations from ipv4:127.0.0.1:43896 |
| INFO 2026-01-28 12:43:34 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver |
| DEBUG 2026-01-28 12:43:34 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:34 ort/utils.py:85 <Logger policy_server (NOTSET)> Received data at step 0 |
| DEBUG 2026-01-28 12:43:34 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:34 ort/utils.py:93 <Logger policy_server (NOTSET)> Received data at step end size 2765657 |
| DEBUG 2026-01-28 12:43:34 y_server.py:188 Received observation #7 |
| DEBUG 2026-01-28 12:43:34 y_server.py:196 Received observation #7 | Avg FPS: 1.22 | Target: 30.00 | One-way latency: 0.90ms |
| DEBUG 2026-01-28 12:43:34 y_server.py:203 Server timestamp: 1769575414.837669 | Client timestamp: 1769575414.836767 | Deserialization time: 0.004666s |
| DEBUG 2026-01-28 12:43:34 y_server.py:298 Enqueuing observation. Must go: False | Last processed obs: 6 |
| DEBUG 2026-01-28 12:43:34 y_server.py:220 Client ipv4:127.0.0.1:43896 connected for action streaming |
| INFO 2026-01-28 12:43:34 y_server.py:226 Running inference for observation #7 (must_go: False) |
| INFO 2026-01-28 12:43:34 y_server.py:361 Preprocessing and inference took 0.0070s, action shape: torch.Size([1, 16, 16]) |
| DEBUG 2026-01-28 12:43:34 y_server.py:382 Postprocessed action shape: torch.Size([16, 16]) |
| INFO 2026-01-28 12:43:34 y_server.py:391 Observation 7 | Total time: 9.39ms |
| DEBUG 2026-01-28 12:43:34 y_server.py:396 Observation 7 | Prepare time: 0.91ms | Preprocessing time: 0.98ms | Inference time: 6.99ms | Postprocessing time: 0.43ms | Total time: 9.39ms |
| INFO 2026-01-28 12:43:34 y_server.py:244 Action chunk #7 generated | Total time: 10.72ms |
| DEBUG 2026-01-28 12:43:34 y_server.py:249 Action chunk #7 generated | Inference time: 0.01s |Serialize time: 0.00s |Total time: 0.01s |
| DEBUG 2026-01-28 12:43:35 y_server.py:178 Receiving observations from ipv4:127.0.0.1:43896 |
| INFO 2026-01-28 12:43:35 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver |
| DEBUG 2026-01-28 12:43:35 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:35 ort/utils.py:85 <Logger policy_server (NOTSET)> Received data at step 0 |
| DEBUG 2026-01-28 12:43:35 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:35 ort/utils.py:93 <Logger policy_server (NOTSET)> Received data at step end size 2765657 |
| DEBUG 2026-01-28 12:43:35 y_server.py:188 Received observation #8 |
| DEBUG 2026-01-28 12:43:35 y_server.py:196 Received observation #8 | Avg FPS: 1.23 | Target: 30.00 | One-way latency: 0.93ms |
| DEBUG 2026-01-28 12:43:35 y_server.py:203 Server timestamp: 1769575415.621255 | Client timestamp: 1769575415.620322 | Deserialization time: 0.003469s |
| DEBUG 2026-01-28 12:43:35 y_server.py:298 Enqueuing observation. Must go: False | Last processed obs: 7 |
| DEBUG 2026-01-28 12:43:35 y_server.py:220 Client ipv4:127.0.0.1:43896 connected for action streaming |
| INFO 2026-01-28 12:43:35 y_server.py:226 Running inference for observation #8 (must_go: False) |
| INFO 2026-01-28 12:43:35 y_server.py:361 Preprocessing and inference took 0.0070s, action shape: torch.Size([1, 16, 16]) |
| DEBUG 2026-01-28 12:43:35 y_server.py:382 Postprocessed action shape: torch.Size([16, 16]) |
| INFO 2026-01-28 12:43:35 y_server.py:391 Observation 8 | Total time: 9.68ms |
| DEBUG 2026-01-28 12:43:35 y_server.py:396 Observation 8 | Prepare time: 0.71ms | Preprocessing time: 1.49ms | Inference time: 6.95ms | Postprocessing time: 0.44ms | Total time: 9.68ms |
| INFO 2026-01-28 12:43:35 y_server.py:244 Action chunk #8 generated | Total time: 11.09ms |
| DEBUG 2026-01-28 12:43:35 y_server.py:249 Action chunk #8 generated | Inference time: 0.01s |Serialize time: 0.00s |Total time: 0.01s |
| DEBUG 2026-01-28 12:43:36 y_server.py:178 Receiving observations from ipv4:127.0.0.1:43896 |
| INFO 2026-01-28 12:43:36 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver |
| DEBUG 2026-01-28 12:43:36 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:36 ort/utils.py:85 <Logger policy_server (NOTSET)> Received data at step 0 |
| DEBUG 2026-01-28 12:43:36 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:36 ort/utils.py:93 <Logger policy_server (NOTSET)> Received data at step end size 2765657 |
| DEBUG 2026-01-28 12:43:36 y_server.py:188 Received observation #9 |
| DEBUG 2026-01-28 12:43:36 y_server.py:196 Received observation #9 | Avg FPS: 1.24 | Target: 30.00 | One-way latency: 0.92ms |
| DEBUG 2026-01-28 12:43:36 y_server.py:203 Server timestamp: 1769575416.400435 | Client timestamp: 1769575416.399519 | Deserialization time: 0.004359s |
| DEBUG 2026-01-28 12:43:36 y_server.py:298 Enqueuing observation. Must go: False | Last processed obs: 8 |
| DEBUG 2026-01-28 12:43:36 y_server.py:220 Client ipv4:127.0.0.1:43896 connected for action streaming |
| INFO 2026-01-28 12:43:36 y_server.py:226 Running inference for observation #9 (must_go: False) |
| INFO 2026-01-28 12:43:36 y_server.py:361 Preprocessing and inference took 0.0069s, action shape: torch.Size([1, 16, 16]) |
| DEBUG 2026-01-28 12:43:36 y_server.py:382 Postprocessed action shape: torch.Size([16, 16]) |
| INFO 2026-01-28 12:43:36 y_server.py:391 Observation 9 | Total time: 9.65ms |
| DEBUG 2026-01-28 12:43:36 y_server.py:396 Observation 9 | Prepare time: 1.27ms | Preprocessing time: 0.98ms | Inference time: 6.90ms | Postprocessing time: 0.41ms | Total time: 9.65ms |
| INFO 2026-01-28 12:43:36 y_server.py:244 Action chunk #9 generated | Total time: 11.03ms |
| DEBUG 2026-01-28 12:43:36 y_server.py:249 Action chunk #9 generated | Inference time: 0.01s |Serialize time: 0.00s |Total time: 0.01s |
| DEBUG 2026-01-28 12:43:37 y_server.py:178 Receiving observations from ipv4:127.0.0.1:43896 |
| INFO 2026-01-28 12:43:37 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver |
| DEBUG 2026-01-28 12:43:37 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:37 ort/utils.py:85 <Logger policy_server (NOTSET)> Received data at step 0 |
| DEBUG 2026-01-28 12:43:37 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:37 ort/utils.py:93 <Logger policy_server (NOTSET)> Received data at step end size 2765657 |
| DEBUG 2026-01-28 12:43:37 y_server.py:188 Received observation #10 |
| DEBUG 2026-01-28 12:43:37 y_server.py:196 Received observation #10 | Avg FPS: 1.24 | Target: 30.00 | One-way latency: 0.89ms |
| DEBUG 2026-01-28 12:43:37 y_server.py:203 Server timestamp: 1769575417.181859 | Client timestamp: 1769575417.180969 | Deserialization time: 0.008986s |
| DEBUG 2026-01-28 12:43:37 y_server.py:298 Enqueuing observation. Must go: False | Last processed obs: 9 |
| DEBUG 2026-01-28 12:43:37 y_server.py:220 Client ipv4:127.0.0.1:43896 connected for action streaming |
| INFO 2026-01-28 12:43:37 y_server.py:226 Running inference for observation #10 (must_go: False) |
| INFO 2026-01-28 12:43:37 y_server.py:361 Preprocessing and inference took 0.0069s, action shape: torch.Size([1, 16, 16]) |
| DEBUG 2026-01-28 12:43:37 y_server.py:382 Postprocessed action shape: torch.Size([16, 16]) |
| INFO 2026-01-28 12:43:37 y_server.py:391 Observation 10 | Total time: 9.30ms |
| DEBUG 2026-01-28 12:43:37 y_server.py:396 Observation 10 | Prepare time: 0.89ms | Preprocessing time: 0.99ms | Inference time: 6.91ms | Postprocessing time: 0.42ms | Total time: 9.30ms |
| INFO 2026-01-28 12:43:37 y_server.py:244 Action chunk #10 generated | Total time: 10.76ms |
| DEBUG 2026-01-28 12:43:37 y_server.py:249 Action chunk #10 generated | Inference time: 0.01s |Serialize time: 0.00s |Total time: 0.01s |
| DEBUG 2026-01-28 12:43:37 y_server.py:178 Receiving observations from ipv4:127.0.0.1:43896 |
| INFO 2026-01-28 12:43:37 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver |
| DEBUG 2026-01-28 12:43:37 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:37 ort/utils.py:85 <Logger policy_server (NOTSET)> Received data at step 0 |
| DEBUG 2026-01-28 12:43:37 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:37 ort/utils.py:93 <Logger policy_server (NOTSET)> Received data at step end size 2765657 |
| DEBUG 2026-01-28 12:43:37 y_server.py:188 Received observation #11 |
| DEBUG 2026-01-28 12:43:37 y_server.py:196 Received observation #11 | Avg FPS: 1.24 | Target: 30.00 | One-way latency: 0.87ms |
| DEBUG 2026-01-28 12:43:37 y_server.py:203 Server timestamp: 1769575417.966122 | Client timestamp: 1769575417.965249 | Deserialization time: 0.003195s |
| DEBUG 2026-01-28 12:43:37 y_server.py:298 Enqueuing observation. Must go: False | Last processed obs: 10 |
| DEBUG 2026-01-28 12:43:37 y_server.py:220 Client ipv4:127.0.0.1:43896 connected for action streaming |
| INFO 2026-01-28 12:43:37 y_server.py:226 Running inference for observation #11 (must_go: False) |
| INFO 2026-01-28 12:43:37 y_server.py:361 Preprocessing and inference took 0.0070s, action shape: torch.Size([1, 16, 16]) |
| DEBUG 2026-01-28 12:43:37 y_server.py:382 Postprocessed action shape: torch.Size([16, 16]) |
| INFO 2026-01-28 12:43:37 y_server.py:391 Observation 11 | Total time: 9.35ms |
| DEBUG 2026-01-28 12:43:37 y_server.py:396 Observation 11 | Prepare time: 0.80ms | Preprocessing time: 0.97ms | Inference time: 7.02ms | Postprocessing time: 0.48ms | Total time: 9.35ms |
| INFO 2026-01-28 12:43:37 y_server.py:244 Action chunk #11 generated | Total time: 10.74ms |
| DEBUG 2026-01-28 12:43:37 y_server.py:249 Action chunk #11 generated | Inference time: 0.01s |Serialize time: 0.00s |Total time: 0.01s |
| DEBUG 2026-01-28 12:43:38 y_server.py:178 Receiving observations from ipv4:127.0.0.1:43896 |
| INFO 2026-01-28 12:43:38 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver |
| DEBUG 2026-01-28 12:43:38 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:38 ort/utils.py:85 <Logger policy_server (NOTSET)> Received data at step 0 |
| DEBUG 2026-01-28 12:43:38 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:38 ort/utils.py:93 <Logger policy_server (NOTSET)> Received data at step end size 2765657 |
| DEBUG 2026-01-28 12:43:38 y_server.py:188 Received observation #12 |
| DEBUG 2026-01-28 12:43:38 y_server.py:196 Received observation #12 | Avg FPS: 1.25 | Target: 30.00 | One-way latency: 0.89ms |
| DEBUG 2026-01-28 12:43:38 y_server.py:203 Server timestamp: 1769575418.738632 | Client timestamp: 1769575418.737745 | Deserialization time: 0.003909s |
| DEBUG 2026-01-28 12:43:38 y_server.py:298 Enqueuing observation. Must go: False | Last processed obs: 11 |
| DEBUG 2026-01-28 12:43:38 y_server.py:220 Client ipv4:127.0.0.1:43896 connected for action streaming |
| INFO 2026-01-28 12:43:38 y_server.py:226 Running inference for observation #12 (must_go: False) |
| INFO 2026-01-28 12:43:38 y_server.py:361 Preprocessing and inference took 0.0072s, action shape: torch.Size([1, 16, 16]) |
| DEBUG 2026-01-28 12:43:38 y_server.py:382 Postprocessed action shape: torch.Size([16, 16]) |
| INFO 2026-01-28 12:43:38 y_server.py:391 Observation 12 | Total time: 9.63ms |
| DEBUG 2026-01-28 12:43:38 y_server.py:396 Observation 12 | Prepare time: 0.92ms | Preprocessing time: 0.97ms | Inference time: 7.17ms | Postprocessing time: 0.47ms | Total time: 9.63ms |
| INFO 2026-01-28 12:43:38 y_server.py:244 Action chunk #12 generated | Total time: 10.90ms |
| DEBUG 2026-01-28 12:43:38 y_server.py:249 Action chunk #12 generated | Inference time: 0.01s |Serialize time: 0.00s |Total time: 0.01s |
| DEBUG 2026-01-28 12:43:39 y_server.py:178 Receiving observations from ipv4:127.0.0.1:43896 |
| INFO 2026-01-28 12:43:39 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver |
| DEBUG 2026-01-28 12:43:39 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:39 ort/utils.py:85 <Logger policy_server (NOTSET)> Received data at step 0 |
| DEBUG 2026-01-28 12:43:39 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:39 ort/utils.py:93 <Logger policy_server (NOTSET)> Received data at step end size 2765657 |
| DEBUG 2026-01-28 12:43:39 y_server.py:188 Received observation #13 |
| DEBUG 2026-01-28 12:43:39 y_server.py:196 Received observation #13 | Avg FPS: 1.25 | Target: 30.00 | One-way latency: 0.92ms |
| DEBUG 2026-01-28 12:43:39 y_server.py:203 Server timestamp: 1769575419.512927 | Client timestamp: 1769575419.512004 | Deserialization time: 0.003238s |
| DEBUG 2026-01-28 12:43:39 y_server.py:298 Enqueuing observation. Must go: False | Last processed obs: 12 |
| DEBUG 2026-01-28 12:43:39 y_server.py:220 Client ipv4:127.0.0.1:43896 connected for action streaming |
| INFO 2026-01-28 12:43:39 y_server.py:226 Running inference for observation #13 (must_go: False) |
| INFO 2026-01-28 12:43:39 y_server.py:361 Preprocessing and inference took 0.0068s, action shape: torch.Size([1, 16, 16]) |
| DEBUG 2026-01-28 12:43:39 y_server.py:382 Postprocessed action shape: torch.Size([16, 16]) |
| INFO 2026-01-28 12:43:39 y_server.py:391 Observation 13 | Total time: 9.14ms |
| DEBUG 2026-01-28 12:43:39 y_server.py:396 Observation 13 | Prepare time: 0.89ms | Preprocessing time: 1.00ms | Inference time: 6.78ms | Postprocessing time: 0.38ms | Total time: 9.14ms |
| INFO 2026-01-28 12:43:39 y_server.py:244 Action chunk #13 generated | Total time: 10.71ms |
| DEBUG 2026-01-28 12:43:39 y_server.py:249 Action chunk #13 generated | Inference time: 0.01s |Serialize time: 0.00s |Total time: 0.01s |
| DEBUG 2026-01-28 12:43:40 y_server.py:178 Receiving observations from ipv4:127.0.0.1:43896 |
| INFO 2026-01-28 12:43:40 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver |
| DEBUG 2026-01-28 12:43:40 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:40 ort/utils.py:85 <Logger policy_server (NOTSET)> Received data at step 0 |
| DEBUG 2026-01-28 12:43:40 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:40 ort/utils.py:93 <Logger policy_server (NOTSET)> Received data at step end size 2765657 |
| DEBUG 2026-01-28 12:43:40 y_server.py:188 Received observation #14 |
| DEBUG 2026-01-28 12:43:40 y_server.py:196 Received observation #14 | Avg FPS: 1.25 | Target: 30.00 | One-way latency: 0.93ms |
| DEBUG 2026-01-28 12:43:40 y_server.py:203 Server timestamp: 1769575420.297206 | Client timestamp: 1769575420.296276 | Deserialization time: 0.002909s |
| DEBUG 2026-01-28 12:43:40 y_server.py:298 Enqueuing observation. Must go: False | Last processed obs: 13 |
| DEBUG 2026-01-28 12:43:40 y_server.py:220 Client ipv4:127.0.0.1:43896 connected for action streaming |
| INFO 2026-01-28 12:43:40 y_server.py:226 Running inference for observation #14 (must_go: False) |
| INFO 2026-01-28 12:43:40 y_server.py:361 Preprocessing and inference took 0.0070s, action shape: torch.Size([1, 16, 16]) |
| DEBUG 2026-01-28 12:43:40 y_server.py:382 Postprocessed action shape: torch.Size([16, 16]) |
| INFO 2026-01-28 12:43:40 y_server.py:391 Observation 14 | Total time: 9.37ms |
| DEBUG 2026-01-28 12:43:40 y_server.py:396 Observation 14 | Prepare time: 0.76ms | Preprocessing time: 1.05ms | Inference time: 7.02ms | Postprocessing time: 0.45ms | Total time: 9.37ms |
| INFO 2026-01-28 12:43:40 y_server.py:244 Action chunk #14 generated | Total time: 10.68ms |
| DEBUG 2026-01-28 12:43:40 y_server.py:249 Action chunk #14 generated | Inference time: 0.01s |Serialize time: 0.00s |Total time: 0.01s |
| DEBUG 2026-01-28 12:43:41 y_server.py:178 Receiving observations from ipv4:127.0.0.1:43896 |
| INFO 2026-01-28 12:43:41 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver |
| DEBUG 2026-01-28 12:43:41 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:41 ort/utils.py:85 <Logger policy_server (NOTSET)> Received data at step 0 |
| DEBUG 2026-01-28 12:43:41 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:41 ort/utils.py:93 <Logger policy_server (NOTSET)> Received data at step end size 2765657 |
| DEBUG 2026-01-28 12:43:41 y_server.py:188 Received observation #15 |
| DEBUG 2026-01-28 12:43:41 y_server.py:196 Received observation #15 | Avg FPS: 1.26 | Target: 30.00 | One-way latency: 0.88ms |
| DEBUG 2026-01-28 12:43:41 y_server.py:203 Server timestamp: 1769575421.063637 | Client timestamp: 1769575421.062762 | Deserialization time: 0.004127s |
| DEBUG 2026-01-28 12:43:41 y_server.py:298 Enqueuing observation. Must go: False | Last processed obs: 14 |
| DEBUG 2026-01-28 12:43:41 y_server.py:220 Client ipv4:127.0.0.1:43896 connected for action streaming |
| INFO 2026-01-28 12:43:41 y_server.py:226 Running inference for observation #15 (must_go: False) |
| INFO 2026-01-28 12:43:41 y_server.py:361 Preprocessing and inference took 0.0073s, action shape: torch.Size([1, 16, 16]) |
| DEBUG 2026-01-28 12:43:41 y_server.py:382 Postprocessed action shape: torch.Size([16, 16]) |
| INFO 2026-01-28 12:43:41 y_server.py:391 Observation 15 | Total time: 9.82ms |
| DEBUG 2026-01-28 12:43:41 y_server.py:396 Observation 15 | Prepare time: 1.02ms | Preprocessing time: 0.98ms | Inference time: 7.32ms | Postprocessing time: 0.42ms | Total time: 9.82ms |
| INFO 2026-01-28 12:43:41 y_server.py:244 Action chunk #15 generated | Total time: 10.87ms |
| DEBUG 2026-01-28 12:43:41 y_server.py:249 Action chunk #15 generated | Inference time: 0.01s |Serialize time: 0.00s |Total time: 0.01s |
| DEBUG 2026-01-28 12:43:41 y_server.py:178 Receiving observations from ipv4:127.0.0.1:43896 |
| INFO 2026-01-28 12:43:41 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver |
| DEBUG 2026-01-28 12:43:41 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:41 ort/utils.py:85 <Logger policy_server (NOTSET)> Received data at step 0 |
| DEBUG 2026-01-28 12:43:41 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:41 ort/utils.py:93 <Logger policy_server (NOTSET)> Received data at step end size 2765657 |
| DEBUG 2026-01-28 12:43:41 y_server.py:188 Received observation #16 |
| DEBUG 2026-01-28 12:43:41 y_server.py:196 Received observation #16 | Avg FPS: 1.26 | Target: 30.00 | One-way latency: 0.88ms |
| DEBUG 2026-01-28 12:43:41 y_server.py:203 Server timestamp: 1769575421.837701 | Client timestamp: 1769575421.836823 | Deserialization time: 0.004273s |
| DEBUG 2026-01-28 12:43:41 y_server.py:298 Enqueuing observation. Must go: False | Last processed obs: 15 |
| DEBUG 2026-01-28 12:43:41 y_server.py:220 Client ipv4:127.0.0.1:43896 connected for action streaming |
| INFO 2026-01-28 12:43:41 y_server.py:226 Running inference for observation #16 (must_go: False) |
| INFO 2026-01-28 12:43:41 y_server.py:361 Preprocessing and inference took 0.0070s, action shape: torch.Size([1, 16, 16]) |
| DEBUG 2026-01-28 12:43:41 y_server.py:382 Postprocessed action shape: torch.Size([16, 16]) |
| INFO 2026-01-28 12:43:41 y_server.py:391 Observation 16 | Total time: 9.22ms |
| DEBUG 2026-01-28 12:43:41 y_server.py:396 Observation 16 | Prepare time: 0.79ms | Preprocessing time: 0.98ms | Inference time: 6.97ms | Postprocessing time: 0.40ms | Total time: 9.22ms |
| INFO 2026-01-28 12:43:41 y_server.py:244 Action chunk #16 generated | Total time: 10.83ms |
| DEBUG 2026-01-28 12:43:41 y_server.py:249 Action chunk #16 generated | Inference time: 0.01s |Serialize time: 0.00s |Total time: 0.01s |
| DEBUG 2026-01-28 12:43:42 y_server.py:178 Receiving observations from ipv4:127.0.0.1:43896 |
| INFO 2026-01-28 12:43:42 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver |
| DEBUG 2026-01-28 12:43:42 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:42 ort/utils.py:85 <Logger policy_server (NOTSET)> Received data at step 0 |
| DEBUG 2026-01-28 12:43:42 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:42 ort/utils.py:93 <Logger policy_server (NOTSET)> Received data at step end size 2765657 |
| DEBUG 2026-01-28 12:43:42 y_server.py:188 Received observation #17 |
| DEBUG 2026-01-28 12:43:42 y_server.py:196 Received observation #17 | Avg FPS: 1.26 | Target: 30.00 | One-way latency: 0.94ms |
| DEBUG 2026-01-28 12:43:42 y_server.py:203 Server timestamp: 1769575422.639517 | Client timestamp: 1769575422.638577 | Deserialization time: 0.002851s |
| DEBUG 2026-01-28 12:43:42 y_server.py:298 Enqueuing observation. Must go: False | Last processed obs: 16 |
| DEBUG 2026-01-28 12:43:42 y_server.py:220 Client ipv4:127.0.0.1:43896 connected for action streaming |
| INFO 2026-01-28 12:43:42 y_server.py:226 Running inference for observation #17 (must_go: False) |
| INFO 2026-01-28 12:43:42 y_server.py:361 Preprocessing and inference took 0.0069s, action shape: torch.Size([1, 16, 16]) |
| DEBUG 2026-01-28 12:43:42 y_server.py:382 Postprocessed action shape: torch.Size([16, 16]) |
| INFO 2026-01-28 12:43:42 y_server.py:391 Observation 17 | Total time: 9.19ms |
| DEBUG 2026-01-28 12:43:42 y_server.py:396 Observation 17 | Prepare time: 0.81ms | Preprocessing time: 0.98ms | Inference time: 6.92ms | Postprocessing time: 0.41ms | Total time: 9.19ms |
| INFO 2026-01-28 12:43:42 y_server.py:244 Action chunk #17 generated | Total time: 10.83ms |
| DEBUG 2026-01-28 12:43:42 y_server.py:249 Action chunk #17 generated | Inference time: 0.01s | Serialize time: 0.00s | Total time: 0.01s |
| DEBUG 2026-01-28 12:43:43 y_server.py:178 Receiving observations from ipv4:127.0.0.1:43896 |
| INFO 2026-01-28 12:43:43 ort/utils.py:74 <Logger policy_server (NOTSET)> Starting receiver |
| DEBUG 2026-01-28 12:43:43 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:43 ort/utils.py:85 <Logger policy_server (NOTSET)> Received data at step 0 |
| DEBUG 2026-01-28 12:43:43 ort/utils.py:76 <Logger policy_server (NOTSET)> Received item |
| DEBUG 2026-01-28 12:43:43 ort/utils.py:93 <Logger policy_server (NOTSET)> Received data at step end size 2765657 |
| DEBUG 2026-01-28 12:43:43 y_server.py:188 Received observation #18 |
| DEBUG 2026-01-28 12:43:43 y_server.py:196 Received observation #18 | Avg FPS: 1.26 | Target: 30.00 | One-way latency: 0.90ms |
| DEBUG 2026-01-28 12:43:43 y_server.py:203 Server timestamp: 1769575423.405529 | Client timestamp: 1769575423.404632 | Deserialization time: 0.004305s |
| DEBUG 2026-01-28 12:43:43 y_server.py:298 Enqueuing observation. Must go: False | Last processed obs: 17 |
| DEBUG 2026-01-28 12:43:43 y_server.py:220 Client ipv4:127.0.0.1:43896 connected for action streaming |
| INFO 2026-01-28 12:43:43 y_server.py:226 Running inference for observation #18 (must_go: False) |
| INFO 2026-01-28 12:43:43 y_server.py:361 Preprocessing and inference took 0.0072s, action shape: torch.Size([1, 16, 16]) |
| DEBUG 2026-01-28 12:43:43 y_server.py:382 Postprocessed action shape: torch.Size([16, 16]) |
| INFO 2026-01-28 12:43:43 y_server.py:391 Observation 18 | Total time: 9.52ms |
| DEBUG 2026-01-28 12:43:43 y_server.py:396 Observation 18 | Prepare time: 0.76ms | Preprocessing time: 1.00ms | Inference time: 7.17ms | Postprocessing time: 0.50ms | Total time: 9.52ms |
| INFO 2026-01-28 12:43:43 y_server.py:244 Action chunk #18 generated | Total time: 10.82ms |
| DEBUG 2026-01-28 12:43:43 y_server.py:249 Action chunk #18 generated | Inference time: 0.01s | Serialize time: 0.00s | Total time: 0.01s |
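The per-stage timings in the "Observation 18" DEBUG line above nearly sum to the reported total; a small remainder (under 0.1ms in every cycle logged here) is left over, presumably timer bookkeeping between the stages. A quick check using the logged figures:

```python
# Per-stage timings copied from the "Observation 18" DEBUG line (ms).
stages_ms = {
    "prepare": 0.76,
    "preprocess": 1.00,
    "inference": 7.17,
    "postprocess": 0.50,
}
total_ms = 9.52  # the "Total time" reported on the same line

# Sum the instrumented stages and see how much of the total they explain.
accounted = sum(stages_ms.values())
print(f"accounted: {accounted:.2f}ms, unaccounted: {total_ms - accounted:.2f}ms")
```

The same pattern holds for observations #16 and #17 (0.08ms and 0.07ms unaccounted, respectively).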
| INFO 2026-01-28 12:44:50 y_server.py:112 Client ipv4:127.0.0.1:43162 connected and ready |
| INFO 2026-01-28 12:44:50 y_server.py:138 Receiving policy instructions from ipv4:127.0.0.1:43162 | Policy type: act | Pretrained name or path: /home/dobot/dobot/x-trainer-main/ckpts/act_tube/checkpoints/last/pretrained_model | Actions per chunk: 16 | Device: cuda |
| ERROR 2026-01-28 12:44:50 /_server.py:636 Exception calling application: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/home/dobot/dobot/x-trainer-main/ckpts/act_tube/checkpoints/last/pretrained_model'. Use `repo_type` argument if needed. |
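The final ERROR is Hugging Face Hub repo-id validation rejecting the local checkpoint directory: the absolute path was handed to code expecting a repo id of the form `repo_name` or `namespace/repo_name`. A sketch of the dispatch that would avoid this, checking for a local directory before attempting repo-id validation; the helper name and the simplified regex are illustrative, not the actual `huggingface_hub` implementation:

```python
import os
import re

# Rough approximation of Hub repo-id syntax: 'repo_name' or
# 'namespace/repo_name', where segments cannot start with '/'.
REPO_ID_RE = re.compile(r"^[\w.-]+(/[\w.-]+)?$")

def resolve_pretrained(name_or_path: str) -> str:
    """Classify a pretrained-model argument as a local path or a Hub repo id.

    Hypothetical helper: an existing directory is loaded from disk; anything
    else must look like a repo id, otherwise we fail the same way the log does.
    """
    if os.path.isdir(name_or_path):
        return "local"  # load weights straight from the checkpoint directory
    if REPO_ID_RE.match(name_or_path):
        return "hub"  # treat the string as a Hub repo id
    raise ValueError(
        "Repo id must be in the form 'repo_name' or 'namespace/repo_name': "
        f"{name_or_path!r}"
    )
```

An absolute path that does not exist on the serving machine (or is checked before the directory test, as the traceback suggests happened here) falls through to the repo-id branch and raises exactly this error.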