---
sidebar_position: 5
---
# Chapter 4: Advanced AI-Robot Brain Techniques
## Learning Objectives
- Understand advanced AI techniques for humanoid robot perception and decision making
- Learn about reinforcement learning for humanoid locomotion
- Explore deep learning integration with Isaac ROS perception
- Implement adaptive control systems using AI
- Understand neural architecture search for robotics applications
## Introduction to Advanced AI Techniques
Humanoid robots require sophisticated AI systems to perceive, reason, and act in complex environments. This chapter explores advanced techniques that go beyond basic perception and navigation, focusing on systems that can learn, adapt, and make complex decisions.
### Key Areas of Advanced AI for Humanoid Robotics
- Learning-based Locomotion: Using AI to develop adaptive walking patterns
- Perception-Action Integration: Deep learning systems that connect perception to action
- Adaptive Control: AI systems that adjust control parameters in real-time
- Hierarchical Decision Making: Multi-level AI systems for complex tasks
- Sim-to-Real Transfer: Techniques to transfer learned behaviors from simulation to real robots
## Reinforcement Learning for Humanoid Locomotion
Reinforcement Learning (RL) has shown remarkable success in developing robust humanoid locomotion controllers. Unlike traditional control methods, RL can learn complex gait patterns and adapt to various terrains.
### Deep Reinforcement Learning Framework

```python
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
from collections import deque
import random
class ActorNetwork(nn.Module):
"""Actor network for humanoid control policy"""
def __init__(self, state_dim, action_dim, hidden_dim=256):
super(ActorNetwork, self).__init__()
self.network = nn.Sequential(
nn.Linear(state_dim, hidden_dim),
nn.ReLU(),
nn.Linear(hidden_dim, hidden_dim),
nn.ReLU(),
nn.Linear(hidden_dim, hidden_dim),
nn.ReLU(),
nn.Linear(hidden_dim, action_dim),
            nn.Tanh()  # tanh bounds actions to [-1, 1]
)
def forward(self, state):
return self.network(state)
class CriticNetwork(nn.Module):
"""Critic network for value estimation"""
def __init__(self, state_dim, action_dim, hidden_dim=256):
super(CriticNetwork, self).__init__()
self.network = nn.Sequential(
nn.Linear(state_dim + action_dim, hidden_dim),
nn.ReLU(),
nn.Linear(hidden_dim, hidden_dim),
nn.ReLU(),
nn.Linear(hidden_dim, 1)
)
def forward(self, state, action):
x = torch.cat([state, action], dim=1)
return self.network(x)
class HumanoidRLAgent:
def __init__(self, state_dim, action_dim, lr=3e-4, gamma=0.99, tau=0.005):
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Networks
self.actor = ActorNetwork(state_dim, action_dim).to(self.device)
self.critic = CriticNetwork(state_dim, action_dim).to(self.device)
self.target_actor = ActorNetwork(state_dim, action_dim).to(self.device)
self.target_critic = CriticNetwork(state_dim, action_dim).to(self.device)
# Optimizers
self.actor_optimizer = optim.Adam(self.actor.parameters(), lr=lr)
self.critic_optimizer = optim.Adam(self.critic.parameters(), lr=lr)
# Hyperparameters
self.gamma = gamma # Discount factor
self.tau = tau # Soft update parameter
self.action_dim = action_dim
# Initialize target networks
self.hard_update(self.target_actor, self.actor)
self.hard_update(self.target_critic, self.critic)
def hard_update(self, target, source):
"""Hard update target network with source parameters"""
for target_param, param in zip(target.parameters(), source.parameters()):
target_param.data.copy_(param.data)
def soft_update(self, target, source):
"""Soft update target network with source parameters"""
for target_param, param in zip(target.parameters(), source.parameters()):
target_param.data.copy_(self.tau * param.data + (1.0 - self.tau) * target_param.data)
def select_action(self, state, add_noise=False, noise_scale=0.1):
"""Select action with exploration noise"""
state_tensor = torch.FloatTensor(state).unsqueeze(0).to(self.device)
with torch.no_grad():
action = self.actor(state_tensor)
if add_noise:
noise = torch.randn_like(action) * noise_scale
action = torch.clamp(action + noise, -1, 1)
return action.cpu().numpy()[0]
def update(self, replay_buffer, batch_size=100):
"""Update networks with experiences from replay buffer"""
if len(replay_buffer) < batch_size:
return
# Sample batch
states, actions, rewards, next_states, dones = replay_buffer.sample(batch_size)
states = torch.FloatTensor(states).to(self.device)
actions = torch.FloatTensor(actions).to(self.device)
rewards = torch.FloatTensor(rewards).unsqueeze(1).to(self.device)
next_states = torch.FloatTensor(next_states).to(self.device)
dones = torch.BoolTensor(dones).unsqueeze(1).to(self.device)
# Update critic
with torch.no_grad():
next_actions = self.target_actor(next_states)
next_q_values = self.target_critic(next_states, next_actions)
target_q_values = rewards + (self.gamma * next_q_values * ~dones)
current_q_values = self.critic(states, actions)
critic_loss = nn.MSELoss()(current_q_values, target_q_values)
self.critic_optimizer.zero_grad()
critic_loss.backward()
self.critic_optimizer.step()
# Update actor
predicted_actions = self.actor(states)
actor_loss = -self.critic(states, predicted_actions).mean()
self.actor_optimizer.zero_grad()
actor_loss.backward()
self.actor_optimizer.step()
# Soft update target networks
self.soft_update(self.target_actor, self.actor)
self.soft_update(self.target_critic, self.critic)
class ReplayBuffer:
"""Experience replay buffer for RL training"""
def __init__(self, capacity=1000000):
self.buffer = deque(maxlen=capacity)
def push(self, state, action, reward, next_state, done):
"""Add experience to buffer"""
self.buffer.append((state, action, reward, next_state, done))
def sample(self, batch_size):
"""Sample batch from buffer"""
batch = random.sample(self.buffer, batch_size)
state, action, reward, next_state, done = map(np.stack, zip(*batch))
return state, action, reward, next_state, done
def __len__(self):
return len(self.buffer)
```

### Humanoid Environment for RL Training

```python
import gym
from gym import spaces
import numpy as np
class HumanoidLocomotionEnv(gym.Env):
"""Custom environment for humanoid locomotion training"""
def __init__(self):
super(HumanoidLocomotionEnv, self).__init__()
# Define action and observation spaces
# This is a simplified example - real environments would have more complex spaces
self.action_space = spaces.Box(
low=-1.0, high=1.0, shape=(19,), dtype=np.float32 # 19 joints
)
# Observation space: joint positions, velocities, IMU readings
obs_dim = 48 # Example dimension
self.observation_space = spaces.Box(
low=-np.inf, high=np.inf, shape=(obs_dim,), dtype=np.float32
)
# Humanoid robot interface (simplified)
self.robot = None # Would interface with Gazebo, PyBullet, etc.
# Episode parameters
self.max_episode_steps = 1000
self.current_step = 0
self.target_velocity = 0.5 # m/s
def reset(self):
"""Reset the environment"""
# Reset robot to initial pose
self._reset_robot()
self.current_step = 0
# Return initial observation
return self._get_observation()
def step(self, action):
"""Execute one step with given action"""
# Apply action to robot
self._apply_action(action)
# Step simulation
self._step_simulation()
# Get new observation
observation = self._get_observation()
# Calculate reward
reward = self._calculate_reward()
# Check termination
done = self._is_done()
info = {}
self.current_step += 1
return observation, reward, done, info
def _get_observation(self):
"""Get current observation from robot"""
# This would interface with robot's sensors
# Example observations: joint angles, velocities, IMU data, etc.
observation = np.zeros(48, dtype=np.float32) # Simplified
return observation
def _calculate_reward(self):
"""Calculate reward based on current state"""
# Reward for forward velocity
forward_vel_reward = self._get_forward_velocity() * 0.1
# Penalty for energy consumption
energy_penalty = self._get_energy_consumption() * 0.01
# Reward for maintaining balance
balance_reward = self._get_balance_score() * 0.5
# Penalty for joint limits violations
joint_limit_penalty = self._get_joint_limit_violations() * 1.0
total_reward = forward_vel_reward - energy_penalty + balance_reward - joint_limit_penalty
return max(total_reward, -10.0) # Clamp reward
def _get_forward_velocity(self):
"""Get forward velocity of the robot"""
# Would interface with robot's odometry
return 0.0 # Simplified
def _get_energy_consumption(self):
"""Get energy consumption"""
# Would calculate based on joint torques and velocities
return 0.0 # Simplified
def _get_balance_score(self):
"""Get balance score (higher is better)"""
# Calculate based on COM position, IMU readings, etc.
return 0.0 # Simplified
def _get_joint_limit_violations(self):
"""Count joint limit violations"""
# Check current joint positions against limits
return 0.0 # Simplified
def _is_done(self):
"""Check if episode is done"""
# Done if fallen, exceeded max steps, or other failure conditions
return self.current_step >= self.max_episode_steps
def _apply_action(self, action):
"""Apply action to robot"""
# Convert normalized action to joint commands
# This would interface with robot controller
pass
def _step_simulation(self):
"""Step the physics simulation"""
# This would interface with physics engine
pass
def _reset_robot(self):
"""Reset robot to initial configuration"""
# Reset robot pose, velocities, etc.
pass
```

### Training Loop for Humanoid Locomotion

```python
def train_humanoid_locomotion():
"""Training loop for humanoid locomotion policy"""
env = HumanoidLocomotionEnv()
# Initialize agent
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.shape[0]
agent = HumanoidRLAgent(state_dim, action_dim)
# Initialize replay buffer
replay_buffer = ReplayBuffer(capacity=1000000)
# Training parameters
num_episodes = 2000
max_steps_per_episode = 1000
batch_size = 256
update_every = 50
scores = []
avg_scores = []
for episode in range(num_episodes):
state = env.reset()
episode_reward = 0
episode_steps = 0
for step in range(max_steps_per_episode):
# Select action with exploration
action = agent.select_action(state, add_noise=True, noise_scale=0.1)
# Take action
next_state, reward, done, info = env.step(action)
# Store experience
replay_buffer.push(state, action, reward, next_state, done)
# Update agent
if len(replay_buffer) > batch_size and step % update_every == 0:
agent.update(replay_buffer, batch_size)
state = next_state
episode_reward += reward
episode_steps += 1
if done:
break
scores.append(episode_reward)
# Calculate average score over last 100 episodes
if len(scores) >= 100:
avg_score = sum(scores[-100:]) / 100
avg_scores.append(avg_score)
else:
avg_scores.append(sum(scores) / len(scores))
print(f"Episode {episode}, Score: {episode_reward:.2f}, "
f"Avg Score: {avg_scores[-1]:.2f}")
# Save trained model
torch.save(agent.actor.state_dict(), "humanoid_locomotion_actor.pth")
torch.save(agent.critic.state_dict(), "humanoid_locomotion_critic.pth")
    return agent, scores, avg_scores
```
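Once training has produced a checkpoint, the policy can be rolled out without exploration noise to measure its true performance. The sketch below is a minimal evaluation helper, assuming the `HumanoidLocomotionEnv`, `HumanoidRLAgent`, and checkpoint filename defined above; the `evaluate_policy` name itself is just a convention.

```python
import torch

def evaluate_policy(num_episodes=10):
    """Roll out the trained policy deterministically and report the mean return."""
    env = HumanoidLocomotionEnv()
    agent = HumanoidRLAgent(env.observation_space.shape[0],
                            env.action_space.shape[0])
    # Restore the weights saved at the end of train_humanoid_locomotion()
    agent.actor.load_state_dict(
        torch.load("humanoid_locomotion_actor.pth", map_location=agent.device))
    agent.actor.eval()

    returns = []
    for _ in range(num_episodes):
        state = env.reset()
        episode_return, done = 0.0, False
        while not done:
            # No exploration noise at evaluation time
            action = agent.select_action(state, add_noise=False)
            state, reward, done, _ = env.step(action)
            episode_return += reward
        returns.append(episode_return)
    return sum(returns) / len(returns)
```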
## Isaac ROS Deep Learning Integration
Isaac ROS provides GPU-accelerated deep learning capabilities that can be integrated with humanoid control systems:
### Perception-Action Integration

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, CompressedImage, Imu
from geometry_msgs.msg import Twist
import torch
import torchvision.transforms as T
from PIL import Image as PILImage
import io
import cv2
import numpy as np
class PerceptionActionNode(Node):
def __init__(self):
super().__init__('perception_action_node')
# Subscriptions
self.image_sub = self.create_subscription(
Image, '/camera/color/image_raw', self.image_callback, 10)
self.depth_sub = self.create_subscription(
Image, '/camera/depth/image_rect_raw', self.depth_callback, 10)
self.imu_sub = self.create_subscription(
Imu, '/imu/data', self.imu_callback, 10)
# Publisher for action commands
self.cmd_vel_pub = self.create_publisher(Twist, '/cmd_vel', 10)
# Load pretrained models
self.perception_model = self.load_perception_model()
self.action_model = self.load_action_model()
# Transformation for images
self.transform = T.Compose([
T.Resize((224, 224)),
T.ToTensor(),
T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
# State buffers
self.current_img = None
self.current_depth = None
self.current_imu = None
self.get_logger().info('Perception-Action Node initialized')
def load_perception_model(self):
"""Load pretrained perception model"""
# In practice, this would load a model from Isaac ROS or other source
# For example, an object detection or segmentation model
import torchvision.models as models
        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # torchvision >= 0.13 API
model.eval()
return model
def load_action_model(self):
"""Load action selection model"""
# This would be a model that maps perceptions to actions
# Could be the RL policy trained above
import torch.nn as nn
class ActionModel(nn.Module):
def __init__(self, perception_features_dim, action_dim):
super(ActionModel, self).__init__()
self.fc1 = nn.Linear(perception_features_dim, 256)
self.relu = nn.ReLU()
self.fc2 = nn.Linear(256, 128)
self.action_head = nn.Linear(128, action_dim)
def forward(self, perception_features):
x = self.relu(self.fc1(perception_features))
x = self.relu(self.fc2(x))
action = torch.tanh(self.action_head(x))
return action
model = ActionModel(512, 2) # 512 features, 2D action (vx, wz)
model.eval()
return model
def image_callback(self, msg):
"""Process incoming image"""
try:
            # Convert ROS Image to PIL Image (assumes 'rgb8' encoding; cv_bridge is typical in practice)
img_data = np.frombuffer(msg.data, dtype=np.uint8).reshape(
msg.height, msg.width, -1)
pil_img = PILImage.fromarray(img_data)
# Process image with perception model
with torch.no_grad():
transformed_img = self.transform(pil_img).unsqueeze(0)
features = self.extract_features(transformed_img)
# Determine action based on perception
action = self.action_model(features)
# Execute action
self.publish_action(action)
except Exception as e:
self.get_logger().error(f'Error processing image: {str(e)}')
def extract_features(self, img_tensor):
"""Extract features using perception model"""
# Get intermediate layer features (example)
# This would depend on the specific model architecture
with torch.no_grad():
# Run through convolutional layers to extract features
x = self.perception_model.conv1(img_tensor)
x = self.perception_model.bn1(x)
x = self.perception_model.relu(x)
x = self.perception_model.maxpool(x)
x = self.perception_model.layer1(x)
x = self.perception_model.layer2(x)
x = self.perception_model.layer3(x)
x = self.perception_model.layer4(x)
# Global average pooling
x = torch.nn.functional.adaptive_avg_pool2d(x, (1, 1))
features = torch.flatten(x, 1)
return features
def depth_callback(self, msg):
"""Process depth information"""
# Process depth data for navigation
pass
def imu_callback(self, msg):
"""Process IMU data for balance"""
# Process IMU for balance awareness
pass
def publish_action(self, action_tensor):
"""Execute action by publishing to robot"""
cmd_msg = Twist()
cmd_msg.linear.x = float(action_tensor[0, 0]) * 0.5 # Scale to reasonable velocity
cmd_msg.angular.z = float(action_tensor[0, 1]) * 0.5 # Scale to reasonable angular velocity
        self.cmd_vel_pub.publish(cmd_msg)
```
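Like any rclpy node, this one needs a standard entry point so it can be launched with `ros2 run`. A minimal `main` might look like the following (the topic names above are assumed to match your camera and IMU drivers):

```python
def main(args=None):
    rclpy.init(args=args)
    node = PerceptionActionNode()
    try:
        rclpy.spin(node)
    except KeyboardInterrupt:
        pass
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == '__main__':
    main()
```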
## Adaptive Control Systems
Adaptive control systems can modify their behavior based on changing conditions or performance:
### Model Reference Adaptive Control (MRAC)

```python
import numpy as np

class MRACController:
"""Model Reference Adaptive Controller for humanoid robots"""
def __init__(self, reference_model_params, plant_params):
self.reference_model = self.initialize_reference_model(reference_model_params)
self.plant_params = plant_params
# Adaptive parameters
self.theta = np.zeros(plant_params.size) # Controller parameters
self.P = np.eye(plant_params.size) * 100 # Covariance matrix
self.gamma = 1.0 # Adaptation gain
# State tracking
self.error = 0.0
self.prev_error = 0.0
self.integral_error = 0.0
def initialize_reference_model(self, params):
"""Initialize reference model for desired behavior"""
# This would create a reference dynamic system
# For humanoid, this might represent ideal walking dynamics
class ReferenceModel:
def __init__(self, params):
self.params = params
self.state = 0.0 # Simplified state
def update(self, input_signal):
# Update reference model state
# This implements the desired dynamics
self.state = self.state * 0.9 + input_signal * 0.1 # Simplified
return self.state
return ReferenceModel(params)
def update(self, measured_output, reference_input):
"""Update controller with new measurements"""
# Get reference output
reference_output = self.reference_model.update(reference_input)
# Calculate tracking error
self.error = reference_output - measured_output
# Compute parameter adjustment
phi = self.get_regression_vector(measured_output, reference_input)
        adjustment = self.gamma * (self.P @ phi) * self.error
        self.theta += adjustment
# Update covariance matrix
denom = 1 + np.dot(phi, np.dot(self.P, phi))
self.P = self.P - (np.outer(np.dot(self.P, phi), np.dot(phi, self.P))) / denom
# Compute control signal
control_signal = np.dot(self.theta, phi)
# Update for next iteration
self.prev_error = self.error
return control_signal
def get_regression_vector(self, y, r):
"""Get regression vector for adaptation law"""
# This would depend on the specific plant model
# For humanoid, this might relate to joint kinematics/dynamics
phi = np.array([y, r, y*r, y**2, r**2]) # Example features
        return phi[:self.theta.size]  # Trim to match parameter size
```
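To make the adaptation law concrete, here is a toy closed-loop sketch. The first-order `plant_step` dynamics and the parameter values are purely illustrative (a real humanoid joint would take their place), and the five-element parameter vector matches the regression vector defined above:

```python
import numpy as np

def plant_step(y, u):
    """Hypothetical unknown plant: y[k+1] = 0.8*y[k] + 0.3*u[k]."""
    return 0.8 * y + 0.3 * u

controller = MRACController(
    reference_model_params={'pole': 0.9},  # illustrative only
    plant_params=np.zeros(5))              # five adaptive parameters, matching phi

y = 0.0
for k in range(200):
    u = controller.update(measured_output=y, reference_input=1.0)
    y = plant_step(y, u)
    # theta adapts so the closed loop tracks the reference model output
```

Note that the adaptation gain and covariance initialization strongly affect convergence; in practice these are tuned conservatively to avoid parameter drift.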
### Neural Adaptive Control

```python
import torch
import torch.nn as nn
import torch.optim as optim
from collections import deque
class NeuralAdaptiveController(nn.Module):
"""Neural adaptive controller for complex robotic systems"""
def __init__(self, state_dim, action_dim, hidden_dim=128):
super(NeuralAdaptiveController, self).__init__()
# Controller network
self.controller_network = nn.Sequential(
nn.Linear(state_dim + action_dim, hidden_dim),
nn.ReLU(),
nn.Linear(hidden_dim, hidden_dim),
nn.ReLU(),
nn.Linear(hidden_dim, action_dim),
)
# Adaptation network (learns to adjust controller parameters)
self.adaptation_network = nn.Sequential(
nn.Linear(state_dim + action_dim + hidden_dim, hidden_dim),
nn.ReLU(),
nn.Linear(hidden_dim, hidden_dim),
nn.ReLU(),
            nn.Linear(hidden_dim, action_dim),  # Output a correction in action space
)
# Error prediction network
self.error_predictor = nn.Sequential(
nn.Linear(state_dim + action_dim, hidden_dim),
nn.ReLU(),
nn.Linear(hidden_dim, 1),
)
        self.state_dim = state_dim
        self.action_dim = action_dim
        self.hidden_dim = hidden_dim
def forward(self, state, action, internal_state=None):
"""Forward pass with potential adaptation"""
if internal_state is None:
internal_state = torch.zeros(state.size(0), self.hidden_dim)
# Controller output
controller_output = self.controller_network(torch.cat([state, action], dim=1))
# Adaptation
adaptation_input = torch.cat([state, action, internal_state], dim=1)
adaptation = self.adaptation_network(adaptation_input)
# Combined output
adapted_output = controller_output + 0.1 * adaptation # Small adaptation influence
# Predict error for learning
error_prediction = self.error_predictor(torch.cat([state, action], dim=1))
return adapted_output, adaptation, error_prediction
class AdaptiveControlSystem:
"""Complete adaptive control system for humanoid robots"""
def __init__(self, state_dim, action_dim):
self.neural_controller = NeuralAdaptiveController(state_dim, action_dim)
self.optimizer = optim.Adam(self.neural_controller.parameters(), lr=1e-4)
# Performance metrics
self.performance_history = deque(maxlen=100)
self.adaptation_activity = 0.0
def compute_control(self, state, desired_action):
"""Compute control action with adaptation"""
state_tensor = torch.FloatTensor(state).unsqueeze(0)
action_tensor = torch.FloatTensor(desired_action).unsqueeze(0)
with torch.no_grad():
control_output, adaptation, error_pred = self.neural_controller(
state_tensor, action_tensor)
return control_output.numpy()[0]
def update_adaptation(self, state, action, desired_output, actual_output):
"""Update neural adaptation based on performance"""
state_tensor = torch.FloatTensor(state).unsqueeze(0)
action_tensor = torch.FloatTensor(action).unsqueeze(0)
desired_tensor = torch.FloatTensor(desired_output).unsqueeze(0)
actual_tensor = torch.FloatTensor(actual_output).unsqueeze(0)
# Compute loss
tracking_error = desired_tensor - actual_tensor
error_loss = torch.mean(tracking_error ** 2)
        # Also train the error predictor to anticipate the squared tracking error
        # (only this term carries gradients w.r.t. the network parameters;
        # tracking_error itself is measured data, not a function of the model)
        error_pred = self.neural_controller.error_predictor(
            torch.cat([state_tensor, action_tensor], dim=1))
        error_target = tracking_error.pow(2).mean(dim=1, keepdim=True)
        prediction_loss = torch.nn.functional.mse_loss(error_pred, error_target)
total_loss = error_loss + 0.1 * prediction_loss
# Update
self.optimizer.zero_grad()
total_loss.backward()
self.optimizer.step()
# Track performance
self.performance_history.append(error_loss.item())
        return error_loss.item()
```
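A control loop built on this system alternates between computing a command and feeding the observed result back for adaptation. In the sketch below, `read_robot_state`, `desired_joint_targets`, and `apply_to_robot` are hypothetical stand-ins for your robot interface; the dimensions follow the earlier examples (48-D state, 19 joint commands):

```python
system = AdaptiveControlSystem(state_dim=48, action_dim=19)

for t in range(1000):
    state = read_robot_state()            # hypothetical: joints + IMU, length 48
    desired = desired_joint_targets(t)    # hypothetical: length-19 target vector
    command = system.compute_control(state, desired)
    achieved = apply_to_robot(command)    # hypothetical: returns measured output
    loss = system.update_adaptation(state, command, desired, achieved)
```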
## Hierarchical Decision Making
Complex humanoid tasks often require hierarchical decision-making systems:
### Task and Motion Planning (TAMP)

```python
class TaskAndMotionPlanner:
"""Hierarchical planner for complex humanoid tasks"""
    def __init__(self):
        self.task_planner = SymbolicTaskPlanner()
        self.motion_planner = BiPedalMotionPlanner()
        # Assumed to be provided elsewhere; not defined in this chapter
        self.high_level_reasoner = HighLevelReasoner()
def plan_task(self, goal_description):
"""Plan high-level task decomposition"""
# Parse goal description
goal = self.parse_goal(goal_description)
# Decompose into subtasks
task_plan = self.task_planner.decompose_task(goal)
# For each task, generate motion plans
complete_plan = []
for task in task_plan:
if task.type == "navigate":
motion_plan = self.motion_planner.plan_navigate(task.destination)
elif task.type == "manipulate":
motion_plan = self.motion_planner.plan_manipulate(
task.object, task.destination)
elif task.type == "communicate":
                motion_plan = self.motion_planner.plan_communicate(task.message)
            else:
                motion_plan = None  # Unknown task type (simplified)
complete_plan.append({
'task': task,
'motion_plan': motion_plan
})
return complete_plan
def parse_goal(self, goal_description):
"""Parse natural language goal into structured format"""
# This would use NLP to parse goals
# Example: "Go to kitchen and bring me a water bottle"
# Would be parsed into navigate → find_object → grasp → navigate → place
pass
class SymbolicTaskPlanner:
"""Symbolic task planner using STRIPS-like formalism"""
def __init__(self):
# Define operators (actions) and predicates (states)
self.operators = self.define_operators()
self.predicates = self.define_predicates()
def define_operators(self):
"""Define available operators for task planning"""
return {
'navigate': {
'preconditions': ['at(X)', 'accessible(Y)'],
'effects': ['at(Y)', '!at(X)'],
'cost': 1.0
},
'grasp': {
'preconditions': ['at(X)', 'reachable(X)', 'free_hand()'],
'effects': ['holding(X)', '!free_hand()'],
'cost': 0.5
},
'place': {
'preconditions': ['holding(X)', 'at(Y)'],
'effects': ['!holding(X)', 'placed(X, Y)', 'free_hand()'],
'cost': 0.5
}
}
def decompose_task(self, goal):
"""Decompose goal into sequence of operators"""
# Implement forward/backward chaining search
# or use classical planning algorithms
task_plan = []
# Simplified implementation
return task_plan
class BiPedalMotionPlanner:
"""Motion planner specialized for bipedal robots"""
    def __init__(self):
        # Sub-planners are assumed to be provided elsewhere (simplified)
        self.footstep_planner = FootstepPlanner()
        self.balance_controller = BalanceController()
        self.manipulation_planner = ManipulationPlanner()
def plan_navigate(self, destination):
"""Plan navigation motion for bipedal robot"""
return self.footstep_planner.plan_path(destination)
def plan_manipulate(self, obj, destination):
"""Plan manipulation motion for object"""
return self.manipulation_planner.plan_grasp_transport_place(obj, destination)
def plan_communicate(self, message):
"""Plan motion for communication (e.g., gestures)"""
        return [{'type': 'gesture', 'motion': 'wave', 'duration': 2.0}]
```
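Tying the pieces together, a caller asks the planner for a complete plan and hands each step to an executor. This is only a usage sketch: the sub-planners are placeholders above, and the `execute_motion` helper is hypothetical. The goal string is the example from `parse_goal`:

```python
planner = TaskAndMotionPlanner()
plan = planner.plan_task("Go to kitchen and bring me a water bottle")

for step in plan:
    print(f"Executing task: {step['task'].type}")
    execute_motion(step['motion_plan'])   # hypothetical motion executor
```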
## Sim-to-Real Transfer Techniques
Transferring learned behaviors from simulation to real robots requires special consideration:
### Domain Randomization

```python
import numpy as np

class DomainRandomization:
"""Domain randomization for sim-to-real transfer"""
def __init__(self):
self.randomization_params = {
'visual': {
'lighting': (0.5, 2.0), # Intensity range
'textures': ['concrete', 'wood', 'carpet', 'grass'],
'colors': [(0.2, 0.2, 0.2), (0.8, 0.8, 0.8)], # Dark to light
'materials': ['matte', 'glossy', 'rough']
},
'dynamics': {
'friction': (0.3, 1.0),
'mass_variation': (0.8, 1.2),
'inertia_scaling': (0.9, 1.1),
'actuator_noise': (0.0, 0.05)
},
'sensor': {
'camera_noise': (0.0, 0.02),
'imu_drift': (0.0, 0.01),
'delay_range': (0.01, 0.05)
}
}
def randomize_domain(self, sim_env):
"""Randomize simulation domain parameters"""
# Visual randomization
lighting_mult = np.random.uniform(
self.randomization_params['visual']['lighting'][0],
self.randomization_params['visual']['lighting'][1]
)
sim_env.set_lighting_multiplier(lighting_mult)
# Dynamics randomization
friction = np.random.uniform(
self.randomization_params['dynamics']['friction'][0],
self.randomization_params['dynamics']['friction'][1]
)
sim_env.set_friction(friction)
# Add sensor noise
camera_noise = np.random.uniform(
self.randomization_params['sensor']['camera_noise'][0],
self.randomization_params['sensor']['camera_noise'][1]
)
sim_env.add_camera_noise(camera_noise)
        return sim_env
```
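In training, the domain is typically re-randomized at every episode reset so the policy never sees the same simulator configuration twice. A sketch, reusing the DDPG agent from earlier; `sim_env` is assumed to expose the setter methods called in `randomize_domain`:

```python
randomizer = DomainRandomization()
num_episodes = 2000

for episode in range(num_episodes):
    # Fresh visual/dynamics/sensor parameters for each episode
    sim_env = randomizer.randomize_domain(sim_env)
    state = sim_env.reset()
    done = False
    while not done:
        action = agent.select_action(state, add_noise=True)
        state, reward, done, info = sim_env.step(action)
```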
### Curriculum Learning

```python
class CurriculumLearning:
"""Curriculum learning for gradual skill acquisition"""
def __init__(self):
self.curriculum_levels = [
{
'name': 'stationary_balance',
'difficulty': 0.1,
'tasks': ['maintain_balance'],
'rewards': {'balance_time': 1.0, 'fall_penalty': -10.0}
},
{
'name': 'simple_stepping',
'difficulty': 0.3,
'tasks': ['step_forward', 'step_backward'],
'rewards': {'balance_time': 1.0, 'reach_target': 5.0, 'fall_penalty': -10.0}
},
{
'name': 'straight_line_walking',
'difficulty': 0.5,
'tasks': ['walk_forward', 'walk_backward'],
'rewards': {'forward_vel': 1.0, 'energy_efficiency': 0.5, 'balance_time': 0.5, 'fall_penalty': -10.0}
},
{
'name': 'turning',
'difficulty': 0.7,
'tasks': ['turn_left', 'turn_right'],
'rewards': {'heading_accuracy': 2.0, 'balance_time': 0.5, 'energy_efficiency': 0.3, 'fall_penalty': -10.0}
},
{
'name': 'complex_maneuvers',
'difficulty': 1.0,
'tasks': ['sidestep', 'walk_over_small_obstacles'],
'rewards': {'task_completion': 10.0, 'smoothness': 1.0, 'balance_time': 0.5, 'fall_penalty': -10.0}
}
]
self.current_level = 0
self.level_progress_threshold = 0.8 # 80% success rate to advance
def get_current_tasks(self):
"""Get tasks for current curriculum level"""
return self.curriculum_levels[self.current_level]['tasks']
def evaluate_performance(self, episode_results):
"""Evaluate agent performance on current level"""
# Calculate performance metrics based on episode results
success_rate = self.calculate_success_rate(episode_results)
if success_rate >= self.level_progress_threshold and self.current_level < len(self.curriculum_levels) - 1:
self.current_level += 1
print(f"Advancing to curriculum level: {self.curriculum_levels[self.current_level]['name']}")
return success_rate
def calculate_success_rate(self, results):
"""Calculate success rate from episode results"""
if not results:
return 0.0
successful_episodes = sum(1 for r in results if r.success)
        return successful_episodes / len(results)
```
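The curriculum manager plugs into the training loop by selecting tasks for each episode and periodically checking whether to advance a level. In this sketch, `run_episode` and its result objects (which need a `.success` attribute, as `calculate_success_rate` expects) are assumptions to be replaced by your own episode runner:

```python
curriculum = CurriculumLearning()
recent_results = []
num_episodes = 2000

for episode in range(num_episodes):
    tasks = curriculum.get_current_tasks()
    result = run_episode(agent, env, tasks)   # hypothetical episode runner
    recent_results.append(result)

    if len(recent_results) >= 50:             # evaluate in batches of 50 episodes
        curriculum.evaluate_performance(recent_results)
        recent_results = []
```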
## Neural Architecture Search for Robotics
Neural Architecture Search (NAS) can optimize neural networks specifically for robotic tasks:

```python
import numpy as np
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block backing the 'residual' layer type below"""
    def __init__(self, channels):
        super(ResidualBlock, self).__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
    def forward(self, x):
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))
class RobotNASCandidate:
"""Represents a candidate neural architecture for robotics"""
def __init__(self, layers_config):
self.layers_config = layers_config # List of layer specifications
self.fitness_score = 0.0
self.computation_cost = 0.0 # FLOPs or inference time
def build_network(self):
"""Build the neural network from configuration"""
layers = []
for layer_config in self.layers_config:
layer_type = layer_config['type']
if layer_type == 'conv':
layers.append(nn.Conv2d(layer_config['in_channels'],
layer_config['out_channels'],
layer_config['kernel_size']))
elif layer_type == 'linear':
layers.append(nn.Linear(layer_config['in_size'],
layer_config['out_size']))
elif layer_type == 'residual':
layers.append(ResidualBlock(layer_config['channels']))
# Add activation functions
layers.append(nn.ReLU())
return nn.Sequential(*layers)
class RobotNeuralArchitectureSearch:
"""Neural Architecture Search specialized for robotics applications"""
def __init__(self, search_space, population_size=50, generations=20):
self.search_space = search_space
self.population_size = population_size
self.generations = generations
self.population = []
# Robot-specific objectives
self.objectives = {
'accuracy': 0.5,
'latency': 0.3,
'power_efficiency': 0.2
}
def initialize_population(self):
"""Initialize random population of architectures"""
for _ in range(self.population_size):
config = self.generate_random_architecture()
candidate = RobotNASCandidate(config)
self.population.append(candidate)
def generate_random_architecture(self):
"""Generate a random architecture within search space"""
layers = []
# Generate random sequence of layers
num_layers = np.random.randint(3, 8) # 3-8 layers
for _ in range(num_layers):
layer_type = np.random.choice(['conv', 'linear', 'residual'])
layer_config = self.sample_layer_configuration(layer_type)
layers.append(layer_config)
return layers
def sample_layer_configuration(self, layer_type):
"""Sample configuration for a layer type"""
if layer_type == 'conv':
return {
'type': 'conv',
'in_channels': np.random.choice([16, 32, 64, 128]),
'out_channels': np.random.choice([32, 64, 128, 256]),
'kernel_size': np.random.choice([3, 5, 7])
}
elif layer_type == 'linear':
return {
'type': 'linear',
'in_size': np.random.choice([64, 128, 256, 512]),
'out_size': np.random.choice([32, 64, 128, 256])
}
elif layer_type == 'residual':
return {
'type': 'residual',
'channels': np.random.choice([64, 128, 256])
}
def evaluate_candidate(self, candidate, eval_env):
"""Evaluate a candidate architecture"""
try:
network = candidate.build_network()
# Test accuracy on evaluation environment
accuracy = self.evaluate_accuracy(network, eval_env)
# Estimate computational cost
latency = self.estimate_latency(network)
power = self.estimate_power_usage(network)
# Combined fitness score
fitness = (self.objectives['accuracy'] * accuracy -
self.objectives['latency'] * latency -
self.objectives['power_efficiency'] * power)
candidate.fitness_score = fitness
candidate.computation_cost = latency
return fitness
except Exception as e:
# Penalize invalid architectures
candidate.fitness_score = -1.0
return -1.0
    def evaluate_accuracy(self, network, eval_env):
        """Task accuracy of the candidate on the evaluation environment"""
        # Would run rollouts or a validation set through the network
        return 0.0  # Simplified
    def estimate_latency(self, network):
        """Estimate inference latency, e.g. by timing dummy forward passes"""
        return 0.0  # Simplified
    def estimate_power_usage(self, network):
        """Estimate power draw of inference on the target hardware"""
        return 0.0  # Simplified
def evolve_population(self):
"""Evolve population using genetic operators"""
# Sort by fitness
self.population.sort(key=lambda x: x.fitness_score, reverse=True)
# Keep top performers
survivors = self.population[:int(0.2 * self.population_size)]
# Generate offspring through mutation and crossover
offspring = []
for _ in range(self.population_size - len(survivors)):
parent1 = np.random.choice(survivors)
if np.random.rand() < 0.8: # Crossover 80% of the time
parent2 = np.random.choice(survivors)
child_config = self.crossover(parent1.layers_config, parent2.layers_config)
else: # Mutation otherwise
child_config = self.mutate(parent1.layers_config)
offspring.append(RobotNASCandidate(child_config))
self.population = survivors + offspring
def crossover(self, config1, config2):
"""Combine two architectures"""
# Simplified crossover: take half from each
mid_point = len(config1) // 2
child_config = config1[:mid_point] + config2[mid_point:]
return child_config
def mutate(self, config):
"""Mutate an architecture"""
mutated_config = config.copy()
# Randomly modify ~20% of layers
for i in range(len(mutated_config)):
if np.random.rand() < 0.2:
mutated_config[i] = self.sample_layer_configuration(
mutated_config[i]['type'])
return mutated_config
def search(self, eval_env):
"""Run the entire NAS process"""
self.initialize_population()
for generation in range(self.generations):
print(f"Evaluating generation {generation + 1}/{self.generations}")
for candidate in self.population:
self.evaluate_candidate(candidate, eval_env)
# Report best from generation
best = max(self.population, key=lambda x: x.fitness_score)
print(f"Best fitness: {best.fitness_score:.4f}")
# Evolve for next generation
self.evolve_population()
# Return best architecture
best_final = max(self.population, key=lambda x: x.fitness_score)
        return best_final
```
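Running the search is then a single call. The evaluation environment must supply whatever `evaluate_accuracy` needs, and the simplified stubs above would have to be filled in before the fitness scores mean anything:

```python
nas = RobotNeuralArchitectureSearch(search_space=None,  # unused by the random generator above
                                    population_size=20,
                                    generations=10)
best = nas.search(eval_env)   # eval_env: your evaluation task/simulator
print(f"Best candidate: {len(best.layers_config)} layers, "
      f"fitness {best.fitness_score:.3f}")
deployable_network = best.build_network()
```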
## Summary
This chapter covered advanced AI techniques for humanoid robot brains:
- Reinforcement learning for locomotion and control
- Isaac ROS integration for perception-action systems
- Adaptive control systems that adjust in real-time
- Hierarchical decision-making for complex tasks
- Sim-to-real transfer techniques including domain randomization
- Curriculum learning for gradual skill acquisition
- Neural architecture search for optimizing robot neural networks
These techniques enable humanoid robots to learn complex behaviors, adapt to changing conditions, and execute sophisticated tasks that require both perception and action capabilities.
## Exercises
- Implement a simple DDPG agent for basic humanoid control
- Create a domain randomization scheme for your simulation environment
- Design a hierarchical task planner for a specific humanoid task
## Next Steps
With advanced AI techniques for humanoid robots covered, the next module will focus on Vision-Language-Action systems that integrate perception, cognition, and action in unified frameworks for natural human-robot interaction.