| filename | AgentFunctions | ObservationSpaces | ActionSpaces | Agents | Description |
|---|---|---|---|---|---|
| simple.json | {"adversary_reward": null, "agent_reward": "def reward(self, agent, world): dist2 = np.sum(np.square(agent.state.p_pos - world.landmarks[0].state.p_pos)); return -dist2", "observation": "def observation(self, agent, world): entity_pos = [entity.state.p_pos - agent.state.p_pos for entity in world.landmarks]; retu... | {"Adversary": null, "Agent": "[self_vel, landmark_rel_position]", "Alice": null, "Bob": null, "Eve": null} | {"Adversary": null, "Agent": "[no_action, move_left, move_right, move_down, move_up]"} | [agent_0] | In this environment a single agent sees a landmark position and is rewarded based on how close it gets to the landmark (Euclidean distance). This is not a multiagent environment, and is primarily intended for debugging purposes. |
| simple_adversary.json | {"adversary_reward": "def adversary_reward(self, agent, world): shaped_reward = True; return -np.sqrt(np.sum(np.square(agent.state.p_pos - agent.goal_a.state.p_pos))) if shaped_reward else (5 if np.sqrt(np.sum(np.square(agent.state.p_pos - agent.goal_a.state.p_pos))) < 2 * agent.goal_a.size else 0)", "agent_reward... | {"Adversary": "[landmark_rel_position, other_agents_rel_positions]", "Agent": "[self_pos, self_vel, goal_rel_position, landmark_rel_position, other_agent_rel_positions]", "Alice": null, "Bob": null, "Eve": null} | {"Adversary": "[no_action, move_left, move_right, move_down, move_up]", "Agent": "[no_action, move_left, move_right, move_down, move_up]"} | [adversary_0, agent_0, agent_1] | In this environment, there is 1 adversary (red), N good agents (green), and N landmarks (default N=2). All agents observe the position of landmarks and other agents. One landmark is the ‘target landmark’ (colored green). Good agents are rewarded based on how close the closest one of them is to the target landmark, but nega... |
| simple_crypto.json | {"adversary_reward": "def adversary_reward(self, agent, world): rew = 0; rew -= np.sum(np.square(agent.state.c - agent.goal_a.color)) if not (agent.state.c == np.zeros(world.dim_c)).all() else 0; return rew # Adversary (Eve) is rewarded if it can reconstruct original goal", "agent_reward": "def agent_reward(self,... | {"Adversary": null, "Agent": null, "Alice": "[message, private_key]", "Bob": "[private_key, alices_comm]", "Eve": "[alices_comm]"} | {"Adversary": null, "Agent": "[say_0, say_1, say_2, say_3]"} | [eve_0, bob_0, alice_0] | In this environment, there are 2 good agents (Alice and Bob) and 1 adversary (Eve). Alice must send a private 1-bit message to Bob over a public channel. Alice and Bob are rewarded +2 if Bob reconstructs the message, but are rewarded -2 if Eve reconstructs the message (the rewards sum to 0 if both teams reconstruct the bit). ... |
| simple_push.json | {"adversary_reward": "def adversary_reward(self, agent, world): agent_dist = [np.sqrt(np.sum(np.square(a.state.p_pos - a.goal_a.state.p_pos))) for a in world.agents if not a.adversary]; pos_rew = min(agent_dist); neg_rew = np.sqrt(np.sum(np.square(agent.goal_a.state.p_pos - agent.state.p_pos))); return pos_rew - neg... | {"Adversary": "[self_vel, all_landmark_rel_positions, other_agent_rel_positions]", "Agent": "[self_vel, goal_rel_position, goal_landmark_id, all_landmark_rel_positions, landmark_ids, other_agent_rel_positions]", "Alice": null, "Bob": null, "Eve": null} | {"Adversary": "[no_action, move_left, move_right, move_down, move_up]", "Agent": "[no_action, move_left, move_right, move_down, move_up]"} | [adversary_0, agent_0] | This environment has 1 good agent, 1 adversary, and 1 landmark. The good agent is rewarded based on the distance to the landmark. The adversary is rewarded if it is close to the landmark, and if the agent is far from the landmark (the difference of the distances). Thus the adversary must learn to push the good agent aw... |
| simple_reference.json | {"adversary_reward": "", "agent_reward": "def reward(self, agent, world): agent_reward = 0.0 if agent.goal_a is None or agent.goal_b is None else np.sqrt(np.sum(np.square(agent.goal_a.state.p_pos - agent.goal_b.state.p_pos))); return -agent_reward \n def global_reward(self, world): all_rewards = sum(self.reward(ag... | {"Adversary": "[]", "Agent": "[self_vel, all_landmark_rel_positions, landmark_ids, goal_id, communication]", "Alice": null, "Bob": null, "Eve": null} | {"Adversary": "[]", "Agent": "[say_0, say_1, say_2, say_3, say_4, say_5, say_6, say_7, say_8, say_9] X [no_action, move_left, move_right, move_down, move_up]"} | [agent_0, agent_1] | This environment has 2 agents and 3 landmarks of different colors. Each agent wants to get closer to their target landmark, which is known only by the other agents. Both agents are simultaneous speakers and listeners. Locally, the agents are rewarded by their distance to their target landmark. Globally, all agents are r... |
| simple_speaker_listener.json | {"adversary_reward": "def reward(self, agent, world): a = world.agents[0]; dist2 = np.sum(np.square(a.goal_a.state.p_pos - a.goal_b.state.p_pos)); return -dist2 # squared distance from listener to landmark", "agent_reward": "", "observation": "def observation(self, agent, world): goal_color = agent.goal_b.color... | {"Adversary": "[self_vel, all_landmark_rel_positions, communication]", "Agent": "[goal_id]", "Alice": null, "Bob": null, "Eve": null} | {"Adversary": "[no_action, move_left, move_right, move_down, move_up]", "Agent": "[say_0, say_1, say_2, say_3, say_4, say_5, say_6, say_7, say_8, say_9]"} | [speaker_0, listener_0] | This environment is similar to simple_reference, except that one agent is the ‘speaker’ (gray) and can speak but cannot move, while the other agent is the listener (cannot speak, but must navigate to the correct landmark). |
| simple_spread.json | {"adversary_reward": "", "agent_reward": "def is_collision(self, agent1, agent2): delta_pos = agent1.state.p_pos - agent2.state.p_pos; dist = np.sqrt(np.sum(np.square(delta_pos))); dist_min = agent1.size + agent2.size; return True if dist < dist_min else False \n def reward(self, agent, world): rew = 0; rew -= sum... | {"Adversary": "[]", "Agent": "[self_vel, self_pos, landmark_rel_positions, other_agent_rel_positions, communication]", "Alice": null, "Bob": null, "Eve": null} | {"Adversary": "[]", "Agent": "[no_action, move_left, move_right, move_down, move_up]"} | [agent_0, agent_1, agent_2] | This environment has N agents and N landmarks (default N=3). At a high level, agents must learn to cover all the landmarks while avoiding collisions. More specifically, all agents are globally rewarded based on how far the closest agent is to each landmark (sum of the minimum distances). Locally, the agents are penalized i... |
| simple_tag.json | {"adversary_reward": "def adversary_reward(self, agent, world): rew = 0; shape = False; agents = self.good_agents(world); adversaries = self.adversaries(world); rew -= sum(0.1 * min(np.sqrt(np.sum(np.square(a.state.p_pos - adv.state.p_pos))) for a in agents) for adv in adversaries) if shape else 0; rew += sum(10 for... | {"Adversary": "[self_vel, self_pos, landmark_rel_positions, other_agent_rel_positions, other_agent_velocities]", "Agent": "[self_vel, self_pos, landmark_rel_positions, other_agent_rel_positions, other_agent_velocities]", "Alice": null, "Bob": null, "Eve": null} | {"Adversary": "[no_action, move_left, move_right, move_down, move_up]", "Agent": "[no_action, move_left, move_right, move_down, move_up]"} | [adversary_0, adversary_1, adversary_2, agent_0] | This is a predator-prey environment. Good agents (green) are faster and receive a negative reward for being hit by adversaries (red) (-10 for each collision). Adversaries are slower and are rewarded for hitting good agents (+10 for each collision). Obstacles (large black circles) block the way. By default, there is 1 g... |
| simple_world_comm.json | {"adversary_reward": "def adversary_reward(self, agent, world): rew = 0; shape = True; agents = self.good_agents(world); adversaries = self.adversaries(world); rew -= 0.1 * min(np.sqrt(np.sum(np.square(a.state.p_pos - agent.state.p_pos))) for a in agents) if shape else 0; rew += sum(5 for ag in agents for adv in adv... | {"Adversary": "Normal adversary observations: [self_vel, self_pos, landmark_rel_positions, other_agent_rel_positions, other_agent_velocities, self_in_forest, leader_comm]; Adversary leader observations: [self_vel, self_pos, landmark_rel_positions, other_agent_rel_positions, other_agent_velocities, leader_comm]", "Ag... | {"Adversary": "Normal adversary action space: [no_action, move_left, move_right, move_down, move_up]; Adversary leader discrete action space: [say_0, say_1, say_2, say_3] X [no_action, move_left, move_right, move_down, move_up]", "Agent": "[no_action, move_left, move_right, move_down, move_up]"} | [leadadversary_0, adversary_0, adversary_1, adversary_3, agent_0, agent_1] | This environment is similar to simple_tag, except there is food (small blue balls) that the good agents are rewarded for being near, there are ‘forests’ that hide agents inside from being seen, and there is a ‘leader adversary’ that can see the agents at all times and can communicate with the other adversaries to help ... |
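
The AgentFunctions cells above hold the scenario code flattened to single lines and truncated by the viewer. As a rough illustration of their shape, here is a minimal, self-contained sketch of the simple.json pair. The stand-in classes and the velocity term in the observation are assumptions for the sake of a runnable example; the attribute names (`state.p_pos`, `world.landmarks`) come from the snippets themselves.

```python
import numpy as np

# Hypothetical stand-ins for the MPE agent/world objects; only the
# attribute names used by the snippets in the table are reproduced.
class State:
    def __init__(self, p_pos, p_vel=(0.0, 0.0)):
        self.p_pos = np.asarray(p_pos, dtype=float)
        self.p_vel = np.asarray(p_vel, dtype=float)

class Entity:
    def __init__(self, p_pos):
        self.state = State(p_pos)

class World:
    def __init__(self, landmarks):
        self.landmarks = landmarks

def reward(agent, world):
    # simple.json: negative squared Euclidean distance to the first landmark.
    dist2 = np.sum(np.square(agent.state.p_pos - world.landmarks[0].state.p_pos))
    return -dist2

def observation(agent, world):
    # simple.json (cut off at "retu..." in the table): landmark positions
    # relative to the agent; the full scenario presumably also prepends
    # the agent's own velocity, as sketched here.
    entity_pos = [lm.state.p_pos - agent.state.p_pos for lm in world.landmarks]
    return np.concatenate([agent.state.p_vel] + entity_pos)

agent, world = Entity([0.0, 0.0]), World([Entity([1.0, 1.0])])
print(reward(agent, world))       # -2.0
print(observation(agent, world))  # [0. 0. 1. 1.]
```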
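Several scenarios toggle between a dense ("shaped") reward and a sparse one; the simple_adversary.json adversary_reward above is the clearest case. A standalone sketch of that logic follows, with a simplified signature of my own; the `-distance` shaped term and the `2 * goal_size` sparse threshold are taken from the snippet.

```python
import numpy as np

def adversary_reward(p_pos, goal_pos, goal_size, shaped=True):
    """simple_adversary.json adversary reward, lifted out of the scenario.

    Shaped: dense negative distance to the target landmark.
    Sparse: +5 only when within twice the goal landmark's radius, else 0.
    """
    dist = np.sqrt(np.sum(np.square(np.asarray(p_pos) - np.asarray(goal_pos))))
    if shaped:
        return -dist
    return 5.0 if dist < 2 * goal_size else 0.0

print(adversary_reward([0.0, 0.0], [3.0, 4.0], goal_size=0.1))                # -5.0
print(adversary_reward([0.0, 0.0], [0.1, 0.0], goal_size=0.1, shaped=False))  # 5.0
```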
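The Agents, ObservationSpaces, and ActionSpaces columns can be cross-checked against the live environments, since these scenarios ship with PettingZoo's MPE suite. A hedged usage sketch, assuming a recent PettingZoo release where the scenarios are versioned as `*_v3` (the exact module suffix depends on the installed version):

```python
# pip install "pettingzoo[mpe]"
from pettingzoo.mpe import simple_adversary_v3

env = simple_adversary_v3.env()
env.reset(seed=42)

# Should line up with the [adversary_0, agent_0, agent_1] row above.
for agent in env.possible_agents:
    print(agent, env.observation_space(agent), env.action_space(agent))
```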