---
license: mit
task_categories:
- robotics
tags:
- multi-object-manipulation
- diffusion-models
- behavioral-cloning
- object-centric
---

# EC-Diffuser Dataset

This repository contains the datasets, pretrained agents, and Deep Latent Particles (DLP) representations for the paper [EC-Diffuser: Multi-Object Manipulation via Entity-Centric Behavior Generation](https://huggingface.co/papers/2412.18907).

EC-Diffuser proposes a novel behavioral cloning (BC) approach that leverages object-centric representations and an entity-centric Transformer with diffusion-based optimization. This enables efficient learning from offline image data for multi-object manipulation tasks, leading to substantial performance improvements and compositional generalization to novel object configurations and goals.

*   **Paper:** [EC-Diffuser: Multi-Object Manipulation via Entity-Centric Behavior Generation](https://huggingface.co/papers/2412.18907)
*   **Project Website:** [https://sites.google.com/view/ec-diffuser](https://sites.google.com/view/ec-diffuser)
*   **Code Repository:** [https://github.com/carl-qi/EC-Diffuser](https://github.com/carl-qi/EC-Diffuser)

## Sample Usage

The datasets, pretrained agents, and DLP representations provided here are intended for use with the official [EC-Diffuser code repository](https://github.com/carl-qi/EC-Diffuser). Below are instructions for setting up the environment, downloading the data, and using the provided scripts for evaluation and training.

### Installation

Follow these steps to set up the environment (tested on Python 3.8):

1.  **Create and activate a Conda environment:**

    ```bash
    conda create -n dlp python=3.8
    conda activate dlp
    ```

2.  **Install main dependencies:**
    The full list of dependencies can be found in the `requirements.txt` file within the [code repository](https://github.com/carl-qi/EC-Diffuser).

3.  **Install Diffuser-related packages:**

    ```bash
    cd diffuser
    pip install -e .
    cd ../
    ```

4.  **Setup for the FrankaKitchen environment:**

    Install D4RL by cloning the repository:

    ```bash
    git clone https://github.com/Farama-Foundation/d4rl.git
    cd d4rl
    pip install -e .
    cd ../
    ```

5.  **Finalize environment setup:**

    Run the provided setup script:

    ```bash
    bash setup_env.sh
    ```

    *(If the script requires sourcing, you can also run: `source setup_env.sh`)*

### Downloading Datasets

Download the required datasets, pretrained agents, and DLP representations from this Hugging Face dataset repository:

```bash
git lfs install
git clone https://huggingface.co/datasets/carlq/ecdiffuser-data
```
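After cloning, a quick sanity check can confirm the data is in place before pointing the training and evaluation scripts at it. This is a minimal sketch; the `ecdiffuser-data` directory name comes from the clone command above:

```shell
# Sanity check: confirm the cloned dataset directory exists and list its
# top-level contents (datasets, pretrained agents, DLP representations).
DATA_ROOT="ecdiffuser-data"
if [ -d "$DATA_ROOT" ]; then
  ls "$DATA_ROOT"
else
  echo "missing: $DATA_ROOT (run the git clone step first)"
fi
```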

### Evaluating a Pretrained Agent

You can evaluate the pretrained agents with the following commands. Replace `CUDA_VISIBLE_DEVICES=0,1` with the GPU devices you wish to use (note that the IsaacGym environment must run on GPU 0).

-   **PushCube Agent:**

    ```bash
    CUDA_VISIBLE_DEVICES=0,1 python diffuser/scripts/eval_agent.py --config config.plan_pandapush_pint --num_entity 3 --planning_only
    ```

-   **PushT Agent:**

    ```bash
    CUDA_VISIBLE_DEVICES=0,1 python diffuser/scripts/eval_agent.py --config config.plan_pandapush_pint --push_t --num_entity 3 --push_t_num_color 1 --planning_only
    ```

-   **FrankaKitchen Agent:**

    ```bash
    CUDA_VISIBLE_DEVICES=0,1 python diffuser/scripts/eval_agent.py --config config.plan_pandapush_pint --kitchen --planning_only
    ```
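
To compare an agent across object counts, the evaluation command can be swept in a small shell loop. The sketch below only prints each command (a dry run); remove the leading `echo` to actually launch the evaluations. The entity counts shown are illustrative:

```shell
# Dry run: print the PushCube evaluation command for several entity counts.
# Drop the `echo` to execute the evaluations for real.
for n in 1 2 3; do
  echo CUDA_VISIBLE_DEVICES=0,1 python diffuser/scripts/eval_agent.py \
    --config config.plan_pandapush_pint --num_entity "$n" --planning_only
done
```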

### Training an Agent

Train your own agents using the commands below. Replace `CUDA_VISIBLE_DEVICES=0,1` with the GPU devices you wish to use (note that the IsaacGym environment must run on GPU 0).

-   **Train a PushCube Agent (3 cubes):**

    ```bash
    CUDA_VISIBLE_DEVICES=0,1 python diffuser/scripts/train.py --config config.pandapush_pint --num_entity 3
    ```

-   **Train a PushT Agent (1 T-shaped object):**

    ```bash
    CUDA_VISIBLE_DEVICES=0,1 python diffuser/scripts/train.py --config config.pandapush_pint --push_t --num_entity 1
    ```

-   **Train a FrankaKitchen Agent:**

    ```bash
    CUDA_VISIBLE_DEVICES=0,1 python diffuser/scripts/train_kitchen.py --config config.pandapush_pint --kitchen
    ```