Update README.md
README.md
CHANGED
@@ -6,119 +6,46 @@ colorTo: indigo
 sdk: static
 pinned: false
 ---
-# HCIS-Lab – Human-centered Intelligent Systems Lab
-
-**Welcome to the official space for the HCIS-Lab (Human-centered Intelligent Systems Lab) at National Yang Ming Chiao Tung University.**
-This repository and space showcase our research, team, publications, and open resources on human-centered AI, robotics, and intelligent systems.
-
----
-
-## Overview
-
-The **Human-centered Intelligent Systems Lab (HCIS-Lab)** is led by **Assoc. Prof. Yi-Ting Chen** and focuses on designing *intelligent systems that augment human capabilities* and improve everyday life. Our work spans:
-
-* Artificial Intelligence
-* Computer Vision & Machine Learning
-* Robotics and Human-Robot Interaction
-* Developmental and Cognitive Science
-* Multimodal Representations (vision, tactile, dynamics)
-
-We emphasize **human-centric design**, interpretable models, and *user co-design principles* in all our research initiatives.
-
----
-
-## 🧪 Research Areas
-
-HCIS-Lab's research spans two major thematic clusters:
-
-### Intelligent Driving Systems
-
-Exploring dynamic scene understanding, risk assessment, and large-scale scenario modeling for autonomous systems. Projects include:
-
-* Visual risk object identification
-* Scenario-based benchmarks for risk assessment
-* Traffic scene representation learning
-
-### 🤖 Assistive & Mobile Manipulation
-
-Developing AI systems that interact with real environments to assist humans:
-
-* Robot manipulation and skill acquisition
-* Task-aware robotic policies
-* Food pickup and food-serving robot frameworks
-
----
-
-## Publications & Projects
-
-HCIS-Lab maintains a broad portfolio of peer-reviewed research outputs, including:
-
-🔹 CVPR, ICRA, ICCV, NeurIPS, and IROS papers
-🔹 Systems and benchmarks for scene understanding and robotics
-🔹 Cross-modal learning methods and interpretable AI frameworks
-
-Selected research highlights:
-
-* **RiskBench:** a scenario-based risk identification benchmark for autonomous vehicles.
-* **SKT-Hang:** semantic keypoint trajectory methods for everyday object manipulation.
-
-> For the full publication list and links to slides and code, see the Publications page of the HCIS-Lab website.
-
----
-
-## 👥 Research Team
-
-HCIS-Lab comprises a diverse team of Ph.D. students, Master's students, research assistants, undergraduate contributors, and alumni, with collaborations spanning universities and industry research centers. ([sites.google.com][1])
-
-* **Research Assistants**
-* **Master's & Undergraduate Researchers**
-* **Alumni working in academia and industry**
 
 ---
 
-##
-HCIS-Lab
 
 ---
 
-##
-
-HCIS-Lab is actively **looking for motivated students and researchers** (Ph.D., research assistant, M.S., and undergraduate) interested in *human-centered systems research*. Prospective candidates are welcome to reach out with:
-
-* CV and representative work
-* Research interests and goals
-
-| Resource | Link |
-| --- | --- |
-| HCIS-Lab Homepage | [https://sites.google.com/site/yitingchen0524/hcis-lab](https://sites.google.com/site/yitingchen0524/hcis-lab) |
-| Publications Page | [https://sites.google.com/site/yitingchen0524/publications_1](https://sites.google.com/site/yitingchen0524/publications_1) |
-| Mobile Manipulation | [https://sites.google.com/site/yitingchen0524/mobile-manipulation](https://sites.google.com/site/yitingchen0524/mobile-manipulation) |
-| Research Areas | [https://sites.google.com/site/yitingchen0524/research](https://sites.google.com/site/yitingchen0524/research) |
 
 ---
 
-##
 
 ---
 
-✨ *
+# HCIS-Lab – Human-centered Intelligent Systems Lab
+
+Welcome to the official Hugging Face space of the
+**Human-centered Intelligent Systems Lab (HCIS-Lab)** at National Yang Ming Chiao Tung University.
+
+We develop **human-centered AI and robotics systems** for real-world applications.
 
 ---
 
+## About
+
+HCIS-Lab is led by **Assoc. Prof. Yi-Ting Chen** and focuses on:
+
+- Artificial Intelligence
+- Computer Vision & Machine Learning
+- Robotics & Human-Robot Interaction
+- Multimodal Learning
+
+Our goal is to build intelligent systems that are **useful, interpretable, and human-centric**.
 
 ---
 
+## 🧪 Research Focus
+
+### Intelligent Driving
+- Risk assessment and scene understanding
+- Autonomous driving benchmarks
+
+### 🤖 Assistive Robotics
+- Robot manipulation and skill learning
+- Service and assistive robots
 
 ---
 
+## More Information
+
+- Homepage: https://sites.google.com/site/yitingchen0524/hcis-lab
+- Publications: https://sites.google.com/site/yitingchen0524/publications_1
 
 ---
 
+✨ *Maintained by HCIS-Lab*
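For context, the `sdk: static` and `pinned: false` lines that appear as unchanged context in the diff belong to the Space's YAML front matter at the top of README.md, which Hugging Face Spaces use for configuration. A minimal sketch consistent with the fields visible in this diff (`colorTo: indigo`, `sdk: static`, `pinned: false`); the `title`, `emoji`, and `colorFrom` values shown here are assumptions, not taken from the diff:

```yaml
---
title: HCIS-Lab   # display name of the Space (assumed value)
emoji: 🤖         # thumbnail emoji (assumed value)
colorFrom: blue   # thumbnail gradient start (assumed value)
colorTo: indigo   # thumbnail gradient end (visible in the diff hunk header)
sdk: static       # serve the repository as a static site
pinned: false     # do not pin the Space on the profile page
---
```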