
Hi there! I am Zhou Xian, a final-year PhD student in the Robotics Institute at Carnegie Mellon University, advised by Katerina Fragkiadaki. I am broadly interested in robotics, computer vision, and world model learning. Prior to CMU, I completed my Bachelor's degree at Nanyang Technological University in Singapore, working with Pham Quang Cuong and I-Ming Chen. I have also spent wonderful summers as a research intern at Meta AI with Akshara Rai, and at the MIT-IBM Watson AI Lab with Chuang Gan.

My current research focuses on building unified neural policies and a data engine for robotics research and beyond.

I am also an independent landscape photographer, with a special interest in aerial and celestial photography. I was affiliated with the Visual China Group (视觉中国) and was one of its top 10 contributors. My images have received tens of millions of views and nearly a million downloads across platforms like Unsplash, Notion, Trello, and Medium. Here are some of my personal favorite photos of all time.

I get emotional watching the Milky Way dance, every single time.

[Contact] If you are interested in chatting about robots, AI, research, photography, dreams, stand-up comedy, Honor of Kings, wine tasting, kittens🐱, the best superpower, the illusion of free will and consciousness, or anything else that makes absolutely no sense apart from being fun or romantic :P, feel free to shoot me an email.

[2024-5] [New] RoboGen and RL-VLM-F are accepted to ICML 2024. 

[2024-1] [New] Gen2Sim is accepted to ICRA 2024. 

[2024-1] [New] ThinShellLab and DiffTactile are accepted to ICLR 2024. 

[2023-11] Introducing RoboGen, a generative robotic agent for automated and diverse skill acquisition.

[2023-9] We are organizing a workshop on the topic of "Towards Generalist Robots" at CoRL 2023.

[2023-8] We released a white paper envisioning a promising path towards Generalist Robots!

[2023-8] Act3D and ChainedDiffuser are accepted to CoRL 2023.

[2023-6] RoboNinja and EBMPlanner are accepted to RSS 2023.

[2023-1] FluidLab and SoftZoo are accepted to ICLR 2023.



(* indicates equal contribution)


Towards Generalist Robots: A Promising Paradigm via Generative Simulation

Zhou Xian, Theophile Gervet, Zhenjia Xu, Yi-Ling Qiao, Tsun-Hsuan Wang, Yian Wang, Yufei Wang

White paper, arXiv



RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation

Yufei Wang*, Zhou Xian*, Feng Chen*, Tsun-Hsuan Wang, Yian Wang, Katerina Fragkiadaki, Zackory Erickson, David Held, Chuang Gan

ICML 2024

[Project Page] | [Paper] | [Code]

RL-VLM-F: Reinforcement Learning from Vision Language Foundation Model Feedback

Yufei Wang, Zhanyi Sun, Jesse Zhang, Zhou Xian, Erdem Bıyık, David Held, Zackory Erickson

ICML 2024

[Project Page] | [Paper] | [Code]


Gen2Sim: Scaling up Robot Learning in Simulation with Generative Models
Pushkal Katara*, Zhou Xian*, Katerina Fragkiadaki

ICRA 2024

[Project Page] | [Paper] | [Code]


Thin-Shell Object Manipulations With Differentiable Physics Simulations

Yian Wang*, Juntian Zheng*, Zhehuan Chen, Zhou Xian, Gu Zhang, Chao Liu, Chuang Gan

ICLR 2024 (Spotlight)

[Project Page] | [Paper] | [Code]


DiffTactile: A Physics-based Differentiable Tactile Simulator for Contact-rich Robotic Manipulation
Zilin Si*, Gu Zhang*, Qingwei Ben*, Branden Romero, Zhou Xian, Chao Liu, Chuang Gan

ICLR 2024

[Project Page] | [Paper] | [Code]


ChainedDiffuser: A Unified Neural Architecture for Multimodal Multi-task Robotic Manipulation

Zhou Xian*, Nikolaos Gkanatsios*, Theophile Gervet*, Katerina Fragkiadaki

CoRL 2023

[Project Page] | [Paper] | [Code]


Act3D: Infinite Resolution Action Detection Transformer for Robotic Manipulation

Theophile Gervet*, Zhou Xian*, Nikolaos Gkanatsios, Katerina Fragkiadaki

CoRL 2023

[Project Page] | [Paper] | [Code]


Energy-based Models are Zero-Shot Planners for Compositional Scene Rearrangement

Nikolaos Gkanatsios*, Ayush Jain*, Zhou Xian, Yunchu Zhang, Christopher Atkeson, Katerina Fragkiadaki

RSS 2023

[Project Page] | [Paper] | [Code]


RoboNinja: Learning an Adaptive Cutting Policy for Multi-Material Objects

Zhenjia Xu, Zhou Xian, Xingyu Lin, Cheng Chi, Zhiao Huang, Chuang Gan, Shuran Song

RSS 2023

[Project Page] | [Paper] | [Code]


FluidLab: A Differentiable Environment for Benchmarking Complex Fluid Manipulation

Zhou Xian, Bo Zhu, Zhenjia Xu, Hsiao-Yu Tung, Antonio Torralba, Katerina Fragkiadaki, Chuang Gan

ICLR 2023 (Spotlight)

[Project Page] | [Paper] | [Code]

SoftZoo: A Soft Robot Co-design Benchmark For Locomotion In Diverse Environments

Tsun-Hsuan Wang, Pingchuan Ma, Andrew Spielberg, Zhou Xian, Hao Zhang, Joshua Tenenbaum, Daniela Rus, Chuang Gan

ICLR 2023

[Project Page] | [Paper] | [Code]


HyperDynamics: Meta-Learning Object and Agent Dynamics with Hypernetworks

Zhou Xian, Shamit Lal, Hsiao-Yu Tung, Emmanouil Antonios Platanios, Katerina Fragkiadaki

ICLR 2021

[Project Page] | [Paper] | [Code]


3D-OES: Viewpoint-Invariant Object-Factorized Environment Simulators

Hsiao-Yu Fish Tung*, Zhou Xian*, Mihir Prabhudesai, Shamit Lal, Katerina Fragkiadaki

CoRL 2020

[Project Page] | [Paper]


Learning from Unlabelled Videos Using Contrastive Predictive Neural 3D Mapping

Adam W. Harley, Fangyu Li, Shrinidhi K. Lakshmikanth, Zhou Xian, Hsiao-Yu Fish Tung, Katerina Fragkiadaki

ICLR 2020

[Project Page] | [Paper] | [Code]


Graph-Structured Visual Imitation

Maximilian Sieb*, Zhou Xian*, Audrey Huang, Oliver Kroemer, Katerina Fragkiadaki

CoRL 2019

[Project Page] | [Paper] | [Code]


Domain Randomization for Macromolecule Structure Classification and Segmentation in Electron Cryo-tomograms

Chengqian Che*, Zhou Xian*, Xiangrui Zeng, Xin Gao, Min Xu

BIBM 2019



Can Robots Assemble an IKEA Chair?

Francisco Suarez-Ruiz, Zhou Xian, Quang-Cuong Pham

Science Robotics 2018

[Paper] | [Preprint] | [Video]


Closed-Chain Manipulation of Large Objects by Multi-Arm Robotic Systems
Zhou Xian, Puttichai Lertkultanon, Quang-Cuong Pham

IEEE Robotics and Automation Letters 2017

[Paper] | [Video]
