Atlas3D: Physically Constrained Self-Supporting Text-to-3D for Simulation and Fabrication

1University of California, Los Angeles, 2Amazon (This work is not related to F. Gao’s position at Amazon.), 3University of Utah
*Equal Contributions

Abstract

Existing diffusion-based text-to-3D generation methods primarily focus on producing visually realistic shapes and appearances, often neglecting the physical constraints necessary for downstream tasks. Generated models frequently fail to maintain balance when placed in physics-based simulations or when 3D printed. Such balance is crucial for satisfying user design intentions in interactive gaming, embodied AI, and robotics, where stable models are needed for reliable interaction. It also ensures that 3D-printed objects, such as figurines for home decoration, can stand on their own without additional supports. To fill this gap, we introduce Atlas3D, an automatic, easy-to-implement method that enhances existing Score Distillation Sampling (SDS)-based text-to-3D tools. Atlas3D ensures the generation of self-supporting 3D models that adhere to the physical laws of stability under gravity, contact, and friction. Our approach combines a novel differentiable-simulation-based loss function with physically inspired regularization, and serves as either a refinement or a post-processing module for existing frameworks. We verify Atlas3D's efficacy through extensive generation tasks and validate the resulting 3D models in both simulated and real-world environments.
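The paper's full loss relies on differentiable simulation under gravity, contact, and friction; as a rough illustration of the kind of physically inspired regularization described above, the sketch below penalizes the horizontal offset between a mesh's (crudely estimated) center of mass and the center of its near-ground contact region, a common proxy for static stability. This is a minimal sketch, not the paper's implementation: the function name `standability_regularizer`, the `ground_eps` contact threshold, and the uniform-density center-of-mass estimate are all our assumptions.

import torch

def standability_regularizer(vertices: torch.Tensor, ground_eps: float = 1e-3) -> torch.Tensor:
    """Hypothetical stability proxy (not Atlas3D's exact loss).

    Penalizes the horizontal (xz-plane) distance between the center of
    mass and the centroid of near-ground contact vertices, so gravity
    exerts less toppling torque about the support region.

    vertices: (V, 3) tensor, y-up, with the ground plane taken at the
    lowest vertex.
    """
    com = vertices.mean(dim=0)  # crude uniform-density COM estimate
    ground_y = vertices[:, 1].min()
    # Vertices within ground_eps of the lowest point act as contacts.
    contact = vertices[vertices[:, 1] < ground_y + ground_eps]
    support_center = contact.mean(dim=0)
    # Horizontal offset of the COM from the support region's center.
    offset = com[[0, 2]] - support_center[[0, 2]]
    return (offset ** 2).sum()

# Usage sketch: add the regularizer to an SDS-style objective
# (sds_loss and lambda_stab are placeholders for the host framework).
verts = torch.rand(1000, 3, requires_grad=True)
reg = standability_regularizer(verts)
reg.backward()  # gradients flow back to the vertex positions

In an actual pipeline, a term like this would be weighted and summed with the SDS loss during refinement, whereas Atlas3D's simulation-based loss additionally accounts for the dynamics of the settling object.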

Comparison with Magic3D

We augment Magic3D with Atlas3D.


Comparison with MVDream

We augment MVDream with Atlas3D.


Real World Evaluation

We further validate the standability of models generated with Atlas3D in real-world scenarios.


Standability Generalization

Our generated models also generalize to standing on uneven surfaces.

w/ Atlas3D

w/o Atlas3D

BibTeX

@article{chen2024atlas3d,
  title={Atlas3D: Physically Constrained Self-Supporting Text-to-3D for Simulation and Fabrication},
  author={Yunuo Chen and Tianyi Xie and Zeshun Zong and Xuan Li and Feng Gao and Yin Yang and Ying Nian Wu and Chenfanfu Jiang},
  journal={arXiv preprint arXiv:2405.18515},
  year={2024}
}