From anything to mesh, like human artists. Official implementation of "MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers".

MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers

Yiwen Chen1,2*, Tong He2†, Di Huang2, Weicai Ye2, Sijin Chen3, Jiaxiang Tang4
Xin Chen5, Zhongang Cai6, Lei Yang6, Gang Yu7, Guosheng Lin1†, Chi Zhang8†
*Work done during a research internship at Shanghai AI Lab.
†Corresponding authors.
1S-Lab, Nanyang Technological University, 2Shanghai AI Lab,
3Fudan University, 4Peking University, 5University of Chinese Academy of Sciences,
6SenseTime Research, 7Stepfun, 8Westlake University

[Demo GIF]

Release

  • [6/17] 🔥🔥 We released the 350M version of MeshAnything.

Contents

  • Installation
  • Usage
  • Important Notes
  • TODO
  • Acknowledgement

Installation

Our environment has been tested on Ubuntu 22 with CUDA 11.8 on A100, A800, and A6000 GPUs.

  1. Clone our repo and create the conda environment:
git clone https://github.com/buaacyw/MeshAnything.git && cd MeshAnything
conda create -n MeshAnything python==3.10.13
conda activate MeshAnything
pip install torch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
pip install flash-attn --no-build-isolation
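
As an optional sanity check, you can verify that PyTorch sees CUDA and that flash-attn imports cleanly (these one-liners are our suggestion, not an original setup step):

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
python -c "import flash_attn"  # should exit silently if the build matches your environment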

Usage

# Gradio Demo
python app.py

# Command line inference

# For mesh

# inference for folder
python main.py --input_dir examples --out_dir mesh_output --input_type mesh

# inference for single file
python main.py --input_dir examples/wand.ply --out_dir mesh_output --input_type mesh

# Preprocess with Marching Cubes first
python main.py --input_dir examples --out_dir mesh_output --input_type mesh --mc

# For point cloud

# Note: if you want to use your own point cloud, please make sure normals are included.
# The file format should be a .npy file with shape (N, 6), where N is the number of points.
# The first 3 columns are the xyz coordinates and the last 3 columns are the normal vectors.
# See the sketch after these commands for one way to produce such a file.

# inference for folder
python main.py --input_dir pc_examples --out_dir pc_output --input_type pc_normal

# inference for single file
python main.py --input_dir pc_examples/mouse.npy --out_dir pc_output --input_type pc_normal
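
One way to produce a compliant .npy is to sample points and normals from an existing mesh with trimesh (a minimal sketch; the file names and sample count are placeholders):

import numpy as np
import trimesh

# Load the source mesh and sample points on its surface.
mesh = trimesh.load("my_shape.obj", force="mesh")
points, face_idx = trimesh.sample.sample_surface(mesh, 8192)

# Take each point's normal from the face it was sampled on.
normals = mesh.face_normals[face_idx]

# Assemble the expected (N, 6) layout: xyz in columns 0-2, normals in columns 3-5.
np.save("my_shape.npy", np.concatenate([points, normals], axis=1).astype(np.float32))

The resulting file can then be passed directly via --input_dir my_shape.npy with --input_type pc_normal.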

Important Notes

  • The input mesh will be normalized to a unit bounding box. The up vector of the input mesh should be +Y for better results; see the sketch after this list for rotating a Z-up mesh.
  • Limited by computational resources, MeshAnything is trained on meshes with fewer than 800 faces and cannot generate meshes with more than 800 faces. The shape of the input mesh should also be sharp enough; otherwise, it is hard to represent it with only 800 faces. As a result, meshes from feed-forward image-to-3D methods often yield poor results due to insufficient shape quality.
  • Generating a mesh takes about 7 GB of GPU memory and roughly 30 seconds on an A6000 GPU.
  • Please refer to https://huggingface.co/spaces/Yiwen-ntu/MeshAnything/tree/main/examples for more examples.
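
For the +Y up-vector note above, here is a minimal preprocessing sketch with trimesh (assuming a Z-up input mesh and a hypothetical file name; adjust the rotation for your data):

import numpy as np
import trimesh

mesh = trimesh.load("my_shape.obj", force="mesh")
# Rotate -90 degrees about X so the +Z up axis becomes +Y.
mesh.apply_transform(trimesh.transformations.rotation_matrix(-np.pi / 2, [1, 0, 0]))
# Bounding-box normalization is handled at inference time, so only the rotation is needed here.
mesh.export("my_shape_y_up.obj")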

TODO

The repo is still under construction; thanks for your patience.

  • Release of training code.
  • Release of larger model.

Acknowledgement

Our code is based on these wonderful repos: