In this study, we present a method that enables individuals to generate 3D garment designs from basic sketches, making a previously specialized creative field more accessible. Our approach takes a single freehand sketch as input and employs a conditional diffusion-based 3D generation network to automatically create high-fidelity 3D garment models. The proposed method comprises two key components: 1) a pre-training phase that learns a 3D prior from a diverse dataset of 3D clothing shapes, capturing the essential characteristics of these forms, and 2) a sketch-to-prior mapping module that connects input sketches to the pre-trained shape manifold, enabling effective 3D shape generation. Our method not only overcomes the limitations of existing approaches but also opens new possibilities for shape prototyping and exploration in fashion design, broadening the scope of creativity for a more diverse audience.
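Although the abstract gives no implementation details, the two-stage design it describes can be illustrated with a minimal PyTorch sketch: a denoising network pre-trained as a 3D shape prior, plus a sketch encoder that maps a freehand sketch into that prior's conditioning space for diffusion sampling. All module names, dimensions, and the use of a flat latent shape code here are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of the two-stage pipeline described above.
# Everything here (names, dims, latent shape code) is a hypothetical
# stand-in for the paper's actual networks and 3D representation.
import torch
import torch.nn as nn

class GarmentDenoiser(nn.Module):
    """Stage 1: denoiser pre-trained on 3D garment shapes to learn a prior
    (here a toy MLP over a flattened latent shape code)."""
    def __init__(self, shape_dim=512, cond_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(shape_dim + cond_dim + 1, 1024), nn.SiLU(),
            nn.Linear(1024, 1024), nn.SiLU(),
            nn.Linear(1024, shape_dim),
        )

    def forward(self, x_t, t, cond):
        # Predict the noise added at timestep t, given sketch conditioning.
        t_emb = t.float().unsqueeze(-1) / 1000.0
        return self.net(torch.cat([x_t, cond, t_emb], dim=-1))

class SketchEncoder(nn.Module):
    """Stage 2: maps a rasterized freehand sketch onto the pre-trained
    shape manifold via a conditioning embedding."""
    def __init__(self, cond_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, cond_dim)

    def forward(self, sketch):
        return self.proj(self.conv(sketch).flatten(1))

@torch.no_grad()
def sample(denoiser, encoder, sketch, shape_dim=512, steps=50):
    """Plain DDPM ancestral sampling conditioned on the input sketch."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    cond = encoder(sketch)
    x = torch.randn(sketch.size(0), shape_dim)  # start from pure noise
    for t in reversed(range(steps)):
        t_batch = torch.full((sketch.size(0),), t)
        eps = denoiser(x, t_batch, cond)
        # Standard DDPM posterior mean, then add noise except at t = 0.
        mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bar[t]) * eps) \
               / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x  # latent garment shape code, decoded to a mesh downstream

# Usage: one 128x128 grayscale sketch -> one 512-d garment shape code.
denoiser, encoder = GarmentDenoiser(), SketchEncoder()
shape_code = sample(denoiser, encoder, torch.rand(1, 1, 128, 128))
```

In this reading, the prior is fixed after stage 1 and only the sketch encoder learns the mapping onto its manifold, which is one plausible interpretation of the "sketch-to-prior mapping module" in the abstract.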