A team from the University of Barcelona (UB) and the Computer Vision Center (CVC) has created CLOTH3D, the first large-scale synthetic 3D dataset aimed at developing deep learning models that simulate clothes on different body types. The dataset, generated artificially and released in open access, is a first step towards better virtual fitting rooms.
More and more people buy clothes online every day, a trend the current pandemic has accelerated. The advantages of shopping this way are clear, but there are drawbacks too: one of the most important is that customers cannot try clothes on before receiving them. To address this problem, researchers have turned to 3D garment generation and modelling, a key task for artificial intelligence and deep learning. These models, which will also ease the work of designers and animators, promise to improve the experience that virtual fitting rooms provide.
Models already exist to simulate clothes on different bodies, but many work only in 2D. The reason is that 3D models need large amounts of data, and little is currently available. There are three main strategies to produce 3D data of dressed people: 3D scanning, inferring 3D geometry from conventional images, and synthetic generation. 3D scans are expensive and cannot separate clothes from the body; they capture the 3D shape as if body and garment were a single object. Datasets that infer 3D clothing geometry from conventional images, in turn, are inexact and cannot properly model the dynamics of clothes. Synthetic data, by contrast, are easy to create and free from measurement errors.
Researchers from the Human Pose Recovery and Behaviour Analysis Group of the Computer Vision Center (CVC) and the University of Barcelona (UB) chose the latter path and created CLOTH3D, the first large-scale synthetic dataset of 3D dressed human sequences, recently published in Computer Vision – ECCV 2020 Workshops. “Since we need a lot of data to create 3D models, we decided to generate our own. We designed and published the largest dataset of this kind, with a wide range of garments and movements”, notes Hugo Bertiche (UB – CVC), also a member of the Institute of Mathematics of the UB (IMUB).
With more than two million samples, CLOTH3D is unique in its variability of garment type, shape, size, tightness and fabric. It can simulate thousands of poses and different body types, producing realistic cloth dynamics. “We built a generation pipeline that creates a unique garment for each sequence in terms of garment type, topology, shape, size, tightness and fabric. While other datasets contain only a few distinct garments, ours has thousands of them”, notes Sergio Escalera (CVC – UB).
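To give a flavour of the idea, the variability axes described above can be sketched as a per-sequence random sampler. This is a minimal, hypothetical illustration of the concept only: the garment names, parameter ranges and field names are assumptions for the example, not the authors' actual pipeline (which generates real 3D garment geometry and simulates it on animated bodies).

```python
import random

# Illustrative assumptions, NOT the CLOTH3D implementation:
GARMENT_TYPES = ["t-shirt", "top", "trousers", "skirt", "dress", "jumpsuit"]
FABRICS = ["cotton", "denim", "silk", "leather"]

def sample_garment(rng: random.Random) -> dict:
    """Draw one unique garment configuration covering the variability
    axes mentioned in the text: type, shape, size, tightness, fabric."""
    return {
        "type": rng.choice(GARMENT_TYPES),
        "fabric": rng.choice(FABRICS),
        "size": rng.uniform(0.8, 1.2),        # scale relative to the body
        "tightness": rng.uniform(0.0, 1.0),   # 0 = loose, 1 = skin-tight
        "shape_seed": rng.randrange(10_000),  # would drive cut/topology variation
    }

rng = random.Random(42)  # fixed seed for reproducibility
dataset = [sample_garment(rng) for _ in range(5)]
for garment in dataset:
    print(garment)
```

Sampling every axis independently per sequence is what makes (almost) every garment in such a dataset distinct, in contrast to datasets built from a handful of fixed garment templates.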
The textile industry is not the only one that stands to benefit from this dataset: “It can also benefit the entertainment industry, since films with computer-generated imagery and video games could become more realistic”, notes Bertiche. However, much work remains: “3D garment modelling through deep learning is still at an early stage. Although our dataset covers most of the variability of everyday clothes, fashion styles are limited only by imagination. Fast, automatic and intelligent garment design could enable many interesting applications. Cloth dynamics, on the other hand, are extremely complex, and the community is only beginning to address them in a smart way. Further exploration is needed”, concludes Bertiche. Moreover, real clothes are more complex than what simulators can reproduce, so deep learning must find the right way to model extremely fine and chaotic details, such as wrinkles, as well as arbitrary geometric objects worn with clothing, such as hats, glasses, gloves, shoes and jewellery.
Bertiche, H.; Madadi, M.; Escalera, S. “CLOTH3D: Clothed 3D Humans”. Computer Vision – ECCV 2020 Workshops. Lecture Notes in Computer Science, vol. 12540, November 2020. DOI: 10.1007/978-3-030-58565-5_21