MIT researchers develop AI-driven tool that can personalise 3D printable models

MIT researchers developed a user-friendly interface that enables a maker to customise the colour, texture, and shape of the aesthetic elements of an open-source 3D model from an online repository, without affecting the functionality of the fabricated object. Image credit: MIT

Researchers from the Massachusetts Institute of Technology (MIT) have introduced Style2Fab, a generative AI tool that allows users to easily add custom design elements to 3D models without affecting their functionality.

With Style2Fab, designers can quickly customise models of 3D-printable objects, such as assistive devices, without hampering their functionality, the research team said in a news release. 

Unlike traditional methods that require complex computer-aided design (CAD) software, Style2Fab lets users describe their design preferences using natural language prompts. 

Under the hood, a generative AI model called Text2Mesh interprets these descriptions and applies the changes to the aesthetic segments of the model while leaving its functional parts intact.
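
To make that separation concrete, the sketch below shows the general idea rather than the released Style2Fab or Text2Mesh code: a text-driven style pass is applied only to segments labelled aesthetic, while functional segments pass through untouched. The stylise() body here is a stand-in (a small random surface perturbation) for the far more involved text-guided mesh optimisation.

```python
# Illustrative sketch only: segment names, labels, and the stylise() placeholder
# are assumptions, not the actual Style2Fab or Text2Mesh implementation.
import numpy as np

def stylise(vertices: np.ndarray, prompt: str) -> np.ndarray:
    """Placeholder for a text-guided mesh edit; here, a tiny prompt-seeded perturbation."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return vertices + 0.01 * rng.standard_normal(vertices.shape)

def stylise_model(segments: list[dict], prompt: str) -> list[dict]:
    """Apply the style prompt to aesthetic segments only, preserving functional ones."""
    styled = []
    for seg in segments:
        if seg["label"] == "aesthetic":
            styled.append({**seg, "vertices": stylise(seg["vertices"], prompt)})
        else:
            styled.append(seg)  # functional geometry (e.g. a thread or hinge) is untouched
    return styled

# Example: a two-segment model where only the decorative shell is restyled.
model = [
    {"name": "screw_interface", "label": "functional", "vertices": np.zeros((4, 3))},
    {"name": "outer_shell", "label": "aesthetic", "vertices": np.ones((4, 3))},
]
styled = stylise_model(model, "engraved wooden texture")
```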

In particular, the researchers said Style2Fab uses machine learning to analyse the model’s topology, tracking where the geometry changes, such as at curves or angles where surfaces meet. 

It then divides the model into segments and compares each segment against a dataset of 294 annotated 3D models to classify it as functional or aesthetic. 
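
That classification step can be illustrated with a small, self-contained sketch: each segment is reduced to a handful of geometric features and labelled by a nearest-neighbour vote against annotated examples. The feature values, feature names, and labels below are invented placeholders, not the actual 294-model dataset used by Style2Fab.

```python
# Illustrative nearest-neighbour classification of mesh segments.
# Reference features and labels are made up for demonstration purposes.
import numpy as np

# Annotated reference segments: [mean curvature, contact-area ratio, symmetry score]
reference_features = np.array([
    [0.02, 0.85, 0.95],   # e.g. a screw-thread interface   -> functional
    [0.05, 0.90, 0.98],   # e.g. a snap-fit clip            -> functional
    [0.40, 0.10, 0.30],   # e.g. a decorative shell         -> aesthetic
    [0.55, 0.05, 0.20],   # e.g. an ornamental handle cover -> aesthetic
])
reference_labels = np.array(["functional", "functional", "aesthetic", "aesthetic"])

def classify_segment(features: np.ndarray, k: int = 3) -> str:
    """Label a segment by majority vote among its k nearest annotated neighbours."""
    distances = np.linalg.norm(reference_features - features, axis=1)
    nearest = reference_labels[np.argsort(distances)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]

# A new segment with low curvature and a large contact area is likely functional.
print(classify_segment(np.array([0.03, 0.80, 0.90])))   # -> "functional"
```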

“Style2Fab would make it very easy to stylise and print a 3D model, but also experiment and learn while doing it,” said Faraz Faruqi, a computer science graduate student and lead author of a paper introducing Style2Fab.

Additionally, the researchers ran a study with makers who had varying degrees of 3D modelling experience and found that Style2Fab was useful in different ways depending on a maker’s expertise. 

The interface was simple enough for novice users to grasp and use to stylise designs, while also offering fertile ground for experimentation with a low barrier to entry, the team said. 

The researchers said they plan to continue refining the algorithm so it produces more accurate shapes, and to extend the system towards generating new 3D models from scratch. 

They are also excited about potential applications in the medical field, where healthcare professionals and patients with little 3D printing experience could quickly produce objects such as splints or casts with the assistance of an AI tool. 

Faruqi collaborated with his advisor, co-senior author Stefanie Mueller, an associate professor in the MIT departments of Electrical Engineering and Computer Science and Mechanical Engineering, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) who leads the HCI Engineering Group; co-senior author Megan Hofmann, assistant professor at the Khoury College of Computer Sciences at Northeastern University; as well as other members and former members of the group.

Their findings will be presented at the ACM Symposium on User Interface Software and Technology (UIST).