Designers, makers and others often use 3D printing to rapidly prototype a range of functional objects, from movie props to medical devices. An accurate print preview is essential for users to be sure the produced object will perform as expected.
However, the previews generated by most 3D printing software focus on functionality, not aesthetics. The printed object may have a different color, texture or shading than the user expected, resulting in multiple prints that waste time, effort and material.
To help users envision what a manufactured object will look like, researchers at MIT and elsewhere have developed an easy-to-use preview tool that puts appearance first.
Users upload a screenshot of the object from their 3D printing software along with a single image of the printing material. Based on this data, the system automatically generates a rendering of the likely appearance of the manufactured object.
The AI-based system, called VisiPrint, is designed to work with a wide range of 3D printing software and can handle any material sample. It accounts not only for the material’s color, but also for its gloss and transparency, and for how the nuances of the fabrication process affect the object’s appearance.
Such aesthetics-focused previews could be particularly useful in fields such as dentistry, helping clinicians produce temporary crowns and bridges that match the appearance of a patient’s teeth, or in architecture, helping designers evaluate the visual impact of models.
“3D printing can be a very wasteful process. Some studies estimate that as much as one-third of the material used goes directly to landfill, often from prototypes that the user throws away. To make 3D printing more sustainable, we want to reduce the number of trials needed to get the prototype you are looking for. The user should not have to try every available printing material before committing to a design,” says Maxine Perroni-Scharf, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on VisiPrint.
She is joined on the paper by Faraz Faruqi, an EECS graduate student; Raul Hernandez, an MIT student; SooYeon Ahn, a graduate student at the Gwangju Institute of Science and Technology; Szymon Rusinkiewicz, a professor of computer science at Princeton University; William Freeman, the Thomas and Gerd Perkins Professor of EECS at MIT and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Stefanie Mueller, an associate professor of EECS and mechanical engineering at MIT and a member of CSAIL. The research will be presented at the ACM CHI Conference on Human Factors in Computing Systems.
True aesthetics
The researchers focused on fused deposition modeling (FDM), the most popular type of 3D printing. In FDM, a filament of printing material is melted and extruded through a nozzle to build up an object, layer by layer.
Generating accurate aesthetic previews is challenging because melting and extrusion can change the appearance of the material, as can the height of each deposited layer and the path the nozzle follows during fabrication.
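The layer-by-layer nature of FDM described above can be made concrete with a small worked example. The function below is an illustrative back-of-the-envelope calculation (not part of VisiPrint): thicker layers mean fewer, more visible layer lines on the printed surface, which is one reason layer height affects appearance.

```python
import math

def layer_count(object_height_mm: float, layer_height_mm: float) -> int:
    # An FDM printer deposits the object in slices of fixed layer height;
    # fewer, thicker layers produce more visible banding on the surface.
    return math.ceil(object_height_mm / layer_height_mm)

# e.g. a 20 mm tall part printed at a common 0.2 mm layer height
layers = layer_count(20.0, 0.2)
```

Halving the layer height doubles the layer count (and print time), illustrating the trade-off between surface quality and speed that a preview tool helps users reason about before printing.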
VisiPrint uses two AI models that work together to overcome these challenges.
The VisiPrint preview relies on two inputs: a screenshot of the digital design from the user’s 3D printing software (called “slicer” software), and an image of the print material, which can be downloaded from an online source or captured from a printed sample.
From these inputs, a computer vision model extracts the features of the material sample that are essential to the object’s appearance.
It feeds those features into a generative AI model that captures the object’s geometry and structure while taking into account the so-called “slicing” pattern the nozzle will follow as it extrudes each layer.
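VisiPrint’s models are not described at the code level in this article, but the two-stage flow above can be sketched schematically. The function names and the crude color/gloss features below are illustrative assumptions, not the actual implementation:

```python
import numpy as np

def extract_material_features(material_img: np.ndarray) -> dict:
    # Stand-in for the computer-vision stage: summarize appearance cues
    # (here just mean color and a crude brightness-variance gloss proxy)
    # from an H x W x 3 image of the printing material.
    flat = material_img.reshape(-1, 3).astype(float)
    return {
        "mean_color": flat.mean(axis=0),
        "gloss_proxy": material_img.astype(float).mean(axis=2).var(),
    }

def generate_preview(design_screenshot: np.ndarray, features: dict) -> np.ndarray:
    # Stand-in for the generative stage: tint the slicer screenshot toward
    # the material's mean color. The real system instead renders the object
    # with a generative model conditioned on the slicing pattern.
    tint = features["mean_color"] / 255.0
    return np.clip(design_screenshot.astype(float) * tint, 0, 255).astype(np.uint8)
```

The real pipeline replaces both stand-ins with learned models; the sketch only shows how the material features flow from the first stage into the second.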
The key to the researchers’ approach is a specialized conditioning method, which fine-tunes the model’s inner workings so that its output follows the slicing pattern and adheres to the constraints of the 3D printing process.
Their conditioning method uses a depth map that preserves the shape and shading of an object, along with an edge map that reflects internal contours and structural boundaries.
“If you don’t get the right balance of these two things, you can end up with the wrong geometry or the wrong slicing pattern. We had to be careful to combine them the right way,” says Perroni-Scharf.
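The balance Perroni-Scharf describes, between a depth map for shape and shading and an edge map for contours and boundaries, can be illustrated with a minimal sketch. The gradient-based edge map and the weighted blend below are simplifying assumptions, not VisiPrint’s actual conditioning:

```python
import numpy as np

def edge_map(gray: np.ndarray) -> np.ndarray:
    # Finite-difference gradients highlight internal contours and
    # structural boundaries; `gray` is a float image in [0, 1].
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)  # normalize to [0, 1]

def conditioning_signal(depth: np.ndarray, gray: np.ndarray,
                        w_depth: float = 0.5, w_edge: float = 0.5) -> np.ndarray:
    # Weighted blend of the two maps; the researchers stress that getting
    # this balance wrong distorts either the geometry or the slicing pattern.
    return w_depth * depth + w_edge * edge_map(gray)
```

Weighting the depth map too heavily would wash out the slicing-pattern edges, while overweighting the edge map would lose the shading cues, which mirrors the trade-off described in the quote above.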
User-oriented system
The team also developed an easy-to-use interface through which users can upload the required images and inspect the generated preview.
The VisiPrint interface allows more advanced creators to adjust many settings, such as how certain colors affect the final look.
Ultimately, the aesthetic preview is intended to complement the functional preview generated by the slicer software, as VisiPrint does not estimate printability, mechanical feasibility, or failure probability.
To evaluate VisiPrint, the researchers conducted a user study in which participants compared the system with other approaches. Nearly all participants found that it produced a better overall appearance, as well as greater texture similarity to the printed objects.
Additionally, VisiPrint generated its preview in about a minute on average, more than twice as fast as any competing method.
“VisiPrint really shined compared to other AI interfaces. If you gave a more general AI model the same screenshots, it might randomly change the shape or use the wrong slicing pattern because it didn’t have direct conditioning,” she says.
In the future, researchers want to address artifacts that can occur when the model preview contains very fine details. They also want to add features that will allow users to optimize parts of the printing process beyond material color.
“It is important to think about the way we make things. We must continue to strive to develop methods that reduce waste. To this end, combining artificial intelligence with the physical manufacturing process is an exciting area of future work,” says Perroni-Scharf.
“‘What you see is what you get’ was the main reason desktop publishing emerged in the 1980s, because it allowed users to get what they wanted the first time. It’s time to get WYSIWYG for 3D printing, too. VisiPrint is a great step in this direction,” says Patrick Baudisch, a professor of computer science at the Hasso Plattner Institute, who was not involved in the work.
This research was funded in part by the MIT Morningside Academy for Design Fellowship and the MIT MathWorks Fellowship.
