Generative artificial intelligence models have made such an indelible mark on digital content creation that it’s getting harder to recall what the internet was like before them. You can call on these AI tools for clever projects such as videos and photos, but their flair for the creative hasn’t quite crossed over into the physical world just yet.

So why haven’t we seen generative AI-enabled personalized objects, such as phone cases and pots, in places like homes, offices, and stores yet? According to MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers, a key issue is the mechanical integrity of the 3D model.

While AI can help generate personalized 3D models that you can fabricate, those systems often don’t consider the physical properties of the designs they produce. MIT Department of Electrical Engineering and Computer Science (EECS) PhD student and CSAIL engineer Faraz Faruqi has explored this trade-off, creating one generative AI-based system that makes aesthetic changes to designs while preserving their functionality, and another that modifies structures to produce the tactile properties users want to feel.

Making it real 

Together with researchers at Google, Stability AI, and Northeastern University, Faruqi has now found a way to make real-world objects with AI, creating items that are both durable and true to the user’s intended appearance and texture. With the AI-powered “MechStyle” system, users simply upload a 3D model or select a preset asset, such as a vase or a hook, and prompt the tool with images or text to create a personalized version. A generative AI model then modifies the 3D geometry, while MechStyle simulates how those changes will affect particular parts, ensuring that vulnerable areas remain structurally sound. When you’re happy with this AI-enhanced blueprint, you can 3D print it and use it in the real world.

You could select a model of, say, a wall hook, and the material you’ll be printing it with (for example, plastics like polylactic acid). Then, you can prompt the system to create a personalized version, with directions like, “generate a cactus-like hook.” The AI model will work in tandem with the simulation module and generate a 3D model that resembles a cactus while retaining the structural properties of a hook. This green, ridged accessory can then be used to hang up mugs, coats, and backpacks. Such creations are possible thanks, in part, to a stylization process in which the system changes a model’s geometry based on its understanding of the text prompt and on feedback from the simulation module.
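To make that feedback loop a bit more concrete, here is a minimal Python sketch of a prompt-guided stylize-then-simulate cycle, under stated assumptions: the mesh is reduced to a toy array of vertices, and `stylize_step`, `estimate_max_stress`, and the stress limit are illustrative stand-ins, not MechStyle’s actual components.

```python
import numpy as np

rng = np.random.default_rng(0)

def stylize_step(vertices, style_direction, step=0.02):
    """Toy stand-in for the generative model: nudge the geometry slightly
    toward a 'style' direction (in MechStyle this is guided by a text or
    image prompt rather than a fixed vector)."""
    noise = 0.005 * rng.normal(size=vertices.shape)
    return vertices + step * style_direction + noise

def estimate_max_stress(vertices, load=10.0):
    """Toy stand-in for the simulation module: treat the region closest to
    the origin as the thinnest, most heavily loaded part."""
    thickness = np.clip(np.linalg.norm(vertices, axis=1), 1e-3, None)
    return load / thickness.min()

def personalize(vertices, style_direction, stress_limit=500.0, max_steps=50):
    """Apply stylization steps, keeping only changes the simulation accepts."""
    for _ in range(max_steps):
        candidate = stylize_step(vertices, style_direction)
        if estimate_max_stress(candidate) > stress_limit:
            break  # further styling would weaken a vulnerable region
        vertices = candidate
    return vertices

# Example: a toy "hook" of 100 vertices, pushed along a fixed style direction.
hook = rng.normal(size=(100, 3)) + 1.0
styled_hook = personalize(hook, style_direction=np.array([0.0, 0.0, 1.0]))
print("max simulated stress after styling:", estimate_max_stress(styled_hook))
```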

According to CSAIL researchers, 3D stylization used to come with unintended consequences. Their formative study revealed that only about 26 percent of 3D models remained structurally viable after they were modified, meaning that the AI system didn’t understand the physics of the models it was modifying.

“We want to use AI to create models that you can actually fabricate and use in the real world,” says Faruqi, who is a lead author on a paper presenting the project. “So MechStyle actually simulates how GenAI-based changes will impact a structure. Our system allows you to personalize the tactile experience for your item, incorporating your personal style into it while ensuring the object can sustain everyday use.”

This computational thoroughness could eventually help users personalize their belongings, creating, for example, a unique pair of glasses with speckled blue and beige dots resembling fish scales. The system has also produced a pillbox with a rocky texture checkered with pink and aqua spots. Its potential extends to crafting unique home and office decor, like a lampshade resembling red magma. It can even design assistive technology fit to users’ specifications, such as finger splints to aid with hand injuries and utensil grips to aid with motor impairments.

In the future, MechStyle could also be useful in creating prototypes for accessories and other handheld products you might sell in a toy shop, hardware store, or craft boutique. The goal, CSAIL researchers say, is for both expert and novice designers to spend more time brainstorming and testing out different 3D designs, instead of assembling and customizing items by hand.

Staying strong

To ensure MechStyle’s creations can withstand daily use, the researchers augmented their generative AI technology with a type of physics simulation called finite element analysis (FEA). You can imagine a 3D model of an item, such as a pair of glasses, overlaid with a sort of heat map indicating which regions are structurally viable under a realistic amount of weight and which ones aren’t. As the AI refines the model, the physics simulations highlight which parts are getting weaker and prevent further changes.
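As a rough illustration of that heat-map idea, the snippet below flags which elements of a model fall outside a safety margin once per-element stresses are known. The stress values, the assumed yield strength for a PLA-like plastic, and the safety factor are placeholders; a real finite element analysis would compute the stresses from the mesh, material, and load.

```python
import numpy as np

# Placeholder per-element stresses (MPa) of the kind a finite element
# analysis would produce for a loaded model, e.g. a wall hook.
element_stress = np.array([12.0, 35.5, 48.2, 61.7, 22.3, 55.0])

YIELD_STRENGTH_PLA = 60.0   # assumed yield strength for a PLA-like plastic, MPa
SAFETY_FACTOR = 1.5         # assumed margin before a region counts as "at risk"

# A region is structurally viable if its stress stays well under the yield point.
allowed = YIELD_STRENGTH_PLA / SAFETY_FACTOR
at_risk = element_stress > allowed

print("allowed stress (MPa):", allowed)
print("at-risk elements:", np.flatnonzero(at_risk))  # the 'red' zones of the heat map
```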

Faruqi adds that running these simulations every time a change is made drastically slows down the AI process, so MechStyle is designed to know when and where to do additional structural analyses. “MechStyle’s adaptive scheduling strategy keeps track of what changes are happening at specific points in the model. When the GenAI system makes tweaks that endanger certain regions of the model, our approach simulates the physics of the design again. MechStyle will make subsequent modifications to make sure the model doesn’t break after fabrication.”

Combining the FEA process with adaptive scheduling allowed MechStyle to generate objects that were structurally viable in up to 100 percent of the team’s tests. Testing 30 different 3D models with styles resembling things like bricks, stones, and cacti, the team found that the most efficient way to create structurally viable objects was to dynamically identify weak regions and adjust the generative AI process to protect them. In these scenarios, the researchers found that they could either stop stylization completely when a particular stress threshold was reached, or gradually make smaller refinements to keep at-risk areas from approaching that mark.
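One way to picture that adaptive strategy is sketched below, with assumed function names and thresholds: accumulate how much the geometry has changed since the last simulation, rerun the expensive analysis only when that accumulation crosses a trigger, and then either halt stylization or shrink the step size as stresses approach the limit. This is a toy reading of the approach, not MechStyle’s implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_fea(vertices):
    """Placeholder for a full finite element analysis; returns a maximum stress."""
    thickness = np.clip(np.linalg.norm(vertices, axis=1), 1e-3, None)
    return 100.0 / thickness.min()

def adaptive_stylize(vertices, steps=100, change_trigger=0.05,
                     stress_limit=500.0, caution_band=0.8):
    """Stylize the geometry, rerunning the costly simulation only when enough
    change has accumulated; ease off or stop near the stress limit."""
    step_size = 0.02
    accumulated_change = 0.0
    for _ in range(steps):
        delta = step_size * rng.normal(size=vertices.shape)
        vertices = vertices + delta
        accumulated_change += np.abs(delta).max()

        if accumulated_change < change_trigger:
            continue  # not enough change yet to justify another simulation
        accumulated_change = 0.0

        stress = expensive_fea(vertices)
        if stress > stress_limit:
            break                    # stop stylization completely
        if stress > caution_band * stress_limit:
            step_size *= 0.5         # switch to smaller, gentler refinements
    return vertices

styled = adaptive_stylize(rng.normal(size=(100, 3)) + 1.0)
```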

The system also offers two different modes: a freestyle feature that lets the AI quickly visualize different styles on your 3D model, and a MechStyle mode that carefully analyzes the structural impacts of your tweaks. You can explore different ideas in freestyle, then switch to the MechStyle mode to see how those artistic flourishes will affect the durability of particular regions of the model.

CSAIL researchers add that while their system can ensure your model remains structurally sound before it’s 3D printed, it can’t yet improve 3D models that weren’t viable to begin with. If you upload such a file to MechStyle, you’ll receive an error message, but Faruqi and his colleagues intend to address the durability of those faulty models in future work.

What’s more, the team hopes to use generative AI to create 3D models for users, instead of stylizing presets and user-uploaded designs. This would make the system even more user-friendly, so that those who are less familiar with 3D models, or can’t find their design online, can simply generate it from scratch. Let’s say you wanted to fabricate a unique type of bowl, and that 3D model wasn’t available in a repository; AI could create it for you instead.

“While style-transfer for 2D images works incredibly well, not many works have explored how this transfers to 3D,” says Google Research Scientist Fabian Manhardt, who wasn’t involved in the paper. “Essentially, 3D is a much more difficult task, as training data is scarce and changing the object’s geometry can harm its structure, rendering it unusable in the real world. MechStyle helps solve this problem, allowing for 3D stylization without breaking the object’s structural integrity via simulation. This gives people the power to be creative and better express themselves through products that are tailored towards them.”

Faruqi wrote the paper with senior author Stefanie Mueller, who is an MIT associate professor and CSAIL principal investigator, and two other CSAIL colleagues: researcher Leandra Tejedor SM ’24 and postdoc Jiaji Li. Their co-authors are Amira Abdel-Rahman PhD ’25, now an assistant professor at Cornell University, and Martin Nisser SM ’19, PhD ’24; Google researcher Vrushank Phadnis; Stability AI Vice President of Research Varun Jampani; MIT Professor and Center for Bits and Atoms Director Neil Gershenfeld; and Northeastern University Assistant Professor Megan Hofmann.

Their work was supported by the MIT-Google Program for Computing Innovation. It was presented at the Association for Computing Machinery’s Symposium on Computational Fabrication in November.