Using Artificial Intelligence to Control Digital Manufacturing

Scientists and engineers are constantly developing new materials with unique properties for use in 3D printing, but figuring out how to print with these materials can be a difficult and expensive puzzle.

Often, an experienced operator must rely on manual trial and error, sometimes making thousands of prints, to determine the ideal parameters that will print a new material consistently and efficiently. These parameters include print speed and how much material the printer deposits.

MIT researchers have now used artificial intelligence to simplify this procedure. They have developed a machine learning system that uses computer vision to monitor the manufacturing process and correct errors related to material processing in real time.

They used simulations to teach a neural network how to adjust printing parameters to minimize error, and then applied that controller to a real 3D printer. Their system printed objects more accurately than any other 3D printing controller they compared it to.

The work avoids the prohibitively expensive process of printing thousands or millions of real-world objects to train a neural network. And it could make it easier for engineers to incorporate new materials into their prints, which could help create objects with special electrical or chemical properties. It can also help the technician adjust the printing process if material or environmental conditions change unexpectedly.

“This project is really the first demonstration of how to create a manufacturing system that uses machine learning to learn complex control policies,” says senior author Wojciech Matusik, an MIT professor of electrical engineering and computer science who leads the Computational Design and Fabrication Group (CDFG) at the Computer Science and Artificial Intelligence Laboratory (CSAIL). “If you have smarter manufacturing machines, they can adapt to a changing environment on the job site in real time to improve productivity or system accuracy. You can squeeze more out of the machine.”

Co-lead authors of the paper are Mike Foshey, a mechanical engineer and project manager at CDFG, and Michal Piovarci, a PhD student at the Institute of Science and Technology Austria. MIT co-authors include Jie Xu, a graduate student in electrical engineering and computer science, and Timothy Erps, a former CDFG technical fellow.

Selection of parameters

Determining the ideal parameters for a digital manufacturing process can be one of the most expensive parts of that process because so much trial and error is required. And when a technician finds a combination that works well, those settings are only ideal for one specific situation. The technician has little data on how the material will behave in other environments, on different hardware, or whether a new batch will exhibit different properties.

Using a machine learning framework is also fraught with challenges. First, the researchers had to measure what was happening in the printer in real time.

To do this, they created a machine vision system using two cameras pointed at the nozzle of a 3D printer. The system illuminates the material as it is deposited and calculates the thickness of the material based on how much light passes through.
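The article doesn’t give the exact calculation, but this kind of through-light measurement is commonly modeled with Beer-Lambert attenuation. Below is a minimal sketch in Python of how a per-pixel thickness map might be recovered; the attenuation coefficient `MU` and the two-frame setup are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Illustrative attenuation coefficient (1/mm); a real system would
# calibrate this per material rather than hard-code it.
MU = 0.8

def estimate_thickness(frame: np.ndarray, backlight: np.ndarray) -> np.ndarray:
    """Estimate deposited-material thickness per pixel from transmitted light.

    Assumes Beer-Lambert attenuation, I = I0 * exp(-MU * t), which gives
    t = -ln(I / I0) / MU. `frame` is the camera image with material present;
    `backlight` is a reference image of the bare, illuminated print bed.
    """
    transmission = np.clip(frame / np.maximum(backlight, 1e-6), 1e-6, 1.0)
    return -np.log(transmission) / MU
```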

“You can think of the vision system as a set of eyes watching the process in real time,” says Foshey.

The controller would then process the images received from the vision system and adjust the feed speed and direction of the printer based on any visible error.
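The researchers’ controller is a learned neural network policy, and the article doesn’t spell out its update rule. Purely as a stand-in, the sketch below shows the general shape of such a feedback step using a simple proportional correction: compare the observed thickness map against the target, then nudge the feed rate against the error.

```python
import numpy as np

def control_step(target: np.ndarray, observed: np.ndarray,
                 feed_rate: float, gain: float = 0.05) -> float:
    """One illustrative closed-loop correction (a plain proportional rule,
    not the learned policy from the paper).

    A positive mean error means material is being over-deposited, so the
    feed rate is reduced; a negative error speeds the feed back up.
    """
    error = float(np.mean(observed - target))
    return feed_rate * (1.0 - gain * error)
```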

However, training a neural network-based controller to understand this manufacturing process is data-intensive, requiring millions of prints. So the researchers created a simulator instead.

Successful simulation

To train their controller, they used a process known as reinforcement learning, in which the model learns through trial and error with rewards. The model was tasked with selecting printing parameters that would create a particular object in a simulated environment. After being shown the expected output, the model was rewarded when its chosen parameters minimized the error between its output and the expected output.

In this case, an “error” means that the model either dispensed too much material, placing it in areas that should have been left open, or dispensed too little, leaving open areas that should have been filled. As the model performed more simulated prints, it updated its control policy to maximize reward, becoming more and more accurate.
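In code, that reward can be expressed as a penalty on both kinds of mistakes. Here is a minimal sketch, assuming the target and the simulated print are binary occupancy maps; the paper’s actual reward formulation is not specified in the article.

```python
import numpy as np

def reward(target_mask: np.ndarray, printed: np.ndarray) -> float:
    """Negative deposition error: penalize material where the target is
    open (over-deposit) and gaps where the target is filled (under-deposit).

    Both arrays are binary occupancy maps of the same shape.
    """
    over = np.logical_and(printed == 1, target_mask == 0).sum()
    under = np.logical_and(printed == 0, target_mask == 1).sum()
    return -float(over + under) / target_mask.size
```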

But the real world is messier than a simulation. In practice, the conditions usually change due to small changes in the printing process or noise. So the researchers created a digital model that approximates the noise produced by a 3D printer. They used this model to add noise to the simulation, which led to more realistic results.
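The article doesn’t describe the form of this noise model. A common approach, shown here purely as an illustrative stand-in, is to perturb the simulator’s nominal deposition with random variation whose parameters are fitted to real prints.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def noisy_bead_width(nominal_width: float, sigma: float = 0.02) -> float:
    """Add printer-like variation to the simulated bead width.

    The Gaussian scale `sigma` (in mm) is a placeholder; the paper fits its
    noise model to the behavior of the real printer.
    """
    return max(0.0, nominal_width + rng.normal(loc=0.0, scale=sigma))
```

Exposing the policy to this variation during training is what made it robust enough to transfer to the real printer.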

“The interesting thing we found was that, by implementing this noise model, we were able to transfer a control policy that was trained purely in simulation to hardware without any physical experimentation,” says Foshey. “After that, we didn’t have to adjust the actual equipment.”

When they tested the controller, it printed objects more accurately than any other control method they evaluated. It performed especially well at infill printing, which is printing an object’s interior. Some other controllers deposited so much material that the printed object bulged, but the researchers’ controller adjusted the print path so that the object remained flat.

Their control policies can even learn how materials spread after deposition and adjust parameters accordingly.

“We were also able to design a control policy that could handle different types of materials on the fly. So if you have a manufacturing process in the field and you want to change the material, you don’t need to revalidate the manufacturing process. You can just load the new material and the controller will automatically adapt,” says Foshey.

Now that they have demonstrated the effectiveness of this technique for 3D printing, the researchers want to develop controllers for other manufacturing processes. They would also like to see how the method can be modified for scenarios with multiple layers of material, or where multiple materials are printed at once. Additionally, their approach assumed each material has a fixed viscosity (“syrupiness”), but a future iteration could use AI to recognize and adjust for viscosity in real time.

Additional co-authors of this work include Vahid Babaei, who leads the Artificial Intelligence Aided Design and Manufacturing Group at the Max Planck Institute; Piotr Didyk, associate professor at the University of Lugano, Switzerland; Szymon Rusinkiewicz, David M. Siegel ’83 Professor of Computer Science at Princeton University; and Bernd Bickel, professor at the Institute of Science and Technology Austria.

The work was supported in part by the FWF Lise-Meitner Program, a European Research Council Starting Grant, and the US National Science Foundation.
