Fall Research Expo 2023

Improving the Precision of Direct-Write 3D Printing via Computer Vision and Machine Learning

This research project aimed to improve Direct Ink Writing (DIW) 3D printing through the integration of machine learning, computer vision, and multithreading. DIW is an advanced additive manufacturing method that enables the creation of 3D structures with desired designs and compositions. During this process, viscoelastic inks are methodically pushed through a nozzle, layer by layer, to construct scaffolds and various 3D forms on a digitally controlled platform. DIW's low cost, simplicity, and ability to combine different materials and introduce multifunctionality in a single processing step have attracted significant attention from both research and industry.

However, like any manufacturing process, DIW printing is susceptible to errors, including over- and under-extrusion and ink clogging, which arise in part from its use of soft, viscoelastic materials. These errors produce defects that hinder the final product’s intended structural performance.

To address these challenges, we constructed a Convolutional Neural Network (CNN) that classifies error types in real time by analyzing close-up images of the nozzle and the extruded material. The goal was to detect and rectify errors during the printing process itself, enabling instantaneous adjustments to printing parameters and mitigating defects at their inception. To demonstrate the utility of our approach, we printed with the versatile silicone polymer polydimethylsiloxane (PDMS), which has applications ranging from microfluidics to soft robotics.

Our intelligent DIW 3D printer setup featured a customized printer and camera. The DIW printer used compressed air to extrude PDMS and moved according to G-Code commands generated by Python. An IDS industrial camera was fixed onto the printer so that it moved with the nozzle and captured the extrusion of material.
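For illustration, the sketch below shows one way such G-Code could be generated from Python. The serpentine raster pattern, feed rate, and helper function are assumptions for illustration, not the project's actual toolpaths.

```python
# A minimal sketch of Python-generated G-Code for a DIW toolpath.
# The raster geometry and feed rate are illustrative assumptions; extrusion
# itself is pressure-driven (compressed air), so only motion is commanded.
def serpentine_moves(length_mm, spacing_mm, lines, feed_mm_min=300):
    """Return G-Code for a simple back-and-forth raster of parallel lines."""
    commands = ["G21", "G90", f"G1 F{feed_mm_min}"]  # mm units, absolute moves
    y = 0.0
    for i in range(lines):
        x = length_mm if i % 2 == 0 else 0.0
        commands.append(f"G1 X{x:.2f} Y{y:.2f}")  # draw one printed line
        y += spacing_mm
        commands.append(f"G1 X{x:.2f} Y{y:.2f}")  # step over to the next line
    return commands

for cmd in serpentine_moves(length_mm=20, spacing_mm=2, lines=5):
    print(cmd)
```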

Our multi-class classification algorithm identified four main error types (high pressure, low pressure, and nozzle positioned too close or too far), plus a fifth class for good quality. Each error type has a specific causal relationship with printing parameters, such as nozzle height and printing speed, which could be adjusted in real time using modified G-Code. The model was trained on a dataset of artificially simulated errors, consisting of about 1,000 manually labeled images for each of the five classes. The CNN architecture employed a 3-block Visual Geometry Group (VGG) design with convolutional layers, pooling layers, and dropout regularization, and the model was optimized using stochastic gradient descent with a categorical cross-entropy loss function.
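A minimal sketch of such a 3-block VGG-style classifier is shown below. Keras is an assumption (the framework is not named above), and the input size, layer widths, and dropout rates are illustrative.

```python
# Sketch of a 3-block VGG-style CNN for the five print-quality classes.
# Framework (Keras), input size, and hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # high/low pressure, nozzle too close/too far, good quality

def build_model(input_shape=(128, 128, 3)):
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    # Three VGG-style blocks: stacked 3x3 convolutions, then pooling.
    for filters in (32, 64, 128):
        model.add(layers.Conv2D(filters, (3, 3), activation="relu", padding="same"))
        model.add(layers.Conv2D(filters, (3, 3), activation="relu", padding="same"))
        model.add(layers.MaxPooling2D((2, 2)))
        model.add(layers.Dropout(0.2))  # dropout regularization per block
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dense(NUM_CLASSES, activation="softmax"))
    # Stochastic gradient descent with categorical cross-entropy, as described.
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```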

During live prediction, three concurrent threads operated: one displaying the live camera feed, one controlling printer movement, and one processing images through the CNN. This allowed continuous monitoring of the printing process and quick adjustments based on the CNN's analysis. The results were promising: the model achieved 87% accuracy on 371 live-test images, which included randomly simulated errors.
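The sketch below shows one way to structure the three threads with Python's standard threading and queue modules. The capture, motion, and prediction bodies are placeholders, since the actual camera and printer interfaces are hardware-specific.

```python
# Sketch of the three-thread structure: camera feed, printer motion, and
# CNN inference. Hardware calls are replaced by labeled placeholders.
import queue
import threading
import time

frame_queue = queue.Queue(maxsize=1)  # pass only the newest frame onward
stop_event = threading.Event()

def camera_loop():
    """Display the live feed and forward frames (capture is a placeholder)."""
    while not stop_event.is_set():
        frame = "frame"                # placeholder for an IDS camera grab
        try:
            frame_queue.put_nowait(frame)
        except queue.Full:
            pass                       # classifier is busy; drop this frame
        time.sleep(0.03)               # roughly 30 fps

def printer_loop():
    """Stream motion commands (G-Code transfer is a placeholder)."""
    for _ in range(100):
        time.sleep(0.05)               # placeholder for one G-Code move

def classifier_loop():
    """Classify incoming frames and trigger parameter corrections."""
    while not stop_event.is_set():
        try:
            frame = frame_queue.get(timeout=0.5)
        except queue.Empty:
            continue
        # placeholder for model.predict(frame) and a G-Code adjustment

printer = threading.Thread(target=printer_loop)
workers = [threading.Thread(target=f, daemon=True)
           for f in (camera_loop, classifier_loop)]
for t in workers:
    t.start()
printer.start()
printer.join()                         # run until the print finishes
stop_event.set()                       # then shut down the helper threads
```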

Future steps involve training models to identify more complex errors and exploring broader scenarios to increase 3D printing accuracy. Nonetheless, our project highlights the potential of integrating machine learning and computer vision with multi-material and DIW 3D printing processes, paving the way for more intelligent and error-resilient additive manufacturing techniques.

PRESENTED BY
PURM - Penn Undergraduate Research Mentoring Program
Engineering & Applied Sciences 2025
Advised By
Jordan R. Raney
Assistant Professor of Mechanical Engineering & Applied Mechanics
