As process complexity increases, digital twins are becoming key to efficient operations and high product yields, according to a recent SEMI blog. Traditional knowledge-based approaches are losing their effectiveness and increasingly must be augmented with data-driven approaches. The current interest in digital twins, which act as virtual representations of physical systems, is fueled by the convergence of IoT, machine learning, and big data technology. In this blog, we will look at the implementation of a semiconductor manufacturing digital twin: a powerful tool for improving yield by detecting associations between product quality metrics and up to millions of potential predictors, primarily equipment sensor traces and process measurement data.
Moore’s law, even while slowing, continues to drive exponential increases in the performance and storage capacities of integrated circuits (ICs), while also increasing the volumes of data produced by the processes used to manufacture those ICs. As the number and complexity of processing steps grow rapidly, manufacturing process equipment is increasingly well instrumented with sensors. The process complexity necessitates, and the available data volumes enable, a shift to ever more data-driven yield improvement, leveraging the latest big data, machine learning, AI, and streaming technologies.
But when creating digital twins for manufacturing and taking advantage of the opportunities they provide, there are several key challenges to consider:
- The Wide and Big Data Challenge – processing very large numbers of potential predictors
- The Data Preprocessing Challenge – making sense of sensor trace and equipment attribute data
- The Feature Selection Challenge – identifying the important nuggets in the sea of less important variables
- The In-Memory vs. Big Data Analytics Challenge – how to combine the two effectively for best results
- The Performance Challenge – doing all of this rapidly to aid ‘real-time’ decisions
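To make the feature selection challenge concrete, the sketch below shows one common approach: a univariate screen that ranks every candidate predictor column by its absolute correlation with a yield metric, keeping only the top few for more expensive downstream modeling. This is an illustrative example on synthetic data, not TIBCO's actual implementation; all column names and sizes here are invented.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic stand-in for the real data: 200 wafers, each with a yield
# metric and 5,000 candidate predictor columns (in production these
# could number in the millions).
n_wafers, n_predictors = 200, 5000
X = pd.DataFrame(
    rng.normal(size=(n_wafers, n_predictors)),
    columns=[f"sensor_{i}" for i in range(n_predictors)],
)
# Plant a genuine signal in one column so the screen has something to find.
yield_metric = 0.8 * X["sensor_42"] + rng.normal(scale=0.5, size=n_wafers)

# Univariate screen: absolute Pearson correlation of every predictor
# with the yield metric, sorted so the strongest associations come first.
corr = X.corrwith(yield_metric).abs().sort_values(ascending=False)
top_k = corr.head(20)
print(top_k.index[0])
```

A screen like this is embarrassingly parallel across columns, which is why wide-data problems of this kind can be distributed across a big-data cluster before a smaller, in-memory model is fit on the surviving features.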
TIBCO has recently addressed these challenges. Using the TIBCO Connected Intelligence platform, TIBCO built a solution that provides analytic applications involving millions of process variables (logical data columns) associated with product quality and performance measurements. These cutting-edge applications support root-cause, clustering, and other analyses at the die level. Further, the results are available in near real time to enable useful process interventions. With real-time results, manufacturers can identify subtle equipment changes, detect process shift or drift in particular tools, and predict and remedy substandard yield for a lot moving through the manufacturing process.
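The kind of shift and drift detection described above can be illustrated with a classic one-sided CUSUM control scheme, which accumulates small, sustained excursions of a sensor summary away from its target until they cross a decision threshold. This is a generic textbook sketch on synthetic data, not the solution's actual method; the target, slack, and threshold values are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-lot sensor summary: stable for 150 lots, then a subtle
# upward shift of one sigma (the kind of tool drift worth catching early).
baseline = rng.normal(loc=10.0, scale=1.0, size=150)
shifted = rng.normal(loc=11.0, scale=1.0, size=150)
readings = np.concatenate([baseline, shifted])

# One-sided tabular CUSUM: accumulate standardized excursions above the
# target, less a slack k; raise an alarm when the sum exceeds h.
target, sigma = 10.0, 1.0
k, h = 0.5, 5.0            # slack and decision threshold, in sigma units
s, alarm_at = 0.0, None
for i, x in enumerate(readings):
    s = max(0.0, s + (x - target) / sigma - k)
    if s > h:
        alarm_at = i       # index of the lot that triggered the alarm
        break

print(alarm_at)
```

A chart like this flags a sustained half-sigma to one-sigma shift far sooner than eyeballing individual readings, which is the point of running such monitors continuously as lots move through the line.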
This solution is a hybrid big-data plus in-memory system that addresses the new analytic and IT-architecture problems that creating digital twins raises. It delivers practical, actionable insights to owners and stakeholders quickly and at scale, even when hundreds of thousands or millions of variables must be considered.
The following diagram summarizes the solution’s overall architecture and data flows.
Additional contributors to this work: Michael Alperin, Steven Hillion, Ph.D., Siva Ramalingam, Nico Rode
For a deeper dive into these challenges and this solution, including the architecture and specific analytic activities, please download the technical paper “Addressing Process Control Challenges in Big and Wide Data Environments.” In addition, you can watch a demo of the use case described in this blog and visit the TIBCO Manufacturing Solutions Community page for more information about TIBCO’s broader set of solutions for manufacturers.