Neural Network for Product Quality Estimation

by Jim Cahill | Mar 24, 2010 | Measurement Instrumentation, Services, Consulting & Training

Jim Cahill

Chief Blogger, Social Marketing Leader

Neural networks, not the ones in our brains but the ones that came out of artificial intelligence research, date back to the 1940s. This advanced process control (APC) technology has been used in the process industries for many years, in applications ranging from virtual sensors to end-of-batch prediction. Neural networks have been discussed in numerous ModelingAndControl.com blog posts, and several software packages, such as DeltaV Neural, exist to build neural networks for process automation-related control strategies.

I received an email yesterday from an engineer trying to predict quality for a product made in his manufacturing process. His current way to do this was to get quality data from lab samples drawn every 4 hours. If quality problems are not discovered until the results come back from the lab analysis, 4+ hours of production can be impacted. He asked what historical data collection frequency should be used to build the neural network model; in his case, he was using DeltaV Neural.

I turned to Lou Heavner, whom you may recall from an earlier neural network-related post. Lou had a great response and I asked if I could highlight some of it here.

Lou opened:

The sample frequency is related to the process dynamics. Initially you can make an educated guess and later revise if necessary. The Neural model is based on observing inputs that will affect the modeled output. There is likely to be some delay between when an input is observed and when the output is affected.

For example, suppose I have a conveyor with a 10-minute transit time from the point at which some measurement is made (e.g., temperature) to a downstream hopper, and that measurement is important to modeling the output, say quality (e.g., moisture) measured at a sample port on the hopper. If the residence time in the hopper from the conveyor to the sample port is 15 minutes, I would have a sample delay of approximately 25 minutes. This is the time between when the material’s temperature was measured and when it was extracted as a lab sample.

We refer to this in DeltaV as the time to steady state (TSS), although it is more accurately described as the process residence time. DeltaV Neural divides the TSS into 50 equal intervals and reads all inputs at that frequency. In this example with TSS = 25 minutes, the data frequency would be every 30 seconds. The sampling frequency of the DeltaV Neural block can be faster, but should be no slower. Having the Neural block update once per second may put some additional load on the controller, but that is the only penalty of setting the sampling rate faster.
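To put numbers on that interval calculation, here is a minimal Python sketch; the function name is my own, and DeltaV Neural performs this division internally:

```python
# Minimal sketch: DeltaV Neural reads its inputs at TSS / 50
# (per Lou's description above). Function name is illustrative.
def neural_scan_interval_seconds(tss_minutes: float) -> float:
    """Input sampling interval, in seconds, for a given time to steady state."""
    return tss_minutes * 60.0 / 50.0

# Lou's example: 10 min conveyor transit + 15 min hopper residence = 25 min TSS
tss = 10 + 15
print(f"TSS = {tss} min -> sample inputs every "
      f"{neural_scan_interval_seconds(tss):.0f} s")
# TSS = 25 min -> sample inputs every 30 s
```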

If you use a Lab Data Entry block to enter the lab results, the Neural block will “poke” the lab data where it belongs in the file. In truth, it isn’t poking so much as it is defining a delay. It is important to get the lab data timestamp right to build a successful model; that is why we go to such pains to get the delays and sample times correct. The lab data needs to be tagged with the timestamp corresponding to when the sample was extracted. This may be, and in fact probably is, different from the time the sample was scheduled to be extracted, when the lab analyst performed the analysis, when the result was reported, or when the historian was updated.
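To illustrate why the extraction timestamp matters, here is a hypothetical Python sketch that back-calculates extraction times when only the lab-report time was recorded; the fixed 45-minute lab turnaround is an assumed value for illustration, and recording the actual extraction time with each sample is always preferable:

```python
import pandas as pd

# Hypothetical lab results keyed by when the analysis was REPORTED.
lab = pd.DataFrame({
    "reported_at": pd.to_datetime(["2010-03-24 06:45", "2010-03-24 10:40"]),
    "moisture_pct": [4.2, 5.1],
})

# Assumed average extraction-to-report lag (illustrative only).
LAB_TURNAROUND = pd.Timedelta(minutes=45)

# The timestamp the model should see is the extraction time, not the report time.
lab["extracted_at"] = lab["reported_at"] - LAB_TURNAROUND
print(lab[["extracted_at", "moisture_pct"]])
```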

If you use the Lab Data Entry block and select the correct TSS, there should be no problems building a Neural model due to anything related to sampling frequency. If you are unsure of the correct TSS, it is usually better to overestimate than underestimate. The biggest problems are models where some variables have a very long delay and others have a very short delay: in order to see the effect of the variables with the long delay, you may not have the granularity you need on the variables with the shorter delay. You will need data that is accurate and that exhibits typical process variability, since the model is trying to correlate the model inputs with the model output.

Lou also passed along some guidance for building the data set used to create the neural network in offline mode:

It sounds like you are limited to developing your Neural model from historical data. There are many pitfalls to watch out for and avoid. This is called data mining, and sometimes success is simply not possible. The same rule for data frequency applies: it doesn’t matter whether you are using data from 1 hour, 1 shift, 1 day, 1 week, or 1 year. If the data frequency is slower than the real process TSS, the best you can do is create a steady-state model, which will probably suffer in accuracy unless the process is truly at steady state and has been for 1 TSS prior to each lab sample.

Next, historical data is often conditioned, compressed, averaged, or in some way filtered. This weakens the correlations and makes for a poorer model. Ideally, the process inputs should be raw snapshots. Lab data, meanwhile, is often collected at irregular intervals: the sample schedule may call for samples to be taken at 6:00 am every morning, for instance, but the actual sample time may fall anywhere between 5:30 and 6:30, depending on what the sample taker is doing.

What I typically do is import my process data (the Neural inputs) from the historian into Excel. Neural will find the correct time delays, so you do not need to worry about that. I create a separate column for the lab data and populate that column with the word MISSING in each row. Then I copy the lab data into the position that corresponds with the time the sample was extracted; Neural ignores words and uses only numeric values. I also look for any bad or missing values from the historian and replace those cells with the word BAD or MISSING. Once I have the values correctly saved in the spreadsheet, I can save it as a tab-delimited (.txt) file. Make sure the header information is correct and you are ready to train.
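For readers who would rather script this step than work in Excel, here is a rough pandas sketch of Lou's workflow; the tag names, values, and timestamps are illustrative assumptions, not data from any real historian:

```python
import numpy as np
import pandas as pd

# 30-second raw snapshots of the Neural inputs, as exported from the historian.
idx = pd.date_range("2010-03-24 00:00", "2010-03-25 00:00", freq="30s")
proc = pd.DataFrame(
    {"temp_degF": np.random.normal(150, 2, len(idx)),
     "feed_rate_tph": np.random.normal(80, 5, len(idx))},
    index=idx,
)

# Replace any bad or missing historian values with the word BAD;
# Neural ignores words and uses only numeric values.
proc = proc.astype(object).where(proc.notna(), "BAD")

# The lab column defaults to MISSING in every row.
proc["moisture_pct"] = "MISSING"

# Copy each lab result into the row matching its extraction time.
lab_results = {
    pd.Timestamp("2010-03-24 06:12"): 4.2,
    pd.Timestamp("2010-03-24 10:05"): 5.1,
}
for extracted_at, value in lab_results.items():
    row = proc.index[proc.index.get_indexer([extracted_at], method="nearest")[0]]
    proc.loc[row, "moisture_pct"] = value

# Save as a tab-delimited (.txt) file with a header row, ready for training.
proc.to_csv("neural_training_data.txt", sep="\t", index_label="timestamp")
```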

There is a mathematical minimum for the number of lab samples required to build a model, and it is related to the number of model inputs. From practical experience, I find that a model with 10 inputs will require at least 100 lab samples, and all of those samples must show some variability. With 20 inputs, you would need at least double and possibly triple that amount. With lab data collected every 4 hours, you would probably need about 5 weeks’ worth of data, and with a TSS of 25 minutes, the process data would need to be 30-second data. You should also verify the resulting Neural model against a completely different set of data to assure yourself you have a reasonable model.
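As a quick sanity check on those numbers, a back-of-the-envelope sketch:

```python
# Rough arithmetic behind Lou's rule of thumb, not an exact requirement.
samples_per_day = 24 / 4                  # one lab sample every 4 hours
for n_samples in (100, 200, 300):         # 10 inputs; 20 inputs need 2-3x
    weeks = n_samples / samples_per_day / 7
    print(f"{n_samples} lab samples -> about {weeks:.1f} weeks of history")
# 100 -> ~2.4 weeks; 200 -> ~4.8 weeks; 300 -> ~7.1 weeks
```

At roughly 200 samples, that works out to Lou's "about 5 weeks' worth of data."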

If you’re building neural network models for property estimation, virtual sensors, time prediction, or other applications, I hope some of Lou’s experience serves as a shortcut to your successful implementation.

