Using data historians to optimize batch processes


Background:

For the past two and a half years, LSI has been working with a client at a remote site in the Texas panhandle. This client runs a custom batch process that has enjoyed continued (at times astronomical) growth, which keeps us constantly challenged to keep up with their process. Several times over those years, business opportunities (new customers for our client) have required LSI to enhance the process, rewrite major core chunks of the application to fit the new needs, or add equipment to support the process. The major challenge is that the core PLC code was written long before LSI became a part of the team, and it was written for an entirely different process and business model. The client has grown from a single product that ran almost continuously (one main ingredient, with slight variations to correct the dry matter consistency of the product) to a highly customized batch operation in which one-half to two-thirds of the batches are now customized blends with multiple ingredients.

LSI has learned several things over the last two and a half years that can help our clients improve their own processes, and we would like to share those experiences in this post.

Challenges:

About a year and a half ago, we were challenged to make batches with a tighter tolerance on the main ingredient. The goal was for 95% or more of batches to be within +/-1,000 pounds of target for the main ingredient in a 56,000 – 57,000 pound load, and for 90% of batches to be within +/-500 pounds. The main ingredient makes up between 60% and 100% of the batch. The customer also wanted to increase the fill speed for the main ingredient. At that time, swings of 2,000 to 2,500 pounds in either direction between the final weight and the target weight of the main ingredient were not uncommon. This had not been a major problem for a single-ingredient batch that ran almost 100% of the time, but it did require manual intervention at times and was certainly a hindrance to the goal of doing more blended loads with multiple ingredients. The customer didn’t want to have to adjust the batch’s individual ingredients after the mixer was filled to keep the entire batch in specification, as that would slow down the process and potentially lead to human error. The constraint that made this goal very difficult to attain was that the feed rate for the main ingredient is highly variable during a batch (0 – 50,000 pounds per minute) and is difficult, if not impossible, to hold constant. So, LSI wrote a very simple algorithm that slowed the fill rate at the end of the batch. With it, approximately 97% of the loads came in within 1,000 pounds of the target fill, approximately 90% within 500 pounds, and about 65% within 250 pounds. This was a huge improvement and allowed our client to land a new customer.
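The actual PLC code isn’t something we can share here, but the idea behind that algorithm can be sketched. Below is a minimal Python illustration of a two-speed cutoff: run the feeder fast for the bulk of the fill, drop to a slow “dribble” rate near the target, and cut off early to allow for material still in flight. All of the names and numbers are illustrative assumptions, not the client’s actual logic or settings.

```python
# Illustrative two-speed fill cutoff: run fast until close to target,
# then drop to a dribble rate, and stop early to allow for material
# still in flight ("preact"). Every name and value here is assumed
# for illustration; this is not the client's actual PLC logic.

FAST_RATE_SP = 40_000.0    # lb/min feeder setpoint for the bulk of the fill
DRIBBLE_RATE_SP = 2_000.0  # lb/min setpoint near the end of the fill
SLOWDOWN_BAND = 3_000.0    # lb remaining at which to drop to the dribble rate
PREACT = 150.0             # lb expected to land after the feeder is stopped


def feeder_setpoint(target_lb: float, mixer_weight_lb: float) -> float:
    """Return the feeder rate setpoint (lb/min) for the current scan."""
    remaining = target_lb - mixer_weight_lb
    if remaining <= PREACT:
        return 0.0            # stop now; in-flight material finishes the fill
    if remaining <= SLOWDOWN_BAND:
        return DRIBBLE_RATE_SP
    return FAST_RATE_SP


# Example: 56,500 lb target, mixer currently at 54,200 lb
print(feeder_setpoint(56_500.0, 54_200.0))  # -> 2000.0 (in the dribble zone)
```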

However, the client now needs even more of the loads to be within the 500-pound specification, both because of requirements from another new customer and because of some extremely tight tolerances on certain ingredients. Some of these ingredients make up less than 100 pounds of the entire batch, so if the main ingredient gets far out of specification, the entire load can be out of specification.

Solution:

So, LSI has started using the plant’s data historian, which was unavailable when the original algorithm was written, to analyze why some overshoots and undershoots still occur. This data gathering and reporting is allowing LSI to “tune” the algorithm to get the accuracy needed. The feed rate at the end of the fill cycle is being correlated to overshoot and undershoot values. If the feed rate is below a certain value, accuracy within 250 pounds is nearly guaranteed; outside of this rate, the results are more variable. LSI has noticed that the plant is now achieving much higher feed rates during the filling process than when the algorithm was first written, and because of this, the “golden” feed rates at the end of the batch are not being achieved as often. LSI uses a formatted spreadsheet report to identify batches out of tolerance (each has a time and date stamp of when the batch ended), then uses trends of key data during the batch (feed rates, mixer weights, motor currents, equipment cutoffs, etc.) to analyze why the “golden” feed rates were not achieved and why the batch went out of tolerance.
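As a rough illustration of that correlation step, here is a Python sketch against a per-batch export of the historian data. The file name, column names, and the candidate “golden” rate ceiling are all assumptions for illustration; they are not the plant’s actual tags or values.

```python
# Illustrative correlation of end-of-fill feed rate vs. over/undershoot.
# Assumes a per-batch historian export with hypothetical column names;
# none of these names or thresholds come from the actual plant system.
import pandas as pd

batches = pd.read_csv("batch_history.csv")  # hypothetical export file
# Expected columns (assumed): batch_end_time, end_feed_rate_lb_min,
# deviation_lb (actual minus target weight of the main ingredient)

# How strongly does the final feed rate track the size of the miss?
print(batches["end_feed_rate_lb_min"].corr(batches["deviation_lb"].abs()))

# Split batches by whether they finished above a candidate "golden" rate
GOLDEN_RATE_MAX = 8_000.0  # lb/min, illustrative ceiling
fast = batches[batches["end_feed_rate_lb_min"] > GOLDEN_RATE_MAX]
slow = batches[batches["end_feed_rate_lb_min"] <= GOLDEN_RATE_MAX]

for label, group in [("slow finish", slow), ("fast finish", fast)]:
    within_250 = (group["deviation_lb"].abs() <= 250).mean() * 100
    print(f"{label}: {len(group)} batches, {within_250:.1f}% within 250 lb")
```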

LSI has been able to tune the algorithm for more repeatable performance. Over a recent 38-hour period, 349 batches were run: 99.7% were within 1,000 pounds of target for the main ingredient (only one batch missed), 96% were within 500 pounds, and 75% were within 250 pounds. Without the correlation provided by the historical data and trends, this tuning would not have been possible.
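Those tolerance-band percentages are easy to reproduce from the same kind of per-batch export, again assuming a hypothetical deviation column:

```python
# Tolerance-band summary over a set of batches, continuing with the
# hypothetical deviation_lb column from the sketch above.
import pandas as pd

batches = pd.read_csv("batch_history.csv")  # hypothetical export file
dev = batches["deviation_lb"].abs()

for band in (1_000, 500, 250):
    pct = (dev <= band).mean() * 100
    print(f"within {band:>5} lb: {pct:.1f}% of {len(dev)} batches")
```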

Figure 1 - Formatted spreadsheet. Purple rectangle shows final feed rate (more green = higher rate). Black rectangle shows overshoot (more red = worse overshoot).


Figure 2 - Scatter plot showing improvements in accuracy over a 3-day period.


Figure 3 - Mixer trend: light blue line shows ingredient 1 target, pink shows ingredient 1 actual, green is overall mixer weight, and dark blue shows the feed rate into the mixer.

Lessons Learned:

Being able to correlate data from a process historian has allowed LSI to become the “process engineer” and optimize the process itself through the PLC code. Many companies collect all kinds of data, but the real value of having the data in the first place is being able to actually act on it. The company must also have the intelligence within its own ranks, or within its partner network, to act on the data and make the needed improvements. This has to be done by people who understand the process; without intimate process knowledge, the data has limited value.

LSI will be working with this client in the future to put together more reports that will allow them and us to find opportunities to optimize the manufacturing process itself and/or the operators’ work flow. We will also be looking for lost revenue opportunities (i.e., if the truck scale sat empty too long, meaning there was no truck to fill and send to a customer, this may point to issues within the customer’s own supply chain). I am sure the customer will continue to challenge us to further improve and optimize the process.
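As one example of what such a report might look for, here is a Python sketch that scans a historian trend of the truck scale weight for stretches where the scale sat empty longer than some threshold. The file, column names, and thresholds are assumptions for illustration, not the plant’s actual tags or limits.

```python
# Illustrative search for "scale sat empty too long" gaps in a historian
# trend of truck scale weight. File, columns, and thresholds are assumed.
import pandas as pd

scale = pd.read_csv("truck_scale_weight.csv", parse_dates=["timestamp"])
# Expected columns (assumed): timestamp, weight_lb

EMPTY_LB = 1_000.0                # below this, treat the scale as empty
MAX_IDLE = pd.Timedelta("15min")  # acceptable gap between trucks

empty = scale["weight_lb"] < EMPTY_LB
# Label each contiguous run of empty / non-empty samples
runs = (empty != empty.shift()).cumsum()
for _, run in scale[empty].groupby(runs[empty]):
    idle = run["timestamp"].iloc[-1] - run["timestamp"].iloc[0]
    if idle > MAX_IDLE:
        print(f"scale empty for {idle} starting {run['timestamp'].iloc[0]}")
```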

Collecting data for the sake of collecting data is useless. Putting the data to work in optimizing the process or work flow is priceless.


About Jim Gavigan

Business Development Manager at Logical Systems, LLC. Contact: jgavigan@logicalsysinc.com