The days of unabashed and frivolous capital expenditures are over. Companies depend on maintenance managers to ensure that investments like top drives, mixers and pumps operate more efficiently and last longer than ever before. Maintenance managers also face many daily operational challenges, including machine repair costs, machine replacement costs, worker safety concerns, aging equipment, and aging just-in-time inventory that sits in a corner in case failures occur. Although maintenance managers implement an assortment of maintenance techniques, known as the “maintenance mix,” more advanced techniques are usually reserved for only the most critical assets due to up-front costs.
Maintenance managers most commonly rely on a regularly scheduled (preventive) maintenance program. This is a practice we have all subscribed to in our everyday lives, as well—from brushing our teeth each day to changing our vehicle’s oil every three months. Therefore, it’s no surprise that this is the default practice applied to equipment maintenance.
Preventive maintenance is a less-than-ideal practice, which may actually be more costly than running a machine to failure when considering the wasted time, effort and money associated with fixing an asset that may actually be in perfect working order. Fortunately, there is a better way to detect and repair small problems before they grow into costly catastrophes, all without unnecessary or excessive tinkering with the equipment.
Inefficient maintenance — A bigger problem than you think
Inefficient maintenance is a bigger issue than it might seem, as it is prevalent in most applications where critical equipment is in use. The Electric Power Research Institute (EPRI, www.epri.com) has calculated comparative maintenance costs for different maintenance techniques in U.S. dollars per horsepower (HP) per year. Researchers found that a scheduled maintenance strategy is the most expensive to run at $24 per HP. A reactive maintenance (run-to-failure) strategy is the second most costly at $17 per HP, but has the additional cost of compromising safety.
Drawing a parallel to pumps used within oilfield stimulation, maintaining a 1,500 HP motor with a scheduled maintenance strategy would cost approximately $36,000 per year, while a reactive maintenance strategy would cost $25,500 per year, according to the EPRI study. That cost might not seem like much, but when multiplied by the number of rigs across the entire fleet (1,500), the cost skyrockets to $54 million per year for a scheduled maintenance plan and $38 million for a reactive maintenance plan. In fact, according to Forbes Magazine, “one out of every $3 spent on [preventive] (or schedule-based) maintenance is wasted.” Looking at the maintenance costs of large assets across a fleet can start to give us an understanding of the cost of equipment maintenance, but it only begins to tell the story of the true cost of equipment mismanagement.
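The fleet-level figures above follow directly from the EPRI per-HP rates, as a quick check shows (a minimal Python sketch; the function and constant names are mine, and the rates and fleet size are the article's example numbers):

```python
# Annual maintenance rates from the EPRI study (USD per HP per year)
SCHEDULED_RATE = 24   # regularly scheduled (preventive) maintenance
REACTIVE_RATE = 17    # run-to-failure maintenance

MOTOR_HP = 1_500      # one oilfield stimulation-pump motor
FLEET_SIZE = 1_500    # rigs across the entire fleet

def annual_cost(rate_per_hp, hp, units=1):
    """Annual maintenance cost for `units` motors of `hp` horsepower."""
    return rate_per_hp * hp * units

print(annual_cost(SCHEDULED_RATE, MOTOR_HP))              # 36000 per motor
print(annual_cost(REACTIVE_RATE, MOTOR_HP))               # 25500 per motor
print(annual_cost(SCHEDULED_RATE, MOTOR_HP, FLEET_SIZE))  # 54000000 fleet-wide
print(annual_cost(REACTIVE_RATE, MOTOR_HP, FLEET_SIZE))   # 38250000 fleet-wide
```

Note that the reactive fleet total works out to $38.25 million, which the article rounds down to $38 million.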
To identify the true cost of equipment mismanagement, we must first take a closer look at the issue. There are many costs associated with maintaining a pump, such as a yearly cost of $36,000 or the capital cost of a new triplex or quintuplex pump, which can be upward of $350,000. However, these losses pale in comparison to the true loss of a machine going down, which is a loss in production. In oil & gas, for example, equipment uptime is directly correlated to the company’s bottom line. When drilling stops at a well site, virtually all cash flow associated with the well stops. This is further compounded when you consider the operational costs of fracturing crews on-site. At risk are not only money, but also jobs and reputations. And the same can be said for downstream rotating equipment. Reliability, therefore, is critical. So much so that it is standard practice for companies to keep several backup equipment trucks on-site that may or may not be needed, because they have no idea whether a pump is about to fail, even if it was just serviced. There has to be a better way than hoping machinery doesn’t go down and keeping backups in case it does.
Predictive maintenance — A better approach?
In the same EPRI study mentioned previously, researchers identified a much more reliable strategy. They found that a predictive maintenance strategy is the most cost-effective at only $9 per HP, and it all but eliminates the risks of secondary damage from catastrophic failures. By using a predictive maintenance strategy, operations and maintenance managers can have the insight to determine when their machines will fail and have enough advance notice to make the necessary preparations to fix the problem with as little downtime as possible. On the surface, this seems like the optimal approach with no downside, because the company can save money on maintenance and ensure longer uptime.
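Applying the EPRI $9 per HP rate to the same 1,500 HP motor and 1,500-rig fleet used earlier makes the gap concrete (a sketch for illustration; the per-HP rates are from the EPRI study, while extrapolating the predictive rate across the example fleet is my own arithmetic, not a figure from the study):

```python
# EPRI per-HP annual maintenance rates (USD per HP per year)
RATES = {"scheduled": 24, "reactive": 17, "predictive": 9}

MOTOR_HP = 1_500   # the article's 1,500 HP stimulation-pump motor
FLEET = 1_500      # rigs across the entire fleet

def fleet_costs(rate_per_hp, hp=MOTOR_HP, fleet=FLEET):
    """Return (per-motor, fleet-wide) annual cost for one strategy."""
    per_motor = rate_per_hp * hp
    return per_motor, per_motor * fleet

for name, rate in RATES.items():
    per_motor, fleet_total = fleet_costs(rate)
    print(f"{name:>10}: ${per_motor:,}/motor/yr  ${fleet_total:,}/fleet/yr")
```

By this arithmetic, predictive maintenance comes to $13,500 per motor and roughly $20.25 million fleet-wide per year, versus $54 million for a scheduled program.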
However, several factors keep companies from adopting and enjoying the benefits of such a predictive maintenance strategy. These shortcomings are primarily associated with the traditional approaches that have been used to implement predictive maintenance strategies, not with predictive maintenance itself. These two traditional approaches are (1) a complete end-to-end automated solution that covers everything from the site survey to installation to remote monitoring; and (2) a manual route-based solution where technicians and experts regularly visit each asset to collect measurements, and then return to perform the analysis.
The traditional approach
To understand the issues surrounding the two techniques, think of equipment health as analogous to our own health. Imagine you go to your doctor’s office and after sitting in the waiting room for half an hour, you finally see the doctor, who checks your temperature and only your temperature. To be thorough, he checks the temperature at multiple locations on your body and gives you a diagnosis of good health. In this scenario, everything seems viscerally wrong from a healthcare standpoint. However, this is the exact approach many companies take with machine health. They outfit their machines with only accelerometers and use only vibration analysis to monitor the machine’s health. Although this is a great indicator of wellness, it is not the only one. The practice of using a limited number of diagnostic tools is a problem for both traditional methods — manual route-based and automated. The two approaches fall short, either because of the route-based technician’s lack of expertise to measure and analyze sensor types beyond vibration, or the measurement platform’s lack of flexibility in integrating or expanding to new or custom sensors.
Now, back to the doctor analogy. You decide to do your due diligence and ensure that you actually are in good health and continue your physical evaluation by visiting another doctor who can measure your blood pressure and cholesterol. Again, you have to pay this doctor and the only thing you receive is a diagnosis based on the narrow scope of your blood pressure and cholesterol. Although this sounds silly, this scenario mimics the real-world approach to traditional machine health assessments. Maintenance managers try to give their machines a more complete health diagnosis, but are left with a less than holistic machine health assessment. This is a result of disparate vibration monitoring systems being cobbled together to take the measurements. In the end, just like the case of going to separate doctors, this makes it costly and difficult to scale a monitoring solution across all assets because of the high up-front costs of the initial system, the cost of adding on subsequent systems and then integrating everything together.
On the opposite end of the spectrum, companies can perform manual measurement rounds, which are less expensive in theory, but, in reality, cannot be scaled to cover a large number of assets. The technical prowess required to take and analyze measurements, coupled with an aging workforce, prevents companies from solving the problem by indiscriminately placing more people on it. Even if this weren’t the case, there are no economies of scale to be gained with this method. Monitoring five times more assets would result in five times the cost and even more logistics. Thirty people performing 60,000 rounds per month to cover 2,000 assets could suddenly become 150 people performing 300,000 rounds per month to cover 10,000 assets. Why? Because people don’t scale. Adding different sensors requires even more people, because of the expertise needed for the different measurement specialties. Specialists can spend up to 80 percent of their time manually collecting the data, with only 20 percent of their time left to actively analyze the data and uncover root-cause issues that prevent costly repairs in the future. And because it’s manually collected by a variety of people, there is the potential for dirty, disparate data.
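The "people don't scale" point is simple proportionality: if each technician can only cover a fixed number of rounds per month, headcount grows linearly with assets. A minimal sketch using the article's figures (the function name and the assumption of constant per-person throughput are mine):

```python
# Manual route-based monitoring scales linearly: no economies of scale.
# Baseline from the article: 30 people, 60,000 rounds/month, 2,000 assets.
PEOPLE, ROUNDS, ASSETS = 30, 60_000, 2_000

rounds_per_asset = ROUNDS / ASSETS    # 30 rounds per asset per month
rounds_per_person = ROUNDS / PEOPLE   # 2,000 rounds per person per month

def staffing_for(target_assets):
    """(people, rounds) needed per month, assuming both rates stay fixed."""
    needed_rounds = target_assets * rounds_per_asset
    return needed_rounds / rounds_per_person, needed_rounds

print(staffing_for(10_000))  # (150.0, 300000.0): 5x the assets, 5x the people
```

Five times the assets means five times the rounds and five times the staff, exactly as the article's 30-to-150-person jump illustrates.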
To conclude our analogy, remember that the ultimate goal of this whole journey is to gain a holistic view of your overall health. After visiting multiple doctors and gathering multiple diagnoses, you would be frustrated, to say the least. Each doctor used a separate tool to assess your health, and the ability to integrate all of your health data did not exist. As a result, you would not have a holistic or accurate assessment of your health because the doctors couldn’t be brought together to communicate their findings and give an accurate diagnosis. All told, your physical was inconclusive, and your time and money were wasted. The doctors are limited by their roles, their instrumentation, and their ability to communicate the data with each other or you, the patient. When dealing with separate monitoring systems (and sometimes manually entered data), this is all too often the case. Not only do the systems not talk very well with one another and the enterprise, but there also isn’t an option for you to perform your own analysis because there is no access to raw data.
Overall, traditional approaches present problems in four main areas:
Flexibility – Integration with a multitude of sensors;
Scalability – Financial and logistical possibilities to expand to cover all assets;
Accessibility – Raw data that can be easily integrated and analyzed on an enterprise level; and
Cost – Capital expenditures of the end-to-end solution.
A new way forward
By implementing a modern condition monitoring system, companies can overcome the shortcomings of traditional systems to cover more assets with fewer resources. For example, one tier-one energy provider found that its vibration specialists could now spend up to 80 percent of their time actually analyzing data, with data collection reduced to only 20 percent. Ultimately, this will allow them to increase their coverage from about 2,000 assets to over 10,000 assets. In addition, because they took a platform-based approach to monitoring, they will have the opportunity to explore the integration of other specialized sensing techniques, such as thermal imaging, to enhance their detection capabilities. All of this is in the effort to detect issues sooner and fix problems before they become catastrophes.
Brian Phillippi is a product manager for the Embedded Control and Monitoring team at National Instruments. In this role, he helps manage the I/O modules for the company’s industrial, embedded and data acquisition platforms. Phillippi joined NI in 2011 as an applications engineer, and he moved to product management in 2012. During his time in product management, Phillippi has brought numerous products to market to support needs in power monitoring, machine condition monitoring, datalogging, and smart machines. Phillippi holds a bachelor’s degree in mechanical engineering from Brigham Young University. You can reach him at 512 683-9825, or at email@example.com.