Improving Reliability in Power Generation: A Competitive Advantage

Douglas Morris, Director of Marketing, Mining & Power Industries

Recently, the consulting arm of Black & Veatch published its annual strategic directions report for the US utility industry. In 2014, “reliability” was again identified as the top industry concern, and the report discusses how technology will play an important role for utilities as they look to improve asset reliability.

The industry has long engaged with this discipline; in fact, most plants once had staffs dedicated to the practice of reliability. As utilities cut back staffing over time, though, many of these departments disappeared and the focus was lost. When most fossil plants ran as originally intended, this didn’t pose a large problem. Times have changed: with the growing number of renewables, and with gas plants being cycled on a regular basis, former baseload plants are increasingly running in load-following mode, subjecting these units to greater thermal cycling and more stress on mechanical equipment.

As the B&V report states, technology can be the tool that helps utilities achieve better reliability. Per the report:

…new data collection and performance monitoring technologies will assist utility operators in better understanding potential points for failure and managing risk by improving visibility into asset condition and performance.

There are already sites that have used technology to improve plant reliability, and they are reaping the benefits.

Tucson Electric Power’s (TEP) Springerville Generating Station is one such site. Gary Gardner of TEP wrote an article published on reliabilityweb.com which states:

TEP relies on technology with high resolution, accurate data collection and advanced diagnostics capabilities.

In 2012, Gary and his predictive maintenance team, aided by advanced technology, helped the company avoid more than $1M in maintenance and replacement costs.

So as utilities embrace the recent rebirth of reliability, many will likely follow the path of TEP. Those that do and invest in proper technology for condition monitoring will reap the rewards of increased plant availability and reduced operating and maintenance (O&M) costs.

From Jim: You can connect and interact with other Power industry and reliability professionals in the Power and Asset Optimization, Maintenance and Reliability tracks of the Emerson Exchange 365 community.

6 comments

  1. Jonas Berge says:

    I personally believe improving reliability, reducing maintenance cost, and shortening turnarounds are among the principal drivers for wireless pervasive sensing. By instrumenting process equipment like pumps, blowers, and cooling towers, asset monitoring software can determine their condition. Until now, these were missing measurements. These new sensors are beyond the P&ID, they do not load the DCS, and the information is delivered beyond the control room, to the maintenance and reliability office. Learn more from these articles:

    Second Layer of Automation
    http://www.ceasiamag.com/article/second-layer-of-automation/10354

    Maintenance with a Hart
    http://www.ceasiamag.com/article/maintenance-with-a-hart/9894

    Wireless for Asset Uptime
    http://www.ceasiamag.com/article/wireless-for-asset-uptime/8689

    • Jonas, thanks for sharing the links to those articles highlighting the advantages of increasing reliability through additional measurements in the process.

      • Jonas Berge says:

        You’re welcome. And the additional measurements that increase reliability are not really on the PROCESS but beyond the process: they are additional measurements on the EQUIPMENT. Here’s an analogy: I personally believe it is pretty well accepted that a digital valve controller / positioner is one of the smartest instruments in the plant. What makes these instruments smart? Well, they have lots of sensors inside them making additional measurements – not process measurements, but measurements on the valve and actuator – such as air supply pressure, actuator chamber pressures, valve stem position, temperature, drive signal, and drive current. From those raw measurements they compute intermediate variables like air mass flow, and then ultimately use expert algorithms to detect I/P plugging, relay adjustment, relay jam, relay diaphragm leak, actuator leak, position sensor arm damage, air leak, high or low supply pressure, and calibration shift. Because the problem is narrowed down with such fine granularity, it can be presented as actionable information, i.e. a description of the problem plus a recommended action.

        The solution for pumps, blowers/fans, and cooling towers is just like the solution for valves, only on a larger scale. The equipment is instrumented with additional sensors for raw measurements on the equipment (not the process) – it just happens that “process-grade” sensors are used for this purpose. The raw data feeds into expert algorithms that reside in software, as part of the asset management system or possibly the plant historian. These compute intermediate values like the heat transfer coefficient or heat duty of a heat exchanger, which are then used to detect fouling of a heat exchanger, cavitation in a pump, and so on. The principles are the same, but on a different scale. A positioner makes a valve a smart valve. Pervasive sensing makes a dumb pump a smart pump.
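        To make the idea concrete, here is a minimal sketch of the kind of intermediate-value computation described above: heat duty and an overall heat transfer coefficient derived from raw temperature and flow measurements, feeding a simple fouling check. All function names, measurement values, and the 20% degradation threshold are illustrative assumptions, not any vendor’s actual algorithm.

        ```python
        import math

        def lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
            """Log-mean temperature difference for a counter-current heat exchanger."""
            dt1 = t_hot_in - t_cold_out
            dt2 = t_hot_out - t_cold_in
            if math.isclose(dt1, dt2):
                return dt1
            return (dt1 - dt2) / math.log(dt1 / dt2)

        def heat_duty(m_dot, cp, t_in, t_out):
            """Heat duty Q = m_dot * cp * dT from cold-side measurements (W)."""
            return m_dot * cp * (t_out - t_in)

        def fouling_suspected(u_measured, u_clean, threshold=0.8):
            """Flag fouling when measured U drops below a fraction of the clean baseline."""
            return u_measured < threshold * u_clean

        # Illustrative raw measurements (made up, not field data)
        area = 50.0                                  # m^2, exchanger surface area
        q = heat_duty(12.0, 4180.0, 25.0, 45.0)      # cold water: 12 kg/s heated 25 -> 45 C
        dT = lmtd(90.0, 60.0, 25.0, 45.0)            # hot side cools 90 -> 60 C
        u = q / (area * dT)                          # overall heat transfer coefficient, W/(m^2 K)
        print(round(u, 1), fouling_suspected(u, u_clean=700.0))
        ```

        In a real system the raw measurements would stream from wireless transmitters into the historian or asset management software, and the clean-baseline U would be established from commissioning or post-cleaning data rather than assumed.
        
        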

Leave a Reply