Reliability Standard: I Do Not Think It Means What You Think It Means


Much like the character Inigo Montoya who, in the movie The Princess Bride, questions Vizzini’s relentless repetition of the word “inconceivable,” I find myself wondering whether many of those who insist over and over again that the EPA’s Clean Power Plan (CPP) will “threaten reliability” actually understand how we’ve gone about setting, interpreting, and enforcing different reliability standards. This issue has been simmering for some time, including in revealing analyses by The Brattle Group and Astrape Consulting, conducted in the context of capacity market studies, that have received far too little attention. Ironically, it is the North American Electric Reliability Corporation (NERC) and other industry organizations, with their pleas for a “safety valve,” that have invited a more public and critical examination of the topic. NERC is the regulatory authority charged with developing and enforcing numerous reliability standards for the bulk power system. Here I will focus specifically on the resource adequacy standard, since it seems to be the standard on which NERC and others are most focused.

In reading NERC’s April report on the reliability implications of the CPP, I was struck by the manner in which NERC simply assumes that, in the event of any conflict between compliance with the CPP and compliance with resource adequacy standards, it is the CPP that should give way. NERC appears to insist that this deference be granted from the moment NERC determines, and for as long as NERC deems, that a state’s plan for compliance with the CPP would result in the slightest departure from NERC’s standards. Even for those inclined to accept that reliability standards should always trump environmental standards, this should seem excessive. Why? Because it assumes there is zero tolerance built into the standard, and that operation at or above the standard constitutes “reliable,” while anything below it constitutes a “threat to reliability.”

Is the design standard we’ve come to accept for resource adequacy also the minimum safe level? We should hope not, as this would be unusual in the design of a critical system. One typically wants to remain well clear of catastrophic failure modes under standard operating conditions and therefore sets normal operating limits in anticipation of riding through occasional deviations beyond those limits without significant risk—a “margin for error,” if you will. We see this, for instance, in the way we manage power system frequency, with the North American standard being 60 cycles per second, but with actual performance constantly fluctuating around that level. In a trade-off between a more costly power system and more costly electrical equipment, the industry has evolved to tolerate swings in the range of one-half cycle per second up or down (59.5 to 60.5 cycles per second) before protective relays start shutting things down.

How, then, should we think about NERC’s frequently cited “one day in ten years” resource adequacy standard? It is useful to start with where it comes from, except no one knows exactly where it came from. It first appeared in technical journals in the 1940s as a target engineers should shoot for in designing power systems. How that recommendation was arrived at, and what it assumed about the costs of making the system more or less reliable and the value of more or less reliability, are lost to history. Even as it has morphed from recommended design constraint to inviolable NERC standard, it is not clear what, if any, cost/value analyses have been applied to substantiate its appropriateness for that purpose.

It is not unheard of for standards to have obscure origins, but to apply them intelligently it still helps to put them in some sort of objective context. We can start by translating the standard into terms comprehensible to the average person, or at least to the average regulator. Even if one assumes NERC’s standard is uniformly applied across the country (it’s not), that translation turns out to be less clear-cut than you’d think, since neither the meaning of “one day” nor the event expected to happen with that frequency is clearly or consistently specified. In a recent attempt to apply reasonable interpretations in the context of the Texas market, The Brattle Group concluded that the standard, as commonly interpreted, implies the average customer will experience about 20 seconds of service interruption in an average year due to not enough generating capacity being connected to the system. If that sounds awfully stringent, it’s because it is. In an average year, that same customer can expect between 100 and 300 minutes of service interruption from all causes (excluding outages due to major disasters like hurricanes). One can reasonably ask whether anyone would actually notice if resource adequacy dropped for some transient period of time to a level of, say, 40 seconds per year, per customer.
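To see how a system-level standard like “one event in ten years” can shrink to a few seconds per customer per year, consider a back-of-envelope sketch. The event duration and load-shed fraction below are hypothetical placeholders chosen only to illustrate the shape of the arithmetic; they are not The Brattle Group’s actual assumptions.

```python
# Illustrative translation of a "one event in ten years" resource adequacy
# interpretation into expected per-customer interruption time.
# NOTE: duration and shed fraction are assumed values for illustration only.

SECONDS_PER_HOUR = 3600

events_per_year = 1 / 10          # one loss-of-load event per ten years
event_duration_hours = 8          # assumed average length of a shortfall event
load_shed_fraction = 0.007        # assumed share of customers interrupted

expected_outage_seconds = (
    events_per_year * event_duration_hours * SECONDS_PER_HOUR * load_shed_fraction
)
print(f"Expected interruption: {expected_outage_seconds:.0f} seconds per customer-year")
```

The point of the exercise is the order of magnitude: because a shortfall event is rare and typically interrupts only a small slice of load, the per-customer expectation lands around 20 seconds per year, dwarfed by the 6,000 to 18,000 seconds per year of interruptions from all other causes.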

Perhaps such a stringent standard can be justified specifically for resource adequacy, based on the cost to achieve a higher standard relative to costs to increase reliability in the transmission and distribution (T&D) sectors, but the numbers aren’t encouraging. Based on the assumptions used by The Brattle Group, the marginal cost of achieving the standard is in the range of $400,000/MWh. Again, if that sounds like a lot, it’s because it is. Economists put the value customers attach to avoiding service interruptions in the range of zero to $30,000/MWh, depending on the customer and the load. While it may make sense to set the break-even cost somewhat higher than that to reflect other, less quantifiable reliability benefits, $400,000/MWh seems a little rich. Perhaps some of that money would be better spent on improving T&D reliability or on energy efficiency or demand response.
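The marginal-cost logic above is simple division: the annual cost of the last increment of capacity, spread over the small amount of unserved energy it actually prevents. The figures below are hypothetical placeholders, not The Brattle Group’s inputs, chosen only to show how a seemingly modest capacity cost produces a very large cost per avoided MWh.

```python
# Illustrative marginal cost of meeting the resource adequacy standard.
# NOTE: both input figures are assumed values for illustration only.

annual_cost_per_mw = 90_000       # assumed $/MW-year for one more MW of capacity
avoided_unserved_mwh = 0.225      # assumed MWh/year of lost load that MW prevents

marginal_cost_per_mwh = annual_cost_per_mw / avoided_unserved_mwh

value_of_lost_load = 30_000       # top of the range customers place on avoided outages, $/MWh

print(f"Marginal cost: ${marginal_cost_per_mwh:,.0f}/MWh")
print(f"Multiple of top-end customer value: {marginal_cost_per_mwh / value_of_lost_load:.1f}x")
```

Because the last MW of capacity avoids only a fraction of an MWh of lost load per year, the implied cost per MWh ends up more than an order of magnitude above even the highest estimates of what customers would pay to avoid the outage.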

In the end, a lot of judgment goes into answering these questions, requiring expertise beyond the ken of most stakeholders. But it is important for at least some of us—regulators, in particular—to realize the NERC standard for resource adequacy is not a magic threshold above which the lights will stay on and below which they may not. It is instead a rather arbitrary point—and not a clearly defined one—along a continuum from less to more reliable and less to more expensive.

I am not proposing here that the resource adequacy standard itself be changed, though the available information would suggest that it warrants a serious conversation and that, at a minimum, the standard’s current metrics and language leave considerable room for improvement. Rather, I’m saying that an insistence on treating any deviation of any magnitude for any period of time from the current resource adequacy standards as the trigger for a safety valve in the Clean Power Plan is neither a rational nor a proportionate response. If these proposals are to be taken seriously, the next step must be an honest and informed discussion about the appropriate level of tolerance that can and should be built into the design of the trigger.