Article   |   Gordon Ellis, Nikkii Ng-Morgan   |   27.07.2022

SECEs are unnecessarily costly

An over-assignment of safety and environmentally critical elements (SECEs) when setting performance standards can be unnecessarily costly throughout an asset’s operational life.

Where safety is critical, such as in the oil, gas and petrochemical industries, playing it safe is imperative. But playing it too safe becomes a problem when managing SECEs. As a trend, we see far more over-assignment than under-assignment of these elements on schemes for independent verification. This is often the case when a blanket approach to SECE assignment is adopted. For example, assigning all pressure safety valves (PSVs) as SECEs may seem a given, but not all of these valves are there to prevent a major accident hazard event. PSVs can be fitted purely to protect equipment, for example against a pump deadheading. Such over-assignment adds to OPEX budgets and increases testing backlogs and deferrals.

Over time, it’s easy to accept established approaches to SECE management as unquestionable. However, standard approaches can be an unnecessary drain on the finite resources of an owner or operator. In the first place, they can lead to inadequate hazard identification, and to poorly defined performance criteria that lack appropriate targets for asset availability and reliability. Both of these issues typically result in over-compensating when it comes to SECEs. They can also lead to fuzzy assurance tasks, with a lack of distinction between maintenance and assurance testing. This is far from helpful and can also produce ‘false positives’ where, for example, a fire damper on an installation is cleaned and greased before being tested for closure time.

Identifying SECEs with a strong hierarchy

A good hierarchy is the foundation for effective SECE management. It allows easy identification of functional relationships between components, and a straightforward assessment of items which have the potential to affect a SECE’s performance.

Many organisations identify all ‘child’ components of the main equipment as SECEs. This leads to a very large number of SECEs, which is challenging to manage. To understand which sub-components could affect the main safety function of each SECE, we’d recommend a simple failure modes and effects analysis (FMEA), rather than the traditional method of cascading through the hierarchy. Such an approach could deliver a ten-fold reduction in the number of items managed as SECEs. Where SECE identification is kept to main-equipment level only, the result should be a one-to-one SECE-to-assurance-task relationship, leading to a more manageable assurance programme.
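The difference between the two approaches can be sketched in a few lines of code. This is a hypothetical illustration only: the equipment tags, sub-components and the `affects_safety_function` flags below are invented for the example, and the FMEA judgement itself would of course come from engineering analysis, not code.

```python
# Illustrative sketch: blanket 'cascade' SECE assignment versus an
# FMEA-based filter. All tags and flags below are invented examples.
from dataclasses import dataclass, field

@dataclass
class Component:
    tag: str
    children: list = field(default_factory=list)
    # FMEA judgement: can this item's failure defeat the parent
    # SECE's main safety function?
    affects_safety_function: bool = False

def cascade_assignment(sece):
    """Traditional approach: every descendant of a SECE becomes a SECE."""
    items = [sece]
    for child in sece.children:
        items.extend(cascade_assignment(child))
    return items

def fmea_assignment(sece):
    """FMEA approach: keep the main equipment, plus only those
    sub-components whose failure affects the safety function."""
    items = [sece]
    for child in sece.children:
        if child.affects_safety_function:
            items.extend(fmea_assignment(child))
    return items

# An invented PSV with four sub-components in the hierarchy
psv = Component("PSV-1001", children=[
    Component("spring", affects_safety_function=True),
    Component("nameplate"),
    Component("paint system"),
    Component("lifting lug"),
])

print(len(cascade_assignment(psv)))  # 5 items to assure
print(len(fmea_assignment(psv)))     # 2 items to assure
```

Even on this toy hierarchy, the FMEA filter reduces the assurance scope from five items to two; across a real asset register with thousands of child components, the cumulative effect is what drives the saving described above.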

While we have focused on over-assignment, it’s fundamental to make sure that all potential SECEs are recorded in the asset register. That is, every component, system and structure, including IT, which can cause, prevent, control, mitigate, rescue or help recover from a major accident. Any missing element will significantly increase risk exposure, as the assessment of the safety and environmental consequences of an event will be incomplete, leaving some circumstances unplanned for.


Setting true performance standards

Performance standards should cover the following five main themes.

  1. Functionality
    What is each SECE required to do?

  2. Availability
    What proportion of the time will the SECE be capable of performing its function?

  3. Reliability
    How likely is the SECE to perform on demand?

  4. Survivability
    Does the SECE have a role to perform post-event?

  5. Interaction
    Do other systems need to function for the SECE to operate?
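The five themes above lend themselves to a structured record per SECE, which makes gaps (a missing survivability statement, an unset reliability target) easy to spot. The sketch below is a hypothetical template: the field names, the firewater pump example and every target value in it are illustrative assumptions, not a prescribed format.

```python
# Hypothetical record capturing the five performance-standard themes
# for one SECE. All names and target figures are invented examples.
from dataclasses import dataclass, field

@dataclass
class PerformanceStandard:
    sece: str
    functionality: str       # what the SECE is required to do
    availability: float      # target proportion of time capable (0-1)
    reliability: float       # target probability of performing on demand
    survivability: str       # post-event role, if any
    interactions: list = field(default_factory=list)  # dependent systems

ps = PerformanceStandard(
    sece="Firewater pump FP-A",
    functionality="Deliver firewater at the demanded rate within 30 s",
    availability=0.99,
    reliability=0.98,
    survivability="Remain operable for 30 min following the design event",
    interactions=["Emergency power", "Fire and gas detection"],
)

# Targets expressed as probabilities/proportions must sit in [0, 1]
assert 0.0 <= ps.availability <= 1.0
assert 0.0 <= ps.reliability <= 1.0
```

Holding standards in a form like this, rather than as free text, also makes them directly comparable against the operational data discussed below.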

It’s worth emphasising that operational performance standards are not design performance standards, and the gap between them widens over time. Continuing to use design standards as a baseline is likely to cause costly maintenance issues down the line, and greater unplanned downtime. Corporate-level performance standards are equally problematic. Put simply, they’re just too generic when no two assets are the same. Even in cases where the hazards faced are similar between rigs, units or industrial plants, there will always be different mitigation and acceptance criteria.

In our experience across multiple sectors, the more closely performance standards are based on the actual asset the better, ideally using the operational data and failure rates held in the asset’s computerised maintenance management system (CMMS). Making use of this valuable, but often neglected, feedback from test results will be the subject of our next blog.
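As a simple illustration of what “based on the actual asset” can mean in practice, observed availability and on-demand reliability can be derived directly from CMMS-style records and compared against the targets in the performance standard. The record layout and every figure below are illustrative assumptions, not a real CMMS export.

```python
# Hypothetical sketch: deriving observed availability and on-demand
# reliability for one SECE from CMMS-style data. Figures are invented.

def observed_availability(downtime_hours, period_hours):
    """Fraction of the period the SECE was capable of performing."""
    return 1 - downtime_hours / period_hours

def observed_reliability(successful_tests, total_tests):
    """Fraction of demands/tests on which the SECE performed."""
    return successful_tests / total_tests

# One year of invented CMMS data for a single SECE
period = 8760            # hours in the year
downtime = 87.6          # hours unavailable (maintenance plus failures)
tests, passes = 12, 11   # monthly assurance tests, one failure

print(round(observed_availability(downtime, period), 3))  # 0.99
print(round(observed_reliability(passes, tests), 3))      # 0.917
```

If the observed 0.917 on-demand figure sits below a 0.98 reliability target, that is an asset-specific signal to adjust the test interval or the standard itself, rather than something a corporate-level standard would ever surface.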