Anyone who has been involved in source emission measurement programs, colloquially known as “stack testing,” knows that Murphy's Law is usually strictly enforced during a test: if anything can go wrong, it probably will. Most of the time, the problems, errors, and breakdowns that can infect a stack testing program amount to no more than relatively small bumps in the road: events that may extend the test day a bit, or cause a production snafu, but nothing truly consequential.

Yet there are times when anomalous stack test results can have significant consequences. For example, unexpectedly high test results may require the source operator to report non-compliance with an applicable rule or permit condition. Conversely, sometimes the test results may demonstrate compliance, but some of the supporting data, such as exhaust gas flow rate, temperature, or moisture content, seems completely out of line with everyone's understanding of the underlying process.

Stack testing is only one example of how measurement errors can lead to unfavorable consequences in the environmental arena. The same holds true for air and water quality monitoring programs, for efforts to define best practices to reduce emissions, and for analytical procedures in the laboratory. While EPA and other regulatory agencies publish methodologies and procedures intended to enable sources to accurately gather the data needed to demonstrate and maintain compliance, the existence of these methodologies and procedures does not guarantee that they will be properly performed, or that they will be appropriate for every process.

Measurement mysteries can often benefit from a second look by another pair of eyes. 

This case study involves a chemical process that was unique in its details but quite common in general character. A stack test contractor performed the applicable test methods exactly as required, yet those methods generated misleading data, resulting in a potential compliance concern. Understanding the genesis of the concern would require a “deep dive” into the underlying stack test report data.

The Problem: Apparent Unacceptable Scrubber Efficiency

A chemical facility produced specialty esters used in the food industry. While the chemistry of the process was unique, it was ultimately not relevant to the case. What mattered was that the process was a batch process. Chemicals were introduced into pressurized reactors and the desired reaction was allowed to occur over a period of time measured in hours. At the conclusion of the reaction time, the vessels were emptied.

Air pollutant emissions were expected to occur at only two points in the production cycle: 1) when a vessel was being charged and not yet sealed, allowing some evaporative loss to occur, and 2) when a vessel was being depressurized at the end of a production cycle. During those events, emissions were directed to a scrubber designed to control the worst-case scenario: the maximum number of vessels discharging the greatest total emissions at the same time. A review of the scrubber design led to the conclusion that it was designed correctly in terms of capacity and chemistry.

Emissions testing at the inlet and outlet of the scrubber appeared to demonstrate that the scrubber was operating at less than 50% overall control efficiency. This was well below the overall control efficiency required by the facility's construction permit and applicable rules, and also well below the overall control efficiency guaranteed by the scrubber manufacturer. So, what went wrong?
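
For context, overall control efficiency compares the pollutant mass rate entering the scrubber with the mass rate leaving it, and each mass rate is the product of a measured concentration and a measured volumetric flow rate. The short Python sketch below illustrates the arithmetic with placeholder numbers; it is an illustration only, not the contractor's calculation.

    def control_efficiency_pct(inlet_lb_per_hr, outlet_lb_per_hr):
        # Overall control efficiency from inlet/outlet pollutant mass rates.
        # Each mass rate depends on a flow measurement, which is why a bad
        # flow value corrupts the efficiency result.
        if inlet_lb_per_hr <= 0:
            raise ValueError("No inlet loading; efficiency is undefined")
        return 100.0 * (inlet_lb_per_hr - outlet_lb_per_hr) / inlet_lb_per_hr

    # Placeholder values for illustration only:
    print(control_efficiency_pct(10.0, 5.5))   # 45.0, i.e., below 50% control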

The Investigation: Suspect Airflow Data

Step one in an investigation like this is determining what methods were used, and how well those methods matched the process being tested. Ultimately, the quality of the test depends on the experience and expertise of the individuals who perform the test. The more unique the process, the more important this expertise becomes.

Step two is to review the test results, looking for any data that appears suspect. One bit of data quickly stood out: the airflow measurement. The test company had reported extremely low volumetric airflow rates during each of the three test runs conducted. This was a suspicious result for a few reasons:

  1. The flow rate reported was far less than what would be produced by a small residential table fan. It seemed highly unlikely that any process like the one described would vent such a tiny amount of gas.
  2. The test method most commonly used to measure gas velocity (from which flow rate is calculated) is EPA Method 2. If this method were used, it would not be capable of reliably measuring the very low gas velocity implied by such a flow rate (see the illustrative sketch following this list).
  3. Volumetric flow rates almost always vary from run to run, usually by hundreds of standard cubic feet per minute (scfm). In this case, the flow rate reported for each of the three test runs was exactly the same, down to the last digit. This was a red flag that called the validity of the reported data into question.
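
To illustrate the second point: under EPA Method 2, gas velocity is calculated from the velocity head (delta-p) measured with a pitot tube, and velocity scales with the square root of delta-p. The Python sketch below uses the English-unit form of the Method 2 velocity equation with illustrative, assumed inputs (pitot coefficient, stack conditions, duct area); it is not a reproduction of the contractor's calculations.

    import math

    KP = 85.49   # Method 2 pitot tube constant, English units
    CP = 0.84    # typical Type S pitot tube coefficient (assumed here)

    def stack_velocity_fps(dp_in_h2o, t_stack_r, p_stack_in_hg, mw_wet):
        # Average stack gas velocity (ft/s) from the average velocity head.
        return KP * CP * math.sqrt(dp_in_h2o) * math.sqrt(t_stack_r / (p_stack_in_hg * mw_wet))

    def actual_flow_acfm(velocity_fps, duct_area_ft2):
        # Actual volumetric flow (acfm); converting to scfm requires further
        # temperature, pressure, and moisture corrections.
        return velocity_fps * duct_area_ft2 * 60.0

    # Illustrative inputs only: a velocity head near an instrument's
    # detection limit produces a velocity, and a flow rate, so small that
    # it should prompt questions rather than be reported at face value.
    v = stack_velocity_fps(dp_in_h2o=0.001, t_stack_r=530.0,
                           p_stack_in_hg=29.9, mw_wet=28.8)
    print(round(actual_flow_acfm(v, duct_area_ft2=0.35), 1))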

An examination of the field data revealed the problem. Velocity measurements made following Method 2 require a pitot tube and an instrument to measure the pressure differential produced by the pitot tube. In this case, the testing company used an electronic manometer and recorded a pressure differential that, after further research, turned out to be the detection limit of the instrument. In other words, there appears to have been no airflow at all, but the detection-limit figure was recorded and carried through the calculations as if it were a true reading.
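
One practical safeguard, sketched below in hypothetical form, is to screen recorded velocity heads against the manometer's stated detection limit before carrying them into any velocity or flow calculation; readings at or below that limit indicate no measurable flow rather than a real value. The detection limit shown is an assumed placeholder.

    # Hypothetical screening step; the detection limit value is assumed.
    DETECTION_LIMIT_IN_H2O = 0.001

    def screen_velocity_heads(dp_readings):
        # Split delta-p readings (in. H2O) into usable values and values
        # at or below the instrument's detection limit.
        usable = [dp for dp in dp_readings if dp > DETECTION_LIMIT_IN_H2O]
        flagged = [dp for dp in dp_readings if dp <= DETECTION_LIMIT_IN_H2O]
        return usable, flagged

    usable, flagged = screen_velocity_heads([0.001, 0.001, 0.001])
    if not usable:
        print("All readings at or below detection limit: no measurable flow")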

What Happened: Invalid Data

A review of process data showed that the process was not venting during any of the three one-hour periods when testing was conducted. Without any inlet loading, scrubber efficiency could not be determined. The data presented in the report, based on concentration measurements of stagnant air, was meaningless. 

The chemical manufacturer was able to explain the situation to the regulatory authority and design a new test program that would accurately determine scrubber efficiency. A key feature of the new test program was the use of EPA Method 2A instead of Method 2 for flow measurement. Method 2A utilizes a totalizing flow meter, not unlike a typical residential gas meter, that can accurately determine the amount of gas emitted during widely spaced venting events.
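
The simplified Python sketch below shows, in general terms, how a totalizing meter determines the gas volume vented over a test period: the difference between the ending and starting meter readings is corrected to standard conditions. The calibration coefficient and meter conditions shown are illustrative assumptions, not values from the new test program.

    T_STD_R = 528.0      # standard temperature, degrees Rankine (68 F)
    P_STD_IN_HG = 29.92  # standard pressure, in. Hg

    def vented_volume_scf(meter_start, meter_end, y_cal, t_meter_r, p_meter_in_hg):
        # Total vented gas volume (scf) from totalizing-meter readings,
        # corrected to standard temperature and pressure.
        v_meter = meter_end - meter_start   # actual cubic feet registered
        return v_meter * y_cal * (T_STD_R / t_meter_r) * (p_meter_in_hg / P_STD_IN_HG)

    # Hypothetical venting event spanning a single batch depressurization:
    print(round(vented_volume_scf(meter_start=1200.0, meter_end=1485.0,
                                  y_cal=1.0, t_meter_r=545.0,
                                  p_meter_in_hg=29.6), 1))

Because the meter accumulates volume continuously, it captures intermittent venting events that a fixed-duration pitot traverse can miss entirely.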

When a process is unique, the program to test its emissions must often be equally unique. It is not always possible for the operator of such a source to know that standard test methods, even when properly performed, may not yield accurate results in such cases. Similarly, testing contractors cannot be expected to understand every nuance of a process that can affect emissions measurement. It is in these cases that Trinity's unique mix of in-house process and emissions measurement experts can help ensure your test program goes as smoothly as possible.