Purpose
Output stability, or drift over time, has long been a major performance deficiency of gas sensors, irrespective of the technology or methodology used in their design. Software correction can alleviate the problem somewhat, but it is not always applicable. Overcoming this problem fundamentally, once and for all, has long been the objective of many researchers in this field. The purpose of this paper is to show that this objective has now finally been achieved.
Design/methodology/approach
Conventional non-dispersive infrared (NDIR) dual-beam methodology uses the ratio of the signal-channel output to the reference-channel output for signal processing. The signal filter overlaps the absorption band of the gas of interest, while the reference filter does not. However, this ratio changes as the source ages. The methodology presented here instead uses an absorption bias between the signal- and reference-channel outputs. This bias is created by making the path length of the signal channel greater than that of the reference channel, while both the signal and reference detectors carry an identical spectral filter overlapping the absorption band of the gas to be measured.
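As an illustration only (not taken from the paper), the following simplified single-wavelength Beer-Lambert sketch in Python suggests why such an arrangement should be insensitive to source aging: with an identical filter on both channels, the filter transmission and the source intensity cancel in the ratio, which then depends only on the gas concentration and the path-length difference. All names and numerical values (absorption coefficient, concentration, path lengths, aging factors) are hypothetical.

    import math

    # Hypothetical illustrative values (not from the paper):
    ALPHA = 0.5          # absorption coefficient, per (concentration unit * cm)
    CONCENTRATION = 0.2  # gas concentration, arbitrary units
    L_SIGNAL = 10.0      # signal-channel path length, cm (longer path)
    L_REFERENCE = 4.0    # reference-channel path length, cm (shorter path)

    def detector_output(source_intensity, path_length):
        # Idealized Beer-Lambert detector signal; the identical filter on both
        # channels contributes a common transmission factor that is folded into
        # source_intensity and cancels in the ratio.
        return source_intensity * math.exp(-ALPHA * CONCENTRATION * path_length)

    for aging_factor in (1.0, 0.8, 0.5):  # simulated dimming of an aging source
        i0 = 100.0 * aging_factor
        ratio = detector_output(i0, L_SIGNAL) / detector_output(i0, L_REFERENCE)
        print(f"source at {aging_factor:.0%}: signal/reference ratio = {ratio:.4f}")

    # Every aging factor prints the same ratio, exp(-ALPHA * CONCENTRATION *
    # (L_SIGNAL - L_REFERENCE)): in this idealized model the output depends only
    # on the gas concentration and the path-length difference, not on the source.

In the conventional dual-beam arrangement, by contrast, the two channels carry different filters, so spectral changes in an aging source do not cancel in the ratio, which is consistent with the drift described above.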
Findings
Implementation of the currently patented NDIR gas-sensing methodology has been carried out in different gas sensor configurations for over a year in the laboratory. These sensors have repeatedly demonstrated insignificant output drift over time under simulated aging of the source.
Originality/value
The paper puts forward the view that the recent breakthrough of the Near Zero Drift methodology for NDIR gas sensors will very quickly change the hierarchy of technology dominance and utility for gas sensors at large.