dc.contributor.author | Brown, Christopher David. | en_US |
dc.date.accessioned | 2014-10-21T12:35:38Z | |
dc.date.available | 2000 | |
dc.date.issued | 2000 | en_US |
dc.identifier.other | AAINQ60665 | en_US |
dc.identifier.uri | http://hdl.handle.net/10222/55730 | |
dc.description | Multivariate calibration, used in conjunction with multichannel instrumental techniques, has been vital in making convenient, rapid and cost-effective chemical analysis possible. Numerical preprocessing techniques, which are intended to recondition the measurement data into a form better suited to chemometric methods, often play a key role in multivariate calibration. In some cases, the use of preprocessing techniques improves the precision of the analytical result. In other cases, meaningful results are altogether impossible without preprocessing in some form. Despite the integral importance of preprocessing strategies in multivariate analysis and calibration, the theoretical impact of many of these numerical methods on calibration is unknown, leaving the analyst no option but a trial-and-error approach. In this work, two of the most prominent preprocessing methods, digital smoothing and differentiation, are examined in depth from the perspective of calibration theory. | en_US |
dc.description | Smoothing is very frequently performed with aspirations of enhancing the signal-to-noise ratio (S/N) of the measurement data. It is demonstrated here that, based on theoretical considerations, no enhancement in multivariate S/N or predictive ability can be anticipated from symmetric smoothing filter application. In practical studies, it is observed that gains can sometimes be made, although they are found to be consistently marginal, and attributable to substantial calibration model error. This leaves smoothing filters in multivariate calibration as little more than cosmetic devices which are more likely to obfuscate information than enhance it. | en_US |
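As a minimal illustration of the kind of symmetric smoothing filter discussed above (an illustrative sketch only, not the thesis's own analysis), the following applies a uniform moving-average filter to a simulated noisy measurement channel. The weights are symmetric and sum to one, so a constant signal passes through the filter unchanged; the band shape, signal values, and noise level are all hypothetical.

```python
import numpy as np

def symmetric_smooth(x, width=5):
    """Apply a symmetric moving-average smoothing filter.

    The uniform weights are symmetric about the centre channel and
    normalized to sum to one, so a constant signal is preserved
    exactly in the 'valid' region of the convolution.
    """
    w = np.ones(width) / width          # symmetric, normalized weights
    return np.convolve(x, w, mode="valid")

# Simulated measurement vector: one Gaussian band plus white noise
rng = np.random.default_rng(0)
channels = np.linspace(0.0, 1.0, 200)
signal = np.exp(-((channels - 0.5) ** 2) / 0.01)
noisy = signal + rng.normal(scale=0.05, size=channels.size)

smoothed = symmetric_smooth(noisy, width=5)  # 4 edge channels are lost
```

Visually the smoothed trace looks cleaner, which is what motivates the cosmetic use of such filters; the point of the chapter is that this apparent improvement does not translate into better multivariate predictive ability.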
dc.description | Derivative filters are widely employed for the alleviation of baseline drift, and other noise structures which contribute error covariance to the measurement data. Theoretical examinations of their operation reveal that drift reduction proceeds by attempted diagonalization of the error covariance matrix (and homogenization of the noise power spectrum), although this benefit is often offset by the deleterious side-effects of derivative filtering: potential signal degradation and loss of chemical interpretability. While derivative filters do relieve error covariance to some extent, they are suboptimal in their approach as no consideration is given to heteroscedasticity, error covariance, or the net analyte signal. It is shown that optimal drift correction methods can actually be derived by direct consideration of the error structure. It is further demonstrated that this optimal drift correction filter is a special case of maximum likelihood principal components analysis, a method recently introduced by this research group. | en_US |
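The baseline-drift behaviour described above can be sketched with the simplest derivative filter, a first difference (again an illustrative example with hypothetical spectra, not the thesis's optimal correction): two spectra of the same chemical band with different constant baseline offsets become identical after differentiation, because a constant term has zero derivative.

```python
import numpy as np

def first_derivative(x):
    """First-difference derivative filter: y[i] = x[i+1] - x[i]."""
    return np.diff(x)

# Two simulated spectra: same band, different constant baseline offsets
channels = np.linspace(0.0, 1.0, 100)
band = np.exp(-((channels - 0.4) ** 2) / 0.005)
spectrum_a = band + 0.30   # offset baseline
spectrum_b = band - 0.15   # different offset, same chemistry

# The constant offsets vanish after differentiation
da = first_derivative(spectrum_a)
db = first_derivative(spectrum_b)
```

Note the side-effects the abstract warns about are already visible here: the output is one channel shorter and no longer resembles the original band shape, which is the loss of chemical interpretability that motivates deriving a correction directly from the error structure instead.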
dc.description | This work demonstrates that preprocessing and calibration strategies can be logically developed from careful consideration of the problem at hand. These rational approaches to calibration are not only often superior in performance, but also avoid the wildly empirical and inefficient strategies in widespread use. | en_US |
dc.description | Thesis (Ph.D.)--Dalhousie University (Canada), 2000. | en_US |
dc.language | eng | en_US |
dc.publisher | Dalhousie University | en_US |
dc.subject | Chemistry, Analytical. | en_US |
dc.title | Rational approaches to data preprocessing in multivariate calibration. | en_US |
dc.type | text | en_US |
dc.contributor.degree | Ph.D. | en_US |