Show simple item record

dc.contributor.author    Brown, Christopher David.    en_US
dc.date.accessioned    2014-10-21T12:35:38Z
dc.date.available    2014-10-21T12:35:38Z
dc.date.issued    2000    en_US
dc.identifier.other    AAINQ60665    en_US
dc.identifier.uri    http://hdl.handle.net/10222/55730
dc.description    Multivariate calibration, used in conjunction with multichannel instrumental techniques, has been vital in making convenient, rapid and cost-effective chemical analysis possible. Numerical preprocessing techniques, which are intended to recondition the measurement data to a form which is better suited for chemometric methods, often play a key role in multivariate calibration. In some cases, the use of preprocessing techniques improves the precision of the analytical result. In other cases, meaningful results are altogether impossible without preprocessing in some form. Despite the integral importance of preprocessing strategies in multivariate analysis and calibration, the theoretical impact of many of these numerical methods in calibration theory is unknown, leaving the analyst no other option than a trial-and-error approach. In this work, two of the most prominent preprocessing methods, digital smoothing and differentiation, are examined in depth from the perspective of calibration theory.    en_US
dc.description    Smoothing is very frequently performed with aspirations of enhancing the signal-to-noise ratio (S/N) of the measurement data. It is demonstrated here that, based on theoretical considerations, no enhancement in multivariate S/N or predictive ability can be anticipated from symmetric smoothing filter application. In practical studies, it is observed that gains can sometimes be made, although they are found to be consistently marginal, and attributable to substantial calibration model error. This leaves smoothing filters in multivariate calibration as little more than cosmetic devices which are more likely to obfuscate information than enhance it.    en_US
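As a concrete illustration of the symmetric smoothing filters examined in the abstract above, the following is a minimal sketch of a moving-average (boxcar) smoother; the function name and `window` parameter are illustrative inventions, not code from the thesis:

```python
def smooth(y, window=5):
    """Symmetric moving-average (boxcar) smoothing filter.

    Each point is replaced by the mean of the points in a window
    centred on it; the window is truncated at the ends of the signal.
    """
    half = window // 2
    out = []
    for i in range(len(y)):
        seg = y[max(0, i - half):i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

# A sharp feature is attenuated and broadened by smoothing:
print(smooth([0, 0, 3, 0, 0], window=3))  # [0.0, 1.0, 1.0, 1.0, 0.0]
```

The output shows the "cosmetic" character the abstract describes: the filter spreads the feature across neighbouring channels rather than recovering information, which is why no multivariate S/N gain is anticipated from its application.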
dc.description    Derivative filters are widely employed for the alleviation of baseline drift, and other noise structures which contribute error covariance to the measurement data. Theoretical examinations of their operation reveal that drift reduction proceeds by attempted diagonalization of the error covariance matrix (and homogenization of the noise power spectrum), although this benefit is often offset by the deleterious side-effects of derivative filtering: potential signal degradation and loss of chemical interpretability. While derivative filters do relieve error covariance to some extent, they are suboptimal in their approach as no consideration is given to heteroscedasticity, error covariance, or the net analyte signal. It is shown that optimal drift correction methods can actually be derived by direct consideration of the error structure. It is further demonstrated that this optimal drift correction filter is a special case of maximum likelihood principal components analysis, a method recently introduced by this research group.    en_US
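The baseline-drift alleviation described above can be sketched with the simplest derivative filter, a first difference; the function name and the toy signal are illustrative assumptions, not material from the thesis:

```python
def first_difference(y):
    """First-difference derivative filter: d[i] = y[i+1] - y[i].

    An additive constant baseline offset cancels in the difference,
    which is the drift-alleviation effect described above.
    """
    return [y[i + 1] - y[i] for i in range(len(y) - 1)]

signal = [1, 2, 4, 2, 1]
drifted = [v + 10 for v in signal]   # same signal with a constant baseline shift
print(first_difference(signal))      # [1, 2, -2, -1]
print(first_difference(drifted))     # identical: the offset is removed
```

Note that the filter applies the same difference to every channel regardless of the noise structure, which is the suboptimality the abstract identifies: no account is taken of heteroscedasticity, error covariance, or the net analyte signal.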
dc.description    This work demonstrates that preprocessing and calibration strategies can be logically developed from careful consideration of the problem at hand. These rational approaches to calibration are not only often superior in performance, but also avoid the wildly empirical and inefficient approaches in widespread use.    en_US
dc.description    Thesis (Ph.D.)--Dalhousie University (Canada), 2000.    en_US
dc.language    eng    en_US
dc.publisher    Dalhousie University    en_US
dc.subject    Chemistry, Analytical.    en_US
dc.title    Rational approaches to data preprocessing in multivariate calibration.    en_US
dc.type    text    en_US
dc.contributor.degree    Ph.D.    en_US