Deep Neural Network (DNN) Design: The Utilization of Approximate Computing and Practical Considerations for Accuracy Evaluation

dc.contributor.author: Hammad, Issam
dc.contributor.copyright-release: Yes
dc.contributor.degree: Doctor of Philosophy
dc.contributor.department: Department of Electrical & Computer Engineering
dc.contributor.ethics-approval: Not Applicable
dc.contributor.external-examiner: Dr. Lihong Zhang
dc.contributor.graduate-coordinator: Dr. Jacek Ilow
dc.contributor.manuscripts: Yes
dc.contributor.thesis-reader: Dr. Guy Kember
dc.contributor.thesis-reader: Dr. Jason Gu
dc.contributor.thesis-supervisor: Dr. Kamal El-Sankary
dc.date.accessioned: 2021-08-04T13:45:46Z
dc.date.available: 2021-08-04T13:45:46Z
dc.date.defence: 2021-07-23
dc.date.issued: 2021-08-04T13:45:46Z
dc.description.abstract: Approximate computing is emerging as a viable way to achieve significant performance enhancements in power, speed, and area for system-on-chip (SoC) designs. Utilizing approximate computing in the design of deep neural networks (DNNs) can significantly reduce a system's power, delay, and area at the cost of a tolerable drop in accuracy. This thesis demonstrates how approximate computing methods such as approximate multiplication, low-precision quantization, and shared neural networks can achieve these performance enhancements in DNN designs. For approximate multipliers, which are the primary focus of the thesis, a study of their impact on the inference accuracy of convolutional neural networks (CNNs) is presented. Additionally, an efficient hybrid training approach that uses both exact and approximate multipliers is proposed. Most importantly, the thesis introduces the new concept of boosting CNN multiplication performance with a precision prediction preprocessor that controls approximate multipliers of various precisions. Another important contribution of this thesis is a study of practical considerations for the accuracy evaluation of sensor-based machine learning and deep learning designs. Certain factors can degrade a system's accuracy in production yet are rarely considered when evaluating and comparing models' accuracy during development and prototyping; examples include accuracy loss due to variable thermal noise in components, component failure or partial failure, and analog-to-digital converter (ADC) quantization error. Finally, the thesis presents the new concept of using machine learning for person identification through physical activity. This finding demonstrates that machine learning can identify not only physical activities but also the person performing them. Based on this finding, a novel multi-label shared DNN that identifies both the physical activity and the activity performer simultaneously is proposed.
dc.identifier.uri: http://hdl.handle.net/10222/80640
dc.language.iso: en
dc.subject: Approximate Computing
dc.subject: Approximate Multiplier
dc.subject: Deep Neural Network (DNN)
dc.subject: Convolutional Neural Network (CNN)
dc.subject: Machine Learning for Sensors
dc.subject: Keras
dc.subject: Deep Learning
dc.subject: Machine Learning
dc.subject: AI Hardware
dc.subject: Deep Learning Accelerator
dc.subject: Error Simulation
dc.title: Deep Neural Network (DNN) Design: The Utilization of Approximate Computing and Practical Considerations for Accuracy Evaluation
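
As a rough illustration of the approximate-multiplication trade-off described in the abstract above, the following minimal NumPy sketch compares a toy truncation-based approximate multiplier against exact fixed-point multiplication on a simulated dot product, the core operation of a CNN layer. The truncation scheme, bit widths, and error metric here are illustrative assumptions only; they are not the multiplier designs or experiments evaluated in the thesis.

import numpy as np

def quantize(x, bits=8):
    # Map real values to signed fixed-point integers of the given bit width.
    scale = (2 ** (bits - 1) - 1) / np.max(np.abs(x))
    return np.round(x * scale).astype(np.int64)

def approx_multiply(a, b, drop_bits=4):
    # Toy approximate multiplier: zero out the lowest `drop_bits` bits of each
    # operand before multiplying, mimicking a truncated hardware multiplier.
    mask = ~np.int64((1 << drop_bits) - 1)
    return (a & mask) * (b & mask)

rng = np.random.default_rng(0)
activations = quantize(rng.random(1000))    # stand-in for non-negative CNN activations
weights = quantize(rng.normal(size=1000))   # stand-in for convolution filter weights

exact_products = activations * weights
approx_products = approx_multiply(activations, weights)

# Mean relative error that truncation injects into the multiply-accumulate path.
mre = np.mean(np.abs(exact_products - approx_products)) / np.mean(np.abs(exact_products))
print(f"mean relative product error: {mre:.2%}")

Raising or lowering drop_bits trades arithmetic error against the hardware savings a truncated multiplier would offer; the thesis studies this trade-off at the network level by propagating multiplier error through full CNN inference and measuring the resulting drop in accuracy.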

Files

Original bundle

Name: IssamHammad2021.pdf
Size: 3.02 MB
Format: Adobe Portable Document Format
Description: Issam Hammad - PhD Thesis

License bundle

Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon to submission