
SoftAdaClip: A Smooth Clipping Strategy for Fair and Private Model Training

dc.contributor.author: Soleymani, Dorsa
dc.contributor.copyright-release: Not Applicable
dc.contributor.degree: Master of Computer Science
dc.contributor.department: Faculty of Computer Science
dc.contributor.ethics-approval: Not Applicable
dc.contributor.external-examiner: n/a
dc.contributor.manuscripts: Not Applicable
dc.contributor.thesis-reader: Dr. Vlado Keselj
dc.contributor.thesis-reader: Dr. Sageev Oore
dc.contributor.thesis-supervisor: Dr. Frank Rudzicz
dc.date.accessioned: 2025-11-05T14:58:05Z
dc.date.available: 2025-11-05T14:58:05Z
dc.date.defence: 2025-10-20
dc.date.issued: 2025-10-27
dc.description: This thesis introduces SoftAdaClip, a novel differentially private training strategy that replaces traditional hard gradient clipping with a smooth, tanh-based transformation. The method aims to improve both model utility and fairness by preserving informative gradients while maintaining strong privacy guarantees. Through extensive experiments on healthcare and tabular datasets, SoftAdaClip demonstrates significant improvements in accuracy and subgroup fairness compared to standard DP-SGD and Adaptive-DPSGD.
dc.description.abstract: Differential privacy (DP) provides strong protection for sensitive data, but often reduces model performance and fairness, especially for underrepresented groups. One major reason is gradient clipping in DP-SGD, which can disproportionately suppress learning signals for minority subpopulations. Although adaptive clipping can enhance utility, it still relies on uniform hard clipping, which may restrict fairness. To address this, we introduce SoftAdaClip, a differentially private training method that replaces hard clipping with a smooth, tanh-based transformation to preserve relative gradient magnitudes while bounding sensitivity. We evaluate SoftAdaClip on various datasets, including MIMIC-III (clinical text), GOSSIS-eICU (structured healthcare), and Adult Income (tabular data). Our results show that SoftAdaClip reduces subgroup disparities by up to 87% compared to DP-SGD and up to 48% compared to Adaptive-DPSGD, and these reductions in subgroup disparities are statistically significant. These findings underscore the importance of integrating smooth transformations with adaptive mechanisms to achieve fair and private model training.
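To illustrate the contrast the abstract describes, the sketch below compares standard DP-SGD hard clipping with a tanh-based soft transformation. This is a minimal illustration, not the thesis's actual implementation: the exact functional form (`C * tanh(||g|| / C)`) and the function names are assumptions chosen so that the output norm is smoothly bounded by the sensitivity budget `C` while small gradients pass through nearly unchanged.

```python
import numpy as np

def hard_clip(g, C):
    # Standard DP-SGD clipping: rescale g so that ||g|| <= C.
    # Gradients below the threshold pass through unchanged;
    # larger gradients are cut off sharply at norm C.
    norm = np.linalg.norm(g)
    return g * min(1.0, C / max(norm, 1e-12))

def soft_clip_tanh(g, C):
    # Illustrative smooth alternative: map the norm through
    # C * tanh(||g|| / C), so the output norm approaches C
    # asymptotically instead of hitting a hard cutoff, while
    # relative magnitudes between samples are preserved.
    norm = np.linalg.norm(g)
    return g * (C * np.tanh(norm / C) / max(norm, 1e-12))
```

Both transformations bound per-sample sensitivity by `C` (tanh is strictly below 1), so Gaussian noise calibrated to `C` still yields a valid DP guarantee; the difference is that the smooth map never flattens the ordering of gradient magnitudes the way a hard threshold does.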
dc.identifier.uri: https://hdl.handle.net/10222/85519
dc.language.iso: en
dc.subject: Machine Learning
dc.subject: Privacy
dc.subject: Fairness
dc.subject: Deep Learning
dc.subject: Differential Privacy
dc.title: SoftAdaClip: A Smooth Clipping Strategy for Fair and Private Model Training

Files

Original bundle

Name: DorsaSoleymani2025.pdf
Size: 1.4 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.12 KB
Format: Item-specific license agreed upon submission