Calibrated Uncertainty Estimation in LLMs and Diffusion Models
Standard foundation models often produce confident but incorrect predictions, limiting their reliability for high-stakes scientific and medical applications.
This project will investigate and implement state-of-the-art uncertainty quantification techniques (e.g., conformal prediction, Bayesian deep learning, deep ensembles) for large language models and diffusion models. The focus is on developing methods that produce well-calibrated confidence intervals and reliable uncertainty estimates for outputs in both the BioFM and PatientJourneyFM contexts, enhancing model trustworthiness. A minimal sketch of one of the named techniques follows below.
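To make one of these techniques concrete, here is a minimal sketch of split conformal prediction, one of the methods the project names. Everything in it is illustrative: the calibration data is synthetic, and in practice the softmax probabilities would come from an LLM or diffusion-model classifier head evaluated on held-out examples. This is a sketch under those assumptions, not the project's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration set: softmax probabilities and true labels for
# n_cal held-out examples (synthetic here; real data would come from the
# foundation model under study).
n_cal, n_classes = 500, 10
probs_cal = rng.dirichlet(np.ones(n_classes), size=n_cal)
labels_cal = rng.integers(0, n_classes, size=n_cal)

alpha = 0.1  # target miscoverage: aim for >= 90% coverage

# Nonconformity score: one minus the probability assigned to the true label.
scores = 1.0 - probs_cal[np.arange(n_cal), labels_cal]

# Conformal quantile with the standard finite-sample correction.
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(scores, q_level, method="higher")

# Prediction set for a new example: every class whose score is below qhat.
# Under exchangeability, this set contains the true label with probability
# at least 1 - alpha, regardless of how miscalibrated the model is.
probs_test = rng.dirichlet(np.ones(n_classes))
prediction_set = np.where(1.0 - probs_test <= qhat)[0]
print(f"threshold={qhat:.3f}, prediction set={prediction_set.tolist()}")
```

The appeal of this approach for the project is that the coverage guarantee is distribution-free: it holds without retraining the foundation model, which is why conformal prediction is often paired with Bayesian or ensemble methods that improve the underlying scores.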
Bayesian methods, conformal prediction, calibration techniques
TBD