SciPy 2025

Debarshi Datta

Debarshi Datta, PhD, is an Assistant Professor in Data Science at the Christine E. Lynn College of Nursing, Florida Atlantic University. He completed his PhD in Experimental Psychology at the Charles E. Schmidt College of Science, Florida Atlantic University. Dr. Datta has experience developing AI-driven decision support systems for healthcare data, including framing problem statements, handling disputes, exploratory data analysis, model building, data visualization, and data storytelling. His current research applies data-driven AI/ML methods to population-based disease prognosis. His primary research contribution has been assessing disease severity and building models that identify the features most predictive of mortality and severity, using traditional AI/ML techniques such as decision trees, random forest classifiers, XGBoost, and deep learning. In other research, he is building a model for early prediction of dementia. Dr. Datta has received many intramural grants, including Early Prediction of Alzheimer's Disease and Related Dementias on Preclinical Assessment Data using Machine Learning tools, Seed Funding from Smart Health for COVID-19 research, NSF I-Corps Customer Discovery Funding, and the All of Us Institutional Champion Award, among others.


Sessions

07-07
08:00
240min
A Hands-on Tutorial towards building Explainable Machine Learning using SHAP, GINI, LIME, and Permutation Importance
Debarshi Datta, Dr. Subhosit Ray

The advancement of AI systems necessitates interpretability to address transparency, bias, risk, and regulatory compliance. This workshop teaches core interpretability techniques, including SHAP (game-theoretic feature attribution), GINI (decision-tree impurity analysis), LIME (local surrogate models), and Permutation Importance (feature shuffling), which provide global and local explanations for model decisions. Through hands-on building of interpretability tools and visualization techniques, we explore how these methods enable bias detection and clinical trust in healthcare diagnostics and inform decision strategies in finance. These techniques are essential for building interpretable AI that addresses the challenges of black-box models.
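As a flavor of what the tutorial covers, two of the four techniques can be sketched with scikit-learn alone. This is a minimal illustration on a synthetic dataset, not the tutorial's actual materials: it contrasts GINI (mean-decrease-in-impurity) importance, computed during training, with permutation importance, measured model-agnostically on held-out data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: 6 features, only 3 of which are informative.
X, y = make_classification(n_samples=500, n_features=6,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# GINI importance: mean decrease in impurity across the forest's splits.
# Global, essentially free, but computed on training data and biased
# toward high-cardinality features.
gini_importance = model.feature_importances_

# Permutation importance: drop in test-set score when one feature's
# values are shuffled. Global, model-agnostic, uses held-out data.
perm = permutation_importance(model, X_test, y_test,
                              n_repeats=10, random_state=0)

for i in range(X.shape[1]):
    print(f"feature {i}: gini={gini_importance[i]:.3f}  "
          f"perm={perm.importances_mean[i]:.3f}")
```

SHAP and LIME, the other two techniques, require the external `shap` and `lime` packages and add local (per-prediction) explanations on top of these global views.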

Tutorials
Ballroom A