Enhancing Deep Learning Model Evaluation via Attention Consistency Analysis in Spatial Transcriptomics Data

Authors

Dylan Agyemang

Document Type

Article

Publication Date

Summer 2023

Keywords

JGM

JAX Location

In: Student Reports, Summer 2023, The Jackson Laboratory

Abstract

Spatial transcriptomics technology enables the investigation of gene expression levels in tissue sections while preserving heterogeneity and spatial relationships. Recently, deep learning models have become a powerful tool in spatial transcriptomics data analysis, helping to identify patterns in gene expression, biomarkers for cancers, and biosignatures for cell populations. In particular, Vision Transformer (ViT) models, which use self-attention layers to capture long-range spatial relationships, have demonstrated more robust predictive power on image classification tasks than conventional convolutional neural networks (CNNs). However, like other deep learning algorithms, ViT models are prone to poor performance caused by a host of mechanisms, most notably overfitting. Overfitting occurs when a model memorizes noise and spurious patterns in the training dataset and fails to generalize to new, unseen test data. In spatial tissue datasets, which are already difficult to interpret, poor performance due to overfitting can distort the conclusions drawn, including producing false predictions about gene expression levels and other cellular properties. Here we introduce a novel algorithm, Train and Validation Attention Consistency, for interpreting ViT performance on spatial data and other datasets. Using this concept, we were able to accurately assess and evaluate the consistency of model performance on breast cancer spatial data. This approach sets a standard for deep learning model interpretation and provides a technique for determining the significance of high-attention regions learned from tissue images in biomedical applications.
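The abstract does not describe how the consistency score is computed. As a minimal hedged sketch, one plausible formulation is the cosine similarity between the mean self-attention maps produced on training batches and on validation batches: similar maps suggest the model attends to the same tissue structures on both splits, while divergent maps can signal overfitting. The function name, the toy attention data, and the cosine-similarity choice below are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def attention_consistency(train_attn: np.ndarray, val_attn: np.ndarray) -> float:
    """Cosine similarity between mean attention maps.

    Both inputs have shape (batch, tokens, tokens), as produced by a
    ViT self-attention layer after softmax (rows sum to 1).
    """
    t = train_attn.mean(axis=0).ravel()  # average over the batch, then flatten
    v = val_attn.mean(axis=0).ravel()
    return float(np.dot(t, v) / (np.linalg.norm(t) * np.linalg.norm(v)))

# Toy data standing in for extracted ViT attention weights.
rng = np.random.default_rng(0)

def toy_attention(batch: int, tokens: int = 16) -> np.ndarray:
    a = rng.random((batch, tokens, tokens))
    return a / a.sum(axis=-1, keepdims=True)  # row-normalize like softmax output

score = attention_consistency(toy_attention(8), toy_attention(8))
```

Because attention weights are non-negative, the score lies in [0, 1]; a value near 1 for a well-fit model and a markedly lower value between train and validation maps would flag inconsistent attention. In practice the maps would be extracted per-layer (or per-head) from the trained ViT rather than simulated.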

Please contact the Joan Staats Library for information regarding this document.
