29/01/2026
👏 PhD student João Mendes and IBEB full members Prof. Ana M. Mota and Prof. Nuno Matela have published new research on using AI to differentiate breast lesions with very similar radiological appearances. A key highlight of this work is the use of Explainable AI (XAI), which ensures transparency and gives clinicians a clearer understanding of the AI’s decision-making process.
✨ Triple-negative breast cancer (TNBC) is the most aggressive molecular subtype of breast cancer (BC). TNBC lacks targeted treatment options, which results in poor clinical outcomes. TNBC lesions usually present benign characteristics on mammograms, complicating their early diagnosis. This retrospective multicenter study presents a convolutional neural network (CNN) model to distinguish TNBC from benign lesions on 566 mammograms (277 benign/289 TNBC) acquired at three different institutions across the UK. Each mammogram was quality-enhanced using a combination of total variation minimization filtering and contrast-limited adaptive histogram equalization (CLAHE). The proposed model achieved a test set AUC of 0.984, with a sensitivity and specificity of 94.2% and 91.9%, respectively. Explainability with Grad-CAM was applied to the test set, revealing that the model used not only lesion characteristics but also tumor-microenvironment regions to make its predictions. The same test set was analyzed by an expert radiologist, who achieved a sensitivity of 71% and a specificity of 60%. The comparison between the developed model and the expert radiologist highlights the model’s performance and underscores its potential as a complementary diagnostic tool. This model might help in the early diagnosis of TNBC, potentially reducing the number of false negatives.
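For readers curious what the enhancement step can look like in practice, below is a minimal Python sketch using scikit-image. The filter weight, CLAHE clip limit, and file path are illustrative assumptions, not the values used in the paper.

```python
# Illustrative sketch only: the paper's exact TV weight and CLAHE settings are
# not given in this summary, so the parameters below are placeholder assumptions.
import numpy as np
from skimage import io, img_as_float
from skimage.restoration import denoise_tv_chambolle
from skimage.exposure import equalize_adapthist

def enhance_mammogram(path: str) -> np.ndarray:
    """Denoise with total variation minimization, then boost local contrast with CLAHE."""
    image = img_as_float(io.imread(path, as_gray=True))
    # Total variation minimization smooths noise while preserving edges.
    denoised = denoise_tv_chambolle(image, weight=0.1)        # weight is an assumed value
    # CLAHE: contrast-limited adaptive histogram equalization over local tiles.
    enhanced = equalize_adapthist(denoised, clip_limit=0.02)  # clip_limit is an assumed value
    return enhanced

# Example usage (hypothetical file):
# enhanced = enhance_mammogram("mammogram.png")
```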
🔗 Read more at https://link.springer.com/article/10.1007/s11547-025-02157-x
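The Grad-CAM heatmaps mentioned above are what let the authors see that the model also attends to tumor-microenvironment regions. As a rough illustration of how such heatmaps are computed, the sketch below implements Grad-CAM by hand in PyTorch on a generic pretrained ResNet-18; the backbone, target layer, and input size are assumptions for the example and do not reflect the authors' architecture.

```python
# Minimal Grad-CAM sketch, assuming a generic pretrained ResNet-18 rather than
# the authors' CNN; the hooked layer and input size are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
activations, gradients = {}, {}

def forward_hook(module, inputs, output):
    activations["value"] = output.detach()

def backward_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

# Hook the last convolutional block, whose feature maps Grad-CAM weights.
target_layer = model.layer4[-1]
target_layer.register_forward_hook(forward_hook)
target_layer.register_full_backward_hook(backward_hook)

def grad_cam(image: torch.Tensor, class_idx=None) -> torch.Tensor:
    """Return a heatmap (H x W, values in [0, 1]) for one 3x224x224 input image."""
    scores = model(image.unsqueeze(0))
    if class_idx is None:
        class_idx = scores.argmax(dim=1).item()
    model.zero_grad()
    scores[0, class_idx].backward()
    # Weight each feature map by the mean of its gradients, then combine and rescale.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0]

# Example usage (hypothetical input tensor):
# heatmap = grad_cam(torch.rand(3, 224, 224))
```

The resulting heatmap can be overlaid on the mammogram to show which regions drove the prediction, which is how the lesion and tumor-microenvironment contributions were identified in the study.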