Sheila E. Whitman, Marat I. Latypov,
Machine learning of microstructure–property relationships in materials leveraging microstructure representation from foundational vision transformers,
Acta Materialia,
Volume 296,
2025,
121217,
ISSN 1359-6454,
https://doi.org/10.1016/j.actamat.2025.121217.
(https://www.sciencedirect.com/science/article/pii/S135964542500504X)
Abstract: Machine learning of microstructure–property relationships from data is an emerging approach in computational materials science. Most existing machine learning efforts focus on the development of task-specific models for each microstructure–property relationship. We propose utilizing pre-trained foundational vision transformers for the extraction of task-agnostic microstructure features and subsequent lightweight machine learning of a microstructure-dependent property. We demonstrate our approach with pre-trained state-of-the-art vision transformers (CLIP, DINOv2, SAM) in two case studies of machine learning: (i) the elastic modulus of two-phase microstructures based on simulation data; and (ii) the Vickers hardness of Ni-base and Co-base superalloys based on experimental data published in the literature. Our results show the potential of foundational vision transformers for robust microstructure representation and efficient machine learning of microstructure–property relationships without the need for expensive task-specific training or fine-tuning of bespoke deep learning models.
Keywords: Microstructure–property relationships; Microstructure representation; Machine learning; Reduced-order models
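
The workflow described in the abstract (task-agnostic feature extraction with a pre-trained vision transformer followed by lightweight property regression) can be illustrated with a minimal sketch. This is not the authors' exact pipeline: the DINOv2 backbone is loaded via torch.hub, the regressor is a scikit-learn ridge model chosen for illustration, and the image paths and property values are hypothetical placeholders.

```python
# Minimal sketch (assumed, not the published implementation): extract
# microstructure features with a frozen pre-trained DINOv2 vision transformer,
# then fit a lightweight regressor to map features to a property.
import torch
from torchvision import transforms
from PIL import Image
from sklearn.linear_model import Ridge

# Load a pre-trained DINOv2 ViT-S/14 backbone; no task-specific training or fine-tuning.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(image_path: str) -> torch.Tensor:
    """Return the CLS-token embedding of a microstructure image."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        return model(batch).squeeze(0)  # 384-dim feature vector for ViT-S/14

# Hypothetical dataset: microstructure images and a measured or simulated property.
image_paths = ["micro_001.png", "micro_002.png", "micro_003.png"]
properties = [112.0, 98.5, 105.3]  # e.g., elastic modulus in GPa

X = torch.stack([extract_features(p) for p in image_paths]).numpy()
regressor = Ridge(alpha=1.0).fit(X, properties)  # lightweight surrogate model
```

Any lightweight model (ridge, Gaussian process, gradient boosting) could take the place of the ridge regressor here; the key point from the abstract is that the vision transformer stays frozen, so only the small downstream model is trained.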