Learning Retinal Patterns from Multimodal Images
release_sv2ba5vq5vavplnl6s4ihwp2ou

by Álvaro S. Hervella, José Rouco, Jorge Novo Buján, Marcos Ortega

Published in Proceedings (MDPI) by MDPI AG.

2018, Issue 18, p. 1195

Abstract

The training of deep neural networks usually requires a vast amount of annotated data, which is expensive to obtain in clinical environments. In this work, we propose the use of complementary medical image modalities as an alternative to reduce the amount of annotated data required. The self-supervised training of a reconstruction task between paired multimodal images can be used to learn about the image contents without using any labels. Experiments performed with the multimodal setting formed by retinography and fluorescein angiography demonstrate that the proposed task leads to the recognition of relevant retinal structures.
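The core idea in the abstract is that the paired angiography itself serves as the training target, so no manual annotation is needed. The toy sketch below illustrates that setup with a per-pixel linear model fitted by gradient descent on a mean-squared-error reconstruction loss; the model, loss, and data are illustrative assumptions, not the authors' actual network or training procedure.

```python
# Toy sketch of self-supervised multimodal reconstruction: the "label" for
# each retinography is its paired fluorescein angiography, so no manual
# annotation is needed. The per-pixel linear model and the training loop are
# illustrative assumptions, not the method from the paper.

def mse(pred, target):
    """Mean squared error between two equally sized pixel lists."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def train_reconstruction(pairs, lr=0.1, epochs=100):
    """Fit pred = w * retino_pixel to the paired angiography pixels."""
    w = 0.0
    for _ in range(epochs):
        for retino, angio in pairs:  # paired multimodal images
            pred = [w * x for x in retino]
            # gradient of the MSE reconstruction loss w.r.t. w
            grad = sum(2 * (p - t) * x
                       for p, t, x in zip(pred, angio, retino)) / len(retino)
            w -= lr * grad
    return w

# Synthetic paired "images" (flattened) where angiography = 2 x retinography.
pairs = [([0.1, 0.4, 0.9], [0.2, 0.8, 1.8]),
         ([0.3, 0.5, 0.7], [0.6, 1.0, 1.4])]
w = train_reconstruction(pairs)
```

On this synthetic data the learned weight converges to the true cross-modality scale factor, mimicking how a reconstruction objective forces the model to capture the structure shared between modalities.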

Archived Files and Locations

application/pdf  1.1 MB
file_7e4clrhpyrainb3fnte2n7eggu
web.archive.org (webarchive)
res.mdpi.com (web)
Type  article-journal
Stage   published
Date   2018-09-17
Language   en
Proceedings Metadata
Open Access Publication
In DOAJ
In ISSN ROAD
In Keepers Registry
ISSN-L:  2504-3900
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: f597efc3-0bb2-4581-a6d4-9885369c7060