Learning Retinal Patterns from Multimodal Images
by
Álvaro S. Hervella, José Rouco, Jorge Novo Buján, Marcos Ortega
Abstract
Training deep neural networks usually requires a vast amount of annotated data, which is expensive to obtain in clinical environments. In this work, we propose the use of complementary medical image modalities as a way to reduce the amount of annotated data required. The self-supervised training of a reconstruction task between paired multimodal images can be used to learn about the image contents without using any labels. Experiments performed on the multimodal setting formed by retinography and fluorescein angiography demonstrate that the proposed task leads to the recognition of relevant retinal structures.
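The self-supervised reconstruction task described in the abstract can be sketched as follows: a network is trained to predict the fluorescein angiography from its paired retinography, so the paired image itself serves as the training target and no manual annotation is needed. The tiny convolutional network, the L1 loss, and the random stand-in tensors below are illustrative assumptions for this sketch, not the authors' actual architecture or data.

```python
import torch
import torch.nn as nn

class ReconstructionNet(nn.Module):
    """Toy retinography -> angiography reconstruction network (illustrative only)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1),   # retinography: 3-channel RGB input
            nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),   # angiography: single-channel output
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, retinography, angiography):
    """One self-supervised step: the paired angiography is the target, no labels."""
    optimizer.zero_grad()
    prediction = model(retinography)
    loss = nn.functional.l1_loss(prediction, angiography)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ReconstructionNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    # Random tensors stand in for an aligned retinography/angiography pair.
    retinography = torch.rand(2, 3, 32, 32)
    angiography = torch.rand(2, 1, 32, 32)
    losses = [train_step(model, optimizer, retinography, angiography)
              for _ in range(30)]
    print(f"initial loss {losses[0]:.3f}, final loss {losses[-1]:.3f}")
```

In the actual multimodal setting, the paired images come from the same patient and are spatially registered, so minimizing the reconstruction error forces the network to recognize the retinal structures (e.g. vessels) that relate the two modalities.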
Archived file: application/pdf, 1.1 MB (web.archive.org; res.mdpi.com)
Open Access Publication. ISSN-L: 2504-3900 (listed in DOAJ, ISSN ROAD, and Keepers Registry)