Neural relational inference for disaster multimedia retrieval

Publication: Contributions to journals › Journal articles › Research › peer-reviewed

Standard

Neural relational inference for disaster multimedia retrieval. / Fadel, Samuel G.; Torres, Ricardo da S.
In: Multimedia Tools and Applications, Vol. 79, No. 35-36, 09.2020, pp. 26735-26746.

Vancouver

Fadel SG, Torres RDS. Neural relational inference for disaster multimedia retrieval. Multimedia Tools and Applications. 2020 Sep;79(35-36):26735-26746. doi: 10.1007/s11042-020-09272-z

BibTeX

@article{55f1a49dac5d468cb12eb356b6c10df0,
title = "Neural relational inference for disaster multimedia retrieval",
abstract = "Events around the world are increasingly documented on social media, especially by the people experiencing them, as these platforms become more popular over time. As a consequence, social media turns into a valuable source of data for understanding those events. Due to their destructive potential, natural disasters are among events of particular interest to response operations and environmental monitoring agencies. However, this amount of information also makes it challenging to identify relevant content pertaining to those events. In this paper, we use a relational neural network model for identifying this type of content. The model is particularly suitable for unstructured text, that is, text with no particular arrangement of words, such as tags, which is commonplace in social media data. In addition, our method can be combined with a CNN for handling multimodal data where text and visual data are available. We perform experiments in three different scenarios, where different modalities are evaluated: visual, textual, and both. Our method achieves competitive performance in both modalities by themselves, while significantly outperforms the baseline on the multimodal scenario. We also demonstrate the behavior of the proposed method in different applications by performing additional experiments in the CUB-200-2011 multimodal dataset.",
keywords = "Information retrieval, Machine learning, Multimodal, Natural language processing, Neural networks, Business informatics",
author = "Fadel, {Samuel G.} and Torres, {Ricardo da S.}",
year = "2020",
month = sep,
doi = "10.1007/s11042-020-09272-z",
language = "English",
volume = "79",
pages = "26735--26746",
journal = "Multimedia Tools and Applications",
issn = "1380-7501",
publisher = "Springer New York LLC",
number = "35-36",

}
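The abstract above describes the approach at a high level: a relational (relation-network-style) model over an unordered set of tag embeddings, which can be fused with CNN image features when visual data is available. The PyTorch sketch below is purely illustrative and not the authors' implementation; the class names (RelationalTagEncoder, MultimodalRelevanceClassifier), layer sizes, and the late-fusion design are assumptions made for the example.

import torch
import torch.nn as nn

class RelationalTagEncoder(nn.Module):
    # Relation-network-style encoder for an unordered set of tags:
    # a shared MLP g scores every pair of tag embeddings, the pairwise
    # outputs are summed (permutation invariant), and an MLP f maps the
    # aggregate to a fixed-size text representation.
    def __init__(self, vocab_size, emb_dim=128, hidden=256, out_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.g = nn.Sequential(nn.Linear(2 * emb_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hidden, out_dim), nn.ReLU())

    def forward(self, tag_ids):                       # (batch, n_tags)
        e = self.embed(tag_ids)                       # (batch, n_tags, emb_dim)
        n = e.size(1)
        e_i = e.unsqueeze(2).expand(-1, -1, n, -1)    # (batch, n, n, emb_dim)
        e_j = e.unsqueeze(1).expand(-1, n, -1, -1)    # (batch, n, n, emb_dim)
        pairs = torch.cat([e_i, e_j], dim=-1)         # all ordered tag pairs
        rel = self.g(pairs).sum(dim=(1, 2))           # sum over pairs
        return self.f(rel)                            # (batch, out_dim)

class MultimodalRelevanceClassifier(nn.Module):
    # Hypothetical late fusion of the relational text encoding with pooled
    # CNN image features, producing a single relevance logit per post.
    def __init__(self, vocab_size, img_feat_dim=512, out_dim=256):
        super().__init__()
        self.text_enc = RelationalTagEncoder(vocab_size, out_dim=out_dim)
        self.img_proj = nn.Linear(img_feat_dim, out_dim)
        self.head = nn.Sequential(nn.Linear(2 * out_dim, 128), nn.ReLU(),
                                  nn.Linear(128, 1))

    def forward(self, tag_ids, img_features):
        t = self.text_enc(tag_ids)
        v = torch.relu(self.img_proj(img_features))
        return self.head(torch.cat([t, v], dim=-1))

# Minimal usage with random stand-ins; in practice img_features would come
# from a pretrained CNN backbone (e.g. a ResNet) rather than torch.randn.
model = MultimodalRelevanceClassifier(vocab_size=10000)
tags = torch.randint(1, 10000, (4, 6))    # 4 posts, 6 tag ids each
img_features = torch.randn(4, 512)        # pooled CNN features
logits = model(tags, img_features)        # shape: (4, 1)

The pairwise scoring followed by a sum makes the tag encoding order-independent, which is why this family of models suits tags and other unstructured text mentioned in the abstract.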

RIS

TY - JOUR

T1 - Neural relational inference for disaster multimedia retrieval

AU - Fadel, Samuel G.

AU - Torres, Ricardo da S.

PY - 2020/9

Y1 - 2020/9

N2 - Events around the world are increasingly documented on social media, especially by the people experiencing them, as these platforms become more popular over time. As a consequence, social media becomes a valuable source of data for understanding those events. Due to their destructive potential, natural disasters are among events of particular interest to response operations and environmental monitoring agencies. However, this amount of information also makes it challenging to identify relevant content pertaining to those events. In this paper, we use a relational neural network model for identifying this type of content. The model is particularly suitable for unstructured text, that is, text with no particular arrangement of words, such as tags, which is commonplace in social media data. In addition, our method can be combined with a CNN for handling multimodal data where both text and visual data are available. We perform experiments in three different scenarios, where different modalities are evaluated: visual, textual, and both. Our method achieves competitive performance with each modality by itself, while significantly outperforming the baseline in the multimodal scenario. We also demonstrate the behavior of the proposed method in different applications by performing additional experiments on the CUB-200-2011 multimodal dataset.

AB - Events around the world are increasingly documented on social media, especially by the people experiencing them, as these platforms become more popular over time. As a consequence, social media becomes a valuable source of data for understanding those events. Due to their destructive potential, natural disasters are among events of particular interest to response operations and environmental monitoring agencies. However, this amount of information also makes it challenging to identify relevant content pertaining to those events. In this paper, we use a relational neural network model for identifying this type of content. The model is particularly suitable for unstructured text, that is, text with no particular arrangement of words, such as tags, which is commonplace in social media data. In addition, our method can be combined with a CNN for handling multimodal data where both text and visual data are available. We perform experiments in three different scenarios, where different modalities are evaluated: visual, textual, and both. Our method achieves competitive performance with each modality by itself, while significantly outperforming the baseline in the multimodal scenario. We also demonstrate the behavior of the proposed method in different applications by performing additional experiments on the CUB-200-2011 multimodal dataset.

KW - Information retrieval

KW - Machine learning

KW - Multimodal

KW - Natural language processing

KW - Neural networks

KW - Business informatics

UR - http://www.scopus.com/inward/record.url?scp=85088125041&partnerID=8YFLogxK

U2 - 10.1007/s11042-020-09272-z

DO - 10.1007/s11042-020-09272-z

M3 - Journal articles

AN - SCOPUS:85088125041

VL - 79

SP - 26735

EP - 26746

JO - Multimedia Tools and Applications

JF - Multimedia Tools and Applications

SN - 1380-7501

IS - 35-36

ER -

DOI: https://doi.org/10.1007/s11042-020-09272-z