
Enhancing Cross‑lingual Biomedical Concept Normalization Using Deep Neural Network Pretrained Language Models

Lin, Ying-Chin; Hoffmann, Phillip; Rahm, Erhard
SN Computer Science (2022) 3:387
2022-07

Further information: https://trebuchet.public.springernature.app/get_content/c82fc918-bb36-4b8c-9c3f-5e38d5304f23

Description

In this study, we propose a new approach for cross-lingual biomedical concept normalization, the process of mapping text in non-English documents to English concepts of a knowledge base. The resulting mappings, called semantic annotations, enhance data integration and the interoperability of documents in different languages. The US FDA (Food and Drug Administration) therefore requires all submitted medical forms to be semantically annotated. These standardized medical forms are used in health care practice and biomedical research and are translated or adapted into various languages. Mapping them to the same concepts (usually in English) facilitates the comparison of multiple medical studies, even across languages. However, translation and adaptation can cause the forms to deviate from the original text in both syntax and wording, which leads conventional string matching methods to produce low-quality annotation results. Our new approach therefore incorporates semantics into the cross-lingual concept normalization process, using sentence embeddings generated by BERT-based pretrained language models. We evaluate the approach by annotating entire questions of German medical forms with English concepts, as required by the FDA. Compared to conventional string matching methods, the new approach improves recall by 136%, precision by 52%, and F-measure by 66%.
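The core idea, matching by meaning rather than by surface strings, can be illustrated with a short sketch. The example below is a minimal illustration and not the paper's implementation: it assumes the off-the-shelf multilingual Sentence-BERT model paraphrase-multilingual-MiniLM-L12-v2, cosine similarity as the ranking criterion, and invented concept labels.

```python
# A minimal sketch of embedding-based cross-lingual concept normalization.
# Assumptions (not taken from the paper): the multilingual Sentence-BERT
# model "paraphrase-multilingual-MiniLM-L12-v2", cosine similarity as the
# ranking criterion, and illustrative concept labels.
from sentence_transformers import SentenceTransformer, util

# English concept labels from a knowledge base (illustrative only).
concepts = [
    "How often do you feel tired during the day?",
    "Do you have difficulty falling asleep?",
    "Have you experienced shortness of breath?",
]

# A question from a German medical form to be normalized
# ("How often do you feel tired during the day?").
question = "Wie oft fühlen Sie sich tagsüber müde?"

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Embed the German question and the English concepts into a shared
# multilingual vector space, then rank concepts by cosine similarity.
concept_emb = model.encode(concepts, convert_to_tensor=True)
question_emb = model.encode(question, convert_to_tensor=True)
scores = util.cos_sim(question_emb, concept_emb)[0]

best = int(scores.argmax())
print(f"Best match: {concepts[best]} (score={scores[best].item():.3f})")
```

Because the embeddings capture meaning, the German question and its English counterpart land close together in the vector space even though they share almost no character overlap, which is precisely the situation where exact or fuzzy string matching breaks down.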