Ontology Enrichment from Texts (OET): A Biomedical Dataset for Concept Discovery and Placement
The repository provides scripts for data creation, together with guidelines for implementing baseline methods for out-of-KB mention discovery and concept placement. The study is described in this work (link to arXiv; accepted at CIKM 2023).
The dataset is available at Zenodo, and its JSON keys are described in the dataset folder.
Before data creation, the sources below need to be downloaded.
- SNOMED CT https://www.nlm.nih.gov/healthit/snomedct/archive.html (and use snomed-owl-toolkit to form .owl files)
- UMLS https://www.nlm.nih.gov/research/umls/licensedcontent/umlsarchives04.html (and mainly use MRCONSO for mapping UMLS to SNOMED CT)
- MedMentions https://github.com/chanzuckerberg/MedMentions (source of entity linking)
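The MRCONSO-based mapping from UMLS to SNOMED CT mentioned above can be sketched as follows. This is a minimal illustration, assuming the standard pipe-delimited MRCONSO.RRF layout (CUI in field 0, SAB in field 11, CODE in field 13); the restriction to the `SNOMEDCT_US` vocabulary and the sample rows are assumptions, not taken from the repository's scripts.

```python
# Minimal sketch: map UMLS CUIs to SNOMED CT concept IDs from MRCONSO.RRF.
# MRCONSO is pipe-delimited; field 0 is the CUI, field 11 (SAB) is the
# source vocabulary, and field 13 (CODE) is the source concept ID.
# The "SNOMEDCT_US" vocabulary name is an assumption for this sketch.
from collections import defaultdict

def umls_to_snomed(mrconso_lines):
    """Return a dict mapping each CUI to its set of SNOMED CT codes."""
    mapping = defaultdict(set)
    for line in mrconso_lines:
        fields = line.rstrip("\n").split("|")
        cui, sab, code = fields[0], fields[11], fields[13]
        if sab == "SNOMEDCT_US":
            mapping[cui].add(code)
    return dict(mapping)

# Tiny made-up example rows (only the CUI, SAB, and CODE fields matter here):
sample = [
    "C0027051|ENG|P|L000|PF|S000|Y|A000||||SNOMEDCT_US|PT|22298006|Myocardial infarction|0|N||",
    "C0027051|ENG|S|L001|PF|S001|Y|A001||||MSH|MH|D009203|Myocardial Infarction|0|N||",
]
print(umls_to_snomed(sample))  # {'C0027051': {'22298006'}}
```

In practice the same filtering would be streamed over the full MRCONSO.RRF file rather than an in-memory list.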
The following tools and libraries are used.
- Protege http://protegeproject.github.io/protege/
- snomed-owl-toolkit https://github.com/IHTSDO/snomed-owl-toolkit
- DeepOnto https://github.com/KRR-Oxford/DeepOnto (based on OWLAPI https://owlapi.sourceforge.net/) for ontology processing and complex concept verbalisation
The data creation scripts are available in the data-construction folder, where run_preprocess_ents_and_data_new.sh provides an overall shell script that calls the other .py files.
We used BLINKout with default parameters, together with an edge Bi-encoder, which adapts the original BLINK/BLINKout model by matching a mention to an edge <parent, child>.
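The edge-matching idea can be sketched as below: each candidate edge <parent, child> is verbalised into one text, and mention and edge texts are embedded and ranked by similarity. The actual model uses two BERT encoders; a toy bag-of-words cosine similarity stands in here so the example stays self-contained, and all names and data below are illustrative assumptions.

```python
# Sketch of edge-candidate ranking in the spirit of an edge Bi-encoder:
# a real implementation embeds the mention and each verbalised edge with
# BERT encoders; this toy version uses bag-of-words cosine similarity.
import math
from collections import Counter

def embed(text):
    # Stand-in "encoder": a token-count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_edges(mention, edges, top_k=2):
    """Rank candidate <parent, child> edges against a mention."""
    m = embed(mention)
    scored = [(cosine(m, embed(f"{p} [SEP] {c}")), (p, c)) for p, c in edges]
    return [edge for _, edge in sorted(scored, reverse=True)[:top_k]]

# Illustrative candidate edges (not from the dataset):
edges = [
    ("Heart disease", "Myocardial infarction"),
    ("Lung disease", "Pneumonia"),
    ("Infectious disease", "Pneumonia"),
]
print(rank_edges("bacterial pneumonia", edges, top_k=2))
```

The `[SEP]`-joined parent/child string mirrors how an edge can be fed to a single encoder as one sequence.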
Then, after selecting the top-k edges, an optional step is to choose the correct ones for the evaluation. We tested GPT-3.5 (gpt-3.5-turbo) via the OpenAI API. Details of the prompt and implementation are available in the baseline-methods/concept-placement folder.
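A zero-shot prompt for this edge-selection step could be built along the following lines. The wording, format, and example data below are illustrative assumptions only; the actual prompt used is in the baseline-methods/concept-placement folder.

```python
# Hypothetical zero-shot prompt for choosing correct edges among top-k
# candidates; the exact prompt used in the study lives in the repository,
# so this wording is an illustrative assumption only.
def build_edge_selection_prompt(mention, context, edges):
    lines = [
        "A new concept mention needs to be placed into SNOMED CT.",
        f"Mention: {mention}",
        f"Context: {context}",
        "Candidate edges (<parent, child> pairs the new concept may fit between):",
    ]
    for i, (parent, child) in enumerate(edges, 1):
        lines.append(f"{i}. <{parent}, {child}>")
    lines.append("Answer with the numbers of all correct edges, or 'none'.")
    return "\n".join(lines)

prompt = build_edge_selection_prompt(
    "bacterial pneumonia",
    "The patient was diagnosed with bacterial pneumonia.",
    [("Lung disease", "Pneumonia"), ("Infectious disease", "Pneumonia")],
)
print(prompt)
```

The returned string would then be sent as the user message of a gpt-3.5-turbo chat-completion request, with the model's numbered answer parsed back into edge selections.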
See an example project processing the dataset at LM-ontology-concept-placement.
- The baseline implementations are based on the BLINKout paper and the BLINK repository, under the MIT licence.
- The zero-shot prompting uses GPT-3.5 via the OpenAI API.
- We acknowledge all data and processing sources listed above.