Show simple item record

dc.contributor.author Lawley C.J.M.
dc.contributor.author Raimondo S.
dc.contributor.author Chen T.
dc.contributor.author Brin L.
dc.contributor.author Zakharov A.
dc.contributor.author Kur D.
dc.contributor.author Hui J.
dc.contributor.author Newton G.
dc.contributor.author Burgoyne S.L.
dc.contributor.author Marquis G.
dc.date.accessioned 2023-07-24T05:42:53Z
dc.date.available 2023-07-24T05:42:53Z
dc.date.issued 2022
dc.identifier.citation Applied Computing and Geosciences, 2022, 14, 100084, p. 1-10
dc.identifier.uri https://repository.geologyscience.ru/handle/123456789/41600
dc.description.abstract Geoscientists use observations and descriptions of the rock record to study the origins and history of our planet, which has resulted in a vast volume of scientific literature. Recent progress in natural language processing (NLP) has the potential to parse through and extract knowledge from unstructured text, but there has, so far, been only limited work on the concepts and vocabularies that are specific to geoscience. Herein we harvest and process public geoscientific reports (i.e., Canadian federal and provincial geological survey publications databases) and a subset of open access and peer-reviewed publications to train new, geoscience-specific language models to address that knowledge gap. Language model performance is validated using a series of new geoscience-specific NLP tasks (i.e., analogies, clustering, relatedness, and nearest neighbour analysis) that were developed as part of the current study. The raw and processed national geological survey corpora, language models, and evaluation criteria are all made public for the first time. We demonstrate that non-contextual (i.e., Global Vectors for Word Representation, GloVe) and contextual (i.e., Bidirectional Encoder Representations from Transformers, BERT) language models updated using the geoscientific corpora outperform the generic versions of these models for each of the evaluation criteria. Principal component analysis further demonstrates that word embeddings trained on geoscientific text capture meaningful semantic relationships, including rock classifications, mineral properties and compositions, and the geochemical behaviour of elements. Semantic relationships that emerge from the vector space have the potential to unlock latent knowledge within unstructured text, and perhaps more importantly, also highlight the potential for other downstream geoscience-focused NLP tasks (e.g., keyword prediction, document similarity, recommender systems, rock and mineral classification).
dc.language.iso en
dc.subject Word embedding
dc.subject Language model
dc.subject Machine learning
dc.subject Artificial intelligence
dc.subject BERT
dc.subject GloVe
dc.title Geoscience language models and their intrinsic evaluation
dc.type Article
dc.identifier.doi 10.1016/j.acags.2022.100084
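
The abstract mentions analogy tasks as one of the intrinsic evaluation criteria for the word embeddings. A minimal sketch of how such an analogy test works, using invented toy vectors and cosine similarity (the paper's actual vocabulary, vectors, and scoring details are not given here, so all names and values below are illustrative assumptions):

```python
# Hypothetical sketch of an analogy-style intrinsic evaluation:
# solve a : b :: c : ? by vector arithmetic (b - a + c) and pick the
# nearest word by cosine similarity. Toy 3-D vectors are invented
# for illustration; real evaluations use trained embeddings.
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def solve_analogy(emb, a, b, c):
    # Return the word (excluding a, b, c) closest to b - a + c.
    target = emb[b] - emb[a] + emb[c]
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(emb[w], target))

# Toy geoscience-flavoured vocabulary with hand-crafted vectors.
emb = {
    "basalt":  np.array([1.0, 0.1, 0.0]),
    "mafic":   np.array([1.0, 1.0, 0.0]),
    "granite": np.array([0.0, 0.1, 1.0]),
    "felsic":  np.array([0.0, 1.0, 1.0]),
    "quartz":  np.array([0.5, 0.2, 0.5]),
}

# basalt : mafic :: granite : ?  -> expect "felsic"
print(solve_analogy(emb, "basalt", "mafic", "granite"))
```

An embedding that captures the rock-composition relationship should map `granite` to `felsic` here, which is the kind of semantic regularity the abstract reports for its geoscience-trained models.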