Title: Ontology matching with Large Language Models and prioritized depth-first search
Authors: Taboada Iglesias, María Jesús; Martínez Hernández, Diego; Arideh, Mohammed; Mosquera Losada, María Rosa
Date issued: 2025-05-07
Date available: 2025-11-04
Citation: Taboada, M., Martinez, D., Arideh, M., & Mosquera, R. (2025). Ontology matching with Large Language Models and prioritized depth-first search. Information Fusion, 123, 103254. https://doi.org/10.1016/j.inffus.2025.103254
ISSN: 1566-2535 (print); 1872-6305 (online)
DOI: 10.1016/j.inffus.2025.103254
Handle: https://hdl.handle.net/10347/43542
Type: journal article
Language: English
Access: open access
Rights: © 2025 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Keywords: Ontology matching; Retrieval augmented generation; Greedy search; Large Language Models; Zero-shot setting

Abstract: Ontology matching (OM) plays a key role in enabling data interoperability and knowledge sharing. Recently, methods based on Large Language Models (LLMs) have shown great promise in OM, particularly through the use of a retrieve-then-prompt pipeline, in which relevant target entities are first retrieved and then used to prompt the LLM to predict the final matches. Despite their potential, these systems still show limited performance and high computational overhead. To address these issues, we introduce MILA, a novel approach that embeds a retrieve-identify-prompt pipeline within a prioritized depth-first search (PDFS) strategy. This approach efficiently identifies a large number of semantic correspondences with high accuracy, limiting LLM requests to only the most borderline cases. We evaluated MILA using three challenges from the 2024 edition of the Ontology Alignment Evaluation Initiative. Our method achieved the highest F-measure in five of seven unsupervised tasks, outperforming state-of-the-art OM systems by up to 17%. It also performed on par with, or better than, the leading supervised OM systems. MILA further exhibited task-agnostic behavior, remaining stable across all tasks and settings while significantly reducing runtime. These findings highlight that high-performance LLM-based OM can be achieved through a combination of programmed (PDFS), learned (embedding vectors), and prompting-based heuristics, without the need for domain-specific heuristics or fine-tuning.
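The retrieve-identify-prompt idea summarized in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not MILA's implementation: the string-ratio similarity stands in for embedding-based retrieval, the two thresholds are arbitrary, and `ask_llm` is a hypothetical stub for the LLM prompting step.

```python
# Illustrative sketch of a retrieve-identify-prompt pipeline for ontology
# matching. All names and thresholds here are hypothetical; MILA's actual
# retrieval model, prompts, and PDFS bookkeeping are described in the paper.
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    # Stand-in for cosine similarity over embedding vectors.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def match(source_entities, target_entities, hi=0.9, lo=0.6, ask_llm=None):
    """Retrieve candidates, accept confident matches directly, and defer
    only borderline pairs to the (expensive) LLM call."""
    mappings = {}
    for s in source_entities:
        # Retrieve: rank target candidates best-first (a greedy,
        # priority-ordered walk over candidates).
        ranked = sorted(((similarity(s, t), t) for t in target_entities),
                        reverse=True)
        for score, t in ranked:
            if score >= hi:                  # identify: confident match
                mappings[s] = t
                break
            if score >= lo and ask_llm:      # prompt: borderline case only
                if ask_llm(s, t):
                    mappings[s] = t
                    break
            if score < lo:                   # remaining candidates are worse
                break
    return mappings


# Toy usage; the lambda pretends the LLM confirms the borderline pair.
pairs = match(["Heart", "Myocardium"],
              ["heart", "myocardium structure", "kidney"],
              ask_llm=lambda s, t: True)
# -> {'Heart': 'heart', 'Myocardium': 'myocardium structure'}
```

Only the second source entity triggers the stub LLM call: its best candidate scores between the two thresholds, while the first is accepted directly, which mirrors how the paper confines LLM requests to borderline cases.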