Ontologies and LLMs

It’s a fair question: in an era where LLMs seem to handle almost anything, why bother with ontologies at all? To many, the word feels a bit 2005, a relic of the semantic web that was heavy on academic rigor but light on practical speed. But while LLMs excel at processing natural language, they leave much to be desired in the transparency of the process and in how easily their output integrates with other information processing systems. I’ve been thinking that ontologies might be due for a comeback, not as a replacement for neural networks, but as the logic engine that helps make them more reliable.

If one asks why ontologies died, the answer usually comes down to cost. Curation was traditionally a manual, painful process. You needed human experts to sit in a room for months, arguing over definitions and mapping out every possible relationship. It was simply too expensive and too slow to keep up with the messiness of real-world data.

At its core, an ontology is really just a schema definition: it defines a specific language for a given problem and fills it with domain knowledge. With a standard approach, you might give an LLM a massive document and hope it interprets the instructions correctly. With an ontology, you define the “language”, the entities and their relationships, beforehand. When you define a schema, you aren’t just organizing data; you’re setting the ground rules for the reasoning process.
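To make that concrete, here is a minimal sketch of what “defining the language beforehand” can look like. The domain (people and companies), the entity types, and the relation names are all hypothetical, invented for illustration; the point is only that the schema decides which facts are even expressible.

```python
from dataclasses import dataclass

# A hypothetical mini-ontology for a "company" domain: the schema declares
# which entity types exist and which relationships between them are legal.
SCHEMA = {
    "entities": {"Person", "Company", "Product"},
    "relations": {
        # relation name -> (allowed subject type, allowed object type)
        "works_at": ("Person", "Company"),
        "makes": ("Company", "Product"),
    },
}

@dataclass(frozen=True)
class Fact:
    subject: str
    subject_type: str
    relation: str
    obj: str
    obj_type: str

def validate(fact: Fact) -> bool:
    """A fact is admissible only if its relation exists and its types match."""
    if fact.relation not in SCHEMA["relations"]:
        return False
    subj_t, obj_t = SCHEMA["relations"][fact.relation]
    return fact.subject_type == subj_t and fact.obj_type == obj_t

# A fact that fits the language is accepted...
ok = validate(Fact("Ada", "Person", "works_at", "Acme", "Company"))
# ...and one with the types reversed is rejected before it enters the system.
bad = validate(Fact("Acme", "Company", "works_at", "Ada", "Person"))
print(ok, bad)  # True False
```

In a real system you would reach for RDF/OWL tooling rather than a dict, but the ground rule is the same: anything that doesn’t type-check against the schema never makes it into the reasoning layer.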

The biggest argument for revisiting these rusty structures is explainability. Blending LLMs with ontologies gets us something much closer to a glass box, where you can see exactly which entities were identified and follow the logical chain from a factual property to a final conclusion.
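That “logical chain” can be made literal. The toy forward-chaining step below (the facts, the rule, and the relation names are all made up for illustration) records a proof trace alongside each derived conclusion, so the answer is never just an opaque token stream:

```python
# Facts as (subject, relation, object) triples.
facts = {("Ada", "works_at", "Acme"), ("Acme", "based_in", "Berlin")}

# Hypothetical rule: works_at(x, c) AND based_in(c, city) -> works_in(x, city).
def derive(facts):
    derived = set(facts)
    trace = []  # each entry: (conclusion, list of premises that produced it)
    for (x, r1, c) in facts:
        if r1 != "works_at":
            continue
        for (c2, r2, city) in facts:
            if r2 == "based_in" and c2 == c:
                conclusion = (x, "works_in", city)
                derived.add(conclusion)
                trace.append((conclusion, [(x, r1, c), (c2, r2, city)]))
    return derived, trace

derived, trace = derive(facts)
for conclusion, premises in trace:
    # The glass box: every conclusion prints with the facts it rests on.
    print(conclusion, "<=", premises)
```

The trace is the whole point: when the system claims Ada works in Berlin, it can show you the two facts and the rule that got it there.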

The real intuition here is that while the manual effort of the past was a dealbreaker, LLMs change the math entirely. LLMs are actually quite good at ontology learning, which gives us an unprecedented opportunity to lower the cost of curation by letting the neural network help build the symbolic structure. This isn’t about going back to the old way of doing things; it’s about a hybrid approach where we use the LLM to build the schema, and then use that schema to provide the guardrails for the reasoning.
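The shape of that hybrid loop can be sketched in a few lines. Everything here is a stand-in: `llm_extract` is a stub with a canned response (a real system would call your model of choice and prompt it for schema-conformant JSON), and the relation names are invented. What matters is the division of labor: the LLM proposes, the schema disposes.

```python
import json

# Hypothetical stand-in for an LLM call. In practice this would prompt a
# model to emit JSON triples; here it returns a canned response, including
# one triple that violates the schema, to show the guardrail at work.
def llm_extract(text: str) -> str:
    return json.dumps([
        {"subject": "Ada", "relation": "works_at", "object": "Acme"},
        {"subject": "Ada", "relation": "admires", "object": "Acme"},
    ])

# The schema acting as a guardrail: only these relations are part of the language.
ALLOWED_RELATIONS = {"works_at", "makes"}

def extract_facts(text: str):
    """Parse candidate triples from the LLM and keep only schema-legal ones."""
    candidates = json.loads(llm_extract(text))
    return [t for t in candidates if t["relation"] in ALLOWED_RELATIONS]

facts = extract_facts("Ada joined Acme last spring.")
print(facts)  # only the "works_at" triple survives
```

The neural side handles the messy language; the symbolic side decides what counts as a fact. Curation stops being months of experts in a room and becomes review of machine-proposed, schema-filtered triples.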

So, do ontologies matter? Probably. They might not be the one true path, but I believe they offer a practical bridge between the language model and the verifiable logic. We finally have the tools to make them cheap enough to be useful again.