cross-posted from: https://lemmy.world/post/34416839
The fundamental idea of this paper is for ChatGPT-like apps to replace free natural language with controlled natural languages like ACE, gaining lower energy consumption and more deterministic answers; for the user to be able to adjust this trade-off ratio at will via LLMs (which is not possible when starting from a ChatGPT-like app); and to capture this new paradigm in a new type of browser that has natural language as its primary interface, here called a semantic-web-first browser.
"
Basically, with an app like ChatGPT, you face a black box: you send it commands, it gives you unpredictable answers, and it consumes huge amounts of energy.
The semantic web browser with the precision controller, in contrast, starts as a complete white box: you control the ontology and have full control of the language and the outcome, and it consumes much less energy. With the precision controller, you can then move along the ratio towards a ChatGPT-like black box.
At the default level, the ACE syntax is enforced very strictly, and semantic web data is expected to match the defined syntax exactly. At a lower precision level, an LLM sits in between: syntax is not enforced as strictly, and the ontology is enforced even where the data does not match it. This also extends to other web paradigms such as MCP/AI-web and traditional web services with REST APIs.
The precision controller basically lets the user move between a very strict semantic web browser and the loose cannon of a ChatGPT+MCP app. And I think this adjustable ratio is only possible if you start by developing a strict semantic web browser that has a precision controller integrated.
Another merit is that the energy consumption can be adjusted at will. If money or energy is scarce, for example in a state administration, the semantic web browser can still be used, while ChatGPT-like apps become infeasible. "
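A minimal sketch of how such a precision controller might dispatch between a strict controlled-language parser and an LLM fallback. Everything here is hypothetical: the toy regex stands in for a real ACE parser, and `llm_interpret` stands in for an expensive model call; only the dispatch logic illustrates the idea.

```python
import re

# Toy stand-in for an ACE-style grammar: accept only "Subject verbs a/an/the object." sentences.
ACE_PATTERN = re.compile(r"^[A-Z][a-z]+ [a-z]+s (a|an|the) [a-z]+\.$")

def strict_parse(query: str):
    """Return a structured triple if the query matches the toy grammar, else None."""
    if ACE_PATTERN.match(query):
        subject, verb, _, obj = query.rstrip(".").split(" ")
        return {"subject": subject, "verb": verb, "object": obj}
    return None

def llm_interpret(query: str):
    """Stub for an LLM call; in a real system this is the costly, probabilistic path."""
    return {"free_form": query}

def answer(query: str, precision: float):
    """precision = 1.0 -> strict white box; lower values permit the LLM fallback."""
    parsed = strict_parse(query)
    if parsed is not None:
        return parsed  # deterministic, cheap path
    if precision >= 1.0:
        raise ValueError("query does not conform to the controlled language")
    return llm_interpret(query)  # probabilistic, energy-hungry fallback
```

At full precision, non-conforming input is rejected outright; lowering the dial trades determinism (and energy) for flexibility, which is the ratio the post describes.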
the main issue here is still one of use case. this is a text-interface for the semantic web, but the semantic web is built to be easily parsed so you wouldn’t need a specific interface. llms, meanwhile, are data transformers, which you don’t want loose on strict content because the integrity can no longer be guaranteed. so what are you left with?
also, if you think this can be used on government code you might need to adjust your expectations.