Teaching Search Engines to Understand Arguments: Inside the AKASE Project
A European research project is building a knowledge graph of public argumentation – and using it to make web search smarter
When you search the web for a contentious topic – say, how AI should be regulated – you get a list of links ranked by relevance to your keywords. What you do not get is any indication of whether the arguments in those documents are well-structured, logically sound, or representative of a balanced range of perspectives. The search engine has no understanding of argumentation. It cannot tell you which pages contain strong reasoning and which are riddled with logical fallacies.
The AKASE project – Argumentation Knowledge-Graphs for Advanced Search Engines – set out to change this. Funded under the European OpenWebSearch.EU project and carried out at the University of Groningen, AKASE has built a large-scale computational map of public argumentation, extracted from tens of thousands of web documents, and used it to power two new kinds of tools: a search engine that ranks results by argumentative quality, and a multi-agent deliberation platform where humans and AI reason together.
The Problem: Arguments Are Everywhere but Untapped in Web Search
Public debate on the internet is vast. People argue about climate policy, healthcare, technology regulation, and countless other topics across news articles, opinion pieces, forums, and dedicated debating platforms. But the argumentation threads are scattered, unstructured, and variable in quality. Some arguments are carefully reasoned and well-supported; others rely on logical fallacies or present only one side of an issue.
Project AKASE addresses these challenges by developing a computational framework for extracting, organizing, and presenting argumentative content from the web in a coherent, scalable, and actionable way.
The Approach: Mapping the Structure of Public Debate
The AKASE team’s approach begins with a simple question: what are people actually arguing about? To answer it, they collected nearly 30,000 arguments from five online debating platforms and used a combination of advanced text embeddings and clustering algorithms to identify the distinct “issues” – the specific questions or sub-problems – that these arguments revolve around. After removing duplicates and merging near-identical formulations using large language models, they arrived at a set of roughly 16,000 unique issues, organised into 16 thematic domains ranging from politics and technology to health and ethics.
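A minimal sketch of this issue-discovery step, with stdlib string similarity standing in (as an assumption, much cruder than the project's neural embeddings and LLM-based merging) so the example stays self-contained:

```python
# Toy sketch: merge near-identical argument formulations into shared "issues".
# The real pipeline uses text embeddings, clustering, and LLM-based merging;
# difflib's string similarity is an illustrative stand-in only.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

arguments = [  # invented examples, not taken from the AKASE data
    "A carbon tax is the most efficient way to cut emissions.",
    "A carbon tax is the most efficient way to reduce emissions.",
    "Medical AI must pass independent audits before clinical use.",
    "Medical AI should pass an independent audit before clinical use.",
    "School uniforms reduce peer pressure among students.",
]

issues = []  # each issue is a list of near-identical formulations
for arg in arguments:
    for issue in issues:
        if similarity(arg, issue[0]) > 0.8:  # threshold is an assumption
            issue.append(arg)
            break
    else:
        issues.append([arg])

print(f"{len(arguments)} arguments merged into {len(issues)} issues")
```

Greedy first-match grouping keeps the sketch short; at the project's scale (tens of thousands of arguments) a proper clustering algorithm over dense embeddings is needed instead.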
But structured debating platforms represent only a fraction of online argumentation. Most arguments exist in ordinary web pages – news articles, opinion columns, policy documents – where they are expressed in natural language without explicit labels. To capture this unstructured content, the AKASE team developed an automated pipeline that reads web documents, identifies which sentences are argumentative, classifies them as claims or supporting premises, and determines the relationships between them.
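The shape of such a pipeline can be sketched as follows, with simple keyword heuristics standing in for the trained models the project actually uses (all cue words and the relation rule are illustrative assumptions):

```python
# Sketch of an argument-mining pipeline: split a document into sentences,
# keep the argumentative ones, label each as claim or premise, and link
# premises to claims. Keyword cues replace the project's trained models.
import re

CLAIM_CUES = ("should", "must", "ought to")
PREMISE_CUES = ("because", "since", "evidence")

def mine_arguments(document: str) -> list:
    units = []
    for sentence in re.split(r"(?<=[.!?])\s+", document.strip()):
        lower = sentence.lower()
        if any(cue in lower for cue in CLAIM_CUES):
            units.append({"text": sentence, "role": "claim"})
        elif any(cue in lower for cue in PREMISE_CUES):
            units.append({"text": sentence, "role": "premise"})
        # sentences matching no cue are treated as non-argumentative
    # naive relation step: attach each premise to the most recent claim
    last_claim = None
    for unit in units:
        if unit["role"] == "claim":
            last_claim = unit["text"]
        else:
            unit["supports"] = last_claim
    return units

doc = ("AI regulation is a hot topic. Governments should audit high-risk AI. "
       "Because opaque systems can cause real harm. The weather was nice.")
units = mine_arguments(doc)
for unit in units:
    print(unit)
```

Here the third sentence is recognised as a premise and linked to the claim before it, while the off-topic final sentence is discarded – the same claim/premise/relation structure the AKASE pipeline produces, just with far weaker detectors.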
The team went further by enriching these arguments with two additional layers of analysis. First, they annotated arguments with the human values they express – freedom, equality, security, and so on – capturing not just what people argue but the moral commitments that underpin their reasoning. Second, they developed methods for assessing argument quality: a system that generates probing critical questions to test an argument’s assumptions, and a multi-agent framework where multiple AI models deliberate with each other to detect logical fallacies.
The Result: A Knowledge Graph of Argumentation
All of this analysis feeds into the project’s central artefact: the Argumentation Knowledge Graph, or AKG. This is a large, interconnected data structure that links topics, issues, claims, and premises across thousands of documents. It captures the logical and rhetorical relationships between argumentative units – which claims support each other, which ones conflict, and which are essentially making the same point in different words.
Starting from an initial set of around 50,000 documents retrieved from the Open Web Index, the team extracted nearly half a million argumentative units and identified millions of relationships between them. A second processing phase expanded the data source to over 105 million documents. The resulting graph contains tens of thousands of interconnected nodes, with over 90 per cent belonging to a single connected component – meaning that you can navigate from virtually any argument to any other through a chain of related reasoning.
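That connectivity claim corresponds to a standard graph computation – find the largest connected component and compare its size to the whole graph. A tiny sketch with invented node IDs (the real AKG has tens of thousands of nodes and different relation labels):

```python
# Sketch: measure how much of an argument graph sits in one connected
# component, via breadth-first search. Edge data is illustrative only.
from collections import defaultdict, deque

edges = [  # (unit, unit, relation) -- hypothetical, not from the AKG
    ("claim:1", "premise:1", "supported_by"),
    ("claim:1", "claim:2", "attacks"),
    ("claim:2", "premise:2", "supported_by"),
    ("claim:3", "premise:3", "supported_by"),  # a small disconnected island
]

adjacency = defaultdict(set)
for a, b, _ in edges:  # treat relations as undirected for connectivity
    adjacency[a].add(b)
    adjacency[b].add(a)

def component(start):
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in adjacency[node] - seen:
            seen.add(neighbour)
            queue.append(neighbour)
    return seen

nodes = set(adjacency)
largest = max((component(n) for n in nodes), key=len)
print(f"largest component covers {len(largest) / len(nodes):.0%} of nodes")
```

In this toy graph the largest component covers four of six nodes; in the AKG the same measurement comes out above 90 per cent.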
Two Applications: Smarter Search and Structured Deliberation
The AKASE team translated this knowledge graph into two practical tools. The first is an argument-aware search engine. When you submit a query, the system retrieves relevant documents as a conventional search engine would – but then it reranks them based on the argumentative quality of each document, judged on three criteria:
- how well claims are justified
- how coherently the argument is structured
- whether the document presents a balanced range of perspectives rather than a one-sided view
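The rerank step can be pictured as combining those three criteria into a single quality score. In this sketch the scores, equal weights, and document names are all assumptions for illustration; the project's actual scoring model may combine the criteria differently:

```python
# Sketch: rerank retrieved documents by a weighted combination of the
# three argumentative-quality criteria. Weights and scores are invented.
WEIGHTS = {"justification": 1 / 3, "coherence": 1 / 3, "balance": 1 / 3}

results = [  # hypothetical documents in relevance-only order
    {"url": "a.example", "justification": 0.4, "coherence": 0.5, "balance": 0.2},
    {"url": "b.example", "justification": 0.9, "coherence": 0.8, "balance": 0.7},
    {"url": "c.example", "justification": 0.6, "coherence": 0.7, "balance": 0.9},
]

def quality(doc: dict) -> float:
    return sum(doc[criterion] * w for criterion, w in WEIGHTS.items())

reranked = sorted(results, key=quality, reverse=True)
print([doc["url"] for doc in reranked])
```

The key design point survives the simplification: relevance decides what enters the candidate pool, but argumentative quality decides the final order.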
The system also generates a concise summary of the top results and suggests related issues from the knowledge graph, helping users explore the broader landscape of debate around their query.
The second tool is ArgsBase, a multi-agent deliberation platform. ArgsBase creates a structured discussion involving multiple AI agents, a human user, and a moderator. The AI agents contribute arguments, counterarguments, and refinements; the moderator manages the flow of discussion; and a real-time analyser tracks the evolving state of the debate, producing summaries and visual argument maps. In an initial user study, participants found this multi-agent format more useful than interacting with a single AI, precisely because the diversity of perspectives and the structured format encouraged deeper thinking.
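The turn structure described above – agents contributing in rounds, a moderator framing the issue, an analyser tracking state – can be sketched as a simple loop. Agent behaviour is stubbed with canned strings here; the real platform uses LLM agents and a live human participant:

```python
# Rough sketch of a moderated multi-agent deliberation loop. Everything
# below is a stub: real ArgsBase agents are LLM-backed and a human takes
# part in the discussion.
def make_agent(name, stance):
    def agent(transcript):
        # a real agent would condition on the transcript via an LLM call
        return f"{name} ({stance}): my view given {len(transcript)} prior turns"
    return agent

agents = [make_agent("Agent A", "pro"), make_agent("Agent B", "contra")]

def deliberate(issue, rounds=2):
    transcript = [f"Moderator: today's issue is '{issue}'"]
    for _ in range(rounds):
        for agent in agents:
            transcript.append(agent(transcript))
        # analyser step: summarise the evolving state of the debate
        transcript.append(f"Analyser: {len(transcript)} turns so far")
    return transcript

for turn in deliberate("Should AI be regulated?"):
    print(turn)
```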
Why It Matters
Today’s information environment does not lack arguments – it lacks tools for navigating them. By building a computational infrastructure that can extract, organise, evaluate, and present argumentative content from the open web, AKASE offers a different model of information access: one where the quality of reasoning is a first-class signal, not an afterthought.
The ArgsBase platform, in particular, points toward an intriguing future for human–AI interaction. Rather than using AI as an oracle that delivers answers, it positions AI models as participants in a structured reasoning process – one where disagreement is productive, perspectives are made explicit, and the human user remains an active agent rather than a passive recipient. This is a model of AI-assisted thinking that takes critical reasoning seriously.
What’s Next
The AKASE team has identified several directions for future work: expanding the knowledge graph dynamically as new arguments emerge on the web, incorporating multi-modal content (not just text), and refining the deliberation platform through more extensive user studies focused on practical decision-making scenarios. The argument-aware search engine will also benefit from reduced latency and broader domain coverage.
The full technical report is available at https://zenodo.org/records/17674255
The AKASE project was funded under the OpenWebSearch.EU project (Horizon Europe, Grant Agreement 101070014, Call #2).