Overview

Wikidata is an open knowledge base hosted by the Wikimedia Foundation that can be read and edited by both humans and machines. It serves as the central repository for structured data used by Wikipedia, Wiktionary, Wikisource, and many other projects, and has become an indispensable resource in both academic research and industrial applications.
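As an illustration of machine readability (not part of the call itself), Wikidata can be queried programmatically through the public Wikidata Query Service SPARQL endpoint. The sketch below only constructs the request URL, using the standard endpoint and two well-known identifiers (P31 "instance of", Q146 "house cat"); sending the request additionally requires a descriptive User-Agent header per Wikimedia's API etiquette.

```python
from urllib.parse import urlencode

# Public Wikidata Query Service endpoint.
WDQS_ENDPOINT = "https://query.wikidata.org/sparql"

def build_query_url(sparql: str) -> str:
    """Build a GET URL for the Wikidata Query Service, requesting JSON results."""
    return WDQS_ENDPOINT + "?" + urlencode({"query": sparql, "format": "json"})

# Example query: English labels of five items that are instances of (P31) cat (Q146).
query = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

url = build_query_url(query)
# `url` can now be fetched with any HTTP client; here we only construct it.
```

The same endpoint backs many of the tools and benchmarks discussed at the workshop.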


In recent years, the growing body of scholarly publications and practical innovations surrounding Wikidata has underscored its evolving role as the backbone of open structured data. Previous workshops have addressed challenges such as data quality, multilingual contributions, community dynamics, the evolution of collaborative knowledge graphs, and the intersection of Wikidata and GenAI.


The Wikidata Workshop 2026 continues this discussion while reflecting the rapid advances in artificial intelligence. In particular, this edition places special emphasis on the evolving relationship between Wikidata and recent developments in Generative AI. As these technologies increasingly rely on structured knowledge for grounding, verification, and reasoning, Wikidata offers a unique opportunity to explore new forms of interaction between collaborative knowledge graphs and modern AI systems.


At the same time, the workshop remains open to the broad range of research and applications related to Wikidata and collaborative knowledge graphs. We welcome contributions that investigate the use, development, and impact of Wikidata across different domains, including both academic research and real-world applications.


This workshop brings together everyone working with Wikidata, in both academia and industry, to discuss trends and topics around this collaborative knowledge graph.

Call for Papers

Topics

We invite researchers from all domains to show the importance of Wikidata in their fields. Topics of interest include, but are not limited to:

Core Wikidata Research
  1. Data Modeling and Ontologies: Approaches for schema design, ontology usage, and knowledge modeling in Wikidata.
  2. Knowledge Graph Construction and Enrichment: Methods for building, extending, and completing Wikidata knowledge graphs.
  3. Entity Linking and Data Integration: Entity resolution, record linkage, and alignment with external datasets.
  4. Data Quality and Validation: Techniques for detecting inconsistencies, enforcing constraints, and improving Wikidata reliability.
  5. Evolution of Collaborative Knowledge Graphs: Studying maintenance, growth, and evolution of large-scale collaborative knowledge bases.
Querying, Access, and Infrastructure
  1. SPARQL Querying and Optimization: Advances in query performance, benchmarking, and scalable querying over Wikidata.
  2. Federated Queries and Integration: Combining Wikidata with other knowledge graphs and linked data sources.
  3. Tools and APIs: Infrastructure, APIs, and developer tools for accessing and processing Wikidata.
  4. Visualization and Exploration: Interfaces and systems for exploring and analyzing Wikidata knowledge graphs.
  5. Datasets and Benchmarking Resources: Creation of datasets and evaluation benchmarks derived from Wikidata.
Wikidata in Applications
  1. Applications in Science and Humanities: Wikidata-based applications in research, digital humanities, cultural heritage, and education.
  2. Industry Use Cases: Real-world deployments and industrial applications leveraging Wikidata.
  3. Knowledge Integration: Integrating Wikidata with heterogeneous data sources and knowledge graphs.
  4. Open Science Infrastructure: Supporting open science, scholarly metadata, and open data ecosystems using Wikidata.
Wikidata and Artificial Intelligence
  1. Wikidata and Generative AI: Integration of Wikidata with large language models and generative AI systems.
  2. Knowledge Grounding and Fact Verification: Using Wikidata to improve factual accuracy and reduce hallucinations in AI systems.
  3. Retrieval-Augmented Generation (RAG): Leveraging Wikidata in RAG pipelines and knowledge-grounded AI systems.
  4. AI Skills, Tools, and MCP Integration: Developing AI skills and integrating Wikidata through protocols such as Model Context Protocol (MCP).
  5. AI Agents and Knowledge Graph Interaction: Intelligent agents retrieving, reasoning over, or contributing to Wikidata.
Responsible and Trustworthy Knowledge Graphs
  1. Bias and Representation: Investigating bias, fairness, and representation in Wikidata.
  2. Ethical AI-Assisted Knowledge Curation: Ethical considerations of using AI to generate or curate knowledge graph content.
  3. Explainability and Transparency: Using knowledge graphs to improve interpretability of AI systems.
  4. Governance and Community Practices: Sustainability, governance, and collaboration in open knowledge communities.

Tracks

This workshop will have two tracks: Novel Work and Previously Published Work.

Papers in the Novel Work track will be published as part of the workshop proceedings. The Previously Published Work track is for papers already published at other venues, giving the community the chance to access and discuss, as part of the workshop, relevant work that has been presented elsewhere.


Novel Work Track

Papers will be single-blind peer-reviewed by at least two researchers. Selected papers will be published in the CEUR Workshop Proceedings (unless authors wish to opt out).

For the Novel Work track, we will accept papers up to 12 pages (excluding references; the contribution should justify the paper's length). We invite the following types of papers:

Novel research contributions (8-12 pages)
Novel research contributions of smaller scope than full papers (3-5 pages)
Papers presenting a new dataset or other resource, including the publication of that resource (8-12 pages)
Papers presenting the usage of a research concept (6-8 pages)
Papers presenting a system based on research concepts (6-8 pages)

Previously Published Work Track

Published papers will be reviewed by the organising committee in terms of topical fit and prominence of the publication venue. They will not be published as part of the proceedings.

For the Previously Published Work track, we will accept papers with no page limit, prioritizing instead the importance and relevance of the publication. We invite the following types of papers:

Previously published full papers
Previously published datasets or other resources that are important or interesting to the community
Presenting a previously published paper on the usage of a research concept
Presenting a previously published system based on research concepts

Submission

We ask authors to declare the track they intend to submit to. To do so, please add, at the beginning of the "title" field of the submission, either the string "[Novel]", for the Novel Work track, or the string "[Published]", for the Previously Published Work track.

Submission Link: https://easychair.org/conferences/?conf=wikidata26


Important Dates (all deadlines are 23:59 AoE)

Papers due: Thursday, 24 July 2026

Notification of accepted papers: Thursday, 21 August 2026

Camera ready papers due: Thursday, 18 September 2026

Workshop date: October 2026, in Bari, Italy


Submission Guidelines

Submissions must be in PDF and in English. Papers for the [Novel] track must be formatted in the CEURART style by CEUR-WS. For more information on using the CEURART style (single column), please visit the author guidelines.

For the [Published] track, no reformatting of the original PDFs is needed.


Important Info: Be aware of the CEUR policies: (1) a Declaration on Generative AI is mandatory at the end of the paper; (2) the ORCID iDs of all authors must be included.

Schedule Detail

Coming soon...

Our Speakers

Coming soon...

Sessions / Papers

Coming soon...

Location

Co-located with ISWC 2026

In Bari, Italy, in-person

Image: The coastal walkway along Lungomare Imperatore Augusto, at Bari Italy.jpg, CC-BY-SA 4.0

Sponsors

Coming soon...