What is DanNet?
DanNet is an open Danish WordNet: a structured lexical database organising Danish words into synonym sets (synsets) linked by semantic relations. It currently contains ~70K synsets covering ~62K words. It was created by the Centre for Language Technology (University of Copenhagen) and Dansk Sprog- og Litteraturselskab, and is available in an interactive version at wordnet.dk. DanNet is designed to be easy to integrate with other lexical data sources.
The DanNet source code is MIT-licensed and available (along with current and past releases of the DanNet datasets) at our GitHub repo.
Accessing the data
Content negotiation
Every RDF resource in DanNet resolves as a dereferenceable URI. You can request different representations using the HTTP Accept header: text/html for the web page, application/ld+json for JSON-LD, and text/turtle for RDF/Turtle. For example:
curl -H "Accept: application/ld+json" https://wordnet.dk/dannet/data/synset-5028
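The same negotiation can be done from Python using only the standard library; a minimal sketch (the helper name is ours, not part of the DanNet API):

```python
import urllib.request

def dannet_request(uri: str, mime_type: str = "application/ld+json") -> urllib.request.Request:
    """Build a request for a DanNet resource, selecting the
    representation via the HTTP Accept header."""
    return urllib.request.Request(uri, headers={"Accept": mime_type})

req = dannet_request("https://wordnet.dk/dannet/data/synset-5028", "text/turtle")
print(req.get_header("Accept"))  # -> text/turtle

# Performing the actual fetch (network call):
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode("utf-8"))
```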
SPARQL endpoint
A public SPARQL endpoint is available for querying. In a browser, it serves an interactive query editor. See the SPARQL guide for an introduction to querying DanNet.
The SPARQL endpoint also accepts programmatic requests via GET (with a query parameter) or POST (with the query as the request body), and supports content negotiation (including application/sparql-results+json). Additional query parameters are limit, offset, timeout, inference, and distinct. However, result set size and query execution time are restricted, so more resource-intensive queries are better run against a local copy of the dataset in your own RDF graph.
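As a sketch, a programmatic GET request can be built like this in Python. Note that the endpoint URL below is an assumption for illustration; use the address given in the SPARQL guide:

```python
from urllib.parse import urlencode

# NOTE: hypothetical endpoint URL -- check the SPARQL guide for the real one.
endpoint = "https://wordnet.dk/dannet/query"

params = {
    "query": "SELECT * WHERE { ?s ?p ?o } LIMIT 5",
    "timeout": 10000,  # one of the additional query parameters listed above
}
url = endpoint + "?" + urlencode(params)

# Request JSON results via content negotiation when performing the GET.
headers = {"Accept": "application/sparql-results+json"}
print(url)
```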
WN-LMF + Python
The WN-LMF format can be used with the wn Python library:
import wn

# Load the downloaded WN-LMF release into wn's local database
wn.add("dannet-wn-lmf.xml.gz")

# Look up all synsets for the lemma 'kage' ("cake")
for synset in wn.synsets('kage'):
    print((synset.lexfile() or "?") + ": " + (synset.definition() or "?"))
NOTE: WN-LMF only includes official GWA relations; DanNet-specific relations (such as used_for) are only available in the RDF format.
AI integration
An MCP server is available for integration with AI/LLM tools such as Claude and ChatGPT. The MCP server URL is https://wordnet.dk/mcp. It provides direct access to the DanNet API and returns results as JSON-LD which the LLM can interpret using the provided schemas.
Datasets
DanNet is available in a handful of different datasets formatted as either RDF/Turtle, CSV, or WN-LMF XML. All datasets are published under the CC BY-SA 4.0 license. You can download them here. All releases are also available on the GitHub releases page.
The RDF dataset is the canonical version. It can be imported into any modern RDF triplestore and queried with SPARQL.
Standards and integrations
DanNet is modelled using the Ontolex-lemon standard with additions from the Global WordNet Association, making it interoperable with other WordNets and linked data resources. It integrates directly with COR (Det Centrale Ordregister, a Danish word registry), DDS (Det Danske Sentimentleksikon, a Danish sentiment lexicon), and the Open English WordNet.
Tech stack
The DanNet codebase is an MIT-licensed Clojure/ClojureScript project hosted on GitHub, built on top of the Apache Jena RDF triplestore. The web app is a Pedestal-based backend with a Rum (React) frontend and works both as a server-rendered site and a single-page application.
Documentation
Additional developer-oriented documentation:
- SPARQL guide – a hands-on introduction to querying DanNet with SPARQL
- Querying DanNet – SPARQL, Aristotle DSL, and graph traversal
- Sense/synset label format
- Design rationale