We spent the day with the CrossRef team in Oxford last week, talking about our recent work in the linked data space (see the Nature ontologies portal) and their latest initiatives in scholarly publishing.
So here are a couple of interesting follow-ups from the meeting. P.S. If you want to know more about CrossRef, make sure you take a look at their website, and in particular the Labs section: http://labs.crossref.org/.
CrossRef is using the open source Lagotto application (developed by PLOS, https://github.com/articlemetrics/lagotto) to retrieve article metrics data from a variety of sources (e.g. Wikipedia, Twitter; see the full list here).
The model used for storing this data follows an agreed ontology that includes, for example, a classification of 'mention' actions (viewed/saved/discussed/recommended/cited; see this paper for more details).
In a nutshell, CrossRef is planning to collect and make available the raw metrics data for all the DOIs they track, in the form of 'DOI events'.
An interesting demo application shows the stream of DOI citations coming from Wikipedia (one of the top referrers of DOIs, unsurprisingly). More discussion in this blog post.
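To make the idea of a 'DOI event' a bit more concrete, here is a minimal sketch of what such a record might look like, using the five 'mention' action classes listed above. The field names and the DOI (CrossRef's well-known 10.5555 test prefix) are purely illustrative, not CrossRef's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# The five 'mention' action classes from the ontology discussed above.
MENTION_ACTIONS = {"viewed", "saved", "discussed", "recommended", "cited"}

@dataclass
class DoiEvent:
    """One 'DOI event': a source (e.g. Wikipedia) mentioning a DOI.

    Field names here are illustrative guesses, not CrossRef's schema.
    """
    doi: str
    source: str            # e.g. "wikipedia", "twitter"
    action: str            # one of MENTION_ACTIONS
    occurred_at: datetime

    def __post_init__(self):
        if self.action not in MENTION_ACTIONS:
            raise ValueError(f"unknown mention action: {self.action}")

# Example: Wikipedia citing a (placeholder, test-prefix) DOI.
event = DoiEvent(
    doi="10.5555/12345678",
    source="wikipedia",
    action="cited",
    occurred_at=datetime(2015, 6, 1, tzinfo=timezone.utc),
)
print(event.source, event.action, event.doi)
```

The point is simply that each event ties together a DOI, a referring source, a timestamp, and one of the agreed mention classes, which is what makes aggregation across heterogeneous sources possible.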
CrossRef has been working with DataCite with the goal of harmonising their databases. DataCite is the second major DOI registrar (after CrossRef) and has been focusing on assigning persistent identifiers to datasets.
This work is now gaining momentum as DataCite enlarges its team, so in theory it won't be long before we see a service that allows users to interlink publications and datasets, which is great news.
http://www.crossref.org/fundref/
FundRef provides a standard way to report funding sources for published scholarly research. This is increasingly becoming a fundamental requirement for all publicly funded research, so several publishers have agreed to help extract funding information and send it to CrossRef.
A recent platform built on top of FundRef is Chorus, http://www.chorusaccess.org/, which enables users to discover articles reporting on funded research. Furthermore, it provides dashboards which can be used by funders, institutions, researchers, publishers, and the public for monitoring and tracking public-access compliance for articles reporting on funded research.
For example see http://dashboard.chorusaccess.org/ahrq#/breakdown
- JSON-LD (a JSON-based serialization of RDF) is being considered as a candidate data format for the next generation of the CrossRef REST API.
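Since JSON-LD is plain JSON plus an `@context` that maps keys to linked-data vocabulary terms, a quick sketch shows why it is attractive for an API: existing JSON clients keep working, while RDF tooling gets semantics for free. The use of schema.org terms and the CrossRef test-prefix DOI below are my own illustrative choices, not the actual shape of any CrossRef response:

```python
import json

# A hypothetical JSON-LD record for an article. The "@context" maps the
# plain keys (name, author, ...) onto schema.org vocabulary terms; to a
# naive client it is just JSON, to an RDF client it is a graph.
record = {
    "@context": "http://schema.org/",
    "@id": "http://dx.doi.org/10.5555/12345678",
    "@type": "ScholarlyArticle",
    "name": "An example article (placeholder metadata)",
    "author": {"@type": "Person", "name": "Josiah Carberry"},
    "datePublished": "2008-08-13",
}

print(json.dumps(record, indent=2))
```

A consumer that knows nothing about RDF can still read `record["name"]`, which is exactly the backwards-compatibility argument for JSON-LD.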
- The prototype http://www.yamz.net/ came up in discussion; a quite interesting 'Stack Overflow meets ontology engineering' kind of tool. Definitely worth a look, I'd say.
- Wikidata (a queryable structured-data version of Wikipedia) seems to be gaining a lot of momentum after taking over Freebase from Google. Will it eventually replace its main rival, DBpedia?