Wednesday, July 18, 2018

Pagination of physical specimens and CSV downloads

I have made some minor modifications to the coin type pages in OCRE, CRRO, etc. A relatively small number of types across these corpora have more than 100 total specimens, but, more importantly, a very small handful of types are linked to several hundred or even more than 1,000(!) physical specimens. For example, RIC 10 Honorius 1228 is associated with 2,396 physical specimens, nearly all of which are in the British Museum (presumably from one or more hoards). This is the single largest number of specimens associated with a coin type. In these extreme cases, the amount of data to load into one HTML page is simply too great, causing the browser to overload and run out of memory.

To mitigate this issue, I have introduced pagination. The number of results per page can be set in the Numishare config, but the default is 48 (16 rows of 3 columns). The page is set by a page request parameter, which is converted into a proper offset in the underlying SPARQL query. The pagination buttons are therefore crawlable by robots, since each hyperlink resolves to a distinct URL (so no AJAX here).
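The page-to-offset conversion can be sketched as follows. This is an illustrative Python sketch, not Numishare's actual implementation, and the query body (using nmo:hasTypeSeriesItem from the Nomisma ontology) stands in for whatever the production query actually looks like.

```python
# Hypothetical sketch: converting a ?page request parameter into
# LIMIT/OFFSET for the specimen SPARQL query.
LIMIT = 48  # default results per page, configurable in Numishare

def specimen_query(coin_type_uri: str, page: int = 1) -> str:
    """Build a paginated SPARQL query for specimens of a coin type."""
    offset = (max(page, 1) - 1) * LIMIT
    return f"""
PREFIX nmo: <http://nomisma.org/ontology#>
SELECT ?object WHERE {{
  ?object nmo:hasTypeSeriesItem <{coin_type_uri}> .
}}
ORDER BY ?object
LIMIT {LIMIT} OFFSET {offset}
"""

# Page 3 of a heavily-attested type, e.g. RIC 10 Honorius 1228:
print(specimen_query("http://numismatics.org/ocre/id/ric.10.hon.1228", page=3))
```

Because each page is just a different offset baked into an ordinary hyperlink, a crawler following the pagination buttons sees plain, bookmarkable URLs.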

Nearly 400 coin type URIs (out of roughly 55,000-60,000 total Hellenistic and Roman coin types across all projects) have more than 48 specimens, and pagination controls will appear for these. About 100 types have more than 100 specimens, and 5 have more than 1,000.

In addition, when physical specimens are present, the user can click to download a CSV file of the metadata about those specimens. It is generated by the same basic query that populates the HTML page and includes URIs for each object, title, measurement data, URLs to images or IIIF services, findspot/hoard data, and source collection/dataset. This should make it easier to use coin type and specimen data for analysis in R or other platforms.
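Flattening the SPARQL results into CSV might look something like the sketch below. The field names here are hypothetical, not Numishare's exact column headings.

```python
# Illustrative sketch: flattening specimen metadata rows (as returned by a
# SPARQL SELECT) into a CSV of the kind described above.
import csv
import io

# Hypothetical column set; the real export's headings may differ.
FIELDS = ["uri", "title", "weight", "diameter", "image", "findspot", "collection"]

def specimens_to_csv(rows: list) -> str:
    """Serialize a list of per-specimen dicts to CSV text, leaving blanks
    for fields a given specimen lacks."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS, restval="")
    writer.writeheader()
    for row in rows:
        writer.writerow({k: row.get(k, "") for k in FIELDS})
    return buf.getvalue()

csv_text = specimens_to_csv([
    {"uri": "http://example.org/coin/1", "title": "Solidus of Honorius", "weight": "4.45"},
])
```

A file in this shape loads directly into R with read.csv() or into pandas, which is the point of offering the download.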

Friday, April 27, 2018

OpenRefine workshop materials for ECFN/Nomisma

Next week is the 7th annual European Coin Find Network and Nomisma meeting in Valencia. I'll be guiding two brief, 30-minute introductory workshops on OpenRefine, aimed at cleaning numismatic data and linking to Roman imperial coin type URIs defined in OCRE. I plan to write up the steps in a tutorial at some point, but the test materials can be accessed here:
Expect updates to this post when the workflow I intend to show in the workshop is codified into a written tutorial.

Improving OCRE OpenRefine reconciliation with regex

I have made a slight update to improve the matching of OCRE coin types through the Numishare type-based OpenRefine reconciliation API. The reconciliation API queries the "title" as indexed as a text field in Solr, which, as detailed in a previous blog post, functions most accurately when you reduce your reconciliation column down to the RIC number and use authority/mint/denomination as additional properties.

This would miss a lot of potential attributions of numbered subtypes that were never given parent type URIs in OCRE. Hadrianic types offer some examples: the British Museum has assigned the type number '14', but OCRE has no Hadrian 14, only 14a and 14c. The API update appends the following regex to the title field Solr search: '(\(?[a-zA-Z]\)?)?', resulting in the query "title_text:/14(\(?[a-zA-Z]\)?)?/". This looks for an optional single upper- or lower-case letter, which may itself be enclosed in parentheses.
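The matching behavior of that suffix can be demonstrated with Python's re module (using fullmatch to mimic the anchored matching of Solr's /.../ regex syntax):

```python
import re

# The optional-subtype-letter suffix appended by the API, applied to "14".
pattern = re.compile(r"14(\(?[a-zA-Z]\)?)?")

# "14" alone, "14a", "14c", and a parenthesized "14(b)" all match;
# "145" does not, because "5" is not a letter.
for candidate in ["14", "14a", "14c", "14(b)", "145"]:
    print(candidate, bool(pattern.fullmatch(candidate)))
```

So a source record citing bare "14" now surfaces Hadrian 14a and 14c as candidate matches instead of returning nothing.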

When running the API against more than 2,000 coins of Hadrian from Rome in the British Museum, about 500 had a 100% automatic match, and another 1,500 yielded two or more potential matches. Before this regex tweak, a significant portion of the 1,500 coins that didn't automatically match had no suggestions at all, and therefore required manually typing into the "Search for Match" autosuggest function.

Friday, April 20, 2018

New updates from KENOM, Münzkabinett Berlin

Two new collection URIs have been minted in Nomisma for the KENOM project, and the OAI-PMH feed from KENOM has been re-harvested. Now there are more than 7,500 coins (and medals) contributed from that project, including 1 medal from Munich and 4 from Moritzburg for Art of Devastation, a corpus of World War I medals. These medals represent the first partner contributions to AoD, which to this point has consisted only of the American Numismatic Society's own collection.

The updates include 3 coins from Munich for Seleucid Coins Online.

Furthermore, a new update has been run on Berlin's contribution to Online Coins of the Roman Empire, which now includes some coins of the Gallic Empire. With these most recent updates from Berlin and KENOM, there are now 114,136 coins in OCRE.

Thursday, February 1, 2018

A Closer Look at ResearchSpace

Following on my earlier post about the British Museum's URIs getting killed by the deployment of ResearchSpace into production...

The first glimpse I saw of ResearchSpace was at the Linked Ancient World Data Institute in May 2012. We are fast approaching six years since that first demo, and the project must have existed for at least a year before that. So we are at 6-7 years of development, with $1-2 million from the Mellon Foundation, for an application that looks wholly underdeveloped compared to many cultural heritage platforms I've seen that were built in a fraction of the time with a fraction of the money.
Let's take a look, shall we?

Um, okay. Not the easiest starting point for a query interface of a museum collection. The interface eschews keyword search--this is a feature, not a bug, mind you.

Essentially, you can only query by making relationships between different categories of information. If I just want to see the coins of Antioch, I have to sort my way through several iterations of expressing a query as triples. Certainly there is some potential in querying this way, but the problem is that this isn't how the vast majority of the public (general public or researchers interested in a specific portion of the collection) expects or wants to interact with the collection through a UI. Libraries, archives, and museums have been implementing faceted search of their collections (based on Solr, Elasticsearch, etc.) for over a decade now. There are facets in ResearchSpace, although they are not prominently displayed (click on the left to show them via JavaScript). Not all of the facets are terribly useful (some, regarding production events, map 1:1 to the coin itself), and you do have to have some idea of how relationships are expressed between objects and concepts. I should also note that I'm skeptical that SPARQL-generated dynamic facets will be able to bear the load of production usage.

Okay, I got Things from Antioch. How to narrow? Add another query parameter. How about a Thing that is a Concept. What concept? Well, I have to select the relationship between the Thing and the Concept: "has type." It's useful to be familiar with CIDOC CRM data modeling before you use the interface. Then I select "coin."

Oh. 0 results? There were definitely coins on the previous page.

As it turns out, I really wanted things from Antiochia ad Orontem, but I also need to know that the emperor is expressed by the "refers to" property. "Refers to" appears twice, so you have to select the one with the person icon. There are times when the top-level filters conflict or overlap with the left-hand facets.

Coins of Antioch

User interface issues aside--certainly there is room for improvement here--the larger issue is the time frame and money it took to arrive at this product. Since it was funded by the Mellon Foundation, it would seem that both the data and the code should be open source, but the ResearchSpace code has never been opened, and it is therefore presently impossible to test, critique, or contribute back in order to make the platform better.

Aside from the fact that URIs don't dereference (failing the primary requirement of a LOD system), the UI is entirely driven by AJAX, making it complicated to paginate (clicking on a coin and then clicking back in the browser wipes out your facets and your page number) and impossible for a robot to crawl the collection, thus reducing access to the public who might happen upon museum objects through search engines. At the ANS, about 70% of all visitors to our Library, Archive, and Museum platforms come through search engines.

Even if you go to , what is actually there? A curl request yields the same basic HTML template. There's actually no useful information for a machine to extract--not even human-readable versions of the data--and certainly no RDFa or other types of microdata.

Content negotiation?

curl -H "Accept: application/rdf+xml"

No response. No HTML header metadata pointing to alternative serializations that can be requested by URLs. The whole system is antithetical to modern design--not just within cultural heritage, but everywhere.
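A well-behaved Linked Open Data endpoint should honor exactly this kind of request. A minimal sketch of the test, using Python's standard library (the URI is a placeholder, not a real ResearchSpace address):

```python
# Minimal sketch of the content-negotiation test described above.
from urllib.request import Request, urlopen

def negotiate(uri: str, accept: str = "application/rdf+xml"):
    """Request a URI with an Accept header. A LOD-friendly server should
    return RDF (or 303-redirect to it) rather than a generic HTML shell."""
    req = Request(uri, headers={"Accept": accept})
    with urlopen(req, timeout=10) as resp:
        return resp.headers.get("Content-Type"), resp.read()

# Against a compliant endpoint, the returned Content-Type would be
# application/rdf+xml; against ResearchSpace it is the same HTML template.
# content_type, body = negotiate("http://example.org/object/1")
```

Alternatively, a <link rel="alternate" type="application/rdf+xml" ...> element in the HTML head would at least point machines to a serialization, but neither mechanism is present.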

ResearchSpace project managers have traveled from conference to conference in digital cultural heritage over the years, talking about the system and its advanced functionality, but the product we are seeing now falls extremely short of the hype. The problem has been in its management. It seems, from the outside, that there were never any achievable goals tied to a concrete timeline. The ambition for the project grew to include IIIF, 3D annotation, and a host of other useful features. ResearchSpace shot for the moon for its first public release, when the project could have been released as a simpler framework years ago, with these new features added through iterative development. With the code stored in GitHub, they might have been able to solicit feedback from the cultural heritage community. Is ResearchSpace writing its own IIIF viewer? WebGL viewer? Is it using other open source libraries? Who knows.

There are some cool features in the system--for example, distribution analysis of categories based on your queried subset.

Material distribution of Antiochene coins of Elagabalus

So there's obviously potential. But between the high cost, the long duration of development, and the application architecture itself, with its walk back from stable URIs and REST, at what point is it safe to question whether or not the project has been a success? (Since it technically exists on the web now, I suppose it's no longer vaporware.) Did the British Museum lay off Dan Pett and put all of its eggs into the ResearchSpace basket for its future online collection database? It's really a crying shame: the BM employed one of the top 10 thinkers and doers in Digital Humanities in the entire world--the one person in their organization with the experience and creativity that might have been able to salvage the BM online collection and do something truly revolutionary with it (as he had done with the Portable Antiquities Scheme). It's a completely squandered opportunity, and the BM has done much to destroy the reputation it had gained as a leader within the museum Open Access and Open Data movement.

On stable URIs at the British Museum

Update, 2 February 2018: The previous iteration of (Metaphacts) has been restored, and the collection object URIs now redirect once again to the Metaphacts framework. Earlier critiques of the system design still stand, relating to dereferenceable URIs, curl, content negotiation, etc.

There are 61,853 coins from the British Museum integrated into Nomisma's SPARQL endpoint and made available through type corpora such as OCRE and CRRO. The BM is the single largest contributor of numismatic data, providing about 3,000 more coins than the American Numismatic Society itself. With the aid of its highly talented and collaborative curators in the Coins and Medals Department, the British Museum's contributions to this research ecosystem have been transformative for the discipline, and the BM has played a vital role in demonstrating that Linked Open Data methodologies make the whole greater than the sum of its parts.

This morning, all links have died.

And not died in the way that they have died when the oft-neglected, but extremely valuable (even though it had some obvious data modeling problems) British Museum SPARQL endpoint has gone down. Now they are 404s. Dead. Gone. Not the spinning circle you get when a server application runs out of memory. We've all known that there's some wonky and inefficient CIDOC-CRM modeling, and despite claims from ResearchSpace project managers, the British Museum data were never 5-star Linked Open Data, because they never linked externally. But stable, clean URIs are the number one requirement for LOD architecture.

And so ResearchSpace has managed to kill their own URIs when transitioning to the public version of the software. I was assured many years ago that these were the permanent URIs for objects.

So the URI for this coin of Augustus is dead:

However, you can still access the data in the new ResearchSpace system at

It should be noted: The BM implemented https://, effectively changing the URIs of its objects, but the URIs within the underlying graph database/SPARQL endpoint are still http://.

But you shouldn't have to negotiate the ResearchSpace framework with an unclean, application-specific request parameter to extract the data. At the very least, create a proxy that allows for the resolution of the URI into human- and machine-readable data, as per Linked Open Data principles. Or a semantic 303 redirect? Anything but straight-up killing millions of URIs. This betrays a serious deficiency in understanding how to develop web applications.

Wednesday, January 17, 2018

Nearly 2,000 Roman Imperial coins from the University of Graz integrated into Nomisma

After several weeks of working with Elisabeth Steiner at the University of Graz, a large portion of the collection of Roman coins at the Institute of Ancient History and Classical Antiquities has been integrated into the SPARQL endpoint and is available in OCRE and CRRO. About 300 Republican coins were initially ingested in October, but the coverage has now been extended by nearly 2,000 coins from the Imperial period. The collection includes images published according to the IIIF specification, which is rapidly becoming the standard API by which new partners make their images available online. Unlike most Nomisma contributions, where intermediary harvesting scripts transform source XML or CSV into Nomisma-compliant RDF, the University of Graz export is a direct serialization of TEI from their Fedora repository into RDF.

An antoninianus of Gordian III at the University of Graz

What's especially notable about this collection is that it was a successful demonstration of the new Numishare and OpenRefine reconciliation APIs for normalizing RIC references to OCRE URIs. The first step was to normalize mints, emperors, and denominations to Nomisma preferred labels, which were then used as additional property search parameters for normalizing the RIC numbers themselves to the relevant OCRE URI.
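The reconciliation request that workflow produces follows the standard OpenRefine reconciliation query shape: the RIC number as the query string, with the normalized labels sent as additional properties. A sketch (the property ids here are illustrative, not necessarily the exact ones Numishare expects):

```python
import json

# Hypothetical OpenRefine reconciliation query batch of the kind used in
# this workflow: RIC number as the query, with authority/mint/denomination
# preferred labels as additional property constraints.
queries = {
    "q0": {
        "query": "140",  # the RIC number from the source record
        "properties": [
            {"pid": "authority", "v": "Gordian III"},
            {"pid": "mint", "v": "Rome"},
            {"pid": "denomination", "v": "Antoninianus"},
        ],
    }
}

# OpenRefine POSTs this batch to the service as a form field named "queries".
payload = {"queries": json.dumps(queries)}
```

Constraining by those three properties is what lets the bare number "140" resolve to a single OCRE URI rather than every RIC 140 across all emperors.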

You can read more at:
These new reconciliation APIs are the topic of my CAA presentation and paper in two months in the tools session.