Abstract
One of the most difficult aspects of developing matching systems – whether for matching ontologies or for other types of mismatched
data – is evaluation. The accuracy of matchers is usually evaluated by measuring the results produced by the systems against reference sets, but gold-standard reference sets are expensive and difficult to create. In this paper we introduce crptr, which generates multiple variations of different sorts of datasets, where the degree of variation is controlled, so that they can be used to evaluate matchers in different contexts.
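To illustrate the idea of controlled variation, the sketch below generates synthetic variants of a record by corrupting each character with a tunable probability. This is a minimal, hypothetical typo model written for this note – it is not crptr's actual API or corruption model, which supports richer, configurable corruption methods.

```python
import random

def corrupt(text: str, rate: float, rng: random.Random) -> str:
    """Return a copy of `text` in which each character is replaced,
    with probability `rate`, by a random lowercase letter.
    A rate of 0.0 leaves the record unchanged; 1.0 corrupts every character."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    return "".join(
        rng.choice(letters) if rng.random() < rate else ch
        for ch in text
    )

# Produce variants of one record at increasing corruption levels;
# a matcher can then be evaluated against the known original.
rng = random.Random(42)
record = "Auckland, New Zealand"
variants = [corrupt(record, rate, rng) for rate in (0.0, 0.1, 0.3)]
```

Because the corruption is applied to records whose originals are known, the generated pairs serve as a cheap stand-in for a gold-standard reference set when evaluating matcher accuracy.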
Original language | English |
---|---|
Pages (from-to) | 41-45 |
Number of pages | 5 |
Journal | CEUR Workshop Proceedings |
Volume | 2536 |
Publication status | Published - 16 Jan 2020 |
Event | 14th International Workshop on Ontology Matching 2019 - Auckland, New Zealand Duration: 26 Oct 2019 → 26 Oct 2019 |