Talk
Wednesday, May 10 | 12:00 PM
Objective, human, and machine assessments of confidence in research claims
Abstract

Assessing the credibility of research claims is a central, continuous, and laborious part of the scientific process that is essential for maintaining public trust in science. The Systematizing Confidence in Open Research and Evidence (SCORE) program was designed to address this challenge by developing and evaluating algorithms as a rapid, scalable, and valid method for assessing confidence in research claims. To achieve this aim, a corpus of social and behavioral science papers from more than 60 journals over a 10-year period was used to extract claims for further assessment by human judgment and algorithmic approaches. From this corpus, a subset of claims was investigated for objective evidence of credibility – specifically, process reproducibility (availability of data and code), reproducibility (reanalysis using the original data and analytical strategy), and replicability (applying the same analytical strategy to new data) – with replicability serving as the ground truth for human assessment within the program. Preliminary results and future directions will be shared during the talk.