Abstract
The Institution for Social and Policy Studies (ISPS) at Yale developed policies and workflows to optimize the long-term computational reproducibility of scholarship produced by its faculty. ISPS reviews data and code (i.e., research compendia) and generates reports indicating the status of verification of—or any issues with—computationally reproducing reported findings. We analyze reports generated between 2020 and 2022 to explore the rate of computational reproducibility, the reasons for irreproducibility, and any observable patterns of irreproducibility. We find that the majority of research compendia had issues that prevented the code from executing in the first place, discrepancies between the reported results and the code output once the code was run, or both. Importantly, we find that three-quarters of these research compendia had already been deposited by authors in journal-approved repositories by the time ISPS conducted its review. We discuss implications for authors, journals, institutions, and the quality of the scholarly record.