Summary of ACM REP 2023

The inaugural ACM Conference on Reproducibility and Replicability was held on 27-29 June 2023 in a wonderful space, the Cowell Hay Barn on the campus of the University of California, Santa Cruz. The program included three keynotes, several sessions exploring topics in reproducibility and computer science, and a day of hands-on workshops. The conference proceedings are available from the ACM Digital Library.

The conference opened with introductory remarks by John MacMillan, Vice Chancellor for Research at UC Santa Cruz. MacMillan welcomed the participants and discussed the importance of open source software in reproducibility, mentioning the pioneering work of the UCSC Open Source Program Office (OSPO). The first keynote, Reproducing Performance - The Good, the Bad, and the Ugly, was delivered by Torsten Hoefler (ETH Zurich). Hoefler gave an overview of the “reproducibility crisis” and noted that computer science as a discipline is doing quite well, though some areas, such as machine learning, fare better than others. In the area of performance, Hoefler pointed out several issues that make reproducibility difficult to assess. He called for a focus on interpretability and for ensuring that researchers in this area have a good grip on basic statistics, so that known issues with speedup plots can be addressed.
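One well-known statistical pitfall with speedup plots, of the kind Hoefler alluded to, is summarizing per-benchmark speedups with an arithmetic mean, which overweights a few large ratios. A minimal sketch (the runtime numbers are illustrative, not from the talk):

```python
import statistics

# Hypothetical per-benchmark runtimes (seconds): baseline vs. optimized system.
baseline = [10.0, 8.0, 12.0, 9.0]
optimized = [5.0, 8.0, 3.0, 9.0]

# Per-benchmark speedup ratios: 2.0, 1.0, 4.0, 1.0.
speedups = [b / o for b, o in zip(baseline, optimized)]

# The arithmetic mean of ratios overweights the single 4x outlier,
# while the geometric mean is the conventional summary for ratio data.
arith = statistics.mean(speedups)       # 2.00
geo = statistics.geometric_mean(speedups)  # ~1.68

print(f"arithmetic mean of speedups: {arith:.2f}")
print(f"geometric mean of speedups:  {geo:.2f}")
```

Here the two summaries disagree noticeably (2.00 vs. ~1.68), which is exactly why reviewers of performance results need the statistical literacy Hoefler asked for.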

The second keynote of the day, Embracing Computational Reproducibility: Challenges, Solutions, and Cultivating Trust in Data-Driven Science, was delivered by Juliana Freire (New York University). Freire discussed some of the challenges with current reproducibility tools and reminded us that reproducibility is not the ultimate goal, but rather trust in and explanation of the science. Enabling reproducibility allows others to assess whether a result is a discovery or a bug. She concluded with a call to action that includes better provenance capture and making computer science more like science. Two sessions, on advancing reproducibility and on testing reproducibility, included very interesting papers on cutting-edge technologies and ideas related to reproducibility, including edge-to-cloud experiments and reproducible execution of closed-source applications.

The second day began with a keynote from Grigori Fursin, co-chair of the MLCommons task force on automation and reproducibility and president of the cTuning foundation, titled Toward a common language to facilitate reproducible research and technology transfer: challenges and solutions. Fursin described the work he’s been doing on reproducibility, which aims to provide portability by identifying the basic blocks that can be abstracted from all scripts. A panel session on The Role of Open Source in Open Science featured an impressive lineup of speakers: Sayeed Choudhury (Carnegie Mellon University), Stephanie Lieggi (UCSC), Zach Chandler (Stanford), and Malvika Sharan (The Alan Turing Institute). Panelists spoke about their experience with open science and discussed whether reproducibility and replicability are technical or scientific issues. The fascinating and wide-ranging conversation also touched on the efforts and challenges to make open science more inclusive and on the still lingering, though diminishing, confusion about definitions of reproducibility, among other issues. Two other sessions, one in the morning and one in the afternoon, included fantastic papers on the costs and benefits of reproducibility, on teaching reproducibility in the CS curriculum, and on novel approaches to GPT benchmarks for reproducibility.
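Fursin’s idea of abstracting reusable “basic blocks” out of ad hoc experiment scripts can be sketched, very loosely, as a registry of named, swappable steps behind a single interface. This is an illustrative sketch only, not the actual MLCommons/cTuning tooling; all names here (`block`, `run_pipeline`, the step names) are invented for the example:

```python
from typing import Callable, Dict

# Registry mapping abstract step names to concrete implementations,
# so an experiment "script" becomes a portable sequence of named blocks.
REGISTRY: Dict[str, Callable[[dict], dict]] = {}

def block(name: str):
    """Register a function as a reusable basic block."""
    def wrap(fn: Callable[[dict], dict]):
        REGISTRY[name] = fn
        return fn
    return wrap

@block("get-data")
def get_data(state: dict) -> dict:
    state["data"] = [1, 2, 3]  # stand-in for a real download/prepare step
    return state

@block("run-benchmark")
def run_benchmark(state: dict) -> dict:
    state["result"] = sum(state["data"])  # stand-in for a real experiment
    return state

def run_pipeline(steps, state=None):
    """Execute a pipeline described only by portable block names."""
    state = state or {}
    for name in steps:
        state = REGISTRY[name](state)
    return state

print(run_pipeline(["get-data", "run-benchmark"])["result"])  # prints 6
```

The point of the design is that the pipeline description (`["get-data", "run-benchmark"]`) contains no machine- or OS-specific detail; portability comes from swapping block implementations while the “common language” of step names stays fixed.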

Three workshops were offered on the last day of the conference. Checking Reproducibility with the Open Research Knowledge Graph, led by Hassan Hussein and Anna-Lena Lorenz (both of TIB - Leibniz Information Centre for Science and Technology), focused on the reproducibility assessment functionality of the Open Research Knowledge Graph (ORKG), a platform for structured semantic knowledge. Kate Keahey and Mark Powers (University of Chicago) led a Practical Reproducibility for Computer Science Hackathon, teaching participants how to package their computer science research experiments so they are “practically reproducible.” The afternoon workshop, Software Quality Practices for Reproducibility, was taught by Reed Milewicz and Miranda Mundt (Sandia National Laboratories) and included hands-on activities and techniques for incorporating practices that facilitate reproducibility into the software development process.

About the ACM REP: The conference brings together a broad and inclusive intellectual community around the reproducibility of computational research. It addresses practical, actionable aspects of reproducibility in broad areas of computational science and data exploration, with special emphasis on issues where community collaboration is essential for adopting novel methodologies, techniques, and frameworks aimed at addressing some of the challenges we face today. The ACM REP conference series is associated with the ACM Emerging Interest Group for Reproducibility and Replicability (see ACM REP’s history).

The inaugural conference was a hybrid event, organized by conference chairs Carlos Maltzahn (UC Santa Cruz) and Philippe Bonnet (IT University of Copenhagen), with Tanu Malik (DePaul University) and Jay Lofstead (Sandia National Laboratories) serving as program chairs. Thanks go to local arrangements superstars Stephanie Lieggi and Yelena Martynovskaya of UC Santa Cruz for the welcoming environment, thoughtful attention to detail, and delicious food.