Topics of Interest

ACM REP ’25 welcomes submissions across computing disciplines, spanning both traditional computer science and interdisciplinary scientific computing applications in biology, chemistry, physics, astronomy, genomics, geosciences, etc. The conference particularly values submissions that demonstrate reproducible experimental results. Where full reproduction is not achieved, detailed documentation of the reproducibility experience is equally valuable.

The conference addresses various aspects of reproducibility and replicability, including but not limited to the following topics:

Reproducibility Concepts

  • Experiment dependency management.
  • Experiment portability for code, performance, and related metrics.
  • Software and artifact packaging and container-related reproducibility methods.
  • Approximate reproducibility.
  • Record and replay methods.
  • Data versioning and preservation.
  • Provenance of data-intensive experiments.
  • Automated experiment execution and validation.
  • Reproducibility-aware computational infrastructure.
  • Experiment discoverability for re-use.
  • Approaches for advancing reproducibility.

Reproducibility Experiences

  • Experience of sharing and consuming reproducible artifacts.
  • Conference-scale artifact evaluation experiences and practices.
  • Experiences as part of hackathons and summer programs.
  • Classroom and teaching experiences.
  • Usability of reproducibility frameworks and their integration into already-established domain-specific tools.
  • Incentive structures and community frameworks that encourage a shift toward reproducible research practices.
  • Policies around publication of articles/software.
  • Experiences within computational science communities.
  • Collecting datasets from laboratory / real-world settings.

Systems and Security Concerns

  • Experience comparing published systems in a domain.
  • Tools to support replicability of system analysis.
  • Designing machine learning workflows to support reproducibility.
  • Reproducing real-world security findings.
  • Privacy concerns arising from reproducibility.
  • Challenges of reproducing security experiments.
  • Securing reproducibility infrastructure.

Broader Reproducibility

  • Cost-benefit analysis frameworks for reproducibility.
  • Novel methods and techniques that impact reproducibility.
  • Reusability, repurposability, and replicability methods.
  • Long-term artifact archiving and verification/testing for future reproducibility.

Submission Guidelines

We solicit papers describing original work relevant to reproducibility and independent verification of scientific results. Submissions must not have been published or be under review elsewhere. ACM REP is a double-blind reviewed conference. ACM REP submissions can be research, survey, vision, or experience papers. Submissions will be evaluated according to their significance, originality, technical content, style, clarity, relevance, and likelihood of generating discussion. Authors should note that changes to the author list after the submission deadline are not allowed without permission from the PC Chairs. At least one author of each accepted paper is required to register for, attend, and present the work at the conference.

In-person attendance and presentation is highly encouraged, but remote participation will also be supported.

Research Papers (Long and Short)

We solicit both full-length papers (10 pages) and short papers (4 pages). The former are typically descriptions of complete technical work, while the latter describe interesting, innovative ideas that require further effort to mature. The program committee may decide to accept some full papers as short papers. Full papers will be given a presentation slot at the conference, while short papers will be presented as posters. All papers, regardless of length, will receive an entry in the conference proceedings. Page limits exclude references and appendices. Authors may optionally include reproducibility information that allows for automated validation of experimental results (see the artifact evaluation criteria below). Accepted submissions that pass automated validation will earn ACM Reproducibility badges in accordance with the artifact review and validation policy.

Artifact Evaluation Criteria

The conference also solicits code/data artifacts. For submitted papers, these artifacts are optional supplemental material, solicited according to the program committee’s criteria. Artifacts are mandatory for accepted full papers with experimental results. Artifacts will be reviewed by an Artifact Evaluation committee, and those that pass will be awarded Reproducibility Badges per ACM policy.

Formatting

Papers must be submitted in PDF format according to the ACM template published in the ACM guidelines, selecting the generic “sigconf” sample. The PDF files must have all non-standard fonts embedded. Papers must be self-contained and in English. If submitting a short paper, authors must indicate “SHORT:” at the beginning of the title. The review process is double-blind.

Submission Site

The conference submission site is: https://easychair.org/conferences?conf=acmrep2025

Important Dates

Paper submission (Long and Short): March 31, 2025, 23:59 AoE
First response to authors: May 12, 2025
Revise and Resubmit: May 26, 2025
Notification of acceptance: June 23, 2025
Camera-ready copy: July 14, 2025
Conference: July 29 - 31, 2025

Program Committee

Program Chairs

Ashish Gehani (SRI)

Khalid Belhajjame (University Paris - Dauphine)

Program Committee

Sergey Bratus (Dartmouth College)
Kevin Butler (University of Florida)
Prasad Calyam (University of Missouri)
Jean Camp (Indiana University - Bloomington)
Lorenzo Cavallaro (University College London)
Bruce Childers (University of Pittsburgh)
Ludovic Courtes (INRIA)
Jack Davidson (University of Virginia)
Lorenzo De Carli (University of Calgary)
Ewa Deelman (Information Sciences Institute)
David Eyers (University of Otago)
Dustin Fraze (Microsoft)
Juliana Freire (New York University)
Fraida Fund (New York University)
Grigori Fursin (FlexAI / cTuning / MLCommons)
Simson Garfinkel (BasisTech)
Paul Groth (University of Amsterdam)
Kevin Hamlen (University of Texas - Dallas)
Marc Herbstritt (University of Freiburg)
Thomas Hildebrandt (University of Copenhagen)
Alefiya Hussain (Information Sciences Institute)
Daniel Katz (University of Illinois - Urbana-Champaign)
Joshua Kroll (Naval Postgraduate School)
Ignacio Laguna (Lawrence Livermore National Laboratory)
Stefan Leue (University of Konstanz)
Michael Locasto (Narf Industries)
Bertram Ludäscher (University of Illinois - Urbana-Champaign)
Alyssa Milburn (Intel)
Jelena Mirkovic (Information Sciences Institute)
Sean Oesch (Oak Ridge National Laboratory)
Limor Peer (Yale University)
Sean Peisert (Lawrence Berkeley National Laboratory)
Solal Pirelli (Sonar)
Beth Plale (Indiana University - Bloomington)
Lutz Prechelt (Free University of Berlin)
Vicky Rampin (New York University)
Birali Runesha (University of Chicago)
Mahadev Satyanarayanan (Carnegie Mellon University)
Stefanie Scherzinger (University of Passau)
Sameer Shende (University of Oregon)
Salvatore Signorello (University of Lisbon)
Douglas Thain (University of Notre Dame)
Rafael Tolosana-Calasanz (University of Zaragoza)
Ana Trisovic (Massachusetts Institute of Technology)
Petr Tuma (Charles University)
Anjo Vahldiek-Oberwagner (Intel Labs)
Theo Zimmermann (Telecom Paris)