Artifact Evaluation for MICRO 2023


Important dates

Paper rebuttal/Revision: June 26 – July 7, 2023
Paper decision: July 24, 2023
Artifact submission deadline: August 7, 2023 AoE
Artifact decision: September 14, 2023
Conference: October 28 – November 1, 2023 (Toronto, Canada)

A common CM interface to rerun experiments from different papers

We are developing a common CM interface to rerun experiments from different papers in a unified and automated way.
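
For illustration only, here is a minimal sketch of how an experiment exposed through such a CM interface might be rerun from Python, assuming the cmind package is installed (pip install cmind); the repository name and script tags below are placeholders rather than the actual interface of any specific paper.

  # Minimal sketch, assuming `pip install cmind`; the repository name and
  # script tags are placeholders, not a specific paper's actual interface.
  import cmind

  # Pull a CM repository that contains the automation scripts.
  r = cmind.access({'action': 'pull',
                    'automation': 'repo',
                    'artifact': 'mlcommons@ck'})  # placeholder repository name
  if r['return'] > 0:
      raise RuntimeError(r.get('error', 'CM repo pull failed'))

  # Run a CM script selected by its tags and stream output to the console.
  r = cmind.access({'action': 'run',
                    'automation': 'script',
                    'tags': 'reproduce,paper,experiment',  # hypothetical tags
                    'out': 'con'})
  if r['return'] > 0:
      raise RuntimeError(r.get('error', 'CM script run failed'))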

Artifact evaluation chairs

Artifact submission

Artifact evaluation promotes reproducibility of experimental results and encourages code and data sharing.

The authors will need to fill in this Artifact Appendix to describe the minimal software, hardware and data set requirements and to explain how to prepare, run and reproduce key experiments. The Artifact Appendix should be appended to the accepted paper and submitted for evaluation via the MICRO'23 AE HotCRP website.

We introduced this Artifact Appendix to unify the description of experimental setups and results across different conferences. While it is relatively intuitive, we encourage the authors, based on feedback from the community, to check the Artifact Appendix guide, the artifact reviewing guide, the SIGPLAN Empirical Evaluation Guidelines, the NeurIPS reproducibility checklist, the AE FAQs and Artifact Appendices from past papers before submitting artifacts for evaluation.

This submission is voluntary and will not influence the final decision regarding the paper. Our goal is to help the authors have their experiments validated by an independent AE Committee in a collaborative and constructive way. Furthermore, the authors can add notes and corrections to the Artifact Appendix if any mistakes are found during artifact evaluation.

The authors will communicate with evaluators via HotCRP after the submission.

We suggest that the authors make their artifacts available for evaluation via GitHub or a similar public or private service. Public artifact sharing allows the authors to quickly fix issues encountered during evaluation before submitting the final version to archival repositories. Other acceptable methods include:

  • Using zip or tar files with all related code and data, particularly when the code must be rebuilt on reviewers' machines (for example, to have non-virtualized access to specific hardware).
  • Using Docker, VirtualBox or other container and VM images.
  • Arranging remote access to the authors' machine with pre-installed software; this is reserved for exceptional cases when rare or proprietary software or hardware is used. The authors will need to privately send the access information to the AE chairs.

Papers that successfully pass AE will receive a set of ACM badges of approval, printed on the papers themselves and available as metadata in the ACM Digital Library (it is now possible to search for papers with specific badges in the ACM DL). Authors of such papers will have the option to include up to two pages of their Artifact Appendix in the camera-ready paper.


  • Artifacts Available: General ACM guidelines. Artifacts will receive the ACM "Artifacts Available" badge only if they have been placed in a publicly accessible archival repository such as Zenodo, FigShare or Dryad with a DOI. The authors can provide the DOI of the final artifact at the very end of the AE process.
  • Artifacts Evaluated – Functional: General ACM guidelines.
  • Artifacts Evaluated – Reusable (pilot project): To help digest the criteria for this badge, we have partnered with MLCommons to add their unified automation interface (MLCommons CM) to the shared artifacts to prepare, run and plot results. We believe that MLCommons CM captures the core tenets of the ACM "Artifacts Evaluated – Reusable" badge and have therefore added it as one possible criterion for obtaining this badge. The authors can try to add the MLCommons CM interface to their artifacts themselves using this tutorial; an illustrative sketch follows this list.
  • Results Reproduced: General ACM guidelines and our extended guidelines.
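
To give a rough idea of what such an interface could look like from the authors' side, the sketch below chains hypothetical "prepare", "run" and "plot" stages through the cmind Python API. All script tags and environment variables are placeholders invented for illustration; the actual interface expected for the pilot is described in the MLCommons CM tutorial linked above.

  # Illustrative sketch only: the script tags and environment variables are
  # hypothetical placeholders; see the MLCommons CM tutorial for the real interface.
  import cmind

  def cm_run(tags, env=None):
      """Run one CM script selected by its tags and fail loudly on error."""
      r = cmind.access({'action': 'run',
                        'automation': 'script',
                        'tags': tags,
                        'env': env or {},
                        'out': 'con'})
      if r['return'] > 0:
          raise RuntimeError(r.get('error', 'CM script failed: ' + tags))
      return r

  # Hypothetical three-stage workflow that an artifact could expose via CM.
  cm_run('install,deps,my-paper')                          # prepare
  cm_run('run,experiment,my-paper', env={'TRIALS': '3'})   # run
  cm_run('plot,results,my-paper')                          # plot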

Artifact evaluation process

Note that our artifact evaluation is single-blind: the paper is already accepted, so there is no need to hide the authors, and we help them fix issues, improve their artifacts and pass the evaluation.

Evaluators will read the paper and then go through the Artifact Appendix to evaluate the artifacts and reproduce experiments based on the general ACM guidelines and our MICRO'23 guidelines.

Reviewers will communicate with the authors about any issues they encounter immediately (and anonymously) via the HotCRP submission website to give the authors time to resolve all problems! Note that our philosophy is not to fail problematic artifacts but to help the authors improve their public artifacts and pass the evaluation!

In the end, the AE chairs will communicate with the authors and decide on the set of standard ACM reproducibility badges to award to a given paper/artifact, based on all reviews and the authors' responses.

Artifact Evaluation Committee

  • Junaid Ahmed (Barcelona Supercomputing Center (BSC))
  • Thomas Bourgeat (EPFL)
  • Roman Kaspar Brunner (Norwegian University of Science and Technology (NTNU))
  • Filippo Carloni (Politecnico di Milano)
  • Scott Cheng (Pennsylvania State University)
  • Davide Conficconi (Politecnico di Milano)
  • Christin David Bose (Purdue University)
  • Quang Duong (The University of Texas at Austin)
  • Charles Eckman (Google)
  • Yu Feng (University of Rochester)
  • Peter Gavin (Google)
  • Tianao Ge (The Hong Kong University of Science and Technology (Guangzhou))
  • Yueming Hao (North Carolina State University)
  • Ryan Hou (University of Michigan)
  • Kashif Inayat (Incheon National University)
  • Vikram Jain (University of California, Berkeley)
  • Xiaolin Jiang (UC Riverside)
  • Apostolos Kokolis (Meta)
  • Stephen Longfield (Google)
  • Themis Melissaris (Snowflake)
  • Nafis Mustakin (UCR)
  • Mahmood Naderan-Tahan (Ghent University)
  • Alan Nair (The University of Edinburgh)
  • Asmita Pal (University of Wisconsin-Madison)
  • Subhankar Pal (IBM Research)
  • Santosh Pandey (Rutgers University)
  • Francesco Peverelli (Politecnico di Milano)
  • Umair Riaz (Barcelona Supercomputing Center (BSC))
  • Alen Sabu (National University of Singapore)
  • Yukinori Sato (Toyohashi University of Technology)
  • Umer Shahid (UET Lahore)
  • Cesar A. Stuardo (Bytedance)
  • Yiqiu Sun (University of Illinois Urbana-Champaign)
  • Minh-Thuyen Thi (Institute List, CEA, Paris-Saclay University)
  • Mohit Upadhyay (National University of Singapore)
  • Marco Venere (Politecnico di Milano)
  • Gaurav Verma (Stony Brook University, New York)
  • Felippe Vieira Zacarias (UPC/BSC)
  • Zishen Wan (Georgia Tech)
  • Tianrui Wei (University of California, Berkeley)
  • Zhenlin Wu (The Hong Kong University of Science and Technology (Guangzhou))
  • Chengshuo Xu (Google)
  • Jingyi Xu (University of California, Berkeley)
  • Yuanchao Xu (North Carolina State University)
  • Yufan Xu (University of Utah)
  • Xizhe Yin (University of California, Riverside)
  • Qirui Zhang (University of Michigan)
  • Sai Qian Zhang (Harvard University/Meta)
  • Zhizhou Zhang (Uber Technologies Inc)

Questions and feedback

Please check the AE FAQs and feel free to ask questions or provide your feedback and suggestions via our public AE discussion group.