MAY 18-21, 2026 AT THE HILTON SAN FRANCISCO UNION SQUARE, SAN FRANCISCO, CA

47th IEEE Symposium on
Security and Privacy

Artifact Evaluation Process

Authors are invited to submit their artifacts immediately after receiving the acceptance notification for their paper. At least one contact author must be reachable and respond to questions in a timely manner during the entire evaluation period, to allow round-trip communication between the AEC and the authors. Artifacts can be submitted only in the AE time frame associated with the paper submission round.

At submission time, authors choose which badges they want to be evaluated for. Members of the AEC will evaluate each artifact using the author’s instructions within the submission as a guide, as detailed later on this page. Evaluators will communicate anonymously with authors through HotCRP to resolve minor issues and ask clarifying questions.

Evaluation starts with a kick-the-tires period, during which evaluators ensure that they can access their assigned artifacts and perform basic operations such as building and running a minimal working example. Evaluators provide feedback about the artifact during this period, giving authors the opportunity to address any significant issues that block the evaluation. After the kick-the-tires stage ends, communication can still address how to interpret the produced results or minor syntactic issues in the submitted materials.

Artifact details and requirements

Artifacts can be, e.g., software, datasets, models, test suites, or mechanized proofs. Paper proofs are not accepted, as evaluators lack the time and often the expertise to carefully review them. Physical objects, such as specialized computer hardware, are also not accepted, due to the difficulty of making them available to evaluators.

To ensure that the evaluation is practical for the AEC, each code artifact must be packaged according to the instructions, and it must run on a public research infrastructure of the authors’ choice; examples include SPHERE, Chameleon, CloudLab, Google Colab, and FABRIC. We understand that this may not be possible in some cases (e.g., the artifact requires special hardware or a special geolocation). In these cases, authors should explain the constraint and provide the AEC with anonymous access to the special hardware (e.g., via SSH with public-key authentication). We will also accept Docker artifacts (as detailed below) that have been tested on private infrastructure; these artifacts will be evaluated on public infrastructure.
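As a rough sketch, a Docker-packaged code artifact could be described by a Dockerfile along the following lines; every base image, file, and script name here is a hypothetical placeholder, not a required layout:

```dockerfile
# Hypothetical Dockerfile sketch for a code artifact.
FROM ubuntu:22.04

# Install build dependencies (adjust to the artifact's actual needs).
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Copy the artifact source into the image.
WORKDIR /artifact
COPY . /artifact

# Build the artifact; "make" and "requirements.txt" are placeholders.
RUN make && pip3 install -r requirements.txt

# Default entry point: a script reproducing a minimal working example.
CMD ["./run_minimal_example.sh"]
```

An image built this way can be exercised on public infrastructure with a single `docker build` followed by `docker run`, which keeps the kick-the-tires check independent of the authors’ private setup.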

Proposed experiments should take at most one day to run for the evaluation. When the paper’s research requires longer run times, the authors should design scaled-down experiments and properly justify how these can still meaningfully support the paper’s analyses. Hardware and software requirements must be stated when registering an artifact.

Artifact evaluation is single-blind. Each AEC member will independently test and review their assigned submissions. To maintain the anonymity of evaluators, artifact authors should not embed analytics or other tracking tools in any websites for their artifacts for the duration of the AE period. In cases where tracking is unavoidable, authors must notify the AE chair in advance so that AEC members can take adequate safeguards.

Submitting an artifact for evaluation does not give the AEC permission to make its contents public or to retain any part of it after evaluation. Thus, authors are free to include proprietary models, data files, or code in artifacts. However, we expect that meaningful parts of the artifact will be released publicly after the evaluation. If you foresee that some parts of your artifact will not eventually be publicly released, please note that in your README file. Otherwise, the expectation is that the entire artifact as evaluated will be publicly released in a permanent repository by the camera-ready deadline. If the publicly released artifact contains significantly less information than the submitted artifact, and the AEC concludes that the final artifact is no longer meaningful in isolation, the AEC reserves the right not to award evaluation badges.

Artifact Badges

Available

To earn this badge, the AEC must judge that the artifact associated with the paper has been made available for retrieval permanently and publicly. As an artifact undergoing AE often evolves as a consequence of AEC feedback, authors can use mutable storage for the initial submission, but must commit to uploading their materials to public services (e.g., Zenodo, FigShare, Dryad) for permanent storage backed by a Digital Object Identifier (DOI). Final permanent storage is a condition to receive this badge. Authors are welcome to report additional sources, like GitHub and GitLab, that may ease the dissemination of the artifact and possible future updates.

Functional

To earn this badge, the AEC must judge that the artifact conforms to the expectations set by the paper; the AEC will particularly weigh three aspects: functionality, usability, and relevance. An artifact must also be usable on machines other than the authors’, including when specialized hardware is required (for example, paths, addresses, and identifiers must not be hardcoded).

Reproduced

To earn this badge, the AEC must judge that they can use the submitted artifact to obtain the main results presented in the paper. In short, is it possible for the AEC to independently repeat the experiments and obtain results that support the main claims made by the paper? The goal of this effort is not to reproduce the results exactly, but instead to generate results independently within an allowed tolerance such that the main claims of the paper are validated. In the case of lengthy experiments, scaled-down versions can be proposed if their significance is clearly and convincingly explained.

Artifact preparation and packaging

Artifacts should be packaged to ease evaluation and use. Everything necessary for installation and running should be scripted whenever possible. Packaging matters not only for evaluation, but also for future use of the artifact by other researchers who may want to build on top of it or use it as a baseline. All information relevant for evaluation should be contained in the packaging.
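As a minimal sketch, a single top-level driver script can chain the install, build, and run steps so that one command exercises the whole artifact; every script and directory name below is a hypothetical placeholder:

```shell
#!/bin/sh
# Hypothetical top-level driver for an artifact. The commented commands
# are placeholders for the artifact's real install/build/run scripts.
set -eu

# Print a numbered progress line for each step.
step() { printf '[%s] %s\n' "$1" "$2"; }

step 1/3 "Installing dependencies..."
# e.g., ./install_deps.sh  (apt packages, pip install -r requirements.txt, ...)

step 2/3 "Building the artifact..."
# e.g., make -C src

step 3/3 "Running a minimal working example..."
# e.g., ./run_minimal_example.sh --output results/

step done "See results/ for outputs."
```

A single entry point like this lets an evaluator complete the kick-the-tires check with one command instead of reverse-engineering the build.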

For easy packaging, we have developed a Python script that asks authors for relevant information and outputs a .toml file. This file should be uploaded to HotCRP. If you encounter problems during packaging, please email the AEC chairs to discuss appropriate packaging. If your artifact falls into multiple categories (e.g., code and datasets), you can package and submit them for evaluation separately if they are independent, or submit them as one artifact if your code uses your dataset to demonstrate claims.

Packaging Script

The packaging script can be obtained from https://github.com/jelenamirkovic/artmeta. It asks the authors a series of questions, each answered either by providing information directly or by pointing to a file in the artifact repository. You can run the packaging script multiple times, and you can resume previous incomplete runs. Once you complete the packaging, please submit the resulting metadata.toml file to HotCRP.

Research claims

Linking the paper’s claims to the artifact is a necessary step that allows artifact evaluators to reproduce results. Authors must state their paper’s key results and claims clearly. Claims should also be concrete, especially if they may differ from the expectations set by the paper. The AEC will still evaluate artifacts relative to their paper, but an explanation can help set expectations up front, especially in cases that might otherwise frustrate the evaluators. For example, authors are encouraged to be transparent with the AEC about difficulties that evaluators might encounter in using the artifact, or about its maturity relative to the paper’s content. Whenever possible, please create scripts that run your code to demonstrate each claim, and package these with your code.
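For instance, a per-claim script could run the relevant experiment and check the measured result against the paper’s claimed value within a stated tolerance. In the sketch below, the experiment command, the claimed throughput of 120 req/s, and the 10% tolerance are all hypothetical placeholders:

```shell
#!/bin/sh
# Hypothetical sketch of a per-claim script: run an experiment and check
# that the measured value supports the paper's claim within a tolerance.
set -eu

# Succeeds iff |measured - claimed| / claimed <= tolerance (a fraction).
within_tolerance() {
    awk -v m="$1" -v c="$2" -v t="$3" 'BEGIN {
        d = m - c; if (d < 0) d = -d;
        exit !(d / c <= t)
    }'
}

# Placeholder: the paper claims ~120 req/s throughput; allow 10% deviation.
# measured=$(./run_throughput_experiment.sh)   # hypothetical experiment
measured=115
if within_tolerance "$measured" 120 0.10; then
    echo "Claim 1 supported: measured $measured req/s (claimed ~120)"
else
    echo "Claim 1 NOT supported: measured $measured req/s (claimed ~120)"
    exit 1
fi
```

Packaging one such script per claim makes the mapping from paper claims to experiments explicit, and the tolerance check mirrors the Reproduced badge’s "within an allowed tolerance" criterion.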

Note on code artifacts

If releasing a code artifact, authors should make every effort to package it as source code as described above. In a few exceptional cases, when this is not possible, we will accept other formats; authors should reach out to the AE chair when another format seems more reasonable in their judgment.

Artifact submission

Please submit the output of the artifact packaging script here:
https://cycle2-ae.sp2026.ieee-security.org/

Resources

The following materials may be useful when preparing an artifact:

Acknowledgements

The AE process at IEEE S&P 2026 was inspired by similar endeavors in other systems and security conferences. This artifact packaging guide builds on materials from the AE process of NDSS’25 and USENIX Security’25.