Thank you for volunteering your time to review for ICCV 2023! To maintain a high-quality technical program, we rely heavily on the time and expertise of our reviewers. This document explains what is expected of all members of the Reviewing Committee for ICCV 2023.
We also recommend that you check the Review Tutorial “How to be a good reviewer?” (download the PDF here or see the Google Slides: https://tinyurl.com/zmv26ea7).
Blind Reviews
Author Guidelines have instructed authors to make reasonable efforts to hide their identities, including omitting their names, affiliations, and acknowledgments. This information will of course be included in the published version. Likewise, reviewers should make every effort to keep their own identities hidden from the authors.
With the increase in popularity of arXiv preprints, sometimes the authors of a paper may be known to the reviewer. Posting to arXiv is NOT considered a violation of anonymity on the part of the authors, and in most cases, reviewers who happen to know (or suspect) the authors’ identity can still review the paper as long as they feel that they can do an impartial job. An important general principle is to make every effort to treat papers fairly whether or not you know (or suspect) who wrote them. If you do not know the identity of the authors at the start of the process, DO NOT attempt to discover them by searching the Web for preprints.
Please read the FAQ at the end of this document for further guidelines on how arXiv prior work should be handled.
Check Your Papers
As soon as you receive your reviewing assignment, please go through all the papers to make sure that (a) there is no obvious conflict with you (e.g., a paper authored by a recent collaborator from a different institution) and (b) you feel comfortable reviewing the papers assigned. If either issue arises, please let us know right away by emailing the Program Chairs (programchairs-iccv23@googlegroups.com) and informing your Area Chair (AC) through CMT.
Please read the Author Guidelines carefully to familiarize yourself with all official policies (such as double submission and plagiarism). If you think a paper may be in violation of one of these policies, please contact the Program Chairs. In the meantime, proceed to review the paper assuming no violation has taken place.
What to Look For
Each paper that is accepted should be technically sound and make a contribution to the field. Look for what's good or stimulating in the paper. We recommend that you embrace novel, brave concepts, even if they have not been tested on many datasets. For example, the fact that a proposed method does not exceed the state-of-the-art accuracy on an existing benchmark dataset is not grounds for rejection by itself. Rather, it is important to weigh both the novelty and potential impact of the work alongside the reported performance. Minor flaws that can be easily corrected should not be a reason to reject a paper.
Check for Reproducibility
To improve reproducibility in AI research, we highly encourage authors to voluntarily submit their code as part of the supplementary material, especially if they plan to release it upon acceptance. Reviewers may optionally check this code to ensure the paper’s results are reproducible and trustworthy, but are not required to. Reviewers are also encouraged to use the Reproducibility Checklist as a guide for assessing whether a paper is reproducible. All code and data should be reviewed confidentially, kept private, and deleted after the review process is complete. We expect (but do not require) that the accompanying code will be submitted with accepted papers.
Check for Data Contribution
Datasets are a significant part of Computer Vision research. If a paper claims a dataset release as one of its scientific contributions, the dataset is expected to be made publicly available no later than the camera-ready deadline, should the paper be accepted. Please indicate in the corresponding field of the review form whether the paper makes such a claim and whether the matching field in the submission form has been marked.
Check for Attribution of Data Assets
Authors have been advised to cite the data assets they use (e.g., datasets or code), much as they would cite papers. As a reviewer, please carefully check whether a paper has adequately cited the data assets it uses, and comment in the corresponding field of the review form.
Check for Use of Personal Data and Human Subjects
If a paper uses personal data or data from human subjects, the authors must have ethics clearance from an institutional review board (IRB, or equivalent) or clearly describe how ethical principles have been followed. If there is no description of how ethical principles were ensured, or if there are GLARING violations of ethics (whether discussed or not), please inform the Area Chairs and the Program Chairs, who will follow up on each specific case. Reviewers should not attempt to handle such issues on their own.
In most countries, a new dataset typically requires IRB review (in the US) or the appropriate local ethics approval; it is the dataset creators' responsibility to obtain it. If the authors use an existing, published dataset, we encourage, but do not require, them to check how the data was collected and whether consent was obtained. Our goal is to raise awareness of issues that may be ingrained in our community, and we would therefore like to encourage dataset creators to make this information public.
In this regard, if a paper uses an existing public dataset released by other researchers or research organizations, we encourage, but do not require, the authors to include a discussion of IRB-related issues in the paper. Reviewers should therefore not penalize a paper if such a discussion is NOT included.
Check for Discussion of Negative Societal Impact
So far, the ICCV community has not put as much emphasis on awareness of possible negative societal impact as other AI communities have, but this is an important issue. We aim to raise awareness without introducing a formal policy (yet). Accordingly, authors are encouraged to include a discussion of potential negative societal impact. Reviewers shall weigh the inclusion of a meaningful discussion POSITIVELY. Reviewers shall NOT reject a paper solely because it does not include such a discussion, as we do not have a formal policy requiring one.
Check for Discussion of Limitations
Discussing limitations used to be commonplace in our community, but seems to be increasingly rare. We want to stress the importance of discussing limitations, especially to new authors. Authors are therefore encouraged to discuss limitations explicitly and honestly. Reviewers shall weigh the inclusion of an honest discussion POSITIVELY, rather than penalizing papers for including it. Note that a paper is not required to have a separate section discussing limitations, so the absence of one cannot be the sole factor for rejection.
Be Specific
Please be specific and detailed in your reviews. Your main critique of the paper should be written in terms of a list of strengths and weaknesses. You can use bullet points here, but also explain your arguments. Your discussion, more than your score, will help the authors, fellow reviewers, and Area Chairs understand the basis for your recommendation, so please be thorough. You should include specific feedback on ways the authors can improve their papers.
In the discussion of related work and references, simply saying "this is well known" or "this has been common practice in the industry for years" is not sufficient: cite specific publications, including books or public disclosures of techniques.
Please read the FAQ at the end of this document for further details on how to treat related work on arXiv, supplementary material, and rebuttals.