Instructions for Reviewers
Review Criteria (for regular papers)
Submissions will be evaluated based on the significance and novelty of the results, whether theoretical or empirical. Results will be judged on the degree to which they have been objectively established and/or their potential for scientific and technological impact, as well as their relevance to robot learning. Authors of regular paper submissions will have an opportunity to submit a response to reviewers and update their papers during the rebuttal period. Reviews and rebuttals of accepted papers will be made publicly available.
We take a broad view of robot learning. Papers with both experimental and theoretical results relevant to robot learning are welcome. Our intent is to make CoRL a selective top-tier conference on robotic learning.
Desk Reject Criteria (for regular papers)
Process: ACs will identify desk-rejection candidates, citing one of the criteria below as justification. The PCs will examine the candidates and make the final decision. We will err on the side of caution and desk reject a paper only when there is consensus among all PCs and the AC.
The paper can be desk rejected for one of the following reasons: not meeting the reciprocal reviewing requirement, formatting issues, anonymity violation, or scope.
Reciprocal reviewing requirement: please check the Instructions for Authors section for this requirement.
Formatting issues: The paper is either too long or in an incorrect format.
Anonymity violation: The main manuscript, supplemental materials, or a link provided in the paper identifies one or more of the authors.
Scope: All CoRL submissions must demonstrate relevance to robot learning through at least one of the following:
Intent: explicitly address a learning question for physical robots, OR
Outcome: test the proposed learning solution on physical robots.
Rejection examples
No learning: Manually designing and tuning a robot controller without the use of learning.
No learning: A search algorithm for model-based planning.
No robotics: A generic result on sample complexity.
No robotics: A generic RL algorithm.
Little robotics: Improved performance on a standard CV dataset, e.g., ImageNet recognition.
Gray area
An RL algorithm that works only in simulator X. Does it transfer to real robot learning (sim2real, data efficiency, …)? Yes for CARLA for autonomous driving; no for Cheetah/Humanoid in MuJoCo. According to our stated principles, such a submission satisfies the intent criterion. Whether it successfully demonstrates relevance to robot learning will be determined during the review process.
Reviewer Criteria
Reviewers need to have expertise in robotics, robot learning, machine learning, and complementary fields, as demonstrated by a record of previous publications in the area. Reviewers should ideally have a minimum of 3 first-author publications in major robotics conferences and journals (e.g., CoRL, RSS, ICRA, IROS, IJRR, TRO, RAL) or core machine learning venues (e.g., NeurIPS, ICML, AISTATS, JMLR, PAMI). To recruit more experienced reviewers, area chairs were asked to nominate faculty members, research scientists, researchers, and more senior graduate students.
Rebuttal Process
New this year
After receiving the reviews, authors are invited to submit a 1-page rebuttal in PDF format by the resubmission deadline. The rebuttal should focus on addressing factual errors in the reviews rather than providing additional experimental results. Reviewers should not request additional experimental results in the rebuttal.
All rebuttals will be reviewed by the original reviewers, the original AC, and potentially external reviewers. The AC reserves the right to invite new reviewers if needed. In such cases, the results of the current review(s) will be shared with the new reviewers.