SDPRA 2022

Second Workshop on Scope Detection & Peer Review Articles

Collocated with PAKDD 2022


  • Workshop Overview

    For years, peer review has been the formal part of scientific communication that validates the quality of a research article. It is the most widely agreed-upon measure of conference submission quality, yet it is becoming increasingly skewed. Part of the difficulty, we argue, is that reviewers and area chairs are given a vague goal that forces them to make apples-to-oranges comparisons. There are various potential paths forward, but the main challenge is establishing the incentives and processes that would allow them to be implemented consistently across the AI/ML community. An article goes through discrete filtering steps before it is published in a reputed journal or conference. The first step in the peer review process is the editor’s initial screening: the editor, who is also an expert in the field, decides whether an article should be rejected without further review or forwarded to expert reviewers for meticulous evaluation. Acceptance of a paper then depends heavily on the reviewers. It is becoming more common for authors to share their reviews on social media, especially when reviewers reject their work on spurious grounds.

    Common reasons for rejection include: the paper’s language and writing/formatting style; results that are not better than the SOTA; not using a particular method (like GPT-3 or XLNet); a method that is too simple (seriously? Isn’t that a good thing?); work that is too narrow, outdated, or out of scope; or a completely new topic (source: #AcadTwitter).

    Contrary to popular belief, and despite its correlation with existing accepted papers, the coveted SOTA does not necessarily advance the field. METEOR, BLEU, and ROUGE are among the popular evaluation metrics in NLP, but each has its own drawbacks, and a top score cannot establish that the top system has the best architecture. So what should we consider?
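    To make this drawback concrete: at the core of BLEU-style metrics is clipped n-gram overlap, which can penalize a faithful paraphrase simply for using different surface words. Below is a minimal sketch of that idea (the function name and example sentences are illustrative, not any official implementation):

    ```python
    from collections import Counter

    def ngram_precision(candidate: str, reference: str, n: int) -> float:
        """Clipped n-gram precision, the building block of BLEU-style metrics.

        Each candidate n-gram is counted at most as many times as it
        appears in the reference ("clipping").
        """
        cand = candidate.split()
        ref = reference.split()
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum(min(count, ref_ngrams[g]) for g, count in cand_ngrams.items())
        total = sum(cand_ngrams.values())
        return overlap / total if total else 0.0

    reference = "the model improves translation quality"
    paraphrase = "the system enhances the quality of translations"

    print(ngram_precision(paraphrase, reference, 1))  # 2/7 ≈ 0.286
    print(ngram_precision(paraphrase, reference, 2))  # 0.0
    ```

    The paraphrase preserves the reference’s meaning, yet its unigram precision is only 2/7 and its bigram precision is zero: a system ranked lower by such a score does not necessarily have a worse architecture.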

    To demystify and improve such an obscure process, we are hosting the 2nd Workshop on Scope Detection & Peer Review Articles to address these gaps. We seek to reach the broader NLP and AI/ML community to pool distributed efforts to improve the quality of peer review. SDPRA 2022 will comprise a research track and invited talks.

    Call for Papers

    Topics of Interest: Papers are invited on substantial, original, and unpublished research on all aspects of peer review. The areas of interest include, but are not limited to:

    • Scope & Novelty Detection
    • Insights from existing reputed conferences & journals
    • Diversity & Inclusivity Reports
    • Reviewer Matching algorithms
    • Demographic Reports
    • Revision & Rebuttal analysis
    • Discourse modeling and argument mining
    • Bibliometrics, scientometrics, and altmetrics approaches
    • Fairness-aware data mining
    • Datasets and resources
    • Information Extraction and Retrieval
    • Search & Retrieval
    • Reproducibility
    • Evaluation Metrics
    • Negative Results
    • Carbon Footprint
    • State of the Art Results
    • Case for Simplicity
    • Ethical Considerations

    Submission Instructions:
    • Each submitted paper should include an abstract of up to 200 words and be no longer than 20 single-spaced pages in 10pt font (including references, appendices, etc.). Authors are strongly encouraged to follow the Springer LNCS/LNAI manuscript submission guidelines.
    • All papers must be submitted electronically through the paper submission system, in PDF format only. If required, supplementary material may be submitted as a separate PDF file; however, reviewers are not obligated to consider it, and your manuscript should therefore stand on its own merits. Supplementary material will not be published in the proceedings.

    Springer will publish the proceedings of the workshop as a volume of the LNAI series, and selected excellent papers will be invited for publication in special issues of high-quality journals, including Knowledge and Information Systems (KAIS) and the International Journal of Data Science and Analytics.

    By submitting a paper to the workshop, the authors agree that at least one author will attend the workshop to present the paper, if it is accepted. The affiliations of no-show authors will receive a notification.

  • Important Dates for Workshop:
    • January 20, 2022: Call for Regular Papers
    • March 07, 2022: Submission Deadline
    • March 31, 2022: Author Notification
    • April 15, 2022: Camera Ready submission
    • May 16-19, 2022: Workshop
  • Important Note: In response to the ongoing pandemic, the PAKDD 2022 conference will be held in hybrid/online mode. All timings are in Indian Standard Time (IST, UTC+05:30).