The 10th Workshop on Argument Mining


December 7, 2023 - HYBRID format

Co-located with EMNLP 2023 in Singapore

You can commit your ARR paper through SoftConf until September 25, 2023, at 23:59 AoE.

Argument mining (also known as "argumentation mining") is a gradually maturing research area within computational linguistics. At its heart, argument mining involves the automatic identification of argumentative structures in free text, such as the conclusions, premises, and inference schemes of arguments as well as their interrelations and counter-considerations. To date, researchers have investigated argument mining on genres such as legal documents, product reviews, news articles, online debates, user-generated web discourse, Wikipedia articles, scholarly data, persuasive essays, tweets, and dialogues. Aside from mining argumentative components, the field focuses on studying argument quality assessment, argument persuasiveness, and the synthesis of argumentative texts.

Argument mining gives rise to various practical applications of great importance. In particular, it provides methods that can find and visualize the main pro and con arguments in a text corpus --- or even in an argument search on the web --- on a topic or query of interest. In instructional contexts, written and diagrammed arguments represent educational data that can be mined for conveying and assessing students' command of course material. Moreover, debate technologies such as IBM's Project Debater, which has recently drawn considerable attention, rely heavily on argument mining tasks.

While solutions to basic tasks such as component segmentation and classification are slowly maturing, many tasks remain largely unsolved, particularly in more open genres and topical domains. Success in argument mining requires interdisciplinary approaches informed by NLP technology; theories of semantics, pragmatics, and discourse; knowledge of discourse in application domains; artificial intelligence; information retrieval; argumentation theory; and computational models of argumentation.

Call for Papers

ArgMining 2023 invites the submission of long and short papers on substantial, original, and unpublished research in all aspects of argument mining. The workshop solicits long and short papers for oral and poster presentations, as well as demos of argument mining systems and tools.


The topics for submissions include but are not limited to:

Submission Information

Accepted papers will be presented as either oral or poster presentations and will be included in the EMNLP proceedings as workshop papers. ArgMining 2023 follows ACL’s policies for submission, review, and citation. Moreover, authors are expected to adhere to the ethical code set out in the ACL Code of Ethics. Submissions that violate any of these policies will be rejected without review.


SUBMISSION TYPES:

MULTIPLE SUBMISSIONS:

ArgMining 2023 will not consider any paper that is under review in a journal or another conference or workshop at the time of submission, and submitted papers must not be submitted elsewhere during the review period. ArgMining 2023 will also accept submissions of ARR-reviewed papers, provided that the ARR reviews and meta-reviews are available by the ARR commitment deadline (September 15). However, ArgMining 2023 will not accept direct submissions that are actively under review in ARR, or that overlap significantly (>25%) with such submissions.


SUBMISSION FORMAT AND LINK:

All long, short, and demonstration submissions must follow the two-column EMNLP 2023 format. Authors are expected to use the LaTeX or Microsoft Word style templates (https://2023.emnlp.org/calls/style-and-formatting/). Submissions must conform to the official EMNLP style guidelines, which are contained in these templates. Submit your paper in PDF format via https://softconf.com/emnlp2023/ArgMining2023/.

If you want to commit your ARR paper to the workshop, fill in the ARR form via https://softconf.com/emnlp2023/ArgMining2023/.


DOUBLE BLIND REVIEW:

ArgMining 2023 will follow the ACL policies for preserving the integrity of double-blind review for long and short paper submissions. Papers must not include authors’ names and affiliations. Furthermore, self-references or links (such as GitHub repositories) that reveal the authors’ identity, e.g., “We previously showed (Smith, 1991) …”, must be avoided. Instead, use citations such as “Smith previously showed (Smith, 1991) …”. Papers that do not conform to these requirements will be rejected without review.

Papers should not refer, for further detail, to documents that are not available to the reviewers. For example, do not omit or redact important citation information to preserve anonymity. Instead, use third person or named reference to this work, as described above (“Smith showed” rather than “we showed”). If important citations are not available to reviewers (e.g., awaiting publication), these papers should be anonymized and included in the appendix. They can then be referenced from the submission without compromising anonymity.

Papers may be accompanied by a resource (software and/or data) described in the paper, but these resources should also be anonymized. Unlike long and short papers, demo descriptions will not be anonymous. Demo descriptions should include the authors’ names and affiliations, and self-references are allowed.


ANONYMITY PERIOD (taken mostly verbatim from the EMNLP call for papers):

The following rules and guidelines are meant to protect the integrity of double-blind review and ensure that submissions are reviewed fairly. The rules make reference to the anonymity period, which runs from 1 month before the direct submission deadline (starting August 1, 2023) up to the date when your paper is accepted or rejected (October 7, 2023). For papers committed from ARR, the anonymity period starts August 15, 2023. Papers that are withdrawn during this period will no longer be subject to these rules.

You may not make a non-anonymized version of your paper available online to the general community (for example, via a preprint server) during the anonymity period. Versions of the paper include papers having essentially the same scientific content but possibly differing in minor details (including title and structure) and/or in length.

If you have posted a non-anonymized version of your paper online before the start of the anonymity period, you may submit an anonymized version to the conference. The submitted version must not refer to the non-anonymized version, and you must inform the programme chairs that a non-anonymized version exists. You may not update the non-anonymized version during the anonymity period, and we ask you not to advertise it on social media or take other actions that would further compromise double-blind reviewing during the anonymity period. You may make an anonymized version of your paper available (for example, on OpenReview), even during the anonymity period.

For arXiv submissions, August 1, 2023 11:59pm UTC-12h (anywhere on earth) is the latest time the paper can be uploaded if you plan a direct submission to the workshop (or August 15, 2023 for papers from ARR committed to the workshops on September 15, 2023).


BEST PAPER AWARDS:

In order to recognize significant advancements in argument mining science and technology, ArgMining 2023 will include best paper awards. All papers at the workshop are eligible for the best paper awards, and a selection committee consisting of prominent researchers in the fields of interest will select the recipients of the awards.

Important Dates

All deadlines are 11.59 pm UTC -12h (“anywhere on Earth”).

Keynote Speaker

Noam Slonim

IBM Research AI; founder and Principal Investigator of Project Debater


Title and topic of the talk will be announced soon.

Shared Tasks

We are pleased to present two shared tasks as part of ArgMining 2023:

ImageArg-Shared-Task-2023: The First Shared Task in Multimodal Argument Mining

Task Description:

There has been a recent surge of interest in developing methods and corpora to improve and evaluate persuasiveness in natural language applications. However, these efforts have mainly focused on the textual modality, neglecting the influence of other modalities. To address this limitation, we introduced a new multimodal dataset called ImageArg. This dataset pairs persuasive tweets with their associated images, supporting the study of a tweet's stance towards a topic and of the image's contribution to the tweet's persuasiveness. Building on this dataset, we designed two shared tasks.

Participants can choose Task A or Task B, or both.

Task A (Multimodal Argument Stance Classification)
Given a tweet composed of text and an image, predict whether the given tweet Supports or Opposes the given topic.

Task B (Multimodal Image Persuasiveness Classification)
Given a tweet composed of text and image, predict whether the image makes the tweet text more Persuasive or Not.
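For illustration only, the two tasks can be framed as binary classification over tweet-image pairs. The instance layout, label names, and the toy keyword baseline below are all hypothetical; the official data schema and baselines are defined in the shared-task materials.

```python
from dataclasses import dataclass

# Hypothetical instance layout; the official ImageArg release defines its own schema.
@dataclass
class TweetInstance:
    tweet_text: str
    image_path: str   # path to the associated image
    topic: str        # e.g. "gun control"

STANCE_LABELS = ("support", "oppose")                      # Task A label space
PERSUASIVENESS_LABELS = ("persuasive", "not persuasive")   # Task B label space

def predict_stance(inst: TweetInstance) -> str:
    """Toy text-only keyword baseline for Task A.

    A real multimodal system would also encode the image (e.g. with a
    vision model) and fuse it with the text representation.
    """
    return "oppose" if "ban" in inst.tweet_text.lower() else "support"

example = TweetInstance("We must ban assault weapons now.", "img/001.jpg", "gun control")
print(predict_stance(example))  # → oppose
```

The point of the sketch is the task framing: both tasks take the same tweet-image input and differ only in the binary label space being predicted.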

Task Organizers:

Zhexiong Liu, Mohamed Elaraby, Yang Zhong, Diane Litman (University of Pittsburgh)

PragTag-2023: The First Shared Task on Pragmatic Tagging of Peer Reviews

Task Description:

Peer reviews are argumentative texts that discuss the strengths and weaknesses of the paper under review and provide suggestions for improvement. The automatic analysis of intentions in reviewer argumentation has numerous applications, from the analysis of reviewing practices to aggregating information from multiple reviews and assisting junior reviewers. However, reviewing practices vary across fields, and peer reviewing data for training is scarce; this introduces the danger of performance shifts of such automatic analyses across disciplines. With this shared task, we invite the community to explore these challenges, using recently introduced multi-domain corpora of peer reviews.
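As a minimal sketch of the task framing, pragmatic tagging can be cast as sentence-level classification of review text. The tag names and keyword rules below are hypothetical placeholders; the official PragTag label set and data format are specified by the organizers.

```python
# Hypothetical tag inventory; the official PragTag labels are defined
# in the shared-task materials.
PRAGMATIC_TAGS = ("strength", "weakness", "todo", "other")

def tag_sentence(sentence: str) -> str:
    """Toy rule-based tagger: assign one pragmatic tag per review sentence.

    A real system would use a trained sentence classifier and would need
    to generalize across reviewing domains, which is the core challenge
    of the task.
    """
    s = sentence.lower()
    if any(w in s for w in ("well-written", "novel", "strong")):
        return "strength"
    if any(w in s for w in ("unclear", "missing", "weak")):
        return "weakness"
    if any(w in s for w in ("should", "please", "suggest")):
        return "todo"
    return "other"

review = [
    "The paper is well-written and tackles a novel problem.",
    "The evaluation section is unclear.",
    "The authors should add an ablation study.",
]
print([tag_sentence(s) for s in review])  # → ['strength', 'weakness', 'todo']
```

Keyword rules like these break down as soon as the reviewing domain changes, which illustrates why the cross-discipline robustness highlighted above is the interesting part of the task.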

Task Organizers:

Ilia Kuznetsov, Nils Dycke (Technical University of Darmstadt, UKP Lab)

Committee

Organizing Committee

Program Committee

  • Rodrigo Agerri, University of the Basque Country
  • Yamen Ajjour, Leibniz Universität Hannover
  • Khalid Al Khatib, University of Groningen
  • Safi Eldeen Alzi'abi, Isra University
  • Özkan Aslan, Afyon Kocatepe University
  • Roy Bar-Haim, IBM Research AI
  • Miriam Butt, University of Konstanz
  • Elena Cabrio, CNRS, Inria, I3S
  • Claire Cardie, Cornell University
  • Jonathan Clayton, University of Sheffield
  • Johannes Daxenberger, summetix
  • Lorik Dumani, Trier University
  • Roxanne El Baff, German Aerospace Center (DLR)
  • Ivan Habernal, Technische Universität Darmstadt
  • Shohreh Haddadan, University of Luxembourg
  • Yufang Hou, IBM Research AI
  • Xinyu Hua, Bloomberg AI
  • Lea Kawaletz, HHU Düsseldorf
  • Christopher Klamm, University of Mannheim
  • Manika Lamba, University of Illinois Urbana-Champaign
  • Gabriella Lapesa, University of Stuttgart
  • John Lawrence, University of Dundee
  • Beishui Liao, Zhejiang University
  • Diane Litman, University of Pittsburgh
  • Simon Parsons, University of Lincoln
  • Georgios Petasis, NCSR Demokritos, Athens
  • Olesya Razuvayevskaya, University of Sheffield
  • Chris Reed, University of Dundee
  • Patrick Saint-Dizier, IRIT, CNRS
  • Robin Schaefer, University of Potsdam
  • Jodi Schneider, University of Illinois Urbana-Champaign
  • Manfred Stede, University of Potsdam
  • Benno Stein, Bauhaus-Universität Weimar
  • Mohammed Taiye, Linnaeus University
  • Simone Teufel, University of Cambridge
  • Nicolas Turenne, Guangdong University of Foreign Studies
  • Serena Villata, Université de Nice
  • Henning Wachsmuth, Leibniz Universität Hannover
  • Vern R. Walker, Hofstra University
  • Zhongyu Wei, Fudan University
  • Timon Ziegenbein, Leibniz Universität Hannover

Past Workshops

Policy

We abide by the ACL anti-harassment policy.

Sponsors

NAVER IBM