13 August 2024. !!! Attend ArgMining 2024 virtually on UNDERLINE or join us at Lotus Suite 12 !!!
13 August 2024. The ArgMining 2024 proceedings are now available online.
29 July 2024. Check our panelists! See you in August!
15 July 2024. The ArgMining 2024 program is out! See you in August!
4 July 2024. We are happy to announce our Keynote Speaker: Yufang Hou from IBM Research Europe, Ireland.
17 May 2024. !!! Commitment Deadline Extension !!! The commitment deadline is extended from the 24th of May AOE to the 31st of May AOE. Commit your paper to ArgMining 2024.
17 May 2024. !!! Deadline Extension !!! The paper submission deadline is extended from the 17th of May AOE to the 20th of May AOE.
16 May 2024. If your paper has already been reviewed as part of ARR, you can now commit it to ArgMining 2024 by the 24th of May AOE!
21 February 2024. The Perspective Argument Retrieval Shared Task has launched its website!
19 February 2024. The DialAM Shared Task has launched its website!
7 February 2024. Check the Important Dates! Paper submission via OpenReview by May 17, 2024.
7 February 2024. The 1st Call for Papers is out!
7 February 2024. We are excited to announce the two Shared Tasks @ ArgMining 2024.
7 December 2023. The Call for Shared Tasks is out.
7 December 2023. The official ArgMining 2024 website is launched.
Argument Mining (also known as “argumentation mining”) is an emerging research area within computational linguistics that initially focused on automatically identifying and classifying argument elements across text genres such as legal documents, news articles, online debates, and scholarly data. In recent years, the field (broadly, Computational Argumentation) has grown to explore argument quality and synthesis on many levels. It offers practical applications such as argument-focused search and debating technologies, e.g., IBM Project Debater. The growing interest in computational argumentation has led to several tutorials at major NLP conferences.
While basic tasks such as argument element segmentation and classification are maturing, many current and emerging tasks across diverse genres and topics remain to be solved, amid global polarization and the rise of Large Language Models.
Program
09:00–09:10 Opening Remarks
09:10–10:30 Paper Session I
Session Chair: Anita de Waard, Elsevier
10:30–11:00 Coffee Break
11:00–12:30 Panel Session: The Human in Computational Argumentation
Moderated by Henning Wachsmuth
12:30–14:00 Lunch Break
14:00–15:00 Shared Task Session
15:00–15:30 Paper Session II
Session Chair: Chris Reed, University of Dundee
15:30–16:00 Coffee Break
16:00–17:00 Keynote: Reconstructing Fallacies in Misrepresented Science and Argument Mining in the Wild, Yufang Hou
17:00–17:40 Poster Session (Shared Task Papers + Main Workshop Papers)
17:40–17:55 Closing Remarks + Best Paper Award
Panel Session
The Human in Computational Argumentation
This panel session will discuss the role of the human in computational argumentation, exploring ways to create more representative, fair, and effective computational models of argumentation that better capture the complexities of human discourse. The discussion will focus on two strategies for capturing human context, views, and preferences: perspectivism and personalization. While personalization aims to integrate information about the speaker and target audience (e.g., values and culture) when training or instructing language models, perspectivism aims to ensure that the views captured by models are representative of the relevant social groups. The panel will examine the consequences, opportunities, and challenges of adopting perspectivism and personalization in computational argumentation.
Panelists
Keynote Speaker
Yufang Hou, IBM Research Europe - Ireland
Title: Reconstructing Fallacies in Misrepresented Science and Argument Mining in the Wild
About the Talk: In this talk, Yufang Hou will discuss her recent work on applying and investigating language model (LM)-based argument mining technologies in real-world scenarios, including fact-checking misinformation that misrepresents scientific publications and tackling traditional argument mining tasks in various out-of-distribution (OOD) scenarios. First, she will discuss her work on reconstructing and grounding fallacies in misrepresented science, where health-related misinformation claims often falsely cite a credible biomedical publication as evidence. She will present a new argumentation-theoretic model of fallacious reasoning, together with a new dataset of real-world misinformation that misrepresents biomedical publications. In the second part of the talk, she will discuss her findings on LMs' capabilities in three OOD scenarios (topic shift, domain shift, and language shift) across eleven argument mining tasks.
About the Speaker: Yufang Hou is a research scientist at IBM Research Ireland. She is also a visiting professor and co-supervisor at the UKP Lab, TU Darmstadt. Her research interests include referential discourse modelling, argument mining, and scholarly document processing. Yufang received the WoC Technical Innovation in Industry Award in 2020. She has served in numerous roles for ACL conferences, most recently as a Senior Area Chair for EMNLP 2022-2024 and NAACL 2024. She co-organized the 8th Workshop on Argument Mining, the first Workshop on Argumentation Knowledge Graphs, the Key Point Analysis Shared Task 2021, and Dagstuhl Seminar 22432 on "Towards a Unified Model of Scholarly Argumentation".
Important Dates
- Workshop: August 15, 2024
- Direct paper submission due (OpenReview): May 20, 2024 AOE (extended from May 17, 2024)
- Commitment deadline for ARR papers (OpenReview): May 31, 2024 AOE (extended from May 24, 2024)
- Notification of acceptance: June 17, 2024
- Camera-ready papers due: July 1, 2024
All deadlines are 11:59 pm UTC-12 (“anywhere on Earth”).
Submission Topics
The topics for submissions include but are not limited to:
- Identification, Assessment, and Analysis of Arguments
  - Identification of argument components (e.g., premises and conclusions)
  - Structure analysis of arguments within and across documents
  - Relation identification between arguments and counterarguments (e.g., support and attack)
  - Creation and evaluation of argument annotation schemes, relationships to linguistic and discourse annotations, (semi-)automatic argument annotation methods and tools, and creation of argumentation corpora
  - Assessment of arguments with respect to various properties (e.g., stance, clarity)
- Generation of Arguments, Multi-modal and Multi-lingual Argument Mining
  - Automatic generation of arguments and their components
  - Consideration of discourse goals in argument generation
  - Argument mining and generation from multi-modal/multi-lingual data
- Mining and Analysis of Different Genres and Domains of Arguments
  - Argument mining in specific genres and domains (e.g., education, law, scientific writing)
  - Analysis of unique styles within genres (e.g., short informal text, highly structured writing)
- Knowledge Integration, Information Retrieval, and Real-world Applications
  - Integration of commonsense and domain knowledge into argumentation models
  - Combination of information retrieval methods with argument mining
  - Real-world applications, including argument web search, opinion analysis and summarization, and misinformation detection
- Ethical Considerations and Future Reflections
  - Reflection on the ethical aspects and societal impact of argument mining methods
  - Reflection on the future of argument mining in light of the fast advancement of large language models (LLMs)
CALL FOR PAPERS
The Workshop on Argument Mining provides a regular forum for academic and industry researchers to present and discuss cutting-edge research in argument mining (a.k.a. argumentation mining). Continuing a series of ten successful previous workshops, this edition welcomes the submission of long, short, and demo papers, and will also feature two shared tasks and a keynote talk.
Check DATES and TOPICS.
Submission Details
The organizing committee welcomes the submission of long papers, short papers, and demo descriptions. Accepted papers will be presented as oral or poster presentations and included in the ACL proceedings as workshop papers.
- Long paper submissions must describe substantial, original, completed, and unpublished work. Wherever appropriate, concrete evaluation and analysis should be included. Long papers must be at most eight pages, including title, text, figures, and tables. An unlimited number of pages is allowed for references. Two additional pages are allowed for appendices, and an extra page is allowed in the final version to address reviewers’ comments.
- Short paper submissions must describe original and unpublished work. Please note that a short paper is not a shortened long paper. Instead, short papers should have a point that can be made in a few pages, such as a small, focused contribution, a negative result, or an interesting application nugget. Short papers must be at most four pages, including title, text, figures, and tables. An unlimited number of pages is allowed for references. One additional page is allowed for the appendix, and an extra page is allowed in the final version to address reviewers’ comments.
- Demo papers must be at most four pages, including title, text, examples, figures, tables, and references. A separate one-page document should be provided to the workshop organizers for demo descriptions, specifying furniture and equipment needed for the demo.
Multiple Submissions
ArgMining 2024 will not consider any paper under review in a journal or another conference or workshop at the time of submission, and submitted papers must not be submitted elsewhere during the review period.
ArgMining 2024 will also accept submissions of ARR-reviewed papers, provided that the ARR reviews and meta-reviews are available by the ARR commitment deadline (May 24). However, ArgMining 2024 will not accept direct submissions that are actively under review in ARR, or that overlap significantly (>25%) with such submissions.
Submission Format
All long, short, and demonstration submissions must follow the two-column ACL 2024 format. Authors are expected to use the LaTeX or Microsoft Word style template. Submissions must be electronic and in PDF format.
Submission Link
Authors must fill in the submission form in the OpenReview system and upload a PDF of their paper by May 20, 2024, 11:59 pm UTC-12 (anywhere on Earth; extended from May 17, 2024). [Submission Link]
Double Blind Review
ArgMining 2024 will follow the ACL policies preserving the integrity of double-blind review for long and short paper submissions. Papers must not include authors' names and affiliations. Furthermore, self-references or links (such as GitHub) that reveal the author’s identity, e.g., “We previously showed (Smith, 1991) …” must be avoided. Instead, use citations such as “Smith previously showed (Smith, 1991) …” Papers that do not conform to these requirements will be rejected without review. Papers should not refer, for further detail, to documents that are not available to the reviewers. For example, do not omit or redact important citation information to preserve anonymity. Instead, use the third person or named reference to this work, as described above (“Smith showed” rather than “we showed”). Papers may be accompanied by a resource (software and/or data) described in the paper, but these resources should also be anonymized.
Unlike long and short papers, demo descriptions will not be anonymous. Demo descriptions should include the authors’ names and affiliations, and self-references are allowed.
No Anonymity Period (taken largely verbatim from the ACL call for papers)
There is no anonymity period or limitation on posting or discussing non-anonymous preprints while the work is under peer review.
Best Paper Award
To recognize significant advancements in argument mining science and technology, ArgMining 2024 will present a Best Paper Award. All papers at the workshop are eligible, and a selection committee of prominent researchers in the field will select the award recipients.
Shared Tasks
The Argument Mining Workshop will host two shared tasks.
Organizers: Neele Falk (University of Stuttgart) and Andreas Waldis (Ubiquitous Knowledge Processing (UKP) Lab, Technical University of Darmstadt, and Information Systems Lab, Lucerne University of Applied Sciences and Arts).
Overview: The "Perspective Argument Retrieval" task addresses the often-overlooked challenge of incorporating socio-demographic information (such as political views, age, and gender) in argument retrieval. By focusing on these aspects, we acknowledge their potential latent influence on argumentation. With this shared task, we invite the community to develop methods that concentrate on this crucial area and advance state-of-the-art retrieval models by considering the perspective of societal diversity.
All the details regarding the shared task can be found at the Perspective Argument Retrieval Shared Task Website.
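To make the setup concrete, here is a minimal, hypothetical sketch (in Python) of perspective-aware retrieval: candidate arguments are ranked by a combination of topical relevance and overlap with a target socio-demographic profile. The corpus, the profile fields, the bag-of-words similarity, and the weighting are illustrative assumptions only, not the official task format or baseline; please refer to the shared task website for the actual data and evaluation protocol.

```python
# Illustrative sketch only: rank arguments by topical relevance plus overlap with a
# target socio-demographic perspective. All names and data below are hypothetical.
from collections import Counter
from dataclasses import dataclass, field
from math import sqrt

@dataclass
class Argument:
    text: str
    perspective: dict = field(default_factory=dict)  # e.g., {"age": "18-34", "leaning": "left"}

def bow(text: str) -> Counter:
    """Toy bag-of-words representation (a real system would use a dense retriever)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na, nb = sqrt(sum(v * v for v in a.values())), sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, target: dict, corpus: list[Argument], alpha: float = 0.7, k: int = 3):
    """Score = alpha * topical relevance + (1 - alpha) * perspective overlap."""
    q = bow(query)
    scored = []
    for arg in corpus:
        relevance = cosine(q, bow(arg.text))
        overlap = sum(arg.perspective.get(key) == val for key, val in target.items()) / max(len(target), 1)
        scored.append((alpha * relevance + (1 - alpha) * overlap, arg))
    return [arg for _, arg in sorted(scored, key=lambda s: s[0], reverse=True)[:k]]

corpus = [
    Argument("Public transport should be free to cut emissions", {"age": "18-34", "leaning": "left"}),
    Argument("Free public transport is too costly for taxpayers", {"age": "55+", "leaning": "right"}),
]
print(retrieve("should public transport be free", {"age": "18-34"}, corpus, k=1)[0].text)
```

In practice, the word-overlap scorer would be replaced by a neural retriever, and the perspective signal could be used for filtering, re-ranking, or conditioning the model rather than the simple linear mixture shown here.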
Organizers: Ramon Ruiz-Dolz, John Lawrence, Ella Schad, and Chris Reed from the Centre for Argument Technology at the University of Dundee.
Overview: With DialAM-2024, we propose the first shared task in dialogue argument mining in which argumentation and dialogue information are modelled together in a domain-independent framework. The Inference Anchoring Theory (IAT) framework makes it possible to obtain homogeneous annotations of dialogue argumentation, including relevant information and structural data from both speech and argumentation, regardless of the domain. This allows a more complete analysis of argumentation in dialogues, together with a consistent cross-domain evaluation of the resulting argument mining systems. DialAM-2024 consists of two sub-tasks: the identification of propositional (argumentative) relations and the identification of illocutionary (speech-act) relations, as illustrated in the sketch after the task list below. For both sub-tasks, all information belonging to argumentation and dialogue will be available for the development of the submitted systems. We invite the community to participate in DialAM-2024 and explore how additional information from the dialogue can be integrated into the argument mining process, taking a step beyond sequence-modelling approaches, where much of the information relevant to argumentation remains implicit in the natural language.
- Task 1: Identification of Propositional Relations: In the first task, the goal is to detect the argumentative relations that hold between the propositions identified and segmented in the argumentative dialogue. These relations are Inference (RA), Conflict (CA), and Rephrase (MA).
- Task 2: Identification of Illocutionary Relations: In the second task, the goal is to detect the illocutionary relations that hold between the locutions uttered in the dialogue and the argumentative propositions associated with them, such as Asserting, Agreeing, Arguing, or Disagreeing, among others.
All the details regarding the shared task can be found at the DialAM Shared Task Website.
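For illustration only, the sketch below frames the two sub-tasks as pair classification: sub-task 1 predicts a propositional relation (Inference, Conflict, Rephrase) between two propositions, and sub-task 2 predicts the illocutionary relation anchoring a proposition in a locution. The data structures and keyword heuristics are hypothetical stand-ins for the official DialAM data format and for a trained model.

```python
# Illustrative sketch only: DialAM-2024 sub-tasks as pair classification.
# Data structures and heuristics are hypothetical; see the official website for the real format.
from dataclasses import dataclass

@dataclass
class Locution:
    speaker: str
    text: str

@dataclass
class Proposition:
    text: str

# Label inventories mentioned in the task description (plus a None class for sub-task 1).
PROP_RELATIONS = ["Inference (RA)", "Conflict (CA)", "Rephrase (MA)", "None"]
ILLOC_RELATIONS = ["Asserting", "Agreeing", "Arguing", "Disagreeing"]

def classify_propositional(premise: Proposition, conclusion: Proposition) -> str:
    """Sub-task 1 stub: predict the relation between two propositions (toy heuristic)."""
    if premise.text.lower().startswith("it is not true"):
        return "Conflict (CA)"
    return "Inference (RA)"  # placeholder default; a real system would use a trained classifier

def classify_illocutionary(loc: Locution, prop: Proposition) -> str:
    """Sub-task 2 stub: predict the speech act anchoring a proposition in a locution."""
    lowered = loc.text.lower()
    if "because" in lowered:
        return "Arguing"
    if lowered.startswith(("no,", "i disagree")):
        return "Disagreeing"
    return "Asserting"

dialogue = [Locution("A", "Taxes should rise because services are underfunded."),
            Locution("B", "No, I disagree.")]
props = [Proposition("Taxes should rise"),
         Proposition("Services are underfunded"),
         Proposition("It is not true that taxes should rise")]
print(classify_illocutionary(dialogue[0], props[0]))   # -> Arguing
print(classify_propositional(props[1], props[0]))      # -> Inference (RA)
print(classify_propositional(props[2], props[0]))      # -> Conflict (CA)
```

A participating system would replace these keyword rules with a model trained on the annotated IAT dialogue maps provided by the organizers.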
Committee
Organizing Committee
Program Committee
- Rodrigo Agerri, University of the Basque Country
- Khalid Al-Khatib, University of Groningen
- Laura Alonso Alemany, Universidad Nacional de Córdoba
- Tariq Alhindi, Mohamed bin Zayed University of AI
- Emily Allaway, Columbia University
- Milad Alshomary, Columbia University
- Özkan Aslan, Afyon Kocatepe University
- Marie Bexte, Fernuniversität Gesamthochschule Hagen
- Eduardo Blanco, University of Arizona
- Miriam Butt, Universität Konstanz
- Elena Cabrio, Université Côte d'Azur
- Chung-Chi Chen, AIST, National Institute of Advanced Industrial Science and Technology
- Elena Chistova, Federal Research Center Computer Science and Control, RAS
- Philipp Cimiano, Bielefeld University
- Johannes Daxenberger, summetix GmbH
- Mohamed Elaraby, University of Pittsburgh
- Neele Falk, University of Stuttgart
- Jia Guo, National University of Singapore
- Shohreh Haddadan, Moffitt Cancer Research Center
- Annette Hautli-Janisz, Universität Passau
- Philipp Heinisch, Universität Bielefeld
- Daniel Hershcovich, University of Copenhagen
- Andrea Horbach, Universität Hildesheim
- Xinyu Hua, Bloomberg
- Christopher Klamm, Universität Mannheim
- Gabriella Lapesa, GESIS Leibniz Institute for the Social Sciences
- John Lawrence, University of Dundee
- Boyang Liu, University of Manchester
- Ziqian Luo, Oracle
- Joonsuk Park, University of Richmond
- Simon Parsons, University of Lincoln
- Olesya Razuvayevskaya, University of Sheffield
- Chris Reed, University of Dundee
- Myrthe Reuver, Vrije Universiteit Amsterdam
- Julia Romberg, GESIS Leibniz Institute for the Social Sciences
- Allen G Roush, Oracle
- Ramon Ruiz-Dolz, University of Dundee
- Florian Ruosch, Department of Informatics, University of Zurich
- Sougata Saha, Mohamed bin Zayed University of Artificial Intelligence
- Patrick Saint-Dizier, CNRS
- Robin Schaefer, Universität Potsdam
- Jodi Schneider, University of Illinois, Urbana Champaign
- Lutz Schröder, Friedrich-Alexander Universität Erlangen-Nürnberg
- Arushi Sharma, University of Pittsburgh
- Manfred Stede, Universität Potsdam
- Benno Stein, Bauhaus Universität Weimar
- Aswathy Velutharambath, University of Stuttgart
- Henning Wachsmuth, Leibniz Universität Hannover
- Vern R. Walker, Hofstra University
- Ruifeng Xu, Harbin Institute of Technology
- Xiutian Zhao, University of Edinburgh
- Yang Zhong, University of Pittsburgh
- Timon Ziegenbein, Universität Hannover
Best Paper Committee
- Eduardo Blanco, University of Arizona
- Gabriella Lapesa, GESIS Leibniz Institute for the Social Sciences
- Benno Stein, Bauhaus Universität Weimar