The Second International Workshop on Argumentation and Applications
Keynote Speaker
Prof. Beishui Liao, Zhejiang University
Beishui Liao is a Full Professor of Logic and Computer Science at Zhejiang University, where he has held the prestigious Qiushi Distinguished Professorship since 2019. He earned his Ph.D. in Computer Science from Zhejiang University in 2006 and has since established himself as a leading researcher in AI logic, formal argumentation, and their applications in multi-agent systems, explainable AI, and ethical AI. He currently leads a major National Social Science Fund of China project on Logics for New Generation Artificial Intelligence. He is the Director of the Institute of Logic and Cognition at Zhejiang University and Co-Director of the Zhejiang University-University of Luxembourg Joint Lab on Advanced Intelligent Systems and Reasoning (ZLAIRE). He has been a Guest Professor at the University of Luxembourg since 2014 and has held visiting positions at the University of Texas at Austin, the University of Brescia, the University of Oxford, and the University of Cambridge. An active figure in the academic community, he is an Associate Editor of Annals of Mathematics and Artificial Intelligence, the 'AI Logic' Corner Editor of the Journal of Logic and Computation, and a member of the editorial boards of Argument & Computation and the Journal of Applied Logics. He is also a steering committee member of DEON and COMMA, and in 2015 he co-founded the International Conference on Logic and Argumentation (CLAR). He has published three monographs and numerous papers in leading journals and conferences, including AIJ, JAIR, IJCAI, and KR.
- Title: Argumentation-Enabled Explainable AI
- Abstract: This talk advances argumentation-enabled explainable AI (XAI) through abstract, structured, and quantitative frameworks. Foundational theories include explanation semantics and attack-defense semantics, which formalize justification-centric reasoning. Structured argumentation integrates value-based norms, while quantitative methods combine human-level knowledge with implicit knowledge learned from data, as in e-commerce fraud detection and social media misinformation mitigation. Three approaches are highlighted: (1) rule-based justification, combining inductive logic programming and assumption-based argumentation for high-risk decision models; (2) confidence-aware machine learning enhanced by quantitative argumentation, exemplified by an Automated eXplainable Decision System for fake news detection; and (3) LLM-driven defeasible reasoning, using Chain-of-Thought prompting to align large language models with formal argumentation for transparent inference. Implementations such as the Jiminy Advisor, which mediates moral agreements among stakeholders, and value-driven agents demonstrate ethical and operational solutions. By merging argumentation with AI techniques, we achieve interpretability in fraud prevention, ethical governance, and misinformation mitigation. The talk concludes with insights on balancing computational rigor and societal accountability in complex AI systems.
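
For attendees new to formal argumentation, the minimal sketch below illustrates the kind of attack-defense semantics the talk builds on: it computes the grounded extension of a Dung-style abstract argumentation framework by iterating the characteristic function to its least fixed point. The framework, argument names, and attack relation here are illustrative assumptions for this announcement, not material from the talk.

```python
# Minimal sketch: grounded semantics for an abstract argumentation
# framework (in the style of Dung 1995). Arguments and attacks are
# illustrative, not taken from the keynote.

def grounded_extension(arguments, attacks):
    """Least fixed point of the characteristic function F, where
    F(S) = {a | every attacker of a is counter-attacked by S}."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(a, s):
        # a is acceptable w.r.t. s if each attacker of a is
        # attacked by some argument already in s
        return all(any((d, b) in attacks for d in s) for b in attackers[a])

    s = set()
    while True:
        nxt = {a for a in arguments if defended(a, s)}
        if nxt == s:
            return s
        s = nxt

# Toy framework: a attacks b, b attacks c.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atts)))  # ['a', 'c']
```

In this toy framework, a is unattacked, b is defeated by a, and c is defended because its only attacker b is counter-attacked; richer semantics (preferred, stable, and the quantitative variants mentioned in the abstract) refine this same attack-defense structure.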