Track Chair: Dr. Mei Qiu, Xiamen University, China

With the rapid development of generative AI, synthetic text, images, and videos are increasingly indistinguishable from human-created content, raising significant concerns about authenticity, security, and trust. The ability to detect AI-generated content and ensure responsible AI deployment has become a critical research priority. This workshop focuses on AI-generated content detection across text, image, and video modalities, as well as broader AI security challenges. It aims to bring together researchers working on detection algorithms, robustness evaluation, multimodal verification, model attribution, and AI safety mechanisms to advance trustworthy and secure generative AI systems.

Topics

1. Detection of AI-generated text (LLM output detection, stylometry, semantic consistency analysis)
2. Deepfake image, video and audio detection
3. Multimodal AI-generated content verification
4. Cross-model generalization and robustness evaluation
5. Adversarial attacks and defense in generative AI detection
6. Watermarking and post-hoc detection mechanisms
7. Model fingerprinting and source attribution
8. Hallucination detection and content authenticity assessment
9. AI misuse monitoring and content moderation systems

SUBMISSION AND REGISTRATION

* Papers may be submitted to NLPAI 2026 through the Electronic Submission System or via the conference email: nlpai@cbees.net. For paper publication, a full paper is required; for presentation only, without publication, an abstract is sufficient.

* Those who do not wish to publish or present a paper are welcome to attend NLPAI 2026 as listeners. Registration must be completed through the Online Registration System before the registration deadline.