TY - GEN
T1 - FME '23
T2 - 31st ACM International Conference on Multimedia 2023
AU - Davison, Adrian K.
AU - Li, Jingting
AU - Yap, Moi Hoon
AU - See, John
AU - Cheng, Wen-Huang
AU - Li, Xiaobai
AU - Hong, Xiaopeng
AU - Wang, Su-Jing
N1 - Funding Information:
We would like to thank the ACM MM ’23 conference organisers for agreeing to host our workshop and for their support, and all reviewers for their time and helpful contributions. This work is supported by grants from the National Natural Science Foundation of China (62106256, U19B2032, 62276252, 62076195), the Academy of Finland (Grant 323287), and the Ministry of Science and Technology of Taiwan (MOST-109-2223-E-009-002-MY3, MOST-110-2634-F-007-015).
Organiser Biographies:
Adrian K. Davison is currently a Lecturer in the Department of Computing and Mathematics at Manchester Metropolitan University. He received his BSc (Hons) in Multimedia Computing in 2012 and his PhD in micro-facial expression analysis in 2016, both from Manchester Metropolitan University. From 2017 to 2023 he was a research associate at the University of Manchester, focusing on medical imaging of children’s hip disease and systemic sclerosis-related digital ulcers. He co-created the SAMM dataset.
Jingting Li is currently an associate researcher at the Institute of Psychology (IP), Chinese Academy of Sciences (CAS). She received her PhD in Signal, Image, Vision from CentraleSupélec in 2019 and worked as a postdoc at IP, CAS from March 2020 to June 2022. She has published several papers related to micro-expressions in IEEE TPAMI, TAC, TIP and other national and international journals and conferences. Her current research interests include image processing, computer vision and pattern recognition, especially facial micro-expression analysis.
Moi Hoon Yap is Professor of Image and Vision Computing at Manchester Metropolitan University and a Royal Society Industry Fellow with Image Metrics Ltd. She received her PhD in Computer Science from Loughborough University in 2009. Her research is funded by the Royal Society, EU funding, EPSRC, Cancer Research UK, Innovate UK, and industry. Her expertise is in computer vision, deep learning, image/video processing for face and gesture analysis, and medical image analysis. She co-created the SAMM dataset.
John See is currently an Associate Professor at the School of Mathematical and Computing Sciences, Heriot-Watt University (Malaysia Campus). He received his PhD in Computer Science, MEngSc, and BEng degrees from Multimedia University (MMU), Malaysia, where he previously led the Visual Processing Laboratory under the Centre for Visual Computing. From 2017 to 2019 he was a Visiting Research Fellow at Shanghai Jiao Tong University, China. Dr. See has published more than 90 articles in reputable journals and conferences such as IEEE TPAMI, TAC, TMM, CVPR, ECCV, ICCV, and AAAI. His research interests include facial micro-expressions, affective computing, computational aesthetics, and deep learning.
Wen-Huang Cheng is Distinguished Professor with the Institute of Electronics, National Yang Ming Chiao Tung University (NYCU), Hsinchu, Taiwan. He is also Jointly Appointed Professor with the Artificial Intelligence and Data Science Program, National Chung Hsing University (NCHU), Taichung, Taiwan. Before joining NYCU, he led the Multimedia Computing Research Group at the Research Center for Information Technology Innovation (CITI), Academia Sinica, Taipei, Taiwan, from 2010 to 2018. His research interests include multimedia, artificial intelligence, computer vision, and machine learning.
Xiaobai Li is an assistant professor at the Center for Machine Vision and Signal Analysis (CMVS), University of Oulu, Finland. She received her PhD from CMVS in 2017. Her research focuses on analysing subtle information from facial videos, including micro-expression analysis, remote physiological signal measurement, and related applications. She was a funded Academy of Finland postdoctoral researcher and leads multiple projects funded by Finnish foundations. She is an IEEE Senior Member and currently serves as an Associate Editor for IEEE Transactions on Circuits and Systems for Video Technology.
Xiaopeng Hong is currently a professor at Harbin Institute of Technology (HIT), P. R. China. He was a distinguished investigator at Xi’an Jiaotong University, P. R. China, until October 2021, and a senior researcher/adjunct professor with the Center for Machine Vision and Signal Analysis, University of Oulu, Finland, until February 2019. His current research interests include intelligent security, micro-expression analysis, and visual surveillance.
Su-Jing Wang is an Associate Researcher at IP, CAS. He received his PhD from the College of Computer Science and Technology, Jilin University, in 2012, was a postdoctoral researcher at IP, CAS from 2012 to 2015, and joined the Chinese Academy of Sciences in July 2015. His current research interests include pattern recognition and machine learning, especially micro-expression analysis. He has published more than 50 scientific papers in important national and international journals and conferences, including IEEE TPAMI, TAC, TIP, TNN, and ECCV. Since 2014 he has served as an associate editor of Neurocomputing (Elsevier). He won the first prize of the 8th Wu Wenjun Artificial Intelligence Science and Technology Award in 2018, was called the "Chinese Hawking" by the Xinhua News Agency, and was selected as one of the top 2% of scientists in the world in 2020 for "Impact of the Year".
Publisher Copyright:
© 2023 Owner/Author.
PY - 2023/10/27
Y1 - 2023/10/27
N2 - Micro-expressions are extremely brief and subtle facial movements that are difficult to detect and often reveal an individual's genuine emotions. They are important cues for understanding real human emotions and can be used for non-contact, non-perceptual deception detection or abnormal emotion recognition, with broad application prospects in national security, judicial practice, health prevention, and clinical practice. However, micro-expression feature extraction and learning are highly challenging because micro-expressions are typically short in duration, low in intensity, and locally asymmetric across the face. In addition, intelligent micro-expression analysis based on deep learning is hampered by relatively small data samples: not only is micro-expression elicitation very difficult, but annotation is also time-consuming and laborious. More importantly, the mechanism by which micro-expressions are generated is not yet clear, which hinders their application in real-world scenarios. FME'23 is the inaugural workshop in this area of research, aiming to promote interaction between researchers and scholars within this niche field. This year we hope to discuss the growing ethical conversation around the use of face data and how we can reach a consensus on micro-expression standards within affective computing.
AB - Micro-expressions are extremely brief and subtle facial movements that are difficult to detect and often reveal an individual's genuine emotions. They are important cues for understanding real human emotions and can be used for non-contact, non-perceptual deception detection or abnormal emotion recognition, with broad application prospects in national security, judicial practice, health prevention, and clinical practice. However, micro-expression feature extraction and learning are highly challenging because micro-expressions are typically short in duration, low in intensity, and locally asymmetric across the face. In addition, intelligent micro-expression analysis based on deep learning is hampered by relatively small data samples: not only is micro-expression elicitation very difficult, but annotation is also time-consuming and laborious. More importantly, the mechanism by which micro-expressions are generated is not yet clear, which hinders their application in real-world scenarios. FME'23 is the inaugural workshop in this area of research, aiming to promote interaction between researchers and scholars within this niche field. This year we hope to discuss the growing ethical conversation around the use of face data and how we can reach a consensus on micro-expression standards within affective computing.
KW - affective computing
KW - micro-expression
KW - multi-modality
UR - http://www.scopus.com/inward/record.url?scp=85179551206&partnerID=8YFLogxK
U2 - 10.1145/3581783.3610948
DO - 10.1145/3581783.3610948
M3 - Conference contribution
AN - SCOPUS:85179551206
SP - 9736
EP - 9738
BT - MM '23: Proceedings of the 31st ACM International Conference on Multimedia
PB - Association for Computing Machinery
Y2 - 29 October 2023 through 3 November 2023
ER -