Abstract
Facial expressions and their dynamics play an important role in interpreting and conveying emotions. Recently, facial expression analysis has become an active topic in both psychology and computer vision. Most previous investigations have focused on recognising static images with intense expressions. In contrast to this previous work, we present an expression synthesis method for both expression classification and intensity estimation. By synthesising expression manifolds from neutral faces, the dynamic variations of facial expressions can be modelled and analysed. Eigentransformation is utilised on both facial shape and expression details to generate novel expressions. Expression classification is performed on training sets expanded with synthesised expression landmarks, and expression intensity is estimated from the synthesised expression manifolds. Comprehensive experiments conducted on the extended Cohn-Kanade database are reported.
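The core synthesis step can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes eigentransformation means projecting a neutral landmark shape onto the eigenspace of training neutral shapes, recovering per-sample reconstruction weights, and applying those same weights to the paired expressive training shapes to synthesise a novel expression. All data, dimensions, and the function name `synthesise` are hypothetical.

```python
import numpy as np

# Hedged sketch of eigentransformation-style expression synthesis.
# Assumption: landmark shapes are flattened (x, y) vectors, and each
# training neutral shape is paired with an expressive shape.
rng = np.random.default_rng(0)
n_train, n_points = 20, 68                 # training faces, landmarks per face
neutral = rng.normal(size=(n_train, 2 * n_points))            # neutral shapes
express = neutral + rng.normal(0.1, 0.05, neutral.shape)      # paired expressive shapes

mu = neutral.mean(axis=0)
# Eigen-shapes of the neutral training set (rows of Vt span the shape space).
_, _, Vt = np.linalg.svd(neutral - mu, full_matrices=False)

def synthesise(neutral_input):
    # Project the input neutral shape onto the neutral eigenspace.
    c = Vt @ (neutral_input - mu)
    # Express those eigen-coefficients as weights over the training samples.
    weights = np.linalg.pinv((neutral - mu) @ Vt.T) @ c
    # Transfer the same weights to the expressive shapes to synthesise
    # the corresponding novel expression.
    return express.mean(axis=0) + weights @ (express - express.mean(axis=0))

# Usage: perturb a training neutral shape and synthesise its expression.
novel = synthesise(neutral[0] + rng.normal(0, 0.01, 2 * n_points))
print(novel.shape)
```

The design choice illustrated here is that the mapping is learned entirely in the eigenspace of neutral shapes, so a single neutral input is enough to drive synthesis of the expressive counterpart.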