Animating with style: defining expressive semantics of motion
  • Authors: Klaus Förger; Tapio Takala
  • Keywords: Computer animation; Human motion; Motion style; Motion synthesis; Style vector; Feature extraction; Feature selection; Verbal description of motion style
  • Journal: The Visual Computer
  • Publication date: February 2016
  • Volume: 32
  • Issue: 2
  • Pages: 191-203
  • Author affiliations: Klaus Förger (1); Tapio Takala (1)

    1. Department of Computer Science, Aalto University, Otaniementie 17, 02150 Espoo, Finland
  • Journal category: Computer Science
  • Journal subjects: Computer Graphics; Computer Science, general; Artificial Intelligence and Robotics; Image Processing and Computer Vision
  • Publisher: Springer Berlin / Heidelberg
  • ISSN: 1432-2315
Abstract
Actions performed by a virtual character can be controlled with verbal commands such as ‘walk five steps forward’. Similar control of the motion style, meaning how the actions are performed, is complicated by the ambiguity of describing individual motions with phrases such as ‘aggressive walking’. In this paper, we present a method for controlling motion style with relative commands such as ‘do the same, but more sadly’. Based on acted example motions, comparative annotations, and a set of calculated motion features, relative styles can be defined as vectors in the feature space. We present a new method for creating these style vectors by identifying which features are essential for a style to be perceived and eliminating those that show only incidental correlations with the style. We show with a user study that our feature selection procedure is more accurate than earlier methods for creating style vectors, and that the style definitions generalize across different actors and annotators. We also present a tool enabling interactive control of parametric motion synthesis by verbal commands. As the control method is independent of the generation of motion, it can be applied to virtually any parametric synthesis method.
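The core idea in the abstract, that a relative style command moves a motion along a learned direction in feature space, can be sketched as follows. This is a minimal illustration under assumptions: the two toy features, the mean-difference estimate of the style direction, and the function names are hypothetical, not the paper's actual feature set or feature selection procedure.

```python
import numpy as np

def style_vector(neutral_feats, styled_feats):
    """Estimate a style direction as the mean difference between
    styled and neutral example motions (rows = motions, cols = features)."""
    return styled_feats.mean(axis=0) - neutral_feats.mean(axis=0)

def apply_style(current, vector, amount=1.0):
    """'Do the same, but more <style>': shift the current motion's
    feature-space point along the style direction by the given amount."""
    return current + amount * vector

# Toy example with two assumed features, e.g. (speed, posture droop).
neutral = np.array([[1.0, 0.0], [1.2, 0.1]])
sad     = np.array([[0.6, 0.5], [0.8, 0.7]])

v_sad = style_vector(neutral, sad)       # learned "sadness" direction
walk = np.array([1.0, 0.0])
sadder_walk = apply_style(walk, v_sad)   # "do the same, but more sadly"
```

Because the command only manipulates the feature-space point handed to the synthesizer, the same vector can in principle drive any parametric synthesis method, which is the independence property the abstract claims.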
