Research Demos
  • Happy Companion [video]


  • An HCI approach to emotion regulation

    Happy Companion presents a virtual psychologist who detects your emotional state and responds expressively through an audio-visual interface in real time. It is a software prototype that simulates the emotion regulation of a traditional psychological consultation through human-computer interaction.

    The virtual psychologist is a 3D talking avatar who can speak and behave expressively. Under her guidance, you can take part in four basic tasks to regulate your emotions: Breathing Relaxation, Muscle Relaxation, Face Learning, and Joke & Game.

    The system integrates techniques from our research on affective computing, including emotion appraisal, detection, recognition, and expression.
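
    As a rough illustration of the interaction cycle behind such a system (sense the user's emotional state, pick a regulation task, have the avatar respond), here is a minimal sketch in Python; all names and the task-selection policy are hypothetical, not the actual Happy Companion code.

        # Hypothetical sketch of the sense -> regulate loop; illustrative only.
        import random

        TASKS = ["Breathing Relaxation", "Muscle Relaxation",
                 "Face Learning", "Joke & Game"]

        def detect_emotion():
            # Stand-in for the real audio-visual emotion recognizer.
            return random.choice(["calm", "stressed", "sad"])

        def choose_task(emotion):
            # Toy policy: relaxation for stress, humor for sadness.
            if emotion == "stressed":
                return random.choice(TASKS[:2])
            if emotion == "sad":
                return TASKS[3]
            return TASKS[2]

        for _ in range(3):
            emotion = detect_emotion()
            print(f"detected {emotion!r} -> guiding user through {choose_task(emotion)!r}")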

  • Emotion Sensor [video]

  • A mirror to see your emotions

    Emotion Sensor is a real-time facial expression analysis and synthesis system based on MPEG-4 FAPs. It captures the user's face from a web camera (bottom-left corner of the system snapshot), then locates several key facial features (eyebrows, eyes, mouth, etc.) in each captured frame. The MPEG-4 facial animation parameters are then extracted and smoothed, and used to animate the 2D/3D virtual avatars so that they mimic your facial expressions, just like looking into a mirror.

    Three kinds of virtual avatar are presented (upper part of the system snapshot): a 3D talking head, a real human face image, and a cartoon face image transformed from the real human face image. A virtual desktop (bottom-right corner of the system snapshot) also changes dynamically according to the emotional state recognized from the user's facial expression.

    The system integrates techniques for 2D/3D facial animation, facial feature tracking, FAP extraction, and more.
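
    In spirit, the per-frame loop looks like the sketch below. The capture and extraction functions are placeholders for the real tracker, and exponential smoothing is one plausible choice for the smoothing step, not necessarily the one the system uses.

        import random

        NUM_FAPS = 68  # MPEG-4 defines 68 facial animation parameters

        def capture_frame():
            return object()  # stand-in for a webcam frame

        def extract_faps(frame):
            # Stand-in for feature location + FAP extraction.
            return [random.uniform(-1.0, 1.0) for _ in range(NUM_FAPS)]

        def smooth(prev, cur, alpha=0.3):
            # Exponential smoothing suppresses frame-to-frame jitter.
            return [alpha * c + (1 - alpha) * p for p, c in zip(prev, cur)]

        def animate_avatars(faps):
            # Stand-in for driving the 3D head, photo, and cartoon avatars.
            print("mean |FAP|:", sum(abs(f) for f in faps) / len(faps))

        faps = [0.0] * NUM_FAPS
        for _ in range(5):  # one iteration per captured frame
            faps = smooth(faps, extract_faps(capture_frame()))
            animate_avatars(faps)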

  • Hong Kong Tourism Guider [video]
  • Hong Kong Tourism Guider is a Chinese-speaking 3D talking avatar who introduces famous Hong Kong scenery. While the avatar talks, expressive prosodic movements such as head nods and eyebrow raises are simulated, driven by the speech prosody and text semantics. The prosodic features include stress, tone, etc., and the semantic features are annotated based on the PAD (Pleasure-Arousal-Dominance) model. The related work was published at ICASSP 2007.
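
    As a loose illustration of how such features might drive head movement (the rule and weights below are invented, not the published ICASSP 2007 model), syllable stress and the arousal component of a PAD annotation could jointly scale nod amplitude:

        def nod_amplitude(stress, pad, w_stress=0.6, w_arousal=0.4):
            """stress in [0, 1]; pad = (pleasure, arousal, dominance) in [-1, 1]."""
            _, arousal, _ = pad
            return min(w_stress * stress + w_arousal * max(arousal, 0.0), 1.0)

        # A stressed syllable in an excited sentence gets a large nod...
        print(nod_amplitude(stress=0.9, pad=(0.7, 0.8, 0.2)))
        # ...while an unstressed syllable in a flat sentence barely moves.
        print(nod_amplitude(stress=0.2, pad=(0.1, -0.3, 0.0)))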

  • Face Animation Editor [video]
  • Face Animation Editor provides an interactive way to animate a 3D face model or a 2D face image. The software provides parameters at three levels for controlling the movement of facial features: high-level Emotion Parameters, which produce the continuous facial expression for a specific emotion; mid-level Partial Expression Parameters, which simulate common facial movements of specific facial organs; and low-level Facial Animation Parameters from the MPEG-4 animation framework, which control the micro-movement of each facial feature point. The 3D animation is partially based on XFace.
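
    The three levels can be pictured as a cascade in which each level expands into the one below it. The sketch uses real MPEG-4 FAP names but invented mapping tables and amplitudes; the editor's actual mappings are not shown here.

        # Illustrative Emotion -> PEP -> FAP cascade with toy values.
        EMOTION_TO_PEP = {
            "happiness": {"mouth_smile": 1.0, "eye_narrow": 0.4},
        }
        PEP_TO_FAP = {
            "mouth_smile": {"stretch_l_cornerlip": 120, "stretch_r_cornerlip": 120,
                            "raise_l_cornerlip": 60, "raise_r_cornerlip": 60},
            "eye_narrow": {"close_t_l_eyelid": 40, "close_t_r_eyelid": 40},
        }

        def emotion_to_faps(emotion, intensity):
            # Expand one high-level emotion into low-level FAP amplitudes.
            faps = {}
            for pep, weight in EMOTION_TO_PEP[emotion].items():
                for fap, amplitude in PEP_TO_FAP[pep].items():
                    faps[fap] = faps.get(fap, 0.0) + intensity * weight * amplitude
            return faps

        print(emotion_to_faps("happiness", 0.5))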

  • Cartoon Face Editor [video]
  • Cartoon Face Editor automatically generates caricatures from an input face photo. The user can interactively make up the face and select different hair styles and cartoon styles. The generated cartoon face is MPEG-4 compatible, so it can be animated directly by FAPs to simulate different facial expressions.

  • Semantic Face [video]
  • Semantic Face synthesizes and recognizes facial expressions based on 7 semantic dimensions: Confidence, Strength, Activation, Dominance, Intention, Pleasure, and Attention. For an input facial expression, the meaning of the face is recognized as the extent to which each semantic dimension is expressed. For synthesis, the 7 semantic dimensions can be used as a high-level description of the speaker's modality (attitude, intention, mood, evaluation, etc.) to control the facial movement.
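
    A minimal sketch of the idea, with invented numbers rather than the system's trained mappings: an expression is a point in the 7-dimensional semantic space; recognition reports how strongly each dimension is expressed, and synthesis maps the dimensions back to an overall facial-movement intensity.

        DIMS = ["Confidence", "Strength", "Activation", "Dominance",
                "Intention", "Pleasure", "Attention"]

        def recognize(vec):
            # Recognition output: the extent of each semantic dimension.
            return {d: round(v, 2) for d, v in zip(DIMS, vec)}

        def synthesize(vec, gains=(0.2, 0.1, 0.5, 0.3, 0.2, 0.8, 0.4)):
            # Assumed linear model: per-dimension gains on facial movement.
            return sum(v * g for v, g in zip(vec, gains))

        smile = [0.6, 0.2, 0.7, 0.4, 0.3, 0.9, 0.5]  # a measured expression
        print(recognize(smile))
        print("movement intensity:", round(synthesize(smile), 2))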

  • Virtual Singer [video]
  • Virtual Singer shows an expressive singing avatar who can sing and make prosodic movements in time with the rhythm. It provides six kinds of facial expression at varying intensities for different songs, and the prosodic movement is synchronized with the music's rhythm and lyrics. The virtual avatar is intended as an expressive user interface for digital entertainment such as an online music box. The related work won 3rd prize in the 26th Challenge Cup Student Technology Competition at Tsinghua University.
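
    One simple way to picture the rhythm synchronization (an assumed scheme, not necessarily the system's): schedule movement keyframes on the beat grid derived from the song's tempo, with the head dipping on each beat and recovering before the next.

        def nod_keyframes(bpm, n_beats, amplitude=0.5):
            beat, half = 60.0 / bpm, 30.0 / bpm
            frames = []
            for i in range(n_beats):
                t = i * beat
                frames.append((t, -amplitude))  # head pitches down on the beat
                frames.append((t + half, 0.0))  # back to neutral off the beat
            return frames

        print(nod_keyframes(bpm=120, n_beats=4))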

  • Semantic Dictionary [more]
  • Semantic Dictionary is an online system for annotating Chinese words (mainly adjectives and adverbs) according to their semantic meaning along 11 dimensions. The dimensions focus on describing a person's emotion, mood, intention, personality, inner state, etc., and include Pleasure, Arousal, Dominance, Attention, Nervousness, Strength, and Confidence, among others. The project aims to explore the usefulness of semantic cues in spoken text for expressive talking avatar synthesis.
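
    An annotation record in such a system might look like the sketch below; the dimension names come from the description above, while the example word and scores are invented.

        DIMENSIONS = ["Pleasure", "Arousal", "Dominance", "Attention",
                      "Nervousness", "Strength", "Confidence"]  # 7 of the 11

        entry = {
            "word": "高兴",  # "happy", an adjective
            "pos": "adjective",
            "scores": dict(zip(DIMENSIONS, [0.9, 0.6, 0.3, 0.4, -0.5, 0.2, 0.5])),
        }

        for dim, score in entry["scores"].items():
            print(f"{entry['word']}: {dim} = {score:+.1f}")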

  • Expression Clone [more]
  • Expression Clone demonstrates the use of PEPs (Partial Expression Parameters) in facial expression synthesis. The PEPs are extracted from real human facial images in the JAFFE database; then, based on a PEP-to-FAP (Facial Animation Parameter) translation template, a similar facial expression is cloned onto the talking avatar.
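
    The translation step can be pictured as a per-PEP template of FAP amplitudes scaled by the PEP intensity measured from the source image. The template values below are invented for illustration; the FAP names are real MPEG-4 parameters.

        TEMPLATE = {
            "brow_raise": {"raise_l_i_eyebrow": 150, "raise_r_i_eyebrow": 150},
            "mouth_open": {"open_jaw": 300},
        }

        def clone(pep_values):
            # pep_values: PEP intensities in [0, 1] measured from a source image.
            faps = {}
            for pep, intensity in pep_values.items():
                for fap, amp in TEMPLATE.get(pep, {}).items():
                    faps[fap] = faps.get(fap, 0.0) + intensity * amp
            return faps

        # e.g. a surprised JAFFE image: strong brow raise, half-open mouth
        print(clone({"brow_raise": 0.8, "mouth_open": 0.5}))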