Notes on the NAOqi OS used on Pepper and NAO.


What it does


ALMood reads the instantaneous emotion of people and of the ambiance.

This module synthesizes various extractor outputs into an emotional perception of people and the environment ambiance. It may also provide an emotional state of the robot in the future.

As a user of the service, you can query the underlying representation of the emotional perception through a set of emotional descriptors (e.g. Positivity, Negativity, Attention); a minimal query sketch follows the list:
  • Call MoodProxy::currentPersonState for the focused user
  • Call MoodProxy::personStateFromUserSession for any active user defined by a USID
  • Call MoodProxy::personStateFromPeoplePerception for any active user defined by a PPID
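A minimal sketch of calling these methods from the NAOqi Python SDK. The robot address and the USID/PPID values are placeholders, and the returned structures are simply printed as-is, since their exact layout is not detailed on this page.

    # Sketch: querying ALMood person-state descriptors (NAOqi Python SDK).
    from naoqi import ALProxy

    ROBOT_IP = "<robot-ip>"   # placeholder address
    PORT = 9559               # default NAOqi port

    mood = ALProxy("ALMood", ROBOT_IP, PORT)

    # Emotional perception of the currently focused user.
    print(mood.currentPersonState())

    # Emotional perception of a specific active user, identified either by a
    # UserSession ID (USID) or a PeoplePerception ID (PPID); the IDs below
    # are illustrative only.
    print(mood.personStateFromUserSession(1))
    print(mood.personStateFromPeoplePerception(42))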

How it works


This module provides information on the instantaneous emotional state of the speaker and surrounding people, as well as the ambiance mood.

ALMood launches the emotion processing service in Passive operating mode as soon as NAOqi runs. Once a developer subscribes to this module in Active mode (see Operating Mode), an extractor manager dynamically triggers the required extractors and adapts their processing frequency.

Emotional processing builds upon various extractors and memory keys; in particular, it currently uses head angles from ALGazeAnalysis, expression properties and smile information from ALFaceCharacteristics, acoustic voice emotion analysis from ALVoiceEmotionAnalysis, as well as the general sound level from ALAudioDevice and movement information.

ALMood retrieves the relevant information compiled by the above-named extractors and combines it into high- and low-level descriptors. Users can access the underlying representation of the emotional perception through a two-level key space: consolidated information keys (e.g. valence, attention) and more intermediate information keys (e.g. smile, laugh).

The calculation of high-level keys takes into account the observation at a given moment, the previous emotional state, and the ambient and social context (e.g. a noisy environment, a smile held too long, the user profile).

All emotional key values are associated with a confidence score between 0 and 1 to indicate how likely an estimation is.

Operating Mode

This module incorporates data from various extractors. To obtain a pertinent mood estimation, those extractors have to be running.

ALMood provides two options to manage its operation:
  • “Active”: ALMood manages the extractor subscriptions itself to ensure high performance.
  • “Passive”: ALMood lets the user manage the subscriptions and listens passively to the extractors (a Passive-mode sketch follows the list of extractors below).
By default, at robot startup, ALMood is launched in Passive mode.

Extractors started by ALMood in Active mode:
  • ALGazeAnalysis
  • ALFaceCharacteristics
  • ALVoiceEmotionAnalysis (on Pepper only)
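In Passive mode, the subscriptions are the developer's responsibility. A minimal sketch, assuming the standard extractor subscribe/unsubscribe interface of the NAOqi Python SDK, a placeholder robot address, and an illustrative subscriber name:

    # Sketch: using ALMood in Passive mode by managing the extractor
    # subscriptions yourself.
    from naoqi import ALProxy

    ROBOT_IP = "<robot-ip>"  # placeholder address
    EXTRACTORS = ["ALGazeAnalysis",
                  "ALFaceCharacteristics",
                  "ALVoiceEmotionAnalysis"]  # the last one exists on Pepper only

    proxies = [ALProxy(name, ROBOT_IP, 9559) for name in EXTRACTORS]

    for proxy in proxies:
        proxy.subscribe("MyMoodApp")      # start each extractor for this subscriber

    try:
        mood = ALProxy("ALMood", ROBOT_IP, 9559)
        print(mood.currentPersonState())  # ALMood listens to the running extractors
    finally:
        for proxy in proxies:
            proxy.unsubscribe("MyMoodApp")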

Basic Emotions

You can query for a basic emotional reaction over a fixed time period.

The analysis starts when the method MoodProxy::getEmotionalReaction is called.

An emotional reaction value can be one of the following (see the sketch after this list):
  • “positive”
  • “neutral”
  • “negative”
  • “unknown”
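A minimal call sketch, assuming the NAOqi Python SDK, a placeholder robot address, and that the method takes no arguments; the returned string is one of the four values above.

    # Sketch: requesting a basic emotional reaction over a fixed time period.
    from naoqi import ALProxy

    mood = ALProxy("ALMood", "<robot-ip>", 9559)  # placeholder address

    # Returns "positive", "neutral", "negative" or "unknown".
    reaction = mood.getEmotionalReaction()
    print("Emotional reaction: %s" % reaction)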

Emotional descriptors

Person emotion:

All these descriptors provide mood data on the focused user; a sketch showing how the returned [value, confidence] pairs can be interpreted follows the Ambiance list below.
  • Valence:
    • This descriptor indicates whether the person’s mood is rather positive or negative. The data returned has the following format: [value, confidence], where each element is a score in a range from 0 to 1.
  • Attention:
    • This descriptor indicates the degree of attention the focused person gives to the robot. The data returned has the following format: [value, confidence], where each element is a score in a range from 0 to 1.

Ambiance:
  • Activity/Calm:
    • Indicates the activity level of the environment, whether it’s noisy and agitated or calm, scored in a range of 0 to 1.
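A minimal sketch of interpreting a [value, confidence] pair. The pairs and the thresholds below are illustrative; in practice the pairs come from ALMood (e.g. via MoodProxy::currentPersonState), whose exact return layout is not detailed on this page.

    # Sketch: turning an illustrative [value, confidence] valence pair
    # into a rough label, rejecting low-confidence estimations.
    def describe_valence(pair, min_confidence=0.5):
        value, confidence = pair
        if confidence < min_confidence:
            return "unknown"        # estimation not reliable enough
        if value > 0.6:             # illustrative threshold
            return "positive"
        if value < 0.4:             # illustrative threshold
            return "negative"
        return "neutral"

    print(describe_valence([0.8, 0.9]))  # -> "positive"
    print(describe_valence([0.8, 0.2]))  # -> "unknown" (low confidence)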

Perceived stimuli

In its present state, the module reacts to the following stimuli:

Person emotion:
  • Smile degree
  • Facial expressions (neutral, happy, angry, sad)
  • Head attitude (angles), relative to the robot
  • Gaze patterns (evasion, attention, diversion)
  • Acoustic tone of utterances
  • Linguistic semantics of speech
  • Sensor touch
Ambiance:
  • Energy level of noise
  • Movement detection

Getting started


To discover ALMood, download and try the following Choregraphe behavior: sample_get_mood.crg

This sample shows the main steps to use ALMood; a script-level sketch of the same sequence follows the list.

  • The extractors are started in Active mode.
  • The robot makes a joke or comment.
  • The robot detects the resulting mood (positive/negative/neutral) of the person during the 3 seconds that follow.
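The same sequence, sketched outside Choregraphe. It assumes the NAOqi Python SDK, a placeholder robot address, that the required extractors are already running (see Operating Mode), and that getEmotionalReaction handles the observation window itself.

    # Sketch: make a joke, then read the person's emotional reaction.
    from naoqi import ALProxy

    ROBOT_IP = "<robot-ip>"  # placeholder address

    tts = ALProxy("ALTextToSpeech", ROBOT_IP, 9559)
    mood = ALProxy("ALMood", ROBOT_IP, 9559)

    tts.say("I would tell you a joke about batteries, but it has no charge.")

    # Observe the resulting mood during the seconds that follow.
    reaction = mood.getEmotionalReaction()
    print("Reaction to the joke: %s" % reaction)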

Note
  • Since most of the sources are taken from ALPeoplePerception extractors, the confidence of the emotional extractors will be low if the face is not seen correctly.
  • The ambiance descriptor is best used when the robot is not speaking.
