Hi all!

I am a BSc. IT (Hons) student doing my research project in the field of AI, more specifically affective computing, which deals with emotion in computers: detecting it, displaying it, or both. I am focusing on designing a computer program to detect emotion.

The way that my program will work is as follows:

A child (student) will participate in a variety of activities. The computer will monitor their affective state (mood) while they participate, and if the student is, for instance, getting frustrated because an activity is too hard, the computer will change the activity.
The affective computing aspect of my project has been coded in C#. I was planning on launching the activities as executables from C#.

Originally I was going to create the various activities using Alice, a programming environment developed by Carnegie Mellon University (http://www.alice.org/), but I have decided that I'd like the emotion detected by my affective computing program to be able to affect the activity.

In other words, the activity application will need to be able to accept information from my program that monitors the affective state. I would also like to record some info from the activity itself, so it would need to be able to return data at specified intervals to the affective computing program.
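A minimal sketch of that two-way link, assuming the activities run as separate processes with their standard input/output streams redirected (on the C# side this would be `Process.Start` with `RedirectStandardInput`/`RedirectStandardOutput`; the sketch below uses Python for both ends just to show the protocol, and message fields like "mood" and "accuracy" are my own, not from the thread):

```python
import json
import subprocess
import sys

# Hypothetical child "activity" process: it reads mood updates on stdin
# and reports its state (including accuracy) on stdout. In the real
# project the parent would be the C# monitor and the child one of the
# activities; the numbers here are placeholders.
CHILD = r"""
import json, sys
for line in sys.stdin:
    msg = json.loads(line)
    if msg.get("mood") == "frustrated":
        # the activity could lower its difficulty here
        reply = {"event": "difficulty_lowered", "accuracy": 0.62}
    else:
        reply = {"event": "tick", "accuracy": 0.85}
    print(json.dumps(reply), flush=True)
"""

proc = subprocess.Popen(
    [sys.executable, "-c", CHILD],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

# Monitor side: push one mood update, read back the activity's report.
proc.stdin.write(json.dumps({"mood": "frustrated"}) + "\n")
proc.stdin.flush()
report = json.loads(proc.stdout.readline())

proc.stdin.close()
proc.wait()
```

A newline-delimited JSON protocol like this keeps the activity side tool-agnostic: anything that can read stdin and write stdout could participate, which may matter if Alice turns out not to expose such hooks.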

I haven't done much investigating yet to see if Alice has these capabilities, but in the meantime, I was wondering if anyone could suggest a different tool I could use to develop the activities that would serve this purpose.

I need something I can use to develop the activities fairly quickly, as I will need to create at least 5 different activities (my aim is at least 10). The activities themselves aren't the focus of the project, the affective computing aspect is, but they need to be good enough to demonstrate the project's capabilities.

If anyone has any suggestions on what I could use to develop these activities, I would really appreciate it. I’ll keep looking in the meantime as well.


Personally, I would get some insight from psychology professors or journals on how to design activities that give cues about a person's state or mood. Perhaps the easiest way would be to have the program ask the subject flat out (if, for instance, the user didn't respond within a set time limit).

One thing that comes to mind: psychologists have found that facial expressions are universal across all cultures, so there may be no more universal an indicator of mood. I would be interested in looking into facial recognition software that interfaces with a webcam (it's come a long way, especially if there is some open-source code out there to play around with). If your program could monitor the expression on your subject's face independently of the activity the subject is performing, then you wouldn't need the activity itself to provide any cues about the subject's mood. But unless there's an open-source project out there you could get sample code from, this would probably be insanely complicated.

I think the easiest way would still be to have the program ask the subject if they are getting frustrated, based on something quantitative like a time constraint.
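That "ask the subject flat out" fallback is easy to sketch. Assuming the activity records the time of the last user input, a minimal idle-timeout check (function and parameter names are my own) could look like:

```python
# A minimal sketch of the time-limit heuristic suggested above: if the
# user hasn't interacted within a set limit, prompt them for their mood.
# The 30-second default is an arbitrary placeholder.
def should_prompt_user(last_input_time: float, now: float,
                       timeout_s: float = 30.0) -> bool:
    """True once the subject has been idle for at least timeout_s seconds."""
    return (now - last_input_time) >= timeout_s

# e.g. with the last click at t=100s:
print(should_prompt_user(100.0, 110.0))  # still active, no prompt
print(should_prompt_user(100.0, 140.0))  # idle past the limit, prompt
```

In the real program, `now` would come from a monotonic clock polled on a timer, and a `True` result would trigger the "how are you feeling?" dialog.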

Thanks for your response, Freezerburn. I'm one step ahead of you though. I met with a psychology professor a few days before I posted this, and he gave me a great idea for activities that use information both from the facial expression recognition portion of my project and directly from the user via the activity.

Originally I was just going to create some activities that the user participates in pretty much separately from the facial expression recognition portion of the project; if that portion saw the user becoming frustrated, it would automatically change the activity or ask the user if they would like to change.

The professor I spoke to also suggested I ask the user at given time intervals how they are feeling. I was thinking of incorporating that within the actual therapy, but while reading your post I realised that I could more easily query the user from the expression recognition portion and achieve the same effect, so thank you!

All that aside, I would still like the activity to return some information to the expression recognition portion, such as the user's accuracy during the activity. (I'm still not sure if Alice has that capability; I've been busy with the development of the facial expression recognition portion and haven't had time to do much digging.) So if anyone knows whether or not Alice can do this, or of another tool I can use to develop the activities, I would really appreciate it.

Sweet! Sounds like a cool project! I've been tinkering with the OpenCV facial recognition library and its code samples in Python since I recently learned it existed. Sorry, I don't know the capabilities of the Alice environment.
But it does sound like you are developing an application that is highly interactive with the user, so you may want to look into integrating some game development libraries that closely match the kind of user interaction you're looking for.

Not sure if you're a Pythonista, but Python, for example, has an extensive and easy-to-use game library called Pygame. With it, it is relatively simple to create 'sprites' that respond well to user interaction, and you can also get feedback on the accuracy of the user's responses... hence, a game. For C and C++ there is a similar library called SDL (which Pygame itself is built on), and there are many more such libraries for other languages. These would be good choices for creating 2-D activities that respond to different modes of user input.

Regardless, I would choose a high-level language and libraries such as these to avoid having to call Windows APIs directly, which will of course reduce the complexity and development time.
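To give a taste of the Pygame approach, here's a minimal sketch of a clickable sprite that tracks hit accuracy — the kind of per-activity data discussed above. It runs headless here via SDL's dummy video driver, and the class and attribute names are my own invention, not part of Pygame:

```python
import os

# Use SDL's dummy video driver so this sketch runs without a display;
# a real activity would open a normal window instead.
os.environ.setdefault("SDL_VIDEODRIVER", "dummy")

import pygame


class TargetSprite(pygame.sprite.Sprite):
    """A clickable target that records hits and misses, so the
    activity can report accuracy back to the monitoring program."""

    def __init__(self, x, y):
        super().__init__()
        self.image = pygame.Surface((40, 40))
        self.rect = self.image.get_rect(topleft=(x, y))
        self.hits = 0
        self.misses = 0

    def handle_click(self, pos):
        # collidepoint tells us whether the click landed on the sprite
        if self.rect.collidepoint(pos):
            self.hits += 1
        else:
            self.misses += 1

    @property
    def accuracy(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0


pygame.init()
pygame.display.set_mode((200, 200))
sprite = TargetSprite(80, 80)

# Simulate two clicks: one on the sprite, one off it.
sprite.handle_click((90, 90))   # inside the 40x40 rect at (80, 80): hit
sprite.handle_click((5, 5))     # outside: miss

pygame.quit()
```

In a full activity, the clicks would come from `pygame.event.get()` in the main loop, and `sprite.accuracy` is the sort of value that could be sent back to the affective computing program at intervals.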