Authors: Jennifer Healey, Haoliang Wang, Niyati Chhaya

Description: This paper presents a preliminary exploration of the challenges of automatically recognizing positive and negative facial expressions under both spontaneous and intentionally expressed conditions. Instead of recognizing iconic basic emotion states, which we have found to be uncommon in typical human-computer interaction, we attempted to recognize only positive versus negative states. Our hypothesis was that recognition would prove more accurate when participants intentionally expressed their feelings. Our study analyzed video from seven participants, each taking part in two sessions. Participants were asked to view 20 images, 10 positive and 10 negative, selected from the OASIS image data set. In the first session participants were instructed to react naturally, while in the second session they were asked to intentionally express the emotion they felt when looking at each image. We extracted facial action units (AUs) from the recorded video and found that, on average, intentionally expressed emotions generated 33\% more intensity across AUs associated with negative emotions (AU1, AU2, AU4 and AU5) and 117\% more intensity across AUs associated with positive emotions (AU6 and AU12). We also show that wide variation exists both in average participant responses across images and in individual reactions to specific images, and that simply taking a ratio of our identified action units is not sufficient to determine whether a response is positive or negative, even in the intentionally expressed case.
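The ratio-based check the abstract finds insufficient can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the grouping of AUs into positive (AU6, AU12) and negative (AU1, AU2, AU4, AU5) sets follows the abstract, but the specific ratio definition, the epsilon guard, and the decision threshold are assumptions.

```python
# Hypothetical sketch of a positive/negative valence call from AU intensities.
# The AU groupings come from the abstract; the ratio and threshold are assumed.

NEGATIVE_AUS = ["AU1", "AU2", "AU4", "AU5"]  # AUs associated with negative emotion
POSITIVE_AUS = ["AU6", "AU12"]               # AUs associated with positive emotion

def valence_ratio(au_intensities):
    """Ratio of summed positive-AU intensity to summed negative-AU intensity.

    au_intensities: dict mapping AU name -> intensity value.
    A small epsilon avoids division by zero when no negative AU fires.
    """
    pos = sum(au_intensities.get(au, 0.0) for au in POSITIVE_AUS)
    neg = sum(au_intensities.get(au, 0.0) for au in NEGATIVE_AUS)
    return pos / (neg + 1e-6)

def classify(au_intensities, threshold=1.0):
    """Naive valence label from the ratio; the paper reports this simple
    rule is NOT sufficient, even for intentionally expressed emotions."""
    return "positive" if valence_ratio(au_intensities) > threshold else "negative"

# Example frame: strong smile (AU6 + AU12) with mild brow lowering (AU4).
frame = {"AU6": 2.5, "AU12": 3.0, "AU4": 0.5}
print(classify(frame))  # smile-dominated frame -> "positive"
```

A per-image or per-participant threshold could be substituted for the fixed one, but as the abstract notes, the wide individual variation means no single ratio cutoff separates the two classes reliably.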