After completing my first final draft, I tested the code on a few classmates using my laptop's camera, in order to see the different aspects of the face detection in use and to highlight any problems.
In this video you will see my initial draft in action. I asked three friends to interact with the work by walking past the camera, in order to see how the code reacts to multiple faces.
This video shows how the face detection reacts when the user moves closer to and further from the camera: the image placed over the face resizes according to the user's distance.
I believe the face detection interaction works well because it is specific to the user's face, whereas my earlier blob detection simply traced the light in the room. Another positive is the way the face detection keeps up while the user is active, picking up their distance from the camera and the direction they are moving in. The interaction also mirrors the user's actions, making the experience more personal.
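The two behaviours described above, the overlay scaling with distance and the mirrored movement, can be sketched as plain coordinate arithmetic. This is a minimal illustration, not my actual sketch code: I am assuming the face detector returns a bounding box `(x, y, w, h)` in camera-frame pixels, and the function and parameter names here are my own.

```python
def overlay_rect(face, scale=1.4):
    """Size and centre the overlay image on a detected face.

    The overlay grows and shrinks with the bounding-box width, so it
    appears larger as the user moves closer to the camera.
    (scale is an illustrative padding factor, not a value from my code.)
    """
    x, y, w, h = face
    ow, oh = int(w * scale), int(h * scale)   # overlay dimensions
    ox = x + w // 2 - ow // 2                 # centred on the face horizontally
    oy = y + h // 2 - oh // 2                 # centred on the face vertically
    return ox, oy, ow, oh

def mirror_x(x, w, frame_width):
    """Flip a bounding box horizontally so the display acts as a mirror:
    when the user steps left, the overlay follows them left on screen."""
    return frame_width - x - w
```

A nearby face with a 100-pixel-wide box gets a proportionally larger overlay than a distant 40-pixel one, which is the resizing effect visible in the video.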
I noticed that the code was only displaying in a small window, due to the small canvas size I had used; however, I found that a larger canvas size was not compatible with the resolution of my laptop's (HP Pavilion) built-in camera. I will look for a way to make the work fill the whole display.