High Fidelity Human Behavior Modeling and Prediction
This research project studies how artificial intelligence can be applied to modeling mental states such as intention, personality, and moments of confusion. We investigate this question through an escape-room experiment: subjects follow clues placed around the room and attempt to assemble objects, either mentally or physically, while their behavior is recorded by two egocentric cameras, a head-mounted GoPro and a pair of gaze-tracking Tobii glasses. After recording, Python and MATLAB scripts undistort the collected footage (fisheye distortion for the GoPro videos, radial distortion for the Tobii videos), and Python scripts then reconstruct the 3D scene with Agisoft Metashape. We also created a detailed 3D map of the room using a Matterport scanner. Finally, a one-stage, real-time object detection and recognition model (YOLO v1) was implemented and tested on custom-made data consisting of colored triangles, circles, and rectangles at different scales.
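As a minimal sketch of the undistortion step, the snippet below uses OpenCV's fisheye model for the GoPro frames and its standard radial model for the Tobii frames. The camera matrix, distortion coefficients, and file names are illustrative placeholders; the real values would come from calibrating each camera, and this is not the project's exact script.

```python
# Sketch of per-camera undistortion with OpenCV (illustrative values only).
# Real intrinsics (K) and distortion coefficients must come from calibration.
import cv2
import numpy as np

def undistort_gopro_frame(frame, K, D):
    """Undo fisheye distortion (GoPro) using OpenCV's fisheye camera model."""
    h, w = frame.shape[:2]
    # Reuse K as the new camera matrix so the output keeps a similar field of view.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)

def undistort_tobii_frame(frame, K, dist):
    """Undo radial (and tangential) distortion for the Tobii scene camera."""
    return cv2.undistort(frame, K, dist)

if __name__ == "__main__":
    # Hypothetical calibration values, for illustration only.
    K_gopro = np.array([[600.0,   0.0, 960.0],
                        [  0.0, 600.0, 540.0],
                        [  0.0,   0.0,   1.0]])
    D_gopro = np.array([[-0.05], [0.01], [0.0], [0.0]])  # fisheye k1..k4

    cap = cv2.VideoCapture("gopro_clip.mp4")  # hypothetical file name
    ok, frame = cap.read()
    if ok:
        cv2.imwrite("gopro_undistorted.png",
                    undistort_gopro_frame(frame, K_gopro, D_gopro))
    cap.release()
```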
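The 3D reconstruction step can be scripted through the Agisoft Metashape Python API. The sketch below assumes the `Metashape` module is available under a Pro license and that undistorted frames have been extracted to a hypothetical `frames/` folder; parameters are left at their defaults rather than the project's actual settings.

```python
# Sketch of reconstructing the 3D scene from undistorted frames with the
# Agisoft Metashape Python API (assumes a Metashape Pro license).
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()

# Hypothetical folder of frames extracted from the undistorted videos.
chunk.addPhotos(sorted(glob.glob("frames/*.png")))

# Feature matching and camera alignment (sparse reconstruction).
chunk.matchPhotos()
chunk.alignCameras()

# Dense reconstruction and meshing with default settings.
chunk.buildDepthMaps()
chunk.buildModel()

doc.save("escape_room_scene.psx")
```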