We propose a novel and simple representation of first-person actions.
The features are simple-to-compute trajectories of tracked feature points.
Our approach requires no hand or object segmentation and no pose estimation.
Our technique improves recognition accuracy by more than 11% on publicly available datasets.
Our method can recognize the wearer's actions even when hands and objects are not visible.
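To make the idea of simple-to-compute feature trajectories concrete, the sketch below tracks a few points through per-frame motion fields and turns each track into a descriptor of normalized displacements. This is an illustrative toy, not the paper's implementation: the function name, the use of a dense flow field, the trajectory length, and the normalization scheme are all assumptions for demonstration.

```python
import numpy as np

def track_trajectories(flows, points, length=5):
    """Track points through per-frame flow fields (H x W x 2 arrays of
    (dy, dx)); each trajectory's descriptor is its sequence of
    scale-normalized displacements. Details here are illustrative."""
    trajs = [[p.copy()] for p in points]
    for flow in flows[:length]:
        for traj in trajs:
            y, x = traj[-1]
            # read the flow at the point's current (rounded) location
            dy, dx = flow[int(round(y)) % flow.shape[0],
                          int(round(x)) % flow.shape[1]]
            traj.append(np.array([y + dy, x + dx]))
    descriptors = []
    for traj in trajs:
        disp = np.diff(np.stack(traj), axis=0)  # per-step displacement
        norm = np.abs(disp).sum() + 1e-8        # normalize overall scale
        descriptors.append((disp / norm).ravel())
    return np.stack(descriptors)

# Synthetic example: constant flow of (1, 2) pixels per frame
flows = [np.tile(np.array([1.0, 2.0]), (20, 20, 1)) for _ in range(5)]
pts = [np.array([5.0, 5.0]), np.array([10.0, 3.0])]
desc = track_trajectories(flows, pts)  # shape (2, 10): 2 tracks, 5 steps x 2 dims
```

In practice the flow fields would come from an optical-flow estimator and the points from an interest-point detector; the resulting descriptors could then feed any standard classifier.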