This file provides auxiliary information about the pre-computed features, which are shared to ease experiments on the proposed human intention prediction problem. For each data type (3D VICON and 2D RGB video), we provide two different feature representations, as explained below.

3D VICON - F_Global and F_Local kinematic features: as explained in [1], we compute joint-based features that encode kinematic properties of the grasping patterns at each timestamp.

2D RGB optical videos - HOG and HOF features: using the dense trajectory code [2], we extracted dense HOG and HOF features from spatio-temporal interest volumes.

====================================================================================================================================================================

The folders "3D features" and "2D features" are organized as follows.

3D features. It contains four subfolders (Pouring, Passing, Drinking, Placing), each representing one intention. Inside, we provide one .txt file per trial: the file names are shaped as AA_BBB.txt, where AA is the ID of the subject who performed the grasping and BBB is the ID of the trial. When a .txt file is loaded, we obtain a T x 16 matrix where each row corresponds to one acquisition/timestamp and each column corresponds to one feature component (the first 4 columns are the F_Global representation, the last 12 pertain to F_Local).

2D features. It contains four subfolders (Pouring, Passing, Drinking, Placing), each representing one intention. Inside, we provide one .mat file per trial: the file names are shaped as AA_BBB.mat, where AA is the ID of the subject who performed the grasping and BBB is the ID of the trial. When a .mat file is loaded (in MATLAB), two matrices appear: HOG and HOF. They have sizes T x 32 and T x 36, respectively, where the number of rows is the number of available temporal acquisitions and each column is a feature component.
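Outside MATLAB, the trials described above can also be read with standard Python tools. Below is a minimal sketch (assuming NumPy and SciPy are installed; the function names and paths are illustrative, not part of the release) that loads one 3D trial and splits it into F_Global/F_Local, and loads the HOG/HOF matrices from one 2D trial:

```python
import numpy as np
from scipy.io import loadmat


def load_3d_trial(path):
    """Load one 3D VICON trial from a AA_BBB.txt file.

    Returns (f_global, f_local): the first 4 columns are F_Global,
    the remaining 12 are F_Local, as described above.
    """
    feats = np.loadtxt(path)            # T x 16 matrix, one row per timestamp
    return feats[:, :4], feats[:, 4:]


def load_2d_trial(path):
    """Load one 2D RGB trial from a AA_BBB.mat file.

    Returns (hog, hof): T x 32 and T x 36 matrices, respectively.
    """
    mat = loadmat(path)
    return mat["HOG"], mat["HOF"]
```

For example, `load_3d_trial("3D features/Pouring/01_001.txt")` would return a pair of arrays of shape (T, 4) and (T, 12) for that trial (the path here is only an assumed layout).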
====================================================================================================================================================================

TESTING PROTOCOL: for the experiments, we recommend one-subject-out validation, where each subject in turn is left out for testing and the intention-discriminating classifiers are trained on the remaining subjects.

[1] A. Zunino, J. Cavazza, A. Koul, A. Cavallo, C. Becchio and V. Murino, "Intention from Motion", arXiv:1605.09526, 2016.
[2] H. Wang, A. Klaser, C. Schmid and C.-L. Liu, "Action Recognition by Dense Trajectories", CVPR 2011.
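Since every trial file name starts with the subject ID (the AA_BBB pattern above), the one-subject-out split can be derived directly from the file names. A minimal Python sketch (the helper name is an illustrative assumption, not part of the release):

```python
def one_subject_out_split(trial_files, test_subject):
    """Partition a list of trial file names (AA_BBB.* pattern) into
    train/test sets, holding out every trial of `test_subject` (the AA
    prefix). Repeating this for each subject gives the full protocol."""
    test = [f for f in trial_files if f.split("_")[0] == test_subject]
    train = [f for f in trial_files if f.split("_")[0] != test_subject]
    return train, test
```

For example, with trials from subjects "01" and "02", holding out "01" puts all "01_*" files in the test set and the rest in the training set; classifiers are then fit on the training files only.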