A New State of the Art in Human Movement Recognition
By Vahid B. Zadeh - Chief Algorithms Officer at PUSH
November 12, 2017
With the mainstream arrival of artificial intelligence (AI) and machine learning (ML), numerous algorithms have been developed to tackle a wide range of challenges. However, the applied side of AI lags far behind its theoretical potential. In part, this is due to a lack of the data resources that are the main inputs for training these algorithms, as well as the limits of computational processing power. Data and processing limitations aside, the theoretical accuracy of many of these algorithms often falls short of what is required for performance in a real-world scenario.
In recent years, access to supercomputers and big data in different fields has given researchers the opportunity to rediscover the capabilities of ML algorithms. Furthermore, the ubiquity of ever more powerful smartphones means that these devices are now central to human-machine interaction, and they are natural beneficiaries of these developments in AI. With this change in landscape, the challenge has shifted away from access to sufficient processing power and towards gathering relevant, labeled data. As long as there is a large enough dataset of sufficient quality and a well-defined problem at hand, ML should be capable of addressing it (some problems are better addressed from a perspective other than ML, but this is beyond the scope of this post). From simple recommendation systems that serve customized advertisements for the products you have been looking at online, to practical engines like the autocorrect feature in cell phones and advanced systems like Apple’s Siri or Google’s facial recognition engine, applications of ML have a growing presence in our everyday lives, whether we notice them or not.
One of the main reasons behind the increasing interest in the applied aspects of AI during the last decade is easy access to larger datasets collected through smartphones and wearables. Wearable devices allow us to non-invasively collect biometrics during human activities. This data can later be analyzed to assess an activity’s impact on the user, or used to train systems that improve the user experience. Fitness trackers have been exploiting the capabilities of such algorithms; however, ML only starts to show a real advantage over standard signal processing techniques when the size of the dataset is relatively large. At PUSH, we designed the initial series of our motion tracking algorithms using advanced signal processing augmented with adaptive ML techniques to ensure the highest standards of accuracy in the industry. Thanks to our large user base and our analytics infrastructure, we have collected, labelled, anonymised, and securely stored all of this data for the past few years. In the S&C field, this is by far the largest dataset in the world that is organically collected (not in a lab-controlled environment) and fully labelled, with metadata and metrics for every single repetition as well as raw motion data for each repetition performed. Over 16,500,000 repetitions in more than 3,000,000 sets across over 300 exercises, constituting more than 400,000 sessions from more than 25,000 athletes (~20,000 male, ~4,500 female, ~1,300 other), is an unprecedented amount of training data and provides a degree of diversity ideal for training ML algorithms.
Attempts to exploit ML for fitness applications are still quite limited in the literature. The problems tackled are mainly those of human activity recognition (HAR) or exercise detection, where the engine detects which exercise is being performed (see [1-8] for examples). A simple peak-detection algorithm may then be used to count how many repetitions of each exercise have been performed. The ability of current systems to recognize movements is limited to a small number of fairly distinct activities, e.g., running, walking, lying, standing, and sitting. Moreover, training and testing of these algorithms are based on datasets collected in a controlled environment from only a handful of participants, and it is usually assumed that the athlete performs the same movement at the same pace for at least a certain period of time before the engine can recognize the activity.
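To illustrate the kind of peak-detection repetition counting mentioned above, here is a minimal sketch in plain Python. The smoothing window, threshold, and minimum peak spacing are made-up parameters for illustration, not values from any shipping product:

```python
import math

def moving_average(signal, window=5):
    """Simple smoothing to suppress sensor noise before peak picking."""
    half = window // 2
    return [
        sum(signal[max(0, i - half):i + half + 1])
        / len(signal[max(0, i - half):i + half + 1])
        for i in range(len(signal))
    ]

def count_reps(signal, threshold=1.0, min_gap=10):
    """Count local maxima above `threshold` that are at least `min_gap`
    samples apart, taking each such peak as one repetition."""
    smoothed = moving_average(signal)
    peaks = []
    for i in range(1, len(smoothed) - 1):
        is_peak = smoothed[i] > smoothed[i - 1] and smoothed[i] >= smoothed[i + 1]
        if is_peak and smoothed[i] > threshold:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return len(peaks)

# Synthetic acceleration trace: three "repetitions" as smooth bumps
# over a flat baseline (one bump per 50-sample period).
sig = [0.2 + (1.5 if 20 <= (t % 50) <= 30 else 0.0)
       * math.sin(math.pi * ((t % 50) - 20) / 10)
       for t in range(150)]
print(count_reps(sig))  # → 3
```

Real repetition counting is harder than this, of course: peaks must be distinguished per exercise, and amplitude and tempo vary between athletes, which is part of why larger datasets help.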
In an attempt to break new ground in this field, PUSH recently collaborated with scientists in the Adaptive Research Laboratory at the University of Waterloo (UW). The team, led by Professor Dana Kulic, used PUSH’s dataset to “push” the envelope in HAR. PUSH defined two problem scenarios, one tackled by the algorithms scientists at PUSH and the other assigned to the research group at UW. The first problem, assigned to the UW researchers, was movement classification for 50 different exercises. PhD candidate Terry Um came up with a novel idea: reconstruct the motion data from the PUSH band into an image-like structure and train convolutional neural networks for the classification task (see Figures 2 and 3). A total of about 450,000 repetitions from about 50,000 sets of exercises were used for training and testing the network. The data was collected from 1,441 male and 307 female athletes. This method achieved an accuracy of 92.14% in recognizing the exercises; Figure 4 shows the accuracy percentages for all 50 exercises. This is an impressive result that represents a new state of the art, especially considering that many of these exercises belong to the same family. For example, the method was able to distinguish seven types of bench pressing motions (which differed in grip width, incline, and whether a barbell or dumbbell was used) and five squat variations (back squat, goblet squat with DB/KB, split squat R/L). For more details on this study, read the original paper presented at the International Conference on Intelligent Robots and Systems.
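The core data transformation behind this approach, resampling each repetition's multi-channel sensor recording to a fixed length and stacking the channels into a 2D, image-like array, can be sketched as follows. The channel count, target length, and toy data are illustrative assumptions, not details taken from the paper:

```python
def resample(channel, target_len):
    """Linearly interpolate a 1D signal onto `target_len` points, so
    repetitions of different durations map to the same width."""
    n = len(channel)
    if n == 1:
        return [channel[0]] * target_len
    out = []
    for i in range(target_len):
        pos = i * (n - 1) / (target_len - 1)
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(channel[lo] * (1 - frac) + channel[hi] * frac)
    return out

def to_image(rep, target_len=100):
    """Stack a repetition's channels (e.g. 3-axis accelerometer plus
    3-axis gyroscope) into a channels-by-target_len grid that a CNN
    can consume like a single-channel image."""
    return [resample(ch, target_len) for ch in rep]

# A toy "repetition": 6 channels of slightly different lengths.
rep = [[float(t) for t in range(n)] for n in (37, 37, 37, 41, 41, 41)]
img = to_image(rep)
print(len(img), len(img[0]))  # → 6 100
```

Once every repetition is normalized into the same fixed-size grid, standard 2D convolutional layers can learn filters over both time and sensor channels.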
The second problem scenario was designed to enable a seamless user experience and a noticeable improvement in how the PUSH band is used. Here, athletes enter the gym, load their routine, and start their session, with no intermittent interaction with the app during the workout. The athletes perform several sets of different exercises, where consecutive sets do not necessarily correspond to the same exercise (e.g., the routine may involve circuits or supersets). After the workout is finished, the athletes end the data collection. The exercise detection engine should precisely detect the start and end of each set as well as the type of exercise performed during each set. Hence, the problem is two-fold: set detection and activity recognition, either of which may be performed in real time. We chose 131 exercises for this task and used the same dataset as in the first scenario for training and testing. Full-session data was also collected to assess the performance of the engine on both set detection and exercise recognition. We used a proprietary methodology to train our ML algorithms and standard assessment metrics to evaluate the system’s performance. Our set detection algorithms achieved an accuracy of 95% (sensitivity = 98%, specificity = 84%), while the activity recognition engine performed exercise classification with 92.2% accuracy.
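To make the set-detection half of the problem concrete, here is a hypothetical sketch that segments a session by thresholding the sliding-window variance of a motion signal (active movement is noisy, rest is flat). The window size and threshold are invented for illustration and are unrelated to PUSH's proprietary method:

```python
def window_variance(signal, i, w):
    """Variance of the w-sample window starting at index i."""
    seg = signal[i:i + w]
    mean = sum(seg) / len(seg)
    return sum((x - mean) ** 2 for x in seg) / len(seg)

def detect_sets(signal, w=10, threshold=0.05):
    """Return (start, end) sample indices of contiguous regions whose
    sliding-window variance exceeds `threshold`, i.e. candidate sets."""
    active = [window_variance(signal, i, w) > threshold
              for i in range(len(signal) - w + 1)]
    sets, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            sets.append((start, i + w - 1))
            start = None
    if start is not None:
        sets.append((start, len(signal) - 1))
    return sets

# Synthetic session: rest, one oscillating "set", rest, another "set".
burst = [1.0 if t % 2 == 0 else -1.0 for t in range(60)]
session = [0.0] * 40 + burst + [0.0] * 40 + burst
print(len(detect_sets(session)))  # → 2
```

Each detected interval would then be handed to the exercise recognition engine for classification, which is what makes the two halves of the problem separable.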
Although PUSH has achieved the state of the art in HAR, we still believe that the application of ML in this industry should expand to more use cases than those currently being addressed by researchers. Whether it is an improved understanding of the dynamics of human movement, important trends in athletic performance previously hidden in long-term workout data, or even novel insights that enable coaches to regulate workouts to prevent potential injuries, these ideas have come much closer to realization thanks to access to big data and the potential of advanced algorithms. In our next blog posts, we will elaborate on how our human activity recognition engine will be used to improve coaches’ and athletes’ experience. We will also introduce new problems we have tackled and share novel insights gained by applying AI and ML algorithms to our big dataset. We believe this can set a new standard for how wearable technology improves human health in general, and more specifically for how such breakthroughs can take athletic performance enhancement to the next level.
About The Author
Vahid B. Zadeh - Chief Algorithms Officer
Vahid holds a Master's degree in biologically-inspired motion control from the University of Waterloo. His research at the Center of Excellence in Design, Robotics, and Automation, as well as the lab of Computational Intelligence and Automation, was mainly focused on the design and implementation of robotic systems inspired by human movements.
Vahid's role is instrumental in shaping the algorithms that power PUSH's products, and his knowledge and experience of the dynamics and kinematics of human body motion have been critical in helping PUSH deliver accurate and reliable insights to our coaches.
References
[1] R. Poppe, “A survey on vision-based human action recognition,” Image and Vision Computing, vol. 28, no. 6, pp. 976–990, 2010.
[2] J. Aggarwal and L. Xia, “Human activity recognition from 3D data: a review,” Pattern Recognition Letters, vol. 48, pp. 70–80, 2014.
[3] O. D. Lara and M. A. Labrador, “A survey on human activity recognition using wearable sensors,” IEEE Communications Surveys & Tutorials, vol. 15, no. 3, pp. 1192–1209, 2013.
[4] X. Long, B. Yin, and R. M. Aarts, “Single-accelerometer-based daily physical activity classification,” in 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Sept. 2009, pp. 6107–6110.
[5] S. Chernbumroong, A. S. Atkins, and H. Yu, “Activity classification using a single wrist-worn accelerometer,” in 2011 5th International Conference on Software, Knowledge Information, Industrial Management and Applications (SKIMA), Sept. 2011, pp. 1–6.
[6] S. Chernbumroong, S. Cang, A. Atkins, and H. Yu, “Elderly activities recognition and classification for applications in assisted living,” Expert Systems with Applications, vol. 40, no. 5, pp. 1662–1674, 2013.
[7] D. Biswas, A. Cranny, N. Gupta, K. Maharatna, J. Achner, J. Klemke, M. Jöbges, and S. Ortmann, “Recognizing upper limb movements with wrist worn inertial sensors using k-means clustering classification,” Human Movement Science, vol. 40, pp. 59–76, 2015.
[8] D. Morris, T. S. Saponas, A. Guillory, and I. Kelner, “RecoFit: using a wearable sensor to find, recognize, and count repetitive exercises,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2014, pp. 3225–3234.
[9] T. T. Um, V. Babakeshizadeh, and D. Kulic, “Exercise motion classification from large-scale wearable sensor data using convolutional neural networks,” arXiv preprint, https://arxiv.org/abs/1610.07031, 2016. [Accessed 6 Nov 2017]