Tech Note: Using Research to Drive Product Decisions - Changes to Jump Testing

By Chris Chapman & Vahid B. Zadeh
April 24, 2019

This is a technical note regarding the upcoming PUSH iOS app release v4.4.6 (release date: upcoming), pertaining to an update to the Jump Height metric computation.

A recent study by Wee et al. (2018) compared the jump testing feature of the PUSH Band, specifically during a countermovement jump (CMJ), with a linear position transducer, a video-based iPhone app, and a criterion measure, in this case a force plate using the Force Decks software (Table 1). This was one of the first external (third-party), peer-reviewed publications examining an unloaded CMJ using the PUSH Band since the inception of the product over 5 years ago.

Table 1 & 2: Summary of Results from Wee et al. (2018)

The results of this study showed that the PUSH Band was both valid and reliable for mean velocity (MV) and mean power (MP) during a CMJ, with minimal error and bias in these metrics (Table 2). For us, this confirmed the strong performance of the device even with the older version of the hardware, and supported what we see with our internal data collections. However, our approach to science is not to affirm the positive (and risk confirmation bias), but to attempt to falsify the hypothesis and find the drawbacks within our products. While we conduct our own internal research and validation on a consistent, ongoing basis, we want external researchers to poke holes in our product and find the gaps or areas we can improve upon, giving us an unbiased (or even opposingly biased) assessment of our product from a user's perspective. In the short term it's not ideal to have unfavourable results in publications, but in the long term it's invaluable, as it allows us to continually improve, striving to be better every day. We need to remember that no technology is perfect, especially innovative and disruptive technology, but the company that can quickly adapt, plug the holes, and provide real-world solutions to users faster than its competitors will be the one that survives the technological circle of life.

One thing to note: Wee et al. (2018) used the Band 1.0 in their study, which is legacy technology that has been off the market since March 2018, when Band 2.0 was released. This highlights a major current issue in academia, the noticeable lag between the peer-reviewed publication model and the speed of technological progression, something we will discuss in more detail in a future post. The key consideration here is that we created the Band 2.0 hardware with new sensors to be much more accurate and to solve some of the limitations we found in Band 1.0. Some of the error observed in Wee et al. (2018) can be attributed to the older hardware, but the fact that this study used PUSH Band 1.0 does not take away from the utility of the paper nor the value we at PUSH found in its results.

That being said, our biggest takeaway was the results observed for jump height (JH). While the PUSH Band's results (r = 0.99, TEE = 1.22 cm) fared better than previous work comparing similar tools (Nuzzo et al. 2011), with respect to accuracy JH showed a large overestimation bias (14.4 ± 0.9 cm). We had initially assumed this was solely due to the old hardware. However, since we run data collections on a regular basis, it just so happened we had recently collected jump data internally for another project, so we decided to reanalyze that dataset to support or refute our speculation.
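For readers who want to reproduce these two statistics on their own paired device-versus-criterion data, here is a minimal Python sketch (function names are ours, purely illustrative) of the Pearson correlation and the typical error of the estimate (TEE), the latter taken as the standard deviation of the residuals from a least-squares regression of the criterion on the device values:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def tee(criterion, device):
    """Typical error of the estimate: SD of residuals from the
    least-squares regression of criterion values on device values."""
    n = len(device)
    mx, my = sum(device) / n, sum(criterion) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(device, criterion))
             / sum((a - mx) ** 2 for a in device))
    intercept = my - slope * mx
    residuals = [c - (slope * d + intercept) for d, c in zip(device, criterion)]
    return math.sqrt(sum(r ** 2 for r in residuals) / (n - 2))
```

On perfectly linear paired data these return r = 1 and TEE = 0; real device-versus-criterion data will show the kind of r and TEE values reported above.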

At the University of Toronto Biomechanics Lab, we assessed 3 different Band 2.0 devices against AMTI force plates (sampling frequency = 1000 Hz). We had 7 participants complete 3 squat jumps (SJ), 3 countermovement jumps with arms fixed (CMJ), 3 countermovement jumps with arm swing (CMJ-Arms), 3 drop vertical jumps from a 20-inch box (DVJ), and 3 sets of 10 pogo jumps (RSI-10/5). The primary purpose of this collection was to determine the optimal location of the PUSH Band on the waist belt for jump testing, compare response times for various iOS devices, and assess the inter-unit reliability (Figure 1). But this dataset also allows us to examine the validity of Band 2.0 metrics versus gold-standard force plates. All subjects and jumps were pooled for analysis.


Figure 1: (Left) Placement of PUSH Bands on sacrum and hip for assessment of optimal placement for jump testing metrics. (Right) Three bands placed on the same jump belt.

Figure 2: (Top) Flight time plotted for 3 PUSH Band 2.0 versus a criterion measure force plate (n = 819). (Bottom) 2 different placements of PUSH Band 2.0 versus a third criterion band placed at the recommended position on the sacrum (n = 546).

Flight time is the metric the inertial measurement unit (IMU) uses to compute jump height (JH). Looking at the flight time data from all 3 PUSH Bands (Figure 2), we can see the data line up very well, both between bands and against the criterion force plate. We also get a quick answer to our first question, with the sacral placement trending closer to the criterion than the side-hip placement. Digging deeper, we used data from a single band in order to run more in-depth statistical analyses (Table 3). Looking specifically at jump height, the metric in question, in addition to a regression analysis we added a Bland-Altman plot, which shows the individual jump error and the overall bias trendline. Even though the JH correlation is nearly perfect, we can see a rather large bias, similar to the results shown in the Wee et al. (2018) paper.
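For context, converting flight time to jump height is typically done with the standard projectile-motion formula, h = g·t²/8, which assumes takeoff and landing occur at the same center-of-mass height (a generic sketch; the exact PUSHcore implementation is not described here):

```python
G = 9.81  # gravitational acceleration, m/s^2

def jump_height_cm(flight_time_s):
    """Jump height (cm) from flight time (s), assuming takeoff and
    landing occur at the same center-of-mass height: h = g * t^2 / 8."""
    return 100 * G * flight_time_s ** 2 / 8
```

For example, a 0.5 s flight time corresponds to roughly a 30.7 cm jump, which is why small flight-time errors matter: height scales with the square of flight time.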

It turns out this bias traces back to a decision in the early days of PUSH to align the JH metric with the numbers measured using a Vertec, as it was the most widely used tool in the field. The data available at the time showed that Vertec JH slightly overestimated JH derived from flight time (FT). This logically makes sense, since athletes can change shape in the air in order to reach up and hit the Vertec slats, as well as cheat the standing arm-reach measurement by depressing the scapula (seen all too often in athletes trying to game the test). The Vertec results are less likely to relate to the actual center-of-mass displacement that would be measured by most other jump testing technology, and the overestimation has been shown in previous literature (Petushek et al. 2010). This also suggests the Vertec JH is less reliable than other jump testing tools (Nuzzo et al. 2011), which is likely due to the extra user "degrees of freedom" available when completing the task.

The initial bias in the JH calculation to align it with Vertec data was implemented as a simple conversion factor. If we remove this conversion factor in the PUSHcore algorithm engine, reprocess the same flight time data to get jump height, and plot it next to the original data, the trend remains exactly the same but the data shift down and to the left. Significantly less error is observed and the bias hovers just above zero (Figure 3 Right; Table 3). Given the large sample size (n = 273) and the nearly perfect correlation between multiple bands (n = 830), this is more than enough evidence to support a decision to remove the conversion factor permanently. The next step is reconciling all of the old data with the change moving forward.


Figure 3: Jump height from PUSH Band 2.0 with (Top Left) and without (Top Right) the Vertec conversion factor plotted versus force plate using linear regression (n = 273). Bland-Altman error plot of jump height for PUSH Band 2.0 with (Bottom Left) and without (Bottom Right) the Vertec conversion. Black lines indicate limits of agreement and blue line indicates the overall trendline of the data.
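For reference, the bias and 95% limits of agreement shown in a Bland-Altman plot can be computed in a few lines of Python (a generic sketch of the standard method, not our exact analysis pipeline; the function name is ours):

```python
import statistics

def bland_altman(device, criterion):
    """Mean bias and 95% limits of agreement between paired measurements.

    Returns (bias, lower_loa, upper_loa), where the limits of agreement
    are bias +/- 1.96 * SD of the pairwise differences.
    """
    diffs = [d - c for d, c in zip(device, criterion)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A large positive bias with a flat trendline, as seen with the Vertec conversion in place, indicates a systematic offset rather than random error, which is exactly why a single conversion factor can be cleanly removed.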

In the PUSH app version 4.4.6 (release date: upcoming), we have removed the Vertec conversion factor. The JH reported in the PUSH app now aligns with force plates as opposed to the Vertec measurement. No other jump testing metrics are affected by this change.

If you are a PUSH Portal user, all previous data will be converted automatically, both in the reporting section (session data and test reports) and all exports. All new data will be collected without the Vertec conversion and all data will match up.

However, the previous data in the PUSH iOS app will not be changed, as that data is stored locally on your device. To manually compare older data collected with app versions 4.4.5 and earlier, you can use the following conversion equations:

  • If you collected JH in metric (cm): New JH (cm) = 0.95 * Old JH (cm) - 2.3 * 2.54

  • If you collected JH in imperial (inch): New JH (inch) = 0.95 * Old JH (inch) - 2.3
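Applied in code, the two equations above can be sketched as follows (function names are ours, for illustration only; the imperial equation scaled by 2.54 cm/inch gives the metric one):

```python
CM_PER_INCH = 2.54

def convert_old_jh_cm(old_jh_cm):
    """Re-express a pre-4.4.6 jump height (cm) without the Vertec factor."""
    return 0.95 * old_jh_cm - 2.3 * CM_PER_INCH

def convert_old_jh_inch(old_jh_inch):
    """Re-express a pre-4.4.6 jump height (inches) without the Vertec factor."""
    return 0.95 * old_jh_inch - 2.3
```

For example, an old reading of 20 inches becomes 0.95 × 20 − 2.3 = 16.7 inches under the new computation.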

In the end, we want the PUSH Band to be as valid and reliable as possible to provide users with data they can trust. Removing the previously implemented correction is something we can stand behind as the data support this decision. If you have any further questions please reach out:


Nuzzo, J.L., Anning, J.H., Scharfenberg, J.M. (2011). Reliability of three devices used for measuring vertical jump height. Journal of Strength & Conditioning Research. 25 (9): 2580-90.

Petushek, E., VanderZanden, T., Wurm, B., Ebben, W.P. (2010). Comparison of jump height values derived from a force platform and Vertec. Proceedings of the 28th International Conference on Biomechanics in Sports. Available:

Wee, J.F., Lum, D., Lee, M., Roman, Q., Ee, I., Suppiah, H.T. (2018). Validity and reliability of portable gym devices and an iPhone app to measure vertical jump performance. Sport Performance & Science Reports. 44 (2).