Evolving video capture from novel eCOA to validated assessments
Video clinical outcome assessments (vCOA) provide meaningful endpoints for tracking disease progression: they enable novel eCOAs at home and in hospital, increasing the frequency of assessments conducted in real-world settings.
Alongside these novel means of data capture, Aparito's data and machine learning capabilities provide a tremendous opportunity for automated analysis of video recordings, assisting clinicians with new measures such as the time taken for each phase of the Timed Up-and-Go (TUG) or the number of mouthfuls per minute in a dysphagia assessment.
Scale for the Assessment and Rating of Ataxia at home (SARAhome)
The Scale for the Assessment and Rating of Ataxia (SARA) tests were digitised as SARAhome in collaboration with DZNE.
SARAhome retains five of the eight domains included in the hospital-based SARA scale:
- gait
- stance
- speech*
- finger-to-nose
- fast alternating hand movements
SARAhome is a standardised, validated assessment in Atom5™, available for blinded, randomised scoring by independent central assessors and for pose estimation analysis.
*Automatic ratings for the speech domain are under development using machine learning (Grobe-Einsler et al., 2023).
Video Timed Up-and-Go (vTUG)
Adapting the traditional Timed Up & Go into a home-based standardised video assessment, we focus on each of its distinct phases:
- sit to stand
- gait
- 180-degree turn
- gait
- stand to sit
rather than reporting only the traditional total time to complete.
vTUG is a standardised test that uses video capture and pattern recognition to enable objective, sensitive, high-frequency assessments.
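As an illustration of per-phase timing, the sketch below computes phase durations from phase-transition timestamps. The phase names, function, and example timestamps are hypothetical assumptions for illustration; the actual Atom5™ pipeline and its event detection are not described here.

```python
# Hypothetical phase labels for a vTUG recording; the real detection of
# these transitions (e.g. from pose-estimation keypoints) happens upstream.
PHASES = ["sit_to_stand", "gait_out", "turn_180", "gait_back", "stand_to_sit"]

def phase_durations(boundaries):
    """Given six timestamps (seconds) marking the transitions between the
    five vTUG phases, return per-phase durations plus the traditional
    total completion time."""
    if len(boundaries) != len(PHASES) + 1:
        raise ValueError("expected one boundary per phase transition")
    durations = {
        phase: round(boundaries[i + 1] - boundaries[i], 2)
        for i, phase in enumerate(PHASES)
    }
    durations["total"] = round(boundaries[-1] - boundaries[0], 2)
    return durations

# Example with illustrative timestamps (seconds from start of video)
print(phase_durations([0.0, 1.8, 5.2, 7.1, 10.4, 12.3]))
```

This separates the single stopwatch figure into five phase-level measures, so a change in, say, turning speed is not masked by an unchanged total time.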
Video Hand Opening Time (vHOT)
vHOT adapts the traditional hospital-based Hand Opening Time (HOT) to a standardised home- or clinic-based assessment, available with frame-by-frame analysis and pose estimation analytics.
Feeding & Eating Evaluation viDeo analysiS (FEEDS)
Feeding & Eating Evaluation viDeo analysiS utilises Atom5™ to
- Identify specific points on the face and hands
- Apply machine learning techniques to characterise age-dependent eating skill and technique
- Analyse chewing and swallowing coordination
Example measures include “time taken for plate-to-mouth action” (or alternative gestures) and “number of mouthfuls per minute”.
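A minimal sketch of how the two example measures could be derived, assuming gesture events have already been detected upstream (e.g. from face and hand keypoints). The function names and example timestamps are illustrative, not part of the FEEDS specification.

```python
def mouthfuls_per_minute(event_times_s, video_length_s):
    """Rate of detected mouthful events over the whole recording.
    event_times_s: timestamps (seconds) of detected mouthfuls."""
    if video_length_s <= 0:
        raise ValueError("video length must be positive")
    return len(event_times_s) / (video_length_s / 60.0)

def plate_to_mouth_durations(starts_s, ends_s):
    """Time taken for each plate-to-mouth action, given paired
    start/end timestamps (seconds) of each detected gesture."""
    return [round(end - start, 2) for start, end in zip(starts_s, ends_s)]

# Example: 8 mouthfuls detected in a 2-minute clip
print(mouthfuls_per_minute([5, 18, 33, 47, 62, 78, 95, 110], 120))
print(plate_to_mouth_durations([5.0, 18.0], [6.4, 19.1]))
```

The same event timestamps can feed both measures, which is why a single video capture supports several endpoints at once.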