Tracking and graphical reconstruction of the position and pose of a subject in a living environment can be achieved using a private network of ambient sensors. Such deployments have significant applications in areas such as assisted living, three-dimensional imaging and physical rehabilitation. Recent advances in sensor technologies, artificial intelligence and graphical visualisation tools have helped to address several key challenges associated with posture tracking and reconstruction. However, challenges remain in increasing the precision of tracking and reconstruction.

This thesis presents a system that employs a novel approach for generating accurate graphical reconstructions using a private cloud computing architecture. The system comprises two edge nodes, each equipped with a local RGB-D sensor, and a centralised host computer. A method is proposed for constructing a three-dimensional kinematic model of pose and position from tracked spatial landmarks. Each edge node tracks these landmark coordinates in near real-time and locally approximates the subject’s position, torso orientation and leg kinematic angles using the tracked data and three kinematic models. The approximated values are transmitted to the host, where a data fusion algorithm evaluates their suitability before dispatching them for graphical reconstruction.

The tracking and reconstruction precision of the integrated system is evaluated using a set of novel case studies in a natural living environment, under both occluded and un-occluded conditions, to assess its accuracy and robustness. The un-occluded test cases are compared against calibrated ground-truth values to quantify the accuracy of the system’s approximations. The occluded test results are then compared against the un-occluded results to determine the effect that occlusions have on the system’s approximation accuracy.
The results from both sets of test cases, together with the observed limitations, are presented.
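The host-side data fusion step, which evaluates the suitability of the edge nodes’ estimates before reconstruction, can be sketched roughly as follows. This is a minimal illustrative sketch only: the payload fields, the confidence score, the threshold of 0.5 and all names are assumptions for illustration, not the fusion algorithm actually developed in this thesis.

```python
from dataclasses import dataclass, field

@dataclass
class EdgeEstimate:
    # Hypothetical payload an edge node might transmit to the host:
    # approximated position, torso orientation, leg kinematic angles,
    # and a confidence score (e.g. derived from how many spatial
    # landmarks the local RGB-D sensor currently tracks).
    node_id: int
    position: tuple
    torso_orientation_deg: float
    leg_angles_deg: list = field(default_factory=list)
    confidence: float = 0.0

def fuse(estimates):
    """Select the most suitable estimate for graphical reconstruction.

    A simple confidence-based selection: discard estimates below a
    minimum confidence (e.g. a heavily occluded view), then return the
    highest-confidence remainder, or None if no node has a usable view.
    """
    usable = [e for e in estimates if e.confidence >= 0.5]
    if not usable:
        return None
    return max(usable, key=lambda e: e.confidence)

# Example: node 1 is partially occluded, node 2 has a clear view,
# so the host dispatches node 2's estimate for reconstruction.
occluded = EdgeEstimate(1, (1.2, 0.8), 90.0, [10.0, 5.0], confidence=0.4)
clear = EdgeEstimate(2, (1.1, 0.9), 88.0, [12.0, 6.0], confidence=0.9)
best = fuse([occluded, clear])
```

In practice the thesis’s fusion algorithm may combine rather than merely select estimates; this sketch only illustrates the suitability-evaluation step that precedes dispatch to the reconstruction stage.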