Monitoring the physical presence of elderly people and reporting to a physician or relative. This includes the state of the elderly person, response or non-response to vocal communication, visual signs of alerting behaviour or other patterns, as well as events such as fall detection or injuries. Third-party devices, such as a wearable sensor, would also provide important information. Yet another app would locate the user in the house and establish communication between the user and a third party (physician, nurse, parent, etc.).
Monitoring can be passive, relying on depth cameras (RGB-D) and LIDAR sensors to (a) map the environment, (b) establish the gait and pose of the elderly user at any given moment, and (c) monitor the elderly person’s heart rate and other data via a wearable device. Upon detecting (via classification) an accident, the app will initiate contact (e.g., verbal) to establish whether an accident has happened, in which case it may opt to alert the professionals or carer. Gait detection and classification systems would raise alerts about falls or accidents in real time, whereas monitoring of overall activity, time spent sleeping, time spent watching TV, and the like would prompt the system to try to motivate the user to get active.
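The fall-detection step described above could, in its simplest form, be a rule over pose data extracted from the RGB-D skeleton tracker. The sketch below is purely illustrative: the sample format, function name, and thresholds are assumptions, and a deployed system would replace this rule with a trained classifier.

```python
# Hypothetical sketch: rule-based fall detection from pose key-points.
# Assumes a time-ordered stream of (timestamp_s, hip_height_m) samples
# taken from an RGB-D skeleton tracker; thresholds are illustrative,
# not clinically validated.

FALL_HEIGHT_M = 0.40   # hip below this height suggests the person is down
FALL_SPEED_M_S = 1.0   # rapid downward motion preceding it

def detect_fall(samples):
    """samples: list of (t_seconds, hip_height_m), time-ordered.
    Returns True if a rapid drop ends below FALL_HEIGHT_M."""
    for (t0, h0), (t1, h1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue
        speed = (h0 - h1) / dt  # downward velocity, m/s
        if h1 < FALL_HEIGHT_M and speed > FALL_SPEED_M_S:
            return True
    return False
```

A slow transition below the height threshold (e.g., sitting down on the floor deliberately) does not trigger the rule, which is why the speed condition matters; this is also where the verbal follow-up ("has an accident happened?") would disambiguate the remaining false positives.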
All of these can provide a highly sophisticated, low-latency alerting system and further alleviate monitoring and care-giving costs usually attributed to human personnel.
Dementia Prediction & Detection
By combining state-of-the-art deep learning with the encoding of symbolic information collected through robot-user interactions, prediction or detection of dementia progression is possible. For example, if a user starts forgetting things or constantly asks the same questions, that information, among other data-mined information, could be auto-encoded and given as input to a CNN/DNN that monitors the robot’s user and alerts physicians to their condition. Such an app could drastically cut costs for the care-giving industry whilst maintaining an accurate picture of the user’s mental health. This app will require real data (and metadata) in order to be trained, and must then be empirically evaluated with patients over a long period, in order to detect whether dementia is progressing. Prediction would function in a similar fashion, but instead of monitoring constantly, would run over a specific time-frame (e.g., 48 hours, 72 hours, etc.) and would use indicators dictated by our partners, such as facial expressions or emotional state, physical activities, or even repetition of phrases. Prediction would only serve as a tool prompting further medical evaluation, whereas detection would serve as a constant monitoring tool.
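The encoding step above can be made concrete with a small sketch. Assuming (hypothetically) that the interaction log for a monitoring window reduces to the list of questions the user asked, the symbolic data can be summarised into a numeric feature vector of the kind a downstream CNN/DNN or auto-encoder would consume; the specific features and window length here are illustrative assumptions, not the project's actual indicator set.

```python
# Hypothetical sketch: encoding a window of symbolic interaction data
# (questions the user asked) into a numeric feature vector for a
# downstream neural model. Feature choice is an illustrative assumption.

from collections import Counter

def encode_window(questions):
    """questions: list of question strings asked within one monitoring
    window (e.g., 48 or 72 hours).
    Returns [n_questions, n_unique, repetition_rate, max_repeats],
    where a high repetition_rate flags the same question being re-asked."""
    counts = Counter(q.strip().lower() for q in questions)
    n = sum(counts.values())
    unique = len(counts)
    rep_rate = 0.0 if n == 0 else round(1.0 - unique / n, 3)
    max_rep = max(counts.values(), default=0)
    return [n, unique, rep_rate, max_rep]
```

For instance, a window in which "Where are my keys?" is asked twice and one other question once yields a repetition rate of 0.333; a rising trend in that feature across windows is the kind of signal the monitoring model would learn from.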
Providing entertainment to elderly users may be a lucrative niche area of the overall robotics entertainment industry. Simple games and memory exercises, simple trivia or anecdotes, in combination with an emotional and affective robotic personality, would make the experience of having a robot much more pleasant. As such, this suite of apps would revolve around motivating users to be physically active, to use mental exercises and other similar stimuli, play memory games, or even reminisce.
Providing helpful education, especially regarding current technological advances and new device usage (e.g., a laptop, tablet, or mobile phone). This includes handling devices such as ovens and microwaves, as well as general education through vocal communication. Educational material will be pre-loaded on a cloud service, and will use an ontology, object recognition, a manual, and related natural language processing (NLP) and natural language understanding (NLU) to formulate replies relevant to the question being asked, rather than offering repetitive phrases. Therefore, simple passive monitoring would allow the robot to “understand” when it has to access information, and then process it in a manner that produces a meaningful reply, allowing it to aid and educate the user.
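The retrieval half of this pipeline can be sketched with a toy example. The sketch below assumes a pre-loaded manual represented as a question-to-answer mapping and scores entries by word overlap; a real deployment would use the NLP/NLU models and ontology described above rather than this hypothetical overlap heuristic, and the manual entries here are invented for illustration.

```python
# Hypothetical sketch: retrieving a relevant reply from pre-loaded
# educational material by word overlap. Manual contents, stop-word list,
# and scoring are illustrative assumptions, not the project's NLU stack.

import string

MANUAL = {
    "turn on the oven": "Press the power button, then select a temperature.",
    "connect the tablet to wifi": "Open Settings, choose Wi-Fi, and pick your network.",
}

STOPWORDS = {"the", "a", "an", "how", "do", "i", "to"}

def tokens(text):
    """Lower-case, strip punctuation, drop stop-words."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split()) - STOPWORDS

def answer(question):
    """Return the manual entry sharing the most content words with the
    question, or None when nothing pre-loaded is relevant."""
    q = tokens(question)
    best_key = max(MANUAL, key=lambda k: len(tokens(k) & q))
    return MANUAL[best_key] if tokens(best_key) & q else None
```

Returning None for off-manual questions is the hook for the "understand when it has to access information" behaviour: the robot would then fall back to the cloud service or decline, instead of offering a canned, repetitive phrase.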