Augmenting and improving the human experience: from improving the lives of those with impaired vision, hearing, memory and movement, to expanding the forefront of what the human body can achieve. We set out to build solutions to problems faced by people whose disabilities inhibit the use of their senses.
Passionate developers, engineers, computer scientists and designers joined us for a weekend of building and learning, as we tried to develop tech to augment and improve the human experience, enhancing the body and mind. Some of the attendees are below!
- Ilias Kiourktsidis
- Robert Myrie
- Zika Wei
- Michele Cipollone
- Max Cohen
- Gary Roberts
- Serena Zhou
- Becks Simpson
- Jonathan Villegas
- Olu Adebari
- Trevor Oakley
- Ambrosio Pagaran
- Mustafa Ghafouri
These are the projects that were created!
Enabling people with disabilities to choose the best health insurance for their specific condition. The team applied machine learning and text recognition to produce a "coverage" score, making the choice simpler and quicker.
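The idea of a "coverage" score can be sketched in a few lines. This is a deliberately simplified illustration using keyword matching; the keyword lists and the scoring formula are assumptions for the sake of the example, not the team's actual machine-learning model.

```python
# Hypothetical sketch: score how well a policy document covers a condition.
# The term lists and the fraction-based score are illustrative assumptions.

def coverage_score(policy_text, condition_terms):
    """Return the fraction of condition-related terms mentioned in the policy."""
    text = policy_text.lower()
    hits = sum(1 for term in condition_terms if term.lower() in text)
    return hits / len(condition_terms) if condition_terms else 0.0

policy = "Covers physiotherapy, wheelchair rental and home visits."
terms = ["physiotherapy", "wheelchair", "speech therapy"]
score = coverage_score(policy, terms)  # 2 of the 3 terms found
```

A higher score would mean the policy text mentions more of the care a given condition needs, which is what makes comparing policies quicker.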
Enabling visually impaired people to navigate public spaces by helping them avoid obstacles. The team developed an Android app running an object classifier, combined with a depth and direction model, giving audio indications of objects in the way. For example, while the user is moving, the app would say things like: "there is a chair on your left, there is a person on your right".
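The step from detections to spoken cues can be sketched as below. This assumes the classifier yields (label, horizontal position) pairs and uses simple thresholds to pick a side; the real app also used a depth model and Android text-to-speech, which are omitted here.

```python
# Illustrative sketch: turn object detections into audio-style announcements.
# Assumes detections are (label, x_center_in_pixels) pairs; thresholds are
# illustrative assumptions, not the team's actual direction model.

def announce(detections, frame_width):
    """Map each detection to a left/right/ahead phrase, like the app's audio cues."""
    messages = []
    for label, x_center in detections:
        position = x_center / frame_width
        if position < 0.4:
            side = "on your left"
        elif position > 0.6:
            side = "on your right"
        else:
            side = "ahead of you"
        messages.append(f"there is a {label} {side}")
    return ", ".join(messages)

announce([("chair", 100), ("person", 500)], 640)
# → "there is a chair on your left, there is a person on your right"
```

On a phone, the returned string would be handed to the platform's text-to-speech engine.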
Talk to Lucy
Enabling tailored voice interaction for the elderly and for people losing their memory. The team built a conversational agent from the ground up, customised to the individual.
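Personalisation can be illustrated with a toy agent whose replies come from a profile of the individual's own facts. The keyword-matching approach and all names here are assumptions for illustration only; the team's actual agent was built from scratch and is not described in detail.

```python
# Minimal sketch of a per-person agent: replies are drawn from a profile of
# personal facts. The matching logic is an illustrative assumption.

def make_agent(profile):
    """Return a reply function that answers from this user's own facts."""
    def reply(utterance):
        words = utterance.lower()
        for topic, answer in profile.items():
            if topic in words:
                return answer
        return "I'm sorry, could you say that again?"
    return reply

lucy = make_agent({
    "medicine": "You take your blue pill after breakfast.",
    "daughter": "Your daughter Anna visits on Sundays.",
})
lucy("When does my daughter visit?")  # → "Your daughter Anna visits on Sundays."
```

Because the profile is per person, the same agent code produces different, familiar answers for each user, which is the point of customising it to the individual.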
Enabling carers and family members to know whether and when a relative with dementia or memory loss has taken their medicine. The team used machine learning to identify a pill box and the gesture of taking a pill, combined with a chatbot to query the information.
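The carer-facing query side can be sketched as follows: pill-taking events detected by the vision model are logged as timestamps, and a chatbot-style function answers whether a dose was taken on a given day. The data structures and phrasing are illustrative assumptions.

```python
# Sketch: answer a carer's query from logged pill-taking events.
# Assumes the vision model produces a list of datetime timestamps.
from datetime import datetime

def took_medicine(events, day):
    """Answer whether any pill-taking gesture was detected on `day`."""
    taken = [e for e in events if e.date() == day]
    if taken:
        times = ", ".join(e.strftime("%H:%M") for e in taken)
        return f"Yes, medicine was taken at {times}."
    return "No medicine-taking was detected that day."

events = [datetime(2018, 3, 10, 8, 5), datetime(2018, 3, 10, 20, 1)]
took_medicine(events, datetime(2018, 3, 10).date())
# → "Yes, medicine was taken at 08:05, 20:01."
```

A chatbot front end would parse the carer's question ("did Mum take her pills today?") into the `day` argument and read this answer back.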
Enabling visually impaired people to skim through many articles and get a voice-enabled, staged summary of the text. The team trained a deep learning model on CNN data to summarise content.
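The "staged" idea can be illustrated with a toy extractive summariser that returns progressively more sentences per stage, so a listener can skim first and then drill in. The word-frequency heuristic below is a stand-in assumption; it is not the team's trained deep learning model.

```python
# Illustrative staged extractive summary: stage 1 returns the single
# highest-scoring sentence, stage 2 the top two, and so on, in original order.
# The frequency heuristic is an assumption standing in for the real model.

def staged_summary(text, stage):
    """Return the `stage` highest-scoring sentences, in original order."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    freqs = {}
    for s in sentences:
        for w in s.lower().split():
            freqs[w] = freqs.get(w, 0) + 1
    scored = sorted(sentences, key=lambda s: -sum(freqs[w] for w in s.lower().split()))
    chosen = set(scored[:stage])
    return ". ".join(s for s in sentences if s in chosen) + "."

text = "Cats sleep a lot. Dogs bark loudly. Cats and dogs are pets."
staged_summary(text, 1)  # → "Cats and dogs are pets."
```

Each stage's output would then be passed to text-to-speech, letting the listener ask for "more detail" to move to the next stage.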