Follow your feet! Scientists develop £2,700 ‘intelligent’ SHOE that helps blind people to avoid obstacles
- Austrian computer scientists created a shoe that sends vibration alerts to the wearer
- It warns blind and visually impaired people of obstacles using ultrasonic sensors
- They’re now working on an embedded AI camera for future versions of the shoe
Computer scientists have created an ‘intelligent’ shoe that helps blind and visually-impaired people avoid obstacles.
The £2,700 (€3,200) product, called InnoMake, has been developed by Austrian company Tec-Innovation, backed by Graz University of Technology (TU Graz).
The product consists of waterproof ultrasonic sensors attached to the tip of each shoe, which trigger vibrations and audible alerts when the wearer approaches an obstacle.
The closer the wearer gets to an obstacle, the faster the vibration becomes, much like a parking sensor on the back of a vehicle.
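That distance-to-vibration mapping can be sketched in a few lines of code. This is an illustrative assumption, not Tec-Innovation’s published firmware: the four-metre range comes from the article, while the pulse timings are invented for the example.

```python
# Minimal sketch (not Tec-Innovation's firmware): mapping an ultrasonic
# distance reading to a vibration pulse interval, parking-sensor style.
# The 4 m range matches the article; the interval values are assumptions.

def vibration_interval_ms(distance_m: float, max_range_m: float = 4.0):
    """Return the pause between vibration pulses in ms, or None if clear."""
    if distance_m >= max_range_m:
        return None  # nothing within sensor range: stay silent
    # Scale linearly: 4 m -> 1000 ms between pulses, 0 m -> continuous buzz.
    return 1000.0 * (distance_m / max_range_m)

# An obstacle 1 m ahead pulses four times as fast as one 4 m away.
print(vibration_interval_ms(1.0))   # 250.0
print(vibration_interval_ms(5.0))   # None
```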
Tec-Innovation is now working on embedding an AI-powered camera as part of a new iteration of the product.
The shoe as it is now is already available on the market. The ultrasonic sensor is attached to the toe of the shoe. In the future, a camera plus a processor running the algorithm will be integrated there
‘Ultrasonic sensors on the toe of the shoe detect obstacles up to four meters [13 feet] away,’ said Markus Raffer, a founder of Tec-Innovation and himself visually impaired.
‘The wearer is then warned by vibration and/or acoustic signals. This works very well and is already a great help to me personally.’
The product price includes one device per foot, along with one pair of shoes (or installation on an existing pair of shoes), as well as a USB charger.
The system detects two pieces of information that are key to avoiding obstacles, the scientists say: the nature of an obstacle and its direction, especially whether it leads downward, such as a hole or stairs descending into a subway.
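One plausible way to flag such a downward-facing hazard is to angle a sensor at the ground and watch for readings that overshoot the expected floor distance. The geometry and threshold below are illustrative assumptions, not details the company has published.

```python
# Hedged sketch of one way a downward-facing hazard (a hole, descending
# stairs) could be flagged: a toe sensor angled at the ground normally
# measures a steady floor distance; a sudden jump suggests a drop-off.
# Both constants are illustrative assumptions, not published values.

FLOOR_DISTANCE_M = 0.35   # assumed steady reading when ground is level
DROP_THRESHOLD_M = 0.15   # assumed extra distance that signals a drop

def is_drop_off(reading_m: float) -> bool:
    """True if the downward reading is notably longer than the floor."""
    return reading_m > FLOOR_DISTANCE_M + DROP_THRESHOLD_M

print(is_drop_off(0.36))  # False: level pavement
print(is_drop_off(0.80))  # True: likely stairs or a hole ahead
```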
WHAT IS MACHINE LEARNING?
Machine learning (ML) is a branch of AI based on the idea that systems can learn from data, identify patterns and make decisions.
ML systems can learn to improve their ability to perform a task without being explicitly programmed to do so.
Such systems can find patterns or trends in sets of data to come to conclusions or help humans make better decisions.
Machine learning systems get more effective over time as they learn.
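The sidebar’s point can be shown with a toy example: the program below is never given an explicit distance rule, it infers one from labelled examples. The use of scikit-learn and the warn/safe data are assumptions made purely for illustration.

```python
# Toy illustration of learning a pattern from data instead of being
# explicitly programmed. scikit-learn and the data are assumptions.
from sklearn.tree import DecisionTreeClassifier

# Feature: [distance_to_object_m]; label: 1 = warn, 0 = safe.
X = [[0.3], [0.8], [1.5], [3.0], [3.8]]
y = [1, 1, 1, 0, 0]

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[0.5], [3.5]]))  # -> [1 0]: learned, not hard-coded
```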
‘Not only is the warning that I am facing an obstacle relevant, but also the information about what kind of obstacle I am facing, because it makes a big difference whether it’s a wall, a car or a staircase,’ said Raffer.
The approved medical device, which is available to buy on Tec-Innovation’s website, is just the first version of the product, however.
The scientists are working on integrating a camera-based recognition system that’s powered by machine learning, a type of artificial intelligence (AI).
Images captured by the embedded camera will essentially allow it to detect more about each obstacle as the wearer walks around.
‘We have developed state-of-the-art deep-learning algorithms modelled on neural networks that can do two main things after detecting and interpreting the content of the image,’ said Friedrich Fraundorfer at TU Graz.
‘They use camera images from the foot perspective to determine an area that is free of obstacles and thus safe to walk on, and they can recognise and distinguish objects.’
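TU Graz has not published its model, but the general shape of such a camera-based free-space detector can be sketched with an off-the-shelf semantic-segmentation network. Everything below, from the torchvision DeepLabV3 backbone to the file name foot_view.jpg, is an assumption; a production version would be trained on foot-perspective images with a dedicated ‘safe to walk’ class.

```python
# Illustrative only: this is NOT TU Graz's model. It runs a standard
# semantic-segmentation network the way a free-space detector might.
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

model = deeplabv3_resnet50(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("foot_view.jpg").convert("RGB")  # hypothetical frame
with torch.no_grad():
    out = model(preprocess(image).unsqueeze(0))["out"][0]
classes = out.argmax(0)  # per-pixel class labels

# In a trained free-space model, one class id would mark walkable ground;
# here we simply report which classes appear in the frame.
print(classes.unique())
```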
Tec-Innovation is now working on integrating the camera system into a new prototype so it’s both robust and comfortable.
David Schinagl in front of three shots taken from the shoes’ perspective. The algorithm developed by TU Graz (excerpts of which are on the screens) recognises and marks areas that can be walked on without danger
The firm also wants to combine the information collected while wearing the shoe into a kind of ‘street view navigation map’ for visually impaired people.
‘As it currently stands, only the wearer benefits in each case from the data the shoe collects as he or she walks,’ said Fraundorfer.
‘It would be much more sustainable if this data could also be made available to other people as a navigation aid.’
A funding application is currently being submitted to the Austrian Research Promotion Agency FFG to bring the navigation map to fruition, which researchers say would likely happen in the ‘distant future’.
Alexa, what am I holding? Amazon rolls out accessibility feature for blind and partially-sighted people
Amazon’s feature, Show and Tell, helps blind and partially sighted people identify common household grocery items
A feature for Amazon’s smart assistant Alexa, called Show and Tell, helps blind and partially sighted people identify common household grocery items.
The feature, which launched in the UK in December 2020, works with Amazon’s Echo Show range – devices that combine a camera and a screen with an Alexa-powered smart speaker.
With Show and Tell, UK Amazon customers can say ‘Alexa, what am I holding?’ or ‘Alexa, what’s in my hand?’.
Alexa then uses the Echo Show camera and its in-built computer vision and machine learning to identify the item.
The feature helps customers identify items that are hard to distinguish by touch, such as a can of soup or a box of tea.
‘The whole idea for Show and Tell came about from feedback from blind and partially sighted customers,’ said Dennis Stansbury, UK country manager for Alexa.
‘We understood that product identification can be a challenge for these customers, and something customers wanted Alexa’s help with.
‘Whether a customer is sorting through a bag of shopping, or trying to determine what item was left out on the worktop, we want to make those moments simpler by helping identify these items and giving customers the information they need in that moment.’