The Ifinity Group is working on a project called Virtual Warsaw (Virtualna Warszawa). The project includes a knowledge game operated via beacons placed in the real world around parts of a couple of parks at the edge of the Vistula river. The project's scope includes an application that lets users answer quiz questions (triggered by close proximity to specific beacons) and navigate around these places (two parks and a subway station), while taking into account the needs of blind and vision-impaired people.
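As a rough illustration of how beacon-triggered questions might work, here is a minimal Python sketch; the beacon IDs, RSSI threshold, and question texts are made-up assumptions for illustration, not the project's actual data or API:

```python
from typing import Optional

# Illustrative assumption: a signal stronger (higher) than this dBm value
# counts as "close enough" to trigger the question for that beacon.
RSSI_NEAR_THRESHOLD = -60

# Hypothetical beacon IDs and quiz questions.
QUESTIONS = {
    "beacon-park-01": "Which river runs along this park?",
    "beacon-metro-02": "Which metro line serves this station?",
}

def on_beacon_sighting(beacon_id: str, rssi: int) -> Optional[str]:
    """Return a quiz question when the user is close to a known beacon."""
    if rssi >= RSSI_NEAR_THRESHOLD and beacon_id in QUESTIONS:
        return QUESTIONS[beacon_id]
    return None
```

In practice the platform's beacon-ranging API (e.g. iBeacon proximity callbacks) would drive `on_beacon_sighting`; the sketch only shows the trigger decision itself.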
I was asked to prepare an alternative solution for part of the navigation, since the initial setup had some issues (GPS is still a fickle little thing – not that much of a problem in a car, but much more so when you are on foot AND unable to see).
One of the directions that came up while discussing the problem was to describe the surroundings to the person using the app and let them make a more informed decision based on that set of descriptions.
This approach was tested with a group of people inside a subway station, using a simple pen-and-paper diagram with routes/zones described as they are in reality. The navigation was limited to exits and entrances (what else would you want to do at a subway station, especially when you can't see a thing?), which removed the need to include any search options in that part of the application.
The idea was pretty simple: we know where you are, and we assume you know from which direction you came in, so we know how to start telling you which way is which. After that it was a simple matter of correlating your current position with your previous one and giving you a description based on the assumption that we know where you want to go (i.e. if you came by train, it was pretty safe to assume you want to get out of the station, not take a tour around it).
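The "we know where you came from" logic can be sketched roughly like this; the beacon names, coordinates, and angle thresholds are purely illustrative assumptions, not the project's implementation:

```python
import math

# Hypothetical beacon positions on a flat station map (x = east, y = north).
BEACONS = {
    "platform": (0.0, 0.0),
    "stairs":   (0.0, 10.0),   # north of the platform
    "exit-A":   (-5.0, 10.0),  # west of the stairs
}

def heading(prev: str, curr: str) -> float:
    """Bearing of travel in degrees (0 = north, 90 = east)."""
    (x0, y0), (x1, y1) = BEACONS[prev], BEACONS[curr]
    return math.degrees(math.atan2(x1 - x0, y1 - y0)) % 360

def relative_direction(prev: str, curr: str, target: str) -> str:
    """Describe the target relative to the user's direction of travel,
    inferred from the previous and current beacon sightings."""
    diff = (heading(curr, target) - heading(prev, curr)) % 360
    if diff < 45 or diff > 315:
        return "ahead"
    if diff < 135:
        return "to your right"
    if diff < 225:
        return "behind you"
    return "to your left"
```

So a user who walked from the platform to the stairs would hear that exit-A is "to your left" – directions phrased relative to their own movement, not to compass points they cannot check.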
It turns out that even without letting you search for a specific exit, a description alone was enough for blind people to make a decision and navigate to the exit they wanted. After that it was a simple matter of implementing the solution in the existing toolset.
The second stage was testing the idea in open terrain (a park). The same principle applied here – assume we know where you came from, and we know where you need to go (the park games for blind/vision-impaired players have a pre-set order).
After initial testing with a description of each and every crossing, it was pretty obvious that this was overkill. Nobody has the time for, or even wants to listen to, an elaborate description of a path in a park – they want to get from A to B as fast and as hassle-free as possible – so the NAVIGATING part of the application had priority over everything else.
The solution to this part was to describe the specific path the person needs to follow rather than the whole area (i.e. "At the next intersection, turn left" rather than "You are at an intersection. The paths lead left, right, and straight ahead").
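A minimal sketch of that path-centric approach, assuming the pre-set route is stored as a simple list of turns (the route format and instruction wording are my own illustration):

```python
# Hypothetical pre-set route: one turn entry per intersection along the path.
ROUTE = ["straight", "left", "straight", "right"]

def next_instruction(step: int) -> str:
    """Return the single short instruction for the upcoming intersection,
    instead of describing every branch the intersection offers."""
    if step >= len(ROUTE):
        return "You have arrived at your destination."
    turn = ROUTE[step]
    if turn == "straight":
        return "At the next intersection, continue straight ahead."
    return f"At the next intersection, turn {turn}."
```

The app would read out `next_instruction(step)` as each intersection's beacon comes into range and then increment `step` – one sentence per decision point, nothing more.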
This leaves the project in an "acceptable" state for now, but the door is still open for improvement. Blind and vision-impaired people require a very specific approach when it comes to smartphones – many STILL prefer phones with hardware keyboards, as the tactile response offers much more ease of use than any of the text-to-speech/screen-reader tools available at the moment.