Location is a strange thing. As technology advances, the methods available for determining the physical location of a specific device keep increasing. This has ramifications both for end-users and for us as software developers. This post is a change from the usual Styling Android format of actually solving problems with code; instead it is an exploration of how determining location is becoming more complex, and of some things that we should be giving consideration to.
What got me thinking about this topic was that I recently used an app for a connected device which required me to grant the app location permission before it would pair with the device. Neither the device itself nor the app had any need to know my location in order to work, but the pairing process was conditional upon the fine location runtime permission being granted. This got me a little concerned, so I investigated further and realised that the Android framework requires the fine location permission to perform a BLE scan. The reason for this is that in some cases it may be possible to determine the location of the device if the BLE scan shows a nearby device (such as a beacon) which has a known location. That got me thinking that there are quite a few ways in which it’s possible to determine the location of a device.
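To illustrate, here is a minimal sketch (not from the app in question; the Activity and function names are hypothetical) of the flow that Android imposes: we must hold `ACCESS_FINE_LOCATION` before starting a BLE scan, so we check for it, request it if necessary, and only then scan:

```kotlin
import android.Manifest
import android.bluetooth.BluetoothManager
import android.bluetooth.le.ScanCallback
import android.bluetooth.le.ScanResult
import android.content.pm.PackageManager
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat

class PairingActivity : AppCompatActivity() {

    // Ask for fine location; only start scanning once it has been granted
    private val requestLocation = registerForActivityResult(
        ActivityResultContracts.RequestPermission()
    ) { granted -> if (granted) startBleScan() }

    private val scanCallback = object : ScanCallback() {
        override fun onScanResult(callbackType: Int, result: ScanResult) {
            // Handle a discovered device, e.g. begin pairing
        }
    }

    fun startBleScanIfPermitted() {
        val permission = Manifest.permission.ACCESS_FINE_LOCATION
        if (ContextCompat.checkSelfPermission(this, permission) ==
            PackageManager.PERMISSION_GRANTED
        ) {
            startBleScan()
        } else {
            requestLocation.launch(permission)
        }
    }

    private fun startBleScan() {
        // Without the location permission this scan returns no results
        getSystemService(BluetoothManager::class.java)
            ?.adapter?.bluetoothLeScanner?.startScan(scanCallback)
    }
}
```

Note that the scan itself never touches any location API; the permission is required purely because the scan results could, in principle, reveal where the device is.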
Back in the early days of mobile phones, the only way to determine the location of a device was through GPS. This was, at the time, heavy on battery, and so many users kept it switched off when not required. That is no longer the case, as mobile phone GPS performance has improved drastically; but that battery cost actually drove innovation in other areas of determining location.
One major one was based upon the MAC addresses of nearby WiFi hotspots. If the physical location of a specific hotspot is known, then being within range of that hotspot enables the location to be determined quite accurately. If the location is not known but the user has GPS enabled, then a location for that hotspot can be uploaded to the location provider database, and its location is then known for future lookups. If multiple hotspots are detected, then the location can be determined even more accurately. The device does not need to actually be connected to any of these hotspots (even ones which require a login) to be able to see their MAC addresses. This functionality is built in to the location provider within Android – currently the Play Services FusedLocationProvider (“Fused” indicates that it can determine location from various sources). We, as developers, use this without even realising, but are still required to have the location permissions granted.
Another mechanism for determining device location relies on the fact that cellular network towers each have a unique ID. Similarly to MAC address scanning, triangulating from multiple cell towers can provide a fairly accurate location. Once again, this is done for us by FusedLocationProvider.
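Both of these sources are hidden behind the same API. A typical (hedged) usage of the fused provider looks something like the sketch below, assuming the location permission has already been granted; note that the caller never learns whether the fix came from GPS, WiFi, or cell towers:

```kotlin
import android.annotation.SuppressLint
import android.content.Context
import android.location.Location
import com.google.android.gms.location.LocationServices

// Assumes ACCESS_FINE_LOCATION (or ACCESS_COARSE_LOCATION) is already granted
@SuppressLint("MissingPermission")
fun logLastKnownLocation(context: Context) {
    val client = LocationServices.getFusedLocationProviderClient(context)
    client.lastLocation.addOnSuccessListener { location: Location? ->
        // location may be null if no fix has been obtained yet
        location?.let { println("lat=${it.latitude}, lng=${it.longitude}") }
    }
}
```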
We’ve already explored how a BLE scan may be able to provide location information if the location of a detected in-range device is known, and this is where it really starts to get a little confusing for the user. In my specific case, the app in question really did not use the request permission rationale to clearly explain why the permission was needed. It simply stated “we need location permission to be able to pair with your device over Bluetooth”. This is a perfect example of how not to use the permission rationale. We certainly shouldn’t overload the user with technical information, but something like this would, in my opinion, be more explanatory: “In some cases there may be a third-party Bluetooth device nearby which has a well-known physical location, and because it is possible we may be able to determine your location from nearby devices, we are required to ask for location permission before we can scan for Bluetooth devices”.
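A sketch of wiring a more honest rationale into the standard permission flow might look like this (the helper function, wording, and dialog are my own illustration, not from the app in question):

```kotlin
import android.Manifest
import androidx.appcompat.app.AlertDialog
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat

fun AppCompatActivity.explainLocationForBle(onProceed: () -> Unit) {
    val permission = Manifest.permission.ACCESS_FINE_LOCATION
    // Only show the rationale when the framework indicates we should,
    // i.e. the user has previously denied the permission
    if (ActivityCompat.shouldShowRequestPermissionRationale(this, permission)) {
        AlertDialog.Builder(this)
            .setMessage(
                "Scanning for Bluetooth devices can sometimes reveal your " +
                    "location (for example via nearby beacons with known " +
                    "positions), so Android requires location permission " +
                    "before we can scan."
            )
            .setPositiveButton("OK") { _, _ -> onProceed() }
            .show()
    } else {
        onProceed()
    }
}
```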
Interestingly, Google released the first developer preview of Android 11 a couple of days before this article was published (but a few days after I had written it). Android 11 contains a small permissions change which is, perhaps, relevant to this: One-time Permissions. This is covered in more detail here, but essentially it allows the user to temporarily grant an app a specific permission. This makes great sense in cases where a specific permission may be required for some kind of set-up, yet will not be required for normal operation.
Here’s where it potentially becomes a minefield and I make absolutely no apologies for only identifying possible problems without offering workable solutions.
Just because we have all of these mechanisms for determining location doesn’t mean that there can’t be more. The number of sensors on modern mobile devices is increasing all the time, but even some of the ones that have been there for a while could be used to determine location thanks to advances in technology elsewhere.
For example, phones have been equipped with cameras for many years now. It should be fairly obvious that if I were to take a picture of someone standing in front of Tower Bridge, it would be easy to work out that we were in London, and an accurate location could be determined from the angle and framing of the shot. That would also work on a perfectly normal street if the street name and one or more building numbers were visible in the shot. That would make it pretty easy for humans to determine a physical location. However, current advances in machine learning mean that machines are likely already able to determine location pretty accurately from the photographs we take on our devices.
Please remember that in the vast majority of cases, we include location metadata in the pictures we take with the phone camera. If you don’t believe me go in to Google Photos, select a picture at random, and press the info button – it shows you where that picture was taken. Image hosting services (not only Google) have huge amounts of images containing location metadata. That is a pretty huge set of training data for a ML model.
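That metadata is trivially accessible to any app which can read the image. For example, using the AndroidX ExifInterface library, the embedded GPS coordinates of a JPEG can be read back in a few lines (a sketch; the file path is a placeholder):

```kotlin
import androidx.exifinterface.media.ExifInterface

fun printPhotoLocation(path: String) {
    val exif = ExifInterface(path)
    // latLong is null when the image carries no GPS EXIF tags
    val latLong = exif.latLong
    if (latLong != null) {
        println("Photo was taken at lat=${latLong[0]}, lng=${latLong[1]}")
    }
}
```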
Is it possible that before too much longer we may need to request location permission along with camera permission? While our app may not be analysing images from the camera to determine location, it’s becoming more likely that an app potentially could do that.
Another sensor that may also be used is the microphone. With voice controlled assistants, the microphone in our devices is always on. An OS could easily upload audio snippets with location data to a cloud service which may also be building a data set for training an ML model. While this may not be particularly accurate in many cases, being close to something which has a distinctive audio signature (such as the chimes of Big Ben) may also be sufficient to determine a rough location.
Moreover, combining data from multiple sources may also help narrow down possible locations: barometer and humidity sensors give information on the current weather; ambient light could give an indication of day or night; compass, accelerometer, and gyroscope may be combined to indicate whether the device is moving, how quickly, and in which direction. All of this may be combined with the audio data to gradually narrow down the possible location as more and more data is gathered over time. Machine learning is proving to be particularly good at spotting patterns which are not obvious to humans.
Of course, all of this may be a way off, but it feels like it is more a question of when it happens rather than if it happens.
© 2020, Mark Allison. All rights reserved.
Information about how to reuse or republish this work may be available at http://blog.stylingandroid.com/license-information.