The Permissions Based Web
The web rose to prominence as an app platform on desktop around 15 years ago. Since then, we've seen major shifts in the computing landscape; most obviously, the primary (and often only!) computing device of many users has shifted to mobile. On desktop, we rarely saw apps in want of hardware features that browsers did not already expose. However, one of the significant changes that has come with the shift to mobile is apps making use of an ever wider range of hardware.
If we view apps like Facebook, Twitter, YouTube, and Netflix moving away from the web as a failing of the web, it is reasonable to ask which features have driven them away from the web on mobile, and yet what keeps them web-based on desktop.
Certainly, hardware access plays a part in this: one doesn't see Facebook using Bluetooth on desktop, yet it does for various features on mobile. As has often been written, a single missing feature can push an app away from a given platform. The issue is arguably more complicated when many significant sites are primarily funded through advertising: fingerprinting has typically been much easier on native, and this provided a further incentive to be native (until Android 10, released in 2019 and still on only 8% of Android devices, access to device-unique identifiers like the serial number and IMSI was gated behind the "phone status" permission, which many apps requested).
But we should examine what makes the web different from competing platforms (React Native, Flutter, etc.). On the face of it, there are two main differences: one is that (in principle) it has specifications detailed enough to reimplement it from, and the other is its sandboxing. If we want the web to win, it has to win on these fronts.
How to balance new capabilities against that sandboxing has been an ongoing point of controversy among browser vendors. Google's position is that permission prompts suffice for most cases; Apple and Mozilla are significantly less convinced that users understand what they are agreeing to in those prompts.
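To make the prompt mechanism concrete, here is a minimal sketch of how a site can check where a permission stands before triggering a prompt, using the browser Permissions API. (This assumes a browser context; `permissionState` is an illustrative name, and the set of queryable permission names varies by browser.)

```typescript
// Sketch: check a permission's state without showing any UI.
// "geolocation" is a spec-defined permission name; the blocking
// dialog itself only appears once the capability is actually used
// (e.g. navigator.geolocation.getCurrentPosition).
async function permissionState(name: PermissionName): Promise<PermissionState> {
  const status = await navigator.permissions.query({ name });
  return status.state; // "granted", "denied", or "prompt"
}
```

Querying first lets a site defer the actual prompt to a user-initiated action rather than surprising the user on page load, which is the pattern browser vendors generally recommend.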
At the core of this disagreement is uncertainty about the degree to which users believe websites are safer than native apps. On the one hand, there's the argument that moving more activity inside the sandbox is a net positive for the user; against that stands the historically high click-through rate on certificate warnings.
We're also seeing increasing concern about privacy on many operating systems:
Most recently, iOS has gained press attention for its (non-blocking) notice when an app accesses the clipboard.
We've seen Microsoft try to push apps in this direction with WinRT.
We've seen Android increasingly lock down app permissions and make them more granular.
As such, in many ways we're moving towards convergence between the web and native insofar as privacy is concerned, whereby apps get very few permissions initially. What's also interesting is the variety of approaches we're seeing: Apple's choice of a non-blocking notification for clipboard accesses is fascinating, as it goes against the general trend of blocking permission dialogs. Yet it appears to have sufficed even while iOS 14 is still in beta: we've already seen a significant reaction to TikTok's frequent clipboard accesses.
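For comparison, the web's clipboard path is gated rather than merely noticed. A hedged sketch (the function name is illustrative; `clipboard-read` support and prompting behavior vary considerably by browser):

```typescript
// Sketch: on the web, reading the clipboard is a gated capability.
// In Chromium-style browsers this call can show a blocking permission
// prompt the first time, rather than posting a passive notice the way
// iOS 14 does; behavior varies by browser.
async function readClipboardIfAllowed(): Promise<string | null> {
  try {
    return await navigator.clipboard.readText();
  } catch {
    // Denied, unsupported, or lacking user activation.
    return null;
  }
}
```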
There are, however, significant differences between the web loosening its permissions model and native strengthening its permissions model.
Most obviously, it's higher risk to loosen a permissions model than it is to tighten one: in the worst case, you lose security. The worst case on native is that everyone clicks through the permission prompt, which is no regression from the historic norm.
In my view, the Chrome team is moving too quickly in two ways:
We don't have any research into how users perceive the relative risks of native vs. web apps.
We don't have any research into whether users understand the additional risks (relative to web apps generally) when presented with a permission prompt.
I am not, in absolute terms, against extending the capabilities of the web; I merely believe it should be done with care and concern.