Originally posted on ben-evans.
For as long as most people can remember, the tech industry has had a new centre roughly every fifteen years. A model of computing sets the agenda, and the company or companies that win that model dominate the industry, and everyone is scared of them, and then a new model comes along, forms a new centre, and the old model stops mattering. Mainframes were followed by PCs, and then the web, and then smartphones.
Each of these new models started out looking limited and insignificant, but each of them unlocked a new market that was so much bigger that it pulled in all of the investment, innovation and company creation and so grew to overtake the old one.
Meanwhile, the old models didn’t go away, and neither, mostly, did the companies that had been created by them. Mainframes are still a big business and so is IBM; PCs are still a big business and so is Microsoft. But they don’t set the agenda anymore – no-one is afraid of them.
Today, multitouch smartphones are getting on for 15 years old, and the S curve is flattening out. All the obvious stuff has been built, Apple and Google won, and the new iPhone isn’t very exciting, because it can’t be. So, we ask “what’s the next generation?”
There are several ways to try to answer this.
First, each of the previous S curves unlocked a dramatically new market, but well over 4bn people have a smartphone today and there are only 5.7bn adults on earth. We can’t unlock a radically bigger market on that axis – we’ve run out of people. Yes, we will probably deploy billions more sensors around the world, but a street light that phones home if the bulb fails is not a new platform, even if it uses a neural network (AI!) and a radio (5G!). So, in one important way, that growth model seems to be complete.
There’s a huge amount of innovation and a huge amount of primary technology creation going on at the moment, but there always is – the question here is how universal something could become. Hence, most of these could be very important to society, but plant-based meat or micro-satellites are not a model to replace smartphones or search as primary levers of the tech industry. Theoretically, a neural interface of some kind could do that, but the technology to make that more than a way to turn on a light or open a door seems to be decades away – this is science fiction, not a forecast.
The device model that could perhaps replace the smartphone is VR, or AR, or both. These cannot reach more people than smartphones (again – we ran out of people) but they could nonetheless replace the experience. At the moment this is pretty speculative. We have VR devices that are good for games and some narrow industrial use cases, and there is a hope that the hardware and software can grow to become universal, but it’s not yet clear if following the hardware roadmap is all that’s needed for that to happen, or if VR needs some fundamental change if it’s to be more than a deeper and narrower subset of the games console industry (I wrote about this here). AR glasses, on the other hand, are still a frontier science question – can we create optics that look like a normal pair of reading glasses (or, one day in a few decades, contact lenses) yet can put something into the world that looks as though it’s really there, in broad daylight, with a good field of view? And if we can, then, as with VR, it’s magical – but how useful is it? Looking at this stuff today is rather like seeing a multitouch demo in 2005 – it’s clearly good for something, but what?
However, all of this might be the wrong mental model for thinking about the next step. As well as looking at the sequence ‘mainframe – PC – web – smartphone’, we should probably also think about what was going on underneath: ‘database – client/server – open source – cloud’, perhaps. That is, there are other progressions that are less visible but just as important. On that model, the fundamental trends of today are clearly machine learning and, perhaps, crypto. It’s very obvious that we are remaking the tech industry around machine learning, and probably a lot of other industries as well, and while there is a clear reason why there might not be anything after smartphones any time soon, I don’t think anyone would argue there won’t be anything after machine learning – there is a continuous process of innovation and creation (and, indeed, a pendulum, from server to local and back again). Meanwhile, if you come from Silicon Valley then things like cloud and SaaS seem like old and boring topics, but only around a quarter of large enterprise workflows have moved to the cloud at all so far – the rest are still ‘on-prem’ in old systems and indeed in mainframes. There is a huge amount of work and company creation involved in moving (a lot of) the rest in the next decade or two (this, really, is what I think ‘digital transformation’ means).
There’s one more model to think about, though.
We’ve spent the last few decades getting to the point that we can now give everyone on earth a cheap, reliable, easy-to-use pocket computer with access to a global information network. But so far, though over 4bn people have one of these things, we’ve only just scratched the surface of what we can do with them. There’s an old saying that the first fifty years of the car industry were about creating car companies and working out what cars should look like, and the second fifty years were about what happened once everyone had a car – they were about McDonald’s and Walmart, suburbs and the remaking of the world around the car, for good and of course bad. The innovation in cars became everything around the car. One could suggest the same today about smartphones – now the innovation comes from everything else that happens around them.