The Future of Personal Computing, Part 2
By James Kwak
(This is Part 2 of 2; Part 1 covers the shift in personal computing from the age of the standalone PC to the age of cloud computing.)
We left off with the idea that personal computing was inexorably, though slowly, shifting toward a Web-based model in which our computers’ main purpose is to run browsers and we spend most of our time on the Internet. A decade ago, when this idea became popular, it was not particularly practical, because you simply couldn’t do very interesting things in a browser; it was originally designed, after all, for reading static web pages. But in the past decade, web sites have become much richer and more interactive — think about something like Gmail, with its automatic refreshing and keyboard shortcuts, or Google Documents, which allows multiple people to edit a document at the same time — to the point where most of what people do most of the time can be done in a browser.
But then there was Apple.
Apple has a computer business, but there’s nothing revolutionary about it. They make very nice, somewhat expensive computers that are structurally basically the same as Windows machines: they have an OS, people write programs that you install on top of that OS (though still far fewer people than write Windows programs), and people with those computers spend more and more of their time using the browser. The increased importance of the Internet as opposed to local (on-computer) applications has probably helped their market share a little, but not that much.
Then there was the iPod, but while it was revolutionary in many ways, it didn’t mean much for the course of computing. It’s dependent on the existence of a computer running iTunes, which is an ordinary application; the only thing “Internet” about it is that it can access the Internet to buy music.
And then there was the iPhone. The iPhone was a big hit for multiple reasons, like the fact that it was cool, but for our purposes the most important is that it was the first powerful, usable computer in your pocket. Besides email (which it never did as well as a BlackBerry), you can run applications on it that will do virtually anything, since its operating system provides an API that lets developers do pretty much whatever they want.
For most iPhone users, I suspect, what they like about the iPhone is that they can check their email, take photos, and do other things that any smartphone can do. But technology commentators have focused on the iPhone’s “apps,” and Apple has used its app library as a selling point against its competitors.
So what is an app? It’s just a plain old application — like the kind we’ve had on our PCs for decades — except someone figured out that if you drop the last three syllables it sounds new and cool. An app is a piece of software that runs directly on the iPhone OS (a variant of OS X, the operating system on Mac computers), and that you download and buy from Apple’s App Store.
If you’ve followed me to this point of the story, you should realize why I find the app craze so perplexing: it seems like a giant step backward, back to a pre-Internet world where we had to install a bunch of separate applications on our computers, and developers had to write different programs for each operating system. It seems worse than that, even, because with Apple the only place you can get software is from the App Store, which means that Apple gets to decide what can run on your iPhone.
The app model is not entirely pre-Internet, of course. The iPhone and iPad can download apps over the Internet, and those apps can use the Internet as well. But the experience is still that you are switching between a bunch of different applications on your device, as opposed to surfing the web using a browser. You have to find and install those applications, and periodically you have to install updates. Sure, there are things that require direct operating system (or hardware) access, like graphics-intensive games, but the thing that confuses me is why people would use apps to do things that they can already do perfectly well in a browser.
This is understandable with the iPhone, because its screen is so small; some iPhone apps just take content on the web and reformat it nicely for the smaller screen. But it makes less sense when you move up to the iPad, which basically has a full-size screen. For example, the New York Times has an iPad app. Buttons across the bottom let you switch between sections (business, sports, etc.), and each section has a front page that shows you the beginning of each article; click on an article and it takes you to the full content. Very pretty. But I can’t think of any reason to prefer it to the Times’ web site, which has much more content, and which displays perfectly well on the iPad’s browser. This is just one example, but it shows how apps provide a crippled version of what is already on the web. There are other examples, like NPR’s app for listening to their radio stories; it’s nice, but why not make their web site just as nice, so everyone can benefit from it?
So why apps? The iPad is Apple’s attempt to change the way we use computers, away from the PC model and toward a tablet model. And the strategy goes beyond just a new form factor; as we start using tablets, Apple wants us to adopt the app model instead of the Internet model. In particular, Apple is aiming at the category of netbooks — small, light computers whose primary purpose is getting to the Internet (hence the name). It’s inevitable that we are going to use smaller computers with touchscreen input; Steve Jobs is right about that. The question is whether we will use them in an Internet-centric way (the way technology was trending over the past decade, and the way Google wants us to use them) or in the app-centric way that Steve Jobs prefers.
Apple prefers the app model for two big reasons. First, it makes their products stickier, since you’re not just buying an iPad, you’re buying Apple’s whole system for delivering stuff onto the iPad. Second, it seems that people are willing to pay for apps while they are unwilling to pay for anything through a browser. So people will pay $1.99 for an app that plays some game when they could already play the same game for free on a web site somewhere. Maybe people think of apps as standalone objects that have some value and that they can buy, while they see web sites just as destinations that they go to and that should be free. But as long as people will pay for apps, that means that Apple can make money by selling them to you — and by preventing developers from selling them to you directly.
I think it’s not too much of a simplification to say that Apple wants to be the new Microsoft. It wants you to buy applications that run locally on your iPad, and it sees its competitive advantage as having the most developers and the most applications (hence all those “there’s an app for that” ads). As Microsoft showed, if you can get a lead and become the developers’ platform of choice, you can benefit from network effects.
This is why the dispute with Adobe is important. For those who don’t know, Adobe develops Flash, probably the dominant technology for interactive content on the Internet. The iPhone and iPad don’t support Flash, meaning that if you go to a site that needs Flash, you get a big empty box on your screen. (For example, if your daughter wants to visit the Dinosaur Kids site to play How Big Are You?) This is important because Flash is the most widely used technology to do things on the Internet that people would otherwise buy apps for. You can think of it as a small attempt to cripple the Internet (for people using an iPhone or iPad) to force them toward the App Store (for games) or the iTunes Store (for video).
(Even the iPad’s browser — a version of Safari — has trouble with some rich interactive web sites; for example, I can’t figure out how to edit Google Documents on the iPad. This is probably just a consequence of the tradeoffs Apple had to make to get the browser working with the iPad’s modest specs, but it has the side effect of slightly crippling the Internet and forcing you toward apps.)
But the Apple-Adobe dispute goes further. In April, Apple changed the terms of the iPhone developer agreement to prevent developers from using cross-compilers to create iPhone apps. A cross-compiler is a tool that allows you to take an application you wrote for one platform, push a button, and repackage the application for another platform (in this case, iPhone OS). The immediate target of this was Adobe, which was developing a tool that would enable developers to take Flash apps, push a button, and make them into iPhone apps. The simplest explanation for this is that Apple, as the market leader, wants to make it harder for people to develop for multiple platforms at the same time. “Write once, run anywhere” — the slogan of Java, but also the essence of developing for the web — is bad for Apple, and they want to make it as hard as possible. (John Gruber makes a different argument: that Apple wants control over its platform and doesn’t want cross-compilers between it and developers. But that interpretation is not inconsistent with mine.) In other words, if you’re number one, then openness just helps the competition, because if developers have to choose just one platform, they’re going to choose yours.
So Apple is competitive; we knew that already. And they don’t want to repeat the mistakes of the 1980s and 1990s; we knew that already, too. But I think the important point is that they are promoting a model of personal computing where most of the developers write for the iPhone OS, and if you want to use their applications you have to buy an Apple hardware product. Yes, Apple makes great hardware, but I think consumers will do better with an open model; if you look at smartphones, it’s already the case that many phones running Android — Google’s open-source operating system — are better than the iPhone at many different things. (The iPhone may still be the best overall, but there are many good reasons why you might pick a particular Android phone over the iPhone.) And Android has already passed the iPhone as the number two smartphone (measured by new sales), behind the BlackBerry.
Conceptually, I still think the best thing for consumers is a model that is open on every level: web-based development, so that content and functionality are available at the same time for anyone using any browser, allowing competition among operating systems and, for a given operating system, between different hardware manufacturers. With personal computers, Microsoft established a monopoly on the OS level, which made Windows the least common denominator of everyone’s computing experience. Now Apple wants to lock people into their hardware and OS and create an ecosystem of developers, applications, and content that you can only get through Apple.
The obvious alternative is Google, which has its own operating systems (Android and Chrome OS), but doesn’t particularly care whether you use them or not — as long as you are using the Internet, where they sell their ads. I’d like to see an Android tablet with a real browser that can handle anything on the Web; then I simply wouldn’t need most of the apps I have on my iPad (Calendar, Contacts, Notes, Maps, AccuWeather, Netflix, NPR, Bloomberg, etc.). Now, Google isn’t pursuing an open strategy because it’s nice; they’re doing it because they want everyone to go to the Internet to see their ads. But ultimately I think that’s a better model for consumers, because you avoid lock-in on the development level (developers don’t have to commit to the iPhone OS) and on the hardware level (anyone can build an Android device, which is already providing more innovation and choice when it comes to smartphones).
So while I like Apple products, I have no particular wish to see them win the technology war, at least not with their current strategy.