These disputes should not overshadow the astonishing accomplishment that Stallman and Torvalds and their thousands of collaborators wrought. The combination of GNU and Linux created an operating system that has been ported to more hardware platforms, ranging from the world’s ten biggest supercomputers to embedded systems in mobile phones, than any other operating system. “Linux is subversive,” wrote Eric Raymond. “Who would have thought that a world-class operating system could coalesce as if by magic out of part-time hacking by several thousand developers scattered all over the planet, connected only by the tenuous strands of the Internet?”146 Not only did it become a great operating system; it became a model for commons-based peer production in other realms, from Mozilla’s Firefox browser to Wikipedia’s content.
By the 1990s there were many models for software development. There was the Apple approach, in which the hardware and the operating system software were tightly bundled, as with the Macintosh and iPhone and every iProduct in between. It made for a seamless user experience. There was the Microsoft approach, in which the operating system was unbundled from the hardware. That allowed more user choices. In addition, there were the free and open-source approaches, which allowed the software to be completely unfettered and modifiable by any user. Each model had its advantages, each had its incentives for creativity, and each had its prophets and disciples. But the approach that worked best was having all three models coexisting, along with various combinations of open and closed, bundled and unbundled, proprietary and free. Windows and Mac, UNIX and Linux, iOS and Android: a variety of approaches competed over the decades, spurring each other on—and providing a check against any one model becoming so dominant that it stifled innovation.
I. After they became successful, Gates and Allen donated a new science building to Lakeside and named its auditorium after Kent Evans.
II. Steve Wozniak’s unwillingness to tackle this tedious task when he wrote BASIC for the Apple II would later force Apple to license BASIC from Allen and Gates.
III. Reading a draft version of this book online, Steve Wozniak said that Dan Sokol made only eight copies, because they were hard and time-consuming to make. But John Markoff, who reported this incident in What the Dormouse Said, shared with me (and Woz and Felsenstein) the transcript of his interview with Dan Sokol, who said he used a PDP-11 with a high-speed tape reader and punch. Every night he would make copies, and he estimated he made seventy-five in all.
IV. The lawyers were right to be worried. Microsoft later was involved in a protracted antitrust suit brought by the Justice Department, which charged that it had improperly leveraged its dominance of the operating system market to seek advantage in browsers and other products. The case was eventually settled after Microsoft agreed to modify some of its practices.
V. By 2009 the Debian version 5.0 of GNU/Linux had 324 million source lines of code, and one study estimated that it would have cost about $8 billion to develop by conventional means (http://gsyc.es/~frivas/paper.pdf).
Larry Brilliant (1944– ) and Stewart Brand on Brand’s houseboat in 2010.
William von Meister (1942–1995).
Steve Case (1958– ).
CHAPTER TEN
ONLINE
The Internet and the personal computer were both born in the 1970s, but they grew up apart from one another. This was odd, and all the more so when they continued to develop on separate tracks for more than a decade. This was partly because there was a difference in mind-set between those who embraced the joys of networking and those who got giddy at the thought of a personal computer of their very own. Unlike the utopians of the Community Memory project who loved forming virtual communities, many early fans of personal computers wanted to geek out alone on their own machines, at least initially.
There was also a more tangible reason that personal computers arose in a way that was disconnected from the rise of networks. The ARPANET of the 1970s was not open to ordinary folks. In 1981 Lawrence Landweber at the University of Wisconsin pulled together a consortium of universities that were not connected to the ARPANET to create another network based on TCP/IP protocols, which was called CSNET. “Networking was available only to a small fraction of the U.S. computer research community at the time,” he said.1 CSNET became the forerunner of a network funded by the National Science Foundation, NSFNET. But even after these were all woven together into the Internet in the early 1980s, it was hard for an average person using a personal computer at home to get access. You generally had to be affiliated with a university or research institution to jack in.
So for almost fifteen years, beginning in the early 1970s, the growth of the Internet and the boom in home computers proceeded in parallel. They didn’t intertwine until the late 1980s, when it became possible for ordinary people at home or in the office to dial up and go online. This would launch a new phase of the Digital Revolution, one that would fulfill the vision of Bush, Licklider, and Engelbart that computers would augment human intelligence by being tools both for personal creativity and for collaborating.
EMAIL AND BULLETIN BOARDS
“The street finds its own uses for things,” William Gibson wrote in “Burning Chrome,” his 1982 cyberpunk story. Thus it was that the researchers who had access to the ARPANET found their own use for it. It was supposed to be a network for time-sharing computer resources. In that respect it was a modest failure. Instead, like many technologies, it shot to success by becoming a medium for communications and social networking. One truth about the digital age is that the desire to communicate, connect, collaborate, and form community tends to create killer apps. And in 1972 the ARPANET got its first. It was email.
Electronic mail was already used by researchers who were on the same time-sharing computer. A program called SNDMSG allowed a user of a big central computer to send a message to the personal folder of another user who was sharing the same computer. In late 1971 Ray Tomlinson, an MIT engineer working at BBN, decided to concoct a cool hack that would allow such messages to be sent to folders on other mainframes. He did it by combining SNDMSG with an experimental file transfer program called CPYNET, which could exchange files between distant computers on the ARPANET. Then he came up with something that was even more ingenious: in order to instruct a message to go to the file folder of a user at a different site, he used the @ sign on his keyboard to create the addressing system that we all use now, username@hostname. Thus Tomlinson created not only email but the iconic symbol of the connected world.2
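To make the convention concrete: the addressing scheme Tomlinson introduced simply joins a user's name and the machine hosting their mailbox with an @ sign. The short Python sketch below is purely illustrative (the address shown is hypothetical, and this is of course nothing like Tomlinson's original TENEX code); it shows how such an address splits back into its two parts.

```python
# Illustrative sketch only: splitting a username@hostname address,
# the convention Tomlinson's 1971 hack introduced.
def split_address(address: str) -> tuple[str, str]:
    """Return (username, hostname) for an address like 'user@host'."""
    username, hostname = address.rsplit("@", 1)  # split on the last @ sign
    return username, hostname

# Hypothetical example address, not a historical one:
print(split_address("tomlinson@bbn-tenexa"))  # -> ('tomlinson', 'bbn-tenexa')
```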
The ARPANET allowed researchers at one center to tap into the computing resources somewhere else, but that rarely happened. Instead email became the main method for collaborating. ARPA’s director, Stephen Lukasik, became one of the first email addicts, thus causing all researchers who needed to deal with him to follow suit. He commissioned a study in 1973 which found that, less than two years after it was invented, email accounted for 75 percent of the traffic on the ARPANET. “The largest single surprise of the ARPANET program has been the incredible popularity and success of network mail,” a BBN report concluded a few years later. It should not have been a surprise. The desire to socially network not only drives innovations, it co-opts them.