
  

Python® Programming for Teens

Kenneth A. Lambert

Cengage Learning PTR
Australia • Brazil • Japan • Korea • Mexico • Singapore • Spain • United Kingdom • United States

Publisher and General Manager, Cengage Learning PTR: Stacy L. Hiquet
Associate Director of Marketing: Sarah Panella
Manager of Editorial Services: Heather Talbot
Senior Marketing Manager: Mark Hughes
Senior Product Manager: Mitzi Koontz
Project/Copy Editor: Karen A. Gill
Technical Reviewer: Zach Scott
Interior Layout Tech: MPS Limited
Cover Designer: Mike Tanamachi
Indexer: Sharon Shock
Proofreader: Gene Redding

© 2015 Cengage Learning PTR.

  CENGAGE and CENGAGE LEARNING are registered trademarks of Cengage Learning, Inc., within the United States and certain other jurisdictions. ALL RIGHTS RESERVED. No part of this work covered by the copyright herein may be reproduced, transmitted, stored, or used in any form or by any means graphic, electronic, or mechanical, including but not limited to photocopying, recording, scanning, digitizing, taping, Web distribution, information networks, or information storage and retrieval systems, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without the prior written permission of the publisher.

  For product information and technology assistance, contact us at Cengage Learning Customer & Sales Support, 1-800-354-9706. For permission to use material from this text or product, submit all requests online at

  Further permissions questions can be emailed to permissionrequest@cengage.com. Python is a registered trademark of the Python Software Foundation. All other trademarks are the property of their respective owners. All images © Cengage Learning unless otherwise noted. Library of Congress Control Number: 2014939193

ISBN-13: 978-1-305-27195-1

  

Cengage Learning PTR

20 Channel Center Street
Boston, MA 02210
USA

Cengage Learning is a leading provider of customized learning solutions with office locations around the globe, including Singapore, the United Kingdom, Australia, Mexico, Brazil, and Japan. Locate your local office at:

  Cengage Learning products are represented in Canada by Nelson Education, Ltd. For your lifelong learning solutions, visit Visit our corporate website at

  Printed in the United States of America 1 2 3 4 5 6 7 16 15 14 eISBN-10: 1-305-27196-3

  To my wife Carolyn, with much gratitude.

Kenneth A. Lambert
Lexington, Virginia

Acknowledgments

I would like to thank my friend Martin Osborne for many years of advice, friendly criticism, and encouragement on several of my book projects. I would also like to thank Zach Scott, MQA Tester, who helped to ensure that the content of all data and solution files used for this text was correct and accurate; Karen Gill, my project editor and copy editor; and Mitzi Koontz, senior product manager at Cengage Learning PTR.

  About the Author

Kenneth A. Lambert is a professor of computer science and the chair of that department at Washington and Lee University. He has taught introductory programming courses for 29 years and has been an active researcher in computer science education. Lambert has authored or coauthored 25 textbooks, including a series of introductory C++ textbooks with Douglas Nance and Thomas Naps, a series of introductory Java textbooks with Martin Osborne, and a series of introductory Python textbooks. His most recent textbook is Fundamentals of Python: Data Structures.

  Contents

Chapter 1  Getting Started with Python
    Taking Care of Preliminaries
        Downloading and Installing Python
        Launching and Working in the IDLE Shell
        Obtaining Python Help
    Working with Numbers
        Using Arithmetic
        Working with Variables and Assignment
        Using Functions
        Using the math Module
        Detecting Errors
    Working with Strings
        String Literals
        The len, str, int, and float Functions
        Input and Output Functions
        Indexing, Slicing, and Concatenation
        String Methods
    Working with Lists
        List Literals and Operators
        List Methods
        Lists from Other Sequences
        Lists and the random Module
        Tuples as Immutable Lists

Introduction

Welcome to Python Programming for Teens. Whether you’re under 20 or just a teenager at heart, this book will introduce you to computer programming. You can use it in a classroom or on your own. The only assumption is that you know how to use a modern computer system with a keyboard, screen, and mouse.

To make your learning experience fun and interesting, you will write programs that draw pictures on the screen and allow you to interact with them by using the mouse. Along the way, you will learn the basic principles of program design and problem solving with computers. You will then be able to apply these ideas and techniques to solve problems in almost any area of study. But most of all, you will experience the joy of building things that work and look great!

  Why Python?

Computer technology and applications have become increasingly sophisticated over the past several decades, and so has the computer science curriculum, especially at the introductory level. Today’s students learn a bit of programming and problem solving and are then expected to move quickly into topics like software development, complexity analysis, and data structures that, 20 years ago, were reserved for advanced courses. In addition, the ascent of object-oriented programming as a dominant method has led instructors and textbook authors to bring powerful, industrial-strength programming languages such as C++ and Java into the introductory curriculum. As a result, instead of experiencing the rewards and excitement of computer programming, beginning students often become overwhelmed by the combined tasks of mastering advanced concepts and learning the syntax of a programming language.

This book uses the Python programming language as a way of making the learning experience manageable and attractive for students and instructors alike. Python offers the following pedagogical benefits:

• Python has simple, conventional syntax. Its statements are close to those of ordinary English, and its expressions use the conventional notation found in algebra. Thus, beginners can spend less time learning the syntax of a programming language and more time learning to solve interesting problems.

• Python has safe semantics. Any expression or statement whose meaning violates the definition of the language produces an error message.

• Python scales well. It is easy for beginners to write simple programs. Python also includes all the advanced features of a modern programming language, such as support for data structures and object-oriented software development, for use when they become necessary.

• Python is highly interactive. Expressions and statements can be entered at an interpreter’s prompts to allow the programmer to try out experimental code and receive immediate feedback (a short sample session appears at the end of this section). Longer code segments can then be composed and saved in script files to be loaded and run as modules or standalone applications.

• Python is general purpose. In today’s context, this means that the language includes resources for contemporary applications, including media computing and networks.

• Python is free and is in widespread use in the industry. Students can download it to run on a variety of devices. There is a large Python user community, and expertise in Python programming has great resume value.

To summarize these benefits, Python is a comfortable and flexible vehicle for expressing ideas about computation, both for beginners and experts alike. If students learn these ideas well in their first experience with programming, they should have no problems making a quick transition to other languages and technologies needed to achieve their educational or career objectives. Most importantly, beginners will spend less time staring at a computer screen and more time thinking about interesting problems to solve.
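Here is a taste of what that interactivity looks like. This is only a sketch of the kind of session you might have in the Python shell; the exact prompts and responses can vary slightly from one Python version to another, and the little area function is an invented example rather than one taken from the book.

    >>> 3 + 4 * 2
    11
    >>> radius = 5
    >>> 3.14 * radius ** 2
    78.5
    >>> def area(r):
    ...     return 3.14 * r ** 2
    ...
    >>> area(10)
    314.0

Each line typed at the >>> prompt is evaluated as soon as you press Enter, which is what makes the shell such a friendly place to experiment.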

Organization of the Book

  The approach in this book is easygoing, with each new concept introduced only when you need it.

Chapter 1, “Getting Started with Python,” advises you how to download, install, and start the Python programming software used in this book. You try out simple program commands and become acquainted with the basic features of the Python language that you will use throughout the book.

  Chapter 2, “Getting Started with Turtle Graphics,” introduces the basic commands for turtle graphics. You learn to draw pictures with a set of simple commands. Along the way, you discover a thing or two about colors and two-dimensional geometry.
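If you are curious about what turtle graphics code looks like, here is a small sketch of the kind of program Chapter 2 builds toward. It is written against Python’s standard turtle module; the book introduces its own selection of commands in its own order, so treat this only as a preview, not as an example taken from the chapter.

    import turtle

    t = turtle.Turtle()        # create a turtle to draw with
    t.pencolor("blue")         # choose a pen color
    for side in range(4):      # a square has four equal sides
        t.forward(100)         # move forward 100 pixels, drawing as we go
        t.left(90)             # turn 90 degrees to the left
    turtle.done()              # keep the drawing window open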

Chapter 3, “Control Structures: Sequencing, Iteration, and Selection,” covers the program commands that allow the computer to make choices and perform repetitive tasks.

Chapter 4, “Composing, Saving, and Running Programs,” shows you how to save your programs in files, so you can give them to others or work on them another day. You learn how to organize a program like an essay, so it is easy for you and others to read, understand, and edit. You also learn a bit about how the computer is able to read, understand, and run a program.

  Chapter 5, “Defining Functions,” introduces an important design feature: the function. By organizing your programs with functions, you can simplify complex tasks and eliminate unnecessary duplications in your code.

Chapter 6, “User Interaction with the Mouse and the Keyboard,” covers features that allow people to interact with your programs. You learn program commands for responding to mouse and keyboard events, as well as pop-up dialogs that can take information from your programs’ users.

Chapter 7, “Recursion,” teaches you about another important design strategy called recursion. You write some recursive functions that generate computer art and fractal images.

Chapter 8, “Objects and Classes,” offers a beginner’s guide to the use of objects and classes in programming. You learn how to define new types of objects, such as menu items for choosing colors and grids for board games, and use them in interesting programs.

Chapter 9, “Animations,” concludes the book with a brief introduction to animations. You discover how to get images to move independently and interact in interesting ways.

Two appendixes follow the last chapter. Appendix A, “Turtle Graphics Commands,” provides a reference for the set of turtle graphics commands introduced in the book.

  Each chapter includes a set of two programming exercises that build on concepts and examples introduced earlier in that chapter. You can find the answers to these exercises in Appendix B, “Solutions to Exercises.”

Companion Website Downloads

You may download the companion website files from

  These files include the example programs discussed in the book and the solutions to the exercises.

  A Brief History of Computing

  Before you jump ahead to programming, you might want to peek at some context. The following table summarizes some of the major developments in the history of computing. The discussion that follows provides more details about these developments.

Approximate Date   Major Developments

Before 1800        Mathematicians develop and use algorithms
                   Abacus used as a calculating aid
                   First mechanical calculators built by Leibniz and Pascal

1800–1930          Jacquard’s loom
                   Babbage’s Analytical Engine
                   Boole’s system of logic
                   Hollerith’s punch card machine

1930s              Turing publishes results on computability
                   Shannon’s theory of information and digital switching

1940s              First electronic digital computers

1950s              First symbolic programming languages
                   Transistors make computers smaller, faster, more durable, less expensive
                   Emergence of data-processing applications

1960–1975          Integrated circuits accelerate the miniaturization of computer hardware
                   First minicomputers
                   Time-sharing operating systems
                   Interactive user interfaces with keyboards and monitors
                   Proliferation of high-level programming languages
                   Emergence of a software industry and the academic study of computer science and computer engineering

1975–1990          First microcomputers and mass-produced personal computers
                   Graphical user interfaces become widespread
                   Networks and the Internet

1990–2000          Optical storage (CDs, DVDs)
                   Laptop computers
                   Multimedia applications (music, photography, video)
                   Computer-assisted manufacturing, retail, and finance
                   World Wide Web and e-commerce

2000–present       Embedded computing (cars, appliances, and so on)
                   Handheld music and video players
                   Smartphones and tablets
                   Touch screen user interfaces
                   Wireless and cloud computing
                   Search engines
                   Social networks

Before Electronic Digital Computers

The term algorithm, as it’s now used, refers to a recipe or method for solving a problem. It consists of a sequence of well-defined instructions or steps that describe a process that halts with a solution to a problem.

Ancient mathematicians developed the first algorithms. The word “algorithm” comes from the name of a Persian mathematician, Muhammad ibn Musa Al-Khawarizmi, who wrote several mathematics textbooks in the ninth century. About 2,300 years ago, the Greek mathematician Euclid, the inventor of geometry, developed an algorithm for computing the greatest common divisor of two numbers, which you will see later in this book.

A device known as the abacus also appeared in ancient times to help people perform simple arithmetic. Users calculated sums and differences by sliding beads on a grid of wires. The configuration of beads on the abacus served as the data.

In the seventeenth century, the French mathematician Blaise Pascal (1623–1662) built one of the first mechanical devices to automate the process of addition. The addition operation was embedded in the configuration of gears within the machine. The user entered the two numbers to be added by rotating some wheels. The sum or output number then appeared on another rotating wheel. The German mathematician Gottfried Leibniz (1646–1716) built another mechanical calculator that included other arithmetic functions such as multiplication. Leibniz, who with Newton also invented calculus, went on to propose the idea of computing with symbols as one of our most basic and general intellectual activities. He argued for a universal language in which one could solve any problem by calculating.

Early in the nineteenth century, the French engineer Joseph Jacquard (1752–1834) designed and constructed a machine that automated the process of weaving. Until then, each row in a weaving pattern had to be set up by hand, a quite tedious, error-prone process. Jacquard’s loom was designed to accept input in the form of a set of punched cards. Each card described a row in a pattern of cloth. Although it was still an entirely mechanical device, Jacquard’s loom possessed something that previous devices had lacked—the ability to carry out the instructions of an algorithm automatically. The set of cards expressed the algorithm or set of instructions that controlled the behavior of the loom. If the loom operator wanted to produce a different pattern, he just had to run the machine with a different set of cards.
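Euclid’s greatest common divisor algorithm, mentioned above, is a nice illustration of what a “sequence of well-defined instructions” looks like when it is written down for a computer. Here is a minimal sketch in modern Python; the version developed later in the book may look different, so take this only as a preview.

    def gcd(a, b):
        # Repeatedly replace the pair (a, b) with (b, a % b).
        # When the remainder reaches 0, a holds the greatest common divisor.
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(24, 36))   # prints 12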

The British mathematician Charles Babbage (1792–1871) took the concept of a programmable computer a step further by designing a model of a machine that, conceptually, bore a striking resemblance to a modern general-purpose computer. Babbage conceived his machine, which he called the Analytical Engine, as a mechanical device. His design called for four functional parts: a mill to perform arithmetic operations, a store to hold data and a program, an operator to run the instructions from punched cards, and an output to produce the results on punched cards. Sadly, Babbage’s computer was never built. The project perished for lack of funds near the time when Babbage himself passed away.

In the last two decades of the nineteenth century, a U.S. Census Bureau statistician named Herman Hollerith (1860–1929) developed a machine that automated data processing for the U.S. Census. Hollerith’s machine, which had the same component parts as Babbage’s Analytical Engine, simply accepted a set of punched cards as input and then tallied and sorted the cards. His machine greatly shortened the time it took to produce statistical results on the U.S. population. Government and business organizations seeking to automate their data processing quickly adopted Hollerith’s punched card machines. Hollerith was also one of the founders of a company that eventually became IBM (International Business Machines).

Also in the nineteenth century, the British secondary school teacher George Boole (1815–1864) developed a system of logic. This system consisted of a pair of values, TRUE and FALSE, and a set of three primitive operations on these values, AND, OR, and NOT. Boolean logic eventually became the basis for designing the electronic circuitry to process binary information.

Half a century later, in the 1930s, the British mathematician Alan Turing (1912–1954) explored the theoretical foundations and limits of algorithms and computation. Turing’s most important contributions were to develop the concept of a universal machine that could be specialized to solve any computable problems and to demonstrate that some problems are unsolvable by computers.

  The First Electronic Digital Computers (1940–1950)

In the late 1930s, Claude Shannon (1916–2001), a mathematician and electrical engineer at MIT, wrote a classic paper titled “A Symbolic Analysis of Relay and Switching Circuits.” In this paper, he showed how operations and information in other systems, such as arithmetic, could be reduced to Boolean logic and then to hardware. For example, if the Boolean values TRUE and FALSE were written as the binary digits 1 and 0, one could write a sequence of logical operations to compute the sum of two strings of binary digits. All that was required to build an electronic digital computer was the ability to represent binary digits as on/off switches and to represent the logical operations in other circuitry.

The needs of the combatants in World War II pushed the development of computer hardware into high gear. Several teams of scientists and engineers in the United States, Great Britain, and Germany independently created the first generation of general-purpose digital electronic computers during the 1940s. All these scientists and engineers used Shannon’s innovation of expressing binary digits and logical operations in terms of electronic switching devices. Among these groups was a team at Harvard University under the direction of Howard Aiken. Their computer, called the Mark I, became operational in 1944 and did mathematical work for the U.S. Navy during the war. The Mark I was considered an electromechanical device because it used a combination of magnets, relays, and gears to store and process data.

Another team under J. Presper Eckert and John Mauchly, at the University of Pennsylvania, produced a computer called the ENIAC (Electronic Numerical Integrator and Calculator). The ENIAC calculated ballistics tables for the artillery of the U.S. Army toward the end of the war. Because the ENIAC used entirely electronic components, it was almost a thousand times faster than the Mark I.

Two other electronic digital computers were completed a bit earlier than the ENIAC. They were the ABC (Atanasoff-Berry Computer), built by John Atanasoff and Clifford Berry at Iowa State University in 1942, and the Colossus, constructed by a group working with Alan Turing in England in 1943. The ABC was created to solve systems of simultaneous linear equations. Although the ABC’s function was much narrower than that of the ENIAC, the ABC is now regarded as the first electronic digital computer. The Colossus, whose existence had been top secret until recently, was used to crack the powerful German Enigma code during the war.

The first electronic digital computers, sometimes called mainframe computers, consisted of vacuum tubes, wires, and plugs, and they filled entire rooms. Although they were much faster than people at computing, by our own current standards they were extraordinarily slow and prone to breakdown. Moreover, the early computers were extremely difficult to program. To enter or modify a program, a team of workers had to rearrange the connections among the vacuum tubes by unplugging and replugging the wires. Each program was loaded by literally hardwiring it into the computer. With thousands of wires involved, it was easy to make a mistake.

The memory of these first computers stored only data, not the program that processed the data. As you have read, the idea of a stored program first appeared 100 years earlier in Jacquard’s loom and in Babbage’s design for the Analytical Engine. In 1946, John von Neumann realized that the instructions of the programs could also be stored in binary form in an electronic digital computer’s memory. His research group at Princeton developed one of the first modern stored-program computers. Although the size, speed, and applications of computers have changed dramatically since those early days, the basic architecture and design of the electronic digital computer have remained remarkably stable.

  The First Programming Languages (1950–1965)

The typical computer user now runs many programs, made up of millions of lines of code, that perform what would have seemed like magical tasks 20 or 30 years ago. But the first digital electronic computers had no software as today’s computers do. The machine code for a few relatively simple and small applications had to be loaded by hand. As the demand for larger and more complex applications grew, so did the need for tools to expedite the programming process.

In the early 1950s, computer scientists realized that a symbolic notation could be used instead of machine code, and the first assembly languages appeared. The programmers would enter mnemonic codes for operations, such as ADD and OUTPUT, and for data variables, such as SALARY and RATE, at a keypunch machine. The keystrokes punched a set of holes in a small card for each instruction. The programmers then carried their stacks of cards to a system operator, who placed them in a device called a card reader. This device translated the holes in the cards to patterns in the computer’s memory. A program called an assembler then translated the application programs in memory to machine code and executed them.

Programming in assembly language was a definite improvement over programming in machine code. The symbolic notation used in assembly languages was easier for people to read and understand. Another advantage was that the assembler could catch some programming errors before the program actually executed. However, the symbolic notation still appeared a bit arcane compared to the notations of conventional mathematics. To remedy this problem, John Backus, a programmer working for IBM, developed FORTRAN (Formula Translation Language) in 1954. Programmers, many of whom were mathematicians, scientists, and engineers, could now use conventional algebraic notation. FORTRAN programmers still entered their programs on a keypunch machine, but the computer executed them after a compiler translated them to machine code.

FORTRAN was considered ideal for numerical and scientific applications. However, expressing the kind of data used in data processing—in particular, textual information—was difficult. For example, FORTRAN was not practical for processing information that included people’s names, addresses, Social Security numbers, and the financial data of corporations and other institutions. In the early 1960s, a team led by Rear Admiral Grace Murray Hopper developed COBOL (Common Business Oriented Language) for data processing in the United States government. Banks, insurance companies, and other institutions were quick to adopt its use in data-processing applications.

Also in the late 1950s and early 1960s, John McCarthy, a computer scientist at MIT, developed a powerful and elegant notation called LISP (List Processing) for expressing computations. Based on a theory of recursive functions (a subject covered in Chapter 7 of this book), LISP captured the essence of symbolic information processing. A student of McCarthy’s, Stephen “Slug” Russell, coded the first interpreter for LISP in 1960. The interpreter accepted LISP expressions directly as inputs, evaluated them, and printed their results. In its early days, LISP was used primarily for laboratory experiments in an area of research known as artificial intelligence. More recently, LISP has been touted as an ideal language for solving any difficult or complex problems.

Although they were among the first high-level programming languages, FORTRAN and LISP have survived for decades. They have undergone many modifications to improve their capabilities and have served as models for the development of many other programming languages. COBOL, by contrast, is no longer in active use but has survived mainly in the form of legacy programs that must still be maintained.

These new, high-level programming languages had one feature in common: abstraction. In science or any other area of enquiry, an abstraction allows human beings to reduce complex ideas or entities to simpler ones. For example, a set of ten assembly language instructions might be replaced with an equivalent algebraic expression that consists of only five symbols in FORTRAN. Put another way, any time you can say more with less, you are using an abstraction. The use of abstraction is also found in other areas of computing, such as hardware design and information architecture. The complexities don’t actually go away, but the abstractions hide them from view. The suppression of distracting complexity with abstractions allows computer scientists to conceptualize, design, and build ever more sophisticated and complex systems.

  Integrated Circuits, Interaction, and Timesharing (1965–1975)

In the late 1950s, the vacuum tube gave way to the transistor as the mechanism for implementing the electronic switches in computer hardware. As a solid-state device, the transistor was much smaller, more reliable, more durable, and less expensive to manufacture than a vacuum tube. Consequently, the hardware components of computers generally became smaller in physical size, more reliable, and less expensive. The smaller and more numerous the switches became, the faster the processing and the greater the capacity of memory to store information.

The development of the integrated circuit in the early 1960s allowed computer engineers to build ever smaller, faster, and less expensive computer hardware components. They perfected a process of photographically etching transistors and other solid-state components onto thin wafers of silicon, leaving an entire processor and memory on a single chip. In 1965, Gordon Moore, one of the founders of the computer chip manufacturer Intel, made a prediction that came to be known as Moore’s Law. This prediction states that the processing speed and storage capacity of hardware will increase and its cost will decrease by approximately a factor of 2 every 18 months. This trend has held true for the past 50 years. For example, there were about 50 electrical components on a chip in 1965, whereas by 2010, a chip could hold more than 60 million components. Without the integrated circuit, men would not have gone to the moon in 1969, and the world would be without the powerful and inexpensive handheld devices that people now use on a daily basis.

Minicomputers the size of a large office desk appeared in the 1960s. The means of developing and running programs were changing. Until then, a computer was typically located in a restricted area with a single human operator. Programmers composed their programs on keypunch machines in another room or building. They then delivered their stacks of cards to the computer operator, who loaded them into a card reader and compiled and ran the programs in sequence on the computer. Programmers then returned to pick up the output results, in the form of new stacks of cards or printouts. This mode of operation, also called batch processing, might cause a programmer to wait days for results, including error messages.

The increases in processing speed and memory capacity enabled computer scientists to develop the first time-sharing operating system. John McCarthy, the creator of the programming language LISP, recognized that a program could automate many of the functions performed by the human system operator. When memory, including magnetic secondary storage, became large enough to hold several users’ programs at the same time, the programs could be scheduled for concurrent processing. Each process associated with a program would run for a slice of time and then yield the CPU to another process. All the active processes would repeatedly cycle for a turn with the CPU until they finished. Several users could now run their own programs simultaneously by entering commands at separate terminals connected to a single computer. As processor speeds continued to increase, each user gained the illusion that a time-sharing computer system belonged entirely to him.

By the late 1960s, programmers could enter program input at a terminal and see program output immediately displayed on a CRT (Cathode Ray Tube) screen. Compared to its predecessors, this new computer system was both highly interactive and much more accessible to its users.

Many relatively small and medium-sized institutions, such as universities, were now able to afford computers. These machines were used not only for data processing and engineering applications, but for teaching and research in the new and rapidly growing field of computer science.

  Personal Computing and Networks (1975–1990)

In the mid-1960s, Douglas Engelbart, a computer scientist working at the Stanford Research Institute (SRI), first saw one of the ultimate implications of Moore’s Law: eventually, perhaps within a generation, hardware components would become small enough and affordable enough to mass produce an individual computer for every human being. What form would these personal computers take, and how would their owners use them?

Two decades earlier, in 1945, Engelbart had read an article in The Atlantic Monthly titled “As We May Think” that had already posed this question and offered some answers. The author, Vannevar Bush, a scientist at MIT, predicted that computing devices would serve as repositories of information, and ultimately, of all human knowledge. Owners of computing devices would consult this information by browsing through it with pointing devices and contribute information to the knowledge base almost at will. Engelbart agreed that the primary purpose of the personal computer would be to augment the human intellect, and he spent the rest of his career designing computer systems that would accomplish this goal.

During the late 1960s, Engelbart built the first pointing device, or mouse. He also designed software to represent windows, icons, and pull-down menus on a bit-mapped display screen. He demonstrated that a computer user could not only enter text at the keyboard but directly manipulate the icons that represent files, folders, and computer applications on the screen.

But for Engelbart, personal computing did not mean computing in isolation. He participated in the first experiment to connect computers in a network, and he believed that soon people would use computers to communicate, share information, and collaborate on team projects.

Engelbart developed his first experimental system, which he called NLS (oNLine System) Augment, on a minicomputer at SRI. In the early 1970s, he moved to Xerox PARC (Palo Alto Research Center) and worked with a team under Alan Kay to develop the first desktop computer system. Called the Alto, this system had many of the features of Engelbart’s Augment, as well as email and a functioning hypertext (a forerunner of the World Wide Web). Kay’s group also developed a programming language called Smalltalk, which was designed to create programs for the new computer and to teach programming to children. Kay’s goal was to develop a personal computer the size of a large notebook, which he called the Dynabook.

Unfortunately for Xerox, the company’s management had more interest in photocopy machines than in the work of Kay’s visionary research group. However, a young entrepreneur named Steve Jobs visited the Xerox lab and saw the Alto in action. In 1984, Apple Computer, the now-famous company founded by Steve Jobs, brought forth the Macintosh, the first successful mass-produced personal computer with a graphical user interface.

While Kay’s group was busy building the computer system of the future in its research lab, dozens of hobbyists gathered near San Francisco to found the Homebrew Computer Club, the first personal computer users group. They met to share ideas, programs, hardware, and applications for personal computing.

The first mass-produced personal computer, the Altair, appeared in 1975. The Altair contained Intel’s 8080 processor, the first microcomputer chip. But from the outside, the Altair looked and behaved more like a miniature version of the early computers than the Alto. Programs and their input had to be entered by flipping switches, and output was displayed by a set of lights. However, the Altair was small enough for personal computing enthusiasts to carry home, and I/O devices eventually were invented to support the processing of text and sound.

The Osborne and the Kaypro were among the first mass-produced interactive personal computers. They boasted tiny display screens and keyboards, with floppy disk drives for loading system software, applications software, and users’ data files. Early personal computing applications were word processors, spreadsheets, and games such as Pacman and SpaceWar. These computers also ran CP/M (Control Program for Microcomputers), the first PC-based operating system.

In the early 1980s, a college dropout named Bill Gates and his partner Paul Allen built their own operating system software, which they called MS-DOS (Microsoft Disk Operating System). They then arranged a deal with the giant computer manufacturer IBM to supply MS-DOS for the new line of PCs that the company intended to mass-produce. This deal proved to be an advantageous one for Gates’s company, Microsoft. Not only did Microsoft receive a fee for each computer sold, but it was able to get a head start on supplying applications software that would run on its operating system. Brisk sales of the IBM PC and its “clones” to individuals and institutions quickly made MS-DOS the world’s most widely used operating system. Within a few years, Gates and Allen had become billionaires, and within a decade, Gates had become the world’s richest man, a position he held for 13 straight years.

Also in the 1970s, the U.S. Government began to support the development of a network that would connect computers at military installations and research universities. The first such network, called ARPANET (Advanced Research Projects Agency Network), connected four computers at SRI, UCLA (University of California at Los Angeles), UC Santa Barbara, and the University of Utah. Bob Metcalfe, a researcher associated with Kay’s group at Xerox, developed a software protocol called Ethernet for operating a network of computers. Ethernet allowed computers to communicate in a local area network (LAN) within an organization and with computers in other organizations via a wide area network (WAN). By the mid 1980s, the ARPANET had grown into what is now called the Internet, connecting computers owned by large institutions, small organizations, and individuals all over the world.

Communication and Media Computing (1990–2000)

In the 1990s, computer hardware costs continued to plummet, and processing speed and memory capacity skyrocketed. Optical storage media such as compact discs (CDs) and digital video discs (DVDs) were developed for mass storage. The computational processing of images, sound, and video became feasible and widespread. By the end of the decade, entire movies were being shot or constructed and played back using digital devices. The capacity to create lifelike three-dimensional animations of whole environments led to a new technology called virtual reality. New devices appeared, such as flatbed scanners and digital cameras, which could be used along with the more traditional microphone and speakers to support the input and output of almost any type of information. Desktop and laptop computers not only performed useful work but gave their users new means of personal expression. This decade saw the rise of computers as communication devices, with email, instant messaging, bulletin boards, chat rooms, and the amazing World Wide Web.

Perhaps the most interesting story from this period concerns Tim Berners-Lee, the creator of the World Wide Web. In the late 1980s, Berners-Lee, a theoretical physicist doing research at the CERN Institute in Geneva, Switzerland, began to develop some ideas for using computers to share information. Computer engineers had been linking computers to networks for several years, and it was already common in research communities to exchange files and send and receive email around the world. However, the vast differences in hardware, operating systems, file formats, and applications still made it difficult for users who were not adept at programming to access and share this information. Berners-Lee was interested in creating a common medium for sharing information that would be easy to use, not only for scientists but for any other person capable of manipulating a keyboard and mouse and viewing the information on a monitor.

Berners-Lee was familiar with Vannevar Bush’s vision of a web-like consultation system, Engelbart’s work on NLS Augment, and the first widely available hypertext systems. One of these systems, Apple Computer’s Hypercard, broadened the scope of hypertext to hypermedia. Hypercard allowed authors to organize not just text but images, sound, video, and executable applications into webs of linked information. However, a Hypercard database sat only on standalone computers; the links could not carry Hypercard data from one computer to another. Furthermore, the supporting software ran only on Apple’s computers.

  Berners-Lee realized that networks could extend the reach of a hypermedia system to any computers connected to the Internet, making their information available worldwide. To preserve its independence from particular operating systems, the new medium would need to have universal standards for distributing and presenting the information.


To ensure this neutrality and independence, no private corporation or individual government could own the medium and dictate the standards.

Berners-Lee built the software for this new medium, now called the World Wide Web, in 1992. The software used many of the existing mechanisms for transmitting information over the Internet. People contribute information to the web by publishing files on computers known as web servers. The web server software on these computers is responsible for answering requests for viewing the information stored on the web server. To view information on the web, people use software called a web browser. In response to a user’s commands, a web browser sends a request for information across the Internet to the appropriate web server. The server responds by sending the information back to the browser’s computer, called a web client, where it is displayed or rendered in the browser.

Although Berners-Lee wrote the first web server and web browser software, he made two other, even more important, contributions. First, he designed a set of rules, called HTTP (Hypertext Transfer Protocol), which allows any server and browser to talk to each other. Second, he designed a language, HTML (Hypertext Markup Language), which allows browsers to structure the information to be displayed on web pages. He then made all these resources available to anyone for free. Berners-Lee’s invention and gift of this universal information medium was a truly remarkable achievement.

Today there are millions of web servers in operation around the world. Anyone with the appropriate training and resources—companies, government, nonprofit organizations, and private individuals—can start up a new web server or obtain space on one. Web browser software now runs not only on desktop and laptop computers, but on handheld devices such as cell phones.
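As a small illustration of the request-and-response exchange just described, here is a sketch that uses Python's standard urllib module to play the role of a very simple browser. The address shown is only a placeholder, and this is not an example from the book.

    from urllib.request import urlopen

    # Send an HTTP request to the web server at example.com...
    response = urlopen("http://example.com/")

    # ...and read the HTML text that the server sends back.
    html = response.read().decode("utf-8")
    print(html[:100])    # show the first 100 characters of the page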

  Wireless Computing and Smart Devices (2000–Present)

The twenty-first century has seen the rise of wireless technology and the further miniaturization of computing devices. Today’s smartphones allow you to carry enormous computing power around in your pocket and allow you to communicate with other computing resources anywhere in the world, via wireless or cellular technology. Tiny computing devices are embedded in cars and in almost every household appliance, from the washer/dryer and home theater system to the exercise bike. Your data (photos, music, videos, and other information) can now be stored in secure servers (the “cloud”), rather than on your devices.

Accompanying this new generation of devices and ways of connecting them is a wide array of new software technologies and applications. Only three very significant innovations are mentioned here.

In the late 1990s, Steve Jobs rejoined Apple Computer after an extended time away. He realized that the smaller handheld devices and wireless technology would provide a new way of delivering all kinds of “content”—music, video, books, and applications—to people. To realize his vision, Jobs pursued the design and development of a handheld device with a clean, simple, and “cool” user interface to access this content. The first installment of such a device was the iPod, a music player capable of holding your entire music library as well as photos. Although the interface first used mechanical buttons and click wheels, it was soon followed by the iTouch, which employed a touch screen and could play video. The touch screen interface also allowed Apple and its programmers to provide apps, or special-purpose applications (such as games), that ran on these devices. When wireless connectivity became available, these apps could provide email, a web browser, weather channels, and thousands of other services. The iPhone and iPad, true multimedia devices with microphones, cameras, and motion sensors, followed along these lines a few years later.