PC = C

Reaching Natural Limits

by Dennis G. Allard


Introduction
Optical Storage
Omnipresent LANs
Data Base Servers
Window Servers
Remote Procedure Call
Network Computing
Cyberspace
Open systems and Interoperability
UNIX vs. Windows NT
Free Software
Digital Works
Digital Works in Cyberspace

Introduction

About a year ago [1991] personal computers suddenly became fast and cheap. Desktop computers costing $2000, able to execute 10 mips (million instructions per second) with 200 megabyte disks and local area networking, became the norm. Tasks which until recently could only be done on workstations costing ten times that much can now be done on PCs. From the high end, workstations of 50 mips capability are now becoming available for $5000. In short, the concepts of PC, workstation, and even mainframe, are merging. PC = C. This will have consequences. It is one of several enabling phenomena which will lead to new software and hardware markets. Another crucial enabling phenomenon has been the advancement in secondary storage technology. For both ferro-magnetic and magneto-optical disk drives, we have seen a revolutionary increase in storage capacities, miniaturization of size, decrease in disk access times, and surprising decreases in cost. A 1 gigabyte ferro-magnetic disk with 10 millisecond seek times and selling for under $1000 will become commonplace during 1994. Writable optical disks having 30 millisecond access times and gigabyte capacities will cost $1000 but have the additional feature of using removable cartridges which only cost $50 or so per gigabyte.

The improvements in PC technology are more than just a quantitative change of scale. We will be seeing a qualitative difference in what it is possible to do on a PC. Let me illustrate this point. Consider the population of the state of California. It is somewhere between 25 and 30 million people. Pretty soon, it will be possible to store a database of all those people on a single desktop PC having, say, 5 gigabytes of writable disk storage, which would provide about 200 bytes of uncompressed storage per person. This has consequences for the market. It was something you just couldn't do until now. In the recent past, companies like, say, Bank of America had to buy mainframe computers costing millions of dollars to deal with such amounts of data. Pretty soon, they won't. Computers are catching up with what I call natural limits in required resources. This phenomenon occurs in areas other than disk storage capacity. It happened in hi-fi stereo. Music CDs now record sound at a level of quality which is as good as what the human ear can (will ever be able to) distinguish. If hi-fi recording is to improve, it can now only do so via improvements in microphone, amplifier, and speaker technologies. Natural limits also exist in video display resolution and network bandwidth, since the human brain can only process a given finite amount of information. Although we are still far from the limits of human visual resolution and communication speeds, we're getting there.
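
To make the California example concrete, here is a quick back-of-the-envelope check of that storage arithmetic, using the same estimates as above (the exact figures are, of course, just illustrative):

    # Back-of-the-envelope: how many bytes per Californian fit on a 5 gigabyte disk?
    population = 30_000_000            # upper end of the 25-30 million estimate
    disk_bytes = 5 * 10**9             # 5 gigabytes of writable disk storage
    print(disk_bytes // population)    # 166 bytes each; about 200 at the 25 million end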

In other words, there is a finite amount of bandwidth we need. Once we achieve it, we won't (ever) need more. It is a high number, and since people can run programs to gobble up bits which they will never look at, one can argue that we will always need more and more bandwidth. But, in practice, we won't. There is some (very high) bandwidth number which the planet will someday reach and not need to exceed.

According to Business Week [circa 1991], the worldwide computer hardware market has grown from $30 billion in 1980 to $150 billion in 1992, tripling or so in real dollars. During that period of time, the PC industry went from zero to about half of the total computer market. In particular, over the last five years, the PC market more than doubled while the rest of the computer market stayed flat. Due to the newfound performance of PCs, the PC is destined to entirely dominate the computer market within five years. That means continuing growth in the PC market. And the overall market itself will grow, owing in part to new applications based on the wonderful technologies outlined in this article.

 

Optical Storage

One of the most visible new computer technologies is the use of Compact Disks (CDs). A CD for a computer is very similar to a music CD which you put into your stereo, except it can store data for use on your computer. In fact, there are several different kinds of CDs, of which music CDs are just one example. As with music CDs, computer CDs come in different sizes. A 5.25 inch diameter CD can currently store about 600 megabytes of data, which is the equivalent of about 400 3.5 inch floppy disks. With new blue-light lasers, the capacity could increase to 2 gigabytes. Renny Fields, of the Aerospace Corporation, is working on technology which he claims will enable storing 1 gigabit per square centimeter, another order of magnitude increase in storage density.

Someday, you will be able to access all digitized information via networks, including music you want to hear or articles you want to read. This will make commercial CD-ROMs less necessary and conceivably make them obsolete. This is because it will be possible for people to make digital copies of information which are identical to the originals and store them on their own home made CD-ROMs. One possibility is that CD-ROMs will then become much less expensive, so that people do not go to the bother of copying them. Another possibility is that encryption schemes will be devised which make copying the information more difficult. But if someone can hear music or see a video, then they are seeing an unencrypted version of it and can, themselves, make a recording of it. However, these possibilities are several years away, awaiting the maturation of the gigabit networks and protocols currently in the research stage. This whole area provides a basis for an interesting discussion about the impact of digital technology on society.

 

Omnipresent LANs

The facts of fast, cheap workstations and new high capacity storage technologies combine with another trend, that of network based computing. The installed base of PC local area networks will continue to expand and be applied in new ways. Historically, PC LANs (Local Area Networks) have been used merely to support file servers. What this meant to users was that you no longer had to copy data to a floppy and physically walk the floppy across a room in order to move the data from one machine to another, since everyone stores their data on a common shared disk somewhere on the LAN. We are also seeing programs which make use of file servers to communicate with other programs running on other machines. In such cases, the file server is used as a medium of communication between different machines on the LAN. This is how PC networked databases such as Foxpro work. But there are other trends in network usage which are about to hit the PC market big time. Let me mention two: database servers and window servers.

 

Data Base Servers

A database server is a program running on one machine which talks with a client program running on a user's workstation. There is one client program running on each user's PC but only a single database server running on another PC or UNIX machine somewhere on the network. All the client programs talk over the network to this one database server program. The client program interacts with the user to set up a database query or report by prompting for fields and whatnot. The client program then sends the user's query to the server over the network. The server processes the query from data stored on its local disk and sends the answer back. This is different from what happens with the file server scheme used by systems such as DBASE, Clipper, Foxpro, and Access. Consider an example where we want to query the database for all employees who worked overtime at least four days per week last month. In the file server scheme, the program processes the entire query on the user's PC. Depending on how the database is organized, this probably involves fetching from the file server an amount of information proportional to the number of days in the previous month times the number of employees. This means a lot of network traffic between the file server and the PC. In the database server scheme, the client program sends the entire query to the database server. It is the database server which analyzes the query and does all the record fetching locally from the database (which is on the same machine as the server). The only things sent across the network are the original query and the final answer. This means a lot less network traffic.
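
Here is a minimal sketch of the idea in Python, with an invented toy wire protocol (a plain-text query goes out, a small JSON answer comes back) and a made-up four-row employee table. The point is only where the filtering happens: on the server, next to the data, so the network carries just the question and the answer.

    # Toy database server and client, run on one machine for the demo;
    # in real life they would be two programs on two machines on the LAN.
    import json, socket, threading

    EMPLOYEES = [("alice", 6), ("bob", 2), ("carol", 17), ("dave", 0)]  # (name, overtime days last month)

    srv = socket.socket()
    srv.bind(("localhost", 0))          # grab any free port for the demo
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve_one_query():
        conn, _ = srv.accept()
        conn.recv(1024)                               # the query arrives over the network
        heavy = [n for n, d in EMPLOYEES if d >= 16]  # filtering happens here, next to the data
        conn.sendall(json.dumps(heavy).encode())      # only the answer is sent back
        conn.close()

    threading.Thread(target=serve_one_query).start()

    client = socket.socket()                               # this part plays the client program
    client.connect(("localhost", port))
    client.sendall(b"overtime >= 4 days/week last month")  # the whole query goes to the server
    print(json.loads(client.recv(65536)))                  # ['carol'] -- only the answer comes back

A file-server-style system would instead pull every employee record across the network and do that same filtering on the user's PC.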

Database servers also provide more powerful query functions, better reliability and better data security than classic PC database systems. The history of the situation here is this. On PCs you have had DBASE and other such systems. On minicomputers, and more recently on workstations and 486 PCs, you have relational database systems such as Sybase, Ingres, Oracle, and Informix. On mainframes, you have IBM's nonrelational IMS and relational DB2 and the like. Any of these systems could be implemented via client/server technology. Originally, none were. They were all originally designed as standalone systems with, at best, file server extensions so that separate copies of the system could share files. The client/server idea was pioneered by the relational systems.

Historically, UNIX has pioneered many of the ideas you are now seeing appear on PCs. Let us not forget that Xerox and Apple, not Microsoft, pioneered a lot of the other ideas you see, and they did that pioneering on neither UNIX nor PCs! UNIX used to run mostly on machines more powerful than PCs but is now starting to run on PCs. PCs used to have meek twirpy little operating systems like DOS and Windows but are now starting to have real operating systems like UNIX, OS/2 and Windows NT.

 

Window Servers

Another server concept which is ten years old in the UNIX world but very new to the PC world is that of a window server. I am talking about the X Window System, the only network based windowing system in widespread use. In X Windows, you can run a program on someone else's machine but have it display on your machine. Or, you can have a program which is already running on another machine open up a window on yours. Here's an anecdote to illustrate my point. In 1990 I visited the Open Software Foundation lab in Grenoble, France. They sat me down at a desk so I could connect via the Internet to my machine in my office back in California. Using X Windows, I was able to run my electronic mail system and my word processor on my office machine and have them display on the machine I was sitting at in Grenoble. I'm not talking about a dumb terminal emulator. I mean full-on graphical windows with use of the mouse, exactly as if I were sitting in my office. With X Windows, people are, today, regularly running programs on machines all over their LAN or elsewhere on a WAN (Wide Area Network), all at the same time, while interacting with those programs via X Windows on their own workstation. Can you do this with Windows NT? No. Windows NT has not caught up with UNIX in the area of networked windowing!
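
The mechanism behind this is that an X client program connects to whatever X server the DISPLAY environment variable names, and that X server can be on a different machine. Here is a minimal sketch in Python using Tkinter, which on UNIX speaks the X protocol; the host name is invented, and the remote X server would have to permit the connection (for example via xhost):

    # Run on the remote machine (my office workstation); the window appears on
    # whichever display DISPLAY points to -- here, a hypothetical desk in Grenoble.
    import os
    import tkinter as tk                         # on UNIX, Tk talks to the X server named by DISPLAY

    os.environ["DISPLAY"] = "desk.grenoble.example:0"   # invented host name for the Grenoble desk
    root = tk.Tk()                                      # opens a network connection to that X server
    tk.Label(root, text="Hello from my office machine in California").pack()
    root.mainloop()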

I'll give one more example of network computing which I think will become as important as the previous examples but is less well known to nonprogrammers.

 

Remote Procedure Call [1999 jargon: DCOM vs CORBA]

Programs consist of routines or procedures which call each other to perform a computation and return an answer. Remote Procedure Call (RPC) is a mechanism whereby a program can have one of its procedures execute on a remote machine, i.e. on a machine other than the one the program itself is running on. For example, suppose you have a program on your PC which scans in a photo or some other large bit-mapped image and allows you to display it, rotate it, zoom in or out, etc. Each of these tasks will be programmed as a set of one or more subroutines. Suppose one of those routines does some kind of fancy filtering of the image and OCR (optical character recognition) which converts it from graphics into ASCII text. For most operations, your program does just fine running on a lowly 386. But suppose the filtering and OCR take 2 minutes (forever). One way to speed up the program would be to modify it to run the OCR subroutine on the more powerful, say, Pentium based processor down the hall. The program would use RPC to invoke the routine on the Pentium and get back a result exactly as if the routine were local. You could even imagine having the Cray 1 located at U.C. San Diego do the OCR. Hell, the Cray could do it in about 50 microseconds (or something like that). You might ask why not just have the entire program run on the remote powerful machine and use X Windows to display the photo on your local machine? That would be a solution. But with RPC, only the subroutine which needs to use the more expensive computer would run on it. RPC is a way to distribute computation in a manner which optimizes use of computer resources.
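
As a concrete sketch, Python's standard xmlrpc library (a later, simpler descendant of the RPC mechanisms of that era) shows the shape of the idea. The ocr_filter routine, the file name, and the host name are all invented stand-ins:

    # rpc_server.py -- runs on the fast machine down the hall
    from xmlrpc.server import SimpleXMLRPCServer

    def ocr_filter(image_bytes):
        # stand-in for the expensive filtering + OCR routine
        return "recognized text for a %d-byte image" % len(image_bytes)

    server = SimpleXMLRPCServer(("0.0.0.0", 8000), use_builtin_types=True)
    server.register_function(ocr_filter)
    server.serve_forever()

    # rpc_client.py -- runs inside the imaging program on the lowly 386
    from xmlrpc.client import ServerProxy

    remote = ServerProxy("http://fast-machine-down-the-hall:8000")   # invented host name
    text = remote.ocr_filter(open("photo.tif", "rb").read())         # looks like a local call,
    print(text)                                                      # but executes remotely

To the calling program the remote routine looks like any other subroutine; the RPC machinery packages the arguments, ships them across the network, runs the routine, and returns the result.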

 

Network Computing

Everything we've been talking about so far generalizes: file servers, database servers, window servers. A pattern is emerging. These three cases are important instances of a more general computing architecture which I call network computing. Once you have a network in the loop, you can imagine chopping up your computing tasks into pieces which each run on the separate machine best designed for that task. Imagine if all the computers in the world were tied together on one gigantic network and you could run programs on any of them and access data available on any of them. This ideal is not far from being technically feasible today. Sun Microsystems puts it as follows: the network is the computer.

Servers

RPC and X Windows are both driving forces for yet one more server concept, this time referred to simply as a server. A server is a machine on the network whose purpose in life is to provide computational cycles. Servers are starting to appear on most UNIX based LANs already. Typically, they are multiprocessor machines having several CPUs in one box, providing processing capabilities on the order of several hundred mips, and being accessed only via X, RPC, and other client/server protocols. They do not normally even have a monitor attached to them! An interesting historical note here is that the server concept is reintroducing the notion of a multiuser, multiprogrammed system, with all the various security issues that entails.

Interoperability

An interesting side effect of network computing has been the creation of a mechanism for achieving interoperability. In the case of RPC (Remote Procedure Call), the RPC caller can be written in a different language than the RPC callee and run on a different operating system. The only thing they have to have in common is the lingua franca of the protocol which interfaces them over the network. A similar situation exists in X Windows and client/server databases. The client program talks with the server program via a message protocol which is machine, operating system, and programming language independent. This means that programs written on diverse platforms can interoperate as long as they all use the same protocol for communicating with one another. [1999: This is the secret of the World Wide Web -- why it works so well.]

Local Area Networking

The networking trends we've identified all require a network on which to run. Which network? Novell? Lantastic? What I see happening here is the disappearance of Novell and Lantastic as we know them today. UNIX and Microsoft Windows NT have built-in networking. You don't need an outside network vendor, just an Ethernet or Token Ring network adapter card.

Wide Area Networking

We are not that far from having all the computers in the world able to talk to each other. Computers talk to each other over great distances via a Wide Area Network. A WAN uses different technology than a LAN. Whereas a LAN functions at between 2 and 10 megabits per second, a WAN today typically uses telephone company T1 connections which operate at 1.5 megabits per second. As fiber optics becomes prevalent, we will see WAN connections operating at 1 gigabit per second. A gigabit is a billion bits, roughly the amount of information contained in a copy of the Encyclopedia Britannica, including the pictures. If you had a gigabit network linked into your home, you could download the Britannica to an optical disk in a matter of seconds! Actually, I'm lying. The speed of your computer would not allow this. Your computer's internal bus is slower than optical fiber communication speeds and would therefore become a bottleneck to high speed data transfer. It would actually take several minutes to download the Britannica to your optical disk.
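
A back-of-the-envelope calculation shows why the slowest link sets the download time; the bus and optical-drive rates below are assumed, period-plausible figures for illustration, not measurements:

    # Rough transfer times for a one-gigabit encyclopedia over each link in the chain.
    britannica_bits = 1e9                               # ~1 gigabit, per the estimate above
    rates = {
        "fiber WAN (1 Gbit/s)":               1e9,      # the network itself
        "PC internal bus (~5 MB/s, assumed)": 5e6 * 8,
        "optical drive (~300 KB/s, assumed)": 300e3 * 8,
    }
    for name, bits_per_second in rates.items():
        print("%-38s %6.0f seconds" % (name, britannica_bits / bits_per_second))
    # The slowest link in the chain, not the fiber, determines how long the download takes.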

I regularly copy files from remote machines across the United States to my workstation via a wide area network known as the Internet. The Internet is the largest wide area network in the world, currently with as many as five million computers attached to it, although no one knows the number for sure and there is no easy way to find out. To be more precise, the Internet is a collection of several hundred cooperating wide area networks all hooked up together to create a single network spanning the globe. Running on the Internet is an increasingly interesting phenomenon called USENET, a worldwide bulletin board system which enables people to engage in electronic forums on every subject from recipes to molecular biology. In sheer number of users, USENET dwarfs Compuserve and Prodigy. When will we all have access to networks such as the Internet? The April 13, 1992 issue of Forbes points out that most of our apartments and homes are already wired for high speed network connections! This is because the coax cables used for cable TV are adequate for supporting gigabit data rates! Once we go to digital transmission of television, we will be able to multiplex our cable lines with both TV and computer data! The Forbes article also points out that the cable TV companies will only need to replace about 20% of their wiring with fiber optics in order to economically support computer networking. The telephone companies are also in the running for this market, but are at a disadvantage because the phone wiring which comes into our homes can only support data rates about a million times slower than the cable TV wires.

 

Cyberspace

Marshall McLuhan called it the Global Village. Ross Perot calls it the electronic town hall. William Gibson, Stewart Brand and Ted Nelson call it Cyberspace. It is beginning. Bulletin board systems, Compuserve, Prodigy, and USENET are all part of this beginning. It is exciting. You will be able to reference an encyclopedia from the Library of Congress, fetch a Beethoven piano sonata, copy the latest version of Windows, or send a recipe to a friend over the electronic frontier, all in a few seconds. You will be able to subscribe at almost no charge to what amount to interactive magazines on any subject you can imagine. Any time you like, you will be able to be an author in such a magazine, whose editors are the subscribers, like yourself, who make contributions to the net. The accountants will have to figure out who will pay for the transmissions. But once you hook all the computers together and give people the ability to send bits to each other, they will send bits to each other. All information can be recorded as bits. It is an evolution and a revolution of technology. The September 1991 issue of Scientific American is devoted to the subject of networks and the communication technologies which we are using to construct Cyberspace.

 

Open systems and Interoperability

There is much talk these days about 'open systems' and 'open architectures'. In the November 23, 1992 issue of Business Week, John Verity gives a broad analysis of the current situation in the worldwide computer market. Not all of his observations strike me as valid, but one area where I believe he is right concerns open systems. As he puts it,

... companies that in 1986 built and sold finished computer systems were capturing about 80% of the total profits being generated by computer sales. The reason: Older, high-margin systems from the big computer makers still dominated. These computers all had proprietary software that kept the customers locked in -- and paying high prices.

By 1991, however, systems makers were getting just 20%. Why? Because the PC had cut out the fat -- and not just by lowering costs. The PC, and other "open" systems such as minis and workstations using UNIX software made it possible for customers to choose from a wide range of machines that all ran the same programs. ...

Open systems are ones where the interface to the system is defined in a manner such that other systems may interact with it at the level of programs interacting with each other, as opposed to just at the user level. In other words, a developer who acquires a software product knows from the outset what the system interfaces are and has programmatic access to those interfaces. In the case of UNIX, the openness of the system has been so widespread across vendors that there is now an ANSI standard defining major parts of the system call interface of the operating system. It is called the POSIX standard.
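
As a small illustration of what programmatic access to a published interface looks like, here is a sketch in Python, whose os module wraps the same POSIX calls (open, read, close) that a C program on a UNIX system would make; the file path is just an example:

    # Reading a file through the POSIX system-call interface, via Python's os module.
    import os

    fd = os.open("/etc/hosts", os.O_RDONLY)    # POSIX open(2)
    data = os.read(fd, 4096)                   # POSIX read(2)
    os.close(fd)                               # POSIX close(2)
    print(data.decode(errors="replace")[:80])  # first few characters of the file

Any system that conforms to the published standard is obliged to provide these calls with the same behavior, which is what lets the same program move between vendors' systems.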

In spite of the popularity of the term, there is still vagueness as to just what 'open system' really means. As outlined above, a restricted notion is one where the interface to a system is published but it is still necessary for a developer to buy a developer's kit or other proprietary tools in order to make use of that interface. At the other extreme are systems such as those produced by the Free Software Foundation, where anyone who owns the system has a complete copy of the source code and the ability to interface to the system in any way they choose. I would propose a breakdown of open systems into three kinds: open, accessible, and public. Open means simply that the system sticks to a published interface. Accessible means that a developer can access that interface via general purpose tools, such as a C compiler, without being required to purchase special rights or software from the system's manufacturer. Public means that the source code of the system is available at no charge.

 

UNIX vs. Windows NT

A major battle is about to occur between UNIX and Windows NT, Microsoft's new operating system which, unlike DOS, takes full advantage of the 32-bit architectures of 386 and 486 machines running in 'protected mode'. It will be interesting to see what level of open architecture is supported by Windows NT. It will also be interesting to see if Windows NT achieves the reliability which UNIX provides. UNIX has already established itself as an open system. Most UNIXes are accessible and some are public. Windows NT will be open, but it is pretty sure not to be accessible and sure as hell won't be public. Windows NT is claimed to be POSIX compliant, meaning that it will provide an open architecture providing UNIX-like system calls. In fact, Bill Gates is quoted in the October 26, 1992 issue of LAN Times, saying that "NT is absolutely UNIX. UNIX applications run better on NT than on the other versions of UNIX already out there." This statement strikes me as cavalier and intrinsically incorrect. There are several very good versions of UNIX in the world. It is quite a claim to state that an operating system which is in beta test is better than all of them. Also, being POSIX compliant means supporting only a subset of what the different UNIXes provide. Being POSIX compliant does not a UNIX make.

There are too many anecdotes about the unreliability of Windows to permit any rational observer the luxury of predicting that its descendant, Windows NT, will dominate UNIX. True, UNIX is a system administrator's nightmare at times, with complicated and arcane system configuration. But UNIX works. When you want to modify the system or uninstall a program, it may be hard to do, but it is, at least, possible. Not true with Windows. The January 1993 issue of Imaging magazine reported that after surveying their readers, they concluded that the 'only sure way' to uninstall a Windows program is, at time of installation, to make a complete copy of the computer's disk. Then, try the program and decide if you want to keep it (I guess forever). If not, erase the computer's entire disk and restore it from the backup. Once you have used a Windows program for a while, there is, in general, no sure way to cleanly uninstall it, due to side effects the program may have in Windows system files and God knows where else on your disk. May Windows NT not be plagued by such a feature.

I predict that Microsoft will start having problems in two or three years, once everyone realizes that the machines which can run Windows NT can also run UNIX and that UNIX is both less proprietary and the operating system of choice for the likes of Sun Microsystems, Hewlett-Packard, and AT&T, all of whom produce hardware on which UNIX already runs quite well. Moreover, Novell is currently positioning itself behind UNIX, as is also reported in the LAN Times issue. (Note: since I wrote this, the news broke that Novell has acquired UNIX System Labs, the AT&T subsidiary which markets AT&T UNIX. I wager 10 Aldebaran Platins on UNIX.)

I see Windows NT being a boon for UNIX, as it will cause an increase in the number of machines capable of running UNIX, will interoperate with UNIX, and will influence UNIX designers to simplify certain aspects of UNIX to compete for market share. Overall, Windows NT will have more to learn from UNIX than UNIX has to learn from Windows NT. X Windows, in particular, will remain a reason for going with UNIX, since the windowing system of Windows NT is not network based the way X Windows is. Also, there is a substantial installed base of UNIX systems.

 

Free Software

The Free Software Foundation, headed by Richard Stallman, goes one step beyond what I earlier referred to as public systems. They are trying to take over the world by creating GNU tools. GNU stands for 'GNU's Not UNIX'. Every GNU tool is copylefted, meaning that it is legal for someone who owns it to give it away. When it is given away, all its source code must be given with it. Moreover, if a system is built on top of a GNU tool, and that system is then sold, the source code for any part of the tool which extends the GNU code must still be given or sold with the tool. There are GNU versions of C compilers, Emacs text editors, various UNIX tools, and an entire GNU UNIX is in the works.

 

Digital Works

I call all works which can be transmitted to others in digital form digital works. Things like software, music recordings, and books are all digital works. They are different from physical works, such as an automobile. If you buy a car, it is generally legal to modify it in various ways and to work on it yourself when it is broken or if you want to improve its performance. With software, you typically cannot do that, since the software producer does not give you the code or allow you to modify it legally. The difference between a car and a program is that the program can be copied and given to someone else at almost zero cost. That is the primary reason you are not allowed to have its source code. With GNU products, which are all copylefted, this is not a problem.

Programmers who work on GNU products make money not by selling them but by selling extensions to them and selling expertise in maintaining them or adapting them to customers' needs. The main idea behind Free Software is not to give everything away but to make the software code freely available to users of the code and freely modifiable by them. This has the effect of removing controls put on software by its producers. If a user wants to modify the software to suit their needs, or even to fix a bug, it is both legal and possible to do so, under copyleft, since the source code of the software is available and can be altered by anyone.

Personally, I like the idea and think programmers and software companies should always consider copylefting some of their code. How would we make a profit on a copylefted product? If our main line of business is to provide integrated solutions to businesses, then what we are selling is our expertise, not our software, per se. Hence there is no problem. If we want to rely on product sales, then a problem does arise. We would not be able to sell the products at high prices, since that would put pressure on customers to obtain copies from others who already have the product. But this already occurs for regular copyrighted software. When the price is high, individuals tend to copy it. It is illegal to do that, but, as Bill Gates himself once pointed out, everyone does it. If you want the manuals, technical support, update notices, etc., you have to buy the software, otherwise you can bootleg it. In the case of large companies which purchase software, they cannot afford to use bootlegged software, since they want guarantees as to quality and they want the updates and manuals on a regular structured basis. So even if copylefting will limit the overall price of software, it will not put software producers out of business. It may even become a selling point, due to the freedom which goes along with it.

 

Digital Works in Cyberspace

If there were no copyright protections for software, and everyone could legally and freely transmit software to others, a possible harmful effect would be that a programmer might choose not to write a piece of software because they would not be able to earn a living doing so. This argument is similar to arguments against free copying of digital music. If you can copy a CD recording onto DAT tape, say, at low cost, how will musicians make a living, since only a few people would actually buy a recording and everyone else would just make copies of it? Frankly, I think a better question to ask in all these cases is why the distributors should make so much money. Because in most cases, it is not the artist who makes money from the deals, but the software or music producers and distributors. Personally, I think that good programmers and musicians will always be able to make out fine, whether or not their works can be freely distributed. They would no longer be millionaires, perhaps, but they would still make money because their skills are valuable.

Moreover, a democratization of distribution would occur. An artist who now does not make it only because he or she does not know the right person in charge, the real controllers of our distribution channels, would have a better chance of distributing their work under a freer distribution system. It may be that free distribution of digital works can only be accomplished under a socialist system where artists are paid from a tax base because people vote them in as artists! Effectively that is what occurs now, very inefficiently. People vote with their dollars, most of the dollars going to non-artists.

Imagine a future where you can sit at your computer and select from all available digital works, be they software programs, music, or multimedia productions. And when you create a digital work, you can put it on the net at very low cost. It would be possible to have an automated system whereby the creators of the works received compensation proportional to how many people used or enjoyed their creations. In Cyberspace, it will be possible to achieve this. There will still be accountants, producers, agents, and other middlemen. But they too will work on the network, and the quality of their work, more than who they know, will be what determines their success. This is because on the net everyone has the freedom to browse all published information. And anyone has the ability to author something and to contact anyone else who is on the net, as long as the person they want to contact has not specifically disallowed it. The net provides an entirely new medium for advertising and distribution of works. There is more freedom on the net than off. It's that way so far. Let's keep it that way.


Copyright ©1992 by Dennis G. Allard. All rights reserved. Permission is granted to copy or translate this document (http://oceanpark.com/papers/pcequalc.html) into any form provided that this copyright notice is preserved in all copies.