
Saturday 13 July 2013

88. Evolution of Computer Power per Unit Cost


Continuing our discussion of robots, let us now take a look at their control centres, namely the computers. There are three parameters to consider for the computational underpinnings of robotic action: the processing power (or speed) of the computer; its memory size; and the price for a given combination of processing power and memory size.

The processing power (or computing power) can be quantified in terms of ‘million instructions per second’, or MIPS, and the size of the memory in megabytes. By and large, the MIPS and the megabytes of a computer cannot be chosen independently: barring some special applications, they should be kept in a certain proportion (per unit cost) for optimal performance across a variety of applications.

An analysis by Hans Moravec (1999) revealed that, for general-purpose or ‘universal’ computers, the ratio of memory (the megabytes) to speed (the MIPS) has remained remarkably constant throughout the history of computers. A ‘time constant’ can be defined here as roughly the time it takes a computer to scan its own memory once. One megabyte per MIPS gives one second as the value of this time constant, and this value has held steady as progressively better universal computing machines have been developed over the decades, as depicted in Moravec's (1999) 'evolution slide':


Machines having too much memory for their speed are too slow (for their price), even though they can handle large programs. Conversely, lower-memory, higher-speed computers cannot handle large programs, in spite of being fast. Special jobs therefore require special computers (rather than universal computers), entailing higher costs as well as a departure from the above universal time constant. For example, IBM's Deep Blue computer, developed to compete with the chess legend Garry Kasparov (in 1996-97), had far more speed than memory (~3 million MIPS and ~1000 megabytes, instead of the universally optimal combination of, say, 1000 MIPS and 1000 megabytes). Similarly, the MIPS-to-megabytes ratio of the computers that run certain aircraft is also skewed in favour of MIPS.
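To make the time-constant idea concrete, here is a minimal sketch in Python of the arithmetic, using the figures quoted above and the rough assumption that the scan time is simply megabytes divided by MIPS (an instruction touching on the order of one byte):

# A minimal sketch of the 'time constant': the rough time a machine needs
# to scan its own memory once, taken here as megabytes / MIPS
# (about 10^6 bytes set against about 10^6 instructions per second,
# assuming an instruction touches on the order of one byte).

def memory_scan_time(megabytes, mips):
    """Approximate time, in seconds, to scan the whole memory once."""
    return megabytes / mips

machines = {
    "balanced universal machine": (1000, 1000),       # megabytes, MIPS
    "Deep-Blue-like machine":     (1000, 3_000_000),  # figures quoted above
}

for name, (mb, mips) in machines.items():
    t = memory_scan_time(mb, mips)
    print(f"{name}: {mb} MB at {mips} MIPS -> time constant ~ {t:.4g} s")

The balanced machine gives the canonical one second, while the Deep-Blue-like combination gives only about a third of a millisecond, which is precisely the kind of departure from universality described above.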

Examples of the other kind, namely slow machines (fewer MIPS than megabytes), include time-lapse security cameras and automatic data libraries.

Moravec estimated in 1999 that the most advanced supercomputers available at that time were within a factor of 100 of having the power to mimic the human brain. But then such supercomputers come at a prohibitive cost. Costs must fall if machine intelligence is to make much headway. Although this has indeed been happening for a whole century, what about the future? How long can this go on? The answer is: for quite a while, provided technological breakthroughs or new ideas for exploiting technologies keep coming.

An example of the latter is the use of multicore processors. Multicore chips, even for PCs, are already on the market. Nvidia, for instance, introduced the GeForce 8800, a chip capable of about a million MIPS and yet cheap enough for commonplace applications such as displaying high-resolution video; it packs 128 processors onto a single chip. In a multicore processor, two or more processor cores on the same chip process data in tandem. For example, one core may handle a calculation, a second may take in data, while a third sends instructions to the operating system. Such load-sharing and parallel functioning improves speed and performance, and reduces energy consumption and heat generation.
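As a generic illustration of such load-sharing (not the GeForce 8800's actual programming model, just a sketch using Python's standard multiprocessing pool), independent pieces of work can be farmed out to several cores like this:

# A generic sketch of splitting independent work across processor cores.
# The per-frame workload is a hypothetical stand-in for real processing.
from multiprocessing import Pool

def process_frame(frame_number):
    """Stand-in for some per-frame computation."""
    return sum(i * i for i in range(frame_number * 1000))

if __name__ == "__main__":
    frames = range(1, 9)
    with Pool(processes=4) as pool:        # four cores working in tandem
        results = pool.map(process_frame, frames)
    print(results)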


Nanotechnology holds further promise for next-generation solutions for faster and cheaper computation. DNA computing is an alternative approach under investigation; it has the potential for massive parallelism.
 

Quantum computing is another exciting possibility.


A factor hindering rapid progress in robotics has been the high cost of sensors and actuators. Progress in nanotechnology (e.g. the development of MEMS) is bringing these costs down continuously. It is now far less expensive to incorporate GPS (Global Positioning System) chips, video cameras, array microphones, etc., into robots.

Bill Gates announced the development by his company Microsoft of universally applicable software packages that would further facilitate the use of ordinary PCs for controlling and developing robots of ever-increasing sophistication. Many robots already have PC-based controllers. It is anticipated that a large-scale move of robotics towards a universally applicable PC-based architecture will cut costs and reduce the time needed for developing new configurations of autonomous robots. In the 1970s the development of Microsoft BASIC provided a common foundation that allowed software written for one set of hardware to run on another. Something similar has been happening in robotics.

One of the challenging problems faced in robotics was that of concurrency, namely how to process simultaneously the large amounts of data coming in from a variety of sensors, and send suitable commands to the robot's actuators. The approach adopted until recently was to write a ‘single-threaded’ program that first processes all the input data and then decides on the course of action, before starting the long loop all over again. This was not a happy situation, because the action taken on the basis of one set of input data may come too late (for safety, for instance), even though subsequent input data indicates a drastically different course of action.
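In rough Python form (all three functions are hypothetical placeholders), the single-threaded arrangement looks like this; nothing the sensors report can influence behaviour until the whole loop has finished:

# A single-threaded 'sense, then plan, then act' loop.  The function bodies
# are placeholders; the point is the structure: the action finally issued is
# based on sensor data that may already be stale.
import time

def read_all_sensors():
    """Placeholder: gather data from every sensor in one go."""
    return {"camera": 0, "sonar": 0, "gps": 0}

def decide_action(sensor_data):
    """Placeholder: lengthy processing to choose a course of action."""
    time.sleep(0.5)                    # stands in for a long computation
    return "move_forward"

def send_to_actuators(action):
    """Placeholder: command the motors."""
    print("executing:", action)

for _ in range(3):                     # a real controller would loop forever
    data = read_all_sensors()
    action = decide_action(data)       # the data may already be stale...
    send_to_actuators(action)          # ...by the time the action goes out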

To solve this problem, one must write ‘multi-threaded’ programs that allow data to travel along many paths. This tough problem has been tackled by Microsoft by developing what is called the CCR (‘concurrency and coordination runtime’). Although the CCR was originally meant to exploit the advantages of multicore and multiprocessor systems, it may well be just the right thing for robots also. The CCR is basically a set of library functions that perform specific tasks; it helps in quickly developing multithreaded applications for coordinating a number of simultaneous activities of a robot. Of course, competing approaches already exist, not only to the CCR, but also to the so-called ‘decentralized software services’ (DSS) I mentioned in Part 87.
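By way of contrast, and without pretending to reproduce the CCR's actual programming model, here is a minimal multi-threaded sketch in plain Python: sensing runs in its own thread and keeps the latest reading up to date, the planner works in parallel, and a third thread carries out the commands, so fresh sensor data is never held up behind a long planning step.

# A minimal multi-threaded sketch (not the CCR API itself): sensing, planning
# and actuation run concurrently and communicate through shared state and a
# queue of commands.
import queue
import threading
import time

latest = {"reading": None}             # shared between sensor and planner
commands = queue.Queue()
stop = threading.Event()

def sensor_loop():
    reading = 0
    while not stop.is_set():
        latest["reading"] = reading    # fresh data is always available
        reading += 1
        time.sleep(0.05)

def planner_loop():
    for _ in range(5):
        time.sleep(0.3)                # stands in for heavy computation
        commands.put(f"action based on reading {latest['reading']}")
    stop.set()

def actuator_loop():
    while not (stop.is_set() and commands.empty()):
        try:
            print("executing:", commands.get(timeout=0.1))
        except queue.Empty:
            pass

threads = [threading.Thread(target=f)
           for f in (sensor_loop, planner_loop, actuator_loop)]
for t in threads:
    t.start()
for t in threads:
    t.join()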

Although low-cost universal robots will be run by universal computers, their proliferation will have more profound consequences than those engendered by low-cost universal computers alone. Computers only manipulate symbols; in essence they do ‘paperwork’, although the end results of such paperwork can of course be used for, say, automation. A sophisticated universal robot goes far beyond mere paperwork: it engages in perception and action in real-life situations. The real world presents a far greater diversity of situations than paperwork does, and in far greater numbers, so there would eventually be many more universal robots in action than universal computers. This, of course, will happen only when the cost per unit capability falls to low levels.

It has been asked why Moravec never updated the 1999 'evolution slide' I showed at the beginning of this post: 'If the graph's projections were correct and if we had the software available today, we would be able to purchase a computer to simulate the human brain in 2010 for $1000'. But this has not happened.

Perhaps Moravec's estimate for the number of MIPS needed to simulate the human brain was off by a factor between 1000 and 10,000,000. Taking the factor of 1000 as a better estimate, the modified 'evolution slide' looks as follows:



When will robotic intelligence go past human intelligence? Perhaps by 2050. Perhaps a little later than that.

