What’s Next for Computers After Moore’s Law Ends?

Experts ponder this question at latest IEEE Rebooting Computing summit

3 December 2014

Image: Andrey Prokhorov/iStockphoto

There’s general consensus among experts that Moore’s Law for scaling of integrated circuits is coming to an end. To prepare for that day, the IEEE Rebooting Computing working group has been holding summits over the past year with invited thought leaders and decision makers from government, industry, and academia to make recommendations for the future of computing.

I attended the third summit, held 23 and 24 October in Santa Cruz, Calif. It focused on the topics of security, parallelism, approximation, and human-computer interaction. The group’s cochairs, IEEE Fellows Elie Track and Tom Conte, kicked off the event. Forty-five experts attended from organizations including Aalto University, Georgia Tech, IBM, Lawrence Berkeley Labs, Mayo Clinic, Microsoft, the University of Massachusetts-Amherst, the U.S. National Science Foundation, and the U.S. National Security Agency. Presentations were given on each topic on the first day, and on the second day attendees formed working groups around the topics to discuss opportunities and challenges. Here are highlights of the meeting. The summary report can be found on the Rebooting Computing website.

  • TRUST AND SECURITY

    Neal Ziring, technical director of the Information Assurance Directorate at the National Security Agency, gave an overview of the requirements to maintain secure and trusted communication and control throughout a worldwide network of computers and devices. He admitted that the current state of computer security stinks and that breaches will increase dramatically. He said standards and practices need to be established for authenticating identity and privilege in platforms and computation, including networks and the cloud. And they need to be easy to implement universally and automatically. Standard protocols are also needed to automatically update security software on new and legacy systems, and for removing compromised devices from the network.

    “Identity, rights, and trust are key in future computers, and these computers need the ability to move trust around by issuing, delegating, and withdrawing it,” he said. “This process is viable but the vision is lacking.”
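
    One concrete ingredient behind Ziring’s call for authenticated, automatic updates is checking that an update really came from a trusted source before installing it. The following is a minimal Python sketch of that idea, not anything presented at the summit; it uses a shared-secret HMAC for brevity, the function and key names are purely illustrative, and a real deployment would rely on asymmetric signatures and proper key management.

        import hashlib
        import hmac

        def verify_update(payload: bytes, signature: bytes, key: bytes) -> bool:
            """Accept the update only if its HMAC tag matches the payload."""
            expected = hmac.new(key, payload, hashlib.sha256).digest()
            # compare_digest avoids leaking information through timing
            return hmac.compare_digest(expected, signature)

        # Hypothetical usage: the vendor ships (payload, signature); the device
        # holds the shared key and refuses to install anything that fails the check.
        key = b"device-provisioning-secret"        # illustrative only
        payload = b"security-patch-2014-10"
        signature = hmac.new(key, payload, hashlib.sha256).digest()
        print("apply update" if verify_update(payload, signature, key) else "reject update")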

    The working group concluded that hardware should provide a minimum level of security and trust. However, challenges exist, including defining the security primitives and developing methods to ensure hardware integrity. Software can then add flexibility by building multiple levels of trust on top of what the hardware offers.

    While control of information ultimately resides with the owner, according to the group, technical solutions could, for example, verify a person’s identity with minimal use of passwords while still protecting personal attributes and letting users customize privacy and security rules. IEEE can help strengthen security by developing standards, creating evaluation methods, advocating changes, and developing a set of threat models.

  • HUMAN-COMPUTER INTERFACE

    Gregory Abowd, a professor with the School of Interactive Computing at Georgia Tech’s College of Computing, covered the interaction between humans and computers. He said we’re moving into the fourth generation of computing, known as complementary computing, which integrates computing devices into the human environment seamlessly, blurring the boundaries between the two. He described the interactions this way: the cloud (cloud computing), the crowd (crowdsourcing), and the shroud (wearable sensors and devices). The ultimate human-centric computing system would include devices that adapt to the needs and preferences of the user rather than the user adapting to the needs of the computer or the programmer.

    Complementary computing is already enabling people to become more self-sufficient with mobile apps like Yelp, Foursquare, and Waze. He noted that complementary computing isn’t just for people—large organizations could create a killer app that could anticipate individual customers’ needs.

    The working group predicted that future computers would combine communication and entertainment devices (like smartphones) with analog sensors (for navigation and health) in interactive systems controlled by the user’s voice and gestures. They would also display only relevant data from the Internet, based on answers to questions the device asks the user. The group listed several challenges, including ensuring the security and privacy of information, the need for scalability, and a trust model in which the interests of users would take precedence over those of Internet service providers.

  • PARALLELISM

    Peter Beckman, the director of the Exascale Technology and Computing Institute at Argonne National Laboratory, in Lemont, Ill., covered the current state of parallel supercomputers. Parallel computing carries out many calculations simultaneously, and parallelism has become dominant in high-performance computing.
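
    As a minimal illustration of the idea (not an example from Beckman’s talk), the Python sketch below splits an independent workload across processor cores; the prime-counting task and chunk sizes are arbitrary choices for demonstration.

        # Data parallelism: independent chunks of work mapped across cores.
        from concurrent.futures import ProcessPoolExecutor

        def count_primes(bounds):
            """Count primes in [lo, hi) by trial division; deliberately simple."""
            lo, hi = bounds
            count = 0
            for n in range(max(lo, 2), hi):
                if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                    count += 1
            return count

        if __name__ == "__main__":
            chunks = [(i, i + 25_000) for i in range(0, 200_000, 25_000)]
            with ProcessPoolExecutor() as pool:    # one worker per core by default
                total = sum(pool.map(count_primes, chunks))
            print(total, "primes below 200,000")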

    He said that today’s supercomputers are energy hogs that are expensive to run, drawing about 25 megawatts and costing about US $25 million per year for electricity alone. IBM’s Blue Gene/Q supercomputer, which is housed at Argonne, is five times more energy efficient and 20 times faster than its predecessor, Blue Gene/P, but still costs $65 million to run.
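
    The electricity figure is easy to sanity-check. Assuming an industrial rate of roughly 11 cents per kilowatt-hour, a number not given in the talk, a machine drawing 25 megawatts around the clock costs on the order of $24 million a year:

        # Back-of-the-envelope check of the quoted electric bill; the
        # $0.11/kWh rate is an assumption, not a figure from the talk.
        power_mw = 25
        hours_per_year = 24 * 365                  # 8,760 hours
        rate_per_kwh = 0.11                        # assumed rate, in dollars
        annual_cost = power_mw * 1_000 * hours_per_year * rate_per_kwh
        print(f"${annual_cost / 1e6:.1f} million per year")   # about $24 million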

    “These costs aren’t sustainable,” he admitted. “High-performance computers need to become more sophisticated, reliable, efficient, and use less power. To obtain improved performance from further parallel scaling, we need to change programming to be parallel everywhere, with improved memory integration and inter-processor bandwidth.”

    Furthermore, the large power dissipation can cause variable heating among different processor cores. He presented an example showing core temperatures that varied by 20 °C, causing a substantial variance in clock speed and uncontrolled latency. He said software needs to be developed that lets users dynamically control access to processors and memory. Also needed are new latency-tolerant algorithms and methods, and new tools to measure and predict distributions in latency and in processor and memory execution times.
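
    The kind of measurement tool Beckman describes can be sketched in a few lines. The snippet below, an illustration rather than anything shown at the summit, runs identical tasks across worker processes and reports the spread of their wall-clock latencies; on a machine with uneven core speeds, the max-to-min ratio exposes exactly the variance he warns about.

        # Measure the latency distribution of identical tasks across cores.
        import statistics
        import time
        from concurrent.futures import ProcessPoolExecutor, as_completed

        def task(n: int) -> float:
            """Do a fixed amount of work; return the wall-clock time it took."""
            start = time.perf_counter()
            total = 0
            for i in range(n):
                total += i * i
            return time.perf_counter() - start

        if __name__ == "__main__":
            with ProcessPoolExecutor() as pool:
                futures = [pool.submit(task, 2_000_000) for _ in range(32)]
                latencies = [f.result() for f in as_completed(futures)]
            print(f"mean {statistics.mean(latencies):.4f} s, "
                  f"stdev {statistics.stdev(latencies):.4f} s, "
                  f"max/min {max(latencies) / min(latencies):.2f}x")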

    The working group predicted that even personal devices would become parallel. Innovations are needed in areas such as integration of processors and memory, heterogeneous parallelism, and virtual distributed parallelism. Languages and compilers should be better able to specify and take advantage of parallelism and constraints in problems, and to optimize code on a variety of platforms. There also needs to be a better understanding of how to program parallel neuromorphic systems more quickly and effectively.

    The group also said that the foundations of computer science may need to be rethought, including teaching algorithms from the perspective of highly parallel resources, and educating students about parallel programming earlier in the curriculum.

  • RANDOMNESS AND APPROXIMATION

    Dick Lipton, a professor of computer science at Georgia Tech, spoke about the importance of exploiting randomness and approximation to achieve more-efficient algorithms. Many problems that modern computation is trying to solve don’t require exact answers, just quick results that are good enough. One approach, sampling, selects and analyzes a small portion of a large data set to obtain almost as much useful information as analyzing the entire data set would. It is used in economics, weather simulation, and genetics. The process for selecting representative data is generally random, but proving randomness can be problematic.
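
    As a toy illustration of sampling (not one of Lipton’s examples), estimating a statistic from a small random subset typically lands within a fraction of a percent of the exact answer while touching only a sliver of the data; the data set and sample size below are arbitrary.

        # Estimate the mean of a large data set from a 1 percent random sample.
        import random

        random.seed(42)
        data = [random.gauss(100, 15) for _ in range(1_000_000)]   # stand-in for a big data set

        exact = sum(data) / len(data)
        sample = random.sample(data, 10_000)                       # examine only 1% of it
        estimate = sum(sample) / len(sample)

        print(f"exact mean     {exact:.3f}")
        print(f"sample mean    {estimate:.3f}")
        print(f"relative error {abs(estimate - exact) / exact:.4%}")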

    Approximation is another approach that decreases computation time and requires a specification of how much precision is necessary for a given application. Approximate computing relaxes the abstraction of near-perfect precision in general-purpose computing, communication, and storage, providing many opportunities for designing more-efficient and higher-performance systems. Approximation is used in applications such as video compression, but there has been no systematic method for incorporating approximation into software. Both software and hardware need to be addressed.
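
    One simple software-level version of this idea, often called loop perforation (a general technique, not one named in the talk), skips a fraction of loop iterations and scales the result, trading a small and measurable error for proportionally less work. The signal and skip factor below are arbitrary.

        # Loop perforation: visit every k-th element and scale up the result,
        # accepting a bounded loss of precision for roughly k-times less work.
        import math

        def exact_energy(samples):
            return sum(x * x for x in samples)

        def approx_energy(samples, k=4):
            """Sum only every k-th sample; multiply by k to compensate."""
            return k * sum(samples[i] * samples[i] for i in range(0, len(samples), k))

        signal = [math.sin(0.001 * i) for i in range(1_000_000)]
        exact = exact_energy(signal)
        approx = approx_energy(signal, k=4)
        print(f"relative error {abs(approx - exact) / exact:.4%} with 4x fewer multiplies")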

    “To progress further, we need to create a computer culture that uses randomness and approximation more widely, and permits them to be easily implemented,” Lipton said. “That can provide a big enhancement in algorithm efficiency, without a significant increase in software development effort.”

    The working group concluded that today’s conventional technologies are expected to fall short of historical trends and projected demand. Therefore, radical departures from conventional approaches are necessary. These include error-resilient and inherently approximate technologies, such as randomized algorithms, machine learning, and pattern recognition. But there are also challenges, including abstractions for algorithm design and programming, converting component-based approximation into end-to-end solutions, and understanding how to measure the quality of each application. The group recommended exploring software and algorithmic approximation and forming collaborations between academia and the semiconductor industry to begin prototyping approximate hardware and modeling variability and error. It also suggested embracing approximation in general-purpose computing.

SPECIAL ANNOUNCEMENT

At the end of the conference, Randal Bryant of the White House Office of Science and Technology Policy’s Technology and Innovation Division gave a brief presentation about a major new government-sponsored research program focused on the future of computing. (The formal announcement about the program will be made in the next several months.) Bryant advises OSTP’s deputy director of policy about big data. The program will assist researchers in developing novel approaches that get around anticipated roadblocks in computer performance, such as the end of Moore’s Law scaling, and is aimed at helping the U.S. computer industry remain a leader in the coming decades. He said future computing may use exascale parallelism and big data analytics in applications for scientific discovery, national security, and business.

Do you agree that Moore’s Law is about to come to an end? What suggestions do you have on how to improve computing performance?
