John Toole on Technology Leadership
by Judy Conlon
Issue 3, September 1997
Welcome to the on-line version of NASA's Insights Newsletter.
Insights is published by the High Performance Computing and Communications (HPCC) Program Office. Address changes to Judy Conlon or write to: NASA HPCC Insights, Mail Stop 269-3, Moffett Field, California 94035-1000, USA
Tag along with John Toole, White House appointed director of the National Coordination Office for Computing, Information, and Communications (NCO/CIC). You'll find him in lots of meetings, visiting people, and rarely behind a desk.
Like the string that ties the paper cups of a child's phone, John Toole is always connecting people with people. Tasked with leading 12 government agencies, including NASA, to find ways to maintain America's position at the forefront of computing and communications research and development, he has become a major communication link among agencies, academia and industry. Toole has to ensure that the CIC programs, the successor to the HPCC Program initiated in 1991 under the Bush Administration, invest their $1.1 billion budget to forward information technology, improving such areas as biomedical research, education, emergency response, manufacturing, national security, public health, and science and engineering.
"The payoff for industry is big. A company cannot afford to gamble everything on new high-risk technology investments and possibly bring its entire business to its knees. So HPCC invests in the long-term, high-risk research"
-John Toole
Former Director of NCO
In the shadow of Government downsizing, these are difficult days filled with tough decisions. When asked how he makes such decisions on coordinating research and development efforts, he replies, "It's fun when you are working with the best and brightest people both inside and outside of Government." While it may be fun, at any given moment his worry list ranges from making sure NASA scientists have the right tools to conjuring up new models for the next worldwide computing infrastructure. Poised for the opportunity, Toole brings 22 years of service in the Air Force (AF) and four years of service as a civilian. Prior to his selection as Director of the NCO/CIC, he served as Acting Director and Deputy Director of the Computing Systems Technology Office, as well as Advanced Research Projects Agency (ARPA) program manager in the Microsystems Design and Prototyping Program. He also spent several AF years evaluating large-scale hardware and software computing systems.
We interviewed Toole after he spoke at the Computational Aerosciences (CAS) Workshop at NASA Ames Research Center. During his speech to Government, academia, and industry participants, he posed the question: "Ask yourself: where are we going to get our strategic advantage?" "People," offered one audience participant. "Exactly right," Toole replied. "The keys to investing in our future are the innovations made by people and the way their ideas are applied."
In this next century, what are the most critical areas of development in computing and computer communication technologies to help the United States maintain its leadership position?
Information technology is playing a key role in all of the areas that we focus on to stay competitive. We are moving aggressively to define our R&D agenda in computing, information, and communications, what our networking strategy will be, what our long-term high-confidence system investments are, and what is needed from research to stimulate education and training. We've got to think of our missions, both the Government's and the people's. With good strategies in all of these areas, we can improve the effectiveness of high-performance computing, and this has a ripple effect on the U.S. marketplace.
At NASA, many of these R&D investments have contributed to its overall mission. NASA's HPCC program, a critical part of the overall CIC R&D investment, has been recognized as one of the world leaders in networking. NASA has set up a national-scale information infrastructure with an integrated Internet and Intranet, and has delivered educational multimedia training over the Internet to schools across the country. We are also developing remote, space-borne computing critical to NASA space missions. On top of that, we have been solving some spectacular computational problems called Grand Challenges, such as simulating an entire aircraft engine on the computer in an overnight run on a high-end workstation and simulating complex physical and chemical interactions between the atmosphere and oceans. (See Striving for peak design and UCLA climate models in Issue 2 of INSIGHTS.)
Some people look at these Grand Challenges and say, "Oh, these are esoteric kinds of problems." Unfortunately, those individuals don't realize the science and engineering potential of innovation in solving some of the fundamental computational problems. I think some of the challenges we have crafted are representative of very difficult, high-performance activities that have given us major insight into solutions.
How has NASA's HPCC program helped the Federal program advance high-performance computing? Is there anything that you can cite as an example?
Quite a few things, actually. NASA has been able to focus on real-world problems. Aeroelastic calculations for a full aircraft and computational, three-dimensional aerodynamic simulations of an entire engine are two examples. Thanks to the researchers of NASA's HPCC Earth and Space Science project, we've also expanded our scientific knowledge in such areas as cosmology, the solar atmosphere, the solar wind and climatology. This work requires analysis of large-scale simulations, which is made possible through the HPCC Program.
In order to give these scientists a new dimension for discovery, orders of magnitude more computational capability is required. We would like to achieve practical and useful systems capable of sustaining Petaflops-level performance. (A Petaflops is a measure of computer performance equal to a million billion floating point operations per second and is a thousand times more powerful than the largest massively parallel computer available today.) Under the direction of scientist David Bailey, NASA has been extremely valuable in helping us pull together a team to work toward this goal. Such a capability may enable scientists to explore physical phenomena and large man-made systems such as aircraft, or to do real-time processing of scientific data.
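To make the unit concrete, here is a back-of-the-envelope sketch (our illustration, not Toole's; the operation count and the workstation rate are assumed figures) of what sustained Petaflops performance would mean for time-to-solution:

# Back-of-the-envelope illustration (assumed workload, not a NASA figure):
# what a sustained Petaflops rate means for time-to-solution.

PETAFLOPS = 1e15   # floating point operations per second
GIGAFLOPS = 1e9    # roughly a high-end workstation of the era, for contrast

work = 1e18        # assumed operation count for a large simulation

print(f"At 1 Pflop/s: about {work / PETAFLOPS / 60:.0f} minutes")
print(f"At 1 Gflop/s: about {work / GIGAFLOPS / 86400 / 365:.0f} years")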
Reaching this level of computing requires running calculations on parallel processors that scale beyond what we can do today. This class of computing demands a new way of thinking about how to develop and configure applications to take advantage of a large number of processors. NASA and other Federal agencies addressing Grand Challenge problems point to a number of significant engineering hurdles that must be overcome.
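One way to see why applications must be rethought for large processor counts is Amdahl's law. The sketch below is our illustration, not part of the interview; the 1% serial fraction is an assumed value chosen only to show the shape of the curve.

# Illustrative sketch of Amdahl's law: even a small serial fraction caps the
# speedup available from adding processors. The 1% serial fraction below is
# an assumed value, chosen only to show the shape of the curve.

def amdahl_speedup(processors: int, serial_fraction: float) -> float:
    """Ideal speedup when serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

for p in (16, 256, 4096, 65536):
    print(f"{p:6d} processors -> speedup {amdahl_speedup(p, 0.01):6.1f}")
# With just 1% serial work, speedup saturates near 100x no matter how many
# processors are added -- hence the need to rethink how applications are built.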
Armed with activities such as the Computational Aerosciences Workshop, we continue to build partnerships to overcome these hurdles. Bill Feiereisen (NASA HPCC program manager) and his staff are doing a fine job in determining what efforts might have the biggest impact. This focus is important for HPCC managers who need to serve their agency and the overall needs of the Federal R&D Program. Part of my job is to do everything I can to help all of these agencies, including NASA, achieve their goals.
What applications have led to commercial markets?
The most obvious example of software that created a commercial market is the Mosaic World Wide Web browser. This National Science Foundation (NSF) HPCC project led to Netscape and many other successors. Just as Mosaic was becoming a way to communicate among a small group of people, it took off with the sudden rise of the Web. Mosaic's explosive growth could not have been predicted. Most important, it was the result of fifty years of partnering between Government and industry. (See NASA's contributions to the growth of the Internet.)
NASA also partners with industry to develop its software infrastructure. The difference is in the type of software -- which consists of very high-end simulation codes -- and the scientific and engineering needs it serves, as opposed to commercial needs.
Unlike Netscape's products, this software is not expected to have as wide a commercial acceptance. However, NASA has developed this software with an eye towards satisfying specific needs of the aerospace industry and the Earth and space sciences community. And the payoff for industry is big. A company cannot afford to gamble everything on new high-risk technology investments and possibly bring its entire business to its knees.
John Toole, former director of the NCO, spoke at Supercomputing '96.
So HPCC invests in the long-term, high-risk research. Then, industry launches its own research and product development based on the knowledge gained from our successes and failures. That was the idea behind our investment in computational fluid dynamics, which involves using supercomputers to simulate aerodynamics. These simulations are now an accepted design tool across the entire aerospace community. (See Striving for peak design in Issue 2 of INSIGHTS.)
In this kind of work, parallel processing is an accomplished tool for NASA. How did you feel when you realized this was a viable effort?
There was a certain amount of satisfaction knowing that the evolution was inevitable, yet doing it earlier brought bigger rewards. Parallel processing is a success story that dates back many years to the tenacious effort by many people who wanted a multi-processor capability. However, it's a long way from being the international hit that people envisioned. Experts agree that it can deliver very high performance levels. The drawback is that programming parallel computers is still very difficult.
We experimented with several different architectures that have quite different behaviors, implying there may simply not be a "best" programming method. Multiple processors were not communicating well with each other on a common problem. While increasing the number of processors should logically increase performance, we were finding the opposite in some cases. Poor software, not optimized for multiple processors, could result in serious performance losses.
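A toy model (our illustration, with assumed constants, not an actual NASA code) captures how communication overhead can make additional processors counterproductive:

# Toy model with assumed constants (not an actual NASA code): a fixed problem
# split across p processors, where each processor also pays a communication
# cost that grows with the processor count.

def runtime(p: int, compute: float = 1000.0, comm_per_peer: float = 0.02) -> float:
    """Compute time shrinks as 1/p, but communication grows with p."""
    return compute / p + comm_per_peer * p

for p in (8, 32, 128, 512, 2048):
    print(f"{p:5d} processors -> {runtime(p):7.1f} time units")
# Past a certain point the communication term dominates and adding processors
# makes the run slower -- the behavior described above.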
Left to right: Paul Rubbert, Boeing Commercial Airplane Group; John Toole; and Bill Feiereisen, HPCC Program Manager, meet at the CAS Workshop.
With the help of NASA benchmarks -- standard codes running on a variety of machines -- NASA is within a hair's breadth of eliminating performance problems on some of the major codes. The proper training, teamwork and industry commitment will make the difference. So the program has demonstrated that parallel processing is a possibility for the future, but we still have quite a distance to go.
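The idea behind such benchmarks is simple: run the same well-defined code on every machine of interest and report a rate that can be compared across systems. The sketch below is an illustration of that idea only; it is not an actual NASA benchmark code, and the kernel and problem sizes are arbitrary choices.

# Minimal sketch of the benchmarking idea: time the same well-defined kernel
# on whatever machine runs it, and report a rate that can be compared across
# systems. Illustration only -- not an actual NASA benchmark code.

import time

def kernel(n: int, iterations: int = 10) -> int:
    """Naive dense matrix-vector products; returns the operation count."""
    a = [[(i + j) % 7 + 1.0 for j in range(n)] for i in range(n)]
    x = [1.0] * n
    flops = 0
    for _ in range(iterations):
        x = [sum(row[j] * x[j] for j in range(n)) / n for row in a]
        flops += 2 * n * n
    return flops

start = time.perf_counter()
ops = kernel(300)
elapsed = time.perf_counter() - start
print(f"{ops / elapsed / 1e6:.1f} Mflop/s on this machine")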
What sorts of hardware are you anticipating using to meet some of these computational challenges?
We want to turn our attention to an interactive heterogeneous environment. The parallel processor, the industry's most praised -- and maligned -- architecture, supports various computational problems. Take the big iron machines. Underneath the covers, you'll find an architecture that has parallel processors. In addition, the architecture that has captured people's attention is the symmetrical multiprocessor system. Much of the parallel programming world is concentrating on some variation of this architecture -- a choice that lets the commercial marketplace forgo high prices and steep learning curves in programming software. On the other hand, NASA, whose high-end applications could break machine limits, is banking on a much larger number of processors than the 100 processors in a symmetrical multiprocessor system to begin meeting its needs.
We'll have a generation of machines, including the large parallel IBM SP2, all in a heterogeneous environment. In the big picture, you need a collection of research efforts that is able to accomplish its objectives and contribute to the overall goal of the program. Our experiments, for example, on homogeneous machines allowed us to study massive parallelism, while work on distributed systems gave us important insight into a class of heterogeneous challenges. So you study scaling issues within this environment and then you transplant the functionality to experiments in a heterogeneous world. You definitely have to experiment with the technology early enough so that when you have to make large-scale, competitive decisions, you have a good understanding of what you're buying.
The point is to focus on the needs of users rather than opinions on the future of machines. Whatever the environment, the hardware communicates on a spider web of networks where users draw data. Depending on the codes, some users may find better performance on Crays linked to the distributed network. Others may improve performance by distributing their work across a group of heterogeneous machines. Workstation clusters may make sense for users who want a lower-cost machine and who must operate with data across a wide variety of systems and configurations.
With problems like these, high-end users are really concentrating on scalability, portability and compatibility. Bleeding-edge technology requires working through these issues. One ideal solution is to "buy computing by the pound," if your solution is truly scalable. When you need more horsepower, you simply buy the additional hardware and plug and play. However, it's not simply hardware -- software from top to bottom must also scale.
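The "buy computing by the pound" idea corresponds to what is often called weak scaling: grow the problem along with the machine and expect the time per step to stay roughly flat. Below is a minimal sketch with assumed numbers rather than measured data.

# Illustrative weak-scaling check with assumed numbers (not NASA data):
# "buying computing by the pound" works when the work per processor stays
# constant as both the problem size and the processor count grow together.

def time_per_step(p: int, work_per_proc: float = 100.0,
                  overhead_per_proc: float = 0.005) -> float:
    """Time stays nearly flat only if overhead grows slowly with p."""
    return work_per_proc + overhead_per_proc * p

for p in (64, 256, 1024, 4096):
    print(f"{p:5d} processors -> {time_per_step(p):7.1f} time units per step")
# Nearly constant time per step means capacity really does scale with the
# hardware you plug in; a rising curve means the software has not kept up.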
How has the HPCC Program exploited the capabilities of the Internet?
We are delivering a new class of high-bandwidth capability to scientists and engineers who work on the Grand Challenges that advance mankind's understanding of space science, cosmology, astrophysics and aeronautics. The Internet was created in part with the long-term technology investments made by the Federal research programs. Today, it is a centerpiece of the present HPCC communication infrastructure and is being used to stimulate long-term research. It includes the network elements, the backbone and regional connections supported under the NASA Research and Education Network (NREN) project. NREN has supported development of technologies that have flowed into the NASA Science Internet, including Asynchronous Transfer Mode (ATM). NASA's HPCC program had one of the first demonstration networks that proved ATM's viability. NASA and other agency supercomputer center programs have not just analyzed how to use networking to link their own groups together, but have focused on how the distribution of networks can actually serve a wide range of customers across the country.
As a result, this particular program and its predecessors have been important to the evolution of the Internet, giving the United States a strong lead in these technologies worldwide. Developed over a 25-year period, the Internet provides an excellent example of a long-term partnership among government, academia and industry. (See NASA's contributions to the growth of the Internet.)
We are constantly looking for new ways to use it more effectively in the future and to make this large-scale network truly global. We're applying this technology to the core of NASA's business -- aeronautics and space. Using one's own technology is a key principle that you want to have in any program. It forces you to develop the right kinds of technology that are important to your mission, to focus on new applications and to achieve some seemingly insurmountable goals.
What are some of the challenges you are facing as you look to the future?
One of our biggest concerns is the economic competitiveness of the industries that computational science supports. We don't want to lose our competitive advantage to other countries, which is why the cooperative arrangements between the HPCC Program and industry are so important to getting everyone involved. Proper Government leadership in these research programs will provide immense leverage to the country.
What gives you satisfaction in this leadership role?
What personally gives me a lot of satisfaction is actually helping other people and agencies realize some of these dreams. Increasingly, what motivates me is witnessing people become tremendous statesmen for their field as we bring competitive advantage to the industry. It is these people who ultimately will foster the innovative ideas behind our Nation's technology leadership!