NGI White Paper                                        Bill Fink & Pat Gary
                                                   NASA HPCC/ESS Networking
                                                                 March 1997


                 NASA HPCC/ESS NGI Network Research Agenda


Background
----------

The NASA High Performance Computing and Communications / Earth and Space
Sciences (HPCC/ESS) Networking Group (NH/ENG) is conducting or planning
research in leading edge, high performance network transmission
technologies, prototyping next generation network protocols to support
advanced routing capabilities and the new Integrated Services Model of the
Internet, and testbedding advanced network applications.  These research
activities support the NASA Center for Computational Sciences (NCCS) [1],
a major supercomputer facility serving NASA's ESS researchers, and the
NASA HPCC/ESS [2] Grand Challenge teams, which are investigating
fundamental problems in the physical and computational sciences.

The HPCC/ESS Computing Testbeds are currently linked via an OC-12 ATM and
HiPPI based Networking Testbed [3] at NASA Goddard to the OC-12 ATM
portion of ATDNet [4], and to an OC-12 ACTS satellite link [5,6], which in
turn provides OC-12 connectivity to JPL, LeRC, and the MAGIC [7] testbed.
This set of concatenated OC-12 networks is known as AAMNet (ATDNet-ACTS-
MAGIC Network) [8,9,10].  The HPCC/ESS Network Testbed also has direct
OC-3 connectivity to the NASA NREN [11], to the rest of ATDNet, and to
Hawaii via ACTS.  ATDNet and ACTS in turn provide OC-3 connectivity to AAI
and MAGIC.  Active collaboration is ongoing with our research partners in
the NREN, ATDNet, ACTS, and MAGIC communities across this high performance
network infrastructure, which is enabling the development of various high
performance user applications such as scientific visualization and
collaboration, telemedicine, high quality video teleconferencing, and the
Distr

NGI Goal 1b: Ultra High Performance Networking
----------------------------------------------

Several major challenges must be met to enable the attainment and
effective use of ultra high performance networks.  This section focuses on
the protocol issues that must be solved to deliver the maximum possible
bandwidth to end user applications.

As bandwidth reaches and exceeds 1 Gbps, the bandwidth*delay product,
which governs the amount of buffering required in the network and directly
affects TCP performance, becomes extremely large.  For example, the OC-12
ACTS satellite path currently has the largest bandwidth*delay product, at
35 MB, while a future cross country OC-48 ATM network (assuming an RTT of
60 ms) will have a product of about 15 MB (62 MB at OC-192).  Also, as the
bandwidth*delay product increases, even small loss rates can dramatically
reduce the effective TCP performance.
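To make these magnitudes concrete, the following minimal back-of-the-
envelope sketch (in Python) reproduces the figures quoted above.  The
payload rates (roughly 530 Mbps for OC-12, 2.1 Gbps for OC-48, and 8.4
Gbps for OC-192 after SONET/ATM overhead) and the ~530 ms geostationary
round-trip time for the ACTS path are assumptions chosen for illustration;
the 60 ms cross country RTT is the one assumed in the text.

    # Rough bandwidth*delay product estimates for the paths discussed
    # above.  Rates are nominal payload rates after SONET/ATM overhead
    # (assumed, not measured); RTTs are likewise assumed for illustration.
    import math

    paths = {
        # name:                  (payload rate in bits/s, RTT in seconds)
        "OC-12 via ACTS":        (530e6, 0.53),   # geostationary RTT
        "OC-48 cross country":   (2.1e9, 0.060),
        "OC-192 cross country":  (8.4e9, 0.060),
    }

    for name, (rate_bps, rtt_s) in paths.items():
        bdp_bytes = rate_bps * rtt_s / 8         # bits -> bytes in flight
        # Classic TCP advertises at most a 64 KB window; RFC 1323 window
        # scaling multiplies it by 2**scale, so estimate the scale needed.
        scale = max(0, math.ceil(math.log2(bdp_bytes / 65535)))
        print(f"{name:22s}  BDP ~ {bdp_bytes/1e6:5.1f} MB   "
              f"window scale >= {scale}")

The last column shows why the RFC 1323 window scaling option discussed in
the first bullet below is essential: without it, TCP can keep at most
64 KB in flight regardless of the available bandwidth.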
* One focus of the NH/ENG research agenda is to better understand the
  operation of the standard TCP algorithms, such as slow start and
  congestion avoidance, delayed ACKs, and fast retransmit and recovery,
  across very large bandwidth*delay networks (including satellite nets);
  to learn how these algorithms may be tuned for better performance; to
  study the effectiveness of the TCP-LFN (RFC 1323 TCP window scaling and
  time stamp options) and TCP-SACK (RFC 2018 TCP selective acknowledgment
  option) extensions in optimizing TCP throughput; and to determine
  whether any additional TCP extensions are required to effectively
  utilize very large bandwidth*delay networks, or whether there are any
  inherent scaling limitations in the TCP protocol as bandwidth reaches
  1 Gbps and beyond.  Interactions with the underlying ATM flow and
  congestion control mechanisms, such as explicit rate control in ABR,
  will be examined.  Other reliable transport protocols, such as NETBLT
  and XTP, will be evaluated to determine their possible applicability,
  and their performance will be measured and contrasted with that of
  TCP-LFN/SACK.  Investigate what changes, if any, will be required to
  application protocols and actual user applications.

* As part of this effort, existing tools and methods will be evaluated,
  or new tools and methods will be developed, to aid in performance
  analysis and measurement on these ultra high performance networks.

NGI Goal 1a: Scalability of High Performance Networking
--------------------------------------------------------

As the bandwidth requirements are relaxed somewhat, allowing more
participants because of the lower cost, an additional complexity is
introduced, namely the scaling issues involved in extending high
performance networking to a much larger user community.  One primary
research goal of the NH/ENG is to create an effective synergy between the
technological capabilities of the IP (v4/v6) Internet and a global ATM
infrastructure, by recognizing the commonality of functionality and
requirements between the IP and ATM universes, avoiding unnecessary
duplication of effort, and allowing each technology to take maximal
advantage of the strengths of the other.

* The primary scaling issue involves addressing and routing.  Both IPv6
  and ATM provide globally unique, hierarchical network addresses.  One
  approach that would greatly simplify the linkage between IPv6 and ATM
  would be to integrate IPv6 and ATM addressing and routing by simply
  embedding an IPv6 address in the ATM NSAP address [14] and exporting
  ATM level IP routes up to the IP routing infrastructure, an approach
  known as the Integrated Routing and Addressing (IRA) Model [15].  It
  would have many benefits, such as providing direct shortcut routing at
  the ATM layer across a hierarchical PNNI ATM network, eliminating the
  need for the complexity of NHRP, providing a distributed ATMARP
  service, reducing latency for connection setup, simplifying network
  management, and providing a name service for ATM NSAP addresses via the
  DNS (one possible address embedding is sketched after this list).
  Explore similar possible mappings for integrating IPv4 and ATM that
  would also provide a sufficient level of route aggregation.

* Research other methods of more closely integrating the IP and ATM
  layers, such as the IP switching model, which retains the speed of the
  ATM switching hardware while discarding the PNNI routing and UNI
  signaling protocols, thus effectively transforming ATM switches into IP
  routers that support creation of direct shortcut ATM paths across an
  underlying ATM infrastructure.

* Develop methods and tools for simplifying the network management of IP
  over ATM networks, to assist with such essential functions as
  configuring the network; quickly detecting, isolating, and fixing
  problems; and collecting traffic data.
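To make the IRA embedding idea concrete, the following minimal Python
sketch shows one purely hypothetical way a 16-byte IPv6 address could be
carried inside a 20-byte ATM NSAP-format End System Address.  The actual
format used by the IRA Model is defined in [14,15]; the AFI value,
reserved field, and selector below are illustrative assumptions only.

    # Hypothetical illustration only: pack a 16-byte IPv6 address into a
    # 20-byte ATM NSAP-format End System Address.  The real IRA encoding
    # is specified in [14,15]; the AFI and selector values here are
    # placeholders chosen for the example.
    import ipaddress

    def ipv6_to_nsap(ipv6_str, afi=0x35, reserved=b'\x00\x00', sel=0x00):
        """Return a 20-byte NSAP: AFI (1) + reserved (2) + IPv6 (16) + SEL (1)."""
        v6 = ipaddress.IPv6Address(ipv6_str).packed      # 16 bytes
        nsap = bytes([afi]) + reserved + v6 + bytes([sel])
        assert len(nsap) == 20
        return nsap

    def nsap_to_ipv6(nsap):
        """Recover the embedded IPv6 address from the layout above."""
        return ipaddress.IPv6Address(nsap[3:19])

    nsap = ipv6_to_nsap("3ffe:2100:1:4::9")   # example 6bone-style address
    print(nsap.hex())
    print(nsap_to_ipv6(nsap))

With a layout like this, the IPv6 address can be recovered directly from
the ATM address, which is the property the IRA Model exploits to provide
shortcut routing and a DNS-based name service.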
NGI Goal 2: Implementing the New Integrated Services Model
-----------------------------------------------------------

In addition to providing much higher performance, scaled up to a wide user
community, the Next Generation Internet also needs to support the new
Integrated Services Model of the Internet, which has been developed by the
IETF specifically to support the requirements of the new class of
real-time applications, including providing QOS guarantees and full
support of multicast.

* Recommend that the NGI participants form a High-performance ATM-based
  MBone (HAMBone), for the purpose of testing real-time protocols and
  applications that cannot be tested on the existing MBone due to
  bandwidth limitations.  This should preferably use native IP
  multicasting protocols such as PIM rather than tunnels.  It could
  initially be IPv4 based, but could later be expanded to support IPv6.
  High performance LANs at NGI sites, such as Fast Ethernet switches,
  would be connected to the HAMBone so they could participate in high
  performance, real-time multicast sessions.  That would allow experience
  to be gained with the application of the new Integrated Services Model
  in both LAN and WAN environments, for evaluating how effectively and
  economically a common infrastructure could support a mix of both the
  traditional, elastic applications and the new real-time applications,
  and for contrasting that with the model of providing separate
  infrastructures for the different classes of service.

* An integral part of testing the new Integrated Services Model is
  experimenting with the Resource ReSerVation Protocol (RSVP), which is
  the IP layer mechanism for an application to define its QOS
  requirements to the network.  One area of research relating to RSVP is
  the mapping of IP layer RSVP flowspecs to ATM layer UNI signaling QOS
  parameters, including evaluating how well the receiver oriented IP RSVP
  mechanism can match the sender oriented ATM QOS mechanism, and how well
  the RSVP service classes (best effort, guaranteed, and predictive) can
  be mapped to the ATM traffic classes (CBR, VBR, ABR, and UBR); one
  possible mapping is sketched after this list.  Another area of research
  is the interaction between RSVP and QOS routing, such as QOSPF.
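As a purely illustrative starting point (not a mapping proposed by this
paper), the following minimal Python sketch pairs the service classes
named above with ATM traffic classes and derives nominal ATM traffic
parameters from a token-bucket flowspec.  The class-to-class choices, the
parameter names, and the cell-conversion arithmetic are assumptions made
for the example.

    # Illustrative sketch: map Integrated Services classes and a token-
    # bucket flowspec onto ATM traffic classes and parameters.  The
    # class-to-class choices and cell-conversion assumptions below are
    # examples only.
    CELL_PAYLOAD = 48        # bytes of payload per 53-byte ATM cell

    # One possible service-class mapping (assumed for illustration).
    SERVICE_TO_ATM = {
        "guaranteed":  "CBR",   # hard delay/bandwidth bounds
        "predictive":  "VBR",   # statistical delay bounds
        "best-effort": "UBR",   # no reservation (ABR is another candidate)
    }

    def flowspec_to_atm(service, token_rate_Bps, bucket_depth_B,
                        peak_rate_Bps):
        """Convert a token-bucket flowspec (bytes/s, bytes) into rough ATM
        parameters expressed in cells/s and cells."""
        atm_class = SERVICE_TO_ATM[service]
        scr = token_rate_Bps / CELL_PAYLOAD           # sustainable cell rate
        pcr = peak_rate_Bps / CELL_PAYLOAD            # peak cell rate
        mbs = max(1, bucket_depth_B // CELL_PAYLOAD)  # maximum burst size
        return atm_class, round(pcr), round(scr), mbs

    # Example: a 10 Mbps guaranteed-service flow with a 64 KB bucket.
    print(flowspec_to_atm("guaranteed", 1.25e6, 65536, 1.5625e6))

A real mapping would also have to reconcile RSVP's receiver oriented
reservations with ATM's sender oriented signaling, which is exactly the
research question raised above.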
References
----------

[1]  http://sdcd.gsfc.nasa.gov/NCCS/
[2]  http://sdcd.gsfc.nasa.gov/ESS/
[3]  http://everest.gsfc.nasa.gov/SCTB/nasanet.gif
[4]  http://www.atd.net/ATDNET/
[5]  http://kronos.lerc.nasa.gov/acts/acts.htm
[6]  http://www.cgrg.ohio-state.edu/other/actsgsn/gsnhome.html
[7]  http://www.ukans.magic.net/
[8]  http://everest.gsfc.nasa.gov/SCTB/AAMNET_plan.htm
[9]  http://www.cgrg.ohio-state.edu/other/actsgsn/aamexp.htm
[10] http://everest.gsfc.nasa.gov/SCTB/aamnet.gif
[11] http://www.nren.nasa.gov/
[12] http://www.nasa.atd.net/hpccess-net-ngi-wp-app.htm
[13] http://www.bell-labs.com/project/MONET/mon_pro.htm
[14] http://www.nasa.atd.net/atm_ipv6ad.htm
[15] http://www.nasa.atd.net/draft-fink-ipatm-ira-00.htm

This NGI White Paper is available at:
http://www.nasa.atd.net/hpccess-net-ngi-wp.htm

A longer form of this NGI White Paper is available at:
http://www.nasa.atd.net/hpccess-net-ngi-wp-long.htm

AUTHORS ADDRESSES
-----------------

Bill Fink
NASA Goddard Space Flight Center
Code 933
Greenbelt, MD 20771
Phone: +1 301 286 9423
Fax: +1 301 286 1775
E-Mail: bill.fink@gsfc.nasa.gov

Pat Gary
NASA Goddard Space Flight Center
Code 933
Greenbelt, MD 20771
Phone: +1 301 286 9539
Fax: +1 301 286 1634
E-Mail: pat.gary@gsfc.nasa.gov