@DELETE { } @REFRESH { } @UPDATE { @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/graphics/mechoso.metric.html Update-Time{9}: 827948650 url-references{106}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/esm.html mailto:lpicha@cesdis.gsfc.nasa.gov title{12}: Metric Chart keywords{137}: cesdis challenge computational curator gov grand gsfc larry lpicha metric nasa picha return scientific technical the understanding write images{29}: mechoso.metric.gif return.gif headings{169}: Earth System Model: Atmosphere/Ocean Dynamics and Tracers Chemistry PI: Roberto Mechoso University of California at Los Angeles (UCLA) Return to the Technical Write-up body{961}: Scientific Grand Challenge: To develop a global coupled model of the atmosphere and the oceans, including chemical tracers and biological processes, to be used to model the seasonal cycle and interannual variability. Scientific Understanding: To test the predicted seasonal cycle and interannual variability of a coupled atmosphere/ocean model with 100 chemical and macrophysical tracers and 4x the present spatial resolution. Computational Challenge: To allow rapid tests of the impact of model parameterization changes and runs representing multi-year interannual variability and the carbon cycle. Also, to allow visualization of time-accurate model output in real time. Metric: An ensemble of the global coupled atmosphere and ocean model simulations of one or more decades at double the linear resolution of the atmosphere and four times the resolution for the ocean. curator: Larry Picha (lpicha@cesdis.gsfc.nasa.gov) MD5{32}: 7cc7aca178e1bde21211727da89ee112 File-Size{4}: 1559 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{12}: Metric Chart }
@FILE { http://cesdis.gsfc.nasa.gov/pub/linux/diag/hp+.c Update-Time{9}: 827948614 Partial-Text{1843}: block_input_io block_output_io do_checksum main map_shared_mem test_io_mem test_shared_mem unistd.h stdio.h stdlib.h getopt.h fcntl.h sys/mman.h asm/io.h /* hp+.c: Diagnostic program for HP PC LAN+ (27247B and 27252A) ethercards. */ /* Copyright 1994 by Donald Becker. This version released under the GNU Public License, incorporated herein by reference. Contact the author for use under other terms. This is a setup and diagnostic program for the Hewlett Packard PC LAN+ ethercards, such as the HP27247B and HP27252A. The author may be reached as becker@cesdis.gsfc.nasa.gov. C/O USRA Center of Excellence in Space Data and Information Sciences Code 930.5 Bldg. 28, Nimbus Rd., Greenbelt MD 20771 */ /* { name has_arg *flag val } */ /* The base I/O *P*ort address. */ /* Give help */ /* Transceiver type number (built-in, AUI) */ /* Interrupt number */ /* Switch to NE2000 mode */ /* Verbose mode */ /* Display version number */ /* Switch to shared-memory mode. */ /* Write the EEPROM with the specified vals */ /* A few local definitions. These mostly match the device driver definitions. */ /* See enum PageName */ /* Offset to the 8390 registers. */ /* First page of TX buffer */ /* Last page +1 of RX ring */ /* The values for HPP_OPTION. */ /* Active low, really UNreset. */ /* ... and their names. */ /* This is it folks... */ /* Transceiver type. */ /* Turn on access to the I/O ports. */ /* Check for the HP+ signature, 50 48 0x 53. */ /* Point at the Software configuration registers. */ /* Point at the Hardware configuration registers. */ /* Point at the "performance" registers. */ /* Ignore the EEPROM configuration, just for testing. */ /* Retrieve and checksum the station address. */ /* * Local variables: * compile-command: "gcc -Wall -O6 -o hp+ hp+.c" * tab-width: 4 * c-indent-level: 4 * End: */ MD5{32}: 00eadf4817699ddcce87027f977a43ac File-Size{4}: 9830 Type{1}: C Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Keywords{943}: access active address and arg asm aui author base becker bldg block buffer built center cesdis check checksum code command compile configuration contact copyright data definitions device diagnostic display donald driver eeprom end enum ethercards excellence fcntl few first flag folks for gcc getopt give gnu gov greenbelt gsfc hardware has help herein hewlett hpp ignore incorporated indent information input interrupt just lan last level license local low main map match may mem memory mman mode mostly name names nasa nimbus number offset option ort other output packard page pagename performance point ports program public reached really reference registers released retrieve ring sciences see setup shared signature software space specified station stdio stdlib such switch sys tab terms test testing the their these this transceiver turn type under unistd unreset use usra val vals values variables verbose version wall width with write Description{14}: block_input_io }
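The hp+.c summary above mentions turning on I/O port access and checking for the HP+ signature (50 48 0x 53) before touching the card. A minimal user-space sketch of that first step, assuming Linux x86, root privileges for ioperm(), and a default I/O base of 0x300; the register offsets here are illustrative assumptions, not taken from the actual source:

/* hpplus-probe.c: sketch of the signature check hp+.c is described as
 * performing.  Assumption: the ID registers sit at offsets 0..3 of the
 * I/O base and read 0x50 0x48 <rev> 0x53 (third byte varies); consult
 * the driver for the real layout.
 */
#include <stdio.h>
#include <sys/io.h>

#define HPP_IO_BASE 0x300          /* assumed factory-default base */

int main(void)
{
    if (ioperm(HPP_IO_BASE, 4, 1)) {  /* turn on access to the I/O ports */
        perror("ioperm (run as root)");
        return 1;
    }
    /* Check for the HP+ signature, 50 48 0x 53. */
    if (inb(HPP_IO_BASE + 0) == 0x50 && inb(HPP_IO_BASE + 1) == 0x48 &&
        inb(HPP_IO_BASE + 3) == 0x53)
        printf("HP PC LAN+ signature found at %#x\n", HPP_IO_BASE);
    else
        printf("no HP PC LAN+ at %#x\n", HPP_IO_BASE);
    return 0;
}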
@FILE { http://cesdis.gsfc.nasa.gov/admin/seminar.series/1107.html Update-Time{9}: 827948594 title{16}: November 7, 1995 keywords{25}: hosted jacqueline moigne images{137}: http://cesdis.gsfc.nasa.gov/admin/cesdis.seminars/logo.GIF http://cesdis.gsfc.nasa.gov/admin/cesdis.seminars/nasalogo-tiny.gif fugate.gif headings{173}: Mathematical Tools for Remote Sensing Data Analysis Fourth Annual Seminar Series November 7, 1995 NASA Goddard Space Flight Center Building 28, Room E210 2:00 - 3:00 p.m. body{255}: CENTER OF EXCELLENCE IN SPACE DATA AND INFORMATION SCIENCES hosted by: Dr. Jacqueline Le Moigne Adaptive Optics Techniques for Compensation of Atmospheric Distortions Robert Fugate USAF Phillips Laboratory fugate@plk.af.mil MD5{32}: dd5b657f39c56b9a6307514b2268fe0d File-Size{4}: 5096 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{16}: November 7, 1995 }
@FILE { http://cesdis.gsfc.nasa.gov/linux/pcmcia/dbether.c Update-Time{9}: 827948612 Partial-Text{2731}: main unistd.h stdio.h sys/file.h linux/config.h linux/kernel.h linux/sched.h linux/errno.h asm/system.h asm/io.h /* Ethercard enabler for the Databook TCIC/2 PCMCIA interface chip. */ /* Written 1994 by Donald Becker. */ /* Notes: Only works with socket 0. */ /* Default base address of the Databook TCIC/2 chip. */ /* Offsets of TCIC/2 registers from the TCIC/2 base address. */ /* Codes put into the top three bits of the TCIC_MODE register to select which auxiliary register is visible at TCIC_AUX. */ /* Mark that TCIC_ADDR points to internal registers (rather than into the card address space). */ /* Bit definitions for selected fields (just those that we use). */ /* Card installed. */ /* Socket control register, TCIC_SCTRL. */ /* Autoincrement after access. */ /* Enable card access to selected socket */ /* Power control register */ /* Enable current limiting */ /* 5 Volt supply control for sock 0 */ /* 5 Volt supply control for sock 1 */ /* I/O map control register */ /* Enable this map */ /* Make the buffers quieter */ /* This map is 1k or less */ /* Interrupt control/status register */ /* Write all bits 7:2 in CSR */ /* Interrupt enable register */ /* Interrupt on any change to SSTAT */ /* Make STKIRQ output open drain */ /* Mode register */ /* Mode register, word access */ /* Memory map control register */ /* Mem map ctl reg, enable */ /* Make accesses use quiet mode */ /* Memory map map register. */ /* Map this to card attribute space */ /* System configuration register 1 */ /* This will probe for a TCIC/2 at the standard location. */ /* Adaptor card I/O base. */ /* Which socket to use. */ /* TCIC chip I/O base. */ /* The 0x80 location is for the delay in the *_p() functions. */ /* TCIC/2 locations. */ /* Verify that *something* is at the putative TCIC address. */ /* Select socket 0. */ /* Shut down, then turn on the card */ /*PWR_CURRENTL |*/ /* Enable the current socket and set autoincrement on data accesses. */ /* Map the I/O space starting at 'card_addr' to the socket specified by 'socket'. Use 8-bit mapping, quiet mode, wait state value of 7. */ /* Load I/O control register. */ /* Load the system configuration auxiliary register. */ /* Give the chip 50 msecs. to reinitialize. */ /* Point to the socket configuration registers, and load them. */ /* IR_SCF1 for socket 0. */ /* IR_SCF1 for socket 1. */ /* Map the attribute memory into 0xd0000. */ /* Point to WR_MBASE_i */ /* Write the enable byte to the card. */ /* Keep the autoincrement from happening, so we can observe the IRQ register. */ /*outb(0x00, tcic + TCIC_PWR);*/ /* * Local variables: * compile-command: "cc -O -o dbether dbether.c -N -Wall" * c-indent-level: 4 * tab-width: 4 * End: */ MD5{32}: 48abde71351ae49a0394595c41598082 File-Size{4}: 6962 Type{1}: C Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Keywords{1030}: access accesses adaptor addr address after all and any asm attribute autoincrement aux auxiliary base becker bit bits buffers byte can card change chip codes command compile config configuration control csr ctl current currentl data databook dbether default definitions delay donald down drain enable enabler end errno ethercard fields file for from functions give happening indent installed interface internal interrupt into irq just keep kernel less level like limiting linux load local location locations main make map mapping mark mbase mem memory mode msecs notes observe offsets only open outb output pcmcia point points power probe put putative pwr quiet quieter rather reg register registers reinitialize scf sched sctrl select selected set shut sock socket something space specified sstat standard starting state status stdio stkirq supply sys system tab tcic than that the them then this those three top turn unistd use value variables verify visible volt wait wall which width will with word works write written Description{4}: main }
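The dbether.c summary describes banked register access: the top three bits of TCIC_MODE select which auxiliary register is visible at TCIC_AUX. A minimal sketch of that idiom; the offsets and mode code below are placeholders, not the chip's real values, which live in the TCIC/2 data sheet and in dbether.c itself:

/* tcic-aux.c: sketch of the banked-register idiom described above.
 * All register offsets and the aux-select code are assumptions made
 * for illustration only.
 */
#include <stdio.h>
#include <sys/io.h>

#define TCIC_BASE  0x240   /* assumed default base of the TCIC/2 */
#define TCIC_MODE  0x0d    /* placeholder offset of the mode register */
#define TCIC_AUX   0x0e    /* placeholder offset of the aux window */
#define AUX_SYSCFG 0x05    /* placeholder code for one aux register */

/* Select an auxiliary register by loading the top three bits of
 * TCIC_MODE, then read it through the TCIC_AUX window. */
static unsigned char read_aux(unsigned char which)
{
    unsigned char mode = inb(TCIC_BASE + TCIC_MODE);
    outb((mode & 0x1f) | (which << 5), TCIC_BASE + TCIC_MODE);
    return inb(TCIC_BASE + TCIC_AUX);
}

int main(void)
{
    if (ioperm(TCIC_BASE, 16, 1)) {
        perror("ioperm (run as root)");
        return 1;
    }
    printf("aux register %d reads %#x\n", AUX_SYSCFG, read_aux(AUX_SYSCFG));
    return 0;
}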
@FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node30.html Update-Time{9}: 827948636 title{9}: Overview keywords{36}: aug chance edt overview reschke tue images{193}: /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif head{469}: Next: Simulation of Mixing Up: Heterogeneous Computing: One Previous: Examples of Mixed-Machine Overview Three examples of existing HC systems are very briefly introduced here. In the first two, the decomposition of tasks into subtasks and the assignment of subtasks to machines were user specified. The third, SmartNet, schedules tasks in an HC system. The long-term goal of automatic HC is discussed in the next section. Chance Reschke Tue Aug 15 08:59:12 EDT 1995 MD5{32}: 8c4f30cd4dd967ac9e2c440c9623d074 File-Size{4}: 1687 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{9}: Overview }
@FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node11.html Update-Time{9}: 827948634 title{20}: Report Organization keywords{47}: aug chance edt organization report reschke tue images{193}: /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif head{1695}: Next: Issues for Petaflops Up: Introduction Previous: Workshop Approach Report Organization This report is being published as a Technical Report of the Center of Excellence in Space Data and Information Sciences, Universities Space Research Association, in cooperation with NASA's Goddard Space Flight Center. Section 1 briefly describes several key events and activities that preceded The Petaflops Frontier Workshop, the objectives and approach of the workshop, and the report organization. Section 2 summarizes the key issues of petaflops computing. Much of the discussion is based on the excellent work and report from the Workshop on Enabling Technologies for Peta(FL)OPS Computing in Pasadena in 1994. The discussion provides a synopsis of the important findings and conclusions from that workshop. Section 3 includes The Petaflops Frontier Workshop agenda and information about the organizers, the presenters, and the participants. Section 4 is a synthesis of the presentations at The Petaflops Frontier Workshop in McLean, VA, on February 6, 1995. Eighteen presentations addressed various aspects of architecture, technology, applications, and algorithms. Section 5 consists of extended abstracts from the workshop presentations in the areas of architecture and technology, and Section 6 includes the extended abstracts from the applications and algorithms presentations. These are included both to preserve the technical content and to provide the reader with material directly from the participants. Section 7 distills the workshop results and presentations in a comprehensive discussion of conclusions and recommendations for follow-on activities. Chance Reschke Tue Aug 15 08:59:12 EDT 1995 MD5{32}: 0015fbace4ce267947b67c7ba75f2aa5 File-Size{4}: 2965 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{20}: Report Organization }
@FILE { http://cesdis.gsfc.nasa.gov/linux/pcmcia/cardd/card.insert Update-Time{9}: 827948613 Partial-Text{185}: # PCMCIA card insertion script. # Written by Donald Becker 1994. # This script is called by 'cardd' when a PCMCIA card is inserted.
# The following environment variables will be set: MD5{32}: 7cc70d261f3cc242e8eed0c345f2a208 File-Size{4}: 1797 Type{7}: Command Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Keywords{128}: becker called card cardd donald environment following inserted insertion pcmcia script set the this variables when will written Description{31}: # PCMCIA card insertion script. }
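The card.insert summary notes that cardd sets environment variables for the script before invoking it; the actual variable list is elided above, so every name below is hypothetical. A minimal sketch of how a cardd-style daemon might hand card parameters to such a script:

/* sketch: passing card parameters to an insertion script through the
 * environment, as a cardd-style daemon would.  CARD_* names and the
 * script path are assumptions, not taken from the actual cardd.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Values a real daemon would read from the card's tuple data. */
    setenv("CARD_TYPE", "ethernet", 1);   /* hypothetical name */
    setenv("CARD_IRQ",  "10",       1);   /* hypothetical name */
    setenv("CARD_PORT", "0x300",    1);   /* hypothetical name */

    /* The child shell inherits the environment set above. */
    int status = system("/etc/pcmcia/card.insert");  /* assumed path */
    if (status == -1)
        perror("system");
    return status == 0 ? 0 : 1;
}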
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/ess.intro.html Update-Time{9}: 827948649 url-references{59}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94.html title{24}: ESS Applications Project keywords{100}: and approach goal management objectives organization page plan previous project return strategy the images{19}: graphics/return.gif headings{99}: Introduction to the Earth and Space Science (ESS) Applications Project Return to the PREVIOUS PAGE body{5951}: Project Goal and Objectives: The goal of the ESS Project is to demonstrate the potential afforded by teraFLOPS systems' performance to further our understanding and ability to predict the dynamic interaction of physical, chemical, and biological processes affecting the solar-terrestrial environment and the universe. Project activities are focused on selected NASA Grand Challenge science applications. Many of the Grand Challenges address the integration and execution of multiple advanced disciplinary models into single multidisciplinary applications. Examples of these include coupled oceanic/atmospheric/biospheric interactions, 3-D simulations of the chemically perturbed atmosphere, solid earth modeling, solar flare modeling, and 3-D compressible magnetohydrodynamics. Others are concerned with analysis and assimilation into models of massive data sets taken by orbiting sensors. These problems are significant in that they have both social and political implications for our society. The science requirements inherent in the NASA Grand Challenge applications necessitate computing performance into the teraFLOPS range. The project is driven by five specific objectives: 1) Support the development of massively parallel, scalable, multidisciplinary models and data processing algorithms; 2) Make available prototype, scalable, parallel architectures and massive data storage systems to ESS researchers; 3) Prepare the software environments to facilitate scientific exploration and the sharing of information and tools; 4) Develop data management tools for high-speed access management and visualization of data with teraFLOPS computers; and 5) Demonstrate the scientific and computational impact for Earth and space science applications. Strategy and Approach: The ESS strategy is to invest the first four years of the project (FY92-95) in the formulation of specifications for complete and balanced teraFLOPS computing systems to support Earth and space science applications, and the next two years (FY96-97) in acquisition and augmentation of such a GSFC-resident system into a stable and operational capability, suitable for migration into Code Y/S computing facilities. The ESS approach involves three principal components: 1) Use a NASA Research Announcement (NRA) to select Grand Challenge Applications and Principal Investigator Teams that require teraFLOPS computing for NASA science problems. Eight collaborative multidisciplinary Principal Investigator Teams, including physical and computational scientists, software and systems engineers, and algorithm designers, are addressing the Grand Challenges. In addition, 21 Guest Computational Investigators are developing specific scalable algorithmic techniques. The Investigators provide a means to rapidly evaluate and guide the maturation process for scalable massively parallel algorithms and system software, and to thereby reduce the risks assumed by later ESS Grand Challenge researchers when adopting massively parallel computing technologies. 2) Provide successive generations of scalable computing systems as Testbeds for the Grand Challenge Applications; interconnect the Investigators and the Testbeds through high-speed network links (coordinated through the National Research & Education Network); and provide a software development environment and computational-techniques support to the Investigators. 3) In collaboration with the Investigator Teams, conduct evaluations of the testbeds across applications and architectures, leading to a down-select to the next-generation scalable teraFLOPS testbed. Organization: The Goddard Space Flight Center serves as the lead center for the ESS Project and collaborates with the Jet Propulsion Laboratory. The HPCC/ESS Inter-center Technical Committee, chaired by the ESS Project Manager, coordinates the Goddard/JPL roles. The ESS Applications Steering Group, composed of representatives from each science discipline office at NASA Headquarters and from the High Performance Computing Office in Code R, as well as representatives from Goddard and JPL, provides ongoing oversight and guidance to the project. The Office of Aeronautics and Space Technology, jointly with the Office of Space Science and Applications, selected the ESS Investigators through the peer-reviewed NASA Research Announcement process. The ESS Science Team, composed of the Principal Investigators chosen through the ESS NRA and chaired by the ESS Project Scientist, organizes and carries out periodic workshops for the investigator teams and coordinates the computational experiments of the Investigations. The ESS Evaluation Director leads development of ESS computational and throughput benchmarks which are representative of the ESS computational workload. A staff of in-house computational scientists develops scalable computational techniques which address the Computational Challenges of the ESS Investigators. The ESS Project Manager serves as a member of the NASA-wide High Performance Computing Working Group, and representatives from each Center serve on the NASA-wide Technical Coordinating Committees for Applications, Testbeds, and System Software Research. Management Plan: The project is managed in accordance with the formally approved ESS Project Plan. The ESS Project Manager at GSFC and the JPL Task Leader together oversee coordinated development of Grand Challenge applications, high performance computing testbeds, and advanced system software for the benefit of the ESS Investigators. Monthly, quarterly, and annual reports are provided to the High Performance Computing Office in Code R. ESS and its Investigators contribute annual software submissions to the High Performance Computing Software Exchange.
Points of Contact: Jim Fischer Goddard Space Flight Center, Code 934 fischer@nibbles.gsfc.nasa.gov, 301-286-3465 Robert Ferraro Jet Propulsion Laboratory ferraro@zion.jpl.nasa.gov, 818-354-1340 MD5{32}: d64e50901e924606b53675fd6d7ea7f9 File-Size{4}: 6565 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{24}: ESS Applications Project }
@FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node7.html Update-Time{9}: 827948634 url-references{33}: footnode.html#62 footnode.html#63 title{23}: Historical Perspective keywords{50}: aug chance edt historical perspective reschke tue images{481}: /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif /usr/local/src/latex2html/icons/foot_motif.gif /usr/local/src/latex2html/icons/foot_motif.gif /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif head{4799}: Next: The Petaflops Frontier Up: Introduction Previous: What is Petaflops? Historical Perspective As early as December 1991, the challenge of petaflops computing was receiving serious consideration at the Purdue Workshop on Grand Challenges in Computer Architecture for the Support of High Performance Computing, sponsored by the National Science Foundation. The workshop co-chairs identified achieving petaops performance as one of four grand challenge problems in computer architecture. The authors noted that, ``This ... challenge [of achieving peta-ops computing] is to dramatically improve and effectively harness the base technologies into a future computer system that will provide usable peta-ops of computer performance to grand challenge application programmers." Following the Purdue workshop, the issue of petaflops computing was addressed by the High Performance Computing, Communications and Information Technology Subcommittee (HPCCIT). The HPCCIT, composed of representatives from the major government agencies involved in the HPCC program, proposed that enabling technologies for petaflops computing be addressed in a workshop in the near future. Soon after the meeting, the Administrator of NASA convened a special initiative team to evaluate its existing and future high performance computing requirements. The NASA Supercomputing Special Initiative Team used a projected 10-year period to assess the implications of the computational aerosciences and Earth and space sciences grand challenges with respect to (1) established NASA requirements, (2) other U.S. government HPC activities, including advanced architectures, component technologies, and communications, (3) U.S. industry efforts, (4) activities in academia and other organizations, and (5) the approach and progress of foreign efforts. The team reaffirmed the findings of the earlier Pasadena workshop with respect to the requirements to achieve teraflops computing. The team also concluded that some NASA grand challenge problems would require petaflops computing performance.
In their assessment, the team identified seven major technology barriers to achieving petaflops-level performance: systems software, memory speed, aggregate I/O, interprocessor speed, processor speed, packaging, and power management. Other government agencies, academia, and industry were no less aware of the need to extend their horizons beyond the teraflops regime. The combination of this awareness, the HPCCIT meeting, and the report of NASA's Supercomputing Special Initiative Team helped form the basis of the first workshop on petaflops computing. In February 1994 in Pasadena, California, Caltech hosted the first major workshop to address petaflops computing. The Workshop on Enabling Technologies for Peta(FL)OPS Computing involved over 60 invited experts in all aspects of high performance computing technology who met to establish the basis for considering future research initiatives that could lead to the development, production, and application of petaflops-scale computing systems. The objectives of the Pasadena workshop were to (1) identify applications that require petaflops performance and determine their resource demands, (2) determine the scope of the technical challenge to achieving effective petaflops computing, (3) identify critical enabling technologies that lead to petaflops computing capability, (4) establish key research issues, and (5) recommend elements of a near-term research agenda. Over a period of three days, the Pasadena workshop focused on the following major and inter-related topic areas: Applications and Algorithms, Device Technology, Architecture and Systems, and Software Technology. Despite the expected challenges, the participants concluded that a petaflops computing system should be feasible in 20 years. This prediction was partly based on an assumption that during those 20 years the semiconductor industry would continue advancing in speed enhancement and in cost reduction through improved fabrication processes. And, although the workshop concluded that no paradigm shift would be needed in systems architecture, actively managing latency would be essential and would require a very high degree of fine-grain parallelism along with the mechanisms to exploit it. Also, a mix of technologies might be required, including semiconductor for main memory, optics for inter-processor (and possibly inter-chip) communications and secondary storage, and perhaps cryogenics (e.g., Josephson Junction) for very high clock rate and very low power processor logic. Finally, dramatic per-device cost reduction and innovative approaches to system software and programming methodologies would be essential. Next: The Petaflops Frontier Up: Introduction Previous: What is Petaflops? Chance Reschke Tue Aug 15 08:59:12 EDT 1995 MD5{32}: d6eea59366f691425eac60a56d623e30 File-Size{4}: 7278 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{23}: Historical Perspective }
@FILE { http://cesdis.gsfc.nasa.gov/admin/inf.eng/wave.tutorial.fin/comm.decency.act.html Update-Time{9}: 827948691 title{26}: Communications Decency Act keywords{9}: tutorial images{14}: wave.small.gif headings{16}: Can we ride the body{88}: TUTORIAL: The Policy Wave is Coming: Authorship in a U.S.
Government Agency Context MD5{32}: 64373b8b5e542e6ce642ee6c650916fe File-Size{4}: 9272 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{26}: Communications Decency Act }
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/testbed/ Update-Time{9}: 827948842 url-references{79}: /hpccm/annual.reports/cas94contents/ cra.html cra1.html graphics/ parallel.html title{53}: Index of /hpccm/annual.reports/cas94contents/testbed/ keywords{44}: cra directory graphics html parallel parent images{96}: /icons/blank.xbm /icons/menu.gif /icons/text.gif /icons/text.gif /icons/menu.gif /icons/text.gif headings{53}: Index of /hpccm/annual.reports/cas94contents/testbed/ body{200}: Name Last modified Size Description Parent Directory 17-Oct-95 15:42 - cra.html 19-Jul-95 15:23 3K cra1.html 19-Jul-95 15:26 3K graphics/ 09-Nov-95 14:43 - parallel.html 19-Jul-95 15:25 3K MD5{32}: 27b42d3ce0f09d152aa40329bab611d3 File-Size{3}: 935 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{53}: Index of /hpccm/annual.reports/cas94contents/testbed/ }
@FILE { http://cesdis.gsfc.nasa.gov/pub/linux/html/Ethernet-HOWTO-8.html Update-Time{9}: 827948630 url-references{340}: Ethernet-HOWTO.html#toc8 Ethernet-HOWTO-1.html#mailing-lists http://cesdis.gsfc.nasa.gov/linux/pcmcia.html Ethernet-HOWTO-3.html#xircom Ethernet-HOWTO-3.html#de-600 Ethernet-HOWTO-3.html#aep-100 Ethernet-HOWTO-3.html#aep-100 Ethernet-HOWTO-9.html Ethernet-HOWTO-7.html Ethernet-HOWTO.html#toc8 Ethernet-HOWTO.html#toc Ethernet-HOWTO.html #0 title{42}: Networking with a Laptop/Notebook Computer keywords{267}: adaptors beginning built chapter computer contents docking don ethercard isa keyboard laptop lists mailing net networking next notebook parallel pcmcia pocket port power previous realtek section slip station stuff support surfing table the this top using with xircom headings{129}: 8 8.1 Using SLIP 8.2 Built in NE2000 8.3 8.4 ISA Ethercard in the Docking Station. 8.5 Pocket / parallel port adaptors. body{4543}: Networking with a Laptop/Notebook Computer Contents of this section There are currently only a few ways to put your laptop on a network. You can use the SLIP code (and run at serial line speeds); you can buy one of the few laptops that come with an NE2000-compatible ethercard; you can get a notebook with a supported PCMCIA slot built-in; you can get a laptop with a docking station and plug in an ISA ethercard; or you can use a parallel port Ethernet adapter such as the D-Link DE-600. This is the cheapest solution, but by far the most difficult. Also, you will not get very high transmission rates. Since SLIP is not really related to ethernet cards, it will not be discussed further here. See the NET-2 HOWTO. This solution severely limits your laptop choices and is fairly expensive. Be sure to read the specifications carefully, as you may find that you will have to buy an additional non-standard transceiver to actually put the machine on a network. A good idea might be to boot the notebook with a kernel that has NE2000 support, and make sure it gets detected and works before you lay down your cash.
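One way to make that check is with a Becker-style user-space diagnostic that pokes the card directly. A minimal sketch, assuming Linux x86, root privileges for ioperm(), and the common 0x300 base; the reset-port trick mirrors the classic kernel NE2000 probe, and the offsets used are the conventional 8390 values rather than anything quoted in this document:

/* ne2k-probe.c: minimal check for an NE2000 at a given I/O base.
 * Reading the reset port and writing the value back resets the 8390,
 * which should then set the RESET bit in its interrupt status register.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/io.h>

#define NE_BASE     0x300   /* assumed factory-default base */
#define NE_RESET    0x1f    /* conventional offset of the reset port */
#define EN0_ISR     0x07    /* 8390 interrupt status register, page 0 */
#define ENISR_RESET 0x80    /* "reset complete" bit */

int main(void)
{
    if (ioperm(NE_BASE, 0x20, 1)) {   /* need access to base..base+0x1f */
        perror("ioperm (run as root)");
        return 1;
    }
    outb(inb(NE_BASE + NE_RESET), NE_BASE + NE_RESET);  /* pulse reset */
    usleep(10000);                    /* give the 8390 time to settle */
    if (inb(NE_BASE + EN0_ISR) & ENISR_RESET)
        printf("8390 responded at %#x: looks like an NE2000.\n", NE_BASE);
    else
        printf("no NE2000-style response at %#x.\n", NE_BASE);
    return 0;
}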
PCMCIA Support As this area of Linux development is fairly young, I'd suggest that you join the LAPTOPS mailing channel. See Mailing lists..., which describes how to join a mailing list channel. Try to determine exactly what hardware you have (i.e., card manufacturer, PCMCIA controller chip manufacturer) and then ask on the LAPTOPS channel. Regardless, don't expect things to be all that simple. Expect to have to fiddle around a bit, and patch kernels, etc. Maybe someday you will be able to type `make config' 8-) At present, the two PCMCIA chipsets that are supported are the Databook TCIC/2 and the Intel i82365. There are a number of programs on tsx-11.mit.edu in /pub/linux/packages/laptops/ that you may find useful. These range from PCMCIA ethercard drivers to programs that communicate with the PCMCIA controller chip. Note that these drivers are usually tied to a specific PCMCIA chip (i.e., the Intel 82365 or the TCIC/2). For NE2000-compatible cards, some people have had success with just configuring the card under DOS, and then booting Linux from the DOS command prompt via . For those that are net-surfing, you can try: Don's PCMCIA Stuff. Anyway, the PCMCIA driver problem isn't specific to the Linux world. It's been a real disaster in the MS-DOS world. In that world people expect the hardware to work if they just follow the manual. They might not expect it to interoperate with any other hardware or software, or operate optimally, but they do expect that the software shipped with the product will function. Many PCMCIA adaptors don't even pass this test. Things are looking up for Linux users that want PCMCIA support, as substantial progress is being made. Pioneering this effort is David Hinds. His latest PCMCIA support package can be obtained from in the directory . Look for a file like where X.Y.Z will be the latest version number. This is most likely uploaded to as well. Note that Donald's PCMCIA enabler works as a user-level process, while David Hinds' is a kernel-level solution. You may be best served by David's package, as it is much more widely used. Docking stations for laptops typically cost about $250 and provide two full-size ISA slots, two serial ports, and one parallel port. Most docking stations are powered off the laptop's batteries, and a few allow adding extra batteries in the docking station if you use short ISA cards. You can add an inexpensive ethercard and enjoy full-speed ethernet performance. The `pocket' ethernet adaptors may also fit your need. Until recently they actually cost more than a docking station and a cheap ethercard, and most tie you down with a wall-brick power supply. At present, you can choose from the D-Link or the RealTek adaptor. Most other companies, especially Xircom (see Xircom), treat the programming information as a trade secret, so support will likely be slow in coming. (if ever!) Note that the transfer speed will not be all that great (perhaps 100kB/s tops?) due to the limitations of the parallel port interface. See DE-600 / DE-620 and RealTek for supported pocket adaptors. You can sometimes avoid the wall-brick with the adaptors by buying or making a cable that draws power from the laptop's keyboard port.
(See keyboard power.) Next Chapter, Previous Chapter Table of contents of this chapter, General table of contents Top of the document, Beginning of this Chapter MD5{32}: 53a6d4b4679364fe06f5010dcfb031b7 File-Size{4}: 5830 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{42}: Networking with a Laptop/Notebook Computer }
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/in.house/deane.html Update-Time{9}: 827948654 Description{49}: Compressible Convection via FCT on MIMD Computers Time-to-Live{8}: 14515200 Refresh-Rate{7}: 2419200 Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Version{3}: 1.0 Type{4}: HTML File-Size{4}: 2949 MD5{32}: 717e8205e65dc1fe3e8929b9ad0098d0 body{2390}: Objective: The fusion-generated energy in the deep solar interior is largely carried by motions within the outer one-third of the Sun. The modeling and understanding of this, the SOLAR CONVECTION ZONE, is a Grand Challenge problem, being part of the PI effort of Prof. R. Rosner and the GCI effort of Prof. R. Stein. (In addition, the PI team of Dr. J. Gardner uses FCT, which is the core computational technique of this project.) Our intent in this work is to augment the algorithm, machine architecture, and physics choices available for this modeling. Approach: In collaboration with Drs. S. Zalesak and D. Spicer, the technique of (F)lux (C)orrected (T)ransport has been used to model the three-dimensional hydrodynamical problem of compressible convection within a stratified atmosphere on parallel computers. Accomplishments: A three-dimensional hydrodynamics code has been developed that runs on the Cray C90 and workstations, as well as the Intel machines (Delta and Paragon) under the NX operating system and the Cray T3D under PVM. The code is written as a template using the C preprocessor, so that it produces only the code relevant to the particular boundary conditions and target machine, selected via command line switches. Significance: The physical problem of compressible convection can be modeled, along with other problems, with this code. The user can add new physics, boundary conditions, and message passing calls with minimal effect on the core algorithm. Status/Plans: The addition of magnetic fields is nearing completion. The addition of message passing calls specific to the IBM SP2 is anticipated shortly. Figure caption: The figure shows the results of a (120x120x120) simulation. The panel of four pictures shows the vertical velocity and temperature, corresponding to looking at the surface of the Sun. The isometric on the right is the vertical velocity field. The picture of Solar granulation is that of light intensity at the Solar surface. The purpose of the illustration is to show that the granular features of the flow on the Sun are readily captured by the simulations. The simulations can reveal the hidden third dimension. The flow is found to become supersonic, with asymmetry between up and down motions. (cf. the simulations of the PI team of Prof. R. Rosner.) Point of Contact: Dr.
Anil Deane NASA Goddard Space Flight Center (301) 286-7803 deane@laplace.gsfc.nasa.gov curator: Larry Picha headings{78}: Compressible Convection via FCT on MIMD Computers Return to the PREVIOUS PAGE images{36}: graphics/fct.gif graphics/return.gif keywords{60}: caption curator figure larry page picha previous return the title{49}: Compressible Convection via FCT on MIMD Computers url-references{111}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/in-house.html mailto:lpicha@cesdis.gsfc.nasa.gov }
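The deane.html summary above notes that the convection code is written as a template using the C preprocessor, so that command-line switches select only the code for the chosen boundary conditions and target machine. A minimal sketch of that template technique, with hypothetical macro and switch names (the real ones live in the FCT source):

/* template.c: sketch of the preprocessor-template idea described above.
 * Selecting the message-passing layer and boundary conditions at
 * compile time, e.g.:
 *     gcc -DUSE_PVM -DPERIODIC_BC -o convect template.c
 * emits only the code relevant to that configuration.
 */
#include <stdio.h>

#if defined(USE_PVM)
#  define EXCHANGE_HALO()  puts("pvm send/recv halo exchange")
#elif defined(USE_NX)
#  define EXCHANGE_HALO()  puts("csend/crecv halo exchange")
#else
#  define EXCHANGE_HALO()  ((void)0)   /* serial build: nothing to do */
#endif

#ifdef PERIODIC_BC
#  define APPLY_BC()  puts("wrap-around boundary values")
#else
#  define APPLY_BC()  puts("reflecting boundary values")
#endif

int main(void)
{
    for (int step = 0; step < 3; step++) {  /* stand-in for the time loop */
        APPLY_BC();
        EXCHANGE_HALO();
        printf("step %d: advance FCT solution\n", step);
    }
    return 0;
}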
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/testbed.html Update-Time{9}: 827948649 url-references{377}: testbed/cra.html testbed/cra1.html testbed/parallel.html http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94.html http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/main94.html http://www.nas.nasa.gov/HPCC/home.html http://cesdis.gsfc.nasa.gov/people/lpicha/whoiam.html http://cesdis.gsfc.nasa.gov/ http://www.usra.edu/ http://sdcd.gsfc.nasa.gov http://sdcd.gsfc.nasa.gov/ESD/ keywords{350}: aerosciences agreement and announcement annual association center computational computing cooperative data directorate directory division earth excellence for home hpcc hpccpt information lawrence main multiphysics nasa page parallel picha previous processing product project report research return sciences simulation space testbed the universities images{19}: graphics/return.gif head{923}: CAS Testbed Activities NASA High Performance Computing and Communications (HPCC) Program Computational Aerosciences Project Testbed Activities NASA HPCC 1994 Annual Report The HPCCPT-1 Cooperative Research Announcement The HPCC Testbed-1 Cooperative Research Agreement Multiphysics Product Simulation Parallel Processing Testbed Return to the PREVIOUS PAGE Other Paths: Go to the Main Directory for The NASA HPCC 1994 Annual Report Go to The Computational Aerosciences Project Home Page The NASA HPCC Home Page Authorizing NASA Official: Author: Lawrence Picha (lpicha@usra.edu) Center of Excellence in Space Data and Information Sciences, Universities Space Research Association, NASA Goddard Space Flight Center, Greenbelt, Maryland. Last revised: 01 JULY 95 (l.picha) (A service of the Space Data and Computing Division, the Earth Sciences Directorate, NASA Goddard Space Flight Center) MD5{32}: d68d2e5868f14227acdfb64ee5de915c File-Size{4}: 2071 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 }
@FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node5.html Update-Time{9}: 827948633 url-references{235}: node6.html#SECTION00051000000000000000 node7.html#SECTION00052000000000000000 node8.html#SECTION00053000000000000000 node9.html#SECTION00053100000000000000 node10.html#SECTION00053200000000000000 node11.html#SECTION00054000000000000000 title{13}: Introduction keywords{140}: approach aug chance edt frontier historical introduction objectives organization perspective petaflops report reschke the tue what workshop images{193}: /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif head{745}: Next: What is Petaflops? Up: No Title Previous: List of Tables Introduction Even as the Federal HPCC Program works towards achieving teraflops computing, policy makers and future research program planners in government, academia, and industry have concluded that teraflops-level computing systems will be inadequate to address many scientific and engineering problems that exist now, let alone applications that will arise in the future. As a result, the high performance computing community is examining the feasibility of achieving petaflops-level computing over a 20-year period. What is Petaflops? Historical Perspective The Petaflops Frontier Workshop Objectives Workshop Approach Report Organization Chance Reschke Tue Aug 15 08:59:12 EDT 1995 MD5{32}: d10efabb970c7088b7fb11d63728da74 File-Size{4}: 2455 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{13}: Introduction }
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/brhr.html Update-Time{9}: 827948658 url-references{431}: brhr.intro.html brhr/summer.html brhr/object.html brhr/petaflops.html http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94.html http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/main94.html http://sdcd.gsfc.nasa.gov/ESS/ http://cesdis.gsfc.nasa.gov/hpccm/hpcc.nasa.html http://cesdis.gsfc.nasa.gov/people/lpicha/whoiam.html http://cesdis.gsfc.nasa.gov/ http://www.usra.edu/ http://sdcd.gsfc.nasa.gov http://sdcd.gsfc.nasa.gov/ESD/ title{8}: ESS BRHR keywords{501}: and annual association author authorizing center computation computational computing data directorate directory distributed division earth edu enabling excellence flight for fourth goddard greenbelt high home hpcc information last lawrence lpicha main maryland may nasa object official ops oriented overview page parallel performance peta physics picha previous programming project report research return revised school science sciences service space summer technologies the universities usra workshop images{115}: graphics/ess-small.gif graphics/convect-bar.gif graphics/convect-bar.gif graphics/return.gif graphics/hpccsmall.gif headings{117}: NASA High Performance Computing and Communications (HPCC) Program ESS Basic Research and Human Resources Overview
body{949}: Earth and Space Science (ESS) Project NASA HPCC 1994 Annual Report Fourth NASA Summer School in High Performance Computational Physics Object-Oriented Programming for High Performance Parallel and Distributed Computation Workshop on Enabling Technologies for Peta(FL)OPS Computing Return to the PREVIOUS PAGE Other Paths: Go to the Main Directory for The NASA HPCC 1994 Annual Report Go to the Earth and Space Science Project Home Page Go to The NASA HPCC Home Page Authorizing NASA Official: Lee B. Holcomb, Director, NASA HPCC Office Author: Lawrence Picha (lpicha@usra.edu) Center of Excellence in Space Data and Information Sciences, Universities Space Research Association, NASA Goddard Space Flight Center, Greenbelt, Maryland. Last revised: 30 MAY 95 (l.picha) (A service of the Space Data and Computing Division, the Earth Sciences Directorate, NASA Goddard Space Flight Center) MD5{32}: 9dc515f4902773dad46fd9837643154e File-Size{4}: 2180 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{8}: ESS BRHR }
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.p2d2.html Update-Time{9}: 827948663 url-references{48}: http://www.nas.nasa.gov/NAS/Tools/Projects/P2D2/ title{49}: The Portable Parallel/Distributed Debugger (p2d2) keywords{112}: accomplishments approach contact gov http nas nasa objective plans point projects significance status tools www headings{49}: The Portable Parallel/Distributed Debugger (p2d2) body{2371}: Objective: The objective of the p2d2 project is to build a debugger for multiprocess programs that are distributed across a heterogeneous collection of machines. Later versions of the tool will be tailored to the computational fluid dynamics (CFD) programming community. Achievement of this goal will put an effective program development tool in the hands of CFD programmers. Approach: In the design of p2d2 we have employed a client-server architecture. This approach permits us to isolate the architecture- and operating-system-dependent code in a server. Thus, the client-side code remains highly portable. We have designed scalable user interface elements in the expectation that users will want to debug computations involving many (say 16-256) processes. Accomplishments: Demonstration of a prototype at Supercomputing '94; papers at Supercomputing '94 and HICSS-28; scalable process navigation paradigm designed and implemented; technical report describing the process navigation paradigm; Version 1.0 implementation (for programs using the Message Passing Interface (MPI) communication library on the IBM SP2) nearly complete; began work with first user; demonstrated scalable user interface elements at Supercomputing '95. Note: the accompanying graphic shows p2d2 being used to debug the NAS parallel benchmark "mg". The program is running on the front end and 16 of the computational nodes of the IBM SP2. The left-hand side of the graphic has the main window of the debugger, which shows the status of all of the processes and the location in the source for one of them. The windows on the right-hand side give a variety of more detailed information about the debugging session. Significance: In addition to providing benefits to the CFD programming community, p2d2 can be used as a general-purpose debugger for isolating problems in programs distributed across a heterogeneous collection of machines. As such, its potential user community is quite large. Status/Plans: Support for MPI programs running on the IBM SP2; support for PVM (Parallel Virtual Machine) programs running on the Silicon Graphics cluster; support for High Performance Fortran programs - a problem-domain-specific debugger (with CFD-specific operations). Point(s) of Contact: Robert Hood NASA Ames Research Center rhood@nas.nasa.gov URL: http://www.nas.nasa.gov/NAS/Tools/Projects/P2D2/ MD5{32}: d7c5b0c07bab9ff6ec492fa979b1c170 File-Size{4}: 2819 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{49}: The Portable Parallel/Distributed Debugger (p2d2) }
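p2d2's client-server design, described above, isolates the architecture-dependent debugging primitives in a server; the most basic of those primitives is attaching to a single process of a running parallel job. A minimal Linux-specific sketch of that one step, using ptrace (illustrative only, not p2d2's actual server code):

/* attach.c: attach to one process of a running parallel job, the
 * primitive a p2d2-style server needs.  Usage: ./attach <pid>
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    pid_t pid = (pid_t)atoi(argv[1]);

    if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) {  /* stop the target */
        perror("PTRACE_ATTACH");
        return 1;
    }
    waitpid(pid, NULL, 0);         /* wait until it is actually stopped */
    printf("attached to %d; process is stopped and can be inspected\n",
           (int)pid);

    /* ... a real debugger would read registers and memory here ... */

    ptrace(PTRACE_DETACH, pid, NULL, NULL);   /* let it run again */
    return 0;
}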
@FILE { http://cesdis.gsfc.nasa.gov/admin/seminar.series/tech.report/shaffer.ps Update-Time{9}: 827948596 Partial-Text{1087}: OVERVIEW OF INTERNATIONAL EARTH OBSERVATION ACTIVITIES Presentation to International Earth Remote Sensing Projects Seminar Series Center of Excellence in Space Data and Information Sciences IEEE Geoscience and Remote Sensing Society Dr. Lisa R. Shaffer Acting Director, Mission to Planet Earth Division Office of External Relations NASA Headquarters, Washington, DC January 17, 1995 Outline Types of International Earth Observation Activities Forms of Cooperation in Earth Remote Sensing Overview of International Activities NASA's Role in International Remote Sensing Issues: Now and Future Types of International Earth Observation Activities Satellites Sensors Launch services Operations and data acquisition Data processing, archiving, and distribution Scientific investigations In situ observations for calibration/validation Applications demonstrations Operational use Approaches to Cooperation in Earth Remote Sensing National satellite systems (i.e., no cooperation) MD5{32}: 787d9472b0696946907433fe14a4f7f1 File-Size{5}: 33795 Type{10}: PostScript Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Keywords{611}: acquisition acting activities and applications approaches archiving calibration center cooperation data demonstrations director distribution division earth excellence external for forms future geoscience headquarters ieee information international investigations issues january launch lisa mission nasa national now observation observations office operational operations outline overview planet presentation processing projects relations remote role satellite satellites sciences scientific seminar sensing sensors series services shaffer situ society space systems types use validation washington Description{62}: OVERVIEW OF INTERNATIONAL EARTH OBSERVATION ACTIVITIES }
@FILE { http://cesdis.gsfc.nasa.gov/PAS2/wg3.html Update-Time{9}: 827948617 url-references{489}: http://cesdis.gsfc.nasa.gov/ /PAS2/index.html wg3.html#executivesummary wg3.html#issues wg3.html#recommendations wg3.html#concerns wg3.html#conclusions wg3.html#conclusions wg3.html#conclusions wg3.html#conclusions wg3.html#conclusions wg3.html#conclusions
wg3.html#conclusions wg3.html#conclusions wg3.html#conclusions wg3.html#conclusions wg3.html#conclusions #top /PAS2/index.html http://cesdis.gsfc.nasa.gov/cesdis.html /pub/people/becker/whoiam.html mailto:becker@cesdis.gsfc.nasa.gov title{32}: Use of System Software and Tools keywords{509}: and applications base basic becker breazeal cesdis chair cherri common computing concerns conclusions corporation developers document don donald environment environments establish executive facto financial for gov group gsfc high hoc hpc improving incentives increase index intel issues nasa oregon pancake pasadena performance portability porting recommendation recommendations research second software standards state summary support system the this tool tools top university usability use working workshop head{20151}: Center of Excellence in Space Data and Information Sciences. Use of System Software and Tools Pasadena Working Group #3 Chair, Cherri Pancake, Oregon State University Co-chair, Don Breazeal, Intel Corporation This report is one component of the Proceedings of the Second Pasadena Workshop on System Software and Tools for High Performance Computing Environments. Abstract: This report is a summary of the conclusions of the Working Group on the Use of System Software and Tools. The group included representatives from independent software vendors, national laboratories, academia, High Performance Computing (HPC) vendors, and US federal agencies. In this report, we identify obstacles to the success of HPC relative to the usability of system software and tools, and suggest strategies for overcoming them. The charter of the working group was to answer the following questions: What types of system software and tools are needed to facilitate portable, scalable applications? How can users be motivated to use them? Why don't users use and/or like existing system software and tools? Why don't vendors respond to user complaints and/or issues? What will it take to make the HPC user community grow? For the purposes of discussion we defined users to be persons involved in developing parallel applications (i.e., predominantly non-computer scientists). System software and tools are defined very broadly as including any software that the application programmer doesn't write. This report is organized as follows: section 1 is an executive summary providing an overview of the working group recommendations; section 2 describes what the group perceives as the major problems to be addressed and suggests potential solutions; section 3 provides a list of the action items recommended by the group; section 4 describes open issues and concerns; and section 5 concludes the report. Executive Summary Working Group 3 discussed the problems confronting current and potential users of HPC due to the lack of robustness and marginal usability that characterize current system software and tools. A variety of approaches were suggested, resulting in the following four recommendations: Recommendation 1: Establish a Common Base Environment for Developers of HPC Applications. NASA (with the collaboration of the larger community) should take the lead in an effort to define a minimal set of software tools to be made available uniformly across all HPC platforms. HPC vendors should be encouraged to implement this set as quickly as possible so that users can have access to the same (reliable) base environment on all HPC systems.
Recommendation 2: Basic Research to Increase Tool Usability. The National Science Foundation (NSF) should provide funding for research efforts that identify user strategies for application development and that apply those strategies to tool design in order to improve usability. Recommendation 3: Financial Support for Standards and Portability. The charter of the National High Performance Software Exchange (NHPSE) should be expanded to provide funding for community-wide standardization efforts likely to improve the uniformity of HPC software, such as High Performance Fortran (HPF) and the Message Passing Interface (MPI). Recommendation 4: ISV Application Software. The national laboratories and national supercomputing centers should develop/expand programs that encourage independent software vendors (ISVs) to port key applications to HPC systems. Issues in the Use of System Software and Tools Application developers in the HPC community are dissatisfied with the system software and tools provided on HPC systems available today. Surveys of HPC users of both parallel and serial systems have shown that the acceptance of programming tools in this community is very low. Users often avoid tools, or devise their own substitutes for significant system software components. This occurs for a variety of reasons. In general, user perceptions of HPC system software and tools are that: tools crash very quickly; tools don't do what they're supposed to do; tools don't scale to large applications, number of nodes, etc.; tools are too machine-specific; tools are too diverse and inconsistent; tools are not inter-operable (even on a single platform); tools are very difficult for users to learn and apply; and users are often unsure if there will be a payoff for using tools. These issues can be categorized as three software attributes that appear to be lacking in current HPC system software and tools: reliability/robustness, portability/standards-compliance, and usability. Compounding these issues is the fact that the application base on parallel systems has grown very slowly. Yet the availability of key applications is precisely the mechanism needed to drive the growth of the HPC user community and the realization of HPC's potential. As the group was quick to point out, not everyone is sold on parallelism! Potential users need to see some compelling examples of success stories if they are to be motivated to use HPC systems. Ultimately, applications and problem solving environments from Independent Software Vendors (ISVs) must become available. To address these issues, we recommend a twofold approach: improving the software environment for application developers, and providing incentives for those developers to port their applications to parallel HPC systems. Improving the Software Environment Reliability and robustness are difficult issues for the HPC vendor community. User organizations often require delivery of new systems at the earliest possible date. Because of the complexity of parallel systems, however, system software and tools are complicated, and early delivery may mean that they are relatively new and untried. As a result, the users' initial contact with the software is quite negative, and the situation improves only slowly. System vendors may appear to be unresponsive to user needs, because their resources are consumed with maintaining the status quo as market forces require new systems, languages, and features.
Vendors often have a number of high-priority requests, and they need to spend effort differentiating their product from those of their competitors. Yet without certain guarantees that software will be reliable and robust, it is difficult to attract new users and new applications. Compounding the problem is the fact that few, if any, users program to a single platform. The rapid rate of change in HPC technology requires that users be able to migrate their codes from platform to platform with relative ease. Standards are a primary mechanism for providing the uniformity needed to enable application portability, whether they are official standards sanctioned by a standards organization or de facto standards developed through grass-roots efforts. In this document, we use the term standards to include both types. For HPC, it is clear that successful standards must come from the community as a whole. System vendors cannot be expected to develop standards, since their products must be differentiated to maintain competitive position. Vendors can only provide input to the definition process and implement the result. It is important to note that a standard is useful only if it is in fact implemented across a range of vendor platforms. Factors that can help induce vendors to implement standard software include: the existence of a reference implementation of the standard, availability of implementations from a third party, pressure from the user community, and availability of a validation suite for testing of conformance and correctness. These should be included as part of any serious standards effort. The problem of usability may well be the most difficult to address. HPC system software suffers in comparison to the usability of software and tools provided with desktop systems, because the resources available for development are much greater in the desktop world and the problems to be solved are much less complex. Many usability issues remain unresolved for parallel HPC software. System software and tools are often the implementation of an untried solution, and the ways in which such software can be applied effectively are often obscure. The options and variations available in programming parallel systems are so diverse that tools which attempt to adequately support all models of usage become excessively complicated. Unfortunately, little research has been conducted to identify the models of usage that should be supported in order to reach a reasonable number of users without undue complexity. Incentives to Porting Applications The availability of key applications on HPC systems will undoubtedly drive the success of HPC. Many of these applications are developed and supported by ISVs. Their very independence from hardware vendors means that ISVs need a financial incentive to port their applications to parallel platforms. The availability of reliable and usable system software and tools is a critical part of this, since the easier a system is to port to, the lower the cost to the ISV. However, ease alone is not sufficient incentive for most ISVs to initiate a port. The uncertain longevity of any specific hardware platform is a strong deterrent for porting. This creates a vicious circle, in that a platform must include key applications if it is to survive, yet the owners of key applications are wary of porting to a platform until its survival is certain. Guaranteed customers or other mechanisms for funding are needed so that ISVs can justify porting costs.
Moreover, the simple existence of a successful port is not enough to attract additional customers; like the ISVs, they are wary of investing in a short-lived HPC platform. Potential customers should be encouraged to experience for themselves the improved performance that can be obtained by using the parallelized application. The first port is the most expensive, since subsequent ports can leverage much of the initial work. ISV costs go beyond the basic development effort, however, since an ISV must provide support and maintenance to customers on each target platform. Below a certain minimum number of customers, it simply is not cost-effective for the ISV to provide support. Too much of the burden of moving applications to parallel platforms falls on the ISV. Such businesses are often small, so the risk factors make involvement in such a plan unacceptable. Once the HPC market has grown and the customer base is large enough, such risks may be reduced, but this is not true of the current market. The European Union devised one strategy for dealing with the ISV problem, the so-called Europort model. Its goal is to enable the porting of key scientific applications to parallel computer systems. Usually the application developer is partnered with a research organization and (sometimes) a system provider. The researcher supplies expertise in parallel algorithms and parallelization techniques to assist the application developer. The project is funded by the European Commission through the ESPRIT program, which supports collaborative information technology development. Potential mechanisms for supporting the migration of key ISV applications to HPC platforms include: assistance in identifying a promising customer base; long-term conditional loans; cost-sharing; assistance in carrying out ports; and the Europort collaborative model. Of these, the Europort model is the most promising and palatable, but such a model may not fit well with US policies and rules. Recommendations The group's recommendations were formulated to attempt to correct the most glaring problems in current HPC software environments. Recommendation 1: Establish a Common Base Environment for Developers of HPC Applications A community-wide working group should define and advocate the implementation of a minimal parallel computing environment that is robust and consistent across all HPC platforms. The availability of such an environment would guarantee at least minimal functionality for HPC applications developers, and the promise of uniformity across platforms would serve as an encouragement for users and ISVs who are currently faced with a wide variety of dissimilar software and tool systems. One user organization represented in the working group, NASA, was named a likely candidate for taking the lead in this effort, with the collaboration of the larger community. A kick-off meeting for this effort should be scheduled as soon as possible (this may happen as early as May, 1995). The meeting would organize an email and web-based forum to produce the base environment requirements specification. Funding should be provided to support a coordinator and support staff for the effort, and a travel budget should be supplied to broaden participation. Participants in the specification effort should include HPC system (including workstation) vendors, application developers, and ISVs (both those who have ported to parallel systems and those who have not). 
It is critical that the base operating environment be reliable, robust, and familiar to users. To demonstrate the intent of this recommendation we present the components of an example environment, providing minimal functionality for developing, debugging, tuning, and executing applications: C and Fortran compilers (single-node, not parallelizing) that are reliable and correct; Scalable support for hand-coded instrumentation, capable of yielding reliable, expected behavior; Support for parallel program execution that is reliable and capable of producing clear error messages; A dbx-like symbolic debugger with the ability to attach to a single process in an executing application; A gprof-style profiling tool capable of monitoring the performance of a single process in an executing application; and A facility for determining the status of an executing application, as well as discovering which users are running which programs and on which nodes/partitions. No part of this recommendation should be construed as incompatible with the ability of the system vendors to provide additional or unique tools for special needs. The base environment will establish the minimal support that must be provided in a reliable and uniform fashion. A standard set of tools will also help the vendors deliver a robust working environment much more quickly when a brand new system is released. Vendors are encouraged to provide additional tools beyond those specified as part of the base environment. HPC vendors should be encouraged to implement this set as quickly as possible so that users can have access to the same (reliable) base environment on all HPC systems. Funding should be provided to reduce vendor implementation costs. To encourage adoption, federal agencies funding the procurement of HPC systems should encourage inclusion of these requirements in Requests for Proposals (RFPs). Within two years this environment should be available on all HPC platforms. Recommendation 2: Basic Research to Increase Tool Usability User acceptance of system software and tools will not increase appreciably until such software is usable within the framework of typical application development strategies. To this end, NSF should fund collaborative research into the interaction between the user and the parallel software environment. This research should involve substantial input from experienced users engaged in developing large-scale applications. The goals of the research should be to: identify successful user strategies in developing real applications, devise ways to apply knowledge of those strategies in the presentation of tool functionality in an intuitive, usable, and familiar manner, and use this functionality in the development of simple, composable tool units. Support should be provided for participants in the collaborative efforts, including tool users, developers, and implementors. Support should also be provided for the promotion of the results of this research, in order to disseminate the information through the community. Initial results should be available within two years. Recommendation 3: Financial Support for Standards and Portability Community-wide standardization efforts offer the greatest promise for supporting the portability of HPC applications across multiple vendor platforms. Successful examples of such efforts include the BLAS (standard Basic Linear Algebra Subroutines), MPI, and HPF. 
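To make concrete what such community standards buy the application developer, consider the following minimal sketch, which assumes only the standard MPI-1 C bindings. Nothing in it names a particular vendor's machine; a conforming implementation on any HPC platform should compile and run it unchanged, which is precisely the portability property these standardization efforts seek to guarantee.

/* ring.c: minimal illustration of MPI as a portability standard.
 * Passes a token around all nodes and counts the hops; uses only
 * standard MPI-1 calls, so it is not tied to any one platform.
 * Assumes at least two nodes. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, token;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        token = 0;                      /* originate the token */
        MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
        MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD, &status);
        printf("token made %d hops around %d nodes\n", token + 1, size);
    } else {
        MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD, &status);
        token++;                        /* count this hop */
        MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}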
Note, however, that funding for these efforts was provided ad hoc from a variety of sources, a model that works in the first few cases but cannot be sustained to encompass the wide variety of standards needed to make HPC platforms attractive to a broad user and ISV audience. A stable source of funding for these efforts would ease the path to successful implementation. Moreover, academic participation in these efforts is often constrained by the associated cost and by the lack of recognition for participation and contribution. Some method for supporting and encouraging academic participation is needed. The charter of the National HPC Software Exchange (NHPSE) should be expanded to include funding for HPC community efforts to evolve specifications of standard system software that will enable the development of portable HPC applications. These specifications should be made available to the private sector on a non-exclusive, no-cost basis. To facilitate the development of private-sector implementations, such specifications should be accompanied by a reference implementation and a validation suite. Recommendation 4: ISV Application Software A critical method for expanding the HPC market is to enable key applications software on HPC platforms through the use of ISV resources. This can be accomplished through several actions. Little additional funding is required to implement this recommendation, but rules and mechanisms need to be changed. First, ISVs and national lab employees should be made more aware of existing mechanisms for technology transfer that might affect their applications. These mechanisms are misunderstood and underutilized, but they could ease the path for ISV ports to HPC systems. Second, the mission of the national supercomputing centers should be expanded to include encouragement for ISVs, whose needs are not met by existing industrial partnership programs. New programs should be instituted that do not require large up-front membership fees for the ISV. Such programs should furnish not just machine access for carrying out an application port, but also the sale of cycles to potential customers who want to test-drive the parallelized application. Finally, existing mechanisms should be expanded to include Europort-style collaborations that don't require cost-sharing by small ISVs. Issues and Concerns The recommendation to provide U.S. federal funding for Europort-style collaborations to enable key ISV application software on HPC systems raises some legal and ethical questions that the group is not qualified to answer. Using federal funds for such development efforts, and keeping the results of those efforts proprietary, may violate existing national policy. Summary and Conclusions In this report, Working Group 3 has made some very specific recommendations in the hope that they will provoke action on several key items. Recommendation 1 for the base environment is already moving forward. Recommendation 2 for user-related research would expand funding in an area that would yield concrete strategies for improving tool usability. Recommendation 3 would smooth the path to the development of standards by providing administrative and logistical support for community-wide efforts. Recommendation 4 proposes support for ISV porting efforts that would make HPC systems more useful to the scientific and engineering communities. Implementation of any of these recommendations will move the HPC community toward improved usefulness and success. 
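As a closing illustration of Recommendation 3, the validation suite that should accompany each specification is simply a collection of small self-checking conformance tests. A hedged sketch of one such test, assuming the standard MPI C bindings, checks a collective reduction against an independently computed closed-form result:

/* validate_allreduce.c: sketch of one test from a hypothetical
 * validation suite.  Each node contributes its rank; the global sum
 * must equal 0 + 1 + ... + (size-1) on every node. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, sum, expected;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    expected = size * (size - 1) / 2;   /* closed-form check value */

    if (sum != expected) {
        printf("FAIL on node %d: got %d, expected %d\n", rank, sum, expected);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    if (rank == 0)
        printf("PASS: MPI_Allreduce(MPI_SUM) conforms on %d nodes\n", size);
    MPI_Finalize();
    return 0;
}

A real suite would cover every routine, datatype, and boundary case in the specification; the point is that each test is cheap to write and mechanical to run, so vendors can demonstrate conformance objectively.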
Top of this document Pasadena 2 Workshop index CESDIS HTML formatting/WWW contact: Donald Becker, becker@cesdis.gsfc.nasa.gov. MD5{32}: 16d8498883dd0510145ebfd0d56121f7 File-Size{5}: 22271 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{32}: Use of System Software and Tools } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/app.intro.html Update-Time{9}: 827948649 url-references{124}: gci.html http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/app.software.html mailto:lpicha@cesdis.gsfc.nasa.gov title{34}: ESS Applications Software Research keywords{148}: and approach cesdis curator goal gov gsfc larry lpicha management nasa objectives organization page picha plan previous project return strategy the images{42}: graphics/gci.small.gif graphics/return.gif headings{75}: Overview of ESS Applications Software Research Return to the PREVIOUS PAGE body{2841}: Project Goal and Objectives: The goal of the ESS applications software activity is to enable the development of NASA Grand Challenge applications on those computing platforms which are evolving towards sustained teraFLOPS performance. The objectives are to: identify the NASA Grand Challenge Investigations and Guest Computational Investigations; identify computational techniques, termed Computational Challenges, which are essential to the success of the Grand Challenge problems; formulate embodiments of these techniques which are adapted to and perform well on highly parallel systems; and capture the successes in a reusable form. Strategy and Approach: The strategy is to select NASA Grand Challenges from a vast array of candidate NASA science problems, to select teams of aggressive scientific Investigators to attempt to implement the Grand Challenge problems on scalable testbeds, and to provide institutionalized computational technique development support to solve the Computational Challenges in order to accelerate the progress of the Investigators and to capture the results. The approach involves use of the peer-reviewed NASA Research Announcement as the mechanism to select the Grand Challenge Investigations and their Investigator teams. In-house teams of computational scientists have been developed at GSFC and JPL to solve the Computational Challenges. Organization: The Office of Aeronautics and Space Technology, jointly with the Office of Space Science and Applications, selected the ESS Investigators through the peer-reviewed NASA Research Announcement process. The ESS Science Team, composed of the Principal Investigators chosen through the ESS NRA, and chaired by the ESS Project Scientist, organizes and carries out periodic workshops for the investigator teams and coordinates the computational experiments of the Investigations. The ESS Evaluation Coordinator focuses activities of the Science Team leading to development of ESS computational and throughput benchmarks. A staff of computational scientists supports the Investigations by developing scalable computational techniques which address their Computational Challenges. Management Plan: At GSFC, a Deputy Project Manager for Applications directs the in-house team of computational scientists. At JPL, a Deputy Task Leader performs the same function. ESS and its Investigators contribute annual software submissions to the High Performance Computing Software Exchange. 
Click on the following image for a graphic display of the ESS Grand Challenge Investigations: Points of Contact: Steve Zalesak Goddard Space Flight Center, Code 934 zalesak@gondor.gsfc.nasa.gov, 301-286-8935 Robert Ferraro Jet Propulsion Laboratory ferraro@zion.jpl.nasa.gov, 818-354-1340 curator: Larry Picha (lpicha@cesdis.gsfc.nasa.gov) MD5{32}: 7661a05ee1e5253ef2cafa9a8954e196 File-Size{4}: 3460 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{34}: ESS Applications Software Research } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/in.house/graphics/pedelty.pict Update-Time{9}: 827948858 MD5{32}: 6bdf51cbf071d3e8a3fda0220097ba12 File-Size{5}: 57918 Type{7}: Unknown Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/gci.html Update-Time{9}: 827948649 url-references{151}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/app.software.html http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/gc.html title{34}: ESS Grand Challenge Investigations keywords{79}: applications challenge contents ess grand investigator software table team the images{48}: graphics graphics/return.gif graphics/return.gif headings{52}: GO TO: the Applications Software Table of Contents body{275}: Points of Contact: Steve Zalesak Goddard Space Flight Center, Code 934 zalesak@gondor.gsfc.nasa.gov, 301-286-8935 Robert Ferraro Jet Propulsion Laboratory ferraro@zion.jpl.nasa.gov, 818-354-1340 GO TO: ESS Grand Challenge Investigator Team Table of Contents MD5{32}: 68da0099c162ad83559da4a71af71bf7 File-Size{3}: 727 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{34}: ESS Grand Challenge Investigations } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/iita.hp/iita.html Update-Time{9}: 827948599 url-references{148}: http://quest.arc.nasa.gov/IITA/iita1.html http://cesdis.gsfc.nasa.gov/ http://www.usra.edu/ http://sdcd.gsfc.nasa.gov http://sdcd.gsfc.nasa.gov/ESD/ title{14}: NASA HPCC IITA keywords{369}: and applications arc association authorizing authors center computing connell data directorate division earth edu excellence flight goddard gov greenbelt html http iita information infrastructure june last lawrence likens lpicha manager maryland michele nasa official picha program quest research revised sciences service space technology the universities usra william images{45}: graphics/hpcc.header.gif graphics/wavebar.gif headings{124}: Information Infrastructure Technology and Applications This web page has moved to http://quest.arc.nasa.gov/IITA/iita1.html body{518}: Authorizing NASA Official: William Likens, Program Manager, Information Infrastructure Technology and Applications Authors: Lawrence Picha (lpicha@usra.edu) & Michele O'Connell (michele@usra.edu), Center of Excellence in Space Data and Information Sciences, Universities Space Research Association, NASA Goddard Space Flight Center, Greenbelt, Maryland. 
Last revised: 29 JUNE 1995 (l.picha) A service of the Space Data and Computing Division, Earth Sciences Directorate, NASA Goddard Space Flight Center. MD5{32}: 26a2ebb30a066d26df278b651376eac0 File-Size{4}: 1393 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{14}: NASA HPCC IITA } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/accomp/94accomp/94accomps.html Update-Time{9}: 827948644 title{31}: NASA HPCC FY 94 Accomplishments images{61}: hpcc.graphics/nasa.meatball.gif hpcc.graphics/hpcc.header.gif headings{29}: Showcase of Accomplishments MD5{32}: ffd58ead8da7b15a3ed94bb73a87eb23 File-Size{4}: 3931 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Keywords{26}: accomplishments hpcc nasa Description{31}: NASA HPCC FY 94 Accomplishments } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/guest.ci/pedelty.html Update-Time{9}: 827948652 url-references{111}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/guest.ci.html mailto:lpicha@cesdis.gsfc.nasa.gov title{20}: Morphology Filtering keywords{45}: curator larry page picha previous return the images{40}: graphics/pedelty.gif graphics/return.gif headings{106}: High Performance Morphology Filtering of Cirrus Emission from Infrared Images Return to the PREVIOUS PAGE body{2985}: Objective: Our goal is to remove cirrus emission from images of the sky generated by the Infrared Astronomical Satellite (IRAS). The cirrus emission looks remarkably like the cirrus clouds which form on Earth, but is caused by cold dust grains in our Milky Way galaxy. This infrared cirrus emission obscures our view of the universe beyond the Milky Way, and by removing it we will create a valuable new public archive, and we may even reveal new, unusual infrared objects. Approach: Previous attempts to remove the cirrus emission have failed because the emission is present on all angular scales. Our approach is to apply the techniques of morphological image processing (a.k.a. mathematical morphology). Morphological image processing is a relatively new set of tools for analyzing form and structure in images. The techniques can be computationally intensive, and so we are implementing the morphology tools on the HPCC ESS testbeds, in particular the MasPar MP-2. Accomplishments: We have dramatically improved our prototype morphological cirrus filter. This improvement was largely enabled by the tremendously faster performance of the MasPar compared to an earlier workstation implementation. We have filtered a few dozen IRAS images and are now analyzing the nature of the objects we find. This analysis involves comparing our source positions with large catalogs which are available via the Internet. We are finding many galaxies which were previously discovered at optical wavelengths, but which had been very obscured in the infrared by the cirrus. Preliminary analytical testing shows that the filter is able to recover obscured galaxies with an accuracy of better than a few percent. A paper describing a detailed comparison of different MasPar implementations of morphological filtering was submitted for review to the Frontiers of Massively Parallel Computation meeting to be held in February, 1995. 
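To make the approach concrete, the following is a minimal serial sketch of the primitive underlying such filters: grayscale erosion over a square structuring element (an opening, erosion followed by dilation, suppresses bright features smaller than the element). The function name and layout are illustrative only, not the project's actual MasPar code, which distributes pixels across processing elements.

/* erode: grayscale erosion of a w x h image over a (2r+1)x(2r+1)
 * square structuring element, clipped at the image border.
 * Illustrative serial sketch only. */
void erode(const float *in, float *out, int w, int h, int r)
{
    int x, y, i, j;

    for (y = 0; y < h; y++) {
        for (x = 0; x < w; x++) {
            float min = in[y * w + x];
            /* minimum over the neighborhood, clipped at the border */
            for (j = -r; j <= r; j++) {
                for (i = -r; i <= r; i++) {
                    int yy = y + j, xx = x + i;
                    if (yy >= 0 && yy < h && xx >= 0 && xx < w
                        && in[yy * w + xx] < min)
                        min = in[yy * w + xx];
                }
            }
            out[y * w + x] = min;
        }
    }
}

The per-pixel neighborhood minimum is independent of every other output pixel, which is why the operation maps so naturally onto the MasPar's data-parallel processing elements.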
Presentations were made to an American Astronomical Society meeting in May, 1994 and to Astronomical Data Analysis Software and Systems symposia in October, 1993 and September, 1994. The morphology kernels were selected to be part of the ESS Parallel Benchmark Suite, and are being benchmarked on a variety of platforms. Significance: We hope to improve our knowledge of the infrared brightnesses of galaxies, add to our understanding of the cirrus emission, and possibly even discover new astronomical objects. We will also publicly deliver the morphology kernel routines optimized for a variety of HPC platforms. Status/Plans: We are continuing analytical testing to determine the accuracy and reliability of our filter. We expect to perform production filtering of the entire IRAS database at one and perhaps two far infrared wavelengths. This new astronomical archive will be made publicly available. Point of Contact: Dr. Jeffrey Pedelty Goddard Space Flight Center/Code 934 pedelty@jansky.gsfc.nasa.gov (301) 286-3065 curator: Larry Picha MD5{32}: 33573874aa82b7d0926a040be5157137 File-Size{4}: 3551 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{20}: Morphology Filtering } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/app.sw/ Update-Time{9}: 827948842 url-references{152}: /hpccm/annual.reports/ess94contents/ bench.html epb.html graphics/ jnnie.html jnniepict.html memory.html midas.html require.html storage.html sw.ex.html title{52}: Index of /hpccm/annual.reports/ess94contents/app.sw/ keywords{86}: bench directory epb graphics html jnnie jnniepict memory midas parent require storage images{192}: /icons/blank.xbm /icons/menu.gif /icons/text.gif /icons/text.gif /icons/menu.gif /icons/text.gif /icons/text.gif /icons/text.gif /icons/text.gif /icons/text.gif /icons/text.gif /icons/text.gif headings{52}: Index of /hpccm/annual.reports/ess94contents/app.sw/ body{396}: Name Last modified Size Description Parent Directory 19-Jul-95 16:12 - bench.html 27-Jun-95 16:17 3K epb.html 23-Jun-95 16:01 1K graphics/ 27-Jun-95 16:13 - jnnie.html 27-Jun-95 16:09 3K jnniepict.html 23-Jun-95 15:54 1K memory.html 13-Jun-95 11:39 1K midas.html 27-Jun-95 15:02 3K require.html 13-Jun-95 11:27 1K storage.html 13-Jun-95 11:38 1K sw.ex.html 19-Jun-95 13:33 3K MD5{32}: a7782e56acd596b344d467ff1592ec36 File-Size{4}: 1733 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{52}: Index of /hpccm/annual.reports/ess94contents/app.sw/ } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw/graphical.html Update-Time{9}: 827948647 url-references{109}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw.html mailto:lpicha@cesdis.gsfc.nasa.gov title{47}: A Graphical User Interface for the FIDO Project keywords{46}: contents curator larry picha return table the images{19}: graphics/return.gif headings{80}: A Graphical User Interface for the FIDO Project Return to the Table of Contents body{3208}: Objective: The Framework for Interdisciplinary Design Optimization (FIDO) project is developing a general computational environment for performing multidisciplinary design using networked heterogeneous computers. 
The goal of the Graphical User Interface (GUI) development is to provide an easy way for the user to monitor and control a design cycle that involves complex programs running on a variety of computers across the network. Approach: The current Motif-based GUI consists of three separate elements: setup, application status, and data display. The setup GUI provides the user with a convenient means of choosing the initial design geometry, material properties, and run conditions from a pre-defined set of files. The interface displays the choices using a series of pop-up Motif data windows, and allows the user to modify and store new condition files. The application status GUI allows the user to monitor the status of a design run. An example of this display is shown in the left figure during the middle of the fourth design cycle. Within this figure, the upper left window displays current run parameters and contains pull-down menus for setting various options. The right window graphically displays the state of the overall design process by changing the color of each labeled box according to the work being done. The color key is shown in the lower left window. Additional detail of the system state can be obtained by selecting the boxes with a 3-D appearance. Doing so brings up an associated window that displays sub-detail for that box. The data display GUI is the third interface element, providing the user with a variety of ways to plot data during the design process. The right figure is an example of a color-coded contour plot of wing surface pressures. The buttons at the top of the plot window provide the user a variety of view controls. Accomplishment: The three GUI elements have been implemented, and were used to produce the results in the figures. The setup interface now provides a full capability for initializing a FIDO run. In addition to contour plots of aerodynamic pressures and structural stresses on the wing, the data display interface provides line-plots of cycle history for a variety of design parameters and data results. Significance: A graphical interface provides easier understanding and access to data than the previous text-based method. Also, less training of users is needed. Status/Plans: In the next version of the interface, more detail will be provided in various sub-windows of the application status GUI. The three elements of the GUI will be combined into a single interface, replacing the text-based menu that currently controls the data display. After the first implementation of FIDO has been tested and documented, the project will move to its next phase: incorporation of the full HISAIR ''Pathfinder'' engineering problem, which will increase the amount of information handled by an order of magnitude. Points of Contact: Raymond L. Gates NASA Langley Research Center (804) 865-1725 raymond.l.gates@larc.nasa.gov Kelvin W. 
Edwards NASA Langley Research Center (804) 864-2290 k.w.edwards@larc.nasa.gov curator: Larry Picha MD5{32}: 5a49918dec734abdc3f13788a8e12efc File-Size{4}: 3698 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{47}: A Graphical User Interface for the FIDO Project } @FILE { http://cesdis.gsfc.nasa.gov/petaflops/archive/workshops/pas.2.pf.obj.html Update-Time{9}: 827948644 url-references{297}: http://cesdis.gsfc.nasa.gov/petaflops/peta.html /people/tron/tron.html mailto:tron@usra.edu /people/oconnell/whoiam.html mailto:oconnell@cesdis.gsfc.nasa.gov http://cesdis.gsfc.nasa.gov/people/lpicha/whoiam.html mailto:lpicha@cesdis.gsfc.nasa.gov http://cesdis.gsfc.nasa.gov/ http://www.usra.edu/ title{54}: Petaflops Enabling Technologies and Applications (PETA) keywords{362}: agenda and application applications basis cesdis challenges computing connell considering could derive determine development edu enabling establish for future identify initiatives issues july lawrence lead lpicha meeting michele moc ops peta petaflops picha production research revised scaled set sterling systems technologies that the thomas tron usra workshop images{79}: peta.graphics/saturn.gif peta.graphics/turb.small.gif peta.graphics/petabar.gif headings{314}: The Workshop on Enabling Technologies for Peta(FL)OPS Computing - 1994 A meeting to establish the basis for considering future research initiatives that could lead to the development, production, and application of petaFLOPS scaled computing systems. Objectives of the Workshop Return to the P.E.T.A. Directory body{1041}: Identify Applications of economic, scientific, and societal importance requiring PetaFLOPS scale computing. Determine Challenges in terms of technical barriers to achieving effective PetaFLOPS computing systems. Identify Enabling Technologies that may be critical to the implementation of PetaFLOPS computers and determine their respective roles in contributing to this objective. Derive Research Issues that define the boundary between today's state-of-the-art understanding and the critical advanced concepts leading to tomorrow's PetaFLOPS computing systems. Set Research Agenda for initial near-term work focused on immediate questions contributing to the uncertainty of our understanding and imposing the greatest risk to launching a major long-term research initiative. Authorizing NASA Official: Paul H. Smith, NASA HPCC Office Senior Editor: Thomas Sterling (tron@usra.edu) Curators: Michele O'Connell (michele@usra.edu), Lawrence Picha (lpicha@usra.edu), CESDIS/USRA, NASA Goddard Space Flight Center. Revised: 31 July 95 (moc) MD5{32}: add61edae57d74c6a910e3b9466db98c File-Size{4}: 2195 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{54}: Petaflops Enabling Technologies and Applications (PETA) } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/iita/space.html Update-Time{9}: 827948659 title{18}: Project S.P.A.C.E. images{18}: graphics/space.gif headings{65}: Project S.P.A.C.E. (Sun, Planets, Asteroids & Comets Exploration) body{1493}: Objective: Improve K-12 educator and student understanding of our solar system based on current data from NASA/JPL explorations. 
Approach: Project SPACE will provide three components: 1) an interactive multimedia space exploration experience (SPACE Simulation); 2) an in-class curriculum (SPACE Curriculum); and 3) access to the NASA/JPL electronic library (SPACE Curriculum Library). The SPACE Curriculum Library will use the Internet as a vehicle to disseminate information nationwide to educators and students. Accomplishments: SPACE Simulation (Mars Phase) is a computer-based interactive multimedia working model of the entire simulation product, and is in its final development stage. This model allows educators and students to plan and execute a robotic mission to Mars. SPACE Curriculum uses an innovative and flexible design tool (Curriculum Web) to create a model curriculum that supports current instructional pedagogy. Use of such a design promotes student interest and aids in the incorporation of space curriculum into classroom settings. Additionally, the Web acts as a means to access the curriculum electronically. SPACE Curriculum Library is currently on-line on the Internet. The first of many curriculum products, such as lesson plans and hands-on activities, are now available. Significance: Project SPACE provides a platform for learners to understand the relevancy of NASA/JPL data obtained from space explorations. MD5{32}: 0f72a1bba8e78ae1f373eb73660b2a34 File-Size{4}: 3013 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Keywords{8}: project Description{18}: Project S.P.A.C.E. } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/app.sw/sw.ex.html Update-Time{9}: 827948654 url-references{175}: http://sdcd.gsfc.nasa.gov/ESS http://sdcd.gsfc.nasa.gov/ESS http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/app.software.html mailto:lpicha@cesdis.gsfc.nasa.gov title{45}: First Submission to the ESS Software Exchange keywords{73}: curator ess gov gsfc http larry nasa page picha previous return sdcd the images{19}: graphics/return.gif headings{74}: First Submission to the ESS Software Exchange Return to the PREVIOUS PAGE body{1534}: Objective: The goal of the HPCC/ESS Software Exchange is to facilitate the exchange and reuse of software. Its specific objective is to make publicly available the software products developed by the ESS Science Team. Approach: The Software Exchange has been implemented as part of the World Wide Web (WWW). The WWW was developed at CERN as a way of facilitating the exchange of information on the Internet. The use of the WWW has grown exponentially, mainly due to the creation of the Mosaic program by the NCSA. The WWW is a collection of hypertext documents distributed throughout the world, and various Web browsers offer 'point and click' access to a wide variety of Internet resources. Accomplishments: The ESS Project established a software repository accessible via the World Wide Web (WWW) in March on its project servers at Goddard Space Flight Center (http://sdcd.gsfc.nasa.gov/ESS) and at the Jet Propulsion Laboratory. Status/Plans: The ESS project software repository is operational, and its contents will continue to expand with additional annual contributions from the ESS Grand Challenge teams and Guest Computational investigators. In FY95 we will solicit initial contributions from the Phase 2 Guest Computational Investigators. 
The ESS project staff scientists will continue to contribute the results of their development efforts as they come to fruition. Point of Contact: Dr. Jeffrey Pedelty Goddard Space Flight Center/Code 934 pedelty@jansky.gsfc.nasa.gov (301) 286-3065 curator: Larry Picha MD5{32}: 2679e7cb995b81c6db0ef5b28d97e571 File-Size{4}: 3117 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{45}: First Submission to the ESS Software Exchange } @FILE { http://cesdis.gsfc.nasa.gov/admin/inf.eng/ Update-Time{9}: 820867014 Description{24}: Index of /admin/inf.eng/ Time-to-Live{8}: 14515200 Refresh-Rate{7}: 2419200 Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Version{3}: 1.0 Type{4}: HTML File-Size{4}: 1277 MD5{32}: db01dfd1333c2eacab37d02bf24e9768 body{317}: Name Last modified Size Description Parent Directory 13-Jul-95 12:12 - CESDIS1.small.gif 16-Mar-95 10:48 12K ie.gif 09-Jun-95 11:40 31K inf.eng.html 15-Jul-95 09:53 7K inf.eng.html.txt 22-Mar-95 07:42 6K opp.html 02-May-95 12:36 3K wave.tar 07-Apr-95 19:59 768K wave.tutorial.fin/ 27-Jun-95 13:32 - headings{24}: Index of /admin/inf.eng/ images{149}: /icons/blank.xbm /icons/back.xbm /icons/image.xbm /icons/image.xbm /icons/text.xbm /icons/text.xbm /icons/text.xbm /icons/unknown.xbm /icons/menu.xbm keywords{77}: cesdis directory eng fin gif html inf opp parent small tar tutorial txt wave title{24}: Index of /admin/inf.eng/ url-references{98}: /admin CESDIS1.small.gif ie.gif inf.eng.html inf.eng.html.txt opp.html wave.tar wave.tutorial.fin/ } @FILE { http://cesdis.gsfc.nasa.gov/PAS2/wg4.text Update-Time{9}: 827948617 Partial-Text{20142}: Report of Working Group 4 INFLUENCE OF PARALLEL ARCHITECTURE ON HPC SOFTWARE Chair: Burton Smith, Tera Computer Company Co-chair: Thomas Sterling, USRA CESDIS Introduction ============ Architectural parallelism is the principal opportunity that is driving the aggressive evolution of HPC systems, achieving rapid gains in peak performance. Parallelism is also the dominant factor challenging the effective application of HPC architecture both in terms of execution efficiency and programmability. Recent trends in system development have favored hardware implementation solutions to deliver peak performance while relegating the challenge of programmability and efficiency to envisioned future system software solutions. As a consequence, programming of HPC systems in general has proven significantly more difficult than for conventional supercomputers, while delivered sustained performance is highly variable across the domain of HPC applications. The purpose of this report is to examine the symbiotic relationship between parallel architecture and system software in order to reveal the attributes of parallel architecture that impact the ability of system software to provide an effective computing environment. Challenge of Parallel Architecture ================================== Parallel computing has been the exclusive realm of HPC, at least until recently. To achieve high performance, the added dimension of parallelism has been imposed on hardware structure designers, applications programmers, and system software developers in addition to all of the other important aspects associated with employing conventional computers. 
While affording the promise of orders-of-magnitude performance advantage, parallelism in all of its manifestations has greatly complicated the problem of programming, reduced the generality of application, and compromised robustness of system operation. Together, the consequence of these negative effects is overall lower efficiency, longer system development time, high cost, and limited market when compared to mainstream computing systems. To offset these limitations, system software researchers have sought innovative approaches to HPC system management, but with little overall practical advantage. The possibility must be considered that the problem is intrinsic to the class of architectures being offered in the HPC arena and that system software may never be able to adequately compensate for their fundamental weaknesses. If so, then HPC architecture too must advance beyond its current state in conjunction with system software to realize the ultimate promise of scalability. Parallel HPC structures employ distributed integrated resources, which distinguishes them from conventional uniprocessors and imposes behavior characteristics that limit or at least complicate efficient programming and execution. Foremost among these is latency of data movement: the time required (usually measured in cycles) to perform a remote memory access by a requesting processor. Whether managed through message passing or shared memory primitives, the length and variability of communication latencies result in a sensitivity to locality that demands that tasks and their operand objects be in proximity in order to avoid long waiting times for access. A second important aspect of distributed structures is the need to expose, allocate, and balance parallel activities across the system processing resources to achieve high utilization, efficiency, and performance. But the need to spread objects apart and exploit more parallelism, thereby precluding local starvation, may be in direct conflict with the need to minimize their relative latencies. The management of parallel flow control requires mechanisms whose realization, especially in software, may easily impose unacceptable overhead on the useful work being processed. Overhead can add undue burden on the execution resources, force a lower bound on the parallelism granularity that can be effectively exploited, thereby limiting useful program parallelism, and place an upper bound on scalability for a given application problem and size. (For example, if each task carries a fixed overhead of o cycles, a task performing g cycles of useful work achieves at most g/(g+o) efficiency, so acceptable efficiency forces the grain size g to be large relative to o.) The combined challenges of latency, starvation, and overhead derived from attempting to exploit distributed computing resources may be beyond the capability of system software to circumvent in the general case without some degree of architecture support. Economic Factors ================ The HPC market continues to represent only about 1% of the total annual sales of computing systems. Yet the time and cost of development historically have exceeded those of modern microprocessor architectures. The market share and resulting revenues have proven inadequate to support many independent vendors developing unique parallel architectures and supporting system software. Compounding this is the rapid rate of evolution of microprocessor technology, which has recently exceeded 50% performance gain per year. Competing with this rate of performance advance while engaging in lengthy design cycles has been shown to be risky. 
These two trends have driven the HPC community to leverage the hardware development investment, rapid performance advances, and economy of scale of the microprocessor industry by integrating microprocessors and other commodity components in scalable ensembles. Mechanisms embodied in modern microprocessors have been devised largely to support the scientific workstation or, at the low end, the personal computer and laptop. Design tradeoffs preclude significant enhancements not targeted toward these primary markets. In particular, capabilities specific to HPC systems were unlikely to be incorporated, given their minimal market value. HPC vendors have either implemented basic structures, relying on programmers and system software to harness the available resources, or developed auxiliary special purpose devices to be included with the commodity parts to augment the functionality and achieve more effective scalable parallel computation. The commercial vendor offerings span this range of choices from clusters of workstations to tightly coupled shared memory multiprocessors. But the choice of developing specialty parts has to be carefully weighed against the cost and lead time incurred and the limited market benefits. In general, low cost and good reliability of HPC systems will rely on high-volume hardware components. Fortunately, very recent trends in the mainstream commercial computing market have resulted in new capabilities that may offer new opportunities for HPC system architecture. Latency, even in uniprocessor-based systems, has emerged as a problem no longer entirely capable of being resolved through caching methods alone. As a consequence, microprocessor designers are incorporating prefetch and multiple memory access issue mechanisms to mitigate the effects of latency. ... Clustering, at least of small numbers of workstations, is becoming a common way to achieve some performance gain, albeit at the very coarse-grain level of parallelism. New networks and interfaces are being devised to greatly reduce the time, especially in software, of moving data between workstations. ... Finally, using a small number of processors in a single unit is expanding the performance available to the mainstream server market. The symmetric multiprocessor (SMP) is emerging as an important new mid-range product with a substantial potential market. Microprocessor designs are incorporating sufficient mechanisms to support cache coherence by means of snooping techniques on a high-speed common bus. ... Together, these new trends see microprocessor designs beginning to address the concerns of HPC architecture, but driven by the requirements of more lucrative market sectors. Parallelism is being seen as good at all scales, not just at the very high end as in the past, and it will likely pervade the whole industry in the near future. As this new market-driven constituency grows at moderate scale, the high end will benefit as well. This is opening new opportunities for HPC architecture and should influence future directions and designs. ... Software Scaling ================ Parallel system software and applications are expensive to develop and may not be commercially viable if applied only to the HPC sector. Most HPC applications are home-grown by dedicated computational scientists, with few commercially available HPC applications. System software represents the best the vendors can provide given limited resources, but this continues to be inadequate to the task, although the overall quality is improving. 
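One reason such home-grown codes succeed where general system software struggles is that their authors hide latency by hand, applying in software the same prefetch discipline described above for hardware. A hedged sketch using the standard MPI-1 nonblocking primitives (the two compute routines are assumed here for illustration; a matching send is posted on the partner node):

/* overlap: post the remote fetch early, overlap useful local work
 * with the communication, and block only at the point of use. */
#include <mpi.h>

#define N 4096

extern void compute_local_block(double *a, int n);  /* assumed local work */
extern void use_remote_block(double *b, int n);     /* consumes remote data */

void step(double *local, double *remote, int partner)
{
    MPI_Request req;
    MPI_Status status;

    /* The software "prefetch": the receive is posted long before use. */
    MPI_Irecv(remote, N, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD, &req);

    /* Communication latency is hidden behind independent local work. */
    compute_local_block(local, N);

    /* Block only when the remote operand is actually needed. */
    MPI_Wait(&req, &status);
    use_remote_block(remote, N);
}

The technique works only when enough independent local work exists to cover the latency, which is exactly the locality and granularity constraint discussed earlier.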
Like their hardware counterparts, software systems for HPC environments will have to be derived, to a significant degree, from those products developed for moderate-scale parallel systems such as SMPs. This means that applications and system software will have to be developed to scale up and down across system configurations in order to attract adequate market share on SMPs while remaining capable of exploiting HPC resources for improved performance or problem size. In order to meet this objective, HPC architectures will have to support execution models found on the low as well as the high end of the parallel system spectrum. In particular, both shared memory and distributed computing models need to be supported, even within a single application. To make better use of HPC resources and to share such systems effectively among a number of applications, HPC architecture will have to become more virtual in both space and time. This is particularly useful when large applications are made up of a number of separate parallel codes, such as would be found in complex interdisciplinary problems. HPC Architecture Support for System Software ============================================ While it would be ideal if HPC architecture itself resolved all challenges presented by distributed resources, such is unlikely in the next few generations, and system software will still be required to address many of the difficulties. Even if architecture cannot eliminate the problems for system software, it should incorporate those additional mechanisms that would facilitate system software in performing its services. Some examples follow. ... Performance tuning is poorly supported in most HPC architectures. Due to the distributed nature of the system, the programmer must be involved in a wide array of decisions related to problem and data partitioning, resource allocation, communication, scheduling, and synchronization. In order to seek optimal performance, system behavior has to be observable. Often such behavior falls outside the name space of the system instruction set. Performance monitoring mechanisms are essential to provide adequate feedback to the parallel software designer, and this must include access to performance-critical resources such as networks, caches, synchronization primitives, and others. Many of these require additional architecture support to reveal and quantify. Such mechanisms, if provided, can be mapped into the address space of the architecture and therefore made accessible to performance monitoring tools. Capabilities that should be provided include any facility critical to performance, such as shared resources that might impose bottlenecks due to contention or insufficient bandwidth. Metrics and means for observing key communications channels fall into this category. Cache statistics are particularly important as they determine the effective locality of the code and data placement and may have a significant impact on performance. ... Beyond performance monitoring, robustness should be supported through architectural facilities that enable any part of the system to ascertain and verify the operational health of any other major subsystem. These should include alarm signals that indicate some system failure mode and permit recovery routines to be initiated. Such mechanisms can aid in achieving high availability and confidence in hardware and software. Authenticated and protected messages through architectural support should also be included to enhance reliability. 
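What "mapped into the address space" means in practice can be sketched in a few lines of C. Everything here is hypothetical: the device node, the page size, and the counter layout are inventions for illustration, not any real system's interface.

/* perfmon.c: hypothetical sketch of memory-mapped performance
 * counters.  Once mapped, a monitoring tool samples counters with
 * ordinary loads; no system call sits on the critical path. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define PERF_PAGE    4096
#define CACHE_MISSES 0          /* hypothetical word offsets */
#define NET_STALLS   1

int main(void)
{
    volatile unsigned long *ctr;
    int fd = open("/dev/perfctr", O_RDONLY);   /* hypothetical device */

    if (fd < 0) {
        perror("open /dev/perfctr");
        return 1;
    }
    ctr = mmap(0, PERF_PAGE, PROT_READ, MAP_SHARED, fd, 0);
    if (ctr == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("cache misses: %lu  network stalls: %lu\n",
           ctr[CACHE_MISSES], ctr[NET_STALLS]);
    munmap((void *)ctr, PERF_PAGE);
    close(fd);
    return 0;
}

The same mapping approach extends to the network and synchronization metrics called for above; the architectural requirement is only that the hardware expose its counters somewhere a user-level tool can read them cheaply.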
Commercial applications need subsystem parallelization to remove bottlenecks, especially in the area of I/O. File systems, storage systems, networking, and database management all represent examples where architecture support can greatly enhance system software functionality. Multiple I/O models, even in a single application, should be supported including central, distributed, mapped, and stream models and require some architecture enhancements. ... Future Considerations ===================== HPC systems are employed generally in rather simple and primitive ways. In the next few years the sophistication of system usage will increase dramatically as all aspects of system operation become virtualized in space and time and new applications are enabled by the availability of large scale computing systems. One consequence of a new generation of advanced applications is that new data types will become pervasive and require effective architecture support. Objects, persistent objects, and object stores will become routine and require direct architecture support. Another data type of future importance is the ``image'' structure. As this represents one of the most rapidly growing types of information exchange, these Mbyte objects will in the future be treated as atomic entities in various compressed and raw forms. Architecture support for video image streams will also become prevalent. ... The primary target for HPC system usage is response time sensitive work; that is, applications for which the user seeks solution in the shortest possible time interval. This is premium value computing, requiring dedicated resources. But where science accomplishments may permit a longer time frame, a second class of processing resource, the non-response time sensitive workload, may take advantage of brief periods of idle or partially available systems to make progress to solution. Cycle harvesting or scavenging methods have been employed, particularly on workstation clusters, primarily on an experimental basis for some time. This capability will become normal operating practice and replace the uniprocessor background tasks. To do so will require architecture support for rapid context switch, checkpointing, and managing the distributed flow control state. This capability will extend the utilization of the HPC systems, making them more cost effective. ... There are important applications that are less well suited to the capabilities of general processors and can be greatly accelerated through special purpose functional structures. HPC architectures in the future will require the ability to incorporate special purpose devices and support heterogeneous computing. Workstations and personal computers today provide open system interfaces through standardized buses and address space mapping. Similarly, efficient interfaces that permit data streaming through such units and task scheduling to take advantage of the availability of these resources will become essential and greatly enhance the value of HPC technology. ... Recommendations =============== For many reasons, a number of which have been presented in this report, HPC system architecture is entering a new phase in its evolution. This transition is driven in part out of necessity and in part in response to new opportunities. While many important computations have been successfully performed on large HPC systems, it is clear that to date the current generation does not represent an adequate capability in programmability, generality, or effectiveness. 
Nor has it gained sufficient market share to be a sustainable commercial product. The following general recommendations are in response to the previous findings and are offered to advance the state of HPC system architecture to resolve the critical issues of capability, usability, reliability, and marketability. ... 1. The overriding objective must be to encourage the development of more usable, broadly applicable, and robust systems at high scales. Foremost among concerns is the requirement to dramatically reduce locality sensitivity, which seriously inhibits programmability. Sharing of resources must be simplified through submachine virtualization both in space and time. Parallelization of subsystems such as file systems, networking, and database management is key to removing bottlenecks. Performance monitoring mechanisms must be enhanced for performance tuning. Configuration management, resource management, and capacity planning must all be strongly supported for flexible and easily manageable systems. ... 2. Ensure that common parallel programming models are architecturally supported from low-end to high-end systems. Both shared memory and distributed computing methods should be supported even from within a single application. Multiple I/O models should also be supported even from within a single application. ... 3. Raise the lowest common denominator through community forums. Establish the minimum needs that should be ubiquitous across HPC platforms. Third-party software vendors must be able to depend on the availability of a basic set of capabilities to guarantee portability of software products across systems. Performance monitoring, low-overhead synchronization, synchronized global clocks, high-reliability messaging, and availability features are all examples of architecture facilities that should be common among HPC systems in order to encourage ISV investment and software development. ... 4. Develop high-volume building blocks that enable programmable scalable systems. Such building blocks must be consistent with the economic business models of mainstream parallel computing such as SMPs, but be capable of scaling to HPC-sized configurations, responding to the increased demands such systems impose. These enable investment in mass-market technologies to directly impact HPC development costs and design cycle time while ensuring scalable applications and system software able to migrate both up and down. Large-volume commodity components are the key to high quality and low cost. Conclusions =========== This brief report has reviewed the relationship between HPC system architecture and software to expose architectural issues that either complicate or inadequately support the needs of system software development. It has been shown that current-generation HPC architecture is in part at fault for the difficult challenges confronting system software. Architecture latency, starvation, and overhead resulting from distributed computing resources all combine to restrict programmability, generality, and effectiveness. At the same time, market forces limit the flexibility of the HPC design space, constraining system development to employ commodity mass-market components. Fortunately, there is a rapid move to modest-scale parallelism, even in the mainstream computing sector. Both processor architectures and software products are beginning to be developed with parallelism in mind, including addressing the very problems confronting HPC systems architecture and software at present. 
This will provide the new opportunity for HPC to merge with the mainstream and share the benefits of economy of scale both in hardware and software. But it requires that HPC systems designers and applications programmers develop scalable products that can migrate both up and down parallel system scale. In the meantime, a number of capabilities that should be included in HPC architecture in support of system software were identified. Among these were support for performance monitoring and enhanced availability features as well as a number of mechanisms for efficient dynamic resource management. It is expected that much of the burden of presenting HPC applications programmers with programmable and effective execution environments will continue to rely on sophisticated system software but that advances in architecture are essential if the full promise of HPC systems is to be realized. ... MD5{32}: d4762015b941129d5dd51b8ad2b31f53 File-Size{5}: 20375 Type{4}: Text Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Keywords{7401}: ability able about accelerated access accessible accomplishments achieve achieving across activities add added addition additional address addressing adequate adequately advance advanced advances advantage affording against aggressive aid alarm albeit all allocate allocation alone also although among and annual another any apart applicable application applications applied approaches architectural architecturally architecture architectures are area arena array ascertain aspect aspects associated atomic attempting attract attributes augment authenticated auxiliary availability available avoid background balance bandwidth based basic basis become becoming been beginning behavior being benefit benefits best better between beyond blocks both bottlenecks bound brief broadly building burden burton bus buses business but cache caches caching can capabilities capability capable capacity carefully case catagory central cesdis chair challenge challenges challenging channels characteristics checkpointing choice choices circumvent class clear clocks clustering clusters coarse code codes coherence combine combined commercial commercially commodity common communication communications community company compared compensate competing complex complicate complicated components compounding compressed compromised computation computational computations computer computers computing concerns conclusions confidence configuration configurations conflict confronting conjunction consequence considerations considered consistent constituency constraining contention context continue continues control conventional cost costs counterparts coupled critical current cycle cycles data database date decisions dedicated degree deliver delivered demands denominator depend derived design designer designers designs determine develop developed developers developing development devices devised difficult difficulties dimension direct directions directly distinguishes distributed does domain dominant down dramatically driven driving due dynamic easily economic economy effective effectively effectiveness effects efficiency efficient either eliminate emboddied emerged emerging employ employed employing enable enabled encourage end engaging enhance enhanced enhancements ensembles ensure ensuring entering entirely entities environment environments envisioned especially essential 
establish even evolution examine examples exceeded exchange exclusive execution expanding expected expensive experimental exploit exploited exploiting expose extend facilitate facilities facility factor factors failure fall falls fault favored features feedback few file finally findings flexibility flexible flow follow following for force forces foremost forms fortunately forums found frame from full functional functionality fundamental future gain gained gains general generality generally generation generations given global good grain granularity greatly group growing grown grows guarantee hardware harness harvesting has have health heterogeneous high highly historicly home hpc ideal identified idle image impact implemented implemention importance important impose imposed imposes improved improving inadequate inadequately include included including incorporate incorporated incorporating increase increased incurred independent indicate industry influence information inhibits initiated innovative instruction insufficient integrated integrating interdisciplinary interfaces interval into intrinsic introduction investment involved issue issues isv its itself just key laptop large largely latencies latency lead least length lengthy less level leverage like likely limit limitations limited limiting little local locality long longer low lower lowest lucrative made magnitude main mainstream major make making manageable managed management managing manifestations many mapped mapping market marketability markets mass may mbyte means meantime measured mechanisms meet memory merge message messages messaging methods metrics microprocessor microprocessors mid might migrate mind minimal minimize minimum mitigate mode models moderate modern modest monitoring more most move movement moving much multiple multiprocessor multiprocessors must name nature near necessity need needs negative networking networks never new next non nor normal not number numbers object objective objects observable observing occur offer offered offerings offset often one only open opening operand operating operation operational opportunities opportunity optimal order orders other others out outside overall overhead overriding parallel parallelism parallelization part partially particular particularly partitioning parts party passing past peak per perform performance performed performing periods permit persistent personal pervade pervasive phase place placement planning platforms poorly portability possibility possible potential practical practice preclude precluding prefetch premium present presented presenting prevalent previous primarily primary primitive primitives principal problem problems processed processing processor processors product products program programmability programmable programmer programmers programming progress promise protected proven provide provided proximity purpose quality quantify raise range rapid rapidly rate rather raw realization realize realized realm reasons recent recently recommendations recovery reduce reduced related relationship relative relegating reliability rely relying remote remove removing replace report represent represents requesting require required requirement requirements requires requiring researchers resolve resolved resource resources responding response restrict result resulted resulting reveal revenues reviewed risky robust robustness rountines routine sales same scalability scalable scale scales scaling scavenging scheduling science scientific scientists second sector sectors see 
seek seeks seen sensitive sensitivity separate seriously server services set share shared sharing shortest should shown signals significant significantly similarly simple simplified single size sized small smith smp smps snooping software solution solutions some sophisticated sophistication sought space span special specialty spectrum speed spread standardized starvation starvations state statistics sterling still storage stores stream streaming streams strongly structure structures submachine substantial subsystem subsystems successfully such sufficient suited supercomputers support supported supporting sustainable sustained switch symbiotic symmetric synchronization synchronized system systems take target targetted task tasks techniques technologies technology tera terms than that the their them then there thereby therefore these they third this thomas those through tightly time times today together too tools total towards tradeoffs transition treated trends tuning two type types ubiquitous ultimate unacceptable undue uniprocessor uniprocessors unique unit units unlikely until upper usable usage use useability useful user using usra usually utilization value variability variant various vendor vendors verify very viable video virtual virtualization virtualized volume waiting way ways weaknesses weighed well were when where whether which while whole whose wide will with within without work working workload workstation workstations would year years yet Description{30}: Report of Working Group 4 } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/nren/atdnetgraphic.html Update-Time{9}: 827948658 url-references{114}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/nren/atdnet.html mailto:lpicha@cesdis.gsfc.nasa.gov title{6}: ATDNet keywords{45}: curator larry page picha previous return the images{39}: graphics/ATDnet.gif graphics/return.gif headings{82}: Application Technology Demonstration Network (ATDNet) Return to the PREVIOUS PAGE body{129}: Point of Contact: Pat Gary NASA Goddard Space Flight Center (301) 286-9539 pat.gary@gsfc.nasa.gov curator: Larry Picha MD5{32}: b927de1848a6a71a2b8d6181ce68411a File-Size{3}: 563 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{6}: ATDNet } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/visitor/oldTXTstuff.html Update-Time{9}: 827948830 url-references{39}: #br #hdr #st #id #dl #ol #ul index.html title{21}: Basic Text Formatting keywords{109}: and back breaks definition headers indenting index line lists ordered paragraphs simple styles the unordered images{31}: shoelacebar.gif shoelacebar.gif headings{179}: Basic Text Formatting Paragraphs and Simple Line Breaks Headers Styles Indenting Definition Lists Ordered Lists Unordered Lists Back to the index Paragraphs and Simple Line Breaks body{592}: (If you don't see a solution to a formatting problem you have, try checking my HTML 2.0 Extensions section of the index.) Unless you specify otherwise, HTML text will wrap in the browser window unaided by text formatting tags. But it does not recognize carriage returns, so you have to format those yourself. The most commonly used tags are probably those used for line breaking within chunks of text. 
There are two kinds of tags generally used for this type of text formatting: the simple line break tag <br> and the paragraph tag <p>. The line break tag <br> acts like a simple carriage return: MD5{32}: 2eeb2773d870a57a2d59c357b85b0d4d File-Size{4}: 8067 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{21}: Basic Text Formatting } @FILE { http://cesdis.gsfc.nasa.gov/pub/linux/drivers/vortex.patch Update-Time{9}: 820866793 Time-to-Live{8}: 14515200 Refresh-Rate{7}: 2419200 Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Version{3}: 1.0 Type{5}: Patch File-Size{4}: 3081 MD5{32}: 37d0a4789a0c65aab0a6226c235b850b } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/cas.hp/graphics/ Update-Time{9}: 827948816 url-references{97}: /hpccm/cas.hp/ cas.gif cas.gif%20copy hpcc.header.gif hpccsmall.gif nasa.meatball.gif wavebar.gif title{32}: Index of /hpccm/cas.hp/graphics/ keywords{74}: cas copy directory gif header hpcc hpccsmall meatball nasa parent wavebar images{133}: /icons/blank.xbm /icons/menu.gif /icons/image.gif /icons/text.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif headings{32}: Index of /hpccm/cas.hp/graphics/ body{284}: Name Last modified Size Description Parent Directory 09-Jun-95 11:10 - cas.gif 15-Jun-95 14:44 11K cas.gif copy 23-Mar-95 14:58 17K hpcc.header.gif 18-May-95 13:28 1K hpccsmall.gif 23-May-95 11:55 2K nasa.meatball.gif 08-Nov-94 10:12 3K wavebar.gif 08-Nov-94 10:12 2K MD5{32}: fc91614c4787000f870fd61e0cab64d9 File-Size{4}: 1174 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{32}: Index of /hpccm/cas.hp/graphics/ } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/sys.sw/vrfast.html Update-Time{9}: 827948656 url-references{115}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/sys.software.html mailto:lpicha@cesdis.gsfc.nasa.gov title{59}: Initiated Use of VR-FAST within ESS Investigator Community keywords{45}: curator larry page picha previous return the images{78}: http://cesdis.gsfc.nasa.gov/hpccm/hpcc.graphics/vrfast.gif graphics/return.gif headings{88}: Initiated Use of VR-FAST within ESS Investigator Community Return to the PREVIOUS PAGE body{2642}: Objective: Develop methods to analyze high rate/high volume data generated by ESS Grand Challenges. Approach: Investigate turn-key virtual environment compatible with existing NASA science community visualization methods and software. Chose to adapt the Flow Analysis Software Toolkit (FAST) developed and maintained by NAS/ARC to a virtual environment. Accomplishments: Received delivery of SGI Onyx (2 processors, 2 Reality Engine graphics subsystems) and Fakespace BOOM 3C at GSFC. Ported Virtual Reality FAST (VR-FAST) to SGI Onyx and incorporated use of BOOM. Initiated use of VR-FAST within the ESS investigator community (e.g., Richard Rood from the GSFC Laboratory for Atmospheres, Michele Rienecker from the GSFC Laboratory for Hydrospheric Processes). Acquired GSFC expertise of VR-FAST and associated devices to allow for quick modifications initiated through investigator responses. 
Initiated use of virtual environment devices with VIS-5D, a visualization package developed at the University of Wisconsin with support from NASA Marshall Space Flight Center. Demonstrated VR-FAST to John Klineberg/GSFC Center Director, Lee Holcomb/Code R, and France Cordova/NASA Chief Scientist. Significance: The job of the NASA scientist increasingly involves sifting through mountains of acquired and computationally generated data. The essence of virtual reality is to deal with the data in the same way that you deal with the actual world - through the visual cortex and motor responses, rather than through artificial interfaces. The creation of an operational virtual reality environment for rapid data searching and manipulation is required to validate the theory and transfer it to the NASA science community. Status/Plans: Phase II of the VR FAST project is currently being planned and will be implemented in the upcoming year. This phase will bring a marked increase in the capabilities available to investigators using VR FAST. Specific plans include the following: Incorporate additional data exploratory capabilities within VR-FAST to enhance scientific discovery opportunities. Continue to receive and incorporate feedback from ESS investigators for the purpose of evaluating and enhancing VR FAST and virtual environments in general. Receive virtual instrument and gesture glove at GSFC to allow access to additional VR FAST capabilities. Continue to analyze the application of virtual environment technology to other data analysis software (e.g., VIS-5D, SGI Explorer). Point of Contact: Dr. Horace Mitchell Goddard Space Flight Center/Code 932 hmitchel@vlasov.gsfc.nasa.gov (301) 286-4030 curator: Larry Picha MD5{32}: 92e2cc31d859ae7a1186af05e86ff392 File-Size{4}: 3396 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{32}: Initiated Use of VR-FAST within } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/factsheets.html Update-Time{9}: 827948599 url-references{1137}: mailto:lpicha@cesdis.gsfc.nasa.gov http://cesdis.gsfc.nasa.gov/ #intro #speed #components #cas #ess #iita #ree #contrib #tera #imp #resource #contents #contents #cas #ess #ree #iita #contents http://cesdis.gsfc.nasa.gov/hpccm/cas.hp/cas.html mailto:feiereis@ames.arc.nasa.gov mailto:p_hunter@aeromail.hq.nasa.gov #contents http://cesdis.gsfc.nasa.gov/hpccm/ess.hp/ess.html mailto:fischer@jacks.gsfc.nasa.gov mailto:p_hunter@aeromail.hq.nasa.gov #contents http://cesdis.gsfc.nasa.gov/hpccm/iita.hp/iita.html mailto:William_Likens@qmgate.arc.nasa.gov mailto:p_hunter@aeromail.hq.nasa.gov #contents http://cesdis.gsfc.nasa.gov/hpccm/ree.hp/ree.html mailto:leon@telerobotics.Jpl.Nasa.Gov mailto:davidson@telerobotics.Jpl.Nasa.Gov mailto:p_hunter@aeromail.hq.nasa.gov #contents #contents http://cesdis.gsfc.nasa.gov/petaflops/definition.html #contents #contents http://cesdis.gsfc.nasa.gov/hpccm/accomp/94accomp/94accomps.html http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/main94.html http://cesdis.gsfc.nasa.gov/hpccm/hpcc.nasa.html http://www.hpcc.gov/blue96/index.html http://www.hpcc.gov/imp95/index.html http://www.hpcc.gov/ #contents title{15}: HPCC Fact Sheet references{682}: "The Grand Challenge in cosmology is not only to collect the data needed for a deep view into the formation of the cosmos... but also to create an accurate model of the cosmos..." 
"[the REE project] addresses critical needs to both the Offices of Space Science and Mission to Planet Earth. A new generation of on-board computers will enhance scientific return, reduce operations costs, and mitigate down link limitations...." The technologies used in the experiments, coupled with those in support of the National Research and Education Network, lead to high-speed network communications that can be delivered commercially at one-tenth of today's cost of providing the same service. keywords{1478}: accelerate accelerating accomplishments aeromail aeronautics aerosciences alkalai america american ames and annual another antarctica application applications arc blue book cas center century cesdis change children comments communications community compare competitiveness component components computational computing contents contributions convergence coordination cray critical curriculum data davidson developed development directly documentation earth educational enabled engineering ess every excellence expects experimentation exploration feiereis feiereisen fischer flops fold for formation foundation from future galaxy giga gigaflops global gov graphic great gsfc has high home hpcc hunter iita implementation importance increase industry information infrastructure instructions internet into introduction isolated jacks james john jpl large larry later ldp leon level likens live lpicha meet models more multiple nasa nation national new next observatories oct office our over page paul performance petaflops picha plan planned play please pointers previous program project provides public qmgate quality quest questions ree related remote report requirements resources return revision role scale science sciences selected send service shaping simulated sites space speed strengthening structure supercomputer supports table taken technologies technology telerobotics tera teraflops the this tools top unique use vitality web welcome wide william with world year your images{703}: hpcc.graphics/hpcc.header.gif hpcc.graphics/lites2.gif hpcc.graphics/aeroplane.gif hpcc.graphics/lites2.gif hpcc.graphics/return.gif hpcc.graphics/lites2.gif hpcc.graphics/return.gif hpcc.graphics/lites2.gif hpcc.graphics/return.gif hpcc.graphics/lites2.gif hpcc.graphics/cas.gif hpcc.graphics/return.gif hpcc.graphics/lites2.gif hpcc.graphics/ess.gif hpcc.graphics/return.gif hpcc.graphics/lites2.gif hpcc.graphics/gonzaga1.gif hpcc.graphics/return.gif hpcc.graphics/lites2.gif hpcc.graphics/return.gif hpcc.graphics/lites2.gif hpcc.graphics/return.gif hpcc.graphics/lites2.gif hpcc.graphics/return.gif hpcc.graphics/lites2.gif hpcc.graphics/return.gif hpcc.graphics/lites2.gif hpcc.graphics/return.gif headings{939}: The National Aeronautics and Space Administration's (NASA) High Performance Computing and Communications (HPCC) Program Welcome to NASA HPCC! Last Revision: NOV 20, 1995 (ldp) Introduction RETURN to the Table of Contents The Speed of Change RETURN to the Table of Contents Components of the NASA HPCC Program RETURN to the Table of Contents Computational Aerosciences (CAS) Project RETURN to the Table of Contents Earth and Space Sciences (ESS) Project RETURN to the Table of Contents Information Infrastructure Technology and Applications (IITA) RETURN to the Table of Contents Remote Exploration and Experimentation (REE) Project RETURN to the Table of Contents NASA HPCC Program Contributions RETURN to the Table of Contents Teraflops: What is it?? 
RETURN to the Table of Contents Importance of NASA's Role in HPCC RETURN to the Table of Contents Resources: pointers to more HPCC related documentation RETURN to the Table of Contents body{26750}: BACKGROUND="hpcc.graphics/backdrop.gif"> To accelerate the development and application of high-performance computing technologies to meet NASA's aeronautics, Earth and space sciences, and engineering requirements into the next century. You're here because you need or want an explanation and overview of the NASA HPCC Program, its mission, and how it implements and utilizes taxpayer assets. INSTRUCTIONS: You may click on the Table of Contents item (below) you're interested in and go directly to that subject. You do have the option of scrolling through the entire document, which is organized according to the Table of Contents. You may return to your starting point by clicking on the ''back'' option of your browser (e.g., Mosaic or Netscape) at any time. Please send your comments and/or questions directly to Larry Picha (lpicha@cesdis.gsfc.nasa.gov) at the Center of Excellence in Space Data and Information Sciences. Previous Revision: Oct 3, 1995 (ldp) Table of Contents Introduction The Speed of Change Components of the NASA HPCC Program Computational Aerosciences (CAS) Project Earth and Space Sciences (ESS) Project Information Infrastructure Technology and Applications (IITA) component Remote Exploration and Experimentation (REE) Project NASA HPCC Program Contributions Teraflops : What is it?? Importance of NASA's Role in the National HPCC Program Resources: pointers to more HPCC related documentation In recognition of the critical importance of information technologies, the United States Government created the High Performance Computing and Communications (HPCC) Program in 1991. The goal of the Program was to foster the development of high-risk, high-payoff systems and applications that will most benefit America. The NASA HPCC program is a critical component of this government-wide effort; it is dedicated to working with American businesses and universities to increase the speed of change in research areas that support NASA's aeronautics, Earth, and space missions. By investing national resources in the NASA HPCC Program, America will be able to maintain its worldwide leadership position in aerospace, high-speed computing, communications, and other related industries. Although the High Performance Computing and Communications budget is a small percentage of the NASA budget, it has a significant impact on the Agency's mission, as well as on U.S. industry. NASA leads the planning and coordination of the software element of the Federal High Performance Computing and Communications (HPCC) Program and is also an important participant in the National Information Infrastructure initiatives. NASA's HPCC Program will: Further gains in U.S. productivity and industrial competitiveness - especially in the aeronautics industry; Extend U.S. technology leadership in high performance computing and communications; Provide wide dissemination and application of HPCC technologies; and Facilitate the use of the technologies of a National Information Infrastructure (NII) - especially within the American K-12 educational systems. As we stand on the threshold of the 21st century, change has become a constant in our lives. We live in a time of unprecedented social, political, and technological change and advancement. For many Americans, the rate of change has accelerated to the point where it is nearly overwhelming. 
It took four hundred years between the development of movable type and the creation of the first practical typewriter. Less than one hundred years later came the development of the word processor. Now, if you buy a personal computer, the computer seems to be behind the technology curve before you even carry it home from the store. Many American business communication tools that are taken for granted today, such as FAX machines, electronic mail, pagers, and cellular phones, were unknown or generally unavailable just ten years ago. At no time in history have humans been required to process information from so many different sources at once. There can be no doubt that in the late twentieth century, the advance of technology has reached a sort of critical mass that is propelling us headlong into a future that was unimaginable a generation ago. The rapid development of computers and communications has ''shrunk'' the world. The United States is an active participant in a worldwide economy. In this new ''global village,'' the rapid movement of information has made the technological playing field for most industrialized nations very competitive. For the first time in history, the means of production, the means of communication, and the means of distribution are all based on the same technology -- computers. A unique interdependence now exists among advanced information technologies. Each new innovation allows existing industries to operate more efficiently while, at the same time, opening up new markets for the product itself. Individuals, corporations, industries -- even entire economies -- depend more than ever on information technologies. America's future and the future of each citizen will be deeply affected by the speed with which information is gathered, processed, analyzed, secured, and disseminated. NASA has a long history of developing new technologies for aerospace missions that later turn out to have far-reaching effects on society through civilian applications. For instance, satellites originally developed for space exploration and defense purposes now carry virtually all television and long-distance telephone signals to our homes. By accelerating the convergence of computing and communications technologies, the NASA HPCC Program expects to play another unique role in shaping the future of every American. Four components comprise NASA's HPCC Program: Computational AeroSciences (CAS), Earth and Space Sciences (ESS), Remote Exploration and Experimentation (REE), and Information Infrastructure Technology and Applications (IITA). The goal of the CAS project is to accelerate the development, availability and use of high-performance computing technology by the U.S. aerospace industry, and to hasten the emergence of a viable commercial market for hardware and software vendors to exploit this lead. The goal of the ESS project is to demonstrate the potential afforded by high-performance computing technology to further our understanding and ability to predict the dynamic interaction of physical, chemical, and biological processes affecting the solar-terrestrial environment and the universe. The goal of the REE project is to develop and demonstrate a space-qualified computing architecture that requires less than ten watts per billion operations per second. The goal of the IITA component in the NASA HPCC Program is to accelerate the implementation of a National Information Infrastructure through NASA science, engineering and technology contributions. 
Fact sheets on each of these projects are included in this World Wide Web brochure. The CAS Project is focused on the specific computing requirements of the United States aerospace community and has as its primary goal accelerating the availability, to United States aerospace manufacturers, of high performance computing hardware and software for use in their design processes. The U.S. aerospace industry can effectively respond to increased international competition only by producing across-the-board better-quality products at affordable prices. High performance computing capability is a key to creating a competitive advantage by reducing product cost and design cycle times; its introduction into the design process is, however, a risk to a commercial company that NASA can help mitigate by performing this research. The CAS project catalyzes these developments in aerospace computing, while at the same time pointing the way to future aerospace markets for domestic computer manufacturers. The key to the entire CAS project is the aerospace design and manufacturing process. These are the procedures that a manufacturer carries out in order to move from the idea of a new aircraft to its roll-out onto the runway. Computer simulations of these aircraft vastly shorten the time necessary for this process. These computer simulations, or applications as they have come to be called, need immensely fast computers in order to deliver their results in a timely fashion to the designers. CAS supports the development of these machines by acquiring the latest experimental machines from domestic computer manufacturers and making them available as testbeds to the nationwide CAS community. The computer manufacturers and independent software vendors help out by providing system software that forms the glue between the applications programs and the computer hardware. These are computer programs, like operating systems, that make the computer function. The CAS community that carries out this work consists of teams of workers from the major aerospace companies, from the NASA aeronautics research centers, and from American universities. The focus of the project is derived through extensive interactions with business managers of the major aerospace companies and by consultation with university researchers and NASA management. The project delivers applications and system software that have been found through its research to enhance the design process, and provides a laboratory by which the computer manufacturers can identify weaknesses and produce improvements in their products. If you are interested in additional information on this project or related activities you may access the CAS Home Page on the World Wide Web, or contact the following NASA officials: William Feiereisen (feiereis@ames.arc.nasa.gov) Project Manager, Computational Aerosciences Project High Performance Computing and Communications Office NASA - Ames Research Center, Moffett Field, California 94035 (415) 604-4225 Paul Hunter (p_hunter@aeromail.hq.nasa.gov) Program Manager, High Performance Computing and Communications Program High Performance Computing and Communications Office NASA - Headquarters, Washington, DC 20546 (202) 358-4618 - George Lake, University of Washington The Earth, its relationship to the Sun and Solar System, and the universe in its totality are the domain of the Earth and Space Sciences Project. 
This effort is employing advanced computers to further our understanding of and ability to predict the dynamically interacting physical, chemical, and biological processes that drive these systems. Its ultimate goal is building an assortment of computer-simulated models that combine complex Earth and space science disciplines. High-resolution, multidisciplinary models are crucial for their predictive value and for their capacity to estimate beyond what we can measure and observe directly. For example, we cannot ''see'' the beginnings of the universe or even the birth of our own planet, but simulation can provide insight into how they evolved by filling in the gaps left by telescopes or geological records. Current ESS Project investigations include probing the formation of large-scale structure in the universe; modeling the global climate system in the past, present and future; ascertaining the dynamics of the interior of stars; and indexing and searching through massive Earth-observational data sets. Determining the pertinent interactions, their time scales, and the controls that exist in such systems requires computing power at the highest levels of performance. An objective of the ESS Project is to provide the supercomputers and software tools to facilitate these models. ''Testbed'' facilities allow access to prototype and early-production machines, such as the Convex Exemplar SPP-1 and the MasPar MP-2 at NASA/Goddard Space Flight Center. Other shared testbed facilities are available throughout NASA and at other U.S. government agencies and universities. Much of the Earth and space sciences relies on data collected from a panoply of satellites and telescopes. There are already massive volumes of data on hand, and one trillion bytes a day will be collected by NASA's Earth Observing System alone. The ESS Project is therefore engaged in developing innovative methods for analysis; these approaches range from visualization and virtual reality to ''intelligent'' information systems and assimilating data into models. Additionally, higher-resolution sensors will require entirely new data retrieval techniques. These endeavors, together with those in modeling, will in turn provide feedback to the system vendors about the effectiveness and limitations of their products, helping them to improve subsequent generations. If you are interested in additional information on this project or related activities you may access the ESS Home Page on the World Wide Web or you may contact the following NASA officials: James Fischer (fischer@jacks.gsfc.nasa.gov) Project Manager, Earth and Space Sciences Project High Performance Computing and Communications Office NASA - Goddard Space Flight Center Code 934 Greenbelt, Maryland 20771 (301) 286-3465 Paul Hunter (p_hunter@aeromail.hq.nasa.gov) Program Manager, High Performance Computing and Communications Program High Performance Computing and Communications Office NASA - Headquarters, Washington, DC 20546 (202) 358-4618 The NASA IITA component is facilitating and accelerating the implementation of a National Information Infrastructure through NASA science, engineering and technology contributions. This activity is responsive to the Congressional and Presidential goals of building new partnerships between the Federal and non-Federal sectors of U.S. society and has special emphasis on serving new communities. 
The IITA component focuses on four key areas: development of Digital Library Technology; public use of Remote Sensing Data; Aerospace Design and Manufacturing; and K-12 education over the Internet. Each of these areas supports the development of new technologies to facilitate broader access to NASA data via computer networks. This NASA activity will foster the development of new and innovative technology to support Digital Libraries; these are libraries that are effectively multimedia and digital (electronic) in nature. The focus here is to support the long-term needs of NASA pilot projects already established, and the eventual scale-up to support thousands to millions of users widely distributed over the Internet. Remote Sensing Data is key, as this is what will comprise the Digital Libraries. Broad public access to databases of remote sensing images and data over computer networks such as the Internet is also essential; NASA has established a Remote Sensing Public Access Center to manage just such an effort. NASA is also striving to provide support for Aerospace Design and Manufacturing through ongoing work with aircraft and propulsion companies. This is meant to facilitate the transfer of NASA-developed aerospace design technology to users in major U.S. aerospace companies. NASA is supporting the transfer of sensitive technologies through development of a secure infrastructure for NASA-industry collaborations. Finally, activities in the area of K-12 education over the Internet will focus on developing curriculum enhancement products that build on a core program of existing K-12 education programs at NASA. The result will be the expansion of a broad outreach program to educational product developers in academia and the private sector. If you are interested in additional information on this project or related activities you may access the IITA Home Page on the World Wide Web or you may contact the following NASA officials: William Likens (William_Likens@qmgate.arc.nasa.gov) Project Manager Information Infrastructure Technology and Applications High Performance Computing and Communications Office National Aeronautics and Space Administration Ames Research Center Moffett Field, California 94035 (415) 604-5699 Paul Hunter (p_hunter@aeromail.hq.nasa.gov) Program Manager, High Performance Computing and Communications Program High Performance Computing and Communications Office NASA - Headquarters, Washington, DC 20546 (202) 358-4618 - W. Huntress, NASA Headquarters The Remote Exploration and Experimentation project will develop and demonstrate a space-qualified, spaceborne computing system architecture that requires less than ten watts per billion operations per second. This computing architecture will be scalable from low-powered (sub-watt) systems to higher-powered (hundred-watt) systems that support deep-space missions lasting ten years or more. Deep-space missions require actual (real-time) analysis of sensor data of up to tens of gigabits per second and independent control of complex robotic functions without intervention from Earth. This project will: enable and enhance U.S. spaceborne remote sensing and manipulation systems by providing dramatic advances in the performance, reliability and affordability of on-board data processing and control systems; extend U.S. technological leadership in high performance, spaceborne, real-time, durable computing systems and their applications; and work cooperatively with the U.S. 
computer industry to assure that NASA technology is commercially available to the U.S. civil, defense and commercial space programs, as well as for practical, day-to-day applications. Deep space applications were selected as a primary focus because they have stringent environmental, long-life, and low-power constraints and requirements. Furthermore, long round-trip communications times and low communications bandwidths require on-board data processing and independence from people on Earth. Since near-Earth, airborne, and ground applications are not as mass- and power-limited, they can use high performance data processing and control systems earlier than deep space missions. Applications that require reliable, real-time responsiveness and that benefit from small size and low power will be addressed by, and will gain from, this project. NASA will select, in this context, intermediate applications to drive early developments while addressing the primary focus. Some examples of possible applications are: robots for hazardous waste clean-up, search-and-rescue, automated inspection and flexible manufacturing, smart portable atmospheric emission analyzers, remote Earth observing systems with very high resolution instruments, microgravity experiments, and automotive collision avoidance systems. The Remote Exploration and Experimentation project is currently not active but will resume activities in Fiscal Year 1996. If you are interested in additional information on this project or related activities you may access the REE Home Page on the World Wide Web or you may contact the following NASA officials: Leon Alkalai (leon@telerobotics.Jpl.Nasa.Gov), Principal Investigator John Davidson (davidson@telerobotics.Jpl.Nasa.Gov), Technical Manager Paul Stolorz, Cognizant Engineer Remote Exploration And Experimentation Project High Performance Computing and Communications Office National Aeronautics and Space Administration Jet Propulsion Laboratory Pasadena, California 91109 (818) 354-7508 Paul Hunter (p_hunter@aeromail.hq.nasa.gov) Program Manager, High Performance Computing and Communications Program High Performance Computing and Communications Office NASA - Headquarters, Washington, DC 20546 (202) 358-4618 NASA and its partner agencies are well on their way to achieving high performance computing systems that can operate at a steady rate of at least one trillion arithmetic operations per second -- one teraflop. The Numerical Aerodynamic Simulation (NAS) Parallel Benchmarks were developed to evaluate the performance of parallel computing systems on workloads typical of NASA and the aeronautics community, and are now in extensive use by several commercial and research communities. Through a cooperative research agreement with a consortium headed by IBM Corporation, a 160-node SP-2, installed at NASA Ames Research Center, has achieved a 25-fold increase in performance over a Cray Y-MP supercomputer (the fastest supercomputer at the inception of the HPCC Program) on the NAS benchmarks and marked the beginning of the second generation of parallel machines. A single large high-performance computer has achieved a world record of 143 gigaflops (or 143 billion arithmetic operations per second) on a parallel linear algebra problem. By coupling several large supercomputers over a network, applications exhibiting half a teraflop performance are expected to be demonstrated on the exhibition hall floor at Supercomputing '95 in November, 1995. 
The Internet is the creation of the HPCC agencies, but its recent phenomenal growth is the result of educational, public service, and private sector investment. NASA and the Department of Energy (DOE) are accelerating the introduction of new commercial Asynchronous Transfer Mode (ATM) networking technologies through acquisition of experimental 155 Mb/s service at multiple sites in FY 1995. Using the Advanced Communications Technology Satellite, experiments linking computers at 155 Mb/s were demonstrated earlier this year, with experiments at 622 Mb/s planned for later this year. These experiments should demonstrate the 1997 metric of a 100-fold increase in communications capability. When discussing the development of new computing technologies, terms like teraflops and gigaflops are spoken as if we should all know what they mean. These are simply units of measurement for the speed at which a computer processes data to perform calculations. Tera means trillion; flops is floating point operations per second. Therefore, teraflops is a trillion floating point operations per second. Teraflops machines do not exist yet, at least not at sustained rates. Why should we care about ''teraflops''? Because advances in high-end processing trickle down to many applications. A computer that performed in ''teraflops'' could provide farmers with long-range weather predictions and thus impact U.S. food production. Automobile manufacturers could manipulate huge databases of information instantaneously so they could improve and change designs in real-time, compare present to past, and make predictions. Automobile manufacturers would save design time, which saves money, which keeps the cost of cars down. Cars could have powerful onboard computers with databases of maps, an onboard guidance system, and an instrument that tells drivers how much gas it will take to get to the nearest gas station, police station, etc. What we now have is gigaflops computing capability. Giga means billion. This is not good enough because we cannot do all of the things that we need or want to do. We can get a certain amount of data and process it, but not enough to get the information we need. The difference between ''gigaflops'' and ''teraflops'' is represented by the difference between a round-trip flight between New York and Boston and a round-trip flight between the Earth and the moon. Now that you are clued in to what teraflops means, you may want to ask yourself ''what is a petaflops?'' High-performance computing is vital to the Nation's progress in science and engineering. NASA's leadership role in the Federal HPCC program is primarily involved in the development of software to accomplish computational modeling needed in science and engineering. High-performance computing is critical to strengthening the global competitiveness of the U.S. aeronautics industry. NASA's HPCC computational techniques have resulted in engineering productivity improvements that enabled Pratt and Whitney to cut the design time in half for high-pressure jet engine compressors used in the Boeing 777 while reducing fuel consumption, providing savings in both development and operations costs. NASA's HPCC Program is working to produce rapid, accurate predictions of the resistance caused by air flowing over an airplane (or drag) in order to yield superior aircraft designs, reduced certification costs, and improved reliability. High-performance computing is critical to the vitality of the Earth and space sciences community. 
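To put the gigaflops-versus-teraflops comparison above in concrete terms, here is a small back-of-the-envelope sketch in C. The 10^15-operation workload is a purely illustrative assumption, not a figure from the fact sheet:

    /* Back-of-the-envelope comparison of gigaflops vs. teraflops.
     * The 10^15-operation workload is an arbitrary illustrative
     * assumption, not a figure taken from the fact sheet. */
    #include <stdio.h>

    int main(void)
    {
        double ops      = 1e15;  /* hypothetical job: 10^15 floating point operations */
        double gigaflop = 1e9;   /* giga = billion operations per second   */
        double teraflop = 1e12;  /* tera = trillion operations per second  */

        printf("at 1 gigaflops: %.1f hours\n", ops / gigaflop / 3600.0);
        printf("at 1 teraflops: %.1f hours\n", ops / teraflop / 3600.0);
        return 0;
    }

Under these assumptions the hypothetical job takes roughly 278 hours (about eleven and a half days) at a sustained gigaflops, versus about 17 minutes at a sustained teraflops.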
High-performance computing advances have enabled accurate modeling of the Earth's atmosphere, land surface, and oceans as an important effort in understanding observational data. Improved numerical methods and compute power now make possible ocean simulations that more accurately represent ocean structure and lay the foundation for new coupled atmosphere-ocean climate models that will reduce uncertainties associated with climate change prediction. The HPCC Program has enabled new models of galaxy and large scale structure formation to be developed and simulated to compare with data from NASA's Great Observatories. These new theories are substantially altering our understanding of the formation of stars, galaxies, and the universe. NASA's Information Infrastructure Technology and Applications Program supports the development of the National Information Infrastructure and provides quality educational tools and curriculum to our nation's children. This program has supported four Live from Antarctica events over the Internet and the PBS network, produced the well-received Global Quest video on using the Internet for education, established six major national Digital Library Testbeds jointly with NSF and the Advanced Research Projects Agency (ARPA), established 26 cooperative agreements and grants for Public Use of Earth and Space Science Data over the Internet, and developed and demonstrated several low cost approaches for establishing Internet connectivity in American K-12 schools. The results of NASA's program are now in use in thousands of schools throughout the country. Since you have access to the World Wide Web and use Mosaic (or some other Web browser application), we encourage you to go ahead and take a look at more graphic descriptions of NASA's HPCC accomplishments at the following site: Selected 1994 Program Accomplishments for the NASA HPCC Program: isolated, top-level accomplishments taken from the NASA HPCC 1994 Annual Report Additional information on the NASA HPCC Program can be accessed from the NASA HPCC Office Home Page. 
Information on Federal HPCC objectives and accomplishments is also available in greater detail: High Performance Computing and Communications: Foundation for America's Information Future (FY 1996 Blue Book) The FY 1995 Implementation Plan for HPCC Home Page for the National Coordination Office for High Performance Computing and Communications MD5{32}: 07f6b8ee7a283d16cee1dce77b4c8ebc File-Size{5}: 34351 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{15}: HPCC Fact Sheet } @FILE { http://cesdis.gsfc.nasa.gov/PAS2/wg8.html Update-Time{9}: 827948619 url-references{156}: http://cesdis.gsfc.nasa.gov/ #top /PAS2/index.html http://cesdis.gsfc.nasa.gov/cesdis.html /pub/people/becker/whoiam.html mailto:becker@cesdis.gsfc.nasa.gov title{41}: Data Parallel and Shared Memory Paradigms keywords{512}: additional advanced and application applications bandwidths becker between broad cesdis characterize continue cost data develop development document donald effort encourage for fund goals gov gsfc hardware independent index inexpensive interaction joint latency longer maximize memory minimize nasa networking paradigms parallel pasadena performance portable products programming programs prototypes pursuing reduction research researchers shared simd software support system term this top vendors work workshop head{14243}: Center of Excellence in Space Data and Information Sciences. Data Parallel and Shared Memory Paradigms The members of this working group were to take data parallel and shared memory paradigms into account in formulating a set of four action items and in articulating responses to the five questions posed to the working groups. Recommendations Develop standard APIs and reference implementations for a portable set of user-level runtime support to help application software developers port codes to a range of parallel architectures and workstation networks. This software would target architectures that do not have strong hardware shared memory support. It appears that the high end parallel market is still too small to support a healthy base of independent software vendor program development. This leads us to recommend the continued development of application programmer interfaces (APIs) that can be employed by systems software developers and application developers so that a single application development effort can lead to software that targets both high and low end multiprocessors as well as workstation networks. MPI, PVM and HPF are important examples of APIs that have been implemented on a wide range of platforms. The new constructs would include message passing constructs (such as MPI, PVM), remote puts and gets, and parallel threads. In addition, the API should support shared address spaces on a range of parallel architectures and workstation networks. Develop portable networking software to minimize application-to-application latency (and maximize bandwidths) and develop inexpensive hardware support for latency reduction. Encourage joint development work by vendors and researchers to develop advanced prototypes/products. We expect that it will be possible to build on advances in commodity networking technology to design software (and inexpensive hardware) for modest-sized workstation networks with performance characteristics that resemble currently available medium-grained parallel machines. 
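As a concrete illustration of the portable message-passing APIs recommended above, here is a minimal MPI program in C. It is a sketch for illustration only (the ranks and payload are arbitrary, and the code is not drawn from the workshop report), but the same source compiles unchanged on an MPP or a workstation network, which is exactly the portability property the recommendation is after:

    /* Minimal MPI sketch: rank 0 sends one integer to rank 1.
     * Run with at least two processes. Illustrative only; not
     * code from the Pasadena workshop report. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, value = 0;
        MPI_Status status;

        MPI_Init(&argc, &argv);               /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* which process am I?   */

        if (rank == 0) {
            value = 42;  /* arbitrary payload */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();                       /* shut down cleanly */
        return 0;
    }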
While it seems unlikely that workstation networks will replace high end parallel architectures, performance-optimized workstation networks can provide a significant market for parallelized ISV applications and parallel system software. The networks connecting PCs and workstations are getting better. Latency is going down and bandwidth is going up. Collision-free 100 Mbit Ethernet hardware is now available at reasonable cost. More advanced networks, such as ATM or Myrinet, offer even higher bandwidth and lower latencies. Although there is still a large gap between these networks and the processor interconnects in MPPs, the gap is closing. These high-speed networks have the potential to bring clusters of workstations much closer to the level of MPPs. Unfortunately, current network interfaces and operating system software are designed for networks with much higher latency and lower bandwidth. These interfaces and systems are currently the bottlenecks through which parallel communication must squeeze. More research and collaboration between researchers and vendors are necessary to develop low-latency network interfaces and system software capable of exploiting this hardware. Much attention is already being focused on ATM and video. However, the demands arising from using these networks as an interconnect for a workstation cluster are quite different (i.e., latency is more important) and deserve future study. Encourage interaction between independent software vendors and system software researchers. While system software researchers interact extensively with scientists and engineers in academia and at national laboratories, it appeared that there was much less interaction between independent software vendors and systems software researchers. HPC grants and contracts should encourage match-making between researchers, ISVs and end users who are constructing commercially important applications. Increased interaction between the independent software vendor community and the systems software research community would be likely to encourage systems software researchers to focus on problems that are of particular interest to ISVs. For instance, an ISV that sells application software for parallel machines must develop a code that runs on a number of serial and parallel platforms. Furthermore, each time the software is upgraded, the upgrade must be carried out on each platform. Interactions with ISVs might also help to focus the efforts of systems software researchers on new types of applications. High performance computing is potentially applicable to a wide variety of application areas; for instance, the NSF-sponsored workshop on HPCC and Health Care (Washington DC, Dec 1994) identified many potential applications of high performance computing associated with health care delivery. We were not able to get a good handle on the state of commercial parallel computing. Despite the shake-out among the parallel machine vendors, there are many confirmed and even more anecdotal descriptions of parallel machines being used in various industries. We recommend that a quantitative survey be generated to quantify the degree to which parallel machines are used in the private sector, and to characterize the targeted applications. We would also recommend characterizing the degree to which companies internally develop different types of parallel software. Continue to fund broad research programs pursuing longer-term goals. 
Improving the productivity of parallel software is a difficult and important problem that justifies long-term research funding. Today we have stable tools that can mask the lowest levels of machine differences from the users; in the future, we will have higher level tools to assist general practitioners with difficult algorithmic issues in parallel processing. It is only by encapsulating the know-how of parallel programming in effective tools that parallel computing will become widespread. Eventually, we envision that the application programmer will operate within a problem solving environment, where he directly manipulates his data using domain-specific concepts. These high-level programming environments will not be built directly upon low-level hardware functions, but will themselves be built on the next lower level of programming abstractions and so forth. Thus, the hardware design defines but the lowest level of the hierarchy in the software architecture. At this point, the software research community has developed solid low-level tools such as PVM that provide portability across different hardware platforms. Promising preliminary results have also been obtained for more advanced tools such as High Performance Fortran, efficient parallel C++ class libraries, automatic parallelization, and locality optimizations that work on entire programs. Other examples include domain-specific programming languages, and interactive programming environments that combine the machine expertise in the compiler with the application knowledge of the programmer. It is important to recognize that these more ambitious projects will take longer to mature. Breakthroughs and innovations required at this level are more likely to be the results of small, dedicated research groups. We must encourage innovative work by supporting independent and even competing research projects. Standardization and user acceptance are important once research matures, but they should not be the major concerns when research is at the formative stage. Premature standardization can stifle creativity. Premature attempts to develop an immediately usable tool can shift the research focus away from the fundamental issues; also, without a solid foundation, the tool is doomed to be fragile. Additional effort to characterize SIMD applications, programming paradigms and cost-performance. There exists a substantial community of researchers who make productive use of SIMD architectures. This community was not well represented at Pasadena; several members of this working group have volunteered to carry out a further consideration of SIMD, and in particular to focus on the commonality and needs of applications that exist across the SIMD community, the determination of a common programming paradigm across SIMD and clustered SIMD architectures, and an assessment of the cost/performance of applications on SIMD architectures. SIMD is the oldest parallel processing paradigm. Its roots go back as far as the 1800s, when it was envisioned that weather prediction would be calculated by an auditorium full of human computers (an occupation of the time) orchestrated by a computer conductor who directed the simulation. This paradigm was manifested electronically in the Solomon computer in the early 1960s. It was followed by such successful machines as the Goodyear Staran, ICL DAP, Goodyear/NASA MPP, TMC CM-1/2 series, and currently the MasPar MP-1 and MP-2. 
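To make the SIMD (data-parallel) paradigm discussed here concrete, the following sketch in plain C models the style: a single operation applied uniformly across an array. On a real SIMD machine each processing element would perform the update in lock-step under one instruction stream; this sequential loop only models that behavior, and the array size is an arbitrary illustrative choice:

    /* Sketch of the SIMD/data-parallel idea: one operation applied
     * uniformly to all elements. On a SIMD machine every processing
     * element would perform this update in lock-step; the sequential
     * loop below only models that behavior. */
    #include <stdio.h>

    #define N 8  /* stand-in for a (much larger) processing-element array */

    int main(void)
    {
        double a[N], b[N], c[N];
        int i;

        for (i = 0; i < N; i++) {  /* initialize operands */
            a[i] = i;
            b[i] = 2.0 * i;
        }

        for (i = 0; i < N; i++)    /* "single instruction" ...   */
            c[i] = a[i] + b[i];    /* ... applied to all data    */

        for (i = 0; i < N; i++)
            printf("c[%d] = %g\n", i, c[i]);
        return 0;
    }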
To those who have taken on the challenge of highly ordered computation, the reward has been low product and maintenance cost, low power consumption, small size and high performance. Many applications programmers who work with SIMD architectures feel that the SIMD methodology leads to a simplified process of programming and debugging compared to the currently existing MIMD paradigms. Therefore, due to the unique view of parallel programming style that SIMD offers, there is a need to assemble the existing SIMD community to address their common system software and hardware needs. It is for this reason that our working group feels that the needs and experiences of the SIMD community should be examined further. We are proposing to convene a meeting of the SIMD community to determine the commonality and needs of applications that exist across the community. We have identified at least 40 organizations and individuals who have an interest in such discussions. We intend to emphasize the determination of a common programming paradigm across SIMD and clustered SIMD architectures. Due to the importance of commercialization, we also intend to assess the cost/performance characteristics of applications on these architectures. This forum will address such issues as paradigm, language, programming environment, operating system support and SIMD architecture extensions. It will gather together information on the various applications that are now being supported by SIMD architecture extensions, as well as those anticipated to be supported in the future. It will assess the cost/performance of various applications with respect to different SIMD architecture extensions and how they may be scaled to tera- and peta-(f)lops computing. It will enumerate the types and characteristics of various SIMD architecture extensions such as parallelism (fine vs coarse), interprocessor network complexities (mesh, wormhole, global adder, ...), scalability, and the degree of synchronicity. Responses to questions: The question that was most relevant to the interests of this cross-cutting group was question 4, involving the interaction between System Software and Architecture. In recent years, some of the most fruitful work in computer architecture has been on the boundary between software and architecture. For example, RISC machines make compilation easier by providing a simple, regular instruction set and open new opportunities for optimization by exposing a machine's pipeline. Unfortunately, many parallel machines are still built by computer architects and "thrown over the wall" to users who often fail to use them effectively because they are too hard to program. Current software efforts are focused on overcoming the limitations of message-passing machines, which offer only the lowest-level option of point-to-point communication. Programmers or compiler-writers are left with the full responsibility for bridging the semantic gap from high-level programming languages, which typically offer a shared address space in which a program can access any data, to this shared-nothing world. The low level of these machines has made compilers extremely complex to write and made their behavior unpredictable in any but the simplest cases. On the other hand, the low level of message passing leaves a programmer with complete control over a program's behavior and performance because no system policies interfere. 
This control is sometimes illusory because of the complexity of understanding and modifying a message-passing program. The other extreme is, of course, shared-memory machines. These machines offer the benefit of a shared address space and are extremely popular products in the form of Symmetric Multiprocessors (SMPs). Scalable shared-memory architectures have a reputation for poor performance, which is due, in many cases, to a system's fixed coherence protocol, which communicates data between processors. When a protocol does not match a program's sharing pattern, the excessive communication can ruin a program's performance.

The design of systems software involves making tradeoffs between user control and ease of use. There is a great deal of practical interest in, and much research emphasis on, systems software that is designed in a way that makes it possible for users to access several layers of software. For instance, High Performance Fortran allows users to call procedures (extrinsic procedures) whose bodies are not written in High Performance Fortran. Extrinsic procedures can be written in another language, and can employ user-level communication libraries to carry out communication. Recent research has led to proposed systems that may allow users or compiler writers to alleviate performance problems associated with scalable shared memory by implementing protocols in software (where they can be changed) and by offering message-passing primitives to augment shared memory. Preliminary work suggests that software protocols can be used to implement highly optimized user or compiler runtime support.

CESDIS HTML formatting/WWW contact: Donald Becker, becker@cesdis.gsfc.nasa.gov. MD5{32}: c2854fcfad2721d857d86540ace14614 File-Size{5}: 15299 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{41}: Data Parallel and Shared Memory Paradigms } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/accomp/94accomp/ess94.accomps/ess1.html Update-Time{9}: 827948645 title{42}: Large Scale Structure and Galaxy Formation keywords{43}: and formation galaxy large scale structure images{53}: hpcc.graphics/hpcc.header.gif hpcc.graphics/cobe1.gif headings{43}: Large Scale Structure and Galaxy Formation MD5{32}: cfe65a16fa178d24d6a7acdf31cfc9b6 File-Size{4}: 4685 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{42}: Large Scale Structure and Galaxy Formation } @FILE { http://cesdis.gsfc.nasa.gov/linux/drivers/vortex.patch Update-Time{9}: 827948606 MD5{32}: 37d0a4789a0c65aab0a6226c235b850b File-Size{4}: 3081 Type{5}: Patch Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 } @FILE { http://cesdis.gsfc.nasa.gov/PAS2/wg9.text Update-Time{9}: 827948619 Partial-Text{14841}: Working Group 9 -- HETEROGENEOUS COMPUTING ENVIRONMENTS
Francine Berman, Co-Chair
Reagan Moore, Co-Chair

9.1 INTRODUCTION

In recent years, dramatic advances in network and processor technology have made it possible to use a network of computational resources to solve individual problems efficiently.
Such platforms would be able to deliver an effectively unlimited amount of processing power, memory and storage to multiple users in a cost-effective manner if an adequate infrastructure could be built to manage them. The challenge of building this software infrastructure and its accompanying computing environment is the focus of heterogeneous computing. Heterogeneous computing is the coordinated use of networked resources to harness diverse hardware platforms, software systems, and data sources distributed across multiple administrative domains. Resources used to solve HPCC applications include workstations, supercomputers, archival storage systems, disk farms, and visualization systems linked via high-speed networks. The use of these heterogeneous resources is becoming pervasive. Applications as varied as climate modeling, interactive 3-dimensional user interfaces such as Argonne's CAVE, and the World Wide Web utilize heterogeneous systems for distributed access to data, computing and/or visualization resources.

In this document, we take a broad view of heterogeneous systems, including clusters of individual workstations (as promoted by the NOW project), dedicated high-end workstation clusters, and networks of diverse high-performance architectures (as illustrated by the NII and the NSF Meta-Center). Such platforms are used because they leverage existing architectures, provide excellent cost/performance, and satisfy the requirements of compute-intensive and data-intensive applications. In effect, we are talking about using heterogeneous resources to provide a world-wide ``computational web'' in which aggregate memory, storage, bandwidth and computational power can be brought to bear on a single application. In addition, this computational web can be used to increase the throughput of multiple applications.

Viewed as a ``computational web'', heterogeneous computing provides the bridge between HPCC and the NII. Transparent access to remote data, access across multiple authentication realms, and the development of a uniform application interface across diverse hardware and software systems are required to fulfill the potential of both HPCC and the NII. The development of an infrastructure which can coordinate diverse and distributed resources is critical to the success of both endeavors.

Currently, heterogeneous computing is an emerging discipline, and considerable development of its most basic components must be done. Efforts in defining underlying models and performance metrics, building software infrastructure and tools, and developing computing environments must be integrated and validated with real applications. Experience must be gained on a wide spectrum of workstation clusters and heterogeneous networks. Current efforts must be supported, expanded, and nurtured. In Section 9.7, we provide a number of technical and programmatic recommendations for developing the software infrastructure required for harnessing heterogeneous systems. The intervening sections lay the groundwork for these recommendations.

9.2 PROGRESS SINCE THE FIRST PASADENA WORKSHOP

Heterogeneous computing was defined as a focus area at Pasadena I and at the subsequent Berkeley Springs Workshop. However, a basic problem has retarded the development of heterogeneous computing: heterogeneous research must be done as interdisciplinary research, so that the development of heterogeneous applications, software infrastructure, and prototype networks can be integrated.
There is no program within an individual federal funding agency, or coordinated between agencies, which targets over the long term the development of infrastructure, applications and models for heterogeneous computing. This problem must be remedied in order to keep up with the current and pressing need for critical infrastructure and software support for HPCC and NII applications.

Since Pasadena I, there has been some progress in the development of tools and models for heterogeneous computing; however, most efforts have achieved only partial success. The MPI message-passing interface has been defined, and mechanisms for heterogeneity are part of that definition; however, MPI has yet to achieve the widespread use of PVM. In the last few years commercial batch queuing offerings have become available (NQE, LSF, Load Leveler). Though these products are functionally adequate, many issues important for heterogeneous computing are not addressed, e.g., common file space, failure resilience, user authentication, and administration. In addition, isolated Grand Challenge applications have shown that the use of heterogeneous parallel computing can yield improved performance. Unfortunately, these applications suffer from the lack of an adequate development environment and require a large amount of human resources to construct. Mechanisms are needed to aid scientists in exploiting heterogeneous platforms.

The heterogeneous computing area is not yet ready to identify de facto standards (with the possible exception of PVM). More experience must be gained with real applications on a wide spectrum of heterogeneous platforms. At the same time, the underlying system management infrastructure must be developed. Models for presenting a single system image to the application are still incomplete, and new mechanisms are needed to provide authentication, transparent data delivery, resource scheduling, and accounting.

Most successful among efforts targeted to coordinated networks has been the wide-spread use of clusters of computers with PVM as a common software interface. In addition, the use of heterogeneous platforms to accommodate the data and storage requirements of applications like the World Wide Web has become more commonplace. However, even in this successful and widely-used ...

9.3 CHARACTERISTICS OF HPCC HETEROGENEOUS APPLICATIONS

Coordinated networks provide performance by aggregating computing, data and network resources that cannot be delivered by a single platform. There is ... HPCC applications are characteristically large or complex programs which require intense usage of resources to achieve adequate performance. These ...

9.3.1 RESOURCE REQUIREMENTS

Heterogeneous HPCC applications generally have large resource requirements and utilize heterogeneous platforms to aggregate enough resources to provide increased performance or to make the solution of a problem feasible. Distributed resources may include computation, memory, storage, ...

9.3.2 PERFORMANCE ORIENTATION

Heterogeneous HPCC science and engineering applications tend to emphasize performance over other factors. Reductions in the execution time of an ...

9.3.3 LIFETIME

The lifetime of some HPCC applications is typically lengthy. The ... Heterogeneous systems have the ability to evolve over time and thus can adapt to the changing requirements of long-term projects and changing resource technology. Integration of archival storage access within the heterogeneous ...
9.4 HETEROGENEOUS SYSTEM SOFTWARE AND TOOLS

With the advent of global connectivity of diverse machines using high-speed networks, target systems are destined to become increasingly heterogeneous. Developing tools and software to support computing in ... Although the target systems for heterogeneous computing are becoming increasingly complex, the system software and tools needed to support the environment have not been a focus for the HPCC community. Heterogeneous ...

Tools and system software supporting this interface layer should provide services which enable
+ the matching of application program requirements to available system resources
+ dynamic scheduling of the application on available machines
+ efficient utilization and management of resources
+ response to queries from the application or the user about system state
+ prediction and measurement of various performance metrics
+ monitoring and checkpointing during program execution
etc. ...

The PVM system is an example of a software system that addresses some of the issues in heterogeneous computing, and is being used to investigate others. PVM can be called a heterogeneous computing system, albeit with ... Like PVM, tools to manage the complexity of a heterogeneous environment must have a low cost-of-entry for users. Moreover, they must offer ...

9.5 INTERACTION BETWEEN SYSTEM SOFTWARE AND ARCHITECTURE

A key component for successful heterogeneous computing is the ability of the system to dynamically determine the available resources. This capability ... Heterogeneous computing relies upon many software support mechanisms traditionally supplied by the operating system. These include file systems, ... Current heterogeneous environments provide a single system image at the application level. User I/O calls are trapped to allow references to ... The data delivery mechanisms should be able to access the wide variety of storage systems that are available in the HPCC community. Such access should ...

The following issues are germane to parallel and distributed computing in general; however, they require additional integration requirements in the heterogeneous context. ...
+ Accounting systems are needed within the heterogeneous environment to control and monitor access to the system. Accounting information may also be used by the ...
+ Failure resilience is needed within both the NII and HPCC. It may be provided by replication (such that ...
+ Administration and operation mechanisms are needed to control the distributed heterogeneous environment. ...

9.6 TRANSITION FROM RESEARCH TO PRODUCTS

The difficulty of providing an infrastructure for, and managing the resources of, coordinated networks resists the promotion of research prototypes to products. Workstation networks require products that manage the resources ... The youth of heterogeneous computing and insufficient support for large products renders many current development efforts untested with real applications, limited in scope, or immature. The multiple software layers of ...

+ JOB QUEUING SYSTEMS
Job queuing systems typically provide a way to organize a workload that exceeds the resource capability. Examples of existing commercial and ...
+ RESOURCE SCHEDULING SYSTEMS
Resource scheduling systems optimize the allocation of resources based on an objective function. Scheduling can be done for network access, CPU access, ...
+ APPLICATION SUPPORT ENVIRONMENTS
Application support environments provide an infrastructure to distribute the application across the system.
They also provide tools to simplify the development, debugging, and display of results. Examples of commercial and ...
+ MESSAGE PASSING LIBRARIES
Message passing libraries support access to distributed memory. While ...
+ APPLICATION LEVEL SINGLE SYSTEM IMAGE
Heterogeneous systems will be most efficient when a uniform system environment can be provided to the distributed application. Features of this ...
+ NETWORK SUPPORT
The need to decrease message passing latency has led to the development of new paradigms for sending information between distributed tasks. In ...

9.7 RECOMMENDATIONS

Diverse hardware, software, and administrative resources are already available for heterogeneous computing. Application developers are already ...

TECHNICAL RECOMMENDATIONS
-------------------------
1) DEVELOP ACCURATE MODELS AND PERFORMANCE MEASURES FOR HETEROGENEOUS SYSTEMS
Accurate models of heterogeneous systems, and measures which compare observed behavior with potential behavior, must be designed. Time, state, ...
2) DEVELOP EFFICIENT SYSTEM MANAGEMENT STRATEGIES FOR HETEROGENEOUS PLATFORMS
Efficient mechanisms must be developed to handle and coordinate the diverse resources of the heterogeneous platform. Such mechanisms should ...
3) DEVELOP TRANSPARENT MECHANISMS FOR STORING AND HANDLING DATA
Data delivery tools are needed that hide the data delivery mechanism from the user. Shared file systems, distributed databases, and I/O redirection ... Integration of database technology and archival storage technology is needed to handle the petabytes of data associated with some HPCC applications. ... Part of the heterogeneous environment is concerned with the movement of data from network-attached peripherals controlled by archival storage systems, through a database running on distributed platforms, to the application. This ...
4) DESIGN SYSTEM INTERFACES WHICH SUPPORT EFFICIENT IMPLEMENTATION
The layer between the programmer and the system must map applications dynamically to the system based on availability and ``cost'' of services. In ...
5) DEVELOP FAILURE RESILIENCE STRATEGIES FOR HETEROGENEOUS SYSTEMS
The implementation of universal checkpointing/restarting of a heterogeneous system is a major research issue. Many of the existing systems ...

PROGRAMMATIC RECOMMENDATIONS
----------------------------
If heterogeneous computing is to provide a bridge between the emerging NII and HPCC, support must be provided for its development. We recommend two thrusts to accomplish ...
1) FUNDING AGENCIES SHOULD ESTABLISH FOCUS PROGRAMS FOR HETEROGENEOUS COMPUTING which support over the long term the integration of applications, systems software and infrastructure on coordinated networks of resources. Research should be encouraged which ...
+ real heterogeneous applications implemented on coordinated networks,
+ development of a software infrastructure for supporting heterogeneous applications,
+ performance criteria for assessing usefulness. ...
2) A NATIONAL HETEROGENEOUS TESTBED SHOULD BE INITIATED to provide a resource for developing and testing heterogeneous applications software and systems. Such a ...

The hardware and network resources for heterogeneous computing are already available. A major effort is required to develop the software, ...
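To ground the discussion above: a minimal master/worker sketch in C against the PVM 3 library that this report repeatedly cites. The executable name "worker" and the message tag are hypothetical illustrative choices; PVM itself picks a host in the virtual machine, and the default XDR encoding masks byte-order differences between heterogeneous hosts.

    /* master.c: spawn one worker somewhere in the virtual machine
       and collect one integer from it.  Link with libpvm3. */
    #include <stdio.h>
    #include "pvm3.h"

    int main(void)
    {
        int tid, n = 0;

        /* Let PVM choose any host/architecture for the worker task. */
        if (pvm_spawn("worker", (char **)0, PvmTaskDefault, "", 1, &tid) != 1) {
            fprintf(stderr, "spawn failed\n");
            pvm_exit();
            return 1;
        }
        pvm_recv(tid, 1);          /* block on message tag 1 */
        pvm_upkint(&n, 1, 1);      /* unpack one integer (XDR-encoded) */
        printf("worker %x returned %d\n", tid, n);
        pvm_exit();
        return 0;
    }

    /* worker.c: send one integer back to the parent task. */
    #include "pvm3.h"

    int main(void)
    {
        int n = 42;
        pvm_initsend(PvmDataDefault);  /* XDR encoding: heterogeneity-safe */
        pvm_pkint(&n, 1, 1);
        pvm_send(pvm_parent(), 1);     /* message tag 1 */
        pvm_exit();
        return 0;
    }

The point of the sketch is that neither program names a machine: resource selection, data conversion and routing are handled by the PVM layer, which is precisely the kind of system-management function the recommendations above ask to be broadened.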
MD5{32}: 3b0461e0f5eee063d91233b9127dc6c6 File-Size{5}: 30949 Type{4}: Text Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Keywords{4556}: ability able about access accommodate accompanying accomplish accounting accurate achieve achieved across adapt addi addition addressed addresses adequate administration administrative advances advent agencies agency aggregate aggregating aid albeit allocation allow already also although among amount and application applications architecture architectures archival are area argonne assessing associated attached authentication availability available bandwidth base based basic batch bear because become becoming been behavior being berkeley berman between both bridge broad brought building built called calls can cannot capability cave center chair challenge changing characteristically characteristics checkpointing climate clusters commercial common commonplace community compare complex complexity component components computation computational compute computers computing con concerned connectivity considerable construct control controlled coordinate coordinated cost could cpu criteria critical current currently data database databases debugging decrease dedicated defined defining definition deliver delivery design designed destined determine develop developed developers developing development difficulty dimensional dis discipline disk display distribute distributed diverse document domains done dramatic during dynamic dynamically effect effective efficient efficiently effort efforts elivered emerging emphasize enable encouraged endeavors engineering enough entry environment environments establish etc even evolve example examples exceeds excellent exception execu execution existing expanded experience exploiting facto factors failure farms feasible features federal few file first focus following for francine from fulfill function functionally funding gained geneous general generally germane global grand groundwork group handle handling hardware harness harnessing has have hetero heterogeneity heterogeneous hide high however hpcc human identify illustrated image immature imple implementation implemented important improved include including incomplete increase increased increasingly individual information infrastructure initiated insufficient integrated integration intense intensive interaction interactive interdisciplinary interface interfaces intervening introduction investigate isolated issue issues its job keep key lack large last latency lay layer layers led lengthy level leveler leverage libraries lifetime like limited linked load long low lsf machines made major make manage management managing manner many map matching may measurement measures mechanism mechanisms memory mentation message meta metrics modeling models monitor monitoring moore more moreover most movement mpi multiple must national need needed network networked networks new nii not now nqe nsf number nurtured objective observed offer offerings only operating operation optimize order organize orientation other others over paradigms parallel part partial pasadena passing performance peripherals pervasive petabytes platform platforms porting possible potential power prediction presenting pressing problem problems processing processor products program programmatic programmer programs progress project projects promoted promotion prototype prototypes provide 
provided provides providing pvm queries queuing ready reagan real realms recent recommend recommendations redirection reductions references relies remedied remote renders replication require required requirements research resilience resists resource resources response restarting results retarded running same satisfy scheduling science scientists scope section sections sending services shared should shown simplify since single software solution solve some sources space spectrum speed spread springs standards state still storage storing strategies subsequent success successful such suffer sup supercomputers supplied support supported supporting sys system systems take talking target targeted targets tasks technical technology tem tend term testbed testing text that the them there these they this though through throughput thrusts thus time tion tional tools traditionally transition transparent trapped tributed two typically underlying unfortunately uniform universal unlimited untested upon usage use used usefulness user users using utilization utilize validated varied variety various via view viewed visualization was way web when which while wide widely widespread will with within working workload workshop workstation workstations world would years yet yield youth Description{57}: Working Group 9 -- HETEROGENEOUS COMPUTING ENVIRONMENTS } @FILE { http://cesdis.gsfc.nasa.gov/pub/linux/html/Ethernet-HOWTO-3.html Update-Time{9}: 827948628 url-references{1441}: Ethernet-HOWTO.html#toc3 http://www.crynwr.com/crynwr/home.html Ethernet-HOWTO-7.html#skel Ethernet-HOWTO-7.html#data-xfer Ethernet-HOWTO-7.html#3com-tech Ethernet-HOWTO-9.html#3com-probs Ethernet-HOWTO-9.html#alfa Ethernet-HOWTO-7.html#i82586 Ethernet-HOWTO-9.html#alfa Ethernet-HOWTO-7.html#i82586 http://cesdis.gsfc.nasa.gov/linux/pcmcia.html Ethernet-HOWTO-8.html#pcmcia #lance Ethernet-HOWTO-7.html#amd-notes #at-1500 #ne1500 #boca-pci #ni65xx Ethernet-HOWTO-10.html#ether Ethernet-HOWTO-7.html#amd-notes Ethernet-HOWTO-9.html#alfa #lance Ethernet-HOWTO-7.html#amd-notes Ethernet-HOWTO-10.html#ether #dec-21040 Ethernet-HOWTO-8.html#pcmcia http://cesdis.gsfc.nasa.gov/linux/pcmcia.html #lance Ethernet-HOWTO-7.html#amd-notes #lance Ethernet-HOWTO-7.html#amd-notes #z-note http://peipa.essex.ac.uk/html/linux-thinkpad.html Ethernet-HOWTO-8.html#pcmcia http://cesdis.gsfc.nasa.gov/linux/pcmcia.html Ethernet-HOWTO-9.html#alfa Ethernet-HOWTO-7.html#promisc Ethernet-HOWTO-7.html#i82586 #de-650 Ethernet-HOWTO-9.html#ne2k-probs Ethernet-HOWTO-4.html#ne2k-clones Ethernet-HOWTO-6.html#diag #lance Ethernet-HOWTO-7.html#amd-notes Ethernet-HOWTO-9.html#alfa Ethernet-HOWTO-7.html#i82586 Ethernet-HOWTO-9.html#alfa #3c501 Ethernet-HOWTO-4.html#8013-clones Ethernet-HOWTO-9.html#8013-probs #dec-21040 Ethernet-HOWTO-7.html#i82586 Ethernet-HOWTO-4.html Ethernet-HOWTO-2.html Ethernet-HOWTO.html#toc3 Ethernet-HOWTO.html#toc Ethernet-HOWTO.html #0 title{46}: Vendor/Manufacturer/Model Specific Information keywords{851}: accton advanced all allied alpha amd and ansel apricot arcnet associated beginning boca business cabletron can cards chapter chips clones com communications contents corp data dec devices dfi diagnostic digital discouraged don driver drivers ethernet every farallon forbid four from have hewlett howto ibm info information intel interlan international koch lan lance leave like link linksys looks machines manufacturer may micro microsystems model multicast mylex nelson net next nexxxx not note notes novell now old packard packet param pci pcmcia poor 
previous problems programmed programming programs pure racal realtek region research russ sager schneider section semi skeleton smc specific standard strongly stuff support supported surfers surfing table tec technical telesis the thinkpad this top two vendor vlb western whole with xircom zenith headings{1504}: 3 3.1 3c501 3c503, 3c503/16 3c505 3c507 3c509 / 3c509B 3c523 3c527 3c529 3c579 3c589 / 3c589B 3.2 Accton MPX Accton EN2212 PCMCIA Card 3.3 AT1500 AT1700 3.4 AMD LANCE (7990, 79C960, PCnet-ISA) AMD 79C961 (PCnet-ISA+) AMD 79C965 (PCnet-32) AMD 79C970 (PCnet-PCI) AMD 79C974 (PCnet-SCSI) 3.5 AC3200 EISA 3.6 Apricot Apricot Xen-II On Board Ethernet 3.7 3.8 AT&T AT&T T7231 (LanPACER+) 3.9 AT-Lan-Tec / RealTek AT-Lan-Tec / RealTek Pocket adaptor 3.10 Boca BEN (PCI, VLB) 3.11 E10**, E10**-x, E20**, E20**-x E2100 3.12 DE-100, DE-200, DE-220-T DE-530 DE-600 DE-620 DE-650 3.13 DFINET-300 and DFINET-400 3.14 DEPCA, DE100, DE200/1/2, DE210, DE422 Digital EtherWorks 3 (DE203, DE204, DE205) DE425 (EISA), DE435 DEC 21040, 21140 3.15 Farallon Farallon Etherwave 3.16 27245A HP PC Lan+ (27247A, 27247B, 27252A) HP-J2405A HP-Vectra On Board Ethernet 3.17 IBM Thinkpad 300 IBM Credit Card Adaptor for Ethernet 3.18 Ether Express Ether Express PRO 3.19 LinkSys LinkSys PCMCIA Adaptor 3.20 Mylex Mylex LNP101, LNP104 3.21 NE1000, NE2000 NE1500, NE2100 NE3200 3.22 Pure Data PDUC8028, PDI8023 3.23 Racal-Interlan NI52** NI65** 3.24 Sager Sager NP943 3.25 Schneider & Koch SK G16 3.26 WD8003, SMC Elite WD8013, SMC Elite16 SMC Elite Ultra SMC 8416 (EtherEZ) SMC 8432 PCI (EtherPower) SMC 3008 SMC 3016 SMC 9000 3.27 PE1, PE2, PE3-10B* 3.28 Z-Note body{48635}: Vendor/Manufacturer/Model Specific Information Contents of this section The only thing that one needs to use an ethernet card with Linux is the appropriate driver. For this, it is essential that the manufacturer will release the technical programming information to the general public without you (or anyone) having to sign your life away. A good guide for the likelihood of getting documentation (or, if you aren't writing code, the likelihood that someone else will write that driver you really, really need) is the availability of the Crynwr (nee Clarkson) packet driver. Russ Nelson runs this operation, and has been very helpful in supporting the development of drivers for Linux. Net-surfers can try this URL to look up Russ' software. Russ Nelson's Packet Drivers Given the documentation, you can write a driver for your card and use it for Linux (at least in theory) and if you intend to write a driver, have a look at Skeleton driver as well. Keep in mind that some old hardware that was designed for XT type machines will not function very well in a multitasking environment such as Linux. Use of these will lead to major problems if your network sees a reasonable amount of traffic. Most cards come with drivers for MS-DOS interfaces such as NDIS and ODI, but these are useless for Linux. Many people have suggested directly linking them in or automatic translation, but this is nearly impossible. The MS-DOS drivers expect to be in 16 bit mode and hook into `software interrupts', both incompatible with the Linux kernel. This incompatibility is actually a feature, as some Linux drivers are considerably better than their MS-DOS counterparts. The `8390' series drivers, for instance, use ping-pong transmit buffers, which are only now being introduced in the MS-DOS world. 
Keep in mind that PC ethercards have the widest variety of interfaces (shared memory, programmed I/O, bus-master, or slave DMA) of any computer hardware for anything, and supporting a new ethercard sometimes requires re-thinking most of the lower-level networking code. (If you are interested in learning more about these different forms of interfaces, see Programmed I/O vs. ... .) Also, similar product numbers don't always indicate similar products. For instance, the 3c50* product line from 3Com varies wildly between different members. Enough talk. Let's get down to the information you want.

3Com

If you are not sure what your card is, but you think it is a 3Com card, you can probably figure it out from the assembly number. 3Com has a document `Identifying 3Com Adapters By Assembly Number' (ref 24500002) that would most likely clear things up. See Technical Information from 3Com for info on how to get documents from 3Com. Also note that 3Com has an FTP site with various goodies: that you may want to check out.

Status -- Semi-Supported

Too brain-damaged to use. Available surplus from many places. Avoid it like the plague. Again, do not purchase this card, even as a joke. Its performance is horrible, and it breaks in many ways. Cameron L. Spitzer of 3Com said: ``I'm speaking only for myself here, of course, but I believe 3Com advises against installing a 3C501 in a new system, mostly for the same reasons Donald has discussed. You probably won't be happy with the 3C501 in your Linux box. The data sheet is marked `(obsolete)' on 3Com's Developers' Order Form, and the board is not part of 3Com's program for sending free Technical Reference Manuals to people who need them. The decade-old things are nearly indestructible, but that's about all they've got going for them any more.''

For those not yet convinced, the 3c501 can only do one thing at a time -- while you are removing one packet from the single-packet buffer it cannot receive another packet, nor can it receive a packet while loading a transmit packet. This was fine for a network between two 8088-based computers where processing each packet and replying took 10's of msecs, but modern networks send back-to-back packets for almost every transaction.

Donald writes: `The driver is now in the std. kernel, but under the following conditions: This is unsupported code. I know the usual copyright says all the code is unsupported, but this is _really_ unsupported. I DON'T want to see bug reports, and I'll accept bug fixes only if I'm in a good mood that day. I don't want to be flamed later for putting out bad software. I don't know all of the 3c501 bugs, and I know this driver only handles a few that I've been able to figure out. It has taken a long intense effort just to get the driver working this well.' AutoIRQ works, DMA isn't used, the autoprobe only looks at and , and the debug level is set with the third boot-time argument. Once again, the use of a 3c501 is strongly discouraged! Even more so with an IP multicast kernel, as you will grind to a halt while listening to all multicast packets. See the comments at the top of the source code for more details.

Status -- Supported

3Com shared-memory ethercards. They also have a programmed I/O mode that doesn't use the 8390 facilities (their engineers found too many bugs!) It should be about the same speed as the same bus width WD80x3. Unless you are a light user, spend the extra money and get the 16 bit model, as the price difference isn't significant.
The 3c503 does not have ``EEPROM setup'', so the diagnostic/setup program isn't needed before running the card with Linux. The shared memory address of the 3c503 is set using jumpers that are shared with the boot PROM address. This is confusing to people familiar with other ISA cards, where you always leave the jumper set to ``disable'' unless you have a boot PROM. The Linux 3c503 driver can also work with the 3c503 programmed-I/O mode, but this is slower and less reliable than shared memory mode. Also, programmed-I/O mode is not tested when updating the drivers, the deadman (deadcard?) check code may falsely timeout on some machines, and the probe for a 3c503 in programmed-I/O mode is turned off by default in some versions of the kernel. This was a panic reaction to the general device driver probe explosion; the 3c503 shared memory probe is a safe read from memory, rather than an extensive scan through I/O space. As of 0.99pl13, the kernel has an I/O port registrar that makes I/O space probes safer, and the programmed-I/O 3c503 probe has been re-enabled. You still shouldn't use the programmed-I/O mode though, unless you need it for MS-DOS compatibility.

The 3c503's IRQ line is set in software, with no hints from an EEPROM. Unlike the MS-DOS drivers, the Linux driver has the capability to autoIRQ: it uses the first available IRQ line in {5,2/9,3,4}, selected each time the card is ifconfig'ed. (Older driver versions selected the IRQ at boot time.) The ioctl() call in `ifconfig' will return EAGAIN if no IRQ line is available at that time. Some common problems that people have with the 503 are discussed in Problems with... .

Status -- Semi-Supported

This is a driver that was written by Craig Southeren . These cards also use the i82586 chip. I don't think there are that many of these cards about. It is included in the standard kernel, but it is classed as an alpha driver. See Alpha Drivers for important information on using alpha-test ethernet drivers with Linux. There is also the file that you should read if you are going to use one of these cards. It contains various options that you can enable/disable. Technical information is available in Programming the Intel chips .

Status -- Semi-Supported

This card uses one of the Intel chips, and the development of the driver is closely related to the development of the Intel Ether Express driver. The driver is included in the standard kernel release, but as an alpha driver. See Alpha Drivers for important information on using alpha-test ethernet drivers with Linux. Technical information is available in Programming the Intel chips .

Status -- Supported

It's fairly inexpensive and has excellent performance for a non-bus-master design. The main drawback is that the original 3c509 _requires_ very low interrupt latency. The 3c509B shouldn't suffer from the same problem, due to having a larger buffer. (See below.) Note that the ISA card detection uses a different method than most cards. Basically, you ask the cards to respond by sending data to an ID_PORT (port ). Note that if you have some other strange ISA card using an I/O range that includes the ID_PORT of the 3c509, it will probably not get detected. Note that you can change the ID_PORT to or or... in if you have a conflicting ISA card, and the 3c509 will still be happy. Also note that this detection method means that it is difficult to predict which card will get detected first in a multiple ISA 3c509 configuration. The card with the lowest hardware ethernet address will end up being .
This shouldn't matter to anyone, except for those people who want to assign a 6 byte hardware address to a particular interface. A working 3c509 driver was first included as an alpha-test version in the 0.99pl13 kernel sources. It is now in the standard kernel. The original 3c509 has a tiny Rx buffer (2kB), causing the driver to occasionally drop a packet if interrupts are masked for too long. To minimize this problem, you can try unmasking interrupts during IDE disk transfers (see ) and/or increasing your ISA bus speed so IDE transfers finish sooner. (Note that the driver could be completely rewritten to use predictive interrupts, but performance re-writes of working drivers are low priority unless there is some particular incentive or need.) The newer model 3c509B has 8kB on board, and the driver can set 4, 5 or 6kB for an Rx buffer. This setting can also be stored on the EEPROM. This should alleviate the above problem with the original 3c509. At this point in time, the Linux driver is not aware of this, and treats the 3c509B as an older 3c509. Cameron Spitzer writes: ``Beware that if you put a '509 in EISA addressing mode by mistake and save that in the EEPROM, you'll have to use an EISA machine or the infamous Test Via to get it back to normal, and it will conflict at IO location 0 which may hang your ISA machine. I believe this problem is corrected in the 3C509B version of the board.''

Status -- Not Supported

This MCA bus card uses the i82586, and now that people are actually running Linux on MCA machines, someone may wish to try and recycle parts of the 3c507 driver into a driver for this card.

Status -- Not Supported

Yes, another MCA card. No, not too much interest in it. Better chances with the 3c523 or the 3c529.

Status -- Not Supported

This card actually uses the same chipset as the 3c509. Donald put hooks into the 3c509 driver to check for MCA cards after probing for EISA cards, and before probing for ISA cards. But it hasn't evolved much further than that. Donald writes: ``I don't have access to an MCA machine (nor do I fully understand the probing code) so I never wrote the or routines. If you can find a way to get the adaptor I/O address that is assigned at boot time, you can just hard-wire that in place of the commented-out probe. Be sure to keep the code that reads the IRQ, if_port, and ethernet address.''

Status -- Supported

The EISA version of the 509. The current EISA version uses the same 16 bit wide chip rather than a 32 bit interface, so the performance increase isn't stunning. The EISA probe code was added to 3c509.c for 0.99pl14. We would be interested in hearing progress reports from any 3c579 users. (Read the above 3c509 section for info on the driver.) Cameron Spitzer writes: ``The 3C579 (Etherlink III EISA) should be configured as an EISA card. The IO Base Address (window 0 register 6 bits 4:0) should be 1f, which selects EISA addressing mode. Logic outside the ASIC decodes the IO address s000, where s is the slot number. I don't think it was documented real well. Except for its IO Base Address, the '579 should behave EXACTLY like the '509 (EL3 ISA), and if it doesn't, I want to hear about it (at my work address).''

Status -- Semi-Supported

Many people have been using this PCMCIA card for quite some time now. Note that support for it is not (at present) included in the default kernel source tree. Note that you will need a supported PCMCIA controller chipset.
There are drivers available on Donald's ftp site: Or for those that are net-surfing you can try: Don's PCMCIA Stuff You will still need a PCMCIA socket enabler as well. See PCMCIA Support for more info on PCMCIA chipsets, socket enablers, etc. The "B" in the name means the same here as it does for the 3c509 case.

Accton

Status -- Supported

Don't let the name fool you. This is still supposed to be a NE2000 compatible card. The MPX is supposed to stand for MultiPacket Accelerator, which, according to Accton, increases throughput substantially. But if you are already sending back-to-back packets, how can you get any faster...

Status -- Semi-Supported

David Hinds has been working on a driver for this card, and you are best to check the latest release of his PCMCIA package to see what the present status is.

Allied Telesis

Status -- Supported

These are a series of low-cost ethercards using the 79C960 version of the AMD LANCE. These are bus-master cards, and thus probably the fastest ISA bus ethercards available (although the 3c509 has lower latency thanks to predictive interrupts). DMA selection and chip numbering information can be found in AMD LANCE . More technical information on AMD LANCE based Ethernet cards can be found in Notes on AMD... .

Status -- Supported

The Allied Telesis AT1700 series ethercards are based on the Fujitsu MB86965. This chip uses a programmed I/O interface, and a pair of fixed-size transmit buffers. This allows small groups of packets to be sent back-to-back, with a short pause while switching buffers. A unique feature is the ability to drive 150ohm STP (Shielded Twisted Pair) cable commonly installed for Token Ring, in addition to 10baseT 100ohm UTP (unshielded twisted pair). The Fujitsu chip used on the AT1700 has a design flaw: it can only be fully reset by doing a power cycle of the machine. Pressing the reset button doesn't reset the bus interface. This wouldn't be so bad, except that it can only be reliably detected when it has been freshly reset. The solution/work-around is to power-cycle the machine if the kernel has a problem detecting the AT1700. Some production runs of the AT1700 had another problem: they are permanently wired to DMA channel 5. This is undocumented, there are no jumpers to disable the "feature", and no driver dares use the DMA capability because of compatibility problems. No device driver will be written using DMA if installing a second card into the machine breaks both, and the only way to disable the DMA is with a knife. The at1700 driver is included in the standard kernel source tree.

AMD / Advanced Micro Devices

Status -- Supported

There really is no AMD ethernet card. You are probably reading this because the only markings you could find on your card said AMD and the above number. The 7990 is the original `LANCE' chip, but most stuff (including this document) refers to all these similar chips as `LANCE' chips. (...incorrectly, I might add.) The above numbers refer to chips from AMD that are the heart of many ethernet cards. For example, the Allied Telesis AT1500 (see AT1500 ), the NE1500/2100 (see NE1500 ), and the Boca-VLB/PCI cards (see Boca-VLB/PCI ). The 79C960 (a.k.a. PCnet-ISA) contains enhancements and bug fixes over the original 7990 LANCE design. Chances are that the existing LANCE driver will work with all AMD LANCE based cards (except perhaps the NI65XX -- see NI65XX for more info on that one). This driver should also work with NE1500 and NE2100 clones.
For the ISA bus-master mode, all structures used directly by the LANCE (the initialization block, Rx and Tx rings, and data buffers) must be accessible from the ISA bus, i.e. in the lower 16M of real memory. If more than 16MB of memory is installed, low-memory `bounce-buffers' are used when needed. The DMA channel can be set with the low bits of the otherwise-unused dev->mem_start value (a.k.a. PARAM_1). (see PARAM_1 ) If unset, it is probed for by enabling each free DMA channel in turn and checking if initialization succeeds. The HP-J2405A board is an exception: with this board it's easy to read the EEPROM-set values for the IRQ and DMA. See Notes on AMD... for more info on these chips.

Status -- Supported

This is the PCnet-ISA+ -- an enhanced version of the 79C960. It has support for jumper-less configuration and Plug and Play. See the info in the above section.

Status -- Supported

This is the PCnet-32 -- a 32 bit bus-master version of the original LANCE chip for VL-bus and local bus systems. Minor cleanups were added to the original lance driver around v1.1.50 to support these 32 bit versions of the LANCE chip. The main problem was that the current versions of the '965 and '970 chips have a minor bug. They clear the Rx buffer length field in the Rx ring when they are explicitly documented not to. Again, see the above info.

Status -- Supported

This is the PCnet-PCI -- similar to the PCnet-32, but designed for PCI bus based systems. Again, see the above info. Donald has modified the LANCE driver to use the PCI BIOS structure that was introduced by Drew Eckhardt for the PCI-NCR SCSI driver. This means that you need to build a kernel with PCI BIOS support enabled.

Status -- Supported

This is the PCnet-SCSI -- treated like a '970 from an Ethernet point of view. Again, see the above info. Don't ask if the SCSI half of the chip is supported -- this is the Ethernet-Howto, not the SCSI-Howto.

Ansel Communications

Status -- Semi-Supported

This driver is included in the present kernel as an alpha test driver. Please see Alpha Drivers in this document for important information regarding alpha drivers. If you use it, let Donald know how things work out, as not too many people have this card and feedback has been low.

Status -- Supported

This on board ethernet uses an i82596 bus-master chip. It can only be at i/o address . The author of this driver is Mark Evans. By looking at the driver source, it appears that the IRQ is hardwired to 10. Earlier versions of the driver had a tendency to think that anything living at was an Apricot NIC. Since then the hardware address is checked to avoid these false detections.

Arcnet

Status -- Semi-Supported

With the very low cost and better performance of ethernet, chances are that most places will be giving away their Arcnet hardware for free, resulting in a lot of home systems with Arcnet. An advantage of Arcnet is that all of the cards have identical interfaces, so one driver will work for everyone. Recent interest in getting Arcnet going has picked up again and Avery Pennarun's alpha driver has been put into the default kernel sources for 1.1.80 and above. The arcnet driver uses `arc0' as its name instead of the usual `eth0' for ethernet devices. Bug reports and success stories can be mailed to:

Note that AT&T's StarLAN is an orphaned technology, like SynOptics LattisNet, and can't be used in a standard 10Base-T environment.

Status -- Not Supported

These StarLAN cards use an interface similar to the i82586 chip.
At one point, Matthijs Melchior () was playing with the 3c507 driver, and almost had something useable working. Haven't heard much since that. Status -- Supported This is a generic, low-cost OEM pocket adaptor being sold by AT-Lan-Tec, and (likely) a number of other suppliers. A driver for it is included in the standard kernel. Note that there is substantial information contained in the driver source file `atp.c'. BTW, the adaptor (AEP-100L) has both 10baseT and BNC connections! You can reach AT-Lan-Tec at 1-301-948-7070. Ask for the model that works with Linux, or ask for tech support. In the Netherlands a compatible adaptor is sold under the name SHI-TEC PE-NET/CT, and sells for about $125. The vendor was Megasellers. They state that they do not sell to private persons, but this doesn't appear to be strictly adhered to. They are: Megasellers, Vianen, The Netherlands. They always advertise in Dutch computer magazines. Note that the newer model EPP-NET/CT appears to be significantly different than the PE-NET/CT, and will not work with the present driver. Hopefully someone will come up with the programming information and this will be fixed up. In Germany, a similar adaptor comes as a no-brand-name product. Prolan 890b, no brand on the casing, only a roman II. Resellers can get a price of about $130, including a small wall transformer for the power. The adaptor is `normal size' for the product class, about 57mm wide, 22mm high tapering to 15mm high at the DB25 connector, and 105mm long (120mm including the BNC socket). It's switchable between the RJ45 and BNC jacks with a small slide switch positioned between the two: a very intuitive design. Donald performed some power draw measurements, and determined that the average current draw was only about 100mA @ 5V. This power draw is low enough that you could buy or build a cable to take the 5V directly from the keyboard/mouse port available on many laptops. (Bonus points here for using a standardized power connector instead of a proprietary one.) Note that the device name that you pass to is not but for this device. Boca Research Yes, they make more than just multi-port serial cards. :-) Status -- Supported These cards are based on AMD's PCnet chips, used in the AT1500 and the like. You can pick up a combo (10BaseT and 10Base2) PCI card for under $70 at the moment. More information can be found in AMD LANCE . More technical information on AMD LANCE based Ethernet cards can be found in Notes on AMD... . Cabletron Donald writes: `Yes, another one of these companies that won't release its programming information. They waited for months before actually confirming that all their information was proprietary, deliberately wasting my time. Avoid their cards like the plague if you can. Also note that some people have phoned Cabletron, and have been told things like `a D. Becker is working on a driver for linux' -- making it sound like I work for them. This is NOT the case.' If you feel like asking them why they don't want to release their low level programming info so that people can use their cards, write to support@ctron.com. Tell them that you are using Linux, and are disappointed that they don't support open systems. And no, the usual driver development kit they supply is useless. It is just a DOS object file that you are supposed to link against. Which you aren't allowed to even reverse engineer. 
Status -- Semi-Supported

These are NEx000 almost-clones that are reported to work with the standard NEx000 drivers, thanks to a ctron-specific check during the probe. If there are any problems, they are unlikely to be fixed, as the programming information is unavailable.

Status -- Semi-Supported

Again, there is not much one can do when the programming information is proprietary. The E2100 is a poor design. Whenever it maps its shared memory in during a packet transfer, it maps it into the whole 128K region! That means you can't safely use another interrupt-driven shared memory device in that region, including another E2100. It will work most of the time, but every once in a while it will bite you. (Yes, this problem can be avoided by turning off interrupts while transferring packets, but that will almost certainly lose clock ticks.) Also, if you mis-program the board, or halt the machine at just the wrong moment, even the reset button won't bring it back. You will have to turn it off and leave it off for about 30 seconds. Media selection is automatic, but you can override this with the low bits of the dev->mem_end parameter. See PARAM_2 . Also, don't confuse the E2100 with a NE2100 clone. The E2100 is a shared memory NatSemi DP8390 design, roughly similar to a brain-damaged WD8013, whereas the NE2100 (and NE1500) use a bus-mastering AMD LANCE design. There is an E2100 driver included in the standard kernel. However, seeing as programming info isn't available, don't expect bug-fixes. Don't use one unless you are already stuck with the card.

D-Link

Some people have had difficulty in finding vendors that carry D-Link stuff. This should help. (714) 455-1688 in the US (081) 203-9900 in the UK 6196-643011 in Germany (416) 828-0260 in Canada (02) 916-1600 in Taiwan

Status -- Supported

The manual says that it is 100% compatible with the NE2000. This is not true. You should call them and tell them you are using their card with Linux, and they should correct their documentation. Some pre-0.99pl12 driver versions may have trouble recognizing the DE2** series as 16 bit cards, and these cards are the most widely reported as having the spurious transfer address mismatch errors. Note that there are cards from Digital (DEC) that are also named DE100 and DE200, but the similarity stops there.

Status -- Not Supported

This appears to be a generic DEC21040 PCI chip implementation, and will most likely work with the generic 21040 driver, once Linux gets one. See DEC 21040 for more information on these cards, and the present driver situation.

Status -- Supported

Laptop users and other folk who might want a quick way to put their computer onto the ethernet may want to use this. The driver is included with the default kernel source tree. Bjorn Ekwall wrote the driver. Expect about 80kb/s transfer speed from this via the parallel port. You should read the README.DLINK file in the kernel source tree. Note that the device name that you pass to is now and not the previously used . If your parallel port is not at the standard then you will have to recompile. Bjorn writes: ``Since the DE-620 driver tries to squeeze the last microsecond from the loops, I made the irq and port address constants instead of variables. This makes for a usable speed, but it also means that you can't change these assignments from e.g. lilo; you _have_ to recompile...'' Also note that some laptops implement the on-board parallel port at which is where the parallel ports on monochrome cards were/are.
Supposedly, a no-name ethernet pocket adaptor marketed under the name `PE-1200' is DE-600 compatible. It is available in Europe from: SEMCON Handels Ges.m.b.H Favoritenstrasse 20 A-1040 WIEN Telephone: (+43) 222 50 41 708 Fax: (+43) 222 50 41 706

Status -- Supported

Same as the DE-600, only with two output formats. Bjorn has written a driver for this model, for kernel versions 1.1 and above. See the above information on the DE-600.

Status -- Semi-Supported

Some people have been using this PCMCIA card for some time now with their notebooks. It is a basic 8390 design, much like a NE2000. The LinkSys PCMCIA card and the IC-Card Ethernet (available from Midwest Micro) are supposedly DE-650 clones as well. Note that at present, this driver is not part of the standard kernel, and so you will have to do some patching. See PCMCIA Support in this document, and if you can, have a look at: Don's PCMCIA Stuff

DFI

Status -- Supported

These cards are now detected (as of 0.99pl15) thanks to Eberhard Moenkeberg who noted that they use `DFI' in the first 3 bytes of the prom, instead of using in bytes 14 and 15, which is what all the NE1000 and NE2000 cards use. (The 300 is an 8 bit pseudo NE1000 clone, and the 400 is a pseudo NE2000 clone.)

Digital / DEC

Status -- Supported

As of Linux v1.0, there is a driver included as standard for these cards. It was written by David C. Davies. There is documentation included in the source file `depca.c', which includes info on how to use more than one of these cards in a machine. Note that the DE422 is an EISA card. These cards are all based on the AMD LANCE chip. See AMD LANCE for more info. A maximum of two of the ISA cards can be used, because they can only be set for and base I/O address. If you are intending to do this, please read the notes in the driver source file in the standard kernel source tree.

Status -- Supported

Included into kernels v1.1.62 and above is this driver, also by David C. Davies of DEC. These cards use a proprietary chip from DEC, as opposed to the LANCE chip used in the earlier cards like the DE200. These cards support either shared memory or programmed I/O, although you take about a 50% performance hit if you use PIO mode. The shared memory size can be set to 2kB, 32kB or 64kB, but only 2 and 32 have been tested with this driver. David says that the performance is virtually identical between the 2kB and 32kB mode. There is more information (including using the driver as a loadable module) at the top of the driver file and also in . Both of these files come with the standard kernel distribution. Other interesting notes are that it appears that David is/was working on this driver for the unreleased version of Linux for the DEC Alpha AXP. And the standard driver has a number of interesting ioctl() calls that can be used to get or clear packet statistics, read/write the EEPROM, change the hardware address, and the like. Hackers can see the source code for more info on that one. David has also written a configuration utility for this card (along the lines of the DOS program ) along with other tools. These can be found on in the directory -- look for the file .

Status -- Not Supported

These cards are based on the 21040 chip mentioned below. At present there is no driver available. (Take heart, it is being worked on...)

Status -- Not Supported

The DEC 21040 is a bus-mastering single chip ethernet solution from Digital, similar to AMD's PCnet chip. The 21040 is specifically designed for the PCI bus architecture.
SMC's new EtherPower PCI card uses this chip. The newly announced 21140 is for supporting 100Base-? and is supposed to be able to work with drivers for the 21040 chip. Donald has a SMC EtherPower PCI card at the moment, and is working on a driver. His home page says that he has a driver semi-working as of 28/12/94. An alpha driver may appear in a month or so. Please don't mail-bomb him asking for the driver, or help with it. Also, another person is presently working on a driver for DEC's 21040 based cards, and it is not Donald. They shall remain nameless so that their mailbox doesn't get filled with ``Is it ready yet?'' messages either.

Farallon sells EtherWave adaptors and transceivers. This device allows multiple 10baseT devices to be daisy-chained.

Status -- Supported

This is reported to be a 3c509 clone that includes the EtherWave transceiver. People have used these successfully with Linux and the present 3c509 driver. They are too expensive for general use, but are a great option for special cases. Hublet prices start at $125, and Etherwave adds $75-$100 to the price of the board -- worth it if you have pulled one wire too few, but not if you are two network drops short.

Hewlett Packard

The 272** cards use programmed I/O, similar to the NE*000 boards, but the data transfer port can be `turned off' when you aren't accessing it, avoiding problems with autoprobing drivers. Thanks to Glenn Talbott for helping clean up the confusion in this section regarding the version numbers of the HP hardware.

Status -- Supported

8 Bit 8390 based 10BaseT, not recommended for all the 8 bit reasons. It was re-designed a couple years ago to be highly integrated, which caused some changes in initialization timing which only affected testing programs, not LAN drivers. (The new card is not `ready' as soon after switching into and out of loopback mode.)

Status -- Supported

The HP PC Lan+ is different from the standard HP PC Lan card. This driver was added to the list of drivers in the standard kernel at about v1.1.3X. Note that even though the driver is included, the entry in `config.in' seems to have been omitted. If you want to use it, and it doesn't come up in `config.in', then add the following line to `config.in' under the `HP PCLAN support' line:

bool 'HP PCLAN Plus support' CONFIG_HPLAN_PLUS n

Then run or whatever. The 47B is a 16 Bit 8390 based 10BaseT w/AUI, and the 52A is a 16 Bit 8390 based ThinLAN w/AUI. These cards are high performers (3c509 speed) without the interrupt latency problems (32K onboard RAM for TX or RX packet buffering). They both offer LAN connector autosense, data I/O in I/O space (simpler) or memory mapped (faster), and soft configuration. The 47A is the older model that existed before the `B'. Two versions, 27247-60001 or 27247-60002, have part numbers marked on the card. Functionally the same to the LAN driver, except bits in ROM to identify boards differ. -60002 has a jumper to allow operation in non-standard ISA busses (chipsets that expect IOCHRDY early.)

Status -- Supported

These are lower priced, and slightly faster than the 27247B/27252A, but are missing some features, such as AUI, ThinLAN connectivity, and boot PROM socket. This is a fairly generic LANCE design, but a minor design decision makes it incompatible with a generic `NE2100' driver. Special support for it (including reading the DMA channel from the board) is included thanks to information provided by HP's Glenn Talbott. More technical information on LANCE based cards can be found in Notes on AMD...
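On the subject of soft configuration: most of the ISA drivers covered in this section also accept boot-time overrides, so settings such as the IRQ and I/O base can be forced without recompiling the kernel. A hedged example of the `ether=' boot argument (the general form is ether=IRQ,BASE_ADDR,PARAM_1,PARAM_2,NAME; the values below are illustrative, not recommendations):

    LILO boot: linux ether=10,0x300,eth0

asks the first card found to use IRQ 10 at I/O base 0x300. A zero value means `autodetect', so ether=0,0x300,eth0 forces only the base address. The meaning of PARAM_1 and PARAM_2 is driver-specific; for example, the LANCE driver reads its DMA channel from the low bits of PARAM_1 (dev->mem_start), as noted in the AMD section above.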
Status -- Supported The HP-Vectra has an AMD PCnet chip on the motherboard. Earlier kernel versions would detect it as the HP-J2405A, but that would fail, as the Vectra doesn't report the IRQ and DMA channel like the J2405A. Get a kernel newer than v1.1.53 to avoid this problem. DMA selection and chip numbering information can be found in AMD LANCE . More technical information on LANCE based cards can be found in Notes on AMD... IBM / International Business Machines Status -- Supported This is compatible with the Intel based Zenith Z-note. See Z-note for more info. Supposedly this site has a comprehensive database of useful stuff for newer versions of the Thinkpad. I haven't checked it out myself yet. Thinkpad-info Status -- Semi-Supported People have been using this PCMCIA card with Linux as well. Similar points apply, those being that you need a supported PCMCIA chipset on your notebook, and that you will have to patch the PCMCIA support into the standard kernel. See PCMCIA Support in this document, and if you can, have a look at: Don's PCMCIA Stuff Intel Ethernet Cards Status -- Semi-Supported This card uses the Intel i82586. (Surprise, huh?) The driver is in the standard release of the kernel, as an alpha driver. See Alpha Drivers for important information on using alpha-test ethernet drivers with Linux. The reason is that the driver works well with slow machines, but the i82586 occasionally hangs from the packet buffer contention that a fast machine can cause. One reported hack/fix is to change all of the outw() calls to outw_p() (a sketch appears at the end of this section). Also, the driver is missing promiscuous and multicast modes. (See Multicast and... ) There is also the standard way of using the chip (read slower) that is described in the chip manual, and used in other i82586 drivers, but this would require a re-write of the entire driver. There is some technical information available on the i82586 in Programming the Intel Chips and also in the source code for the driver `eexpress.c'. Don't be afraid to read it. ;-) Status -- Not Supported This card uses the Intel 82595. If it is as ugly to use as the i82586, then don't count on anybody writing a driver. Status -- Semi-Supported This is supposed to be a re-badged DE-650. See the information on the DE-650 in DE-650 . Status -- Not Supported These are PCI cards that are based on DEC's 21040 chip. The LNP104 uses the 21050 chip to deliver four independent 10BaseT ports. The standard LNP101 is selectable between 10BaseT, 10Base2 and 10Base5 output. These cards may work with a generic 21040 driver if and when Linux gets one. (They aren't cheap either.) Mylex can be reached at the following numbers, in case anyone wants to ask them about programming information and the like. MYLEX CORPORATION, Fremont Sales: 800-77-MYLEX, (510) 796-6100 FAX: (510) 745-8016. Novell Ethernet, NExxxx and associated clones. The prefix `NE' came from Novell Ethernet. Novell followed the cheapest NatSemi databook design and sold the manufacturing rights to (spun off?) Eagle, just to get reasonably-priced ethercards into the market. (The now ubiquitous NE2000 card.) Status -- Supported The now-generic name for a bare-bones design around the NatSemi 8390. They use programmed I/O rather than shared memory, leading to easier installation but slightly lower performance and a few problems. Again, the savings of using an 8 bit NE1000 over the NE2000 are only warranted if you expect light use.
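To make the EtherExpress hack mentioned above concrete, here is roughly what the substitution looks like inside a driver; the register name is made up, and the real calls are scattered through `eexpress.c'. outw_p() is the pausing variant of outw(): it performs a dummy I/O access after the write, giving the i82586 time to digest each command so a fast CPU can't outrun it.

    #include <asm/io.h>

    #define EE_CMD_REG 0x06    /* hypothetical command register offset */

    static inline void ee_write_cmd(unsigned short val, int ioaddr)
    {
        /* was: outw(val, ioaddr + EE_CMD_REG); */
        outw_p(val, ioaddr + EE_CMD_REG);    /* pause after each write */
    }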
Some recently introduced NE2000 clones use the National Semiconductor `AT/LANTic' 83905 chip, which offers a shared memory mode similar to the 8013, and EEPROM or software configuration. Some problems can arise with poor clones. See Problems with... and Poor NE2000 Clones. In general it is not a good idea to put an NE2000 clone at the default I/O address, because nearly every device driver probes there at boot. Some poor NE2000 clones don't take kindly to being prodded in the wrong areas, and will respond by locking your machine. Donald has written a NE2000 diagnostic program, but it is still presently in alpha test. (ne2k) See Diagnostic Programs for more information. Status -- Supported These cards use the original 7990 LANCE chip from AMD and are supported using the Linux lance driver. Some earlier versions of the lance driver had problems with getting the IRQ line via autoIRQ from the original Novell/Eagle 7990 cards. Hopefully this is now fixed. If not, then specify the IRQ via LILO, and let us know that it still has problems. DMA selection and chip numbering information can be found in AMD LANCE . More technical information on LANCE based cards can be found in Notes on AMD... Status -- Not Supported This card uses a lowly 8MHz 80186, and hence you are better off using a cheap NE2000 clone. Even if a driver were available, the NE2000 card would most likely be faster. Status -- Supported The PureData PDUC8028 and PDI8023 series of cards are reported to work, thanks to special probe code contributed by Mike Jagdis. The support is integrated with the WD driver. Status -- Semi-Supported Michael Hipp has written a driver for this card. It is included in the standard kernel as an `alpha' driver. Michael would like to hear feedback from users that have this card. See Alpha Drivers for important information on using alpha-test ethernet drivers with Linux. Michael says that ``the internal sysbus seems to be slow. So we often lose packets because of overruns while receiving from a fast remote host.'' This card also uses one of the Intel chips. See Programming the Intel Chips for more technical information. Status -- Semi-Supported There is also a driver for the LANCE based NI6510, likewise written by Michael Hipp, and likewise an `alpha' driver. For some reason, this card is not compatible with the generic LANCE driver. See Alpha Drivers for important information on using alpha-test ethernet drivers with Linux. Status -- Semi-Supported This is just a 3c501 clone, with a different station address PROM prefix. I assume it is equally as brain-dead as the original 3c501. Kernels 1.1.53 and up check for the NP943 ID and then just treat it as a 3c501 after that. See 3Com 3c501 for all the reasons as to why you really don't want to use one of these cards. Status -- Supported This driver was included into the v1.1 kernels, and it was written by PJD Weichmann and SWS Bern. It appears that the SK G16 is similar to the NI6510, in that it is based on the first edition LANCE chip (the 7990). Once again, I have no idea as to why this card won't work with the generic LANCE driver. Western Digital / SMC (Standard Microsystems Corp.) The ethernet part of Western Digital has been bought by SMC. One common mistake people make is assuming that the relatively new SMC Elite Ultra is the same as the older SMC Elite16 models -- it is not. Here is how to contact SMC (not that you should need to): SMC / Standard Microsystems Corp., 80 Arkay Drive, Hauppauge, New York, 11788, USA.
Technical Support via phone: 800-992-4762 (USA) 800-433-5345 (Canada) 516-435-6250 (Other Countries) Literature requests: 800-SMC-4-YOU (USA) 800-833-4-SMC (Canada) 516-435-6255 (Other Countries) Technical Support via E-mail: techsupt@ccmail.west.smc.com FTP Site: ftp.smc.com Status -- Supported These are the 8-bit versions of the card. The 8 bit 8003 is slightly less expensive, but only worth the savings for light use. Note that some of the non-EEPROM cards (clones with jumpers, or old old old wd8003 cards) have no way of reporting the IRQ line used. In this case, auto-irq is used, and if that fails, the driver silently assigns IRQ 5. Information regarding what the jumpers on old non-EEPROM wd8003 cards do can be found in conjunction with the SMC setup/driver disks, available from the SMC FTP site noted above. Note that some of the newer SMC `SuperDisk' programs will fail to detect the old EEPROM-less cards; an older release of the setup program is a better all-round choice for those. The jumper settings for old cards are also in an ASCII text file in the same archive. The latest (greatest?) version can be obtained from the same FTP site. As these are basically the same as their 16 bit counterparts (WD8013 / SMC Elite16), you should see the next section for more information. Status -- Supported Over the years the design has added more registers and an EEPROM. Clones usually go by the `8013' name, and usually use a non-EEPROM (jumpered) design. This part of WD has been sold to SMC, so you'll usually see something like SMC/WD8013 or SMC Elite16 Plus (WD8013). Late model SMC cards will have two main PLCC chips on board; the SMC 83c690 and the SMC 83c694. The shared memory design makes the cards 10-20% faster, especially with larger packets. More importantly, from the driver's point of view, it avoids a few bugs in the programmed-I/O mode of the 8390, allows safe multi-threaded access to the packet buffer, and it doesn't have a programmed-I/O data register that hangs your machine during warm-boot probes. Non-EEPROM cards that can't just read the selected IRQ will attempt auto-irq, and if that fails, they will silently assign IRQ 10. (8-bit versions will assign IRQ 5.) Also see 8013 clones and 8013 problems . Status -- Supported This ethercard is based on a new chip from SMC, with a few new features. While it has a mode that is similar to the older SMC ethercards, it's not compatible with the old WD80*3 drivers. However, in this mode it shares most of its code with the other 8390 drivers, while operating somewhat faster than a WD8013 clone. Since part of the Ultra looks like an 8013, the Ultra probe is supposed to find an Ultra before the wd8013 probe has a chance to mistakenly identify it. Standard as of 0.99pl14, made possible by documentation and an ethercard loan from Duke Kamstra. If you plan on using an Ultra with Linux, send him a note of thanks to let him know that there are Linux users out there! Donald mentioned that it is possible to write a separate driver for the Ultra's `Altego' mode, which allows chaining transmits at the cost of inefficient use of receive buffers, but that will probably not happen right away. Performance re-writes of working drivers are low priority unless there is some particular incentive or need. Bus-Master SCSI host adaptor users take note: the manual that ships with Interactive UNIX mentions that a bug in the SMC Ultra will cause data corruption with SCSI disks being run from an aha-154X host adaptor.
This will probably bite aha-154X compatible cards, such as the BusLogic boards, and the AMI-FastDisk SCSI host adaptors as well. Supposedly SMC has acknowledged that the problem occurs with Interactive, and with older Windows NT drivers. It is supposed to be a hardware conflict that can be worked around in the driver design. More on this as it develops. Some Linux users with an Ultra + aha-154X compatible cards have experienced data corruption, while others have not. Donald tried this combination himself, and wasn't able to reproduce the problem. You have been warned. Status -- Semi-Supported This card uses SMC's 83c795 chip and supports the Plug 'n Play specification. Alex Mohr writes the following: ``The card has some features above and beyond the SMC Elite Ultra, but can be put into a mode that is compatible with it. When I tried to detect the card with linux, the autoprobe in the kernel didn't recognize it as an ultra. After wandering the code a bit, I noticed that in the smc-ultra.c file it checks to see if an ID Nibble is 0x20. I inserted a check to see what it returns for my card. Apparently, it's a 0x40. So I allowed it to detect if it's a 0x20 or a 0x40, and it works fine.'' (A sketch of the resulting probe check appears at the end of this section.) Status -- Not Supported Supposedly SMC is offering an evaluation deal on these new PCI cards for $99 ea. (not a real great deal when you consider that the Boca PCnet-PCI based cards are going for less than $70 and they are supported under Linux already). They appear to be a basic DEC 21040 implementation, i.e. one big chip and a couple of transceivers. Donald has one of these cards, and is working on a driver for it. An alpha driver may appear in a month or so, but don't hold your breath. See DEC 21040 for more info on these chips from Digital. Status -- Not Supported These 8 bit cards are based on the Fujitsu MB86950, which is an ancient version of the MB86965 used in the Linux at1700 driver. Russ says that you could probably hack up a driver by looking at the at1700.c code and his DOS packet driver for the Tiara card (tiara.asm). Status -- Not Supported These are 16-bit I/O-mapped 8390 cards, quite similar to a generic NE2000 card. If you can get the specifications from SMC, then porting the NE2000 driver would probably be quite easy. Status -- Not Supported These cards are VLB cards based on the 91c92 chip. They are fairly expensive, and hence the demand for a driver is pretty low at the moment. Xircom Another group that won't release documentation. No cards supported. Don't look for any support in the future unless they release their programming information. And this is highly unlikely, as they forbid you from even reverse-engineering their drivers. If you are already stuck with one, see if you can trade it off on some DOS (l)user. And if you just want to verify that this is the case, you can reach Xircom at 1-800-874-7875, 1-800-438-4526 or +1-818-878-7600. They used to advertise that their products "work with all network operating systems", but have since stopped. Wonder why... Status -- Not Supported Not to get your hopes up, but if you have one of these parallel port adaptors, you may be able to use it in the DOS emulator with the Xircom-supplied DOS drivers. You will have to allow DOSEMU access to your parallel port, and will probably have to play with SIG (DOSEMU's Silly Interrupt Generator). I have no idea if this will work, but if you have any success with it, let me know, and I will include it here.
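As referenced above, a rough sketch of the probe change Alex Mohr describes for `smc-ultra.c'. The register offset here is illustrative rather than the driver's exact code; the point is simply to accept either ID nibble.

    #include <asm/io.h>

    /* Accept both the Elite Ultra ID nibble (0x20) and the value the
       EtherEZ reportedly returns in its Ultra-compatible mode (0x40). */
    static int ultra_id_ok(int ioaddr)
    {
        int id = inb(ioaddr + 7) & 0xF0;    /* hypothetical ID register */
        return (id == 0x20 || id == 0x40);
    }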
Zenith Status -- Supported The built-in Z-Note network adaptor is based on the Intel i82593 using two DMA channels. There is an (alpha?) driver available in the present kernel version. As with all notebook and pocket adaptors, it is under the `Pocket and portable adaptors' section when running `make config'. See Programming the Intel chips for more technical information. Also note that the IBM ThinkPad 300 is compatible with the Z-Note. MD5{32}: 4b1efbd9f507557972b2c5b0b0e434a4 File-Size{5}: 60272 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{46}: Vendor/Manufacturer/Model Specific Information } @FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node69.html Update-Time{9}: 827948641 title{20}: Memory Architecture keywords{47}: architecture aug chance edt memory reschke tue images{193}: /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif head{153}: Next: Applications and Algorithms: Up: Taming Massive Parallelism: Previous: Principles Memory Architecture Chance Reschke Tue Aug 15 08:59:12 EDT 1995 MD5{32}: 1b3289f62c4eaefe3adf6ee3db123f48 File-Size{4}: 1400 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{20}: Memory Architecture } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/in.house/lou.html Update-Time{9}: 827948654 url-references{126}: multigrid.html http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/in-house.html mailto:lpicha@cesdis.gsfc.nasa.gov title{30}: Parallel Multigrid PDE Solvers keywords{63}: curator image larry page picha picture previous return see the images{19}: graphics/return.gif headings{59}: Parallel Multigrid PDE Solvers Return to the PREVIOUS PAGE body{3115}: Objective: Develop efficient parallel algorithms/software for implementing multigrid partial differential equation (PDE) solvers on massively parallel computers. The implemented PDE solvers should be scalable and portable across different hardware platforms. These PDE solvers can be used either as library routines or as expandable template code for solving many challenging problems in physics and engineering. Approach: Developing high-quality, parallel numerical PDE solvers requires expertise in both numerical mathematics and software engineering. We identified numerically efficient multigrid algorithms for solving elliptic PDEs and developed strategies for their parallel implementations on message-passing systems. We use modern software technologies in our implementations to make our code highly structured, reusable and extensible. We verified the effectiveness of our parallel multigrid solver by extending it to an incompressible fluid flow solver. Accomplishments: We developed a parallel algorithm and implemented the parallel multigrid elliptic solver package. The multigrid solver can solve N-dimensional (N <= 3) boundary-value problems for Poisson and Helmholtz equations on several commonly-used finite-difference grids, and it runs on both sequential and parallel computers.
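For readers unfamiliar with the method, the following toy 1D multigrid V-cycle for -u'' = f (on a grid of n = 2^k + 1 points) shows the basic smooth/restrict/recurse/interpolate structure. It is purely illustrative: the package described here exposes C and Fortran interfaces for 2D/3D Poisson and Helmholtz problems over NX or MPI, none of which this sketch reflects.

    #include <stdlib.h>

    /* Gauss-Seidel relaxation on interior points of -u'' = f. */
    static void smooth(double *u, const double *f, int n, double h, int sweeps)
    {
        for (int s = 0; s < sweeps; s++)
            for (int i = 1; i < n - 1; i++)
                u[i] = 0.5 * (u[i-1] + u[i+1] + h * h * f[i]);
    }

    static void vcycle(double *u, const double *f, int n, double h)
    {
        smooth(u, f, n, h, 3);                 /* pre-smoothing            */
        if (n <= 3)
            return;                            /* coarsest grid reached    */
        int nc = (n + 1) / 2;                  /* coarse grid size         */
        double *r  = calloc(n,  sizeof *r);    /* fine-grid residual       */
        double *fc = calloc(nc, sizeof *fc);   /* restricted residual      */
        double *ec = calloc(nc, sizeof *ec);   /* coarse-grid error        */
        for (int i = 1; i < n - 1; i++)        /* r = f - Au               */
            r[i] = f[i] + (u[i-1] - 2.0*u[i] + u[i+1]) / (h*h);
        for (int i = 1; i < nc - 1; i++)       /* full-weighting restrict  */
            fc[i] = 0.25 * (r[2*i-1] + 2.0*r[2*i] + r[2*i+1]);
        vcycle(ec, fc, nc, 2.0*h);             /* solve the error equation */
        for (int i = 1; i < nc; i++)           /* interpolate correction   */
            u[2*i-1] += 0.5 * (ec[i-1] + ec[i]);
        for (int i = 1; i < nc - 1; i++)
            u[2*i] += ec[i];
        smooth(u, f, n, h, 3);                 /* post-smoothing           */
        free(r); free(fc); free(ec);
    }

Because each V-cycle damps error components at every grid level, the work per solve stays proportional to the number of unknowns, which is the source of the (sometimes optimal) efficiency noted under Significance below.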
The numerical and parallel performances of the multigrid solver have been measured for some test problems on Intel Delta and Paragon systems and the results are fairly good. The multigrid solver was implemented in C with both NX and MPI interfaces for message-passing. Interfaces to the multigrid solver from an application program are available in C and Fortran. The multigrid solver has been extended to a two-dimensional (2D) incompressible fluid flow solver based on a projection method implemented on a staggered finite-difference grid. The flow solver can be used to simulate fluid flows, e.g., in astrophysics and combustion problems. The 2D multigrid flow solver has been tested on a few model problems (see picture page, 60k image) . Significance: Multigrid methods are a class of highly efficient (sometimes optimal) numerical schemes for solving a variety of numerical PDEs arising from science and engineering problems. Solving elliptic problems is often a computationally expensive step in many time-dependent scientific computing problems. Developing a general-purpose, parallel multigrid elliptic solver, however, is far from a trivial task for most application scientists. Our parallel multigrid solver package can be a useful computational tool in solving large science and engineering problems. Status/Plans: Extend the multigrid solver on staggered grid to 3D grids. Extend the multigrid flow solver to 3D problems. Investigate the possibilities of incorporating adaptive and multilevel grid features into existing parallel PDE solvers. Investigate the benefits of using object-oriented approaches (e.g. C++ or its extensions) in implementing parallel PDE solvers. Point of Contact: John Lou Jet Propulsion Laboratory (818) 354-4870 lou@acadia.jpl.nasa.gov curator: Larry Picha MD5{32}: d1bd810f95831080b8eef74310b78797 File-Size{4}: 3609 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{30}: Parallel Multigrid PDE Solvers } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.darwin.html Update-Time{9}: 827948662 url-references{28}: http://racimac.arc.nasa.gov/ title{63}: DARWIN/HPCC Phased-Array Acoustic Measurement and Visualization keywords{101}: accomplishments approach arc contact gov http nasa objective plans point racimac significance status headings{63}: DARWIN/HPCC Phased-Array Acoustic Measurement and Visualization body{2742}: Objective: To produce a near-real-time phased-array acoustic measurement and visualization system for wind-tunnel testing by combining the skills of the DARWIN and HPCC projects, and to apply the system to the analysis of the acoustic environment around a DC-10 model with extended flaps and landing gear in the Ames 40x80 wind tunnel. Approach: Instrumentation, data collection, and storage computer systems are combined with the HPCC IBM SP-2 to produce a heterogeneous distributed computing system. The Parallel Virtual Machine (PVM) software provides data communications between machines and within the SP-2. Acoustic information is collected by an array of 40 microphones, and is stored in memory on the instrumentation computer. This digitized data is routed to the SP-2 for phased-array processing. A surface of points is "scanned" to determine the strength of noise sources at each location. Sound pressure levels on this surface are visualized in the FAST visualization system.
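A hedged sketch of the classic delay-and-sum beamforming behind such a scan: for each candidate point on the surface, the microphone signals are shifted by their propagation delays to that point and summed, and the energy of the sum estimates the source strength there. The 40-microphone count comes from the text above; everything else (names, buffer length, integer-sample delays) is illustrative.

    #define NMIC  40        /* microphone count, per the description above */
    #define NSAMP 4096      /* samples per channel, illustrative           */

    /* Estimate source strength at one scan point, given per-microphone
       signals x[m][t] and precomputed sample delays for that point. */
    double scan_point_power(const double x[NMIC][NSAMP], const int delay[NMIC])
    {
        double power = 0.0;
        for (int t = 0; t < NSAMP; t++) {
            double sum = 0.0;
            for (int m = 0; m < NMIC; m++) {
                int ts = t - delay[m];           /* align arrival times   */
                if (ts >= 0 && ts < NSAMP)
                    sum += x[m][ts];
            }
            power += sum * sum;                  /* energy of focused sum */
        }
        return power / NSAMP;
    }

Each scan point is independent of the others, which is why this kind of processing distributes so naturally across the SP-2 nodes.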
A graphical user interface provides an easy-to-use data entry environment. Accomplishments: Prototype software is complete, linking the NPRIME data collection system, the graphical interface, and the IBM SP-2. Calibration tests were carried out during late July and early August. A survey of the acoustic environment around a DC-10 model in the 40x80 wind tunnel was completed September 1, 1995. Significance: Recent improvements in engine noise have increased the relative contribution of the airframe to the total noise produced by aircraft during landing. Tighter airport noise regulations may limit the markets of U.S. transport aircraft manufacturers. Prior analysis procedures completed the analysis of a few frequencies overnight. The new system provides analysis for dozens of frequencies in less than 5 minutes (between test points). During the DC-10 test, several previously unknown noise sources were identified. McDonnell-Douglas and other participants in the Advanced Subsonic Transport (AST) program are pleased with the results. Status/Plans: Analysis of the DC-10 data is continuing. Improvements in parallelism and solution efficiency should allow the visualization of hundreds of frequencies in near-real-time. With a greater number of microphones, greater resolution will become possible at higher frequencies, and volume (as opposed to surface) rendering will be practical. This will require even more computational horsepower to meet the near-real-time requirement. Point(s) of Contact: Merritt H. Smith NASA Ames Research Center mhsmith@nas.nasa.gov (415)604-4493 Mike Watts NASA Ames Research Center Mike_Watts@qmgate.arc.nasa.gov (415)604-6574 DARWIN Web Page at NASA Ames Research Center: http://racimac.arc.nasa.gov/ MD5{32}: e1ea879b31583b5d88c5c1513d86ab1b File-Size{4}: 3084 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{63}: DARWIN/HPCC Phased-Array Acoustic Measurement and Visualization } @FILE { http://cesdis.gsfc.nasa.gov/linux/drivers/tulip.html Update-Time{9}: 827948898 url-references{354}: http://cesdis.gsfc.nasa.gov/cesdis.html /linux/drivers/tulip.c #other tulip.c v1.3/tulip.c new-tulip.c /pub/people/becker/beowulf.html tulip.patch http://cesdis.gsfc.nasa.gov/cesdis.html http://hypatia.gsfc.nasa.gov/NASA_homepage.html http://hypatia.gsfc.nasa.gov/GSFC_homepage.html http://www.hal.com/~markg/WebTechs/ #top /pub/people/becker/whoiam.html title{30}: Linux and the DEC "Tulip" Chip keywords{250}: after all and author becker beowulf better center cesdis chip complete dec description donald driver drivers extra features file fix flight for goddard implemented linux nasa other patch pci performance project space the this top tulip unneeded with images{56}: http://www.hal.com/~markg/WebTechs/images/valid_html.gif headings{142}: Linux and the DEC "Tulip" Chip Errata Using the 10base2 or AUI Port Setting the cache alignment Ethercards reported to use the DEC 21040 chip body{4294}: This page contains information on using Linux with the DEC 21040/21140 "Tulip" chips, as used on the SMC PCI EtherPower and other ethercards. The master copy of this page resides on the CESDIS WWW server. The driver for the DEC 21040 "Tulip" chip is now available! It has been integrated with the kernel source tree since 1.1.90, although it remains commented out in the configuration file.
This driver works with the SMC PCI EtherPower card as well as many other PCI ethercards. This driver is available in several versions: the standard, tested v0.07a for 1.2.* series released kernels; the same conservative driver v0.07a with the extra support needed to work with the 1.3.* development kernels; and the latest testing version of the driver, with better performance and extra features, which will compile with all 1.2.* kernels and recent 1.3.* development kernels. This driver was written to support the Beowulf cluster project at CESDIS. For Beowulf-specific information, read the Beowulf project description. The new generation Beowulf uses two 21140 100baseTX boards on every processor, with each network connected by 100baseTX repeaters. There are two known problems with the code previously distributed: The driver always selects the 10baseT (RJ45) port, not the AUI (often 10base2/BNC) port. The driver fails with corrupted transfers when used with some motherboard chipsets, such as the Intel Saturn used on the ASUS SP3G. Both of these problems have fixes as described below. The complete patch file fixes these problems as well as cleaning up some of the development messages. The new driver automatically switches media when the 10baseT port fails. On the 21040 it switches to the AUI (usually 10base2) media, and on the 21140 it configures the chip into a 100baseTx compatible mode. This fix is unneeded in all Tulip drivers after v0.05. To use the 10base2 port with the driver in 1.2.[0-5] you must change the setting of one SIA (serial interface) register. Make the following change around line 325:

-    outl(0x00000004, ioaddr + CSR13);
+    outl(0x0000000d, ioaddr + CSR13);

This fix is implemented in all Tulip drivers after v0.04. The pre-1.2 driver experienced packet data corruption when used with some motherboards, most notably the ASUS SP3G. The workaround is to set the cache alignment parameters in the Tulip chip to their most conservative values:

--- /usr/src/linux-1.1.84/drivers/net/tulip.c   Sun Jan 22 15:42:12 1995
+++ tulip.c                                     Sun Jan 22 16:21:44 1995
@@ -268,9 +271,15 @@
     /* Reset the chip, holding bit 0 set at least 10 PCI cycles. */
     outl(0xfff80001, ioaddr + CSR0);
     SLOW_DOWN_IO;
-    /* Deassert reset. Wait the specified 50 PCI cycles by initializing
+    /* Deassert reset. Set 8 longword cache alignment, 8 longword burst.
+         Cache alignment (bits 15:14)   Burst length (bits 13:8)
+           0000  no alignment            0000  unlimited    0800  8 longwords
+           4000  8 longwords             0100  1 longword   1000  16 longwords
+           8000  16 longwords            0200  2 longwords  2000  32 longwords
+           C000  32 longwords            0400  4 longwords
+       Wait the specified 50 PCI cycles after a reset by initializing
        Tx and Rx queues and the address filter list. */
-    outl(0xfff80000, ioaddr + CSR0);
+    outl(0xfff84800, ioaddr + CSR0);
     if (irq2dev_map[dev->irq] != NULL || (irq2dev_map[dev->irq] = dev) == NULL

This is reportedly a bug in the motherboard chipset's implementation of burst mode transfers. The patch above turns on a feature in the Tulip that's supposed to reduce the performance impact of maintaining cache consistency, but it is also a way to effectively limit the burst transfer length to a size the chipset can handle without error.
Accton EtherDuo PCI
Cogent EM100
Cogent EM400 (same with 4 ports + PCI bridge)
Cogent EM964 Quartet -- four 21040 ports and a DEC 21050 PCI bridge
Danpex EN-9400P3
D-Link DFE500-Tx (possibly inaccurate report)
D-Link DE-530CT
Linksys EtherPCI
COMMENT "Linksys, Irvine CA, 800-546-57973, 714-261-1288, 73430.3634@compuserve.com"> SMC EtherPower With DEC21040 -- my development board. SMC EtherPower10/100 With DEC21140 -- also tested. Thomas Conrad TC5048 Znyx ZX312 EtherAction Znyx ZX315 EtherArray Two 21040 10baseT/10base2 ports and a DEC 21050 PCI bridge Znyx ZX342 (With 4 ports + PCI bridge?) CESDIS is located at the NASA Goddard Space Flight Center in Greenbelt MD. address{57}: Top Author: Donald Becker , becker@cesdis.gsfc.nasa.gov. MD5{32}: 916d54c95aad8167127a02e2739d5e2f File-Size{4}: 5839 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{30}: Linux and the DEC "Tulip" Chip } @FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node49.html Update-Time{9}: 827948639 title{12}: Conclusions keywords{39}: aug chance conclusions edt reschke tue images{203}: /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif img43.gif head{435}: Next: AcknowledgmentsUp: A Petaops is Previous: Overall Architecture Conclusions A petaops system is obviously an extremely aggressive target, but a C RAM design that focuses on power consumption and bandwidth makes it plausible. While the technologies we propose are far from "proven", they are within the bounds of the imaginable with present fabrication processes and system engineering. Chance Reschke Tue Aug 15 08:59:12 EDT 1995 MD5{32}: f37fd490b7995c8c08f78cdff6bcdadc File-Size{4}: 1710 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{12}: Conclusions } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.list.accomp.html Update-Time{9}: 827948833 title{15}: --_-_-_-_-_-_-- MD5{32}: 600b3020ad6565944ae086b26c7a7145 File-Size{4}: 3859 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/testbed/graphics/ Update-Time{9}: 827948842 url-references{115}: /hpccm/annual.reports/cas94contents/testbed/ bar.gif cas.gif hpccsmall.gif return.gif search.button.gif smaller.gif title{62}: Index of /hpccm/annual.reports/cas94contents/testbed/graphics/ keywords{68}: bar button cas directory gif hpccsmall parent return search smaller images{134}: /icons/blank.xbm /icons/menu.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif headings{62}: Index of /hpccm/annual.reports/cas94contents/testbed/graphics/ body{272}: Name Last modified Size Description Parent Directory 19-Jul-95 15:55 - bar.gif 17-Jul-95 13:51 3K cas.gif 17-Jul-95 13:51 22K hpccsmall.gif 17-Jul-95 13:51 2K return.gif 17-Jul-95 13:51 1K search.button.gif 17-Jul-95 13:51 2K smaller.gif 17-Jul-95 13:51 23K MD5{32}: cb742d8ea0784718a5db7096b4f86200 File-Size{4}: 1241 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{62}: Index of 
/hpccm/annual.reports/cas94contents/testbed/graphics/ } @FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node55.html Update-Time{9}: 827948640 title{8}: Summary keywords{35}: aug chance edt reschke summary tue images{203}: /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif img48.gif head{474}: Next: Acknowledgment Up: Design of a Previous: Massively Parallel SIMD Summary The group has successfully simulated a toroidal mesh of processing elements using circuit design software. The simulation included all local operations. In addition, the router and global networks have been designed, and we are currently in the process of simulating them. Plans are to simulate a larger network and begin to develop a VLSI prototype. Chance Reschke Tue Aug 15 08:59:12 EDT 1995 MD5{32}: 523e5bb3f89c33cb1a47e7540909d694 File-Size{4}: 1736 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{8}: Summary } @FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node80.html Update-Time{9}: 827948642 url-references{111}: http://cbl.leeds.ac.uk/nikos/tex2html/doc/latex2html/latex2html.html http://cbl.leeds.ac.uk/nikos/personal.html title{27}: About this document ... keywords{82}: about aug chance document drakos edt html latex nikos report reschke tex this tue images{146}: /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif head{438}: Up: No Title Previous: Implications for Future About this document ... This document was generated using the LaTeX2HTML translator Version 95.1 (Fri Jan 20 1995) Copyright © 1993, 1994, Nikos Drakos, Computer Based Learning Unit, University of Leeds. The command line arguments were: latex2html report.tex . The translation was initiated by Chance Reschke on Tue Aug 15 08:59:12 EDT 1995 Chance Reschke Tue Aug 15 08:59:12 EDT 1995 MD5{32}: aeb07d9d8525534f287cb135ab52d7fd File-Size{4}: 1826 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{27}: About this document ... } @FILE { http://cesdis.gsfc.nasa.gov/pub/linux/drivers/3c59x-new.c Update-Time{9}: 827948605 Partial-Text{4784}: EL3WINDOW cleanup_module init_module set_multicast_list tc59x_init update_stats vortex_close vortex_get_stats vortex_interrupt vortex_open vortex_probe1 vortex_rx vortex_start_xmit linux/config.h linux/module.h linux/version.h linux/kernel.h linux/sched.h linux/string.h linux/ptrace.h linux/errno.h linux/in.h linux/ioport.h linux/malloc.h linux/interrupt.h linux/pci.h linux/bios32.h asm/bitops.h asm/io.h asm/dma.h linux/netdevice.h linux/etherdevice.h linux/skbuff.h /* 3c59x.c: A 3Com 3c590/3c595 "Vortex" ethernet driver for linux. */ /* Written 1995 by Donald Becker. This software may be used and distributed according to the terms of the GNU Public License, incorporated herein by reference. This driver is for the 3Com "Vortex" series ethercards. Members of the series include the 3c590 PCI EtherLink III and 3c595-Tx PCI Fast EtherLink. It also works with the 10Mbs-only 3c590 PCI EtherLink III.
The author may be reached as becker@CESDIS.gsfc.nasa.gov, or C/O Center of Excellence in Space Data and Information Sciences Code 930.5, Goddard Space Flight Center, Greenbelt MD 20771 */ /* Warning: Bogus! This means IS_LINUX_1_3. */ /* This will be in linux/etherdevice.h someday. */ /* The total size is twice that of the original EtherLinkIII series: the runtime register window, window 1, is now always mapped in. */ /* Theory of Operation I. Board Compatibility This device driver is designed for the 3Com FastEtherLink, 3Com's PCI to 10/100baseT adapter. It also works with the 3c590, a similar product with only a 10Mbs interface. II. Board-specific settings PCI bus devices are configured by the system at boot time, so no jumpers need to be set on the board. The system BIOS should be set to assign the PCI INTA signal to an otherwise unused system IRQ line. While it's physically possible to share PCI interrupt lines, the 1.2.0 kernel doesn't support it. III. Driver operation The 3c59x series use an interface that's very similar to the previous 3c5x9 series. The primary interface is two programmed-I/O FIFOs, with an alternate single-contiguous-region bus-master transfer (see next). One extension that is advertised in a very large font is that the adapters are capable of being bus masters. Unfortunately this capability is only for a single contiguous region, making it less useful than the list of transfer regions available with the DEC Tulip or AMD PCnet. Given the significant performance impact of taking an extra interrupt for each transfer, using DMA transfers is a win only with large blocks. IIIC. Synchronization The driver runs as two independent, single-threaded flows of control. One is the send-packet routine, which enforces single-threaded use by the dev->tbusy flag. The other thread is the interrupt handler, which is single threaded by the hardware and other software. IV. Notes Thanks to Cameron Spitzer and Terry Murphy of 3Com for providing both 3c590 and 3c595 boards. The name "Vortex" is the internal 3Com project name for the PCI ASIC, and the not-yet-released (3/95) EISA version is called "Demon". According to Terry these names come from rides at the local amusement park. The new chips support both ethernet (1.5K) and FDDI (4.5K) packet sizes! This driver only supports ethernet packets because of the skbuff allocation limit of 4K. */ /* 3Com's manufacturer's ID. */ /* Operational definitions. These are not used by other compilation units and thus are not exported in a ".h" file. First the windows. There are eight register windows, with the command and status registers available in each. */ /* The top five bits written to EL3_CMD are a command, the lower 11 bits are the parameter, if applicable. Note that 11 parameter bits were fine for ethernet, but the new chip can handle FDDI length frames (~4500 octets), and parameters now count 32-bit 'Dwords' rather than octets. */ /* The SetRxFilter command accepts the following classes: */ /* Bits in the EL3_STATUS general status register. */ /* Latched interrupt. */ /* Host error. */ /* EL3_CMD is still busy.*/ /* Register window 1 offsets, the window used in normal operation. On the Vortex this window is always mapped at offsets 0x10-0x1f. */ /* Remaining free bytes in Tx buffer. */ /* Window 0: EEPROM command register. */ /* Enable erasing/writing for 10 msec. */ /* Disable EWENB before 10 msec timeout. */ /* EEPROM locations. */ /* Window 3: MAC/config bits. */ /* Window 4: Various transcvr/media bits.
*/ /* Enable link beat and jabber for 10baseT. */ /* "ethN" string, also for kernel debug. */ /* Unlike the other PCI cards the 59x cards don't need a large contiguous memory region, so making the driver a loadable module is feasible. */ /* Remove I/O space marker in bit 0. */ MD5{32}: 34299d3cdecae1d422b843f6aeea0b36 File-Size{5}: 27311 Type{1}: C Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Keywords{2261}: accepts according adapter adapters advertised allocation also alternate always amd amusement and applicable are asic asm assign author available baset beat because becker before being bios bit bitops bits blocks board boards bogus boot both buffer bus busy but bytes called cameron can capability capable cards center cesdis chip chips classes cleanup close cmd code com come command compatibility compilation config configured contiguous control count data debug dec defintions demon designed dev device devices disable distributed dma doesn don donald driver dwords each eeprom eight eisa enable enforces erasing errno error ethercards etherdevice etherlink etherlinkiii ethernet ethn ewenb excellence exported extension extra fast fastetherlink fddi feasible fifos file fine first five flag flight flows following font for frames free from general get given gnu goddard gov greenbelt gsfc handle handler hardware herein host iii iiic impact include incorporated independent information init inta interface internal interrupt ioport irq jabber jumpers kernel large latched lenght less license limit line lines link linux list loadable local locations lower mac making malloc manufacturer mapped marker master masters may mbs means media members memory module msec multicast murphy name names nasa need netdevice new next normal not note notes now octets offsets one only open operation operational original other otherwise packet packets parameter parameters park pci pcnet performance physically possible previous primary probe product programmed project providing ptrace public rather reached reference region regions register registers released remaining remove rides routine runs runtime sched sciences see send series set setrxfilter settings shared should signal significant similar single size sizes skbuff software someday space specific spitzer start stats status still string support supports synchronization system taking tbusy terms terry than thanks that the theory there these this thread threaded thus time timeout top total transcvr transfer transfers tulip twice two unfortunately units unlike unused update use used useful using various version very vortex warning was which while will win window windows with works writing written xmit yet Description{9}: EL3WINDOW } @FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node35.html Update-Time{9}: 827948636 title{14}: Open Problems keywords{41}: aug chance edt open problems reschke tue images{387}: /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif head{4094}: Next: ConclusionsUp: Heterogeneous Computing: One Previous: A Conceptual Model Open Problems A great many open problems 
need to be solved before heterogeneous computing can be made available to the average applications programmer in a transparent way. Many (possibly even most) need to be addressed just to facilitate near-optimal practical use of real heterogeneous suites in a ``visible'' (i.e., user specified) way. Below is a brief discussion of some of these open problems; it is far from exhaustive, but it will convey the types of issues that need to be addressed. Others may be found in [13, 28]. Implementation of an automatic HC programming environment, such as envisioned in Section 3, will require a great deal of research for devising practical and theoretically sound methodologies for each component of each stage. A general open question that is particularly applicable to stages 1 and 2 of the conceptual model is: ``What information should (must) the user provide and what information should (can) be determined automatically?'' For example, should the user specify the subtasks within an application or can this be done automatically? Future HC systems will probably not completely automate all of the steps in the conceptual model. A key to the future success of HC hinges on striking a proper balance between the amount of information expected from the user (i.e., effort) and the level of performance delivered by the system. To program an HC system, it would be best to have machine-independent programming languages [33] that allow the user to augment the code with compiler directives. The programming language and user specified directives should be designed to facilitate (a) the compilation of the program into efficient code for any of the machines in the suite, (b) the decomposition of tasks into homogeneous subtasks, and (c) the use of machine-dependent subroutine libraries. Along with programming languages, there is a need for debugging and performance tuning tools that can be used across an HC suite of machines. This involves research in the areas of distributed programming environments and visualization tools. Operating system support for HC is needed. This includes techniques applicable at both the local machine level and at the system-wide network level. Ideally, information about the current loading and status of the machines in the HC suite and the network that is linking these machines should be incorporated into the matching and scheduling decisions. Many questions arise here: what information to include in the status (e.g., faulty or not, pending tasks), how to measure current loading, how to effectively incorporate current loading information into matching and scheduling decisions, how to communicate and structure the loading and status information in the other machines, how often to update this information, and how to estimate task/transfer completion time? There is much ongoing research in the area of inter-machine data transport. This research includes the hardware support required, the software protocols required, designing the network topology, computing the minimum time path between two machines, and devising rerouting schemes in case of faults or heavy loads. Related to this is the data reformatting problem, involving issues such as data type storage formats and sizes, byte ordering within data types, and machines' network-interface buffer sizes. Another area of research pertains to methods for dynamic task migration between different parallel machines at execution time. This could be used to rebalance loads or if a fault occurs.
Current research in this area involves how to move an executing task between different machines and determining how and when to use dynamic task migration for load balancing. Lastly, there are policy issues that require system support. These include what to do with priority tasks, what to do with priority users, what to do with interactive tasks, and security. Next: Conclusions Up: Heterogeneous Computing: One Previous: A Conceptual Model Chance Reschke Tue Aug 15 08:59:12 EDT 1995 MD5{32}: 0cdbe57a154f3eccc169035a991f156e File-Size{4}: 6080 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{14}: Open Problems } @FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node16.html Update-Time{9}: 827948634 title{8}: Summary keywords{41}: aug chance edt known reschke summary tue images{193}: /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif head{538}: Next: Workshop Organization Up: Issues for Petaflops Previous: Important Issues and Summary Clearly, the challenges to developing a petaflops computer are formidable. And, that applies to the known challenges. The unknown ones will be confronted when they emerge. They may---and probably will---fall into most of the distinct areas listed earlier. Perhaps the most important point to be gleaned from this discussion is that working experts think that petaflops computing within 20 years is feasible. Chance Reschke Tue Aug 15 08:59:12 EDT 1995 MD5{32}: d02667b7310f2336523fb43a96b5d5e7 File-Size{4}: 1773 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{8}: Summary } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw/npss.html Update-Time{9}: 827948648 url-references{109}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw.html mailto:lpicha@cesdis.gsfc.nasa.gov title{40}: NPSS MOD1 Engine Simulation with Zooming keywords{46}: contents curator larry picha return table the images{19}: graphics/return.gif headings{73}: NPSS MOD1 Engine Simulation with Zooming Return to the Table of Contents body{2489}: Objective: The Numerical Propulsion System Simulation is a program focused on reducing the cost and time in developing aeropropulsion engines. The NPSS program intends to build a simulation environment that allows for the arbitrary construction of engine configurations for analysis and design. Furthermore, the software environment will permit the choice of analysis techniques, analysis complexity, languages and the ability to access and manage data from various sources. Approach: As a first step, NPSS will provide a prototype object based 1D Steady State, Transient thermodynamic aircraft engine simulator based on the public domain DIGTEM engine simulation. The prototype built demonstrated the usefulness of object oriented modeling for dynamic engine simulations, for distributed applications and for supporting numerical zooming. Accomplishment: In FY93, the NPSS Simulation environment was extended to include the ability to Numerically Zoom between levels of fidelity of codes.
The NPSS MOD0 release provided the correct software platform that enables engine component codes to be distributed across computing architectures. The NPSS MOD1 release with accompanying documentation was made available to industry in February 94. Specifically, NPSS MOD1: Demonstrated that Numerical Zooming was achievable through the use of an Object Oriented design of the DIGTEM code; Significance: The NPSS MOD1 employs the object based model for engine simulations. The object model allows for engine components such as a compressor, combustor, turbine, shaft, etc., to be modeled in the numerical simulation as independent entities that can be replaced with component models of greater fidelity that execute on differing computing platforms in a dynamic environment. This capability combined with the graphical user interface allows an engineer to construct arbitrary engine configurations with ease. Status/Plans: The NPSS engine simulation prototypes have generated interest within the US Aeropropulsion industry to work with Lewis on defining and building a US standard for 1D preliminary design codes. In FY94, Lewis and the US Aeropropulsion Industry began to build a new Object Oriented based 1D design code that will: 1) Incorporate the NPSS concept of numerical zooming and; 2) Incorporate the Multi-disciplinary interactions through Object Oriented Modeling. Point of Contact: Gregory Follen NASA Lewis Research Center (216) 433-5193 gfollen@lerc.nasa.gov curator: Larry Picha MD5{32}: 0f20ac33d7b5b31eb7a2e738e239a200 File-Size{4}: 2951 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{40}: NPSS MOD1 Engine Simulation with Zooming } @FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node41.html Update-Time{9}: 827948637 title{37}: SIA Projections and CPU Architecture keywords{71}: and architecture aug chance cpu edt figure projections reschke sia tue images{417}: /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif img12.gif img13.gif img14.gif /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif head{3431}: Next: Open Issues Up: Processors-In-Memory (PIM) Chip Previous: Introduction SIA Projections and CPU Architecture To do this we assume two different CPU architectures. The first, based on the EXECUBE experience, assumes that each CPU is designed simply, and is optimized for fixed point computations. For this we assumed an EXECUBE-like 12K circuit CPU which executes an average instruction in about 2.5 clock cycles. The second CPU assumes a design optimized for floating point, but as with EXECUBE, a simpler (but more efficient in terms of FLOPS/silicon) design point is chosen than what is common in high end microprocessors today. We assume a 100K circuit CPU that can operate on the average at 1 FLOP per clock. The other major assumption we make is that in a mixed DRAM/logic configuration and at any projected point in time, we can smoothly vary the transistor usage on one chip from 100% logic (using the maximum projected logic density) to 100% DRAM (assuming the maximum projected DRAM density).
Thus, we can look at different numbers of CPUs on a chip, with different amounts of memory available to them. The reason for this latter tradeoff is that during the workshop it became apparent that the major economic constraint on reaching a petaflops system was the cost of the memory system to support it. Based on typical rules of thumb, a petaflop would require about a petabyte of memory, which even with very dense DRAM, would be on the order of 10,000s of chips. When this was realized, the application workgroup at the workshop came to the conclusion that there were reasonable petaflops applications where at least a ``0.03 byte per FLOP'' rule would apply, meaning that perhaps only about 32 terabytes of memory might be needed for some applications. Instead of the typical ``1 byte per FLOP" rule, this translates into a ``0.03 byte/FLOP" rule. Figure 2 rolls these design assumptions, together with the SIA projections, into a spectrum of potential chip and system configurations assuming an EXECUBE-like largely fixed point CPU macro. (Note that this chart assumes extending the 1992 SIA projections out through 2010 and 2013.) Figure: PIM Configurations for a PetaOP Figure 3 does the same for the assumed floating point CPU macro. The calculations behind the (a) chart in each figure were performed at several different year points, and took the projected logic density to determine how many CPUs might fit on different percentages of a chip. From this, and the projected on chip clock speeds, we determined a projected per chip performance number. This was plotted against the amount of memory that could be placed in the remainder of the chip (the ``knee-shaped" curves). Through these curves were then drawn straight lines that represent different ratios of storage to performance, to match the above discussion. Figure: PIM Configurations for a Petaflop The (b) charts in each figure then use the intersections of these pairs of curves to determine how many chips would be needed to reach a petaflops system, again for different ratios of memory to performance. The numbers agree with the feeling of the Pasadena workshop, namely that a PIM-based architecture has the potential to achieve huge levels of performance with far fewer chips (and thus cost) than the other approaches.
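Making the rule-of-thumb arithmetic above explicit (an expansion of the figures quoted in the text, nothing more):

\[
10^{15}\ \mathrm{FLOPS} \times 1\ \mathrm{byte/FLOPS} = 10^{15}\ \mathrm{bytes} = 1\ \mathrm{petabyte},
\qquad
10^{15}\ \mathrm{FLOPS} \times 0.03\ \mathrm{byte/FLOPS} = 3\times 10^{13}\ \mathrm{bytes} \approx 32\ \mathrm{terabytes},
\]

i.e. the relaxed rule cuts the required memory, and hence the dominant system cost, by a factor of about 30.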
Next: Open Issues Up: Processors-In-Memory (PIM) Chip Previous: Introduction Chance Reschke Tue Aug 15 08:59:12 EDT 1995 MD5{32}: 9b40aa5e5d63d577f868e4b7bf3dd03e File-Size{4}: 5730 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{37}: SIA Projections and CPU Architecture } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/hpcc.nasa.html Update-Time{9}: 827948598 url-references{670}: sound.bytes/holcomb.aiff sound.bytes/trans.html http://cesdis.gsfc.nasa.gov/hpccm/hpcc.classic.html http://www.nas.nasa.gov/home.html http://cesdis1.gsfc.nasa.gov:80/Harvest/brokers/cesdis1.gsfc.nasa.gov/query.html http://www.hpcc.gov/ http://hypatia.gsfc.nasa.gov/NASA_homepage.html http://www.hq.nasa.gov/ iitf.hp/iitf.html http://www.arc.nasa.gov/x500.html http://cesdis.gsfc.nasa.gov/petaflops/peta.html admin/hot.html mailto:lpicha@cesdis.gsfc.nasa.gov http://cesdis.gsfc.nasa.gov/web-stats/overview.html http://cesdis.gsfc.nasa.gov/people/lpicha/whoiam.html http://cesdis.gsfc.nasa.gov/ http://www.usra.edu/ http://sdcd.gsfc.nasa.gov http://sdcd.gsfc.nasa.gov/ESD/ title{26}: NASA HPCC Office Web Page keywords{535}: accesses aerodynamic among and are association authorizing authors available center cesdis code comments communications communities computing data detailed director directorate displayed division earth email excellence file gets graphically here high holcomb hpcc information introduction last lawrence lee nas nasa number numerical office official others page past performance picha privy program questions raw research revised sciences served server simulation space statistics the this transcript universities welcome what you your images{746}: hpcc.graphics/nasa.meatball.gif hpcc.graphics/hpcc.header.gif hpcc.graphics/sound.gif hpcc.graphics/hpcc.star.gif hpcc.graphics/blue.bullet.gif hpcc.graphics/blue.bullet.gif hpcc.graphics/blue.bullet.gif hpcc.graphics/NAS.gif hpcc.graphics/blue.bullet.gif hpcc.graphics/blue.bullet.gif hpcc.graphics/blue.bullet.gif hpcc.graphics/blue.bullet.gif hpcc.graphics/blue.bullet.gif hpcc.graphics/blue.bullet.gif hpcc.graphics/search.button.gif hpcc.graphics/nco.button.gif hpcc.graphics/nasa.button.gif hpcc.graphics/hq.button.gif hpcc.graphics/iitf.button.gif hpcc.graphics/people.button.gif hpcc.graphics/peta.button.gif hpcc.graphics/hpccsmall.gif hpcc.graphics/mailbutton.gif hpcc.graphics/new.gif hpcc.graphics/metric.gif hpcc.graphics/wavebar.gif headings{333}: Welcome to the NASA High Performance Computing and Communications Office The NASA HPCC Office represents important national computational capabilities: The High Performance Computing & Communications (HPCC) Program The Numerical Aerodynamic Simulation (NAS) Program Scientific and Engineering Computing Announcements Other Resources: body{2754}: (NASA Code RC) Welcome and introduction by Lee B. Holcomb, Director of the NASA HPCC Office. (188K) A transcript of Lee Holcomb's welcome and introduction is also available. Extend U.S. technological leadership in high performance computing and communications Provide wide dissemination and application of the technologies Spur gains in U.S.
productivity and industrial competitiveness %> Act as a pathfinder in advanced large-scale computer system capability Provide a national computational capability to NASA, industry, DoD, other Government Agencies Provide a strong research tool for Office of Aeronautics %> The Office of Aeronautics conducts research and technology development programs in support of NASA's Aeronautics Enterprise; this consists of a Headquarters program office and four field centers. Scientific and engineering computing is a critical element in Office of Aeronautics' strategy for success; this provides: Computational modeling of vehicle and component structure, operation, and flight characteristics Laboratory support consisting of experimentation control and observation, data collection and storage, analysis, and distribution Logistical support for researchers that includes on-line access to published materials as well as raw data from analytic and experimental work, and improved communications capabilities ranging from electronic messaging to video-conferencing with collaborative visualization tools. NASA Awards $7.1 Million For New Internet Education Projects NASA HPCC/ESS Cooperative Agreement Notice (CAN)The ESS Project will obtain one or more major next-generation scalable parallel testbeds and award new Grand Challenge cooperative agreements through this multimillion-dollar CAN. Workshop on Remote Exploration and Experiment (REE) ProgramAugust 21-23, 1995, Jet Propulsion Laboratory, Doubletree Hotel, Pasadena, California. [past] N E W S : A Calendar of Information and Events Relating to the NASA HPCC Program email your questions or comments File Server Statistics . (Here you are privy to detailed information on the number of accesses this page gets, among others at CESDIS, and what communities are served. The raw data and the data graphically displayed are available.) Authorizing NASA Official: Lee B. Holcomb, Director, NASA HPCC Office Authors: Lawrence Picha (lpicha@usra.edu) & Michele O'Connell (michele@usra.edu), Center of Excellence in Space Data and Information Sciences , Universities Space Research Association , NASA Goddard Space Flight Center, Greenbelt, Maryland. Last revised: 22 NOV 95 (l.picha) A service of the Space Data and Computing Division , Earth Sciences Directorate , NASA Goddard Space Flight Center. MD5{32}: a225c9c495a39459433705edcc07a710 File-Size{4}: 6549 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{26}: NASA HPCC Office Web Page } @FILE { http://cesdis.gsfc.nasa.gov/linux/misc/10mbps.html Update-Time{9}: 827948620 url-references{172}: /linux/beowulf/beowulf.html http://www.cirrus.com/prodtech/ov.comm/cs8900.html http://www.amd.com/html/products/ind/overview/18051c.html #top /pub/people/becker/whoiam.html title{31}: 10mbps Ethernet Technology Page keywords{111}: amd author becker beowulf cesdis cluster ethernet family gov gsfc linux mbps nasa pcnet project technology top headings{69}: 10mbps Ethernet Technology Summary: Links to Ethernet controllers. body{260}: Descriptions, implementation technologies, software support, and references related to Ethernet. This document was written in support of the Beowulf Linux Cluster Project . CS8900 : ISA bus Ethernet network interface controller AMD PCnet family . 
Top address{35}: Author: becker@cesdis.gsfc.nasa.gov MD5{32}: e5bf81396c5398ff65b7ce981d70c033 File-Size{3}: 927 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{31}: 10mbps Ethernet Technology Page } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.toc.html Update-Time{9}: 827948661 url-references{1317}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.darwin.html http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.himap.html http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.nra.html http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.overflow.html http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.rans.mp.html http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.aims.html http://www.nas.nasa.gov/NAS/Tools/Projects/AIMS/ http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.nas.tr.vis.html cas.95.ar.p2d2.html app.software.html app.software.html app.software.html app.software.html app.software.html app.software.html app.software.html app.software.html app.software.html app.software.html app.software.html app.software.html app.software.html app.software.html app.software.html app.software.html app.software.html app.software.html app.software.html http://hpccp-www.larc.nasa.gov/~fido/homepage.html cas.95.ar.npss.html http://cesdis.gsfc.nasa.gov/hpccm/cas.hp/cas.html http://cesdis.gsfc.nasa.gov/people/lpicha/whoiam.html http://cesdis.gsfc.nasa.gov/ http://www.usra.edu/ http://sdcd.gsfc.nasa.gov http://sdcd.gsfc.nasa.gov/ESD/ keywords{1342}: acoustic adifor aerodynamic aeroelastic aeronautics affordable aims aircraft ames analysis and announcements applications array assessment association authors automatic based block calculation calculator cas cds center cesdis cfd clusters coarse code complex computations computer computers computing cooperative coordinate coupling cycle darwin data debugger deck derivative derivatives design differentiation direct directorate disciplinary distributed division earth enhancements environments evaluation excellence extension fem fido flow for fortran framework from gov grained grids gsfc heterogeneous high himap hpcc hpccm hpccp html http ibm identified information integration interactive interdisciplinary krylov large last launcher lawrence measurement memory method methods models module multi multidisciplinary multithreaded nas nasa navier newton ntv numerical optimization overflow parallel parallelism parallelization partition performance phased picha portable potential process propulsion rans rendering requirements research revised robust schwarz sciences sensitivity sharing simulation simulations solver space sponsored state status steady stokes structural support supported system systems task the trace tuning universities unsteady unstructured using version visualization visualizer volume with workstation worktations head{3151}: background="graphics/casback.gif"> The CAS 1995 Annual ReportThe NASA High Performance Computing and Communications Program PresentsThe Computational Aerosciences (CAS) Project 1995 Annual ReportTable of ContentsDARWIN/HPCC Phased-Array Acoustic Measurement and Visualization HiMAP Based Aeroelastic Computations on IBM SP2 Computer Status of Ames Sponsored 
HPCCP NASA Research Announcements A Supported Version of OVERFLOW for Parallel Computers and Workstation Clusters Multi-partition Parallel Flow Solver Module RANS-MP Tuning Parallel Applications with AIMS See AlsoAIMS The NTV - The NAS Trace Visualizer The Portable Parallel/Distributed Debugger (p2d2) The Cooperative Data Sharing (CDS) System Parallel Calculation of Sensitivity Derivatives for Aircraft Design Using Automatic Differentiation Multi-partition Parallel Flow Solver Module RANS-MP ADIFOR 2.0 Automatic Differentiation for Derivative-Based Multidisciplinary Design Optimization Requirements for an Aeronautics Affordable Systems Optimization Process Interactive Visualization of Unsteady Flow Multithreaded System for Distributed Memory Environments Enhancements to the Coordinate and Sensitivity Calculator for Multi-disciplinary Design Optimization Robust Method for Coupling CFD and FEM Analysis Identified from Assessment of Potential Methods Aeroelastic Design using Distributed Heterogeneous Computers Evaluation and Extension of High Performance Fortran ADIFOR 2.0 Automatic Differentiation for Derivative-Based Multidisciplinary Design Optimization Newton-Krylov-Schwarz: A Parallel Solver for Steady Aerodynamic Applications Parallel Volume Visualization on Unstructured Grids Support for Integration of Task and Data Parallelism Structural Analysis of Large Complex Models on IBM Worktations Coarse-Grained Parallelization of a Multi-Block Navier-Stokes Code High Performance Parallel Rendering on the IBM SP2 Direct Navier-Stokes Simulations on the IBM SP Parallel System FIDO: Framework for Interdisciplinary Design Optimization Numerical Propulsion System Simulation Steady State Cycle Deck Launcher If you are interested in additional information on this project or related activities you may access the CAS Home Page on the World Wide Web at: http://cesdis.gsfc.nasa.gov/hpccm/cas.hp/cas.html or contact the following Authorizing NASA officials: William Feiereisen Project Manager, Computational Aerosciences Project High Performance Computing and Communications Office NASA - Ames Research Center, Moffett Field, California 94035 Paul Hunter Program Manager, High Performance Computing and Communications Program High Performance Computing and Communications Office NASA - Headquarters, Washington, DC 20546 (202) 358-4618 p_hunter@aeromail.hq.nasa.gov Authors: Lawrence Picha (lpicha@usra.edu) & Michele O'Connell (michele@usra.edu), Center of Excellence in Space Data and Information Sciences , Universities Space Research Association , NASA Goddard Space Flight Center, Greenbelt, Maryland. Last revised: 18 OCT 95 (m.oconnell)A service of the Space Data and Computing Division , Earth Sciences Directorate , NASA Goddard Space Flight Center. 
MD5{32}: d1d03af63d146ba83a9604965b4f897c File-Size{4}: 5706 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 } @FILE { http://cesdis.gsfc.nasa.gov/linux/drivers/tulip.patch Update-Time{9}: 827948898 MD5{32}: 173d3420977c322faaeb8f5e019af3eb File-Size{4}: 3376 Type{5}: Patch Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 } @FILE { http://cesdis.gsfc.nasa.gov/pub/linux/html/Ethernet-HOWTO-9.html Update-Time{9}: 827948630 url-references{420}: Ethernet-HOWTO.html#toc9 http://cesdis.gsfc.nasa.gov/pub/linux/linux.html Ethernet-HOWTO-10.html#lilo http://cesdis.gsfc.nasa.gov/linux/misc/multicard.html Ethernet-HOWTO-7.html#probe Ethernet-HOWTO-7.html#data-xfer Ethernet-HOWTO-3.html#boca-pci Ethernet-HOWTO-3.html#pcnet-32 Ethernet-HOWTO-5.html#utp Ethernet-HOWTO-10.html Ethernet-HOWTO-8.html Ethernet-HOWTO.html#toc9 Ethernet-HOWTO.html#toc Ethernet-HOWTO.html #0 title{26}: Frequently Asked Questions keywords{445}: addresses alpha always amd and any anything arguments asked beginning big boca card cards chapter clones com contents don drivers ethercards ethernet faqs frequently getting hewlett home linux machine more multiple net never next not number one packard page pair passing pause pci pcnet per previous probed problem problems programmed questions really reason section solution specific surfing table than them this top twisted use using vlb with headings{356}: 9 9.1 9.2 9.3 9.4 9.5 9.6 9.7 FAQs Not Specific to Any Card. Token Ring 32 Bit / VLB / PCI Ethernet Cards FDDI Linking 10BaseT without a Hub SIOCSFFLAGS: Try again Link UNSPEC and HW-addr of 00:00:00:00:00:00 Huge Number of RX and TX Errors Entries in for Ethercards Linux and ``trailers'' Non-existent Apricot NIC is detected body{20862}: Frequently Asked Questions Contents of this section Here are some of the more frequently asked questions about using Linux with an Ethernet connection. Some of the more specific questions are sorted on a `per manufacturer basis'. However, since this document is basically `old' by the time you get it, any `new' problems will not appear here instantly. For these, I suggest that you make efficient use of your newsreader. For example, nn users would type to get all the news articles in your subscribed list that have `3c' in the subject. (ie. 3com, 3c509, 3c503, etc.) The moral: Read the man page for your newsreader. Alpha Drivers -- Getting and Using them I heard that there is an alpha driver available for my card. Where can I get it? The newest of the `new' drivers can be found on Donald's new ftp site: in the area. Things change here quite frequently, so just look around for it. There is still all the stuff on the old ftp site in , but this is not being actively maintained, and hence will be of limited value to most people. As of recent v1.1 kernels, the `useable' alpha drivers have been included in the standard kernel source tree. When running you will be asked if you want to be offered ALPHA test drivers. Now, if it really is an alpha, or pre-alpha driver, then please treat it as such. In other words, don't complain because you can't figure out what to do with it. If you can't figure out how to install it, then you probably shouldn't be testing it. Also, if it brings your machine down, don't complain. 
Instead, send us a well-documented bug report, or even better, a patch! People reading this while net-surfing may want to check out Don's Linux Home Page for the latest dirt on what is new and upcoming. Using More than one Ethernet Card per Machine What needs to be done so that Linux can run two ethernet cards? The hooks for multiple ethercards are all there. However, note that only one ethercard is auto-probed for by default. This avoids a lot of possible boot-time hangs caused by probing sensitive cards. There are two ways that you can enable auto-probing for the second (and third, and...) card. The easiest method is to pass boot-time arguments to the kernel, which is usually done by LILO. Probing for the second card can be achieved by using a boot-time argument as simple as . In this case and will be assigned in the order that the cards are found at boot. Say you want the card at to be and the card at to be , then you could use . The command accepts more than the IRQ + i/o + name shown above. (Illustrative ether= lines appear in the sketch following the NE*000 answers below.) Please have a look at Passing Ethernet Arguments... for the full syntax, card-specific parameters, and LILO tips. These boot-time arguments can be made permanent so that you don't have to re-enter them every time. See the LILO configuration option `' in the LILO manual. The second way (not recommended) is to edit the file and replace the entry for the i/o address that you want probed with a zero. This will enable autoprobing for that device, be it and so on. If you really need more than four ethernet cards in one machine, then you can clone the entry and change to . Note that if you are intending to use Linux as a gateway between two networks, you will have to re-compile a kernel with IP forwarding enabled. Usually using an old AT286 with something like the `kbridge' software is a better solution. If you are viewing this while net-surfing, you may wish to look at a mini-howto Donald has on his WWW site. Check out Multiple Ethercards. Problems with NE1000 / NE2000 cards (and clones) Problem: NE*000 ethercard at doesn't get detected anymore. Reason: Recent kernels (> 1.1.7X) have more sanity checks with respect to overlapping i/o regions. Your NE2000 card is wide in i/o space, which makes it hit the parallel port at . Other devices that could be there are the second floppy controller (if equipped) at and the secondary IDE controller at . If the port(s) are already registered by another driver, the kernel will not let the probe happen. Solution: Either move your card to an address like or compile without parallel printer support. Problem: Network `goes away' every time I print something (NE2000). Reason: Same problem as above, but you have an older kernel that doesn't check for overlapping i/o regions. Use the same fix as above, and get a new kernel while you are at it. Problem: NE*000 ethercard probe at 0xNNN: 00 00 C5 ... not found. (invalid signature yy zz) Reason: First off, do you have a NE1000 or NE2000 card at the addr. 0xNNN? And if so, does the hardware address reported look like a valid one? If so, then you have a poor NE*000 clone. All NE*000 clones are supposed to have the value 0x57 in bytes 14 and 15 of the SA PROM on the card. Yours doesn't -- it has `yy zz' instead. Solution: The driver (/usr/src/linux/drivers/net/ne.c) has a "Hall of Shame" list at about line 42. This list is used to detect poor clones. For example, the DFI cards use `DFI' in the first 3 bytes of the prom, instead of using 0x57 in bytes 14 and 15, like they are supposed to.
You can determine what the first 3 bytes of your card PROM are by adding a line like: printk("PROM prefix: %#2x %#2x %#2x\n", SA_prom[0], SA_prom[1], SA_prom[2]); into the driver, right after the error message you got above, and just before the "return ENXIO" at line 227. Reboot with this change in place, and after the detection fails, you will get the three bytes from the PROM like the DFI example above. Then you can add your card to the bad_clone_list[] at about line 43. Say the above line printed out: after you rebooted. And say that the 8 bit version of your card was called the "FOO-1k" and the 16 bit version the "FOO-2k". Then you would add the following line to the bad_clone_list[] (a sketch of such an entry appears in the note after these answers): Note that the 2 name strings you add can be anything -- they are just printed at boot, and not matched against anything on the card. You can also take out the "printk()" that you added above, if you want. It shouldn't hit that line anymore anyway. Then recompile once more, and your card should be detected. Problem: Errors like Is the chip a real NatSemi 8390? (DP8390, DP83901, DP83902 or DP83905)? If not, some clone chips don't correctly implement the transfer verification register. MS-DOS drivers never do error checking, so it doesn't matter to them. Are most of the messages off by a factor of 2? If so: Are you using the NE2000 in a 16 bit slot? Is it jumpered to use only 8 bit transfers? The Linux driver expects a NE2000 to be in a 16 bit slot. A NE1000 can be in either size slot. This problem can also occur with some clones, notably D-Link 16 bit cards, that don't have the correct ID bytes in the station address PROM. Are you running the bus faster than 8MHz? If you can change the speed (faster or slower), see if that makes a difference. Most NE2000 clones will run at 16MHz, but some may not. Changing speed can also mask a noisy bus. What other devices are on the bus? If moving the devices around changes the reliability, then you have a bus noise problem -- just what that error message was designed to detect. Congratulations, you've probably found the source of other problems as well. Problem: The machine hangs during boot right after the `8390...' or `WD....' message. Removing the NE2000 fixes the problem. Solution: Change your NE2000 base address to . Alternatively, you can use the device registrar implemented in 0.99pl13 and later kernels. Reason: Your NE2000 clone isn't a good enough clone. An active NE2000 is a bottomless pit that will trap any driver autoprobing in its space. The other ethercard drivers take great pains to reset the NE2000 so that it's safe, but some clones cannot be reset. Clone chips to watch out for: Winbond 83C901. Changing the NE2000 to a less-popular address will move it out of the way of other autoprobes, allowing your machine to boot. Problem: The machine hangs during the SCSI probe at boot. Reason: It's the same problem as above; change the ethercard's address, or use the device registrar. Problem: The machine hangs during the soundcard probe at boot. Reason: No, that's really during the silent SCSI probe, and it's the same problem as above. Problem: Errors like This bug came from timer-based packet retransmissions. If you got a timer tick _during_ an ethercard RX interrupt, and the timer tick tried to retransmit a timed-out packet, you could get a conflict. Because of the design of the NE2000 you would have the machine hang (exactly the same as the NE2000-clone boot hangs). Early versions of the driver disabled interrupts for a long time, and didn't have this problem.
Later versions are fixed. (i.e. kernels after 0.99p9 should be OK.) Problem: NE2000 not detected at boot - no boot messages at all Donald writes: `A few people have reported a problem with detecting the Accton NE2000. This problem occurs only at boot-time, and the card is later detected at run-time by the identical code in my (alpha-test) ne2k diagnostic program. Accton has been very responsive, but I still haven't tracked down what is going on. I've been unable to reproduce this problem with the Accton cards we purchased. If you are having this problem, please send me an immediate bug report. For that matter, if you have an Accton card send me a success report, including the type of the motherboard. I'm especially interested in finding out if this problem moves with the particular ethercard, or stays with the motherboard.' Here are some things to try, as they have fixed it for some people: Change the bus speed, or just move the card to a different slot. Change the `I/O recovery time' parameter in the BIOS chipset configuration. Problems with WD80*3 cards Problem: A WD80*3 is falsely detected. Removing the sound or MIDI card eliminates the `detected' message. Reason: Some MIDI ports happen to produce the same checksum as a WD ethercard. Solution: Update your ethercard driver: new versions include an additional sanity check. If it is the midi chip at 0x388 that is getting detected as a WD living at 0x380, then you could also use: LILO: linux reserve=0x380,8 Problem: You get messages such as the following with your 80*3: Reason: There is a shared memory problem. Solution: If the problem is sporadic, you have hardware problems. Typical problems that are easy to fix are board conflicts, having cache or `shadow ROM' enabled for that region, or running your bus faster than 8MHz. There are also a surprising number of memory failures on ethernet cards, so run a diagnostic program if you have one for your ethercard. If the problem is continual, and you have to reboot to fix it, record the boot-time probe message and mail it to becker@cesdis.gsfc.nasa.gov -- take particular note of the shared memory location. Problem: WD80*3 will not get detected at boot. Reason: Earlier versions of the Mitsumi CD-ROM (mcd) driver probe at 0x300 will succeed if just about anything is at that I/O location. This is bad news, and the probe needs to be made a bit more robust. Once another driver registers that it `owns' an I/O location, other drivers (incl. the wd80x3) are `locked out' and cannot probe that addr for a card. Solution: Recompile a new kernel without any excess drivers that you aren't using, including the above mcd driver. Or try moving your ethercard to a new I/O addr. Valid I/O addr. for all the cards are listed in Probed Addresses. You can also point the mcd driver off in another direction by a boot-time parameter (via LILO) such as: Problem: Old wd8003 and/or jumper-settable wd8013 always get the IRQ wrong. Reason: The old wd8003 cards and jumper-settable wd8013 clones don't have the EEPROM that the driver can read the IRQ setting from. If the driver can't read the IRQ, then it tries to auto-IRQ to find out what it is. And if auto-IRQ returns zero, then the driver just assigns IRQ 5 for an 8 bit card or IRQ 10 for a 16 bit card. Solution: Avoid the auto-IRQ code, and tell the kernel the IRQ that you have jumpered the card to via a boot-time argument. For example, if you are using IRQ 9, something like the following should work.
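The elided boot lines in the answers above all use the kernel's ether=IRQ,BASE_ADDR,NAME argument; here is a sketch with illustrative values (the base addresses and IRQs are assumptions for the example, not recommendations):

LILO: linux ether=9,0x280,eth0
(the jumpered-wd8003 case just described: force IRQ 9 at an assumed base of 0x280)

LILO: linux ether=0,0,eth1
(the multiple-ethercard case from earlier: a zero IRQ and address mean `autoprobe', so this enables probing for a second card)

append = "ether=0,0,eth1"
(the same argument made permanent in the LILO configuration file)

Likewise, the bad_clone_list[] entry promised in the NE2000-clone answer pairs the two display names with the PROM prefix your printk reported; as I recall ne.c, an entry looks roughly like {"FOO-1k", "FOO-2k", {0x00, 0x00, 0x1B}}, where the three prefix bytes here are hypothetical stand-ins for whatever your card actually printed.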
Problems with 3Com cards Problem: The 3c503 picks IRQ N, but this is needed for some other device which needs IRQ N. (e.g. CD-ROM driver, modem, etc.) Can this be fixed without compiling this into the kernel? Solution: The 3c503 driver probes for a free IRQ line in the order {5, 9/2, 3, 4}, and it should pick a line which isn't being used. Very old drivers used to pick the IRQ line at boot-time, and the current driver (0.99pl12 and newer) chooses when the card is open()/ifconfig'ed. Alternatively, you can fix the IRQ at boot by passing parameters via LILO. The following selects IRQ9, base location 0x300, and if_port #1 (the external transceiver). The following selects IRQ3, probes for the base location, and the default if_port #0 (the internal transceiver). Problem: 3c503: Configured interrupt number XX is out of range. Reason: Whoever built your kernel fixed the ethercard IRQ at XX. The above is truly evil, and worse than that, it is not necessary. The 3c503 will autoIRQ when it gets ifconfig'ed, and pick one of IRQ{5, 2/9, 3, 4}. Solution: Use LILO as described above, or rebuild the kernel, enabling autoIRQ by not specifying the IRQ line. Problem: The supplied 3c503 drivers don't use the AUI (thicknet) port. How does one choose it over the default thinnet port? Solution: The 3c503 AUI port can be selected at boot-time with 0.99pl12 and later. The selection is overloaded onto the low bit of the currently-unused dev->rmem_start variable, so a boot-time parameter of: should work. A boot line to force IRQ 5, port base 0x300, and use an external transceiver is: Also note that kernel revisions 1.00 to 1.03 had an interesting `feature'. They would switch to the AUI port when the internal transceiver failed. This is a problem, as it will never switch back if, for example, you momentarily disconnect the cable. Kernel versions 1.04 and newer only switch if the very first Tx attempt fails. Problems with Hewlett Packard Cards Problem: HP Vectra using the built-in AMD LANCE chip gets IRQ and DMA wrong. Solution: The HP Vectra uses a different implementation from the standard HP-J2405A. The `lance.c' driver used to always use the value in the setup register of an HP Lance implementation. In the Vectra case it's reading an invalid 0xff value. Kernel versions newer than about 1.1.50 now handle the Vectra in an appropriate fashion. Problem: HP Card is not detected at boot, even though kernel was compiled with `HP PCLAN support'. Solution: You probably have an HP PCLAN+ -- note the `plus'. Support for the PCLAN+ was added to final versions of 1.1, but some of them didn't have the entry in `config.in'. If you have the file hp-plus.c in ~/linux/drivers/net/ but no entry in config.in, then add the following line under the `HP PCLAN support' line: bool 'HP PCLAN Plus support' CONFIG_HPLAN_PLUS n Kernels up to 1.1.54 are still missing the line in `config.in'. Do a `make mrproper;make config;make dep;make zlilo' and you should be in business. Is there token ring support for Linux? Supporting token ring requires more than just writing a device driver; it also requires writing the source routing routines for token ring. It is the source routing that would be the most time-consuming to write. Alan Cox adds: `It will require (...) changes to the bottom socket layer to support 802.2 and 802.2 based TCP/IP. Don't expect anything soon.' Peter De Schrijver has been spending some time on Token Ring lately, and has patches that are available for IBM ISA and MCA token ring cards.
Don't expect miracles here, as he has just started on this as of 1.1.42. You can get the patch from: What is the selection for 32 bit ethernet cards? There aren't many 32 bit ethercard device drivers because there aren't that many 32 bit ethercards. There aren't many 32 bit ethercards out there because a 10Mbps network doesn't justify spending the 5x price increment for the 32 bit interface. See Programmed I/O vs. ... as to why having an ethercard on an 8MHz ISA bus is really not a bottleneck. This might change now that AMD has introduced the 32 bit PCnet-VLB and PCnet-PCI chips. The street price of the Boca PCnet-VLB board should be under $70 from a place like CMO (see Computer Shopper). See Boca PCI/VLB for info on these cards. See AMD PCnet-32 for info on the 32 bit versions of the LANCE / PCnet-ISA chip. In the future, the DEC 21040 PCI chip will probably be supported as well, but don't hold your breath. Is there FDDI support for Linux? Donald writes: `No, there is no Linux driver for any FDDI boards. I come from a place with supercomputers, so an external observer might think FDDI would be high on my list. But FDDI never delivered end-to-end throughput that would justify its cost, and it seems to be a nearly abandoned technology now that 100base{X,Anynet} seems imminent. (And yes, I know you can now get FDDI boards for <$1K. That seems to be a last-ditch effort to get some return on the development investment. Where is the next generation of FDDI going to come from?)' Can I link 10BaseT (RJ45) based systems together without a hub? You can link 2 machines easily, but no more than that, without extra devices/gizmos. See Twisted Pair -- it explains how to do it. And no, you can't hack together a hub just by crossing a few wires and stuff. It's pretty much impossible to do the collision signal right without duplicating a hub. I get `SIOCSFFLAGS: Try again' when I run `ifconfig' -- Huh? Some other device has taken the IRQ that your ethercard is trying to use, and so the ethercard can't use the IRQ. You don't necessarily need to reboot to resolve this, as some devices only grab the IRQs when they need them and then release them when they are done. Examples are some sound cards, serial ports, the floppy disk driver, etc. You can type to see which interrupts are presently in use. Those marked with a `+' are ones that are not taken on a permanent basis. Most of the Linux ethercard drivers only grab the IRQ when they are opened for use via `ifconfig'. If you can get the other device to `let go' of the required IRQ line, then you should be able to `Try again' with ifconfig. When I run ifconfig with no arguments, it reports that LINK is UNSPEC (instead of 10Mbps Ethernet) and it also says that my hardware address is all zeros. This is because people are running a newer version of the `ifconfig' program than their kernel version. This new version of ifconfig is not able to report these properties when used in conjunction with an older kernel. You can either upgrade your kernel, `downgrade' ifconfig, or simply ignore it. The kernel knows your hardware address, so it really doesn't matter if ifconfig can't read it. When I run ifconfig with no arguments, it reports that I have a huge error count in both rec'd and transmitted packets. It all seems to work ok -- what is wrong? Look again -- the big number there is under the packets column, not the errors column, and the same holds for the other direction. The big numbers you are seeing are the total number of packets that your machine has rec'd and transmitted.
If you still find it confusing, try typing instead. I have /dev/eth0 as a link to /dev/xxx. Is this right? Contrary to what you have heard, the files in /dev/* are not used. You can delete any and similar entries. Should I disable trailers when I `ifconfig' my ethercard? You can't disable trailers, and you shouldn't want to. `Trailers' are a hack to avoid data copying in the networking layers. The idea was to use a trivial fixed-size header of size `H', put the variable-size header info at the end of the packet, and allocate all packets `H' bytes before the start of a page. While it was a good idea, it turned out not to work well in practice. If someone suggests the use of `-trailers', note that it is the equivalent of sacrificial goat's blood. It won't do anything to solve the problem, but if the problem fixes itself then someone can claim deep magical knowledge. I get and when I boot, when I don't have an ``Apricot''. And then the card I do have isn't detected. The Apricot driver uses a simple checksum to detect if an Apricot is present, and it mistakenly thinks that almost anything is an Apricot NIC. It really should look at the vendor prefix instead. Your choices are to move your card off of (the only place the Apricot driver probes), or better yet, re-compile a kernel without the Apricot driver. Next Chapter, Previous Chapter Table of contents of this chapter, General table of contents Top of the document, Beginning of this Chapter MD5{32}: f1ea7cfeadde32d6161c770937d99fbb File-Size{5}: 25835 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{26}: Frequently Asked Questions } @FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node21.html Update-Time{9}: 827948635 title{19}: Workshop Attendees keywords{46}: attendees aug chance edt reschke tue workshop images{387}: /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif head{4171}: Next: Overview of Presentations Up: Workshop Organization Previous: Workshop Presentations Workshop Attendees The workshop attendees and their affiliations are shown below: Maurice Aburdene (Bucknell University) Robin Alford (CESDIS) Von Backenstose (Department of Commerce) David Bader (University of Maryland) George Ball (University of Arizona) F. D.
Bedard (National Security Agency) George Bell (Stanford University) Simon Berkovich (George Washington University) Mike Berry (Department of Defense/USAF) Bruce Black (Cray Research Inc.) Andrew Chien (University of Illinois) Fabien Coelho (École des Mines) Jarrett Cohen (NASA Goddard Space Flight Center) John Conery (University of Oregon) Bob Cox (Cray Computer Corporation) David Crawford (Electronic Trend) Dave Curkendall (Jet Propulsion Laboratory) Anil Deane (George Mason University) David DiNucci (Computer Science Corporation) John Dorband (NASA Goddard Space Flight Center) Patrick Dowd (State University New York) Duncan Elliott (University of Toronto) Walter Ermler (Department of Energy) Hassan Fallah-Adl (University of Maryland) Robert Ferraro (Jet Propulsion Laboratory) Charles Fiduccia (Supercomputing Research Center) Jim Fischer (NASA Goddard Space Flight Center) Ian Foster (Argonne National Laboratory) Bruce Fryxell (George Mason University) Eugene Gavrilov (Los Alamos National Laboratory) Norman Glick (National Security Agency) Peter Gulko (Rebus Technologies) Yang Han (George Washington University) Jim Harris (NASA HQ, Office of Mission to Planet Earth) R. Michael Hord (ERIM) Fred Johnson (National Institute of Standards and Technology) Kamal Khouri (Bucknell University) David Kilman (Los Alamos National Laboratory) Steve Knowles (Naval Space Command) Peter Kogge (Notre Dame University) John Korah (NASA, EOSDIS) Joydip Kundu (University of Oregon) H. T. Kung (Harvard University) George Lake (University of Washington) William Leinsberger (Computer Devices International) Paul Lukowicz (University at Karlsruhe) Lou Lome (Ballistic Missile Defense Organization) Serge Lubenec (George Mason University) Rick Lyon (Hughes STX) Jacob Maizel (National Cancer Institute) Yossi Matias (AT&T Bell Laboratories) William Mattus (Villanova University) Thomas McCormick III (National Security Agency) Al Meilus (George Washington University) A. Ray Miller (National Security Agency) Jose Milovich (Lawrence Livermore National Laboratory) Samin Mohammed (George Mason University) Reagan Moore (San Diego Supercomputing Center) Z. George Mou (Brandeis University) Samiu Muhammed (George Mason University) Chrisochoides Nikos (Syracuse University) Michele O'Connell (CESDIS) Kevin Olson (George Mason University) Behrooz Parhami (University of California) Jeff Pedelty (NASA Goddard Space Flight Center) Ivars Peterson (Science News) Larry Picha (CESDIS) Thierry Porcher (CEA) David Probst (Concordia University) Chunming Qiao (State University New York) Donna Quammen (George Mason University) Craig Reese (Supercomputing Research Center) S. Repdauay (CPP) Michael Rilee (Cornell University) Allen Robinson (Sandia National Laboratory) Subhash Saim (NASA Ames Research Center) Subhash Saini (Computer Sciences Corporation) Ray Sakardi (National Security Agency) David Schaefer (George Mason University) Judith Schlesinger (Supercomputing Research Center) Vasili Semenov (State University New York) Bruce Shapiro (National Cancer Institute) H. J. Siegel (Purdue University) Margaret Simmons (Los Alamos National Laboratory) Burton Smith (Tera Computer) Paul H.
Smith (NASA HPCC Office) Matteo Sonza-Reorda (Politecnico Di Torino) Thomas Sterling (CESDIS) Katja Stokley (George Mason University) Valerie Taylor (Northwestern University) John Thorp (Cray Research Inc.) Joe Vaughn (Computing Devices International) Chris Walter (WW Technology Group) Pearl Wang (George Mason University) Nancy Welker (National Security Agency) Leonard Wisniewski (Dartmouth College) Paul Woodward (University of Minnesota) Bill Wren (Honeywell) Richard Yentis (George Washington University) Steve Zalesak (NASA Goddard Space Flight Center) Bernard Zeigler (University of Arizona) Next: Overview of Presentations Up: Workshop Organization Previous: Workshop Presentations Chance Reschke Tue Aug 15 08:59:12 EDT 1995 MD5{32}: 4fa293dab8e6ea56485cadadd3705eab File-Size{4}: 6633 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{19}: Workshop Attendees } @FILE { http://cesdis.gsfc.nasa.gov/admin/adl96/adlcall.html Update-Time{9}: 827948598 url-references{128}: http://www.gsfc.nasa.gov/GSFC_homepage.html http://www.nlm.nih.gov http://www.ieee.org http://lcweb.loc.gov/homepage/lchp.html title{23}: ADL '96 Call for Papers keywords{136}: and call center computer congress flight for goddard ieee library may medicine nasa national participation society space the washington images{77}: http://cesdis.gsfc.nasa.gov/hpccm/hpcc.graphics/nasa.meatball.gif nlmlogo.gif headings{279}: ADL '96 Forum Call for Participation Forum on Research and Technology Advances in Digital Libraries Sponsored by: NASA Goddard Space Flight Center; The National Library of Medicine; IEEE Computer Society; and The Library of Congress In Cooperation with: Corporate Support: body{542}: May 13 - 15, 1996 Library of Congress Washington, D. C. Brown University, Columbia University, Cornell University, George Washington University, National Institute of Standards and Technology, Rutgers-Center for Information Management, Integration & Connectivity, University of Milano, The University of Maryland-Baltimore County and The University of Texas at Austin. AT, Bellcore, Bell Atlantic*, Comsat, Cray Research*, GTE*, Hughes Networks Systems*, IBM Corporation, Lockheed-Martin Corp., MCI, Sony*, Sun Microsystems* MD5{32}: 881e5cdc23dd0b8ef8ef7c71aea0eaa5 File-Size{4}: 4085 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{23}: ADL '96 Call for Papers } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw/paracalc.html Update-Time{9}: 827948647 url-references{109}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw.html mailto:lpicha@cesdis.gsfc.nasa.gov title{99}: Parallel Calculation of Sensitivity Derivatives for Aircraft Design Using Automatic Differentiation keywords{46}: contents curator larry picha return table the images{19}: graphics/return.gif headings{132}: Parallel Calculation of Sensitivity Derivatives for Aircraft Design Using Automatic Differentiation Return to the Table of Contents body{2849}: Objective: This work compares two computational approaches for calculating sensitivity derivatives (SD) from gradient code obtained by means of automatic differentiation (AD).
Approach: The ADIFOR (AD of Fortran) tool, developed by Argonne National Laboratory and Rice University, is applied to the TLNS3D thin-layer Navier-Stokes flow solver to obtain aerodynamic SD with respect to wing geometric design variables. The number of design variables (NDV) ranges from 1 to 60. Coarse-grained parallelization (as shown in Figure 1) of the TLNS3D.AD code is employed on an IBM SP/1 workstation cluster with a Fortran-M wrapper to improve the code speed and memory use. Results from the initial (unoptimized) parallel implementation on the SP/1 are compared with the most efficient (to date) implementation of the TLNS3D.AD code on a single processor of the vector Cray Y-MP. Accomplishment: Figure 2 shows the beneficial effects of SP/1 parallelization; as expected, the time required to compute the aerodynamic SD on a 972517 viscous grid decreases significantly as the number of processors (NP) used increases from 1 to 15. A fair comparison between the SP/1 and Y-MP implementations involves complex trade-offs among numerous parameters including single processor speed, Y-MP vector performance, total available memory, the amount of SP/1 parallelization employed, and machine life-cycle cost. Generally, though, on this grid the Y-MP computes the SD about 10 times faster than the SP/1 if the number of design variables (NDV) is small. However, the Y-MP is only about 2 times faster (or less) than the SP/1 as NDV increases and parallelization can be efficiently exploited on the SP/1. Significance: Although the vector Cray Y-MP's compute time is shorter than the parallel IBM SP/1's, for most of the SD cases examined the difference is only about a factor of 2 or less; SD calculations for large NDV can be performed efficiently on the SP/1 using coarse-grained parallelization. Consideration of the total elapsed job time, rather than compute time, would favor the SP/1 even more for these cases. Moreover, the total machine resources of a 128-node SP/1 can accommodate about 1000 design variables, whereas the Cray can only accommodate about 100 design variables for this size grid. Status/Plans: Other strategies exploiting more parallelization within the TLNS3D.AD code will be studied. Fortran-M has been installed on NASA Langley Research Center computers to allow these parallelization techniques to be mapped onto networks of heterogeneous workstations. Points of Contact: C. H. Bischof and T. L. Knauff, Jr. Argonne National Laboratory (708) 252-8875 bischof@mcs.anl.gov L. L. Green and K. J.
Haigler NASA Langley Research Center (804) 864-2228 l.l.green@larc.nasa.gov curator: Larry Picha MD5{32}: a82c1ad9be3a8f1f3bfcd195c7f8ff1b File-Size{4}: 3429 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{67}: Parallel Calculation of Sensitivity Derivatives for Aircraft Design } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/iita/k-12.html Update-Time{9}: 827948649 url-references{94}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94.html mailto:lpicha@cesdis.gsfc.nasa.gov title{57}: High Performance Computing and Communication K-12 Project keywords{46}: contents curator larry picha return table the images{19}: graphics/return.gif headings{90}: High Performance Computing and Communication K-12 Project Return to the Table of Contents body{2586}: Objective: To inspire students in the K-12 grades to pursue careers in science and engineering. A particular focus is to target underrepresented schools and minorities. Approach: The Lewis project began by involving teachers early on and continually throughout the project; training was available to ensure all participants were at the same level of expertise and to standardize on computing platforms so advances within the project could be easily shared amongst all involved. With this in mind, the Lewis project focused on three areas of development: 1) Teacher & Student training; 2) Curriculum supplemental materials; 3) Computing and Network Infrastructure within the schools. Accomplishment: Currently, the Lewis project has trained 25 teachers from fourteen schools ranging from high school to elementary. Nine schools have received Apple Macintoshes and network equipment for connecting to the Internet. The training for teachers consists of instruction by Lewis personnel on topics including: Mac Basics, Internet, Visualization, computer languages, Unix, Interactive Physics, Maple, Animation Works and Spyglass. The teacher training is conducted each summer and is spread over two weeks. In the area of curriculum, Barberton High School will teach a new course entitled ``High Performance Computing'' at the 10th and 11th grade level. Customary and innovative network efforts have been implemented within Lewis' K12 project. Support for connections to the Internet ranges from basic phone line access to a successful implementation of RF technology at sustained T1 speeds. Cleveland East Technical High School has partnered with Cleveland State University to acquire Internet access and to demonstrate the cost-effective use of this ``wireless'' communication path. Significance: The HPCC K12 project has the potential to inspire students, teachers and NASA personnel toward developing and enhancing the current school curriculum into a living entity that can grow and accommodate the technology already available outside the classroom. Status/Plans: The current program will continue to consist of two weeks of teacher training, providing selected schools with computers and providing basic Internet connections. New efforts proposed for FY95 include work with the sight-impaired and developing the Lewis Teacher Resource Center into a functioning instructional facility for year-round K12 use. Point of Contact: Gregory J.
Follen NASA Lewis Research Center (216) 433-5193 Gynelle Mackson NASA Lewis Research Center (216) 433-8258 curator: Larry Picha MD5{32}: 5acd1c7ffcc0005badb5f4a375469dec File-Size{4}: 3079 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{57}: High Performance Computing and Communication K-12 Project } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/guest.ci/stone.html Update-Time{9}: 827948652 url-references{111}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/guest.ci.html mailto:lpicha@cesdis.gsfc.nasa.gov title{36}: Magnetic and Radiation Field Effects keywords{45}: curator larry page picha previous return the images{38}: graphics/stone.gif graphics/return.gif headings{99}: Fluid Dynamics Code Incorporating Magnetic and Radiation Field Effects Return to the PREVIOUS PAGE body{2347}: Objective: To develop a fluid dynamics code which incorporates the effects of magnetic and radiation fields for massively parallel supercomputers and apply it to the study of the dynamics of astrophysical plasmas. Approach: Standard finite-difference methods are used to evolve the equations of fluid dynamics. Special-purpose algorithms developed by the PI are used to evolve the magnetic and radiation fields. The code is written in Fortran using a data parallel paradigm. Accomplishments: Fully three-dimensional hydrodynamic algorithms including the effects of magnetic fields have been implemented on a variety of massively parallel supercomputers, including the Connection Machine 2 (CM-2), CM-5, and MasPar-2. Performance on these machines ranges from 2 to 20 times faster than on one Cray YMP processor. The code is now being used to study the dynamics of magnetized accretion disks. The accompanying figure shows the turbulence that develops in a three-dimensional section of a weakly magnetized accretion disk as magnetic instabilities grow in the flow. The magnetic field lines (yellow) have become highly tangled, and the density (colors) shows large-amplitude fluctuations characteristic of turbulence in the midplane of the disk. Significance: Many astrophysical systems behave as fluids; thus a theoretical description of their dynamics is given by solutions of the equations of fluid dynamics. However, astrophysical plasmas are complex because they are affected by a variety of physical phenomena, such as magnetic fields and radiation fields from nearby stars. By implementing numerical algorithms for magnetic fluids on massively parallel machines, the largest and most detailed numerical simulations of the dynamics of astrophysical plasmas in a variety of contexts will be possible. Status/Plans: Year 2 milestones have been reached: the hydrodynamic algorithms including the effects of a magnetic field have been implemented on a variety of massively parallel machines, and significant applications have been made. Future plans include implementing the radiation hydrodynamic algorithms on parallel machines, and porting the existing code to message-passing architectures. Point of Contact: James M.
Stone University of Maryland (301) 405-2103 jstone@astro.umd.edu curator: Larry Picha MD5{32}: 4b1d0cdc3c1919bc514587ff6549efbd File-Size{4}: 2894 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{36}: Magnetic and Radiation Field Effects } @FILE { http://cesdis.gsfc.nasa.gov/PAS2/index.html Update-Time{9}: 827948601 url-references{681}: /cesdis.html mailto:tron@cesdis.gsfc.nasa.gov /PAS2/index.html /PAS2/README /PAS2/findings.html findings.tex /PAS2/wg2.html wg2.tex /PAS2/wg3.html wg3.tex /PAS2/wg4.html /PAS2/wg4.text /PAS2/wg5.html /PAS2/wg5.text /PAS2/wg6.html /PAS2/wg6.text /PAS2/wg7.html wg7.tex pasadw7.bib /PAS2/wg8.html wg8.tex /PAS2/wg9.html wg9.text mailto:messina@cacr.caltech.edu http://www.ccsf.caltech.edu/~jpool/ mailto:jpool@cacr.caltech.edu http://cesdis.gsfc.nasa.gov/people/tron/tron.html mailto:tron@cesdis.gsfc.nasa.gov /cesdis.html http://hypatia.gsfc.nasa.gov/NASA_homepage.html http://hypatia.gsfc.nasa.gov/GSFC_homepage.html #top /pub/people/tron/tron.html mailto:tron@cesdis.gsfc.nasa.gov title{24}: Second Pasadena Workshop keywords{320}: and author bibliography cacr caltech center cesdis computing edu environments file findings flight for form goddard gov group gsfc high james jpool messina nasa overview pasadena performance pool proceedings readme report second software space sterling system tex text the thomas tools top tron version working workshop headings{150}: Proceedings of the Second Pasadena Workshop on System Software and Tools for High Performance Computing Environments Pointers to documents: Contacts: body{2015}: This page contains links to information about the Second Pasadena Workshop available at CESDIS . This directory contains the draft reports of the nine working groups of the workshop. These are still in revision and may be expected to change over time. At this time, the report of working group 1 is in preparation and will be posted shortly. The draft of an overview paper has been included and is written as a standalone document. It summarizes the major findings and recommendations of the workshop as well as providing some background information. Questions, comments, and suggestions about this document may be sent to Thomas Sterling . Second Pasadena Workshop (this document). This web page. README file . Description of contents of this index. Overview of Workshop Findings . Summary paper of workshop issues, findings, and recommendations. This report is also available as a TeX version . Working Group 2 Report . Characteristics of HPC Scientific and Engineering Applications. This report is also available as a TeX version . Working Group 3 Report . Use of System Software and Tools. This report is also available as a TeX version . Working Group 4 Report . Influence of Parallel Architecture on HPC Software. (Text form .) Working Group 5 Report . Transition from Research to Products. (Text form .) Working Group 6 Report . Mixed Paradigms and Alternatives. (Text form .) Working Group 7 Report . Message Passing and Object Oriented Paradigms. This report is also available as a TeX version with a bibliography . Working Group 8 Report . Data Parallel and Shared Memory Paradigms. This report is also available as a TeX version . Working Group 9 Report . Heterogeneous Computing Environments. This report is also available in the submitted text version . Paul Messina, messina@cacr.caltech.edu . 
James Pool , jpool@cacr.caltech.edu . Thomas Sterling , tron@cesdis.gsfc.nasa.gov . CESDIS is located at the NASA Goddard Space Flight Center in Greenbelt MD. Top address{53}: Author: Thomas Sterling , tron@cesdis.gsfc.nasa.gov . MD5{32}: 563a8b9c54306bf62103141b9c03470a File-Size{4}: 3827 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{24}: Second Pasadena Workshop } @FILE { http://cesdis.gsfc.nasa.gov/admin/inf.eng/wave.tutorial.fin/responsible.html Update-Time{9}: 827948692 title{25}: Responsiblity and the Web MD5{32}: 406b5596af21aeac67c67a55df94dc5e File-Size{4}: 5316 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Keywords{26}: and responsiblity the web Description{25}: Responsiblity and the Web } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/iitf.hp/graphics/ Update-Time{9}: 827948828 url-references{142}: /hpccm/iitf.hp/ blue.GIF blue.JPG eye_bullet.GIF hpcc.header.gif hpccsmall.gif nasa.meatball.gif think.back.gif think.gif wavebar.gif work.gif title{33}: Index of /hpccm/iitf.hp/graphics/ keywords{101}: back blue bullet directory eye gif header hpcc hpccsmall jpg meatball nasa parent think wavebar work images{202}: /icons/blank.xbm /icons/menu.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif headings{33}: Index of /hpccm/iitf.hp/graphics/ body{413}: Name Last modified Size Description Parent Directory 21-Nov-95 15:28 - blue.GIF 21-Nov-95 15:25 9K blue.JPG 21-Nov-95 15:18 2K eye_bullet.GIF 24-Jul-95 15:51 1K hpcc.header.gif 18-May-95 13:29 1K hpccsmall.gif 24-May-95 12:31 2K nasa.meatball.gif 08-Nov-94 13:46 3K think.back.gif 06-Jun-95 14:18 15K think.gif 15-Mar-95 22:17 13K wavebar.gif 08-Nov-94 13:46 2K work.gif 08-Nov-94 13:46 1K MD5{32}: 7d8cf055356ae0486d59a64caf2484cb File-Size{4}: 1706 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{33}: Index of /hpccm/iitf.hp/graphics/ } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/iita/TEMPLATE.html Update-Time{9}: 827948852 url-references{107}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/iita.html mailto:lpicha@cesdis.gsfc.nasa.gov title{32}: Finite Element Gasdynamics Codes keywords{45}: curator larry page picha previous return the images{19}: graphics/return.gif headings{153}: Developed Tools for Extending Finite Element Gasdynamics Codes to MHD Regime for Space Science and Astrophysics Applications Return to the PREVIOUS PAGE body{174}: background="graphics/ess.gif"> Objective: Approach: Accomplishments: Significance: Status/Plans: Point of Contact: Kevin Olson curator: Larry Picha MD5{32}: 23cd09597d289c82aaccd76c1e5c1d6c File-Size{3}: 732 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{32}: Finite Element Gasdynamics Codes } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/visitor/sbkpg2.html Update-Time{9}: 827948831 url-references{72}: 
newBKGstuff.html index.html katie.html mailto:katie@cesdis.gsfc.nasa.gov title{37}: example.2 solid color background page keywords{47}: back background extension index katie page the images{48}: shoelacebar.gif shoelacebar.gif kLogo(tnspt).GIF headings{43}: Another example solid background color page body{79}: Back to the background extension page Back to the index address{32}: Last updated 20 june 95 by katie MD5{32}: e9e570a92be2c7134b7caff29316fcec File-Size{3}: 529 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{37}: example.2 solid color background page } @FILE { http://cesdis.gsfc.nasa.gov/linux/pcmcia Update-Time{9}: 820866817 Description{23}: Index of /linux/pcmcia/ Time-to-Live{8}: 14515200 Refresh-Rate{7}: 2419200 Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Version{3}: 1.0 Type{4}: HTML File-Size{4}: 1361 MD5{32}: bfb6601ebf125a2d250b3c9960dbbdb4 body{326}: Name Last modified Size Description Parent Directory 09-May-95 16:43 - 3c589.c 22-May-94 10:49 17K 3c589.c-1.1.54 18-Oct-94 18:07 19K 3c589.html 10-Jun-94 17:11 8K cardd.tgz 22-May-94 10:53 9K cardd/ 24-Feb-95 01:46 - dbether.c 17-Jun-94 17:37 6K dbmodem.c 05-Aug-94 14:30 6K pcmcia.html 31-Mar-95 19:44 1K headings{23}: Index of /linux/pcmcia/ images{160}: /icons/blank.xbm /icons/back.xbm /icons/text.xbm /icons/text.xbm /icons/text.xbm /icons/text.xbm /icons/menu.xbm /icons/text.xbm /icons/text.xbm /icons/text.xbm keywords{55}: cardd dbether dbmodem directory html parent pcmcia tgz title{23}: Index of /linux/pcmcia/ url-references{89}: /linux 3c589.c 3c589.c-1.1.54 3c589.html cardd.tgz cardd/ dbether.c dbmodem.c pcmcia.html } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/archive/factsheets.html Update-Time{9}: 827948801 url-references{34}: mailto:lpicha@cesdis.gsfc.nasa.gov title{15}: HPCC Fact Sheet keywords{263}: accelerate aeronautics and application century cesdis comments computing development directly earth engineering gov gsfc high into larry lpicha meet nasa next performance picha please questions requirements sciences send space speed technologies the welcome your images{24}: hpcc.graphics/lites2.gif headings{174}: The National Aeronautics and Space Administration's (NASA) High Performance Computing and Communications (HPCC) Program Welcome to the NASA HPCC Brochure! Table of Contents body{996}: To accelerate the development and application of high-performance computing technologies to meet NASA's aeronautics, earth and space sciences, and engineering requirements into the next century. You're here because you need or want an explanation and overview of the NASA HPCC Program, its mission, and how it implements and utilizes taxpayer assets. You may click on the table of contents item you're interested in and go directly there, or you may scroll through the entire document. You may return to your starting point by clicking on the ``back'' option of your browser (i.e. Mosaic or Netscape) at any time. Please send your comments and/or questions directly to Larry Picha (lpicha@cesdis.gsfc.nasa.gov).
Introduction The Speed of Change Components of the NASA HPCC Program Computational Aerosciences (CAS) Project Earth and Space Sciences (ESS) Project Information Infrastructure Technology and Applications (IITA) component Remote Exploration and Experimentation (REE) Project MD5{32}: 60fba2bd0b2edca4381f59bedd13232d File-Size{5}: 14049 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{15}: HPCC Fact Sheet } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/iita/ Update-Time{9}: 827948841 url-references{66}: /hpccm/annual.reports/cas94contents/ graphics/ iita.html k-12.html title{50}: Index of /hpccm/annual.reports/cas94contents/iita/ keywords{36}: directory graphics html iita parent images{80}: /icons/blank.xbm /icons/menu.gif /icons/menu.gif /icons/text.gif /icons/text.gif headings{50}: Index of /hpccm/annual.reports/cas94contents/iita/ body{166}: Name Last modified Size Description Parent Directory 17-Oct-95 15:42 - graphics/ 17-Jul-95 13:50 - iita.html 07-Jul-95 15:00 3K k-12.html 19-Jul-95 14:13 3K MD5{32}: 2b99e14b773c7423d65d50d21d4504a7 File-Size{3}: 794 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{50}: Index of /hpccm/annual.reports/cas94contents/iita/ } @FILE { http://cesdis.gsfc.nasa.gov/pub/linux/diag/diagnostic.html Update-Time{9}: 827948614 url-references{128}: hp+.c ne2k.c atp-diag.c atp.h e21.c at1700.c eexpress.c ../setup/atlantic.c http://cesdis.gsfc.nasa.gov/linux/setup/3c5x9setup.c title{46}: Linux Ethercard Diagnostic and Setup Utilities keywords{200}: and cabletron code com diagnostic diagnostics ethercard etherlink ethernet express family file header iii intel lan lantic linux national pclan program programs realtek semiconductor setup source tec headings{83}: Linux Ethercard Diagnostic and Setup Programs Diagnostic Programs Setup Programs body{1123}: This is a collection of user-level programs to check out the basic functionality of an ethercard. The "setup" programs can read (and sometimes even write) the EEPROM setup table of software-configured cards. HP PCLAN+ diagnostics, C source code. NE2000 diagnostics, C source code. AT-Lan-Tec/RealTek diagnostics, C source code. And if you don't have the kernel source, you'll need the header file atp.h. Cabletron E21xx diagnostics, C source code. AT1700 diagnostics, C source code. Intel Ethernet Express diagnostics, C source code. National Semiconductor DP83905 AT/Lantic setup program, C source code. The AT/Lantic chip is used in the NE2000+ and many other software-configured NE2000 clones. 3Com EtherLink III family (3c509, 3c529, 3c579, and 3c589) setup program, C source code. This program displays the registers and currently programmed settings. It allows the base I/O address, IRQ, and transceiver port settings to be changed.
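As an illustration of the pattern these user-level diagnostics share -- gain port access with ioperm(), then read the card's registers -- here is a minimal sketch. The 0x300 base address and 16-port window are illustrative defaults, not taken from any one of the programs above; like them, it must be run as root and compiled with optimization so that inb() is inlined:

/* Dump a 16-port I/O register window; a sketch of the common pattern. */
#include <stdio.h>
#include <unistd.h>
#include <asm/io.h>	/* inb(); sys/io.h on later systems */

int main(void)
{
	int ioaddr = 0x300;	/* assumed card base address */
	int i;

	if (ioperm(ioaddr, 16, 1)) {	/* enable access to 16 I/O ports */
		perror("ioperm");
		return 1;
	}
	for (i = 0; i < 16; i++)
		printf("%02x ", inb(ioaddr + i));
	printf("\n");
	return 0;
}

Compile in the style of the programs above, e.g. "gcc -O6 -o iodump iodump.c"; the file name is hypothetical.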
MD5{32}: fc9652a01a1773116e268353ccc0c48d File-Size{4}: 2088 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{46}: Linux Ethercard Diagnostic and Setup Utilities } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw/compressor.html Update-Time{9}: 827948648 url-references{109}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw.html mailto:lpicha@cesdis.gsfc.nasa.gov title{30}: Multistage Compressor Analysis keywords{46}: contents curator larry picha return table the images{19}: graphics/return.gif headings{63}: Multistage Compressor Analysis Return to the Table of Contents body{1887}: Objective: To develop multidisciplinary technologies for multistage compression systems that enhance full engine simulation capabilities. Approach: A detailed multistage compressor analysis code (MSTAGE) has been ported to a variety of computing systems including the IBM SP1 parallel processor. Several analyses were made to define the flow physics involved in compressor stall. These flow analyses suggested a variety of approaches to improve the performance of compression systems, while providing increased stall margins. Accomplishment: This work was conducted as part of a joint industry/government/university team (P&W/NASA/MIT) effort called ''Stall Line Management''. Design and off-design flow prediction for multistage turbomachinery is one of the critical elements of this program. A key feature of this prediction capability is the physics-based models developed at NASA Lewis. These models provide a rational prediction of time-averaged multistage flow physics by using steady prediction tools. Rigorous mathematical analysis and NASA high performance computing platforms (including the NASA Cray C90, IBM Workstation cluster and SP-1) were essential to the formulation and development of these models. Significance: A 1.5 percent reduction in specific fuel consumption for a large commercial aircraft engine was recently demonstrated at Pratt and Whitney. This reduction was achieved in 1/2 the historical design time by utilizing viscous 3D fluids analysis codes. Status/Plans: Compressor disk and outer casing thermal and structural analyses are being incorporated into the overall predictive system. A project plan that schedules the inclusion of several disciplines (controls, aero, structures) has been developed and approved by the performing team members. Point of Contact: Chuck Lawrence NASA Lewis Research Center (216) 433-6048 curator: Larry Picha MD5{32}: d5fd71add3c7362004bb7adcd2af408c File-Size{4}: 2319 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{30}: Multistage Compressor Analysis } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.nas.tr.vis.html Update-Time{9}: 827948663 title{30}: NTV - The NAS Trace Visualizer keywords{75}: accomplishments approach contact objective plans point significance status headings{30}: NTV - The NAS Trace Visualizer body{2395}: Objective: To develop a programming tool aimed at supporting high performance computing on scalable parallel computers.
NTV focuses on performance and correctness by helping to detect performance bottlenecks using scalable visual representations of execution traces and innovative trace-browsing capabilities. Approach: Program developers are faced with a number of computational platforms. Each platform has its own peculiarities which affect the way code is tuned for optimum performance. One of the more useful techniques available for tuning is the analysis of execution traces. Some manufacturers provide a tracing capability but some do not, requiring the use of an instrumentor such as that provided by AIMS. The quantity and complexity of trace data make graphical trace visualizers essential for analysis. Unfortunately, all existing trace visualizers are designed to handle only a specific trace format, and the formats differ among manufacturers and instrumentors. Further, the visualizers differ in function and in operation, so program developers are forced to become proficient with several analysis tools. NTV is a trace visualization tool designed to be used with all trace formats so that a user need only learn one tool. Further, unlike existing visualizers, it uses static displays which are easier to understand and more scalable than the dynamic displays common in other visualizers. The figure shows an AIMS trace from a program executing on an Intel iPSC/860 (bottom) and an IBM MPL trace from the same program ported to run on an IBM SP2 (top). In both cases, the display of all messages to processor 0 (angled blue lines) has been turned on and all others turned off. Accomplishments: A Beta version supporting AIMS traces and IBM SP2 MPL was released. Significance: With the release of the Beta version the tool is now available to help users develop efficient parallel programs. It has been demonstrated that a tool can be developed that supports very different trace formats, and that static displays can be supported on existing workstations. Status/Plans: Maintain and support the released version of NTV. Investigate and plan NTV replacement of the visualizer in AIMS, support MPI/SP2 and produce a library of trace visualization display elements. Point(s) of Contact: Louis Lopez NASA Ames Research Center llopez@nas.nasa.gov 415-604-0521 MD5{32}: fb15270295a8042b504f52336da421fd File-Size{4}: 2615 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{30}: NTV - The NAS Trace Visualizer } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/visitor/txtpg1.html Update-Time{9}: 827948831 url-references{89}: nonExistent.html newBKGstuff.html index.html katie.html mailto:katie@cesdis.gsfc.nasa.gov title{42}: Grand Example Background Manipulation page keywords{64}: back background extensions here index katie links page text the images{48}: shoelacebar.gif shoelacebar.gif kLogo(tnspt).GIF headings{45}: Grand Example of Background Manipulation Page body{335}: We've got the background colored, the text colored, and the links colored. Whoo whoo! In case you've already visited all of the other links on this page, here is a link to a nonexistent page.
Back to the background extensions page Back to the index address{32}: Last updated 20 june 95 by katie MD5{32}: 24ce5b5c4782b9f6b215c9890aaf5e36 File-Size{3}: 930 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{42}: Grand Example Background Manipulation page } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw/direct.html Update-Time{9}: 827948647 url-references{109}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw.html mailto:lpicha@cesdis.gsfc.nasa.gov title{119}: Direct-Coupled, Multi-disciplinary Computational Aeroscience Simulations on distributed memory, MIMD parallel computers keywords{46}: contents curator larry picha return table the images{19}: graphics/return.gif headings{152}: Direct-Coupled, Multi-disciplinary Computational Aeroscience Simulations on distributed memory, MIMD parallel computers Return to the Table of Contents body{3539}: Objective: The goal of this project is to investigate the algorithmic and implementation issues as well as the system software requirements pertaining to direct-coupled, multi-disciplinary computational aeroscience simulations on distributed memory (DM), multiple instruction stream, multiple data stream (MIMD) parallel architectures. Approach: The design of future generations of civil transport aircraft that are competitive in the global marketplace requires multi-disciplinary analysis and design optimization capabilities involving the direct coupling of diverse physical disciplines that influence the operational characteristics of the aircraft. An immediate outcome of such an approach would be greatly increased computational requirements for the simulation, in comparison to what is needed for current single-discipline simulations on conventional supercomputers. In the near future, it appears that the computational resources of the scale required for such multi-disciplinary analysis and/or design optimization tasks may only be fulfilled in a cost-effective manner by the use of highly parallel computer architectures. In order to effectively harness the tremendous computational power promised by such architectures, it is imperative to investigate the algorithmic and software issues involved in the development and implementation of concurrent, directly-coupled, multi-disciplinary simulations. This study takes a necessary preliminary step towards the development of this enormously complex capability by attempting to compute the unsteady aeroelastic response and flutter boundary of a wing in the transonic flow regime through the direct coupling of two disciplines, viz. fluid mechanics and structural dynamics, on a DM-MIMD computer. Accomplishment: A direct-coupled, fluid-structure interaction code capable of simulating the highly nonlinear aeroelastic response of a wing in the transonic flow regime was implemented on the 128-processor Intel iPSC/860 computer. The performance and the scalability of the implementation realized on the iPSC/860 were demonstrated by computing the transient aeroelastic response of a simple High Speed Civil Transport type strake-wing configuration. Also, as a part of this study, the efficacy of various concurrent time integration schemes that are based on the partitioned analysis approach was investigated.
The effort also helped in gaining a greater understanding of the system software requirements associated with such multi-disciplinary simulations on DM-MIMD computers. The algorithmic and implementation details as well as the results can be found in the following papers: AIAA-94-0095 and AIAA-94-1550. Significance: This implementation exploits, for the first time, the functional parallelism in addition to the data parallelism present in multi-disciplinary computations on MIMD computers. It demonstrates the feasibility of carrying out complex, multi-disciplinary, computational aeroscience simulations efficiently on the current generation of DM-MIMD computers. Status/Plans: Future efforts will further explore the possibility of developing more robust and scalable concurrent algorithms for fluid-structure interaction problems, the incorporation of additional disciplines, and the feasibility of using emerging parallel programming language standards for developing direct-coupled, multi-disciplinary CAS applications. Point of Contact: Sisira Weeratunga NASA Ames Research Center (415) 604-3963 weeratun@nas.nasa.gov curator: Larry Picha MD5{32}: 6625f8fc4aaa8e0d09f47524c623a1db File-Size{4}: 4160 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{72}: Direct-Coupled, Multi-disciplinary Computational Aeroscience Simulations } @FILE { http://cesdis.gsfc.nasa.gov/petaflops/archive/workshops/frontiers.95.html Update-Time{9}: 827948600 url-references{405}: frontiers.95.pres.html /~creschke/peta/report/report.html http://sdcd.gsfc.nasa.gov/DIV-NEWS/frontiers.html http://cesdis.gsfc.nasa.gov/petaflops/peta.html /people/tron/tron.html mailto:tron@usra.edu /people/oconnell/whoiam.html mailto:oconnell@cesdis.gsfc.nasa.gov http://cesdis.gsfc.nasa.gov/people/lpicha/whoiam.html mailto:lpicha@cesdis.gsfc.nasa.gov http://cesdis.gsfc.nasa.gov/ http://www.usra.edu/ title{54}: Petaflops Enabling Technologies and Applications (PETA) keywords{552}: academia achieving address alone and applications arise assuredly cesdis community computing conference connell edu engineering even examining exist far feasibility federal frontier future government have high highlights hpcc inadequate individuals industry july lawrence let level lpicha many may michele moc most now over overview performance period petaflops picha presentations problems proceedings program realized report result revised scientific sdcd sighted sterling systems technical teraflops that the thomas towards tron usra will works year images{184}: peta.graphics/PETA.banner.gif peta.graphics/saturn.gif peta.graphics/saturn.gif peta.graphics/saturn.gif peta.graphics/saturn.gif peta.graphics/turb.small.gif peta.graphics/petabar.gif headings{743}: PetaFLOPS Frontier '95 The PetaFLOP Frontier Workshop was part of a deliberate and on-going process to define the long range future of high performance computing here in the United States. The one-day workshop included presentations in architecture, technology, applications, and algorithms and participants ranged from government, academia, and industry.
Overview of Presentations Conference Proceedings and Technical Report The Space Data and Computing Division (SDCD) staff were instrumental in the success of Frontiers '95 as noted in the SDCD Highlights SDCD was well-represented on the overall Frontiers '95 committee, and members were very active participants in the PetaFLOPS Frontier Workshop. Return to the P.E.T.A. Directory body{779}: Even as the Federal HPCC Program works towards achieving teraFLOPS computing, far-sighted individuals in government, academia and industry have realized that teraFLOPS-level computing systems will be inadequate to address many scientific and engineering problems that exist now, let alone applications that may, and most assuredly will, arise in the future. As a result, the high performance computing community is examining the feasibility of achieving petaFLOPS-level computing over a 20 year period. Authorizing NASA Official: Paul H. Smith, NASA HPCC Office Senior Editor: Thomas Sterling (tron@usra.edu ) Curators: Michele O'Connell ( michele@usra.edu ), Lawrence Picha (lpicha@usra.edu ), CESDIS/ USRA , NASA Goddard Space Flight Center. Revised: 31 July 95 (moc) MD5{32}: d8fa89c2d5546a5f45bbed1d3896c687 File-Size{4}: 2616 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{54}: Petaflops Enabling Technologies and Applications (PETA) } @FILE { http://cesdis.gsfc.nasa.gov/pub/linux/misc/hardware.html Update-Time{9}: 827948614 url-references{262}: http://wwwhost.ots.utexas.edu/ethernet/ethernet-home.html #8390irq #8390multicast #ne2000dma #auiswitch #pliplength #multi3c509 #MCAbus #check3c589 #hpvectra #eexpress #subnote #ioregion #diskdown #xircom #top /pub/linux/linux.html /pub/people/becker/whoiam.html title{16}: Network hardware keywords{493}: again all and aui author based becker bus capture cesdis code conflict disk dma don donald down driver enabled eth ethercards etherexpress ethernet excellent extents guide hardware ide information intel interrupt lance length link linux machine mca message messages micro midwest mode multiple mysteriously network one packets plip port power probe problem promiscuous region snarf status subnotebook support switches the top totally unknown utexas vectra verifying warning wiring with xircom headings{511}: Information on network hardware "eth0: unknown interrupt 0x1" messages 8390-based ethercards don't capture all packets in promiscuous mode NE2000 driver DMA conflict message Midwest Micro subnotebook I/O region extents, and snarf_region(). 3c503 mysteriously switches to the AUI port. PLIP length warning. Problem with HP Vectra 486/66XM LANCE probe. Multiple 3c509's in one machine. Intel EtherExpress driver status. IDE Disk Power-down Code. Verifying a 3c589 is enabled. Xircom, again. MCA bus 3c529 support. body{16092}: This is an informal collection of information about network hardware and bug work-arounds. Here is a quick index: A link to the Totally Excellent UTexas ethernet wiring guide . "eth0: unknown interrupt 0x1" messages . 8390-based ethercards don't capture all packets in promiscuous mode . NE2000 driver DMA conflict message . 3c503 mysteriously switches to the AUI port . PLIP length warning . Multiple 3c509's in one machine . MCA bus 3c529 support . Verifying a 3c589 is enabled . Problem with HP Vectra 486/66XM LANCE probe . Intel EtherExpress driver status . Midwest Micro subnotebook .
I/O region extents, and snarf_region() . IDE Disk Power-down Code . Xircom, again . >At some moment mine /var/adm/messages started to record zillions >of messages like: >"eth0: unknown interrupt 0x1" This message should only occur with kernels 1.0.0 to 1.0.3 or so. These are mostly harmless messages produced by error-checking code around line 277 in 8390.c. The root cause is usually some part of the system shutting off interrupts for longer than the net code expects, or that your network is exceptionally busy. The section of code that produces this message combines a check for unrecognized hardware return values with a check to prevent unlimited work being done during a single interrupt, which might indicate a hardware failure. A kernel patch (1.0.3 I think) increased this 'boguscnt' check from '5' (four actions per network interrupt) to '9' (eight actions per interrupt). This prevents the error message for almost all systems. Due to my misinterpretation of the 8390 documentation, all drivers based on the 8390 core (3c503, WD80*3, SMC Ultra, NE*000, HP PCLAN and others) do not receive multicast packets in promiscuous mode. Only network monitoring programs use promiscuous mode, and protocols that use multicast packets are currently rare, so very few people will encounter this problem. Kernels after 1.1.24 already include the following fix:

drivers/net/8390.c:set_multicast_list()
 	} else if (num_addrs < 0)
-		outb_p(E8390_RXCONFIG | 0x10, ioaddr + EN0_RXCR);
+		outb_p(E8390_RXCONFIG | 0x18, ioaddr + EN0_RXCR);
 	else
						-djb 7/13/94

(A sketch of the full routine with this fix applied appears further below.) >The problem has to do with machine crashes about every 1 to 2 days with >this error: >eth0: DMAing conflict in ne_block_output. [DMAstat:fffffffe][irqlock:fffffff] Ohhh, bad. This "can't" happen when everything is working correctly. The "DMA" that this message is referring to is the DMA controller internal to the NE2000. It has nothing to do with the motherboard DMA channels. (A few NE1000 clones do allow the two DMA systems to be connected, but DMA results in *slower* system operation when transferring typical ethernet traffic.) What is likely happening is an interrupt conflict or a noisy interrupt line, causing the device driver to start another packet transfer when it thinks that it has locked out interrupts from the card. A remote possibility is that you are running an old kernel, or mixing versions of 8390.c and ne.c. >And thats about it. With so little on it, its hard to believe I have this >problem, but I do. The problem seems to corrolate with the addition of the >second IDE, before we had it, we used to have uptimes of 2+ weeks. The Hmmm, adding a card often results in IRQ conflicts and occasionally results in electrical noise problems. Try swapping cards in their slots or changing the interrupt line. (Note: upper IRQs are often quieter than lower ones! Try IRQ11 or IRQ15.) >I got your email address off the laptop survey list on tsx and thought I'd >write you for some experience/advice. I'm getting an Elite Subnote and >wondered how you like yours. How long have you had it? Any trouble? I've >read stuff about it's "cramped" keyboard and unreliable trackball. Is this >your experience? I've ordered five machines in two batches. The first machine was ordered in early January and had the following problems: power supply cut out after warming up (fixed); cracking around the lid/display hinges (perhaps caused by the fix above?); unreliable power jack or plug (wiggling causes the power LED to flicker). The latest four arrived in mid-April and have none of these problems.
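For readability, here is a sketch of how drivers/net/8390.c:set_multicast_list() reads with the fix above applied. The E8390_RXCONFIG, EN0_RXCR, and outb_p() names are those used in the patch; the surrounding logic is paraphrased rather than quoted from the driver. In the 8390 receive-configuration register, 0x08 is "accept multicast" and 0x10 is "promiscuous", hence the 0x10 -> 0x18 change:

static void set_multicast_list(struct device *dev, int num_addrs, void *addrs)
{
	short ioaddr = dev->base_addr;

	if (num_addrs > 0)		/* accept multicast packets as well */
		outb_p(E8390_RXCONFIG | 0x08, ioaddr + EN0_RXCR);
	else if (num_addrs < 0)		/* promiscuous: the multicast bit must
					   also be set -- the omission this
					   fix corrects */
		outb_p(E8390_RXCONFIG | 0x18, ioaddr + EN0_RXCR);
	else				/* normal: station address + broadcast */
		outb_p(E8390_RXCONFIG, ioaddr + EN0_RXCR);
}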
>About Linux, any pointers about installing it? I'm planning to use >Slackware and load tinyX. I see you're running XWindows? Would you mind >sending me your Xconfig file? Also any trouble getting the trackball to >work? No problems installing it. A few notes: The 4 bit VGA server works fine. The alpha-test 8 bit server from Mike Hollick doesn't restore text mode correctly. The trackball is a two-button Microsoft serial mouse. Except for the missing third button I love it, and have had no problems with it. The wrist rest turned out to be far more useful than I had expected. The keyboard feels fine; it took about a week to get used to it. >I guess you're pretty happy with the Subnote since you have five. You >certainly can't beat the price. Not only was the price great, it was also the only reasonable subnote that was shipping with a 340M drive. BTW, the first was ordered with a 4M memory expansion because the 8M memory expansion cards were not due to be available until "late February". The recent batch was ordered with 8M modules, but they arrived with only the base 4M because the 8M expansion modules still were not available! Rule: if it's not "in stock for immediate delivery", it doesn't exist. People that ordered IBM Thinkpad 750s back in the fall are just getting them now! The Other Rule: divide the advertised battery life by two. (This is a question about why the kernel function snarf_region() only works up to 0x3ff, and why drivers don't bother allocating higher I/O regions.) > /* We've committed to using the board, and can start filling in *dev. */ > /* I suppose this assigns this I/O range to this proc */ >snarf_region(ioaddr, 16); > /* Why the same is not done for the range starting at ioaddr+0xC008 ? */ The snarf_region() function shares some of the same bitmap functions as the ioperm() call, and only marks the I/O ports used in the 0x0 - 0x3ff range (the original PC I/O space). I claim (I wrote the ioperm() and *_region() code, so I feel the need to defend this :->) that this is actually the right thing to do, as some (many?) I/O devices deliberately ignore the upper I/O address bits because some ancient broken PC software required it. >I have been having a problem with my eternet card changing from the TP port >to the AUI without any notice. The machine will change interfaces >between 1 day and 1 week of uptime. If I move the cable from the TP port to >a tranceiver on the AUI the machine after it swaps it will work again. The '8390' part of the 3c503 driver has special code to automatically switch interfaces around line drivers/net/8390.c:156 in version 1.0. This code was added so that the ethercard could automatically configure itself for the network in use. This turned out to be a not-quite-perfect implementation of an otherwise good idea, and around version 1.0.2 the code was changed to only switch interfaces if *no* packets had yet been transmitted without error, rather than anytime in the session. >> 1. It works ONLY with short cables - I have one cable 2 meters long and one >> 40 meters long. My old plip worked fine on both; your one works with the Acckkkk! A 40 meter printer cable is *way* beyond the specs for even output-only printer traffic! It's unreasonable to expect bidirectional traffic to work on a cable this long. You should switch to ethernet for this link: not only is ethernet faster, cheaper and more reliable, it's also much *safer* for a connection this long.
10base2 provides at least 600V of isolation if the 'T' taps are insulated, and 10baseT provides over 1500V isolation with fully enclosed contacts. That's protection against lightning hits, ground loops, ground noise and ground offsets that you *need*. >Subject: Problem with HP Vectra 486/66XM LANCE probe >We're using HP Vectra 486/66XM's here and they have an AMD >79C960 chip on the motherboard. The Ethernet HowTo indicates >that this is supported using the PCnet-ISA driver, lance.c, >which says upon booting that it is: > > HP J2405A IRQ 15 DMA 7. > >The only problem is that the IRQ and DMA are incorrect. Ooops, when I put in the HP-J2405A special-case code I didn't realize that they were going to come out with an incompatible implementation. The 'lance.c' driver *always* uses the value in the setup register of an HP Lance implementation. In this case it's reading an invalid 0xff value. >For the time being, I've been hardcoding the proper IRQ and DMA >values in the driver itself and everything has been working >fine, but I'd like to get the probe for this fixed so that I >don't have to muck around with the source (or do funny things >with LILO) in the future. That's the right temporary solution, and the right long-term attitude. I'll see if I can find someone at HP that knows how to tell the difference between a J2405A and a Vectra. If there isn't an easy way, I'll just ignore a 0xff setup value and do autoIRQ/autoDMA instead. > Alan Cox suggested talking to you about figuring out how to do multiple >3c509's within 1 linux box. I have an application where I would like to do >just this. Specifically I'd like to get 3 of them into a single ISA box. The 3c509 driver already supports multiple 3c509 cards on the *ISA* bus. Look in the probe code for the variable 'current_tag'. Just make certain that "eth1" and "eth2" are set to probe anywhere (address '0'), not just a specific I/O address. A side note: the 3c509 probe doesn't mix well with the rest of the probes. It's difficult to predict a priori which card will be accepted "first" -- the order is based on the hardware ethernet address. That means that the ethercard with the lowest ethernet address will be assigned to "eth0", and the next to "eth1", etc. If the "eth0" ethercard is removed, they all shift down one number. Another note: the 3c509 driver will fail to find multiple EISA-mode 3c509s and 3c579s. The file drivers/net/3c509.c needs to be modified to accept multiple EISA adaptors. This change is already made in later 1.1.* kernels. Around line 94 make the following changes:

-/* First check for a board on the EISA bus. */
+/* First check all slots of the EISA bus. The next slot address to
+   probe is kept in 'eisa_addr' to support multiple probe() calls. */
 if (EISA_bus) {
-	for (ioaddr = 0x1000; ioaddr < 0x9000; ioaddr += 0x1000) {
+	static int eisa_addr = 0x1000;
+	while (eisa_addr < 0x9000) {
+		ioaddr = eisa_addr;
+		eisa_addr += 0x1000;
+		/* Check the standard EISA ID register for an encoded '3Com'. */
 		if (inw(ioaddr + 0xC80) != 0x6d50)
 			continue;

Please let me know if this works for you. (A consolidated sketch of the patched probe section appears a little further below.) >I have just installed linux with support for the intel etherexpress >card what is the current status of this card and where can the latest >version of the driver be got from. > >The current version I have is v0.07 1/19/94 by yourself. The EExpress driver is still in alpha test -- it only works on some machines, generally slower 386 machines. Several people are actively working on the driver, but a stable release is at least several months away.
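After the 3c509 patch above is applied, the EISA section of the probe reads roughly as follows. This is a consolidated sketch for readability only -- the diff above is the authoritative change, and the code that claims the slot and reads the IRQ, if_port, and ethernet address is elided:

if (EISA_bus) {
	/* Next slot address to probe, kept across probe() calls so that
	   each call finds the next EISA-mode 3c509/3c579. */
	static int eisa_addr = 0x1000;
	while (eisa_addr < 0x9000) {
		ioaddr = eisa_addr;
		eisa_addr += 0x1000;
		/* Check the standard EISA ID register for an encoded '3Com'. */
		if (inw(ioaddr + 0xC80) != 0x6d50)
			continue;
		/* ...claim this slot... */
	}
}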
>My friend has a small utility (under DOS) which can tell the disk controller >to switch off the disk after some period of inactivity. He runs this program >and then boots linux. (He has two IDE disks). This is a standard feature of all modern IDE disks. I have a short program (appended) that I use to do the same thing on laptops. A user-level program is a poor way to do this, but I got tired of patching it into my own kernels and I didn't feel I could maintain an Official Kernel Feature. >After some time the disks are >switched off. Now, when linux wants to use them, disk driver writes some >messages about timeouts (one on the new disk and three-four on the old one) >and than everything is ok. This is almost normal: the Linux kernel gets upset when the disk doesn't respond immediately. It resets the controller, and by that time the disk has spun up. One annoying misfeature is that the disk drive posts an interrupt when it goes into spin-down mode; the kernel doesn't know where the interrupt is from, and 'syslog' immediately spins the disk back up. The quick, sleazy solution is to configure 'syslog' to ignore those messages. >BUT if the first process that wants to access >disk is swapper, the system hangs. It doesn't matter, which hard disk the >swap partition is on, the system hangs only when swapper wants to access >the disk. If somebody wants more details, I can reproduce it. Hmm, I've never experienced this. Anyway, here is my short program to put the disk into standby-timer mode. It takes a single optional parameter, the number of seconds to wait before going into standby mode.

/*
 * diskdown.c: Shut down an IDE disk if there is no activity.
 * Written by Donald Becker (becker@cesdis.gsfc.nasa.gov) for Linux.
 */
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>		/* atoi() */
#include <asm/io.h>

#define IDE_BASE	0x1f0
#define IDE_SECTOR_CNT	0x1f2
#define IDE_CMD		0x1f7
#define PORTIO_ON	1

enum ide_cmd {StandbyImmediate=0xe0, IdleImmediate=0xe1,
	      StandbyTimer=0xe2, IdleTimer=0xe3,};

int main(int argc, char *argv[])
{
	int timeout = 10;	/* default timeout in seconds */

	if (ioperm(IDE_BASE, 8, PORTIO_ON)) {
		perror("diskdown:ioperm()");
		fprintf(stderr, "diskdown: You must run this program as root.\n");
		return 1;
	}
	if (argc > 1) {
		timeout = atoi(argv[1]);
		if (timeout < 10)
			timeout = 10;
	}
	{
		int old_cnt = inb(IDE_SECTOR_CNT);
		printf("Old sector count: %d.\n", old_cnt);
		outb((timeout + 4)/5, IDE_SECTOR_CNT);	/* timer ticks of 5 sec. */
		outb(StandbyTimer, IDE_CMD);
		outb(old_cnt, IDE_SECTOR_CNT);
	}
	return 0;
}
/*
 * Local variables:
 * compile-command: "gcc -O6 -o diskdown diskdown.c"
 * comment-column: 32
 * End:
 */

>How does one work out/set that memory map, i.e. mem_start, >I've set io_addr to 0x300 and irq to 10 ok, its the memory >part I've got a blind spot for. The 3c589 uses 16 I/O locations and no memory locations. That makes it much easier to configure than an I/O + memory card. A quick way to check if the 3c589 is correctly mapped in is to run dd if=/dev/port skip=768 count=16 bs=1 | od -t x2 instead of the 'ifconfig...'. This will show the contents of I/O locations 0x300-0x30f (768 to 768+16). The 3c589 signature of 6d 50 (or 50 6d) should be the first bytes if it's mapped in correctly. >I have a friend who just got a laptop and I've been putting linux >on it. They got a Xircom credit card ethernet adapter (it says right >on the box that it supports "all popular network operating systems, right? >:-) Unfortunately, it looks like it is unsupported in Linux. On the other >hand, it is a PCMCIA card, and it sounded like "generic" PCMCIA support >might be forthcoming.
Until Xircom releases programming information, no non-standard (i.e. non-modem) product can be supported. The "generic" part of the PCMCIA support will only handle socket enabling. That's all that's needed for devices that adhere to a common register standard, like modems, but ethernet adaptors differ wildly. You should give Xircom a call and ask for the Linux driver. Tell them that it says right on the box that it "supports all popular operating systems". When they tell you that they don't have a device driver, ask them for the programming specifications:->. > I've managed to boot up linux on a PS/2 - at present I'd like to try and >get the current ETHERLINK/MC card working. I saw that in 3c509.c you >had provided some support for MCA. Some of the routime you call in >that section are undefined. What other routines do I need to have in order to build the 3c509 and try it out ? I don't have access to an MCA machine (nor do I fully understand the probing code) so I never wrote the mca_adaptor_select_mode() or mca_adaptor_id() routines. If you can find a way to get the adaptor I/O address that is assigned at boot time, you can just hard-wire that in place of the commented-out probe. Be sure to keep the code that reads the IRQ, if_port, and ethernet address. Sorry I can't be more helpful. Top Linux at CESDIS address{52}: Author: Donald Becker , becker@cesdis.gsfc.nasa.gov. MD5{32}: e6eb52d3c91a6618d36841436cc045ba File-Size{5}: 18884 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{16}: Network hardware } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/in.house/jacquie.html Update-Time{9}: 827948654 url-references{111}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/in-house.html mailto:lpicha@cesdis.gsfc.nasa.gov title{88}: Parallel Implementation of a Wavelet Transform and its Application to Image Registration keywords{45}: curator larry page picha previous return the images{42}: graphics/jacquline.gif graphics/return.gif headings{117}: Parallel Implementation of a Wavelet Transform and its Application to Image Registration Return to the PREVIOUS PAGE body{3336}: Objective: To provide a fast multi-resolution wavelet decomposition (and reconstruction) which can be utilized in many applications, such as image compression, browsing, and registration. Approach: A wavelet transform is a very flexible mathematical tool which describes simultaneously the spatial and the frequency content of image data. In particular, multi-resolution wavelet transforms provide this description at multiple scales by iteratively filtering the image by low-pass and high-pass filters, and reducing the size of the image by two in each direction at each iteration (this step being called "decimation"). When this process is applied to remote sensing data, the wavelet description can be the basis of many data management applications, especially image registration. Figure 1 shows the wavelet decomposition of an AVHRR image of the Pacific Northwest area. For image registration purposes, the wavelet decomposition extracts strong image characteristics which can be utilized as ground reference points to define the correspondence between several images, enabling automatic registration. With a fast parallel implementation of the wavelet transform, this type of process could be performed very rapidly for large amounts of data.
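To make the decomposition step concrete, here is a minimal serial sketch of one level of a 1-D wavelet decomposition with decimation by two. The Haar filter pair (sum/difference) is an illustrative assumption chosen for brevity; this is not the parallel MasPar implementation described in this report:

/* One level of 1-D wavelet decomposition with decimation by two.
   n is assumed even; each output array holds n/2 samples.
   The Haar (sum/difference) filters are an assumption for brevity. */
void wavelet_step(const float *in, int n, float *low, float *high)
{
	int i;
	for (i = 0; i < n / 2; i++) {
		low[i]  = 0.5f * (in[2*i] + in[2*i + 1]);	/* low-pass  */
		high[i] = 0.5f * (in[2*i] - in[2*i + 1]);	/* high-pass */
	}
}

For a 2-D image the same step is applied along rows and then columns, shrinking the image by two in each direction at each iteration, exactly the "decimation" described above.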
Accomplishments: A preliminary study of a parallel implementation of the multi-resolution wavelet decomposition was accomplished. A first prototype of parallel image registration involving image rotations and translations has been implemented on the MasPar MP-2, and tested with AVHRR and Landsat Pathfinder datasets. Collaboration with Dr. T.A. El-Ghazawi from George Washington University and Dr. J.C. Liu from Texas A&M University was initiated, and resulted in five different algorithms which have been developed and run on a mesh-connected, massively parallel architecture, the MasPar MP-2 (some of them have also been tested on a MasPar MP-1). These five algorithms differ in the methods used for filtering and decimation, and also in the virtualizations necessary to map the data onto the parallel array. Results show that over a sequential implementation, a parallel implementation offers an improvement in speed anywhere from 200 to nearly 600 times. These results are summarized in two papers, one to be published by the International Journal on Computers and their Applications, and the second one being submitted to Frontiers'95. Significance: A fast parallel implementation of wavelet decomposition and reconstruction of image data is important not only because it is useful for many data management applications, but also because it is representative of typical pre-processing which will have to be applied routinely to large amounts of remotely sensed data. Status/Plans: In FY95, parallel image registration utilizing a wavelet decomposition will be pursued, and extended to more general image transformations. Collaboration with Dr. T. Sterling has also been initiated and will be pursued to integrate the wavelet code in the ESS Parallel Benchmarks project (EPB 1.0), and in the Beowulf Parallel Linux Project (workstation environment of 1 Gops, with 16 processors, 256 MBytes of memory and 8 GBytes of disk). Point of Contact: Jacqueline Le Moigne CESDIS Goddard Space Flight Center (301) 286-8723 lemoigne@nibbles.gsfc.nasa.gov curator: Larry Picha MD5{32}: c5bce94212d1823ccdc714929d68bbfe File-Size{4}: 3992 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{69}: Parallel Implementation of a Wavelet Transform and its Application to } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/nren/keck.html Update-Time{9}: 827948657 url-references{107}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/nren.html mailto:lpicha@cesdis.gsfc.nasa.gov title{24}: ACTS Keck/GCM Experiment keywords{45}: curator larry page picha previous return the images{19}: graphics/return.gif headings{135}: Advanced Communications Technology Satellite (ACTS) Keck Observatory/Global Climate Model (GCM) Experiment Return to the PREVIOUS PAGE body{3525}: Objective: The objectives of the ACTS/Keck GCM experiment are to (1) demonstrate distributed supercomputing (meta-supercomputer) over a high performance satellite link, (2) demonstrate remote science data gathering, control, and analysis (telescience) with meta-supercomputer resources using multiple satellite hops, and (3) determine optimum satellite terminal/supercomputer host network protocol design for maximum meta-supercomputer efficiency. Approach: The two ACTS experiments, Keck and GCM, will be led by JPL and GSFC, respectively, with support from Caltech, UCLA, GWU, and Hawaii PacSpace.
The GCM experiment will require a virtual channel connection between the JPL Cray T3D and the GSFC Cray C90, while the Keck experiment will require a virtual channel connection between a remote control room at Caltech in Pasadena, CA, and the Keck Observatory local area network on Mauna Kea, Hawaii. Based on the expected availability of network switch and host ATM SONET OC-3 equipment by early CY95, ATM was selected as the base transport mechanism. This greatly simplifies the terrestrial network infrastructure, especially in the Hawaiian islands and ATDnet. A striped (4X OC-3) HIPPI/SONET gateway will be used as a backup should all the ATM infrastructure not be available by early CY95. For Keck, Caltech will modify the graphical user interface (GUI) design for use over longer delay channels and multi-user/location control (an adaptation of one currently used), JPL will perform the network system engineering and atmospheric/fading BER analysis, and GWU the HDR site design and performance modeling. Additionally, PacSpace will assist with scheduling the use of the Honolulu HDR and engineering the Honolulu/Mauna Kea network infrastructure. For GCM, GSFC will lead the porting of the distributed global climate model to the JPL and GSFC Cray supercomputers. GSFC staff scientists will port the Poseidon OGCM and Aries AGCM codes for coupling with UCLA AGCM and GFDL OGCM codes. In both experiments, the effect of fading, burst noise, and long transit delays will be examined and compared against lower error rate terrestrial links. Accomplishments: During the past year, the project-wide proposal was written (Aug. 93) and later revised (in Jan 94) to reflect later HDR delivery. In Dec. 93, the overall network infrastructure was refined to include ATM, and in May 94, the Hawaiian "last mile" fiber/microwave network infrastructure design was completed. In Jul. 94, JPL completed an atmospheric fading model and GSFC completed an integrated ATDnet network design that permits ATM, HIPPI, and raw SONET connectivity to NASA and ARPA experiment users. Significance: This pair of experiments will demonstrate the feasibility of using long path delay satellite links to establish meta-computing and control/data acquisition networks for remote collaboration, observation, and control of science experiments in hostile environments. Examples include Antarctic and undersea exploration, petroleum exploration, and interconnecting data centers to share large data bases. Status/Plans: Both applications will be designed, ported, and debugged over low speed Internet connections during the next year. Full HDR deployment and network connectivity is expected by Jul. 95, at which time high bandwidth trials are expected to commence, lasting for 9 additional months (to Mar 96). Point of Contact: Larry A.
Bergman Jet Propulsion Laboratory (818) 354-4689 bergman@kronos.jpl.nasa.gov curator: Larry Picha MD5{32}: caed50375f4e96ab2209619307c994d2 File-Size{4}: 4068 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{24}: ACTS Keck/GCM Experiment } @FILE { http://cesdis.gsfc.nasa.gov/linux/beowulf/details.html Update-Time{9}: 827948620 url-references{71}: beowulf.html beowulf.html http://www.maxtor.com/ http://www.maxtor.com/ title{18}: Details of Beowulf keywords{43}: beowulf description details maxtor project headings{85}: Beowulf Project Details Purpose Processor Motherboard Memory Disk Scalable Network body{3253}: This file isn't a standalone document. It supports and elaborates on the Beowulf Project Description . The processors in the current Beowulf nodes are Intel DX4 processors. This processor is a hybrid between the 80486 and the Intel P5 Pentium. Its features are: a '486 execution core with improved microcode; SMM (System Management Mode) power management from the SL series; a 16KB cache, the same as the P5 and twice the 8K of the '486; and fabrication with the same 3.3V, 0.6 micron process, on the same process lines, as the P5-90 and P5-100 processors. The net effect is that the DX4-100 processor is more than 50% faster than the 486DX2-66 processor. Compared to the P5-60 it has slightly better integer performance and somewhat worse floating point performance, at a significantly lower cost. The motherboards are based on the SiS 82471 chipset. This was the highest performance low-cost '486 support chipset available at the time we purchased the system. Each motherboard has 3 VL-bus slots (2 bus-master capable), 4 ISA-only slots, a 256K secondary cache with 2-1-1-1 burst refill, and "green" power-saving circuitry. We expect the next system to use PCI Pentium motherboards based on either the Intel Neptune or Triton chipsets. Both have good performance at low cost. The newer Triton chipset has the advantage of an integrated PCI bus-master EIDE controller and potentially better memory bandwidth when used with EDO DRAM, but motherboards using this chipset may not be available in time. Each processor has 16M of 60ns DRAM. The 60ns memories are only slightly more expensive than the usual 70ns or 80ns variety, and allow us to use a shorter delay when accessing main memory. The higher memory bandwidth is especially important when the internally clock-tripled processor is doing block memory moves. Beowulf is using Maxtor EIDE disks connected to a VL bus controller based on the DTC805 chip. The measured performance is about 4.5 MB/sec., close to the physical head data rate of the drive (nominally 3.5-5.6MB/sec, depending on the zone). The scalable communications are implemented by duplicating the hardware address of a primary network adaptor to the secondary interfaces, and marking all packets received on the internal networks as coming from a single pseudo-interface. This scheme constrains each internal network to connect to each node. With these constraints the Ethernet packet contents are independent of the actual interface used and we avoid the software routing overhead of handling more general interconnect topologies. The only additional computation over using a single network interface is the computationally simple task of distributing the packets over the available device transmit queues.
The current method used is alternating packets among the available network interfaces. The system-visible interface to this "channel bonding" is the 'ifenslave' command. This command is analogous to the 'ifconfig' command used to set up the primary network interface. The 'ifenslave' command copies the configuration of a "master" channel to a slave channel. It can optionally configure the slave channel to run in a receive-only mode, which is useful when initially configuring or shutting down the additional network interfaces. MD5{32}: aded8d0bdc45e97d037c0e95574ead8d File-Size{4}: 3898 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{18}: Details of Beowulf } @FILE { http://cesdis.gsfc.nasa.gov/pub/linux/diag/e21.c Update-Time{9}: 827948614 Partial-Text{2193}: main mem_off mem_on stdio.h stdlib.h unistd.h asm/io.h getopt.h sys/types.h sys/stat.h sys/mman.h fcntl.h /* e21.c: Diagnostic program for Cabletron E2100 ethercards. */ /* Written 1993,1994 by Donald Becker. Copyright 1994 Donald Becker Copyright 1993 United States Government as represented by the Director, National Security Agency. This software may be used and distributed according to the terms of the GNU Public License, incorporated herein by reference. The author may be reached as becker@cesdis.gsfc.nasa.gov. C/O USRA Center of Excellence in Space Data and Information Sciences Code 930.5 Bldg. 28, Nimbus Rd., Greenbelt MD 20771 */ /* #include "8390.h" */ /* Offsets from the base_addr. */ /* Offset to the 8390 NIC. */ /* The E21** series ASIC, known as PAXI. */ /* The following registers are heavy-duty magic. Their obvious function is to provide the hardware station address. But after you read from them the three low-order address bits of the next outb() sets a write-only internal register! */ /* Enable memory in 16 bit mode. */ /* Enable memory in 8 bit mode. */ /* Low three bits of the IRQ selection. */ /* High bit of the IRQ, and media select. */ /* Offset to station address data. */ /* This is a little weird: set the shared memory window by doing a read. The low address bits specify the starting page. */ /* { name has_arg *flag val } */ /* Give help */ /* Give help */ /* Force an operation, even with bad status. */ /* Interrupt number */ /* Verbose mode */ /* Display version number */ /* Probe for E2100 series ethercards. E21xx boards have a "PAXI" located in the 16 bytes above the 8390. The "PAXI" reports the station address when read, and has a weird address-as-data scheme to set registers when written. */ /* Needed for SLOW_DOWN_IO. */ /* Restore the old values. */ /* Do a media probe. This is magic. First we set the media register to the primary (TP) port. */ /* Select if_port detect. */ /*printk(" %04x%s", mem[0], (page & 7) == 7 ?
"\n":"");*/ /* do_probe(port_base);*/ /* * Local variables: * compile-command: "gcc -Wall -O6 -N -o e21 e21.c" * tab-width: 4 * c-indent-level: 4 * End: */ MD5{32}: 1d252253b6c856c4c40a4ea8bc381cde File-Size{4}: 5710 Type{1}: C Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Keywords{1118}: above according addr address after agency and are arg asic asm author bad base becker bit bits bldg boards but bytes cabletron center cesdis code command compile copyright data detect diagnostic director display distributed doing donald down duty enable end ethercards even excellence fcntl first flag following for force from function gcc getopt give gnu gov government greenbelt gsfc hardware has have heavy help herein high include incorporated indent information internal interrupt irq known level license little local located low magic main may media mem memory mman mode name nasa national needed next nic nimbus number obvious off offset offsets old only operation order outb page paxi port primary printk probe program provide public reached read reference register registers reports represented restore scheme sciences security select selection series set sets shared slow software space specify starting stat states station status stdio stdlib sys tab terms the their them this three types unistd united used usra val values variables verbose version wall weird when width window with write written you Description{4}: main } @FILE { http://cesdis.gsfc.nasa.gov/linux/drivers/v1.3/3c59x.c Update-Time{9}: 827948605 Partial-Text{4786}: EL3WINDOW cleanup_module init_module set_multicast_list tc59x_init update_stats vortex_close vortex_get_stats vortex_interrupt vortex_open vortex_probe1 vortex_rx vortex_start_xmit linux/config.h linux/module.h linux/version.h linux/kernel.h linux/sched.h linux/string.h linux/ptrace.h linux/errno.h linux/in.h linux/ioport.h linux/malloc.h linux/interrupt.h linux/pci.h linux/bios32.h asm/bitops.h asm/io.h asm/dma.h linux/netdevice.h linux/etherdevice.h linux/skbuff.h /* 3c59x.c: A 3Com 3c590/3c595 "Vortex" ethernet driver for linux. */ /* NOTICE: this driver version designed for kernel 1.2.0! Written 1995 by Donald Becker. This software may be used and distributed according to the terms of the GNU Public License, incorporated herein by reference. This driver is for the 3Com "Vortex" series ethercards. Members of the series include the 3c590 PCI EtherLink III and 3c595-Tx PCI Fast EtherLink. It also works with the 10Mbs-only 3c590 PCI EtherLink III. The author may be reached as becker@CESDIS.gsfc.nasa.gov, or C/O Center of Excellence in Space Data and Information Sciences Code 930.5, Goddard Space Flight Center, Greenbelt MD 20771 */ /* This will be in linux/etherdevice.h someday. */ /* The total size is twice that of the original EtherLinkIII series: the runtime register window, window 1, is now always mapped in. */ /* Theory of Operation I. Board Compatibility This device driver is designed for the 3Com FastEtherLink, 3Com's PCI to 10/100baseT adapter. It also works with the 3c590, a similar product with only a 10Mbs interface. II. Board-specific settings PCI bus devices are configured by the system at boot time, so no jumpers need to be set on the board. The system BIOS should be set to assign the PCI INTA signal to an otherwise unused system IRQ line.
While it's physically possible to share PCI interrupt lines, the 1.2.0 kernel doesn't support it. III. Driver operation The 3c59x series use an interface that's very similar to the previous 3c5x9 series. The primary interface is two programmed-I/O FIFOs, with an alternate single-contiguous-region bus-master transfer (see next). One extension that is advertised in a very large font is that the adapters are capable of being bus masters. Unfortunately this capability is only for a single contiguous region, making it less useful than the list of transfer regions available with the DEC Tulip or AMD PCnet. Given the significant performance impact of taking an extra interrupt for each transfer, using DMA transfers is a win only with large blocks. IIIC. Synchronization The driver runs as two independent, single-threaded flows of control. One is the send-packet routine, which enforces single-threaded use by the dev->tbusy flag. The other thread is the interrupt handler, which is single threaded by the hardware and other software. IV. Notes Thanks to Cameron Spitzer and Terry Murphy of 3Com for providing both 3c590 and 3c595 boards. The name "Vortex" is the internal 3Com project name for the PCI ASIC, and the not-yet-released (3/95) EISA version is called "Demon". According to Terry these names come from rides at the local amusement park. The new chips support both ethernet (1.5K) and FDDI (4.5K) packet sizes! This driver only supports ethernet packets because of the skbuff allocation limit of 4K. */ /* 3Com's manufacturer's ID. */ /* Operational definitions. These are not used by other compilation units and thus are not exported in a ".h" file. First the windows. There are eight register windows, with the command and status registers available in each. */ /* The top five bits written to EL3_CMD are a command, the lower 11 bits are the parameter, if applicable. Note that 11 parameter bits were fine for ethernet, but the new chip can handle FDDI-length frames (~4500 octets) and now parameters count 32-bit 'Dwords' rather than octets. */ /* The SetRxFilter command accepts the following classes: */ /* Bits in the EL3_STATUS general status register. */ /* Latched interrupt. */ /* Host error. */ /* EL3_CMD is still busy.*/ /* Register window 1 offsets, the window used in normal operation. On the Vortex this window is always mapped at offsets 0x10-0x1f. */ /* Remaining free bytes in Tx buffer. */ /* Window 0: EEPROM command register. */ /* Enable erasing/writing for 10 msec. */ /* Disable EWENB before 10 msec timeout. */ /* EEPROM locations. */ /* Window 3: MAC/config bits. */ /* Window 4: Various transcvr/media bits. */ /* Enable link beat and jabber for 10baseT. */ /* A marker for kernel snooping. */ /* Unlike the other PCI cards the 59x cards don't need a large contiguous memory region, so making the driver a loadable module is feasible. */ /* Remove I/O space marker in bit 0.
*/ MD5{32}: 4f0042f58cd111c3a476b7721a06b86b File-Size{5}: 25025 Type{1}: C Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Keywords{2246}: accepts according adapter adapters advertised allocation also alternate always amd amusement and applicable are asic asm assign author available baset beat because becker before being bios bit bitops bits blocks board boards boot both buffer bus busy but bytes called cameron can capability capable cards center cesdis chip chips classes cleanup close cmd code com come command compatibility compilation config configured contiguous control count data dec defintions demon designed dev device devices disable distributed dma doesn don donald driver dwords each eeprom eight eisa enable enforces erasing errno error ethercards etherdevice etherlink etherlinkiii ethernet ewenb excellence exported extension extra fast fastetherlink fddi feasible fifos file fine first five flag flight flows following font for frames free from general get given gnu goddard gov greenbelt gsfc handle handler hardware herein host iii iiic impact include incorporated independent information init inta interface internal interrupt ioport irq jabber jumpers kernel large latched lenght less license limit line lines link linux list loadable local locations lower mac making malloc manufacturer mapped marker master masters may mbs media members memory module msec multicast murphy name names nasa need netdevice new next normal not note notes notice now octets offsets one only open operation operational original other otherwise packet packets parameter parameters park pci pcnet performance physically possible previous primary probe product programmed project providing ptrace public rather reached reference region regions register registers released remaining remove rides routine runs runtime sched sciences see send series set setrxfilter settings shared should signal significant similar single size sizes skbuff snooping software someday space specific spitzer start stats status still string support supports synchronization system taking tbusy terms terry than thanks that the theory there these this thread threaded thus time timeout top total transcvr transfer transfers tulip twice two unfortunately units unlike unused update use used useful using various version very vortex was which while will win window windows with works writing written xmit yet Description{9}: EL3WINDOW } @FILE { http://cesdis.gsfc.nasa.gov/pub/linux/html/Ethernet-HOWTO-4.html Update-Time{9}: 827948630 url-references{286}: Ethernet-HOWTO.html#toc4 Ethernet-HOWTO-9.html#faq Ethernet-HOWTO-9.html#ne2k-probs Ethernet-HOWTO-3.html#e10xx Ethernet-HOWTO-3.html#de-100 Ethernet-HOWTO-3.html#dfi-300 Ethernet-HOWTO-5.html Ethernet-HOWTO-3.html Ethernet-HOWTO.html#toc4 Ethernet-HOWTO.html#toc Ethernet-HOWTO.html #0 title{33}: Clones of popular Ethernet cards. keywords{193}: accton all aritsoft beginning cabletron cards chapter clones contents dfi dfinet ethernet faq lan lantastic lcs link next poor popular previous problems section shinenet table tec the this top headings{47}: 4 Clones of popular Ethernet cards. 4.1 4.2 body{1994}: Contents of this section Due to the popular design of some cards, different companies will make `clones' or replicas of the original card. However, one must be careful, as some of these clones are not 100 % compatible, and can be troublesome. 
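The `bad clone' test that the HOWTO text below describes comes down to two identifier bytes in the Station Address PROM. A minimal sketch, assuming a SAPROM image has already been read from the card (the Linux ne.c probe reads it over I/O ports; the stub array here merely stands in for that):

/* ne2k_sig.c: toy illustration of the NE2000 signature test described
 * in the HOWTO text below.  The Linux ne.c probe checks for 0x57 in
 * bytes 14 and 15 of the Station Address PROM; a "bad clone" lacks
 * these identifier bytes.  The PROM here is a stub array, not real
 * I/O-port reads. */
#include <stdio.h>

static int ne2000_sig_ok(const unsigned char *sa_prom)
{
    /* Genuine NE2000s (and well-behaved clones) carry 0x57,0x57 here. */
    return sa_prom[14] == 0x57 && sa_prom[15] == 0x57;
}

int main(void)
{
    unsigned char good[16] = {0}, bad[16] = {0};
    good[14] = good[15] = 0x57;   /* proper identifier bytes */
    printf("good clone: %s\n", ne2000_sig_ok(good) ? "detected" : "rejected");
    printf("bad clone:  %s\n", ne2000_sig_ok(bad)  ? "detected" : "rejected");
    return 0;
}

Cards without the signature are the ones the driver must recognize by their station-address prefix instead, as noted below.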
Some common problems with `not-quite-clones' are noted in the FAQ section . This section used to have a listing of a whole bunch of clones that were reported to work, but seeing as nearly all clones will work, it makes more sense to list the ones that don't work 100 % . Poor NE2000 Clones Here is a list of some of the NE-2000 clones that are known to have various problems. Most of them aren't fatal. In the case of the ones listed as `bad clones' -- this usually indicates that the cards don't have the two NE2000 identifier bytes. NEx000-clones have a Station Address PROM (SAPROM) in the packet buffer memory space. NE2000 clones have 0x57,0x57 in bytes 14 and 15 of the SAPROM, while other supposed NE2000 clones must be detected by their SA prefix. Accton NE2000 -- might not get detected at boot, see ne2000 problems . Artisoft LANtastic AE-2 -- OK, but has flawed error-reporting registers. AT-LAN-TEC NE2000 -- clone uses Winbond chip that traps SCSI drivers ShineNet LCS-8634 -- clone uses Winbond chip that traps SCSI drivers Cabletron E10**, E20**, E10**-x, E20**-x -- bad clones, but the driver checks for them. See E10** . D-Link Ethernet II -- bad clones, but the driver checks for them. See DE-100 / DE-200 . DFI DFINET-300, DFINET-400 -- bad clones, but the driver checks for them. See DFI-300 / DFI-400 Poor WD8013 Clones I haven't heard of any bad clones of these cards, except perhaps for some chameleon-type cards that can be set to look like a ne2000 card or a wd8013 card. There is really no need to purchase one of these `double-identity' cards anyway. Next Chapter, Previous Chapter Table of contents of this chapter , General table of contents Top of the document, Beginning of this Chapter MD5{32}: 0ce4a5d65a26d5b1de6912ffc7320148 File-Size{4}: 2902 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{33}: Clones of popular Ethernet cards. } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/accomp/94accomp/cas94.accomps/cas4.html Update-Time{9}: 827948645 title{45}: Numerical Propulsion Simulation System (NPSS) keywords{44}: npss numerical propulsion simulation system images{52}: hpcc.graphics/hpcc.header.gif hpcc.graphics/npss.gif headings{46}: Numerical Propulsion Simulation System (NPSS) MD5{32}: fad1dcf7dd3411835e278bd8792593b1 File-Size{4}: 3782 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{45}: Numerical Propulsion Simulation System (NPSS) } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/guest.ci/schow.html Update-Time{9}: 827948652 url-references{171}: http://dialsparc10.ece.arizona.edu/hpcc_graphic.html http://dialsparc10.ece.arizona.edu/hpcc_graphic.html

Return to the PREVIOUS PAGE curator: Larry Picha MD5{32}: 87d3edfcd00b38c730cbf2fef838c529 File-Size{4}: 2304 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{51}: Query And Browse Of Earth Science Imagery Databases } @FILE { http://cesdis.gsfc.nasa.gov/linux/misc/N-way.ps Update-Time{9}: 827948609 Partial-Text{1138}: 4.0 7 An Introduction to Auto-Negotiation Message Page. An Auto-Negotiation Next Page encoding which contains a pre-defined 11-bit message code. Next Page function. The algorithm which governs Next Page communication. Next Page bit. A bit in the Auto-Negotiation Base Link Code Word that indicates there are additional Link Code Words with Next Pages to be exchanged. NLP Receive Link Integrity Test function. Auto-Negotiation link integrity test function which allows backward compatibility with the 10BASE-T Link Integrity Test function (See figure 14-6 in IEEE 802.3). NLP sequence. A Normal Link Pulse sequence, as defined in IEEE 802.3 section 14.2.1.1. Normal Link Pulse (NLP). An out-of-band communications mechanism used in 10BASE-T to indicate link status. Physical Layer Device (PHY). The portion of the physical layer between the MDI and MII. Physical Medium Attachment (PMA) sublayer. The portion of the physical layer that contains the functions for transmission, collision detection, reception, and (in the case of 100BASE-T4) clock recovery and skew alignment. page. MD5{32}: 002ae2c3b3b7eaf669be7571d5787532 File-Size{6}: 151244 Type{10}: PostScript Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Keywords{547}: additional algorithm alignment allows and are attachment auto backward band base between bit case clock code collision communication communications compatibility contains defined detection device encoding exchanged figure for function functions governs ieee indicate indicates integrity introduction layer link mdi mechanism medium message mii negotiation next nlp normal out page pages phy physical pma portion pre pulse receive reception recovery section see sequence skew status sublayer test that the there transmission used which with word words Description{55}: 4.0 7 An Introduction to Auto-Negotiation Message Page. } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/app.sw/jnniepict.html Update-Time{9}: 827948654 url-references{144}: http://cesdis.gsfc.nasa.gov/ http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/app.sw/jnnie.html mailto:lpicha@cesdis.gsfc.nasa.gov title{5}: JNNIE keywords{108}: and center cesdis curator data excellence information larry picha return sciences space technical the write images{38}: graphics/jnnie.gif graphics/return.gif headings{34}: Return to the technical write-up body{222}: Point of Contact: Dr.
Thomas Sterling Center of Excellence in Space Data and Information Sciences (CESDIS) Goddard Space Flight Center/Code 930.5 tron@chesapeake.gsfc.nasa.gov (301) 286-2757 curator: Larry Picha MD5{32}: 36fa623383827c2f56ce735c9a96cdf2 File-Size{3}: 626 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{5}: JNNIE } @FILE { http://cesdis.gsfc.nasa.gov/admin/inf.eng/wave.tutorial.fin/trade.secrecy.html Update-Time{9}: 827948691 title{20}: Policy Wave Tutorial keywords{9}: tutorial images{14}: wave.small.gif headings{16}: Can we ride the body{88}: TUTORIAL: The Policy Wave is Coming: Authorship in a U.S. Government Agency Context MD5{32}: 3fc7ed6c12104d599d53fc63954a13f3 File-Size{4}: 1586 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{20}: Policy Wave Tutorial } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/accomp/94accomp/cas94.accomps/hpcc.graphics/AIMS.pict Update-Time{9}: 827948794 MD5{32}: 3fc00ccdd0228563a6aebcc3ca9314cd File-Size{6}: 177116 Type{7}: Unknown Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/iitf.hp/minutes/ Update-Time{9}: 827948828 url-references{85}: /hpccm/iitf.hp/ 1.31.95 1.31.95.html 4.28.95.html 6.2.95.html 7.25.95.html WordTemp-4 title{32}: Index of /hpccm/iitf.hp/minutes/ keywords{31}: directory html parent wordtemp images{128}: /icons/blank.xbm /icons/menu.gif /icons/text.gif /icons/text.gif /icons/text.gif /icons/text.gif /icons/text.gif /icons/text.gif headings{32}: Index of /hpccm/iitf.hp/minutes/ body{270}: Name Last modified Size Description Parent Directory 21-Nov-95 15:28 - 1.31.95 26-Apr-95 14:51 2K 1.31.95.html 27-Apr-95 09:17 2K 4.28.95.html 24-May-95 13:29 4K 6.2.95.html 16-Jun-95 16:08 6K 7.25.95.html 02-Aug-95 11:38 5K WordTemp-4 31-Jul-95 11:11 6K MD5{32}: 52ab1b08be060212d243cd26aa108a49 File-Size{4}: 1144 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{32}: Index of /hpccm/iitf.hp/minutes/ } @FILE { http://cesdis.gsfc.nasa.gov/linux/misc/NWay.html Update-Time{9}: 827948609 url-references{584}: mailto:bill@lan.nsc.com mailto:becker@cesdis.gsfc.nasa.gov #1.0 #2.0 #2.1 #2.2 #3.0 #3.1 #3.2 #3.3 #3.4 #3.5 #3.6 #3.7 #4.0 #4.1 #4.2 #4.3 #4.4 #4.5 #4.6 #5.0 #5.1 #5.2 #5.3 #6.0 #7.0 #8.0 #FullDuplex #5.0 #jabber #MAU #DTE #MDU #ability #top http://cesdis.gsfc.nasa.gov/linux/misc/100mbs.html http://cesdis.gsfc.nasa.gov/linux/linux.html http://wwwhost.ots.utexas.edu/ethernet/100mbps.html http://alumni.caltech.edu/~dank/fe/ http://www.iol.unh.edu/consortiums/fe/fast_ethernet_consortium.html mailto:bill@lan.nsc.com /pub/people/becker/whoiam.html mailto:becker@cesdis.gsfc.nasa.gov title{35}: An Introduction to Auto-Negotiation references{169}: International Standard ISO/IEC 8802-3: 1992, 3rd. ed., ANSI/IEEE Std 802.3 IEEE Std 802.3u/D4-1995 (Draft supplement to ISO/IEC 8802-3:1993 ANSI/IEEE Std 802.3-1993 ed.) 
keywords{661}: ability alternate architecture attachment aui author auto automatic backwards basic becker benefits bill bunch cesdis charles compatibility conclusion connection consortium dan data definitions dependent detection donald dte duplex encoding equipment ethernet expandability extension extensions fast fault full function gov gsfc guide information interface interfaces introduction jabber kegel lcw linux management mau mbs mechanism media medium nasa negotiation network next operation optional page parallel path proprietary protection references related remote section sensing spurgeon synchronization technology terminal top transport type unit upgrade what images{89}: natsemi.gif connection.gif flp-burst.gif lcw.gif protocol.gif NP-LCW.gif remote-fault.gif headings{1085}: An Introduction to Auto-Negotiation Bill Bunch , February 1995 Converted to HTML and edited by Donald Becker , April 1995 CONTENTS 1.0 Introduction 2.0 What is Auto-Negotiation? 2.1 Basic Operation 2.2 Optional Operation 2.2.1 Management Interface 2.2.2 Next Page Function 2.2.3 Remote Fault Indication 3.0 Benefits of Auto-Negotiation 3.1 Automatic Connection 3.2 Backwards Compatibility 3.3 Network Protection 3.4 Technology Extensions 3.5 Upgrade Path 3.6 Management Interface 3.7 Proprietary Extension 4.0 Architecture 4.1 Ability Transport Mechanism 4.2 Data Encoding 4.3 Auto-Negotiation Synchronization 4.4 Parallel Detection 4.5 Next Page Function 4.6 Remote Fault Sensing 4.6.1 Simple Remote Fault Transport Mechanism 4.6.2 Simple Remote Fault Sensing 4.6.3 Specific Remote Faults via Next Page 5.0 Expandability 5.1 Auto-Negotiation on Alternate Media 5.2 Next Page Extension 5.2.1 Technology Ability Field Extension 5.2.2 Proprietary Extension 5.3 Network Type Extension 6.0 Conclusion 7.0 References 8.0 Definitions Other resources body{29841}: 1.0 Introduction 2.0 What is Auto-Negotiation? 2.1 Basic Operation 2.2 Optional Operation 3.0 Benefits of Auto-Negotiation 3.1 Automatic Connection 3.2 Backwards Compatibility 3.3 Network Protection 3.4 Technology Extensions 3.5 Upgrade Path 3.6 Management Interface 3.7 Proprietary Extension 4.0 Architecture 4.1 Ability Transport Mechanism 4.2 Data Encoding 4.3 Auto-Negotiation Synchronization 4.4 Parallel Detection 4.5 Next Page Function 4.6 Remote Fault Sensing 5.0 Expandability 5.1 Auto-Negotiation on Alternate Media 5.2 Next Page Extension 5.3 Network Type Extension 6.0 Conclusion 7.0 References 8.0 Definitions NWay (TM) Auto-Negotiation is a technology which was introduced by National Semiconductor to the IEEE 802.3u 100BASE-T working group in the Spring of 1994 as a result of the need for a mechanism to accommodate multi-speed network devices. National's NWay technology was chosen as the basis for this mechanism due to its simplicity, low cost, flexibility, interoperation with the installed base, and adaptability to future technologies. Currently, the Auto-Negotiation mechanism is defined in Clause 28 of the D4 draft of the ANSI/IEEE Std 802.3 MAC Parameters, Physical Layer, Medium Attachment Units and Repeater for 100 Mb/s Operation. This draft has been approved by the IEEE 802.3 Working Group. Refer to section 8.0 for definitions used in this document. Auto-Negotiation is a mechanism that takes control of the cable when a connection is established to a network device. 
Auto-Negotiation detects the various modes that exist in the device on the other end of the wire, the Link Partner, and advertises its own abilities to automatically configure the highest performance mode of interoperation. As a standard technology, this allows simple, automatic connection of devices that support a variety of modes from a variety of manufacturers. Auto-Negotiation acts like a rotary switch that automatically switches to the correct technology, such as 10BASE-T, 100BASE-TX, 100BASE-T4, or a corresponding Full Duplex mode. Once the highest performance common mode is determined, Auto-Negotiation passes control of the cable to the appropriate technology and becomes transparent until the connection is broken. Auto-Negotiation leverages the proven link function of 10BASE-T to provide robust operation over Category 3, 4, or 5 Unshielded Twisted Pair (UTP.) There are two basic cases that Auto-Negotiation accounts for as shown in Figure 1: Auto-Negotiation exists at both ends of a twisted-pair link. (Node A to Hub) Auto-Negotiation exists at only one end of a twisted-pair link. (Node B to Hub) Auto-Negotiation is most useful if it exists at both ends of the link since both ends speak the same "language" at start up. This allows a rich set of information to be transferred. The key to Auto-Negotiation's interoperation with installed, legacy LANs is the Parallel Detection function. The Parallel Detection function accounts for the case where only one end of a twisted-pair link has Auto-Negotiation. For example, consider an installed 10BASE-T node connected to a hub that supports 10BASE-T, 100BASE-TX, and Auto-Negotiation (see Figure 1). In this case, the hub recognizes the unique signals that the 10BASE-T only device produces and switches to 10BASE-T operation. In addition to the basic connection mechanism, Auto-Negotiation also provides the following optional additional features: The serial management interface of the Media Independent Interface (MII) register set provides a mechanism for additional control of Auto-Negotiation. It also provides a means to gather network status information. After exchanging the Base Page, which contains the information to make a connection automatically, if both ends of the link indicate support for the Next Page function, additional data may be exchanged. This allows extensions to the standard and proprietary extensions to exist without affecting interoperability. The basic transport mechanism for simple fault information is built into Auto-Negotiation, but the detection and advertisement of any particular fault is not required. Remote Fault Indication allows a device that is able to detect faults (e.g. wrong cable type, wiring fault, etc.) to advertise the presence of the fault to the Link Partner. The Remote Fault Indication may be used in conjunction with the Next Page function to transfer more information about the type of fault that occurred. The primary benefit of Auto-Negotiation is the automatic connection of the highest performance technology available without any intervention from a user, manager, or management software. If Auto-Negotiation exists at only one end of a twisted-pair link, it determines that the Link Partner does not support the Auto-Negotiation mechanism. Instead of exchanging configuration information, it examines the signal it is receiving. If Auto-Negotiation discovers that the signal matches a technology that the device supports, it will automatically connect that technology.
This function, known as Parallel Detection, gives Auto-Negotiation the ability to be compatible with any device that does not support Auto-Negotiation, yet supports 10BASE-T, 100BASE-TX, or 100BASE-T4. Connection to any technology via Parallel Detection other than those listed above is not supported by Auto-Negotiation. In the event that no common technology exists, Auto-Negotiation will not make a connection. This ensures preservation of network integrity and minimization of network down time. In particular, hubs are a primary beneficiary of this feature. For example, if a user connects a 100BASE-T4 device into a 10BASE-T/100BASE-TX switch, the result could be catastrophic for all the users connected through that switch. However, if the hub has Auto-Negotiation, it would refuse the connection and allow the rest of the network to proceed as usual. In fact, with Auto-Negotiation in the hub the network users are protected from any connection that the hub cannot recognize or accept. If Auto-Negotiation exists on both ends of a twisted-pair link, then both ends advertise their abilities to the other. Auto-Negotiation incorporates a robust handshake that ensures data integrity. The devices compare their abilities and connect at the highest performance common technology shared. Auto-Negotiation has been defined for flexibility. Standard technologies can use the basic Auto-Negotiation logic with their own definitions for the information to be exchanged (see section 5.0 for details). Currently, the IEEE 802.3 and 802.9 Working Groups each have their own, independent codes which allow the technologies to define which abilities can be advertised; in total, 32 of these codes can exist. IEEE 802.3 currently supports: 10BASE-T, 10BASE-T Full Duplex, 100BASE-TX, 100BASE-TX Full Duplex, and 100BASE-T4. Even within the IEEE 802.3 code space there is room for future technologies or enhancements. New nodes on the market will have 100Mb/s functionality as well as the traditional 10BASE-T. This means that there will be some latent performance available as these new nodes are added to an old 10BASE-T network. When the performance issue becomes critical, the latent ability can be tapped into by upgrading the hub. Auto-Negotiation enables the upgrade to occur without reconfiguring each node and/or each port on the new hub. While no management intervention is required for automatic connection, a management interface has been provided to give optional control and status of Auto-Negotiation. The management interface provides the following capabilities: Determine why a connection was refused Determine which abilities exist on the network Change connection speed Retrieve fault status Exchange arbitrary configuration information with a Link Partner (in conjunction with the Next Page function) These capabilities are useful in a managed-hub application since they give the manager remote access to all the above information and control. These functions are useful for node solutions with Auto-Negotiation as well. However, in the case of a node, the information is only available to the user of that node and not to the network at large. This information would be useful in installation and diagnostic software to help guide the user in resolving any difficulties. Auto-Negotiation has the option to send additional pieces of information after the "base" negotiation that determines the network connection before enabling the data service. This is known as the Next Page function.
Among other things, it can be used to send information that corresponds to an Organizationally Unique Identifier so that extra features could be implemented on a proprietary basis, yet not conflict with standard operation. Both ends of a twisted-pair link must have Auto-Negotiation with support for the Next Page function in order to take advantage of this feature. Specific remote fault type information transfer can also be supported using this flexible mechanism. To support the many different technologies that are on the market today or will be available in the future, Auto-Negotiation has been architected in a way that provides extensibility and flexibility. Basically, an Auto-Negotiation device advertises its abilities and detects the abilities of the remote device that it is connected to, known as the Link Partner. Once Auto-Negotiation has received the Link Partner's abilities in a robust manner and it receives acknowledgment that its abilities have also been received by the Link Partner, Auto-Negotiation compares the two sets of abilities and decides which technology to connect. This decision is based upon a pre-agreed priority of technologies. Auto-Negotiation attaches the highest performance common technology to the medium and becomes transparent until the link goes down or is reset. The basic mechanism that Auto-Negotiation uses to advertise a device's abilities is a series of link pulses which encode a 16 bit word, known as a Fast Link Pulse (FLP) Burst. An FLP Burst is composed of 17 to 33 link pulses which are identical to the link pulses used in 10BASE-T to determine whether a link has a valid connection (sometimes referred to as Normal Link Pulses or NLPs.) FLP Bursts occur at the same interval as NLPs, 16.8 ms. An FLP Burst has a nominal duration of 2 ms. Figure 2 shows the nominal timing of FLP Bursts. An FLP Burst interleaves clock pulses with data pulses to encode a 16 bit word. The absence of a pulse within a time window following a clock pulse encodes a logic zero and a pulse within the time window following a clock pulse encodes a logic one. The key to Auto-Negotiation's flexibility and expandability is the encoding of the 16 bit word. The 16 bit word is referred to as the Link Code Word (LCW). The LCW is encoded as shown in figure 3. The Selector Field, S[4:0], allows 32 different definitions of the Technology Ability Field to coexist. The intention is to allow standard technologies to leverage the basic Auto-Negotiation mechanism. Currently, S[4:0] = <00001> is assigned to IEEE 802.3 and S[4:0] = <00010> is assigned to IEEE 802.9. Two more codes are reserved for expansion of Auto-Negotiation. The remaining codes are reserved to be assigned to standard technologies that wish to leverage this mechanism, yet fall outside the scope of the currently defined Selector Field values. The Technology Ability Field, A[7:0], is defined relative to the Selector Field value of the Link Code Word. For IEEE 802.3 there are bits defined to advertise: 100BASE-TX Full Duplex 100BASE-T4 100BASE-TX 10BASE-T Full Duplex 10BASE-T The above list also defines the priority hierarchy for resolving multiple common abilities. That is, if both devices support both 10BASE-T and 100BASE-TX, Auto-Negotiation at both ends will connect 100BASE-TX instead of 10BASE-T. Priority resolution works such that when the 3 remaining bits in the Technology Ability Field are eventually defined, the new technology can be inserted anywhere in the list without disturbing the existing hierarchy.
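A minimal sketch of this encoding and of priority resolution, assuming the IEEE 802.3 bit layout for the Link Code Word (S[4:0] in bits 0-4, A[7:0] in bits 5-12, RF/Ack/NP in bits 13-15); the helper names are invented for illustration:

/* lcw.c: sketch of the 16-bit Link Code Word layout and the priority
 * resolution described above.  Bit positions follow the IEEE 802.3
 * register layout; names are illustrative only. */
#include <stdio.h>

#define LCW_SELECTOR(w)  ((w) & 0x1f)         /* S[4:0] */
#define LCW_ABILITY(w)   (((w) >> 5) & 0xff)  /* A[7:0] */
#define LCW_RF           (1 << 13)
#define LCW_ACK          (1 << 14)
#define LCW_NP           (1 << 15)

/* IEEE 802.3 ability bits within A[7:0], highest priority first. */
static const struct { unsigned char bit; const char *name; } prio[] = {
    { 1 << 3, "100BASE-TX Full Duplex" },
    { 1 << 4, "100BASE-T4"             },
    { 1 << 2, "100BASE-TX"             },
    { 1 << 1, "10BASE-T Full Duplex"   },
    { 1 << 0, "10BASE-T"               },
};

static const char *resolve(unsigned short lcw_ld, unsigned short lcw_lp)
{
    unsigned char common = LCW_ABILITY(lcw_ld) & LCW_ABILITY(lcw_lp);
    for (int i = 0; i < 5; i++)
        if (common & prio[i].bit)
            return prio[i].name;   /* highest common technology */
    return "no common technology -- no connection";
}

int main(void)
{
    /* Local device: 10BASE-T + 100BASE-TX; partner: 10BASE-T only. */
    unsigned short ld = 0x0001 | (((1 << 0) | (1 << 2)) << 5);
    unsigned short lp = 0x0001 | ((1 << 0) << 5);
    printf("selector %02x, connect: %s\n", LCW_SELECTOR(ld), resolve(ld, lp));
    return 0;
}

Because resolution walks the table from the top, adding a new technology is just a matter of inserting a row at the right rank, which is exactly the property the text above describes.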
This means that the 3 reserved bits can be assigned without causing interoperability problems with any Auto-Negotiation device produced before these bits were defined. The Remote Fault bit, RF, allows transmission of simple fault information to the Link Partner. The Acknowledge bit, Ack, is used by the synchronization mechanism to ensure robust data transfer. The Next Page bit, NP, advertises to the Link Partner whether the Next Page function is supported. The Next Page function is used to send additional information beyond the basic configuration information. Both ends must have this ability in order to exchange this type of information. Auto-Negotiation must ensure that the Link Partner receives the Link Code Word correctly and that the Link Partner's Link Code Word is received correctly in order to make a connection decision. Auto-Negotiation uses the Arbitration function to accomplish this. Figure 4 illustrates the following example. The Local Device begins by transmitting its Link Code Word, LCW[LD] , with the Ack bit not set. Once three consecutive, matching Link Code Words are received from the Link Partner LCW[LP] (ignoring Ack), the Local Device sets the Ack bit in the transmitted Link Code Word to indicate that it has received the Link Partner's Link Code Word correctly. The Local Device continues transmitting its Link Code Word. Upon receiving three consecutive, matching Link Code Words from the Link Partner with the Ack bit set, the Local Device knows that the Link Partner has also received the Link Code Word correctly. The Local Device transmits the Link Code Word with the Ack bit set 6-8 additional times to ensure that a complete handshake has taken place. Now, both the Local Device and the Link Partner have exchanged their base Link Code Words. Each device compares their abilities and the highest performance common technology as determined by priority resolution is connected to the medium. To account for technologies that existed prior to Auto-Negotiation, Auto-Negotiation passes the signals present on the receiver to the 100BASE-TX and 100BASE-T4 Link Monitor functions. If Auto-Negotiation determines that exactly one Link Monitor function indicates that the link is good, then it can connect that technology to the media. Note, however, that this function is only implemented for 10BASE-T, 100BASE-TX, and 100BASE-T4. Future multi-mode devices will use Auto-Negotiation as the basis of automatic mode switching. Auto-Negotiation incorporates a modified 10BASE-T Link Integrity Test function in order to interoperate properly with installed 10BASE-T devices. The modifications ensure that Auto-Negotiation can control the function such that 10BASE-T devices are always correctly detected. If the Next Page bit is set in both the outgoing and incoming Link Code Words, then both the Local Device and the Link Partner are able to support the Next Page function and will participate in Next Page exchange. Once the first Link Code Word has been exchanged, both sides have the information required to configure the highest common technology. However, if Next Page exchange occurs then Auto-Negotiation does not configure the highest common technology until Next Page exchange has completed. Next Page exchange works in the same way that the `base' Link Code Words were exchanged. The main difference is the encoding of the exchanged Link Code Words which is shown in figure 5. The Next Page bit, NP, indicates that an additional Next Page will be exchanged. 
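A toy model of the Arbitration handshake just described, with a scripted stream of received words standing in for the FLP Burst receiver; the state handling is deliberately simplified relative to the Clause 28 state diagrams:

/* arb.c: toy model of the Arbitration handshake.  A received word
 * "matches" when it equals the previous one with the Ack bit ignored;
 * after three consecutive matches the local device sets Ack, and after
 * three consecutive received words with Ack set the exchange is
 * considered acknowledged. */
#include <stdio.h>

#define ACK (1 << 14)

int main(void)
{
    /* A scripted stream of LCWs received from the Link Partner. */
    unsigned short rx[] = { 0x0021, 0x0021, 0x0021,               /* match x3 */
                            0x0021 | ACK, 0x0021 | ACK, 0x0021 | ACK };
    unsigned short last = 0;
    int match = 0, acked = 0, local_ack = 0;

    for (unsigned i = 0; i < sizeof rx / sizeof *rx; i++) {
        unsigned short w = rx[i] & ~ACK;          /* ignore Ack bit   */
        match = (i && w == last) ? match + 1 : 1;
        last = w;
        if (match >= 3) local_ack = 1;            /* got LCW[LP] ok   */
        acked = ((rx[i] & ACK) && local_ack) ? acked + 1 : 0;
        if (acked >= 3) {
            printf("handshake complete; transmit 6-8 more words with Ack set\n");
            break;
        }
    }
    return 0;
}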
The Acknowledge bit, Ack, works the same as for the base Link Code Word exchange. Message Page, MP, indicates whether the Message Code Field, M[10:0], will be interpreted as a Message Code or an Unformatted Code. Message Codes are pre-defined messages in the IEEE 802.3 standard, Clause 28. Unformatted Codes are arbitrary pieces of data. Following a base Link Code Word exchange with the IEEE 802.3 Selector Field value, Unformatted Codes follow Message Codes with information required by the Message Code. There are two different ways of interpreting a received Next Page. If the Message Page bit is set, then the Message Code Field, M[10:0], is a binary code that corresponds to a pre-defined message in the IEEE 802.3 standard, Clause 28. There are 2048 possible message codes. Of these, 8 codes are defined (all other codes are undefined at present): 2 codes are reserved for Auto-Negotiation expansion and the remaining 6 codes are defined as follows: Null Message: Code exchanged if there is no further information to be transmitted while the Link Partner is still transmitting information. One Unformatted Page containing a Technology Ability Field follows: Provides extension of the base Link Code Word. Two Unformatted Pages containing Technology Ability Field information follows: Provides extension of the base Link Code Word. One Unformatted Page with a binary encoded Remote Fault follows: Unformatted Page contains Remote Fault type; Remote Fault Test, Link Loss, Jabber, or Parallel Detection Fault OUI Tagged Message: Organizationally Unique Identifier followed by one Unformatted Page (defined by the transmitting organization). PHY ID Tagged Message: PHY ID followed by one Unformatted Page (defined by the transmitting organization). The Acknowledge 2 bit, Ack2, is set by the receiving device to indicate that the device supports the function indicated by the message. The Toggle bit, T, is set by the Arbitration function within Auto-Negotiation to ensure proper synchronization with the Link Partner during Next Page exchange. A basic remote fault status transport mechanism is built into the Auto-Negotiation function (i.e. mandatory). However, the ability to sense and categorize fault types is not required. To transfer simple remote fault status, a device which has detected a remote fault will set the Remote Fault bit in the Auto-Negotiation Advertisement Register (ANAR), and renegotiate. This will advertise to the Link Partner that a remote fault has been detected. If negotiation subsequently completes, the Remote Fault bit in the ANAR will be reset to clear the fault condition. Upon detection of the Remote Fault bit in the Auto-Negotiation Link Partner Advertisement Register (ANLPAR), the device will set the Remote Fault bit in the MII status register. Note: All registers are defined as part of the MII register set. Devices may implement any remote fault detection mechanism desired and use this transport mechanism to inform the Link Partner of a fault. The meaning of the fault to the receiver is limited, however. Reception of remote fault status only informs a device that something is wrong with the link rather than specifying the type of fault that has occurred. As an example, a device could detect a fault as follows. If a device is attempting to Auto-Negotiate yet "never" receives a valid set of signals that will allow it to connect, management software could detect this as being caused by a fault in making a connection. The device could then set the Remote Fault bit in the ANAR and renegotiate.
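A sketch of this remote fault sequence over the MII register set, assuming the standard register numbering (BMCR=0, BMSR=1, ANAR=4, ANLPAR=5), with mdio_read()/mdio_write() as hypothetical stand-ins for the serial management interface:

/* rf.c: sketch of the simple Remote Fault transport described above.
 * Register and bit numbers follow IEEE 802.3: ANAR.RF is bit 13, the
 * BMCR restart-autonegotiation bit is bit 9, and the BMSR remote-fault
 * bit is bit 4.  The PHY here is a plain array, not real hardware. */
#include <stdio.h>

enum { MII_BMCR = 0, MII_BMSR = 1, MII_ANAR = 4, MII_ANLPAR = 5 };
#define ANAR_RF      (1 << 13)
#define BMCR_RESTART (1 << 9)
#define BMSR_RF      (1 << 4)

static unsigned short regs[32];                       /* fake PHY */
static unsigned short mdio_read(int r)            { return regs[r]; }
static void mdio_write(int r, unsigned short v)   { regs[r] = v; }

int main(void)
{
    /* Local device detects a fault: advertise it and renegotiate. */
    mdio_write(MII_ANAR, mdio_read(MII_ANAR) | ANAR_RF);
    mdio_write(MII_BMCR, mdio_read(MII_BMCR) | BMCR_RESTART);

    /* Link Partner side: RF seen in ANLPAR, so latch BMSR.RF for
     * management software to read. */
    regs[MII_ANLPAR] = regs[MII_ANAR];                /* simulated exchange */
    if (mdio_read(MII_ANLPAR) & ANAR_RF)
        mdio_write(MII_BMSR, mdio_read(MII_BMSR) | BMSR_RF);

    printf("BMSR remote fault: %s\n",
           mdio_read(MII_BMSR) & BMSR_RF ? "set" : "clear");
    return 0;
}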
The scenario described above could be caused by: (see Figure 6). The Local Device has a fault in the wiring of the receive pair. The Link Partner would have received the remote fault information and set the status bit informing management that a fault has occurred. The Local Device has a fault in the wiring of the transmit pair. The Link Partner could never receive the remote fault information. If the Link Partner also supported this type of remote fault sensing, then the situation would be equivalent to example 1, where the Local Device would inform management of the fault status. In this case, the Local Device will detect that the Link Partner is Auto-Negotiation able and set its outgoing Ack bit. The Local Device will "never" receive Ack set from the Link Partner. Since the Local Device's management agent knows that both devices are Auto-Negotiating, but cannot complete since there is no acknowledgment from the Link Partner, there must be something wrong with the transmission path. The Link Partner is not transmitting FLP Bursts and instead transmits signals of a technology that the Local Device cannot support. Since the Link Partner does not support Auto-Negotiation, the remote fault information is not received by the Link Partner. Note that no connection should be allowed since there are no common technologies between the devices. The Local Device will continue to send link pulses indefinitely. Software may determine that a fault continues to persist and notify any local management agent. It is possible for Auto-Negotiation to complete, even though some type of remote fault is present that can be detected. For example, a device may be jabbering, the wire may not support the 100Mb/s technology, or there is excessive noise present. While this type of fault could be transferred using the simple Remote Fault transport mechanism, it may be beneficial to inform the Link Partner which type of fault is being experienced. This can be accomplished if both ends of the link participate in a Next Page exchange to transfer the fault type information. The wire connection must be such that an Auto-Negotiation page exchange can complete. Auto-Negotiation has been architected to provide extensive code space that will allow the basic mechanism to be leveraged and remain interoperable regardless of the nature of new technologies. Auto-Negotiation is easily adaptable to virtually any technology that uses twisted pair wiring. While not standardized, the same mechanism could be used over media types other than twisted pair by replacing the encoding method with one that is compatible with the given media. For example, since link pulses do not directly translate onto fiber, an alternate coding scheme could be defined to replace the link pulses. The algorithm and Link Code Word encodings would all remain the same. The Next Page function is architected to provide virtually unlimited code space. The Message Code space has 2040 codes that may be defined. Implementations need only consider what is an acceptable time to make a connection. Within a given Selector Field, the base page has enough space for 8 different technologies (assuming they are to be advertised independently). If all of the base page bits are defined, the Next Page function can be used to extend this to support additional technologies. Thus far, codes have been reserved to support up to 16 additional bits dedicated to providing technology information. 
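For reference, the six defined Message Codes listed above as a C enumeration; the numeric values follow IEEE 802.3 Clause 28 as best recalled and should be checked against the standard:

/* an_msg.h-style sketch: the defined Next Page Message Codes.  The
 * remaining codes of the 2048-code space are reserved or undefined;
 * values here are illustrative, verify against Clause 28. */
enum an_message_code {
    AN_MSG_NULL         = 1,  /* Null Message                                 */
    AN_MSG_TAF_1PAGE    = 2,  /* one Unformatted Page w/ Technology Ability   */
    AN_MSG_TAF_2PAGES   = 3,  /* two Unformatted Pages w/ Technology Ability  */
    AN_MSG_REMOTE_FAULT = 4,  /* one Unformatted Page w/ binary Remote Fault  */
    AN_MSG_OUI_TAGGED   = 5,  /* OUI followed by one Unformatted Page         */
    AN_MSG_PHYID_TAGGED = 6   /* PHY ID followed by one Unformatted Page      */
};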
The Next Page function also provides the flexibility for manufacturers to define any additional information that may be used to provide control and/or status to a management agent. Through the Selector Field code space, 30 fundamentally different network types can be accommodated by the Auto-Negotiation function. Currently, IEEE 802.3, CSMA/CD LANs, and IEEE 802.9 Integrated Services LAN have adopted Auto-Negotiation and reside in this code space. Token Ring, Wireless, and others could conceivably leverage all or part of Auto-Negotiation to provide a greater level of interoperability. Auto-Negotiation is a standard, simple, low cost, flexible mechanism for providing connection interoperability between IEEE 802.3 LANs. Auto-Negotiation forms the basis for a highest common performance link configuration mechanism. In addition, Auto-Negotiation provides management control and is a valuable network status tool. Auto-Negotiation's simplicity facilitates implementing cost-effective multi-function nodes and/or hubs. Auto-Negotiation's flexible architecture ensures that future technology interoperability needs can be met. National Semiconductor provided its NWay(tm) technology and expertise to create Clause 28 of ANSI/IEEE Std 802.3u Draft D4 which embodies the Auto-Negotiation Function. This draft specifically supports configuring the highest performance common mode between 10BASE-T and 100BASE-T devices, making multi-vendor, standard interoperability a reality for IEEE 802.3 compatible LANs. Attachment Unit Interface (AUI) . In 10BASE-T, the interface between the MAU and the DTE within a data station. ability . A mode which a device can advertise using Auto-Negotiation. advertised ability. An operational mode that is advertised using Auto-Negotiation. Auto-Negotiation. The function which allows two devices at either end of a link segment to negotiate common data service functions. Base Link Code Word. The first 16-bit message exchanged during Auto-Negotiation. Base Page. See Base Link Code Word. Data Terminal Equipment (DTE). Any source or destination of data connected to the LAN. Fast Link Pulse (FLP) Burst. A group of no more than 33 and not less than 17 10BASE-T compatible link integrity test pulses. Each FLP Burst encodes 16 bits of data using an alternating clock and data pulse sequence. Full Duplex . A type of networking which supports simultaneous reception and transmission. jabber . A condition wherein a station transmits for a period of time longer than permissible, usually due to a fault condition. link. The transmission path between any two interfaces of generic cabling. Link Code Word. The 16 bits of data encoded into a Fast Link Pulse Burst. Link Partner. The device at the opposite end of a link segment from the local device. The Link Partner device may be either a DTE or repeater. link pulse. Communication mechanism used in 10BASE-T and 100BASE-T networks to indicate link status and (in Auto-Negotiation equipped devices) to communicate information about abilities and negotiate communication methods. 10BASE-T uses Normal Link Pulses (NLPs), which indicate link status only. 10BASE-T and 100BASE-T devices equipped with Auto-Negotiation exchange information using a Fast Link Pulse mechanism which is compatible with 10BASE-T. link segment. The point-to-point full duplex medium connection between two and only two Medium Dependent Interfaces (MDIs.) local ability. See ability . Relative to the Local Device. Local Device.
The local station which may attempt to Auto-Negotiate with a Link Partner. The Local Device may be either a DTE or repeater. Medium Attachment Unit (MAU). A device containing an AUI, PMA, and MDI, used to connect a repeater or DTE to a transmission medium. Medium Dependent Interface . The mechanical and electrical interface between the transmission medium and the MAU (10BASE-T) or PHY (100BASE-T). Media Independent Interface (MII). A signal interface which maps to MAC service definitions. Message Code. The pre-defined 11-bit code contained in an Auto-Negotiation Message Page. Message Page. An Auto-Negotiation Next Page encoding which contains a pre-defined 11-bit message code. Next Page function. The algorithm which governs Next Page communication. Next Page bit. A bit in the Auto-Negotiation Base Link Code Word that indicates there are additional Link Code Words with Next Pages to be exchanged. NLP Receive Link Integrity Test function. Auto-Negotiation link integrity test function which allows backward compatibility with the 10BASE-T Link Integrity Test function (See figure 14-6 in IEEE 802.3). NLP sequence. A Normal Link Pulse sequence, as defined in IEEE 802.3 section 14.2.1.1. Normal Link Pulse (NLP). An out-of-band communications mechanism used in 10BASE-T to indicate link status. Physical Layer Device (PHY). The portion of the physical layer between the MDI and MII. Physical Medium Attachment (PMA) sublayer. The portion of the physical layer that contains the functions for transmission, collision detection, reception, and (in the case of 100BASE-T4) clock recovery and skew alignment. page. In Auto-Negotiation, the encoding for a Link Code Word. Auto-Negotiation can support multiple Link Code Word encodings. The base page has a constant encoding as defined in IEEE 802.3u D4, section 28.2.1.2. Additional pages may have a pre-defined encoding (see Message Page) or may be custom encoded (see Unformatted Page.) parallel detection. In Auto-Negotiation, the ability to detect 100BASE-TX and 100BASE-T4 technology specific link signalling while also detecting the NLP sequence of a 10BASE-T device. Priority Resolution function. The mechanism used by Auto-Negotiation to select the network connection type where more than one common network ability exists (100BASE-TX, 100BASE-T4, 10BASE-T, etc.) The priority resolution table defines the relative hierarchy of connection types from the highest performance to the lowest performance. remote fault. The generic ability of a Link Partner to signal its status even in the event that it may not have an operational link. renegotiation. Re-start of the Auto-Negotiation function caused by a management or user interaction. segment. The medium connection, including connectors, between MDIs in a CSMA/CD LAN. Selector Field. A 5 bit field in the base Link Code Word encoding that is used to encode up to 32 types of messages which define basic abilities. Technology Ability Field. An 8 bit field in the Auto-Negotiation base Link Code Word encoding that is used to indicate the abilities of a Local Device, such as support for 10BASE-T, 100BASE-TX, 100BASE-T4, as well as Full Duplex capabilities. Unformatted Page. A Next Page encoding which contains an unformatted 11-bit message field. Use of this field is defined through Message Codes and information contained in the Unformatted Page message field. 100Mbs Ethernet information at CESDIS Linux-related Ethernet information at CESDIS Charles Spurgeon's 100Mbs Ethernet Guide Dan Kegel's Fast Ethernet page U.N.H.
Fast Ethernet Consortium address{104}: Top Author: Bill Bunch of National Semiconductor HTML by Donald Becker , becker@cesdis.gsfc.nasa.gov . MD5{32}: 34ac9a5cce56931964adb51189e271b3 File-Size{5}: 35206 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{35}: An Introduction to Auto-Negotiation } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/in.house/multigrid.html Update-Time{9}: 827948654 url-references{115}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/in.house/lou.html mailto:lpicha@cesdis.gsfc.nasa.gov title{57}: Scaling Performance of Parallel Multigrid Elliptic Solver keywords{47}: curator larry picha return technical the write images{42}: graphics/multigrid.gif graphics/return.gif headings{34}: Return to the technical write-up body{123}: Point of Contact: John Lou Jet Propulsion Laboratory (818) 354-4870 lou@acadia.jpl.nasa.gov curator: Larry Picha MD5{32}: 07af963fe52ad0573412a94cb787175b File-Size{3}: 545 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{57}: Scaling Performance of Parallel Multigrid Elliptic Solver } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/ree.hp/graphics/ Update-Time{9}: 827948829 url-references{113}: /hpccm/ree.hp/ blue.bullet.gif hpcc.header.gif hpccsmall.gif nasa.meatball.gif ree.gif sound.icon.gif wavebar.gif title{32}: Index of /hpccm/ree.hp/graphics/ keywords{92}: blue bullet directory gif header hpcc hpccsmall icon meatball nasa parent ree sound wavebar images{151}: /icons/blank.xbm /icons/menu.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif headings{32}: Index of /hpccm/ree.hp/graphics/ body{320}: Name Last modified Size Description Parent Directory 31-Jul-95 14:51 - blue.bullet.gif 08-Nov-94 10:17 1K hpcc.header.gif 18-May-95 13:29 1K hpccsmall.gif 23-May-95 11:55 2K nasa.meatball.gif 08-Nov-94 10:17 3K ree.gif 08-Nov-94 10:17 22K sound.icon.gif 08-Nov-94 10:17 1K wavebar.gif 08-Nov-94 10:17 2K MD5{32}: ea0163099b3562b403bd366e9dc43f7d File-Size{4}: 1317 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{32}: Index of /hpccm/ree.hp/graphics/ } @FILE { http://cesdis.gsfc.nasa.gov/people/becker/beowulf.html Update-Time{9}: 827948897 url-references{600}: /linux/linux.html /linux/beowulf/icpp95.html /linux/beowulf/hpdc95.html http://www.cs.virginia.edu/~hpdc95/program.html http://www.bl.uk/access/beowulf/electronic-beowulf.html http://etext.lib.virginia.edu/cgibin/browse-mixed?id=AnoBeow=public=images/modeng=/lv1/Archive/eng-parsed http://etext.lib.virginia.edu/cgibin/browse-mixed?id=AnoBeow=public=images/modeng=/lv1/Archive/eng-parsed beowulf1.html white2.html /linux/misc/100mbs.html details.html#dx4 details.html#motherboard details.html#memory details.html#network http://vita.mines.colorado.edu:3857/jscales #top /pub/people/becker/whoiam.html title{27}: Beowulf Project Description keywords{215}: adapters author becker beowulf capability cesdis description ethernet fast gov gsfc here hpdc interconnect john linux mbs memory motherboard nasa original paper processor 
project scalable scales status text the top headings{44}: Beowulf Project Description Other resources body{3600}: Beowulf is a project to produce the software for an off-the-shelf clustered workstation based on commodity PC-class hardware and the Linux operating system. Aug 1995: Our Beowulf paper to be presented at ICPP. Aug 2, 1995: Our Beowulf paper presented at HPDC-95 . A Long Time Ago: Beowulf: The Original and the original text 2 . Two "white papers" written by Thomas Sterling are available here and here . The current generation Beowulf hardware is a 16 processor Pentium cluster. Each node consists of A Pentium processor running at 100MHz A PCI motherboard based on the Intel Triton chipset 256K of synchronous cache 32M of memory 1.2G EIDE disk attached to the motherboard's 17MB/sec. bus master IDE controller. Two or three 100Mbs Fast Ethernet adapters . The processors are currently connected by four NetWorth 8 port Fast Ethernet repeaters and two point-to-point 100Mbps links between the two halves. We would have preferred two 16 port repeaters but they were unavailable at the time we specified the system. The original Beowulf hardware consists of a 16 node cluster. Each node consists of One DX4 processor running at 100MHz internally. A motherboard 16M of memory 540M or 1G EIDE disk communicating at 8.3MB/sec to a VLB interface Two 10Mbs bus-master ethernet cards The most notable system enhancement is scalable interconnect capability . The machine load balances the communication among the available network cards. This is transparent to the applications -- the network appears as a single network with twice the bandwidth and the same base latency. Most of the currently running parallel applications are written using PVM to communicate, others use RPC. We plan to support MPI soon. Future work is expected to include Implementing a NVM system (Network Virtual Memory a.k.a. Distributed Shared Memory) A distributed I/O file server The near-term kernel implications are: A new device driver for the AMD PCnet32 79C965 chip that's on the new, cheap (<$70) VL bus ethercards. This chip has a backwards-compatible PCnet/LANCE mode, but has a full 32 bit mode as well. Status: The 24 bit address mode driver has been implemented as an extension to my earlier LANCE driver. This code is in kernels after 1.1.54. The 32 bit address mode driver, which will be insignificantly faster, will be released after kernel 1.2. Routing/queuing changes to support "bonding" ethernet channels (a toy sketch appears below). The first Beowulf system has two parallel 10Mbs ethernets, and subsequent versions may have parallel 100Mbs networks. Status: Second generation prototype in production use. The public release is waiting on inclusion of the kernel diffs and the completion of the 'ifenslave' manual page. A device driver for the DEC 21040 and 21140 PCI ethernet controllers. Status: The 21040 driver has been released as 'tulip.c' in the current development kernel. The 21140 extensions will likely be included with the 1.2.14 patch file. A device driver for a 100Mbs network adaptor made by LAN Performance Labs. The 100Mbs data rate should provide an excellent test of Linux networking and help us to identify bottlenecks. The LPL100 boards can be daisy-chained, unlike 100VG and 100baseT adapters which require a hub. Status: Hardware in place, work currently suspended. I'll also be putting together a distribution of space data processing applications and distributed programming environments.
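A toy model of the channel "bonding" idea above, assuming simple round-robin transmit scheduling over two slave channels; the real implementation lives in the kernel's routing/queuing code and the 'ifenslave' tool mentioned above, and the byte counters here merely stand in for actual interfaces:

/* bond.c: toy model of channel bonding -- traffic split round-robin
 * across parallel Ethernets so the pair looks to applications like a
 * single link with twice the bandwidth and the same base latency. */
#include <stdio.h>

#define CHANNELS 2

int main(void)
{
    long sent[CHANNELS] = {0};
    int next = 0;                        /* round-robin pointer */

    /* "Transmit" 10 packets of varying size. */
    for (int pkt = 0; pkt < 10; pkt++) {
        int len = 200 + 100 * (pkt % 3);
        sent[next] += len;               /* enqueue on this slave */
        next = (next + 1) % CHANNELS;    /* alternate channels    */
    }
    for (int c = 0; c < CHANNELS; c++)
        printf("channel %d: %ld bytes\n", c, sent[c]);
    return 0;
}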
Eventually I'll be working on a port of Condor and writing a distributed shared memory (aka network virtual memory) system. John Scales is using a Linux cluster at the CSM Center for Wave Phenomena. Top address{35}: Author: becker@cesdis.gsfc.nasa.gov MD5{32}: 69b9253b53d6065d8f139127f76e1c2c File-Size{4}: 5124 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{27}: Beowulf Project Description } @FILE { http://cesdis.gsfc.nasa.gov/linux/linux.html Update-Time{9}: 827948897 url-references{1564}: /cesdis.html /cesdis.html http://hypatia.gsfc.nasa.gov/NASA_homepage.html http://hypatia.gsfc.nasa.gov/GSFC_homepage.html /linux/beowulf/beowulf.html /linux/misc/index.html drivers/vortex.html drivers/tulip.html /linux/beowulf/icpp95.html /linux/beowulf/hpdc95.html /linux/talks/hpdc95-vg.html http://www.cs.virginia.edu/~hpdc95/program.html /linux/drivers/3c509.html /linux/talks/DECUS.html /linux/misc/NWay.html mailto:bill@lan.nsc.com misc/N-way.ps misc/N-way-1.gif misc/N-way-2.gif misc/N-way-3.gif misc/N-way-4.gif misc/N-way-5.gif misc/N-way-6.gif misc/N-way-7.gif drivers/tulip.c /linux/drivers/tulip.html drivers/tulip.c misc/boca-failure.html /linux/misc/100mbs.html drivers/3c59x.c drivers/vortex.html drivers/vortex.html drivers/tulip.c /linux/drivers/tulip.html /linux/drivers/tulip.html /linux/setup/3c5x9setup.c http:/~garman/linux/linux-faq/contents.html /pub/linux/linux.html /pub/people/becker/beowulf.html /pub/linux/drivers/index.html /pub/linux/pcmcia/pcmcia.html /pub/linux/misc/hardware.html /pub/linux/diag/diagnostic.html /pub/linux/html/Ethernet-HOWTO.html /pub/linux/misc/misc.html file://lrcftp.epfl.ch/pub/linux/atm/dist/atm-0.2.tar.gz http://www.ssc.com http://www.cs.uiowa.edu/~phenning/Linux http://sunsite.unc.edu/mdw/linux.html http://www.resus.univ-mrs.fr/Fr/CS/Linux/ http://wwwhost.ots.utexas.edu/ethernet/ethernet-home.html drivers/e2100.c diag/diagnostic.html /linux/misc/multicard.html /linux/drivers/hp-plus.c /linux/misc/hp+.html /pub/linux/diag/hp+.c http://www.hal.com/~markg/WebTechs/ #top /pub/people/becker/whoiam.html title{15}: Linux at CESDIS keywords{699}: adaptor and atm author auto becker beowulf bill bug bunch cabletron center cesdis chip cluster com compendium dec description developed diagnostic diagnotic documentation donald driver drivers errata ethercard ethercards etherlink ethernet family faqs fixes flight for format francaise goddard here howto hpdc html http iii information into journal lan links linux list mbps mbs media mini mrs multiple nasa nee negotiation netcard network networking notes nway other our page pages paper parlez pci pcmcia postscript product program programs project report resus scheme selection setup slides space stuff talk technologies textual the top tulip type univ using version vortex vous with working www images{121}: /pub/linux/misc/gull.gif /pub/linux/misc/gull.gif /pub/linux/misc/gull.gif /pub/linux/misc/gull.gif /icons/valid_html.gif headings{69}: Linux at CESDIS Hot news: Warm news: Good local pointers: Cold news: body{5347}: This page contains links to Linux information available at CESDIS . CESDIS is located at the NASA Goddard Space Flight Center in Greenbelt MD. If you are here looking for Beowulf-specific information, go to the Beowulf project description . You might also want to look at the CESDIS Networking Technologies pages . 
For your viewing pleasure this page is background- and blink-free! September 25 1995: There is an updated Vortex Page for the 3Com 3c590/3c592/3c595/3c597 "Vortex" PCI and "Demon" EISA Ethernet and Fast Ethernet Adapters. August 28 1995: There is an updated Tulip Page for the DEC 21040/21140 10/100Mbps PCI chips. Aug 1995: Our Beowulf paper presented at ICPP. Aug 2, 1995: Our Beowulf paper and slides presented at HPDC-95 . May 8, 1995: An updated 3Com 3c509 errata list and driver . May 8, 1995: The slides for our Linux Beowulf Cluster talk at DECUS. April 27, 1995: I've translated a description of the Auto-Negotiation (nee NWay) media type auto-selection scheme , provided by Bill Bunch of National Semiconductor to HTML. There are also a Postscript version and GIF copies of the same document's pages: Page 1 , Page 2 , Page 3 , Page 4 , Page 5 , Page 6 , Page 7 . April 20, 1995: I've updated the Tulip driver for the DEC 21140 10/100Mbps PCI chip , and the 100Mbps mode actually works! It's available for alpha test only from here . The DEC 21140 chip is used on the SMC EtherPower10/100 card, as well as 10/100 cards from several other vendors. Other improvements are automatic media selection for the 21040 and support for the Znyx 315 4-port etherarray. April 10, 1995: I've written a report that concludes the failure of the Boca BEN1PI is Boca's design flaw, not a problem with the AMD PCnet/PCI chip or my Linux device driver. April 9, 1995: I've made available a report on 100Mbs network adapters. March 14, 1995: The driver for the 3Com 100Mbs "Vortex" PCI ethercards is now working! Sorry, it won't be integrated with the kernel source tree until the 1.3.x development kernels are released. Installation instructions are with the driver description . This driver works with both the 10Mbs 3c590 and 10/100Mbs 3c595 boards. The driver for the DEC 21040 "Tulip" PCI ethercard is now working! It's available for alpha test in the Linux 1.2.0 kernel. The DEC 21040 chip is used on the SMC EtherPower card, as well as cards from many other vendors. This driver may also work with 21140-based 100Mbs cards, but I don't have the hardware to test it with. A known limitation of the SMC/Tulip driver is that it only supports the 10baseT transceiver. I've written the code to autoselect between the 10base2 and 10baseT transceivers. That update will be in an upcoming kernel. A workaround for the burst transfer errors that occur with the Intel Saturn chipset will also be added with that patch. Both problems and patches are described here . The SMC EtherEZ support is in the Linux 1.1.84+ kernels. The EtherEZ is essentially a SMC Ultra with Plug-n-Play support, so the changes are only about a dozen lines and the driver should be very reliable. Thanks to Duke Kamstra of SMC for providing both the PCI EtherPower and ISA EtherEZ development cards and databooks. The support for the ethernet side of the AMD PCnet+SCSI/PCI 79c974 PCI chip should be in the 1.1.84 kernel. This device is used in the Compaq Deskpro/XL series. Patches and testing (no, sigh, the '974 wasn't the same as the '970) were provided by Mark Stockton of Compaq. The support for the AMD 79c970 PCnet/PCI chip is in development kernels 1.1.69 and later. Please use this in preference to 'pci-lance.c' and my earlier PCI patches. I now have a user-friendly version of my 3Com EtherLink III family (3c509, 3c529, 3c579, and 3c589) setup program . I primarily use this to switch media types on my 3c589.
Jason Garman, a NASA student intern working here at CESDIS, has been busy converting the Linux FAQs into HTML format. Linux at CESDIS (this document). Beowulf project description. Working Linux netcard drivers. PCMCIA information and drivers. Product notes and bug fixes. Linux ethercard diagnostic programs. The Ethernet HOWTO. Paul Gortmaker has taken over as the sole editor and is doing an excellent job. Other CESDIS-developed Linux stuff. Good pointers to remote sites: Werner Almesberger's ATM drivers for Linux. The Linux Journal. Linux Links, maintained by Paul Henning at U. Iowa. The Linux Documentation Project, maintained by Matt Welsh. Do you speak French? http://www.resus.univ-mrs.fr/Fr/CS/Linux/ An excellent compendium of Ethernet information from Charles Spurgeon at U. Texas. I've updated the driver for the Cabletron E21** network adaptor. It now has working media selection and defers grabbing the IRQ line until after the device is opened. I've updated a few of the ethercard diagnostic and setup programs, but there's nothing very exciting or new. I've written a mini-HOWTO on using multiple ethercards with Linux. The latest driver, one for the HP PCLAN+ EtherTwist cards, is in kernels after 1.1.27. The 1.0.* version is the HP PC-LAN+ (27247 and 27252A) driver, and it even comes with a textual description and a small diagnostic program: the HP PC-LAN+ (27247 and 27252A) diagnostic. address{57}: Top Author: Donald Becker, becker@cesdis.gsfc.nasa.gov. MD5{32}: f09257778c669efef1373fad0157e859 File-Size{4}: 8651 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{15}: Linux at CESDIS } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw/hsct.html Update-Time{9}: 827948648 url-references{109}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw.html mailto:lpicha@cesdis.gsfc.nasa.gov title{70}: Surface Geometry Definition for a Complete High-Speed Civil Transport keywords{46}: contents curator larry picha return table the images{19}: graphics/return.gif headings{102}: Surface Geometry Definition for a Complete High-Speed Civil Transport Return to the Table of Contents body{1979}: Objective: Develop, for a complete High-Speed Civil Transport (HSCT) class vehicle, an automated surface geometry definition that is suitable for nonlinear CFD computations. Approach: The process starts with a familiar basic geometry description (the "wave drag deck") employed in preliminary design. Semi-analytic methods are used to resolve the surface-surface intersections. Accomplishment: The surface geometry definition tools have previously been applied to a variety of supersonic transport configurations consisting of just a wing and a fuselage. They have now been extended to include a horizontal tail, a vertical tail, a canard, a pylon and a nacelle. The Figure illustrates the complexity of the configuration which can now be handled. The surface geometry definition is now available in PLOT3D and Hess formats. Procedures for extracting the geometry in LaWGS and NURBS formats, which were demonstrated earlier for a wing-fuselage configuration, will be extended to the more general configuration. These procedures can run comfortably on workstations.
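Of the output formats named above, PLOT3D is simple enough to sketch in full: in its common single-block ASCII form, a grid file is just the dimensions followed by all x, then all y, then all z coordinates, with the I index varying fastest. Below is a minimal writer; the coord() function is a hypothetical stand-in for a real geometry module, and a surface is represented by setting NK to 1.

/* plot3d-sketch.c: minimal single-block ASCII PLOT3D grid writer. */
#include <stdio.h>

#define NI 5
#define NJ 3
#define NK 1   /* a surface: one point in the K direction */

/* Hypothetical coordinate function standing in for a geometry module. */
static double coord(int axis, int i, int j, int k)
{
    switch (axis) {
    case 0:  return (double)i;              /* x */
    case 1:  return (double)j;              /* y */
    default: return 0.1 * i * j + k;        /* z: an arbitrary smooth surface */
    }
}

int main(void)
{
    FILE *fp = fopen("surface.xyz", "w");
    int axis, i, j, k;

    if (!fp)
        return 1;
    /* Header: grid dimensions; then all x, all y, all z values,
       with the I index varying fastest. */
    fprintf(fp, "%d %d %d\n", NI, NJ, NK);
    for (axis = 0; axis < 3; axis++)
        for (k = 0; k < NK; k++)
            for (j = 0; j < NJ; j++)
                for (i = 0; i < NI; i++)
                    fprintf(fp, "%g\n", coord(axis, i, j, k));
    fclose(fp);
    return 0;
}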
Significance: The design and optimization of an HSCT using nonlinear aerodynamics codes (Euler or Navier-Stokes) requires the ability to generate smooth surface definitions and volume grids automatically as the design variables are changed from a baseline configuration. The automated surface geometry tool described here is a prerequisite for imbedding an automated geometry/grid module in a design and optimization system for an HSCT. Status/Plans: Current work on the surface geometry definition module is focused on further testing and documentation. Related efforts are underway to link this module with automated procedures for changing the geometry as design variables are changed and for generating a multiblock CFD grid. These modules are targeted for incorporation into the FIDO system. Points of Contact: Raymond L. Barger and Mary S. Adams NASA Langley Research Center curator: Larry Picha MD5{32}: 6bb0eb34319faf468ad676b6c45a4d1f File-Size{4}: 2505 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{70}: Surface Geometry Definition for a Complete High-Speed Civil Transport } @FILE { http://cesdis.gsfc.nasa.gov/admin/inf.eng/wave.tar Update-Time{9}: 820867017 Embed<99>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<99>-Gatherer-Version{3}: 1.0 Embed<99>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<99>-title{20}: Subject of this Page Embed<99>-MD5{32}: 84937cd26650f148d5fdb68c43195b46 Embed<99>-File-Size{3}: 656 Embed<99>-Type{4}: HTML Embed<99>-Keywords{18}: page subject this Embed<99>-Description{20}: Subject of this Page Embed<99>-Nested-Filename{31}: wave.tutorial.fin/template.html Embed<97>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<97>-Gatherer-Version{3}: 1.0 Embed<97>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<97>-MD5{32}: 28b77023ac6eef208770ba884203149c Embed<97>-File-Size{6}: 149358 Embed<97>-Type{7}: Unknown Embed<97>-Nested-Filename{29}: wave.tutorial.fin/CESDIS.logo Embed<95>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<95>-Gatherer-Version{3}: 1.0 Embed<95>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<95>-title{20}: Policy Wave Tutorial Embed<95>-keywords{9}: tutorial Embed<95>-images{14}: wave.small.gif Embed<95>-headings{16}: Can we ride the Embed<95>-body{88}: TUTORIAL: The Policy Wave is Coming: Authorship in a U.S. Government Agency Context Embed<95>-MD5{32}: 3fc7ed6c12104d599d53fc63954a13f3 Embed<95>-File-Size{4}: 1586 Embed<95>-Type{4}: HTML Embed<95>-Description{20}: Policy Wave Tutorial Embed<95>-Nested-Filename{36}: wave.tutorial.fin/trade.secrecy.html Embed<93>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<93>-Gatherer-Version{3}: 1.0 Embed<93>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<93>-title{20}: Policy Wave Tutorial Embed<93>-keywords{9}: tutorial Embed<93>-images{14}: wave.small.gif Embed<93>-headings{16}: Can we ride the Embed<93>-body{88}: TUTORIAL: The Policy Wave is Coming: Authorship in a U.S. 
Government Agency Context Embed<93>-MD5{32}: 0a8a073c63142788b126d57aed571713 Embed<93>-File-Size{4}: 2505 Embed<93>-Type{4}: HTML Embed<93>-Description{20}: Policy Wave Tutorial Embed<93>-Nested-Filename{26}: wave.tutorial.fin/tap.html Embed<91>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<91>-Gatherer-Version{3}: 1.0 Embed<91>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<91>-title{20}: Policy Wave Tutorial Embed<91>-keywords{9}: tutorial Embed<91>-images{14}: wave.small.gif Embed<91>-headings{16}: Can we ride the Embed<91>-body{88}: TUTORIAL: The Policy Wave is Coming: Authorship in a U.S. Government Agency Context Embed<91>-MD5{32}: 0fdfa921b26f33e43d0066dcbd763c65 Embed<91>-File-Size{4}: 2424 Embed<91>-Type{4}: HTML Embed<91>-Description{20}: Policy Wave Tutorial Embed<91>-Nested-Filename{38}: wave.tutorial.fin/lib.bill.rights.html Embed<89>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<89>-Gatherer-Version{3}: 1.0 Embed<89>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<89>-title{20}: Policy Wave Tutorial Embed<89>-keywords{9}: tutorial Embed<89>-images{14}: wave.small.gif Embed<89>-headings{16}: Can we ride the Embed<89>-body{88}: TUTORIAL: The Policy Wave is Coming: Authorship in a U.S. Government Agency Context Embed<89>-MD5{32}: 3729ea3ef5ab8cd3e277566abfcc25b5 Embed<89>-File-Size{4}: 2540 Embed<89>-Type{4}: HTML Embed<89>-Description{20}: Policy Wave Tutorial Embed<89>-Nested-Filename{42}: wave.tutorial.fin/first.sale.doctrine.html Embed<87>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<87>-Gatherer-Version{3}: 1.0 Embed<87>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<87>-title{20}: Policy Wave Tutorial Embed<87>-keywords{9}: tutorial Embed<87>-images{14}: wave.small.gif Embed<87>-headings{16}: Can we ride the Embed<87>-body{88}: TUTORIAL: The Policy Wave is Coming: Authorship in a U.S. Government Agency Context Embed<87>-MD5{32}: 605dfe06cd1145a1ac036b7c5e7ac455 Embed<87>-File-Size{4}: 1982 Embed<87>-Type{4}: HTML Embed<87>-Description{20}: Policy Wave Tutorial Embed<87>-Nested-Filename{31}: wave.tutorial.fin/fair.use.html Embed<85>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<85>-Gatherer-Version{3}: 1.0 Embed<85>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<85>-title{20}: Policy Wave Tutorial Embed<85>-keywords{9}: tutorial Embed<85>-images{27}: wave.small.gif wave.bar.gif Embed<85>-headings{45}: Can we ride the The Audio Home Recording Act Embed<85>-body{92}: TUTORIAL: The Policy Wave is Coming: Authorship in a U.S. 
Government Agency Context Embed<85>-MD5{32}: 8976bf04a1e7a3df1617b1d079da5ea5 Embed<85>-File-Size{4}: 1783 Embed<85>-Type{4}: HTML Embed<85>-Description{20}: Policy Wave Tutorial Embed<85>-Nested-Filename{41}: wave.tutorial.fin/audio.home.rec.act.html Embed<83>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<83>-Gatherer-Version{3}: 1.0 Embed<83>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<83>-title{31}: Americans with Disabilities Act Embed<83>-MD5{32}: 6c496ce7d2dad3a9d3d204a7b1564124 Embed<83>-File-Size{4}: 1861 Embed<83>-Type{4}: HTML Embed<83>-Keywords{32}: act americans disabilities with Embed<83>-Description{31}: Americans with Disabilities Act Embed<83>-Nested-Filename{26}: wave.tutorial.fin/ada.html Embed<81>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<81>-Gatherer-Version{3}: 1.0 Embed<81>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<81>-title{11}: Regulations Embed<81>-keywords{9}: tutorial Embed<81>-images{14}: wave.small.gif Embed<81>-headings{16}: Can we ride the Embed<81>-body{88}: TUTORIAL: The Policy Wave is Coming: Authorship in a U.S. Government Agency Context Embed<81>-MD5{32}: 7134b788ebeb296bb36f06c24c2446dd Embed<81>-File-Size{4}: 2473 Embed<81>-Type{4}: HTML Embed<81>-Description{11}: Regulations Embed<81>-Nested-Filename{34}: wave.tutorial.fin/regulations.html Embed<79>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<79>-Gatherer-Version{3}: 1.0 Embed<79>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<79>-title{11}: Regulations Embed<79>-MD5{32}: d559406c191f358fd3e1ac1df9f25a26 Embed<79>-File-Size{4}: 1155 Embed<79>-Type{4}: HTML Embed<79>-Keywords{12}: regulations Embed<79>-Description{11}: Regulations Embed<79>-Nested-Filename{42}: wave.tutorial.fin/areas.of.discussion.html Embed<77>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<77>-Gatherer-Version{3}: 1.0 Embed<77>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<77>-title{15}: An Introduction Embed<77>-MD5{32}: 475b83183b8912a4a511de5757694a5e Embed<77>-File-Size{4}: 4245 Embed<77>-Type{4}: HTML Embed<77>-Keywords{13}: introduction Embed<77>-Description{15}: An Introduction Embed<77>-Nested-Filename{26}: wave.tutorial.fin/www.html Embed<75>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<75>-Gatherer-Version{3}: 1.0 Embed<75>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<75>-title{25}: Responsiblity and the Web Embed<75>-MD5{32}: 406b5596af21aeac67c67a55df94dc5e Embed<75>-File-Size{4}: 5316 Embed<75>-Type{4}: HTML Embed<75>-Keywords{26}: and responsiblity the web Embed<75>-Description{25}: Responsiblity and the Web Embed<75>-Nested-Filename{34}: wave.tutorial.fin/responsible.html Embed<73>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<73>-Gatherer-Version{3}: 1.0 Embed<73>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<73>-title{15}: An Introduction Embed<73>-MD5{32}: 3c40c8b678550d7186ca985952260fd1 Embed<73>-File-Size{4}: 2856 Embed<73>-Type{4}: HTML Embed<73>-Keywords{13}: introduction Embed<73>-Description{15}: An Introduction Embed<73>-Nested-Filename{28}: wave.tutorial.fin/intro.html Embed<71>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<71>-Gatherer-Version{3}: 1.0 Embed<71>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<71>-title{19}: Printing on the Web Embed<71>-MD5{32}: 67a2ebc1aeb0bcf383ca133830e5c017 Embed<71>-File-Size{4}: 1724 Embed<71>-Type{4}: HTML Embed<71>-Keywords{17}: printing the web 
Embed<71>-Description{19}: Printing on the Web Embed<71>-Nested-Filename{32}: wave.tutorial.fin/economics.html
Embed<69>-title{17}: Discussion Points Embed<69>-MD5{32}: 6622f17b907de6c0f66fb572aea68619 Embed<69>-File-Size{4}: 1094 Embed<69>-Type{4}: HTML Embed<69>-Keywords{18}: discussion points Embed<69>-Description{17}: Discussion Points Embed<69>-Nested-Filename{33}: wave.tutorial.fin/conclusion.html
Embed<67>-title{20}: Policy Wave Tutorial Embed<67>-MD5{32}: c5527f7bf922d0e1e71799a8af88b26d Embed<67>-File-Size{4}: 2617 Embed<67>-Type{4}: HTML Embed<67>-Keywords{21}: policy tutorial wave Embed<67>-Description{20}: Policy Wave Tutorial Embed<67>-Nested-Filename{27}: wave.tutorial.fin/wave.html
Embed<65>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<65>-File-Size{1}: 0 Embed<65>-Type{4}: HTML Embed<65>-Nested-Filename{43}: wave.tutorial.fin/.resource/trademarks.html
Embed<63>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<63>-File-Size{1}: 0 Embed<63>-Type{4}: HTML Embed<63>-Nested-Filename{50}: wave.tutorial.fin/.resource/the.copyright.act.html
Embed<61>-head{1285}: [binary data omitted] Embed<61>-MD5{32}: bb9cfd0187af38335a4fa0a775075409 Embed<61>-File-Size{3}: 332 Embed<61>-Type{4}: HTML Embed<61>-Nested-Filename{40}: wave.tutorial.fin/.resource/patents.html
Embed<59>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<59>-File-Size{1}: 0 Embed<59>-Type{4}: HTML Embed<59>-Nested-Filename{37}: wave.tutorial.fin/.resource/foia.html
Embed<57>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<57>-File-Size{1}: 0 Embed<57>-Type{4}: HTML Embed<57>-Nested-Filename{48}: wave.tutorial.fin/.resource/first.amendment.html
Embed<55>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<55>-File-Size{1}: 0 Embed<55>-Type{4}: HTML Embed<55>-Nested-Filename{49}: wave.tutorial.fin/.resource/comm.decency.act.html
Embed<53>-head{1282}: [binary data omitted] Embed<53>-MD5{32}: eb5ff6b4661c9a7061a8e204e167f9f8 Embed<53>-File-Size{3}: 332 Embed<53>-Type{4}: HTML Embed<53>-Nested-Filename{36}: wave.tutorial.fin/.resource/ada.html
Embed<51>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<51>-File-Size{1}: 0 Embed<51>-Type{4}: HTML Embed<51>-Nested-Filename{57}: wave.tutorial.fin/.resource/specific.regulations.toc.html
Embed<49>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<49>-File-Size{1}: 0 Embed<49>-Type{4}: HTML Embed<49>-Nested-Filename{40}: wave.tutorial.fin/.resource/fed.reg.html
Embed<47>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<47>-File-Size{1}: 0 Embed<47>-Type{4}: HTML Embed<47>-Nested-Filename{36}: wave.tutorial.fin/.resource/www.html
Embed<45>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<45>-File-Size{1}: 0 Embed<45>-Type{4}: HTML Embed<45>-Nested-Filename{44}: wave.tutorial.fin/.resource/responsible.html
Embed<43>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<43>-File-Size{1}: 0 Embed<43>-Type{4}: HTML Embed<43>-Nested-Filename{38}: wave.tutorial.fin/.resource/intro.html
Embed<41>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<41>-File-Size{1}: 0 Embed<41>-Type{4}: HTML Embed<41>-Nested-Filename{42}: wave.tutorial.fin/.resource/economics.html
Embed<39>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<39>-File-Size{1}: 0 Embed<39>-Type{4}: HTML Embed<39>-Nested-Filename{43}: wave.tutorial.fin/.resource/conclusion.html
Embed<37>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<37>-File-Size{1}: 0 Embed<37>-Type{4}: HTML Embed<37>-Nested-Filename{37}: wave.tutorial.fin/.resource/wave.html
Embed<35>-head{1102}: [binary data omitted] Embed<35>-MD5{32}: f5ad9b5444a9c35f1955ecd60a4b3396 Embed<35>-File-Size{3}: 300 Embed<35>-Type{4}: HTML Embed<35>-Nested-Filename{43}: wave.tutorial.fin/.finderinfo/template.html
Embed<33>-MD5{32}: 7b30ae2e21dbc1e1880110796ecec4c5 Embed<33>-File-Size{3}: 300 Embed<33>-Type{7}: Unknown Embed<33>-Nested-Filename{41}: wave.tutorial.fin/.finderinfo/CESDIS.logo
Embed<31>-head{1102}: [binary data omitted] Embed<31>-MD5{32}: fb14676099faf4a3b1fb4ba75cd416c7 Embed<31>-File-Size{3}: 300 Embed<31>-Type{4}: HTML Embed<31>-Nested-Filename{48}: wave.tutorial.fin/.finderinfo/trade.secrecy.html
Embed<29>-head{1126}: [binary data omitted] Embed<29>-MD5{32}: 31dcb9c1a2ce65c7ce6e6de1ffea048e Embed<29>-File-Size{3}: 300 Embed<29>-Type{4}: HTML Embed<29>-Nested-Filename{38}: wave.tutorial.fin/.finderinfo/tap.html
Embed<27>-head{1090}: [binary data omitted] Embed<27>-MD5{32}: 7ae24fa529b8117ad3ba04b467a25b70 Embed<27>-File-Size{3}: 300 Embed<27>-Type{4}: HTML Embed<27>-Nested-Filename{50}: wave.tutorial.fin/.finderinfo/lib.bill.rights.html
Embed<25>-head{1075}: [binary data omitted] Embed<25>-MD5{32}: 440f6e55e8af187301f95418ca50dd00 Embed<25>-File-Size{3}: 300 Embed<25>-Type{4}: HTML Embed<25>-Nested-Filename{54}: wave.tutorial.fin/.finderinfo/first.sale.doctrine.html
Embed<23>-head{1117}: [binary data omitted] Embed<23>-MD5{32}: ce55ef008e469f200c9ebc9a896ed182 Embed<23>-File-Size{3}: 300 Embed<23>-Type{4}: HTML Embed<23>-Nested-Filename{43}: wave.tutorial.fin/.finderinfo/fair.use.html
Embed<21>-head{1084}: [binary data omitted] Embed<21>-MD5{32}: 8b966992a2a0fc491448f308bba19847 Embed<21>-File-Size{3}: 300 Embed<21>-Type{4}: HTML Embed<21>-Nested-Filename{53}: wave.tutorial.fin/.finderinfo/audio.home.rec.act.html
Embed<19>-head{1081}: [binary data omitted] Embed<19>-MD5{32}: 3f9bebdbb689b0d71caf35ade91bf7cf Embed<19>-File-Size{3}: 300 Embed<19>-Type{4}: HTML Embed<19>-Nested-Filename{53}: wave.tutorial.fin/.finderinfo/topics.of.interest.html
Embed<17>-head{1099}: [binary data omitted] Embed<17>-MD5{32}: 6fcf6cbbda15894b9953473f90d65931 Embed<17>-File-Size{3}: 300 Embed<17>-Type{4}: HTML Embed<17>-Nested-Filename{46}: wave.tutorial.fin/.finderinfo/regulations.html
Embed<15>-head{1081}: [binary data omitted] Embed<15>-MD5{32}: 025dcf3363eae0fbecd2f37bffc3070d Embed<15>-File-Size{3}: 300 Embed<15>-Type{4}: HTML Embed<15>-Nested-Filename{54}: wave.tutorial.fin/.finderinfo/areas.of.discussion.html
Embed<13>-head{1135}: [binary data omitted] Embed<13>-MD5{32}: 55716c81bd3d05310601cbe2ae7bf616 Embed<13>-File-Size{3}: 300 Embed<13>-Type{4}: HTML Embed<13>-Nested-Filename{38}: wave.tutorial.fin/.finderinfo/www.html
Embed<11>-head{1105}: [binary data omitted] Embed<11>-MD5{32}: 75f48c79a6351551bf64efc0d1ee0451 Embed<11>-File-Size{3}: 300 Embed<11>-Type{4}: HTML Embed<11>-Nested-Filename{46}: wave.tutorial.fin/.finderinfo/responsible.html
Embed<9>-head{1114}: [binary data omitted] Embed<9>-MD5{32}: 070fe2d469267ecd055ded8d1964d7b2 Embed<9>-File-Size{3}: 300 Embed<9>-Type{4}: HTML Embed<9>-Nested-Filename{40}: wave.tutorial.fin/.finderinfo/intro.html
Embed<7>-head{1108}: [binary data omitted] Embed<7>-MD5{32}: 730e600f3db132ed886bc6a31dfb0646 Embed<7>-File-Size{3}: 300 Embed<7>-Type{4}: HTML Embed<7>-Nested-Filename{44}: wave.tutorial.fin/.finderinfo/economics.html
Embed<5>-head{1102}: [binary data omitted] Embed<5>-MD5{32}: 3328aab5d1e609bad50eb54323f97ee7 Embed<5>-File-Size{3}: 300 Embed<5>-Type{4}: HTML Embed<5>-Nested-Filename{45}: wave.tutorial.fin/.finderinfo/conclusion.html
Embed<3>-head{1129}: [binary data omitted] Embed<3>-MD5{32}: 2ac842f7e2cdacb2030930d77cb944d1 Embed<3>-File-Size{3}: 300 Embed<3>-Type{4}: HTML Embed<3>-Nested-Filename{39}: wave.tutorial.fin/.finderinfo/wave.html
Embed<1>-MD5{32}: 05af163cbe8652d469ad067f1bfd32a0 Embed<1>-File-Size{4}: 1536 Embed<1>-Type{9}: Directory Embed<1>-Nested-Filename{17}: wave.tutorial.fin
MD5{32}: 0e0fcc05ada95b987beaa05cd0a706cc File-Size{6}: 786432 Type{3}: Tar Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200
Embed<2>-Nested-Filename{29}: wave.tutorial.fin/.finderinfo Embed<2>-Type{9}: Directory Embed<2>-File-Size{4}: 1536 Embed<2>-MD5{32}: d906137eb9f33c41d136a244f0bac4c6
Embed<4>-Nested-Filename{47}: wave.tutorial.fin/.finderinfo/bibliography.html Embed<4>-Type{4}: HTML Embed<4>-File-Size{3}: 300 Embed<4>-MD5{32}: 849316b7db7b20d015c8981f7447e1f5 Embed<4>-head{1096}: [binary data omitted]
Embed<6>-Nested-Filename{42}: wave.tutorial.fin/.finderinfo/context.html Embed<6>-Type{4}: HTML Embed<6>-File-Size{3}: 300 Embed<6>-MD5{32}: eea2b57fc530e3890d27d634362f1b2c Embed<6>-head{1102}: [binary data omitted]
Embed<8>-Nested-Filename{45}: wave.tutorial.fin/.finderinfo/fed.issues.html Embed<8>-Type{4}: HTML Embed<8>-File-Size{3}: 300 Embed<8>-MD5{32}: c049c3f8db3fb8ed1c610ca41ebd13a2 Embed<8>-head{1105}: [binary data omitted]
Embed<10>-Nested-Filename{42}: wave.tutorial.fin/.finderinfo/purpose.html Embed<10>-Type{4}: HTML Embed<10>-File-Size{3}: 300 Embed<10>-MD5{32}: 7952c00eb33f36cfc519184160850b62 Embed<10>-head{1102}: [binary data omitted]
Embed<12>-Nested-Filename{44}: wave.tutorial.fin/.finderinfo/sensitive.html Embed<12>-Type{4}: HTML Embed<12>-File-Size{3}: 300 Embed<12>-MD5{32}: 87377e287d3ee1c1eade2568adb9b3af Embed<12>-head{1102}: [binary data omitted]
Embed<14>-Nested-Filename{50}: wave.tutorial.fin/.finderinfo/Picha.OConnell.FINAL Embed<14>-Type{7}: Unknown Embed<14>-File-Size{3}: 300 Embed<14>-MD5{32}: 4beaaa7422d8898aac9343c80c091305
Embed<16>-Nested-Filename{42}: wave.tutorial.fin/.finderinfo/fed.reg.html Embed<16>-Type{4}: HTML Embed<16>-File-Size{3}: 300 Embed<16>-MD5{32}: 45d434ab33aa8276a898059637ad48ca Embed<16>-head{1120}: [binary data omitted]
Embed<18>-Nested-Filename{59}: wave.tutorial.fin/.finderinfo/specific.regulations.toc.html Embed<18>-Type{4}: HTML Embed<18>-File-Size{3}: 300 Embed<18>-MD5{32}: a6d7efd05f530f252edb2e79c5da6925 Embed<18>-head{1060}: [binary data omitted]
Embed<20>-Nested-Filename{38}: wave.tutorial.fin/.finderinfo/ada.html Embed<20>-Type{4}: HTML Embed<20>-File-Size{3}: 300 Embed<20>-MD5{32}: 7e567f2234a59b5c3e30e93ea1b27033 Embed<20>-head{1126}: [binary data omitted]
Embed<22>-Nested-Filename{51}: wave.tutorial.fin/.finderinfo/comm.decency.act.html Embed<22>-Type{4}: HTML Embed<22>-File-Size{3}: 300 Embed<22>-MD5{32}: 4af8a7174e0ee56ecae27f7187ed05a2 Embed<22>-head{1084}: [binary data omitted]
Embed<24>-Nested-Filename{50}: wave.tutorial.fin/.finderinfo/first.amendment.html Embed<24>-Type{4}: HTML Embed<24>-File-Size{3}: 300 Embed<24>-MD5{32}: 507a13fcc635c73f4035325a551de9dc Embed<24>-head{1081}: [binary data omitted]
Embed<26>-Nested-Filename{39}: wave.tutorial.fin/.finderinfo/foia.html Embed<26>-Type{4}: HTML Embed<26>-File-Size{3}: 300 Embed<26>-MD5{32}: a6b63dec9103ab7b401944dae20c01b5 Embed<26>-head{1126}: [binary data omitted]
Embed<28>-Nested-Filename{42}: wave.tutorial.fin/.finderinfo/patents.html Embed<28>-Type{4}: HTML Embed<28>-File-Size{3}: 300 Embed<28>-MD5{32}: e09c86d911c71427163a607122016092 Embed<28>-head{1117}:
\001\006\002]\001\016\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\377\020\332\002\000\000\000\000\000\000\000\000\000\000\000\000\000patents.html\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\332\003/\203E\332/\205\263\272/\205\263\273\000 Embed<28>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<28>-Gatherer-Version{3}: 1.0 Embed<28>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<30>-Nested-Filename{52}: wave.tutorial.fin/.finderinfo/the.copyright.act.html Embed<30>-Type{4}: HTML Embed<30>-File-Size{3}: 300 Embed<30>-MD5{32}: e5dffb087a6113256e02c7a894919727 Embed<30>-head{1084}: \001\006\002\224\000\264\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\377\020\332\002\000\000\000\000\000\000\000\000\000\000\000\000\000the.copyright.act.html\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\332\003/\203E\346/\204S\212/\204S\213\000 Embed<30>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<30>-Gatherer-Version{3}: 1.0 Embed<30>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<32>-Nested-Filename{45}: wave.tutorial.fin/.finderinfo/trademarks.html Embed<32>-Type{4}: HTML Embed<32>-File-Size{3}: 300 Embed<32>-MD5{32}: 3c33058f0892d694c52a04a8913899e2 Embed<32>-head{1105}: 
\001\006\002\313\000\264\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\377\020\332\002\000\000\000\000\000\000\000\000\000\000\000\000\000trademarks.html\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\332\003/\203E\356/\205\264A/\205\264A\000 Embed<32>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<32>-Gatherer-Version{3}: 1.0 Embed<32>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<34>-Nested-Filename{45}: wave.tutorial.fin/.finderinfo/CESDIS.overview Embed<34>-Type{7}: Unknown Embed<34>-File-Size{3}: 300 Embed<34>-MD5{32}: 23c0ed053b1a48562a27ac3424f087fc Embed<34>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<34>-Gatherer-Version{3}: 1.0 Embed<34>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<36>-Nested-Filename{27}: wave.tutorial.fin/.resource Embed<36>-Type{9}: Directory Embed<36>-File-Size{4}: 1536 Embed<36>-MD5{32}: ae28b212e42d5c52ef7c0c7907576329 Embed<36>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<36>-Gatherer-Version{3}: 1.0 Embed<36>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<38>-Nested-Filename{45}: wave.tutorial.fin/.resource/bibliography.html Embed<38>-Type{4}: HTML Embed<38>-File-Size{1}: 0 Embed<38>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<38>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<38>-Gatherer-Version{3}: 1.0 Embed<38>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<40>-Nested-Filename{40}: wave.tutorial.fin/.resource/context.html Embed<40>-Type{4}: HTML Embed<40>-File-Size{1}: 0 Embed<40>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<40>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<40>-Gatherer-Version{3}: 1.0 Embed<40>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<42>-Nested-Filename{43}: wave.tutorial.fin/.resource/fed.issues.html Embed<42>-Type{4}: HTML Embed<42>-File-Size{1}: 0 Embed<42>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<42>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<42>-Gatherer-Version{3}: 1.0 Embed<42>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<44>-Nested-Filename{40}: wave.tutorial.fin/.resource/purpose.html Embed<44>-Type{4}: HTML Embed<44>-File-Size{1}: 0 Embed<44>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<44>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<44>-Gatherer-Version{3}: 1.0 Embed<44>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<46>-Nested-Filename{42}: wave.tutorial.fin/.resource/sensitive.html Embed<46>-Type{4}: HTML Embed<46>-File-Size{1}: 0 Embed<46>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<46>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov 
Embed<46>-Gatherer-Version{3}: 1.0 Embed<46>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<48>-Nested-Filename{52}: wave.tutorial.fin/.resource/areas.of.discussion.html Embed<48>-Type{4}: HTML Embed<48>-File-Size{1}: 0 Embed<48>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<48>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<48>-Gatherer-Version{3}: 1.0 Embed<48>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<50>-Nested-Filename{44}: wave.tutorial.fin/.resource/regulations.html Embed<50>-Type{4}: HTML Embed<50>-File-Size{1}: 0 Embed<50>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<50>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<50>-Gatherer-Version{3}: 1.0 Embed<50>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<52>-Nested-Filename{51}: wave.tutorial.fin/.resource/topics.of.interest.html Embed<52>-Type{4}: HTML Embed<52>-File-Size{1}: 0 Embed<52>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<52>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<52>-Gatherer-Version{3}: 1.0 Embed<52>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<54>-Nested-Filename{51}: wave.tutorial.fin/.resource/audio.home.rec.act.html Embed<54>-Type{4}: HTML Embed<54>-File-Size{1}: 0 Embed<54>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<54>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<54>-Gatherer-Version{3}: 1.0 Embed<54>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<56>-Nested-Filename{41}: wave.tutorial.fin/.resource/fair.use.html Embed<56>-Type{4}: HTML Embed<56>-File-Size{1}: 0 Embed<56>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<56>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<56>-Gatherer-Version{3}: 1.0 Embed<56>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<58>-Nested-Filename{52}: wave.tutorial.fin/.resource/first.sale.doctrine.html Embed<58>-Type{4}: HTML Embed<58>-File-Size{1}: 0 Embed<58>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<58>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<58>-Gatherer-Version{3}: 1.0 Embed<58>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<60>-Nested-Filename{48}: wave.tutorial.fin/.resource/lib.bill.rights.html Embed<60>-Type{4}: HTML Embed<60>-File-Size{1}: 0 Embed<60>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<60>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<60>-Gatherer-Version{3}: 1.0 Embed<60>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<62>-Nested-Filename{36}: wave.tutorial.fin/.resource/tap.html Embed<62>-Type{4}: HTML Embed<62>-File-Size{1}: 0 Embed<62>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<62>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<62>-Gatherer-Version{3}: 1.0 Embed<62>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<64>-Nested-Filename{46}: wave.tutorial.fin/.resource/trade.secrecy.html Embed<64>-Type{4}: HTML Embed<64>-File-Size{1}: 0 Embed<64>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<64>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<64>-Gatherer-Version{3}: 1.0 Embed<64>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<66>-Nested-Filename{41}: wave.tutorial.fin/.resource/template.html Embed<66>-Type{4}: HTML Embed<66>-File-Size{1}: 0 Embed<66>-MD5{32}: d41d8cd98f00b204e9800998ecf8427e Embed<66>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<66>-Gatherer-Version{3}: 1.0 Embed<66>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server 
Embed<68>-Nested-Filename{35}: wave.tutorial.fin/bibliography.html Embed<68>-Description{12}: Bibliography Embed<68>-Keywords{13}: bibliography Embed<68>-Type{4}: HTML Embed<68>-File-Size{3}: 638 Embed<68>-MD5{32}: 631da8fa30de1a3411c7143fb226fdf1 Embed<68>-title{12}: Bibliography Embed<68>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<68>-Gatherer-Version{3}: 1.0 Embed<68>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<70>-Nested-Filename{30}: wave.tutorial.fin/context.html Embed<70>-Description{22}: Authorship and Context Embed<70>-Keywords{23}: and authorship context Embed<70>-Type{4}: HTML Embed<70>-File-Size{4}: 3547 Embed<70>-MD5{32}: 75456d7abc8d7c9cd4e7a3c3d5a66718 Embed<70>-title{22}: Authorship and Context Embed<70>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<70>-Gatherer-Version{3}: 1.0 Embed<70>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<72>-Nested-Filename{33}: wave.tutorial.fin/fed.issues.html Embed<72>-Description{26}: Federal Information Issues Embed<72>-Keywords{27}: federal information issues Embed<72>-Type{4}: HTML Embed<72>-File-Size{3}: 627 Embed<72>-MD5{32}: de82bd874de570f314c9281811542937 Embed<72>-title{26}: Federal Information Issues Embed<72>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<72>-Gatherer-Version{3}: 1.0 Embed<72>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<74>-Nested-Filename{30}: wave.tutorial.fin/purpose.html Embed<74>-Description{19}: Purpose of Tutorial Embed<74>-Keywords{17}: purpose tutorial Embed<74>-Type{4}: HTML Embed<74>-File-Size{4}: 2541 Embed<74>-MD5{32}: 6e2b16e432e5f2d998463d889c7341b0 Embed<74>-title{19}: Purpose of Tutorial Embed<74>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<74>-Gatherer-Version{3}: 1.0 Embed<74>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<76>-Nested-Filename{32}: wave.tutorial.fin/sensitive.html Embed<76>-Description{15}: Sensitive Areas Embed<76>-Keywords{16}: areas sensitive Embed<76>-Type{4}: HTML Embed<76>-File-Size{3}: 854 Embed<76>-MD5{32}: cfc70c8c1f47397b3afacae08f5ee0ab Embed<76>-title{15}: Sensitive Areas Embed<76>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<76>-Gatherer-Version{3}: 1.0 Embed<76>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<78>-Nested-Filename{38}: wave.tutorial.fin/Picha.OConnell.FINAL Embed<78>-Type{7}: Unknown Embed<78>-File-Size{6}: 288256 Embed<78>-MD5{32}: 2afdbe2a2a9dbbbeba019419e79bb796 Embed<78>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<78>-Gatherer-Version{3}: 1.0 Embed<78>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<80>-Nested-Filename{30}: wave.tutorial.fin/fed.reg.html Embed<80>-Description{26}: Federal Regulations Issues Embed<80>-Type{4}: HTML Embed<80>-File-Size{4}: 1487 Embed<80>-MD5{32}: b24905513a0cb51ee2df669037a884dc Embed<80>-body{88}: TUTORIAL: The Policy Wave is Coming: Authorship in a U.S. 
Government Agency Context Embed<80>-headings{16}: Can we ride the Embed<80>-images{14}: wave.small.gif Embed<80>-keywords{9}: tutorial Embed<80>-title{26}: Federal Regulations Issues Embed<80>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<80>-Gatherer-Version{3}: 1.0 Embed<80>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<82>-Nested-Filename{47}: wave.tutorial.fin/specific.regulations.toc.html Embed<82>-Description{20}: Policy Wave Tutorial Embed<82>-Type{4}: HTML Embed<82>-File-Size{4}: 1564 Embed<82>-MD5{32}: 581403561914d4bf07e4d9bc5878f426 Embed<82>-body{88}: TUTORIAL: The Policy Wave is Coming: Authorship in a U.S. Government Agency Context Embed<82>-headings{16}: Can we ride the Embed<82>-images{14}: wave.small.gif Embed<82>-keywords{9}: tutorial Embed<82>-title{20}: Policy Wave Tutorial Embed<82>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<82>-Gatherer-Version{3}: 1.0 Embed<82>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<84>-Nested-Filename{41}: wave.tutorial.fin/topics.of.interest.html Embed<84>-Description{20}: Policy Wave Tutorial Embed<84>-Type{4}: HTML Embed<84>-File-Size{4}: 1464 Embed<84>-MD5{32}: bf9c05e2dca7e234b9b96c5716ae9b70 Embed<84>-body{88}: TUTORIAL: The Policy Wave is Coming: Authorship in a U.S. Government Agency Context Embed<84>-headings{16}: Can we ride the Embed<84>-images{14}: wave.small.gif Embed<84>-keywords{9}: tutorial Embed<84>-title{20}: Policy Wave Tutorial Embed<84>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<84>-Gatherer-Version{3}: 1.0 Embed<84>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<86>-Nested-Filename{39}: wave.tutorial.fin/comm.decency.act.html Embed<86>-Description{26}: Communications Decency Act Embed<86>-Type{4}: HTML Embed<86>-File-Size{4}: 9272 Embed<86>-MD5{32}: 64373b8b5e542e6ce642ee6c650916fe Embed<86>-body{88}: TUTORIAL: The Policy Wave is Coming: Authorship in a U.S. Government Agency Context Embed<86>-headings{16}: Can we ride the Embed<86>-images{14}: wave.small.gif Embed<86>-keywords{9}: tutorial Embed<86>-title{26}: Communications Decency Act Embed<86>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<86>-Gatherer-Version{3}: 1.0 Embed<86>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<88>-Nested-Filename{38}: wave.tutorial.fin/first.amendment.html Embed<88>-Description{20}: Policy Wave Tutorial Embed<88>-Type{4}: HTML Embed<88>-File-Size{4}: 2611 Embed<88>-MD5{32}: 35506e1a1a2b7c4939e0b834cb4707ab Embed<88>-body{88}: TUTORIAL: The Policy Wave is Coming: Authorship in a U.S. 
Government Agency Context Embed<88>-headings{16}: Can we ride the Embed<88>-images{14}: wave.small.gif Embed<88>-keywords{9}: tutorial Embed<88>-title{20}: Policy Wave Tutorial Embed<88>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<88>-Gatherer-Version{3}: 1.0 Embed<88>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<90>-Nested-Filename{27}: wave.tutorial.fin/foia.html Embed<90>-Description{15}: Sensitive Areas Embed<90>-Keywords{16}: areas sensitive Embed<90>-Type{4}: HTML Embed<90>-File-Size{5}: 10368 Embed<90>-MD5{32}: 1471b8fce028f37f23cdc1d8ae9c27ec Embed<90>-title{15}: Sensitive Areas Embed<90>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<90>-Gatherer-Version{3}: 1.0 Embed<90>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<92>-Nested-Filename{30}: wave.tutorial.fin/patents.html Embed<92>-Description{20}: Policy Wave Tutorial Embed<92>-Type{4}: HTML Embed<92>-File-Size{4}: 3462 Embed<92>-MD5{32}: 5c61ac1425b4d25faa176bdff0e94c8e Embed<92>-body{88}: TUTORIAL: The Policy Wave is Coming: Authorship in a U.S. Government Agency Context Embed<92>-headings{16}: Can we ride the Embed<92>-images{14}: wave.small.gif Embed<92>-keywords{9}: tutorial Embed<92>-title{20}: Policy Wave Tutorial Embed<92>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<92>-Gatherer-Version{3}: 1.0 Embed<92>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<94>-Nested-Filename{40}: wave.tutorial.fin/the.copyright.act.html Embed<94>-Description{20}: Policy Wave Tutorial Embed<94>-Type{4}: HTML Embed<94>-File-Size{4}: 2700 Embed<94>-MD5{32}: 7994498c7f3b53edf48a407e4ffe02b1 Embed<94>-body{88}: TUTORIAL: The Policy Wave is Coming: Authorship in a U.S. Government Agency Context Embed<94>-headings{16}: Can we ride the Embed<94>-images{14}: wave.small.gif Embed<94>-keywords{9}: tutorial Embed<94>-title{20}: Policy Wave Tutorial Embed<94>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<94>-Gatherer-Version{3}: 1.0 Embed<94>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<96>-Nested-Filename{33}: wave.tutorial.fin/trademarks.html Embed<96>-Description{20}: Policy Wave Tutorial Embed<96>-Type{4}: HTML Embed<96>-File-Size{4}: 1908 Embed<96>-MD5{32}: b9d22efa7567574e3ff94338c53bfc81 Embed<96>-body{88}: TUTORIAL: The Policy Wave is Coming: Authorship in a U.S. 
Government Agency Context Embed<96>-headings{16}: Can we ride the Embed<96>-images{14}: wave.small.gif Embed<96>-keywords{9}: tutorial Embed<96>-title{20}: Policy Wave Tutorial Embed<96>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<96>-Gatherer-Version{3}: 1.0 Embed<96>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Embed<98>-Nested-Filename{33}: wave.tutorial.fin/CESDIS.overview Embed<98>-Type{7}: Unknown Embed<98>-File-Size{6}: 129868 Embed<98>-MD5{32}: b693a72a5c72b13fdcbf76592db53a36 Embed<98>-Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Embed<98>-Gatherer-Version{3}: 1.0 Embed<98>-Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/accomp/94accomp/cas94.accomps/cas2.html Update-Time{9}: 827948645 title{49}: NASA HPCC Cooperative Research Announcement (CRA) keywords{48}: announcement cooperative cra hpcc nasa research images{55}: hpcc.graphics/hpcc.header.gif hpcc.graphics/ibm.sp2.gif headings{50}: NASA HPCC Cooperative Research Announcement (CRA) MD5{32}: 2e3da3b2585e99706f2900556d101b3e File-Size{4}: 4301 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{49}: NASA HPCC Cooperative Research Announcement (CRA) } @FILE { http://cesdis.gsfc.nasa.gov/linux/beowulf/beowulf1.html Update-Time{9}: 827948619 title{22}: Beowulf Linux Clusters keywords{23}: beowulf clusters linux headings{23}: Beowulf Linux Clusters body{3926}: Small scale parallel processing is becoming cost effective for single users with the advent of high performance commodity microprocessors, short haul high speed networks, and low cost GigaByte capacity disks. Migration of scientific computing demands from mainframes and supercomputers to workstations continues but is hindered by the lack of scalable system software to integrate available resources into useful systems. Movement from centralized computing to distributed heterogeneous systems of workstations, file servers, and high performance computers puts more capability in the hands of the scientific and engineering computing user but creates new bottlenecks in the total system that limit scalability and environment growth. File servers and interconnecting networks can become overloaded, constraining productivity and degrading overall system cost effectiveness. This is particularly true when large datasets are involved, as is the case for many Earth and space science applications, especially those involving visualization as described above. Addressing this challenge requires modifying the balance of resources and leveraging industry investment in cost effective technology. Central to this advance is the requirement for an operating system capable of managing parallel processor, communication, and mass storage resources within individual user systems. The Linux operating system is a new, robust, and fully open POSIX-compatible environment available at no cost and with full system source code. It is an ideal base on which to support system software research and advanced development, leveraging industrial and academic research and investment. It is targeted at the single most popular architecture family and has an installed base of over a hundred thousand sites. Linux has received little attention, however, as the logical substrate for parallel processing environments.
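The project description that follows names PVM as the standardized message-passing layer for EPL clusters. As a rough illustration only, and not code from the gathered pages, a minimal PVM 3 exchange between a master and one spawned task might look like the sketch below; the worker executable name "epl_worker" and the message tag are hypothetical.

    #include <stdio.h>
    #include "pvm3.h"            /* PVM 3 message-passing library */

    #define MSGTAG 1             /* hypothetical message tag */

    int main(void)
    {
        int mytid = pvm_mytid();     /* enroll this process in PVM */
        int child, n = 42, reply;

        /* Spawn one instance of a (hypothetical) worker executable. */
        if (pvm_spawn("epl_worker", NULL, PvmTaskDefault, "", 1, &child) != 1) {
            fprintf(stderr, "spawn failed\n");
            pvm_exit();
            return 1;
        }

        pvm_initsend(PvmDataDefault);  /* XDR encoding works across hosts */
        pvm_pkint(&n, 1, 1);           /* pack one int, stride 1 */
        pvm_send(child, MSGTAG);

        pvm_recv(child, MSGTAG);       /* block for the worker's reply */
        pvm_upkint(&reply, 1, 1);
        printf("worker replied %d\n", reply);

        pvm_exit();                    /* leave the virtual machine */
        return 0;
    }

The same pack/send and recv/unpack pattern extends to many tasks, which is what makes PVM a natural fit for the cluster-wide scheduling and paging extensions listed among the milestones below.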
The ESS Program's Parallel Linux project is motivated by the opportunity of bringing inexpensive parallel computing to the end-user environment and thereby greatly reducing the bottlenecks currently prevalent in networked environments for scientific computing. The ESS Parallel Linux (EPL) will enable industry-defined hardware and software standards to be integrated into resource ensembles that will move the operating point of data storage, access, manipulation, and visualization to the end-user terminal, resulting in lower cost, greater performance, and better global system behavior. EPL will extend Linux by achieving the following milestones: Integrate PVM for standardized message passing among multiple processors and with external networked resources. Enhance NFS to be lightweight and to incorporate multiple processors driving multiple disk drives for an order-of-magnitude increase in disk access bandwidth. Augment with distributed task scheduling for load balancing of concurrent processes. Extend demand paging mechanisms to include access to distributed memory modules for unified addressing. Incorporate advanced Condor control software to establish clusters of ESS Parallel Linux systems for scalable distributed computing systems. The EPL Project is intended to support such important ESS application requirements as visualization and satellite terminal data processing, which will run on top of it. The goal of EPL is to perform pathfinding research for the US parallel processing community by examining resource requirements within the context of this new operating point and by providing demonstration systems that embody techniques capable of achieving new levels of cost effective high performance computing. It is anticipated that key elements of the Linux extensions making up EPL will be distributed to system vendors for direct incorporation into their products (protected by the GNU Public License) or as a template for techniques that can be ported to their products. MD5{32}: a0d9847582237e8d214632fbea9bad02 File-Size{4}: 4497 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{22}: Beowulf Linux Clusters } @FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node46.html Update-Time{9}: 827948638 title{18}: Power Limitations keywords{45}: aug chance edt limitations power reschke tue images{507}: /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif img18.gif img19.gif img20.gif img21.gif img22.gif img23.gif img24.gif img25.gif img26.gif img27.gif img28.gif img29.gif /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif head{3287}: Power Limitations For an overall power consumption of 10 kW, the energy consumed per operation in a petaops system would have to be about 10^-11 J, i.e. 10 pJ per operation. A power consumption in the tens of kilowatts is a comfortable limit for an air-cooled machine. CMOS consumes energy E = C*Vdd^2 during one cycle of charging and discharging a circuit node of capacitance C between ground and a power supply Vdd.
The minimum practical value of Vdd for room-temperature operation appears to be about 1V [1, 2], since going below that does not improve energy per operation, and the MOS threshold voltage has to be greater than about 0.2V to control leakage currents, which are determined by the room-temperature value of kT/q. Assuming that an average operation involves reading and changing roughly 100 bits and that circuit nodes are precharged to Vdd/2, we require 100 * C * (Vdd/2)^2 <= 10^-11 J, which limits the capacitance to an average of 400fF. While adiabatic computing [3] attempts to reduce power, it still has switching overheads of the same general form, but with a diode voltage (around 0.7V) replacing the Vdd term. At present, adiabatic computing appears to consume roughly the same power as 1V CMOS, and hence it does not appear to offer a solution with present technology. A 400fF capacitance is barely greater than the bit-line pair capacitance of a typical modern DRAM [4], which is charged for every memory access. A cache doesn't help power consumption: even though a high hit rate reduces the number of accesses to main memory, the cache RAM charges similar capacitances on each access. A conventional memory architecture also wastes almost all bit reads, because only a small fraction of the bits read by the sense amplifiers on a given cycle are actually used. This is obviously unacceptable, since we already have a tight power budget when assuming that bits are used with perfect efficiency. Driving signals off-chip also comes with the expense of charging many pF per wire. We therefore claim that bits should usually be processed on-chip with the memory, and in fact very close to the sense amplifiers of the memory chip. It follows that the processors used must be compact and simple, or their sheer size will consume energy in routing signals. Radical changes to memory architecture, such as breaking up memory arrays and introducing extra row decoders to perform independent addressing, will reduce memory density and increase the power cost of communications. The internal memory words are too long for a uniprocessor and difficult for a MIMD multiprocessor to utilize, unless the application can benefit from processors autonomously executing their own instruction streams but performing loads and stores to the same address in local memory at the same time. The shared memory address stream suggests the use of a shared instruction stream as well. For these reasons, we use SIMD processing elements (PEs) in the memory. Attempting to speed up the cycle time is also wrong: the best energy per operation is obtained at low Vdd and hence relatively slow switching. Faster cycle times would also make power supply transients worse, and we already propose to activate many more sense amplifiers at once than is typically done.
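The figures above follow from two lines of arithmetic. Restated as a worked derivation, assuming the standard CMOS switching-energy relation used in the text and taking Vdd = 1V:

\[
E_{\mathrm{op}} = \frac{P}{R} = \frac{10^{4}\,\mathrm{W}}{10^{15}\,\mathrm{ops/s}} = 10^{-11}\,\mathrm{J} = 10\,\mathrm{pJ}
\]
\[
100 \cdot C \left(\frac{V_{dd}}{2}\right)^{2} \le 10^{-11}\,\mathrm{J}
\;\Rightarrow\;
C \le \frac{10^{-11}\,\mathrm{J}}{100 \cdot (0.5\,\mathrm{V})^{2}} = 4\times10^{-13}\,\mathrm{F} = 400\,\mathrm{fF}
\]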
Chance Reschke Tue Aug 15 08:59:12 EDT 1995 MD5{32}: 4b09935a7dd85da9cc42059b8733048b File-Size{4}: 5806 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{18}: Power Limitations } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/graphics/ Update-Time{9}: 827948836 url-references{316}: /hpccm/annual.reports/ann.rpt.95/ back.gif bar.gif cas.gif casback.gif convect-bar.gif earth.gif ess-small.gif ess.gif hpcc.button.gif hpccsmall.gif hq.button.gif iitf.button.gif mailbutton.gif meatball.gif moz.gif nasa.button.gif nco.button.gif people.button.gif qu_book.gif return.gif search.button.gif smaller.gif title{51}: Index of /hpccm/annual.reports/ann.rpt.95/graphics/ keywords{160}: back bar book button cas casback convect directory earth ess gif hpcc hpccsmall iitf mailbutton meatball moz nasa nco parent people return search small smaller images{406}: /icons/blank.xbm /icons/menu.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif headings{51}: Index of /hpccm/annual.reports/ann.rpt.95/graphics/ body{817}: Name Last modified Size Description Parent Directory 21-Dec-95 12:06 - back.gif 17-Oct-95 15:37 4K bar.gif 17-Oct-95 15:37 3K cas.gif 17-Oct-95 15:37 22K casback.gif 17-Oct-95 15:37 1K convect-bar.gif 17-Oct-95 15:37 4K earth.gif 17-Oct-95 15:37 3K ess-small.gif 17-Oct-95 15:37 13K ess.gif 17-Oct-95 15:37 3K hpcc.button.gif 17-Oct-95 15:37 2K hpccsmall.gif 17-Oct-95 15:37 2K hq.button.gif 17-Oct-95 15:37 3K iitf.button.gif 17-Oct-95 15:37 1K mailbutton.gif 17-Oct-95 15:37 1K meatball.gif 17-Oct-95 15:37 3K moz.gif 17-Oct-95 15:37 2K nasa.button.gif 17-Oct-95 15:37 3K nco.button.gif 17-Oct-95 15:37 1K people.button.gif 17-Oct-95 15:37 1K qu_book.gif 17-Oct-95 15:37 1K return.gif 17-Oct-95 15:37 1K search.button.gif 17-Oct-95 15:37 2K smaller.gif 17-Oct-95 15:37 23K MD5{32}: 7e3b6c1c60c4b54485f3a35e6c7a55e6 File-Size{4}: 3392 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{51}: Index of /hpccm/annual.reports/ann.rpt.95/graphics/ } @FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node71.html Update-Time{9}: 827948641 url-references{81}: node72.html#SECTION000111100000000000000 node73.html#SECTION000111200000000000000 title{65}: Enabling Data-intensive Applications through Petaflops Computing keywords{118}: applications assimilation aug chance computing data edt enabling intensive introduction petaflops reschke through tue images{193}: /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif head{293}: Enabling Data-intensive Applications through Petaflops Computing Reagan W.
Moore San Diego Supercomputer Center San Diego, CA Introduction Data Assimilation Chance Reschke Tue Aug 15 08:59:12 EDT 1995 MD5{32}: ee878e564463eb4c43f8772fb3bfc347 File-Size{4}: 1836 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{65}: Enabling Data-intensive Applications through Petaflops Computing } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/sys.sw/meshes.html Update-Time{9}: 827948656 url-references{128}: meshes2.html http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/sys.software.html mailto:lpicha@cesdis.gsfc.nasa.gov title{48}: A Parallel Partitioner For Finite Element Meshes keywords{57}: curator graphic larry page picha previous return see the images{42}: graphics/partsmall.gif graphics/return.gif headings{77}: A Parallel Partitioner For Finite Element Meshes Return to the PREVIOUS PAGE body{2126}: Objective: To partition an unstructured finite element mesh among the processors of a parallel supercomputer, concurrently, to set the stage for the finite element analysis. The domain partition achieves load balance, preserves proper data locality, and reduces communication during the solution of the problem. Approach: The element centroids are partitioned using a recursive inertial bisection algorithm, and nodes and elements are then migrated accordingly. In design and implementation, we pay particular attention to (a) using scalable algorithms so that the partitioner performs well on very large numbers of processors and (b) complete flexibility to handle any type of finite element, discontiguous sequential labelings, and unsorted existing mesh data (see graphic, 60K). Accomplishments: (a) implemented a library for partial global operations on a group of processors; (b) implemented a recursive inertial bisection algorithm to partition the centroids of finite elements; (c) implemented a scalable communication template to migrate the elements and nodes across the processors; (d) resolved complications due to multiple copies of nodes and due to variable lengths of node-lists and proc-lists. The library of (a) and the tool of (b) can serve as stand-alone software and have therefore been delivered to the JPL HPCC/ESS Software Repository. Significance: Finite element analysis is used in broad, diverse areas such as the analysis and design of automobile body frames, electromagnetic devices, airplane exteriors, etc. Ever larger and more complex geometries must be handled on a parallel supercomputer. The parallel mesh partitioner sets up the environment necessary for the problem to be solved there. Status/Plans: The partitioner is written, debugged, and fully tested on a variety of finite element meshes. The partitioner exhibits good (logarithmic) scaling behavior, i.e., partitioning a problem 8 times larger on 8 times as many processors takes only twice as long.
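The recursive bisection idea above is easy to sketch. The following is illustrative only, not the JPL code: it recursively splits element centroids at the median along the axis of greatest extent, a common simplification of true inertial bisection, which would split along the principal (inertial) axis of the centroid distribution.

    #include <stdlib.h>

    /* An element centroid with the partition label assigned to it. */
    struct Centroid {
        double x[3];
        int part;
    };

    static int cmp_axis;   /* axis used by the qsort comparator */

    static int cmp(const void *a, const void *b)
    {
        double da = ((const struct Centroid *)a)->x[cmp_axis];
        double db = ((const struct Centroid *)b)->x[cmp_axis];
        return (da > db) - (da < db);
    }

    /* Recursively bisect c[0..n-1] into 2^depth parts, labeling each
     * centroid with a partition id starting at 'first'. */
    static void bisect(struct Centroid *c, int n, int depth, int first)
    {
        int axis = 0, i, d, half;
        double lo[3], hi[3];

        if (n <= 0)
            return;
        if (depth == 0) {              /* leaf: all centroids in one part */
            for (i = 0; i < n; i++)
                c[i].part = first;
            return;
        }
        /* Pick the axis of greatest extent (simplified inertial axis). */
        for (d = 0; d < 3; d++)
            lo[d] = hi[d] = c[0].x[d];
        for (i = 1; i < n; i++)
            for (d = 0; d < 3; d++) {
                if (c[i].x[d] < lo[d]) lo[d] = c[i].x[d];
                if (c[i].x[d] > hi[d]) hi[d] = c[i].x[d];
            }
        if (hi[1] - lo[1] > hi[axis] - lo[axis]) axis = 1;
        if (hi[2] - lo[2] > hi[axis] - lo[axis]) axis = 2;

        /* Median split along that axis balances the two halves. */
        cmp_axis = axis;
        qsort(c, n, sizeof *c, cmp);
        half = n / 2;
        bisect(c, half, depth - 1, first);
        bisect(c + half, n - half, depth - 1, first + (1 << (depth - 1)));
    }

Calling bisect(cent, n, 3, 0) labels the centroids with eight partition ids; the halving at each level is what gives the logarithmic scaling noted in the status report.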
Point of Contact: Hong Ding Jet Propulsion Laboratory (818) 354-8983 hding@redwood.jpl.nasa.gov curator: Larry Picha MD5{32}: d85028cf4a73b5f4571bc88d5db96a1b File-Size{4}: 2728 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{48}: A Parallel Partitioner For Finite Element Meshes } @FILE { http://cesdis.gsfc.nasa.gov/admin/inf.eng/opp.html Update-Time{9}: 827948598 url-references{217}: mailto:yelena@cesdis.edu mailto:cas@cesdis.gsfc.nasa.gov http://cesdis.gsfc.nasa.gov/people/lpicha/whoiam.html http://cesdis.gsfc.nasa.gov/ http://www.usra.edu/ http://sdcd.gsfc.nasa.gov http://sdcd.gsfc.nasa.gov/ESD/ title{29}: CESDIS Research Opportunities keywords{225}: and association author box center cesdis computing data directorate division earth employer equal excellence information last lawrence mail opportunity picha research revised sciences space the universities usra yelena yesha images{17}: CESDIS1.small.gif headings{161}: Center of Excellence In Space Data And Information Sciences (CESDIS) NASA Goddard Space Flight Center Greenbelt, Maryland Announcement of Research Opportunities body{2296}: The Center of Excellence in Space Data and Information Sciences (CESDIS), located at the National Aeronautics and Space Administration's (NASA) Goddard Space Flight Center, Greenbelt, Maryland (a suburban area near Washington, D.C.), is inviting applications for research positions for visiting faculty, graduate students, and advanced undergraduates for the summer of 1995, and for post-doctoral appointments of up to two years beginning in the fall of 1995. In addition, CESDIS invites applications for sabbatical visits by university faculty during the academic year '95-'96. Summer student visitors must be U.S. citizens or permanent residents. The primary mission of CESDIS is to increase the connection between Computer Science and Engineering research programs at colleges and universities and NASA groups working with computer applications in Earth and space science. Research areas of primary interest at CESDIS include: high performance computing, parallel input/output and data storage systems for high performance parallel computers, database and data management systems for parallel computers, image processing, and digital libraries. Visitors to CESDIS are encouraged to collaborate with NASA Goddard scientists. Additional opportunities for collaboration exist with many local research universities and institutions. CESDIS researchers have access to a wide range of high performance computer systems at Goddard, including a CRAY C98 supercomputer, a 16,000-processor MasPar MP-2, and advanced data storage systems. Applicants should send resumes, descriptions of research interests, and the names and addresses of three references for us to contact. Please return to: Yelena Yesha, Director CESDIS Code 930.5 NASA Goddard Space Flight Center Greenbelt, Maryland 20771 Applications and inquiries may also be made via email to: Yelena Yesha or to the CESDIS Mail Box USRA/CESDIS is an Equal Opportunity Employer Author: Lawrence Picha (lpicha@usra.edu), Center of Excellence in Space Data and Information Sciences, Universities Space Research Association, NASA Goddard Space Flight Center, Greenbelt, Maryland. Last revised: 06 APRIL 95 (l.picha) In connection with the Space Data and Computing Division, Earth Sciences Directorate, NASA Goddard Space Flight Center.
MD5{32}: 1b37d9b3152ace69347a2369fb4817b8 File-Size{4}: 3128 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{29}: CESDIS Research Opportunities } @FILE { http://cesdis.gsfc.nasa.gov/people/oconnell/whoiam.html Update-Time{9}: 827948631 title{17}: Michele O'Connell MD5{32}: a057f622dbb29cc556a1c4fa90b1197a File-Size{4}: 2170 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Keywords{16}: connell michele Description{17}: Michele O'Connell } @FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node26.html Update-Time{9}: 827948642 url-references{1764}: node27.html#SECTION000101000000000000000 node28.html#SECTION000101100000000000000 node29.html#SECTION000101200000000000000 node30.html#SECTION000101300000000000000 node31.html#SECTION000101400000000000000 node32.html#SECTION000101500000000000000 node33.html#SECTION000101600000000000000 node34.html#SECTION000101700000000000000 node35.html#SECTION000101800000000000000 node36.html#SECTION000101900000000000000 node37.html#SECTION0001011000000000000000 node38.html#SECTION0001011100000000000000 node39.html#SECTION000102000000000000000 node40.html#SECTION000102100000000000000 node41.html#SECTION000102200000000000000 node42.html#SECTION000102300000000000000 node43.html#SECTION000102400000000000000 node44.html#SECTION000103000000000000000 node45.html#SECTION000103100000000000000 node46.html#SECTION000103200000000000000 node47.html#SECTION000103300000000000000 node48.html#SECTION000103400000000000000 node49.html#SECTION000103500000000000000 node50.html#SECTION000103600000000000000 node51.html#SECTION000103700000000000000 node52.html#SECTION000104000000000000000 node53.html#SECTION000104100000000000000 node54.html#SECTION000104200000000000000 node55.html#SECTION000104300000000000000 node56.html#SECTION000104400000000000000 node57.html#SECTION000104500000000000000 node58.html#SECTION000105000000000000000 node59.html#SECTION000105100000000000000 node60.html#SECTION000105200000000000000 node61.html#SECTION000105300000000000000 node62.html#SECTION000105400000000000000 node63.html#SECTION000105500000000000000 node64.html#SECTION000106000000000000000 node65.html#SECTION000106100000000000000 node66.html#SECTION000106200000000000000 node67.html#SECTION000106300000000000000 node68.html#SECTION000106400000000000000 node69.html#SECTION000106500000000000000 title{13}: Introduction keywords{810}: acknowledgment acknowledgments and approach architecture architectures aug bit casa center challenge chance chip computer computing conceptual conclusions convection cpu crcw currently data design distributed earth edt electronic elements enabling examples feasible for grand heterogeneous instruction interactive introduction issues limitations machine massive massively masssively memory minnesota mixed mixing model multiple neumann non ocrcw one open opto overall overview parallel parallelism performance petaflops petaops pim power principles problems processing processors projections prospects ram references related rendering reschke results science serial set sets shared sia simd simulation smartnet summary supercomputer sustained systems taming technology testbed the tue turbulent using
von work images{193}: /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif head{2676}: Introduction This section includes the papers from participants who made presentations on architecture and technology issues and challenges of petaflops computing. Listed below are the titles of the extended abstracts and their authors: Heterogeneous Computing: One Approach to Sustained Petaflops Performance, H.J. Siegel, John K. Antonio, Min Tan, Richard C. Metzger, Richard F. Freund, and Yan Alexander Processors-In-Memory (PIM) Chip Architectures for Petaflops Computing, Peter M. Kogge A Petaops is Currently Feasible by Computing in RAM, Duncan Elliott Design of a Massively Parallel Computer Using Bit Serial Processing, John E. Dorband, Maurice F. Aburdene, Kamal S. Khouri, Jason E. Piatt, Jianqing Zheng Non von Neumann Instruction Set Architecture as an Enabling Technology in Grand Challenge Systems, Justin S. M. Porter (Note: Porter was unable to make a presentation) Taming Massive Parallelism: The Prospects of Opto-Electronic CRCW-Shared Memory, Paul Lukowicz, Walter F. Tichy Lightning: A Scalable Dynamically Reconfigurable Hierarchical WDM Network for High Performance Clustering, Patrick W. Dowd PETAFLOPS: PErhaps Take A Futuristic Look at Optical Processing Systems Easing the Burden on Latency-Tolerance Mechanisms in Petaflops Computers, David K. Probst Petaflops Technology: Real Time Image Compensation, Richard G. Lyon Heterogeneous Computing: One Approach to Sustained Petaflops Performance Introduction Examples of Mixed-Machine HC Overview Simulation of Mixing in Turbulent Convection at the Minnesota Supercomputer Center Interactive Rendering of Multiple Earth Science Data Sets on the CASA Testbed SmartNet A Conceptual Model for HC Open Problems Conclusions Acknowledgments References Processors-In-Memory (PIM) Chip Architectures for Petaflops Computing Introduction SIA Projections and CPU Architecture Open Issues References A Petaops is Currently Feasible by Computing in RAM Introduction Power Limitations Computing in RAM Overall Architecture Conclusions Acknowledgments References Design of a Massively Parallel Computer Using Bit Serial Processing Elements Introduction Massively Parallel SIMD Architecture Summary Acknowledgment References Non von Neumann Instruction Set Architecture as an Enabling Technology in Grand Challenge Systems Introduction Distributed Instruction Set Architecture Simulation Results Conclusions References Taming Massive Parallelism: The Prospects of Opto-Electronic CRCW-Shared Memory Introduction Related Work OCRCW-SM Principles Memory Architecture Chance Reschke Tue Aug 15 08:59:12 EDT 1995 MD5{32}: bc4e4beea0dbfec9f194d791410ba562 File-Size{4}: 7442 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{13}: Introduction } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/admin/org.html Update-Time{9}: 827948599 title{25}: NASA HQ HPCC Organization MD5{32}: d1f13387ee3bcc9de9e8db8962562916 File-Size{4}: 1088 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}:
14515200 Keywords{23}: hpcc nasa organization Description{25}: NASA HQ HPCC Organization } @FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke Update-Time{9}: 827948688 Description{30}: Chance Reschke's Personal Page Time-to-Live{8}: 14515200 Refresh-Rate{7}: 2419200 Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Version{3}: 1.0 Type{4}: HTML File-Size{4}: 2804 MD5{32}: bcbfcc2418b8aed3ce19e1a3f8942152 body{1981}: First, I know this page is pretty drab, but such is a systems administrator's schedule... Second, I've recently moved to CESDIS from the Corporate Offices in downtown Washington, D.C. Here are my new vital stats: Chance Reschke Center of Excellence in Space Data and Information Sciences Universities Space Research Association email: creschke@usra.edu tel: (301)286-0881 vmail: (202)488-5139 Lately, I've mostly been working on bringing the CESDIS computing environment up to snuff and teaching other USRA Systems Administrators more about maintaining their Unix systems on the net. Actual PROJECTS I'VE BEEN WORKING ON are fairly few and rather quickly done. These include: Set up ISDN Internet access and half a dozen Sparc 1's configured as X terminals at the Frontiers '95 Conference. Setting up T1 Internet access, the exhibitors' network, and the email room for the Advances in Digital Libraries '95 conference. Using SAMBA and CAP to integrate Suns, Macs, and PCs into a comfortable relationship. Setting up a distributed FreeWAIS-SF and SFgate based index of all USRA WWW resources (http://www.usra.edu/SFgatedocs/usra_html_index.html). Doing some other moderately interesting things with FreeWAIS-SF. Various Linux things, including: Porting Condor to the Beowulf Linux Workstation Cluster at CESDIS (http://cesdis.gsfc.nasa.gov/people/becker/beowulf.html) (alright, I haven't actually started this yet, but maybe next week...). Linux as cheap sendmail/POP servers, including advocating (and helping out where I can) changing the NASA HQ email system to use sendmail on U*ix to handle mail exchange with the outside world rather than the hodge-podge collection of PC/Mac SMTP gateway systems they're so fond of (this includes setting up and maintaining one very successful Linux host handling roughly 1,000 messages/day). headings{43}: Welcome to Chance Reschke's Personal Page!
keywords{223}: all becker been beowulf cap cesdis cluster corporate creschke edu freewais gov gsfc html index interesting linux moderately nasa offices people projects resources samba sfgate sfgatedocs things usra working workstation www title{30}: Chance Reschke's Personal Page url-references{350}: http://cesdis.gsfc.nasa.gov http://www.usra.edu/usra/corporateoffices.html mailto:creschke@usra.edu http://lake.canberra.edu.au/pub/samba/ http://www-tec.open.ac.uk/Mac-Support/CAP.html http://ls6-www.informatik.uni-dortmund.de/freeWAIS-sf/README-sf http://ls6-www.informatik.uni-dortmund.de/SFgate/SFgate.html http http://www.usra.edu/usra/USRA http } @FILE { http://cesdis.gsfc.nasa.gov/pub/linux/drivers/3c509.c Update-Time{9}: 827948602 Partial-Text{5205}: EL3WINDOW cleanup_module el3_close el3_get_stats el3_interrupt el3_open el3_probe el3_rx el3_start_xmit id_read_eeprom init_module read_eeprom set_multicast_list update_stats linux/config.h linux/module.h linux/version.h linux/kernel.h linux/sched.h linux/string.h linux/interrupt.h linux/ptrace.h linux/errno.h linux/in.h linux/malloc.h linux/ioport.h asm/bitops.h asm/io.h linux/netdevice.h linux/etherdevice.h linux/skbuff.h /* 3c509.c: A 3c509 EtherLink3 ethernet driver for linux. */ /* Written 1993-1995 by Donald Becker. Copyright 1994,1995 by Donald Becker. Copyright 1993 United States Government as represented by the Director, National Security Agency. This software may be used and distributed according to the terms of the GNU Public License, incorporated herein by reference. This driver is for the 3Com EtherLinkIII series. The author may be reached as becker@cesdis.gsfc.nasa.gov or C/O Center of Excellence in Space Data and Information Sciences Code 930.5, Goddard Space Flight Center, Greenbelt MD 20771 Known limitations: Because of the way 3c509 ISA detection works it's difficult to predict a priori which of several ISA-mode cards will be detected first. This driver does not use predictive interrupt mode, resulting in higher packet latency but lower overhead. If interrupts are disabled for an unusually long time it could also result in missed packets, but in practice this rarely happens. */ /* To minimize the size of the driver source I only define operating constants if they are used several times. You'll need the manual if you want to understand driver details. */ /* Offsets from base I/O address. */ /* The top five bits written to EL3_CMD are a command, the lower 11 bits are the parameter, if applicable. */ /* The SetRxFilter command accepts the following classes: */ /* Register window 1 offsets, the window used in normal operation. */ /* Remaining free bytes in Tx buffer. */ /* Window 0: Set IRQ line in bits 12-15. */ /* Window 4: Various transcvr/media bits. */ /* Enable link beat and jabber for 10baseT. */ /* First check all slots of the EISA bus. The next slot address to probe is kept in 'eisa_addr' to support multiple probe() calls. */ /* Check the standard EISA ID register for an encoded '3Com'. */ /* Change the register set to the configuration window 0. */ /* Restore the "Product ID" to the EEPROM read register. */ /* Was the EISA code an add-on hack? Nahhhhh... */ /* Reset the ISA PnP mechanism on 3c509b. */ /* Select PnP config control register. */ /* Return to WaitForKey state. */ /* Select an open I/O location at 0x1*0 to do contention select. */ /* GCC optimizes this test out. */ /* Rare -- do we really need a warning? */ /* Next check for all ISA bus boards by sending the ID sequence to the ID_PORT. 
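The ID sequence named in the comment above is worth making concrete. The following is a sketch, not an excerpt from 3c509.c itself: the 0xCF feedback constant is the pattern used by contemporary EtherLink III drivers, and outb() is the usual asm/io.h port-output primitive.

    /* Sketch: writing the 3c509 ID sequence to the ID port.  All
     * untagged cards listen to this pseudo-random pattern and enter
     * the ID state; EEPROM reads afterwards perform the contention
     * select described in the surrounding comments. */
    #include <asm/io.h>

    #define ID_PORT 0x100       /* one of the open 0x1*0 probe locations */

    static void send_id_sequence(void)
    {
        unsigned short lrs_state = 0xff;
        int i;

        outb(0x00, ID_PORT);            /* two zero writes reset the state */
        outb(0x00, ID_PORT);
        for (i = 0; i < 255; i++) {
            outb(lrs_state, ID_PORT);   /* next byte of the sequence */
            lrs_state <<= 1;            /* linear feedback shift register */
            lrs_state = lrs_state & 0x100 ? lrs_state ^ 0xcf : lrs_state;
        }
    }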
We find cards past the first by setting the 'current_tag' on cards as they are found. Cards with their tag set will not respond to subsequent ID sequences. */ /* For the first probe, clear all boards' tag registers. */ /* Otherwise kill off already-found boards. */ /* Read in EEPROM data, which does contention-select. Only the lowest address board will stay "on-line". 3Com got the byte order backwards. */ /* Set the adaptor tag so that the next card can be found. */ /* Activate the adaptor at the EEPROM location. */ /* Free the interrupt so that some other card can use it. */ /* Read in the station address. */ /* Make up an EL3-specific data structure. */ /* The EL3-specific entries in the device structure. */ /* Fill in the generic fields of the device structure. */ /* Read a word from the EEPROM using the regular EEPROM access register. Assume that we are in register window zero. */ /* Pause for at least 162 us for the read to take place. */ /* Read a word from the EEPROM when in the ISA ID probe state. */ /* Issue read command, and pause for at least 162 us for it to complete. Assume extra-fast 16MHz bus. */ /* This should really be done by looking at one of the timer channels. */ /* Activate board: this is probably unnecessary. */ /* Set the IRQ line. */ /* Set the station address in window 2 each time opened. */ /* Start the thinnet transceiver. We should really wait 50ms...*/ /* 10baseT interface, enabled link beat and jabber check. */ /* Switch to the stats window, and clear all stats by reading. */ /* Switch to register set 1 for normal use. */ /* Accept b-cast and phys addr only. */ /* Turn on statistics. */ /* Enable the receiver. */ /* Enable transmitter. */ /* Allow status bits to be seen. */ /* Ack all pending events, and set active indicator mask. */ /* Always succeed */ /* Transmitter timeout, serious problems. */ /* Issue TX_RESET and TX_START commands. */ /* Error-checking code, delete someday. */ /* IRQ line active, missed one. */ /* Make sure. */ /* Fake interrupt trigger by masking, acknowledge interrupts. */ /* Avoid timer-based retransmission conflicts. */ /* Put out the doubleword header... */ /* ... and the packet rounded to a doubleword. */ /* Interrupt us when the FIFO has room for max-sized packet. */ /* Clear the Tx status stack. */ /* Pop the status stack. */ /* The EL3 interrupt handler.
*/ MD5{32}: 1e657a9d6773797ed9c0f8ae3130ef1a File-Size{5}: 22395 Type{1}: C Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Keywords{2359}: accept accepts access according ack acknowledge activate active adaptor add addr address agency all allow already also always and applicable are asm assume author avoid backwards base based baset beat because becker bitops bits board boards buffer bus but byte bytes calls can card cards case center cesdis change channels check checking classes cleanup clear close cmd code com command commands complete config configuration conflicts constants contention control copyright could current data define delete details detected detection device difficult director disabled distributed does donald done doubleword driver each eeprom eisa enable enabled encoded entries errno error etherdevice etherlink etherlinkiii ethernet events excellence extra fake fast fields fifo fill find first five flight following for found free from gcc generic get gnu goddard got gov government greenbelt gsfc hack handler happens has header herein higher incorporated indicator information init interface interrupt interrupts ioport irq isa issue jabber kept kernel kill known latency least license limitations line link linux list location long looking lower lowest make malloc manual mask masking max may mechanism media mhz minimize missed mode module multicast multiple nahhhhh nasa national need netdevice next normal not off offsets one only open opened operating operation optimizes order other otherwise out overhead packet packets parameter past pause pending phys place pnp pop port practice predict predictive priori probably probe problems product ptrace public put rare rarely reached read reading really receiver reference register registers regular remaining represented reset respond restore result resulting retransmission return room rounded sched sciences security seen select sending sequence sequences series serious set setrxfilter setting several should size sized skbuff slot slots software some someday source space specific stack standard start state states station statistics stats status stay string structure subsequent succeed support sure switch tag take terms test that the their they thinnet this time timeout timer times top transceiver transcvr transmitter trigger turn understand united unnecessary unusually update use used using various version wait waitforkey want warning was way when which will window with word works written xmit you zero Description{9}: EL3WINDOW } @FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node32.html Update-Time{9}: 827948636 title{78}: Interactive Rendering of Multiple Earth Science Data Sets on the CASA Testbed keywords{103}: aug casa chance data earth edt interactive jpl multiple rendering reschke science sets testbed the tue images{193}: /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif head{1561}: Next: SmartNetUp: Heterogeneous Computing: One Previous: Simulation of Mixing Interactive Rendering of Multiple Earth Science Data Sets on the CASA Testbed The CASA testbed interconnects several remote sites including the California Institute of Technology, San Diego Supercomputer Center, Jet Propulsion Laboratory ( JPL ), and Los Alamos National Laboratory [2, 
27]. The computational resources of the testbed consist of various parallel and vector machines including an Intel Touchstone Delta, Thinking Machines' CM-5 and CM-200, CRAY Y-MP8/864, Y-MP/264, and Y-MP/232, and a number of workstations and specialized visualization engines. One of the applications developed on the CASA testbed involves interactive three-dimensional rendering of multiple Earth science data sets. Functional modules were identified and optimized for specific machines. Initially, raw data sets are transferred to one of the two-dimensional functional modules for processing. The two-dimensional modules manipulate image and/or elevation data via a number of different algorithms. Most of the two-dimensional modules were developed for the CRAY Y-MP/232 at JPL and the CRAY Y-MP8/864 at the San Diego Supercomputer Center. Two of the two-dimensional modules were implemented on the CM-5 and CM-200 located at Los Alamos. Output from the two-dimensional modules is sent over the network to the three-dimensional rendering process, which was implemented on the Intel Touchstone Delta located at the California Institute of Technology. Chance Reschke Tue Aug 15 08:59:12 EDT 1995 MD5{32}: 00ae27daf345af193036417042c306b7 File-Size{4}: 2936 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{70}: Interactive Rendering of Multiple Earth Science Data Sets on the CASA } @FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/footnode.html Update-Time{9}: 827948643 title{9}: Footnotes keywords{72}: aug chance computing edt enabling for ops peta reschke technologies tue head{1529}: ...architecture. Siegel, H. J. et al., ``Report of the Purdue Workshop on Grand Challenges in Computer Architecture for the Support of High Performance Computing", J. Parallel and Distributed Computing, Vol. 16, No. 3, 1992, pp. 199--211. ...(HPCCIT) The HPCCIT is a subcommittee of the Committee on Physical, Mathematical, and Engineering Sciences (PMES), a committee of the Federal Coordinating Council on Science, Engineering and Technology (FCCSET) ...diameter Diameter is a measure of the number of clock cycles required for an access request to propagate across the machine ...nations.'' Thomas Sterling, Messina, P. C., Smith, P. H., Enabling Technologies for Peta(FL)OPS Computing, MIT Press, 1995, p. 159 ...affiliations Justin Porter, University of British Columbia, and Guy Robinson, Syracuse University, who could not attend the workshop, provided extended abstracts, ``Non von Neumann Instruction Set Architecture" and ``Parallel Computations for Scientific, and Engineering Applications: What Could We Do With Petaflops", respectively, that are included in this report. ...Research. An extended abstract of this presentation was not available for this report. ...Siegel Supported by Rome Laboratory under contract number F30602-94-C-0022 and by NRaD under contract number N68786-91-D-1799. Some of the research discussed used equipment supported by the National Science Foundation under grant number CDA-9015696. ...Metzger Supported by AFOSR under RL JON 2304F2TK.
Chance Reschke Tue Aug 15 08:59:12 EDT 1995 MD5{32}: f157e4a8cf370cee36373e20ffbdcb6c File-Size{4}: 3236 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{9}: Footnotes } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/iitf.hp/minutes/6.2.95.html Update-Time{9}: 827948661 title{14}: 6-2-95 Minutes MD5{32}: 5c6e81ef8e055031e7139ab1ba0ad171 File-Size{4}: 6163 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Keywords{8}: minutes Description{14}: 6-2-95 Minutes } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/accomp/94accomp/ess94.accomps/ Update-Time{9}: 827948795 url-references{78}: /hpccm/accomp/94accomp/ ess1.html ess2.html ess3.html ess4.html hpcc.graphics/ title{46}: Index of /hpccm/accomp/94accomp/ess94.accomps/ keywords{40}: directory ess graphics hpcc html parent images{112}: /icons/blank.xbm /icons/menu.gif /icons/text.gif /icons/text.gif /icons/text.gif /icons/text.gif /icons/menu.gif headings{46}: Index of /hpccm/accomp/94accomp/ess94.accomps/ body{232}: Name Last modified Size Description Parent Directory 31-Jul-95 11:54 - ess1.html 10-Feb-95 13:37 4K ess2.html 10-Feb-95 14:07 4K ess3.html 04-Aug-95 16:01 2K ess4.html 10-Feb-95 13:38 3K hpcc.graphics/ 14-Jun-95 14:30 - MD5{32}: bcf365579d8d25f9b21b3c962870f723 File-Size{4}: 1041 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{46}: Index of /hpccm/accomp/94accomp/ess94.accomps/ } @FILE { http://cesdis.gsfc.nasa.gov/admin/seminar.series/1121.html Update-Time{9}: 827948595 title{17}: November 21, 1995 keywords{25}: hosted jacqueline moigne images{126}: http://cesdis.gsfc.nasa.gov/admin/cesdis.seminars/logo.GIF http://cesdis.gsfc.nasa.gov/admin/cesdis.seminars/nasalogo-tiny.gif headings{1023}: Mathematical Tools for Remote Sensing Data Analysis November 21, 1995 NASA Goddard Space Flight Center Building 28, Room E210 2:00 - 3:00 p.m. Recognition Theory: The Role of Probability and Statistics Recognition of objects in digital imagery is related to recognition of patterns in general signal processing of sensor data, independent of the sensor. It is the patterns of features that allow us to distinguish between different objects. Accordingly, object recognition becomes a matter of recognition of patterns of features, which is related to matched filtering, but operates in feature space. What metric should we use in order to compare two related features? Clearly, the natural variability of the feature should determine how we decide whether two features are close, which leads us to the notion of features as random vectors. We make rigorous the notion of matching features through metrics in feature space based on statistics of predicted features, and statistics of extracted features, using a Bayesian framework. body{196}: CENTER OF EXCELLENCE IN SPACE DATA AND INFORMATION SCIENCES hosted by: Dr.
Jacqueline Le Moigne Robert Hummel Courant Institute, New York University hummel@cs.nyu.edu MD5{32}: d7fda634a6db57bf18405aed4a01f855 File-Size{4}: 4157 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{17}: November 21, 1995 } @FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node12.html Update-Time{9}: 827948634 url-references{159}: node13.html#SECTION00061000000000000000 node14.html#SECTION00062000000000000000 node15.html#SECTION00063000000000000000 node16.html#SECTION00064000000000000000 title{31}: Issues for Petaflops Computers keywords{165}: and aug chance computers computing edt enabling findings for implications important introduction issues ops peta petaflops reschke summary technologies tue workshop images{193}: /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/up_motif.gif /usr/local/src/latex2html/icons/previous_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif head{260}: Next: Introduction Up: No Title Previous: Report Organization Issues for Petaflops Computers Introduction Workshop Findings on Enabling Technologies for Peta(FL)OPS Computing Important Issues and Implications Summary Chance Reschke Tue Aug 15 08:59:12 EDT 1995 MD5{32}: d8820ffbfb15e9a9dc95a4906178a5fe File-Size{4}: 1853 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{31}: Issues for Petaflops Computers } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/nren.html Update-Time{9}: 827948657 url-references{422}: nren/keck.html nren/acts.html nren/atdnet.html nren/atm.html http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94.html http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/main94.html http://sdcd.gsfc.nasa.gov/ESS/ http://cesdis.gsfc.nasa.gov/hpccm/hpcc.nasa.html http://cesdis.gsfc.nasa.gov/people/lpicha/whoiam.html http://cesdis.gsfc.nasa.gov/ http://www.usra.edu/ http://sdcd.gsfc.nasa.gov http://sdcd.gsfc.nasa.gov/ESD/ title{20}: ESS NREN Experiments keywords{518}: acts advanced and annual application association atdnet atm author authorizing center climate communications computing data demonstration directorate directory distributed earth edu excellence experiment flight for global goddard greenbelt home hpcc information infrastructure keck last lawrence lpicha main maryland may model nasa network networking observatory official page picha previous project report research return revised satellite science sciences service space technology the universities usra via images{115}: graphics/ess-small.gif graphics/convect-bar.gif graphics/convect-bar.gif graphics/return.gif graphics/hpccsmall.gif headings{127}: NASA High Performance Computing and Communications (HPCC) Program National Research and Education Network (NREN) Experiments body{1004}: Earth and Space Science (ESS) Project NASA HPCC 1994 Annual Report Advanced Communications Technology Satellite (ACTS) Keck Observatory/Global Climate Model Experiment Distributed Global Climate Model via ACTS Application Technology Demonstration Network (ATDnet) ATM Networking Infrastructure at NASA Goddard Space Flight Center Return to the PREVIOUS PAGE Other Paths: Go to the Main Directory for The NASA HPCC 1994 Annual Report Go to the Earth
and Space Science Project Home Page Go to The NASA HPCC Home Page Authorizing NASA Official: Lee B. Holcomb, Director, NASA HPCC Office Author: Lawrence Picha (lpicha@usra.edu) Center of Excellence in Space Data and Information Sciences, Universities Space Research Association, NASA Goddard Space Flight Center, Greenbelt, Maryland. Last revised: 30 MAY 95 (l.picha) (A service of the Space Data and Computing Division, the Earth Sciences Directorate, NASA Goddard Space Flight Center) MD5{32}: b0c476eb50ea7b31f92dc7412fd312fb File-Size{4}: 2268 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{20}: ESS NREN Experiments } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/graphics/zurek.metric.html Update-Time{9}: 827948651 url-references{108}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/solar.html mailto:lpicha@cesdis.gsfc.nasa.gov title{12}: Metric Chart keywords{137}: cesdis challenge computational curator gov grand gsfc larry lpicha metric nasa picha return scientific technical the understanding write images{29}: gardner.metric.gif return.gif headings{146}: Solar Activity and Heliospheric Dynamics PI: John Gardner Naval Research Laboratory (Navy Research Laboratory) Return to the Technical Write-up body{965}: Scientific Grand Challenge: To better understand the interaction of the Sun with the Earth by integrating the coupled set of equations from magnetohydrodynamics, radiation transport, and material properties with sufficient spatial and temporal resolution to be able to distinguish between different physical effects by direct comparisons with quantitative measurements. Scientific Understanding: To test solar flare models with spatial resolution of 0.2 arc seconds (120 km). Computational Challenge: To increase grid resolution by four orders of magnitude in order to permit the separation of the physical spatial and temporal scales that characterize the solar convective and diffusive phenomena. Metric: To meet the Computational Challenge requires a linear grid resolution of order 120 km. Thus, an increase of four orders of magnitude in grid resolution is required.
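As a rough consistency check on the quoted figures (assuming solar observations from a distance of about 1 AU, an assumption not stated in the chart itself), the angular-to-linear conversion works out to

\[
d \approx \theta D = \left(0.2'' \times 4.848\times10^{-6}\,\mathrm{rad}/''\right) \times 1.496\times10^{8}\,\mathrm{km} \approx 145\,\mathrm{km},
\]

the same order as the quoted 120 km, so the 0.2 arc second and roughly 120 km figures are mutually consistent.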
curator: Larry Picha (lpicha@cesdis.gsfc.nasa.gov) MD5{32}: 3d36b453fafba3d82ddea337e989babd File-Size{4}: 1543 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{12}: Metric Chart } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/accomp/95accomp/WordTemp-22 Update-Time{9}: 827948798 url-references{1194}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.darwin.html http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.nra.html http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.overflow.html http://www.nas.nasa.gov/NAS/Tools/Projects/AIMS/ http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.p2d2.html http://hpccp-www.larc.nasa.gov/~fido/homepage.html http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.npss.html http://sdcd.gsfc.nasa.gov/ESS/annual.reports/ess95contents/app.gc.rood.html http://sdcd.gsfc.nasa.gov/ESS/annual.reports/ess95contents/app.gc.zurek.html http://sdcd.gsfc.nasa.gov/ESS/annual.reports/ess95contents/sys.nhse.html http://cesdis.gsfc.nasa.gov/petaflops/peta.html http://olympic.jpl.nasa.gov/Reports/Highlights95/hd_psas.html http://olympic.jpl.nasa.gov/Reports/Highlights95/PL_ess.html http://olympic.jpl.nasa.gov/Reports/Highlights95/ml_dvt.html http://olympic.jpl.nasa.gov/Reports/Highlights95/jl_flow.html http://cesdis.gsfc.nasa.gov/hpccm/hpcc.classic.html http://cesdis.gsfc.nasa.gov/people/lpicha/whoiam.html http://cesdis.gsfc.nasa.gov/ title{43}: NASA HPCC FY 1995 Top Level Accomplishments keywords{954}: accomplishments accretion achieves acoustic aerosciences aims algorithms ames and announcements applications are array assimilation astrophysics center climate clusters compiled computational computers construction cycle darwin data database debugger deck design dimensional distributed earth edu elliptic enabling excellence exchange fido fiscal flow for formation four framework galaxy gflops hierarchical home hpcc hpccp incompressible information infrastructure intel interdisciplinary kernel launche lawrence level lpicha measurement multigrid nasa national node numerical optimization overflow package page paragon parallel particle petaflops phased picha portable program project propulsion psas rendering research return scalable sciences showcase simulation site size software solver space sponsored state status steady supported system task technologies technology the this top tuning under usra version visualization web with workstation year images{203}: graphics/nasa.meatball.gif graphics/hpcc.header.gif graphics/at_work.gif graphics/at_work.gif graphics/arc.log.gif graphics/ess.thumb.gif graphics/gonzaga3.gif graphics/at_work.gif graphics/hpccsmall.gif headings{55}: Showcase of Accomplishments (Fiscal Year 1995) body{2381}: This Page is Under Construction The NASA HPCC Program: (Fiscal Year 1995 TOP LEVEL NASA HPCC Program accomplishments are compiled by NASA HPCC Project) Computational Aerosciences DARWIN/HPCC Phased-Array Acoustic Measurement and Visualization Status of Ames Sponsored HPCCP NASA Research Announcements A Supported Version of OVERFLOW for Parallel Computers and Workstation Clusters Tuning Parallel Applications with AIMS The Portable Parallel/Distributed Debugger (p2d2) FIDO: Framework for Interdisciplinary Design Optimization Numerical Propulsion System Simulation Steady State Cycle Deck Launcher Earth and Space Sciences Four-Dimensional Data Assimilation Scalable Hierarchical Particle Algorithms for Galaxy Formation and Accretion Astrophysics National HPCC Software Exchange PetaFLOPS Enabling Technologies and Applications Web Site Parallel PSAS Climate Data Assimilation Package Achieves 17 GFLOPS on 512-node Intel Paragon Parallel Database Rendering Distributed Visualization Task A Parallel Incompressible Flow Solver with a Parallel Multigrid Elliptic Kernel Information Infrastructure Technology and Applications Under Construction Return to the HPCC Home Page Authorizing NASA Official: Lee B. Holcomb, Director, NASA HPCC Office Author: Lawrence Picha (lpicha@usra.edu), Center of Excellence in Space Data and Information Sciences, Universities Space Research Association, NASA Goddard Space Flight Center, Greenbelt, Maryland. Last revised: 12 DEC 1995 (l.picha) MD5{32}: 0ef81ce3eb04d420d1fafe5f58107e97 File-Size{4}: 5582 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{43}: NASA HPCC FY 1995 Top Level Accomplishments } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/archive/overview/WordTemp-59 Update-Time{9}: 827948816 url-references{34}: mailto:lpicha@cesdis.gsfc.nasa.gov title{15}: HPCC Fact Sheet keywords{388}: accelerate accelerating aeronautics american and another application century cesdis comments communications computing convergence development directly earth engineering every expects future gov gsfc high hpcc into larry lpicha meet nasa next performance picha play please program questions requirements role sciences send shaping space speed technologies teraflop the unique welcome your images{79}: graphics/lites2.gif graphics/lites2.gif graphics/lites2.gif graphics/lites2.gif headings{193}: The NASA High Performance Computing and Communications (HPCC) Program Welcome to the NASA HPCC Brochure! Table of Contents Introduction The Speed of Change Computational Aerosciences Project body{7965}: To accelerate the development and application of high-performance computing technologies to meet NASA's aeronautics, earth and space sciences, and engineering requirements into the next century. You're here because you need or want an explanation and overview of the NASA HPCC Program, its mission, and how it implements and utilizes taxpayer assets. You may click on the table of contents item you're interested in and go directly there, or you may scroll through the entire document.
Please send your comments and/or questions directly to Larry Picha (lpicha@cesdis.gsfc.nasa.gov). Introduction The Speed of Change Components of the NASA HPCC Program NASA HPCC Program Contributions What is a Teraflop and why should I care about it anyway? Importance of NASA's Role in the National HPCC Program Resources: pointers to more HPCC related documentation Computational Aerosciences (CAS) Project Earth and Space Sciences (ESS) Project Information Infrastructure Technology and Applications (IITA) component Remote Exploration and Experimentation (REE) Project In recognition of the critical importance of information technologies, the United States Government created the High Performance Computing and Communications (HPCC) Program in 1991. The goal of the Program was to foster the development of high-risk, high-payoff systems and applications that will most benefit America. The NASA HPCC program is a critical component of this government-wide effort; it is dedicated to working with American businesses and universities to increase the speed of change in research areas that support NASA's aeronautics, Earth, and space missions. By investing national resources in the NASA HPCC Program, America will be able to maintain its worldwide leadership position in aerospace, high-speed computing, communications, and other related industries. Although the High Performance Computing and Communications budget is a small percentage of the NASA budget, it has a significant impact on the Agency's mission, as well as on U.S. industry. NASA leads the planning and coordination of the software element of the Federal High Performance Computing and Communications (HPCC) Program and is also an important participant in the National Information Infrastructure initiatives. NASA's HPCC Program will: Further gains in U.S. productivity and industrial competitiveness - especially in the aeronautics industry; Extend U.S. technology leadership in high performance computing and communications; Provide wide dissemination and application of HPCC technologies; and Facilitate the use and technologies of a National Information Infrastructure (NII) - especially within the American K-12 educational systems. As we stand on the threshold of the 21st century, change has become a constant in our lives. We live in a time of unprecedented social, political, and technological change and advancement. For many Americans, the rate of change has accelerated to the point where it is nearly overwhelming. It took four hundred years between the development of movable type and the creation of the first practical typewriter. Less than one hundred years later came the development of the word processor. Now, if you buy a personal computer, the computer seems to be behind the technology curve before you even carry it home from the store. Many American business communication tools that are taken for granted today, such as FAX machines, electronic mail, pagers, and cellular phones, were unknown or generally unavailable just ten years ago. At no time in history have humans been required to process information from so many different sources at once. There can be no doubt that in the late twentieth century, the advance of technology has reached a sort of critical mass that is propelling us headlong into a future that was unimaginable a generation ago. The rapid development of computers and communications has ''shrunk'' the world. The United States is an active participant in a worldwide economy.
In this new ''global village,'' the rapid movement of information has made the technological playing field for most industrialized nations very competitive. For the first time in history, the means of production, the means of communication, and the means of distribution are all based on the same technology -- computers. A unique interdependence now exists among advanced information technologies. Each new innovation allows existing industries to operate more efficiently, while at the same time opening up new markets for the product itself. Individuals, corporations, industries -- even entire economies -- depend more than ever on information technologies. America's future and the future of each citizen will be deeply affected by the speed with which information is gathered, processed, analyzed, secured, and disseminated. NASA has a long history of developing new technologies for aerospace missions that later turn out to have far-reaching effects on society through civilian applications. For instance, satellites originally developed for space exploration and defense purposes now carry virtually all television and long-distance telephone signals to our homes. By accelerating the convergence of computing and communications technologies, the NASA HPCC Program expects to play another unique role in shaping the future of every American. The Computational Aerosciences (CAS) Project is focused on the specific computing requirements of the United States aerospace community and has as its primary goal accelerating the availability of high performance computing hardware and software to United States aerospace manufacturers for use in their design processes. The U.S. aerospace industry can effectively respond to increased international competition only by producing across-the-board better quality products at affordable prices. High performance computing capability is a key to the creation of a competitive advantage, by reducing product cost and design cycle times; its introduction into the design process is, however, a risk to a commercial company that NASA can help mitigate by performing this research. The CAS project catalyzes these developments in aerospace computing, while at the same time pointing the way to future aerospace markets for domestic computer manufacturers. The key to the entire CAS project is the aerospace design and manufacturing process. These are the procedures that a manufacturer carries out in order to move from the idea of a new aircraft to the roll-out of a new aircraft onto the runway. Computer simulations of these aircraft vastly shorten the time necessary for this process. These computer simulations, or applications as they have come to be called, need immensely fast computers in order to deliver their results in a timely fashion to the designers. CAS supports the development of these machines by acquiring the latest experimental machinery from domestic computer manufacturers and making them available as testbeds to the nationwide CAS community. The computer manufacturers and independent software vendors help out by providing system software that forms the glue between the applications programs and the computer hardware. These are computer programs like operating systems that make the computer function. The CAS community that carries out this work consists of teams of workers from the major aerospace companies, from the NASA aeronautics research centers, and from American universities.
The focus of the project is derived through extensive interactions with business managers of the major aerospace companies and by consultation with university researchers and NASA management. The project delivers applications and system software that have been found through its research to show an enhancement to the design process, and provides a laboratory by which the computer manufacturers can identify weaknesses and produce improvements in their products. MD5{32}: 691417e9c86d277b29be76ea423326cf File-Size{4}: 9013 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{15}: HPCC Fact Sheet } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/iitf.hp/minutes/4.28.95.html Update-Time{9}: 827948661 title{15}: 4-28-95 Minutes MD5{32}: 2ae3be497b51e15317c54e00bd4d155a File-Size{4}: 4379 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Keywords{8}: minutes Description{15}: 4-28-95 Minutes } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/sound.bytes/holcomb.transcript Update-Time{9}: 827948829 MD5{32}: 86daaadecb61c27aec8519f3fccb4892 File-Size{4}: 3584 Type{7}: Unknown Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/sys.sw/ Update-Time{9}: 827948841 url-references{96}: /hpccm/annual.reports/cas94contents/ bench.html flow.html graphics/ hpf.html hsct.html p2d2.html title{52}: Index of /hpccm/annual.reports/cas94contents/sys.sw/ keywords{51}: bench directory flow graphics hpf hsct html parent images{128}: /icons/blank.xbm /icons/menu.gif /icons/text.gif /icons/text.gif /icons/menu.gif /icons/text.gif /icons/text.gif /icons/text.gif headings{52}: Index of /hpccm/annual.reports/cas94contents/sys.sw/ body{259}: Name Last modified Size Description Parent Directory 17-Oct-95 15:42 - bench.html 19-Jul-95 14:19 3K flow.html 19-Jul-95 14:42 2K graphics/ 09-Nov-95 14:43 - hpf.html 19-Jul-95 15:17 4K hsct.html 19-Jul-95 14:42 2K p2d2.html 19-Jul-95 14:20 2K MD5{32}: 115064922a23fff2073081ecde2f12e8 File-Size{4}: 1185 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{52}: Index of /hpccm/annual.reports/cas94contents/sys.sw/ } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/graphics/ Update-Time{9}: 827948835 url-references{323}: /hpccm/annual.reports/ann.rpt.95/cas.95/ back.gif bar.gif cas.gif casback.gif convect-bar.gif earth.gif ess-small.gif ess.gif hpcc.button.gif hpccsmall.gif hq.button.gif iitf.button.gif mailbutton.gif meatball.gif moz.gif nasa.button.gif nco.button.gif people.button.gif qu_book.gif return.gif search.button.gif smaller.gif title{58}: Index of /hpccm/annual.reports/ann.rpt.95/cas.95/graphics/ keywords{160}: back bar book button cas casback convect directory earth ess gif hpcc hpccsmall iitf mailbutton meatball moz nasa nco parent people return search small smaller images{406}: /icons/blank.xbm /icons/menu.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif 
/icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif /icons/image.gif headings{58}: Index of /hpccm/annual.reports/ann.rpt.95/cas.95/graphics/ body{817}: Name Last modified Size Description Parent Directory 22-Dec-95 13:53 - back.gif 18-Oct-95 10:59 4K bar.gif 18-Oct-95 10:59 3K cas.gif 18-Oct-95 10:59 22K casback.gif 18-Oct-95 10:59 1K convect-bar.gif 18-Oct-95 10:59 4K earth.gif 18-Oct-95 10:59 3K ess-small.gif 18-Oct-95 10:59 13K ess.gif 18-Oct-95 10:59 3K hpcc.button.gif 18-Oct-95 10:59 2K hpccsmall.gif 18-Oct-95 10:59 2K hq.button.gif 18-Oct-95 10:59 3K iitf.button.gif 18-Oct-95 10:59 1K mailbutton.gif 18-Oct-95 10:59 1K meatball.gif 18-Oct-95 10:59 3K moz.gif 18-Oct-95 10:59 2K nasa.button.gif 18-Oct-95 10:59 3K nco.button.gif 18-Oct-95 10:59 1K people.button.gif 18-Oct-95 10:59 1K qu_book.gif 18-Oct-95 10:59 1K return.gif 18-Oct-95 10:59 1K search.button.gif 18-Oct-95 10:59 2K smaller.gif 18-Oct-95 10:59 23K MD5{32}: fb2ab07948c8b1aa5ef9bf22c7cdfb00 File-Size{4}: 3413 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{58}: Index of /hpccm/annual.reports/ann.rpt.95/cas.95/graphics/ } @FILE { http://cesdis.gsfc.nasa.gov/admin/seminar.series/1204.html Update-Time{9}: 827948595 title{16}: December 4, 1995 keywords{75}: all clear diablo ghil hosted jacqueline michael moigne nino theory unified images{135}: http://cesdis.gsfc.nasa.gov/admin/cesdis.seminars/logo.GIF http://cesdis.gsfc.nasa.gov/admin/cesdis.seminars/nasalogo-tiny.gif Ghil.gif headings{241}: Mathematical Tools for Remote Sensing Data Analysis MONDAY, December 4, 1995 NASA Goddard Space Flight Center Building 28, Room E210 2:00 - 3:00 p.m. Regular Patterns in Space and Time or Singular Spectrum Analysis for Fun and Profit body{286}: CENTER OF EXCELLENCE IN SPACE DATA AND INFORMATION SCIENCES hosted by: Dr.
Jacqueline Le Moigne El Diablo y El Nino: A Unified Theory by Michael Ghil Michael Ghil Ecole Normale Superieure, Paris and University of California, Los Angeles MD5{32}: 6d8662cda5e858edc3cd168e0fd186a0 File-Size{4}: 5534 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{16}: December 4, 1995 } @FILE { http://cesdis.gsfc.nasa.gov/linux/pcmcia/cardd/ Update-Time{9}: 827948612 url-references{80}: /linux/pcmcia/ Makefile card.eject card.insert cardd.c cardd.h cis.h parse_cis.c title{29}: Index of /linux/pcmcia/cardd/ keywords{60}: card cardd cis directory eject insert makefile parent parse images{144}: /icons/blank.xbm /icons/menu.gif /icons/text.gif /icons/text.gif /icons/text.gif /icons/text.gif /icons/text.gif /icons/text.gif /icons/text.gif headings{29}: Index of /linux/pcmcia/cardd/ body{287}: Name Last modified Size Description Parent Directory 25-Oct-95 11:49 - Makefile 25-May-94 00:48 1K card.eject 25-May-94 11:39 1K card.insert 25-May-94 11:39 1K cardd.c 25-May-94 00:47 13K cardd.h 25-May-94 00:48 1K cis.h 25-May-94 00:48 2K parse_cis.c 25-May-94 00:48 6K MD5{32}: ab6ab6d301fbf90a1ca347d535df985f File-Size{4}: 1238 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{29}: Index of /linux/pcmcia/cardd/ } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw/coupled.html Update-Time{9}: 827948646 url-references{109}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw.html mailto:lpicha@cesdis.gsfc.nasa.gov title{42}: Coupled Simulation of Flightdynamic System keywords{46}: contents curator larry picha return table the images{46}: graphics/flightdynamic.gif graphics/return.gif headings{75}: Coupled Simulation of Flightdynamic System Return to the Table of Contents body{2153}: Objective: Simulation of an aircraft control system in a nonlinear aerodynamic environment will allow a safe and rapid means of prototyping new designs. This technology has the potential to reduce design cycle cost and enhance safety while improving performance. Approach: The coupled simulations use a diagonalized implementation of the Reynolds-averaged Navier-Stokes equations in an overset mesh framework. At each time-step, integrated surface pressure along the vehicle is passed to the nonlinear rigid body dynamics equations, which results in a new vehicle state. This new body position is then fed to the control system, which in turn commands the control effectors such that the vehicle will reach a desired state. Accomplishment: Initial coding of applied load and aerodynamic effector kinematics in a coupled vectorized flow solver/relative body motion code has been completed. In two dimensions, the nonlinear results have been compared to linearized analogs for an oscillating supersonic canard/wing and a low-speed altitude-commanded airfoil. In three dimensions, comparison with experiment was shown for the controlled separation of a finned store from a cavity. Significance: Design of aircraft control systems based on simplified aerodynamic models can cause costly flight test delays or lead to loss of life and aircraft.
Through simulation of fully nonlinear dynamics and aerodynamics, an additional level of testing can be placed on the control system before flight test. The applications completed thus far provide the initial confidence tests of the coupled technique, laying the foundation for the simulation of a complete aircraft control system. Status/Plans: In order to reduce the expense of these nonlinear simulations, an effort is underway to distribute the computation over networked workstation processors. Using a landing lift-jet borne delta-winged aircraft as a test case, several parallel techniques will be compared against the serial implementation in terms of cost and accuracy. Point of Contact: Christopher A. Atwood NASA Ames Research Center (415)604-3974 atwood@nas.nasa.gov curator: Larry Picha MD5{32}: c3a71b75dbb6b4aa42d4d1d128b19b25 File-Size{4}: 2658 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{42}: Coupled Simulation of Flightdynamic System } @FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/report.html Update-Time{9}: 827948633 url-references{3247}: node1.html#SECTION00010000000000000000 node2.html#SECTION00020000000000000000 node3.html#SECTION00030000000000000000 node4.html#SECTION00040000000000000000 node5.html#SECTION00050000000000000000 node6.html#SECTION00051000000000000000 node7.html#SECTION00052000000000000000 node8.html#SECTION00053000000000000000 node9.html#SECTION00053100000000000000 node10.html#SECTION00053200000000000000 node11.html#SECTION00054000000000000000 node12.html#SECTION00060000000000000000 node13.html#SECTION00061000000000000000 node14.html#SECTION00062000000000000000 node15.html#SECTION00063000000000000000 node16.html#SECTION00064000000000000000 node17.html#SECTION00070000000000000000 node18.html#SECTION00071000000000000000 node19.html#SECTION00072000000000000000 node20.html#SECTION00073000000000000000 node21.html#SECTION00074000000000000000 node22.html#SECTION00080000000000000000 node23.html#SECTION00081000000000000000 node24.html#SECTION00082000000000000000 node25.html#SECTION00090000000000000000 node26.html#SECTION000100000000000000000 node27.html#SECTION000101000000000000000 node28.html#SECTION000101100000000000000 node29.html#SECTION000101200000000000000 node30.html#SECTION000101300000000000000 node31.html#SECTION000101400000000000000 node32.html#SECTION000101500000000000000 node33.html#SECTION000101600000000000000 node34.html#SECTION000101700000000000000 node35.html#SECTION000101800000000000000 node36.html#SECTION000101900000000000000 node37.html#SECTION0001011000000000000000 node38.html#SECTION0001011100000000000000 node39.html#SECTION000102000000000000000 node40.html#SECTION000102100000000000000 node41.html#SECTION000102200000000000000 node42.html#SECTION000102300000000000000 node43.html#SECTION000102400000000000000 node44.html#SECTION000103000000000000000 node45.html#SECTION000103100000000000000 node46.html#SECTION000103200000000000000 node47.html#SECTION000103300000000000000 node48.html#SECTION000103400000000000000 node49.html#SECTION000103500000000000000 node50.html#SECTION000103600000000000000 node51.html#SECTION000103700000000000000 node52.html#SECTION000104000000000000000 node53.html#SECTION000104100000000000000 node54.html#SECTION000104200000000000000 node55.html#SECTION000104300000000000000 node56.html#SECTION000104400000000000000 node57.html#SECTION000104500000000000000 
node58.html#SECTION000105000000000000000 node59.html#SECTION000105100000000000000 node60.html#SECTION000105200000000000000 node61.html#SECTION000105300000000000000 node62.html#SECTION000105400000000000000 node63.html#SECTION000105500000000000000 node64.html#SECTION000106000000000000000 node65.html#SECTION000106100000000000000 node66.html#SECTION000106200000000000000 node67.html#SECTION000106300000000000000 node68.html#SECTION000106400000000000000 node69.html#SECTION000106500000000000000 node70.html#SECTION000110000000000000000 node71.html#SECTION000111000000000000000 node72.html#SECTION000111100000000000000 node73.html#SECTION000111200000000000000 node74.html#SECTION000120000000000000000 node75.html#SECTION000121000000000000000 node76.html#SECTION000122000000000000000 node77.html#SECTION000123000000000000000 node78.html#SECTION000124000000000000000 node79.html#SECTION000125000000000000000 node80.html#SECTION000130000000000000000 title{40}: Frontiers '95 PetaFLOPS Workshop Report keywords{1197}: about acknowledgment acknowledgments agenda algorithms and applications approach architecture architectures assimilation attendees aug bit casa center challenge challenges chance chip committee computer computers computing conceptual conclusions contents convection cpu crcw currently data design directions discussion distributed document earth edt electronic elements enabling examples executive factors feasible figures findings for frontier future grand heterogeneous high historical implications important instruction intensive interactive introduction issues limitations list machine massive massively masssively memory minnesota mixed mixing model motivating multiple neumann non objectives ocrcw one open ops opto organization organizing overall overview parallel parallelism performance perspective peta petaflops petaops pim points power presentations principles problems processing processors projections prospects ram references related rendering report reschke results science serial set sets shared sia simd simulation smartnet some summary supercomputer sustained systems tables taming technologies technology testbed the this through tpf tue turbulent using von what work workshop images{156}: /usr/local/src/latex2html/icons/next_motif.gif /usr/local/src/latex2html/icons/contents_motif.gif /petaflops/archive/workshops/peta.graphics/PETA.banner.gif head{4210}: Next: Executive Summary PetaFLOPS Frontier '95 Workshop Thomas Sterling, Senior Scientist, Center for Excellence in Space Data and Information Sciences, Goddard Space Flight Center, Greenbelt, MD 20771, tron@cesdis1.gsfc.nasa.gov Michael J. MacDonald, Former USRA Program Manager for NASA's HPCC Program, 209 Tara Shores, McCormick, SC 29835, mmacdona@usra.edu August 8, 1995 Abstract: This report presents the proceedings of The Petaflops Frontier (TPF) Workshop conducted at the 1995 Frontiers of Massively Parallel Processing in McLean, VA on February 6, 1995. A year after the first Pasadena Workshop on Enabling Technologies for Peta(FL)ops Computing, this workshop was held to extend the findings of the first workshop through wide coverage of related disciplines and involvement of a broader community.
Over a hundred participants attended the one-day workshop at which 18 presentations were given on topics in technology, architecture, algorithms, and applications related to petaflops-scale computing. The architecture and technology presentations included discussions of heterogeneous mixed-machine and mixed-mode computing systems, processor-in-memory (PIM) technology developments along with other approaches to combining logic functions with memory, and several that dealt with the potential of various optical technologies to ease the bandwidth bottleneck. The applications and algorithms presentations focused primarily on the existing need of various applications for petaflops-level computing performance. These applications include the human genome project, modeling of physiological functions, drug design, ecological studies, and computational fluid dynamics (CFD), a discipline used in numerous applications. The workshop showed progress in the thinking about petaflops architecture and technology requirements, reinforced the need for algorithmic research to enable effective management of petaflops-level computing systems, and, finally, reinforced the findings of the first petaflops workshop in Pasadena in 1994. Executive Summary Contents List of Figures List of Tables Introduction What is Petaflops? Historical Perspective The Petaflops Frontier Workshop Objectives Workshop Approach Report Organization Issues for Petaflops Computers Introduction Workshop Findings on Enabling Technologies for Peta(FL)OPS Computing Important Issues and Implications Summary Workshop Organization Organizing Committee Agenda Workshop Presentations Workshop Attendees Overview of Presentations Architecture and Technology Overview Applications and Algorithms Overview Architecture and Technology Issues and Challenges Introduction Heterogeneous Computing: One Approach to Sustained Petaflops Performance Introduction Examples of Mixed-Machine HC Overview Simulation of Mixing in Turbulent Convection at the Minnesota Supercomputer Center Interactive Rendering of Multiple Earth Science Data Sets on the CASA Testbed SmartNet A Conceptual Model for HC Open Problems Conclusions Acknowledgments References Processors-In-Memory (PIM) Chip Architectures for Petaflops Computing Introduction SIA Projections and CPU Architecture Open Issues References A Petaops is Currently Feasible by Computing in RAM Introduction Power Limitations Computing in RAM Overall Architecture Conclusions Acknowledgments References Design of a Massively Parallel Computer Using Bit Serial Processing Elements Introduction Massively Parallel SIMD Architecture Summary Acknowledgment References Non von Neumann Instruction Set Architecture as an Enabling Technology in Grand Challenge Systems Introduction Distributed Instruction Set Architecture Simulation Results Conclusions References Taming Massive Parallelism: The Prospects of Opto-Electronic CRCW-Shared Memory Introduction Related Work OCRCW-SM Principles Memory Architecture Applications and Algorithms: Issues and Challenges Enabling Data-intensive Applications through Petaflops Computing Introduction Data Assimilation Discussion and Conclusions Motivating Factors for TPF-1 Some High Points Applications and Algorithms Architecture and Technology Implications for Future Directions About this document ...
Chance Reschke Tue Aug 15 08:59:12 EDT 1995 MD5{32}: 7cf7fdb3b45e6f32e0e58c9848970cbf File-Size{5}: 11811 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{40}: Frontiers '95 PetaFLOPS Workshop Report } @FILE { http://cesdis.gsfc.nasa.gov/petaflops/archive/conferences/ptts.wkshp.conf.html Update-Time{9}: 827948644 url-references{380}: ptts.wkshp.html ptts.conf.html http://www.mcs.anl.gov/hpcc/hpcc-apps/workshop.html http://cesdis.gsfc.nasa.gov/petaflops/peta.html /people/tron/tron.html mailto:tron@usra.edu /people/oconnell/whoiam.html mailto:oconnell@cesdis.gsfc.nasa.gov http://cesdis.gsfc.nasa.gov/people/lpicha/whoiam.html mailto:lpicha@@cesdis.gsfc.nasa.gov http://cesdis.gsfc.nasa.gov/ http://www.usra.edu/ title{54}: Petaflops Enabling Technologies and Applications (PETA) keywords{274}: and applications cesdis challenge computing conference connell edu enable environments focusing grand high july lawrence lpicha michele moc performance picha pittsburg proceedings revised software sterling summary systems technology that the thomas tools tron usra workshop images{129}: peta.graphics/saturn.gif peta.graphics/saturn.gif peta.graphics/saturn.gif peta.graphics/turb.small.gif peta.graphics/petabar.gif headings{308}: Pittsburgh Workshop and Conference on Grand Challenge Applications and Software Technology - 1993 A Workshop and Conference focusing on systems software and tools that enable high performance computing environments. Workshop Summary Conference Summary The Proceedings Return to the P.E.T.A. Directory body{1167}: This Workshop was held May 4 - 6, 1993 at the Hyatt Regency Hotel in Pittsburgh, Pennsylvania. The objective was to bring HPCC Grand Challenge applications research groups supported under the HPCC Initiative together with HPCC software technologists in order to: Discuss multidisciplinary computational science research issues/approaches; Refine the software technology requirements for Grand Challenge applications research; Identify the major technology challenges facing users and providers. The conference was held May 7, 1993 at the Hyatt Regency Hotel in Pittsburgh, Pennsylvania. The objectives of the conference were to promote dialogue and interaction among leaders in the multi-sector United States high performance computing community. The Proceedings of this workshop are available courtesy of Rick Stevens, Mathematics and Computer Science Division, Argonne National Laboratory Authorizing NASA Official: Paul H. Smith, NASA HPCC Office Senior Editor: Thomas Sterling (tron@usra.edu) Curators: Michele O'Connell (michele@usra.edu), Lawrence Picha (lpicha@usra.edu), CESDIS/USRA, NASA Goddard Space Flight Center.
Revised: 31 July 95 (moc) MD5{32}: 043c809c6ed11b4595776b80c2c50831 File-Size{4}: 2479 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{54}: Petaflops Enabling Technologies and Applications (PETA) } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/iita.95/www.servers.html Update-Time{9}: 827948798 url-references{142}: graphics/map.large.gif http://rsd.gsfc.nasa.gov/rsd/ http://dlt.gsfc.nasa.gov http://www.rspac.ivv.nasa.gov mailto:lpicha@cesdis.gsfc.nasa.gov title{20}: WWW Servers for IITA keywords{110}: click cooperative dlt effort full image larry picha program project rsd rspac scale see the thumbnail version images{22}: graphics/map.small.gif headings{238}: World Wide Web Servers in Support of Information Infrastructure Technology and Applications The Public Use of Remote Sensing Data Program (RSD) The Digital Library Technology Project (DLT) The Remote Sensing Public Access Center (RSPAC) body{1327}: (click on thumbnail image to see full scale version - 120K) The RSD effort is composed of 20 projects funded by the NASA Information Infrastructure Technology and Applications (IITA) initiative, establishing partnerships between government, private business and academia to promote the use of Earth and space science data over the Internet. The DLT project supports the development of new technologies to facilitate public access to NASA data via computer networks. Technologies that develop tools, applications, and software and hardware systems that are able to scale upward to accommodate evolving user requirements and order-of-magnitude increases in user access, are of highest priority. The RSPAC is a cooperative program among the NASA Office of Aeronautics Information Infrastructure Technology and Applications (IITA) Program, BDM International, and West Virginia University. The RSPAC is located at the NASA Software Independent Verification and Validation Center in Fairmont, West Virginia. The activities supported in these projects are largely based on World Wide Web (WWW) servers on the Internet. The progress of these activities can be followed by watching their corresponding Uniform Resource Locators. Author: Larry Picha, the Center of Excellence in Space Data and Information Sciences Dec. 19, 1995 MD5{32}: dde5e7fa23da7d3ee784f50c950f7c73 File-Size{4}: 2011 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{20}: WWW Servers for IITA } @FILE { http://cesdis.gsfc.nasa.gov/linux/pcmcia/cardd/parse_cis.c Update-Time{9}: 827948614 Partial-Text{819}: parse_cis parse_power tpl_cftable tpl_config tpl_device unistd.h stdio.h stdlib.h errno.h string.h sys/file.h asm/io.h cis.h cardd.h /* parse_cis.c: Parse and show the PCMCIA Card Information Structure. Written 1994 by Donald Becker. The author may be reached as becker@cesdis1.gsfc.nasa.gov. */ /* A pointer into the current tuple, here to avoid passing it as a parameter to every parse routine. */ /* This routine reads the CIS. */ /* Last-hope error check. */ /* Special forms -- no link/length value. */ /* Read in the entire tuple. */ /* Misc. function to print the various power entries. */ /* Skip advisory values. */ /* Register mask field -- ignore all but the last bits.
*/ /* * Local variables: * compile-command: "cc -O -c parse_cis.c -Wall" * c-indent-level: 4 * tab-width: 4 * End: */ MD5{32}: 7d0865b84ef3d464e8ca0bc58877e949 File-Size{4}: 7115 Type{1}: C Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Keywords{511}: advisory all and asm author avoid becker bits but card cardd cesdis cftable check cis command compile config current device donald end entire entries errno error every field file forms function gov gsfc here hope ignore indent information into last length level link local mask may misc nasa parameter parse passing pcmcia pointer power print reached read reads register routine show skip special stdio stdlib string structure sys tab the this tpl tuple unistd value values variables various wall width written Description{9}: parse_cis } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/sys.sw/beowulfpict.html Update-Time{9}: 827948656 url-references{117}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/sys.sw/beowulf.html mailto:lpicha@cesdis.gsfc.nasa.gov title{22}: Beowulf Parallel Linux keywords{47}: curator larry picha return technical the write images{40}: graphics/beowulf.gif graphics/return.gif headings{34}: Return to the technical write-up body{149}: Point of Contact: Dr. John E. Dorband Goddard Space Flight Center/Code 934 dorband@nibbles.gsfc.nasa.gov (301) 286-9419 curator: Larry Picha MD5{32}: 95c81b2b904968a0752948b881560cad File-Size{3}: 511 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Description{22}: Beowulf Parallel Linux } @FILE { http://cesdis.gsfc.nasa.gov/hpccm/WordTemp-11 Update-Time{9}: 827948794 title{18}: NASA HPCC Program images{29}: hpcc.graphics/hpcc.header.gif body{36}: background="hpcc.graphics/back.gif"> MD5{32}: 6555e2ea04555b21ef476729b5767475 File-Size{4}: 4659 Type{4}: HTML Gatherer-Version{3}: 1.0 Gatherer-Host{21}: cesdis1.gsfc.nasa.gov Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server Refresh-Rate{7}: 2419200 Time-to-Live{8}: 14515200 Keywords{18}: hpcc nasa program Description{18}: NASA HPCC Program } @FILE { http://cesdis.gsfc.nasa.gov/pub/linux/drivers/tulip.html Update-Time{9}: 827948602 url-references{354}: http://cesdis.gsfc.nasa.gov/cesdis.html /linux/drivers/tulip.c #other tulip.c v1.3/tulip.c new-tulip.c /pub/people/becker/beowulf.html tulip.patch http://cesdis.gsfc.nasa.gov/cesdis.html http://hypatia.gsfc.nasa.gov/NASA_homepage.html http://hypatia.gsfc.nasa.gov/GSFC_homepage.html http://www.hal.com/~markg/WebTechs/ #top /pub/people/becker/whoiam.html title{30}: Linux and the DEC "Tulip" Chip keywords{250}: after all and author becker beowulf better center cesdis chip complete dec description donald driver drivers extra features file fix flight for goddard implemented linux nasa other patch pci performance project space the this top tulip unneeded with images{56}: http://www.hal.com/~markg/WebTechs/images/valid_html.gif headings{142}: Linux and the DEC "Tulip" Chip Errata Using the 10base2 or AUI Port Setting the cache alignment Ethercards reported to use the DEC 21040 chip body{4294}: This page contains information on using Linux with the DEC 21040/21140 "Tulip" chips, as used on the SMC PCI EtherPower and other ethercards. 
The master copy of this page resides on the CESDIS WWW server. The driver for the DEC 21040 "Tulip" chip is now available! It has been integrated with the kernel source tree since 1.1.90, although it remains commented out in the configuration file. This driver works with the SMC PCI EtherPower card as well as many other PCI ethercards. This driver is available in several versions:
- The standard, tested v0.07a for the 1.2.* series released kernels.
- The same conservative driver v0.07a with the extra support needed to work with the 1.3.* development kernels.
- The latest testing version of the driver, with better performance and extra features; this version will compile with all 1.2.* kernels and recent 1.3.* development kernels.
This driver was written to support the Beowulf cluster project at CESDIS. For Beowulf-specific information, read the Beowulf project description. The new generation Beowulf uses two 21140 100baseTX boards on every processor, with each network connected by 100baseTX repeaters. There are two known problems with the code previously distributed:
- The driver always selects the 10baseT (RJ45) port, not the AUI (often 10base2/BNC) port.
- The driver fails with corrupted transfers when used with some motherboard chipsets, such as the Intel Saturn as used on the ASUS SP3G.
Both of these problems have fixes, as described below. The complete patch file fixes these problems as well as cleaning up some of the development messages. The new driver automatically switches media when the 10baseT port fails. On the 21040 it switches to the AUI (usually 10base2) media, and on the 21140 it configures the chip into a 100baseTX-compatible mode. This fix is unneeded in all Tulip drivers after v0.05. To use the 10base2 port with the driver in 1.2.[0-5] you must change the setting of one SIA (serial interface) register. Make the following change around line 325:
-outl(0x00000004, ioaddr + CSR13);
+outl(0x0000000d, ioaddr + CSR13);
This fix is implemented in all Tulip drivers after v0.04. The pre-1.2 driver experienced packet data corruption when used with some motherboards, most notably the ASUS SP3G. The workaround is to set the cache alignment parameters in the Tulip chip to their most conservative values:
--- /usr/src/linux-1.1.84/drivers/net/tulip.c  Sun Jan 22 15:42:12 1995
+++ tulip.c  Sun Jan 22 16:21:44 1995
@@ -268,9 +271,15 @@
 /* Reset the chip, holding bit 0 set at least 10 PCI cycles. */
 outl(0xfff80001, ioaddr + CSR0);
 SLOW_DOWN_IO;
-/* Deassert reset. Wait the specified 50 PCI cycles by initializing
+/* Deassert reset. Set 8 longword cache alignment, 8 longword burst.
+   Cache alignment bits 15:14      Burst length bits 13:8
+   0000  No alignment              0x00000000  unlimited    0800  8 longwords
+   4000  8 longwords               0100  1 longword         1000  16 longwords
+   8000  16 longwords              0200  2 longwords        2000  32 longwords
+   C000  32 longwords              0400  4 longwords
+   Wait the specified 50 PCI cycles after a reset by initializing
    Tx and Rx queues and the address filter list. */
-outl(0xfff80000, ioaddr + CSR0);
+outl(0xfff84800, ioaddr + CSR0);
 if (irq2dev_map[dev->irq] != NULL
     || (irq2dev_map[dev->irq] = dev) == NULL
This is reportedly a bug in the motherboard chipset's implementation of burst mode transfers. The patch above turns on a feature
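
To make the two register-level fixes easier to follow, here is a condensed sketch of the patched initialization path. It is illustrative only, not the driver itself: the function name is hypothetical, the CSR offsets assume the 21040 convention of CSR n at ioaddr + n*8, and SLOW_DOWN_IO is the kernel's I/O delay macro of the era.

    /* tulip_reset_sketch: hypothetical condensation of the two fixes
       shown in the patch fragments above. */
    #include <asm/io.h>

    #define TULIP_CSR0   0x00    /* Bus mode register. */
    #define TULIP_CSR13  0x68    /* SIA connectivity register (21040 only). */

    static void tulip_reset_sketch(int ioaddr, int want_10base2)
    {
        /* Reset the chip, holding bit 0 set at least 10 PCI cycles. */
        outl(0xfff80001, ioaddr + TULIP_CSR0);
        SLOW_DOWN_IO;
        /* Deassert reset with conservative bus settings: bit 14 selects
           8-longword cache alignment and bit 11 caps bursts at 8 longwords,
           sidestepping the Saturn chipset's broken burst-mode transfers. */
        outl(0xfff84800, ioaddr + TULIP_CSR0);
        /* On 1.2.[0-5] kernels, select the AUI/10base2 port by hand. */
        if (want_10base2)
            outl(0x0000000d, ioaddr + TULIP_CSR13);
    }

Later drivers fold both steps in: per the notes above, the SIA change is implemented after v0.04 and media switching is automatic after v0.05.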

Attributes supported by the Harvest Gatherer

Abstract
Brief abstract of the object.

Author
Author(s) of the object.

Description
Brief description of the object.

File-Size
Number of bytes in the object.

Full-Text
Entire contents of the object.

Gatherer-Host
Host on which the Gatherer ran to extract information from the object.

Gatherer-Name
Name of the Gatherer that extracted information from the object (e.g., Full-Text, Selected-Text, or Terse).

Gatherer-Port
Port number on the Gatherer-Host that serves the Gatherer's information.

Gatherer-Version
Version number of the Gatherer.

Keywords
Searchable keywords extracted from the object.

Last-Modification-Time
The time that the object was last modified (in seconds since the epoch).

MD5
MD5 16-byte checksum of the object.

Partial-Text
Only the selected contents from the object.

Refresh-Rate
How often the Broker attempts to update the content summary (in seconds relative to Update-Time).

Time-to-Live
How long the content summary is valid (in seconds relative to Update-Time).
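
Both Refresh-Rate and Time-to-Live are offsets from Update-Time, so the staleness test a Broker applies is plain epoch arithmetic. A minimal sketch in C, with hypothetical names (this is not Harvest source):

    #include <time.h>

    struct summary {
        time_t update_time;   /* Update-Time, seconds since the epoch */
        long   refresh_rate;  /* Refresh-Rate, seconds past Update-Time */
        long   time_to_live;  /* Time-to-Live, seconds past Update-Time */
    };

    /* A summary is due for re-gathering once its refresh point passes. */
    int needs_refresh(const struct summary *s, time_t now)
    {
        return now >= s->update_time + s->refresh_rate;
    }

    /* A summary past its Time-to-Live should no longer be served. */
    int is_expired(const struct summary *s, time_t now)
    {
        return now >= s->update_time + s->time_to_live;
    }

With the values that recur in the records above, a Refresh-Rate of 2419200 seconds works out to 28 days, and a Time-to-Live of 14515200 seconds to 168 days.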

Title
Title of the object.

Type
The object's type. Some example types are: Archive, Audio, Awk, Backup, Binary, C, CHeader, Command, Compressed, CompressedTar, Configuration, Data, Directory, DotFile, Dvi, FAQ, FYI, Font, FormattedText, GDBM, GNUCompressed, GNUCompressedTar, HTML, Image, Internet-Draft, MacCompressed, Mail, Makefile, ManPage, Object, OtherCode, PCCompressed, Patch, Perl, PostScript, RCS, README, RFC, SCCS, ShellArchive, Tar, Tcl, Tex, Text, Troff, Uuencoded, and WaisSource.

Update-Time
The time that the Gatherer updated (generated) the content summary from the object (in seconds since the epoch).

URL
The original URL of the object.

URL-References
Any URL references present within HTML objects.
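
Every attribute in the records above uses the same framing: an attribute name, the length of the value in bytes inside braces, a colon and a space, then exactly that many bytes of value. A minimal reader for that framing, sketched in C in the spirit of the sources catalogued here (the functions are illustrative, not part of Harvest, and real SOIF additionally delimits records with @FILE { url ... }):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Read one "Name{length}: value" attribute from fp.  Returns the
       attribute name (caller frees) and stores the value in *value
       (caller frees); returns NULL on end of input or malformed data. */
    static char *read_attribute(FILE *fp, char **value)
    {
        char name[128];
        long len;

        *value = NULL;
        /* Name up to '{', the byte count, then the literal "}:". */
        if (fscanf(fp, " %127[^{]{%ld}:", name, &len) != 2 || len < 0)
            return NULL;
        fgetc(fp);                 /* Skip the single space after ':'. */

        *value = malloc(len + 1);
        if (*value == NULL || fread(*value, 1, (size_t)len, fp) != (size_t)len) {
            free(*value);
            *value = NULL;
            return NULL;
        }
        (*value)[len] = '\0';
        return strdup(name);
    }

    int main(void)
    {
        char *name, *value;
        while ((name = read_attribute(stdin, &value)) != NULL) {
            printf("%s = %.60s\n", name, value);
            free(name);
            free(value);
        }
        return 0;
    }

Fed the File-Size{4}: 2479 attribute from the records above, this prints "File-Size = 2479"; the explicit byte count is what lets multi-word values such as the body and headings attributes survive without any quoting.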