@DELETE { }
@REFRESH { }
@UPDATE {
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/graphics/mechoso.metric.html
Update-Time{9}: 827948650
url-references{106}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/esm.html
mailto:lpicha@cesdis.gsfc.nasa.gov
title{12}: Metric Chart
keywords{137}: cesdis
challenge
computational
curator
gov
grand
gsfc
larry
lpicha
metric
nasa
picha
return
scientific
technical
the
understanding
write
images{29}: mechoso.metric.gif
return.gif
headings{169}: Earth System Model: Atmosphere/Ocean Dynamics and Tracers Chemistry
PI: Roberto Mechoso
University of California at Los Angeles (UCLA)
Return
to the Technical Write-up
body{961}:
Scientific Grand Challenge: To
develop a global coupled model of the atmosphere and the oceans,
including chemical tracers and biological processes, to be used to
model seasonal cycle and inter-annual variability.
Scientific
Understanding: To test the predicted seasonal cycle and interannual
variability of a coupled atmosphere/ocean model with 100 chemical and
macrophysical tracers and 4x the present spatial resolution.
Computational Challenge: To allow rapid tests of the impact of
model parameterization changes and runs representing multi-year
inter-annual variability and the carbon cycle. Also, to allow
visualization of time-accurate model output in real time. Metric: An
ensemble of the global coupled atmosphere and ocean model simulations
of one or more decades at double the linear resolution of the
atmosphere and four times the resolution for the ocean.
curator:
Larry Picha (lpicha@cesdis.gsfc.nasa.gov)
MD5{32}: 7cc7aca178e1bde21211727da89ee112
File-Size{4}: 1559
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{12}: Metric Chart
}
@FILE { http://cesdis.gsfc.nasa.gov/pub/linux/diag/hp+.c
Update-Time{9}: 827948614
Partial-Text{1843}: block_input_io
block_output_io
do_checksum
main
map_shared_mem
test_io_mem
test_shared_mem
unistd.h
stdio.h
stdlib.h
getopt.h
fcntl.h
sys/mman.h
asm/io.h
/* hp+.c: Diagnostic program for HP PC LAN+ (27247B and 27252A) ethercards. */
/*
Copyright 1994 by Donald Becker.
This version released under the GNU Public License, incorporated herein
by reference. Contact the author for use under other terms.
This is a setup and diagnostic program for the Hewlett Packard PC LAN+
ethercards, such as the HP27247B and HP27252A.
The author may be reached as becker@cesdis.gsfc.nasa.gov.
C/O USRA Center of Excellence in Space Data and Information Sciences
Code 930.5 Bldg. 28, Nimbus Rd., Greenbelt MD 20771
*/
/* { name has_arg *flag val } */
/* The base I/O *P*ort address. */
/* Give help */
/* Transceiver type number (built-in, AUI) */
/* Interrupt number */
/* Switch to NE2000 mode */
/* Verbose mode */
/* Display version number */
/* Switch to shared-memory mode. */
/* Write the EEPROM with the specified vals */
/* A few local definitions. These mostly match the device driver
definitions. */
/* See enum PageName */
/* Offset to the 8390 registers. */
/* First page of TX buffer */
/* Last page +1 of RX ring */
/* The values for HPP_OPTION. */
/* Active low, really UNreset. */
/* ... and their names. */
/*
This is it folks...
*/
/* Transceiver type. */
/* Turn on access to the I/O ports. */
/* Check for the HP+ signature, 50 48 0x 53. */
/* Point at the Software configuration registers. */
/* Point at the Hardware configuration registers. */
/* Point at the "performance" registers. */
/* Ignore the EEPROM configuration, just for testing. */
/* Retrieve and checksum the station address. */
/*
* Local variables:
* compile-command: "gcc -Wall -O6 -o hp+ hp+.c"
* tab-width: 4
* c-indent-level: 4
* End:
*/
MD5{32}: 00eadf4817699ddcce87027f977a43ac
File-Size{4}: 9830
Type{1}: C
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Keywords{943}: access
active
address
and
arg
asm
aui
author
base
becker
bldg
block
buffer
built
center
cesdis
check
checksum
code
command
compile
configuration
contact
copyright
data
definitions
device
diagnostic
display
donald
driver
eeprom
end
enum
ethercards
excellence
fcntl
few
first
flag
folks
for
gcc
getopt
give
gnu
gov
greenbelt
gsfc
hardware
has
help
herein
hewlett
hpp
ignore
incorporated
indent
information
input
interrupt
just
lan
last
level
license
local
low
main
map
match
may
mem
memory
mman
mode
mostly
name
names
nasa
nimbus
number
offset
option
ort
other
output
packard
page
pagename
performance
point
ports
program
public
reached
really
reference
registers
released
retrieve
ring
sciences
see
setup
shared
signature
software
space
specified
station
stdio
stdlib
such
switch
sys
tab
terms
test
testing
the
their
these
this
transceiver
turn
type
under
unistd
unreset
use
usra
val
vals
values
variables
verbose
version
wall
width
with
write
Description{14}: block_input_io
}
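The hp+.c record above describes a probe that enables I/O port access and checks for the HP PC LAN+ signature bytes 50 48 0x 53 at the card's base address. The fragment below is a minimal, self-contained sketch of that check, not the original hp+.c: the default base address (0x300), the placement of the signature at the first four I/O locations, and the handling of the wildcard nibble in the third byte are assumptions made for illustration.

/* Minimal sketch (not the original hp+.c): look for the HP PC LAN+
 * signature bytes 50 48 0x 53 described above.  Assumptions: the bytes
 * sit at the first four I/O locations of the card, the default base is
 * 0x300, and the "x" nibble of the third byte is ignored.  Linux/x86,
 * <sys/io.h>, must run as root for ioperm(). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/io.h>

int main(int argc, char **argv)
{
    unsigned long base = (argc > 1) ? strtoul(argv[1], NULL, 16) : 0x300;
    unsigned char id[4];
    int i, ok;

    if (ioperm(base, 4, 1)) {           /* gain access to the I/O ports */
        perror("ioperm");
        return 1;
    }
    for (i = 0; i < 4; i++)
        id[i] = inb(base + i);          /* read the four ID bytes */

    ok = id[0] == 0x50 && id[1] == 0x48
        && (id[2] & 0xf0) == 0x00       /* "0x": low nibble is a don't-care */
        && id[3] == 0x53;
    printf("%#lx: %02x %02x %02x %02x -> %s\n", base,
           id[0], id[1], id[2], id[3],
           ok ? "HP PC LAN+ signature found" : "no HP+ signature");
    return ok ? 0 : 1;
}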
@FILE { http://cesdis.gsfc.nasa.gov/admin/seminar.series/1107.html
Update-Time{9}: 827948594
title{16}: November 7, 1995
keywords{25}: hosted
jacqueline
moigne
images{137}: http://cesdis.gsfc.nasa.gov/admin/cesdis.seminars/logo.GIF
http://cesdis.gsfc.nasa.gov/admin/cesdis.seminars/nasalogo-tiny.gif
fugate.gif
headings{173}: Mathematical Tools for Remote Sensing Data Analysis Fourth Annual
Seminar Series
November 7, 1995
NASA Goddard Space Flight Center
Building 28, Room
E210
2:00 - 3:00 p.m.
body{255}:
%>
size=2>CENTER OF EXCELLENCE IN SPACE DATA AND INFORMATION
SCIENCESsize=2>
%>
hosted by:
Dr. Jacqueline Le Moigne
Adaptive Optics Techniques for
Compensation of Atmospheric
Distortions
Robert Fugate
USAF Phillips Laboratory
fugate@plk.af.mil
MD5{32}: dd5b657f39c56b9a6307514b2268fe0d
File-Size{4}: 5096
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{16}: November 7, 1995
}
@FILE { http://cesdis.gsfc.nasa.gov/linux/pcmcia/dbether.c
Update-Time{9}: 827948612
Partial-Text{2731}: main
unistd.h
stdio.h
sys/file.h
linux/config.h
linux/kernel.h
linux/sched.h
linux/errno.h
asm/system.h
asm/io.h
/* Ethercard enabler for the Databook TCIC/2 PCMCIA interface chip. */
/* Written 1994 by Donald Becker. */
/*
Notes:
Only works with socket 0.
*/
/* Default base address of the Databook TCIC/2 chip. */
/* Offsets of TCIC/2 registers from the TCIC/2 base address. */
/* Codes put into the top three bits of the TCIC_MODE register
to select which auxiliary register is visible at TCIC_AUX. */
/* Mark that TCIC_ADDR points to internal registers (rather than into the
card address space). */
/* Bit definitions for selected fields (like just those that we use). */
/* Card installed. */
/* Socket control register, TCIC_SCTRL. */
/* Autoincrement after access. */
/* Enable card access to selected soc */
/* Power control register */
/* Enable current limiting */
/* 5 Volt supply control for sock 0 */
/* 5 Volt supply control for sock 1 */
/* I/O map control register */
/* Enable this map */
/* Make the buffers quieter */
/* This map is 1k or less */
/* Interrupt control/status register */
/* Write all bits 7:2 in CSR */
/* Interrupt enable register */
/* Interrupt on any changed to SSTAT*/
/* Make STKIRQ output open drain */
/* Mode register */
/* Mode register, word access */
/* Memory map control register */
/* Mem map ctl reg, enable */
/* Make accesses use quiet mode */
/* Memory map map register. */
/* Map this to card attribute space */
/* System configuration register 1 */
/* This will probe for a TCIC/2 at the standard location. */
/* Adaptor card I/O base. */
/* Which socket to use. */
/* TCIC chip I/O base. */
/* The 0x80 location is for the delay in the *_p() functions. */
/* TCIC/2 locations. */
/* Verify that *something* is at the putative TCIC address. */
/* Select socket 0. */
/* Shut down, then turn on the card */
/*PWR_CURRENTL |*/
/* Enable the current socket and set autoincrement on data accesses. */
/* Map the I/O space starting at 'card_addr' to the
socket specified by 'socket'. Use 8-bit mapping, quiet mode, wait
state value of 7. */
/* Load I/O control register. */
/* Load the system configuration auxiliary register*/
/* Give the chip 50 msecs. to reinitialize. */
/* Point to the socket configuration registers, and load them. */
/* IR_SCF1 for socket 0. */
/* IR_SCF1 for socket 1. */
/* Map the attribute memory into 0xd0000. */
/* Point to WR_MBASE_i */
/* Write the enable byte to the card. */
/* Keep the autoincrement from happening, so we can observe the IRQ
register. */
/*outb(0x00, tcic + TCIC_PWR);*/
/*
* Local variables:
* compile-command: "cc -O -o dbether dbether.c -N -Wall"
* c-indent-level: 4
* tab-width: 4
* End:
*/
MD5{32}: 48abde71351ae49a0394595c41598082
File-Size{4}: 6962
Type{1}: C
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Keywords{1030}: access
accesses
adaptor
addr
address
after
all
and
any
asm
attribute
autoincrement
aux
auxiliary
base
becker
bit
bits
buffers
byte
can
card
changed
chip
codes
command
compile
config
configuration
control
csr
ctl
current
currentl
data
databook
dbether
default
definitions
delay
donald
down
drain
enable
enabler
end
errno
ethercard
fields
file
for
from
functions
give
happening
indent
installed
interface
internal
interrupt
into
irq
just
keep
kernel
less
level
like
limiting
linux
load
local
location
locations
main
make
map
mapping
mark
mbase
mem
memory
mode
msecs
notes
observe
offsets
only
open
outb
output
pcmcia
point
points
power
probe
put
putative
pwr
quiet
quieter
rather
reg
register
registers
reinitialize
scf
sched
sctrl
select
selected
set
shut
soc
sock
socket
something
space
specified
sstat
standard
starting
state
status
stdio
stkirq
supply
sys
system
tab
tcic
than
that
the
them
then
this
those
three
top
turn
unistd
use
value
variables
verify
visible
volt
wait
wall
which
width
will
with
word
works
write
written
Description{4}: main
}
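The dbether.c record above outlines the Databook TCIC/2 access pattern: a code written into the top three bits of TCIC_MODE selects which auxiliary register is visible at TCIC_AUX, the socket is powered and mapped, and the chip is given 50 msecs. to reinitialize. The sketch below illustrates only that mode/aux access pattern; it is not the original dbether.c, and the base address, register offsets, and shift value are placeholders rather than values from the Databook data sheet.

/* Minimal sketch (not the original dbether.c) of the access pattern
 * described above: a code in the top three bits of TCIC_MODE selects
 * which auxiliary register is visible at TCIC_AUX.  The base address,
 * register offsets and shift below are placeholders for illustration,
 * not values from the Databook data sheet.  Linux/x86, root required. */
#include <stdio.h>
#include <unistd.h>
#include <sys/io.h>

#define TCIC_BASE         0x240   /* placeholder default base address     */
#define TCIC_MODE         0x08    /* placeholder offset: mode register    */
#define TCIC_AUX          0x0a    /* placeholder offset: aux window       */
#define MODE_AUXSEL_SHIFT 5       /* "top three bits" select the aux reg  */

/* Make auxiliary register 'sel' (0..7) visible at TCIC_AUX, then write it. */
static void write_aux(int sel, unsigned char value)
{
    unsigned char mode = inb(TCIC_BASE + TCIC_MODE);

    mode = (unsigned char)((mode & 0x1f) | (sel << MODE_AUXSEL_SHIFT));
    outb(mode, TCIC_BASE + TCIC_MODE);  /* select the aux register */
    outb(value, TCIC_BASE + TCIC_AUX);  /* write through the window */
}

int main(void)
{
    if (ioperm(TCIC_BASE, 16, 1)) {     /* gain access to the chip's ports */
        perror("ioperm");
        return 1;
    }
    write_aux(0, 0x00);                 /* e.g. load a configuration register */
    usleep(50 * 1000);                  /* "give the chip 50 msecs. to reinitialize" */
    return 0;
}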
@FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node30.html
Update-Time{9}: 827948636
title{9}: Overview
keywords{36}: aug
chance
edt
overview
reschke
tue
images{193}: /usr/local/src/latex2html/icons/next_motif.gif
/usr/local/src/latex2html/icons/up_motif.gif
/usr/local/src/latex2html/icons/previous_motif.gif
/usr/local/src/latex2html/icons/contents_motif.gif
head{469}: Next: Simulation of Mixing Up: Heterogeneous Computing: One Previous:
Examples of Mixed-Machine Overview Three examples of existing HC
systems are very briefly introduced here. In the first two, the
decomposition of tasks into subtasks and the
assignment of subtasks
to machines were user specified. The third, SmartNet,
schedules tasks
in an HC system. The long-term goal of automatic HC
is discussed in
the next section. Chance Reschke
Tue Aug 15 08:59:12 EDT 1995
MD5{32}: 8c4f30cd4dd967ac9e2c440c9623d074
File-Size{4}: 1687
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{9}: Overview
}
@FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node11.html
Update-Time{9}: 827948634
title{20}: Report Organization
keywords{47}: aug
chance
edt
organization
report
reschke
tue
images{193}: /usr/local/src/latex2html/icons/next_motif.gif
/usr/local/src/latex2html/icons/up_motif.gif
/usr/local/src/latex2html/icons/previous_motif.gif
/usr/local/src/latex2html/icons/contents_motif.gif
head{1695}: Next: Issues for Petaflops Up: Introduction Previous: Workshop Approach
Report Organization This report is being published as a Technical
Report of the Center of Excellence in Space Data and Information
Sciences, Universities Space Research Association, in cooperation with
NASA's Goddard Space Flight Center. Section 1 briefly describes several
key events or activities that preceded The Petaflops Frontier
Workshop, the objectives and approach of the workshop and the report
organization. Section 2 summarizes the key issues of petaflops
computing. Much of the discussion is based on the excellent work and
report from the Workshop on Enabling Technologies for Peta(FL)OPS
Computing in Pasadena in 1994. The discussion provides a synopsis of
the important findings and conclusions from that workshop. Section 3
includes The Petaflops Frontier Workshop agenda and information
about the organizers, the presenters and the participants. Section 4 is
a synthesis of the presentations at The Petaflops Frontier Workshop in
McLean, VA February 6, 1995. Eighteen presentations addressed various
aspects of architecture, technology, applications, and
algorithms. Section 5 consists of extended abstracts from the workshop
presentations in the areas of architecture and technology, and Section
6 includes the extended abstracts from the applications and algorithms
presentations. These are included both to ensure the technical content
and to provide the reader with material provided directly by the
participants. Section 7 distills the workshop results and presentations
in a comprehensive discussion of conclusions and recommendations for
follow-on activities.
Chance Reschke
Tue Aug 15 08:59:12 EDT 1995
MD5{32}: 0015fbace4ce267947b67c7ba75f2aa5
File-Size{4}: 2965
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{20}: Report Organization
}
@FILE { http://cesdis.gsfc.nasa.gov/linux/pcmcia/cardd/card.insert
Update-Time{9}: 827948613
Partial-Text{185}: # PCMCIA card insertion script.
# Written by Donald Becker 1994.
# This script is called by 'cardd' when a PCMCIA card is inserted.
# The following environment variables will be set:
MD5{32}: 7cc70d261f3cc242e8eed0c345f2a208
File-Size{4}: 1797
Type{7}: Command
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Keywords{128}: becker
called
card
cardd
donald
environment
following
inserted
insertion
pcmcia
script
set
the
this
variables
when
will
written
Description{31}: # PCMCIA card insertion script.
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/ess.intro.html
Update-Time{9}: 827948649
url-references{59}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94.html
title{24}: ESS Applications Project
keywords{100}: and
approach
goal
management
objectives
organization
page
plan
previous
project
return
strategy
the
images{19}: graphics/return.gif
headings{99}: Introduction to the
Earth and Space Science (ESS) Applications Project
Return
to the PREVIOUS PAGE
body{5951}:
Project Goal and Objectives:
The goal of the ESS Project is to demonstrate the potential afforded by
teraFLOPS systems' performance to further our understanding and ability
to predict the dynamic interaction of physical, chemical and biological
processes affecting the solar-terrestrial environment and the universe.
Project activities are focused on selected NASA Grand Challenge
science applications. Many of the Grand Challenges address the
integration and execution of multiple advanced disciplinary models into
single multidisciplinary applications. Examples of these include
coupled oceanic atmospheric biospheric interactions, 3-D simulations of
the chemically perturbed atmosphere, solid earth modeling, solar flare
modeling and 3-D compressible magnetohydrodynamics. Others are
concerned with analysis and assimilation into models of massive data
sets taken by orbiting sensors. These problems are significant in that
they have both social and political implications in our society. The
science requirements inherent in the NASA Grand Challenge applications
necessitate computing performance into the teraFLOPS range.
The
project is driven by five specific objectives: 1) Support the
development of massively parallel, scalable, multidisciplinary models
and data processing algorithms; 2) Make available prototype, scalable,
parallel architectures and massive data storage systems to ESS
researchers; 3) Prepare the software environments to facilitate
scientific exploration and the sharing of information and tools; 4)
Develop data management tools for high-speed access management and
visualization of data with teraFLOPS computers; and 5) Demonstrate the
scientific and computational impact for Earth and space science
applications.
Strategy and Approach: The ESS strategy is to invest
the first four years of the project (FY92-95) in formulation of
specifications for complete and balanced teraFLOPS computing systems to
support Earth and space science applications, and the next two years
(FY96-97) in acquisition and augmentation of such a GSFC resident
system into a stable and operational capability, suitable for migration
into Code Y/S computing facilities. The ESS approach involves three
principal components: 1) Use a NASA Research Announcement (NRA) to
select Grand Challenge Applications and Principal Investigator Teams
that require teraFLOPS computing for NASA science problems. Eight
collaborative multidisciplinary Principal Investigator Teams including
physical and computational scientists, software and systems engineers,
and algorithm designers are addressing the Grand Challenges. In
addition, 21 Guest Computational Investigators are developing specific
scalable algorithmic techniques. The Investigators provide a means to
rapidly evaluate and guide the maturation process for scalable
massively parallel algorithms and system software and to thereby reduce
the risks assumed by later ESS Grand Challenge researchers when
adopting massively parallel computing technologies. 2) Provide
successive generations of scalable computing systems as Testbeds for
the Grand Challenge Applications; Interconnect the Investigators and
the Testbeds through high speed network links (Coordinated through the
National Research & Education Network); and Provide a software
development environment and computational techniques support to the
Investigators. 3) In collaboration with the Investigator Teams, conduct
evaluations of the testbeds across applications and architectures
leading to down select to the next generation scalable teraFLOPS
testbed.
Organization: The Goddard Space Flight Center serves as
the lead center for the ESS Project and collaborates with the Jet
Propulsion Laboratory. The HPCC/ESS Inter-center Technical Committee,
chaired by the ESS Project Manager, coordinates the Goddard/JPL roles.
The ESS Applications Steering Group, composed of representatives from
each science discipline office at NASA Headquarters and from the High
Performance Computing Office in Code R, as well as representatives from
Goddard and JPL, provides ongoing oversight and guidance to the
project.
The Office of Aeronautics and Space Technology, jointly
with the Office of Space Science and Applications, selected the ESS
Investigators through the peer reviewed NASA Research Announcement
process. The ESS Science Team, composed of the Principal Investigators
chosen through the ESS NRA, and chaired by the ESS Project Scientist,
organizes and carries out periodic workshops for the investigator teams
and coordinates the computational experiments of the Investigations.
The ESS Evaluation Director leads development of ESS computational and
throughput benchmarks which are representative of the ESS computational
workload. A staff of in-house computational scientists develops
scalable computational techniques which address the Computational
Challenges of the ESS Investigators.
The ESS Project Manager serves
as a member of the NASA wide High Performance Computing Working Group
and representatives from each Center serve on the NASA wide Technical
Coordinating Committees for Applications, Testbeds, and System Software
Research.
Management Plan: The project is managed in accordance
with the formally approved ESS Project Plan. The ESS Project Manager at
GSFC and the JPL Task Leader together oversee coordinated development
of Grand Challenge applications, high performance computing testbeds,
and advanced system software for the benefit of the ESS Investigators.
Monthly, quarterly, and annual reports are provided to the High
Performance Computing Office in Code R. ESS and its Investigators
contribute annual software submissions to the High Performance
Computing Software Exchange.
Points of Contact: Jim Fischer
Goddard Space Flight Center, Code 934
fischer@nibbles.gsfc.nasa.gov,
301-286-3465
Robert Ferraro
Jet Propulsion Laboratory
ferraro@zion.jpl.nasa.gov, 818-354-1340
MD5{32}: d64e50901e924606b53675fd6d7ea7f9
File-Size{4}: 6565
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{24}: ESS Applications Project
}
@FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node7.html
Update-Time{9}: 827948634
url-references{33}: footnode.html#62
footnode.html#63
title{23}: Historical Perspective
keywords{50}: aug
chance
edt
historical
perspective
reschke
tue
images{481}: /usr/local/src/latex2html/icons/next_motif.gif
/usr/local/src/latex2html/icons/up_motif.gif
/usr/local/src/latex2html/icons/previous_motif.gif
/usr/local/src/latex2html/icons/contents_motif.gif
/usr/local/src/latex2html/icons/foot_motif.gif
/usr/local/src/latex2html/icons/foot_motif.gif
/usr/local/src/latex2html/icons/next_motif.gif
/usr/local/src/latex2html/icons/up_motif.gif
/usr/local/src/latex2html/icons/previous_motif.gif
/usr/local/src/latex2html/icons/contents_motif.gif
head{4799}: Next: The Petaflops Frontier Up: Introduction Previous: What is
Petaflops? Historical Perspective As early as December 1991 the
challenge of petaflops computing was receiving serious consideration at
the Purdue Workshop on Grand Challenges in Computer Architecture for
the Support of High Performance Computing sponsored by the National
Science Foundation. The workshop co-chairs identified achieving petaops
performance as one of four grand challenge problems in computer
architecture. The authors noted that, "This ... challenge [of
achieving peta-ops computing] is to dramatically improve and
effectively harness the base technologies into a future computer system
that will provide usable peta-ops of computer performance to grand
challenge application programmers." Following the Purdue workshop, the
issue of petaflops computing was addressed by the High Performance
Computing, Communications and Information Technology Subcommittee
(HPCCIT). The HPCCIT, comprised of representatives from the major
government agencies involved in the HPCC program, proposed that
enabling technologies for petaflops computing be addressed in a
workshop in the near future. Soon after the meeting, the Administrator
of NASA convened a special initiative team to evaluate its existing and
future high performance computing requirements. The NASA Supercomputing
Special Initiative Team used a projected 10-year period to assess the
implications of the computational aerosciences and Earth and space
sciences grand challenges with respect to (1) established NASA
requirements, (2) other U.S. government HPC activities, including
advanced architectures, component technologies, and communications, (3)
U.S. industry efforts, (4) activities in academia and other
organizations, and (5) the approach and progress of foreign efforts. The
team re-affirmed the findings of the earlier Pasadena workshop with
respect to the requirements to achieve teraflops computing. The team
also concluded that some NASA grand challenge problems would require
petaflops computing performance. In their assessment the team
identified seven major technology barriers to achieving petaflops-level
performance: systems software, memory speed, aggregate I/O, interprocessor
speed, processor speed, packaging, and power management. Other government
agencies, academia and industry were no less aware of the need to
extend their horizons beyond the teraflops regime. The combination of
this awareness, the HPCCIT meeting, and the report of NASA's
Supercomputing Special Initiative Team, helped form the basis of the
first workshop on petaflops computing. In February 1994 in Pasadena,
California, Caltech hosted the first major workshop to address
petaflops computing. The Workshop on Enabling Technologies for
Peta(FL)OPS Computing involved over 60 invited experts in all aspects
of high performance computing technology who met to establish the basis
for considering future research initiatives that could lead to the
development, production, and application of petaflops scaled computing
systems. The objectives of the Pasadena workshop were to (1) identify
applications that require petaflops performance and determine their
resource demands, (2) determine the scope of the technical challenge to
achieving effective petaflops computing, (3) identify critical enabling
technologies that lead to petaflops computing capability, (4) establish
key research issues, and (5) recommend elements of a near-term research
agenda. Over a period of three days the Pasadena workshop focused on the
following major and inter-related topic areas: Applications and
Algorithms; Device Technology; Architecture and Systems; and Software
Technology.
Despite the expected challenges, the participants
concluded that a petaflops computing system should be feasible in 20
years. This prediction was partly based on an assumption that during
the 20 years the semiconductor industry would continue advancing in
speed enhancement and in cost reduction through improved fabrication
processes. And, although the workshop concluded that no paradigm shift
would be needed in systems architecture, managing active latency would
be essential and require a very high degree of fine-grain parallelism
along with the mechanisms to exploit it. Also, a mix of technologies
might be required, including semiconductor for main memory, optics for
inter-processor (and possibly inter-chip) communications and secondary
storage, and perhaps cryogenic (e.g., Josephson Junction) for very high
clock rate and very low power processor logic. Finally, dramatic per
device cost reduction and innovative approaches to system software and
programming methodologies would be essential. Chance Reschke
Tue Aug 15 08:59:12 EDT 1995
MD5{32}: d6eea59366f691425eac60a56d623e30
File-Size{4}: 7278
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{23}: Historical Perspective
}
@FILE { http://cesdis.gsfc.nasa.gov/admin/inf.eng/wave.tutorial.fin/comm.decency.act.html
Update-Time{9}: 827948691
title{26}: Communications Decency Act
keywords{9}: tutorial
images{14}: wave.small.gif
headings{16}: Can we ride the
body{88}:
TUTORIAL: The Policy Wave is Coming: Authorship in a U.S.
Government Agency Context
MD5{32}: 64373b8b5e542e6ce642ee6c650916fe
File-Size{4}: 9272
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{26}: Communications Decency Act
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/testbed/
Update-Time{9}: 827948842
url-references{79}: /hpccm/annual.reports/cas94contents/
cra.html
cra1.html
graphics/
parallel.html
title{53}: Index of /hpccm/annual.reports/cas94contents/testbed/
keywords{44}: cra
directory
graphics
html
parallel
parent
images{96}: /icons/blank.xbm
/icons/menu.gif
/icons/text.gif
/icons/text.gif
/icons/menu.gif
/icons/text.gif
headings{53}: Index of /hpccm/annual.reports/cas94contents/testbed/
body{200}:
Name Last modified Size Description
Parent Directory 17-Oct-95 15:42 -
cra.html 19-Jul-95 15:23 3K
cra1.html 19-Jul-95 15:26 3K
graphics/ 09-Nov-95 14:43 -
parallel.html 19-Jul-95 15:25 3K
MD5{32}: 27b42d3ce0f09d152aa40329bab611d3
File-Size{3}: 935
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{53}: Index of /hpccm/annual.reports/cas94contents/testbed/
}
@FILE { http://cesdis.gsfc.nasa.gov/pub/linux/html/Ethernet-HOWTO-8.html
Update-Time{9}: 827948630
url-references{340}: Ethernet-HOWTO.html#toc8
Ethernet-HOWTO-1.html#mailing-lists
http://cesdis.gsfc.nasa.gov/linux/pcmcia.html
Ethernet-HOWTO-3.html#xircom
Ethernet-HOWTO-3.html#de-600
Ethernet-HOWTO-3.html#aep-100
Ethernet-HOWTO-3.html#aep-100
Ethernet-HOWTO-9.html
Ethernet-HOWTO-7.html
Ethernet-HOWTO.html#toc8
Ethernet-HOWTO.html#toc
Ethernet-HOWTO.html
#0
title{42}: Networking with a Laptop/Notebook Computer
keywords{267}: adaptors
beginning
built
chapter
computer
contents
docking
don
ethercard
isa
keyboard
laptop
lists
mailing
net
networking
next
notebook
parallel
pcmcia
pocket
port
power
previous
realtek
section
slip
station
stuff
support
surfing
table
the
this
top
using
with
xircom
headings{129}: 8 Networking with a Laptop/Notebook Computer
8.1 Using SLIP
8.2 Built in NE2000
8.3 PCMCIA Support
8.4 ISA Ethercard in the Docking Station.
8.5 Pocket / parallel port adaptors.
body{4543}: Networking with a Laptop/Notebook Computer Contents of this section
There are currently only a few ways to put your laptop on a
network.
You can use the SLIP code (and run at serial line
speeds);
you can buy one of the few laptops that come with a
NE2000-compatible
ethercard; you can get a notebook with a
supported
PCMCIA slot built-in; you can get a laptop with a
docking
station and plug in an ISA ethercard; or you can use a
parallel port
Ethernet adapter such as the D-Link DE-600.
This is the
cheapest solution, but by far the most difficult. Also,
you will not
get very high transmission rates. Since SLIP is not
really related to
ethernet cards, it will not be discussed further
here. See the NET-2
Howto.
This solution severely limits your laptop choices
and is fairly
expensive. Be sure to read the specifications carefully,
as you
may find that you will have to buy an additional non-standard
transceiver to actually put the machine on a network. A good
idea
might be to boot the notebook with a kernel that has
ne2000 support,
and make sure it gets detected and works
before you lay down your
cash.
PCMCIA Support
As this area of Linux development
is fairly young, I'd suggest
that you join the LAPTOPS mailing
channel. See
Mailing lists...
which describes how to join a
mailing list channel.
Try and
determine exactly what hardware you
have (ie. card manufacturer,
PCMCIA chip controller manufacturer) and
then ask on the LAPTOPS
channel. Regardless, don't expect things to be
all that simple.
Expect to have to fiddle around a bit, and patch
kernels, etc.
Maybe someday you will be able to type `make config' 8-)
At present, the two PCMCIA chipsets that are supported are
the
Databook TCIC/2 and the intel i82365.
There are a number of programs
on tsx-11.mit.edu in
/pub/linux/packages/laptops/ that you may find
useful. These
range from PCMCIA Ethercard drivers to programs that
communicate
with the PCMCIA controller chip. Note that these drivers
are
usually tied to a specific PCMCIA chip (ie. the intel 82365
or
the TCIC/2)
For NE2000 compatible cards, some people have had
success
with just configuring the card under DOS, and then
booting
linux from the DOS command prompt via .
For those that are
net-surfing you can try:
Don's PCMCIA Stuff
Anyway, the PCMCIA
driver problem isn't specific to the Linux world.
It's been a real
disaster in the MS-DOS world. In that world
people expect the hardware
to work if they just follow the manual.
They might not expect it to
interoperate with any other hardware
or software, or operate
optimally, but they do expect that the
software shipped with the
product will function. Many PCMCIA
adaptors don't even pass this test.
Things are looking up for Linux users that want PCMCIA support,
as
substantial progress is being made. Pioneering this effort
is
David Hinds. His latest PCMCIA support package can be
obtained
from in the directory
. Look for a file like
where X.Y.Z
will be the latest version
number. This is most likely uploaded to
as
well.
Note that Donald's PCMCIA enabler works as a
user-level
process, and David Hinds' is a kernel-level solution.
You
may be best served by David's package as it is
much more widely used.
Docking stations for laptops typically cost about $250
and provide two full-size ISA slots, two serial and one
parallel
port. Most docking stations are powered off of the
laptop's batteries,
and a few allow adding extra batteries in the
docking station if you
use short ISA cards. You can add an inexpensive
ethercard and enjoy
full-speed ethernet performance.
The `pocket' ethernet
adaptors may also fit your need.
Until recently they actually cost
more than a docking station and
cheap ethercard, and most tie you down
with a wall-brick power supply.
At present, you can choose from the
D-Link, or the RealTek adaptor.
Most other companies, especially
Xircom,
(see
Xircom
)
treat the programming
information as a
trade secret, so support will likely be slow in
coming. (if ever!)
Note that the transfer speed will not be all that great
(perhaps
100kB/s tops?) due to the limitations of the
parallel port interface.
See
DE-600 / DE-620
and
RealTek
for supported pocket
adaptors.
You can sometimes avoid the wall-brick with the adaptors by
buying
or making a cable that draws power from the laptop's
keyboard
port. (See
keyboard power
)
Next Chapter,
Previous Chapter Table of contents of this chapter ,
General table of
contents
Top of the document,
Beginning of this Chapter
MD5{32}: 53a6d4b4679364fe06f5010dcfb031b7
File-Size{4}: 5830
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{42}: Networking with a Laptop/Notebook Computer
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/in.house/deane.html
Update-Time{9}: 827948654
Description{49}: Compressible Convection via FCT on MIMD Computers
Time-to-Live{8}: 14515200
Refresh-Rate{7}: 2419200
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Version{3}: 1.0
Type{4}: HTML
File-Size{4}: 2949
MD5{32}: 717e8205e65dc1fe3e8929b9ad0098d0
body{2390}:
Objective: The fusion-generated energy in the deep solar interior
is largely carried by motions within the outer one-third of the Sun.
The modeling and understanding of this, the SOLAR CONVECTION ZONE, is a
Grand Challenge problem, being part of the PI efforts of Prof. R.
Rosner and the GCI of Prof. R. Stein. (In addition, the PI team of Dr.
J. Gardner uses FCT which is the core computational technique of this
project). Our intent in this work is to augment the algorithm, machine
architecture, and physics choices available for this modeling.
Approach: In collaboration with Drs. S. Zalesak and D. Spicer, the
technique of (F)lux (C)orrected (T)ransport has been used to model the
three dimensional hydrodynamical problem of compressible convection
within a stratified atmosphere on parallel computers.
Accomplishments: A three-dimensional hydrodynamics code has been developed
that runs on the Cray C90 and workstations as well as the Intel machines
(Delta and Paragon) under the NX operating system and the Cray T3D under PVM. The code
is written as a template using the C preprocessor, so that it produces
only relevant code for the particular boundary conditions and target
machine using command line switches.
Significance: The physical
problem of compressible convection can be modeled, along with other
problems, with this code. The user can add new physics, boundary
conditions and message passing calls with minimum effects on the core
algorithm.
Status/Plans: The addition of magnetic fields is
nearing completion. The addition of message passing calls specific to
the IBM SP2 is anticipated shortly.
Figure caption:
The figure
shows the results of a (120x120x120) simulation. The panel of 4
pictures shows the vertical velocity and temperature, corresponding to
looking at the surface of the Sun. The isometric on the right is the
vertical velocity field. The picture of Solar granulation is that of
light intensity of the Solar surface. The purpose of the illustration
is to show that the granular feature of the flow on the Sun is readily captured
by the simulations. The simulations can reveal the hidden third
dimension. The flow is found to become supersonic with asymmetry
between up and down motions. (cf. the simulations of the PI team of Prof.
R. Rosner).
Point of Contact: Dr. Anil Deane
NASA Goddard
Space Flight Center
(301) 286-7803
deane@laplace.gsfc.nasa.gov
curator: Larry Picha
headings{78}: Compressible Convection via FCT on MIMD Computers
Return
to the PREVIOUS PAGE
images{36}: graphics/fct.gif
graphics/return.gif
keywords{60}: caption
curator
figure
larry
page
picha
previous
return
the
title{49}: Compressible Convection via FCT on MIMD Computers
url-references{111}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/in-house.html
mailto:lpicha@cesdis.gsfc.nasa.gov
}
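The record above notes that the FCT code is written as a template using the C preprocessor, so that command-line switches produce only the code relevant to a particular boundary condition and target machine. The fragment below is a small sketch of that template style, not the actual FCT code; the macro names (BC_PERIODIC, TARGET_PVM, TARGET_NX) and the toy boundary/halo routines are invented for illustration, e.g. built with: cc -DBC_PERIODIC -DTARGET_PVM -o demo demo.c

/* Minimal sketch of the preprocessor-template idea described above, not
 * the actual FCT code.  Compile-line switches select the boundary
 * condition and target machine. */
#include <stdio.h>

static void apply_boundary(double *u, int n)
{
#if defined(BC_PERIODIC)
    /* Periodic boundary: wrap the edge cells around. */
    u[0]     = u[n - 2];
    u[n - 1] = u[1];
#else
    /* Default: simple zero-gradient boundary. */
    u[0]     = u[1];
    u[n - 1] = u[n - 2];
#endif
}

static void exchange_halo(double *u, int n)
{
#if defined(TARGET_PVM)
    /* A PVM build would call pvm_send()/pvm_recv() here. */
    (void)u; (void)n;
#elif defined(TARGET_NX)
    /* An Intel NX build would call csend()/crecv() here. */
    (void)u; (void)n;
#else
    (void)u; (void)n;   /* serial build: nothing to exchange */
#endif
}

int main(void)
{
    double u[8] = {0, 1, 2, 3, 4, 5, 6, 7};

    apply_boundary(u, 8);
    exchange_halo(u, 8);
    printf("u[0]=%g u[7]=%g\n", u[0], u[7]);
    return 0;
}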
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/testbed.html
Update-Time{9}: 827948649
url-references{377}: testbed/cra.html
testbed/cra1.html
testbed/parallel.html
http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94.html
http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/main94.html
http://www.nas.nasa.gov/HPCC/home.html
http://cesdis.gsfc.nasa.gov/people/lpicha/whoiam.html
http://cesdis.gsfc.nasa.gov/
http://www.usra.edu/
http://sdcd.gsfc.nasa.gov
http://sdcd.gsfc.nasa.gov/ESD/
keywords{350}: aerosciences
agreement
and
announcement
annual
association
center
computational
computing
cooperative
data
directorate
directory
division
earth
excellence
for
home
hpcc
hpccpt
information
lawrence
main
multiphysics
nasa
page
parallel
picha
previous
processing
product
project
report
research
return
sciences
simulation
space
testbed
the
universities
images{19}: graphics/return.gif
head{923}: CAS Testbed Activities
NASA High Performance Computing and Communications (HPCC) Program
Computational Aerosciences Project
Testbed Activities
NASA HPCC 1994 Annual Report
The HPCCPT-1 Cooperative Research Announcement
The HPCC Testbed-1 Cooperative Research Agreement
Multiphysics Product Simulation
Parallel Processing Testbed
Return to the PREVIOUS PAGE
Other Paths: Go to the Main Directory for The NASA HPCC 1994 Annual Report
Go to The Computational Aerosciences Project Home Page
The NASA HPCC Home Page
Authorizing NASA Official:
Author: Lawrence Picha (lpicha@usra.edu)
Center of Excellence in Space Data and Information Sciences,
Universities Space Research Association,
NASA Goddard Space
Flight Center, Greenbelt, Maryland.
Last revised: 01 JULY 95
(l.picha)
(A service of the Space Data and Computing Division , the
Earth Sciences Directorate , NASA Goddard Space Flight Center)
MD5{32}: d68d2e5868f14227acdfb64ee5de915c
File-Size{4}: 2071
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
}
@FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node5.html
Update-Time{9}: 827948633
url-references{235}: node6.html#SECTION00051000000000000000
node7.html#SECTION00052000000000000000
node8.html#SECTION00053000000000000000
node9.html#SECTION00053100000000000000
node10.html#SECTION00053200000000000000
node11.html#SECTION00054000000000000000
title{13}: Introduction
keywords{140}: approach
aug
chance
edt
frontier
historical
introduction
objectives
organization
perspective
petaflops
report
reschke
the
tue
what
workshop
images{193}: /usr/local/src/latex2html/icons/next_motif.gif
/usr/local/src/latex2html/icons/up_motif.gif
/usr/local/src/latex2html/icons/previous_motif.gif
/usr/local/src/latex2html/icons/contents_motif.gif
head{745}: Next: What is Petaflops? Up: No Title Previous: List of Tables
Introduction Even as the Federal HPCC Program works towards achieving
teraflops computing, policy makers and future research program planners
in government, academia, and industry have concluded that teraflops-level
computing systems will be inadequate to address many scientific and
engineering problems that exist now, let alone applications that will
arise in the future. As a result, the high performance computing
community is examining the feasibility of achieving petaflops-level
computing over a 20-year period. What is Petaflops? Historical
Perspective The Petaflops Frontier Workshop
Objectives Workshop
Approach Report Organization Chance Reschke
Tue Aug 15 08:59:12 EDT
1995
MD5{32}: d10efabb970c7088b7fb11d63728da74
File-Size{4}: 2455
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{13}: Introduction
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/brhr.html
Update-Time{9}: 827948658
url-references{431}: brhr.intro.html
brhr/summer.html
brhr/object.html
brhr/petaflops.html
http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94.html
http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/main94.html
http://sdcd.gsfc.nasa.gov/ESS/
http://cesdis.gsfc.nasa.gov/hpccm/hpcc.nasa.html
http://cesdis.gsfc.nasa.gov/people/lpicha/whoiam.html
http://cesdis.gsfc.nasa.gov/
http://www.usra.edu/
http://sdcd.gsfc.nasa.gov
http://sdcd.gsfc.nasa.gov/ESD/
title{8}: ESS BRHR
keywords{501}: and
annual
association
author
authorizing
center
computation
computational
computing
data
directorate
directory
distributed
division
earth
edu
enabling
excellence
flight
for
fourth
goddard
greenbelt
high
home
hpcc
information
last
lawrence
lpicha
main
maryland
may
nasa
official
object
ops
oriented
overview
page
parallel
performance
peta
physics
picha
previous
programming
project
report
research
return
revised
school
science
sciences
service
space
summer
technologies
the
universities
usra
workshop
images{115}: graphics/ess-small.gif
graphics/convect-bar.gif
graphics/convect-bar.gif
graphics/return.gif
graphics/hpccsmall.gif
headings{117}: NASA
High Performance Computing and Communications (HPCC) Program
ESS Basic Research
and Human Resources
Overview
body{949}:
Earth and Space Science (ESS)
Project
NASA HPCC 1994 Annual Report
Fourth NASA
Summer School in High Performance Computational Physics Object-Oriented
Programming for High Performance Parallel and Distributed Computation
Workshop on Enabling Technologies for Peta(FL)OPS Computing
Return to the PREVIOUS PAGE Other Paths:
Go to the Main Directory
for The NASA HPCC 1994 Annual Report Go to the Earth and Space Science
Project Home Page
Go to The NASA HPCC Home Page
Authorizing
NASA Official: Lee B. Holcomb, Director, NASA HPCC Office
Author:
Lawrence Picha (lpicha@usra.edu) Center of Excellence in Space Data and
Information Sciences ,
Universities Space Research
Association ,
NASA Goddard Space Flight Center, Greenbelt, Maryland.
Last
revised: 30 MAY 95 (l.picha)
(A service of the Space Data and
Computing Division , the Earth Sciences Directorate , NASA Goddard
Space Flight Center)
MD5{32}: 9dc515f4902773dad46fd9837643154e
File-Size{4}: 2180
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{8}: ESS BRHR
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.p2d2.html
Update-Time{9}: 827948663
url-references{48}: http://www.nas.nasa.gov/NAS/Tools/Projects/P2D2/
title{49}: The Portable Parallel/Distributed Debugger (p2d2)
keywords{112}: accomplishments
approach
contact
gov
http
nas
nasa
objective
plans
point
projects
significance
status
tools
www
headings{49}: The Portable Parallel/Distributed Debugger (p2d2)
body{2371}:
Objective: The objective of the p2d2 project is to build a debugger
for
multiprocess programs that are distributed across a
heterogeneous
collection of machines. Later versions of the tool will
be tailored
to the computational fluid dynamics (CFD) programming
community.
Achievement of this goal will put an effective program
development
tool in the hands of CFD programmers.
Approach: In the
design of p2d2 we have employed a client-server architecture.
This
approach permits us to isolate the architecture- and
operating
system-dependent code in a server. Thus, the client-side
code
remains highly portable. We have designed scalable user
interface
elements in expectation that users will want to debug
computations
involving many (say 16-256) processes.
Accomplishments:
Demonstration of prototype at Supercomputing '94
Papers at Supercomputing '94 and HICSS-28
Scalable process navigation paradigm designed and implemented
Technical report describing process navigation paradigm
Version 1.0 implementation (for programs using the Message Passing
Interface (MPI) communication library on the IBM SP2) nearly complete
Began work with first user
Demonstrated scalable user interface elements at Supercomputing '95
Note: the accompanying
graphic shows p2d2 being used to debug the NAS parallel benchmark "mg".
The program is running on the front-end and 16 of the computational
nodes of the IBM SP2. The left-hand-side of the graphic has the main
window of the debugger which shows the status of all of the processes
and the location in the source
for one of them. The windows on the
right-hand side give
a variety of more detailed information
about the debugging session.
Significance: In addition to providing
benefits to the CFD programming community, p2d2 can be used as a
general-purpose debugger for isolating problems in programs distributed
across a heterogeneous collection of machines. As such, its potential
user community is quite large.
Status/Plans:
Support for MPI programs running on the IBM SP2
Support for PVM (Parallel Virtual Machine) programs running on the
Silicon Graphics cluster
Support for High Performance Fortran programs - a problem
domain-specific debugger (with CFD-specific operations)
Point(s)
of Contact:
Robert Hood
NASA Ames Research Center
rhood@nas.nasa.gov
URL:
http://www.nas.nasa.gov/NAS/Tools/Projects/P2D2/
MD5{32}: d7c5b0c07bab9ff6ec492fa979b1c170
File-Size{4}: 2819
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{49}: The Portable Parallel/Distributed Debugger (p2d2)
}
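The p2d2 record above attributes the client's portability to a client-server split that isolates the architecture- and operating-system-dependent code in a server. The fragment below is a generic C sketch of that kind of split, not p2d2's actual interface; the function names and operations table are invented for illustration.

/* Generic sketch of isolating platform-dependent debugger operations
 * behind a narrow server interface, in the spirit of the client-server
 * design described above.  Not p2d2's real API; names are invented. */
#include <stdio.h>

/* The only surface the portable client sees. */
struct debug_server_ops {
    int  (*attach)(int pid);                       /* platform-specific attach */
    int  (*read_status)(int pid, char *buf, int len);
    void (*detach)(int pid);
};

/* One server implementation per architecture/OS lives behind this table. */
static int stub_attach(int pid)  { printf("attach %d\n", pid); return 0; }
static int stub_status(int pid, char *buf, int len)
{
    (void)pid;
    return snprintf(buf, len, "running");
}
static void stub_detach(int pid) { printf("detach %d\n", pid); }

static const struct debug_server_ops stub_server = {
    stub_attach, stub_status, stub_detach
};

/* Portable client code: knows nothing about the platform underneath. */
int main(void)
{
    const struct debug_server_ops *srv = &stub_server;
    char status[64];

    if (srv->attach(1234) == 0) {
        srv->read_status(1234, status, sizeof status);
        printf("process 1234: %s\n", status);
        srv->detach(1234);
    }
    return 0;
}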
@FILE { http://cesdis.gsfc.nasa.gov/admin/seminar.series/tech.report/shaffer.ps
Update-Time{9}: 827948596
Partial-Text{1087}: OVERVIEW OF INTERNATIONAL EARTH
OBSERVATION ACTIVITIES Presentation to International Earth Remote
Sensing Projects Seminar Series Center of Excellence in Space Data and
Information Sciences IEEE Geoscience and Remote Sensing Society Dr.
Lisa R. Shaffer Acting Director, Mission to Planet Earth Division
Office of External Relations NASA Headquarters, Washington, DC January
17, 1995
Outline Types of International Earth Observation Activities Forms of
Cooperation in Earth Remote Sensing Overview of International
Activities NASA's Role in International Remote Sensing Issues: Now and
Future
Types of International Earth Observation Activities Satellites Sensors
Launch services Operations and data acquisition Data processing,
archiving, and distribution Scientific investigations In situ
observations for calibration/validation Applications demonstrations
Operational use
Approaches to Cooperation in Earth Remote Sensing
National satellite systems (i.e., no cooperation)
MD5{32}: 787d9472b0696946907433fe14a4f7f1
File-Size{5}: 33795
Type{10}: PostScript
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Keywords{611}: acquisition
acting
activities
and
applications
approaches
archiving
calibration
center
cooperation
data
demonstrations
director
distribution
division
earth
excellence
external
for
forms
future
geoscience
headquarters
ieee
information
international
investigations
issues
january
launch
lisa
mission
nasa
national
now
observation
observations
office
operational
operations
outline
overview
page
planet
presentation
processing
projects
relations
remote
role
satellite
satellites
sciences
scientific
seminar
sensing
sensors
series
services
shaffer
situ
society
space
systems
types
use
validation
washington
Description{62}: OVERVIEW OF INTERNATIONAL EARTH
}
@FILE { http://cesdis.gsfc.nasa.gov/PAS2/wg3.html
Update-Time{9}: 827948617
url-references{489}: http://cesdis.gsfc.nasa.gov/
/PAS2/index.html
wg3.html#executivesummary
wg3.html#issues
wg3.html#recommendations
wg3.html#concerns
wg3.html#conclusions
wg3.html#conclusions
wg3.html#conclusions
wg3.html#conclusions
wg3.html#conclusions
wg3.html#conclusions
wg3.html#conclusions
wg3.html#conclusions
wg3.html#conclusions
wg3.html#conclusions
wg3.html#conclusions
#top
/PAS2/index.html
http://cesdis.gsfc.nasa.gov/cesdis.html
/pub/people/becker/whoiam.html
mailto:becker@cesdis.gsfc.nasa.gov
title{32}: Use of System Software and Tools
keywords{509}: and
applications
base
basic
becker
breazeal
cesdis
chair
cherri
common
computing
concerns
conclusions
corporation
developers
document
don
donald
environment
environments
establish
executive
facto
financial
for
gov
group
gsfc
high
hoc
hpc
improving
incentives
increase
index
intel
issues
nasa
oregon
pancake
pasadena
performance
portability
porting
recommendation
recommendations
research
second
software
standards
state
summary
support
system
the
this
tool
tools
top
university
usability
use
working
workshop
head{20151}: Center of Excellence in Space Data and Information Sciences.
Use of
System Software and Tools Pasadena Working Group #3
Chair, Cherri
Pancake, Oregon State University
Co-chair, Don Breazeal, Intel
Corporation This report is one component of the Proceedings of the
Second Pasadena Workshop on System Software and
Tools for High
Performance Computing Environments .
Abstract: This report is a
summary of the conclusions of the Working Group
on the Use of System
Software and Tools. The group included
representatives from
independent software vendors, national laboratories,
academia, High
Performance Computing (HPC) vendors, and US federal
agencies. In
this report,
we identify obstacles to the success of HPC relative to
the usability
of system software and tools, and suggest strategies
for overcoming them.
The charter of the working group was to answer
the following
questions: What types of system software and tools are
needed to facilitate
portable, scalable applications? How can users be
motivated to use them?
Why don't users use and/or like existing
system software and tools? Why don't vendors respond to user complaints
and/or issues? What will it take to make the HPC user community grow?
For the purposes of discussion we defined users to be persons
involved
in developing parallel applications (i.e., predominantly
non-computer
scientists). System software and tools are defined very
broadly as
including any software that the application programmer
doesn't write.
This report is organized as follows: section 1
is an
executive summary providing an overview of the working group
recommendations; section 2 describes what the group perceives
as
the major problems to be addressed and suggests potential
solutions;
section 3 provides a list of the action items
recommended
by the group; section 4 describes open issues
and concerns; and
section 5 concludes the report.
Executive Summary Working Group 3
discussed the problems confronting current and potential
users of HPC
due to the lack of robustness and marginal usability that
characterize
current system software and tools. A variety of approaches
were
suggested, resulting in the following four recommendations:
Recommendation 1: Establish a Common Base Environment for
Developers
of HPC Applications. NASA (with the collaboration of the larger
community) should take the
lead in an effort to define a minimal set
of software tools to be made
available uniformly across all HPC
platforms. HPC vendors should be
encouraged to implement this set as
quickly as possible so that users can
have access to the same
(reliable) base environment on all HPC systems.
Recommendation 2:
Basic Research to Increase Tool Usability. The National Science
Foundation (NSF) should provide funding for
research efforts that
identify user
strategies for application development and that apply
those strategies to
tool design in order to improve usability.
Recommendation 3: Financial Support for Standards and Portability. The
charter of the National High Performance Software Exchange (NHPSE)
should be expanded to provide funding for
community-wide
standardization efforts likely to improve the uniformity
of HPC
software, such as High Performance Fortran (HPF) and Message
Passing
Interface (MPI).
Recommendation 4: ISV Application
Software. The national laboratories and national supercomputing centers
should
develop/expand programs that encourage independent software
vendors (ISVs)
to port key applications to HPC systems.
Issues in
the Use of System Software and Tools Application developers in the HPC
community are dissatisfied
with the system software and tools provided
on HPC
systems available today. Surveys of HPC
users of both parallel
and serial systems have shown that the
acceptance of programming tools
in this community is very low.
Users often avoid tools, or devise
their
own substitutes for significant system software components.
This
occurs for a variety of reasons. In general, user
perceptions of
HPC system software and tools are that: tools crash very quickly, tools
don't do what they're supposed to do, tools don't scale to large
applications, number of nodes, etc., tools are too machine-specific,
tools are too diverse and inconsistent, tools are not inter-operable
(even on a single platform), tools are very difficult for users to
learn and apply, and users are often unsure if there will be a payoff
for using tools. These issues can be categorized as three software
attributes that
appear to be lacking in current HPC system software
and tools: reliability/robustness, portability/standards-compliance, and
usability. Compounding these problems is the
fact that the application base on parallel systems has
grown very
slowly. Yet the availability of key applications is
precisely
the mechanism needed to drive the growth of the HPC user
community and
the realization of HPC's potential. As the group was
quick to
point out, not everyone is sold on
parallelism! Potential
users need to see some compelling examples
of success stories if they
are to be motivated to use HPC systems.
Ultimately, applications and
problem solving environments from Independent
Software Vendors (ISVs)
must become available.
To address these issues, we recommend a
twofold approach: improving
the software environment for application
developers, and providing
incentives for those developers to port
their applications
to parallel HPC systems.
Improving the Software
Environment Reliability and robustness are difficult issues for the HPC
vendor
community. User organizations often require delivery of new
systems
at the earliest possible date. Because of the complexity of
parallel systems,
however, system software and tools are complicated,
and early
delivery may mean that they are relatively new and
untried.
As a result, the users' initial contact with the software
is
quite negative, and the situation improves only slowly.
System
vendors may appear to be unresponsive to user needs, because
their
resources are consumed with maintaining the status quo as
market
forces require new systems, languages, and features.
Vendors
often have a number of high priority requests, and they need
to
spend effort differentiating their product from those of their
competitors.
Yet without certain guarantees that software will be
reliable and robust, it
is difficult to attract new users and new
applications.
Compounding the problem is the fact that few, if any,
users program to a
single platform. The rapid rate of change in HPC
technology requires
that users be able to migrate their codes from
platform to platform
with relative ease. Standards are a primary
mechanism for providing
the uniformity needed to enable application
portability,
whether they are official standards
sanctioned by a
standards organization, or de facto standards
developed through grass
roots efforts. In this document, we use
the term standards to include
both types. For HPC, it is clear that
successful standards must come
from the community as a whole. System
vendors cannot be expected to
develop standards, since their products
must be differentiated to
maintain competitive position. Vendors
can only provide input to the
definition process and implement the
result. It is important to note
that a standard is useful only if
it is in fact implemented across a
range of vendor platforms.
Factors that can help induce vendors to
implement standard
software include: the existence of a reference
implementation
of the standard, availability of implementations from a
third party,
pressure from the user community, and availability of a
validation
suite for testing of conformance and correctness. These
should
be included as part of any serious standards effort.
The
problem of usability may well be the most difficult to address.
HPC
system software suffers in comparison to the usability of
software and
tools provided with desktop systems because the
resources available
for development are much greater in the desktop
world, and the
problems to be solved are much less complex.
Many usability issues
remain unresolved for parallel HPC software.
System software and tools
are often the implementation of an
untried solution, and the ways in
which such software can be
applied effectively are often obscure. The
options and variations
available in programming parallel systems are
so diverse that tools
which attempt to adequately support all models
of usage become
excessively complicated. Unfortunately, little
research has been conducted
to identify the models of usage that
should be supported in order to
reach a reasonable number of users
without undue complexity.
Incentives to Porting Applications The
availability of key applications on HPC systems will undoubtedly
drive
the success of HPC. Many of these applications are developed
and
supported by ISVs. Their very independence from hardware vendors
means that
ISVs need a financial incentive to port their applications
to parallel
platforms. The availability of reliable and usable system
software and
tools is a critical part of this, since the easier a
system is to port to,
the lower the cost to the ISV.
However, ease
alone is not sufficient incentive for most ISVs to
initiate a port.
The uncertain longevity of any specific hardware platform
is a strong
deterrent for porting. This creates a vicious circle, in that
a
platform must include key applications if it is to survive, yet the
owners
of key applications are wary of porting to a platform until its
survival is
certain. Guaranteed customers or other mechanisms for
funding are needed so
that ISVs can justify porting costs. Moreover,
the simple existence of a
successful port is not enough to attract
additional customers; like the
ISVs, they are wary of investing in a
short-lived HPC platform. Potential
customers should be encouraged to
experience for themselves the improved
performance that can be
obtained by using the parallelized application.
The first port is the
most expensive, since subsequent ports can
leverage much of the
initial work. ISV costs go beyond the basic development
effort,
however, since an ISV must provide support and maintenance
to
customers on each target platform. Below a certain minimum number
of
customers, it simply is not cost-effective for the ISV to provide
support.
Too much of the burden of moving applications to parallel
platforms falls on
the ISV. Such businesses are often small, so the
risk factors make
involvement in such a plan unacceptable. Once the
HPC market has grown and
the customer base is large enough, such risks
may be reduced --- but this is
not true of the current market.
The
European Union devised one strategy for dealing with the ISV
problem,
the so-called Europort model. Its goal is to enable the porting of
key
scientific applications to parallel computer systems. Usually
the
application developer is partnered with a research organization
and
(sometimes) a system provider. The researcher supplies expertise
in
parallel algorithms and parallelization techniques to assist the
application
developer. The project is funded by the European
Commission through the
ESPRIT program, which supports collaborative
information technology
development.
Potential mechanisms for
supporting the migration of key ISV
applications to HPC platforms
include: assistance in identifying a promising
customer base,
long-term conditional loans; cost-sharing; assistance in
carrying out
ports; and the Europort collaborative model. Of these, the
Europort
model is the most promising and palatable, but such a model may
not
fit well with US policies and rules.
Recommendations The
group's recommendations were formulated to attempt to correct the
most
glaring problems in current HPC software environments.
Recommendation
1: Establish a Common Base Environment for
Developers of HPC
Applications A community-wide working group should define and advocate
the
implementation of a minimal parallel computing environment that is
robust
and consistent across all HPC platforms. The availability of
such an
environment would guarantee at least minimal functionality for
HPC
applications developers, and the promise of uniformity across
platforms
would serve as an encouragement for users and ISVs who are
currently faced
with a wide variety of dissimilar software and tool
systems.
One user organization represented in the working group,
NASA, was named
a likely candidate for taking the lead in this effort,
with the
collaboration of the larger community. A kick-off meeting for
this effort
should be scheduled as soon as possible (this may happen
as early as May,
1995). The meeting would organize an email and
web-based forum to produce
the base environment requirements
specification. Funding should be provided
to support a coordinator and
support staff for the effort, and a travel
budget should be supplied
to broaden participation. Participants in the
specification effort
should include HPC system (including workstation)
vendors, application
developers, and ISVs (both those who have ported to
parallel systems
and those who have not).
It is critical that the base operating
environment be reliable, robust,
and familiar to users. To demonstrate
the intent of this recommendation we
present the components of an
example environment, providing minimal
functionality for developing,
debugging, tuning, and executing applications:
C and Fortran compilers
(single-node, not parallelizing) that are
reliable and correct;
Scalable support for hand-coded instrumentation, capable
of yielding
reliable, expected behavior (a minimal sketch appears below); Support for parallel program execution
that is reliable and capable
of producing clear error messages; A
dbx-like symbolic debugger with the ability to attach to a
single
process in an executing application; A gprof-style profiling tool
capable of monitoring the performance
of a single process in an
executing application; and A facility for determining the status of an
executing application,
as well as discovering which users are running
which programs and
on which nodes/partitions. No part of this
recommendation should be construed as incompatible with
the ability of
the system vendors to provide additional or unique tools for
special
needs. The base environment will establish the minimal support
that
must be provided in a reliable and uniform fashion. A standard
set of tools
will also help the vendors deliver a robust working
environment much more
quickly when a brand new system is released.
Vendors are encouraged to
provide additional tools beyond those
specified as part of the base
environment.
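To make the hand-coded instrumentation item above concrete, the following is a
minimal sketch of the kind of timing instrumentation a developer might insert by
hand. The timer routine and the instrumented loop are illustrative assumptions,
not part of the proposed base environment itself.

/* timer.c: minimal hand-coded timing instrumentation (illustrative sketch). */
#include <stdio.h>
#include <sys/time.h>

/* wall-clock time in seconds, using the standard gettimeofday() call */
static double wall_seconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec * 1.0e-6;
}

int main(void)
{
    double t0 = wall_seconds();
    /* ... region of interest: a computation or communication phase ... */
    volatile double x = 0.0;
    for (long i = 0; i < 10000000L; i++)
        x += 1.0 / (double)(i + 1);
    double t1 = wall_seconds();
    printf("region took %.6f seconds (result %g)\n", t1 - t0, x);
    return 0;
}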
HPC vendors should be
encouraged to implement this set as quickly as
possible so that users
can have access to the same (reliable) base
environment on all HPC
systems. Funding should be provided to reduce vendor
implementation
costs. To encourage adoption, federal agencies funding the
procurement
of HPC systems should encourage inclusion of these requirements
in
Requests for Proposals (RFPs). Within two years this environment
should
be available on all HPC platforms.
Recommendation 2: Basic
Research to Increase Tool Usability User acceptance of system software
and tools will not increase
appreciably until such software is usable
within the framework of typical
application development strategies. To
this end, NSF should fund
collaborative research into the interaction
between the user and the
parallel software environment. This research
should involve substantial
input from experienced users engaged in
developing large-scale applications.
The goals of the research should
be to:
identify successful user strategies in developing real
applications, devise ways to apply knowledge of those strategies in
the
presentation of tool functionality in an intuitive, usable, and
familiar manner, and use this functionality in the development of
simple, composable
tool units. Support should be provided for
participants in the collaborative
efforts, including tool users,
developers, and implementors. Support should
also be provided for the
promotion of the results of this research, in order
to disseminate the
information through the community. Initial results should
be available
within two years.
Recommendation 3: Financial Support for Standards
and Portability Community-wide standardization efforts offer the
greatest promise for
supporting the portability of HPC applications
across multiple vendor
platforms. Successful examples of such efforts
include the BLAS (standard
Basic Linear Algebra Subroutines), MPI, and
HPF. Note, however, that
funding for these efforts was provided ad hoc
from a variety of
sources, a model that works in the first few cases
but cannot be sustained
to encompass the wide variety of standards
needed to make HPC platforms
attractive to a broad user and ISV
audience. A stable source of funding for
these efforts would ease the
path to successful implementation. Moreover,
academic participation in
these efforts is often constrained by the
associated cost and by the
lack of recognition for participation and
contribution. Some method
for supporting and encouraging academic
participation is needed.
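To illustrate why a standard interface such as the BLAS aids portability, the
short example below computes y = alpha*x + y through the standardized daxpy
operation. The use of the C binding (cblas.h) is an assumption for
illustration; conforming libraries may also be called through the original
Fortran bindings.

/* daxpy_demo.c: call the BLAS through its C interface; any conforming
   implementation can be linked in without changing this code. */
#include <stdio.h>
#include <cblas.h>

int main(void)
{
    double x[4] = {1.0, 2.0, 3.0, 4.0};
    double y[4] = {4.0, 3.0, 2.0, 1.0};

    cblas_daxpy(4, 2.0, x, 1, y, 1);   /* y <- 2.0*x + y */

    for (int i = 0; i < 4; i++)
        printf("y[%d] = %g\n", i, y[i]);
    return 0;
}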
The
charter of the National HPC Software Exchange (NHPSE) should
be
expanded to include funding for HPC community efforts to
evolve
specifications of standard system software that will enable the
development
of portable HPC applications. These specifications should
be made available
to the private sector on a non-exclusive, no-cost
basis. To facilitate the
development of private-sector
implementations, such specifications should be
accompanied by a
reference implementation and a validation suite.
Recommendation 4: ISV
Application Software A critical method for expanding the HPC market is
to enable key
applications software on HPC platforms through the use
of ISV resources.
This can be accomplished through several actions.
Little additional funding
is required to implement this
recommendation, but rules and mechanisms need
to be changed.
First,
ISVs and national lab employees should be made more aware of
existing
mechanisms for technology transfer that might affect
their
applications. These mechanisms are misunderstood and
underutilized, but
they could ease the path for ISV ports to HPC
systems.
Second, the mission of the national supercomputing centers
should be
expanded to include encouragement for ISVs, whose needs are
not met by
existing industrial partnership programs. New programs
should be instituted
which do not require large up-front membership
fees for the ISV. Such
programs should furnish not just machine access
for carrying out an
application port, but also the sale of cycles to
potential customers who
want to test-drive the parallelized
application.
Finally, existing mechanisms should be expanded to
include
Europort-style collaborations that don't require cost sharing
by small ISVs.
Issues and Concerns The recommendation to provide U.S. federal funding for Europort-style
collaborations to enable key ISV
application software on HPC systems raises
some legal and ethical
questions that the group is not qualified to answer.
Using federal
funds for such development efforts, and keeping the results of
those
efforts proprietary, may violate existing national policy.
Summary
and Conclusions In this report, Working Group 3 has made some very
specific
recommendations in the hope that they will provoke action on
several key
items. Recommendation 1 for the base environment is
already moving
forward. Recommendation 2 for user-related research
would expand funding in
an area that would yield concrete strategies
for improving tool usability.
Recommendation 3 would smooth the path
to the development of standards by providing administrative and
logistical support for community-wide
efforts. Recommendation 4
proposes support for ISV porting efforts that
would make HPC systems
more useful to the scientific and engineering
communities.
Implementation of any of these recommendations will move the
HPC
community in a direction toward improved usefulness and success.
CESDIS HTML formatting/WWW contact: Donald Becker, becker@cesdis.gsfc.nasa.gov.
MD5{32}: 16d8498883dd0510145ebfd0d56121f7
File-Size{5}: 22271
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{32}: Use of System Software and Tools
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/app.intro.html
Update-Time{9}: 827948649
url-references{124}: gci.html
http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/app.software.html
mailto:lpicha@cesdis.gsfc.nasa.gov
title{34}: ESS Applications Software Research
keywords{148}: and
approach
cesdis
curator
goal
gov
gsfc
larry
lpicha
management
nasa
objectives
organization
page
picha
plan
previous
project
return
strategy
the
images{42}: graphics/gci.small.gif
graphics/return.gif
headings{75}: Overview of ESS Applications Software Research
Return
to the PREVIOUS PAGE
body{2841}: background="graphics/ess.gif">
Project Goal and Objectives: The
goal of the ESS applications software activity is to enable the
development of NASA Grand Challenge applications on those computing
platforms which are evolving towards sustained teraFLOPS performance.
The objectives are to: identify the NASA Grand Challenge Investigations
and Guest Computational Investigations; identify computational
techniques, termed Computational Challenges, which are essential to the
success of the Grand Challenge problems; formulate embodiments of these
techniques which are adapted to and perform well on highly parallel
systems; and capture the successes in a reusable form.
Strategy
and Approach: The strategy is to select NASA Grand Challenges from a
vast array of candidate NASA science problems, to select teams of
aggressive scientific Investigators to attempt to implement the Grand
Challenge problems on scalable testbeds, and to provide
institutionalized computational technique development support to solve
the Computational Challenges in order to accelerate the progress of the
Investigators and to capture the results. The approach involves use of
the peer reviewed NASA Research Announcement as the mechanism to select
the Grand Challenge Investigations and their Investigator teams.
In-house teams of computational scientists have been developed at GSFC
and JPL to solve the Computational Challenges.
Organization: The
Office of Aeronautics and Space Technology, jointly with the Office of
Space Science and Applications, selected the ESS Investigators through
the peer reviewed NASA Research Announcement process. The ESS Science
Team, composed of the Principal Investigators chosen through the ESS
NRA, and chaired by the ESS Project Scientist, organizes and carries
out periodic workshops for the investigator teams and coordinates the
computational experiments of the Investigations. The ESS Evaluation
Coordinator focuses activities of the Science Team leading to
development of ESS computational and throughput benchmarks. A staff of
computational scientists supports the Investigations by developing
scalable computational techniques which address their Computational
Challenges.
Management Plan: At GSFC, a Deputy Project Manager
for Applications directs the in-house team of computational scientists.
At JPL, a Deputy Task Leader performs the same function. ESS and its
Investigators contribute annual software submissions to the High
Performance Computing Software Exchange.
Click on the following image
for a graphic display of the ESS Grand Challenge Investigations:
Points of Contact: Steve Zalesak
Goddard Space Flight Center, Code
934
zalesak@gondor.gsfc.nasa.gov, 301-286-8935
Robert Ferraro
Jet
Propulsion Laboratory
ferraro@zion.jpl.nasa.gov, 818-354-1340
curator: Larry Picha (lpicha@cesdis.gsfc.nasa.gov)
MD5{32}: 7661a05ee1e5253ef2cafa9a8954e196
File-Size{4}: 3460
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{34}: ESS Applications Software Research
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/in.house/graphics/pedelty.pict
Update-Time{9}: 827948858
MD5{32}: 6bdf51cbf071d3e8a3fda0220097ba12
File-Size{5}: 57918
Type{7}: Unknown
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/gci.html
Update-Time{9}: 827948649
url-references{151}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/app.software.html
http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/gc.html
title{34}: ESS Grand Challenge Investigations
keywords{79}: applications
challenge
contents
ess
grand
investigator
software
table
team
the
images{48}: graphics
graphics/return.gif
graphics/return.gif
headings{52}:
GO TO: the Applications Software Table of Contents
body{275}: gci.gif>
Points of Contact: Steve Zalesak
Goddard Space Flight
Center, Code 934
zalesak@gondor.gsfc.nasa.gov, 301-286-8935
Robert
Ferraro
Jet Propulsion Laboratory
ferraro@zion.jpl.nasa.gov,
818-354-1340
GO TO: ESS Grand Challenge Investigator Team Table of
Contents
MD5{32}: 68da0099c162ad83559da4a71af71bf7
File-Size{3}: 727
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{34}: ESS Grand Challenge Investigations
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/iita.hp/iita.html
Update-Time{9}: 827948599
url-references{148}: http://quest.arc.nasa.gov/IITA/iita1.html
http://cesdis.gsfc.nasa.gov/
http://www.usra.edu/
http://sdcd.gsfc.nasa.gov
http://sdcd.gsfc.nasa.gov/ESD/
title{14}: NASA HPCC IITA
keywords{369}: and
applications
arc
association
authorizing
authors
center
computing
connell
data
directorate
division
earth
edu
excellence
flight
goddard
gov
greenbelt
html
http
iita
information
infrastructure
june
last
lawrence
likens
lpicha
manager
maryland
michele
nasa
official
picha
program
quest
research
revised
sciences
service
space
technology
the
universities
usra
william
images{45}: graphics/hpcc.header.gif
graphics/wavebar.gif
headings{124}: Information Infrastructure Technology and Applications
This web page has moved to http://quest.arc.nasa.gov/IITA/iita1.html
body{518}:
Authorizing NASA Official: William Likens, Program Manager,
Information Infrastructure Technology and Applications Authors:
Lawrence Picha (lpicha@usra.edu) & Michele O'Connell
(michele@usra.edu), Center of Excellence in Space Data and Information
Sciences , Universities Space Research Association , NASA Goddard Space
Flight Center, Greenbelt, Maryland.
Last revised: 29 JUNE 1995
(l.picha) A service of the Space Data and Computing Division , Earth
Sciences Directorate , NASA Goddard Space Flight Center.
MD5{32}: 26a2ebb30a066d26df278b651376eac0
File-Size{4}: 1393
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{14}: NASA HPCC IITA
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/accomp/94accomp/94accomps.html
Update-Time{9}: 827948644
title{31}: NASA HPCC FY 94 Accomplishments
images{61}: hpcc.graphics/nasa.meatball.gif
hpcc.graphics/hpcc.header.gif
headings{29}: Showcase of Accomplishments
MD5{32}: ffd58ead8da7b15a3ed94bb73a87eb23
File-Size{4}: 3931
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Keywords{26}: accomplishments
hpcc
nasa
Description{31}: NASA HPCC FY 94 Accomplishments
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/guest.ci/pedelty.html
Update-Time{9}: 827948652
url-references{111}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/guest.ci.html
mailto:lpicha@cesdis.gsfc.nasa.gov
title{20}: Morphology Filtering
keywords{45}: curator
larry
page
picha
previous
return
the
images{40}: graphics/pedelty.gif
graphics/return.gif
headings{106}: High Performance Morphology Filtering of Cirrus Emission from Infrared
Images
Return
to the PREVIOUS PAGE
body{2985}:
Objective: Our goal is to remove cirrus emission from images of
the sky generated by the Infrared Astronomy Satellite (IRAS). The
cirrus emission looks remarkably like the cirrus clouds which form on
Earth, but is caused by cold dust grains in our Milky Way galaxy. This
infrared cirrus emission obscures our view of the universe beyond the
Milky Way, and by removing it we will create a valuable new public
archive, and we may even reveal new, unusual infrared objects.
Approach: Previous attempts to remove the cirrus emission have failed
because the emission is present on all angular scales. Our approach is
to apply the techniques of morphological image processing (a.k.a.
mathematical morphology). Morphological image processing is a
relatively new set of tools for analyzing form and structure in images.
The techniques can be computationally intensive, and so we are
implementing the morphology tools on the HPCC ESS testbeds, in
particular the MasPar MP-2.
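To illustrate the technique in its simplest form, the sketch below applies a
grayscale morphological opening (erosion followed by dilation) with a 3x3 flat
structuring element. The tiny test image and element size are assumptions for
illustration only and do not reflect the actual MasPar implementation.

/* morph.c: grayscale erosion (local minimum) and dilation (local maximum)
   over a 3x3 neighborhood; an opening removes isolated small features. */
#include <stdio.h>

#define W 8
#define H 8

/* want_max == 0: erosion (local min); want_max == 1: dilation (local max) */
static void morph(const float in[H][W], float out[H][W], int want_max)
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            float v = in[y][x];
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    int yy = y + dy, xx = x + dx;
                    if (yy < 0 || yy >= H || xx < 0 || xx >= W)
                        continue;
                    if ((want_max && in[yy][xx] > v) ||
                        (!want_max && in[yy][xx] < v))
                        v = in[yy][xx];
                }
            out[y][x] = v;
        }
}

int main(void)
{
    float img[H][W] = {{0}}, eroded[H][W], opened[H][W];

    img[3][3] = 1.0f;          /* one isolated bright pixel             */
    morph(img, eroded, 0);     /* erosion: small features disappear     */
    morph(eroded, opened, 1);  /* dilation: surviving structure regrows */
    printf("after opening, the isolated pixel is %.1f (was 1.0)\n",
           opened[3][3]);
    return 0;
}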
Accomplishments: We have dramatically
improved our prototype morphological cirrus filter. This improvement
was largely enabled by the tremendously faster performance of the
MasPar compared to an earlier workstation implementation. We have
filtered a few dozen IRAS images and are now analyzing the nature of
the objects we find. This analysis involves comparing our source
positions with large catalogs which are available via the Internet. We
are finding many galaxies which were previously discovered at optical
wavelengths, but which were previously very obscured in the infrared by
the cirrus. Preliminary analytical testing shows that the filter is
able to recover obscured galaxies with an accuracy of better than a few
percent.
A paper describing a detailed comparison of different MasPar
implementations of morphological filtering was submitted for review to
the Frontiers of Massively Parallel Computation meeting to be held in
February, 1995. Presentations were made to an American Astronomical
Society meeting in May, 1994 and to Astronomical Data Analysis Software
and Systems symposia in October, 1993 and September, 1994. The
morphology kernels were selected to be part of the ESS Parallel
Benchmark Suite, and are being benchmarked on a variety of platforms.
Significance: We hope to improve our knowledge of the infrared
brightnesses of galaxies, add to our understanding of the cirrus
emission, and possibly even discover new astronomical objects. We will
also publicly deliver the morphology kernel routines optimized for a
variety of HPC platforms.
Status/Plans: We are continuing
analytical testing to determine the accuracy and reliability of our
filter. We expect to perform production filtering of the entire IRAS
database at one and perhaps two far infrared wavelengths. This new
astronomical archive will be made publicly available.
Point
of Contact: Dr. Jeffrey Pedelty
Goddard Space Flight Center/Code
934
pedelty@jansky.gsfc.nasa.gov
(301) 286-3065
curator:
Larry Picha
MD5{32}: 33573874aa82b7d0926a040be5157137
File-Size{4}: 3551
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{20}: Morphology Filtering
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/app.sw/
Update-Time{9}: 827948842
url-references{152}: /hpccm/annual.reports/ess94contents/
bench.html
epb.html
graphics/
jnnie.html
jnniepict.html
memory.html
midas.html
require.html
storage.html
sw.ex.html
title{52}: Index of /hpccm/annual.reports/ess94contents/app.sw/
keywords{86}: bench
directory
epb
graphics
html
jnnie
jnniepict
memory
midas
parent
require
storage
images{192}: /icons/blank.xbm
/icons/menu.gif
/icons/text.gif
/icons/text.gif
/icons/menu.gif
/icons/text.gif
/icons/text.gif
/icons/text.gif
/icons/text.gif
/icons/text.gif
/icons/text.gif
/icons/text.gif
headings{52}: Index of /hpccm/annual.reports/ess94contents/app.sw/
body{396}:
Name Last modified Size Description
Parent Directory 19-Jul-95
16:12 -
bench.html 27-Jun-95 16:17 3K
epb.html 23-Jun-95 16:01 1K
graphics/ 27-Jun-95 16:13 -
jnnie.html 27-Jun-95 16:09 3K
jnniepict.html 23-Jun-95 15:54 1K
memory.html 13-Jun-95 11:39 1K
midas.html 27-Jun-95 15:02 3K
require.html 13-Jun-95 11:27 1K
storage.html 13-Jun-95 11:38 1K
sw.ex.html 19-Jun-95 13:33 3K
MD5{32}: a7782e56acd596b344d467ff1592ec36
File-Size{4}: 1733
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{52}: Index of /hpccm/annual.reports/ess94contents/app.sw/
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw/graphical.html
Update-Time{9}: 827948647
url-references{109}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw.html
mailto:lpicha@cesdis.gsfc.nasa.gov
title{47}: A Graphical User Interface for the FIDO Project
keywords{46}: contents
curator
larry
picha
return
table
the
images{19}: graphics/return.gif
headings{80}: A Graphical User Interface for the FIDO Project
Return
to the Table of Contents
body{3208}:
Objective: The Framework for Interdisciplinary Design Optimization
(FIDO) project is developing a general computational environment for
performing multidisciplinary design using networked heterogeneous
computers. The goal of the Graphical User Interface (GUI) development
is to provide an easy way for the user to monitor and control a design
cycle that involves complex programs running on a variety of computers
across the network.
Approach: The current Motif-based GUI consists of
three separate elements: setup, application status, and data display.
The setup GUI provides the user with a convenient means of choosing the
initial design geometry, material properties, and run conditions from a
pre-defined set of files. The interface displays the choices using a
series of pop-up Motif data windows, and allows the user to modify and
store new condition files. The application status GUI allows the user
to monitor the status of a design run. An example of this display is
shown in the left figure during the middle of the fourth design cycle.
Within this figure, the upper left window displays current run
parameters and contains pull-down menus for setting various options.
The right window graphically displays the state of the overall design
process by changing the color of each labeled box according to the work
being done. The color key is shown in the lower left window. Additional
detail of the system state can be obtained by selecting the boxes with
a 3-D appearance. Doing so brings up an associated window that displays
sub-detail for that box. The data display GUI is the third interface
element, providing the user with a variety of ways to plot data during
the design process. The right figure is an example of a color-coded
contour plot of wing surface pressures. The buttons at the top of the
plot window provide the user a variety of view controls.
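For readers unfamiliar with Motif, the following is a bare-bones Xt/Motif
skeleton of the kind of interface element described above: a single button
whose callback would query design-cycle status. The widget names and callback
body are illustrative assumptions, not FIDO code.

/* gui_sketch.c: minimal Motif program with one push button and a callback.
   Compile against the Xt and Xm libraries. */
#include <Xm/Xm.h>
#include <Xm/PushB.h>
#include <stdio.h>

/* invoked when the user presses the status button */
static void status_cb(Widget w, XtPointer client_data, XtPointer call_data)
{
    printf("query the design-cycle status here\n");
}

int main(int argc, char *argv[])
{
    XtAppContext app;
    Widget top, button;

    top = XtVaAppInitialize(&app, "GuiSketch", NULL, 0, &argc, argv, NULL, NULL);
    button = XmCreatePushButton(top, "status", NULL, 0);
    XtAddCallback(button, XmNactivateCallback, status_cb, NULL);
    XtManageChild(button);
    XtRealizeWidget(top);
    XtAppMainLoop(app);
    return 0;
}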
Accomplishment: The three GUI elements have been implemented, and were
used to produce the results in the figures. The setup interface now
provides a full capability for initializing a FIDO run. In addition to
contour plots of aerodynamic pressures and structural stresses on the
wing, the data display interface provides line-plots of cycle history
for a variety of design parameters and data results.
Significance: A
graphical interface provides easier understanding and access to data
than the previous text-based method. Also, less training of users is
needed.
Status/Plans: In the next version of the interface, more
detail will be provided in various sub-windows of the application
status GUI. The three elements of the GUI will be combined into a
single interface, replacing the text-based menu that currently controls
the data display. After the first implementation of FIDO has been
tested and documented, the project will move to its next phase:
incorporation of the full HISAIR "Pathfinder" engineering problem,
which will increase the amount of information handled by an order of
magnitude.
Points of Contact: Raymond L. Gates
NASA Langley
Research Center
(804) 865-1725
raymond.l.gates@larc.nasa.gov
Kelvin W. Edwards
NASA Langley
Research Center
(804) 864-2290
k.w.edwards@larc.nasa.gov
curator: Larry Picha
MD5{32}: 5a49918dec734abdc3f13788a8e12efc
File-Size{4}: 3698
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{47}: A Graphical User Interface for the FIDO Project
}
@FILE { http://cesdis.gsfc.nasa.gov/petaflops/archive/workshops/pas.2.pf.obj.html
Update-Time{9}: 827948644
url-references{297}: http://cesdis.gsfc.nasa.gov/petaflops/peta.html
/people/tron/tron.html
mailto:tron@usra.edu
/people/oconnell/whoiam.html
mailto:oconnell@cesdis.gsfc.nasa.gov
http://cesdis.gsfc.nasa.gov/people/lpicha/whoiam.html
mailto:lpicha@@cesdis.gsfc.nasa.gov
http://cesdis.gsfc.nasa.gov/
http://www.usra.edu/
title{54}: Petaflops Enabling Techologies and Applications (PETA)
keywords{362}: agenda
and
application
applications
basis
cesdis
challenges
computing
connell
considering
could
derive
determine
development
edu
enabling
establish
for
future
identify
initiatives
issues
july
lawrence
lead
lpicha
meeting
michele
moc
ops
peta
petaflops
picha
production
research
revised
scaled
set
sterling
systems
technologies
that
the
thomas
tron
usra
workshop
images{79}: peta.graphics/saturn.gif
peta.graphics/turb.small.gif
peta.graphics/petabar.gif
headings{314}: The Workshop on Enabling Technologies for Peta(FL)OPS Computing - 1994
A meeting to establish the basis for considering future research
initiatives that could lead to the development, production, and
application of petaFLOPS scaled computing systems.
Objectives of the Workshop
Return to the
P.E.T.A.
Directory
body{1041}:
Identify Applications of economic, scientific, and societal
importance requiring PetaFLOPS scale computing. Determine Challenges in
terms of technical barriers to achieving effective PetaFLOPS computing
systems. Identify Enabling Technologies that may be critical to the
implementation of PetaFLOPS computers and determine their respective
roles in contributing to this objective. Derive Research Issues that
define the boundary between today's state-of-the-art understanding and
the critical advanced concepts to tomorrow's PetaFLOPS computing
systems. Set Research Agenda for initial near-term work focused on
immediate questions contributing to the uncertainty of our
understanding and imposing the greatest risk to launching a major
long-term research initiative.
Authorizing NASA
Official: Paul H. Smith, NASA HPCC Office
Senior Editor: Thomas
Sterling (tron@usra.edu )
Curators: Michele O'Connell
(
michele@usra.edu ),
Lawrence Picha (lpicha@usra.edu ),
CESDIS/
USRA , NASA Goddard Space Flight Center.
Revised: 31 July 95 (moc)
MD5{32}: add61edae57d74c6a910e3b9466db98c
File-Size{4}: 2195
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{54}: Petaflops Enabling Techologies and Applications (PETA)
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/iita/space.html
Update-Time{9}: 827948659
title{18}: Project S.P.A.C.E.
images{18}: graphics/space.gif
headings{65}: Project S.P.A.C.E. (Sun, Planets, Asteroids & Comets Exploration)
body{1493}: background="graphics/spacepaper.gif" text="440000">
Objective: Improve K-12 educator and student understanding of
our solar system based on current data from NASA/JPL explorations.
Approach: Project SPACE will provide three components: 1) an
interactive multimedia space exploration experience (SPACE Simulation),
2) an in-class curriculum (SPACE Curriculum); and 3) access to the
NASA/JPL electronic library (SPACE Curriculum Library). The SPACE
Curriculum Library will use the Internet as a vehicle to disseminate
information nationwide to educators and students.
Accomplishments: SPACE Simulation (Mars Phase) is a computer-based
interactive multimedia working model of the entire simulation product,
and is in its final development stage. This model allows educators and
students to plan and execute a robotic mission to Mars . SPACE
Curriculum uses an innovative and flexible design tool (Curriculum Web)
to create a model curriculum which supports current instructional
pedagogy. Use of such a design promotes student interest and aids in
the incorporation of space curriculum into classroom settings.
Additionally the Web acts as a means to access the curriculum
electronically. SPACE Curriculum Library is currently on-line on the
Internet. The first of many curriculum products, such as lesson plans
and hands-on activities are now available.
Significance: Project
SPACE provides a platform for learners to understand the relevance of
NASA/JPL data obtained from space explorations.
MD5{32}: 0f72a1bba8e78ae1f373eb73660b2a34
File-Size{4}: 3013
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Keywords{8}: project
Description{18}: Project S.P.A.C.E.
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/app.sw/sw.ex.html
Update-Time{9}: 827948654
url-references{175}: http://sdcd.gsfc.nasa.gov/ESS
http://sdcd.gsfc.nasa.gov/ESS
http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/app.software.html
mailto:lpicha@cesdis.gsfc.nasa.gov
title{45}: First Submission to the ESS Software Exchange
keywords{73}: curator
ess
gov
gsfc
http
larry
nasa
page
picha
previous
return
sdcd
the
images{19}: graphics/return.gif
headings{74}: First Submission to the ESS Software Exchange
Return
to the PREVIOUS PAGE
body{1534}:
Objective: The goal of the HPCC/ESS Software Exchange is to
facilitate the exchange and reuse of software. Its specific objective
is to make publicly available the software products developed by the
ESS Science Team.
Approach: The Software Exchange has been
implemented as part of the World Wide Web (WWW). The WWW was developed
at CERN as a way of facilitating the exchange of information on the
Internet. The use of the WWW has grown exponentially, mainly due to the
creation of the Mosaic program by the NCSA. The WWW is a collection of
hypertext documents distributed throughout the world, and various Web
browsers offer 'point and click' access to a wide variety of Internet
resources.
Accomplishments: The ESS Project established a
software repository accessible via the World Wide Web (WWW) in March on
its project servers at Goddard Space Flight Center
(http://sdcd.gsfc.nasa.gov/ESS) and at the Jet Propulsion Laboratory.
Status/Plans: The ESS project software repository is operational, and
its contents will continue to expand with additional annual
contributions from the ESS Grand Challenge teams and Guest
Computational investigators. In FY95 we will solicit initial
contributions from the Phase 2 Guest Computational Investigators. The
ESS project staff scientists will continue to contribute the results of
their development efforts as they come to fruition.
Point of
Contact: Dr. Jeffrey Pedelty
Goddard Space Flight Center/Code 934
pedelty@jansky.gsfc.nasa.gov
(301) 286-3065
curator: Larry
Picha
MD5{32}: 2679e7cb995b81c6db0ef5b28d97e571
File-Size{4}: 3117
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{45}: First Submission to the ESS Software Exchange
}
@FILE { http://cesdis.gsfc.nasa.gov/admin/inf.eng/
Update-Time{9}: 820867014
Description{24}: Index of /admin/inf.eng/
Time-to-Live{8}: 14515200
Refresh-Rate{7}: 2419200
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Version{3}: 1.0
Type{4}: HTML
File-Size{4}: 1277
MD5{32}: db01dfd1333c2eacab37d02bf24e9768
body{317}:
Name Last modified Size Description
Parent Directory 13-Jul-95
12:12 -
CESDIS1.small.gif 16-Mar-95 10:48 12K
ie.gif 09-Jun-95
11:40 31K
inf.eng.html 15-Jul-95 09:53 7K
inf.eng.html.txt
22-Mar-95 07:42 6K
opp.html 02-May-95 12:36 3K
wave.tar 07-Apr-95
19:59 768K
wave.tutorial.fin/ 27-Jun-95 13:32 -
headings{24}: Index of /admin/inf.eng/
images{149}: /icons/blank.xbm
/icons/back.xbm
/icons/image.xbm
/icons/image.xbm
/icons/text.xbm
/icons/text.xbm
/icons/text.xbm
/icons/unknown.xbm
/icons/menu.xbm
keywords{77}: cesdis
directory
eng
fin
gif
html
inf
opp
parent
small
tar
tutorial
txt
wave
title{24}: Index of /admin/inf.eng/
url-references{98}: /admin
CESDIS1.small.gif
ie.gif
inf.eng.html
inf.eng.html.txt
opp.html
wave.tar
wave.tutorial.fin/
}
@FILE { http://cesdis.gsfc.nasa.gov/PAS2/wg4.text
Update-Time{9}: 827948617
Partial-Text{20142}: Report of Working Group 4
INFLUENCE OF PARALLEL ARCHITECTURE ON HPC SOFTWARE
Chair: Burton Smith, Tera Computer Company
Co-chair: Thomas Sterling, USRA CESDIS
Introduction
============
Architectural parallelism is the principal opportunity that is driving the
aggressive evolution of HPC systems, achieving rapid gains in peak
performance. Parallelism is also the dominant factor challenging the effective
application of HPC architecture both in terms of execution efficiency and
programmability. Recent trends in system development have favored hardware
implementation solutions to deliver peak performance while relegating the
challenge of programmability and efficiency to future envisioned system
software solutions. As a consequence, programming of HPC systems in general
has proven significantly more difficult than conventional supercomputers while
delivered sustained performance is highly variant across the domain of HPC
applications. The purpose of this report is to examine the symbiotic
relationship between parallel architecture and system software in order to
reveal the attributes of parallel architecture that impact the ability of
system software to provide an effective computing environment.
Challenge of Parallel Architecture
==================================
Parallel computing has been the exclusive realm of HPC, at least until
recently. To achieve high performance, the added dimension of parallelism has
been imposed on hardware structure designers, applications programmers, and
system software developers in addition to all of the other important aspects
associated with employing conventional computers. While affording the promise
of orders-of-magnitude performance advantage, parallelism in all of its
manifestations has greatly complicated the problem of programming, reduced the
generality of application, and compromised robustness of system
operation. Together, the consequence of these negative effects is overall
lower efficiency, longer system development time, high cost, and limited
market when compared to mainstream computing systems.
To offset these limitations, system software researchers have sought
innovative approaches to HPC system management, but with little overall
practical advantage. The possibility must be considered that the problem is
intrinsic to the class of architectures being offered in the HPC arena and
that system software may never be able to adequately compensate for their
fundamental weaknesses. If so, then HPC architecture too must advance beyond
its current state in conjunction with system software to realize the ultimate
promise of scalability.
Parallel HPC structures employ distributed integrated resources, which
distinguishes them from conventional uniprocessors and imposes behavior
characteristics that limit or at least complicate efficient programming and
execution. Foremost among these is latency of data movement; the time required
(usually measured in cycles) to perform a remote memory access by a requesting
processor. Whether managed through message passing or shared memory
primitives, the length and variability of communication latencies result in a
sensitivity to locality that demands tasks and their operand objects be in
proximity in order to avoid long waiting times for access.
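As a concrete illustration of this latency, the sketch below measures the
round-trip time of a one-byte message between two MPI processes; the
repetition count and message size are assumptions chosen only for
illustration.

/* pingpong.c: round-trip latency between MPI ranks 0 and 1. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, i, reps = 1000;
    char byte = 0;
    double t0, t1;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) printf("run with at least two processes\n");
        MPI_Finalize();
        return 1;
    }

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();
    if (rank == 0)
        printf("approximate one-way latency: %.2f microseconds\n",
               (t1 - t0) * 1.0e6 / (2.0 * reps));
    MPI_Finalize();
    return 0;
}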
A second important aspect of distributed structures is the need to expose,
allocate, and balance parallel activities across the system processing
resources to achieve high utilization, efficiency, and performance. But the
need to spread objects apart and exploit more parallelism, thereby precluding
local starvations, may be in direct conflict with the need to minimize their
relative latencies. The management of parallel flow control requires
mechanisms whose realization, especially in software, may easily impose
unacceptable overhead on the useful work being processed.
Overhead can add undue burden on the execution resources, force a lower bound
on the parallelism granularity that can be effectively exploited, thereby
limiting useful program parallelism, and place an upper bound on scalability
for a given application problem and size. The combined challenges of latency,
starvation, and overhead derived from attempting to exploit distributed
computing resources may be beyond the capability of system software to
circumvent in the general case without some degree of architecture support.
Economic Factors
================
The HPC market continues to represent only about 1% of the total annual sales
of computing systems. Yet, the time and cost of development historically have
exceeded those of modern microprocessor architectures. The market share and
resulting revenues have proven inadequate to support many independent vendors
developing unique parallel architectures and supporting system
software. Compounding this is the rapid rate of evolution of microprocessor
technology, which has recently exceeded 50% performance gain per
year. Competing with this rate of performance advance while engaging in
lengthy design cycles has been shown to be risky. These two trends have driven
the HPC community to leverage the hardware development investment, rapid
performance advances, and economy of scale of the microprocessor industry by
integrating microprocessors and other commodity components in scalable
ensembles.
Mechanisms embodied in modern microprocessors have been devised largely to
support the scientific workstation or, at the low end, the personal computer
and laptop. Design tradeoffs preclude significant enhancements not targeted
towards these primary markets. In particular, capabilities aimed specifically at
HPC systems were unlikely to be incorporated, due to their minimal market value. HPC
vendors have either implemented basic structures, relying on programmers and
system software to harness the available resources, or developed auxiliary
special purpose devices to be included with the commodity parts to augment the
functionality and achieve more effective scalable parallel computation. The
commercial vendor offerings span this range of choices from clusters of
workstations to tightly coupled shared memory multiprocessors. But the
choice of developing specialty parts has to be carefully weighed against the
cost and lead time incurred and the limited market benefits. In general, low
cost and good reliability of HPC systems will rely on high volume hardware
components.
Fortunately, very recent trends in the mainstream commercial computing market
have resulted in new capabilities that may offer new opportunities for HPC
system architecture. Latency, even in uniprocessor based systems, has emerged
as a problem no longer entirely capable of being resolved through caching
methods alone. As a consequence, microprocessor designers are incorporating
prefetch and multiple memory access issue mechanisms to mitigate the effects
of latency.
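A software-level analogue of these hardware mechanisms is programmer- or
compiler-directed prefetching. The sketch below uses the GCC
__builtin_prefetch intrinsic as a stand-in; the prefetch distance and the
choice of this particular intrinsic are assumptions made for illustration.

/* prefetch.c: issue prefetches a fixed distance ahead of use to overlap
   memory latency with computation. */
#include <stdio.h>

#define N (1 << 20)

static double a[N], b[N];

int main(void)
{
    double sum = 0.0;

    for (long i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

    for (long i = 0; i < N; i++) {
        if (i + 16 < N) {   /* request operands 16 iterations ahead of use */
            __builtin_prefetch(&a[i + 16], 0, 1);
            __builtin_prefetch(&b[i + 16], 0, 1);
        }
        sum += a[i] * b[i];
    }
    printf("dot product = %g\n", sum);
    return 0;
}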
...
Clustering, at least of small numbers of workstations, is becoming a common
way to achieve some performance gain, albeit at the very coarse grain level of
parallelism. New networks and interfaces are being devised to greatly reduce
the time, especially in software, of moving data between workstations.
...
Finally, using a small number of processors in a single unit is expanding
performance available to the mainstream server market. The symmetric
multiprocessor (SMP) is emerging as an important new mid-range product with a
substantial potential market. Microprocessor designs are incorporating
sufficient mechanisms to support cache coherence by means of snooping
techniques on a high speed common bus.
...
Together, these new trends see microprocessor designs beginning to address the
concerns of HPC architecture, but driven by the requirements of more lucrative
market sectors. Parallelism is being seen as good at all scales, not just the
very high end as in the past and will likely pervade the whole industry in the
near future. As this new market driven constituency grows at moderate scale,
the high end will benefit as well. This is opening new opportunities for HPC
architecture and should influence future directions and designs.
...
Software Scaling
================
Parallel system software and applications are expensive to develop and may not
be commercially viable if applied only to the HPC sector. Most HPC
applications are home-grown by dedicated computational scientists with few
commercially available HPC applications. System software represents the best
the vendors can provide given limited resources but this continues to be
inadequate to the task although the overall quality is improving. Like the
hardware systems counterparts, software systems for HPC environments will have
to be derived, to a significant degree, from those products developed for
moderate scale parallel systems such as SMPs. This means that applications and
system software will have to be developed to scale up and down across system
configurations in order to attract adequate market share on SMPs while capable
of exploiting HPC resources for improved performance or problem size.
In order to meet this objective, HPC architectures will have to support
execution models found on the low as well as high end of the parallel system
spectrum. In particular, both shared memory and distributed computing models
need to be supported, even within a single application. To make better use of
HPC resources and to share such systems effectively among a number of
applications, HPC architecture will have to become more virtual in both space
and time. This is particularly useful when large applications are made up of a
number of separate parallel codes such as would be found in complex
interdisciplinary problems.
HPC Architecture Support for System Software
============================================
While it would be ideal if HPC architecture itself resolved all challenges
presented by distributed resources, such is unlikely in the next few
generations and system software will still be required to address many of the
difficulties. Even if architecture can not eliminate the problems for system
software, it should incorporate those additional mechanisms that would
facilitate system software in performing its services. Some examples follow.
...
Performance tuning is poorly supported in most HPC architectures. Due to the
distributed nature of the system, the programmer must be involved in a wide
array of decisions related to problem and data partitioning, resource
allocation, communication, scheduling, and synchronization. In order to seek
optimal performance, system behavior has to be observable. Often such behavior
falls outside the name space of the system instruction set. Performance
monitoring mechanisms are essential to provide adequate feedback to the
parallel software designer and this must include access to performance
critical resources such as networks, caches, synchronization primitives, and
others. Many of these require additional architecture support to reveal and
quantify. Such mechanisms, if provided, can be mapped into the address space
of the architecture and therefore made accessible to performance monitoring
tools.
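As a sketch of how such mechanisms might look once mapped into the address
space, the hypothetical example below reads a memory-mapped counter region
before and after a code section. The device name and register layout are
invented for illustration; no existing driver or interface is implied.

/* perfmap.c: hypothetical memory-mapped performance counters. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

struct perf_regs {                  /* assumed layout of the mapped region */
    volatile unsigned long cache_misses;
    volatile unsigned long remote_accesses;
    volatile unsigned long network_stalls;
};

int main(void)
{
    int fd = open("/dev/perfctr", O_RDONLY);   /* hypothetical device */
    if (fd < 0) { perror("open"); return 1; }

    struct perf_regs *p = mmap(NULL, sizeof *p, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    unsigned long before = p->cache_misses;
    /* ... run the code region being tuned ... */
    unsigned long after = p->cache_misses;
    printf("cache misses in region: %lu\n", after - before);

    munmap(p, sizeof *p);
    close(fd);
    return 0;
}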
Capabilities that should be provided include any facility critical to
performance such as those shared resources which might impose bottlenecks due
to contention or insufficient bandwidth. Metrics and means for observing key
communications channels fall into this category. Cache statistics are
particularly important as they determine the effective locality of the code
and data placement and may have a significant impact on performance.
...
Beyond performance monitoring, robustness should be supported through
architectural facilities that enable any part of the system to ascertain and
verify the operational health of any other major subsystem. These should
include alarm signals that indicate some system failure mode and permit
recovery routines to be initiated. Such mechanisms can aid in achieving high
availability and confidence in hardware and software. Authenticated and
protected messages through architectural support should also be included to
enhance reliability.
Commercial applications need subsystem parallelization to remove bottlenecks,
especially in the area of I/O. File systems, storage systems, networking, and
database management all represent examples where architecture support can
greatly enhance system software functionality. Multiple I/O models, even in a
single application, should be supported including central, distributed,
mapped, and stream models and require some architecture enhancements.
...
Future Considerations
=====================
HPC systems are employed generally in rather simple and primitive ways. In the
next few years the sophistication of system usage will increase dramatically
as all aspects of system operation become virtualized in space and time and
new applications are enabled by the availability of large scale computing
systems. One consequence of a new generation of advanced applications is that
new data types will become pervasive and require effective architecture
support. Objects, persistent objects, and object stores will become routine
and require direct architecture support. Another data type of future
importance is the "image" structure. As this represents one of the most
rapidly growing types of information exchange, these Mbyte objects will in the
future be treated as atomic entities in various compressed and raw
forms. Architecture support for video image streams will also become
prevalent.
...
The primary target for HPC system usage is response time sensitive work; that
is, applications for which the user seeks solution in the shortest possible
time interval. This is premium value computing, requiring dedicated
resources. But where science accomplishments may permit a longer time frame, a
second class of processing resource, the non-response time sensitive workload,
may take advantage of brief periods of idle or partially available systems to
make progress to solution. Cycle harvesting or scavenging methods have been
employed, particularly on workstation clusters, primarily on an experimental
basis for some time. This capability will become normal operating practice and
replace the uniprocessor background tasks. To do so will require architecture
support for rapid context switch, checkpointing, and managing the distributed
flow control state. This capability will extend the utilization of the HPC
systems, making them more cost effective.
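A minimal sketch of the application-level checkpointing ingredient of such a
scheme follows; the file name, state layout, and checkpoint interval are
assumptions for illustration.

/* checkpoint.c: save and restore computation state so an interrupted run
   can resume where it left off. */
#include <stdio.h>

#define N 1000

static double state[N];

/* write the current step and state array to a checkpoint file */
static int save_checkpoint(const char *path, long step)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    fwrite(&step, sizeof step, 1, f);
    fwrite(state, sizeof state[0], N, f);
    fclose(f);
    return 0;
}

/* return the step to resume from, or 0 if no checkpoint exists */
static long load_checkpoint(const char *path)
{
    long step = 0;
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    if (fread(&step, sizeof step, 1, f) == 1)
        fread(state, sizeof state[0], N, f);
    else
        step = 0;
    fclose(f);
    return step;
}

int main(void)
{
    long step = load_checkpoint("job.ckpt");
    for (; step < 10000; step++) {
        state[step % N] += 1.0;                /* stand-in for real work */
        if (step % 1000 == 0)
            save_checkpoint("job.ckpt", step); /* periodic checkpoint    */
    }
    printf("finished at step %ld\n", step);
    return 0;
}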
...
There are important applications that are less well suited to the capabilities
of general processors and can be greatly accelerated through special purpose
functional structures. HPC architectures in the future will require the
ability to incorporate special purpose devices and support heterogeneous
computing. Workstations and personal computers today provide open system
interfaces through standardized buses and address space mapping. Similarly,
efficient interfaces that permit data streaming through such units and task
scheduling to take advantage of the availability of these resources will
become essential and greatly enhance the value of HPC technology.
...
Recommendations
===============
For many reasons, a number of which have been presented in this report, HPC
system architecture is entering a new phase in its evolution. This transition
is driven in part out of necessity and in part in response to new
opportunities. While many important computations have been successfully
performed on large HPC systems, it is clear that to date the current
generation does not represent an adequate capability in programmability,
generality, or effectiveness. Nor has it gained sufficient market share to be
a sustainable commercial product. The following general recommendations are
in response to the previous findings and are offered to advance the state of
HPC system architecture to resolve the critical issues of capability,
usability, reliability, and marketability.
...
1. The overriding objective must be to encourage the development of more
usable, broadly applicable, and robust systems at high scales. Foremost among
concerns is the requirement to dramatically reduce locality sensitivity which
seriously inhibits programmability. Sharing of resources must be simplified
through submachine virtualization both in space and time. Parallelization of
subsystems such as file systems, networking, and database management is key to
removing bottlenecks. Performance monitoring mechanisms must be enhanced for
performance tuning. Configuration management, resource management, and
capacity planning must all be strongly supported for flexible and easily
manageable systems.
...
2. Ensure that common parallel programming models are architecturally
supported from low to high end systems. Both shared memory and distributed
computing methods should be supported even from within a single
application. Multiple I/O models should also be supported even from within a
single application.
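As an illustration of using both models within one application, the sketch
below combines MPI message passing across processes with POSIX threads
sharing memory within each process. The thread count and the restriction of
MPI calls to the main thread are simplifying assumptions.

/* hybrid.c: shared memory (threads) inside each process, message passing
   (MPI) between processes.  Link with the MPI and pthread libraries. */
#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static double partial[NTHREADS];

/* each thread sums its own interleaved slice of the work */
static void *worker(void *arg)
{
    long id = (long)arg;
    double sum = 0.0;
    for (long i = id; i < 1000000; i += NTHREADS)
        sum += 1.0 / (double)(i + 1);
    partial[id] = sum;
    return NULL;
}

int main(int argc, char **argv)
{
    int rank, size;
    pthread_t tid[NTHREADS];
    double local = 0.0, global = 0.0;

    MPI_Init(&argc, &argv);             /* distributed (message passing) layer */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (long t = 0; t < NTHREADS; t++)     /* shared memory layer */
        pthread_create(&tid[t], NULL, worker, (void *)t);
    for (long t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);
    for (int t = 0; t < NTHREADS; t++)
        local += partial[t];

    /* all MPI calls are made from the main thread only */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum over %d processes = %g\n", size, global);
    MPI_Finalize();
    return 0;
}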
...
3. Raise the lowest common denominator through community forums. Establish the
minimum needs that should be ubiquitous across HPC platforms. Third party
software vendors must be able to depend on the availability of a basic set of
capabilities to guarantee portability of software products across
systems. Performance monitoring, low overhead synchronization, synchronized
global clocks, high reliability messaging, and availability features are all
examples of architecture facilities that should be common among HPC systems in
order to encourage ISV investment and software development.
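A hypothetical sketch of the kind of basic, portable timing facility meant
here (gettimeofday() is used purely for illustration; a synchronized global
clock would allow such time stamps to be compared across nodes):

    /* Wall clock timing around a monitored region. */
    #include <stdio.h>
    #include <sys/time.h>

    static double wall_seconds(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec * 1.0e-6;
    }

    int main(void)
    {
        double t0 = wall_seconds();

        volatile double x = 0.0;              /* stand-in for the monitored region */
        for (long i = 0; i < 10000000; i++)
            x += 1.0;

        double t1 = wall_seconds();
        printf("region took %.6f seconds\n", t1 - t0);
        return 0;
    }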
...
4. Develop high volume building blocks that enable programmable scalable
systems. Such building blocks must be consistent with the economic business
models of mainstream parallel computing, such as SMPs, but be capable of
scaling to HPC-sized configurations, responding to the increased demands such
systems impose. These enable investment in mass market technologies to
directly impact HPC development costs and design cycle time while ensuring
scalable applications and system software able to migrate both up and
down. Large volume commodity components are the key to high quality and low
cost.
Conclusions
===========
This brief report has reviewed the relationship between HPC system
architecture and software to expose architectural issues that either
complicate or inadequately support the needs of system software
development. It has been shown that current generation HPC architecture is in
part at fault for the difficult challenges confronting system
software. Architecture latency, starvation, and overhead resulting from
distributed computing resources all combine to restrict programmability,
generality, and effectiveness. At the same time, market forces limit the
flexibility of the HPC design space, constraining system development to employ
commodity mass market components. Fortunately, there is a rapid move to modest
scale parallelism, even in the mainstream computing sector. Both processor
architectures and software products are beginning to be developed with
parallelism in mind, including addressing the very problems confronting HPC
systems architecture and software at present. This will provide the new
opportunity for HPC to merge with the mainstream and share the benefits of
economy of scale both in hardware and software. But it requires that HPC
systems designers and applications programmers develop scalable products that
can migrate both up and down in parallel system scale. In the meantime, a number
of capabilities that should be included in HPC architecture in support of
system software were identified. Among these were support for performance
monitoring and enhanced availability features as well as a number of
mechanisms for efficient dynamic resource management. It is expected that much
of the burden of presenting HPC applications programmers with programmable and
effective execution environments will continue to fall on sophisticated system
software, but advances in architecture are essential if the full promise
of HPC systems is to be realized.
...
MD5{32}: d4762015b941129d5dd51b8ad2b31f53
File-Size{5}: 20375
Type{4}: Text
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Keywords{7401}: ability
able
about
accelerated
access
accessible
accomplishments
achieve
achieving
across
activities
add
added
addition
additional
address
addressing
adequate
adequately
advance
advanced
advances
advantage
affording
against
aggressive
aid
alarm
albeit
all
allocate
allocation
alone
also
although
among
and
annual
another
any
apart
applicable
application
applications
applied
approaches
architectural
architecturally
architecture
architectures
are
area
arena
array
ascertain
aspect
aspects
associated
atomic
attempting
attract
attributes
augment
authenticated
auxiliary
availability
available
avoid
background
balance
bandwidth
based
basic
basis
become
becoming
been
beginning
behavior
being
benefit
benefits
best
better
between
beyond
blocks
both
bottlenecks
bound
brief
broadly
building
burden
burton
bus
buses
business
but
cache
caches
caching
can
capabilities
capability
capable
capacity
carefully
case
catagory
central
cesdis
chair
challenge
challenges
challenging
channels
characteristics
checkpointing
choice
choices
circumvent
class
clear
clocks
clustering
clusters
coarse
code
codes
coherence
combine
combined
commercial
commercially
commodity
common
communication
communications
community
company
compared
compensate
competing
complex
complicate
complicated
components
compounding
compressed
compromised
computation
computational
computations
computer
computers
computing
concerns
conclusions
confidence
configuration
configurations
conflict
confronting
conjunction
consequence
considerations
considered
consistent
constituency
constraining
contention
context
continue
continues
control
conventional
cost
costs
counterparts
coupled
critical
current
cycle
cycles
data
database
date
decisions
dedicated
degree
deliver
delivered
demands
denominator
depend
derived
design
designer
designers
designs
determine
develop
developed
developers
developing
development
devices
devised
difficult
difficulties
dimension
direct
directions
directly
distinguishes
distributed
does
domain
dominant
down
dramatically
driven
driving
due
dynamic
easily
economic
economy
effective
effectively
effectiveness
effects
efficiency
efficient
either
eliminate
emboddied
emerged
emerging
employ
employed
employing
enable
enabled
encourage
end
engaging
enhance
enhanced
enhancements
ensembles
ensure
ensuring
entering
entirely
entities
environment
environments
envisioned
especially
essential
establish
even
evolution
examine
examples
exceeded
exchange
exclusive
execution
expanding
expected
expensive
experimental
exploit
exploited
exploiting
expose
extend
facilitate
facilities
facility
factor
factors
failure
fall
falls
fault
favored
features
feedback
few
file
finally
findings
flexibility
flexible
flow
follow
following
for
force
forces
foremost
forms
fortunately
forums
found
frame
from
full
functional
functionality
fundamental
future
gain
gained
gains
general
generality
generally
generation
generations
given
global
good
grain
granularity
greatly
group
growing
grown
grows
guarantee
hardware
harness
harvesting
has
have
health
heterogeneous
high
highly
historicly
home
hpc
ideal
identified
idle
image
impact
implemented
implemention
importance
important
impose
imposed
imposes
improved
improving
inadequate
inadequately
include
included
including
incorporate
incorporated
incorporating
increase
increased
incurred
independent
indicate
industry
influence
information
inhibits
initiated
innovative
instruction
insufficient
integrated
integrating
interdisciplinary
interfaces
interval
into
intrinsic
introduction
investment
involved
issue
issues
isv
its
itself
just
key
laptop
large
largely
latencies
latency
lead
least
length
lengthy
less
level
leverage
like
likely
limit
limitations
limited
limiting
little
local
locality
long
longer
low
lower
lowest
lucrative
made
magnitude
main
mainstream
major
make
making
manageable
managed
management
managing
manifestations
many
mapped
mapping
market
marketability
markets
mass
may
mbyte
means
meantime
measured
mechanisms
meet
memory
merge
message
messages
messaging
methods
metrics
microprocessor
microprocessors
mid
might
migrate
mind
minimal
minimize
minimum
mitigate
mode
models
moderate
modern
modest
monitoring
more
most
move
movement
moving
much
multiple
multiprocessor
multiprocessors
must
name
nature
near
necessity
need
needs
negative
networking
networks
never
new
next
non
nor
normal
not
number
numbers
object
objective
objects
observable
observing
occur
offer
offered
offerings
offset
often
one
only
open
opening
operand
operating
operation
operational
opportunities
opportunity
optimal
order
orders
other
others
out
outside
overall
overhead
overriding
parallel
parallelism
parallelization
part
partially
particular
particularly
partitioning
parts
party
passing
past
peak
per
perform
performance
performed
performing
periods
permit
persistent
personal
pervade
pervasive
phase
place
placement
planning
platforms
poorly
portability
possibility
possible
potential
practical
practice
preclude
precluding
prefetch
premium
present
presented
presenting
prevalent
previous
primarily
primary
primitive
primitives
principal
problem
problems
processed
processing
processor
processors
product
products
program
programmability
programmable
programmer
programmers
programming
progress
promise
protected
proven
provide
provided
proximity
purpose
quality
quantify
raise
range
rapid
rapidly
rate
rather
raw
realization
realize
realized
realm
reasons
recent
recently
recommendations
recovery
reduce
reduced
related
relationship
relative
relegating
reliability
rely
relying
remote
remove
removing
replace
report
represent
represents
requesting
require
required
requirement
requirements
requires
requiring
researchers
resolve
resolved
resource
resources
responding
response
restrict
result
resulted
resulting
reveal
revenues
reviewed
risky
robust
robustness
rountines
routine
sales
same
scalability
scalable
scale
scales
scaling
scavenging
scheduling
science
scientific
scientists
second
sector
sectors
see
seek
seeks
seen
sensitive
sensitivity
separate
seriously
server
services
set
share
shared
sharing
shortest
should
shown
signals
significant
significantly
similarly
simple
simplified
single
size
sized
small
smith
smp
smps
snooping
software
solution
solutions
some
sophisticated
sophistication
sought
space
span
special
specialty
spectrum
speed
spread
standardized
starvation
starvations
state
statistics
sterling
still
storage
stores
stream
streaming
streams
strongly
structure
structures
submachine
substantial
subsystem
subsystems
successfully
such
sufficient
suited
supercomputers
support
supported
supporting
sustainable
sustained
switch
symbiotic
symmetric
synchronization
synchronized
system
systems
take
target
targetted
task
tasks
techniques
technologies
technology
tera
terms
than
that
the
their
them
then
there
thereby
therefore
these
they
third
this
thomas
those
through
tightly
time
times
today
together
too
tools
total
towards
tradeoffs
transition
treated
trends
tuning
two
type
types
ubiquitous
ultimate
unacceptable
undue
uniprocessor
uniprocessors
unique
unit
units
unlikely
until
upper
usable
usage
use
useability
useful
user
using
usra
usually
utilization
value
variability
variant
various
vendor
vendors
verify
very
viable
video
virtual
virtualization
virtualized
volume
waiting
way
ways
weaknesses
weighed
well
were
when
where
whether
which
while
whole
whose
wide
will
with
within
without
work
working
workload
workstation
workstations
would
year
years
yet
Description{30}: Report of Working Group 4
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/nren/atdnetgraphic.html
Update-Time{9}: 827948658
url-references{114}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/nren/atdnet.html
mailto:lpicha@cesdis.gsfc.nasa.gov
title{6}: ATDNet
keywords{45}: curator
larry
page
picha
previous
return
the
images{39}: graphics/ATDnet.gif
graphics/return.gif
headings{82}: Application Technology Demonstration Network (ATDNet)
Return
to the PREVIOUS PAGE
body{129}:
Point of Contact: Pat Gary
NASA Goddard Space Flight
Center
(301) 286-9539
pat.gary@gsfc.nasa.gov
curator: Larry
Picha
MD5{32}: b927de1848a6a71a2b8d6181ce68411a
File-Size{3}: 563
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{6}: ATDNet
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/visitor/oldTXTstuff.html
Update-Time{9}: 827948830
url-references{39}: #br
#hdr
#st
#id
#dl
#ol
#ul
index.html
title{21}: Basic Text Formatting
keywords{109}: and
back
breaks
definition
headers
indenting
index
line
lists
ordered
paragraphs
simple
styles
the
unordered
images{31}: shoelacebar.gif
shoelacebar.gif
headings{179}: Basic Text Formatting
Paragraphs and Simple Line Breaks
Headers
Styles
Indenting
Definition Lists
Ordered Lists
Unordered Lists
Back to the index
Paragraphs and Simple Line Breaks
body{592}:
(If you don't see a solution to a formatting
problem you have, try checking my HTML 2.0 Extensions section of the
index.)
Unless you specify otherwise, HTML text will wrap
in the browser window unaided by text formatting tags. But it does not
recognize carriage returns, so you have to format those yourself.
The
most commonly used tags are probably those used for line breaking
within chunks of text. There are two kinds of tags generally used for
this type of text formatting: the simple line break tag <BR> and the
paragraph tag <P>. The line break tag <BR> acts like a simple carriage return:
MD5{32}: 2eeb2773d870a57a2d59c357b85b0d4d
File-Size{4}: 8067
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{21}: Basic Text Formatting
}
@FILE { http://cesdis.gsfc.nasa.gov/pub/linux/drivers/vortex.patch
Update-Time{9}: 820866793
Time-to-Live{8}: 14515200
Refresh-Rate{7}: 2419200
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Version{3}: 1.0
Type{5}: Patch
File-Size{4}: 3081
MD5{32}: 37d0a4789a0c65aab0a6226c235b850b
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/cas.hp/graphics/
Update-Time{9}: 827948816
url-references{97}: /hpccm/cas.hp/
cas.gif
cas.gif%20copy
hpcc.header.gif
hpccsmall.gif
nasa.meatball.gif
wavebar.gif
title{32}: Index of /hpccm/cas.hp/graphics/
keywords{74}: cas
copy
directory
gif
header
hpcc
hpccsmall
meatball
nasa
parent
wavebar
images{133}: /icons/blank.xbm
/icons/menu.gif
/icons/image.gif
/icons/text.gif
/icons/image.gif
/icons/image.gif
/icons/image.gif
/icons/image.gif
headings{32}: Index of /hpccm/cas.hp/graphics/
body{284}:
Name               Last modified     Size  Description
Parent Directory   09-Jun-95 11:10   -
cas.gif            15-Jun-95 14:44   11K
cas.gif copy       23-Mar-95 14:58   17K
hpcc.header.gif    18-May-95 13:28   1K
hpccsmall.gif      23-May-95 11:55   2K
nasa.meatball.gif  08-Nov-94 10:12   3K
wavebar.gif        08-Nov-94 10:12   2K
MD5{32}: fc91614c4787000f870fd61e0cab64d9
File-Size{4}: 1174
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{32}: Index of /hpccm/cas.hp/graphics/
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/sys.sw/vrfast.html
Update-Time{9}: 827948656
url-references{115}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/sys.software.html
mailto:lpicha@cesdis.gsfc.nasa.gov
title{59}: Initiated Use of VR-FAST within
ESS Investigator Community
keywords{45}: curator
larry
page
picha
previous
return
the
images{78}: http://cesdis.gsfc.nasa.gov/hpccm/hpcc.graphics/vrfast.gif
graphics/return.gif
headings{88}: Initiated Use of VR-FAST within
ESS Investigator Community
Return
to the PREVIOUS PAGE
body{2642}:
Objective: Develop methods to analyze high rate/high volume
data generated by ESS Grand Challenges.
Approach: Investigate
turn-key virtual environment compatible with existing NASA science
community visualization methods and software. Chose to adapt Flow
Analysis Software Tool kit (FAST) developed and maintained by NAS/ARC
to a virtual environment.
Accomplishments: Received delivery of
SGI Onyx (2 processors, 2 Reality Engine graphics subsystems) and
Fakespace BOOM 3C at GSFC. Ported Virtual Reality FAST (VR-FAST) to SGI
Onyx and incorporated use of the BOOM. Initiated use of VR-FAST within the
ESS investigator community (e.g., Richard Rood from the GSFC Laboratory
for Atmospheres, Michele Rienecker from the GSFC Laboratory for
Hydrospheric Processes). Acquired GSFC expertise in VR-FAST and
associated devices to allow for quick modifications initiated through
investigator responses. Initiated use of virtual environment devices
with VIS-5D, a visualization package developed at the University of
Wisconsin with support from NASA Marshall Space Flight Center.
Demonstrated VR-FAST to John Klineberg/GSFC Center Director, Lee
Holcomb/Code R, and France Cordova/NASA Chief Scientist.
Significance: The job of the NASA scientist increasingly involves
sifting through mountains of acquired and computationally generated
data. The essence of virtual reality is to deal with the data in the
same way that you deal with the actual world - through the visual
cortex and motor responses, rather than through artificial interfaces.
The creation of an operational virtual reality environment for rapid
data searching and manipulation is required to validate the theory and
transfer it to the NASA science community.
Status/Plans: Phase II
of the VR FAST project is currently being planned and will be
implemented in the upcoming year. This phase will bring a marked
increase in the capabilities available to investigators using VR FAST.
Specific plans include the following: Incorporate additional data
exploratory capabilities within VR-FAST to enhance scientific discovery
opportunities. Continue to receive and incorporate feedback from ESS
investigators for the purpose of evaluating and enhancing VR FAST and
virtual environments in general. Receive virtual instrument and gesture
glove at GSFC to allow access to additional VR FAST capabilities.
Continue to analyze the application of virtual environment technology
to other data analysis software (e.g., VIS-5D, SGI Explorer).
Point of Contact: Dr. Horace Mitchell
Goddard Space Flight
Center/Code 932
hmitchel@vlasov.gsfc.nasa.gov
(301) 286-4030
curator: Larry Picha
MD5{32}: 92e2cc31d859ae7a1186af05e86ff392
File-Size{4}: 3396
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{32}: Initiated Use of VR-FAST within
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/factsheets.html
Update-Time{9}: 827948599
url-references{1137}: mailto:lpicha@cesdis.gsfc.nasa.gov
http://cesdis.gsfc.nasa.gov/
#intro
#speed
#components
#cas
#ess
#iita
#ree
#contrib
#tera
#imp
#resource
#contents
#contents
#cas
#ess
#ree
#iita
#contents
http://cesdis.gsfc.nasa.gov/hpccm/cas.hp/cas.html
mailto:feiereis@ames.arc.nasa.gov
mailto:p_hunter@aeromail.hq.nasa.gov
#contents
http://cesdis.gsfc.nasa.gov/hpccm/ess.hp/ess.html
mailto:fischer@jacks.gsfc.nasa.gov
mailto:p_hunter@aeromail.hq.nasa.gov
#contents
http://cesdis.gsfc.nasa.gov/hpccm/iita.hp/iita.html
mailto:William_Likens@qmgate.arc.nasa.gov
mailto:p_hunter@aeromail.hq.nasa.gov
#contents
http://cesdis.gsfc.nasa.gov/hpccm/ree.hp/ree.html
mailto:leon@telerobotics.Jpl.Nasa.Gov
mailto:davidson@telerobotics.Jpl.Nasa.Gov
mailto:p_hunter@aeromail.hq.nasa.gov
#contents
#contents
http://cesdis.gsfc.nasa.gov/petaflops/definition.html
#contents
#contents
http://cesdis.gsfc.nasa.gov/hpccm/accomp/94accomp/94accomps.html
http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/main94.html
http://cesdis.gsfc.nasa.gov/hpccm/hpcc.nasa.html
http://www.hpcc.gov/blue96/index.html
http://www.hpcc.gov/imp95/index.html
http://www.hpcc.gov/
#contents
title{15}: HPCC Fact Sheet
references{682}: "The Grand Challenge in cosmology is not only to collect the data needed
for a deep view into the formation of the cosmos... but also to create
an accurate model of the cosmos..."
"[the REE project] addresses critical needs to both the Offices of Space
Science and Mission to Planet Earth. A new generation of on-board
computers will enhance scientific return, reduce operations costs, and
mitigate down link limitations...."
The technologies used in the experiments, coupled with those in support
of the National Research and Education Network, lead to high-speed
network communications that can be delivered commercially at one-tenth
of today's cost of providing the same service.
keywords{1478}: accelerate
accelerating
accomplishments
aeromail
aeronautics
aerosciences
alkalai
america
american
ames
and
annual
another
antarctica
application
applications
arc
blue
book
cas
center
century
cesdis
change
children
comments
communications
community
compare
competitiveness
component
components
computational
computing
contents
contributions
convergence
coordination
cray
critical
curriculum
data
davidson
developed
development
directly
documentation
earth
educational
enabled
engineering
ess
every
excellence
expects
experimentation
exploration
feiereis
feiereisen
fischer
flops
fold
for
formation
foundation
from
future
galaxy
giga
gigaflops
global
gov
graphic
great
gsfc
has
high
home
hpcc
hunter
iita
implementation
importance
increase
industry
information
infrastructure
instructions
internet
into
introduction
isolated
jacks
james
john
jpl
large
larry
later
ldp
leon
level
likens
live
lpicha
meet
models
more
multiple
nasa
nation
national
new
next
observatories
oct
office
our
over
page
paul
performance
petaflops
picha
plan
planned
play
please
pointers
previous
program
project
provides
public
qmgate
quality
quest
questions
ree
related
remote
report
requirements
resources
return
revision
role
scale
science
sciences
selected
send
service
shaping
simulated
sites
space
speed
strengthening
structure
supercomputer
supports
table
taken
technologies
technology
telerobotics
tera
teraflops
the
this
tools
top
unique
use
vitality
web
welcome
wide
william
with
world
year
your
images{703}: hpcc.graphics/hpcc.header.gif
hpcc.graphics/lites2.gif
hpcc.graphics/aeroplane.gif
hpcc.graphics/lites2.gif
hpcc.graphics/return.gif
hpcc.graphics/lites2.gif
hpcc.graphics/return.gif
hpcc.graphics/lites2.gif
hpcc.graphics/return.gif
hpcc.graphics/lites2.gif
hpcc.graphics/cas.gif
hpcc.graphics/return.gif
hpcc.graphics/lites2.gif
hpcc.graphics/ess.gif
hpcc.graphics/return.gif
hpcc.graphics/lites2.gif
hpcc.graphics/gonzaga1.gif
hpcc.graphics/return.gif
hpcc.graphics/lites2.gif
hpcc.graphics/return.gif
hpcc.graphics/lites2.gif
hpcc.graphics/return.gif
hpcc.graphics/lites2.gif
hpcc.graphics/return.gif
hpcc.graphics/lites2.gif
hpcc.graphics/return.gif
hpcc.graphics/lites2.gif
hpcc.graphics/return.gif
headings{939}: The National Aeronautics and Space Administration's (NASA) High
Performance Computing and Communications (HPCC) Program
Welcome
to NASA HPCC!
Last Revision: NOV 20, 1995 (ldp)
Introduction
RETURN to the Table of Contents
The Speed
of Change
RETURN to the Table of Contents
Components of the NASA HPCC Program
RETURN to the Table of Contents
Computational Aerosciences (CAS) Project
RETURN to the Table of Contents
Earth and Space Sciences (ESS) Project
RETURN to the Table of Contents
Information Infrastructure Technology and Applications (IITA)
RETURN to the Table of Contents
Remote Exploration and Experimentation (REE) Project
RETURN to the Table of Contents
NASA HPCC Program Contributions
RETURN to the Table of Contents
Teraflops: What is it??
RETURN to the Table of Contents
Importance of NASA's Role in HPCC
RETURN to the Table of Contents
Resources: pointers to more HPCC related documentation
RETURN to the Table of Contents
body{26750}: BACKGROUND="hpcc.graphics/backdrop.gif">
To accelerate the
development and application of high-performance computing technologies
to meet NASA's aeronautics, Earth and space sciences, and engineering
requirements into the next century.
You're here because you need
or want an explanation and overview of the NASA HPCC Program, its
mission, and how it implements and utilizes taxpayer assets.
INSTRUCTIONS: You may click on the Table of Contents item (below)
you're interested in and go directly to that subject. You do have the
option of scrolling through the entire document which is organized
according to the Table of Contents. You may return to your starting
point by clicking on the ''back'' option of your browser (i.e. Mosaic
or Netscape) at any time. Please send your comments and/or questions
directly to Larry Picha (lpicha@cesdis.gsfc.nasa.gov) at the Center of
Excellence in Space Data and Information Sciences. Previous Revision:
Oct 3, 1995 (ldp)
Table of Contents
Introduction The Speed
of Change Components of the NASA HPCC Program
Computational
Aerosciences (CAS) Project Earth and Space Sciences (ESS) Project
Information Infrastructure Technology and Applications (IITA) component
Remote Exploration and Experimentation (REE) Project NASA HPCC Program
Contributions Teraflops : What is it?? Importance of NASA's Role in the
National HPCC Program Resources: pointers to more HPCC related
documentation
In recognition of the critical importance of
information technologies, the United States Government created the High
Performance Computing and Communications (HPCC) Program in 1991. The
goal of the Program was to foster the development of high-risk,
high-payoff systems and applications that will most benefit America.
The NASA HPCC program is a critical component of this government-wide
effort; it is dedicated to working with American businesses and
universities to increase the speed of change in research areas that
support NASA's aeronautics, Earth, and space missions. By investing
national resources in the NASA HPCC Program, America will be able to
maintain its worldwide leadership position in aerospace, high-speed
computing, communications, and other related industries. Although the
High Performance Computing and Communications budget is a small
percentage of the NASA budget, it has a significant impact on the
Agency's mission, as well as on U.S. industry. NASA leads the planning
and coordination of the software element of the Federal High
Performance Computing and Communications (HPCC) Program and is also an
important participant in the National Information Infrastructure
initiatives. NASA's HPCC Program will: Further gains in U.S.
productivity and industrial competitiveness - especially in the
aeronautics industry; Extend U.S. technology leadership in high
performance computing and communications; Provide wide dissemination
and application of HPCC technologies; and Facilitate the use and
technologies of a National Information Infrastructure (NII) -
especially within the American K-12 educational systems.
As
we stand on the threshold of the 21st century, change has become a
constant in our lives. We live in a time of unprecedented social,
political, and technological change and advancement. For many
Americans, the rate of change has accelerated to the point where it is
nearly overwhelming. It took four hundred years between the development
of movable type and the creation of the first practical typewriter.
Less than one hundred years later came the development of the word
processor. Now, if you buy a personal computer, the computer seems to
be behind the technology curve before you even carry it home from the
store. Many American business communication tools that are taken for
granted today, such as FAX machines, electronic mail, pagers, and
cellular phones, were unknown or generally unavailable just ten years
ago. At no time in history have humans been required to process
information from so many different sources at once. There can be no
doubt that in the late twentieth century, the advance of technology has
reached a sort of critical mass that is propelling us headlong into a
future that was unimaginable a generation ago. The rapid development of
computers and communications has ''shrunk'' the world. The United
States is an active participant in a worldwide economy. In this new
''global village,'' the rapid movement of information has made the
technological playing field for most industrialized nations very
competitive. For the first time in history, the means of production,
the means of communication, and the means of distribution are all based
on the same technology -- computers. A unique interdependence now
exists among advanced information technologies. Each new innovation
allows existing industries to operate more efficiently, while at the
same time, opens up new markets for the product itself. Individuals,
corporations, industries -- even entire economies -- depend more than
ever on information technologies. America's future and the future of
each citizen will be deeply affected by the speed with which
information is gathered, processed, analyzed, secured, and
disseminated. NASA has a long history of developing new technologies
for aerospace missions that later turn out to have far-reaching effects
on society through civilian applications. For instance, satellites
originally developed for space exploration and defense purposes now
carry virtually all television and long-distance telephone signals to
our homes. By accelerating the convergence of computing and
communications technologies, the NASA HPCC Program expects to play
another unique role in shaping the future of every American.
Four components comprise NASA's HPCC Program: Computational
AeroSciences (CAS), Earth and Space Sciences (ESS), Remote Exploration
and Experimentation (REE), and Information Infrastructure Technology and
Applications (IITA).
The goal of the CAS project is to accelerate
the development, availability and use of high-performance computing
technology by the U.S. aerospace industry, and to hasten the emergence
of a viable commercial market for hardware and software vendors to
exploit this lead. The goal of the ESS project is to demonstrate the
potential afforded by high-performance computing technology to further
our understanding and ability to predict the dynamic interaction of
physical, chemical, and biological processes affecting the
solar-terrestrial environment and the universe. The goal of the REE
project is to develop and demonstrate a space-qualified computing
architecture that requires less than ten watts per billion operations
per second. The goal of the IITA component in the NASA HPCC Program is
to accelerate the implementation of a National Information
Infrastructure through NASA science, engineering and technology
contributions. Fact sheets on each of these projects are included in
this World Wide Web version of the brochure.
The CAS Project is focused
on the specific computing requirements of the United States aerospace
community and has, as its primary goal, to accelerate the availability
to the United States aerospace manufacturers of high performance
computing hardware and software for use in their design processes. The
U.S. aerospace industry can effectively respond to increased
international competition only by producing across-the-board better
quality products at affordable prices. High performance computing
capability is a key to the creation of a competitive advantage, by
reducing product cost and design cycle times; its introduction into the
design process is, however, a risk to a commercial company, one that NASA
can help mitigate by performing this research. The CAS project
catalyzes these developments in aerospace computing, while at the same
time pointing out the future way to aerospace markets for domestic
computer manufacturers. The key to the entire CAS project is the
aerospace design and manufacturing process. These are the procedures
that a manufacturer carries out in order to move from the idea of a new
aircraft to the roll-out of a new aircraft onto the runway. Computer
simulations of these aircraft vastly shorten the time necessary for
this process. These computer simulations, or applications as they have
come to be called, need immensely fast computers in order to deliver
their results in a timely fashion to the designers. CAS supports the
development of these machines by acquiring the latest experimental
machinery from domestic computer manufacturers and making them
available as testbeds to the nationwide CAS community. The computer
manufacturers and independent software vendors help out by providing
system software that forms the glue between the applications programs
and the computer hardware. These are computer programs like operating
systems that make the computer function. The CAS community that carries
out this work consists of teams of workers from the major aerospace
companies, from the NASA aeronautics research centers and from American
universities. The focus of the project is derived through extensive
interactions with business managers of the major aerospace companies
and by consultation with university researchers and NASA management.
The project delivers applications and system software that have been
found through its research to show an enhancement to the design
process, and provides a laboratory by which the computer manufacturers
can identify weaknesses and produce improvements in their products. If
you are interested in additional information on this project or related
activities, you may access the CAS Home Page on the World Wide Web or
contact the following NASA officials: William Feiereisen
(feiereis@ames.arc.nasa.gov)
Project Manager, Computational
Aerosciences Project
High Performance Computing and Communications
Office
NASA - Ames Research Center, Moffett Field, California 94035
(415) 604-4225 Paul Hunter (p_hunter@aeromail.hq.nasa.gov)
Program
Manager, High Performance Computing and Communications Program
High
Performance Computing and Communications Office
NASA - Headquarters,
Washington, DC 20546
(202) 358-4618
- George Lake,
University of Washington
The Earth, its relationship to the Sun and
Solar System, and the universe in its totality are the domain of the
Earth and Space Sciences Project. This effort is employing advanced
computers to further our understanding of and ability to predict the
dynamically interacting physical, chemical, and biological processes
that drive these systems. Its ultimate goal is building an assortment
of computer-simulated models that combine complex Earth and space
science disciplines. High-resolution, multidisciplinary models are
crucial for their predictive value and for their capacity to estimate
beyond what we can measure and observe directly. For example, we cannot
''see'' the beginnings of the universe or even the birth of our own
planet, but simulation can provide insight into how they evolved by
filling in the gaps left by telescopes or geological records. Current
ESS Project investigations include probing the formation of the
large-scale universe; modeling the global climate system in the past,
present and future; ascertaining the dynamics of the interior of stars;
and indexing and searching through massive Earth-observational data
sets. Determining the pertinent interactions, their time scales, and
the controls that exist in such systems requires computing power at the
highest levels of performance. An objective of the ESS Project is to
provide the supercomputers and software tools to facilitate these
models. ''Testbed'' facilities allow access to prototype and
early-production machines, such as the Convex Exemplar SPP-1 and the
MasPar MP-2 at NASA/Goddard Space Flight Center. Other shared testbed
facilities are available throughout NASA and at other U.S. government
agencies and universities. Much of the Earth and space sciences relies
on data collected from a panoply of satellites and telescopes. There
are already massive volumes of data on hand, and one trillion bytes a
day will be collected by NASA's Earth Observing System alone. The ESS
Project is therefore engaged in developing innovative methods for
analysis; these approaches range from visualization and virtual reality
to ''intelligent'' information systems and assimilating data into
models. Additionally, higher-resolution sensors will require entirely
new data retrieval techniques. These endeavors, together with those in
modeling, will in turn provide feedback to the system vendors about the
effectiveness and limitations of their products, helping them to
improve subsequent generations. If you are interested in additional
information on this project or related activities you may access the
ESS Home Page on the World Wide Web or you may contact the following
NASA officials: James Fischer (fischer@jacks.gsfc.nasa.gov)
Project
Manager, Earth and Space Sciences Project
High Performance Computing
and Communications Office
NASA- Goddard Space Flight Center
Code
934
Greenbelt, Maryland 20771
(301) 286-3465 Paul Hunter
(p_hunter@aeromail.hq.nasa.gov)
Program Manager, High Performance
Computing and Communications Program
High Performance Computing and
Communications Office
NASA - Headquarters, Washington, DC 20546
(202) 358-4618
The NASA IITA component is facilitating and
accelerating the implementation of a National Information
Infrastructure through NASA science, engineering and technology
contributions. This activity is responsive to the Congressional and
Presidential goals of building new partnerships between the Federal and
non-Federal sectors of U.S. society and has special emphasis on serving
new communities. The IITA component focuses on four key areas:
development of Digital Library Technology; public use of Remote Sensing
Data; Aerospace Design and Manufacturing; and, K-12 education over the
Internet. Each of these areas supports the development of new
technologies to facilitate broader access to NASA data via computer
networks. This NASA activity will foster the development of new and
innovative technology to support Digital Libraries; these are libraries
that are effectively multimedia digital (electronic) in nature. The
focus here is to support the long-term needs of NASA pilot projects
already established and for the eventual scale-up to support thousands
to millions of users widely distributed over the Internet. Remote
Sensing Data is key as this is what will comprise the Digital
Libraries. Broad public access to databases of remote sensing images
and data over computer networks such as the Internet is also essential;
NASA has established a Remote Sensing Public Access Center to manage
just such an effort. NASA is also striving to provide support for
Aerospace Design and Manufacturing through ongoing work with aircraft
and propulsion companies. This is meant to facilitate the transfer of
NASA-developed aerospace design technology to users in major U.S.
aerospace companies. NASA is supporting the transfer of sensitive
technologies through development of a secure infrastructure for
NASA-industry collaborations. Finally, activities in the area of
supporting K-12 education over the Internet will focus on developing
curriculum enhancement products for K-12 education, which build on a
core program of K-12 education programs at NASA. The result will cause
expansion of a broad outreach program to educational product developers
in academia and the private sector. If you are interested in additional
information on this project or related activities you may access the
IITA Home Page on the World Wide Web or you may contact the following
NASA officials: William Likens (William_Likens@qmgate.arc.nasa.gov)
Project Manager
Information Infrastructure Technology and
Applications
High Performance Computing and Communications Office
National Aeronautics and Space Administration
Ames Research Center
Moffett Field, California 94035
(415) 604-5699 Paul Hunter
(p_hunter@aeromail.hq.nasa.gov)
Program Manager, High Performance
Computing and Communications Program
High Performance Computing and
Communications Office
NASA - Headquarters, Washington, DC 20546
(202) 358-4618
- W. Huntress, NASA Headquarters
The
Remote Exploration and Experimentation project will develop and
demonstrate a space-qualified, spaceborne computing system architecture
that requires less than ten watts per billion operations per second.
This computing architecture will be scalable from low-powered
(sub-watt) systems to higher-powered (hundred-watt) systems that
support deep-space missions lasting ten years or more. Deep-space
missions require actual (real-time) analysis of sensor data of up to
tens of gigabits per second and independent control of complex robotic
functions without intervention from Earth. This project will: enable
and enhance U.S. spaceborne remote sensing and manipulation systems by
providing dramatic advances in the performance, reliability and
affordability of on-board data processing and control systems; extend
U. S. technological leadership in high performance, spaceborne,
real-time, durable computing systems and their applications; and, work
cooperatively with the U.S. computer industry to assure that NASA
technology is commercially available to the U.S. civil, defense and
commercial space programs, as well as for practical, day-to-day
applications. Deep space applications were selected as a primary focus
because they have stringent environmental, long-life, and low-power
constraints and requirements. Furthermore, long round-trip
communications times and low communications bandwidths require on-board
data processing and independence from people on Earth. Since
near-Earth, airborne, and ground applications are not as mass and power
limited, they can use high performance data processing and control
systems earlier than deep space missions. Applications that require
reliable, real-time responsiveness and that benefit from small size and
low power will be addressed by, and will gain from, this project.
NASA will select, in this context, intermediate applications to drive
early developments while addressing the primary focus. Some examples of
possible applications are: robots for hazardous waste clean-up,
search-and-rescue, automated inspection and flexible manufacturing,
smart portable atmospheric emission analyzers, remote Earth observing
systems with very high resolution instruments, microgravity
experiments, and automotive collision avoidance systems. The Remote
Exploration and Experimentation project is currently not active but
will resume activities in Fiscal Year 1996. If you are interested in
additional information on this project or related activities you may
access the REE Home Page on the World Wide Web or you may contact the
following NASA officials: Leon Alkalai (leon@telerobotics.Jpl.Nasa.Gov)
,
Principal Investigator
John Davidson
(davidson@telerobotics.Jpl.Nasa.Gov) ,
Technical Manager
Paul
Stolorz, Cognizant Engineer
Remote Exploration And Experimentation
Project
High Performance Computing and Communications Office
National Aeronautics and Space Administration
Jet Propulsion
Laboratory
Pasadena, California 91109
(818) 354-7508 Paul Hunter
(p_hunter@aeromail.hq.nasa.gov)
Program Manager, High Performance
Computing and Communications Program
High Performance Computing and
Communications Office
NASA - Headquarters, Washington, DC 20546
(202) 358-4618
NASA and its partner agencies are well on
their way to achieving high performance computing systems that can
operate at a steady rate of at least one trillion arithmetic operations
per second -- one teraflop. The Numerical Aerodynamic Simulation (NAS)
Parallel Benchmarks were developed to evaluate performance of parallel
computing systems for workloads typical in NASA and the aeronautics
community and are now in extensive use by several commercial and
research communities. Through a cooperative research agreement with a
consortium headed by IBM Corporation, a 160-node SP-2, installed at
NASA Ames Research Center, has achieved a 25 fold increase in
performance over a Cray Y-MP supercomputer (the fastest supercomputer
at the inception of the HPCC Program) on the NAS benchmarks and marked
the beginning of the second generation of parallel machines. A single
large high-performance computer has achieved a world record of 143
gigaflops (or 143 billion arithmetic operations per second) on a
parallel linear algebra problem. By coupling several large
supercomputers over a network, applications exhibiting half a teraflop
performance are expected to be demonstrated on the exhibition hall
floor at Supercomputing '95 in November, 1995. The Internet is the
creation of the HPCC agencies, but its recent phenomenal growth is the
result of educational, public service, and private sector investment.
NASA and the Department of Energy (DOE) are accelerating the
introduction of new commercial Asynchronous Transfer Mode (ATM)
networking technologies through acquisition of experimental 155 Mb/s
service at multiple sites in FY 1995. Using the Advanced Communications
Technology Satellite, experiments linking computers at 155 Mb/s have
been demonstrated earlier this year, with experiments at 622 Mb/s
planned for later this year. These experiments should demonstrate the
1997 metric of 100-fold increase in communications capability.
When discussing the development of new computing technologies, terms
like teraflops and gigaflops are spoken as if we should all know what
they mean. These are simply units of measurement, which measure the
speed with which a computer processes data to perform calculations.
Tera means trillion; flops is floating point operations per second.
Therefore, teraflops is a trillion floating point operations per
second. Teraflops do not exist yet, at least not at sustained rates. We
need to care about ''teraflops'' because, as high level processing capability
develops, it trickles down to many applications. A computer that performed in
''teraflops'' could provide farmers with long-range weather predictions
and thus, impact U.S. food production. Automobile manufacturers could
manipulate huge databases of information instantaneously so they could
improve and change designs in real-time, compare present to past, and
make predictions. Automobile manufacturers would save design time,
which saves money, which keeps the cost of cars down. Cars could have
powerful onboard computers with databases of maps, onboard guidance
system, and an instrument that tells drivers how much gas it will take
to get to the nearest gas station, police station, etc. What we now have
is gigaflops computing capability. Giga means billion. This is not good
enough because we cannot do all of the things that we need or want to
do. We can get a certain amount of data and process it, but not enough
to get the information we need. The difference between ''gigaflops''
and ''teraflops'' is represented by the difference between a round trip
flight between New York and Boston and a round trip flight between the
Earth and the moon. Now that you are clued in to what teraflops means
you may want to ask yourself ''what is a petaflops ?''
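As a rough worked example of these units (the arithmetic is illustrative, not
a Program result): a workload of one trillion (10^12) floating point
operations takes about 7 seconds on a machine sustaining 143 gigaflops
(10^12 / 143x10^9 is roughly 7), about 1 second at a sustained teraflops, and
about one thousandth of a second at a petaflops.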
High-performance computing is vital to the Nation's progress in science
and engineering. NASA's leadership role in the Federal HPCC program is
primarily involved in the development of software to accomplish
computational modeling needed in science and engineering.
High-performance computing is critical to strengthening the global
competitiveness of the U.S. aeronautics industry. NASA's HPCC
computational techniques have resulted in engineering productivity
improvements that enabled Pratt and Whitney to cut the design time in
half for high-pressure jet engine compressors used in the Boeing 777
while reducing fuel consumption, providing savings in both development
and operations costs. NASA's HPCC Program is working to produce rapid,
accurate predictions of the resistance caused by air flowing over an
airplane (or drag) to produce superior aircraft designs, reduced
certification costs, and improved reliability.
High-performance
computing is critical to the vitality of the Earth and space sciences
community. High-performance computing advances have enabled accurate
modeling of the Earth's atmosphere, land surface, and oceans as an
important effort in understanding observational data. Improved
numerical methods and compute power now make possible ocean simulations
that more accurately represent ocean structure and lay the foundation
for new coupled atmosphere-ocean climate models that will reduce
uncertainties associated with climate change prediction.
The HPCC
Program has enabled new models of galaxy and large scale structure
formation to be developed and simulated to compare with data from
NASA's Great Observatories. These new theories are substantially
altering our understanding of the formation of stars, galaxies, and the
universe.
NASA's Information Infrastructure Technology and
Applications Program supports the development of the National
Information Infrastructure and provides quality educational tools and
curriculum to our nation's children. This program has supported four
Live from Antarctica events over the Internet and the PBS network,
produced the well-received Global Quest video on using the
Internet for education, established six major national Digital Library
Testbeds jointly with NSF and the Advanced Research Projects Agency
(ARPA), established 26 cooperative agreements and grants for Public Use
of Earth and Space Science Data over the Internet , and developed and
demonstrated several low cost approaches for establishing Internet
connectivity in American K-12 schools. The results of NASA's program
are now in use in thousands of schools throughout the country.
Since you have access to the World Wide Web and use Mosaic (or
some other Web browser application), we encourage you to go ahead and
take a look at more graphic descriptions of NASA's HPCC accomplishments
at the following site: Selected 1994 Program Accomplishments for the
NASA HPCC Program: isolated, top-level accomplishments taken from the
NASA HPCC 1994 Annual Report
Additional information on the NASA HPCC
Program can be accessed from the NASA HPCC Office Home Page .
Information on Federal HPCC objectives and accomplishments is also
available in greater detail: High Performance Computing and
Communications: Foundation for America's Information Future (FY 1996
Blue Book) The FY 1995 Implementation Plan for HPCC Home Page for the
National Coordination Office for High Performance Computing and
Communications
MD5{32}: 07f6b8ee7a283d16cee1dce77b4c8ebc
File-Size{5}: 34351
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{15}: HPCC Fact Sheet
}
@FILE { http://cesdis.gsfc.nasa.gov/PAS2/wg8.html
Update-Time{9}: 827948619
url-references{156}: http://cesdis.gsfc.nasa.gov/
#top
/PAS2/index.html
http://cesdis.gsfc.nasa.gov/cesdis.html
/pub/people/becker/whoiam.html
mailto:becker@cesdis.gsfc.nasa.gov
title{41}: Data Parallel and Shared Memory Paradigms
keywords{512}: additional
advanced
and
application
applications
bandwidths
becker
between
broad
cesdis
characterize
continue
cost
data
develop
development
document
donald
effort
encourage
for
fund
goals
gov
gsfc
hardware
independent
index
inexpensive
interaction
joint
latency
longer
maximize
memory
minimize
nasa
networking
paradigms
parallel
pasadena
performance
portable
products
programming
programs
prototypes
pursuing
reduction
research
researchers
shared
simd
software
support
system
term
this
top
vendors
work
workshop
head{14243}: Center of Excellence in Space Data and Information Sciences.
Data
Parallel and Shared Memory Paradigms The members of this working group
were to take data parallel and shared
memory paradigms into account in
formulating a set of four action items and
in articulating responses
to the five questions posed to the working groups.
Recommendations
Develop standard APIs and reference implementations for a portable set
of
user level runtime support to help application software developers
to port
codes to a range of parallel architectures and workstation
networks. This
software would target architectures that do not have
strong hardware shared
memory support. It appears that the high end
parallel market is still too small to
support a healthy base of
independent software vendor program
development. This leads us to
recommend the continued development of
application programmer
interfaces (APIs) that can be employed by systems
software developers
and application developers so that a single application
development
effort can lead to software that targets both high and low
end
multiprocessors as well as workstation networks. MPI, PVM and HPF
are
important examples of APIs that have been implemented on a wide
range of
platforms.
The new constructs would include message passing
constructs (such as
MPI, PVM), remote puts and gets, and parallel threads.
In addition, the API
should support shared address spaces on a range
of parallel architectures
and workstation networks.
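For concreteness, a minimal sketch (an assumed example, not part of the
working group text) of the level of message passing construct such a portable
API standardizes; the same source could be retargeted to PVM or to a put/get
layer:

    /* Point-to-point message passing with MPI. */
    #include <stdio.h>
    #include <string.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        char msg[64];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            strcpy(msg, "hello from rank 0");
            MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(msg, (int)sizeof(msg), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received: %s\n", msg);
        }
        MPI_Finalize();
        return 0;
    }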
Develop portable
networking software to minimize application to
application latency
(and maximize bandwidths) and develop inexpensive
hardware support for
latency reduction. Encourage joint development work by
vendors and
researchers to develop advanced prototypes/products. We
expect that it
will be possible to build on advances in commodity
networking
technology to design software (and inexpensive hardware) to
develop
inexpensive modest sized workstation networks with
performance
characteristics that resemble currently available medium
grained parallel
machines. While it seems unlikely that workstation
networks will replace
high end parallel architectures, performance
optimized workstation networks
can provide a significant market for
parallelized ISV applications and
parallel system software.
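A minimal sketch of measuring application-to-application latency (an assumed
two-process ping-pong written with MPI; message size and iteration count are
illustrative), where half the round-trip time approximates the one-way
latency:

    /* Ping-pong latency measurement between two processes. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        char byte = 0;
        const int iters = 1000;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("approximate one-way latency: %.1f microseconds\n",
                   (t1 - t0) / (2.0 * iters) * 1.0e6);
        MPI_Finalize();
        return 0;
    }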
The
networks connecting PCs and Workstations are getting better.
Latency
is going down and bandwidth is going up. Low-cost 100Mbit,
collision-free
Ethernet hardware is now widely available.
More advanced
networks, such as ATM or Myrinet, offer even higher
bandwidth and lower
latencies. Although there is still a large gap
between these networks and
the processor interconnects in MPPs, the
gap is closing. These high-speed
networks have the potential to bring
clusters of workstations much closer to
the level of MPPs.
Unfortunately, current network interfaces and operating system
software
are designed for networks with much higher latency and lower
bandwidth.
These interfaces and systems are currently the bottlenecks
through which
parallel communication must squeeze.
More research and
collaboration between researchers and vendors is
necessary to develop
low latency network interfaces and system software
capable of
exploiting this hardware. Much attention is already being
focused on
ATM and video. However, the demands arising from using these
networks
as an interconnect for a workstation cluster are quite
different
(i.e., latency is more important) and deserve future
study.
Encourage interaction between independent software vendors
and
system software researchers. While system software researchers
interact
extensively with scientists and engineers in academia and at
national
laboratories, it appeared that there was much less
interaction between
independent software vendors and systems software
researchers. HPC grants
and contracts should encourage match-making
between researchers, ISVs and
end users who are constructing
commercially important applications.
Increased interaction between the
independent software vendor community and
the systems software
research community would be likely to encourage systems
software
researchers to focus on problems that are of particular interest
to
ISVs. For instance, an ISV that sells application software for
parallel
machines must develop a code that runs on a number of serial
and parallel
platforms. Furthermore, each time the software is
upgraded, the upgrade must
be carried out on each platform.
Interactions with ISVs might also help to
focus the efforts of systems
software researchers on new types of
applications. High performance
computing is potentially applicable to a wide
variety of application
areas; for instance, the NSF sponsored workshop on
HPCC and Health
Care (Washington DC, Dec 1994) identified many potential
applications
of high performance computing associated with health care
delivery.
We were not able to get a good handle on the state of
commercial
parallel computing. Despite the shake-out among the
parallel machine
vendors, there are many confirmed and even more
anecdotal descriptions of
parallel machines being used in various
industries. We recommend that a
quantitative survey be generated to
quantify the degree to which parallel
machines are used in the private
sector, and to characterize the targeted
applications. We would also
recommend characterizing the degree to which
companies internally
develop different types of parallel software.
Continue to fund broad
research programs pursuing longer term
goals. Improving the
productivity of parallel software is a difficult
and important problem
that justifies long-term research funding. Today we
have stable tools
that can mask the lowest levels of machine differences
from the users;
in the future, we will have higher level tools to assist
general
practitioners with difficult algorithmic issues in
parallel
processing. It is only by encapsulating the know-how of
parallel
programming in effective tools that parallel computing will become
widespread.
Eventually, we envision that the application programmer
will operate
within a problem solving environment, where he directly
manipulates his data
using domain-specific concepts. These high-level
programming environments
will not be built directly upon low-level
hardware functions, but will
themselves be built on the next lower
level of programming abstractions and
so forth. Thus, the hardware
design defines but the lowest level of the
hierarchy in the software
architecture. At this point, the software
research community has
developed solid low-level tools such as PVM that
provide portability
across different hardware platforms. Promising
preliminary results
have also been obtained for more advanced tools such as
High
Performance FORTRAN, efficient parallel C++ class libraries, automatic
parallelization and locality optimizations that work on
entire programs.
Other examples include domain-specific programming
languages, and
interactive programming environments that combine the
machine expertise in
the compiler with the application knowledge of
the programmer.
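As a concrete illustration of the kind of portability layer PVM provides,
the following is a minimal sketch in C of a PVM 3 master/worker exchange.
It is not taken from any project mentioned above; the spawned task name,
the message tag, and the payload are arbitrary choices for this example,
and the PvmDataDefault (XDR) encoding is what lets the same code run
across different hardware platforms.

/*
 * Minimal PVM 3 sketch in C: the parent task spawns one worker,
 * which packs an integer and sends it back.  The task name
 * ("pvmsketch") and the message tag are arbitrary choices for this
 * illustration; PvmDataDefault encoding keeps the exchange portable
 * across heterogeneous hosts.
 */
#include <stdio.h>
#include "pvm3.h"

int main(void)
{
    int parent = pvm_parent();

    if (parent == PvmNoParent) {          /* master side */
        int wtid, answer;
        if (pvm_spawn("pvmsketch", (char **)0, PvmTaskDefault,
                      "", 1, &wtid) != 1) {
            fprintf(stderr, "spawn failed\n");
            pvm_exit();
            return 1;
        }
        pvm_recv(wtid, 1);                /* wait for the tag-1 reply */
        pvm_upkint(&answer, 1, 1);
        printf("worker %x sent %d\n", wtid, answer);
    } else {                              /* worker side */
        int value = 42;
        pvm_initsend(PvmDataDefault);     /* portable XDR encoding */
        pvm_pkint(&value, 1, 1);
        pvm_send(parent, 1);
    }
    pvm_exit();
    return 0;
}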
It is important to recognize that these more
ambitious projects will
take longer to mature. Breakthroughs and
innovations required at this level
are more likely to be the results
of small, dedicated research groups. We
must encourage innovative work
by supporting independent and even competing
research projects.
Standardization and user acceptance are important once
research
matures, but they should not be the major concerns when research is
at
the formative stage. Premature standardization can stifle
creativity.
Premature attempts to develop an immediately usable tool
can shift the
research focus away from the fundamental issues; also,
without a solid
foundation, the tool is doomed to be fragile.
Additional effort to characterize SIMD applications,
programming
paradigms and cost-performance. There exists a substantial
community of
researchers who make productive use of SIMD
architectures. This community
was not well represented at Pasadena;
several members of this working group
have volunteered to carry out a
further consideration of SIMD and in
particular to focus on the
commonality and needs of applications that exist across the SIMD
community, the determination of a common programming paradigm across
SIMD and clustered SIMD architectures, and an assessment of the
cost/performance of
applications on SIMD architectures. SIMD is the oldest parallel
processing paradigm. Its roots go back as
far as the 1800s when it was
envisioned that weather prediction would be
calculated by an
auditorium full of human computers (an occupation of the
time)
orchestrated by a computer conductor who directed the simulation.
This
paradigm was manifested electronically in the Solomon computer in
the
early 1960s. It was followed by such successful machines as the
Goodyear
Staran, ICL DAP, Goodyear/NASA MPP, TMC CM-1/2 series, and
currently the
MasPar MP-1 and MP-2. To those who have taken on the
challenge of highly
ordered computation, the reward has been low
product and maintenance cost,
low power consumption, small size and
high performance. Many applications
programmers who work with SIMD
architectures feel that the SIMD methodology
leads to a simplified
process of programming and debugging compared to the
currently
existing MIMD paradigms. Therefore, due to the unique view of
parallel
programming style that SIMD poses, there is a need to assemble
the existing SIMD community to address their common system
software
and hardware needs.
It is for this reason that our working group
feels that the needs and experiences of the SIMD community
should be examined further. We are proposing to convene
a meeting of the SIMD
community to determine the commonality and needs
of applications that exist
across the community. We have identified at
least 40 organizations and
individuals who have an interest in such
discussions. We intend to
emphasize the determination of a common
programming paradigm across SIMD and
clustered SIMD architectures. Due
to the importance of commercialization,
we also intend to assess the
cost/performance characteristics of
applications on these
architectures.
This forum will address such issues as paradigm,
language, programming
environment, operating system support and SIMD
architecture extensions. It
will gather together information on the
various applications that are now
being supported by SIMD architecture
extensions, as well as those
anticipated to be supported in the
future. It will assess the
cost/performance of various applications
with respect to different SIMD
architecture extensions and how they
may be scaled to tera- and peta-(f)lops
computing. It will enumerate
the various types and characteristics of
various SIMD architecture
extensions such as parallelism (fine vs coarse),
interprocessor
network complexities (mesh, wormhole, global adder, ...),
scalability, and the degree of synchronicity.
Responses to
questions: The question that was most relevant to the interests of this
cross
cutting group was question 4 involving the interaction between
System
Software and Architecture.
In recent years, some of the most
fruitful work in computer architecture
has been on the boundary
between software and architecture. For example,
RISC machines make
compilation easier by providing a simple, regular
instruction set and
open new opportunities for optimization by exposing a
machine's
pipeline. Unfortunately, many parallel machines are still built
by
computer architects and "thrown over the wall" to users who
often fail
to use them effectively because they are too hard to program.
Current software efforts are focused on overcoming the
limitations of
message-passing machines, which offer only the
lowest-level option of
point-to-point communication. Programmers or
compiler-writers are left with
the full responsibility for bridging
the semantic gap from high-level
programming languages, which
typically offer a shared address space in which
a program can access
any data, to this shared-nothing world. The low level
of these
machines has made compilers extremely complex to write and made
their
behavior unpredictable in any but the simplest case.
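To make the "lowest-level option of point-to-point communication"
concrete, here is a minimal hedged sketch in C using MPI-style
send/receive. MPI is used only as a representative message-passing
interface, and the value and tag are arbitrary; the point is that remote
data must be explicitly packaged and transferred, since one process
cannot simply load another's memory in this shared-nothing world.

/*
 * Minimal illustrative C program using MPI point-to-point messages.
 * Nothing here is specific to any machine discussed in this report.
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 17;
        /* rank 0 owns the datum; rank 1 cannot simply read it,
           it must be sent explicitly */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}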
On the other
hand, the low-level of message passing leaves a programmer
with
complete control over a program's behavior and performance because
no
system policies interfere. This control is sometimes illusory
because of
the complexity of understanding and modifying a
message-passing program.
The other extreme is, of course,
shared-memory machines. These machines
offer the benefit of a shared
address space and are extremely popular
products in the form of
Symmetric Multiprocessors (SMPs). Scalable shared
memory
architectures have a reputation for poor performance, which is due,
in many cases, to systems' fixed coherence protocols, which
communicate data
between processors. When a protocol does not match a
program's sharing
pattern, the excessive communication can ruin a
program's performance.
The design of systems software involves making
tradeoffs between user
control and ease of use. There is a great deal
of practical interest, and
much research emphasis, on systems software
that is designed in a way that
makes it possible for users to access
several layers of software. For
instance, High Performance Fortran
allows users to call procedures
(extrinsic procedures) whose bodies
are not written in High Performance
Fortran. Extrinsic procedures can
be written in another language, and can
employ user level communication
libraries to carry out communication.
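As an illustrative sketch of the kind of routine described here, the
following C function computes a local partial result and then uses a
user-level message-passing library call to combine results across
processors; an HPF program could invoke such a routine as an extrinsic
procedure. MPI is assumed purely for illustration, and the name
global_sum and its interface are invented for this example.

/*
 * Illustrative only: a routine written outside HPF (here in C) that
 * combines locally computed partial sums with a user-level
 * communication library call.
 */
#include <mpi.h>

double global_sum(double *local, int n)
{
    double partial = 0.0, total = 0.0;
    int i;

    for (i = 0; i < n; i++)       /* sum the locally held block */
        partial += local[i];

    /* combine the partial sums from all processors; every processor
       ends up with the same global total */
    MPI_Allreduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);
    return total;
}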
Recent research has led to
proposed systems that may allow users or
compiler writers to alleviate
performance problems associated with scalable
shared memory by
implementing protocols in software (where they can be
changed) and by
offering message-passing primitives to augment shared
memory.
Preliminary work suggests that software protocols can be used
to
implement highly optimized user or compiler runtime
support.
Top of this document
Pasadena 2 Workshop index
CESDIS
HTML formatting/WWW contact: Donald Becker,
becker@cesdis.gsfc.nasa.gov
MD5{32}: c2854fcfad2721d857d86540ace14614
File-Size{5}: 15299
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{41}: Data Parallel and Shared Memory Paradigms
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/accomp/94accomp/ess94.accomps/ess1.html
Update-Time{9}: 827948645
title{42}: Large Scale Structure and Galaxy Formation
keywords{43}: and
formation
galaxy
large
scale
structure
images{53}: hpcc.graphics/hpcc.header.gif
hpcc.graphics/cobe1.gif
headings{43}: Large Scale Structure and Galaxy Formation
MD5{32}: cfe65a16fa178d24d6a7acdf31cfc9b6
File-Size{4}: 4685
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{42}: Large Scale Structure and Galaxy Formation
}
@FILE { http://cesdis.gsfc.nasa.gov/linux/drivers/vortex.patch
Update-Time{9}: 827948606
MD5{32}: 37d0a4789a0c65aab0a6226c235b850b
File-Size{4}: 3081
Type{5}: Patch
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
}
@FILE { http://cesdis.gsfc.nasa.gov/PAS2/wg9.text
Update-Time{9}: 827948619
Partial-Text{14841}: Working Group 9 -- HETEROGENEOUS COMPUTING ENVIRONMENTS
Francine Berman, Co-Chair
Reagan Moore, Co-Chair
9.1 INTRODUCTION
In recent years, dramatic advances in network and processor technology
have made it possible to use a network of computational resources to solve
individual problems efficiently. Such platforms would be able to deliver an
unlimited amount of processing power, memory and storage to multiple users in
a cost-effective manner if an adequate infrastructure could be built to manage
them. The challenge of building this software infrastructure and its
accompanying computing environment is the focus of heterogeneous computing.
Heterogeneous computing is the coordinated use of networked resources to
harness diverse hardware platforms, software systems, and data sources
distributed across multiple administrative domains. Resources used to solve
HPCC applications include workstations, supercomputers, archival storage
systems, disk farms, and visualization systems linked via high-speed networks.
The use of these heterogeneous resources is becoming pervasive. Applications
as varied as Climate Modeling, interactive 3-dimensional user interfaces such
as Argonne's Cave, and the World Wide Web, utilize heterogeneous systems for
distributed access to data, computing and/or visualization resources.
In this document, we take a broad view of heterogeneous systems including
clusters of individual workstations (as promoted by the NOW project),
dedicated high-level workstation clusters, and networks of diverse
high-performance architectures (as illustrated by the NII and the NSF
Meta-Center). Such platforms are used because they leverage existing
architectures, provide excellent cost/performance, and satisfy the
requirements of compute-intensive and data-intensive applications. In effect,
we are talking about using heterogeneous resources to provide a world-wide
``computational web'' in which aggregate memory, storage, bandwidth and
computational power can be brought to bear on a single application. In
addition, this computational web can be used to increase the throughput of
multiple applications.
Viewed as a ``computational web'', heterogeneous computing provides the
bridge between HPCC and the NII. Transparent access to remote data, access
across multiple authentication realms, and the development of a uniform
application interface across diverse hardware and software systems are
required to fulfill the potential of both HPCC and the NII. The development
of an infrastructure which can coordinate diverse and distributed resources is
critical to the success of both endeavors.
Currently, heterogeneous computing is an emerging discipline and
considerable development of its most basic components must be done. Efforts
in defining underlying models and performance metrics, building software
infrastructure and tools, and developing computing environments must be
integrated and validated with real applications. Experience must be gained on
a wide spectrum of workstation clusters and heterogeneous networks. Current
efforts must be supported, expanded, and nurtured. In Section 9.7, we provide
a number of technical and programmatic recommendations for developing the
software infrastructure required for harnessing heterogeneous systems. The
intervening sections lay the groundwork for these recommendations.
9.2 PROGRESS SINCE THE FIRST PASADENA WORKSHOP
Heterogeneous computing was defined as a focus area at Pasadena I and at
the subsequent Berkeley Springs Workshop. However a basic problem has
retarded the development of heterogeneous computing: Heterogeneous research
must be done as interdisciplinary research so that the development of
heterogeneous applications, software infrastructure, and prototype networks
can be integrated. There is no program within an individual federal funding
agency or coordinated between agencies which targets over the long-term the
development of infrastructure, applications and models for heterogeneous
computing. This problem must be remedied in order to keep up with the current
and pressing need for critical infrastructure and software support for HPCC
and NII applications.
Since Pasadena I, there has been some progress in the development of
tools and models for heterogeneous computing, however most efforts have
achieved only partial success. The MPI message-passing interface has been
defined and mechanisms for heterogeneity are part of that definition, however
MPI has yet to achieve the widespread use of PVM. In the last few years
commercial batch queuing offerings have become available (NQE, LSF, Load
Leveler). Though these products are functionally adequate, many issues
important for heterogeneous computing are not addressed, e.g., common file
space, failure resilience, user authentication, and administration. In
addition, isolated Grand Challenge applications have shown that the use of
heterogeneous parallel computing can yield improved performance.
Unfortunately, these applications suffer from the lack of an adequate
development environment and require a large amount of human resources to
construct. Mechanisms are needed to aid scientists in exploiting
heterogeneous platforms.
The heterogeneous computing area is not yet ready to identify de facto
standards (with the possible exception of PVM). More experience must be
gained with real applications on a wide spectrum of heterogeneous platforms.
At the same time, the underlying system management infrastructure must be
developed. Models for presenting a single system image to the application are
still incomplete, and new mechanisms are needed to provide authentication,
transparent data delivery, resource scheduling, and accounting.
Most successful among efforts targeted to coordinated networks has been
the wide-spread use of clusters of computers with PVM as a common software
interface. In addition, the use of heterogeneous platforms to accommodate the
data and storage requirements of applications like the World Wide Web has
become more commonplace. However even in this successful and widely-used
...
9.3 CHARACTERISTICS OF HPCC HETEROGENEOUS APPLICATIONS
Coordinated networks provide performance by aggregating computing, data
and network resources that cannot be delivered by a single platform. There is
...
HPCC applications are characteristically large or complex programs which
require intense usage of resources to achieve adequate performance. These
...
9.3.1 RESOURCE REQUIREMENTS
Heterogeneous HPCC applications generally have large resource
requirements and utilize heterogeneous platforms to aggregate enough resources
to provide increased performance or to make the solution of a problem
feasible. Distributed resources may include computation, memory, storage,
...
9.3.2 PERFORMANCE ORIENTATION
Heterogeneous HPCC science and engineering applications tend to emphasize
performance over other factors. Reductions in the execution time of an
...
9.3.3 LIFETIME
The lifetime of some HPCC applications is typically lengthy. The
...
Heterogeneous systems have the ability to evolve over time and thus can
adapt to changing requirements of long- term projects and changing resource
technology. Integration of archival storage access within the heterogeneous
...
9.4 HETEROGENEOUS SYSTEM SOFTWARE AND TOOLS
With the advent of global connectivity of diverse machines using
high-speed networks, target systems are destined to become increasingly
heterogeneous. Developing tools and software to support computing in
...
Although the target systems for heterogeneous computing are becoming
increasingly complex, the system software and tools needed to support the
environment have not been a focus for the HPCC community. Heterogeneous
...
Tools and system software supporting this interface layer should provide
services which enable
+ the matching of application program requirements
to available system resources
+ dynamic scheduling of the application on available
machines
+ efficient utilization and management of resources
+ response to queries from the application or the
user about system state
+ prediction and measurement of various performance
metrics
+ monitoring and checkpointing during program execu-
tion
etc.
...
The PVM system is an example of a software system that addresses some of
the issues in heterogeneous computing, and is being used to investigate
others. PVM can be called a heterogeneous computing system, albeit with
...
Like PVM, tools to manage the complexity of a heterogeneous environment
must have a low cost-of-entry for users. Moreover, they must offer
...
9.5 INTERACTION BETWEEN SYSTEM SOFTWARE AND ARCHITECTURE
A key component for successful heterogeneous computing is the ability for
the system to dynamically determine the available resources. This capability
...
Heterogeneous computing relies upon many software support mechanisms
traditionally supplied by the operating system. These include file systems,
...
Current heterogeneous environments provide a single system image at the
application level. User I/O calls are trapped to allow references to
...
The data delivery mechanisms should be able to access the wide variety of
storage systems that are available in the HPCC community. Such access should
...
The following issues are germane to parallel and distributed computing
in general; however they require additional integration requirements in the
heterogeneous context.
...
+ Accounting systems are needed within the heterogeneous
environment to control and monitor access to the sys-
tem. Accounting information may also be used by the
...
+ Failure resilience is needed within both the NII and
HPCC. It may be provided by replication (such that
...
+ Administration and operation mechanisms are needed to
control the distributed heterogeneous environment.
...
9.6 TRANSITION FROM RESEARCH TO PRODUCTS
The difficulty of providing an infrastructure for, and managing the
resources of coordinated networks resists the promotion of research prototypes
to products. Workstation networks require products that manage the resources
...
The youth of heterogeneous computing and insufficient support for large
products renders many current development efforts untested with real
applications, limited in scope, or immature. The multiple software layers of
...
+ JOB QUEUING SYSTEMS
Job Queuing systems typically provide a way to organize a workload that
exceeds the resource capability. Examples of existing commercial and
...
+ RESOURCE SCHEDULING SYSTEMS
Resource scheduling systems optimize the allocation of resources based on
an objective function. Scheduling can be done for network access, CPU access,
...
+ APPLICATION SUPPORT ENVIRONMENTS
Application support environments provide an infrastructure to distribute
the application across the system. They also provide tools to simplify the
development, debugging, and display of results. Examples of commercial and
...
+ MESSAGE PASSING LIBRARIES
Message passing libraries support access to distributed memory. While
...
+ APPLICATION LEVEL SINGLE SYSTEM IMAGE
Heterogeneous systems will be most efficient when a uniform system
environment can be provided to the distributed application. Features of this
...
+ NETWORK SUPPORT
The need to decrease message passing latency has led to the development
of new paradigms for sending information between distributed tasks. In
...
9.7 RECOMMENDATIONS
Diverse hardware, software, and administrative resources are already
available for heterogeneous computing. Application developers are already
...
TECHNICAL RECOMMENDATIONS
-------------------------
1) DEVELOP ACCURATE MODELS AND PERFORMANCE MEASURES FOR
HETEROGENEOUS SYSTEMS
Accurate models of heterogeneous systems, and measures which compare
observed behavior with potential behavior must be designed. Time, state,
...
2) DEVELOP EFFICIENT SYSTEM MANAGEMENT STRATEGIES FOR
HETEROGENEOUS PLATFORMS
Efficient mechanisms must be developed to handle and coordinate the
diverse resources of the heterogeneous platform. Such mechanisms should
...
3) DEVELOP TRANSPARENT MECHANISMS FOR STORING AND HANDLING
DATA
Data delivery tools are needed that hide the data delivery mechanism from
the user. Shared file systems, distributed databases, and I/O redirection
...
Integration of data base technology and archival storage technology is
needed to handle the petabytes of data associated with some HPCC applications.
...
Part of the heterogeneous environment is concerned with the movement of
data from network attached peripherals controlled by archival storage systems,
through a database running on distributed platforms to the application. This
...
4) DESIGN SYSTEM INTERFACES WHICH SUPPORT EFFICIENT IMPLE-
MENTATION
The layer between the programmer and the system must map applications
dynamically to the system based on availability and ``cost'' of services. In
...
5) DEVELOP FAILURE RESILIENCE STRATEGIES FOR HETEROGENEOUS
SYSTEMS
The implementation of universal checkpointing/restarting of a
heterogeneous system is a major research issue. Many of the existing systems
...
PROGRAMMATIC RECOMMENDATIONS
----------------------------
If heterogeneous computing is to provide a bridge
between the emerging NII and HPCC, support must be provided
for its development. We recommend two thrusts to accomplish
...
1) FUNDING AGENCIES SHOULD ESTABLISH FOCUS PROGRAMS FOR
HETEROGENEOUS COMPUTING which support over the long-
term the integration of applications, systems software
and infrastructure on coordinated networks of
resources. Research should be encouraged which
...
+ real heterogeneous applications implemented on
coordinated networks,
+ development of a software infrastructure for sup-
porting heterogeneous applications,
+ performance criteria for assessing usefulness.
...
2) A NATIONAL HETEROGENEOUS TESTBED SHOULD BE INITIATED to
provide a resource for developing and testing hetero-
geneous applications software and systems. Such a
...
The hardware and network resources for heterogeneous computing are
already available. A major effort is required to develop the software,
...
MD5{32}: 3b0461e0f5eee063d91233b9127dc6c6
File-Size{5}: 30949
Type{4}: Text
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Keywords{4556}: ability
able
about
access
accommodate
accompanying
accomplish
accounting
accurate
achieve
achieved
across
adapt
addi
addition
addressed
addresses
adequate
administration
administrative
advances
advent
agencies
agency
aggregate
aggregating
aid
albeit
allocation
allow
already
also
although
among
amount
and
application
applications
architecture
architectures
archival
are
area
argonne
assessing
associated
attached
authentication
availability
available
bandwidth
base
based
basic
batch
bear
because
become
becoming
been
behavior
being
berkeley
berman
between
both
bridge
broad
brought
building
built
called
calls
can
cannot
capability
cave
center
chair
challenge
changing
characteristically
characteristics
checkpointing
climate
clusters
commercial
common
commonplace
community
compare
complex
complexity
component
components
computation
computational
compute
computers
computing
con
concerned
connectivity
considerable
construct
control
controlled
coordinate
coordinated
cost
could
cpu
criteria
critical
current
currently
data
database
databases
debugging
decrease
dedicated
defined
defining
definition
deliver
delivery
design
designed
destined
determine
develop
developed
developers
developing
development
difficulty
dimensional
dis
discipline
disk
display
distribute
distributed
diverse
document
domains
done
dramatic
during
dynamic
dynamically
effect
effective
efficient
efficiently
effort
efforts
elivered
emerging
emphasize
enable
encouraged
endeavors
engineering
enough
entry
environment
environments
establish
etc
even
evolve
example
examples
exceeds
excellent
exception
execu
execution
existing
expanded
experience
exploiting
facto
factors
failure
farms
feasible
features
federal
few
file
first
focus
following
for
francine
from
fulfill
function
functionally
funding
gained
geneous
general
generally
germane
global
grand
groundwork
group
handle
handling
hardware
harness
harnessing
has
have
hetero
heterogeneity
heterogeneous
hide
high
however
hpcc
human
identify
illustrated
image
immature
imple
implementation
implemented
important
improved
include
including
incomplete
increase
increased
increasingly
individual
information
infrastructure
initiated
insufficient
integrated
integration
intense
intensive
interaction
interactive
interdisciplinary
interface
interfaces
intervening
introduction
investigate
isolated
issue
issues
its
job
keep
key
lack
large
last
latency
lay
layer
layers
led
lengthy
level
leveler
leverage
libraries
lifetime
like
limited
linked
load
long
low
lsf
machines
made
major
make
manage
management
managing
manner
many
map
matching
may
measurement
measures
mechanism
mechanisms
memory
mentation
message
meta
metrics
modeling
models
monitor
monitoring
moore
more
moreover
most
movement
mpi
multiple
must
national
need
needed
network
networked
networks
new
nii
not
now
nqe
nsf
number
nurtured
objective
observed
offer
offerings
only
operating
operation
optimize
order
organize
orientation
other
others
over
paradigms
parallel
part
partial
pasadena
passing
performance
peripherals
pervasive
petabytes
platform
platforms
porting
possible
potential
power
prediction
presenting
pressing
problem
problems
processing
processor
products
program
programmatic
programmer
programs
progress
project
projects
promoted
promotion
prototype
prototypes
provide
provided
provides
providing
pvm
queries
queuing
ready
reagan
real
realms
recent
recommend
recommendations
redirection
reductions
references
relies
remedied
remote
renders
replication
require
required
requirements
research
resilience
resists
resource
resources
response
restarting
results
retarded
running
same
satisfy
scheduling
science
scientists
scope
section
sections
sending
services
shared
should
shown
simplify
since
single
software
solution
solve
some
sources
space
spectrum
speed
spread
springs
standards
state
still
storage
storing
strategies
subsequent
success
successful
such
suffer
sup
supercomputers
supplied
support
supported
supporting
sys
system
systems
take
talking
target
targeted
targets
tasks
technical
technology
tem
tend
term
testbed
testing
text
that
the
them
there
these
they
this
though
through
throughput
thrusts
thus
time
tion
tional
tools
traditionally
transition
transparent
trapped
tributed
two
typically
underlying
unfortunately
uniform
universal
unlimited
untested
upon
usage
use
used
usefulness
user
users
using
utilization
utilize
validated
varied
variety
various
via
view
viewed
visualization
was
way
web
when
which
while
wide
widely
widespread
will
with
within
working
workload
workshop
workstation
workstations
world
would
years
yet
yield
youth
Description{57}: Working Group 9 -- HETEROGENEOUS COMPUTING ENVIRONMENTS
}
@FILE { http://cesdis.gsfc.nasa.gov/pub/linux/html/Ethernet-HOWTO-3.html
Update-Time{9}: 827948628
url-references{1441}: Ethernet-HOWTO.html#toc3
http://www.crynwr.com/crynwr/home.html
Ethernet-HOWTO-7.html#skel
Ethernet-HOWTO-7.html#data-xfer
Ethernet-HOWTO-7.html#3com-tech
Ethernet-HOWTO-9.html#3com-probs
Ethernet-HOWTO-9.html#alfa
Ethernet-HOWTO-7.html#i82586
Ethernet-HOWTO-9.html#alfa
Ethernet-HOWTO-7.html#i82586
http://cesdis.gsfc.nasa.gov/linux/pcmcia.html
Ethernet-HOWTO-8.html#pcmcia
#lance
Ethernet-HOWTO-7.html#amd-notes
#at-1500
#ne1500
#boca-pci
#ni65xx
Ethernet-HOWTO-10.html#ether
Ethernet-HOWTO-7.html#amd-notes
Ethernet-HOWTO-9.html#alfa
#lance
Ethernet-HOWTO-7.html#amd-notes
Ethernet-HOWTO-10.html#ether
#dec-21040
Ethernet-HOWTO-8.html#pcmcia
http://cesdis.gsfc.nasa.gov/linux/pcmcia.html
#lance
Ethernet-HOWTO-7.html#amd-notes
#lance
Ethernet-HOWTO-7.html#amd-notes
#z-note
http://peipa.essex.ac.uk/html/linux-thinkpad.html
Ethernet-HOWTO-8.html#pcmcia
http://cesdis.gsfc.nasa.gov/linux/pcmcia.html
Ethernet-HOWTO-9.html#alfa
Ethernet-HOWTO-7.html#promisc
Ethernet-HOWTO-7.html#i82586
#de-650
Ethernet-HOWTO-9.html#ne2k-probs
Ethernet-HOWTO-4.html#ne2k-clones
Ethernet-HOWTO-6.html#diag
#lance
Ethernet-HOWTO-7.html#amd-notes
Ethernet-HOWTO-9.html#alfa
Ethernet-HOWTO-7.html#i82586
Ethernet-HOWTO-9.html#alfa
#3c501
Ethernet-HOWTO-4.html#8013-clones
Ethernet-HOWTO-9.html#8013-probs
#dec-21040
Ethernet-HOWTO-7.html#i82586
Ethernet-HOWTO-4.html
Ethernet-HOWTO-2.html
Ethernet-HOWTO.html#toc3
Ethernet-HOWTO.html#toc
Ethernet-HOWTO.html
#0
title{46}: Vendor/Manufacturer/Model Specific Information
keywords{851}: accton
advanced
all
allied
alpha
amd
and
ansel
apricot
arcnet
associated
beginning
boca
business
cabletron
can
cards
chapter
chips
clones
com
communications
contents
corp
data
dec
devices
dfi
diagnostic
digital
discouraged
don
driver
drivers
ethernet
every
farallon
forbid
four
from
have
hewlett
howto
ibm
info
information
intel
interlan
international
koch
lan
lance
leave
like
link
linksys
looks
machines
manufacturer
may
micro
microsystems
model
multicast
mylex
nelson
net
next
nexxxx
not
note
notes
novell
now
old
packard
packet
param
pci
pcmcia
poor
previous
problems
programmed
programming
programs
pure
racal
realtek
region
research
russ
sager
schneider
section
semi
skeleton
smc
specific
standard
strongly
stuff
support
supported
surfers
surfing
table
tec
technical
telesis
the
thinkpad
this
top
two
vendor
vlb
western
whole
with
xircom
zenith
headings{1504}: 3
3.1
3c501
3c503, 3c503/16
3c505
3c507
3c509 / 3c509B
3c523
3c527
3c529
3c579
3c589 / 3c589B
3.2
Accton MPX
Accton EN2212 PCMCIA Card
3.3
AT1500
AT1700
3.4
AMD LANCE (7990, 79C960, PCnet-ISA)
AMD 79C961 (PCnet-ISA+)
AMD 79C965 (PCnet-32)
AMD 79C970 (PCnet-PCI)
AMD 79C974 (PCnet-SCSI)
3.5
AC3200 EISA
3.6 Apricot
Apricot Xen-II On Board Ethernet
3.7
3.8 AT&T
AT&T T7231 (LanPACER+)
3.9 AT-Lan-Tec / RealTek
AT-Lan-Tec / RealTek Pocket adaptor
3.10
Boca BEN (PCI, VLB)
3.11
E10**, E10**-x, E20**, E20**-x
E2100
3.12
DE-100, DE-200, DE-220-T
DE-530
DE-600
DE-620
DE-650
3.13
DFINET-300 and DFINET-400
3.14
DEPCA, DE100, DE200/1/2, DE210, DE422
Digital EtherWorks 3 (DE203, DE204, DE205)
DE425 (EISA), DE435
DEC 21040, 21140
3.15 Farallon
Farallon Etherwave
3.16
27245A
HP PC Lan+ (27247A, 27247B, 27252A)
HP-J2405A
HP-Vectra On Board Ethernet
3.17
IBM Thinkpad 300
IBM Credit Card Adaptor for Ethernet
3.18
Ether Express
Ether Express PRO
3.19 LinkSys
LinkSys PCMCIA Adaptor
3.20 Mylex
Mylex LNP101, LNP104
3.21
NE1000, NE2000
NE1500, NE2100
NE3200
3.22 Pure Data
PDUC8028, PDI8023
3.23 Racal-Interlan
NI52**
NI65**
3.24 Sager
Sager NP943
3.25 Schneider & Koch
SK G16
3.26
WD8003, SMC Elite
WD8013, SMC Elite16
SMC Elite Ultra
SMC 8416 (EtherEZ)
SMC 8432 PCI (EtherPower)
SMC 3008
SMC 3016
SMC 9000
3.27
PE1, PE2, PE3-10B*
3.28
Z-Note
body{48635}: Vendor/Manufacturer/Model Specific Information Contents of this section
The only thing that one needs to use an ethernet card with
Linux
is the appropriate driver. For this, it is essential that
the
manufacturer will release the technical programming information
to
the general public without you (or anyone) having to sign your
life
away. A good guide for the likelihood of getting
documentation
(or, if you aren't writing code, the likelihood that
someone
else will write that driver you really, really need) is
the
availability of the Crynwr (nee Clarkson) packet driver.
Russ
Nelson runs this operation, and has been very helpful in
supporting
the development of drivers for Linux. Net-surfers can try
this
URL to look up Russ' software.
Russ Nelson's Packet Drivers
Given the documentation, you can write a driver for
your card and
use it for Linux (at least in theory)
and if you intend to write a
driver, have a look at
Skeleton driver
as well.
Keep in mind that
some old hardware that was designed for XT type
machines will not
function very well in a multitasking
environment such as Linux. Use of
these will lead to major
problems if your network sees a reasonable
amount of traffic.
Most cards come with drivers for MS-DOS interfaces
such as
NDIS and ODI, but these are useless for Linux. Many
people
have suggested directly linking them in or
automatic
translation, but this is nearly impossible. The
MS-DOS
drivers expect to be in 16 bit mode and hook into
`software
interrupts', both incompatible with the Linux kernel.
This
incompatibility is actually a feature, as some Linux drivers
are
considerably better than their MS-DOS counterparts. The
`8390' series
drivers, for instance, use ping-pong transmit
buffers, which are only
now being introduced in the MS-DOS world.
Keep in mind that PC
ethercards have the widest variety of
interfaces (shared memory,
programmed I/O, bus-master, or slave
DMA) of any computer hardware for
anything, and supporting a
new ethercard sometimes requires
re-thinking most of the lower-level
networking code. (If you are
interested in learning more about
these different forms of interfaces,
see
Programmed I/O vs. ...
.)
Also, similar product numbers
don't always indicate similar products.
For instance, the 3c50*
product line from 3Com varies wildly
between different members.
Enough talk. Let's get down to the information you want.
3Com
If you are not sure what your card is, but you think it is a
3Com
card, you can probably figure it out from the assembly
number. 3Com
has a document `Identifying 3Com Adapters By
Assembly Number' (ref
24500002) that would most likely clear
things up. See
Technical
Information from 3Com
for info on how to get documents from 3Com.
Also note that 3Com has a FTP site with various goodies:
that you
may want to check out.
Status -- Semi-Supported
Too
brain-damaged to use. Available surplus from many
places. Avoid it
like the plague. Again, do not
purchase this card, even as a joke.
Its performance
is horrible, and it breaks in many ways.
Cameron L.
Spitzer of 3Com said:
``I'm speaking only for myself here, of course,
but I
believe 3Com advises against installing a 3C501 in a
new
system, mostly for the same reasons Donald has
discussed. You probably
won't be happy with the
3C501 in your Linux box. The data sheet is
marked
`(obsolete)' on 3Com's Developers' Order Form, and
the board
is not part of 3Com's program for sending
free Technical Reference
Manuals to people who need
them. The decade-old things are
nearly
indestructible, but that's about all they've got
going for
them any more.''
For those not yet convinced, the 3c501 can only do
one
thing at a time -- while you are removing one packet
from the
single-packet buffer it cannot receive
another packet, nor can it
receive a packet while
loading a transmit packet. This was fine for
a
network between two 8088-based computers where
processing each
packet and replying took 10's of
msecs, but modern networks send
back-to-back
packets for almost every transaction.
Donald
writes:
`The driver is now in the std. kernel, but under
the
following conditions: This is unsupported code. I
know the usual
copyright says all the code is
unsupported, but this is _really_
unsupported. I
DON'T want to see bug reports, and I'll accept
bug
fixes only if I'm in a good mood that day.
I don't want to be
flamed later for putting out bad
software. I don't know all of the
3c501 bugs,
and I know this driver only handles a few that I've
been
able to figure out. It has taken a long
intense effort just to get the
driver working this
well.'
AutoIRQ works, DMA isn't used, the
autoprobe only
looks at and , and the debug level is set
with the
third boot-time argument.
Once again, the use of a 3c501 is strongly
discouraged !
Even more so with a IP multicast kernel, as you
will
grind to a halt while listening to all multicast
packets. See
the comments at the top of the source code
for more details.
Status -- Supported
3Com shared-memory ethercards. They also have
a
programmed I/O mode that doesn't use the 8390
facilities (their
engineers found too many bugs!)
It should be about the same speed as
the same bus
width WD80x3. Unless you are a light user, spend
the
extra money and get the 16 bit model, as the
price difference isn't
significant. The 3c503 does not
have ``EEPROM setup'', so the
diagnostic/setup program
isn't needed before running the card with
Linux. The
shared memory address of the 3c503 is set using
jumpers
that are shared with the boot PROM address. This is
confusing
to people familiar with other ISA cards,
where you always leave the
jumper set to ``disable''
unless you have a boot PROM.
The Linux
3c503 driver can also work with the 3c503
programmed-I/O mode, but
this is slower and less
reliable than shared memory mode. Also,
programmed-I/O
mode is not tested when updating the drivers,
the
deadman (deadcard?) check code may falsely timeout on
some
machines, and the probe for a 3c503 in
programmed-I/O mode is turned
off by default in some
versions of the kernel. This was a panic
reaction to
the general device driver probe explosion; the
3c503
shared memory probe is a safe read from memory, rather
than an
extensive scan through I/O space. As of 0.99pl13,
the kernel has an
I/O port registrar that makes I/O
space probes safer,
and the
programmed-I/O 3c503 probe has been re-enabled.
You still shouldn't
use the programmed-I/O mode though,
unless you need it for MS-DOS
compatibility.
The 3c503's IRQ line is set in software, with no
hints
from an EEPROM. Unlike the MS-DOS drivers, the
Linux driver has
capability to autoIRQ: it uses the
first available IRQ line in
{5,2/9,3,4}, selected each
time the card is ifconfig'ed. (Older driver
versions
selected the IRQ at boot time.) The ioctl() call
in
`ifconfig' will return EAGAIN if no IRQ line is
available at that
time.
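For readers curious what that ioctl() looks like from user space, the
following is a rough sketch of the SIOCSIFFLAGS call that `ifconfig'
ultimately makes. It is illustrative only: the interface name "eth0" is
just the usual default, and the EAGAIN check mirrors the behaviour
described above rather than being lifted from ifconfig's source.

/*
 * Hedged user-space sketch of what `ifconfig eth0 up' boils down to:
 * reading the interface flags and setting IFF_UP with a SIOCSIFFLAGS
 * ioctl.  For the 3c503 case described above, this is the call that
 * can fail with EAGAIN when no IRQ line is free.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <net/if.h>

int main(void)
{
    struct ifreq ifr;
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    if (s < 0) {
        perror("socket");
        return 1;
    }
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);

    if (ioctl(s, SIOCGIFFLAGS, &ifr) < 0) {   /* read current flags */
        perror("SIOCGIFFLAGS");
        return 1;
    }
    ifr.ifr_flags |= IFF_UP | IFF_RUNNING;
    if (ioctl(s, SIOCSIFFLAGS, &ifr) < 0) {   /* bring the interface up */
        if (errno == EAGAIN)
            fprintf(stderr, "no free IRQ line for this card?\n");
        perror("SIOCSIFFLAGS");
        return 1;
    }
    return 0;
}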
Some common problems that people have with the 503
are
discussed in
Problems with...
.
Status --
Semi-Supported
This is a driver that was written by Craig
Southeren
. These cards also
use the i82586 chip.
I don't think
there are that many of these cards about.
It is included in the
standard kernel, but it is classed as
an alpha driver. See
Alpha
Drivers
for important information on using alpha-test ethernet
drivers
with Linux.
There is also the file
that you should read
if you are going to use one of these cards.
It contains various
options that you can enable/disable.
Technical information is
available in
Programming the Intel chips
.
Status --
Semi-Supported
This card uses one of the Intel chips, and
the
development of the driver is closely related to
the development
of the Intel Ether Express driver.
The driver is included in the
standard kernel
release, but as an alpha driver.
See
Alpha
Drivers
for important
information on using alpha-test ethernet
drivers
with Linux. Technical information is available in
Programming the Intel chips
.
Status -- Supported
It's
fairly inexpensive and has
excellent performance for a non-bus-master
design.
The drawbacks are that the original 3c509
_requires_ very
low interrupt latency. The 3c509B
shouldn't suffer from the same
problem, due to
having a larger buffer. (See below.)
Note that the
ISA card detection uses a different method
than most cards. Basically,
you ask the cards to respond
by sending data to an ID_PORT (port ).
Note that
if you have some other strange ISA card using an I/O
range
that includes the ID_PORT of the 3c509, it will probably
not
get detected. Note that you can change the ID_PORT to
or or... in if
you have
a conflicting ISA card, and the 3c509 will still be
happy.
Also note that this detection method means that
it is
difficult to predict which card will get detected first
in a multiple
ISA 3c509 configuration.
The card with the lowest hardware ethernet
address
will end up being . This shouldn't matter
to anyone, except
for those people who want to assign
a 6 byte hardware address to a
particular interface.
A working 3c509 driver was first included as
an
alpha-test version in the 0.99pl13 kernel sources.
It is now in
the standard kernel.
The original 3c509 has a tiny Rx buffer (2kB),
causing the driver to
occasionally drop a packet if interrupts are
masked for
too long. To minimize this problem, you can try
unmasking
interrupts during IDE disk transfers (see )
and/or
increasing your ISA bus speed so IDE transfers finish
sooner.
(Note that the driver could
be completely rewritten to use
predictive interrupts,
but performance re-writes of working drivers
are low
priority unless there is some particular incentive or
need.)
The newer model 3c509B has 8kB on board, and the
driver can set 4, 5
or 6kB for an Rx buffer. This setting
can also be stored on the
EEPROM. This should alleviate the
above problem with the original
3c509. At this point in
time, the Linux driver is not aware of this,
and treats
the 3c509B as an older 3c509.
Cameron Spitzer
writes:
``Beware that if you put a '509 in EISA addressing mode
by
mistake and save that in the EEPROM, you'll have
to use an EISA
machine or the infamous Test Via to
get it back to normal, and it will
conflict at IO
location 0 which may hang your ISA machine.
I believe
this problem is corrected in the 3C509B
version of the board.''
Status -- Not Supported
This MCA bus card uses the i82586, and
now that people are
actually running Linux on MCA machines, someone
may wish
to try and recycle parts of the 3c507 driver into a
driver
for this card.
Status -- Not Supported
Yes, another MCA
card. No, not too much interest in it.
Better chances with the 3c523
or the 3c529.
Status -- Not Supported
This card actually
uses the same chipset as the 3c509.
Donald actually put hooks into the
3c509 driver to check
for MCA cards after probing for EISA cards, and
before
probing for ISA cards. But it hasn't evolved much further
than
that. Donald writes:
``I don't have access to a MCA machine (nor do I
fully understand
the probing code) so I never wrote the
or
routines. If you can find a way to get the
adaptor I/O address that
assigned at boot time, you can just
hard-wire that in place of the
commented-out probe. Be sure to
keep the code that reads the IRQ,
if_port, and ethernet address.''
Status -- Supported
The EISA
version of the 509. The current EISA version
uses the same 16 bit wide
chip rather than a 32 bit
interface, so the performance increase isn't
stunning.
The EISA probe code was added to 3c509.c for 0.99pl14.
We
would be interested in hearing progress reports
from any 3c579 users.
(Read the above 3c509
section for info on the driver.)
Cameron
Spitzer writes:
``The 3C579 (Etherlink III EISA) should be
configured
as an EISA card. The IO Base Address (window 0
register 6
bits 4:0) should be 1f, which selects EISA
addressing mode. Logic
outside the ASIC decodes the
IO address s000, where s is the slot
number. I don't
think it was documented real well. Except for its
IO
Base Address, the '579 should behave EXACTLY like
the '509 (EL3
ISA), and if it doesn't, I want to hear
about it (at my work
address).''
Status -- Semi-Supported
Many people have
been using this PCMCIA card for quite some time
now. Note that
support for it is not (at present) included
in the default kernel
source tree. Note that you will need
a supported PCMCIA controller
chipset. There are drivers
available on Donald's ftp site:
Or
for those that are net-surfing you can try:
Don's PCMCIA Stuff
You
will still need a PCMCIA socket enabler as well.
See
PCMCIA
Support
for more
info on PCMCIA chipsets, socket enablers, etc.
The "B" in the name means the same here as it does for
the 3c509
case.
Accton
Status -- Supported
Don't let the
name fool you. This is still supposed to be a
NE2000 compatible card.
The MPX is supposed to stand for
MultiPacket Accelerator, which,
according to Accton, increases
throughput substantially. But if you
are already sending
back-to-back packets, how can you get any
faster...
Status -- Semi-Supported
David Hinds has been
working on a driver for this card, and
you are best to check the
latest release of his PCMCIA
package to see what the present status
is.
Allied Telesis
Status -- Supported
These are
a series of low-cost ethercards using the 79C960 version
of the AMD
LANCE. These are bus-master cards, and thus probably
the fastest ISA
bus ethercards available (although the 3c509
has lower latency thanks
to predictive interrupts).
DMA selection and chip numbering
information can be found in
AMD LANCE
.
More technical
information on AMD LANCE based Ethernet cards
can be found in
Notes
on AMD...
.
Status -- Supported
The Allied Telesis
AT1700 series ethercards are based
on the Fujitsu MB86965. This chip
uses a programmed
I/O interface, and a pair of fixed-size
transmit
buffers. This allows small groups of packets to be sent
back-to-back, with a short pause while
switching buffers.
A unique
feature is the ability to drive 150ohm STP
(Shielded Twisted Pair)
cable commonly installed for
Token Ring, in addition to 10baseT 100ohm
UTP
(unshielded twisted pair).
The Fujitsu chip used on the AT1700
has a design flaw:
it can only be fully reset by doing a power cycle
of the machine.
Pressing the reset button doesn't reset the bus
interface. This
wouldn't be so bad, except that it can only be
reliably detected
when it has been freshly reset. The
solution/work-around is to
power-cycle the machine if the kernel has
a problem detecting
the AT1700.
Some production runs of the AT1700
had another problem:
they are permanently wired to DMA channel 5.
This is
undocumented; there are no jumpers to disable the "feature",
and no driver dares use the DMA capability because of
compatibility
problems. No device driver will be
written using DMA if installing a
second card into
the machine breaks both, and the only way to
disable
the DMA is with a knife.
The at1700 driver is included in
the standard
kernel source tree.
AMD / Advanced Micro Devices
Status -- Supported
There really is no AMD ethernet
card. You are probably reading this
because the only markings you
could find on your card said AMD
and the above number. The 7990 is the
original `LANCE' chip,
but most stuff (including this document) refer
to all these
similar chips as `LANCE' chips. (...incorrectly, I might
add.)
These above numbers refer to chips from AMD
that are the heart
of many ethernet cards.
For example, the Allied Telesis AT1500 (see
AT1500
) the NE1500/2100 (see
NE1500
) and the Boca-VLB/PCI cards
(see
Boca-VLB/PCI
)
The 79C960 (a.k.a. PCnet-ISA) contains
enhancements and bug fixes
over the original 7990 LANCE design.
Chances are that the existing LANCE driver will work
with all AMD
LANCE based cards. (except perhaps the NI65XX - see
NI65XX
for
more info on that one.)
This driver should also work with NE1500 and
NE2100
clones.
For the ISA bus master mode all structures
used
directly by the LANCE, the initialization block,
Rx and Tx
rings, and data buffers, must be accessible
from the ISA bus, i.e. in
the lower 16M of real memory.
If more than 16MB of memory is
installed, low-memory `bounce-buffers'
are used when needed.
The DMA
channel can be set with the low bits
of the otherwise-unused
dev->mem_start value (a.k.a. PARAM_1).
(see
PARAM_1
)
If unset
it is probed for by enabling each free DMA channel
in turn and
checking if initialization succeeds.
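The probe-each-free-channel strategy looks roughly like the following
kernel-flavoured sketch. It is loosely modeled on the Linux 1.x-era
LANCE driver but is not that driver's code: the function name, the
candidate channel list, and lance_try_init() (a hypothetical stand-in
for the chip-specific "did initialization complete?" test) are all
illustrative.

/*
 * Hedged sketch, assuming a Linux 1.x-era driver environment, of the
 * DMA probe described above.
 */
#include <linux/netdevice.h>   /* struct device, dev->mem_start */
#include <asm/dma.h>           /* request_dma(), enable_dma(), ... */

extern int lance_try_init(struct device *dev);   /* hypothetical test */

static int pick_dma_channel(struct device *dev)
{
    static const int candidates[] = { 5, 6, 7, 3 };
    int dma = dev->mem_start & 0x07;   /* PARAM_1 low bits: forced channel */
    int i;

    if (dma)
        return dma;                    /* user chose one at boot time */

    for (i = 0; i < 4; i++) {
        dma = candidates[i];
        if (request_dma(dma, "lance-sketch"))
            continue;                  /* channel already in use */
        set_dma_mode(dma, DMA_MODE_CASCADE);   /* bus-master cascade mode */
        enable_dma(dma);
        if (lance_try_init(dev))       /* chip came up: keep this channel */
            return dma;
        disable_dma(dma);
        free_dma(dma);                 /* give it back and try the next */
    }
    return -1;                         /* no usable DMA channel found */
}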
The HP-J2405A board is an
exception: with this board it's easy
to read the EEPROM-set values
for the IRQ, and DMA.
See
Notes on AMD...
for more info on
these chips.
Status -- Supported
This is the PCnet-ISA+
-- an enhanced version of the 79C960.
It has support for jumper-less
configuration and Plug and
Play. See the info in the above section.
Status -- Supported
This is the PCnet-32 -- a 32 bit
bus-master version of the
original LANCE chip for VL-bus and local bus
systems.
Minor cleanups were added to the original lance
driver
around v1.1.50 to support these 32 bit versions of the
LANCE
chip. The main problem was that the
current versions of the
'965 and '970 chips have a minor bug.
They clear the Rx buffer length
field in the Rx ring when they
are explicitly documented not to.
Again, see the above info.
Status -- Supported
This is the
PCnet-PCI -- similar to the PCnet-32, but designed
for PCI bus based
systems. Again, see the above info.
Donald has modified
the LANCE
driver to use the PCI BIOS structure
that was introduced by Drew
Eckhardt for the PCI-NCR SCSI
driver. This means that you need to
build a kernel with
PCI BIOS support enabled.
Status --
Supported
This is the PCnet-SCSI -- treated like a '970 from an
Ethernet
point of view. Again, see the above info. Don't ask if the
SCSI half of the chip is supported -- this is the
Ethernet-Howto ,
not the SCSI-Howto.
Ansel Communications
Status
-- Semi-Supported
This driver is included in the present kernel as
an
alpha test driver.
Please see
Alpha Drivers
in
this
document for important information regarding
alpha drivers.
If you
use it, let Donald know how things work out,
as not too many people
have this card and feedback
has been low.
Status -- Supported
This on board ethernet uses an i82596
bus-master chip.
It can only be at i/o address . The author of
this
driver is Mark Evans. By looking at the driver source,
it
appears that the IRQ is hardwired to 10.
Earlier versions of the
driver had a tendency to think
that anything living at was an apricot
NIC.
Since then the hardware address is checked to avoid these
false
detections.
Arcnet
Status -- Semi-Supported
With the
very low cost and better performance of ethernet,
chances are that
most places will be giving away their Arcnet
hardware for free,
resulting in a lot of home systems with Arcnet.
An advantage of
Arcnet is that all of the cards have identical
interfaces, so one
driver will work for everyone.
Recent interest in getting Arcnet
going has picked up again
and Avery Pennarun's alpha driver has been
put into the
default kernel sources for 1.1.80 and above. The arcnet
driver
uses `arc0' as its name instead of the usual `eth0'
for
ethernet devices.
Bug reports and success stories can be mailed
to:
Note that AT&T's StarLAN is an orphaned
technology, like
SynOptics LattisNet, and can't be used in a standard
10Base-T
environment.
Status -- Not Supported
These
StarLAN cards use an interface similar to the i82586
chip. At one
point, Matthijs Melchior
() was playing with the 3c507
driver, and
almost had something useable working. Haven't
heard much since that.
Status -- Supported
This is a generic,
low-cost OEM pocket adaptor being sold by
AT-Lan-Tec, and (likely) a
number of other suppliers. A
driver for it is included in the standard
kernel.
Note that there is substantial information contained in
the
driver source file `atp.c'.
BTW, the adaptor (AEP-100L) has both
10baseT and BNC connections!
You can reach AT-Lan-Tec at
1-301-948-7070. Ask for the model
that works with Linux, or ask for
tech support.
In the Netherlands a compatible adaptor is sold under
the name SHI-TEC
PE-NET/CT, and sells for about $125. The vendor was
Megasellers.
They state that they do not sell to private persons, but
this doesn't appear to be strictly adhered to.
They are:
Megasellers, Vianen, The Netherlands. They always
advertise in Dutch
computer magazines. Note that the
newer model EPP-NET/CT appears to be
significantly different
than the PE-NET/CT, and will not work with the
present driver.
Hopefully someone will come up with the programming
information
and this will be fixed up.
In Germany, a
similar
adaptor comes as a no-brand-name product. Prolan 890b,
no
brand on the casing, only a roman II. Resellers can get a price
of
about $130, including a small wall transformer for the power.
The
adaptor is `normal size' for the product class, about 57mm wide,
22mm
high tapering to 15mm high at the DB25 connector, and 105mm
long
(120mm including the BNC socket). It's switchable between the
RJ45
and BNC jacks with a small slide switch positioned between the
two:
a very intuitive design.
Donald performed some power draw
measurements, and determined
that the average current draw was only
about 100mA @ 5V.
This power draw is low enough
that you could buy or
build a cable to take the 5V directly from the
keyboard/mouse port
available on many laptops. (Bonus points here
for using a standardized
power connector instead of a
proprietary one.)
Note that the device
name that you pass to
is not but for this device.
Boca
Research
Yes, they make more than just multi-port serial cards.
:-)
Status -- Supported
These cards are based on AMD's
PCnet chips, used in the AT1500 and
the like. You can pick up a combo
(10BaseT and 10Base2) PCI
card for under $70 at the moment.
More
information can be found in
AMD LANCE
.
More technical
information on AMD LANCE based Ethernet cards
can be found in
Notes
on AMD...
.
Cabletron
Donald writes:
`Yes, another one
of these companies that won't release its
programming information.
They waited for months before actually
confirming that all their
information was proprietary, deliberately
wasting my time. Avoid their
cards like the plague if you can.
Also note that some people have
phoned Cabletron, and have been
told things like `a D. Becker is
working on a driver
for linux' -- making it sound like I work for
them. This is
NOT the case.'
If you feel like asking them why they
don't want to release their
low level programming info so that people
can use their cards, write
to support@ctron.com.
Tell them that you
are using Linux, and are disappointed that they
don't support open
systems. And no, the usual driver development
kit they supply is
useless. It is just a DOS object file that
you are supposed to link
against. Which you aren't allowed to
even reverse engineer.
Status -- Semi-Supported
These are NEx000 almost-clones that are
reported to
work with the standard NEx000 drivers, thanks to
a
ctron-specific check during the probe. If there are
any problems,
they are unlikely to be fixed, as the
programming information is
unavailable.
Status -- Semi-Supported
Again, there is not
much one can do when the
programming information is proprietary.
The
E2100 is a poor design. Whenever it maps its
shared memory in during a
packet transfer, it
maps it into the whole 128K region! That means
you
can't safely use another interrupt-driven shared
memory device in
that region, including another E2100.
It will work most of the time,
but every once in
a while it will bite you. (Yes, this problem can
be
avoided by turning off interrupts while
transferring packets, but that
will almost certainly
lose clock ticks.) Also, if you mis-program the
board,
or halt the machine at just the wrong moment, even
the reset
button won't bring it back. You will have
to turn it off and leave it
off for about 30 seconds.
Media selection is automatic, but you can
override this
with the low bits of the dev->mem_end parameter.
See
PARAM_2
Also, don't confuse the E2100 for a NE2100 clone.
The E2100
is a shared memory NatSemi DP8390 design,
roughly similar to a
brain-damaged WD8013, whereas
the NE2100 (and NE1500) use a
bus-mastering AMD
LANCE design.
There is an E2100 driver included in
the standard kernel.
However, seeing as programming info isn't
available,
don't expect bug-fixes. Don't use one
unless you are
already stuck with the card.
D-Link
Some people have
had difficulty in finding vendors that
carry D-link stuff. This should
help.
(714) 455-1688 in the US
(081) 203-9900 in the UK
6196-643011 in Germany
(416) 828-0260 in Canada
(02) 916-1600 in
Taiwan
Status -- Supported
The manual says that it is
100% compatible with the
NE2000. This is not true. You should call
them and
tell them you are using their card with Linux, and
they
should correct their documentation. Some pre-0.99pl12
driver
versions may have trouble recognizing the DE2**
series as 16 bit
cards, and these cards are the most
widely reported as having the
spurious transfer address
mismatch errors. Note that there are cards
from
Digital (DEC) that are also named DE100 and DE200,
but the
similarity stops there.
Status -- Not Supported
This
appears to be a generic DEC21040 PCI chip implementation,
and will
most likely work with the generic 21040 driver, once
Linux gets one.
See
DEC 21040
for more information on these cards, and the
present driver
situation.
Status -- Supported
Laptop
users and other folk who might want a quick
way to put their computer
onto the ethernet may want
to use this. The driver is included with
the default
kernel source tree.
Bjorn Ekwall wrote the
driver.
Expect about 80kb/s transfer speed from this via the
parallel
port. You should read the README.DLINK
file in the kernel source tree.
Note that the device name that you pass to
is now and not the
previously
used .
If your parallel port is not at the standard
then you will have to recompile. Bjorn writes:
``Since the DE-620
driver tries to squeeze the last microsecond
from the loops, I made
the irq and port address constants instead
of variables. This makes
for a usable speed, but it also means
that you can't change these
assignments from e.g. lilo;
you _have_ to recompile...'' Also note
that some laptops
implement the on-board parallel port at which
is
where the parallel ports on monochrome cards were/are.
Supposedly, a
no-name ethernet pocket adaptor marketed
under the name `PE-1200' is
DE-600 compatible.
It is available in Europe from:
SEMCON Handels
Ges.m.b.h
Favoritenstrasse 20
A-1040 WIEN
Telephone: (+43) 222 50
41 708
Fax : (+43) 222 50 41 706
Status -- Supported
Same as the DE-600, only with two output formats.
Bjorn has written
a driver for this model,
for kernel versions 1.1 and above. See the
above information
on the DE-600.
Status -- Semi-Supported
Some people have been using this PCMCIA card for
some time now with
their notebooks. It is a basic
8390 design, much like a NE2000. The
LinkSys PCMCIA
card and the IC-Card Ethernet (available from
Midwest
Micro) are supposedly DE-650 clones as well.
Note that at present,
this driver is
not part of the standard kernel, and so you will
have
to do some patching.
See
PCMCIA Support
in this document,
and
if you can, have a look at:
Don's PCMCIA Stuff
DFI
Status -- Supported
These cards are now detected (as of
0.99pl15) thanks to
Eberhard Moenkeberg who noted that
they use `DFI'
in the first 3 bytes of the prom, instead
of using in bytes 14 and 15,
which is what all the
NE1000 and NE2000 cards use. (The 300 is an 8
bit
pseudo NE1000 clone, and the 400 is a pseudo NE2000 clone.)
Digital / DEC
Status -- Supported
As of linux
v1.0, there is a driver included as standard
for these cards. It was
written by David C. Davies.
There is documentation included in the
source file
`depca.c', which includes info on how to use more
than
one of these cards in a machine. Note that the DE422 is
an EISA
card. These cards are all based on the AMD LANCE chip.
See
AMD
LANCE
for more info.
A maximum of two of the ISA cards can be used,
because they
can only be set for and base I/O address.
If you are
intending to do this, please read the notes in
the driver source file
in the standard kernel
source tree.
Status -- Supported
Included into kernels v1.1.62 and above is this driver,
also by
David C. Davies of DEC. These cards use a proprietary
chip from DEC,
as opposed to the LANCE chip used in the
earlier cards like the
DE200. These cards support both shared
memory or programmed I/O,
although you take about a 50% performance hit if you use PIO mode. The
shared memory size can
be set to 2kB, 32kB or 64kB, but only 2 and 32
have been tested
with this driver. David says that the performance is
virtually
identical between the 2kB and 32kB mode. There is more
information
(including using the driver as a loadable module) at the
top
of the driver file and also in .
Both of these files come with
the standard kernel distribution.
Other interesting notes are that it
appears that David is/was
working on this driver for the unreleased
version of Linux
for the DEC Alpha AXP. And the standard driver has a
number
of interesting ioctl() calls that can be used to get or
clear
packet statistics, read/write the EEPROM, change the
hardware
address, and the like. Hackers can see the source
code for more info
on that one.
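The depca driver's private ioctl() calls aren't documented here, so purely as a hedged, generic illustration of poking an ethernet interface through ioctl(), the sketch below reads an interface's hardware address with the standard SIOCGIFHWADDR request from user space. The interface name `eth0' is an assumption, and this is not the depca-specific tool.
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

int main(void)
{
    struct ifreq ifr;
    unsigned char *hw;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    memset(&ifr, 0, sizeof ifr);
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);   /* assumed interface name */

    if (ioctl(fd, SIOCGIFHWADDR, &ifr) < 0) {      /* standard "get hw addr" */
        perror("SIOCGIFHWADDR");
        close(fd);
        return 1;
    }
    hw = (unsigned char *) ifr.ifr_hwaddr.sa_data;
    printf("hardware address: %02x:%02x:%02x:%02x:%02x:%02x\n",
           hw[0], hw[1], hw[2], hw[3], hw[4], hw[5]);
    close(fd);
    return 0;
}
The driver-specific statistics and EEPROM ioctl()s mentioned above follow the same pattern, but with private request numbers defined in the driver source.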
David has also written a configuration utility for
this
card (along the lines of the DOS program )
along with other
tools. These can be found on
in the directory
-- look for
the
file .
Status -- Not Supported
These cards are based
on the 21040 chip mentioned below.
At present there is no driver
available. (Take heart, it
is being worked on...)
Status --
Not Supported
The DEC 21040 is a bus-mastering single chip ethernet
solution
from Digital, similar to AMD's PCnet chip. The 21040
is
specifically designed for the PCI bus architecture.
SMC's new
EtherPower PCI card uses this chip.
The new 21140 just announced is
for supporting 100Base-? and
is supposed to be able to work with
drivers for the 21040 chip.
Donald has a SMC EtherPower PCI card at
the moment, and is
working on a driver. His home page says that he
has a driver
semi-working as of 28/12/94. An alpha driver may appear
in a month
or so. Please don't mail-bomb him asking for the driver,
or
help with it. Also, another person is presently working on a
driver for DEC's 21040 based cards, and it is not Donald. They
shall remain nameless so that their mailbox doesn't get filled
with
``Is it ready yet?'' messages either.
Farallon sells
EtherWave adaptors and transceivers. This device
allows multiple
10baseT devices to be daisy-chained.
Status -- Supported
This is reported to be a 3c509 clone that includes the
EtherWave
transceiver. People have used these successfully
with Linux and the
present 3c509 driver. They are too expensive
for general use, but are
a great option for special cases. Hublet
prices start at $125, and
Etherwave
adds $75-$100 to the price of the board -- worth
it if you
have pulled one wire too few, but not if you are two
network drops
short.
Hewlett Packard
The 272** cards use programmed I/O,
similar to the NE*000 boards,
but the data transfer port can be
`turned off' when you aren't
accessing it, avoiding problems with
autoprobing drivers.
Thanks to Glenn Talbott for helping clean up the
confusion in this
section regarding the version numbers of the HP
hardware.
Status -- Supported
8 Bit 8390 based 10BaseT,
not recommended for all the
8 bit reasons. It was re-designed a couple
years
ago to be highly integrated which caused some
changes in
initialization timing which only
affected testing programs, not LAN
drivers. (The
new card is not `ready' as soon after switching
into
and out of loopback mode.)
Status -- Supported
The HP PC
Lan+ is different to the standard HP PC Lan
card. This driver was
added to the list of drivers in the standard
kernel at about v1.1.3X.
Note that even though
the driver is included, the entry in
`config.in' seems
to have been omitted. If you want to use it, and it
doesn't
come up in `config.in' then add the following line to
`config.in' under the `HP PCLAN support' line:
bool 'HP PCLAN Plus support' CONFIG_HPLAN_PLUS n
Then run or whatever.
The 47B is a
16 Bit 8390 based 10BaseT w/AUI, and
the 52A is a 16 Bit 8390 based
ThinLAN w/AUI.
These cards are high performers (3c509 speed)
without
the interrupt latency problems (32K onboard RAM for TX
or RX
packet buffering). They both offer LAN
connector autosense, data I/O
in I/O space (simpler) or
memory mapped (faster), and soft
configuration.
The 47A is the older model that existed before the
`B'.
Two versions 27247-60001 or 27247-60002 have part
numbers marked
on the card. Functionally the same to
the LAN driver, except bits in
ROM to identify
boards differ. -60002 has a jumper to allow
operation
in non-standard ISA busses (chipsets
that expect IOCHRDY early.)
Status -- Supported
These are lower priced, and slightly
faster than the
27247B/27252A, but are missing some features, such
as
AUI, ThinLAN connectivity, and boot PROM socket.
This is a fairly
generic LANCE design, but a minor
design decision makes it
incompatible with a generic
`NE2100' driver. Special support for it
(including
reading the DMA channel from the board) is included
thanks
to information provided by HP's Glenn
Talbott.
More technical
information on LANCE based cards can be found in
Notes on AMD...
Status -- Supported
The HP-Vectra has an AMD PCnet chip on
the motherboard.
Earlier kernel versions would detect it as the
HP-J2405A
but that would fail, as the Vectra doesn't report the
IRQ
and DMA channel like the J2405A.
Get a kernel newer than v1.1.53 to
avoid this
problem.
DMA selection and chip numbering information can
be found in
AMD LANCE
.
More technical information on LANCE based
cards can be found in
Notes on AMD...
IBM / International
Business Machines
Status -- Supported
This is
compatible with the Intel based Zenith Z-note.
See
Z-note
for
more info.
Supposedly this site has a comprehensive database
of
useful stuff for newer versions of the Thinkpad. I haven't
checked
it out myself yet.
Thinkpad-info
For those without a WWW browser
handy, try
Status -- Semi-Supported
People have been
using this PCMCIA card with Linux as well.
Similar points apply, those
being that you need a supported
PCMCIA chipset on your notebook, and
that you will have to
patch the PCMCIA support into the standard
kernel.
See
PCMCIA Support
in this document,
and if you can,
have a look at:
Don's PCMCIA Stuff
Intel Ethernet Cards
Status -- Semi-Supported
This card uses the intel
i82586. (Surprise, huh?)
The driver is in the standard release of
the
kernel, as an alpha driver. See
Alpha Drivers
for
important
information on using alpha-test ethernet drivers
with
Linux.
The reason is that the driver works well with slow
machines,
but the i82586 occasionally hangs from the packet
buffer
contention that a fast machine can cause.
One reported
hack/fix is to change all of the outw()
calls to outw_p(). Also, the
driver is missing promiscuous
and multicast modes. (See
Multicast
and...
)
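As a hedged sketch of what that reported outw() to outw_p() change amounts to, the fragment below is a stand-alone x86 Linux user-space program (using <sys/io.h> and ioperm()), not the eexpress.c driver itself; outw_p() is the `pausing' variant of outw(), adding a short delay after each write. The port address and value are made up.
#include <stdio.h>
#include <sys/io.h>

int main(void)
{
    unsigned short port = 0x300;        /* assumed card base address       */
    if (ioperm(port, 16, 1) < 0) {      /* grant I/O access; needs root    */
        perror("ioperm");
        return 1;
    }
    outw(0x0000, port);                 /* plain write, back-to-back I/O   */
    outw_p(0x0000, port);               /* same write, followed by a pause */
    return 0;
}
The extra pause slows the programmed I/O down slightly, which is exactly why it is reported to help on fast machines that can out-run the i82586.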
There is also the standard way of using the chip (read
slower)
that is described in the chip manual, and used in
other
i82586 drivers, but this would require a re-write
of the entire
driver.
There is some technical information available on
the i82586
in
Programming the Intel Chips
and also in the source code for
the driver `eexpress.c'. Don't
be afraid to read it. ;-)
Status -- Not-Supported
This card uses the Intel 82595. If it is as
ugly to use as the
i82586, then don't count on anybody writing a
driver.
Status -- Semi-Supported
This is
supposed to be a re-badged DE-650. See the information
on the DE-650
in
DE-650
.
Status -- Not Supported
These are PCI cards that are based on DEC's 21040 chip. The
LNP104
uses the 21050 chip to deliver four independent
10BaseT ports. The
standard LNP101 is selectable
between 10BaseT, 10Base2 and 10Base5
output.
These cards may work with a generic 21040 driver if
and when
Linux gets one. (They aren't cheap either.)
Mylex can be reached at
the following numbers, in case anyone
wants to ask them about
programming information and the like.
MYLEX CORPORATION, Fremont
Sales: 800-77-MYLEX, (510) 796-6100
FAX: (510) 745-8016.
Novell Ethernet, NExxxx and associated clones.
The prefix `NE'
came from Novell Ethernet. Novell followed the
cheapest NatSemi
databook design and sold the manufacturing rights
(spun off?) to Eagle,
just to get reasonably-priced ethercards into
the market. (The now
ubiquitous NE2000 card.)
Status -- Supported
The
now-generic name for a bare-bones design around
the NatSemi 8390. They
use programmed I/O rather than
shared memory, leading to easier
installation but
slightly lower performance and a few problems.
Again,
the savings of using an 8 bit NE1000 over the NE2000
are only
warranted if you expect light use. Some
recently introduced NE2000
clones use the National
Semiconductor `AT/LANTic' 83905 chip, which
offers
a shared memory mode similar to the 8013 and EEPROM
or
software configuration. Some problems can arise
with poor clones.
See
Problems with...
, and
Poor NE2000 Clones
In general it
is not a good idea to put a NE2000
clone at I/O address because
nearly
every device driver probes there at boot. Some
poor NE2000
clones don't take kindly to being prodded
in the wrong areas, and will
respond by locking your
machine.
Donald has written a NE2000
diagnostic program, but it
is still presently in alpha test.
(ne2k)
See
Diagnostic Programs
for more
information.
Status -- Supported
These cards use the original 7990 LANCE chip
from AMD and
are supported using the Linux lance driver.
Some
earlier versions of the lance driver had problems
with getting the IRQ
line via autoIRQ from the original
Novell/Eagle 7990 cards. Hopefully
this is now fixed.
If not, then specify the IRQ via LILO, and let us
know
that it still has problems.
DMA selection and chip numbering
information can be found in
AMD LANCE
.
More technical
information on LANCE based cards can be found in
Notes on AMD...
Status -- Not Supported
This card uses a lowly 8MHz 80186,
and hence you are better
off using a cheap NE2000 clone. Even if a
driver was available,
the NE2000 card would most likely be faster.
Status -- Supported
The PureData PDUC8028 and
PDI8023 series of cards are reported
to work, thanks to special probe
code contributed by Mike
Jagdis . The support is integrated
with the
WD driver.
Status -- Semi-Supported
Michael
Hipp has written a driver for this card. It is included
in the
standard kernel as an `alpha' driver. Michael would like
to hear
feedback from users that have this card. See
Alpha Drivers
for
important
information on using alpha-test ethernet drivers
with
Linux.
Michael says that ``the internal sysbus seems to be slow. So
we often
lose packets because of overruns while receiving from a
fast remote host.''
This card also uses one of the Intel chips. See
Programming the Intel Chips
for more technical information.
Status -- Semi-Supported
There is also a driver for the
LANCE based NI6510, and it
is also written by Michael Hipp. Again, it
is also an
`alpha' driver. For some reason, this card is not
compatible
with the generic LANCE driver. See
Alpha Drivers
for
important
information on using alpha-test ethernet drivers
with
Linux.
Status -- Semi-Supported
This is just
a 3c501 clone, with a different S.A. PROM
prefix. I assume it is
equally as brain dead as the
original 3c501 as well. Kernels 1.1.53
and up check
for the NP943 i.d. and then just treat it as a
3c501
after that. See
3Com 3c501
for all the reasons as to why
you really don't want
to use one of these cards.
Status -- Supported
This driver was included into the v1.1 kernels,
and it was
written by PJD Weichmann and SWS Bern. It appears that
the
SK G16 is similar to the NI6510, in that it is based on
the first
edition LANCE chip (the 7990). Once again, I
have no idea as to why
this card won't work with the generic
LANCE driver.
Western
Digital / SMC (Standard Microsystems Corp.)
The ethernet part of
Western Digital has been bought by SMC.
One common mistake people make
is to assume that the relatively new SMC Elite Ultra
is the same as the older
SMC Elite16 models -- this is not the case.
Here is how to contact
SMC (not that you should need to.) SMC / Standard Microsystems Corp.,
80 Arkay Drive, Hauppauge, New York,
11788, USA.
Technical Support
via phone: 800-992-4762 (USA)
800-433-5345 (Canada)
516-435-6250
(Other Countries)
Literature requests: 800-SMC-4-YOU (USA)
800-833-4-SMC (Canada)
516-435-6255 (Other Countries)
Technical
Support via E-mail: techsupt@ccmail.west.smc.com
FTP Site:
ftp.smc.com
Status -- Supported
These are the 8-bit
versions of the card. The
8 bit 8003 is slightly less expensive, but
only
worth the savings for light use. Note that some
of the
non-EEPROM cards (clones with jumpers, or
old old old wd8003 cards)
have no way of reporting
the IRQ line used. In this case, auto-irq is
used, and if
that fails, the driver silently assigns IRQ
5.
Information regarding what the jumpers on old non-EEPROM
wd8003
cards do can be found in conjunction with the
SMC setup/driver disks
stored on
in the directory
. Note that some of the
newer SMC
`SuperDisk' programs will fail to detect
the old EEPROM-less cards.
The file
seems to be a good all-round choice. Also the
jumper
settings for old cards are in an ascii text file in the
aforementioned archive. The latest (greatest?) version
can be
obtained from .
As these are basically the
same as their 16 bit
counterparts (WD8013 / SMC Elite16),
you should see the next section
for more information.
Status -- Supported
Over
the
years the design has added more registers and an
EEPROM. Clones
usually go by the `8013' name, and
usually use a non-EEPROM (jumpered)
design. This part
of WD has been sold to SMC, so you'll usually
see
something like SMC/WD8013 or SMC Elite16 Plus (WD8013).
Late
model SMC cards will have two main PLCC chips
on board; the SMC 83c690
and the SMC 83c694.
The shared memory design makes the cards 10-20%
faster,
especially with larger packets. More importantly, from the
driver's point of view, it avoids a few bugs in the
programmed-I/O
mode of the 8390, allows safe
multi-threaded access to the packet
buffer, and
it doesn't have a programmed-I/O data register that
hangs
your machine during warm-boot probes.
Non-EEPROM cards that can't
just read the selected
IRQ will attempt auto-irq, and if that fails,
they will
silently assign IRQ 10. (8 bit versions will assign IRQ 5)
Also see
8013 clones
and
8013 problems
.
Status
-- Supported
This ethercard is based on a new chip from SMC, with
a
few new features. While it has a mode that is
similar to the older SMC
ethercards, it's not
compatible with the old WD80*3 drivers. However,
in
this mode it shares most of its code with the other
8390 drivers,
while operating somewhat faster than a
WD8013 clone.
Since part of
the Ultra looks like
an 8013, the Ultra probe is supposed to find
an
Ultra before the wd8013 probe has a chance to
mistakenly identify
it.
Std. as of 0.99pl14, and made possible by documentation
and
ethercard loan from
Duke Kamstra. If you plan on using an Ultra with
Linux
send him a note of thanks to let him know that there
are Linux
users out there!
Donald mentioned that it is possible to write a
separate
driver for the Ultra's `Altego' mode which allows
chaining
transmits at the cost of inefficient use of receive
buffers, but that
will probably not happen right away.
Performance re-writes of working
drivers are low
priority unless there is some particular incentive
or
need.
Bus-Master SCSI host adaptor users take note: In the
manual
that ships with Interactive UNIX, it mentions
that a bug in the SMC
Ultra will cause data corruption
with SCSI disks being run from an
aha-154X host adaptor.
This will probably bite aha-154X compatible
cards, such
as the BusLogic boards, and the AMI-FastDisk SCSI
host
adaptors as well.
Supposedly SMC has acknowledged the problem
occurs with
Interactive, and older Windows NT drivers. It is
supposed
to be a hardware conflict that can be worked around in
the
driver design. More on this as it develops.
Some Linux users with an
Ultra + aha-154X compatible cards
have experienced data corruption,
while others have not.
Donald tried this combination himself, and
wasn't able
to reproduce the problem. You have been warned.
Status -- Semi-Supported
This card uses SMC's 83c795 chip and
supports the Plug 'n Play
specification. Alex Mohr writes
the
following:
``The card has some features above and beyond the
SMC
Elite Ultra, but can be put into a mode that is compatible
with it.
When I tried to detect the card with linux, the autoprobe
in the
kernel didn't recognize it as an ultra. After wandering
the code a
bit, I noticed that in the smc-ultra.c file
it checks to see if an ID
Nibble is 0x20. I inserted
a check to see what it returns for my
card. Apparently, it's
a 0x40. So I allowed it to detect if it's a
0x20 or a 0x40, and
it works fine.''
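As a purely hypothetical sketch of the kind of probe change Alex describes (the register offset, mask and the fake_inb() stand-in below are illustrative, not copied from smc-ultra.c), the idea is simply to accept an ID nibble of either 0x20 or 0x40 instead of 0x20 only:
#include <stdio.h>

static unsigned char fake_inb(unsigned int port)
{
    (void) port;
    return 0x43;                 /* pretend the board answers like an EtherEZ */
}

static int ultra_id_ok(unsigned int ioaddr)
{
    unsigned char id = fake_inb(ioaddr + 7) & 0xF0;   /* offset 7 is assumed  */
    return id == 0x20 || id == 0x40;                  /* old check: 0x20 only */
}

int main(void)
{
    printf("probe %s the card\n", ultra_id_ok(0x300) ? "accepts" : "rejects");
    return 0;
}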
Status -- Not
Supported
Supposedly SMC is offering an evaluation deal on
these new
PCI cards for $99 ea. (not a real
great deal when you consider that
the Boca PCnet-PCI
based cards are going for less than $70 and
they
are supported under Linux already). They appear to be
a basic DEC
21040 implementation, i.e. one big chip
and a couple of
transceivers.
Donald has one of these cards, and is working on
a
driver for it. An alpha driver may appear in a month
or so, but
don't hold your breath.
See
DEC 21040
for more
info on these
chips from Digital.
Status -- Not Supported
These 8 bit
cards are based on the Fujitsu MB86950, which is an
ancient version of
the MB86965 used in the Linux at1700
driver. Russ says that you could
probably hack up a driver
by looking at the at1700.c code and his DOS
packet driver
for the Tiara card (tiara.asm).
Status -- Not
Supported
These are 16 bit I/O mapped 8390 cards, similar to a
generic
NE2000 card. If you can get the specifications from SMC,
then
porting the NE2000 driver would probably be quite easy.
Status -- Not Supported
These cards are VLB cards based on the
91c92 chip. They are
fairly expensive, and hence the demand for a
driver is pretty
low at the moment.
Xircom
Another group
that won't release documentation. No cards
supported. Don't look for
any support in the future unless
they release their programming
information. And this is
highly unlikely, as they forbid you from even
reverse-engineering their drivers. If you are already stuck with
one,
see if you can trade it off on some DOS (l)user.
And if you
just want to verify that this is the case, you can
reach Xircom at
1-800-874-7875, 1-800-438-4526 or +1-818-878-7600.
They used to
advertise that their products "work with all
network operating
systems", but have since stopped. Wonder
why...
Status --
Not Supported
Not to get your hopes up, but if you have one of these
parallel
port adaptors, you may be able to use it in the DOS
emulator
with the Xircom-supplied DOS drivers. You will have to
allow
DOSEMU access to your parallel port, and will probably have
to
play with SIG (DOSEMU's Silly Interrupt Generator). I have
no idea if
this will work, but if you have any success with it,
let me know, and
I will include it here.
Zenith
Status --
Supported
The built-in Z-Note network adaptor is based on the
Intel
i82593 using two DMA channels. There is an (alpha?)
driver
available in the present kernel version. As with all
notebook
and pocket adaptors, it is under the `Pocket and
portable
adaptors' section when running .
See
Programming the
Intel chips
for more technical information.
Also note that the IBM
ThinkPad 300 is compatible with the Z-Note.
Next Chapter,
Previous Chapter, Table of contents of this chapter,
General table of
contents
Top of the document,
Beginning of this Chapter
MD5{32}: 4b1efbd9f507557972b2c5b0b0e434a4
File-Size{5}: 60272
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{46}: Vendor/Manufacturer/Model Specific Information
}
@FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node69.html
Update-Time{9}: 827948641
title{20}: Memory Architecture
keywords{47}: architecture
aug
chance
edt
memory
reschke
tue
images{193}: /usr/local/src/latex2html/icons/next_motif.gif
/usr/local/src/latex2html/icons/up_motif.gif
/usr/local/src/latex2html/icons/previous_motif.gif
/usr/local/src/latex2html/icons/contents_motif.gif
head{153}: Next: Applications and Algorithms: Up: Taming Massive Parallelism:
Previous: Principles Memory Architecture Chance Reschke
Tue Aug 15
08:59:12 EDT 1995
MD5{32}: 1b3289f62c4eaefe3adf6ee3db123f48
File-Size{4}: 1400
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{20}: Memory Architecture
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/in.house/lou.html
Update-Time{9}: 827948654
url-references{126}: multigrid.html
http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/in-house.html
mailto:lpicha@cesdis.gsfc.nasa.gov
title{30}: Parallel Multigrid PDE Solvers
keywords{63}: curator
image
larry
page
picha
picture
previous
return
see
the
images{19}: graphics/return.gif
headings{59}: Parallel Multigrid PDE Solvers
Return
to the PREVIOUS PAGE
body{3115}:
Objective: Develop efficient parallel algorithms/software for
implementing multigrid partial differential equation (PDE) solvers on
massively parallel computers. The implemented PDE solvers should be
scalable and portable across different hardware platforms. These PDE
solvers can be used either as a library routine or as expandable template
code for solving many challenging problems in physics and engineering.
Approach: Developing high-quality, parallel numerical PDE solvers
requires expertise in both numerical mathematics and software
engineering. We identified numerically efficient multigrid algorithms
for solving elliptic PDEs and developed strategies for their parallel
implementations on message-passing systems. We use modern software
technologies in our implementations to make our code highly structured,
reusable and extensible. We verified the effectiveness of our parallel
multigrid solver by extending it to an incompressible fluid flow
solver.
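The package itself is not reproduced here, so purely as a generic, hedged illustration of what a multigrid cycle does, the sketch below is a textbook serial V-cycle (weighted-Jacobi smoothing, full-weighting restriction, linear interpolation) for the 1D Poisson problem -u'' = f with zero boundary values. It makes no claim about the JPL solver's actual C/Fortran interfaces or its NX/MPI message-passing layer, and the grid sizes are arbitrary.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* One weighted-Jacobi smoothing sweep for -u'' = f on n interior points. */
static void smooth(double *u, const double *f, int n, double h)
{
    double *old = malloc((n + 2) * sizeof *old);
    memcpy(old, u, (n + 2) * sizeof *old);
    for (int i = 1; i <= n; i++)
        u[i] = old[i] / 3.0
             + (2.0 / 3.0) * 0.5 * (old[i - 1] + old[i + 1] + h * h * f[i]);
    free(old);
}

/* Recursive V-cycle: smooth, restrict the residual, solve the coarse
 * correction recursively, interpolate it back, smooth again. */
static void vcycle(double *u, const double *f, int n, double h)
{
    smooth(u, f, n, h);
    if (n >= 3) {
        int nc = (n - 1) / 2;                    /* coarse interior points  */
        double *r  = calloc(n + 2,  sizeof *r);
        double *fc = calloc(nc + 2, sizeof *fc);
        double *uc = calloc(nc + 2, sizeof *uc);
        for (int i = 1; i <= n; i++)             /* residual f + u''        */
            r[i] = f[i] + (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (h * h);
        for (int i = 1; i <= nc; i++)            /* full-weighting restrict */
            fc[i] = 0.25 * (r[2 * i - 1] + 2.0 * r[2 * i] + r[2 * i + 1]);
        vcycle(uc, fc, nc, 2.0 * h);
        for (int i = 1; i <= nc; i++)            /* prolong: coarse points  */
            u[2 * i] += uc[i];
        for (int i = 0; i <= nc; i++)            /* prolong: in-between     */
            u[2 * i + 1] += 0.5 * (uc[i] + uc[i + 1]);
        free(r); free(fc); free(uc);
    }
    smooth(u, f, n, h);
}

int main(void)
{
    int n = 127;                                 /* 2^7 - 1 interior points */
    double h = 1.0 / (n + 1);
    double *u = calloc(n + 2, sizeof *u);
    double *f = calloc(n + 2, sizeof *f);
    for (int i = 1; i <= n; i++)
        f[i] = 1.0;                              /* constant source term    */
    for (int k = 0; k < 10; k++)
        vcycle(u, f, n, h);
    printf("u(0.5) = %f  (exact value 0.125)\n", u[(n + 1) / 2]);
    free(u);
    free(f);
    return 0;
}
In a message-passing implementation such as the one described above, each of these loops would operate on a locally owned block of the grid, with neighbouring boundary values exchanged before every smoothing sweep.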
Accomplishments: We developed a parallel algorithm and
implemented the parallel multigrid elliptic solver package. The
multigrid solver can solve N-dimensional (N <= 3) boundary-value
problems for Poisson and Helmholtz equations on several commonly-used
finite-difference grids, and it runs on both sequential and parallel
computers. The numerical and parallel performances of the multigrid
solver have been measured for some test problems on Intel Delta and
Paragon systems and the results are fairly good. The multigrid solver
was implemented in C with both NX and MPI interfaces for
message-passing. Interfaces to the multigrid solver from an application
program are available in C and Fortran. The multigrid solver has been
extended to a two-dimensional (2D) incompressible fluid flow solver
based on a projection method implemented on a staggered
finite-difference grid. The flow solver can be used to simulate fluid
flows, e.g., in astrophysics and combustion problems. The 2D multigrid
flow solver has been tested on a few model problems (see picture page,
60k image) .
Significance: Multigrid methods are a class of highly
efficient (sometimes optimal) numerical schemes for solving a variety
of numerical PDEs arising from science and engineering problems.
Solving elliptic problems is often a computationally expensive step in
many time-dependent scientific computing problems. Developing a
general-purpose, parallel multigrid elliptic solver, however, is far
from a trivial task for most application scientists. Our parallel
multigrid solver package can be a useful computational tool in solving
large science and engineering problems.
Status/Plans: Extend the
multigrid solver on staggered grid to 3D grids. Extend the multigrid
flow solver to 3D problems. Investigate the possibilities of
incorporating adaptive and multilevel grid features into existing
parallel PDE solvers. Investigate the benefits of using object-oriented
approaches (e.g. C++ or its extensions) in implementing parallel PDE
solvers.
Point of Contact: John Lou
Jet Propulsion
Laboratory
(818) 354-4870
lou@acadia.jpl.nasa.gov
curator:
Larry Picha
MD5{32}: d1bd810f95831080b8eef74310b78797
File-Size{4}: 3609
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{30}: Parallel Multigrid PDE Solvers
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.darwin.html
Update-Time{9}: 827948662
url-references{28}: http://racimac.arc.nasa.gov/
title{63}: DARWIN/HPCC Phased-Array Acoustic Measurement and Visualization
keywords{101}: accomplishments
approach
arc
contact
gov
http
nasa
objective
plans
point
racimac
significance
status
headings{63}: DARWIN/HPCC Phased-Array Acoustic Measurement and Visualization
body{2742}:
Objective: To produce a near-real-time phased-array acoustic
measurement and visualization system for wind-tunnel testing by
combining the skills of the DARWIN and HPCC projects, and to apply the
system to the analysis of the acoustic environment around a DC-10 model
with extended flaps and landing gear in the Ames 40x80 wind tunnel.
Approach: Instrumentation, data collection, and storage computer
systems are combined with the HPCC IBM SP-2 to produce a heterogeneous
distributed computing system. The Parallel Virtual Machine (PVM)
software provides data communications between machines and within the
SP-2. Acoustic information is collected by an array of 40 microphones,
and is stored in memory on the instrumentation computer. This digitized
data is routed to the SP-2 for phased array processing. A surface of
points are "scanned" to determine the strength of noise sources at each
location. Sound pressure levels on this surface are visualized in the
FAST visualization system. A graphical user interface provides an
easy-to-use data entry environment.
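None of the DARWIN/NPRIME code appears here, so purely as a hedged illustration of the scanning step just described, the sketch below does conventional delay-and-sum beamforming over a small grid of scan points in plain C. The 40-microphone count comes from the text; the array geometry, sample rate, scan grid and zero-filled input buffers are all invented, and a real run would first fill the sample buffers from the data-collection system.
#include <math.h>
#include <stdio.h>

#define NMIC  40        /* microphones in the array (per the text above) */
#define NSAMP 1024      /* samples per channel -- an arbitrary choice    */
#define FS    51200.0   /* sample rate in Hz -- assumed                  */
#define C0    343.0     /* speed of sound in air, m/s                    */

static double x[NMIC][NSAMP];   /* recorded pressure samples (zero here) */
static double micpos[NMIC][3];  /* microphone coordinates in metres      */

/* Mean square of the delay-and-sum output steered at scan point p:
 * delay each channel by its propagation time from p, sum, square, average. */
static double scan_point(const double p[3])
{
    double power = 0.0;
    for (int n = 0; n < NSAMP; n++) {
        double sum = 0.0;
        for (int m = 0; m < NMIC; m++) {
            double dx = p[0] - micpos[m][0];
            double dy = p[1] - micpos[m][1];
            double dz = p[2] - micpos[m][2];
            double delay = sqrt(dx * dx + dy * dy + dz * dz) / C0;
            int k = n - (int) (delay * FS + 0.5);   /* delayed sample index */
            if (k >= 0 && k < NSAMP)
                sum += x[m][k];
        }
        power += sum * sum;
    }
    return power / NSAMP;
}

int main(void)
{
    /* A made-up line array with 5 cm spacing; real use would load x[][]
     * from the instrumentation system before scanning. */
    for (int m = 0; m < NMIC; m++)
        micpos[m][0] = 0.05 * m;

    double best = -1.0;
    int bi = 0, bj = 0;
    for (int i = 0; i < 8; i++) {                    /* scan an 8x8 surface */
        for (int j = 0; j < 8; j++) {
            double p[3] = { 0.1 * i, 0.1 * j, 2.0 }; /* grid 2 m away       */
            double lvl = scan_point(p);
            if (lvl > best) { best = lvl; bi = i; bj = j; }
        }
    }
    printf("strongest source at grid point (%d,%d), level %g\n", bi, bj, best);
    return 0;
}
The per-point scans are independent of one another, which is what makes this step a natural candidate for distribution across the SP-2 nodes via PVM.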
Accomplishments: Prototype
software is complete linking the NPRIME data collection system, the
graphical interface, and the IBM SP-2. Calibration tests were carried
out during late July and early August. A survey of the acoustic
environment around a DC-10 model in the 40x80 wind tunnel was completed
September 1, 1995.
Significance: Recent improvements in engine
noise have increased the relative contribution of the airframe to the
total noise produced by aircraft during landing. Tighter airport noise
regulations may limit the markets of U.S. transport aircraft
manufacturers. Prior analysis procedures completed the analysis of a
few frequencies overnight. The new system provides analysis for dozens
of frequencies in less than 5 minutes (between test points). During the
DC-10 test, several previously unknown noise sources were identified.
McDonnell-Douglas and other participants in the Advanced Subsonic
Transport (AST) program are pleased with the results.
Status/Plans:
Analysis of the DC-10 data is continuing. Improvements in parallelism
and solution efficiency should allow the visualization of hundreds of
frequencies in near-real-time. With a greater number of microphones,
greater resolution will become possible at higher frequencies, and
volume (as opposed to surface) rendering will be practical. This will
require even more computational horsepower to meet the near-real-time
requirement.
Point(s) of Contact:
Merritt H. Smith
NASA Ames
Research Center
mhsmith@nas.nasa.gov
(415)604-4493
Mike Watts
NASA Ames Research Center
Mike_Watts@qmgate.arc.nasa.gov
(415)604-6574
DARWIN Web Page at NASA Ames Research Center:
http://racimac.arc.nasa.gov/
MD5{32}: e1ea879b31583b5d88c5c1513d86ab1b
File-Size{4}: 3084
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{63}: DARWIN/HPCC Phased-Array Acoustic Measurement and Visualization
}
@FILE { http://cesdis.gsfc.nasa.gov/linux/drivers/tulip.html
Update-Time{9}: 827948898
url-references{354}: http://cesdis.gsfc.nasa.gov/cesdis.html
/linux/drivers/tulip.c
#other
tulip.c
v1.3/tulip.c
new-tulip.c
/pub/people/becker/beowulf.html
tulip.patch
http://cesdis.gsfc.nasa.gov/cesdis.html
http://hypatia.gsfc.nasa.gov/NASA_homepage.html
http://hypatia.gsfc.nasa.gov/GSFC_homepage.html
http://www.hal.com/~markg/WebTechs/
#top
/pub/people/becker/whoiam.html
title{30}: Linux and the DEC "Tulip" Chip
keywords{250}: after
all
and
author
becker
beowulf
better
center
cesdis
chip
complete
dec
description
donald
driver
drivers
extra
features
file
fix
flight
for
goddard
implemented
linux
nasa
other
patch
pci
performance
project
space
the
this
top
tulip
unneeded
with
images{56}: http://www.hal.com/~markg/WebTechs/images/valid_html.gif
headings{142}: Linux and the DEC "Tulip" Chip
Errata
Using the 10base2 or AUI Port
Setting the cache alignment
Ethercards reported to use the DEC 21040 chip
body{4294}:
This page contains information on using Linux with the DEC
21040/21140
"Tulip" chips, as used on the SMC PCI EtherPower and other
ethercards.
The master copy of this page resides on the
CESDIS
WWW
server.
The driver for the DEC 21040 "Tulip"
chip is now available!
It has been integrated with the kernel
source tree since 1.1.90,
although it remains commented out in the
configuration file.
This
driver works with the SMC PCI EtherPower card as well as many
other
PCI ethercards.
This driver is available in several versions. The
standard, tested v0.07a for 1.2.* series released kernels. The same
conservative driver v0.07a with the extra support needed to work with
the 1.3.* development kernels. The latest testing version of the driver
with better performance and extra features. This version will compile
with all 1.2.* kernels and recent 1.3.* development kernels.
This
driver was written to support the Beowulf cluster project at
CESDIS.
For Beowulf-specific information, read the
Beowulf project
description.
The new generation Beowulf uses two 21140 100baseTX
boards on every
processor, with each network connected by 100baseTX
repeaters.
There are two known problems with the code previously
distributed: the driver always selects the 10baseT (RJ45) port, not the
AUI (often 10base2/BNC) port, and the driver fails with corrupted
transfers when used with some motherboard chipsets, such as the Intel
Saturn used on the ASUS SP3G.
Both of these problems have fixes as
described below. The
complete patch file fixes these problems as
well
as cleaning up some of the development messages.
The new driver
automatically switches media when the 10baseT port fails.
On the 21040
it switches to the AUI (usually 10base2) media, and on the
21140 it
configures the chip into a 100baseTx compatible mode.
This fix is
unneeded in all Tulip drivers after v0.05.
To use the 10base2 port
with the driver in 1.2.[0-5] you must change
the setting of one SIA
(serial interface) register. Make the following
change around line 325:
-    outl(0x00000004, ioaddr + CSR13);
+    outl(0x0000000d, ioaddr + CSR13);
This fix is implemented in all Tulip drivers
after v0.04.
The pre-1.2 driver experienced packet data corruption
when used with some
motherboards, most notably the ASUS SP3G. The
workaround is to set
the cache alignment parameters in the Tulip chip
to their most conservative
values.
--- /usr/src/linux-1.1.84/drivers/net/tulip.c   Sun Jan 22 15:42:12 1995
+++ tulip.c     Sun Jan 22 16:21:44 1995
@@ -268,9 +271,15 @@
     /* Reset the chip, holding bit 0 set at least 10 PCI cycles. */
     outl(0xfff80001, ioaddr + CSR0);
     SLOW_DOWN_IO;
-    /* Deassert reset. Wait the specified 50 PCI cycles by initializing
+    /* Deassert reset. Set 8 longword cache alignment, 8 longword burst.
+       Cache alignment bits 15:14         Burst length 13:8
+         0000  No alignment  0x00000000 unlimited    0800  8 longwords
+         4000  8 longwords     0100  1 longword      1000  16 longwords
+         8000  16 longwords    0200  2 longwords     2000  32 longwords
+         C000  32 longwords    0400  4 longwords
+
+       Wait the specified 50 PCI cycles after a reset by initializing
        Tx and Rx queues and the address filter list. */
-    outl(0xfff80000, ioaddr + CSR0);
+    outl(0xfff84800, ioaddr + CSR0);
     if (irq2dev_map[dev->irq] != NULL
         || (irq2dev_map[dev->irq] = dev) == NULL
This is reportedly a bug in the motherboard chipset's
implementation of
burst mode transfers. The patch above turns on a
feature in the Tulip that's
supposed to reduce the performance impact
of maintaining cache consistency,
but it is also a way to effectively
limit the burst transfer length to a size
the chipset can handle
without error.
Accton EtherDuo PCI
Cogent EM100
Cogent EM400 (same with 4 ports + PCI Bridge)
Cogent EM964 Quartet: four 21040 ports and a DEC 21050 PCI bridge.
Danpex EN-9400P3
D-Link DFE500-Tx (Possibly inaccurate report.)
D-Link DE-530CT
Linksys EtherPCI
SMC EtherPower with DEC 21040 -- my development board.
SMC EtherPower10/100 with DEC 21140 -- also tested.
Thomas Conrad TC5048
Znyx ZX312 EtherAction
Znyx ZX315 EtherArray: two 21040 10baseT/10base2 ports and a DEC 21050 PCI bridge.
Znyx ZX342 (with 4 ports + PCI bridge?)
CESDIS
is located
at the
NASA
Goddard Space Flight Center in Greenbelt MD.
address{57}: Top
Author:
Donald Becker
, becker@cesdis.gsfc.nasa.gov.
MD5{32}: 916d54c95aad8167127a02e2739d5e2f
File-Size{4}: 5839
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{30}: Linux and the DEC "Tulip" Chip
}
@FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node49.html
Update-Time{9}: 827948639
title{12}: Conclusions
keywords{39}: aug
chance
conclusions
edt
reschke
tue
images{203}: /usr/local/src/latex2html/icons/next_motif.gif
/usr/local/src/latex2html/icons/up_motif.gif
/usr/local/src/latex2html/icons/previous_motif.gif
/usr/local/src/latex2html/icons/contents_motif.gif
img43.gif
head{435}: Next: Acknowledgments Up: A Petaops is Previous: Overall Architecture
Conclusions A petaops system is obviously an extremely aggressive
target, but a
C RAM design that focuses on power consumption and
bandwidth makes
it plausible. While the technologies we propose are
far from "proven", they
are within the bounds of the imaginable with
present fabrication processes
and system engineering. Chance Reschke
Tue Aug 15 08:59:12 EDT 1995
MD5{32}: f37fd490b7995c8c08f78cdff6bcdadc
File-Size{4}: 1710
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{12}: Conclusions
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.list.accomp.html
Update-Time{9}: 827948833
title{15}: --_-_-_-_-_-_--
MD5{32}: 600b3020ad6565944ae086b26c7a7145
File-Size{4}: 3859
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/testbed/graphics/
Update-Time{9}: 827948842
url-references{115}: /hpccm/annual.reports/cas94contents/testbed/
bar.gif
cas.gif
hpccsmall.gif
return.gif
search.button.gif
smaller.gif
title{62}: Index of /hpccm/annual.reports/cas94contents/testbed/graphics/
keywords{68}: bar
button
cas
directory
gif
hpccsmall
parent
return
search
smaller
images{134}: /icons/blank.xbm
/icons/menu.gif
/icons/image.gif
/icons/image.gif
/icons/image.gif
/icons/image.gif
/icons/image.gif
/icons/image.gif
headings{62}: Index of /hpccm/annual.reports/cas94contents/testbed/graphics/
body{272}:
Name Last modified Size Description
Parent Directory 19-Jul-95
15:55 -
bar.gif 17-Jul-95 13:51 3K
cas.gif 17-Jul-95 13:51 22K
hpccsmall.gif 17-Jul-95 13:51 2K
return.gif 17-Jul-95 13:51 1K
search.button.gif 17-Jul-95 13:51 2K
smaller.gif 17-Jul-95 13:51
23K
MD5{32}: cb742d8ea0784718a5db7096b4f86200
File-Size{4}: 1241
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{62}: Index of /hpccm/annual.reports/cas94contents/testbed/graphics/
}
@FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node55.html
Update-Time{9}: 827948640
title{8}: Summary
keywords{35}: aug
chance
edt
reschke
summary
tue
images{203}: /usr/local/src/latex2html/icons/next_motif.gif
/usr/local/src/latex2html/icons/up_motif.gif
/usr/local/src/latex2html/icons/previous_motif.gif
/usr/local/src/latex2html/icons/contents_motif.gif
img48.gif
head{474}: Next: Acknowledgment Up: Design of a Previous: Massively Parallel SIMD
Summary The group has successfully simulated a toroidal mesh of
processing elements using circuit design software. The simulation
included all local operations. In addition, the router and global
networks have been designed, and we are currently in the process of
simulating them. Plans are to simulate a larger network and begin to
develop a VLSI prototype. Chance Reschke
Tue Aug 15 08:59:12 EDT
1995
MD5{32}: 523e5bb3f89c33cb1a47e7540909d694
File-Size{4}: 1736
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{8}: Summary
}
@FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node80.html
Update-Time{9}: 827948642
url-references{111}: http://cbl.leeds.ac.uk/nikos/tex2html/doc/latex2html/latex2html.html
http://cbl.leeds.ac.uk/nikos/personal.html
title{27}: About this document ...
keywords{82}: about
aug
chance
document
drakos
edt
html
latex
nikos
report
reschke
tex
this
tue
images{146}: /usr/local/src/latex2html/icons/up_motif.gif
/usr/local/src/latex2html/icons/previous_motif.gif
/usr/local/src/latex2html/icons/contents_motif.gif
head{438}: Up: No Title Previous: Implications for Future About this document ...
This document was generated using the LaTeX2HTML translator Version
95.1 (Fri Jan 20 1995) Copyright © 1993, 1994, Nikos Drakos,
Computer Based Learning Unit, University of Leeds. The command line
arguments were:
latex2html report.tex . The translation was initiated
by Chance Reschke on Tue Aug 15 08:59:12 EDT 1995 Chance Reschke
Tue
Aug 15 08:59:12 EDT 1995
MD5{32}: aeb07d9d8525534f287cb135ab52d7fd
File-Size{4}: 1826
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{27}: About this document ...
}
@FILE { http://cesdis.gsfc.nasa.gov/pub/linux/drivers/3c59x-new.c
Update-Time{9}: 827948605
Partial-Text{4784}: EL3WINDOW
cleanup_module
init_module
set_multicast_list
tc59x_init
update_stats
vortex_close
vortex_get_stats
vortex_interrupt
vortex_open
vortex_probe1
vortex_rx
vortex_start_xmit
linux/config.h
linux/module.h
linux/version.h
linux/kernel.h
linux/sched.h
linux/string.h
linux/ptrace.h
linux/errno.h
linux/in.h
linux/ioport.h
linux/malloc.h
linux/interrupt.h
linux/pci.h
linux/bios32.h
asm/bitops.h
asm/io.h
asm/dma.h
linux/netdevice.h
linux/etherdevice.h
linux/skbuff.h
/* 3c59x.c: An 3Com 3c590/3c595 "Vortex" ethernet driver for linux. */
/*
Written 1995 by Donald Becker.
This software may be used and distributed according to the terms
of the GNU Public License, incorporated herein by reference.
This driver is for the 3Com "Vortex" series ethercards. Members of
the series include the 3c590 PCI EtherLink III and 3c595-Tx PCI Fast
EtherLink. It also works with the 10Mbs-only 3c590 PCI EtherLink III.
The author may be reached as becker@CESDIS.gsfc.nasa.gov, or C/O
Center of Excellence in Space Data and Information Sciences
Code 930.5, Goddard Space Flight Center, Greenbelt MD 20771
*/
/* Warning: Bogus! This means IS_LINUX_1_3. */
/* This will be in linux/etherdevice.h someday. */
/* The total size is twice that of the original EtherLinkIII series: the
runtime register window, window 1, is now always mapped in. */
/*
Theory of Operation
I. Board Compatibility
This device driver is designed for the 3Com FastEtherLink, 3Com's PCI to
10/100baseT adapter. It also works with the 3c590, a similar product
with only a 10Mbs interface.
II. Board-specific settings
PCI bus devices are configured by the system at boot time, so no jumpers
need to be set on the board. The system BIOS should be set to assign the
PCI INTA signal to an otherwise unused system IRQ line. While it's
physically possible to shared PCI interrupt lines, the 1.2.0 kernel doesn't
support it.
III. Driver operation
The 3c59x series use an interface that's very similar to the previous 3c5x9
series. The primary interface is two programmed-I/O FIFOs, with an
alternate single-contiguous-region bus-master transfer (see next).
One extension that is advertised in a very large font is that the adapters
are capable of being bus masters. Unfortunately this capability is only for
a single contiguous region making it less useful than the list of transfer
regions available with the DEC Tulip or AMD PCnet. Given the significant
performance impact of taking an extra interrupt for each transfer, using
DMA transfers is a win only with large blocks.
IIIC. Synchronization
The driver runs as two independent, single-threaded flows of control. One
is the send-packet routine, which enforces single-threaded use by the
dev->tbusy flag. The other thread is the interrupt handler, which is single
threaded by the hardware and other software.
IV. Notes
Thanks to Cameron Spitzer and Terry Murphy of 3Com for providing both
3c590 and 3c595 boards.
The name "Vortex" is the internal 3Com project name for the PCI ASIC, and
the not-yet-released (3/95) EISA version is called "Demon". According to
Terry these names come from rides at the local amusement park.
The new chips support both ethernet (1.5K) and FDDI (4.5K) packet sizes!
This driver only supports ethernet packets because of the skbuff allocation
limit of 4K.
*/
/* 3Com's manufacturer's ID. */
/* Operational defintions.
These are not used by other compilation units and thus are not
exported in a ".h" file.
First the windows. There are eight register windows, with the command
and status registers available in each.
*/
/* The top five bits written to EL3_CMD are a command, the lower
11 bits are the parameter, if applicable.
Note that 11 parameters bits was fine for ethernet, but the new chip
can handle FDDI lenght frames (~4500 octets) and now parameters count
32-bit 'Dwords' rather than octets. */
/* The SetRxFilter command accepts the following classes: */
/* Bits in the EL3_STATUS general status register. */
/* Latched interrupt. */
/* Host error. */
/* EL3_CMD is still busy.*/
/* Register window 1 offsets, the window used in normal operation.
On the Vortex this window is always mapped at offsets 0x10-0x1f. */
/* Remaining free bytes in Tx buffer. */
/* Window 0: EEPROM command register. */
/* Enable erasing/writing for 10 msec. */
/* Disable EWENB before 10 msec timeout. */
/* EEPROM locations. */
/* Window 3: MAC/config bits. */
/* Window 4: Various transcvr/media bits. */
/* Enable link beat and jabber for 10baseT. */
/* "ethN" string, also for kernel debug. */
/* Unlike the other PCI cards the 59x cards don't need a large contiguous
memory region, so making the driver a loadable module is feasible.
*/
/* Remove I/O space marker in bit 0. */
MD5{32}: 34299d3cdecae1d422b843f6aeea0b36
File-Size{5}: 27311
Type{1}: C
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Keywords{2261}: accepts
according
adapter
adapters
advertised
allocation
also
alternate
always
amd
amusement
and
applicable
are
asic
asm
assign
author
available
baset
beat
because
becker
before
being
bios
bit
bitops
bits
blocks
board
boards
bogus
boot
both
buffer
bus
busy
but
bytes
called
cameron
can
capability
capable
cards
center
cesdis
chip
chips
classes
cleanup
close
cmd
code
com
come
command
compatibility
compilation
config
configured
contiguous
control
count
data
debug
dec
defintions
demon
designed
dev
device
devices
disable
distributed
dma
doesn
don
donald
driver
dwords
each
eeprom
eight
eisa
enable
enforces
erasing
errno
error
ethercards
etherdevice
etherlink
etherlinkiii
ethernet
ethn
ewenb
excellence
exported
extension
extra
fast
fastetherlink
fddi
feasible
fifos
file
fine
first
five
flag
flight
flows
following
font
for
frames
free
from
general
get
given
gnu
goddard
gov
greenbelt
gsfc
handle
handler
hardware
herein
host
iii
iiic
impact
include
incorporated
independent
information
init
inta
interface
internal
interrupt
ioport
irq
jabber
jumpers
kernel
large
latched
lenght
less
license
limit
line
lines
link
linux
list
loadable
local
locations
lower
mac
making
malloc
manufacturer
mapped
marker
master
masters
may
mbs
means
media
members
memory
module
msec
multicast
murphy
name
names
nasa
need
netdevice
new
next
normal
not
note
notes
now
octets
offsets
one
only
open
operation
operational
original
other
otherwise
packet
packets
parameter
parameters
park
pci
pcnet
performance
physically
possible
previous
primary
probe
product
programmed
project
providing
ptrace
public
rather
reached
reference
region
regions
register
registers
released
remaining
remove
rides
routine
runs
runtime
sched
sciences
see
send
series
set
setrxfilter
settings
shared
should
signal
significant
similar
single
size
sizes
skbuff
software
someday
space
specific
spitzer
start
stats
status
still
string
support
supports
synchronization
system
taking
tbusy
terms
terry
than
thanks
that
the
theory
there
these
this
thread
threaded
thus
time
timeout
top
total
transcvr
transfer
transfers
tulip
twice
two
unfortunately
units
unlike
unused
update
use
used
useful
using
various
version
very
vortex
warning
was
which
while
will
win
window
windows
with
works
writing
written
xmit
yet
Description{9}: EL3WINDOW
}
@FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node35.html
Update-Time{9}: 827948636
title{14}: Open Problems
keywords{41}: aug
chance
edt
open
problems
reschke
tue
images{387}: /usr/local/src/latex2html/icons/next_motif.gif
/usr/local/src/latex2html/icons/up_motif.gif
/usr/local/src/latex2html/icons/previous_motif.gif
/usr/local/src/latex2html/icons/contents_motif.gif
/usr/local/src/latex2html/icons/next_motif.gif
/usr/local/src/latex2html/icons/up_motif.gif
/usr/local/src/latex2html/icons/previous_motif.gif
/usr/local/src/latex2html/icons/contents_motif.gif
head{4094}: Next: Conclusions Up: Heterogeneous Computing: One Previous: A
Conceptual Model Open Problems A great many open problems need to be
solved
before heterogeneous computing can be made available to
the
average applications programmer in a transparent way.
Many
(possibly even most) need to be addressed just to
facilitate
near-optimal practical use of real heterogeneous suites
in a
``visible'' (i.e., user specified) way.
Below is a brief discussion of
some of these open problems;
it is far from exhaustive, but it will
convey the types of
issues that need to be addressed.
Others may be
found in [13, 28]. Implementation of an automatic HC
programming
environment, such as envisioned in Section 3,
will
require a great deal of research for devising practical
and
theoretically sound methodologies for each component
of each stage. A
general open question that is particularly applicable to
stages 1 and
2 of the conceptual model is: ``What
information should (must) the
user provide and what information
should (can) be determined
automatically?'' For example,
should the user specify the subtasks
within an application or
can this be done automatically? Future HC
systems will
probably not completely automate all of the steps in
the
conceptual model. A key to the future success of HC hinges
on
striking a proper balance between the amount of
information
expected from the user (i.e., effort) and the level of
performance delivered by the system. To program an HC system, it would
be best to have
machine-independent programming languages [33] that
allow the user
to augment the code with compiler directives.
The
programming language and user specified directives should be designed
to facilitate (a) the compilation of the program into efficient
code
for any of the machines in the suite, (b) the decomposition
of
tasks into homogeneous subtasks, and (c) the use
of
machine-dependent subroutine libraries. Along with programming
languages, there is a need for
debugging and performance tuning tools
that can be used across an
HC suite of machines.
This involves
research in the areas of distributed
programming environments and
visualization tools. Operating system support for HC is needed.
This
includes techniques applicable at both the local machine
level and at
the system-wide network level. Ideally, information about the current
loading and status
of the machines in the HC suite and the network
that is linking
these machines should be incorporated into the
matching and
scheduling decisions. Many questions arise here: what
information to include in the status (e.g., faulty or not, pending
tasks), how to measure
current loading, how to effectively incorporate
current loading
information into matching and scheduling decisions,
how to
communicate and structure the loading and status information
in
the other machines, how often to update this information,
and how
to estimate task/transfer completion time? There is much ongoing
research in the area of inter-machine data
transport. This research
includes the hardware support required,
the software protocols
required, designing the network topology,
computing the minimum time
path between two machines,
and devising rerouting schemes in case of
faults or heavy loads.
Related to this is the data reformatting
problem, involving
issues such as data type storage formats and sizes,
byte ordering
within data types, and machines' network-interface
buffer sizes. Another area of research pertains to
methods for dynamic
task migration between different parallel
machines at execution
time.
This could be used to rebalance loads or if a fault
occurs.
Current research in this area involves
how to move an
executing task between different machines
and determining
how and
when to use dynamic task migration for load balancing. Lastly, there are
policy issues that require system
support. These include what to do
with priority tasks, what to do with priority users, what to do with
interactive tasks, and security. Next: Conclusions Up: Heterogeneous
Computing: One Previous: A Conceptual Model Chance Reschke
Tue Aug 15
08:59:12 EDT 1995
MD5{32}: 0cdbe57a154f3eccc169035a991f156e
File-Size{4}: 6080
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{14}: Open Problems
}
@FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node16.html
Update-Time{9}: 827948634
title{8}: Summary
keywords{41}: aug
chance
edt
known
reschke
summary
tue
images{193}: /usr/local/src/latex2html/icons/next_motif.gif
/usr/local/src/latex2html/icons/up_motif.gif
/usr/local/src/latex2html/icons/previous_motif.gif
/usr/local/src/latex2html/icons/contents_motif.gif
head{538}: Next: Workshop Organization Up: Issues for Petaflops Previous: Important
Issues and Summary Clearly, the challenges to developing a petaflops
computer are formidable. And, that applies to the known challenges. The
unknown will be confronted when they emerge. They may---and probably
will---fall into most of the distinct areas listed earlier. Perhaps the
most important point to be gleaned from this discussion is that working
experts think that petaflops computing within 20 years is feasible.
Chance Reschke
Tue Aug 15 08:59:12 EDT 1995
MD5{32}: d02667b7310f2336523fb43a96b5d5e7
File-Size{4}: 1773
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{8}: Summary
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw/npss.html
Update-Time{9}: 827948648
url-references{109}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw.html
mailto:lpicha@cesdis.gsfc.nasa.gov
title{40}: NPSS MOD1 Engine Simulation with Zooming
keywords{46}: contents
curator
larry
picha
return
table
the
images{19}: graphics/return.gif
headings{73}: NPSS MOD1 Engine Simulation with Zooming
Return
to the Table of Contents
body{2489}:
Objective: The Numerical Propulsion System Simulation is a program
focused on reducing the cost and time in developing aeropropulsion
engines. The NPSS program intends to build a simulation environment
that allows for the arbitrary construction of engine configurations for
analysis and design. Furthermore, the software environment will permit
the choice of analysis techniques, analysis complexity, languages and
the ability to access and manage data from various sources.
Approach:
As a first step, NPSS will provide a prototype object based 1D Steady
State, Transient thermodynamic aircraft engine simulator based on the
public domain DIGTEM engine simulation. The prototype built
demonstrated the usefulness of object oriented modeling for dynamic
engine simulations, for distributed applications and for supporting
numerical zooming.
Accomplishment: In FY93, the NPSS Simulation
environment was extended to include the ability to Numerically Zoom
between levels of fidelity of codes. The NPSS MOD0 release provided the
correct software platform that enables engine component codes to be
distributed across computing architectures. The NPSS MOD1 release with
accompanying documentation was made available to industry in February
94. Specifically, NPSS MOD1 demonstrated that Numerical Zooming was achievable through the use of an Object Oriented design of the DIGTEM code.
Significance: The NPSS MOD1 employs the object-based model for engine simulations. The object model allows engine components such as a compressor, combustor, turbine, shaft, etc., to be modeled in the
numerical simulation as independent entities that can be replaced with
component models of greater fidelity that execute on differing
computing platforms in a dynamic environment. This capability combined
with the graphical user interface allows an engineer to construct
arbitrary engine configurations with ease.
Status/Plans: The NPSS
engine simulation prototypes have generated interest within the US
Aeropropulsion industry to work with Lewis on defining and building a
US standard for 1D preliminary design codes. In FY94, Lewis and the US
Aeropropulsion Industry began to build a new Object Oriented based 1D
design code that will: 1) Incorporate the NPSS concept of numerical
zooming and; 2) Incorporate the Multi-disciplinary interactions through
Object Oriented Modeling.
Point of Contact: Gregory Follen
NASA
Lewis Research Center
(216) 433-5193
gfollen@lerc.nasa.gov
curator: Larry Picha
MD5{32}: 0f20ac33d7b5b31eb7a2e738e239a200
File-Size{4}: 2951
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{40}: NPSS MOD1 Engine Simulation with Zooming
}
@FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node41.html
Update-Time{9}: 827948637
title{37}: SIA Projections and CPU Architecture
keywords{71}: and
architecture
aug
chance
cpu
edt
figure
projections
reschke
sia
tue
images{417}: /usr/local/src/latex2html/icons/next_motif.gif
/usr/local/src/latex2html/icons/up_motif.gif
/usr/local/src/latex2html/icons/previous_motif.gif
/usr/local/src/latex2html/icons/contents_motif.gif
img12.gif
img13.gif
img14.gif
/usr/local/src/latex2html/icons/next_motif.gif
/usr/local/src/latex2html/icons/up_motif.gif
/usr/local/src/latex2html/icons/previous_motif.gif
/usr/local/src/latex2html/icons/contents_motif.gif
head{3431}: Next: Open Issues Up: Processors-In-Memory (PIM) Chip Previous: Introduction
SIA Projections and CPU Architecture To do this we assume
two different CPU architectures. The first,
based on the EXECUBE
experience, assumes that each CPU is
designed simply, and is optimized
for fixed point computations.
For this we assumed an EXECUBE-like 12K
circuit CPU which
executes an average instruction in about 2.5 clock
cycles. The
second CPU assumes a design optimized for floating point,
but as
with EXECUBE, a simpler (but more efficient in terms
of
FLOPS/silicon) design point is chosen than what is common in
high
end microprocessors today. We assume a 100K circuit CPU
that can
operate on the average at 1 FLOP per clock. The other major assumption we make is that in a mixed
DRAM/logic configuration and at any
projected point in time, we
can smoothly vary the transistor usage on
one chip from 100%
logic (using the maximum projected logic density)
to 100% DRAM
(assuming the maximum projected DRAM density). Thus, we
can look
at different numbers of CPUs on a chip, with different
amounts of memory available to them. The reason for this latter
tradeoff is that during the workshop
it became apparent that the major
economic constraint on
reaching a petaflops system was in the cost of
the memory
system to support it. Based on typical rules of thumb,
a
petaflop would require about a petabyte of memory, which even
with
very dense DRAM, would be in the order of 10,000s of
chips. When this
was realized, the application workgroup at the
workshop came to the
conclusion that there were reasonable petaflops applications for which a much less memory-intensive rule would apply, meaning that perhaps only about 32 terabytes of memory might be needed for some applications. Instead of the typical ``1 byte per FLOP" rule, this translates into roughly a ``0.03 byte/FLOP" rule.
Figure 2 rolls these design assumptions, together with the SIA projections, into a spectrum of potential chip and system configurations assuming an EXECUBE-like largely fixed point CPU macro. (Note that this chart assumes extending the 1992 SIA projections out through 2010 and 2013.)
Figure: PIM Configurations for a PetaOP
Figure 3 does the same for the assumed
floating point CPU macro. The calculations behind the (a) chart in each
figure were performed
at several different year points, and took the
projected logic
density to determine how many CPUs might fit on
different
percentages of a chip. From this, and the projected on
chip
clock speeds, we determined a projected per chip
performance
number. This was plotted against the amount of memory that
could
be placed in the remainder of the chip (the ``knee-shaped"
curves).
Through these curves were then drawn straight lines
that
represent different ratios of storage to performance, to
match
the above discussion.
Figure: PIM Configurations for a Petaflop
The (b) charts in each figure then use the intersections
of
these pairs of curves to determine how many chips would
be needed
to reach a petaflops system, again for different
ratios of memory to
performance. The numbers agree with the
feeling of the Pasadena
workshop, namely that a PIM-based architecture
has the potential to
achieve huge levels of performance with far
fewer chips (and thus
cost) than the other approaches. Next: Open Issues Up: Processors-In-Memory (PIM) Chip Previous: Introduction Chance Reschke
Tue Aug 15 08:59:12 EDT 1995
MD5{32}: 9b40aa5e5d63d577f868e4b7bf3dd03e
File-Size{4}: 5730
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{37}: SIA Projections and CPU Architecture
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/hpcc.nasa.html
Update-Time{9}: 827948598
url-references{670}: sound.bytes/holcomb.aiff
sound.bytes/trans.html
http://cesdis.gsfc.nasa.gov/hpccm/hpcc.classic.html
http://www.nas.nasa.gov/home.html
http://cesdis1.gsfc.nasa.gov:80/Harvest/brokers/cesdis1.gsfc.nasa.gov/query.html
http://www.hpcc.gov/
http://hypatia.gsfc.nasa.gov/NASA_homepage.html
http://www.hq.nasa.gov/
iitf.hp/iitf.html
http://www.arc.nasa.gov/x500.html
http://cesdis.gsfc.nasa.gov/petaflops/peta.html
admin/hot.html
mailto:lpicha@cesdis.gsfc.nasa.gov
http://cesdis.gsfc.nasa.gov/web-stats/overview.html
http://cesdis.gsfc.nasa.gov/people/lpicha/whoiam.html
http://cesdis.gsfc.nasa.gov/
http://www.usra.edu/
http://sdcd.gsfc.nasa.gov
http://sdcd.gsfc.nasa.gov/ESD/
title{26}: NASA HPCC Office Web Page
keywords{535}: accesses
aerodynamic
among
and
are
association
authorizing
authors
available
center
cesdis
code
comments
communications
communities
computing
data
detailed
director
directorate
displayed
division
earth
email
excellence
file
gets
graphically
here
high
holcomb
hpcc
information
introduction
last
lawrence
lee
nas
nasa
number
numerical
office
official
others
page
past
performance
picha
privy
program
questions
raw
research
revised
sciences
served
server
simulation
space
statistics
the
this
transcript
universities
welcome
what
you
your
images{746}: hpcc.graphics/nasa.meatball.gif
hpcc.graphics/hpcc.header.gif
hpcc.graphics/sound.gif
hpcc.graphics/hpcc.star.gif
hpcc.graphics/blue.bullet.gif
hpcc.graphics/blue.bullet.gif
hpcc.graphics/blue.bullet.gif
hpcc.graphics/NAS.gif
hpcc.graphics/blue.bullet.gif
hpcc.graphics/blue.bullet.gif
hpcc.graphics/blue.bullet.gif
hpcc.graphics/blue.bullet.gif
hpcc.graphics/blue.bullet.gif
hpcc.graphics/blue.bullet.gif
hpcc.graphics/search.button.gif
hpcc.graphics/nco.button.gif
hpcc.graphics/nasa.button.gif
hpcc.graphics/hq.button.gif
hpcc.graphics/iitf.button.gif
hpcc.graphics/people.button.gif
hpcc.graphics/peta.button.gif
hpcc.graphics/hpccsmall.gif
hpcc.graphics/mailbutton.gif
hpcc.graphics/new.gif
hpcc.graphics/metric.gif
hpcc.graphics/wavebar.gif
headings{333}: Welcome to the NASA High Performance Computing and Communications
Office
The NASA HPCC Office represents important national computational
capabilities:
The High Performance Computing & Communications (HPCC) Program
The Numerical Aerodynamic
Simulation (NAS) Program
Scientific and Engineering Computing
Announcements
Other Resources:
body{2754}:
(NASA Code RC)
Welcome and introduction by Lee B.
Holcomb , Director of the NASA HPCC Office. (188K) A
transcript of
Lee Holcomb's welcome and introduction is also available.
Extend U.S. technological leadership in high performance
computing and communications
Provide wide dissemination and
application of the technologies
Spur gains in U.S. productivity and
industrial competitiveness
Act as a pathfinder in advanced
large-scale computer system capability
Provide a national
computational capability to NASA, industry, DoD, other Government
Agencies
Provide a strong research tool for Office of Aeronautics
The Office of Aeronautics conducts research and
technology development programs in support of NASA's Aeronautics
Enterprise; this consists of a Headquarters program office and four
field centers. Scientific and engineering computing is a critical
element in Office of Aeronautics' strategy for success; this provides:
Computational modeling of vehicle and component structure,
operation, and flight characteristics
Laboratory support consisting
of experimentation control and observation, data collection and
storage, analysis, and distribution
Logistical support for
researchers that includes on-line access to published materials as well
as raw data from analytic and experimental work, and improved
communications capabilities ranging from electronic messaging to
video-conferencing with collaborative visualization tools.
NASA Awards $7.1 Million For New Internet Education Projects
NASA HPCC/ESS Cooperative Agreement Notice (CAN): The ESS Project will obtain one or more major next-generation scalable parallel testbeds and award new Grand Challenge cooperative agreements through this multimillion-dollar CAN.
Workshop on Remote Exploration and Experiment (REE) Program: August 21-23, 1995, Jet Propulsion Laboratory, Doubletree Hotel, Pasadena, California. [past]
N E W S : A Calendar of Information and
Events Relating to the NASA HPCC Program
email your questions or
comments
File Server Statistics .
(Here you are privy to
detailed information on the number of accesses this page gets, among
others at CESDIS, and what communities are served. The raw data and the
data graphically displayed are available.)
Authorizing NASA
Official: Lee B. Holcomb, Director, NASA HPCC Office
Authors: Lawrence
Picha (lpicha@usra.edu) & Michele O'Connell (michele@usra.edu), Center
of Excellence in Space Data and Information Sciences ,
Universities
Space Research
Association ,
NASA Goddard Space Flight Center,
Greenbelt, Maryland.
Last revised: 22 NOV 95 (l.picha) A service of
the Space Data and Computing Division , Earth Sciences Directorate ,
NASA Goddard Space Flight Center.
MD5{32}: a225c9c495a39459433705edcc07a710
File-Size{4}: 6549
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{26}: NASA HPCC Office Web Page
}
@FILE { http://cesdis.gsfc.nasa.gov/linux/misc/10mbps.html
Update-Time{9}: 827948620
url-references{172}: /linux/beowulf/beowulf.html
http://www.cirrus.com/prodtech/ov.comm/cs8900.html
http://www.amd.com/html/products/ind/overview/18051c.html
#top
/pub/people/becker/whoiam.html
title{31}: 10mbps Ethernet Technology Page
keywords{111}: amd
author
becker
beowulf
cesdis
cluster
ethernet
family
gov
gsfc
linux
mbps
nasa
pcnet
project
technology
top
headings{69}: 10mbps Ethernet Technology
Summary:
Links to Ethernet controllers.
body{260}:
Descriptions, implementation technologies, software support,
and
references related to Ethernet.
This document was written in
support of the
Beowulf Linux Cluster Project .
CS8900 : ISA bus
Ethernet network interface controller
AMD PCnet family .
Top
address{35}: Author:
becker@cesdis.gsfc.nasa.gov
MD5{32}: e5bf81396c5398ff65b7ce981d70c033
File-Size{3}: 927
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{31}: 10mbps Ethernet Technology Page
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.toc.html
Update-Time{9}: 827948661
url-references{1317}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.darwin.html
http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.himap.html
http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.nra.html
http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.overflow.html
http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.rans.mp.html
http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.aims.html
http://www.nas.nasa.gov/NAS/Tools/Projects/AIMS/
http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.nas.tr.vis.html
cas.95.ar.p2d2.html
app.software.html
app.software.html
app.software.html
app.software.html
app.software.html
app.software.html
app.software.html
app.software.html
app.software.html
app.software.html
app.software.html
app.software.html
app.software.html
app.software.html
app.software.html
app.software.html
app.software.html
app.software.html
app.software.html
http://hpccp-www.larc.nasa.gov/~fido/homepage.html
cas.95.ar.npss.html
http://cesdis.gsfc.nasa.gov/hpccm/cas.hp/cas.html
http://cesdis.gsfc.nasa.gov/people/lpicha/whoiam.html
http://cesdis.gsfc.nasa.gov/
http://www.usra.edu/
http://sdcd.gsfc.nasa.gov
http://sdcd.gsfc.nasa.gov/ESD/
keywords{1342}: acoustic
adifor
aerodynamic
aeroelastic
aeronautics
affordable
aims
aircraft
ames
analysis
and
announcements
applications
array
assessment
association
authors
automatic
based
block
calculation
calculator
cas
cds
center
cesdis
cfd
clusters
coarse
code
complex
computations
computer
computers
computing
cooperative
coordinate
coupling
cycle
darwin
data
debugger
deck
derivative
derivatives
design
differentiation
direct
directorate
disciplinary
distributed
division
earth
enhancements
environments
evaluation
excellence
extension
fem
fido
flow
for
fortran
framework
from
gov
grained
grids
gsfc
heterogeneous
high
himap
hpcc
hpccm
hpccp
html
http
ibm
identified
information
integration
interactive
interdisciplinary
krylov
large
last
launcher
lawrence
measurement
memory
method
methods
models
module
multi
multidisciplinary
multithreaded
nas
nasa
navier
newton
ntv
numerical
optimization
overflow
parallel
parallelism
parallelization
partition
performance
phased
picha
portable
potential
process
propulsion
rans
rendering
requirements
research
revised
robust
schwarz
sciences
sensitivity
sharing
simulation
simulations
solver
space
sponsored
state
status
steady
stokes
structural
support
supported
system
systems
task
the
trace
tuning
universities
unsteady
unstructured
using
version
visualization
visualizer
volume
with
workstation
worktations
head{3151}: background="graphics/casback.gif">
The CAS 1995 Annual Report
The NASA High Performance Computing and Communications Program Presents
The Computational Aerosciences (CAS) Project 1995 Annual Report
Table of Contents
DARWIN/HPCC Phased-Array Acoustic Measurement and Visualization HiMAP Based Aeroelastic Computations on IBM SP2 Computer Status of Ames
Sponsored HPCCP NASA Research Announcements A Supported Version of
OVERFLOW for Parallel Computers and Workstation Clusters
Multi-partition Parallel Flow Solver Module RANS-MP Tuning Parallel
Applications with AIMS
See Also: AIMS The NTV - The NAS Trace
Visualizer The Portable Parallel/Distributed Debugger (p2d2) The
Cooperative Data Sharing (CDS) System Parallel Calculation of
Sensitivity Derivatives for Aircraft Design Using Automatic
Differentiation Multi-partition Parallel Flow Solver Module RANS-MP
ADIFOR 2.0 Automatic Differentiation for Derivative-Based
Multidisciplinary Design Optimization Requirements for an Aeronautics
Affordable Systems Optimization Process Interactive Visualization of
Unsteady Flow Multithreaded System for Distributed Memory Environments
Enhancements to the Coordinate and Sensitivity Calculator for
Multi-disciplinary Design Optimization Robust Method for Coupling CFD
and FEM Analysis Identified from Assessment of Potential Methods
Aeroelastic Design using Distributed Heterogeneous Computers Evaluation
and Extension of High Performance Fortran ADIFOR 2.0 Automatic
Differentiation for Derivative-Based Multidisciplinary Design
Optimization Newton-Krylov-Schwarz: A Parallel Solver for Steady
Aerodynamic Applications Parallel Volume Visualization on Unstructured
Grids Support for Integration of Task and Data Parallelism Structural
Analysis of Large Complex Models on IBM Workstations Coarse-Grained
Parallelization of a Multi-Block Navier-Stokes Code High Performance
Parallel Rendering on the IBM SP2 Direct Navier-Stokes Simulations on
the IBM SP Parallel System FIDO: Framework for
Interdisciplinary
Design Optimization Numerical Propulsion System Simulation Steady State
Cycle Deck Launcher If you are interested in additional information on
this project or related activities you may access the CAS Home Page on
the World Wide Web at:
http://cesdis.gsfc.nasa.gov/hpccm/cas.hp/cas.html or contact the
following Authorizing NASA officials:
William Feiereisen
Project
Manager, Computational Aerosciences Project
High Performance
Computing and Communications Office
NASA - Ames Research Center,
Moffett Field, California 94035
Paul Hunter
Program Manager, High
Performance Computing and Communications Program
High Performance
Computing and Communications Office
NASA - Headquarters, Washington,
DC 20546
(202) 358-4618
p_hunter@aeromail.hq.nasa.gov
Authors:
Lawrence Picha (lpicha@usra.edu) & Michele O'Connell
(michele@usra.edu), Center of Excellence in Space Data and Information
Sciences ,
Universities Space Research
Association ,
NASA Goddard
Space Flight Center, Greenbelt, Maryland.
Last revised: 18 OCT 95 (m.oconnell). A service of the Space Data and Computing Division, Earth
Sciences Directorate , NASA Goddard Space Flight Center.
MD5{32}: d1d03af63d146ba83a9604965b4f897c
File-Size{4}: 5706
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
}
@FILE { http://cesdis.gsfc.nasa.gov/linux/drivers/tulip.patch
Update-Time{9}: 827948898
MD5{32}: 173d3420977c322faaeb8f5e019af3eb
File-Size{4}: 3376
Type{5}: Patch
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
}
@FILE { http://cesdis.gsfc.nasa.gov/pub/linux/html/Ethernet-HOWTO-9.html
Update-Time{9}: 827948630
url-references{420}: Ethernet-HOWTO.html#toc9
http://cesdis.gsfc.nasa.gov/pub/linux/linux.html
Ethernet-HOWTO-10.html#lilo
http://cesdis.gsfc.nasa.gov/linux/misc/multicard.html
Ethernet-HOWTO-7.html#probe
Ethernet-HOWTO-7.html#data-xfer
Ethernet-HOWTO-3.html#boca-pci
Ethernet-HOWTO-3.html#pcnet-32
Ethernet-HOWTO-5.html#utp
Ethernet-HOWTO-10.html
Ethernet-HOWTO-8.html
Ethernet-HOWTO.html#toc9
Ethernet-HOWTO.html#toc
Ethernet-HOWTO.html
#0
title{26}: Frequently Asked Questions
keywords{445}: addresses
alpha
always
amd
and
any
anything
arguments
asked
beginning
big
boca
card
cards
chapter
clones
com
contents
don
drivers
ethercards
ethernet
faqs
frequently
getting
hewlett
home
linux
machine
more
multiple
net
never
next
not
number
one
packard
page
pair
passing
pause
pci
pcnet
per
previous
probed
problem
problems
programmed
questions
really
reason
section
solution
specific
surfing
table
than
them
this
top
twisted
use
using
vlb
with
headings{356}: 9
9.1
9.2
9.3
9.4
9.5
9.6
9.7 FAQs Not Specific to Any Card.
Token Ring
32 Bit / VLB / PCI Ethernet Cards
FDDI
Linking 10BaseT without a Hub
SIOCSFFLAGS: Try again
Link UNSPEC and HW-addr of 00:00:00:00:00:00
Huge Number of RX and TX Errors
Entries in for Ethercards
Linux and ``trailers''
Non-existent Apricot NIC is detected
body{20862}: Frequently Asked Questions Contents of this section
Here are
some of the more frequently asked questions about using
Linux with an
Ethernet connection. Some of the more specific
questions are sorted on
a `per manufacturer basis'.
However, since this
document is basically
`old' by the time you get it, any `new' problems
will not appear here
instantly. For these, I suggest that you make
efficient use of your
newsreader. For example, nn users would type
to get all the news
articles in your subscribed list that have
`3c' in the subject. (ie.
3com, 3c509, 3c503, etc.)
The moral: Read the man page for your
newsreader.
Alpha Drivers -- Getting and Using them
I heard
that there is an alpha driver available for my card.
Where can I get
it?
The newest of the `new' drivers can be found on Donald's new
ftp
site: in the
area. Things
change here quite frequently, so just
look around for it.
There is still all the stuff on the old ftp site
in , but this is not being actively maintained,
and hence will be of
limited value to most people.
As of recent v1.1 kernels, the
`useable' alpha drivers have been
included in the standard kernel
source tree. When running
you will be asked if you want to be
offered
ALPHA test drivers.
Now, if it really is an alpha, or
pre-alpha driver, then please
treat it as such. In other words, don't
complain because you
can't figure out what to do with it. If you
can't figure out
how to install it, then you probably shouldn't be
testing it.
Also, if it brings your machine down, don't complain.
Instead,
send us a well documented bug report, or even better, a
patch!
People reading this while net-surfing may want to check out:
Don's Linux Home Page
for the latest dirt on what is new and
upcoming.
Using More than one Ethernet Card per Machine
What needs to be done so that Linux can run two ethernet cards?
The
hooks for multiple ethercards are all there.
However, note that only
one ethercard is
auto-probed for by default. This avoids a lot of
possible
boot time hangs caused by probing sensitive cards.
There
are two ways that you can enable auto-probing for
the second (and
third, and...) card. The easiest
method is to pass boot-time arguments
to the kernel,
which is usually done by LILO. Probing for the
second
card can be achieved by using a boot-time argument
as simple as . In
this
case and will be assigned in the order
that the cards are found
at boot. Say if you want
the card at to be and
the card at to be then
you could use
The command accepts more than the IRQ + i/o
+
name shown above. Please have a look at
Passing Ethernet Arguments...
for the full syntax, card specific parameters, and LILO tips.
These boot time arguments can be made permanent so that you
don't
have to re-enter them every time. See the LILO
configuration option
`' in the LILO manual.
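For illustration only (the IRQ and I/O values here are hypothetical; see Passing Ethernet Arguments... for the exact syntax and the card-specific parameters), boot-time lines in the same style as the other LILO examples in this document would look like:

  LILO: linux ether=0,0,eth1
  LILO: linux ether=10,0x300,eth0 ether=11,0x340,eth1

The first form just enables probing for a second card; the second form pins each card to a particular IRQ and I/O base.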
The second way (not recommended) is to edit
the file
and replace the entry for the
i/o address that you want
probed with a zero. This will
enable autoprobing for that device, be
it
and so on. If you really need more than four ethernet
cards in
one machine, then you can clone the entry
and change to .
Note that
if you are intending to use Linux as a gateway between
two networks,
you will have to re-compile a kernel with IP
forwarding enabled.
Usually using an old AT286 with something
like the `kbridge' software
is a better solution.
If you are viewing this while net-surfing , you
may wish
to look at a mini-howto Donald has on his WWW site. Check
out
Multiple Ethercards
.
Problems with NE1000 / NE2000
cards (and clones)
Problem:
NE*000 ethercard at doesn't get
detected anymore.
Reason:
Recent kernels ( > 1.1.7X) have more
sanity checks with respect
to overlapping i/o regions. Your NE2000
card is wide in
i/o space, which makes it hit the parallel port at
.
Other devices that could be there are the second floppy
controller
(if equipped) at and the secondary
IDE controller at .
If the port(s)
are already registered by another driver, the
kernel will not let the
probe happen.
Solution:
Either move your card to an address like
or compile without parallel printer support.
Problem:
Network
`goes away' every time I print something (NE2000)
Reason:
Same
problem as above, but you have an older kernel that
doesn't check for
overlapping i/o regions. Use the
same fix as above, and get a new
kernel while you are at it.
Problem:
NE*000 ethercard probe at
0xNNN: 00 00 C5 ... not found.
(invalid signature yy zz)
Reason:
First off, do you have a NE1000 or NE2000 card at the addr.
0xNNN?
And if so, does the hardware address reported look like a
valid
one? If so, then you have a poor NE*000 clone. All NE*000
clones
are supposed to have the value in bytes 14 and 15 of the
SA
PROM on the card. Yours doesn't -- it has `yy zz' instead.
Solution:
The driver (/usr/src/linux/drivers/net/ne.c) has a "Hall of Shame"
list at about line 42. This list is used to detect poor clones.
For
example, the DFI cards use `DFI' in the first 3 bytes of the
prom,
instead of using 0x57 in bytes 14 and 15, like they are
supposed to.
You can determine what the first 3 bytes of your card PROM are
by
adding a line like:
printk("PROM prefix: %#2x %#2x
%#2x\
",SA_prom[0],SA_prom[1],SA_prom[2]);
into the driver, right
after the error message you got above, and
just before the "return
ENXIO" at line 227.
Reboot with this change in place, and after the
detection fails, you
will get the three bytes from the PROM like the
DFI example above.
Then you can add your card to the bad_clone_list[]
at about
line 43. Say the above line printed out:
after you
rebooted. And say that the 8 bit version of your card was
called the
"FOO-1k" and the 16 bit version the "FOO-2k". Then you would
add the
following line to the bad_clone_list[]:
Note that the 2 name
strings you add can be anything -- they are just
printed at boot, and
not matched against anything on the card.
You can also take out the
"printk()" that you added above, if you want.
It shouldn't hit that
line anymore anyway. Then recompile once more,
and your card should
be detected.
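For concreteness, the added entry has roughly the following shape (a sketch only: the struct layout differs between kernel versions, and the FOO names and PROM bytes are the hypothetical values from the example above):

/* Sketch of the clone table in drivers/net/ne.c; the FOO entry is hypothetical. */
static struct { const char *name8, *name16; unsigned char SAprefix[4]; }
bad_clone_list[] = {
    {"DFI",    "DFI",    {'D', 'F', 'I'}},     /* DFI clones: 'DFI' in the first 3 PROM bytes */
    {"FOO-1k", "FOO-2k", {0x00, 0x12, 0x34}},  /* hypothetical: the 3 bytes your printk() reported */
    {0,}
};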
Problem:
Errors like
Is the chip a real NatSemi
8390? (DP8390, DP83901, DP83902 or DP83905)?
If not, some clone chips
don't correctly implement the transfer
verification register. MS-DOS
drivers never do error checking,
so it doesn't matter to them.
Are
most of the messages off by a factor of 2?
If so: Are you using the
NE2000 in a 16 bit slot?
Is it jumpered to use only 8 bit transfers?
The Linux driver expects a NE2000 to be in a 16 bit slot. A NE1000
can
be in either size slot. This problem can also occur with some
clones,
notably D-Link 16 bit cards, that don't have the correct ID
bytes
in the station address PROM.
Are you running the bus faster
than 8Mhz?
If you can change the speed (faster or slower), see if
that
makes a difference. Most NE2000 clones will run at 16MHz,
but
some may not. Changing speed can also mask a noisy bus.
What
other devices are on the bus?
If moving the devices around changes the
reliability, then you
have a bus noise problem -- just what that error
message was
designed to detect. Congratulations, you've probably found
the
source of other problems as well.
Problem:
The machine hangs
during boot right after the `8390...' or
`WD....' message. Removing
the NE2000 fixes the problem.
Solution:
Change your NE2000 base
address to . Alternatively, you
can use the device registrar
implemented in 0.99pl13 and later
kernels.
Reason:
Your NE2000
clone isn't a good enough clone. An active
NE2000 is a bottomless pit
that will trap any driver
autoprobing in its space. The other
ethercard drivers take
great pain to reset the NE2000 so that it's
safe, but some
clones cannot be reset. Clone chips to watch out
for:
Winbond 83C901. Changing the NE2000 to a less-popular
address
will move it out of the way of other autoprobes,
allowing your machine
to boot.
Problem:
The machine hangs during the SCSI probe at
boot.
Reason:
It's the same problem as above, change
the
ethercard's address, or use the device registrar.
Problem:
The
machine hangs during the soundcard probe at boot.
Reason:
No,
that's really during the silent SCSI probe, and it's
the same problem
as above.
Problem:
Errors like
This bug came from timer-based
packet retransmissions. If you got a
timer tick _during_ a ethercard
RX interrupt, and timer tick tried to
retransmit a timed-out packet,
you could get a conflict. Because of
the design of the NE2000 you
would have the machine hang (exactly the
same the NE2000-clone boot
hangs).
Early versions of the driver disabled interrupts for a long
time,
and didn't have this problem. Later versions are fixed. (ie.
kernels
after 0.99p9 should be OK.)
Problem:
NE2000 not detected
at boot - no boot messages at all
Donald writes:
`A few people have
reported a problem with detecting the Accton NE2000.
This problem
occurs only at boot-time, and the card is later detected
at run-time
by the identical code my (alpha-test) ne2k diagnostic
program. Accton
has been very responsive, but I still haven't tracked
down what is
going on. I've been unable to reproduce this problem
with the Accton
cards we purchased. If you are having this problem,
please send me an
immediate bug report. For that matter, if you have
an Accton card send
me a success report, including the type of the
motherboard. I'm
especially interested in finding out if this problem
moves with the
particular ethercard, or stays with the motherboard.'
Here are some
things to try, as they have fixed it for some people:
Change the bus
speed, or just move the card to a different slot. Change the `I/O
recovery time' parameter in the BIOS
chipset configuration.
Problems with WD80*3 cards
Problem:
A WD80*3 is falsely
detected. Removing the sound or
MIDI card eliminates the `detected'
message.
Reason:
Some MIDI ports happen to produce the same
checksum as a
WD ethercard.
Solution:
Update your ethercard
driver: new versions include an
additional sanity check. If it is the
midi chip at 0x388
that is getting detected as a WD living at 0x380,
then
you could also use: LILO: linux reserve=0x380,8
Problem:
You get messages such as the following with your 80*3:
Reason:
There is a shared memory problem.
Solution:
If the problem is
sporadic, you have hardware problems.
Typical problems that are easy
to fix are board conflicts,
having cache or `shadow ROM' enabled for
that region, or
running your bus faster than 8Mhz. There are also
a
surprising number of memory failures on ethernet cards,
so run a
diagnostic program if you have one for your
ethercard.
If the problem is continual, and you have to reboot
to fix the problem,
record the boot-time probe message
and mail it to
becker@cesdis.gsfc.nasa.gov - Take
particular note of the shared
memory location.
Problem:
WD80*3 will not get detected at boot.
Reason:
Earlier versions of the Mitsumi CD-ROM (mcd) driver probe at 0x300, and that probe will succeed if just about anything is at that I/O location. This is bad news, and the probe needs to be a bit more robust.
Once
another driver registers that it `owns' an I/O
location, other drivers
(incl. the wd80x3) are `locked
out' and can not probe that addr for a
card.
Solution:
Recompile a new kernel without any excess drivers
that
you aren't using, including the above mcd driver.
Or try moving
your ethercard to a new I/O addr. Valid
I/O addr. for all the cards
are listed in
Probed Addresses
You can also point the mcd driver
off in another direction
by a boot-time parameter (via LILO) such as:
Problem:
Old wd8003 and/or jumper-settable wd8013 always get the
IRQ wrong.
Reason:
The old wd8003 cards and jumper-settable wd8013
clones don't
have the EEPROM that the driver can read the IRQ setting
from.
If the driver can't read the IRQ, then it tries to auto-IRQ
to
find out what it is. And if auto-IRQ returns zero, then
the driver
just assigns IRQ 5 for an 8 bit card or IRQ 10 for
a 16 bit card.
Solution:
Avoid the auto-IRQ code, and tell the kernel what the
IRQ
that you have jumpered the card to is via a boot time
argument.
For example, if you are using IRQ 9, using the
following
should work.
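For example (the I/O base here is hypothetical; substitute whatever your card is actually jumpered to), a boot line in the same style as the reserve= example above would be:

  LILO: linux ether=9,0x280,eth0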
Problems with 3Com cards
Problem:
The 3c503 picks IRQ N, but this is needed for some
other
device which needs IRQ N. (eg. CD ROM driver, modem, etc.)
Can this be
fixed without compiling this into the kernel?
Solution:
The 3c503
driver probes for a free IRQ line in the order
{5, 9/2, 3, 4}, and it
should pick a line which isn't being
used. Very old drivers used to
pick the IRQ line
at boot-time, and the current driver (0.99pl12 and
newer) chooses when
the card is open()/ifconfig'ed.
Alternately, you
can fix the IRQ at boot by passing
parameters via LILO. The following
selects IRQ9, base
location 0x300, , and if_port #1
(the
external transceiver).
The following selects IRQ3, probes
for the base location,
, and the default if_port #0
(the internal
transceiver)
Problem:
3c503: Configured
interrupt number XX is out of range.
Reason:
Whoever built your
kernel fixed the ethercard IRQ at XX.
The above is truly evil, and
worse than that, it is
not necessary. The 3c503 will autoIRQ when it
gets
ifconfig'ed, and pick one of IRQ{5, 2/9, 3, 4}.
Solution:
Use
LILO as described above, or rebuild the kernel, enabling
autoIRQ by
not specifying the IRQ line.
Problem:
The supplied 3c503 drivers
don't use the AUI (thicknet) port.
How does one choose it over the
default thinnet port?
Solution:
The 3c503 AUI port can be selected
at boot-time with 0.99pl12
and later. The selection is overloaded onto
the low bit of
the currently-unused dev->rmem_start variable, so a
boot-time
parameter of:
should work. A boot line to force IRQ
5, port base 0x300,
and use an external transceiver is:
Also
note that kernel revisions 1.00 to 1.03 had an
interesting `feature'.
They would switch to the AUI port
when the internal transceiver
failed. This is a problem,
as it will never switch back if for example
you
momentarily disconnect the cable. Kernel versions 1.04
and newer
only switch if the very first Tx attempt fails.
Problems with
Hewlett Packard Cards
Problem:
HP Vectra using built in AMD
LANCE chip gets IRQ and DMA wrong.
Solution:
The HP Vectra uses a
different implementation to the
standard HP-J2405A. The `lance.c'
driver used to
always use the value in the setup register of an HP
Lance
implementation. In the Vectra case it's reading an
invalid
0xff value. Kernel versions newer than about 1.1.50
now
handle the Vectra in an appropriate fashion.
Problem:
HP Card is
not detected at boot, even though kernel was
compiled with `HP PCLAN
support'.
Solution:
You probably have a HP PCLAN+ -- note the
`plus'. Support
for the PCLAN+ was added to final versions of 1.1, but
some
of them didn't have the entry in `config.in'. If you have
the
file hp-plus.c in ~/linux/drivers/net/ but no entry
in config.in, then
add the following line under the `HP
PCLAN support' line:
bool 'HP PCLAN Plus support' CONFIG_HPLAN_PLUS n
Kernels up to 1.1.54 are
missing the line in `config.in' still.
Do a `make mrproper;make
config;make dep;make zlilo' and you
should be in business.
Is there token ring support for Linux?
Supporting token ring requires more than just writing a device driver; it also requires writing the source routing routines for token ring. It is the source routing that would be the most time consuming to write.
Alan
Cox adds: `It will require (...) changes to the bottom socket
layer to
support 802.2 and 802.2 based TCP/IP. Don't expect
anything soon.'
Peter De Schrijver has been spending some time on Token Ring
lately,
and has patches that are available for IBM ISA and
MCA token ring
cards. Don't expect miracles here, as he has
just started on this as
of 1.1.42. You can get the patch
from:
What is the
selection for 32 bit ethernet cards?
There aren't many 32 bit
ethercard device drivers because there
aren't that many 32 bit
ethercards.
There aren't many 32 bit ethercards out there because a
10Mbs
network doesn't justify spending the 5x price increment for
the 32 bit interface.
See
Programmed I/O vs. ...
as to
why
having an ethercard on an 8MHz ISA bus is really not
a
bottleneck.
This might change now that AMD has introduced the 32
bit PCnet-VLB
and PCnet-PCI chips. The street price of the Boca
PCnet-VLB board
should be under $70 from a place like CMO
(see
Computer Shopper). See
Boca PCI/VLB
for info on these cards.
See
AMD PCnet-32
for info on the
32 bit versions of the LANCE
/ PCnet-ISA chip.
In the future, the DEC 21040 PCI chip will probably
be supported
as well, but don't hold your breath.
Is there
FDDI support for Linux?
Donald writes: `No, there is no Linux driver
for any FDDI boards.
I come from a place with supercomputers, so an
external
observer might think
FDDI would be high on my list. But
FDDI never delivered end-to-end
throughput that would justify its
cost, and it seems to be a nearly
abandoned technology now that
100base{X,Anynet} seems imminent.
(And yes, I know you can now get
FDDI boards for <$1K. That
seems to be a last-ditch effort to get some
return on the
development investment. Where is the next generation of
FDDI
going to come from?)'
Can I link 10BaseT (RJ45) based
systems together without a hub?
You can link 2 machines easily, but
no more than that, without
extra devices/gizmos. See
Twisted Pair
-- it explains
how to do it. And no, you can't hack together a hub
just by
crossing a few wires and stuff. It's pretty much impossible
to do the collision signal right without duplicating a hub.
I get `SIOCSFFLAGS: Try again' when I run `ifconfig' -- Huh?
Some
other device has taken the IRQ that your ethercard
is trying to use,
and so the ethercard can't use the IRQ.
You don't necessarily need to reboot to resolve this, as
some devices only grab the IRQs when they
need them and
then release them when they are done. Examples are
some
sound cards, serial ports, floppy disk driver, etc. You
can type
to see which interrupts
are presently in use . Those marked with a `+'
are
ones that are not taken on a permanent basis. Most of the
Linux
ethercard drivers only grab the IRQ when they are
opened for use via
`ifconfig'. If you can get the other
device to `let go' of the
required IRQ line, then you
should be able to `Try again' with
ifconfig.
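On Linux systems of this era the usual way to list the interrupts currently in use is the proc interface, for example:

  cat /proc/interrupts

(The exact output format varies between kernel versions.)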
When I run ifconfig with no arguments, it reports
that
LINK is UNSPEC (instead of 10Mbs Ethernet) and it
also says that
my hardware address is all zeros.
This is because people are running
a newer version of
the `ifconfig' program than their kernel version.
This
new version of ifconfig is not able to report these
properties
when used in conjunction with an older kernel. You can
either
upgrade your kernel, `downgrade' ifconfig, or simply
ignore
it. The kernel knows your hardware address, so it
really
doesn't matter if ifconfig can't read it.
When I
run ifconfig with no arguments, it reports that I
have a huge error
count in both rec'd and transmitted
packets. It all seems to work ok
-- What is wrong?
Look again. It says big number PAUSE
PAUSE PAUSE
.
And the same for the column.
Hence the big numbers you are seeing
are the total number of
packets that your machine has rec'd and
transmitted.
If you still find it confusing, try typing
instead.
I have /dev/eth0 as a link to /dev/xxx. Is this right?
Contrary to what you have heard, the files in /dev/* are not
used.
You can delete any and similar entries.
Should I
disable trailers when I `ifconfig' my ethercard?
You can't disable
trailers, and you shouldn't want
to. `Trailers' are a hack to avoid
data copying in the
networking layers. The idea was to use a trivial
fixed-size header of size `H', put the variable-size header
info at
the end of the packet, and allocate all packets `H' bytes
before the start of a page. While it was a
good idea, it turned out to
not work well in practice.
If someone suggests the use of `-trailers',
note that it
is the equivalent of sacrificial goats blood. It won't
do
anything to solve the problem, but if the problem fixes itself
then
someone can claim deep magical knowledge.
I get
and
when I boot, when I don't
have an ``Apricot''. And then the card I do
have isn't detected.
The Apricot driver uses a simple checksum to
detect if an
Apricot is present, which mistakenly thinks that almost
anything
is an Apricot NIC. It really should look at the vendor
prefix
instead. Your choices are to move your card off of
(the only
place the Apricot driver probes), or better yet,
re-compile a kernel
without the Apricot driver.
Next Chapter, Previous Chapter
Table of contents of this chapter ,
General table of contents
Top
of the document,
Beginning of this Chapter
MD5{32}: f1ea7cfeadde32d6161c770937d99fbb
File-Size{5}: 25835
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{26}: Frequently Asked Questions
}
@FILE { http://cesdis.gsfc.nasa.gov/%7ecreschke/peta/report/node21.html
Update-Time{9}: 827948635
title{19}: Workshop Attendees
keywords{46}: attendees
aug
chance
edt
reschke
tue
workshop
images{387}: /usr/local/src/latex2html/icons/next_motif.gif
/usr/local/src/latex2html/icons/up_motif.gif
/usr/local/src/latex2html/icons/previous_motif.gif
/usr/local/src/latex2html/icons/contents_motif.gif
/usr/local/src/latex2html/icons/next_motif.gif
/usr/local/src/latex2html/icons/up_motif.gif
/usr/local/src/latex2html/icons/previous_motif.gif
/usr/local/src/latex2html/icons/contents_motif.gif
head{4171}: Next: Overview of Presentations Up: Workshop Organization Previous: Workshop Presentations
Workshop Attendees The workshop attendees and
their affiliations are shown below:Maurice Aburdene (Bucknell
University)Robin Alford (CESDIS)Von Backenstose (Department of
Commerce)David Bader (University of Maryland)George Ball (University of
Arizona)F. D. Bedard (National Security Agency)George Bell (Stanford
University)Simon Berkovich (George Washington University)Mike Berry
(Department of Defense/USAF)Bruce Black (Cray Research Inc.)Andrew
Chien (University of Illinois)Fabien Coelho (École des Mines)Jarrett
Cohen (NASA Goddard Space Flight Center)John Conery (University of
Oregon)Bob Cox (Cray Computer Corporation)David Crawford (Electronic
Trend)Dave Curkendall (Jet Propulsion Laboratory)Anil Deane (George
Mason University)David DiNucci (Computer Science Corporation)John
Dorband (NASA Goddard Space Flight Center)Patrick Dowd (State
University New York)Duncan Elliott (University of Toronto)Walter Ermler
(Department of Energy)Hassan Fallah-Adl (University of Maryland)Robert
Ferraro (Jet Propulsion Laboratory)Charles Fiduccia (Supercomputing
Research Center)Jim Fischer (NASA Goddard Space Flight Center)Ian
Foster (Argonne National Laboratory)Bruce Fryxell (George Mason
University)Eugene Gavrilov (Los Alamos National Laboratory)Norman Glick
(National Security Agency)Peter Gulko (Rebus Technologies)Yang Han
(George Washington University)Jim Harris (NASA HQ, Office of Mission to
Planet Earth)R. Michael Hord (ERIM)Fred Johnson (National Institute of
Standards and Technology)Kamal Khouri (Bucknell University)David Kilman
(Los Alamos National Laboratory)Steve Knowles (Naval Space
Command)Peter Kogge (Notre Dame University)John Korah (NASA,
EOSDIS)Joydip Kundu (University of Oregon)H. T. Kung (Harvard
University)George Lake (University of Washington)William Leinsberger
(Computer Devices International)Paul Lukowicz (University at
Karlsruhe)Lou Lome (Ballistic Missile Defense Organization)Serge
Lubenec (George Mason University)Rick Lyon (Hughes STX)Jacob Maizel
(National Cancer Institute)Yossi Matias (AT&T Bell Laboratories)William
Mattus (Villanova University)Thomas McCormick III (National Security
Agency)Al Meilus (George Washington University)A. Ray Miller (National
Security Agency)Jose Milovich (Lawrence Livermore National
Laboratory)Samin Mohammed (George Mason University)Reagan Moore (San
Diego Supercomputing Center)Z. George Mou (Brandeis University)Samiu
Muhammed (George Mason University)Chrisochoides Nikos (Syracuse
University)Michele O'Connell (CESDIS)Kevin Olson (George Mason
University)Behrooz Parhami (University of California)Jeff Pedelty (NASA
Goddard Space Flight Center)Ivars Peterson (Science News)Larry Picha
(CESDIS)Thierry Porcher (CEA)David Probst (Concordia
University)Chunming Qiao (State University New York)Donna Quammen
(George Mason University)Craig Reese (Supercomputing Research Center)S.
Repdauay (CPP)Michael Rilee (Cornell University)Allen Robinson (Sandia
National Laboratory)Subhash Saim (NASA Ames Research Center)Subhash
Saini (Computer Sciences Corporation)Ray Sakardi (National Security
Agency)David Schaefer (George Mason University)Judith Schlesinger
(Supercomputing Research Center)Vasili Semenov (State University New
York)Bruce Shapiro (National Cancer Institute)H. J. Siegel (Purdue
University)Margaret Simmons (Los Alamos National Laboratory)Burton
Smith (Tera Computer)Paul H. Smith (NASA HPCC Office)Matteo
Sonza-Reorda (Politecnico Di Torino)Thomas Sterling (CESDIS)Katja
Stokley (George Mason University)Valerie Taylor (Northwestern
University)John Thorp (Cray Research Inc.)Joe Vaughn (Computing Devices
International)Chris Walter (WW Technology Group)Pearl Wang (George
Mason University)Nancy Welker (National Security Agency)Leonard
Wisniewski (Dartmouth College)Paul Woodward (University of
Minnesota)Bill Wren (Honeywell)Richard Yentis (George Washington
University)Steve Zalesak (NASA Goddard Space Flight Center)Bernard
Zeigler (University of Arizona) Next: Overview of Presentations Up: Workshop Organization Previous: Workshop Presentations Chance Reschke
Tue Aug 15 08:59:12 EDT 1995
MD5{32}: 4fa293dab8e6ea56485cadadd3705eab
File-Size{4}: 6633
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{19}: Workshop Attendees
}
@FILE { http://cesdis.gsfc.nasa.gov/admin/adl96/adlcall.html
Update-Time{9}: 827948598
url-references{128}: http://www.gsfc.nasa.gov/GSFC_homepage.html
http://www.nlm.nih.gov
http:// www.ieee.org
http:// lcweb.loc.gov/homepage/lchp.html
title{23}: ADL '96 Call for Papers
keywords{136}: and
call
center
computer
congress
flight
for
goddard
ieee
library
may
medicine
nasa
national
participation
society
space
the
washington
images{77}: http://cesdis.gsfc.nasa.gov/hpccm/hpcc.graphics/nasa.meatball.gif
nlmlogo.gif
headings{279}: ADL '96 Forum
Call for Participation
Forum on Research and Technology
Advances in Digital Libraries
Sponsored by: NASA Goddard Space Flight Center; The National Library of Medicine; IEEE Computer Society; and The Library of Congress
In Cooperation with:
Corporate Support:
body{542}:
May 13 - 15, 1996
Library of Congress
Washington, D. C.
Brown University, Columbia University, Cornell University, George Washington University, National Institute of Standards and Technology, Rutgers-Center for Information Management, Integration & Connectivity, University of Milano, The University of Maryland-Baltimore County and The University of Texas at Austin.
AT&T, Bellcore, Bell Atlantic*, Comsat, Cray Research*, GTE*, Hughes
Networks Systems*, IBM Corporation, Lockheed-Martin Corp., MCI, Sony*,
Sun Microsystems*
MD5{32}: 881e5cdc23dd0b8ef8ef7c71aea0eaa5
File-Size{4}: 4085
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{23}: ADL '96 Call for Papers
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw/paracalc.html
Update-Time{9}: 827948647
url-references{109}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw.html
mailto:lpicha@cesdis.gsfc.nasa.gov
title{99}: Parallel Calculation of Sensitivity Derivatives for Aircraft Design
Using Automatic Differentiation
keywords{46}: contents
curator
larry
picha
return
table
the
images{19}: graphics/return.gif
headings{132}: Parallel Calculation of Sensitivity Derivatives for Aircraft Design
Using Automatic Differentiation
Return
to the Table of Contents
body{2849}:
Objective: This work compares two computational approaches for
calculating sensitivity derivatives (SD) from gradient code obtained by
means of automatic differentiation (AD).
Approach: The ADIFOR (AD of
Fortran) tool, developed by Argonne National Laboratory and Rice
University, is applied to the TLNS3D thin-layer Navier-Stokes flow
solver to obtain aerodynamic SD with respect to wing geometric design
variables. The number of design variables (NDV) ranges from 1 to 60.
Coarse-grained parallelization (as shown in Figure 1) of the
TLNS3D.AD code is employed on an IBM SP/1 workstation cluster with a
Fortran-M wrapper to improve the code speed and memory use. Results
from the initial (unoptimized) parallel implementation on the SP/1 are
compared with the most efficient (to date) implementation of the
TLNS3D.AD code on a single processor of the vector Cray Y-MP.
Accomplishment: Figure 2 shows the beneficial effects of SP/1
parallelization; as expected, the time required to compute the
aerodynamic SD on a 972517 viscous grid decreases significantly as the
number of processors (NP) used increases from 1 to 15. A fair
comparison between the SP/1 and Y-MP implementations involves complex
trade-offs among numerous parameters including single processor speed,
Y-MP vector performance, total available memory, the amount of SP/1
parallelization employed, and machine life-cycle cost. Generally,
though, on this grid the SD compute time of the Y-MP is about 10 times
faster than that of the SP/1 if the number of design variables (NDV) is
small. However, the Y-MP is only about 2 times faster (or less) than
the SP/1 as NDV increases and parallelization can be efficiently
exploited on the SP/1.
Significance: Although the compute time for
the vector Cray Y-MP is faster than that of the parallel IBM SP/1, for
most of the SD cases examined the difference is only about a factor of
2 or less; SD calculations for large NDV can be performed efficiently
on the SP/1 using coarse-grained parallelization. Consideration of the
total elapsed job time, rather than compute time would favor the SP/1
even more for these cases. Moreover, the total machine resources of a
128 node SP/1 can accommodate about 1000 design variables, whereas the Cray can only accommodate about 100 design variables for this size
grid.
Status/Plans: Other strategies exploiting more parallelization
within the TLNS3D.AD code will be studied. Fortran-M has been installed
on NASA Langley Research Center Computers to allow these
parallelization techniques to be mapped onto networks of heterogeneous
workstations.
Points of Contact: C. H. Bischof and T. L. Knauff,
Jr.
Argonne National Laboratory
(708) 252 - 8875
bischof@mcs.anl.gov
L. L. Green and K. J. Haigler
NASA Langley
Research Center
(804) 864 - 2228
l.l.green@larc.nasa.gov
curator: Larry Picha
MD5{32}: a82c1ad9be3a8f1f3bfcd195c7f8ff1b
File-Size{4}: 3429
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{67}: Parallel Calculation of Sensitivity Derivatives for Aircraft Design
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/iita/k-12.html
Update-Time{9}: 827948649
url-references{94}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94.html
mailto:lpicha@cesdis.gsfc.nasa.gov
title{57}: High Performance Computing and Communication K-12 Project
keywords{46}: contents
curator
larry
picha
return
table
the
images{19}: graphics/return.gif
headings{90}: High Performance Computing and Communication K-12 Project
Return
to the Table of Contents
body{2586}:
Objective: To inspire students in the K-12 grades to pursue careers in science and engineering. A particular focus is to target underrepresented schools and minorities.
Approach: The Lewis project began by involving teachers early on and continually throughout the project. Training was available to ensure all participants were at the same level of expertise and to standardize on computing platforms, so advances within the project could be easily shared amongst all involved. With this in mind, the Lewis project focused on three areas of development: 1) Teacher & Student training; 2) Curriculum supplemental material; 3) Computing and Network Infrastructure within the schools.
Accomplishment: Currently, the Lewis project has
trained 25 teachers from fourteen schools ranging from high school to
elementary. Nine schools have received Apple Macintoshes and network
equipment for connecting to Internet. The training for teachers
consists of instruction by Lewis personnel on topics including: Mac
Basics, Internet, Visualization, computer languages, Unix, Interactive
Physics, Maple, Animation Works and Spyglass. The teacher training is
conducted each summer and is spread over two weeks. In the area of
curriculum, Barberton High School will teach a new course entitled "High Performance Computing" at the 10th and 11th grade level.
Customary and innovative network efforts have been implemented within Lewis' K12 project. Support for connections to the Internet ranges from
basic phone line access to a successful implementation of RF technology
at sustained T1 speeds. Cleveland East Technical High School has
partnered with Cleveland State University to acquire Internet access
and to demonstrate the cost-effective use of this "wireless"
communication path.
Significance: The HPCC K12 project has the
potential to inspire students, teachers and NASA personnel toward
developing and enhancing current school curriculum into a living entity
that can grow and accommodate the technology already available outside
the classroom.
Status/Plans: Current program will continue to consist
of two weeks of teacher training, providing selected schools with
computers and providing basic Internet connections. New efforts proposed for FY95 include working with the sight-impaired and developing the Lewis Teacher Resource Center into a functioning
instructional facility for year round K12 use.
Point of Contact:
Gregory J. Follen
NASA Lewis Research Center
(216)433-5193
Gynelle Mackson
NASA Lewis Research Center
(216)
433-8258
curator: Larry Picha
MD5{32}: 5acd1c7ffcc0005badb5f4a375469dec
File-Size{4}: 3079
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{57}: High Performance Computing and Communication K-12 Project
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/guest.ci/stone.html
Update-Time{9}: 827948652
url-references{111}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/guest.ci.html
mailto:lpicha@cesdis.gsfc.nasa.gov
title{36}: Magnetic and Radiation Field Effects
keywords{45}: curator
larry
page
picha
previous
return
the
images{38}: graphics/stone.gif
graphics/return.gif
headings{99}: Fluid Dynamics Code Incorporating Magnetic and Radiation Field Effects
Return
to the PREVIOUS PAGE
body{2347}:
Objective: To develop a fluid dynamics code which incorporates
the effects of magnetic and radiation fields for massively parallel
supercomputers and apply it to the study of the dynamics of
astrophysical plasmas.
Approach: Standard finite-difference
methods are used to evolve the equations of fluid dynamics. Special
purpose algorithms developed by the PI are used to evolve the magnetic
and radiation fields. The code is written in Fortran using a data
parallel paradigm.
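For illustration only (the production code is Fortran with a data parallel
paradigm, as noted above), an explicit finite-difference update of this kind
can be sketched in C; the array names and the Courant number in this sketch
are assumptions for the example, not the PI's algorithm.

/* Illustrative sketch: one first-order upwind finite-difference step for
 * the 1D advection equation du/dt + a*du/dx = 0 with periodic boundaries.
 * `c` is the Courant number a*dt/dx (assumed 0 < c <= 1).  This is only a
 * toy stand-in for the full magnetohydrodynamic update. */
#include <stddef.h>

void advect_step(const double *u, double *u_new, size_t n, double c)
{
    for (size_t i = 0; i < n; i++) {
        size_t im1 = (i == 0) ? n - 1 : i - 1;   /* periodic wrap-around */
        u_new[i] = u[i] - c * (u[i] - u[im1]);   /* upwind difference    */
    }
}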
Accomplishments: Fully three-dimensional
hydrodynamic algorithms including the effects of magnetic fields have
been implemented on a variety of massively parallel supercomputers,
including the Connection Machine 2 (CM-2), CM-5, and MasPar-2.
Performance on these machines varies from 2-20 times faster than on one
Cray YMP processor. The code is now being used to study the dynamics of
magnetized accretion disks. The accompanying figure shows the
turbulence which results in a three-dimensional section of a weakly
magnetized accretion disk from the development of magnetic
instabilities in the flow. The magnetic field lines (yellow) have
become highly tangled, and the density (colors) shows large amplitude
fluctuations characteristic of turbulence in the midplane of the
disk.
Significance: Many astrophysical systems behave as fluids; thus
a theoretical description of their dynamics is given by solutions of
the equations of fluid dynamics. However, astrophysical plasmas are
complex because they are affected by a variety of physical phenomena,
such as magnetic fields and radiation fields from nearby stars. By
implementing numerical algorithms for magnetic fluids on massively
parallel machines, the largest and most detailed numerical simulations
of the dynamics of astrophysical plasmas in a variety of contexts will
be possible.
Status/Plans: Year 2 milestones have been reached:
the hydrodynamic algorithms including the effects of a magnetic field
have been implemented on a variety of massively parallel machines, and
significant applications have been made. Future plans include
implementing the radiation hydrodynamic algorithms on parallel
machines, and porting the existing code to message passing
architectures.
Point of Contact: James M. Stone
University
of Maryland
(301) 405-2103
jstone@astro.umd.edu
curator:
Larry Picha
MD5{32}: 4b1d0cdc3c1919bc514587ff6549efbd
File-Size{4}: 2894
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{36}: Magnetic and Radiation Field Effects
}
@FILE { http://cesdis.gsfc.nasa.gov/PAS2/index.html
Update-Time{9}: 827948601
url-references{681}: /cesdis.html
mailto:tron@cesdis.gsfc.nasa.gov
/PAS2/index.html
/PAS2/README
/PAS2/findings.html
findings.tex
/PAS2/wg2.html
wg2.tex
/PAS2/wg3.html
wg3.tex
/PAS2/wg4.html
/PAS2/wg4.text
/PAS2/wg5.html
/PAS2/wg5.text
/PAS2/wg6.html
/PAS2/wg6.text
/PAS2/wg7.html
wg7.tex
pasadw7.bib
/PAS2/wg8.html
wg8.tex
/PAS2/wg9.html
wg9.text
mailto:messina@cacr.caltech.edu
http://www.ccsf.caltech.edu/~jpool/
mailto:jpool@cacr.caltech.edu
http://cesdis.gsfc.nasa.gov/people/tron/tron.html
mailto:tron@cesdis.gsfc.nasa.gov
/cesdis.html
http://hypatia.gsfc.nasa.gov/NASA_homepage.html
http://hypatia.gsfc.nasa.gov/GSFC_homepage.html
#top
/pub/people/tron/tron.html
mailto:tron@cesdis.gsfc.nasa.gov
title{24}: Second Pasadena Workshop
keywords{320}: and
author
bibliography
cacr
caltech
center
cesdis
computing
edu
environments
file
findings
flight
for
form
goddard
gov
group
gsfc
high
james
jpool
messina
nasa
overview
pasadena
performance
pool
proceedings
readme
report
second
software
space
sterling
system
tex
text
the
thomas
tools
top
tron
version
working
workshop
headings{150}: Proceedings of the Second Pasadena Workshop on
System Software and
Tools for High Performance Computing Environments
Pointers to documents:
Contacts:
body{2015}:
This page contains links to information about the Second Pasadena
Workshop available at
CESDIS .
This directory contains the draft
reports of the nine working groups of the
workshop. These are still in
revision and may be expected to change over time.
At this time, the
report of working group 1 is in preparation
and will be posted
shortly.
The draft of an overview paper has been included and is
written as a
standalone document. It summarizes the major findings and
recommendations of
the workshop as well as providing some background
information.
Questions, comments, and suggestions about this document
may be sent to
Thomas Sterling .
Second Pasadena Workshop (this document): this web page.
README file: description of the contents of this index.
Overview of Workshop Findings: summary paper of workshop issues,
findings, and recommendations; also available as a TeX version.
Working Group 2 Report: Characteristics of HPC Scientific and
Engineering Applications; also available as a TeX version.
Working Group 3 Report: Use of System Software and Tools; also
available as a TeX version.
Working Group 4 Report: Influence of Parallel Architecture on HPC
Software (also available in text form).
Working Group 5 Report: Transition from Research to Products (also
available in text form).
Working Group 6 Report: Mixed Paradigms and Alternatives (also
available in text form).
Working Group 7 Report: Message Passing and Object Oriented Paradigms;
also available as a TeX version with a bibliography.
Working Group 8 Report: Data Parallel and Shared Memory Paradigms;
also available as a TeX version.
Working Group 9 Report: Heterogeneous Computing Environments; also
available in the submitted text version.
Paul Messina, messina@cacr.caltech.edu.
James Pool, jpool@cacr.caltech.edu.
Thomas Sterling, tron@cesdis.gsfc.nasa.gov.
CESDIS
is located at the
NASA
Goddard Space Flight Center in Greenbelt MD.
Top
address{53}: Author:
Thomas Sterling
, tron@cesdis.gsfc.nasa.gov
.
MD5{32}: 563a8b9c54306bf62103141b9c03470a
File-Size{4}: 3827
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{24}: Second Pasadena Workshop
}
@FILE { http://cesdis.gsfc.nasa.gov/admin/inf.eng/wave.tutorial.fin/responsible.html
Update-Time{9}: 827948692
title{25}: Responsibility and the Web
MD5{32}: 406b5596af21aeac67c67a55df94dc5e
File-Size{4}: 5316
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Keywords{26}: and
responsibility
the
web
Description{25}: Responsibility and the Web
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/iitf.hp/graphics/
Update-Time{9}: 827948828
url-references{142}: /hpccm/iitf.hp/
blue.GIF
blue.JPG
eye_bullet.GIF
hpcc.header.gif
hpccsmall.gif
nasa.meatball.gif
think.back.gif
think.gif
wavebar.gif
work.gif
title{33}: Index of /hpccm/iitf.hp/graphics/
keywords{101}: back
blue
bullet
directory
eye
gif
header
hpcc
hpccsmall
jpg
meatball
nasa
parent
think
wavebar
work
images{202}: /icons/blank.xbm
/icons/menu.gif
/icons/image.gif
/icons/image.gif
/icons/image.gif
/icons/image.gif
/icons/image.gif
/icons/image.gif
/icons/image.gif
/icons/image.gif
/icons/image.gif
/icons/image.gif
headings{33}: Index of /hpccm/iitf.hp/graphics/
body{413}:
Name Last modified Size Description
Parent Directory 21-Nov-95
15:28 -
blue.GIF 21-Nov-95 15:25 9K
blue.JPG 21-Nov-95 15:18 2K
eye_bullet.GIF 24-Jul-95 15:51 1K
hpcc.header.gif 18-May-95 13:29
1K
hpccsmall.gif 24-May-95 12:31 2K
nasa.meatball.gif 08-Nov-94
13:46 3K
think.back.gif 06-Jun-95 14:18 15K
think.gif 15-Mar-95
22:17 13K
wavebar.gif 08-Nov-94 13:46 2K
work.gif 08-Nov-94 13:46
1K
MD5{32}: 7d8cf055356ae0486d59a64caf2484cb
File-Size{4}: 1706
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{33}: Index of /hpccm/iitf.hp/graphics/
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/iita/TEMPLATE.html
Update-Time{9}: 827948852
url-references{107}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/iita.html
mailto:lpicha@cesdis.gsfc.nasa.gov
title{32}: Finite Element Gasdynamics Codes
keywords{45}: curator
larry
page
picha
previous
return
the
images{19}: graphics/return.gif
headings{153}: Developed Tools for Extending Finite Element Gasdynamics Codes to MHD
Regime for Space Science and Astrophysics Applications
Return
to the PREVIOUS PAGE
body{174}: background="graphics/ess.gif">
Objective:
Approach:
Accomplishments:
Significance:
Status/Plans:
Point of Contact: Kevin Olson
curator: Larry Picha
MD5{32}: 23cd09597d289c82aaccd76c1e5c1d6c
File-Size{3}: 732
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{32}: Finite Element Gasdynamics Codes
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/visitor/sbkpg2.html
Update-Time{9}: 827948831
url-references{72}: newBKGstuff.html
index.html
katie.html
mailto:katie@cesdis.gsfc.nasa.gov
title{37}: example.2 solid color background page
keywords{47}: back
background
extension
index
katie
page
the
images{48}: shoelacebar.gif
shoelacebar.gif
kLogo(tnspt).GIF
headings{43}: Another example solid background color page
body{79}: BGCOLOR="#9999CC">
Back to the background extension page Back to the
index
address{32}: Last updated 20 june 95 by katie
MD5{32}: e9e570a92be2c7134b7caff29316fcec
File-Size{3}: 529
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{37}: example.2 solid color background page
}
@FILE { http://cesdis.gsfc.nasa.gov/linux/pcmcia
Update-Time{9}: 820866817
Description{23}: Index of /linux/pcmcia/
Time-to-Live{8}: 14515200
Refresh-Rate{7}: 2419200
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Version{3}: 1.0
Type{4}: HTML
File-Size{4}: 1361
MD5{32}: bfb6601ebf125a2d250b3c9960dbbdb4
body{326}:
Name Last modified Size Description
Parent Directory 09-May-95
16:43 -
3c589.c 22-May-94 10:49 17K
3c589.c-1.1.54 18-Oct-94
18:07 19K
3c589.html 10-Jun-94 17:11 8K
cardd.tgz 22-May-94 10:53
9K
cardd/ 24-Feb-95 01:46 -
dbether.c 17-Jun-94 17:37 6K
dbmodem.c 05-Aug-94 14:30 6K
pcmcia.html 31-Mar-95 19:44 1K
headings{23}: Index of /linux/pcmcia/
images{160}: /icons/blank.xbm
/icons/back.xbm
/icons/text.xbm
/icons/text.xbm
/icons/text.xbm
/icons/text.xbm
/icons/menu.xbm
/icons/text.xbm
/icons/text.xbm
/icons/text.xbm
keywords{55}: cardd
dbether
dbmodem
directory
html
parent
pcmcia
tgz
title{23}: Index of /linux/pcmcia/
url-references{89}: /linux
3c589.c
3c589.c-1.1.54
3c589.html
cardd.tgz
cardd/
dbether.c
dbmodem.c
pcmcia.html
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/archive/factsheets.html
Update-Time{9}: 827948801
url-references{34}: mailto:lpicha@cesdis.gsfc.nasa.gov
title{15}: HPCC Fact Sheet
keywords{263}: accelerate
aeronautics
and
application
century
cesdis
comments
computing
development
directly
earth
engineering
gov
gsfc
high
into
larry
lpicha
meet
nasa
next
performance
picha
please
questions
requirements
sciences
send
space
speed
technologies
the
welcome
your
images{24}: hpcc.graphics/lites2.gif
headings{174}: The National Aeronautics and Space Administration's (NASA) High
Performance Computing and Communications (HPCC) Program
Welcome
to the NASA HPCC Brochure!
Table of Contents
body{996}:
To accelerate the development and application of high-performance
computing technologies to meet NASA's aeronautics, earth and space
sciences, and engineering requirements into the next century.
You're here because you need or want an explanation and overview of the
NASA HPCC Program, its mission, and how it implements and utilizes
taxpayer assets. You may click on the table of contents item you're
interested in and go directly there, or you may scroll through the
entire document. You may return to your starting point by clicking on
the ''back'' option of your browser (e.g., Mosaic or Netscape) at any
time.
Please send your comments and/or questions directly to Larry
Picha (lpicha@cesdis.gsfc.nasa.gov).
Introduction
The Speed of Change
Components of the NASA HPCC Program
Computational Aerosciences (CAS) Project
Earth and Space Sciences (ESS) Project
Information Infrastructure Technology and Applications (IITA) component
Remote Exploration and Experimentation (REE) Project
MD5{32}: 60fba2bd0b2edca4381f59bedd13232d
File-Size{5}: 14049
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{15}: HPCC Fact Sheet
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/iita/
Update-Time{9}: 827948841
url-references{66}: /hpccm/annual.reports/cas94contents/
graphics/
iita.html
k-12.html
title{50}: Index of /hpccm/annual.reports/cas94contents/iita/
keywords{36}: directory
graphics
html
iita
parent
images{80}: /icons/blank.xbm
/icons/menu.gif
/icons/menu.gif
/icons/text.gif
/icons/text.gif
headings{50}: Index of /hpccm/annual.reports/cas94contents/iita/
body{166}:
Name Last modified Size Description
Parent Directory 17-Oct-95
15:42 -
graphics/ 17-Jul-95 13:50 -
iita.html 07-Jul-95 15:00 3K
k-12.html 19-Jul-95 14:13 3K
MD5{32}: 2b99e14b773c7423d65d50d21d4504a7
File-Size{3}: 794
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{50}: Index of /hpccm/annual.reports/cas94contents/iita/
}
@FILE { http://cesdis.gsfc.nasa.gov/pub/linux/diag/diagnostic.html
Update-Time{9}: 827948614
url-references{128}: hp+.c
ne2k.c
atp-diag.c
atp.h
e21.c
at1700.c
eexpress.c
../setup/atlantic.c
http://cesdis.gsfc.nasa.gov/linux/setup/3c5x9setup.c
title{46}: Linux Ethercard Diagnostic and Setup Utilities
keywords{200}: and
cabletron
code
com
diagnostic
diagnostics
ethercard
etherlink
ethernet
express
family
file
header
iii
intel
lan
lantic
linux
national
pclan
program
programs
realtek
semiconductor
setup
source
tec
headings{83}: Linux Ethercard Diagnostic and Setup Programs
Diagnostic Programs
Setup Programs
body{1123}:
This is a collection of user-level programs to check out the basic
functionality of an ethercard. The "setup" programs can read (and
sometimes even write) the EEPROM setup table of software-configured
cards.
HP PCLAN+ diagnostics, C source code.
NE2000 diagnostics, C source code.
AT-Lan-Tec/RealTek diagnostics, C source code. (If you don't have the
kernel source, you'll also need the header file.)
Cabletron E21xx diagnostics, C source code.
AT1700 diagnostics, C source code.
Intel Ethernet Express diagnostics, C source code.
National Semiconductor DP83905 AT/Lantic setup program, C source code.
The AT/Lantic chip is used in the NE2000+ and many other
software-configured NE2000 clones.
3Com EtherLink III family (3c509, 3c529, 3c579, and 3c589) setup
program, C source code. This program displays the registers and
currently programmed settings. It allows the base I/O address, IRQ, and
transceiver port settings to be changed.
MD5{32}: fc9652a01a1773116e268353ccc0c48d
File-Size{4}: 2088
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{46}: Linux Ethercard Diagnostic and Setup Utilities
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw/compressor.html
Update-Time{9}: 827948648
url-references{109}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw.html
mailto:lpicha@cesdis.gsfc.nasa.gov
title{30}: Multistage Compressor Analysis
keywords{46}: contents
curator
larry
picha
return
table
the
images{19}: graphics/return.gif
headings{63}: Multistage Compressor Analysis
Return
to the Table of Contents
body{1887}:
Objective: To develop multidisciplinary technologies for multistage
compression systems that enhance full engine simulation capabilities.
Approach: A detailed multistage compressor analysis code (MSTAGE)
has been ported to a variety of computing systems including the IBM SP1
parallel processor. Several analyses were made to define the flow
physics involved in compressor stall. These flow analyses suggested a
variety of approaches to improve the performance of compression
systems, while providing increased stall margins.
Accomplishment:
This work was conducted as part of a joint
industry/government/university team (P&W/NASA/MIT) effort called ''Stall Line
Management''. Design and off-design flow prediction for multistage
turbomachinery is one of the critical elements of this program. A key
feature of this prediction capability is the physics-based models
developed at NASA Lewis. These models provide a rational prediction of
time averaged multistage flow physics by using steady prediction tools.
Rigorous mathematical analysis and NASA high performance computing
platforms (including the NASA Cray C90, IBM Workstation cluster and
SP-1) were essential to the formulation and development of these
models.
Significance: A 1.5 percent reduction in specific fuel
consumption for a large commercial aircraft engine was recently
demonstrated at Pratt and Whitney. This reduction was achieved in 1/2
the historical design time by utilizing viscous 3D fluids analysis
codes.
Status/Plans: Compressor disk and outer casing thermal and
structural analyses are being incorporated into the overall predictive
system. A project plan that schedules the inclusion of several
disciplines (controls, aero, structures) has been developed and
approved by the performing team members.
Point of Contact: Chuck
Lawrence
NASA Lewis Research Center
(216) 433-6048
curator: Larry Picha
MD5{32}: d5fd71add3c7362004bb7adcd2af408c
File-Size{4}: 2319
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{30}: Multistage Compressor Analysis
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ann.rpt.95/cas.95/cas.95.ar.nas.tr.vis.html
Update-Time{9}: 827948663
title{30}: NTV - The NAS Trace Visualizer
keywords{75}: accomplishments
approach
contact
objective
plans
point
significance
status
headings{30}: NTV - The NAS Trace Visualizer
body{2395}:
Objective: To develop a programming tool aimed at
supporting high performance computing on scalable parallel computers.
NTV focuses on performance and correctness by helping to detect
performance bottlenecks using scalable visual representations of
execution traces and innovative trace-browsing capabilities.
Approach: Program developers are faced with a number of
computational platforms. Each platform has its own peculiarities which
affect the way code is tuned for optimum performance. One of the more
useful techniques available for tuning is the analysis of execution
traces. Some manufacturers provide a tracing capability but some do
not, requiring the use of an instrumentor such as that provided by
AIMS. The quantity and complexity of trace data make graphical trace
visualizers essential for analysis. Unfortunately, all existing trace
visualizers are designed to handle only a specific trace format, and
the
formats differ among manufacturers and instrumentors. Further,
the visualizers differ in function and in operation, so program
developers are forced to become proficient with several analysis tools.
NTV is a trace visualization tool
designed to be used with all trace
formats so that a user need only learn one tool. Further, unlike
existing visualizers, it uses static displays which are easier to
understand and more scalable than the dynamic displays common in other
visualizers.
The figure shows an AIMS trace from a program executing
on an Intel iPSC/860 (bottom) and an IBM MPL trace from the same
program ported to run on an IBM SP2 (top). In both cases, the display
of all messages to processor 0 (angled blue lines) has been turned on
and all others turned off.
Accomplishments: A Beta version
supporting AIMS traces and IBM SP2 MPL was released.
Significance:
With release of the Beta version the tool is now available to help
users develop efficient parallel programs. It has been demonstrated
that a tool can be developed that supports very different trace
formats, and that static displays can be supported on existing
workstations.
Status/Plans: Maintain and support the released
version of NTV. Investigate and plan NTV replacement of visualizer in
AIMS, support MPI/SP2 and produce a library of trace visualization
display elements.
Point(s) of Contact:
Louis Lopez
NASA Ames
Research Center
llopez@nas.nasa.gov
415-604-0521
MD5{32}: fb15270295a8042b504f52336da421fd
File-Size{4}: 2615
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{30}: NTV - The NAS Trace Visualizer
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/visitor/txtpg1.html
Update-Time{9}: 827948831
url-references{89}: nonExistent.html
newBKGstuff.html
index.html
katie.html
mailto:katie@cesdis.gsfc.nasa.gov
title{42}: Grand Example Background Manipulation page
keywords{64}: back
background
extensions
here
index
katie
links
page
text
the
images{48}: shoelacebar.gif
shoelacebar.gif
kLogo(tnspt).GIF
headings{45}: Grand Example of Background Manipulation Page
body{335}: BGCOLOR="#FF9966" TEXT="#996666" LINK="#669999" VLINK="#336666"
ALINK="#663366">
We've got the background colored, the text
colored, and the links colored. Whoo whoo!
In case you've already
visited all of the other links on this page, here is a link to a
nonexistent page.
Back to the background extensions page Back to
the index
address{32}: Last updated 20 june 95 by katie
MD5{32}: 24ce5b5c4782b9f6b215c9890aaf5e36
File-Size{3}: 930
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{42}: Grand Example Background Manipulation page
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw/direct.html
Update-Time{9}: 827948647
url-references{109}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/cas94contents/app.sw.html
mailto:lpicha@cesdis.gsfc.nasa.gov
title{119}: Direct-Coupled, Multi-disciplinary Computational Aeroscience Simulations
on distributed memory, MIMD parallel computers
keywords{46}: contents
curator
larry
picha
return
table
the
images{19}: graphics/return.gif
headings{152}: Direct-Coupled, Multi-disciplinary Computational Aeroscience Simulations
on distributed memory, MIMD parallel computers
Return
to the Table of Contents
body{3539}:
Objective: The goal of this project is to investigate the
algorithmic and implementation issues as well as the system software
requirements pertaining to the direct-coupled, multi-disciplinary
computational aero-science simulations on distributed memory (DM),
multiple instruction stream, multiple data stream (MIMD) parallel
architectures.
Approach: The design of future generations of civil
transport aircraft that are competitive in the global marketplace
requires multi-disciplinary analysis and design optimization
capabilities involving the direct coupling of diverse physical
disciplines that influence the operational characteristics of the
aircraft. An immediate outcome of such an approach would be the
greatly increased computational requirements for the simulation, in
comparison to what is needed for current single discipline simulations
on conventional supercomputers. In the near future, it appears that the
computational resources of the scale required for such
multi-disciplinary analysis and/or design optimization tasks may only
be fulfilled in a cost-effective manner by the use of highly parallel
computer architectures. In order to effectively harness the tremendous
computational power promised by such architectures, it is imperative to
investigate the algorithmic and software issues involved in the
development and implementation of concurrent, directly-coupled,
multi-disciplinary simulations. This study takes a necessary
preliminary step towards the development of this enormously complex
capability by attempting to compute the unsteady aeroelastic response
and flutter boundary of a wing in the transonic flow regime through the
direct coupling of two disciplines, viz. fluid mechanics and structural
dynamics on a DM-MIMD computer.
Accomplishment: A direct-coupled,
fluid-structure interaction code capable of simulating the highly
nonlinear aeroelastic response of a wing in the transonic flow regime
was implemented on the 128 processor Intel iPSC/860 computer. The
performance and the scalability of the implementation realized on the
iPSC/860 was demonstrated by computing the transient aeroelastic
response of a simple High Speed Civil Transport type strake-wing
configuration. Also as a part of this study, the efficacy of various
concurrent time integration schemes that are based on the partitioned
analysis approach were investigated. The effort also helped in gaining
a greater understanding of the system software requirements associated
with such multi-disciplinary simulations on DM-MIMD computers. The
algorithmic and implementation details as well as the results can be
found in the following papers: AIAA-94-0095 and AIAA-94-1550.
Significance: This implementation for the first time exploits the
functional parallelism in addition to the data parallelism present in
multi-disciplinary computations on MIMD computers. It demonstrates the
feasibility of carrying out complex, multi-disciplinary, computational
aeroscience simulations efficiently on the current generation of DM-MIMD
computers.
Status/Plans: Future efforts will further explore the
possibility of developing more robust and scalable concurrent
algorithms for fluid-structure interaction problems, the incorporation
of additional disciplines and the feasibility of using emerging
parallel programming language standards for developing direct-coupled,
multi-disciplinary CAS applications.
Point of Contact: Sisira
Weeratunga
NASA Ames Research Center
(415) 604-3963
weeratun@nas.nasa.gov
curator: Larry Picha
MD5{32}: 6625f8fc4aaa8e0d09f47524c623a1db
File-Size{4}: 4160
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{72}: Direct-Coupled, Multi-disciplinary Computational Aeroscience Simulations
}
@FILE { http://cesdis.gsfc.nasa.gov/petaflops/archive/workshops/frontiers.95.html
Update-Time{9}: 827948600
url-references{405}: frontiers.95.pres.html
/~creschke/peta/report/report.html
http://sdcd.gsfc.nasa.gov/DIV-NEWS/frontiers.html
http://cesdis.gsfc.nasa.gov/petaflops/peta.html
/people/tron/tron.html
mailto:tron@usra.edu
/people/oconnell/whoiam.html
mailto:oconnell@cesdis.gsfc.nasa.gov
http://cesdis.gsfc.nasa.gov/people/lpicha/whoiam.html
mailto:lpicha@@cesdis.gsfc.nasa.gov
http://cesdis.gsfc.nasa.gov/
http://www.usra.edu/
title{54}: Petaflops Enabling Technologies and Applications (PETA)
keywords{552}: academia
achieving
address
alone
and
applications
arise
assuredly
cesdis
community
computing
conference
connell
edu
engineering
even
examining
exist
far
feasibility
federal
frontier
future
government
have
high
highlights
hpcc
inadequate
individuals
industry
july
lawrence
let
level
lpicha
many
may
michele
moc
most
now
over
overview
performance
period
petaflops
picha
presentations
problems
proceedings
program
realized
report
result
revised
scientific
sdcd
sighted
sterling
systems
technical
teraflops
that
the
thomas
towards
tron
usra
will
works
year
images{184}: peta.graphics/PETA.banner.gif
peta.graphics/saturn.gif
peta.graphics/saturn.gif
peta.graphics/saturn.gif
peta.graphics/saturn.gif
peta.graphics/turb.small.gif
peta.graphics/petabar.gif
headings{743}: PetaFLOPS Frontier '95
The PetaFLOP Frontier Workshop was part of a deliberate and on-going
process to define the long range future of high performance computing
here in the United States. The one-day workshop included presentations
in architecture, technology, applications, and algorithms and
participants ranged from government, academia, and industry.
Overview of Presentations
Conference Proceedings and Technical Report
The Space Data and Computing Division (SDCD) staff were instrumental in
the success of Frontiers '95 as noted in the SDCD Highlights
SDCD
was well-represented on the overall Frontiers '95 committee, and
members were very active participants in the PetaFLOPS Frontier
Workshop.
Return to the
P.E.T.A.
Directory
body{779}:
Even as the Federal HPCC Program works towards achieving
teraFLOPS computing, far-sighted individuals in government, academia
and industry have realized that teraFLOPS-level computing systems
will be inadequate to address many scientific and engineering problems
that exist now, let alone applications that may, and most assuredly
will, arise in the future. As a result, the high performance computing
community is examining the feasibility of achieving petaFLOPS-level
computing over a 20-year period.
Authorizing NASA Official: Paul H. Smith, NASA HPCC Office
Senior Editor: Thomas Sterling (tron@usra.edu)
Curators: Michele O'Connell (michele@usra.edu),
Lawrence Picha (lpicha@usra.edu),
CESDIS/USRA, NASA Goddard Space Flight Center.
Revised: 31 July
95 (moc)
MD5{32}: d8fa89c2d5546a5f45bbed1d3896c687
File-Size{4}: 2616
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{54}: Petaflops Enabling Technologies and Applications (PETA)
}
@FILE { http://cesdis.gsfc.nasa.gov/pub/linux/misc/hardware.html
Update-Time{9}: 827948614
url-references{262}: http://wwwhost.ots.utexas.edu/ethernet/ethernet-home.html
#8390irq
#8390multicast
#ne2000dma
#auiswitch
#pliplength
#multi3c509
#MCAbus
#check3c589
#hpvectra
#eexpress
#subnote
#ioregion
#diskdown
#xircom
#top
/pub/linux/linux.html
/pub/people/becker/whoiam.html
title{16}: Network hardware
keywords{493}: again
all
and
aui
author
based
becker
bus
capture
cesdis
code
conflict
disk
dma
don
donald
down
driver
enabled
eth
ethercards
etherexpress
ethernet
excellent
extents
guide
hardware
ide
information
intel
interrupt
lance
length
link
linux
machine
mca
message
messages
micro
midwest
mode
multiple
mysteriously
network
one
packets
plip
port
power
probe
problem
promiscuous
region
snarf
status
subnotebook
support
switches
the
top
totally
unknown
utexas
vectra
verifying
warning
wiring
with
xircom
headings{511}: Information on network hardware
"eth0: unknown interrupt 0x1" messages
8390-based ethercards don't capture all packets in promiscuous mode
NE2000 driver DMA conflict message
Midwest Micro subnotebook
I/O region extents, and snarf_region().
3c503 mysteriously switches to the AUI port.
PLIP length warning.
Problem with HP Vectra 486/66XM LANCE probe.
Multiple 3c509's in one machine.
Intel EtherExpress driver status.
IDE Disk Power-down Code.
Verifying a 3c589 is enabled.
Xircom, again.
MCA bus 3c529 support.
body{16092}:
This is an informal collection of information about network
hardware and
bug work-arounds.
Here is a quick index: A link
to the
Totally Excellent UTexas ethernet wiring guide . "eth0: unknown
interrupt 0x1" messages . 8390-based ethercards don't capture all
packets in
promiscuous mode . NE2000 driver DMA conflict message .
3c503 mysteriously switches to the AUI port . PLIP length warning .
Multiple 3c509's in one machine . MCA bus 3c529 support . Verifying a
3c589 is enabled . Problem with HP Vectra 486/66XM LANCE probe . Intel
EtherExpress driver status . Midwest Micro subnotebook . I/O region
extents, and snarf_region() . IDE Disk Power-down Code . Xircom, again
.
>At some moment mine /var/adm/messages started to record
zillions
>of messages like:
>"eth0: unknown interrupt 0x1"
This
message should only occur with kernels 1.0.0 to 1.0.3 or so.
These
are mostly harmless messages produced by error-checking code
around
line 277 in 8390.c. The root cause is usually some part of the
system
shutting off interrupts for longer than the net code expects,
or that your
network is exceptionally busy.
This section of code
that produces this message combines a check for
unrecognized hardware
return values with a check to prevent unlimited
work being done during
a single interrupt, which might indicate a hardware
failure. A kernel
patch (1.0.3 I think) increased this 'boguscnt' check from
'5'(four
actions per network interrupt) to '9' (eight actions per
interrupt).
This prevents the error message for almost all systems.
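For illustration, the pattern is roughly the following (a compilable
user-level sketch with stubbed hardware access and invented handler names;
the real code is in drivers/net/8390.c):

/* Sketch of the bounded work-per-interrupt check described above.
 * read_isr(), handle_rx() and handle_tx() are stand-ins for the real
 * register reads and packet handlers. */
#include <stdio.h>

static int  read_isr(void)  { return 0; }   /* stub: interrupt status reg */
static void handle_rx(void) { }             /* stub: packet received      */
static void handle_tx(void) { }             /* stub: transmit completed   */

static void net_interrupt(void)
{
    int boguscnt = 9;               /* allow at most eight actions per IRQ */
    int status;

    while ((status = read_isr()) != 0) {
        if (--boguscnt <= 0) {      /* hardware fault, or a very busy net  */
            printf("eth0: too much work at interrupt, status 0x%x\n", status);
            break;
        }
        if (status & 0x01)          /* assumed RX bit for this sketch */
            handle_rx();
        else if (status & 0x02)     /* assumed TX bit for this sketch */
            handle_tx();
        else
            printf("eth0: unknown interrupt 0x%x\n", status);
    }
}

int main(void) { net_interrupt(); return 0; }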
Due to my misinterpretation of the 8390 documentation, all
drivers based on
the 8390 core (3c503, WD80*3, SMC Ultra, NE*000, HP
PCLAN and others) do not
receive multicast packets in promiscuous
mode.
Only network monitoring programs use promiscuous mode, and
protocols that
use multicast packets are currently rare, so very few
people will encounter
this problem. Kernels after 1.1.24 already
include the following fix:
drivers/net/8390.c:set_multicast_list()
 } else if (num_addrs < 0)
-    outb_p(E8390_RXCONFIG | 0x10, ioaddr + EN0_RXCR);
+    outb_p(E8390_RXCONFIG | 0x18, ioaddr + EN0_RXCR);
 else
-djb 7/13/94
>The problem has to do with machine crashes
about every 1 to 2 days with
>this error:
>eth0: DMAing conflict in
ne_block_output. [DMAstat:fffffffe][irqlock:fffffff]
Ohhh, bad.
This "can't" happen when everything is working correctly.
The "DMA"
that this message is referring to is the DMA controller
internal to
the NE2000. It has nothing to do with the motherboard DMA
channels. (A
few NE1000 clones do allow the two DMA systems to be
connected, but
DMA results in *slower* system operation when
transferring typical
ethernet traffic.)
What is likely happening is an interrupt conflict
or a noisy interrupt
line, causing the device driver to start another
packet transfer when
it thinks that it has locked out interrupts from
the card.
A remote possibility is that you are running an old kernel,
or mixing
versions of 8390.c and ne.c.
>And thats about it. With so little on it, its hard to believe I have this
>problem, but I do. The problem seems to corrolate with the addition of the
>second IDE, before we had it, we used to have uptimes of 2+ weeks. The
Hmmm, adding a card often results in IRQ conflicts and occasionally
results in electrical noise problems. Try swapping cards in their
slots or changing the interrupt line. (Note: upper IRQs are often
quieter than lower ones! Try IRQ11 or IRQ15.)
>I got your email address off
the laptop survey list on tsx and thought I'd
>write you for some
experience/advice. I'm getting an Elite Subnote and
>wondered how you
like yours. How long have you had it? Any trouble? I've
>read stuff
about it's "cramped" keyboard and unreliable trackball. Is this
>your
experience?
I've ordered five machines in two batches. The first
machine was ordered
in early January and had the following problems:
Power supply cut out after warming up. (Fixed)
Cracking around the lid/display hinges (perhaps caused by fix above?)
Unreliable power jack or plug (wiggling causes power LED to flicker).
The latest
four arrived in mid-April and have none of these problems.
>About
Linux, any pointers about installing it? I'm planning to
use
>Slackware and load tinyX. I see you're running XWindows? Would
you mind
>sending me your Xconfig file? Also any trouble getting the
trackball to
>work?
No problems installing it. A few
notes:
The 4 bit VGA server works fine. The alpha-test 8 bit
server from
Mike Hollick doesn't restore text mode
correctly.
The trackball is a two-button Microsoft serial mouse.
Except
for the missing third button I love it, and have had
no
problems with it.
The wrist rest turned out to be far
more useful than I had expected.
The keyboard feels fine; it
took about a week to get used to it.
>I guess you're pretty happy
with the Subnote since you have five. You
>certainly can't beat the
price.
Not only was the price great, it was also the only
reasonable subnote
that was shipping with a 340M drive.
BTW, the
first was ordered with a 4M memory expansion because the 8M
memory
expansion cards were not due to be available until "late
February".
The recent batch was ordered with 8M modules, but they
arrived with
only the base 4M because the 8M expansion modules still
were not
available! Rule: if it's not "in stock for immediate
delivery", it
doesn't exist. People that ordered IBM Thinkpad 750s
back in the fall
are just getting them now!
The Other Rule: divide the advertised
battery life by two.
(This is a question about why the kernel
function snarf_region() only works
up to 0x3ff, and why drivers don't
bother allocating higher I/O regions.)
> /* We've committed to using
the board, and can start filling in *dev. */
> /* I suppose this
assigns this I/O range to this proc */
>snarf_region(ioaddr,
16);
> /* Why the same is not done for the range starting at
ioaddr+0xC008 ? */
The snarf_region() function shares some of the
same bitmap functions
as the ioperm() call, and only marks the I/O
ports used in the range 0x0 - 0x3ff
(the original PC I/O space). I claim (I
wrote the ioperm() and
*_region() code, so I feel the need to defend
this :->) that this is
actually the right thing to do, as some (many?)
I/O devices
deliberately ignore the upper I/O address bits because
some ancient
broken PC software required it.
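For illustration, a probe routine using this interface looks roughly like
the sketch below; the check_region()/snarf_region() prototypes follow the
usage discussed here, and MYCARD_IO_EXTENT and mycard_found() are invented
names for the example.

/* Sketch of the probe-time idiom: test the I/O range for a conflict, then
 * mark it as taken once we commit to the board.  Only ports below 0x400
 * are tracked in the bitmap, as explained above. */
#define MYCARD_IO_EXTENT 16

extern int  check_region(unsigned int from, unsigned int extent);
extern void snarf_region(unsigned int from, unsigned int extent);
extern int  mycard_found(int ioaddr);   /* card-specific signature test */

static int mycard_probe(int ioaddr)
{
    if (check_region(ioaddr, MYCARD_IO_EXTENT))
        return -1;                  /* ports already claimed by a driver */
    if (!mycard_found(ioaddr))
        return -1;                  /* no card at this address */
    /* We've committed to using the board: reserve the base I/O range. */
    snarf_region(ioaddr, MYCARD_IO_EXTENT);
    return 0;
}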
>I have been
having a problem with my eternet card changing from the TP port
>to
the AUI without any notice. The machine will change interfaces
>between 1 day and 1 week of uptime. If I move the cable from the TP
port to
>a tranceiver on the AUI the machine after it swaps it will
work again.
The '8390' part of the 3c503 driver has special code to
automatically switch
interfaces around line drivers/net/8390.c:156 in
version 1.0. This
code was added so that the ethercard could
automatically configure itself
for the network in use. This turned out
to be a not-quite-perfect
implementation of an otherwise good idea,
and around version 1.0.2 the code
was changed to only switch
interfaces if *no* packets had yet been
transmitted without error,
rather than anytime in the session.
>> 1. It works ONLY with
short cables - I have one cable 2 meters long and one
>> 40 meters
long. My old plip worked fine on both; your one works with the
Acckkkk! A 40 meter printer cable is *way* beyond the specs for
even
output-only printer traffic! It's unreasonable to expect
bidirectional
traffic to work on a cable this long.
You should
switch to ethernet for this link: not only is ethernet faster,
cheaper
and more reliable, it's also much *safer* for a connection this
long.
10base2 provides at least 600V of isolation if the 'T' taps
are
insulated, and 10baseT provides over 1500V isolation with fully
enclosed
contacts. That's protection against lightning hits, ground
loops, ground
noise and ground offsets that you *need*.
>Subject: Problem with HP Vectra 486/66XM LANCE probe
>We're
using HP Vectra 486/66XM's here and they have an AMD
>79C960 chip on
the motherboard. The Ethernet HowTo indicates
>that this is supported
using the PCnet-ISA driver, lance.c,
>which says upon booting that it
is:
>
> HP J2405A IRQ 15 DMA 7.
>
>The only problem is that the IRQ
and DMA are incorrect.
Ooops, when I put in the HP-J2405A
special-case code I didn't realize that
they were going to come out
with an incompatible implementation. The
'lance.c' driver *always*
uses the value in the setup register of an HP
Lance implementation. In
this case it's reading an invalid 0xff value.
>For the time being,
I've been hardcoding the proper IRQ and DMA
>values in the driver
itself and everything has been working
>fine, but I'd like to get the
probe for this fixed so that I
>don't have to muck around with the
source (or do funny things
>with LILO) in the future.
That's the
right temporary solution, and the right long-term attitude.
I'll see
if I can find someone at HP that knows how to tell the
difference
between a J2405A and a Vectra. If there isn't an easy way,
I'll just
ignore a 0xff setup value and do autoIRQ/autoDMA instead.
> Alan Cox suggested talking to you about figuring out how to do
multiple
>3c509's within 1 linux box. I have an application where I
would like to do
>just this. Specifically I'd like to get 3 of them
into a single ISA box.
The 3c509 driver already supports multiple
3c509 cards on the *ISA* bus.
Look in the probe code for the variable
'current tag'. Just make
certain that "eth1" and "eth2" are set to
probe anywhere (address
'0'), not just a specific I/O address.
A
side note: the 3c509 probe doesn't mix well with the rest of the
probes.
It's difficult to predict a priori which card will be
accepted "first"
-- the order is based on the hardware ethernet
address. That means
that the ethercard with the lowest ethernet
address will be assigned
to "eth0", and the next to "eth1", etc. If
the "eth0" ethercard is
removed, they all shift down one number.
Another note: the 3c509 driver will fail to find multiple
EISA-mode
3c509s and 3c579s.
The file drivers/net/3c509.c needs to be
modified to accept multiple
EISA adaptors. This change is already made
in later 1.1.* kernels.
Around line 94 make the following changes:
-/* First check for a board on the EISA bus. */
+/* First check all slots of the EISA bus.  The next slot address to
+   probe is kept in 'eisa_addr' to support multiple probe() calls. */
 if (EISA_bus) {
-    for (ioaddr = 0x1000; ioaddr < 0x9000; ioaddr += 0x1000) {
+    static int eisa_addr = 0x1000;
+    while (eisa_addr < 0x9000) {
+        ioaddr = eisa_addr;
+        eisa_addr += 0x1000;
+
         /* Check the standard EISA ID register for an encoded '3Com'. */
         if (inw(ioaddr + 0xC80) != 0x6d50)
             continue;
Please let me know if this works for
you.
>I have just installed linux with support for the intel
etherexpress
>card what is the current status of this card and where
can the latest
>version of the driver be got from.
>
>The current
version I have is v0.07 1/19/94 by yourself.
The EExpress driver is
still in alpha test -- it only works on some
machines, generally
slower 386 machines.
Several people are actively working on the
driver, but a
stable release is at least several months away.
>My friend has a small utility (under DOS) which can tell the
disk controller
>to switch off the disk after some period of
inactivity. He runs this program
>and then boots linux. (He has two
IDE disks).
This is a standard feature of all modern IDE disks. I
have a short
program (appended) that I use to do the same thing on
laptops. A
user-level program is a poor way to do this, but I got
tired of
patching it into my own kernels and I didn't feel I could
maintain
an Official Kernel Feature.
>After some time the disks
are
>switched off. Now, when linux wants to use them, disk driver
writes some
>messages about timeouts (one on the new disk and
three-four on the old one)
>and than everything is ok.
This is
almost normal: the Linux kernel gets upset when the disk
doesn't
respond immediately. It resets the controller, and by that
time the
disk has spun up. One annoying misfeature is that the disk drive
posts
an interrupt when it goes into spin-down mode, the kernel
doesn't know
where the interrupt is from, and 'syslog' immediately
spins the disk
back up. The quick, sleazy solution is to configure
'syslog' to ignore
those messages.
>BUT if the first process that wants to access
>disk
is swapper, the system hangs. It doesn't matter, which hard disk
the
>swap partition is on, the system hangs only when swapper wants to
access
>the disk. If somebody wants more details, I can reproduce it.
Hmm, I've never experienced this.
Anyway, here is my short
program to put the disk into standby-timer
mode. It takes a single
optional parameter, the number of seconds to
wait before going into
standby mode.
/*
 * diskdown.c: Shut down an IDE disk if there is no activity.
 * Written by Donald Becker (becker@cesdis.gsfc.nasa.gov) for Linux.
 */
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <asm/io.h>

#define IDE_BASE        0x1f0
#define IDE_SECTOR_CNT  0x1f2
#define IDE_CMD         0x1f7
#define PORTIO_ON       1

enum ide_cmd {StandbyImmediate=0xe0, IdleImmediate=0xe1,
              StandbyTimer=0xe2, IdleTimer=0xe3,};

int main(int argc, char *argv[])
{
    int timeout = 10;               /* default standby delay, in seconds */

    if (ioperm(IDE_BASE, 8, PORTIO_ON)) {
        perror("diskdown:ioperm()");
        fprintf(stderr, "diskdown: You must run this program as root.\n");
        return 1;
    }
    if (argc > 1) {
        timeout = atoi(argv[1]);
        if (timeout < 10)
            timeout = 10;
    }
    {
        int old_cnt = inb(IDE_SECTOR_CNT);
        printf("Old sector count: %d.\n", old_cnt);
        outb((timeout + 4)/5, IDE_SECTOR_CNT);  /* timer counts in 5 sec. units */
        outb(StandbyTimer, IDE_CMD);
        outb(old_cnt, IDE_SECTOR_CNT);
    }
    return 0;
}
/*
 * Local variables:
 *  compile-command: "gcc -O6 -o diskdown diskdown.c"
 *  comment-column: 32
 * End:
 */
>How does one work out/set
that memory map, i.e. mem_start,
>I've set io_addr to 0x300 and irq to
10 ok, its the memory
>part I've got a blind spot for.
The 3c589 uses 16 I/O locations and no memory locations. That makes it
much easier to configure than an I/O + memory card.
A quick way to check if the 3c589 is correctly mapped in is to run
dd if=/dev/port skip=768 count=16 bs=1 | od -t x2
instead of the 'ifconfig...'. This will show the contents of I/O
locations 0x300-0x30f (768 to 768+16). The 3c589 signature of 6d 50
(or 50 6d) should be the first bytes if it's mapped in correctly.
it's mapped in correctly.
>I have a friend who just got a
laptop and I've been putting linux
>on it. They got a Xircom credit
card ethernet adapter (it says right
>on the box that it supports "all
popular network operating systems, right?
>:-) Unfortunately, it looks
like it is unsupported in Linux. On the other
>hand, it is a PCMCIA
card, and it sounded like "generic" PCMCIA support
>might be
forthcoming.
Until Xircom releases programming information, no
non-standard (i.e.
non-modem) product can be supported.
The
"generic" part of the PCMCIA support will only handle socket
enabling.
That's all that's needed for devices that adhere to a
common register
standard, like modems, but ethernet adaptors differ
wildly.
You
should give Xircom a call and ask for the Linux driver. Tell them
that
it says right on the box that it "supports all popular
operating
systems". When they tell you that they don't have a device
driver,
ask them for the programming specifications:->.
> I've
managed to boot up linux on a PS/2 - at present I'd like to try
and
>get the current ETHERLINK/MC card working. I saw that in 3c509.c
you
>had provided some support for MCA. Some of the routime you call
in
>that section are undefined. What other routines do I need to have
in
order to build the 3c509 and try it out ?
I don't have access
to an MCA machine (nor do I fully understand the
probing code) so I
never wrote the mca_adaptor_select_mode() or
mca_adaptor_id()
routines. If you can find a way to get the adaptor
I/O address that
is assigned at boot time, you can just hard-wire that in
place of the
commented-out probe. Be sure to keep the code that reads
the IRQ,
if_port, and ethernet address.
Sorry I can't be more helpful.
Top
Linux at CESDIS
address{52}: Author:
Donald Becker
, becker@cesdis.gsfc.nasa.gov.
MD5{32}: e6eb52d3c91a6618d36841436cc045ba
File-Size{5}: 18884
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{16}: Network hardware
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/in.house/jacquie.html
Update-Time{9}: 827948654
url-references{111}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/in-house.html
mailto:lpicha@cesdis.gsfc.nasa.gov
title{88}: Parallel Implementation of a Wavelet Transform and its Application to
Image Registration
keywords{45}: curator
larry
page
picha
previous
return
the
images{42}: graphics/jacquline.gif
graphics/return.gif
headings{117}: Parallel Implementation of a Wavelet Transform and its Application to
Image Registration
Return
to the PREVIOUS PAGE
body{3336}:
Objective: To provide a fast multi-resolution wavelet
decomposition (and reconstruction) which can be utilized in many
applications, such as image compression, browsing, and registration.
Approach: A wavelet transform is a very flexible mathematical tool
which describes simultaneously the spatial and the frequency content of
image data. In particular, multi-resolution wavelet transforms provide
this description at multiple scales by iteratively filtering the image
by low-pass and high-pass filters, and reducing the size of the image
by two in each direction at each iteration (this step being called
"decimation"). When this process is applied to remote sensing data, the
wavelet description can be the basis of many data management
applications, especially image registration. Figure 1 shows the wavelet
decomposition of an AVHRR image of the Pacific Northwest area.
For
image registration purposes, the wavelet decomposition extracts strong
image characteristics which can be utilized as ground reference points
to define the correspondence between several images, enabling automatic
registration. With a fast parallel implementation of the wavelet
transform, this type of process could easily be performed very rapidly
for large amounts of data.
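For illustration, one decomposition level can be sketched in C as follows
(the simple Haar filter pair and the names used here are chosen only for
the example; the actual implementation is the data-parallel MasPar code
described below):

/* Illustrative sketch: one level of a 2D Haar wavelet decomposition of an
 * n x n image (n even).  Each 2x2 block is low-pass filtered into the
 * top-left quadrant and high-pass filtered into three detail quadrants,
 * decimating by two in each direction. */
void haar_level(const float *in, float *out, int n)
{
    int h = n / 2;
    for (int r = 0; r < h; r++) {
        for (int c = 0; c < h; c++) {
            float a = in[(2*r)   * n + 2*c], b = in[(2*r)   * n + 2*c + 1];
            float d = in[(2*r+1) * n + 2*c], e = in[(2*r+1) * n + 2*c + 1];
            out[r * n + c]           = (a + b + d + e) / 4.0f; /* low-pass (average)        */
            out[r * n + c + h]       = (a - b + d - e) / 4.0f; /* detail: column differences */
            out[(r + h) * n + c]     = (a + b - d - e) / 4.0f; /* detail: row differences    */
            out[(r + h) * n + c + h] = (a - b - d + e) / 4.0f; /* detail: diagonal           */
        }
    }
}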
Accomplishments: A preliminary study of a
parallel implementation of the multi-resolution wavelet decomposition
was accomplished. A first prototype of parallel image registration
involving image rotations and translations has been implemented on the
MasPar MP-2, and tested with AVHRR and Landsat Pathfinder datasets.
Collaboration with Dr. T.A. El-Ghazawi from George Washington
University and Dr. J.C. Liu from Texas A&M University was initiated, and
resulted in five different algorithms which have been developed and run
on a mesh-connected, massively parallel architecture, the MasPar MP-2
(some of them have also been tested on a MasPar MP-1). These five
methods differ by the methods used for filtering and decimation, and
also by the virtualizations necessary to map the data onto the parallel
array. Results show that over a sequential implementation, a parallel
implementation offers an improvement in speed anywhere from 200 to
nearly 600 times. These results are summarized in two papers, one to be
published by the International Journal on Computers and their
Applications, and the second one being submitted to Frontiers'95.
Significance: A fast parallel implementation of wavelet decomposition
and reconstruction of image data is important not only because it is
useful for many data management applications, but also because it is
representative of typical pre-processing which will have to be applied
routinely to large amounts of remotely sensed data.
Status/Plans:
In FY95, parallel image registration utilizing a wavelet decomposition
will be pursued, and extended to more general image transformations.
Collaboration with Dr. T. Sterling has also been initiated and will be
pursued to integrate the wavelet code in the ESS Parallel Benchmarks
project (EPB 1.0), and in the Beowulf Parallel Linux Project
(workstation environment of 1 Gops, with 16 processors, 256 MBytes of
memory and 8 GBytes of disk).
Point of Contact: Jacqueline
Le Moigne
CESDIS
Goddard Space Flight Center
(301) 286-8723
lemoigne@nibbles.gsfc.nasa.gov
curator: Larry Picha
MD5{32}: c5bce94212d1823ccdc714929d68bbfe
File-Size{4}: 3992
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{69}: Parallel Implementation of a Wavelet Transform and its Application to
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/nren/keck.html
Update-Time{9}: 827948657
url-references{107}: http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/nren.html
mailto:lpicha@cesdis.gsfc.nasa.gov
title{24}: ACTS Keck/GCM Experiment
keywords{45}: curator
larry
page
picha
previous
return
the
images{19}: graphics/return.gif
headings{135}: Advanced Communications Technology Satellite (ACTS) Keck
Observatory/Global Climate Model (GCM) Experiment
Return
to the PREVIOUS PAGE
body{3525}:
Objective: The objectives of the ACTS/Keck GCM experiment are to
(1) demonstrate distributed supercomputing (meta-supercomputer) over a
high performance satellite link, (2) demonstrate remote science data
gathering, control, and analysis (telescience) with meta-supercomputer
resources using multiple satellite hops, and (3) determine optimum
satellite terminal/supercomputer host network protocol design for
maximum meta-supercomputer efficiency.
Approach: The two ACTS
experiments, Keck and GCM, will be led by JPL and GSFC, respectively,
with support from Caltech, UCLA, GWU, and Hawaii PacSpace. The GCM
experiment will require a virtual channel connection between the JPL
Cray T3D and the GSFC Cray C90, while the Keck experiment will require
a virtual channel connection between a remote control room at Caltech
in Pasadena, CA, and the Keck Observatory local area network on Mauna
Kea, Hawaii. Based on the expected availability of network switch and
host ATM SONET OC-3 equipment by early CY95, ATM was selected as the
base transport mechanism. This greatly simplifies the terrestrial
network infrastructure, especially in the Hawaiian islands and ATDnet.
A striped (4X OC-3) HIPPI/SONET gateway will be used as a backup should
all the ATM infrastructure not be available by early CY95.
For Keck,
Caltech will modify the graphical user interface (GUI) design for use
over longer delay channels and multi-user/location control (an
adaptation of one currently used), JPL will perform the network system
engineering and atmospheric/fading BER analysis, and GWU the HDR site
design and performance modeling. Additionally, PacSpace will assist
with scheduling the use of the Honolulu HDR and engineering the
Honolulu/Mauna Kea network infrastructure. For GCM, GSFC will lead the
porting of the distributed global climate model to the JPL and GSFC
Cray supercomputers. GSFC staff scientists will port the Poseidon OGCM
and Aries AGCM codes for coupling with UCLA AGCM and GFDL OGCM codes.
In both experiments, the effect of fading, burst noise, and long
transit delays will be examined and compared against lower error rate
terrestrial links.
Accomplishments: During the past year, the
project wide proposal was written (Aug. 93) and later revised (in Jan
94) to reflect later HDR delivery. In Dec. 93, the overall network
infrastructure was refined to include ATM, and in May 94, the Hawaiian
"last mile" fiber/microwave network infrastructure design was
completed. In Jul. 94, JPL completed an atmospheric fading model and
GSFC completed an integrated ATDnet network design that permits ATM,
HIPPI, and raw SONET connectivity to NASA and ARPA experiment users.
Significance: This pair of experiments will demonstrate the feasibility
of using long path delay satellite links to establish meta-computing
and control/data acquisition networks for remote collaboration,
observation, and control of science experiments in hostile
environments. Examples include Antarctic and undersea exploration,
petroleum exploration, and interconnecting data centers to share large
data bases.
Status/Plans: Both applications will be designed,
ported, and debugged over low speed Internet connections during the
next year. Full HDR deployment and network connectivity is expected by
Jul. 95, at which time high bandwidth trials are expected to commence,
lasting for 9 additional months (to Mar 96).
Point of Contact:
Larry A. Bergman
Jet Propulsion Laboratory
(818) 354-4689
bergman@kronos.jpl.nasa.gov
curator: Larry Picha
MD5{32}: caed50375f4e96ab2209619307c994d2
File-Size{4}: 4068
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{24}: ACTS Keck/GCM Experiment
}
@FILE { http://cesdis.gsfc.nasa.gov/linux/beowulf/details.html
Update-Time{9}: 827948620
url-references{71}: beowulf.html
beowulf.html
http://www.maxtor.com/
http://www.maxtor.com/
title{18}: Details of Beowulf
keywords{43}: beowulf
description
details
maxtor
project
headings{85}: Beowulf Project Details
Purpose
Processor
Motherboard
Memory
Disk
Scalable Network
body{3253}:
This file isn't a standalone document. It supports and elaborates on
the Beowulf Project Description .
The processors in the current Beowulf nodes are Intel DX4 processors.
This processor is a hybrid between the 80486 and the Intel P5 Pentium.
Its features are: a '486 execution core with improved microcode; SMM
(System Management Mode) power management from the SL series; a 16KB
cache, the same as the P5 and twice the 8K of the '486; and fabrication
with the same 3.3V, 0.6 micron process, on the same process lines as
the P5-90 and P5-100 processors.
The net effect is that the DX4-100 processor is more than 50% faster
than the 486DX2-66 processor. Compared to the P5-60 it has slightly
better integer performance and somewhat worse floating point
performance, at a significantly lower cost.
The motherboards are based on the SiS 82471 chipset. This was the
highest-performance low-cost '486 support chipset available at the time
we purchased the system. Each motherboard has 3 VL-bus slots (2 of them
bus-master capable), 4 ISA-only slots, a 256K secondary cache with
2-1-1-1 burst refill, and "green" power-saving circuitry.
We expect the next system to use PCI Pentium motherboards based on
either the Intel Neptune or Triton chipsets. Both have good performance
at low cost. The newer Triton chipset has the advantage of an
integrated PCI bus-master EIDE controller and potentially better memory
bandwidth when used with EDO DRAM, but motherboards using this chipset
may not be available in time.
Each processor has 16M of 60ns DRAM. The 60ns memories are only
slightly more expensive than the usual 70ns or 80ns variety, and allow
us to use a shorter delay when accessing main memory. The higher memory
bandwidth is especially important when the internally clock-tripled
processor is doing block memory moves.
Beowulf is using Maxtor EIDE disks connected to a VL-bus controller
based on the DTC805 chip. The measured performance is about 4.5
MB/sec., close to the physical head data rate of the drive (nominally
3.5-5.6 MB/sec, depending on the zone).
The scalable communication is implemented by duplicating the hardware
address of a primary network adaptor to the secondary interfaces, and
marking all packets received on the internal networks as coming from a
single pseudo-interface. This scheme requires each internal network to
connect to every node. With these constraints the Ethernet packet
contents are independent of the actual interface used, and we avoid the
software routing overhead of handling more general interconnect
topologies.
The only additional computation over using a single network interface
is the computationally simple task of distributing the packets over the
available device transmit queues. The current method is to alternate
packets among the available network interfaces, as sketched below.
The system-visible interface to this "channel bonding" is the
'ifenslave' command. This command is analogous to the 'ifconfig'
command used to set up the primary network interface. The 'ifenslave'
command copies the configuration of a "master" channel to a slave
channel. It can optionally configure the slave channel to run in a
receive-only mode, which is useful when initially configuring or
shutting down the additional network interfaces.
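The alternating transmit policy described above is simple enough to
sketch. The following is an illustrative user-space model of the
round-robin queue selection, not the Beowulf kernel code; the structure
and function names are invented for the example.

/* Illustrative sketch of "alternate packets among the available
   network interfaces".  User-space model only; names are invented. */
#include <stdio.h>

#define MAX_SLAVES 4

struct bond_channel {
    const char *slave_name[MAX_SLAVES];  /* e.g. "eth0", "eth1", ... */
    int         num_slaves;
    int         next;                    /* index of next transmit queue */
};

/* Pick the device transmit queue for the next outgoing packet. */
static const char *bond_pick_slave(struct bond_channel *ch)
{
    const char *dev = ch->slave_name[ch->next];
    ch->next = (ch->next + 1) % ch->num_slaves;   /* simple alternation */
    return dev;
}

int main(void)
{
    struct bond_channel ch = { { "eth0", "eth1" }, 2, 0 };
    for (int pkt = 0; pkt < 6; pkt++)
        printf("packet %d -> %s\n", pkt, bond_pick_slave(&ch));
    return 0;
}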
MD5{32}: aded8d0bdc45e97d037c0e95574ead8d
File-Size{4}: 3898
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{18}: Details of Beowulf
}
@FILE { http://cesdis.gsfc.nasa.gov/pub/linux/diag/e21.c
Update-Time{9}: 827948614
Partial-Text{2193}: main
mem_off
mem_on
stdio.h
stdlib.h
unistd.h
asm/io.h
getopt.h
sys/types.h
sys/stat.h
sys/mman.h
fcntl.h
/* e21.c: Diagnostic program for Cabletron E2100 ethercards. */
/*
Written 1993,1994 by Donald Becker.
Copyright 1994 Donald Becker
Copyright 1993 United States Government as represented by the
Director, National Security Agency.
This software may be used and distributed according to the terms of the
GNU Public License, incorporated herein by reference.
The author may be reached as becker@cesdis.gsfc.nasa.gov.
C/O USRA Center of Excellence in Space Data and Information Sciences
Code 930.5 Bldg. 28, Nimbus Rd., Greenbelt MD 20771
*/
/* #include "8390.h" */
/* Offsets from the base_addr. */
/* Offset to the 8390 NIC. */
/* The E21** series ASIC, known as PAXI. */
/* The following registers are heavy-duty magic. Their obvious function is
to provide the hardware station address. But after you read from them the
three low-order address bits of the next outb() sets a write-only internal
register! */
/* Enable memory in 16 bit mode. */
/* Enable memory in 8 bit mode. */
/* Low three bits of the IRQ selection. */
/* High bit of the IRQ, and media select. */
/* Offset to station address data. */
/* This is a little weird: set the shared memory window by doing a
read. The low address bits specify the starting page. */
/* { name has_arg *flag val } */
/* Give help */
/* Give help */
/* Force an operation, even with bad status. */
/* Interrupt number */
/* Verbose mode */
/* Display version number */
/* Probe for E2100 series ethercards.
E21xx boards have a "PAXI" located in the 16 bytes above the 8390.
The "PAXI" reports the station address when read, and has an wierd
address-as-data scheme to set registers when written.
*/
/* Needed for SLOW_DOWN_IO. */
/* Restore the old values. */
/* Do a media probe. This is magic.
First we set the media register to the primary (TP) port. */
/* Select if_port detect. */
/*printk(" %04x%s", mem[0], (page & 7) == 7 ? "\n":"");*/
/* do_probe(port_base);*/
/*
* Local variables:
* compile-command: "gcc -Wall -O6 -N -o e21 e21.c"
* tab-width: 4
* c-indent-level: 4
* End:
*/
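The "address-as-data" write sequence described in the comments above
can be sketched as follows. The base address and offsets are
placeholders rather than the real E2100 definitions, and the program
needs root privileges for ioperm().

/* Sketch of the PAXI address-as-data write: a read arms the latch, and
   the low three address bits of the following outb() select the
   write-only internal register.  Offsets are placeholders. */
#include <stdio.h>
#include <sys/io.h>     /* inb(), outb(), ioperm() on x86 Linux */

#define E21_BASE   0x300     /* assumed card base I/O address */
#define E21_PAXI   0x10      /* assumed offset of the PAXI register bank */

/* Write 'value' into write-only PAXI register 'reg' (0-7). */
static void paxi_write(int reg, unsigned char value)
{
    (void)inb(E21_BASE + E21_PAXI);                  /* the read arms the latch */
    outb(value, E21_BASE + E21_PAXI + (reg & 7));    /* low 3 address bits pick the register */
}

int main(void)
{
    if (ioperm(E21_BASE, 0x20, 1)) {
        perror("ioperm (must be root)");
        return 1;
    }
    paxi_write(2, 0x00);    /* purely illustrative register/value */
    return 0;
}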
MD5{32}: 1d252253b6c856c4c40a4ea8bc381cde
File-Size{4}: 5710
Type{1}: C
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Keywords{1118}: above
according
addr
address
after
agency
and
are
arg
asic
asm
author
bad
base
becker
bit
bits
bldg
boards
but
bytes
cabletron
center
cesdis
code
command
compile
copyright
data
detect
diagnostic
director
display
distributed
doing
donald
down
duty
enable
end
ethercards
even
excellence
fcntl
first
flag
following
for
force
from
function
gcc
getopt
give
gnu
gov
government
greenbelt
gsfc
hardware
has
have
heavy
help
herein
high
include
incorporated
indent
information
internal
interrupt
irq
know
level
lincese
little
local
located
low
magic
main
may
media
mem
memory
mman
mode
name
nasa
national
needed
next
nic
nimbus
number
obvious
off
offset
offsets
old
only
operation
order
outb
page
paxi
port
primary
printk
probe
program
provide
public
reached
read
reference
register
registers
reports
represented
restore
scheme
sciences
security
select
selection
series
set
sets
shared
slow
software
space
specify
starting
stat
states
station
status
stdio
stdlib
sys
tab
terms
the
their
them
this
three
types
unistd
united
used
usra
val
values
variables
verbose
version
wall
weird
when
width
wierd
window
with
write
written
you
Description{4}: main
}
@FILE { http://cesdis.gsfc.nasa.gov/linux/drivers/v1.3/3c59x.c
Update-Time{9}: 827948605
Partial-Text{4786}: EL3WINDOW
cleanup_module
init_module
set_multicast_list
tc59x_init
update_stats
vortex_close
vortex_get_stats
vortex_interrupt
vortex_open
vortex_probe1
vortex_rx
vortex_start_xmit
linux/config.h
linux/module.h
linux/version.h
linux/kernel.h
linux/sched.h
linux/string.h
linux/ptrace.h
linux/errno.h
linux/in.h
linux/ioport.h
linux/malloc.h
linux/interrupt.h
linux/pci.h
linux/bios32.h
asm/bitops.h
asm/io.h
asm/dma.h
linux/netdevice.h
linux/etherdevice.h
linux/skbuff.h
/* 3c59x.c: A 3Com 3c590/3c595 "Vortex" ethernet driver for linux. */
/*
NOTICE: this driver version designed for kernel 1.2.0!
Written 1995 by Donald Becker.
This software may be used and distributed according to the terms
of the GNU Public License, incorporated herein by reference.
This driver is for the 3Com "Vortex" series ethercards. Members of
the series include the 3c590 PCI EtherLink III and 3c595-Tx PCI Fast
EtherLink. It also works with the 10Mbs-only 3c590 PCI EtherLink III.
The author may be reached as becker@CESDIS.gsfc.nasa.gov, or C/O
Center of Excellence in Space Data and Information Sciences
Code 930.5, Goddard Space Flight Center, Greenbelt MD 20771
*/
/* This will be in linux/etherdevice.h someday. */
/* The total size is twice that of the original EtherLinkIII series: the
runtime register window, window 1, is now always mapped in. */
/*
Theory of Operation
I. Board Compatibility
This device driver is designed for the 3Com FastEtherLink, 3Com's PCI to
10/100baseT adapter. It also works with the 3c590, a similar product
with only a 10Mbs interface.
II. Board-specific settings
PCI bus devices are configured by the system at boot time, so no jumpers
need to be set on the board. The system BIOS should be set to assign the
PCI INTA signal to an otherwise unused system IRQ line. While it's
physically possible to share PCI interrupt lines, the 1.2.0 kernel doesn't
support it.
III. Driver operation
The 3c59x series use an interface that's very similar to the previous 3c5x9
series. The primary interface is two programmed-I/O FIFOs, with an
alternate single-contiguous-region bus-master transfer (see next).
One extension that is advertised in a very large font is that the adapters
are capable of being bus masters. Unfortunately this capability is only for
a single contiguous region making it less useful than the list of transfer
regions available with the DEC Tulip or AMD PCnet. Given the significant
performance impact of taking an extra interrupt for each transfer, using
DMA transfers is a win only with large blocks.
IIIC. Synchronization
The driver runs as two independent, single-threaded flows of control. One
is the send-packet routine, which enforces single-threaded use by the
dev->tbusy flag. The other thread is the interrupt handler, which is single
threaded by the hardware and other software.
IV. Notes
Thanks to Cameron Spitzer and Terry Murphy of 3Com for providing both
3c590 and 3c595 boards.
The name "Vortex" is the internal 3Com project name for the PCI ASIC, and
the not-yet-released (3/95) EISA version is called "Demon". According to
Terry these names come from rides at the local amusement park.
The new chips support both ethernet (1.5K) and FDDI (4.5K) packet sizes!
This driver only supports ethernet packets because of the skbuff allocation
limit of 4K.
*/
/* 3Com's manufacturer's ID. */
/* Operational definitions.
These are not used by other compilation units and thus are not
exported in a ".h" file.
First the windows. There are eight register windows, with the command
and status registers available in each.
*/
/* The top five bits written to EL3_CMD are a command, the lower
11 bits are the parameter, if applicable.
Note that 11 parameter bits were fine for ethernet, but the new chip
can handle FDDI-length frames (~4500 octets), so parameters are now
counted in 32-bit 'Dwords' rather than octets. */
/* The SetRxFilter command accepts the following classes: */
/* Bits in the EL3_STATUS general status register. */
/* Latched interrupt. */
/* Host error. */
/* EL3_CMD is still busy.*/
/* Register window 1 offsets, the window used in normal operation.
On the Vortex this window is always mapped at offsets 0x10-0x1f. */
/* Remaining free bytes in Tx buffer. */
/* Window 0: EEPROM command register. */
/* Enable erasing/writing for 10 msec. */
/* Disable EWENB before 10 msec timeout. */
/* EEPROM locations. */
/* Window 3: MAC/config bits. */
/* Window 4: Various transcvr/media bits. */
/* Enable link beat and jabber for 10baseT. */
/* A marker for kernel snooping. */
/* Unlike the other PCI cards the 59x cards don't need a large contiguous
memory region, so making the driver a loadable module is feasible.
*/
/* Remove I/O space marker in bit 0. */
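The command-word layout described above (top five bits command, low
eleven bits parameter) can be illustrated with a small helper. The
command number used in the example is an assumption for illustration,
not taken from the driver source.

/* Sketch of the EL3_CMD word layout: 5-bit command in the top bits of
   the 16-bit register, 11-bit parameter in the low bits. */
#include <stdio.h>

static unsigned short el3_cmd_word(unsigned cmd, unsigned param)
{
    return (unsigned short)((cmd << 11) | (param & 0x07ff));
}

int main(void)
{
    /* e.g. "select register window 3", assuming a SelectWindow opcode of 1 */
    printf("command word = 0x%04x\n", el3_cmd_word(1, 3));
    return 0;
}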
MD5{32}: 4f0042f58cd111c3a476b7721a06b86b
File-Size{5}: 25025
Type{1}: C
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Keywords{2246}: accepts
according
adapter
adapters
advertised
allocation
also
alternate
always
amd
amusement
and
applicable
are
asic
asm
assign
author
available
baset
beat
because
becker
before
being
bios
bit
bitops
bits
blocks
board
boards
boot
both
buffer
bus
busy
but
bytes
called
cameron
can
capability
capable
cards
center
cesdis
chip
chips
classes
cleanup
close
cmd
code
com
come
command
compatibility
compilation
config
configured
contiguous
control
count
data
dec
defintions
demon
designed
dev
device
devices
disable
distributed
dma
doesn
don
donald
driver
dwords
each
eeprom
eight
eisa
enable
enforces
erasing
errno
error
ethercards
etherdevice
etherlink
etherlinkiii
ethernet
ewenb
excellence
exported
extension
extra
fast
fastetherlink
fddi
feasible
fifos
file
fine
first
five
flag
flight
flows
following
font
for
frames
free
from
general
get
given
gnu
goddard
gov
greenbelt
gsfc
handle
handler
hardware
herein
host
iii
iiic
impact
include
incorporated
independent
information
init
inta
interface
internal
interrupt
ioport
irq
jabber
jumpers
kernel
large
latched
lenght
less
license
limit
line
lines
link
linux
list
loadable
local
locations
lower
mac
making
malloc
manufacturer
mapped
marker
master
masters
may
mbs
media
members
memory
module
msec
multicast
murphy
name
names
nasa
need
netdevice
new
next
normal
not
note
notes
notice
now
octets
offsets
one
only
open
operation
operational
original
other
otherwise
packet
packets
parameter
parameters
park
pci
pcnet
performance
physically
possible
previous
primary
probe
product
programmed
project
providing
ptrace
public
rather
reached
reference
region
regions
register
registers
released
remaining
remove
rides
routine
runs
runtime
sched
sciences
see
send
series
set
setrxfilter
settings
shared
should
signal
significant
similar
single
size
sizes
skbuff
snooping
software
someday
space
specific
spitzer
start
stats
status
still
string
support
supports
synchronization
system
taking
tbusy
terms
terry
than
thanks
that
the
theory
there
these
this
thread
threaded
thus
time
timeout
top
total
transcvr
transfer
transfers
tulip
twice
two
unfortunately
units
unlike
unused
update
use
used
useful
using
various
version
very
vortex
was
which
while
will
win
window
windows
with
works
writing
written
xmit
yet
Description{9}: EL3WINDOW
}
@FILE { http://cesdis.gsfc.nasa.gov/pub/linux/html/Ethernet-HOWTO-4.html
Update-Time{9}: 827948630
url-references{286}: Ethernet-HOWTO.html#toc4
Ethernet-HOWTO-9.html#faq
Ethernet-HOWTO-9.html#ne2k-probs
Ethernet-HOWTO-3.html#e10xx
Ethernet-HOWTO-3.html#de-100
Ethernet-HOWTO-3.html#dfi-300
Ethernet-HOWTO-5.html
Ethernet-HOWTO-3.html
Ethernet-HOWTO.html#toc4
Ethernet-HOWTO.html#toc
Ethernet-HOWTO.html
#0
title{33}: Clones of popular Ethernet cards.
keywords{193}: accton
all
aritsoft
beginning
cabletron
cards
chapter
clones
contents
dfi
dfinet
ethernet
faq
lan
lantastic
lcs
link
next
poor
popular
previous
problems
section
shinenet
table
tec
the
this
top
headings{47}: 4 Clones of popular Ethernet cards.
4.1
4.2
body{1994}:
Due to the popular design of some cards, different companies will make
`clones' or replicas of the original card. However, one must be
careful, as some of these clones are not 100% compatible, and can be
troublesome. Some common problems with `not-quite-clones' are noted in
the FAQ section .
This section used to have a listing of a whole bunch of clones that
were reported to work, but seeing as nearly all clones will work, it
makes more sense to list the ones that don't work 100%.
Poor NE2000 Clones
Here is a list of some of the NE-2000 clones that are known to have
various problems. Most of them aren't fatal. In the case of the ones
listed as `bad clones' -- this usually indicates that the cards don't
have the two NE2000 identifier bytes. NEx000-clones have a Station
Address PROM (SAPROM) in the packet buffer memory space. NE2000 clones
have the value 0x57 in bytes 14 and 15 of the SAPROM, while other
supposed NE2000 clones must be detected by their SA prefix.
Accton NE2000 -- might not get detected at boot, see ne2000 problems .
Aritsoft LANtastic AE-2 -- OK, but has flawed error-reporting registers.
AT-LAN-TEC NE2000 -- clone uses a Winbond chip that traps SCSI drivers.
ShineNet LCS-8634 -- clone uses a Winbond chip that traps SCSI drivers.
Cabletron E10**, E20**, E10**-x, E20**-x -- bad clones, but the driver
checks for them. See E10** .
D-Link Ethernet II -- bad clones, but the driver checks for them. See
DE-100 / DE-200 .
DFI DFINET-300, DFINET-400 -- bad clones, but the driver checks for
them. See DFI-300 / DFI-400 .
Poor WD8013 Clones
I haven't heard of any bad clones of these cards, except perhaps for
some chameleon-type cards that can be set to look like a ne2000 card or
a wd8013 card. There is really no need to purchase one of these
`double-identity' cards anyway.
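The "two NE2000 identifier bytes" test mentioned above can be sketched
in a few lines; the function name is invented for this example, and the
0x57 values are the ones checked by the Linux NE2000 probe.

/* Sketch of the NE2000 identifier-byte check: genuine NE2000-compatible
   cards carry 0x57 in bytes 14 and 15 of the station address PROM. */
#include <stdio.h>

static int looks_like_ne2000(const unsigned char saprom[16])
{
    return saprom[14] == 0x57 && saprom[15] == 0x57;
}

int main(void)
{
    unsigned char good[16] = { [14] = 0x57, [15] = 0x57 };  /* identifier bytes set */
    unsigned char bad[16]  = { 0 };                         /* a "bad clone" */

    printf("good card: %s\n", looks_like_ne2000(good) ? "NE2000" : "not detected");
    printf("bad clone: %s\n", looks_like_ne2000(bad)  ? "NE2000" : "not detected");
    return 0;
}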
MD5{32}: 0ce4a5d65a26d5b1de6912ffc7320148
File-Size{4}: 2902
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{33}: Clones of popular Ethernet cards.
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/accomp/94accomp/cas94.accomps/cas4.html
Update-Time{9}: 827948645
title{45}: Numerical Propulsion Simulation System (NPSS)
keywords{44}: npss
numerical
propulsion
simulation
system
images{52}: hpcc.graphics/hpcc.header.gif
hpcc.graphics/npss.gif
headings{46}: Numerical Propulsion Simulation System (NPSS)
MD5{32}: fad1dcf7dd3411835e278bd8792593b1
File-Size{4}: 3782
Type{4}: HTML
Gatherer-Version{3}: 1.0
Gatherer-Host{21}: cesdis1.gsfc.nasa.gov
Gatherer-Name{48}: Contents of the cesdis1.gsfc.nasa.gov WWW server
Refresh-Rate{7}: 2419200
Time-to-Live{8}: 14515200
Description{45}: Numerical Propulsion Simulation System (NPSS)
}
@FILE { http://cesdis.gsfc.nasa.gov/hpccm/annual.reports/ess94contents/guest.ci/schow.html
Update-Time{9}: 827948652
url-references{171}: http://dialsparc10.ece.arizona.edu/hpcc_graphic.html
http://dialsparc10.ece.arizona.edu/hpcc_graphic.html
- Abstract - Brief abstract about the object.
- Author - Author(s) of the object.
- Description - Brief description about the object.
- File-Size - Number of bytes in the object.
- Full-Text - Entire contents of the object.
- Gatherer-Host - Host on which the Gatherer ran to extract information
from the object.
- Gatherer-Name - Name of the Gatherer that extracted information from
the object (e.g. Full-Text, Selected-Text, or Terse).
- Gatherer-Port - Port number on the Gatherer-Host that serves the
Gatherer's information.
- Gatherer-Version - Version number of the Gatherer.
- Keywords - Searchable keywords extracted from the object.
- Last-Modification-Time - The time that the object was last modified
(in seconds since the epoch).
- MD5 - MD5 16-byte checksum of the object.
- Partial-Text - Only the selected contents from the object.
- Refresh-Rate - How often the Broker attempts to update the content
summary (in seconds relative to Update-Time).
- Time-to-Live - How long the content summary is valid (in seconds
relative to Update-Time).
- Title - Title of the object.
- Type - The object's type. Some example types are: Archive, Audio,
Awk, Backup, Binary, C, CHeader, Command, Compressed, CompressedTar,
Configuration, Data, Directory, DotFile, Dvi, FAQ, FYI, Font,
FormattedText, GDBM, GNUCompressed, GNUCompressedTar, HTML, Image,
Internet-Draft, MacCompressed, Mail, Makefile, ManPage, Object,
OtherCode, PCCompressed, Patch, Perl, PostScript, RCS, README, RFC,
SCCS, ShellArchive, Tar, Tcl, Tex, Text, Troff, Uuencoded, and
WaisSource.
- Update-Time - The time that the Gatherer updated (generated) the
content summary from the object (in seconds since the epoch).
- URL - The original URL of the object.
- URL-References - Any URL references present within HTML objects.