Path: senator-bedfellow.mit.edu!bloom-beacon.mit.edu!nic.hookup.net!europa.eng.gtefsd.com!library.ucla.edu!agate!darkstar.UCSC.EDU!osr
From: bos@scrg.cs.tcd.ie (Bryan O'Sullivan)
Newsgroups: comp.os.research,comp.answers,news.answers
Subject: Comp.os.research: Frequently answered questions [1/2]
Followup-To: poster
Date: 1 Dec 1993 09:58:40 GMT
Organization: SCRG, Department of Computer Science, Trinity College Dublin
Lines: 690
Approved: comp-os-research@cse.ucsc.edu, news-answers-request@mit.edu
Message-ID: <2dhpsg$her@darkstar.UCSC.EDU>
Reply-To: os-faq@cse.ucsc.edu
NNTP-Posting-Host: ftp.cse.ucsc.edu
Summary: frequent topics of discussion on the operating systems research group
Originator: osr@ftp
Xref: senator-bedfellow.mit.edu comp.os.research:3343 comp.answers:2862 news.answers:15262
Archive-name: os-research/part1
Version: $Revision: 1.3 $
Last-Modified: $Date: 1993/11/28 18:01:08 $
Answers to frequently asked questions
for comp.os.research: part 1 of 2
Bryan O'Sullivan
Please note that this article is still very much in beta at the
moment; I make no particular claims as to the accuracy of the material
below, as some of the information is quite old.
TABLE OF CONTENTS
1. Introduction
1.1. How to read this article
1.2. Reader contributions and comments
1.3. Acknowledgments and caveats
2. Recurrent discussions
2.1. Microkernels, macrokernels, and the in-betweenies
2.2. Threads
2.2.1. Distinguishing features
2.2.2. Characterising implementations of multithreading
2.2.3. The history of threads
3. File systems
3.1. Extent-based versus log-structured file systems
4. Distributed systems
4.1. What is the current status of the (insert name) project?
4.2. How do approaches to load balancing differ?
4.3. Fault tolerance in distributed systems
4.4. Naming in distributed systems
4.5. Mobile and disconnected operation
4.6. Distributed shared memory
4.7. What have we learned?
5. Operating systems teaching
5.1. What good undergraduate-level texts are available?
5.2. What instructional operating systems can I use?
5.3. Where can I find the canonical list of OS papers for grad courses?
------------------------------
Subject: [1] Introduction
From: Introduction
This posting consists of answers to many of the questions most
frequently asked and summaries of the topics most frequently covered
on comp.os.research, the Usenet operating systems research discussion
group. The purpose of this posting is to circulate existing
information, and to avoid rehashing old topics of discussion and
questions. Please read both parts of this document before posting to
this newsgroup.
This newsgroup is moderated; the moderator is Darrell Long
<darrell@cse.ucsc.edu>. A companion posting to the FAQs, `Welcome to
comp.os.research', briefly covers the moderation policy and guidelines
for posting to comp.os.research. It can be found in either
comp.os.research or news.answers, and is posted regularly.
Due to its size, the FAQ is split up into two parts; each is posted
once a month. The welcome posting is posted weekly.
Note: chunks of text of the form [92-02-12-21-20.29] indicate the
original posting from which a section of this article was inspired,
snarfed, or just plain copied wholesale. Other chunks in square
brackets are comments and reminders to myself. These latter chunks of
text will be removed as appropriate material is added, but the
attributions will remain.
------------------------------
Subject: [1.1] How to read this article
From: Introduction
This article is posted in digest format; using the `G%' command from
within the `nn' newsreader should split it up into separate
sub-articles which you can browse through.
To skip to a particular question numbered n.m, use `/: \[n\.m\]' from
most pagers. From within GNU Emacs, you can use `C-s [n.m]'. This
article is treated as an outline when edited by GNU Emacs.
------------------------------
Subject: [1.2] Reader contributions and comments
From: Introduction
Your contributions, comments, and corrections are welcomed; mail sent
to <os-faq@cse.ucsc.edu> will be dealt with as quickly as I can
manage. Generally, performing a reply or followup to this article
from within your newsreader should do the Right Thing.
While I am more than happy to include submissions of material for the
FAQ if they seem appropriate, it would make my life a lot easier if
such text were proof-read in advance, and kept concise. I don't have
as much time as I would like to digest 15K text files and summarise
them in three paragraphs for inclusion here.
------------------------------
Subject: [1.3] Acknowledgments and caveats
From: Introduction
Although this FAQ has been the result of a co-operative effort, any
blame for inaccuracies and errors lies entirely with me. I
would like to thank the following people for their part in
contributing to this article:
Surendar Chandra <surendar@cs.duke.edu>
Steve Chapin <sjc@cs.purdue.edu>
Crispin Cowan <crispin@csd.uwo.ca>
Dan Hildebrand <danh@qnx.com>
Gordon Irlam <gordoni@netcom.com>
Darrell Long <darrell@cse.ucsc.edu>
Peter Magnusson <psm@sics.se>
Craig Partridge <craig@bbn.com>
Robert Walsh <rjwalsh@scrg.cs.tcd.ie>
------------------------------
Subject: [2] Recurrent discussions
From: Recurrent discussions
A number of topics tend to appear with regularity in comp.os.research.
This section attempts to go over some of the most commonly-covered
ground; the list of topics covered is by no means exhaustive.
------------------------------
Subject: [2.1] Microkernels, macrokernels, and the in-betweenies
From: Recurrent discussions
A recurrent topic of discussion in this newsgroup has been the
comparison between microkernel (for example Mach and QNX) and
`macrokernel' (traditional Unix) operating systems. The basic notion
of a microkernel consists of devolving as much functionality as
possible into processes rather than the kernel itself; different
systems take different approaches to implementing this.
However, anecdotal evidence [93-03-03-07-56.52] suggests that the
distinction between microkernel and monolithic architectures is
becoming more blurred as time goes on, as the two advance. For
example, most modern monolithic kernels now implement multiple threads
of execution and fine-grained parallelism. Architecturally, this
approach begins to appear similar to a microkernel with several
kernel-space processes working from shared memory.
As an aside, people often complain that the Mach system can't be a
`real' microkernel, because it is so large (at least, this is the
argument most frequently cited). However, I have been told that
automatically-generated code stubs contribute very significantly to
the size of the kernel, and that some size reduction would be likely
if MIG (the stub generator) produced better code. [Can someone from
CMU comment on this?]
Debating microkernels versus monolithic kernels on the basis of kernel
size misses the central, architectural point. In the same way as the
point of a RISC processor is not to minimise the instruction count,
but rather to make a different tradeoff between what is implemented
in the processor instruction set and what is implemented in other
ways, the microkernel architectural issue is to determine which
services are implemented in the microkernel, and which services are
implemented external to that microkernel. By making appropriate
choices here, the goal is to enhance various OS attributes in a manner
that might not be addressable with a monolithic kernel OS. System
attributes such as performance, flexibility, realtime, etc. are all
variables which are taken into account.
Some history:
Ira Goldstein and Paul Dale were the coiners of the term `microkernel'
back around 1989.
------------------------------
Subject: [2.2] Threads
From: Recurrent discussions
The exact meaning of the term `thread' is not generally agreed upon.
One of the more common usages denotes a `lightweight' process
(sequential flow of control) which shares an address space and some
other resources with others, and for which context switching time is
lower than for `heavyweight' (i.e. kernel-supported) processes.
Throughout the following material, when we refer to `processes', this
can be taken as meaning heavyweight processes.
------------------------------
Subject: [2.2.1] Distinguishing features
From: Recurrent discussions
Some of the features which distinguish different approaches to
threading are listed below:
- Number of *concurrent* flows of control: generally, threads may
potentially make use of multiple processors in order to allow
several to execute concurrently. That is, the model usually takes
into consideration the possibility that there may be more than one
flow of control active at any time.
- Scheduling policy: a thread scheduler may be pre-emptive, in which
case a thread is put to sleep either when it waits upon some
resource or runs for the full duration of its time quantum, or
non-pre-emptive, in which case individual threads continue to run
until they relinquish the processor themselves (either through
waiting on a resource or calling the analogue of a sleep()
function).
Systems which are non-pre-emptive and may only ever have a single
active flow of control (regardless of the number of processors
available) are referred to as coroutine systems. Coroutine
programming requires quite a different approach from threads-based
programming, as many of the synchronisation and resource-sharing
problems which occur in threaded environments need never trouble the
coroutines programmer.
------------------------------
Subject: [2.2.2] Characterising implementations of multithreading
From: Recurrent discussions
An important distinction may be made between user-level threads and
kernel-supported threads. A user-level thread maintains all its state
in user space. A consequence of this is that no kernel resources need
to be allocated per thread, and switching between threads can be done
without changing address space. A disadvantage is that user-level
threads cannot execute while the kernel is busy, for instance, with
paging or I/O. This would require some knowledge and participation on
the part of the kernel.
It is possible to combine both methods, as is done in SunOS 5.x (aka
Solaris 2.x). Here, one or more lightweight processes (LWPs)
multitask one or more user-level threads, which in turn are
implemented using user-space libraries.
Some issues which characterise thread implementations, and which
determine the uses to which a threads package may be put, include:
- How much by way of kernel resources does a thread require? This
will typically limit the number of threads that can be started by a
process.
- Under what circumstances will the entire process hang? For
instance, if some thread gets a page fault, may another thread in
that process be dispatched?
- Does switching threads require a full system call (as on the SPARC),
or may context switches between threads be performed entirely at
user level?
- How are signals handled? Can signals be masked individually per
thread? Is there a `broadcast' signal?
- How are stacks handled? Will the stacks shrink/grow dynamically on
a per thread basis?
Several systems today (QNX and Plan 9, for instance) take the stance
that threads `fix the symptom, but not the problem'. Rather than
using threads because the OS context switch time is too slow, a better
approach, according to the architects of these systems, is to fix the
OS. It is ironic that, now that even PC-hosted desktop OSes provide
MMU-protected multitasking, the fashionable programming model has
become multiple threads running in a common address space, which
makes debugging difficult and reliable code harder to produce. With
fast context switching, existing OS services like
explicitly allocated shared memory between a team of cooperating
processes can create a `threaded' environment, without opening the
Pandora's box of problems that a fully shared memory space entails.
------------------------------
Subject: [2.2.3] The history of threads
From: Recurrent discussions
[93-04-21-13-32.11, 92-01-27-17-05.54] The notion of a thread, as a
sequential flow of control, dates back to 1965, at least, with the
Berkeley Timesharing System. Only they weren't called threads at that
time, but processes [Dijkstra]. Processes interacted through shared
variables, semaphores, and similar means. Max Smith did a prototype
threads implementation on Multics around 1970; it used multiple stacks
in a single heavyweight process to support background compilations.
Then came Unix, in the early 1970s. The Unix notion of a `process'
became a sequential thread of control *plus* a virtual address space
(incidentally, the Unix notion of a process derived directly from the
Multics process design [Saltzer]). So `processes', in the Unix sense,
are quite heavyweight machines. Since they cannot share memory (each
has its own address space), they interact through pipes, signals,
etc. Shared memory (also a rather ponderous mechanism) was added
much later.
After some time, Unix users started to miss the old processes that
could share memory. This led to the `invention' of threads: old-style
processes that shared the address space of a single Unix process.
They also were called `lightweight', by way of contrast with
`heavyweight' Unix processes. This distinction dates back to the very
late 70s or early 80s, i.e. to the first `microkernels' (Thoth
(precursor of the V-kernel and QNX), Amoeba, Chorus, the
RIG-Accent-Mach family, etc).
On a side note, threads have been in continuous use in
telecommunications applications for quite a long time.
See also:
Cheriton, D. R., `Multi-process structuring and the Thoth operating
system', Ph.D. Thesis, University of Waterloo, 1979.
Daley, R. C., Dennis, J. B., `Virtual memory, processes, and
sharing in Multics', Comm. ACM 11, 306-312, May 1968.
Dennis, J. B., van Horn, E. C., `Programming semantics for
multiprogrammed computations', MAC-TR-21, 1966.
Dijkstra, E. W., `Cooperating sequential processes', in `Programming
Languages', Genuys, F. (ed.), Academic Press, 1965.
Saltzer, J. H., `Traffic control in a multiplexed computer system',
MAC-TR-30 (Sc.D. Thesis), July, 1966.
------------------------------
Subject: [3] File systems
From: File systems
This field is discussed both here and in the comp.arch.storage
newsgroup. This section needs fleshing out at the moment; it will
grow over time (hopefully!).
------------------------------
Subject: [3.1] Extent-based versus log-structured file systems
From: File systems
[92-11-18-10-57.53, 92-11-22-10-16.26] A general definition for a
log-structured storage system might be the following: logging
provides append-only storage semantics. The unit of logging is the
record.
Write accesses append records to the end of the log. A log record may
become obsolete. Useless records are compacted out of the log when
possible. Other write accesses are forbidden.
An extent-based file system is another candidate for better filesystem
performance. The approach used under QNX, for example, is to have
files exist as an array of extents on disk, where each extent is as
many contiguous blocks as could be allocated at that location. By
using a bitmap for space allocation, files can also grow `in-place',
if adjacent free space on disk exists. This approach allows the unit
of disk space allocation to remain 512 bytes, which is also the
smallest unit of physical I/O. The potential performance bottleneck
of such a small allocation unit is avoided because the filesystem code
passes I/O requests to the disk drivers in units of extents, not 512-byte
blocks. The filesystem also heuristically modifies the size of the
pre-read requests based on the historical access pattern to the
file. This approach provides the performance advantages of larger
physical disk block sizes, without the wasted disk space that results
from unused portions of large blocks, or the complexity of trying to
allocate space from partially unused blocks.
------------------------------
Subject: [4] Distributed systems
From: Distributed systems
A great deal of the high-profile research carried out in operating
systems these days deals with distributed computing. Not
surprisingly, discussions of distributed systems make up a large
amount of the traffic on comp.os.research.
------------------------------
Subject: [4.1] What is the current status of the (insert name) project?
From: Distributed systems
See the section on `available software' for information on
distributions of some of the systems mentioned here.
- The Amoeba project is still going. There are roughly 20 people
working on it, but most of these are no longer kernel hackers. They
are working on using it for parallel programming, wide-area
distributed systems, and other things. Amoeba is used in over 100
universities at the moment, and is also used at commercial
institutions.
- Cronus is still under development at BBN. The current public
release is 3.0.
- Horus is being developed by the same group that worked on Isis; the
head of this group is Robbert van Renesse.
- Isis is no longer being developed at Cornell; it is now managed as a
commercial product.
- Mach: awaiting response from rfr
- Plan 9 is currently being restructured to make good use of a 300MBPS
fibre-optic network. A general release of the system is under
consideration at the moment.
- QNX is a commercial realtime OS with an installed base of over
250,000 systems. It is used extensively in process control, factory
automation, medical instrumentation, communications and
point-of-sale. A number of universities are also doing research
with QNX.
- The Sprite network operating system project is winding down. The
user community is shrinking, and only three people are currently
using the system as a basis for graduate research. Sprite is
continuing to be used as the testbed for the Berkeley RAID project.
------------------------------
Subject: [4.2] How do approaches to load balancing differ?
From: Distributed systems
Load-balancing policy falls into two broad groups: static and dynamic.
Static policies use algorithms which operate without regard to
run-time loads across a system, while dynamic policies use the
run-time performance of various parts of a system in order to make
more `informed' decisions about balancing.
[92-11-06-12-53.57] A dynamic load-balancing policy is one which uses
run-time state information in making scheduling decisions.
There are two kinds of dynamic policies: adaptive and non-adaptive.
Non-adaptive policies always use the same (fixed, load-dependent)
policy; adaptive policies may adjust their parameters in order to
gradually improve their performance.
The key point is that while non-adaptive policies use only the
information about the run-time state, adaptive policies use, in
addition to this, information about current performance.
In adaptive policies, the rules for adjusting policy parameters may be
static or dynamic. An example of the former might be: `shift to a
conservative migration rule when system-wide load patterns are varying
too rapidly'. An example of the latter could be: `increase
sender-side threshold when migrated jobs cause slowdown rather than
speedup'. Some researchers refer to the performance-driven adaptation
exhibited by the second policy as `learning'.
Since both non-adaptive policies and adaptive policies with static
rules really use only load information, it is confusing to distinguish
between them. One way to avoid such confusion is to restrict the use
of the word `adaptive' to policies that use performance feedback in
order to drive their adjustment of policy parameters.
[Steve can polish this section up, rearrange it, and add material.]
------------------------------
Subject: [4.3] Fault tolerance in distributed systems
From: Distributed systems
[There has been a lot of traffic on this. Material sought.]
------------------------------
Subject: [4.4] Naming in distributed systems
From: Distributed systems
[Material on naming and/or global naming sought.]
------------------------------
Subject: [4.5] Mobile and disconnected operation
From: Distributed systems
[Lots of questions, few answers.]
------------------------------
Subject: [4.6] Distributed shared memory
From: Distributed systems
[As for mobile systems.]
------------------------------
Subject: [4.7] What have we learned?
From: Distributed systems
Andy Tanenbaum started a (very long) thread on this topic in
comp.os.research in April of 1992 [92-04-03-17-10.05]. The interested
reader is directed to the comp.os.research archives.
[Does anyone want to distill that thread into something suitable for
inclusion here?]
------------------------------
Subject: [5] Operating systems teaching
From: Operating systems teaching
This section attempts to give some useful pointers to people who teach
operating systems, both at undergraduate and postgraduate level.
------------------------------
Subject: [5.1] What good undergraduate-level texts are available?
From: Operating systems teaching
The comments below have been provided by a variety of people, so any
`me's or `I's you encounter are not necessarily those of the
maintainer!
- `Operating System Concepts', third edition, by Abraham Silberschatz,
James Peterson, and Peter Galvin is a popular text. Addison-Wesley,
1991, ISBN 0-201-51379-X. It is popularly called the `dinosaur'
book, because the cover has pictures of dinosaurs to symbolise
various operating systems. For you Jurassic Park fans: Mach is a
velociraptor.
I think this is the `standard' OS text, although I have a couple of
others that I also think are good, and that I draw from when I teach
OS. The dinosaur book doesn't have the greatest organisation, and
sometimes wanders when describing things. Its strong point lies in
the copious examples.
The first 84 pages cover operating system basics, the next 120
pages cover process management including 30 pages on deadlocks.
130 pages on storage management: memory, virtual memory, secondary
storage. 70 pages on file systems and protection. Then 100 pages
on distributed systems. The last part of the book has case studies
on Unix and Mach: 50 pages on Unix and 30 pages on Mach. The last
chapter gives a short 10 page historical perspective.
The book gives a good (but slightly theoretical) overview of
operating system concepts. A good complement would be the books
covering Minix or BSD, which are more implementation-oriented.
The fourth edition of this book should be available from November
1993; James Peterson is no longer one of the authors involved.
- `An Operating Systems Vade Mecum', second edition, by Raphael
Finkel, 1988, Prentice Hall, ISBN 0-13-637950-8. I really like this
book; it is a bit more theoretical than the dinosaur book, but is
well-written and clear. I would accompany it with labs based on one
of the educational experimental OS's (NachOS, OSP) for hands-on
experience.
The edition mentioned above is now out of print. However, it may be
obtained via anonymous ftp from
ftp.ms.uky.edu:pub/tech-reports/UK/cs/vade.mecum.2.ps.Z. Here is
the associated chunk of README:
This textbook is out of print. It was published by Prentice Hall.
The author now owns the copyright. Permission is granted to copy
this text for any noncommercial purpose. Feel free to generate
copies of the text for your students. You may also photocopy the
original book without restriction. Kindly send suggested upgrades
to the author: raphael@ms.uky.edu. He is planning a new edition
sometime.
[It's been a few years since I've looked at this book, so I can't
remember what it contains. Can anyone help?]
- `The Logical Design of Operating Systems', second edition, Lubomir
Bic, Alan Shaw, 1988, Prentice Hall, ISBN 0-13-540139-9. This one
isn't as theoretical as Finkel's book, nor is it as long as the
dinosaur book. I haven't tried to use it in a course yet, but it
looks like a fairly well-rounded text.
[Can anyone write a paragraph on the various topics covered ... ?]
- `Modern Operating Systems,' Andrew Tanenbaum, 1992, Prentice Hall,
ISBN 0-13-588187-0. This started out as a rewrite of the Minix
book, but he pulled the Minix-specific material and added seven
chapters on distributed systems. It's a bit heavy for undergrads,
depending on how far into the distributed systems you go, but I like
Tanenbaum as an author. He'll be bringing out a second edition of
the Minix book sometime soon; as he says, one is for `hands-on'
(Minix) and one is for `hands-off' (Modern OS).
The book is divided into two parts: `traditional' introductory
material, taken more or less verbatim from the Minix book, and an
introduction to distributed systems. Each part concludes with a
case study and comparison of two well-known systems (Unix and
MS-DOS, and Mach and Amoeba). The bibliography at the end is
organised well for more advanced coverage of the topics encountered
throughout the book.
Topics covered in the first part include process concepts, memory
management, file system organisation and I/O, and deadlock detection
and avoidance. The second part addresses issues such as distributed
communication, synchronisation (the section on clock synchronisation
is well put together), processes in distributed environments
(nothing on process migration), and distributed file systems (using
AFS as an example).
- `Concurrent Systems', Jean Bacon, 1992, Addison-Wesley, ISBN
0-201-41677-8. This covers much the same material as `Modern
Operating Systems', but goes into rather more detail on databases
and languages. The book is divided into four parts, and comes with
a separate instructor's manual (ISBN 0-201-62406-0). The first part
covers basic material, such as OS functions, and system and language
support for concurrent processes. Part 2 deals with simple
concurrent actions, covering topics such as shared-memory IPC,
message passing, persistent data, crashes, and distributed data.
The third part of the book covers transactions, concurrency control,
and failure recovery. The final section presents a set of case
studies, with Unix, Mach and Chorus being covered, along with some
of the work done at Cambridge over the past decade. An interesting
emphasis is placed on language-level support for concurrency
throughout the book, and the focus on database issues is also a good
thing.
I haven't read the book in as much detail as I would like, but it
seems to be well put together. The cramming of so many topics under
one cover means that there is probably too much material for a
single undergraduate course, and the book perforce does not go into
as much detail as I would like on some topics (a problem I also find
with Tanenbaum's book). Well worth a look, however.
Two books commonly used in conjunction with the texts listed above
are:
- `The Design and Implementation of the 4.3BSD Unix Operating System',
Samuel Leffler, Kirk McKusick, Michael Karels, John Quarterman,
1989, Addison-Wesley, ISBN 0-201-06196-1. This book describes the
kernel of 4.3BSD Unix in some detail, covering process and memory
management (the latter being heavily VAX-oriented), file system
organisation, device driver internals, terminal handling, IPC,
network communications, some details of the Internet protocols, and
system operation. I found this book to be well-written and concise.
Accompanying the above is the `4.3BSD Answer Book', Samuel Leffler,
Kirk McKusick, 1991, Addison-Wesley, ISBN 0-201-54629-9. This short
book provides answers to all of the exercises found at the end of
each chapter in the daemon book.
- `The Design of the Unix Operating System', Maurice Bach, 1986,
Prentice Hall, ISBN not to hand right now. This is the
authoritative description of the internals of System V Unix. Due to
copyright restrictions, it contains no actual code, but rather
pseudo-code (I didn't find this to be a problem). I haven't read
this book in a few years, so I can't remember what it covers.
I am not aware of any similar book which covers the implementation of
a distributed system.
------------------------------
Subject: [5.2] What instructional operating systems can I use?
From: Operating systems teaching
- Minix, from Amsterdam's Vrije Universiteit, was developed by Andy
Tanenbaum, and is a mostly-Unix lookalike based on a message-passing
microkernel-style architecture. The system is used in Tanenbaum's
`Modern Operating Systems' and its predecessor, `Operating Systems:
Design and Implementation'. See the Minix Information Sheet, posted
regularly to comp.os.minix, for ordering information; Minix is
copyrighted and is not in the public domain.
- NachOS is an instructional OS developed at Berkeley by Tom Anderson
<tea@cs.berkeley.edu>, Wayne Christopher, Stephen Procter (and
others?). It currently runs on DEC MIPS and Sun SPARC workstations,
and a port to the HP 700 series is in progress. The NachOS system,
along with sample assignments and an overview paper which was
presented at Usenix, is available via anonymous ftp from
ftp.cs.berkeley.edu:ucb/nachos.
- OSP (current version is 1.7) is an operating systems simulator,
available via ftp from sblapis1.cs.sunysb.edu, with username ospftp,
and password as in the instructor's guide. Used in `OSP---an
Environment for Operating Systems', Michael Kifer, Scott Smolka,
Addison-Wesley.
- SunOS Minix consists of a set of patches for Minix which allows the
Minix system to run in a single monolithic Unix process on top of
SunOS 4.x on Sun 3 and Sun 4 machines. SunOS Minix is produced by
applying a set of patches to Mac Minix 1.5 (both 1.5.10.0 and
1.5.10.1 can be used) or PC Minix 1.5. Also, Atari Minix has been
used as the base version by at least one person. The latest version
(2.0) includes a preliminary attempt at a port to Solaris 2.x.
SunOS Minix is available via anonymous ftp from
csc.canterbury.ac.nz:UNIX/SMX_2_00.TAR_Z.
- Xinu was developed at Purdue by Doug Comer and some others. It was
designed to be small and layered, so that the code is succinct and
easily understandable. It is intended for education, and is a
`conventional' operating system. Xinu runs on the IBM PC, Sun-3,
SPARC, LSI, MIPS, Macintosh, and VAX architectures. The system is
used in Comer's `Operating System Design: the Xinu Approach'.
Contact <xinu-librarian@cs.purdue.edu> or Prentice Hall for ordering
information; Xinu is copyrighted and is not in the public domain.
------------------------------
Subject: [5.3] Where can I find the canonical list of OS papers for grad courses?
From: Operating systems teaching
[93-03-14-17-09.47] Darrell Long <darrell@cse.ucsc.edu> maintains a
bibliography which provides a good starting point for graduate OS
course reading lists. This may be imported using refdbms as
ucsc.grad.os, from refdbms.cse.ucsc.edu 4117 or refdbms.cs.vu.nl 4117.