Appendix A: Hacker Folklore
***************************
This appendix contains several legends and fables which illuminate the
meaning of various entries in the lexicon.
The Meaning of `Hack'
=====================
"The word {hack} doesn't really have 69 different meanings", according
to Phil Agre, an MIT hacker. "In fact, {hack} has only one meaning, an
extremely subtle and profound one which defies articulation. Which
connotation is implied by a given use of the word depends in similarly
profound ways on the context. Similar remarks apply to a couple of
other hacker words, most notably {random}."
Hacking might be characterized as "an appropriate application of
ingenuity". Whether the result is a quick-and-dirty patchwork job or
a carefully crafted work of art, you have to admire the cleverness
that went into it.
An important secondary meaning of {hack} is `a creative practical
joke'. This kind of hack is easier to explain to non-hackers than the
programming kind. Accordingly, here are some examples of practical
joke hacks:
In 1961, students from Caltech (California Institute of Technology in
Pasadena) hacked the Rose Bowl football game. One student posed as a
reporter and `interviewed' the director of the University of
Washington card stunts (such stunts involve people in the stands who
hold up colored cards to make pictures). The reporter learned exactly
how the stunts were operated, and also that the director would be out
to dinner later.
While the director was eating, the students (who called themselves the
`Fiendish Fourteen') picked a lock and stole one of the direction
sheet blanks for the card stunts. They then had a printer run off
2300 copies of the blank. The next day they picked the lock again and
stole the master plans for the stunts, large sheets of graph paper
colored in with the stunt pictures. Using these as a guide, they made
new instructions for three of the stunts on the duplicated blanks.
Finally, they broke in once more, replacing the stolen master plans
and substituting the stack of diddled instruction sheets for the
original set.
The result was that three of the pictures were totally different.
Instead of spelling "WASHINGTON", the word "CALTECH" was flashed.
Another stunt showed the word "HUSKIES", the Washington nickname,
but spelled it backwards. And what was supposed to have been a
picture of a husky instead showed a beaver. (Both Caltech and MIT use
the beaver as a mascot. Beavers are nature's engineers.)
After the game, the Washington faculty athletic representative said,
"Some thought it ingenious; others were indignant." The Washington
student body president remarked, "No hard feelings, but at the time
it was unbelievable. We were amazed."
This is now considered a classic hack, particularly because revising
the direction sheets constituted a form of programming.
Another classic hack:
Some MIT students once illicitly used a quantity of thermite to weld a
trolley car to its tracks. The hack was actually not dangerous, as
they did this at night to a parked trolley. It took the transit
people quite a while to figure out what was wrong with the trolley,
and even longer to figure out how to fix it. They ended up putting
jacks under the trolley and cutting the section of track on either
side of the wheel with oxyacetylene torches. Then they unbolted the
wheel, welded in a new piece of track, bolted on a new wheel, and
removed the jacks. The hackers sneaked in the next night and stole
the fused track and wheel!
The pranksters' plunder was later used as the trophy at the First Annual
All-Tech Sing. They carted it in on a very heavy duty dolly up the
freight elevator of the Student Center. Six feet of rail and a trolley
wheel is a *lot* of steel.
A rather similar hack, perpetrated by a fraternity at CMU, cost their
campus its trolley service.
Though these displayed some cleverness, the side-effect of expensive
property damage was definitely an esthetic minus. The best hacks are
harmless ones.
And another:
One winter, late at night, an MIT fraternity hosed down an underpass
that is part of a commuter expressway near MIT. This produced an ice
slick that `trapped' a couple of small cars: they didn't have the
momentum or traction to climb out of the underpass. While it was
clever to apply some simple science to trap a car, it was also very
dangerous as it could have caused a collision. As such, this was a
very poor hack overall.
And yet another:
On November 20, 1982, MIT hacked the Harvard-Yale football game. Just
after Harvard's second touchdown against Yale in the second quarter, a
small black ball popped up out of the ground at the 40-yard line, and
grew bigger, and bigger, and bigger. The letters "MIT" appeared all
over the ball. As the players and officials stood around gawking, the
ball grew to six feet in diameter and then burst with a bang and a
cloud of white smoke.
As the Boston Globe later reported, "If you want to know the truth,
M.I.T. won The Game."
The prank had taken weeks of careful planning by members of MIT's
Delta Kappa Epsilon fraternity. The device consisted of a weather
balloon, a hydraulic ram powered by Freon gas to lift it out of the
ground, and a vacuum-cleaner motor to inflate it. They made eight
separate expeditions to Harvard Stadium between 1 and 5 AM, in which
they located an unused 110-volt circuit in the stadium, and ran buried
wiring from the stadium circuit to the 40-yard line, where they buried
the balloon device. When the time came to activate the device, two
fraternity members had merely to flip a circuit breaker and push a
plug into an outlet.
This stunt had all the earmarks of a perfect hack: surprise,
publicity, the ingenious use of technology, safety, and harmlessness.
The use of manual control allowed the prank to be timed so as not to
disrupt the game (it was set off between plays, so the outcome of the
game would not be unduly affected). The perpetrators had even
thoughtfully attached a note to the balloon explaining that the device
was not dangerous and contained no explosives.
Harvard president Derek Bok commented: "They have an awful lot of
clever people down there at MIT, and they did it again." President
Paul E. Gray of MIT said, "There is absolutely no truth to the rumor
that I had anything to do with it, but I wish there were."
Still another:
At Stevens Tech, a programmer, having seen the {Cookie Bear} program
on the ITS systems, proceeded to write his own version for TOPS-10.
Unlike the ITS one, this version, called TSCB (Time-Sharing Cookie
Bear), was able to harass multiple users simultaneously, with
numerous {bells and whistles}.  It had a mode to look for a
particular user or program name and pounce as soon as it saw either;
it accepted wildcards (e.g. the command `BOTHER [3??,*]' would sic the
bear on all Chemistry Department users); and it had commands to hide
as various other programs (making detection difficult if not
impossible).
Later on, it acquired other, nastier features; the `PUNISH' command
would look for a particular user or program name and log that job out
as soon as it saw it; the `IWANT' command could grab a reserved device
from another user, etc.
This program became well-known in the Stevens folklore, and copies
ended up just about everywhere despite the efforts of the Computer
Center administration to eradicate it.  Fortunately, this program
required privileges to work; unfortunately, the ability of Computer
Center employees to get and use these privileges with impunity led to
a strong `us vs. them' mentality among Stevens hackers.
Finally, here is a great story about one of the classic computer hacks.
Back in the mid-1970s, several of the system support staff at Motorola
discovered a relatively simple way to crack system security on the
XEROX CP-V timesharing system. Through a simple programming strategy,
it was possible for a user program to trick the system into running a
portion of the program in `master mode' (supervisor state), in which
memory protection does not apply. The program could then poke a large
value into its `privilege level' byte (normally write-protected) and
could then proceed to bypass all levels of security within the
file-management system, patch the system monitor, and do numerous
other interesting things. In short, the barn door was wide open.
Motorola quite properly reported this problem to XEROX via an official
`level 1 SIDR' (a bug report with an intended urgency of `needs to be
fixed yesterday'). Because the text of each SIDR was entered into a
database that could be viewed by quite a number of people, Motorola
followed the approved procedure: they simply reported the problem as
`Security SIDR', and attached all of the necessary documentation,
ways-to-reproduce, etc.
XEROX sat on their thumbs...they either didn't realize the severity of
the problem, or didn't assign the necessary operating-system-staff
resources to develop and distribute an official patch.
Months passed. The Motorola guys pestered their XEROX field-support
rep, to no avail. Finally they decided to take Direct Action, to
demonstrate to XEROX management just how easily the system could be
cracked and just how thoroughly the security safeguards could be
subverted.
They dug around in the operating-system listings and devised a
thoroughly devilish set of patches. These patches were then
incorporated into a pair of programs called `Robin Hood' and `Friar
Tuck'. Robin Hood and Friar Tuck were designed to run as `ghost jobs'
(daemons, in UNIX terminology); they would use the existing loophole
to subvert system security, install the necessary patches, and then
keep an eye on one another's statuses in order to keep the system
operator (in effect, the superuser) from aborting them.
So... one day, the system operator on the main CP-V software
development system in El Segundo was surprised by a number of unusual
phenomena. These included the following:
* Tape drives would rewind and dismount their tapes in the middle of a
job.
* Disk drives would seek back and forth so rapidly that they'd attempt
to walk across the floor (see {walking drives}).
* The card-punch output device would occasionally start up of itself and
punch a {lace card}. These would usually jam in the punch.
* The console would print snide and insulting messages from Robin Hood
to Friar Tuck, or vice versa.
* The XEROX card reader had two output stackers; it could be instructed
to stack into A, stack into B, or stack into A unless a card was
unreadable, in which case the bad card was placed into stacker B. One
of the patches installed by the ghosts added some code to the
card-reader driver... after reading a card, it would flip over to
the opposite stacker. As a result, card decks would divide themselves
in half when they were read, leaving the operator to recollate them
manually.
Naturally, the operator called in the operating-system developers. They
found the bandit ghost jobs running, and X'ed them... and were once
again surprised. When Robin Hood was X'ed, the following sequence of
events took place:
!X id1
id1: Friar Tuck... I am under attack!  Pray save me!
id1: Off (aborted)
id2: Fear not, friend Robin!  I shall rout the Sheriff of
     Nottingham's men!
id1: Thank you, my good fellow!
Each ghost-job would detect the fact that the other had been killed,
and would start a new copy of the recently-slain program within a few
milliseconds. The only way to kill both ghosts was to kill them
simultaneously (very difficult) or to deliberately crash the system.
Finally, the system programmers did the latter... only to find
that the bandits appeared once again when the system rebooted! It
turned out that these two programs had patched the boot-time OS image
(the kernel file, in UNIX terms) and had added themselves to the list
of programs that were to be started at boot time...
The Robin Hood and Friar Tuck ghosts were finally eradicated when the
system staff rebooted the system from a clean boot-tape and
reinstalled the monitor. Not long thereafter, XEROX released a patch
for this problem.
It is alleged that XEROX filed a complaint with Motorola's management about
the merry-prankster actions of the two employees in question. It is
not recorded that any serious disciplinary action was taken against
either of them.
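[Ed. note: the mutual-watchdog trick at the heart of this story is
easy to sketch in modern terms.  The fragment below is emphatically
not the original CP-V code (which has not survived); it is a minimal
Python illustration, with invented names and file locations, of two
processes that each respawn the other moments after it dies.  Run one
copy as `ghost.py robin' and another as `ghost.py tuck'.]

    # ghost.py -- hypothetical sketch of the Robin Hood / Friar Tuck
    # mutual-watchdog pattern, written for a modern UNIX, not CP-V.
    import os, subprocess, sys, time

    PARTNER = {"robin": "tuck", "tuck": "robin"}  # each ghost guards the other
    PIDFILE = "/tmp/ghost-%s.pid"                 # invented location

    def alive(name):
        """True if the named ghost's recorded process still exists."""
        try:
            with open(PIDFILE % name) as f:
                os.kill(int(f.read()), 0)         # signal 0: existence check only
            return True
        except (OSError, ValueError):
            return False

    def main(me):
        with open(PIDFILE % me, "w") as f:
            f.write(str(os.getpid()))             # record our pid for our partner
        while True:
            if not alive(PARTNER[me]):
                # Revive the slain partner, as in the console dialogue above.
                subprocess.Popen([sys.executable, sys.argv[0], PARTNER[me]])
                time.sleep(0.5)                   # let the new ghost record its pid
            time.sleep(0.01)                      # poll every few milliseconds

    if __name__ == "__main__":
        main(sys.argv[1])

[Kill either process and the survivor restarts it almost instantly;
just as in the story, the only clean way out is to stop both in the
same instant, or to reboot from clean media once the pair has patched
the boot image.]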
The Untimely Demise of Mabel the Monkey
=======================================
The following, modulo a couple of inserted commas and
capitalization changes for readability, is the exact text of a famous
USENET message. The reader may wish to review the definitions of
{PM} in the main text before continuing.
Date: Wed 3 Sep 86 16:46:31-EDT
From: "Art Evans" <Evans@TL-20B.ARPA>
Subject: Always Mount a Scratch Monkey
To: Risks@CSL.SRI.COM
My friend Bud used to be the intercept man at a computer vendor for
calls when an irate customer called. Seems one day Bud was sitting at
his desk when the phone rang.
Bud: Hello.
Voice: YOU KILLED MABEL!!
B: Excuse me?
V: YOU KILLED MABEL!!
This went on for a couple of minutes and Bud was getting nowhere, so he
decided to alter his approach to the customer.
B: HOW DID I KILL MABEL?
V: YOU PM'ED MY MACHINE!!
Well, to avoid making a long story even longer, I will abbreviate what had
happened. The customer was a Biologist at the University of Blah-de-blah,
and he had one of our computers that controlled gas mixtures that Mabel (the
monkey) breathed. Now, Mabel was not your ordinary monkey. The University
had spent years teaching Mabel to swim, and they were studying the effects
that different gas mixtures had on her physiology. It turns out that the
repair folks had just gotten a new Calibrated Power Supply (used to
calibrate analog equipment), and at their first opportunity decided to
calibrate the D/A converters in that computer. This changed some of the gas
mixtures and poor Mabel was asphyxiated. Well, Bud then called the branch
manager for the repair folks:
Manager: Hello
B: This is Bud, I heard you did a PM at the University of
Blah-de-blah.
M: Yes, we really performed a complete PM. What can I do
for you?
B: Can you swim?
The moral is, of course, that you should always mount a scratch monkey.
~~~~~~~~~~~~~~~~~~~~~~
There are several morals here related to risks in use of computers.
Examples include "If it ain't broken, don't fix it."  However, the
cautious philosophical approach implied by "always mount a scratch
monkey" says a lot that we should keep in mind.
Art Evans
Tartan Labs
TV Typewriters: A Tale Of Hackish Ingenuity
===========================================
Here is a true story about a glass tty. One day an MIT hacker was in
a motorcycle accident and broke his leg. He had to stay in the
hospital quite a while, and got restless because he couldn't {hack}.
Two of his friends therefore took a terminal and modem for it to the
hospital, so that he could use the computer by telephone from his
hospital bed.
Now this happened some years before the spread of home computers, and
computer terminals were not a familiar sight to the average person.
When the two friends got to the hospital, a guard stopped them and
asked what they were carrying. They explained that they wanted to
take a computer terminal to their friend who was a patient.
The guard got out his list of things that patients were permitted to
have in their rooms: TV, radio, electric razor, typewriter, tape
player... no computer terminals. Computer terminals weren't on the
list, so they couldn't take it in. Rules are rules, you know.
Fair enough, said the two friends, and they left again. They were
frustrated, of course, because they knew that the terminal was as
harmless as a TV or anything else on the list... which gave them an
idea.
The next day they returned, and the same thing happened: a guard
stopped them and asked what they were carrying. They said, "This is
a TV typewriter!" The guard was skeptical, so they plugged it in and
demonstrated it. "See? You just type on the keyboard and what you
type shows up on the TV screen." Now the guard didn't stop to think
about how utterly useless a typewriter would be that didn't produce
any paper copies of what you typed; but this was clearly a TV
typewriter, no doubt about it. So he checked his list: "A TV is all
right, a typewriter is all right... okay, take it on in!"
Two Stories About `Magic' (by Guy Steele)
=========================================
When Barbara Steele was in her fifth month of pregnancy in 1981, her
doctor sent her to a specialist to have a sonogram made to determine
whether there were twins. She dragged her husband Guy along to the
appointment. It was quite fascinating; as the doctor moved an
instrument along the skin, a small TV screen showed cross-sectional
pictures of the abdomen.
Now Barbara and I had both studied computer science at MIT, and we
both saw that some complex computerized image-processing was involved.
Out of curiosity, we asked the doctor how it was done, hoping to learn
some details about the mathematics involved. The doctor, not knowing
our educational background, simply said, "The probe sends out sound
waves, which bounce off the internal organs. A microphone picks up
the echoes, like radar, and sends the signals to a computer --- and the
computer makes a picture." Thanks a lot! Now a hacker would have
said, "... and the computer *magically* (or {automagically})
makes a picture", implicitly acknowledging that he has glossed over
an extremely complicated process.
Some years ago I was snooping around in the cabinets that housed the
MIT AI Lab's PDP-10, and noticed a little switch glued to the frame of
one cabinet. It was obviously a homebrew job, added by one of the
lab's hardware hackers (no one knows who).
You don't touch an unknown switch on a computer without knowing what
it does, because you might crash the computer. The switch was labeled
in a most unhelpful way. It had two positions, and scrawled in pencil
on the metal switch body were the words `magic' and `more magic'.
The switch was in the `more magic' position.
I called another hacker over to look at it. He had never seen the
switch before either. Closer examination revealed that the switch
only had one wire running to it! The other end of the wire did
disappear into the maze of wires inside the computer, but it's a basic
fact of electricity that a switch can't do anything unless there are
two wires connected to it. This switch had a wire connected on one
side and no wire on its other side.
It was clear that this switch was someone's idea of a silly joke.
Convinced by our reasoning that the switch was inoperative, we flipped
it. The computer instantly crashed.
Imagine our utter astonishment. We wrote it off as coincidence, but
nevertheless restored the switch to the `more magic' position before
reviving the computer.
A year later, I told this story to yet another hacker, David Moon as I
recall. He clearly doubted my sanity, or suspected me of a
supernatural belief in the power of this switch, or perhaps thought I
was fooling him with a bogus saga. To prove it to him, I showed him
the very switch, still glued to the cabinet frame with only one wire
connected to it, still in the `more magic' position. We scrutinized
the switch and its lone connection, and found that the other end of
the wire, though connected to the computer wiring, was connected to a
ground pin. That clearly made the switch doubly useless: not only was
it electrically nonoperative, but it was connected to a place that
couldn't affect anything anyway. So we flipped the switch.
The computer promptly crashed.
This time we ran for Richard Greenblatt, a long-time MIT hacker, who
was close at hand. He had never noticed the switch before, either.
He inspected it, concluded it was useless, got some diagonal cutters
and {dike}d it out.  We then revived the computer and it has run
fine ever since.
We still don't know how the switch crashed the machine. There is a
theory that some circuit near the ground pin was marginal, and
flipping the switch changed the electrical capacitance enough to upset
the circuit as millionth-of-a-second pulses went through it. But
we'll never know for sure; all we can really say is that the switch
was {magic}.
I still have that switch in my basement. Maybe I'm silly, but I
usually keep it set on `more magic'.
A Selection of AI Koans
=======================
These are some of the funniest examples of a genre of jokes told at
the MIT AI lab about various noted hackers. The original koans were
composed by Danny Hillis. In reading these, it is at least useful to
know that Minsky, Sussman, and Drescher are AI researchers of note,
that Tom Knight was one of the Lisp machine's principal designers, and
that David Moon wrote much of Lisp machine Lisp.
* * *
A novice was trying to fix a broken Lisp machine by turning the power
off and on.
Knight, seeing what the student was doing, spoke sternly: "You can not
fix a machine by just power-cycling it with no understanding of what
is going wrong."
Knight turned the machine off and on.
The machine worked.
* * *
One day a student came to Moon and said, "I understand how to
make a better garbage collector. We must keep a reference count
of the pointers to each cons."
Moon patiently told the student the following story:
"One day a student came to Moon and said, `I understand how
to make a better garbage collector...
[Ed. note: Pure reference-count garbage collectors have problems with
circular structures that point to themselves.]
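[A concrete illustration of the problem, in Python rather than Lisp:
once a cons points to itself, a pure reference count can never fall
to zero, even after the last outside pointer is gone.  The sketch
below shows the concept only, not any particular collector.]

    # Why pure reference counting leaks circular structures.
    class Cons:
        def __init__(self, car, cdr):
            self.car, self.cdr = car, cdr

    a = Cons(1, None)
    a.cdr = a      # the cons now points to itself: two references,
                   # the variable `a' and the cons's own cdr field
    del a          # dropping the variable leaves the count at 1, so a
                   # pure reference counter never reclaims the cell,
                   # though nothing outside it can reach it any more
    # (CPython escapes this fate only because it runs a separate
    # cycle detector on top of its reference counts.)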
* * *
In the days when Sussman was a novice, Minsky once came to him as
he sat hacking at the PDP-6.
"What are you doing?" asked Minsky.
"I am training a randomly wired neural net to play Tic-Tac-Toe",
Sussman replied.
"Why is the net wired randomly?" asked Minsky.
"I do not want it to have any preconceptions of how to play."
Sussman said.
Minsky then shut his eyes.
"Why do you close your eyes?" Sussman asked his teacher.
"So that the room will be empty."
At that moment, Sussman was enlightened.
* * *
A disciple of another sect once came to Drescher as he was
eating his morning meal.
"I would like to give you this personality test", said the
outsider, "because I want you to be happy."
Drescher took the paper that was offered him and put it
into the toaster, saying:
"I wish the toaster to be happy, too."
OS and JEDGAR
=============
This story says a lot about the ITS ethos.
On the ITS system there was a program that allowed you to see what
was being printed on someone else's terminal.  It spied on the other
guy's output by examining the insides of the monitor system.  The
output spy program was called OS.  Throughout the rest of the computer
science world (and at IBM too) OS means `operating system', but among
old-time ITS hackers it almost always meant `output spy'.
OS could work because ITS purposely had very little in the way of
`protection' that prevented one user from trespassing on another's
areas. Fair is fair, however. There was another program that would
automatically notify you if anyone started to spy on your output. It
worked in exactly the same way, by looking at the insides of the
operating system to see if anyone else was looking at the insides that
had to do with your output. This `counterspy' program was called
JEDGAR (a six-letterism pronounced as two syllables: /jed'gr/), in
honor of the former head of the FBI.
But there's more. The rest of the story is that JEDGAR would ask the
user for `license to kill'. If the user said yes, then JEDGAR would
actually {gun} the job of the {luser} who was spying.
Unfortunately, people found this made life too violent, especially when
tourists learned about it. One of the systems hackers solved the
problem by replacing JEDGAR with another program that only pretended
to do its job. It took a long time to do this, because every copy of
JEDGAR had to be patched, and to this day no one knows how many people
never figured out that JEDGAR had been defanged.
The Story of Mel, a Real Programmer
===================================
This was posted to USENET by its author Ed Nather (utastro!nather) on
May 21, 1983.
A recent article devoted to the *macho* side of programming
made the bald and unvarnished statement:
Real Programmers write in Fortran.
Maybe they do now,
in this decadent era of
Lite beer, hand calculators and "user-friendly" software
but back in the Good Old Days,
when the term "software" sounded funny
and Real Computers were made out of drums and vacuum tubes,
Real Programmers wrote in machine code.
Not Fortran. Not RATFOR. Not, even, assembly language.
Machine Code.
Raw, unadorned, inscrutable hexadecimal numbers.
Directly.
Lest a whole new generation of programmers
grow up in ignorance of this glorious past,
I feel duty-bound to describe,
as best I can through the generation gap,
how a Real Programmer wrote code.
I'll call him Mel,
because that was his name.
I first met Mel when I went to work for Royal McBee Computer Corp.,
a now-defunct subsidiary of the typewriter company.
The firm manufactured the LGP-30,
a small, cheap (by the standards of the day)
drum-memory computer,
and had just started to manufacture
the RPC-4000, a much-improved,
bigger, better, faster --- drum-memory computer.
Cores cost too much,
and weren't here to stay, anyway.
(That's why you haven't heard of the company, or the computer.)
I had been hired to write a Fortran compiler
for this new marvel and Mel was my guide to its wonders.
Mel didn't approve of compilers.
"If a program can't rewrite its own code",
he asked, "what good is it?"
Mel had written,
in hexadecimal,
the most popular computer program the company owned.
It ran on the LGP-30
and played blackjack with potential customers
at computer shows.
Its effect was always dramatic.
The LGP-30 booth was packed at every show,
and the IBM salesmen stood around
talking to each other.
Whether or not this actually sold computers
was a question we never discussed.
Mel's job was to re-write
the blackjack program for the RPC-4000.
(Port? What does that mean?)
The new computer had a one-plus-one
addressing scheme,
in which each machine instruction,
in addition to the operation code
and the address of the needed operand,
had a second address that indicated where, on the revolving drum,
the next instruction was located.
In modern parlance,
every single instruction was followed by a GO TO!
Put *that* in Pascal's pipe and smoke it.
Mel loved the RPC-4000
because he could optimize his code:
that is, locate instructions on the drum
so that just as one finished its job,
the next would be just arriving at the "read head"
and available for immediate execution.
There was a program to do that job,
an "optimizing assembler",
but Mel refused to use it.
"You never know where its going to put things",
he explained, "so you'd have to use separate constants".
It was a long time before I understood that remark.
Since Mel knew the numerical value
of every operation code,
and assigned his own drum addresses,
every instruction he wrote could also be considered
a numerical constant.
He could pick up an earlier "add" instruction, say,
and multiply by it,
if it had the right numeric value.
His code was not easy for someone else to modify.
I compared Mel's hand-optimized programs
with the same code massaged by the optimizing assembler program,
and Mel's always ran faster.
That was because the "top-down" method of program design
hadn't been invented yet,
and Mel wouldn't have used it anyway.
He wrote the innermost parts of his program loops first,
so they would get first choice
of the optimum address locations on the drum.
The optimizing assembler wasn't smart enough to do it that way.
Mel never wrote time-delay loops, either,
even when the balky Flexowriter
required a delay between output characters to work right.
He just located instructions on the drum
so each successive one was just *past* the read head
when it was needed;
the drum had to execute another complete revolution
to find the next instruction.
He coined an unforgettable term for this procedure.
Although "optimum" is an absolute term,
like "unique", it became common verbal practice
to make it relative:
"not quite optimum" or "less optimum"
or "not very optimum".
Mel called the maximum time-delay locations
the "most pessimum".
After he finished the blackjack program
and got it to run,
("Even the initializer is optimized",
he said proudly)
he got a Change Request from the sales department.
The program used an elegant (optimized)
random number generator
to shuffle the "cards" and deal from the "deck",
and some of the salesmen felt it was too fair,
since sometimes the customers lost.
They wanted Mel to modify the program
so, at the setting of a sense switch on the console,
they could change the odds and let the customer win.
Mel balked.
He felt this was patently dishonest,
which it was,
and that it impinged on his personal integrity as a programmer,
which it did,
so he refused to do it.
The Head Salesman talked to Mel,
as did the Big Boss and, at the boss's urging,
a few Fellow Programmers.
Mel finally gave in and wrote the code,
but he got the test backwards,
and, when the sense switch was turned on,
the program would cheat, winning every time.
Mel was delighted with this,
claiming his subconscious was uncontrollably ethical,
and adamantly refused to fix it.
After Mel had left the company for greener pa$ture$,
the Big Boss asked me to look at the code
and see if I could find the test and reverse it.
Somewhat reluctantly, I agreed to look.
Tracking Mel's code was a real adventure.
I have often felt that programming is an art form,
whose real value can only be appreciated
by another versed in the same arcane art;
there are lovely gems and brilliant coups
hidden from human view and admiration, sometimes forever,
by the very nature of the process.
You can learn a lot about an individual
just by reading through his code,
even in hexadecimal.
Mel was, I think, an unsung genius.
Perhaps my greatest shock came
when I found an innocent loop that had no test in it.
No test. *None*.
Common sense said it had to be a closed loop,
where the program would circle, forever, endlessly.
Program control passed right through it, however,
and safely out the other side.
It took me two weeks to figure it out.
The RPC-4000 computer had a really modern facility
called an index register.
It allowed the programmer to write a program loop
that used an indexed instruction inside;
each time through,
the number in the index register
was added to the address of that instruction,
so it would refer
to the next datum in a series.
He had only to increment the index register
each time through.
Mel never used it.
Instead, he would pull the instruction into a machine register,
add one to its address,
and store it back.
He would then execute the modified instruction
right from the register.
The loop was written so this additional execution time
was taken into account ---
just as this instruction finished,
the next one was right under the drum's read head,
ready to go.
But the loop had no test in it.
The vital clue came when I noticed
the index register bit,
the bit that lay between the address
and the operation code in the instruction word,
was turned on ---
yet Mel never used the index register,
leaving it zero all the time.
When the light went on it nearly blinded me.
He had located the data he was working on
near the top of memory ---
the largest locations the instructions could address ---
so, after the last datum was handled,
incrementing the instruction address
would make it overflow.
The carry would add one to the
operation code, changing it to the next one in the instruction set:
a jump instruction.
Sure enough, the next program instruction was
in address location zero,
and the program went happily on its way.
I haven't kept in touch with Mel,
so I don't know if he ever gave in to the flood of
change that has washed over programming techniques
since those long-gone days.
I like to think he didn't.
In any event,
I was impressed enough that I quit looking for the
offending test,
telling the Big Boss I couldn't find it.
He didn't seem surprised.
When I left the company,
the blackjack program would still cheat
if you turned on the right sense switch,
and I think that's how it should be.
I didn't feel comfortable
hacking up the code of a Real Programmer.
[This is one of hackerdom's great heroic epics, free verse or no. In a
few spare images it captures more about the esthetics and psychology
of hacking than every scholarly volume on the subject put together.
For an opposing point of view, see the entry for {real programmer}.]
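[For the curious, the arithmetic behind Mel's testless loop is easy
to replay.  The sketch below uses an invented instruction format; the
real RPC-4000 word was laid out differently, but the principle, a
carry out of the address field bumping the operation code to the next
opcode in the set, is the same.]

    # Mel's overflow trick on an invented instruction format:
    # word = [opcode : 8 bits][address : 12 bits]
    ADDR_BITS = 12
    ADDR_MASK = (1 << ADDR_BITS) - 1
    LOAD, JUMP = 0x20, 0x21    # invented opcodes; JUMP follows LOAD

    # The loop's lone instruction starts out as "LOAD 0xffd":
    # the data occupy the last three words of memory.
    insn = (LOAD << ADDR_BITS) | 0xFFD

    for _ in range(4):
        op, addr = insn >> ADDR_BITS, insn & ADDR_MASK
        print("opcode %#04x   address %#05x" % (op, addr))
        insn += 1              # "add one to its address, and store it back"

[The first three iterations print LOAD at 0xffd, 0xffe, and 0xfff; on
the fourth, the carry out of the address field turns 0x20 into 0x21,
so the word now reads JUMP 0x000: control lands at location zero and
the program goes happily on its way.]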