Received: from BUACCA by BUACCA.BU.EDU (Mailer R2.08 PTF009) with BSMTP id
1641; Fri, 02 Jul 93 14:02:05 EDT
Received: from noc4.dccs.upenn.edu by BUACCA.BU.EDU (IBM VM SMTP R1.2.1) with
TCP; Fri, 02 Jul 93 14:01:57 EDT
Received: from CATTELL.PSYCH.UPENN.EDU by noc4.dccs.upenn.edu
id AA26249; Fri, 2 Jul 93 13:59:46 -0400
Return-Path: <marvit@cattell.psych.upenn.edu>
Received: from LOCALHOST by cattell.psych.upenn.edu
id AA20377; Fri, 2 Jul 93 12:13:02 EDT
Posted-Date: Fri, 02 Jul 93 12:12:22 -0400
From: "Neuron-Digest Moderator" <neuron-request@cattell.psych.upenn.edu>
To: Neuron-Distribution:;
Subject: Neuron Digest V11 #42 (conferences & CFP)
Reply-To: "Neuron-Request" <neuron-request@cattell.psych.upenn.edu>
X-Errors-To: "Neuron-Request" <neuron-request@cattell.psych.upenn.edu>
Organization: University of Pennsylvania
Date: Fri, 02 Jul 93 12:12:22 -0400
Message-Id: <20346.741629542@cattell.psych.upenn.edu>
Sender: marvit@cattell.psych.upenn.edu
Neuron Digest Friday, 2 Jul 1993
Volume 11 : Issue 42
Today's Topics:
Travel grants for CNS*93
CLNL'93 - Revised deadline
call for papers: SAC '94
PASE'93
Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@cattell.psych.upenn.edu". The ftp archives are
available from cattell.psych.upenn.edu (130.91.68.31). Back issues
requested by mail will eventually be sent, but may take a while.
----------------------------------------------------------------------
Subject: Travel grants for CNS*93
From: Jim Bower <jbower@smaug.cns.caltech.edu>
Date: Tue, 29 Jun 93 15:30:08 -0800
The Second Annual
Computation and Neural Systems Meeting
CNS*93
July 31 through August 7, 1993
Washington DC
Travel grant announcement
With support from the National Science Foundation, the CNS*93
organizing committee is pleased to announce the availability of travel
grants for the upcoming meeting. Funds will be provided based on the
distance traveled to the meeting. Grants are expected to range from $200
to $500.
General meeting information:
The Computation and Neural Systems meeting (CNS*93) is the second
in a series of annual interdisciplinary conferences intended to address
the broad range of research approaches and issues involved in the general
field of computational neuroscience. The meeting will bring together
experimental and theoretical neurobiologists along with engineers,
computer scientists, cognitive scientists, physicists, and mathematicians
interested in understanding how biological neural systems compute. The
meeting will equally emphasize experimental, model-based, and more
abstract theoretical approaches to understanding neurobiological
computation.
The meeting will be composed of three parts: a day of tutorials,
three and a half days of research presentations, and two and a half days
of follow-up workshops. The agenda for the main meeting is based on 85
peer-reviewed papers presented in both oral and poster format. The
tutorial day and the main meeting itself will be held at the Hyatt
Regency, Bethesda, Maryland. The post-meeting workshops will be held at the
Coolfont resort, which is set within 1350 mountainous acres in the Eastern
Panhandle of West Virginia.
Further information:
Additional information about the meeting is available
via FTP over the internet (address: 131.215.137.69 ). To
obtain information about the agenda, currently registered
attendees, or paper abstracts, the initial sequence is the
same (Things you type are in ""):
> yourhost% "ftp 131.215.137.69"
> 220 mordor FTP server (SunOS 4.1) ready.
Name (131.215.137.69:<yourname>): "ftp"
> 331 Guest login OK, send ident as password.
Password: "yourname@yourhost.yoursite.yourdomain"
> 230 Guest login OK, access restrictions apply.
ftp> "cd cns93"
> 250 CWD command successful.
ftp>
At this point you can do one of several things:
1) To examine what is available type: "ls"
Directory as of 6/1/93:
abstracts (a directory)
agenda
attendees
general_information
registration
rooms_to_share
travel_arrangements
travel_grants
tutorials
workshops
2) To download specific files type: "get <filename>"
for example: "get agenda" or "get attendees"
3) To download meeting abstracts first type: "cd cns93/abstracts"
a) to view the list of abstracts type: "ls"
b) to download specific abstracts type: "get <abstract name>"
c) to download all abstracts type: "mget *"
Once you have obtained the information you want type:
"quit"
Registration:
We recommend registering for the meeting as
soon as possible, as space for some meeting events is
limited. Participants can register for the meeting in
three different ways: 1) interactively over the internet,
2) via email, 3) via regular surface mail. Each method is
described below. Please register using only one method.
1) Interactive electronic registration:
For those of you with internet connectivity who would
like to register electronically for CNS*93 we have
provided an internet account through which you may submit
your registration information. To use this service you
need only "telnet" to "mordor.cns.caltech.edu" and login as
"cns93". No password is required. For example (You
type what appears in ""):
yourhost% "telnet mordor.cns.caltech.edu"
Trying 131.215.137.69 ...
Connected to mordor.cns.caltech.edu.
Escape character is '^]'.
SunOS UNIX (mordor)
login: "cns93"
Now answer all questions
2) For those with easy access to electronic mail, simply
fill in the attached registration form and email it to:
cp@smaug.cns.caltech.edu
3) Finally, for those who elect neither of the above
options, please print out the attached registration form
and send with payment via surface mail to the address
indicated.
CNS*93 Registrations
Division of Biology 216-76
Caltech
Pasadena, CA 91125
This address should also be used to send check or money
orders following electronic registration.
*******************************************
REGISTRATION FORM
CNS*93
WASHINGTON D.C. July 31 - August 7 1993
*****************************************
Name :
Title :
Organization :
Address :
City : State : Zip : Country :
Telephone : email address :
Registration Fees :
_____ $ 25 Tutorial (July 31) (includes lunch)
Technical Program (August 1 - 4)
_____ $ 300 Regular
_____ $ 125 Full-time Student (Include verification of
status)
_____ $ 50 Banquet (for each additional banquet
ticket)
(main registration includes one banquet ticket
and book of abstracts)
Post-meeting Workshop (August 4 - 7)
_____ $ 325 (includes round-trip transportation, meals
and lodging)
$ ______ Total Payment
Please indicate method of payment :
____ Check or Money Order (Payable in U.S. dollars
to CNS*93 - Caltech)
will be sent to CNS*93 Registrations
Division of Biology 216-76
Caltech
Pasadena, CA 91125
___ Visa ___ Mastercard ___ American Express
Charge my card number
________________________________________
Expiration date ____________ Name of cardholder
___________________
Signature as appears on card :
_________________________ Date ____________
Please make sure to indicate CNS*93 and YOUR name
on all money transfers
Did you submit an abstract & summary ? ( ) yes ( ) no
title :
Do you have special dietary preferences or restrictions
(e.g., diabetic, low
sodium, kosher, vegetarian)? If so, please note:
Some grants to cover partial travel expenses may become
available. Do you wish to apply for a grant ? ( ) yes
( ) no
------------------------------
Subject: CLNL'93 - Revised deadline
From: Russell Greiner <greiner@learning.siemens.com>
Date: Tue, 29 Jun 93 23:25:52 -0500
re: deadlines for Computational Learning and Natural Learning (CLNL'93)
Due to popular request, we have decided to extend the deadline for CLNL'93
submissions by one week, until 7 July 93. Below is the revised call for
papers, with updated "Important Dates" and "Programme Committee" entries,
as well as general registration information.
We look forward to receiving your papers, and also hope that you
will attend the workshop this September!
Russ Greiner
(Chair, CLNL'93)
CLNL'93 -- Call for Submissions
Computational Learning and Natural Learning
Provincetown, Massachusetts
10-12 September 1993
CLNL'93 is the fourth of an ongoing series of workshops designed to bring
together researchers from a diverse set of disciplines --- including
computational learning theory, AI/machine learning,
connectionist learning, statistics, and control theory ---
to explore issues at the intersection of theoretical learning research
and natural learning systems.
Theme:
To be useful, the learning methods used by our fields must be able
to handle the complications inherent in real-world tasks. We therefore
encourage researchers to submit papers that discuss extensions to
learning systems that let them address issues such as:
* handling many irrelevant features
* dealing with large amounts of noise
* inducing very complex concepts
* mining enormous sets of data
* learning over extended periods of time
* exploiting large amounts of background knowledge
We welcome theoretical analyses, comparative studies of existing algorithms,
psychological models of learning in complex domains, and reports on relevant
new techniques.
Submissions:
Authors should submit three copies of an abstract (100 words or less) and a
summary (2000 words or less) of original research to:
CLNL'93 Workshop
Learning Systems Department
Siemens Corporate Research
755 College Road East
Princeton, NJ 08540-6632
by 7 July 1993 (the revised deadline). We will also accept plain-text, stand-alone LaTeX
or Postscript submissions sent by electronic mail to
clnl93@learning.scr.siemens.com
Each submission will be refereed by the workshop organizers and evaluated
based on its relevance to the theme, originality, clarity, and significance.
Copies of accepted abstracts will be distributed at the workshop, and
MIT Press has agreed to publish an edited volume that incorporates papers
from the meeting, subject to revisions and additional reviewing.
Invited Talks:
Tom Dietterich Oregon State University
Ron Rivest Massachusetts Institute of Technology
Leo Breiman University of California, Berkeley
Yann le Cun Bell Laboratories
Important Dates:
Deadline for submissions: 7 July 1993
Notification of acceptance: 27 July 1993
CLNL'93 Workshop: 10-12 September 1993
Programme Committee:
Andrew Barron, Russell Greiner, Steve Hanson, Robert Holte,
Michael Jordan, Stephen Judd, Pat Langley, Thomas Petsche,
Tomaso Poggio, Ron Rivest, Eduardo Sontag, Steve Whitehead
Workshop Sponsors:
Siemens Corporate Research and MIT Laboratory of Computer Science
CLNL'93
General Information
Dates:
The workshop officially begins at 9am Friday 10/Sept, and concludes by 3pm
Sunday 12/Sept, in time to catch the 3:30pm Provincetown-Boston ferry.
Location:
All sessions will take place in the Provincetown Inn (800 942-5388). We
encourage registrants to stay there; please sign up on the enclosed
registration form. Note that the $74/night rate corresponds to
$37/person per night if two people share one room (double occupancy).
Cost:
The cost to attend this workshop is $50/person in general; $25/student.
This includes
* attendance at all presentation and poster sessions, including the four
invited talks;
* the banquet dinner on Saturday night; and
* a copy of the accepted abstracts.
Transportation:
Provincetown is located at the very tip of Cape Cod, jutting into the
Atlantic Ocean. The drive from Boston to Provincetown requires
approximately two hours. There is also a daily ferry (run by Bay State
Cruise Lines, 617 723-7800) that leaves Commonwealth Pier in Boston Harbor
at 9:30am and arrives in Provincetown at 12:30pm; the return trip departs
Provincetown at 3:30pm, arriving at Commonwealth Pier at 6:30pm. Its cost
is $15/person, one way. There are also cabs, busses and commuter airplanes
(CapeAir, 800 352-0714) that service this Boston-Provincetown route.
Reception (Tentative):
If there is sufficient interest (as indicated by signing up on the form
below), we will hold a reception on a private ferry that leaves Commonwealth
Pier for Provincetown at 6:30pm 9/Sept. The additional (Siemens-subsidized)
cost for ferry and reception is $40/person, which also includes the return
Provincetown-Boston ferry trip on 12/Sept. You must sign up by 30/June;
we will announce by 13/July whether this private ferry will be used (and
refund the money otherwise).
Inquiries:
For additional information about CLNL'93, contact
clnl93@learning.scr.siemens.com
or the above address. To learn more about Provincetown, contact their
Chamber of Commerce at 508 487-3424.
CLNL'93 Registration
Name: ________________________________________________
Affiliation: ________________________________________________
Address: ________________________________________________
________________________________________________
Telephone: ____________________ E-mail: ____________________
Select the appropriate options and fees:
Workshop registration fee ($50 regular; $25 student) ___________
Ferry transportation + reception ($40) ___________
Hotel room(*) ($74 = 1 night deposit) ___________
Arrival date ___________ Departure date _____________
Name of person sharing room (optional) __________________
# of breakfasts desired ($7.50/bkfst; no deposit req'd) ___
Total amount enclosed: ___________
(*) This is at the Provincetown Inn, for a minimum stay of 2 nights.
The total cost for three nights is $222 = $74 x 3, plus optional breakfasts.
The block of rooms held for CLNL'93 will be released on 30 June 93; room
reservations received after this date are accepted subject to availability.
See hotel for cancellation policy.
If you are not using a credit card, make your check payable in U.S. dollars
to "Provincetown Inn/CLNL'93", and mail your completed registration form to
Provincetown Inn/CLNL
P.O. Box 619
Provincetown, MA 02657.
If you are using Visa or MasterCard, please fill out the following,
which you may mail to above address, or FAX to 508 487-2911.
Signature: ______________________________________________
Visa/MasterCard #: ______________________________________________
Expiration: ______________________________________________
------------------------------
Subject: call for papers: SAC '94
From: MASETTI@BOLOGNA.INFN.IT
Date: Mon, 17 May 93 09:57:00 +0000
===========================================================
| |
| |
| |
| CALL FOR PAPERS |
| =============== |
| |
| 1994 ACM Symposium on Applied Computing (SAC'94) |
| |
| |
| TRACK ON FUZZY LOGIC IN APPLICATIONS |
| ------------------------------------ |
| |
| Phoenix Civic Plaza, Phoenix, Arizona, USA |
| |
| March 6-8, 1994 |
| |
===========================================================
SAC'94 is the annual conference of the ACM Special Interest Groups
on Applied Computing (SIGAPP), APL (SIGAPL), Biomedical Computing (SIGBIO),
Business Information Technology (SIGBIT), Computer Uses in Education (SIGCUE),
Forth (SIGFORTH), and Small and Personal Computing (SIGSMALL/PC).
Over the past nine years, SAC has become a primary forum for applied
computing practitioners and researchers.
Once again SAC'94 will be held in conjunction with the 1994 ACM Computer
Science Conference (CSC'94).
Fuzzy Logic in Applications is one of the major tracks in SAC.
The purpose of this track is to provide a forum for the interchange of
ideas, research, development activities, and applications among academics
and practitioners in areas related to Fuzzy Logic in Applications.
State-of-the-art and state-of-the-practice original papers relevant to
the track themes as well as panel proposals are solicited.
RELEVANT TOPICS:
Applications of Fuzzy Systems to:
- System Control - Signal Processing
- Intelligent Information Systems - Image Understanding
- Case-Based Reasoning - Pattern Recognition
- Decision Making and Analysis - Robotics and Automation
- Modelling - Medical Diagnostic and MRI
- Databases and Information Retrieval - Evolutionary Computation
- Neural Systems
IMPORTANT DATES:
Submission of draft papers: 17.09.1993
Notification of acceptance: 01.11.1993
Camera-ready copy due: 20.11.1993
TRACK CHAIR:
Madjid Fathi
FB Informatik, LS1
P.O.BOX 500 500
University of Dortmund
D-4600 Dortmund 50
Germany
Tel: +49231-7556372
FAX: +49231-7556555
Email: fathi@ls1.informatik.uni-dortmund.de
HONORARY ADVISOR :
Lotfi A. Zadeh, University of California, Berkeley
TRACK ADVISORY:
Y. Attikiouzel, Univ. of Western Australia
H. Berenji, NASA Ames Division, AI Research, CA, USA
M. Jamshidi, Univ. of New Mexico, NM, USA
A. Kandel, Univ. of South Florida, USA
R. Kruse, Univ. of Braunschweig, Germany
E.H. Mamdani, Univ. of London, GB
M. Masetti, Univ. of Bologna, Italy
H. Prade, Univ. of Paul Sabatier, France
B. Reusch, Univ. of Dortmund, Germany
E.H. Ruspini, SRI International, USA
H. Tanaka, Univ. of Osaka, Japan
L. Valverde, Univ. de les Illes Baleares, Spain
R.R. Yager, Iona College, Editor-in-Chief, USA
H.J. Zimmermann, Univ. of Aachen, Germany
GUIDELINES FOR SUBMISSION
Several categories of papers will be considered for presentation and publication,
including:
(i) original and unpublished research articles,
(ii) reports of applications in
- business,
- government,
- industry,
- arts,
- science, and
- engineering.
Accepted papers will be published in the ACM/SAC'94 Conference Proceedings
to be printed by the ACM Press.
In order to facilitate the blind external review process, submission guidelines
must be strictly adhered to:
- Submit 5 copies of your manuscript to the track chair.
- Authors' names and addresses MUST NOT appear in the body of the paper,
self-reference must be in the third person, attribution to the author(s)
must be in the form of "author", and bibliographical entries by the
author(s) must also be in the form of "author".
- The body of the paper should not exceed 5,000 words
(approximately 20 double-spaced pages).
- A separate cover sheet should be attached to each copy, containing
- the title of the paper,
- the author(s) and affiliation(s),
- and the address (including e-mail address and fax number, if available)
to which correspondence should be sent.
- Panel proposals must include an abstract of the topics and a copy of
the moderator's resume/vita.
------------------------------
Subject: PASE'93
From: ff@lri.fr
Date: Mon, 24 May 93 17:56:59 +0100
First Announcement
PASE '93
4th International Workshop on Parallel Applications
in Statistics and Economics
>> Exploration of Complex Systems Dynamics <<
Ascona, Switzerland, November 22-26, 1993
Centro Stefano Franscini, Monte Verita
The purpose of this workshop is to bring together researchers interested
in innovative information processing systems and their applications in
the areas of statistics, finance and economics. The focus will be on
in-depth presentations of state-of-the-art methods and applications as
well as on communicating current research topics. This workshop is
intended for industrial and academic persons seeking new ways of
comprehending the behavior of dynamic systems. The PASE '93 workshop is
concerned with but not restricted to the following topics:
o Artificial Neural Networks
o Dynamical and Chaotic Systems
o Fuzzy Logic
o Genetic Algorithms
o Stochastic Optimization
Organizing Committee:
M. Dacorogna, O&A Zurich H. Beran, ICS Prague
F. Murtagh, Munotec Munich M. Hanf, IPS ETH Zurich
E. Pelikan, ICS Prague A. Scheidegger, CSCS Manno
D. Wuertz, IPS ETH Zurich M. Tomassini, CSCS Manno
For further information and registration, please contact
Hynek Beran, ICS Prague
Pod vodarenskou vezi 2
182 07 PRAGUE 8, Czech Republic
FAX: +42 2 858 57 89
E-mail: pase@uivt1.uivt.cas.cs
and for local arrangements
Marco Tomassini, CSCS Manno
Galleria 2, Via Cantonale
6928 MANNO, Switzerland
FAX: +41 91 506711
E-mail: pase@cscs.ch
The workshop will be held near Ascona, an attractive holiday resort in
Ticino, the Italian-speaking canton of Switzerland. In keeping with the
tradition of the PASE workshop, an art exhibition as well as other social
events will be organized.
Further information will be available from anonymous ftp:
ftp maggia.ethz.ch (129.132.17.1)
"Instructions to Authors"
***** ASCII *****
____________________________________________________________________________
INSTRUCTIONS TO AUTHORS
1. Manuscript
Two copies of the manuscript should be submitted to the
Organizing Committee (See the Contact Address). Manuscripts must
be submitted in English.
2. Copyright
Original papers (neither published elsewhere nor simultaneously submitted
to another journal) will be reviewed. Copyright for published
papers will be vested in the publisher.
3. Text
Type the text double-spaced on one side of the sheet only, with a margin
of at least 5 cm (2") on the left. Titles of chapters and
paragraphs should appear clearly distinguished from the text.
Complete text records on 5 1/4" floppy discs are preferred.
4. Equations
Mathematical equations inserted in the text must be clearly
formulated, in such a manner that there can be no possible doubt
about the meaning of the symbols employed.
5. Figures and Tables
The figures, if any, must be clearly numbered and their
position in the text marked. They should be drawn in Indian ink on
white paper or tracing paper, bearing in mind that they will be
reduced to a width of either 7.5 or 15 cm (3 or 6") for printing.
After scaling down, normal lines ought to have a minimum
thickness of 0.1 mm and a maximum of 0.3 mm, while lines for which
emphasis is wanted can reach a maximum thickness of 0.5 mm.
Labelling of the figures must be easily legible after reduction. It
should, as far as possible, be placed across the width of the diagram
from left to right. The height of the characters after scaling
down must not be less than 1 mm. Photographs for insertion in the
text should be well defined and printed on glossy white paper, and
will be scaled down for printing to a width of 7.5 to 15 cm (3 to
6"). All markings on photographs are covered by the same
recommendations as for figures. It is recommended that authors of
communications accompany each figure or photograph with a
descriptive title giving sufficient information on the content of
the picture. Tables of characteristics or values inserted in the
text or appended to the article must be prepared in a clear
manner, preferably as camera-ready text. Should a table need
several pages, these must be kept together by sticking or other
appropriate means in such a way as to emphasize the unity of the
table.
6. Abstract and Required Information
An abstract of 10 to 20 typed lines, written by the author in
English, will precede and introduce each article. Provide the
title, authors, affiliation, and date of dispatch on a separate sheet
with the exact mailing address for correspondence.
7. References
The references should be cited sequentially, with the
reference numbers in square brackets. References must
be listed at the end of the paper in numerical order.
Example for references to articles from journals
[1] Dawes E.M., Corrigan K.: Linear models. Trans. JSCM, 30,
1991, 42-51.
Example for references to articles from proceedings
[2] Brown B.: Linear models. In: Proc. of the IEEE International
Conference, A. Priets (Ed.), IEEE Press, New York, 1988, 607-610.
Example for references to books
[3] Bollobes B.: Extremal Graph Theory. Springer, New York, 1988.
All references should be indicated in the manuscript.
*** REMARKS FOR THE SPECIAL ISSUE ***
The Organizing Committee requests contributions on a PC-compatible
3 1/2" or 5 1/4" floppy disk in ASCII form.
Adding the TeX file is welcome. For figures, PostScript
files can also be enclosed. Please send the hardcopy together
with your floppy disk. Sending your contribution in any other form
may cause problems for publishing.
In case of any problems, please contact the Organizing
Committee:
Hynek Beran, ICS Prague
Pod vodarenskou vezi 2
182 07 PRAGUE 8, Czech Republic
FAX: +42 2 858 57 89
E-mail: pase@uivt1.uivt.cas.cs
Your contributions are to be sent to the same address.
================================================
------------------------------
End of Neuron Digest [Volume 11 Issue 42]
*****************************************
Received: from BUACCA by BUACCA.BU.EDU (Mailer R2.08 PTF009) with BSMTP id
0517; Fri, 09 Jul 93 13:32:44 EDT
Received: from noc4.dccs.upenn.edu by BUACCA.BU.EDU (IBM VM SMTP R1.2.1) with
TCP; Fri, 09 Jul 93 13:32:36 EDT
Received: from CATTELL.PSYCH.UPENN.EDU by noc4.dccs.upenn.edu
id AA08659; Fri, 9 Jul 93 13:29:21 -0400
Return-Path: <marvit@cattell.psych.upenn.edu>
Received: from LOCALHOST by cattell.psych.upenn.edu
id AA04255; Fri, 9 Jul 93 12:17:11 EDT
Posted-Date: Fri, 09 Jul 93 12:16:27 -0400
From: "Neuron-Digest Moderator" <neuron-request@cattell.psych.upenn.edu>
To: Neuron-Distribution:;
Subject: Neuron Digest V11 #43 (jobs & misc)
Reply-To: "Neuron-Request" <neuron-request@cattell.psych.upenn.edu>
X-Errors-To: "Neuron-Request" <neuron-request@cattell.psych.upenn.edu>
Organization: University of Pennsylvania
Date: Fri, 09 Jul 93 12:16:27 -0400
Message-Id: <4246.742234587@cattell.psych.upenn.edu>
Sender: marvit@cattell.psych.upenn.edu
Neuron Digest Friday, 9 Jul 1993
Volume 11 : Issue 43
Today's Topics:
job openings
Help: Research on Neural Robot Systems that Learn to Behave?
Cultured Neural Nets
NN and sismo. : results
Reinforcement Learning Mailing List
Kolmogorov's Theorem, real world applications
Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@cattell.psych.upenn.edu". The ftp archives are
available from cattell.psych.upenn.edu (130.91.68.31). Back issues
requested by mail will eventually be sent, but may take a while.
----------------------------------------------------------------------
Subject: job openings
From: John Ostrem <cic!john!ostrem@unix.sri.com>
Date: Tue, 22 Jun 93 13:48:32 -0800
Communication Intelligence Corporation (CIC) is a leader in handwriting
recognition and other pen input technologies. We currently market
recognizers for English, Western European, and Asian languages on a variety
of platforms (e.g., DOS, Windows, Macintosh, and so on). These systems enable
the pen to serve as the sole input and control device, combining the functions
of both keyboard and mouse, and adding new capabilities.
Advanced development is directed toward integrated discrete/cursive
recognizers, and future integration with voice recognition, OCR, and
similar technologies.
CIC was founded in 1981 in conjunction with SRI International (formerly
Stanford Research Institute). CIC is headquartered in Redwood Shores,
California, and has an international subsidiary, CIC Japan, Inc., in
Tokyo, Japan.
CIC currently has immediate openings for the following positions:
- -----------------------------------------------------------------------------
POSITION: Software Engineer
QUALIFICATIONS:
1. 3-5 years experience in designing and coding for large software projects
in a UNIX environment
2. Good communication skills; works well with other people
3. Expert C programmer (at least 3-5 years experience)
4. BS or MS in Computer Science or the equivalent
5. Experience in graphics programming and user interfaces a plus
6. The following are additional pluses:
a. Experience in handwriting recognition (on-line or off-line)
b. Linguistic experience (particularly statistical linguistics)
c. Experience planning/executing complex projects
d. Experience in commercial companies
e. Experience in SunOS system administration
JOB DESCRIPTION:
1. Work with and support researchers working on handwriting and speech
recognition
2. Design, implement, and support data collection software and analysis
tools
- -----------------------------------------------------------------------------
POSITION: Pattern Recognition Specialist/project leader
QUALIFICATIONS:
1. Strong background in statistics, pattern recognition, algorithm development
2. Experience in OCR a plus
3. 3-5 years experience in designing and coding for large software projects
in a UNIX environment
4. Good communication skills; works well with other people
5. Expert C programmer (at least 3-5 years experience)
6. Ph.D. or substantial experience in Computer Science, Electrical
Engineering or the equivalent
7. The following are additional pluses:
a. Experience in handwriting recognition (on-line or off-line)
b. Linguistic experience (particularly statistical linguistics)
c. Experience planning/executing complex projects
d. Experience in commercial companies
JOB DESCRIPTION
1. Work with a team of researchers on the next generation of handwriting
recognition systems (both off-line and on-line) for the commercial
market
2. Develop into a project leader/manager
- -----------------------------------------------------------------------------
Please reply to cic!ostrem@unix.sri.com (or cic\!ostrem@unix.sri.com in
command mode), or write or fax to
John S. Ostrem
Communication Intelligence Corporation
275 Shoreline Drive, 6th Floor
Redwood Shores, CA 94065-1413
Fax: (415) 802-7777
------------------------------
Subject: Help: Research on Neural Robot Systems that Learn to Behave?
From: ashley@cs.unsw.oz.au (Ashley Aitken)
Date: Mon, 05 Jul 93 19:29:16 +0900
G'day,
I am interested in finding out about research labs around the world that are
actively working in the following research area,
Real or more-likely simulated Robots, or part thereof, which are capable of
Learning Sensory-Motor Maps and Complex Behaviours, and to some degree
Based on Neuroscience (ie biologically plausible neural networks).
Two researchers that come to mind are Gerald Edelman (Neural Darwin Systems)
and Michael Kuperstein (Neural Model of Adaptive Hand-Eye Coordination).
If you could please e-mail me any details I would be most grateful. If there
is enough interest I will post a summary.
Thanks in advance,
Ashley Aitken.
- --
E-MAIL : ashley@cse.unsw.edu.au AARNet
Schools of EE and CS&E, (AI Lab) c/o Basser College, (Flat 7A)
University of New South Wales, The Kensington Colleges,
Box 1,PO KENSINGTON,N.S.W.,2033. Box 24,PO KENSINGTON,N.S.W,2033.
AUSTRALIA. AUSTRALIA.
------------------------------
Subject: Cultured Neural Nets
From: Steve Potter <spotter@darwin.bio.uci.edu>
Date: Fri, 02 Jul 93 12:26:17 -0800
Below I present a bibliography of all of the researchers I know of that
are growing neurons in culture on multielectrode substrates. A belated
thank-you is due to several connectionists who responded to my request
posted a couple of years ago. This is a
surprisingly small list. If you know of someone I have missed, please
send me email (spotter@darwin.bio.uci.edu).
I believe that approaches such as these are likely to close the gap
between the engineering and biological camps of neural network research.
With long-term, multi-site monitoring of real (though simple) networks, we
may learn which aspects of real neural processors must be included in our
simulations if we hope to emulate the accomplishments of Mother Nature.
If you are involved in this research and I have not contacted
you already, please email me; I am looking for a post-doctoral position.
Steve Potter
Psychobiology dept.
UC Irvine
Irvine, CA 92717
spotter@darwin.bio.uci.edu
CULTURED NETS ON MULTI-ELECTRODE SUBSTRATES:
(Recent or representative publications are listed)
Steve Potter 7-2-93
spotter@darwin.bio.uci.edu
Masuo Aizawa
(Layman's article)
Freedman, D.H. (1992). If he only had a brain. Discover : 54-60.
Robert L. Dawes, Martingale Research (Texas)
(Proposal--Never followed up?)
Dawes, R.L. (1987). Biomasscomp: A procedure for mapping the architecture
of a living neural network into a machine. IEEE ICNN proceedings 3: 215-225.
Mitch D. Eggers, MIT
(Any subsequent work with this device?)
Eggers, M.D., Astolfi, D.K., Liu, S., Zeuli, H.E., Doeleman, S.S., McKay,
R., Khuon, T.S., and Ehrlich, D.J. (1990). Electronically wired petri
dish: A microfabricated interface to the biological neuronal network. J.
Vac. Sci. Technol. B 8: 1392-1398.
Peter Fromherz, Ulm University (Germany)
Fromherz, P., Offenhausser, A., Vetter, T., and Weis, J. (1991). A
neuron-silicon junction: a Retzius cell of the leech on an insulated-gate
field-effect transistor. Science 252: 1290-3.
Guenter W. Gross, U. of N. Texas
Gross, G.W. and Kowalski, J. (1991) Experimental and theoretical analysis
of random nerve cell network dynamics, in Neural Networks: Concepts,
applications, and implementations (P. Antognetti and B Milutinovic, Eds.)
Prentice-Hall: NJ. p. 47-110.
Vera Janossy, Central Research Inst. for Physics (Hungary)
Janossy, V., Toth, A., Bodocs, L., Imrik, P., Madarasz, E., and Gyevai, A.
(1990). Multielectrode culture chamber: a device for long-term recording
of bioelectric activities in vitro. Acta Biol Hung 41: 309-20.
Akio Kawana, NTT (Japan)
(News article)
Koppel, T. (1993). Computer firms look to the brain. Science 260: 1075-1077.
Jerome Pine, Caltech
Regehr, W.G., Pine, J., Cohan, C.S., Mischke, M.D., and Tank, D.W. (1989).
Sealing cultured invertebrate neurons to embedded dish electrodes
facilitates long-term stimulation and recording. J Neurosci Methods 30:
91-106.
David W. Tank, AT&T Bell Labs
(Abstract)
Tank, D.W. and Ahmed, Z. (1985). Multiple site monitoring of activity in
cultured neurons. Biophys. J. 47: 476a.
C. D. W. Wilkinson, U. of Glasgow (Scotland)
Connolly, P., Clark, P., Curtis, A.S., Dow, J.A., and Wilkinson, C.D.
(1990). An extracellular microelectrode array for monitoring electrogenic
cells in culture. Biosens Bioelectron 5: 223-34.
Curtis, A.S., Breckenridge, L., Connolly, P., Dow, J.A., Wilkinson, C.D.,
and Wilson, R. (1992). Making real neural nets: design criteria. Med Biol
Eng Comput 30: CE33-6.
ACUTE PREPS (NOT CULTURED):
Bruce C. Wheeler, U. of Illinois
(Hippocampal slice)
Boppart, S.A., Wheeler, B.C., and Wallace, C.S. (1992). A flexible
perforated microelectrode array for extended neural recordings. IEEE Trans
Biomed Eng 39: 37-42.
Novak, J.L. and Wheeler, B.C. (1986). Recording from the Aplysia abdominal
ganglion with a planar microelectrode array. IEEE Trans Biomed Eng 33:
196-202.
Markus Meister, Harvard
Meister, M., Wong, R.O., Baylor, D.A., and Shatz, C.J. (1991). Synchronous
bursts of action potentials in ganglion cells of the developing mammalian
retina. Science 252: 939-43.
Litke, A. and Meister, M. (1991). The retinal readout array. Nuclear
Instruments and Methods in Physics Research A310: 389-394.
------------------------------
Subject: NN and sismo. : results
From: slablee@mines.u-nancy.fr
Date: Tue, 06 Jul 93 10:29:02 +0700
I posted a request for help in February 1993, and promised to
sum up the answers as soon as possible.
Well, I really got many answers, and I finally found a little
time to give you these results.
My problem was :
I'm trying to use NN to detect the start of a sampled signal
within noise.
My first attempts with a BackProp network (7520x150x70x1 !!!)
were unsuccessful, because of network "paralysis" (as described
by Rumelhart) and local minima. The network always stopped
learning with a rather high error.
After having received all the answers, I decided to use the
Cascade-Correlation of Scott Fahlman, which worked very well...
Let me first thank all the following people for having replied
to my request:
Rick Alan (IEEE Neural Network Council, USA)
- 70324.1625@compuserve.com -
Frederic Alexandre (CRIN (Computer Research Center), Nancy, France)
- falex@loria.crin.fr -
Bill Armstrong (University of Alberta, Canada)
- arms@cs.ualberta.ca -
Paul Bakker (University of Queensland, St Lucia, Australia)
- bakker@cs.uq.oz.au -
Bart Bartholomew (US National Computer Security Center, Meade, Maryland, USA)
- hcbarth@afterlife.ncsc.mil -
Istvan Berkeley (University of Alberta, Canada)
- istvan@psych.ualberta.ca -
Weiren Chang (University of Texas, USA)
- wrchang@ccwf.cc.utexas.edu -
Terry Chung (Queen's University, Canada)
- terry@solar.me.queensu.ca -
Michel Ianotto (Supelec, Metz, France)
- mi@ese-metz.fr -
Charles W.Lee (Bolton Institute, Bolton, UK)
- (helped me by mail (I mean "snail-mail" !),
I didn't find any e-mail address...) -
Stan Malyshev (University of California, Berkeley, USA)
- stas@sting.berkeley.edu -
William A. Rennie (University of Albany, New York, USA)
- wr5908@csc.albany.edu -
George Rudolph (Brigham Young University, Provo, Utah, USA)
- george@axon.cs.byu.edu -
Soheil Shams (Hughes Research Labs, Malibu, California, USA)
- shams@maxwell.hrl.hac.com -
- ----------------------------------------------------------------
This project was led by INERIS (Institut National de
l'Environnement Industriel et des Risques = National Institute
of Industrial Environment and Risks), and help was offered by
the Earth Science Department (Earth Mechanics Laboratories) of the
Ecole des Mines de Nancy (Nancy, France).
It led, among other things, to a study of the possibility of
using NNs to detect the start of a seismic wave (P-wave)
within a signal with a lot of noise.
This study showed that using NNs could bring better results to this
P-wave detection problem.
So we built a connectionist system called SCOP (Systeme Connexionniste
de Pointage).
This system is going to be improved and tested soon.
It uses the Cascade-Correlation algorithm (from Scott E.Fahlman - Carnegie
Mellon University, Pittsburgh, PA).
I hope further studies will show other uses of this system (in the same
field).
- -------------------------------------------------------------------------
I worked on an HP9000-720 computer, under UNIX.
All parts of the system were developed in ANSI C.
I pre-processed the signal with the Gabor method (a "sliding" FFT)
which gives a 3-dimensional representation of the spectrum
(a time-frequency diagram).
We will perhaps also study the results of using a wavelet preprocessing.
The NN takes a 400-point input (each point = the spectral value at a
given time for a given frequency).
So the NN has, after learning, about 410 units.
There are 5 outputs. Each output unit represents a time window containing
the start of the P-wave. In the learning patterns, there were four zeros
and one 1. The output unit with the value 1 gives the window which
must contain the start of the P-wave.
The results were 89.8 % correct answers for a 1.3-second window
(versus 83 % with the "classical" (non-NN) algorithms).
The problem was the lack of data for learning. The network learned
with about 3000 patterns and was tested with about 500 patterns.
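To make the preprocessing concrete, here is a minimal sketch of a
Gabor-style "sliding" FFT in Python/NumPy. It is an illustration only:
the window length, hop size, and the particular 400 time-frequency
points selected are hypothetical stand-ins, not the actual SCOP
parameters (which are not given here).

    import numpy as np

    def sliding_fft(signal, window=256, hop=64):
        # Windowed FFT magnitudes over successive frames: a time-frequency map.
        frames = []
        for start in range(0, len(signal) - window + 1, hop):
            frame = signal[start:start + window] * np.hanning(window)
            frames.append(np.abs(np.fft.rfft(frame)))
        return np.array(frames)   # shape: (time steps, frequency bins)

    # Toy input: noise with a late-onset oscillation standing in for a P-wave.
    rng = np.random.default_rng(0)
    sig = rng.normal(0.0, 1.0, 4096)
    sig[3000:] += np.sin(2 * np.pi * 0.2 * np.arange(1096))

    tf = sliding_fft(sig)
    # Hypothetical selection of 20 times x 20 frequencies = 400 input values.
    inputs = tf[:20, :20].flatten()
    print(inputs.shape)           # (400,)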
- --------------------------------------------------------------------------
If you want more details about this project, please e-mail to :
slablee@mines.u-nancy.fr
(BE CAREFUL : until August 1st only)
or
gueniffe@mines.u-nancy.fr after August 1st
(please explain that the message is for me !)
- --------------------------------------------------------------------------
SOME OF THE ANSWERS
====================
Terry Chung advised me to use Scott Fahlman's Cascade-Correlation,
because the results I had could be the best performance I could get
with the BackProp structure: backprop NNs have a fixed structure, so
we can just "fine-tune" an initial "guess"...
Bart Bartholomew spent a long time to explain me his experiences with
the pre-processing problems. He noticed that the zeroth component is
actually the D.C. bias of the input terms and can normally be discarded
here. Another idea would be to use the differences of the frequencies
rather than the frequencies themselves. I didn't have yet enough time for
trying this. Bart spoke also of filters. Sorry Bart, I didn't find enough
time again !
Weiren Chang spoke about simulated annealing or perturbations of the
weights in order to avoid local minima. The problem is: the larger
the network, the more easily you run into a local minimum. And my first
network was really huge!
Rick Alan said that the key to getting a net to learn is preprocessing
the data properly. I fully agree with this opinion: the REAL problem
in NN learning is preprocessing.
Paul Bakker spoke about the "relativity" of the error (this word is mine!).
What he means is: I said that I have a 5 % error, but this could be of
no importance if the result is 0.95 for a 1 target, and 0.5 for a 0 target.
The problem was that with back-prop I was using a real number as output,
between 0 and 1, and not a binary answer (like bits). Now, I've
changed my outputs, and they are bits, so Paul's words were right.
William A. Rennie had the same idea as Paul Bakker about binary outputs.
He also spoke of OVERTRAINING. It is perhaps the thing that helped me
the most, because overtraining is a real problem that I hadn't seen.
The problem is that the net can start to memorize the training data
rather than generalizing. William says that my training set would
probably have to contain over half a million cases to prevent
memorization (he was speaking about my 7520x150x70x1 backprop net,
not about the later one). There is a way to avoid this: compute
the error on the testing set at each iteration, and stop the
training when the performance on the training set continues to climb
rapidly while performance on the testing data remains unchanged (that
is the beginning of overtraining).
I trained my net like this, and... IT WORKED!
The performance was really better.
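(In modern terms, William's recipe is "early stopping". Below is a generic
sketch of the procedure in Python; train_epoch and error are hypothetical
placeholders for the actual training and evaluation routines, and the net
is assumed to support copy(). This is an illustration, not SCOP's code.)

    def train_with_early_stopping(net, train_epoch, error,
                                  training_set, testing_set,
                                  max_epochs=1000, patience=10):
        # Halt when error on the held-out testing set stops improving
        # while training goes on: the onset of overtraining/memorization.
        best_err, best_net, bad_epochs = float("inf"), None, 0
        for _ in range(max_epochs):
            train_epoch(net, training_set)     # one pass of weight updates
            err = error(net, testing_set)      # held-out error this epoch
            if err < best_err:                 # still generalizing better
                best_err, best_net, bad_epochs = err, net.copy(), 0
            else:
                bad_epochs += 1
                if bad_epochs >= patience:     # sustained non-improvement
                    break
        return best_net                        # best-generalizing weights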
Bill Armstrong proposed to use the Atree Adaptive Logic Network
Simulation Package (available via ftp from menaik.cs.ualberta.ca
[129.128.4.241] in pub/atre27.exe).
Soheil Shams thinks (like others) that my network was huge and needed
a really large database of samples. He also says that it is important
to look at the net input sum to each neuron to make sure it is not
saturated.
Istvan Berkeley also said that my net was huge, and that my learning
set could just be unlearnable...
Stan Malyshev proposed using QuickProp instead of BackProp.
Well, the Cascade-Correlation algorithm I used after these answers
was using QuickProp.
He also advised putting my units into a single layer, for faster learning.
I did it... and he was right!
Michel Ianotto advised me to be careful in the choice of the activation function.
For George Rudolph, the problem is not necessarily BackProp, but
the purpose and data of my network, and the way I framed my problem.
And Charles W. Lee gave me a paper of his (to appear in 'Neural Networks')
called 'Learning in neural networks by using tangent planes to constraint
surfaces'. (I didn't find his e-mail, but I have his address and his phone
number.)
- ----------------------------------------------------------------------------
Thanks everybody !
The Cascade-Correlation Learning Architecture was developed by:
Scott E. Fahlman - School of Computer Science - Carnegie Mellon University -
5000 Forbes Avenue - Pittsburgh - PA 15213.
Internet : sef+@cs.cmu.edu Phone : 412 268-2575
C code for Cascade-Correlation and some papers about it can be found
by ftp at ftp.cs.cmu.edu [128.2.206.173] in afs/cs/project/connect/code
- ----------------------------------------------------------------------------
Stephane Lablee
slablee@mines.u-nancy.fr (until 01/08/93)
gueniffe@mines.u-nancy.fr (after 01/08/93)
Ecole des Mines de Nancy
Parc de Saurupt
54042 Nancy Cedex
France
- ----------------------------------------------------------------------------
- --
------------------------------
Subject: Reinforcement Learning Mailing List
From: Matthew McDonald <mafm@cs.uwa.edu.au>
Date: Thu, 08 Jul 93 13:34:17 +0700
This message is to announce an informal mailing list
devoted to reinforcement learning. The list is intended to provide an
informal, unmoderated forum for discussing subjects relevant to
research in reinforcement learning; in particular, discussion of
problems, interesting papers and that sort of thing is welcome.
Announcements and other information relevant to researchers in the
field are also welcome. People are encouraged to post abstracts of
recent papers or reports.
If you'd like to join the list, please send mail to
`reinforce-request@cs.uwa.edu.au'
Cheers,
- --
Matthew McDonald mafm@cs.uwa.oz.au
Nothing is impossible for anyone impervious to reason.
------------------------------
Subject: Kolmogorov's Theorem, real world applications
From: CMRGW@staffordshire.ac.uk
Date: Fri, 09 Jul 93 10:40:00 +0000
To: neuron-request@130.91.68.31
Subject: re Kolmogorovs Theorem
Status: R
In issue number 37 K. Maguire writes
> any real-world applications of Kolmogorovs Theorem <
In this month's issue of the Journal of Chemical Information and Computer
Sciences I have a paper titled "Predicting Phosphorus NMR Shifts Using
Neural Networks". In essence this paper demonstrates that nets can be
used to represent the continuous mapping that occurs between the
parameters of molecular structure (a la graph theory) and the NMR shift of
a central resonating atom. Currently there are several non-net methods for
predicting these shifts from the structure, but these are generally long
and have a high manual-analysis component, often taking weeks for any
one subclass of compounds. On an IRIS wd34, the net can find a decent
generalisation in several hours. The material in the JCICS paper was
submitted last June. To date our results indicate that we are very close
to the current state of the art in terms of prediction performance. Since
new compounds are continuously being discovered in chemistry, the
prediction methods currently used "go off" when new compounds are
discovered in the various subclasses. The manual derivation process must
then be repeated. The ability of the net to do this automatically is a
major step forward. We anticipate that in the next few years our net-based
method will find quite a lot of use.
Geoff West
------------------------------
End of Neuron Digest [Volume 11 Issue 43]
*****************************************
Received: from BUACCA by BUACCA.BU.EDU (Mailer R2.08 PTF009) with BSMTP id
8490; Thu, 15 Jul 93 17:51:07 EDT
Received: from noc4.dccs.upenn.edu by BUACCA.BU.EDU (IBM VM SMTP R1.2.1) with
TCP; Thu, 15 Jul 93 17:50:59 EDT
Received: from CATTELL.PSYCH.UPENN.EDU by noc4.dccs.upenn.edu
id AA16182; Thu, 15 Jul 93 17:48:55 -0400
Return-Path: <marvit@cattell.psych.upenn.edu>
Received: from LOCALHOST by cattell.psych.upenn.edu
id AA10924; Thu, 15 Jul 93 16:41:34 EDT
Posted-Date: Thu, 15 Jul 93 16:40:55 -0400
From: "Neuron-Digest Moderator" <neuron-request@cattell.psych.upenn.edu>
To: Neuron-Distribution:;
Subject: Neuron Digest V11 #44 (papers & TRs)
Reply-To: "Neuron-Request" <neuron-request@cattell.psych.upenn.edu>
X-Errors-To: "Neuron-Request" <neuron-request@cattell.psych.upenn.edu>
Organization: University of Pennsylvania
Date: Thu, 15 Jul 93 16:40:55 -0400
Message-Id: <10890.742768855@cattell.psych.upenn.edu>
Sender: marvit@cattell.psych.upenn.edu
Neuron Digest Thursday, 15 Jul 1993
Volume 11 : Issue 44
Today's Topics:
Administrivia - Paper announcements
Preprint available: A network for velocity vector-field correction
New Book and Videotape on Genetic Programming
Subject: Preprint available: On spike synchronization
Preprint: Computational Models of the Neural Bases of Learning and Memory
paper available - the Ghost in the Machine
thesis available
PhD dissertation available
Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@cattell.psych.upenn.edu". The ftp archives are
available from cattell.psych.upenn.edu (130.91.68.31). Back issues
requested by mail will eventually be sent, but may take a while.
----------------------------------------------------------------------
Subject: Administrivia - Paper announcements
From: "Neuron-Digest Moderator, Peter Marvit" <neuron@cattell.psych.upenn.edu>
Date: Thu, 15 Jul 93 16:27:10 -0500
Dear readers,
Many of you have noticed a lack of paper and technical report
announcements in the Neuron Digest for some time. As moderator, I have
tended to give priority to time-sensitive conference announcements and
general personal discussion (including job announcements). So, I have
had a significant backlog of papers and TRs. I will start rectifying the
absence. Unfortunately, many "pre-prints" will by now be available in the
journals or conference proceedings.
As a reminder, if someone advertises hard-copy, *PLEASE* do not
indiscriminately ask for copies. A deluge of requests from the net can
put a strain on a researcher's time and budget. Carefully read the
abstract and then decide whether it is worth someone's time to make a copy,
address an envelope, put on lots of stamps, and mail it to you. Please
instructions plus a publication fee. Please read the directions
carefully.
This is also a note to encourage electronic distribution of manuscripts,
either in Postscript form with figures or plain text.
Cheers and happy reading!
Peter
: Peter Marvit, Neuron Digest Moderator :
: Email: <neuron-request@cattell.psych.upenn.edu> :
: Courtesy of the Psychology Department, University of Pennsylvania :
: 3815 Walnut St., Philadelphia, PA 19104 w:215/898-6274 h:215/387-6433 :
------------------------------
Subject: Preprint available: A network for velocity vector-field correction
From: Alfred_Nischwitz <alfred@lnt.e-technik.tu-muenchen.de>
Date: Mon, 31 Aug 92 14:14:10 +0100
The following paper has been accepted for publication in the
proceedings of the International Conference on
Artificial Neural Networks '92 in Brighton:
Relaxation in 4D state space - A competitive network
approach to object-related velocity vector-field correction
by Helmut Gluender Institut fuer Medizinische Psychologie
Ludwig-Maximilians-Universitaet
Goethestrasse 31, D-8000 Muenchen 2, Germany
and Astrid Lehmann Lehrstuhl fuer Nachrichtentechnik
Technische Universitaet Muenchen
Arcisstrasse 21, D-8000 Muenchen 2, Germany
ABSTRACT:
A standard principle of (energy-)minimization is applied to the
problem of visual motion analysis. In contrast to well-known
mathematical optimization procedures and universal optimizing
networks it is proposed to use a problem-adapted network
architecture. Owing to the bilocal coincidence-type motion
detector considered here the task of object-related motion
analysis appears as a geometric correspondence problem. Hence,
the correct spatio-temporal correspondences between elements in
consecutive images must be selected from all possible ones. This
is performed by neighborhood operations that are repeatedly
applied to the instantaneous signal representation in the
space/velocity-domain until an estimate of the actual flow-field
is reached.
Hardcopies of the paper are available. Please send requests
to the following address in Germany:
Helmut Gluender
Institut fuer Medizinische Psychologie
Ludwig-Maximilians-Universitaet
Goethestrasse 31, D-8000 Muenchen 2, Germany
or via email to:
alfred@lnt.e-technik.tu-muenchen.de
communicated by Alfred Nischwitz
------------------------------
Subject: New Book and Videotape on Genetic Programming
From: John Koza <koza@CS.Stanford.EDU>
Date: Sun, 15 Nov 92 16:40:57 -0800
BOOK AND VIDEOTAPE ON GENETIC PROGRAMMING
A new book and a one-hour videotape (in VHS NTSC, PAL, and SECAM
formats) on genetic programming are now available from the MIT
Press.
NEW BOOK...
GENETIC PROGRAMMING: ON THE PROGRAMMING OF COMPUTERS BY
MEANS OF NATURAL SELECTION
by John R. Koza, Stanford University
The recently developed genetic programming paradigm provides a
way to genetically breed a computer program to solve a wide variety
of problems. Genetic programming starts with a population of
randomly created computer programs and iteratively applies the
Darwinian reproduction operation and the genetic crossover (sexual
recombination) operation in order to breed better individual
programs. The book describes and illustrates genetic programming
with 81 examples from various fields.
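As a toy illustration of the generate-evaluate-breed loop just described
(not the book's code: the program representation, fitness measure, and
crossover operator below are deliberately simplified), here is a short
Python sketch that evolves nested-list programs toward f(x) = x*x + x:

    import random

    def rand_prog(depth=3):
        if depth == 0 or random.random() < 0.3:
            return random.choice(["x", 1, 2])      # terminal set
        return [random.choice(["add", "mul"]),     # function set
                rand_prog(depth - 1), rand_prog(depth - 1)]

    def run(p, x):                # evaluate a program at the point x
        if p == "x":
            return x
        if isinstance(p, int):
            return p
        op, a, b = p
        return run(a, x) + run(b, x) if op == "add" else run(a, x) * run(b, x)

    def fitness(p):               # summed error over samples; lower is better
        return sum(abs(run(p, x) - (x * x + x)) for x in range(-5, 6))

    def rand_subtree(p):
        return p if not isinstance(p, list) else random.choice([p, p[1], p[2]])

    def crossover(p, q):          # graft a random subtree of q into a copy of p
        if not isinstance(p, list):
            return rand_subtree(q)
        child = list(p)
        child[random.randint(1, 2)] = rand_subtree(q)
        return child

    pop = [rand_prog() for _ in range(200)]
    for gen in range(30):         # keep the fitter half, breed the other half
        pop.sort(key=fitness)
        keep = pop[:100]
        pop = keep + [crossover(random.choice(keep), random.choice(keep))
                      for _ in range(100)]
    best = min(pop, key=fitness)
    print(best, fitness(best))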
840 pages. 270 Illustrations. ISBN 0-262-11170-5.
Contents...
1 Introduction and Overview
2 Pervasiveness of the Problem of Program Induction
3 Introduction to Genetic Algorithms
4 The Representation Problem for Genetic Algorithms
5 Overview of Genetic Programming
6 Detailed Description of Genetic Programming
7 Four Introductory Examples of Genetic Programming
8 Amount of Processing Required to Solve a Problem
9 Nonrandomness of Genetic Programming
10 Symbolic Regression - Error-Driven Evolution
11 Control - Cost-Driven Evolution
12 Evolution of Emergent Behavior
13 Evolution of Subsumption
14 Entropy-Driven Evolution
15 Evolution of Strategy
16 Co-Evolution
17 Evolution of Classification
18 Iteration, Recursion, and Setting
19 Evolution of Constrained Syntactic Structures
20 Evolution of Building Blocks
21 Evolution of Hierarchies of Building Blocks
22 Parallelization of Genetic Programming
23 Ruggedness of Genetic Programming
24 Extraneous Variables and Functions
25 Operational Issues
26 Review of Genetic Programming
27 Comparison with Other Paradigms
28 Spontaneous Emergence of Self-Replicating and Self-Improving
Computer Programs
29 Conclusions
Appendices contain simple software in Common LISP for
implementing experiments in genetic programming.
ONE-HOUR VIDEOTAPE...
GENETIC PROGRAMMING: THE MOVIE
by John R. Koza and James P. Rice, Stanford University
The one-hour videotape (in VHS NTSC, PAL, and SECAM formats)
provides a general introduction to genetic programming and a
visualization of actual computer runs for 22 of the problems
discussed in the book GENETIC PROGRAMMING: ON THE PROGRAMMING
OF COMPUTERS BY MEANS OF NATURAL SELECTION. The problems
include symbolic regression, the intertwined spirals, the artificial
ant, the truck backer upper, broom balancing, wall following, box
moving, the discrete pursuer-evader game, the differential pursuer-
evader game, inverse kinematics for controlling a robot arm,
emergent collecting behavior, emergent central place foraging, the
integer randomizer, the one-dimensional cellular automaton
randomizer, the two-dimensional cellular automaton randomizer,
task prioritization (Pac Man), programmatic image compression,
solving numeric equations for a numeric root, optimization of lizard
foraging, Boolean function learning for the 11-multiplexer, co-
evolution of game-playing strategies, and hierarchical automatic
function definition as applied to learning the Boolean even-11-
parity function.
- ---------------------------ORDER FORM----------------------
PHONE: 800-326-4471 TOLL-FREE or 617-625-8569
MAIL: The MIT Press, 55 Hayward Street, Cambridge, MA 02142
FAX: 617-625-9080
Please send
____ copies of the book GENETIC PROGRAMMING: ON THE
PROGRAMMING OF COMPUTERS BY MEANS OF NATURAL SELECTION by
John R. Koza (KOZGII) (ISBN 0-262-11170-5) @ $55.00.
____ copies of the one-hour videotape GENETIC PROGRAMMING: THE
MOVIE by John R. Koza and James P. Rice in VHS NTSC format
(KOZGVV) (ISBN 0-262-61084-1) @$34.95
____ copies of the videotape in PAL format (KOZGPV)
(ISBN 0-262-61087-6) @$44.95
____ copies of the videotape in SECAM format (KOZGSV)
(ISBN 0-262-61088-4) @$44.95.
Name __________________________________
Address_________________________________
City____________________________________
State_________________Zip________________
Country_________________________________
Phone Number ___________________________
$ _______ Total
$ _______ Shipping and Handling ($3 per item. Outside U.S. and
Canada, add $6 per item for surface rate or $22 per item for airmail)
$ _______ Canada - Add 7% GST
$ _______ Total due MIT Press
__ Payment attached (check payable to The MIT Press in U.S. funds)
__ Please charge to my VISA or MASTERCARD credit card
Number ________________________________
Credit Card Expires _________________________________
Signature ________________________________
------------------------------
From: Alfred_Nischwitz <alfred@lnt.e-technik.tu-muenchen.de>
Date: Wed, 13 Jan 93 14:08:00 +0100
Subject: Preprint available: On spike synchronization
The following paper will be published in
"Brain Theory - Spatio-Temporal Aspects of Brain Function"
edited by A.Aertzen & W. von Seelen, Elsevier, Amsterdam:
ON SPIKE SYNCHRONIZATION
by Helmut Gluender Institut fuer Medizinische Psychologie
Ludwig-Maximilians-Universitaet
Goethestrasse 31, D-8000 Muenchen 2, Germany
and Alfred Nischwitz Lehrstuhl fuer Nachrichtentechnik
Technische Universitaet Muenchen
Arcisstrasse 21, D-8000 Muenchen 2, Germany
ABSTRACT:
We start with historically founded reflections on the relevance
of synchronized activity for the neural processing of information and we
propose to differentiate between synchrony at the emitting and the
receiving side. In the main part we introduce model networks which consist
of chains of locally coupled and noisy spiking neurons. In the case of
lateral excitation without delay as well as for delayed lateral inhibition
these basic structures can turn homogeneous stimulations into synchronized
activity. The synchrony is maintained under temporally varying stimulations
thus evoking aperiodic spike fronts. Although we present some hypotheses,
the question of how the nervous system deals with this network property
remains to be answered.
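As a rough illustration of the kind of model network described above,
here is a minimal Python sketch of a chain of noisy leaky
integrate-and-fire neurons with undelayed lateral excitation. All
constants are illustrative assumptions, not the paper's parameters.

import random

# A chain of N noisy leaky integrate-and-fire neurons with undelayed
# nearest-neighbour lateral excitation, driven by a homogeneous noisy
# stimulus.  All constants are illustrative assumptions.

N, T = 20, 500            # number of neurons, time steps
LEAK, THRESH, W = 0.9, 1.0, 0.1

v = [0.0] * N             # membrane potentials
prev_spikes = [0] * N     # spikes emitted at the previous step

for t in range(T):
    drive = 0.12 + 0.05 * random.random()  # same stimulus for the whole chain
    spikes = [0] * N
    for i in range(N):
        lateral = W * sum(prev_spikes[j] for j in (i - 1, i + 1) if 0 <= j < N)
        v[i] = LEAK * v[i] + drive + random.gauss(0.0, 0.02) + lateral
        if v[i] >= THRESH:
            spikes[i] = 1
            v[i] = 0.0                     # reset after firing
    prev_spikes = spikes
    if sum(spikes) > N // 2:               # crude indicator of a spike front
        print('t=%3d: front, %d of %d neurons fired' % (t, sum(spikes), N))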
Hardcopies of the paper are available. Please send requests via
email or to the following address in Germany:
Alfred Nischwitz
Lehrstuhl fuer Nachrichtentechnik
Technische Universitaet Muenchen
Arcisstrasse 21, D-8000 Muenchen 2, F.R.Germany
email: alfred@lnt.e-technik.tu-muenchen.de
Alfred Nischwitz
------------------------------
From: Alfred_Nischwitz <alfred@lnt.e-technik.tu-muenchen.de>
Date: Tue, 19 Jan 93 10:56:51 +0100
Subject: Letter available: Spike-Synchronization mechanisms
The following short letter is published in the
German edition of Scientific American (in German):
SPIKE-SYNCHRONISATIONS MECHANISMEN
by Alfred Nischwitz Lehrstuhl fuer Nachrichtentechnik
Technische Universitaet Muenchen
Arcisstrasse 21, D-8000 Muenchen 2, Germany
and
Helmut Gluender Institut fuer Medizinische Psychologie
Ludwig-Maximilians-Universitaet
Goethestrasse 31, D-8000 Muenchen 2, Germany
ABSTRACT (translated from the German):
Drawings are used to explain synchronization and
desynchronization mechanisms for inhibitorily and
excitatorily coupled 'leaky integrate and fire'
model neurons.
Hardcopies of the paper are available. Please send requests via
email or to the following address in Germany:
Alfred Nischwitz
Lehrstuhl fuer Nachrichtentechnik
Technische Universitaet Muenchen
Arcisstrasse 21, D-8000 Muenchen 2, F.R.Germany
email: alfred@lnt.e-technik.tu-muenchen.de
Alfred Nischwitz
------------------------------
Subject: Preprint: Computational Models of the Neural Bases of Learning and
Memory
From: Mark Gluck <gluck@pavlov.rutgers.edu>
Date: Wed, 03 Feb 93 09:13:20 -0500
For (hard copy) preprints of the following article:
Gluck, M. A. & Granger, R. C. (1993). Computational models of the neural
bases of learning and memory. Annual Review of Neuroscience. 16: 667-706
ABSTRACT: Advances in computational analyses of parallel-processing have
made computer simulation of learning systems an increasingly useful tool
in understanding complex aggregate functional effects of changes in
neural systems. In this article, we review current efforts to develop
computational models of the neural bases of learning and memory, with a
focus on the behavioral implications of network-level characterizations
of synaptic change in three anatomical regions: olfactory (piriform)
cortex, cerebellum, and the hippocampal formation.
____________________________________
Send US-mail address to: Mark Gluck (Center for Neuroscience, Rutgers-Newark)
gluck@pavlov.rutgers.edu
------------------------------
Subject: paper available - the Ghost in the Machine
From: Andrew Wuensche <100020.2727@CompuServe.COM>
Date: 03 Jun 93 11:38:43 -0500
The Ghost in the Machine
========================
Cognitive Science Research Paper 281, University of Sussex.
The following paper describes recent work on the basins of attraction of
random Boolean networks, and its implications for memory and learning.
Currently only hard-copies are available. To request a copy, email
andywu@cogs.susx.ac.uk, or write to
Andy Wuensche, 48 Esmond Road, London W4 1JQ, UK
giving a surface mail address.
A B S T R A C T
---------------
The Ghost in the Machine
Basins of Attraction of Random Boolean Networks
This paper examines the basins of attraction of random Boolean networks,
a very general class of discrete dynamical systems, in which cellular
automata (CA) form a special sub-class. A reverse algorithm is presented
which directly computes the set of pre-images (if any) of a network's
state. Computation is many orders of magnitude faster than exhaustive
testing, making the detailed structure of random network basins of
attraction readily accessible for the first time. They are portrayed as
diagrams that connect up the network's global states according to their
transitions. Typically, the topology is branching trees rooted on
attractor cycles.
The homogeneous connectivity and rules of CA are necessary for the
emergence of coherent space-time structures such as gliders, the basis of
CA models of artificial life. On the other hand random Boolean networks
have a vastly greater parameter/basin field configuration space capable
of emergent categorisation.
I argue that the basin of attraction field constitutes the network's
memory; but not simply because separate attractors categorise state space
- in addition, within each basin, sub-categories of state space are
categorised along transient trees far from equilibrium, creating a
complex hierarchy of content addressable memory. This may answer a basic
difficulty in explaining memory by attractors in biological networks
where transient lengths are probably astronomical.
I describe a single step learning algorithm for re-assigning
pre-images in random Boolean networks. This allows the sculpting of their
basin of attraction fields to approach any desired configuration. The
process of learning and its side effects are made visible. In the context
of many semi-autonomous weakly coupled networks, the basin field/network
relationship may provide a fruitful metaphor for the mind/brain.
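To make the objects of study concrete, the following minimal Python
sketch computes the basin-of-attraction field of a small random
Boolean network by exhaustive forward iteration - the brute-force
approach the paper's reverse algorithm is designed to avoid, and
feasible only for tiny networks. Sizes and seeds are illustrative
assumptions.

import random
from itertools import product
from collections import Counter

# Exhaustively map the basin-of-attraction field of a small random
# Boolean network: N nodes, each reading K randomly chosen nodes
# through a random truth table.
N, K = 10, 3
random.seed(1)
wiring = [random.sample(range(N), K) for _ in range(N)]
tables = [{bits: random.randint(0, 1) for bits in product((0, 1), repeat=K)}
          for _ in range(N)]

def step(state):
    return tuple(tables[i][tuple(state[j] for j in wiring[i])]
                 for i in range(N))

# Global transition map over all 2**N states.
succ = {s: step(s) for s in product((0, 1), repeat=N)}

def attractor_of(s):
    # Iterate until the trajectory revisits a state, then label the
    # attractor cycle by its lexicographically smallest state.
    seen = set()
    while s not in seen:
        seen.add(s)
        s = succ[s]
    cycle, t = [s], succ[s]
    while t != s:
        cycle.append(t)
        t = succ[t]
    return min(cycle)

basins = Counter(attractor_of(s) for s in succ)
for label, size in basins.most_common():
    print('attractor %s  basin size %4d' % (''.join(map(str, label)), size))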
------------------------------
Subject: thesis available
From: "Egbert J.W. Boers" <boers@WI.leidenuniv.nl>
Date: Thu, 01 Jul 93 15:41:41 +0100
FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/boers.biological-metaphors.ps.Z
The file boers.biological-metaphors.ps.Z (104 pages) is now available for
copying from the Neuroprose repository:
Biological metaphors
and the design of modular
artificial neural networks
Egbert J.W. Boers, Herman Kuiper
Leiden University
The Netherlands
ABSTRACT: In this thesis, a method is proposed with which good modular
artificial neural network structures can be found automatically using a
computer program. A number of biological metaphors are incorporated in
the method. It will be argued that modular artificial neural networks
perform better than their non-modular counterparts. The human
brain can also be seen as a modular neural network, and the proposed
search method is based on the natural process that resulted in the brain:
Genetic algorithms are used to imitate evolution, and L-systems are used
to model the kind of recipes nature uses in biological growth.
A small number of experiments have been done to investigate the
possibilities of the method. Preliminary results show that the method
does find modular networks, and that those networks outperform 'standard'
solutions. The method looks very promising, although the experiments
done were too limited to draw any general conclusions. One drawback is
the large amount of computing time needed to evaluate the quality of a
population member; chapter 9 therefore gives a number of possible
improvements for increasing the speed of the method, as well as a
number of suggestions for further work.
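As a flavour of one ingredient of the method, here is a minimal
Python sketch of L-system rewriting. The production rules are toy
examples, not the thesis's grammar; in the thesis, rules of this kind
are evolved by a genetic algorithm and the resulting strings are
decoded into modular network architectures.

# Minimal L-system rewriting sketch with illustrative productions.
rules = {'A': 'AB', 'B': 'A'}

def rewrite(axiom, steps):
    s = axiom
    for _ in range(steps):
        # Apply all productions in parallel; symbols without a rule
        # are copied unchanged.
        s = ''.join(rules.get(c, c) for c in s)
    return s

for n in range(6):
    print(n, rewrite('A', n))
# Prints A, AB, ABA, ABAAB, ... - each string a recipe grown from a
# small genotype, which is the point of the biological metaphor.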
Unfortunately, I'm not in the position to distribute paper-copies of this
thesis. Questions and remarks are most welcome.
Egbert Boers
Leiden University
The Netherlands
boers@wi.LeidenUniv.nl
------------------------------
Subject: PhD dissertation available
From: SCHOLTES@ALF.LET.UVA.NL
Date: Mon, 08 Feb 93 11:36:00 +0700
===================================================================
Ph.D. DISSERTATION AVAILABLE
on
Neural Networks, Natural Language Processing, Information Retrieval
===================================================================
A Copy of the dissertation "Neural Networks in Natural Language Processing
and Information Retrieval" by Johannes C. Scholtes can be obtained for
cost price and fast airmail delivery at US$ 25.
Payment by major credit cards (VISA, AMEX, MC, Diners) is accepted and
encouraged. Please include name on card, number and expiry date. Your
credit card will be charged Dfl. 47,50.
Within Europe one can also send a Euro-Cheque for Dfl. 47,50 to:
University of Amsterdam
J.C. Scholtes
Dufaystraat 1
1075 GR Amsterdam
The Netherlands
Do not forget to mention a surface shipping address. Please allow 2-4
weeks for delivery.
Abstract
1.0 Machine Intelligence
For over fifty years the two main directions in machine intelligence
(MI), neural networks (NN) and artificial intelligence (AI), have been
studied by people from many different backgrounds. NN and AI
seemed to conflict with many of the traditional sciences as well as with
each other. The lack of a long research history and well defined
foundations has always been an obstacle for the general acceptance of
machine intelligence by other fields.
At the same time, traditional schools of science such as mathematics and
physics developed their own tradition of new or "intelligent"
algorithms. Progress made in the field of statistical reestimation
techniques such as the Hidden Markov Model (HMM) started a new phase in
speech recognition. Another application of the progress of mathematics
can be found in the application of the Kalman filter in the
interpretation of sonar and radar signals. Many more examples of such
"intelligent" algorithms can be found in the statistical classification
and filtering techniques of the study of pattern recognition (PR).
Here, the field of neural networks is studied with that of pattern
recognition in mind. Although only global qualitative comparisons are
made, the importance of the relation between them is not to be
underestimated. In addition it is argued that neural networks do indeed
add something to the fields of MI and PR, instead of competing or
conflicting with them.
2.0 Natural Language Processing
The study of natural language processing (NLP) is even older than that
of MI. As early as the beginning of this century, people tried to
analyse human language with machines. However, serious efforts had to
wait until the development of the digital computer in the 1940s, and
even then the possibilities were limited. For over 40 years, symbolic
AI has been the most important approach in the study of NLP. That this
has not always been the case may be concluded from the early work on
NLP by Harris. As a matter of fact, Chomsky's Syntactic Structures was
an attack on the lack of structural properties in the mathematical
methods used in those days. But as the latter's work became the
standard in NLP, the former was almost completely forgotten until
recently. As the scientific community in NLP devoted all its attention
to the symbolic AI-like theories, the only useful practical
implementations of NLP systems were those based on statistics rather
than on linguistics. As a result, more and more scientists are
redirecting their attention towards the statistical techniques
available in NLP. The field of connectionist NLP can be considered a
special case of these mathematical methods in NLP.
More than one reason can be given to explain this turn in approach. On
the one hand, many problems in NLP have never been addressed properly
by symbolic AI. Some examples are robust behavior in noisy
environments, disambiguation driven by different kinds of knowledge,
commonsense generalizations, and learning (or training) abilities.
On the other hand, mathematical methods have become much stronger and
more sensitive to specific properties of language such as hierarchical
structures.
Last but not least, the relatively high degree of success of mathematical
techniques in commercial NLP systems might have set the trend towards the
implementation of simple, but straightforward algorithms.
In this study, the implementation of hierarchical structures and
semantic features in mathematical objects such as vectors and matrices
is given much attention. These vectors can then be used in models such
as neural networks, but also in sequential statistical procedures
implementing similar characteristics.
3.0 Information Retrieval
The study of information retrieval (IR) was traditionally related to
libraries on the one hand and military applications on the other.
However, as PCs grew more popular, most ordinary users lost track of
the data they had produced over the last couple of years. This,
together with the introduction of various "small platform" computer
programs, made the field of IR relevant to ordinary users.
However, most of these systems still use techniques that were
developed over thirty years ago and that implement nothing more than a
global surface analysis of the textual (layout) properties. No deep
structure whatsoever is incorporated in the decision whether or not to
retrieve a text.
There is one large dilemma in IR research. On the one hand, the data
collections are so incredibly large, that any method other than a global
surface analysis would fail. On the other hand, such a global analysis
could never implement a contextually sensitive method to restrict the
number of possible candidates returned by the retrieval system.
As a result, all methods that use some linguistic knowledge exist only
in laboratories and not in the real world. Conversely, all methods that
are used in the real world are based on technological achievements from
twenty to thirty years ago.
Therefore, the field of information retrieval would be greatly indebted
to a method that could incorporate more context without slowing down. As
computers are only capable of processing numbers within reasonable time
limits, such a method should be based on vectors of numbers rather than
on symbol manipulations. This is exactly where the challenge is: on the
one hand keep up the speed, and on the other hand incorporate more
context. If possible, the data representation of the contextual
information must not be restricted to a single type of media. It should
be possible to incorporate symbolic language as well as sound, pictures
and video concurrently in the retrieval phase, although one does not know
exactly how yet...
Here, the emphasis is more on real-time filtering of large amounts of
dynamic data than on document retrieval from large (static) data bases.
By incorporating more contextual information, it should be possible to
implement a model that can process large amounts of unstructured text
without providing the end-user with an overkill of information.
4.0 The Combination
As this study is a very multi-disciplinary one, the risk exists that it
remains restricted to a surface discussion of many different problems
without analyzing one in depth. To avoid this, some central themes,
applications and tools are chosen. The themes in this work are
self-organization, distributed data representations and context. The
applications are NLP and IR, the tools are (variants of) Kohonen feature
maps, a well known model from neural network research.
Self-organization and context are more related to each other than one may
suspect. First, without the proper natural context, self-organization
will not be possible. Next, self-organization enables one to discover
contextual relations that were not known before.
Distributed data representation may solve many of the unsolved problems
in NLP and IR by introducing a powerful and efficient knowledge
integration and generalization tool. However, distributed data
representation and self-organization trigger new problems that should be
solved in an elegant manner.
Both NLP and IR work on symbolic language. Both have properties in common
but both focus on different features of language. In NLP hierarchical
structures and semantic features are important. In IR the amount of
data sets the limitations of the methods used. However, as computers grow
more powerful and the data sets get larger and larger, both approaches
get more and more common ground. By using the same models on both
applications, a better understanding of both may be obtained.
Both neural networks and statistics would be able to implement
self-organization, distributed data and context in the same manner. In
this thesis, the emphasis is on Kohonen feature maps rather than on
statistics. However, it may be possible to implement many of the
techniques used with regular sequential mathematical algorithms.
So, the true aim of this work can be formulated as the understanding of
self-organization, distributed data representation, and context in NLP
and IR, by in-depth analysis of Kohonen feature maps.
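For readers unfamiliar with the central tool, the following minimal
Python sketch shows a one-dimensional Kohonen feature map organizing
itself on points from the unit square. The learning-rate and
neighbourhood schedules and the sizes are illustrative assumptions,
not the dissertation's settings.

import math, random

# A 1-D Kohonen feature map with 10 units trained on random points
# from the unit square.
UNITS, STEPS = 10, 2000
w = [[random.random(), random.random()] for _ in range(UNITS)]

for t in range(STEPS):
    x = [random.random(), random.random()]           # training sample
    frac = t / float(STEPS)
    lr = 0.5 * (1.0 - frac)                          # decaying learning rate
    radius = max(1.0, (UNITS / 2.0) * (1.0 - frac))  # shrinking neighbourhood
    bmu = min(range(UNITS),                          # best-matching unit
              key=lambda i: (w[i][0] - x[0]) ** 2 + (w[i][1] - x[1]) ** 2)
    for i in range(UNITS):
        # Gaussian neighbourhood pulls units near the winner toward x.
        h = math.exp(-((i - bmu) ** 2) / (2.0 * radius ** 2))
        w[i][0] += lr * h * (x[0] - w[i][0])
        w[i][1] += lr * h * (x[1] - w[i][1])

print('trained codebook (should vary smoothly along the chain):')
for unit in w:
    print('  %.2f %.2f' % (unit[0], unit[1]))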
------------------------------
End of Neuron Digest [Volume 11 Issue 44]
*****************************************
Received: from BUACCA by BUACCA.BU.EDU (Mailer R2.08 PTF009) with BSMTP id
1703; Fri, 16 Jul 93 20:11:27 EDT
Received: from noc4.dccs.upenn.edu by BUACCA.BU.EDU (IBM VM SMTP R1.2.1) with
TCP; Fri, 16 Jul 93 20:11:19 EDT
Received: from CATTELL.PSYCH.UPENN.EDU by noc4.dccs.upenn.edu
id AA02595; Fri, 16 Jul 93 20:08:43 -0400
Return-Path: <marvit@cattell.psych.upenn.edu>
Received: from LOCALHOST by cattell.psych.upenn.edu
id AA07147; Fri, 16 Jul 93 18:58:38 EDT
Posted-Date: Fri, 16 Jul 93 18:57:57 -0400
From: "Neuron-Digest Moderator" <neuron-request@cattell.psych.upenn.edu>
To: Neuron-Distribution:;
Subject: Neuron Digest V11 #45 (Papers, jobs, school, and neuro)
Reply-To: "Neuron-Request" <neuron-request@cattell.psych.upenn.edu>
X-Errors-To: "Neuron-Request" <neuron-request@cattell.psych.upenn.edu>
Organization: University of Pennsylvania
Date: Fri, 16 Jul 93 18:57:57 -0400
Message-Id: <7112.742863477@cattell.psych.upenn.edu>
Sender: marvit@cattell.psych.upenn.edu
Neuron Digest Friday, 16 Jul 1993
Volume 11 : Issue 45
Today's Topics:
Administrivia - ND on holiday for two weeks
Neuroprose entry - extracting and learning "grammar"
Paper available in Neuroprose
TR - Models of Reading aloud
SUMMARY: From neurobiological to computational models - State of the art?
Re: SUMMARY: From neurobiological to computation
Post-doc in Neurophysiology...
PostDoc positions in Korea
Cambridge Neural Nets Summer School
POSITION AVAILABLE - STATISTICIAN
Research Opportunities in Neural Networks
Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@cattell.psych.upenn.edu". The ftp archives are
available from cattell.psych.upenn.edu (130.91.68.31). Back issues
requested by mail will eventually be sent, but may take a while.
----------------------------------------------------------------------
Subject: Administrivia - ND on holiday for two weeks
From: "Neuron-Digest Moderator, Peter Marvit"
<neuron@cattell.psych.upenn.edu>
Date: Fri, 16 Jul 93 18:44:01 -0500
Dear readers,
Due to a last minute change of plans, Neuron Digest will go on holiday
for the next two weeks (a little earlier and longer than I had planned).
I will return on August 3, so do not get worried if you do not hear from
me until later during that first week in August.
Thanks to all, as always, for your continued readership and support.
Apologies to contributors who must wait.
-Peter
: Peter Marvit, Neuron Digest Moderator :
: Email: <neuron-request@cattell.psych.upenn.edu> :
: Courtesy of the Psychology Department, University of Pennsylvania :
: 3815 Walnut St., Philadelphia, PA 19104 w:215/898-6274 h:215/387-6433 :
------------------------------
Subject: Neuroprose entry - extracting and learning "grammar"
From: giles@research.nj.nec.com (Lee Giles)
Date: Tue, 18 Feb 92 13:49:26 -0500
[[ Editor's Note: Personal apologies to Lee for the slight (!) delay. -PM ]]
The following paper has been placed in the Neuroprose archive.
Comments and questions are invited.
*******************************************************************
--------------------------------------------
EXTRACTING AND LEARNING AN "UNKNOWN" GRAMMAR
WITH RECURRENT NEURAL NETWORKS
--------------------------------------------
C.L. Giles*, C.B. Miller, D. Chen, G.Z. Sun, H.H. Chen, Y.C. Lee
NEC Research Institute *Institute for Advanced Computer Studies
4 Independence Way Dept. of Physics & Astronomy
Princeton, N.J. 08540 University of Maryland
giles@research.nj.nec.com College Park, Md 20742
___________________________________________________________________
-------------------------------------------------------------------
Abstract
--------
Simple second-order recurrent networks are shown to readily
learn small known regular grammars when trained with positive
and negative string examples. We show that similar methods
are appropriate for learning "unknown" grammars from examples
of their strings. The training algorithm is an incremental
real-time recurrent learning (RTRL) method that computes the
complete gradient and updates the weights at the end of each
string. After or during training, a dynamic clustering algorithm
extracts the production rules that the neural network has
learned. The methods are illustrated by extracting rules from
unknown deterministic regular grammars. For many cases the
extracted grammar outperforms the neural net from which it
was extracted in correctly classifying unseen strings.
(To be published in Advances in Neural Information Processing
Systems 4, J.E. Moody, S.J. Hanson and R.P. Lippmann (eds.)
Morgan Kaufmann, San Mateo, Ca 1992).
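The forward dynamics of such a second-order network are compact
enough to sketch in a few lines of Python. The weights below are
random and untrained, and the sizes are illustrative assumptions; the
paper trains the weights with a full-gradient RTRL method, which is
omitted here.

import math, random

# Forward pass of a second-order recurrent network: the next state
# vector is a sigmoid of products of the current state and a one-hot
# input symbol through a weight tensor W[j][i][k].
STATES, SYMBOLS = 4, 2
random.seed(0)
W = [[[random.uniform(-1, 1) for _ in range(SYMBOLS)]
      for _ in range(STATES)]
     for _ in range(STATES)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def run(string):
    s = [1.0] + [0.0] * (STATES - 1)        # fixed start state
    for ch in string:
        x = [1.0 if int(ch) == k else 0.0 for k in range(SYMBOLS)]
        s = [sigmoid(sum(W[j][i][k] * s[i] * x[k]
                         for i in range(STATES)
                         for k in range(SYMBOLS)))
             for j in range(STATES)]
    return s[0]                             # unit 0 read out as accept score

print('score("0110") = %.3f' % run('0110'))
print('score("1001") = %.3f' % run('1001'))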
********************************************************************
Filename: giles.nips91.ps.Z
----------------------------------------------------------------
FTP INSTRUCTIONS
unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: anything
ftp> cd pub/neuroprose
ftp> binary
ftp> get giles.nips91.ps.Z
ftp> bye
unix% zcat giles.nips91.ps.Z | lpr
(or whatever *you* do to print a compressed PostScript file)
----------------------------------------------------------------
^^^^^^^^^^^^^^^^^^^^^^^cut here^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
C. Lee Giles
NEC Research Institute
4 Independence Way
Princeton, NJ 08540
USA
Internet: giles@research.nj.nec.com
UUCP: princeton!nec!giles
PHONE: (609) 951-2642
FAX: (609) 951-2482
------------------------------
Subject: Paper available in Neuroprose
From: arun@hertz.njit.edu (arun maskara spec lec cis)
Date: Mon, 16 Mar 92 13:46:08 -0500
The following paper is now available by ftp from neuroprose archive:
Forcing Simple Recurrent Neural Networks to Encode Context
Arun Maskara, New Jersey Institute of Technology,
Department of Computer and Information Sciences
University Heights, Newark, NJ 07102, arun@hertz.njit.edu
Andrew Noetzel, The William Paterson College,
Department of Computer Science, Wayne, NJ 07470
Abstract
The Simple Recurrent Network (SRN) is a neural network model that has been
designed for the recognition of symbol sequences. It is a back-propagation
network with a single hidden layer of units. The symbols of a sequence are
presented one at a time at the input layer. But the activation pattern in
the hidden units during the previous input symbol is also presented as an
auxiliary input. In previous research, it has been shown that the SRN
can be trained to behave as a finite state automaton (FSA) which accepts the
valid strings corresponding to a particular grammar and rejects the invalid
strings. It does this by predicting each successive symbol in the input string.
However, the SRN architecture sometimes fails to encode the context necessary to
predict the next input symbol. This happens when two different states in the FSA
generating the strings have the same output, and the SRN develops similar hidden
layer encodings for these states. The failure happens more often when the number of
units in the hidden layer is limited. We have developed a new architecture,
called the Forced Simple Recurrent Network (FSRN), that solves this problem.
This architecture contains additional output units, which are trained to show
the current input and the previous context. Simulation results show that for
certain classes of FSA with $u$ states, the SRN with $\lceil \log_2u \rceil$
units in the hidden layer fails, whereas the FSRN with the same number of
hidden layer units succeeds.
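A minimal Python sketch of one pass through an SRN may help make the
context mechanism concrete. Weights are random and untrained here,
and the layer sizes and initial context value are illustrative
assumptions, not the authors' settings.

import math, random

# One forward pass through a Simple Recurrent Network: hidden
# activations from the previous symbol are copied back as a context
# input alongside the current one-hot symbol.
IN, HID, OUT = 3, 5, 3
random.seed(0)
W_in  = [[random.uniform(-1, 1) for _ in range(IN)]  for _ in range(HID)]
W_ctx = [[random.uniform(-1, 1) for _ in range(HID)] for _ in range(HID)]
W_out = [[random.uniform(-1, 1) for _ in range(HID)] for _ in range(OUT)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(sequence):
    context = [0.5] * HID                   # initial context
    hidden = context
    for sym in sequence:
        x = [1.0 if i == sym else 0.0 for i in range(IN)]
        hidden = [sigmoid(sum(W_in[h][i] * x[i] for i in range(IN))
                          + sum(W_ctx[h][c] * context[c] for c in range(HID)))
                  for h in range(HID)]
        context = hidden                    # the copy-back that encodes context
    return [sigmoid(sum(W_out[o][h] * hidden[h] for h in range(HID)))
            for o in range(OUT)]

print(predict([0, 2, 1]))                   # activations predicting next symbol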
------------------------------------------------------------------------------
Copy of the postscript file has been placed in neuroprose archive. The
file name is maskara.fsrn.ps.Z
The usual instructions can be followed to obtain the file from the
directory pub/neuroprose from the ftp site archive.cis.ohio-state.edu
Arun Maskara
------------------------------
Subject: TR - Models of Reading aloud
From: Max Coltheart <mcolthea@laurel.ocs.mq.edu.au>
Date: Tue, 24 Mar 92 08:20:24 +1000
Models Of Reading Aloud: Dual-Route And Parallel-Distributed-Processing
Approaches
Max Coltheart, Brent Curtis and Paul Atkins
School of Behavioural Sciences
Macquarie University
Sydney NSW 2109
Australia
email: max@currawong.mqcc.mq.oz.au
Submitted for publication March 23, 1992.
Abstract
It has often been argued that various facts about skilled reading aloud cannot
be explained by any model unless that model possesses a dual-route
architecture: one route from print to speech that may be described as lexical
(in the sense that it operates by retrieving pronunciations from a mental
lexicon) and another route from print to speech that may be described as
non-lexical (in the sense that it computes pronunciations by rule, rather
than by retrieving them from a lexicon). This broad claim has been challenged
by Seidenberg and McClelland (1989, 1990). Their model has but a single route
from print to speech, yet, they contend, it can account for major facts about
reading which have hitherto been claimed to require a dual-route architecture.
We identify six of these major facts about reading. The one-route model
proposed by Seidenberg and McClelland can account for the first of these, but
not the remaining five: how people read nonwords aloud, how they perform
visual lexical decision, how two particular forms of acquired dyslexia can
arise, and how different patterns of developmental dyslexia can arise.
Since models with dual-route architectures can explain all six of these
basic facts about reading, we suggest that this remains the viable
architecture for any tenable model of skilled reading and learning to read.
Preprints available from MC at the above address.
------------------------------
Subject: SUMMARY: From neurobiological to computational models - State of the
art?
From: massimo@cui.unige.ch (DEFRANCESCO Massimo)
Organization: University of Geneva, Switzerland
Date: 04 Jul 93 08:25:03 +0000
[[ Editor's Note: This is from the Neuroscience mailing list. -PM ]]
A couple of weeks ago I posted a request on the net for the state
of the art in neuromodelling, with emphasis on the computational side.
I received 13 answers, 4 of which were asking for the summary.
Many thanks to the following people (hope I forgot nobody):
Richard E. Myers (rmyers@ics.uci.edu)
Jacques Brisson (X042@hec.ca)
Christopher Ian Connolly (connoly@cs.umass.edu)
Prashanth Kumar (akp@cns.nyu.edu)
Tobias Fabio Christen (tchriste@iiic.ethz.ch)
Joern Erbguth (jnerbgut@cip.informatik.uni-erlangen.de)
Drago IndJic (d.indjic@ic.ac.uk)
Thomas P. Vogl (vogl@helix.nih.gov)
German Cavelier (cavelier@smaug.cns.caltech.edu)
======================
Original question:
>I need your precious help to find out what is the state of the art
>in neuromodelling of the human brain AND derivation from the neurological
>model of "practical", computer-oriented, artificial neural network
>models. We are going to start a research that will a) study the
>behaviour of real neurons at the neurophysiological level, b) develop
>a theoretical (biological) model able to explain the observations,
>and c) develop from it an artificial neural network (ANN) model usable in
>practical applications.
>We are aware of at least one ANN model which was heavily derived from
>neurophysiological investigations of neurons in the hippocampus, i.e. the
>Dystal model (Alkon et al).
>We are heavily interested in references/pointers to any work of this kind.
>Email is preferred because faster. Feel free to post anyway.
>I will compile a summary of the answers that I'll receive privately.
=======================
Richard E. Myers recommends looking at the work of Gary Lynch and Richard
Granger. References included:
1. Gluck MA; Granger R.
Computational models of the neural bases of learning and memory.
Annual Review of Neuroscience, 1993, 16:667-706.
(UI: 93213082)
Pub type: Journal Article; Review; Review, Academic.
2. Ambros-Ingerson J; Granger R; Lynch G.
Simulation of paleocortex performs hierarchical clustering.
Science, 1990 Mar 16, 247(4948):1344-8.
ABSTRACT available. (UI: 90193697)
3. Granger R; Lynch G.
Higher olfactory processes: perceptual learning and memory.
Current Opinion in Neurobiology, 1991 Aug, 1(2):209-14.
ABSTRACT available. (UI: 92330264)
Pub type: Journal Article; Review; Review, Tutorial.
4. Anton PS; Lynch G; Granger R.
Computation of frequency-to-spatial transform by olfactory bulb
glomeruli.
Biological Cybernetics, 1991, 65(5):407-14.
ABSTRACT available. (UI: 92075782)
=============================
Jacques Brisson (X042@hec.ca) suggests getting hold of the November
1992 issue of Trends in Neurosciences. It is a special issue on
nervous system modelling. [thanks Jacques, we did]
=============================
Christopher Ian Connolly (connoly@cs.umass.edu) suggests the following
article:
Connolly CI, Burns JB, "A Model for the Functioning of the Striatum",
Biological Cybernetics 68(6):535-544.
It discusses a method for robot control and a plausible correlate in
networks of medium spiny cells ("matrisomes") of the striatum.
=============================
Prashanth Kumar A.K. (akp@cns.nyu.edu) writes:
The work of Ken Miller might be of interest to you (E-mail:
ken@caltech.edu). He worked with Mike Stryker and developed models for
the development of ocular dominance columns (ODC) and now for
orientation selectivity.
=============================
Drago Indjic (d.indjic@ic.ac.uk) suggests taking a look at the work of
Kryukov et al., described in a few books published by Manchester
University Press in 1990: Attention and Neural Networks and Stochastic
Cellular Systems. Kryukov is continuing the work of Ukhtomsky (a
competitor of the Pavlov reflex school), which is based on many
decades of experimental work.
=============================
Thomas P. Vogl (vogl@helix.nih.gov) writes:
look at the paper by Blackwell et al in the July '92 issue of "Pattern
Recognition" ; also the paper by Werness et al (particularly some of the
references therein) in the December '92 issue of "Biological Cybernetics"
=============================
German Cavelier (cavelier@smaug.cns.caltech.edu) suggests asking David
Bilitch (dhb@smaug.cns.caltech.edu) for information about GENESIS.
============================
Joern Erbguth sent me a very interesting and long summary he posted
some days before my request on neuromodelling. Since his summary has been
already posted to bionet.neuroscience, I won't include it here (but will
include it in my summary for comp.ai.neural-nets)
=============================
That's all. Thanks again for your help. The net is a real pleasure.
Massimo
_______________________________________________________________________
Massimo de Francesco email: massimo@cui.unige.ch
Research assistant
Computer Science Center
University of Geneva
Switzerland
------------------------------
Subject: Re: SUMMARY: From neurobiological to computation
From: massimo@cui.unige.ch (DEFRANCESCO Massimo)
Organization: University of Geneva, Switzerland
Date: 04 Jul 93 08:43:49 +0000
In article 28366@news.unige.ch, massimo@cui.unige.ch (DEFRANCESCO Massimo)
writes:
>
>A couple of weeks ago I posted a request on the net for the state
>of the art in neuromodelling, with emphasis on the computational side.
>I received 13 answers, 4 of which were asking for the summary.
>
>Many thanks to the following people (hope I forgot nobody):
>
Well, I did. Here is the reply of Tobias Fabio Christen (tchriste@iiic.ethz.ch):
We are a team at Ciba SA in Basel with members from the biological
side and the electrical side, plus me as a computer scientist. Through
connections of my boss (Thomas Knoepfel) we have contact and personal
communication with people from the EPFZ (department for theoretical
physics) and the Brain Research Center at the UNI ZH. At the moment we
are working on a simulation of presynaptic modulation via auto- and
heteroreceptors (though we are still far from the definitive model).
Depending on whether you want a purely electrophysiological model or
want to bring in some biochemical aspects (especially for synaptic
transmission and second-messenger machinery), you need different
approaches and literature (these two worlds seem to avoid each other).
For the former case, a good introduction to the evolution from
biological to technical networks is:
1. Koch C & Segev I (eds.), Methods in Neuronal Modeling,
   MIT Press, Cambridge, MA, 1989.
2. Traub RD & Miles R, Neuronal Networks of the Hippocampus,
   Cambridge University Press, New York, 1991.
On the biochemical side little or nothing has been done on
generalizing models, and there is only a little literature available
(if you need it, feel free to contact me)
toby
Thanks again!
Massimo
------------------------------
Subject: Post-doc in Neurophysiology...
From: pck@castle.ed.ac.uk (P C Knox)
Organization: Edinburgh University
Date: 06 Jul 93 08:20:45 +0000
---- LABORATORY FOR NEUROSCIENCE ----
University of Edinburgh
Applications are invited for a post-doctoral research post for
up to three years to join a group working on the physiology of the
control of gaze (see Donaldson & Knox, Neuroscience 38:145-161, 1990;
Knox & Donaldson, Proc.Roy.Soc.Lond.B. 246:243-250, 1991; Hayman et al,
J.Physiol (Lond) 459:458P, 1993).
Experience in single unit electrophysiology is essential and
some experience of anatomical neural tracing techniques using
transported markers would be and advantage. Salary on the AR1A scale
with placement according to age and experience. Initial salary up to
#17 379, which includes Wellcome Trust supplement. Informal enquiries:
telephone 031-650-3526
Applications (please quote REF: NA 930215) including full CV and
the names and addresses of two academic referees, should be submitted
to:
The Personnel Office,
The University of Edinburgh,
1 Roxburgh Street,
Edinburgh EH8 9TB
The closing date is 31st July, 1993.
------------------------------
Subject: PostDoc positions in Korea
From: sbcho@gorai.kaist.ac.kr (Kang)
Organization: KTRC in Seoul, Korea
Date: 07 Jul 93 10:04:01 +0000
Please pass this advertisement to anyone who you think
might be interested.
Thanks in advance.
Sung-Bae Cho
=======================================================================
Korea Advanced Institute of Science and Technology
Computer Science Department
and
Center for Artificial Intelligence Research
Post Doctoral Researchers in AI & Pattern Recognition
Two one-year Post Doctoral Researcher positions are available with
the AI & Pattern Recognition Group. The main project is in the area
of 'Cursive handwriting recognition with hidden Markov model and/or
artificial neural networks,' and will aim to explore the feasibility
of psychological and cognitive scientific studies. Pattern recognition
issues such as pattern discrimination and modeling power will be
investigated. Researchers in other AI fields will also be considered.
For further particulars and an application form, contact Dr. Jin H.
Kim, Computer Science Department, KAIST, 373-1, Koosung-dong,
Yoosung-ku, Taejeon 305-701, Republic of Korea. Phone 82 42 869 3517,
E-mail: jkim@cs.kaist.ac.kr.
The Center follows an equal opportunities policy.
=======================================================================
********
* ******* Sung-Bae Cho
** ** Computer Science Department
*** *** *** *** Korea Advanced Institute of Science and Technology
*** *** *** *** 373-1, Goosung-dong, Yoosung-ku,
*** *** *** *** Taejeon 305-701, South Korea
** **
******* * Phone : 82-42-869-3557
******** e-mail: sbcho@gorai.kaist.ac.kr
------------------------------
Subject: Cambridge Neural Nets Summer School
From: Richard Prager <rwp@eng.cam.ac.uk>
Date: Fri, 09 Jul 93 11:38:20 +0000
The Cambridge University Programme for Industry in Collaboration
with the Cambridge University Engineering Department Announce
their Third Annual Neural Networks Summer School.
3 1/2 day short course
13-16 September 1993
BOURLARD GEE HINTON JERVIS
JORDAN KOHONEN NARENDRA NIRANJAN
PECE PRAGER SUTTON TARASSENKO
Outline and aim of the course
The course will give a broad introduction to the application and design of
neural networks and deal with both the theory and with specific
applications. Survey material will be given, together with recent
research results in architecture and training methods, and applications
including signal processing, control, speech, robotics and human vision.
Design methodologies for a number of common neural network architectures
will be covered, together with the theory behind neural network
algorithms. Participants will learn the strengths and weaknesses of the
neural network approach, and how to assess the potential of the technology
in respect of their own requirements.
Lectures are being given by international experts in the field, and
delegates will have the opportunity of learning first hand the technical
and practical details of recent work in neural networks from those who are
contributing to those developments.
Who Should Attend
The course is intended for engineers, software specialists and other
scientists who need to assess the current potential of neural networks.
The course will be of interest to senior technical staff who require an
overview of the subject, and to younger professionals who have recently
moved into the field, as well as to those who already have expertise in
this area and who need to keep abreast of recent developments. Some,
although not all, of the lectures will involve graduate level mathematical
theory.
PROGRAMME
Introduction and overview:
Connectionist computing: an introduction and overview
Programming a neural network
Parallel distributed processing perspective
Theory and parallels with conventional algorithms
Architectures:
Pattern processing and generalisation
Bayesian methods in neural networks
Reinforcement learning neural networks
Communities of expert networks
Self organising neural networks
Feedback networks for optimization
Applications:
Classification of time series
Learning forward and inverse dynamical models
Control of nonlinear dynamical systems using neural networks
Artificial and biological vision systems
Silicon VLSI neural networks
Applications to diagnostic systems
Shape recognition in neural networks
Applications to speech recognition
Applications to mobile robotics
Financial system modelling
Applications in medical diagnostics
LECTURERS
DR HERVE BOURLARD is with Lernout & Hauspie Speech Products in
Brussels. He has made many contributions to the subject particularly in
the area of speech recognition.
MR ANDREW GEE is with the Speech, Vision and Robotics Group of
the Cambridge University Engineering Department. He specialises in the
use of neural networks for solving complex optimization problems.
PROFESSOR GEOFFREY HINTON is in the Computer Science Department
at the University of Toronto. He was a founding member of the PDP
research group and is responsible for many advances in the subject
including the classic back-propagation paper.
MR TIMOTHY JERVIS is with Cambridge University Engineering
Department. His interests lie in the field of neural networks and in
the application of Bayesian statistical techniques to learning control.
PROFESSOR MICHAEL JORDAN is in the Department of Brain & Cognitive Science
at MIT. He was a founding member of the PDP research group and he made
many contributions to the subject particularly in forward and inverse
systems.
PROFESSOR TEUVO KOHONEN is with the Academy of Finland and Laboratory of
Computer and Information Science at Helsinki University of Technology.
His specialities are in self-organising maps and their applications.
PROFESSOR K S NARENDRA is with the Center for Systems Science in the
Electrical Engineering Department at Yale University. His interests are
in the control of complex systems using neural networks.
DR MAHESAN NIRANJAN is with the Department of Engineering at Cambridge
University. His specialities are in speech processing and pattern
classification.
DR ARTHUR PECE is in the Physiological laboratory at the University of
Cambridge. His interests are in biological vision and especially neural
network models of cortical vision.
DR RICHARD PRAGER is with the Department of Engineering at Cambridge
University. His specialities are in speech and vision processing using
artificial neural systems.
DR RICH SUTTON is with the Adaptive Systems Department of GTE Laboratories
near Boston, USA. His specialities are in reinforcement learning,
planning and animal learning behaviour.
DR LIONEL TARASSENKO is with the Department of Engineering at the
University of Oxford. His specialities are in robotics and the hardware
implementation of neural computing.
COURSE FEES AND ACCOMMODATION
The course fee is 750 (UK pounds), payable in advance, and includes full
course notes, a certificate of attendance, and lunch and day-time
refreshments for the duration of the course. A number of heavily
discounted places are available for academics; please contact Renee Taylor
if you would like to be considered for one of these places. Accommodation
can be arranged for delegates in college rooms with shared facilities at
Wolfson College at 163 (UK pounds) for 4 nights to include bed and
breakfast, dinner with wine and a Course Dinner.
For more information contact: Renee Taylor, Course Development Manager
Cambridge Programme for Industry, 1 Trumpington Street, Cambridge CB2 1QA,
United Kingdom tel: +44 (0)223 332722 fax +44 (0)223 301122
email: rt10005@uk.ac.cam.phx
------------------------------
Subject: POSITION AVAILABLE - STATISTICIAN
From: Phil Goodman <goodman@unr.edu>
Date: Mon, 12 Jul 93 22:59:14 +0000
******************* Professional Position Announcement ******************
"STATISTICIAN for NEURAL NETWORK & REGRESSION DATABASE RESEARCH"
.- - - - - - - - - - - - - - OVERVIEW - - - - - - - - - - - - - - - - -.
| |
| THE LOCATION: |
| Nevada's Reno/Lake Tahoe region is an outstanding environment for |
| living, working, and raising a family. Winter skiing is world-class,|
| summer recreation includes many mountain and water sports, and |
| historical exploration and cultural opportunities abound. |
| |
| THE PROJECT: |
| The new CENTER FOR BIOMEDICAL MODELING RESEARCH recently received |
| federal funding to refine and apply a variety of artificial neural |
| network algorithms to large cardiovascular health care databases. |
| |
| THE CHALLENGE: |
| The predictive performance of neural nets will be compared to |
| advanced regression models. Other comparisons to be made include |
| handling of missing and noisy data, and selection of important |
| interactions among variables. |
| |
| THE JOB REQUIREMENT: |
| Masters-level or equivalent statistician with working knowledge |
| of the SAS statistical package and the UNIX operating system. |
| |
| THE SALARY : |
| Approximate starting annual salary: $42,000 + full benefits . |
| (actual salary will depend on experience and qualifications) |
._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .
POSITION: Research Statistics Coordinator for
NEURAL NETWORKS / HEALTH CARE DATABASE PROJECT
LOCATION: Center for Biomedical Modeling Research
Department of Internal Medicine
University of Nevada School of Medicine
Washoe Medical Center, Reno, Nevada
START DATE: September 1, 1993
CLOSING DATE: Open until filled.
DESCRIPTION: Duties include acquisition and translation of data
from multiple external national sources; data management and archiving;
performance of exploratory and advanced regression statistics;
performance of artificial neural network processing; participation
in scholarly research and publications.
QUALIFICATIONS: (1) M.S., M.A., M.P.H. or equivalent training in
statistics with experience in logistic and Cox regression analyses,
(2) ability to program in the SAS statistical language, and
(3) experience with UNIX computer operating systems.
Desirable but not mandatory are the abilities to use
(4) the S-PLUS data management system and (5) the C programming language.
SALARY: Commensurate with qualifications and experience.
(For example, with database experience, typical annual
salary would be approximately $42,000 + full benefits.)
APPLICATION: > Informal inquiry may be made to:
Phil Goodman, Director, Center for Biomedical Modeling Research
Internet: goodman@unr.edu Phone: 702-328-4867
> Formal consideration requires a letter of application,
vita, and names of three references sent to:
Philip Goodman, MD, MS
Director, Center for Biomedical Modeling Research
University of Nevada School of Medicine
Washoe Medical Center, Room H1-166
77 Pringle Way, Reno, NV 89520
The University of Nevada is an Equal Opportunity/Affirmative Action
employer and does not discriminate on the basis of race, color,
religion, sex, age, national origin, veteran's status or handicap
in any program it operates. University of Nevada employs only U.S.
citizens and aliens lawfully authorized to work in the United States.
************************************************************************
------------------------------
Subject: Research Opportunities in Neural Networks
From: rohwerrj <rohwerrj@cs.aston.ac.uk>
Date: Tue, 13 Jul 93 12:49:52 +0000
*****************************************************************************
RESEARCH OPPORTUNITIES in NEURAL NETWORKS
Dept. of Computer Science and Applied Mathematics
Aston University
*****************************************************************************
Funding has recently become available for up to 6 PhD studentships and
up to 3 postdoctoral fellowships in the Neural Computing Research
Group at Aston University. This group is currently undergoing a major
expansion with the recent appointments of Professor Chris Bishop
(formerly head of the Applied Neurocomputing Centre at AEA Technology,
Harwell Laboratory) and Professor David Lowe (formerly head of the
neural network research group at DRA, Malvern), joining Professor
David Bounds and lecturers Richard Rohwer and Alan Harget. In
addition, substantial funds are being invested in new computer
hardware and software and other resources, which will provide the
Group with extensive research facilities.
The research programme of the Group is focussed on the development of
neural computing techniques from a sound statistical pattern
processing perspective. Research topics span the complete range from
developments of the theoretical foundations of neural computing,
through to a wide range of application areas. The Group maintains
close links with several industrial organisations, and is
participating in a number of collaborative projects.
For further information, please contact me at the address below:
Richard Rohwer
Dept. of Computer Science and Applied Mathematics
Aston University
Aston Triangle
Birmingham B4 7ET
ENGLAND
Tel: (44 or 0) (21) 359-3611 x4688
FAX: (44 or 0) (21) 333-6215
rohwerrj@uk.ac.aston.cs
------------------------------
End of Neuron Digest [Volume 11 Issue 45]
*****************************************
Received: from BUACCA by BUACCA.BU.EDU (Mailer R2.08 PTF009) with BSMTP id
1299; Thu, 12 Aug 93 18:30:20 EDT
Received: from noc4.dccs.upenn.edu by BUACCA.BU.EDU (IBM VM SMTP R1.2.1) with
TCP; Thu, 12 Aug 93 18:30:18 EDT
Received: from CATTELL.PSYCH.UPENN.EDU by noc4.dccs.upenn.edu
id AA26848; Thu, 12 Aug 93 18:26:45 -0400
Return-Path: <marvit@cattell.psych.upenn.edu>
Received: from LOCALHOST by cattell.psych.upenn.edu
id AA18329; Thu, 12 Aug 93 17:13:58 EDT
Posted-Date: Thu, 12 Aug 93 17:13:12 -0400
From: "Neuron-Digest Moderator" <neuron-request@CATTELL.PSYCH.UPENN.EDU>
To: Neuron-Distribution:;
Subject: Neuron Digest V11 #46 (discussion + jobs + queries)
Reply-To: "Neuron-Request" <neuron-request@CATTELL.PSYCH.UPENN.EDU>
X-Errors-To: "Neuron-Request" <neuron-request@psych.upenn.edu>
Organization: University of Pennsylvania
Date: Thu, 12 Aug 93 17:13:12 -0400
Message-Id: <18313.745189992@cattell.psych.upenn.edu>
Sender: marvit@cattell.psych.upenn.edu
Neuron Digest Thursday, 12 Aug 1993
Volume 11 : Issue 46
Today's Topics:
Administrivia - We're back
Whence cybernetics?
Re: Whence cybernetics?
Re: Whence cybernetics?
Re: Whence cybernetics?
Research posts at U. of Central England
P.C. Based Neural Softwares
Basins of Attraction of Cellular Automata
CASE project available for students
Training of Nets and Multiple Solutions
Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@psych.upenn.edu". The ftp archives are
available from psych.upenn.edu (130.91.68.31). Back issues requested by
mail will eventually be sent, but may take a while.
----------------------------------------------------------------------
Subject: Administrivia - We're back
From: "Neuron-Digest Moderator, Peter Marvit"
<neuron@cattell.psych.upenn.edu>
Date: Wed, 11 Aug 93 13:30:57 -0500
Dear readers,
Again, I thank you for your patience during my slightly longer than
expected "holiday." The Neuron Digest resumes with this issue and will
be coming rather thickly in the next few weeks. Most of the issues will
be, as I previously wrote, long-overdue paper and technical report
announcements as well as conference announcements.
However, I will start off first with the usual set of discussion items
from you readers, plus a short set of postings about cybernetics from a
related list. Enjoy...
-Peter
: Peter Marvit, Neuron Digest Moderator <neuron-request@psych.upenn.edu> :
: Courtesy of the Psychology Department, University of Pennsylvania :
: 3815 Walnut St., Philadelphia, PA 19104 w:215/898-6274 h:215/387-6433 :
------------------------------
Subject: Whence cybernetics?
From: gal2@kimbark.uchicago.edu (Jacob Galley)
Organization: University of Chicago Computing Organizations
Date: 04 Jul 93 03:45:55 +0000
I have been studying linguistics and cognitive science type stuff for
about two years in college, and I am just now becoming aware of the
long line of cybernetic thought which runs parallel to "good
old-fashioned" symbolic AI. Why is this work now (and apparently
always since the schism) more obscure than work done in symbolic,
serial cognitive modelling?
I quote from _Foundations of Neural Networks_ by Tarun Khanna
(Addison-Wesley 1990):
#This continuous/symbolic dichotomy gave rise to and was then
#reinforced by other concurrently existing dichotomies. The
#cyberneticians dealt primarily with pattern recognition and were
#concerned with developing systems that learned. The AI community, on
#the other hand, concentrated on problem solving and therefore on
#creating systems that performed specific tasks demanding intelligence,
#for example, theorem-proving and game-playing. Predictably, each of
#these groups of tasks was easier to tackle in its specific class of
#systems. For example, it is easier to tackle a game-playing exercise
#in a programming system than in a continuous system. Simultaneously,
#cyberneticians were preoccupied with the neurophysiology and the AI
#community with psychology. While the former connection is easier to
#understand, the latter arose primarily because it is easier to
#postulate psychologically meaningful results using programming systems
#than it is to postulate physiological ones. Their preoccupation with
#neurophysiology led cyberneticians to deal primarily with highly
#parallel systems. The programming systems employed by the AI community
#were, on the other hand, inherently serial. (page 4)
[Khanna goes on to portray connectionism as a new hybrid between the
two traditions.]
I am amazed that this alternative to symbolic AI is so obscure. Why
are (symbolic) artificial intelligence classes, theories and opinions
so easy to find, but cybernetic thought has faded away, become
esoteric?
There are lots of reasons I can think of which seem reasonable, but I
don't know enough of the history to be sure:
* Cybernetic theory is more abstract, difficult, vague. (No idea yet
if this is even true.)
* The "Chomskyan Revolution" in linguistics and/or the "Cognitive
Revolution" in psychology tipped the scales in the symbolic AI
tradition's favor. (No idea what the causal relationships are
between the three symbolic schools, if any can be clearly
attributed.)
* The foundations of serial programming caught on before the
foundations of parallel programming (which we are still hammering
out today, imho), so applications of symbolic AI were more
successful, more glamorous, sooner.
Does anyone have any thoughts on this?
Jake.
--
* What's so interdisciplinary about studying lower levels of thought process?
<-- Jacob Galley * gal2@midway.uchicago.edu
------------------------------
Subject: Re: Whence cybernetics?
From: gal2@kimbark.uchicago.edu (Jacob Galley)
Organization: University of Chicago
Date: 04 Jul 93 18:01:20 +0000
I received the following reply, and figured I might as well post it.
(I've added comp.ai.neural-nets to the list, since I now know it exists.)
---------
Date: Sun, 4 Jul 1993 00:41:16 -0700 (PDT)
From: Melvin Rader <radermel@u.washington.edu>
Subject: Re: Whence cybernetics
First off, I'm not posting this because I can't -- I just found this modem
in the parents' computer a couple days ago, and I haven't yet figured out
how to deal with my system's editor for posting to usenet.
Anyway, in response to your question:
By cybernetics, I take you to mean the study of neural networks
and connectionist models of artificial intelligence. By no means is it
dead, or even all that obscure. As an undergraduate at the Evergreen
State College in Olympia, WA this year I took four credits of
'Connectionism' and another four of programming of neural networks. I
believe there's a newsgroup devoted to neural networks as well.
Seymour Papert has written a whimsical account of the history of
network vs. symbolic approaches to artificial intelligence:
"Once upon a time two daughter sciences were born to the new
science of cybernetics. One sister was natural, with features inherited
from the study of the brain, from the way nature does things. The other
was artificial, related from the beginning to the use of computers. Each
of the sister sciences tried to build models of intelligence, but from
very different materials. The natural sister built models (called neural
networks) out of mathematically purified neurones. The artificial sister
built her models out of computer programs.
"In their first bloom of youth the two were equally successful and
equally pursued by suitors from other fields of knowledge. They got on
very well together. Their relationship changed in the early sixties when
a new monarch appeared, one with the largest coffers ever seen in the
kingdom of the sciences: Lord DARPA, the Defence Department's Advanced
Research Projects Agency. The artificial sister grew jealous and was
determined to keep for herself the access to Lord DARPA's research funds.
The natural sister would have to be slain.
"The bloody work was attempted by two staunch followers of the
artificial sister, Marvin Minsky and Seymour Papert, cast in the role of
the huntsman sent to slay Snow White and bring back her heart as proof of
the deed. Their weapon was not the dagger but the mightier pen, from which
came a book - Perceptrons ..."
Minsky and Papert's book did effectively kill further research
into neural networks for about two decades. The thrust of the book
was that with the learning algorithms that had been developed then, neural
networks could only learn linearly separable problems, which are always
simple (this was proved mathematically). Networks existed which could
solve more complicated problems, but they had to be "hard wired" - the
person setting up the network had to set it up in such a way that the network
already "knew" everything that it was going to be tested on; there was
no way for such a network to learn. (The book also raised some other,
more philosophical concerns.) Since learning was basically the only
advantage neural network models had over symbolic models (aside from an
aesthetic appeal due to their resemblance to natural models), research into
neural networks died out. (Also, NN research is associated
philosophically with behaviorism - NNs solve through association. When
behaviorism died, it also helped bring down the NN field.)
However, in the late 70's (I think) the 'backpropagation training
algorithm' was developed. Backpropagation allows the training of neural
networks which are powerful enough to solve non-linearly separable
problems, although it has no natural equivalent. With the development of
backpropagation, and with the association of several big names with the
field, research into network models of artificial intelligence revived.
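[[ Editor's note: for readers who want to see the point concretely, here
is a minimal backpropagation sketch in Python (an illustration added for
this digest, not from the original post; all names and constants are
arbitrary). A small two-layer net learns XOR, the standard example of a
problem that no single-layer perceptron can represent:

import numpy as np

# XOR truth table: inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 1.0, (2, 3))    # input -> hidden weights
b1 = np.zeros(3)
W2 = rng.normal(0.0, 1.0, (3, 1))    # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eta = 2.0                            # learning rate
for epoch in range(5000):
    h = sigmoid(X @ W1 + b1)         # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= eta * h.T @ d_out; b2 -= eta * d_out.sum(axis=0)
    W1 -= eta * X.T @ d_h;   b1 -= eta * d_h.sum(axis=0)

print(out.round(2).ravel())          # approaches [0, 1, 1, 0] on most runs

Note that convergence is not guaranteed -- a point Marvin Minsky makes in
a later message in this issue. ]]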
I understand the term 'Connectionism' to apply to a field which
draws from neural network research and research into the brain. In
contrast to whatever book you were quoting from, I understand
connectionist thought to be at odds with the symbolic approach to
artificial intelligence. A good book to read on the subject is
Connectionism and the Mind by Bechtel and Abrahamsen. It is a good
introduction to connectionism and goes into the philosophy behind it all,
although some of the math is off.
--Kimbo
--
* What's so interdisciplinary about studying lower levels of thought process?
<-- Jacob Galley * gal2@midway.uchicago.edu
------------------------------
Subject: Re: Whence cybernetics?
From: wbdst+@pitt.edu (William B Dwinnell)
Organization: University of Pittsburgh
Date: 04 Jul 93 22:08:00 +0000
The passage you posted concerning cybernetics is somewhat misleading. The
term "cybernetics" was coined by Norbert Wiener in the 1940s; he defined it
as "the entire field of control and communication theory, whether in the
machine or in the animal". In its narrowest sense, as Wiener wrote about
it, cybernetics might be thought of as a precursor to modern information
theory (he mentions Shannon, by the way, in his book "Cybernetics"), control
theory (including what we now call robotics), and, to some degree, prediction.
In the most general sense, "cybernetics" may be construed as covering all
of computer science, and more. It is common today for people to present
cybernetics in the light of AI or robotics, but there is no reason to put
this special slant on it. Probably the most accurate short definition of
"cybernetics" in contemporary terminology would be: a proto-science
concerning information theory and communication theory.
------------------------------
Subject: Re: Whence cybernetics?
From: minsky@media.mit.edu (Marvin Minsky)
Organization: MIT Media Laboratory
Date: 06 Jul 93 06:05:52 +0000
>Date: Sun, 4 Jul 1993 00:41:16 -0700 (PDT)
>From: Melvin Rader <radermel@u.washington.edu>
>Subject: Re: Whence cybernetics
> By cybernetics, I take you to mean the study of neural networks
>and connectionist models of artificial intelligence. By no means is it
>dead, or even all that obscure. As an undergraduate at the Evergreen
>State College in Olympia, WA this year I took four credits of
>'Connectionism' and another four of programming of neural networks. I
> Minsky and Papert's book did effectively kill further research
>into neural networks for about two decades. The thrust of the book
>was that with the learning algorithms that had been developed then, neural
>networks could only learn linearly separable problems, which are always
>simple (this was proved mathematically). Networks existed which could
>solve more complicated problems, but they had to be "hard wired" - the
>person setting up the network had to set it up in such a way that the network
> etc.
You'd better give those credits back. The book (1) explained some
theory of which geometric problems are linearly separable (and the
results were not notably simple), (2) derived lower bounds on how the
sizes of networks and coefficients grow with the size of certain
problems, and (3) made clear that these results have nothing whatever
to do with the learning algorithms involved, because they only discuss
the existence of suitable networks.
There was not so much research in neural networks between 1969, when
the book was published, and around 1980 or so. This may have been
partly because we showed that feedforward nets are impractical for
various kinds of invariant recognitions on large retinas, but they are
useful for many other kinds of problems. The problem was that too
many people propagated absurdly wrong summaries of what the book said
-- as in the above account. There were some gloomy remarks near the
end of the book about the unavailability of convergence guarantees for
multilayer nets (as compared with the simple perceptron procedure,
which always converges for separable patterns), and this might have
discouraged some theorists. There still are no such guarantees for
learning algorithms of practical size -- but for many practical
purposes, no one cares much about that.
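[[ Editor's note: the "simple perceptron procedure" Minsky refers to is
the perceptron learning rule, whose convergence theorem guarantees
termination whenever a separating hyperplane exists. A minimal sketch in
Python (an illustration only, not Minsky's formulation), using the
linearly separable AND function:

# Perceptron learning rule on a linearly separable problem (AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = [0.0, 0.0], 0.0
changed = True
while changed:                  # the convergence theorem says this stops
    changed = False
    for (x1, x2), target in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - pred
        if err != 0:            # misclassified: nudge the hyperplane
            w[0] += err * x1
            w[1] += err * x2
            b += err
            changed = True
print(w, b)                     # one separating hyperplane for AND

No analogous guarantee exists for multilayer nets trained by gradient
descent, which is the gloomy point about multilayer nets mentioned
above. ]]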
------------------------------
Subject: Research posts at U. of Central England
From: "r.kelly" <DFCA4601G@UNIVERSITY-CENTRAL-ENGLAND.AC.UK>
Date: Tue, 20 Jul 93 11:04:57
University of Central England
Neural Net Applications: Research Posts
Monitoring of Gas Cylinder Production:
This programme is concerned with the use of ANNs to identify a
number of problems commonly found during the production of aluminium
gas cylinders. ANNs will be trained to recognise and classify
problems from historical process data. Later stages of the work
will involve real-time data.
This is a 3 year SERC studentship that will be converted to a CASE
Award. Applicants will be expected to register for MPhil/PhD.
Monitoring and Control of Aluminium Spot-Welding:
This programme is concerned with the use of ANNs for the monitoring
and control of resistance spot-welding of aluminium. The work is
primarily concerned with the use of ANNs to provide a novel method of
NDT for spot-welds. Initial work will concentrate on historical
process data but it is anticipated that the later stages of the work
will involve enhancement to real-time operation. This programme is
funded by Alcan International.
Although initially planned as a ONE year Research Assistantship, it
would be possible to convert the post into a TWO or THREE year
studentship if preferred, to allow registration for MPhil/PhD.
Candidates for both posts should have an interest in ANNs and good
mathematical skills, ideally with a knowledge of MATLAB.
Enquiries to: Dr Keith Osman, Manufacturing Technology, UCE Birmingham
tel: 021-331-5662 fax: 021-331-6315 email: k.a.osman@uk.ac.uce
------------------------------
Subject: PC-Based Neural Network Software
From: "R.S.Habib" <R.S.Habib@lut.ac.uk>
Date: Thu, 22 Jul 93 16:12:56 +0000
Hello There,
Can anybody advise me on how I can get a list of available PC-based
software tools, preferably (but not necessarily) backprop-based? Any
advice is appreciated.
Thank you
R. Istepanian
E-mail : R.S.Habib@lut.ac.uk
------------------------------
Subject: Basins of Attraction of Cellular Automata
From: jboller@is.morgan.com (John Boller)
Date: Fri, 30 Jul 93 15:37:18 -0500
Hi,
I am looking for references comparing the basins of
attraction of cellular automata with those of
neural networks.
I would greatly appreciate it if anyone could point
me in the right direction.
thanks, john boller
email: jboller@is.morgan.com
------------------------------
Subject: CASE project available for students
From: John Shawe-Taylor <john@dcs.rhbnc.ac.uk>
Date: Tue, 03 Aug 93 14:00:39 +0000
The following quota CASE project title and description have been agreed
with an industrial partner. Any interested students should contact
asap either John Shawe-Taylor by email (john@dcs.rhbnc.ac.uk) or Dieter
Gollmann by telephone (0784-443698).
Many thanks,
John Shawe-Taylor
Department of Computer Science,
Royal Holloway, University of London,
Egham, Surrey TW20 0EX UK
Fax: (0784) 443420
Constrained Models of Time Series Prediction: Theory and Practice
The project will examine neural models for time series prediction. The
problem of introducing constraints will be examined; in particular,
criteria for choosing appropriate cost functions and a posteriori
estimates of confidence levels will be studied, including situations
where shifts occur in the underlying distributions. These principles will
be addressed in the context of applications to real data arising from
complex predictions of market developments.
------------------------------
Subject: Training of Nets and Multiple Solutions
From: jboller@is.morgan.com (John Boller)
Date: Thu, 05 Aug 93 15:07:18 -0500
Hi,
I was wondering if anyone had anecdotes or sources
about the behaviour of neural nets when the training
set admits a number of equally likely solutions
(same utility value). I am looking at basins of
attraction, and this case would seem to be related.
Thanks, John Boller
email: jboller@is.morgan.com
------------------------------
End of Neuron Digest [Volume 11 Issue 46]
*****************************************
Received: from BUACCA by BUACCA.BU.EDU (Mailer R2.08 PTF009) with BSMTP id
0562; Tue, 17 Aug 93 18:08:52 EDT
Received: from noc4.dccs.upenn.edu by BUACCA.BU.EDU (IBM VM SMTP R1.2.1) with
TCP; Tue, 17 Aug 93 18:08:46 EDT
Received: from CATTELL.PSYCH.UPENN.EDU by noc4.dccs.upenn.edu
id AA10635; Tue, 17 Aug 93 17:39:04 -0400
Return-Path: <marvit@cattell.psych.upenn.edu>
Received: from LOCALHOST by cattell.psych.upenn.edu
id AA08375; Tue, 17 Aug 93 16:25:56 EDT
Posted-Date: Tue, 17 Aug 93 16:25:15 -0400
From: "Neuron-Digest Moderator" <neuron-request@CATTELL.PSYCH.UPENN.EDU>
To: Neuron-Distribution:;
Subject: Neuron Digest V11 #47 (tech reports, papers, publications)
Reply-To: "Neuron-Request" <neuron-request@CATTELL.PSYCH.UPENN.EDU>
X-Errors-To: "Neuron-Request" <neuron-request@psych.upenn.edu>
Organization: University of Pennsylvania
Date: Tue, 17 Aug 93 16:25:15 -0400
Message-Id: <8359.745619115@cattell.psych.upenn.edu>
Sender: marvit@cattell.psych.upenn.edu
Neuron Digest Friday, 13 Aug 1993
Volume 11 : Issue 47
Today's Topics:
NIPS91 paper in neuroprose
IJNS contents volume 2, issue 4 (1991)
TR - Training Second-Order Recurrent Neural Networks Using Hints
TR - connectionist model for commonsense reasoning with rules
Paper - Logics and Variables in Connectionist Models
TR - Fuzzy Evidential Logic: A Model of Causality for Commonsense Reasoning
Preprint available: A network to velocity vector-field correction
neural-oscillator network, reprints available
ALOPEX algorithm solves the MONK's problems
TR - Modelling the Development of Topography and Ocular Dominance
Neural Chess: Paper Presentation
Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@psych.upenn.edu". The ftp archives are
available from psych.upenn.edu (130.91.68.31). Back issues requested by
mail will eventually be sent, but may take a while.
----------------------------------------------------------------------
Subject: NIPS91 paper in neuroprose
From: giles@research.nj.nec.com (Lee Giles)
Date: Wed, 05 Feb 92 15:29:51 -0500
The following paper has been placed in the Neuroprose archive.
Comments and questions are invited.
*******************************************************************
--------------------------------------------
EXTRACTING AND LEARNING AN "UNKNOWN" GRAMMAR
WITH RECURRENT NEURAL NETWORKS
--------------------------------------------
C.L.Giles*, C.B.Miller, D.Chen, G.Z.Sun, H.H.Chen, Y.C.Lee
NEC Research Institute *Institute for Advanced Computer Studies
4 Independence Way Dept. of Physics & Astronomy
Princeton, N.J. 08540 University of Maryland
giles@research.nj.nec.com College Park, Md 20742
___________________________________________________________________
------------------------------
Subject: IJNS contents volume 2, issue 4 (1991)
From: Benny Lautrup <LAUTRUP@nbivax.nbi.dk>
Date: Mon, 09 Mar 92 17:28:00 +0100
INTERNATIONAL JOURNAL OF NEURAL SYSTEMS
The International Journal of Neural Systems is a quarterly journal
which covers information processing in natural and artificial neural
systems. It publishes original contributions on all aspects of this
broad subject which involves physics, biology, psychology, computer
science and engineering. Contributions include research papers, reviews
and short communications. The journal presents a fresh undogmatic
attitude towards this multidisciplinary field with the aim to be a
forum for novel ideas and improved understanding of collective and
cooperative phenomena with computational capabilities.
ISSN: 0129-0657 (IJNS)
----------------------------------
Contents of Volume 2, issue number 4 (1991):
1. N. Burgess, M.N. Granieri & S. Patarnello:
3-D object classification: Application of a constructor algorithm.
2. R. Meir:
On deriving deterministic learning rules from stochastic systems.
3. E.M. Johansson, F.U. Dowla & D.M. Goodman:
Backpropagation learning for multi-layer feed-forward neural networks
using the conjugate gradient method.
4. W. Banzhaf & M. Schmutz:
Some notes on competition among cell assemblies.
5. M. Bengtsson:
Asymptotic properties of a third order neural network.
6. C.J.P. Vicente, J. Carrabina & E. Valderrama:
Discrete learning in feed-forward neural networks.
7. J. Chen, M.A. Shanblatt & C-H Maa:
Improved neural networks for linear and non-linear programming.
8. M. Bahrami:
Recognition of rules and exceptions by neural networks.
9. A.V. Robins:
Multiple representations in connectionist systems.
10. D.G. Stork:
Book Review
Evolution of the first nervous systems by P.A.V. Anderson (ED).
----------------------------------
Editorial board:
B. Lautrup (Niels Bohr Institute, Denmark) (Editor-in-charge)
S. Brunak (Technical Univ. of Denmark) (Assistant Editor-in-Charge)
D. Stork (Stanford) (Book review editor)
Associate editors:
B. Baird (Berkeley)
D. Ballard (University of Rochester)
E. Baum (NEC Research Institute)
S. Bjornsson (University of Iceland)
J. M. Bower (CalTech)
S. S. Chen (University of North Carolina)
R. Eckmiller (University of Dusseldorf)
J. L. Elman (University of California, San Diego)
M. V. Feigelman (Landau Institute for Theoretical Physics)
F. Fogelman-Soulie (Paris)
K. Fukushima (Osaka University)
A. Gjedde (Montreal Neurological Institute)
S. Grillner (Nobel Institute for Neurophysiology, Stockholm)
T. Gulliksen (University of Oslo)
D. Hammerstrom (Oregon Graduate Institute)
D. Horn (Tel Aviv University)
J. Hounsgaard (University of Copenhagen)
B. A. Huberman (XEROX PARC)
L. B. Ioffe (Landau Institute for Theoretical Physics)
P. I. M. Johannesma (Katholieke Univ. Nijmegen)
M. Jordan (MIT)
G. Josin (Neural Systems Inc.)
I. Kanter (Princeton University)
J. H. Kaas (Vanderbilt University)
A. Lansner (Royal Institute of Technology, Stockholm)
A. Lapedes (Los Alamos)
B. McWhinney (Carnegie-Mellon University)
M. Mezard (Ecole Normale Superieure, Paris)
J. Moody (Yale, USA)
A. F. Murray (University of Edinburgh)
J. P. Nadal (Ecole Normale Superieure, Paris)
E. Oja (Lappeenranta University of Technology, Finland)
N. Parga (Centro Atomico Bariloche, Argentina)
S. Patarnello (IBM ECSEC, Italy)
P. Peretto (Centre d'Etudes Nucleaires de Grenoble)
C. Peterson (University of Lund)
K. Plunkett (University of Aarhus)
S. A. Solla (AT&T Bell Labs)
M. A. Virasoro (University of Rome)
D. J. Wallace (University of Edinburgh)
D. Zipser (University of California, San Diego)
----------------------------------
CALL FOR PAPERS
Original contributions consistent with the scope of the journal are
welcome. Complete instructions as well as sample copies and
subscription information are available from
The Editorial Secretariat, IJNS
World Scientific Publishing Co. Pte. Ltd.
73, Lynton Mead, Totteridge
London N20 8DH
ENGLAND
Telephone: (44)81-446-2461
or
World Scientific Publishing Co. Inc.
Suite 1B
1060 Main Street
River Edge
New Jersey 07661
USA
Telephone: (1)201-487-9655
or
World Scientific Publishing Co. Pte. Ltd.
Farrer Road, P. O. Box 128
SINGAPORE 9128
Telephone (65)382-5663
------------------------------
Subject: TR - Training Second-Order Recurrent Neural Networks Using Hints
From: omlinc@cs.rpi.edu (Christian Omlin)
Date: Fri, 08 May 92 13:22:03 -0500
The following paper has been placed in the Neuroprose archive.
Comments and questions are encouraged.
*******************************************************************
--------------------------------------------
TRAINING SECOND-ORDER RECURRENT NEURAL
NETWORKS USING HINTS
--------------------------------------------
C.W. Omlin* C.L. Giles
Computer Science Department *NEC Research Institute
Rensselaer Polytechnic Institute 4 Independence Way
Troy, N.Y. 12180 Princeton, N.J. 08540
omlinc@turing.cs.rpi.edu giles@research.nj.nec.com
Abstract
--------
We investigate a method for inserting rules into discrete-time
second-order recurrent neural networks which are trained to
recognize regular languages. The rules defining regular languages
can be expressed in the form of transitions in the corresponding
deterministic finite-state automaton. Inserting these rules as hints
into networks with second-order connections is straightforward.
Our simulation results show that even weak hints seem to improve
the convergence time by an order of magnitude.
(To be published in Machine Learning: Proceedings of the Ninth
International Conference (ML92),D. Sleeman and P. Edwards (eds.),
Morgan Kaufmann, San Mateo, CA 1992).
********************************************************************
Filename: omlin.hints.ps.Z
----------------------------------------------------------------
FTP INSTRUCTIONS
unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: anything
ftp> cd pub/neuroprose
ftp> binary
ftp> get omlin.hints.ps.Z
ftp> bye
unix% zcat omlin.hints.ps.Z | lpr
(or whatever *you* do to print a compressed PostScript file)
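[[ Editor's note: a rough sketch of the rule-insertion idea described in
the abstract above (a Python illustration added for this digest; the
exact weight-programming scheme is in the paper). A DFA transition
delta(state 0, symbol a) = state 1 is encoded by programming the
second-order weight W[j][i][k], which couples state-neuron i and input
symbol k to state-neuron j:

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

n_states, n_symbols, H = 2, 2, 6.0   # H = hint strength (arbitrary here)
W = [[[0.0] * n_symbols for _ in range(n_states)] for _ in range(n_states)]
k_a = 0                              # index of input symbol 'a'
W[1][0][k_a] = +H                    # delta(0, a) = 1: turn neuron 1 on
W[0][0][k_a] = -H                    # ... and turn neuron 0 off

def step(S, k):                      # second-order recurrent update
    return [sigmoid(sum(W[j][i][k] * S[i] for i in range(n_states)) - H / 2)
            for j in range(n_states)]

S = [1.0, 0.0]                       # start in state 0
print([round(s, 2) for s in step(S, k_a)])
# neuron 1 high, neuron 0 low: the net has moved to state 1

Training then adjusts the remaining free weights, with the hints giving
the search a head start. ]]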
----------------------------------------------------------------
------------------------------------------------------------------------------
Christian W. Omlin Troy, NY 12180 USA
Computer Science Department Phone: (518) 276-2930 Fax: (518) 276-4033
Amos Eaton 119 E-mail: omlinc@turing.cs.rpi.edu
Rensselaer Polytechnic Institute omlinc@research.nj.nec.com
------------------------------------------------------------------------------
------------------------------
Subject: TR - connectionist model for commonsense reasoning with rules
From: rsun@athos.cs.ua.edu (Ron Sun)
Date: Fri, 05 Jun 92 10:09:22 -0600
TR available:
A Connectionist Model for Commonsense Reasoning Incorporating Rules
and Similarities
by Ron Sun
For the purpose of modeling commonsense reasoning, we investigate
connectionist models of rule-based reasoning, and show that while such
models can usually carry out reasoning in exactly the same way as
symbolic systems, they have more to offer in terms of commonsense
reasoning. A connectionist architecture, CONSYDERR, is proposed
for capturing certain commonsense reasoning competence, which partially
remedies the brittleness problem in traditional rule-based systems. The
architecture employs a two-level, dual representational scheme, which
utilizes both localist and distributed representations and explores the
synergy resulting from the interaction between the two. CONSYDERR
is therefore capable of accounting for many difficult patterns in
commonsense reasoning with this simple combination of the two levels.
This work also shows that connectionist models of reasoning are not just
"implementations" of their symbolic counterparts, but better
computational models of commonsense reasoning.
It is FTPable from archive.cis.ohio-state.edu
in: pub/neuroprose (Courtesy of Jordan Pollack)
No hardcopy available.
FTP procedure:
unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get sun.ka.ps.Z
ftp> quit
unix> uncompress sun.ka.ps.Z
unix> lpr sun.ka.ps (or however you print postscript)
------------------------------
Subject: Paper - Logics and Variables in Connectionist Models
From: rsun@athos.cs.ua.edu (Ron Sun)
Date: Fri, 05 Jun 92 10:09:35 -0600
Beyond Associative Memories:
Logics and Variables in Connectionist Models
Ron Sun
abstract
This paper demonstrates the role of connectionist (neural network) models
in reasoning beyond that of an associative memory. First we show that
there is a connection between propositional logics and the weighted-sum
computation customarily used in connectionist models. Specifically, the
weighted-sum computation can handle Horn clause logic and Shoham's logic
as special cases. Secondly, we show how variables can be incorporated
into connectionist models to enhance their representational power. We
devise solutions to the connectionist variable binding problem to enable
connectionist networks to handle variables and dynamic bindings in
reasoning. A new model, the Discrete Neuron formalism, an extension of
the weighted-sum models, is employed for dealing with the variable
binding problem. Formal definitions are presented, and examples are
analyzed in detail.
To appear in: Information Sciences,
special issues on neural nets and AI
It is FTPable from archive.cis.ohio-state.edu
in: pub/neuroprose
No hardcopy available.
FTP procedure:
unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get sun.beyond.ps.Z
ftp> quit
unix> uncompress sun.beyond.ps.Z
unix> lpr sun.beyond.ps (or however you print postscript)
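[[ Editor's note: the connection between Horn clauses and weighted sums
mentioned in the abstract can be illustrated in a few lines (my sketch in
Python, not Sun's formulation). A rule "c :- a, b" with n antecedents
becomes a threshold unit whose antecedent weights are 1/n, so the
consequent fires only when every antecedent is on:

def horn_unit(antecedents):
    """Weighted-sum unit for a Horn clause: each of the n antecedents
    gets weight 1/n; the unit fires when the sum reaches threshold 1."""
    n = len(antecedents)
    s = sum(v / n for v in antecedents)
    return 1 if s >= 1.0 - 1e-9 else 0   # tolerance for float round-off

print(horn_unit([1, 1]))   # a=1, b=1 -> c fires
print(horn_unit([1, 0]))   # a missing antecedent -> c stays off

Inputs between 0 and 1 can then carry partial evidence, which is the
direction the FEL report below takes. ]]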
------------------------------
Subject: TR - Fuzzy Evidential Logic: A Model of Causality for Commonsense
Reasoning
From: rsun@athos.cs.ua.edu (Ron Sun)
Date: Fri, 05 Jun 92 10:09:48 -0600
TR available:
Fuzzy Evidential Logic: A Model of Causality for Commonsense Reasoning
Ron Sun
This paper proposes a fuzzy evidential model for commonsense causal
reasoning. After an analysis of the advantages and limitations of
existing accounts of causality, a generalized rule-based model FEL (Fuzzy
Evidential Logic) is proposed that takes into account the
inexactness and the cumulative evidentiality of commonsense reasoning.
It corresponds naturally to a neural (connectionist) network. Detailed
analyses are performed regarding how the model handles commonsense causal
reasoning.
To appear in Proc. of 14th Cognitive Science Conference, 1992
----------------------------------------------------------------
It is FTPable from archive.cis.ohio-state.edu
in: pub/neuroprose (Courtesy of Jordan Pollack)
No hardcopy available.
FTP procedure:
unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get sun.cogsci92.ps.Z
ftp> quit
unix> uncompress sun.cogsci92.ps.Z
unix> lpr sun.cogsci92.ps (or however you print postscript)
------------------------------
Subject: Preprint available: A network to velocity vector-field correction
From: Alfred_Nischwitz <alfred@lnt.e-technik.tu-muenchen.de>
Date: Thu, 11 Jun 92 17:04:12 +0100
The following paper has been accepted for publication in the proceedings
of the International Conference on Artificial Neural Networks '92 in
Brighton:
Relaxation in 4D state space - A competitive network
approach to object-related velocity vector-field correction
by Helmut Gluender Institut fuer Medizinische Psychologie
Ludwig-Maximilians-Universitaet
Goethestrasse 31, D-8000 Muenchen 2, Germany
and Astrid Lehmann Lehrstuhl fuer Nachrichtentechnik
Technische Universitaet Muenchen
Arcisstrasse 21, D-8000 Muenchen 2, Germany
ABSTRACT:
A standard principle of (energy-)minimization is applied to the
problem of visual motion analysis. In contrast to well-known
mathematical optimization procedures and universal optimizing
networks it is proposed to use a problem-adapted network
architecture. Owing to the bilocal coincidence-type motion
detector considered here the task of object-related motion
analysis appears as a geometric correspondence problem. Hence,
the correct spatio-temporal correspondeces between elements in
consecutive images must be selected from all possible ones. This
is performed by neighborhood operations that are repeatedly
applied to the instantaneous signal representation in the
space/velocity-domain until an estimate of the actual flow-field
is reached.
Hardcopies of the paper are available. Please send requests
to the following address in Germany:
Helmut Gluender
Institut fuer Medizinische Psychologie
Ludwig-Maximilians-Universitaet
Goethestrasse 31, D-8000 Muenchen 2, Germany
or via email to:
alfred@lnt.e-technik.tu-muenchen.de
communicated by Alfred Nischwitz
------------------------------
Subject: neural-oscillator network, reprints available
From: Lambert Schomaker <SCHOMAKER@NICI.KUN.NL>
Date: Wed, 17 Jun 92 10:42:00 +0700
Reprints of the following publication are available:
Schomaker, L.R.B., 1992. A neural-oscillator network model of
temporal pattern generation. Human Movement Science 11, 181-192.
Abstract.
Most contemporary neural network models deal with essentially static,
perceptual problems of classification and transformation. Models such as
multi-layer feedforward perceptrons generally do not incorporate time as an
essential dimension. Where time is involved, the proposed solutions suffer
from serious limitations. The TDNN solution for the representation of time is
limited by its a priori fixed time window, whereas recurrent networks of the
Jordan or Elman kind are particularly difficult to train. Biological neural
networks, however, are inherently temporal systems. In modelling motor
behaviour, it is essential to have models that are able to produce temporal
patterns of varying duration and complexity. A model is proposed, based on a
network of pulse oscillators consisting of neuron/interneuron (NiN) pairs.
Due to the inherent temporal properties, a simple NiN net, taught by a
pseudo-Hebbian learning scheme, could be used in simulating handwriting
pen-tip displacement of individual letters.
------------------------------
Subject: ALOPEX algorithm solves the MONK's problems
From: unni@neuro.cs.gmr.com (K.P.Unnikrishnan)
Date: Thu, 18 Jun 92 13:30:36 -0500
In one of the recent issues of 'Neuron Digest', S.B. Thrun reported
performance comparisons of different learning algorithms (both machine
learning and neural network) on the MONK's problems. Though a number of
algorithms (for example, AQ17-DCI, AQ17-HCI, AQ15-GA, Assistant
Professional, Backpropagation and Cascade Correlation) were found to give
100% correct results on two of the three problem sets, none of
the algorithms gave 100% correct classifications for all three data
sets. We have found that a multi-layer perceptron trained using the
ALOPEX algorithm gives 100% correct classification of all three data
sets.
The details of the ALOPEX algorithm can be found in the paper titled
'LEARNING IN CONNECTIONIST NETWORKS USING THE ALOPEX ALGORITHM' (Proc.
IJCNN '92, pp. 926-931). A copy of this paper has been placed in the
NEUROPROSE ftp archive under the name unni.alopex.ps.Z. If you would like
a copy of the simulation program, send a note to unni@neuro.cs.gmr.com
K.P. Unnikrishnan
GM Research Labs.
ABSTRACT
----------
LEARNING IN CONNECTIONIST NETWORKS USING THE ALOPEX ALGORITHM
K. P. Unnikrishnan & K. P. Venugopal
We describe the Alopex algorithm as a universal learning algorithm for
neural networks. The algorithm is stochastic and it can be used for
learning in networks of any topology, including those with feedback. The
neurons could contain any transfer function and the learning could
involve minimization of any error measure. The efficacy of the algorithm
is investigated by applying it to multilayer perceptrons to solve
problems such as XOR, parity and encoder. The results are compared with
those obtained using the backpropagation learning algorithm. The
scaling properties of Alopex are studied using encoder problems of
different sizes. Taking the specific case of the XOR problem, it is shown
that a smoother error surface, with fewer local minima, can be obtained
by using an information-theoretic error measure. An appropriate
'annealing' scheme for the algorithm is described, and it is shown that
Alopex can escape from local minima.
FTP INSTRUCTIONS
----------------
neuro% ftp archive.cis.ohio-state.edu
Name: anonymous
Password: guest
ftp> binary
ftp> cd pub/neuroprose
ftp> get unni.alopex.ps.Z
ftp> quit
neuro% uncompress unni.alopex.ps.Z
neuro% lpr unni.alopex.ps
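[[ Editor's note: for readers unfamiliar with Alopex, here is a minimal
Python sketch of a correlation-based update in the Alopex family (my
paraphrase for illustration; the paper's exact algorithm, probabilities
and annealing schedule differ). Each parameter takes a fixed-size step
whose direction is biased by the correlation between its own previous
step and the previous change in the global error -- note that no
gradients are used, which is why the method works for any topology:

import math, random

def alopex_minimize(f, w, steps=5000, delta=0.01, T=1e-4, seed=0):
    rng = random.Random(seed)
    dw = [rng.choice((-delta, delta)) for _ in w]
    E_prev = f(w)
    for i in range(len(w)):
        w[i] += dw[i]
    for _ in range(steps):
        E = f(w)
        dE, E_prev = E - E_prev, E
        for i in range(len(w)):
            C = dw[i] * dE                         # correlation term
            C = max(-50.0 * T, min(50.0 * T, C))   # keep exp() in range
            p_same = 1.0 / (1.0 + math.exp(C / T)) # repeat move if it helped
            dw[i] = dw[i] if rng.random() < p_same else -dw[i]
            w[i] += dw[i]
    return w

print([round(x, 2) for x in alopex_minimize(
    lambda w: (w[0] - 1.0) ** 2 + (w[1] + 2.0) ** 2,
    [0.0, 0.0])])                                  # typically near [1.0, -2.0]

The temperature T plays the role of the 'annealing' parameter mentioned
in the abstract: large T makes the walk random, small T makes it greedy. ]]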
------------------------------
Subject: TR - Modelling the Development of Topography and Ocular Dominance
From: Geoffrey Goodhill <gjg@cns.edinburgh.ac.uk>
Date: Tue, 23 Jun 92 18:49:54 +0000
The following technical report version of my thesis is now available
in neuroprose:
Correlations, Competition, and Optimality:
Modelling the Development of Topography and Ocular Dominance
CSRP 226
Geoffrey Goodhill
School of Cognitive and Computing Science
University Of Sussex
ABSTRACT
There is strong biological evidence that the same mechanisms underlie
the formation of both topography and ocular dominance in the visual
system. However, previous computational models of visual development
do not satisfactorily address both of these phenomena
simultaneously. In this thesis we discuss in detail several
models of visual development, focussing particularly on the form
of correlations within and between eyes.
Firstly, we analyse the "correlational" model for ocular dominance
development recently proposed in [Miller, Keller & Stryker 1989].
This model was originally presented for the case of identical
correlations within each eye and zero correlations between the eyes.
We relax these assumptions by introducing perturbative correlations
within and between eyes, and show that (a) the system is unstable to
non-identical perturbations in each eye, and (b) the addition of small
positive correlations between the eyes, or small negative correlations
within an eye, can cause binocular solutions to be favoured over
monocular solutions.
Secondly, we extend the elastic net model of [Goodhill 1988, Goodhill
and Willshaw 1990] for the development of topography and ocular
dominance, in particular considering its behaviour in the
two-dimensional case. We give both qualitative and quantitative
comparisons with the performance of an algorithm based on the
self-organizing feature map of Kohonen, and show that in general the
elastic net performs better. In addition we show that (a) both
algorithms can reproduce the effects of monocular deprivation, and (b)
that a global orientation for ocular dominance stripes in the elastic
net case can be produced by anisotropic boundary conditions in the
cortex.
Thirdly, we introduce a new model that accounts for the development of
topography and ocular dominance when distributed patterns of activity
are presented simultaneously in both eyes, with significant
correlations both within and between eyes. We show that stripe width
in this model can be influenced by two factors: the extent of lateral
interactions in the postsynaptic sheet, and the degree to which the
two eyes are correlated. An important aspect of this model is the form
of the normalization rule to limit synaptic strengths: we analyse this
for a simple case.
The principal conclusions of this work are as follows:
1. It is possible to formulate computational models that account for
(a) both topography and stripe formation, and (b) ocular dominance
segregation in the presence of *positive* correlations between
the two eyes.
2. Correlations can be used as a "currency" with which to compare
locality within an eye with correspondence between eyes. This
leads to the novel prediction that stripe width can be influenced
by the degree of correlation between the two eyes.
Instructions for obtaining by anonymous ftp:
% ftp cheops.cis.ohio-state.edu
Name: anonymous
Password: neuron
ftp> binary
ftp> cd pub/neuroprose
ftp> get goodhill.thesis.tar
ftp> quit
% tar -xvf goodhill.thesis.tar (This creates a directory called thesis)
% cd thesis
% more README
WARNING: goodhill.thesis.tar is 2.4 Megabytes, and the thesis takes up
13 Megabytes if all files are uncompressed (there are only 120 pages
- the size is due to the large number of pictures). Each file within
the tar file is individually compressed, so it is not necessary to
have 13 Meg of spare space in order to print out the thesis.
The hardcopy version is also available by requesting CSRP 226 from:
Berry Harper
School of Cognitive and Computing Sciences
University of Sussex
Falmer
Brighton BN1 9QN
GREAT BRITAIN
Please enclose a cheque for either 5 pounds sterling or 10 US dollars,
made out to "University of Sussex".
Geoffrey Goodhill
University of Edinburgh
Centre for Cognitive Science
2 Buccleuch Place
Edinburgh EH8 9LW
email: gjg@cns.ed.ac.uk
------------------------------
Subject: Neural Chess: Paper Presentation
From: David Kanecki <kanecki@cs.uwp.edu>
Date: Sun, 12 Jul 92 18:53:47 -0600
[[ Editor's Note: Long-term readers will remember David's thoughtful
postings in the past and his long-term work with applying connectionist
models to the game of chess. Unfortunately this announcement sat in my
queue too long for it to be timely. However, I encourage any of you who
have a serious interest in this problem to get in touch with David.
Perhaps he will be kind enough to provide a recap of his talk to Digest
readers? -PM ]]
PAPER PRESENTATION ANNOUNCEMENT
"Simulation as an Intelligent, Thinking Computer Program as
Neural Chess"
By
David H. Kanecki, Bio. Sci., A.C.S.
40th Summer Computer Simulation Conference
Society for Computer Simulation
July 27-31, 1992
Nugget Hotel, Sparks, NV
Group 4, Session 7, Tuesday, July 28, 3:30-5:00
Carson Room, Nugget Hotel, Sparks, NV
In the above presentation, I will present results obtained from my
ten-year development project. In addition, I will present a new
methodology for modelling cognitive reasoning.
If anyone wishes a copy of this paper or to attend the conference, please
contact the Society for Computer Simulation, San Diego, CA. As to
electronic distribution of my paper, I will not have any information until
I check with the society after August 1st.
This work is a major breakthrough in intelligent thinking systems that
can be used for applications in navigation, logistics, etc.
* * *
This conference will present 15 topics, of which 2 are related to neural
network applications: intelligent systems (group 3) and
AI/KBS in simulation (group 4), in which 25 seminars are scheduled.
"As we learn and teach, we move to the next higher level of intelligence."
David H. Kanecki, Bio. Sci., A.C.S.
P.O. Box 93
Kenosha, WI 53141
kanecki@cs.uwp.wisc.edu
------------------------------
End of Neuron Digest [Volume 11 Issue 47]
****************************************
Received: from BUACCA by BUACCA.BU.EDU (Mailer R2.08 PTF009) with BSMTP id
3295; Wed, 18 Aug 93 20:32:01 EDT
Received: from noc4.dccs.upenn.edu by BUACCA.BU.EDU (IBM VM SMTP R1.2.1) with
TCP; Wed, 18 Aug 93 20:31:57 EDT
Received: from CATTELL.PSYCH.UPENN.EDU by noc4.dccs.upenn.edu
id AA24311; Wed, 18 Aug 93 20:01:25 -0400
Return-Path: <marvit@cattell.psych.upenn.edu>
Received: from LOCALHOST by cattell.psych.upenn.edu
id AA07519; Wed, 18 Aug 93 18:59:31 EDT
Posted-Date: Wed, 18 Aug 93 18:58:54 -0400
From: "Neuron-Digest Moderator" <neuron-request@CATTELL.PSYCH.UPENN.EDU>
To: Neuron-Distribution:;
Subject: Neuron Digest V11 #48 (tech reports, books, papers)
Reply-To: "Neuron-Request" <neuron-request@CATTELL.PSYCH.UPENN.EDU>
X-Errors-To: "Neuron-Request" <neuron-request@psych.upenn.edu>
Organization: University of Pennsylvania
Date: Wed, 18 Aug 93 18:58:54 -0400
Message-Id: <7511.745714734@cattell.psych.upenn.edu>
Sender: marvit@cattell.psych.upenn.edu
Neuron Digest Wednesday, 18 Aug 1993
Volume 11 : Issue 48
Today's Topics:
ANNs for Noise Filtering, Edge Detect. and Signature Extraction
Paper available: `Statistical Aspects of Neural Networks'
Preprint available: A network to velocity vector-field correction
Several papers (Simulated Annealing, Review, NP-hardness)
TR - VISUAL ATTENTION AND INVARIANT PATTERN RECOGNITION
2 TRs - Iterated Function Systems, Approximations to Functions
Book - The Global Dynamics of CA
IJNS contents vol. 3 issues 2 and 3
Preprint available: Synchronization and label-switching
VLSI Neural Network Application in High Energy Physics
Preprint available: A network to velocity vector-field correction
Genetic Synthesis of Unsupervised Learning Algorithms
Preprint Available: Random-Walk Learning
Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@psych.upenn.edu". The ftp archives are
available from psych.upenn.edu (130.91.68.31). Back issues requested by
mail will eventually be sent, but may take a while.
----------------------------------------------------------------------
Subject: ANNs for Noise Filtering, Edge Detect. and Signature Extraction
From: speeba@cardiff.ac.uk (Eduardo Bayro)
Date: Sat, 11 Jul 92 19:24:12 +0000
/****NEURAL COMPUTING****IMAGE PROCESSING*****NEURAL COMPUTING*********/
Journal Systems Engineering (1992)2, Springer Verlag
NEURAL COMPUTING FOR NOISE FILTERING, EDGE DETECTION
AND SIGNATURE EXTRACTION
D.T. Pham and E.J. Bayro-Corrochano
This paper describes two applications of neural computing
to low-level image processing. The first application concerns noise
filtering and edge detection. A neural processor employing
back-propagation multi-layer perceptrons is presented which has been shown
quantitatively to perform better than well-known conventional edge
detectors. The second application is in feature extraction. A mask set
has been designed for picking up basic geometrical details of
skeletonised contours. The use of the masks in a net which implements the
n-tuple contour analysis technique is reported.
Correspondence and offprint requests to:
D.T. Pham
e-mail: phamdt@uk.ac.cardiff
E.J. Bayro-Corrochano
e-mail: speeba@uk.ac.cardiff
Intelligent Systems Research Laboratory, School of
Electrical, Electronic and Systems Engineering,
University of Wales, College of Cardiff, P.O. Box 904,
Cardiff CF1 3YH, U.K.
/****NEURAL COMPUTING****IMAGE PROCESSING*****NEURAL COMPUTING*********/
------------------------------
Subject: Paper available: `Statistical Aspects of Neural Networks'
From: ripley@statistics.oxford.ac.uk (Prof. Brian Ripley)
Date: Mon, 20 Jul 92 11:46:51 +0000
[This corrects a message sent an hour or so ago. We have re-organized to
a more logical directory.]
A paper, with principal audience statisticians, entitled
Statistical Aspects of Neural Networks
is available by anonymous ftp from
markov.stats.ox.ac.uk (192.76.20.1 or 129.67.1.190)
at pub/neural/papers/ripley.ps.Z (336kB), with abstract ripley.abstract as
follows:
Neural networks have been a much-publicized topic of research in the
last five years, and are now beginning to be used in a wide range of
subject areas traditionally thought by statisticians to be their
domain. This paper explores the basic ideas of neural networks from the
point of view of a statistician, and compares some of their
applications with those of traditional and modern methods of statistics
and pattern recognition.
Neural networks are mainly used as non-linear approximations to
multivariable functions or as classifiers. They are non-parametric in
character in that no subject-domain knowledge is incorporated in the
modelling process, and the parameters are estimated using algorithms
which at least in principle can be computed on loosely-coupled parallel
computers. We argue that the modelling-based approach traditional in
statistics and pattern recognition can be at least as effective, and
often more so. This is illustrated by data on the areas in Zimbabwe
environmentally suitable for Tsetse flies.
Invited lectures for SemStat (Séminaire Européen de
Statistique), Sandbjerg, Denmark, 25-30 April 1992. To appear in the
proceedings to be published by Chapman & Hall in January 1993.
.----------------------------------------------------.
| Prof. Brian D. Ripley |
| Dept. of Statistics, |
| University of Oxford, |
| 1 South Parks Road, |
| Oxford OX1 3TG, UK |
| |
| ripley@uk.ac.ox.stats (JANET) |
| ripley@stats.ox.ac.uk (Internet) |
`----------------------------------------------------'
------------------------------
Subject: Preprint available: A network to velocity vector-field correction
From: Alfred_Nischwitz <alfred@lnt.e-technik.tu-muenchen.de>
Date: Mon, 20 Jul 92 14:24:33 +0100
The following paper has been accepted for publication in the
proceedings of the International Conference on
Artificial Neural Networks '92 in Brighton:
Relaxation in 4D state space - A competitive network
approach to object-related velocity vector-field correction
by Helmut Gluender Institut fuer Medizinische Psychologie
Ludwig-Maximilians-Universitaet
Goethestrasse 31, D-8000 Muenchen 2, Germany
and Astrid Lehmann Lehrstuhl fuer Nachrichtentechnik
Technische Universitaet Muenchen
Arcisstrasse 21, D-8000 Muenchen 2, Germany
ABSTRACT:
A standard principle of (energy-)minimization is applied to the
problem of visual motion analysis. In contrast to well-known
mathematical optimization procedures and universal optimizing
networks it is proposed to use a problem-adapted network
architecture. Owing to the bilocal coincidence-type motion
detector considered here the task of object-related motion
analysis appears as a geometric correspondence problem. Hence,
the correct spatio-temporal correspondences between elements in
consecutive images must be selected from all possible ones. This
is performed by neighborhood operations that are repeatedly
applied to the instantaneous signal representation in the
space/velocity-domain until an estimate of the actual flow-field
is reached.
Hardcopies of the paper are available. Please send requests
to the following address in Germany:
Helmut Gluender
Institut fuer Medizinische Psychologie
Ludwig-Maximilians-Universitaet
Goethestrasse 31, D-8000 Muenchen 2, Germany
or via email to:
alfred@lnt.e-technik.tu-muenchen.de
communicated by Alfred Nischwitz
------------------------------
Subject: Several papers (Simulated Annealing, Review, NP-hardness)
From: Xin Yao <Xin.Yao@dbce.csiro.au>
Organization: CSIRO, Div. Building Constr. and Eng'ing, Melb., Australia
Date: Fri, 24 Jul 92 14:27:37 -0500
The following papers have been put in the neuroprose archive, thanks to
Jordan Pollack. A limited number of hard copies can be obtained by sending
a note, specifying the author and title, to:
Smail: Ms. Cathy Bowditch, The Editor
CSIRO Division of Building, Construction and Engineering
PO Box 56, Highett, Vic 3190, Australia
Email: cathy@mel.dbce.csiro.au
(1) X. Yao, "A Review of Evolutionary Artificial Neural Networks," Accepted by
International Journal of Intelligent Systems, to appear.
Filename in neuroprose: yao.eann.ps.Z
(2) X. Yao, "Finding Approximate Solutions to NP-hard Problems by Neural
Networks Is Hard," Information Processing Letters, 41:93--98, 1992.
Filename: yao.complex.ps.Z
(3) X. Yao, "Simulated Annealing with Extended Neighbourhood," International
Journal of Computer Mathematics, 40:169--189, 1991.
Filename: yao.sa_en.ps.Z
ftp Instructions:
unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: (your email address)
ftp> cd pub/neuroprose
ftp> binary
ftp> get yao.filename.ps.Z (where filename is one of the above three)
ftp> quit
unix> uncompress yao.filename.ps.Z
unix> lpr yao.filename.ps (or whatever you use to print .ps files)
--
| Xin Yao CSIRO Division of Building, Construction and Engineering |
| Post Office Box 56, Highett, Victoria, Australia 3190 |
| Internet: xin@mel.dbce.csiro.au Fax: +61 3 252 6244 |
| Tel: +61 3 252 6000 (switchboard) +61 3 252 6374 (office) |
|_____________________________________________________________________________|
------------------------------
Subject: TR - VISUAL ATTENTION AND INVARIANT PATTERN RECOGNITION
From: bruno@cns.caltech.edu (Bruno Olshausen)
Date: Fri, 07 Aug 92 22:51:01 -0800
The following technical report has been archived for public ftp:
----------------------------------------------------------------------
A NEURAL MODEL OF VISUAL ATTENTION AND INVARIANT PATTERN RECOGNITION
Bruno Olshausen, Charles Anderson*, and David Van Essen
Computation and Neural Systems Program
Division of Biology, 216-76
and
*Jet Propulsion Laboratory
California Institute of Technology
Pasadena, CA 91125
CNS Memo 18
Abstract. We present a biologically plausible model of an attentional
mechanism for forming position- and scale-invariant object
representations. The model is based on using control neurons to
dynamically modify the synaptic strengths of intra-cortical
connections so that information from a windowed region of primary
visual cortex, V1, is routed to higher cortical areas while preserving
information about spatial relationships. This paper describes details
of a neural circuit for routing visual information and provides a
solution for controlling the circuit as part of an autonomous
attentional system for recognizing objects. The model is designed to
be consistent with known neurophysiology, neuroanatomy, and
psychophysics, and it makes a variety of experimentally testable
predictions.
----------------------------------------------------------------------
Obtaining the paper via anonymous ftp:
1. ftp to kant.cns.caltech.edu (131.215.135.31)
2. login as 'anonymous' and type your email address as the password
3. cd to pub/cnsmemo.18
4. set transfer mode to binary (type 'binary' at the prompt)
5. get either 'paper-apple.tar.Z' or 'paper-sparc.tar.Z'. The first
will print on the Apple LaserWriter II, the other on the SPARCprinter.
(They may work on other PostScript printers too, but I can't guarantee it.)
6. quit from ftp, and then uncompress and detar the file on your
machine by typing
uncompress -c filename.tar.Z | tar xvf -
7. remove the tarfile and print out the three postscript files
(paper1.ps, paper2.ps and paper3.ps), beginning with paper3.ps.
If you don't have an appropriate PostScript printer, then send a
request for a hardcopy to bruno@cns.caltech.edu.
------------------------------
Subject: 2 TRs - Iterated Function Systems, Approximations to Functions
From: rdj@demos.lanl.gov (Roger D. Jones)
Date: Mon, 10 Aug 92 10:35:07 -0700
TECHNICAL REPORTS AVAILABLE
A RECURRENT NETWORK FOR THE SOLUTION TO THE INVERSE PROBLEM
OF ITERATED FUNCTION SYSTEMS
O. L. Bakalis, R. D. Jones, Y. C. Lee, and B. J. Travis
ON THE EXISTENCE AND STABILITY OF CERTAIN TYPES OF NEUROMORPHIC
APPROXIMATIONS TO FUNCTIONS
R. K. Prasanth, R. D. Jones, and Y. C. Lee
Please send surface mail address to rdj@lanl.gov
or
Roger D. Jones
MS-F645
Los Alamos National Laboratory
Los Alamos, New Mexico 87545
------------------------------
Subject: Book - The Global Dynamics of CA
From: Andrew Wuensche <100020.2727@CompuServe.COM>
Date: 13 Aug 92 09:06:07 -0500
I would like to announce the following book, now available.
thanks
Andy Wuensche
wuensch@santafe.edu
THE GLOBAL DYNAMICS OF CELLULAR AUTOMATA
An Atlas of Basin of Attraction Fields of
One-Dimensional Cellular Automata.
Andrew Wuensche
Mike Lesser
Foreword by Chris Langton
Diskette included for PC-compatible computers.
Santa Fe Institute Studies in the Sciences of Complexity
Reference Vol 1
Addison-Wesley Publishing Co. Reading MA, phone:(800) 447 2226
ISBN 0-201-55740-1 price: about $54
Abstract:
The Global Dynamics of Cellular Automata introduces a new global
perspective for the study of discrete dynamical systems, analogous to
the phase portrait in continuous dynamical systems.
As well as looking at the unique trajectory of the system's future,
an algorithm is presented that directly computes the multiple merging
trajectories that may have constituted the system's past. A given set
of cellular automata parameters will, in a sense, crystallize state
space into a set of basins of attraction that will typically have the
topology of branching trees rooted on attractor cycles. The explicit
portraits of these mathematical objects are made accessible. The Atlas
presents two complete classes of such objects: for the 3-neighbour
rules (elementary rules) and for the 5-neighbour totalistic rules.
The book looks in detail at CA architecture and rule systems, and
the corresponding global dynamics. It is shown that the evolution of CA
with periodic boundary conditions is bound by general principles
relating to symmetries of the circular array. The rule numbering system
and equivalence classes are reviewed. Symmetry categories, rule
clusters, limited pre-image rules, and the reverse algorithm are
introduced. The Z parameter (depending only on the rule table) is
introduced, reflecting the degree of pre-imaging, or the convergence of
dynamical flow in state space evident in the basin of attraction
field. A relationship between the Z parameter, basin field topology,
and rule behaviour classes is proposed. A genotype-phenotype analogy
looks at the effect of mutating the rule table to produce mutant basin
fields.
The accompanying software is an interactive research tool capable of
generating basins of attraction for any of the 2^32 CA rules in
5-neighbour rule space (for a range of array size), as well as
pre-images, space-time patterns and mutation. The operating
instructions are contained in the book.
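[[ Editor's note: the idea of running a CA "backwards" to find merging
trajectories is easy to demonstrate by brute force on a small array
(a Python illustration added for this digest; the book's reverse
algorithm avoids this exhaustive enumeration):

def step(s, rule, n):
    """One synchronous update of an elementary (3-neighbour) CA with
    periodic boundaries; 'rule' is the usual Wolfram rule number."""
    return tuple((rule >> (s[(i - 1) % n] * 4 + s[i] * 2 + s[(i + 1) % n])) & 1
                 for i in range(n))

n, rule = 8, 90                          # 8 cells, elementary rule 90
succ = {}                                # state -> its unique successor
for x in range(2 ** n):
    s = tuple((x >> i) & 1 for i in range(n))
    succ[s] = step(s, rule, n)

pre = {s: [] for s in succ}              # state -> its pre-images
for s, t in succ.items():
    pre[t].append(s)

leaves = sum(1 for s in succ if not pre[s])
print(leaves, "of", 2 ** n, "states have no pre-image (garden-of-Eden)")

The pre-image lists are exactly the merging past trajectories; chaining
them outwards from the attractor cycles yields the branching trees of the
basin-of-attraction field. ]]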
* * * * * *
------------------------------
Subject: IJNS contents vol. 3 issues 2 and 3
From: BRUNAK@nbivax.nbi.dk
Date: 30 Oct 92 10:56:54 +0100
Begin Message:
-----------------------------------------------------------------------
INTERNATIONAL JOURNAL OF NEURAL SYSTEMS
The International Journal of Neural Systems is a quarterly journal
which covers information processing in natural and artificial neural
systems. It publishes original contributions on all aspects of this
broad subject which involves physics, biology, psychology, computer
science and engineering. Contributions include research papers, reviews
and short communications. The journal presents a fresh undogmatic
attitude towards this multidisciplinary field with the aim to be a
forum for novel ideas and improved understanding of collective and
cooperative phenomena with computational capabilities.
ISSN: 0129-0657 (IJNS)
----------------------------------
Contents of Volume 3, issue number 2 (1992):
1. H.C. Card & C.R. Schneider:
Analog CMOS Neural Circuits - In situ Learning.
2. M.W. Goudreau & C.L. Giles:
Routing in Random Multistage Interconnection Networks:
Comparing Exhaustive Search, Greedy and Neural Network Approaches.
3. P.J. Zwietering, E.H. L. Aarts & J. Wessels:
Exact Classification with Two-Layered Perceptrons.
4. D. Saad & R. Sasson:
Examining the CHIR Algorithm Performance for
Multilayer Networks and Continuous Input Vectors.
5. I. Ginzberg & D. Horn:
Learning the Rule of a Time Series.
6. H.J. Chang, J. Ghosh & K. Liano:
A Macroscopic Model of Neural Ensembles:
Learning-Induced Oscillations in a Cell.
7. S. Hejazi, S.M. Bauer & R.A. Spangler:
Neural Network Analysis of Thermal Image Data.
8. K.T. Sun & H.C. Fu:
A Neural Network Implementation for the Traffic
Control Problem on Crossbar Switch Networks.
Contents of Volume 3, issue number 3 (1992):
1. J. Reynolds & L. Tarassenko:
Spoken Letter Recognition with Neural Networks.
2. Z. Li:
Different Retinal Ganglion Cells have Different Functional Goals.
3. O. Shagrir:
A Neural Net with Self-Inhibiting Units for the N-Queens Problem.
4. L. Xu, S. Klasa & A. Yuille:
Recent Advances on Techniques of Static Feed-forward Networks
with Supervised Learning.
5. M-Y. Chow & S.O. Yee:
A Measure of Relative Robustness for Feedforward
Neural Networks Subject to Small Input Perturbations.
6. F.L. Chung & T. Lee:
A Node Pruning Algorithm for Backpropagation Networks.
7. S. Tan, J. Hao & J. Vandewalle:
Pattern Storage and Hopfield Neural Associative
Memory with Hidden Structure.
----------------------------------
Editorial board:
B. Lautrup (Niels Bohr Institute, Denmark) (Editor-in-charge)
S. Brunak (Technical Univ. of Denmark) (Assistant Editor-in-Charge)
D. Stork (Stanford) (Book review editor)
Associate editors:
J. Alspector (Bellcore)
B. Baird (Berkeley)
D. Ballard (University of Rochester)
E. Baum (NEC Research Institute)
S. Bjornsson (University of Iceland)
J. M. Bower (CalTech)
S. S. Chen (University of North Carolina)
R. Eckmiller (University of Dusseldorf)
J. L. Elman (University of California, San Diego)
M. V. Feigelman (Landau Institute for Theoretical Physics)
F. Fogelman-Soulie (Paris)
K. Fukushima (Osaka University)
A. Gjedde (Montreal Neurological Institute)
S. Grillner (Nobel Institute for Neurophysiology, Stockholm)
T. Gulliksen (University of Oslo)
D. Hammerstrom (Oregon Graduate Institute)
D. Horn (Tel Aviv University)
J. Hounsgaard (University of Copenhagen)
B. A. Huberman (XEROX PARC)
L. B. Ioffe (Landau Institute for Theoretical Physics)
P. I. M. Johannesma (Katholieke Univ. Nijmegen)
M. Jordan (MIT)
G. Josin (Neural Systems Inc.)
I. Kanter (Princeton University)
J. H. Kaas (Vanderbilt University)
A. Lansner (Royal Institute of Technology, Stockholm)
A. Lapedes (Los Alamos)
B. McWhinney (Carnegie-Mellon University)
M. Mezard (Ecole Normale Superieure, Paris)
J. Moody (Yale, USA)
A. F. Murray (University of Edinburgh)
J. P. Nadal (Ecole Normale Superieure, Paris)
E. Oja (Lappeenranta University of Technology, Finland)
N. Parga (Centro Atomico Bariloche, Argentina)
S. Patarnello (IBM ECSEC, Italy)
P. Peretto (Centre d'Etudes Nucleaires de Grenoble)
C. Peterson (University of Lund)
K. Plunkett (University of Aarhus)
S. A. Solla (AT&T Bell Labs)
M. A. Virasoro (University of Rome)
D. J. Wallace (University of Edinburgh)
D. Zipser (University of California, San Diego)
----------------------------------
CALL FOR PAPERS
Original contributions consistent with the scope of the journal are
welcome. Complete instructions as well as sample copies and
subscription information are available from
The Editorial Secretariat, IJNS
World Scientific Publishing Co. Pte. Ltd.
73, Lynton Mead, Totteridge
London N20 8DH
ENGLAND
Telephone: (44)81-446-2461
or
World Scientific Publishing Co. Inc.
Suite 1B
1060 Main Street
River Edge
New Jersey 07661
USA
Telephone: (1)201-487-9655
or
World Scientific Publishing Co. Pte. Ltd.
Farrer Road, P. O. Box 128
SINGAPORE 9128
Telephone (65)382-5663
-----------------------------------------------------------------------
End Message
------------------------------
Subject: Preprint available: Synchronization and label-switching
From: Alfred_Nischwitz <alfred@lnt.e-technik.tu-muenchen.de>
Date: Wed, 08 Apr 92 19:20:43 +0100
The following paper has been accepted for publication in the
proceedings of the International Conference on
Artificial Neural Networks '92 in Brighton:
SYNCHRONIZATION AND LABEL-SWITCHING IN NETWORKS OF
LATERALLY COUPLED MODEL NEURONS
by Alfred Nischwitz Lehrstuhl fuer Nachrichtentechnik
Peter Klausner Technische Universitaet Muenchen
Andreas von Oertzen Arcisstrasse 21, D-8000 Muenchen 2, Germany
and
Helmut Gluender Institut fuer Medizinische Psychologie
Ludwig-Maximilians-Universitaet
Goethestrasse 31, D-8000 Muenchen 2, Germany
ABSTRACT:
Necessary conditions for impulse synchronization in non-
oscillating networks of laterally coupled 'integrate-and-fire'
model neurons are investigated. The behaviour of such networks
for homogeneous stimulations as well as for differently stimulated
subpopulations is studied. In the first case, synchronization
accurate to fractions of the impulse duration can be achieved by
either lateral inhibition or lateral excitation and in the second
case, good and independent synchronization is obtained within
subpopulations, if they are separated by unstimulated neurons.
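[[ Editor's Note: For readers who want to play with the idea, here is a
minimal sketch (mine, not the authors') of laterally coupled
'integrate-and-fire' neurons under homogeneous stimulation; all
parameters and the ring topology are illustrative. -PM ]]

import numpy as np

# A ring of leaky integrate-and-fire neurons, each excited by its
# two nearest neighbours' spikes.
N, steps, dt = 20, 2000, 0.1            # neurons, time steps, step (ms)
tau, v_th, v_reset = 10.0, 1.0, 0.0     # membrane constant, threshold, reset
stim, w_lat = 0.12, 0.05                # homogeneous drive, lateral weight

v = np.random.rand(N)                   # random initial potentials
for t in range(steps):
    fired = v >= v_th
    v[fired] = v_reset
    # lateral input from the two neighbours that just fired
    lateral = w_lat * (np.roll(fired, 1) + np.roll(fired, -1))
    v += dt / tau * (stim * tau - v) + lateral
    if fired.any() and t % 100 == 0:
        print(f"t={t * dt:6.1f} ms  firing: {np.flatnonzero(fired)}")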
Hardcopies of the paper are available. Please send requests via
email or to the following address in Germany:
Alfred Nischwitz
Lehrstuhl fuer Nachrichtentechnik
Technische Universitaet Muenchen
Arcisstrasse 21, D-8000 Muenchen 2, F.R.Germany
email: alfred@lnt.e-technik.tu-muenchen.de
Alfred Nischwitz
------------------------------
Subject: VLSI Neural Network Application in High Energy Physics
From: LINDSEY@FNAL.FNAL.GOV
Date: Mon, 13 Apr 92 14:05:53 -0600
For those interested in hardware neural network applications,
copies of the following paper are available via mail or fax. Send requests to
Clark Lindsey at BITNET%"LINDSEY@FNAL".
REAL TIME TRACK FINDING IN A DRIFT CHAMBER WITH A
VLSI NEURAL NETWORK*
Clark S. Lindsey (a), Bruce Denby (a), Herman Haggerty (a),
and Ken Johns (b)
(a) Fermi National Accelerator Laboratory, P.O. Box 500, Batavia,
Illinois 60510.
(b) University of Arizona, Dept of Physics, Tucson, Arizona 85721.
ABSTRACT
In a test setup, a hardware neural network determined track parameters
of charged particles traversing a drift chamber. Voltages proportional
to the drift times in 6 cells of the 3-layer chamber were inputs to the
Intel ETANN neural network chip which had been trained to give the
slope and intercept of tracks. We compare network track parameters to
those obtained from off-line track fits. To our knowledge this is the
first on-line application of a VLSI neural network to a high energy
physics detector. This test explored the potential of the chip and the
practical problems of using it in a real world setting. We compare chip
performance to a neural network simulation on a conventional computer.
We discuss possible applications of the chip in high energy physics
detector triggers.
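[[ Editor's Note: For orientation, a minimal sketch (mine, not the
Fermilab code) of the kind of off-line least-squares track fit the
network outputs are compared against; the cell geometry and noise
level below are made up. -PM ]]

import numpy as np

# A straight track x = slope*z + intercept, sampled by six drift
# cells at fixed z positions (illustrative stand-ins for the
# 3-layer chamber geometry).
z = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
true_slope, true_intercept = 0.3, -0.5
x = true_slope * z + true_intercept + 0.02 * np.random.randn(6)

A = np.column_stack([z, np.ones_like(z)])    # design matrix for x = m*z + b
(m, b), *rest = np.linalg.lstsq(A, x, rcond=None)
print(f"fitted slope={m:.3f}, intercept={b:.3f} "
      f"(true {true_slope}, {true_intercept})")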
Accepted by Nuclear Instruments and Methods, Section A
* FERMILAB-Pub-92/55
------------------------------
Subject: Preprint available: A network for velocity vector-field correction
From: Alfred_Nischwitz <alfred@lnt.e-technik.tu-muenchen.de>
Date: Thu, 30 Apr 92 09:49:47 +0100
The following paper has been accepted for publication in the
proceedings of the International Conference on
Artificial Neural Networks '92 in Brighton:
Relaxation in 4D state space - A competitive network
approach to object-related velocity vector-field correction
by Helmut Gluender Institut fuer Medizinische Psychologie
Ludwig-Maximilians-Universitaet
Goethestrasse 31, D-8000 Muenchen 2, Germany
and Astrid Lehmann Lehrstuhl fuer Nachrichtentechnik
Technische Universitaet Muenchen
Arcisstrasse 21, D-8000 Muenchen 2, Germany
ABSTRACT:
A standard principle of (energy-)minimization is applied to the
problem of visual motion analysis. In contrast to well-known
mathematical optimization procedures and universal optimizing
networks it is proposed to use a problem-adapted network
architecture. Owing to the bilocal coincidence-type motion
detector considered here, the task of object-related motion
analysis appears as a geometric correspondence problem. Hence,
the correct spatio-temporal correspondences between elements in
consecutive images must be selected from all possible ones. This
is performed by neighborhood operations that are repeatedly
applied to the instantaneous signal representation in the
space/velocity-domain until an estimate of the actual flow-field
is reached.
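[[ Editor's Note: A minimal sketch (mine, not the authors' network) of
the relaxation idea: evidence s[i, v] that image element i moves with
velocity v is repeatedly reinforced by like-minded spatial neighbours
and renormalized until a flow-field estimate remains. All numbers are
illustrative. -PM ]]

import numpy as np

n_pos, n_vel = 30, 5
s = np.random.rand(n_pos, n_vel)        # initial, ambiguous match signals
s[10:20, 2] += 1.0                      # a coherently moving object
for _ in range(20):
    # support from spatial neighbours voting for the same velocity
    support = np.roll(s, 1, axis=0) + np.roll(s, -1, axis=0)
    s *= 1.0 + 0.5 * support            # reinforce supported hypotheses
    s /= s.sum(axis=1, keepdims=True)   # competition across velocities
print("estimated velocity per position:", np.argmax(s, axis=1))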
Hardcopies of the paper are available. Please send requests
to the following address in Germany:
Helmut Gluender
Institut fuer Medizinische Psychologie
Ludwig-Maximilians-Universitaet
Goethestrasse 31, D-8000 Muenchen 2, Germany
or via email to:
alfred@lnt.e-technik.tu-muenchen.de
communicated by Alfred Nischwitz
------------------------------
Subject: Genetic Synthesis of Unsupervised Learning Algorithms
From: dasdan@trbilun.bitnet (Ali Dasdan)
Date: Mon, 19 Jul 93 11:33:44 +0200
The following 25-page paper is available via anonymous ftp.
Genetic Synthesis of Unsupervised Learning Algorithms
Ali DASDAN and Kemal OFLAZER
Department of Computer Engineering and Information Science
Bilkent University
06533 Bilkent, Ankara, TURKEY
Email : dasdan@bcc.bilkent.edu.tr
Abstract
This paper presents new unsupervised learning algorithms that have been
synthesized using a genetic approach. A set of such learning algorithms
has been compared with the classical Kohonen Algorithm on the
Self-Organizing Map and has been found to perform better on a quantitative
performance measure. This study indicates that there exist many
unsupervised learning algorithms that lead to an organization similar to
that of Kohonen's Algorithm, and that genetic algorithms can be used to
search for optimal algorithms and optimal architectures for unsupervised
learning.
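[[ Editor's Note: To make the idea concrete, a minimal sketch (mine, not
the paper's method) of a genetic search over the coefficients of a
Kohonen-style update rule, scored by quantization error; all values are
illustrative. -PM ]]

import numpy as np

rng = np.random.default_rng(0)
data = rng.random((200, 2))

def fitness(genes, n_units=10, epochs=3):
    """Train a 1-D map with update strength a (winner) and a*b
    (neighbours); return negative quantization error."""
    a, b = genes
    w = rng.random((n_units, 2))
    for _ in range(epochs):
        for x in data:
            win = int(np.argmin(((w - x) ** 2).sum(axis=1)))
            for j, g in ((win, a), ((win - 1) % n_units, a * b),
                         ((win + 1) % n_units, a * b)):
                w[j] += g * (x - w[j])
    return -np.mean([((w - x) ** 2).sum(axis=1).min() for x in data])

pop = rng.random((12, 2))                     # genes: rate, neighbour factor
for gen in range(10):
    order = np.argsort([-fitness(g) for g in pop])
    pop = pop[order]                          # best individuals first
    pop[6:] = pop[:6] + 0.05 * rng.standard_normal((6, 2))  # mutate elite
print("best genes (rate, neighbour factor):", pop[0])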
To obtain an electronic copy:
Either #1 :
- --------
ftp archive.cis.ohio-state.edu
login: anonymous
password: <your email address>
cd /pub/neuroprose
binary
get dasdan.gen-unsup.ps.Z
quit
Then at your system:
uncompress dasdan.gen-unsup.ps.Z
Or #2 :
- --------
ftp firat.bcc.bilkent.edu.tr
login: anonymous
password: <your email address>
cd /pub/Neural/Papers
binary
get gen-unsup.ps.z
quit
Then at your system:
uncompress gen-unsup.ps.z
Kemal Oflazer e-mail: ko@hattusas.cs.bilkent.edu.tr
Bilkent University : ko@cs.bilkent.edu.tr
Computer Engineering Department : ko@trbilun.bitnet (BITNET/EARN)
Bilkent, ANKARA, 06533 TURKIYE tel: (90) 4 - 266-4133
fax: (90) 4 - 266-4126
- ----------------------------------------------------------------------------
------------------------------
Subject: Preprint Available: Random-Walk Learning
From: rwa@spine.lanl.gov (Russell W. Anderson)
Date: Fri, 23 Jul 93 08:08:01 -0700
PREPRINT AVAILABLE:
"Biased Random-Walk Learning:
A Neurobiological Correlate to Trial-and-Error"
(In press: Progress in Neural Networks)
Russell W. Anderson
Los Alamos National Laboratory
Abstract: Neural network models offer a theoretical testbed for the study
of learning at the cellular level. The only experimentally verified
learning rule, Hebb's rule, is extremely limited in its ability to train
networks to perform complex tasks. An identified cellular mechanism
responsible for Hebbian-type long-term potentiation, the NMDA receptor,
is highly versatile. Its function and efficacy are modulated by a wide
variety of compounds and conditions and are likely to be directed by
non-local phenomena. Furthermore, it has been demonstrated that NMDA
receptors are not essential for some types of learning. We have shown
that another neural network learning rule, the chemotaxis algorithm, is
theoretically much more powerful than Hebb's rule and is consistent with
experimental data. A biased random-walk in synaptic weight space is a
learning rule immanent in nervous activity and may account for some types
of learning -- notably the acquisition of skilled movement.
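[[ Editor's Note: A minimal sketch (mine, not Anderson's) of biased
random-walk ("chemotaxis") learning on a toy task: keep stepping in the
same random direction in weight space while the error improves,
otherwise draw a new direction. -PM ]]

import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)  # toy targets

def error(w):
    out = 1.0 / (1.0 + np.exp(-(X @ w)))    # single sigmoid unit
    return np.mean((out - y) ** 2)

w = rng.standard_normal(3)
step = 0.1 * rng.standard_normal(3)
e = error(w)
for _ in range(2000):
    if error(w + step) < e:                 # improvement: keep direction
        w += step
        e = error(w)
    else:                                   # failure: new random direction
        step = 0.1 * rng.standard_normal(3)
print("final error:", round(e, 4))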
- ------------------------------------------
Electronic copy available, excluding 2 figures.
For hardcopies of the figures, please
contact me by email or slow mail.
To obtain a postscript copy:
%ftp mhc.lanl.gov
login: anonymous
password: <your email address>
ftp> cd pub
ftp> binary
ftp> get bias.ps.Z
ftp> quit
%uncompress bias.ps.Z
%lpr bias.ps
E-mail:
send request to rwa@temin.lanl.gov
Slow mail:
Russell Anderson
Theoretical Division (T-10)
MS K710
Los Alamos National Laboratory
Los Alamos, NM 87545
USA
(505) 667-9455
- -----------------------------------------------------
------------------------------
End of Neuron Digest [Volume 11 Issue 48]
*****************************************
Received: from BUACCA by BUACCA.BU.EDU (Mailer R2.08 PTF009) with BSMTP id
0695; Fri, 27 Aug 93 01:18:23 EDT
Received: from noc4.dccs.upenn.edu by BUACCA.BU.EDU (IBM VM SMTP R1.2.1) with
TCP; Fri, 27 Aug 93 01:18:21 EDT
Received: from CATTELL.PSYCH.UPENN.EDU by noc4.dccs.upenn.edu
id AA15987; Fri, 27 Aug 93 01:15:13 -0400
Return-Path: <marvit@cattell.psych.upenn.edu>
Received: from LOCALHOST by cattell.psych.upenn.edu
id AA17228; Fri, 27 Aug 93 00:34:02 EDT
Posted-Date: Fri, 27 Aug 93 00:33:20 -0400
From: "Neuron-Digest Moderator" <neuron-request@CATTELL.PSYCH.UPENN.EDU>
To: Neuron-Distribution:;
Subject: Neuron Digest V11 #49 (discussion, follow-ups, queries, RFP)
Reply-To: "Neuron-Request" <neuron-request@CATTELL.PSYCH.UPENN.EDU>
X-Errors-To: "Neuron-Request" <neuron-request@psych.upenn.edu>
Organization: University of Pennsylvania
Date: Fri, 27 Aug 93 00:33:20 -0400
Message-Id: <17189.746426000@cattell.psych.upenn.edu>
Sender: marvit@cattell.psych.upenn.edu
Neuron Digest Friday, 27 Aug 1993
Volume 11 : Issue 49
Today's Topics:
`Statistical Aspects of Neural Networks'
Thesis proposal, comments solicited
cybernetics & AI
Basins of Attraction of Cellular Automata
S/w for forecast
new member hello
RFP Research - McDonnell-Pew Program
Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@psych.upenn.edu". The ftp archives are
available from psych.upenn.edu (130.91.68.31). Back issues requested by
mail will eventually be sent, but may take a while.
----------------------------------------------------------------------
Subject: `Statistical Aspects of Neural Networks'
From: ripley@stats.ox.ac.uk (Prof. Brian Ripley)
Date: 19 Aug 93 10:45:09 +0000
Unfortunately, Neuron Digest has just posted an announcement of this
paper dated 20 July 1992 (note, 13 months ago). The paper is now
published, and the publisher quite reasonably wants people to buy the
book, so the electronic version is no longer available on our ftp server.
The details are:
B.D. Ripley (1993) Statistical aspects of neural networks. In
`Networks and Chaos -- Statistical and Probabilistic Aspects'
eds O.E. Barndorff-Nielsen, J.L. Jensen and W.S. Kendall. Chapman & Hall.
ISBN 0 412 46530 2. pp. 40-123.
I'm sorry that several people have been misled by the very belated
announcement, completely outside my control.
Brian Ripley
[[ Editor's Note: I have replied personally to Brian Ripley. As I
thought I had mentioned in an earlier Digest, I felt the policy of
"better late than never" was a useful one. At least one author felt
differently. I am *very* interested in comments from you, the *readers*,
about my editorial policy. ]]
------------------------------
Subject: Thesis proposal, comments solicited
From: "ANTHONY ROBINS." <COSCAVR@rivendell.otago.ac.nz>
Date: Fri, 13 Aug 93 16:14:00 +1200
Dear moderator
Could you circulate on Neuron Digest the following thesis proposal
regarding Information Spaces?
Many thanks.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
To whoever is interested:
Following is a simplified version of my proposal for a PhD thesis. This may or
may not be of interest to the current audience, but if it is, I should very
much appreciate comments and/or pointers into relevant areas of available
literature - or maybe pointers at others who are doing similar things.
In anticipation of your hints, pointers, flames, or whatever - many thanks!
Till Noever
SnailMail: Department of Computer Science, EMail: cosctill@otago.ac.nz
University of Otago,
PO Box 56, Dunedin,
New Zealand
SYNOPSIS OF PROPOSAL:
"In my M.Sc. thesis I proposed a generalised view of computation and cognition
as a process of projection of points in high-dimensional state spaces into
lower-dimensional subspaces of these. The space containing all possible
computational state spaces has been dubbed 'Information Space', and the process
of projection into subspaces 'Information Space Reduction'. The properties of
computational processes and the objects they interact with, may therefore be
represented in terms of the topological and geometrical properties of this
space, or its relevant subspaces. Approaches to computation, especially with
neural nets, which use a similar kind of framework, have already been proposed;
particularly by Amari and his co-workers.
One result of applying the Information Space paradigm is the notion that
'neural' and 'symbolic' computation must be understood as being qualitatively
equivalent, in spite of their perceived differences. One of the reasons why the
apparent differences are understood as somehow fundamental, rather than
essentially quantitative, is that researchers who attempt to generate abstract
models for computational systems begin with the systems themselves - their
physical or logical structure, or the processes that are conjectured to take
place within them. The particular properties of those systems thereby assume
excessive importance, and the resulting models will be correspondingly limited.
The limitation of those models in turn, makes it difficult to explain how - in
a biological cognitive system like the human brain, for example - high-level
symbolic constructs and representations can exist and interact on a matrix
which appears to be essentially a very complex neural network.
In my PhD thesis and research I intend to take the reverse approach, and
propose a model which begins by considering a recursively definable,
high-dimensional Information Space, whose subspaces may have various topologies
and metrics. Initially this will be done theoretically, to be followed by a
computational implementation. This model should be self-consistent, and, in its
basic form, require no references to vocabulary other than the limited set
which it is based upon.
Computational and psychological concepts such as 'representation', 'learning',
and 'memory' will subsequently be explicitly defined within this framework.
Using these definitions, the system will then be presented with a simple
computational or cognitive task, such as representing, learning about, and
re-representing - that is, providing some output information about - a simple
object, such as, for example, a Necker Cube, with the aim of identifying
just how the Information Space Reductions map onto standard computing and
psychological vocabulary.
Success in this venture would be a significant step along the way towards
producing a fundamental theory of all that computational and cognitive activity
which may take place in physically realisable systems."
------------------------------
Subject: cybernetics & AI
From: cavonius@ifado.arb-phys.uni-dortmund.de
Date: Fri, 13 Aug 93 09:51:12 +0100
I suspect that a large part of the answer to Galley's question
on what happened to cybernetics is that time plays a role:
it's unfortunate, but in this field - to a greater extent than
in science in general - activity is dictated by what happens
to be fashionable at any given moment. Cybernetics was all the
rage in the late 40s and 50s. Too much enthusiasm was generated,
and when it failed to achieve everything that was expected of it,
it was renounced in favor of AI. To a certain extent the same
is now happening to AI, although AI will be harder to kill off
because our investment in it is much larger than the investment
in cybernetics was.
Dick Cavonius
------------------------------
Subject: Basins of Attraction of Cellular Automata
From: Andrew Wuensche <100020.2727@CompuServe.COM>
Date: 13 Aug 93 06:46:08 -0500
Basins of Attraction of Cellular Automata
ref recent enquiry from John Boller..
>I am looking for references to the comparison of
>Basins of Attraction of Cellular Automata and
>Neural Networks.
>I would greatly appreciate anyone who could point
>me in the correct direction.
The book "The Global Dynamics of Cellular Automata" and pre-print "The
Ghost in the Machine" detailed below may be of interest..
The pre-print describes recent work on the basins of attraction of
random Boolean networks (disordered cellular automata), and implications
for memory and learning (the abstract was posted in Volume 11:Issue 44 of
Neuron Digest). Currently only hard-copies are available. To
request copies, send email to:
andywu@cogs.susx.ac.uk, or write to
Andy Wuensche, 48 Esmond Road, London W4 1JQ, UK
don't forget to give a surface mail address.
The Global Dynamics of Cellular Automata
========================================
An Atlas of Basin of Attraction Fields of
One-Dimensional Cellular Automata.
Andrew Wuensche
Mike Lesser
Foreword by Chris Langton
Diskette included for PC-compatible computers.
Santa Fe Institute Studies in the Sciences of Complexity
Reference Vol 1 Addison-Wesley ISBN 0-201-55740-1 1992
The Ghost in the Machine
========================
Basins of Attraction of Random Boolean Networks
Andrew Wuensche
Cognitive Science Research Paper 281, University of Sussex, 1993 (to be
published in Artificial Life III, Santa Fe Institute Studies in the
Sciences of Complexity).
------------------------------
Subject: S/w for forecast
From: okoks@pc.ibt.dk
Date: Fri, 13 Aug 93 11:39:00 -0800
Hello,
I have for some time now read about the use of ANN in economics, and would
like to do some estimations/forecasts myself.
The problem is I do not have a program for this purpose. Can anyone recommend
such a program, preferably free- or shareware, as my funding is limited?
TIA,
Karsten Strobek
Institute of Economics Phone: +45 35 32 30 25
University of Copenhagen Fax: +45 35 32 30 00
Studiestraede 6 Internet: Okoks@pc.ibt.dk
DK-1455 Copenhagen K
Denmark
------------------------------
Subject: new member hello
From: anich@cordmc.dnet.etn.com (Steve Anich, Eaton Corporation, Milwaukee)
Date: Mon, 16 Aug 93 11:31:22 -0500
Hi,
I'm new to Neural Networks. I'm currently trying to get an ANN to
recognize the difference between a good and a bad weld based
on a voltage & current signature. I was using a product
from HNC to do this, but it has been crashing like crazy. I'm
going to start rolling my own (after I'm done with this message),
in case the HNC tech support people can't help.
My main interest is in ANNs for signal/signature classification. I expect
to be using it also with multiple sensors shortly. I am
interested in (but blissfully ignorant of) ANNs which can
optimize themselves over time (while being used).
Question: Does anyone know of some existing code (say C++
classes for ANN) that I can ftp from somewhere? I'd rather not
start from scratch.
FYI: I heard about this digest in a FAQ from somewhere about AI.
Thanks,
- --steve
..................................................................
Steve Anich
Eaton Corporation R&D Center | email: anich@cordmc.dnet.etn.com
Systems & Software Technologies | stevea48@aol.com
4201 N. 27th Street | voice: 414-449-6457
Milwaukee WI 53216, USA | fax: 414-449-6221
..................................................................
"I eat kludges for breakfast, Buckwheat!"
------------------------------
Subject: RFP Research - McDonnell-Pew Program
From: Cognitive Neuroscience <cns@clarity.Princeton.EDU>
Date: Tue, 17 Aug 93 11:30:02 -0500
McDonnell-Pew Program
in Cognitive Neuroscience
SEPTEMBER 1993
Individual Grants-in-Aid
for Research
Program supported jointly by the
James S. McDonnell Foundation
and The Pew Charitable Trusts
INTRODUCTION
The McDonnell-Pew Program in Cognitive Neuroscience has been
created jointly by the James S. McDonnell Foundation and The Pew Charitable
Trusts to promote the development of cognitive neuroscience. The foundations
have allocated $20 million over a five-year period for this program.
Cognitive neuroscience attempts to understand human mental events by
specifying how neural tissue carries out computations. Work in cognitive
neuroscience is interdisciplinary in character, drawing on developments in
clinical and basic neuroscience, computer science, psychology, linguistics,
and philosophy. Cognitive neuroscience excludes descriptions of psychological
function that do not address the underlying brain mechanisms and
neuroscientific descriptions that do not speak to psychological function.
The program has three components.
(1) Institutional grants, which have already been awarded,
for the purpose of creating centers where cognitive
scientists and neuroscientists can work together.
(2) Small grants-in-aid, presently being awarded, for individual
research projects to encourage Ph.D. and M.D. investigators
in cognitive neuroscience.
(3) Small grants-in-aid, presently being awarded, for individual
training projects to encourage Ph.D. and M.D. investigators
to acquire skills for interdisciplinary research.
This brochure describes the individual grants-in-aid for research.
RESEARCH GRANTS
The McDonnell-Pew Program in Cognitive Neuroscience will issue a
limited number of awards to support collaborative work by cognitive
neuroscientists. Applications are sought for projects of exceptional merit
that are not currently fundable through other channels and from investigators
who are not at institutions already funded by an institutional grant from
the program. In order to distribute available funds as widely as possible,
preference will be given to applicants who have not received previous grants
under this program.
Preference will be given to projects that are interdisciplinary in
character. The goals of the program are to encourage broad participation
in the development of the field and to facilitate the participation of
investigators outside the major centers of cognitive neuroscience.
There are no U.S. citizenship restrictions or requirements, nor does
the proposed work need to be conducted at a U.S. institution, providing the
sponsoring organization qualifies as tax-exempt as described in the
"Applications" section of this brochure. Ph.D. thesis research of graduate
students will not be funded.
Grant support under the research component is limited to $30,000
per year for two years. Indirect costs are to be included in the $30,000
maximum and may not exceed 10 percent of total salaries and fringe
benefits. These grants are not renewable after two years.
The program is looking for innovative proposals that would, for
example:
* combine experimental data from cognitive psychology and neuroscience;
* explore the implications of neurobiological methods for the study
of the higher cognitive processes;
* bring formal modeling techniques to bear on cognition, including
emotions and higher thought processes;
* use sensing or imaging techniques to observe the brain during
conscious activity;
* make imaginative use of patient populations to analyze cognition;
* develop new theories of the human mind/brain system.
This list of examples is necessarily incomplete but should suggest the
general kind of proposals desired. Ideally, a small grant-in-aid for
research should facilitate the initial exploration of a novel or risky
idea, with success leading to more extensive funding from other sources.
APPLICATIONS
Applicants should submit five copies of the following information:
* a brief, one-page abstract describing the proposed work;
* a brief, itemized budget that includes direct and indirect
costs (indirect costs may not exceed 10 percent of total
salaries and fringe benefits);
* a budget justification;
* a narrative proposal that does not exceed 5,000 words; the
5,000-word proposal should include:
1) a description of the work to be done and where
it might lead;
2) an account of the investigator's professional
qualifications to do the work;
3) an account of any plans to collaborate with other
cognitive neuroscientists;
4) a brief description of the available research
facilities;
* curriculum(a) vitae of the participating investigator(s);
* an authorized document indicating clearance for the use of
human and animal subjects;
* an endorsement letter from the officer of the sponsoring
institution who will be responsible for administering the
grant.
One copy of the following items must also be submitted along with the
proposal. These documents should be readily available from the sponsoring
institution's grants or development office.
* A copy of the IRS determination letter, or the international
equivalent, stating that the sponsoring organization is a nonprofit,
tax-exempt institution classified as a 501(c)(3) organization.
* A copy of the IRS determination letter stating that your organization
is not listed as a private foundation under section 509(a) of the
Internal Revenue Service Code.
* A statement on the sponsoring institution's letterhead, following
the wording on Attachment A and signed by an officer of the
institution, certifying that the status or purpose of the
organization has not changed since the issuance of the IRS
determinations. (If your organization's name has changed, include
a copy of the IRS document reflecting this change.)
* An audited financial statement of the most recently completed fiscal
year of the sponsoring organization.
* A current list of the names and professional affiliations of the
members of the organization's board of trustees and the names and
titles of the principal officers.
Other appended documents will not be accepted for evaluation and will be
returned to the applicant. Any incomplete proposals will also be returned
to the applicant.
Submissions will be reviewed by the program's advisory board.
Applications must be postmarked on or before FEBRUARY 1 to be considered
for review.
INFORMATION
McDonnell-Pew Program in Cognitive Neuroscience
Green Hall 1-N-6
Princeton University
Princeton, New Jersey 08544-1010
Telephone: 609-258-5014
Facsimile: 609-258-3031
Email: cns@clarity.princeton.edu
ADVISORY BOARD
Emilio Bizzi, M.D.
Eugene McDermott Professor in the Brain
Sciences and Human Behavior
Chairman, Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology, E25-526
Cambridge, Massachusetts 02139
Sheila E. Blumstein, Ph.D.
Professor of Cognitive and Linguistic Sciences
Dean of the College
Brown University
University Hall, Room 218
Providence, Rhode Island 02912
Stephen J. Hanson, Ph.D.
Head, Learning Systems Department
Siemens Corporate Research
755 College Road East
Princeton, New Jersey 08540
Jon H. Kaas, Ph.D.
Centennial Professor
Department of Psychology
Vanderbilt University
301 Wilson Hall
111 21st Avenue South
Nashville, Tennessee 37240
George A. Miller, Ph.D.
Director, McDonnell-Pew Program in Cognitive Neuroscience
James S. McDonnell Distinguished University Professor of Psychology
Department of Psychology
Princeton University
Princeton, New Jersey 08544-1010
Mortimer Mishkin, Ph.D.
Chief, Laboratory of Neuropsychology
National Institute of Mental Health
9000 Rockville Pike
Building 49, Room 1B80
Bethesda, Maryland 20892
Marcus E. Raichle, M.D.
Professor of Neurology and Radiology
Division of Radiation Sciences
Washington University School of Medicine
Campus Box 8225
510 S. Kingshighway Boulevard
St. Louis, Missouri 63110
Endel Tulving, Ph.D.
Tanenbaum Chair in Cognitive Neuroscience
Rotman Research Institute of Baycrest Centre
3560 Bathurst Street
North York, Ontario M6A 2E1
Canada
------------------------------
End of Neuron Digest [Volume 11 Issue 49]
*****************************************
Received: from BUACCA by BUACCA.BU.EDU (Mailer R2.08 PTF009) with BSMTP id
1034; Tue, 31 Aug 93 22:25:02 EDT
Received: from noc4.dccs.upenn.edu by BUACCA.BU.EDU (IBM VM SMTP R1.2.1) with
TCP; Tue, 31 Aug 93 22:24:59 EDT
Received: from CATTELL.PSYCH.UPENN.EDU by noc4.dccs.upenn.edu
id AA27399; Tue, 31 Aug 93 22:15:17 -0400
Return-Path: <marvit@cattell.psych.upenn.edu>
Received: from LOCALHOST by cattell.psych.upenn.edu
id AA29134; Tue, 31 Aug 93 21:17:09 EDT
Posted-Date: Tue, 31 Aug 93 21:16:32 -0400
From: "Neuron-Digest Moderator" <neuron-request@CATTELL.PSYCH.UPENN.EDU>
To: Neuron-Distribution:;
Subject: Neuron Digest V11 #50 (misc discussion, s/w, queries, etc.)
Reply-To: "Neuron-Request" <neuron-request@CATTELL.PSYCH.UPENN.EDU>
X-Errors-To: "Neuron-Request" <neuron-request@psych.upenn.edu>
Organization: University of Pennsylvania
Date: Tue, 31 Aug 93 21:16:32 -0400
Message-Id: <29129.746846192@cattell.psych.upenn.edu>
Sender: marvit@cattell.psych.upenn.edu
Neuron Digest Tuesday, 31 Aug 1993
Volume 11 : Issue 50
Today's Topics:
Letter/Submission to Neuron Digest
Neural Dreams...
Ref: 1993 SCSC Paper
Request
Kohonen Software & Email Address
Benchmarks?
seeking a Director for a Center for Neuroscience at Boston Univ.
NevProp 1.16 Update Available
Neural hardware performance criteria
Commercial Neural Network Software
cybernetics
Help for signature verification
neural network society membership
neural network research centers
Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@psych.upenn.edu". The ftp archives are
available from psych.upenn.edu (130.91.68.31). Back issues requested by
mail will eventually be sent, but may take a while.
----------------------------------------------------------------------
Subject: Letter/Submission to Neuron Digest
From: David Kanecki <kanecki@cs.uwp.edu>
Date: Tue, 17 Aug 93 20:54:50 -0600
[[ Editor's Note: Thanks to David for this description. I'm sure many
readers will find it food for thought and may wish to correspond with him
about his work on the "classic" problem which AI tried to tackle 20-30
years ago. -PM ]]
August 17, 1993
Dear Peter,
If you wish, I can send a copy of my "Neural Chess" presentation text and a
copy of my "3D Neural Chess" presentation text.
Since I last wrote you, I have made two further developments in Neural Chess:
1) Development of 3d Neural Chess and 2) Completion of the first Neural
Chess study "The Use of 2D Neural Chess, an Intelligent, Thinking Computer
Program as an Aid to Compare Strategic/Personal Neural Processing". I would
like to submit the "3d Neural Chess" abstract and the "2d Neural chess"
study for publication in the Neuron Digest.
Also, I am unemployed and looking for employment in this or other science
fields. For employment correspondence or a copy of my current resume,
please contact me at 4410 19th Avenue, Kenosha, WI 53140, (414)- 654-7560
or by e-mail. Finally, I have set up the "Neural/Sim BBS" at
(414)-654-7560 that runs from 7pm to 7am CT, Monday through Friday.
Next, an overview of what I present is:
On intelligent, thinking systems I have developed four categories of
new systems and technologies: 1) 1992 - Module 1 - 2d Neural Chess - The
basis of intelligent, thinking systems; 2) 1993 - Module 2 - 3d Neural
Chess - The enhancement and next level in intelligent, thinking systems;
and 3) The ability to quantify strategic/ personal neural decision making;
and 4) High speed and high volume data analysis using the neural programs
to define "intelligent control processor systems".
Finally, any inquiries or licences for the Neural Chess programs or high
speed and high volume data analysis (intelligent control processor system)
can be directed to: Peter Jansson, Attorney at Law, 245 Main St. Racine,
WI, (414)-632-6900
======== 3d Neural Chess Abstract ========
First, I have developed "3D Neural Chess". This is an enhancement of the
Neural Chess program that allows the computer to play chess in 3 dimensions
as x, y, and z, as opposed to two dimensions as x and y in conventional
chess. Thus, the decision making is more complicated. The "3D Neural Chess"
paper was published in 1993 Summer Computer Simulation Conference Proceed-
ings. This application represents a major breakthrough in learning
and teaching applications and modules using the main program as the basis.
The abstract of the paper is as follows:
A Decision Support System for Simulation and Real Time Applications
as 3D Neural Chess
David H. Kanecki, A.C.S., Bio. Sci.
P.O. Box 26944
Wauwatosa, WI 53226-0944
In decision support systems, this 3d computer program represents a major
breakthrough in new software techniques and applications in
micro computing environments, palmtop computing. 3d neural chess includes
the whole decision space, while 2d neural chess includes a plane of the
decision space, and is only a partial support to the decision support
system. Based on my work and paper "Simulation as an Intelligent, Think-
ing Computer Program as Neural Chess" that was presented at the 1992 SCSC
(Kanecki 1992a), I have developed a 3d Neural chess program that shows the
1) strength, 2) adaptability, and 3) learning of reasoning by sensation,
sensual reasoning. This decision support program was a 3 year outgrowth
from the Neural Chess program. The goals of the project were: 1) Is sensual
reasoning universal? 2) How well could it work in the next dimensional
system?
In this paper, I will present the game environment of three dimensional
chess that the 3d chess program played. Also, I will describe the conclu-
sions of a match against a human opponent. Finally, I will describe
the cognitive theory proved in this project.
The decision support system as 3d neural chess is expandable to many other
physical systems. It can be used in accessible, portable micro systems in
various physical environments, terrain processing, navigation, and resource
planning applications (air, water, land, space). The computer program
requires thousands of lines of programming and uses "no decision trees or
databases" and is based strictly on "sensual reasoning" by the neural
network. By additional logic models, the future applications mentioned can
be developed.
Keywords: Sensual reasoning, Decision Support Systems, 3D Neural Chess,
AI/KBS, Neural Networks, Terrain Processing, Robotics, Cognition, Intelli-
gent Thinking Computer Software, Resource Planning, Logistics, Atomic Mind,
Interaction Difference
<Published in the 1993 Summer Computer Simulation Conference Proceedings,
The Society for Computer Simulation, San Diego, CA >
======== 2d Neural Chess Study =========
With the neural chess program, I have been able to study the strategic and
personal neural processing of individuals based on the chess matches that
they have played. From this study, I have found measurable differences in
neural processing. The neural chess programs allow one to quantify these
differences as stated in the paper below:
The Use of 2D-Neural Chess,
an Intelligent, Thinking Computer Program,
as an Aid to Compare
Strategic/Personal Neural Processing
By
David H. Kanecki, A.C.S., Bio. Sci.
P.O. Box 26944
Wauwatosa, WI 53226-0944
Internet: kanecki@cs.uwp.edu
Neural/SIM BBS, (414)-654-7560, 7pm-7am ct, 300/1200 baud
With the use of the 2D-Neural Chess program, I was able to determine quan-
titative differences in various chess strategies. The study was done by
having the 2D Neural chess program generate a neural matrix that best
responded and adapted to a given chess match that was processed by the
program. The response and adaptation used a process called sensual reason-
ing (Kanecki 1992a). This process allows the computer to make a decision by
integrating its sensations in a method similar to biological organisms
(Kanecki 1992a).
The 2D Neural Chess program represents a new development in emulating
intelligence and thinking. The program is unique in that it uses no data-
base or game trees. Instead, it uses real time neural update using a bio-
logical basis as its model (Kanecki 1990a). The basis of its decision
making is the atomic neuron and atomic mind (Kanecki 1992a). The atomic
neuron senses, and the atomic mind integrates and acts.
In the initial test matches of the program against a human opponent, the
program was able to defeat a human opponent (Kanecki 1991a). Also, the 2D-
Neural Chess program had learned how to defeat a human opponent in only
four games. In addition, the 2D-Neural Chess program is so responsive and
adaptive, it can continue to play a strong chess match even when an oppo-
nent deliberately makes an illegal move (Kanecki 1991a, Kanecki 1992a). In
addition, the Neural Chess program can explain in ordinary human written
language why it chose its move. Lastly, the 2D-Neural Chess program repre-
sents 10 years of research.
In this study using the 2D-Neural Chess program, I wanted to see if the
program could allow one to quantitatively determine if there was a differ-
ence in various chess strategies. To start the study, I selected 5 chess
strategies with 3 replicates of each strategy. Then, after each match, the
atomic neural values were sorted and collated by strategy, time interval,
and game result. Next, the data was analyzed for 3 major time intervals
using the chi square method. Finally, the data was normalized by dividing
the chi square value by the significant chi square value. Thus, any value
greater than 1.00 is significant. Also, any value greater than 2.00 is
highly significant and a value greater than 3.00 is extremely significant.
The results of the study are:
Strategy - Win/Loss Comparison
Move | KID GRU QID BER PIR
----------------------------------------------------------
5 | 0.18 0.14 0.72 0.96 0.04
11 | 0.33 1.41 1.21 2.11 0.20
18 | 4.14 10.15 2.33 5.51 0.59
In the table above, 'KID' indicates that the "King's Indian Defense" was
used, 'GRU' the "Gruenfeld Defense", 'QID' the "Queen's Indian Defense",
'BER' the "Bern Defense", and 'PIR' the "Pirc Defense". The word 'move'
indicates the ending time interval. The
three time intervals used were moves 0, initial atomic neuron states, to 5,
moves 6 to 11, and moves 12 to 18.
Two general comments can be made on various chess strategies. One, the
difference measure, the normalized chi square value, increased in each time
interval. Two, the difference measure reaches the significance threshold,
1.00, at different time intervals. For example, the 'KID' strategy reaches
the significance threshold at time interval 18, the 'GRU' strategy reaches
the significance threshold at time interval 11, the 'QID' strategy reaches
the significance threshold at time interval 11, and the 'BER' strategy
reaches the significance threshold at time interval 11. The only exception
to the second statement, is the 'PIR' strategy. This strategy does not
reach the significance threshold in the time intervals studied.
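[[ Editor's Note: The normalization described above is easy to
reproduce; a minimal sketch of mine follows, with an assumed p = 0.05
and an assumed number of degrees of freedom, neither of which is stated
in the text. -PM ]]

from scipy.stats import chi2

def normalized_chi_square(statistic, dof, alpha=0.05):
    """Observed chi-square divided by the critical ("significant")
    value, so any result above 1.00 is significant."""
    return statistic / chi2.ppf(1.0 - alpha, dof)

# e.g. an observed statistic of 22.3 with 10 degrees of freedom:
print(round(normalized_chi_square(22.3, 10), 2))   # 1.22 -> significant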
The quantitative analysis that is afforded by sensual reasoning allows one
to view increases and decreases in difference measures. Also, the quantita-
tive difference measure allows one to monitor changes in the neural system
as the atomic neuron and atomic mind. Finally, it shows how responsive and
adaptive neural systems are to solving neural processing problems, i.e.
chess strategies.
The game of chess was used as a metaphor to study neural interaction in
decision making. An interesting observation made in this project was that
the computing method mattered much more than the computing hardware. For
example, Neural Chess, later to be called 2D-Neural Chess, was originally
developed on an Osborne-1 with 64K RAM and running CP/M. Also, because of
the computing power of neural systems, the Osborne-1 Neural chess program
was able to make chess decisions in real time. Thus, a neural processing
method and a computer with a proper neural architecture basis allow one
to study an aspect of human thought "in vitro".
[1992] Kanecki, D.H., "Simulation as an Intelligent, Thinking Computer
System as Neural Chess", Proceedings 1992 Summer Computer Simulation Con-
ference, The Society for Computer Simulation, pages 428-432.
[1991] Kanecki, D.H., "Neural Chess - Presentation of Findings", Neuron
Digest, Volume 7, Issue 39, July 8, 1991.
[1990] Kanecki, D.H., "Neural Chess: I have developed a program", Neuron
Digest, Volume 6, Issue 88, November 25, 1990.
------------------------------
Subject: Neural Dreams...
From: ttgrq@info.win.tue.nl (Andreas Gammel)
Date: Wed, 18 Aug 93 14:11:33 +0100
[[ Editor's Note: As long-time readers know, the Digest is for beginners
and sophisticates alike. This fellow does take the topic farther than
much of our day-to-day work... -PM ]]
Hello, my name is Andreas and I'm a newbie in the field of NN and this
list. I've read a very nice book about it (back propagation, cascade
correlation, Hopfield nets etc.) and I'm thinking of writing some programs
for it. Any software (DOS, Unix, sources and ftp-sites) would be most
appreciated. Just email it to me.
I have a rather philosophical topic..
I was wondering if it would be possible to let a Neural Net dream?
What I mean is this. Say we make a NN with 100 binary inputs and 1 binary
output. The input is a 10 x 10-grid representing some picture. The output
is an answer to the question "Does the picture show a house" for example.
We then train this NN to respond positively to pictures of houses and
negatively to other pictures. Now this is all fine, it has been done
before... but I was wondering if it would be thinkable to REVERSE all
arrows in the trained NN so that the input becomes 1 bit (house? yes or
no) and the output becomes a 10x10 grid (in other words the NN is
DREAMING about houses). I suspect that human dreaming works basically the
same.
Would the output resemble a house? And if yes, to what extent? If it
works, could the same be achieved by using centerfolds instead of houses,
thereby creating a 'dirty mind'? (for our younger list-members)
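[[ Editor's Note: Andreas's "reversed arrows" can at least be simulated
mechanically; here is a minimal sketch of mine that pushes the answer
bit back through transposed weights. W1 and W2 below are random
placeholders standing in for a trained network, so the "dream" here is
noise. -PM ]]

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(2)
W1 = rng.standard_normal((100, 16))    # stand-in: trained input->hidden weights
W2 = rng.standard_normal((16, 1))      # stand-in: trained hidden->output weights

answer = np.array([[1.0]])             # "house? yes"
hidden = sigmoid(answer @ W2.T)        # output layer run in reverse
dream = sigmoid(hidden @ W1.T)         # input layer run in reverse
print(dream.reshape(10, 10).round(1))  # the 10x10 "dream" grid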
Bye for now
Andreas
ttgrq@info.win.tue.nl
------------------------------
Subject: Ref: 1993 SCSC Paper
From: David Kanecki <kanecki@cs.uwp.edu>
Date: Wed, 18 Aug 93 13:07:50 -0600
Dear Peter,
The complete reference for the 1993 SCSC paper is:
"A Decision Support System for Simulation and Real Time Applications as
3D Neural Chess", 1993 Summer Computer Simulation Conference Proceedings,
Published by the Society for Computer Simulation (SCS), San Diego, CA,
pages 289-294.
David H. Kanecki, A.C.S., Bio. Sci.
kanecki@cs.uwp.edu
------------------------------
Subject: Request
From: pmastin@vnet.IBM.COM
Date: Thu, 19 Aug 93 15:36:25 -0500
[[ Editor's Note: A common question, but I no longer have a common
answer. What are considered the better intro books now? -PM ]]
I am new to neural networks, but come from a background of logic
programming. I am interested in learning algorithms, and have
heard NNs are good for this purpose. Is there any lit. that
establishes this, that would be accessible to a neophyte like
myself? Thanks in advance for any answers.
Pete
------------------------------
Subject: Kohonen Software & Email Address
From: Steve Greig <steve@department-computer-studies.napier.ac.uk>
Date: Mon, 23 Aug 93 13:02:47 +0000
Dear All,
I'm looking for some source code for a Kohonen Self-Organizing Feature Map.
The language of the source code doesn't matter, just as long as it is clear
and comprehensible (comments!?). On second thoughts, assembler language would
not be much use. C/C++/Pascal/Oberon/Lisp/Scheme/Prolog/SML would be okay.
Also, does anyone know what Kohonen's email address is, or if he has one?
Thanks in advance,
Steve
- -----------------------------------------------------------------------------
Steve Greig email: steve@uk.ac.napier.dcs
Computer Studies Department
Napier University tel: +44-31-455-4285
219 Colinton Road fax: +44-31-455-7209
Edinburgh EH14 1DJ
Scotland
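[[ Editor's Note: Since this request comes up often, here is a minimal,
commented Kohonen SOM sketch of my own in Python (any of the languages
Steve lists would do as well); all parameters are illustrative. -PM ]]

import numpy as np

rng = np.random.default_rng(3)
data = rng.random((500, 2))             # training inputs in the unit square
grid = 8                                # an 8x8 map
w = rng.random((grid, grid, 2))         # one weight vector per map unit
ii, jj = np.indices((grid, grid))       # lattice coordinates of the units

for t in range(2000):
    frac = 1.0 - t / 2000.0
    lr = 0.5 * frac                     # decaying learning rate
    radius = 1.0 + 3.0 * frac           # shrinking neighbourhood
    x = data[rng.integers(len(data))]
    d = ((w - x) ** 2).sum(axis=2)      # distance of every unit to x
    bi, bj = np.unravel_index(d.argmin(), d.shape)   # best-matching unit
    # Gaussian neighbourhood around the winner, on the map lattice
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2.0 * radius ** 2))
    w += lr * h[..., None] * (x - w)    # pull units toward the input
print("corner units:", w[0, 0], w[-1, -1])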
------------------------------
Subject: Benchmarks?
From: stjaffe@vaxsar.vassar.edu (steve jaffe)
Date: 23 Aug 93 14:20:40 -0500
[[ Editor's Note: I remember Scott Fahlman (at CMU?) was informally
collecting benchmarks, but I haven't heard of those efforts for a couple
of years... -PM ]]
Does there exist a reasonably standard set of benchmark problems on which
to compare various algorithms with respect to speed, accuracy, ability to
generalize, etc.? Information sent to me via email will be summarized to
the digest.
Thanks.
Steve Jaffe
Math Dept, Vassar College, Poughkeepsie NY 12601
stjaffe@vaxsar.vassar.edu
------------------------------
Subject: seeking a Director for a Center for Neuroscience at Boston Univ.
From: Announce@retina.bu.edu (Boston University Center for Adaptive Systems)
Organization: Center for Adaptive Systems, Boston University, Boston, MA, USA
Date: 25 Aug 93 18:25:20 +0000
Boston University seeks to hire a tenured full professor
to serve as the Director of a new Center for Neuroscience
on its Charles River Campus. The director will take a
leadership role in hiring the core faculty of the center.
The director will also coordinate the development of a
graduate PhD granting program in neuroscience. Candidates
for director should have an exceptional international
reputation as an experimental neuroscientist, preferably
in behavioral neurophysiology or related areas. The
director should have a broad scholarly perspective in
neuroscience and demonstrated leadership skills with which
to build a world-class center. The director would also
coordinate the process of linking the center to the multiple
neuroscience-related resources that are already part of
Boston University. Candidates should send a complete
curriculum vitae and three letters of recommendation to
Neuroscience Search Committee, Department of Cognitive and
Neural Systems, Room 240, 111 Cummington Street, Boston
University, Boston, MA 02215. Boston University is an
Equal Opportunity/Affirmative Action employer.
------------------------------
Subject: NevProp 1.16 Update Available
From: Phil Goodman <goodman@unr.edu>
Date: Thu, 26 Aug 93 16:35:53 +0000
Please consider the following update announcement:
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
NevProp 1.16 corrects a bug in the output range of symmetric sigmoids and
one occurring when the number of testing cases is fewer than the number
of training cases.
These fixes are further described in the README.CHANGES file at the
UNR anonymous ftp, described below.
The UNR anonymous ftp host is 'unssun.scs.unr.edu', and the files are
in the directory 'pub/goodman/nevpropdir'.
Version 1.15 users can update 4 ways:
a. Just re-ftp the 'nevprop1.16.shar' file and unpack and 'make' np again.
(also available at the CMU machine, described below.)
b. Just re-ftp (in "binary" mode) the DOS or MAC executable binaries
located in the 'dosdir' or 'macdir' subdirectories, respectively.
c. Ftp only the 'np.c' file provided, replacing your old version, then 'make'
d. Ftp only the 'np-patchfile', then issue the command
'patch < np-patchfile' to locally update np.c, then 'make' again.
New users can obtain NevProp 1.16 from the UNR anonymous ftp
as described in (a) or (b) above, or from the CMU machine:
a. Create an FTP connection from wherever you are to machine
"ftp.cs.cmu.edu". The internet address of this machine is
128.2.206.173, for those who need it.
b. Log in as user "anonymous" with your own ID as password.
You may see an error message that says "filenames may not
have /.. in them" or something like that. Just ignore it.
c. Change remote directory to "/afs/cs/project/connect/code".
NOTE: You must do this in a single operation. Some of the
super directories on this path are protected against outside
users.
d. At this point FTP should be able to get a listing of files
in this directory with "dir" & fetch the ones you want with "get".
(The exact FTP commands depend on your local FTP server.)
Version 1.2 will be released soon. A major new feature will be the option
of using a cross-entropy rather than a least-squares error function.
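[[ Editor's Note: For readers unfamiliar with the terms, a minimal
sketch of the two error functions (mine, using the generic definitions
rather than NevProp's internals). -PM ]]

import numpy as np

def least_squares(target, output):
    return 0.5 * np.sum((target - output) ** 2)

def cross_entropy(target, output):
    # for 0/1 targets and outputs strictly between 0 and 1
    return -np.sum(target * np.log(output) +
                   (1.0 - target) * np.log(1.0 - output))

t = np.array([1.0, 0.0])
o = np.array([0.7, 0.2])
print(least_squares(t, o), cross_entropy(t, o))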
Phil
___________________________
___________________________ Phil Goodman,MD,MS goodman@unr.edu
| __\ | _ \ | \/ || _ \ Associate Professor & CBMR Director
|| ||_// ||\ /||||_// Cardiovascular Studies Team Leader
|| | _( || \/ ||| _(
||__ ||_\\ || |||| \\ CENTER for BIOMEDICAL MODELING RESEARCH
|___/ |___/ || |||| \\ University of Nevada School of Medicine
Washoe Medical Center H1-166, 77 Pringle Way,
Reno, NV 89520 702-328-4867 FAX:328-4111
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
------------------------------
Subject: Neural hardware performance criteria
From: Heini Withagen <heiniw@sun1.eeb.ele.tue.nl>
Date: Fri, 27 Aug 93 14:37:13 +0100
Currently, several neural network chips are available, both commercially
and in laboratories. Choosing which of those chips best suits your
application can be difficult. At the moment, several criteria are
used to describe the performance of a chip, like Connections Per Second,
Connection Updates Per Second, etc. However, these criteria are very
rough, and they do not allow chips to be compared very well.
At our university, we have done some research to come up with better
criteria which take into account the (neural) architecture of the chip,
the speed, the sensitivity to non-idealities (like non-linear multipliers
in the case of an analog chip), etc.
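[[ Editor's Note: For concreteness, the rough criteria under discussion
reduce to simple products; a minimal sketch of mine, with made-up
numbers. Note how little the results say about precision, architecture
fit, or analog non-idealities. -PM ]]

connections = 64 * 64                   # e.g. a 64x64 synaptic array
recall_passes_per_sec = 500_000         # full forward passes per second
learn_passes_per_sec = 80_000           # full training passes per second

cps = connections * recall_passes_per_sec    # Connections Per Second
cups = connections * learn_passes_per_sec    # Connection Updates Per Second
print(f"CPS  = {cps:.2e}")
print(f"CUPS = {cups:.2e}")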
With this posting, I am hoping to evoke some reactions to see if there
are people who are interested in this subject. Especially, reactions
from the commercial side would be welcome (Intel, Adaptive Solutions,
Hitachi, AT&T, etc.).
Greetings,
- --
Heini Withagen
Dep. of Elec. Engineering EH 9.29
Eindhoven University of Technology
P.O. Box 513 Phone: 31-40472366
5600 MB Eindhoven Fax: 31-40455674
The Netherlands E-mail: heiniw@eeb.ele.tue.nl
========================================================================
------------------------------
Subject: Commercial Neural Network Software
From: manallack_d%frgen.dnet@smithkline.com (NAME "David Manallack")
Date: Fri, 27 Aug 93 09:39:54 -0500
Dear Neuron Digest Readers
We are currently writing a chapter on Neural Networks in a book
titled 'Methods and Principles in Medicinal Chemistry'. The book
is targeted at medicinal chemists interested in modern methods of
molecular design. As you may be aware, networks have found various
uses in chemistry (e.g. Quantitative Structure-Activity Relationships
(QSAR)), typically using back propagation algorithms.
The editors of the book have asked us to include an appendix
listing commercially available neural network software suitable
for use by medicinal chemists.
We would therefore like to request any interested readers to send
us the name, address and cost of any suitable software. A brief
description would also be appreciated.
David Manallack email: manallack_d%frgen.dnet@smithkline.com
David Livingstone email: livingstondj@smithkline.com
------------------------------
Subject: cybernetics
From: Harry Jerison <IJC1HJJ@MVS.OAC.UCLA.EDU>
Date: Fri, 27 Aug 93 17:29:00 -0800
Dear friends;
People smart enough to read & write for this forum ought to know things too.
Queries and some replies about cybernetics were inexcusably but correctably
ignorant. Correction can begin with the review (2 columns) in SCIENCE
257:1146 (1992) of Heims, S. J. THE CYBERNETICS GROUP (MIT Press, 1991). Just
the review; the book is gravy. One of the Max Planck Institutes in Germany is
on "Biological Cybernetics," and there is a scientific journal with that name
in its title. There is a lot more, some of which is in the "What's in a name"
category, people doing cybernetics but not knowing it, or calling what they
were doing cybernetics, only because it sounded good 40 years ago (the Soviet
ploy to escape being tarnished by "psychology" when there was a USSR). I
don't have the exchange in this Digest in front of me, but remember my
astonishment at the naivete displayed on cybernetics "vs" AI. Some smart
people may have forgotten how to read anything but a computer monitor and only
if digital machines feed the crt. The rest of the cure (correction) might
involve exposure to libraries and words in print.
Harry Jerison (ijc1hjj@mvs.oac.ucla.edu - Psychiatry, UCLA)
------------------------------
Subject: Help for signature verification
From: B.NIEVOLA%CEFET.ANPR.BR@UICVM.UIC.EDU
Date: Tue, 31 Aug 93 10:54:00 -0300
Dear Sir,
I'm currently working with neural networks and other AI
applications. One such is a system for signature verification.
I have one student that is working on another project
and I'd like to obtain information for him. His message is:
"I would like to have some informations (references) about the
applicability of neural networks in the enhancement, limiariza_
tion and segmentation of poor quality images, more specific, in
fingerprint images. Also, information about neural networks in
fingerprint identification would be of great help.
Best regards,
Marcos Lopes"
Can you help him? He has no e-mail, but you can use my
address to communicate with him. If someone on the list could
give some information, I would appreciate it. Thank you,
Prof. Julio Cesar Nievola
CEFET-PR
EMAIL: b.nievola@cefet.anpr.br
------------------------------
Subject: neural network society membership
From: rp@rdm.ch (Paulo Rios)
Date: Tue, 31 Aug 93 18:58:06 +0000
Hi!
I would like to join one or two main neural network societies.
Does anyone out there know their e-mail or regular address? Any
information would be welcome.
Thank you.
Paul
==================================================
Paul Rios
KMS Development Lab
Switzerland
E-mail: rp@rdm.ch
==================================================
------------------------------
Subject: neural network research centers
From: rp@rdm.ch (Paulo Rios)
Date: Tue, 31 Aug 93 19:14:08 +0000
Hi!
I am studying the use of neural network techniques in our software
product, a communication network (including cabling) management
system. I am also interested in doing research in some specific
areas of concern to us in the company. Information on research done
elsewhere might prove to be very useful.
Therefore, I would be interested in learning about the major research centers
worldwide (especially in North America, Europe and Australia),
university and industry, in neural network research.
Does anyone out there know of an e-mailing server, paper or book with
a list of these centers plus a brief description of their main areas of
research? What about research in the area I mentioned above?
Any information would be welcome.
Thank you.
Paul
==================================================
Paul Rios
KMS Development Lab
Switzerland
E-mail: rp@rdm.ch
==================================================
------------------------------
End of Neuron Digest [Volume 11 Issue 50]
*****************************************
Received: from BUACCA by BUACCA.BU.EDU (Mailer R2.08 PTF009) with BSMTP id
3391; Wed, 08 Sep 93 19:08:50 EDT
Received: from noc4.dccs.upenn.edu by BUACCA.BU.EDU (IBM VM SMTP R1.2.1) with
TCP; Wed, 08 Sep 93 19:08:42 EDT
Received: from CATTELL.PSYCH.UPENN.EDU by noc4.dccs.upenn.edu
id AA16555; Wed, 8 Sep 93 19:00:29 -0400
Return-Path: <marvit@cattell.psych.upenn.edu>
Received: from LOCALHOST by cattell.psych.upenn.edu
id AA22030; Wed, 8 Sep 93 18:05:03 EDT
Posted-Date: Wed, 08 Sep 93 18:04:19 -0400
From: "Neuron-Digest Moderator" <neuron-request@CATTELL.PSYCH.UPENN.EDU>
To: Neuron-Distribution:;
Subject: Neuron Digest V12 #1 (misc queries, jobs, software)
Reply-To: "Neuron-Request" <neuron-request@CATTELL.PSYCH.UPENN.EDU>
X-Errors-To: "Neuron-Request" <neuron-request@psych.upenn.edu>
Organization: University of Pennsylvania
Date: Wed, 08 Sep 93 18:04:19 -0400
Message-Id: <21995.747525859@cattell.psych.upenn.edu>
Sender: marvit@cattell.psych.upenn.edu
Neuron Digest Wednesday, 8 Sep 1993
Volume 12 : Issue 1
Today's Topics:
Administrivia, New Volume
Kohonen maps & LVQ -- huge bibliography (and reference request)
Please post this announcement of a new book
PC Based NN Software
Using artif. neural nets in QSAR's
nonlinear controllers
SNNS-info
Re: Inquiries
Cognitive scientist position at Cornell
AM6 Users: release notes and bug fixes available
Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@psych.upenn.edu". The ftp archives are
available from psych.upenn.edu (130.91.68.31). Back issues requested by
mail will eventually be sent, but may take a while.
----------------------------------------------------------------------
Subject: Administrivia, New Volume
From: "Neuron-Digest Moderator, Peter Marvit" <neuron@psych.upenn.edu>
Date: Wed, 08 Sep 93 18:00:11 -0500
Dear readers,
As you will notice, and as has been traditional at the start of the
academic year, we begin a new volume with this issue -- V12. The
readership has continued to grow and the master list has nearly 1800
addresses from all over the world and all types of people. Many of the
address are local redistribution aliases and one is gatewayed to the
USENET news group comp.ai.neural-networks, so the actual readership of
the Digest is quite large.
I want to thank you all, once again, for your continued support and
suggestions. Without your efforts and contributions, the Digest would
not work.
This fall, I hope to put the back issues both on a gopher server and a
WAIS server. In addition, I hope to get a mail server up and running so
people without Internet access can get at the archives. I will make an
announcement when/if these happen. I certainly know we need a table of
contents for the archives and that will be a first priority.
The diversity of topics discussed in this forum reflects the diversity of
our readers. I hope it stays that way...
-Peter
: Peter Marvit, Neuron Digest Moderator <neuron-request@psych.upenn.edu> :
: Courtesy of the Psychology Department, University of Pennsylvania :
: 3815 Walnut St., Philadelphia, PA 19104 w:215/898-6274 h:215/387-6433 :
------------------------------
Subject: Kohonen maps & LVQ -- huge bibliography (and reference request)
From: Bibliography <biblio@nucleus.hut.fi>
Date: Tue, 31 Aug 93 13:08:00 +0700
Hello,
We are in the process of compiling the complete bibliography
of works on Kohonen Self-Organizing Map and Learning Vector
Quantization all over the world. Currently the bibliography
contains more than 1000 entries. The bibliography is now
available (in BibTeX and PostScript formats) by anonymous FTP from:
cochlea.hut.fi:/pub/ref/references.bib.Z ( BibTeX file)
cochlea.hut.fi:/pub/ref/references.ps.Z ( PostScript file)
The above files are compressed. Please make sure you use "binary" mode
when you transfer these files.
Please send any additions and corrections to :
biblio@cochlea.hut.fi
Please follow the IEEE reference format (full names of
authors, title of article, journal name, volume and number where applicable,
first and last page numbers, year, etc.) and BibTeX format, if possible.
Yours,
Jari Kangas
Helsinki University of Technology
Laboratory of Computer and Information Science
Rakentajanaukio 2 C
SF-02150 Espoo,
FINLAND
------------------------------
Subject: Please post this announcement of a new book
From: valmir@vnet.IBM.COM
Date: Wed, 01 Sep 93 08:58:36 -0300
MASSIVELY PARALLEL MODELS OF COMPUTATION
Distributed Parallel Processing in Artificial Intelligence
and Optimization
Valmir C. Barbosa
Ellis Horwood Series in Artificial Intelligence
Ellis Horwood/Simon & Schuster, 1993
telephone: +44-442-881900
fax: +44-442-882099
ISBN 0-13-562968-3, approximate price: US$ 62.95
ABSTRACT
This book covers the simulation by distributed parallel computers of
massively parallel models of interest in artificial intelligence and
optimization, bringing together two major areas of current interest
within computer science --- distributed parallel processing and
massively parallel models in artificial intelligence and optimization.
Throughout ten chapters, a series of important massively parallel
models of computation are surveyed, including cellular automata,
Hopfield neural networks, Markov random fields, Bayesian networks,
and other more specialized neural networks with important applications
to the solution of mathematical problems. Emphasis is placed on the
dynamic behavior of these models, and on how some fundamental
techniques of distributed parallel program design can be employed
for their simulation by parallel computers. In addition, the main
application areas of each model are also discussed, as well as how
the models interrelate to one another.
The book is intended to be multidisciplinary in character,
and will appeal to professionals and students in a variety of fields,
such as computer science, electrical engineering, and cognitive science.
CONTENTS
Preface
PART 1. Introduction and Background
Chapter 1. Introduction
Chapter 2. Background
PART 2. Fundamentals of Distributed Parallel Computation
Chapter 3. Models of Distributed Parallel Computation
Chapter 4. Timing and Synchronization
PART 3. Fully Concurrent Automaton Networks
Chapter 5. Cellular Automata
Chapter 6. Analog Hopfield Neural Networks
Chapter 7. Other Analog Neural Networks
PART 4. Partially Concurrent Automaton Networks
Chapter 8. Binary Hopfield Neural Networks
Chapter 9. Markov Random Fields
Chapter 10. Bayesian Networks
PART 5. Appendices
Appendix A. A Distributed Parallel Programming Primer
Appendix B. The Software for Automaton Network Simulation
Appendix C. Long Proofs
Appendix D. Additional Edge-Reversal Properties
Bibliography
Author Index
Subject Index
------------------------------
Subject: PC Based NN Software
From: bcrowder@mail.lmi.org
Date: Wed, 01 Sep 93 10:20:27 -0500
I have just started my quest to understand NN. I am looking at
several products such as Ward's NeuroShell. I would prefer to work in
a Visual Basic 3.0 framework to facilitate links to data sources. Are
there any products that folks have found that they like in that
environment? Any comments on other environments on the PC would be
appreciated. I want to work with stock data and decision criteria for
a buy/sell point.
Thanks,
Bill Crowder //bcrowder@lmi.org//
------------------------------
Subject: Using artif. neural nets in QSAR's
From: Bernard Budde <BUDDE@SC.AGRO.NL>
Date: Wed, 01 Sep 93 17:07:00 +0000
Neuron Digesters,
In Neuron Digest 11(50), 31 Aug 1993, David Manallack wrote:
(part of the original message)
> Subject: Commercial Neural Network Software
> From: manallack_d%frgen.dnet@smithkline.com (NAME "David Manallack")
> Date: Fri, 27 Aug 93 09:39:54 -0500
>
>
> As you may be aware, networks have found various
> uses in chemistry (e.g Quantitative Structure-Activity Relationships
> (QSAR)), typically using back propagation algorithms.
>
> David Manallack email: manallack_d%frgen.dnet@smithkline.com
> David Livingstone email: livingstondj@smithkline.com
I am working in the QSAR field and I spend a small amount of my time on neural
nets. For this purpose I use a home-brew back-prop net. So far I have seen only
very few articles on the subject, and *none* in which I find the conclusions
justified by the data.
Therefore, I am very interested in all information that deals with the use of
neural-nets in QSAR studies (titles, FTP-sites, code, data-sets... anything).
Please send your response to: budde@sc.agro.nl. I'll send a summary to Neuron
Digest in October.
From below sea level, Bernard Budde budde@sc.agro.nl
------------------------------
Subject: nonlinear controllers
From: garcia@ece.nps.navy.mil (Randall Garcia 12-94)
Date: Wed, 01 Sep 93 14:52:55 -0800
I'm looking for information on using nnets for controlling a nonlinear
plant transfer function.
The problem, in particular, is a variant of the classic "broomstick"
problem in which control is applied to ensure a pendulum-type setup
remains vertical when placed on a moving cart.
My application is that of a pitch plane controller for a tail driven
missile. I would like to see if anyone has modelled this system and
controlled it using a nnet with backpropagation.
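[[ Editor's Note: For readers who haven't met this benchmark, below is a
minimal C sketch of the cart-pole ("broomstick") dynamics, using the
standard equations of motion and the conventional benchmark parameter
values (cf. Barto, Sutton & Anderson, 1983), with a crude hand-written
bang-bang rule standing in where a trained net would go. It is an
illustration only, not the missile application asked about. -PM ]]
  #include <stdio.h>
  #include <math.h>        /* link with -lm */
  int main(void)
  {
      const double g = 9.8, M = 1.0, m = 0.1, l = 0.5; /* gravity, cart mass, pole mass, half pole length */
      const double dt = 0.02;                          /* Euler integration step (s) */
      double th = 0.05, thdot = 0.0;                   /* pole angle (rad) and angular velocity */
      double pos = 0.0, vel = 0.0;                     /* cart position (m) and velocity */
      int step;
      for (step = 0; step <= 100; step++) {
          /* crude bang-bang rule; a trained net would go here */
          double F = (th + 0.1 * thdot > 0.0) ? 10.0 : -10.0;
          double c = cos(th), s = sin(th);
          double tmp   = (F + m * l * thdot * thdot * s) / (M + m);
          double thacc = (g * s - c * tmp) / (l * (4.0/3.0 - m * c * c / (M + m)));
          double xacc  = tmp - m * l * thacc * c / (M + m);
          if (step % 20 == 0)
              printf("t=%.2f  theta=%+.4f rad  x=%+.3f m\n", step * dt, th, pos);
          th  += dt * thdot;  thdot += dt * thacc;     /* Euler integration */
          pos += dt * vel;    vel   += dt * xacc;
      }
      return 0;
  }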
Please send info or examples to:
R. E. Garcia
NPS code EC
Monterey, California 93943
or email
garcia@ece.nps.navy.mil
TIA: REG
------------------------------
Subject: SNNS-info
From: mph295@bio-medical-physics.aberdeen.ac.uk ("j.carson")
Date: Thu, 02 Sep 93 12:06:45 +0000
Hello, my name is James Carson. I am doing a PhD in the medical physics
department at Aberdeen University. My subject is neural network
techniques applied to medical image processing/analysis. I have just
obtained a copy of SNNS version 3.0 via ftp and am in the process of
learning how to use it. I would greatly appreciate any feedback from
others who may have used it already, especially the ART1 and ART2
features. I would also like to hear from anyone who has attempted to use
neural networks to enhance images. I seem to find that most of the
techniques work well with images containing lots of nicely defined edges
but not with medical images such as mammograms. Has anyone else had much
success with images other than black tables on white backgrounds?
Yours sincerely,
James
mph295@uk.ac.abdn.biomed
------------------------------
Subject: Re: Inquiries
From: David Kanecki <kanecki@cs.uwp.edu>
Date: Fri, 03 Sep 93 14:07:14 -0600
Dear Peter,
On the 2d Neural Chess, 3d Neural Chess, and High Volume Data Analysis
Process Controllers, please contact me directly at kanecki@cs.uwp.edu.
On the 1992 and 1993 published papers on 2d and 3d Neural Chess respectively,
please contact the Society for Computer Simulation at (619)-277-3888.
Also, I have been appointed head of the Emergency Planning Committee for
the Society for Computer Simulation. If individuals would like to submit
papers on Emergency Planning for the April Multiconference, please send them
to:
SCS
1994 Simulation Multiconference/ Emergency Planning
P.O. Box 17900
San Diego, CA 92177
Thank you for your assistance.
Sincerely,
David H. Kanecki, A.C.S., Bio. Sci.
kanecki@cs.uwp.edu
P.O. Box 26944
Wauwatosa, WI 53226-0944
------------------------------
Subject: Cognitive scientist position at Cornell
From: tvs1@Cornell.edu (Tom Smulders)
Organization: Cornell University
Date: 08 Sep 93 18:06:43 +0000
COGNITIVE PSYCHOLOGIST, CORNELL UNIVERSITY
The Department of Psychology at Cornell University is considering
candidates for a tenure-track assistant professorship in any area of
cognition. Areas of specialization include but are not limited to:
memory, attention, language and speech processing, concepts, knowledge
representation, reasoning, problem solving, decision making, mathematical
psychology, motor control and action. The position will begin in August,
1994. Review of applications will begin December 1, 1993. Cornell
University is an Equal Opportunity/Affirmative Action Employer.
Interested applicants should submit a curriculum vitae, reprints or
preprints of completed research, and letters of recommendation sent
directly from three referees to:
Secretary, Cognitive Psychology Search Committee
Department of Psychology, Uris Hall, Cornell University
Ithaca, NY 14853-7601, USA.
email: kas10@cornell.edu
FAX: 607-255-8433 Voice: 607-255-6364
------------------------------
Subject: AM6 Users: release notes and bug fixes available
From: Russell R Leighton <taylor@world.std.com>
Date: Sun, 29 Aug 93 22:21:27 -0500
There has been an update to the am6.notes file at the AM6
ftp sites. Users not on the AM6 users mailing list
should get this file and update their installation.
Russ
======== REPOST OF AM6 RELEASE (long) ========
The following describes a neural network simulation environment
made available free from the MITRE Corporation. The software
contains a neural network simulation code generator which generates
high performance ANSI C code implementations for modular backpropagation
neural networks. Also included is an interface to visualization tools.
FREE NEURAL NETWORK SIMULATOR
AVAILABLE
Aspirin/MIGRAINES
Version 6.0
The Mitre Corporation is making available free to the public a
neural network simulation environment called Aspirin/MIGRAINES.
The software consists of a code generator that builds neural network
simulations by reading a network description (written in a language
called "Aspirin") and generates an ANSI C simulation. An interface
(called "MIGRAINES") is provided to export data from the neural
network to visualization tools. The previous version (Version 5.0)
has over 600 registered installation sites world wide.
The system has been ported to a number of platforms:
Host platforms:
convex_c2 /* Convex C2 */
convex_c3 /* Convex C3 */
cray_xmp /* Cray XMP */
cray_ymp /* Cray YMP */
cray_c90 /* Cray C90 */
dga_88k /* Data General Aviion w/88XXX */
ds_r3k /* Dec Station w/r3000 */
ds_alpha /* Dec Station w/alpha */
hp_parisc /* HP w/parisc */
pc_iX86_sysvr4 /* IBM pc 386/486 Unix SysVR4 */
pc_iX86_sysvr3 /* IBM pc 386/486 Interactive Unix SysVR3 */
ibm_rs6k /* IBM w/rs6000 */
news_68k /* News w/68XXX */
news_r3k /* News w/r3000 */
next_68k /* NeXT w/68XXX */
sgi_r3k /* Silicon Graphics w/r3000 */
sgi_r4k /* Silicon Graphics w/r4000 */
sun_sparc /* Sun w/sparc */
sun_68k /* Sun w/68XXX */
Coprocessors:
mc_i860 /* Mercury w/i860 */
meiko_i860 /* Meiko w/i860 Computing Surface */
Included with the software are "config" files for these platforms.
Porting to other platforms may be done by choosing the "closest"
platform currently supported and adapting the config files.
New Features
- ------------
- ANSI C ( ANSI C compiler required! If you do not
have an ANSI C compiler, a free (and very good)
compiler called gcc is available by anonymous ftp
from prep.ai.mit.edu (18.71.0.38). )
Gcc is what was used to develop am6 on Suns.
- Autoregressive backprop has better stability
constraints (see examples: ringing and sequence),
very good for sequence recognition
- File reader supports "caching" so you can
use HUGE data files (larger than physical/virtual
memory).
- The "analyze" utility which aids the analysis
of hidden unit behavior (see examples: sonar and
characters)
- More examples
- More portable system configuration
for easy installation on systems
without a "config" file in distribution
Aspirin 6.0
- ------------
The software that we are releasing now is for creating
and evaluating feed-forward networks such as those used with the
backpropagation learning algorithm. The software is aimed both at
the expert programmer/neural network researcher who may wish to tailor
significant portions of the system to his/her precise needs, and
at casual users who wish to use the system with an absolute
minimum of effort.
Aspirin was originally conceived as ``a way of dealing with MIGRAINES.''
Our goal was to create an underlying system that would exist behind
the graphics and provide the network modeling facilities.
The system had to be flexible enough to allow research, that is,
make it easy for a user to make frequent, possibly substantial, changes
to network designs and learning algorithms. At the same time it had to
be efficient enough to allow large ``real-world'' neural network systems
to be developed.
Aspirin uses a front-end parser and code generators to realize this goal.
A high level declarative language has been developed to describe a network.
This language was designed to make commonly used network constructs simple
to describe, but to allow any network to be described. The Aspirin file
defines the type of network, the size and topology of the network, and
descriptions of the network's input and output. This file may also include
information such as initial values of weights and names of user-defined
functions.
The Aspirin language is based around the concept of a "black box".
A black box is a module that (optionally) receives input and
(necessarily) produces output. Black boxes are autonomous units
that are used to construct neural network systems. Black boxes
may be connected arbitrarily to create large, possibly heterogeneous
network systems. As a simple example, pre- or post-processing stages
of a neural network can be considered black boxes that do not learn.
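[[ Editor's Note: A hypothetical plain-C illustration of the "black box"
idea -- autonomous input/output modules chained output-to-input -- may
help here; note this is not Aspirin syntax or generator output, just the
concept. -PM ]]
  #include <stdio.h>
  /* A black box (optionally) receives input and (necessarily)
     produces output; boxes are chained output-to-input. */
  typedef struct {
      int n_in, n_out;
      void (*compute)(const double *in, double *out);
  } BlackBox;
  /* a non-learning pre-processing stage */
  static void normalize(const double *in, double *out)
  { int i; double s = in[0] + in[1] + in[2];
    for (i = 0; i < 3; i++) out[i] = (s != 0.0) ? in[i] / s : 0.0; }
  /* stand-in for a trained network */
  static void sum(const double *in, double *out)
  { out[0] = in[0] + in[1] + in[2]; }
  int main(void)
  {
      BlackBox pre = { 3, 3, normalize }, net = { 3, 1, sum };
      double a[3] = { 1.0, 2.0, 3.0 }, b[3], y[1];
      pre.compute(a, b);   /* the output of one box ...      */
      net.compute(b, y);   /* ... is the input of the next   */
      printf("output = %f\n", y[0]);
      return 0;
  }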
The output of the Aspirin parser is sent to the appropriate code
generator that implements the desired neural network paradigm.
The goal of Aspirin is to provide a common extendible front-end language
and parser for different network paradigms. The publicly available software
will include a backpropagation code generator that supports several
variations of the backpropagation learning algorithm. For backpropagation
networks and their variations, Aspirin supports a wide variety of
capabilities:
1. feed-forward layered networks with arbitrary connections
2. ``skip level'' connections
3. one and two-dimensional weight tessellations
4. a few node transfer functions (as well as user defined)
5. connections to layers/inputs at arbitrary delays,
also "Waibel style" time-delay neural networks
6. autoregressive nodes.
7. line search and conjugate gradient optimization
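[[ Editor's Note: As a concrete illustration of capability 1, here is a
minimal hand-written C sketch of plain backpropagation on a 2-2-1
sigmoid network learning XOR. It is a toy, not output of the Aspirin
code generator; the initial weights and learning rate are arbitrary. -PM ]]
  #include <stdio.h>
  #include <math.h>        /* link with -lm */
  static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }
  int main(void)
  {
      const double x[4][2] = {{0,0},{0,1},{1,0},{1,1}};   /* XOR inputs  */
      const double t[4]    = {0,1,1,0};                   /* XOR targets */
      double w1[2][3] = {{0.5,-0.4,0.1},{-0.3,0.6,-0.2}}; /* hidden: 2 units, 2 inputs + bias */
      double w2[3]    = {0.2,-0.5,0.3};                   /* output unit: 2 hidden + bias */
      const double eta = 0.5;                             /* learning rate */
      int e, p, i, j;
      for (e = 0; e < 20000; e++)
          for (p = 0; p < 4; p++) {
              double h[2], y, dy, dh[2];
              for (j = 0; j < 2; j++)                     /* forward pass */
                  h[j] = sigmoid(w1[j][0]*x[p][0] + w1[j][1]*x[p][1] + w1[j][2]);
              y = sigmoid(w2[0]*h[0] + w2[1]*h[1] + w2[2]);
              dy = (t[p] - y) * y * (1.0 - y);            /* output delta */
              for (j = 0; j < 2; j++)                     /* hidden deltas */
                  dh[j] = dy * w2[j] * h[j] * (1.0 - h[j]);
              for (j = 0; j < 2; j++) w2[j] += eta * dy * h[j];
              w2[2] += eta * dy;                          /* bias update */
              for (j = 0; j < 2; j++) {
                  for (i = 0; i < 2; i++) w1[j][i] += eta * dh[j] * x[p][i];
                  w1[j][2] += eta * dh[j];
              }
          }
      for (p = 0; p < 4; p++) {
          double h0 = sigmoid(w1[0][0]*x[p][0] + w1[0][1]*x[p][1] + w1[0][2]);
          double h1 = sigmoid(w1[1][0]*x[p][0] + w1[1][1]*x[p][1] + w1[1][2]);
          printf("%.0f xor %.0f -> %.3f\n", x[p][0], x[p][1],
                 sigmoid(w2[0]*h0 + w2[1]*h1 + w2[2]));
      }
      return 0;
  }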
The file describing a network is processed by the Aspirin parser and
files containing C functions to implement that network are generated.
This code can then be linked with an application which uses these
routines to control the network. Optionally, a complete simulation
may be automatically generated which is integrated with the MIGRAINES
interface and can read data in a variety of file formats. Currently
supported file formats are:
Ascii
Type1, Type2, Type3, Type4, Type5 (simple floating point file formats)
ProMatlab
Examples
- --------
A set of examples comes with the distribution:
xor: from Rumelhart and McClelland, et al.,
"Parallel Distributed Processing, Vol 1: Foundations",
MIT Press, 1986, pp. 330-334.
encode: from Rumelhart and McClelland, et al.,
"Parallel Distributed Processing, Vol 1: Foundations",
MIT Press, 1986, pp. 335-339.
bayes: Approximating the optimal bayes decision surface
for a gauss-gauss problem.
detect: Detecting a sine wave in noise.
iris: The classic iris database.
characters: Learning to recognize 4 characters independent
of rotation.
ring: Autoregressive network learns a decaying sinusoid
impulse response.
sequence: Autoregressive network learns to recognize
a short sequence of orthonormal vectors.
sonar: from Gorman, R. P., and Sejnowski, T. J. (1988).
"Analysis of Hidden Units in a Layered Network Trained to
Classify Sonar Targets" in Neural Networks, Vol. 1, pp. 75-89.
spiral: from Kevin J. Lang and Michael J. Witbrock, "Learning
to Tell Two Spirals Apart", in Proceedings of the 1988 Connectionist
Models Summer School, Morgan Kaufmann, 1988.
ntalk: from Sejnowski, T.J., and Rosenberg, C.R. (1987).
"Parallel networks that learn to pronounce English text" in
Complex Systems, 1, 145-168.
perf: a large network used only for performance testing.
monk: The backprop part of the MONK's paper. The MONK's problems were
the basis of a first international comparison
of learning algorithms. The result of this comparison is summarized in
"The MONK's Problems - A Performance Comparison of Different Learning
Algorithms" by S.B. Thrun, J. Bala, E. Bloedorn, I. Bratko, B.
Cestnik, J. Cheng, K. De Jong, S. Dzeroski, S.E. Fahlman, D. Fisher,
R. Hamann, K. Kaufman, S. Keller, I. Kononenko, J. Kreuziger, R.S.
Michalski, T. Mitchell, P. Pachowicz, Y. Reich, H. Vafaie, W. Van de
Velde, W. Wenzel, J. Wnek, and J. Zhang, published as
Technical Report CMU-CS-91-197, Carnegie Mellon University, Dec.
1991.
wine: From the ``UCI Repository Of Machine Learning Databases
and Domain Theories'' (ics.uci.edu: pub/machine-learning-databases).
Performance of Aspirin simulations
- ----------------------------------
The backpropagation code generator produces simulations
that run very efficiently. Aspirin simulations do
best on vector machines when the networks are large,
as exemplified by the Cray's performance. All simulations
were done using the Unix "time" function and include all
simulation overhead. The connections per second rating was
calculated by multiplying the number of iterations by the
total number of connections in the network and dividing by the
"user" time provided by the Unix time function. Two tests were
performed. In the first, the network was simply run "forward"
100,000 times and timed. In the second, the network was timed
in learning mode and run until convergence. Under both tests
the "user" time included the time to read in the data and initialize
the network.
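[[ Editor's Note: A worked example of this rating, using the sonar
network below (60 inputs, 34 hidden units, 2 outputs; fully connected,
so 60*34 + 34*2 = 2108 connections, ignoring any bias terms) and a
hypothetical "user" time chosen so the program prints 1.00, matching
the SparcStation 1 forward figure of about 1 below. -PM ]]
  #include <stdio.h>
  int main(void)
  {
      long connections = 60L*34 + 34L*2;  /* 2108 for the sonar net      */
      long iterations  = 100000L;         /* forward passes timed        */
      double user_time = 210.8;           /* seconds; made-up for the
                                             sake of the example         */
      printf("%.2f million connections per second\n",
             iterations * (double)connections / user_time / 1e6);
      return 0;
  }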
Sonar:
This network is a two layer fully connected network
with 60 inputs: 2-34-60.
Millions of Connections per Second
Forward:
SparcStation1: 1
IBM RS/6000 320: 2.8
HP9000/720: 4.0
Meiko i860 (40MHz) : 4.4
Mercury i860 (40MHz) : 5.6
Cray YMP: 21.9
Cray C90: 33.2
Forward/Backward:
SparcStation1: 0.3
IBM RS/6000 320: 0.8
Meiko i860 (40MHz) : 0.9
HP9000/720: 1.1
Mercury i860 (40MHz) : 1.3
Cray YMP: 7.6
Cray C90: 13.5
Gorman, R. P., and Sejnowski, T. J. (1988). "Analysis of Hidden Units
in a Layered Network Trained to Classify Sonar Targets" in Neural Networks,
Vol. 1, pp. 75-89.
Nettalk:
This network is a two layer fully connected network
with [29 x 7] inputs: 26-[15 x 8]-[29 x 7]
Millions of Connections per Second
Forward:
SparcStation1: 1
IBM RS/6000 320: 3.5
HP9000/720: 4.5
Mercury i860 (40MHz) : 12.4
Meiko i860 (40MHz) : 12.6
Cray YMP: 113.5
Cray C90: 220.3
Forward/Backward:
SparcStation1: 0.4
IBM RS/6000 320: 1.3
HP9000/720: 1.7
Meiko i860 (40MHz) : 2.5
Mercury i860 (40MHz) : 3.7
Cray YMP: 40
Cray C90: 65.6
Sejnowski, T.J., and Rosenberg, C.R. (1987). "Parallel networks that
learn to pronounce English text" in Complex Systems, 1, 145-168.
Perf:
This network was only run on a few systems. It is very large
with very long vectors. The performance on this network
is in some sense a peak performance for a machine.
This network is a two layer fully connected network
with 2000 inputs: 100-500-2000
Millions of Connections per Second
Forward:
Cray YMP 103.00
Cray C90 220
Forward/Backward:
Cray YMP 25.46
Cray C90 59.3
MIGRAINES
- ------------
The MIGRAINES interface is a terminal based interface
that allows you to open Unix pipes to data in the neural
network. This replaces the NeWS1.1 graphical interface
in version 4.0 of the Aspirin/MIGRAINES software. The
new interface is not as simple to use as the version 4.0
interface but is much more portable and flexible.
The MIGRAINES interface allows users to output
neural network weight and node vectors to disk or to
other Unix processes. Users can display the data using
either public or commercial graphics/analysis tools.
Example filters are included that convert data exported through
MIGRAINES to formats readable by:
- Gnuplot 3
- Matlab
- Mathematica
- Xgobi
Most of the examples (see above) use the MIGRAINES
interface to dump data to disk and display it using
a public software package called Gnuplot3.
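[[ Editor's Note: For the curious, a generic C illustration of the
Unix-pipe idea -- a process feeding data to gnuplot through popen().
This shows the mechanism only; it is not the actual MIGRAINES export
code, and popen() is Unix, not strict ANSI C. -PM ]]
  #include <stdio.h>
  #include <math.h>        /* link with -lm */
  int main(void)
  {
      FILE *gp = popen("gnuplot", "w");      /* command pipe to gnuplot */
      int i;
      if (gp == NULL) { perror("popen"); return 1; }
      fprintf(gp, "plot '-' with lines\n");  /* inline data follows     */
      for (i = 0; i < 100; i++)
          fprintf(gp, "%d %f\n", i, sin(0.1 * i));
      fprintf(gp, "e\n");                    /* 'e' ends inline data    */
      fflush(gp);
      getchar();                             /* hold the plot on screen */
      pclose(gp);
      return 0;
  }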
Gnuplot3 can be obtained via anonymous ftp from:
>>>> In general, Gnuplot 3 is available as the file gnuplot3.?.tar.Z
>>>> Please obtain gnuplot from the site nearest you. Many of the major ftp
>>>> archives world-wide have already picked up the latest version, so if
>>>> you found the old version elsewhere, you might check there.
>>>>
>>>> NORTH AMERICA:
>>>>
>>>> Anonymous ftp to dartmouth.edu (129.170.16.4)
>>>> Fetch
>>>> pub/gnuplot/gnuplot3.?.tar.Z
>>>> in binary mode.
>>>>>>>> A special hack for NeXTStep may be found on 'sonata.cc.purdue.edu'
>>>>>>>> in the directory /pub/next/submissions. The gnuplot3.0 distribution
>>>>>>>> is also there (in that directory).
>>>>>>>>
>>>>>>>> There is a problem to be aware of--you will need to recompile.
>>>>>>>> gnuplot has a minor bug, so you will need to compile the command.c
>>>>>>>> file separately with the HELPFILE defined as the entire path name
>>>>>>>> (including the help file name.) If you don't, the Makefile will
>>>>>>>> override the def and help won't work (in fact it will bomb the program.)
NetTools
- -----------
We have included a simple set of analysis tools
by Simon Dennis and Steven Phillips.
They are used in some of the examples to illustrate
the use of the MIGRAINES interface with analysis tools.
The package contains three tools for network analysis:
gea - Group Error Analysis
pca - Principal Components Analysis
cda - Canonical Discriminants Analysis
Analyze
- -------
"analyze" is a program inspired by Denis and Phillips'
Nettools. The "analyze" program does PCA, CDA, projections,
and histograms. It can read the same data file formats as are
supported by "bpmake" simulations and output data in a variety
of formats. Associated with this utility are shell scripts that
implement data reduction and feature extraction. "analyze" can be
used to understand how the hidden layers separate the data in order to
optimize the network architecture.
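[[ Editor's Note: As a taste of what such tools do, here is a minimal C
sketch of principal components analysis: extracting the first principal
component of a toy data set by power iteration on its covariance
matrix. The real "analyze" utility is of course far more general. -PM ]]
  #include <stdio.h>
  #include <math.h>        /* link with -lm */
  #define N 5   /* samples    */
  #define D 3   /* dimensions */
  int main(void)
  {
      double x[N][D] = {{2.5,2.4,0.5},{0.5,0.7,1.9},{2.2,2.9,0.4},
                        {1.9,2.2,0.8},{3.1,3.0,0.2}};
      double mean[D] = {0.0}, cov[D][D] = {{0.0}}, v[D] = {1.0,0.0,0.0};
      int i, j, k, it;
      for (j = 0; j < D; j++) {                    /* column means */
          for (i = 0; i < N; i++) mean[j] += x[i][j];
          mean[j] /= N;
      }
      for (j = 0; j < D; j++)                      /* sample covariance */
          for (k = 0; k < D; k++) {
              for (i = 0; i < N; i++)
                  cov[j][k] += (x[i][j] - mean[j]) * (x[i][k] - mean[k]);
              cov[j][k] /= N - 1;
          }
      for (it = 0; it < 100; it++) {               /* power iteration */
          double w[D] = {0.0}, norm = 0.0;
          for (j = 0; j < D; j++)
              for (k = 0; k < D; k++) w[j] += cov[j][k] * v[k];
          for (j = 0; j < D; j++) norm += w[j] * w[j];
          norm = sqrt(norm);
          for (j = 0; j < D; j++) v[j] = w[j] / norm;
      }
      printf("first principal component: %+.4f %+.4f %+.4f\n",
             v[0], v[1], v[2]);
      return 0;
  }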
How to get Aspirin/MIGRAINES
- -----------------------
The software is available from two FTP sites, CMU's simulator
collection and UCLA's cognitive science machines. The compressed tar
file is a little less than 2 megabytes. Most of this space is
taken up by the documentation and examples. The software is currently
only available via anonymous FTP.
> To get the software from CMU's simulator collection:
1. Create an FTP connection from wherever you are to machine "pt.cs.cmu.edu"
(128.2.254.155).
2. Log in as user "anonymous" with your username as the password.
3. Change remote directory to "/afs/cs/project/connect/code". Any
subdirectories of this one should also be accessible. Parent directories
should not be. ****You must do this in a single operation****:
cd /afs/cs/project/connect/code
4. At this point FTP should be able to get a listing of files in this
directory and fetch the ones you want.
Problems? - contact us at "connectionists-request@cs.cmu.edu".
5. Set binary mode by typing the command "binary" ** THIS IS IMPORTANT **
6. Get the file "am6.tar.Z"
7. Get the file "am6.notes"
> To get the software from UCLA's cognitive science machines:
1. Create an FTP connection to "ftp.cognet.ucla.edu" (128.97.8.19)
(typically with the command "ftp ftp.cognet.ucla.edu")
2. Log in as user "anonymous" with your username as the password.
3. Change remote directory to "pub/alexis", by typing the command "cd
pub/alexis"
4. Set binary mode by typing the command "binary" ** THIS IS IMPORTANT **
5. Get the file by typing the command "get am6.tar.Z"
6. Get the file "am6.notes"
Other sites
- -----------
If these sites do not work well for you, then try the archie
internet mail server. Send email:
To: archie@cs.mcgill.ca
Subject: prog am6.tar.Z
Archie will reply with a list of internet ftp sites
that you can get the software from.
How to unpack the software
- --------------------------
After ftp'ing the file, make the directory in which you
wish to install the software. Go to that
directory and type:
zcat am6.tar.Z | tar xvf -
-or-
uncompress am6.tar.Z ; tar xvf am6.tar
How to print the manual
- -----------------------
The user documentation is located in ./doc in a
few compressed PostScript files. To print
each file on a PostScript printer type:
uncompress *.Z
lpr -s *.ps
Why?
- ----
I have been asked why MITRE is giving away this software.
MITRE is a non-profit organization funded by the
U.S. federal government. MITRE does research and
development into various technical areas. Our research
into neural network algorithms and applications has
resulted in this software. Since MITRE is a publicly
funded organization, it seems appropriate that the
product of the neural network research be turned back
into the technical community at large.
Thanks
- ------
Thanks to the beta sites for helping me get
the bugs out and make this portable.
Thanks to the folks at CMU and UCLA for the ftp sites.
Copyright and license agreement
- -------------------------------
Since the Aspirin/MIGRAINES system is licensed free of charge,
the MITRE Corporation provides absolutely no warranty. Should
the Aspirin/MIGRAINES system prove defective, you must assume
the cost of all necessary servicing, repair or correction.
In no way will the MITRE Corporation be liable to you for
damages, including any lost profits, lost monies, or other
special, incidental or consequential damages arising out of
the use or inability to use the Aspirin/MIGRAINES system.
This software is the copyright of The MITRE Corporation.
It may be freely used and modified for research and development
purposes. We require a brief acknowledgement in any research
paper or other publication where this software has made a significant
contribution. If you wish to use it for commercial gain you must contact
The MITRE Corporation for conditions of use. The MITRE Corporation
provides absolutely NO WARRANTY for this software.
Russell Leighton ^ / |\ /|
INTERNET: taylor@world.std.com |-| / | | |
| | / | | |
------------------------------
End of Neuron Digest [Volume 12 Issue 1]
****************************************
Received: from BUACCA by BUACCA.BU.EDU (Mailer R2.08 PTF009) with BSMTP id
3102; Tue, 14 Sep 93 14:35:40 EDT
Received: from noc4.dccs.upenn.edu by BUACCA.BU.EDU (IBM VM SMTP R1.2.1) with
TCP; Tue, 14 Sep 93 14:35:36 EDT
Received: from CATTELL.PSYCH.UPENN.EDU by noc4.dccs.upenn.edu
id AA12043; Tue, 14 Sep 93 14:25:11 -0400
Return-Path: <marvit@cattell.psych.upenn.edu>
Received: from LOCALHOST by cattell.psych.upenn.edu
id AA07235; Tue, 14 Sep 93 13:30:22 EDT
Posted-Date: Tue, 14 Sep 93 13:28:50 -0400
From: "Neuron-Digest Moderator" <neuron-request@CATTELL.PSYCH.UPENN.EDU>
To: Neuron-Distribution:;
Subject: Neuron Digest V12 #2 (conferences and CFP)
Reply-To: "Neuron-Request" <neuron-request@CATTELL.PSYCH.UPENN.EDU>
X-Errors-To: "Neuron-Request" <neuron-request@psych.upenn.edu>
Organization: University of Pennsylvania
Date: Tue, 14 Sep 93 13:28:50 -0400
Message-Id: <7156.748027730@cattell.psych.upenn.edu>
Sender: marvit@cattell.psych.upenn.edu
Neuron Digest Tuesday, 14 Sep 1993
Volume 12 : Issue 2
Today's Topics:
PASE workshop
WCCI '94 Announcement and Call for Papers
ICNN 94 Call for Symposia Speakers
CALL FOR PAPERS -- NINTH GODDARD AI CONFERENCE
Neural Architectures and Distributed AI
call for papers
Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@psych.upenn.edu". The ftp archives are
available from psych.upenn.edu (130.91.68.31). Back issues requested by
mail will eventually be sent, but may take a while.
----------------------------------------------------------------------
Subject: PASE workshop
From: Diethelm Wuertz <wuertz@ips.id.ethz.ch>
Date: Mon, 07 Jun 93 16:33:31 +0100
First Announcement
PASE '93
4th International Workshop on Parallel Applications
in Statistics and Economics
>> Exploration of Complex Systems Dynamics <<
Ascona, Switzerland, November 22-26, 1993
Monte Verita
The purpose of this workshop is to bring together researchers interested
in innovative information processing systems and their applications in
the areas of statistics, finance and economics. The focus will be on
in-depth presentations of state-of-the-art methods and applications as
well as on communicating current research topics. This workshop is
intended for industrial and academic persons seeking new ways of
comprehending the behavior of dynamic systems. The PASE '93 workshop is
concerned with but not restricted to the following topics:
o Artificial Neural Networks
o Dynamical and Chaotic Systems
o Fuzzy Logic
o Genetic Algorithms
o Stochastic Optimization
Organizing Committee:
M. Dacorogna, O&A Zurich H. Beran, ICS Prague
F. Murtagh, Munotec Munich M. Hanf, IPS ETH Zurich
E. Pelikan, ICS Prague A. Scheidegger, CSCS Manno
D. Wuertz, IPS ETH Zurich M. Tomassini, CSCS Manno
Please contact for further information and registration
Hynek Beran, ICS Prague
Pod vodarenskou vezi 2
182 07 PRAGUE 8, Czech Republic
FAX: +42 2 858 57 89
E-mail: pase@uivt1.uivt.cas.cs
and for local arrangements
Marco Tomassini, CSCS Manno
Galleria 2, Via Cantonale
6928 MANNO, Switzerland
FAX: +41 91 506711
E-mail: pase@cscs.ch
The workshop will be held near Ascona, an attractive holiday resort in
Ticino, the Italian-speaking canton of Switzerland. In keeping with the
tradition of the PASE workshop, an art exhibition as well as other social
events will be organized.
Further information will be available from anonymous ftp:
ftp maggia.ethz.ch (129.132.17.1)
------------------------------
Subject: WCCI '94 Announcement and Call for Papers
From: Gary Jacobs <gjacobs@qualcomm.com>
Date: Fri, 11 Jun 93 11:00:40 -0800
Gary Jacobs
gjacobs@qualcomm.com
(619)597-5029 voice
(619)452-9096 fax
HARD FACT IN A WORLD OF FANTASY
A world of sheer fantasy awaits your arrival at the IEEE World Congress
on Computational Intelligence next year; our host is Walt Disney World in
Orlando, Florida. Simultaneous Neural Network, Fuzzy Logic and
Evolutionary Programming conferences will provide an unprecedented
opportunity for technical development while the charms of the nearby
Magic Kingdom and Epcot Center attempt to excite your fancies.
The role imagination has played in the development of Computational
Intelligence techniques is well known; before they became "innovative" the
various CI technologies were dismissed as "fantasies" of brilliant minds.
Now these tools are real; perhaps it's only appropriate that they should be
further explored and their creators honored in a world of the imagination, a
world where dreams come true.
Share your facts at Disney World; share your imagination. Come to the IEEE
World Congress on Computational Intelligence.
It's as new as tomorrow.
___________________________________________________________________________
***CALL FOR PAPERS***
___________________________________________________
IEEE WORLD CONGRESS ON COMPUTATIONAL INTELLIGENCE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* IEEE International Conference on Neural Networks *
* FUZZ/IEEE '94 *
* IEEE International Symposium on Evolutionary Computation *
June 26 - July 2, 1994
Walt Disney World Dolphin Hotel, Lake Buena Vista, Florida
Sponsored by the IEEE Neural Networks Council
- ---------------------------------------------------------------------
IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS
Steven K. Rogers, General Chair
rogers@afit.af.mil
Topics:
Applications, architectures, artificially intelligent neural networks,
artificial life, associative memory, computational intelligence,
cognitive science, embedology, filtering, fuzzy neural systems, hybrid
systems, image processing, implementations, intelligent control,
learning and memory, machine vision, motion analysis, neurobiology,
neurocognition, neurodynamics, optimization, pattern recognition,
prediction, robotics, sensation and perception, sensorimotor systems,
speech, hearing and language, system identification, supervised and
unsupervised learning, tactile sensors, and time series analysis.
-------------------------------------------
FUZZ/IEEE '94
Piero P. Bonissone, General Chair
bonissone@crd.ge.ge.com
Topics:
Basic principles and foundations of fuzzy logic, relations between
fuzzy logic and other approximate reasoning methods, qualitative and
approximate-reasoning modeling, hardware implementations of fuzzy-
logic algorithms, design, analysis, and synthesis of fuzzy-logic
controllers, learning and acquisition of approximate models, relations
between fuzzy logic and neural networks, integration of fuzzy logic
and neural networks, integration of fuzzy logic and evolutionary
computing, and applications.
-------------------------------------------
IEEE CONFERENCE ON EVOLUTIONARY COMPUTATION
Zbigniew Michalewicz, General Chair
zbyszek@mosaic.uncc.edu
Topics:
Theory of evolutionary computation, evolutionary computation
applications, efficiency and robustness comparisons with other direct
search algorithms, parallel computer applications, new ideas
incorporating further evolutionary principles, artificial life,
evolutionary algorithms for computational intelligence, comparisons
between different variants of evolutionary algorithms, machine
learning applications, evolutionary computation for neural networks,
and fuzzy logic in evolutionary algorithms.
- ---------------------------------------------------------------------
INSTRUCTIONS FOR ALL THREE CONFERENCES
Papers must be received by December 10, 1993. Papers will be reviewed
by senior researchers in the field, and all authors will be informed
of the decisions at the end of the review process. All accepted papers
will be published in the Conference Proceedings. Six copies (one
original and five copies) of the paper must be submitted. The original
must be camera-ready, on 8.5x11-inch white paper, in one-column format in
Times or a similar font style, 10 points or larger, with one-inch margins
on all four sides. Do not fold or staple the original camera-ready
copy. Four pages are encouraged. The paper must not exceed six pages
including figures, tables, and references, and should be written in
English. Centered at the top of the first page should be the complete
title, author name(s), affiliation(s) and mailing address(es). In the
accompanying letter, the following information must be included: 1)
Full title of paper, 2) Corresponding author's name, address, telephone
and fax numbers, 3) First and second choices of technical session, 4)
Preference for oral or poster presentation, and 5) Presenter's name,
address, telephone and fax numbers. Mail papers to (and/or obtain
further information from): World Congress on Computational
Intelligence, Meeting Management, 5665 Oberlin Drive, #110, San Diego,
California 92121, USA (email: 70750.345@compuserve.com, telephone:
619-453-6222).
------------------------------
Subject: ICNN 94 Call for Symposia Speakers
From: druck@afit.af.mil (Dennis W. Ruck)
Date: Tue, 22 Jun 93 13:22:30 -0500
Greetings from ICNN 94 Program Committee!
The program committee for the IEEE International Conference on Neural
Networks for 1994 (ICNN 94) is now gathering nominations of prominent
researchers in the field for plenary presentations. We made up a list
of potential speakers and realized that women and minorities were not
well represented.
Please send us your suggestions of women and minority speakers to
give plenary (aka symposium) talks at the ICNN 94 conference.
ICNN 94 will be held June 26 - July 2, 1994 at the Walt Disney
Dolphin Hotel in Orlando, Florida as part of the World Congress on
Computational Intelligence (WCCI). FUZZ-IEEE and the IEEE
International Symposium on Evolutionary Computation make up the rest
of the WCCI. The World Cup Soccer Tournament will also be held for
the first time in the United States, with matches in Orlando, Florida,
concurrently with the WCCI.
Steven K. Rogers Dennis W. Ruck
ICNN 94 General Chair ICNN 94 U.S. Program Chair
rogers@afit.af.mil druck@afit.af.mil
------------------------------
Subject: CALL FOR PAPERS -- NINTH GODDARD AI CONFERENCE
From: James Rash <jim@class.gsfc.nasa.gov>
Date: Tue, 27 Jul 93 09:47:07 -0500
CALL FOR PAPERS
The Ninth Annual Goddard Conference on
Space Applications of Artificial Intelligence
May 10 - 12, 1994
NASA Goddard Space Flight Center, Greenbelt, MD, USA
Scope:
This conference will focus on AI and advanced computing technologies
relevant to space systems, space operations, and space science. The
major AI topics will include, but are not limited to:
o Knowledge-based spacecraft command and control
o Expert system management and methodologies
o Intelligent User Interfaces
o Distributed knowledge-based systems
o Fault-tolerant rule-based systems
o Simulation-based reasoning
o Knowledge acquisition
o Robotics and telerobotics
o Neural networks
o Computer Vision
while the major Advanced Computing Technologies will be:
o Intelligent database management
o Scientific Visualization
o Virtual Reality
o Planning and scheduling
o Fault isolation and diagnosis
o Image Processing
o Heterogeneous Computing
o Parallel Processing
Original, unpublished papers are now being solicited for the conference.
Papers must describe work with a clear AI content or be related to the
above advanced computing technology topics. Furthermore, papers must
stress the applicability of these technology solutions to space-related
problems. Authors are asked to submit abstracts first for initial
review.
Accepted papers will be presented formally or as poster presentations,
which may include demonstrations. All accepted papers will be published
in the Conference Proceedings as an official NASA document, and select
papers may appear in a special issue of the international journal
"Telematics and Informatics". There will be a Conference award for Best
Paper.
Submission:
Abstracts should be 300-500 words in length. Two copies of the abstract
should be submitted by September 20, 1993 along with the author's name,
affiliation, address, e-mail address and telephone number. Notification
of tentative acceptance will be given by October 8, 1993. Papers should
be no longer than 15 pages and must be submitted in camera-ready form for
final acceptance by December 3, 1993.
Abstracts may be submitted through e-mail, FAX, or regular mail.
E-mail and FAX are preferred.
Submission Address:
Mail: Nick Short
NASA GSFC, Code 930.1
Building 28, Room W270
Greenbelt, MD 20771 USA
E-mail: short@dunloggin.gsfc.nasa.gov
FAX: (301) 286-5152
Important Dates:
Abstract Submission September 20, 1993
Abstract Acceptance Notification October 8, 1993
Paper Submission December 3, 1993
Conference Chair:
Nick Short
NASA GSFC, Code 930.1
Greenbelt, MD 20771 USA
short@dunloggin.gsfc.nasa.gov
------------------------------
Subject: Neural Architectures and Distributed AI
From: Jim Liaw <liaw%dylink.usc.edu@usc.edu>
Date: Wed, 07 Jul 93 10:36:42 -0800
[[ Editor's Note: Sorry, it's officially too late to submit papers, but
certainly not too late to attend. -PM ]]
**** Call for Papers ****
Neural Architectures and Distributed AI:
From Schema Assemblages to Neural Networks
October 19-20, 1993
The Center for Neural Engineering
University of Southern California
announces a Workshop on
Neural Architectures and Distributed AI:
From Schema Assemblages to Neural Networks
October 19-20, 1993
[This Workshop was previously scheduled for April 1993]
Program Committee: Michael Arbib (Organizer), George Bekey, Damian Lyons,
Paul Rosenbloom, and Ron Sun
To design complex technological systems, we need a multilevel methodology
which combines a coarse-grain analysis of cooperative or distributed
computation (we shall refer to the computing agents at this level as
"schemas") with a fine-grain model of flexible, adaptive computation (for
which neural networks provide a powerful general paradigm). Schemas
provide a language for distributed artificial intelligence and perceptual
robotics which is "in the style of the brain", but at a relatively high
level of abstraction relative to neural networks. We seek (both at the
level of schema assemblages, and in terms of "modular" neural networks) a
distributed model of computation, supporting many concurrent activities
for recognition of objects, and the planning and control of different
activities. The use, representation, and recall of knowledge is mediated
through the activity of a network of interacting computing agents which
between them provide processes for going from a particular situation and
a particular structure of goals and tasks to a suitable course of action.
This action may involve passing of messages, changes of state,
instantiation to add new schema instances to the network, deinstantiation
to remove instances, and may involve self-modification and
self-organization. Schemas provide a form of knowledge representation
which differs from frames and scripts by being of a finer granularity.
Schema theory is generative: schemas may well be linked to others to
provide yet more comprehensive schemas, whereas frames tend to "build in"
from the overall framework. The analysis of interacting computing agents
(the schema instances) is intermediate between the overall specification
of some behavior and the neural networks that subserve it. The Workshop
will focus on different facets of this multi-level methodology. While
the emphasis will be on technological systems, papers will also be
accepted on biological and cognitive systems.
Submission of Papers
A list of sample topics for contributions is as follows, where a hybrid
approach means one in which the abstract schema level is integrated with
neural or other lower level models:
Schema Theory as a description language for neural networks
Modular neural networks
Alternative paradigms for modeling symbolic and subsymbolic knowledge
Hierarchical and distributed representations: adaptation and coding
Linking DAI to Neural Networks to Hybrid Architecture
Formal Theories of Schemas
Hybrid approaches to integrating planning & reaction
Hybrid approaches to learning
Hybrid approaches to commonsense reasoning by integrating neural networks
and rule-based reasoning (using schemas for the integration)
Programming Languages for Schemas and Neural Networks
Schema Theory Applied in Cognitive Psychology, Linguistics, and Neuroscience
Prospective contributors should send a five-page extended abstract, including
figures with informative captions and full references - a hard copy, either
by regular mail or fax - by August 15, 1993 to Michael Arbib, Center for
Neural Engineering, University of Southern California, Los Angeles,
CA 90089-2520, USA [Tel: (213) 740-9220, Fax: (213) 746-2863,
arbib@pollux.usc.edu]. Please include your full address, including fax and
email, on the paper.
In accepting papers submitted in response to this Call for Papers, preference
will be given to papers which present practical examples of, theory of, and/or
methodology for the design and analysis of complex systems in which the
overall specification or analysis is conducted in terms of a network of
interacting schemas, and where some but not necessarily all of the schemas
are implemented in neural networks. Papers which present a single neural
network for pattern recognition ("perceptual schema") or pattern generation
("motor schema") will not be accepted. It is the development of a
methodology to analyze the interaction of multiple functional units that
constitutes the distinctive thrust of this Workshop.
Notification of acceptance or rejection will be sent by email no later than
September 1, 1993. There are currently no plans to issue a formal
proceedings of full papers, but (revised versions) of accepted abstracts
received prior to October 1, 1993 will be collected with the full text of the
Tutorial in a CNE Technical Report which will be made available to registrants
at the start of the meeting.
A number of papers have already been accepted for the Workshop. These
include the following:
Arbib: Schemas and Neural Networks: A Tutorial Introduction to Integrating
Symbolic and Subsymbolic Approaches to Cooperative Computation
Arkin: Reactive Schema-based Robotic Systems: Principles and Practice
Heenskerk and Keijzer: A Real-time Neural Implementation of a Schema Driven
Toy-Car
Leow and Miikkulainen, Representing and Learning Visual Schemas in Neural
Networks for Scene Analysis
Lyons & Hendriks: Describing and analysing robot behavior with schema theory
Murphy, Lyons & Hendriks: Visually Guided Multi-Fingered Robot Hand Grasping
as Defined by Schemas and a Reactive System
Sun: Neural Schemas and Connectionist Logic: A Synthesis of the Symbolic
and the Subsymbolic
Weitzenfeld: Hierarchy, Composition, Heterogeneity, and Multi-granularity
in Concurrent Object-Oriented Programming for Schemas and Neural Networks
Wilson & Hendler: Neural Network Software Modules
Bonus Event: The CNE Research Review: Monday, October 18, 1993
The CNE Review will present a day-long sampling of CNE research, with talks
by faculty, and students, as well as demos of hardware and software. Special
attention will be paid to talks on, and demos in, our new Autonomous Robotics
Lab and Neuro-Optical Computing Lab. Fully paid registrants of the Workshop
are entitled to attend the CNE Review at no extra charge.
Registration
The registration fee of $150 ($40 for qualified students who include a
"certificate of student status" from their advisor) includes a copy of the
abstracts, coffee breaks, and a dinner to be held on the evening of October
18th.
Those wishing to register should send a check payable to "Center for Neural
Engineering, USC" for $150 ($40 for students and CNE members) together
with the following information to Paulina Tagle, Center for Neural
Engineering, University of Southern California, University Park, Los Angeles,
CA 90089-2520, USA.
- ---------------------------------------------------
SCHEMAS AND NEURAL NETWORKS
Center for Neural Engineering, USC
October 19-20, 1993
NAME: ___________________________________________
ADDRESS: _________________________________________
PHONE NO.: _______________
FAX:___________________
EMAIL: ___________________________________________
I intend to submit a paper: YES [ ] NO [ ]
I wish to be registered for the CNE Research
Review: YES [ ] NO [ ]
Accommodation
Attendees may register at the hotel of their choice, but the closest hotel to
USC is the University Hilton, 3540 South Figueroa Street, Los Angeles, CA
90007, Phone: (213) 748-4141, Reservation: (800) 872-1104,
Fax: (213) 748-0043. A single room costs $70/night while a double room costs
$75/night. Workshop participants must specify that they are "Schemas and
Neural Networks Workshop" attendees to avail themselves of the above rates.
Information on student accommodation may be obtained from the Student Chair,
Jean-Marc Fellous, fellous@pollux.usc.edu.
------------------------------
Subject: call for papers
From: PIURI@IPMEL1.POLIMI.IT
Date: 27 Aug 93 13:04:14 +0000
=============================================================================
14th IMACS WORLD CONGRESS ON COMPUTATION AND APPLIED MATHEMATICS
July 11-15, 1994
Atlanta, Georgia, USA
Sponsored by:
IMACS - International Association for Mathematics and Computers in Simulation
IFAC - International Federation for Automatic Control
IFIP - International Federation for Information Processing
IFORS - International Federation of Operational Research Societies
IMEKO - International Measurement Confederation
General Chairman: Prof. W.F. Ames
Georgia Institute of Technology, Atlanta, GA, USA
SESSIONS ON NEURAL NETWORKS
1. NEURAL NETWORK ARCHITECTURES AND IMPLEMENTATIONS
2. APPLICATION OF NEURAL TECHNIQUES FOR SIGNAL AND IMAGE PROCESSING
>>>>>> CALL FOR PAPERS <<<<<<
The IMACS World Congress on Computation and Applied Mathematics is held every
three years to provide a large general forum for professionals and scientists
to analyze and discuss the fundamental advances of research in all
areas of scientific computation, applied mathematics, mathematical modelling,
and system simulation in and for specific disciplines, the philosophical
aspects, and the impact on society and on disciplinary and interdisciplinary
research.
In the 14th edition, two sessions are planned on neural networks: "Neural
Network Architectures and Implementations" and "Application of Neural
Techniques for Signal and Image Processing".
The first session will focus on all theoretical and practical aspects of
architectural design and realization of neural networks:
from mathematical analysis and modelling to behavioral specification,
from architectural definition to structural design, from VLSI implementation
to software emulation, from design simulation at any abstraction level
to CAD tools for neural design, simulation and evaluation.
The second session will present the concepts, the design and the use of
neural solutions within the area of signal and image processing,
e.g., for modelling, identification, analysis, classification, recognition,
and filtering. Particular emphasis will be given to presentation of
specific applications or application areas.
Authors interested in the above neural sessions are invited to send
a one page abstract, the title of the paper and the author's address
by electronic mail, fax or postal mail to the Neural Sessions' Chairman
by October 15, 1993.
Authors must then submit five copies of their typed manuscript by postal
mail or fax to the Neural Sessions' Chairman by November 19, 1993.
Preliminary notification of acceptance/rejection will be mailed by
November 30, 1993. Final acceptance/rejection will be mailed by January
31, 1994.
Neural Sessions' Chairman: Prof. Vincenzo Piuri
Department of Electronics and Information
Politecnico di Milano
piazza L. da Vinci 32
I-20133 Milano, Italy
phone no. +39-2-23993606, +39-2-23993623
fax no. +39-2-23993411
e-mail piuri@ipmel1.polimi.it
=============================================================================
------------------------------
End of Neuron Digest [Volume 12 Issue 2]
****************************************
Received: from BUACCA by BUACCA.BU.EDU (Mailer R2.08 PTF009) with BSMTP id
9775; Fri, 24 Sep 93 02:04:00 EDT
Received: from noc4.dccs.upenn.edu by BUACCA.BU.EDU (IBM VM SMTP R1.2.1) with
TCP; Fri, 24 Sep 93 02:03:57 EDT
Received: from CATTELL.PSYCH.UPENN.EDU by noc4.dccs.upenn.edu
id AA22948; Fri, 24 Sep 93 01:48:50 -0400
Return-Path: <marvit@cattell.psych.upenn.edu>
Received: from LOCALHOST by cattell.psych.upenn.edu
id AA03013; Fri, 24 Sep 93 00:59:50 EDT
Posted-Date: Fri, 24 Sep 93 00:59:06 -0400
From: "Neuron-Digest Moderator" <neuron-request@CATTELL.PSYCH.UPENN.EDU>
To: Neuron-Distribution:;
Subject: Neuron Digest V12 #3 (requests, software, jobs, and such)
Reply-To: "Neuron-Request" <neuron-request@CATTELL.PSYCH.UPENN.EDU>
X-Errors-To: "Neuron-Request" <neuron-request@psych.upenn.edu>
Organization: University of Pennsylvania
Date: Fri, 24 Sep 93 00:59:06 -0400
Message-Id: <2994.748846746@cattell.psych.upenn.edu>
Sender: marvit@cattell.psych.upenn.edu
Neuron Digest Friday, 24 Sep 1993
Volume 12 : Issue 3
Today's Topics:
Request for handwriting recognition programs
New Digest Reader seeks chaotic time series modeling and prediction
Stock Trading
Problem with Pygmalion
PSYCHE-D Announcement
New GENESIS version 1.4
Microcanonical Annealing
Computational Neuroscience job at San Diego Supercomputer Center
NeuroWindow
Research Opportunities at the University of the West of England, Bristol
Research post available
Filtering with ANNs
Position announcement
Superchair
Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@psych.upenn.edu". The ftp archives are
available from psych.upenn.edu (130.91.68.31). Back issues requested by
mail will eventually be sent, but may take a while.
----------------------------------------------------------------------
Subject: Request for handwriting recognition programs
From: BKIRKLAND@rivendell.otago.ac.nz
Date: Thu, 09 Sep 93 12:50:00 +1200
[[ Editor's Note: Perhaps some fellow New Zealanders might help this
fellow out? Otherwise, perhaps those readers who have worked or are working
in handwriting recognition might be able to point Barry to some appropriate
sources. I know, for example, that the U.S. Postal Service has its test
database available for public ftp (somewhere). -PM ]]
SUBJECT: WANTED: Handwritten digit or character recognition software using
neural networks
As part of a dissertation, I want to compare handwritten digit or
character recognition using neural networks against the manual data entry
method. The text recognition program is to process text from document
images on disk files (the documents will have been scanned into the disk
files as a separate process) and give out suitable output (e.g. DIGIT IS
"8" or CHARACTER IS "D"). I will be recording results in the form of
times taken and error rates for experiments with the two methods.
I am looking for programs running under MS-DOS or Windows that perform
handwriting recognition. As a condition, the programs must use a neural
network (e.g. counterpropagation). These programs must not have any errors
that prevent them from running properly. They can be in .EXE form or
compilable under Borland/Turbo C++ version 3.1.
I would be interested to hear about FTP sites or mailing addresses from
which I could obtain such programs. I would also be interested to hear
about commercial products fitting the above criteria.
Any queries or replies can be sent to me at BKIRKLAND@OTAGO.AC.NZ
Thank you for your attention,
Yours faithfully,
Barry Kirkland, B.Sc.
==============================================================================
------------------------------
Subject: New Digest Reader seeks chaotic time series modeling and prediction
From: Guay Richard at caeoffice
<smtplink%Guay_Richard_at_caeoffice@mails.imed.com>
Date: Thu, 09 Sep 93 09:33:23 -0600
Hi,
My name is Richard A. Guay. I am very interested in neural network
application to chaotic time series modeling and prediction. If there
are others out there with ideas in this area or who know of some
references in this area, please let me know at
richardg@mails.imed.com. I am currently looking at models based on a
shunting neural architecture. I am also looking for a good C++ class
library for neural network architectures.
Thank you for your time and attention.
RAG
------------------------------
Subject: Stock Trading
From: raethpg%wrdc.dnet@wl.wpafb.af.mil
Date: Thu, 09 Sep 93 13:53:25 -0500
[[ Editor's Note: Readers are reminded that the Moderator of this Digest
should be awarded 10% of all profits made from using the ideas published
here. -PM ]]
Given the interest in the Digest on securities trading, I thought the
attached message would be of interest to the readers.
Best,
Pete.
I N T E R O F F I C E M E M O R A N D U M
Date: 07-Sep-1993 06:10pm EDT
From: @Sunburn.Stanford.EDU:BurmanJ
SUBJ: GP and Stock Trading
(Note: GP refers to genetic programming)
As a GPer and registered investment advisor (great combination), I am
very familiar with the characteristics of stock trading and the potential
uses of GP towards trend prediction. If one studies the statistical
characteristics of market activities, one finds that there is very little
correlation between current, past and future behavior of stock prices ...
this probably results from the Dow Theory which basically says that any
market advantage in price is very quickly equalized by competition from
investors and market makers (the people that control the price of stocks,
bonds, etc.). However, if one also studies the pattern characteristics
of stock prices over time (and volume), there are definite patterns that
seem to emerge. The real key to market prediction is to try and
characterize these patterns that can vary with time and price swings
(i.e. they are not statistically stationary).
From another perspective, stock trading can be viewed as a zero-sum
game between a trader, the masses and the market makers or
specialists. You want to avoid the behavior of the masses and be
aware of the merchandising behavior of the market makers.
Moreover, volume behavior is complex and can be related to price
variation through temporal modeling. How to combine these ideas
into a GP model is not at all easy ... also, attempting to predict
market behavior in a stock through simulation is very different
from actually trading. Selling out your investment is the hardest
decision, since sometimes it pays to cut your losses short rather than
wait for a turnaround that may be a long time coming.
My basic advice: you need to read several good books on the market and
really understand the details of the market mechanisms, from the
fundamentals to how the market makers compete against the average
investor. It may appear easy to try to develop a GP model for this
application, but one is competing against professional traders on the New
York Stock Exchange whose livelihood is to take your money all the way
to their bank ... oh, by the way, these market makers know who is buying
and selling, and they can trade for their own accounts.
For further discussion and opinions, contact:
Jerry Burman, jab@joker.iipo.gtegsc.com
- --------------- End of Message ----------------------------------
------------------------------
Subject: Problem with Pygmalion
From: dathe@arsen.chemie.ba-freiberg.de (M.Dathe +2272)
Date: Fri, 10 Sep 93 12:57:21 +0700
[[ Editor's Note: I assume Pygmalion is some type of simulation
software. However, it's new to me. Perhaps someone has direct
experience? -PM ]]
Hi everybody,
Via FTP I got a copy of the Pygmalion program package.
I tried to compile it on my IBM RS6k-355 under AIX 3.2.
With some declaration changes it was no problem to compile the
sources, except for the final link step.
I got an error from the linker:
unresolved external: asciiDiskWidgetClass from AddInformation.c.
This file (AddInformation.c) is located in the pyg/src/pgm/display directory.
I looked for this external in all the libraries I have ...
In include/X11/Xaw I found an extern declaration of asciiDiskWidgetClass,
but ...
Does anybody know which library is needed to link the package, or
does anybody know about this problem and can help me?
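(A plausible fix, assuming asciiDiskWidgetClass comes from the standard
X Athena widget set: add the Xaw library and its usual companions to the
final link line, e.g.

    cc -o pygmalion ... -lXaw -lXmu -lXt -lX11

The object-file list is elided here, and the exact library set may
differ on AIX.)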
Thanks a lot.
M.Dathe
- --------------------------------------------------------------------------
Markus Dathe | dathe@arsen.chemie.ba-freiberg.de
TU Bergakademie Freiberg |
FB Chemie, Inst. f. Analyt. Ch. | T: ++49/3731/51-2272
Leipziger Str. 29 |
09596 Freiberg/Sa. | F: ++49/3731/51-3666
Germany |
- --------------------------------------------------------------------------
------------------------------
Subject: PSYCHE-D Announcement
From: X91007@pitvax.xx.rmit.edu.au
Date: Fri, 10 Sep 93 15:39:50 -0600
[[ Editor's Note: I think this was announced last year. Given the wide
interests of this Digest's readers, I thought some might be game for a
more speculative and far reaching forum of some of the stickier topics.
Perhaps it's time to spark a discussion in this Digest, again? -PM ]]
ANNOUNCEMENT OF THE PSYCHE-D DISCUSSION LIST
PSYCHE is a refereed electronic journal dedicated to supporting the
interdisciplinary exploration of the nature of consciousness and its
relation to the brain. PSYCHE publishes material relevant to that
exploration from the perspectives afforded by the disciplines of
Cognitive Science, Philosophy, Psychology, Neuroscience, Artificial
Intelligence and Anthropology. Interdisciplinary discussions are
particularly encouraged.
A discussion list, PSYCHE-D, has been created to aid people who are
interested in the subject of consciousness. It is hoped that it will
allow members to share ideas, pursue common research, and so on. PSYCHE-D
will also be used to discuss articles that appear in the journal of the
same name, but in addition members are invited to raise other related
themes.
To subscribe, just send the command:
SUBSCRIBE PSYCHE-D Your Name
to
LISTSERV@NKI.BITNET
For general information on LISTSERV send the command "INFO PR" or "INFO ?" to
LISTSERV@NKI.BITNET.
Subscriptions to the e-journal PSYCHE - as opposed to the discussion
group - may be initiated by sending the one-line command "SUBSCRIBE
PSYCHE-L Your Name" (without the quotes) in the body of an electronic
mail message to LISTSERV@NKI.BITNET. If you have any further
questions regarding either the electronic journal or the discussion
group, please contact the Executive Editor of PSYCHE:
Patrick Wilken
e-mail: x91007@pitvax.xx.rmit.edu.au
------------------------------
Subject: New GENESIS version 1.4
From: Jim Bower <jbower@smaug.bbb.caltech.edu>
Date: Sun, 12 Sep 93 19:04:45 -0800
This is to announce the availability of a new release of the GENESIS
simulator. This version (ver. 1.4.1, August 1993) is greatly improved from
the previous public release (ver. 1.1, July 1990).
Description:
GENESIS (GEneral NEural SImulation System) is a general purpose
simulation platform which was developed to support the simulation of neural
systems ranging from complex models of single neurons to simulations of
large networks made up of more abstract neuronal components. Most current
GENESIS applications involve realistic simulations of biological neural
systems. Although the software can also model more abstract networks, other
simulators are more suitable for backpropagation and similar connectionist
modeling.
GENESIS and its graphical front-end XODUS are written in C and run on SUN
and DEC graphics workstations under UNIX (Sun version 4.0 and up; Ultrix
3.1, 4.0 and up) and X-windows (versions X11R3, X11R4, and X11R5). The
current version of GENESIS has also been used with Silicon Graphics (Irix
4.0.1 and up) and the HP 700 series (HPUX). The distribution includes full
source code and documentation for both GENESIS and XODUS as well as fourteen
demonstration and tutorial simulations. Documentation for these simulations
is included, along with three papers that describe the general organization
of the simulator. The distributed compressed tar file is about 3 MB in size.
In addition to sample simulations which demonstrate the construction of
neural simulations, the new GENESIS release contains a number of interactive
tutorials for teaching concepts in neurobiology and realistic neural
modeling. As their use requires no knowledge of GENESIS programming, they
are suitable for use in a computer simulation laboratory which would
accompany upper-division undergraduate and graduate neuroscience courses, or
for self-study. Each of these has on-line help and a number of suggested
exercises or "experiments". These tutorials may also be taken apart and
modified to create your own simulations, as several of them are derived from
recent research simulations.
The following papers give further information about GENESIS:
Wilson, M. A., Bhalla, U. S., Uhley, J. D., and Bower, J. M. (1989)
GENESIS: A system for simulating neural networks. In: Advances in Neural
Information Processing Systems, D. Touretzky, editor. Morgan Kaufmann,
San Mateo, CA, pp. 485-492.

Wilson, M. A., and Bower, J. M. (1989) The simulation of large-scale
neural networks. In: Methods in Neuronal Modeling, C. Koch and I. Segev,
editors. MIT Press, Cambridge, MA.
Acquiring GENESIS via free FTP distribution:
GENESIS may be obtained via FTP from genesis.cns.caltech.edu
(131.215.137.64). As this is a large software package, please read the
above description to determine if GENESIS is likely to be suitable for your
purposes before you follow this procedure. To acquire the software, use
'telnet' to connect to genesis.cns.caltech.edu and log in as the user
"genesis" (no password required). If you answer all the questions asked of
you, an 'ftp' account will automatically be created for you. You can then
'ftp' back to the machine and download the software. Further inquiries
concerning GENESIS may be addressed to genesis@cns.caltech.edu.
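(In outline, the procedure just described is:

    telnet genesis.cns.caltech.edu     -- log in as "genesis", no password
    ftp genesis.cns.caltech.edu        -- using the account created for you

The host name and login are as given above; everything else about the
session will be prompted for.)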
------------------------------
Subject: Microcanonical Annealing
From: suchi@pollux.cs.uga.edu (Suchi Bhandarkar)
Date: Thu, 16 Sep 93 12:16:47 -0500
Could somebody provide me with a reference that contains a formal
proof of the asymptotic convergence of the Micro-Canonical
Annealing Algorithm of M. Creutz? The original paper by M. Creutz
is as follows:
M. Creutz, "Microcanonical Monte Carlo Simulation", in Physical
Review Letters, Vol. 50, No. 19, May 9, 1983, pp. 1411 - 1414.
The original paper, however, does not contain a formal proof
of asymptotic convergence, only simulation results. Please
e-mail your responses to "suchi@cs.uga.edu".
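(For readers who have not seen the method, a minimal C sketch of
Creutz's demon update for a one-dimensional Ising ring follows. It
illustrates the microcanonical move itself, not the annealing schedule
or any convergence argument; parameters are illustrative.)

    #include <stdio.h>
    #include <stdlib.h>

    #define N 100        /* spins in a ring */
    #define J 1          /* ferromagnetic coupling */

    int main(void)
    {
        int s[N], i, trial;
        int demon = 8;   /* demon's energy reserve; never goes below 0 */

        for (i = 0; i < N; i++) s[i] = 1;   /* start with all spins up */
        srand(1);

        for (trial = 0; trial < 1000; trial++) {
            int k  = rand() % N;
            /* energy cost of flipping spin k (periodic boundaries) */
            int dE = 2 * J * s[k] * (s[(k + N - 1) % N] + s[(k + 1) % N]);
            if (demon >= dE) {   /* demon can pay (or absorb) the change */
                demon -= dE;
                s[k] = -s[k];
            }
        }
        printf("demon energy after 1000 trials: %d\n", demon);
        return 0;
    }

In microcanonical annealing the same move is used while the demon's
reserve is gradually reduced; the convergence proof asked for above
would concern that schedule.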
Thank you very much,
Suchi Bhandarkar
Dept. of Computer Science
Univ. of Georgia
suchi@cs.uga.edu
------------------------------
Subject: Computational Neuroscience job at San Diego Supercomputer Center
From: Ken Miller <ken@phy.ucsf.edu>
Date: Thu, 16 Sep 93 10:52:43 -0800
COMPUTATIONAL NEUROSCIENCE JOB:
I just checked with SDSC, and applications are still being accepted
for this job (ad posted below). However, as the job has been
advertised for two months, applicants are encouraged to act quickly.
Ken
Kenneth D. Miller telephone: (415) 476-8217
Dept. of Physiology internet: ken@phy.ucsf.edu
UCSF fax: (415) 476-4929
513 Parnassus
San Francisco, CA 94143-0444
[Office: S-859]
- ----------------------------------------
This ad appeared in Science on July 16, 1993:
San Diego Supercomputer Center
-----------------------------
The San Diego Supercomputer Center is a National Computational Science
Laboratory operated by General Atomics and the National Science Foundation.
It serves the nationwide community of scientists and engineers. We are
currently accepting applications for a Staff Scientist in computational
ecology, computational neurobiology, or scientific databases to join our
team of computational scientists.
Requirements include a Ph.D. plus postdoctoral experience in one of the
above areas. For the computational ecology or neurobiology position, a
willingness to initiate an outreach program in, and collaborative projects
with, the research community is necessary.
General Atomics offers comprehensive salary and benefit plans as well as an
exciting, dynamic environment well suited to applicants who are highly
motivated and flexible.
Please submit your letter of application, curriculum vitae, list of
publications and three references to General Atomics, Dept. 93-23, P.O.
Box 85608, San Diego, CA 92186-9784. EEO/AAE
If you want further information about this position, please contact
Rozeanne Steckler (steckler@sdsc.edu, 619-534-5122) or Dan Sulzbach
(sulzbach@sdsc.edu, 619-534-5125) at SDSC.
------------------------------
Subject: NeuroWindow
From: root@luna.portal.com (ROOT)
Date: Thu, 16 Sep 93 16:05:27 -0700
Does anyone have any experience with, information on, or reviews of
NeuroWindows by the Ward Group? Any comments appreciated.
Thank you!
mike@luna.portal.com
------------------------------
Subject: Research Opportunities at the University of the West of England,
Bristol
From: tcf@hal.uwe-bristol.ac.uk (Terry Fogarty)
Date: Fri, 17 Sep 93 15:57:51 +0000
Research Opportunities at the University of the West of England, Bristol.
The Bristol Transputer Centre, within the Faculty of Computer Studies and
Mathematics, undertakes research and collaboration projects in parallel
and distributed computing, artificial intelligence and databases.
Following its successful rating in the recent Research Assessment
Exercise, the Centre is now able to consolidate its activities and wishes
to appoint two research fellows and two research students to work within
the following areas:
Evolutionary Computation
Cooperating Knowledge-based Systems
Monitoring and Control of Distributed Systems
Eliciting rules using Machine Learning.
All posts are fixed term for a period of three years.
Research Fellows (Ref: R/286)
You should have a PhD and a significant research record in one of the
above areas. Salary will be in the range 12,900 to 20,400 pounds
sterling. For informal discussion please contact Dr Roger Miles on
Bristol (0272) 656261, ext. 3180. Selection will be on merit; we welcome
applications from women, black people and members of other minority
ethnic groups, and disabled people, who are under-represented in the
Faculty. For further information and an application form, to be returned
by 12th October 1993, please ring our 24-hour answerphone service on
Bristol (0272) 763813 or write to Personnel Services, UWE Bristol,
Frenchay Campus, Coldharbour Lane, Bristol BS16 1QY. Please quote the
reference number in all correspondence.
Research Studentships
You should have a good honours degree in a computing-related subject. The
studentship will cover fees for registration for a higher degree and
carry a bursary of 5,400 pounds sterling, which may be supplemented by
part-time teaching.
For an informal discussion please contact Dr Roger Miles on Bristol
(0272) 656261 ext 3180. For further information please ring Mrs Fay
Coleman, The Administrator, Bristol Transputer Centre on Bristol (0272)
656261 ext 3183.
****************************************************************************
------------------------------
Subject: Research post available
From: CRReeves <srx014@cck.coventry.ac.uk>
Date: Mon, 20 Sep 93 15:16:30 +0700
The following University Research Studentship is available, starting
as soon as possible:
"Application of neural networks to the inference of homologous DNA sequences
from related genomes"
This project involves the application of neural network techniques in plant
genetics. Primary DNA sequence data are being accumulated for a wide range of
organisms, and the role of model species in plant genetics is crucial in
expanding our knowledge of the fundamental mechanisms of plant development.
The purpose of this project is the evaluation of neurocomputing methods in
the prediction of gene sequences for a variety of agricultural species. The
work will be carried out in the School of Mathematical and Information
Sciences at Coventry University (where there is a variety of ongoing research
in the applications of neural networks), in collaboration with Horticultural
Research International at Wellesbourne, Warwickshire, where there is access to
large databases of genetic characteristics.
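(Purely by way of illustration, and not part of the project
specification: sequence data are often presented to a network as a
one-hot code, four inputs per base, as in the following C sketch.)

    #include <stdio.h>
    #include <string.h>

    /* One-hot encode a window of DNA bases (A, C, G, T) into a vector
       of 4 inputs per base -- a common input representation for
       feed-forward sequence classifiers. */
    static void encode(const char *window, double *out)
    {
        const char *alphabet = "ACGT";
        size_t i, n = strlen(window);
        for (i = 0; i < n; i++) {
            const char *p = strchr(alphabet, window[i]);
            int j;
            for (j = 0; j < 4; j++)
                out[4 * i + j] = 0.0;
            if (p)                         /* unknown bases stay all-zero */
                out[4 * i + (p - alphabet)] = 1.0;
        }
    }

    int main(void)
    {
        double x[4 * 5];
        int i;
        encode("ACGTA", x);
        for (i = 0; i < 20; i++)
            printf("%.0f ", x[i]);
        printf("\n");
        return 0;
    }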
Applicants do not need a specialist background in either genetics or neural
computation; preferably, they should have a background in mathematics and
a competence in at least one high-level computing language (C, Pascal, etc.).
Please send CVs by email or by post to
___________________________________________
| Nigel Steele |
| Chair, Mathematics Division |
| School of Mathematical and Information |
| Sciences |
| Coventry University |
| Priory St |
| Coventry CV1 5FB |
| tel :+44 (0)203 838568 |
| fax :+44 (0)203 838585 |
| Email: nsteele@uk.ac.cov |
|___________________________________________|
[Message sent by Colin Reeves (CRReeves@uk.ac.cov)]
------------------------------
Subject: Filtering with ANNs
From: Landi Leonardo <landi@aguirre.ing.unifi.it>
Date: Mon, 20 Sep 93 10:27:01 +0100
Hi everybody,
I am involved in filtering a signal corrupted by additive noise. To do
this, I would like to use a neural network. Before settling on one
architecture, I would like to know whether there are theoretical results
or papers that have shed some light on this area.
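(One standard formulation, sketched here under the assumption that a
clean reference signal is available during training: treat filtering as
supervised learning on a sliding window of noisy samples. The linear
special case is the classic LMS adaptive filter, shown below in C; a
neural network would replace the weighted sum with a nonlinear mapping.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define TAPS 8        /* window of noisy samples fed to the filter */
    #define MU   0.05     /* learning rate */

    int main(void)
    {
        double w[TAPS] = {0}, x[TAPS] = {0};
        int t, i;
        srand(1);
        for (t = 0; t < 5000; t++) {
            double clean = sin(0.05 * t);              /* desired signal */
            double noisy = clean
                         + 0.3 * ((double)rand() / RAND_MAX - 0.5);
            /* shift the window and insert the newest noisy sample */
            for (i = TAPS - 1; i > 0; i--) x[i] = x[i - 1];
            x[0] = noisy;
            /* filter output, then LMS update toward the clean target */
            {
                double y = 0.0, err;
                for (i = 0; i < TAPS; i++) y += w[i] * x[i];
                err = clean - y;
                for (i = 0; i < TAPS; i++) w[i] += MU * err * x[i];
                if (t % 1000 == 0) printf("t=%4d  error=%f\n", t, err);
            }
        }
        return 0;
    }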
Thank you very much and enjoy these last summer days.
Leonardo Landi
Dipartimento di Sistemi ed Informatica
Facolta' di Ingegneria
Universita' degli Studi di Firenze
via Santa Marta 3
50139 Firenze
ITALIA
E-Mail: landi@aguirre.ing.unifi.it
tel: +39-55-4796365
fax: +39-55-4796363
------------------------------
Subject: Position announcement
From: "NN.JOB" <garza@mcc.com>
Date: Mon, 20 Sep 93 14:17:36 -0600
******************* Position Announcement ******************
MCC (Microelectronics & Computer Technology Corp.) is one of the
country's most broad-based industry consortia. MCC's membership of almost
100 companies and organizations includes a diverse group of electronics,
computer, aerospace, manufacturing, and other advanced technology
organizations. MCC has an immediate opening for a Member of Technical
Staff (MTS) or Senior MTS in its Neural Network Projects. Job
responsibilities will be to conduct applied research in one or more of
the following three areas (listed in order of importance):
Intelligent financial systems,
OCR, and
Spectral (image/signal) processing applications
Required skills:
Neural net research & development experience
PhD in relevant area, preferably in EE, physics, or applied mathematics
Strong quantitative skills
C programming, UNIX background
Preferred skills:
Experience in financial applications and/or time series analysis
Demonstrated project leadership
Strong communication skills
Please forward your resume and salary history to:
MCC
ATTN: Neural Network Job
3500 W. Balcones Center Drive
Austin, TX 78759
email: nn.job@mcc.com
------------------------------
Subject: Superchair
From: "R.C. Lacher" <lacher@NU.CS.FSU.EDU>
Date: Tue, 21 Sep 93 13:47:13 -0500
[[ Editor's Note: Perhaps the next endowment will be for music directors
who work on models of gelatinous suspensions? It would be called the
Superconductor Supercolloider Superchair. -PM ]]
I would like to call the following announcement to the attention of the
connectionist research community. Note that the position is rather wide
open as to field or home department. In particular, eminent scientists in
various connectionist fields are encouraged to apply or to put forward
nominations of others. Biology, Computer Science, Mathematics, Physics,
Psychology, and Statistics are all departments in the college.
__o __o __o __o __o
-\<, -\<, -\<, -\<, -\<,
Chris Lacher _ _ _ _ _ _ _ _ O/_O _ O/_O _ O/_O _ O/_O _ O/_O _ _ _ _
Department of Computer Science Phone: (904) 644-4029
Florida State University Fax: (904) 644-0058
Tallahassee, FL 32306-4019 Email: lacher@cs.fsu.edu
===================================================================
The Thinking Machines Corporation Eminent Scholar Chair
in
High Performance Computing
Applications and nominations are invited for the TMC Eminent Scholar
Chair in High Performance Computing at Florida State University. This
position is supported, in part, by a $4 million endowment and will be
filled at a senior level in the College of Arts and Sciences.
Applicants and nominees should have a distinguished academic or
research record in one or more fields closely associated with modern
high performance computing. These fields include applied mathematics,
applied computer science, and computational science in one or more
scientific or engineering disciplines. The appointment will be in one or
more academic departments and in the Supercomputer Computations
Research Institute (SCRI).
The primary responsibilities of the successful candidate will be to
establish new research and education directions in high performance
computing that complement the existing strong programs in SCRI, the
National High Magnetic Field Laboratory, the Structural Biology
Institute, the Global Climate Research Institute, and the academic
departments. The Chair will be closely involved with the addition of several
junior level academic appointments in connection with this new initiative in
high performance computing in order to establish the strongest possible
group effort.
The deadline for applications is December 17, 1993. Applications and
nominations should be sent to: HPC Chair Selection Committee,
Mesoscale Air-Sea Interaction Group, Florida State University 32306-3041.
Florida State University is an Equal Opportunity/Equal Access/Affirmative
Action Employer. Women and minorities are encouraged to apply.
------------------------------
End of Neuron Digest [Volume 12 Issue 3]
****************************************