Article 4437 of sci.virtual-worlds:
Path: watserv1!torn!utcsri!rpi!usc!cs.utexas.edu!uunet!ogicse!news.u.washington.edu!milton.u.washington.edu!hlab
From: 74040.1543@CompuServe.COM (Omar Loggiodice)
Newsgroups: sci.virtual-worlds
Subject: TECH: VORL a world description language
Message-ID: <1992Jul6.085449.8585@u.washington.edu>
Date: 4 Jul 92 17:07:25 GMT
Article-I.D.: u.1992Jul6.085449.8585
Sender: news@u.washington.edu (USENET News System)
Organization: University of Washington
Lines: 309
Approved: cyberoid@milton.u.washington.edu
Originator: hlab@milton.u.washington.edu
This is a file which I wrote months ago to generate discussion about the
world description language. Due to the message Bernie posted about the
topic, I decided to post it in sci.vw. I have found some errors in it,
especially in the example; in a future revision I'll correct them and add
other ideas.
File: WEOL02.TXT
From: Omar Loggiodice [74040,1543]
Subj: World Editor File structure
[ASCII-art banner of the WE/WP project logo]
------------
Introduction
------------
With the following document I try to describe some ideas I have had
regarding the Virtual World Structure (VWS), its representation and
implementation, for the World Editor project currently being developed by an
on-line community from the Computer Arts forum (COMART) in its Virtual
Reality section. It must be noted that the following is a VERY ROUGH draft
which still needs lots of refinement. I have left many topics out, but with
small steps we will finally climb the ladder.

To define the VWS I tried to borrow as much as I could from the real world,
since our objective is to model it; this should lead to a more accurate
representation. However, we must be aware that the implementation will be
made in a computer environment, so the structure must be suitable for it.

This is a first draft; comments and discussion will lead to the final
representation and structure, which is why I encourage you to email me with
feedback. I have volunteered to work on the VWS and its representation for
the WE/WP project. This is my first contribution regarding these issues,
although it borrows some definitions made in the files explaining the
virtual computer concept I was working on (WEOL00.TXT and WEOL01.TXT).
Omar Loggiodice [74040,1543]
Assistant Leader of WE/WP project.
--------------------------------------------------
The Virtual World Structure and its representation
--------------------------------------------------
Let's look around us for a while; what do we see? Well, we see there are
lots of "objects" acting upon each other. We are one of those objects, and
we act upon others by touching them, for example. We "see" objects because
light acts upon an object, the object in turn reflects it, and finally the
light "touches" our retina. We are able to hear music because the speaker
"acts" upon the air, which finally reaches our ears.

As we can see in the above examples, the "interface" that gives us
information about the world is our senses; thus, to create a virtual world,
we must act on our senses. Hence the definition I use for an object that
belongs to the virtual world: a set of effects on our senses (the "effects"
may or may not change with time). This is a very general definition, and we
are not, technologically speaking, able to fulfill all its requirements, so
we are focusing on some senses first, such as sight.
I propose the Virtual World to be composed of objects which act upon each
other. The following diagram tries to explain this.
   ************               ************                 ************
   * Object 1 *<------------> * Object 2 *<------....----> * Object n *
   ************               ************                 ************
        ^                           ^                           ^
        |                           |--- This could be VH.      |
        |--------------------------...-------------------------|
So, all objects interact with each other. One of those objects is very
special: it represents the "human" that is "living" in that virtual world,
or, as I called him before, the 'virtual human'. It is a very special object
because it is the only object that should be able to communicate with the
real world, with the human that it represents. This 'virtual human' (VH)
is able to receive information from his real 'clone' (uhhh!) in some way,
for example by means of the PowerGlove or the head position detector.

All the objects hold information about what they are, such as their
visual characteristics, their sounds, etc., and about how they act upon
other objects: stopping them if they get too near, not making them visible,
etc. We also need to consider the scope of an object; that is, an object
can't be seen every time, everywhere in the VW, so we need to define the
scope of the object.
From the explanation we can conclude that to define an object we need:
- possible actions upon other objects
- a representation in terms of the senses (for now only a visual or
  graphical representation)
- reactions to other objects
- the scope of the object
It is important to define the actions that an object can make upon
other objects; for now, I consider the following:
- change of position
- change of graphical representation
- change of scope (which is defined by the scene)
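To make this list a bit more concrete: if the editor were written in C, an
object record could be laid out roughly as follows. The field and type names
are mine, not part of this draft, and the real implementation may end up
looking quite different.

   /* Rough C sketch of an object record holding the four items above;
      all names here are illustrative only, not part of the draft.     */
   typedef struct { long x, y, z; } Position;

   struct Message;             /* the <object_msg> structure, defined later */

   typedef struct Object {
       char      id[32];       /* <object_id>                               */
       void     *graphic;      /* visual/graphical representation           */
       Position  pos;          /* current position in the world             */
       int       scene;        /* scope: the scene the object belongs to    */
       /* reaction to other objects: called whenever a message arrives */
       void    (*react)(struct Object *self, const struct Message *msg);
   } Object;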
This has been a general explanation of the concept, but to implement it we
have been thinking about a "script" language. In the next section I will try
to give a definition of such a language (VORL).
----------------------------------
VORL definition (script language)
----------------------------------
I propose the script to be divided into two sections: the World definition
section and the object description section. The world will be divided into
scenes, each of which is composed of a set of objects. The description of
each scene is contained in the scene section in the source code of the
script. Each section contains a set of objects which are described in the
object description section. The following is the language structure; words
in angle brackets are user-generated, while the others are keywords of the
language.
/* World definition */
Scene#<n>
:trigger
   {
   <object_id>, <object_msg_id> <boolean op> <object_id>, <object_msg_id>..;
     .
     .
   }
:description
   {
   <object_id>;
     .
     .
   }

/*object description*/

<object_id>:
   {
   <object_msg_parameter> <relational op> <object_msg_parameter>
      <boolean op>
   <object_msg_parameter> <relational op> <object_msg_parameter>:-
      {
      <statement>;
      <statement>;
        .
        .
      }
     .
     .
   }
<n>: Integer that identifies each scene
<object_id>: string that identifies an object
<object_msg_id>: string that identifies a specific message
<boolean op>: OR|AND|NOT
<object_msg>: a structure that contains the following
- <object_id> this is the sender of the message
- <object_msg_id>
- long LONGPARAMETER x
- long LONGPARAMETER y
- long LONGPARAMETER extra
<object_msg_parameter>:Object_Id|MSG_ID|x|y|extra
<relational op>: >|<|<=|>=|=|!=
<GraphicDescriptor>: a structure containing the graphic representation of the
object, I would suggest it to conform to a predefined
graphic file format.
<position>: a structure containing the coordinates to draw an object
<statement>: Any of the following: (for now)
- SendMessage(<object_id>|scene#<n>,<object_msg>)
- draw(<GraphicDescriptor>,<position>)
- load(<filename>,<GraphicDescriptor>)
- Functions for adding & deleting objects to a scene
- Functions for modifying the <GraphicDescriptor>
- Assignments
- if-else
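If the interpreter were implemented in C, the <object_msg> structure listed
above would amount to little more than the following sketch (the field names
are my own guesses, not part of this draft):

   /* Hypothetical C layout of <object_msg>. */
   typedef struct Message {
       char sender_id[32];  /* <object_id> of the sending object        */
       int  msg_id;         /* <object_msg_id>, e.g. OM_CLICK or OM_POS */
       long x;              /* LONGPARAMETER x                          */
       long y;              /* LONGPARAMETER y                          */
       long extra;          /* LONGPARAMETER extra                      */
   } Message;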
Let me give an example to show how the language works. Suppose we have two
scenes: the first one is just a door, the second is a table. The script
code which will define the virtual world could be:
/* world description */
Scene#<0>
:trigger
   {
   scene1_door, door_open;
   }
:description
   {
   door;
   }

Scene#<1>
:trigger
   {
   door, door_open;
   }
:description
   {
   table;
   scene1_door;
   }
/*object description*/
door:
{
   MSG_ID=OM_PAINT:-  { load("door.grf",door_GD);
                        IniPos.x=10;
                        IniPos.y=10;
                        IniPos.z=10;
                        draw(door_GD,IniPos);
                      }
   MSG_ID=OM_CLICK AND Object_Id=VH:-  { load("open_door.grf",door_GD);
                                         IniPos.x=10;
                                         IniPos.y=10;
                                         IniPos.z=10;
                                         draw(door_GD,IniPos);
                                         Open=1;
                                       }
   MSG_ID=OM_POS AND Object_Id=VH:-  { if x=10 and y=10 and z=10 and Open=1
                                          {
                                          SendMessage(scene#1,door_open);
                                          Open=0;
                                          }
                                     }
   MSG_ID=OM_ERASE:-  undraw(door_GD);
}

scene1_door:
{
   MSG_ID=OM_PAINT:-  { load("2ndoor.grf",door_GD);
                        IniPos.x=10;
                        IniPos.y=10;
                        IniPos.z=10;
                        draw(door_GD,IniPos);
                      }
   MSG_ID=OM_CLICK AND Object_Id=VH:-  { load("open_door.grf",door_GD);
                                         IniPos.x=10;
                                         IniPos.y=10;
                                         IniPos.z=10;
                                         draw(door_GD,IniPos);
                                         Open=1;
                                       }
   MSG_ID=OM_POS AND Object_Id=VH:-  { if x=10 and y=10 and z=10 and Open=1
                                          {
                                          SendMessage(scene#0,door_open);
                                          Open=0;
                                          }
                                     }
   MSG_ID=OM_ERASE:-  undraw(door_GD);
}

table:
{
   MSG_ID=OM_PAINT:-  { load("table.grf",table_GD);
                        IniPos.x=0;
                        IniPos.y=0;
                        IniPos.z=0;
                        draw(table_GD,IniPos);
                      }
   MSG_ID=OM_ERASE:-  undraw(table_GD);
}
As you can notice, there is a set of predefined variables and messages. The
variable SCENENO is global, and it contains the current scene number. The
message OM_CLICK is generated whenever a mouse click or any similar action
(such as the closing of the hand with the PowerGlove) occurs. The message
OM_PAINT is generated whenever a change of scene is made, and it is sent to
all the objects in that scene. The message OM_ERASE is generated when a
scene change is about to happen. The message OM_POS is generated when an
object wants to inform another object about its position. Notice that all
the scenes contain a special object called VH, which stands for the Virtual
Human (see file WEOL01.TXT).
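In a C implementation the predefined messages and the SCENENO variable could
be declared as simply as this (the numeric values are placeholders of my own
choosing):

   /* Hypothetical declarations for the predefined messages and the
      global scene variable.                                          */
   enum {
       OM_PAINT = 1,  /* sent to every object of a scene when it is entered */
       OM_ERASE,      /* sent just before the current scene is left         */
       OM_CLICK,      /* mouse click or similar glove action (hand closing) */
       OM_POS         /* an object informs another object of its position   */
   };

   int SCENENO = 0;   /* global: number of the scene currently displayed */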
The interpreter does the following:

The "root" scene, or scene 0, is always the first scene. So the interpreter
sets the global variable SCENENO to 0 and sends an OM_PAINT message to all
the objects that compose the scene. The objects paint themselves and control
returns to the message dispatcher in the interpreter. When the user (now the
VH) clicks the mouse (or uses the glove in a similar way) over the door, an
OM_CLICK is generated by the message dispatcher and sent to the pertinent
object (the door in scene 0); the door processes the message and is opened.
If the VH changes his position (moving the glove or mouse), an OM_POS is
generated and sent to all the objects in the scene. The door checks the
position of the VH and, if it is the correct one, it forces a jump to the
second scene (scene 1). The rest of the code works in a similar way.
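The following is a minimal C sketch of that dispatcher loop. Everything in
it is an assumption for illustration only: the draft does not specify how
input events arrive or how messages are routed, so all the helper functions
below are hypothetical.

   /* Very rough sketch of the interpreter's message dispatcher;
      all helpers and names below are hypothetical.              */
   enum { OM_PAINT = 1, OM_ERASE, OM_CLICK, OM_POS };

   extern int SCENENO;                       /* current scene number        */

   /* Hypothetical routing helpers provided by the interpreter: */
   void send_to_scene(int scene, int msg_id, long x, long y, long extra);
   void send_to_object(const char *object_id, int msg_id,
                       long x, long y, long extra);
   int  next_input_event(long *x, long *y);  /* returns OM_CLICK or OM_POS  */
   const char *object_under(long x, long y); /* object at the VH's position */

   void run_world(void)
   {
       long x, y;

       SCENENO = 0;                               /* scene 0 is the root scene  */
       send_to_scene(SCENENO, OM_PAINT, 0, 0, 0); /* every object paints itself */

       for (;;) {
           int msg = next_input_event(&x, &y);
           if (msg == OM_CLICK)
               /* a click goes only to the object under the pointer/glove */
               send_to_object(object_under(x, y), OM_CLICK, x, y, 0);
           else if (msg == OM_POS)
               /* VH position changes are broadcast to the whole scene */
               send_to_scene(SCENENO, OM_POS, x, y, 0);
           /* a triggered scene change (e.g. door_open) would then send
              OM_ERASE to the old scene, update SCENENO and send OM_PAINT
              to the new one                                              */
       }
   }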
Omar Loggiodice CompuServe: 74040,1543
Internet: 74040.1543@compuserve.com
Assistant Leader of WE/WP project.
---->ORL