Notice: This material is excerpted from Special Edition Using Java, ISBN: 0-7897-0604-0. The electronic version of this material has not been through the final proof reading stage that the book goes through before being published in printed form. Some errors may exist here that are corrected before the book is published. This material is provided "as is" without any warranty of any kind.
by Mark Waks
By now, you have seen many of the ways in which Java is being used on the Web today. But many more applications are coming. Because Java can be used for all sorts of trusted, distributed applications, people are looking at using Java in a variety of ways, some of which are more exotic than simple home pages.
This chapter explores the interaction of Java and another new technology: Virtual Reality. Java and Virtual Reality appear to be separate but actually are vital parts of a whole. For even while Java is changing the Net by allowing users to interact with programs on the Web, Virtual Reality, in the form of VRML, is allowing users to start building spaces in which that interaction can take place.

Virtual Reality
Virtual Reality (VR) is one of the hottest topics in the computing world today-so much so that the term is overused, sometimes when it isn't even appropriate. Virtual Reality is the art of simulating reality within the computer and presenting that "reality" to the user as best the computer can.
The exact details of this presentation can vary quite widely. In the simplest cases, the system displays a 3-D image on a conventional monitor, and the user navigates around in this world by moving the mouse. In some of the most sophisticated systems, the user is outfitted with a helmet that provides a fully responsive stereoscopic image, and with gloves (or even a full bodysuit) that capture and respond to the user's movements. But although the details differ, the basic concept is the same: allowing the user to explore an artificial version of reality and trying to make that reality reasonably convincing.
VR is starting to develop into a mature set of technologies. A reasonably well-equipped personal computer or workstation now can display (or render) a three-dimensional display fast enough to present a convincing imitation of reality as the user moves through it. Specialized enhancement boards that speed this display are beginning to appear on the market, and the newest operating systems often include built-in 3-D libraries, so that proprietary libraries are less necessary. VR-based games are almost commonplace by now.

Cyberspace and the History of VRML
Over the past decade, science-fiction authors have frequently used the term cyberspace-a term that William Gibson coined in his book Neuromancer. Although no two authors agree on precisely what cyberspace would look like, most concur on the basic model: a shared virtual place (or places), where people interact both with objects in that place, and with other people in the same place. The idea is a powerful, almost mythic one, with people interacting on a level higher than the physical one, where almost anything is possible.
People began to realize years ago that a cyberspace of sorts already exists, in the form of the Internet. The Net is a common place for communication and activity, bearing little resemblance to the physical world. Granted, the communication is text-based, but the idea is similar to that of the fictional cyberspace.
Finally, people around the world started arriving at a common idea. We have relatively mature and cheap Virtual Reality technology, and we have a network that covers the entire world, in the form of the World Wide Web. If we combine the two ideas, we could get something very much like Gibson's idea of cyberspace: a world-spanning VR system, which anyone can enter at will and to which anyone can add their own personal areas.
The idea gelled at the first WWW conference, which was held in Geneva, Switzerland, in May 1994. People who were interested in the idea met at the conference and began to talk seriously. Dave Raggett coined the name Virtual Reality Markup Language, or VRML (later changed to Virtual Reality Modeling Language), for the new language that would provide the underpinnings of cyberspace, in homage to HTML. Afterward, the group set up a mailing list. The list attracted enormous attention in a remarkably short time, and the process of creating cyberspace was started.
To avoid reinventing the wheel, the members of the VRML mailing list decided to use an existing VR language as its basis. After considerable discussion and debate, the group adopted the Open Inventor ASCII format as a starting point. This format is the human-readable input language for Silicon Graphics' Inventor system, one of the better-established 3-D modeling packages. The group (led in particular by several developers, including Mark Pesce, Tony Parisi, and Gavin Bell) stripped the language down to its bare bones, and added a few new Web-specific features, resulting in the first attempt at putting virtual reality on the Web: VRML 1.0.

The VRML 1.0 Language
A full description of the VRML language is beyond the scope of this book. You can find the complete language specification at the following URL:
http://www.vrml.org/
For a comprehensive treatment of VRML, see Using VRML, Special Edition (Que). This book covers the language itself, as well as VRML browsers, utilities, and advanced programming topics.
This chapter concentrates on the major ideas behind the language, so that you can understand how Java relates to it.
VRML is all about creating a scene: the collection of objects that users look at. The scene is held together in a scene graph, a hierarchical description of the scene, and how the objects relate to each other. Figure 29.1 shows the scene graph for a very simple scene, containing a red cube, a green sphere, and a blue cone.
#VRML V1.0 ascii
Group {
    Separator {
        Material {
            ambientColor 1 0 0    # Red
        }
        Translation {
            translation -3 0 0    # Left
        }
        Cube { }
    }
    Separator {
        Material {
            ambientColor 0 1 0    # Green
        }
        Sphere { }
    }
    Separator {
        Material {
            ambientColor 0 0 1    # Blue
        }
        Translation {
            translation 3 0 0     # Right
        }
        Cone { }
    }
}
Fig. 29.1 The scene graph for a very simple VRML scene
Fig. 29.2 A very simple VRML scene
The graph consists of a collection of nodes -- specific objects or groupings of objects, which are laid out hierarchically. The VRML language is mainly just a listing of the different kinds of nodes that you can use in a scene graph and the simple syntax that holds them together.

Fields
Before you learn the syntax, understanding the kinds of fields that may be present in a VRML file is useful. You can think of a field as being a parameter to a node. A Cube node, for example, has the fields width, height, and depth, and a DirectionalLight node has the fields on, intensity, color, and direction.
VRML has several classes of fields, and each named field falls into one of these classes. These classes in turn fall into two broad categories: single-valued and multi-valued fields. A single-valued field describes exactly one thing. (However, a single-valued field may involve several numbers. A three-dimensional vector, for example, requires three numbers, but it is only one vector.) Multi-valued fields list a number of values of the same kind-for example, a list of colors or vectors. By convention, the names of classes for single-valued fields start with SF, and those for multi-valued fields start with MF.
Table 29.1 shows the fields that are available in VRML 1.0.
Class Name | Description |
---|---|
SFBitMask | A bit mask |
SFBool | A true/false value |
SFColor | A single color, written as an RGB triplet |
SFEnum | A collection of named options |
SFFloat | A single floating-point number |
SFImage | A bitmap image, which can be grayscale or RGB |
SFLong | A single long integer |
SFMatrix | A 4x4 transformation matrix |
SFRotation | A rotation around a specified axis |
SFString | An ASCII string |
SFVec2f | A two-dimensional vector, with floating-point values |
SFVec3f | A three-dimensional vector |
MFColor | A list of colors |
MFLong | A list of long integers |
MFVec2f | A list of two-dimensional vectors |
MFVec3f | A list of three-dimensional vectors |
Table 29.1 Field Classes in VRML 1.0
Single-valued fields are simply given as values after the field name. The direction field of a DirectionalLight node, for example, looks like this:
direction 1 .5 .35
direction is an SFVec3f field-a three-dimensional vector-and the three numbers after the name of the field specify the value of that field.
Multi-valued fields are placed in brackets after the field name, and the values are separated by commas. The ambientColor field of the Material node, for example, describes a list of colors, which the various faces of later objects may take. A typical ambientColor field might look like this:
ambientColor [.2 .5 .8, 1 0 .5, .9 .4 .65]
ambientColor is an MFColor field. The field in the preceding example lists three colors, separated by commas, each written as the RGB triplet that describes the color.

Nodes
As mentioned earlier, the language is mainly about describing nodes. Several kinds of nodes exist. Most nodes describe a physical object, such as a cube or a cylinder. Other nodes provide other information, such as the colors or transformations to be applied to subsequent objects.

Node Syntax
Nodes have an extremely simple syntax, as follows:
node-name { field field ... }
You start with the name of the node and follow with the names of the fields, surrounded by braces. Every node expects specific named fields, but default values often exist, so sometimes you can omit fields.
A simple cube can be described as follows:
Cube {
    width 1     # Note that all three fields are of
    height 1    # class SFFloat
    depth 1
}
By default, a cube is size 2 in every direction, so Cube {} is equivalent to the following:
Cube {
    width 2
    height 2
    depth 2
}

Node Types
Table 29.2 lists the types of nodes that are available in VRML 1.0.
Node Name | Description |
---|---|
AsciiText | Strings to display in 3-D |
Cone | A simple cone |
Coordinate3 | A list of coordinates for later nodes to use |
Cube | A simple cube |
Cylinder | A simple cylinder |
DirectionalLight | A light source that shines in a specific direction |
FontStyle | Describes the font for later AsciiText nodes |
Group, Separator, Switch, and TransformSeparator | See below |
IndexedFaceSet | An object described by a list of polygons |
IndexedLineSet | A different way of drawing an arbitrary shape |
Info | Essentially a comment in the graph |
LOD | Formerly LevelOfDetail; see below |
Material | Describes colors for later shapes |
MaterialBinding | Describes how those colors are mapped to shapes |
MatrixTransform | Transforms a shape via a matrix |
Normal | Describes the normals (vectors related to the faces) to a shape, for efficiency in rendering that shape |
NormalBinding | Describes how those normals are mapped to shapes |
OrthographicCamera | See below |
PerspectiveCamera | See below |
PointLight | An omnidirectional light at a location |
PointSet | A collection of points for use in later nodes |
Rotation | Describes how to rotate subsequent objects |
Scale | Describes how to scale subsequent objects |
ShapeHints | Provides hints for optimizing the rendering process |
Sphere | A simple sphere |
SpotLight | A fixed, conical light source |
Texture2, Texture2Transform, TextureCoordinate2 | Describe and apply two-dimensional textures to objects |
Transform | Allows arbitrary transformation of later objects |
Translation | Describes how to move subsequent objects |
WWWAnchor | A link, very much like the <a href=""> construct in HTML |
WWWInline | Includes an object from any URL |
Table 29.2 Types of Nodes in VRML 1.0
The purpose of this section is simply to familiarize you with the concepts in VRML, so the section will not go into much further detail about the nodes; see the language specification for full details. A few nodes, however, warrant more discussion.

Properties
Several of the node types, such as Transform and Material, affect the nodes that come after them; these nodes are known as properties. Properties are related to the concept of scene traversal, which is how the browser processes the scene graph. After loading the graph, the VRML browser performs a depth-first traversal of the graph, putting everything together as it goes. So when the browser reaches (for example) a Translation node, it knows to move all the subsequent objects appropriately.
The various kinds of transformations are particularly special, because they are cumulative. If, for example, you have a node that scales up by 2 and another node that scales up by 3, all subsequent objects are scaled up by a factor of 6.

Groups
Groups are nodes that collect objects together. For example, no primitive object called Chair exists, but you can make a chair out of suitable pieces-perhaps Cylinders for the legs and back, and a Cube (or a rectangular prism, anyway) for the seat. You collect these pieces in some type of Group node. Figure 29.3 shows a chair made this way.
Fig. 29.3 A basic chair, as displayed in Netscape Live3D
Several kinds of Group nodes exist. The basic Group simply allows you to collect objects in a single higher-level object; these collected objects are called child nodes. The syntax for children is simple: children are contained within the parent. To group a cube and a sphere, the syntax would be something like the following:
Group {
    Cube {
        width 3
        height 2
        depth 1
    }
    Sphere {
        radius 3
    }
}
Basic groups are used relatively rarely, for one reason: properties. You usually want to define objects by using some properties internally-for example, using a Translation node to move a chair leg 6 inches to the right. But as mentioned earlier, transformations simply accumulate. You don't want the chair to cause everything in the scene graph to shift 6 inches.
The solution is a Separator node, which is the most commonly used node type for describing objects. A Separator is just like a group, except that when the browser is traversing the scene graph, it preserves the properties (such as transformations) before entering the Separator and restores them after leaving the Separator. Thus, any properties that are specified as children of the Separator do not affect any objects outside that Separator. Most objects use Separator as their top-level node.
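The traversal rules above (cumulative transformations, and the Separator's save-and-restore behavior) can be sketched in Java. This is a toy model with invented class names, not any browser's actual code:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

abstract class Node {
    abstract void traverse(TraversalState state);
}

// The properties in effect at the current point of the traversal.
class TraversalState {
    double scale = 1.0;                       // accumulated scale factor
    private final Deque<Double> saved = new ArrayDeque<>();
    void push() { saved.push(scale); }        // entering a Separator
    void pop()  { scale = saved.pop(); }      // leaving a Separator
}

class Scale extends Node {
    final double factor;
    Scale(double factor) { this.factor = factor; }
    void traverse(TraversalState s) { s.scale *= factor; }  // cumulative
}

class Sphere extends Node {
    double drawnScale;                        // records the scale in effect
    void traverse(TraversalState s) { drawnScale = s.scale; }
}

class Separator extends Node {
    final List<Node> children;
    Separator(List<Node> children) { this.children = children; }
    void traverse(TraversalState s) {
        s.push();                             // save properties on entry
        for (Node child : children) child.traverse(s);
        s.pop();                              // restore them on exit
    }
}
```

Traversing a graph that scales by 2 and then by 3 inside a Separator draws the inner sphere at a factor of 6, while a sphere after that Separator is unaffected.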
TransformSeparator is just like the normal Separator, except that it saves only the transformations, not any other properties. Switch allows you to choose one of several children to activate.
You view the VRML scene through a camera. If a camera is established in the scene, it usually is used as the initial viewing location (although most browsers allow you to move within the scene).
Two kinds of cameras exist. The PerspectiveCamera shows everything in perspective, and the OrthographicCamera displays everything with no shrinkage in the distance.

LOD
LOD is a special node whose main purpose is efficiency. The LOD node lists a collection of distances and objects. As you draw closer to the LOD node, the displayed object changes.
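The distance rule that an LOD node implies can be sketched as a small Java helper (a hypothetical illustration; real browsers implement this internally): given the node's ascending list of range boundaries, pick the child whose range covers the viewer's current distance.

```java
class LodSelector {
    // ranges holds ascending distance boundaries; an LOD node with N
    // boundaries has N + 1 children, child 0 being the most detailed.
    static int selectLevel(double[] ranges, double distance) {
        for (int i = 0; i < ranges.length; i++) {
            if (distance < ranges[i]) {
                return i;            // viewer is closer than boundary i
            }
        }
        return ranges.length;        // beyond every boundary: simplest child
    }
}
```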
This effect can be used for object animation (actually changing the object as you come closer), but it is used mainly to control the level of detail. (LOD is a modification of the LevelOfDetail node from Open Inventor, the language VRML evolved from.) LOD allows the browser to show only a very simple object, with little detail, when the user is far away and the object appears small (the screen resolution does not permit great detail anyway) and to show more and more detail as the user draws near. This way, the browser needs to load the more-detailed objects (which presumably have larger files) only as needed.

WWWAnchor
The WWWAnchor node is what makes VRML a Web-based language. This node is a Group like the Groups described earlier in the chapter; it behaves like a Separator. But when the user selects one of the children of WWWAnchor (usually, by clicking it), this action tells the browser to go to some other URL. This can be a simple hypertext-like link to a URL, or the browser can include the 3-D position of the click within the WWWAnchor as part of the URL, similar to how imagemaps work in HTML.

General Syntax
Every VRML file must begin with a standard header that identifies it as such, as follows:
#VRML V1.0 ascii
After that, everything is nodes and fields. Notice that the entire file is a single high-level node-generally, a Group or Separator that contains various children.
Anything else that appears on a line after a pound sign (#) is considered to be a comment; the browser ignores comments.

Instancing
One last concept is useful for reading and understanding VRML: instancing, which is how you can share a node (and its children, if any) in multiple places in the file. The DEF command, placed before a node, assigns a name to that node. If that name is used in a USE command anywhere else in that file, the same node is used in place of the USE command. The following Group contains two spheres of different sizes:
Group {
    DEF sphere1 Sphere {
        radius 3
    }
    Scale {
        scaleFactor .5 .5 .5
    }
    USE sphere1
}
Instancing usually is used with groups, so as not to waste file space in describing the same object in multiple places.

VRML 1.0 and Java
So how does Java fit in with VRML? Java and VRML make an excellent pair, because the strengths of Java fit so well with the weaknesses of VRML.
VRML 1.0 has a few major flaws. One flaw is that VRML is oriented toward a single user. You upload a VRML scene in your browser and then prowl around in it, much as you examine a normal HTML Web page. This is all well and good, but it doesn't match one of the main requirements of cyberspace: that it be a place for people to interact. VRML 1.0 is a good way to distribute 3-D models on the Net, but it isn't a particularly effective communications medium otherwise.
Also, VRML 1.0 is static. Nothing in VRML 1.0 alone really allows objects to move and change. In the real world, things are changing all the time, and many objects react to all sorts of stimuli. Again, VRML 1.0 falls a bit short of being full cyberspace.
All of these problems are understandable; VRML 1.0 was an attempt to get something put together quickly and to serve as a base standard that people could begin to use. But more is needed, to address these failings. In particular, VRML worlds need multiple-user communications and behaviors-objects that react to the world around them.
Given the fact that VRML and Java came onto the Internet scene at about the same time, people naturally thought about melding the two, and several companies are trying to do just that in a variety of ways. The following section talks about one of those approaches: Liquid Reality.

Liquid Reality
Liquid Reality is a particularly relevant implementation of VRML, in that it is itself written in Java. Liquid Reality is based on Ice, a 3-D library for Java, and comes from Dimension X, a high-end Web-site company in California. You can find Liquid Reality at the following URL:
http://www.dimensionx.com/lr/
Liquid Reality implements VRML as a collection of Java classes. Due to the flexibility of Java, the product has remarkable extensibility. When the Liquid Reality browser encounters a node type that it doesn't understand, it goes back to the server and asks for a class that implements that node type. Thus, Liquid Reality provides extensibility to VRML much as the HotJava browser provides extensibility to HTML.
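Java's dynamic class loading is what makes that extensibility possible. The following sketch illustrates the general idea only (it is not Dimension X's code, and a real browser would first fetch the class file from the server):

```java
class NodeFactory {
    // Try to produce an object for an unknown node type by loading a
    // class of the given name and calling its no-argument constructor.
    static Object instantiateNode(String className) {
        try {
            Class<?> cls = Class.forName(className);
            return cls.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            return null;             // no implementation is available
        }
    }
}
```

A browser built this way treats the set of node types as open-ended: any class reachable by name can become a node.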
Liquid Reality provides Java with a large class library, which essentially covers all the capabilities of VRML. A class exists for each kind of field (such as SFColor or MFLong). Each class implements setValue() and getValue() methods; multi-valued fields also implement methods to return the number of values in the field and set a particular value.
Similarly, a class exists for each kind of node-for example, class CubeNode or SeparatorNode. The node's fields are public members of the class; when a field is changed, it notifies the node that contains it, so that the node can (if necessary) be redrawn. Each node also implements a set of standard methods, several of which deal with scene traversal.
By using Liquid Reality in a Java-based environment, you extend the capabilities of VRML as needed. You can have Java routines modify the scene graph on the fly, and you can add new node classes to make up for the gaps in VRML 1.0. Overall, Liquid Reality makes the basic VRML language considerably more interactive and interesting.
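The field-to-node notification described above can be sketched with stand-in classes. Only setValue() and getValue() are named in the text; every other name here is an assumption for illustration:

```java
class SceneNode {
    private boolean needsRedraw;

    void fieldChanged() {            // called by a field when it changes
        needsRedraw = true;
    }

    boolean needsRedraw() {
        return needsRedraw;
    }
}

class SFColorField {
    private float[] value = {0f, 0f, 0f};   // an RGB triplet
    private final SceneNode owner;

    SFColorField(SceneNode owner) {
        this.owner = owner;
    }

    public float[] getValue() {
        return value.clone();
    }

    public void setValue(float[] rgb) {
        value = rgb.clone();
        owner.fieldChanged();        // let the node schedule a redraw
    }
}
```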
These capabilities soon may be available to everyone who uses VRML, due to a new proposal for revising the language: Moving Worlds.

The Future: VRML 2.0 and Moving Worlds
Perhaps the greatest potential lies in the next revision of VRML. The VRML community has generally accepted that VRML 1.0 is just a first step and that the time has come to advance the standard to incorporate behavior and multiple-user capability, which means integrating VRML with some sort of programming language.
At the time when this chapter was written, the shape of VRML 2.0 had not yet been determined; various vendors of major tools had proposed several possibilities. Probably the most interesting of the proposals is Moving Worlds, submitted by Silicon Graphics and backed by several major VRML vendors, including Sony, Worldmaker, Netscape, Intervista, Paper Software, and Chaco. This section discusses Moving Worlds and how it relates to Java.
The Moving Worlds proposal is still being discussed, and the details are still up in the air, so this section does not provide the complete details; instead, the section discusses the broad scope of how this proposal changes VRML. For more details, see the Moving Worlds page at the following URL:
http://webspace.sgi.com/moving-worlds/
At the time when this chapter was written, Netscape had produced a beta implementation of Moving Worlds named Live3D, running with Netscape 2.0; the illustrations in this chapter were displayed with Live3D (which also works with VRML 1.0). For more information, see the Netscape site at the following URL:
http://home.netscape.com/
Bear in mind that the spec still is in a state of flux. The examples in this section should give you a good idea of how things work, but the details may have changed by the time you read this chapter.
Moving Worlds is a ground-up rewrite of VRML. The syntax and many nodes are mostly the same as in version 1.0, but the proposal attempts to address all the major failings of the first version. Moving Worlds adds new static capabilities (such as sound), prototyping, sensors and routes, and scripting, each discussed in the sections that follow.
Most of the new static capabilities aren't world-shaking, but they are useful. Elements such as sound can make a virtual world considerably more real to the user. Numerous small tweaks have been made in the various nodes, so don't assume that any given node is exactly the same in version 2.0 as it is in version 1.0.
A few nodes have been added to make creating and using objects easier. The Separator node from VRML 1.0 has been replaced by an enhanced Transform node, which has many of the characteristics of the old Separator but combines those capabilities with the transformations (such as size and orientation) that apply to the node. Also, a new node Shape has been added, which can be used to collect the geometry for an object. Shape has the fields appearance (which defines things such as the Materials for the object) and geometry (which defines the actual form of the object).
To understand how Moving Worlds differs from VRML 1.0, consider an example (courtesy of Silicon Graphics). The files in listings 29.1 and 29.2 show a simple scene that includes a red cone, a blue sphere, and a green cylinder. The first file shows how you might represent these objects in VRML 1.0; the second file shows how you might represent them in Moving Worlds.
Listing 29.1 A Simple Scene in VRML 1.0

#VRML V1.0 ascii
Separator {
    Transform {
        translation 0 2 0
    }
    Material {
        diffuseColor 1 0 0
    }
    Cone { }
    Separator {
        Transform {
            scaleFactor 2 2 2
        }
        Material {
            diffuseColor 0 0 1
        }
        Sphere { }
        Transform {
            translation 2 0 0
        }
        Material {
            diffuseColor 0 1 0
        }
        Cylinder { }
    }
}

Listing 29.2 The Same Scene, in VRML 2.0

#VRML V2.0 ascii
Transform {
    translation 0 2 0
    children [
        Shape {
            appearance Appearance {
                material Material { diffuseColor 1 0 0 }
            }
            geometry Cone { }
        },
        Transform {
            scaleFactor 2 2 2
            children [
                Shape {
                    appearance Appearance {
                        material Material { diffuseColor 0 0 1 }
                    }
                    geometry Sphere { }
                },
                Transform {
                    translation 2 0 0
                    children [
                        Shape {
                            appearance Appearance {
                                material Material { diffuseColor 0 1 0 }
                            }
                            geometry Cylinder { }
                        }
                    ]
                }
            ]
        }
    ]
}
The file in listing 29.2 is a bit wordier than the one in listing 29.1, but also a bit more elegant. The Shape nodes allow you to collect the appearance of an object and its geometry, making it a little clearer how things relate to one another. Combining the Separator and Transform nodes makes sense, because they usually are used together.

Prototyping
Prototyping is a welcome addition to VRML that any object-oriented programmer will understand. Recall that in VRML 1.0, the only way to reuse a node is with the DEF and USE commands. These commands do not take any parameters, and don't really provide any good way to create a new node class.
Prototyping addresses these concerns. With the new PROTO keyword, you can define a prototype object, which is conceptually close to a class. This prototype exposes certain fields (which are used internally to set the fields of the objects that comprise the prototype) and certain events. (Events are described later in this section.) Effectively, PROTO allows you to define new VRML node types, which you then can use just like the built-in ones.
Listing 29.3 shows part of the code for a simple bookshelf, whose back and bottom can be painted in colors specified as a field.
Listing 29.3 VRML 2.0 Prototype for a Bookshelf

PROTO Bookshelf [
    field MFColor backColor .5 .5 .5
    field MFColor baseColor .2 .2 .2
] {
    Transform {
        children [
            Shape {            # back of the bookcase
                appearance Appearance {
                    material Material { diffuseColor IS backColor }
                }
                geometry Cube { ... }
            },
            Shape {            # bottom of the bookcase
                appearance Appearance {
                    material Material { diffuseColor IS baseColor }
                }
                geometry Cube { ... }
            }
        ]                      # End of the children
    }                          # End of the Transform
}                              # End of the prototype
Bookshelf is a top-level Transform node (which acts as a kind of Group) that contains two children: the back and the base. The IS command in each Material node (such as diffuseColor IS backColor) tells the browser to take the value specified in the backColor field, and use it for the diffuseColor field of Material. Therefore, the fields of the prototype are propagated into the bookshelf's subsidiary objects and used appropriately.
Prototypes enable you to create libraries of reusable VRML objects.

Sensors and Routes
Sensors are objects that generate events, which are messages that can be passed between nodes. You can use sensors and events to make your worlds truly interactive. Kinds of sensors include the following:
BoxProximitySensor generates an event when the user gets close to it. ClickSensor detects mouse movement and clicks. CylinderSensor, DiskSensor, PlaneSensor, and SphereSensor detect mouse drags and map them to various kinds of rotations and movements, so that you can allow users to manipulate objects. Finally, TimeSensor generates clock ticks, which other nodes can use to change over time.
You can use the events generated by sensors by specifying them in ROUTE commands. ROUTEs are not nodes and are not part of the scene graph; they are commands to the browser that connect nodes.
A ROUTE connects an output event from one node to an input to another node. In general, most fields of most nodes have a corresponding set event. A Transform node, for example, can take set_rotation events, set_scale events, and so on. You can think of these events as being input events. By using a ROUTE, you can connect the outputs from a Sensor (or some other node that generates events) to these inputs and cause things to change. The following code causes object myCylinder to rotate:
DEF myCylinder Transform {
    children [
        DEF Rotator CylinderSensor { }
        Cylinder { ... }
    ]
}
...
ROUTE Rotator.rotation TO myCylinder.set_rotation
Rotator interprets mouse drags as meaning cylindrical rotation and generates events of type SFRotation. The ROUTE command sends these SFRotation events from Rotator to myCylinder, where they are used to set the rotation for myCylinder, turning it appropriately.
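In Java terms, a ROUTE behaves like a listener registration. The sketch below is a conceptual model with invented names, not VRML browser code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

class EventOut<T> {
    private final List<Consumer<T>> routes = new ArrayList<>();

    void routeTo(Consumer<T> eventIn) {   // the analogue of ROUTE ... TO ...
        routes.add(eventIn);
    }

    void fire(T value) {                  // deliver the event to every route
        for (Consumer<T> route : routes) {
            route.accept(value);
        }
    }
}

class RotatableTransform {
    float[] rotation = {0f, 1f, 0f, 0f};  // rotation axis (x, y, z) plus angle

    void set_rotation(float[] r) {        // the node's input event
        rotation = r;
    }
}
```

Wiring Rotator.rotation to myCylinder.set_rotation then amounts to rotator.routeTo(cylinder::set_rotation): every rotation event the sensor fires is delivered to the transform's input.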
Some more code is necessary to completely flesh out the example, but you get the idea: ROUTEs send events from the output of one object to the input of another.

Scripting
The most critical new node in the Moving Worlds roster is the Script node. Although the analogy is inexact, you can think of Scripts as being the VRML equivalents of applets.
A Script node has four special fields: behavior, scriptType, mustEvaluate, and directOutputs.
A Script node also has any number of eventIns, eventOuts, and fields. In general, the syntax is similar to that of the PROTO construct. The behavior field provides the class of the applet. scriptType indicates the language of the applet. (Moving Worlds is not restricted to Java, although that language is used the most.) mustEvaluate tells the browser whether it can buffer events or must send them to the applet immediately. directOutputs tells the browser whether this applet can send events directly to other nodes and receive events from other nodes.
The Script node receives events just as any other node does. The node passes those events to the related applet, which processes them and (usually) generates other events to change the state of the virtual world.

The API
The applet uses an API that is defined as part of the standard and that specifies how Java accesses the outside world. The API includes many routines, but they fall into a small number of categories.
First is a Field class, which extends Java's Object class by default and therefore inherits all the capabilities of Object. For each type of VRML field, a class defines a read-only (constant) Java version of that field. For the SFColor field type (which defines a single color value), the corresponding read-only class is:
class ConstSFColor extends Field {
    public float[] getValue();
}
These constant classes typically are used for input values from VRML to Java applets, so they have only the single routine getValue(). Notice that ConstSFColor uses an array of floats. A color is an RGB value, so it needs three floating-point numbers to represent it.
For each field, the API also defines a class which can be written as well as read:
class SFColor extends Field {
    public float[] getValue();
    public void setValue(float[] value)
        throws ArrayIndexOutOfBoundsException;
}
Single-valued Fields have only getValue() and setValue() methods. For multiple-value Fields, the options are slightly more complex, as the following code shows:
class MFColor extends Field {
    public float[][] getValue();
    public void setValue(float[][] value)
        throws ArrayIndexOutOfBoundsException;
    public void setValue(ConstMFColor value);
    public void set1Value(int index, float[] value);
}
In this case, you can set the value of the entire collection of colors from an array of arrays of floats (that is, an array of colors), set all of the colors from a ConstMFColor object (essentially copying the constant to a variable object), or set a single color within the list.
Notice that setValue() throws an exception. In general, all Fields define a setValue() method, and many of them can throw exceptions. Be prepared to write exception handlers when necessary.
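The calling pattern looks like the following sketch. MFColorStandIn is our own self-contained substitute for the API's MFColor (the real class lives in the vrml package and is not reproduced here); it exists only to show the getValue()/setValue() shape and the exception handling those signatures call for:

```java
class MFColorStandIn {
    private float[][] colors = new float[0][];

    public float[][] getValue() {
        return colors;
    }

    public void setValue(float[][] value) {
        for (float[] c : value) {
            if (c.length != 3) {     // every color must be an RGB triplet
                throw new ArrayIndexOutOfBoundsException(
                        "each color needs exactly 3 components");
            }
        }
        colors = value;
    }

    public void set1Value(int index, float[] value) {
        colors[index] = value;       // throws if index is out of range
    }
}
```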
The API currently defines two interfaces, one describing how Java interprets events, and one describing the basic capabilities of a node:
interface EventIn {
    public String getName();
    public SFTime getTimeStamp();
    public ConstField getValue();
}

interface Node {
    public ConstField getValue(String fieldName)
        throws InvalidFieldException;
    public void postEventIn(String eventName, Field eventValue)
        throws InvalidEventInException;
}
These interfaces are useful in fleshing out the rest of the interface, particularly the Script class, which is the superclass for most Java programs that will interact with VRML:
class Script implements Node {
    public void processEvents(Event[] events) throws Exception;
    public void eventsProcessed() throws Exception;
    protected Field getEventOut(String eventName)
        throws InvalidEventOutException;
    protected Field getField(String fieldName)
        throws InvalidFieldException;
}
All Scripts should be subclasses of Script. Notice that the Exceptions are left generalized, so that Scripts can tailor their exceptions to their needs. The methods of Script are shown and described below, in the examples.
The API also defines a Browser class, which has several methods. These methods examine the state of the virtual world, find out where the camera is pointing, set the major characteristics of the virtual world (how foggy it is, for example), and load new geometry from specified URLs.(d)Example
To better understand how Scripts and applets work, as well as how you build objects in Moving Worlds, consider the examples (courtesy of Silicon Graphics) shown in listings 29.4 and 29.5.
Listing 29.4  The TextureAnchor Node, and Related Java Class

PROTO TextureAnchor [
    field SFString name ""
    field SFString target ""
    field MFNode children [ ]
] {
    Group {
        children [
            DEF CS ClickSensor { },
            Group { children IS children }
        ]
    }
    DEF S Script {
        field SFString name IS name
        field SFString target IS target
        eventIn SFVec2f hitTexCoord
        behavior "TextureAnchor.java"
    }
    ROUTE CS.hitTexCoord TO S.hitTexCoord
}

TextureAnchor.java
------------------
import vrml.*;

class TextureAnchor extends Script {
    SFString name = (SFString) getField("name");
    SFString target = (SFString) getField("target");

    public void hitTexCoord(ConstSFVec2f value, SFTime ts) {
        // Build an image-map-style URL: name?x,y target=...
        String str = name.getValue() + "?" +
                     value.getValue()[0] + "," + value.getValue()[1] +
                     " target=" + target.getValue();
        Browser.loadURL(str);
    }
}
This example defines a somewhat improved version of the WWWAnchor node. The original anchor node was trying, more or less, to duplicate the concept of the HTML image map; the node had some problems with that procedure, however, because the application domain is a little different from HTML. In particular, you aren't clicking a single image, generally, but a texture that has been applied to a surface and that might repeat. What you really want is to know where in the texture you are, which is provided by this node.
TextureAnchor takes three parameters: name and target (strings used to build the URL, as the Java code shows) and children (the geometry to which the anchor applies).
The ClickSensor node can do a variety of things. The node generates a hitTexCoord event, for example, whenever the user clicks the mouse button. This event is a 2-D value, giving the location of the click on the texture below the mouse. (In other words, if you have a surface with a texture tiled on it, and you click that surface, the ClickSensor tells you the coordinates within the texture that you clicked.)
The work is done by the hitTexCoord() method in the Java class TextureAnchor. Notice that how this method gets called isn't obvious: the related event occurs in VRML, and the method seems to just get called. In fact, the method is being called indirectly.
Remember that the basic Script class (shown above) defines a processEvents() method. By default, this method takes the list of events that the browser has queued up and calls the appropriate method for each event, passing that method the data from the event and the time when the event occurred. Thus, when a hitTexCoord event occurs in the browser, the event is queued up and sent to processEvents(), which takes the vector value (an SFVec2f in VRML, now a Java ConstSFVec2f) and passes that value and the time to the hitTexCoord() method. processEvents() always passes the time along, because the time is so often useful.
Although you can override processEvents() to gain more control of your event processing, this model usually works well. You do not need to get involved with creating or reacting to events directly; you can simply assume that the connection between VRML and Java is established.
You also can define an eventsProcessed() method, which will be called after processEvents() has dealt with all of the pending events. This technique sometimes is useful to build up your data and then deal with all of it at the same time.
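This dispatch can be mimicked in miniature. The sketch below uses stand-in Event and Script classes (the browser normally supplies both) to show how a default processEvents() can route each queued event to the subclass method of the same name via reflection, then call eventsProcessed():

```java
import java.lang.reflect.Method;

// Stand-in for the browser-supplied event record.
class Event {
    final String name;
    final Object value;
    final double timeStamp;
    Event(String name, Object value, double timeStamp) {
        this.name = name; this.value = value; this.timeStamp = timeStamp;
    }
}

// Stand-in Script base class: the default processEvents() calls the
// subclass method whose name matches each event, passing the event's
// value and time stamp, then calls eventsProcessed() once at the end.
class Script {
    public void processEvents(Event[] events) throws Exception {
        for (Event e : events) {
            Method m = getClass().getMethod(e.name, Object.class, double.class);
            m.invoke(this, e.value, e.timeStamp);
        }
        eventsProcessed();
    }
    public void eventsProcessed() throws Exception { }
}

public class DispatchDemo extends Script {
    // Called indirectly when a "hitTexCoord" event is queued.
    public void hitTexCoord(Object value, double ts) {
        float[] coords = (float[]) value;
        System.out.println("hit at " + coords[0] + "," + coords[1]);
    }

    public static void main(String[] args) throws Exception {
        DispatchDemo script = new DispatchDemo();
        script.processEvents(new Event[] {
            new Event("hitTexCoord", new float[] {0.25f, 0.75f}, 0.0)
        });
    }
}
```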
So the hitTexCoord() method is invoked. This method builds a URL, using the usual image-map syntax (anything after a question mark is interpreted as coordinates). The name and target values were given in the instance of the node; you get those values by using the getField() method. The TextureAnchor class gets the value of the name field from the node, extracts the X and Y coordinates of the mouse click from the value that was passed in (an SFVec2f, remember), and combines the name and coordinates to make up the URL. Finally, hitTexCoord() calls Browser.loadURL() to actually load the new URL.
(d)Example
The next example is a little more sophisticated; it sends events as well as receives them, providing semi-intelligent control of a simple animation. The example shown in listing 29.5 describes a helicopter with spinning rotors. Notice that Chopper simply pulls the Rotor node over the network and uses it; this section is examining the control logic, which turns the rotors on and off when the user clicks the mouse, and assumes that Rotor works correctly.
Listing 29.5  The Chopper Node, and Corresponding Java

EXTERNPROTO Rotor [
    eventIn MFFloat Spin
    field MFNode children
]
"http://somewhere/Rotor.wrl"    # Where to look for the implementation

PROTO Chopper [
    field SFFloat maxAltitude 30
    field SFFloat rotorSpeed 1
] {
    Group {
        children [
            DEF CLICK ClickSensor { },    # Get click events
            Shape { ... body ... },
            DEF Top Rotor { ... geometry ... },
            DEF Back Rotor { ... geometry ... }
        ]
    }
    DEF SCRIPT Script {
        eventIn SFBool startOrStopEngines
        field maxAltitude IS maxAltitude
        field rotorSpeed IS rotorSpeed
        field SFNode topRotor USE Top
        field SFNode backRotor USE Back
        scriptType "java"
        behavior "chopper.java"
    }
    ROUTE CLICK.isActive TO SCRIPT.startOrStopEngines
}

DEF MyScene Group {
    DEF MikesChopper Chopper { maxAltitude 40 }
}

chopper.java
------------
import vrml.*;

public class Chopper extends Script {
    SFNode TopRotor = (SFNode) getField("topRotor");
    SFNode BackRotor = (SFNode) getField("backRotor");
    float fRotorSpeed = ((SFFloat) getField("rotorSpeed")).getValue();

    boolean bEngineStarted = false;

    public void startOrStopEngines(ConstSFBool value, SFTime ts) {
        boolean val = value.getValue();

        // Don't do anything on mouse-down:
        if (val == false)
            return;

        // Otherwise, start or stop the engines:
        if (bEngineStarted == false)
            StartEngine();
        else
            StopEngine();
    }

    public void SpinRotors(float fInRotorSpeed, float fSeconds) {
        MFFloat rotorParams = new MFFloat();
        float[] rp = new float[4];

        rp[0] = 0; rp[1] = fInRotorSpeed; rp[2] = 0; rp[3] = fSeconds;
        rotorParams.setValue(rp);
        TopRotor.postEventIn("Spin", rotorParams);

        rp[0] = fInRotorSpeed; rp[1] = 0; rp[2] = 0; rp[3] = fSeconds;
        rotorParams.setValue(rp);
        BackRotor.postEventIn("Spin", rotorParams);
    }

    public void StartEngine() {
        // Sound could be done either by controlling a PointSound node
        // (put into another SFNode field) OR by adding/removing a
        // PointSound from the Separator (in which case the Separator
        // would need to be passed in an SFNode field).
        SpinRotors(fRotorSpeed, 3);
        bEngineStarted = true;
    }

    public void StopEngine() {
        SpinRotors(0, 6);
        bEngineStarted = false;
    }
}
EXTERNPROTO, at the top of the listing, loads the Rotor node over the network, and provides a brief spec for the visible events and fields of the Rotor. Spin describes the orientation and speed at which the rotor turns; the children describe the geometry of the rotor.
PROTO describes a Chopper prototype-essentially, a class. Chopper takes two parameters: maxAltitude (don't worry about it for this example) and rotorSpeed (how fast the rotors should turn).
Group at the top of Chopper contains the geometry of the helicopter and the rotors, and ClickSensor detects when the user clicks the helicopter. Then come the definitions of the Script node and its fields, as well as a ROUTE command from ClickSensor to that Script, so that the Script receives events when the user clicks the mouse.
The corresponding Java class Chopper starts out by getting the Fields with which it is concerned: pointers to the Rotor nodes and the overall rotor speed.
When the user clicks the mouse, the ClickSensor detects that action and generates an isActive event (two events, actually-a FALSE event when the mouse button goes down and a TRUE event when it goes up again). That event is routed to the Script's startOrStopEngines event. As described in the preceding example, the event gets queued up for processEvents(), which then dispatches the data to the startOrStopEngines() method.
The method checks the value that has been passed in. If the value is FALSE, the user just clicked the mouse button down; the method doesn't do anything in that case, but waits for the user to release the button. If the value is TRUE, the button has been released, so the method continues. The method checks the bEngineStarted variable, which keeps track of whether the rotors are running, and calls either StartEngine() or StopEngine(). These methods, in turn, call SpinRotors(), which has a speed of 0 (meaning that the rotors should stop) or the speed that the object specified (to turn the rotors on).
SpinRotors() builds up a rotorParams variable-an MFFloat that corresponds to the elements of the Spin eventIn on the Rotor object. SpinRotors() uses this variable to send Spin events to both TopRotor and BackRotor, turning them on or off as appropriate.(c)And On to Cyberspace
A full example of multiple-user cyberspace would be too involved to present in this chapter, but the outline of how one would work is clear. As the preceding examples demonstrate, you can use Java to modify the virtual world in arbitrary ways. The examples looked only at user clicks, loading files, and starting animations, but the sky is essentially the limit; you can use events to set virtually any characteristic of the scene graph.
The next step is obvious: hook people together. Using Java, you can create communication links among multiple users who are viewing the same world. You could use those links to pass information back and forth. Each user could be embodied by an avatar-a visible geometric representation of where the user is and what he or she is doing. Although avatars in virtual-reality systems can be quite fancy, looking like lifelike humans, they also can be quite simple-basic geometric objects, chosen by the user, that are only complex enough to show which way the user is facing.
You could track each user's movement by attaching events to the user's viewpoint and send those movements to the other users in the same room, moving avatars just as the camera moves. If you combine these avatars with a simple chat or voice interaction, you begin to get a useful area in which people can meet.
With a little imagination, even more capabilities are possible. The objects in rooms can have dynamic characteristics, for example, and the communications links can keep track of these characteristics. Thus, when one user flicks on a light switch in a room, this action sends a message to the browsers of all the other users who are present, telling their lights to turn on as well. The synchronization issues aren't simple, but they are solvable, now that you have the necessary tools.
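As an illustration of the idea only (not a real network protocol), the sketch below uses a toy in-process hub, standing in for the communications link, to rebroadcast one user's state change to every other browser in the room:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A toy hub standing in for the network link between browsers.
// Each connected user registers a listener; any state change one
// user makes is rebroadcast to everyone else in the room.
class RoomHub {
    private final List<Consumer<String>> browsers = new ArrayList<>();

    public void join(Consumer<String> browser) { browsers.add(browser); }

    public void broadcast(Consumer<String> sender, String message) {
        for (Consumer<String> b : browsers)
            if (b != sender)
                b.accept(message);  // deliver to every other browser
    }
}

public class LightSwitchDemo {
    public static void main(String[] args) {
        RoomHub room = new RoomHub();

        Consumer<String> alice = msg -> System.out.println("Alice's browser: " + msg);
        Consumer<String> bob   = msg -> System.out.println("Bob's browser: " + msg);
        room.join(alice);
        room.join(bob);

        // Alice flicks the light switch; Bob's world updates too.
        room.broadcast(alice, "light:on");
    }
}
```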
For technical support for our books and software contact support@mcp.com
Copyright ©1996, Que Corporation