rdsgen.doc | 1993-12-17
RDSGEN.C ver. 1.1B
Random Dot Stereogram Generator
About RDSGEN:
RDSGEN creates Single Image Random Dot Stereograms (SIRDS) from
"depth" files that can be created with various other programs.
RDSGEN is designed to take type 2 Targa input files from sources such
as POVRAY and POLYRAY ray tracing programs. It can also take
non-interlaced GIF input files from various programs including
FRACTINT. The program can automatically detect the input type.
RDSGEN will output either a type 3 (gray-scale) Targa file or a GIF
file. The default output type is GIF.
RDSGEN is capable of displaying 320x200, 640x350, or 640x480 images
on a VGA-compatible graphics adapter. Sorry, no printer support yet.
RDSGEN uses the red and green color attributes from the input image
to calculate the relative depth value. This value is used to generate
the depth projection. This depth encoding technique is consistent
with output created by the POLYRAY ray tracing program. Depth
encoding can be simulated with varying degrees of success with other
ray tracing programs as well.
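As a sketch of what that encoding implies (the function name and types
here are mine, not RDSGEN's), the red byte supplies the most significant
half of the depth value and the green byte the least significant half:

```c
#include <stdint.h>

/* Combine the red (most significant) and green (least significant)
   channel bytes into a single 16-bit relative depth value.  This byte
   layout matches the POLYRAY depth-encoding convention described
   later in this document; blue is ignored (always zero there). */
unsigned int depth_from_rgb(uint8_t red, uint8_t green)
{
    return ((unsigned int)red << 8) | green;
}
```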
RDSGEN can generate GIF files in Black and White, with 16 random
colors, or with up to 256 colors coming from a background bit map.
GIF background maps can come from any suitable source.
Using RDSGEN:
To use RDSGEN type:
RDSGEN input.ext output.ext options
where: input.ext - full filename (path and extension) of the
input file. Data type must be either TGA
or GIF.
output.ext - full filename (path and extension) of
the output file. Extension should be
GIF unless the -t option is used in
which case the extension should be TGA.
options - Can be entered in any order following
output file name. The valid program
options follow.
a# : Algorithm number, either 1, 2, or 3.
b### : Adaptive background replacement
value, range 1 to 64.
c : Generate random color background.
Uses the standard EGA/VGA palette.
d### : Density value, range 0 to 100.
Default value is 50. Higher values
produce darker images.
f## : Depth of field value, range 1 to 16.
Limits depth projection to one over
the field number times the strip
width. Default value is 2.
i : Add indexing triangles to RDS image.
Indexing triangles appear at top
of the picture and can help some
people align their eyes correctly.
mc.ext : GIF file name to be used as a
background map. Background image
must have the same dimensions as
the image being generated.
n : Produces a precedence reversed
(negative) image.
r##### : Seed number for random generator.
Valid values are in the range 1
to 32767. The default value is 1.
Use this option if you are getting
tired of the same old background.
s### : Strip count, range 1 to width of
input image. Default value is 8
which creates a strip 1/8 the
width of the screen.
t : Generate type 3 TGA output file.
v : Display image on video screen while
processing. Only CGA, EGA, or VGA
graphics are available at this time.
RDS algorithms:
RDSGEN 1.1 contains three generating algorithms. Algorithm one has come
to be known as the "Emperor's New Clothes" algorithm because it was
first documented on GRAPHDEV within a program by the same name.
It can be more accurately attributed to the team of Tyler and Clarke
who first reported the technique in 1990 and coined the term autostereogram.
RDSGEN 1.0 modified this algorithm so that depth ranges could be scaled
so as not to exceed 3/4 of the strip width. This turned out to be a
less than adequate ratio for best viewing. RDSGEN 1.1 defaults to 1/2 the
strip width and provides a depth-of-field control for greater flexibility.
To combat the persistent echo effects, I added the "right-most-left check".
This technique, described below, was first developed for algorithm two but
it's so fast and effective, I had to retro-fit it into algorithm one.
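The basic scheme can be sketched as a single scan-line loop (a minimal
illustration of the technique, not RDSGEN's actual code; the dimensions
and depth scaling reflect the 1.1 defaults described above):

```c
#include <stdlib.h>

#define WIDTH 320
#define STRIP (WIDTH / 8)   /* -s8: strip is 1/8 of the image width */

/* One scan line in the style of algorithm one: each pixel is copied
   from a point one strip width back, reduced by a shift that grows
   with depth.  depth[] runs from 0 (background) up to STRIP/2
   (nearest), matching the 1.1 default of limiting the projection to
   half the strip width. */
void sirds_line(const int *depth, unsigned char *pix)
{
    int x;
    for (x = 0; x < WIDTH; x++) {
        int link = x - STRIP + depth[x];   /* nearer points separate less */
        if (link < 0)
            pix[x] = (unsigned char)(rand() & 1);  /* seed strip: random dots */
        else
            pix[x] = pix[link];
    }
}
```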
The second algorithm is something of my own invention. It was created as
a response to some of the shortcomings of algorithm one. To counter
the skewing effect (left shift), the transformation is applied to pixels
at points equidistant laterally from the perceived point's position in space.
In other words, instead of copying a dot from some offset back to the
current position, copy the dot from half the offset back to half the offset
forward. This centers the perceived image, thus avoiding the left shift.
Algorithm two also uses the non-linear offset calculation found in algorithm
three. This produces a more realistic depth projection which helps fool
your brain, and that's really what it's all about.
To avoid the echo or false image effect, I developed the "right-most-left
check". The echo occurs when extreme foreground surfaces are followed
immediately by the background plane. When the foreground feature is
being calculated, the offset between corresponding pixels is very small
(around half the strip width). The offset for the background plane
is exactly one strip width all the time. When the transition from
foreground to background occurs, groups of pixels that are already
encoded with foreground depth information are copied to the right,
repetitively. This is observed as bands similar to and at the same depth
as the right edges of the objects in the RDS. The transition causes
the current left pixel (d) to be further left than the previous left
pixel (n). By keeping track of how far right the left pixel has
proceeded during the scan, it is a simple matter to detect the transition
point. If, when the transition occurs, a random or background pixel is
substituted for the right pixel, the echo is eliminated and the problem
is solved.
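Putting the centered copy and the right-most-left check together gives
something like the following (an illustrative sketch under my own names
and bounds, not RDSGEN's actual code):

```c
#include <stdlib.h>

#define WIDTH 320
#define STRIP (WIDTH / 8)

/* Algorithm-two style pass over one scan line: the two dots for a
   point sit half the separation to either side of it, keeping the
   perceived image centered instead of skewed left.  The
   right-most-left check suppresses echoes: if the left partner falls
   left of any left partner already used, the pair would re-copy
   foreground dots, so the fresh random dot is kept instead. */
void sirds_line2(const int *depth, unsigned char *pix)
{
    int x, rightmost_left = -1;
    for (x = 0; x < WIDTH; x++)
        pix[x] = (unsigned char)(rand() & 1);      /* start fully random */
    for (x = 0; x < WIDTH; x++) {
        int sep   = STRIP - depth[x];              /* nearer => smaller sep */
        int left  = x - sep / 2;
        int right = x + sep / 2;
        if (left < 0 || right >= WIDTH)
            continue;
        if (left < rightmost_left)                 /* transition detected:  */
            continue;                              /* keep the random dot   */
        rightmost_left = left;
        pix[right] = pix[left];                    /* constrain the pair */
    }
}
```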
Algorithm three is based on a routine by Thimbleby, Inglis, and Witten.
This algorithm employs a two pass technique, first determining the
relationship between the pixels then encoding them with appropriate color.
The relationship between pixels is based on the requirement that two
dots (one for each eye) be perceived in combination as a single dot.
Both dots must therefore be colored the same. The links between
related (or in the authors' terms "constrained") pixels are maintained in
the same[] array. The algorithm I've coded is the same as the original
in this function, though I have yet to perceive any particular quality of
the output image that is improved by this mechanization.
The greatest image improvement actually comes from the hidden-surface
elimination routine. In essence, if a point cannot be seen by both
eyes (because a closer surface intervenes) then the two dots don't
have to be the same color (constrained). Unfortunately the computational
overhead of this routine is outright depressing. I've taken the liberty
of eliminating the floating point math and employing an incremental
technique that takes the big divides out of the loop. I also replaced
their left-right pointer juggle (wholly absurd) with a straight-forward
test. The end result is a routine that is much faster than the original
but still the slowest by far of the three algorithms presented. The
resulting image is, however, quite good.
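The linking mechanism the text describes can be sketched as follows.
This is a skeleton of the two-pass same[] technique only; the
hidden-surface removal and the incremental separation math discussed
above are omitted, and the names are mine:

```c
#include <stdlib.h>

#define WIDTH 320
#define STRIP (WIDTH / 8)

/* Two-pass generation in the style of Thimbleby, Inglis, and Witten:
   pass one records which pixels are constrained to match in same[],
   pass two colors them.  Each linked entry points left, so a single
   left-to-right coloring pass resolves every chain. */
void sirds_line3(const int *depth, unsigned char *pix)
{
    int same[WIDTH];
    int x;
    for (x = 0; x < WIDTH; x++)
        same[x] = x;                       /* each pixel starts unconstrained */
    for (x = 0; x < WIDTH; x++) {
        int sep   = STRIP - depth[x];
        int left  = x - sep / 2;
        int right = left + sep;
        if (left >= 0 && right < WIDTH)
            same[right] = left;            /* right pixel must copy the left */
    }
    for (x = 0; x < WIDTH; x++)
        pix[x] = (same[x] == x) ? (unsigned char)(rand() & 1)
                                : pix[same[x]];
}
```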
Using RDSGEN with FRACTINT:
FRACTINT creates a GIF file that can be read right into RDSGEN. The
trick is to make the color information meaningful. To do this simply
create the fractal image with a color map composed entirely of shades
of red. A color map that cycles through the red scale can be made
with an ASCII editor. Once you've created a color map you can create
your fractal. To use the new color map simply enter color cycling
mode <c> then hit <l> for load. You will be prompted with a list of
color maps from the current directory. Select the new color map and
you're done. The file can now be saved as a GIF. In batch mode just
add map= and the new color map name to the command line. Similar
results are possible using the grey.map already supplied with FRACTINT.
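A red-scale map can also be generated programmatically. The sketch
below assumes the plain FRACTINT .map layout of one "r g b" line per
palette entry, values 0 to 255; the file name is just an example:

```c
#include <stdio.h>

/* Write a 256-entry FRACTINT-style color map that ramps through the
   red scale, so palette index maps directly to depth.  Returns 0 on
   success, -1 if the file cannot be created. */
int write_red_map(const char *name)
{
    FILE *f = fopen(name, "w");
    int i;
    if (f == NULL)
        return -1;
    for (i = 0; i < 256; i++)
        fprintf(f, "%d 0 0\n", i);     /* red only: index == red level */
    fclose(f);
    return 0;
}
```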
It is also possible to have FRACTINT create a Targa output file.
Unfortunately this can only be done for 3D projections. If you
select Light Source Fill you will be prompted with the Color/Mono
option on the next screen. If you select 1, FRACTINT will create a
24-bit Targa file. In batch mode use fullcolor=yes with 3d=yes.
Using RDSGEN with POLYRAY:
Virtually any scene that POLYRAY can make, RDSGEN can convert into an
RDS. Alexander Enzmann was clever enough to add a little option to
POLYRAY that causes the output to contain depth data instead of color
data. The color for each pixel is calculated as a function of the
distance from the object to the viewer. The most significant portion
of the depth information is contained in the 8 bits of red intensity.
The least significant depth data is in the green byte. The blue byte
is always zero. To use this feature simply enter -p z as a command
line option. Since RDSGEN cannot handle compressed TGA files, you
should add the -u option as well.
The big problem with POLYRAY depth files is that the background is
encoded with high values (0xFFFF). Because this depth is usually way
out of the range of the objects in the scene, the depth of field
for those objects is reduced to virtually nothing. In other words
your images become flat. To combat this problem you can either
provide a background object that completely fills the screen and is
in closer proximity to the other objects, or you can use RDSGEN's
adaptive background feature. This option allows you to replace the
high values with another, more reasonable, background depth. The new
background depth is determined as a function of the depth ranges of
the other objects in the image based on the value passed with the -b
option on the command line. The new background is deeper than the
deepest visible point in any object by a fraction of the distance to
the least deep point in any object. For example, -b2 would replace
all 0xFFFF depth values with a new value deeper than the deepest
object point by 1/2 the depth range of the objects. This may seem
awkward but something had to be done, trust me. Another problem
happens with anti-aliasing turned on. The nice depth values around
the edges are averaged with the bogus background values, producing
something in between. This ruins the depth-of-field calculations
and adds erroneous halos around your objects.
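My reading of the -b calculation above can be summarized in one line of
arithmetic (an interpretation of the description, not RDSGEN's exact
code; larger depth values mean farther away, as in the 0xFFFF far plane):

```c
/* Replacement background depth for the -b option: the new background
   sits deeper (larger value) than the deepest object point by 1/b of
   the objects' total depth range.  So -b2 pushes the background back
   by half the scene's depth. */
unsigned int adaptive_background(unsigned int shallowest,
                                 unsigned int deepest,
                                 unsigned int b)
{
    return deepest + (deepest - shallowest) / b;
}
```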
Using RDSGEN with POVRAY:
Creating RDS images from POVRAY output is not as simple as the
POLYRAY stuff. Maybe somebody will add the depth encoding feature
to POV someday. Until then, the best technique available is to use
black fog. The fog command gradually decreases the visibility of the
objects in the scene in direct proportion to the distance from the
viewer. Pretty much just what we need. The command needs a color
vector and a distance. The color should be Black (<0,0,0>) so that
it does not interfere with the color of the objects in the scene.
The distance should be calculated so that the fog obliterates the
scene just beyond the range of the objects being displayed. If you
use objects about the origin and a point of view along an axis (say Z)
this is a simple matter, otherwise it's a test of the Pythagorean theorem.
All the objects should be the same color, preferably white, although red
and even green work. Use full ambient, no diffuse, no phongs, and no
light source(s). Output produced in this way has been shown to
produce perfectly viewable results.
POVRAY is also a good source for background bit maps. First, create a large
flat object that completely fills the screen. Select a texture for
this object that will become your background fill pattern. Best
results are achieved with the more random patterns, stones and
granites. Use full ambient, no diffuse, no phongs, and no glossy
finishes. Most stone textures require some lighting to look correct,
but you can't afford any highlights. Generate the output file at the
same resolution as your depth file. Using one of the available
utility programs, convert the TGA produced by POVRAY to GIF format.
You now have a 256-color background map for RDSGEN. Add the map file name
to the -m switch and RDSGEN will make a full-color GIF output file.
Because everybody wants money for their Borland SVGA drivers (BGI
files), I cannot include the required viewing routines and still give away
the code. You'll have to use a GIF viewer on the output file to see what
you've made.
I'm still working on printer support, so you're on your own there.
The improved algorithms and the addition of color to the process have
sidetracked me for the time being. I hope you can appreciate that.
So there it is, try it out. Please report any problems or bugs.
Comments will be accepted, money will not.
Fred Feucht
74020,407