Path: senator-bedfellow.mit.edu!bloom-beacon.mit.edu!news.kei.com!ddsw1!meaddata!news
From: davidp@meaddata.com (David Pledger)
Newsgroups: comp.databases.sybase,comp.databases,comp.answers,news.answers
Subject: comp.databases.sybase Frequently Asked Questions (FAQ)
Supersedes: <sybfaq_748286795@meaddata.com>
Followup-To: comp.databases.sybase
Date: 18 Oct 1993 14:49:54 GMT
Organization: Strategic Data Systems, Dayton, OH
Lines: 3203
Approved: news-answers-request@mit.edu
Distribution: world
Expires: 1 Dec 1993 14:49:50 GMT
Message-ID: <sybfaq_750955790@meaddata.com>
NNTP-Posting-Host: ibis.meaddata.com
Summary: This monthly posting contains a list of Frequently Asked Questions
about the Sybase Relational Database Management System (RDBMS).
Keywords: Sybase,RDBMS,FAQ,Frequently Asked Questions,isql,T-SQL
Xref: senator-bedfellow.mit.edu comp.databases.sybase:3698 comp.databases:29845 comp.answers:2343 news.answers:13686
Archive-name: sybase-faq
Last-modified: 1993/09/16
Version: 1.2
======================================================================
S Y B A S E
F R E Q U E N T L Y A S K E D Q U E S T I O N S
Version 1.2
Copyright 1993 by David W. Pledger, All rights reserved.
======================================================================
Table of Contents
======================================================================
1. Introduction
1.1. General Information
1.2. Summary of changes
1.3. Posting Hints
1.4. Archive information
1.5. Acknowledgments
1.6. Terms and Abbreviations
1.7. Sybase Corporate Information
Q1. How can I get in touch with Sybase?
Q2. Who are my local user groups and how can I get in touch with
them?
2. Database Server
2.1. General Questions
Q1. What periodicals exist for Sybase?
Q2. What's a good book about Sybase?
Q3. Does Sybase support the X/Open XA interface?
Q4. Does Sybase support ODBC (Microsoft Windows Open Database
Connectivity)?
Q5. What are some of the size limitations of Sybase?
2.2. Sybase ISQL
Q1. How do I prevent isql output from wrapping around at 80
columns?
Q2. How do I send isql output to a file? The -o switch doesn't
work?
Q3. Can I submit a multiline statement as input to isql without
creating a file with the commands in it first?
Q4. How do I prevent the password from being displayed when
someone does a UNIX *ps* command?
Q5. I want to add some new features to isql. Does anyone have
the source code?
2.3. Sybase Transact-SQL
Q1. What exactly do sp_primarykey, sp_foreignkey, and
sp_commonkey do?
Q2. I want to write a new system stored procedure that gives me
information not provided by the existing stored procedures. How
do I make these available to all users like the system stored
procedures?
Q3. How can I do a "row level select" (built-in "if" function)
without having to create a temporary table, etc.?
Q4. How do I use a table name as a parameter to a stored
procedure, which will then run a query on the specified table?
Q5. Can you change the definition of a table to prohibit nulls
once you've defined it to permit them?
Q6. Is there a simple way to solve the Sybase "matching
quotes" requirement in a character field?
Q7. How can I do a case-insensitive search?
Q8. How do wildcards used for pattern matching work in the
context of the LIKE operator?
Q9. How do I put a unique serial number on a table?
Q10.Exactly when does a trigger fire?
Q11.Is there an easy way within the server to determine how many
days are in the current month?
Q12.How can I recursively retrieve the rows in a bill of
materials type problem?
Q13.What mechanism does Sybase offer to control concurrency when
multiple users are doing *select - think - update* kind of
operations?
Q14. In what order are defaults, rules, triggers, etc.
enforced/executed?
2.4. Sybase Bulk Copy
Q1. When using BCP to copy a database, is the copy equivalent to
the original in terms of performance?
Q2. Can BCP load null dates?
2.5. Sybase Backup and Recovery
Q1. How can I dump more than one database to a single tape?
2.6. Upgrading the Sybase Server
Q1. I'm upgrading from version <x> and/or operating system <p>
to version <y> and/or operating system <q>. Any advice?
2.7. Sybase Security
Q1. What different mechanisms are there to control Sybase
security?
2.8. Sybase Database Administration
Q1. Why does the transaction log on the model database keep
filling up?
Q2. Why does my transaction log fill up even when I have
allocated lots of space for it?
Q3. Is there a way to turn off logging altogether? How about
putting the transaction logs on `/dev/null'? How does tempdb
avoid logging?
Q4. Is there any reason not to have `truncate log on checkpoint'
turned on for the model database?
Q5. Why doesn't the Sybase kill command work?
Q6. What are some of the undocumented features of DBCC?
Q7. Why don't the dbcc commands produce any output on my screen?
Q8. What are the trace flags used for and what are some of the
more common flags?
Q9. Is there a way to accurately estimate how much space a table
and its indexes are going to take?
Q10.What causes a database to be marked SUSPECT and can I
recover a database that comes up marked `SUSPECT'?
Q11.My database tables often get locked by the client's hung
workstation. Is there a way that I can unlock those locked
tables?
Q12.Does the server sort order affect performance? Is binary
sort order the fastest way?
Q13.Does Sybase have a memory limit?
2.9. Sybase Performance Tuning
Q1. How much overhead do variable size and NULL columns require?
Q2. How are null values stored? How does Sybase distinguish
between an integer and a null value for an integer, and so forth?
Q3. How are text and image types stored?
Q4. How do I interpret the cryptic output of 'set showplan on'?
Q5. How does the query optimizer work? Does the ordering of
tables in the from clause or the conditionals in the where
clauses affect the performance of the query?
Q6. Can I force the optimizer to access tables in a certain
order or to use a particular index?
Q7. Does dropping an index cause recompilation of a stored
procedure?
Q8. Does the time for a select that yields 1000 rows from a
table of 10,000 differ much from the same select when the table
contains 100,000 rows?
Q9. Is there a way to gather performance statistics besides
using sp_monitor?
Q10.Does Sybase do page or row level locking?
Q11.What types of locks can be issued and what do they mean?
Q12.What exactly does the HOLDLOCK keyword do?
Q13.Why, when a stored procedure is forced to compile, does the
query plan grow eventually causing the stored procedure to crash?
Q14.What is a segment and why should I use one?
Q15.What determines whether an update will be performed 'in
place' or deferred?
Q16.How does altering a database table to add a new column
affect the storage of the affected table?
How do I delete a column from a table?
2.10. Sybase Network Issues
Q1. How can I make Sybase talk to two separate ethernet
interfaces on our server?
Q2. Can I use Sybase over PPP (Point-to-Point Protocol)?
3. Sybase Core Applications
3.1. Open Client
Q1. Has anyone implemented a C++ class library for Sybase?
Q2. How can I use the Sybase Open Client with my C++ code?
Q3. Which C compiler(s) is the DOS version of the Open Client
software compatible with?
3.2. Open Server
3.3. APT
Q1. Is it possible to place other visible fields on top of
invisible fields, or do I have to have big open spaces?
3.4. DWB
3.5. Report Writer
Q1. How can I load the reports into a production db in a batched
(non-interactive) way?
3.6. Gain Momentum
4. Third Party Applications
4.1. User Interface/Client Applications
1. JYACC JAM/DBi
2. Uniface
3. Power Builder (Microsoft Windows only)
4. Microsoft Access/Visual Basic
5. DataEase
6. Unify
7. Focus
8. ObjectView
9. Q+E
10.Superbase
11.R&R Report Writer for Windows, SQL Edition
12.CorVu
4.2. Class Libraries
1. DBh++
2. C++ API
3. Persistence
4.3. Other Miscellaneous Products and Tools
1. SybPERL
2. SQL-BackTrack
3. dbViewer
4. Xsybmon
5. Sybtcl
======================================================================
1. Introduction
----------------------------------------------------------------------
1.1. General Information
This document is copyrighted. Compiling and refining this document
monthly requires a significant effort on my part. My main reason for
copyrighting this document is to protect my sweat-equity and be given credit
for this effort. Copy and distribute this document freely under these three
guidelines: (1) Include my name, the copyright notice, and this paragraph in
all reproductions. (2) Do not under any circumstances distribute this
document for profit. (3) Do not publish this document or any portion of this
document in any journal, trade publication, etc. without my prior written
consent. I do not guarantee or warrant that information contained in this
document is accurate.
The intent of this document is to answer many of the frequently asked
questions about the various products that Sybase offers. The emphasis of this
FAQ is on the database server, primarily because that is my expertise. I will
include questions and discussion on Open Server, Open Client, and other
topics as I receive pertinent information.
There are undoubtedly typos, mistakes, and other misinformation which I am
certainly trying my hardest to eliminate. Comments and corrections are
welcomed and encouraged. Please direct all comments to davidp@meaddata.com.
Include the phrase `Sybase FAQ' as the subject of your message. I will
include your changes as appropriate and give credit where credit is due.
As an added bonus, if you provide any new information or corrections,
I will add you to a mailing list to automatically receive future
releases of this document as soon as they are available.
This article is posted to the newsgroup comp.databases.sybase monthly. It
is also cross-posted to the newsgroups news.answers, comp.databases, and
comp.answers.
Please send me a note if you have any particular topic you would like to see
addressed or any comments on the content or organization of this document.
----------------------------------------------------------------------
1.2. Summary of changes
This is the Sybase FAQ, version 1.2. It supersedes version 1.1, issued in
mid-September, 1993.
As you can see, I have once again reorganized the document in an attempt
to find the best presentation. (All of this change will stop when you
see release 2.0.)
My apologies to all those individuals who provided information that
didn't make it into the FAQ. It has been a busy month. I will
include it in the next release (in particular, the client applications
section has changed very little). I'll also divide this FAQ into multiple
parts next time, since it is getting a little too large.
This is a living document and is still in its infancy. There are significant
changes each month, so I will not attempt to list them at this time. As the
document becomes more stable, I will include a detailed list of changes.
----------------------------------------------------------------------
1.3. Posting Hints
Before posting to comp.databases.sybase, please consider that many people in
Netland are reading News using an 80 column display. If you set your right
margin to 75 it will make your article much easier to read for those people.
You may want to refer to the newsgroup news.newusers.questions for
translations of IMHO, IMO, BTW, wrt, 8-), etc.
----------------------------------------------------------------------
1.4. Archive Information
This FAQ is archived at the ftp site: straylight.acs.ncsu.edu:/pub/sybase
----------------------------------------------------------------------
1.5. Acknowledgments
The following individuals have made significant contributions toward the
compilation of this document. I have received many useful comments
from individuals at Sybase that have greatly improved the content
and accuracy of this document. Many thanks to all of you!
Name Email Address
---------------------- ---------------------------
David Pledger davidp@meaddata.com
Ben von Ullrich, Sybase
Howard Michalski, Sybase
Elton Wildermuth, Sybase
Tom Warfield vnunet!twarfield
David Joyner nsysdbj@acs.ncsu.edu
----------------------------------------------------------------------
1.6. Terms and Abbreviations
The following list contains terms and abbreviations that are used within this
document.
APT - Application Programming Toolkit (Sybase Product)
dbid - Database Id
DDL - Data Definition Language (SQL Create & Index Statements)
DML - Data Manipulation Language (SQL Insert, Update, Delete, &
Select)
DSS - Decision Support Systems
DWB - Data WorkBench (Sybase Product)
EBF - Emergency Bug Fix
Gain Momentum - a multimedia tool that supports audio, video, animation,
and is a front-end to SQL databases. (Sybase product)
GAM - global allocation map
indid- Index Id
LFS - logical file system
LRU - least recently used
OAM - Object Allocation Map
OLTP - On Line Transaction Processing
objid - Object Id
PSS - Process Slot Structure
Rid - Row Id
Rollup - Collection of bug fixes issued as an upgrade release.
SPID - Server Process Id
sproc - Stored Procedure
SQR - Structured Query Report Writer (Sybase Product)
STS - Sybase Technical Support
T-SQL - Transact SQL, Sybase's version of SQL with extensions.
vdevno- virtual device number
----------------------------------------------------------------------
1.7. Sybase Corporate Information
++++++++++++++++++++++++++++++++++++++++++++++
Q1. How can I get in touch with Sybase?
Answer: Sybase's main phone number for all customer inquiries
is 1-800-8-SYBASE (1-800-879-2273)
Europe
---------------------------------------------
[partial list]
France 33-1-42'18'42'18
Germany 49-211-59760
Netherlands 31-3465-82999
UK 44-628-597100
Other European inquiries 31-3465-82999 (European Headquarters)
Canada 416-566-1803
Japan 81-3-5280-1141
For other Asia, Pacific, and Latin America inquiries, 510-596-3500
Corporate Address
Sybase, Inc.
6475 Christie Avenue
Emeryville, CA 94608
Phone: 1-(510) 596-3500 (corporate headquarters)
FAX: 1 (510) 658-9441
Dial Up Service
INSIGHT 1-510-601-4991. To register, dial up the above number
with your computer/modem and have your customer number (from
any Sybase software packing list) handy. Next time you call
Technical Support or Customer Service, ask for your contact ID.
Only registered technical support contacts are allowed to dial in.
Support Renewals
1-510-596-4524
Customer Service
1-510-596-3333. This is the main customer service line. They can
answer or direct any non-technical, non-support renewal questions and
expedite service.
++++++++++++++++++++++++++++++++++++++++++++++
Q2. Who are my local User Groups and how can I get in touch
with them?
Answer: There are a number of groups in different areas of the country,
some of which include...
BAWASLUG: Baltimore / Washington Area Sybase Local User's Group
Meets Quarterly
Contact: Unknown
GLSSUG: Great Lakes SQL Server User Group. Meets monthly.
Contact: GLSSUG, Information Management Group, 720 N. Franklin St.,
Suite 300, Chicago, IL 60610
======================================================================
2. Database Server
----------------------------------------------------------------------
2.1. General Questions
++++++++++++++++++++++++++++++++++++++++++++++
Q1. What periodicals exist for Sybase?
Answer: The following magazines are either Sybase specific or related to
relational database design.
SQL Forum
PO Box 240
Lynnwood, WA 98046-0240
Phone (206)382-6607
Published bi-monthly (6 issues yearly)
us$60/year.
Sybase Magazine
(You already get this for free if you are a customer)
The Relational Journal
Codd & Date, Inc.
1772A Technology Drive
San Jose, CA 95110-1306
Phone: (408) 441-6400
Published bi-monthly
us$249.00/year
++++++++++++++++++++++++++++++++++++++++++++++
Q2. What's a good book about Sybase?
Answer: Consider the following texts.
A Guide to Sybase and SQL Server
McGoveran and Date
Addison Wesley Publishers, 1993
ISBN 0-201-55710-X
Sybase Architecture and Administration
John Kirkwood
Ellis Horwood Publishers
ISBN 0-13-100330-5
++++++++++++++++++++++++++++++++++++++++++++++
Q3. Does Sybase support the X/Open XA interface?
Answer: Currently, Sybase does not support the X/Open XA interface. You
cannot use it with either Encina or Tuxedo for global transaction management
in the X/Open DTP environment. System 10 is supposed to be XA compliant.
However, you CAN use it with TOP END, NCR's TP Monitor. TOP END's XA Veneer
Technology allows Sybase's non-XA compliant DBMS product to participate in
global transactions in an X/Open DTP environment. This XA Veneer DOES make
use of Sybase's two-phase commit feature.
Thanks to Ray Niety.
++++++++++++++++++++++++++++++++++++++++++++++
Q4. Does Sybase support ODBC (Microsoft Windows Open Database
Connectivity)?
Answer: Yes, but you may need to install additional stored procedures in
the master database to get it to work. These can be loaded with the file
"instcat.sql". If you are running Sybase under Novell or the Microsoft SQL
Server (purchased and supported by Microsoft) these come pre-installed, but
may not be up to date. In particular, if you are trying to use Visual Basic
as a front end, you will need to run the instcat.sql script that comes with
the Visual Basic distribution diskettes.
++++++++++++++++++++++++++++++++++++++++++++++
Q5. What are some of the size and space limitations of Sybase?
Answer:
* Columns in a table, view, or query...................... 250
* Tables in a view or query................................ 16
(including work tables, which are created by
sorts and aggregates)
* Indexes per table....................................... 250
( plus 1 clustered index)
* Columns in a composite index............................. 16
* Maximum row size in bytes.............................. 1962
(not counting text and image columns)
* Size of code for a query or stored procedure............ 65K
* Memory required for a query or stored procedure......... 65K
* Comparisons in a WHERE clause........................... 250
* Items in an IN clause (WHERE X in (1,2,3...))........... 250
* Parameters for a stored procedure....................... 255
* Levels of nesting of stored procedure calls............. 15
* Databases per server.................................. 32767
* Tables per database............................... 2 billion
* Rows per table............................ Available storage
----------------------------------------------------------------------
2.2. Sybase ISQL
++++++++++++++++++++++++++++++++++++++++++++++
Q1. How do I prevent isql output from wrapping around at 80
columns?
Answer: Use the -w switch to specify a different width, as in
isql -Ulogin -Sserver -w132 /* 132 character width column */
See 'isql' in the Utility Programs section of the Sybase Commands
Reference manual for a detailed explanation of all command line switches.
++++++++++++++++++++++++++++++++++++++++++++++
Q2. How do I send isql output to a file? The -o switch doesn't work?
Answer: Use the redirection symbol, ">", as in
isql -i script.sql > results.sql
++++++++++++++++++++++++++++++++++++++++++++++
Q3. Can I submit a multiline statement as input to isql without creating
a file with the commands in it first?
Answer: Yes, try...
isql -Ulogin -Ppassword >outfile_name <<EOF
use database
go
select column
from table
where condition is true
order by column
go
EOF
This is referred to as a "here document" in most UNIX manuals. Note that
this will also leave the password visible to anybody who happens to run a
`ps' command while the command is running.
This method also works in DOS:
isql -Ulogin -Ppassword -i con >outfile_name
/* ^^^ con stands for console */
use database
go
select column
from table
where condition is true
order by column
go
^Z /* Control - Z */
++++++++++++++++++++++++++++++++++++++++++++++
Q4. How do I prevent the password from being displayed when someone
does a UNIX *ps* command?
Answer: Depending on the version of Sybase and the port, this may or may
not already be supported. In the cases where it is not supported, several
tricks have been used.
For those cases where the password shows up, try using the command line
options -i and -o rather than the shell redirects (< and >). This is nice
because the "Password:" prompt shows up to accept your password.
isql -U login -i input.sql -o output.out
Password: password
You can also put the password as the first line that isql receives from
standard input.
isql -U login >output.out <<EOT
password
use database
go
sp_help
go
quit
EOT
One last alternative, thanks to Uday Shankar, is to echo the password
(either directly or from an environment variable) and pipe it into isql
ahead of the commands. The password doesn't show up with the *ps* command
and is not part of the isql call. An example:
( echo "password"
cat <<EOF
use database
go
sp_who
go
quit
EOF
) | isql -U login
++++++++++++++++++++++++++++++++++++++++++++++
Q5. I want to add some new features to isql. Does anyone have the source
code?
Answer: David Joyner at NCSU has published a shareware version, called
"dsql". It is available via anonymous ftp from
straylight.acs.ncsu.edu:/pub/sybase.
----------------------------------------------------------------------
2.3. Sybase Transact-SQL
++++++++++++++++++++++++++++++++++++++++++++++
Q1. What exactly do sp_primarykey, sp_foreignkey, and sp_commonkey do?
Answer: They register the key relationships in syskeys. They DO NOT create
indexes and they DO NOT make Sybase automatically enforce referential
integrity. The key relationships registered in syskeys may be used by a
front-end product to infer the logical schema.
DWB's VQL module uses them to create joins as queries are built. APT-BUILD
uses them in a similar fashion. Currently, they are included to build a more
complete data dictionary, though SQL Server itself does not use them. In
System 10, the DDL supports declared entity relationships, which are fully
supported by the relational engine. Therefore, the need for these stored
procedures in System 10 is essentially obviated.
++++++++++++++++++++++++++++++++++++++++++++++
Q2. I want to write a new system stored procedure that gives me
information not provided by the existing stored procedures. How do
I make these available to all users like the system stored procedures?
Answer: All system stored procedures MUST start with the prefix `sp_' AND
be loaded by the System Administrator in the master database. Procedures
starting with this prefix have two main properties (1) They are visible from
all databases, and (2) They switch context to the local database when
executed. For example, a reference to the sysusers table does not read the
sysusers table from the master database, but from the local database in which
the procedure is called.
Do NOT replace any of the existing stored procedures with procedures of your
own design. Any upgrade which runs the `installmaster' script will delete and
overwrite your changes. Making changes to Sybase-supplied procedures can
also damage your system tables and/or your SQL Server if the procedures do
not behave as other procedures and SQL Server expect them to. Proceed with
caution.
++++++++++++++++++++++++++++++++++++++++++++++
Q3. How can I do a "row level select" (built-in "if" function) without
having to create a temporary table, etc.?
Answer: The original solution detailed in previous versions could require
a division by zero, resulting in SQL errors that are avoidable. Andrew
Zanevsky provides this alternate solution, which is functionally equivalent
to the previous solution and eliminates the divide-by-zero problem.
I'm trying to create a view on a table selecting one of two fields depending
on the value of a third e.g...
select field1 from table where field3 = 1
union
select field2 from table where field3 = 2
If field1 & field2 are integers then this will work
SELECT field1*(1-ABS(SIGN(field3-1)))+field2*(1-ABS(SIGN(field3-2)))
FROM table
The function
eqfn(x,y) = (1 - ABS( SIGN(x - y)))
is a function which will return 1 if x = y and 0 otherwise.
The solution where the fields are characters is more complicated.
SELECT SUBSTRING(field1+field2,
eqfn(field3,2)*datalength(field1)+1,
eqfn(field3,1)*datalength(field1)+eqfn(field3,2)*datalength(field2))
FROM table
This will return field1 if field3 is 1 and field2 otherwise. It can be
modified to return a different value if field3 is neither 1 nor 2.
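The (1 - ABS(SIGN(x - y))) trick is plain arithmetic and can be checked
outside the server. Here is a small Python sketch of the same logic
(the function names are mine, not from the FAQ):

```python
def sign(n):
    # Like T-SQL SIGN(): -1 for negative, 0 for zero, 1 for positive.
    return (n > 0) - (n < 0)

def eqfn(x, y):
    # 1 if x == y, else 0 -- the (1 - ABS(SIGN(x - y))) trick.
    return 1 - abs(sign(x - y))

def row_level_select(field1, field2, field3):
    # Mirrors: field1*eqfn(field3, 1) + field2*eqfn(field3, 2)
    return field1 * eqfn(field3, 1) + field2 * eqfn(field3, 2)
```

With field3 = 1 the first term survives and the second is zeroed out, and
vice versa for field3 = 2.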
++++++++++++++++++++++++++++++++++++++++++++++
Q4. How do I use a table name as a parameter to a stored procedure,
which will then run a query on the specified table?
Answer: You can't; also you can't do "dynamic queries". However, you might
want to try using sp_rename to "fool" Sybase, as suggested (although not
wholeheartedly recommended) by rthomas@hakatac.almanac.bc.ca (Robert N
Thomas) [this won't work with temporary tables, though]:
1. Create a view of each table you will want to access as a parameter.
(This will allow other sessions to continue accessing the tables without
interruption.)
2. Set the permissions on the views so that NOBODY can access them. Only
through the MAGIC stored procedure is access granted to the views.
3. Figure out how to declare a section of your stored procedure as critical,
so that only one Sybase process can access the code below at one time.
4. Setup the procedure to look something like:
CRITICAL (I forget the exact command).
sp_rename inuse, @vartable
select * from inuse
sp_rename @vartable, inuse
END CRITICAL portion
++++++++++++++++++++++++++++++++++++++++++++++
Q5. Can you change the definition of a table to prohibit nulls once
you've defined it to permit them?
Answer: No, but you can prevent NULLs using triggers. A trigger can use
the `IS NULL' test to check if any column has a NULL value. A RULE will not
work. The rule check is NOT executed against columns that contain a NULL
value.
++++++++++++++++++++++++++++++++++++++++++++++
Q6. Is there a simple way to solve the Sybase "matching quotes"
requirement in a character field?
Answer: A client application program can use the dbsafestr() call, which
is part of DB-Library. This routine will double any and all quotes in a
character string, making that string "safe" for inclusion within any SQL
statement.
In APT-SQL, the similar function is sqlexpr().
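For environments without DB-Library, the quote-doubling itself is trivial
to reproduce. A minimal Python sketch of the idea (the helper name is my
own, not a Sybase API):

```python
def safestr(s, quote="'"):
    # Double every embedded quote so the string can sit safely
    # between matching quotes inside a SQL statement.
    return s.replace(quote, quote * 2)

stmt = "insert into names values ('%s')" % safestr("O'Brien")
```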
++++++++++++++++++++++++++++++++++++++++++++++
Q7. How can I do a case-insensitive search?
Answer: There are two ways to accomplish this:
Method 1: Use the case-insensitive sort order. This may be specified
during server installation or changed afterward with a bit of work. This
affects all databases on the server and cannot be isolated to a single
database.
Method 2: Use the upper or lower function to equate strings for
searching. For example,
select col1 from table where upper(col1) = upper("string")
or
select col1 from table where upper(col1) = "STRING"
Using the upper function in the where clause on the column name
'upper(columnname)' causes the optimizer to NOT use any index defined on that
column. This can result in poor retrieval performance since a table scan
rather than an indexed retrieval will be performed. Converting columns to
upper case upon insert or update is a better strategy since the data will be
physically stored in the table in upper case. The 'upper' function no longer
needs to be used on the column name and any index on that column is likely to
be used. For example,
select col1 from table where col1 = upper("string")
or
select col1 from table where col1 = "STRING"
will use an index defined on col1 since col1 is physically stored in the
database in upper case. Triggers can also be used to maintain a shadow of
the case-sensitive (or printable) column in a uniform-case column,
e.g., last_name_lc, which is lower()ed from the last_name column value in the
table's insert and update triggers. One last option is to use the LIKE
operator to search the column for both cases. For example,
WHERE last_name like "[Zz][Zz][Yy][Vv][Aa]"
Thanks to Sorin Shtirbu, Christopher Eastman, and Ben von Ullrich
++++++++++++++++++++++++++++++++++++++++++++++
Q8. How do wildcards used for pattern matching work in the
context of the LIKE operator?
Answer: This is best answered with an example:
Given that table1 contains col1 with the values
col1
----------
Bob
Ricky
The following query:
select *
from table1
where col1 not like '____' /* 4 underscores */
will return "Ricky" and will NOT return "Bob"
Here's why:
1. ["Bob" = "Bob "] is TRUE. This is a given, since ANSI says that in
comparing two strings, the shorter string will be conceptually padded
with blanks to equal the length of the longer string before comparing.
2. If 1 is TRUE, then ["Bob" LIKE "Bob "] is also TRUE. Otherwise, a LIKE
comparison would differ fundamentally from an EQUAL comparison.
3. ["Bob" LIKE "___"] and [" " LIKE "_"] are both TRUE, by Sybase's
definitions of the wildcards.
4. By 2 and 3, ["Bob" LIKE "Bob_"] is TRUE. Therefore, ["Bob" LIKE "____"]
is TRUE, and ["Bob" NOT LIKE "____"] is FALSE. The query should NOT
return "Bob", because the string has been extended with blanks to pad it
out to the length of the "longer" (pattern) string.
To select all names of NOT EXACTLY 4 characters, use
NOT LIKE "[^ ][^ ][^ ][^ ]"
This pattern string will match ONLY non-blank characters, so the query will
fail to match all strings with blanks in them ("Bob ") as well as all strings
longer than 4 characters ("Ricky").
-- Elton Wildermuth, Sybase SQL Server Development
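The blank-padding rule itself is easy to simulate. This Python sketch (my
own illustration, not Sybase code) pads the shorter string before
comparing, as the ANSI rule describes:

```python
def padded_equal(a, b):
    # ANSI string comparison: conceptually pad the shorter string
    # with blanks to the length of the longer one, then compare.
    width = max(len(a), len(b))
    return a.ljust(width) == b.ljust(width)
```

Since padded_equal("Bob", "Bob ") holds, "Bob" also matches a
four-character pattern, which is why the NOT LIKE query above skips it.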
++++++++++++++++++++++++++++++++++++++++++++++
Q9. How do I put a unique serial number on a table?
Answer: Michael Keirnan writes:
Create a reference table with one row (I've also heard them referred to as
surrogate id tables). Create a stored procedure called something like
get_next_id. This stored procedure increments the current id and returns, via
a parameter, the new id. This of course is done inside a transaction, and the
increment (UPDATE statement) should be done first. No trigger required. For
example:
create table ID /* the surrogate-id table */
(NextId int)
go
create procedure GetNextId
@SurrogateId int out
as
/* Start a transaction */
begin transaction
/* Update the ID first to lock the table
** and block others from changing the value.
*/
update ID
set NextId = NextId + 1
/* Safe to select, others calls blocked. */
select @SurrogateId = NextId
from ID
/* Commit the completed transaction */
commit transaction
go
There is an important disclaimer to this method. This approach guarantees
that all requests for an ID are single threaded; concurrent requests will
never overlap. Each request for an ID will block and wait for any preceding
requests, since the page containing the ID is locked. This could be a
bottleneck on a multi-user system.
Now that we all know the answer, System 10 will provide the keyword
'identity' and will automatically generate surrogate IDs as required.
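The update-then-select ordering is the heart of the pattern and works in
any SQL engine that locks on update. Here is a Python/sqlite3 sketch of
the same idea (the table and function names are mine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table ID (NextId int)")
conn.execute("insert into ID values (0)")
conn.commit()

def get_next_id(conn):
    # Update first so the row is locked before anyone can read it,
    # then select the new value inside the same transaction.
    with conn:  # commits on success, rolls back on error
        conn.execute("update ID set NextId = NextId + 1")
        (next_id,) = conn.execute("select NextId from ID").fetchone()
    return next_id
```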
++++++++++++++++++++++++++++++++++++++++++++++
Q10. Exactly when does a trigger fire?
Answer: A trigger will fire once per statement affecting the table
(insert, update, and/or delete), even if NO rows are affected. It fires after
the physical table has been modified (AFTER indexes are checked and updated,
after rules are checked, after defaults are applied). Triggers are just
about the LAST step prior to transaction commit. Any ROLLBACK TRANSACTION
statement in the trigger will do just that: undo all the changes made to all
table data and indexes affected by the command. Triggers do not (until
System 10) fire recursively on the trigger table if the trigger alters its
trigger table. This gives rise to coding like:
/* If you just want to count the number of rows in the log */
create trigger happy_trails
on the_range
for update
as
if (select count(*) from inserted) = 0 return
-OR-
/* @@rowcount is assigned to a variable, @rows_altered, because the if()
** changes its value.
*/
create trigger happy_trails
on the_range
for update
as
declare @rows_altered int
select @rows_altered=@@rowcount
if (@rows_altered = 0) return
This eliminates the expense of going through later trigger code which will
have no effect. A similar method can be used if, for example, you want to
allow only one row inserted per statement.
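As a sketch of that last point, a trigger along the following lines could
restrict inserts to one row per statement; the trigger name and the error
number are illustrative assumptions.

```sql
/* Sketch: reject multi-row inserts on the trigger table */
create trigger one_at_a_time
on the_range
for insert
as
if (select count(*) from inserted) > 1
begin
    raiserror 20001 "Only one row may be inserted per statement."
    rollback transaction
end
```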
++++++++++++++++++++++++++++++++++++++++++++++
Q11. Is there an easy way within the server to determine how many days
are in the current month?
Answer: This solution comes from Elton Wildermuth at Sybase.
Obtain the month number, M.
If (M = 2) /* February is a special case */
Obtain the 4 digit year, Y
if ((Y % 4 = 0) and
((Y % 100 != 0) or
(Y % 400 = 0)))
days := 29
else
days := 28
else
if (M > 7) /* If month is after "July" */
M := M - 7 /* subtract 7 from month */
days := 30 + (M & 1)
/* Now, if month is odd, it has 31 days */
Why this works:
31 -- 31 30 31 30 31
Ja Fe Ma Ap My Ju Jy
31 30 31 30 31
Au Se Oc No De
Suggestion: build this into a stored procedure, and call it; assign its
return value to a variable. Give the procedure an optional datetime param, so
that it can calculate days-in-month for a random date; let the date default
to getdate(). Example:
create procedure get_days
@days int OUTPUT,
@date datetime=NULL
as
declare @m int,
@y int
if (@date is NULL)
select @date = getdate()
select @m = datepart(mm, @date)
if (@m = 2)
begin
select @y = datepart(yy, @date)
if (@y % 4 = 0) and
((@y % 100 != 0) or (@y % 400 = 0))
select @days = 29
else
select @days = 28
end
else
begin
if (@m > 7)
select @m = @m - 7
select @days = (30 + (@m & 1))
end
return
Tony Langdon offers an alternate solution to this problem, noting that the
previous solution cannot be used within a DML statement.
select datepart(day,
dateadd(day,-1,
dateadd(month,1,
dateadd(day,1-datepart(day,getdate()),getdate()))
)
)
Which works as follows :
1. Get first day in current month
dateadd(day,1-datepart(day,getdate()),getdate())
2. Get first day in next month
dateadd(month,1,......)
3. Get last day in current month
dateadd(day,-1,.......)
4. Days in month
datepart(day,.........)
One final solution provided by Sorin Shtirbu (shtirbu@fnal.fnal.gov) is as
follows:
select 33 - datepart(day,
dateadd(day,32,
dateadd(day,
0-datepart(day,getdate())+1,getdate()) ))
++++++++++++++++++++++++++++++++++++++++++++++
Q12. How can I recursively retrieve the rows in a bill of
materials type problem?
Answer: Rob Hawkes provided this interpretation of an idea from "A Guide
To Sybase and SQL Server" by McGoveran and Date and solved the problem with a
stored procedure.
create proc getMenuLeaves (@current int) as
/* Given a menu_id in the hierarchy defined by the menu_link table, this
** procedure returns all nodes (menu_ids) which are descendants of the given
** node and which are leaf nodes (no descendants). */
set nocount on
declare @root int
declare @level int
select @root = @current
create table #stack (item int, level int)
create table #leaves (leaf int)
insert into #stack values (@current, 1)
select @level = 1
while @level > 0
begin
if exists(select * from #stack where level = @level)
begin
select @current = item from #stack where level = @level
if not exists (select menu_id_child from eeddb..menu_link
where menu_id_parent = @current)
begin
insert #leaves values (@current)
end
delete from #stack
where level = @level and item = @current
insert #stack select menu_id_child, @level+1
from eeddb..menu_link where menu_id_parent = @current
if @@rowcount > 0 select @level = @level+1
end
else select @level = @level-1
end
select * from #leaves where leaf != @root order by leaf
return
++++++++++++++++++++++++++++++++++++++++++++++
Q13. What mechanism does Sybase offer to control concurrency
when multiple users are doing *select - think - update* kind of
operations?
Answer: Sybase offers "browse mode" for such applications.
Conceptually, browse mode involves three steps:
1. Select result rows containing columns derived from one or more database
tables. The user now looks at returned data and decides which rows to update
- this is the thinking part.
2. Where appropriate, change values in columns of the result rows (not the
actual database rows, but copies stored in program variables by step (1)),
one row at a time.
3. Update the original database tables, one row at a time, using the new
values in the results rows.
To implement this scheme, the application needs to use SELECT with the "FOR
BROWSE" option when reading the rows, copy the column values into program
variables one row at a time, change the variables' values when and where
required (usually in response to user input), and finally execute an UPDATE
command that updates the database row corresponding to the current result
row using the "timestamp" column for the table. (Every table updated in
this manner must have a 'timestamp' column.)
At step (3) the where clause refers to the value of the row's timestamp
column stored when it was returned in step 1. If someone else has updated
the row in the meantime, its timestamp value will have changed and the
update will fail, thus telling the application that the row has been
modified by another user. When this happens, the application can decide to
either forget the update for that row, or re-read the row and show the user
the new values so s/he can decide whether to proceed with the update, do a
modified version of the update, or whatever.
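As a sketch of step (3), the update typically guards itself with the
tsequal() function, which compares the saved timestamp against the row's
current one; the table 'accounts', its columns, and the variables here are
assumptions for illustration only.

```sql
/* Sketch: optimistic update guarded by the saved timestamp.
** @old_ts holds the timestamp value captured when the row was
** selected FOR BROWSE; table and column names are illustrative.
*/
update accounts
set balance = @new_balance
where acct_no = @acct_no
and tsequal(timestamp, @old_ts)
/* If another user changed the row in the meantime, tsequal()
** raises an error and the update does not take place.
*/
```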
Browse mode is documented in the Sybase Open Client DB-Library reference
manuals and the T-SQL Commands Reference Manual 4.9.1. Although historically
only used from 3GL programming languages like C, browse mode is implemented
to varying degrees by some 4GL tools such as PowerBuilder (but not APT),
and can in fact be used from T-SQL itself (thus allowing applications like
APT access to it).
Care must be taken when using browse mode on PC-based clients. Not all 4GLs
represent date and time datatypes at the same precision as expected by
Sybase. This can result in an application where an update never (OK, almost
never) takes place. In situations like this, another user column can be
added and treated much the same way as the timestamp column, i.e., only
update when the column has the same value as it did when you selected it
out of the database. Not quite as elegant, since you have to update this
column yourself.
It is expected that the functionality offered by cursors, implemented in the
upcoming System 10 SQL Server and Open Client, will substantially overlap and
exceed the functionality provided by browse mode.
++++++++++++++++++++++++++++++++++++++++++++++
Q14. In what order are defaults, rules, triggers, etc. enforced/executed?
Answer: Execution occurs in the following order:
1. Default substitution occurs. Substitute a default value for each column
that does not have a user supplied value (if a default value exists). If
a column has two defaults, one bound directly to the column, and one
bound to the type on which the column is defined, the default bound
directly to the column takes precedence. Note that if the user supplies
any value whatsoever for a column, including NULL, the default
substitution will not occur.
2. The transaction will be automatically rolled back if, after default
substitution, any columns forbidding nulls now contain a null value.
3. Rule enforcement occurs. Determine if any column violates a rule
associated with that column either directly or through the column's type.
If a column has two rules, one bound directly to the column, and one
bound to the type on which the column is defined, the rule bound directly
to the column takes precedence.
4. The transaction will be automatically rolled back if any column violates
the associated rule, including columns in which default values were
substituted.
5. Roll back any transaction that contains rows which would violate a unique
index on the table.
6. Execute the trigger to enforce user defined integrity.
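A minimal sketch of steps 1-3 interacting follows; the object and column
names ('orders', 'status') are assumptions, not from the original.

```sql
/* Sketch: a column with both a default and a rule bound to it */
create default status_dflt as 'NEW'
go
create rule status_rule as @status in ('NEW', 'OPEN', 'CLOSED')
go
exec sp_bindefault status_dflt, 'orders.status'
exec sp_bindrule status_rule, 'orders.status'
go
/* Omitting status substitutes 'NEW' (step 1), which is then
** checked against the rule (step 3).  Supplying NULL explicitly
** suppresses the default, so the insert fails at step 2 if the
** column forbids nulls.
*/
insert orders (order_no) values (1)
```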
----------------------------------------------------------------------
2.4. Sybase Bulk Copy
++++++++++++++++++++++++++++++++++++++++++++++
Q1. When using Bulk Copy (BCP) to copy a database, is the copy
equivalent to the original in terms of performance?
Answer: Copying via bcp will remove the "holes" and usually compact the
rows more contiguously than the original. If your table is large, and has
had many rows deleted throughout the table, performing this table rebuild may
improve I/O performance.
Meaningful "holes" only exist when larger tables with a clustered index have
a small fillfactor, and/or have small groups of rows deleted from areas
spanning most of the table. These gaps are not large on a per-page basis,
since through all manipulations, pages are always kept at least half-full,
and rows on a page are always contiguous (free space on any page is kept
together at the end of the page).
Recreating the clustered index will fill these on-page gaps, placing a
uniform number of rows on all pages. The clustered index should be recreated
using one of the following methods:
SELECT INTO another_table,
TRUNCATE TABLE original_table,
INSERT original_table SELECT * from another_table
OR
bcp out,
TRUNCATE TABLE original_table,
bcp in
Related to this topic is the notion of external fragmentation of a table's
allocated extents (chains of 8 pages, or table "building blocks"). While not
impacting table I/O effectiveness, this type of fragmentation is a greater
contributor than the above internal fragmentation to excessive reserved space
allocation on a table, space not yet re-used after being partially
deallocated in a DELETE.
Fragmented extents occur only when less than one extent (eight 2K pages) of
contiguous rows is ever deleted. Until completely emptied, extents remain
allocated to the table indefinitely, effectively reserving small groupings
of empty pages which could otherwise be freed for use by other tables in the
database. This type of fragmentation may be removed in exactly the same
manner as described above.
The only performance gains to be had from rebuilding are realized via
increased row-per-page counts for I/O done to retrieve any page with a
desired row on it. (You get more info if you read 8 pages at 2 rows per
page and get 16 rows than if you had only 1 row per page and got just 8
rows for the same 8 pages read.)
When row size is large, there is often so little free space left by deleting
large rows that SQL Server's page splitting and filling algorithms have
already optimally filled all gaps on the ends of pages.
The bottom line is, like most choices in physical database design, there are
always tradeoffs. You must always weigh the exact requirements of your
application.
BCP-ing into an empty table with indexes, or building the indexes after the
data is all in WILL indeed fill in the gaps in the extent chains where rows
had been deleted in the original source table. Rows are always compacted to
have no spaces between rows.
Unless you note a substantial difference between the reserved space and the
allocated space given by sp_spaceused for both tables, the performance
difference is typically not that great.
The best way to copy a database is to use DUMP DATABASE and LOAD DATABASE...
it's just one operation, and produces an exact page-by-page copy of the
original database, "spaces" and all.
Thanks to Benjamin von Ullrich
++++++++++++++++++++++++++++++++++++++++++++++
Q2. Can BCP load null dates?
Answer: BCP can load null dates if there is nothing between the delimiters
for the columns. If it encounters a space it converts that to Jan 1, 1900.
Here is an example:
create table foo
(seq_no int not null,
date1 datetime null,
date2 datetime null)
The following is the contents of a file that we are going to bcp into table
foo. I am using a tilde to delimit columns and a tilde followed by a return
(\n) as a row terminator.
1~ ~~
2~~ ~
3~~~
Now we use bcp with the delimiters specified above.
bcp foo in foo.dat -c -t~ -r"~\n"
Starting copy...
3 rows copied.
Clock Time (ms.): total = 37 Avg = 12 (81.08 rows per sec.)
Via isql let's look at the results.
1> select * from foo
2> go
seq_no date1 date2
----------- ------ ------
1 Jan 1 1900 12:00AM NULL
2 NULL Jan 1 1900 12:00AM
3 NULL NULL
(3 rows affected)
----------------------------------------------------------------------
2.5. Sybase Backup and Recovery
++++++++++++++++++++++++++++++++++++++++++++++
Q1. How can I dump more than one database to a single tape?
Answer: Tell Sybase that the tape device is really a disk. Declare the
tape /dev/nrst? as a "disk" device (sp_addumpdevice "disk", ...). Then
successive dumps will follow each other on tape. Of course, you've got to
maintain your own directory of what's on the tape. Use "mt -f /dev/nrst0
sta" to check. This method is not supported by Sybase.
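An unsupported sketch of the first method follows; the logical device name
and the tape path are examples only, and the caveats quoted later in this
answer apply in full.

```sql
/* Unsupported: declare the no-rewind tape as a "disk" dump device
** so successive dumps append instead of overwriting.
*/
exec sp_addumpdevice "disk", "multidump", "/dev/nrst0"
go
dump database db1 to multidump
go
dump database db2 to multidump
go
```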
Another alternative is to write your database dumps to files and write the
files to tape using standard unix commands. Multiple dumps can safely be put
onto a tape using this method, but you must maintain your own index.
The first of the two methods is commonly practiced, but not supported by
Sybase. One article originating from Sybase stated...
Please save yourself a lot of grief and don't do this (First Method above).
The various platforms handle tapes in slightly different ways and the various
Sybase server ports make slightly different attempts to work around this. On
some platforms the above suggestion will work, but on some other platforms,
you overwrite your dump, and on yet others, it just fails. Worse yet, from OS
release to OS release, and Sybase release to Sybase release, the behavior of
any specific platform can change.
The point of doing dumps is that you know your data is safe. If you are doing
something that is "not really supported", then how do you know your data is
safe? If you don't care if your data is safe, save even more tapes and don't
do dumps at all.
None of the above is meant to imply that the Sybase dump mechanism is better
or worse than any other possibility. However, it is the mechanism Sybase
provides and supports.
Thanks to David Gould
And one final word, Sybase System 10 includes a Backup Server, which will
handle this problem. DataTools, Inc. also provides a product that can backup
multiple Sybase databases on a single tape. See section 9 of this document.
----------------------------------------------------------------------
2.6. Upgrading the Sybase Server
++++++++++++++++++++++++++++++++++++++++++++++
Q1. I'm upgrading the Server from version <x> and/or operating
system <p> to version <y> and/or operating system <q>. Any advice?
Answer: In general, read the install guide and release bulletin for the
latest news on the recommended OS levels for upgrades. (read these ALL THE
WAY THROUGH BEFORE YOU START, not as you go!)
If your planned OS level is not mentioned, call Sybase Technical Support and
find out if your *current* SQL Server is certified on the new SQL Server's
certified operating system. Most SQL Server upgrades require that you be
able to run both SQL Server versions on the same machine. Also, if the
upgrade fails for some reason, you may need to fall back to your previous
version of SQL Server. This fallback should always be on a certified OS
version.
Above all, make a DUMP DATABASE backup of ALL databases, and ALWAYS use
'sybconfig' to do the upgrade. Doing an upgrade by hand can destroy your SQL
Server. If you have the time, computing power, and disk space, consider
building a new SQL Server from the backups you make of your production
server, and trying a trial upgrade on this "test" server. Don't run the test
server for long, as your license agreement doesn't allow indefinite use of
multiple copies of your software.
----------------------------------------------------------------------
2.7. Sybase Security
++++++++++++++++++++++++++++++++++++++++++++++
Q1. What different mechanisms are there to control Sybase security?
Answer: The following summarizes techniques to control security with SQL
Server that I have received from various sources plus some comments of my
own. My concern was how to control updates to a database in an environment
with end-user "query" tools that include update capabilities (e.g. Pioneer,
Q+E, Microsoft Access). I want to especially thank those who responded to my
question. All responses were useful.
There are four fundamental methods, each described in more detail below: (1)
adopted authority; (2) login ID; (3) gatekeeper; (4) triggers. All techniques
are premised on fundamental security features of user authentication and
grant/revoke permissions to resources. Each technique has a cost and must be
weighed against the risk/benefit.
Where available, references are cited. Particularly useful for those with
access to CompuServe are Microsoft's Knowledge Base (MSKB) and Microsoft's
Software Library (MSL).
1. Adopted authority.
This seems to be the most common approach, making use of SQL Server's
authority checking structure -- if the owner of an object (stored procedure
or view) has the necessary authority to all underlying objects, then
authorized users of this object have the same authority. All updates are
done via stored procedures owned by a user with update authority to the
underlying objects. Users are granted authority to stored procedures but do
not have update authority to any tables. This approach encapsulates your
database as much as possible using views and stored procedures.
While one can often choose between use of views and stored procedures for
SELECT access to data, it is important to note the following traits of these
two object types:
Views are best used for data access whose access methods must be
arbitrary. If you need to run a query, to join to other tables for
example, a view does the best job.
Stored procedures may be parameterized, but the data returned cannot be
expanded or limited in any way (short of rewriting the procedure). Joins
of tables, and especially other views can be messy from an optimization
standpoint, so keeping complete control through stored procedures can be
attractive.
Sybase T-SQL allows updates through views, but only under certain conditions.
Stored procedures are best used for all write operations, since they suffer
from no restrictions for updates and can be set up to update anything that
is needed, such as several tables, as opposed to the UPDATE command's
restriction of only one table per command.
Note that the object ownership chain is broken between databases! Since a
server login may have completely different privileges from one database to
another, the access manager checks the runtime user's permissions against
those of any object residing outside the current database.
The assumption here is that users are not aware of stored procedures since
they will only be used by application programs for database maintenance. It
is possible to "hide" the stored procedures; possibly in a totally different
database. Still the knowledgeable user could find them and execute them.
Programmers might object to the use of stored procedures versus direct use of
SQL.
As long as you REVOKE all permissions which exist on database objects (in
lieu of the stored procedures's ownership-inheritance permissions), there is
no way the application tables may be accessed in any way outside of the
application stored procedures. Running procedures outside of any application
should not result in any adverse situation with your database, as long as
your application is "well-written" (i.e., doesn't flip out if you run one SP
without running some other one right after it), and uses triggers or other
stored procedures to maintain referential integrity.
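A minimal sketch of the adopted-authority setup, assuming a dbo-owned table
'accounts' and procedure 'upd_balance' (both names hypothetical):

```sql
/* Sketch: the dbo owns both the table and the update procedure */
create procedure upd_balance
@acct_no int,
@amount money
as
update accounts
set balance = balance + @amount
where acct_no = @acct_no
go
/* Users can run the procedure but cannot touch the table directly */
revoke all on accounts from public
grant execute on upd_balance to public
go
```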
In the case where applications are written using end-user tools such as Q+E
and Excel, Robert Thomas describes hiding the calls to stored procedures
using DDE or DLL calls; possibly using a special password as a parameter to
the stored procedure. He also recommends making sure sp_helptext for these
procedures return nothing. He sometimes uses a technique of mixing DDE and
DLL calls in which in the middle of a DDE conversation he establishes a
temporary second login to SQL Server for update purposes using DLL calls.
See MSKB article Q47270: "INF: SQL Access Permissions and Trigger Execution"
which describes SQL Server adopted authority structure using triggers.
However, this concept applies as well to stored procedures and views.
2. Login IDs
The basic idea is to use different login IDs for update and query use. The
trick is to keep the one for update hidden and unobtrusive. There are several
techniques for doing this with varying degrees of sophistication.
Maintenance applications could use a single special ID and password that
allows update privileges; while normal user IDs have read only authority.
This is based on the assumption that access to applications is controlled and
that steps are taken to make sure the special login does not become common
knowledge. One problem is that it is difficult to tell who is logged onto the
database since they all use the same ID.
Lawrence Bertolini wrote with a table driven variation of the above. A
special login is used by an application to access, of course, a special
table. This table cross references a normal login ID to a special login ID.
Once the special ID is located, the application logs off and then back on
using the special ID. For example, my normal ID might be "seth" but my
special ID might be "seth_12x9t". Again, normal precautions must be taken to
ensure that this scheme is not compromised.
3. Gatekeeper
The most sophisticated approach is to control all logins with a custom
written front-end gatekeeper to SQL Server. All requests to SQL Server must
pass through this program which can determine the privileges needed by the
requester. The action taken by the program is flexible. Two possibilities are
to use the two login ID approach as above or to analyze each request
rejecting those that are not acceptable (e.g. update from Q+E). This approach
also allows maintaining an audit trail of SQL Server logins and requests. The
key issue is authenticating the requesting program. The login ID can be
authenticated using the native operating system security.
This technique is described in Microsoft's "Open Data Services User's Guide"
as the SECURE application. Manual and source are included with SQL Server
4.2. It is also described in MSKB article Q79958: "INF: ODS Security and
Auditing Application". Source for SECURE42 should be in MSL as "S13264"
however it is missing as of this writing but I am told it will be added
within a week or two. This program requires Microsoft's SQL Server
Programmer's Reference for C.
For those of you not using the Microsoft SQL Server, Sybase's Open Server
product was designed exactly for this type of application: when you need an
arbitrary software agent to look just like a SQL Server to Sybase compatible
tools. Open Server allows you to construct your own "SQL Server", from top
to bottom, with the SQL Server networking and API to do all the
communications with clients, making your program look like a SQL Server, but
actually do just about anything you can program in supported 3GL's.
4. Triggers
This technique places update control logic inside triggers associated with
each table. For example, the trigger could check a table to make sure the
requesting application was authorized for updates. This technique is
described in MSKB article Q66678: "INF: Providing Application Security
Through Triggers in SQL". Obviously a trigger needs to be written for each
table however the update check could be placed in its own stored procedure
and work for all tables.
5. Other ideas
Possibly an obvious answer: don't provide tools for ad hoc queries that
include update capabilities. In the personal computer arena, this seems
unrealistic.
Another option is a physically separate database for update and ad hoc
query. There is a fair amount of overhead, but this might actually work well
where performance is critical for maintenance transactions.
Disclaimer: I have tried to present the above information as accurately as
possible including citations. However, I leave it up to you to verify the
information and determine its correctness and applicability to your needs.
Provided by: Seth Siegal with additions by Benjamin von Ullrich
----------------------------------------------------------------------
2.8. Sybase Database Administration
++++++++++++++++++++++++++++++++++++++++++++++
Q1. Why does the transaction log on the model database keep filling up?
Answer: Up to release 4.8, SQL Server stored tempdb's next object_id in
the log of the model database. I don't remember exactly why this was
necessary, but I think it has something to do with avoiding re-issuance of
object_ids that may be in stored procedures and/or transaction logs of all
server databases. Since model is copied into tempdb at boot time, it seemed
logical to store the next object id in model. All that was logged was a
4-byte integer, so it could take months for the log in model to fill up.
This problem was fixed in version 4.8. The next object id is now stored
elsewhere.
++++++++++++++++++++++++++++++++++++++++++++++
Q2. Why does my transaction log fill up even when I have
allocated lots of space for it?
Answer: The capacity of the log is limited by two things:
1. The total allocated size of the log
2. The frequency of its truncation (or DUMP).
You can have a very active system with small transactions, and not fill up
the log if all transactions commit very quickly (optimal behaviour for SQL
Server) and you dump the log very often. You can also have an ad-hoc system,
in which transaction size and duration may vary. In this case, the following
paragraphs apply.
Due to the sequential nature of the log, only the inactive portion of the log
may be truncated by any DUMP TRANSACTION command. The inactive portion of the
log runs from the "beginning" to the page which has the BEGIN XACT record for
the oldest *active* (uncommitted) transaction. Pages which follow this oldest
active transaction in the log are considered active for the purposes of DUMP
TRANSACTION, since they may depend on changes made (yet to be committed or
still to be rolled back) by this transaction. Recovery (at LOAD TRAN or
system startup time) replays transactions as committed or rolled back in the
exact order in which they appear in the log, so portions appearing in the log
after an uncommitted transaction may not be removed.
The implication here is that given a large enough or long-running enough
transaction, one can hold up the entire log (from dumping, not from continued
logging!) while the transaction is still pending. If your log fills up, and
you have a very old transaction that started at the beginning of the log, no
DUMP TRAN command can or will clear it until the transaction COMMITs or is
ROLLed BACK.
The only things you can do in this case are:
1. ALTER DATABASE to add more space to the log, hopefully allowing enough
space & time for your old transaction(s) to commit (find that user who
typed BEGIN TRAN ... UPDATE/INSERT/DELETE ... and then went to lunch!)
This only makes sense if you know the transaction must finish.
2. KILL the long-running process/transaction.
3. Shut down the SQL Server to terminate the long-running/old transaction.
These last two effectively terminate the transaction without a COMMIT,
making it get rolled back upon recovery. Killing a process just to clear a
log is a fairly drastic action; if you can kill the client process, or type
^C to abort from it, please do so to achieve a cleaner and easier return to
normal processing.
Long-term, it is best to avoid long/log-intensive transactions. This may be
done by breaking up large deletes into smaller pieces by adding a WHERE
clause to target a range of rows. You can also use a WHILE loop,
re-selecting the MIN() or MAX() of an int or char key into a variable whose
value you check for NOT NULL, and then use to alter the table. This advice
applies equally to DELETEs and UPDATEs, and similarly for INSERTs.
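The WHILE-loop technique described above might be sketched as follows,
assuming a table 'history' with an int key 'id' and a batch range of 1000
(all hypothetical):

```sql
/* Sketch: delete in small batches so the inactive portion of the
** log can be dumped between transactions.
*/
declare @low int
select @low = min(id) from history
while (@low is not NULL)
begin
    begin transaction
    delete history
    where id >= @low and id < @low + 1000
    commit transaction
    /* re-select; NULL means the table is empty */
    select @low = min(id) from history
end
```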
If your problem transaction is to delete all rows in a table, consider using
the TRUNCATE TABLE command. This command uses a minuscule proportion of log
versus a DELETE of all rows, as it merely logs the deallocation of pages
assigned to the table, instead of an image of every row deleted. For this
reason, it is also MUCH faster than DELETE for most good-sized tables.
Permission to use TRUNCATE TABLE is only available to the dbo, however.
Benjamin von Ullrich
++++++++++++++++++++++++++++++++++++++++++++++
Q3. Is there a way to turn off logging altogether? How about putting
the transaction logs on `/dev/null'? How does tempdb avoid logging?
Answer: The transaction log is an integral part of Sybase operations. The
server must be able to read from as well as write to the log device. This is
why /dev/null won't work.
What you can do is use "sp_dboption dbname, trunc, true". This will
automatically clear out the INACTIVE PORTION of transaction log every minute
or so (when the CHECKPOINT SLEEP process does its work). This is the way
tempdb works. Keep in mind that you have just prevented recovery from
incremental transaction log dumps (dump tran) and that you can ONLY recover
the database from the last full database backup (dump database).
++++++++++++++++++++++++++++++++++++++++++++++
Q4. Is there any reason not to have `truncate log on checkpoint' turned
on for the model database?
Answer: Since this database is the template for all databases at CREATE
DATABASE time, setting this option on in model causes it to be automatically
set on for all new databases as they are created. Aside from the simple fact
that
this may not be what you want on all new databases, if you are in the midst
of a frenzied recovery of a major production database (say, in the middle of
the day, while all your users are down), and you load your database backup,
the first gift your clever option on model will give you is a truncated log
in front of all of the transaction log dumps you were about to apply to bring
the database you just loaded up to the time of failure. Truncating the log at
any time between LOAD DATABASE and your last LOAD TRANSACTION blasts a hole
in the log chain and halts the recovery operation then and there.
++++++++++++++++++++++++++++++++++++++++++++++
Q5. Why doesn't the Sybase kill command work?
Answer: Killing a Sybase process will result in one of four reactions:
1. The process is an ordinary retrieve transaction, i.e. SELECT, and it
dies immediately. (Actually it dies as soon as the process wakes up
(when an I/O completes), and the engine becomes available to run a task,
or a necessary lock is acquired).
2. The process is an update transaction. It does not die until the server
has rolled back the transaction. The time is directly related to the size
of the transaction.
3. The process is a DBCC transaction. Sybase forks a separate process for
the transaction, and the new one is out of the users' control. DBCC
checks tables index by index and can only be killed when it finishes one
index and is ready for the next one. It may take anywhere from several
minutes to four hours to die. Note: DBCC elapsed time to complete any
check is directly related to the size of the object(s) being checked.
There is no upper elapsed time limit. The good news is that System 10
has many new, documented checktable() and checkalloc() "subset" commands
which allow the DBA choices in the level of checking to do versus the
time available versus data integrity requirements.
4. The process is sleeping. We cannot kill a sleeping process. When an end-
user process gets disconnected, we cannot kill the Sybase process and
release the locks. This can happen if a PC-client is rebooted or turned
off with an active connection.
System 10 will provide an unconditional kill.
++++++++++++++++++++++++++++++++++++++++++++++
Q6. What are some of the undocumented features of DBCC?
Answer: There are a number of undocumented DBCC options that tech support
uses to analyze your database. Some of these are DESTRUCTIVE and tech support
will not help you if you screw up your database using one of these commands.
They can also tell what you have done.
There are NO SECRETS in the undocumented dbcc commands; they are a fleeting
set of diagnostic and repair utilities to help fix extreme problems with
database pages, index structures, and sticky problems with system tables.
They are best used only by Sybase Technical Support, since their structure
and applicability to any given problem is best judged by those who are
extremely familiar and experienced with a great variety of failures and
associated damage, and who know when to use (and not to use) each dbcc command.
Sybase generally only uses them when backups are not available or the backup
recovery options are not optimal with regard to the application's
availability requirements. The bottom line is: knowing all dbcc commands is
no panacea, and can EASILY get you into more trouble than you are already in
when you need such tools.
The System 10 SysAdmin Guide includes a NEW, LARGE section devoted
exclusively to dbcc, including hints on usage, planning a dbcc strategy to
fit in with your backup and recovery plans, and performance impact analysis
data to help you make an informed decision on database maintenance.
++++++++++++++++++++++++++++++++++++++++++++++
Q7. Why don't the dbcc commands produce any output on my screen?
Answer: Most of the dbcc commands direct their output to the console (the
terminal on which the Sybase dataserver was started). No output is seen on
the terminal when executing a dbcc command at any terminal other than the
console. There are several exceptions, but I can't remember which commands
automatically send output to the local terminal right now.
To redirect dbcc output to your terminal rather than the console, type from
the command line:
dbcc traceon(3604)
go
Subsequent dbcc output will appear at the local terminal. Output is also
logged into the errorlog file.
++++++++++++++++++++++++++++++++++++++++++++++
Q8. What are the trace flags used for and what are some of the more
common flags?
Answer: Trace flags disable or enable certain features with the database
server. They may be executed from the command line through the dbcc command
or may be installed in the RUNSERVER file when prefixed by a '-T'. There are
a number of trace flags that can be used. An initial list follows:
dbcc traceon(3604) redirects dbcc output to your screen rather than the
console.
dbcc traceon(3605) redirects dbcc output to the errorlog.
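To make a trace flag permanent, it goes into the RUNSERVER file as described above. A hypothetical RUNSERVER file might look like the following (the device and errorlog paths are placeholders, not your installation's actual paths):

```shell
#!/bin/sh
# Hypothetical RUNSERVER file -- paths are examples only.
# -T3605 installs trace flag 3605 (dbcc output to the errorlog)
# each time the server starts.
dataserver -d/dev/master_device -e/usr/sybase/install/errorlog -T3605
```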
++++++++++++++++++++++++++++++++++++++++++++++
Q9. Is there a way to accurately estimate how much space a table and its
indexes are going to take?
Answer: FYI, lots of people have asked for it, and here it is: the
officially UNSUPPORTED stored procedure sp_estspace. It works under 4.9.2,
but I make no guarantees. What it's good for: estimating the size of tables
and their indexes given an existing table and index schema.
Have fun.
**************************************************
Doug Smith Sr. Instructor
Sybase Professional Services, Northwest District
***************************************************
create procedure sp_estspace
/* A procedure to estimate the disk space requirements of a table
** and its associated indexes.
** November 21, 1991
** Written by Malcolm Colton with assistance from Hal Spitz
** Modified by Jim Panttaja November 25, 1991
*/
(@table_name varchar(30)=null, /* name of table to estimate */
@no_of_rows float = 1, /* number of rows in the table */
@fill_factor float = 0, /* the fill factor */
@cols_to_max varchar(255) =null /* variable length columns for which
to use the maximum rather than 50% of
the maximum length */
)
as
declare @msg varchar(120)
/* Give usage statement if @table_name is null */
if @table_name = null or @no_of_rows = 1
begin
print 'Usage is:'
print '    estspace table_name, no_of_rows, fill_factor, cols_to_max'
print 'where table_name is the name of the table,'
print '    no_of_rows is the number of rows in the table,'
print '    fill_factor is the index fill factor (default = 0)'
print '    cols_to_max is a list of the variable length columns for which'
print '    to use the maximum length instead of the average'
print '    (default = null)'
print 'Examples: estspace titles, 10000, 50, "title, notes"'
print '    estspace titles, 50000'
print '    estspace titles, 50000, 0, null, 40'
return
end
declare @sum_fixed int,
@sum_var int,
@sum_avgvar int,
@table_id int,
@num_var int,
@data_pages float,
@sysstat tinyint,
@temp float,
@index_id int,
@last_id int,
@i int,
@level_pages float,
@key varchar(30),
@usertype tinyint,
@type tinyint,
@level tinyint,
@vartype smallint,
@more bit,
@next_level float,
@rows_per_page smallint,
@row_len smallint,
@length tinyint,
@index_name varchar(30),
@page_size smallint,
@page_K tinyint,
@index_type varchar(20),
@factor float
select @sum_fixed=0,
@sum_var=0,
@sum_avgvar=0,
@table_id=0,
@num_var=0,
@data_pages=0,
@row_len=0,
@sysstat=0
set nocount on
/* Make sure table exists */
select @sysstat = sysstat,
@table_id = id
from sysobjects where name = @table_name
and uid = user_id()
if @sysstat & 7 not in (1,3)
begin
select @msg = "I can't find the table "+@table_name
print @msg
return
end
/* Get machine page size */
select @page_size = low - 32
from master.dbo.spt_values
where type = 'E'
and number = 1
select @page_K = (@page_size +32) /1024
if @fill_factor !=0
select @fill_factor = @fill_factor / 100.0
/* Create tables for results */
create table #results
(name varchar(30),
type varchar(12),
level tinyint,
pages float,
Kbytes float)
create table #times
(name varchar(30),
type varchar(12) null,
tot_pages float,
time_mins float null)
/* Create table of column info for the table to be estimated */
select length, type, name, offset
into #col_table
from syscolumns
where id = @table_id
/* Look up the important values from this table */
select @sum_fixed = isnull(sum(length),0)
from #col_table
where offset !< 0
select @num_var = isnull(count(*),0), @sum_var = isnull(sum(length),0)
from #col_table
where offset < 0
and charindex(name, @cols_to_max) > 0
select @num_var = @num_var + isnull(count(*),0),
@sum_avgvar = isnull(sum(length / 2),0)
from #col_table
where offset < 0
and charindex(name, @cols_to_max) = 0
/* Calculate the data page requirements */
if @num_var = 0
select @row_len = 4.0 + @sum_fixed
else
select @row_len = 8.0 + @sum_fixed + @sum_var +@sum_avgvar + @num_var
+ (@sum_var +@sum_avgvar) / 256.0
/* Allow for fill-factor if set to other than zero */
if @fill_factor = 0
select @temp = convert(float, @no_of_rows) *
( convert(float, @row_len) / convert(float, @page_size) )
else
begin
select @temp = convert(float, @no_of_rows) /
(convert(float, @page_size) * convert(float, @fill_factor) )
select @temp = convert(float, @row_len) * @temp
end
/* Now add in allocation pages */
select @temp = @temp +(@temp / 256.0)
select @data_pages = @temp + 1.0
if @data_pages < 8.0
select @data_pages = 8.0
insert #results values
(@table_name, 'data', 0, @data_pages, @data_pages * @page_K)
/* See if the table has any indexes */
select @index_id = min(indid)
from sysindexes
where id = @table_id
and indid > 0
if @index_id = null /* We've finished if there are no indexes */
begin
select @msg = @table_name + ' has no indexes'
print @msg
select name, type, level,
Pages = str(pages,12,0), Kbytes = str(Kbytes,12,0)
from #results
select Total_Mbytes = str(sum(Kbytes)/1000.0,15,0)
from #results
drop table #results
return
end
select @sum_fixed = 0,
@sum_var = 0,
@num_var = 0,
@temp = 0
/* For each index, calculate the important variables
** use them to calculate the index size, and print it */
while @index_id != null
begin
select @index_name = name
from sysindexes
where id = @table_id
and indid = @index_id
if @index_id = 1
select @index_type = 'clustered'
else
select @index_type = 'nonclustered'
select @num_var = 0,
@sum_var = 0,
@sum_fixed = 0
select @i = 1
/* Look up each of the key fields for the index */
while @i <= 16
begin
select @key = index_col(@table_name, @index_id, @i)
if @key = null
break
else /* Process one key field */
begin
select @type = type, @length = length, @vartype = offset
from syscolumns
where id = @table_id
and name = @key
if @vartype < 0
select @num_var = @num_var + 1
else
select @sum_fixed = @sum_fixed + @length
if @vartype < 0 /* variable:check if in @cols_to_max */
begin
if charindex(@key, @cols_to_max) = 0
select @sum_var = @sum_var + (@length / 2)
else
select @sum_var = @sum_var + @length
end
end
select @i = @i + 1 /* Get next key field in this index */
end
/* Calculate the space used by this index */
if @num_var = 0
select @row_len = 5 + @sum_fixed
else
select @row_len = @sum_fixed + @sum_var + @num_var + 8
if @index_id != 1 /* add row id for nc indexes */
select @row_len = @row_len + 4
select @level = 0
/* Allow for fill-factor if set to other than zero */
if @fill_factor = 0
select @rows_per_page = @page_size / @row_len - 2
else
select @rows_per_page = @page_size / @row_len * @fill_factor
if @rows_per_page > 256
select @rows_per_page = 256
/* For clustered indexes, the first level of index is based on the
** number of data pages.
** For nonclustered, it is the number of data rows */
if @index_id = 1
select @next_level = @data_pages
else
select @next_level = @no_of_rows
select @more = 1 /* Flag for end of index levels */
while @more = 1
begin
/* calculate the number of pages at a single index level */
select @temp = @next_level / convert(float, @rows_per_page)
/* Add in a factor for allocation pages */
if @temp > 200.0
select @temp = @temp + (@temp /256.0) + 1.0
select @level_pages = @temp
insert #results values
(@index_name, @index_type, @level, @level_pages,
@level_pages * @page_K)
if @index_id != 1 and @level = 0 /* adjust NC non-leaf rows */
begin
select @row_len = @row_len + 4
/* Allow for fill-factor if set to other than zero */
if @fill_factor = 0
select @rows_per_page = @page_size/@row_len - 2
else
select @rows_per_page = @page_size/@row_len*@fill_factor
end
if @rows_per_page > 256
select @rows_per_page = 256
select @next_level = @level_pages
select @level = @level + 1
/* see if we can fit the next level in 1 page */
if @rows_per_page >= @next_level
select @more = 0
end
/* Account for single root page */
if @level_pages > 1
insert #results values
(@index_name, @index_type, @level, 1, @page_K)
/* Now look for next index id for this table */
select @last_id = @index_id
select @index_id = null
select @index_id = min(indid)
from sysindexes
where id = @table_id
and indid > @last_id
end
select name, type, level, Pages = str(pages,12,0), Kbytes = str(Kbytes,12,0)
from #results
select Total_Mbytes = str(sum(Kbytes)/1000.0,15,0)
from #results
drop table #results
drop table #col_table
return
/* ### DEFNCOPY: END OF DEFINITION */
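To get a feel for the arithmetic sp_estspace performs, the data-page portion can be sketched outside the server. This is a hedged sketch, not the procedure itself: it folds @sum_var and @sum_avgvar into a single list of assumed average column lengths, and assumes the default 2K page (2016 usable bytes) with a fill factor of zero:

```python
def est_data_pages(no_of_rows, sum_fixed, var_lengths=(), page_size=2016):
    """Sketch of sp_estspace's data-page arithmetic, fill factor 0.

    sum_fixed   -- total bytes of fixed-length columns
    var_lengths -- assumed stored length of each variable-length column
                   (the procedure uses max/2 unless the column is listed
                   in @cols_to_max; here the caller picks the values)
    page_size   -- usable bytes on a 2K page (2048 minus a 32-byte header)
    """
    num_var = len(var_lengths)
    sum_var = sum(var_lengths)
    if num_var == 0:
        row_len = 4.0 + sum_fixed                    # fixed-row overhead
    else:                                            # variable-row overhead
        row_len = 8.0 + sum_fixed + sum_var + num_var + sum_var / 256.0
    pages = no_of_rows * (row_len / page_size)
    pages = pages + pages / 256.0 + 1.0              # allocation pages
    return max(pages, 8.0)                           # at least one extent
```

For example, 10,000 rows of 100 fixed bytes each come out to roughly 520 pages (about 1MB at 2K per page), while any tiny table floors at 8 pages, one extent.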
++++++++++++++++++++++++++++++++++++++++++++++
Q10. What causes a database to be marked SUSPECT and can I recover
a database that comes up marked `SUSPECT'?
Answer: My previous response to this question contained BAD INFORMATION
that I want to clear up at this point. The previous example recommended the
use of the dbcc command 'save_rebuild_log'. DO NOT DO THIS. This command
does NOT rebuild your log, it just creates a NEW, EMPTY one, and leaves the
old one in the database, if possible, for Technical Support consultation as
part of a recovery plan. The intended use of this utility is to move the
log aside before initial patching, so checkpoints may be made without
disturbing a log that will be consulted later on for more clues into the
failure and the ensuing recovery. The previous post implied that this
command somehow rebuilt the transaction log. NOT! This, by the way, is a
good example of why only DOCUMENTED dbcc commands should be used (See earlier
question on undocumented dbcc commands).
A database is marked suspect when the integrity of the database is
questionable. The damage was caused at some previous time by a software or
hardware problem. Run the dbcc commands checktable or checkdb to determine
the extent of the damage.
There are occasional situations in which a database will be marked suspect
even though there is nothing wrong with the database. I ran into this
situation as an example:
A PC-client is rebooted during an update leaving an uncommitted
transaction in the transaction log. Eventually the transaction log
fills up even though it is dumped regularly. The sa, aware of the
problem, decides to cycle the server after dumping the transaction log
had minimal effect (only the inactive portion of the log was dumped).
When the database comes up, it is marked suspect. It is marked suspect
because the server is unable to do a checkpoint on recovery due to the
fact that the transaction log is full. This is a recoverable situation.
The following steps allow a suspect database to be recovered.
1. Start the server and watch the database come up "suspect"
2. execute isql as "sa"
> sp_configure "allow",1
> go
> reconfigure with override
> go
> update master..sysdatabases /* Bypass recovery on startup */
set status = -32768
where name='<suspect_db_name>'
> go
3. Shutdown and restart the server. The server will come up and the
database will not be marked suspect.
4. Execute isql as "sa"
> use <suspect_db_name>
> go
> update master..sysdatabases /* Reset the database status */
set status=0
where name='<suspect_db_name>'
> go
> sp_configure "allow",0
> go
> reconfigure
> go
5. Execute dbcc checkdb and checkcatalog to validate the integrity of the
database. If the database passes these checks, you can continue safely.
DO NOT ASSUME that the database is OK just because you were able to make
it recover by changing the status flag.
6. If dbcc indicates problems, you will need to COPY OUT YOUR DATA ASAP
(bcp), and REBUILD THE DATABASE. You may not even be able to do this, in
which case you must restore your database from previous database and
transaction log dumps.
++++++++++++++++++++++++++++++++++++++++++++++
Q11. My database tables often get locked by the client's hung
workstation. Is there a way that I can unlock those locked tables?
Answer: The most common cause of this kind of behavior is a PC client
whose user, in the middle of a query, decides he has had enough and reboots
the PC. This leaves a sleeping process with all of its locks on the table
held as is. A kill command will not be able to kill this process, since an
attention cannot be raised on a sleeping process. The only way to get around
this problem is to make sure that users do not reboot their machines in the
middle of a query.
Also, if you are using Q+E, you might want to set cancel = 1 in your qex.ini
or qe.ini, depending on the version of Q+E. This will force a dbcancel to be
issued when the query window is closed. If a dbcancel is not issued, then a
call to dbclose is made. More often than not, the connection is not closed
properly since there is pending data on that socket.
One other option is to set the keepalive parameter on the server machine to
a fairly low value, if this is a configurable parameter on your platform.
With this option set, if there is no response from the client socket within
the specified time frame, the server will drop that process. This
++++++++++++++++++++++++++++++++++++++++++++++
Q12. Does the server sort order affect performance? Is binary sort order
the fastest way?
Answer: Yes, binary sort order is fastest because no lookup is needed.
Please keep in mind that sort order only has impact on operations that
involve comparison of character data like creating indexes and evaluating
qualifications on character values. (Most of the performance gain of binary
sorting is that binary comparisons are native to all computers; all other
sort orders involve algorithmic binary comparisons of multi-byte abstract
data types.)
Sort orders are defined in .srt files found in the character set
directories. There are three values associated with each character. Looking
at the character file defining the sort order, you can correlate those three
values with the placement of that character in the file.
Primary sort value is determined by the line in the file.
Secondary sort value is determined by the position within the line.
Tertiary sort value is also dependent on the position of the character on
the line.
Some examples from files in the iso_1 directory of a 4.9.1 installation:
dictionary.srt
==============
char=0x41,0x61,0xC0,0xE0,0xC1,0xE1,0xC2,0xE2,0xC3,0xE3,0xC4,0xE4,0xC5,0xE5
;A, a, A-grave, a-grave, A-acute, a-acute, A-circumflex, a-circumflex,
;A-tilde, a-tilde, ;A-diaeresis, a-diaeresis, A-ring, a-ring
char = 0x42, 0x62 ;letter B, b
With dictionary sorting, every "a" is sorted before every "b" and among
different "a" values there is sorting based on the different secondary sort
values.
nocase.srt
==========
char=0x41=0x61,0xC0=0xE0,0xC1=0xE1,0xC2=0xE2,0xC3=0xE3,0xC4=0xE4,0xC5=0xE5
;A, a, A-grave, a-grave, A-acute, a-acute, A-circumflex, a-circumflex,
;A-tilde, a-tilde, ;A-diaeresis, a-diaeresis, A-ring, a-ring
char = 0x42=0x62 ;letter B, b
With case insensitivity, "A" and "a" have the same secondary as well as
primary sort order. That is denoted in the file by the equal sign between the
two hex values for their encodings in the ISO 8859-1 character set. The case
insensitivity also applies to names in the SQL Server so you could not have
two objects in the same database with names differing only in case, such as
SuperBowl and SuperbOwl.
noaccent.srt
============
char=0x41=0x61=0xC0=0xE0=0xC1=0xE1=0xC2=0xE2=0xC3=0xE3=0xC4=0xE4=0xC5=0xE5
;A, a, A-grave, a-grave, A-acute, a-acute, A-circumflex, a-circumflex,
;A-tilde, a-tilde, ;A-diaeresis, a-diaeresis, A-ring, a-ring
char = 0x42=0x62 ;letter B, b
With no accent, any "a" is equal to another. This could be useful if an
application searches on last names and the entry is not exactly correct, like
an A-grave instead of A-acute. This sort order is new with release 4.9.1. It is
considered very useful by some European customers.
The only difference between the files nocase.srt and nocasepref.srt is the
line "preference=true" in the latter. With preference, "A" is equal to "a".
However, in the results of a query with ORDER BY on a character column, "A"
will precede "a". This has important performance implications. An index on
character data cannot ensure values are already in the order you prefer, so
comparisons using tertiary sort values must be done in a worktable.
Robert Garvey
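The primary-key effect of these sort files can be mimicked, for plain ASCII at least, with a small Python sketch (an illustration only; the real .srt files cover the full ISO 8859-1 set and carry secondary and tertiary values as well):

```python
words = ["b", "A", "a", "B"]

# Binary sort order: compare the raw byte values of the encoding,
# so all uppercase ASCII letters sort before any lowercase letter.
binary = sorted(words)

# nocase-style order: the primary sort value folds case, so "A" and
# "a" compare equal and keep their original relative order.
nocase = sorted(words, key=str.lower)
```

Binary order yields ["A", "B", "a", "b"]; the case-folded order groups "A" with "a" and "B" with "b", which is exactly why no table lookup is needed for binary sorting.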
++++++++++++++++++++++++++++++++++++++++++++++
Q13. Does Sybase have a memory limit?
Answer: Sybase has no memory limit. The typical problem with getting the
memory you want on UNIX is due to the OS's insistence that there be enough
swap space to accommodate the entire data space of a process at startup time.
UNIX doesn't want to give out memory it cannot in theory write *entirely* to
disk at some point. Thus, when SQL Server asks for 16MB of memory, for
example, unless you have that much swap space available, the request will be
denied, and the server will live with less or abort. Run the utility your
UNIX provides to tell how much swap is in use when you have this memory
problem with SQL Server. If it varies a lot, consider putting your RUNSERVER
command file in your system startup procedure, so SQL Server starts up while
swap space is at its most free. If your swap space is consistently short of
large amounts of space regardless of other system activity, you'll need to
add more. The general rule is to have between 2 and 3 times physical memory
size in swap space.
Benjamin von Ullrich
----------------------------------------------------------------------
2.9. Sybase Performance Tuning
++++++++++++++++++++++++++++++++++++++++++++++
Q1. How much overhead do variable size and NULL columns require?
Answer: The Sybase Performance and Tuning class notes give the following
information:
An additional 5 bytes are used if there are ANY variable length fields.
An additional 1 byte is used for each variable length field.
Therefore, if you have two variable length fields, you have an extra seven
bytes per row. Also, note that any field defined as allowing nulls is treated
as variable length.
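Applying those two rules, a back-of-the-envelope calculator might look like this (a sketch of the class-note arithmetic only, not of the server's actual on-page row format):

```python
def variable_overhead(num_var_cols):
    """Per-row overhead from the class-note rules above: a flat 5 bytes
    if the row has ANY variable-length (or nullable) columns, plus 1
    byte for each such column."""
    if num_var_cols == 0:
        return 0
    return 5 + num_var_cols
```

With two variable length fields this yields the seven extra bytes per row quoted above.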
++++++++++++++++++++++++++++++++++++++++++++++
Q2. How are null values stored? How does Sybase distinguish between an
integer and a null value for an integer, and so forth?
Answer: Sybase stores NULL values as zero length columns of the required
datatype. The first byte represents the length of the field. That is how
the server knows whether a field is NULL, i.e., the length of the field is 0.
In the case of datatypes that are of fixed length, such as Integer, and
therefore do not contain a length prefix, Sybase has defined a set of
variable length equivalent datatypes to use when such columns are defined as
allowing nulls. Thus, any column that allows nulls is by definition of
variable length (this can have a significant impact on the way the optimizer
works in certain situations).
To get a quick idea of how large a table row will be, you can look in
sysindexes at the max/min values of the table if it is created. It won't give
you all that you need, but it will give you best/worst case row sizes, which
in many cases is good enough.
Thanks to Howard Michalski
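The length-prefix scheme can be illustrated with a toy Python encoder (purely illustrative; the server's real on-page format is more involved than a single length byte):

```python
def encode_column(value):
    """Toy length-prefixed variable-length column: one length byte
    followed by the data; NULL is stored as length 0 with no data."""
    if value is None:
        return bytes([0])
    data = value if isinstance(value, bytes) else str(value).encode()
    return bytes([len(data)]) + data

def is_null(stored):
    # A zero length byte is how the server knows the field is NULL.
    return stored[0] == 0
```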
++++++++++++++++++++++++++++++++++++++++++++++
Q3. How are text and image types stored?
Answer: Text and image data are stored on whole data pages for any value
or amount of data other than NULL. A pointer to the head of a chain of pages
is stored in the regular data pages in the table whenever a text or image
value is inserted into the table. If NULL is inserted, no text or image
pages are allocated, and thus there is no internal fragmentation.
Keep in mind that pages are always allocated and deallocated to/from a table
in extents of 8 pages, so initial column allocations take a good deal of
space.
Searching text (with LIKE) takes ONE LOCK PER PAGE. If your server is still
configured for the default number of locks (5000), you will quickly run out
of locks on your whole SQL Server unless you "up" this value. TEXT is not
meant to be searched -- use some other table or summary field to describe the
relevant contents of text field(s) you must search.
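The lock arithmetic is easy to sketch: one shared lock per 2K page means even a modest text column exhausts the default configuration. A hedged Python sketch (it ignores page headers, so real chains need slightly more pages than this):

```python
import math

def text_scan_locks(text_bytes, page_size=2048):
    """Approximate locks taken by a LIKE scan over one text value:
    one shared lock per page in the text page chain."""
    return math.ceil(text_bytes / page_size)
```

A single 20MB text value already needs about 10240 locks, more than double the 5000-lock default.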
++++++++++++++++++++++++++++++++++++++++++++++
Q4. How do I interpret the cryptic output of 'set showplan on'?
Answer: The 'set' commands provide invaluable information about how a
particular batch or query is going to execute. The 'set showplan on' command
displays a number of phrases that help to determine what decisions the query
optimizer has made.
Most of the usefulness of this output is the index selection. Look for index
usage that is consistent with what you would expect, knowing the nature of
clustered versus non-clustered indexes. This is a long story, and requires
in-depth knowledge of these index types and the optimizer's related choices.
Following is a list of typical phrases and the meaning of each of these
phrases. (This is not an exhaustive list and will be more fully developed in
future releases.):
1. FROM TABLE <tablename> Nested iteration Table Scan
This indicates that the server is going to access every single row in
the table to perform this query. Every single page will be read. NOTE
-- Don't always be alarmed by every instance of Table Scan. Tables
which are less than one extent (16K = 8 pages at 2K per page) in size
are ALWAYS scanned. Worktables, which are created on the fly, tend to
stay in cache, so these are less of a performance hit than table scans
on user-defined tables.
2. FROM TABLE <tablename> Nested iteration Index: <indexname>
The server is going to access rows in this table using the explicitly
named index. The server reads only those pages on which the nonclustered
index indicates a row exists. Only the appropriate portions
of the table are accessed.
3. FROM TABLE <tablename> Nested iteration using Clustered Index.
The server is going to access rows in this table using the clustered
index. The name of this index will not be explicitly specified, but
there can only be one clustered index per table. The server reads only
those pages on which a row exists. Often, there are more "hits" per
page retrieved with a clustered index than with a nonclustered one, since
the data is physically ordered according to the declaration of the clustered
index. The fact that the clustered index was selected by the plan
indicates that rows of similar nature are being retrieved and those rows
are positioned physically close to one another on the disk.
++++++++++++++++++++++++++++++++++++++++++++++
Q5. How does the query optimizer work? Does the ordering of tables in
the from clause or the conditionals in the where clauses affect the
performance of the query?
Answer: Normally, the ordering in the from clause and the where clause
will not affect the performance of the query. The only time that it can have
this effect is if there is more than one query plan that the optimizer
estimates will take exactly the same time as the best plan. In this case, the
optimizer will choose the first of these plans that it sees. The ordering in
the from and where clauses will change which of these plans it sees first.
Only in this case will the ordering affect the query plan.
This will affect the performance if some of these plans with identical cost
estimates are significantly faster or slower than the others. This should not
happen - the optimizer's cost estimates should reflect the true cost of
running the query. But in practice, the optimizer sometimes has a bug or
other problem that causes the cost estimates to be inaccurate.
So, for the ordering in the from or where clause to affect the performance,
the following must be true:
Two or more query plans have the same cost estimate, and this is the
lowest cost estimate for the query.
A bug in the optimizer causes one of these identical cost estimates to be
significantly inaccurate.
Needless to say, these two things don't happen very often at the same time.
Jeff Lichtman
++++++++++++++++++++++++++++++++++++++++++++++
Q6. Can I force the optimizer to access tables in a certain order or to
use a particular index?
Answer: Yes, if one of your problems is that tables are being accessed in
the wrong order (the showplan is screwed up) then you can try the following:
set forceplan on
select . . .
from table_a a, table_b b, table_c c
where . . .
set forceplan off
The 'set forceplan on/off' will tell the optimizer to access the tables in
the order that they've been listed in the 'from' clause. Mind you, you have
to make this determination as to which tables should come first in the list.
You can force the server to use a particular index by putting the index id
('indid' from sysindexes) in parentheses after the table name in the 'from'
clause, but I recommend reconfiguring the query so that the optimizer can
figure out which index to use on its own. Usually you can specify enough
relationships to "nudge" the optimizer the right way.
Steve Medin had these comments on this topic:
Force index is implemented by placing a number after the table name in the
from clause. The number refers to the index that will be used by the
optimizer, where the clustered index is always (1) and the nonclustered
indices are sequenced in the order of your DDL create index statements, or
chronologically if you have several scripts that build your indices. The
possibility that a nonclustered index will get out of sequence is fairly
high, but this feature can be quite useful if the optimizer refuses to use
the clustered index on a table and you have provided where criteria for all
the index columns. To force use of the clustered index on your ORDERS table,
try:
...
FROM ORDERS(1),
...
This, again, can be useful when you can make an assumption about a table's
size and you would rather tablescan a tiny table than get a clustered index
iteration on the larger table that will not use the clustered index.
Try these out with showplan and stats io on. If you're really daring, try
putting them in live application code. When you call tech support, they will
tell you to remove the statements and recreate the problem. Tech Support
will not refuse to open a case over subjective judgements about the use of
Sybase software. However, if a user is insisting that the *undocumented*
query optimization rules behave in some particular manner, especially when
the *undocumented* forceindex feature is used, it is understandable that
Sybase refuses to 'fix' this 'bug'.
++++++++++++++++++++++++++++++++++++++++++++++
Q7. Does dropping an index cause recompilation of a stored procedure?
Answer: Yes, dropping an index will cause recompilation of stored
procedures which `touch' the indexed table. Adding an index, or updating
statistics will NOT.
Use sp_recompile <tablename> to force all objects referencing tablename to
recompile on their next execution.
++++++++++++++++++++++++++++++++++++++++++++++
Q8. Does the time for a select that yields 1000 rows from a table of
10,000 differ much from the same select when the table contains 100,000
rows?
Answer: Table size would not be a factor if you have a clustered index on
the columns used to locate the SELECTed rows. Since clustering orders the
rows by the columns which make up the index keys, we would locate the first
data page where the key matches the qualification, and follow the page chain
until the next key is encountered, and stop scanning. This all depends on the
type of qualification, but this illustrates that a clustered index orders a
table such that any part of it is just as locatable as any other, regardless
of total size. B-trees properly maintained are never very deep, so index
depth is never an issue in SQL Server.
Actually, you could also achieve similar response times on a small vs. large
table if the result columns of the query are covered by a nonclustered index.
This is a poor way to accomplish this, however, since non-clustered indexes
on multi-million row tables take up a good deal of room, but this can be your
only alternative if you are already using the clustered index for something
else and can't change it.
Be careful about letting the table get too wide (many fields, and/or many
large character fields). Normalize out these "big text" fields to other
table(s) that you only look at when you need to. Some of the best
performance gains can be had by fitting more rows per page.
Benjamin von Ullrich
++++++++++++++++++++++++++++++++++++++++++++++
Q9. Is there a way to gather performance statistics besides using
sp_monitor?
Answer: Sybase is now offering a product called SQL Monitor. It is a
separate server that monitors shared memory and provides detailed information
on server internals.
There is also a PC based tool called SQL Watch by PACE Systems that is pretty
good.
You may also want to check out Xsybmon by David Joyner. This is a free
application that continuously executes the stored procedure sp_monitor and
displays the results. See section 9.3 for details on this product.
Version 4.8 and above of the SQL Server also offer the dbcc command
'monitor'. It is used in the following manner:
/* Zero all of the counters */
dbcc monitor("clear", "all", "on")
go
/* Wait during the sampling period, typically 60 seconds to allow for
accumulation of data */
/* Sample the counters */
dbcc monitor("sample", "all", "on")
/* View the counters */
dbcc traceon(8399) /* Enable useful names */
go
select field_name, group_name, value
from sysmonitors
[where value != 0]
Various monitoring groups will be displayed.
++++++++++++++++++++++++++++++++++++++++++++++
Q10. Does Sybase do page or row level locking?
Answer: Sybase does page level locking and, under certain circumstances,
will escalate locks to the table level. If an update is issued that would
require more than about 200 exclusive page locks, the server will try to
escalate to a lock on the whole table rather than lock the individual pages.
The escalation attempt may not succeed and thus locking may well continue at
the page level even in this case.
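One way to observe this from a second isql session is sketched below; it
assumes a large update is running in another connection, and the exact output
format of sp_lock varies by version:

```sql
/* Run from a second connection while the large update is in progress */
sp_lock
go
/* Before escalation you will see many Ex_page entries for the table's
   pages; if the escalation succeeds they are replaced by a single
   Ex_table entry held by the same spid */
```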
Thanks to David Shanahan
++++++++++++++++++++++++++++++++++++++++++++++
Q11. What types of locks can be issued and what do they mean?
Answer: Locks can be placed on a page, a table, or an extent. (An extent
is a group of 8 database pages that are being either allocated or
deallocated.) Sybase does not support row level locking.
Exclusive locks, beginning with the prefix 'Ex_' are set so that no other
transaction can acquire a lock of any kind on the locked objects until the
original lock is released at the end of the transaction.
Shared locks, beginning with the prefix 'Sh_', are issued for non-update or
read operations. When a shared lock is applied to a table or page, other
transactions can also acquire a shared lock even though the first transaction
has not completed. No transaction can acquire an exclusive lock until all
shared locks on it have been released.
An intent lock represents the intention to acquire a shared or exclusive
lock on a page.
An extent lock is used when a CREATE or DROP command is running, or while an
INSERT operation that requires new pages for data or index entries is
running.
A demand lock prevents any additional shared locks from being issued on an
object. This is required since shared locks can overlap one another and
force a write transaction to wait indefinitely. The demand lock is issued
after a write operation waits on four successive read locks to complete.
The locks which are currently being enforced can be monitored using sp_lock.
This shows which spid has which objects locked in which database and the type
of lock that is in place.
The following types of locks are reported by the sp_lock stored procedure.
(The list below was generated by issuing the SQL statement "select name from
master.dbo.spt_values where type = 'L' order by name"):
Ex_extent
Ex_extent-blk
Ex_extent-demand
Ex_intent
Ex_intent-blk
Ex_intent-demand
Ex_page
Ex_page-blk
Ex_page-demand
Ex_table
Ex_table-blk
Ex_table-demand
Sh_extent
Sh_extent-blk
Sh_extent-demand
Sh_intent
Sh_intent-blk
Sh_intent-demand
Sh_page
Sh_page-blk
Sh_page-demand
Sh_table
Sh_table-blk
Sh_table-demand
Update_page
Update_page-blk
Update_page-demand
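As a sketch of how to read sp_lock output in practice: locks are reported by
object id, which you can translate to a name from within the owning database
(the id below is hypothetical):

```sql
sp_lock
go
/* Typical columns: spid, locktype, table_id, page, dbname.
   Translate a table_id to a name from within that database: */
select object_name(16003088)
go
```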
++++++++++++++++++++++++++++++++++++++++++++++
Q12. What exactly does the HOLDLOCK keyword do?
Answer: The HOLDLOCK keyword is used in the from clause of a select
statement to make a shared lock more restrictive. Normally, a shared lock is
released as soon as the required table, view or page is no longer needed,
regardless of whether or not the transaction is complete. Using a HOLDLOCK
on a particular table extends the reach of the shared lock to the end of the
transaction in which it is involved, even if the statement no longer requires
the lock. This assures read consistency within a transaction when there is
the possibility that another user might update the table between two
successive reads of the data. In other words, if a HOLDLOCK is NOT used,
there is no guarantee that a row that is read twice within a transaction will
result in the same value both times.
It is important to remember that HOLDLOCK only issues a shared lock and
NOT an exclusive lock. Other users can also issue a shared lock and read
through your shared lock to obtain the same value; therefore, you should not
base an update on a value obtained through the use of a shared lock.
For example, DO NOT DO THIS...
begin transaction
select col1 from table HOLDLOCK where conditions
update table set col1 = col1 + 1 where conditions
commit transaction
You might expect this to prevent others from reading the same column you are
planning on changing, but this is not the case. There is a possibility that
another user may read and update the column based on the value they read,
possibly overwriting the change you just made.
The solution to the above example is to update the column first, thereby
obtaining an exclusive lock that is held through the end of the transaction,
as in
begin transaction
update table set col1 = col1 + 1 where conditions
select col1 - 1 from table where conditions
commit transaction
++++++++++++++++++++++++++++++++++++++++++++++
Q13. Why, when a stored procedure is forced to recompile, does the query
plan grow, eventually causing the stored procedure to fail?
Answer: Any of the following will cause a stored procedure to grow when it
is recompiled:
1. One of the tables used in the procedure is dropped and recreated.
2. A new rule or default is bound to one of the tables or the user runs
sp_recompile on one of the tables.
3. The database containing the stored procedure is re-loaded.
Other things causing a stored procedure to be re-compiled will not cause it
to grow. For example, dropping an index on one of the tables used in the
procedure or doing EXEC WITH RECOMPILE.
The difference is between simple recompilation and re-resolution. Re-
resolution happens when one of the tables changes in such a way that the
query trees stored in sysprocedures may be invalid. The datatypes, column
offsets, object ids or other parts of the tree may change. In this case, the
server must re-allocate some of the query tree nodes. The old nodes are not
de-allocated (there is no way to do this within a single procedure header),
so the procedure grows.
In time, trying to execute the stored procedure will result in a 703 error
about exceeding the 64 page limit for a query.
System 10 Notes:
1. In System 10, the server automatically compresses stored procedures
upon recompilation, so the problems above are fixed.
2. There is no longer a page limit on the size of stored procedures; they
can grow until the configured procedure cache is depleted.
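On pre-System 10 servers, the usual workaround once a procedure has grown is
to drop and re-create it, which rebuilds its query tree from scratch. A
sketch follows; proc_name and its one-line body are placeholders for the real
procedure:

```sql
/* Re-creating the procedure discards the accumulated query tree nodes */
drop procedure proc_name
go
create procedure proc_name
as
    select 1    /* original procedure body goes here */
go
```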
Thanks to Andrew Fergusen
++++++++++++++++++++++++++++++++++++++++++++++
Q14. What is a segment and why should I use one?
Answer: When using segments to optimize a database's performance, there
are two things to keep in mind:
1. Contention with the default and system segments.
2. Recovery.
Care must be taken to avoid contention with the default and system segments.
Typically, when the decision has been made to assign a table or index to a
particular database segment, the intention is to reserve that segment for
operations on a particular object. When creating the database, the 'system'
and 'default' segments will point to all available 'non-logsegment' devices.
This means that if one creates a user segment, by default the devices it
points to will also be pointed to by the 'system' and 'default' segments. To
eliminate the risk of the user segment filling up, or contention on the
devices from other objects in the 'default/system' realm, use 'sp_dropsegment'
to remove the "maps" to those devices. For example:
create database USERDB on data1=10, data2=10, data3=10
log on log1=20
go
use USERDB
go
exec sp_dropsegment "default", data3
exec sp_dropsegment "system", data3
go
sp_addsegment "seg1", data3
go
.
.
.
create table TABLE1 (i int, date datetime)
on seg1
go
This gives TABLE1 complete "ownership" of device 'data3'. Keep in mind that
when a table/index is created, its growth is restricted to the space available
within its segment's "domain". In this example, TABLE1 can grow no larger
than ~10 Meg. This example brings up an interesting footnote. A database
will allocate space on its assigned devices in order by their NAME. In this
case, the system tables for USERDB were created on device data1 (the first
device in the system segment - which, BTW, pointed to data1, 2, and 3 up to
the point where we dropped the map to data3). If we were to create segment
'seg1' as mapped to device 'data1', TABLE1 would "share" a portion of 'data1'
with the system tables that were created there during the CREATE DATABASE
execution. The system tables would "grow" onto devices in the system
segment's "domain", but would be "anchored" on 'data1'.
Now with regards to item number 2, recovery. When restoring from a database
dump it is important to remember that the dump will restore the segments but
NOT their mappings. This is not an issue if you are restoring over the same
database from which the dump was made, but when loading onto a new database
the segment's maps must be rebuilt.
If a dump was taken of USERDB (above) and loaded onto a database (called
NEWUSERDB), segment 'seg1' would exist but its map to device 'data3' would
not. [This supports dumps across servers without dependency on device names,
etc.] An extra step is needed to rebuild the logical-to-physical mapping of
'seg1' to the equivalent device, 'data30'. To do this, proceed as follows:
/* The new database, created with equal fragment mappings, but on
different device names */
create database NEWUSERDB on data10=10, data20=10, data30=10
log on log10=20
go
load database NEWUSERDB from some_dump
go
use NEWUSERDB
go
exec sp_dropsegment "default", data30
exec sp_dropsegment "system", data30
go
sp_extendsegment seg1, data30
go
Note that 'seg1' exists after the load, but has no maps. Also note that the
re-mapping of 'seg1' could have taken place before the load database command.
Procedure sp_extendsegment will also work on a database that has been created
using the FOR LOAD option.
The easiest way to picture all of this is to look at the schema diagrams that
outline the server's system tables (master versus userdb). Segment info on a
particular database is kept in BOTH the user and master databases. The user
database maintains object mappings to a segment and the master database
maintains segment mappings to a device (or devices).
Provided by Howard Michalski
++++++++++++++++++++++++++++++++++++++++++++++
Q15. What determines whether an update will be performed 'in place' or
deferred?
Answer: Presence or absence of varchar columns does not affect whether the
server does a direct (in-place) or deferred (delete/reinsert) update. As of
this writing, the rules for doing direct updates are:
1. If multiple columns are being updated, they must be contiguous.
2. The column(s) being updated must all be fixed length -- the row may not
change size because of the update.
3. Exactly one row must be affected, AND SQL Server must know this at the
beginning of the query.
4. No column being updated may participate in the index that was used to
find the row.
5. No update triggers may be present on the table being updated.
Any update that doesn't follow all the rules is deferred, not direct.
(Note: performance work is presently being done that may eventually relax
some of these restrictions. However, these are the current rules.) If these
rules make it sound like almost all updates are deferred rather than direct
... guess what?
[By the way. In a direct update, we need to update only those indexes that
actually refer to the columns being changed. Deferred updates, on the other
hand, require that we update every index referring to the row.]
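As an illustration of the rules, assume a hypothetical table accounts
(acct_id int with a unique index on it, balance int, owner_name varchar(40)).
The first update below can be direct, while the second is deferred:

```sql
/* Direct: one row, fixed-length column, column not in the index used */
update accounts set balance = 100 where acct_id = 42
/* Deferred: a varchar column is not fixed length, so the row may
   change size (violates rule 2) */
update accounts set owner_name = 'a longer name' where acct_id = 42
```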
Thanks to Elton Wildermuth
++++++++++++++++++++++++++++++++++++++++++++++
Q16. How does altering a database table to add a new column affect the
storage of the affected table?
Answer: No rows in your table are changed as a result of ALTER TABLE. The
only thing that happens is that the table's schema is updated to reflect the
extra column. That column is NULL in all presently existing rows, and we can
tell that without making any changes to the data rows.
What can take massive amounts of time, though, is the subsequent UPDATE that
stuffs data into all those previously NULL columns. Depending on how full
each page is, there can be a huge storm of page splits. Also note that, by
rules 2 and 3 of the previous question, all those updated rows get deleted,
then reinserted.
Thanks to Elton Wildermuth
++++++++++++++++++++++++++++++++++++++++++++++
Q17. How do I delete a column from a table?
Answer: Sybase doesn't let you "alter table drop <column>". You must make
a new copy of the table, excluding the unwanted column.
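A minimal sketch of that copy-and-rename approach, assuming a hypothetical
table t1 (i int, extra char(10)) from which you want to drop extra; the
database must have the select into/bulkcopy option enabled:

```sql
select i into t1_new from t1    /* copy only the columns to keep */
go
drop table t1
go
sp_rename t1_new, t1            /* rename the copy back into place */
go
/* Re-create any indexes, triggers, and permissions on the new table */
```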
----------------------------------------------------------------------
2.10. Sybase Network Issues
++++++++++++++++++++++++++++++++++++++++++++++
Q1. How can I make Sybase talk to two separate ethernet interfaces on
our server?
Answer: You can have as many master entries in the interfaces file as
you have protocol/port combinations. Simply add a new line for the
alternate hostname assigned to the second ethernet port, e.g.
The interfaces entry was:
SYBASE
query tcp sun-ether primename 2025
master tcp sun-ether primename 2025
console tcp sun-ether primename 2026
debug tcp sun-ether primename 2027
And it now is
SYBASE
query tcp sun-ether primename 2025
query tcp sun-ether secondname 2025
master tcp sun-ether primename 2025
master tcp sun-ether secondname 2025
console tcp sun-ether primename 2026
debug tcp sun-ether primename 2027
The key on the server end is the master line not the query line.
++++++++++++++++++++++++++++++++++++++++++++++
Q2. Can I use Sybase over PPP (Point-to-Point Protocol)?
Answer: Yes. The PPP interface to your host is an extra interface with a
new hostname. If you look in Sybase's interface file you'll see that you
specify a hostname and a portnumber. This means that Sybase will listen to
that particular portnumber on the interface that corresponds with the
hostname specified in the interfaces file. This is probably your ethernet.
Telnet and friends listen to ANYHOST, a special ip-address that translates to
any interface that is up in the kernel.
The solution is simple and a bit Sybase version specific. First the hacks.
1. ANYHOST is implemented as ip address 0.0.0.0. If you add a host ALL to
your hostfile and use ALL as hostname in the interfaces file, Sybase will
pass 0.0.0.0 as ip address to the kernel and listens to its portnumber on
all interfaces.
2. Some versions of Sybase appear to have the constant hostname NULLHOST
built in. The principle is the same as in 1.
3. Now the proper solution. I don't know which version you need. Probably at
least 4.8. May also be platform specific. But you can add more than one
tcp line to the interfaces file (See previous Question). You can
duplicate the line for "master" for each interface you want Sybase to
listen to (that is duplicate with the appropriate hostname).
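Sketching option 3, the interfaces entry might look like the following; the
hostnames are hypothetical, with 'ppp-host' being the hostname bound to the
PPP interface:

```
SYBASE
	query tcp sun-ether ether-host 2025
	master tcp sun-ether ether-host 2025
	master tcp sun-ether ppp-host 2025
```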
dave@exlog.com (Dave St.Clair)
======================================================================
3. Sybase Core Applications
----------------------------------------------------------------------
3.1. Open Client
++++++++++++++++++++++++++++++++++++++++++++++
Q1. Has anyone implemented a C++ class library for Sybase?
Answer: A class library in this context provides a mechanism for allowing
an object-oriented language such as C++ to access and manipulate database
objects. Some of these class libraries provide an abstraction of multiple
databases, such as Oracle, Ingres, and Sybase, to provide a single library of
routines to access all of these different products.
See the archive ftp.acs.ncsu.edu:/pub/sybase++ for info. Section 9.2 below
also lists commercial products that have implemented database class
libraries.
++++++++++++++++++++++++++++++++++++++++++++++
Q2. How can I use the Sybase Open Client with my C++ code?
Answer: Create a header file like the following and you're all set.
#ifndef _FIX_SYBASE_H
#define _FIX_SYBASE_H
#define COMPILE_STYLE CPP_COMPILE
extern "C"
{
#include "sybfront.h"
#include "sybdb.h"
};
#endif /* ifndef _FIX_SYBASE_H */
++++++++++++++++++++++++++++++++++++++++++++++
Q3. Which C compiler(s) is the DOS version of the Open Client software
compatible with?
Answer: The Open Client was compiled using Microsoft C.
David Benua (dbenua@panix.com) had this to say... I haven't tried this with
the Sybase OC, but I've seen this problem with a number of other vendors'
software packages. Normally the problem is that the .LIB was built to
import routines
from MLIBCEW.LIB (or some other such library). I've gotten around this by
building an empty lib (with the Borland lib program) named MLIBCEW.LIB and
including it in the load list.
----------------------------------------------------------------------
3.2. Open Server
----------------------------------------------------------------------
3.3. APT
++++++++++++++++++++++++++++++++++++++++++++++
Q1. Is it possible to place other visible fields on top of invisible
fields, or do I have to have big open spaces?
Answer: There was not, until version 5.0, a way to store necessary lookup
data, foreign keys, and other miscellaneous data in client address space
that many fpo's need access to. The largest problem with variables in fpo's
is that they are automatic, and are gone as soon as the procedure exits.
Global variables in 5.0 help, but only if you can tolerate a single value
for a variable across all forms in a system at all times. Even under 5.3
APT, there is no way to store a GROUP that your application code needs on
the client anywhere but hidden on the form.
Any perceived performance hit has always involved a very busy client machine
(in which all client OS processes were lagging), poor throughput on the SQL
Server, or the network between the two. With modern midrange hardware, APT
has never been, nor should be, a performance bottleneck in and of itself.
----------------------------------------------------------------------
3.4. DWB
----------------------------------------------------------------------
3.5. Report Writer
++++++++++++++++++++++++++++++++++++++++++++++
Q1. How can I load the reports into a production db in a batched (non-
interactive) way?
Answer: Report Writer only wants to load them interactively, which does
not integrate well with reasonable procedures of code management and
software change control. Thanks to M. Cushman for this answer.
----------------------------------------------------------------------
3.6. Gain Momentum
======================================================================
4. Third Party Applications
----------------------------------------------------------------------
4.1. User Interface/Client Applications
++++++++++++++++++++++++++++++++++++++++++++++
1. JYACC JAM/DBi
Company: JYACC, Inc.
Address: 116 John Street
-or- One Sansome St., Suite 2100
New York, NY 10038 San Francisco, CA 94104
Phone: 800-458-3313 415-951-1070
Fax:
Summary:
++++++++++++++++++++++++++++++++++++++++++++++
2. Uniface
Company:
Address:
Phone: 410-740-8745 -or- 510-748-6145
Fax:
Summary:
++++++++++++++++++++++++++++++++++++++++++++++
3. Power Builder (Microsoft Windows only)
Company: Powersoft Corporation
Address: 70 Blanchard Road
Burlington, MA 01803
Phone: 617-229-2200
Fax:
Summary:
++++++++++++++++++++++++++++++++++++++++++++++
4. Microsoft Access/Visual Basic
Company: Microsoft Corp.
Address:
Phone:
Fax:
Summary:
Windows 3.1
++++++++++++++++++++++++++++++++++++++++++++++
5. DataEase
Company: DataEase International, Inc.
Address: 7 Cambridge Drive
Trumbull, CT 06611
Phone: (203) 374-8000
Fax:
Summary:
++++++++++++++++++++++++++++++++++++++++++++++
6. Unify
Company:
Address: 3901 Lennane Drive
Sacramento, CA 95834-1922
Phone: 800-24-UNIFY
Fax:
Summary:
++++++++++++++++++++++++++++++++++++++++++++++
7. Focus
Company: Information Builders, Inc.
Address: 1250 Broadway
New York, NY
Phone: 212-736-4433
Fax:
Summary:
++++++++++++++++++++++++++++++++++++++++++++++
8. ObjectView
Company: KnowledgeWare Inc
Address: 3340 Peachtree Road, N.E.
Suite 1100
Atlanta, GA 20226
Phone: (404) 231-8575
Fax:
Summary:
Windows 3.1
Supports DDE
Workgroup edition available
++++++++++++++++++++++++++++++++++++++++++++++
9. Q+E
Company: Pioneer Software
Address:
Phone:
Fax:
Summary:
Windows 3.1.
Simple spreadsheet-like browser.
Can be used as an OLE object.
++++++++++++++++++++++++++++++++++++++++++++++
10. Superbase
Company: SPC Software
Address:
Phone:
Fax:
Summary:
Windows 3.1
Complete database forms/report/application package.
SQL link purchased separately.
Can be used as an OLE object.
++++++++++++++++++++++++++++++++++++++++++++++
11. R&R Report Writer for Windows, SQL Edition
Company:
Address:
Phone: 508-366-1122
Fax:
Summary:
Windows 3.?
Supports Sybase SQL Server, Microsoft SQL Server,
Oracle, Netware SQL, Btrieve, and dBASE databases
++++++++++++++++++++++++++++++++++++++++++++++
12. CorVu
Company: Sycomp Pty Ltd
Address: Level 4
16. James Place
North Sydney 2060
AUSTRALIA
Phone: +61 2 959 3522
Fax: +61 2 959 3583
Summary: CorVu is an end user graphical query and report writing tool with
decision support and forecasting options. Requires Microsoft Windows 3.1. Can
use ODBC or Sycomp's own communications layer, which is faster than and
avoids the need to have Sybase's Open DB-library on each client machine.
----------------------------------------------------------------------
4.2. Class Libraries
++++++++++++++++++++++++++++++++++++++++++++++
1. DBh++
Rogue Wave
++++++++++++++++++++++++++++++++++++++++++++++
2. C++ API
Qualix email at info@qualix.com
++++++++++++++++++++++++++++++++++++++++++++++
3. Persistence
Persistence Software
----------------------------------------------------------------------
4.3. Other Miscellaneous Products and Tools
++++++++++++++++++++++++++++++++++++++++++++++
1. SybPERL
++++++++++++++++++++++++++++++++++++++++++++++
2. SQL-BackTrack
Company: DataTools, Inc.
Address:
Phone:
Fax:
Summary: A complete backup and recovery tool for Sybase. Its
features include:
object-level backup and recovery
automated and remote backups
backups to ANSI-labeled tape, disk, optical disk, and Legato NetWorker
logical and physical format backups (logical format backups are
device-, OS-, and Sybase-version-independent, which means you can
move data from 4.1 to 4.9 or 10)
incremental backups
data compression and data encryption
Also, for the record, SQL-BackTrack writes multiple dumps to a single tape
(3.4-1), as well as dumps spanning tapes.
DataTools has a marketing agreement with Sybase to support System 10, and has
a paper comparing the SQL-BackTrack backup facilities with System 10 and
describing how SQL-BackTrack will extend the System 10 Backup Server.
Documentation and literature available upon request.
++++++++++++++++++++++++++++++++++++++++++++++
3. dbViewer
Qualix
++++++++++++++++++++++++++++++++++++++++++++++
4. Xsybmon
Shareware by David Joyner
Comments: Xsybmon wraps up all the statistics covered by sp_monitor as well
as some other useful information in a Motif interface. The latest version is
available via anonymous FTP from:
straylight.acs.ncsu.edu:/pub/sybase
++++++++++++++++++++++++++++++++++++++++++++++
5. Sybtcl
Comments: Sybtcl is an extension to Tool Command Language (Tcl) that
provides access to a Sybase Database server. Sybtcl adds additional Tcl
commands that login to a SQL Server, pass SQL code, read results, etc.
Sybtcl was inspired by similar tools written for Perl (sybperl, oraperl) but
was written from scratch instead of borrowing on the work of either Perl
extension.
Sybtcl features:
o supports multiple connections to the same or different SQL Servers
o provides "nextrow" processing for regular and compute return rows
o converts results to strings and returns rows as Tcl lists
o allows user defined null values to be returned
o stored procedures can be executed and return values accessed
o accesses column names, lengths, and datatypes of rows & return values
o provides feedback of SQL Server and DB-Lib messages
o reads/writes text or image datatypes to files
Sybtcl does not:
o perform row buffering or browse mode
o bulk copies
o support two phase commit on multiple servers
REQUIREMENTS
Since Sybtcl is an extension to Tcl, you should already have Tcl, or be
prepared to get it via Ftp [sites listed below].
Of course, you must also have access to a Sybase Database Server.
Additionally, you must have the Sybase Open Client (aka "DB-Library") package
that provides header files and object libraries; Sybtcl must be linked with
libsybdb.a.
I normally build Sybtcl with Tcl, Extended Tcl, and the X11 Tk widget set
yielding tcl and wishx interpreters. Sybtcl is written with no dependencies
other than Tcl, so it should be possible to link it with the minimal Tcl
library. (Although Sybtcl uses "handles", I didn't rely on the handle
functions provided by Extended Tcl.)
The distribution sybtcl-1.3.tar.Z is on harbor.ecn.purdue.edu in
/pub/tcl/extensions.
Tom Poindexter, tpoind@advtech.uswest.com or tpoindex@nyx.cs.du.edu
--
+==============================+=============================================+
| David W. Pledger | S T R A T E G I C D A T A S Y S T E M S |
| davidp@meaddata.com | PO Box 498, Springboro, OH 45066 |
| Custom Database Applications | Phone (513)748-2460, (800)253-5624 ext 2940 |
+==============================+=============================================+