Discussion:
Backup of system databases for Disaster Recovery
Richard
2007-11-28 16:41:43 UTC
We're reviewing our disaster recovery procedures now that we've
upgraded to 15.0.2 (from 12.5.0.3). In fact, we're getting the
opportunity to test those procedures (long overdue) instead of them
just being theoretical. We're struggling with deciding which system
databases we should do transaction log dumps on. Here's what we're
planning.

master - the "Troubleshooting" manual says to "Truncate the master
database log frequently". I'm assuming this means "dump transaction
master with truncate only", but I guess a "dump transaction master to
filespec" would achieve the same result. Can you even apply log dumps
when recovering master? We'll perform full dumps of master daily (to
capture password changes, etc), and after any device/database changes.
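For reference, here are the two forms we're comparing (the dump file paths below are just placeholders):

```sql
-- Option 1: truncate the log without keeping a copy
dump transaction master with truncate_only
go

-- Option 2: dump the log to a file (assuming that's even allowed for master)
dump transaction master to "/dumps/master_tran.dmp"
go

-- Daily full dump of master, repeated after any device/database change
dump database master to "/dumps/master.dmp"
go
```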

sybsystemprocs - perform full dumps daily and log dumps throughout the
day to capture any changes to site-specific stored-procs that may have
occurred since the last full dump.

sybsystemdb - We don't use any two-phase commits. Is there any need
to back this up?

sybmgmtdb - perform full dumps daily and log dumps throughout the day
to capture any data changes (eg: task history) that may have occurred
since the last full dump.

model - perform full dumps daily and log dumps throughout the day to
capture any changes (eg: new user datatypes) that may have occurred
since the last full dump. Changes are unlikely since changes to
model are infrequent, but missing a model update is the last thing
we'd want to worry about during a disaster recovery.

We also bcp the system tables mentioned in the "planning for recovery"
section of the manuals, but it seems like syssrvroles and
sysloginroles should be on the list also.

Any guidance greatly appreciated.

TIA
Mark A. Parsons
2007-11-28 17:42:40 UTC
Post by Richard
We're reviewing our disaster recovery procedures now that we've
upgraded to 15.0.2 (from 12.5.0.3). In fact, we're getting the
opportunity to test those procedures (long overdue) instead of them
just being theoretical. We're struggling with deciding which system
databases we should do transaction log dumps on. Here's what we're
planning.
master - the "Troubleshooting" manual says to "Truncate the master
database log frequently". I'm assuming this means "dump transaction
master with truncate only", but I guess a "dump transaction master to
filespec" would achieve the same result. Can you even apply log dumps
when recovering master? We'll perform full dumps of master daily (to
capture password changes, etc), and after any device/database changes.
Trunc the log and 'dump database master to ...' on a regular basis, in conjunction with a bcp out of the critical tables.
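Something along these lines for the bcp out; the server name, the -Usa login and the output directory are placeholders, and the script just prints the commands it would run rather than executing them:

```shell
#!/bin/sh
# Sketch of bcp-ing out the critical master tables.
# MYSERVER, /dumps/master_bcp and the sa login are assumptions, not
# anything from this thread -- substitute your own.
SERVER=MYSERVER
OUTDIR=/dumps/master_bcp

gen_bcp_cmds() {
    # -c = character mode, so the files stay readable during a recovery
    for tbl in sysdatabases sysdevices sysusages syslogins \
               sysloginroles syssrvroles; do
        echo "bcp master..$tbl out $OUTDIR/$tbl.bcp -S $SERVER -Usa -c"
    done
}

# Print the commands (bcp may not be on the PATH where this is tried)
gen_bcp_cmds
```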
Post by Richard
sybsystemprocs - perform full dumps daily and log dumps throughout the
day to capture any changes to site-specific stored-procs that may have
occurred since the last full dump.
If you really have that much activity in sybsystemprocs I assume you could do transaction log dumps (I haven't actually
done it myself).

I usually find that one of two options suffices:

1 - regular db dumps (sybsystemprocs is relatively small ... on the order of ~200 MB ... peanuts compared to the normal
user db's)

or

2 - maintain DDL for anything over/above those created by the various installxxxx scripts.
Post by Richard
sybsystemdb - We don't use any two-phase commits. Is there any need
to back this up?
No, no need to backup.

Though a threshold on the log segment to truncate the log would be a good idea. Heavy CIS activity (eg, MDA in
pre-15.0.x) can cause heavy log usage in sybsystemdb.
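Something like this (the procedure name and free-page count are placeholders; the parameter list is the standard threshold-procedure signature):

```sql
create procedure sp_thresh_truncate
    @dbname      varchar(30),
    @segmentname varchar(30),
    @space_left  int,
    @status      int
as
    -- no point dumping a log we never load, so just truncate it
    dump transaction @dbname with truncate_only
go

-- fire when the log segment of sybsystemdb drops to 512 free pages
sp_addthreshold sybsystemdb, logsegment, 512, sp_thresh_truncate
go
```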
Post by Richard
sybmgmtdb - perform full dumps daily and log dumps throughout the day
to capture any data changes (eg: task history) that may have occurred
since the last full dump.
Haven't used this database so go with what you think sounds reasonable ...
Post by Richard
model - perform full dumps daily and log dumps throughout the day to
capture any changes (eg: new user datatypes) that may have occurred
since the last full dump. Changes are unlikely since changes to
model are infrequent, but missing a model update is the last thing
we'd want to worry about during a disaster recovery.
Skip the tran log dumps and just go with db dumps. Model is typically quite small (4-6 MB), so a full db dump is so
small that it easily eliminates the headache of having to apply smallish transaction logs.

(I haven't actually tried doing tran log dumps on model so I'm not sure if a) it's doable and b) if the logs can be loaded.)

Alternatively, maintain the DDL used to modify model over and above that which is created when the dataserver is created.
Post by Richard
We also bcp the system tables mentioned in the "planning for recovery"
section of the manuals, but it seems like syssrvroles and
sysloginroles should be on the list also.
Yes, add these tables to the mix. (Sybase documentation isn't all it's cracked up to be, eg, still no P&T manuals 2+
years after 15.0 was released.)

Since the master system tables are (relatively) small it doesn't hurt to pull out 'extra' tables ... better to have them
and not need them, than to not have them but need them later, eh? ;-)
Post by Richard
Any guidance greatly appreciated.
Obviously (?) put your dumps and bcp files on a different set of disks than the database devices ... preferably on a
different machine (either via remote dumps or network cp's after performing local dumps).
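For example, ASE can send the dump straight to a Backup Server on another machine (REMOTE_BS is a placeholder name from the interfaces file):

```sql
dump database master to "/dumps/master.dmp" at REMOTE_BS
go
```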
A. M.
2007-11-29 10:56:22 UTC
Post by Richard
We're reviewing our disaster recovery procedures now that we've
upgraded to 15.0.2 (from 12.5.0.3). In fact, we're getting the
opportunity to test those procedures (long overdue) instead of them
just being theoretical. We're struggling with deciding which system
databases we should do transaction log dumps on. Here's what we're
planning.
master - the "Troubleshooting" manual says to "Truncate the master
database log frequently". I'm assuming this means "dump transaction
master with truncate only", but I guess a "dump transaction master to
filespec" would achieve the same result. Can you even apply log dumps
when recovering master? We'll perform full dumps of master daily (to
capture password changes, etc), and after any device/database changes.
The master database doesn't support a separate log device.
All you can do is truncate the log and dump the database.
It's common practice to bcp out some of the key system
tables but I usually just keep the source T-SQL statements
and a bcp out of syslogins.
Post by Richard
sybsystemprocs - perform full dumps daily and log dumps throughout the
day to capture any changes to site-specific stored-procs that may have
occurred since the last full dump.
Do you make changes to sybsystemprocs? If not, the installmaster
script has all the source to the system sprocs. You should
also keep the source to any of your own code you load into it.
Post by Richard
sybsystemdb - We don't use any two-phase commits. Is
to back this up?
No, just truncating the log should be fine.
Post by Richard
sybmgmtdb - perform full dumps daily and log dumps throughout the day
to capture any data changes (eg: task history) that may have occurred
since the last full dump.
I don't use this db personally so I can't comment but your
plan sounds reasonable.
Post by Richard
model - perform full dumps daily and log dumps throughout the day to
capture any changes (eg: new user datatypes) that may have occurred
since the last full dump. Changes are unlikely since changes to
model are infrequent, but missing a model update is the last thing
we'd want to worry about during a disaster recovery.
If you don't make changes to model, there's no need to
worry about it. Model is created during system build.
If you install user defined datatypes there, keep the
source code for these.
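For example, the kind of one-liner worth keeping in a script (the type name and definition here are made up):

```sql
sp_addtype ssn_type, "char(9)", "not null"
go
```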
Post by Richard
We also bcp the system tables mentioned in the "planning
section of the manuals, but it seems like syssrvroles and
sysloginroles should be on the list also.
Assuming the suids all match, keeping the source
to your grant role statements would suffice.
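For example (role and login names are made up):

```sql
create role ops_role
go
grant role ops_role to jsmith
go
```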

-am © 2007
unknown
2007-11-29 21:36:52 UTC
Post by A. M.
Post by Richard
sybmgmtdb - perform full dumps daily and log dumps
throughout the day to capture any data changes (eg: task
history) that may have occurred since the last full
dump.
I don't use this db personally so I can't comment but
your plan sounds reasonable.
-am © 2007
Anthony,
If you don't use sybmgmtdb, how are you scheduling db
backups, tran log backups, stats etc. What tool are you
using?
Mark A. Parsons
2007-11-29 22:20:19 UTC
Post by unknown
Anthony,
If you don't use sybmgmtdb, how are you scheduling db
backups, tran log backups, stats etc. What tool are you
using?
FWIW, most clients I work with use a combination of tools:

1 - thresholds on the database log segment

and/or

2 - crontab jobs for Unix/Linux machines (I suppose Windows folks could use the MS Windows Scheduler if they're desperate.)

and/or

3 - a third-party job scheduler program, eg, Autosys, Control-M, etc.
A. M.
2007-11-30 12:05:47 UTC
Post by unknown
Post by A. M.
Post by Richard
sybmgmntdb - perform full dumps daily and log dumps
throughout the day to capture any data changes (eg: task
history) that may have occurred since the last full
dump.
I don't use this db personally so I can't comment but
your
plan sounds reasonable.
If you don't use sybmgmtdb, how are you scheduling db
backups, tran log backups, stats etc. What tool are you
using?
Scripts or programs called from cron. If you are using
Unix or Linux, this is trivial. A shell script or
perl script will run the same on either. If it's
Solaris, and my client allows it, I prefer to use
my scripting language.
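For example, a pair of crontab entries (the paths and the PROD server name are made up; dump_db.sh and dump_tran.sh would be site wrapper scripts around isql):

```
0 1 * * *    /sybase/scripts/dump_db.sh   PROD >> /sybase/logs/dump.log 2>&1
0,30 * * * * /sybase/scripts/dump_tran.sh PROD >> /sybase/logs/dump.log 2>&1
```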

I've seen people use bat scripts on Windows. I can't
say I was impressed but it works.

Threshold triggers will handle the other tran log
issues. Mark mentions a few third party scheduling
tools. These are OK but a little overkill. You'd
still need to write something that works with them.
Some larger sites will use backup programs that
have modules for databases like Sybase. Legato
Networker (or whatever it's called now) is one of
the more popular ones.

-am © 2007
