Discussion:
compare database
ziona@dmateck.com
2008-12-02 17:54:20 UTC
Hi,
in a high availability system we need a quick compare between the main
and the standby servers' databases.
Does someone know a cheap and quick way to compare 2 databases?
I know there is a number indicating which was the last log file
loaded; can anyone help me?
Thanks
Mark A. Parsons
2008-12-03 00:07:49 UTC
What exactly are you looking to compare?

Database sizes?
Segment sizes?
Number of objects (eg, tables, procs, triggers, etc)?
Users/Groups/Roles?
Permissions?
Table structures (eg, # of columns, column names, column datatypes, index names/structures, etc)?
Number of records in the various tables?
Actual data in the various tables?
Something else?

And what do you mean by high availability, ie, how do you populate the standby server? Is this a real HA setup whereby
you have one set of disks shared across machines? Are you using a log shipping method? Are you using replication? Are
you using some sort of disk mirroring technology?
Derek Asirvadem
2008-12-03 03:07:00 UTC
There are many types of "HA", each with their own costs and cutover
timeframes. There are many levels of comparing the content of
databases, to identify whether they are the "same". From your question,
I assume you are performing log shipping (dump tran source_db, load
tran target_db). In this case, the details within the dbs do not need
comparison, just the dump/load times.

master..sysdatabases.dumptrdate shows the dump time, but not the load time.
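A quick check of the dump side might look like the following sketch for Sybase ASE (run against the source server; 'source_db' is a placeholder name, and as noted, there is no matching load-time column here, so the standby side must be checked elsewhere):

```sql
-- Last tran dump time per database on the source server (Sybase ASE).
-- dumptrdate is maintained by the server; there is no corresponding
-- "load date" column, so the standby's state is not visible from here.
select name, dumptrdate
from master..sysdatabases
where name = 'source_db'
```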

For log shipping implementations, which are based on scripts, it is
easy enough to find out from the script level which tran dumps:
- had any data
- got shipped
- got loaded
Of course, a good set of scripts will log everything, including error
messages; with a bad set of scripts you will have to look at file
creation dates, when a script last executed, etc. But essentially
you need to look at that (script) level to find out.
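As a rough sketch of the kind of per-action logging meant here (the path, database name, and the commented-out dump command are assumptions for illustration, not a real site's setup):

```shell
#!/bin/sh
# Minimal per-action logging for a log-shipping script (illustrative only).
# LOGFILE and DB are made-up names; the real dump/ship/load commands are
# site-specific and shown only as a comment.
LOGFILE=/tmp/logship_demo.log
DB=source_db

log() {
    # One line per action: timestamp, database, message.
    echo "$(date '+%Y-%m-%d %H:%M:%S') $DB $*" >> "$LOGFILE"
}

: > "$LOGFILE"                        # start a fresh log for this run
log "tran dump started"
# isql -S SOURCE ... < dump_tran.sql  # the real dump step would run here
log "tran dump completed"
log "dump file shipped to standby"
log "tran load completed on standby"

# The log now answers the three questions above: had data, shipped, loaded.
cat "$LOGFILE"
```

One such `log` line per step is all it takes, and pointing `LOGFILE` at a shared location centralises the logs across scripts.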
--
Cheers
Derek
Senior Sybase DBA / Information Architect
Copyright © 2008 Software Gems Pty Ltd
Quality Standards = Zero Maintenance + Zero Surprises
Performance Standards = Predictability + Scaleability
ziona@dmateck.com
2008-12-04 15:23:43 UTC
It is a very simple dump/load transaction system. I thought that there
was a value such as master..sysdatabases.dumptrdate,
something that would not need a modification to the current system. Now I
know that I have to invest some effort to make the control more
accurate.
Anyway, thanks for the help.
Derek Asirvadem
2008-12-05 03:20:09 UTC
If the effort had been invested when the simple system was written, it
would not need revisiting later.

All my scripts log every action, even daily db and triggered tran dumps
and loads. It requires one line in the script, and you can centralise
(or not) your logfiles.
--
Cheers
Derek
Senior Sybase DBA / Information Architect
Copyright © 2008 Software Gems Pty Ltd
Quality Standards = Zero Maintenance + Zero Surprises
Performance Standards = Predictability + Scaleability