Aggregation allows you to collect and merge information from multiple Guardium Servers to a single Guardium Aggregation Server.
If you are running Guardium in an enterprise deployment, you may have multiple Guardium servers monitoring different environments (different geographic locations or business units, for example). It may be useful to collect all data in a central location to facilitate an enterprise view of database usage. You can accomplish this by exporting data from a number of servers to another server that has been configured (during the initial installation procedures) as an aggregation server. In such a deployment, you typically run all reports, assessments, audit processes, and so forth, on the aggregation server to achieve an enterprise view.
Guardium also supports hierarchical aggregation, where multiple aggregation units merge upwards to a higher-level, central aggregation server. This is useful for multi-level views. For example, you may need to deploy one aggregation server for North America aggregating multiple units, another aggregation server for Asia aggregating multiple units, and a central, global aggregation server merging the contents of the North America and Asia aggregation servers into a single corporate view. To consolidate data, all aggregated Guardium servers export data to the aggregation server on a scheduled basis. The aggregation server imports that data into a single database on the aggregation server, so that reports run on the aggregation server are based on the data consolidated from all of the aggregated Guardium servers.
The Guardium administrator defines the System Shared Secret on the System Configuration panel, which is described in the following section. The system shared secret is used for archive/restore operations, and for Central Management and Aggregation operations. When used, its value must be the same for all units that will communicate. This value is null at installation time, and can change over time.
The system shared secret is used:
When secure connections are being established between a Central Manager and a managed unit.
When an aggregated unit signs and encrypts data for export to the aggregator.
When any unit signs and encrypts data for archiving.
When an aggregator imports data from an aggregated unit.
When any unit restores archived data.
Depending on your company’s security practices, you may be required to change the system shared secret from time to time. Because the shared secret can change, each system maintains a shared secret keys file, containing an historical record of all shared secrets defined on that system. This allows an exported (or archived) file from a system with an older shared secret to be imported (or restored) by a system on which that same shared secret has been replaced with a newer one. Shared secrets (current and historic ones) can be exported from one appliance and imported to another through the CLI.
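The key-history mechanism can be pictured as a simple lookup: try the current secret first, then fall back to older entries from the shared secret keys file. The sketch below is a toy model in Python; the function name, the HMAC-SHA256 construction, and the newest-first ordering are illustrative assumptions, not Guardium's actual cryptography:

```python
import hashlib
import hmac

def find_matching_secret(blob, signature, secrets):
    """Return the shared secret that verifies this file's signature.

    secrets is ordered newest-first, mirroring a shared secret keys file
    that keeps an historical record of every secret defined on the system.
    HMAC-SHA256 here is an illustrative stand-in for the real scheme.
    """
    for secret in secrets:
        candidate = hmac.new(secret, blob, hashlib.sha256).digest()
        if hmac.compare_digest(candidate, signature):
            return secret
    return None  # no current or historic secret matches: import/restore fails
```

This is why a file exported under an older secret can still be imported after the secret has been rotated: the receiving unit simply finds the older secret in its history.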
Scheduled export operations send data from Guardium collector units to a Guardium aggregation server. On its own schedule, the aggregation server executes an import operation to complete the aggregation process. On either or both units, archive and purge operations are scheduled to back up and purge data on a regular basis (both to free up space and to speed up access operations on the internal database). The export, archive, and purge functions typically do not operate on the same data. For example, you may want to export and archive all information older than one day and purge all information older than one month, thereby always leaving one month of data on the sending unit.
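The calendar arithmetic above can be sketched as a small helper; the function and its parameters are hypothetical, with day zero being today as in the example:

```python
from datetime import date, timedelta

def retained_window(today, export_older_than_days, purge_older_than_days):
    """Return (oldest_day_kept, newest_day_exported) under calendar-day rules.

    Data captured yesterday is 1 day old regardless of the time of day.
    Purge removes data from the purge age and older, so the unit keeps
    purge_older_than_days - 1 full days of history plus today.
    """
    newest_exported = today - timedelta(days=export_older_than_days)
    oldest_kept = today - timedelta(days=purge_older_than_days - 1)
    return oldest_kept, newest_exported
```

For example, with export older than 1 day and purge older than 30 days on April 24, 2005, the unit exports data through April 23 and retains data back to March 26.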
To export data to an aggregation server, follow the procedure below. You can define a single export configuration for each Guardium unit.
Click on the Administration Console tab.
Click on Data Export in the Data Management section of left hand column menu to open the Data Export specification panel.
Check the Export data checkbox to display additional options for exporting data.
In the boxes following Export data older than, specify a starting day for the export operation as a number of days, weeks, or months prior to the current day, which is day zero. These are calendar measurements, so if today is April 24, all data captured on April 23 is one day old, regardless of the time when the operation is performed. To export data starting with yesterday’s data, enter the value 1.
Optionally, use the boxes following Ignore data older than to control how many days of data will be exported. Any value specified here must be greater than the Export data older than value, so you always export at least two days of data. If you leave Ignore data older than blank, you export data for all days older than the value specified in the Export data older than row. It is recommended that you always set the Ignore data older than value; otherwise you export the exact same days over and over, overloading the network and the aggregator with redundant data (which will be ignored).
The Export Values box is checked by default. In some cases, where the collector resides in a country that prohibits the export of data and the aggregation server resides in another country, you may want to uncheck the Export Values box, which masks all fields containing database values.
In the Host box, enter the IP address or DNS host name of the aggregation server to which this system’s encrypted data files will be sent. This unit and the aggregation server to which it is sending data must have the same System Shared Secret. If not, the export operation works, but the aggregation server that receives the data is not able to decrypt the exported file and the Import will fail. See System Shared Secret for more information.
Click the Apply button to save the export and purge configuration for this unit. When you click the Apply button, the system attempts to verify that the specified aggregator host will accept data from this unit. If the operation fails, the following message is displayed and the configuration will not be saved: A test data file could not be sent to this host. Please confirm the hostname or IP address is entered correctly and the host is online. If the Apply operation succeeds, the buttons in the Scheduling panel become active.
Click the Run Once Now button to run the operation once.
Click the Modify Schedule button to schedule this operation to run on a regular basis.
To stop the export of data to an aggregation server:
Click on the Administration Console tab.
Click on Data Export in the Data Management section of left hand column menu to open the Data Export specification panel.
Clear the Export data checkbox.
Click the Apply button.
The Guardium collector units export encrypted data files to another Guardium appliance configured as an aggregation server. The encrypted data files reside in a special location on the aggregation server until the aggregation server executes an import operation to decrypt and merge all data to its own internal database.
Note: To avoid the possibility of importing files that have not completely arrived, the aggregation server does not import files that have changed in the last two minutes.
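The two-minute rule amounts to a modification-time check before each import. A minimal sketch, assuming a per-file check and a fixed 120-second quiet window (both the function name and the threshold are illustrative):

```python
import os
import time

def ready_for_import(path, quiet_seconds=120):
    """A file is safe to import only if it has not changed recently;
    a file modified within the last two minutes may still be arriving."""
    return (time.time() - os.path.getmtime(path)) >= quiet_seconds
```

An import pass would then simply skip any file for which this returns False and pick it up on the next scheduled run.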
Follow the procedure outlined below to define the Data Import operation on an aggregation server. You can define only a single Data Import configuration on each unit.
Click on the Administration Console tab.
Click on Data Import in the Data Management section of left hand column menu to open the Data Import specification panel.
Check the Import data from checkbox. An additional non-modifiable field appears, indicating the location of the data files to be imported.
Click the Apply button to save the configuration. The Apply button becomes active only after you change the state of the Import data from checkbox.
Click the Run Once Now button to run the operation once.
Click the Modify Schedule button to open the general-purpose task scheduler and schedule this operation to run on a regular basis. This aggregation server and all units exporting data to it must have the same System Shared Secret. If not, the export operations still work, but the aggregation server is not able to decrypt the files of exported data.
To stop importing data sent from other Guardium units:
Click on the Administration Console tab.
Click on Data Import in the Data Management section of left hand column menu to open the Data Import specification panel.
Clear the Import data from checkbox.
Click the Apply button to save the configuration. Stopping the import does not stop other Guardium units from exporting data to this system. To stop that, you must stop the Export operation on each sending unit.
Archiving and purging data on a regular basis is essential for the health of your Guardium system. For the best performance, we strongly recommend that you archive and purge all data that is not needed. For example, if you only need three months of data on the Guardium appliance, archive and purge all data that is older than 90 days.
The archive and purge process frees space and preserves information for future use. You should periodically archive and purge data from standalone units and from aggregation units. On Guardium units that export data to aggregation servers, data is typically archived only from the highest-level aggregation server, although it is possible to archive from any and all units. Guardium’s archive function creates signed, encrypted files that cannot be tampered with.
It may be necessary to run reports or investigations on this data at some point. For example, some regulatory environments may require that you keep this information for three, five, or even seven years in a form that can be queried within 24 hours. This functionality is supported by the Guardium restore capability, which allows you to restore archived data to the unit.
The following sections describe how to define and schedule archiving and how to restore from an archive.
Note: The archive and restore operations depend on the file names generated during the archiving process. DO NOT change the names of archived files.
Archive data files can be sent to an SCP or FTP host on the network, or to an EMC Centera or TSM storage system (if configured). You can define a single archiving configuration for each unit. To archive data to another host on the network and optionally purge data from the unit, follow the procedure outlined below.
Click on the Administration Console tab.
Click on Data Archive in the Data Management section of left hand column menu to open the Data Archive specification panel.
Check the Archive checkbox to expose additional fields for the archive process.
In the boxes following Archive data older than, specify a starting day for the archive operation as a number of days, weeks, or months prior to the current day, which is day zero. These are calendar measurements, so if today is April 24, all data captured on April 23 is one day old, regardless of the time when the operation is performed. To archive data starting with yesterday’s data, enter the value 1.
Optionally, use the boxes following Ignore data older than to control how many days of data will be archived. Any value specified here must be greater than the value in the Archive data older than field. If you leave the Ignore data older than row blank, you archive data for all days older than the value specified in the Archive data older than row. This means that if you archive daily and purge data older than 30 days, you archive each day of data 30 times (before it is purged on the 31st day). Depending on the archive options configured for your system (using the store storage-system CLI command), you may have EMC Centera or TSM options on your panel. If you select one of those archive destinations, see the appropriate topic:
EMC Centera Archive and Backup
TSM Archive and Backup
The next four steps apply only when System is selected as the archive destination:
Enter the IP address or DNS host name of the host to receive the archived data.
In the Directory box, identify the directory in which the data is to be stored. How you specify this depends on whether the file transfer method used is FTP or SCP. For FTP, specify the directory relative to the FTP account home directory. For SCP, specify the directory as an absolute path.
In the Username box, enter the user name to use for logging onto the host machine. This user must have write/execute permissions for the directory specified in the Directory box (above).
In the Password box, enter the password for the above user, then enter it again in the Re-enter Password box.
Check the Purge checkbox to purge data, whether or not it is archived. When this box is marked, the Purge data older than fields display. It is *IMPORTANT* to note that the Purge configuration is used by both Data Archive and Data Export: changes made here apply to any execution of Data Export, and vice versa. If purging is activated and both Data Export and Data Archive run on the same day, whichever operation runs first will likely purge any old data before the second operation executes. For this reason, whenever Data Export and Data Archive are both configured, the purge age must be greater than both the age at which to export and the age at which to archive.
If purging data, use the Purge data older than fields to specify a starting day for the purge operation as a number of days, weeks, or months prior to the current day, which is day zero. All data from the specified day and all older days will be purged, except as noted below. Any value specified for the starting purge date must be greater than the value specified for the Archive data older than value. In addition, if data exporting is active (see Exporting Data to an Aggregation Server, above), the starting purge date specified here must be greater than the Export data older than value. There is no warning when you purge data that has not been archived or exported by a previous operation. The purge operation does not purge restored data whose age is within the do not purge restored data timeframe specified on a restore operation. For more information, see Restoring Archived Data, below.
Click Apply to verify and save the configuration changes. When you click the Apply button, the system attempts to verify the specified Host, Directory, Username, and Password by sending a test data file to that location.
Click the Run Once Now button to run the operation once.
Click the Modify Schedule button to schedule the operation to run on a regular basis. The general-purpose task scheduler is opened.
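The ordering rules spread across the steps above (the ignore age must exceed the start age, and the purge age must exceed both the export and archive start ages) can be collected into a single check. The validator below is a hypothetical sketch, not a Guardium API; all ages are in days, with today as day zero:

```python
def validate_ages(archive_older_than=None, export_older_than=None,
                  ignore_older_than=None, purge_older_than=None):
    """Return a list of configuration errors; an empty list means the
    configured ages are mutually consistent."""
    errors = []
    for label, start in (("Archive data older than", archive_older_than),
                         ("Export data older than", export_older_than)):
        if start is None:
            continue
        if ignore_older_than is not None and ignore_older_than <= start:
            errors.append("'Ignore data older than' must exceed '%s'" % label)
        if purge_older_than is not None and purge_older_than <= start:
            errors.append("'Purge data older than' must exceed '%s'" % label)
    return errors
```

Running such a check before applying a configuration catches, for example, a purge age equal to the export age, which would purge data before it could ever be sent.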
Click on the Guardium Monitor tab.
Click on Aggregation/Archive Log in the left hand column menu to open the Aggregation/Archive Log Report.
Check to ensure that each Archive/Purge operation has a status of "Succeeded".
Click the Tools tab.
Click the Report Building tab.
Select Aggregation/Archive Tracking from the left hand column options to open the Aggregation/Import/Export Log Query Builder panel.
See Building Queries and Building Reports for assistance in defining a query and building a report.
When you select EMC Centera as an archive or backup destination, the EMC Centera portion of the archive or backup configuration panel expands (the panel is the same for both operations).
To use EMC Centera:
Click on the Administration Console tab.
Click on Data Archive or System Backup in the Data Management section of the left hand column menu. Initially, the Network radio button is selected by default and the Network backup parameters are displayed.
Select the EMC Centera radio button. The EMC Centera parameters will be displayed on the panel.
In the Retention box, enter the number of days to retain the data. The maximum is 24855 (68 years). If you want to save it for longer, you can restore the data later and save it again.
In the Centera Pool Address box, enter the Centera Pool Connection String; for example: 10.2.3.4,10.6.7.8?/var/centera/profile1_rwe.pea
Click the Upload PEA file button to upload a Centera PEA file to be used for the connection string.
Click the Apply button to save the configuration. The system will attempt to verify the Centera address by opening a pool using the connection string specified. If the operation fails, you will be informed and the configuration will not be saved.
When you select TSM as an archive or backup destination, the TSM portion of the archive or backup configuration panel expands (the panel is the same for both operations). Before setting TSM as an archive or backup destination, the Guardium system must be registered with the TSM server as a client node. A TSM client system options file (dsm.sys) must be created (on your PC, for example) and uploaded to Guardium. Depending on how that file is defined, you may also need to upload a dsm.opt file. For help creating a dsm.sys file for use by Guardium, consult your company’s TSM administrator. To upload a TSM configuration file, see the import tsm config CLI command.
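A dsm.sys stanza for this purpose typically resembles the hypothetical fragment below. Every value shown (server name, address, port, password mode) is site-specific and must come from your TSM administrator; this is only a sketch of the file's general shape:

```
* Hypothetical stanza - all values are site-specific
SErvername        GUARDIUM_TSM
   COMMMethod        TCPip
   TCPPort           1500
   TCPServeraddress  tsm.example.com
   PASSWORDACCESS    prompt
```

If your dsm.sys defines more than one server stanza, the optional Server name field in the procedure below selects which stanza to use.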
To use TSM:
Click on the Administration Console tab.
Click on Data Archive or System Backup in the Data Management section of the left hand column menu. Initially, the Network radio button is selected by default and the Network backup parameters are displayed.
Select the TSM radio button. The TSM parameters will be displayed on the panel.
In the Password box, enter the TSM password that this Guardium unit uses to request TSM services, and re-enter it in the Re-enter Password box.
Optionally enter a Server name matching a servername entry in your dsm.sys file.
Optionally enter an As Host name.
Click the Apply button to save the configuration. When you click the Apply button, the system attempts to verify the TSM destination by sending a test file to the server using the dsmc archive command. If the operation fails, you will be informed and the configuration will not be saved.
Click on the Administration Console tab.
Click on Data Archive in the Data Management section of left hand column menu to open the Data Archive specification panel.
Un-check the Archive or Purge checkboxes.
Click the Apply button.
As described previously, archives are written to an SCP or FTP host, or to a Centera or TSM storage system (see Archiving Data, above). To restore archives, you must copy the appropriate file(s) back to the Guardium unit on which the data is to be restored. There is a separate file for each day of data. Depending on how your archive/purge operation is configured, you may have multiple copies of data archived for the same day. Archive and export data file names have the same format: <daysequence>-<hostname.domain>-w<run_datestamp>-d<data_date>.dbdump.enc For example: 732423-g1.guardium.com-w20050425.040042-d2005-04-22.dbdump.enc
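Because restore depends on these generated names, it can be handy to parse them when deciding which files to copy back. The sketch below infers a regular expression from the format shown above; the helper and its field names are illustrative:

```python
import re

# <daysequence>-<hostname.domain>-w<run_datestamp>-d<data_date>.dbdump.enc
ARCHIVE_NAME = re.compile(
    r"^(?P<daysequence>\d+)"
    r"-(?P<host>.+)"
    r"-w(?P<run_datestamp>\d{8}\.\d{6})"
    r"-d(?P<data_date>\d{4}-\d{2}-\d{2})"
    r"\.dbdump\.enc$"
)

def parse_archive_name(name):
    """Split a Guardium archive/export file name into its fields."""
    m = ARCHIVE_NAME.match(name)
    if m is None:
        raise ValueError("not a recognized archive file name: " + name)
    return m.groupdict()
```

A script using this could, for instance, list the data dates available in an archive directory before copying the needed files back with the import file command.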
Unless you are restoring data from the first archive created during the month, you will need to restore multiple days of data. That is because when restoring data, Guardium needs to have all of the information that it had when the data being restored was archived. After the archive was created, some of that information may have been purged due to a lack of use. All information needed for a restore operation is archived automatically, the first time that data is archived each month. So, when restoring data, you have two options:
Restore the first day of the month and all following days up to and including the desired day, or
Restore the desired day and then the first day of the following month.
For example, to restore June 28th, either restore June 1st through June 28th, or restore June 28th and July 1st.
To restore archives:
From the CLI, use a separate import file command to copy each archived data file to be restored to the Guardium unit. The archive and restore operations depend on the file names generated during the archiving process. DO NOT change the names of archived files. If a generated file name is changed, the restore operation will not work.
Click on the Administration Console tab.
Click on Data Restore in the Data Management section of the left hand column menu to open the Data Restore specification panel.
Check the Restore data from box to activate the data restore function.
In the Directory box, enter the directory name to which the archived data files have been copied. The import file command copies all files with an enc extension to the /var/dump directory.
Optionally, in the Don’t purge restored data for at least box, specify a number of days to protect restored data from any purge operations. Otherwise, the restored data may be purged the next time a purge operation runs.
Click Apply to save the configuration.
Click Run Once Now to run the restore operation.