In the referenced doc, the steps for such a recovery are the first
listed in "Choosing the procedure to recover user ASPs"; i.e. in the
task list, under the heading "Recovering a basic user auxiliary storage
pool (ASP) after recovering the system ASP". Such a setup is not a
snap to recover from the loss of ASP1, but if the user ASPs are limited
almost exclusively to journals and receivers, DR for that loss enables
an RPO [Recovery Point Objective] up to the last transaction without
too many hassles.
What was on ASPn is already there when recovering the system ASP
after the loss of ASP1. The data _remained_ on disks that make up each
ASPn, because each ASPn was [in a sense] unaffected with respect to
the _existence_ of its objects. It was the System ASP that was lost,
so it was ASP1 that was then configured and had its disks initialized
as part of a re-install. Later restores will be done into ASP1 to effect its
recovery -- by restoring only what is new. A RCLSTG SELECT(*ALL)
OMIT(*DBXREF) needs to run before the restores, to get the
_addressability_ to the objects [for example, the journals and
receivers] that are in each ASPn, but which are not yet visible after
only the install. That reclaim request obtains /visibility/ to the
existing objects in ASPn before an effective restore with OPTION(*NEW)
is performed. Restoring only new objects avoids replacing
or conflicting with those existing objects on the disks making up ASPn.
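Sketched as CL, assuming a tape device named TAP01 and a save of the
non-system libraries (the device name and save source are assumptions
here, not from the doc):

  RCLSTG SELECT(*ALL) OMIT(*DBXREF)
  RSTLIB SAVLIB(*NONSYS) DEV(TAP01) OPTION(*NEW)

The OPTION(*NEW) on the restore request is what limits the restore to
objects not already on the system.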
However, to avoid a large number of diagnostics logged for
OPTION(*NEW), using OMITLIB(list_of_libs_in_ASPn) is the documented
manner to effect restore of only the new objects; that list_of_libs
is available via DSPOBJD OBJ(QSYS/*ALL) OBJTYPE(*LIB), minus those
libraries created in ASP1 by the install of the LIC and OS.
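A hedged sketch of building that omit list; the outfile and the example
library names are assumptions, and the set of install-created libraries
varies by release:

  DSPOBJD OBJ(QSYS/*ALL) OBJTYPE(*LIB) OUTPUT(*OUTFILE) +
            OUTFILE(QTEMP/LIBLIST)
  /* Remove the libraries created in ASP1 by the LIC and OS    */
  /* install; the remainder are the ASPn libraries to name:    */
  RSTLIB SAVLIB(*NONSYS) DEV(TAP01) OPTION(*NEW) +
            OMITLIB(JRNLIB1 JRNLIB2)

Omitting the ASPn libraries keeps the restore from attempting to
replace objects that already exist on the disks of each ASPn.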
[The restored] ASP1 does not remember the configuration of the ASPn,
because that configuration is neither part of the OS nor of the user
data. That setup was established, and was maintained across the loss
& DR activity, at a disk configuration level; DST was used to
configure the ASPs. The LIC can establish the
visibility to the /objects/ on the disks because the reclaim storage
request progresses through all of the permanent objects on disk to make
them visible and assign /default/ ownership. Each object in a library
residing in ASPn still has addressability to its library, but each *LIB
that is found in ASPn is inserted into the QSYS *LIB so that the library
is visible in QSYS; its objects are then immediately addressable through
that newly visible *LIB. The *LIB objects were in ASPn but were not
/in/ their *LIB because their library was in ASP1; i.e. where QSYS
resided, the library in which all [other] *LIB objects reside. Although
I say /other/ *LIB, the QSYS *LIB is actually referred to as the
/machine context/ to avoid the implication that a *LIB is in a *LIB.
It is the ownership that makes the scenario less palatable, because
ownership should be reestablished for each existing object which was
made addressable/visible by name. For library-based objects, it is not
too difficult to resolve if the rule is to reflect the ownership of the
library in the objects within; then tools like CHGOWNLIB LIB(), which
effect that automatically, can ease recovery. And I see the doc fails
to note CUROWNAUT(*REVOKE); even though that is the default, it should
be made very clear that the parameter specification is an effective
requirement.
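For example, after the reclaim has assigned each recovered object to
the default owner profile QDFTOWN, the per-object form of that
correction is CHGOBJOWN; the library, receiver, and profile names here
are assumptions for illustration:

  CHGOBJOWN OBJ(APPJRNLIB/RCV0001) OBJTYPE(*JRNRCV) +
              NEWOWN(APPOWNER) CUROWNAUT(*REVOKE)

Specifying CUROWNAUT(*REVOKE), even though it is the default, ensures
the prior [default] owner's authority is removed rather than left
behind on each object.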
Regards, Chuck