On 23-Nov-2013 12:15 -0800, Steinmetz, Paul wrote:
V7R1 OS Upgrade from DSLO image using automatic install complete.
2 minor issues.
1) 5770DE1 option 2, DB2 XML Extender,
resulted in error status. This also occurred on another partition
months back. I expect the same error on the upcoming Production V7R1
upgrade.
Anyone else run into this?
I am curious... Any clarification on what the details behind "this"
might be? Effectively, _the error_ versus _an error_ for which the
install failed, and thus left the option in an error status. That is,
what specifically were the errors that preceded the 'not installed'
error issued for that RSTLICPGM 5770DE1 OPTION(02) [as performed by the
user QLPINSTALL], as recorded in the LPP-install joblog?
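One way to locate that evidence, as a minimal sketch, assuming the
spooled joblog from the failed install is still on the system:

  WRKSPLF SELECT(QLPINSTALL)   /* locate the QPJOBLOG spooled file(s) */
                               /* produced for that install activity  */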
Below is the fix.
1) IPL then signon as QSECOFR
2) DSPFFD QSYS/QADBXREF
- what do you see for the CCSID of the string columns?
3) CHGJOB CCSID(), using the CCSID value shown by the DSPFFD in step 2
4) CALL QSYS2/QSQXRLF (DLT QSYS2)
5) CALL QSYS2/QSQXRLF (CRT QSYS2)
6) CALL QSYS/QSQSYSIBM
7) CALL QSYS/QSQIBMCHK
Then run RSTLICPGM 5770DE1 OPTION(2).
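A minimal sketch of those recovery steps as full CL commands; the CCSID
value 37 is only an assumed placeholder for whatever the DSPFFD in step
(2) actually shows, and the device name OPT01 is likewise an assumption
for wherever the install media resides:

  DSPFFD FILE(QSYS/QADBXREF)       /* note the CCSID of the string columns    */
  CHGJOB CCSID(37)                 /* assumed value; use the CCSID seen above */
  CALL QSYS2/QSQXRLF (DLT QSYS2)   /* delete the SQL catalog files in QSYS2   */
  CALL QSYS2/QSQXRLF (CRT QSYS2)   /* recreate the SQL catalog files          */
  CALL QSYS/QSQSYSIBM
  CALL QSYS/QSQIBMCHK
  RSTLICPGM LICPGM(5770DE1) DEV(OPT01) OPTION(2)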
Interesting. That seems to imply a dependence of that LPP option-2
on some object(s) that should exist as part of a properly-installed OS
OPTION(01) "Extended Base" feature which is a pre-requisite to install
of any LPP. Makes one wonder if\what errors transpired in that other
install joblog, for which that prior LPP install had failed, but for
which apparently no similar notification of its failure was given? But
I recall that, unfortunately, the joblog of that OS option(01) [kwd
XTND] install would have been /cleared/ if the install had been deemed
to have been successful [a general /feature/ of auto-install]. As such,
any problem would have to be inferred instead, from the side effects on
any dependent features, or perhaps from what the output of a CHKPRDOPT
*OPSYS OPTION(01) DETAIL(*FULL) might have revealed, issued before the
corrective\recovery actions were taken. Although as I recall [from when
I was recruited to remake that installation], I never updated the Check
Product Option for that option to fully diagnose any missing database
file objects that were created-into versus being shipped-as part of the
option. Instead, as I recall, an implicit corrective action, the step
(5) of the above, would be performed [¿if a SQL catalog VIEW file was
missing?], so from the messages logged for that work, what objects were
missing might be inferred; and I deferred improvement of the
notification of [any other] missing objects to the owners of the code.
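For reference, that pre-recovery diagnostic written out in keyword
form, a sketch using only the parameters named above:

  CHKPRDOPT PRDID(*OPSYS) OPTION(1) DETAIL(*FULL)   /* verify OS option 1, Extended Base */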
2) PTF SI50474 for 5770HAS showed damaged status when attempting to apply.
Below is the fix:
- Delete SI50474
- Re-Order SI50474
- Load individually
- Apply individually
Hmm. That does not make much sense, unless the copy of that PTF was
obtained as /test/ vs being a final\GA copy. The standard recovery for
a PTF with a logically /damaged/ state, is to issue the LODPTF
[irrespective of its current status] and then APYPTF for that
just-loaded PTF. Notably... the PTF should *not* need to be deleted [no
DLTPTF] nor re-ordered [no SNDPTFORD or equivalent]. Doing these
operations separately\individually from other PTF activity, AFaIK,
should not be necessary; doing so is beneficial only to more closely
reveal the progress of processing just that one PTF, versus the clutter
that would be seen when loading and applying others at the same time.
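As a sketch of that standard recovery, assuming the PTF save file is
still on the system from the original order [hence DEV(*SERVICE)], and
with the delayed-versus-immediate apply mode being an assumption to be
matched to whatever the cover letter requires:

  LODPTF LICPGM(5770HAS) DEV(*SERVICE) SELECT(SI50474)
  APYPTF LICPGM(5770HAS) SELECT(SI50474) DELAYED(*YES)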
The errors that were logged during the pre-damage apply [or remove]
processing would identify the origin for the status=damaged; i.e. they
reveal the condition for which the process since refers to the PTF as
/damaged/, due to the prior inability to transition the PTF to the next
logical\requested status. Preventing the error(s) should prevent the
PTF from being marked as /logically/ damaged when loaded and re-applied.
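A starting point for locating that evidence, as a sketch [the product
and PTF identifiers being those from the note above; DSPLOG defaults to
the current day, so the PERIOD parameter may be needed to reach back to
the time of the failed apply]:

  DSPPTF LICPGM(5770HAS) SELECT(SI50474)   /* review the current PTF status */
  DSPLOG LOG(QHST)                         /* scan history for the messages */
                                           /* logged by the failed apply    */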