Keeping this strictly a business decision, how much overhead are we 
really talking about here?  If the business decision is to avoid overhead, 
then how much are we truly saving?  Or are we clinging to a constraint 
from older technology that no longer applies?
I did some time trials. 
     FIIMJOE    O    E             DISK
     D                 DS
     D  iprod                        25A
     D  iprodnbr                     25S 0 overlay(iprod)
     D theTime         s               z
      /free
       // capture and display the start time
       theTime=%timestamp();
       dsply theTime;
       // write 50,000 records, each with a different key value
       for iprodnbr=1 to 50000;
         write iimjoer;
       ENDFOR;
       // capture and display the end time
       // (elapsed time = second dsply minus first dsply)
       theTime=%timestamp();
       dsply theTime;
       *inlr=*on;
       return;
      /end-free
The file was defined 4 different ways; the program needed no recompiling. 
3 runs for each file definition, with CLRPFM between runs.  Times are in 
seconds (second dsply minus first dsply).
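As a sanity check between runs (not part of the timings, just one way to 
verify), a quick count confirms the writes landed before the next CLRPFM:
-- should return 50,000 after a run and 0 right after CLRPFM
SELECT COUNT(*) FROM ROB.IIMJOE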
CREATE TABLE ROB.IIMJOE (IPROD CHAR ( 25) NOT NULL WITH DEFAULT,
  CONSTRAINT ROB.IPROD PRIMARY KEY (IPROD)) 
  rcdfmt iimjoer 
12.649
12.296
12.166
avg=12.370
CREATE TABLE ROB.IIMJOE (IPROD CHAR (25 ) NOT NULL WITH DEFAULT)
  RCDFMT IIMJOER 
.443
.370
.348
avg=.387
R IIMJOER 
  IPROD         25A 
.647
.384
.347
avg=.459
                            UNIQUE 
R IIMJOER 
  IPROD         25A 
K IPROD 
.373
.371
.388
avg=.377
Granted, I am a little floored by that constraint timing.  Let's set it 
aside for a moment and drop down to the old method of defining a primary 
key: the DDS keyed file with UNIQUE, the last time trial.  As you can see, 
you INCREASE overhead by not having a primary key (.459 average without a 
key versus .377 with one).  So, again, what business reason is there for 
not having a primary key on the item master?  Even if you counter-argue 
that the data needs more time trials because of the one skewed run, I 
think it's fair to say that there is no overhead, performance-wise.
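And if you do start from the keyless SQL table, the same constraint can 
still be added after the fact.  A sketch using the names from the first 
definition above (this statement was not part of the time trials):
-- add the primary key to the existing keyless table
ALTER TABLE ROB.IIMJOE 
  ADD CONSTRAINT ROB.IPROD PRIMARY KEY (IPROD)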
Rob Berendt