CPYTOIMPF, to effect a CSV export of QIWS/QCUSTCDT with quoted 
character strings, does little more than generate and execute the 
following SQL, then FETCH the rows and write them to the stream 
file.  Generating the SQL should take only milliseconds; i.e. a 
simple algorithm driven by the type and attributes of each column 
of the database *FILE being exported.
<code>
    select cast(
      char(CUSNUM)
      concat ',' concat
      '"' concat LSTNAM concat '"'
      concat ',' concat
      '"' concat INIT concat '"'
      concat ',' concat
      '"' concat STREET concat '"'
      concat ',' concat
      '"' concat CITY concat '"'
      concat ',' concat
      '"' concat STATE concat '"'
      concat ',' concat
      char(ZIPCOD)
      concat ',' concat
      char(CDTLMT)
      concat ',' concat
      char(CHGCOD)
      concat ',' concat
      char(BALDUE)
      concat ',' concat
      char(CDTDUE)
      concat x'0D25'
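      /* x'0D25' is CR+LF in the EBCDIC job CCSID */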
              AS varchar(2000) /* ccsid #### */ )
     /* the 2K may not be representative */
    from qiws/qcustcdt
     /* FOR READ ONLY WITH NC */
</code>
  Make that a CREATE VIEW, then FTP the data via that VIEW into a 
stream file as ASCII, and the result can be compared against 
CPYTOIMPF from the physical file to infer the efficiency of getting 
the data into the STMF; a sketch of such a view follows.  Similarly, 
use RPG to open the VIEW with record-level access (RLA) and loop on 
read, writing the data to a stream file; I believe the export 
[still] uses SQL FETCH.
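  Something like the following should work for that view; a minimal 
sketch, assuming the names MYLIB, CUSTCSV, and CSVROW (mine, not 
anything the system generates) and assuming the FTP server will 
transfer a VIEW like any other logical file.  The x'0D25' is omitted 
because an ASCII-mode FTP transfer should supply the record 
delimiters itself:
<code>
    create view mylib/custcsv (csvrow) as
      select cast(
        char(CUSNUM)
        concat ',' concat
        '"' concat LSTNAM concat '"'
        concat ',' concat
        '"' concat INIT concat '"'
        concat ',' concat
        '"' concat STREET concat '"'
        concat ',' concat
        '"' concat CITY concat '"'
        concat ',' concat
        '"' concat STATE concat '"'
        concat ',' concat
        char(ZIPCOD)
        concat ',' concat
        char(CDTLMT)
        concat ',' concat
        char(CHGCOD)
        concat ',' concat
        char(BALDUE)
        concat ',' concat
        char(CDTDUE)
        as varchar(2000) )
      from qiws/qcustcdt
</code>
  Then from the client: connect with FTP, set ASCII mode, and GET 
MYLIB/CUSTCSV into a local cust.csv, timed against the CPYTOIMPF run.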
  That same SELECT could also be run from the DB2 command line 
interface, with the output directed to a STMF, to see how CPYTOIMPF 
compares with QSH in efficiency of getting the data into the STMF; 
see the second sketch below.  That is, in each case the database 
aspect of accessing the data as CSV records should be pretty much 
the same.  Oops... except, I suppose, that the DB2 command line 
fetches just one row at a time :-) although IIRC the pre-V5R4 
database export did too, and I am not sure whether any release of 
the import has multi-row INSERT, although the import should do 
concurrent threaded inserts for "large" files.
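  And a minimal sketch of that QSH comparison, re-using the 
hypothetical VIEW from above so the embedded quote characters need 
no shell escaping; the target path is invented, and the slash may 
need to become a dot if the db2 utility runs with SQL naming:
<code>
    # in QSH; both the view and the path are hypothetical
    db2 "select csvrow from mylib/custcsv" > /home/chuck/cust.csv
</code>
  The db2 utility writes a column heading and a record-count trailer 
around the rows, so the STMF is not pure CSV, but for a timing 
comparison that overhead should be negligible.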
  Rather than the import/export processing "time compared to using 
RPG to access the IFS", it may be how each is coded to use the SQL 
that makes the difference.
Regards, Chuck
Kurt Anderson wrote:
While I'm not experienced with CPYFRMIMPF I have dealt with
CPYTOIMPF and the latter takes an amazing amount of time compared to
using RPG to access the IFS. Has anyone experienced the same with
CPYFRMIMPF? If so, I suggest accessing the IFS with another method
(I'm an RPG guy so my thought is RPG, but that's just me). However
if speed isn't an issue (like the files aren't that big), then
there's probably no reason to do it yourself.