It would be helpful to see the Java code being used.
It's possible that the way it's written has a lot to do with the slowness.
Not including an ORDER BY is always going to be faster...
Also make sure that FOR FETCH ONLY WITH NC is specified.
Some JDBC properties to consider (there's a rough example using them after the list)
https://javadoc.io/static/net.sf.jt400/jt400/21.0.0/com/ibm/as400/access/doc-files/JDBCProperties.html
"block criteria"
"block size"
"data compression"
"receive buffer size"
"tcp no delay"
An alternative to using block size is to use setFetchSize
https://javadoc.io/static/net.sf.jt400/jt400/21.0.0/com/ibm/as400/access/AS400JDBCStatement.html#setFetchSize-int-
I'm not sure off the top of my head if that would give you a larger set of
records than a 512 block size.
Note that to use it, block size must be 0.
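A rough sketch of that variation, using the same placeholder names as above;
note "block size=0" in the URL, and a fetch size you'd want to experiment with:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class BigPullFetchSize {
    public static void main(String[] args) throws Exception {
        // block size=0 so the statement's fetch size controls the blocking
        String url = "jdbc:as400://MYIBMI;block size=0";

        try (Connection conn = DriverManager.getConnection(url, "MYUSER", "MYPWD");
             Statement stmt = conn.createStatement()) {
            stmt.setFetchSize(10000);  // rows per fetch request; tune for your data
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT * FROM MYLIB.MYTABLE FOR FETCH ONLY WITH NC")) {
                while (rs.next()) {
                    // process each row here
                }
            }
        }
    }
}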
HTH,
Charles
---------- Forwarded message ---------
From: <smith5646midrange@xxxxxxxxx>
Date: Tue, Sep 2, 2025 at 7:52 AM
Subject: Server pulling a bazillion rows from IBMi via JDBC (I think)
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxxxxxxxx>
We have a server that is pulling a bazillion rows via JDBC (I think) daily.
Unfortunately, we are stuck with this process as is for a while until we can
get a project on the plan to pull only net changes. It currently runs for
2-3 hours each day.
What is the fastest way to pull this data? Will it be faster to omit the
ORDER BY clause and just let SQL hand back the data in whatever order it
wants, or would it be quicker to add an ORDER BY clause? It seems like
having an ORDER BY clause would slow it down because the file is not in any
particular sequence, but I also don't know whether an ORDER BY RRN would help.
Any thoughts (other than rewriting the process, which is in the plans but
probably not for this year)?