From: Alan Campin <snip> My benchmarks showed the opposite. SQL INSERT is 10-100 times slower than the native WRITE. You will see some performance improvement using prepared statements, but native WRITE is still faster. Please provide an example program and your benchmarks. </snip> This is from IBM. Articles about SQL improvements have appeared repeatedly in iSeries Network. According to IBM, the difference arises because the DDS I/O system does not validate data on write but validates it every time you read a record, while the SQL I/O system validates data on write and not on read. If you only ever wrote and never read, SQL would look slower, but usually you write something once and then read it many, many, many times.
Alan, I think you might want to do some basic research. Defining your data with DDL is a completely different topic than accessing it via SQL. You can read DDL-defined files just as easily using native I/O as you can with SQL. I understand the confusion; even SQL experts get these mixed up. But really, it's this sort of misinformation that can convince people to make bad decisions.
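Just to make this concrete, here's a rough sketch; the library, table, and field names are all invented. You create the table once with DDL, and a plain F-spec reads it like any other physical file. The PRIMARY KEY constraint should give you a keyed access path to CHAIN against; if it doesn't on your release, a keyed logical or an SQL index over the table does.

   -- DDL side: define the table (RCDFMT names the record format
   -- so native I/O gets a usable format name)
   CREATE TABLE MYLIB/ITEMMST
     (ITNBR CHAR(15) NOT NULL PRIMARY KEY,
      ITDSC CHAR(30) NOT NULL,
      ITPRC DECIMAL(9, 2) NOT NULL)
     RCDFMT ITEMREC

    * Native I/O side: same object, no SQL in sight
   FItemMst   IF   E           K Disk
    /free
      chain ('A123') ItemRec;        // keyed read against the DDL table
      if %found(ItemMst);
         // ITDSC and ITPRC are available just like DDS-defined fields
      endif;
      *inlr = *on;
    /end-free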
As far as reading data in goes, how often do we do a single read versus reading in multiple records? Most of the time we are reading in multiple records joined, at least logically, to other records. You are almost always doing the equivalent of a SETLL followed by READ or READE. Again, think database instead of files.
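To spell out the equivalence (file, format, and field names invented): both of these pull the same detail set, one record at a time versus as a set.

    /free
      // Native: position to the order, then read its detail lines
      setll (OrdNbr) OrdDtlR;
      reade (OrdNbr) OrdDtlR;
      dow not %eof(OrdDtl);
         // process one detail line
         reade (OrdNbr) OrdDtlR;
      enddo;
    /end-free

   c/Exec Sql Declare DtlCursor Cursor For
   c+   Select * From OrdDtl Where OrdNbr = :OrdNbr
   c/End-Exec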
All the time! Do you actually write business applications? I have to chain to get an item master, then chain to get a customer record. I need to see if a warehouse override exists. And even when I do a loop, it's usually a small number of records.
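That pattern looks like this, day in and day out (all file, format, and field names invented):

    /free
      chain (OrdItem) ItemRec;            // item master
      chain (OrdCust) CustRec;            // customer record
      chain (OrdWhs: OrdItem) OvrRec;     // warehouse override, may not exist
      if not %found(WhsOvr);
         Whs = DftWhs;                    // no override, use the default
      else;
         Whs = OvrWhs;
      endif;
    /end-free

Three chains, three records, no 32K arrays anywhere.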
c/Exec Sql
c+   Fetch Next From ListCursor
c+   For 32766 Rows
c+   Into :dsItemRecord
c/End-Exec
I have NEVER created a 32,000-plus-element array of data structures. I can't see why you would, except in utility programs or MAYBE some queries. Either one of those is a good reason to use SQL. But this DS approach has other issues: any time the data you are fetching doesn't exactly match the physical file layout, you have to create a data structure by hand and make sure it matches your SQL statement. It's double maintenance whenever you need a new field, and it seems to cancel out some of the "ease of use" that SQL supposedly gives you.
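Here's the double maintenance in concrete terms (all names invented). For a whole-record fetch you can let the compiler build the structure from the file, but the moment the fetch is a join or a column subset, you hand-code the structure, and every new field means touching both the SELECT list and the D-specs:

    * Whole-record fetch: the compiler keeps this in sync with the file
   D ItemRow       E DS                  ExtName(ITEMMST) Qualified

    * Join fetch: hand-built, must mirror the SELECT list exactly,
    * field for field and type for type
   D JoinRow         DS                  Qualified Dim(500)
   D  OrdNbr                       10A
   D  CustNm                       30A
   D  ItNbr                        15A
   D  Qty                           9P 0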
Remember that when doing SQL with a join, the I/O occurs only once, and even the moves into the buffer occur only once. From then on, only the changed fields are moved.
If you read into an array of data structures as you propose, each data structure would contain both header and detail information. You end up copying the header data multiple times, once for every detail row.
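Put a number on it (names invented): with a cursor like this, every fetched row carries the header columns again, so a 200-line order moves OrdNbr, CustNo, and OrdDate into your host array 200 times over.

   c/Exec Sql Declare OrdCursor Cursor For
   c+   Select H.OrdNbr, H.CustNo, H.OrdDate,
   c+          D.LineNo, D.ItNbr, D.Qty
   c+     From OrdHdr H Join OrdDtl D
   c+       On H.OrdNbr = D.OrdNbr
   c/End-Exec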
My bottom line is one stated by others previously in SQL versus record I/O discussions. SQL has changed and the AS/400 has changed. A lot of the old assumptions we used to make are just not true anymore.
But you haven't proven a single thing. That's why I despise this sort of mental gymnastics. Prove your point. Write a benchmark. Until then, it's just smoke. Joe
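And for anyone who actually wants to settle it, here is the skeleton of the benchmark I keep asking for (file and table names invented; free-form EXEC SQL assumes a release that supports it). Run both loops against equivalent files, with the same blocking and commitment-control settings, or the numbers mean nothing:

   D t0              S               Z
   D i               S             10I 0
   D nativeUs        S             20I 0
   D sqlUs           S             20I 0
    /free
      t0 = %timestamp();
      for i = 1 to 100000;
         // populate the host fields per iteration; omitted for brevity
         write ItemRec;                   // native write
      endfor;
      nativeUs = %diff(%timestamp(): t0: *mseconds);

      t0 = %timestamp();
      for i = 1 to 100000;
         exec sql insert into ITEMMST2
                  values(:ItNbr, :ItDsc, :ItPrc);
      endfor;
      sqlUs = %diff(%timestamp(): t0: *mseconds);
    /end-free

Both loops load the same 100,000 rows; whichever side you believe, timings in microseconds beat opinions.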