Some additional thoughts inspired by other posts:

@DTAARA:
a better aproach would be a database based solution: we used one file (dbname, filename, maxkey) for all needed keys and one function getKey(dbname, filename, blockSize) returning the starting Number of the block. Inside the function it's a little bit tricky to avoid collisions and deadlocks (start with the update of the keyfile, then read the maxKey, afterwards commit - you need a commitscope of its own -> named ACTGRP, for other DBs connection of its own). If there is no row for dbname, filename, simply insert one to start - so you would not need a maintenance function. This should work for all databases I know - without any risc of key conflicts during insert.
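To make that sequence concrete, here is a minimal sketch in Java/JDBC, assuming a key table KEYTABLE (DBNAME, FILENAME, MAXKEY) with a unique key over DBNAME/FILENAME; the table, column, and class names are illustrative, not from the original post. The dedicated Connection plays the role of the named ACTGRP's own commit scope:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class KeyBlockAllocator {

    private final Connection con; // dedicated connection = its own commit scope

    public KeyBlockAllocator(Connection con) throws SQLException {
        this.con = con;
        con.setAutoCommit(false); // we commit the key table update ourselves
    }

    /** Reserves a block of blockSize keys and returns the first key of the block. */
    public long getKey(String dbName, String fileName, int blockSize) throws SQLException {
        // Update FIRST: this takes the row lock and serializes concurrent callers.
        try (PreparedStatement upd = con.prepareStatement(
                "UPDATE KEYTABLE SET MAXKEY = MAXKEY + ? WHERE DBNAME = ? AND FILENAME = ?")) {
            upd.setInt(1, blockSize);
            upd.setString(2, dbName);
            upd.setString(3, fileName);
            if (upd.executeUpdate() == 0) {
                // No row yet: seed one, so no maintenance function is needed.
                try (PreparedStatement ins = con.prepareStatement(
                        "INSERT INTO KEYTABLE (DBNAME, FILENAME, MAXKEY) VALUES (?, ?, ?)")) {
                    ins.setString(1, dbName);
                    ins.setString(2, fileName);
                    ins.setLong(3, blockSize);
                    ins.executeUpdate();
                } catch (SQLException duplicate) {
                    // A concurrent caller seeded the row first: retry the update path.
                    con.rollback();
                    return getKey(dbName, fileName, blockSize);
                }
            }
        }
        // Read maxKey only AFTER the update, then commit to release the lock quickly.
        long maxKey;
        try (PreparedStatement sel = con.prepareStatement(
                "SELECT MAXKEY FROM KEYTABLE WHERE DBNAME = ? AND FILENAME = ?")) {
            sel.setString(1, dbName);
            sel.setString(2, fileName);
            try (ResultSet rs = sel.executeQuery()) {
                rs.next();
                maxKey = rs.getLong(1);
            }
        }
        con.commit();
        return maxKey - blockSize + 1; // starting number of the reserved block
    }
}

The important detail is the order: updating first locks the row, and the read afterwards sees the new maxKey; reading first and updating later would open a window for two callers to reserve the same block.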

@autoincrement: There might be differences between databases, and even worse: it might get complex to consolidate data originating from different databases, because two databases handing out the same identity values independently will collide as soon as the data is merged.

@CMPSWP solution: this is AS/400-only and it might be a bottleneck for maximum throughput. In the business warehouse project I mentioned before, we used massive parallelism to speed up the load process to its maximum. To get more speed, we only needed a higher number of CPUs and a change in our configuration, so speed was limited only by the maximum number of CPUs.
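For readers who don't know CMPSWP: it is the machine-level compare-and-swap instruction, so the counter lives in one shared storage location and every job contends on the same word, which is where the throughput limit comes from. A rough Java analogue of that technique (AtomicLong is CAS-based internally) might look like the sketch below; this is for illustration only, the original is an MI instruction, not Java:

import java.util.concurrent.atomic.AtomicLong;

// Rough Java analogue of a CMPSWP-based key counter: every caller
// contends on one shared word, which caps maximum throughput.
public class CasKeyCounter {

    private final AtomicLong maxKey = new AtomicLong(0);

    /** Reserves a block of blockSize keys and returns its starting number. */
    public long getKey(int blockSize) {
        long newMax = maxKey.addAndGet(blockSize); // compare-and-swap loop under the hood
        return newMax - blockSize + 1;
    }
}

Handing out blocks instead of single keys reduces how often callers hit the shared counter, but unlike the database solution it does not survive a restart and does not work across machines.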

D*B



