I have a socket application that receives messages from a data queue and
sends them to a remote host via sockets.  Currently the application uses
a single persistent socket connection.  However, the application needs to
be changed so that it sends messages to four different hosts for
redundancy.
The easiest solution would be to submit four different jobs, each one 
talking to a different host.
Sockets are an API that lets your program control the network layer of the 
operating system. Since modern systems are able to speak many different 
network protocols, telling us that you're writing a "socket program" 
doesn't really say much.  Is it Unix-domain?  TCP?  UDP?  IPX/SPX?
I'll assume TCP, since TCP is the most common.  (Though I always hate 
making assumptions!)
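
For reference, a "TCP socket" in code terms is a stream socket that gets
connect()ed to one particular host and port.  Here's a minimal sketch of
that -- the address and port are placeholders, not anything from your
application:

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int connect_to_host(const char *ip, unsigned short port)
{
    struct sockaddr_in addr;
    int sock = socket(AF_INET, SOCK_STREAM, 0);  /* TCP = stream socket */

    if (sock < 0)
        return -1;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(port);
    addr.sin_addr.s_addr = inet_addr(ip);

    if (connect(sock, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
        close(sock);
        return -1;
    }

    return sock;   /* a connected TCP socket descriptor */
}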
There's a big problem with bridging TCP and data queues: they have 
totally different paradigms!  Data queues are asynchronous.  When you 
write data to them, all that gets verified is that the data is placed in 
the queue.  No verification is made that it ever reaches the destination 
program -- in fact, in the data queue paradigm, there's no need for the 
other program to be continually listening.  It's possible for a data 
queue processor to be run only once a week, with the intent of handling 
all queued entries for that week.
TCP, on the other hand, is designed for "right here, right now" 
communications.  There has to be a program listening for connections and 
handling them immediately.  Nothing is queued for later; if the program 
can't receive the data, you get an error.  It's more like APPC than data 
queues.
I just don't understand why you'd want to try to bridge them.  If your 
application calls for immediate, real-time communications, use sockets 
throughout.  If your application calls for asynchronous communications, 
use data queues throughout. What's the point of bridging them?

I'm not sure of the best way to switch between sockets.  The easiest way to change the application would be to just cycle through the sockets, but each time I send a message it would have to wait for a reply before processing a new message.  That doesn't seem very efficient.
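
Just to make sure we're picturing the same thing, that round-robin
approach would look something like this (the host count and buffer size
are made up).  The job is stuck in recv() on each host before it can
even look at the next data queue entry:

#include <stddef.h>
#include <sys/socket.h>

#define NUM_HOSTS 4

/* send one record to every host in turn, blocking on each reply */
int send_to_all(int socks[], const char *msg, size_t len)
{
    char reply[512];
    int  i;

    for (i = 0; i < NUM_HOSTS; i++) {
        if (send(socks[i], msg, len, 0) < 0)
            return -1;

        /* the whole job sits here until host i answers */
        if (recv(socks[i], reply, sizeof(reply), 0) < 0)
            return -1;
    }

    return 0;   /* only now can the next data queue entry be handled */
}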
If you REALLY want to do this...   Here's what you do:

a) Put all of the sockets in non-blocking mode.
b) Create an array of data structures containing a read buffer, write 
   buffer, and socket descriptor for each socket.
c) Loop through the array to build sets of descriptors for the select() 
   API to use.
d) For each socket with space in its read buffer, use FD_SET to turn on 
   that descriptor in the read set.
e) For each socket with data to send in the write buffer, use FD_SET to 
   turn on that descriptor in the write set.
f) Call the select() API with a 0.5 second timeout.
g) Loop through all of the sockets; read any that are marked readable, 
   and write data from the write buffer to any that are marked writable. 
   Update the read/write buffers accordingly.
h) Check the data queue(s) with a 0 second timeout.  If any data is 
   available to send to the sockets, put it in the appropriate write 
   buffer.
i) If there's a full record of data in a socket's read buffer, put it 
   into the appropriate data queue.
j) Go back to the top of the loop (step c).

The 0.5 second timeout on the select() API will keep the job from consuming too much CPU time (unless there's a LOT of traffic on the sockets).  Unfortunately, if there's no traffic on the sockets, it also means the data queues are only checked at 0.5 second intervals, so that might add some delay.  Of course, you can adjust that timeout to try to get it right.
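
In case it helps, here's a rough, untested sketch of that loop in C.
The connection setup and the data queue calls (QRCVDTAQ/QSNDDTAQ) are
only shown as comments, and NUM_HOSTS and BUFSIZE are values I made up:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <sys/time.h>
#include <sys/select.h>
#include <sys/socket.h>

#define NUM_HOSTS 4
#define BUFSIZE   4096

typedef struct {
    int  sock;                /* socket descriptor                 */
    char rdbuf[BUFSIZE];      /* data read from the socket         */
    int  rdlen;
    char wrbuf[BUFSIZE];      /* data waiting to go to the socket  */
    int  wrlen;
} conn_t;

int main(void)
{
    conn_t conn[NUM_HOSTS];
    int i, flags;

    for (i = 0; i < NUM_HOSTS; i++) {
        conn[i].sock = socket(AF_INET, SOCK_STREAM, 0);
        /* ... connect() conn[i].sock to host i here ... */

        /* a) put the socket in non-blocking mode */
        flags = fcntl(conn[i].sock, F_GETFL, 0);
        fcntl(conn[i].sock, F_SETFL, flags | O_NONBLOCK);

        conn[i].rdlen = conn[i].wrlen = 0;
    }

    for (;;) {
        fd_set rset, wset;
        struct timeval tv;
        int maxfd = -1, rc, n;

        /* c-e) build the descriptor sets for select() */
        FD_ZERO(&rset);
        FD_ZERO(&wset);
        for (i = 0; i < NUM_HOSTS; i++) {
            if (conn[i].rdlen < BUFSIZE) FD_SET(conn[i].sock, &rset);
            if (conn[i].wrlen > 0)       FD_SET(conn[i].sock, &wset);
            if (conn[i].sock > maxfd)    maxfd = conn[i].sock;
        }

        /* f) wait up to half a second for activity */
        tv.tv_sec  = 0;
        tv.tv_usec = 500000;
        rc = select(maxfd + 1, &rset, &wset, NULL, &tv);
        if (rc < 0) {
            perror("select");
            break;
        }

        /* g) read/write whichever sockets are ready */
        for (i = 0; i < NUM_HOSTS; i++) {
            if (FD_ISSET(conn[i].sock, &rset)) {
                n = recv(conn[i].sock, conn[i].rdbuf + conn[i].rdlen,
                         BUFSIZE - conn[i].rdlen, 0);
                if (n > 0) conn[i].rdlen += n;
            }
            if (FD_ISSET(conn[i].sock, &wset) && conn[i].wrlen > 0) {
                n = send(conn[i].sock, conn[i].wrbuf, conn[i].wrlen, 0);
                if (n > 0) {
                    memmove(conn[i].wrbuf, conn[i].wrbuf + n,
                            conn[i].wrlen - n);
                    conn[i].wrlen -= n;
                }
            }
        }

        /* h) check the data queue(s) with a 0 wait time; anything
              found goes into the appropriate write buffer
              (QRCVDTAQ call goes here)                              */

        /* i) if a socket's read buffer holds a full record, put it
              on the appropriate data queue (QSNDDTAQ call goes here) */
    }

    return 0;
}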
Alternatively, you might use the spawn() API to create a second job so that 
one job handles reading from the socket and writing to the data queue, and 
the other handles reading from the data queue and writing to the socket. 
That'll eliminate the 0.5 second delay.
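
For what it's worth, starting that second job might look something like
the sketch below, assuming the ILE C spawn() interface from <spawn.h>.
The program path and argument list are made up for illustration; the
idea is that the child job gets the already connected socket passed to
it as descriptor 0 through fd_map:

#include <spawn.h>
#include <string.h>
#include <stdio.h>
#include <sys/types.h>

int start_reader_job(int sock)
{
    struct inheritance inh;
    int fd_map[1];
    char *argv[] = { "SOCKREADER", NULL };   /* made-up child program  */
    char *envp[] = { NULL };
    pid_t pid;

    memset(&inh, 0, sizeof(inh));            /* default job attributes */
    fd_map[0] = sock;            /* child sees the socket as fd 0      */

    pid = spawn("/QSYS.LIB/MYLIB.LIB/SOCKREADER.PGM",  /* made-up path */
                1, fd_map, &inh, argv, envp);
    if (pid == -1) {
        perror("spawn");
        return -1;
    }
    return 0;
}

Since the child inherits the descriptor through fd_map, both jobs can
work with the same connected socket -- one job only reads from it, the
other only writes to it.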
If you're a Java or C programmer, you might consider using threads instead 
of separate jobs, but since you only need to start them up once, and since 
you'll only be submitting (at most) 8 jobs, it probably won't make much 
difference in performance.
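
A bare-bones pthreads version of the same split might look like this.
The socket_to_dtaq() and dtaq_to_socket() routines are hypothetical
stand-ins for the actual copy loops:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* hypothetical worker routines -- each one would hold a copy loop */
void *socket_to_dtaq(void *arg)
{
    int sock = *(int *) arg;
    /* recv() records from sock and QSNDDTAQ each one (not shown) */
    (void) sock;
    return NULL;
}

void *dtaq_to_socket(void *arg)
{
    int sock = *(int *) arg;
    /* receive data queue entries and send() each one to sock (not shown) */
    (void) sock;
    return NULL;
}

int start_threads(int sock)
{
    pthread_t rdr, wtr;
    int *fd = malloc(sizeof(int));

    if (fd == NULL)
        return -1;
    *fd = sock;

    if (pthread_create(&rdr, NULL, socket_to_dtaq, fd) != 0 ||
        pthread_create(&wtr, NULL, dtaq_to_socket, fd) != 0) {
        perror("pthread_create");
        return -1;
    }
    return 0;
}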
It just seems silly to create all this complexity.  I'd follow the 
K.I.S.S. principle and make it all TCP or all data queues.




