On Mon, 6 May 2002, Jim W wrote:

> 1) Once a read set is built or restored, is the program supposed to loop
> thru it one time and go to sleep, or does it cycle thru for a period of
> time before it sleeps?  I've played with the timeout value, and it doesn't
> appear to work the way the documentation indicates it should.  Exactly how
> is that supposed to work?

Well, basically you tell select() to return control to your program as soon
as one of these conditions is true:

1) You've supplied a "read set" and one or more of the descriptors you've
   supplied has data to read in its input buffer.

2) You've supplied a "write set" and one or more of the descriptors you've
   supplied has space for data in its output buffer.

3) You've supplied an "exceptional set" and one or more descriptors you've
   supplied has some exceptional condition pending.  (Out-of-band data is
   the only condition that triggers this in TCP/IP at this time.)

4) You've supplied a "timeout" value, and that much time has elapsed.

The first time that any one of these things happens, select() will end, and
you'll need to use FD_ISSET to determine which has happened.

> My program (partial code still below) seems to behave fairly reasonably
> when the first message is processed.  It doesn't eat too much CPU and does
> go to SELW after a minute or so.  The timeout value is 30 seconds.  But
> once I send another message to process, it will run thru the code and take
> 90+% of the resources if nothing much else is running.  It also doesn't
> appear to sleep.

Unless a high volume of data is being sent to the server, it should spend
most of its time at SELW status.  It shouldn't take a minute to get there.
It definitely shouldn't consume 90% of the CPU.  I've got a similar server
program running on my system which runs 24/7.  The most CPU I've ever seen
it consume is 1.2%.
Your mileage may vary -- the speed of your system, number of people, and
volume of data will all impact this -- but I find 90% to be a bit high :)

> 2) In your reply you recommended an FDZero each time thru the loop.  Would
> that be placed at the top of the loop, right before the readset is
> restored?

Yes, zero out the descriptor set, then set each one explicitly.  So,
FD_ZERO, followed by FD_SET for each descriptor (which you seem to be
calling FDZero and FDSet in your code).  The idea is to make sure that
anything that select() did to the set the previous time through the loop
has been wiped clean and you've got the right numbers set.  Possibly
something like this (untested):

     C                   CallP     FDZero(FDes)
     C     0             Do        CurMax        J
     C                   If        SckFlags(J + 1) = '1'
     C                   CallP     FDSet(J: FDes)
     C                   EndIf
     C                   EndDo

> You also suggested dumping the contents of the FDes data somewhere and
> manually checking that the correct descriptors are set before & after
> calling the select.  What exactly should they look like?

I describe what the sets should look like in my tutorial on this page:
http://klement.dstorm.net/rpg/socktut/x1032.html

Looking at your code again, I see this line:

     C                   If        FDIsSet(I: FDes) > 0

In my tutorial, FD_ISSET returns *ON or *OFF -- it returns an indicator.
But you seem to be returning a number?  Is that true, or is this a bug?

I also note this:

      * ....Receive first 8 bytes of header length
     C                   Eval      RC = Recv(I: SocketData@
     C                                  : 8: 0)
      * If no data received, then reset RC
     C                   Eval      SockDtaLen = %Len(%Trim(SocketData))

This code seems wrong to me.  After recv() has run, RC will contain the
length of the data that you received.  Why are you doing the %Len(%Trim())
thing?  Since you haven't yet converted the data to EBCDIC, I doubt that
the %Trim() is doing anything useful.  %Trim() removes any x'40' (EBCDIC
spaces) from the start & end of the string.  In ASCII, x'40' is the @
symbol.
So (unless of course you're sending the data in EBCDIC in the first place)
you're removing any leading or trailing @ symbols, then checking to see how
long the data is...

Worse yet, recv() will only fill in the amount of data that it received,
and leave the rest of the buffer unchanged.  So, if on the previous call
you received "HELLO", and on the next call you receive "BYE", the contents
of SocketData would be "BYELO".  Now, if you always use RC to determine how
much data was received, this wouldn't be an issue.  Maybe something like:

     C                   Eval      Msg = %Subst(SocketData: 1: RC)

Then work with Msg.  This way you're only working with the data that you
received this time through.

Also keep in mind that recv() will return whatever is in the receive buffer
for the socket.  It isn't necessarily 8 bytes long.  It may be only part of
the 8 bytes you need...  You should either be using some sort of delimiter
(usually CR/LF) to determine when a whole message has been received, or
else you should be using a loop to receive a fixed length of 8 bytes.  I
hope you understand what I mean, as this is a bit too much to try to fully
explain in this message...

> I remember the second time I looked at your tutorial on-line, I saw that
> you had made some changes to your tutorials, and had asked for feedback
> for improvements.  Maybe a chapter on debugging problems like this, with
> some detail on how the descriptors should be set before and after
> function calls would be helpful.

I haven't even figured out what you're doing wrong, yet :)  But, from the
other questions I've had, it appears that select() does need some more
attention in the tutorial.  I wish I had more time to work on it! :(
This mailing list archive is Copyright 1997-2025 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].