From: Hall, Philip <phall@spss.com>

> Also found is that there is a big difference between machine response time (as measured by timers in the code) and user-perceived 'wall time' (as measured by looking at the clock on the wall)
> perception is king.

When I wrote the software that is (was?) behind the telephone company's Directory Assistance in New York City, the contract defined response time as the time from the user's last keystroke to the time the first character of the response appeared on the screen. At the time, the system used async 56Kb lines, so the transmission time was noticeable. We first implemented the system to begin showing the response as soon as some data had been received (to kind of comply with the contract). Then, just for fun (actually, to test a theory of mine), I buffered up the data inside the terminal until all of it had been received and then FLASHED the response onto the screen in one go.

Although the response now appeared an average of 0.5 seconds later, ten minutes after the new method had been downloaded I got a phone call from the field (there were 2,000 terminals out there), and a voice asked me in a "frantic" way: "WHAT DID YOU DO?" Preparing myself for some half-baked excuse, I heard her next utterance: "Everything is SO much faster now." Needless to say, that was the way we left it.
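The two rendering strategies in the story can be sketched as follows. This is a hypothetical Python illustration, not the original terminal code: the function names and the chunk-based interface are assumptions, and the point is only the structural difference between writing data as it arrives versus buffering it and emitting it in one atomic write.

```python
import io

def render_incremental(chunks, out):
    # First approach: write each chunk to the display as soon as it arrives.
    # The first character appears early, but the user watches the screen fill in.
    for chunk in chunks:
        out.write(chunk)

def render_buffered(chunks, out):
    # Second approach: buffer everything inside the terminal, then flash the
    # complete response onto the screen in a single write. Total time is
    # slightly longer, but the response appears all at once.
    buf = io.StringIO()
    for chunk in chunks:
        buf.write(chunk)
    out.write(buf.getvalue())
```

Both produce identical final output; the difference is purely in how the intermediate states are perceived, which is the theory the buffered version was testing.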
This mailing list archive is Copyright 1997-2025 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].