|
I ran across a couple of HTTP Server directives in the Information Center:
LoadModule deflate_module /QSYS.LIB/QHTTPSVR.LIB/QZSRCORE.SRVPGM
SetOutputFilter DEFLATE
And added them to my configuration file. No errors so far.
Is that all I need to do to implement compression? Is there a way to
tell that it's working?
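One way to tell is to request a file with an Accept-Encoding header and check for Content-Encoding in the response. A rough sketch (the host name and file path are placeholders for your own server):

curl -s -D - -o /dev/null -H "Accept-Encoding: gzip,deflate" http://www.example.com/scripts/myfile.js

If DEFLATE is active, the headers that come back should include "Content-Encoding: gzip" (or deflate). You may also want to compress only text types rather than everything; on an Apache 2.0-style configuration (which the IBM i HTTP Server is based on), something along these lines should work, though verify the exact directives against the Information Center:

AddOutputFilterByType DEFLATE text/html text/css application/x-javascript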
One thing that motivated me to implement HTTP Server compression (in
addition to the suggestions made on this list) was that I ran one of my
JavaScript files through WINZIP, which yielded 64% compression; and that
was after my own compression routine had already removed a lot of
whitespace from the original file.
Nathan.
--
Sent using goWebtop from Laszlo Systems.
Try it yourself: http://www.gowebtop.com
On Wed, Mar 12, 2008 at 8:51 AM, Walden H. Leverich wrote:
--I have mixed feelings about obfuscation.
Obfuscation serves a purpose (maybe), but it's not compression. Sure,
the first obvious step to obfuscation is to remove comments and
whitespace, and that makes the file smaller, but then it moves on to
more of a "security" role in an attempt to hide your IP. At the end of
the day, the browser needs to understand the JS enough to run it, and
therefore someone can de-obfuscate it if they care to.
--On the other hand, I'm sure that on-demand compression/decompression
takes some CPU time.
Yes and no. You can get a lot of compression for a little CPU; it's that
extra 10% that takes 90% of the time. But even if you go for heavy
compression, most web servers will cache the compressed copy of the file
anyway, so you only take the hit on the server once. After all, it's the
same file they're compressing over and over, so why not cache it?
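For example, on an Apache 2.0-style server that caching is typically done with mod_cache; a sketch (the cache path is a placeholder, and whether these exact directives apply on the IBM i HTTP Server is something to verify against your platform's docs):

CacheRoot /www/myserver/cache
CacheEnable disk /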
On the client side, most browsers have more CPU than they know what to
do with, so decompressing (which is the quicker of the two operations
anyway) is a nit. Actually, you may see a performance _increase_ due to
compression. Remember Stacker from the old PC days? The theory was
simple: sure, we take a hit in CPU for the compression/decompression,
but we read less from the disk, so things are faster. The same argument
holds here: sure, you take a hit from the decompress, but you had to
download 7K instead of 27K, so you more than make up for the
decompression time. Also, modern browsers will cache that decompressed
file anyway, so you're only talking about a one-time decompression hit.

When looking at compression settings on the server, make sure you look
at the caching settings too. You can eliminate a _lot_ of requests with
client-side caching, and that's a win-win since you save the bandwidth
and they get the page faster.
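For example, client-side caching can be driven with mod_expires; a sketch, assuming the module is available on your platform (the MIME type and lifetime are just illustrations):

ExpiresActive On
ExpiresByType application/x-javascript "access plus 1 week"
ExpiresByType text/css "access plus 1 week"

That tells the browser it can reuse its cached copy for a week without asking the server again.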
-Walden
--
Walden H Leverich III
Tech Software
(516) 627-3800 x3051
WaldenL@xxxxxxxxxxxxxxx
http://www.TechSoftInc.com
Quidquid latine dictum sit, altum videtur.
(Whatever is said in Latin seems profound.)
--
This is the Web Enabling the AS400 / iSeries (WEB400) mailing list
To post a message email: WEB400@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/web400
or email: WEB400-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at http://archive.midrange.com/web400.