Brad

With both tools you have to parse the entire JSON string, which is then stored internally in memory (each tool has its own method for doing so) so you can extract items of information. The numbers below are simply for doing that, and do not include any time that may be taken in retrieving nodes or walking the tree etc. That sort of thing is done after the parsing has completed. Each tool also has its own documentation and samples to look at.
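The distinction Kevin draws here (parse time versus later node retrieval) can be made concrete with a small sketch. Python is used purely for illustration (the tools under discussion run on IBM i); it times the parse phase separately from the extraction phase:

```python
import json
import time

# Build a reasonably large JSON document to parse.
doc = json.dumps([{"id": i, "name": f"item{i}"} for i in range(100_000)])

t0 = time.perf_counter()
tree = json.loads(doc)                  # phase 1: parse the whole string into memory
t1 = time.perf_counter()
names = [obj["name"] for obj in tree]   # phase 2: walk the tree and extract values
t2 = time.perf_counter()

print(f"parse:   {(t1 - t0) * 1000:.1f} ms")
print(f"extract: {(t2 - t1) * 1000:.1f} ms")
```

Benchmarks that stop after phase 1 say nothing about how fast phase 2 will be, which is exactly Brad's question below.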

Does that help?

Kevin

-----Original Message-----
From: WEB400 [mailto:web400-bounces@xxxxxxxxxxxx] On Behalf Of Bradley Stone
Sent: 02 June 2015 21:34
To: Web Enabling the IBM i (AS/400 and iSeries)
Subject: Re: [WEB400] receiving POST encoded in JSON

Kevin,

I'd be curious what these times mean. Are you extracting data, or is it simply a process that parses the JSON and puts it into memory? I only ask because parsing alone can be different from parsing and retrieving many objects.

A practical example I'd be interested in (as far as speed goes between the
two) is how long it takes to extract a few elements from the sample JSON.

The sample JSON provided is a large array of objects. I'd like to see times (and maybe sample code) to parse out and retrieve, for each JSON array element:

- each "guid" object
- each "favoriteFruit" object
- the 3rd "tag" for each object
- the 2nd friend's "name" for each object.
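Assuming each array element has the shape implied by the list above (hypothetical field names "guid", "favoriteFruit", "tags", and "friends" with nested "name" objects), a minimal Python sketch of that extraction might look like:

```python
import json

# Hypothetical sample resembling the JSON discussed in the thread:
# an array of objects with guid, favoriteFruit, tags, and friends.
sample = json.loads("""
[
  {"guid": "a1", "favoriteFruit": "apple",
   "tags": ["t1", "t2", "t3"],
   "friends": [{"name": "Ann"}, {"name": "Bob"}]},
  {"guid": "b2", "favoriteFruit": "banana",
   "tags": ["x1", "x2", "x3"],
   "friends": [{"name": "Cid"}, {"name": "Dee"}]}
]
""")

for obj in sample:
    guid = obj["guid"]
    fruit = obj["favoriteFruit"]
    third_tag = obj["tags"][2]                  # 3rd tag (zero-based index 2)
    second_friend = obj["friends"][1]["name"]   # 2nd friend's name
    print(guid, fruit, third_tag, second_friend)
```

The interesting benchmark is how long this loop takes over the full array, separate from the initial parse.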

The reason is that a while back I had to write my own JSON parsing routine. At first it was just a ham-fisted brute force method. :)

Now I'm almost done with an update that makes it exponentially faster, simply because I'm caching object nodes in memory once they're "found" (for example, running the above example on my little 515 produced run times 8 times faster after adding the node caching).

I thought about going through the entire JSON file first and storing the location of each node as well (instead of only storing it when it's requested and found), but I'm having a hard time convincing myself that's a good idea since we won't always want to go through the entire JSON file or need each node's data. But, it's on my "to do" list for improving my personal JSON parser, or at least for experimentation.
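The node-caching idea described above (store a node's location the first time it is requested and found, so repeat lookups skip the tree walk) could be sketched roughly like this; this is an illustrative Python memoization of lookups by path, not Brad's actual implementation:

```python
import json

class CachingParser:
    """Parse once, then cache node lookups keyed by path so repeated
    requests for the same node skip the tree walk entirely."""

    def __init__(self, text):
        self.root = json.loads(text)
        self.cache = {}  # path tuple -> node found at that path

    def get(self, *path):
        if path in self.cache:
            return self.cache[path]     # cache hit: no walk needed
        node = self.root
        for step in path:
            node = node[step]           # walk only on a cache miss
        self.cache[path] = node
        return node

p = CachingParser('[{"guid": "a1", "tags": ["t1", "t2", "t3"]}]')
p.get(0, "tags", 2)   # first call walks the tree and caches the result
p.get(0, "tags", 2)   # second call is served from the cache
```

Pre-populating the cache for every node up front is the trade-off Brad mentions: it only pays off when most of the document will actually be read.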

Brad
www.bvstools.com

On Mon, Jun 1, 2015 at 12:12 PM, Kevin Turner < kevin.turner@xxxxxxxxxxxxxxxxxxxx> wrote:

Yes I am using milliseconds and yes it did take over 16 seconds -
which is why I need to confirm it didn't barf internally rather than
actually successfully complete the parsing process. It could be that
140kb managed to blow its brains out.


--
This is the Web Enabling the IBM i (AS/400 and iSeries) (WEB400) mailing list.
To post a message email: WEB400@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/web400
or email: WEB400-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives at http://archive.midrange.com/web400.








