Actually, in my case I have some ugly name/value pairs where I really only need to make sure the field name exists in the FFIELDNAM variable.

They will usually end with a specified value and will exist within a very small recordset, so I think this will work perfectly.
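For the record, the existence check boils down to something like this (a rough sketch against the FORMFLD table from the query quoted below; the '%Fld1' suffix is only an illustration):

    -- Is there any record on this form whose field name ends in 'Fld1'?
    select count(*) as found
      from formfld
     where fformid = 'cb3c0801-93a8-1940-ad63-0004ac10'
       and ffieldnam like '%Fld1'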

Thanks for the start.

Regards,
Richard Schoen
RJS Software Systems Inc.
Where Information Meets Innovation
Document Management, Workflow, Report Delivery, Forms and Business Intelligence
Email: richard@xxxxxxxxxxxxxxx
Web Site: http://www.rjssoftware.com
Tel: (952) 736-5800
Fax: (952) 736-5801
Toll Free: (888) RJSSOFT


------------------------------

message: 6
date: Mon, 3 Dec 2012 15:50:26 +0000
from: "DeLong, Eric" <EDeLong@xxxxxxxxxxxxxxx>
subject: RE: Interesting SQL Question - Flatten Multiple Records into One Record

Richard,

I try to avoid LIKE as much as possible, so I often get creative with alternatives.

In your example, it appears you want to ignore the first character, which presumably could be upper or lower case. An easy fix is to wrap the field in UPPER() or LOWER() to monocase the string, then test for equality...
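For example, something along these lines (a quick sketch against the FORMFLD table from your query; the 'FLD1' literal is only an illustration):

    -- Monocase the field name so 'Fld1' and 'fld1' both match, then use =:
    select fformid, ffieldnam, ffieldval
      from formfld
     where upper(ffieldnam) = 'FLD1'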

-Eric

-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Richard Schoen
Sent: Saturday, December 01, 2012 7:06 AM
To: midrange-l@xxxxxxxxxxxx
Subject: RE: Interesting SQL Question - Flatten Multiple Records into One Record

Jim,

Thanks for your response.

Your answer is interesting as well.

In my case I will have some unknown number of fields (value-pair records), usually around 50-100, where I may want to pick off 30 or 40 of them for the ID = 1.

This query only has to compute an aggregate MAX function on the columns. Obviously there's not going to be much difference over one record, but over millions it's huge!

I may be picking out only 30-40 fields that belong to ID = 1 out of possibly millions of records. This means my result sets will typically only contain 50-100 fields per ID.

When you say this could be slow, doesn't the SQL engine first evaluate the ID = 1 predicate to limit the subset of records before processing the 30-40 selected records?

I would think that would probably remain very fast. I do have the option to put each ID result set into a temp table before processing if needed, but I'm hoping that's not necessary.
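If it did come to that, the staging step might look something like this (a rough sketch using DB2's DECLARE GLOBAL TEMPORARY TABLE; SESSION.FORMTMP is just an illustrative name):

    -- Stage one form's records in a session-scoped temp table first:
    declare global temporary table session.formtmp as
      (select * from formfld
        where fformid = 'cb3c0801-93a8-1940-ad63-0004ac10')
      with data with replace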

Note:
One other thing I forgot to mention in my case is that I need to pick FieldName based on a LIKE "ends with" condition (like '%ield1'), not an equal condition the way Eric showed me. I'm not sure if your example would be easily modifiable; I just changed the CASE statement to a LIKE and got what I needed.

Oh, and one other thing: FieldValue is actually a 5000-byte buffer, so I had to define the field lengths on the fly as well. I know my data will typically only have around 30 bytes or less of data, so shrinking the field sizes is paramount to keeping the return data buffer small.

Here's a real-world example of how it currently looks in testing:

select fformid,
       char(max(case when ffieldnam like '%Fld1'
                     then trim(ffieldval) end), 30) as fld1,
       char(max(case when ffieldnam like '%Fld2'
                     then trim(ffieldval) end), 30) as fld2
  from formfld
 where fformid = 'cb3c0801-93a8-1940-ad63-0004ac10'
 group by fformid


