It looked like DENSE_RANK did the trick with my sample data, but when I ran it against a larger data set I hit a snag that may not have been apparent in the example I presented. Here is a slightly different variation that illustrates the remaining issue.

BB8179 04/17/11 04/02/11
BE7214 04/17/11 04/02/11
BE7488 04/18/11 04/02/11
BE2178 04/18/11 04/02/11
BE1618 04/18/11 04/02/11
BD9608 04/21/11 04/02/11
BE2180 04/21/11 04/02/11
BE1696 04/21/11 04/02/11
BD9607 05/07/11 05/28/11
BB6382 05/07/11 05/28/11
BB7942 05/10/11 05/28/11
BE7487 05/10/11 05/28/11
BE7489 05/11/11 05/28/11
BE2179 05/11/11 05/28/11
BE8955 05/11/11 05/28/11


What I would like is to number the distinct values of the first
date column within each distinct value of the second date column,
restarting at 1 whenever the second date changes. Something like:

BB8179 04/17/11 1 04/02/11
BE7214 04/17/11 1 04/02/11
BE7488 04/18/11 2 04/02/11
BE2178 04/18/11 2 04/02/11
BE1618 04/18/11 2 04/02/11
BD9608 04/21/11 3 04/02/11
BE2180 04/21/11 3 04/02/11
BE1696 04/21/11 3 04/02/11
BD9607 05/07/11 1 05/28/11
BB6382 05/07/11 1 05/28/11
BB7942 05/10/11 2 05/28/11
BE7487 05/10/11 2 05/28/11
BE7489 05/11/11 3 05/28/11
BE2179 05/11/11 3 05/28/11
BE8955 05/11/11 3 05/28/11


I tried adding the second column to the ORDER BY clause, but that seemed to make no difference. I also tried PARTITION BY on the second column, but that didn't deliver the desired result either. Ideas?
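
For concreteness, the construct I am after looks roughly like the sketch below. SHIPMENTS, ITEM_ID, FIRST_DATE, and SECOND_DATE are placeholder names standing in for my actual schema:

    -- Placeholder names: SHIPMENTS, ITEM_ID, FIRST_DATE, SECOND_DATE.
    -- The idea: DENSE_RANK restarts for each distinct SECOND_DATE
    -- (the partition) and numbers the distinct FIRST_DATE values
    -- within that partition, giving rows with the same FIRST_DATE
    -- the same number, with no gaps in the sequence.
    SELECT ITEM_ID,
           FIRST_DATE,
           DENSE_RANK() OVER (PARTITION BY SECOND_DATE
                              ORDER BY FIRST_DATE) AS SEQ,
           SECOND_DATE
      FROM SHIPMENTS
     ORDER BY SECOND_DATE, FIRST_DATE

That is the behavior the desired output above shows, but my attempts so far have not produced it against the real data.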

