My bad, didn't mean to hijack the thread. Will start another discussing
pandas installation.

On Wed, Mar 8, 2017 at 11:58 AM, Massimo Fantin <massimo@xxxxxxxxxxxxx>
wrote:

Yes, I have installed pandas as a prerequisite of superset:
https://github.com/airbnb/superset

but I installed it manually: I took the source and ran "python ./setup.py
install",

because the build errors out at the link step. It uses "g++" instead of
"/QOpenSys/QIBM/ProdData/OPS/Python3.4/lib/python3.4/config/ld_so_aix"
for python3
(or "/QOpenSys/QIBM/ProdData/OPS/Python2.7/lib/python2.7/config/ld_so_aix"
for python2).

So for python3 I did:

python ./setup.py install
(it stops at the link step, so run that link by hand:)
/QOpenSys/QIBM/ProdData/OPS/Python3.4/lib/python3.4/config/ld_so_aix gcc
-pthread -bI:/QOpenSys/QIBM/ProdData/OPS/Python3.4/lib/python3.4/config/python.exp
build/temp.os400-powerpc-3.4/pandas/msgpack/_packer.o
-L/QOpenSys/QIBM/ProdData/OPS/Python3.4/lib -o
build/lib.os400-powerpc-3.4/pandas/msgpack/_packer.so
python ./setup.py install
(it stops at the next link step; same again for _unpacker:)
/QOpenSys/QIBM/ProdData/OPS/Python3.4/lib/python3.4/config/ld_so_aix gcc
-pthread -bI:/QOpenSys/QIBM/ProdData/OPS/Python3.4/lib/python3.4/config/python.exp
build/temp.os400-powerpc-3.4/pandas/msgpack/_unpacker.o
-L/QOpenSys/QIBM/ProdData/OPS/Python3.4/lib -o
build/lib.os400-powerpc-3.4/pandas/msgpack/_unpacker.so
python ./setup.py install

and then it completed OK.
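
A thought I have not tested: distutils usually honors an LDSHARED override
from the environment, so something like this sketch might avoid the manual
link steps (python3 paths as above; unverified on IBM i):

-----------------------------
import os
import subprocess

# Untested sketch: point distutils at ld_so_aix up front so the link step
# does not fall back to g++.  These are the python3 paths from above.
cfg = "/QOpenSys/QIBM/ProdData/OPS/Python3.4/lib/python3.4/config"
env = dict(os.environ)
env["LDSHARED"] = cfg + "/ld_so_aix gcc -pthread -bI:" + cfg + "/python.exp"
subprocess.check_call(["python3", "./setup.py", "install"], env=env)
-----------------------------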

PS:
I use superset with SQLAlchemy and "ibm_db_sa" from my modified git
repository (https://github.com/openmax/python-ibmdbsa)
to do BI on DB2 for IBM i data.

Massimo Fantin



On 08/03/2017 18:16, Jim Steil wrote:

Following on John's tips, have you tried Pandas yet?

Anyone know if it is available for IBM i Python yet? It wasn't working
last I checked, in late summer of 2016.

Pandas (http://pandas.pydata.org/) is a great package for
reading/manipulating/writing data. In addition to importing from SQL, you
can suck in data from Excel sheets, different file types (csv, tsv, json,
etc) and json data streams from a URL.
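
A few of those readers in one place (file names and the URL here are just
placeholders, and read_excel needs an Excel engine like xlrd installed):

-----------------------------
import pandas as pd

# Placeholder sources, just to show the shape of the reader calls.
df_csv = pd.read_csv('data.csv')                # local CSV file
df_xlsx = pd.read_excel('data.xlsx')            # Excel workbook
df_json = pd.read_json('https://example.com/feed.json')  # JSON from a URL
-----------------------------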

Here is the code I'd use to get SQL output to an Excel file.

-----------------------------
import pandas as pd
import pypyodbc

conn = pypyodbc.connect('dsn=odbc_datasource_name;uid=username;pwd=password')

sql = 'SELECT id, first_name, last_name, hire_date FROM library_name.table_name'

dataframe = pd.read_sql(sql, conn)
writer = pd.ExcelWriter('c:/temp/my_excel_file.xlsx',
                        engine='xlsxwriter',
                        date_format='mm/dd/yyyy')
dataframe.to_excel(writer, index=False)
writer.save()  # don't forget this, or the file never gets written
conn.close()
-----------------------------

I'm using pypyodbc on Windows/Linux boxes but you should be able to
substitute...

conn = ibm_db_dbi.connect(database='*LOCAL')

...as John mentioned, to run directly on the i if/when pandas is available
there.
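
I can't test it myself yet, but presumably the whole thing on the i would
then look something like this (same placeholder table as above; to_excel
still needs xlsxwriter or openpyxl installed):

-----------------------------
import pandas as pd
import ibm_db_dbi

# Connect to the local DB2 for i database, no DSN needed.
conn = ibm_db_dbi.connect(database='*LOCAL')
sql = 'SELECT id, first_name, last_name, hire_date FROM library_name.table_name'
dataframe = pd.read_sql(sql, conn)
dataframe.to_excel('/tmp/my_excel_file.xlsx', index=False)  # an IFS path this time
conn.close()
-----------------------------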

I'd love to be able to run this directly from IBM i, but until then I'll
be
running my python on Linux, connecting to the i for data via ODBC.

Good, free course on how to use pandas:
https://www.datacamp.com/community/tutorials/pandas-tutorial-dataframe-python

-Jim


On Wed, Mar 8, 2017 at 11:02 AM, Buck Calabro <kc2hiz@xxxxxxxxx> wrote:

On 3/7/2017 7:14 PM, John Yeung wrote:

Thanks for reporting! I have particular interest in hearing what it's
like for folks whose "native" computing environment is IBM midrange.

First of all, did you have a particular reason to choose Python 2
instead of Python 3? If I had to guess, it would be that you had
already played around a little with iSeriesPython, which is a Python
2, so that was the most natural (and scientific!) step.

I didn't even put _that_ much thought into it.

I have indeed played with iSeriesPython, and I found it very useful.
With this, I wanted to see how the 'as shipped' 5733-OPS would feel.
And so, it never occurred to me to even /try/ the PASE python3 command.
In fact I only knew I was using Python 2.7 because python --version
reported 2.7.11

Hm. OK, did you really use ibm_db (directly), or ibm_db_dbi? If ibm_db
directly, any particular reason?

I did, and my sole reason is ignorance as pure as the driven snow. The
Python that I use at home is crude and rude and I don't remember if I
use any databases at all there. So this is as classic a brute-force
learning experience as they come.

The reason I ask is that Python stipulates a standard programmer
interface to databases (you can Google it as PEP 249), and ibm_db_dbi
is meant as a kind of wrapper module which adheres to that standard.
"Naked" ibm_db is a somewhat lower-level and more DB2-specific
interface.
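
In other words, with ibm_db_dbi you get the standard connect/cursor/execute
shape. A minimal sketch, with a placeholder table name:

-----------------------------
import ibm_db_dbi

# PEP 249 style: connection -> cursor -> execute -> fetch.
conn = ibm_db_dbi.connect(database='*LOCAL')
cur = conn.cursor()
cur.execute('SELECT id, first_name FROM library_name.table_name')
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()
-----------------------------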

That's good to know, thanks!

I used the new DB2
service UDTF qsys2.parse_statement() to extract the column names to be
used as the heading row. For 'select *' statements, this returns zero
columns, but the ibm_db.columns() function will get the column names
from the table itself.

The Python standard database interface stipulates that cursor objects
should have an attribute (called "description") which gives you
exactly the columns that were retrieved, regardless of whether you
used the star.
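
For example (per PEP 249, the first field of each description entry is the
column name; the table is a placeholder again):

-----------------------------
import ibm_db_dbi

conn = ibm_db_dbi.connect(database='*LOCAL')
cur = conn.cursor()
cur.execute('SELECT * FROM library_name.table_name')
headings = [col[0] for col in cur.description]  # column names, star or not
print(headings)
-----------------------------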

Ooo, very nice. I sense changes in my future.

I was going to say that finding the documentation was kind of a pain,
but it's such a new domain to me that I can't judge until I've got used
to the new idiom.

Which documentation? Python in general? IBM's Python? ibm_db?
xlsxwriter?

Python has really nice docs. Very friendly to the 'brute force and
ignorance' learner :-)

XlsxWriter... so so for me. I sort of prefer the Javadoc-ish format
where I can easily drill down into the particulars. This doc is more
task oriented which would be great if I had any idea what task I wanted
to accomplish. I have a strong tendency to browse the doc API by API,
which might be excessive for most people. Might be. :-)

The worst was ibm_db. For starters, I didn't really grok the difference
between ibm_db and ibm_db_dbi, and it turned out that finding the API doc
for ibm_db (see above) was easier than finding it for ibm_db_dbi. Well,
that's because there /isn't/ any API doc for ibm_db_dbi. As you explained,
the dbi stuff is documented in PEP 249, so... I learnt another thing :-)

I would like to find out where, if anywhere, your experiences differ
from mine; and work toward improving the situation.

If I were in Utopia, the API doc would be a link on the home page of
both XlsxWriter and ibm_db. Once I get to the API doc, I'm happy with
the quality of the documentation itself.

Mixing CLP, Python, DB2, Excel, and the new DB2 services was very easy.
The only thing I wanted to do but didn't was to feed the exit code back
from Python to the CLP.

Interesting. I'm not sure how easy or useful that will be. I mean, I
understand why one would want to do it, and how it *ought to* be
useful. But I don't know if it works out that nicely in Python, and in
particular IBM's Python for PASE.
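
The Python half, at least, is straightforward; a sketch (run_report here is
just a made-up stand-in for the real work):

-----------------------------
import sys

def run_report():
    pass  # stand-in for the real work

try:
    run_report()
except Exception:
    sys.exit(1)  # nonzero exit status signals failure to the caller
sys.exit(0)
-----------------------------

On the CL side, I believe QSH will turn a nonzero status into an escape
message (QSH0005) if the QIBM_QSH_CMD_ESCAPE_MSG environment variable is
set, but I haven't tried that with 5733-OPS.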

A while ago, you expressed a bit of confusion over how to mentally fit
Python into your RPG-centric modules-and-functions style of program
architecture. I think you were hoping that Python would somehow be
another language for writing subprocedures.

Your understanding is 100%.

I suggested that it
doesn't really fit that model. You can do modules and functions within
Python, and Python can call out to other stuff (like CL or RPG) and
get results back. But you don't generally call into Python to get a
return value, except from other Python code. (And this is pretty much
the case for all the scripting languages.) Any "results" you get from
Python tend to be in the form of database changes, IFS files, data
across a socket, etc. In other words, as viewed from the outside
world, Python is usually operating by side-effect. But from within
Python, you are free to have almost whatever architecture you like.
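
As one concrete example of the "calling out" direction: from PASE Python
you can run a CL command through the PASE system utility (a sketch; I'd
double-check the output encoding before trusting it):

-----------------------------
import subprocess

# Call a CL command from PASE and capture whatever it prints.
out = subprocess.check_output(['system', 'DSPLIBL'])
print(out.decode('utf-8', errors='replace'))
-----------------------------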

Yes, well I'm more of a 'closed loop' person. When I go diving I'm
really sure the dive shop filled my tank, and I'm really sure my
regulator and BCD and drysuit all work as intended, and that the
batteries in my lights are good. But I always check these things before
getting into the water.

A quaint story :-) but the point is that I am an IBM i programmer coming
to GNU. My brain has decades of 'if a program goes awry, it throws an
exception and the caller needs to deal with it'. I don't expect that
PASE or Python will go casters up; the far more likely scenario is an
authority issue - the user profile isn't authorised to the table it's
trying to read.

If I were doing this in RPG/CL, I'd have the RPG program send a *ESCAPE
message to *CALLER. The CLP would do a MONMSG and take the appropriate
action, which might even be sending a *ESCAPE message to *CALLER.

Altogether, quite a pleasant experience. I really like the short
feedback loop of an interpreted language.

Me too. I look forward to hearing more.

Oh, quick question: What are you using to edit your Python source? RDi?

RDi with the PyDev plug-in. The current PyDev (5.5) won't run on Luna
:-( so I'm using PyDev 4.5.5 instead. I managed to upload the lot to
Github this morning. https://github.com/starbuck5250/sql_xlsx This is
a technology demonstrator, and I don't know if it'll ever be useful
enough to be put into production. But as a very first attempt, I was
pleased with how quickly the working code came together.

--
--buck

Try wiki.midrange.com!
