pysqllike documentation


Links: pypi, github, documentation, wheel, README, blog, l-issues-todolist

What is it?

Writing a map/reduce job (with PIG, for example) usually requires switching from local files to remote files stored on Hadoop. One way to work is to extract a small sample of the data the job will process, develop the job locally against that sample, and, once it works, run it in a parallelized environment.

The goal of this extension is to allow writing such a job with Python syntax, as follows:

def myjob(input):
    iter = input.select(input.age, input.nom, age2=input.age * input.age)
    wher = iter.where((iter.age > 60).Or(iter.age < 25))
    return wher

input = IterRow(None, [{"nom": 10}, {"jean": 40}])
output = myjob(input)
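To make the semantics of the job concrete, here is a minimal stand-in written with plain Python lists and dicts. This is not pysqllike's actual IterRow implementation, only a sketch of what the select/where pipeline computes on an in-memory sample; the column names and sample values are illustrative.

```python
# Simplified stand-in for the job above: "select" builds the projected
# rows with a computed column, "where" filters them.
def myjob(rows):
    # select age and nom, and add a computed column age2 = age * age
    selected = ({"age": r["age"], "nom": r["nom"], "age2": r["age"] * r["age"]}
                for r in rows)
    # where: keep rows with age > 60 or age < 25
    return [r for r in selected if r["age"] > 60 or r["age"] < 25]

# hypothetical sample extracted from the full dataset
sample = [{"nom": "anne", "age": 70},
          {"nom": "jean", "age": 40},
          {"nom": "paul", "age": 20}]
print(myjob(sample))
```

Running it keeps only the first and last rows, exactly what the FILTER clause of the PIG version below selects.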

When the job is ready, it can be translated into a PIG job:

input = LOAD '...' USING PigStorage('\t') AS (nom, age);
iter = FOREACH input GENERATE age, nom, age*age AS age2 ;
wher = FILTER iter BY age > 60 or age < 25 ;
STORE wher INTO '...' USING PigStorage();

The same job should eventually be translatable into SQL as well.
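For reference, a SQL translation of the job would look like the query below, shown here running against an in-memory SQLite database. The table name and sample data are illustrative assumptions; pysqllike does not generate this code yet.

```python
import sqlite3

# Hypothetical SQL equivalent of the Python/PIG job above, executed on
# an in-memory SQLite database with made-up sample rows.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE input (nom TEXT, age INTEGER)")
con.executemany("INSERT INTO input VALUES (?, ?)",
                [("anne", 70), ("jean", 40), ("paul", 20)])

# SELECT mirrors FOREACH ... GENERATE, WHERE mirrors FILTER ... BY
rows = con.execute(
    "SELECT age, nom, age * age AS age2 FROM input "
    "WHERE age > 60 OR age < 25").fetchall()
print(rows)  # [(70, 'anne', 4900), (20, 'paul', 400)]
con.close()
```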


pip install pysqllike


  • not yet ready

Quick start