XD blog

big data, data skew, map reduce, programming


2014-05-02 Map / Reduce

I sometimes wish bugs would let me go. I'm working on a map/reduce algorithm. If the problem were smaller, I would not hesitate to avoid map/reduce, but it is not. However fast the algorithm could be, I find it very slow, almost impossible to run. So I polish. I wake up in the morning and a detail strikes me. Shit, shit, shit, I should have written it that way. It works on a small sample, but a small sample is blind to many mistakes. I have seen this algorithm working in my head from the beginning. But I missed this case, as if one train could hide another and kill me as I cross the tracks. That kind of stuff always happens with big data. Pruning, scaling...

Most of the time, the issue comes from skewed data. Skewness is a way of saying that the data is not uniformly distributed: a few keys account for a disproportionate share of the rows. Precisely, when reducing a table or joining two tables, we group the rows sharing the same key (as we do with a GROUP BY or a JOIN in SQL), so every row for a given key lands on the same reducer, and a very frequent key overloads it while the other reducers sit idle. People do research on it: A Study of Skew in MapReduce Applications. But let's see what it means on an example: notebook on Reduce skew data.
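To make it concrete, here is a minimal sketch (made-up data, not the notebook's code) of what skew does to a reduce: one key owns most of the rows, so the reducer assigned to it receives most of the work.

    # Simulate a skewed key distribution: key 0 holds ~90% of the rows.
    from collections import Counter
    import random

    random.seed(0)
    keys = [0 if random.random() < 0.9 else random.randint(1, 9)
            for _ in range(100000)]

    # In a reduce (GROUP BY key), every row sharing a key goes to the
    # same reducer, so per-key counts are per-reducer workloads.
    load = Counter(keys)
    for key, rows in sorted(load.items()):
        print(f"reducer for key {key}: {rows:6d} rows")

    # Key 0 gets ~90,000 rows; keys 1-9 get ~1,100 each. The whole job
    # finishes only when its most loaded reducer does.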


Xavier Dupré