MapReduce is a programming model designed for processing large volumes of data in parallel by dividing the work into a set of independent tasks. MapReduce programs are written in a particular style influenced by functional programming constructs, specifically idioms for processing lists of data.
This post explains the nature of this programming model and how it can be used to write programs that run in the Hadoop environment.
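As a rough illustration of the list-processing idioms MapReduce borrows from functional programming, consider the sketch below. It uses plain Java streams (not the Hadoop API) to show the two phases: a *map* step that transforms each element of a list independently, and a *reduce* step that combines the mapped values into a single result. The word list and the length computation are illustrative choices, not part of any Hadoop program.

```java
import java.util.List;

public class MapReduceIdiom {
    public static void main(String[] args) {
        List<String> words = List.of("hadoop", "map", "reduce");

        // Map phase: apply a function to each element independently.
        // Because elements are independent, this step parallelizes naturally.
        // Reduce phase: fold the mapped values into one aggregate result.
        int totalLength = words.stream()
                .map(String::length)      // "hadoop" -> 6, "map" -> 3, "reduce" -> 6
                .reduce(0, Integer::sum); // 0 + 6 + 3 + 6

        System.out.println(totalLength); // prints 15
    }
}
```

Hadoop applies the same idea at cluster scale: the map function runs in parallel across many machines, each processing its own slice of the input, and the reduce function aggregates the intermediate results.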