Sharing Classic MapReduce Examples
Example 1: computing each student's average score

Input file math:
张三 99
李四 90
王五 90
赵六 60

Input file china:
张三 79
李四 75
王五 80
赵六 90

Input file english:
张三 89
李四 75
王五 70
赵六 90

Analysis: in the map phase, emit the student's name as the key and the score as the value. The reduce phase then receives all scores grouped by student, e.g. key: 张三, values: {99, 79, 89}, and computes the average of those scores in reduce.

Implementation:

package com.bwzy.hadoop;
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
public class AvgSorce extends Configured implements Tool {

    // Mapper: each input line has the form "name score"; emit the student's name
    // as the key and the score as the value.
    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                String strName = tokenizer.nextToken();
                String strSorce = tokenizer.nextToken();
                context.write(new Text(strName), new IntWritable(Integer.parseInt(strSorce)));
            }
        }
    }

    // Reducer: all scores of one student arrive together; sum and count them,
    // then write the integer average.
    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            int num = 0;
            for (IntWritable sorce : values) {
                sum += sorce.get();
                num++;
            }
            context.write(key, new IntWritable(sum / num));
        }
    }

    @Override
    public int run(String[] arg0) throws Exception {
        Job job = new Job(getConf());
        // Ship the jar that contains this class so the cluster can find Map/Reduce.
        job.setJarByClass(AvgSorce.class);
        job.setJobName("AvgSorce");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(Map.class);
        // The reducer is NOT reused as a combiner: averaging partial groups of
        // scores on the map side would distort the final average.
        // job.setCombinerClass(Reduce.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.setInputPaths(job, new Path(arg0[0]));
        FileOutputFormat.setOutputPath(job, new Path(arg0[1]));
        boolean success = job.waitForCompletion(true);
        return success ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int ret = ToolRunner.run(new AvgSorce(), args);
        System.exit(ret);
    }
}
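To try the job, package the class into a jar and pass the input and output directories as the two arguments that run() hands to FileInputFormat/FileOutputFormat. The jar name and HDFS paths below are only placeholders for illustration, and this assumes the three score files have already been uploaded to the input directory:

hadoop fs -put math china english /user/hadoop/score_in
hadoop jar avgsorce.jar com.bwzy.hadoop.AvgSorce /user/hadoop/score_in /user/hadoop/score_out
hadoop fs -cat /user/hadoop/score_out/part-r-00000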
Expected output (each student's average score):

张三 89
李四 80
王五 80
赵六 80