Hadoop Reading Notes (5): MapReduce Word Count Demo


Hadoop Reading Notes (1) Introduction to Hadoop: http://blog.csdn.net/caicongyang/article/details/39898629

Hadoop Reading Notes (2) HDFS Shell Operations: http://blog.csdn.net/caicongyang/article/details/41253927

Hadoop Reading Notes (3) Operating HDFS with the Java API: http://blog.csdn.net/caicongyang/article/details/41290955

Hadoop Reading Notes (4) HDFS Architecture: http://blog.csdn.net/caicongyang/article/details/41322649


1. Demo Overview

Goal: count the number of times each word appears in a piece of text.

Steps (a hand-traced example follows this list):

1.1 Read the input file from HDFS. Each line is parsed into a <k,v> pair, and the map function is called once per pair.

1.2 Override map() to receive the <k,v> pairs from 1.1, process them, and emit new <k,v> pairs.

1.3 Partition the <k,v> pairs emitted in 1.2. By default there is a single partition.

1.4 Within each partition, sort the data (by k) and group it. Grouping means gathering the values of identical keys into one collection.

1.5 (Optional) Combine the grouped data locally.

2.1 The outputs of the map tasks are copied over the network to the corresponding reduce nodes, partition by partition.

2.2 Merge and sort the outputs of the map tasks. Override reduce() to receive the grouped data, apply your own business logic, and emit new <k,v> pairs.

2.3 Write the <k,v> output of reduce to HDFS.
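
For example, take a hypothetical two-line input file /hello whose words are tab-separated (matching the split("\t") in the code below):

hello	world
hello	hadoop

Step 1.1 turns this into <0, "hello\tworld"> and <12, "hello\thadoop"> (the key is each line's byte offset). Step 1.2 (map) emits <hello,1>, <world,1>, <hello,1>, <hadoop,1>. Sorting and grouping (steps 1.4/2.2) yield <hadoop,{1}>, <hello,{1,1}>, <world,{1}>; reduce sums each value list, and step 2.3 writes hadoop 1, hello 2, world 1 to HDFS.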

2. Code

WordCount.java
package mapReduce;

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.mapreduce.lib.partition.HashPartitioner;


public class WordCount {
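	// NOTE: these paths point at one specific NameNode (192.168.80.100:9000); adjust them for your own cluster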
	private static final String INPUT_PATH="hdfs://192.168.80.100:9000/hello";
	private static final String OUT_PATH="hdfs://192.168.80.100:9000/out";
	public static void main(String[] args) throws Exception {
		Configuration conf = new Configuration();
		FileSystem fileSystem = FileSystem.get(new URI(INPUT_PATH),conf);
		Path outPath = new Path(OUT_PATH);
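		// The job will not start if the output directory already exists, so delete it first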
		if(fileSystem.exists(outPath)){
			fileSystem.delete(outPath,true);
		}
		// Create the job
		Job job = new Job(conf, WordCount.class.getSimpleName());
		// Ship this class's jar to the cluster so the tasks can find it
		job.setJarByClass(WordCount.class);
		// 1.1 Read the specified input file
		FileInputFormat.setInputPaths(job, INPUT_PATH);
		// Input format class: parses each line into a <byte offset, line text> pair
		job.setInputFormatClass(TextInputFormat.class);
		
		// 1.2 Specify the custom Mapper class
		job.setMapperClass(MyMapper.class);
		// Specify the map output types
		job.setMapOutputKeyClass(Text.class);
		job.setMapOutputValueClass(LongWritable.class);
		
		// 1.3 Partitioning (HashPartitioner is the default)
		job.setPartitionerClass(HashPartitioner.class);
		// Set the number of reduce tasks (one output partition per reducer)
		job.setNumReduceTasks(1);
		
		// 1.4 Sorting and grouping are handled by the framework
		
		// 1.5 Combining is optional and left unset here (see the note after the code)
		
		// 2.2 Specify the custom Reducer class
		job.setReducerClass(MyReducer.class);
		
		// Specify the final output types
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(LongWritable.class);
		
		// Output path
		FileOutputFormat.setOutputPath(job, new Path(OUT_PATH));
		
		// Output format class
		job.setOutputFormatClass(TextOutputFormat.class);
		
		// Hand the job to the JobTracker and wait for it to finish
		job.waitForCompletion(true);
	}
	
	
	static class MyReducer extends Reducer<Text, LongWritable, Text, LongWritable>{
		@Override
		protected void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {
			// Sum the counts emitted for this word
			long count = 0L;
			for (LongWritable times : values) {
				count += times.get();
			}
			context.write(key, new LongWritable(count));
		}
		
	}
	
	static class MyMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
		@Override
		protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
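			// NOTE: the demo input file is assumed to be tab-separated; adjust the split pattern for other formats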
			String[] words = value.toString().split("\t");
			for (String word : words) {
				context.write(new Text(word), new LongWritable(1));
			}
		}
	}
	
}
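
A note on step 1.5: the demo leaves the combiner unset. Because summing counts is associative and commutative, the reducer class itself can double as the combiner and shrink the data shuffled to the reduce side; a minimal sketch, added before job.setReducerClass():

		// Optional step 1.5: pre-aggregate each map task's local output before the shuffle
		job.setCombinerClass(MyReducer.class);

Once the class is packaged into a jar (the name wordcount.jar here is hypothetical), submit the job with "hadoop jar wordcount.jar mapReduce.WordCount"; the result lands in /out/part-r-00000 on HDFS.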




Everyone is welcome to discuss and learn together!

Take whatever is useful to you!

Recording and sharing help us grow together! Feel free to check out my other posts at http://blog.csdn.net/caicongyang









