An Example of Custom Map/Reduce in Hive, in Java

Hive supports custom map and reduce scripts. Below I illustrate this with a simple word-count example.
If you write such scripts in plain Java yourself, you have to handle System.in, System.out, and all the key/value plumbing, which is tedious. Someone wrote a small framework that lets you use a style similar to Hadoop's map and reduce, so you only need to focus on the map and reduce logic. This framework now ships with Hive as $HIVE_HOME/lib/hive-contrib-2.3.0.jar; the contrib jar's name varies with the Hive version.
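The framework lives in the org.apache.hadoop.hive.contrib.mr package: a driver class, GenericMR, plus the callbacks you implement. As a rough, paraphrased sketch of what you code against (the method shapes match how they are used in the classes below):

// Paraphrased sketch of the org.apache.hadoop.hive.contrib.mr callbacks.
// GenericMR drives the loop: it reads stdin line by line, splits each line
// on tabs into a String[], and hands the columns to your Mapper or Reducer.
public interface Mapper {
    void map(String[] record, Output output) throws Exception;
}

public interface Reducer {
    // 'key' is the first column; 'records' iterates over the consecutive
    // rows sharing that key, so the input must already be sorted by key.
    void reduce(String key, Iterator<String[]> records, Output output) throws Exception;
}

public interface Output {
    // Writes one output row to stdout, columns joined by tabs.
    void collect(String[] record) throws Exception;
}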

IDE: IntelliJ IDEA
JDK: 1.7
Hive: 2.3.0
Hadoop: 2.8.1

### 1. Write the map and reduce classes

The map class:

import org.apache.hadoop.hive.contrib.mr.GenericMR;
import org.apache.hadoop.hive.contrib.mr.Mapper;
import org.apache.hadoop.hive.contrib.mr.Output;

public class WordCountMap {
    public static void main(String[] args) throws Exception {
        new GenericMR().map(System.in, System.out, new Mapper() {
            @Override
            public void map(String[] strings, Output output) throws Exception {
                for (String str : strings) {
                    // If the source file is tab-delimited, this extra split is not
                    // needed: 'strings' already holds the split columns of one line.
                    String[] words = str.split("\\W+");
                    for (String word : words) {
                        output.collect(new String[]{word, "1"});
                    }
                }
            }
        });
    }
}
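Because GenericMR just reads one stream and writes another, the mapper can be smoke-tested locally before Hive is involved. A minimal sketch, assuming GenericMR's InputStream/OutputStream overload behaves as used above (the test class name and sample input are made up for illustration):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

import org.apache.hadoop.hive.contrib.mr.GenericMR;
import org.apache.hadoop.hive.contrib.mr.Mapper;
import org.apache.hadoop.hive.contrib.mr.Output;

// Hypothetical local harness; not part of the job itself.
public class WordCountMapLocalTest {
    public static void main(String[] args) throws Exception {
        String input = "hello world\nhello hive\n";  // made-up sample lines
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        new GenericMR().map(new ByteArrayInputStream(input.getBytes()), out, new Mapper() {
            @Override
            public void map(String[] strings, Output output) throws Exception {
                for (String str : strings) {
                    for (String word : str.split("\\W+")) {
                        output.collect(new String[]{word, "1"});
                    }
                }
            }
        });
        // Expected: one tab-separated pair per word, i.e.
        // hello 1 / world 1 / hello 1 / hive 1
        System.out.print(out.toString());
    }
}

Equivalently, you can pipe a text file into java -cp wordcount.jar WordCountMap on the command line and inspect stdout; that is essentially the invocation Hive's USING clause performs.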

"reduce类
public class WordCountReducer {
public static void main(String args[]) throws Exception{
new GenericMR().reduce(System.in, System.out, new Reducer() {
@Override
public void reduce(String s, Iterator<String[]> iterator, Output output) throws Exception {
int sum=0;
while(iterator.hasNext()){
Integer count=Integer.valueOf(iterator.next()[1]);
sum+=count;
}
output.collect(new String[]{s,String.valueOf(sum)});
}
});
}
}
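Note that the reducer assumes its stdin is already sorted by key, so that equal keys are adjacent; in the Hive script below, the CLUSTER BY word clause is what provides this ordering. A matching local sketch with pre-sorted, made-up input (again a hypothetical harness):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.Iterator;

import org.apache.hadoop.hive.contrib.mr.GenericMR;
import org.apache.hadoop.hive.contrib.mr.Output;
import org.apache.hadoop.hive.contrib.mr.Reducer;

// Hypothetical local harness; not part of the job itself.
public class WordCountReduceLocalTest {
    public static void main(String[] args) throws Exception {
        // Mapper-style output, tab-separated and already sorted by key.
        String input = "hello\t1\nhello\t1\nhive\t1\nworld\t1\n";
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        new GenericMR().reduce(new ByteArrayInputStream(input.getBytes()), out, new Reducer() {
            @Override
            public void reduce(String key, Iterator<String[]> records, Output output) throws Exception {
                int sum = 0;
                while (records.hasNext()) {
                    sum += Integer.parseInt(records.next()[1]);
                }
                output.collect(new String[]{key, String.valueOf(sum)});
            }
        });
        System.out.print(out.toString());  // expected: hello 2, hive 1, world 1
    }
}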

### 2. Export the jar
Next, export a jar that bundles hive-contrib-2.3.0; suppose the exported jar is named wordcount.jar. In IntelliJ:
File -> Project Structure
Add an Artifact for the project.
Leave Main Class empty and just click OK.
Build the artifact to generate wordcount.jar.

### 3. Write the Hive SQL

drop table if exists raw_lines;

-- create table raw_lines and read all the lines under '/user/inputs', a path on your HDFS
create external table if not exists raw_lines(line string)
row format delimited
stored as textfile
location '/user/inputs';

drop table if exists word_count;

-- create the output table word_count, stored as a text file under '/user/outputs', a path on your HDFS
create external table if not exists word_count(word string, count int)
row format delimited
fields terminated by '\t'
lines terminated by '\n'
stored as textfile
location '/user/outputs/';

-- ship the jar containing the mapper & reducer as a resource; change your/local/path to your own
-- you must use "add file", not "add jar"; otherwise Hive cannot find the map and reduce main classes
add file your/local/path/wordcount.jar;

from (
  from raw_lines
  map raw_lines.line
  -- call the mapper here
  using 'java -cp wordcount.jar WordCountMap'
  as word, count
  cluster by word) map_output
insert overwrite table word_count
reduce map_output.word, map_output.count
-- call the reducer here
using 'java -cp wordcount.jar WordCountReducer'
as word, count;
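A note on the syntax: MAP ... USING and REDUCE ... USING are syntactic variants of Hive's SELECT TRANSFORM ... USING, and by themselves they do not force an actual map or reduce phase; it is the CLUSTER BY word clause in the subquery that triggers the shuffle and hands the reducer input sorted and grouped by word.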

Save this Hive SQL as wordcount.hql.

### 4. Run the Hive SQL

beeline -u [hiveserver] -n username -f wordcount.hql

Here [hiveserver] is your HiveServer2 JDBC URL (for example, jdbc:hive2://localhost:10000) and username is the user to connect as.

Briefly, here is how Hive's custom map and reduce works internally:
Hive reads the text file and feeds it, line by line, into the mapper's standard input. The user-defined map reads from standard input, processes each line, and writes the results to standard output in an agreed format (for example, "key\tvalue"). Hive then sorts the emitted lines and pipes them into the reducer's standard input; the reduce reads the sorted stream, does its aggregation, and writes to standard output. Finally, Hive takes the reducer's output and stores it into the table.
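To make the flow concrete, here is a made-up trace for a file containing the single line "hello world hello":
- Hive feeds the mapper the line: hello world hello
- The mapper writes to stdout, one tab-separated pair per line: hello 1, world 1, hello 1
- Hive sorts those lines by key: hello 1, hello 1, world 1
- The reducer reads the sorted stream and writes: hello 2, world 1
- Hive loads the reducer's output into the word_count table.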