Spark Streaming debugging tips
When a Spark Streaming job runs on YARN, each executor's stdout/stderr lands in the container log directory on the NodeManager host, e.g.:

[root@hadoop-3 ~]# ll /var/log/hadoop-yarn/container/application_1429701572510_0022/container_1429701572510_0022_01_000002/
total 932
-rw-r--r-- 1 yarn yarn 339015 Apr 29 11:53 stderr
-rw-r--r-- 1 yarn yarn 613851 Apr 29 11:53 stdout (the log output our code writes)
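If the cluster has log aggregation enabled (yarn.log-aggregation-enable=true), the same output can also be pulled after the application finishes with the yarn CLI, using the application id from the listing above:

[root@hadoop-3 ~]# yarn logs -applicationId application_1429701572510_0022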
userLog.foreachRDD(new Function2<JavaPairRDD<String, Iterable<String>>, Time, Void>() {
    @Override
    public Void call(JavaPairRDD<String, Iterable<String>> stringIterableJavaPairRDD, Time time) throws Exception {
        if (!stringIterableJavaPairRDD.partitions().isEmpty()) {
            stringIterableJavaPairRDD.foreachPartition(new VoidFunction<Iterator<Tuple2<String, Iterable<String>>>>() {
                @Override
                public void call(Iterator<Tuple2<String, Iterable<String>>> tuple2Iterator) throws Exception {
                    // initialize the HBase connection
                    HBaseConnectionFactory.init();
                    while (tuple2Iterator.hasNext()) {
                        // actual per-record processing logic
                    }
                }
            });
        }
        return null;
    }
});
The code above has a bug: every partition of every micro-batch calls init(), so a fresh HBase connection is created each time. The connections pile up until the process runs out of file descriptors and throws something like:

Caused by: java.net.SocketException: Too many open files

Add a guard before initializing:
userLog.foreachRDD(new Function2<JavaPairRDD<String, Iterable<String>>, Time, Void>() {
    @Override
    public Void call(JavaPairRDD<String, Iterable<String>> stringIterableJavaPairRDD, Time time) throws Exception {
        if (!stringIterableJavaPairRDD.partitions().isEmpty()) {
            stringIterableJavaPairRDD.foreachPartition(new VoidFunction<Iterator<Tuple2<String, Iterable<String>>>>() {
                @Override
                public void call(Iterator<Tuple2<String, Iterable<String>>> tuple2Iterator) throws Exception {
                    // only initialize when there is no live connection yet
                    if (HBaseConnectionFactory.getConnection() == null || HBaseConnectionFactory.getConnection().isClosed()) {
                        HBaseConnectionFactory.init();
                    }
                    while (tuple2Iterator.hasNext()) {
                        // actual per-record processing logic
                    }
                }
            });
        }
        return null;
    }
});
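HBaseConnectionFactory here is the author's own helper class. What follows is only a minimal sketch of what it might look like, assuming it wraps a single shared HBase Connection per executor JVM; the class and method names are taken from the snippets above, everything else is illustrative:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Hypothetical sketch of HBaseConnectionFactory: one lazily created,
// JVM-wide HBase Connection that is reused across batches and partitions.
public class HBaseConnectionFactory {
    private static volatile Connection connection;

    // idempotent: only creates a connection if none is open yet
    public static synchronized void init() throws IOException {
        if (connection == null || connection.isClosed()) {
            Configuration conf = HBaseConfiguration.create();
            connection = ConnectionFactory.createConnection(conf);
        }
    }

    public static Connection getConnection() {
        return connection;
    }
}

With the null/isClosed check pushed inside a synchronized init() like this, the guard in foreachPartition becomes optional, and concurrent tasks on the same executor can no longer race to create duplicate connections.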