Spark 0.9 Distributed Installation


Spark package: spark-0.9.0-incubating-bin-hadoop2.tgz

Operating system: CentOS 6.4

JDK version: jdk1.7.0_21

1. Cluster Mode

1.1 Install Hadoop

Create three CentOS virtual machines in VMware Workstation, set their hostnames to master, slaver01, and slaver02, configure passwordless SSH login, install Hadoop, and start the Hadoop cluster. See my earlier post, "hadoop-2.2.0 distributed installation," for the details.

1.2 Install Scala

Install Scala 2.10.3 on all three machines (Spark 0.9 is built against Scala 2.10), following the steps in my earlier Spark installation post. The JDK was already installed along with Hadoop. On the master node:

$ cd
$ scp -r scala-2.10.3 root@slaver01:~
$ scp -r scala-2.10.3 root@slaver02:~
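With only two workers the two scp commands are fine, but as the cluster grows a loop is less error-prone. The snippet below is my own sketch, not from the original steps; `push_scala_cmd` is a hypothetical helper that only prints each command so it can be reviewed before running:

```shell
# push_scala_cmd HOST — print the scp command that would copy the Scala
# directory to one worker. (Hypothetical helper; it does not run scp itself.)
push_scala_cmd() {
    echo "scp -r scala-2.10.3 root@$1:~"
}

# Review the generated commands, then run them (or pipe the loop to sh):
for host in slaver01 slaver02; do
    push_scala_cmd "$host"
done
```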

 

1.3 Install and Configure Spark on the Master

Unpack the tarball:

$ tar -zxf spark-0.9.0-incubating-bin-hadoop2.tgz
$ mv spark-0.9.0-incubating-bin-hadoop2 spark-0.9

In conf/spark-env.sh, set SCALA_HOME and the other variables below:

$ cd ~/spark-0.9/conf
$ mv spark-env.sh.template spark-env.sh
$ vi spark-env.sh
# add the following line
export SCALA_HOME=/root/scala-2.10.3
export JAVA_HOME=/usr/java/jdk1.7.0_21
export SPARK_MASTER_IP=192.168.159.129
export SPARK_WORKER_MEMORY=1000m
# save and exit
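It is easy to forget one of these exports before copying the configuration to the workers. The following sanity check is my own sketch (not a Spark-provided script); it greps a spark-env.sh file for each variable set above:

```shell
# check_spark_env FILE — report any expected export missing from a
# spark-env.sh file; returns non-zero if anything is absent.
# (Hypothetical helper, not part of Spark.)
check_spark_env() {
    local conf="$1" missing=0
    for var in SCALA_HOME JAVA_HOME SPARK_MASTER_IP SPARK_WORKER_MEMORY; do
        if ! grep -q "^export ${var}=" "$conf"; then
            echo "missing: ${var}"
            missing=1
        fi
    done
    return $missing
}

# usage: check_spark_env ~/spark-0.9/conf/spark-env.sh && echo "spark-env.sh ok"
```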

In conf/slaves, add the hostnames of the Spark workers, one per line.

$ vim slaves
slaver01
slaver02
master
# save and exit

(Optional) Set the SPARK_HOME environment variable and add $SPARK_HOME/bin to PATH:

$ vim /etc/profile
# add the following lines at the end
export SPARK_HOME=$HOME/spark-0.9
export PATH=$PATH:$SPARK_HOME/bin
# save and exit vim
# make the profile changes take effect immediately
$ source /etc/profile
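One caveat with appending to PATH in /etc/profile: sourcing it again, or nesting login shells, keeps adding the same directory. A generic guard, sketched below, makes the append idempotent; this is a common shell idiom rather than anything Spark requires:

```shell
# add_to_path DIR — append DIR to PATH only if it is not already there.
add_to_path() {
    case ":${PATH}:" in
        *":$1:"*) ;;                  # already present: nothing to do
        *) PATH="${PATH}:$1" ;;
    esac
}

add_to_path "$HOME/spark-0.9/bin"
add_to_path "$HOME/spark-0.9/bin"     # second call is a no-op
export PATH
```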

1.4 Install and Configure Spark on All Workers

Since the files on the master are already configured, simply copy them to all the workers. Note that the Spark directory must be at the same path on all three machines, because the master logs in to each worker and runs commands assuming the worker's Spark path is identical to its own.

$ cd
$ scp -r spark-0.9 root@slaver01:~
$ scp -r spark-0.9 root@slaver02:~

1.5 Start the Spark Cluster

On the master, run:

$ cd ~/spark-0.9
$ ./sbin/start-all.sh

Check that the processes have started:

[root@master ~]# jps
3200 SecondaryNameNode
7350 Master
7470 Worker
3025 NameNode
8022 Jps
3332 ResourceManager
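Rather than eyeballing the jps listing for Master and Worker, the check can be scripted. The helper below is a sketch of mine (not part of Spark or Hadoop) that scans jps-style output for both daemons:

```shell
# check_spark_daemons LISTING — succeed only if both a Master and a Worker
# appear in the given jps output. (Hypothetical helper.)
check_spark_daemons() {
    local listing="$1"
    for proc in Master Worker; do
        if ! echo "$listing" | grep -qw "$proc"; then
            echo "not running: $proc"
            return 1
        fi
    done
    echo "spark daemons running"
}

# usage on the master: check_spark_daemons "$(jps)"
```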
 

Browse to the master's web UI (http://master:8080 by default). You should now see all the worker nodes, along with their CPU counts, memory, and other details.


1.6 Run the Bundled Spark Examples

Run SparkPi:

$ cd ~/spark-0.9
$ ./bin/run-example org.apache.spark.examples.SparkPi spark://192.168.159.129:7077
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/spark-0.9/examples/target/scala-2.10/spark-examples_2.10-assembly-0.9.0-incubating.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/spark-0.9/assembly/target/scala-2.10/spark-assembly_2.10-0.9.0-incubating-hadoop2.2.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
0 [spark-akka.actor.default-dispatcher-3] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
89 [spark-akka.actor.default-dispatcher-2] INFO Remoting - Starting remoting
386 [spark-akka.actor.default-dispatcher-5] INFO Remoting - Remoting started; listening on addresses :[akka.tcp://spark@master:51617]
393 [spark-akka.actor.default-dispatcher-3] INFO Remoting - Remoting now listens on addresses: [akka.tcp://spark@master:51617]
451 [main] INFO org.apache.spark.SparkEnv - Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
451 [main] INFO org.apache.spark.SparkEnv - Registering BlockManagerMaster
519 [main] INFO org.apache.spark.storage.DiskBlockManager - Created local directory at /tmp/spark-local-20140208154038-05cb
524 [main] INFO org.apache.spark.storage.MemoryStore - MemoryStore started with capacity 148.5 MB.
556 [main] INFO org.apache.spark.network.ConnectionManager - Bound socket to port 55761 with id = ConnectionManagerId(master,55761)
563 [main] INFO org.apache.spark.storage.BlockManagerMaster - Trying to register BlockManager
568 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.storage.BlockManagerMasterActor$BlockManagerInfo - Registering block manager master:55761 with 148.5 MB RAM
569 [main] INFO org.apache.spark.storage.BlockManagerMaster - Registered BlockManager
598 [main] INFO org.apache.spark.HttpServer - Starting HTTP Server
680 [main] INFO org.eclipse.jetty.server.Server - jetty-7.x.y-SNAPSHOT
706 [main] INFO org.eclipse.jetty.server.AbstractConnector - Started SocketConnector@0.0.0.0:56100
707 [main] INFO org.apache.spark.broadcast.HttpBroadcast - Broadcast server started at http://192.168.159.129:56100
716 [main] INFO org.apache.spark.SparkEnv - Registering MapOutputTracker
723 [main] INFO org.apache.spark.HttpFileServer - HTTP File server directory is /tmp/spark-5f82de63-0061-4d3b-b0a7-156bda3935f9
723 [main] INFO org.apache.spark.HttpServer - Starting HTTP Server
723 [main] INFO org.eclipse.jetty.server.Server - jetty-7.x.y-SNAPSHOT
726 [main] INFO org.eclipse.jetty.server.AbstractConnector - Started SocketConnector@0.0.0.0:47225
990 [main] INFO org.eclipse.jetty.server.Server - jetty-7.x.y-SNAPSHOT
991 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - started o.e.j.s.h.ContextHandler{/storage/rdd,null}
991 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - started o.e.j.s.h.ContextHandler{/storage,null}
991 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - started o.e.j.s.h.ContextHandler{/stages/stage,null}
992 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - started o.e.j.s.h.ContextHandler{/stages/pool,null}
992 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - started o.e.j.s.h.ContextHandler{/stages,null}
992 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - started o.e.j.s.h.ContextHandler{/environment,null}
992 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - started o.e.j.s.h.ContextHandler{/executors,null}
992 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - started o.e.j.s.h.ContextHandler{/metrics/json,null}
992 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - started o.e.j.s.h.ContextHandler{/static,null}
992 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - started o.e.j.s.h.ContextHandler{/,null}
1003 [main] INFO org.eclipse.jetty.server.AbstractConnector - Started SelectChannelConnector@0.0.0.0:4040
1004 [main] INFO org.apache.spark.ui.SparkUI - Started Spark Web UI at http://master:4040
14/02/08 15:40:39 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
1861 [main] INFO org.apache.spark.SparkContext - Added JAR /root/spark-0.9/examples/target/scala-2.10/spark-examples_2.10-assembly-0.9.0-incubating.jar at http://192.168.159.129:47225/jars/spark-examples_2.10-assembly-0.9.0-incubating.jar with timestamp 1391845239903
2274 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.deploy.client.AppClient$ClientActor - Connecting to master spark://192.168.159.129:7077...
2586 [main] INFO org.apache.spark.SparkContext - Starting job: reduce at SparkPi.scala:39
2628 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Got job 0 (reduce at SparkPi.scala:39) with 2 output partitions (allowLocal=false)
2684 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Final stage: Stage 0 (reduce at SparkPi.scala:39)
2685 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Parents of final stage: List()
2724 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Missing parents: List()
2742 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Submitting Stage 0 (MappedRDD[1] at map at SparkPi.scala:35), which has no missing parents
3013 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Submitting 2 missing tasks from Stage 0 (MappedRDD[1] at map at SparkPi.scala:35)
3015 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Adding task set 0.0 with 2 tasks
3391 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend - Connected to Spark cluster with app ID app-20140208154041-0000
4273 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.deploy.client.AppClient$ClientActor - Executor added: app-20140208154041-0000/0 on worker-20140208153258-master-37524 (master:37524) with 1 cores
4274 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend - Granted executor ID app-20140208154041-0000/0 on hostPort master:37524 with 1 cores, 512.0 MB RAM
4277 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.deploy.client.AppClient$ClientActor - Executor added: app-20140208154041-0000/1 on worker-20140208153257-slaver01-49982 (slaver01:49982) with 1 cores
4277 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend - Granted executor ID app-20140208154041-0000/1 on hostPort slaver01:49982 with 1 cores, 512.0 MB RAM
4282 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.deploy.client.AppClient$ClientActor - Executor added: app-20140208154041-0000/2 on worker-20140208153257-slaver02-53193 (slaver02:53193) with 1 cores
4283 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend - Granted executor ID app-20140208154041-0000/2 on hostPort slaver02:53193 with 1 cores, 512.0 MB RAM
4526 [spark-akka.actor.default-dispatcher-15] INFO org.apache.spark.deploy.client.AppClient$ClientActor - Executor updated: app-20140208154041-0000/0 is now RUNNING
6497 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.deploy.client.AppClient$ClientActor - Executor updated: app-20140208154041-0000/1 is now RUNNING
6666 [spark-akka.actor.default-dispatcher-16] INFO org.apache.spark.deploy.client.AppClient$ClientActor - Executor updated: app-20140208154041-0000/2 is now RUNNING
18070 [Timer-0] WARN org.apache.spark.scheduler.TaskSchedulerImpl - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
27259 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend - Registered executor: Actor[akka.tcp://sparkExecutor@master:43274/user/Executor#-668520616] with ID 0
27279 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 0.0:0 as TID 0 on executor 0: master (PROCESS_LOCAL)
27287 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 0.0:0 as 1426 bytes in 7 ms
35953 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.storage.BlockManagerMasterActor$BlockManagerInfo - Registering block manager master:47550 with 297.0 MB RAM
57587 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.deploy.client.AppClient$ClientActor - Executor updated: app-20140208154041-0000/1 is now FAILED (Command exited with code 1)
57774 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend - Executor app-20140208154041-0000/1 removed: Command exited with code 1
58001 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.deploy.client.AppClient$ClientActor - Executor added: app-20140208154041-0000/3 on worker-20140208153257-slaver01-49982 (slaver01:49982) with 1 cores
58002 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend - Granted executor ID app-20140208154041-0000/3 on hostPort slaver01:49982 with 1 cores, 512.0 MB RAM
58002 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.deploy.client.AppClient$ClientActor - Executor updated: app-20140208154041-0000/3 is now RUNNING
59570 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.deploy.client.AppClient$ClientActor - Executor updated: app-20140208154041-0000/2 is now FAILED (Command exited with code 1)
59570 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend - Executor app-20140208154041-0000/2 removed: Command exited with code 1
59573 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.deploy.client.AppClient$ClientActor - Executor added: app-20140208154041-0000/4 on worker-20140208153257-slaver02-53193 (slaver02:53193) with 1 cores
59573 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend - Granted executor ID app-20140208154041-0000/4 on hostPort slaver02:53193 with 1 cores, 512.0 MB RAM
59699 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.deploy.client.AppClient$ClientActor - Executor updated: app-20140208154041-0000/4 is now RUNNING
63430 [spark-akka.actor.default-dispatcher-15] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 0.0:1 as TID 1 on executor 0: master (PROCESS_LOCAL)
63431 [spark-akka.actor.default-dispatcher-15] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 0.0:1 as 1426 bytes in 1 ms
63506 [Result resolver thread-0] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 0 in 36193 ms on master (progress: 0/2)
63599 [spark-akka.actor.default-dispatcher-15] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(0, 0)
63655 [Result resolver thread-1] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 1 in 224 ms on master (progress: 1/2)
63655 [spark-akka.actor.default-dispatcher-15] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(0, 1)
63745 [spark-akka.actor.default-dispatcher-15] INFO org.apache.spark.scheduler.DAGScheduler - Stage 0 (reduce at SparkPi.scala:39) finished in 60.628 s
63749 [Result resolver thread-1] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Remove TaskSet 0.0 from pool
63831 [main] INFO org.apache.spark.SparkContext - Job finished: reduce at SparkPi.scala:39, took 61.24527294 s
Pi is roughly 3.144
64266 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - stopped o.e.j.s.h.ContextHandler{/,null}
64266 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - stopped o.e.j.s.h.ContextHandler{/static,null}
64267 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - stopped o.e.j.s.h.ContextHandler{/metrics/json,null}
64267 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - stopped o.e.j.s.h.ContextHandler{/executors,null}
64267 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - stopped o.e.j.s.h.ContextHandler{/environment,null}
64267 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - stopped o.e.j.s.h.ContextHandler{/stages,null}
64267 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - stopped o.e.j.s.h.ContextHandler{/stages/pool,null}
64267 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - stopped o.e.j.s.h.ContextHandler{/stages/stage,null}
64267 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - stopped o.e.j.s.h.ContextHandler{/storage,null}
64267 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - stopped o.e.j.s.h.ContextHandler{/storage/rdd,null}
64480 [main] INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend - Shutting down all executors
64484 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend - Asking each executor to shut down
65576 [spark-akka.actor.default-dispatcher-16] ERROR akka.remote.EndpointWriter - AssociationError [akka.tcp://spark@master:51617] <- [akka.tcp://sparkExecutor@master:43274]: Error [Shut down address: akka.tcp://sparkExecutor@master:43274] [
akka.remote.ShutDownAssociation: Shut down address: akka.tcp://sparkExecutor@master:43274
Caused by: akka.remote.transport.Transport$InvalidAssociationException: The remote system terminated the association because it is shutting down.
]
65853 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.MapOutputTrackerMasterActor - MapOutputTrackerActor stopped!
65947 [connection-manager-thread] INFO org.apache.spark.network.ConnectionManager - Selector thread was interrupted!
65989 [main] INFO org.apache.spark.network.ConnectionManager - ConnectionManager stopped
66015 [main] INFO org.apache.spark.storage.MemoryStore - MemoryStore cleared
66017 [main] INFO org.apache.spark.storage.BlockManager - BlockManager stopped
66017 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.storage.BlockManagerMasterActor - Stopping BlockManagerMaster
66019 [main] INFO org.apache.spark.storage.BlockManagerMaster - BlockManagerMaster stopped
66024 [spark-akka.actor.default-dispatcher-14] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
66024 [spark-akka.actor.default-dispatcher-14] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
66033 [main] INFO org.apache.spark.SparkContext - Successfully stopped SparkContext 
[root@master spark-0.9]#
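The actual result ("Pi is roughly 3.144") is buried in the log output above. A one-line filter pulls it out; this is a sketch of mine, relying only on the "Pi is roughly" prefix that SparkPi prints:

```shell
# extract_result — keep only SparkPi's final answer from a noisy log stream.
extract_result() {
    grep '^Pi is roughly'
}

# usage:
#   ./bin/run-example org.apache.spark.examples.SparkPi spark://192.168.159.129:7077 2>&1 | extract_result
```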
 

Run SparkLR (logistic regression):

[root@master spark-0.9]# ./bin/run-example org.apache.spark.examples.SparkLR spark://192.168.159.129:7077
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/spark-0.9/examples/target/scala-2.10/spark-examples_2.10-assembly-0.9.0-incubating.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/spark-0.9/assembly/target/scala-2.10/spark-assembly_2.10-0.9.0-incubating-hadoop2.2.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
0 [spark-akka.actor.default-dispatcher-4] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
95 [spark-akka.actor.default-dispatcher-4] INFO Remoting - Starting remoting
441 [spark-akka.actor.default-dispatcher-2] INFO Remoting - Remoting started; listening on addresses :[akka.tcp://spark@master:59496]
441 [spark-akka.actor.default-dispatcher-2] INFO Remoting - Remoting now listens on addresses: [akka.tcp://spark@master:59496]
494 [main] INFO org.apache.spark.SparkEnv - Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
494 [main] INFO org.apache.spark.SparkEnv - Registering BlockManagerMaster
565 [main] INFO org.apache.spark.storage.DiskBlockManager - Created local directory at /tmp/spark-local-20140208155527-140e
569 [main] INFO org.apache.spark.storage.MemoryStore - MemoryStore started with capacity 148.5 MB.
607 [main] INFO org.apache.spark.network.ConnectionManager - Bound socket to port 46771 with id = ConnectionManagerId(master,46771)
kend - Registered executor: Actor[akka.tcp://sparkExecutor@slaver02:37832/user/Executor#1651743406] with ID 3
45103 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend - Executor 1 disconnected, so removing it
45103 [spark-akka.actor.default-dispatcher-4] ERROR org.apache.spark.scheduler.TaskSchedulerImpl - Lost executor 1 on slaver01: remote Akka client disassociated
45103 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.TaskSetManager - Re-queueing tasks for 1 from TaskSet 0.0
45104 [spark-akka.actor.default-dispatcher-4] WARN org.apache.spark.scheduler.TaskSetManager - Lost TID 1 (task 0.0:0)
45104 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 0.0:0 as TID 3 on executor 3: slaver02 (PROCESS_LOCAL)
45105 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.DAGScheduler - Executor lost: 1 (epoch 1)
45105 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.storage.BlockManagerMasterActor - Trying to remove executor 1 from BlockManagerMaster.
45106 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.storage.BlockManagerMaster - Removed 1 successfully in removeExecutor
45433 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 0.0:0 as 551899 bytes in 329 ms
45453 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.deploy.client.AppClient$ClientActor - Executor updated: app-20140208155019-0001/1 is now FAILED (Command exited with code 1)
45453 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend - Executor app-20140208155019-0001/1 removed: Command exited with code 1
45468 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.deploy.client.AppClient$ClientActor - Executor added: app-20140208155019-0001/5 on worker-20140208153257-slaver01-49982 (slaver01:49982) with 1 cores
45469 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend - Granted executor ID app-20140208155019-0001/5 on hostPort slaver01:49982 with 1 cores, 512.0 MB RAM
45472 [spark-akka.actor.default-dispatcher-5] ERROR akka.remote.EndpointWriter - AssociationError [akka.tcp://spark@master:52897] -> [akka.tcp://sparkExecutor@slaver01:57904]: Error [Association failed with [akka.tcp://sparkExecutor@slaver01:57904]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkExecutor@slaver01:57904]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: slaver01/192.168.159.130:57904
]
45483 [spark-akka.actor.default-dispatcher-4] ERROR akka.remote.EndpointWriter - AssociationError [akka.tcp://spark@master:52897] -> [akka.tcp://sparkExecutor@slaver01:57904]: Error [Association failed with [akka.tcp://sparkExecutor@slaver01:57904]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkExecutor@slaver01:57904]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: slaver01/192.168.159.130:57904
]
45780 [spark-akka.actor.default-dispatcher-5] ERROR akka.remote.EndpointWriter - AssociationError [akka.tcp://spark@master:52897] -> [akka.tcp://sparkExecutor@slaver01:57904]: Error [Association failed with [akka.tcp://sparkExecutor@slaver01:57904]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkExecutor@slaver01:57904]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: slaver01/192.168.159.130:57904
]
45781 [spark-akka.actor.default-dispatcher-15] INFO org.apache.spark.deploy.client.AppClient$ClientActor - Executor updated: app-20140208155019-0001/5 is now RUNNING
46589 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.storage.BlockManagerMasterActor$BlockManagerInfo - Added rdd_0_1 in memory on master:44542 (size: 717.5 KB, free: 296.3 MB)
46762 [Result resolver thread-0] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 2 in 20510 ms on master (progress: 0/2)
46767 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(0, 1)
50080 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend - Registered executor: Actor[akka.tcp://sparkExecutor@slaver01:60056/user/Executor#1166039681] with ID 5
54828 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.storage.BlockManagerMasterActor$BlockManagerInfo - Registering block manager slaver01:43329 with 297.0 MB RAM
62254 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.storage.BlockManagerMasterActor$BlockManagerInfo - Registering block manager slaver02:55569 with 297.0 MB RAM
89782 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.storage.BlockManagerMasterActor$BlockManagerInfo - Added rdd_0_0 in memory on slaver02:55569 (size: 717.5 KB, free: 296.3 MB)
90043 [Result resolver thread-1] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 3 in 44939 ms on slaver02 (progress: 1/2)
90043 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(0, 0)
90061 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Stage 0 (reduce at SparkLR.scala:64) finished in 84.257 s
90066 [Result resolver thread-1] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Remove TaskSet 0.0 from pool
90075 [main] INFO org.apache.spark.SparkContext - Job finished: reduce at SparkLR.scala:64, took 84.881569465 s
On iteration 2
90100 [main] INFO org.apache.spark.SparkContext - Starting job: reduce at SparkLR.scala:64
90100 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Got job 1 (reduce at SparkLR.scala:64) with 2 output partitions (allowLocal=false)
90100 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Final stage: Stage 1 (reduce at SparkLR.scala:64)
90101 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Parents of final stage: List()
90103 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Missing parents: List()
90104 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Submitting Stage 1 (MappedRDD[2] at map at SparkLR.scala:62), which has no missing parents
90164 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Submitting 2 missing tasks from Stage 1 (MappedRDD[2] at map at SparkLR.scala:62)
90165 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Adding task set 1.0 with 2 tasks
90169 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 1.0:1 as TID 4 on executor 4: master (PROCESS_LOCAL)
90191 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 1.0:1 as 551895 bytes in 21 ms
90194 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 1.0:0 as TID 5 on executor 3: slaver02 (PROCESS_LOCAL)
90220 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 1.0:0 as 551895 bytes in 25 ms
91222 [Result resolver thread-2] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 4 in 1053 ms on master (progress: 0/2)
91224 [spark-akka.actor.default-dispatcher-16] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(1, 1)
91609 [Result resolver thread-3] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 5 in 1415 ms on slaver02 (progress: 1/2)
91610 [spark-akka.actor.default-dispatcher-16] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(1, 0)
91610 [spark-akka.actor.default-dispatcher-16] INFO org.apache.spark.scheduler.DAGScheduler - Stage 1 (reduce at SparkLR.scala:64) finished in 1.437 s
91611 [main] INFO org.apache.spark.SparkContext - Job finished: reduce at SparkLR.scala:64, took 1.510467278 s
On iteration 3
91616 [Result resolver thread-3] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Remove TaskSet 1.0 from pool
91622 [main] INFO org.apache.spark.SparkContext - Starting job: reduce at SparkLR.scala:64
91623 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Got job 2 (reduce at SparkLR.scala:64) with 2 output partitions (allowLocal=false)
91623 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Final stage: Stage 2 (reduce at SparkLR.scala:64)
91623 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Parents of final stage: List()
91624 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Missing parents: List()
91624 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Submitting Stage 2 (MappedRDD[3] at map at SparkLR.scala:62), which has no missing parents
91777 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Submitting 2 missing tasks from Stage 2 (MappedRDD[3] at map at SparkLR.scala:62)
91777 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Adding task set 2.0 with 2 tasks
91779 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 2.0:1 as TID 6 on executor 4: master (PROCESS_LOCAL)
91899 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 2.0:1 as 551896 bytes in 119 ms
91899 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 2.0:0 as TID 7 on executor 3: slaver02 (PROCESS_LOCAL)
91922 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 2.0:0 as 551896 bytes in 23 ms
92290 [Result resolver thread-0] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 6 in 511 ms on master (progress: 0/2)
92291 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(2, 1)
92694 [Result resolver thread-1] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 7 in 794 ms on slaver02 (progress: 1/2)
92694 [Result resolver thread-1] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Remove TaskSet 2.0 from pool
92694 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(2, 0)
92695 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.DAGScheduler - Stage 2 (reduce at SparkLR.scala:64) finished in 0.913 s
92695 [main] INFO org.apache.spark.SparkContext - Job finished: reduce at SparkLR.scala:64, took 1.072671482 s
On iteration 4
92704 [main] INFO org.apache.spark.SparkContext - Starting job: reduce at SparkLR.scala:64
92704 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Got job 3 (reduce at SparkLR.scala:64) with 2 output partitions (allowLocal=false)
92704 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Final stage: Stage 3 (reduce at SparkLR.scala:64)
92704 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Parents of final stage: List()
92707 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Missing parents: List()
92707 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Submitting Stage 3 (MappedRDD[4] at map at SparkLR.scala:62), which has no missing parents
92734 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.DAGScheduler - Submitting 2 missing tasks from Stage 3 (MappedRDD[4] at map at SparkLR.scala:62)
92734 [spark-akka.actor.default-dispatcher-3] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Adding task set 3.0 with 2 tasks
92736 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 3.0:1 as TID 8 on executor 4: master (PROCESS_LOCAL)
92759 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 3.0:1 as 551899 bytes in 23 ms
92760 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 3.0:0 as TID 9 on executor 3: slaver02 (PROCESS_LOCAL)
92789 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 3.0:0 as 551899 bytes in 28 ms
93091 [Result resolver thread-2] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 8 in 356 ms on master (progress: 0/2)
93092 [spark-akka.actor.default-dispatcher-13] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(3, 1)
96638 [Result resolver thread-3] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 9 in 3878 ms on slaver02 (progress: 1/2)
96638 [Result resolver thread-3] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Remove TaskSet 3.0 from pool
96639 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(3, 0)
96639 [spark-akka.actor.default-dispatcher-2] INFO org.apache.spark.scheduler.DAGScheduler - Stage 3 (reduce at SparkLR.scala:64) finished in 3.899 s
96639 [main] INFO org.apache.spark.SparkContext - Job finished: reduce at SparkLR.scala:64, took 3.935444196 s
On iteration 5
96646 [main] INFO org.apache.spark.SparkContext - Starting job: reduce at SparkLR.scala:64
96646 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.DAGScheduler - Got job 4 (reduce at SparkLR.scala:64) with 2 output partitions (allowLocal=false)
96646 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.DAGScheduler - Final stage: Stage 4 (reduce at SparkLR.scala:64)
96646 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.DAGScheduler - Parents of final stage: List()
96649 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.DAGScheduler - Missing parents: List()
96649 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.DAGScheduler - Submitting Stage 4 (MappedRDD[5] at map at SparkLR.scala:62), which has no missing parents
96677 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.DAGScheduler - Submitting 2 missing tasks from Stage 4 (MappedRDD[5] at map at SparkLR.scala:62)
96677 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Adding task set 4.0 with 2 tasks
96678 [spark-akka.actor.default-dispatcher-16] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 4.0:1 as TID 10 on executor 4: master (PROCESS_LOCAL)
96702 [spark-akka.actor.default-dispatcher-16] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 4.0:1 as 551896 bytes in 24 ms
96703 [spark-akka.actor.default-dispatcher-16] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 4.0:0 as TID 11 on executor 3: slaver02 (PROCESS_LOCAL)
96726 [spark-akka.actor.default-dispatcher-16] INFO org.apache.spark.scheduler.TaskSetManager - Serialized task 4.0:0 as 551896 bytes in 23 ms
97554 [Result resolver thread-0] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 10 in 876 ms on master (progress: 0/2)
97555 [spark-akka.actor.default-dispatcher-4] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(4, 1)
97810 [Result resolver thread-1] INFO org.apache.spark.scheduler.TaskSetManager - Finished TID 11 in 1108 ms on slaver02 (progress: 1/2)
97811 [Result resolver thread-1] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Remove TaskSet 4.0 from pool
97811 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.scheduler.DAGScheduler - Completed ResultTask(4, 0)
97811 [spark-akka.actor.default-dispatcher-14] INFO org.apache.spark.scheduler.DAGScheduler - Stage 4 (reduce at SparkLR.scala:64) finished in 1.130 s
97811 [main] INFO org.apache.spark.SparkContext - Job finished: reduce at SparkLR.scala:64, took 1.165505777 s
Final w: (5816.075967498865, 5222.008066011391, 5754.751978607454, 3853.1772062206846, 5593.565827145932, 5282.387874201054, 3662.9216051953435, 4890.78210340607, 4223.371512250292, 5767.368579668863)
[root@master spark-0.9]#

1.7 Read a file from HDFS and run WordCount

$ cd ~/spark-0.9
$ MASTER=spark://192.168.159.129:7077 ./bin/spark-shell
[root@master spark-0.9]# MASTER=spark://192.168.159.129:7077 bin/spark-shell
14/02/08 16:17:57 INFO HttpServer: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/02/08 16:17:57 INFO HttpServer: Starting HTTP Server
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 0.9.0
      /_/
 
Using Scala version 2.10.3 (Java HotSpot(TM) Client VM, Java 1.7.0_21)
Type in expressions to have them evaluated.
Type :help for more information.
14/02/08 16:18:04 INFO Slf4jLogger: Slf4jLogger started
14/02/08 16:18:04 INFO Remoting: Starting remoting
14/02/08 16:18:04 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@master:57338]
14/02/08 16:18:04 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@master:57338]
14/02/08 16:18:04 INFO SparkEnv: Registering BlockManagerMaster
14/02/08 16:18:04 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20140208161804-d96e
14/02/08 16:18:04 INFO MemoryStore: MemoryStore started with capacity 297.0 MB.
14/02/08 16:18:04 INFO ConnectionManager: Bound socket to port 54967 with id = ConnectionManagerId(master,54967)
14/02/08 16:18:04 INFO BlockManagerMaster: Trying to register BlockManager
14/02/08 16:18:04 INFO BlockManagerMasterActor$BlockManagerInfo: Registering block manager master:54967 with 297.0 MB RAM
14/02/08 16:18:04 INFO BlockManagerMaster: Registered BlockManager
14/02/08 16:18:04 INFO HttpServer: Starting HTTP Server
14/02/08 16:18:04 INFO HttpBroadcast: Broadcast server started at http://192.168.159.129:51193
14/02/08 16:18:04 INFO SparkEnv: Registering MapOutputTracker
14/02/08 16:18:04 INFO HttpFileServer: HTTP File server directory is /tmp/spark-a63d283e-90dd-4a09-b2de-d9feda31f607
14/02/08 16:18:04 INFO HttpServer: Starting HTTP Server
14/02/08 16:18:05 INFO SparkUI: Started Spark Web UI at http://master:4040
14/02/08 16:18:05 INFO AppClient$ClientActor: Connecting to master spark://192.168.159.129:7077...
14/02/08 16:18:06 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20140208161806-0007
14/02/08 16:18:06 INFO AppClient$ClientActor: Executor added: app-20140208161806-0007/0 on worker-20140208153258-master-37524 (master:37524) with 1 cores
14/02/08 16:18:06 INFO SparkDeploySchedulerBackend: Granted executor ID app-20140208161806-0007/0 on hostPort master:37524 with 1 cores, 512.0 MB RAM
14/02/08 16:18:06 INFO AppClient$ClientActor: Executor added: app-20140208161806-0007/1 on worker-20140208153257-slaver01-49982 (slaver01:49982) with 1 cores
14/02/08 16:18:06 INFO SparkDeploySchedulerBackend: Granted executor ID app-20140208161806-0007/1 on hostPort slaver01:49982 with 1 cores, 512.0 MB RAM
14/02/08 16:18:06 INFO AppClient$ClientActor: Executor added: app-20140208161806-0007/2 on worker-20140208153257-slaver02-53193 (slaver02:53193) with 1 cores
14/02/08 16:18:06 INFO SparkDeploySchedulerBackend: Granted executor ID app-20140208161806-0007/2 on hostPort slaver02:53193 with 1 cores, 512.0 MB RAM
14/02/08 16:18:06 INFO AppClient$ClientActor: Executor updated: app-20140208161806-0007/1 is now RUNNING
14/02/08 16:18:06 INFO AppClient$ClientActor: Executor updated: app-20140208161806-0007/0 is now RUNNING
14/02/08 16:18:06 INFO AppClient$ClientActor: Executor updated: app-20140208161806-0007/2 is now RUNNING
14/02/08 16:18:07 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Created spark context..
Spark context available as sc.
scala> val file = sc.textFile("hdfs://192.168.159.129:9000//tmp-output101/part-r-00000")
14/02/08 17:16:29 INFO MemoryStore: ensureFreeSpace(132668) called with curMem=132636, maxMem=311387750
14/02/08 17:16:29 INFO MemoryStore: Block broadcast_1 stored as values to memory (estimated size 129.6 KB, free 296.7 MB)
file: org.apache.spark.rdd.RDD[String] = MappedRDD[3] at textFile at <console>:12
 
scala>  val count = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_+_)
14/02/08 17:16:42 INFO FileInputFormat: Total input paths to process : 1
count: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[8] at reduceByKey at <console>:14
 
scala>  count.collect()
14/02/08 17:16:49 INFO SparkContext: Starting job: collect at <console>:17
14/02/08 17:16:49 INFO DAGScheduler: Registering RDD 6 (reduceByKey at <console>:14)
14/02/08 17:16:49 INFO DAGScheduler: Got job 0 (collect at <console>:17) with 2 output partitions (allowLocal=false)
14/02/08 17:16:49 INFO DAGScheduler: Final stage: Stage 0 (collect at <console>:17)
14/02/08 17:16:49 INFO DAGScheduler: Parents of final stage: List(Stage 1)
14/02/08 17:16:49 INFO DAGScheduler: Missing parents: List(Stage 1)
14/02/08 17:16:49 INFO DAGScheduler: Submitting Stage 1 (MapPartitionsRDD[6] at reduceByKey at <console>:14), which has no missing parents
14/02/08 17:16:49 INFO DAGScheduler: Submitting 2 missing tasks from Stage 1 (MapPartitionsRDD[6] at reduceByKey at <console>:14)
14/02/08 17:16:49 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
14/02/08 17:16:49 INFO TaskSetManager: Starting task 1.0:0 as TID 0 on executor 2: slaver02 (NODE_LOCAL)
14/02/08 17:16:49 INFO TaskSetManager: Serialized task 1.0:0 as 1955 bytes in 47 ms
14/02/08 17:16:49 INFO TaskSetManager: Starting task 1.0:1 as TID 1 on executor 1: slaver01 (NODE_LOCAL)
14/02/08 17:16:49 INFO TaskSetManager: Serialized task 1.0:1 as 1955 bytes in 0 ms
14/02/08 17:17:02 INFO TaskSetManager: Finished TID 1 in 13049 ms on slaver01 (progress: 0/2)
14/02/08 17:17:02 INFO DAGScheduler: Completed ShuffleMapTask(1, 1)
14/02/08 17:17:05 INFO TaskSetManager: Finished TID 0 in 15876 ms on slaver02 (progress: 1/2)
14/02/08 17:17:05 INFO DAGScheduler: Completed ShuffleMapTask(1, 0)
14/02/08 17:17:05 INFO DAGScheduler: Stage 1 (reduceByKey at <console>:14) finished in 15.877 s
14/02/08 17:17:05 INFO DAGScheduler: looking for newly runnable stages
14/02/08 17:17:05 INFO DAGScheduler: running: Set()
14/02/08 17:17:05 INFO DAGScheduler: waiting: Set(Stage 0)
14/02/08 17:17:05 INFO DAGScheduler: failed: Set()
14/02/08 17:17:05 INFO DAGScheduler: Missing parents for Stage 0: List()
14/02/08 17:17:05 INFO DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[8] at reduceByKey at <console>:14), which is now runnable
14/02/08 17:17:05 INFO TaskSchedulerImpl: Remove TaskSet 1.0 from pool
14/02/08 17:17:05 INFO DAGScheduler: Submitting 2 missing tasks from Stage 0 (MapPartitionsRDD[8] at reduceByKey at <console>:14)
14/02/08 17:17:05 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
14/02/08 17:17:05 INFO TaskSetManager: Starting task 0.0:0 as TID 2 on executor 2: slaver02 (PROCESS_LOCAL)
14/02/08 17:17:05 INFO TaskSetManager: Serialized task 0.0:0 as 1807 bytes in 1 ms
14/02/08 17:17:05 INFO TaskSetManager: Starting task 0.0:1 as TID 3 on executor 1: slaver01 (PROCESS_LOCAL)
14/02/08 17:17:05 INFO TaskSetManager: Serialized task 0.0:1 as 1807 bytes in 0 ms
14/02/08 17:17:05 INFO MapOutputTrackerMasterActor: Asked to send map output locations for shuffle 0 to spark@slaver01:43048
14/02/08 17:17:05 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 0 is 147 bytes
14/02/08 17:17:05 INFO MapOutputTrackerMasterActor: Asked to send map output locations for shuffle 0 to spark@slaver02:42393
14/02/08 17:17:07 INFO TaskSetManager: Finished TID 2 in 1741 ms on slaver02 (progress: 0/2)
14/02/08 17:17:07 INFO DAGScheduler: Completed ResultTask(0, 0)
14/02/08 17:17:07 INFO TaskSetManager: Finished TID 3 in 2311 ms on slaver01 (progress: 1/2)
14/02/08 17:17:07 INFO DAGScheduler: Completed ResultTask(0, 1)
14/02/08 17:17:07 INFO DAGScheduler: Stage 0 (collect at <console>:17) finished in 2.321 s
14/02/08 17:17:07 INFO TaskSchedulerImpl: Remove TaskSet 0.0 from pool
14/02/08 17:17:07 INFO SparkContext: Job finished: collect at <console>:17, took 18.921360252 s
res1: Array[(String, Int)] = Array((Fate        1,1), (wayside. 1,1), (energetic        3,1), (fourth   7,1), (gouging       1,1), (Frances  1,1), (NOVEMBER 1,1), (about.   2,1), (propel   1,1), (beginning.       2,1), (faunas        1,1), (natural  26,1), (calves, 1,1), (tends    5,1), (sea-level.       1,1), (molluscs.    1,1), (supported 3,1), (stoneless        1,1), (Here     9,1), (Planet   1,1), (dwell    1,1), (behaviour--experimenting,     1,1), (Actual   1,1), (Sluggish 1,1), (Looked   1,1), (mysterious       12,1), (fauna,  2,1), (primaries     1,1), (Mercury's        2,1), (_instinctively_  2,1), (females  2,1), (ELECTRICITY?     1,1), (claws,        1,1), (sedimentary      6,1), (Vertebrates      4,1), ("little-brain"   4,1), (fertilises   3,1), (word.     1,1), (all;     3,1), ("atomism."       1,1), (reactions.       3,1), (away.    10,1), (beginning,   2,1), (groove.  1,1), (Egg-eating       1,1), (depreciate       1,1), (know,    ...
scala>
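The RDD pipeline above (flatMap, map, reduceByKey) has the same shape as ordinary Scala collection operations, just distributed across the cluster. As a minimal local sketch of the same word-count logic that needs no Spark installation (the object and method names here are illustrative, not part of Spark):

```scala
// Local word count mirroring the RDD pipeline: split lines into words,
// pair each word with 1, then sum the counts per word — which is what
// reduceByKey(_ + _) does in the distributed version.
object WordCountSketch {
  def wordCount(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .groupBy(_._1)
      .map { case (word, pairs) => (word, pairs.map(_._2).sum) }

  def main(args: Array[String]): Unit = {
    println(wordCount(Seq("spark is fast", "spark is fun")))
  }
}
```

The difference in Spark is that `reduceByKey` combines partial sums on each partition before shuffling, so only one (word, count) pair per word per partition crosses the network.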

1.8 Stop the Spark cluster

$ cd ~/spark-0.9
$ ./sbin/stop-all.sh
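After a clean shutdown, `jps` on each node should no longer list the Master or Worker processes. A quick sketch of the check, assuming `jps` from the JDK is on the PATH:

```shell
# Print any surviving Spark daemons; if none remain, report success.
jps 2>/dev/null | grep -E 'Master|Worker' || echo "Spark daemons stopped"
```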
