Configuring an HBase 0.98.10-hadoop2 Cluster on CentOS


Environment

OS: CentOS 6.5 64-bit

Hadoop:hadoop-2.5.2

HBase:hbase-0.98.10-hadoop2

Zookeeper:zookeeper-3.4.6

Physical machines

IP              Hostname
192.168.40.107  hadoop107
192.168.40.108  hadoop108
192.168.40.104  hadoop104
A Hadoop cluster has already been deployed and started on these nodes. For the Hadoop setup, see: "HA cluster principles and detailed configuration of hadoop-2.5.2 on CentOS 6.5".

Detailed steps

1. Download hbase-0.98.10-hadoop2-bin.tar.gz and extract it to /root/hadoop.
2. Edit the three configuration files hbase-env.sh, hbase-site.xml, and regionservers as follows:

#hbase-env.sh  

export HBASE_OPTS="-XX:+UseConcMarkSweepGC"  
export JAVA_HOME=/root/hadoop/jdk1.7.0_51  
export HBASE_HOME=/root/hadoop/hbase-0.98.10-hadoop2  
export HADOOP_HOME=/root/hadoop/hadoop-2.5.2  
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin 
export HBASE_MANAGES_ZK=true

The point to watch is the ZooKeeper setting. The HBASE_MANAGES_ZK variable in hbase-env.sh controls whether HBase uses its own bundled ZooKeeper or an external ensemble: true (used here) means HBase starts and stops ZooKeeper itself; false means an independently managed ZooKeeper is used.
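If you would rather use the standalone zookeeper-3.4.6 listed in the environment instead of the bundled one, a minimal sketch of the change is below. It assumes ZooKeeper is already installed under /root/hadoop/zookeeper-3.4.6 (an assumed path mirroring this article's layout) and is started on hadoop107, hadoop108, and hadoop104 before start-hbase.sh runs; hbase.zookeeper.quorum in hbase-site.xml stays hadoop107,hadoop108,hadoop104.

#hbase-env.sh (external ZooKeeper variant)
export HBASE_MANAGES_ZK=false

# start ZooKeeper on each node first (path is an assumption):
/root/hadoop/zookeeper-3.4.6/bin/zkServer.sh start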

#hbase-site.xml

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://hadoop107:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>hadoop107,hadoop108,hadoop104</value>
    </property>
    <property>
        <name>hbase.master</name>
        <value>hadoop107:60000</value>
    </property>
    <property>
        <name>zookeeper.session.timeout</name>
        <value>60000</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
    <property>
        <name>hbase.tmp.dir</name>
        <value>/root/hadoop/hbase-0.98.10-hadoop2/hbase-tmp</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>${hbase.tmp.dir}/zookeeper</value>
    </property>
</configuration>

Note that hbase.rootdir must point at the active NameNode of this cluster (hadoop107), and property names and values must not contain stray spaces.
#regionservers  
hadoop108  
hadoop104 

3. Distribute the HBase directory to the other nodes in the cluster, then start HBase and check that it started successfully.
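A minimal sketch of the distribution step, assuming passwordless SSH from hadoop107 to the other two nodes and the same /root/hadoop layout everywhere:

scp -r /root/hadoop/hbase-0.98.10-hadoop2 root@hadoop108:/root/hadoop/
scp -r /root/hadoop/hbase-0.98.10-hadoop2 root@hadoop104:/root/hadoop/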
Start HBase on the HMaster, i.e. the NameNode (hadoop107); the Hadoop cluster must already be running. The start command is:

bin/start-hbase.sh
The startup output looks like this:

[root@hadoop107 hbase-0.98.10-hadoop2]# bin/start-hbase.sh 
hadoop107: starting zookeeper, logging to /root/hadoop/hbase-0.98.10-hadoop2/logs/hbase-root-zookeeper-hadoop107.out
hadoop104: starting zookeeper, logging to /root/hadoop/hbase-0.98.10-hadoop2/logs/hbase-root-zookeeper-hadoop104.out
hadoop108: starting zookeeper, logging to /root/hadoop/hbase-0.98.10-hadoop2/logs/hbase-root-zookeeper-hadoop108.out
starting master, logging to /root/hadoop/hbase-0.98.10-hadoop2/logs/hbase-root-master-hadoop107.out
hadoop104: starting regionserver, logging to /root/hadoop/hbase-0.98.10-hadoop2/logs/hbase-root-regionserver-hadoop104.out
hadoop108: starting regionserver, logging to /root/hadoop/hbase-0.98.10-hadoop2/logs/hbase-root-regionserver-hadoop108.out
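Once the master is running it creates its root directory in HDFS; an optional sanity check (assuming $HADOOP_HOME/bin is on the PATH) is to list it:

hdfs dfs -ls /hbase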

Run jps on the HMaster to verify:

[root@hadoop107 conf]# jps  
12560 NameNode  
30611 Jps  
12861 JobTracker  
26302 HQuorumPeer  
28715 HMaster  
12755 SecondaryNameNode  
Run jps on a DataNode to verify:

[root@hadoop108 logs]# jps  
8194 Jps  
1020 DataNode  
1147 TaskTracker  
7376 HQuorumPeer  
7583 HRegionServer  

Then run the following command to enter the HBase shell:
bin/hbase shell
In the HBase shell, type list to show the tables in the current database, as in the output below. If HBase is not configured correctly, a Java error is thrown instead.

[root@hadoop107 hbase-0.98.10]# bin/hbase shell  
HBase Shell; enter 'help' for list of supported commands.  
Type "exit" to leave the HBase Shell  
Version 0.98.10, r1332822, Tue May  1 21:43:54 UTC 2012  
hbase(main):001:0> list  
TABLE                                                                                                               
member                                                                                                              
people                                                                                                              
2 row(s) in 0.3980 seconds  
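As a further check that reads and writes go through end to end, a throwaway table can be created from the same shell; test_table and the cf column family are illustrative names, not part of this cluster:

create 'test_table', 'cf'
put 'test_table', 'row1', 'cf:a', 'value1'
scan 'test_table'
disable 'test_table'
drop 'test_table'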

We can also inspect and manage the HBase cluster through the web UI:
HMaster: http://192.168.40.107:60010/master-status



4. Problems encountered during installation and their solutions

Problem 1: the HRegionServer on hadoop108 failed to start.
Solution: the system clocks of the three nodes differed by more than the cluster's 30 s check interval; synchronize them with NTP.
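A sketch of the fix, run on every node; pool.ntp.org is only an example source, and any reachable NTP server (for instance an internal one) will do:

ntpdate pool.ntp.org
service ntpd start      # keep clocks synchronized from now on
chkconfig ntpd on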


Problem 2: Class path contains multiple SLF4J bindings.

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/xuhui/hadoop-2.2.0/hbase-0.98.2-hadoop2/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/xuhui/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Solution: delete slf4j-log4j12-1.7.5.jar from lib/ on every HBase node; it duplicates the SLF4J binding already provided under /home/xuhui/hadoop-2.2.0/share/hadoop/common/lib.
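A sketch of the cleanup, run on each HBase node; the path follows this article's /root/hadoop layout and should be adjusted if your install directory differs:

rm /root/hadoop/hbase-0.98.10-hadoop2/lib/slf4j-log4j12-1.7.5.jar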


