Hadoop/YARN/MapReduce Memory Allocation (Configuration) Scheme


Using the Hortonworks recommended configuration as a blueprint, this post presents a common memory allocation scheme for the components of a Hadoop cluster. The rightmost column of the table is the allocation for an 8 GB VM: 1-2 GB is reserved for the operating system, 4 GB is allocated to YARN/MapReduce (which also covers Hive), and the remaining 2-3 GB is set aside for HBase when HBase is in use.
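As a concrete illustration, the three YARN rows of the table can be written into yarn-site.xml roughly as follows. This is a minimal sketch for the 8 GB VM column only: the property names and values come from the table, and the surrounding layout is the standard Hadoop `<configuration>` file format.

```xml
<!-- yarn-site.xml: YARN memory settings for the 8 GB VM column
     (4 containers of 1024 MB each, 4096 MB total for containers) -->
<configuration>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>  <!-- containers * RAM-per-container -->
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>  <!-- RAM-per-container -->
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value>  <!-- containers * RAM-per-container -->
  </property>
</configuration>
```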

| Configuration File | Configuration Setting | Value Calculation | 8G VM (4G for MR) |
|---|---|---|---|
| yarn-site.xml | yarn.nodemanager.resource.memory-mb | = containers * RAM-per-container | 4096 |
| yarn-site.xml | yarn.scheduler.minimum-allocation-mb | = RAM-per-container | 1024 |
| yarn-site.xml | yarn.scheduler.maximum-allocation-mb | = containers * RAM-per-container | 4096 |
| mapred-site.xml | mapreduce.map.memory.mb | = RAM-per-container | 1024 |
| mapred-site.xml | mapreduce.reduce.memory.mb | = 2 * RAM-per-container | 2048 |
| mapred-site.xml | mapreduce.map.java.opts | = 0.8 * RAM-per-container | 819 |
| mapred-site.xml | mapreduce.reduce.java.opts | = 0.8 * 2 * RAM-per-container | 1638 |
| yarn-site.xml (check) | yarn.app.mapreduce.am.resource.mb | = 2 * RAM-per-container | 2048 |
| yarn-site.xml (check) | yarn.app.mapreduce.am.command-opts | = 0.8 * 2 * RAM-per-container | 1638 |
| tez-site.xml | tez.am.resource.memory.mb | = RAM-per-container | 1024 |
| tez-site.xml | tez.am.java.opts | = 0.8 * RAM-per-container | 819 |
| tez-site.xml | hive.tez.container.size | = RAM-per-container | 1024 |
| tez-site.xml | hive.tez.java.opts | = 0.8 * RAM-per-container | 819 |
 
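The MapReduce rows translate into mapred-site.xml in the same way. Note that the `*.java.opts` entries are JVM flags rather than bare megabyte counts, so in the actual file they are usually expressed as `-Xmx` values; the 0.8 factor leaves headroom between the JVM heap and the container limit. A sketch for the 8 GB VM column:

```xml
<!-- mapred-site.xml: MapReduce memory settings for the 8 GB VM column -->
<configuration>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1024</value>      <!-- RAM-per-container -->
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>      <!-- 2 * RAM-per-container -->
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx819m</value>  <!-- 0.8 * RAM-per-container -->
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx1638m</value> <!-- 0.8 * 2 * RAM-per-container -->
  </property>
</configuration>
```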
