Compiling Hadoop from Source and Installing in Pseudo-Distributed Mode

2019-05-03 22:11

8.3.  Location of the built packages

[root@mini05 target]# pwd
/app/software/hadoop-2.7.6-src/hadoop-dist/target
[root@mini05 target]# ll -h  
total 571M
drwxr-xr-x 2 root root   28 Jun  8 16:28 antrun
drwxr-xr-x 3 root root   22 Jun  8 16:28 classes
-rw-r--r-- 1 root root 1.9K Jun  8 16:36 dist-layout-stitching.sh
-rw-r--r-- 1 root root  643 Jun  8 16:36 dist-tar-stitching.sh
drwxr-xr-x 9 root root  149 Jun  8 16:36 hadoop-2.7.6
-rw-r--r-- 1 root root 190M Jun  8 16:36 hadoop-2.7.6.tar.gz
-rw-r--r-- 1 root root  26K Jun  8 16:36 hadoop-dist-2.7.6.jar
-rw-r--r-- 1 root root 381M Jun  8 16:37 hadoop-dist-2.7.6-javadoc.jar
-rw-r--r-- 1 root root  24K Jun  8 16:36 hadoop-dist-2.7.6-sources.jar
-rw-r--r-- 1 root root  24K Jun  8 16:36 hadoop-dist-2.7.6-test-sources.jar
drwxr-xr-x 2 root root   51 Jun  8 16:36 javadoc-bundle-options
drwxr-xr-x 2 root root   28 Jun  8 16:28 maven-archiver
drwxr-xr-x 3 root root   22 Jun  8 16:28 maven-shared-archive-resources
drwxr-xr-x 3 root root   22 Jun  8 16:28 test-classes
drwxr-xr-x 2 root root    6 Jun  8 16:28 test-dir

  

PS:

1. The software required for the build is available on a cloud drive; the file CentOS-7.4_hadoop-2.7.6.tar.gz there is simply the hadoop-2.7.6.tar.gz built above, renamed, and can be used directly.

2. Link: https://pan.baidu.com/s/1saAbQdoO4GRIbdCH--prkw  Password: 372f

 

I. Software Required Before Building

6. Run Hadoop

1. On the first run, the NameNode must be formatted (this initializes the NameNode):

[root@localhost ~]# hdfs namenode -format (or hadoop namenode -format)

2. Start Hadoop

Start HDFS first:
[root@localhost ~]# start-dfs.sh

Then start YARN:
[root@localhost ~]# start-yarn.sh

3. Verify that everything started successfully

[root@localhost ~]# jps
12880 SecondaryNameNode
13025 ResourceManager
12725 DataNode
13305 NodeManager
13353 Jps
12607 NameNode

4. View the HDFS and MapReduce management web UIs in a browser (in Hadoop 2.x these are typically at http://<host>:50070 for HDFS and http://<host>:8088 for YARN/MR)

(HDFS management UI screenshot)

(MR management UI screenshot)

5.1.  Software installation

[root@mini05 software]# pwd
/app/software
[root@mini05 software]# tar xf apache-ant-1.9.11-bin.tar.gz  
[root@mini05 software]# mv apache-ant-1.9.11 /app/ 
[root@mini05 app]# ln -s apache-ant-1.9.11 ant  
[root@mini05 app]# ll
total 0
lrwxrwxrwx 1 root root  17 Jun  8 10:38 ant -> apache-ant-1.9.11
drwxr-xr-x 6 root root 235 Mar 24 01:08 apache-ant-1.9.11
drwxr-xr-x 6 root root  99 Jun  8 10:18 apache-maven-3.5.3
lrwxrwxrwx 1 yun  yun   13 Jun  7 22:49 jdk -> jdk1.8.0_112/
drwxr-xr-x 8 yun  yun  255 Sep 23  2016 jdk1.8.0_112
lrwxrwxrwx 1 root root  18 Jun  8 10:19 maven -> apache-maven-3.5.3
drwxrwxr-x 2 yun  yun  222 Jun  8 10:18 software

  

II. Building

mvn package -Pdist,native -DskipTests -Dtar
  • The first build may take quite a while, because some dependency packages have to be downloaded
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:04 h
[INFO] Finished at: 2017-03-04T12:12:38+08:00
[INFO] Final Memory: 244M/1613M
[INFO] ------------------------------------------------------------------------
  • Subsequent builds are much faster, because the dependency packages have already been downloaded
[INFO] -------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] -------------------------------------------------------------------
[INFO] Total time: 10:47 min
[INFO] Finished at: 2017-03-13T22:12:47+08:00
[INFO] Final Memory: 237M/1614M
[INFO] --------------------------------------------------------------------

7.3 Configure passwordless SSH login (optional)

With Hadoop set up as above, besides updating the hostname-to-IP mapping before each run (solved by setting a static IP), you also have to enter the Linux login password every time the start-dfs.sh and start-yarn.sh scripts are executed. Configuring passwordless SSH avoids entering the password each time.
Configuration steps:

1. Generate an SSH key pair; run the command below and press Enter three times in a row
[root@localhost ~]# ssh-keygen -t rsa
2. Copy the public key to the machine you want to log in to without a password (here, localhost itself)
[root@localhost ~]# ssh-copy-id localhost
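3. To verify, SSH to localhost; it should log you in without prompting for a password (type exit to return):
[root@localhost ~]# ssh localhost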

5.2.  Environment variables

[root@mini05 profile.d]# pwd
/etc/profile.d 
[root@mini05 profile.d]# vim ant.sh 
export ANT_HOME="/app/ant"
export PATH=$ANT_HOME/bin:$PATH

[root@mini05 profile.d]# source /etc/profile  
[root@mini05 profile.d]# ant -version   
Apache Ant(TM) version 1.9.11 compiled on March 23 2018

  

4. Installing protoc
  • Protocol Buffers (PB for short) is Google's data interchange format; it is language-neutral and platform-neutral.
    For a detailed introduction, see the protoc primer
    and the protoc developer guide.

  • Installing protoc
    1) First download protoc; version 2.5.0 is recommended for a smooth build
    (protoc 2.5.0 download link)

2) Configure the build

./configure

3) Build the whole package

make

4) Install protoc

make install

5) Configure protoc's environment variables

vim  ~/.bash_profile
Add:
export PROTOC_HOME=/Users/zhaolei/protobuf
export PATH=$PROTOC_HOME/bin:$PATH

The most detailed protoc installation instructions are in the INSTALL.txt file at the root of the protoc source tree.
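If protoc should live under a custom prefix (matching the PROTOC_HOME above) rather than the default /usr/local, pass it at configure time; afterwards, compiling a throwaway .proto file makes a quick smoke test. This is only a sketch; the Greeting message is invented for illustration:

./configure --prefix=/Users/zhaolei/protobuf   # optional: install under the PROTOC_HOME used above
make && make install
echo 'message Greeting { required string text = 1; }' > greeting.proto
protoc --cpp_out=. greeting.proto   # should emit greeting.pb.h and greeting.pb.cc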

3.1.1 Check whether OpenJDK is installed

[root@localhost ~]# java -version
openjdk version "1.8.0_65"
OpenJDK Runtime Environment (build 1.8.0_65-b17)
OpenJDK 64-Bit Server VM (build 25.65-b01, mixed mode)

1. System and software used

Operating system: CentOS Linux release 7.4.1708 (Core), 64-bit
Software used:
jdk1.8.0_112.tar.gz
hadoop-2.7.6-src.tar.gz 
apache-ant-1.9.11-bin.tar.gz
apache-maven-3.5.3-bin.tar.gz
findbugs-3.0.1.tar.gz
protobuf-2.5.0.tar.gz

 

1. Homebrew

Homebrew is a package manager for Mac OS X that makes it easy to install and uninstall software on a Mac.

  • Installation:
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
  • Common commands
Update brew: brew update
Install a package: brew install xxx
Uninstall a package: brew uninstall xxx
Search for a package: brew search xxx
List installed packages: brew list
Open a package's homepage: brew home
Show package info: brew info

For a more detailed introduction, see the official brew site.

1. Installation environment

  1. Host operating system: macOS Sierra 10.12.6
  2. Hypervisor: VirtualBox 5.1.28
  3. VM image: CentOS 7, 64-bit

6.1.  Software installation

[root@mini05 software]# pwd
/app/software
[root@mini05 software]# tar xf findbugs-3.0.1.tar.gz 
[root@mini05 software]# mv findbugs-3.0.1 /app/
[root@mini05 software]# cd /app/
[root@mini05 app]# ln -s findbugs-3.0.1/ findbugs  
[root@mini05 app]# ll
total 0
lrwxrwxrwx 1 root root  17 Jun  8 10:38 ant -> apache-ant-1.9.11
drwxr-xr-x 6 root root 235 Mar 24 01:08 apache-ant-1.9.11
drwxr-xr-x 6 root root  99 Jun  8 10:18 apache-maven-3.5.3
lrwxrwxrwx 1 root root  15 Jun  8 11:01 findbugs -> findbugs-3.0.1/
drwxr-xr-x 8 root root 104 Jun  8 11:00 findbugs-3.0.1
lrwxrwxrwx 1 yun  yun   13 Jun  7 22:49 jdk -> jdk1.8.0_112/
drwxr-xr-x 8 yun  yun  255 Sep 23  2016 jdk1.8.0_112
lrwxrwxrwx 1 root root  18 Jun  8 10:19 maven -> apache-maven-3.5.3
drwxrwxr-x 2 yun  yun  222 Jun  8 11:01 software

  

3. Installing Maven

Maven is used mainly for building software, and also provides project-management features.

brew install maven

7.6 Other notes

1. Download link for the related software used: (link)

2. Reference links:

CentOS7-64bit 编译 Hadoop-2.5.0,并分布式安装
https://my.oschina.net/u/1428349/blog/31364
Hadoop编译安装2.7.3(CentOS7)
https://www.2cto.com/net/201612/567546.html

3. For reference only; corrections are welcome.

4.2.  Environment variables

[root@mini05 profile.d]# pwd
/etc/profile.d
[root@mini05 profile.d]# cat maven.sh 
export MAVEN_HOME=/app/maven/
export PATH=$MAVEN_HOME/bin:$PATH

[root@mini05 profile.d]# source /etc/profile 
[root@mini05 profile.d]# mvn -v  
Apache Maven 3.5.3 (3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-25T03:49:05+08:00)
Maven home: /app/maven
Java version: 1.8.0_112, vendor: Oracle Corporation
Java home: /app/jdk1.8.0_112/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-693.el7.x86_64", arch: "amd64", family: "unix"

  

2. Installing CMake

CMake is a cross-platform build tool that can describe the build process for all platforms with simple statements.

Install CMake:

brew install cmake
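You can confirm the installation afterwards with:

cmake --version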

3.3.4 Install protobuf

1. Install the other required dependencies (the VM needs internet access):

[root@localhost ~]# yum -y install maven svn ncurses-devel gcc* lzo-devel zlib-devel autoconf automake libtool cmake openssl-devel

2. Install protobuf; be patient after entering the last command:

[root@localhost ~]# cd /opt/protobuf-2.5.0
[root@localhost protobuf-2.5.0]# ./configure
[root@localhost protobuf-2.5.0]# make && make install

3. Verify the installation:

[root@localhost protobuf-2.5.0]# protoc --version
libprotoc 2.5.0

6.2.  Environment variables

[root@mini05 profile.d]# pwd
/etc/profile.d
[root@mini05 profile.d]# cat findbugs.sh 
export FINDBUGS_HOME="/app/findbugs"
export PATH=$FINDBUGS_HOME/bin:$PATH

[root@mini05 profile.d]# source /etc/profile  
[root@mini05 profile.d]# findbugs -version   
3.0.1

  

Local environment

System version: macOS Sierra 10.12.2
JDK version: java version "1.8.0_121"
Hadoop source: Hadoop-2.7.3

3. Preparation before building

4.1.  Software installation

[root@mini05 software]# pwd
/app/software 
[root@mini05 software]# tar xf apache-maven-3.5.3-bin.tar.gz 
[root@mini05 software]# mv apache-maven-3.5.3 /app/ 
[root@mini05 software]# cd /app/
[root@mini05 app]# ln -s apache-maven-3.5.3 maven
[root@mini05 app]# ll
total 0
drwxr-xr-x 6 root root  99 Jun  8 10:18 apache-maven-3.5.3
lrwxrwxrwx 1 yun  yun   13 Jun  7 22:49 jdk -> jdk1.8.0_112/
drwxr-xr-x 8 yun  yun  255 Sep 23  2016 jdk1.8.0_112
lrwxrwxrwx 1 root root  18 Jun  8 10:19 maven -> apache-maven-3.5.3
drwxrwxr-x 2 yun  yun  222 Jun  8 10:18 software

  

III. Using the Build Output

After building from source, we run Hadoop with the jars we just built.
For example, if we modified code in the hadoop-hdfs-project module,
the newly built jar, hadoop-hdfs-2.7.3.jar, can be found under Hadoop-2.7.3-src/hadoop-hdfs-project/hadoop-hdfs/target.
Replace the hadoop-hdfs-2.7.3.jar in the Hadoop installation (in the directory hadoop-2.7.3/share/hadoop/hdfs) with this file,
then restart Hadoop; a command sketch follows.
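A minimal sketch of that swap as shell commands, assuming the Hadoop installation lives at /opt/hadoop-2.7.3 (the install path is an assumption; adjust both paths to your layout):

cp Hadoop-2.7.3-src/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-2.7.3.jar /opt/hadoop-2.7.3/share/hadoop/hdfs/   # overwrite the stock jar
stop-dfs.sh && start-dfs.sh   # restart HDFS so the new jar is picked up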

3.3 Setting up the build environment

7. Install protobuf

# do not use protobuf-2.6.1.tar.gz; it causes build errors
[root@mini05 software]# pwd
/app/software
[root@mini05 software]# tar xf protobuf-2.5.0.tar.gz  
[root@mini05 software]# cd protobuf-2.5.0/ 
[root@mini05 protobuf-2.5.0]# pwd
/app/software/protobuf-2.5.0
[root@mini05 protobuf-2.5.0]# ./configure 
………………
[root@mini05 protobuf-2.5.0]# make  
………………
[root@mini05 protobuf-2.5.0]# make install 
………………
[root@mini05 protobuf-2.5.0]# protoc --version  
libprotoc 2.5.0


3.2.3 Disable firewall autostart at boot

[root@localhost ~]# systemctl disable firewalld.service 
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.

8. Build Hadoop

3.2.2 Stop the firewall

[root@localhost ~]# systemctl stop firewalld.service

2. Install required packages

[root@mini05 ~]# yum install -y cmake 
[root@mini05 ~]# yum install -y openssl-devel 
[root@mini05 ~]# yum install -y ncurses-devel 

  

3.3.1 Upload the required software packages

Example: on a Mac, use the scp command in a terminal to upload all the packages in the local hadoop directory to the VM's /opt/ directory, as sketched below.
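A sketch of that upload (the VM IP is a placeholder; substitute your own):

$ scp ~/hadoop/* root@<vm-ip>:/opt/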

3.1.  Software installation

[root@mini05 software]# pwd
/app/software
[root@mini05 software]# tar xf jdk1.8.0_112.tar.gz 
[root@mini05 software]# ll
total 201392
drwxr-xr-x 8   10  143      4096 Dec 20 13:27 jdk1.8.0_112
-rw-r--r-- 1 root root 189815615 Mar 12 16:47 jdk1.8.0_112.tar.gz
[root@mini05 software]# mv jdk1.8.0_112/ /app/
[root@mini05 software]# cd /app/
[root@mini05 app]# ln -s jdk1.8.0_112/ jdk
[root@mini05 app]# ll
total 8
lrwxrwxrwx  1 root root    13 May 16 23:19 jdk -> jdk1.8.0_112/
drwxr-xr-x  8   10   143 4096 Dec 20 13:27 jdk1.8.0_112

  

3.2 Turn off the firewall

3. Install the JDK (Java 8)

7.5 Working over a remote connection is recommended

  1. On Windows, SecureCRT is recommended for running commands and uploading files
  2. On macOS, use the Terminal: connect and run commands over ssh, and upload/download files with scp, e.g.:
Connect to the Linux VM:
$ ssh root@192.168.43.216
Upload local files to the Linux VM:
$ scp /Users/michealyan/hadoop/* root@172.17.129.78:/opt/

5. Install Ant

3.1 Remove the preinstalled OpenJDK (if present)

4. Install Maven

7. Summary

9. References

1. hadoop2.7.3编译和安装 (hadoop 2.7.3 build and installation)

2. hadoop搭建时为什么最好重新编译源码的原因 (why it is best to rebuild the source when setting up hadoop)

 

3.1.3 Remove the OpenJDK packages one by one

[root@localhost ~]# rpm -e --nodeps java-1.7.0-openjdk-1.7.0.91-2.6.2.3.el7.x86_64
[root@localhost ~]# rpm -e --nodeps tzdata-java-2015g-1.el7.noarch
[root@localhost ~]# rpm -e --nodeps java-1.8.0-openjdk-headless-1.8.0.65-3.b17.el7.x86_64
[root@localhost ~]# rpm -e --nodeps java-1.8.0-openjdk-1.8.0.65-3.b17.el7.x86_64
[root@localhost ~]# rpm -e --nodeps java-1.7.0-openjdk-headless-1.7.0.91-2.6.2.3.el7.x86_64

 

7.4 Set a static IP (optional)

  1. Use the ip a command to find the NIC name; by default there are two interfaces: the lo loopback interface and the one we need to modify
  2. Edit the NIC's configuration file (restart the network afterwards, as shown below):
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-<NIC name>
Modify or add the following:
BOOTPROTO="static"
ONBOOT="yes"
IPADDR="<a static IP of your choice>"
NETMASK="255.255.255.0"
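For the change to take effect, restart the network service (CentOS 7):

[root@localhost ~]# systemctl restart network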

8.1.  Build Hadoop

 

[root@mini05 software]# pwd
/app/software
[root@mini05 software]# tar xf  hadoop-2.7.6-src.tar.gz 
[root@mini05 software]# cd hadoop-2.7.6-src/ 
[root@mini05 hadoop-2.7.6-src]# pwd
/app/software/hadoop-2.7.6-src
[root@mini05 hadoop-2.7.6-src]# mvn clean install -DskipTests  # the clean goal can be omitted 
………………
### This takes a long time and may appear to hang partway through; if it hangs, press Ctrl+C and run the command again. It may hang several times over the whole process; just interrupt and rerun each time.
………………………………
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop Main 2.7.6 ........................... SUCCESS [  1.098 s]
[INFO] Apache Hadoop Build Tools .......................... SUCCESS [  1.151 s]
[INFO] Apache Hadoop Project POM .......................... SUCCESS [  0.851 s]
[INFO] Apache Hadoop Annotations .......................... SUCCESS [  0.390 s]
[INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [  0.298 s]
[INFO] Apache Hadoop Assemblies ........................... SUCCESS [  0.158 s]
[INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [  1.494 s]
[INFO] Apache Hadoop MiniKDC .............................. SUCCESS [  3.178 s]
[INFO] Apache Hadoop Auth ................................. SUCCESS [  1.902 s]
[INFO] Apache Hadoop Auth Examples ........................ SUCCESS [  0.916 s]
[INFO] Apache Hadoop Common ............................... SUCCESS [ 38.980 s]
[INFO] Apache Hadoop NFS .................................. SUCCESS [  1.097 s]
[INFO] Apache Hadoop KMS .................................. SUCCESS [ 13.255 s]
[INFO] Apache Hadoop Common Project ....................... SUCCESS [  0.099 s]
[INFO] Apache Hadoop HDFS ................................. SUCCESS [ 57.047 s]
[INFO] Apache Hadoop HttpFS ............................... SUCCESS [  7.958 s]
[INFO] Apache Hadoop HDFS BookKeeper Journal .............. SUCCESS [ 49.933 s]
[INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [  0.632 s]
[INFO] Apache Hadoop HDFS Project ......................... SUCCESS [  0.048 s]
[INFO] hadoop-yarn ........................................ SUCCESS [  0.045 s]
[INFO] hadoop-yarn-api .................................... SUCCESS [  3.861 s]
[INFO] hadoop-yarn-common ................................. SUCCESS [01:35 min]
[INFO] hadoop-yarn-server ................................. SUCCESS [  0.030 s]
[INFO] hadoop-yarn-server-common .......................... SUCCESS [  1.002 s]
[INFO] hadoop-yarn-server-nodemanager ..................... SUCCESS [  1.825 s]
[INFO] hadoop-yarn-server-web-proxy ....................... SUCCESS [  0.499 s]
[INFO] hadoop-yarn-server-applicationhistoryservice ....... SUCCESS [  0.940 s]
[INFO] hadoop-yarn-server-resourcemanager ................. SUCCESS [  3.872 s]
[INFO] hadoop-yarn-server-tests ........................... SUCCESS [  0.759 s]
[INFO] hadoop-yarn-client ................................. SUCCESS [  0.687 s]
[INFO] hadoop-yarn-server-sharedcachemanager .............. SUCCESS [  0.850 s]
[INFO] hadoop-yarn-applications ........................... SUCCESS [  0.034 s]
[INFO] hadoop-yarn-applications-distributedshell .......... SUCCESS [  0.396 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SUCCESS [  0.272 s]
[INFO] hadoop-yarn-site ................................... SUCCESS [  0.040 s]
[INFO] hadoop-yarn-registry ............................... SUCCESS [  0.711 s]
[INFO] hadoop-yarn-project ................................ SUCCESS [  0.248 s]
[INFO] hadoop-mapreduce-client ............................ SUCCESS [  0.114 s]
[INFO] hadoop-mapreduce-client-core ....................... SUCCESS [  2.589 s]
[INFO] hadoop-mapreduce-client-common ..................... SUCCESS [  1.546 s]
[INFO] hadoop-mapreduce-client-shuffle .................... SUCCESS [  0.467 s]
[INFO] hadoop-mapreduce-client-app ........................ SUCCESS [  1.305 s]
[INFO] hadoop-mapreduce-client-hs ......................... SUCCESS [  0.780 s]
[INFO] hadoop-mapreduce-client-jobclient .................. SUCCESS [ 19.947 s]
[INFO] hadoop-mapreduce-client-hs-plugins ................. SUCCESS [  0.364 s]
[INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [  0.831 s]
[INFO] hadoop-mapreduce ................................... SUCCESS [  0.169 s]
[INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [  8.894 s]
[INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [ 34.396 s]
[INFO] Apache Hadoop Archives ............................. SUCCESS [  0.309 s]
[INFO] Apache Hadoop Rumen ................................ SUCCESS [  0.471 s]
[INFO] Apache Hadoop Gridmix .............................. SUCCESS [  0.484 s]
[INFO] Apache Hadoop Data Join ............................ SUCCESS [  0.187 s]
[INFO] Apache Hadoop Ant Tasks ............................ SUCCESS [  0.099 s]
[INFO] Apache Hadoop Extras ............................... SUCCESS [  0.386 s]
[INFO] Apache Hadoop Pipes ................................ SUCCESS [  0.024 s]
[INFO] Apache Hadoop OpenStack support .................... SUCCESS [  0.445 s]
[INFO] Apache Hadoop Amazon Web Services support .......... SUCCESS [03:23 min]
[INFO] Apache Hadoop Azure support ........................ SUCCESS [ 17.180 s]
[INFO] Apache Hadoop Client ............................... SUCCESS [  1.042 s]
[INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [  0.862 s]
[INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [  1.292 s]
[INFO] Apache Hadoop Tools Dist ........................... SUCCESS [  0.614 s]
[INFO] Apache Hadoop Tools ................................ SUCCESS [  0.033 s]
[INFO] Apache Hadoop Distribution 2.7.6 ................... SUCCESS [  0.124 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 09:51 min
[INFO] Finished at: 2018-06-08T16:28:47+08:00
[INFO] ------------------------------------------------------------------------

[root@mini05 hadoop-2.7.6-src]# mvn package -Pdist,native -DskipTests -Dtar   
…………………………
[INFO] Executing tasks
main:
     [exec] $ tar cf hadoop-2.7.6.tar hadoop-2.7.6
     [exec] $ gzip -f hadoop-2.7.6.tar
     [exec] 
     [exec] Hadoop dist tar available at: /app/software/hadoop-2.7.6-src/hadoop-dist/target/hadoop-2.7.6.tar.gz  # location of the finished Hadoop tarball 
     [exec] 
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-dist ---
[INFO] Building jar: /app/software/hadoop-2.7.6-src/hadoop-dist/target/hadoop-dist-2.7.6-javadoc.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop Main 2.7.6 ........................... SUCCESS [  7.433 s]
[INFO] Apache Hadoop Build Tools .......................... SUCCESS [  0.789 s]
[INFO] Apache Hadoop Project POM .......................... SUCCESS [ 21.333 s]
[INFO] Apache Hadoop Annotations .......................... SUCCESS [  1.470 s]
[INFO] Apache Hadoop Assemblies ........................... SUCCESS [  0.135 s]
[INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [ 13.132 s]
[INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [  2.676 s]
[INFO] Apache Hadoop MiniKDC .............................. SUCCESS [  3.423 s]
[INFO] Apache Hadoop Auth ................................. SUCCESS [  3.391 s]
[INFO] Apache Hadoop Auth Examples ........................ SUCCESS [  2.545 s]
[INFO] Apache Hadoop Common ............................... SUCCESS [01:06 min]
[INFO] Apache Hadoop NFS .................................. SUCCESS [  3.635 s]
[INFO] Apache Hadoop KMS .................................. SUCCESS [  9.051 s]
[INFO] Apache Hadoop Common Project ....................... SUCCESS [  0.044 s]
[INFO] Apache Hadoop HDFS ................................. SUCCESS [01:17 min]
[INFO] Apache Hadoop HttpFS ............................... SUCCESS [ 12.516 s]
[INFO] Apache Hadoop HDFS BookKeeper Journal .............. SUCCESS [  9.920 s]
[INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [  2.311 s]
[INFO] Apache Hadoop HDFS Project ......................... SUCCESS [  0.029 s]
[INFO] hadoop-yarn ........................................ SUCCESS [  0.036 s]
[INFO] hadoop-yarn-api .................................... SUCCESS [ 22.028 s]
[INFO] hadoop-yarn-common ................................. SUCCESS [ 16.103 s]
[INFO] hadoop-yarn-server ................................. SUCCESS [  0.041 s]
[INFO] hadoop-yarn-server-common .......................... SUCCESS [  6.938 s]
[INFO] hadoop-yarn-server-nodemanager ..................... SUCCESS [ 10.326 s]
[INFO] hadoop-yarn-server-web-proxy ....................... SUCCESS [  2.036 s]
[INFO] hadoop-yarn-server-applicationhistoryservice ....... SUCCESS [  4.360 s]
[INFO] hadoop-yarn-server-resourcemanager ................. SUCCESS [ 11.051 s]
[INFO] hadoop-yarn-server-tests ........................... SUCCESS [  3.508 s]
[INFO] hadoop-yarn-client ................................. SUCCESS [  3.938 s]
[INFO] hadoop-yarn-server-sharedcachemanager .............. SUCCESS [  2.340 s]
[INFO] hadoop-yarn-applications ........................... SUCCESS [  0.022 s]
[INFO] hadoop-yarn-applications-distributedshell .......... SUCCESS [  1.674 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SUCCESS [  1.333 s]
[INFO] hadoop-yarn-site ................................... SUCCESS [  0.020 s]
[INFO] hadoop-yarn-registry ............................... SUCCESS [  3.027 s]
[INFO] hadoop-yarn-project ................................ SUCCESS [  2.675 s]
[INFO] hadoop-mapreduce-client ............................ SUCCESS [  0.086 s]
[INFO] hadoop-mapreduce-client-core ....................... SUCCESS [ 11.176 s]
[INFO] hadoop-mapreduce-client-common ..................... SUCCESS [ 10.672 s]
[INFO] hadoop-mapreduce-client-shuffle .................... SUCCESS [  2.336 s]
[INFO] hadoop-mapreduce-client-app ........................ SUCCESS [  5.157 s]
[INFO] hadoop-mapreduce-client-hs ......................... SUCCESS [  3.429 s]
[INFO] hadoop-mapreduce-client-jobclient .................. SUCCESS [  2.947 s]
[INFO] hadoop-mapreduce-client-hs-plugins ................. SUCCESS [  1.182 s]
[INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [  3.813 s]
[INFO] hadoop-mapreduce ................................... SUCCESS [  1.939 s]
[INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [  2.445 s]
[INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [  5.319 s]
[INFO] Apache Hadoop Archives ............................. SUCCESS [  1.183 s]
[INFO] Apache Hadoop Rumen ................................ SUCCESS [  3.061 s]
[INFO] Apache Hadoop Gridmix .............................. SUCCESS [  2.350 s]
[INFO] Apache Hadoop Data Join ............................ SUCCESS [  1.619 s]
[INFO] Apache Hadoop Ant Tasks ............................ SUCCESS [  1.436 s]
[INFO] Apache Hadoop Extras ............................... SUCCESS [  1.817 s]
[INFO] Apache Hadoop Pipes ................................ SUCCESS [  5.664 s]
[INFO] Apache Hadoop OpenStack support .................... SUCCESS [  2.720 s]
[INFO] Apache Hadoop Amazon Web Services support .......... SUCCESS [  2.630 s]
[INFO] Apache Hadoop Azure support ........................ SUCCESS [  2.453 s]
[INFO] Apache Hadoop Client ............................... SUCCESS [  6.406 s]
[INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [  0.674 s]
[INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [  3.096 s]
[INFO] Apache Hadoop Tools Dist ........................... SUCCESS [  6.273 s]
[INFO] Apache Hadoop Tools ................................ SUCCESS [  0.032 s]
[INFO] Apache Hadoop Distribution 2.7.6 ................... SUCCESS [ 25.962 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 07:28 min
[INFO] Finished at: 2018-06-08T16:37:06+08:00
[INFO] ------------------------------------------------------------------------

  

3.2.1 Check the firewall status

[root@localhost ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2017-10-19 14:45:46 CST; 22min ago
 Main PID: 607 (firewalld)
   CGroup: /system.slice/firewalld.service
           └─607 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Oct 19 14:45:44 localhost.localdomain systemd[1]: Starting firewalld - dynami...
Oct 19 14:45:46 localhost.localdomain systemd[1]: Started firewalld - dynamic...
Hint: Some lines were ellipsized, use -l to show in full.

3.2.  Environment variables

[root@mini05 ~]$ pwd
/app
[root@mini05 ~]$ ll -d jdk*  # choose the JDK version to suit your needs; JDK 1.8 is backward-compatible with 1.7   
lrwxrwxrwx 1 yun yun   11 Mar 15 14:58 jdk -> jdk1.8.0_112
drwxr-xr-x 8 yun yun 4096 Dec 20 13:27 jdk1.8.0_112
[root@mini05 profile.d]$ pwd
/etc/profile.d
[root@mini05 profile.d]$ cat jdk.sh # Java environment variables   
export JAVA_HOME=/app/jdk
export JRE_HOME=/app/jdk/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH

[root@mini05 profile.d]# source /etc/profile
[root@mini05 profile.d]$ java -version  
java version "1.8.0_112"
Java(TM) SE Runtime Environment (build 1.8.0_112-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode)

  

4. Build the Hadoop source

1. Enter the root directory of the Hadoop source tree and run the build command (internet access required):

[root@localhost ~]# cd /opt/hadoop-2.7.4-src
[root@localhost hadoop-2.7.4-src]# mvn clean package -Pdist -DskipTests -Dtar

2. The build takes a long time; be patient.

3. A successful build looks like the following (screenshot omitted):

8.2.  Inspect the native libraries

[root@mini05 native]# pwd
/app/software/hadoop-2.7.6-src/hadoop-dist/target/hadoop-2.7.6/lib/native
[root@mini05 native]# file *  
libhadoop.a:        current ar archive
libhadooppipes.a:   current ar archive
libhadoop.so:       symbolic link to `libhadoop.so.1.0.0'
libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=1092bb61838f0b1c0d982b20dc8223ae93ba708f, not stripped
libhadooputils.a:   current ar archive
libhdfs.a:          current ar archive
libhdfs.so:         symbolic link to `libhdfs.so.0.0.0'
libhdfs.so.0.0.0:   ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=e719718169b5319c4f9109b5b5d8f33cebe65ed6, not stripped

  

7.1 Notes on creating the virtual machine

  1. Do not make the VM's memory or virtual disk too small, or it will cause unnecessary trouble. For example, with too little memory the Maven build of Hadoop later on can fail with an out-of-memory error. (I allocated 2 GB of memory and a 20 GB virtual disk; one mitigation is sketched after this list.)
  2. VM networking: since installation and compilation require internet access, I used bridged networking and did not set a static IP, so after the setup succeeds, before every run of Hadoop you have to check the current IP address and update the hostname-to-IP mapping for things to work.
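If memory is tight, one possible mitigation (my suggestion, not part of the original setup) is to raise the heap available to Maven's JVM through the standard MAVEN_OPTS variable:

export MAVEN_OPTS="-Xms512m -Xmx2g"   # let the Maven JVM use up to 2 GB of heap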

6. Install FindBugs

3.3.2 Extract all the software into the /opt directory

Use the command tar -zxvf <package path> -C <destination path>, for example:

[root@localhost opt]# tar -zxvf /opt/apache-ant-1.9.4-bin.tar.gz -C /opt/
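To extract everything in one pass, a small loop works (a sketch; it assumes all the packages sit in /opt as .tar.gz files):

for f in /opt/*.tar.gz; do tar -zxvf "$f" -C /opt/; done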

3.3.3 Install Java, Ant and FindBugs

1. Configure the environment variables: edit the configuration file with [root@localhost opt]# vi /etc/profile and append at the end:

export JAVA_HOME=/opt/jdk1.7.0_80      
export ANT_HOME=/opt/apache-ant-1.9.4    
export FINDBUGS_HOME=/opt/findbugs-1.3.9
export PATH=$PATH:$FINDBUGS_HOME/bin:$ANT_HOME/bin:$JAVA_HOME/bin

2. Apply the configuration: [root@localhost opt]# source /etc/profile

3. Verify the installations:

[root@localhost opt]# java -version
java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
[root@localhost opt]# ant -version
Apache Ant(TM) version 1.9.4 compiled on April 29 2014
[root@localhost opt]# findbugs -version
1.3.9

2. Why build from source

The official Hadoop binary package (hadoop-2.7.4.tar.gz) ships with native libraries built for 32-bit systems, so deploying it on a 64-bit system leads to errors. It is therefore best to rebuild the Hadoop source (hadoop-2.7.4-src.tar.gz) on your own 64-bit system (alternatively, you can download a 64-bit Hadoop package that someone else has already built). A quick check for the native libraries is shown below.
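A quick way to check whether the native libraries load correctly on a given system (once hadoop is on the PATH) is the built-in checknative command:

hadoop checknative -a   # lists each native library and whether it loaded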

7.2 Building other versions of Hadoop

The overall build process is essentially the same; only the versions of the required dependency software differ, and these should be looked up in the BUILDING.txt file at the root of the Hadoop source package. (Partial screenshot of that file omitted.)

3.1.2 List the installed OpenJDK packages

[root@localhost ~]# rpm -qa | grep java
java-1.7.0-openjdk-1.7.0.91-2.6.2.3.el7.x86_64
tzdata-java-2015g-1.el7.noarch
python-javapackages-3.4.1-11.el7.noarch
javapackages-tools-3.4.1-11.el7.noarch
java-1.8.0-openjdk-headless-1.8.0.65-3.b17.el7.x86_64
java-1.8.0-openjdk-1.8.0.65-3.b17.el7.x86_64
java-1.7.0-openjdk-headless-1.7.0.91-2.6.2.3.el7.x86_64

5. Install Hadoop

1. Copy the built Hadoop files to the /opt/ directory:

[root@localhost ~]# cp -r /opt/hadoop-2.7.4-src/hadoop-dist/target/hadoop-2.7.4 /opt/hadoop-2.7.4

(Screenshot of the build output directory omitted)

2. Configure the environment variables:

[root@localhost opt]# vi /etc/profile
Append at the end:
 export HADOOP_HOME=/opt/hadoop-2.7.4
 export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
[root@localhost opt]# source /etc/profile

3. Change the hostname to hadoop; it will be used later in the Hadoop configuration files:

[root@localhost ~]# hostnamectl set-hostname hadoop
[root@localhost ~]# hostnamectl status
   Static hostname: hadoop
         Icon name: computer-vm
           Chassis: vm
        Machine ID: ca98320ca97f4fbebdb7d5a4bd32c052
           Boot ID: 6953000caef747c6a65be85f40921f0e
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-327.el7.x86_64
      Architecture: x86-64

4. Check the current IP address:

[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:6e:65:f5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.43.216/24 brd 192.168.43.255 scope global dynamic enp0s3
       valid_lft 3524sec preferred_lft 3524sec
    inet6 fe80::a00:27ff:fe6e:65f5/64 scope link 
       valid_lft forever preferred_lft forever

5. Set up local hosts resolution, adding a mapping between the hostname and the current IP:

[root@localhost ~]# vi /etc/hosts
Append at the end:
# current IP address   hostname
192.168.43.216 hadoop
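To confirm the mapping resolves:

[root@localhost ~]# ping -c 1 hadoop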

6. Configure Hadoop; pseudo-distributed mode requires modifying five configuration files

(1) Enter the configuration directory:
[root@localhost ~]# cd /opt/hadoop-2.7.4/etc/hadoop

(2) Modify the first configuration file: hadoop-env.sh 
[root@localhost hadoop]# vi hadoop-env.sh
Find the line 'export JAVA_HOME=' and change it to:
export JAVA_HOME=/opt/jdk1.7.0_80

(3) Modify the second configuration file: core-site.xml      
[root@localhost hadoop]# vi core-site.xml
<configuration>     
    <!-- The URI of the default file system, i.e. the address of the HDFS NameNode -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop:9000</value>
    </property>

    <!-- Directory where Hadoop stores files generated at runtime -->   
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-2.7.4/tmp</value>
    </property>
</configuration>

(4) Modify the third configuration file: hdfs-site.xml
[root@localhost hadoop]# vi hdfs-site.xml
<configuration> 
        <!-- Number of HDFS replicas -->
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property> 
</configuration>   

(5) Modify the fourth configuration file: mapred-site.xml 
[root@localhost hadoop]# mv mapred-site.xml.template mapred-site.xml
[root@localhost hadoop]# vi mapred-site.xml
<configuration> 
        <!-- Run MapReduce on YARN -->
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>     
</configuration>

(6) Modify the fifth configuration file: yarn-site.xml
[root@localhost hadoop]# vi yarn-site.xml
<configuration>
        <!-- The address of the YARN ResourceManager -->
        <property>
            <name>yarn.resourcemanager.hostname</name>
            <value>hadoop</value>
        </property>
        <!-- How reducers fetch data (shuffle service) -->
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
</configuration>