Production Spark in Practice (7): Hadoop Cluster Installation on a 5-Node Distributed Cluster


1. Download Hadoop.

Download page: http://hadoop.apache.org/releases.html

Version: Hadoop 2.6.5 (the 2.6.x line is relatively stable).


2. Upload the archive to the master node with WinSCP, then verify:

[root@master rhzf_spark_setupTools]# ls
hadoop-2.6.5.tar.gz  jdk-8u121-linux-x64.tar.gz  scala-2.11.8.zip
[root@master rhzf_spark_setupTools]#
 

3. Extract and install Hadoop.

[root@master rhzf_spark_setupTools]# tar -zxvf hadoop-2.6.5.tar.gz
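The environment variables set below assume the extracted tree lives at /usr/local/hadoop-2.6.5. If the setup directory is elsewhere, a minimal sketch to put it in place (paths are assumptions matching the HADOOP_HOME configured below):

# move the extracted tree so it matches HADOOP_HOME
mv hadoop-2.6.5 /usr/local/hadoop-2.6.5
# or extract directly into /usr/local in one step:
# tar -zxvf hadoop-2.6.5.tar.gz -C /usr/local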
 

[root@master hadoop-2.6.5]# vi /etc/profile

export JAVA_HOME=/usr/local/jdk1.8.0_121
export SCALA_HOME=/usr/local/scala-2.11.8
export HADOOP_HOME=/usr/local/hadoop-2.6.5


export PATH=.:$PATH:$JAVA_HOME/bin:$SCALA_HOME/bin:$HADOOP_HOME/bin


Run source /etc/profile so the HADOOP_HOME and PATH changes just made take effect in the current shell:

[root@master hadoop-2.6.5]# source /etc/profile
[root@master hadoop-2.6.5]#
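As a quick sanity check, the hadoop command should now resolve from the updated PATH and report its version:

hadoop version
# expected first line: Hadoop 2.6.5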


4. Edit the Hadoop core-site.xml configuration file.

[root@master hadoop]# cat core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at


   http://www.apache.org/licenses/LICENSE-2.0


 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License. See accompanying LICENSE file.
-->


<!-- Put site-specific property overrides in this file. -->


<configuration>
   <property>
       <name>hadoop.tmp.dir</name>
       <value>/usr/local/hadoop-2.6.5/tmp</value>
       <description>hadoop.tmp.dir</description>
   </property>
   <property>
       <name>fs.defaultFS</name>
       <value>hdfs://Master:9000</value>
   </property>
   <property>
      <name>hadoop.native.lib</name>
      <value>false</value>
      <description>do not use native hadoop libraries</description>
     </property>
</configuration>
[root@master hadoop]#
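Two points worth noting here: the Master hostname in fs.defaultFS must resolve on every node (presumably handled by the rhzf_hosts_scp.sh script seen earlier in this series), and while Hadoop normally creates hadoop.tmp.dir on its own, pre-creating it surfaces permission problems early. A minimal sketch:

mkdir -p /usr/local/hadoop-2.6.5/tmp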
 
 
 

5. Edit the Hadoop hdfs-site.xml configuration file.

[root@master hadoop]# cat hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at


   http://www.apache.org/licenses/LICENSE-2.0


 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License. See accompanying LICENSE file.
-->


<!-- Put site-specific property overrides in this file. -->


<configuration>
   <property>
       <name>dfs.replication</name>
       <value>3</value>
   </property>
   <property>
       <name>dfs.namenode.name.dir</name>
       <value>/usr/local/hadoop-2.6.5/tmp/dfs/name</value>
   </property>
   <property>
       <name>dfs.datanode.data.dir</name>
       <value>/usr/local/hadoop-2.6.5/tmp/dfs/data</value>
   </property>
</configuration>
[root@master hadoop]#
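To confirm these values are actually picked up from the config directory, hdfs getconf can read individual keys back; for example:

hdfs getconf -confKey dfs.replication
# should print: 3
hdfs getconf -confKey dfs.namenode.name.dir
# should print: /usr/local/hadoop-2.6.5/tmp/dfs/name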
 


6. Edit the Hadoop slaves file on the master. The master serves as both the master node and a data-processing node, so it is listed alongside the four workers.

[root@master hadoop]# cat slaves
master
worker01
worker02
worker03
worker04
[root@master hadoop]#
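start-dfs.sh and start-yarn.sh will ssh into every host listed in slaves, so it is worth confirming passwordless ssh to each name before starting anything (a sketch using the hostnames above):

for h in master worker01 worker02 worker03 worker04
do
ssh $h hostname
done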


This completes the smallest working Hadoop configuration.

7. Write a script to distribute the Hadoop tree to the worker nodes.

[root@master local]# cd rhzf_setup_scripts
[root@master rhzf_setup_scripts]# ls
rhzf_hosts_scp.sh  rhzf_scala.sh  rhzf_ssh.sh
[root@master rhzf_setup_scripts]# vi rhzf_hadoop.sh


#!/bin/sh
for i in 238 239 240 241
do
# push the Hadoop tree and the updated /etc/profile to each worker
scp -rq /usr/local/hadoop-2.6.5 root@10.100.100.$i:/usr/local/hadoop-2.6.5
scp -rq /etc/profile root@10.100.100.$i:/etc/profile
# sources the profile in that one remote session only
ssh root@10.100.100.$i source /etc/profile
done


Run the script:

[root@master rhzf_setup_scripts]# chmod u+x rhzf_hadoop.sh
[root@master rhzf_setup_scripts]# ls
rhzf_hadoop.sh  rhzf_hosts_scp.sh  rhzf_scala.sh  rhzf_ssh.sh
[root@master rhzf_setup_scripts]# ./rhzf_hadoop.sh
[root@master rhzf_setup_scripts]#
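Note that sourcing /etc/profile over ssh only affects that single remote session. To confirm the new environment really takes effect for fresh logins, a login-shell check from the master works (a sketch; IPs as in the script above):

for i in 238 239 240 241
do
ssh root@10.100.100.$i 'bash -lc "echo \$HADOOP_HOME"'
done
# each line should print /usr/local/hadoop-2.6.5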


8. Verify on the worker nodes.

Last login: Wed Apr 19 10:12:34 2017 from 132.150.75.19
[root@worker01 ~]# cd /usr/local
[root@worker01 local]# ls
bin  etc  games  hadoop-2.6.5  include  lib  lib64  libexec  sbin  scala-2.11.8  share  src
[root@worker01 local]#
 
 
   Last login: Wed Apr 19 10:12:39 2017 from 132.150.75.19
[root@worker02 ~]# cd /usr/local
[root@worker02 local]# ls
bin  etc  games  hadoop-2.6.5  include  lib  lib64  libexec  sbin  scala-2.11.8  share  src
[root@worker02 local]#
 
 
 
   Last login: Wed Apr 19 10:12:44 2017 from 132.150.75.19
[root@worker03 ~]# cd /usr/local
[root@worker03 local]# ls
bin  etc  games  hadoop-2.6.5  include  lib  lib64  libexec  sbin  scala-2.11.8  share  src
[root@worker03 local]#
 
 
   Last login: Wed Apr 19 10:12:49 2017 from 132.150.75.19
[root@worker04 ~]# cd /usr/local
[root@worker04 local]# ls
bin  etc  games  hadoop-2.6.5  include  lib  lib64  libexec  sbin  scala-2.11.8  share  src
[root@worker04 local]#
 

9. Format the Hadoop cluster file system.

[root@master hadoop-2.6.5]# cd bin
[root@master bin]# ls
container-executor  hadoop  hadoop.cmd  hdfs  hdfs.cmd  mapred  mapred.cmd  rcc  test-container-executor  yarn  yarn.cmd
[root@master bin]# hdfs namenode -format
17/04/19 15:21:05 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/10.100.100.237
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.5
STARTUP_MSG:   classpath = /usr/local/hadoop-2.6.5/etc/hadoop:/usr/local/hadoop-2.6.5/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-    ... (truncated)
STARTUP_MSG:   build = https://github.com/apache/hadoop.git -r e8c9fe0b4c252caf2ebf1464220599650f119997; compiled by 'sjlee' on 2016-10-02T23:43Z
STARTUP_MSG:   java = 1.8.0_121
************************************************************/
17/04/19 15:21:05 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/04/19 15:21:05 INFO namenode.NameNode: createNameNode [-format]
17/04/19 15:21:05 WARN common.Util: Path /usr/local/hadoop-2.6.5/tmp/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
17/04/19 15:21:05 WARN common.Util: Path /usr/local/hadoop-2.6.5/tmp/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-d0ca7040-2c7d-419b-85be-24323b923f2f
17/04/19 15:21:06 INFO namenode.FSNamesystem: No KeyProvider found.
17/04/19 15:21:06 INFO namenode.FSNamesystem: fsLock is fair:true
17/04/19 15:21:06 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/04/19 15:21:06 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/04/19 15:21:06 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/04/19 15:21:06 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Apr 19 15:21:06
17/04/19 15:21:06 INFO util.GSet: Computing capacity for map BlocksMap
17/04/19 15:21:06 INFO util.GSet: VM type       = 64-bit
17/04/19 15:21:06 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/04/19 15:21:06 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/04/19 15:21:06 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/04/19 15:21:06 INFO blockmanagement.BlockManager: defaultReplication         = 3
17/04/19 15:21:06 INFO blockmanagement.BlockManager: maxReplication             = 512
17/04/19 15:21:06 INFO blockmanagement.BlockManager: minReplication             = 1
17/04/19 15:21:06 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/04/19 15:21:06 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/04/19 15:21:06 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/04/19 15:21:06 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/04/19 15:21:06 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
17/04/19 15:21:06 INFO namenode.FSNamesystem: supergroup          = supergroup
17/04/19 15:21:06 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/04/19 15:21:06 INFO namenode.FSNamesystem: HA Enabled: false
17/04/19 15:21:06 INFO namenode.FSNamesystem: Append Enabled: true
17/04/19 15:21:06 INFO util.GSet: Computing capacity for map INodeMap
17/04/19 15:21:06 INFO util.GSet: VM type       = 64-bit
17/04/19 15:21:06 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
17/04/19 15:21:06 INFO util.GSet: capacity      = 2^20 = 1048576 entries
17/04/19 15:21:06 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/04/19 15:21:06 INFO util.GSet: Computing capacity for map cachedBlocks
17/04/19 15:21:06 INFO util.GSet: VM type       = 64-bit
17/04/19 15:21:06 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/04/19 15:21:06 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/04/19 15:21:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/04/19 15:21:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/04/19 15:21:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
17/04/19 15:21:06 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/04/19 15:21:06 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/04/19 15:21:06 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/04/19 15:21:06 INFO util.GSet: VM type       = 64-bit
17/04/19 15:21:06 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
17/04/19 15:21:06 INFO util.GSet: capacity      = 2^15 = 32768 entries
17/04/19 15:21:06 INFO namenode.NNConf: ACLs enabled? false
17/04/19 15:21:06 INFO namenode.NNConf: XAttrs enabled? true
17/04/19 15:21:06 INFO namenode.NNConf: Maximum size of an xattr: 16384
17/04/19 15:21:06 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1333219187-10.100.100.237-1492586466692
17/04/19 15:21:06 INFO common.Storage: Storage directory /usr/local/hadoop-2.6.5/tmp/dfs/name has been successfully formatted.
17/04/19 15:21:06 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/local/hadoop-2.6.5/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
17/04/19 15:21:06 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/hadoop-2.6.5/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 321 bytes saved in 0 seconds.
17/04/19 15:21:06 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/04/19 15:21:06 INFO util.ExitUtil: Exiting with status 0
17/04/19 15:21:06 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/10.100.100.237
************************************************************/
[root@master bin]#
 
 

10. Start the Hadoop cluster.
 
  [root@master sbin]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [Master]
Master: Warning: Permanently added the ECDSA host key for IP address '10.100.100.237' to the list of known hosts.
Master: starting namenode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-namenode-master.out
worker04: Warning: Permanently added 'worker04' (ECDSA) to the list of known hosts.
master: starting datanode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-datanode-master.out
worker04: starting datanode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-datanode-worker04.out
worker03: starting datanode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-datanode-worker03.out
worker01: starting datanode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-datanode-worker01.out
worker02: starting datanode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-datanode-worker02.out
worker04: /usr/local/hadoop-2.6.5/bin/hdfs: line 276: /usr/local/jdk1.8.0_121/bin/java: No such file or directory
worker03: /usr/local/hadoop-2.6.5/bin/hdfs: line 276: /usr/local/jdk1.8.0_121/bin/java: No such file or directory
worker01: /usr/local/hadoop-2.6.5/bin/hdfs: line 276: /usr/local/jdk1.8.0_121/bin/java: No such file or directory
worker02: /usr/local/hadoop-2.6.5/bin/hdfs: line 276: /usr/local/jdk1.8.0_121/bin/java: No such file or directory
Starting secondary namenodes [0.0.0.0]
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.6.5/logs/yarn-root-resourcemanager-master.out
worker04: starting nodemanager, logging to /usr/local/hadoop-2.6.5/logs/yarn-root-nodemanager-worker04.out
worker02: starting nodemanager, logging to /usr/local/hadoop-2.6.5/logs/yarn-root-nodemanager-worker02.out
master: starting nodemanager, logging to /usr/local/hadoop-2.6.5/logs/yarn-root-nodemanager-master.out
worker01: starting nodemanager, logging to /usr/local/hadoop-2.6.5/logs/yarn-root-nodemanager-worker01.out
worker03: starting nodemanager, logging to /usr/local/hadoop-2.6.5/logs/yarn-root-nodemanager-worker03.out
worker04: /usr/local/hadoop-2.6.5/bin/yarn: line 284: /usr/local/jdk1.8.0_121/bin/java: No such file or directory
worker02: /usr/local/hadoop-2.6.5/bin/yarn: line 284: /usr/local/jdk1.8.0_121/bin/java: No such file or directory
worker01: /usr/local/hadoop-2.6.5/bin/yarn: line 284: /usr/local/jdk1.8.0_121/bin/java: No such file or directory
worker03: /usr/local/hadoop-2.6.5/bin/yarn: line 284: /usr/local/jdk1.8.0_121/bin/java: No such file or directory
[root@master sbin]#
[root@master sbin]#  
 
[root@master sbin]# jps
20609 SecondaryNameNode
20420 DataNode
20789 ResourceManager
20903 NodeManager
21225 Jps
20266 NameNode
[root@master sbin]#
 
 

Java had been upgraded on the master node earlier, but it was never upgraded on the four worker nodes, which explains the /usr/local/jdk1.8.0_121/bin/java: No such file or directory errors above.

[root@worker04 local]# java -version
java version "1.7.0_51"
OpenJDK Runtime Environment (rhel-2.4.5.5.el7-x86_64 u51-b31)
OpenJDK 64-Bit Server VM (build 24.51-b03, mixed mode)
[root@worker04 local]# jps
bash: jps: command not found...
[root@worker04 local]#
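Rather than logging into each worker, one loop from the master shows which Java every worker currently has (a sketch):

for i in 238 239 240 241
do
ssh root@10.100.100.$i java -version
done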


11. Install Java 8 on the worker nodes.

[root@master rhzf_setup_scripts]# cat rhzf_jdk.sh
#!/bin/sh
for i in  238 239 240 241
do
# remove the old OpenJDK 1.7 packages, then push JDK 8 from the master
ssh root@10.100.100.$i rpm -e --nodeps java-1.7.0-openjdk-1.7.0.51-2.4.5.5.el7.x86_64
ssh root@10.100.100.$i rpm -e --nodeps java-1.7.0-openjdk-headless-1.7.0.51-2.4.5.5.el7.x86_64
scp -rq /usr/local/jdk1.8.0_121 root@10.100.100.$i:/usr/local/jdk1.8.0_121
done
[root@master rhzf_setup_scripts]# ./rhzf_jdk.sh
-bash: ./rhzf_jdk.sh: Permission denied
[root@master rhzf_setup_scripts]# ls
rhzf_hadoop.sh  rhzf_hosts_scp.sh  rhzf_jdk.sh  rhzf_scala.sh  rhzf_ssh.sh
[root@master rhzf_setup_scripts]# chmod u+x  rhzf_jdk.sh
[root@master rhzf_setup_scripts]# ls
rhzf_hadoop.sh  rhzf_hosts_scp.sh  rhzf_jdk.sh  rhzf_scala.sh  rhzf_ssh.sh
[root@master rhzf_setup_scripts]# ./rhzf_jdk.sh
 

Reload the environment on each worker (the /etc/profile distributed earlier already sets JAVA_HOME):

# source /etc/profile


Verify that the JDK installation is complete on all four worker nodes:

[root@worker01 local]# java -version
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
[root@worker01 local]#



[root@worker02 bin]# java -version
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
[root@worker02 bin]#



[root@worker03 local]# source /etc/profile
[root@worker03 local]# java -version
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
[root@worker03 local]#




[root@worker04 local]# source /etc/profile
[root@worker04 local]# java -version
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
[root@worker04 local]#


12. Stop the Hadoop cluster. The "no datanode to stop" and "no nodemanager to stop" messages from the workers below confirm that those daemons never started on the first attempt.
[root@master sbin]#  stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [Master]
Master: stopping namenode
worker04: no datanode to stop
master: stopping datanode
worker02: no datanode to stop
worker01: no datanode to stop
worker03: no datanode to stop
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
worker04: no nodemanager to stop
worker03: no nodemanager to stop
master: stopping nodemanager
worker01: no nodemanager to stop
worker02: no nodemanager to stop
no proxyserver to stop
[root@master sbin]#


13. Restart the Hadoop cluster; this time everything comes up normally.
 [root@master sbin]# start-all.sh    
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [Master]
Master: starting namenode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-namenode-master.out
worker04: starting datanode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-datanode-worker04.out
master: starting datanode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-datanode-master.out
worker03: starting datanode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-datanode-worker03.out
worker01: starting datanode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-datanode-worker01.out
worker02: starting datanode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-datanode-worker02.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.6.5/logs/yarn-root-resourcemanager-master.out
worker04: starting nodemanager, logging to /usr/local/hadoop-2.6.5/logs/yarn-root-nodemanager-worker04.out
worker02: starting nodemanager, logging to /usr/local/hadoop-2.6.5/logs/yarn-root-nodemanager-worker02.out
master: starting nodemanager, logging to /usr/local/hadoop-2.6.5/logs/yarn-root-nodemanager-master.out
worker01: starting nodemanager, logging to /usr/local/hadoop-2.6.5/logs/yarn-root-nodemanager-worker01.out
worker03: starting nodemanager, logging to /usr/local/hadoop-2.6.5/logs/yarn-root-nodemanager-worker03.out
[root@master sbin]# jps
22869 ResourceManager
22330 NameNode
22490 DataNode
23323 Jps
22684 SecondaryNameNode
22990 NodeManager
[root@master sbin]#

Check the four worker nodes:

[root@worker01 local]# jps
20752 DataNode
20884 NodeManager
21001 Jps
[root@worker01 local]#


[root@worker02 bin]# jps
20771 DataNode
21019 Jps
20895 NodeManager
[root@worker02 bin]#


[root@worker03 local]#
[root@worker03 local]# jps
20528 DataNode
20658 NodeManager
20775 Jps
[root@worker03 local]#



[root@worker04 local]# jps
20624 NodeManager
20500 DataNode
20748 Jps
[root@worker04 local]#



14. The web UI at http://10.100.100.237:50070 would not open. Formatting the file system again (hdfs namenode -format) did not help, and neither did restarting the Hadoop cluster.
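A caution from standard HDFS behavior (not shown in the logs above): every hdfs namenode -format generates a new clusterID, and DataNodes that keep data directories from an earlier format will refuse to register with the reformatted NameNode. If a re-format is genuinely needed, stop the cluster and clear the dfs directories on all nodes first; a hedged sketch using the paths configured earlier:

stop-all.sh
for i in 237 238 239 240 241
do
ssh root@10.100.100.$i rm -rf /usr/local/hadoop-2.6.5/tmp/dfs
done
hdfs namenode -format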

15. Change the configuration: add dfs.http.address to hdfs-site.xml so the NameNode web UI binds to the master's address, and distribute the file to every node. worker03's copy is shown here:

[root@worker03 hadoop]# cat hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at


   http://www.apache.org/licenses/LICENSE-2.0


 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License. See accompanying LICENSE file.
-->


<!-- Put site-specific property overrides in this file. -->


<configuration>
   <property>
       <name>dfs.replication</name>
       <value>3</value>
   </property>
   <property>
       <name>dfs.namenode.name.dir</name>
       <value>/usr/local/hadoop-2.6.5/tmp/dfs/name</value>
   </property>
   <property>
       <name>dfs.datanode.data.dir</name>
       <value>/usr/local/hadoop-2.6.5/tmp/dfs/data</value>
   </property>
   <property>
       <name>dfs.http.address</name>
       <value>10.100.100.237:50070</value>
   </property>
</configuration>
[root@worker03 hadoop]#
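dfs.http.address still works in Hadoop 2.x but is a deprecated key; dfs.namenode.http-address is the current name for the same setting. After restarting the cluster, the UI can be checked from the shell before trying a browser (a sketch):

curl -I http://10.100.100.237:50070
# an HTTP/1.1 200 OK response means the NameNode web UI is reachable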
 
