Deploying an HDFS Distributed Cluster on Linux and Using the HDFS File System

Contents

I. Experiment
  1. Environment
  2. Deploying an HDFS distributed cluster on Linux
  3. Using the HDFS file system on Linux
II. Problems
  1. ssh-copy-id reports an error
  2. How to disable ssh host key checking
  3. Which configuration files HDFS uses
  4. "hadoop version" reports an error
  5. Starting the cluster reports an error
  6. Hadoop start and stop commands
  7. Uploading a file reports an error
  8. Common HDFS commands

I. Experiment

1. Environment

(1) Hosts

Table 1. Hosts

Host     Role                          Software       IP               Notes
hadoop   NameNode, SecondaryNameNode   hadoop 2.7.7   192.168.204.50
node01   DataNode                      hadoop 2.7.7   192.168.204.51
node02   DataNode                      hadoop 2.7.7   192.168.204.52
node03   DataNode                      hadoop 2.7.7   192.168.204.53

(2) SELinux

Check the current status:

[root@localhost ~]# sestatus

Disable it:

[root@localhost ~]# vim /etc/selinux/config
......
SELINUX=disabled
......

The change takes effect after a reboot; check again:

[root@localhost ~]# sestatus

(3) Firewall

Stop and mask the service:

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl mask firewalld

(4) Install Java

Install the OpenJDK development package on all four hosts (hadoop, node01, node02, node03):

[root@localhost ~]# yum install -y java-1.8.0-openjdk-devel.x86_64

Verify:

[root@localhost ~]# jps

(5) Host name resolution

[root@localhost ~]# vim /etc/hosts
......
192.168.204.50 hadoop
192.168.204.51 node01
192.168.204.52 node02
192.168.204.53 node03

(6) Set the host names

On each machine:

[root@localhost ~]# hostnamectl set-hostname <hostname>
[root@localhost ~]# bash

(7) Create an SSH key pair on the hadoop node

[root@hadoop ~]# mkdir /root/.ssh
[root@hadoop ~]# cd /root/.ssh/
[root@hadoop .ssh]# ssh-keygen -t rsa -b 2048 -N ''

(8) Distribute the key for passwordless login

[root@hadoop .ssh]# ssh-copy-id -i id_rsa.pub hadoop
[root@hadoop .ssh]# ssh-copy-id -i id_rsa.pub node01
[root@hadoop .ssh]# ssh-copy-id -i id_rsa.pub node02
[root@hadoop .ssh]# ssh-copy-id -i id_rsa.pub node03

2. Deploying an HDFS distributed cluster on Linux

(1) Official site

https://hadoop.apache.org/

Released versions:

https://archive.apache.org/dist/hadoop/common/

(2) Download

wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.7/hadoop-2.7.7.tar.gz

(3) Extract

tar -zxf hadoop-2.7.7.tar.gz

(4) Move into place

mv hadoop-2.7.7 /usr/local/hadoop

(5) Change ownership

chown -R root.root /usr/local/hadoop

(6) Verify the version

The environment file hadoop-env.sh must be edited first to declare the Java installation path and the Hadoop configuration directory (see Problem 4 below for the exact values):

[root@hadoop hadoop]# vim hadoop-env.sh

Then verify:

[root@hadoop hadoop]# ./bin/hadoop version

(7) Edit the worker-node list

[root@hadoop hadoop]# vim slaves

After the change the file lists the three DataNodes instead of the default localhost:

node01
node02
node03

(8) Official documentation

https://hadoop.apache.org/docs/

For this version:

https://hadoop.apache.org/docs/r2.7.7/

The core configuration reference documents the file-system and data-directory parameters:

https://hadoop.apache.org/docs/r2.7.7/hadoop-project-dist/hadoop-common/core-default.xml

(9) Edit the core configuration file

[root@hadoop hadoop]# vim core-site.xml

After the change:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop:9000</value>
        <description>hdfs file system</description>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/var/hadoop</value>
    </property>
</configuration>

(10) HDFS configuration reference

It documents the NameNode addresses and the replica count:

https://hadoop.apache.org/docs/r2.7.7/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

(11) Edit the HDFS configuration file

[root@hadoop hadoop]# vim hdfs-site.xml

After the change:

<configuration>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>hadoop:50070</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop:50090</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>

(12) Synchronize to the DataNodes

Confirm that rsync is installed:

[root@hadoop ~]# rpm -q rsync

Synchronize (a loop version is sketched below):

[root@hadoop ~]# rsync -aXSH --delete /usr/local/hadoop node01:/usr/local/
[root@hadoop ~]# rsync -aXSH --delete /usr/local/hadoop node02:/usr/local/
[root@hadoop ~]# rsync -aXSH --delete /usr/local/hadoop node03:/usr/local/
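Typing the same rsync command once per node is easy to get wrong. A minimal equivalent loop, assuming the node01-node03 host names defined in /etc/hosts above:

#!/bin/bash
# Push the Hadoop tree to every DataNode.
# --delete makes each copy an exact mirror of the master copy on hadoop.
for node in node01 node02 node03; do
    rsync -aXSH --delete /usr/local/hadoop "${node}":/usr/local/
done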
(13) Initialize HDFS

Create the data directory declared as hadoop.tmp.dir:

[root@hadoop ~]# mkdir /var/hadoop

(14) List the available commands

[root@hadoop hadoop]# ./bin/hdfs
Usage: hdfs [--config confdir] [--loglevel loglevel] COMMAND
       where COMMAND is one of:
  dfs                  run a filesystem command on the file systems supported in Hadoop.
  classpath            prints the classpath
  namenode -format     format the DFS filesystem
  secondarynamenode    run the DFS secondary namenode
  namenode             run the DFS namenode
  journalnode          run the DFS journalnode
  zkfc                 run the ZK Failover Controller daemon
  datanode             run a DFS datanode
  dfsadmin             run a DFS admin client
  haadmin              run a DFS HA admin client
  fsck                 run a DFS filesystem checking utility
  balancer             run a cluster balancing utility
  jmxget               get JMX exported values from NameNode or DataNode.
  mover                run a utility to move block replicas across storage types
  oiv                  apply the offline fsimage viewer to an fsimage
  oiv_legacy           apply the offline fsimage viewer to an legacy fsimage
  oev                  apply the offline edits viewer to an edits file
  fetchdt              fetch a delegation token from the NameNode
  getconf              get config values from configuration
  groups               get the groups which users belong to
  snapshotDiff         diff two snapshots of a directory or diff the
                       current directory contents with a snapshot
  lsSnapshottableDir   list all snapshottable dirs owned by the current user
                       Use -help to see options
  portmap              run a portmap service
  nfs3                 run an NFS version 3 gateway
  cacheadmin           configure the HDFS cache
  crypto               configure HDFS encryption zones
  storagepolicies      list/get/set block storage policies
  version              print the version

Most commands print help when invoked w/o parameters.

(15) Format HDFS

[root@hadoop hadoop]# ./bin/hdfs namenode -format

Inspect the resulting directory:

[root@hadoop hadoop]# cd /var/hadoop/
[root@hadoop hadoop]# tree .
.
└── dfs
    └── name
        └── current
            ├── fsimage_0000000000000000000
            ├── fsimage_0000000000000000000.md5
            ├── seen_txid
            └── VERSION

3 directories, 4 files

(16) Start the cluster

[root@hadoop hadoop]# cd ~
[root@hadoop ~]# cd /usr/local/hadoop/
[root@hadoop hadoop]# ls

Start HDFS:

[root@hadoop hadoop]# ./sbin/start-dfs.sh

A logs directory is created on the first start; inspect it:

[root@hadoop hadoop]# cd logs/ ; ll

Check the running daemons (a scripted version of this check follows). On the hadoop node jps shows NameNode and SecondaryNameNode; on node01, node02 and node03 it shows DataNode:

[root@hadoop hadoop]# jps
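Checking every DataNode by hand means four logins. Since passwordless SSH is already in place, the same check can be scripted; a small sketch, assuming the host names defined in /etc/hosts above:

#!/bin/bash
# Show the Java daemons running on the NameNode and on every DataNode.
for host in hadoop node01 node02 node03; do
    echo "=== ${host} ==="
    ssh "${host}" jps
done

hadoop should report NameNode and SecondaryNameNode; the other three should report DataNode.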
(17) List the dfsadmin commands

[root@hadoop hadoop]# ./bin/hdfs dfsadmin
Usage: hdfs dfsadmin
Note: Administrative commands can only be run as the HDFS superuser.
        [-report [-live] [-dead] [-decommissioning]]
        [-safemode <enter | leave | get | wait>]
        [-saveNamespace]
        [-rollEdits]
        [-restoreFailedStorage true|false|check]
        [-refreshNodes]
        [-setQuota <quota> <dirname>...<dirname>]
        [-clrQuota <dirname>...<dirname>]
        [-setSpaceQuota <quota> [-storageType <storagetype>] <dirname>...<dirname>]
        [-clrSpaceQuota [-storageType <storagetype>] <dirname>...<dirname>]
        [-finalizeUpgrade]
        [-rollingUpgrade [<query|prepare|finalize>]]
        [-refreshServiceAcl]
        [-refreshUserToGroupsMappings]
        [-refreshSuperUserGroupsConfiguration]
        [-refreshCallQueue]
        [-refresh <host:ipc_port> <key> [arg1..argn]
        [-reconfig <datanode|...> <host:ipc_port> <start|status>]
        [-printTopology]
        [-refreshNamenodes datanode_host:ipc_port]
        [-deleteBlockPool datanode_host:ipc_port blockpoolId [force]]
        [-setBalancerBandwidth <bandwidth in bytes per second>]
        [-fetchImage <local directory>]
        [-allowSnapshot <snapshotDir>]
        [-disallowSnapshot <snapshotDir>]
        [-shutdownDatanode <datanode_host:ipc_port> [upgrade]]
        [-getDatanodeInfo <datanode_host:ipc_port>]
        [-metasave filename]
        [-triggerBlockReport [-incremental] <datanode_host:ipc_port>]
        [-help [cmd]]

Generic options supported are
-conf <configuration file>                      specify an application configuration file
-D <property=value>                             use value for given property
-fs <local|namenode:port>                       specify a namenode
-jt <local|resourcemanager:port>                specify a ResourceManager
-files <comma separated list of files>          specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>         specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]

(18) Verify the cluster

The report shows all three DataNodes:

[root@hadoop hadoop]# ./bin/hdfs dfsadmin -report
Configured Capacity: 616594919424 (574.25 GB)
Present Capacity: 598915952640 (557.78 GB)
DFS Remaining: 598915915776 (557.78 GB)
DFS Used: 36864 (36 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (3):

Name: 192.168.204.53:50010 (node03)
Hostname: node03
Decommission Status : Normal
Configured Capacity: 205531639808 (191.42 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 5620584448 (5.23 GB)
DFS Remaining: 199911043072 (186.18 GB)
DFS Used%: 0.00%
DFS Remaining%: 97.27%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Mar 14 10:30:18 CST 2024

Name: 192.168.204.51:50010 (node01)
Hostname: node01
Decommission Status : Normal
Configured Capacity: 205531639808 (191.42 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 6028849152 (5.61 GB)
DFS Remaining: 199502778368 (185.80 GB)
DFS Used%: 0.00%
DFS Remaining%: 97.07%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Mar 14 10:30:18 CST 2024

Name: 192.168.204.52:50010 (node02)
Hostname: node02
Decommission Status : Normal
Configured Capacity: 205531639808 (191.42 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 6029533184 (5.62 GB)
DFS Remaining: 199502094336 (185.80 GB)
DFS Used%: 0.00%
DFS Remaining%: 97.07%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Mar 14 10:30:18 CST 2024

(19) Verify via the web pages

NameNode:           http://192.168.204.50:50070/
SecondaryNameNode:  http://192.168.204.50:50090/
DataNode (node01):  http://192.168.204.51:50075/

(20) Browse the file system

It is currently empty.
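Beyond dfsadmin -report and the web pages, the fsck utility listed in the hdfs command overview above gives another quick health check; on an intact cluster its output should end with a line reporting that the file system is HEALTHY:

[root@hadoop hadoop]# ./bin/hdfs fsck /

This walks the namespace from / and reports missing, corrupt, or under-replicated blocks.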
3. Using the HDFS file system on Linux

(1) List the available commands

[root@hadoop hadoop]# ./bin/hadoop
Usage: hadoop [--config confdir] [COMMAND | CLASSNAME]
  CLASSNAME            run the class named CLASSNAME
 or
  where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
                       note: please use "yarn jar" to launch
                             YARN applications, not this command.
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  credential           interact with credential providers
  daemonlog            get/set the log level for each daemon
  trace                view and modify Hadoop tracing settings

Most commands print help when invoked w/o parameters.

[root@hadoop hadoop]# ./bin/hadoop fs
Usage: hadoop fs [generic options]
        [-appendToFile <localsrc> ... <dst>]
        [-cat [-ignoreCrc] <src> ...]
        [-checksum <src> ...]
        [-chgrp [-R] GROUP PATH...]
        [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
        [-chown [-R] [OWNER][:[GROUP]] PATH...]
        [-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
        [-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-count [-q] [-h] <path> ...]
        [-cp [-f] [-p | -p[topax]] <src> ... <dst>]
        [-createSnapshot <snapshotDir> [<snapshotName>]]
        [-deleteSnapshot <snapshotDir> <snapshotName>]
        [-df [-h] [<path> ...]]
        [-du [-s] [-h] <path> ...]
        [-expunge]
        [-find <path> ... <expression> ...]
        [-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-getfacl [-R] <path>]
        [-getfattr [-R] {-n name | -d} [-e en] <path>]
        [-getmerge [-nl] <src> <localdst>]
        [-help [cmd ...]]
        [-ls [-d] [-h] [-R] [<path> ...]]
        [-mkdir [-p] <path> ...]
        [-moveFromLocal <localsrc> ... <dst>]
        [-moveToLocal <src> <localdst>]
        [-mv <src> ... <dst>]
        [-put [-f] [-p] [-l] <localsrc> ... <dst>]
        [-renameSnapshot <snapshotDir> <oldName> <newName>]
        [-rm [-f] [-r|-R] [-skipTrash] <src> ...]
        [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
        [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
        [-setfattr {-n name [-v value] | -x name} <path>]
        [-setrep [-R] [-w] <rep> <path> ...]
        [-stat [format] <path> ...]
        [-tail [-f] <file>]
        [-test -[defsz] <path>]
        [-text [-ignoreCrc] <src> ...]
        [-touchz <path> ...]
        [-truncate [-w] <length> <path> ...]
        [-usage [cmd ...]]

Generic options supported are
-conf <configuration file>                      specify an application configuration file
-D <property=value>                             use value for given property
-fs <local|namenode:port>                       specify a namenode
-jt <local|resourcemanager:port>                specify a ResourceManager
-files <comma separated list of files>          specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>         specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]

(2) List a directory

[root@hadoop hadoop]# ./bin/hadoop fs -ls /

(3) Create a directory

[root@hadoop hadoop]# ./bin/hadoop fs -mkdir /devops

The new directory appears both on the command line and in the web UI.

(4) Upload files

[root@hadoop hadoop]# ./bin/hadoop fs -put *.txt /devops/

Check:

[root@hadoop hadoop]# ./bin/hadoop fs -ls /devops/

The web UI shows the same files and also lets you download them:

Permission   Owner   Group        Size       Last Modified        Replication   Block Size   Name
-rw-r--r--   root    supergroup   84.4 KB    2024/3/14 11:05:33   2             128 MB       LICENSE.txt
-rw-r--r--   root    supergroup   14.63 KB   2024/3/14 11:05:34   2             128 MB       NOTICE.txt
-rw-r--r--   root    supergroup   1.33 KB    2024/3/14 11:05:34   2             128 MB       README.txt

(5) Create an empty file

[root@hadoop hadoop]# ./bin/hadoop fs -touchz /tfile

Check:

[root@hadoop hadoop]# ./bin/hadoop fs -ls /

(6) Download a file

[root@hadoop hadoop]# ./bin/hadoop fs -get /tfile /tmp/

Check locally and in the web UI:

[root@hadoop hadoop]# ls -l /tmp/ | grep tfile

(7) Equivalent path forms

Because fs.defaultFS was set to hdfs://hadoop:9000 earlier, the following two listings show the same thing:

[root@hadoop hadoop]# ./bin/hadoop fs -ls /
[root@hadoop hadoop]# ./bin/hadoop fs -ls hdfs://hadoop:9000/

Without that setting the default scheme is file, i.e. the local file system:

[root@hadoop hadoop]# ./bin/hadoop fs -ls file:///
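As a quick confidence check that uploads and downloads are lossless, a put/get round trip can be compared with diff. A minimal sketch using only commands shown above; the file and path names are arbitrary examples:

[root@hadoop hadoop]# ./bin/hadoop fs -put etc/hadoop/core-site.xml /devops/
[root@hadoop hadoop]# ./bin/hadoop fs -get /devops/core-site.xml /tmp/core-site.xml.copy
[root@hadoop hadoop]# diff etc/hadoop/core-site.xml /tmp/core-site.xml.copy && echo "round trip OK"

diff prints nothing and the echo fires only when the two files are byte-identical.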
II. Problems

1. ssh-copy-id reports an error

(1) Error

/usr/bin/ssh-copy-id: ERROR: ssh: connect to host hadoop port 22: Connection refused

(2) Cause

The host name does not resolve to the right address.

(3) Solution

Correct the entry in /etc/hosts (before/after screenshots omitted); the command then succeeds.

2. How to disable ssh host key checking

(1) Edit the configuration file

[root@hadoop .ssh]# vim /etc/ssh/ssh_config

Add:

StrictHostKeyChecking no

Subsequent connections no longer prompt to confirm the host key.

3. Which configuration files HDFS uses

(1) Configuration files

1) Environment file: hadoop-env.sh
2) Core file: core-site.xml
3) HDFS file: hdfs-site.xml
4) Worker-node file: slaves

4. "hadoop version" reports an error

(1) Error

The command fails (screenshot omitted).

(2) Cause

The Java environment is not declared.

(3) Solution

Declare the Java environment. Find the installation:

rpm -ql java-1.8.0-openjdk

Java home:

/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.402.b06-1.el7_9.x86_64/jre

Hadoop configuration directory:

/usr/local/hadoop/etc/hadoop

Edit the environment file so that it declares both paths (after the change the two lines read):

[root@hadoop hadoop]# vim hadoop-env.sh

export JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.402.b06-1.el7_9.x86_64/jre"
export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"

The version check then succeeds:

[root@hadoop hadoop]# ./bin/hadoop version

5. Starting the cluster reports an error

(1) Error

start-dfs.sh fails (screenshot omitted).

(2) Cause

ssh-copy-id was never run against the local host, so start-dfs.sh cannot log in to it.

(3) Solution

Authorize the local host as well:

[root@hadoop hadoop]# ssh-copy-id hadoop

If the error persists, stop the HDFS daemons (NameNode, SecondaryNameNode and DataNode) and start again:

[root@hadoop hadoop]# ./sbin/stop-dfs.sh
[root@hadoop hadoop]# ./sbin/start-dfs.sh

6. Hadoop start and stop commands

(1) Commands

sbin/start-all.sh    start all Hadoop daemons: NameNode, SecondaryNameNode, DataNode, ResourceManager, NodeManager
sbin/stop-all.sh     stop all Hadoop daemons: NameNode, SecondaryNameNode, DataNode, ResourceManager, NodeManager
sbin/start-dfs.sh    start the HDFS daemons: NameNode, SecondaryNameNode, DataNode
sbin/stop-dfs.sh     stop the HDFS daemons: NameNode, SecondaryNameNode, DataNode
sbin/hadoop-daemons.sh start namenode    start only the NameNode daemon
sbin/hadoop-daemons.sh stop namenode     stop only the NameNode daemon
sbin/hadoop-daemons.sh start datanode    start only the DataNode daemon
sbin/hadoop-daemons.sh stop datanode     stop only the DataNode daemon
sbin/hadoop-daemons.sh start secondarynamenode    start only the SecondaryNameNode daemon
sbin/hadoop-daemons.sh stop secondarynamenode     stop only the SecondaryNameNode daemon
sbin/start-yarn.sh    start the ResourceManager and NodeManagers
sbin/stop-yarn.sh     stop the ResourceManager and NodeManagers
sbin/yarn-daemon.sh start resourcemanager    start only the ResourceManager
sbin/yarn-daemons.sh start nodemanager       start only the NodeManager
sbin/yarn-daemon.sh stop resourcemanager     stop only the ResourceManager
sbin/yarn-daemons.sh stop nodemanager        stop only the NodeManager
sbin/mr-jobhistory-daemon.sh start historyserver    start the JobHistory server manually
sbin/mr-jobhistory-daemon.sh stop historyserver     stop the JobHistory server manually

7. Uploading a file reports an error

(1) Error

The upload fails (screenshot omitted).

(2) Cause

The wrong command was used.

(3) Solution

Use the correct command:

[root@hadoop hadoop]# ./bin/hadoop fs -put *.txt /devops/

8. Common HDFS commands

(1) Commands (a short session follows)

ls     list files and directories
cat    print a file's contents
put    upload a file
get    download a file
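A short session exercising all four commands, assuming the cluster from Part I is running; the file names are arbitrary examples:

[root@hadoop hadoop]# echo "hello hdfs" > /tmp/demo.txt
[root@hadoop hadoop]# ./bin/hadoop fs -put /tmp/demo.txt /demo.txt
[root@hadoop hadoop]# ./bin/hadoop fs -ls /
[root@hadoop hadoop]# ./bin/hadoop fs -cat /demo.txt
hello hdfs
[root@hadoop hadoop]# ./bin/hadoop fs -get /demo.txt /tmp/demo.copy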