
Connecting Kettle to Kerberos-secured Hive / Spark ThriftServer

1. Background

Kerberos authentication sits at a fairly low level; once you understand it, it is actually simple to use. After Kettle completes the Kerberos login, the credentials are cached in the JVM, so Hive can be connected to directly afterwards without supplying any extra user information. Spark ThriftServer is essentially a service that is connected to over the Hive JDBC protocol and runs Spark SQL jobs.

2. Approach

Kettle can call methods of Java classes from JavaScript. So the plan is: build a jar, put it into Kettle's lib directory (or a custom directory), where it is loaded automatically when Kettle starts, and then write a JavaScript transformation step that performs the Kerberos login.

3. Developing the Kerberos authentication module

The module is written in Scala.

Hadoop cluster version: cdh-6.2.0
Kettle version: 8.2.0.0-342

3.1 Building the Kerberos utility jar

3.1.1 Create the Maven project and write the pom

Create a Maven project. There are quite a few dependencies here; remove the ones you do not need. Note that, to keep things easy to manage, many dependencies use the compile scope and are later copied into a zip file by maven-assembly-plugin.

```xml
<properties>
  <maven.compiler.source>8</maven.compiler.source>
  <maven.compiler.target>8</maven.compiler.target>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <scala.version>2.11.12</scala.version>
  <scala.major.version>2.11</scala.major.version>
  <target.java.version>1.8</target.java.version>
  <hadoop.version>3.0.0-cdh6.2.0</hadoop.version>
  <spark.version>2.4.0-cdh6.2.0</spark.version>
  <hive.version>2.1.1-cdh6.2.0</hive.version>
  <zookeeper.version>3.4.5-cdh6.2.0</zookeeper.version>
  <jackson.version>2.14.2</jackson.version>
  <httpclient5.version>5.2.1</httpclient5.version>
</properties>

<dependencies>
  <dependency><groupId>org.scala-lang</groupId><artifactId>scala-library</artifactId><version>${scala.version}</version><scope>compile</scope></dependency>
  <dependency><groupId>org.scala-lang</groupId><artifactId>scala-reflect</artifactId><version>${scala.version}</version><scope>compile</scope></dependency>
  <dependency><groupId>org.scala-lang</groupId><artifactId>scala-compiler</artifactId><version>${scala.version}</version><scope>compile</scope></dependency>
  <dependency><groupId>org.slf4j</groupId><artifactId>slf4j-api</artifactId><version>1.7.28</version><scope>provided</scope></dependency>
  <dependency><groupId>org.apache.logging.log4j</groupId><artifactId>log4j-slf4j-impl</artifactId><version>2.9.1</version><scope>provided</scope></dependency>
  <dependency><groupId>org.apache.logging.log4j</groupId><artifactId>log4j-api</artifactId><version>2.11.1</version><scope>provided</scope></dependency>
  <dependency><groupId>org.apache.logging.log4j</groupId><artifactId>log4j-core</artifactId><version>2.11.1</version><scope>provided</scope></dependency>
  <dependency><groupId>org.apache.hadoop</groupId><artifactId>hadoop-common</artifactId><version>${hadoop.version}</version><scope>compile</scope></dependency>
  <dependency><groupId>org.apache.hadoop</groupId><artifactId>hadoop-client</artifactId><version>${hadoop.version}</version><scope>compile</scope></dependency>
  <dependency><groupId>org.apache.spark</groupId><artifactId>spark-core_${scala.major.version}</artifactId><version>${spark.version}</version><scope>provided</scope></dependency>
  <dependency><groupId>org.apache.spark</groupId><artifactId>spark-sql_${scala.major.version}</artifactId><version>${spark.version}</version><scope>provided</scope></dependency>
  <dependency><groupId>org.apache.spark</groupId><artifactId>spark-streaming_${scala.major.version}</artifactId><version>${spark.version}</version><scope>provided</scope></dependency>
  <dependency><groupId>org.apache.hive</groupId><artifactId>hive-jdbc</artifactId><version>${hive.version}</version><scope>compile</scope></dependency>
  <dependency><groupId>org.apache.spark</groupId><artifactId>spark-hive-thriftserver_${scala.major.version}</artifactId><version>${spark.version}</version><scope>compile</scope></dependency>
  <dependency><groupId>org.apache.zookeeper</groupId><artifactId>zookeeper</artifactId><version>${zookeeper.version}</version><scope>compile</scope></dependency>
  <!-- jackson -->
  <dependency><groupId>com.fasterxml.jackson.core</groupId><artifactId>jackson-core</artifactId><version>${jackson.version}</version><scope>compile</scope></dependency>
  <dependency><groupId>com.fasterxml.jackson.core</groupId><artifactId>jackson-databind</artifactId><version>${jackson.version}</version><scope>compile</scope></dependency>
  <dependency><groupId>com.fasterxml.jackson.dataformat</groupId><artifactId>jackson-dataformat-xml</artifactId><version>${jackson.version}</version><scope>compile</scope></dependency>
  <dependency><groupId>com.fasterxml.jackson.module</groupId><artifactId>jackson-module-scala_2.11</artifactId><version>${jackson.version}</version><scope>compile</scope></dependency>
  <!-- https://mvnrepository.com/artifact/org.junit.jupiter/junit-jupiter-api -->
  <dependency><groupId>org.junit.jupiter</groupId><artifactId>junit-jupiter-api</artifactId><version>5.6.2</version><scope>test</scope></dependency>
  <dependency><groupId>org.scalatest</groupId><artifactId>scalatest_2.11</artifactId><version>3.2.8</version><scope>test</scope></dependency>
  <dependency><groupId>org.scalactic</groupId><artifactId>scalactic_2.12</artifactId><version>3.2.8</version><scope>test</scope></dependency>
  <dependency><groupId>org.projectlombok</groupId><artifactId>lombok</artifactId><version>1.18.14</version><scope>provided</scope></dependency>
</dependencies>

<build>
  <plugins>
    <plugin>
      <groupId>net.alchim31.maven</groupId>
      <artifactId>scala-maven-plugin</artifactId>
      <version>4.5.6</version>
      <configuration></configuration>
      <executions>
        <execution>
          <id>scala-compiler</id>
          <phase>process-resources</phase>
          <goals><goal>add-source</goal><goal>compile</goal></goals>
        </execution>
        <execution>
          <id>scala-test-compiler</id>
          <phase>process-test-resources</phase>
          <goals><goal>add-source</goal><goal>testCompile</goal></goals>
        </execution>
      </executions>
    </plugin>
    <!-- disable surefire -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>2.7</version>
      <configuration><skipTests>true</skipTests></configuration>
    </plugin>
    <!-- enable scalatest -->
    <plugin>
      <groupId>org.scalatest</groupId>
      <artifactId>scalatest-maven-plugin</artifactId>
      <version>2.2.0</version>
      <configuration>
        <reportsDirectory>${project.build.directory}/surefire-reports</reportsDirectory>
        <junitxml>.</junitxml>
        <filereports>WDF TestSuite.txt</filereports>
      </configuration>
      <executions>
        <execution></execution>
      </executions>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-assembly-plugin</artifactId>
      <version>3.0.0</version>
      <configuration>
        <appendAssemblyId>false</appendAssemblyId>
        <!-- <descriptorRefs> -->
        <!--   <descriptorRef>jar-with-dependencies</descriptorRef> -->
        <!-- </descriptorRefs> -->
        <descriptors>
          <descriptor>src/assembly/assembly.xml</descriptor>
        </descriptors>
        <archive>
          <!-- <manifest> -->
          <!--   your main-class entry point, e.g. Test.scala under src/main/scala -->
          <!--   <mainClass>com.chenxii.myspark.sparkcore.Test</mainClass> -->
          <!-- </manifest> -->
        </archive>
      </configuration>
      <executions>
        <execution>
          <id>make-assembly</id>
          <phase>package</phase>
          <goals><goal>single</goal></goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

<repositories>
  <repository>
    <id>cloudera</id>
    <name>cloudera</name>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
  </repository>
</repositories>
</project>
```
Create a blank xml file at src/assembly/assembly.xml (the path configured for the plugin above). maven-assembly-plugin uses this assembly.xml; paste the following content into it and replace "your-project-groupId" with your own project's groupId.

```xml
<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3
                              http://maven.apache.org/xsd/assembly-1.1.3.xsd">
  <id>appserverB</id>
  <formats>
    <format>zip</format>
  </formats>
  <dependencySets>
    <dependencySet>
      <outputDirectory>/ext-lib</outputDirectory>
      <includes>
        <include>org.apache.hadoop:*</include>
        <include>org.apache.hive:*</include>
        <include>org.apache.hive.shims:*</include>
        <include>org.apache.spark:spark-hive-thriftserver_*</include>
        <include>org.apache.zookeeper:*</include>
        <include>org.apache.curator:*</include>
        <include>org.apache.commons:commons-lang3</include>
        <include>org.apache.commons:commons-configuration2</include>
        <include>org.apache.commons:commons-math3</include>
        <include>com.fasterxml.jackson.core:*</include>
        <include>com.fasterxml.jackson.dataformat:*</include>
        <include>com.fasterxml.jackson.module:*</include>
        <include>org.scala-lang:*</include>
        <include>org.apache.thrift:libthrift</include>
        <include>com.thoughtworks.paranamer:paranamer</include>
        <include>com.google.re2j:re2j</include>
        <include>com.fasterxml.woodstox:woodstox-core</include>
        <include>org.codehaus.woodstox:stax2-api</include>
        <include>org.apache.httpcomponents.core5:*</include>
        <include>org.apache.httpcomponents.client5:*</include>
        <include>org.apache.htrace:*</include>
        <include>com.github.rholder:guava-retrying</include>
        <include>org.eclipse.jetty:jetty-util</include>
        <include>org.mortbay.jetty:*</include>
        <include>your-project-groupId:*</include>
      </includes>
    </dependencySet>
  </dependencySets>
</assembly>
```
3.1.2 Write the classes

KerberosConf is not really used for now.

```scala
case class KerberosConf(principal: String, keyTabPath: String, conf: String = "/etc/krb5.conf")
```

ConfigUtils builds the Hadoop Configuration objects that the Kerberos login needs.

```scala
import org.apache.commons.lang3.StringUtils
import org.apache.hadoop.conf.Configuration

import java.io.FileInputStream
import java.nio.file.{Files, Paths}

object ConfigUtils {
  val LOGGER = org.slf4j.LoggerFactory.getLogger(KerberosUtils.getClass)

  var hadoopConfiguration: Configuration = null
  var hiveConfiguration: Configuration = null
  private var hadoopConfDir: String = null
  private var hiveConfDir: String = null

  def setHadoopConfDir(dir: String): Configuration = {
    hadoopConfDir = dir
    refreshHadoopConfig
  }

  def getHadoopConfDir: String = {
    if (StringUtils.isEmpty(hadoopConfDir)) {
      val tmpConfDir = System.getenv("HADOOP_CONF_DIR")
      if (StringUtils.isNotEmpty(tmpConfDir) && fileOrDirExists(tmpConfDir)) {
        hadoopConfDir = tmpConfDir
      } else {
        val tmpHomeDir = System.getenv("HADOOP_HOME")
        if (StringUtils.isNotEmpty(tmpHomeDir) && fileOrDirExists(tmpHomeDir)) {
          val tmpConfDirLong = s"${tmpHomeDir}/etc/hadoop"
          val tmpConfDirShort = s"${tmpHomeDir}/conf"
          if (fileOrDirExists(tmpConfDirLong)) {
            hadoopConfDir = tmpConfDirLong
          } else if (fileOrDirExists(tmpConfDirShort)) {
            hadoopConfDir = tmpConfDirShort
          }
        }
      }
    }
    LOGGER.info(s"discover hadoop conf from : ${hadoopConfDir}")
    hadoopConfDir
  }

  def getHadoopConfig: Configuration = {
    if (hadoopConfiguration == null) {
      hadoopConfiguration = new Configuration()
      configHadoop()
    }
    hadoopConfiguration
  }

  def refreshHadoopConfig: Configuration = {
    hadoopConfiguration = new Configuration()
    configHadoop()
  }

  def configHadoop(): Configuration = {
    var coreXml = ""
    var hdfsXml = ""
    val hadoopConfDir = getHadoopConfDir
    if (StringUtils.isNotEmpty(hadoopConfDir)) {
      val coreXmlTmp = s"${hadoopConfDir}/core-site.xml"
      val hdfsXmlTmp = s"${hadoopConfDir}/hdfs-site.xml"
      val coreExists = fileOrDirExists(coreXmlTmp)
      val hdfsExists = fileOrDirExists(hdfsXmlTmp)
      if (coreExists && hdfsExists) {
        LOGGER.info(s"discover hadoop conf from hadoop conf dir: ${hadoopConfDir}")
        coreXml = coreXmlTmp
        hdfsXml = hdfsXmlTmp
        hadoopAddSource(coreXml, hadoopConfiguration)
        hadoopAddSource(hdfsXml, hadoopConfiguration)
      }
    }
    LOGGER.info(s"core-site path : ${coreXml}, hdfs-site path : ${hdfsXml}")
    hadoopConfiguration
  }

  def getHiveConfDir: String = {
    if (StringUtils.isEmpty(hiveConfDir)) {
      val tmpConfDir = System.getenv("HIVE_CONF_DIR")
      if (StringUtils.isNotEmpty(tmpConfDir) && fileOrDirExists(tmpConfDir)) {
        hiveConfDir = tmpConfDir
      } else {
        val tmpHomeDir = System.getenv("HIVE_HOME")
        if (StringUtils.isNotEmpty(tmpHomeDir) && fileOrDirExists(tmpHomeDir)) {
          val tmpConfDirShort = s"${tmpHomeDir}/conf"
          if (fileOrDirExists(tmpConfDirShort)) {
            hiveConfDir = tmpConfDirShort
          }
        }
      }
    }
    LOGGER.info(s"discover hive conf from : ${hiveConfDir}")
    hiveConfDir
  }

  def configHive(): Configuration = {
    if (hiveConfiguration != null) {
      return hiveConfiguration
    } else {
      hiveConfiguration = new Configuration()
    }
    var hiveXml = ""
    val hiveConfDir = getHiveConfDir
    if (StringUtils.isNotEmpty(hiveConfDir)) {
      val hiveXmlTmp = s"${hiveConfDir}/hive-site.xml"
      val hiveExist = fileOrDirExists(hiveXmlTmp)
      if (hiveExist) {
        LOGGER.info(s"discover hive conf from : ${hiveConfDir}")
        hiveXml = hiveXmlTmp
        hadoopAddSource(hiveXml, hiveConfiguration)
      }
    }
    LOGGER.info(s"hive-site path : ${hiveXml}")
    hiveConfiguration
  }

  def getHiveConfig: Configuration = {
    if (hiveConfiguration == null) {
      hiveConfiguration = new Configuration()
      configHive()
    }
    hiveConfiguration
  }

  def refreshHiveConfig: Configuration = {
    hiveConfiguration = new Configuration()
    configHive()
  }

  def hadoopAddSource(confPath: String, conf: Configuration): Unit = {
    val exists = fileOrDirExists(confPath)
    if (exists) {
      LOGGER.warn(s"add [${confPath} to hadoop conf]")
      var fi: FileInputStream = null
      try {
        fi = new FileInputStream(confPath)
        conf.addResource(fi)
        conf.get("$$") // force the resource to be parsed immediately
      } finally {
        if (fi != null) fi.close()
      }
    } else {
      LOGGER.error(s"[${confPath}] file does not exists!")
    }
  }

  def toUnixStyleSeparator(path: String): String = {
    path.replaceAll("\\\\", "/")
  }

  def fileOrDirExists(path: String): Boolean = {
    Files.exists(Paths.get(path))
  }
}
```
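For context, here is a minimal usage sketch of ConfigUtils (not from the original post; the directory path is only an example):

```scala
// Minimal usage sketch (illustrative paths): either rely on HADOOP_CONF_DIR /
// HADOOP_HOME being set in the environment, or point the utility at a
// directory that contains core-site.xml and hdfs-site.xml explicitly.
object ConfigUtilsDemo {
  def main(args: Array[String]): Unit = {
    // Option 1: discover the conf dir from environment variables.
    val confFromEnv = ConfigUtils.getHadoopConfig
    println(confFromEnv.get("fs.defaultFS"))

    // Option 2: set the conf dir explicitly (path is an example only).
    val conf = ConfigUtils.setHadoopConfDir("/etc/hadoop/conf")
    println(conf.get("hadoop.security.authentication")) // expect "kerberos" on a secured cluster
  }
}
```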
KerberosUtils is the class that actually performs the login.

```scala
import org.apache.commons.lang3.StringUtils
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.security.UserGroupInformation
import org.apache.kerby.kerberos.kerb.keytab.Keytab
import org.slf4j.Logger

import java.io.File
import java.net.URL
import java.nio.file.{Files, Paths}
import scala.collection.JavaConversions._
import scala.collection.JavaConverters._

object KerberosUtils {
  val LOGGER: Logger = org.slf4j.LoggerFactory.getLogger(KerberosUtils.getClass)

  def loginKerberos(krb5Principal: String, krb5KeytabPath: String, krb5ConfPath: String, hadoopConf: Configuration): Boolean = {
    val authType = hadoopConf.get("hadoop.security.authentication")
    if (!"kerberos".equalsIgnoreCase(authType)) {
      LOGGER.error(s"kerberos utils get hadoop authentication type [${authType}] ,not kerberos!")
    } else {
      LOGGER.info(s"kerberos utils get hadoop authentication type [${authType}]!")
    }
    UserGroupInformation.setConfiguration(hadoopConf)
    System.setProperty("java.security.krb5.conf", krb5ConfPath)
    System.setProperty("javax.security.auth.useSubjectCredsOnly", "false")
    UserGroupInformation.loginUserFromKeytab(krb5Principal, krb5KeytabPath)
    val user = UserGroupInformation.getLoginUser
    if (user.getAuthenticationMethod == UserGroupInformation.AuthenticationMethod.KERBEROS) {
      val usnm: String = user.getShortUserName
      LOGGER.info(s"kerberos utils login success, curr user: ${usnm}")
      true
    } else {
      LOGGER.info("kerberos utils login failed")
      false
    }
  }

  def loginKerberos(krb5Principal: String, krb5KeytabPath: String, krb5ConfPath: String): Boolean = {
    val hadoopConf = ConfigUtils.getHadoopConfig
    loginKerberos(krb5Principal, krb5KeytabPath, krb5ConfPath, hadoopConf)
  }

  def loginKerberos(kerberosConf: KerberosConf): Boolean = {
    loginKerberos(kerberosConf.principal, kerberosConf.keyTabPath, kerberosConf.conf)
  }

  def loginKerberos(krb5Principal: String, krb5KeytabPath: String, krb5ConfPath: String, hadoopConfDir: String): Boolean = {
    ConfigUtils.setHadoopConfDir(hadoopConfDir)
    loginKerberos(krb5Principal, krb5KeytabPath, krb5ConfPath)
  }

  def loginKerberos(): Boolean = {
    var principal: String = null
    var keytabPath: String = null
    var krb5ConfPath: String = null
    val classPath: URL = this.getClass.getResource("/")
    val classPathObj = Paths.get(classPath.toURI)
    // look for the keytab and the krb5.conf at the class path root, each filtered from the full file list
    val classPathFiles = Files.list(classPathObj).iterator().asScala.toList
    val keytabPathList = classPathFiles.filter(p => p.toString.toLowerCase().endsWith(".keytab")).toList
    val krb5ConfPathList = classPathFiles.filter(p => p.toString.toLowerCase().endsWith("krb5.conf")).toList
    if (keytabPathList.nonEmpty) {
      val ktPath = keytabPathList.get(0)
      val absPath = ktPath.toAbsolutePath
      val keytab = Keytab.loadKeytab(new File(absPath.toString))
      val pri = keytab.getPrincipals.get(0).getName
      if (StringUtils.isNotEmpty(pri)) {
        principal = pri
        keytabPath = ktPath.toString
      }
    }
    if (krb5ConfPathList.nonEmpty) {
      val confPath = krb5ConfPathList.get(0)
      krb5ConfPath = confPath.toAbsolutePath.toString
    }
    if (StringUtils.isNotEmpty(principal) && StringUtils.isNotEmpty(keytabPath) && StringUtils.isNotEmpty(krb5ConfPath)) {
      ConfigUtils.configHadoop()
      // ConfigUtils.configHive()
      val hadoopConf = ConfigUtils.hadoopConfiguration
      loginKerberos(principal, keytabPath, krb5ConfPath, hadoopConf)
    } else {
      false
    }
  }
}
```
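A quick sanity check of the login looks like this; it is a sketch only, and the principal, keytab and krb5.conf locations are placeholders for your own environment:

```scala
// Sketch only: principal, keytab and krb5.conf paths are placeholders.
object LoginDemo {
  def main(args: Array[String]): Unit = {
    val ok = KerberosUtils.loginKerberos(
      "etl_user@EXAMPLE.COM",          // principal (placeholder)
      "/opt/keytabs/etl_user.keytab",  // keytab path (placeholder)
      "/etc/krb5.conf")                // krb5.conf path
    println(s"kerberos login result: ${ok}")
    // After a successful login the credentials live in the JVM (UserGroupInformation),
    // so later Hive JDBC / HDFS calls in the same JVM need no extra user information.
  }
}
```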
3.1.3 Compile and package

After `mvn package`, maven-assembly-plugin writes a zip file under target/. The jars at the innermost level of that zip are the ones you need; put them all into Kettle's lib directory, or into a custom directory (how to set one up is described below).

Notes:

1) In the Kettle 8.2 used here, KETTLE_HOME/plugins\pentaho-big-data-plugin\hadoop-configurations contains several hadoop plugins. In Kettle versions before 9, only one hadoop configuration can be active globally; which one is used is selected by the `active.hadoop.configuration=...` entry in KETTLE_HOME/plugins\pentaho-big-data-plugin\plugin.properties, whose value is simply the folder name. Here hdp30 is used as the base hadoop plugin (Kettle 9 and later configure this differently). The closer this hadoop version is to the actual cluster version the better; Kettle tries to load classes from this directory at every startup, although an exact match is not required.

2) Because the cluster version may not match any of the bundled hadoop plugins, the cluster-version dependency jars have to be loaded up front, so the related jars must go into KETTLE_HOME/lib. If you prefer a separate directory, for example KETTLE_HOME/ext-lib: on Windows, append `;..\ext-lib` after both `set LIBSPATH` lines in Spoon.bat (semicolon separated); on Linux, edit spoon.sh and append `:../ext-lib` to the `LIBPATH=$CURRENTDIR...` lines (colon separated). The `..` is needed because the startup class lives in KETTLE_HOME/launcher\launcher.jar, so the path is relative to the launcher directory. A sketch of both edits follows.
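For illustration, the two edits described above look roughly like this; the values are examples and the existing contents of Spoon.bat / spoon.sh differ between Kettle builds:

```
# KETTLE_HOME/plugins/pentaho-big-data-plugin/plugin.properties
active.hadoop.configuration=hdp30

# Spoon.bat (Windows): append ;..\ext-lib to both existing "set LIBSPATH=..." lines, e.g.
#   set LIBSPATH=<existing value>;..\ext-lib

# spoon.sh (Linux): append :../ext-lib to the existing LIBPATH=$CURRENTDIR... lines, e.g.
#   LIBPATH=<existing value>:../ext-lib
```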
Notes on class and package errors

Error 1: Kettle 8.2's error messages are hard to interpret (versions after 9 are much better). The message below claims the Watcher class cannot be found, even though it is actually present:

```
2024/01/03 17:34:01 - spark-read-ha-sample.0 - Error connecting to database: (using class org.apache.hive.jdbc.HiveDriver)
2024/01/03 17:34:01 - spark-read-ha-sample.0 - org/apache/zookeeper/Watcher
```

Error 2: this is also a class-loading error, presumably because the same type is loaded by different classloaders and is therefore not recognized as compatible:

```
loader constraint violation: loader (instance of java/net/URLClassLoader) previously initiated loading for a different type with name org/apache/curator/RetryPolicy
```

Error 3: there is no good explanation for this one for now:

```
java.lang.IllegalArgumentException: port out of range:-1
    at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143)
    at java.net.InetSocketAddress.createUnresolved(InetSocketAddress.java:254)
    at org.apache.zookeeper.client.ConnectStringParser.<init>(ConnectStringParser.java:76)
    at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:447)
    at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29)
    at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:150)
    at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94)
    at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55)
    at org.apache.curator.ConnectionState.reset(ConnectionState.java:262)
    at org.apache.curator.ConnectionState.start(ConnectionState.java:109)
    at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:191)
    at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:259)
    at org.apache.hive.jdbc.ZooKeeperHiveClientHelper.configureConnParams(ZooKeeperHiveClientHelper.java:63)
    at org.apache.hive.jdbc.Utils.configureConnParams(Utils.java:520)
    at org.apache.hive.jdbc.Utils.parseURL(Utils.java:440)
    at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:134)
    at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
```

The fix for all three problems is the same: put the hadoop, hive, zookeeper and curator jars together in the lib directory (or the custom load directory). Although the hadoop plugin selected under KETTLE_HOME\plugins\pentaho-big-data-plugin\hadoop-configurations (hdp30 in this example) also contains these dependencies, the lib directory and the hadoop plugin classes cannot be mixed: the related jars must live together. This was a hard-won lesson. Do not worry that the lib directory and the hadoop plugin then contain different versions of the same jars. Kettle is designed to support many plugins, and the problem appears to be that classes loaded by different classloaders cannot be shared across them.

3.2 Starting Kettle and notes on class loading

Start in debug mode with SpoonDebug.bat. If you also want to see where classes are loaded from, append the JVM option `-verbose:class` to the end of the `set OPT=` line in Spoon.bat. If the text in the cmd window is garbled, remove `-Dfile.encoding=UTF-8` from SpoonDebug.bat.

Kettle caches all jars under kettle-home\system\karaf\caches. The jars under the "bundle<number>" directories that appear in the log all live in that cache directory.

If Kettle stops responding while you are working, it is most likely because you clicked inside the cmd window; press Enter in the cmd window and the log output continues and the UI becomes responsive again.

3.3 Performing the Kerberos login from JavaScript

One step supplies the Kerberos configuration values (krb5_principal, krb5_keytab, krb5_conf and, optionally, hadoop_conf_dir); a JavaScript step then completes the Kerberos login. Kerberos authentication needs HADOOP_CONF_DIR: if the method is called without a hadoop_conf_dir argument, it falls back to the environment variables.

The JavaScript code is as follows:

```javascript
// Give the class an alias. Java has no such syntax, but Python does.
// Replace "your.package.name" with the fully qualified package of KerberosUtils.
var utils = Packages.your.package.name.KerberosUtils;
// Log in to Kerberos using the HADOOP_CONF_DIR or HADOOP_HOME environment variable.
var loginRes = utils.loginKerberos(krb5_principal, krb5_keytab, krb5_conf);

// Log in to Kerberos using a caller-supplied hadoop_conf_dir.
// The hadoop_conf_dir parameter can come from the previous step or be hard-coded.
// var loginRes = utils.loginKerberos(krb5_principal, krb5_keytab, krb5_conf, hadoop_conf_dir);
```

Add a step that writes the result to the log, then run the transformation.

If you see the following error, Kettle did not find the Java class; check that the package path and jar placement are correct:

```
TypeError: Cannot call property loginKerberos in object [JavaPackage utils]. It is not a function, it is object. (script#6)
```

If the following is printed, authentication succeeded:

```
2024/01/02 18:18:04 - 写日志.0 -
2024/01/02 18:18:04 - 写日志.0 - ------------> 行号 1------------------------------
2024/01/02 18:18:04 - 写日志.0 - loginRes = Y
```

4. The wrapper job

Kerberos authentication stores its credentials in the JVM. To make use of them, the login must run one job level ahead of the Hive or Hadoop task: a wrapper job runs the kerberos-login transformation (the transformation written above) first, and the actual work is nested inside it. It must be wrapped this way; with fewer nesting levels the authentication does not carry over.

5. Connecting to HDFS

If HDFS is used in a project, the Hadoop task must likewise be wrapped inside the wrapper job described above.

6. Connecting to Hive or Spark ThriftServer

Connecting to Hive and to Spark ThriftServer works the same way; Spark is used in the examples below. Note: before connecting to Hive or Spark, always run the kerberos-login module manually first, otherwise both "Test connection" and "Feature list" will fail or report errors.

6.1 High-availability connection through ZooKeeper

```
# Host name
# Note: the last host deliberately omits :2181 here.
zk-01.com:2181,zk-02.com:2181,zk-03.com

# Database name
# The Kerberos connection parameters are appended here as well. The zooKeeperNamespace value
# comes from HIVE_HOME/conf/hive-site.xml or SPARK_HOME/conf/hive-site.xml;
# serviceDiscoveryMode=zooKeeper is a fixed value.
default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=spark2_server

# Port
# The :2181 that was omitted from the last host above is supplied here.
2181
```

Run the Kerberos authentication module manually first, then test the connection. After filling everything in, click the Feature list button and find the URL entry; its format should be:

jdbc:hive2://zk-01.com:2181,zk-02.com:2181,zk-03.com:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=...

Then test the connection.

6.2 Direct (single HiveServer2) connection

```
# Host name
# The HiveServer2 host; use the host name, not the IP.

# Database name
# In HIVE_HOME/conf/hive-site.xml or SPARK_HOME/conf/hive-site.xml find the value of
# hive.server2.authentication.kerberos.principal, e.g. spark/_HOST@XXXXX.COM.
# Essentially the connection parameters are appended after the default database:
default;principal=spark/_HOST@XXXXX.COM

# Port
# Also in SPARK_HOME/conf/hive-site.xml, the value of hive.server2.thrift.port, e.g. 10016.
```

After filling everything in, click the Feature list button and find the URL entry; its format should be jdbc:hive2://host:port/default;principal=... Then test the connection.
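To make the URL formats above concrete, here is a hedged sketch of opening the same kind of connection programmatically after the Kerberos login has run in the same JVM; the hosts, namespace, principal and port are placeholders for your cluster:

```scala
import java.sql.DriverManager

// Sketch only: hosts, namespace, principal and keytab paths are placeholders.
object HiveJdbcDemo {
  def main(args: Array[String]): Unit = {
    // Log in first; the credentials are cached in the JVM by UserGroupInformation.
    KerberosUtils.loginKerberos("etl_user@EXAMPLE.COM", "/opt/keytabs/etl_user.keytab", "/etc/krb5.conf")

    // ZooKeeper-based HA discovery (section 6.1 style URL):
    val zkUrl = "jdbc:hive2://zk-01.com:2181,zk-02.com:2181,zk-03.com:2181/default;" +
      "serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=spark2_server"

    // Direct HiveServer2 / Spark ThriftServer connection (section 6.2 style URL):
    // val directUrl = "jdbc:hive2://thrift-host:10016/default;principal=spark/_HOST@XXXXX.COM"

    Class.forName("org.apache.hive.jdbc.HiveDriver")
    val conn = DriverManager.getConnection(zkUrl) // no user/password needed after the Kerberos login
    val rs = conn.createStatement().executeQuery("select 1")
    while (rs.next()) println(rs.getInt(1))
    conn.close()
  }
}
```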
7. Miscellaneous

Kettle cannot read Hive columns of type bigint and timestamp out of the box; see the separate write-up "kettle读取Hive表不支持bigint和timestamp类型解决" for a workaround.

8. Kettle usage tips

8.1 Using relative directories in nested Kettle jobs

For the entries above, the built-in variable ${Internal.Entry.Current.Directory} can also be used, so that nested jobs and transformations are referenced by relative paths.

References

- hive 高可用详解: Hive MetaStore HA、hive server HA 原理详解, hive 高可用实现
- kettle开发篇-JavaScript脚本-Day31
- kettle组件javaScript脚本案例1
- kettle配置javascript环境 kettle javascript
- Javascript脚本组件
- Kettle之【执行SQL脚本】控件用法 (covers the use of environment variables and the ? placeholder)
