24. Flink Table API & SQL Catalogs: operating on views via the Java API (3)
Flink series articles

1. Flink deployment, concepts, source/transformation/sink usage examples, the four cornerstones, and more — combined series index
13. Flink Table API & SQL: basic concepts, common APIs, and getting-started examples
14. Flink Table API & SQL data types: built-in data types and their properties
15. Flink Table API & SQL streaming concepts: dynamic tables, time attributes, handling of update results, temporal tables, joins on streams, determinism on streams, and query configuration
16. Flink Table API & SQL, connecting to external systems: connectors and formats, FileSystem example (1)
16. Flink Table API & SQL, connecting to external systems: connectors and formats, Elasticsearch example (2)
16. Flink Table API & SQL, connecting to external systems: connectors and formats, Apache Kafka example (3)
16. Flink Table API & SQL, connecting to external systems: connectors and formats, JDBC example (4)
16. Flink Table API & SQL, connecting to external systems: connectors and formats, Apache Hive example (6)
20. Flink SQL Client: trying Flink SQL without writing code, and submitting SQL jobs directly to a cluster
22. Flink Table API & SQL: DDL for creating tables
24. Flink Table API & SQL Catalogs: introduction, types, DDL via the Java API and SQL, catalog operations via the Java API and SQL (1)
24. Flink Table API & SQL Catalogs: operating on databases and tables via the Java API (2)
24. Flink Table API & SQL Catalogs: operating on views via the Java API (3)
26. Flink SQL: overview and getting-started examples
27. Flink SQL SELECT (select, where, distinct, order by, limit, set operations, and deduplication), with detailed examples (1)
27. Flink SQL SELECT (SQL hints and joins), with detailed examples (2)
27. Flink SQL SELECT (windowing functions), with detailed examples (3)
27. Flink SQL SELECT (window aggregation), with detailed examples (4)
27. Flink SQL SELECT (group aggregation, over aggregation, and window join), with detailed examples (5)
27. Flink SQL SELECT (Top-N, window Top-N, and window deduplication), with detailed examples (6)
27. Flink SQL SELECT (pattern recognition), with detailed examples (7)
29. Flink SQL: DESCRIBE, EXPLAIN, USE, SHOW, LOAD, UNLOAD, SET, RESET, JAR, JOB statements, UPDATE, DELETE (1)
29. Flink SQL: DESCRIBE, EXPLAIN, USE, SHOW, LOAD, UNLOAD, SET, RESET, JAR, JOB statements, UPDATE, DELETE (2)
30. Flink SQL Client examples with Kafka and the filesystem: configuration files, tables, views, and more
32. Flink Table API & SQL: implementing user-defined sources and sinks, with detailed examples
41. Flink Hive dialect: introduction and detailed examples
42. Flink Table API & SQL: Hive Catalog
43. Flink: reading and writing Hive, with detailed verification examples
44. Flink modules: introduction and usage examples, and using Hive built-in functions and user-defined functions from Flink SQL (some explanations found online appear to be wrong)

Contents

Flink series articles
5. Catalog API
  3. View operations
    1. Official example
    2. Creating a Hive view with SQL
      1. Maven dependencies
      2. Code
      3. Run results
    3. Creating a Hive view with the API
      1. Maven dependencies
      2. Code
      3. Run results

This article briefly introduces operating on views through the Java API, providing three examples: a SQL implementation and two Java API implementations.

This article assumes that the Flink, Hive, and Hadoop clusters are up and usable.

The Java API examples were built against Flink 1.13.5; unless otherwise noted, the SQL examples target Flink 1.17.

5. Catalog API

3. View operations

1. Official example

// create view
catalog.createTable(new ObjectPath("mydb", "myview"), new CatalogViewImpl(...), false);

// drop view
catalog.dropTable(new ObjectPath("mydb", "myview"), false);

// alter view
catalog.alterTable(new ObjectPath("mydb", "mytable"), new CatalogViewImpl(...), false);

// rename view
catalog.renameTable(new ObjectPath("mydb", "myview"), "my_new_view", false);

// get view
catalog.getTable("myview");

// check if a view exist or not
catalog.tableExists("mytable");

// list views in a database
catalog.listViews("mydb");
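The official snippet leaves the CatalogViewImpl arguments as an ellipsis. As a runnable illustration of the same lifecycle, here is a minimal sketch against Flink 1.13's GenericInMemoryCatalog, so it runs without a Hive metastore; the class name, catalog/database/view names, schema, and queries are all made up for the example:

import java.util.HashMap;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.catalog.CatalogDatabaseImpl;
import org.apache.flink.table.catalog.CatalogViewImpl;
import org.apache.flink.table.catalog.GenericInMemoryCatalog;
import org.apache.flink.table.catalog.ObjectPath;

public class ViewLifecycleSketch {
    public static void main(String[] args) throws Exception {
        // in-memory catalog, so the sketch needs no external services
        GenericInMemoryCatalog catalog = new GenericInMemoryCatalog("mycatalog");
        catalog.createDatabase("mydb", new CatalogDatabaseImpl(new HashMap<>(), "demo database"), true);

        // schema of the (hypothetical) underlying table t1
        TableSchema schema = new TableSchema(
                new String[] { "id", "name" },
                new TypeInformation[] { Types.INT, Types.STRING });

        // create view: CatalogViewImpl(originalQuery, expandedQuery, schema, properties, comment)
        ObjectPath viewPath = new ObjectPath("mydb", "myview");
        catalog.createTable(viewPath,
                new CatalogViewImpl("select id, name from t1",
                        "select t1.id, t1.name from mydb.t1",
                        schema, new HashMap<>(), "demo view"),
                false);

        System.out.println(catalog.tableExists(viewPath)); // true
        System.out.println(catalog.listViews("mydb"));     // [myview]

        // rename, then drop the view
        catalog.renameTable(viewPath, "my_new_view", false);
        catalog.dropTable(new ObjectPath("mydb", "my_new_view"), false);
    }
}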
2. Creating a Hive view with SQL

1. Maven dependencies

<properties>
  <encoding>UTF-8</encoding>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <maven.compiler.source>1.8</maven.compiler.source>
  <maven.compiler.target>1.8</maven.compiler.target>
  <java.version>1.8</java.version>
  <scala.version>2.12</scala.version>
  <flink.version>1.13.6</flink.version>
</properties>

<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients_2.11</artifactId>
    <version>${flink.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-scala_2.11</artifactId>
    <version>${flink.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-java</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-scala_2.11</artifactId>
    <version>${flink.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.11</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-api-scala-bridge_2.11</artifactId>
    <version>${flink.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-api-java-bridge_2.11</artifactId>
    <version>${flink.version}</version>
  </dependency>
  <!-- blink planner, the default since 1.11 -->
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-planner-blink_2.11</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-common</artifactId>
    <version>${flink.version}</version>
  </dependency>
  <!-- flink connectors -->
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka_2.12</artifactId>
    <version>${flink.version}</version>
    <!-- <scope>provided</scope> -->
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-sql-connector-kafka_2.12</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-jdbc_2.12</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-csv</artifactId>
    <version>${flink.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-json</artifactId>
    <version>${flink.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-hive_2.12</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-metastore</artifactId>
    <version>2.1.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-exec</artifactId>
    <version>3.1.2</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-shaded-hadoop-2-uber</artifactId>
    <version>2.7.5-10.0</version>
    <!-- <scope>provided</scope> -->
  </dependency>
  <dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.38</version>
    <scope>provided</scope>
    <!-- <version>8.0.20</version> -->
  </dependency>
  <!-- logging -->
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.7</version>
    <scope>runtime</scope>
  </dependency>
  <dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
    <scope>runtime</scope>
  </dependency>
  <dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.44</version>
  </dependency>
  <dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.18.2</version>
    <!-- <scope>provided</scope> -->
  </dependency>
</dependencies>
<build>
  <sourceDirectory>src/main/java</sourceDirectory>
  <plugins>
    <!-- compiler plugin -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.5.1</version>
      <configuration>
        <source>1.8</source>
        <target>1.8</target>
        <!-- <encoding>${project.build.sourceEncoding}</encoding> -->
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>2.18.1</version>
      <configuration>
        <useFile>false</useFile>
        <disableXmlReport>true</disableXmlReport>
        <includes>
          <include>**/*Test.*</include>
          <include>**/*Suite.*</include>
        </includes>
      </configuration>
    </plugin>
    <!-- shade plugin (bundles all dependencies) -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>2.3</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <filters>
              <filter>
                <artifact>*:*</artifact>
                <excludes>
                  <!-- zip -d learn_spark.jar META-INF/*.RSA META-INF/*.DSA META-INF/*.SF -->
                  <exclude>META-INF/*.SF</exclude>
                  <exclude>META-INF/*.DSA</exclude>
                  <exclude>META-INF/*.RSA</exclude>
                </excludes>
              </filter>
            </filters>
            <transformers>
              <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                <!-- main class of the jar (optional) -->
                <mainClass>org.table_sql.TestHiveViewBySQLDemo</mainClass>
              </transformer>
            </transformers>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

2. Code

package org.table_sql;

import java.util.HashMap;
import java.util.List;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.SqlDialect;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.catalog.CatalogDatabaseImpl;
import org.apache.flink.table.catalog.CatalogView;
import org.apache.flink.table.catalog.ObjectPath;
import org.apache.flink.table.catalog.hive.HiveCatalog;
import org.apache.flink.table.module.hive.HiveModule;
import org.apache.flink.types.Row;
import org.apache.flink.util.CollectionUtil;

/**
 * @author alanchan
 */
public class TestHiveViewBySQLDemo {
    public static final String tableName = "viewtest";
    public static final String hive_create_table_sql = "CREATE TABLE " + tableName + " (\n"
            + "  id INT,\n"
            + "  name STRING,\n"
            + "  age INT\n"
            + ") TBLPROPERTIES (\n"
            + "  'sink.partition-commit.delay'='5 s',\n"
            + "  'sink.partition-commit.trigger'='partition-time',\n"
            + "  'sink.partition-commit.policy.kind'='metastore,success-file'\n"
            + ")";

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tenv = StreamTableEnvironment.create(env);

        // load the Hive module so Hive built-in functions are available
        String moduleName = "myhive";
        String hiveVersion = "3.1.2";
        tenv.loadModule(moduleName, new HiveModule(hiveVersion));

        String name = "alan_hive";
        String defaultDatabase = "default";
        String databaseName = "viewtest_db";
        String hiveConfDir = "/usr/local/bigdata/apache-hive-3.1.2-bin/conf";

        // register and activate the Hive catalog
        HiveCatalog hiveCatalog = new HiveCatalog(name, defaultDatabase, hiveConfDir);
        tenv.registerCatalog(name, hiveCatalog);
        tenv.useCatalog(name);
        tenv.listDatabases();
        hiveCatalog.createDatabase(databaseName, new CatalogDatabaseImpl(new HashMap<>(), hiveConfDir) {}, true);
        // tenv.executeSql("create database " + databaseName);
        tenv.useDatabase(databaseName);

        // create the first view, viewName_byTable, on top of the table
        String selectSQL = "select * from " + tableName;
        String viewName_byTable = "test_view_table_V";
        String createViewSQL = "create view " + viewName_byTable + " as " + selectSQL;
        tenv.getConfig().setSqlDialect(SqlDialect.HIVE);
        tenv.executeSql(hive_create_table_sql);
        // tenv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
        String insertSQL = "insert into " + tableName + " values (1,'alan',18)";
        tenv.executeSql(insertSQL);
        tenv.executeSql(createViewSQL);
        tenv.listViews();

        CatalogView catalogView = (CatalogView) hiveCatalog.getTable(new ObjectPath(databaseName, viewName_byTable));
        List<Row> results = CollectionUtil.iteratorToList(tenv.executeSql("select * from " + viewName_byTable).collect());
        for (Row row : results) {
            System.out.println("test_view_table_V: " + row.toString());
        }

        // create the second view, on top of the first view, with column aliases and a comment
        String viewName_byView = "test_view_view_V";
        tenv.executeSql("create view " + viewName_byView + " (v2_id,v2_name,v2_age) comment 'test_view_view_V comment' as select * from " + viewName_byTable);
        catalogView = (CatalogView) hiveCatalog.getTable(new ObjectPath(databaseName, viewName_byView));
        results = CollectionUtil.iteratorToList(tenv.executeSql("select * from " + viewName_byView).collect());
        System.out.println("test_view_view_V comment : " + catalogView.getComment());
        for (Row row : results) {
            System.out.println("test_view_view_V : " + row.toString());
        }

        tenv.executeSql("drop database " + databaseName + " cascade");
    }
}
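Besides selecting from the view, the CatalogView handle returned by getTable() also carries the view's definition strings, which the next section discusses in detail. As a small hedged fragment that could be appended inside main() above (it reuses the demo's hiveCatalog, databaseName, and viewName_byTable variables):

// print the stored definition of the view created above;
// getOriginalQuery()/getExpandedQuery() are defined on the CatalogView interface
CatalogView v = (CatalogView) hiveCatalog.getTable(new ObjectPath(databaseName, viewName_byTable));
System.out.println("comment        : " + v.getComment());
System.out.println("original query : " + v.getOriginalQuery());
System.out.println("expanded query : " + v.getExpandedQuery());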
3. Run results

This assumes a working Flink cluster. Package the project into a jar with Maven, then submit it:

[alanchan@server2 bin]$ flink run /usr/local/bigdata/flink-1.13.5/examples/table/table_sql-0.0.2-SNAPSHOT.jar
Hive Session ID = ed6d5c9b-e00f-4881-840d-24c72aba6db7
Hive Session ID = 14445dc8-1f08-4f0f-bb45-aba8c6f52174
Job has been submitted with JobID bff7b59367bd5de6e778b442c4cc4404
Hive Session ID = 4c16f4fc-4c10-4353-b322-e6633e3ebe3d
Hive Session ID = 57949f09-bdcb-497f-a85c-ed9766fc4ce3
2023-10-13 02:42:24,891 INFO  org.apache.hadoop.mapred.FileInputFormat [] - Total input files to process : 0
Job has been submitted with JobID 80e48bb76e3d580412fdcdc434a8a979
test_view_table_V: +I[1, alan, 18]
Hive Session ID = a73d5b93-2129-4159-ad5e-0814df77e987
Hive Session ID = e4ae1a79-4d5e-4835-81de-ebc2041eedf9
2023-10-13 02:42:33,648 INFO  org.apache.hadoop.mapred.FileInputFormat [] - Total input files to process : 1
Job has been submitted with JobID c228d9ce3bdce91dc68bff75d14db1e5
test_view_view_V comment : test_view_view_V comment
test_view_view_V : +I[1, alan, 18]
Hive Session ID = e4a38393-d760-4bd3-8d8b-864cbe0daba7

3. Creating a Hive view with the API

Creating a view through the API is comparatively cumbersome, and version upgrades have left deprecated methods behind. Creating a view via TableSchema and CatalogViewImpl is deprecated; the current recommendation is to build the view from CatalogView and ResolvedSchema.

Also note the difference between the following two parameters:

String originalQuery: the original SQL, as written.
String expandedQuery: the query with table names qualified by the database, possibly even including the Hive catalog.

For example, if default is the current database and the query is select * from test1, then originalQuery can simply be "select name,value from test1", while expandedQuery becomes "select test1.name, test1.value from default.test1".

Altering and dropping views is comparatively simple and is not covered in detail here; a short sketch follows.
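For completeness, here is a minimal hedged sketch of what alter and drop could look like with the same Catalog API, continuing the Hive example. It reuses the imports and the hiveCatalog handle from the code in this section; the replacement query, schema, and comment are invented for illustration:

// alter view: overwrite the definition by writing a new CatalogView to the same path
TableSchema newSchema = new TableSchema(
        new String[] { "id", "name", "age" },
        new TypeInformation[] { Types.INT, Types.STRING, Types.INT });
CatalogBaseTable newDefinition = new CatalogViewImpl(
        "select id, name, age from viewtest where age > 18",             // hypothetical new originalQuery
        "select id, name, age from viewtest_db.viewtest where age > 18", // hypothetical new expandedQuery
        newSchema, new HashMap<>(), "updated view comment");
hiveCatalog.alterTable(new ObjectPath("viewtest_db", "test_view_table_V"), newDefinition, false);

// drop view: views share the table namespace, so dropTable also removes views
hiveCatalog.dropTable(new ObjectPath("viewtest_db", "test_view_table_V"), false);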
1. Maven dependencies

The dependencies are identical to the previous example; only the mainClass changes to this example's class, so the pom is not repeated.

2. Code

package org.table_sql;

import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.api.SqlDialect;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.catalog.CatalogBaseTable;
import org.apache.flink.table.catalog.CatalogDatabaseImpl;
import org.apache.flink.table.catalog.CatalogView;
import org.apache.flink.table.catalog.CatalogViewImpl;
import org.apache.flink.table.catalog.Column;
import org.apache.flink.table.catalog.ObjectPath;
import org.apache.flink.table.catalog.ResolvedCatalogView;
import org.apache.flink.table.catalog.ResolvedSchema;
import org.apache.flink.table.catalog.hive.HiveCatalog;
import org.apache.flink.table.module.hive.HiveModule;
import org.apache.flink.types.Row;
import org.apache.flink.util.CollectionUtil;

/**
 * @author alanchan
 */
public class TestHiveViewByAPIDemo {
    public static final String tableName = "viewtest";
    public static final String hive_create_table_sql = "CREATE TABLE " + tableName + " (\n"
            + "  id INT,\n"
            + "  name STRING,\n"
            + "  age INT\n"
            + ") TBLPROPERTIES (\n"
            + "  'sink.partition-commit.delay'='5 s',\n"
            + "  'sink.partition-commit.trigger'='partition-time',\n"
            + "  'sink.partition-commit.policy.kind'='metastore,success-file'\n"
            + ")";

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tenv = StreamTableEnvironment.create(env);
        System.setProperty("HADOOP_USER_NAME", "alanchan");

        String moduleName = "myhive";
        String hiveVersion = "3.1.2";
        tenv.loadModule(moduleName, new HiveModule(hiveVersion));

        String catalogName = "alan_hive";
        String defaultDatabase = "default";
        String databaseName = "viewtest_db";
        String hiveConfDir = "/usr/local/bigdata/apache-hive-3.1.2-bin/conf";

        HiveCatalog hiveCatalog = new HiveCatalog(catalogName, defaultDatabase, hiveConfDir);
        tenv.registerCatalog(catalogName, hiveCatalog);
        tenv.useCatalog(catalogName);
        tenv.listDatabases();
        hiveCatalog.createDatabase(databaseName, new CatalogDatabaseImpl(new HashMap<>(), hiveConfDir) {}, true);
        // tenv.executeSql("create database " + databaseName);
        tenv.useDatabase(databaseName);

        // create the source table and some data, using the Hive dialect
        tenv.getConfig().setSqlDialect(SqlDialect.HIVE);
        tenv.executeSql(hive_create_table_sql);
        String insertSQL = "insert into " + tableName + " values (1,'alan',18)";
        String insertSQL2 = "insert into " + tableName + " values (2,'alan2',19)";
        String insertSQL3 = "insert into " + tableName + " values (3,'alan3',20)";
        tenv.executeSql(insertSQL);
        tenv.executeSql(insertSQL2);
        tenv.executeSql(insertSQL3);
        tenv.getConfig().setSqlDialect(SqlDialect.DEFAULT);

        String viewName1 = "test_view_table_V";
        String viewName2 = "test_view_table_V2";
        ObjectPath path1 = new ObjectPath(databaseName, viewName1);
        // ObjectPath.fromString("viewtest_db.test_view_table_V2")
        ObjectPath path2 = new ObjectPath(databaseName, viewName2);

        String originalQuery = "SELECT id, name, age FROM " + tableName + " WHERE id >= 1";
        // String originalQuery = String.format("select * from %s", tableName) + " WHERE id >= 1";
        System.out.println("originalQuery: " + originalQuery);

        String expandedQuery = "SELECT id, name, age FROM " + databaseName + "." + tableName + " WHERE id >= 1";
        // String expandedQuery = String.format("select * from %s.%s", catalogName, path1.getFullName());
        System.out.println("expandedQuery: " + expandedQuery);
        String comment = "this is a comment";

        // first way to create the view (TableSchema + CatalogViewImpl), already deprecated
        createView1(originalQuery, expandedQuery, comment, hiveCatalog, path1);

        // query the first view
        List<Row> results = CollectionUtil.iteratorToList(tenv.executeSql("select * from " + viewName1).collect());
        for (Row row : results) {
            System.out.println("test_view_table_V: " + row.toString());
        }

        // second way to create the view (Schema + ResolvedSchema)
        createView2(originalQuery, expandedQuery, comment, hiveCatalog, path2);
        List<Row> results2 = CollectionUtil.iteratorToList(tenv.executeSql("select * from viewtest_db.test_view_table_V2").collect());
        for (Row row : results2) {
            System.out.println("test_view_table_V2: " + row.toString());
        }

        tenv.executeSql("drop database " + databaseName + " cascade");
    }

    // deprecated approach: TableSchema + CatalogViewImpl
    static void createView1(String originalQuery, String expandedQuery, String comment, HiveCatalog hiveCatalog, ObjectPath path) throws Exception {
        TableSchema viewSchema = new TableSchema(
                new String[] { "id", "name", "age" },
                new TypeInformation[] { Types.INT, Types.STRING, Types.INT });
        CatalogBaseTable viewTable = new CatalogViewImpl(originalQuery, expandedQuery, viewSchema, new HashMap<>(), comment);
        hiveCatalog.createTable(path, viewTable, false);
    }

    // recommended approach: CatalogView.of + ResolvedCatalogView
    static void createView2(String originalQuery, String expandedQuery, String comment, HiveCatalog hiveCatalog, ObjectPath path) throws Exception {
        ResolvedSchema resolvedSchema = new ResolvedSchema(
                Arrays.asList(
                        Column.physical("id", DataTypes.INT()),
                        Column.physical("name", DataTypes.STRING()),
                        Column.physical("age", DataTypes.INT())),
                Collections.emptyList(),
                null);
        CatalogView origin = CatalogView.of(
                Schema.newBuilder().fromResolvedSchema(resolvedSchema).build(),
                comment,
                originalQuery,
                expandedQuery,
                Collections.emptyMap());
        CatalogView view = new ResolvedCatalogView(origin, resolvedSchema);
        hiveCatalog.createTable(path, view, false);
    }
}
3. Run results

[alanchan@server2 bin]$ flink run /usr/local/bigdata/flink-1.13.5/examples/table/table_sql-0.0.3-SNAPSHOT.jar
Hive Session ID = ab4d159a-b2d3-489e-988f-eebdc43d9517
Hive Session ID = 391de19c-5d5a-4a83-a88c-c43cca71fc63
Job has been submitted with JobID a880510032165523f3f2a559c5ab4ec9
Hive Session ID = cb063c31-eaf2-44e3-8fc0-9e8d2a6a3a5d
Job has been submitted with JobID cb05286c404b561306f8eb3969c3456a
Hive Session ID = 8132b36e-c9e2-41a2-8f42-3fe842e0991f
Job has been submitted with JobID 264aef7da1b17598bda159d946827dea
Hive Session ID = 7657be14-8188-4362-84a9-4c84c596021b
2023-10-16 07:21:19,073 INFO  org.apache.hadoop.mapred.FileInputFormat [] - Total input files to process : 3
Job has been submitted with JobID 05c2bb7265b0430cb12e00237f18444b
test_view_table_V: +I[1, alan, 18]
test_view_table_V: +I[2, alan2, 19]
test_view_table_V: +I[3, alan3, 20]
Hive Session ID = 7bb01c0d-03c9-413a-9040-c89676cec3b9
2023-10-16 07:21:27,512 INFO  org.apache.hadoop.mapred.FileInputFormat [] - Total input files to process : 3
Job has been submitted with JobID 79130d1fe56d88a784980d16e7f1cfb4
test_view_table_V2: +I[1, alan, 18]
test_view_table_V2: +I[2, alan2, 19]
test_view_table_V2: +I[3, alan3, 20]
Hive Session ID = 6d44ea95-f733-4c56-8da4-e2687a4bf945

This article briefly introduced operating on views through the Java API, providing three examples: a SQL implementation and two Java API implementations.