1. Upgrade jar versions in the pom file

2. Add elasticsearch and redis test cases
This commit is contained in:
xuwujing 2019-04-01 17:51:32 +08:00
parent 3f26e9165d
commit 01a841009d
9 changed files with 2082 additions and 2135 deletions

README.md (145 changed lines)

@@ -1,71 +1,74 @@
## java-study
**Introduction**
[java-study](https://github.com/xuwujing/java-study) is a collection of code I wrote while learning Java, covering Java fundamentals such as data types, JDK 1.8 features, IO, collections, and threads, as well as common frameworks such as netty, mina, springboot, kafka, storm, zookeeper, redis, hbase, and hive.
**Usage**
Download:
git clone https://github.com/xuwujing/java-study
Then import the project into an IDE as a Maven project and run the main methods.
## Project structure
com.pancm.arithmetic - algorithm-related classes
com.pancm.basics - Java fundamentals: the three OOP principles, modifiers, IO, collections, reflection, cloning, and so on
com.pancm.bigdata - big data classes: hbase, storm, zookeeper, and so on
com.pancm.commons - test cases for third-party utilities: apache commons, apache lang, google common, google guava, joda, and so on
com.pancm.design - design pattern examples, covering the 23 common design patterns
com.pancm.jdk8 - JDK 1.8 classes: lambda, stream, LocalDateTime, and other test code
com.pancm.mq - message middleware classes: kafka and rabbitmq test code
com.pancm.nio - NIO frameworks: mainly netty and mina
com.pancm.others - test classes that are hard to categorize: Jsoup (crawling), logback, lombok, and so on
com.pancm.pojo - entity classes
com.pancm.question - classes for questions that may come up in interviews
com.pancm.sql - database-related classes, including non-relational databases
com.pancm.thread - thread-related classes, from basic usage to various concurrency tests
com.pancm.utils - common utility classes: JSON conversion, date conversion, QR code image generation, AES/MD5/BASE64 encoding and decoding, and redis, kafka, zookeeper helper classes
## Related articles
The articles listed here are blog posts I wrote. They are published on my [personal blog](http://www.panchengming.com), [CSDN](https://blog.csdn.net/qazwsxpcm), and [cnblogs](https://www.cnblogs.com/xuwujing/). Since the personal blog is hosted on GitHub and can be slow to access, and CSDN's reading experience is currently poor, the links below point to cnblogs.
**Java fundamentals:**
- [Basic data types](https://www.cnblogs.com/xuwujing/p/8597557.html)
- [Modifiers and String](https://www.cnblogs.com/xuwujing/p/8638329.html)
- [Encapsulation, inheritance, and polymorphism](https://www.cnblogs.com/xuwujing/p/8681123.html)
- [Collections: List, Map, and Set](https://www.cnblogs.com/xuwujing/p/8886821.html)
- [Multithreading](https://www.cnblogs.com/xuwujing/p/9102870.html)
- [IO streams](https://www.cnblogs.com/xuwujing/p/9191546.html)
- [Summary](https://www.cnblogs.com/xuwujing/p/9236376.html)
**Design patterns:**
- [Singleton pattern](https://www.cnblogs.com/xuwujing/p/9277266.html)
- [Factory method and abstract factory patterns](https://www.cnblogs.com/xuwujing/p/9363142.html)
- [Builder and prototype patterns](https://www.cnblogs.com/xuwujing/p/9496346.html)
- [Adapter and bridge patterns](https://www.cnblogs.com/xuwujing/p/9520851.html)
- [Facade and decorator patterns](https://www.cnblogs.com/xuwujing/p/9545272.html)
- [Composite and filter patterns](https://www.cnblogs.com/xuwujing/p/9630850.html)
- [Flyweight and proxy patterns](https://www.cnblogs.com/xuwujing/p/9704228.html)
- [Chain of responsibility and command patterns](https://www.cnblogs.com/xuwujing/p/9794886.html)
- [Interpreter and iterator patterns](https://www.cnblogs.com/xuwujing/p/9873514.html)
- [Visitor and mediator patterns](https://www.cnblogs.com/xuwujing/p/9911997.html)
- [Strategy and template method patterns](https://www.cnblogs.com/xuwujing/p/9954263.html)
- [Observer and null object patterns](https://www.cnblogs.com/xuwujing/p/10036204.html)
- [Summary](https://www.cnblogs.com/xuwujing/p/10134494.html)
**Advanced:**
- [JDK 1.8 Lambda, Stream, and date APIs in detail](https://www.cnblogs.com/xuwujing/p/10145691.html)
**Other:**
- [A collection of resources I have gathered](https://www.cnblogs.com/xuwujing/p/10393111.html)
## Other
Although most of this code is my own, some of it was copied from the web or from books while I was learning. Some sources were not recorded at the time and have since been forgotten, so some code is not attributed. My apologies if this causes any offense.
## java-study
**Introduction**
[java-study](https://github.com/xuwujing/java-study) is a collection of code I wrote while learning Java, covering Java fundamentals such as data types, JDK 1.8 features, IO, collections, and threads, as well as common frameworks such as netty, mina, springboot, kafka, storm, zookeeper, es, redis, hbase, and hive.
**Usage**
Download:
git clone https://github.com/xuwujing/java-study
Then import the project into an IDE as a Maven project and run the main methods.
## Project structure
com.pancm.arithmetic - algorithm-related classes
com.pancm.basics - Java fundamentals: the three OOP principles, modifiers, IO, collections, reflection, cloning, and so on
com.pancm.bigdata - big data classes: hbase, storm, zookeeper, and so on
com.pancm.commons - test cases for third-party utilities: apache commons, apache lang, google common, google guava, joda, and so on
com.pancm.design - design pattern examples, covering the 23 common design patterns
com.pancm.elasticsearch - elasticsearch usage test cases, including index mapping creation, full-text search, aggregation queries, and so on
com.pancm.jdk8 - JDK 1.8 classes: lambda, stream, LocalDateTime, and other test code
com.pancm.mq - message middleware classes: kafka and rabbitmq test code
com.pancm.nio - NIO frameworks: mainly netty and mina
com.pancm.others - test classes that are hard to categorize: Jsoup (crawling), logback, lombok, and so on
com.pancm.pojo - entity classes
com.pancm.question - classes for questions that may come up in interviews
com.pancm.redis - redis usage classes
com.pancm.sql - database-related classes
com.pancm.thread - thread-related classes, from basic usage to various concurrency tests
com.pancm.utils - common utility classes: JSON conversion, date conversion, QR code image generation, AES/MD5/BASE64 encoding and decoding, and redis, kafka, zookeeper helper classes
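The AES/MD5/BASE64 helpers in com.pancm.utils are not part of this diff. As a rough, self-contained sketch of what the MD5 and Base64 utilities typically wrap, using only the JDK (the class and method names here are illustrative, not the repo's actual API):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class CodecSketch {

    // Hex-encoded MD5 digest of the input string
    public static String md5Hex(String s) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always available in the JDK
        }
    }

    // Standard Base64 encode/decode (java.util.Base64, JDK 8+)
    public static String b64Encode(String s) {
        return Base64.getEncoder().encodeToString(s.getBytes(StandardCharsets.UTF_8));
    }

    public static String b64Decode(String s) {
        return new String(Base64.getDecoder().decode(s), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(md5Hex("hello"));        // 5d41402abc4b2a76b9719d911017c592
        System.out.println(b64Encode("hello"));     // aGVsbG8=
        System.out.println(b64Decode("aGVsbG8=")); // hello
    }
}
```

Note that MD5 is fine for checksums and cache keys but should not be used for password hashing; AES would additionally need key and IV management, which is why such helpers are usually centralized in a utils package.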
## Related articles
The articles listed here are blog posts I wrote. They are published on my [personal blog](http://www.panchengming.com), [CSDN](https://blog.csdn.net/qazwsxpcm), and [cnblogs](https://www.cnblogs.com/xuwujing/). Since the personal blog is hosted on GitHub and can be slow to access, and CSDN's reading experience is currently poor, the links below point to cnblogs.
**Java fundamentals:**
- [Basic data types](https://www.cnblogs.com/xuwujing/p/8597557.html)
- [Modifiers and String](https://www.cnblogs.com/xuwujing/p/8638329.html)
- [Encapsulation, inheritance, and polymorphism](https://www.cnblogs.com/xuwujing/p/8681123.html)
- [Collections: List, Map, and Set](https://www.cnblogs.com/xuwujing/p/8886821.html)
- [Multithreading](https://www.cnblogs.com/xuwujing/p/9102870.html)
- [IO streams](https://www.cnblogs.com/xuwujing/p/9191546.html)
- [Summary](https://www.cnblogs.com/xuwujing/p/9236376.html)
**Design patterns:**
- [Singleton pattern](https://www.cnblogs.com/xuwujing/p/9277266.html)
- [Factory method and abstract factory patterns](https://www.cnblogs.com/xuwujing/p/9363142.html)
- [Builder and prototype patterns](https://www.cnblogs.com/xuwujing/p/9496346.html)
- [Adapter and bridge patterns](https://www.cnblogs.com/xuwujing/p/9520851.html)
- [Facade and decorator patterns](https://www.cnblogs.com/xuwujing/p/9545272.html)
- [Composite and filter patterns](https://www.cnblogs.com/xuwujing/p/9630850.html)
- [Flyweight and proxy patterns](https://www.cnblogs.com/xuwujing/p/9704228.html)
- [Chain of responsibility and command patterns](https://www.cnblogs.com/xuwujing/p/9794886.html)
- [Interpreter and iterator patterns](https://www.cnblogs.com/xuwujing/p/9873514.html)
- [Visitor and mediator patterns](https://www.cnblogs.com/xuwujing/p/9911997.html)
- [Strategy and template method patterns](https://www.cnblogs.com/xuwujing/p/9954263.html)
- [Observer and null object patterns](https://www.cnblogs.com/xuwujing/p/10036204.html)
- [Summary](https://www.cnblogs.com/xuwujing/p/10134494.html)
**Advanced:**
- [JDK 1.8 Lambda, Stream, and date APIs in detail](https://www.cnblogs.com/xuwujing/p/10145691.html)
**Other:**
- [A collection of resources I have gathered](https://www.cnblogs.com/xuwujing/p/10393111.html)
## Other
Although most of this code is my own, some of it was copied from the web or from books while I was learning. Some sources were not recorded at the time and have since been forgotten, so some code is not attributed. My apologies if this causes any offense.

pom.xml (856 changed lines)

@@ -1,456 +1,400 @@
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>1.0.0</groupId>
<artifactId>java_study</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>java_study</name>
<url>http://maven.apache.org</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<java.version>1.8</java.version>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
</properties>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.12</version>
</dependency>
<dependency>
<groupId>jdk.tools</groupId>
<artifactId>jdk.tools</artifactId>
<version>1.8</version>
<scope>system</scope>
<systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
</dependency>
<!-- logging -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.25</version>
</dependency>
<!-- logback jars -->
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<version>1.2.3</version>
</dependency>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-core</artifactId>
<version>1.2.3</version>
</dependency>
<!-- utility jars: start -->
<!--jackson -->
<dependency>
<groupId>com.fasterxml.jackson.jaxrs</groupId>
<artifactId>jackson-jaxrs-json-provider</artifactId>
<version>2.9.6</version>
</dependency>
<!-- gson -->
<dependency>
<groupId>com.google.code.gson</groupId>
<artifactId>gson</artifactId>
<version>2.8.5</version>
</dependency>
<!-- fastjson -->
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>fastjson</artifactId>
<version>1.2.49</version>
</dependency>
<dependency>
<groupId>commons-codec</groupId>
<artifactId>commons-codec</artifactId>
<version>1.11</version>
</dependency>
<!-- Apache commons utilities -->
<dependency>
<groupId>commons-lang</groupId>
<artifactId>commons-lang</artifactId>
<version>2.6</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
<version>3.7</version>
</dependency>
<!-- used for compression -->
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-compress</artifactId>
<version>1.18</version>
</dependency>
<!-- Excel (POI) jars -->
<dependency>
<groupId>org.apache.poi</groupId>
<artifactId>poi</artifactId>
<version>3.9</version>
</dependency>
<dependency>
<groupId>org.apache.poi</groupId>
<artifactId>poi-ooxml</artifactId>
<version>3.17</version>
</dependency>
<!-- QR code jars -->
<dependency>
<groupId>com.google.zxing</groupId>
<artifactId>core</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>com.google.zxing</groupId>
<artifactId>javase</artifactId>
<version>3.0.0</version>
</dependency>
<!-- PageHelper pagination utility -->
<dependency>
<groupId>com.github.pagehelper</groupId>
<artifactId>pagehelper</artifactId>
<version>4.1.0</version>
</dependency>
<!-- Lombok: avoids hand-written getters/setters in POJOs -->
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<version>1.16.18</version>
</dependency>
<!-- Gecco crawler -->
<dependency>
<groupId>com.geccocrawler</groupId>
<artifactId>gecco</artifactId>
<version>1.2.8</version>
</dependency>
<!--protobuf jar -->
<dependency>
<groupId>com.google.protobuf</groupId>
<artifactId>protobuf-java</artifactId>
<version>3.5.1</version>
</dependency>
<!-- Quartz scheduler -->
<dependency>
<groupId>org.quartz-scheduler</groupId>
<artifactId>quartz</artifactId>
<version>2.3.0</version>
</dependency>
<dependency>
<groupId>org.quartz-scheduler</groupId>
<artifactId>quartz-jobs</artifactId>
<version>2.3.0</version>
</dependency>
<!-- utility jars: end -->
<!-- database jars: start -->
<!-- Redis jars -->
<dependency>
<groupId>redis.clients</groupId>
<artifactId>jedis</artifactId>
<version>2.9.0</version>
</dependency>
<!-- MySQL driver -->
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.41</version>
</dependency>
<!-- SQLite jars -->
<dependency>
<groupId>org.xerial</groupId>
<artifactId>sqlite-jdbc</artifactId>
<version>3.20.1</version>
</dependency>
<!-- SQL Server driver -->
<dependency>
<groupId>com.microsoft.sqlserver</groupId>
<artifactId>sqljdbc4</artifactId>
<version>4.0</version>
</dependency>
<!-- database jars: end -->
<!-- networking jars: start -->
<!-- Netty jars -->
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-all</artifactId>
<version>4.1.17.Final</version>
</dependency>
<!-- Mina jars -->
<dependency>
<groupId>org.apache.mina</groupId>
<artifactId>mina-core</artifactId>
<version>2.0.16</version>
</dependency>
<!-- HTTP jars -->
<dependency>
<groupId>com.github.kevinsawicki</groupId>
<artifactId>http-request</artifactId>
<version>6.0</version>
</dependency>
<!-- networking jars: end -->
<!-- message middleware jars: start -->
<!-- RabbitMQ jars -->
<dependency>
<groupId>com.rabbitmq</groupId>
<artifactId>amqp-client</artifactId>
<version>4.3.0</version>
</dependency>
<!-- kafka -->
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.12</artifactId>
<version>1.0.0</version>
<exclusions>
<exclusion>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>1.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-streams</artifactId>
<version>1.0.0</version>
</dependency>
<!-- message middleware jars: end -->
<!-- big data jars: start -->
<!-- zookeeper -->
<dependency>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
<version>3.4.10</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<!-- ZooKeeper client utility -->
<dependency>
<groupId>com.101tec</groupId>
<artifactId>zkclient</artifactId>
<version>0.10</version>
</dependency>
<!-- Hadoop jars -->
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<version>2.8.2</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>2.8.2</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-hdfs</artifactId>
<version>2.8.2</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-mapreduce-client-core</artifactId>
<version>2.8.2</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-yarn-common</artifactId>
<version>2.8.2</version>
</dependency>
<!-- HBase jars -->
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-hadoop-compat</artifactId>
<version>1.3.1</version>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-server</artifactId>
<version>1.4.8</version>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-client</artifactId>
<version>1.4.8</version>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-common</artifactId>
<version>1.4.8</version>
</dependency>
<!-- Hive jars -->
<dependency>
<groupId>org.apache.hive</groupId>
<artifactId>hive-jdbc</artifactId>
<version>3.1.1</version>
</dependency>
<!-- Storm jars -->
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-core</artifactId>
<version>1.2.2</version>
</dependency>
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-kafka</artifactId>
<version>1.2.2</version>
</dependency>
<!-- Spark jars -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>2.4.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.11</artifactId>
<version>2.4.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.11</artifactId>
<version>2.4.0</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>com.sparkjava</groupId>
<artifactId>spark-core</artifactId>
<version>2.8.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-mllib_2.11</artifactId>
<version>2.4.0</version>
<scope>provided</scope>
</dependency>
<!-- Elasticsearch jars -->
<!-- ES high-level REST client API -->
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>elasticsearch-rest-high-level-client</artifactId>
<version>6.6.1</version>
</dependency>
<!-- Jest client -->
<dependency>
<groupId>io.searchbox</groupId>
<artifactId>jest</artifactId>
<version>6.3.1</version>
</dependency>
<!-- big data jars: end -->
</dependencies>
<repositories>
<repository>
<id>spring-milestone</id>
<url>http://repo.spring.io/libs-release</url>
</repository>
</repositories>
</project>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>1.0.0</groupId>
<artifactId>java_study</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>java_study</name>
<url>http://maven.apache.org</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<java.version>1.8</java.version>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
</properties>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.12</version>
</dependency>
<!--<dependency>
<groupId>jdk.tools</groupId>
<artifactId>jdk.tools</artifactId>
<version>1.8</version>
<scope>system</scope>
<systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
</dependency>-->
<!-- logging -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.25</version>
</dependency>
<!-- logback jars -->
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<version>1.2.3</version>
</dependency>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-core</artifactId>
<version>1.2.3</version>
</dependency>
<!-- utility jars: start -->
<!--jackson -->
<dependency>
<groupId>com.fasterxml.jackson.jaxrs</groupId>
<artifactId>jackson-jaxrs-json-provider</artifactId>
<version>2.9.6</version>
</dependency>
<!-- gson -->
<dependency>
<groupId>com.google.code.gson</groupId>
<artifactId>gson</artifactId>
<version>2.8.5</version>
</dependency>
<!-- fastjson -->
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>fastjson</artifactId>
<version>1.2.49</version>
</dependency>
<dependency>
<groupId>commons-codec</groupId>
<artifactId>commons-codec</artifactId>
<version>1.11</version>
</dependency>
<!-- Apache commons utilities -->
<dependency>
<groupId>commons-lang</groupId>
<artifactId>commons-lang</artifactId>
<version>2.6</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
<version>3.7</version>
</dependency>
<!-- used for compression -->
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-compress</artifactId>
<version>1.18</version>
</dependency>
<!-- Excel (POI) jars -->
<dependency>
<groupId>org.apache.poi</groupId>
<artifactId>poi</artifactId>
<version>3.9</version>
</dependency>
<dependency>
<groupId>org.apache.poi</groupId>
<artifactId>poi-ooxml</artifactId>
<version>3.17</version>
</dependency>
<!-- QR code jars -->
<dependency>
<groupId>com.google.zxing</groupId>
<artifactId>core</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>com.google.zxing</groupId>
<artifactId>javase</artifactId>
<version>3.0.0</version>
</dependency>
<!-- PageHelper pagination utility -->
<dependency>
<groupId>com.github.pagehelper</groupId>
<artifactId>pagehelper</artifactId>
<version>4.1.0</version>
</dependency>
<!-- Lombok: avoids hand-written getters/setters in POJOs -->
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<version>1.16.18</version>
</dependency>
<!-- Gecco crawler -->
<dependency>
<groupId>com.geccocrawler</groupId>
<artifactId>gecco</artifactId>
<version>1.2.8</version>
</dependency>
<!--protobuf jar -->
<dependency>
<groupId>com.google.protobuf</groupId>
<artifactId>protobuf-java</artifactId>
<version>3.5.1</version>
</dependency>
<!-- Quartz scheduler -->
<dependency>
<groupId>org.quartz-scheduler</groupId>
<artifactId>quartz</artifactId>
<version>2.3.0</version>
</dependency>
<dependency>
<groupId>org.quartz-scheduler</groupId>
<artifactId>quartz-jobs</artifactId>
<version>2.3.0</version>
</dependency>
<!-- utility jars: end -->
<!-- database jars: start -->
<!-- Redis jars -->
<dependency>
<groupId>redis.clients</groupId>
<artifactId>jedis</artifactId>
<version>2.9.0</version>
</dependency>
<!-- MySQL driver -->
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.41</version>
</dependency>
<!-- SQLite jars -->
<dependency>
<groupId>org.xerial</groupId>
<artifactId>sqlite-jdbc</artifactId>
<version>3.20.1</version>
</dependency>
<!-- SQL Server driver -->
<dependency>
<groupId>com.microsoft.sqlserver</groupId>
<artifactId>sqljdbc4</artifactId>
<version>4.0</version>
</dependency>
<!-- database jars: end -->
<!-- networking jars: start -->
<!-- Netty jars -->
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-all</artifactId>
<version>4.1.17.Final</version>
</dependency>
<!-- Mina jars -->
<dependency>
<groupId>org.apache.mina</groupId>
<artifactId>mina-core</artifactId>
<version>2.0.16</version>
</dependency>
<!-- HTTP jars -->
<dependency>
<groupId>com.github.kevinsawicki</groupId>
<artifactId>http-request</artifactId>
<version>6.0</version>
</dependency>
<!-- networking jars: end -->
<!-- message middleware jars: start -->
<!-- RabbitMQ jars -->
<dependency>
<groupId>com.rabbitmq</groupId>
<artifactId>amqp-client</artifactId>
<version>4.3.0</version>
</dependency>
<!-- kafka -->
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.12</artifactId>
<version>1.0.0</version>
<exclusions>
<exclusion>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>1.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-streams</artifactId>
<version>1.0.0</version>
</dependency>
<!-- message middleware jars: end -->
<!-- big data jars: start -->
<!-- zookeeper -->
<dependency>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
<version>3.4.10</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<!-- ZooKeeper client utility -->
<dependency>
<groupId>com.101tec</groupId>
<artifactId>zkclient</artifactId>
<version>0.10</version>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-client</artifactId>
<version>2.1.4</version>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-common</artifactId>
<version>2.1.4</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>3.2.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.12</artifactId>
<version>2.4.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.12</artifactId>
<version>2.4.0</version>
</dependency>
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-core</artifactId>
<version>1.2.2</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-kafka</artifactId>
<version>1.2.2</version>
<scope>provided</scope>
</dependency>
<!-- big data jars: end -->
<!-- Elasticsearch jars -->
<!-- ES high-level REST client API -->
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>elasticsearch-rest-high-level-client</artifactId>
<version>6.6.1</version>
</dependency>
<!-- Jest client -->
<dependency>
<groupId>io.searchbox</groupId>
<artifactId>jest</artifactId>
<version>6.3.1</version>
</dependency>
</dependencies>
<repositories>
<repository>
<id>spring-milestone</id>
<url>http://repo.spring.io/libs-release</url>
</repository>
</repositories>
</project>
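Version bumps like the ones in this commit (hbase 1.4.8 to 2.1.4, hadoop 2.8.2 to 3.2.0) touch several `<dependency>` blocks at once. One common way to keep related artifacts in step is to route versions through `<properties>`, which this pom already uses for the compiler settings. A sketch under that assumption (the property names are illustrative, not from the repo):

```xml
<properties>
    <hbase.version>2.1.4</hbase.version>
    <hadoop.version>3.2.0</hadoop.version>
</properties>

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>${hbase.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>${hadoop.version}</version>
</dependency>
```

With this layout, a future upgrade changes one property instead of every `<version>` element that uses it.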

EsHighLevelRestSearchTest.java

@@ -1,344 +1,344 @@
package com.pancm.sql.easticsearch;
import java.io.IOException;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.apache.http.HttpHost;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.ShardSearchFailure;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.unit.Fuzziness;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.MatchQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.search.SearchHits;
import org.elasticsearch.search.aggregations.Aggregation;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.Aggregations;
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
import org.elasticsearch.search.aggregations.bucket.terms.Terms.Bucket;
import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;
import org.elasticsearch.search.suggest.Suggest;
import org.elasticsearch.search.suggest.SuggestBuilder;
import org.elasticsearch.search.suggest.SuggestBuilders;
import org.elasticsearch.search.suggest.SuggestionBuilder;
import org.elasticsearch.search.suggest.term.TermSuggestion;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
 * @Title: EsHighLevelRestSearchTest
 * @Description: Java High Level REST Client tutorial for the Elasticsearch high-level client (Search usage)
 * Official documentation:
 * https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high.html
 * @Version:1.0.0
 * @author pancm
 * @date 2019-03-12
 */
public class EsHighLevelRestSearchTest {
private static String elasticIp = "192.169.0.23";
private static int elasticPort = 9200;
private static Logger logger = LoggerFactory.getLogger(EsHighLevelRestSearchTest.class);
private static RestHighLevelClient client = null;
/**
* @param args
*/
public static void main(String[] args) {
try {
init();
search();
} catch (Exception e) {
e.printStackTrace();
}finally {
close();
}
}
/*
 * Initialize the client
 */
private static void init() {
client = new RestHighLevelClient(RestClient.builder(new HttpHost(elasticIp, elasticPort, "http")));
}
/*
 * Close the client
 */
private static void close() {
if (client != null) {
try {
client.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
/**
 * Search usage examples
 *
 * @throws IOException
 */
private static void search() throws IOException {
/*
 * Query all indices in the cluster
 */
SearchRequest searchRequestAll = new SearchRequest();
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.matchAllQuery());
searchRequestAll.source(searchSourceBuilder);
// synchronous search
SearchResponse searchResponseAll = client.search(searchRequestAll, RequestOptions.DEFAULT);
System.out.println("Total hits across all indices: " + searchResponseAll.getHits().getTotalHits());
// query a specific index
SearchRequest searchRequest = new SearchRequest("user");
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
// set the query condition
sourceBuilder.query(QueryBuilders.termQuery("user", "pancm"));
// set the starting offset and page size
sourceBuilder.from(0);
sourceBuilder.size(5);
sourceBuilder.timeout(new TimeValue(60, TimeUnit.SECONDS));
// set routing
// searchRequest.routing("routing");
// set the indices options expression
searchRequest.indicesOptions(IndicesOptions.lenientExpandOpen());
// prefer local shards for this query (the default is any shard in the cluster)
searchRequest.preference("_local");
// sorting
// sort descending by score (the default)
// sourceBuilder.sort(new ScoreSortBuilder().order(SortOrder.DESC));
// sort ascending by a field
// sourceBuilder.sort(new FieldSortBuilder("id").order(SortOrder.ASC));
// disable _source fetching
// sourceBuilder.fetchSource(false);
String[] includeFields = new String[] { "title", "user", "innerObject.*" };
String[] excludeFields = new String[] { "_type" };
// include or exclude specific fields
// sourceBuilder.fetchSource(includeFields, excludeFields);
searchRequest.source(sourceBuilder);
// synchronous search
SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
// HTTP status code, execution time, and whether the request terminated early or timed out
RestStatus status = searchResponse.status();
TimeValue took = searchResponse.getTook();
Boolean terminatedEarly = searchResponse.isTerminatedEarly();
boolean timedOut = searchResponse.isTimedOut();
// statistics about the total number of shards touched by the search, and the successful and failed shards
int totalShards = searchResponse.getTotalShards();
int successfulShards = searchResponse.getSuccessfulShards();
int failedShards = searchResponse.getFailedShards();
// reasons for any failures
for (ShardSearchFailure failure : searchResponse.getShardFailures()) {
// failures should be handled here
}
// results
searchResponse.getHits().forEach(hit -> {
Map<String, Object> map = hit.getSourceAsMap();
String string = hit.getSourceAsString();
System.out.println("Plain query result as Map: " + map);
System.out.println("Plain query result as String: " + string);
});
System.out.println("\n=================\n");
/*
 * Full-text query example
 */
// search for documents whose user field matches pancm
MatchQueryBuilder matchQueryBuilder = new MatchQueryBuilder("user", "pancm");
// enable fuzzy matching
matchQueryBuilder.fuzziness(Fuzziness.AUTO);
// set the prefix length
matchQueryBuilder.prefixLength(3);
// cap max expansions to control the fuzzy matching process
matchQueryBuilder.maxExpansions(10);
/*
 * A QueryBuilder works as well
 */
// QueryBuilder matchQueryBuilder = QueryBuilders.matchQuery("user", "kimchy")
// .fuzziness(Fuzziness.AUTO)
// .prefixLength(3)
// .maxExpansions(10);
SearchSourceBuilder searchSourceBuilder2 = new SearchSourceBuilder();
searchSourceBuilder2.query(matchQueryBuilder);
SearchRequest searchRequest2 = new SearchRequest();
searchRequest2.source(searchSourceBuilder2);
// synchronous search
SearchResponse searchResponse2 = client.search(searchRequest2, RequestOptions.DEFAULT);
SearchHits hits = searchResponse2.getHits();
// total hit count and max score
long totalHits = hits.getTotalHits();
float maxScore = hits.getMaxScore();
hits.forEach(hit -> {
String index = hit.getIndex();
String type = hit.getType();
String id = hit.getId();
float score = hit.getScore();
Map<String, Object> sourceAsMap = hit.getSourceAsMap();
String string = hit.getSourceAsString();
System.out.println("Match query result as Map: " + sourceAsMap);
System.out.println("Match query result as String: " + string);
String documentTitle = (String) sourceAsMap.get("title");
// List<Object> users = (List<Object>) sourceAsMap.get("user");
Map<String, Object> innerObject =
(Map<String, Object>) sourceAsMap.get("innerObject");
});
System.out.println("\n=================\n");
/*
 * Highlight query
 */
SearchSourceBuilder searchSourceBuilder3 = new SearchSourceBuilder();
HighlightBuilder highlightBuilder = new HighlightBuilder();
HighlightBuilder.Field highlightTitle = new HighlightBuilder.Field("title");
// set the highlighter type for this field (must be a valid type such as "unified")
highlightTitle.highlighterType("unified");
highlightBuilder.field(highlightTitle);
HighlightBuilder.Field highlightUser = new HighlightBuilder.Field("user");
highlightBuilder.field(highlightUser);
searchSourceBuilder3.highlighter(highlightBuilder);
SearchRequest searchRequest3 = new SearchRequest();
searchRequest3.source(searchSourceBuilder3);
// synchronous search
SearchResponse searchResponse3 = client.search(searchRequest3, RequestOptions.DEFAULT);
searchResponse3.getHits().forEach(hit -> {
Map<String, Object> map = hit.getSourceAsMap();
String string = hit.getSourceAsString();
System.out.println("Highlight query result as Map: " + map);
System.out.println("Highlight query result as String: " + string);
});
System.out.println("\n=================\n");
/*
 * Aggregation query
 */
SearchSourceBuilder searchSourceBuilder4 = new SearchSourceBuilder();
// terms performs a group-by: group by user and create a new aggregation named user_
TermsAggregationBuilder aggregation = AggregationBuilders.terms("user_").field("user");
aggregation.subAggregation(AggregationBuilders.avg("average_age").field("age"));
searchSourceBuilder4.aggregation(aggregation);
SearchRequest searchRequest4 = new SearchRequest();
searchRequest4.source(searchSourceBuilder4);
// synchronous search
SearchResponse searchResponse4 = client.search(searchRequest4, RequestOptions.DEFAULT);
// aggregation results returned in the response
Aggregations aggregations = searchResponse4.getAggregations();
List<Aggregation> aggregationList = aggregations.asList();
for (Aggregation agg : aggregations) {
String type = agg.getType();
if (type.equals(TermsAggregationBuilder.NAME)) {
Bucket elasticBucket = ((Terms) agg).getBucketByKey("Elastic");
long numberOfDocs = elasticBucket.getDocCount();
System.out.println("Doc count: " + numberOfDocs);
}
}
// Terms byCompanyAggregation = aggregations.get("user_");
// Bucket elasticBucket = byCompanyAggregation.getBucketByKey("Elastic");
// Avg averageAge = elasticBucket.getAggregations().get("average_age");
// double avg = averageAge.getValue();
//
// System.out.println("Aggregation result: " + avg);
/*
 * Suggest query
 */
SearchSourceBuilder searchSourceBuilder5 = new SearchSourceBuilder();
SuggestionBuilder termSuggestionBuilder = SuggestBuilders.termSuggestion("user").text("pancm");
SuggestBuilder suggestBuilder = new SuggestBuilder();
suggestBuilder.addSuggestion("suggest_user", termSuggestionBuilder);
searchSourceBuilder5.suggest(suggestBuilder);
SearchRequest searchRequest5 = new SearchRequest();
searchRequest5.source(searchSourceBuilder5);
// 同步查询
SearchResponse searchResponse5= client.search(searchRequest5, RequestOptions.DEFAULT);
Suggest suggest = searchResponse5.getSuggest();
TermSuggestion termSuggestion = suggest.getSuggestion("suggest_user");
for (TermSuggestion.Entry entry : termSuggestion.getEntries()) {
for (TermSuggestion.Entry.Option option : entry) {
String suggestText = option.getText().string();
System.out.println("返回结果:"+suggestText);
}
}
}
}
package com.pancm.easticsearch;
import java.io.IOException;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.apache.http.HttpHost;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.ShardSearchFailure;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.unit.Fuzziness;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.MatchQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.search.SearchHits;
import org.elasticsearch.search.aggregations.Aggregation;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.Aggregations;
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
import org.elasticsearch.search.aggregations.bucket.terms.Terms.Bucket;
import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;
import org.elasticsearch.search.suggest.Suggest;
import org.elasticsearch.search.suggest.SuggestBuilder;
import org.elasticsearch.search.suggest.SuggestBuilders;
import org.elasticsearch.search.suggest.SuggestionBuilder;
import org.elasticsearch.search.suggest.term.TermSuggestion;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
 * @Title: EsHighLevelRestSearchTest
 * @Description: Java High Level REST Client search tutorial
 * Official docs:
 * https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high.html
 * @Version: 1.0.0
 * @author pancm
 * @date March 12, 2019
 */
public class EsHighLevelRestSearchTest {
private static String elasticIp = "192.169.0.23";
private static int elasticPort = 9200;
private static Logger logger = LoggerFactory.getLogger(EsHighLevelRestSearchTest.class);
private static RestHighLevelClient client = null;
/**
* @param args
*/
public static void main(String[] args) {
try {
init();
search();
} catch (Exception e) {
e.printStackTrace();
}finally {
close();
}
}
/*
* 初始化服务
*/
private static void init() {
client = new RestHighLevelClient(RestClient.builder(new HttpHost(elasticIp, elasticPort, "http")));
}
/*
* 关闭服务
*/
private static void close() {
if (client != null) {
try {
client.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
/**
* search查询使用示例
*
* @throws IOException
*/
private static void search() throws IOException {
/*
 * Query across all indices in the cluster
 */
SearchRequest searchRequestAll = new SearchRequest();
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.matchAllQuery());
searchRequestAll.source(searchSourceBuilder);
// 同步查询
SearchResponse searchResponseAll = client.search(searchRequestAll, RequestOptions.DEFAULT);
System.out.println("所有查询总数:" + searchResponseAll.getHits().getTotalHits());
// 查询指定的索引库
SearchRequest searchRequest = new SearchRequest("user");
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
// 设置查询条件
sourceBuilder.query(QueryBuilders.termQuery("user", "pancm"));
// Set paging: start offset and page size
sourceBuilder.from(0);
sourceBuilder.size(5);
sourceBuilder.timeout(new TimeValue(60, TimeUnit.SECONDS));
// 设置路由
// searchRequest.routing("routing");
// 设置索引库表达式
searchRequest.indicesOptions(IndicesOptions.lenientExpandOpen());
// Prefer local shards for this search (by default any shard in the cluster may serve it)
searchRequest.preference("_local");
// 排序
// 根据默认值进行降序排序
// sourceBuilder.sort(new ScoreSortBuilder().order(SortOrder.DESC));
// 根据字段进行升序排序
// sourceBuilder.sort(new FieldSortBuilder("id").order(SortOrder.ASC));
// Disable fetching _source
// sourceBuilder.fetchSource(false);
String[] includeFields = new String[] { "title", "user", "innerObject.*" };
String[] excludeFields = new String[] { "_type" };
// 包含或排除字段
// sourceBuilder.fetchSource(includeFields, excludeFields);
searchRequest.source(sourceBuilder);
// 同步查询
SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
// HTTP status code, execution time, and whether the request terminated early or timed out
RestStatus status = searchResponse.status();
TimeValue took = searchResponse.getTook();
Boolean terminatedEarly = searchResponse.isTerminatedEarly();
boolean timedOut = searchResponse.isTimedOut();
// Statistics on the total number of shards the search touched, and how many succeeded or failed
int totalShards = searchResponse.getTotalShards();
int successfulShards = searchResponse.getSuccessfulShards();
int failedShards = searchResponse.getFailedShards();
// 失败的原因
for (ShardSearchFailure failure : searchResponse.getShardFailures()) {
// failures should be handled here
}
// 结果
searchResponse.getHits().forEach(hit -> {
Map<String, Object> map = hit.getSourceAsMap();
String string = hit.getSourceAsString();
System.out.println("普通查询的Map结果:" + map);
System.out.println("普通查询的String结果:" + string);
});
System.out.println("\n=================\n");
/*
* 全文查询使用示例
*/
// 搜索字段user为pancm的数据
MatchQueryBuilder matchQueryBuilder = new MatchQueryBuilder("user", "pancm");
// 设置模糊查询
matchQueryBuilder.fuzziness(Fuzziness.AUTO);
// 设置前缀长度
matchQueryBuilder.prefixLength(3);
// 设置最大扩展选项来控制查询的模糊过程
matchQueryBuilder.maxExpansions(10);
/*
* QueryBuilder也可以
*/
// QueryBuilder matchQueryBuilder = QueryBuilders.matchQuery("user", "kimchy")
// .fuzziness(Fuzziness.AUTO)
// .prefixLength(3)
// .maxExpansions(10);
SearchSourceBuilder searchSourceBuilder2 = new SearchSourceBuilder();
searchSourceBuilder2.query(matchQueryBuilder);
SearchRequest searchRequest2 = new SearchRequest();
searchRequest2.source(searchSourceBuilder2);
// Synchronous query (note: use searchRequest2 here, not the previous request)
SearchResponse searchResponse2 = client.search(searchRequest2, RequestOptions.DEFAULT);
SearchHits hits = searchResponse2.getHits();
//总条数和分值
long totalHits = hits.getTotalHits();
float maxScore = hits.getMaxScore();
hits.forEach(hit -> {
String index = hit.getIndex();
String type = hit.getType();
String id = hit.getId();
float score = hit.getScore();
Map<String, Object> sourceAsMap = hit.getSourceAsMap();
String string = hit.getSourceAsString();
System.out.println("Match查询的Map结果:" + sourceAsMap);
System.out.println("Match查询的String结果:" + string);
String documentTitle = (String) sourceAsMap.get("title");
// List<Object> users = (List<Object>) sourceAsMap.get("user");
Map<String, Object> innerObject =
(Map<String, Object>) sourceAsMap.get("innerObject");
});
System.out.println("\n=================\n");
/*
* 高亮查询
*/
SearchSourceBuilder searchSourceBuilder3 = new SearchSourceBuilder();
HighlightBuilder highlightBuilder = new HighlightBuilder();
HighlightBuilder.Field highlightTitle = new HighlightBuilder.Field("title");
// Set the highlighter type (one of "unified", "plain" or "fvh")
highlightTitle.highlighterType("unified");
highlightBuilder.field(highlightTitle);
HighlightBuilder.Field highlightUser = new HighlightBuilder.Field("user");
highlightBuilder.field(highlightUser);
searchSourceBuilder3.highlighter(highlightBuilder);
SearchRequest searchRequest3 = new SearchRequest();
searchRequest3.source(searchSourceBuilder3);
// 同步查询
SearchResponse searchResponse3 = client.search(searchRequest3, RequestOptions.DEFAULT);
searchResponse3.getHits().forEach(hit -> {
Map<String, Object> map = hit.getSourceAsMap();
String string = hit.getSourceAsString();
System.out.println("Highlight查询的Map结果:" + map);
System.out.println("Highlight查询的String结果:" + string);
});
System.out.println("\n=================\n");
/*
 * Aggregation query
 */
SearchSourceBuilder searchSourceBuilder4 = new SearchSourceBuilder();
// terms aggregation: group by the "user" field, naming the aggregation "user_"
TermsAggregationBuilder aggregation = AggregationBuilders.terms("user_").field("user");
aggregation.subAggregation(AggregationBuilders.avg("average_age").field("age"));
searchSourceBuilder4.aggregation(aggregation);
SearchRequest searchRequest4 = new SearchRequest();
searchRequest4.source(searchSourceBuilder4);
// 同步查询
SearchResponse searchResponse4 = client.search(searchRequest4, RequestOptions.DEFAULT);
//聚合查询返回条件
Aggregations aggregations = searchResponse4.getAggregations();
List<Aggregation> aggregationList = aggregations.asList();
for (Aggregation agg : aggregations) {
String type = agg.getType();
if (type.equals(TermsAggregationBuilder.NAME)) {
// getBucketByKey returns null when no bucket has that key
Bucket elasticBucket = ((Terms) agg).getBucketByKey("Elastic");
if (elasticBucket != null) {
long numberOfDocs = elasticBucket.getDocCount();
System.out.println("Doc count: " + numberOfDocs);
}
}
}
// Terms byCompanyAggregation = aggregations.get("user_");
// Bucket elasticBucket = byCompanyAggregation.getBucketByKey("Elastic");
// Avg averageAge = elasticBucket.getAggregations().get("average_age");
// double avg = averageAge.getValue();
//
// System.out.println("聚合查询返回的结果:"+avg);
/*
* 建议查询
*/
SearchSourceBuilder searchSourceBuilder5 = new SearchSourceBuilder();
SuggestionBuilder termSuggestionBuilder = SuggestBuilders.termSuggestion("user").text("pancm");
SuggestBuilder suggestBuilder = new SuggestBuilder();
suggestBuilder.addSuggestion("suggest_user", termSuggestionBuilder);
searchSourceBuilder5.suggest(suggestBuilder);
SearchRequest searchRequest5 = new SearchRequest();
searchRequest5.source(searchSourceBuilder5);
// 同步查询
SearchResponse searchResponse5= client.search(searchRequest5, RequestOptions.DEFAULT);
Suggest suggest = searchResponse5.getSuggest();
TermSuggestion termSuggestion = suggest.getSuggestion("suggest_user");
for (TermSuggestion.Entry entry : termSuggestion.getEntries()) {
for (TermSuggestion.Entry.Option option : entry) {
String suggestText = option.getText().string();
System.out.println("返回结果:"+suggestText);
}
}
}
}
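The match query above enables `Fuzziness.AUTO`. As a stdlib-only sketch (this class is illustrative, not part of the Elasticsearch client), AUTO derives the allowed edit distance from the term length: 0 edits for terms of up to 2 characters, 1 edit for 3–5 characters, 2 edits for longer terms.

```java
// Illustrative sketch of the Fuzziness.AUTO length thresholds used by the match query.
public class FuzzinessAutoDemo {
    // Maximum character edits AUTO allows for a term of the given length
    static int autoMaxEdits(String term) {
        int len = term.length();
        if (len <= 2) return 0;
        if (len <= 5) return 1;
        return 2;
    }

    public static void main(String[] args) {
        System.out.println(autoMaxEdits("pa"));      // -> 0
        System.out.println(autoMaxEdits("pancm"));   // -> 1
        System.out.println(autoMaxEdits("elastic")); // -> 2
    }
}
```

So the query for "pancm" (5 characters) would still match documents whose `user` term differs by a single edit.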

package com.pancm.sql.easticsearch;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;
import org.apache.http.HttpHost;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.DocWriteRequest;
import org.elasticsearch.action.DocWriteResponse;
import org.elasticsearch.action.bulk.BackoffPolicy;
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.support.WriteRequest;
import org.elasticsearch.action.support.replication.ReplicationResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.rest.RestStatus;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
 * @Title: EsHighLevelRestTest1
 * @Description: Java High Level REST Client tutorial part 1 (basic CRUD)
 * Official docs: https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high.html
 * @since jdk 1.8
 * @Version: 1.0.0
 * @author pancm
 * @date March 5, 2019
 */
public class EsHighLevelRestTest1 {
private static String elasticIp = "192.169.0.23";
private static int elasticPort = 9200;
private static Logger logger = LoggerFactory.getLogger(EsHighLevelRestTest1.class);
private static RestHighLevelClient client = null;
public static void main(String[] args) {
try {
init();
careatindex();
get();
exists();
update();
delete();
bulk();
close();
} catch (Exception e) {
e.printStackTrace();
}
}
/*
* 初始化服务
*/
private static void init() {
client = new RestHighLevelClient(RestClient.builder(new HttpHost(elasticIp, elasticPort, "http")));
}
/*
* 关闭服务
*/
private static void close() {
if (client != null) {
try {
client.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
/**
 * Index a document (the index is created automatically on first write)
 *
 * @throws IOException
 */
private static void careatindex() throws IOException {
String index = "user";
String type = "userindex";
// 唯一编号
String id = "1";
IndexRequest request = new IndexRequest(index, type, id);
/*
* 第一种方式通过jsonString进行创建
*/
// json
String jsonString = "{" + "\"user\":\"pancm\"," + "\"postDate\":\"2019-03-08\","+ "\"age\":\"18\","
+ "\"message\":\"study Elasticsearch\"" + "}";
request.source(jsonString, XContentType.JSON);
/*
* 第二种方式通过map创建,会自动转换成json的数据
*/
Map<String, Object> jsonMap = new HashMap<>();
jsonMap.put("user", "pancm");
jsonMap.put("postDate", "2019-03-08");
jsonMap.put("age", "18");
jsonMap.put("message", "study Elasticsearch");
request.source(jsonMap);
/*
* 第三种方式 : 通过XContentBuilder对象进行创建
*/
XContentBuilder builder = XContentFactory.jsonBuilder();
builder.startObject();
{
builder.field("user", "pancm");
builder.timeField("postDate", "2019-03-08");
builder.field("age", "18");
builder.field("message", "study Elasticsearch");
}
builder.endObject();
request.source(builder);
IndexResponse indexResponse = client.index(request, RequestOptions.DEFAULT);
//对响应结果进行处理
String index1 = indexResponse.getIndex();
String type1 = indexResponse.getType();
String id1 = indexResponse.getId();
long version = indexResponse.getVersion();
// Whether the document was created or updated
if (indexResponse.getResult() == DocWriteResponse.Result.CREATED) {
} else if (indexResponse.getResult() == DocWriteResponse.Result.UPDATED) {
}
ReplicationResponse.ShardInfo shardInfo = indexResponse.getShardInfo();
if (shardInfo.getTotal() != shardInfo.getSuccessful()) {
}
if (shardInfo.getFailed() > 0) {
for (ReplicationResponse.ShardInfo.Failure failure : shardInfo.getFailures()) {
String reason = failure.reason();
}
}
System.out.println("创建成功!");
}
/**
* 查询数据
*
* @throws IOException
*/
private static void get() {
String index = "user";
String type = "userindex";
// 唯一编号
String id = "1";
// 创建查询请求
GetRequest getRequest = new GetRequest(index, type, id);
GetResponse getResponse = null;
try {
getResponse = client.get(getRequest, RequestOptions.DEFAULT);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (ElasticsearchException e) {
// 如果是索引不存在
if (e.status() == RestStatus.NOT_FOUND) {
System.out.println("该索引库不存在!"+index);
}
}
// Guard against a failed request (getResponse stays null) before reading the source
if (getResponse != null && getResponse.isExists()) {
long version = getResponse.getVersion();
String sourceAsString = getResponse.getSourceAsString();
Map<String, Object> sourceAsMap = getResponse.getSourceAsMap();
byte[] sourceAsBytes = getResponse.getSourceAsBytes();
System.out.println("查询返回结果String:"+sourceAsString);
System.out.println("查询返回结果Map:"+sourceAsMap);
} else {
System.out.println("没有找到该数据!");
}
}
/**
* 是否存在
* @throws IOException
*/
private static void exists() throws IOException {
String index = "user";
String type = "userindex";
// 唯一编号
String id = "1";
// 创建查询请求
GetRequest getRequest = new GetRequest(index, type, id);
boolean exists = client.exists(getRequest, RequestOptions.DEFAULT);
ActionListener<Boolean> listener = new ActionListener<Boolean>() {
@Override
public void onResponse(Boolean exists) {
System.out.println("=="+exists);
}
@Override
public void onFailure(Exception e) {
System.out.println("失败的原因:"+e.getMessage());
}
};
//进行异步监听
// client.existsAsync(getRequest, RequestOptions.DEFAULT, listener);
System.out.println("是否存在:"+exists);
}
/**
* 更新操作
* @throws IOException
*/
private static void update() throws IOException {
String index = "user";
String type = "userindex";
// 唯一编号
String id = "1";
UpdateRequest updateRequest = new UpdateRequest();
updateRequest.id(id);
updateRequest.index(index);
updateRequest.type(type);
// A Map can again be used as the partial document
Map<String, Object> jsonMap = new HashMap<>();
jsonMap.put("user", "xuwujing");
jsonMap.put("postDate", "2019-03-11");
updateRequest.doc(jsonMap);
// Treat the partial document as the upsert document
updateRequest.docAsUpsert(true);
// upsert: if the document does not exist yet, index this one instead
updateRequest.upsert(jsonMap);
client.update(updateRequest, RequestOptions.DEFAULT);
System.out.println("更新成功!");
}
/**
* 删除
* @throws IOException
*
*/
private static void delete() throws IOException {
String index = "user";
String type = "userindex";
// 唯一编号
String id = "1";
DeleteRequest deleteRequest=new DeleteRequest();
deleteRequest.id(id);
deleteRequest.index(index);
deleteRequest.type(type);
//设置超时时间
deleteRequest.timeout(TimeValue.timeValueMinutes(2));
//设置刷新策略"wait_for"
//保持此请求打开直到刷新使此请求的内容可以搜索为止此刷新策略与高索引和搜索吞吐量兼容但它会导致请求等待响应直到发生刷新
deleteRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.WAIT_UNTIL);
//同步删除
DeleteResponse deleteResponse = client.delete(deleteRequest, RequestOptions.DEFAULT);
/*
* 异步删除操作
*/
//进行监听
ActionListener<DeleteResponse> listener = new ActionListener<DeleteResponse>() {
@Override
public void onResponse(DeleteResponse deleteResponse) {
System.out.println("响应:"+deleteResponse);
}
@Override
public void onFailure(Exception e) {
System.out.println("删除监听异常:"+e.getMessage());
}
};
//异步删除
// client.deleteAsync(deleteRequest, RequestOptions.DEFAULT, listener);
ReplicationResponse.ShardInfo shardInfo = deleteResponse.getShardInfo();
//如果处理成功碎片的数量少于总碎片的情况,说明还在处理或者处理发生异常
if (shardInfo.getTotal() != shardInfo.getSuccessful()) {
System.out.println("需要处理的碎片总量:"+shardInfo.getTotal());
System.out.println("处理成功的碎片总量:"+shardInfo.getSuccessful());
}
if (shardInfo.getFailed() > 0) {
for (ReplicationResponse.ShardInfo.Failure failure :
shardInfo.getFailures()) {
String reason = failure.reason();
}
}
System.out.println("删除成功!");
}
/**
* 批量操作示例
* @throws InterruptedException
*/
private static void bulk() throws IOException, InterruptedException {
String index = "estest";
String type = "estest";
BulkRequest request = new BulkRequest();
//批量新增
request.add(new IndexRequest(index, type, "1")
.source(XContentType.JSON,"field", "foo"));
request.add(new IndexRequest(index, type, "2")
.source(XContentType.JSON,"field", "bar"));
request.add(new IndexRequest(index, type, "3")
.source(XContentType.JSON,"field", "baz"));
//可以进行修改/删除/新增 操作
request.add(new UpdateRequest(index, type, "2")
.doc(XContentType.JSON,"field", "test"));
request.add(new DeleteRequest(index, type, "3"));
request.add(new IndexRequest(index, type, "4")
.source(XContentType.JSON,"field", "baz"));
BulkResponse bulkResponse = client.bulk(request, RequestOptions.DEFAULT);
//可以快速检查一个或多个操作是否失败 true是有至少一个失败
if (bulkResponse.hasFailures()) {
System.out.println("有一个操作失败!");
}
//对处理结果进行遍历操作并根据不同的操作进行处理
for (BulkItemResponse bulkItemResponse : bulkResponse) {
DocWriteResponse itemResponse = bulkItemResponse.getResponse();
//操作失败的进行处理
if (bulkItemResponse.isFailed()) {
BulkItemResponse.Failure failure = bulkItemResponse.getFailure();
}
if (bulkItemResponse.getOpType() == DocWriteRequest.OpType.INDEX
|| bulkItemResponse.getOpType() == DocWriteRequest.OpType.CREATE) {
IndexResponse indexResponse = (IndexResponse) itemResponse;
} else if (bulkItemResponse.getOpType() == DocWriteRequest.OpType.UPDATE) {
UpdateResponse updateResponse = (UpdateResponse) itemResponse;
} else if (bulkItemResponse.getOpType() == DocWriteRequest.OpType.DELETE) {
DeleteResponse deleteResponse = (DeleteResponse) itemResponse;
}
}
System.out.println("批量执行成功!");
/*
* 批量执行处理器相关示例代码
*/
//批量处理器的监听器设置
BulkProcessor.Listener listener = new BulkProcessor.Listener() {
//在执行BulkRequest的每次执行之前调用这个方法允许知道将要在BulkRequest中执行的操作的数量
@Override
public void beforeBulk(long executionId, BulkRequest request) {
int numberOfActions = request.numberOfActions();
logger.debug("Executing bulk [{}] with {} requests",
executionId, numberOfActions);
}
//在每次执行BulkRequest之后调用这个方法允许知道BulkResponse是否包含错误
@Override
public void afterBulk(long executionId, BulkRequest request,
BulkResponse response) {
if (response.hasFailures()) {
logger.warn("Bulk [{}] executed with failures", executionId);
} else {
logger.debug("Bulk [{}] completed in {} milliseconds",
executionId, response.getTook().getMillis());
}
}
//如果BulkRequest失败则调用该方法该方法允许知道失败
@Override
public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
logger.error("Failed to execute bulk", failure);
}
};
BiConsumer<BulkRequest, ActionListener<BulkResponse>> bulkConsumer =
(bulkRequest, bulkListener) -> client.bulkAsync(bulkRequest, RequestOptions.DEFAULT, bulkListener);
// Build a processor with default settings
BulkProcessor bulkProcessor = BulkProcessor.builder(bulkConsumer, listener).build();
// The settings below only take effect for a processor built from this second builder via builder.build()
BulkProcessor.Builder builder = BulkProcessor.builder(bulkConsumer, listener);
//根据当前添加的操作数量设置刷新新批量请求的时间(默认为1000使用-1禁用它)
builder.setBulkActions(500);
//根据当前添加的操作大小设置刷新新批量请求的时间(默认为5Mb使用-1禁用)
builder.setBulkSize(new ByteSizeValue(1L, ByteSizeUnit.MB));
//设置允许执行的并发请求数量(默认为1使用0只允许执行单个请求)
builder.setConcurrentRequests(0);
//设置刷新间隔如果间隔通过则刷新任何挂起的BulkRequest(默认为未设置)
builder.setFlushInterval(TimeValue.timeValueSeconds(10L));
//设置一个常量后退策略该策略最初等待1秒并重试最多3次
builder.setBackoffPolicy(BackoffPolicy
.constantBackoff(TimeValue.timeValueSeconds(1L), 3));
IndexRequest one = new IndexRequest(index, type, "1").
source(XContentType.JSON, "title",
"In which order are my Elasticsearch queries executed?");
IndexRequest two = new IndexRequest(index, type, "2")
.source(XContentType.JSON, "title",
"Current status and upcoming changes in Elasticsearch");
IndexRequest three = new IndexRequest(index, type, "3")
.source(XContentType.JSON, "title",
"The Future of Federated Search in Elasticsearch");
bulkProcessor.add(one);
bulkProcessor.add(two);
bulkProcessor.add(three);
//如果所有大容量请求都已完成则该方法返回true;如果在所有大容量请求完成之前的等待时间已经过去则返回false
boolean terminated = bulkProcessor.awaitClose(30L, TimeUnit.SECONDS);
System.out.println("请求的响应结果:"+terminated);
}
}
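The index method above builds the same document three ways (a hand-written JSON string, a Map, and an XContentBuilder). A stdlib-only sketch (the `toJson` helper is hypothetical, not an Elasticsearch API) showing that an order-preserving Map serializes to the same JSON as the hand-written string:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: serialize a flat String->String document the way the tutorial's
// hand-built JSON string does. LinkedHashMap preserves insertion order.
public class DocSourceDemo {
    static String toJson(Map<String, String> doc) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, String> e : doc.entrySet()) {
            if (!first) sb.append(',');
            sb.append('"').append(e.getKey()).append("\":\"").append(e.getValue()).append('"');
            first = false;
        }
        return sb.append('}').toString();
    }

    public static void main(String[] args) {
        Map<String, String> doc = new LinkedHashMap<>();
        doc.put("user", "pancm");
        doc.put("postDate", "2019-03-08");
        doc.put("age", "18");
        doc.put("message", "study Elasticsearch");
        // Prints the same JSON as the jsonString variable in the tutorial
        System.out.println(toJson(doc));
    }
}
```

Note the real client already does this conversion: passing the Map to `request.source(jsonMap)` produces equivalent JSON without a hand-rolled serializer.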
package com.pancm.easticsearch;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;
import org.apache.http.HttpHost;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.DocWriteRequest;
import org.elasticsearch.action.DocWriteResponse;
import org.elasticsearch.action.bulk.BackoffPolicy;
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.support.WriteRequest;
import org.elasticsearch.action.support.replication.ReplicationResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.rest.RestStatus;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
 * @Title: EsHighLevelRestTest1
 * @Description: Java High Level REST Client tutorial part 1 (basic CRUD)
 * Official docs: https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high.html
 * @since jdk 1.8
 * @Version: 1.0.0
 * @author pancm
 * @date March 5, 2019
 */
public class EsHighLevelRestTest1 {
private static String elasticIp = "192.169.0.23";
private static int elasticPort = 9200;
private static Logger logger = LoggerFactory.getLogger(EsHighLevelRestTest1.class);
private static RestHighLevelClient client = null;
public static void main(String[] args) {
try {
init();
careatindex();
get();
exists();
update();
delete();
bulk();
close();
} catch (Exception e) {
e.printStackTrace();
}
}
/*
* 初始化服务
*/
private static void init() {
client = new RestHighLevelClient(RestClient.builder(new HttpHost(elasticIp, elasticPort, "http")));
}
/*
* 关闭服务
*/
private static void close() {
if (client != null) {
try {
client.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
/**
 * Index a document (the index is created automatically on first write)
 *
 * @throws IOException
 */
private static void careatindex() throws IOException {
String index = "user";
String type = "userindex";
// 唯一编号
String id = "1";
IndexRequest request = new IndexRequest(index, type, id);
/*
* 第一种方式通过jsonString进行创建
*/
// json
String jsonString = "{" + "\"user\":\"pancm\"," + "\"postDate\":\"2019-03-08\","+ "\"age\":\"18\","
+ "\"message\":\"study Elasticsearch\"" + "}";
request.source(jsonString, XContentType.JSON);
/*
* 第二种方式通过map创建,会自动转换成json的数据
*/
Map<String, Object> jsonMap = new HashMap<>();
jsonMap.put("user", "pancm");
jsonMap.put("postDate", "2019-03-08");
jsonMap.put("age", "18");
jsonMap.put("message", "study Elasticsearch");
request.source(jsonMap);
/*
* 第三种方式 : 通过XContentBuilder对象进行创建
*/
XContentBuilder builder = XContentFactory.jsonBuilder();
builder.startObject();
{
builder.field("user", "pancm");
builder.timeField("postDate", "2019-03-08");
builder.field("age", "18");
builder.field("message", "study Elasticsearch");
}
builder.endObject();
request.source(builder);
IndexResponse indexResponse = client.index(request, RequestOptions.DEFAULT);
//对响应结果进行处理
String index1 = indexResponse.getIndex();
String type1 = indexResponse.getType();
String id1 = indexResponse.getId();
long version = indexResponse.getVersion();
// Whether the document was created or updated
if (indexResponse.getResult() == DocWriteResponse.Result.CREATED) {
} else if (indexResponse.getResult() == DocWriteResponse.Result.UPDATED) {
}
ReplicationResponse.ShardInfo shardInfo = indexResponse.getShardInfo();
if (shardInfo.getTotal() != shardInfo.getSuccessful()) {
}
if (shardInfo.getFailed() > 0) {
for (ReplicationResponse.ShardInfo.Failure failure : shardInfo.getFailures()) {
String reason = failure.reason();
}
}
System.out.println("创建成功!");
}
/**
* 查询数据
*
* @throws IOException
*/
private static void get() {
String index = "user";
String type = "userindex";
// 唯一编号
String id = "1";
// 创建查询请求
GetRequest getRequest = new GetRequest(index, type, id);
GetResponse getResponse = null;
try {
getResponse = client.get(getRequest, RequestOptions.DEFAULT);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (ElasticsearchException e) {
// 如果是索引不存在
if (e.status() == RestStatus.NOT_FOUND) {
System.out.println("该索引库不存在!"+index);
}
}
// Guard against a failed request (getResponse stays null) before reading the source
if (getResponse != null && getResponse.isExists()) {
long version = getResponse.getVersion();
String sourceAsString = getResponse.getSourceAsString();
Map<String, Object> sourceAsMap = getResponse.getSourceAsMap();
byte[] sourceAsBytes = getResponse.getSourceAsBytes();
System.out.println("查询返回结果String:"+sourceAsString);
System.out.println("查询返回结果Map:"+sourceAsMap);
} else {
System.out.println("没有找到该数据!");
}
}
/**
* 是否存在
* @throws IOException
*/
private static void exists() throws IOException {
String index = "user";
String type = "userindex";
// 唯一编号
String id = "1";
// 创建查询请求
GetRequest getRequest = new GetRequest(index, type, id);
boolean exists = client.exists(getRequest, RequestOptions.DEFAULT);
ActionListener<Boolean> listener = new ActionListener<Boolean>() {
@Override
public void onResponse(Boolean exists) {
System.out.println("=="+exists);
}
@Override
public void onFailure(Exception e) {
System.out.println("失败的原因:"+e.getMessage());
}
};
//进行异步监听
// client.existsAsync(getRequest, RequestOptions.DEFAULT, listener);
System.out.println("是否存在:"+exists);
}
/**
* 更新操作
* @throws IOException
*/
private static void update() throws IOException {
String index = "user";
String type = "userindex";
// 唯一编号
String id = "1";
UpdateRequest updateRequest = new UpdateRequest();
updateRequest.id(id);
updateRequest.index(index);
updateRequest.type(type);
// A Map can again be used as the partial document
Map<String, Object> jsonMap = new HashMap<>();
jsonMap.put("user", "xuwujing");
jsonMap.put("postDate", "2019-03-11");
updateRequest.doc(jsonMap);
// Treat the partial document as the upsert document
updateRequest.docAsUpsert(true);
// upsert: if the document does not exist yet, index this one instead
updateRequest.upsert(jsonMap);
client.update(updateRequest, RequestOptions.DEFAULT);
System.out.println("更新成功!");
}
/**
* 删除
* @throws IOException
*
*/
private static void delete() throws IOException {
String index = "user";
String type = "userindex";
// 唯一编号
String id = "1";
DeleteRequest deleteRequest=new DeleteRequest();
deleteRequest.id(id);
deleteRequest.index(index);
deleteRequest.type(type);
//设置超时时间
deleteRequest.timeout(TimeValue.timeValueMinutes(2));
//设置刷新策略"wait_for"
//保持此请求打开直到刷新使此请求的内容可以搜索为止此刷新策略与高索引和搜索吞吐量兼容但它会导致请求等待响应直到发生刷新
deleteRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.WAIT_UNTIL);
//同步删除
DeleteResponse deleteResponse = client.delete(deleteRequest, RequestOptions.DEFAULT);
/*
* 异步删除操作
*/
//进行监听
ActionListener<DeleteResponse> listener = new ActionListener<DeleteResponse>() {
@Override
public void onResponse(DeleteResponse deleteResponse) {
System.out.println("响应:"+deleteResponse);
}
@Override
public void onFailure(Exception e) {
System.out.println("删除监听异常:"+e.getMessage());
}
};
//异步删除
// client.deleteAsync(deleteRequest, RequestOptions.DEFAULT, listener);
ReplicationResponse.ShardInfo shardInfo = deleteResponse.getShardInfo();
//如果处理成功碎片的数量少于总碎片的情况,说明还在处理或者处理发生异常
if (shardInfo.getTotal() != shardInfo.getSuccessful()) {
System.out.println("需要处理的碎片总量:"+shardInfo.getTotal());
System.out.println("处理成功的碎片总量:"+shardInfo.getSuccessful());
}
if (shardInfo.getFailed() > 0) {
for (ReplicationResponse.ShardInfo.Failure failure :
shardInfo.getFailures()) {
String reason = failure.reason();
}
}
System.out.println("删除成功!");
}
/**
* Bulk operation example
* @throws InterruptedException
*/
private static void bulk() throws IOException, InterruptedException {
String index = "estest";
String type = "estest";
BulkRequest request = new BulkRequest();
// Bulk index
request.add(new IndexRequest(index, type, "1")
.source(XContentType.JSON,"field", "foo"));
request.add(new IndexRequest(index, type, "2")
.source(XContentType.JSON,"field", "bar"));
request.add(new IndexRequest(index, type, "3")
.source(XContentType.JSON,"field", "baz"));
// Update, delete and index operations can be mixed in a single bulk request
request.add(new UpdateRequest(index, type, "2")
.doc(XContentType.JSON,"field", "test"));
request.add(new DeleteRequest(index, type, "3"));
request.add(new IndexRequest(index, type, "4")
.source(XContentType.JSON,"field", "baz"));
BulkResponse bulkResponse = client.bulk(request, RequestOptions.DEFAULT);
// Quick check whether any operation failed (true if at least one failed)
if (bulkResponse.hasFailures()) {
System.out.println("At least one operation failed!");
}
// Iterate over the results and handle each item according to its operation type
for (BulkItemResponse bulkItemResponse : bulkResponse) {
DocWriteResponse itemResponse = bulkItemResponse.getResponse();
// Handle failed operations
if (bulkItemResponse.isFailed()) {
BulkItemResponse.Failure failure = bulkItemResponse.getFailure();
}
if (bulkItemResponse.getOpType() == DocWriteRequest.OpType.INDEX
|| bulkItemResponse.getOpType() == DocWriteRequest.OpType.CREATE) {
IndexResponse indexResponse = (IndexResponse) itemResponse;
} else if (bulkItemResponse.getOpType() == DocWriteRequest.OpType.UPDATE) {
UpdateResponse updateResponse = (UpdateResponse) itemResponse;
} else if (bulkItemResponse.getOpType() == DocWriteRequest.OpType.DELETE) {
DeleteResponse deleteResponse = (DeleteResponse) itemResponse;
}
}
System.out.println("Bulk executed successfully!");
/*
* BulkProcessor example
*/
// Listener for the bulk processor
BulkProcessor.Listener listener = new BulkProcessor.Listener() {
// Called before each BulkRequest execution; gives access to the number of actions about to be executed
@Override
public void beforeBulk(long executionId, BulkRequest request) {
int numberOfActions = request.numberOfActions();
logger.debug("Executing bulk [{}] with {} requests",
executionId, numberOfActions);
}
// Called after each BulkRequest execution; allows checking whether the BulkResponse contains failures
@Override
public void afterBulk(long executionId, BulkRequest request,
BulkResponse response) {
if (response.hasFailures()) {
logger.warn("Bulk [{}] executed with failures", executionId);
} else {
logger.debug("Bulk [{}] completed in {} milliseconds",
executionId, response.getTook().getMillis());
}
}
// Called when a BulkRequest fails entirely; gives access to the failure
@Override
public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
logger.error("Failed to execute bulk", failure);
}
};
// Note: the lambda must execute the request it receives (request2), not the outer request
BiConsumer<BulkRequest, ActionListener<BulkResponse>> bulkConsumer =
(request2, bulkListener) -> client.bulkAsync(request2, RequestOptions.DEFAULT, bulkListener);
BulkProcessor.Builder builder = BulkProcessor.builder(bulkConsumer, listener);
// Flush a new bulk request once this many actions have been added (default 1000, -1 disables)
builder.setBulkActions(500);
// Flush a new bulk request once the added actions reach this size (default 5 MB, -1 disables)
builder.setBulkSize(new ByteSizeValue(1L, ByteSizeUnit.MB));
// Number of concurrent requests allowed (default 1; 0 allows only a single request at a time)
builder.setConcurrentRequests(0);
// Flush any pending bulk request after this interval (not set by default)
builder.setFlushInterval(TimeValue.timeValueSeconds(10L));
// Constant back-off policy: initially wait 1 second, retry up to 3 times
builder.setBackoffPolicy(BackoffPolicy
.constantBackoff(TimeValue.timeValueSeconds(1L), 3));
// Build the processor from the configured builder so the settings above actually apply
BulkProcessor bulkProcessor = builder.build();
IndexRequest one = new IndexRequest(index, type, "1")
.source(XContentType.JSON, "title",
"In which order are my Elasticsearch queries executed?");
IndexRequest two = new IndexRequest(index, type, "2")
.source(XContentType.JSON, "title",
"Current status and upcoming changes in Elasticsearch");
IndexRequest three = new IndexRequest(index, type, "3")
.source(XContentType.JSON, "title",
"The Future of Federated Search in Elasticsearch");
bulkProcessor.add(one);
bulkProcessor.add(two);
bulkProcessor.add(three);
// awaitClose returns true if all bulk requests completed, false if the wait timed out first
boolean terminated = bulkProcessor.awaitClose(30L, TimeUnit.SECONDS);
System.out.println("All bulk requests completed: " + terminated);
}
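/**
* A minimal sketch, not part of the original class, showing how client.bulkAsync
* (used inside the BulkProcessor lambda above) can also be driven directly with
* an ActionListener. Index/type names and the document id are illustrative.
*/
private static void bulkAsyncExample() {
BulkRequest request = new BulkRequest();
request.add(new IndexRequest("estest", "estest", "5")
.source(XContentType.JSON, "field", "qux"));
client.bulkAsync(request, RequestOptions.DEFAULT, new ActionListener<BulkResponse>() {
@Override
public void onResponse(BulkResponse response) {
System.out.println("Async bulk finished, has failures: " + response.hasFailures());
}
@Override
public void onFailure(Exception e) {
System.out.println("Async bulk failed: " + e.getMessage());
}
});
}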
}
package com.pancm.sql.easticsearch;
import static org.junit.Assert.assertNull;
import java.io.IOException;
import java.util.List;
import java.util.Map;
import org.apache.http.HttpHost;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse;
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.get.MultiGetItemResponse;
import org.elasticsearch.action.get.MultiGetRequest;
import org.elasticsearch.action.get.MultiGetResponse;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.RethrottleRequest;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.TermQueryBuilder;
import org.elasticsearch.index.reindex.BulkByScrollResponse;
import org.elasticsearch.index.reindex.DeleteByQueryRequest;
import org.elasticsearch.index.reindex.ReindexRequest;
import org.elasticsearch.index.reindex.ScrollableHitSource;
import org.elasticsearch.index.reindex.UpdateByQueryRequest;
import org.elasticsearch.tasks.TaskId;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* @Title: EsHighLevelRestTest2
* @Description: Java High Level REST Client tutorial, part 2 (combined operations)
* @since jdk 1.8
* @Version:1.0.0
* @author pancm
* @date 2019-03-11
*/
public class EsHighLevelRestTest2 {
private static String elasticIp = "192.169.0.23";
private static int elasticPort = 9200;
private static Logger logger = LoggerFactory.getLogger(EsHighLevelRestTest2.class);
private static RestHighLevelClient client = null;
/**
* @param args
*/
public static void main(String[] args) {
try {
init();
multiGet();
reindex();
updataByQuery();
deleteByQuery();
rethrottleByQuery();
close();
} catch (Exception e) {
e.printStackTrace();
}
}
/*
* Initialize the client
*/
private static void init() {
client = new RestHighLevelClient(RestClient.builder(new HttpHost(elasticIp, elasticPort, "http")));
}
/*
* Close the client
*/
private static void close() {
if (client != null) {
try {
client.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
/**
* Multi-get example
*
* @throws IOException
*/
private static void multiGet() throws IOException {
MultiGetRequest request = new MultiGetRequest();
request.add(new MultiGetRequest.Item("estest", "estest", "1"));
request.add(new MultiGetRequest.Item("user", "userindex", "2"));
// Disable source retrieval (enabled by default)
// request.add(new MultiGetRequest.Item("user", "userindex", "2").fetchSourceContext(FetchSourceContext.DO_NOT_FETCH_SOURCE));
// Synchronous execution
MultiGetResponse response = client.mget(request, RequestOptions.DEFAULT);
// Asynchronous execution (requires an ActionListener)
// MultiGetResponse response2 = client.mgetAsync(request, RequestOptions.DEFAULT, listener);
/*
* The returned MultiGetResponse contains a list of MultiGetItemResponse in getResponses(),
* in the same order as they were requested.
* On success an item holds a GetResponse; on failure it holds the failure instead.
* A successful item looks just like a regular GetResponse.
*/
for (MultiGetItemResponse item : response.getResponses()) {
assertNull(item.getFailure());
GetResponse get = item.getResponse();
String index = item.getIndex();
String type = item.getType();
String id = item.getId();
// If the document exists
if (get.isExists()) {
long version = get.getVersion();
String sourceAsString = get.getSourceAsString();
Map<String, Object> sourceAsMap = get.getSourceAsMap();
byte[] sourceAsBytes = get.getSourceAsBytes();
System.out.println("Query result: " + sourceAsMap);
} else {
System.out.println("Document not found!");
}
}
}
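/**
* A minimal sketch, not part of the original class, of how the commented-out
* mgetAsync call above could be wired up: the async variant needs the
* ActionListener that the original snippet references but never defines.
* Index/type/id values here are illustrative.
*/
private static void multiGetAsync() {
MultiGetRequest request = new MultiGetRequest();
request.add(new MultiGetRequest.Item("estest", "estest", "1"));
ActionListener<MultiGetResponse> listener = new ActionListener<MultiGetResponse>() {
@Override
public void onResponse(MultiGetResponse response) {
System.out.println("Async multi-get returned " + response.getResponses().length + " items");
}
@Override
public void onFailure(Exception e) {
System.out.println("Async multi-get failed: " + e.getMessage());
}
};
client.mgetAsync(request, RequestOptions.DEFAULT, listener);
}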
/**
* Reindex (copy one index into another) example
*
* @throws IOException
*/
private static void reindex() throws IOException {
// Create a reindex request and copy one index into another
ReindexRequest request = new ReindexRequest();
// Source index to copy from
request.setSourceIndices("user");
// Destination index to copy to
request.setDestIndex("dest_test");
// "create" only creates documents that are missing in the destination; the default is "index"
request.setDestOpType("create");
// On version conflicts, proceed with the copy instead of aborting
request.setConflicts("proceed");
// Only copy documents of type userindex
request.setSourceDocTypes("userindex");
// Only copy documents of user pancm
request.setSourceQuery(new TermQueryBuilder("user", "pancm"));
// Limit the number of documents to copy
request.setSize(10);
// Batch size per scroll (default 1000)
request.setSourceBatchSize(100);
// Sorting
// request.addSortField("postDate", SortOrder.DESC);
// Number of slices
request.setSlices(2);
// Timeout
request.setTimeout(TimeValue.timeValueMinutes(2));
// Refresh the destination index after the request completes
request.setRefresh(true);
// Synchronous execution
BulkByScrollResponse bulkResponse = client.reindex(request, RequestOptions.DEFAULT);
// Asynchronous execution (requires an ActionListener)
// client.reindexAsync(request, RequestOptions.DEFAULT, listener);
// Handle the response
TimeValue timeTaken = bulkResponse.getTook();
boolean timedOut = bulkResponse.isTimedOut();
long totalDocs = bulkResponse.getTotal();
long updatedDocs = bulkResponse.getUpdated();
long createdDocs = bulkResponse.getCreated();
long deletedDocs = bulkResponse.getDeleted();
long batches = bulkResponse.getBatches();
long noops = bulkResponse.getNoops();
long versionConflicts = bulkResponse.getVersionConflicts();
long bulkRetries = bulkResponse.getBulkRetries();
long searchRetries = bulkResponse.getSearchRetries();
TimeValue throttledMillis = bulkResponse.getStatus().getThrottled();
TimeValue throttledUntilMillis = bulkResponse.getStatus().getThrottledUntil();
List<ScrollableHitSource.SearchFailure> searchFailures = bulkResponse.getSearchFailures();
List<BulkItemResponse.Failure> bulkFailures = bulkResponse.getBulkFailures();
System.out.println("Reindex took: " + timeTaken.getMillis() + " ms, total docs: " + totalDocs + ", created: " + createdDocs
+ ", updated: " + updatedDocs);
}
/**
* Update-by-query example
*
* @throws IOException
*/
private static void updataByQuery() throws IOException {
UpdateByQueryRequest request = new UpdateByQueryRequest("user");
// Query condition
request.setQuery(new TermQueryBuilder("user", "pancm"));
// Limit the number of documents to process
request.setSize(10);
// Batch size per scroll (default 1000)
request.setBatchSize(100);
// Routing
request.setRouting("=cat");
// Timeout
request.setTimeout(TimeValue.timeValueMinutes(2));
// Refresh the index after the request completes
request.setRefresh(true);
// Indices options
request.setIndicesOptions(IndicesOptions.LENIENT_EXPAND_OPEN);
// Synchronous execution
BulkByScrollResponse bulkResponse = client.updateByQuery(request, RequestOptions.DEFAULT);
// Asynchronous execution (requires an ActionListener)
// client.updateByQueryAsync(request, RequestOptions.DEFAULT, listener);
// Results
TimeValue timeTaken = bulkResponse.getTook();
boolean timedOut = bulkResponse.isTimedOut();
long totalDocs = bulkResponse.getTotal();
long updatedDocs = bulkResponse.getUpdated();
long deletedDocs = bulkResponse.getDeleted();
long batches = bulkResponse.getBatches();
long noops = bulkResponse.getNoops();
long versionConflicts = bulkResponse.getVersionConflicts();
long bulkRetries = bulkResponse.getBulkRetries();
long searchRetries = bulkResponse.getSearchRetries();
TimeValue throttledMillis = bulkResponse.getStatus().getThrottled();
TimeValue throttledUntilMillis = bulkResponse.getStatus().getThrottledUntil();
List<ScrollableHitSource.SearchFailure> searchFailures = bulkResponse.getSearchFailures();
List<BulkItemResponse.Failure> bulkFailures = bulkResponse.getBulkFailures();
System.out.println("Update-by-query took: " + timeTaken.getMillis() + " ms, total docs: " + totalDocs + ", updated: " + updatedDocs);
}
/**
* Delete-by-query example
* @throws IOException
*/
private static void deleteByQuery() throws IOException {
DeleteByQueryRequest request = new DeleteByQueryRequest("user");
// Query condition
request.setQuery(new TermQueryBuilder("user", "pancm"));
// Limit the number of documents to process
request.setSize(10);
// Batch size per scroll (default 1000)
request.setBatchSize(100);
// Routing
request.setRouting("=cat");
// Timeout
request.setTimeout(TimeValue.timeValueMinutes(2));
// Refresh the index after the request completes
request.setRefresh(true);
// Indices options
request.setIndicesOptions(IndicesOptions.LENIENT_EXPAND_OPEN);
// Synchronous execution
BulkByScrollResponse bulkResponse = client.deleteByQuery(request, RequestOptions.DEFAULT);
// Asynchronous execution (requires an ActionListener)
// client.deleteByQueryAsync(request, RequestOptions.DEFAULT, listener);
// Results
TimeValue timeTaken = bulkResponse.getTook();
boolean timedOut = bulkResponse.isTimedOut();
long totalDocs = bulkResponse.getTotal();
long deletedDocs = bulkResponse.getDeleted();
long batches = bulkResponse.getBatches();
long noops = bulkResponse.getNoops();
long versionConflicts = bulkResponse.getVersionConflicts();
long bulkRetries = bulkResponse.getBulkRetries();
long searchRetries = bulkResponse.getSearchRetries();
TimeValue throttledMillis = bulkResponse.getStatus().getThrottled();
TimeValue throttledUntilMillis = bulkResponse.getStatus().getThrottledUntil();
List<ScrollableHitSource.SearchFailure> searchFailures = bulkResponse.getSearchFailures();
List<BulkItemResponse.Failure> bulkFailures = bulkResponse.getBulkFailures();
System.out.println("Delete-by-query took: " + timeTaken.getMillis() + " ms, total docs: " + totalDocs + ", deleted: " + deletedDocs);
}
/**
* Changes the current throttling of a running reindex, update-by-query or
* delete-by-query task, or disables throttling for the task entirely
* @throws IOException
*/
private static void rethrottleByQuery() throws IOException {
// Note: a real task id normally has the form nodeId:taskNumber
TaskId taskId = new TaskId("1");
// Change the throttling of the running task to 100 requests per second
RethrottleRequest request = new RethrottleRequest(taskId, 100.0f);
// Synchronous rethrottle
// client.reindexRethrottle(request, RequestOptions.DEFAULT);
// client.updateByQueryRethrottle(request, RequestOptions.DEFAULT);
// client.deleteByQueryRethrottle(request, RequestOptions.DEFAULT);
ActionListener<ListTasksResponse> listener = new ActionListener<ListTasksResponse>() {
@Override
public void onResponse(ListTasksResponse response) {
System.out.println("====" + response.getTasks().toString());
}
@Override
public void onFailure(Exception e) {
System.out.println("====---" + e.getMessage());
}
};
// Asynchronous rethrottle
client.reindexRethrottleAsync(request, RequestOptions.DEFAULT, listener);
client.updateByQueryRethrottleAsync(request, RequestOptions.DEFAULT, listener);
client.deleteByQueryRethrottleAsync(request, RequestOptions.DEFAULT, listener);
System.out.println("Rethrottle set successfully!");
}
}
package com.pancm.sql.easticsearch;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import io.searchbox.client.JestClient;
import io.searchbox.client.JestClientFactory;
import io.searchbox.client.JestResult;
import io.searchbox.client.config.HttpClientConfig;
import io.searchbox.core.Bulk;
import io.searchbox.core.BulkResult;
import io.searchbox.core.Delete;
import io.searchbox.core.DocumentResult;
import io.searchbox.core.Index;
import io.searchbox.core.Search;
import io.searchbox.indices.CreateIndex;
import io.searchbox.indices.DeleteIndex;
import io.searchbox.indices.mapping.GetMapping;
import io.searchbox.indices.mapping.PutMapping;
/**
*
* @Title: JestTest
* @Description: Elasticsearch Jest client test class
* @Version:1.0.0
* @author pancm
* @date 2019-02-28
*/
public class JestTest {
private static JestClient jestClient;
private static String indexName = "userindex";
private static String typeName = "user";
private static String elasticIps="http://127.0.0.1:9200";
public static void main(String[] args) throws Exception {
jestClient = getJestClient();
insertBatch();
serach1();
serach2();
serach3();
jestClient.close();
}
private static JestClient getJestClient() {
JestClientFactory factory = new JestClientFactory();
factory.setHttpClientConfig(new HttpClientConfig.Builder(elasticIps).connTimeout(60000).readTimeout(60000).multiThreaded(true).build());
return factory.getObject();
}
public static void insertBatch() {
List<Object> objs = new ArrayList<Object>();
objs.add(new User(1L, "张三", 20, "张三是个Java开发工程师","2018-4-25 11:07:42"));
objs.add(new User(2L, "李四", 24, "李四是个测试工程师","1980-2-15 19:01:32"));
objs.add(new User(3L, "王五", 25, "王五是个运维工程师","2016-8-21 06:11:32"));
boolean result = false;
try {
result = insertBatch(jestClient,indexName, typeName,objs);
} catch (Exception e) {
e.printStackTrace();
}
System.out.println("Bulk insert result: " + result);
}
/**
* Full-text search
*/
public static void serach1() {
String query ="工程师";
try {
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.queryStringQuery(query));
// Pagination
searchSourceBuilder.from(0).size(2);
System.out.println("Full-text search query: " + searchSourceBuilder.toString());
System.out.println("Full-text search result: " + search(jestClient, indexName, typeName, searchSourceBuilder.toString()));
} catch (Exception e) {
e.printStackTrace();
}
}
/**
* Exact-match (term) search
*/
public static void serach2() {
try {
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.termQuery("age", 24));
System.out.println("Term search query: " + searchSourceBuilder.toString());
System.out.println("Term search result: " + search(jestClient, indexName, typeName, searchSourceBuilder.toString()));
} catch (Exception e) {
e.printStackTrace();
}
}
/**
* Range search
*/
public static void serach3() {
String createtm="createtm";
String from="2016-8-21 06:11:32";
String to="2018-8-21 06:11:32";
try {
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.rangeQuery(createtm).gte(from).lte(to));
System.out.println("Range search query: " + searchSourceBuilder.toString());
System.out.println("Range search result: " + search(jestClient, indexName, typeName, searchSourceBuilder.toString()));
} catch (Exception e) {
e.printStackTrace();
}
}
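/**
* A minimal sketch, not part of the original class, combining the term and
* range conditions from the searches above into a single bool query. Field
* names and values follow the sample User data used elsewhere in this class.
*/
public static void serachBool() {
try {
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.boolQuery()
.must(QueryBuilders.termQuery("age", 25))
.must(QueryBuilders.rangeQuery("createtm").gte("2016-8-21 06:11:32").lte("2018-8-21 06:11:32")));
System.out.println("Bool search query: " + searchSourceBuilder.toString());
System.out.println("Bool search result: " + search(jestClient, indexName, typeName, searchSourceBuilder.toString()));
} catch (Exception e) {
e.printStackTrace();
}
}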
/**
* Create an index
* @param indexName
* @return
* @throws Exception
*/
public boolean createIndex(JestClient jestClient,String indexName) throws Exception {
JestResult jr = jestClient.execute(new CreateIndex.Builder(indexName).build());
return jr.isSucceeded();
}
/**
* Set the index mapping (note: despite the method name, this executes a PutMapping, not a document insert)
* @param indexName
* @param typeName
* @param source
* @return
* @throws Exception
*/
public boolean insert(JestClient jestClient,String indexName, String typeName, String source) throws Exception {
PutMapping putMapping = new PutMapping.Builder(indexName, typeName, source).build();
JestResult jr = jestClient.execute(putMapping);
return jr.isSucceeded();
}
/**
* Get the index mapping
* @param indexName
* @param typeName
* @return
* @throws Exception
*/
public static String getIndexMapping(JestClient jestClient,String indexName, String typeName) throws Exception {
GetMapping getMapping = new GetMapping.Builder().addIndex(indexName).addType(typeName).build();
JestResult jr =jestClient.execute(getMapping);
return jr.getJsonString();
}
/**
* Bulk insert documents
* @param indexName
* @param typeName
* @param objs
* @return
* @throws Exception
*/
public static boolean insertBatch(JestClient jestClient,String indexName, String typeName, List<Object> objs) throws Exception {
Bulk.Builder bulk = new Bulk.Builder().defaultIndex(indexName).defaultType(typeName);
for (Object obj : objs) {
Index index = new Index.Builder(obj).build();
bulk.addAction(index);
}
BulkResult br = jestClient.execute(bulk.build());
return br.isSucceeded();
}
/**
* Execute a search with the given query DSL
* @param indexName
* @param typeName
* @param query
* @return
* @throws Exception
*/
public static String search(JestClient jestClient,String indexName, String typeName, String query) throws Exception {
Search search = new Search.Builder(query)
.addIndex(indexName)
.addType(typeName)
.build();
JestResult jr = jestClient.execute(search);
// System.out.println("--"+jr.getJsonString());
// System.out.println("--"+jr.getSourceAsObject(User.class));
return jr.getSourceAsString();
}
/**
* Delete an index
* @param indexName
* @return
* @throws Exception
*/
public boolean delete(JestClient jestClient,String indexName) throws Exception {
JestResult jr = jestClient.execute(new DeleteIndex.Builder(indexName).build());
return jr.isSucceeded();
}
/**
* Delete a document by id
* @param indexName
* @param typeName
* @param id
* @return
* @throws Exception
*/
public boolean delete(JestClient jestClient,String indexName, String typeName, String id) throws Exception {
DocumentResult dr = jestClient.execute(new Delete.Builder(id).index(indexName).type(typeName).build());
return dr.isSucceeded();
}
}
class User implements Serializable{
/**
*
*/
private static final long serialVersionUID = 1L;
/** id */
private Long id;
/** name */
private String name;
/** age */
private Integer age;
/** description */
private String description;
/** creation time */
private String createtm;
public User(){
}
public User(Long id, String name, Integer age, String description, String createtm) {
super();
this.id = id;
this.name = name;
this.age = age;
this.description = description;
this.createtm = createtm;
}
/**
* Get the id
* @return id
*/
public Long getId() {
return id;
}
/**
* Set the id
* @param id the id
*/
public void setId(Long id) {
this.id = id;
}
/**
* Get the name
* @return name
*/
public String getName() {
return name;
}
/**
* Set the name
* @param name the name
*/
public void setName(String name) {
this.name = name;
}
/**
* Get the age
* @return age
*/
public Integer getAge() {
return age;
}
/**
* Set the age
* @param age the age
*/
public void setAge(Integer age) {
this.age = age;
}
/**
* Get the description
* @return description
*/
public String getDescription() {
return description;
}
/**
* Set the description
* @param description the description
*/
public void setDescription(String description) {
this.description = description;
}
/**
* Get the creation time
* @return createtm
*/
public String getCreatetm() {
return createtm;
}
/**
* Set the creation time
* @param createtm the creation time
*/
public void setCreatetm(String createtm) {
this.createtm = createtm;
}
@Override
public String toString() {
return "User [id=" + id + ", name=" + name + ", age=" + age + ", description=" + description + ", createtm="
+ createtm + "]";
}
}
package com.pancm.easticsearch;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import io.searchbox.client.JestClient;
import io.searchbox.client.JestClientFactory;
import io.searchbox.client.JestResult;
import io.searchbox.client.config.HttpClientConfig;
import io.searchbox.core.Bulk;
import io.searchbox.core.BulkResult;
import io.searchbox.core.Delete;
import io.searchbox.core.DocumentResult;
import io.searchbox.core.Index;
import io.searchbox.core.Search;
import io.searchbox.indices.CreateIndex;
import io.searchbox.indices.DeleteIndex;
import io.searchbox.indices.mapping.GetMapping;
import io.searchbox.indices.mapping.PutMapping;
/**
*
* @Title: JestTest
* @Description: es Jest 测试类
* @Version:1.0.0
* @author pancm
* @date 2019年2月28日
*/
public class JestTest {
private static JestClient jestClient;
private static String indexName = "userindex";
private static String typeName = "user";
private static String elasticIps="http://127.0.0.1:9200";
public static void main(String[] args) throws Exception {
jestClient = getJestClient();
insertBatch();
serach1();
serach2();
serach3();
jestClient.close();
}
private static JestClient getJestClient() {
JestClientFactory factory = new JestClientFactory();
factory.setHttpClientConfig(new HttpClientConfig.Builder(elasticIps).connTimeout(60000).readTimeout(60000).multiThreaded(true).build());
return factory.getObject();
}
public static void insertBatch() {
List<Object> objs = new ArrayList<Object>();
objs.add(new User(1L, "张三", 20, "张三是个Java开发工程师","2018-4-25 11:07:42"));
objs.add(new User(2L, "李四", 24, "李四是个测试工程师","1980-2-15 19:01:32"));
objs.add(new User(3L, "王五", 25, "王五是个运维工程师","2016-8-21 06:11:32"));
boolean result = false;
try {
result = insertBatch(jestClient,indexName, typeName,objs);
} catch (Exception e) {
e.printStackTrace();
}
System.out.println("批量新增:"+result);
}
/**
 * Full-text search
 */
public static void serach1() {
String query = "工程师";
try {
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.queryStringQuery(query));
// Pagination settings
searchSourceBuilder.from(0).size(2);
System.out.println("Full-text search query: " + searchSourceBuilder.toString());
System.out.println("Full-text search result: " + search(jestClient, indexName, typeName, searchSourceBuilder.toString()));
} catch (Exception e) {
e.printStackTrace();
}
}
/**
 * Exact (term) search
 */
public static void serach2() {
try {
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.termQuery("age", 24));
System.out.println("Term search query: " + searchSourceBuilder.toString());
System.out.println("Term search result: " + search(jestClient, indexName, typeName, searchSourceBuilder.toString()));
} catch (Exception e) {
e.printStackTrace();
}
}
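The term query above targets the numeric `age` field, where exact matching works. Against an analyzed text field such as `description`, a term query for the whole phrase would match nothing, because the standard analyzer (an assumption here; the index mapping is not shown) splits CJK text into single-character tokens and lowercases ASCII letter runs. A rough stdlib simulation of that tokenization:

```java
import java.util.ArrayList;
import java.util.List;

public class AnalyzerSketch {
    // Rough simulation (assumption): the standard analyzer emits one token per
    // CJK character and keeps lowercased ASCII letter runs together.
    static List<String> analyze(String text) {
        List<String> tokens = new ArrayList<>();
        StringBuilder ascii = new StringBuilder();
        for (char c : text.toCharArray()) {
            if (c < 128) {
                ascii.append(Character.toLowerCase(c));
            } else {
                if (ascii.length() > 0) {
                    tokens.add(ascii.toString());
                    ascii.setLength(0);
                }
                tokens.add(String.valueOf(c));
            }
        }
        if (ascii.length() > 0) {
            tokens.add(ascii.toString());
        }
        return tokens;
    }

    public static void main(String[] args) {
        List<String> tokens = analyze("张三是个Java开发工程师");
        System.out.println(tokens); // [张, 三, 是, 个, java, 开, 发, 工, 程, 师]
        // The whole phrase is not among the tokens, so a term query for it
        // finds nothing; exact matches belong on keyword or numeric fields,
        // which is why termQuery("age", 24) works above.
        System.out.println(tokens.contains("张三是个Java开发工程师")); // false
    }
}
```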
/**
 * Range search
 */
public static void serach3() {
String createtm = "createtm";
String from = "2016-8-21 06:11:32";
String to = "2018-8-21 06:11:32";
try {
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.rangeQuery(createtm).gte(from).lte(to));
System.out.println("Range search query: " + searchSourceBuilder.toString());
System.out.println("Range search result: " + search(jestClient, indexName, typeName, searchSourceBuilder.toString()));
} catch (Exception e) {
e.printStackTrace();
}
}
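One caveat with the range search above: `createtm` is stored as a non-zero-padded string (e.g. `2018-4-25 11:07:42`). If the field is not mapped as a `date` type (an assumption; the mapping is not shown in this commit), the range comparison is lexical, and string order disagrees with chronological order for these values. A small sketch of the mismatch:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class DateRangePitfall {
    // Matches the non-zero-padded format used in the test data above
    static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyy-M-d HH:mm:ss");

    // String comparison, as a non-date field would be compared
    static boolean lexicallyAfter(String a, String b) {
        return a.compareTo(b) > 0;
    }

    // Proper chronological comparison after parsing
    static boolean chronologicallyBefore(String a, String b) {
        return LocalDateTime.parse(a, FMT).isBefore(LocalDateTime.parse(b, FMT));
    }

    public static void main(String[] args) {
        String august = "2016-8-21 06:11:32";
        String october = "2016-10-21 06:11:32";
        // Lexically '8' > '1', so August sorts AFTER October...
        System.out.println(lexicallyAfter(august, october)); // true
        // ...even though August is chronologically earlier.
        System.out.println(chronologicallyBefore(august, october)); // true
    }
}
```

Zero-padded ISO timestamps, or an explicit `date` mapping with a matching format, avoid the problem.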
/**
 * Create an index
 * @param jestClient Jest client
 * @param indexName index name
 * @return true if the request succeeded
 * @throws Exception
 */
public boolean createIndex(JestClient jestClient,String indexName) throws Exception {
JestResult jr = jestClient.execute(new CreateIndex.Builder(indexName).build());
return jr.isSucceeded();
}
/**
 * Set the index mapping.
 * Note: this executes a PutMapping request (it defines the type mapping),
 * it does not insert a document.
 * @param jestClient Jest client
 * @param indexName index name
 * @param typeName type name
 * @param source mapping JSON source
 * @return true if the request succeeded
 * @throws Exception
 */
public boolean putMapping(JestClient jestClient, String indexName, String typeName, String source) throws Exception {
PutMapping putMapping = new PutMapping.Builder(indexName, typeName, source).build();
JestResult jr = jestClient.execute(putMapping);
return jr.isSucceeded();
}
/**
 * Get the index mapping
 * @param jestClient Jest client
 * @param indexName index name
 * @param typeName type name
 * @return mapping as a JSON string
 * @throws Exception
 */
public static String getIndexMapping(JestClient jestClient,String indexName, String typeName) throws Exception {
GetMapping getMapping = new GetMapping.Builder().addIndex(indexName).addType(typeName).build();
JestResult jr =jestClient.execute(getMapping);
return jr.getJsonString();
}
/**
 * Batch-insert documents via a Bulk request
 * @param jestClient Jest client
 * @param indexName index name
 * @param typeName type name
 * @param objs documents to index
 * @return true if the bulk request succeeded
 * @throws Exception
 */
public static boolean insertBatch(JestClient jestClient,String indexName, String typeName, List<Object> objs) throws Exception {
Bulk.Builder bulk = new Bulk.Builder().defaultIndex(indexName).defaultType(typeName);
for (Object obj : objs) {
Index index = new Index.Builder(obj).build();
bulk.addAction(index);
}
BulkResult br = jestClient.execute(bulk.build());
return br.isSucceeded();
}
/**
 * Execute a search with the given query DSL
 * @param jestClient Jest client
 * @param indexName index name
 * @param typeName type name
 * @param query query DSL as a JSON string
 * @return search result source as a string
 * @throws Exception
 */
public static String search(JestClient jestClient,String indexName, String typeName, String query) throws Exception {
Search search = new Search.Builder(query)
.addIndex(indexName)
.addType(typeName)
.build();
JestResult jr = jestClient.execute(search);
// System.out.println("--"+jr.getJsonString());
// System.out.println("--"+jr.getSourceAsObject(User.class));
return jr.getSourceAsString();
}
/**
 * Delete an index
 * @param jestClient Jest client
 * @param indexName index name
 * @return true if the request succeeded
 * @throws Exception
 */
public boolean delete(JestClient jestClient,String indexName) throws Exception {
JestResult jr = jestClient.execute(new DeleteIndex.Builder(indexName).build());
return jr.isSucceeded();
}
/**
 * Delete a document by id
 * @param jestClient Jest client
 * @param indexName index name
 * @param typeName type name
 * @param id document id
 * @return true if the request succeeded
 * @throws Exception
 */
public boolean delete(JestClient jestClient,String indexName, String typeName, String id) throws Exception {
DocumentResult dr = jestClient.execute(new Delete.Builder(id).index(indexName).type(typeName).build());
return dr.isSucceeded();
}
}
class User implements Serializable{
private static final long serialVersionUID = 1L;
/** id */
private Long id;
/** name */
private String name;
/** age */
private Integer age;
/** description */
private String description;
/** creation time */
private String createtm;
public User(){
}
public User(Long id, String name, Integer age, String description, String createtm) {
this.id = id;
this.name = name;
this.age = age;
this.description = description;
this.createtm = createtm;
}
/**
 * Get the id
 * @return id
 */
public Long getId() {
return id;
}
/**
 * Set the id
 * @param id the id
 */
public void setId(Long id) {
this.id = id;
}
/**
 * Get the name
 * @return name
 */
public String getName() {
return name;
}
/**
 * Set the name
 * @param name the name
 */
public void setName(String name) {
this.name = name;
}
/**
 * Get the age
 * @return age
 */
public Integer getAge() {
return age;
}
/**
 * Set the age
 * @param age the age
 */
public void setAge(Integer age) {
this.age = age;
}
/**
 * Get the description
 * @return description
 */
public String getDescription() {
return description;
}
/**
 * Set the description
 * @param description the description
 */
public void setDescription(String description) {
this.description = description;
}
/**
 * Get the creation time
 * @return createtm
 */
public String getCreatetm() {
return createtm;
}
/**
 * Set the creation time
 * @param createtm the creation time
 */
public void setCreatetm(String createtm) {
this.createtm = createtm;
}
@Override
public String toString() {
return "User [id=" + id + ", name=" + name + ", age=" + age + ", description=" + description + ", createtm="
+ createtm + "]";
}
}


@@ -1,8 +1,8 @@
/**
 * @Title: package-info
 * @Description: Elasticsearch test classes
 * @Version: 1.0.0
 * @author pancm
 * @date 2019-02-28
 */
package com.pancm.sql.easticsearch;
/**
 * @Title: package-info
 * @Description: Elasticsearch test classes
 * @Version: 1.0.0
 * @author pancm
 * @date 2019-02-28
 */
package com.pancm.easticsearch;


@@ -1,60 +1,60 @@
package com.pancm.sql.redis;
import java.util.Iterator;
import java.util.List;
import java.util.Set;
import redis.clients.jedis.Jedis;
/**
 *
 * @Title: RedisTest
 * @Description: Redis test code
 * @Version: 1.0.0
 * @author pancm
 * @date 2017-08-19
 */
public class RedisTest {
/**
* @param args
*/
public static void main(String[] args) {
// Connect to the local Redis service
Jedis jedis = new Jedis("127.0.0.1");
System.out.println("Connected successfully");
// Check that the service is running
System.out.println("Service is running: " + jedis.ping());
// Push values onto a list
jedis.lpush("list", "redis");
jedis.lpush("list", "java");
jedis.lpush("list", "mysql");
// Read the stored values back and print them
List<String> list = jedis.lrange("list", 0, 2);
for (int i = 0, j = list.size(); i < j; i++) {
System.out.println("list element: " + list.get(i));
}
// Set a Redis string value
jedis.set("rst", "redisStringTest");
// Read the stored value back and print it
System.out.println("Stored string: " + jedis.get("rst"));
// Add members to a set
jedis.sadd("setTest1", "abc");
jedis.sadd("setTest1", "abcd");
jedis.sadd("setTest1", "abcde");
// Fetch all keys (keys("*") scans every key; smembers would read the set)
Set<String> keys = jedis.keys("*");
// Set<String> keys = jedis.smembers("setTest1");
// Iterate over the keys and print them
Iterator<String> it = keys.iterator();
while (it.hasNext()) {
String key = it.next();
System.out.println(key);
}
}
}
package com.pancm.redis;
import java.util.Iterator;
import java.util.List;
import java.util.Set;
import redis.clients.jedis.Jedis;
/**
 *
 * @Title: RedisTest
 * @Description: Redis test code
 * @Version: 1.0.0
 * @author pancm
 * @date 2017-08-19
 */
public class RedisTest {
/**
* @param args
*/
public static void main(String[] args) {
// Connect to the local Redis service
Jedis jedis = new Jedis("127.0.0.1");
System.out.println("Connected successfully");
// Check that the service is running
System.out.println("Service is running: " + jedis.ping());
// Push values onto a list
jedis.lpush("list", "redis");
jedis.lpush("list", "java");
jedis.lpush("list", "mysql");
// Read the stored values back and print them
List<String> list = jedis.lrange("list", 0, 2);
for (int i = 0, j = list.size(); i < j; i++) {
System.out.println("list element: " + list.get(i));
}
// Set a Redis string value
jedis.set("rst", "redisStringTest");
// Read the stored value back and print it
System.out.println("Stored string: " + jedis.get("rst"));
// Add members to a set
jedis.sadd("setTest1", "abc");
jedis.sadd("setTest1", "abcd");
jedis.sadd("setTest1", "abcde");
// Fetch all keys (keys("*") scans every key; smembers would read the set)
Set<String> keys = jedis.keys("*");
// Set<String> keys = jedis.smembers("setTest1");
// Iterate over the keys and print them
Iterator<String> it = keys.iterator();
while (it.hasNext()) {
String key = it.next();
System.out.println(key);
}
}
}
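A note on the list output above: `LPUSH` prepends each value to the head of the list, so `LRANGE list 0 2` returns the values in reverse insertion order (`mysql`, `java`, `redis`). The same semantics can be mirrored with a stdlib deque:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class LpushSemantics {
    // LPUSH prepends to the head of the list, like addFirst on a deque
    static List<String> lpushAll(String... values) {
        Deque<String> list = new ArrayDeque<>();
        for (String v : values) {
            list.addFirst(v);
        }
        // LRANGE walks from the head, yielding reverse insertion order
        return new ArrayList<>(list);
    }

    public static void main(String[] args) {
        System.out.println(lpushAll("redis", "java", "mysql")); // [mysql, java, redis]
    }
}
```

Use `RPUSH` instead when insertion order should be preserved on read.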


@@ -1,8 +1,8 @@
/**
 * @Title: package-info
 * @Description: Redis-related code
 * @Version: 1.0.0
 * @author pancm
 * @date 2018-09-21
 */
package com.pancm.sql.redis;
/**
 * @Title: package-info
 * @Description: Redis-related code
 * @Version: 1.0.0
 * @author pancm
 * @date 2018-09-21
 */
package com.pancm.redis;