Compare commits

...

21 Commits

Author SHA1 Message Date
xuwujing
b807ffea96 Changes 2024-01-24 16:28:09 +08:00
xuwujing
1ce0ec06f5 Changes 2024-01-18 15:46:40 +08:00
xuwujing
dd422e13aa Changes 2024-01-17 14:49:19 +08:00
xuwujing
d26489d68f Changes 2023-12-29 14:47:39 +08:00
xuwujing
a033648275 1. Update readme 2023-12-26 16:48:13 +08:00
pancm
ef5dc3deb8 add ftp 2023-06-08 11:24:25 +08:00
pancm
b6783b7025 add ffmpeg 2023-06-08 11:24:10 +08:00
pancm
f02d2cc6f4 add easyExcel 2023-06-08 11:14:41 +08:00
pancm
62c4c0b957 Update readme 2023-06-07 17:43:36 +08:00
pancm
3fba28f56e Update readme 2023-06-07 17:42:39 +08:00
xuwujing
083b8910b8
Merge pull request #10 from xuwujing/dependabot/maven/junit-junit-4.13.1
Bump junit from 4.12 to 4.13.1
2021-03-25 16:14:45 +08:00
xuwujing
5ecf5d1f94
Merge pull request #6 from xuwujing/dependabot/maven/org.apache.storm-storm-kafka-1.2.3
Bump storm-kafka from 1.2.2 to 1.2.3
2021-03-25 16:14:22 +08:00
xuwujing
797694e3d5
Merge pull request #11 from xuwujing/dependabot/maven/org.apache.poi-poi-3.17
Bump poi from 3.9 to 3.17
2021-03-25 16:14:03 +08:00
xuwujing
486a1fdab6
Merge pull request #12 from xuwujing/dev
Dev
2021-03-25 16:13:23 +08:00
xuwujing
29b1a0a783 minor update 2021-03-07 17:42:34 +08:00
xuwujing
a3a5104f01 1. Add ElasticSearch aggregation usage example 2021-03-06 14:32:33 +08:00
xuwujing
f697826997 1. Add ElasticSearch aggregation usage example 2021-03-06 14:22:51 +08:00
xuwujing
9800dabe89 Update notes 2021-03-06 14:20:40 +08:00
dependabot[bot]
1fc0ef47a3
Bump poi from 3.9 to 3.17
Bumps poi from 3.9 to 3.17.

Signed-off-by: dependabot[bot] <support@github.com>
2021-01-14 19:55:45 +00:00
dependabot[bot]
09a38484f6
Bump junit from 4.12 to 4.13.1
Bumps [junit](https://github.com/junit-team/junit4) from 4.12 to 4.13.1.
- [Release notes](https://github.com/junit-team/junit4/releases)
- [Changelog](https://github.com/junit-team/junit4/blob/main/doc/ReleaseNotes4.12.md)
- [Commits](https://github.com/junit-team/junit4/compare/r4.12...r4.13.1)

Signed-off-by: dependabot[bot] <support@github.com>
2020-10-13 08:56:20 +00:00
dependabot[bot]
035656604d
Bump storm-kafka from 1.2.2 to 1.2.3
Bumps storm-kafka from 1.2.2 to 1.2.3.

Signed-off-by: dependabot[bot] <support@github.com>
2020-02-20 10:06:42 +00:00
19 changed files with 1319 additions and 80 deletions

.gitignore vendored

@@ -1,43 +1,44 @@
/target/
/classes/
/log/
/logs/
.classpath
.project
.settings
.myeclipse
##filter databfile、sln file##
*.mdb
*.ldb
*.sln
##class file##
*.com
*.class
*.dll
*.exe
*.o
*.so
# compression file
*.7z
*.dmg
*.gz
*.iso
*.jar
*.rar
*.tar
*.zip
*.via
*.iml
*.tmp
*.err
*.log
# OS generated files #
/.idea
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
Icon?
ehthumbs.db
Thumbs.db
/target/
/classes/
/log/
/logs/
.classpath
.project
.settings
.myeclipse
##filter databfile、sln file##
*.mdb
*.ldb
*.sln
##class file##
*.com
*.class
*.dll
*.exe
*.o
*.so
# compression file
*.7z
*.dmg
*.gz
*.iso
*.jar
*.rar
*.tar
*.zip
*.via
*.iml
*.tmp
*.err
*.log
# OS generated files #
/.idea
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
Icon?
ehthumbs.db
Thumbs.db
/.github/


@@ -81,10 +81,28 @@
**ElasticSearch相关:**
- [ElasticSearch实战系列一: ElasticSearch集群+Kinaba安装教程](https://www.cnblogs.com/xuwujing/p/11385255.html)
- [ElasticSearch实战系列一: ElasticSearch集群+Kibana安装教程](https://www.cnblogs.com/xuwujing/p/11385255.html)
- [ElasticSearch实战系列二: ElasticSearch的DSL语句使用教程---图文详解](https://www.cnblogs.com/xuwujing/p/11567053.html)
- [ElasticSearch实战系列三: ElasticSearch的JAVA API使用教程](https://www.cnblogs.com/xuwujing/p/11645630.html)
- [ElasticSearch实战系列四: ElasticSearch理论知识介绍](https://www.cnblogs.com/xuwujing/p/12093933.html)
- [ElasticSearch实战系列五: ElasticSearch的聚合查询基础使用教程之度量(Metric)聚合](https://www.cnblogs.com/xuwujing/p/12385903.html)
- [ElasticSearch实战系列六: Logstash快速入门](https://www.cnblogs.com/xuwujing/p/13412108.html)
- [ElasticSearch实战系列七: Logstash实战使用-图文讲解](https://www.cnblogs.com/xuwujing/p/13520666.html)
- [ElasticSearch实战系列八: Filebeat快速入门和使用---图文详解](https://www.cnblogs.com/xuwujing/p/13532125.html)
- [ElasticSearch实战系列九: ELK日志系统介绍和安装](https://www.cnblogs.com/xuwujing/p/13870806.html)
- [ElasticSearch实战系列十: ElasticSearch冷热分离架构](https://www.cnblogs.com/xuwujing/p/14599290.html)
- [ElasticSearch实战系列十一: ElasticSearch错误问题解决方案](https://www.cnblogs.com/xuwujing/p/14806392.html)
**手记系列:**
- [手记系列之一 ----- 关于微信公众号和小程序的开发流程](https://www.cnblogs.com/xuwujing/p/16841577.html)
- [手记系列之二 ----- 关于IDEA的一些使用方法经验](https://www.cnblogs.com/xuwujing/p/16862451.html)
- [手记系列之三 ----- 关于使用Nginx的一些使用方法和经验](https://www.cnblogs.com/xuwujing/p/16885964.html)
- [手记系列之四 ----- 关于使用MySql的经验](https://www.cnblogs.com/xuwujing/p/17356379.html)
- [手记系列之五 ----- SQL使用经验分享](https://www.cnblogs.com/xuwujing/p/17444266.html)
- [手记系列之六 ----- 分享个人使用kafka经验](https://www.cnblogs.com/xuwujing/p/17466519.html)
- [手记系列之七 ----- 分享Linux使用经验](https://www.cnblogs.com/xuwujing/p/17807802.html)
@@ -98,6 +116,10 @@
- [个人收集的资源分享](https://www.cnblogs.com/xuwujing/p/10393111.html)
- [一个毕业三年的程序猿对于提升自我的一些建议](https://www.cnblogs.com/xuwujing/p/11735726.html)
- [认清自我不在迷茫2019个人年终总结](https://www.cnblogs.com/xuwujing/p/12174112.html)
- [纵然前路坎坷也要毅然前行2020年终总结](https://www.cnblogs.com/xuwujing/p/14233270.html)
- [有一点思考的2021年终总结](https://www.cnblogs.com/xuwujing/p/15746791.html)
- [一个想活得简单的程序猿的2022年终总结](https://www.cnblogs.com/xuwujing/p/17060965.html)
- [写给步入三十的自己2023年终总结!](https://www.cnblogs.com/xuwujing/p/17868627.html)
## 其他

pom.xml

@@ -27,7 +27,7 @@
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.12</version>
<version>4.13.1</version>
</dependency>
@@ -117,7 +117,7 @@
<dependency>
<groupId>org.apache.poi</groupId>
<artifactId>poi</artifactId>
<version>3.9</version>
<version>3.17</version>
</dependency>
<dependency>
@@ -223,12 +223,19 @@
</dependency>
<!--SQL Server 驱动包 -->
<dependency>
<!--<dependency>
<groupId>com.microsoft.sqlserver</groupId>
<artifactId>sqljdbc4</artifactId>
<version>4.0</version>
</dependency>
-->
<dependency>
<groupId>com.microsoft.sqlserver</groupId>
<artifactId>mssql-jdbc</artifactId>
<version>6.2.0.jre8</version>
<scope>test</scope>
</dependency>
<!-- 数据库相关jar end -->
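The hunk above swaps the legacy `sqljdbc4` artifact for Microsoft's maintained `mssql-jdbc` driver. As a minimal sketch of what that change means for client code: the connection URL format is unchanged, and since mssql-jdbc is a JDBC 4+ driver it self-registers, so no `Class.forName` call is needed. The class and host/database values below are illustrative placeholders, not from the repository.

```java
public class MssqlJdbcUrlDemo {
    // Builds the connection URL shape mssql-jdbc expects; host, port, and db are placeholders.
    static String url(String host, int port, String db) {
        return "jdbc:sqlserver://" + host + ":" + port + ";databaseName=" + db;
    }

    public static void main(String[] args) {
        System.out.println(url("localhost", 1433, "test"));
        // With the driver jar on the classpath, a connection would then be opened via
        // DriverManager.getConnection(url(...), user, password).
    }
}
```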
@@ -375,7 +382,7 @@
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-kafka</artifactId>
<version>1.2.2</version>
<version>1.2.3</version>
<scope>provided</scope>
</dependency>
@@ -395,9 +402,14 @@
<groupId>io.searchbox</groupId>
<artifactId>jest</artifactId>
<version>6.3.1</version>
</dependency>
</dependency>
<!-- excel 工具类-->
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>easyexcel</artifactId>
<version>2.2.7</version>
</dependency>


@@ -22,19 +22,19 @@ public class ResponsibilityTest {
String name = "xuwujing";
String something = "去聚餐";
String something2 = "去旅游";
Learder learder1 =new Supervisor(name, something);
Learder learder2 =new BranchManager(name, something);
Learder learder3 =new GeneralManager(name, something);
learder1.setLearder(learder2);
learder2.setLearder(learder3);
learder1.handler(1);
Leader leader1 =new Supervisor(name, something);
Leader leader2 =new BranchManager(name, something);
Leader leader3 =new GeneralManager(name, something);
leader1.setLeader(leader2);
leader2.setLeader(leader3);
leader1.handler(1);
Learder learder4 =new Supervisor(name, something2);
Learder learder5 =new BranchManager(name, something2);
Learder learder6 =new GeneralManager(name, something2);
learder4.setLearder(learder5);
learder5.setLearder(learder6);
learder4.handler(0);
Leader leader4 =new Supervisor(name, something2);
Leader leader5 =new BranchManager(name, something2);
Leader leader6 =new GeneralManager(name, something2);
leader4.setLeader(leader5);
leader5.setLeader(leader6);
leader4.handler(0);
@@ -76,24 +76,24 @@ class ConcreteHandler extends Handler {
}
abstract class Learder{
abstract class Leader {
protected Learder learder;
protected Leader leader;
protected void setLearder(Learder learder){
this.learder=learder;
protected void setLeader(Leader leader){
this.leader = leader;
}
protected Learder getLearder(){
return learder;
protected Leader getLeader(){
return leader;
}
abstract void handler(int level);
}
//主管
class Supervisor extends Learder{
class Supervisor extends Leader {
private String name;
private String something;
public Supervisor(String name,String something) {
@@ -108,13 +108,13 @@ class Supervisor extends Learder{
System.out.println("主管处理了 "+name+"所述的<"+something+">事情!");
}else{
System.out.println("主管未能处理 "+name+"所述的<"+something+">事情!转交给上级!");
getLearder().handler(level);
getLeader().handler(level);
}
}
}
//部门经理
class BranchManager extends Learder{
class BranchManager extends Leader {
private String name;
private String something;
public BranchManager(String name,String something) {
@@ -129,13 +129,13 @@ class BranchManager extends Learder{
System.out.println("部门经理处理了 "+name+"所述的<"+something+">事情!");
}else{
System.out.println("部门经理未能处理 "+name+"所述的<"+something+">事情!转交给上级!");
getLearder().handler(level);
getLeader().handler(level);
}
}
}
//总经理
class GeneralManager extends Learder{
class GeneralManager extends Leader {
private String name;
private String something;
public GeneralManager(String name,String something) {
@@ -150,7 +150,7 @@ class GeneralManager extends Learder{
System.out.println("总经理处理了 "+name+"所述的<"+something+">事情!");
}else{
System.out.println("总经理未能处理 "+name+"所述的<"+something+">事情!转交给上级!");
getLearder().handler(level);
getLeader().handler(level);
}
}
}


@@ -8,13 +8,17 @@ import org.elasticsearch.action.admin.indices.get.GetIndexRequest;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.script.Script;
import org.elasticsearch.search.aggregations.Aggregation;
import org.elasticsearch.search.aggregations.AggregationBuilder;
import org.elasticsearch.search.aggregations.AggregationBuilders;
@@ -23,10 +27,13 @@ import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInter
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;
import org.elasticsearch.search.aggregations.metrics.avg.Avg;
import org.elasticsearch.search.aggregations.metrics.cardinality.CardinalityAggregationBuilder;
import org.elasticsearch.search.aggregations.metrics.max.Max;
import org.elasticsearch.search.aggregations.metrics.min.Min;
import org.elasticsearch.search.aggregations.metrics.sum.Sum;
import org.elasticsearch.search.aggregations.metrics.tophits.TopHits;
import org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilders;
import org.elasticsearch.search.aggregations.pipeline.bucketselector.BucketSelectorPipelineAggregationBuilder;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -270,7 +277,7 @@ public class EsAggregationSearchTest {
private static void maxSearch() throws IOException{
String buk="t_grade";
AggregationBuilder aggregation = AggregationBuilders.max(buk).field("grade");
logger.info("求班级的最分数:");
logger.info("求班级的最分数:");
agg(aggregation,buk);
}
@@ -393,6 +400,28 @@ public class EsAggregationSearchTest {
});
}
private static void agg(List<Map<String, Object>> list, Aggregations aggregations) {
aggregations.forEach(aggregation -> {
String name = aggregation.getName();
Terms genders = aggregations.get(name);
for (Terms.Bucket entry : genders.getBuckets()) {
String key = entry.getKey().toString();
long t = entry.getDocCount();
Map<String,Object> map =new HashMap<>();
map.put(name,key);
map.put(name+"_"+"count",t);
//判断里面是否还有嵌套的数据
List<Aggregation> list2 = entry.getAggregations().asList();
if (list2.isEmpty()) {
list.add(map);
}else{
agg(list, entry.getAggregations());
}
}
});
System.out.println(list);
}
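The `agg(List, Aggregations)` method added above flattens nested terms buckets recursively: leaf buckets become result rows, while buckets that still contain sub-aggregations recurse instead (note that the parent bucket's own map is discarded in that branch). A standalone sketch of the same recursion, using a hypothetical `Bucket` stand-in instead of the real Elasticsearch `Terms.Bucket`:

```java
import java.util.*;

public class NestedBucketFlattenDemo {
    // Minimal stand-in for an ES terms bucket: aggregation name, key, doc count, child buckets.
    static class Bucket {
        String name; String key; long count;
        List<Bucket> children = new ArrayList<>();
        Bucket(String name, String key, long count) { this.name = name; this.key = key; this.count = count; }
    }

    // Mirrors the recursion in agg(): only leaf buckets are emitted as rows.
    static void flatten(List<Map<String, Object>> rows, List<Bucket> buckets) {
        for (Bucket b : buckets) {
            Map<String, Object> row = new HashMap<>();
            row.put(b.name, b.key);
            row.put(b.name + "_count", b.count);
            if (b.children.isEmpty()) {
                rows.add(row);       // leaf: keep the row
            } else {
                flatten(rows, b.children); // nested: descend, dropping the parent row
            }
        }
    }

    public static void main(String[] args) {
        Bucket group = new Bucket("class", "A", 3);
        group.children.add(new Bucket("gender", "male", 2));
        group.children.add(new Bucket("gender", "female", 1));
        List<Map<String, Object>> rows = new ArrayList<>();
        flatten(rows, Collections.singletonList(group));
        System.out.println(rows); // two rows, one per leaf gender bucket
    }
}
```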
private static SearchResponse search(AggregationBuilder aggregation) throws IOException {
@@ -473,6 +502,99 @@
}
}
/**
* @Author pancm
* @Description having
* @Date 2020/8/21
* @Param []
* @return void
**/
private static void havingSearch() throws IOException{
String index="";
SearchRequest searchRequest = new SearchRequest(index);
searchRequest.indices(index);
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
BoolQueryBuilder boolQueryBuilder = new BoolQueryBuilder();
searchRequest.indicesOptions(IndicesOptions.lenientExpandOpen());
String alias_name = "nas_ip_address_group";
String group_name = "nas_ip_address";
String query_name = "acct_start_time";
String query_type = "gte,lte";
String query_name_value="2020-08-05 13:25:55,2020-08-20 13:26:55";
String[] query_types= query_type.split(",");
String[] query_name_values= query_name_value.split(",");
for (int i = 0; i < query_types.length; i++) {
if("gte".equals(query_types[i])){
boolQueryBuilder.must(QueryBuilders.rangeQuery(query_name).gte(query_name_values[i]));
}
if("lte".equals(query_types[i])){
boolQueryBuilder.must(QueryBuilders.rangeQuery(query_name).lte(query_name_values[i]));
}
}
AggregationBuilder aggregationBuilder = AggregationBuilders.terms(alias_name).field(group_name).size(Integer.MAX_VALUE);
//声明BucketPath用于后面的bucket筛选
Map<String, String> bucketsPathsMap = new HashMap<>(8);
bucketsPathsMap.put("groupCount", "_count");
//设置脚本
Script script = new Script("params.groupCount >= 1000");
//构建bucket选择器
BucketSelectorPipelineAggregationBuilder bs =
PipelineAggregatorBuilders.bucketSelector("having", bucketsPathsMap, script);
aggregationBuilder.subAggregation(bs);
sourceBuilder.aggregation(aggregationBuilder);
//不需要解释
sourceBuilder.explain(false);
//不需要原始数据
sourceBuilder.fetchSource(false);
//不需要版本号
sourceBuilder.version(false);
sourceBuilder.query(boolQueryBuilder);
searchRequest.source(sourceBuilder);
System.out.println(sourceBuilder);
// 同步查询
SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
// 查询条数
long count = searchResponse.getHits().getHits().length;
Aggregations aggregations = searchResponse.getAggregations();
// agg(aggregations);
Map<String,Object> map =new HashMap<>();
List<Map<String,Object>> list =new ArrayList<>();
agg(list,aggregations);
// System.out.println(map);
System.out.println(list);
}
/**
* @Author pancm
* @Description 去重
* @Date 2020/8/26
* @Param []
* @return void
**/
private static void distinctSearch() throws IOException{
String buk="group";
String distinctName="name";
AggregationBuilder aggregation = AggregationBuilders.terms("age").field("age");
CardinalityAggregationBuilder cardinalityBuilder = AggregationBuilders.cardinality(distinctName).field(distinctName);
//根据创建时间按天分组
// AggregationBuilder aggregation3 = AggregationBuilders.dateHistogram("createtm")
// .field("createtm")
// .format("yyyy-MM-dd")
// .dateHistogramInterval(DateHistogramInterval.DAY);
//
// aggregation2.subAggregation(aggregation3);
aggregation.subAggregation(cardinalityBuilder);
agg(aggregation,buk);
}
private static void topSearch() throws IOException{


@@ -96,13 +96,10 @@ public class EsHighLevelRestTest2 {
request.add(new MultiGetRequest.Item("user", "userindex", "2"));
// 禁用源检索默认启用
// request.add(new MultiGetRequest.Item("user", "userindex", "2").fetchSourceContext(FetchSourceContext.DO_NOT_FETCH_SOURCE));
// 同步构建
MultiGetResponse response = client.mget(request, RequestOptions.DEFAULT);
// 异步构建
// MultiGetResponse response2 = client.mgetAsync(request, RequestOptions.DEFAULT, listener);
/*
* 返回的MultiGetResponse包含在' getResponses中的MultiGetItemResponse的列表其顺序与请求它们的顺序相同
* 如果成功MultiGetItemResponse包含GetResponse或MultiGetResponse如果失败了就失败


@@ -0,0 +1,22 @@
package com.pancm.excel;
import lombok.Data;
import lombok.EqualsAndHashCode;
import java.util.Date;
/**
* @author pancm
* @Title: pancm_project
* @Description:
* @Version:1.0.0
* @Since:jdk1.8
* @date 2023/3/23
*/
@Data
@EqualsAndHashCode
public class DemoData {
private String string;
private Date date;
private Double doubleData;
}


@@ -0,0 +1,206 @@
package com.pancm.excel;
import com.alibaba.excel.EasyExcel;
import com.alibaba.excel.ExcelWriter;
import com.alibaba.excel.metadata.Head;
import com.alibaba.excel.support.ExcelTypeEnum;
import com.alibaba.excel.util.CollectionUtils;
import com.alibaba.excel.write.merge.AbstractMergeStrategy;
import com.alibaba.excel.write.metadata.WriteSheet;
import com.alibaba.excel.write.metadata.WriteTable;
import com.google.common.collect.Lists;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.util.CellRangeAddress;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.stream.Collectors;
/**
* @author pancm
* @Title: pancm_project
* @Description: 多行合并
* 合并单元格
* @Version:1.0.0
* @Since:jdk1.8
* @date 2023/3/23
*/
public class EasyExcelMergeTest {
public static void main(String[] args) {
writeExcel();
writeExcel01();
writeExcel02();
writeExcel03();
}
private static String getPath(String s) {
return System.getProperty("user.dir") + "/" + s+"_"+System.currentTimeMillis() + ".xlsx";
}
private static List<DemoData> data1() {
List<DemoData> list = Lists.newArrayList();
for (int i = 0; i < 3; i++) {
DemoData data = new DemoData();
data.setString("字符串" + 1);
data.setDate(new Date());
data.setDoubleData(0.56);
list.add(data);
}
for (int i = 0; i < 3; i++) {
DemoData data = new DemoData();
data.setString("字符串" + 2);
data.setDate(new Date());
data.setDoubleData(0.56);
list.add(data);
}
for (int i = 0; i < 4; i++) {
DemoData data = new DemoData();
data.setString("字符串" + 3);
data.setDate(new Date());
data.setDoubleData(0.57);
list.add(data);
}
return list;
}
// 自定义合并策略 该类继承了AbstractMergeStrategy抽象合并策略需要重写merge()方法
public static class CustomMergeStrategy extends AbstractMergeStrategy {
/**
* 分组每几行合并一次
*/
private List<Integer> exportFieldGroupCountList;
/**
* 目标合并列index
*/
private Integer targetColumnIndex;
// 需要开始合并单元格的首行index
private Integer rowIndex;
// exportDataList为待合并目标列的值
public CustomMergeStrategy(List<String> exportDataList, Integer targetColumnIndex) {
this.exportFieldGroupCountList = getGroupCountList(exportDataList);
this.targetColumnIndex = targetColumnIndex;
}
@Override
protected void merge(Sheet sheet, Cell cell, Head head, Integer relativeRowIndex) {
if (null == rowIndex) {
rowIndex = cell.getRowIndex();
}
// 仅从首行以及目标列的单元格开始合并忽略其他
if (cell.getRowIndex() == rowIndex && cell.getColumnIndex() == targetColumnIndex) {
mergeGroupColumn(sheet);
}
}
private void mergeGroupColumn(Sheet sheet) {
int rowCount = rowIndex;
for (Integer count : exportFieldGroupCountList) {
if (count == 1) {
rowCount += count;
continue;
}
// 合并单元格
CellRangeAddress cellRangeAddress = new CellRangeAddress(rowCount, rowCount + count - 1, targetColumnIndex, targetColumnIndex);
sheet.addMergedRegionUnsafe(cellRangeAddress);
rowCount += count;
}
}
// 该方法将目标列根据值是否相同连续可合并存储可合并的行数
private List<Integer> getGroupCountList(List<String> exportDataList) {
if (CollectionUtils.isEmpty(exportDataList)) {
return new ArrayList<>();
}
List<Integer> groupCountList = new ArrayList<>();
int count = 1;
for (int i = 1; i < exportDataList.size(); i++) {
if (exportDataList.get(i).equals(exportDataList.get(i - 1))) {
count++;
} else {
groupCountList.add(count);
count = 1;
}
}
// 处理完最后一条后
groupCountList.add(count);
return groupCountList;
}
}
// 单列多行合并
public static void writeExcel() {
String fileName = getPath("单列多行");
ExcelWriter excelWriter = EasyExcel.write(fileName).excelType(ExcelTypeEnum.XLSX).build();
List<DemoData> demoDataList = data1();
// 写sheet的时候注册相应的自定义合并单元格策略
WriteSheet writeSheet = EasyExcel.writerSheet("模板1").head(DemoData.class)
.registerWriteHandler(new CustomMergeStrategy(demoDataList.stream().map(DemoData::getString).collect(Collectors.toList()), 0))
.build();
excelWriter.write(demoDataList, writeSheet);
excelWriter.finish();
}
//多列多行合并
public static void writeExcel01() {
String fileName = getPath("多行多列");
ExcelWriter excelWriter = EasyExcel.write(fileName).excelType(ExcelTypeEnum.XLSX).build();
List<DemoData> demoDataList = data1();
WriteSheet writeSheet = EasyExcel.writerSheet("模板1").head(DemoData.class)
.registerWriteHandler(new CustomMergeStrategy(demoDataList.stream().map(DemoData::getString).collect(Collectors.toList()), 0))
.registerWriteHandler(new CustomMergeStrategy(demoDataList.stream().map(o -> o.getDoubleData().toString()).collect(Collectors.toList()), 2))
.build();
excelWriter.write(demoDataList, writeSheet);
excelWriter.finish();
}
//多sheet
public static void writeExcel02() {
String fileName = getPath("多sheet");
ExcelWriter excelWriter = EasyExcel.write(fileName).excelType(ExcelTypeEnum.XLSX).build();
List<DemoData> demoDataList = data1();
WriteSheet writeSheet = EasyExcel.writerSheet("模板1").head(DemoData.class)
.registerWriteHandler(new CustomMergeStrategy(demoDataList.stream().map(DemoData::getString).collect(Collectors.toList()), 0))
.registerWriteHandler(new CustomMergeStrategy(demoDataList.stream().map(o -> o.getDoubleData().toString()).collect(Collectors.toList()), 2))
.build();
excelWriter.write(demoDataList, writeSheet);
WriteSheet writeSheet1 = EasyExcel.writerSheet("模板2").head(DemoData.class).build();
excelWriter.write(data1(), writeSheet1);
excelWriter.finish();
}
//多表
public static void writeExcel03() {
String fileName = getPath("多表");
ExcelWriter excelWriter = EasyExcel.write(fileName).excelType(ExcelTypeEnum.XLSX).build();
WriteSheet writeSheet = EasyExcel.writerSheet("模板").needHead(Boolean.FALSE).build();
List<DemoData> demoDataList = data1();
// 需要表头设置为trueWriteTable一些属性会继承自WriteSheet
WriteTable writeTable = EasyExcel.writerTable(1).head(DemoData.class).needHead(Boolean.TRUE)
.registerWriteHandler(new CustomMergeStrategy(demoDataList.stream().map(DemoData::getString).collect(Collectors.toList()), 0))
.registerWriteHandler(new CustomMergeStrategy(demoDataList.stream().map(o -> o.getDoubleData().toString()).collect(Collectors.toList()), 2))
.build();
excelWriter.write(demoDataList, writeSheet, writeTable);
WriteTable writeTable1 = EasyExcel.writerTable(2).head(DemoData.class).needHead(Boolean.TRUE).build();
excelWriter.write(data1(), writeSheet, writeTable1);
excelWriter.finish();
}
}
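The heart of `CustomMergeStrategy` is `getGroupCountList`: a run-length pass over the target column that records how many consecutive rows share the same value, which then drives the `CellRangeAddress` merges. That logic can be sketched standalone (the class name `GroupCountDemo` is mine, not the repository's):

```java
import java.util.*;

public class GroupCountDemo {
    // Same run-length logic as CustomMergeStrategy.getGroupCountList:
    // count consecutive equal values, flushing a count at each value change.
    static List<Integer> groupCounts(List<String> values) {
        List<Integer> counts = new ArrayList<>();
        if (values.isEmpty()) {
            return counts;
        }
        int count = 1;
        for (int i = 1; i < values.size(); i++) {
            if (values.get(i).equals(values.get(i - 1))) {
                count++;
            } else {
                counts.add(count);
                count = 1;
            }
        }
        counts.add(count); // flush the final run
        return counts;
    }

    public static void main(String[] args) {
        // Mirrors data1() above: 3 rows of one value, 3 of another, 4 of a third.
        List<String> column = Arrays.asList("a", "a", "a", "b", "b", "b", "c", "c", "c", "c");
        System.out.println(groupCounts(column)); // [3, 3, 4]
    }
}
```

Each count `n > 1` then becomes one merged region spanning `n` rows of the target column.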


@@ -0,0 +1,288 @@
package com.pancm.excel;
import com.alibaba.excel.EasyExcel;
import com.alibaba.excel.ExcelReader;
import com.alibaba.excel.context.AnalysisContext;
import com.alibaba.excel.metadata.CellData;
import com.alibaba.excel.metadata.CellExtra;
import com.alibaba.excel.read.listener.ReadListener;
import com.alibaba.excel.read.metadata.ReadSheet;
import com.alibaba.fastjson.JSON;
import lombok.Data;
import lombok.EqualsAndHashCode;
import lombok.extern.slf4j.Slf4j;
import org.junit.Test;
import java.io.File;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.Map;
/**
* @author pancm
* @Title: pancm_project
* @Description: 参考: https://www.yuque.com/easyexcel/doc
* @Version:1.0.0
* @Since:jdk1.8
* @date 2021/1/26
*/
@Slf4j
public class EasyExcelTest {
/**
* 最简单的读
* <p>
* 1. 创建excel对应的实体对象 参照{@link DemoData}
* <p>
* 2. 由于默认一行行的读取excel所以需要创建excel一行一行的回调监听器参照{@link DemoDataListener}
* <p>
* 3. 直接读即可
*/
@Test
public void simpleRead() {
// 写法1JDK8+ ,不用额外写一个DemoDataListener
// since: 3.0.0-beta1
String fileName = "/home" + File.separator + "demo" + File.separator + "demo.xlsx";
// 这里 需要指定读用哪个class去读然后读取第一个sheet 文件流会自动关闭
// 这里每次会读取3000条数据 然后返回过来 直接调用使用数据就行
// EasyExcel.read(fileName, DemoData.class, new PageReadListener<DemoData>(dataList -> {
// for (DemoData demoData : dataList) {
// log.info("读取到一条数据{}", JSON.toJSONString(demoData));
// }
// })).sheet().doRead();
// 写法2
// 匿名内部类 不用额外写一个DemoDataListener
fileName = "/home" + File.separator + "demo" + File.separator + "demo.xlsx";
// 这里 需要指定读用哪个class去读然后读取第一个sheet 文件流会自动关闭
EasyExcel.read(fileName, DemoData.class, new ReadListener<DemoData>() {
/**
* 单次缓存的数据量
*/
public static final int BATCH_COUNT = 100;
/**
*临时存储
*/
private List<DemoData> cachedDataList = new ArrayList<>();
/**
* All listeners receive this method when any one Listener does an error report. If an exception is thrown here, the
* entire read will terminate.
*
* @param exception
* @param context
* @throws Exception
*/
@Override
public void onException(Exception exception, AnalysisContext context) throws Exception {
}
/**
* When analysis one head row trigger invoke function.
*
* @param headMap
* @param context
*/
@Override
public void invokeHead(Map<Integer, CellData> headMap, AnalysisContext context) {
}
@Override
public void invoke(DemoData data, AnalysisContext context) {
cachedDataList.add(data);
if (cachedDataList.size() >= BATCH_COUNT) {
saveData();
// 存储完成清理 list
cachedDataList = new ArrayList<>();
}
}
/**
* The current method is called when extra information is returned
*
* @param extra extra information
* @param context
*/
@Override
public void extra(CellExtra extra, AnalysisContext context) {
}
@Override
public void doAfterAllAnalysed(AnalysisContext context) {
saveData();
}
/**
* Verify that there is another piece of data.You can stop the read by returning false
*
* @param context
* @return
*/
@Override
public boolean hasNext(AnalysisContext context) {
return false;
}
/**
* 加上存储数据库
*/
private void saveData() {
log.info("{}条数据,开始存储数据库!", cachedDataList.size());
log.info("存储数据库成功!");
}
}).sheet().doRead();
// 有个很重要的点 DemoDataListener 不能被spring管理要每次读取excel都要new,然后里面用到spring可以构造方法传进去
// 写法3
fileName = "/home" + File.separator + "demo" + File.separator + "demo.xlsx";
// 这里 需要指定读用哪个class去读然后读取第一个sheet 文件流会自动关闭
EasyExcel.read(fileName, DemoData.class, new DemoDataListener()).sheet().doRead();
// 写法4
fileName = "/home" + File.separator + "demo" + File.separator + "demo.xlsx";
// 一个文件一个reader
ExcelReader excelReader = null;
try {
excelReader = EasyExcel.read(fileName, DemoData.class, new DemoDataListener()).build();
// 构建一个sheet 这里可以指定名字或者no
ReadSheet readSheet = EasyExcel.readSheet(0).build();
// 读取一个sheet
excelReader.read(readSheet);
} finally {
if (excelReader != null) {
// 这里千万别忘记关闭读的时候会创建临时文件到时磁盘会崩的
excelReader.finish();
}
}
}
@Data
@EqualsAndHashCode
class DemoData {
private String string;
private Date date;
private Double doubleData;
}
class DemoDataListener implements ReadListener<DemoData> {
/**
* 每隔5条存储数据库实际使用中可以100条然后清理list 方便内存回收
*/
private static final int BATCH_COUNT = 100;
/**
* 缓存的数据
*/
private List<DemoData> cachedDataList = new ArrayList<>();
/**
* 假设这个是一个DAO当然有业务逻辑这个也可以是一个service当然如果不用存储这个对象没用
*/
// private DemoDAO demoDAO;
//
// public DemoDataListener() {
// // 这里是demo所以随便new一个实际使用如果到了spring,请使用下面的有参构造函数
// demoDAO = new DemoDAO();
// }
/**
* 如果使用了spring,请使用这个构造方法每次创建Listener的时候需要把spring管理的类传进来
*
* @param demoDAO
*/
// public DemoDataListener(DemoDAO demoDAO) {
// this.demoDAO = demoDAO;
// }
/**
* All listeners receive this method when any one Listener does an error report. If an exception is thrown here, the
* entire read will terminate.
*
* @param exception
* @param context
* @throws Exception
*/
@Override
public void onException(Exception exception, AnalysisContext context) throws Exception {
}
/**
* When analysis one head row trigger invoke function.
*
* @param headMap
* @param context
*/
@Override
public void invokeHead(Map<Integer, CellData> headMap, AnalysisContext context) {
}
/**
* 这个每一条数据解析都会来调用
*
* @param data one row value. Is is same as {@link AnalysisContext#readRowHolder()}
* @param context
*/
@Override
public void invoke(DemoData data, AnalysisContext context) {
log.info("解析到一条数据:{}", JSON.toJSONString(data));
cachedDataList.add(data);
// 达到BATCH_COUNT了需要去存储一次数据库防止数据几万条数据在内存容易OOM
if (cachedDataList.size() >= BATCH_COUNT) {
saveData();
// 存储完成清理 list
// cachedDataList = ListUtils.newArrayListWithExpectedSize(BATCH_COUNT);
}
}
/**
* The current method is called when extra information is returned
*
* @param extra extra information
* @param context
*/
@Override
public void extra(CellExtra extra, AnalysisContext context) {
}
/**
* 所有数据解析完成了 都会来调用
*
* @param context
*/
@Override
public void doAfterAllAnalysed(AnalysisContext context) {
// 这里也要保存数据确保最后遗留的数据也存储到数据库
saveData();
log.info("所有数据解析完成!");
}
/**
* Verify that there is another piece of data.You can stop the read by returning false
*
* @param context
* @return
*/
@Override
public boolean hasNext(AnalysisContext context) {
return false;
}
/**
* 加上存储数据库
*/
private void saveData() {
log.info("{}条数据,开始存储数据库!", cachedDataList.size());
// demoDAO.save(cachedDataList);
log.info("存储数据库成功!");
}
}
}


@@ -0,0 +1,110 @@
package com.pancm.excel;
import com.alibaba.excel.EasyExcel;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONObject;
import lombok.Data;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.io.Serializable;
import java.net.URLEncoder;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
public class EasyExcelUtils {
public static void main(String[] args) {
String[] headMap = { "项目名称", "楼栋名称", "单元名称", "楼层名称", "房间名称", "业主/租户姓名", "房间状态", "房间功能","认证人数","测试" };
String[] dataStrMap={"hName","bName","uName","fName","pName","cName","pState","pFunction","pNum"};
NoModelWriteData d = new NoModelWriteData();
d.setFileName("认证统计");
d.setHeadMap(headMap);
d.setDataStrMap(dataStrMap);
List<JSONObject> listDatas = new ArrayList<>();
JSONObject jsonObject = new JSONObject();
jsonObject.put("hName","项目1");
jsonObject.put("bName","二楼");
jsonObject.put("aa","测试");
listDatas.add(jsonObject); // 行数据必须先加入列表,否则导出内容为空
d.setDataList(listDatas);
EasyExcelUtils easyExcelUtils = new EasyExcelUtils();
// easyExcelUtils.jsonWrite(d, response);
}
//不创建对象的导出
public void jsonWrite(NoModelWriteData data, HttpServletResponse response) throws IOException {
// 这里注意 有同学反应使用swagger 会导致各种问题请直接用浏览器或者用postman
try {
// response.setContentType("application/vnd.ms-excel");
response.setContentType("application/vnd.openxmlformats-officedocument.spreadsheetml.sheet");
response.setCharacterEncoding("utf-8");
// 这里URLEncoder.encode可以防止中文乱码 当然和easyexcel没有关系
String fileName = URLEncoder.encode(data.getFileName(), "UTF-8");
response.setHeader("Content-disposition", "attachment;filename=" + fileName + ".xlsx");
// 这里需要设置不关闭流
EasyExcel.write(response.getOutputStream()).head(head(data.getHeadMap())).sheet(data.getFileName()).doWrite(dataList(data.getDataList(), data.getDataStrMap()));
} catch (Exception e) {
// 重置response
response.reset();
response.setContentType("application/json");
response.setCharacterEncoding("utf-8");
Map<String, String> map = new HashMap<String, String>();
map.put("status", "failure");
map.put("message", "下载文件失败" + e.getMessage());
response.getWriter().println(JSON.toJSONString(map));
}
}
//创建对象的导出
public <T> void simpleWrite(SimpleWriteData data,Class<T> clazz, HttpServletResponse response) throws IOException {
// 这里注意 有同学反应使用swagger 会导致各种问题请直接用浏览器或者用postman
// response.setContentType("application/vnd.ms-excel");
response.setContentType("application/vnd.openxmlformats-officedocument.spreadsheetml.sheet");
response.setCharacterEncoding("utf-8");
// 这里URLEncoder.encode可以防止中文乱码 当然和easyexcel没有关系
String fileName = URLEncoder.encode(data.getFileName(), "UTF-8");
response.setHeader("Content-disposition", "attachment;filename=" + fileName + ".xlsx");
EasyExcel.write(response.getOutputStream(), clazz).sheet(data.getFileName()).doWrite(data.getDataList());
}
//设置表头
private List<List<String>> head(String[] headMap) {
List<List<String>> list = new ArrayList<List<String>>();
for (String head : headMap) {
List<String> headList = new ArrayList<String>();
headList.add(head);
list.add(headList);
}
return list;
}
//设置导出的数据内容
private List<List<Object>> dataList(List<JSONObject> dataList, String[] dataStrMap) {
List<List<Object>> list = new ArrayList<List<Object>>();
for (JSONObject map : dataList) {
List<Object> data = new ArrayList<Object>();
for (int i = 0; i < dataStrMap.length; i++) {
data.add(map.get(dataStrMap[i]));
}
list.add(data);
}
return list;
}
}
@Data
class NoModelWriteData implements Serializable {
private String fileName; // file name
private String[] headMap; // column header array
private String[] dataStrMap; // data field keys, matching the header order
private List<JSONObject> dataList; // row data
}
@Data
class SimpleWriteData implements Serializable {
private String fileName; // file name
private List<?> dataList; // row data
}
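The `head()` transformation above is easy to check in isolation: EasyExcel expects each column header as its own single-element list. A minimal standalone sketch (class and method names here are illustrative, not part of the project):

```java
import java.util.ArrayList;
import java.util.List;

// Standalone sketch of the head() transformation used in the service above.
public class HeadSketch {
    static List<List<String>> head(String[] headMap) {
        List<List<String>> list = new ArrayList<>();
        for (String h : headMap) {
            // each header string becomes its own one-element column list
            List<String> column = new ArrayList<>();
            column.add(h);
            list.add(column);
        }
        return list;
    }

    public static void main(String[] args) {
        System.out.println(head(new String[]{"name", "age"}));
    }
}
```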

View File

@ -0,0 +1,115 @@
package com.pancm.excel;
import com.alibaba.excel.EasyExcel;
import com.alibaba.excel.ExcelWriter;
import com.alibaba.excel.annotation.ExcelIgnore;
import com.alibaba.excel.annotation.ExcelProperty;
import com.alibaba.excel.write.metadata.WriteSheet;
import com.alibaba.fastjson.JSONObject;
import lombok.Data;
import lombok.EqualsAndHashCode;
import lombok.extern.slf4j.Slf4j;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
/**
* @author pancm
* @Title: pancm_project
 * @Description: Excel write module test
* @Version:1.0.0
* @Since:jdk1.8
* @date 2022/3/2
*/
@Slf4j
public class EasyExcelWriteTest {
public static void main(String[] args) {
List<JSONObject> jsonObjectList = new ArrayList<>();
JSONObject jsonObject = new JSONObject();
jsonObject.put("t1",1);
jsonObject.put("t2","2");
JSONObject jsonObject2 = new JSONObject();
jsonObject2.put("t1",11);
jsonObject2.put("t2","22");
jsonObjectList.add(jsonObject);
jsonObjectList.add(jsonObject2);
String fileName = "simpleWrite" + System.currentTimeMillis() + ".xlsx";
// Writes to the first sheet (named "模板"); the file stream is closed automatically
// To produce the 2003 .xls format instead, pass the excelType parameter
EasyExcel.write(fileName).sheet("模板").doWrite(jsonObjectList);
}
public void excludeOrIncludeWrite() {
// Note: simpleWrite suits modest volumes (up to roughly 5,000 rows, depending on the data); for larger exports use repeated batch writes
// Style 1: JDK 8+
// since: 3.0.0-beta1
String fileName = "/home" + "/simpleWrite" + System.currentTimeMillis() + ".xlsx";
// Specify which model class to write with; writes the first sheet (named "模板") and closes the stream automatically
// To produce the 2003 .xls format instead, pass the excelType parameter
// EasyExcel.write(fileName, DemoData.class)
// .sheet("模板")
// .doWrite(
// () -> {
// // Query data page by page
// return data();
// });
// Style 2
fileName = "/home" + "/simpleWrite" + System.currentTimeMillis() + ".xlsx";
// Specify which model class to write with; writes the first sheet (named "模板") and closes the stream automatically
// To produce the 2003 .xls format instead, pass the excelType parameter
EasyExcel.write(fileName, DemoData.class).sheet("模板").doWrite(data());
// Style 3
fileName = "/home"+ "/simpleWrite" + System.currentTimeMillis() + ".xlsx";
// Specify which model class to write with
ExcelWriter excelWriter = null;
try {
excelWriter = EasyExcel.write(fileName, DemoData.class).build();
WriteSheet writeSheet = EasyExcel.writerSheet("模板").build();
excelWriter.write(data(), writeSheet);
} finally {
// Never forget finish(); it flushes and closes the stream
if (excelWriter != null) {
excelWriter.finish();
}
}
}
private List<DemoData> data() {
List<DemoData> list = new ArrayList<>();
for (int i = 0; i < 10; i++) {
DemoData data = new DemoData();
data.setString("字符串" + i);
data.setDate(new Date());
data.setDoubleData(0.56);
list.add(data);
}
return list;
}
@Data
@EqualsAndHashCode
static class DemoData {
@ExcelProperty("字符串标题")
private String string;
@ExcelProperty("日期标题")
private Date date;
@ExcelProperty("数字标题")
private Double doubleData;
/**
 * This field is ignored during export
*/
@ExcelIgnore
private String ignore;
}
}
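For the larger exports mentioned in the note above, the style-3 repeated-write pattern is usually combined with paging the source data into batches and calling `excelWriter.write(batch, writeSheet)` once per batch. A minimal pure-Java batching sketch (the helper name is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Split a list into fixed-size batches for repeated EasyExcel writes.
public class BatchSketch {
    static <T> List<List<T>> partition(List<T> source, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < source.size(); i += batchSize) {
            // copy each sub-range so batches stay valid independently of the source list
            batches.add(new ArrayList<>(source.subList(i, Math.min(i + batchSize, source.size()))));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 12; i++) rows.add(i);
        // 12 rows in batches of 5 -> 3 batches (5, 5, 2)
        System.out.println(partition(rows, 5).size());
    }
}
```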

View File

@ -0,0 +1,9 @@
/**
* @Title: pancm_project
 * @Description: Excel utility classes
* @Version:1.0.0
* @Since:jdk1.8
* @author pancm
* @date 2021/1/26
*/
package com.pancm.excel;

View File

@ -0,0 +1,66 @@
package com.pancm.ffmpeg;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;
/**
* @author pancm
* @Title: FFmpegTest
 * @Description: Grab a frame image from a video with FFmpeg
 * Invokes the ffmpeg command line to capture an image from a .ts video
* @Version:1.0.0
* @Since:jdk1.8
* @Date 2021/4/14
**/
public class FFmpegTest {
public static void main(String[] args) {
String ffmpegExePath = "C:\\ffmpeg\\bin\\ffmpeg.exe";
String inputFilePath = "D:\\video\\ts\\25-16_940.ts";
String outputFilePath = "D:\\video\\ts\\t2.jpg";
List<String> command = new ArrayList<String>();
command.add(ffmpegExePath);
command.add("-i");
command.add(inputFilePath);
command.add("-f");
command.add("image2");
command.add("-ss");
command.add("1");
command.add("-t");
command.add("0.001");
command.add("-s");
command.add("320x240"); // ffmpeg expects the size as WIDTHxHEIGHT
command.add(outputFilePath);
ProcessBuilder builder = new ProcessBuilder();
builder.command(command);
// Merge stderr into stdout
builder.redirectErrorStream(true);
try {
// Start the command
Process process = builder.start();
// Reading the output is needed to collect the result, and it also keeps the process from blocking on a full pipe
StringBuilder sbf = new StringBuilder();
String line = null;
BufferedReader br = new BufferedReader(new InputStreamReader(process.getInputStream()));
while ((line = br.readLine()) != null) {
sbf.append(line);
sbf.append(" ");
}
String resultInfo = sbf.toString();
System.out.println(resultInfo);
} catch (IOException e) {
e.printStackTrace();
}
}
}

View File

@ -0,0 +1,69 @@
package com.pancm.ffmpeg;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;
/**
* @author pancm
* @Title: gb28181_platform
 * @Description: FFmpeg-related utility class
* @Version:1.0.0
* @Since:jdk1.8
* @date 2021/4/15
*/
public class FFmpegUtil {
/**
 * Run an ffmpeg command and print its combined output
 * @param cmdList the full command line, executable first
 * @throws IOException if the process cannot be started or its output read
 */
public static void exec(List<String> cmdList) throws IOException {
BufferedReader br = null;
try {
ProcessBuilder builder = new ProcessBuilder();
builder.command(cmdList);
// Merge stderr into stdout
builder.redirectErrorStream(true);
// Start the command
Process process = builder.start();
StringBuilder sbf = new StringBuilder();
String line = null;
br = new BufferedReader(new InputStreamReader(process.getInputStream()));
while ((line = br.readLine()) != null) {
sbf.append(line);
sbf.append(" ");
}
String resultInfo = sbf.toString();
System.out.println(resultInfo);
} finally {
if (br != null) {
br.close();
}
}
}
public static void main(String[] args) throws IOException {
String ffmpegExePath = "C:\\ffmpeg\\bin\\ffmpeg.exe";
String inputFilePath = "D:\\video\\ts\\25-16_940.ts";
String outputFilePath = "D:\\video\\ts\\t3.jpg";
List<String> command = new ArrayList<String>();
command.add(ffmpegExePath);
command.add("-i");
command.add(inputFilePath);
command.add("-f");
command.add("image2");
command.add("-ss");
command.add("1");
command.add("-t");
command.add("0.001");
command.add("-s");
command.add("640x480"); // ffmpeg expects the size as WIDTHxHEIGHT
command.add(outputFilePath);
exec(command);
}
}
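The argument list assembled in main() can be factored into a small builder that is testable without running ffmpeg. A sketch under that assumption (the helper name is illustrative; note ffmpeg's `-s` takes WIDTHxHEIGHT):

```java
import java.util.ArrayList;
import java.util.List;

// Build the ffmpeg frame-grab command line as a list of arguments.
public class FFmpegCommandSketch {
    static List<String> frameGrabCommand(String ffmpegExe, String input, String output, String size) {
        List<String> cmd = new ArrayList<>();
        cmd.add(ffmpegExe);
        cmd.add("-i");
        cmd.add(input);
        cmd.add("-f");
        cmd.add("image2");  // force the image2 muxer
        cmd.add("-ss");
        cmd.add("1");       // seek to 1 second
        cmd.add("-t");
        cmd.add("0.001");   // capture a very short span (one frame)
        cmd.add("-s");
        cmd.add(size);      // e.g. "640x480"
        cmd.add(output);
        return cmd;
    }

    public static void main(String[] args) {
        System.out.println(frameGrabCommand("ffmpeg", "in.ts", "out.jpg", "640x480"));
    }
}
```

The list form matters: ProcessBuilder passes each element as a separate argument, so no shell quoting is needed even for paths with spaces.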

View File

@ -0,0 +1,9 @@
/**
* @Title: pancm_project
 * @Description: FFmpeg utility classes
* @Version:1.0.0
* @Since:jdk1.8
* @author pancm
* @date 2021/1/26
*/
package com.pancm.ffmpeg;

View File

@ -0,0 +1,96 @@
package com.pancm.ftp;
import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;
import org.apache.commons.net.ftp.FTPReply;
import java.io.IOException;
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
* @author pancm
* @Title: leakproof-server
 * @Description: FTP helper class
* @Version:1.0.0
* @Since:jdk1.8
* @date 2021/8/18
*/
public class FtpHelper {
/**
 * List the files in a directory and return each file's attributes
 *
 * @param ip   FTP server address
 * @param port FTP server port
 * @param user user name
 * @param pwd  password
 * @param url  directory path to list
 * @return one map per file with fileName, fileSize and fileTime
 */
public static List<Map<String, String>> getListFiles(String ip, int port, String user, String pwd, String url) throws IOException {
List<Map<String, String>> mapList = new ArrayList<>();
FTPClient ftpClient = new FTPClient();
try {
    ftpClient.connect(ip, port);
    ftpClient.login(user, pwd);
    FTPFile[] ftpFiles = ftpClient.listFiles(url);
    if (ftpFiles != null && ftpFiles.length > 0) {
        for (FTPFile ftpFile : ftpFiles) {
            Map<String, String> map = new HashMap<>();
            map.put("fileName", ftpFile.getName());
            map.put("fileSize", getSize(ftpFile.getSize()));
            map.put("fileTime", ftpFile.getTimestamp().getTime().toString());
            mapList.add(map);
        }
    }
} finally {
    // Always release the connection
    if (ftpClient.isConnected()) {
        ftpClient.disconnect();
    }
}
return mapList;
}
private static boolean testFtp(String ip, int port, String user, String pwd) throws IOException {
    FTPClient ftpClient = new FTPClient();
    try {
        ftpClient.connect(ip, port); // connect to the FTP server
        ftpClient.login(user, pwd);  // log in
        return FTPReply.isPositiveCompletion(ftpClient.getReplyCode());
    } finally {
        if (ftpClient.isConnected()) {
            ftpClient.disconnect();
        }
    }
}
public static String getSize(long size) {
// e.g. a size of 1705230 bytes formats as megabytes
long GB = 1024 * 1024 * 1024; // bytes per GB
long MB = 1024 * 1024; // bytes per MB
long KB = 1024; // bytes per KB
DecimalFormat df = new DecimalFormat("0.00"); // two decimal places
String resultSize = "";
if (size / GB >= 1) {
    // at least 1 GB
    resultSize = df.format(size / (float) GB) + "GB";
} else if (size / MB >= 1) {
    // at least 1 MB
    resultSize = df.format(size / (float) MB) + "MB";
} else if (size / KB >= 1) {
    // at least 1 KB
    resultSize = df.format(size / (float) KB) + "KB";
} else {
resultSize = size + "B";
}
return resultSize;
}
public static void main(String[] args) throws Exception {
String ip = "192.168.10.90";
int port = 21;
String user = "root";
String pwd = "lgwy@2020";
String url = "/home/userfile/admin";
System.out.println(testFtp(ip,port,user,pwd));
System.out.println(getListFiles(ip,port,user,pwd,url));
}
}
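The getSize() conversion can be reproduced as a small locale-independent helper and checked against the sample value mentioned in its comment (the class and method names here are illustrative; String.format with an explicit Locale avoids DecimalFormat's locale-dependent separator):

```java
import java.util.Locale;

// Human-readable file size, mirroring the getSize() logic in FtpHelper.
public class SizeSketch {
    static String format(long size) {
        long GB = 1024L * 1024 * 1024;
        long MB = 1024L * 1024;
        long KB = 1024L;
        if (size >= GB) return String.format(Locale.US, "%.2fGB", size / (double) GB);
        if (size >= MB) return String.format(Locale.US, "%.2fMB", size / (double) MB);
        if (size >= KB) return String.format(Locale.US, "%.2fKB", size / (double) KB);
        return size + "B";
    }

    public static void main(String[] args) {
        // the sample size from the comment above: 1705230 bytes is about 1.63 MB
        System.out.println(format(1705230L));
    }
}
```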

View File

@ -0,0 +1,88 @@
//package com.pancm.ftp;
//
//import lombok.extern.slf4j.Slf4j;
//import net.schmizz.sshj.SSHClient;
//import net.schmizz.sshj.sftp.SFTPClient;
//import net.schmizz.sshj.transport.verification.PromiscuousVerifier;
//
//import java.io.IOException;
//
//
//@Slf4j
//public final class SmartSshUtils {
//
// public static boolean testSFTP(String hostName,
// String username,
// String password){
// SSHClient ssh = new SSHClient();
// SFTPClient sftpClient = null;
// try {
// //ssh.loadKnownHosts(); to skip host verification
// ssh.addHostKeyVerifier(new PromiscuousVerifier());
// ssh.connect(hostName);
// ssh.authPassword(username, password);
// sftpClient = ssh.newSFTPClient();
// return true;
// }catch (IOException e) {
// e.printStackTrace();
// }
//
// return false;
// }
//
//
// public static void downLoadFileBySsh(String hostName,
// String username,
// String password,
// String srcFilePath,
// String targetFilePath
// ) {
// SSHClient ssh = new SSHClient();
// SFTPClient sftpClient = null;
// try {
// //ssh.loadKnownHosts(); to skip host verification
// ssh.addHostKeyVerifier(new PromiscuousVerifier());
// ssh.connect(hostName);
// ssh.authPassword(username, password);
// sftpClient = ssh.newSFTPClient();
// sftpClient.get(srcFilePath, targetFilePath);
// //create a folder
//// sftpClient.mkdir("/opt/app/testFolder");
// //sftpClient.mkdirs(""); // create nested directories
// //sftpClient.rmdir(""); // remove a directory
// //sftpClient.ls(""); // list the current directory
// } catch (IOException e) {
// log.error(e.getMessage(), e);
// } finally {
// if (null != sftpClient) {
// try {
// sftpClient.close();
// } catch (IOException e) {
// log.error(e.getMessage(), e);
// }
// }
// try {
// ssh.disconnect();
// } catch (IOException e) {
// log.error(e.getMessage(), e);
// }
// }
// }
//
// /**
// * A static utility class should disable its public constructor
// */
// private SmartSshUtils(){}
//
//
// public static void main(String[] args) {
// String hostName="192.168.9.80";
// String username="root";
// String password="Admin#12$34!";
// String srcFilePath="/home/release/file";
// String targetFilePath="D:\\d1";
//
// SmartSshUtils.downLoadFileBySsh(hostName,username,password,srcFilePath,targetFilePath);
//
// }
//}

View File

@ -0,0 +1,9 @@
/**
* @Title: pancm_project
 * @Description: FTP utility classes
* @Version:1.0.0
* @Since:jdk1.8
* @author pancm
* @date 2021/1/26
*/
package com.pancm.ftp;

View File

@ -25,7 +25,6 @@ public class RedisTest {
System.out.println("Connection successful");
// Check whether the server is running
System.out.println("Server is running: " + jedis.ping());
// Store values in a list
jedis.lpush("list", "redis");
jedis.lpush("list", "java");
@ -35,7 +34,6 @@ public class RedisTest {
for (int i = 0, j = list.size(); i < j; i++) {
System.out.println("list output: " + list.get(i));
}
// Set a Redis string value
jedis.set("rst", "redisStringTest");
// Get the stored value and print it