Introduction
MESH-PLATFORM is a brand-new software development framework.
1: Frameworks used by the project
- Awssdk-s3:
Awssdk-s3 provides complete functionality for interacting with Amazon S3 (Simple Storage Service), as well as with most object storage services (OSS) on the market that implement the S3 protocol.
1: Upload modes
1: Local
Example configuration:
oss:
local:
enabled: true
base-path: D:\temps\demo
2: MinIO
- Note: when awssdk-s3 uses MinIO as the storage backend, a valid region must be set and path-style-access-enabled must be turned on.
Example configuration:
oss:
aws:
enabled: true
endpoint: http://ip:port
access-key: xxxxxxxxxxxxxxxxxx
secret-key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
region: aws-global
bucket-name: temp-bucket
extendConfig:
path-style-access-enabled: true
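The MinIO settings above map onto the AWS SDK for Java 2.x client builder. A minimal sketch, assuming the 2.x SDK is on the classpath; the endpoint and keys are placeholders, and the factory class itself is illustrative rather than part of this project:

```java
import java.net.URI;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.S3Configuration;

public class MinioClientFactory {

    // Builds an S3Client pointed at a MinIO endpoint (endpoint/keys are placeholders)
    public static S3Client build(String endpoint, String accessKey, String secretKey) {
        return S3Client.builder()
                .endpointOverride(URI.create(endpoint))
                // MinIO does not care which region is used; aws-global matches the config above
                .region(Region.AWS_GLOBAL)
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(accessKey, secretKey)))
                // MinIO buckets are addressed path-style (http://host/bucket/key),
                // which is why path-style access must be enabled
                .serviceConfiguration(S3Configuration.builder()
                        .pathStyleAccessEnabled(true)
                        .build())
                .build();
    }
}
```

Without path-style access the SDK uses virtual-hosted addressing (bucket.host), which usually fails against a single-host MinIO deployment.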
3: Alibaba Cloud OSS
- Note: when awssdk-s3 uses Alibaba Cloud OSS as the storage backend, a valid region must be set.
oss:
ali:
enabled: true
endpoint: http://oss-cn-beijing.aliyuncs.com
access-key: xxxxxxxxxxxxxxxxxx
secret-key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
region: aws-global
bucket-name: temp-bucket
2: Upload logic
1: Default configuration
- Multipart upload: the project uploads files in parts by default; the default part size, UploadExtendConst.DEFAULT_PART_SIZE, is 10MB.
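The arithmetic behind the default can be sketched as follows; the class and method names here are illustrative, not the project's actual helpers:

```java
public class PartMath {
    // Mirrors UploadExtendConst.DEFAULT_PART_SIZE (10MB)
    static final long DEFAULT_PART_SIZE = 10L * 1024 * 1024;

    // Number of parts needed for a file of the given size (ceiling division)
    static long partCount(long totalSize, long partSize) {
        return (totalSize + partSize - 1) / partSize;
    }

    // Size of the part at 1-based index partNum; only the last part may be smaller
    static long partLength(long totalSize, long partSize, long partNum) {
        long offset = (partNum - 1) * partSize;
        return Math.min(partSize, totalSize - offset);
    }

    public static void main(String[] args) {
        long fileSize = 25L * 1024 * 1024; // a hypothetical 25MB file
        System.out.println(partCount(fileSize, DEFAULT_PART_SIZE));     // 3
        System.out.println(partLength(fileSize, DEFAULT_PART_SIZE, 3)); // 5242880 (5MB)
    }
}
```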
2: Logic flow
Step 1: initiate the multipart upload
1: The client sends an initiation request.
2: S3 returns a unique Upload ID for this upload.
3: All subsequent operations must carry this Upload ID.
public String initiateMultipartUpload(String filename) {
    // Initiate the multipart upload task
    CreateMultipartUploadResponse createMultipartUploadResponse = s3Client.createMultipartUpload(builder -> builder
            .bucket(awsOssProperties.getBucketName())
            .key(filename));
    // Return the uploadId for this multipart upload
    return createMultipartUploadResponse.uploadId();
}
Step 2: upload the parts
1: Split the file into multiple parts (typically 5MB-5GB each; every part except the last must be at least 5MB).
2: Each part is uploaded independently (and can be uploaded in parallel).
3: Each part must carry a Part Number (an integer from 1 to 10,000).
4: The server returns an ETag for each uploaded part.
public String uploadMultipart(MultiPartBO multiPartBO, String uploadId) {
    try (InputStream in = multiPartBO.getFile().getInputStream()) {
        // The InputStream cannot be re-read once consumed by the upload,
        // so buffer it into a byte array first
        byte[] partBytes = IOUtils.toByteArray(in);
        // Build the part upload request
        UploadPartRequest uploadPartRequest = UploadPartRequest.builder()
                .bucket(awsOssProperties.getBucketName())
                .key(multiPartBO.getFilename())
                .uploadId(uploadId)
                .partNumber(multiPartBO.getPartNum())
                .build();
        UploadPartResponse partResponse = s3Client.uploadPart(
                uploadPartRequest,
                RequestBody.fromBytes(partBytes));
        // Return the ETag that identifies this uploaded part
        return partResponse.eTag();
    } catch (IOException e) {
        log.error("File [{}]: failed to upload part [{}]", multiPartBO.getFilename(), multiPartBO.getPartNum(), e);
        throw new RuntimeException(e);
    }
}
Step 3: complete the multipart upload
1: The client sends the list of all parts' Part Numbers and ETags.
2: After validation, S3 merges all parts in order.
3: S3 returns the ETag of the final merged object.
public void completeMultipartUpload(String filename) {
    // Fetch the cached upload-progress information for this file
    UploadProcess uploadProcess = UploadExtendConst.UPLOAD_PROCESS_STORAGE.get(filename);
    // Collect the Part Number and ETag of every uploaded part
    List<CompletedPart> partETagList = uploadProcess.getUploadPartList()
            .stream()
            .map(uploadPart -> CompletedPart.builder()
                    .partNumber(uploadPart.getPartNum())
                    .eTag(uploadPart.getUploadAddr())
                    .build())
            .collect(Collectors.toList());
    // Build the completion request and call the API to merge the parts
    s3Client.completeMultipartUpload(b -> b
            .bucket(awsOssProperties.getBucketName())
            .key(uploadProcess.getFilename())
            .uploadId(uploadProcess.getUploadId())
            .multipartUpload(CompletedMultipartUpload.builder().parts(partETagList).build()));
    // Clear the cached upload information
    UploadExtendConst.UPLOAD_PROCESS_STORAGE.remove(filename);
    // Delete the local temporary file
    FileUtil.del(uploadProcess.getTempPath());
}
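One failure-handling detail the flow above does not cover: if an upload is abandoned without being completed, S3 keeps (and bills for) the parts already uploaded until the upload is explicitly aborted. A hedged sketch of such a cleanup method, written in the same style as the methods above and assuming the same s3Client and awsOssProperties fields; it is not part of the project's shown code:

```java
// Sketch: abort an abandoned multipart upload so S3 discards its stored parts.
// filename and uploadId are assumed to come from the cached UploadProcess.
public void abortMultipartUpload(String filename, String uploadId) {
    s3Client.abortMultipartUpload(b -> b
            .bucket(awsOssProperties.getBucketName())
            .key(filename)
            .uploadId(uploadId));
}
```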
3: Chunking approaches
- Frontend chunking: 1: The frontend splits the file and sends each chunk's information to the backend. 2: It then calls the merge endpoint to merge the chunk files held in the storage bucket.
// A minimal example of frontend-style chunked upload
@Test
public void testUpload() throws Exception {
    String chunkFileFolder = "C:\\Users\\admin\\Desktop\\test\\";
    java.io.File file = new java.io.File("C:\\Users\\admin\\Desktop\\test.zip");
    long contentLength = file.length();
    // Chunk size: 10MB
    long partSize = 10 * 1024 * 1024;
    // Number of chunks; the last chunk may be smaller than 10MB
    long chunkFileNum = (long) Math.ceil(contentLength * 1.0 / partSize);
    RestTemplate restTemplate = new RestTemplate();
    String token = "xxxxxxxxxxxxxxxxx";
    try (RandomAccessFile raf_read = new RandomAccessFile(file, "r")) {
        // Read buffer
        byte[] b = new byte[1024];
        for (int i = 1; i <= chunkFileNum; i++) {
            // Chunk file
            java.io.File chunkFile = new java.io.File(chunkFileFolder + i);
            // Writer for the chunk file
            try (RandomAccessFile raf_write = new RandomAccessFile(chunkFile, "rw")) {
                int len;
                while ((len = raf_read.read(b)) != -1) {
                    raf_write.write(b, 0, len);
                    // Once this chunk reaches partSize, move on to the next chunk
                    // (the last chunk simply ends at end-of-file)
                    if (chunkFile.length() >= partSize) {
                        break;
                    }
                }
                // Upload this chunk
                MultiValueMap<String, Object> body = new LinkedMultiValueMap<>();
                body.add("file", new FileSystemResource(chunkFile));
                body.add("partNum", i);
                body.add("partSize", partSize);
                body.add("currentPartSize", chunkFile.length());
                body.add("totalSize", contentLength);
                body.add("filename", file.getName());
                body.add("totalParts", chunkFileNum);
                // Offset of this chunk within the original file
                body.add("offsetSize", (i - 1) * partSize);
                HttpHeaders headers = new HttpHeaders();
                headers.setContentType(MediaType.MULTIPART_FORM_DATA);
                headers.set("Authorization", "Bearer " + token);
                HttpEntity<MultiValueMap<String, Object>> requestEntity = new HttpEntity<>(body, headers);
                String serverUrl = "http://localhost:8080/upload/url";
                ResponseEntity<String> response = restTemplate.postForEntity(serverUrl, requestEntity, String.class);
                System.out.println("Response code: " + response.getStatusCode() + " Response body: " + response.getBody());
            } finally {
                FileUtil.del(chunkFile);
            }
        }
    }
    // Merge the uploaded chunks
    String mergeUrl = "http://localhost:8080/update/multi/part/complete?filename=" + file.getName();
    HttpHeaders headers = new HttpHeaders();
    headers.setContentType(MediaType.MULTIPART_FORM_DATA);
    headers.set("Authorization", "Bearer " + token);
    // Build the request entity carrying only headers
    HttpEntity<MultiValueMap<String, Object>> formEntity = new HttpEntity<>(headers);
    ResponseEntity<String> response = restTemplate.exchange(mergeUrl, HttpMethod.GET, formEntity, String.class);
    System.out.println("Response code: " + response.getStatusCode() + " Response body: " + response.getBody());
}
- Backend chunking: 1: The backend receives the complete file and splits it into parts. 2: The parts are uploaded asynchronously, and the file is merged once the upload finishes.
Upload the file
public DocFileBO uploadFileMultiPart(MultipartFile file) {
    String tempPath = awsOssProperties.getTempPath() + file.getOriginalFilename() + IdUtil.fastSimpleUUID();
    // Split the file into parts
    List<MultiPartBO> multiPartBOList = OssFileUtil.splitUploadFile(file, awsOssProperties.getSliceConfig().getPartSize(), tempPath);
    // Upload the parts concurrently on a thread pool
    ExecutorService executorService = Executors.newFixedThreadPool(awsOssProperties.getSliceConfig().getConnectionsNum());
    // Futures returned by the per-part upload tasks
    List<Future<UploadProcess>> futures = CollUtil.newArrayList();
    // Submit one upload task per part
    for (MultiPartBO partBO : multiPartBOList) {
        futures.add(executorService.submit(new UploadPartTask(this, partBO)));
    }
    // Stop accepting new tasks
    executorService.shutdown();
    // Wait for every part to finish; the callback handling can be customized
    for (Future<UploadProcess> future : futures) {
        try {
            UploadProcess partResult = future.get();
            String uploadId = partResult.getUploadId();
        } catch (Exception e) {
            throw new BaseException(e);
        }
    }
    UploadExtendConst.UPLOAD_PROCESS_STORAGE.get(file.getOriginalFilename()).setTempPath(tempPath);
    log.info("All parts uploaded!");
    // Custom validation can be added here; once every part is uploaded, merge them
    completeMultipartUpload(file.getOriginalFilename());
    // Build the return value
    DocFileBO docFileBO = OssFileUtil.MultipartFileToDocFile(file);
    docFileBO.setFileSource(OssTypeConst.AWS);
    docFileBO.setFileEndpoint(awsOssProperties.getEndPoint());
    docFileBO.setFileBucket(awsOssProperties.getBucketName());
    docFileBO.setFileAddr(StrUtil.SLASH
            .concat(docFileBO.getFileName()).concat(StrUtil.DOT).concat(docFileBO.getFileType()));
    return docFileBO;
}
Split the file
public static List<MultiPartBO> splitUploadFile(MultipartFile multipartFile, Long partSize, String tempPath) {
    List<MultiPartBO> files = CollUtil.newArrayList();
    // Temporary directory used to convert the MultipartFile to a File
    String tempFileFolder = tempPath + multipartFile.getOriginalFilename() + IdUtil.fastSimpleUUID();
    // Convert the MultipartFile to a File
    File tempFile = multipartFileToFile(tempFileFolder, multipartFile);
    // Size of the original file
    long totalSize = multipartFile.getSize();
    // Number of parts; the last part may be smaller than partSize
    long totalParts = (long) Math.ceil(totalSize * NumberConst.NUM_1.doubleValue() / partSize);
    // Split the cached file into parts
    List<File> fileList = splitFile(tempFile, tempPath);
    // Wrap each part in a BO object
    for (int i = NumberConst.NUM_0; i < fileList.size(); i++) {
        File file = fileList.get(i);
        MultiPartBO part = new MultiPartBO();
        part.setPartNum((i + NumberConst.NUM_1));
        part.setPartSize(file.length());
        part.setOffsetSize(NumberConst.NUM_0.longValue());
        part.setCurrentPartSize(file.length());
        part.setTotalSize(totalSize);
        part.setTotalParts(totalParts);
        part.setFilename(multipartFile.getOriginalFilename());
        MultipartFile toMultipartFile = fileToMultipartFile(file);
        part.setFile(toMultipartFile);
        files.add(part);
    }
    // Delete the local temporary file
    FileUtil.del(tempFileFolder);
    // Return the parts
    return files;
}
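The splitFile helper referenced above is not shown in this document. A self-contained sketch of what such a splitter might look like; the class name, signature, and part-file naming are illustrative assumptions, not the project's actual implementation:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class FileSplitter {

    // Splits sourceFile into sequentially numbered part files inside targetDir;
    // every part is exactly partSize bytes except possibly the last one.
    public static List<File> splitFile(File sourceFile, File targetDir, long partSize) throws IOException {
        List<File> parts = new ArrayList<>();
        byte[] buffer = new byte[8192];
        try (FileInputStream in = new FileInputStream(sourceFile)) {
            int partNum = 1;
            int len = in.read(buffer);
            while (len != -1) {
                File part = new File(targetDir, sourceFile.getName() + ".part" + partNum++);
                long written = 0;
                try (FileOutputStream out = new FileOutputStream(part)) {
                    while (len != -1 && written < partSize) {
                        // Never write past the part boundary
                        int toWrite = (int) Math.min(len, partSize - written);
                        out.write(buffer, 0, toWrite);
                        written += toWrite;
                        if (toWrite < len) {
                            // Carry the unwritten tail of the buffer into the next part
                            System.arraycopy(buffer, toWrite, buffer, 0, len - toWrite);
                            len -= toWrite;
                        } else {
                            len = in.read(buffer);
                        }
                    }
                }
                parts.add(part);
            }
        }
        return parts;
    }
}
```

Capping each write at the part boundary keeps part sizes exact, which matters because part.setPartSize(file.length()) above reports the on-disk size of each part to the upload request.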