Optimizing a MyBatis Batch Insert of 100,000 Rows

Reader contribution · 2022-10-28



When inserting a large amount of data with MyBatis, inserting row by row in a loop is slow, so we switch to a batch insert. The mapper:

package com.lcy.service.mapper;

import com.lcy.service.pojo.TestVO;
import org.apache.ibatis.annotations.Insert;

import java.util.List;

public interface TestMapper {

    // The original @Insert SQL was lost in publishing; a typical <foreach>
    // batch insert is reconstructed here (table and column names are assumed
    // from TestVO, not taken from the original post).
    @Insert("<script>" +
            "insert into test (t1, t2, t3, t4, t5) values " +
            "<foreach collection='list' item='item' separator=','>" +
            "(#{item.t1}, #{item.t2}, #{item.t3}, #{item.t4}, #{item.t5})" +
            "</foreach>" +
            "</script>")
    Integer testBatchInsert(List<TestVO> list);
}

The entity class:

package com.lcy.service.pojo;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@NoArgsConstructor
@AllArgsConstructor
public class TestVO {

    private String t1;
    private String t2;
    private String t3;
    private String t4;
    private String t5;
}

The test class:

import com.lcy.service.TestApplication;
import com.lcy.service.mapper.TestMapper;
import com.lcy.service.pojo.TestVO;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

import java.util.ArrayList;
import java.util.List;

@SpringBootTest(classes = TestApplication.class)
@RunWith(SpringRunner.class)
public class TestDemo {

    @Autowired
    private TestMapper testMapper;

    @Test
    public void insert() {
        List<TestVO> list = new ArrayList<>();
        for (int i = 0; i < 200000; i++) {
            list.add(new TestVO(i + "," + i, i + "," + i, i + "," + i, i + "," + i, i + "," + i));
        }
        System.out.println(testMapper.testBatchInsert(list));
    }
}

To reproduce the bug, I capped the JVM heap size. Running the test then fails with:

java.lang.OutOfMemoryError: Java heap space

 at java.base/java.util.Arrays.copyOf(Arrays.java:3746)

As the trace shows, the error is thrown while Arrays.copyOf allocates a larger array: MyBatis assembles the entire multi-row statement into a single in-memory string, and growing that buffer exhausts the Java heap (this is heap exhaustion, not a stack overflow).
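The Arrays.copyOf frames most likely come from the StringBuilder that doubles its backing array each time the growing SQL string fills it. A back-of-envelope sketch of the string size (the ~90 characters per value tuple is an assumption for illustration, not a measurement from the original post):

```java
public class SqlSizeEstimate {

    // Rough length, in characters, of the single generated INSERT statement.
    // charsPerRow is an assumed average for one "(v1,v2,v3,v4,v5)," tuple.
    public static long estimatedChars(int rows, int charsPerRow) {
        return (long) rows * charsPerRow;
    }

    public static void main(String[] args) {
        // 200,000 rows at an assumed ~90 chars each: 18 million chars, i.e.
        // roughly 36 MB as a Java String (2 bytes per char), before counting
        // the transient copies made while the buffer repeatedly doubles.
        System.out.println(estimatedChars(200_000, 90));
    }
}
```

Even on this rough estimate, a small heap cannot hold the statement plus the intermediate buffer copies, which matches the observed OutOfMemoryError.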

The fix is to split the list and insert it in batches:

import com.lcy.service.TestApplication;
import com.lcy.service.mapper.TestMapper;
import com.lcy.service.pojo.TestVO;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

@SpringBootTest(classes = TestApplication.class)
@RunWith(SpringRunner.class)
public class TestDemo {

    @Autowired
    private TestMapper testMapper;

    @Test
    public void insert() {
        List<TestVO> list = new ArrayList<>();
        for (int i = 0; i < 200000; i++) {
            list.add(new TestVO(i + "," + i, i + "," + i, i + "," + i, i + "," + i, i + "," + i));
        }
        int batchSize = 10000;
        // ceiling division, so a final partial batch is not silently dropped
        int batches = (list.size() + batchSize - 1) / batchSize;
        for (int i = 0; i < batches; i++) {
            // skip the first i*batchSize records, then take the next batchSize
            testMapper.testBatchInsert(list.stream()
                    .skip((long) i * batchSize)
                    .limit(batchSize)
                    .collect(Collectors.toList()));
        }
    }
}
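Note that skip/limit re-walks the stream from the beginning for every batch, so the loop above is O(n²) in list traversals. Since the source is a List, slicing with subList is cheaper and makes the remainder handling explicit. A sketch (Batches is a hypothetical helper, not from the original post):

```java
import java.util.ArrayList;
import java.util.List;

public class Batches {

    // Split a list into consecutive sub-lists of at most batchSize elements.
    // subList creates an O(1) view; each view is copied into its own list so
    // the chunk does not keep the full source list alive.
    public static <T> List<List<T>> partition(List<T> source, int batchSize) {
        List<List<T>> result = new ArrayList<>();
        for (int from = 0; from < source.size(); from += batchSize) {
            int to = Math.min(from + batchSize, source.size());
            result.add(new ArrayList<>(source.subList(from, to)));
        }
        return result;
    }
}
```

The insert loop then becomes `for (List<TestVO> batch : Batches.partition(list, 10000)) { testMapper.testBatchInsert(batch); }`, and any trailing partial batch is inserted automatically.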

Another option is simply to raise the JVM heap limit, but that is not recommended: it wastes memory, and with enough rows the single generated SQL statement becomes so long that the database rejects it anyway (for MySQL, once it exceeds the max_allowed_packet limit).

