Notes on a job to soft-delete dirty data

While handling a case, I found that a lot of rows in the production database were bad, so I decided to write a job to soft-delete them. The two SQL statements involved are:

<select id="loadTSKTVBillDailyFlowData" parameterClass="map" resultClass="tsKTVDailyFlowData">
    /*+zebra:w*/SELECT ID,
    DistributionDetailID,
    PayPlanID,
    FlowDirection
    FROM TS_KTVBillDailyFlow WHERE FlowDirection != -1
    GROUP BY DistributionDetailID
    HAVING COUNT(DistributionDetailID)>1 LIMIT #pageSize#;
</select>
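As an illustration only (the class and method names below are hypothetical, not part of the job), the duplicate-finding logic of the SELECT above corresponds to counting occurrences per DistributionDetailID and keeping the IDs seen more than once:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of what the SELECT computes: the DistributionDetailIDs that appear
// more than once among rows not yet soft-deleted (FlowDirection != -1).
public class DuplicateIdDemo {
    static List<String> duplicateIds(List<String> detailIds) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String id : detailIds) {
            counts.merge(id, 1, Integer::sum);   // GROUP BY DistributionDetailID
        }
        List<String> duplicates = new ArrayList<>();
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            if (e.getValue() > 1) {              // HAVING COUNT(DistributionDetailID) > 1
                duplicates.add(e.getKey());
            }
        }
        return duplicates;
    }

    public static void main(String[] args) {
        List<String> ids = List.of("a", "b", "a", "c", "b", "a");
        System.out.println(duplicateIds(ids));   // [a, b]
    }
}
```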

<update id="updateTSKTVBillDailyFlowData" parameterClass="java.util.HashMap">
    UPDATE TS_KTVBillDailyFlow
    SET FlowDirection = -1
    WHERE
    <isNotEmpty property="distributionDetailIDList">
        DistributionDetailID IN
        <iterate property="distributionDetailIDList" open="(" close=")" conjunction=",">
            #distributionDetailIDList[]#
        </iterate>
        AND PayPlanID = 0
    </isNotEmpty>
</update>

The first statement selects the problematic rows (DistributionDetailIDs that appear more than once); the second soft-deletes them by setting FlowDirection to -1. Note that because the SELECT groups by DistributionDetailID while also selecting ungrouped columns, the values of ID, PayPlanID and FlowDirection are nondeterministic in MySQL; only DistributionDetailID is reliable in the result.

After implementing the program around these two statements and testing it in the PPE environment, the first SELECT turned out to be quite slow, averaging 3000-4000 ms per execution, with the GROUP BY accounting for most of the time.

After weighing the options, we decided to pull the full set of problematic rows from Hive (the first SQL), dump them into a txt file, and write a job that reads the file line by line, issuing the UPDATE in batches as it goes.

The core of the job:

public class CleanKTVBillDailyFlowBiz {

    private static final AvatarLogger logger = AvatarLoggerFactory.getLogger(CleanKTVBillDailyFlowBiz.class);

    private static final int BATCH_SIZE = 500;

    @Autowired
    private PayPlanBillDao payPlanBillDao;

    public void cleanData() {
        // The txt file is packaged at the root of the jar, so it must be
        // loaded from the classpath rather than opened as an ordinary file.
        InputStream is = this.getClass().getResourceAsStream("/DistributionDetailID.txt");
        // Generic form: CurrentClass.class.getResourceAsStream("XX.config");
        if (is == null) {
            logger.error("DistributionDetailID.txt not found on the classpath");
            return;
        }
        // try-with-resources closes the reader (and the underlying stream)
        try (BufferedReader br = new BufferedReader(new InputStreamReader(is))) {
            String line;
            List<String> distributionDetailIDList = new ArrayList<String>();
            while ((line = br.readLine()) != null) {
                distributionDetailIDList.add(line);
                if (distributionDetailIDList.size() >= BATCH_SIZE) {
                    int rows = payPlanBillDao.updateTSKTVBillDailyFlowData(distributionDetailIDList);
                    logger.info(String.format("expected to update %d rows, actually updated %d", distributionDetailIDList.size(), rows));
                    distributionDetailIDList.clear();
                }
            }
            // Flush the final batch of fewer than 500 IDs
            if (!distributionDetailIDList.isEmpty()) {
                int rows = payPlanBillDao.updateTSKTVBillDailyFlowData(distributionDetailIDList);
                logger.info(String.format("expected to update %d rows, actually updated %d", distributionDetailIDList.size(), rows));
            }
        } catch (Exception e) {
            logger.error("Clean data exception", e);
        }
    }
}
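The 500-per-batch flushing logic in the job can be exercised in isolation. This is a standalone sketch (class and method names are mine, not from the job) showing that IDs are flushed every BATCH_SIZE entries with a final flush for the remainder:

```java
import java.util.ArrayList;
import java.util.List;

// Standalone sketch of the batching loop: accumulate IDs, emit a batch
// every BATCH_SIZE entries, then emit the trailing partial batch.
public class BatchDemo {
    static final int BATCH_SIZE = 500;

    static List<List<String>> toBatches(List<String> ids) {
        List<List<String>> batches = new ArrayList<>();
        List<String> current = new ArrayList<>();
        for (String id : ids) {
            current.add(id);
            if (current.size() >= BATCH_SIZE) {
                batches.add(current);
                current = new ArrayList<>();
            }
        }
        if (!current.isEmpty()) {
            batches.add(current);   // the trailing batch of fewer than 500 IDs
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> ids = new ArrayList<>();
        for (int i = 0; i < 1234; i++) ids.add("id" + i);
        List<List<String>> batches = toBatches(ids);
        System.out.println(batches.size());        // 3
        System.out.println(batches.get(2).size()); // 234
    }
}
```

With 1234 IDs this yields two full batches of 500 plus a final batch of 234, matching what the job's loop would send to the DAO.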

DistributionDetailID.txt lives at the root of the resources folder, so after packaging it sits at the root of the jar. It cannot be read with the ordinary file-reading APIs; it must be loaded as a classpath resource instead.
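A minimal sketch of the distinction (run outside the jar here, so the lookup is expected to miss): `Class.getResourceAsStream` searches the classpath, returning null when the resource is absent, whereas a plain file path would only work if the file existed loose on disk.

```java
import java.io.InputStream;

// Demonstrates classpath-based lookup: inside the packaged jar,
// getResourceAsStream("/DistributionDetailID.txt") returns a stream over the
// bundled file; in this standalone run the resource is not on the classpath.
public class ResourceDemo {
    public static void main(String[] args) {
        InputStream is = ResourceDemo.class.getResourceAsStream("/DistributionDetailID.txt");
        System.out.println(is == null ? "not on classpath" : "found");
    }
}
```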

Reference: http://blog.csdn.net/b_h_l/article/details/7767829

Original post (Chinese): https://www.cnblogs.com/umgsai/p/5665043.html