c3p0: failing to get a connection

Recently someone else's project kept erroring out because it could not get a connection from the pool, so out of curiosity I poked at it as well. The stack is c3p0 + Spring + iBATIS, with transaction management configured too. The configuration looks like this:

<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"
    destroy-method="close">
    <property name="driverClass" value="${jdbc.driverClassName}" />
    <property name="jdbcUrl" value="${jdbc.url}" />
    <property name="user" value="${jdbc.username}" />
    <property name="password" value="${jdbc.password}" />
    <property name="minPoolSize" value="${datasource.minPoolSize}" />
    <property name="maxPoolSize" value="${datasource.maxPoolSize}" />
    <property name="initialPoolSize" value="${datasource.initialPoolSize}" />
    <property name="maxIdleTime" value="${datasource.maxIdleTime}" />
    <property name="maxStatements" value="${datasource.maxStatements}" />
    <property name="idleConnectionTestPeriod" value="${datasource.idleConnectionTestPeriod}" />
    <property name="acquireIncrement" value="${datasource.acquireIncrement}" />
    <property name="acquireRetryAttempts" value="${datasource.acquireRetryAttempts}" />
    <property name="breakAfterAcquireFailure" value="${datasource.breakAfterAcquireFailure}" />
    <property name="checkoutTimeout" value="${datasource.checkoutTimeout}" />
    <property name="numHelperThreads" value="${datasource.numHelperThreads}" />
</bean>

<!-- transaction manager -->
<bean id="transactionManager"
    class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource" />
</bean>

<!-- transaction interceptor -->
<bean id="transactionInterceptor"
    class="org.springframework.transaction.interceptor.TransactionInterceptor">
    <property name="transactionManager" ref="transactionManager" />
    <property name="transactionAttributes">
        <!-- transaction propagation attributes: each key matches service method names -->
        <props>
            <prop key="insert*">PROPAGATION_REQUIRED</prop>
            <prop key="update*">PROPAGATION_REQUIRED</prop>
            <prop key="delete*">PROPAGATION_REQUIRED</prop>
            <prop key="select*">PROPAGATION_REQUIRED,readOnly</prop>
            <prop key="get*">PROPAGATION_REQUIRED,readOnly</prop>
            <prop key="query*">PROPAGATION_REQUIRED,readOnly</prop>
            <prop key="pageQuery*">PROPAGATION_REQUIRED,readOnly</prop>
            <prop key="*">PROPAGATION_REQUIRED</prop>
        </props>
    </property>
</bean>

<!-- BeanName auto proxy to define the interceptor -->
<bean
    class="org.springframework.aop.framework.autoproxy.BeanNameAutoProxyCreator">
    <property name="beanNames">
        <list>
            <!-- services that need transaction management -->
            <value>*Service</value>
            <value>*ServiceImpl</value>
        </list>
    </property>
    <property name="interceptorNames">
        <list>
            <value>transactionInterceptor</value>
        </list>
    </property>
</bean>

The configuration above is a bit long, so let me pull out a few of the more important settings:

datasource.minPoolSize=30
datasource.maxPoolSize=80
datasource.initialPoolSize=40
datasource.maxIdleTime=60
datasource.acquireIncrement=5

datasource.idleConnectionTestPeriod=60
datasource.maxStatements=0
datasource.acquireRetryAttempts=30
datasource.breakAfterAcquireFailure=false
datasource.testConnectionOnCheckout=false
datasource.checkoutTimeout=20000
datasource.numHelperThreads=10

The settings are all fairly conventional. Raising the helper thread count (numHelperThreads) from the default 3 to 10 is a reasonable choice, given that minPoolSize is 30, the pool is built with 40 connections at startup, and each expansion acquires 5 connections at a time.
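For reference, the same key settings could be applied programmatically instead of through the Spring XML above. Here is a minimal sketch, where the driver class, JDBC URL and credentials are placeholders and the remaining values mirror the properties listed above:

    import com.mchange.v2.c3p0.ComboPooledDataSource;

    public class PoolConfig {
        public static ComboPooledDataSource buildDataSource() throws Exception {
            ComboPooledDataSource ds = new ComboPooledDataSource();
            ds.setDriverClass("com.mysql.jdbc.Driver");         // placeholder driver
            ds.setJdbcUrl("jdbc:mysql://localhost:3306/test");  // placeholder URL
            ds.setUser("user");                                  // placeholder credentials
            ds.setPassword("password");

            ds.setMinPoolSize(30);
            ds.setMaxPoolSize(80);
            ds.setInitialPoolSize(40);
            ds.setMaxIdleTime(60);                // seconds
            ds.setAcquireIncrement(5);
            ds.setIdleConnectionTestPeriod(60);   // seconds
            ds.setMaxStatements(0);
            ds.setAcquireRetryAttempts(30);
            ds.setBreakAfterAcquireFailure(false);
            ds.setTestConnectionOnCheckout(false);
            ds.setCheckoutTimeout(20000);         // milliseconds
            ds.setNumHelperThreads(10);
            return ds;
        }
    }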

I took this opportunity to read through the c3p0 source again, and I wrote a very simple test. The configuration was exactly the same, except the mapper layer used MyBatis. The test only exercised a get method, and we assume all business logic goes through the normal transaction interceptor, i.e. every connection is handed back to the pool once the caller is done with it.
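To make the test concrete, here is a minimal sketch of the kind of load driver I mean. The PoolConfig helper is the hypothetical programmatic configuration sketched above; the real test went through a transaction-proxied get method on a MyBatis-backed service, while this sketch checks connections out of the DataSource directly, which exercises the same c3p0 checkout path. The two steps below simply vary the thread count:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.atomic.AtomicInteger;
    import javax.sql.DataSource;

    public class CheckoutLoadTest {
        public static void main(String[] args) throws Exception {
            final DataSource ds = PoolConfig.buildDataSource();  // pool configured as above
            final int threads = 1000;                            // 100 passes easily; 1000 reproduces the timeouts
            final CountDownLatch start = new CountDownLatch(1);
            final CountDownLatch done = new CountDownLatch(threads);
            final AtomicInteger failures = new AtomicInteger();

            for (int i = 0; i < threads; i++) {
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            start.await();                        // release every thread at once
                            Connection c = ds.getConnection();    // blocks for up to checkoutTimeout (20s)
                            try {
                                Thread.sleep(50);                 // stand-in for a short read-only call
                            } finally {
                                c.close();                        // close() just returns it to the pool
                            }
                        } catch (SQLException e) {
                            failures.incrementAndGet();           // checkout timed out
                        } catch (InterruptedException ignored) {
                        } finally {
                            done.countDown();
                        }
                    }
                }).start();
            }
            start.countDown();
            done.await();
            System.out.println("threads that failed to get a connection: " + failures.get());
        }
    }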

  1. First, 100 threads concurrently called the service methods backed by c3p0 connections: no pressure at all. Clearly 100 was child's play.
  2. So I turned it up and pushed the thread count to 1000. Now some threads (only a handful) failed to obtain a connection. The method inside c3p0 that checks a connection out of the pool is:
     1 private synchronized Object prelimCheckoutResource( long timeout )
     2     throws TimeoutException, ResourcePoolException, InterruptedException
     3     {
     4         try
     5         {
     6             ensureNotBroken();
     7 
     8             int available = unused.size();
     9             if (available == 0)
    10             {
    11                 int msz = managed.size();
    12 
    13                 if (msz < max)
    14                 {
    15                     // to cover all the load, we need the current size, plus those waiting already for acquisition, 
    16                     // plus the current client 
    17                     int desired_target = msz + acquireWaiters.size() + 1;
    18 
    19                     if (logger.isLoggable(MLevel.FINER))
    20                         logger.log(MLevel.FINER, "acquire test -- pool size: " + msz + "; target_pool_size: " + target_pool_size + "; desired target? " + desired_target);
    21 
    22                     if (desired_target >= target_pool_size)
    23                     {
    24                         //make sure we don't grab less than inc Connections at a time, if we can help it.
    25                         desired_target = Math.max(desired_target, target_pool_size + inc);
    26 
    27                         //make sure our target is within its bounds
    28                         target_pool_size = Math.max( Math.min( max, desired_target ), min );
    29 
    30                         _recheckResizePool();
    31                     }
    32                 }
    33                 else
    34                 {
    35                     if (logger.isLoggable(MLevel.FINER))
    36                         logger.log(MLevel.FINER, "acquire test -- pool is already maxed out. [managed: " + msz + "; max: " + max + "]");
    37                 }
    38 
    39                 awaitAvailable(timeout); //throws timeout exception
    40             }

    Execution falls straight through to line 39 and waits. It is not hard to see that once available == 0, the caller has to wait regardless of whether _recheckResizePool() is invoked. The wait is bounded by the configured checkoutTimeout = 20000 ms, and even after that long a wait no connection became available. The usual understanding is that c3p0 does grow and shrink the pool on demand, right? Reading on, you can see the growth logic: the checkout path recomputes target_pool_size and then calls _recheckResizePool(), which decides whether to actually expand:

    int desired_target = msz + acquireWaiters.size() + 1;

    if (logger.isLoggable(MLevel.FINER))
        logger.log(MLevel.FINER, "acquire test -- pool size: " + msz + "; target_pool_size: " + target_pool_size + "; desired target? " + desired_target);

    if (desired_target >= target_pool_size)
    {
        //make sure we don't grab less than inc Connections at a time, if we can help it.
        desired_target = Math.max(desired_target, target_pool_size + inc);

        //make sure our target is within its bounds
        target_pool_size = Math.max( Math.min( max, desired_target ), min );

        _recheckResizePool();
    }

    When there are enough waiters, target_pool_size = Math.max( Math.min( max, desired_target ), min ) simply clamps target_pool_size up to max.

    if ((shrink_count = msz - pending_removes - target_pool_size) > 0)
        shrinkPool( shrink_count );
    else if ((expand_count = target_pool_size - (msz + pending_acquires)) > 0)
        expandPool( expand_count );

    Here target_pool_size is the size the pool is currently aiming for, and this is where it gets confusing: msz is the current pool size.

    pending_acquires is the number of acquisition tasks already scheduled, and it can be large. Look at the expansion condition expand_count = target_pool_size - (msz + pending_acquires) > 0: with this configuration it works out to something like expand_count = 80 - (40 + 40) <= 0. So when a sudden burst pushes the number of waiting business threads far past the usual peak, the pool is never expanded, and that is exactly why connections could not be obtained. Only after some waiters time out, or some connections are finished with and returned to the pool, will a newly arriving request satisfy the condition and schedule tasks to add resources. How those tasks run is conventional: creating one new connection occupies one helper thread, so with the 10 threads configured above only 10 connections can (in theory) be created at the same moment; still, each task's run method is very short, so a fair rate of creation is achievable.
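    To make that arithmetic concrete, here is the calculation from the two snippets above, plugged with the illustrative numbers used in this discussion (plain Java mirroring the c3p0 expressions and variable names; this is not c3p0's actual code, and the starting target of 40 is an assumption):

        public class ResizeMath {
            public static void main(String[] args) {
                int min = 30, max = 80, inc = 5;   // minPoolSize, maxPoolSize, acquireIncrement
                int msz = 40;                      // current pool size (initialPoolSize)
                int waiters = 40;                  // threads already blocked in checkout
                int pending_acquires = 40;         // acquisition tasks already scheduled
                int target_pool_size = 40;         // assumed current target

                // checkout path: compute the desired size and clamp it into [min, max]
                int desired_target = msz + waiters + 1;                                 // 81
                if (desired_target >= target_pool_size) {
                    desired_target = Math.max(desired_target, target_pool_size + inc);  // still 81
                    target_pool_size = Math.max(Math.min(max, desired_target), min);    // clamped to max = 80
                }

                // _recheckResizePool(): the pool only grows if this is positive
                int expand_count = target_pool_size - (msz + pending_acquires);         // 80 - (40 + 40) = 0
                System.out.println("target_pool_size = " + target_pool_size);           // 80
                System.out.println("expand_count = " + expand_count
                        + (expand_count > 0 ? "" : "  -> expandPool() is not called"));
            }
        }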

    In the resource-creation path there is also a contended lock, synchronized ( ThreadPoolAsynchronousRunner.this ), so you could call it the bottleneck for creating connections concurrently. And once the pool has reached its maximum size, c3p0's expansion does nothing further; from that point the only options are to reuse connections as they are returned, or to recreate a connection when an existing one goes bad.
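    As a rough illustration of that ceiling, here is a simplified model (not c3p0's actual ThreadPoolAsynchronousRunner code) of connection-creation tasks being drained by a fixed set of helper threads: however many new connections a burst asks for, at most numHelperThreads of them are being built at any instant:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.atomic.AtomicInteger;

        public class HelperThreadModel {
            public static void main(String[] args) throws Exception {
                final int numHelperThreads = 10;      // as configured above
                final int newConnectionsWanted = 40;  // a burst asking for 40 new connections
                final AtomicInteger inFlight = new AtomicInteger();
                final AtomicInteger peak = new AtomicInteger();

                ExecutorService helpers = Executors.newFixedThreadPool(numHelperThreads);
                for (int i = 0; i < newConnectionsWanted; i++) {
                    helpers.submit(new Runnable() {
                        public void run() {
                            int now = inFlight.incrementAndGet();
                            int prev;
                            while ((prev = peak.get()) < now && !peak.compareAndSet(prev, now)) {
                                // retry until the observed peak is recorded
                            }
                            try {
                                Thread.sleep(100);    // stand-in for the JDBC connect handshake
                            } catch (InterruptedException ignored) {
                            } finally {
                                inFlight.decrementAndGet();
                            }
                        }
                    });
                }
                helpers.shutdown();
                helpers.awaitTermination(1, TimeUnit.MINUTES);
                // prints at most 10: only numHelperThreads connections can be built at the same time
                System.out.println("max concurrent connection creations: " + peak.get());
            }
        }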

Original article: https://www.cnblogs.com/leeying/p/3817989.html