POJ 1180

Problem link: http://poj.org/problem?id=1180

Description

There is a sequence of N jobs to be processed on one machine. The jobs are numbered from 1 to N, so that the sequence is 1,2,..., N. The sequence of jobs must be partitioned into one or more batches, where each batch consists of consecutive jobs in the sequence. The processing starts at time 0. The batches are handled one by one starting from the first batch as follows. If a batch b contains jobs with smaller numbers than batch c, then batch b is handled before batch c. The jobs in a batch are processed successively on the machine. Immediately after all the jobs in a batch are processed, the machine outputs the results of all the jobs in that batch. The output time of a job j is the time when the batch containing j finishes. 

A setup time S is needed to set up the machine for each batch. For each job i, we know its cost factor Fi and the time Ti required to process it. If a batch contains the jobs x, x+1,... , x+k, and starts at time t, then the output time of every job in that batch is t + S + (Tx + Tx+1 + ... + Tx+k). Note that the machine outputs the results of all jobs in a batch at the same time. If the output time of job i is Oi, its cost is Oi * Fi. For example, assume that there are 5 jobs, the setup time S = 1, (T1, T2, T3, T4, T5) = (1, 3, 4, 2, 1), and (F1, F2, F3, F4, F5) = (3, 2, 3, 3, 4). If the jobs are partitioned into three batches {1, 2}, {3}, {4, 5}, then the output times (O1, O2, O3, O4, O5) = (5, 5, 10, 14, 14) and the costs of the jobs are (15, 10, 30, 42, 56), respectively. The total cost for a partitioning is the sum of the costs of all jobs. The total cost for the example partitioning above is 153. 

You are to write a program which, given the batch setup time and a sequence of jobs with their processing times and cost factors, computes the minimum possible total cost. 

Input

Your program reads from standard input. The first line contains the number of jobs N, 1 <= N <= 10000. The second line contains the batch setup time S which is an integer, 0 <= S <= 50. The following N lines contain information about the jobs 1, 2,..., N in that order as follows. First on each of these lines is an integer Ti, 1 <= Ti <= 100, the processing time of the job. Following that, there is an integer Fi, 1 <= Fi <= 100, the cost factor of the job.

Output

Your program writes to standard output. The output contains one line, which contains one integer: the minimum possible total cost.

Sample Input

5
1
1 3
3 2
4 3
2 3
1 4

Sample Output

153

Problem summary:

There are N jobs arranged in a sequence, numbered 1, 2, 3, …, N.

These jobs are to be partitioned into one or more batches ("one or more"), satisfying:

  1. The job numbers within each batch are consecutive, and the machine handles the batches in the order of the sequence
  2. Handling one batch takes: the setup time S + the sum of the processing times of the jobs in that batch
  3. For any job, its output time O[i] = the time t at which its batch starts being handled + S + the sum of the processing times of the jobs in that batch
  4. As soon as the machine finishes handling a batch, it outputs the results of all jobs in that batch simultaneously

For each job we know:

  1. The time T[i] it takes to process
  2. Its cost factor F[i] (the cost incurred by each job is O[i] * F[i])

The task is to find a partition into batches that minimizes the total cost, and to output that minimum cost.

Solution:

Let dp[i] denote the minimum total cost of processing jobs i through N, with job i starting a new batch;

Let $\mathrm{Tsum}[i] = \sum\limits_{k=i}^{N} T[k]$ and $\mathrm{Fsum}[i] = \sum\limits_{k=i}^{N} F[k]$ (suffix sums);

The state transition is: dp[i] = min{ dp[k] + ( S + Tsum[i] - Tsum[k] ) * Fsum[i] }, i < k ≤ N+1, where the first batch consists of jobs i..k-1 and dp[N+1] = 0.

In other words, the cost of executing a batch is taken to include not only the cost of the jobs inside that batch, but also the extra cost it causes by delaying all subsequent batches: its duration S + ( Tsum[i] - Tsum[k] ) is charged against Fsum[i], the total cost factor of every job from i onward. This forward charging is what lets dp[i] ignore the actual start time of the batch.
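As a quick sanity check of this accounting on the sample (S = 1), take the partitioning {1, 2}, {3}, {4, 5}. The suffix sums of the cost factors are Fsum = (15, 12, 10, 7, 4), and charging each batch's duration against all remaining jobs gives

$ (S + T_1 + T_2)\,\mathrm{Fsum}[1] + (S + T_3)\,\mathrm{Fsum}[3] + (S + T_4 + T_5)\,\mathrm{Fsum}[4] = 5 \cdot 15 + 5 \cdot 10 + 4 \cdot 7 = 153 $

which matches the total cost of 153 computed job by job in the problem statement.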

Now, when computing dp[i], consider two candidate choices a, b for k (i < a < b ≤ N+1). If:

dp[b] + ( S + Tsum[i] - Tsum[b] ) * Fsum[i] ≤ dp[a] + ( S + Tsum[i] - Tsum[a] ) * Fsum[i]

then candidate b is at least as good as candidate a;

Rearranging (note Tsum[a] - Tsum[b] > 0, since a < b):

( dp[a] - dp[b] ) / ( Tsum[a] - Tsum[b] ) ≥ Fsum[i]

Define g(a,b) = ( dp[a] - dp[b] ) / ( Tsum[a] - Tsum[b] ). Then:

b is at least as good as a <=> g(a,b) ≥ Fsum[i];

b is worse than a <=> g(a,b) < Fsum[i];

In addition, for three consecutive candidates a, b, c in queue order (indices decreasing, a > b > c, hence Tsum[a] < Tsum[b] < Tsum[c]): if g(a,b) ≥ g(b,c), then whatever Fsum[i] is, either a or c is at least as good as b, so b is necessarily eliminated.
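A side note on precision: the AC code below compares these slopes with floating-point division, which works here, but since dp, Tsum and Fsum are all integers the two comparisons can also be done by cross-multiplication. A sketch with hypothetical helpers popHead/popTail (the arguments a, b, i play the same roles as in the two queue loops of the AC code, and both denominators are arranged to be positive so no inequality flips; assumes the long long dp array used below):

// head pop test: is g(a,b) < Fsum[i]?  Here a = q[head+1], b = q[head],
// so a < b as indices and Tsum[a] - Tsum[b] > 0.
bool popHead(int a, int b, int i)
{
    return dp[a] - dp[b] < (long long)Fsum[i] * (Tsum[a] - Tsum[b]);
}

// tail pop test: is g(a,b) >= g(b,i)?  Here a = q[tail-1], b = q[tail-2],
// so i < a < b as indices; g(b,i) is rewritten as
// (dp[i]-dp[b]) / (Tsum[i]-Tsum[b]) to make its denominator positive too.
bool popTail(int a, int b, int i)
{
    return (dp[a] - dp[b]) * (Tsum[i] - Tsum[b])
        >= (dp[i] - dp[b]) * (Tsum[a] - Tsum[b]);
}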

From here the slope DP optimization applies: since every F[i] > 0, Fsum[i] strictly increases as i goes from N down to 1, so the queries are monotone and a monotone queue suffices: pop from the head while the next candidate is at least as good, and pop dominated candidates from the tail before inserting i. Each index enters and leaves the queue at most once, so the DP drops from O(N²) to O(N). (For details of the technique, see the earlier posts HDU3507, HDU2993, HDU2829.)
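For reference, a plain O(N²) implementation of the recurrence, without the queue, might look like the sketch below. It follows the exact transition above; it is too slow for N = 10000 but is handy for cross-checking the optimized version:

#include<cstdio>
#include<climits>
const int maxn=10000+5;

int N,S,T[maxn],F[maxn];
long long Tsum[maxn],Fsum[maxn],dp[maxn];

int main()
{
    scanf("%d%d",&N,&S);
    for(int i=1;i<=N;i++) scanf("%d%d",&T[i],&F[i]);
    for(int i=N;i>=1;i--)                      // suffix sums
        Tsum[i]=Tsum[i+1]+T[i], Fsum[i]=Fsum[i+1]+F[i];

    dp[N+1]=0;
    for(int i=N;i>=1;i--)                      // dp[i]: min cost of jobs i..N
    {
        dp[i]=LLONG_MAX;
        for(int k=i+1;k<=N+1;k++)              // first batch is jobs i..k-1
        {
            long long cand=dp[k]+(S+Tsum[i]-Tsum[k])*Fsum[i];
            if(cand<dp[i]) dp[i]=cand;
        }
    }
    printf("%lld\n",dp[1]);
    return 0;
}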

AC code:

#include<iostream>
#include<cstdio>
using namespace std;
const int maxn=10000+5;

int N,S;
int T[maxn],F[maxn];
int Tsum[maxn],Fsum[maxn];      // suffix sums: Tsum[i]=T[i]+...+T[N], Fsum[i]=F[i]+...+F[N]
long long dp[maxn];             // dp[i]: min cost of jobs i..N (long long: the worst case can exceed 32 bits)
int q[maxn],head,tail;          // monotone queue of candidate k's, indices decreasing from head to tail

// slope between candidates a and b, as in the derivation above
double g(int a,int b)
{
    return double(dp[a]-dp[b])/double(Tsum[a]-Tsum[b]);
}

int main()
{
    scanf("%d%d",&N,&S);
    for(int i=1;i<=N;i++) scanf("%d%d",&T[i],&F[i]);

    Tsum[N+1]=Fsum[N+1]=0;
    for(int i=N;i>=1;i--) Tsum[i]=Tsum[i+1]+T[i], Fsum[i]=Fsum[i+1]+F[i];

    head=tail=0;
    q[tail++]=N+1;              // sentinel candidate: empty suffix, dp[N+1]=0
    dp[N+1]=0;
    for(int i=N,a,b;i>=1;i--)
    {
        // head: pop while the next candidate is at least as good for the
        // current query (valid because Fsum[i] grows as i decreases)
        while(head+1<tail)
        {
            b=q[head], a=q[head+1];
            if(g(a,b)<Fsum[i]) head++;
            else break;
        }
        int k=q[head];          // best k: the first batch is jobs i..k-1
        dp[i]=dp[k]+(S+Tsum[i]-Tsum[k])*(long long)Fsum[i];

        // tail: pop dominated candidates (convexity) before inserting i
        while(head+1<tail)
        {
            b=q[tail-2], a=q[tail-1];
            if(g(a,b)>=g(b,i)) tail--;
            else break;
        }
        q[tail++]=i;
    }

    printf("%lld\n",dp[1]);
    return 0;
}
Original post: https://www.cnblogs.com/dilthey/p/8877172.html