A Dehazing Algorithm Based on an RGB Color-Space Ellipsoid Model

On the effectiveness of the dark channel prior for single image dehazing by approximating with minimum volume ellipsoid

Kristofer B. Gibson, Truong Q. Nguyen

       For the haze imaging model $\mathbf{I}(x) = \mathbf{J}(x)\,t(x) + \mathbf{A}\,(1 - t(x))$, principal component analysis is used to analyze how the pixel values cluster into an ellipsoid.

       For a window $\Omega$ of a hazy image containing $N$ pixels in total, there is a corresponding set of haze-free image vectors $\mathbf{J}_1, \mathbf{J}_2, \ldots, \mathbf{J}_N$ located in the window (all of them column vectors).

       Let $\mathbf{J}$ be one of these $3 \times 1$ color vectors. After removing the mean, its $3 \times 3$ autocorrelation matrix is:

$$R_J = E\left[\tilde{\mathbf{J}} \tilde{\mathbf{J}}^{T}\right], \qquad \tilde{\mathbf{J}} = \mathbf{J} - E[\mathbf{J}]$$

The K-L (Karhunen-Loeve) transform then gives:

$$R_J = U \Lambda_J U^{T}$$

      The entries of the diagonal matrix $\Lambda_J = \mathrm{diag}(\lambda_1, \lambda_2, \lambda_3)$ are the eigenvalues of $R_J$ arranged in decreasing order, $\lambda_1 \ge \lambda_2 \ge \lambda_3$, where $\lambda_i$ are the eigenvalues of the autocorrelation matrix, and the $3 \times 3$ matrix $U$ of the corresponding eigenvectors is orthonormal.
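
To make this step concrete, here is a minimal C++/Eigen sketch (an illustration added for this write-up, not part of the original program; the function name klTransform and the random test data are assumptions). It stacks the pixels of a window as the columns of a 3xN matrix, forms the mean-removed autocorrelation matrix, and extracts its eigenvalues in decreasing order:

#include <Eigen/Dense>
#include <iostream>

// K-L transform of a window: R = (1/N) * Xc * Xc^T with Xc the mean-removed
// 3xN matrix of color vectors, followed by R = U * Lambda * U^T.
void klTransform(const Eigen::Matrix3Xd& X, Eigen::Vector3d& lambda, Eigen::Matrix3d& U)
{
    const double N = (double)X.cols();
    Eigen::Vector3d mu = X.rowwise().mean();
    Eigen::Matrix3Xd Xc = X.colwise() - mu;                // remove the mean
    Eigen::Matrix3d R = (Xc * Xc.transpose()) / N;         // autocorrelation matrix
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(R);  // R is symmetric PSD
    lambda = es.eigenvalues().reverse();                   // decreasing order
    U = es.eigenvectors().rowwise().reverse();             // matching eigenvectors
}

int main()
{
    // stand-in for a real window: 100 random color vectors in [0, 1]
    Eigen::Matrix3Xd X = ((Eigen::Matrix3Xd::Random(3, 100).array() + 1.0) * 0.5).matrix();
    Eigen::Vector3d lambda; Eigen::Matrix3d U;
    klTransform(X, lambda, U);
    std::cout << "eigenvalues (decreasing): " << lambda.transpose() << std::endl;
    return 0;
}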

      Assuming the transmission $t$ is constant within the rectangular window, subtracting the means from both sides of the imaging model gives $\mathbf{I} - E[\mathbf{I}] = t\,(\mathbf{J} - E[\mathbf{J}])$, so the autocorrelation matrix of the hazy pixel values is:

$$R_I = E\left[\tilde{\mathbf{I}} \tilde{\mathbf{I}}^{T}\right] = t^{2} R_J$$

Thus the autocorrelation matrix of the pixel values in a window of the hazy image and that of the same window in the haze-free image are related by the proportionality factor $t^{2}$. The corresponding diagonal matrices satisfy the same proportionality, since:

$$R_I = t^{2} R_J = t^{2}\, U \Lambda_J U^{T} = U \left(t^{2} \Lambda_J\right) U^{T}$$

It follows that $\Lambda_I = t^{2} \Lambda_J$.

       For the haze imaging model $\mathbf{I}(x) = \mathbf{J}(x)\,t(x) + \mathbf{A}\,(1 - t(x))$, the transmission is $t(x) = e^{-\beta d(x)}$: when $d(x) \to 0$, $t(x) \to 1$; when $d(x) \to \infty$, $t(x) \to 0$. The range of the transmission is therefore $t(x) \in (0, 1]$.

       Taking the expectation of both sides of the atmospheric scattering model, to obtain the mean of the vectors inside the window:

$$E[\mathbf{I}] = t\,E[\mathbf{J}] + (1 - t)\,\mathbf{A}$$

For the mean of the clustered vectors: when $t \to 1$, $E[\mathbf{I}] \to E[\mathbf{J}]$; when $t \to 0$, $E[\mathbf{I}] \to \mathbf{A}$. So $E[\mathbf{I}]$ varies between $E[\mathbf{J}]$ and $\mathbf{A}$. This yields the conclusion: as the scene depth $d \to \infty$, $t \to 0$, and the mean vector of the cluster approaches the airlight vector $\mathbf{A}$.

       We also have:

$$\Lambda_I = t^{2} \Lambda_J = e^{-2\beta d}\, \Lambda_J$$

       This yields a second conclusion: as the scene depth $d$ increases, $t$ decreases, so the eigenvalues in $\Lambda_I$ of the autocorrelation matrix of the vectors in the window decrease with $t$, and the ellipsoid in which the vectors cluster shrinks accordingly. That is, compared with the ellipsoid formed by the pixel values of nearby image regions, the ellipsoid formed by the pixel values of distant regions lies closer to the airlight vector $\mathbf{A}$ in RGB space and has a smaller volume.
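
Both conclusions can be checked numerically. The following C++/Eigen sketch (an illustrative experiment under assumed values, e.g. an airlight of $(0.9, 0.9, 0.9)$; it is not part of the original post) synthesizes haze-free pixels, applies the imaging model with two transmission values, and prints how the cluster mean approaches $\mathbf{A}$ while the eigenvalues shrink by the factor $t^{2}$:

#include <Eigen/Dense>
#include <iostream>

int main()
{
    const int N = 2000;
    Eigen::Vector3d A(0.9, 0.9, 0.9);  // assumed airlight vector
    // haze-free pixels drawn uniformly from [0, 1]^3
    Eigen::Matrix3Xd J = ((Eigen::Matrix3Xd::Random(3, N).array() + 1.0) * 0.5).matrix();

    for (double t : {0.8, 0.2})        // near scene vs. far scene
    {
        // imaging model: I = t * J + (1 - t) * A
        Eigen::Matrix3Xd I = t * J;
        I.colwise() += (1.0 - t) * A;

        Eigen::Vector3d mu = I.rowwise().mean();
        Eigen::Matrix3Xd Ic = I.colwise() - mu;
        Eigen::Matrix3d R = (Ic * Ic.transpose()) / N;   // mean-removed autocorrelation
        Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(R);

        std::cout << "t = " << t
                  << "  |E[I] - A| = " << (mu - A).norm()
                  << "  eigenvalues: " << es.eigenvalues().transpose() << std::endl;
    }
    return 0;
}

With $t = 0.2$ the mean sits four times closer to $\mathbf{A}$ than with $t = 0.8$, and every eigenvalue is $(0.2/0.8)^{2} = 1/16$ of its near-scene counterpart, matching $\Lambda_I = t^{2} \Lambda_J$.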

       The above analyzed the clustering of pixel values in different image regions from a theoretical standpoint; the following experiment verifies the theory. In Figure (a), two rectangular regions of equal size are selected: the red rectangle contains pixels of a distant scene and the blue rectangle pixels of a nearby scene. Projecting the pixel values from the two rectangles into coordinate space and observing the distributions of the near-scene and far-scene pixel values leads to the conclusion that the cluster of vectors from the distant scene lies closer to the airlight vector, and its ellipsoid is smaller than the cluster ellipsoid of the nearby scene. This is consistent with the PCA-based analysis of the clustering behaviour above.

          The MATLAB code below projects the pixel values of a rectangular image window into RGB color space so that their distribution can be observed:

clear all;
I = imread('C:\Users\zihang\Desktop\qiusky.jpg');
[m, n, o] = size(I);
I = double(I);
I = I ./ 255;
[region, ~] = imcrop(I);        % interactively crop a rectangular window
[h, w, ch] = size(region);
col_img = zeros(h * w, 1, ch);  % stack the window pixels into a single column
for i = 1:1:w
    col_img(1 + (i - 1) * h : i * h, 1, :) = region(1:h, i, :);
end

x=col_img(:,:,1);y=col_img(:,:,2);z=col_img(:,:,3); 
scatter3(x,y,z,'p');
xlabel('R');ylabel('G');zlabel('B');
title('RGB Color Space');
axis([0 1 0 1 0 1]);

       When the pixel values of a rectangular window of an image captured by the imaging system are projected into RGB space, the vectors are distributed as described by the model above. Let $\mathbf{x}$ be any vector in the coordinate space and $\Omega$ a rectangular window region of the image; the corresponding cluster region of vectors in RGB space is the ellipsoid $\varepsilon$, which can be expressed as:

$$\varepsilon = \left\{ \mathbf{x} : \left(\mathbf{x} - \mathbf{c}\right)^{T} E^{-1} \left(\mathbf{x} - \mathbf{c}\right) \le 1 \right\}$$

where $\mathbf{c}$ is the center vector of the ellipsoid, giving the position of the ellipsoid's center in RGB space, and $E$ is the covariance matrix of the ellipsoid, which determines its shape and orientation. The covariance matrix can be written as:

$$E = U \Lambda U^{T} = \sum_{i=1}^{3} \lambda_i\, \mathbf{u}_i \mathbf{u}_i^{T}$$

       Let the set of vectors in the ellipsoid be $P = \{\mathbf{p}_1, \mathbf{p}_2, \ldots, \mathbf{p}_N\}$ ($N$ is the number of vector points in the ellipsoid). The covariance matrix and center vector of the ellipsoid model constructed above must be such that the ellipsoid contains every vector of the set while having the smallest possible volume. The fitting can therefore be cast as a Minimum Volume Enclosing Ellipsoid (MVEE) problem, i.e. the minimization:

$$\min_{E,\,\mathbf{c}} \ \log \det(E) \quad \text{s.t.} \quad \left(\mathbf{p}_i - \mathbf{c}\right)^{T} E^{-1} \left(\mathbf{p}_i - \mathbf{c}\right) \le 1, \quad i = 1, \ldots, N$$

       Solving this minimization with Khachiyan's algorithm yields the inverse $E^{-1}$ of the ellipsoid's covariance matrix together with the center vector $\mathbf{c}$.
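
For context (a standard result from the MVEE literature, not spelled out in the original post): Khachiyan's algorithm actually iterates on the dual of this problem. Lifting each point to $\mathbf{q}_i = [\mathbf{p}_i^{T}\ \ 1]^{T}$, it solves

$$\max_{\mathbf{u}} \ \log \det\left( \sum_{i=1}^{N} u_i\, \mathbf{q}_i \mathbf{q}_i^{T} \right) \quad \text{s.t.} \quad \sum_{i=1}^{N} u_i = 1, \quad u_i \ge 0,$$

by repeatedly shifting weight onto the point that violates the ellipsoid constraint most. From the optimal weights, the center is $\mathbf{c} = X\mathbf{u}$ and the shape matrix satisfies $E^{-1} \propto X\,\mathrm{diag}(\mathbf{u})\,X^{T} - (X\mathbf{u})(X\mathbf{u})^{T}$ (up to the normalization by the dimension $d$), which is what the C++ code below computes as Einv.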

       Since $E^{-1}$ is symmetric and positive semi-definite, its eigenvalues and eigenvectors can be obtained through singular value decomposition:

$$E^{-1} = U S V^{T}$$

 where $S$ is the diagonal matrix of the eigenvalues arranged in decreasing order and $V$ is the matrix formed by the corresponding eigenvectors.

        With these components available, the transmission at every pixel can be estimated. The authors propose the Ellipsoid Prior method of transmission estimation:

$$\tilde{t}(x) = 1 - \omega\, \frac{\left\| \mathbf{c} - \sqrt{\lambda_1}\, \mathbf{v}_1 \right\|_2}{\sqrt{3}}$$

where $\omega$ is a correction coefficient, usually taken as 0.95, and $\mathbf{v}_1$ is the eigenvector of the largest eigenvalue $\lambda_1$. Refining the resulting transmission further, e.g. with guided filtering, gives the final estimated transmission.

       The C++ code for the above procedure is given below. Note that the OpenCV and Eigen libraries must be configured, and the program needs to run in Release mode:

#include <iostream>
#include <stdlib.h>
#include <time.h>
#include <cmath>
#include <algorithm>
#include <opencv2/opencv.hpp>
#include <Eigen/Dense>
#include <opencv2/core/eigen.hpp>

#include <stdio.h>
#include <fstream>
#include <string>
#include <map>
#include <cmath>
#include <vector>

#define EPS 0.1
#define LMC 1e-18

using namespace Eigen;
using namespace cv;
using namespace std;

typedef struct Pixel
{
    int x, y;
    int data;
}Pixel;

bool structCmp(const Pixel &a, const Pixel &b)
{
    return a.data > b.data;//descending order
}

Mat minFilter(Mat srcImage, int kernelSize);
Mat GuidedFilter(Mat I, Mat p, int r, double eps);
Mat dehaze(Mat& src, Mat& t, double A_r, double A_g, double A_b);
Mat transform(Mat& input);
void cv_to_eigen(const Mat& img, MatrixXd& output);
void decompose(MatrixXd& A, VectorXd& s, MatrixXd& U, MatrixXd& V);
int findIdx(Eigen::VectorXd& vec, double val);

int main()
{
    string loc = "D:\\save\\ellipsoidDCP\\ellipsoidtest\\cityscape.png";
    double scale = 1.0;
    clock_t start, finish;
    double duration;

    cout << "A defog program" << endl
        << "----------------" << endl;

    Mat image = imread(loc);
    Mat resizedImage;
    int originRows = image.rows;
    int originCols = image.cols;

    if (scale < 1.0)
    {
        resize(image, resizedImage, Size(originCols * scale, originRows * scale));
    }
    else
    {
        scale = 1.0;
        resizedImage = image;
    }

    int rows = resizedImage.rows;
    int cols = resizedImage.cols;
    Mat convertImage;
    resizedImage.convertTo(convertImage, CV_64FC3, 1 / 255.0, 0);
    int kernelSize = max(15, (int)max(rows * 0.01, cols * 0.01));
    //int kernelSize = 15;
    int parse = kernelSize / 2;
    
    Mat darkChannel(rows, cols, CV_8UC1);
    Mat normalDark(rows, cols, CV_64FC1);
    int nr = rows;
    int nl = cols;
    double b, g, r;
    start = clock();
    cout << "generating dark channel image." << endl;
    if (resizedImage.isContinuous())
    {
        nl = nr * nl;
        nr = 1;
    }
    for (int i = 0; i < nr; i++)
    {
        double min;
        const uchar* inData = resizedImage.ptr<uchar>(i);
        uchar* outData = darkChannel.ptr<uchar>(i);
        for (int j = 0; j < nl; j++)
        {
            b = *inData++;
            g = *inData++;
            r = *inData++;
            min = b > g ? g : b;
            min = min > r ? r : min;
            *outData++ = min;
        }
    }
    darkChannel = minFilter(darkChannel, kernelSize);

    imshow("darkChannel", darkChannel);
    cout << "dark channel generated." << endl;

    //estimate Airlight
    //store the dark channel in a struct array, sort it, take the brightest 0.1%,
    //and use the coordinates kept in the structs to read the original image
    cout << "estimating airlight." << endl;
    rows = darkChannel.rows, cols = darkChannel.cols;
    int pixelTot = rows * cols * 0.001;
    int *A = new int[3];
    Pixel *toppixels, *allpixels;
    toppixels = new Pixel[pixelTot];
    allpixels = new Pixel[rows * cols];


    for (unsigned int r = 0; r < rows; r++)
    {
        const uchar *data = darkChannel.ptr<uchar>(r);
        for (unsigned int c = 0; c < cols; c++)
        {
            allpixels[r*cols + c].data = *data;
            allpixels[r*cols + c].x = r;
            allpixels[r*cols + c].y = c;
        }
    }
    std::sort(allpixels, allpixels + rows * cols, structCmp);

    memcpy(toppixels, allpixels, pixelTot * sizeof(Pixel));

    double A_r, A_g, A_b, avg, maximum = 0;
    int idx, idy, max_x, max_y;
    for (int i = 0; i < pixelTot; i++)
    {
        idx = allpixels[i].x; idy = allpixels[i].y;
        const uchar *data = resizedImage.ptr<uchar>(idx);
        data += 3 * idy;
        A_b = *data++;
        A_g = *data++;
        A_r = *data++;
        //cout << A_r << " " << A_g << " " << A_b << endl;
        avg = (A_r + A_g + A_b) / 3.0;
        if (maximum < avg)
        {
            maximum = avg;
            max_x = idx;
            max_y = idy;
        }
    }

    delete[] toppixels;
    delete[] allpixels;

    for (int i = 0; i < 3; i++)
    {
        A[i] = resizedImage.at<Vec3b>(max_x, max_y)[i];
    }
    cout << "airlight estimated as: " << A[0] << ", " << A[1] << ", " << A[2] << endl;
    //cout << endl;

    //normalize by the airlight (divide by A)
    //(I / A)
    cout << "start normalization of input image I." << endl;
    double tmp_A[3];
    tmp_A[0] = A[0] / 255.0;
    tmp_A[1] = A[1] / 255.0;
    tmp_A[2] = A[2] / 255.0;

    int patchSize = 15;
    Mat parseImage; int radius = patchSize / 2;
    copyMakeBorder(convertImage, parseImage, radius, radius, radius, radius, BORDER_REPLICATE);
    Mat trans(rows, cols, CV_64FC1); 
    int i = 0; int j = 0;
    for (unsigned int r = 0; r < convertImage.rows; r++)
    {
        double *tData = trans.ptr<double>(r);
        double thetaVal; 
        for (unsigned int c = 0; c < convertImage.cols; c++)
        {
            Rect ROI(c, r, patchSize, patchSize);
            Mat imageROI = parseImage(ROI);
            imageROI = transform(imageROI);
            MatrixXd regionROI;
            cv_to_eigen(imageROI, regionROI);
            //use Khachiyan's algorithm to fit the cluster
            vector<double> center;
            vector<vector<double>> E;

            long int iters;
            iters = 0;
            double err = 1 + EPS;
            double alpha;
            int n = regionROI.rows();
            int d = regionROI.cols();
            MatrixXd X = regionROI.transpose();
            MatrixXd Q(X.rows() + 1, X.cols());
            Q.topRows(d) = X;   // lift the 3-D points: copy all three color rows
            Q.row(d).setOnes(); // and append a row of ones
            VectorXd u = (1 / (double)n) * VectorXd::Ones(n);
            VectorXd uhat = u;
            MatrixXd G(d + 1, d + 1);
            MatrixXd noiseye = MatrixXd::Identity(G.rows(), G.cols()) * LMC;
            VectorXd g(n);
            double m; int i;
            while (err > EPS)
            {
                G = Q * u.asDiagonal() * Q.transpose() + noiseye;
                g = (Q.transpose() * G.inverse() * Q).diagonal();
                m = g.maxCoeff();
                i = findIdx(g, m);
                alpha = (m - d - 1) / ((d + 1) * (m - 1));
                uhat = (1 - alpha) * u;
                uhat(i) += alpha;
                err = (uhat - u).norm();
                u = uhat;
                iters++;
            }

            // Decompose US^2U^T
            /*MatrixXd E = (1 / (double)d) * (
                X * u.asDiagonal() * X.transpose()
                - (X * u) * (X * u).transpose()
                ).inverse();*/
            MatrixXd Einv = (1 / (double)d) * (
                X * u.asDiagonal() * X.transpose()
                - (X * u) * (X * u).transpose()
                );
            VectorXd x = X * u;
            VectorXd s(d);
            MatrixXd V(d, d);
            MatrixXd U(d, d);
            decompose(Einv, s, U, V);

            center.resize(x.size());
            VectorXd::Map(&center[0], x.size()) = x;

            vector<double> test0;
            vector<double> test1;
            vector<double> test2;

            test0.push_back(Einv(0, 0)); test0.push_back(Einv(1, 0)); test0.push_back(Einv(2, 0));
            test1.push_back(Einv(0, 1)); test1.push_back(Einv(1, 1)); test1.push_back(Einv(2, 1));
            test2.push_back(Einv(0, 2)); test2.push_back(Einv(1, 2)); test2.push_back(Einv(2, 2));
            E.push_back(test0);
            E.push_back(test1);
            E.push_back(test2);

            if ((isnan(center[0])) || (isnan(center[1])) || (isnan(center[2])) ||
                (isnan(E[0][0])) || (isnan(E[0][1])) || (isnan(E[0][2])) ||
                (isnan(E[1][0])) || (isnan(E[1][1])) || (isnan(E[1][2])) ||
                (isnan(E[2][0])) || (isnan(E[2][1])) || (isnan(E[2][2])))
            {
                thetaVal = 0.1;
            }
            else 
            {
                double d1 = s(0);
                vector<double> V1(3);
                V1[0] = V(0, 0); V1[1] = V(1, 0); V1[2] = V(2, 0);
                vector<double> g(3);
                double coeffi = pow(d1, 0.5);
                g[0] = center[0] - coeffi * V1[0]; g[1] = center[1] - coeffi * V1[1]; g[2] = center[2] - coeffi * V1[2];
                thetaVal = (pow(pow(g[0], 2) + pow(g[1], 2) + pow(g[2], 2), 0.5)) / (pow(3, 0.5));
            }
            double omega = 0.95;
            *tData = 1 - omega * thetaVal;
            tData++; 
        }
    }

    imshow("origin t", trans);

    cout << "start estimating transmission and guided image filtering." << endl;
    int kernelSizeTrans = std::max(3, kernelSize);
    //refine the estimated t with guided filtering
    cout << "tansmission estimated and guided filtered." << endl;

    Mat graymat; Mat graymat_64F;
    cvtColor(resizedImage, graymat_64F, CV_BGR2GRAY);
    trans.convertTo(trans, CV_8UC1, 255.0, 0);
    imwrite("origin_t.bmp", trans);
    graymat_64F.convertTo(graymat_64F, CV_8UC1, 1.0, 0);
    imwrite("grayimg.bmp", graymat_64F);
    
    string loc1 = "D:\\save\\ellipsoidDCP\\ellipsoidtest\\origin_t.bmp";
    Mat trans1 = imread(loc1);
    string loc2 = "D:\\save\\ellipsoidDCP\\ellipsoidtest\\grayimg.bmp";
    Mat grayimg = imread(loc2);

    Mat transmission = GuidedFilter(grayimg, trans1, 15, 0.0001);
    imshow("grayimg", graymat_64F);
    cout << "start recovering." << endl;

    transmission.convertTo(transmission, CV_8UC1, 255.0, 0);
    imwrite("refine_t.bmp", transmission);

    string loc3 = "D:\\save\\ellipsoidDCP\\ellipsoidtest\\refine_t.bmp";
    Mat refinetrans = imread(loc3);
    cvtColor(refinetrans, refinetrans, CV_BGR2GRAY);
    refinetrans.convertTo(refinetrans, CV_64FC1, 1 / 255.0, 0);

    double A_b1 = tmp_A[0] * 255; double A_g1 = tmp_A[1] * 255; double A_r1 = tmp_A[2] * 255;
    Mat finalImage = dehaze(resizedImage, refinetrans, A_r1, A_g1, A_b1);
    cout << "recovering finished." << endl;
    imshow("final", finalImage);
    finish = clock();
    duration = (double)(finish - start) * 1000.0 / CLOCKS_PER_SEC;
    cout << "defog used " << duration << "ms time;" << endl;
    imwrite("recover.bmp", finalImage);

    waitKey(0);
    
    destroyAllWindows();
    image.release();
    resizedImage.release();
    convertImage.release();
    darkChannel.release();
    trans.release();
    finalImage.release();
    
    return 0;
}

Mat minFilter(Mat srcImage, int kernelSize)
{
    int radius = kernelSize / 2;

    int srcType = srcImage.type();
    int targetType = 0;
    if (srcType % 8 == 0)   // 8-bit input -> CV_8UC1 output
    {
        targetType = 0;
    }
    else                    // floating-point input -> CV_64FC1 output
    {
        targetType = 6;
    }
    Mat ret(srcImage.rows, srcImage.cols, targetType);
    Mat parseImage;
    copyMakeBorder(srcImage, parseImage, radius, radius, radius, radius, BORDER_REPLICATE);
    for (unsigned int r = 0; r < srcImage.rows; r++)
    {
        double *fOutData = ret.ptr<double>(r);
        uchar *uOutData = ret.ptr<uchar>(r);
        for (unsigned int c = 0; c < srcImage.cols; c++)
        {
            Rect ROI(c, r, kernelSize, kernelSize);
            Mat imageROI = parseImage(ROI);
            double minValue = 0, maxValue = 0;
            Point minPt, maxPt;
            minMaxLoc(imageROI, &minValue, &maxValue, &minPt, &maxPt);
            if (!targetType)
            {
                *uOutData++ = (uchar)minValue;
                continue;
            }
            *fOutData++ = minValue;
        }
    }
    return ret;
}


Mat GuidedFilter(Mat I, Mat p, int r, double eps)
{
    cv::Mat _I;
    I.convertTo(_I, CV_64FC1, 1.0 / 255);
    I = _I;

    cv::Mat _p;
    p.convertTo(_p, CV_64FC1, 1.0 / 255);
    p = _p;

    //[hei, wid] = size(I);  
    int hei = I.rows;
    int wid = I.cols;

    r = 2 * r + 1;//OpenCV's boxFilter() takes the full Size: a 9x9 box corresponds to radius 4

                  //mean_I = boxfilter(I, r) ./ N;  
    cv::Mat mean_I;
    cv::boxFilter(I, mean_I, CV_64FC1, cv::Size(r, r));

    //mean_p = boxfilter(p, r) ./ N;  
    cv::Mat mean_p;
    cv::boxFilter(p, mean_p, CV_64FC1, cv::Size(r, r));

    //mean_Ip = boxfilter(I.*p, r) ./ N;  
    cv::Mat mean_Ip;
    cv::boxFilter(I.mul(p), mean_Ip, CV_64FC1, cv::Size(r, r));

    //cov_Ip = mean_Ip - mean_I .* mean_p; % this is the covariance of (I, p) in each local patch.  
    cv::Mat cov_Ip = mean_Ip - mean_I.mul(mean_p);

    //mean_II = boxfilter(I.*I, r) ./ N;  
    cv::Mat mean_II;
    cv::boxFilter(I.mul(I), mean_II, CV_64FC1, cv::Size(r, r));

    //var_I = mean_II - mean_I .* mean_I;  
    cv::Mat var_I = mean_II - mean_I.mul(mean_I);

    //a = cov_Ip ./ (var_I + eps); % Eqn. (5) in the paper;     
    cv::Mat a = cov_Ip / (var_I + eps);

    //b = mean_p - a .* mean_I; % Eqn. (6) in the paper;  
    cv::Mat b = mean_p - a.mul(mean_I);

    //mean_a = boxfilter(a, r) ./ N;  
    cv::Mat mean_a;
    cv::boxFilter(a, mean_a, CV_64FC1, cv::Size(r, r));

    //mean_b = boxfilter(b, r) ./ N;  
    cv::Mat mean_b;
    cv::boxFilter(b, mean_b, CV_64FC1, cv::Size(r, r));

    //q = mean_a .* I + mean_b; % Eqn. (8) in the paper;  
    cv::Mat q = mean_a.mul(I) + mean_b;

    return q;
}

Mat dehaze(Mat& src, Mat& t, double A_r, double A_g, double A_b)
{
    /// Split RGB to channels
    src.convertTo(src, CV_64FC3);
    vector<Mat> channels(3);
    split(src, channels);        // separate color channels

    Mat R = channels[2];
    Mat G = channels[1];
    Mat B = channels[0];

    /// Remove haze
    channels[2] = (R - A_r) / t + A_r;
    channels[1] = (G - A_g) / t + A_g;
    channels[0] = (B - A_b) / t + A_b;

    Mat dst;
    merge(channels, dst);

    dst.convertTo(dst, CV_8UC3);
    return dst;
}

Mat transform(Mat& input)
{
    int row = input.rows;
    int col = input.cols;
    int length = row * col;
    Mat output = Mat::zeros(length, 3, CV_64FC1);
    for (int ker_row = 0; ker_row < row; ker_row++)
    {
        for (int ker_col = 0; ker_col < col; ker_col++)
        {
            output.at<double>(ker_row * col + ker_col, 0) = input.at<Vec3d>(ker_row, ker_col)[0];
            output.at<double>(ker_row * col + ker_col, 1) = input.at<Vec3d>(ker_row, ker_col)[1];
            output.at<double>(ker_row * col + ker_col, 2) = input.at<Vec3d>(ker_row, ker_col)[2];
        }
    }
    return output;
}

void decompose(MatrixXd& A, VectorXd& s, MatrixXd& U, MatrixXd& V)
{
    JacobiSVD<MatrixXd> SVD(A, ComputeThinU | ComputeThinV);
    s = SVD.singularValues();
    V = SVD.matrixV();
    U = SVD.matrixU();
}

void cv_to_eigen(const Mat& img, MatrixXd& output)
{
    cv2eigen(img, output);
}

int findIdx(Eigen::VectorXd& vec, double val)
{
    for (int i = 0; i<vec.rows(); i++)
    {
        if (vec(i) == val)
        {
            return i;
        }
    }
    return -1;
}

       Several sets of results from running this code are shown below:

[result images]

            Appendix (Khachiyan's algorithm)
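
As a summary of the algorithm, the following self-contained C++/Eigen sketch restates the Khachiyan iteration used inside main() above as a standalone function (the function name mvee_khachiyan and the random test data are illustrative assumptions):

#include <Eigen/Dense>
#include <iostream>

// Khachiyan's algorithm for the minimum volume enclosing ellipsoid of the
// d x N point matrix P: find c and A with (x - c)^T A (x - c) <= 1 for every
// column x of P (to tolerance eps), with the ellipsoid volume minimized.
void mvee_khachiyan(const Eigen::MatrixXd& P, double eps,
                    Eigen::VectorXd& c, Eigen::MatrixXd& A)
{
    const int d = (int)P.rows();
    const int N = (int)P.cols();

    Eigen::MatrixXd Q(d + 1, N);               // lift the points: Q = [P; 1 ... 1]
    Q.topRows(d) = P;
    Q.row(d).setOnes();

    Eigen::VectorXd u = Eigen::VectorXd::Constant(N, 1.0 / N);  // simplex weights
    double err = 1.0 + eps;
    while (err > eps)
    {
        Eigen::MatrixXd G = Q * u.asDiagonal() * Q.transpose()
            + 1e-18 * Eigen::MatrixXd::Identity(d + 1, d + 1);  // tiny ridge, as in main()
        Eigen::VectorXd g = (Q.transpose() * G.inverse() * Q).diagonal();
        int j;
        double m = g.maxCoeff(&j);                              // most violated point
        double alpha = (m - d - 1.0) / ((d + 1.0) * (m - 1.0)); // optimal step toward it
        Eigen::VectorXd uhat = (1.0 - alpha) * u;
        uhat(j) += alpha;
        err = (uhat - u).norm();
        u = uhat;
    }

    c = P * u;                                                  // ellipsoid center
    // shape matrix: A = (1/d) * (P diag(u) P^T - c c^T)^{-1}
    A = ((P * u.asDiagonal() * P.transpose() - c * c.transpose()) * (double)d).inverse();
}

int main()
{
    Eigen::MatrixXd P = Eigen::MatrixXd::Random(3, 50);  // 50 random 3-D points
    Eigen::VectorXd c; Eigen::MatrixXd A;
    mvee_khachiyan(P, 1e-3, c, A);
    std::cout << "center: " << c.transpose() << std::endl;
    return 0;
}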

Original article: https://www.cnblogs.com/rust/p/10626884.html