Iterative Least Square (ILS) vs. RANdom SAmple Consensus (RANSAC)

**ILS** is a new algorithm created by me; however, its theory and every trick it adopts come from existing ideas. **RANSAC** is an old-fashioned machine learning (especially linear model fitting) algorithm that has been proven to work pretty well. The ILS algorithm is presented as follows:
  1. Suppose the data \(D\) contains both good and bad samples. By bad I mean samples that do not satisfy the linear equation within the given precision bounds; otherwise a sample is good. This assumption is optional.(**)
  2. Suppose the bad samples follow an unbiased distribution (e.g., Gaussian), which means no additional bias is distorting the observations (sampling). This assumption is necessary.
  3. Suppose the target linear equation \(Ax=b\) does exist over the real numbers, and any \(a \in A\) may be a vector transformed from a nonlinear space. That is, there may be a latent variable \(z\) satisfying \(z \leftarrow f(a)\), with \(b\) presented as the final observation, while nothing like \(a \leftarrow f^{-1}(z)\) need exist, since such a nonlinear space will probably have no unique inverse map back to the linear space. Thus a trained model such as a neural network can produce a bad prediction or inference of \(a\) from \(z\), and such an \(a\) will not satisfy the linear equation supposed above.
  4. Iteration Process:
    • Solve \(x\) using the least squares algorithm; appending a bias 1 to the vector \(\vec{x}\) transforms the equation into \(\left[\begin{matrix}A & -b\end{matrix}\right]\left[\begin{matrix}\vec{x}\\1\end{matrix}\right]=0\) (see the worked form right after this list). This is easily done by inverting a matrix whose size is the dimension of \(a\) plus one; since \(a\) is sampled from the latent variable, its dimension is small, so no curse of dimensionality occurs.
    • Judge each sample of \(A\) by the absolute value of its residual error \(|Ax-b|\) (taken element-wise), and find the top-N samples of \(A\) by residual error. N can simply be set to a small number like 3, according to the sample volume of \(A\).
    • If the mean residual error (or the maximum one) is under the precision threshold, stop the iteration and exit with the \(x\) solved in this round as a good solution. Otherwise, go ahead.
    • Remove the samples with the top-N residual errors from \(A\).
    • Go back to the first step and repeat.

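For reference, the least-squares step above can be written out explicitly. The homogeneous form comes from the list above; the normal-equation solution on the right is a standard sketch of mine, and it is what the code below computes through `mat_xT`, `mat_xTx`, and `mat_xTx_inv`:

\[
\left[\begin{matrix}A & -b\end{matrix}\right]\left[\begin{matrix}\vec{x}\\1\end{matrix}\right]=0
\quad\Longleftrightarrow\quad
A\vec{x}=b,
\qquad
\hat{x}=\left(A^{\mathsf{T}}A\right)^{-1}A^{\mathsf{T}}b.
\]
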
A simple application that concatenates two images along rows is presented here. This code is NOT FOR COMMERCIAL USE; pay attention to this requirement!


#include "OpenCVconfig.h"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/core.hpp"
#include "opencv2/core/utility.hpp"
#include "opencv2/ximgproc/edge_filter.hpp"


#include <stdio.h>
#include <vector>    // for std::vector
#include <utility>   // for std::swap

using namespace cv;
using namespace cv::ximgproc;
using namespace std;

// code here that is not the core part
// ... ... 
// ... ...
// ... ...

int AlignViews(
	cv::Mat & im1_aligned,
	cv::Mat & im2_aligned,
	const cv::Mat & im1,
	const cv::Mat & im2)
{
	// code here that is not the core part
	// ... ...

	// solve row-aligning transform matrix with dynamic training samples
	int i, j;
	float u, v, u1, v1;
	float a21, a22, a23, a31, a32;
	int cntr;

	float max_res[WORST_N_CASE], avg_res, this_res;
	std::vector<unsigned char> mask_accepted;
	int worst_pairs[WORST_N_CASE], total_pairs;
	Mat mat_x, vec_y, vec_beta, vec_betaT, mat_xT, mat_xTx, mat_xTx_inv, vec_res;

	mat_x.create(pts1.size(), 5, CV_32F);
	vec_y.create(pts1.size(), 1, CV_32F);
	vec_beta.create(5, 1, CV_32F);

	float * ptr_x = (float*)mat_x.data;
	float * ptr_y = (float*)vec_y.data;

	mask_accepted.resize(pts1.size());
	ransac_used = pts1.size();
	avg_res = 1e5;

	if (ransac_used < WORST_N_CASE + 10) {
		im1.copyTo(im1_aligned);
		im2.copyTo(im2_aligned);
		return 0; // automatic alignment failed
	}

	while (avg_res > RESIDUAL_LEVEL) {
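		// NOTE: mat_x and vec_y only ever shrink here, so the raw pointers
		// ptr_x and ptr_y obtained above stay valid across iterations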
		mat_x.resize(ransac_used);
		vec_y.resize(ransac_used);
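		// each row of mat_x encodes one matched point pair for the
		// row-alignment model (using the a21..a32 coefficients declared above):
		//     v1 = (a21*u + a22*v + a23) / (a31*u + a32*v + 1)
		// linearized as v1 = a21*u + a22*v + a23 - (u*v1)*a31 - (v*v1)*a32,
		// i.e. the second row of a homography on normalized coordinates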
		for (i = 0; i < ransac_used; ++i) {
			v1 = (pts1[i].y - ppy) / fy;
			u = (pts2[i].x - ppx) / fx;
			v = (pts2[i].y - ppy) / fy;
			ptr_x[i * 5] = u;
			ptr_x[i * 5 + 1] = v;
			ptr_x[i * 5 + 2] = 1;
			ptr_x[i * 5 + 3] = -u * v1;
			ptr_x[i * 5 + 4] = -v * v1;
			ptr_y[i] = v1;
		}
		cv::transpose(mat_x, mat_xT);
		mat_xTx = mat_xT * mat_x;
		if (cv::invert(mat_xTx, mat_xTx_inv) == 0) {
			// this approach failed: badly selected point pairs (singular normal matrix)
			im1.copyTo(im1_aligned);
			im2.copyTo(im2_aligned);
			return 0; // automatic alignment failed
		}
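		// least-squares solution via the normal equations:
		//     beta = (X^T X)^(-1) * X^T * y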
		vec_beta = mat_xT * vec_y;
		vec_beta = mat_xTx_inv * vec_beta;
		// apply the solved model to all samples to find out the worst N cases
		vec_res = mat_x * vec_beta;
		vec_res = cv::abs(vec_res - vec_y);
		avg_res = 0;
		// reset the worst-N lists (kept sorted by descending residual)
		for (j = 0; j < WORST_N_CASE; ++j) {
			max_res[j] = -1.f;
			worst_pairs[j] = -1;
		}
		cntr = 0;
		for (i = 0; i < ransac_used; ++i) {
			this_res = vec_res.at<float>(i);
			avg_res += this_res;
			// insertion-sort this sample into the top-N worst residuals
			if (this_res > max_res[WORST_N_CASE - 1]) {
				for (j = WORST_N_CASE - 1; j > 0 && this_res > max_res[j - 1]; --j) {
					max_res[j] = max_res[j - 1];
					worst_pairs[j] = worst_pairs[j - 1];
				}
				max_res[j] = this_res;
				worst_pairs[j] = i;
				if (cntr < WORST_N_CASE) ++cntr;
			}
		}
		// update average residual error for this round
		avg_res = avg_res / ransac_used;
		// filter out these worst N cases
		for (i = 0; i < ransac_used; ++i) {
			mask_accepted[i] = 1;
		}
		cntr = cntr < WORST_N_CASE ? cntr : WORST_N_CASE;
		for (i = 0; i < cntr; ++i) {
			mask_accepted[worst_pairs[i]] = 0;
		}
		total_pairs = ransac_used;
		ransac_used = 0;
		// ensure the output buffers can hold every surviving pair
		pts1_good.resize(total_pairs);
		pts2_good.resize(total_pairs);
		for (i = 0; i < total_pairs; ++i) {
			if (mask_accepted[i]) {
				pts1_good[ransac_used] = pts1[i];
				pts2_good[ransac_used] = pts2[i];
				++ransac_used;
			}
		}
		pts1_good.resize(ransac_used);
		pts2_good.resize(ransac_used);
		std::swap(pts1, pts1_good);
		std::swap(pts2, pts2_good);

		// check if the points are enough
		if (ransac_used < 5) {
			im1.copyTo(im1_aligned);
			im2.copyTo(im2_aligned);
			return 0; // failed: not enough points
		}
	}
	
	// code here that is not the core part
	// ... ...

	return 1;
}
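
A minimal usage sketch follows (my illustration only, under the same non-commercial terms): the command-line handling, file names, and output path are hypothetical, and only `AlignViews` comes from the code above. It loads two images, aligns them, and concatenates them along rows with `cv::hconcat`.

int main(int argc, char ** argv)
{
	if (argc < 3) {
		printf("usage: align_demo <image1> <image2>\n");
		return -1;
	}
	cv::Mat im1 = cv::imread(argv[1]);
	cv::Mat im2 = cv::imread(argv[2]);
	if (im1.empty() || im2.empty()) {
		printf("failed to load the input images\n");
		return -1;
	}
	cv::Mat im1_aligned, im2_aligned;
	if (AlignViews(im1_aligned, im2_aligned, im1, im2) == 0) {
		printf("automatic alignment failed\n");
		return -1;
	}
	// the aligned views share row geometry, so concatenate them side by side
	cv::Mat pair;
	cv::hconcat(im1_aligned, im2_aligned, pair);
	cv::imwrite("aligned_pair.png", pair);
	return 0;
}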

Original article: https://www.cnblogs.com/thisisajoke/p/11280235.html