Using VINS Loop Detection to Assist Laser Mapping

I recently received a task: when laser-based loop detection fails, for example in scenes with many black objects or in very large environments, can vision be used for loop detection instead? If the goal is only to detect loops, many existing frameworks would work. ORB-SLAM has a monocular mode with good results, but I found that to detect loops ORB-SLAM must first compute poses and generate map points, and then detect loops from the relationship between the map points and the loop candidates. That is fairly resource-intensive. Could loops be detected without computing poses and map points at all? VINS also provides monocular loop detection, so I started from VINS.

1. The feature_tracker module

This module needs no major changes. Only a laser-data trigger signal has to be added to the node so that laser keyframes stay synchronized with the image frames.

2. The estimator module

This module fuses IMU data to compute poses, so it is not needed when only detecting loops. Remove this node from the launch file so that it does not run.
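A sketch of what the trimmed launch file might look like (node and package names follow the stock VINS-Mono layout, but the exact names and arguments depend on your setup):

```xml
<launch>
    <!-- feature_tracker still runs to publish tracked feature points -->
    <node name="feature_tracker" pkg="feature_tracker" type="feature_tracker" output="log" />

    <!-- estimator removed: no IMU fusion / pose computation is needed for loop detection only -->
    <!-- <node name="vins_estimator" pkg="vins_estimator" type="vins_estimator" output="screen" /> -->

    <!-- pose_graph performs the loop detection -->
    <node name="pose_graph" pkg="pose_graph" type="pose_graph" output="screen" />
</launch>
```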

3. The pose_graph module

3.1 This module is the core of loop detection

First, change the subscribed topics: only the raw image and the feature points published by feature_tracker are needed.

   ros::Subscriber sub_image = n.subscribe(IMAGE_TOPIC, 2000, image_callback);

   ros::Subscriber sub_point = n.subscribe("/feature_tracker/feature", 2000, point_callback);

3.2 Modify the KeyFrame constructor

The original constructor required the 3D points corresponding to the features. To reduce computation, only the feature image coordinates, the normalized coordinates, and the average optical-flow velocity of the features are needed now.

KeyFrame::KeyFrame(double _time_stamp, int _index, cv::Mat &_image,vector<cv::Point2f> &_point_2d_uv, vector<cv::Point2f> &_point_2d_normal, double _aver_velocity,  int _sequence)
{
	time_stamp = _time_stamp;
	index = _index;
	image = _image.clone();
	cv::resize(image, thumbnail, cv::Size(80, 60));
	std::cout<<"this is resize"<<std::endl;
	point_2d_uv = _point_2d_uv;
	point_2d_norm = _point_2d_normal;
	has_loop = false;
	loop_index = -1;
	has_fast_point = false;
	loop_info << 0, 0, 0, 0, 0, 0, 0, 0;
	sequence = _sequence;
	computeWindowBRIEFPoint();
	computeBRIEFPoint();
	aver_velocity = _aver_velocity;
	// the image could be released here, but it is kept for now so it can still be displayed
}
3.3 addKeyFrame is also modified

To reduce computation, loop detection can be skipped while the robot is stationary. The criterion here is the average optical-flow velocity of the features, and its threshold can be chosen based on the minimum parallax.

void PoseGraph::addKeyFrame(KeyFrame* cur_kf, bool flag_detect_loop)
{
    // first decide whether this frame needs processing at all
    std::cout << "this is function of addKeyFrame" << std::endl;
    if (cur_kf->aver_velocity <= 0.2 && keyframelist.size() > 0)
    {
        // robot is (nearly) stationary: skip loop detection
        std::cout << "loop index unchanged " << last_loop_index << std::endl;
    }
    else
    {
        if (sequence_cnt != cur_kf->sequence)
        {
            sequence_cnt++;
            sequence_loop.push_back(0);
        }
        cur_kf->index = global_index;
        last_index = global_index;
        int loop_index = -1;
        if (flag_detect_loop)
        {
            TicToc tmp_t;
            loop_index = detectLoop(cur_kf, cur_kf->index);
            last_loop_index = loop_index;
            std::cout << "loop index is " << loop_index << std::endl;
            if (loop_index != -1)
            {
                // only fetch the candidate once we know a loop was found
                KeyFrame* old_kf = getKeyFrame(loop_index);
                if (cur_kf->findConnection(old_kf))
                {
                    std::cout << "this is true loop with index of : " << loop_index << std::endl;
                }
            }
        }
        global_index++;
        keyframelist.push_back(cur_kf);
    }
}

The detectLoop function is unchanged; findConnection was modified as follows. At first I thought findConnection could be dropped entirely and the result of detectLoop used directly, but that produced many false matches. VINS originally used

  PnPRANSAC(matched_2d_old_norm, matched_3d, status, PnP_T_old, PnP_R_old); 

to verify the true similarity between the two frames. However, this function needs 3D points and computes a relative pose, which does not fit my task. Fortunately, an older version of VINS verified loops using only the 2D points of the two frames; the precision is slightly lower, but no 3D points are needed, which reduces computation.

bool KeyFrame::findConnection(KeyFrame* old_kf)
{
	std::cout<<"this is function of findConnection"<<std::endl;
	TicToc tmp_t;
	//printf("find Connection\n");
	vector<cv::Point2f> matched_2d_cur, matched_2d_old;
	vector<cv::Point2f> matched_2d_cur_norm, matched_2d_old_norm;
	vector<uchar> status;

	matched_2d_cur = point_2d_uv;
	matched_2d_cur_norm = point_2d_norm;

	TicToc t_match;
	std::cout<<"old_kf keypoint : "<<old_kf->keypoints.size()<<std::endl;
	searchByBRIEFDes(matched_2d_old, matched_2d_old_norm, status, old_kf->brief_descriptors, old_kf->keypoints, old_kf->keypoints_norm);
	
	reduceVector(matched_2d_cur, status);
	reduceVector(matched_2d_old, status);
	reduceVector(matched_2d_cur_norm, status);
	reduceVector(matched_2d_old_norm, status);

	//printf("search by des finish\n");
	status.clear();
	if ((int)matched_2d_cur.size() > MIN_LOOP_NUM)
	{
		FundmantalMatrixRANSAC(matched_2d_cur_norm, matched_2d_old_norm, status);
		reduceVector(matched_2d_cur, status);
		reduceVector(matched_2d_old, status);
		reduceVector(matched_2d_cur_norm, status);
		reduceVector(matched_2d_old_norm, status);

		int gap = 10;
		cv::Mat gap_image(ROW, gap, CV_8UC1, cv::Scalar(255, 255, 255));
		cv::Mat gray_img, loop_match_img;
		cv::Mat old_img = old_kf->image;
		cv::hconcat(image, gap_image, gap_image);
		cv::hconcat(gap_image, old_img, gray_img);
		cvtColor(gray_img, loop_match_img, CV_GRAY2RGB);
		for (int i = 0; i < (int)matched_2d_cur.size(); i++)
		{
			cv::Point2f cur_pt = matched_2d_cur[i];
			cv::circle(loop_match_img, cur_pt, 5, cv::Scalar(0, 255, 0));
		}
		for (int i = 0; i < (int)matched_2d_old.size(); i++)
		{
			cv::Point2f old_pt = matched_2d_old[i];
			old_pt.x += (COL + gap);
			cv::circle(loop_match_img, old_pt, 5, cv::Scalar(0, 255, 0));
		}
		for (int i = 0; i < (int)matched_2d_cur.size(); i++)
		{
			cv::Point2f old_pt = matched_2d_old[i];
			old_pt.x += (COL + gap);
			cv::line(loop_match_img, matched_2d_cur[i], old_pt, cv::Scalar(0, 255, 0), 2, 8, 0);
		}
		cv::Mat notation(50, COL + gap + COL, CV_8UC3, cv::Scalar(255, 255, 255));
		putText(notation, "current frame: " + to_string(index) + "  sequence: " + to_string(sequence), cv::Point2f(20, 30), CV_FONT_HERSHEY_SIMPLEX, 1, cv::Scalar(255), 3);
		putText(notation, "previous frame: " + to_string(old_kf->index) + "  sequence: " + to_string(old_kf->sequence), cv::Point2f(20 + COL + gap, 30), CV_FONT_HERSHEY_SIMPLEX, 1, cv::Scalar(255), 3);
		cv::vconcat(notation, loop_match_img, loop_match_img);

		if ((int)matched_2d_cur.size() > MIN_LOOP_NUM)
		{
			cv::imshow("loop connection", loop_match_img);
			cv::waitKey(100);
			return true;
		}
	}

	//printf("loop final use num %d %lf---------------\n", (int)matched_2d_cur.size(), t_match.toc());
	return false;
}

The result is shown in the figure.

Original article (Chinese): https://www.cnblogs.com/easonslam/p/9040818.html