Bounding-box regression in anchor_target_layer

In the anchor_target layer, these two lines compute the bounding-box regression targets:

bbox_targets = np.zeros((len(inds_inside), 4), dtype=np.float32)   # pre-allocated, then immediately overwritten below
bbox_targets = _compute_targets(anchors, gt_boxes[argmax_overlaps, :])
def _compute_targets(ex_rois, gt_rois):
    """Compute bounding-box regression targets for an image."""

    assert ex_rois.shape[0] == gt_rois.shape[0]
    assert ex_rois.shape[1] == 4   # anchors: x1, y1, x2, y2
    assert gt_rois.shape[1] == 5   # gt boxes: x1, y1, x2, y2, class label

    # Only the 4 coordinate columns of the gt boxes take part in the regression.
    return bbox_transform(ex_rois, gt_rois[:, :4]).astype(np.float32, copy=False)
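
For context, the shape asserts reflect the data layouts used in this layer: anchors are (N, 4) arrays of corner coordinates, while gt_boxes are (K, 5) arrays whose fifth column is the class label, which is why only gt_rois[:, :4] is handed to bbox_transform. A minimal sketch with made-up values (it assumes the bbox_transform shown below is in scope):

import numpy as np

# Toy anchors kept inside the image: (N, 4) -> x1, y1, x2, y2
anchors = np.array([[0., 0., 15., 15.],
                    [8., 8., 23., 23.]], dtype=np.float32)

# Toy ground-truth boxes: (K, 5) -> x1, y1, x2, y2, class label
gt_boxes = np.array([[2., 3., 17., 20., 7.]], dtype=np.float32)

# argmax_overlaps maps each of the N anchors to its best-matching gt box;
# here both toy anchors happen to match gt box 0.
argmax_overlaps = np.array([0, 0])

bbox_targets = _compute_targets(anchors, gt_boxes[argmax_overlaps, :])   # shape (N, 4)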

The regression formula itself is implemented in bbox_transform:

def bbox_transform(ex_rois, gt_rois):
    # Widths and heights use +1 because box coordinates are inclusive pixel indices.
    ex_widths = ex_rois[:, 2] - ex_rois[:, 0] + 1.0
    ex_heights = ex_rois[:, 3] - ex_rois[:, 1] + 1.0
    ex_ctr_x = ex_rois[:, 0] + 0.5 * ex_widths
    ex_ctr_y = ex_rois[:, 1] + 0.5 * ex_heights

    gt_widths = gt_rois[:, 2] - gt_rois[:, 0] + 1.0
    gt_heights = gt_rois[:, 3] - gt_rois[:, 1] + 1.0
    gt_ctr_x = gt_rois[:, 0] + 0.5 * gt_widths
    gt_ctr_y = gt_rois[:, 1] + 0.5 * gt_heights

    # The paper's t_x, t_y, t_w, t_h: center offsets normalized by the anchor
    # size, plus log ratios of the widths and heights.
    targets_dx = (gt_ctr_x - ex_ctr_x) / ex_widths
    targets_dy = (gt_ctr_y - ex_ctr_y) / ex_heights
    targets_dw = np.log(gt_widths / ex_widths)
    targets_dh = np.log(gt_heights / ex_heights)

    targets = np.vstack(
        (targets_dx, targets_dy, targets_dw, targets_dh)).transpose()
    return targets
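
As a sanity check, here is a small hand-worked example of these formulas; the boxes are made up and the function is the bbox_transform defined above:

ex = np.array([[0., 0., 15., 15.]])   # width = height = 16, center = (8, 8)
gt = np.array([[2., 3., 17., 20.]])   # width = 16, height = 18, center = (10, 12)

print(bbox_transform(ex, gt))
# dx = (10 - 8) / 16 = 0.125
# dy = (12 - 8) / 16 = 0.25
# dw = log(16 / 16) = 0.0
# dh = log(18 / 16) ≈ 0.1178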

argmax_overlaps holds, for each anchor, the index of the gt box with the largest overlap, so bbox_targets stores the bounding-box regression targets between each anchor and the ground-truth box it overlaps most.
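
As a rough sketch of where argmax_overlaps comes from (the real layer uses a compiled bbox_overlaps helper; the pure-NumPy iou_matrix below is only an illustrative stand-in, reusing the toy anchors and gt_boxes from the earlier sketch): the overlaps matrix has one row per anchor and one column per gt box, and taking the argmax along axis 1 picks the best-matching gt box for each anchor.

def iou_matrix(boxes_a, boxes_b):
    """(N, 4) x (K, 4) -> (N, K) matrix of intersection-over-union values."""
    N, K = boxes_a.shape[0], boxes_b.shape[0]
    ious = np.zeros((N, K), dtype=np.float32)
    for n in range(N):
        for k in range(K):
            iw = min(boxes_a[n, 2], boxes_b[k, 2]) - max(boxes_a[n, 0], boxes_b[k, 0]) + 1.0
            ih = min(boxes_a[n, 3], boxes_b[k, 3]) - max(boxes_a[n, 1], boxes_b[k, 1]) + 1.0
            if iw <= 0 or ih <= 0:
                continue
            area_a = (boxes_a[n, 2] - boxes_a[n, 0] + 1.0) * (boxes_a[n, 3] - boxes_a[n, 1] + 1.0)
            area_b = (boxes_b[k, 2] - boxes_b[k, 0] + 1.0) * (boxes_b[k, 3] - boxes_b[k, 1] + 1.0)
            ious[n, k] = iw * ih / (area_a + area_b - iw * ih)
    return ious

overlaps = iou_matrix(anchors, gt_boxes[:, :4])   # (N, K)
argmax_overlaps = overlaps.argmax(axis=1)         # (N,): index of the best gt box per anchor
bbox_targets = _compute_targets(anchors, gt_boxes[argmax_overlaps, :])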

Since about two thirds of all_anchors were clipped away, keeping only the anchors that lie inside the image, the _unmap function scatters the targets back to the full anchor set as one of this layer's outputs; the result is then reshaped into the corresponding blob format and fed to rpn_loss_bbox.
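
The _unmap helper is roughly the following (a sketch consistent with the behavior described above, where total_anchors stands for the number of anchors before clipping and inds_inside for the indices of the anchors that were kept):

def _unmap(data, count, inds, fill=0):
    """Scatter `data`, computed only for the kept anchors, back into an array of
    size `count` (all anchors), filling the dropped entries with `fill`."""
    if len(data.shape) == 1:
        ret = np.empty((count,), dtype=np.float32)
        ret.fill(fill)
        ret[inds] = data
    else:
        ret = np.empty((count,) + data.shape[1:], dtype=np.float32)
        ret.fill(fill)
        ret[inds, :] = data
    return ret

# (len(inds_inside), 4) -> (total_anchors, 4); clipped anchors get all-zero targets
bbox_targets = _unmap(bbox_targets, total_anchors, inds_inside, fill=0)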

The other input to rpn_loss_bbox is the set of 4 coordinate offsets predicted from the extracted features.

In fact, rpn_loss_bbox is the second part of the RPN loss function, the part that computes the box loss. In the paper its two inputs are t_i and t_i*. I had assumed that t_i and t_i* were the 4 coordinates of two boxes (i.e., top-left and bottom-right corners), but reading the code shows that t_i is rpn_bbox_pred, which is a feature map (i.e., a feature tensor), while t_i* is the result of the anchor-to-ground-truth bounding-box regression (i.e., Δx, Δy, Δw, Δh). This also makes it clear that rpn_bbox_pred does not directly produce RoI coordinates; it is a feature map.
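
To make the shapes concrete, here is a rough NumPy sketch of the box term that rpn_loss_bbox computes between t_i and t_i*. It is only an illustration under assumed shapes: A, H and W are made-up values, the real layer is a Caffe SmoothL1Loss layer, and the bbox_inside/outside weights that restrict the loss to positive anchors and normalize it are omitted.

def smooth_l1(x, sigma=3.0):
    """Elementwise smooth L1 (sigma = 3.0 is the usual RPN setting)."""
    s2 = sigma ** 2
    ax = np.abs(x)
    return np.where(ax < 1.0 / s2, 0.5 * s2 * x ** 2, ax - 0.5 / s2)

A, H, W = 9, 38, 50                                   # anchors per location, feature-map size (assumed)
rpn_bbox_pred = np.random.randn(1, 4 * A, H, W)       # t_i: predicted (dx, dy, dw, dh) per anchor
bbox_targets  = np.zeros((1, 4 * A, H, W))            # t_i*: the reshaped targets from this layer

# Box loss: smooth L1 over the per-anchor 4-vectors; in the real layer only
# positive anchors contribute, via the inside/outside weight blobs.
rpn_loss_bbox = smooth_l1(rpn_bbox_pred - bbox_targets).sum()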

Original post: https://www.cnblogs.com/ymjyqsx/p/7603803.html