WebRTC (Android): From Capture to Encoding

I. Capture

1. CameraEnumerator (the camera enumerator)

public interface CameraEnumerator {
  public String[] getDeviceNames();
  public boolean isFrontFacing(String deviceName);
  public boolean isBackFacing(String deviceName);
  public List<CaptureFormat> getSupportedFormats(String deviceName);
  public CameraVideoCapturer createCapturer(String deviceName, CameraVideoCapturer.CameraEventsHandler eventsHandler); // core API: creates a video capturer
}
// CameraEnumerator has two implementations: Camera1Enumerator and Camera2Enumerator (API 21+)

Through a CameraEnumerator we can obtain a video capturer.
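A minimal sketch of how the enumerator is typically used to pick a device. The nested interface below is a hypothetical, trimmed-down stand-in for `org.webrtc.CameraEnumerator` (the real one also exposes `isBackFacing`, `getSupportedFormats`, and `createCapturer`); only the device-selection logic is illustrated.

```java
public class CameraPicker {
    // Hypothetical stub mirroring the two CameraEnumerator methods we need here.
    interface CameraEnumerator {
        String[] getDeviceNames();
        boolean isFrontFacing(String deviceName);
    }

    // Common selection logic: prefer the first front-facing camera,
    // fall back to the first device of any facing.
    static String pickFrontCamera(CameraEnumerator enumerator) {
        String[] names = enumerator.getDeviceNames();
        for (String name : names) {
            if (enumerator.isFrontFacing(name)) {
                return name;
            }
        }
        return names.length > 0 ? names[0] : null;
    }

    public static void main(String[] args) {
        CameraEnumerator fake = new CameraEnumerator() {
            public String[] getDeviceNames() { return new String[] {"camera-0", "camera-1"}; }
            public boolean isFrontFacing(String name) { return name.equals("camera-1"); }
        };
        System.out.println(pickFrontCamera(fake)); // prints camera-1
    }
}
```

On a real device you would construct a `Camera2Enumerator` (API 21+) or fall back to `Camera1Enumerator`, then pass the chosen name to `createCapturer`.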

2. The VideoCapturer interface

public interface VideoCapturer {
  // Initialization; the key step is binding the capturerObserver (held by VideoSource). Threading must be handled carefully
  void initialize(SurfaceTextureHelper surfaceTextureHelper, Context applicationContext, CapturerObserver capturerObserver);
  void startCapture(int width, int height, int framerate);
  void stopCapture() throws InterruptedException;
  void changeCaptureFormat(int width, int height, int framerate);
  void dispose();
}

CameraVideoCapturer is an implementation of VideoCapturer.

3. The CapturerObserver interface

public interface CapturerObserver {
  void onCapturerStarted(boolean success);
  void onCapturerStopped();
  void onFrameCaptured(VideoFrame frame);
}

VideoSource implements this logic and is connected to the VideoCapturer through this interface.
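The binding described above can be sketched in a self-contained way. All classes below are hypothetical stand-ins (the real API passes a `VideoFrame`, not a `String`), showing only the wiring: `initialize()` stores the observer, and every captured frame is pushed through `observer.onFrameCaptured()`.

```java
import java.util.ArrayList;
import java.util.List;

public class CaptureWiring {
    // Trimmed-down stand-in for org.webrtc.CapturerObserver.
    interface CapturerObserver {
        void onCapturerStarted(boolean success);
        void onFrameCaptured(String frame); // the real API passes a VideoFrame
        void onCapturerStopped();
    }

    // Stand-in for a VideoCapturer implementation such as CameraVideoCapturer.
    static class FakeCapturer {
        private CapturerObserver observer;
        void initialize(CapturerObserver observer) { this.observer = observer; }
        void startCapture() { observer.onCapturerStarted(true); }
        void deliver(String frame) { observer.onFrameCaptured(frame); }
        void stopCapture() { observer.onCapturerStopped(); }
    }

    // Stand-in for VideoSource, which implements CapturerObserver internally.
    static class FakeVideoSource implements CapturerObserver {
        final List<String> received = new ArrayList<>();
        public void onCapturerStarted(boolean success) {}
        public void onFrameCaptured(String frame) { received.add(frame); }
        public void onCapturerStopped() {}
    }

    public static void main(String[] args) {
        FakeVideoSource source = new FakeVideoSource();
        FakeCapturer capturer = new FakeCapturer();
        capturer.initialize(source); // binds the observer, as VideoCapturer.initialize does
        capturer.startCapture();
        capturer.deliver("frame-0");
        System.out.println(source.received); // prints [frame-0]
    }
}
```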

The downstream flow:

VideoSource.onFrameCaptured =>
VideoProcessor.onFrameCaptured (user-defined), NativeAndroidVideoTrackSource.onFrameCaptured =>
---- C++
AndroidVideoTrackSource::OnFrameCaptured (crops the image) =>
AdaptedVideoTrackSource::OnFrame =>
VideoBroadcaster::OnFrame => iterates over the sinks and delivers the frame; the sinks include the encoder, etc. (sink_pair.sink->OnFrame(frame))
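The fan-out at the end of the chain can be sketched as follows. This is a simplified, self-contained model of the `VideoBroadcaster` pattern (names and the `String` frame type are stand-ins; the real class is C++ and also tracks per-sink wants).

```java
import java.util.ArrayList;
import java.util.List;

public class BroadcasterSketch {
    interface VideoSink { void onFrame(String frame); }

    static class Broadcaster {
        private final List<VideoSink> sinks = new ArrayList<>();
        void addOrUpdateSink(VideoSink sink) { if (!sinks.contains(sink)) sinks.add(sink); }
        void removeSink(VideoSink sink) { sinks.remove(sink); }
        // Mirrors VideoBroadcaster::OnFrame: iterate the sinks and deliver.
        void onFrame(String frame) { for (VideoSink s : sinks) s.onFrame(frame); }
    }

    public static void main(String[] args) {
        List<String> encoded = new ArrayList<>();
        List<String> rendered = new ArrayList<>();
        Broadcaster b = new Broadcaster();
        b.addOrUpdateSink(encoded::add);  // stands in for the encoder sink
        b.addOrUpdateSink(rendered::add); // stands in for a local preview renderer
        b.onFrame("frame-0");
        System.out.println(encoded + " " + rendered); // prints [frame-0] [frame-0]
    }
}
```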

The role of SurfaceTextureHelper

SurfaceTextureHelper holds a SurfaceTexture object. The SurfaceTexture listens for new images on itself via ```surfaceTexture.setOnFrameAvailableListener(listener, handler);```; when a new image arrives, tryDeliverTextureFrame generates a VideoFrame and passes it outward.
> A VideoFrame can carry several buffer types: OES (texture) and YUV; only the C++ (JNI) side converts OES to YUV.
> RendererCommon.GlDrawer is key: it draws the OES texture into a texture frame (OES, RGB) or a YUV frame. Note: OES is a texture type that supports the camera, Android-only.

Camera2Session holds a Surface. The Surface is passed in when the camera is created, and it is bound to the SurfaceTextureHelper's SurfaceTexture object.
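The delivery path above can be modeled with a small self-contained sketch: the camera writes into the SurfaceTexture, the frame-available listener fires, and `tryDeliverTextureFrame` hands a frame to the registered callback. All class names here are hypothetical stand-ins, not the real WebRTC or Android types.

```java
import java.util.ArrayList;
import java.util.List;

public class TextureHelperSketch {
    interface FrameCallback { void onTextureFrame(String oesFrame); }

    // Stand-in for android.graphics.SurfaceTexture.
    static class FakeSurfaceTexture {
        private Runnable onFrameAvailable;
        void setOnFrameAvailableListener(Runnable listener) { onFrameAvailable = listener; }
        // The camera rendering into the bound Surface would trigger this.
        void produceImage() { if (onFrameAvailable != null) onFrameAvailable.run(); }
    }

    // Stand-in for SurfaceTextureHelper.
    static class FakeTextureHelper {
        final FakeSurfaceTexture surfaceTexture = new FakeSurfaceTexture();
        private FrameCallback callback;
        private int frameCount = 0;

        FakeTextureHelper() {
            // Mirrors surfaceTexture.setOnFrameAvailableListener(listener, handler)
            surfaceTexture.setOnFrameAvailableListener(this::tryDeliverTextureFrame);
        }
        void startListening(FrameCallback cb) { callback = cb; }
        private void tryDeliverTextureFrame() {
            if (callback != null) callback.onTextureFrame("oes-frame-" + frameCount++);
        }
    }

    public static void main(String[] args) {
        List<String> frames = new ArrayList<>();
        FakeTextureHelper helper = new FakeTextureHelper();
        helper.startListening(frames::add);
        helper.surfaceTexture.produceImage(); // simulates a camera image arriving
        helper.surfaceTexture.produceImage();
        System.out.println(frames); // prints [oes-frame-0, oes-frame-1]
    }
}
```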

4. From codec negotiation to the VideoBroadcaster sinks

PeerConnectionFactory.Builder => accepts custom encoders, decoders, an audio device module, audio processing, an FEC controller, a network controller, a network state predictor, and media transport
PeerConnection::SetLocalDescription =>
PeerConnection::ApplyLocalDescription => 
if (IsUnifiedPlan)
{
  PeerConnection::UpdateTransceiversAndDataChannels =>
  PeerConnection::UpdateTransceiverChannel => 
  PeerConnection::CreateVideoChannel =>
  ChannelManager::CreateVideoChannel {
      // media_engine_'s actual type is CompositeMediaEngine<WebRtcVoiceEngine, WebRtcVideoEngine>
      VideoMediaChannel* media_channel = media_engine_->video().CreateMediaChannel  // creates the concrete media_channel of type WebRtcVideoChannel
      // creates a video_channel of type VideoChannel
    auto video_channel = std::make_unique<VideoChannel>(
        worker_thread_, network_thread_, signaling_thread,
        absl::WrapUnique(media_channel), content_name, srtp_required,
        crypto_options, ssrc_generator);
  }
}
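The CompositeMediaEngine pattern mentioned in the comment above can be sketched as follows: one engine object exposes `voice()` and `video()` sub-engines, and the video sub-engine acts as the factory for media channels. All names below are simplified stand-ins for the C++ types.

```java
public class CompositeEngineSketch {
    // Stand-ins for the two sub-engine types combined by CompositeMediaEngine.
    interface VoiceEngine { String createMediaChannel(); }
    interface VideoEngine { String createMediaChannel(); }

    static class CompositeMediaEngine {
        private final VoiceEngine voice;
        private final VideoEngine video;
        CompositeMediaEngine(VoiceEngine voice, VideoEngine video) {
            this.voice = voice;
            this.video = video;
        }
        VoiceEngine voice() { return voice; }
        VideoEngine video() { return video; }
    }

    public static void main(String[] args) {
        CompositeMediaEngine engine = new CompositeMediaEngine(
            () -> "WebRtcVoiceMediaChannel",
            () -> "WebRtcVideoChannel");
        // Analogue of media_engine_->video().CreateMediaChannel
        System.out.println(engine.video().createMediaChannel()); // prints WebRtcVideoChannel
    }
}
```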

The core object: WebRtcVideoChannel

WebRtcVideoChannel::AddSendStream => //new WebRtcVideoSendStream
WebRtcVideoChannel::WebRtcVideoSendStream::SetCodec =>
WebRtcVideoChannel::WebRtcVideoSendStream::RecreateWebRtcStream 
{
// CreateVideoSendStream creates the VideoSendStream
stream_ = call_->CreateVideoSendStream(std::move(config),
                                         parameters_.encoder_config.Copy());
// SetSource sets the image data source; in the end VideoTrack's AddOrUpdateSink
// registers a sink object, and that sink is a VideoStreamEncoder
stream_->SetSource(this, GetDegradationPreference()); // (VideoSendStream::SetSource) =>
VideoStreamEncoder::SetSource
}
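The SetSource step above can be sketched self-contained: the send stream registers its encoder as a sink on the video source, so every frame fanned out by the source's broadcaster reaches the encoder. All names below are hypothetical stand-ins for the C++ types.

```java
import java.util.ArrayList;
import java.util.List;

public class SetSourceSketch {
    interface VideoSink { void onFrame(String frame); }

    // Stand-in for the video source with its internal broadcaster.
    static class FakeVideoSource {
        private final List<VideoSink> sinks = new ArrayList<>();
        void addOrUpdateSink(VideoSink sink) { if (!sinks.contains(sink)) sinks.add(sink); }
        void pushFrame(String frame) { for (VideoSink s : sinks) s.onFrame(frame); }
    }

    // Stand-in for VideoStreamEncoder.
    static class FakeEncoder implements VideoSink {
        final List<String> encoded = new ArrayList<>();
        // Mirrors the effect of VideoStreamEncoder::SetSource: the encoder
        // ends up registered on the source via AddOrUpdateSink.
        void setSource(FakeVideoSource source) { source.addOrUpdateSink(this); }
        public void onFrame(String frame) { encoded.add("encoded(" + frame + ")"); }
    }

    public static void main(String[] args) {
        FakeVideoSource source = new FakeVideoSource();
        FakeEncoder encoder = new FakeEncoder();
        encoder.setSource(source);   // analogue of stream_->SetSource(...)
        source.pushFrame("frame-0");
        System.out.println(encoder.encoded); // prints [encoded(frame-0)]
    }
}
```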
Original post: https://www.cnblogs.com/WillingCPP/p/14448173.html