Stereo Mix Device

  1. First, right-click the small speaker icon in the system-tray area at the bottom-right corner and click "Recording devices". (Steps adapted from a Baidu Jingyan tutorial.)

  2. We now see three recording devices: Microphone, Line In, and Stereo Mix.

     By default, Stereo Mix is disabled, which is exactly why we normally cannot record the computer's internal audio.

  3. Note that Stereo Mix is sometimes hidden, so it can easily be mistaken for not existing at all.

     Simply right-click on a blank area of the dialog and check both "Show Disabled Devices" and "Show Disconnected Devices" to make it appear.

  4. Once it appears, right-click it and choose Enable.

  5. If you only want to record the computer's internal audio and not the microphone, you can temporarily disable the microphone. Then no amount of noise from outside will affect the recording of the internal audio.

  6. With everything set up, the effect can be seen in recording software.

     For example, in the screen-recording software Camtasia we can select Stereo Mix as the audio source, and what gets recorded is then the computer's internal audio.

  7. This is a different concept from recording through the microphone, which is an external recording device.


WASAPI can record audio without depending on whether a Stereo Mix device exists. Moreover, it obtains the digital signal directly, so the captured audio is very clear.

1 Get the default audio device: GetDefaultAudioEndpoint

HRESULT GetDefaultAudioEndpoint(
  [in]  EDataFlow dataFlow,
  [in]  ERole     role,
  [out] IMMDevice **ppDevice
);

  • To get the microphone (capture endpoint):

    IMMDevice *pDevice = NULL;
    hr = pEnumerator->GetDefaultAudioEndpoint(eCapture, eConsole, &pDevice);

  • To get the render endpoint:

    hr = pEnumerator->GetDefaultAudioEndpoint(eRender, eConsole, &pDevice);

 For a code implementation, see: WASAPI 01 Capturing audio from the default device (https://www.jianshu.com/p/968f684ecd83)
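As a quick illustration, here is a minimal sketch of obtaining the default render endpoint through IMMDeviceEnumerator. The helper name GetDefaultRenderDevice and the reduced error handling are assumptions for illustration only, and COM is assumed to have been initialized with CoInitializeEx.

#include <mmdeviceapi.h>

// Minimal sketch: get the default render endpoint, i.e. the device whose output
// will later be captured via loopback. Assumes CoInitializeEx was already called.
IMMDevice* GetDefaultRenderDevice()
{
    IMMDeviceEnumerator* pEnumerator = NULL;
    IMMDevice* pDevice = NULL;

    // Create the device enumerator.
    HRESULT hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                                  __uuidof(IMMDeviceEnumerator), (void**)&pEnumerator);
    if (FAILED(hr)) return NULL;

    // eRender selects the playback device; pass eCapture instead to get the microphone.
    hr = pEnumerator->GetDefaultAudioEndpoint(eRender, eConsole, &pDevice);
    pEnumerator->Release();
    return SUCCEEDED(hr) ? pDevice : NULL;
}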

2 When initializing the audio client for loopback (internal) capture, pay attention to a few parameter settings.

pAudioClient->Initialize(
        AUDCLNT_SHAREMODE_SHARED,
        AUDCLNT_STREAMFLAGS_LOOPBACK,
        hnsRequestedDuration, // affects the completeness of the capture and whether data is lost, especially when several programs operate concurrently
        0,
        pwfx,
        NULL);

hnsRequestedDuration (named hnsBufferDuration in the documentation):

This buffer is shared between the application side, which calls GetBuffer, and the low-level system capture; its size determines how many packets, or equivalently how much time's worth of data, it can hold at most. The system captures audio on a fixed period, delivering a fixed amount of data each time, so GetBuffer only obtains a new packet once each period elapses. If the application has been blocked for a while, it can then read several packets back-to-back.

hnsBufferDuration

The buffer capacity as a time value. This parameter is of type REFERENCE_TIME and is expressed in 100-nanosecond units. This parameter contains the buffer size that the caller requests for the buffer that the audio application will share with the audio engine (in shared mode) or with the endpoint device (in exclusive mode). If the call succeeds, the method allocates a buffer that is a least this large. For more information about REFERENCE_TIME, see the Windows SDK documentation. For more information about buffering requirements, see Remarks.

https://docs.microsoft.com/zh-cn/windows/win32/api/audioclient/nf-audioclient-iaudioclient-initialize?redirectedfrom=MSDN
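Putting these pieces together, a loopback-initialization sketch might look like the following. The REFTIMES_PER_SEC constant and the 1-second buffer choice are assumptions for illustration; pDevice is the default render endpoint obtained earlier.

#define REFTIMES_PER_SEC 10000000    // REFERENCE_TIME uses 100-ns units, so this is 1 second

IAudioClient*  pAudioClient = NULL;
WAVEFORMATEX*  pwfx = NULL;
REFERENCE_TIME hnsRequestedDuration = REFTIMES_PER_SEC;  // request a 1-second shared buffer

// Activate an IAudioClient on the default render device obtained earlier.
HRESULT hr = pDevice->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL, (void**)&pAudioClient);

// Use the engine's mix format so no format conversion is needed in shared mode.
if (SUCCEEDED(hr)) hr = pAudioClient->GetMixFormat(&pwfx);

// AUDCLNT_STREAMFLAGS_LOOPBACK on a render endpoint captures whatever is being played back.
if (SUCCEEDED(hr))
    hr = pAudioClient->Initialize(AUDCLNT_SHAREMODE_SHARED,
                                  AUDCLNT_STREAMFLAGS_LOOPBACK,
                                  hnsRequestedDuration,
                                  0,      // hnsPeriodicity must be 0 in shared mode
                                  pwfx,
                                  NULL);  // no audio session GUID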

 

3 Key interface: GetBuffer

IAudioCaptureClient::GetBuffer

The GetBuffer method retrieves a pointer to the next available packet of data in the capture endpoint buffer.

HRESULT GetBuffer(
  BYTE  **ppData,
  UINT32  *pNumFramesToRead,
  DWORD  *pdwFlags,
  UINT64  *pu64DevicePosition,
  UINT64  *pu64QPCPosition
);

Parameters

ppData

[out]  Pointer to a pointer variable into which the method writes the starting address of the next data packet that is available for the client to read.

pNumFramesToRead

[out]  Pointer to a UINT32 variable into which the method writes the frame count (the number of audio frames available in the data packet). The client should either read the entire data packet or none of it.

pdwFlags

[out]  Pointer to a DWORD variable into which the method writes the buffer-status flags. The method writes either 0 or the bitwise-OR combination of one or more of the following _AUDCLNT_BUFFERFLAGS enumeration values:

AUDCLNT_BUFFERFLAGS_SILENT

AUDCLNT_BUFFERFLAGS_DATA_DISCONTINUITY

AUDCLNT_BUFFERFLAGS_TIMESTAMP_ERROR

pu64DevicePosition

[out]  Pointer to a UINT64 variable into which the method writes the device position of the first audio frame in the data packet. The device position is expressed as the number of audio frames from the start of the stream. This parameter can be NULL if the client does not require the device position. For more information, see Remarks.

pu64QPCPosition

[out]  Pointer to a UINT64 variable into which the method writes the value of the performance counter at the time that the audio endpoint device recorded the device position of the first audio frame in the data packet. The method converts the counter value to 100-nanosecond units before writing it to *pu64QPCPosition. This parameter can be NULL if the client does not require the performance counter value. For more information, see Remarks.

Return Value

If the method succeeds and *pNumFramesToRead is nonzero, indicating that a packet is ready to be read, the method returns S_OK. If it succeeds and *pNumFramesToRead is 0, indicating that no capture data is available to be read, the method returns AUDCLNT_S_BUFFER_EMPTY. If it fails, possible return codes include, but are not limited to, the values shown in the following table.

Return code Description
AUDCLNT_E_OUT_OF_ORDER A previous IAudioCaptureClient::GetBuffer call is still in effect.
AUDCLNT_E_DEVICE_INVALIDATED The audio endpoint device has been unplugged, or the audio hardware or associated hardware resources have been reconfigured, disabled, removed, or otherwise made unavailable for use.
AUDCLNT_E_BUFFER_OPERATION_PENDING Buffer cannot be accessed because a stream reset is in progress.
AUDCLNT_E_SERVICE_NOT_RUNNING The Windows audio service is not running.
E_POINTER Parameter ppData, pNumFramesToRead, or pdwFlags is NULL.

Remarks

This method retrieves the next data packet from the capture endpoint buffer. At a particular time, the buffer might contain zero, one, or more packets that are ready to read. Typically, a buffer-processing thread that reads data from a capture endpoint buffer reads all of the available packets each time the thread executes.

During processing of an audio capture stream, the client application alternately calls GetBuffer and the IAudioCaptureClient::ReleaseBuffer method. The client can read no more than a single data packet with each GetBuffer call. Following each GetBuffer call, the client must call ReleaseBuffer to release the packet before the client can call GetBuffer again to get the next packet.

Two or more consecutive calls either to GetBuffer or to ReleaseBuffer are not permitted and will fail with error code AUDCLNT_E_OUT_OF_ORDER. To ensure the correct ordering of calls, a GetBuffer call and its corresponding ReleaseBuffer call must occur in the same thread.

Note: if the previous GetBuffer returned a packet of size 0, GetBuffer can be called again directly without a release; even so, it is best to keep each GetBuffer paired with a ReleaseBuffer so the buffer is returned promptly after use.

During each GetBuffer call, the caller must either obtain the entire packet or none of it. Before reading the packet, the caller can check the packet size (available through the pNumFramesToRead parameter) to make sure that it has enough room to store the entire packet.

During each ReleaseBuffer call, the caller reports the number of audio frames that it read from the buffer. This number must be either the (nonzero) packet size or 0. If the number is 0, then the next GetBuffer call will present the caller with the same packet as in the previous GetBuffer call.

Following each GetBuffer call, the data in the packet remains valid until the next ReleaseBuffer call releases the buffer.

The client must call ReleaseBuffer after a GetBuffer call that successfully obtains a packet of any size other than 0. The client has the option of calling or not calling ReleaseBuffer to release a packet of size 0.

The method outputs the device position and performance counter through the pu64DevicePosition and pu64QPCPosition output parameters. These values provide a time stamp for the first audio frame in the data packet. Through the pdwFlags output parameter, the method indicates whether the reported device position is valid.

The device position that the method writes to *pu64DevicePosition is the stream-relative position of the audio frame that is currently playing through the speakers (for a rendering stream) or being recorded through the microphone (for a capture stream). The position is expressed as the number of frames from the start of the stream. The size of a frame in an audio stream is specified by the nBlockAlign member of the WAVEFORMATEX (or WAVEFORMATEXTENSIBLE) structure that specifies the stream format. The size, in bytes, of an audio frame equals the number of channels in the stream multiplied by the sample size per channel. For example, for a stereo (2-channel) stream with 16-bit samples, the frame size is four bytes.

The performance counter value that GetBuffer writes to *pu64QPCPosition is not the raw counter value obtained from the QueryPerformanceCounter function. If t is the raw counter value, and if f is the frequency obtained from the QueryPerformanceFrequency function, GetBuffer calculates the performance counter value as follows:

*pu64QPCPosition = 10,000,000 * t / f

The result is expressed in 100-nanosecond units. For more information about QueryPerformanceCounter and QueryPerformanceFrequency, see the Windows SDK documentation.

If no new packet is currently available, the method sets *pNumFramesToRead = 0 and returns status code AUDCLNT_S_BUFFER_EMPTY. In this case, the method does not write to the variables that are pointed to by the ppData, pu64DevicePosition, and pu64QPCPosition parameters.

Clients should avoid excessive delays between the GetBuffer call that acquires a packet and the ReleaseBuffer call that releases the packet. The implementation of the audio engine assumes that the GetBuffer call and the corresponding ReleaseBuffer call occur within the same buffer-processing period. Clients that delay releasing a packet for more than one period risk losing sample data.

For a code example that calls the GetBuffer method, see Capturing a Stream.
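To make the GetBuffer/ReleaseBuffer pairing concrete, below is a sketch of a simple polling capture loop. GetNextPacketSize is part of IAudioCaptureClient; OnPacket and the 10 ms sleep are placeholders standing in for whatever the application actually does with the data, and pAudioClient is assumed to have been initialized for loopback and started as above.

// Capture-loop sketch. Assumes pAudioClient was initialized for loopback as shown
// earlier and started with pAudioClient->Start(); OnPacket is a hypothetical consumer.
IAudioCaptureClient* pCaptureClient = NULL;
HRESULT hr = pAudioClient->GetService(__uuidof(IAudioCaptureClient), (void**)&pCaptureClient);

while (SUCCEEDED(hr))
{
    UINT32 packetFrames = 0;
    hr = pCaptureClient->GetNextPacketSize(&packetFrames);
    if (FAILED(hr)) break;
    if (packetFrames == 0) { Sleep(10); continue; }   // no packet ready yet, wait a little

    BYTE*  pData = NULL;
    UINT32 numFrames = 0;
    DWORD  flags = 0;
    hr = pCaptureClient->GetBuffer(&pData, &numFrames, &flags, NULL, NULL);
    if (FAILED(hr)) break;

    if (flags & AUDCLNT_BUFFERFLAGS_SILENT)
        pData = NULL;                  // the engine marked this packet as silence
    OnPacket(pData, numFrames);        // hypothetical: copy or encode the PCM frames

    // Each successful GetBuffer of a nonzero packet must be paired with a ReleaseBuffer.
    hr = pCaptureClient->ReleaseBuffer(numFrames);
}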

Requirements

Client: Windows Vista

Header: Include Audioclient.h.


Original post: https://www.cnblogs.com/8335IT/p/9559429.html