Text-to-Speech with AV Foundation

AV Foundation

Key Frameworks

  • CoreAudio
    • Audio processing framework
    • Further reading: "Learning Core Audio"
  • CoreVideo
    • Pipeline model for video processing, with frame-by-frame access
  • CoreMedia
    • Low-level data types and interfaces for audio and video processing, such as CMTime
  • CoreAnimation
    • Animation framework
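As an illustration of the CoreMedia types mentioned above, here is a minimal sketch of working with CMTime, which represents time as a rational number (value / timescale); the specific values are arbitrary examples:

```objc
#import <CoreMedia/CoreMedia.h>

// A CMTime is a rational number: value / timescale.
// 1/30 of a second, e.g. one frame of 30 fps video:
CMTime frameDuration = CMTimeMake(1, 30);

// 5 seconds at a 600-tick timescale (600 is a common choice
// because it divides evenly by 24, 25, and 30 fps):
CMTime fiveSeconds = CMTimeMake(3000, 600);

// Arithmetic goes through CMTime functions, not operators:
CMTime total = CMTimeAdd(frameDuration, fiveSeconds);
Float64 seconds = CMTimeGetSeconds(total);
NSLog(@"total = %f seconds", seconds);
```

Representing time as a rational avoids the drift that floating-point seconds accumulate when adding many frame durations.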

What AV Foundation Covers

  • Audio playback and recording
    • AVAudioPlayer
    • AVAudioRecorder
  • Media inspection
    • Metadata about a media file: duration, creation date, and so on
    • Artist information
  • Video playback
    • AVPlayer
    • AVPlayerItem
  • Media capture
    • Camera input via AVCaptureSession
  • Media editing
    • Combining, editing, and modifying audio and video, with transitions and animation, as in iMovie
  • Media processing
    • AVAssetReader
    • AVAssetWriter
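To give a feel for the playback side of the list above, here is a minimal AVAudioPlayer sketch; the file name demo.mp3 is a placeholder for any audio resource in the app bundle:

```objc
#import <AVFoundation/AVFoundation.h>

// Assumes an audio file named demo.mp3 in the main bundle (placeholder name).
NSURL *url = [[NSBundle mainBundle] URLForResource:@"demo" withExtension:@"mp3"];
NSError *error = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
if (error) {
    NSLog(@"Failed to create player: %@", error);
} else {
    [player prepareToPlay]; // preload buffers to reduce playback latency
    [player play];
}
```

Note that in a real app the player must be kept in a strong property; if it is a local variable it is deallocated when the method returns and playback stops immediately.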

Text-to-Speech

  • Text-to-speech is driven mainly by AVSpeechSynthesizer

  • It wraps the common speech operations: speaking, pausing, stopping, and so on

  • Using AV Foundation requires importing the header: #import <AVFoundation/AVFoundation.h>

  • Declare an AVSpeechSynthesizer instance:

#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>

@interface ViewController () <AVSpeechSynthesizerDelegate>
/** The speech synthesizer */
@property (nonatomic, strong) AVSpeechSynthesizer *synthesizer;
/** Voices to speak with */
@property (nonatomic, copy) NSArray *voices;
/// The lines of text to speak
@property (nonatomic, copy) NSArray *speechStrings;
/// Index of the line currently being spoken
@property (nonatomic, assign) NSInteger currentIndex;
/// Label showing the current line (connected in the storyboard)
@property (nonatomic, weak) IBOutlet UILabel *currentLabel;
@end
  • Initialize the instance

// 1. Create the AVSpeechSynthesizer object
_synthesizer = [[AVSpeechSynthesizer alloc] init];
// Set the delegate so the didStart/didFinish callbacks below fire
_synthesizer.delegate = self;

  • Configure the voice to use; the speechVoices method of AVSpeechSynthesisVoice returns the voices the system supports
// List the voices the system supports
NSLog(@"%@", [AVSpeechSynthesisVoice speechVoices]);

// This demo uses only Chinese
_voices = @[[AVSpeechSynthesisVoice voiceWithLanguage:@"zh-CN"]];
  • Then read a text file from the bundle and split it on newlines
// Read the text file from the bundle
- (void)read {
    NSString *path = [[NSBundle mainBundle] pathForResource:@"test" ofType:@"txt"];
    NSString *contents = [NSString stringWithContentsOfFile:path
                                                   encoding:NSUTF8StringEncoding
                                                      error:nil];
    _speechStrings = [contents componentsSeparatedByString:@"\n"];
}
  • Finally, queue one utterance per line to start playback
// Start speaking from a given line index
- (void)beginConversation:(NSInteger)index {
    for (NSInteger i = index; i < self.speechStrings.count; i++) {
        AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:self.speechStrings[i]];
        // Voice to speak with
        utterance.voice = self.voices[0];
        // Speech rate
        utterance.rate = 0.4f;
        // Pitch multiplier (0.5–2.0; below 1.0 lowers the pitch)
        utterance.pitchMultiplier = 0.8f;
        // Brief pause before the next utterance
        utterance.postUtteranceDelay = 0.1f;
        [_synthesizer speakUtterance:utterance];
    }
}
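The 0.4f rate above is a raw value; AV Foundation also exposes named rate constants, so a fragment that clamps a user-chosen rate into the legal range inside the loop above might look like this (the chosenRate variable is illustrative):

```objc
// AVSpeechUtteranceMinimumSpeechRate, AVSpeechUtteranceDefaultSpeechRate,
// and AVSpeechUtteranceMaximumSpeechRate are constants from AV Foundation.
float chosenRate = 0.4f; // e.g. taken from a UISlider (illustrative)
utterance.rate = MAX(AVSpeechUtteranceMinimumSpeechRate,
                     MIN(chosenRate, AVSpeechUtteranceMaximumSpeechRate));
```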

Playback Controls

- (IBAction)play:(id)sender {
    if (_currentIndex == 0) {
        // Nothing queued yet: start from the beginning
        [self beginConversation:_currentIndex];
    } else {
        // Otherwise resume a paused synthesizer
        [_synthesizer continueSpeaking];
    }
}

- (IBAction)stop:(id)sender {
    _currentIndex = 0;
    [_synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
}

- (IBAction)pause:(id)sender {
    [_synthesizer pauseSpeakingAtBoundary:AVSpeechBoundaryImmediate];
}
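Both stopSpeakingAtBoundary: and pauseSpeakingAtBoundary: take an AVSpeechBoundary value; a gentler alternative to the immediate cutoff used above is to let the current word finish first:

```objc
// AVSpeechBoundaryImmediate cuts speech off mid-word;
// AVSpeechBoundaryWord lets the current word finish before pausing.
[_synthesizer pauseSpeakingAtBoundary:AVSpeechBoundaryWord];
// ...later:
[_synthesizer continueSpeaking];
```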

- (IBAction)previous:(id)sender {
    // didStartSpeechUtterance has already advanced _currentIndex past the
    // line being spoken, so stepping back one line means subtracting 2
    _currentIndex -= 2;
    if (_currentIndex <= 0) {
        _currentIndex = 0;
    } else if (_currentIndex >= _speechStrings.count) {
        _currentIndex = _speechStrings.count;
    }
    [_synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
    [self beginConversation:_currentIndex];
}

- (IBAction)next:(id)sender {
    // _currentIndex already points one past the line being spoken, so
    // restarting from it skips ahead to the next line
    if (_currentIndex <= 0) {
        _currentIndex = 0;
    } else if (_currentIndex >= _speechStrings.count) {
        _currentIndex = _speechStrings.count;
    }

    [_synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
    [self beginConversation:_currentIndex];
}

// Delegate methods
- (void)speechSynthesizer:(AVSpeechSynthesizer *)synthesizer didStartSpeechUtterance:(AVSpeechUtterance *)utterance {
    // Advance the index each time an utterance starts, keeping it in range
    _currentIndex++;

    if (_currentIndex <= 0) {
        _currentIndex = 0;
    } else if (_currentIndex >= _speechStrings.count) {
        _currentIndex = _speechStrings.count;
    }
    self.currentLabel.text = [NSString stringWithFormat:@"%zd", _currentIndex];
}
- (void)speechSynthesizer:(AVSpeechSynthesizer *)synthesizer didFinishSpeechUtterance:(AVSpeechUtterance *)utterance {
    
}
Original post: https://www.cnblogs.com/songliquan/p/5424583.html