Python: writing word-segmentation results to a txt file

The segmentation tool I used is jieba: import jieba, then call jieba.cut(). But jieba.cut() returns a generator, not a list.

It prints the segmentation results fine, but writing them to a txt file kept failing with errors like: a bytes-like object is required, not 'generator'. (That particular wording usually means the file was opened in binary mode; in text mode the message would ask for str instead.)

I then wrapped the result in str(), but got a similar error, still ending in not 'generator'.
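The reason str() doesn't help is that Python just stringifies the generator object itself, not the words it would yield. jieba isn't even needed to see this; any generator behaves the same way:

```python
# str() on a generator returns the generator's repr, not its items
gen = (w for w in ["我", "爱", "北京"])
print(str(gen))       # e.g. <generator object <genexpr> at 0x...>
# joining consumes the generator and yields the actual words
print(" ".join(gen))  # 我 爱 北京
```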

After some thought, I wrapped the result in list(), stringified that, and then post-processed the string to strip out the '[', ']', quote and ',' characters.
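The brackets, quotes and commas come from str() rendering the list in Python's literal syntax; stripping them with chained replace() calls is exactly what text_save below does. A minimal illustration:

```python
words = ["我", "爱", "北京"]
s = str(words)
print(s)  # ['我', '爱', '北京']
# strip the list-literal punctuation; ", " leaves a space once ',' is removed
cleaned = s.replace('[', '').replace(']', '').replace("'", '').replace(',', '')
print(cleaned)  # 我 爱 北京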

def text_save(filename, data):
    file = open(filename, 'a+')
    for i in range(len(data)):
        s = str(data[i]).replace('[', '').replace(']', '')
        s = s.replace("'", '').replace(',', '') + ' '
        l = clearSen(s)  # clearSen(): a separate cleanup helper defined elsewhere
        file.write(l)
    file.close()  # close() takes no arguments

With that, the writing worked, but then I ran into a mojibake problem: the text written to the txt file came out garbled. Ugh.

Testing each step's output in the terminal, I found the problem was at the writing stage: the file needs to be opened with encoding='utf-8'.

def text_save(filename, data):
    file = open(filename, 'a+', encoding='utf-8')
    for i in range(len(data)):
        s = str(data[i]).replace('[', '').replace(']', '')
        s = s.replace("'", '').replace(',', '') + ' '
        l = clearSen(s)  # clearSen(): a separate cleanup helper defined elsewhere
        file.write(l)
    file.close()  # close() takes no arguments
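As an aside, since the segments are already plain strings, the whole str()/replace round-trip can be skipped by joining the list (or the generator directly) with spaces. A minimal sketch, keeping the same append mode and utf-8 encoding:

```python
def text_save(filename, words):
    # words: an iterable of segmented strings, e.g. list(jieba.cut(sentence))
    with open(filename, 'a+', encoding='utf-8') as f:
        f.write(' '.join(words) + ' ')
```

The with-statement also closes the file automatically, so no explicit close() call is needed.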

# add-sentence feature
def usr_add_sentence():
    correct_sentence = entry_add.get()  # read the sentence from the GUI entry widget
    correct_sentences = list(jieba.cut(correct_sentence))
   # clearSen(correct_sentences)
    print(correct_sentences)
    text_save('./data/kenlm/2014_words.txt', correct_sentences)
    text_save('./data/kenlm/people2014_words.txt', correct_sentences)

over~

Original post: https://www.cnblogs.com/baobaotql/p/10826632.html