Chinese Word Frequency Statistics

Download a long Chinese article (a novel works well).

Read the text to be analyzed from the file.

f = open('gzccnews.txt', 'r', encoding='utf-8')
news = f.read()  # jieba works on strings, not file objects, so read the text in
f.close()

Install jieba and use it for Chinese word segmentation.

pip install jieba

import jieba

words = jieba.lcut(news)  # lcut already returns a list, so no extra list() call is needed
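
jieba.lcut returns the segmented tokens as a plain list, so the result can be inspected directly. A quick check using the sample sentence from jieba's own documentation:

import jieba

print(jieba.lcut('我来到北京清华大学'))
# with the default dictionary this prints: ['我', '来到', '北京', '清华大学']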

Generate the word frequency counts.

Sort them by frequency.

Exclude grammatical words: pronouns, articles, conjunctions, and the like.

Output the top 20 words by frequency. The complete script below walks through these steps.

import jieba

f = open('novel.txt', 'r', encoding='utf-8')
novel = f.read()
f.close()

# whitespace plus common function words (pronouns, particles, conjunctions);
# a representative stop list, extend it to suit your text
exclude = {'\n', '\u3000', '-', ' ',
           '的', '了', '是', '在', '和', '与', '也', '又', '就', '都',
           '而', '或', '及', '把', '被', '对', '向', '从', '给', '为',
           '我', '你', '他', '她', '它', '们', '这', '那', '之', '其',
           '着', '呢', '吧', '啊', '吗', '不', '没', '很', '更',
           '便', '个', '一个'}

sep = ''',。“”‘’、?!:'''
for c in sep:
    novel = novel.replace(c, ' ')

novels = jieba.lcut(novel)  # lcut already returns a list

freq = {}
words = set(novels) - exclude
for w in words:
    freq[w] = novels.count(w)  # count tokens in the segmented list, not substrings of the raw text

items = list(freq.items())
items.sort(key=lambda x: x[1], reverse=True)

for i in range(20):
    print(items[i])
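
For reference, the standard library's collections.Counter condenses the counting and sorting steps into a single pass. A sketch under the same assumptions as above (novel.txt as input; an abbreviated stop list standing in for the fuller exclude set):

import jieba
from collections import Counter

f = open('novel.txt', 'r', encoding='utf-8')
novel = f.read()
f.close()

# abbreviated stop list: whitespace, the punctuation handled by the
# replace loop above, and a few function words
exclude = set('\n\u3000 ,。“”‘’、?!:') | {'的', '了', '是', '便', '一个'}

# Counter tallies the token list in one pass
counts = Counter(w for w in jieba.lcut(novel) if w not in exclude)

# most_common(20) returns the 20 highest-count (word, count) pairs, already sorted
for word, n in counts.most_common(20):
    print(word, n)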

Original article: https://www.cnblogs.com/wumeiying/p/8664830.html