Homework for 11/22

1. Use naive Bayes models to classify flowers in the iris dataset

Try three different types of naive Bayes:

Gaussian

Multinomial

Bernoulli

from sklearn.datasets import load_iris
iris = load_iris()
from sklearn.naive_bayes import GaussianNB      # Gaussian naive Bayes
iris.data[55]      # peek at one sample's features
iris.target[55]    # and its label

gnb = GaussianNB()                      # model
gnb.fit(iris.data, iris.target)         # train
y_pred = gnb.predict(iris.data)         # classify the training samples

print(iris.data.shape[0], (iris.target != y_pred).sum())    # total samples, misclassified count

from sklearn.datasets import load_iris
iris = load_iris()
from sklearn.naive_bayes import BernoulliNB    # Bernoulli naive Bayes
iris.data[55]
iris.target[55]

bnb = BernoulliNB()                     # model
bnb.fit(iris.data, iris.target)         # train
y_pred = bnb.predict(iris.data)         # classify the training samples

print(iris.data.shape[0], (iris.target != y_pred).sum())    # total samples, misclassified count

from sklearn.datasets import load_iris
iris = load_iris()
from sklearn.naive_bayes import MultinomialNB   # multinomial naive Bayes
iris.data[55]
iris.target[55]

mnb = MultinomialNB()                   # model
mnb.fit(iris.data, iris.target)         # train
y_pred = mnb.predict(iris.data)         # classify the training samples

print(iris.data.shape[0], (iris.target != y_pred).sum())    # total samples, misclassified count
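The three blocks above differ only in which estimator they instantiate, so the comparison can also be written as one loop. A minimal sketch (using accuracy_score, which is not in the original code, to report training-set accuracy instead of the raw error count):

from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB

iris = load_iris()
for model in (GaussianNB(), MultinomialNB(), BernoulliNB()):
    y_pred = model.fit(iris.data, iris.target).predict(iris.data)    # fit and predict on the same data
    print(type(model).__name__, accuracy_score(iris.target, y_pred))

Since the iris features are continuous measurements, GaussianNB is the natural fit here; MultinomialNB assumes count-like features and BernoulliNB binarizes each feature (at 0 by default), so they can be expected to do worse on this dataset.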

2. Use sklearn.model_selection.cross_val_score() to validate the models.

from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score    # cross-validation for GaussianNB
gnb = GaussianNB()
scores = cross_val_score(gnb, iris.data, iris.target, cv=10)
print("Accuracy: %.3f" % scores.mean())

from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score    # cross-validation for BernoulliNB
bnb = BernoulliNB()
scores = cross_val_score(bnb, iris.data, iris.target, cv=10)
print("Accuracy: %.3f" % scores.mean())

from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score    # cross-validation for MultinomialNB
mnb = MultinomialNB()
scores = cross_val_score(mnb, iris.data, iris.target, cv=10)
print("Accuracy: %.3f" % scores.mean())

 

3. Spam message classification

Data preparation:

  • Read the message data with csv, splitting out the message label and the message text.
  • Preprocess the message text: drop words shorter than 3 characters, drop words that carry no meaning (stop words), etc.

Try the nltk library:

pip install nltk

import nltk

nltk.download()    # or nltk.download('stopwords') for just the stop-word list

If that does not work, fall back to a word-frequency-based preprocessing approach (see the sketch below).
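A minimal preprocessing sketch covering either path: it uses nltk's English stop-word list when the corpus is available and otherwise falls back to the plain length filter (the preprocess function and the sample message are illustrative, not part of the original assignment):

import re

try:
    from nltk.corpus import stopwords
    stop_words = set(stopwords.words('english'))    # needs nltk.download('stopwords') beforehand
except (ImportError, LookupError):
    stop_words = set()                              # nltk unavailable: rely on the length filter only

def preprocess(text):
    # lowercase, keep alphabetic tokens, drop words shorter than 3 characters and stop words
    tokens = re.findall(r'[a-z]+', text.lower())
    return ' '.join(t for t in tokens if len(t) >= 3 and t not in stop_words)

print(preprocess("WINNER!! You have won a 1 week FREE membership"))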

Split the data into training and test sets

  • from sklearn.model_selection import train_test_split
import csv    # read the message data with csv, splitting out the label and the text
file_path = r'C:\Users\Administrator\Desktop\SMSSpamCollectionjsn.txt'
sms = open(file_path, 'r', encoding='utf-8')
sms_data = []
sms_label = []
csv_reader = csv.reader(sms, delimiter='\t')
for line in csv_reader:
    sms_label.append(line[0])    # first column: label (ham/spam)
    sms_data.append(line[1])     # second column: message text
sms.close()
sms_label    # inspect the labels
sms_data     # inspect the messages


from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(sms_data, sms_label, test_size=0.3, random_state=0, stratify=sms_label)    # training set, test set

from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(min_df = 2,ngram_range=(1,2),stop_words='english',strip_accents='unicode',norm='l2')
x_train = vectorizer.fit_transform(x_train)
x_test = vectorizer.transform(x_test)
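With the TF-IDF features in place, the remaining step is to train and evaluate a classifier. A minimal sketch, assuming MultinomialNB (a common choice for TF-IDF text features; the original post stops before this step):

from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report

clf = MultinomialNB()
clf.fit(x_train, y_train)                      # train on the TF-IDF matrix
y_pred = clf.predict(x_test)                   # predict ham/spam for the held-out messages
print(classification_report(y_test, y_pred))   # precision, recall and F1 per class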

Original post: https://www.cnblogs.com/Tlzlykc/p/9999217.html