Choosing the Best Number of Nearest Neighbors (k) for KNN

1. Load the data from sklearn's built-in datasets

from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
import numpy as np
data = load_iris()
X = data.data
y = data.target

# the iris samples are ordered by class, so shuffle them before any manual split
perm = np.random.permutation(y.size)
X = X[perm]
y = y[perm]
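
The permutation above is unseeded, so the ordering changes between runs. A minimal alternative sketch, using sklearn.utils.shuffle with a fixed seed (this is an aside for reproducibility, not part of the original post):

from sklearn.datasets import load_iris
from sklearn.utils import shuffle

data = load_iris()
# shuffle features and labels together, with a fixed seed for repeatable results
X, y = shuffle(data.data, data.target, random_state=0)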

2. Use cross-validation to determine which value of n_neighbors (the k value) gives the best classification performance.

n_range = range(1, 31)
n_scores = []
for n in n_range:
    knn = KNeighborsClassifier(n_neighbors=n)
    # mean 10-fold cross-validation accuracy for this value of k
    score = cross_val_score(knn, X, y, cv=10)
    n_scores.append(score.mean())

import pandas
result = pandas.DataFrame({'n_range': list(n_range), 'n_scores': n_scores})
# take the k with the highest mean cross-validation score (first one in case of ties)
zuijia = int(result.loc[result['n_scores'].idxmax(), 'n_range'])
# zuijia is the k value with the best classification performance
import matplotlib.pyplot as plt
plt.plot(n_range, n_scores, 'b:+')
plt.xlabel('n_neighbors (k)')
plt.ylabel('mean cross-validation accuracy')
plt.show()
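
The same search over k can also be done with GridSearchCV. A rough sketch, assuming X and y are the shuffled arrays from step 1 (an alternative to the loop above, not the original author's approach):

from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

param_grid = {'n_neighbors': list(range(1, 31))}
grid = GridSearchCV(KNeighborsClassifier(), param_grid, cv=10)
grid.fit(X, y)
print(grid.best_params_['n_neighbors'])  # k with the highest mean CV score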

3. Split the iris data into a training set and a test set, plug in the best k value, and use KNN to predict.

# first 100 shuffled samples for training, remaining 50 for testing
X_train = X[:100]
y_train = y[:100]
X_test = X[100:]
y_test = y[100:]
KNN = KNeighborsClassifier(n_neighbors=zuijia)
KNN.fit(X_train, y_train)
# plot predicted labels against the true labels of the test set
plt.plot(KNN.predict(X_test), y_test, 'b:+')
plt.show()
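
The scatter of predicted versus true labels is hard to judge by eye. A short sketch that reports the test accuracy directly, assuming the fitted KNN and the train/test split above:

from sklearn.metrics import accuracy_score

y_pred = KNN.predict(X_test)
print(accuracy_score(y_test, y_pred))  # fraction of test samples classified correctly
# equivalently: KNN.score(X_test, y_test)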
Original source: https://www.cnblogs.com/chenyaling/p/6744294.html