Kaggle beginner competition: a write-up on solving Bag of Words Meets Bags of Popcorn with random forests

  This Kaggle competition uses word2vec to turn each review into word vectors for sentiment analysis, i.e. judging whether the review is positive or negative.

  This write-up is adapted from revanth's Kaggle kernel: https://www.kaggle.com/revanthrex/sentiment-analysis-with-word2vec ; the code has been modified to tune the word2vec parameters.

  First, take a look at the tsv files to see what they contain:

id	review
"9999_0"	"Watching Time Chasers, it obvious that it was made by a bunch of friends. Maybe they were sitting around one day in film school and said, "Hey, let's pool our money together and make a really bad movie!" Or something like that. What ever they said, they still ended up making a really bad movie--dull story, bad script, lame acting, poor cinematography, bottom of the barrel stock music, etc. All corners were cut, except the one that would have prevented this film's release. Life's like that."
"45057_0"	"I saw this film about 20 years ago and remember it as being particularly nasty. I believe it is based on a true incident: a young man breaks into a nurses' home and rapes, tortures and kills various women.<br /><br />It is in black and white but saves the colour for one shocking shot.<br /><br />At the end the film seems to be trying to make some political statement but it just comes across as confused and obscene.<br /><br />Avoid."

  The review text is HTML, so the tags will need to be stripped later.

  Load the files (unzip the tsv files beforehand):

import pandas as pd
import sys

DIR=''
# Read data from files 
train = pd.read_csv(DIR + "labeledTrainData.tsv", header=0,
                    delimiter="\t", quoting=3)
test = pd.read_csv(DIR + "testData.tsv", header=0,
                   delimiter="\t", quoting=3)
unlabeled_train = pd.read_csv(DIR + "unlabeledTrainData.tsv", header=0,
                              delimiter="\t", quoting=3)

# Verify the number of reviews that were read (100,000 in total)
print("Read %d labeled train reviews, %d labeled test reviews, "
      "and %d unlabeled reviews\n" % (train["review"].size,
                                      test["review"].size,
                                      unlabeled_train["review"].size))
Load the data files

  The function that splits a piece of text into a list of words: review_to_wordlist

# Import various modules for string cleaning
from bs4 import BeautifulSoup
import re
from nltk.corpus import stopwords

def review_to_wordlist( review, remove_stopwords=False ):
    # Function to convert a document to a sequence of words,
    # optionally removing stop words.  Returns a list of words.
    
    # 1. Remove HTML
    review_text = BeautifulSoup(review).get_text()
    
    # 2. Remove non-letters
    review_text = re.sub("[^a-zA-Z]"," ", review_text)
    
    # 3. Convert words to lower case and split them
    words = review_text.lower().split()
    
    # 4. Optionally remove stop words (false by default)
    if remove_stopwords:
        stops = set(stopwords.words("english"))
        words = [w for w in words if w not in stops]

    # 5. Return a list of words
    return words
Split a review into words
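  A quick sanity check of review_to_wordlist on a made-up snippet (this sample review is hypothetical, not taken from the dataset, and the second result depends on the NLTK stopword list):

sample = "Watching this <br /><br />was a total waste of time!!!"
print(review_to_wordlist(sample))
# ['watching', 'this', 'was', 'a', 'total', 'waste', 'of', 'time']
print(review_to_wordlist(sample, remove_stopwords=True))
# roughly ['watching', 'total', 'waste', 'time']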

  The function that uses a tokenizer to split a review into sentences: review_to_sentences

# Download the punkt tokenizer for sentence splitting
import nltk.data
import nltk

# Load the punkt tokenizer
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')

# Define a function to split a review into parsed sentences
def review_to_sentences( review, tokenizer, remove_stopwords=False ):
    # Function to split a review into parsed sentences. Returns a list of sentences, where each sentence is a list of words
    
    # 1. Use the NLTK tokenizer to split the paragraph into sentences
    raw_sentences = tokenizer.tokenize(review.strip())
    
    # 2. Loop over each sentence
    sentences = []
    for raw_sentence in raw_sentences:
        # If a sentence is empty, skip it
        if len(raw_sentence) > 0:
            # Otherwise, call review_to_wordlist to get a list of words
            sentences.append( review_to_wordlist( raw_sentence,remove_stopwords ))
    
    # Return the list of sentences (each sentence is a list of words, so this returns a list of lists)
    return sentences
#

sentences = []  # Initialize an empty list of sentences

print ("Parsing sentences from training set")
for review in train["review"]:
    sentences += review_to_sentences(review, tokenizer)

print ("Parsing sentences from unlabeled set")
for review in unlabeled_train["review"]:
    sentences += review_to_sentences(review, tokenizer)
#
Split reviews into sentences

  Logging and word2vec parameters. The meanings of the arguments to the word2vec.Word2Vec constructor used here:

    size: the dimensionality of the word vectors; a larger value can capture word meaning more precisely, but is also more prone to overfitting;

    window: the maximum distance within a sentence over which two words are treated as context for each other. For example, in "the quick brown fox jumps over a lazy dog.", with window set to 5, quick and dog are never paired (they are 7 words apart).

    min_count: the minimum word frequency; words occurring fewer times than this are dropped from the vocabulary and get no vector.

    sample: the downsampling threshold for high-frequency words; it widens the gap between how often frequent and rare words are sampled for training, and the lower the value, the larger that gap

(high-frequency function words such as "a" and "the" carry little meaning, so downsampling keeps them from dominating training; but some frequent words, such as "back" and "up", are meaningful, so tune this value according to how well word frequency matches importance in the corpus)

    workers: the number of worker threads to run in parallel.

# Import the built-in logging module and configure it so that Word2Vec 
# creates nice output messages
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s',
    level=logging.INFO)
#


# Set values for various parameters
num_features = int(sys.argv[1])    # Word vector dimensionality                      
min_word_count = int(sys.argv[2])   # Minimum word count                        
num_workers = 4       # Number of threads to run in parallel
context = 10          # Context window size                                                                                    
downsampling = float(sys.argv[3])   # Downsample setting for frequent words

# Initialize and train the model (this will take some time)
from gensim.models import word2vec
print ("Training model...")

model = word2vec.Word2Vec(sentences, workers=num_workers, 
            size=num_features, min_count = min_word_count, 
            window = context, sample = downsampling)
Logging and word2vec initialization

   model.doesnt_match and model.most_similar are both methods for checking what the trained model has learned.

  doesnt_match takes a list of words and identifies which one does not belong with the rest; for example, given [man, woman, child, kitchen], only kitchen has a vector far from the other three, so it is judged to be of a different kind.

  most_similar finds the words in the model's vocabulary whose vectors (and hence meanings) are closest to the given word, together with similarity scores; for example, passing in queen returns the following:

[('princess', 0.6779699325561523),
 ('bride', 0.6370287537574768),
 ('belle', 0.5911383628845215),
 ('eva', 0.5903465747833252),
 ('mistress', 0.5865148305892944),
 ('latifah', 0.5846465229988098),
 ('victoria', 0.577500581741333),
 ('showgirl', 0.5712460279464722),
 ('maid', 0.5661402344703674),
 ('madame', 0.559766411781311)]

queen: queen, empress;
princess: princess;
bride: bride;
belle: a beauty; the most beautiful woman of a place;
eva: Eva (a female name);
mistress: mistress; female teacher; lady of the house
(going any further down this path would be off topic...) As the list shows, the smaller the similarity score on the right, the less closely the word matches the meaning and context of queen.
Words similar to queen
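  A minimal sketch of how these two checks can be run (in recent gensim versions the methods live under model.wv; in older versions they are also available directly on the model):

# Which word does not belong with the others? Expected answer: 'kitchen'
print(model.wv.doesnt_match("man woman child kitchen".split()))

# The ten words whose vectors are closest to 'queen', with similarity scores
print(model.wv.most_similar("queen", topn=10))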

  Word vector models are usually finalized with init_sims to make them more memory-efficient:

# If you don't plan to train the model any further, calling 
# init_sims will make the model much more memory-efficient.
model.init_sims(replace=True)

# It can be helpful to create a meaningful model name and 
# save the model for later use. You can load it later using Word2Vec.load()
model_name = "300features_40minwords_10context"
model.save(model_name)
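  As the comment above notes, the saved model can be reloaded later with Word2Vec.load instead of retraining it:

from gensim.models import Word2Vec

# Reload the model that was trained and saved above
model = Word2Vec.load("300features_40minwords_10context")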

  The following code averages the word vectors of each review, for both the training set and the test set:

import numpy as np  # Make sure that numpy is imported

def makeFeatureVec(words, model, num_features):
    # Function to average all of the word vectors in a given
    # paragraph
    #
    # Pre-initialize an empty numpy array (for speed)
    featureVec = np.zeros((num_features,),dtype="float32")
    #
    nwords = 0.
    # 
    # Index2word is a list that contains the names of the words in 
    # the model's vocabulary. Convert it to a set, for speed 
    index2word_set = set(model.wv.index2word)
    #
    # Loop over each word in the review and, if it is in the model's
    # vocaublary, add its feature vector to the total
    for word in words:
        if word in index2word_set:
            nwords = nwords + 1.
            featureVec = np.add(featureVec, model.wv[word])
    # 
    # Divide the result by the number of words to get the average
    featureVec = np.divide(featureVec,nwords)
    return featureVec


def getAvgFeatureVecs(reviews, model, num_features):
    # Given a set of reviews (each one a list of words), calculate 
    # the average feature vector for each one and return a 2D numpy array 
    # 
    # Initialize a counter
    counter = 0
    # 
    # Preallocate a 2D numpy array, for speed
    reviewFeatureVecs = np.zeros((len(reviews),num_features),dtype="float32")
    # 
    # Loop through the reviews
    for review in reviews:
        #
        # Print a status message every 1000th review
        if counter % 1000 == 0:
            print("Review %d of %d" % (counter, len(reviews)))
        #
        # Call the function (defined above) that makes average feature vectors
        reviewFeatureVecs[counter] = makeFeatureVec(review, model, num_features)
        #
        # Increment the counter
        counter = counter + 1
    return reviewFeatureVecs
#
# ****************************************************************
# Calculate average feature vectors for training and testing sets,
# using the functions we defined above. Notice that we now use stop word
# removal.

clean_train_reviews = []
for review in train["review"]:
    clean_train_reviews.append( review_to_wordlist( review,remove_stopwords=True ))

trainDataVecs = getAvgFeatureVecs( clean_train_reviews, model, num_features )

print("Creating average feature vecs for test reviews")

clean_test_reviews = []
for review in test["review"]:
    clean_test_reviews.append( review_to_wordlist( review, 
        remove_stopwords=True ))

testDataVecs = getAvgFeatureVecs( clean_test_reviews, model, num_features )

  trainDataVecs holds the averaged feature vectors of the training reviews, and testDataVecs holds the averaged feature vectors of the test reviews.

  A random forest with 100 trees is used here, and the predictions are written to Word2Vec_AverageVectors.csv. The result differs from a 99%-accuracy reference answer on 4,243 entries, for an accuracy of about 83%.

# Fit a random forest to the training data, using 100 trees
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier( n_estimators = 100 )

print ("Fitting a random forest to labeled training data...")
forest = forest.fit( trainDataVecs, train["sentiment"] )

# Test & extract results 
result = forest.predict( testDataVecs )

# Write the test results 
output = pd.DataFrame( data={"id":test["id"], "sentiment":result} )
output.to_csv( "Word2Vec_AverageVectors.csv", index=False, quoting=3 )

     The second approach uses unsupervised learning to collapse word vectors that are overly similar, shrinking the model's vocabulary to one fifth of its original size.

  Concretely, a KMeans model is created for unsupervised clustering, with the number of clusters set to one fifth of the number of word vectors:

from sklearn.cluster import KMeans
import time

start = time.time() # Start time

# Set "k" (num_clusters) to be 1/5th of the vocabulary size, or an
# average of 5 words per cluster
word_vectors = model.wv.syn0
num_clusters = int(word_vectors.shape[0] / 5)

if num_clusters <= 0:
    num_clusters = 1
#
print(word_vectors)
print(num_clusters)
# Initalize a k-means object and use it to extract centroids
kmeans_clustering = KMeans( n_clusters = num_clusters )
idx = kmeans_clustering.fit_predict( word_vectors )

# Get the end time and print how long the process took
end = time.time()
elapsed = end - start
print ("Time taken for K Means clustering: ", elapsed, "seconds.")
Pre-clustering the data

  Next, map each vocabulary word to its cluster from the clustering result, and print the words in the first 10 clusters:

# Create a Word / Index dictionary, mapping each vocabulary word to
# a cluster number                                                                                            
word_centroid_map = dict(zip( model.wv.index2word, idx ))

# For the first 10 clusters
for cluster in range(0, 10):
    #
    # Print the cluster number
    print("\nCluster %d" % cluster)
    #
    # Find all of the words for that cluster number, and print them out
    words = []
    keys = list(word_centroid_map.keys())
    values = list(word_centroid_map.values())
    for i in range(0, len(values)):
        if values[i] == cluster:
            words.append(keys[i])
    print(words)
#
def create_bag_of_centroids( wordlist, word_centroid_map ):
    #
    # The number of clusters is equal to the highest cluster index
    # in the word / centroid map
    num_centroids = max( word_centroid_map.values() ) + 1
    #
    # Pre-allocate the bag of centroids vector (for speed)
    bag_of_centroids = np.zeros( num_centroids, dtype="float32" )
    #
    # Loop over the words in the review. If the word is in the vocabulary,
    # find which cluster it belongs to, and increment that cluster count 
    # by one
    for word in wordlist:
        if word in word_centroid_map:
            index = word_centroid_map[word]
            bag_of_centroids[index] += 1
    #
    # Return the "bag of centroids"
    return bag_of_centroids
Map words to clusters

  The bag-of-centroids features are then fed into the same random forest procedure, and accuracy improves slightly (to about 84%).
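  The post does not show the code for this step; a sketch of what it could look like, reusing clean_train_reviews, clean_test_reviews and create_bag_of_centroids from above (the output file name BagOfCentroids.csv is an assumption):

# Convert the training and test reviews into bag-of-centroids features
num_centroids = max(word_centroid_map.values()) + 1

train_centroids = np.zeros((train["review"].size, num_centroids), dtype="float32")
for i, review in enumerate(clean_train_reviews):
    train_centroids[i] = create_bag_of_centroids(review, word_centroid_map)

test_centroids = np.zeros((test["review"].size, num_centroids), dtype="float32")
for i, review in enumerate(clean_test_reviews):
    test_centroids[i] = create_bag_of_centroids(review, word_centroid_map)

# Fit a random forest on the cluster counts and predict the test sentiment
forest = RandomForestClassifier(n_estimators=100)
forest = forest.fit(train_centroids, train["sentiment"])
result = forest.predict(test_centroids)

output = pd.DataFrame(data={"id": test["id"], "sentiment": result})
output.to_csv("BagOfCentroids.csv", index=False, quoting=3)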

  Changing size (vector dimensionality), min_count (minimum word frequency) and sample (the downsampling threshold for frequent words) in the "Logging and word2vec" setup above and running cross-validation, the accuracy shows no obvious change, staying between 83.512% and 84.732%.
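  The cross-validation code is not shown in the post; a minimal sketch with scikit-learn's cross_val_score on the averaged training vectors (the 5-fold split is an assumption):

from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

# Estimate accuracy of the averaged-vector model with 5-fold cross-validation
forest = RandomForestClassifier(n_estimators=100)
scores = cross_val_score(forest, trainDataVecs, train["sentiment"], cv=5)
print("Accuracy: %.3f (+/- %.3f)" % (scores.mean(), scores.std()))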

  The reason is that the word2vec model itself carries some error, and relatively simple, roughly linear learners such as random forests introduce further distortion on top of it. A model trained on the IMDB data can judge movie reviews effectively, and using an RNN instead of these classical algorithms can also noticeably improve accuracy.

Original post: https://www.cnblogs.com/dgutfly/p/13296861.html