Tianchi NLP Rookie Competition, Task 3: A Deeper Look at TF-IDF

A Deeper Look at TF-IDF

from sklearn.feature_extraction.text import CountVectorizer
Purpose: it only counts how many times each word appears in each document (raw term counts).
Key parameters (a toy example follows the list):
stop_words: hard to pin down for this problem, so it is left unset here; overly common and overly rare words can be filtered out through other parameters instead.
ngram_range: (1, 2) means both single tokens and two-token sequences (unigrams and bigrams) are included.
max_features: keep only the words with the highest term frequency; this is independent of IDF. Not set here.
max_df: ignore terms that appear in too many documents. Worth tuning between the largest class's share of the training set and 1.
min_df: terms that appear in very few documents may also be useless. Worth tuning between 0 and the smallest class's share of the training set.
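
A toy sketch of these settings (the token strings and thresholds are invented for illustration; multi-character tokens are used because CountVectorizer's default token_pattern drops single-character tokens):

from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "3750 648 900",
    "3750 648 2465",
    "3750 1234 5678",
]

# ngram_range=(1, 2): unigrams and bigrams; max_df=0.9 drops terms whose
# document frequency exceeds 90%; min_df=1 keeps everything else
vec = CountVectorizer(ngram_range=(1, 2), max_df=0.9, min_df=1)
counts = vec.fit_transform(docs)
print(vec.get_feature_names_out())  # '3750' is gone: it appears in all three docs
print(counts.toarray())             # raw term counts per document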

from sklearn.feature_extraction.text import TfidfTransformer
This one adds the IDF weighting on top of the counts: with the default smooth_idf=True, sklearn computes idf(t) = ln((1 + n) / (1 + df(t))) + 1 and then L2-normalizes each row. The TF-IDF feature extraction therefore looks like this:

# TF-IDF feature extraction
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer


def tf_idf(contents):
    # Extract TF-IDF features: raw counts first, then IDF reweighting
    vectorizer = CountVectorizer(ngram_range=(1, 2), max_df=0.4, min_df=0.001)
    transformer = TfidfTransformer()
    tfidf = transformer.fit_transform(vectorizer.fit_transform(contents))
    return tfidf

With ngram_range=(1, 1), the TF-IDF extraction takes roughly 3min21s; with ngram_range=(1, 2) it takes 8min41s, and the feature dimension rises to 83444.
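
Incidentally, sklearn's TfidfVectorizer is documented as equivalent to CountVectorizer followed by TfidfTransformer, so the same features can be built in one step (a minimal sketch reusing the parameters from tf_idf above):

from sklearn.feature_extraction.text import TfidfVectorizer

def tf_idf_one_step(contents):
    # One estimator instead of CountVectorizer + TfidfTransformer
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_df=0.4, min_df=0.001)
    return vectorizer.fit_transform(contents)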
Splitting off a validation set
from sklearn.model_selection import train_test_split
Note the stratify argument here: it splits in proportion to y's class distribution; without it the split could end up class-imbalanced, since the original dataset is itself quite imbalanced. For example:
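
A minimal sketch of a stratified split (the X_train/y_train names and the 0.2 split size are assumptions for illustration):

from sklearn.model_selection import train_test_split

# stratify=y_train keeps each class's share identical in both splits
X_tr, X_val, y_tr, y_val = train_test_split(
    X_train, y_train, test_size=0.2, stratify=y_train, random_state=42)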
But we will probably also want cross-validation, and for that the better tool is
from sklearn.model_selection import StratifiedKFold

SKF = StratifiedKFold(n_splits = 5, shuffle = False)

import lightgbm as lgb
from sklearn.metrics import f1_score
from tqdm import tqdm

model = lgb.LGBMClassifier(
                        boosting_type = 'gbdt',
                        objective = 'multiclass', # 'binary' for binary classification, 'multiclass' for multi-class, 'regression' for regression
                        num_class = 14,
                        metric = 'multi_logloss',
                        n_estimators = 100,
                        num_leaves = 30, # tune together with max_depth; keep <= 2^(max_depth) to avoid overfitting. When tuning num_leaves alone, set max_depth=-1 (no depth limit)
                        max_depth = 5,
                        min_data_in_leaf = 15,
                        min_sum_hessian_in_leaf = 0.005,
                        feature_fraction = 0.8,
                        bagging_fraction = 0.8,
                        bagging_freq = 5,
                        lambda_l1 = 0.1,
                        lambda_l2 = 0.1,
                        learning_rate = 0.1,
                        device = 'gpu',
                        gpu_platform_id = 0,
                        gpu_device_id = 0)

for (train_index, val_index) in tqdm(SKF.split(X_train, y_train)):
    X_train_, X_val_, y_train_, y_val_ = X_train[train_index], X_train[val_index], y_train[train_index], y_train[val_index]
    # print(X_train_.shape, X_val_.shape, y_train_.shape, y_val_.shape)
    model.fit(X_train_, y_train_, eval_set=[(X_val_, y_val_)], eval_metric='multi_logloss', early_stopping_rounds=100, verbose=False)
    print(f1_score(y_val_, model.predict(X_val_), average='macro'))
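
A variant of the loop above that also records the per-fold scores and reports their mean (a convenience addition, not part of the original):

import numpy as np

scores = []
for train_index, val_index in SKF.split(X_train, y_train):
    model.fit(X_train[train_index], y_train[train_index],
              eval_set=[(X_train[val_index], y_train[val_index])],
              eval_metric='multi_logloss', early_stopping_rounds=100, verbose=False)
    scores.append(f1_score(y_train[val_index],
                           model.predict(X_train[val_index]), average='macro'))
print('mean macro-F1 over folds:', np.mean(scores))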

Other ideas:
Try upsampling and evaluate its effect; a sketch follows.
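
One way to try it, assuming the imbalanced-learn package (an extra dependency, not used in the original) is available:

from imblearn.over_sampling import RandomOverSampler

# Duplicate minority-class rows until every class matches the majority count.
# Resample only the training fold; leaving validation data untouched avoids
# leaking duplicated samples into the evaluation.
ros = RandomOverSampler(random_state=42)
X_res, y_res = ros.fit_resample(X_train_, y_train_)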

Consider whether the F1-score (macro-F1 here) has limitations of its own; see the toy example below.
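
For intuition (toy labels, my own illustration): macro-F1 weights every class equally, so one rare class that the model always misses drags the score far below plain accuracy:

from sklearn.metrics import f1_score

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]   # class 1 is rare
y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]   # the rare class is never predicted
print(f1_score(y_true, y_pred, average='micro'))  # 0.9 (equals accuracy here)
print(f1_score(y_true, y_pred, average='macro'))  # ~0.47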

Original post (Chinese): https://www.cnblogs.com/zuotongbin/p/13378393.html