NLTK 30: Investigating bias with NLTK


https://www.pythonprogramming.net/investigating-nltk-tutorial/

Testing the algorithm reveals many inaccuracies, i.e. bias: it produces negative judgments far more often.

In this tutorial, we discuss a few issues. The biggest one is that we have a fairly biased algorithm. You can test this yourself by commenting out the shuffling of the documents, then training against the first 1900 reviews and testing against the last 100 (all positive) reviews. You will find you get very poor accuracy.

Conversely, you can test against the first 100 reviews, all negative, and train against the remaining 1900. You will find very high accuracy here. This is a bad sign. It could mean a lot of things, and there are many options for us to fix it.
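The two un-shuffled splits described above can be sketched like this. The `featuresets` list here is a hypothetical stand-in for the one built from the 2000 movie reviews in earlier parts of this series, ordered with all negatives before all positives:

```python
# Stand-in for the real featuresets: 2000 (features, label) pairs,
# negatives first, positives last -- the order you get without shuffling.
featuresets = [({"word%d" % i: True}, "neg") for i in range(1000)] + \
              [({"word%d" % i: True}, "pos") for i in range(1000)]

# Split 1: train on the first 1900, test on the last 100 (all positive).
train_1, test_1 = featuresets[:1900], featuresets[1900:]

# Split 2: test on the first 100 (all negative), train on the rest.
test_2, train_2 = featuresets[:100], featuresets[100:]

print(all(label == "pos" for _feats, label in test_1))  # True
print(all(label == "neg" for _feats, label in test_2))  # True
```

Because each test slice contains only one class, the accuracy you measure on it says little about real performance, which is exactly the bias the tutorial is pointing at.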

We need to build the model with a new data set.

That said, the project I have in mind for us suggests we go ahead and use a different data set anyways, so we will do that. In the end, we will find this new data set still contains some bias, and that is that it picks up negative things more often. The reason for this is that negative reviews tend to be "more negative" than positive reviews are positive. Handling this can be done with some simple weighting, but it can also get complex fast. Maybe a tutorial for another day. For now, we're going to just grab a new dataset, which we'll be discussing in the next tutorial.

Different data sets need different classifiers; there is no single universal classifier. To model Twitter sentiment, we need Twitter-like training data. Twitter data is characterized by much shorter text.

So now it is time to train on a new data set. Our goal is to do Twitter sentiment, so we're hoping for a data set that is a bit shorter per positive and negative statement. It just so happens that I have a data set of 5300+ positive and 5300+ negative movie reviews, which are much shorter. These should give us a bit more accuracy from the larger training set, as well as be more fitting for tweets from Twitter.

Download link: the downloads for the short reviews.

I have hosted both files here, you can find them by going to the downloads for the short reviews. Save these files as positive.txt and negative.txt.

Now, we can build our new data set in a very similar way as before. What needs to change?

We need a new methodology for creating our "documents" variable, and then we also need a new way to create the "all_words" variable. No problem, really, here's how I did it:
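A sketch of how this could look, assuming positive.txt and negative.txt each hold one short review per line. The file names come from the step above; the helper function, whitespace tokenization, and `collections.Counter` are illustrative stand-ins (the tutorial itself would use `nltk.word_tokenize` and `nltk.FreqDist`):

```python
from collections import Counter

def build_corpus(short_pos, short_neg):
    """Pair each review line with its label, and count lowercased words."""
    documents = []
    for r in short_pos.split('\n'):
        documents.append((r, "pos"))
    for r in short_neg.split('\n'):
        documents.append((r, "neg"))

    all_words = Counter()  # stands in for nltk.FreqDist
    for review, _label in documents:
        for w in review.split():  # the tutorial would tokenize with NLTK here
            all_words[w.lower()] += 1
    return documents, all_words

# In the real script you would read the two files, e.g.:
#   short_pos = open("positive.txt", "r").read()
#   short_neg = open("negative.txt", "r").read()
# A tiny inline example in place of the files:
pos_text = "great movie\nloved it"
neg_text = "terrible plot\nhated it"
documents, all_words = build_corpus(pos_text, neg_text)
print(len(documents))   # 4 labelled reviews
print(all_words["it"])  # 2: once in a pos review, once in a neg review
```

From here, `all_words` can feed the same most-common-words feature selection used with the original movie-review corpus.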


Original article: https://www.cnblogs.com/webRobot/p/6281494.html