[Python in Practice] Automated Phone Model Labeling (A Sogou Crawler Implementation)

1. Introduction

Phone model strings collected from Android devices mostly look like this:

mi|5
mi|4c
mi 4c
2014022
kiw-al10
nem-tl00h

The collected model strings are messy and inconsistent, which makes statistical analysis awkward. Labeling them with canonical names is therefore essential.

ZOL (Zhongguancun Online) carries profiles for most phones sold in China, including the model identifier such as nem-tl00h together with its common marketing name, 荣耀畅玩5C (Honor Play 5C). The automated labeling strategy is therefore:

  1. Search for the model string on Sogou, adding the qualifier site:detail.zol.com.cn so that the first returned result is restricted to the ZOL site.
  2. Follow the link of the first result to the corresponding ZOL page, and parse out the canonical name and the phone's alias.

2. Implementation

Following the crawling strategy above, I implemented a simple crawler in Python, using PyQuery to parse the HTML pages. PyQuery offers jQuery-like syntax for manipulating HTML elements, so anyone familiar with jQuery can pick it up immediately.

The Sogou crawler (written against Python 3.5.2) is implemented as follows:

# -*- coding: utf-8 -*-
# @Time    : 2016/8/8
# @Author  : rain
import codecs
import csv
import logging
import re
import time
import urllib.parse
import urllib.request
import urllib.error

from pyquery import PyQuery as pq


def quote_url(model_name):
    base_url = "https://www.sogou.com/web?query=%s"
    site_zol = "site:detail.zol.com.cn "
    return base_url % (urllib.parse.quote(site_zol + model_name))


def parse_sogou(model_name):
    search_url = quote_url(model_name)
    request = urllib.request.Request(url=search_url, headers={
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) '
                      'Chrome/45.0.2454.101 Safari/537.36'})
    sogou_html = urllib.request.urlopen(request).read()
    sogou_dom = pq(sogou_html)
    goto_url = sogou_dom("div.results>.vrwrap>.vrTitle>a[target='_blank']").eq(0).attr("href")
    logging.warning("goto url: %s", goto_url)
    if goto_url is None:
        return None
    # Sogou result links go through a redirect page whose inline script
    # carries the real ZOL URL as a quoted string
    goto_dom = pq(url=goto_url)
    script_text = goto_dom("script").text()
    matches = re.findall(r'"(.*)"', script_text)
    return matches[0] if matches else None


def parse_zol(model_name):
    zol_url = parse_sogou(model_name)
    if zol_url is None:
        return None, None
    try:
        zol_html = urllib.request.urlopen(zol_url).read()
    except urllib.error.HTTPError as e:
        logging.exception(e)
        return None, None
    zol_dom = pq(zol_html)
    title = zol_dom(".page-title.clearfix")
    name = title("h1").text()
    alias = title("h2").text()
    # ZOL titles such as "荣耀畅玩5C(全网通)" carry the variant in
    # full-width parentheses; split it off and append it to the alias
    if u'(' in name and u')' in name:
        match_result = re.match(u'(.*)((.*))', name)
        name = match_result.group(1)
        alias = match_result.group(2) + " " + alias
    return name, alias


if __name__ == "__main__":
    with codecs.open("./resources/data.txt", 'r', 'utf-8') as fr:
        with open("./resources/result.csv", 'w', newline='') as fw:
            writer = csv.writer(fw, delimiter=',')
            for model in fr.readlines():
                model = model.rstrip()
                label_name, label_alias = parse_zol(model)
                writer.writerow([model, label_name, label_alias])
                logging.warning("model: %s, name: %s, alias: %s", model, label_name, label_alias)
                time.sleep(10)

To avoid getting blocked by Sogou, the crawler sleeps for 10 s after each request. This makes crawling very slow, of course, so some optimization is needed.
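One simple mitigation is to randomize the delay instead of sleeping a fixed 10 s, since a perfectly regular interval is easy for rate-limit heuristics to fingerprint. A minimal sketch (the 5–15 s range is an arbitrary choice, not from the original post):

```python
import random
import time


def polite_sleep(base=5.0, jitter=10.0):
    """Sleep for a random duration between base and base + jitter seconds.

    A fixed interval between requests is trivially detectable; adding
    jitter makes the access pattern look less mechanical. Returns the
    delay actually slept, which is handy for logging.
    """
    delay = base + random.random() * jitter
    time.sleep(delay)
    return delay
```

In the main loop, `time.sleep(10)` would simply be replaced with `polite_sleep()`.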

3. Optimization

Downloading the CAPTCHA images

Sogou blocks clients based on request frequency: once the request count gets too high, it demands a CAPTCHA:

<div class="content-box">
    <p class="ip-time-p">IP:61...<br/>访问时间:2016.08.09 15:40:04</p>
    <p class="p2">用户您好,您的访问过于频繁,为确认本次访问为正常用户行为,需要您协助验证。</p>
    ...
    <form name="authform" method="POST" id="seccodeForm" action="/">
        <p class="p4">
        	...
            <input type="hidden" name="m" value="0"/>            <span class="s1">
                <a onclick="changeImg2();" href="javascript:void(0)">
                    <img id="seccodeImage" onload="setImgCode(1)" onerror="setImgCode(0)" src="util/seccode.php?tc=1470728404" width="100" height="40" alt="请输入图中的验证码" title="请输入图中的验证码"/>
                </a>
            </span>
            <a href="javascript:void(0);" id="change-img" onclick="changeImg2();" style="padding-left:50px;">换一张</a>
            <span class="s2" id="error-tips" style="display: none;"/>
        </p>
    </form>
    ...
</div>

Inspecting the HTML shows that the real CAPTCHA image URL has to be assembled like this:

http://weixin.sogou.com/antispider/util/seccode.php?tc=1470728404
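The same concatenation can be done with the standard library's `urllib.parse.urljoin`, which resolves the relative `src` attribute against the anti-spider page's base URL (a minor alternative to plain string concatenation):

```python
from urllib.parse import urljoin

# Base URL of the anti-spider page and the relative src taken from
# the #seccodeImage element
base = "http://weixin.sogou.com/antispider/"
img_src = "util/seccode.php?tc=1470728404"

print(urljoin(base, img_src))
# http://weixin.sogou.com/antispider/util/seccode.php?tc=1470728404
```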

Downloading the CAPTCHA images to disk:

import urllib.request
from pyquery import PyQuery as pq
import re


# Repeatedly hit the search endpoint; once Sogou serves the anti-spider
# page, its #seccodeImage element carries the CAPTCHA's relative src
for i in range(100):
    html = urllib.request.urlopen("https://www.sogou.com/web?query=treant").read()
    dom = pq(html)
    img_src = dom("#seccodeImage").attr("src")
    if img_src is not None:
        # Use the tc timestamp from the query string as the file name
        img_name = re.search("tc=(.*)", img_src).group(1)
        anti_img_url = "http://weixin.sogou.com/antispider/" + img_src
        urllib.request.urlretrieve(anti_img_url, "./images/" + img_name + ".jpg")

Tesseract's recognition accuracy on these CAPTCHAs is mediocre; I will look into other recognition approaches when I have time.

Original post (in Chinese): https://www.cnblogs.com/en-heng/p/5754112.html