Web Scraping with Regex and XPath

I. Regex Parsing

Review of common regular expressions:

   Single characters:
        . : any character except a newline
        [] : [aoe] [a-w] matches any one character from the set
        \d : digit, same as [0-9]
        \D : non-digit
        \w : digit, letter, underscore, or other word character (including Chinese)
        \W : non-word character (the opposite of \w)
        \s : any whitespace character, including space, tab, form feed, etc.; equivalent to [ \f\n\r\t\v]
        \S : non-whitespace

    Quantifiers:
        * : any number of times, >=0
        + : at least once, >=1
        ? : optional, 0 or 1 time
        {m} : exactly m times, e.g. hello{3}
        {m,} : at least m times
        {m,n} : between m and n times
    Anchors:
        $ : matches at the end
        ^ : matches at the beginning
    Grouping:
        (ab)
    Greedy mode: .*
    Non-greedy (lazy) mode: .*?
    re.I : ignore case
    re.M : multi-line matching
    re.S : dot matches newlines as well (dot-all mode)

    re.sub(pattern, replacement, string)
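
A quick sketch of the modes above (greedy vs. non-greedy matching, re.S, and re.sub); the sample strings are made up for illustration:

import re

html = '<div>first</div><div>second</div>'

# greedy: .* grabs as much as possible, so both divs collapse into one match
print(re.findall(r'<div>(.*)</div>', html))      # ['first</div><div>second']

# non-greedy: .*? stops at the earliest closing tag
print(re.findall(r'<div>(.*?)</div>', html))     # ['first', 'second']

# re.S lets . match newlines too, which matters for multi-line HTML
multiline = '<div>\nhello\n</div>'
print(re.findall(r'<div>(.*?)</div>', multiline, re.S))  # ['\nhello\n']

# re.sub replaces every match of the pattern
print(re.sub(r'\d+', '*', 'tel: 123-456'))       # 'tel: *-*'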

Scraping Qiushibaike image data

import re
import requests
from urllib import request
import os

#1. Check whether the page data is loaded dynamically
#2. Fetch the page source data
if not os.path.exists('qiutu'):
    os.mkdir('qiutu')
    
headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'
}
url = 'https://www.qiushibaike.com/pic/'
page_text = requests.get(url=url,headers=headers).text
#3. Parse out the src attribute values of the img tags
ex = '<div class="thumb">.*?<img src="(.*?)" alt=.*?</div>'
img_url_list = re.findall(ex,page_text,re.S)
for img_url in img_url_list:
    img_url = 'https:'+img_url
    imgPath = 'qiutu/'+img_url.split('/')[-1]
    #4. Send a request for each image URL
    #5. Persist it to disk
    request.urlretrieve(url=img_url,filename=imgPath)
    print(imgPath+' downloaded successfully!')
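
The script above persists each image with urllib's request.urlretrieve; an alternative sketch that stays entirely within requests would fetch the raw bytes and write them out inside the same loop, in place of the urlretrieve call:

    #alternative: fetch the image bytes with requests and write them to disk
    img_data = requests.get(url=img_url,headers=headers).content
    with open(imgPath,'wb') as fp:
        fp.write(img_data)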

II. XPath Parsing

XPath introduction

https://www.cnblogs.com/clbao/articles/10803582.html

1. Install: pip install lxml
2. Import: from lxml import etree
3. Usage:
Convert the HTML or XML document into an etree object, then call its methods to locate the target nodes.

 1. Local file

Local file: tree = etree.parse(file path or file-like object)
                tree.xpath("XPath expression")
from lxml import etree

html = """
    <div>
      <ul>
         <li class="item-0"><a href="link1.html" rel="external nofollow">first item</a></li>
         <li class="item-inactive"><a href="link2.html" rel="external nofollow">third item</a></li>
         <li class="item-1"><a href="link3.html" rel="external nofollow">fourth item</a></li>
         <li class="item-0"><a href="link4.html" rel="external nofollow">fifth item</a>
       </ul>
     </div>
    """
html = etree.HTML(html)
print(html)  # <Element html at 0x3531800>
result = etree.tostring(html)
print(result.decode("utf-8"))  # the missing html and body tags have been filled in
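
The snippet above builds the tree from a string with etree.HTML; for an actual local file, etree.parse can be used instead. A minimal sketch, assuming the markup above has been saved to a hypothetical file named test.html:

from lxml import etree

#etree.parse is strict XML by default; passing an HTMLParser makes it
#tolerant of the unclosed <li> in the sample markup
parser = etree.HTMLParser()
tree = etree.parse('test.html', parser)  #'test.html' is a hypothetical local file
print(tree.xpath('//li/a/text()'))       #['first item', 'third item', 'fourth item', 'fifth item']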

2. Network data

Network data: tree = etree.HTML(page source string)
                tree.xpath("XPath expression")
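
For example, a page fetched with requests can be fed straight into etree.HTML (the URL below is only a placeholder):

import requests
from lxml import etree

url = 'https://www.example.com/'  #placeholder URL
headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'
}
page_text = requests.get(url=url,headers=headers).text
tree = etree.HTML(page_text)          #build the etree object from the response body
print(tree.xpath('//title/text()'))   #e.g. the page title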

Test page data

<html lang="en">
<head>
    <meta charset="UTF-8" />
    <title>测试bs4</title>
</head>
<body>
    <div>
        <p>百里守约</p>
    </div>
    <div class="song">
        <p>李清照</p>
        <p>王安石</p>
        <p>苏轼</p>
        <p>柳宗元</p>
        <a href="http://www.song.com/" title="赵匡胤" target="_self">
            <span>this is span</span>
        宋朝是最强大的王朝,不是军队的强大,而是经济很强大,国民都很有钱</a>
        <a href="" class="du">总为浮云能蔽日,长安不见使人愁</a>
        <img src="http://www.baidu.com/meinv.jpg" alt="" />
    </div>
    <div class="tang">
        <ul>
            <li><a href="http://www.baidu.com" title="qing">清明时节雨纷纷,路上行人欲断魂,借问酒家何处有,牧童遥指杏花村</a></li>
            <li><a href="http://www.163.com" title="qin">秦时明月汉时关,万里长征人未还,但使龙城飞将在,不教胡马度阴山</a></li>
            <li><a href="http://www.126.com" alt="qi">岐王宅里寻常见,崔九堂前几度闻,正是江南好风景,落花时节又逢君</a></li>
            <li><a href="http://www.sina.com" class="du">杜甫</a></li>
            <li><a href="http://www.dudu.com" class="du">杜牧</a></li>
            <li><b>杜小月</b></li>
            <li><i>度蜜月</i></li>
            <li><a href="http://www.haha.com" id="feng">凤凰台上凤凰游,凤去台空江自流,吴宫花草埋幽径,晋代衣冠成古丘</a></li>
        </ul>
    </div>
</body>
</html>
Locating by attribute:
    # find the div tag whose class attribute is "song"
    //div[@class="song"]
Locating by hierarchy & index:
    # find the a tag that is a direct child of the 2nd li under the ul directly inside the div whose class is "tang"
    //div[@class="tang"]/ul/li[2]/a
Logical operators:
    # find the a tag whose href attribute is empty and whose class attribute is "du"
    //a[@href="" and @class="du"]
Fuzzy matching:
    //div[contains(@class, "ng")]
    //div[starts-with(@class, "ta")]
Getting text:
    # /text() gets the text directly inside a tag
    # //text() gets the text of a tag and all of its descendant tags
    //div[@class="song"]/p[1]/text()
    //div[@class="tang"]//text()
Getting attributes:
    //div[@class="tang"]//li[2]/a/@href
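
A runnable sketch that exercises a few of the expressions above against a trimmed copy of the test page (the markup is embedded directly so the snippet stands alone):

from lxml import etree

page_text = """
<div class="song">
    <p>李清照</p>
    <p>王安石</p>
    <a href="" class="du">总为浮云能蔽日,长安不见使人愁</a>
</div>
<div class="tang">
    <ul>
        <li><a href="http://www.baidu.com" title="qing">清明时节雨纷纷</a></li>
        <li><a href="http://www.163.com" title="qin">秦时明月汉时关</a></li>
    </ul>
</div>
"""
tree = etree.HTML(page_text)

print(tree.xpath('//div[@class="song"]/p[1]/text()'))       #['李清照']
print(tree.xpath('//div[@class="tang"]/ul/li[2]/a/text()'))  #['秦时明月汉时关']
print(tree.xpath('//a[@href="" and @class="du"]/text()'))    #['总为浮云能蔽日,长安不见使人愁']
print(tree.xpath('//div[@class="tang"]//li[2]/a/@href'))     #['http://www.163.com']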

58.com second-hand housing data

import requests
from lxml import etree


#Fetch the page source data
url = 'https://bj.58.com/changping/ershoufang/?utm_source=sem-baidu-pc&spm=105916147073.26840108910'
headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'
}
page_text = requests.get(url=url,headers=headers).text

#Instantiate an etree object and load the page source into it
tree = etree.HTML(page_text)
li_list = tree.xpath('//ul[@class="house-list-wrap"]/li')
all_data_list = []
for li in li_list:
    title = li.xpath('.//div[@class="list-info"]/h2/a/text()')[0]
    detail_url = li.xpath('.//div[@class="list-info"]/h2/a/@href')[0]
    if not 'https:' in detail_url:
        detail_url = 'https:'+detail_url
    price = li.xpath('.//div[@class="price"]/p//text()')
    price = ''.join(price)
    
    #Request each detail page and fetch its data
    detail_page_text = requests.get(url=detail_url,headers=headers).text
    detail_tree = etree.HTML(detail_page_text)  #use a separate name so the outer list-page tree is not overwritten
    desc = detail_tree.xpath('//div[@class="general-item-wrap"]//text()')
    desc = ''.join(desc).strip(' \n\t')
    
    dic = {
        'title':title,
        'price':price,
        'desc':desc
    }
    all_data_list.append(dic)
    
print(all_data_list)    
Original article: https://www.cnblogs.com/clbao/p/10250961.html