A simple Python crawler, to help understand the re module

Update 2016-12-03:

1. Used BS4 (BeautifulSoup) to parse the HTML

2. Used mysql-connector to insert the results into a database table

pip install mysql-connector
import urllib.request  
from bs4 import BeautifulSoup  
import re  
import mysql.connector

def getMovieInfo():
    url = "https://movie.douban.com"
    data = urllib.request.urlopen(url).read()
    page_data = data.decode('UTF-8')
    # print(page_data)
    soup = BeautifulSoup(page_data, "html.parser")
    # Connect to MySQL
    conn = mysql.connector.connect(host='localhost', user='root',
                                   password='888888', database='test')
    cursor = conn.cursor()
    # Clear the table before re-inserting
    cursor.execute('delete from doubanmovie where 1=1')
    # Each movie is an <li> element carrying data-* attributes
    for link in soup.findAll('li', attrs={"data-actors": True}):
        moviename = link['data-title']
        actors = link['data-actors']
        region = link['data-region']
        release = link['data-release']
        duration = link['data-duration']
        director = link['data-director']
        rate = link['data-rate']
        imgsrc = link.img['src']
        cursor.execute("INSERT INTO doubanmovie VALUES ('', %s, %s, %s, %s, %s, %s, %s, %s, now())",
                       [moviename, actors, region, release, duration, director, rate, imgsrc])
        conn.commit()
        print('mysql', cursor.rowcount)
        print(link['data-title'])
        print('Actors:', link['data-actors'])
        print(link.img['src'])
    cursor.close()
    conn.close()

# Call the function
getMovieInfo()
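
The INSERT statement above assumes a doubanmovie table with ten columns: an id, eight text fields, and a timestamp filled by now(). The original post does not show the schema, so here is a minimal sketch that matches the INSERT; all column names and types are assumptions, and passing '' for the id column relies on MySQL's non-strict SQL mode coercing it to the next auto-increment value:

import mysql.connector

conn = mysql.connector.connect(host='localhost', user='root',
                               password='888888', database='test')
cursor = conn.cursor()
# Hypothetical schema matching the ten-value INSERT above
cursor.execute("""
CREATE TABLE IF NOT EXISTS doubanmovie (
    id          INT AUTO_INCREMENT PRIMARY KEY,
    moviename   VARCHAR(255),
    actors      VARCHAR(255),
    region      VARCHAR(100),
    releasedate VARCHAR(50),
    duration    VARCHAR(50),
    director    VARCHAR(255),
    rate        VARCHAR(10),
    imgsrc      VARCHAR(512),
    created     DATETIME
)
""")
conn.commit()
cursor.close()
conn.close()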

Update: a crawler tutorial based on Python 3

Differences between the two versions of the code:

1. In Python 3, urllib.urlopen becomes urllib.request.urlopen; all the old urllib calls need the request prefix.

2. In Python 3, print requires parentheses, i.e. print()

3. In Python 3, html = urllib.request.urlopen(url).read() returns bytes, which must be decoded to a UTF-8 string:
html = html.decode('utf-8')
#coding=utf-8
import urllib.request
import re

def getHtml(url):
    # Fetch the page and decode the raw bytes into a UTF-8 string
    page = urllib.request.urlopen(url)
    html = page.read()
    html = html.decode('utf-8')
    return html

def getImg(html):
    # The pattern captures image URLs of the form src="....jpg" pic_ext
    reg = r'src="(.+?\.jpg)" pic_ext'
    imgre = re.compile(reg)
    imglist = re.findall(imgre, html)
    x = 0
    for imgurl in imglist:
        # Download each image and save it locally as 0.jpg, 1.jpg, ...
        urllib.request.urlretrieve(imgurl, '%s.jpg' % x)
        x += 1
    return imglist


html = getHtml("http://tieba.baidu.com/p/2460150866")

print(getImg(html))

The following is the Python 2 version:

The filtered image URLs are iterated over with a for loop and saved to the local disk; the code is as follows:

 

#coding=utf-8
import urllib
import re

def getHtml(url):
    # Fetch the page; in Python 2, read() already returns a str
    page = urllib.urlopen(url)
    html = page.read()
    return html

def getImg(html):
    # The pattern captures image URLs of the form src="....jpg" pic_ext
    reg = r'src="(.+?\.jpg)" pic_ext'
    imgre = re.compile(reg)
    imglist = re.findall(imgre, html)
    x = 0
    for imgurl in imglist:
        # Download each image and save it locally as 0.jpg, 1.jpg, ...
        urllib.urlretrieve(imgurl, '%s.jpg' % x)
        x += 1
    return imglist


html = getHtml("http://tieba.baidu.com/p/2460150866")

print getImg(html)

 

The key here is the urllib.urlretrieve() method, which downloads remote data directly to a local file.
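
urlretrieve() also accepts an optional reporthook callback that is invoked as blocks arrive, which is handy for showing download progress. A minimal sketch using the Python 3 API; the URL and filename here are placeholders for the example:

import urllib.request

def progress(block_num, block_size, total_size):
    # Called by urlretrieve after each block is transferred
    if total_size > 0:
        percent = min(100.0, 100.0 * block_num * block_size / total_size)
        print('downloaded %.1f%%' % percent)

# placeholder URL for illustration
urllib.request.urlretrieve('http://example.com/pic.jpg', 'pic.jpg', progress)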

We also created the getImg() function to filter the image links we need out of the fetched page. The re module provides the regular-expression support:

re.compile() compiles a regular-expression string into a pattern object.

re.findall() returns every match of imgre (the compiled regular expression) found in html.

Running the script collects the URLs of all the images on the page and downloads them.
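
A minimal, self-contained illustration of those two calls (the HTML string here is made up for the example):

import re

html = 'src="a.jpg" pic_ext src="b.jpg" pic_ext'
imgre = re.compile(r'src="(.+?\.jpg)" pic_ext')  # compile the pattern once
imglist = re.findall(imgre, html)                # -> ['a.jpg', 'b.jpg']
print(imglist)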

Original article: https://www.cnblogs.com/zipon/p/5925218.html