Crawling All Campus News

Assignment source: https://edu.cnblogs.com/campus/gzcc/GZCC-16SE1/homework/3002

0. Get the click count from the news URL and wrap it in a function

  • newsUrl
  • newsId (re.search())
  • clickUrl (str.format())
  • requests.get(clickUrl)
  • re.search() / .split()
  • str.lstrip(), str.rstrip()
  • int
  • wrap the above in a function
  • also wrap fetching the news publish time, plus its type conversion, in a function (see the sketch below)
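
A minimal sketch of that second helper, assuming the publish date and time are the first two whitespace-separated tokens of the show-info text (the name newsdt and its showinfo parameter are illustrative, not from the original post):

from datetime import datetime

def newsdt(showinfo):
    # split '发布时间:2019-04-01 11:02:32 ...' into date and time tokens,
    # then convert the str to a datetime object
    newsdate = showinfo.split()[0].split(':')[1]   # '2019-04-01' (after the full-width colon)
    newstime = showinfo.split()[1]                  # '11:02:32'
    return datetime.strptime(newsdate + ' ' + newstime, '%Y-%m-%d %H:%M:%S')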

1. Get the news details from a news URL: a dict, anews

2. Get the news URLs from a list-page URL: append dicts to a list, alist

3. Generate the URLs of all the list pages and fetch all the news: extend a list with lists, allnews

import requests
from bs4 import BeautifulSoup
from datetime import datetime
import locale
import re
locale.setlocale(locale.LC_CTYPE, 'chinese')  # 'chinese' is a Windows-only locale name

def getClickCount(newsUrl):
    newsId = re.findall(r'_(.*)\.html', newsUrl)[0].split('/')[1]   # extract the news id with a regex
    clickUrl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsId)
    clickStr = requests.get(clickUrl).text
    return re.search(r"hits'\)\.html\('(.*)'\);", clickStr).group(1)

def getNewsContent(content):
    # append the article body to a local text file
    with open('gzccNews.txt', 'a', encoding='utf8') as f:
        f.write(content)

def getNewDetail(newsUrl):
    resd = requests.get(newsUrl)  # 返回response
    resd.encoding = 'utf-8'
    soupd = BeautifulSoup(resd.text, 'html.parser')
    print('标题:' + soupd.select('.show-title')[0].text)
    print('链接:' + newsUrl)
    info = soupd.select('.show-info')[0].text
    time = re.search('发布时间:(.*) \xa0\xa0 \xa0\xa0作者:', info).group(1)
    dtime = datetime.strptime(time, '%Y-%m-%d %H:%M:%S')
    if info.find('作者:') > 0:
        author = info[info.find('作者:'):].split()[0].lstrip('作者:')
    else:
        author = '无'
    if info.find('审核:') > 0:
        check = info[info.find('审核:'):].split()[0].lstrip('审核:')
    else:
        check = '无'
    if info.find('来源:') > 0:
        source = info[info.find('来源:'):].split()[0].lstrip('来源:')
    else:
        source = '无'
    if info.find('摄影:') > 0:
        photo = info[info.find('摄影:'):].split()[0].lstrip('摄影:')
    else:
        photo = '无'
    print('发布时间:{}\n作者:{}\n审核:{}\n来源:{}\n摄影:{}'.format(dtime, author, check, source, photo))
    clickCount = getClickCount(newsUrl)
    print('点击次数:' + clickCount)
    content = soupd.select('.show-content')[0].text
    getNewsContent(content)
    # print(content)

def getLiUrl(ListPageUrl):
    res = requests.get(ListPageUrl)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text,'html.parser')
    # print(soup.select('li'))
    for news in soup.select('li'):
        if len(news.select('.news-list-title'))>0:
            a = news.a.attrs['href']
            getNewDetail(a)

firstUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
print('第1页:')
getLiUrl(firstUrl)
res = requests.get(firstUrl)
res.encoding = 'utf-8'
soupn = BeautifulSoup(res.text,'html.parser')
n = int(soupn.select('.a1')[0].text.rstrip('条')) // 10 + 1   # total item count / 10 per page -> page count

# for i in range(2,n):
#     pageUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(i)
#     print('第{}页:'.format(i))
#     getLiUrl(pageUrl)
#     break
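
The code above prints each article as it goes. Steps 1-3 instead describe collecting results into anews (one dict per article), alist (appending those dicts per list page), and allnews (extending across pages). A minimal sketch of that accumulation pattern, reusing requests, BeautifulSoup, getClickCount, and n from above (the function names and dict keys are assumptions matching the step descriptions, not the original code):

def anews(newsUrl):
    # step 1: one news url -> one dict
    resd = requests.get(newsUrl)
    resd.encoding = 'utf-8'
    soupd = BeautifulSoup(resd.text, 'html.parser')
    return {
        'newsTitle': soupd.select('.show-title')[0].text,
        'newsUrl': newsUrl,
        'newsClick': getClickCount(newsUrl),
    }

def alist(listUrl):
    # step 2: one list page -> a list of article dicts
    res = requests.get(listUrl)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    newsList = []
    for news in soup.select('li'):
        if len(news.select('.news-list-title')) > 0:
            newsList.append(anews(news.a.attrs['href']))
    return newsList

# step 3: generate every list-page url and gather all the news
allnews = []
for i in range(2, n):
    pageUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(i)
    allnews.extend(alist(pageUrl))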

Run screenshot:

4. Set a reasonable crawl interval

import time
import random

time.sleep(random.random() * 3)

# set a reasonable crawl interval
for i in range(5):
    time.sleep(random.random() * 3)
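
To make the interval take effect while crawling, the sleep belongs inside the page loop rather than in a standalone demo. A sketch built on the alist/allnews pattern above (assumed, not from the original post):

import time
import random

allnews = []
for i in range(2, n):
    pageUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(i)
    allnews.extend(alist(pageUrl))
    time.sleep(random.random() * 3)   # pause up to 3 seconds between pages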

5. Do simple data processing with pandas and save the result

Save to a csv or excel file

newsdf.to_csv(r'F:\duym\爬虫\gzccnews.csv')

# save the file
import pandas as pd
pd.Series(allnews)                  # (optional) quick look at allnews as a Series
newsdf = pd.DataFrame(allnews)
newsdf.to_csv(r'F:\news.csv')
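
Putting the save step together, including the excel variant mentioned above (a sketch: the output filenames are placeholders, utf-8-sig is my choice so Excel renders the Chinese text correctly, and to_excel needs the openpyxl package installed):

import pandas as pd

newsdf = pd.DataFrame(allnews)
print(newsdf.head())   # quick sanity check of the frame

newsdf.to_csv('gzccnews.csv', encoding='utf-8-sig')   # csv, opens cleanly in Excel
newsdf.to_excel('gzccnews.xlsx')                      # excel, requires openpyxl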

Screenshot:

Original post: https://www.cnblogs.com/lingzihui/p/10711159.html