Retrieving all the information of a news article

Assignment source: https://edu.cnblogs.com/campus/gzcc/GZCC-16SE2/homework/2894

Given the link newsUrl of a news article, retrieve all of the article's information:

Title, author, publishing unit, reviewer, and source

Publication time: convert it to the datetime type
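A minimal sketch of that conversion with `datetime.strptime`, assuming the page's timestamp string has the form `2019-03-20 11:02:42`:

```python
from datetime import datetime

# Hypothetical timestamp string in the format used on the news page
raw = '2019-03-20 11:02:42'
dt = datetime.strptime(raw, '%Y-%m-%d %H:%M:%S')
print(type(dt).__name__)  # datetime
```

Once the value is a datetime object, components such as `dt.year` or comparisons between articles' publication times work directly.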

Click count:

  • newsUrl
  • newsId (extract with the regular-expression module re)
  • clickUrl (str.format(newsId))
  • requests.get(clickUrl)
  • newsClick (via string processing or a regular expression)
  • int()

Wrap the whole process into a single, clear function.
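The click-count steps above can be sketched offline; the API endpoint and the `.html('…');` shape of its response are assumptions based on the gzcc counter service used in this assignment:

```python
import re

def build_click_url(news_url):
    # newsId: the last run of 1-5 digits in the URL, extracted with re
    news_id = re.findall(r'\d{1,5}', news_url)[-1]
    # clickUrl: built with str.format(newsId)
    return 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(news_id)

def extract_click_count(response_text):
    # The counter API is assumed to return JavaScript like: $('#hits').html('215');
    # Strip everything around the number, then convert with int()
    return int(response_text.split('.html')[-1].lstrip("('").rstrip("');"))
```

In the real pipeline, `requests.get(build_click_url(newsUrl)).text` would be fed to `extract_click_count`.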

Code:

import requests
import re
from bs4 import BeautifulSoup
from datetime import datetime
url = 'http://news.gzcc.cn/html/2019/xiaoyuanxinwen_0320/11029.html'

def newsdt(showinfo):
    # showinfo looks like: '发布时间:2019-03-20 11:02:42 作者:... 审核:... 来源:... 点击:'
    newsDate = showinfo.split()[0].split(':')[1]  # date part after the full-width colon
    newsTime = showinfo.split()[1]                 # time part
    newsDT = newsDate + ' ' + newsTime
    return datetime.strptime(newsDT, '%Y-%m-%d %H:%M:%S')  # convert to datetime

def click(url):
    newsID = re.findall(r'(\d{1,5})', url)[-1]  # extract the news id from the URL
    clickUrl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsID)
    res = requests.get(clickUrl)
    # The counter API returns JavaScript of the form ...html('215');
    newsClick = res.text.split('.html')[-1].lstrip("('").rstrip("');")
    return int(newsClick)

def newsid(url):
    newsID = re.findall(r'(\d{1,5})', url)[-1]  # the id is the last run of digits
    return newsID


def news(url):
    res = requests.get(url)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    title = soup.select('.show-title')[0].text  # news title
    showinfo = soup.select('.show-info')[0].text
    newsDT = newsdt(showinfo)        # publication time
    author = showinfo.split()[2]     # author
    check = showinfo.split()[3]      # reviewer
    laiyuan = showinfo.split()[4]    # source
    newsID = newsid(url)             # news id
    newsClick = click(url)           # click count

    print('Title: ' + title)
    print('Publication date: ' + str(newsDT))
    print('News ID: ' + newsID)
    print(author)
    print(check)
    print(laiyuan)
    print('Clicks: ' + str(newsClick))

news(url)

Result: the script prints the title, publication date, news ID, author, reviewer, source, and click count.

Original post: https://www.cnblogs.com/zy5250/p/10651545.html