[Crawler in Practice] Scraping the 100 Python Examples from the Runoob Tutorial

Posted by an administrator on 2022-5-4 09:31:49
import requests
from lxml import etree

# Each of the 100 exercises lives at python-exercise-example1.html ... example100.html
base_url = 'https://www.runoob.com/python/python-exercise-example%s.html'


def get_element(url):
    # Browser-like request headers; the cookie is session-specific and may be stale.
    headers = {
        'cookie': '__gads=Test; Hm_lvt_3eec0b7da6548cf07db3bc477ea905ee=1573454862,1573470948,1573478656,1573713819; Hm_lpvt_3eec0b7da6548cf07db3bc477ea905ee=1573714018; SERVERID=fb669a01438a4693a180d7ad8d474adb|1573713997|1573713863',
        'referer': 'https://www.runoob.com/python/python-100-examples.html',
        'user-agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36'
    }
    response = requests.get(url, headers=headers)
    response.encoding = 'utf-8'  # avoid mojibake on the Chinese-language pages
    return etree.HTML(response.text)


def write_py(i, text):
    # Save each exercise as 练习实例<i>.py ("Exercise <i>.py")
    with open('练习实例%s.py' % i, 'w', encoding='utf-8') as file:
        file.write(text)


def main():
    for i in range(1, 101):
        html = get_element(base_url % i)
        # Problem statement: the second <p> under div#content ("题目" = problem)
        content = '题目:' + html.xpath('//div[@id="content"]/p[2]/text()')[0] + '\n'
        # Program-analysis paragraph ("fenxi" = analysis)
        fenxi = html.xpath('//div[@id="content"]/p[position()>=2]/text()')[0]
        # Highlighted sample code ("daima" = code)
        daima = ''.join(html.xpath('//div[@class="hl-main"]/span/text()')) + '\n'
        # Wrap problem + analysis in a docstring, then append the runnable code.
        # (The original concatenation put the code inside the docstring too,
        #  which left the generated .py file with no executable code.)
        haha = '"""\n' + content + fenxi + '\n"""\n' + daima
        write_py(i, haha)
        print(fenxi)


if __name__ == '__main__':
    main()
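The XPath expressions above are tied to Runoob's page layout (problem text in the second `<p>` under `div#content`, sample code in `<span>`s under `div.hl-main`). Their extraction logic can be checked offline against a small hypothetical HTML snippet shaped like the real page, with no network request:

```python
from lxml import etree

# Hypothetical minimal HTML mimicking the structure of a Runoob exercise page.
sample = '''
<html><body>
<div id="content">
  <h1>Python 练习实例1</h1>
  <p>intro line</p>
  <p>题目:输出指定格式的日期。</p>
</div>
<div class="hl-main"><span>for i in range(1, 5):</span><span>
    print(i)</span></div>
</body></html>
'''

html = etree.HTML(sample)
# Same XPath as the crawler: the second <p> under div#content holds the problem text.
title = html.xpath('//div[@id="content"]/p[2]/text()')[0]
# Joining the span text nodes reassembles the highlighted code block.
code = ''.join(html.xpath('//div[@class="hl-main"]/span/text()'))
print(title)
print(code)
```

If the live site changes its markup, this kind of fixture makes it easy to see which of the two expressions broke.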

Source forum: 东台市机器人学会 (Dongtai Robotics Society), powered by Discuz! X3.4