[Python Web Scraping Series (9)] Scraping the Posters of 哪吒之魔童降世 (Ne Zha)
This post uses the posters of the film 哪吒之魔童降世 as an example of image scraping.
Reference page: the official posters of 哪吒之魔童降世 (the Douban poster gallery).
Scraper logic: [collect paginated page URLs] → [collect data] → [save images]
From the hands-on work in the previous two posts, we can see that the two scraping approaches each have pros and cons. Logic (1) yields relatively detailed information, but it has to collect the per-item page URLs from the main URL before scraping any data, which takes a lot of time; logic (2) scrapes only what is available directly on the first URL, which is fast, but correspondingly yields less information. For this task the image information does not need to be collected from per-item pages; it can be grabbed straight from the listing URL. The difference is the extra step of saving the images, so on top of logic (2) we wrap one more function, function 3, as listed below (a rough skeleton of how the three functions chain together follows the list):
Function 1: get_url(n) → [collect paginated page URLs]; n: number of pages; result: a list of paginated page URLs
Function 2: get_pic(ui, d_h, d_c) → [collect data]; ui: page URL to scrape; d_h: user-agent header info; d_c: cookies info; result: a list of dicts, each holding an image's id and src
Function 3: save(srci) → [save image]; srci: a dict holding one image's id and src
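Putting the three functions together, the driver code is roughly the sketch below; the function bodies are filled in step by step in the rest of this post, and dic_heders / dic_cookies are the header and cookie dicts defined later.

# Rough driver skeleton for the three functions (bodies are defined later in the post)
urls = get_url(5)                                                # function 1: build the 5 paginated page URLs
successed_lst = []
for u in urls:
    successed_lst.extend(get_pic(u, dic_heders, dic_cookies))   # function 2: scrape the id/src of every poster on one page
for pic in successed_lst:
    save(pic)                                                    # function 3: download one image and write it to disk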
What to collect
1) The src attribute inside the img tags
Opening the corresponding URL, the page content looks like this:
2) Collect 150 records (which is all of the poster data)
5 pages to collect in the [paginated page URL collection] step, 30 records per page
3) Save the results in one folder, and name the images in a batch-friendly way
Finding the URL pattern
The URLs of pages 2-4 of the 哪吒 posters are as follows:
u2 = "..."   # URL of page 2
u3 = "..."   # URL of page 3
u4 = "..."   # URL of page 4 (the full URLs are omitted here; they differ only in the paging offset)

urllst = []
for i in range(5):
    ui = "..."          # page URL with the paging offset filled in as i*30
    urllst.append(ui)
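The exact Douban URLs are not reproduced in this post. For reference, Douban poster listing pages generally differ only in the start query parameter, which advances by 30 per page; the sketch below shows that assumed pattern, and both the subject id (26794435) and the query string are illustrative assumptions rather than values taken from the original post.

# Assumed URL pattern; the subject id and query parameters are illustrative, not copied from the source
urllst = []
for i in range(5):
    ui = "https://movie.douban.com/subject/26794435/photos?type=R&start={}".format(i * 30)
    urllst.append(ui)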
Importing the libraries and writing the first function
import requests
from bs4 import BeautifulSoup

def get_url(n):
    '''
    [collect paginated page URLs]
    n: number of pages
    returns: a list of paginated page URLs
    '''
    lst = []
    for i in range(n):
        ui = "..."      # paginated poster-page URL (offset i*30); full URL omitted here
        # print(ui)
        lst.append(ui)
    return lst

print(get_url(5))
The output is:
['...', '...', '...', '...', '...']      # the five paginated page URLs (URLs omitted here)

Next, set up the request headers and cookies:

dic_heders = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36'
}

dic_cookies = {}
cookies = 'll="108296"; bid=b9z-Z1JF8wQ; _vwo_uuid_v2=DDF408830197B90007427EFEAB67DF985|b500ed9e7e3b5f6efec01c709b7000c3; douban-fav-remind=1; __yadk_uid=2D7qqvQghjfgVOD0jdPUlybUNa2MBZbz; gr_user_id=5943e535-83de-4105-b840-65b7a0cc92e1; dbcl2="150296873:VHh1cXhumGU"; push_noty_num=0; push_doumail_num=0; __utmv=30149280.15029; __gads=ID=dcc053bfe97d2b3c:T=1579101910:S=ALNI_Ma5JEn6w7PLu-iTttZOFRZbG4sHCw; ct=y; Hm_lvt_cfafef0aa0076ffb1a7838fd772f844d=1579102240; __utmz=81379588.1579138975.5.5.utmcsr=baidu|utmccn=(organic)|utmcmd=organic; __utmz=30149280.1579162528.9.8.utmcsr=baidu|utmccn=(organic)|utmcmd=organic; ck=csBn; _pk_ref.100001.3ac3=%5B%22%22%2C%22%22%2C1581081161%2C%22_pk_ses.100001.3ac3=*; __utma=30149280.195590675.1570957615.1581050101.1581081161.16; __utmc=30149280; __utma=81379588.834351582.1571800818.1581050101.1581081161.12; __utmc=81379588; ap_v=0,6.0; gr_session_id_22c937bbd8ebd703f2d8e9445f7dfd03=b6a046c7-15eb-4e77-bc26-3a7af29e68b1; gr_cs1_b6a046c7-15eb-4e77-bc26-3a7af29e68b1=user_id%3A1; gr_session_id_22c937bbd8ebd703f2d8e9445f7dfd03_b6a046c7-15eb-4e77-bc26-3a7af29e68b1=true; _pk_id.100001.3ac3=6ec264aefc5132a2.1571800818.12.1581082472.1581050101.; __utmb=30149280.7.10.1581081161; __utmb=81379588.7.10.1581081161'

cookies_lst = cookies.split("; ")
for i in cookies_lst:
    # split only on the first '=' so values that themselves contain '=' are kept whole
    dic_cookies[i.split("=", 1)[0]] = i.split("=", 1)[1]
Requesting and parsing the page
First, request the page to check that the URLs output above can actually be accessed, taking one of them for a trial run:
urllst_1 = get_url(5)
u1 = urllst_1[0]
r1 = requests.get(u1, headers=dic_heders, cookies=dic_cookies)
print(r1)
The output is a <Response [200]> object, which means the page can be requested normally.
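The status can also be checked explicitly instead of eyeballing the printed object; raise_for_status() throws an exception for 4xx/5xx responses, which makes broken pages easier to catch later in a loop. A small check using standard requests calls:

print(r1.status_code)      # expect 200 for a normal page
r1.raise_for_status()      # raises requests.HTTPError for 4xx/5xx responses
print(r1.encoding)         # the text encoding requests guessed for the page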
So pick one poster's tag and do a trial run to see whether the poster's id and src can be extracted correctly (the id makes it easy to name the file when saving; the src is the image itself):
urllst_1 = get_url(5)                                   # get the URLs of the five poster pages
u1 = urllst_1[0]                                        # take the first page
r1 = requests.get(u1, headers=dic_heders, cookies=dic_cookies)   # request the page
soup_1 = BeautifulSoup(r1.text, 'lxml')                 # parse the page
ul = soup_1.find("ul", class_="poster-col3 clearfix")   # the ul tag holding all the posters
lis = ul.find_all('li')                                 # the li tags, one per poster
li_1 = lis[0]                                           # take the first poster
dic_pic = {}                                            # dict for this poster's id and src
dic_pic['pic_id'] = li_1['data-id']
dic_pic['pic_src'] = li_1.find('img')['src']
print(dic_pic)
The output is:
{'pic_id': '2563780504', 'pic_src': '...'}      # the src URL is omitted here

Both the id and the src come out correctly, so wrap the extraction into the second function:

def get_pic(ui, d_h, d_c):
    '''
    [collect data]
    ui: page URL to scrape
    d_h: user-agent header info
    d_c: cookies info
    returns: a list of dicts, each holding an image's id and src
    '''
    ri = requests.get(ui, headers=d_h, cookies=d_c)
    soup_i = BeautifulSoup(ri.text, 'lxml')
    ul = soup_i.find("ul", class_="poster-col3 clearfix")
    lis = ul.find_all('li')
    pic_lst = []
    for li in lis:
        dic_pic = {}
        dic_pic['pic_id'] = li['data-id']
        dic_pic['pic_src'] = li.find('img')['src']
        pic_lst.append(dic_pic)
    return pic_lst

urllst_1 = get_url(5)
u1 = urllst_1[0]
print(get_pic(u1, dic_heders, dic_cookies))
print(len(get_pic(u1, dic_heders, dic_cookies)))
The output is as follows (only part of the poster info is shown; there are 30 records in total, so all the posters on the first page were scraped):
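One caveat: if Douban returns a login or anti-scraping page instead of the poster list, soup_i.find(...) returns None and the following find_all call raises AttributeError, so the whole page ends up in the error list. A hedged defensive variant of the same extraction (get_pic_safe is just an illustrative name, not part of the original code):

import requests
from bs4 import BeautifulSoup

def get_pic_safe(ui, d_h, d_c):
    '''Same extraction as get_pic above, but returns [] when the poster list is missing.'''
    ri = requests.get(ui, headers=d_h, cookies=d_c)
    soup_i = BeautifulSoup(ri.text, 'lxml')
    ul = soup_i.find("ul", class_="poster-col3 clearfix")
    if ul is None:                     # e.g. a login wall or anti-scraping page came back
        return []
    return [{'pic_id': li['data-id'], 'pic_src': li.find('img')['src']}
            for li in ul.find_all('li')]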
Next, to collect the poster info from all five pages, add a for loop and create two lists, one for the records scraped successfully and one for the pages that failed, as follows:
successed_lst = []
error_lst = []
for u in urllst_1:
    try:
        successed_lst.extend(get_pic(u, dic_heders, dic_cookies))
        print("成功采集{}条海报信息".format(len(successed_lst)))
    except:
        error_lst.append(u)
        print('海报信息采集失败,网址为:', u)
The output is:
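The loop above fires the five page requests back to back. Adding a short pause between pages is gentler on the site and lowers the risk of being rate-limited; the sketch below is the same loop with a random 1-2 second delay added (the delay length is an arbitrary choice, not something from the original post):

import random
import time

for u in urllst_1:
    try:
        successed_lst.extend(get_pic(u, dic_heders, dic_cookies))
        print("成功采集{}条海报信息".format(len(successed_lst)))
    except:
        error_lst.append(u)
        print('海报信息采集失败,网址为:', u)
    time.sleep(random.uniform(1, 2))   # pause 1-2 seconds between page requests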
Writing the third function
This part saves the poster info obtained above to the local disk. As before, start with the first poster and do a trial save:
pic = successed_lst[0]
img = requests.get(pic['pic_src'])
print(img)
The output is:
with open(pic['pic_id'] + '.jpg', 'wb') as f:   # name the file by the image id; the with block closes it automatically
    f.write(img.content)
The result is shown below (if no path is specified, the generated image is placed in the same directory as the code file).
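For larger images it can be worth streaming the download instead of holding the whole body in memory; requests supports this with stream=True and iter_content. A sketch using the same pic dict as in the trial run above:

resp = requests.get(pic['pic_src'], stream=True)      # stream the image instead of loading it all at once
resp.raise_for_status()
with open(pic['pic_id'] + '.jpg', 'wb') as f:
    for chunk in resp.iter_content(chunk_size=8192):  # write the image in 8 KB chunks
        f.write(chunk)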
That completes the trial run of saving an image locally. Next, wrap it into the third function so all the images can be saved.
def save(srci):
    '''srci: a dict holding one image's id and src'''
    img = requests.get(srci['pic_src'])
    with open(srci['pic_id'] + '.jpg', 'wb') as f:   # name the file by this image's id (use srci, not the outer pic variable)
        f.write(img.content)
Finally, call function 3 to save the poster files:
import os

path = os.getcwd()
if not os.path.exists('哪吒海报'):
    os.mkdir('哪吒海报')
os.chdir(path + "/" + '哪吒海报')

n = 1
for pic in successed_lst:
    try:
        save(pic)
        print('成功储存{}张海报'.format(n))
        n += 1
    except:
        print('图片存储失败,对应的src为:', pic)
        continue
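A small design note: os.chdir changes the working directory for the whole process, which can surprise any code that runs afterwards. Building the full file path with os.path.join avoids that; the sketch below rewrites the save step this way (save_to and save_dir are illustrative names, not part of the original code):

import os
import requests

save_dir = os.path.join(os.getcwd(), '哪吒海报')
os.makedirs(save_dir, exist_ok=True)                  # create the folder if it does not exist yet

def save_to(srci, folder):
    '''Download one poster and write it into folder, named by its id.'''
    img = requests.get(srci['pic_src'])
    with open(os.path.join(folder, srci['pic_id'] + '.jpg'), 'wb') as f:
        f.write(img.content)

for pic in successed_lst:
    save_to(pic, save_dir)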
That wraps up the whole project of scraping the posters of 哪吒之魔童降世. The full code and the output are below.
Full code and output
import requests
from bs4 import BeautifulSoup
import pandas as pd      # imported in the original script but not actually used

def get_url(n):
    '''
    [collect paginated page URLs]
    n: number of pages
    returns: a list of paginated page URLs
    '''
    lst = []
    for i in range(n):
        ui = "..."      # paginated poster-page URL (offset i*30); full URL omitted here
        # print(ui)
        lst.append(ui)
    return lst

def get_pic(ui, d_h, d_c):
    '''
    [collect data]
    ui: page URL to scrape
    d_h: user-agent header info
    d_c: cookies info
    returns: a list of dicts, each holding an image's id and src
    '''
    ri = requests.get(ui, headers=d_h, cookies=d_c)
    soup_i = BeautifulSoup(ri.text, 'lxml')
    ul = soup_i.find("ul", class_="poster-col3 clearfix")
    lis = ul.find_all('li')
    pic_lst = []
    for li in lis:
        dic_pic = {}
        dic_pic['pic_id'] = li['data-id']
        dic_pic['pic_src'] = li.find('img')['src']
        pic_lst.append(dic_pic)
    return pic_lst

def save(srci):
    '''srci: a dict holding one image's id and src'''
    img = requests.get(srci['pic_src'])
    with open(srci['pic_id'] + '.jpg', 'wb') as f:
        f.write(img.content)

if __name__ == "__main__":
    dic_heders = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36'
    }
    dic_cookies = {}
    cookies = 'll="108296"; bid=b9z-Z1JF8wQ; _vwo_uuid_v2=DDF408830197B90007427EFEAB67DF985|b500ed9e7e3b5f6efec01c709b7000c3; douban-fav-remind=1; __yadk_uid=2D7qqvQghjfgVOD0jdPUlybUNa2MBZbz; gr_user_id=5943e535-83de-4105-b840-65b7a0cc92e1; dbcl2="150296873:VHh1cXhumGU"; push_noty_num=0; push_doumail_num=0; __utmv=30149280.15029; __gads=ID=dcc053bfe97d2b3c:T=1579101910:S=ALNI_Ma5JEn6w7PLu-iTttZOFRZbG4sHCw; ct=y; Hm_lvt_cfafef0aa0076ffb1a7838fd772f844d=1579102240; __utmz=81379588.1579138975.5.5.utmcsr=baidu|utmccn=(organic)|utmcmd=organic; __utmz=30149280.1579162528.9.8.utmcsr=baidu|utmccn=(organic)|utmcmd=organic; ck=csBn; _pk_ref.100001.3ac3=%5B%22%22%2C%22%22%2C1581081161%2C%22_pk_ses.100001.3ac3=*; __utma=30149280.195590675.1570957615.1581050101.1581081161.16; __utmc=30149280; __utma=81379588.834351582.1571800818.1581050101.1581081161.12; __utmc=81379588; ap_v=0,6.0; gr_session_id_22c937bbd8ebd703f2d8e9445f7dfd03=b6a046c7-15eb-4e77-bc26-3a7af29e68b1; gr_cs1_b6a046c7-15eb-4e77-bc26-3a7af29e68b1=user_id%3A1; gr_session_id_22c937bbd8ebd703f2d8e9445f7dfd03_b6a046c7-15eb-4e77-bc26-3a7af29e68b1=true; _pk_id.100001.3ac3=6ec264aefc5132a2.1571800818.12.1581082472.1581050101.; __utmb=30149280.7.10.1581081161; __utmb=81379588.7.10.1581081161'
    cookies_lst = cookies.split("; ")
    for i in cookies_lst:
        # split only on the first '=' so values that themselves contain '=' are kept whole
        dic_cookies[i.split("=", 1)[0]] = i.split("=", 1)[1]

    urllst_1 = get_url(5)
    u1 = urllst_1[0]
    # print(get_pic(u1, dic_heders, dic_cookies))
    # print(len(get_pic(u1, dic_heders, dic_cookies)))

    successed_lst = []
    error_lst = []
    for u in urllst_1:
        try:
            successed_lst.extend(get_pic(u, dic_heders, dic_cookies))
            print("成功采集{}条海报信息".format(len(successed_lst)))
        except:
            error_lst.append(u)
            print('海报信息采集失败,网址为:', u)

    import os
    path = os.getcwd()
    if not os.path.exists('哪吒海报'):
        os.mkdir('哪吒海报')
    os.chdir(path + "/" + '哪吒海报')

    n = 1
    for pic in successed_lst:
        try:
            save(pic)
            print('成功储存{}张海报'.format(n))
            n += 1
        except:
            print('图片存储失败,对应的src为:', pic)
            continue
The output is as follows: