A Python + Selenium code example for scraping Weibo data
This script scrapes a given user's Weibo feed, collecting their posts across all time periods.
The approach:
create driver ---- get page ---- locate and extract data ---- save to csv ---- turn page ---- get page (loop starts) ---- ... ---- stop when there is no "next page" link.
The loop is a plain while True rather than a self-calling (recursive) function; a skeleton of it is sketched below.
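In skeleton form, the pagination loop looks like this (a minimal sketch; get_web, get_data, save_csv, and next_page_url are the functions defined in the full code further down):

while True:
    get_web(weibo_url)              # open the page and scroll to load the feed
    info_list = get_data()          # extract one page of posts
    save_csv(info_list, csv_name)   # append the rows to the csv
    next_url = next_page_url()      # href of the "next page" link, or None
    if next_url:
        weibo_url = next_url        # loop again with the next page
    else:
        break                       # no "next page" link: we are done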
嘟大海's Weibo: https://weibo.com/u/1623915527
辦公室小野's Weibo: https://weibo.com/bgsxy
The code is as follows:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import csv
import os
import time

# Only these two settings matter: to scrape a different user, change the
# profile URL and the target csv name here.
weibo_url = 'https://weibo.com/bgsxy?profile_ftype=1&is_all=1#_0'
csv_name = 'bgsxy_allweibo.csv'

def start_chrome():
    print('Starting the browser')
    driver = webdriver.Chrome(executable_path='C:/Users/lori/Desktop/python52project/chromedriver_win32/chromedriver.exe')
    driver.start_client()
    return driver

def get_web(url):  # open the page and scroll to the bottom
    print('Opening the target page')
    driver.get(url)
    time.sleep(7)
    scroll_down()
    time.sleep(5)

def scroll_down():  # press END repeatedly so the lazy-loaded feed fills in
    html_page = driver.find_element_by_tag_name('html')
    for i in range(7):
        print(i)
        html_page.send_keys(Keys.END)
        time.sleep(1)

def get_data():
    print('Locating and extracting data')
    card_sel = 'div.WB_cardwrap.WB_feed_type'
    time_sel = "a.S_txt2[node-type='feed_list_item_date']"
    source_sel = "a.S_txt2[suda-uatrack='key=profile_feed&value=pubfrom_guest']"
    content_sel = 'div.WB_text.W_f14'
    interact_sel = 'span.line.S_line1>span>em:nth-child(2)'
    cards = driver.find_elements_by_css_selector(card_sel)
    info_list = []
    for card in cards:
        # A card may contain two date elements; the first one is the one we want.
        # (Named pub_time rather than time so it does not shadow the time module.)
        pub_time = card.find_elements_by_css_selector(time_sel)[0].text
        if card.find_elements_by_css_selector(source_sel):
            source = card.find_elements_by_css_selector(source_sel)[0].text
        else:
            source = ''
        content = card.find_elements_by_css_selector(content_sel)[0].text
        link = card.find_elements_by_css_selector(time_sel)[0].get_attribute('href')
        trans = card.find_elements_by_css_selector(interact_sel)[1].text
        comment = card.find_elements_by_css_selector(interact_sel)[2].text
        like = card.find_elements_by_css_selector(interact_sel)[3].text
        info_list.append([pub_time, source, content, link, trans, comment, like])
    return info_list

def save_csv(info_list, csv_name):
    csv_path = './' + csv_name
    print('Writing to the csv file')
    if os.path.exists(csv_path):
        # newline='' avoids blank rows; utf-8-sig handles Chinese text
        # more reliably than plain utf-8.
        with open(csv_path, 'a', newline='', encoding='utf-8-sig') as f:
            writer = csv.writer(f)
            writer.writerows(info_list)
    else:
        with open(csv_path, 'w+', newline='', encoding='utf-8-sig') as f:
            writer = csv.writer(f)
            writer.writerow(['publish time', 'source', 'content', 'link', 'reposts', 'comments', 'likes'])
            writer.writerows(info_list)
    time.sleep(5)

def next_page_url():
    next_page_sel = 'a.page.next'
    next_page_ele = driver.find_elements_by_css_selector(next_page_sel)
    if next_page_ele:
        return next_page_ele[0].get_attribute('href')
    else:
        return None

driver = start_chrome()
input('Please log in to weibo.com in Chrome, then press Enter')  # pause the script for manual login

while True:
    get_web(weibo_url)
    info_list = get_data()
    save_csv(info_list, csv_name)
    if next_page_url():
        weibo_url = next_page_url()
    else:
        print('Scraping finished')
        break
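Note that the code above uses the old Selenium 3 API. On Selenium 4 and newer, the find_element_by_* helpers and the executable_path argument have been removed. A minimal sketch of the equivalent calls, assuming Selenium 4 (the chromedriver path below is a placeholder; you can also omit the Service entirely and let Selenium Manager resolve the driver):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service

# Selenium 4 style: pass the driver path through a Service object.
# 'C:/path/to/chromedriver.exe' is a hypothetical path; adjust for your machine.
driver = webdriver.Chrome(service=Service('C:/path/to/chromedriver.exe'))

# was: driver.find_element_by_tag_name('html')
html_page = driver.find_element(By.TAG_NAME, 'html')

# was: driver.find_elements_by_css_selector(card_sel)
cards = driver.find_elements(By.CSS_SELECTOR, 'div.WB_cardwrap.WB_feed_type')

The rest of the script ports over mechanically by replacing each find_element(s)_by_css_selector call with find_element(s)(By.CSS_SELECTOR, ...).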
That's all for this article. I hope it helps with your learning, and I hope you'll continue to support 好吧啦網(wǎng).