V2EX  ›  Python

[Newbie help] When scraping images from a site, every other link is handled fine, but one particular link always raises an error

15874103329 · 54 days ago · 532 views
    This topic was created 54 days ago; the information in it may be out of date.
    The error is raised at line 59 of my script (the `json.loads` call in `xiangqingye_jiexi`), with the message: Unterminated string starting at: line 1 column 1 (char 0)
    What I can't figure out is why the other links all work fine, but this one link fails every single time.
import requests
from urllib.parse import urlencode
from requests.exceptions import RequestException
import random
import json
from bs4 import BeautifulSoup
import re

headers_chi = [
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.22 Safari/537.36 SE 2.X MetaSr 1.0',
    'Mozilla/5.0 (Windows NT 6.1; rv:49.0) Gecko/20100101 Firefox/49.0',
    'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.2; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0'
]

def shouye_dizhi():
    # Fetch the gallery-search index page as JSON text.
    data = {
        'offset': '0',
        'format': 'json',
        'keyword': '美女',
        'autoload': 'true',
        'count': '20',
        'cur_tab': '3',
        'from': 'gallery'
    }
    url = 'https://www.toutiao.com/search_content/?' + urlencode(data)
    try:
        headers = {'User-Agent': random.choice(headers_chi)}
        dizhi = requests.get(url, headers=headers)
        if dizhi.status_code == 200:
            return dizhi.text
    except RequestException:
        print('Failed to load the index page')
    return None

def shouye_xiangqing(html):
    # Yield each detail-page URL from the index JSON.
    data = json.loads(html)
    if data and 'data' in data.keys():
        for item in data.get('data'):
            yield item.get('article_url')

def xiangqingye_dizhi(url):
    # Fetch one detail page with a randomly chosen User-Agent.
    try:
        headers = {'User-Agent': random.choice(headers_chi)}
        dizhi = requests.get(url, headers=headers)
        if dizhi.status_code == 200:
            return dizhi.text
    except RequestException:
        print('Failed to load the detail page')
    return None

def xiangqingye_jiexi(html, url):
    # Parse a detail page: title from <title>, image URLs from the
    # JSON.parse(...) blob embedded in the page's JavaScript.
    jiexi = BeautifulSoup(html, 'lxml')
    title = jiexi.select('title')[0].get_text()
    print(title)
    zhengze = re.compile(r'JSON.parse\(([\s\S]*?)\)')
    jieguo = re.search(zhengze, html)
    # The capture is a JSON string that itself contains JSON, hence the
    # double json.loads.
    data = json.loads(json.loads(jieguo.group(1)))
    if data and 'sub_images' in data.keys():
        sub_images = data.get('sub_images')
        items = [item.get('url') for item in sub_images]
        return {
            'title': title,
            'url': url,
            'items': items
        }

def main():
    html = shouye_dizhi()
    for url in shouye_xiangqing(html):
        html = xiangqingye_dizhi(url)
        tupian = xiangqingye_jiexi(html, url)
        print(tupian)

if __name__ == "__main__":
    main()
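A plausible explanation for the error, though not confirmed anywhere in the thread: the lazy pattern `JSON.parse\(([\s\S]*?)\)` stops at the first `)` after the opening parenthesis. If the embedded JSON itself contains a `)` (for example, in an article title), the captured string is cut short and `json.loads` fails with exactly this "Unterminated string" message. A minimal, self-contained reproduction (the page fragment is invented for illustration):

```python
import json
import re

# Invented page fragment for illustration; note the ")" inside the title.
html = 'gallery: JSON.parse("{\\"title\\": \\"photo (1)\\"}");'

# The post's lazy pattern stops at the FIRST ")" and truncates the JSON.
lazy = re.search(r'JSON\.parse\(([\s\S]*?)\)', html)
try:
    json.loads(json.loads(lazy.group(1)))
except json.JSONDecodeError as e:
    print(e)  # Unterminated string starting at: line 1 column 1 (char 0)

# A greedy match anchored on the closing ");" captures the whole string.
greedy = re.search(r'JSON\.parse\(([\s\S]*)\);', html)
data = json.loads(json.loads(greedy.group(1)))
print(data['title'])  # photo (1)
```

This would also explain why only one link fails: only that page's JSON happens to contain a `)`. On a real page a more specific anchor (e.g. the variable name before `JSON.parse`) would be safer than a bare greedy match.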
4 replies  |  until 2018-12-28 15:54:33 +08:00
    1
15874103329   54 days ago
Could someone please take a look?
    2
hp66722667   54 days ago
With this many ifs and the indentation all broken, there's not much anyone can do. Format the code properly and someone might take a look.
    3
careofzm   54 days ago   ♥ 1
It runs for me; I only changed one thing.

    4
15874103329   54 days ago
@careofzm Thanks a lot!
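If the goal is just to keep the crawl going, the parsing step can also be made defensive, so one bad page is skipped instead of crashing the whole run. A sketch (the function name and the greedy regex anchor are my own, not from the thread):

```python
import json
import re

def extract_sub_images(html):
    """Pull the sub_images list out of the embedded JSON.parse(...) blob,
    returning None instead of raising when the page doesn't match."""
    match = re.search(r'JSON\.parse\(([\s\S]*)\);', html)  # greedy, up to ");"
    if match is None:  # blob missing entirely
        return None
    try:
        # Double loads: the capture is a JSON string that contains JSON.
        data = json.loads(json.loads(match.group(1)))
    except json.JSONDecodeError:  # truncated or malformed blob
        return None
    return data.get('sub_images')

print(extract_sub_images('no blob here'))  # None
```

The main loop can then `continue` whenever the function returns None, logging the offending URL for later inspection.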