Scrapy uses Request and Response objects for crawling web sites. Typically, Request objects are generated in the spiders and pass across the system until they reach the Downloader, which executes the request and returns a Response object which travels back to the spider that issued the request.
Both Request and Response classes have subclasses which add functionality not required in the base classes. These are described below in Request subclasses and Response subclasses.
scrapy.http.Request(url[, callback, method='GET', headers, body, cookies, meta, encoding='utf-8', priority=0, dont_filter=False, errback])
A Request object represents an HTTP request, which is usually generated in the Spider and executed by the Downloader, thus generating a Response.
url
A string containing the URL of this request. Keep in mind that this attribute contains the escaped URL, so it can differ from the URL passed in the constructor.
This attribute is read-only. To change the URL of a Request use replace().
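A hedged sketch of both points (the URLs are illustrative, and the exact escaped form depends on the Scrapy version):
import scrapy

request = scrapy.Request("http://www.example.com/some page")
print(request.url)  # stored escaped, e.g. "http://www.example.com/some%20page"

# url cannot be assigned directly; create a modified copy instead:
new_request = request.replace(url="http://www.example.com/other_page.html")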
method
A string representing the HTTP method in the request. This is guaranteed to be uppercase. Example: "GET", "POST", "PUT", etc.
headers
A dictionary-like object which contains the request headers.
body
A str that contains the request body.
This attribute is read-only. To change the body of a Request use replace().
meta
A dict that contains arbitrary metadata for this request. This dict is empty for new Requests, and is usually populated by different Scrapy components (extensions, middlewares, etc), so the data contained in this dict depends on the extensions you have enabled.
See Request.meta special keys for a list of special meta keys recognized by Scrapy.
This dict is shallow copied when the request is cloned using the copy() or replace() methods, and can also be accessed, in your spider, from the response.meta attribute.
copy()
Return a new Request which is a copy of this Request. See also: Passing additional data to callback functions.
replace([url, method, headers, body, cookies, meta, encoding, dont_filter, callback, errback])
Return a Request object with the same members, except for those members given new values by whichever keyword arguments are specified. The attribute Request.meta is copied by default (unless a new value is given in the meta argument). See also Passing additional data to callback functions.
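A short sketch of replace() (the URL and meta keys here are illustrative):
import scrapy

request = scrapy.Request("http://www.example.com/page.html",
                         meta={'item': {}})

# same request, but as a POST; other members, including meta, are copied over
post_request = request.replace(method='POST')

# passing meta replaces the copied dict instead of merging into it
other_request = request.replace(meta={'dont_retry': True})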
The callback of a request is a function that will be called when the response of that request is downloaded. The callback function will be called with the downloaded Response object as its first argument.
Example:
def parse_page1(self, response):
    return scrapy.Request("http://www.example.com/some_page.html",
                          callback=self.parse_page2)

def parse_page2(self, response):
    # this would log http://www.example.com/some_page.html
    self.log("Visited %s" % response.url)
In some cases you may be interested in passing arguments to those callback functions so you can receive the arguments later, in the second callback. You can use the Request.meta attribute for that.
Here's an example of how to pass an item using this mechanism, to populate different fields from different pages:
def parse_page1(self, response):
    item = MyItem()
    item['main_url'] = response.url
    request = scrapy.Request("http://www.example.com/some_page.html",
                             callback=self.parse_page2)
    request.meta['item'] = item
    return request

def parse_page2(self, response):
    item = response.meta['item']
    item['other_url'] = response.url
    return item
The Request.meta attribute can contain any arbitrary data, but there are some special keys recognized by Scrapy and its built-in extensions. Those are:
dont_redirect
dont_retry
handle_httpstatus_list
dont_merge_cookies (see the cookies parameter of the Request constructor)
cookiejar
redirect_urls
bindaddress: the outgoing IP address to use for performing the request.
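As a hedged illustration, these keys are ordinary dict entries set on the request's meta (the URL, status list, and IP address are just examples):
import scrapy

request = scrapy.Request(
    "http://www.example.com/maybe-redirects",
    meta={
        'dont_redirect': True,            # hand any 3xx response to the spider
        'handle_httpstatus_list': [404],  # let the callback handle 404s too
        'bindaddress': '192.168.1.10',    # outgoing IP address for this request
    })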
Here is the list of built-in Request subclasses. You can also subclass it to implement your own custom functionality.
The FormRequest class extends the base Request with functionality for dealing with HTML forms. It uses lxml.html forms to pre-populate form fields with form data from Response objects.
scrapy.http.FormRequest(url[, formdata, ...])
The FormRequest class adds a new argument to the constructor. The remaining arguments are the same as for the Request class and are not documented here.
Parameters: formdata (dict or iterable of tuples) – a dictionary (or iterable of (key, value) tuples) containing HTML Form data which will be url-encoded and assigned to the body of the request.
The FormRequest objects support the following class method in addition to the standard Request methods:
from_response(response[, formname=None, formnumber=0, formdata=None, formxpath=None, clickdata=None, dont_click=False, ...])
Returns a new FormRequest object with its form field values pre-populated with those found in the HTML <form> element contained in the given response. For an example see the section below on using FormRequest.from_response() to simulate a user login.
The policy is to automatically simulate a click, by default, on any form control that looks clickable, like a <input type="submit">. Even though this is quite convenient, and often the desired behaviour, sometimes it can cause problems which could be hard to debug. For example, when working with forms that are filled and/or submitted using javascript, the default from_response() behaviour may not be the most appropriate. To disable this behaviour you can set the dont_click argument to True. Also, if you want to change the control clicked (instead of disabling it), you can use the clickdata argument.
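A hedged sketch of both options, assuming this runs inside a spider callback (the form field names and the clickdata value are illustrative):
# fill the form but do not simulate clicking any submit control
request = scrapy.FormRequest.from_response(
    response,
    formdata={'q': 'scrapy'},
    dont_click=True,
    callback=self.after_search)

# or choose which control gets "clicked" by matching its attributes
request = scrapy.FormRequest.from_response(
    response,
    formdata={'q': 'scrapy'},
    clickdata={'name': 'search_button'},
    callback=self.after_search)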
The other parameters of this class method are passed directly to the FormRequest constructor.
New in version 0.10.3: the formname parameter.
New in version 0.17: the formxpath parameter.
If you want to simulate an HTML Form POST in your spider and send a couple of key-value fields, you can return a FormRequest object (from your spider) like this:
return [FormRequest(url="http://www.example.com/post/action",
                    formdata={'name': 'John Doe', 'age': '27'},
                    callback=self.after_post)]
It is usual for web sites to provide pre-populated form fields through <input type="hidden"> elements, such as session-related data or authentication tokens (for login pages). When scraping with Scrapy, if you want these fields to be pre-populated and only override a couple of them, such as the user name and password, you can use the FormRequest.from_response() method. Here's an example spider which uses it:
import scrapy

class LoginSpider(scrapy.Spider):
    name = 'example.com'
    start_urls = ['http://www.example.com/users/login.php']

    def parse(self, response):
        return scrapy.FormRequest.from_response(
            response,
            formdata={'username': 'john', 'password': 'secret'},
            callback=self.after_login
        )

    def after_login(self, response):
        # check login succeed before going on
        if "authentication failed" in response.body:
            self.log("Login failed", level=scrapy.log.ERROR)
            return

        # continue scraping with authenticated session...
scrapy.http.Response(url[, status=200, headers, body, flags])
A Response object represents an HTTP response, which is usually downloaded (by the Downloader) and fed to the Spiders for processing.
url
A string containing the URL of the response.
This attribute is read-only. To change the URL of a Response use replace().
status
An integer representing the HTTP status of the response. Example: 200, 404.
headers
A dictionary-like object which contains the response headers.
body
A str containing the body of this Response. Keep in mind that Response.body is always a str. If you want the unicode version use TextResponse.body_as_unicode() (only available in TextResponse and subclasses).
This attribute is read-only. To change the body of a Response use replace().
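Putting the attributes above together, a minimal hedged callback sketch (the header name and log message are illustrative):
def parse(self, response):
    # status is an int, headers is dict-like, body is a raw str
    if response.status == 200:
        content_type = response.headers.get('Content-Type')
        self.log("Got %d bytes of %s from %s"
                 % (len(response.body), content_type, response.url))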
request
The Request object that generated this response. This attribute is assigned in the Scrapy engine, after the response and the request have passed through all Downloader Middlewares. In particular, this means that:
- HTTP redirections will cause the original request (to the URL before redirection) to be reassigned to the redirected response.
- Response.request.url doesn't always equal Response.url.
- This attribute is only available in the spider code and in the Spider Middlewares, but not in Downloader Middlewares (although you have the Request available there by other means) and handlers of the response_downloaded signal.
meta
A shortcut to the Request.meta attribute of the Response.request object (ie. self.request.meta).
Unlike the Response.request attribute, the Response.meta attribute is propagated along redirects and retries, so you will get the original Request.meta sent from your spider.
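A brief hedged sketch of the distinction (assuming the original request carried an 'item' key in its meta):
def parse(self, response):
    # even if the request was redirected or retried on the way here,
    # response.meta still carries the meta dict set on the original request
    item = response.meta.get('item')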
See also: the Request.meta attribute.
flags
A list that contains flags for this response. Flags are labels used for tagging Responses. For example: 'cached', 'redirected', etc. And they're shown on the string representation of the Response (__str__ method) which is used by the engine for logging.
copy()
Returns a new Response which is a copy of this Response.
replace([url, status, headers, body, request, flags, cls])
Returns a Response object with the same members, except for those members given new values by whichever keyword arguments are specified. The attribute Response.meta is copied by default.
Here is the list of available built-in Response subclasses. You can also subclass the Response class to implement your own functionality.
scrapy.http.TextResponse(url[, encoding[, ...]])
TextResponse objects add encoding capabilities to the base Response class, which is meant to be used only for binary data, such as images, sounds or any media file.
TextResponse objects support a new constructor argument, in addition to the base Response objects. The remaining functionality is the same as for the Response class and is not documented here.
Parameters: encoding (string) – the encoding to use for this response. If you create a TextResponse object with a unicode body, it will be encoded using this encoding (remember the body attribute is always a string). If encoding is None (the default value), the encoding will be looked up in the response headers and body instead.
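For instance, a hedged sketch of the encoding argument (the URL and body are illustrative):
from scrapy.http import TextResponse

# a unicode body must be encoded; the body attribute stores the encoded str
response = TextResponse(url="http://www.example.com/",
                        body=u"\u00a3 15",
                        encoding='utf-8')
print(response.encoding)           # 'utf-8'
print(response.body_as_unicode())  # u'\xa3 15'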
TextResponse objects support the following attributes in addition to the standard Response ones:
encoding
A string with the encoding of this response. The encoding is resolved by trying the following mechanisms, in order:
1. the encoding passed in the constructor encoding argument
2. the encoding declared in the Content-Type HTTP header. If this encoding is not valid (ie. unknown), it is ignored and the next resolution mechanism is tried.
3. the encoding declared in the response body. The TextResponse class doesn't provide any special functionality for this; however, the HtmlResponse and XmlResponse classes do.
4. the encoding inferred by looking at the response body. This is the more fragile method but also the last one tried.
selector
A Selector instance using the response as target. The selector is lazily instantiated on first access.
TextResponse objects support the following methods in addition to the standard Response ones:
body_as_unicode()
Returns the body of the response as unicode. This is equivalent to:
response.body.decode(response.encoding)
But not equivalent to:
unicode(response.body)
In the latter case, you would be using your system default encoding (typically ascii) to convert the body to unicode, instead of the response encoding.
xpath(query)
A shortcut to TextResponse.selector.xpath(query):
response.xpath('//p')
css(query)
A shortcut to TextResponse.selector.css(query):
response.css('p')
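A short hedged usage sketch of both shortcuts (the selectors and page structure are illustrative):
# both shortcuts return a selector list whose matches can be extracted as strings
titles = response.xpath('//title/text()').extract()
hrefs = response.css('a::attr(href)').extract()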
scrapy.http.HtmlResponse(url[, ...])
The HtmlResponse class is a subclass of TextResponse which adds encoding auto-discovering support by looking into the HTML meta http-equiv attribute. See TextResponse.encoding.
scrapy.http.XmlResponse(url[, ...])
The XmlResponse class is a subclass of TextResponse which adds encoding auto-discovering support by looking into the XML declaration line. See TextResponse.encoding.
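To illustrate the auto-discovery, a hedged sketch (the markup and URL are examples):
from scrapy.http import HtmlResponse

# HtmlResponse looks at declarations such as
#   <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
# while XmlResponse looks at the XML declaration line, e.g.
#   <?xml version="1.0" encoding="iso-8859-1"?>
body = ('<html><head><meta http-equiv="Content-Type" '
        'content="text/html; charset=utf-8"></head><body></body></html>')
response = HtmlResponse(url="http://www.example.com/", body=body)
print(response.encoding)  # 'utf-8', discovered from the meta declaration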