A quick aside before we start: SMTH (水木) has a mini version of its entry page at http://www.newsmth.net/index2.html.
Today pipa asked me about searching the whole SMTH site. I pointed him to the 令狐冲 (Linghu Chong) search engine, but he said it didn't seem to work. The SMTH web interface recently started requiring login before articles can be viewed, so today I wrote some code to log in. Call smthLogin() first; after that, get_url_data() keeps the session cookie and can fetch the content of each page. Note that get_url_data() is adapted from code I wrote a year and a half ago; I only changed a few key lines and left the rest alone, so bear with it.
The login sample code is below, with a short usage sketch after it:
import urllib
import urllib2
import cookielib

###==================================================================
def smthLogin(uid, psw):
    # Log in to newsmth and install a cookie-keeping opener globally.
    cj = cookielib.CookieJar()
    post_data = urllib.urlencode({'id': uid,
                                  'passwd': psw,
                                  'kick_multi': 1})
    path = 'http://www.newsmth.net/bbslogin2.php'
    # HTTPCookieProcessor stores the session cookie returned by the login
    # page; install_opener() makes every later urlopen() reuse it.
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
    opener.addheaders = [('User-agent',
                          'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)')]
    urllib2.install_opener(opener)
    req = urllib2.Request(path, post_data)
    conn = urllib2.urlopen(req)
    print conn.read()

# Fetch a page's source, reusing the cookie installed by smthLogin().
def get_url_data(url):
    #print "Get: ", url
    # Retry up to 5 times on network errors, then give up and return "".
    for attempt in range(5):
        try:
            htmlSource = urllib2.urlopen(urllib2.Request(url)).read()
            return htmlSource
        except urllib2.URLError:
            pass
    return ""