Crawling pages to obtain .htm and .txt files from sec.gov
I am pretty much a Python newbie and was looking at this question on Stack Overflow. Whereas the OP was interested in downloading the .htm | .txt files, I was simply interested in using Beautiful Soup and requests to gather all the links to those files into one structure. Ideally, it would have been a single list, but I have a list of lists.
The process is:
1. Start with the landing page and grab all the links in the leftmost column, which has the header CIK.
   (Image: CIK column sample with single link selected on right)
2. Navigate to each of those links and grab the document button links in the Format column.
   (Image: Format column sample with single link selected on right)
3. Navigate to each of those document button links and grab the document links in the Document Format Files section, if the file mask is .htm | .txt.
   (Image: Document Format Files sample with single link selected on right)
4. Print the list of links (which is a list of lists).
Other info:
I use CSS selectors throughout to target the links.
What particularly concerns me, given my limited Python experience, is that I have a list of lists rather than a single list, that my re-use of variable names may be confusing and potentially bug-prone, and whether what I am doing is particularly Pythonic and efficient.
In languages I am more familiar with, I might create a class and give it methods, which in effect are my current functions.
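For example, something along these lines is what I have in mind (the class name and structure below are just a rough, hypothetical sketch, not code I have tested):

from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup


class EdgarCrawler:
    """Placeholder sketch only: the current functions as methods on a class."""

    def __init__(self, base):
        self.base = base

    def make_soup(self, url):
        res = requests.get(url)
        return BeautifulSoup(res.content, "lxml")

    def get_links(self, soup, pattern):
        # Resolve each matching href against the base URL.
        return [urljoin(self.base, link.get("href"))
                for link in soup.select(pattern)]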
Set-up info:
The version of the Jupyter notebook server is 5.5.0.
The server is running on this version of Python:
Python 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)]
Python code:
from bs4 import BeautifulSoup
import requests
from urllib.parse import urljoin
base = 'https://www.sec.gov'
start_urls = ["https://www.sec.gov/cgi-bin/browse-edgar?company=&match=&CIK=&filenum=&State=&Country=&SIC=2834&owner=exclude&Find=Find+Companies&action=getcompany"]
def makeSoup(url):
    res = requests.get(url)
    soup = BeautifulSoup(res.content, "lxml")
    return soup

def getLinks(pattern):
    # Relies on the module-level `soup` set in the main part below.
    links = soup.select(pattern)
    return links

def getFullLink(links):
    outputLinks = []
    for link in links:
        outputLinks.append(urljoin(base, link.get("href")))
    return outputLinks
soup = makeSoup(start_urls[0])
links = getLinks("#seriesDiv [href*='&CIK']")
firstLinks = getFullLink(links)

penultimateLinks = []
for link in firstLinks:
    soup = makeSoup(link)
    links = getLinks('[id=documentsbutton]')
    nextLinks = getFullLink(links)
    penultimateLinks.append(nextLinks)

finalLinks = []
for link in penultimateLinks:
    for nextLink in link:
        soup = makeSoup(nextLink)
        links = getLinks("[summary='Document Format Files'] td:nth-of-type(3) [href$='.htm'],td:nth-of-type(3) [href$='.txt']")
        nextLinks = getFullLink(links)
        finalLinks.append(nextLinks)

print(finalLinks)
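(If it helps to show what I mean by a single list: I assume the nested result could be flattened afterwards, e.g. with itertools.chain, roughly like this:)

from itertools import chain

# Flatten the list of lists into one flat list of links.
flat_links = list(chain.from_iterable(finalLinks))
print(flat_links)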
python beginner python-3.x web-scraping beautifulsoup
asked Nov 15 at 16:22, edited Nov 16 at 8:55 – QHarr
1 Answer
Since you never use getLinks without a call to getFullLink afterwards, I would merge these two functions. I would also make it a generator:
def get_links(soup, pattern):
    for link in soup.select(pattern):
        yield urljoin(base, link.get("href"))
Then your main part can become this nested for loop:
if __name__ == "__main__":
    pattern1 = "#seriesDiv [href*='&CIK']"
    pattern2 = "[id=documentsbutton]"
    pattern3 = "[summary='Document Format Files'] td:nth-of-type(3) [href$='.htm'],td:nth-of-type(3) [href$='.txt']"

    final_links = []
    for first_link in get_links(make_soup(start_urls[0]), pattern1):
        for second_link in get_links(make_soup(first_link), pattern2):
            final_links.extend(get_links(make_soup(second_link), pattern3))
    print(final_links)
I also renamed your functions according to Python's official style guide, PEP 8, and added an if __name__ == "__main__": guard.
One way to make this a bit faster (this sounds like it could be a lot of requests) is to use requests.Session to re-use the connection to the server:
session = requests.Session()

def make_soup(url):
    res = session.get(url)
    res.raise_for_status()
    return BeautifulSoup(res.content, "lxml")
Here I also added a guard so that the program stops if any page does not exist (i.e. returns 404 or similar).
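If you would rather skip a failing page than stop the whole crawl, you could instead catch the exception around each request — a rough sketch (the helper name is just an example):

def make_soup_or_none(url):
    # Return None instead of raising, so the caller can skip bad pages.
    try:
        res = session.get(url)
        res.raise_for_status()
    except requests.RequestException as exc:
        print(f"Skipping {url}: {exc}")
        return None
    return BeautifulSoup(res.content, "lxml")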
answered Nov 15 at 19:23 – Graipher

OMG.... That is sooooooo much better. Thank you for taking the time to review +. I will leave it a little longer, if you don't mind, to see if there is any further feedback. – QHarr Nov 16 at 7:53