Crawling pages to obtain .htm and .txt files from sec.gov












I am pretty much a Python newbie and was looking at this question on Stack Overflow.



Whereas the OP was interested in downloading the .htm/.txt files, I was simply interested in using Beautiful Soup and requests to gather all the links to those files into one structure. Ideally that would have been a single list, but I have ended up with a list of lists.



The process is:





1. Start with the landing page and grab all the links in the leftmost column, which has the header CIK.

   [Image: CIK column sample with a single link selected on the right]

2. Navigate to each of those links and grab the document button links in the Format column:

   [Image: Format column sample with a single link selected on the right]

3. Navigate to each of those document button links and grab the document links in the Document Format Files section, if the file mask is .htm or .txt.

   [Image: Document Format Files sample with a single link selected on the right]

4. Print the list of links (which is a list of lists).




Other info:



I use CSS selectors throughout to target the links.



What particularly concerns me, given my limited Python experience, is that I have a list of lists rather than a single list, that my re-use of variable names may be confusing and potentially bug-prone, and, more generally, whether what I am doing is Pythonic and efficient.
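
For what it's worth, if a single flat list really is required, I assume the nested result could simply be flattened at the end with something like this (a minimal sketch; finalLinks is the list of lists built by the code below, and flatLinks is just an illustrative name):

from itertools import chain

# Flatten the list of lists of URLs into a single list (sketch only)
flatLinks = list(chain.from_iterable(finalLinks))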



In languages I am more familiar with, I might create a class and give it methods which, in effect, are my current functions.
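
Purely as an illustration of that idea (a rough sketch of what I mean, with a hypothetical class name, not code I am actually running), it might look something like:

from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

class EdgarCrawler:
    """Hypothetical class-based wrapper around the functions below."""

    def __init__(self, base_url):
        self.base = base_url

    def make_soup(self, url):
        # Fetch a page and parse it with the lxml parser
        res = requests.get(url)
        return BeautifulSoup(res.content, "lxml")

    def get_full_links(self, soup, pattern):
        # Absolute URLs for every element matching the CSS selector
        return [urljoin(self.base, a.get("href")) for a in soup.select(pattern)]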





Set-up info:



The Jupyter notebook server version is 5.5.0.



The server is running on this version of Python:



Python 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)]





Python code:



from bs4 import BeautifulSoup
import requests
from urllib.parse import urljoin

base = 'https://www.sec.gov'
start_urls = ["https://www.sec.gov/cgi-bin/browse-edgar?company=&match=&CIK=&filenum=&State=&Country=&SIC=2834&owner=exclude&Find=Find+Companies&action=getcompany"]

def makeSoup(url):
    # Fetch a page and parse it with the lxml parser
    res = requests.get(url)
    soup = BeautifulSoup(res.content, "lxml")
    return soup

def getLinks(pattern):
    # Select elements in the current (global) soup with a CSS selector
    links = soup.select(pattern)
    return links

def getFullLink(links):
    # Resolve each relative href against the site base URL
    outputLinks = []
    for link in links:
        outputLinks.append(urljoin(base, link.get("href")))
    return outputLinks

# 1. Landing page: links in the CIK column
soup = makeSoup(start_urls[0])
links = getLinks("#seriesDiv [href*='&CIK']")
firstLinks = getFullLink(links)

# 2. Filing pages: the "Documents" button links in the Format column
penultimateLinks = []

for link in firstLinks:
    soup = makeSoup(link)
    links = getLinks('[id=documentsbutton]')
    nextLinks = getFullLink(links)
    penultimateLinks.append(nextLinks)

# 3. Document pages: .htm / .txt links in the Document Format Files table
finalLinks = []

for link in penultimateLinks:
    for nextLink in link:
        soup = makeSoup(nextLink)
        links = getLinks("[summary='Document Format Files'] td:nth-of-type(3) [href$='.htm'],td:nth-of-type(3) [href$='.txt']")
        nextLinks = getFullLink(links)
        finalLinks.append(nextLinks)

# 4. Print the list of links (a list of lists)
print(finalLinks)









Tags: python, beginner, python-3.x, web-scraping, beautifulsoup






asked Nov 15 at 16:22 by QHarr, edited Nov 16 at 8:55

1 Answer






          Since you never use getLinks without a call to getFullLink afterwards, I would merge these two functions. I would also make it a generator:



def get_links(soup, pattern):
    for link in soup.select(pattern):
        yield urljoin(base, link.get("href"))
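
(Aside, not part of the original answer: since get_links is a generator it yields URLs lazily, so wrap it in list() wherever a concrete list is needed, for example:)

landing_soup = make_soup(start_urls[0])
cik_links = list(get_links(landing_soup, "#seriesDiv [href*='&CIK']"))  # materialise the generator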


          Then your main part can become this nested for loop:



if __name__ == "__main__":
    pattern1 = "#seriesDiv [href*='&CIK']"
    pattern2 = "[id=documentsbutton]"
    pattern3 = "[summary='Document Format Files'] td:nth-of-type(3) [href$='.htm'],td:nth-of-type(3) [href$='.txt']"

    final_links = []
    for first_link in get_links(make_soup(start_urls[0]), pattern1):
        for second_link in get_links(make_soup(first_link), pattern2):
            final_links.extend(get_links(make_soup(second_link), pattern3))
    print(final_links)


I also renamed your functions according to Python's official style guide, PEP8, and added an if __name__ == "__main__": guard.



          One way to make this a bit faster (this sounds like it could be a lot of requests) is to use requests.Session to re-use the connection to the server:



session = requests.Session()

def make_soup(url):
    res = session.get(url)
    res.raise_for_status()
    return BeautifulSoup(res.content, "lxml")


Here I also added a guard so that the program stops if any page does not exist (i.e. returns 404 or similar).
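
(Another aside, not part of the original answer: if skipping a broken page is preferable to aborting the whole crawl, one possible variation, using the session defined above, is to catch the HTTP error and return an empty soup so the selectors simply match nothing:)

def make_soup(url):
    # Variation: skip pages that fail instead of stopping the whole run
    try:
        res = session.get(url)
        res.raise_for_status()
    except requests.HTTPError:
        return BeautifulSoup("", "lxml")  # empty soup -> selectors find no links
    return BeautifulSoup(res.content, "lxml")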






answered Nov 15 at 19:23 by Graipher (accepted)





















OMG.... That is sooooooo much better. Thank you for taking the time to review +. I will leave it open a little longer, if you don't mind, to see if there is any further feedback. – QHarr, Nov 16 at 7:53










