Webcrawler for a car forum
I've been experimenting more with web crawling and have started to get a better understanding compared to my previous questions. Right now, my code scrapes a car forum and iterates through every page. What would you recommend improving?



import requests
from bs4 import BeautifulSoup, SoupStrainer
import pandas as pd

list_topic = []
list_time = []

SESSION = requests.Session()


def get_response(url):
    # Gets the parsed <html> structure from the website,
    # restricted to the post archive list for speed.
    response = SESSION.get(url)
    soup = BeautifulSoup(response.text, 'lxml',
                         parse_only=SoupStrainer('ul', {'class': 'posts posts-archive'}))
    return soup


def iteration(url, max_page=52):
    starting_page = 1
    while starting_page <= max_page:
        # formats the page URL, e.g. https://paultan.org/topics/test-drive-reviews/page/1
        new_url = url + f"page/{starting_page}"
        data = get_response(new_url)
        get_reviews(data)
        starting_page += 1


def get_reviews(response):
    for container in response('article'):
        title = container.h2.a.text
        time = container.time.text
        list_topic.append(title)
        list_time.append(time)


def create_pdReview():
    return pd.DataFrame({'Title': list_topic, 'Time': list_time})


if __name__ == '__main__':
    URL = 'https://paultan.org/topics/test-drive-reviews/'
    iteration(URL)
    print(create_pdReview())


I've been wondering: would using yield improve the efficiency and simplicity of the code? How would it be done? I've been trying to learn from my previous inquiries that have been answered earlier. Here is a similar question, and I'm trying to put into practice what has been recommended so far.
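A hypothetical sketch of the generator approach: `get_reviews` yields `(title, time)` pairs instead of appending to module-level lists. The inline `PAGE` string below is invented for illustration, assuming the forum wraps each post in an `<article>` with an `<h2><a>` title and a `<time>` tag, matching the attribute access in the question's code:

```python
from bs4 import BeautifulSoup


def get_reviews(soup):
    """Yield (title, time) tuples from each <article> in a parsed page."""
    for container in soup('article'):
        yield container.h2.a.text, container.time.text


# Tiny inline page standing in for one forum page (hypothetical markup):
PAGE = """
<ul class="posts posts-archive">
  <article><h2><a>Proton X70 review</a></h2><time>Jan 2019</time></article>
  <article><h2><a>Perodua Myvi review</a></h2><time>Dec 2018</time></article>
</ul>
"""

soup = BeautifulSoup(PAGE, "html.parser")
rows = list(get_reviews(soup))
print(rows)
# → [('Proton X70 review', 'Jan 2019'), ('Perodua Myvi review', 'Dec 2018')]
```

The caller can then build the DataFrame in one step, e.g. `pd.DataFrame(rows, columns=['Title', 'Time'])`, which removes the shared mutable state entirely.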
  • (4) Welcome to Code Review. The current question title, which states your concerns about the code, applies to too many questions on this site to be useful. The site standard is for the title to simply state the task accomplished by the code. Please see How to Ask for examples, and revise the title accordingly. – Zeta, 18 hours ago
  • What you may and may not do after receiving answers – Jamal, 7 mins ago
Tags: python, python-3.x, beautifulsoup
edited 6 mins ago by Jamal
asked 19 hours ago by Minial
1 Answer
When we are discussing performance of a particular piece of code, it's important to recognize bottlenecks and major contributors to the runtime of the program.



In your particular case, even though you've applied some optimizations like SoupStrainer speed-up for HTML parsing, the synchronous nature of the script is the biggest problem by far. The script is processing pages one by one, not getting to the next page until the processing for the current page is finished.



Switching to an asynchronous approach would be the natural next step in your optimizations. Look into using third-party frameworks like Scrapy or, if you are adventurous, things like asyncio or grequests.
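As a rough illustration of why overlapping requests pays off — using a thread pool rather than asyncio, and a `time.sleep` as a stand-in for network latency (both are this sketch's assumptions, not the exact recommendation above):

```python
import time
from concurrent.futures import ThreadPoolExecutor


def fetch(page):
    """Stand-in for SESSION.get(); the sleep mimics network latency."""
    time.sleep(0.1)
    return f"page/{page}"


# Sequentially, 8 pages would take ~0.8 s; 8 workers overlap the waits,
# so the wall-clock time stays close to a single 0.1 s wait.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(fetch, range(1, 9)))
elapsed = time.perf_counter() - start
print(results)
```

`pool.map` preserves input order, so the results line up with the page numbers even though the fetches finish concurrently.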





You could apply one more optimization to your current script which should help with the "crawling/scraping" part: instead of using requests.get(), initialize session = requests.Session() and use session.get() to make requests (documentation). This allows the underlying TCP connection to be re-used for subsequent requests, resulting in a performance increase.
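A minimal sketch of that setup. The User-Agent value is illustrative, and `Session.prepare_request` is used here only to show, without any network traffic, that session-level settings are merged into each outgoing request:

```python
import requests

session = requests.Session()
# Settings attached to the session apply to every request it sends, and the
# underlying TCP connection is pooled and re-used across session.get() calls.
session.headers.update({"User-Agent": "review-crawler/0.1"})

req = requests.Request("GET", "https://paultan.org/topics/test-drive-reviews/page/1")
prepared = session.prepare_request(req)  # merges session headers into the request
print(prepared.url)
print(prepared.headers["User-Agent"])    # → review-crawler/0.1
```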






  • I've changed the way of iteration, but it looks like an honest mess at the moment. I've used Session as you've recommended, but right now I'm trying to think how to simplify it and get it to work with my previous version. – Minial, 42 mins ago
answered 10 hours ago by alecxe (edited 10 hours ago)