Parsing a list of a million words with Python regular expressions is slow























I have been working on a project in Python that fetches content from web links and finds certain important words in the content of each page.
I use regular expressions to do this.
But it takes a very long time to produce results.



How it works:




  1. It makes a request to the given URL.

  2. It takes the whole HTML content plus the data from any external JS files (those referenced by the src attribute of each script tag); a minimal fetch sketch is shown after this list.

  3. It saves those pieces of text to a list (named file, which is used in the regex search later).

  4. It then searches that text for the important data using regular expressions.
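
A minimal sketch of how steps 1–3 might look, assuming the requests and beautifulsoup4 packages; the function name fetch_page_text is illustrative and not part of the original code:

# Rough sketch of steps 1-3, not the original code; assumes the `requests`
# and `beautifulsoup4` packages are installed.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def fetch_page_text(url):
    """Return the page HTML plus the bodies of its external JS files."""
    html = requests.get(url, timeout=10).text
    parts = [html]
    soup = BeautifulSoup(html, "html.parser")
    for script in soup.find_all("script", src=True):
        js_url = urljoin(url, script["src"])
        try:
            parts.append(requests.get(js_url, timeout=10).text)
        except requests.RequestException:
            pass  # skip scripts that fail to download
    return parts  # the "file" text that the regex search scans later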


Here is a sample:



List:



list = ['secret', 'secret_key', 'token', 'secret_token', 'auth_token', 'access_token', 'username', 'password', 'aws_access_key_id', 'aws_secret_access_key', 'secretkey']



Regular Expression:



for item in seclst:
    try:
        secregex = r'(["\']?[\w-]*' + item + r'[\w-]*[\s]*["\']?[\s]*[:=>]{1,2}[\s]*["\'](.*?)["\'])'
        matches = re.finditer(secregex, file, re.MULTILINE | re.IGNORECASE)
        for matchNum, match in enumerate(matches):
            if len(match.group(2)) > 0:
                secretList.add(match.group())
    except:
        pass
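
For comparison, the per-keyword loop above could be collapsed into a single precompiled pattern. This is only a sketch of that idea, reusing the seclst, file and secretList names from the code above; note the captured value becomes group(1) here because the outer parentheses are dropped:

# Sketch only: one compiled pattern with all keywords joined by alternation,
# reusing seclst, file and secretList from the code above.
import re

keywords = "|".join(re.escape(item) for item in seclst)
secregex = re.compile(
    r'["\']?[\w-]*(?:' + keywords + r')[\w-]*[\s]*["\']?[\s]*[:=>]{1,2}[\s]*["\'](.*?)["\']',
    re.MULTILINE | re.IGNORECASE,
)
for match in secregex.finditer(file):
    if match.group(1):                      # non-empty value after : = or =>
        secretList.add(match.group())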


There are some other functions too.



Execution time and explanation:




  1. When I use 'https://www.facebook.com' (without cookies), it takes approximately 41 seconds (including the other functionality).

  2. When I use 'https://www.facebook.com' (with cookies), it takes approximately 5 to 6 minutes (including the other functionality); see the profiling sketch after this list.
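
Before optimizing the regular expression itself, it may be worth confirming how much of that time is spent in the regex scan versus the HTTP requests. A minimal profiling sketch, assuming a main(url) entry point (a hypothetical name) that runs the whole pipeline:

# Sketch: profile a whole run to see whether the time goes to networking
# or to the regex scan; `main` is a hypothetical entry point for the pipeline.
import cProfile

cProfile.run("main('https://www.facebook.com')", sort="cumtime")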


How can I optimize it?










python python-3.x multithreading






edited 4 hours ago by πάντα ῥεῖ
asked 4 hours ago by Neeraj Sonaniya (new contributor)
  • I find it implausible that any text analysis of a web page could take minutes. I think that there is something you haven't told us about what your code is doing. – 200_success, 3 hours ago













