Parsing a million words against a keyword list with Python regular expressions is slow
I have been working on a Python project that takes content from web links and finds certain important words in the content of each page.
I used regular expressions to do so, but it takes a very long time to get results.
How it works:
- It makes a request to the given URL.
- It takes the whole HTML content, plus the data from external JS files (referenced by the src attribute of each script tag).
- It saves all of that into a single string (named file, used with re later).
- It then searches for the important data using regular expressions.
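The fetch-and-gather steps above can be sketched roughly as follows. This is a minimal sketch using only the standard library; the regex-based script-src extraction and the helper names (script_srcs, gather_text) are my own assumptions, not the poster's code:

```python
import re
from urllib.request import urlopen

# Naive extraction of src attributes from <script> tags; a real crawler
# should use a proper HTML parser instead of a regex.
SCRIPT_SRC = re.compile(r'<script[^>]+src=["\']([^"\']+)["\']', re.IGNORECASE)

def script_srcs(html):
    """Return the src attributes of <script> tags found in the HTML."""
    return SCRIPT_SRC.findall(html)

def gather_text(url):
    """Fetch a page plus its external JS files, joined into one string."""
    html = urlopen(url).read().decode("utf-8", errors="replace")
    chunks = [html]
    for src in script_srcs(html):
        if src.startswith("//"):          # protocol-relative URL
            src = "https:" + src
        try:
            chunks.append(urlopen(src).read().decode("utf-8", errors="replace"))
        except OSError:
            pass                          # skip unreachable scripts
    return "\n".join(chunks)
```

The joined string is then what the regular-expression search runs over.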
Here is a sample.
The keyword list:
seclst = ['secret', 'secret_key', 'token', 'secret_token', 'auth_token', 'access_token', 'username', 'password', 'aws_access_key_id', 'aws_secret_access_key', 'secretkey']
Regular Expression:
for item in seclst:
    try:
        secregex = (r'(["\']?[\w\-]*' + item
                    + r'[\w\-]*\s*["\']?\s*[:=>]{1,2}\s*["\'](.*?)["\'])')
        matches = re.finditer(secregex, file, re.MULTILINE | re.IGNORECASE)
        for match in matches:
            if len(match.group(2)) > 0:
                secretList.add(match.group())
    except re.error:
        pass
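For reference, the same search can be written with a single combined pattern, so the text is scanned once instead of once per keyword. This is a sketch of that idea, not the poster's code; the name find_secrets is hypothetical:

```python
import re

keywords = ['secret', 'secret_key', 'token', 'secret_token', 'auth_token',
            'access_token', 'username', 'password', 'aws_access_key_id',
            'aws_secret_access_key', 'secretkey']

# One alternation covers every keyword; compiling once avoids rebuilding
# the pattern on every iteration.
combined = re.compile(
    r'(["\']?[\w\-]*(?:' + '|'.join(map(re.escape, keywords)) +
    r')[\w\-]*\s*["\']?\s*[:=>]{1,2}\s*["\'](.*?)["\'])',
    re.MULTILINE | re.IGNORECASE,
)

def find_secrets(text):
    """Return the set of full matches whose quoted value is non-empty."""
    return {m.group() for m in combined.finditer(text) if m.group(2)}
```

With eleven keywords this replaces eleven full passes over the text with one.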
There are some other functions too.
Execution time:
- For 'https://www.facebook.com' without cookies, it takes approximately 41 seconds (including the other functionality).
- For 'https://www.facebook.com' with cookies, it takes approximately 5 to 6 minutes (including the other functionality).
How can I optimize it?
python python-3.x multithreading
edited 4 hours ago by πάντα ῥεῖ
asked 4 hours ago by Neeraj Sonaniya (new contributor)
Comment (1 upvote): I find it implausible that any text analysis of a web page could take minutes. I think that there is something you haven't told us about what your code is doing. – 200_success, 3 hours ago