How to make sure only one instance of a bash script runs?
A solution that does not require additional tools would be preferred.
linux bash lock
asked Sep 18 '12 at 11:04 by Tobias Kienzler (21 votes)
What about a lock file?
– Marco
Sep 18 '12 at 11:07
@Marco I found this SO answer using that, but as stated in a comment, this can create a race condition
– Tobias Kienzler
Sep 18 '12 at 11:18
This is BashFAQ 45.
– jw013
Sep 18 '12 at 13:49
@jw013 thanks! So maybe something like ln -s my.pid .lock will claim the lock (followed by echo $$ > my.pid) and on failure can check whether the PID stored in .lock is really an active instance of the script
– Tobias Kienzler
Sep 18 '12 at 15:22
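A minimal sketch of that symlink-claim idea, using the my.pid and .lock names from the comment; the liveness check on failure is only a heuristic, as the answers below explain:
if ln -s my.pid .lock 2>/dev/null; then
    # ln -s fails if .lock already exists, so creating the symlink is the atomic claim
    echo "$$" > my.pid
    trap 'rm -f .lock my.pid' EXIT
    # exclusive work goes here
else
    other=$(cat my.pid 2>/dev/null)   # PID recorded by the current lock holder, if any
    if [ -n "$other" ] && kill -0 "$other" 2>/dev/null; then
        echo "already running as PID $other" >&2
    else
        echo "found .lock but PID '$other' is not alive; the lock may be stale" >&2
    fi
    exit 1
fi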
10 Answers
answered Sep 18 '12 at 12:00 by Bruce Ediger (accepted, 16 votes)
Almost like nsg's answer: use a lock directory. Directory creation is atomic under Linux, Unix, *BSD, and many other OSes.
LOCKDIR=/tmp/myscript.lock   # example path; any directory on a local filesystem works
if mkdir "$LOCKDIR"
then
    # Do important, exclusive stuff
    if rmdir "$LOCKDIR"
    then
        echo "Victory is mine"
    else
        echo "Could not remove lock dir" >&2
    fi
else
    # Handle error condition
    ...
fi
You can put the PID of the locking sh into a file in the lock directory for debugging purposes, but don't fall into the trap of thinking you can check that PID to see if the locking process still executes. Lots of race conditions lie down that path.
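A sketch of that debugging aid, assuming the same LOCKDIR as above; the PID file is informational only:
if mkdir "$LOCKDIR"; then
    echo "$$" > "$LOCKDIR/pid"   # for a human inspecting a stuck lock, not for liveness checks
    # exclusive work, then: rm -f "$LOCKDIR/pid" && rmdir "$LOCKDIR"
fi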
I'd consider using the stored PID to check whether the locking instance is still alive. However, here's a claim that mkdir is not atomic on NFS (which is not the case for me, but I guess one should mention that, if true)
– Tobias Kienzler
Sep 18 '12 at 13:08
Yes, by all means use the stored PID to see if the locking process still executes, but don't attempt to do anything other than log a message. The work of checking the stored pid, creating a new PID file, etc, leaves a big window for races.
– Bruce Ediger
Sep 18 '12 at 13:37
Ok, as Ihunath stated, the lockdir would most likely be in /tmp, which is usually not NFS shared, so that should be fine.
– Tobias Kienzler
Sep 19 '12 at 8:33
I would use rm -rf to remove the lock directory. rmdir will fail if someone (not necessarily you) managed to add a file to the directory.
– chepner
Sep 22 '12 at 4:32
answered Jan 20 '15 at 11:30 by Igor Zevaka (17 votes)
To add to Bruce Ediger's answer, and inspired by this answer, you should also add cleanup logic that guards against the script being terminated, so the lock is released no matter how the script exits:
# LOCKDIR is assumed to be set beforehand, e.g. LOCKDIR=/tmp/myscript.lock as in the accepted answer

# Remove the lock directory
function cleanup {
    if rmdir "$LOCKDIR"; then
        echo "Finished"
    else
        echo "Failed to remove lock directory '$LOCKDIR'"
        exit 1
    fi
}

if mkdir "$LOCKDIR"; then
    # Ensure that if we "grabbed a lock", we release it
    # Works for SIGTERM and SIGINT (Ctrl-C) as well as normal exit
    trap "cleanup" EXIT
    echo "Acquired lock, running"
    # Processing starts here
else
    echo "Could not create lock directory '$LOCKDIR'"
    exit 1
fi
Alternatively, if ! mkdir "$LOCKDIR"; then handle failure to lock and exit; fi, then trap and do processing after the if-statement.
– Kusalananda
Feb 22 at 13:19
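A sketch of that restructuring, assuming the same LOCKDIR variable as in the answer:
if ! mkdir "$LOCKDIR" 2>/dev/null; then
    echo "Could not create lock directory '$LOCKDIR'" >&2
    exit 1
fi
trap 'rmdir "$LOCKDIR"' EXIT   # release the lock however the script ends
echo "Acquired lock, running"
# processing continues here, with no surrounding if-block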
answered Sep 18 '12 at 12:19 by terdon♦ (5 votes)
This may be too simplistic, please correct me if I'm wrong. Isn't a simple ps enough?
#!/bin/bash
me="$(basename "$0")";
running=$(ps h -C "$me" | grep -wv "$$" | wc -l);
[[ $running -gt 1 ]] && exit;
# do stuff below this comment
Nice and/or brilliant. :)
– Spooky
Mar 3 '17 at 16:49
I've used this condition for a week, and on 2 occasions it didn't prevent a new process from starting. I figured out what the problem is: the new pid is a substring of the old one and gets hidden by grep -v $$. Real examples: old - 14532, new - 1453; old - 28858, new - 858.
– Naktibalda
Feb 22 at 11:30
I fixed it by changing grep -v $$ to grep -v "^${$} "
– Naktibalda
Feb 22 at 11:52
@Naktibalda good catch, thanks! You could also fix it with grep -wv "^$$" (see edit).
– terdon♦
Feb 22 at 12:38
Thanks for that update. My pattern occasionally failed because shorter pids were left padded with spaces.
– Naktibalda
Mar 8 at 16:50
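To see the substring problem these comments describe, using the PIDs from Naktibalda's example (old 28858, new 858):
printf '28858 myscript\n  858 myscript\n' | grep -v 858    # prints nothing: 858 also matches inside 28858
printf '28858 myscript\n  858 myscript\n' | grep -wv 858   # prints only the 28858 line, as intended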
answered Sep 18 '12 at 11:18 by nsg (4 votes)
I would use a lock file, as mentioned by Marco
#!/bin/bash
# Exit if /tmp/lock.file exists
[ -f /tmp/lock.file ] && exit
# Create lock file, sleep 1 sec and verify lock
echo $$ > /tmp/lock.file
sleep 1
[ "x$(cat /tmp/lock.file)" == "x"$$ ] || exit
# Do stuff
sleep 60
# Remove lock file
rm /tmp/lock.file
(I think you forgot to create the lock file) What about race conditions?
– Tobias Kienzler
Sep 18 '12 at 11:28
Oops :) Yes, race conditions are a problem in my example; I usually write hourly or daily cron jobs and race conditions are rare.
– nsg
Sep 18 '12 at 11:32
They shouldn't be relevant in my case either, but it's something one should keep in mind. Maybe using lsof $0 isn't bad, either?
– Tobias Kienzler
Sep 18 '12 at 11:34
You can diminish the race condition by writing your $$ in the lock file. Then sleep for a short interval and read it back. If the PID is still yours, you successfully acquired the lock. Needs absolutely no additional tools.
– manatwork
Sep 18 '12 at 11:41
I have never used lsof for this purpose, but I think it should work. Note that lsof is really slow on my system (1-2 sec), so most likely there is a lot of time for race conditions.
– nsg
Sep 18 '12 at 11:45
answered Sep 18 '12 at 11:17 by user1146332 (3 votes)
If you want to make sure that only one instance of your script is running, take a look at:
Lock your script (against parallel run)
Otherwise you can check ps or invoke lsof <full-path-of-your-script>, since I wouldn't call them additional tools.
Supplement: actually I thought of doing it like this:
for LINE in `lsof -c <your_script> -F p`; do
    # `lsof -F p` prints one field per process in the form p<PID>; ${LINE#?} strips the leading "p"
    if [ "$$" -gt "${LINE#?}" ] ; then
        echo "'$0' is already running" 1>&2
        exit 1
    fi
done
This ensures that only the process with the lowest pid keeps on running, even if you fork-and-exec several instances of <your_script> simultaneously.
Thanks for the link, but could you include the essential parts in your answer? It's common policy at SE to prevent link rot... But something like [[ $(lsof $0 | wc -l) > 2 ]] && exit might actually be enough, or is this also prone to race conditions?
– Tobias Kienzler
Sep 18 '12 at 11:30
You are right the essential part of my answer was missing and only posting links is pretty lame. I added my own suggestion to the answer.
– user1146332
Sep 18 '12 at 12:52
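A sketch of that lsof-based count from the comment above (assuming lsof is available; -t prints bare PIDs, so there is no header line to subtract). It inherits the race window and slowness discussed in these comments:
holders=$(lsof -t "$0" 2>/dev/null | wc -l)   # processes that currently hold this script open
[ "$holders" -gt 1 ] && { echo "'$0' is already running" >&2; exit 1; }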
answered Oct 17 '17 at 20:25 by Sethu (2 votes)
One other way to make sure a single instance of a bash script runs:
#!/bin/bash
# Check if another instance of script is running
pidof -o %PPID -x $0 >/dev/null && echo "ERROR: Script $0 already running" && exit 1
...
pidof -o %PPID -x $0 gets the PID of the existing script if it's already running, or exits with error code 1 if no other instance is running.
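The same check written as an if block, purely a cosmetic variant of the line above:
#!/bin/bash
if pidof -o %PPID -x "$0" >/dev/null; then
    echo "ERROR: Script $0 already running" >&2
    exit 1
fi
# rest of the script goes here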
answered Feb 7 '17 at 20:08 by schieferstapel (1 vote)
Although you've asked for a solution without additional tools, this is my favourite way using flock:
#!/bin/sh
[ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock "$0" "$0" "$@" || exit 1
echo "servus!"
sleep 10
This comes from man flock, although I've removed -n from the args to flock; -n would make flock fail immediately instead of waiting until other instances are done. You can use a timeout (-w) as well.
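For example, the same line with a 10-second timeout (the value is arbitrary, just to illustrate -w):
[ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -w 10 "$0" "$0" "$@" || exit 1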
The script invokes itself through flock if the environment variable FLOCKER isn't set. In this "recursive" invocation, said variable is set and the line does nothing.
The exit 1 is a safety net so the script terminates unsuccessfully if flock wasn't found. The dash/bash versions I've tested don't continue execution after an unsuccessful exec, but man flock has || : in it.
Points to consider:
- Requires flock; the example script terminates with an error if it can't be found
- Needs no extra lock file
- May not work if the script is on NFS (see https://serverfault.com/questions/66919/file-locks-on-an-nfs)
See also https://stackoverflow.com/questions/185451/quick-and-dirty-way-to-ensure-only-one-instance-of-a-shell-script-is-running-at.
answered Jan 12 at 18:32 by Arijit Basu (0 votes)
You can use this: https://github.com/sayanarijit/pidlock
sudo pip install -U pidlock
pidlock -n sleepy_script -c 'sleep 10'
> A solution that does not require additional tools would be preferred.
– dhag
Jan 12 at 19:13
answered Sep 3 at 20:38 by Anselmo Blanco Dominguez (0 votes)
My code for you:
#!/bin/bash
script_file="$(/bin/readlink -f "$0")"   # full path of this script
lock_file=${script_file////_}            # replace every "/" with "_" to get a flat lock-file name

function executing {
    echo "'${script_file}' already executing"
    exit 1
}

(
    flock -n 9 || executing
    sleep 10
) 9> "/var/lock/${lock_file}"
Based on man flock, improving only:
- the name of the lock file, which is based on the full path of the script
- the message printed by executing
Where I put the sleep 10 here, you can put the whole main script.
answered Nov 2 at 5:32 by John Doe (0 votes)
This is a modified version of Anselmo's answer. The idea is to open a read-only file descriptor on the bash script itself and use flock to handle the lock.
SCRIPT=$(realpath "$0") # get absolute path to the script itself
exec 6< "$SCRIPT" # open bash script using file descriptor 6
flock -n 6 || { echo "ERROR: script is already running" && exit 1; } # lock file descriptor 6 OR show error message if script is already running
echo "Run your single instance code here"
The main difference from all the other answers is that this code doesn't modify the filesystem, has a very low footprint, and doesn't need any cleanup, since the file descriptor is closed as soon as the script finishes, independent of the exit state. Thus it doesn't matter whether the script fails or succeeds.
You should always quote all shell variable references unless you have a good reason not to, and you’re sure you know what you’re doing. So you should be doing exec 6< "$SCRIPT".
– Scott
Nov 2 at 6:01
@Scott I've changed the code according your suggestions. Many thanks.
– John Doe
Nov 2 at 6:38