Limit memory usage for a single Linux process











I'm running pdftoppm to convert a user-provided PDF into a 300 DPI image. This works great, except if the user provides a PDF with a very large page size. pdftoppm will allocate enough memory to hold a 300 DPI image of that size in memory, which for a 100-inch square page is 100*300 * 100*300 * 4 bytes per pixel ≈ 3.5 GB. A malicious user could just give me a silly-large PDF and cause all kinds of problems.



So what I'd like to do is put some kind of hard limit on memory usage for a child process I'm about to run: just have the process die if it tries to allocate more than, say, 500 MB of memory. Is that possible?



I don't think ulimit can be used for this, but is there a one-process equivalent?







Tags: linux, memory, ulimit






asked Feb 13 '11 at 8:00 by Ben Dilts (question score 137); edited Aug 29 '14 at 15:17 by derobert; migrated from stackoverflow.com Aug 8 '12 at 1:45




  • Maybe docker?
    – user7000
    May 12 '16 at 23:40


















5 Answers

















Accepted answer (score 53), answered Nov 30 '11 at 9:42 by kvz, edited Nov 23 at 3:32 by Roger That










There are some problems with ulimit. Here's a useful read on the topic: Limiting time and memory consumption of a program in Linux, which led to the timeout tool, which lets you cage a process (and its forks) by time or memory consumption.



The timeout tool requires Perl 5+ and the /proc filesystem to be mounted. After that, copy the tool to e.g. /usr/local/bin like so:



curl https://raw.githubusercontent.com/pshved/timeout/master/timeout | 
sudo tee /usr/local/bin/timeout && sudo chmod 755 /usr/local/bin/timeout


After that, you can 'cage' your process by memory consumption as in your question like so:



timeout -m 500 pdftoppm Sample.pdf


Alternatively, you could use -t <seconds> and -x <hertz> to limit the process by time or CPU usage, respectively.



The tool works by checking several times per second whether the spawned process has exceeded its set boundaries. This means there is a small window in which the process can oversubscribe memory before timeout notices and kills it.
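
For example, a minimal wrapper might cap both memory and wall-clock time. This is a sketch, not part of the original answer: it assumes the -m and -t flags can be combined, and the limit values, the -r 300 flag and the file names are only placeholders.

#!/bin/sh
# Sketch: convert an untrusted PDF with both a memory cap and a time cap,
# using pshved's timeout script installed as /usr/local/bin/timeout above.
if timeout -m 500 -t 60 pdftoppm -r 300 "$1" out; then
    echo "conversion finished"
else
    echo "conversion was killed or failed (limit exceeded?)" >&2
fi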



A more correct approach would likely involve cgroups, but that is much more involved to set up, even if you use Docker or runC, which, among other things, offer a more user-friendly abstraction around cgroups.






  • Seems to be working for me now (again?) but here's the google cache version: webcache.googleusercontent.com/…
    – kvz
    Apr 27 '17 at 12:32










  • Can we use timeout together with taskset (we need to limit both memory and cores) ?
    – ransh
    Oct 24 '17 at 12:47






  • It should be noted that this answer is not referring to the standard Linux coreutils utility of the same name! Thus, the answer is potentially dangerous if anywhere on your system some package has a script expecting timeout to be the standard coreutils timeout! I am unaware of this tool being packaged for distributions such as Debian. (comment score 3)
    – user1404316
    Apr 8 at 7:03












  • Does -t <seconds> constraint kill the process after that many seconds?
    – Roger That
    Nov 26 at 2:05


















Answer (score 98), answered Apr 16 '14 at 8:36 by user65369, edited Jul 4 '16 at 15:00 by kYuZz













Another way to limit this is to use Linux's control groups. This is especially useful if you want to limit a process's (or group of processes') allocation of physical memory distinctly from virtual memory. For example:



cgcreate -g memory:/myGroup
echo $(( 500 * 1024 * 1024 )) > /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes
echo $(( 5000 * 1024 * 1024 )) > /sys/fs/cgroup/memory/myGroup/memory.memsw.limit_in_bytes


will create a control group named myGroup and cap the processes run under myGroup at 500 MB of physical memory and at 5000 MB of memory plus swap combined (memory.memsw.limit_in_bytes limits the sum of the two). To run a process under the control group:



cgexec -g memory:myGroup pdftoppm


Note that on a modern Ubuntu distribution this example requires installing the cgroup-bin package and editing /etc/default/grub to change GRUB_CMDLINE_LINUX_DEFAULT to:



GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1"


and then running sudo update-grub and rebooting to boot with the new kernel boot parameters.
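
A minimal wrapper tying these steps together might look like the sketch below. It is not from the original answer; the group name, the 500 MB value, the -r 300 flag and the file names are placeholders, and it assumes the cgroup-bin tools and the v1 memory controller set up as described above.

#!/bin/sh
# Sketch: create a throwaway cgroup, run the command inside it, clean up.
set -e
grp="pdfjob_$$"
sudo cgcreate -g "memory:/$grp"
echo $(( 500 * 1024 * 1024 )) | sudo tee "/sys/fs/cgroup/memory/$grp/memory.limit_in_bytes" >/dev/null
sudo cgexec -g "memory:$grp" pdftoppm -r 300 "$1" out || echo "pdftoppm failed or was killed" >&2
sudo cgdelete -g "memory:/$grp"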






  • The firejail program will also let you start a process with memory limits (using cgroups and namespaces to limit more than just memory). On my systems I did not have to change the kernel command line for this to work! (comment score 3)
    – Ned64
    Feb 15 at 12:20










  • Do you need the GRUB_CMDLINE_LINUX_DEFAULT modification to make the setting persistent? I found another way to make it persistent here.
    – stason
    Aug 5 at 18:36




















Answer (score 69), answered Feb 13 '11 at 8:11 by P Shved, edited Jul 6 '16 at 14:05 by Sahil Singh













If your process doesn't spawn children that themselves consume the most memory, you can use the setrlimit function. The more common user interface for that is the shell's ulimit command:



$ ulimit -Sv 500000     # Set ~500 MB limit (the value is in kilobytes)
$ pdftoppm ...


This will only limit the "virtual" memory of your process. It takes into account, and limits, the memory the invoked process shares with other processes, as well as memory that is mapped but not reserved (for instance, Java's large heap). Still, virtual memory is the closest approximation for processes that grow really large, so these errors are mostly insignificant.
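
Because ulimit affects the current shell and everything it starts afterwards, one common pattern (a sketch, not part of the original answer; limit and file names are placeholders) is to set the limit inside a throwaway subshell so the rest of the session keeps its defaults:

# Sketch: the limit applies only to the subshell and the exec'd pdftoppm.
(
    ulimit -Sv 500000              # ~500 MB of virtual memory, value in kB
    exec pdftoppm -r 300 input.pdf output
)
echo "the parent shell keeps its original limits"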



If your program spawns children, and it is they that allocate the memory, it becomes more complex, and you should write auxiliary scripts to run the processes under your control. I wrote in my blog why and how.






  • why is setrlimit more complex for more children? man setrlimit tells me that "A child process created via fork(2) inherits its parents resource limits. Resource limits are preserved across execve(2)" (comment score 2)
    – akira
    Feb 13 '11 at 8:13






  • Because the kernel does not sum the vm size for all child processes; if it did it would get the answer wrong anyway. The limit is per-process, and is virtual address space, not memory usage. Memory usage is harder to measure. (comment score 6)
    – MarkR
    Feb 13 '11 at 8:17






  • if i understand the question correctly then the OP wants the limit per subprocess (child) .. not in total. (comment score 1)
    – akira
    Feb 13 '11 at 8:21










  • @MarkR, anyway, virtual address space is a good approximation for the memory used, especially if you run a program that's not controlled by a virtual machine (say, Java). At least I don't know any better metric.
    – Pavel Shved
    Feb 13 '11 at 8:23






  • Just wanted to say thanks - this ulimit approach helped me with firefox's bug 622816 – Loading a large image can "freeze" firefox, or crash the system; which on a USB boot (from RAM) tends to freeze the OS, requiring hard restart; now at least firefox crashes itself, leaving the OS alive... Cheers! (comment score 2)
    – sdaau
    Apr 4 '13 at 15:51




















Answer (score 6), answered Jul 9 '15 at 12:26 by Oz123, edited Dec 8 '16 at 14:25 by Mark Fisher













In addition to the tools from daemontools suggested by Mark Johnson, you can also consider chpst, which is found in runit. Runit itself is bundled in busybox, so you might already have it installed.



The man page of chpst shows the option:




-m bytes
limit memory. Limit the data segment, stack segment, locked physical pages, and total of all segment per process to bytes bytes
each.
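
A hedged usage sketch (the byte count is simply 500 MB expressed in bytes, and the -r 300 flag and file names are placeholders):

# Sketch: cap each memory segment class at roughly 500 MB via chpst.
chpst -m 524288000 pdftoppm -r 300 input.pdf output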







    Answer (score 5)













    I'm using the script below, which works great. It originally used cgroups through cgmanager; it now uses the commands from cgroup-tools. Name this script limitmem, put it in your $PATH, and you can use it like limitmem 100M bash. This limits both memory and swap usage. To limit just memory, remove the line with memory.memsw.limit_in_bytes.



    Disclaimer: I wouldn't be surprised if cgroup-tools also breaks in the future. The correct solution would be to use the systemd APIs for cgroup management, but there are no command-line tools for that at the moment.



    #!/bin/sh

    # This script uses commands from the cgroup-tools package. The cgroup-tools commands access the cgroup filesystem directly which is against the (new-ish) kernel's requirement that cgroups are managed by a single entity (which usually will be systemd). Additionally there is a v2 cgroup api in development which will probably replace the existing api at some point. So expect this script to break in the future. The correct way forward would be to use systemd's apis to create the cgroups, but afaik systemd currently (feb 2018) only exposes dbus apis for which there are no command line tools yet, and I didn't feel like writing those.

    # strict mode: error if commands fail or if unset variables are used
    set -eu

    if [ "$#" -lt 2 ]
    then
    echo Usage: `basename $0` "<limit> <command>..."
    echo or: `basename $0` "<memlimit> -s <swaplimit> <command>..."
    exit 1
    fi

    cgname="limitmem_$$"

    # parse command line args and find limits

    limit="$1"
    swaplimit="$limit"
    shift

    if [ "$1" = "-s" ]
    then
    shift
    swaplimit="$1"
    shift
    fi

    if [ "$1" = -- ]
    then
    shift
    fi

    if [ "$limit" = "$swaplimit" ]
    then
    memsw=0
    echo "limiting memory to $limit (cgroup $cgname) for command $@" >&2
    else
    memsw=1
    echo "limiting memory to $limit and total virtual memory to $swaplimit (cgroup $cgname) for command $@" >&2
    fi

    # create cgroup
    sudo cgcreate -g "memory:$cgname"
    sudo cgset -r memory.limit_in_bytes="$limit" "$cgname"
    bytes_limit=`cgget -g "memory:$cgname" | grep memory.limit_in_bytes | cut -d' ' -f2`

    # try also limiting swap usage, but this fails if the system has no swap
    if sudo cgset -r memory.memsw.limit_in_bytes="$swaplimit" "$cgname"
    then
    bytes_swap_limit=`cgget -g "memory:$cgname" | grep memory.memsw.limit_in_bytes | cut -d' ' -f2`
    else
    echo "failed to limit swap"
    memsw=0
    fi

    # create a waiting sudo'd process that will delete the cgroup once we're done. This prevents the user needing to enter their password to sudo again after the main command exits, which may take longer than sudo's timeout.
    tmpdir=${XDG_RUNTIME_DIR:-$TMPDIR}
    tmpdir=${tmpdir:-/tmp}
    fifo="$tmpdir/limitmem_$$_cgroup_closer"
    mkfifo --mode=u=rw,go= "$fifo"
    sudo -b sh -c "head -c1 '$fifo' >/dev/null ; cgdelete -g 'memory:$cgname'"

    # spawn subshell to run in the cgroup. If the command fails we still want to remove the cgroup so unset '-e'.
    set +e
    (
    set -e
    # move subshell into cgroup
    sudo cgclassify -g "memory:$cgname" --sticky `sh -c 'echo $PPID'` # $$ returns the main shell's pid, not this subshell's.
    exec "$@"
    )

    # grab exit code
    exitcode=$?

    set -e

    # show memory usage summary

    peak_mem=`cgget -g "memory:$cgname" | grep memory.max_usage_in_bytes | cut -d' ' -f2`
    failcount=`cgget -g "memory:$cgname" | grep memory.failcnt | cut -d' ' -f2`
    percent=`expr "$peak_mem" / \( "$bytes_limit" / 100 \)`

    echo "peak memory used: $peak_mem ($percent%); exceeded limit $failcount times" >&2

    if [ "$memsw" = 1 ]
    then
    peak_swap=`cgget -g "memory:$cgname" | grep memory.memsw.max_usage_in_bytes | cut -d' ' -f2`
    swap_failcount=`cgget -g "memory:$cgname" | grep memory.memsw.failcnt | cut -d' ' -f2`
    swap_percent=`expr "$peak_swap" / \( "$bytes_swap_limit" / 100 \)`

    echo "peak virtual memory used: $peak_swap ($swap_percent%); exceeded limit $swap_failcount times" >&2
    fi

    # remove cgroup by sending a byte through the pipe
    echo 1 > "$fifo"
    rm "$fifo"

    exit $exitcode
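
    For the question's use case, a hedged invocation might look like this (the limits, the -r 300 flag and the file names are placeholders; the -s form follows the usage message above):

    # Sketch: cap pdftoppm at 500 MB of memory and 1 GB of memory plus swap.
    limitmem 500M -s 1G pdftoppm -r 300 input.pdf output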





    • call to cgmanager_create_sync failed: invalid request for every process I try to run with limitmem 100M processname. I'm on Xubuntu 16.04 LTS and that package is installed. (comment score 1)
      – Aaron Franke
      Mar 12 '17 at 9:58












    • Oops, I get this error message: $ limitmem 400M rstudio limiting memory to 400M (cgroup limitmem_24575) for command rstudio Error org.freedesktop.DBus.Error.InvalidArgs: invalid request. Any idea?
      – R Kiselev
      Feb 15 at 7:19












    • @RKiselev cgmanager is deprecated now, and not even available in Ubuntu 17.10. The systemd api that it uses was changed at some point, so that's probably the reason. I have updated the script to use cgroup-tools commands.
      – JanKanis
      Feb 15 at 11:39












    • if the calculation for percent results in zero, the expr status code is 1, and this script exits prematurely. recommend changing the line to: percent=$(( "$peak_mem" / $(( "$bytes_limit" / 100 )) )) (ref: unix.stackexchange.com/questions/63166/…)
      – Willi Ballenthin
      May 23 at 22:46











    Your Answer








    StackExchange.ready(function() {
    var channelOptions = {
    tags: "".split(" "),
    id: "106"
    };
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function() {
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled) {
    StackExchange.using("snippets", function() {
    createEditor();
    });
    }
    else {
    createEditor();
    }
    });

    function createEditor() {
    StackExchange.prepareEditor({
    heartbeatType: 'answer',
    convertImagesToLinks: false,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    imageUploader: {
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    },
    onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    });


    }
    });














    draft saved

    draft discarded


















    StackExchange.ready(
    function () {
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f44985%2flimit-memory-usage-for-a-single-linux-process%23new-answer', 'question_page');
    }
    );

    Post as a guest















    Required, but never shown

























    5 Answers
    5






    active

    oldest

    votes








    5 Answers
    5






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes








    up vote
    53
    down vote



    accepted










    There's some problems with ulimit. Here's a useful read on the topic: Limiting time and memory consumption of a program in Linux, which lead to the timeout tool, which lets you cage a process (and its forks) by time or memory consumption.



    The timeout tool requires Perl 5+ and the /proc filesystem mounted. After that you copy the tool to e.g. /usr/local/bin like so:



    curl https://raw.githubusercontent.com/pshved/timeout/master/timeout | 
    sudo tee /usr/local/bin/timeout && sudo chmod 755 /usr/local/bin/timeout


    After that, you can 'cage' your process by memory consumption as in your question like so:



    timeout -m 500 pdftoppm Sample.pdf


    Alternatively you could use -t <seconds> and -x <hertz> to respectively limit the process by time or CPU constraints.



    The way this tool works is by checking multiple times per second if the spawned process has not oversubscribed its set boundaries. This means there actually is a small window where a process could potentially be oversubscribing before timeout notices and kills the process.



    A more correct approach would hence likely involve cgroups, but that is much more involved to set up, even if you'd use Docker or runC, which among things, offer a more user-friendly abstraction around cgroups.






    share|improve this answer























    • Seems to be working for me now (again?) but here's the google cache version: webcache.googleusercontent.com/…
      – kvz
      Apr 27 '17 at 12:32










    • Can we use timeout together with taskset (we need to limit both memory and cores) ?
      – ransh
      Oct 24 '17 at 12:47






    • 3




      It should be noted that this answer is not referring to the linux standard coreutils utility of the same name! Thus, the answer is potentially dangerous if anywhere on your system, some package has a script expecting timeout to be the linux standard coreutils package! I am unaware of this tool being packaged for distributions such as debian.
      – user1404316
      Apr 8 at 7:03












    • Does -t <seconds> constraint kill the process after that many seconds?
      – Roger That
      Nov 26 at 2:05















    up vote
    53
    down vote



    accepted










    There's some problems with ulimit. Here's a useful read on the topic: Limiting time and memory consumption of a program in Linux, which lead to the timeout tool, which lets you cage a process (and its forks) by time or memory consumption.



    The timeout tool requires Perl 5+ and the /proc filesystem mounted. After that you copy the tool to e.g. /usr/local/bin like so:



    curl https://raw.githubusercontent.com/pshved/timeout/master/timeout | 
    sudo tee /usr/local/bin/timeout && sudo chmod 755 /usr/local/bin/timeout


    After that, you can 'cage' your process by memory consumption as in your question like so:



    timeout -m 500 pdftoppm Sample.pdf


    Alternatively you could use -t <seconds> and -x <hertz> to respectively limit the process by time or CPU constraints.



    The way this tool works is by checking multiple times per second if the spawned process has not oversubscribed its set boundaries. This means there actually is a small window where a process could potentially be oversubscribing before timeout notices and kills the process.



    A more correct approach would hence likely involve cgroups, but that is much more involved to set up, even if you'd use Docker or runC, which among things, offer a more user-friendly abstraction around cgroups.






    share|improve this answer























    • Seems to be working for me now (again?) but here's the google cache version: webcache.googleusercontent.com/…
      – kvz
      Apr 27 '17 at 12:32










    • Can we use timeout together with taskset (we need to limit both memory and cores) ?
      – ransh
      Oct 24 '17 at 12:47






    • 3




      It should be noted that this answer is not referring to the linux standard coreutils utility of the same name! Thus, the answer is potentially dangerous if anywhere on your system, some package has a script expecting timeout to be the linux standard coreutils package! I am unaware of this tool being packaged for distributions such as debian.
      – user1404316
      Apr 8 at 7:03












    • Does -t <seconds> constraint kill the process after that many seconds?
      – Roger That
      Nov 26 at 2:05













    up vote
    53
    down vote



    accepted







    up vote
    53
    down vote



    accepted






    There's some problems with ulimit. Here's a useful read on the topic: Limiting time and memory consumption of a program in Linux, which lead to the timeout tool, which lets you cage a process (and its forks) by time or memory consumption.



    The timeout tool requires Perl 5+ and the /proc filesystem mounted. After that you copy the tool to e.g. /usr/local/bin like so:



    curl https://raw.githubusercontent.com/pshved/timeout/master/timeout | 
    sudo tee /usr/local/bin/timeout && sudo chmod 755 /usr/local/bin/timeout


    After that, you can 'cage' your process by memory consumption as in your question like so:



    timeout -m 500 pdftoppm Sample.pdf


    Alternatively you could use -t <seconds> and -x <hertz> to respectively limit the process by time or CPU constraints.



    The way this tool works is by checking multiple times per second if the spawned process has not oversubscribed its set boundaries. This means there actually is a small window where a process could potentially be oversubscribing before timeout notices and kills the process.



    A more correct approach would hence likely involve cgroups, but that is much more involved to set up, even if you'd use Docker or runC, which among things, offer a more user-friendly abstraction around cgroups.






    share|improve this answer














    There's some problems with ulimit. Here's a useful read on the topic: Limiting time and memory consumption of a program in Linux, which lead to the timeout tool, which lets you cage a process (and its forks) by time or memory consumption.



    The timeout tool requires Perl 5+ and the /proc filesystem mounted. After that you copy the tool to e.g. /usr/local/bin like so:



    curl https://raw.githubusercontent.com/pshved/timeout/master/timeout | 
    sudo tee /usr/local/bin/timeout && sudo chmod 755 /usr/local/bin/timeout


    After that, you can 'cage' your process by memory consumption as in your question like so:



    timeout -m 500 pdftoppm Sample.pdf


    Alternatively you could use -t <seconds> and -x <hertz> to respectively limit the process by time or CPU constraints.



    The way this tool works is by checking multiple times per second if the spawned process has not oversubscribed its set boundaries. This means there actually is a small window where a process could potentially be oversubscribing before timeout notices and kills the process.



    A more correct approach would hence likely involve cgroups, but that is much more involved to set up, even if you'd use Docker or runC, which among things, offer a more user-friendly abstraction around cgroups.







    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited Nov 23 at 3:32









    Roger That

    626




    626










    answered Nov 30 '11 at 9:42









    kvz

    67168




    67168












    • Seems to be working for me now (again?) but here's the google cache version: webcache.googleusercontent.com/…
      – kvz
      Apr 27 '17 at 12:32










    • Can we use timeout together with taskset (we need to limit both memory and cores) ?
      – ransh
      Oct 24 '17 at 12:47






    • 3




      It should be noted that this answer is not referring to the linux standard coreutils utility of the same name! Thus, the answer is potentially dangerous if anywhere on your system, some package has a script expecting timeout to be the linux standard coreutils package! I am unaware of this tool being packaged for distributions such as debian.
      – user1404316
      Apr 8 at 7:03












    • Does -t <seconds> constraint kill the process after that many seconds?
      – Roger That
      Nov 26 at 2:05


















    • Seems to be working for me now (again?) but here's the google cache version: webcache.googleusercontent.com/…
      – kvz
      Apr 27 '17 at 12:32










    • Can we use timeout together with taskset (we need to limit both memory and cores) ?
      – ransh
      Oct 24 '17 at 12:47






    • 3




      It should be noted that this answer is not referring to the linux standard coreutils utility of the same name! Thus, the answer is potentially dangerous if anywhere on your system, some package has a script expecting timeout to be the linux standard coreutils package! I am unaware of this tool being packaged for distributions such as debian.
      – user1404316
      Apr 8 at 7:03












    • Does -t <seconds> constraint kill the process after that many seconds?
      – Roger That
      Nov 26 at 2:05
















    Seems to be working for me now (again?) but here's the google cache version: webcache.googleusercontent.com/…
    – kvz
    Apr 27 '17 at 12:32




    Seems to be working for me now (again?) but here's the google cache version: webcache.googleusercontent.com/…
    – kvz
    Apr 27 '17 at 12:32












    Can we use timeout together with taskset (we need to limit both memory and cores) ?
    – ransh
    Oct 24 '17 at 12:47




    Can we use timeout together with taskset (we need to limit both memory and cores) ?
    – ransh
    Oct 24 '17 at 12:47




    3




    3




    It should be noted that this answer is not referring to the linux standard coreutils utility of the same name! Thus, the answer is potentially dangerous if anywhere on your system, some package has a script expecting timeout to be the linux standard coreutils package! I am unaware of this tool being packaged for distributions such as debian.
    – user1404316
    Apr 8 at 7:03






    It should be noted that this answer is not referring to the linux standard coreutils utility of the same name! Thus, the answer is potentially dangerous if anywhere on your system, some package has a script expecting timeout to be the linux standard coreutils package! I am unaware of this tool being packaged for distributions such as debian.
    – user1404316
    Apr 8 at 7:03














    Does -t <seconds> constraint kill the process after that many seconds?
    – Roger That
    Nov 26 at 2:05




    Does -t <seconds> constraint kill the process after that many seconds?
    – Roger That
    Nov 26 at 2:05












    up vote
    98
    down vote













    Another way to limit this is to use Linux's control groups. This is especially useful if you want to limit a process's (or group of processes') allocation of physical memory distinctly from virtual memory. For example:



    cgcreate -g memory:/myGroup
    echo $(( 500 * 1024 * 1024 )) > /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes
    echo $(( 5000 * 1024 * 1024 )) > /sys/fs/cgroup/memory/myGroup/memory.memsw.limit_in_bytes


    will create a control group named myGroup, cap the set of processes run under myGroup up to 500 MB of physical memory and up to 5000 MB of swap. To run a process under the control group:



    cgexec -g memory:myGroup pdftoppm


    Note that on a modern Ubuntu distribution this example requires installing the cgroup-bin package and editing /etc/default/grub to change GRUB_CMDLINE_LINUX_DEFAULT to:



    GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1"


    and then running sudo update-grub and rebooting to boot with the new kernel boot parameters.






    share|improve this answer



















    • 3




      The firejail program will also let you start a process with memory limits (using cgroups and namespaces to limit more than just memory). On my systems I did not have to change the kernel command line for this to work!
      – Ned64
      Feb 15 at 12:20










    • Do you need the GRUB_CMDLINE_LINUX_DEFAULT modification to make the setting persistent? I found another way to make it persistent here.
      – stason
      Aug 5 at 18:36

















    up vote
    98
    down vote













    Another way to limit this is to use Linux's control groups. This is especially useful if you want to limit a process's (or group of processes') allocation of physical memory distinctly from virtual memory. For example:



    cgcreate -g memory:/myGroup
    echo $(( 500 * 1024 * 1024 )) > /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes
    echo $(( 5000 * 1024 * 1024 )) > /sys/fs/cgroup/memory/myGroup/memory.memsw.limit_in_bytes


    will create a control group named myGroup, cap the set of processes run under myGroup up to 500 MB of physical memory and up to 5000 MB of swap. To run a process under the control group:



    cgexec -g memory:myGroup pdftoppm


    Note that on a modern Ubuntu distribution this example requires installing the cgroup-bin package and editing /etc/default/grub to change GRUB_CMDLINE_LINUX_DEFAULT to:



    GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1"


    and then running sudo update-grub and rebooting to boot with the new kernel boot parameters.






    share|improve this answer



















    • 3




      The firejail program will also let you start a process with memory limits (using cgroups and namespaces to limit more than just memory). On my systems I did not have to change the kernel command line for this to work!
      – Ned64
      Feb 15 at 12:20










    • Do you need the GRUB_CMDLINE_LINUX_DEFAULT modification to make the setting persistent? I found another way to make it persistent here.
      – stason
      Aug 5 at 18:36















    up vote
    98
    down vote










    up vote
    98
    down vote









    Another way to limit this is to use Linux's control groups. This is especially useful if you want to limit a process's (or group of processes') allocation of physical memory distinctly from virtual memory. For example:



    cgcreate -g memory:/myGroup
    echo $(( 500 * 1024 * 1024 )) > /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes
    echo $(( 5000 * 1024 * 1024 )) > /sys/fs/cgroup/memory/myGroup/memory.memsw.limit_in_bytes


    will create a control group named myGroup, cap the set of processes run under myGroup up to 500 MB of physical memory and up to 5000 MB of swap. To run a process under the control group:



    cgexec -g memory:myGroup pdftoppm


    Note that on a modern Ubuntu distribution this example requires installing the cgroup-bin package and editing /etc/default/grub to change GRUB_CMDLINE_LINUX_DEFAULT to:



    GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1"


    and then running sudo update-grub and rebooting to boot with the new kernel boot parameters.






    share|improve this answer














    Another way to limit this is to use Linux's control groups. This is especially useful if you want to limit a process's (or group of processes') allocation of physical memory distinctly from virtual memory. For example:



    cgcreate -g memory:/myGroup
    echo $(( 500 * 1024 * 1024 )) > /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes
    echo $(( 5000 * 1024 * 1024 )) > /sys/fs/cgroup/memory/myGroup/memory.memsw.limit_in_bytes


    will create a control group named myGroup, cap the set of processes run under myGroup up to 500 MB of physical memory and up to 5000 MB of swap. To run a process under the control group:



    cgexec -g memory:myGroup pdftoppm


    Note that on a modern Ubuntu distribution this example requires installing the cgroup-bin package and editing /etc/default/grub to change GRUB_CMDLINE_LINUX_DEFAULT to:



    GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1"


    and then running sudo update-grub and rebooting to boot with the new kernel boot parameters.







    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited Jul 4 '16 at 15:00









    kYuZz

    1033




    1033










    answered Apr 16 '14 at 8:36









    user65369

    1,08164




    1,08164








    • 3




      The firejail program will also let you start a process with memory limits (using cgroups and namespaces to limit more than just memory). On my systems I did not have to change the kernel command line for this to work!
      – Ned64
      Feb 15 at 12:20










    • Do you need the GRUB_CMDLINE_LINUX_DEFAULT modification to make the setting persistent? I found another way to make it persistent here.
      – stason
      Aug 5 at 18:36
















    • 3




      The firejail program will also let you start a process with memory limits (using cgroups and namespaces to limit more than just memory). On my systems I did not have to change the kernel command line for this to work!
      – Ned64
      Feb 15 at 12:20










    • Do you need the GRUB_CMDLINE_LINUX_DEFAULT modification to make the setting persistent? I found another way to make it persistent here.
      – stason
      Aug 5 at 18:36










    3




    3




    The firejail program will also let you start a process with memory limits (using cgroups and namespaces to limit more than just memory). On my systems I did not have to change the kernel command line for this to work!
    – Ned64
    Feb 15 at 12:20




    The firejail program will also let you start a process with memory limits (using cgroups and namespaces to limit more than just memory). On my systems I did not have to change the kernel command line for this to work!
    – Ned64
    Feb 15 at 12:20












    Do you need the GRUB_CMDLINE_LINUX_DEFAULT modification to make the setting persistent? I found another way to make it persistent here.
    – stason
    Aug 5 at 18:36






    Do you need the GRUB_CMDLINE_LINUX_DEFAULT modification to make the setting persistent? I found another way to make it persistent here.
    – stason
    Aug 5 at 18:36












    up vote
    69
    down vote













    If your process doesn't spawn more children that consume the most memory, you may use setrlimit function. More common user interface for that is using ulimit command of the shell:



    $ ulimit -Sv 500000     # Set ~500 mb limit
    $ pdftoppm ...


    This will only limit "virtual" memory of your process, taking into account—and limiting—the memory the process being invoked shares with other processes, and the memory mapped but not reserved (for instance, Java's large heap). Still, virtual memory is the closest approximation for processes that grow really large, making the said errors insignificant.



    If your program spawns children, and it's them which allocate memory, it becomes more complex, and you should write auxiliary scripts to run processes under your control. I wrote in my blog, why and how.






    share|improve this answer



















    • 2




      why is setrlimit more complex for more children? man setrlimit tells me that "A child process created via fork(2) inherits its parents resource limits. Resource limits are preserved across execve(2)"
      – akira
      Feb 13 '11 at 8:13






    • 6




      Because the kernel does not sum the vm size for all child processes; if it did it would get the answer wrong anyway. The limit is per-process, and is virtual address space, not memory usage. Memory usage is harder to measure.
      – MarkR
      Feb 13 '11 at 8:17






    • 1




      if i understand the question correctly then OP whats the limit per subprocess (child) .. not in total.
      – akira
      Feb 13 '11 at 8:21










    • @MarkR, anyway, virtual address space is a good approximation for the memory used, especially if you run a program that's not controlled by a virtual machine (say, Java). At least I don't know any better metric.
      – Pavel Shved
      Feb 13 '11 at 8:23






    • 2




      Just wanted to say thanks - this ulimit approach helped me with firefox's bug 622816 – Loading a large image can "freeze" firefox, or crash the system; which on a USB boot (from RAM) tends to freeze the OS, requiring hard restart; now at least firefox crashes itself, leaving the OS alive... Cheers!
      – sdaau
      Apr 4 '13 at 15:51

















    up vote
    69
    down vote













    If your process doesn't spawn more children that consume the most memory, you may use setrlimit function. More common user interface for that is using ulimit command of the shell:



    $ ulimit -Sv 500000     # Set ~500 mb limit
    $ pdftoppm ...


    This will only limit "virtual" memory of your process, taking into account—and limiting—the memory the process being invoked shares with other processes, and the memory mapped but not reserved (for instance, Java's large heap). Still, virtual memory is the closest approximation for processes that grow really large, making the said errors insignificant.



    If your program spawns children, and it's them which allocate memory, it becomes more complex, and you should write auxiliary scripts to run processes under your control. I wrote in my blog, why and how.






    share|improve this answer



















    • 2




      why is setrlimit more complex for more children? man setrlimit tells me that "A child process created via fork(2) inherits its parents resource limits. Resource limits are preserved across execve(2)"
      – akira
      Feb 13 '11 at 8:13






    • 6




      Because the kernel does not sum the vm size for all child processes; if it did it would get the answer wrong anyway. The limit is per-process, and is virtual address space, not memory usage. Memory usage is harder to measure.
      – MarkR
      Feb 13 '11 at 8:17






    • 1




      if i understand the question correctly then OP whats the limit per subprocess (child) .. not in total.
      – akira
      Feb 13 '11 at 8:21










    • @MarkR, anyway, virtual address space is a good approximation for the memory used, especially if you run a program that's not controlled by a virtual machine (say, Java). At least I don't know any better metric.
      – Pavel Shved
      Feb 13 '11 at 8:23






    • 2




      Just wanted to say thanks - this ulimit approach helped me with firefox's bug 622816 – Loading a large image can "freeze" firefox, or crash the system; which on a USB boot (from RAM) tends to freeze the OS, requiring hard restart; now at least firefox crashes itself, leaving the OS alive... Cheers!
      – sdaau
      Apr 4 '13 at 15:51















    up vote
    69
    down vote










    up vote
    69
    down vote









    If your process doesn't spawn more children that consume the most memory, you may use setrlimit function. More common user interface for that is using ulimit command of the shell:



    $ ulimit -Sv 500000     # Set ~500 mb limit
    $ pdftoppm ...


    This will only limit "virtual" memory of your process, taking into account—and limiting—the memory the process being invoked shares with other processes, and the memory mapped but not reserved (for instance, Java's large heap). Still, virtual memory is the closest approximation for processes that grow really large, making the said errors insignificant.



    If your program spawns children, and it's them which allocate memory, it becomes more complex, and you should write auxiliary scripts to run processes under your control. I wrote in my blog, why and how.






    share|improve this answer














    If your process doesn't spawn more children that consume the most memory, you may use setrlimit function. More common user interface for that is using ulimit command of the shell:



    $ ulimit -Sv 500000     # Set ~500 mb limit
    $ pdftoppm ...


    This will only limit "virtual" memory of your process, taking into account—and limiting—the memory the process being invoked shares with other processes, and the memory mapped but not reserved (for instance, Java's large heap). Still, virtual memory is the closest approximation for processes that grow really large, making the said errors insignificant.



    If your program spawns children, and it's them which allocate memory, it becomes more complex, and you should write auxiliary scripts to run processes under your control. I wrote in my blog, why and how.







    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited Jul 6 '16 at 14:05









    Sahil Singh

    28428




    28428










    answered Feb 13 '11 at 8:11









    P Shved

    1,11175




    1,11175








    • 2




      why is setrlimit more complex for more children? man setrlimit tells me that "A child process created via fork(2) inherits its parents resource limits. Resource limits are preserved across execve(2)"
      – akira
      Feb 13 '11 at 8:13






    • 6




      Because the kernel does not sum the vm size for all child processes; if it did it would get the answer wrong anyway. The limit is per-process, and is virtual address space, not memory usage. Memory usage is harder to measure.
      – MarkR
      Feb 13 '11 at 8:17






    • 1




      if i understand the question correctly then OP whats the limit per subprocess (child) .. not in total.
      – akira
      Feb 13 '11 at 8:21










    • @MarkR, anyway, virtual address space is a good approximation for the memory used, especially if you run a program that's not controlled by a virtual machine (say, Java). At least I don't know any better metric.
      – Pavel Shved
      Feb 13 '11 at 8:23






    • 2




      Just wanted to say thanks - this ulimit approach helped me with firefox's bug 622816 – Loading a large image can "freeze" firefox, or crash the system; which on a USB boot (from RAM) tends to freeze the OS, requiring hard restart; now at least firefox crashes itself, leaving the OS alive... Cheers!
      – sdaau
      Apr 4 '13 at 15:51
















    • 2




      why is setrlimit more complex for more children? man setrlimit tells me that "A child process created via fork(2) inherits its parents resource limits. Resource limits are preserved across execve(2)"
      – akira
      Feb 13 '11 at 8:13






    • 6




      Because the kernel does not sum the vm size for all child processes; if it did it would get the answer wrong anyway. The limit is per-process, and is virtual address space, not memory usage. Memory usage is harder to measure.
      – MarkR
      Feb 13 '11 at 8:17






    • 1




      if i understand the question correctly then OP whats the limit per subprocess (child) .. not in total.
      – akira
      Feb 13 '11 at 8:21










    • @MarkR, anyway, virtual address space is a good approximation for the memory used, especially if you run a program that's not controlled by a virtual machine (say, Java). At least I don't know any better metric.
      – Pavel Shved
      Feb 13 '11 at 8:23






    • 2




      Just wanted to say thanks - this ulimit approach helped me with firefox's bug 622816 – Loading a large image can "freeze" firefox, or crash the system; which on a USB boot (from RAM) tends to freeze the OS, requiring hard restart; now at least firefox crashes itself, leaving the OS alive... Cheers!
      – sdaau
      Apr 4 '13 at 15:51










    2




    2




    why is setrlimit more complex for more children? man setrlimit tells me that "A child process created via fork(2) inherits its parents resource limits. Resource limits are preserved across execve(2)"
    – akira
    Feb 13 '11 at 8:13




    why is setrlimit more complex for more children? man setrlimit tells me that "A child process created via fork(2) inherits its parents resource limits. Resource limits are preserved across execve(2)"
    – akira
    Feb 13 '11 at 8:13




    6




    6




    Because the kernel does not sum the vm size for all child processes; if it did it would get the answer wrong anyway. The limit is per-process, and is virtual address space, not memory usage. Memory usage is harder to measure.
    – MarkR
    Feb 13 '11 at 8:17




    Because the kernel does not sum the vm size for all child processes; if it did it would get the answer wrong anyway. The limit is per-process, and is virtual address space, not memory usage. Memory usage is harder to measure.
    – MarkR
    Feb 13 '11 at 8:17




    1




    1




    if i understand the question correctly then OP whats the limit per subprocess (child) .. not in total.
    – akira
    Feb 13 '11 at 8:21




    if i understand the question correctly then OP whats the limit per subprocess (child) .. not in total.
    – akira
    Feb 13 '11 at 8:21












    @MarkR, anyway, virtual address space is a good approximation for the memory used, especially if you run a program that's not controlled by a virtual machine (say, Java). At least I don't know any better metric.
    – Pavel Shved
    Feb 13 '11 at 8:23




    @MarkR, anyway, virtual address space is a good approximation for the memory used, especially if you run a program that's not controlled by a virtual machine (say, Java). At least I don't know any better metric.
    – Pavel Shved
    Feb 13 '11 at 8:23




    2




    2




    Just wanted to say thanks - this ulimit approach helped me with firefox's bug 622816 – Loading a large image can "freeze" firefox, or crash the system; which on a USB boot (from RAM) tends to freeze the OS, requiring hard restart; now at least firefox crashes itself, leaving the OS alive... Cheers!
    – sdaau
    Apr 4 '13 at 15:51






    Just wanted to say thanks - this ulimit approach helped me with firefox's bug 622816 – Loading a large image can "freeze" firefox, or crash the system; which on a USB boot (from RAM) tends to freeze the OS, requiring hard restart; now at least firefox crashes itself, leaving the OS alive... Cheers!
    – sdaau
    Apr 4 '13 at 15:51












    up vote
    6
    down vote













    In addition to the tools from daemontools, suggested by Mark Johnson, you can also consider chpst which is found in runit. Runit itself is bundled in busybox, so you might already have it installed.



    The man page of chpst shows the option:




    -m bytes
    limit memory. Limit the data segment, stack segment, locked physical pages, and total of all segment per process to bytes bytes
    each.







    share|improve this answer



























      up vote
      6
      down vote













      In addition to the tools from daemontools, suggested by Mark Johnson, you can also consider chpst which is found in runit. Runit itself is bundled in busybox, so you might already have it installed.



      The man page of chpst shows the option:




      -m bytes
      limit memory. Limit the data segment, stack segment, locked physical pages, and total of all segment per process to bytes bytes
      each.







      share|improve this answer

























        up vote
        6
        down vote










        up vote
        6
        down vote









        In addition to the tools from daemontools, suggested by Mark Johnson, you can also consider chpst which is found in runit. Runit itself is bundled in busybox, so you might already have it installed.



        The man page of chpst shows the option:




        -m bytes
        limit memory. Limit the data segment, stack segment, locked physical pages, and total of all segment per process to bytes bytes
        each.







        share|improve this answer














        In addition to the tools from daemontools, suggested by Mark Johnson, you can also consider chpst which is found in runit. Runit itself is bundled in busybox, so you might already have it installed.



        The man page of chpst shows the option:




        -m bytes
        limit memory. Limit the data segment, stack segment, locked physical pages, and total of all segment per process to bytes bytes
        each.








        share|improve this answer














        share|improve this answer



        share|improve this answer








        edited Dec 8 '16 at 14:25









        Mark Fisher

        1034




        1034










        answered Jul 9 '15 at 12:26









        Oz123

        325316




        325316






















            up vote
            5
            down vote













            I'm using the below script, which works great. It uses cgroups through cgmanager. Update: it now uses the commands from cgroup-tools. Name this script limitmem and put it in your $PATH and you can use it like limitmem 100M bash. This will limit both memory and swap usage. To limit just memory remove the line with memory.memsw.limit_in_bytes.



            Disclaimer: I wouldn't be surprised if cgroup-tools also breaks in the future. The correct solution would be to use the systemd api's for cgroup management but there are no command line tools for that a.t.m.



            #!/bin/sh

            # This script uses commands from the cgroup-tools package. The cgroup-tools commands access the cgroup filesystem directly which is against the (new-ish) kernel's requirement that cgroups are managed by a single entity (which usually will be systemd). Additionally there is a v2 cgroup api in development which will probably replace the existing api at some point. So expect this script to break in the future. The correct way forward would be to use systemd's apis to create the cgroups, but afaik systemd currently (feb 2018) only exposes dbus apis for which there are no command line tools yet, and I didn't feel like writing those.

            # strict mode: error if commands fail or if unset variables are used
            set -eu

            if [ "$#" -lt 2 ]
            then
            echo Usage: `basename $0` "<limit> <command>..."
            echo or: `basename $0` "<memlimit> -s <swaplimit> <command>..."
            exit 1
            fi

            cgname="limitmem_$$"

            # parse command line args and find limits

            limit="$1"
            swaplimit="$limit"
            shift

            if [ "$1" = "-s" ]
            then
            shift
            swaplimit="$1"
            shift
            fi

            if [ "$1" = -- ]
            then
            shift
            fi

            if [ "$limit" = "$swaplimit" ]
            then
            memsw=0
            echo "limiting memory to $limit (cgroup $cgname) for command $@" >&2
            else
            up vote
            5
            down vote













I'm using the script below, which works great. It originally used cgroups through cgmanager; it now uses the commands from cgroup-tools. Name this script limitmem, put it somewhere in your $PATH, and you can use it like limitmem 100M bash. This will limit both memory and swap usage. To limit just memory, remove the line with memory.memsw.limit_in_bytes.
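For example, once you've saved the script below as limitmem, a minimal install-and-run sketch could look like this (the /usr/local/bin location is just one common choice, not a requirement):

    chmod +x limitmem
    sudo cp limitmem /usr/local/bin/
    limitmem 100M bash            # memory and memory+swap both capped at 100M
    limitmem 100M -s 200M bash    # memory capped at 100M, memory+swap capped at 200M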



Disclaimer: I wouldn't be surprised if cgroup-tools also breaks in the future. The correct solution would be to use the systemd APIs for cgroup management, but there are no command-line tools for that at the moment.
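(For what it's worth, depending on your systemd version, systemd-run can already start a single command in a transient, memory-limited scope without going through cgroup-tools; the command and the 500M figure below are only illustrative, and on cgroup-v2 systems the property is spelled MemoryMax= instead of MemoryLimit=.)

    # run a command in its own transient scope with a hard memory cap
    sudo systemd-run --scope -p MemoryLimit=500M some_command arg1 arg2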



#!/bin/sh

# This script uses commands from the cgroup-tools package. The cgroup-tools
# commands access the cgroup filesystem directly, which is against the
# (new-ish) kernel's requirement that cgroups are managed by a single entity
# (which usually will be systemd). Additionally there is a v2 cgroup API in
# development which will probably replace the existing API at some point, so
# expect this script to break in the future. The correct way forward would be
# to use systemd's APIs to create the cgroups, but as far as I know systemd
# currently (Feb 2018) only exposes D-Bus APIs for which there are no command
# line tools yet, and I didn't feel like writing those.

# strict mode: error if commands fail or if unset variables are used
set -eu

if [ "$#" -lt 2 ]
then
    echo "Usage: `basename $0` <limit> <command>..."
    echo "   or: `basename $0` <memlimit> -s <swaplimit> <command>..."
    exit 1
fi

cgname="limitmem_$$"

# parse command line args and find limits

limit="$1"
swaplimit="$limit"
shift

if [ "$1" = "-s" ]
then
    shift
    swaplimit="$1"
    shift
fi

if [ "$1" = -- ]
then
    shift
fi

if [ "$limit" = "$swaplimit" ]
then
    memsw=0
    echo "limiting memory to $limit (cgroup $cgname) for command $@" >&2
else
    memsw=1
    echo "limiting memory to $limit and total virtual memory to $swaplimit (cgroup $cgname) for command $@" >&2
fi

# create cgroup
sudo cgcreate -g "memory:$cgname"
sudo cgset -r memory.limit_in_bytes="$limit" "$cgname"
bytes_limit=`cgget -g "memory:$cgname" | grep memory.limit_in_bytes | cut -d' ' -f2`

# try also limiting swap usage, but this fails if the system has no swap
if sudo cgset -r memory.memsw.limit_in_bytes="$swaplimit" "$cgname"
then
    bytes_swap_limit=`cgget -g "memory:$cgname" | grep memory.memsw.limit_in_bytes | cut -d' ' -f2`
else
    echo "failed to limit swap"
    memsw=0
fi

# create a waiting sudo'd process that will delete the cgroup once we're done.
# This prevents the user needing to enter their password to sudo again after
# the main command exits, which may take longer than sudo's timeout.
tmpdir=${XDG_RUNTIME_DIR:-${TMPDIR:-}}
tmpdir=${tmpdir:-/tmp}
fifo="$tmpdir/limitmem_$$_cgroup_closer"
mkfifo --mode=u=rw,go= "$fifo"
sudo -b sh -c "head -c1 '$fifo' >/dev/null ; cgdelete -g 'memory:$cgname'"

# spawn subshell to run in the cgroup. If the command fails we still want to
# remove the cgroup, so unset '-e'.
set +e
(
    set -e
    # move subshell into cgroup
    sudo cgclassify -g "memory:$cgname" --sticky `sh -c 'echo $PPID'` # $$ returns the main shell's pid, not this subshell's.
    exec "$@"
)

# grab exit code
exitcode=$?

set -e

# show memory usage summary

peak_mem=`cgget -g "memory:$cgname" | grep memory.max_usage_in_bytes | cut -d' ' -f2`
failcount=`cgget -g "memory:$cgname" | grep memory.failcnt | cut -d' ' -f2`
percent=`expr "$peak_mem" / \( "$bytes_limit" / 100 \)`

echo "peak memory used: $peak_mem ($percent%); exceeded limit $failcount times" >&2

if [ "$memsw" = 1 ]
then
    peak_swap=`cgget -g "memory:$cgname" | grep memory.memsw.max_usage_in_bytes | cut -d' ' -f2`
    swap_failcount=`cgget -g "memory:$cgname" | grep memory.memsw.failcnt | cut -d' ' -f2`
    swap_percent=`expr "$peak_swap" / \( "$bytes_swap_limit" / 100 \)`

    echo "peak virtual memory used: $peak_swap ($swap_percent%); exceeded limit $swap_failcount times" >&2
fi

# remove cgroup by sending a byte through the pipe
echo 1 > "$fifo"
rm "$fifo"

exit $exitcode
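Applied to the pdftoppm scenario from the question, usage might look like this (the file names and the 500M cap are illustrative):

    # render an untrusted PDF at 300 DPI; if pdftoppm exceeds the cgroup limit,
    # the kernel's OOM killer typically kills it instead of exhausting system memory
    limitmem 500M pdftoppm -r 300 untrusted.pdf page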





answered Apr 26 '16 at 12:53 by JanKanis, edited Feb 15 at 11:50
• I get call to cgmanager_create_sync failed: invalid request for every process I try to run with limitmem 100M processname. I'm on Xubuntu 16.04 LTS and that package is installed.
  – Aaron Franke Mar 12 '17 at 9:58

• Oops, I get this error message: $ limitmem 400M rstudio limiting memory to 400M (cgroup limitmem_24575) for command rstudio Error org.freedesktop.DBus.Error.InvalidArgs: invalid request. Any idea?
  – R Kiselev Feb 15 at 7:19

• @RKiselev cgmanager is deprecated now, and not even available in Ubuntu 17.10. The systemd API that it uses was changed at some point, so that's probably the reason. I have updated the script to use cgroup-tools commands.
  – JanKanis Feb 15 at 11:39

• If the calculation for percent results in zero, the expr status code is 1, and this script exits prematurely. I recommend changing the line to: percent=$(( "$peak_mem" / $(( "$bytes_limit" / 100 )) )) (ref: unix.stackexchange.com/questions/63166/…)
  – Willi Ballenthin May 23 at 22:46
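To see the failure mode that last comment describes (a standalone sketch, not part of the script above): expr prints the result but exits with status 1 whenever that result is 0, which aborts a script running under set -e, whereas POSIX arithmetic expansion does not have that quirk:

    expr 5 / 10          # prints 0, but the exit status is 1
    echo "status: $?"    # -> 1; with set -e in effect the script would already have aborted here
    echo $(( 5 / 10 ))   # prints 0
    echo "status: $?"    # -> 0, so set -e is unaffected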













