GPU usage monitoring (CUDA)


























I installed the CUDA toolkit on my computer and started a BOINC project on the GPU. In BOINC I can see that it is running on the GPU, but is there a tool that can show me more details about what is running on the GPU - GPU usage and memory usage?










monitoring gpu

asked May 13 '12 at 10:46 by pbm






















15 Answers


















          189














          For Nvidia GPUs there is a tool called nvidia-smi that can show memory usage, GPU utilization and the temperature of the GPU. There is also a list of compute processes and a few more options, but my graphics card (GeForce 9600 GT) is not fully supported.



          Sun May 13 20:02:49 2012       
          +------------------------------------------------------+
          | NVIDIA-SMI 3.295.40 Driver Version: 295.40 |
          |-------------------------------+----------------------+----------------------+
          | Nb. Name | Bus Id Disp. | Volatile ECC SB / DB |
          | Fan Temp Power Usage /Cap | Memory Usage | GPU Util. Compute M. |
          |===============================+======================+======================|
          | 0. GeForce 9600 GT | 0000:01:00.0 N/A | N/A N/A |
          | 0% 51 C N/A N/A / N/A | 90% 459MB / 511MB | N/A Default |
          |-------------------------------+----------------------+----------------------|
          | Compute processes: GPU Memory |
          | GPU PID Process name Usage |
          |=============================================================================|
          | 0. Not Supported |
          +-----------------------------------------------------------------------------+
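          If the full dashboard is more than you need, nvidia-smi can also print just selected sections of its report. A minimal sketch, assuming your driver's nvidia-smi supports the -q/-d selectors and the -l loop switch:

          # show only the memory, utilization and temperature sections
          nvidia-smi -q -d MEMORY,UTILIZATION,TEMPERATURE

          # the same, refreshed every 5 seconds
          nvidia-smi -q -d MEMORY,UTILIZATION,TEMPERATURE -l 5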





          • My ION chip does not show usage, either. :/ – Raphael, Jun 10 '12 at 22:15

          • watch -n 0.5 nvidia-smi will keep the output updated without filling your terminal with output. – Bar, Jul 14 '16 at 18:26

          • @Bar Good tip. watch -d -n 0.5 nvidia-smi will be even better. – zeekvfu, Jan 17 at 16:27

          • @zeekvfu I think it'd be better to explain what the -d flag does. – donlucacorleone, Oct 10 at 14:41

          • @donlucacorleone man watch tells us the -d flag highlights differences between the outputs, so it can help show which metrics are changing over time. – David Kaczynski, Oct 21 at 2:56



















          56














          For Linux, nvidia-smi -l 1 will continually give you the GPU usage info, with a refresh interval of 1 second.
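
          If you also want a record of the readings, the looped output can simply be appended to a file. A minimal sketch (the log file name is just an example):

          # refresh every 60 seconds and append each full report to a log
          nvidia-smi -l 60 | tee -a gpu-usage.log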






          • I prefer to use watch -n 1 nvidia-smi to obtain continuous updates without filling the terminal with output. – ali_m, Jan 27 '16 at 23:59

          • Using watch means you're starting a new process every second to poll the cards. Better to do -l, and not every second; I'd suggest every minute or every 5 minutes. – Mick T, Apr 19 at 15:55



















          49














          For Intel GPUs there is intel-gpu-tools from the http://intellinuxgraphics.org/ project, which provides the command intel_gpu_top (amongst other things). It is similar to top and htop, but specifically for the Intel GPU.



             render busy:  18%: ███▋                                   render space: 39/131072
          bitstream busy: 0%: bitstream space: 0/131072
          blitter busy: 28%: █████▋ blitter space: 28/131072

          task percent busy
          GAM: 33%: ██████▋ vert fetch: 0 (0/sec)
          GAFS: 3%: ▋ prim fetch: 0 (0/sec)
          VS: 0%: VS invocations: 559188 (150/sec)
          SF: 0%: GS invocations: 0 (0/sec)
          VF: 0%: GS prims: 0 (0/sec)
          DS: 0%: CL invocations: 186396 (50/sec)
          CL: 0%: CL prims: 186396 (50/sec)
          SOL: 0%: PS invocations: 8191776208 (38576436/sec)
          GS: 0%: PS depth pass: 8158502721 (38487525/sec)
          HS: 0%:
          TE: 0%:
          GAFM: 0%:
          SVG: 0%:
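
          On Debian/Ubuntu-based systems the tool is usually packaged, and it typically needs root to read the GPU performance counters. A minimal sketch; the package name and the need for sudo are assumptions that may vary by distribution:

          sudo apt-get install intel-gpu-tools   # assumed package name on Debian/Ubuntu
          sudo intel_gpu_top                     # reads the GPU performance counters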




































            46














            Recently I have written a simple command-line utility called gpustat (which is a wrapper of nvidia-smi): please take a look at https://github.com/wookayin/gpustat.
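
            For reference, a minimal sketch of installing and watching it, assuming pip is available (pairing it with watch avoids scrolling, as with nvidia-smi above):

            pip install --user gpustat
            watch -n 1 gpustat        # refresh once per second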





































              29














              nvidia-smi does not work on some Linux machines (it returns N/A for many properties). You can use nvidia-settings instead (this is also what Mat Kelcey used in his Python script).



              nvidia-settings -q GPUUtilization -q useddedicatedgpumemory


              You can also use:



              watch -n0.1 "nvidia-settings -q GPUUtilization -q useddedicatedgpumemory"


              for continuous monitoring.
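
              If you only want the raw numbers (e.g. to feed into a script), nvidia-settings also has a terse output mode. A sketch, assuming your driver accepts the -t flag and the [gpu:0] target syntax:

              nvidia-settings -q '[gpu:0]/GPUUtilization' -t            # just the value, no descriptive text
              nvidia-settings -q '[gpu:0]/UsedDedicatedGPUMemory' -t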






              • Glad this wasn't a comment. It's exactly what I was searching for when I came across this question. – Score_Under, Jun 20 '15 at 0:24

              • Thanks, this is what worked for me, since I have a GeForce card which is not supported by nvidia-smi. – alexg, Dec 22 '15 at 9:23

              • You can do nvidia-settings -q all to see what other parameters you can monitor. I'm monitoring GPUCurrentProcessorClockFreqs and GPUCurrentClockFreqs. – alexg, Dec 22 '15 at 9:34

              • Thanks man, good idea to query all, since each card may have different strings to monitor! – ruoho ruotsi, Feb 2 '16 at 19:08



















              12














              For completeness, AMD has two options:





              1. fglrx (closed source drivers).



                $ aticonfig --odgc --odgt



              2. mesa (open source drivers): you can use RadeonTop to view your GPU utilization, both as a total activity percentage and for individual blocks. A usage sketch follows below.
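
              A minimal sketch of getting RadeonTop running; the package name and the -d/-l dump flags are assumptions, so check radeontop --help on your system:

                sudo apt-get install radeontop   # assumed package name on Debian/Ubuntu
                sudo radeontop                   # interactive, top-like view
                sudo radeontop -d - -l 1         # non-interactive: dump one sample to stdout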








































                10














                For Linux, I use this htop-like tool that I wrote myself. It monitors and gives an overview of the GPU temperature as well as the core / VRAM / PCI-E & memory bus usage. It does not monitor what's running on the GPU, though.



                gmonitor









                • nvidia-settings requires a running X11, which is not always the case. – Victor Sergienko, Jul 8 '17 at 0:57

                • Works for me with no hassle! – Hennadii Madan, Jul 14 '17 at 6:29



















                9














                I have a GeForce GTX 1060 video card, and I found that the following command gives me info about card utilization, temperature, fan speed and power consumption:

                $ nvidia-smi --format=csv --query-gpu=power.draw,utilization.gpu,fan.speed,temperature.gpu

                You can see the list of all query options with:

                $ nvidia-smi --help-query-gpu
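
                Combining --query-gpu with the built-in loop switch gives a lightweight CSV logger that is easy to plot later. A minimal sketch; the field names come from --help-query-gpu and the file name is just an example:

                # append one CSV record every 5 seconds: timestamp, GPU util, memory used, temperature, power
                nvidia-smi --query-gpu=timestamp,utilization.gpu,memory.used,temperature.gpu,power.draw \
                           --format=csv -l 5 >> gpu-stats.csv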





                • It would be worth adding memory.used (or memory.free) as well. – Zoltan, Sep 23 at 9:10



















                3














                For OS X



                Including Mountain Lion



                iStat Menus



                Excluding Mountain Lion



                atMonitor




                The last version of atMonitor to support GPU-related features is atMonitor 2.7.1 – and the link to 2.7.1 delivers 2.7b.



                For the more recent version of the app, atMonitor - FAQ explains:




                To make atMonitor compatible with MacOS 10.8 we have removed all GPU related features.




                I experimented with 2.7b a.k.a. 2.7.1 on Mountain Lion with a MacBookPro5,2 with an NVIDIA GeForce 9600M GT. The app ran for a few seconds before quitting; it showed temperature but not usage:



                                                                  screenshot of atMonitor 2.7b on Mountain Lion





































                  2














                  Glances has a plugin which shows GPU utilization and memory usage.






                  http://glances.readthedocs.io/en/stable/aoa/gpu.html



                  It uses the nvidia-ml-py3 library: https://pypi.python.org/pypi/nvidia-ml-py3
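
                  A minimal install sketch, assuming pip is available (installing the NVIDIA bindings explicitly alongside Glances is an assumption; check the Glances docs for the exact extras on your version):

                  pip install --user glances nvidia-ml-py3   # Glances plus the NVIDIA NVML bindings
                  glances                                    # the GPU plugin should appear if a supported GPU is detected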



































                    1














                    For Nvidia on Linux, I use the following Python script, which takes an optional delay and repeat count like iostat and vmstat:



                    https://gist.github.com/matpalm/9c0c7c6a6f3681a0d39d



                    $ gpu_stat.py 1 2
                    {"util":{"PCIe":"0", "memory":"10", "video":"0", "graphics":"11"}, "used_mem":"161", "time": 1424839016}
                    {"util":{"PCIe":"0", "memory":"10", "video":"0", "graphics":"9"}, "used_mem":"161", "time":1424839018}


































                      1














                      I have had processes terminate (probably killed or crashed) yet continue to use resources, but they were not listed in nvidia-smi. Usually these processes were just holding GPU memory.



                      If you think you have a process using resources on a GPU and it is not being shown in nvidia-smi, you can try running this command to double check. It will show you which processes are using your GPUs.



                      sudo fuser -v /dev/nvidia*


                      This works on EL7; Ubuntu or other distributions might have their nvidia devices listed under another name/location.
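
                      If a stale process really is holding GPU memory, the PIDs printed by fuser can then be killed. A sketch; use -k with care, since it kills every process that has the device nodes open:

                      sudo fuser -v /dev/nvidia*   # list processes holding the NVIDIA device nodes
                      sudo kill -9 12345           # replace 12345 with the stale PID from the listing above
                      sudo fuser -k /dev/nvidia*   # or, more bluntly, kill everything holding the devices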





































                        1














                        The following function appends, to the output of nvidia-smi, information such as the PID, user name, CPU usage, memory usage, GPU memory usage, program arguments and run time of the processes that are running on the GPU:



                        # print the regular nvidia-smi output, then join its compute-apps table with ps output
                        function better-nvidia-smi () {
                            nvidia-smi
                            join -1 1 -2 3 \
                                <(nvidia-smi --query-compute-apps=pid,used_memory \
                                             --format=csv \
                                  | sed "s/ //g" | sed "s/,/ /g" \
                                  | awk 'NR<=1 {print toupper($0)} NR>1 {print $0}' \
                                  | sed "/\[NotSupported\]/d" \
                                  | awk 'NR<=1{print $0;next}{print $0| "sort -k1"}') \
                                <(ps -a -o user,pgrp,pid,pcpu,pmem,time,command \
                                  | awk 'NR<=1{print $0;next}{print $0| "sort -k3"}') \
                            | column -t
                        }


                        Example output:



                        $ better-nvidia-smi
                        Fri Sep 29 16:52:58 2017
                        +-----------------------------------------------------------------------------+
                        | NVIDIA-SMI 378.13 Driver Version: 378.13 |
                        |-------------------------------+----------------------+----------------------+
                        | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
                        | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
                        |===============================+======================+======================|
                        | 0 GeForce GT 730 Off | 0000:01:00.0 N/A | N/A |
                        | 32% 49C P8 N/A / N/A | 872MiB / 976MiB | N/A Default |
                        +-------------------------------+----------------------+----------------------+
                        | 1 Graphics Device Off | 0000:06:00.0 Off | N/A |
                        | 23% 35C P8 17W / 250W | 199MiB / 11172MiB | 0% Default |
                        +-------------------------------+----------------------+----------------------+

                        +-----------------------------------------------------------------------------+
                        | Processes: GPU Memory |
                        | GPU PID Type Process name Usage |
                        |=============================================================================|
                        | 0 Not Supported |
                        | 1 5113 C python 187MiB |
                        +-----------------------------------------------------------------------------+
                        PID USED_GPU_MEMORY[MIB] USER PGRP %CPU %MEM TIME COMMAND
                        9178 187MiB tmborn 9175 129 2.6 04:32:19 ../path/to/python script.py args 42





                        • Careful, I don't think the pmem given by ps takes into account the total memory of the GPU but that of the CPU, because ps is not "Nvidia GPU"-aware. – SebMa, May 29 at 14:09





















                        0














                        You can use nvtop; it's similar to htop, but for NVIDIA GPUs. Link: https://github.com/Syllo/nvtop
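
                        On recent distributions nvtop may already be packaged; otherwise the project builds from source. A sketch in which both the package availability and the CMake build steps are assumptions, so check the README in the repository:

                        sudo apt install nvtop                         # if your distribution packages it
                        # or build from source (assumed CMake workflow)
                        git clone https://github.com/Syllo/nvtop.git
                        cd nvtop && mkdir build && cd build
                        cmake .. && make && sudo make install
                        nvtop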



































                          0














                          This script is more readable and is designed for easy mods and extensions.



                          You can replace gnome-terminal with your favorite terminal window program.





                          #! /bin/bash

                          if [ "$1" = "--guts" ]; then
                              echo; echo " ctrl-c to gracefully close"
                              f "$a"
                              f "$b"
                              exit 0; fi

                          # easy to customize here using "nvidia-smi --help-query-gpu" as a guide
                          a='--query-gpu=pstate,memory.used,utilization.memory,utilization.gpu,encoder.stats.sessionCount'
                          b='--query-gpu=encoder.stats.averageFps,encoder.stats.averageLatency,temperature.gpu,power.draw'
                          p=0.5    # refresh period in seconds
                          s=110x9  # view port as width_in_chars x line_count

                          c="s/^/ /; s/, +/\t/g"                  # pad and tab-separate the CSV fields
                          t="`echo '' | tr '\n' '\t'`"            # a literal tab character for column -s
                          function f() { echo; nvidia-smi --format=csv "$1" | sed -r "$c" | column -t "-s$t" "-o "; }
                          export c t a b; export -f f
                          gnome-terminal --hide-menubar --geometry=$s -- watch -t -n$p "`readlink -f "$0"`" --guts

                          #


                          License: GNU GPLv2, TranSeed Research



























                            Your Answer








                            StackExchange.ready(function() {
                            var channelOptions = {
                            tags: "".split(" "),
                            id: "106"
                            };
                            initTagRenderer("".split(" "), "".split(" "), channelOptions);

                            StackExchange.using("externalEditor", function() {
                            // Have to fire editor after snippets, if snippets enabled
                            if (StackExchange.settings.snippets.snippetsEnabled) {
                            StackExchange.using("snippets", function() {
                            createEditor();
                            });
                            }
                            else {
                            createEditor();
                            }
                            });

                            function createEditor() {
                            StackExchange.prepareEditor({
                            heartbeatType: 'answer',
                            autoActivateHeartbeat: false,
                            convertImagesToLinks: false,
                            noModals: true,
                            showLowRepImageUploadWarning: true,
                            reputationToPostImages: null,
                            bindNavPrevention: true,
                            postfix: "",
                            imageUploader: {
                            brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
                            contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
                            allowUrls: true
                            },
                            onDemand: true,
                            discardSelector: ".discard-answer"
                            ,immediatelyShowMarkdownHelp:true
                            });


                            }
                            });














                            draft saved

                            draft discarded


















                            StackExchange.ready(
                            function () {
                            StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f38560%2fgpu-usage-monitoring-cuda%23new-answer', 'question_page');
                            }
                            );

                            Post as a guest















                            Required, but never shown

























                            15 Answers
                            15






                            active

                            oldest

                            votes








                            15 Answers
                            15






                            active

                            oldest

                            votes









                            active

                            oldest

                            votes






                            active

                            oldest

                            votes









                            189














                            For Nvidia GPUs there is a tool nvidia-smi that can show memory usage, GPU utilization and temperature of GPU. There also is a list of compute processes and few more options but my graphic card (GeForce 9600 GT) is not fully supported.



                            Sun May 13 20:02:49 2012       
                            +------------------------------------------------------+
                            | NVIDIA-SMI 3.295.40 Driver Version: 295.40 |
                            |-------------------------------+----------------------+----------------------+
                            | Nb. Name | Bus Id Disp. | Volatile ECC SB / DB |
                            | Fan Temp Power Usage /Cap | Memory Usage | GPU Util. Compute M. |
                            |===============================+======================+======================|
                            | 0. GeForce 9600 GT | 0000:01:00.0 N/A | N/A N/A |
                            | 0% 51 C N/A N/A / N/A | 90% 459MB / 511MB | N/A Default |
                            |-------------------------------+----------------------+----------------------|
                            | Compute processes: GPU Memory |
                            | GPU PID Process name Usage |
                            |=============================================================================|
                            | 0. Not Supported |
                            +-----------------------------------------------------------------------------+





                            share|improve this answer

















                            • 1




                              My ION chip does not show usage, either. :/
                              – Raphael
                              Jun 10 '12 at 22:15






                            • 86




                              watch -n 0.5 nvidia-smi, will keep the output updated without filling your terminal with output.
                              – Bar
                              Jul 14 '16 at 18:26






                            • 15




                              @Bar Good tip. watch -d -n 0.5 nvidia-smi will be even better.
                              – zeekvfu
                              Jan 17 at 16:27






                            • 1




                              @zeekvfu I think it'd be better to explain what does the -d flag do
                              – donlucacorleone
                              Oct 10 at 14:41






                            • 2




                              @donlucacorleone man watch tells us the -d flag highlights differences between the outputs, so it can aid in highlighting which metrics are changing over time.
                              – David Kaczynski
                              Oct 21 at 2:56
















                            189














                            For Nvidia GPUs there is a tool nvidia-smi that can show memory usage, GPU utilization and temperature of GPU. There also is a list of compute processes and few more options but my graphic card (GeForce 9600 GT) is not fully supported.



                            Sun May 13 20:02:49 2012       
                            +------------------------------------------------------+
                            | NVIDIA-SMI 3.295.40 Driver Version: 295.40 |
                            |-------------------------------+----------------------+----------------------+
                            | Nb. Name | Bus Id Disp. | Volatile ECC SB / DB |
                            | Fan Temp Power Usage /Cap | Memory Usage | GPU Util. Compute M. |
                            |===============================+======================+======================|
                            | 0. GeForce 9600 GT | 0000:01:00.0 N/A | N/A N/A |
                            | 0% 51 C N/A N/A / N/A | 90% 459MB / 511MB | N/A Default |
                            |-------------------------------+----------------------+----------------------|
                            | Compute processes: GPU Memory |
                            | GPU PID Process name Usage |
                            |=============================================================================|
                            | 0. Not Supported |
                            +-----------------------------------------------------------------------------+





                            share|improve this answer

















                            • 1




                              My ION chip does not show usage, either. :/
                              – Raphael
                              Jun 10 '12 at 22:15






                            • 86




                              watch -n 0.5 nvidia-smi, will keep the output updated without filling your terminal with output.
                              – Bar
                              Jul 14 '16 at 18:26






                            • 15




                              @Bar Good tip. watch -d -n 0.5 nvidia-smi will be even better.
                              – zeekvfu
                              Jan 17 at 16:27






                            • 1




                              @zeekvfu I think it'd be better to explain what does the -d flag do
                              – donlucacorleone
                              Oct 10 at 14:41






                            • 2




                              @donlucacorleone man watch tells us the -d flag highlights differences between the outputs, so it can aid in highlighting which metrics are changing over time.
                              – David Kaczynski
                              Oct 21 at 2:56














                            189












                            189








                            189






                            For Nvidia GPUs there is a tool nvidia-smi that can show memory usage, GPU utilization and temperature of GPU. There also is a list of compute processes and few more options but my graphic card (GeForce 9600 GT) is not fully supported.



                            Sun May 13 20:02:49 2012       
                            +------------------------------------------------------+
                            | NVIDIA-SMI 3.295.40 Driver Version: 295.40 |
                            |-------------------------------+----------------------+----------------------+
                            | Nb. Name | Bus Id Disp. | Volatile ECC SB / DB |
                            | Fan Temp Power Usage /Cap | Memory Usage | GPU Util. Compute M. |
                            |===============================+======================+======================|
                            | 0. GeForce 9600 GT | 0000:01:00.0 N/A | N/A N/A |
                            | 0% 51 C N/A N/A / N/A | 90% 459MB / 511MB | N/A Default |
                            |-------------------------------+----------------------+----------------------|
                            | Compute processes: GPU Memory |
                            | GPU PID Process name Usage |
                            |=============================================================================|
                            | 0. Not Supported |
                            +-----------------------------------------------------------------------------+





                            share|improve this answer












                            For Nvidia GPUs there is a tool nvidia-smi that can show memory usage, GPU utilization and temperature of GPU. There also is a list of compute processes and few more options but my graphic card (GeForce 9600 GT) is not fully supported.



                            Sun May 13 20:02:49 2012       
                            +------------------------------------------------------+
                            | NVIDIA-SMI 3.295.40 Driver Version: 295.40 |
                            |-------------------------------+----------------------+----------------------+
                            | Nb. Name | Bus Id Disp. | Volatile ECC SB / DB |
                            | Fan Temp Power Usage /Cap | Memory Usage | GPU Util. Compute M. |
                            |===============================+======================+======================|
                            | 0. GeForce 9600 GT | 0000:01:00.0 N/A | N/A N/A |
                            | 0% 51 C N/A N/A / N/A | 90% 459MB / 511MB | N/A Default |
                            |-------------------------------+----------------------+----------------------|
                            | Compute processes: GPU Memory |
                            | GPU PID Process name Usage |
                            |=============================================================================|
                            | 0. Not Supported |
                            +-----------------------------------------------------------------------------+






                            share|improve this answer












                            share|improve this answer



                            share|improve this answer










                            answered May 13 '12 at 18:05









                            pbm

                            16.9k52847




                            16.9k52847








                            • 1




                              My ION chip does not show usage, either. :/
                              – Raphael
                              Jun 10 '12 at 22:15






                            • 86




                              watch -n 0.5 nvidia-smi, will keep the output updated without filling your terminal with output.
                              – Bar
                              Jul 14 '16 at 18:26






                            • 15




                              @Bar Good tip. watch -d -n 0.5 nvidia-smi will be even better.
                              – zeekvfu
                              Jan 17 at 16:27






                            • 1




                              @zeekvfu I think it'd be better to explain what does the -d flag do
                              – donlucacorleone
                              Oct 10 at 14:41






                            • 2




                              @donlucacorleone man watch tells us the -d flag highlights differences between the outputs, so it can aid in highlighting which metrics are changing over time.
                              – David Kaczynski
                              Oct 21 at 2:56














                            • 1




                              My ION chip does not show usage, either. :/
                              – Raphael
                              Jun 10 '12 at 22:15






                            • 86




                              watch -n 0.5 nvidia-smi, will keep the output updated without filling your terminal with output.
                              – Bar
                              Jul 14 '16 at 18:26






                            • 15




                              @Bar Good tip. watch -d -n 0.5 nvidia-smi will be even better.
                              – zeekvfu
                              Jan 17 at 16:27






                            • 1




                              @zeekvfu I think it'd be better to explain what does the -d flag do
                              – donlucacorleone
                              Oct 10 at 14:41






                            • 2




                              @donlucacorleone man watch tells us the -d flag highlights differences between the outputs, so it can aid in highlighting which metrics are changing over time.
                              – David Kaczynski
                              Oct 21 at 2:56








                            1




                            1




                            My ION chip does not show usage, either. :/
                            – Raphael
                            Jun 10 '12 at 22:15




                            My ION chip does not show usage, either. :/
                            – Raphael
                            Jun 10 '12 at 22:15




                            86




                            86




                            watch -n 0.5 nvidia-smi, will keep the output updated without filling your terminal with output.
                            – Bar
                            Jul 14 '16 at 18:26




                            watch -n 0.5 nvidia-smi, will keep the output updated without filling your terminal with output.
                            – Bar
                            Jul 14 '16 at 18:26




                            15




                            15




                            @Bar Good tip. watch -d -n 0.5 nvidia-smi will be even better.
                            – zeekvfu
                            Jan 17 at 16:27




                            @Bar Good tip. watch -d -n 0.5 nvidia-smi will be even better.
                            – zeekvfu
                            Jan 17 at 16:27




                            1




                            1




                            @zeekvfu I think it'd be better to explain what does the -d flag do
                            – donlucacorleone
                            Oct 10 at 14:41




                            @zeekvfu I think it'd be better to explain what does the -d flag do
                            – donlucacorleone
                            Oct 10 at 14:41




                            2




                            2




                            @donlucacorleone man watch tells us the -d flag highlights differences between the outputs, so it can aid in highlighting which metrics are changing over time.
                            – David Kaczynski
                            Oct 21 at 2:56




                            @donlucacorleone man watch tells us the -d flag highlights differences between the outputs, so it can aid in highlighting which metrics are changing over time.
                            – David Kaczynski
                            Oct 21 at 2:56













                            56














                            For linux, use nvidia-smi -l 1 will continually give you the gpu usage info, with in refresh interval of 1 second.






                            share|improve this answer

















                            • 63




                              I prefer to use watch -n 1 nvidia-smi to obtain continuous updates without filling the terminal with output
                              – ali_m
                              Jan 27 '16 at 23:59










                            • Using watch means your starting a new process every second to poll the cards. Better to do -l, and not every second, I'd suggest every minute or every 5 minutes.
                              – Mick T
                              Apr 19 at 15:55
















                            56














                            For linux, use nvidia-smi -l 1 will continually give you the gpu usage info, with in refresh interval of 1 second.






                            share|improve this answer

















                            • 63




                              I prefer to use watch -n 1 nvidia-smi to obtain continuous updates without filling the terminal with output
                              – ali_m
                              Jan 27 '16 at 23:59










                            • Using watch means your starting a new process every second to poll the cards. Better to do -l, and not every second, I'd suggest every minute or every 5 minutes.
                              – Mick T
                              Apr 19 at 15:55














                            56












                            56








                            56






                            For linux, use nvidia-smi -l 1 will continually give you the gpu usage info, with in refresh interval of 1 second.






                            share|improve this answer












                            For linux, use nvidia-smi -l 1 will continually give you the gpu usage info, with in refresh interval of 1 second.







                            share|improve this answer












                            share|improve this answer



                            share|improve this answer










                            answered Jun 4 '13 at 17:10









                            qed

                            99641119




                            99641119








                            • 63




                              I prefer to use watch -n 1 nvidia-smi to obtain continuous updates without filling the terminal with output
                              – ali_m
                              Jan 27 '16 at 23:59










                            • Using watch means your starting a new process every second to poll the cards. Better to do -l, and not every second, I'd suggest every minute or every 5 minutes.
                              – Mick T
                              Apr 19 at 15:55














                            • 63




                              I prefer to use watch -n 1 nvidia-smi to obtain continuous updates without filling the terminal with output
                              – ali_m
                              Jan 27 '16 at 23:59










                            • Using watch means your starting a new process every second to poll the cards. Better to do -l, and not every second, I'd suggest every minute or every 5 minutes.
                              – Mick T
                              Apr 19 at 15:55








                            63




                            63




                            I prefer to use watch -n 1 nvidia-smi to obtain continuous updates without filling the terminal with output
                            – ali_m
                            Jan 27 '16 at 23:59




                            I prefer to use watch -n 1 nvidia-smi to obtain continuous updates without filling the terminal with output
                            – ali_m
                            Jan 27 '16 at 23:59












                            Using watch means your starting a new process every second to poll the cards. Better to do -l, and not every second, I'd suggest every minute or every 5 minutes.
                            – Mick T
                            Apr 19 at 15:55




                            Using watch means your starting a new process every second to poll the cards. Better to do -l, and not every second, I'd suggest every minute or every 5 minutes.
                            – Mick T
                            Apr 19 at 15:55











                            49














                            For Intel GPU's there exists the intel-gpu-tools from http://intellinuxgraphics.org/ project, which brings the command intel_gpu_top (amongst other things). It is similar to top and htop, but specifically for the Intel GPU.



                               render busy:  18%: ███▋                                   render space: 39/131072
                            bitstream busy: 0%: bitstream space: 0/131072
                            blitter busy: 28%: █████▋ blitter space: 28/131072

                            task percent busy
                            GAM: 33%: ██████▋ vert fetch: 0 (0/sec)
                            GAFS: 3%: ▋ prim fetch: 0 (0/sec)
                            VS: 0%: VS invocations: 559188 (150/sec)
                            SF: 0%: GS invocations: 0 (0/sec)
                            VF: 0%: GS prims: 0 (0/sec)
                            DS: 0%: CL invocations: 186396 (50/sec)
                            CL: 0%: CL prims: 186396 (50/sec)
                            SOL: 0%: PS invocations: 8191776208 (38576436/sec)
                            GS: 0%: PS depth pass: 8158502721 (38487525/sec)
                            HS: 0%:
                            TE: 0%:
                            GAFM: 0%:
                            SVG: 0%:





                            share|improve this answer




























                              49














                              For Intel GPU's there exists the intel-gpu-tools from http://intellinuxgraphics.org/ project, which brings the command intel_gpu_top (amongst other things). It is similar to top and htop, but specifically for the Intel GPU.



                                 render busy:  18%: ███▋                                   render space: 39/131072
                              bitstream busy: 0%: bitstream space: 0/131072
                              blitter busy: 28%: █████▋ blitter space: 28/131072

                              task percent busy
                              GAM: 33%: ██████▋ vert fetch: 0 (0/sec)
                              GAFS: 3%: ▋ prim fetch: 0 (0/sec)
                              VS: 0%: VS invocations: 559188 (150/sec)
                              SF: 0%: GS invocations: 0 (0/sec)
                              VF: 0%: GS prims: 0 (0/sec)
                              DS: 0%: CL invocations: 186396 (50/sec)
                              CL: 0%: CL prims: 186396 (50/sec)
                              SOL: 0%: PS invocations: 8191776208 (38576436/sec)
                              GS: 0%: PS depth pass: 8158502721 (38487525/sec)
                              HS: 0%:
                              TE: 0%:
                              GAFM: 0%:
                              SVG: 0%:





                              share|improve this answer


























                                49












                                49








                                49






                                For Intel GPU's there exists the intel-gpu-tools from http://intellinuxgraphics.org/ project, which brings the command intel_gpu_top (amongst other things). It is similar to top and htop, but specifically for the Intel GPU.



                                   render busy:  18%: ███▋                                   render space: 39/131072
                                bitstream busy: 0%: bitstream space: 0/131072
                                blitter busy: 28%: █████▋ blitter space: 28/131072

                                task percent busy
                                GAM: 33%: ██████▋ vert fetch: 0 (0/sec)
                                GAFS: 3%: ▋ prim fetch: 0 (0/sec)
                                VS: 0%: VS invocations: 559188 (150/sec)
                                SF: 0%: GS invocations: 0 (0/sec)
                                VF: 0%: GS prims: 0 (0/sec)
                                DS: 0%: CL invocations: 186396 (50/sec)
                                CL: 0%: CL prims: 186396 (50/sec)
                                SOL: 0%: PS invocations: 8191776208 (38576436/sec)
                                GS: 0%: PS depth pass: 8158502721 (38487525/sec)
                                HS: 0%:
                                TE: 0%:
                                GAFM: 0%:
                                SVG: 0%:





                                share|improve this answer














                                For Intel GPU's there exists the intel-gpu-tools from http://intellinuxgraphics.org/ project, which brings the command intel_gpu_top (amongst other things). It is similar to top and htop, but specifically for the Intel GPU.



                                   render busy:  18%: ███▋                                   render space: 39/131072
                                bitstream busy: 0%: bitstream space: 0/131072
                                blitter busy: 28%: █████▋ blitter space: 28/131072

                                task percent busy
                                GAM: 33%: ██████▋ vert fetch: 0 (0/sec)
                                GAFS: 3%: ▋ prim fetch: 0 (0/sec)
                                VS: 0%: VS invocations: 559188 (150/sec)
                                SF: 0%: GS invocations: 0 (0/sec)
                                VF: 0%: GS prims: 0 (0/sec)
                                DS: 0%: CL invocations: 186396 (50/sec)
                                CL: 0%: CL prims: 186396 (50/sec)
                                SOL: 0%: PS invocations: 8191776208 (38576436/sec)
                                GS: 0%: PS depth pass: 8158502721 (38487525/sec)
                                HS: 0%:
                                TE: 0%:
                                GAFM: 0%:
                                SVG: 0%:






                                share|improve this answer














                                share|improve this answer



                                share|improve this answer








                                edited Dec 5 '13 at 1:04









                                Cristian Ciupitu

                                2,07911621




                                2,07911621










                                answered May 13 '12 at 14:05









                                jippie

                                8,88172956




                                8,88172956























                                    46














                                    Recently I have written a simple command-line utility called gpustat (which is a wrapper of nvidia-smi) : please take a look at https://github.com/wookayin/gpustat.








                                    share|improve this answer


























                                      46














                                      Recently I have written a simple command-line utility called gpustat (which is a wrapper of nvidia-smi) : please take a look at https://github.com/wookayin/gpustat.








                                      share|improve this answer
























                                        46












                                        46








                                        46






                                        Recently I have written a simple command-line utility called gpustat (which is a wrapper of nvidia-smi) : please take a look at https://github.com/wookayin/gpustat.








                                        share|improve this answer












                                        Recently I have written a simple command-line utility called gpustat (which is a wrapper of nvidia-smi) : please take a look at https://github.com/wookayin/gpustat.









                                        share|improve this answer












                                        share|improve this answer



                                        share|improve this answer










                                        answered Jun 9 '16 at 6:50









                                        Jongwook Choi

                                        57644




                                        57644























                                            29














                                            nvidia-smi does not work on some linux machines (returns N/A for many properties). You can use nvidia-settings instead (this is also what mat kelcey used in his python script).



                                            nvidia-settings -q GPUUtilization -q useddedicatedgpumemory


                                            You can also use:



                                            watch -n0.1 "nvidia-settings -q GPUUtilization -q useddedicatedgpumemory"


                                            for continuous monitoring.






                                            share|improve this answer



















                                            • 4




                                              Glad this wasn't a comment. It's exactly what I was searching for when I came across this question.
                                              – Score_Under
                                              Jun 20 '15 at 0:24










                                            • Thanks, this is what worked for me, since I have a GeForce card which is not supported by nvidia-smi.
                                              – alexg
                                              Dec 22 '15 at 9:23






                                            • 4




                                              You can do nvidia-settings -q all to see what other parameters you can monitor. I'm monitoring GPUCurrentProcessorClockFreqs and GPUCurrentClockFreqs.
                                              – alexg
                                              Dec 22 '15 at 9:34






                                            • 1




                                              Thanks man, good idea to query all, since each card may have different strings to monitor!
                                              – ruoho ruotsi
                                              Feb 2 '16 at 19:08
















                                            29














                                            nvidia-smi does not work on some linux machines (returns N/A for many properties). You can use nvidia-settings instead (this is also what mat kelcey used in his python script).



                                            nvidia-settings -q GPUUtilization -q useddedicatedgpumemory


                                            You can also use:



                                            watch -n0.1 "nvidia-settings -q GPUUtilization -q useddedicatedgpumemory"


                                            for continuous monitoring.






                                            share|improve this answer



















                                            • 4




                                              Glad this wasn't a comment. It's exactly what I was searching for when I came across this question.
                                              – Score_Under
                                              Jun 20 '15 at 0:24










                                            • Thanks, this is what worked for me, since I have a GeForce card which is not supported by nvidia-smi.
                                              – alexg
                                              Dec 22 '15 at 9:23






                                            • 4




                                              You can do nvidia-settings -q all to see what other parameters you can monitor. I'm monitoring GPUCurrentProcessorClockFreqs and GPUCurrentClockFreqs.
                                              – alexg
                                              Dec 22 '15 at 9:34






                                            • 1




                                              Thanks man, good idea to query all, since each card may have different strings to monitor!
                                              – ruoho ruotsi
                                              Feb 2 '16 at 19:08














                                            29












                                            29








                                            29






                                            nvidia-smi does not work on some linux machines (returns N/A for many properties). You can use nvidia-settings instead (this is also what mat kelcey used in his python script).



                                            nvidia-settings -q GPUUtilization -q useddedicatedgpumemory


                                            You can also use:



                                            watch -n0.1 "nvidia-settings -q GPUUtilization -q useddedicatedgpumemory"


                                            for continuous monitoring.






                                            share|improve this answer














                                            nvidia-smi does not work on some linux machines (returns N/A for many properties). You can use nvidia-settings instead (this is also what mat kelcey used in his python script).



                                            nvidia-settings -q GPUUtilization -q useddedicatedgpumemory


                                            You can also use:



                                            watch -n0.1 "nvidia-settings -q GPUUtilization -q useddedicatedgpumemory"


                                            for continuous monitoring.







                                            share|improve this answer














                                            share|improve this answer



                                            share|improve this answer








                                            edited Jun 16 '16 at 7:11









                                            Pierre.Vriens

                                            96651015




                                            96651015










                                            answered May 5 '15 at 13:40









                                            Jonathan

                                            39132




                                            39132








                                            • 4




                                              Glad this wasn't a comment. It's exactly what I was searching for when I came across this question.
                                              – Score_Under
                                              Jun 20 '15 at 0:24










                                            • Thanks, this is what worked for me, since I have a GeForce card which is not supported by nvidia-smi.
                                              – alexg
                                              Dec 22 '15 at 9:23






                                            • 4




                                              You can do nvidia-settings -q all to see what other parameters you can monitor. I'm monitoring GPUCurrentProcessorClockFreqs and GPUCurrentClockFreqs.
                                              – alexg
                                              Dec 22 '15 at 9:34






                                            • 1




                                              Thanks man, good idea to query all, since each card may have different strings to monitor!
                                              – ruoho ruotsi
                                              Feb 2 '16 at 19:08














                                            For completeness, AMD has two options:





                                            1. fglrx (closed source drivers).



                                              $ aticonfig --odgc --odgt



                                            2. mesa (open source drivers), you can use RadeonTop.




                                              View your GPU utilization, both for the total activity percent and individual blocks.









– answered Nov 28 '13 at 21:52 by kevinf (edited Nov 29 '13 at 19:38)
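If you want those AMD readings refreshed continuously, like the nvidia examples above, a small sketch could look like this (assuming either the fglrx aticonfig tool or radeontop installed from your distribution's repositories):

    #!/usr/bin/env bash
    # Continuously refresh AMD GPU stats.
    if command -v aticonfig >/dev/null 2>&1; then
        watch -n 1 'aticonfig --odgc --odgt'   # clocks/activity and temperature
    else
        radeontop                              # ncurses view of per-block utilization
    fi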























For Linux, I use this HTOP-like tool that I wrote myself. It monitors and gives an overview of the GPU temperature as well as the core / VRAM / PCI-E / memory-bus usage. It does not monitor what is running on the GPU, though.

gmonitor

[screenshot of gmonitor]

– answered Feb 6 '17 at 12:54 by Mountassir El Hafi
• nvidia-settings requires a running X11, which is not always the case. – Victor Sergienko, Jul 8 '17 at 0:57

• works for me with no hassle! – Hennadii Madan, Jul 14 '17 at 6:29
I have a GeForce 1060 GTX video card and found that the following command gives me info about card utilization, temperature, fan speed and power consumption:

    $ nvidia-smi --format=csv --query-gpu=power.draw,utilization.gpu,fan.speed,temperature.gpu

You can see the list of all query options with:

    $ nvidia-smi --help-query-gpu

– answered Apr 14 '17 at 10:26 by lyubomir
• It would be worth adding memory.used or (memory.free) as well. – Zoltan, Sep 23 at 9:10
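Building on that, nvidia-smi can also loop and write the samples itself, which is handy for logging over time. A minimal sketch, assuming a reasonably recent nvidia-smi (the -l/--loop interval, the csv,noheader format and the extra query fields are standard options, but check --help-query-gpu on your version; the log path is only an example):

    #!/usr/bin/env bash
    # Append a timestamped CSV sample every second; stop with Ctrl-C.
    nvidia-smi \
        --query-gpu=timestamp,utilization.gpu,memory.used,memory.total,temperature.gpu,power.draw \
        --format=csv,noheader \
        -l 1 >> /tmp/gpu-usage.csv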
















For OS X

Including Mountain Lion: iStat Menus

Excluding Mountain Lion: atMonitor

The last version of atMonitor to support GPU-related features is atMonitor 2.7.1 – and the link to 2.7.1 delivers 2.7b.

For the more recent version of the app, the atMonitor FAQ explains:

    To make atMonitor compatible with MacOS 10.8 we have removed all GPU related features.

I experimented with 2.7b (a.k.a. 2.7.1) on Mountain Lion with a MacBookPro5,2 and an NVIDIA GeForce 9600M GT. The app ran for a few seconds before quitting; it showed temperature but not usage.

[screenshot of atMonitor 2.7b on Mountain Lion]

– answered Dec 23 '12 at 15:34 by Graham Perrin (edited Nov 30 '13 at 13:03)
Glances has a plugin which shows GPU utilization and memory usage.

[screenshot of the Glances GPU plugin]

http://glances.readthedocs.io/en/stable/aoa/gpu.html

It uses the nvidia-ml-py3 library: https://pypi.python.org/pypi/nvidia-ml-py3

– answered Nov 6 '17 at 21:12 by coreindustries
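A minimal way to try it, assuming pip-installable packages named glances and nvidia-ml-py3 as the links above suggest (the GPU panel should appear automatically once the NVML bindings can be imported):

    # Install Glances together with the NVML bindings its GPU plugin relies on,
    # then launch it; the GPU section shows up alongside CPU/memory/network.
    pip install --user glances nvidia-ml-py3
    glances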


























For NVIDIA on Linux I use the following Python script, which takes an optional delay and repeat count like iostat and vmstat:

https://gist.github.com/matpalm/9c0c7c6a6f3681a0d39d

    $ gpu_stat.py 1 2
    {"util":{"PCIe":"0", "memory":"10", "video":"0", "graphics":"11"}, "used_mem":"161", "time": 1424839016}
    {"util":{"PCIe":"0", "memory":"10", "video":"0", "graphics":"9"}, "used_mem":"161", "time":1424839018}

– answered Feb 25 '15 at 4:42 by mat kelcey
I have had processes terminate (probably killed or crashed) yet continue to use resources without being listed in nvidia-smi. Usually these processes were just holding GPU memory.

If you think you have a process using resources on a GPU that is not being shown in nvidia-smi, you can run this command to double-check. It will show you which processes are using your GPUs:

    sudo fuser -v /dev/nvidia*

This works on EL7; Ubuntu or other distributions might have their NVIDIA devices listed under another name/location.

– answered Sep 18 '17 at 18:27 by Rick Smith (edited Sep 18 '17 at 18:38)
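To go from that listing to actually cleaning up a stale client, a small sketch like the following can help. It uses lsof as an alternative to fuser, and leaves the kill step manual so the PIDs can be reviewed first; device paths may differ per distribution:

    #!/usr/bin/env bash
    # List every process holding an NVIDIA device node open (PID and command),
    # skipping the lsof header line.
    sudo lsof /dev/nvidia* 2>/dev/null | awk 'NR>1 {print $2, $1}' | sort -u
    # Review the PIDs printed above, then free the memory manually, e.g.:
    #   sudo kill <pid>      # or kill -9 as a last resort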




























The following function appends information such as PID, user name, CPU usage, memory usage, GPU memory usage, program arguments and run time of the processes that are running on the GPU to the output of nvidia-smi:

    function better-nvidia-smi () {
        nvidia-smi
        # Join the per-process GPU memory list (field 1: PID) with the ps
        # listing (field 3: PID), then align the columns.
        join -1 1 -2 3 \
            <(nvidia-smi --query-compute-apps=pid,used_memory \
                         --format=csv \
              | sed "s/ //g" | sed "s/,/ /g" \
              | awk 'NR<=1 {print toupper($0)} NR>1 {print $0}' \
              | sed "/\[NotSupported\]/d" \
              | awk 'NR<=1{print $0;next}{print $0| "sort -k1"}') \
            <(ps -a -o user,pgrp,pid,pcpu,pmem,time,command \
              | awk 'NR<=1{print $0;next}{print $0| "sort -k3"}') \
        | column -t
    }

Example output:

    $ better-nvidia-smi
    Fri Sep 29 16:52:58 2017
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 378.13                 Driver Version: 378.13                    |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GeForce GT 730      Off  | 0000:01:00.0     N/A |                  N/A |
    | 32%   49C    P8    N/A /  N/A |    872MiB /   976MiB |     N/A      Default |
    +-------------------------------+----------------------+----------------------+
    |   1  Graphics Device     Off  | 0000:06:00.0     Off |                  N/A |
    | 23%   35C    P8    17W / 250W |    199MiB / 11172MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID  Type  Process name                               Usage      |
    |=============================================================================|
    |    0                  Not Supported                                         |
    |    1      5113     C  python                                        187MiB  |
    +-----------------------------------------------------------------------------+
    PID   USED_GPU_MEMORY[MIB]  USER    PGRP  %CPU  %MEM  TIME      COMMAND
    9178  187MiB                tmborn  9175  129   2.6   04:32:19  ../path/to/python script.py args 42





– answered Sep 29 '17 at 14:36 by Lenar Hoyt (edited Sep 29 '17 at 15:02)

• Careful, I don't think the pmem given by ps takes into account the total memory of the GPU but that of the CPU, because ps is not "Nvidia GPU" aware. – SebMa, May 29 at 14:09
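If you only need a quick per-process view without the full join, a simpler sketch along the same lines is possible. The query fields are the ones used in the answer above; the while/read parsing and the choice of ps columns are assumptions of this sketch:

    #!/usr/bin/env bash
    # For every compute process nvidia-smi reports, print its GPU memory next
    # to the usual ps columns (user, %cpu, %mem, elapsed time, command line).
    nvidia-smi --query-compute-apps=pid,used_memory --format=csv,noheader \
    | while IFS=', ' read -r pid gpu_mem unit; do
        echo "== PID $pid  GPU memory: $gpu_mem $unit"
        ps -o user=,pcpu=,pmem=,etime=,args= -p "$pid"
    done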


















                                                                                    1














                                                                                    The following function appends information such as PID, user name, CPU usage, memory usage, GPU memory usage, program arguments and run time of processes that are being run on the GPU, to the output of nvidia-smi:



                                                                                    function better-nvidia-smi () {
                                                                                    nvidia-smi
                                                                                    join -1 1 -2 3
                                                                                    <(nvidia-smi --query-compute-apps=pid,used_memory
                                                                                    --format=csv
                                                                                    | sed "s/ //g" | sed "s/,/ /g"
                                                                                    | awk 'NR<=1 {print toupper($0)} NR>1 {print $0}'
                                                                                    | sed "/[NotSupported]/d"
                                                                                    | awk 'NR<=1{print $0;next}{print $0| "sort -k1"}')
                                                                                    <(ps -a -o user,pgrp,pid,pcpu,pmem,time,command
                                                                                    | awk 'NR<=1{print $0;next}{print $0| "sort -k3"}')
                                                                                    | column -t
                                                                                    }


                                                                                    Example output:



                                                                                    $ better-nvidia-smi
                                                                                    Fri Sep 29 16:52:58 2017
                                                                                    +-----------------------------------------------------------------------------+
                                                                                    | NVIDIA-SMI 378.13 Driver Version: 378.13 |
                                                                                    |-------------------------------+----------------------+----------------------+
                                                                                    | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
                                                                                    | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
                                                                                    |===============================+======================+======================|
                                                                                    | 0 GeForce GT 730 Off | 0000:01:00.0 N/A | N/A |
                                                                                    | 32% 49C P8 N/A / N/A | 872MiB / 976MiB | N/A Default |
                                                                                    +-------------------------------+----------------------+----------------------+
                                                                                    | 1 Graphics Device Off | 0000:06:00.0 Off | N/A |
                                                                                    | 23% 35C P8 17W / 250W | 199MiB / 11172MiB | 0% Default |
                                                                                    +-------------------------------+----------------------+----------------------+

                                                                                    +-----------------------------------------------------------------------------+
                                                                                    | Processes: GPU Memory |
                                                                                    | GPU PID Type Process name Usage |
                                                                                    |=============================================================================|
                                                                                    | 0 Not Supported |
                                                                                    | 1 5113 C python 187MiB |
                                                                                    +-----------------------------------------------------------------------------+
                                                                                    PID USED_GPU_MEMORY[MIB] USER PGRP %CPU %MEM TIME COMMAND
                                                                                    9178 187MiB tmborn 9175 129 2.6 04:32:19 ../path/to/python script.py args 42





                                                                                    share|improve this answer























                                                                                    • Carefull, I don't think the pmem given by ps takes into account the total memory of the GPU but that of the CPU because ps is not "Nvidia GPU" aware
                                                                                      – SebMa
                                                                                      May 29 at 14:09
















                                                                                    1












                                                                                    1








                                                                                    1






                                                                                    The following function appends information such as PID, user name, CPU usage, memory usage, GPU memory usage, program arguments and run time of processes that are being run on the GPU, to the output of nvidia-smi:



                                                                                    function better-nvidia-smi () {
                                                                                    nvidia-smi
                                                                                    join -1 1 -2 3
                                                                                    <(nvidia-smi --query-compute-apps=pid,used_memory
                                                                                    --format=csv
                                                                                    | sed "s/ //g" | sed "s/,/ /g"
                                                                                    | awk 'NR<=1 {print toupper($0)} NR>1 {print $0}'
                                                                                    | sed "/[NotSupported]/d"
                                                                                    | awk 'NR<=1{print $0;next}{print $0| "sort -k1"}')
                                                                                    <(ps -a -o user,pgrp,pid,pcpu,pmem,time,command
                                                                                    | awk 'NR<=1{print $0;next}{print $0| "sort -k3"}')
                                                                                    | column -t
                                                                                    }


                                                                                    Example output:



                                                                                    $ better-nvidia-smi
                                                                                    Fri Sep 29 16:52:58 2017
                                                                                    +-----------------------------------------------------------------------------+
                                                                                    | NVIDIA-SMI 378.13 Driver Version: 378.13 |
                                                                                    |-------------------------------+----------------------+----------------------+
                                                                                    | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
                                                                                    | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
                                                                                    |===============================+======================+======================|
                                                                                    | 0 GeForce GT 730 Off | 0000:01:00.0 N/A | N/A |
                                                                                    | 32% 49C P8 N/A / N/A | 872MiB / 976MiB | N/A Default |
                                                                                    +-------------------------------+----------------------+----------------------+
                                                                                    | 1 Graphics Device Off | 0000:06:00.0 Off | N/A |
                                                                                    | 23% 35C P8 17W / 250W | 199MiB / 11172MiB | 0% Default |
                                                                                    +-------------------------------+----------------------+----------------------+

                                                                                    +-----------------------------------------------------------------------------+
                                                                                    | Processes: GPU Memory |
                                                                                    | GPU PID Type Process name Usage |
                                                                                    |=============================================================================|
                                                                                    | 0 Not Supported |
                                                                                    | 1 5113 C python 187MiB |
                                                                                    +-----------------------------------------------------------------------------+
                                                                                    PID USED_GPU_MEMORY[MIB] USER PGRP %CPU %MEM TIME COMMAND
                                                                                    9178 187MiB tmborn 9175 129 2.6 04:32:19 ../path/to/python script.py args 42





                                                                                    share|improve this answer














                                                                                    The following function appends information such as PID, user name, CPU usage, memory usage, GPU memory usage, program arguments and run time of processes that are being run on the GPU, to the output of nvidia-smi:



                                                                                    function better-nvidia-smi () {
                                                                                    nvidia-smi
                                                                                    join -1 1 -2 3
                                                                                    <(nvidia-smi --query-compute-apps=pid,used_memory
                                                                                    --format=csv
                                                                                    | sed "s/ //g" | sed "s/,/ /g"
                                                                                    | awk 'NR<=1 {print toupper($0)} NR>1 {print $0}'
                                                                                    | sed "/[NotSupported]/d"
                                                                                    | awk 'NR<=1{print $0;next}{print $0| "sort -k1"}')
                                                                                    <(ps -a -o user,pgrp,pid,pcpu,pmem,time,command
                                                                                    | awk 'NR<=1{print $0;next}{print $0| "sort -k3"}')
                                                                                    | column -t
                                                                                    }


Example output:

$ better-nvidia-smi
Fri Sep 29 16:52:58 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 378.13                 Driver Version: 378.13                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:01:00.0     N/A |                  N/A |
| 32%   49C    P8    N/A /  N/A |    872MiB /   976MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
|   1  Graphics Device     Off  | 0000:06:00.0     Off |                  N/A |
| 23%   35C    P8    17W / 250W |    199MiB / 11172MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
|    1      5113    C   python                                         187MiB |
+-----------------------------------------------------------------------------+
PID   USED_GPU_MEMORY[MIB]  USER    PGRP  %CPU  %MEM  TIME      COMMAND
9178  187MiB                tmborn  9175  129   2.6   04:32:19  ../path/to/python script.py args 42
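
If only the per-process GPU memory is needed, nvidia-smi can also be queried directly and wrapped in watch. This is a minimal sketch; it assumes the process_name query field is available on your driver (used_memory is the same field the function above relies on):

    # refresh the per-process GPU memory table every second
    watch -n 1 \
      'nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv'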






edited Sep 29 '17 at 15:02
answered Sep 29 '17 at 14:36

Lenar Hoyt












• Careful, I don't think the pmem given by ps takes the memory of the GPU into account, only that of the CPU, because ps is not "Nvidia GPU" aware
  – SebMa
  May 29 at 14:09

































                                                                                    0














You can use nvtop; it's similar to htop, but for NVIDIA GPUs: https://github.com/Syllo/nvtop
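
On recent Debian/Ubuntu releases it may be available as a distribution package; otherwise it is a standard CMake project. The steps below are a sketch based on the project's README (the package name and the build prerequisites, such as cmake, a C compiler and the NVML/ncurses development files, are assumptions, so check the README for your system):

    # try the distribution package first
    sudo apt install nvtop || {
        # fall back to building from source
        git clone https://github.com/Syllo/nvtop.git
        mkdir -p nvtop/build && cd nvtop/build
        cmake ..
        make
        sudo make install
    }
    nvtop   # launches the htop-like GPU monitor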






answered Oct 18 at 10:06

karl71























                                                                                            0














                                                                                            This script is more readable and is designed for easy mods and extensions.



                                                                                            You can replace gnome-terminal with your favorite terminal window program.





#! /bin/bash

# when re-invoked with --guts (by watch, below), just print the two query tables
if [ "$1" = "--guts" ]; then
    echo; echo "    ctrl-c to gracefully close"
    f "$a"
    f "$b"
    exit 0; fi

# easy to customize here using "nvidia-smi --help-query-gpu" as a guide
a='--query-gpu=pstate,memory.used,utilization.memory,utilization.gpu,encoder.stats.sessionCount'
b='--query-gpu=encoder.stats.averageFps,encoder.stats.averageLatency,temperature.gpu,power.draw'
p=0.5    # refresh period in seconds
s=110x9  # view port as width_in_chars x line_count

c="s/^/    /; s/, +/\t/g"        # indent each line and turn ", " separators into tabs
t="`echo '' | tr '\n' '\t'`"     # a literal tab character, used as column's separator
function f() { echo; nvidia-smi --format=csv "$1" | sed -r "$c" | column -t "-s$t" "-o  "; }
export c t a b; export -f f
gnome-terminal --hide-menubar --geometry=$s -- watch -t -n$p "`readlink -f "$0"`" --guts

#


                                                                                            License: GNU GPLv2, TranSeed Research
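
To try it, save the script under any name (gpu-watch.sh below is just a placeholder), make it executable and run it; it opens a new gnome-terminal window that refreshes every 0.5 s:

    chmod +x gpu-watch.sh
    ./gpu-watch.sh    # press ctrl-c in the new window to close it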






answered Dec 17 at 20:16

Douglas Daseeco





























