How to manage cleaning of /tmp better on Hadoop machines
As everyone knows, the contents of /tmp should be deleted after some time.
In our case we have machines (Red Hat 7.2) that are configured as follows.
As we can see, the service that is triggered to clean up /tmp,
systemd-tmpfiles-clean.timer, is activated every 24 hours (1d).
From my machine:
more /lib/systemd/system/systemd-tmpfiles-clean.timer
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
[Unit]
Description=Daily Cleanup of Temporary Directories
Documentation=man:tmpfiles.d(5) man:systemd-tmpfiles(8)
[Timer]
OnBootSec=15min
OnUnitActiveSec=1d
And this is the file that is responsible for the rules.
As I understand it, files/folders matched by these rules will be deleted once they are older than 10 days (please correct me if I am wrong).
The rules are:
more /usr/lib/tmpfiles.d/tmp.conf
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
# See tmpfiles.d(5) for details
# Clear tmp directories separately, to make them easier to override
v /tmp 1777 root root 10d
v /var/tmp 1777 root root 30d
# Exclude namespace mountpoints created with PrivateTmp=yes
x /tmp/systemd-private-%b-*
X /tmp/systemd-private-%b-*/tmp
x /var/tmp/systemd-private-%b-*
X /var/tmp/systemd-private-%b-*/tmp
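To see concretely what an age field like 10d means, here is a small self-contained sandbox demo (the paths and names are made up for illustration). It uses find's mtime test, which is only a rough approximation: systemd-tmpfiles actually considers atime, mtime and ctime when deciding whether an entry is old enough to remove.

```shell
# Create one backdated ("stale") and one fresh entry in a scratch
# directory, then list only the entries modified more than one full
# day ago -- analogous to a "1d" age field in tmpfiles.d.
tmpdir=$(mktemp -d)
mkdir "$tmpdir/stale_resources" "$tmpdir/fresh_resources"
touch -d '2 days ago' "$tmpdir/stale_resources"   # backdate mtime (GNU touch)
stale=$(find "$tmpdir" -mindepth 1 -maxdepth 1 -mtime +0 -printf '%f\n')
echo "$stale"   # only stale_resources is older than the one-day cutoff
rm -rf "$tmpdir"
```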
But because we have a Hadoop cluster, we noticed that /tmp contains thousands of empty folders and files, as well as folders and files with really huge content.
Example:
drwx------ 2 hive hadoop 6 Dec 19 13:54 2d069b18-f07f-4c8b-a7c7-45cd8cfc9d42_resources
drwx------ 2 hive hadoop 6 Dec 19 13:59 ed46a2a0-f142-4bff-9a7b-f2d430aff26d_resources
drwx------ 2 hive hadoop 6 Dec 19 14:04 ce7dc2ca-7a12-4aca-a4ef-87803a33a353_resources
drwx------ 2 hive hadoop 6 Dec 19 14:09 43fd3ce0-01f0-423a-89e5-cfd9f82792e6_resources
drwx------ 2 hive hadoop 6 Dec 19 14:14 f808fe5b-2f27-403f-9704-5d53cba176d3_resources
drwx------ 2 hive hadoop 6 Dec 19 14:19 6ef04ca4-9ab1-43f3-979c-9ba5edb9ccee_resources
drwx------ 2 hive hadoop 6 Dec 19 14:24 387330de-c6f5-4055-9f43-f67d577bd0ed_resources
drwx------ 2 hive hadoop 6 Dec 19 14:29 9517d4d9-8964-41c1-abde-a85f226b38ea_resources
drwx------ 2 hive hadoop 6 Dec 19 14:34 a46a9083-f097-4460-916f-e431f5790bf8_resources
drwx------ 2 hive hadoop 6 Dec 19 14:39 81379a84-17c8-4b24-b69a-d91710868560_resources
drwx------ 2 hive hadoop 6 Dec 19 14:44 4b8ba746-12f5-4caf-b21e-52300b8712a5_resources
drwx------ 2 hive hadoop 6 Dec 19 14:49 b7a2f98b-ecf2-4e9c-a92f-0da31d12a81a_resources
drwx------ 2 hive hadoop 6 Dec 19 14:54 2a745ade-e1a7-421d-9829-c7eb915982ce_resources
drwx------ 2 hive hadoop 6 Dec 19 14:59 9dc1a021-9adf-448b-856d-b14e2cb9812b_resources
drwx------ 2 hive hadoop 6 Dec 19 15:04 5599580d-c664-4f2e-95d3-ebdf479a33b9_resources
drwx------ 2 hive hadoop 6 Dec 19 15:09 d97dfbb5-444a-4401-ba58-d338f1724e68_resources
drwx------ 2 hive hadoop 6 Dec 19 15:14 832cf420-f601-4549-b131-b08853339a39_resources
drwx------ 2 hive hadoop 6 Dec 19 15:19 cd1f10e2-ad4e-4b4e-a3cb-4926ccc5a9c5_resources
drwx------ 2 hive hadoop 6 Dec 19 15:24 19dff3c0-8024-4631-b8da-1d31fea7203f_resources
drwx------ 2 hive hadoop 6 Dec 19 15:29 23528426-b8fb-4d14-8ea9-2fb799fefe51_resources
drwx------ 2 hive hadoop 6 Dec 19 15:34 e3509760-9823-4e30-8d0b-77c5aee80efd_resources
drwx------ 2 hive hadoop 6 Dec 19 15:39 3c157b4d-917c-49ef-86da-b44e310ca30a_resources
drwx------ 2 hive hadoop 6 Dec 19 15:44 b370af30-5323-4ad5-b39e-f02a0dcdc6bb_resources
drwx------ 2 hive hadoop 6 Dec 19 15:49 18a5ea21-30f9-45a8-8774-6d8200ada7ff_resources
drwx------ 2 hive hadoop 6 Dec 19 15:54 ee776a04-f0e8-4295-9872-f8fc6482913e_resources
drwx------ 2 hive hadoop 6 Dec 19 15:59 f5935653-0bf6-4171-895a-558eef8b0773_resources
drwx------ 2 hive hadoop 6 Dec 19 16:04 e80ea30b-c729-48a2-897d-ae7c94a4fa04_resources
drwx------ 2 hive hadoop 6 Dec 19 16:09 fde6f7e4-89bd-41b4-99d3-17204bf66f05_resources
We are worried that /tmp will become full and that services will then be unable to delete its contents.
So we want to delete the folders and files from /tmp according to the following policy:
every folder/file is deleted once it is older than 1 day,
and the cleanup service runs every 1 hour.
So we intend to set the following:
OnUnitActiveSec=1h ( in the file /lib/systemd/system/systemd-tmpfiles-clean.timer )
v /tmp 1777 root root 1d ( in the file /usr/lib/tmpfiles.d/tmp.conf )
Am I right about the new settings?
Secondly, after setting this, do we need to do anything for it to take effect?
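As a sketch of the same change done with overrides instead of editing the shipped files (the drop-in file name below is my own choice; see systemd.unit(5) and tmpfiles.d(5)), the timer interval could be set like this:

```ini
# /etc/systemd/system/systemd-tmpfiles-clean.timer.d/override.conf
# (can be created with: systemctl edit systemd-tmpfiles-clean.timer)
[Timer]
# Timer settings are additive; clear the shipped 1d value first.
OnUnitActiveSec=
OnUnitActiveSec=1h
```

and the age rule like this, keeping in mind that a file of the same name in /etc replaces /usr/lib/tmpfiles.d/tmp.conf entirely, so the /var/tmp and PrivateTmp exclusion lines from the original should be carried over too:

```ini
# /etc/tmpfiles.d/tmp.conf
v /tmp 1777 root root 1d
v /var/tmp 1777 root root 30d
x /tmp/systemd-private-%b-*
X /tmp/systemd-private-%b-*/tmp
x /var/tmp/systemd-private-%b-*
X /var/tmp/systemd-private-%b-*/tmp
```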
linux rhel cron services hadoop
edited Dec 19 '18 at 19:18
tink
asked Dec 19 '18 at 16:31
yael
1 Answer
This combination will certainly work. However, instead of removing everything in /tmp every hour, you are probably better off deleting only the resource files and directories, e.g.
R /tmp/*_resources
Keep in mind that your changes to the systemd and tmpfiles configuration should not be made in /usr or /lib. Instead, place the corresponding overrides in /etc, e.g.
echo 'R /tmp/*_resources' >> /etc/tmpfiles.d/hadoop.conf
cp /lib/systemd/system/systemd-tmpfiles-clean.timer /etc/systemd/system/systemd-tmpfiles-clean.timer
$EDITOR /etc/systemd/system/systemd-tmpfiles-clean.timer
(Note that files in /etc/tmpfiles.d/ must have a .conf suffix to be read, hence hadoop.conf.)
If you change the files in /usr or /lib, you might end up with conflicts during upgrades.
If you have already changed your files, make sure to reload the unit files with systemctl daemon-reload; otherwise systemd won't pick up the change to your timer.
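Putting it together, applying the change might look like the following sequence (a sketch of standard systemd commands; restarting the timer is only needed so the new interval is used for scheduling immediately):

```shell
# Make systemd re-read the edited/overridden unit files:
systemctl daemon-reload
# Restart the timer so the new OnUnitActiveSec interval takes effect now:
systemctl restart systemd-tmpfiles-clean.timer
# Optionally run one cleanup pass immediately with the new tmpfiles rules:
systemd-tmpfiles --clean
# Verify the last/next activation times of the timer:
systemctl list-timers systemd-tmpfiles-clean.timer
```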
Let's say I set OnUnitActiveSec=1h and "v /tmp 1777 root root 1d"; how do I make this configuration take effect?
– yael
Dec 19 '18 at 17:22
I also updated this note in my post
– yael
Dec 19 '18 at 17:29
I am posting a new question regarding that
– yael
Dec 19 '18 at 18:19
@yael you essentially changed your question so much that you've asked a new one. Please refrain from changing the original aspect of a question in the future. Either way, I guess you didn't daemon-reload your configuration yet.
– Zeta
Dec 19 '18 at 18:47
What do you mean by daemon-reload, and how do I do that?
– yael
Dec 19 '18 at 18:52
edited Dec 19 '18 at 20:19
answered Dec 19 '18 at 17:08
Zeta