Fastest way to migrate to a bigger SSD (18.04)
I've seen a few posts about migration with older Ubuntu versions.
What's the fastest way to migrate a 120 GB SSD (80 GB used) to a 1 TB SSD under Ubuntu 18.04?
- Is a live duplicate possible, or is it done via an image?
- How do I extend the new volume on the new SSD?
Because I only have a limited time window once I've started, I'd be happy for suggestions on the fastest way to get the new system up and running again.
partitioning ssd dd
A limited time window for access to hardware? Limited downtime allowed for the server? Limited because there are other high-priority jobs? "Fast" is only fast if you have rehearsed the operation and discovered all your pain points in advance (there will be some!). Murphy's Law applies to all time-critical operations; be prepared with complete backups and install media before you begin.
– user535733
41 mins ago
Is having both disks in the system an option?
– Thorbjørn Ravn Andersen
20 mins ago
No, I don't want to use both disks afterwards. (Limited = because of other jobs that have to run at a certain time.) Murphy: yes, I'm willing to prepare everything as well as possible.
– ssssstut
12 mins ago
asked 3 hours ago by ssssstut, edited 2 hours ago by SurvivalMachine
2 Answers
What about using a live system and
sudo dd if=/dev/sdX1 of=/dev/sdY1
and
sudo resize2fs /dev/sdY1
afterwards to grow the file system?
Here /dev/sdX1 is your old partition and /dev/sdY1 the new one (placeholder names; Linux partition numbers start at 1, not 0).
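A slightly fuller sketch of that procedure, with /dev/sdX1 and /dev/sdY1 still as placeholders; verify the device names with lsblk before running anything, since dd overwrites its target without asking:

# Identify both disks first; dd gives no confirmation prompt.
lsblk -o NAME,SIZE,MODEL

# Copy the old partition onto the new one. A larger block size and
# status=progress make the copy faster and show throughput as it runs.
sudo dd if=/dev/sdX1 of=/dev/sdY1 bs=1M status=progress conv=fsync

# resize2fs refuses to grow a file system that hasn't been checked,
# so run a forced check on the copy first.
sudo e2fsck -f /dev/sdY1

# Grow the ext2/3/4 file system to fill the target partition.
sudo resize2fs /dev/sdY1

Note that resize2fs only grows the file system up to the size of the partition it lives in, so the new partition should already be created at full size (with parted or GParted) before the copy.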
dd: how long would it take to duplicate around 100 GB? resize2fs: does it expand the file system to the maximum available capacity?
– ssssstut
10 mins ago
For a fast migration you have more than a couple of choices (and for speed, don't use plain dd; it copies every block, used or not).
Commonly used tools:
- https://www.acronis.com/en-us/ (should be free for personal use)
- https://clonezilla.org/ (open source)
- https://www.symantec.com/products/ghost-solutions-suite (commercial)
You can also create a tar archive of your OS and restore it (more manual work, but still faster than dd); see the sketch below:
- https://help.ubuntu.com/community/BackupYourSystem/TAR
Personally I suggest you use Clonezilla: it has a large community, well-documented knowledge of the common issues you can encounter, and of course it's open source!
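A minimal sketch of the tar route from the linked guide, assuming you boot a live session with the old root mounted at /mnt/old and the new, already-formatted root at /mnt/new (both mount points are placeholders):

# Stream the old root straight onto the new file system, preserving
# permissions, ACLs and extended attributes; --one-file-system skips
# anything mounted inside the tree.
sudo tar --create --one-file-system --acls --xattrs -C /mnt/old -f - . \
  | sudo tar --extract --preserve-permissions --acls --xattrs -C /mnt/new -f -

The new disk still won't boot on its own: you also need to update the UUID in /etc/fstab and reinstall GRUB on the new disk; see the linked guide for the restore details.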
"bit per bit"?dd ... bs=1M
will do the copy using 1 MB blocks.
– Hannu
34 mins ago
Yes, but as in the first answer: if you don't pass any flags to dd, that is the default. Of course it will be faster than normal with a block-size flag, but you also have to consider that dd does not skip free space, so the image of a 500 GB disk will be 500 GB, no matter whether the disk is only 30% full. That's why I suggest a different solution; partclone is more evolved than dd for this purpose.
– AtomiX84
30 mins ago
In a low-usage situation any tool other than a block-by-block copy is preferred, yes. Then you also have other issues -> superuser.com/a/1388090/346288
– Hannu
27 mins ago
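For reference, partclone (mentioned above) copies only the blocks a file system actually uses. A minimal sketch for ext4, with /dev/sdX1 and /dev/sdY1 as placeholder source and target partitions:

# Device-to-device clone that skips unused blocks (ext4 variant).
sudo partclone.ext4 -b -s /dev/sdX1 -o /dev/sdY1

# Check the copy, then grow it to fill the larger partition.
sudo e2fsck -f /dev/sdY1
sudo resize2fs /dev/sdY1

With 80 GB in use, this copies roughly 80 GB instead of the full 120 GB a raw dd of the partition would.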