expand size of persistent file

mimosa
Salix Warrior
Posts: 3101
Joined: 25. May 2010, 17:02
Contact:

expand size of persistent file

Post by mimosa » 17. Oct 2011, 17:56

Am I right in thinking it isn't possible to make the persistent file any bigger than the size it was originally created as?

I have a feeling this question has already come up, but my search didn't find it.

I'm using Salix 13.1 Xfce Live, if that makes any difference. Just created a 500MB file and updating seems to have eaten most of that up.

Shador
Posts: 1295
Joined: 11. Jun 2009, 14:04
Location: Bavaria

Re: expand size of persistent file

Post by Shador » 20. Oct 2011, 09:43

It should be possible, as long as the filesystem created in the file supports growing. AFAIK XFS does. It should work something like this (written from memory):

Code: Select all

dd if=/dev/zero of=savefile bs=1024 count=1M conv=notrunc oflag=append # --> +1G
losetup -f savefile
xfs_growfs ....... /dev/loopX .....
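The append step in the sketch above can be tried on any ordinary file without root (the file name here is just an example):

```shell
# Create a 1 MiB test file, then append another 1 MiB of zeroes,
# mirroring the grow step above. conv=notrunc keeps the existing
# data; oflag=append writes the new zeroes at the end.
dd if=/dev/zero of=testsave bs=1M count=1 2>/dev/null
dd if=/dev/zero of=testsave bs=1M count=1 conv=notrunc oflag=append 2>/dev/null
stat -c %s testsave   # 2097152 bytes = 2 MiB
rm -f testsave
```

Only after growing the file do you need root, to attach it as a loop device and grow the filesystem inside it.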

JRD
Salix Warrior
Posts: 949
Joined: 7. Jun 2009, 22:52
Location: Lyon, France

Re: expand size of persistent file

Post by JRD » 20. Oct 2011, 13:15

Shador seems correct, yes.
But it should be "bs=1M count=1024".

mimosa
Salix Warrior
Posts: 3101
Joined: 25. May 2010, 17:02
Contact:

Re: expand size of persistent file

Post by mimosa » 20. Oct 2011, 18:41

Well, it seems to have worked :D
Here is what I did:

Code: Select all

root[mimosa]# mkdir tmpmnt/                                                                      # make sure filesystem is unmounted
root[mimosa]# dd if=/dev/zero of=/path/to/slxsave.xfs bs=1M count=12 conv=notrunc oflag=append   # first time, I added 512 MB to existing 500 MB ;)
root[mimosa]# mount /path/to/slxsave.xfs ./tmpmnt/ -o loop                                       # I got confused with losetup, but this did the job
root[mimosa]# xfs_growfs -d tmpmnt/                                                              # grow the filesystem to fill new space
Thanks Shador and JRD. That was easy! Would it also be easy to add this functionality to the Persistence Wizard? It must be common for people to underestimate the size they want for persistence.

In my case, that was partly because updating an old version of Salix used up a fair amount of space. A good strategy would presumably be to use LiveClone after updating to avoid the need to keep out-of-date application files.

EDIT In fact this seemed to damage the filesystem within the persistent file - see the post below on the same procedure for ext4. Try:

Code: Select all

# xfs_check -f /path/to/slxsave.xfs    # the filesystem shouldn't be mounted for either of these; first check, then fix errors
# xfs_repair -f /path/to/slxsave.xfs
Last edited by mimosa on 30. Oct 2011, 16:34, edited 1 time in total.

Shador
Posts: 1295
Joined: 11. Jun 2009, 14:04
Location: Bavaria

Re: expand size of persistent file

Post by Shador » 20. Oct 2011, 21:10

losetup is implicitly called by mount -o loop. That is, losetup -f <file> creates a loopback device, e.g. /dev/loop0, which mount then mounts, e.g. on /mnt. Calling losetup on its own is useful when you just want to make a file usable as a loopback/block device but don't want to mount it - for example, when the filesystem only supports offline resizing, i.e. it must not be mounted while it's being resized (XFS supports online resizing).
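The two equivalent routes can be sketched like this (the file name is an example, and actually attaching a loop device needs root, so the script only echoes the shortcut otherwise):

```shell
#!/bin/sh
# Sketch: explicit losetup vs. the mount -o loop shortcut.
# "slxsave.xfs" is an example name; attaching loop devices needs root.
savefile=slxsave.xfs

if [ "$(id -u)" -eq 0 ] && [ -f "$savefile" ]; then
    loopdev=$(losetup -f)            # next free loop device, e.g. /dev/loop0
    losetup "$loopdev" "$savefile"   # attach the file as a block device...
    mount "$loopdev" /mnt            # ...then mount that device
    umount /mnt
    losetup -d "$loopdev"            # detach again
else
    # Shortcut: mount attaches (and on umount detaches) the loop device itself.
    echo "as root: mount -o loop $savefile /mnt"
fi
```

The explicit route leaves the device attached but unmounted after the losetup call, which is what you want for offline filesystem tools.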

mimosa
Salix Warrior
Posts: 3101
Joined: 25. May 2010, 17:02
Contact:

Re: expand size of persistent file

Post by mimosa » 20. Oct 2011, 21:14

I think XFS *requires* the filesystem to be mounted for resizing; that would explain why I had trouble with just losetup.

Shador
Posts: 1295
Joined: 11. Jun 2009, 14:04
Location: Bavaria

Re: expand size of persistent file

Post by Shador » 20. Oct 2011, 21:16

mimosa wrote:I think XFS *requires* the filesystem to be mounted for resizing; that would explain why I had trouble with just losetup.
You're right, the man page says so too. I'm used to resizing ext* filesystems only, so my instructions came from that point of view.

mimosa
Salix Warrior
Posts: 3101
Joined: 25. May 2010, 17:02
Contact:

Re: expand size of persistent file

Post by mimosa » 28. Oct 2011, 20:19

Here is the approach for ext4 (though from the man page, it looks as though you *could* do this with the savefile mounted as a loop device):

Code: Select all

# dd if=/dev/zero of=./salixlive.save bs=1M count=512 conv=notrunc oflag=append
# losetup -f salixlive.save
# losetup -a               # EDIT check the loopback device, it might not be /dev/loop0; here, let's assume it is
# e2fsck -f /dev/loop0     # resize2fs insists on checking the filesystem first
# resize2fs /dev/loop0
EDIT

As noted on the RC1 thread, this seemed to cause a problem with the persistent file which in turn caused the shutdown process to hang. However, running e2fsck appears to have fixed it! The same thing happened with the xfs persistent file I expanded as described a few posts above, and maybe it could have been fixed similarly - see xfs_check and xfs_repair.
Last edited by mimosa on 30. Oct 2011, 14:46, edited 4 times in total.

Shador
Posts: 1295
Joined: 11. Jun 2009, 14:04
Location: Bavaria

Re: expand size of persistent file

Post by Shador » 28. Oct 2011, 22:45

mimosa wrote:Here is the approach for ext4 (though from the man page, it looks as though you *could* do this with the savefile mounted as a loop device):
Yes, you can if the kernel supports this. Anyway, you should be careful whether your persistent file is really mapped to /dev/loop0. With other loop devices already in use it could get a higher index. You can use losetup -a to check which one is actually used.

Edit:
BTW you could also use the truncate command instead of dd to resize the actual file. Be sure not to shrink the file, because truncate discards the extra data. It's usually much faster than dd, especially for big files, because it doesn't actually write zeroes to the file (as a side effect, disk space is only used for those parts of the file that have been written to). The disadvantage is that it creates a sparse file, which fragments more easily. Another solution is fallocate on filesystems with pre-allocation support (e.g. ext4). It doesn't write zeroes to the disk and reserves as many blocks, marked as empty, as necessary, thus overcoming the fragmentation problem.
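The sparseness Shador describes is easy to see without root (file names here are examples; stat shows the apparent size, du the blocks actually allocated):

```shell
# Create a 100 MiB sparse file: the apparent size is 100 MiB,
# but almost no disk blocks are actually allocated.
truncate -s 100M sparse.img
stat -c %s sparse.img          # apparent size: 104857600
du -k sparse.img               # allocated size: close to 0

# Growing an existing file works the same way - extend, never shrink:
truncate -s +512M sparse.img   # now 612 MiB apparent

# fallocate would instead reserve real blocks up front (ext4 etc.):
#   fallocate -l 612M prealloc.img
rm -f sparse.img
```

This is why truncate finishes almost instantly where dd has to push every zero byte through to the file.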

mimosa
Salix Warrior
Posts: 3101
Joined: 25. May 2010, 17:02
Contact:

Re: expand size of persistent file

Post by mimosa » 28. Oct 2011, 22:57

you should be careful whether your persistent file is really mapped to /dev/loop0. With other loop devices already in use it could get a higher index. You can use losetup -a to check which one is actually used.
I wondered about this part! In this case I guessed there were no others. I'll edit the code in my post above to include it.

Anyway, thanks for all the tips - I'm having a lot of fun playing around with the RC1 :D
