zfs receive oddity

Every so often, even a system as good as zfs will throw you a curveball. This one threw me for a while, and here's a simplified example.

All I'm trying to do here is replicate one file system. So I create it, touch a file so I know it's made it.

zfs create rpool/t1

touch /rpool/t1/1

OK, snapshot it and send it.

zfs snapshot rpool/t1@t1s1

zfs send rpool/t1@t1s1 | zfs recv rpool/t2

Create another file, and create a snapshot at both source and destination.

touch /rpool/t1/2

zfs snapshot rpool/t1@t1s2

zfs snapshot rpool/t2@t2s2

And now send an incremental stream from the original.

zfs send -i rpool/t1@t1s1 rpool/t1@t1s2 | zfs recv -F rpool/t2

That works; the whole point of the -F flag is to discard any changes made on the receiving side since the last received snapshot. (You'll usually need it if the file system is mounted at the receiver, because even access time updates count as changes that have to be discarded.) It rolls rpool/t2 back to the original @t1s1 snapshot, discarding the local @t2s2 snapshot, then brings the rpool/t2 file system up to the @t1s2 snapshot.
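In effect, -F is doing a rollback for you. Done by hand it would look roughly like this (a sketch of the equivalent steps, not what zfs literally does internally):

zfs rollback -r rpool/t2@t1s1    # -r also destroys the newer local @t2s2

zfs send -i rpool/t1@t1s1 rpool/t1@t1s2 | zfs recv rpool/t2

You may still want -F on the receive if the file system is mounted and atime updates creep in between the rollback and the receive.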

So far so good.

Now a minor variation.
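(To repeat from scratch, I'm assuming the datasets from the first example are cleared out first; something like:)

zfs destroy -r rpool/t2    # removes the file system and its snapshots

zfs destroy -r rpool/t1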

I create it, touch a file so I know it's made it.

zfs create rpool/t1

touch /rpool/t1/1

OK, snapshot it and send it.

zfs snapshot rpool/t1@s1

zfs send rpool/t1@s1 | zfs recv rpool/t2

Create another file, and create a snapshot at both source and destination.

touch /rpool/t1/2

zfs snapshot rpool/t1@s2

zfs snapshot rpool/t2@s2

And now send the incremental stream just like last time.

zfs send -i rpool/t1@s1 rpool/t1@s2 | zfs recv -F rpool/t2

Kaboom. This fails, reporting:

cannot restore to rpool/t2@s2: destination already exists

What? The problem is hinted at in the zfs man page, where the description of -F says:

If receiving an incremental replication stream (for example, one generated by zfs send -R [-i|-I]), destroy snapshots and file systems that do not exist on the sending side.

The problem, then, is that zfs won't destroy the @s2 snapshot that exists at the receiver, because a snapshot of the same name exists in the source. It's not the same snapshot, of course, but it has the same name. This prevents the rollback, and the receive fails.
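You can see the clash by listing the snapshots on both sides; the receiver's local @s2 and the source's @s2 are entirely separate snapshots that merely share a name:

zfs list -t snapshot -r rpool/t1 rpool/t2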

Snapshot name collisions are pretty common. We have an automatic snapshot regime, so pretty much every file system we have has a daily snapshot that embeds the date, and being automatic, they all have the same name.
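For example (the names here are hypothetical, but typical of an automatic scheme), the two ends can each grow a snapshot with the same date-stamped name without ever having exchanged it:

zfs snapshot sourcepool/data@daily-2016-10-12    # taken by the schedule on the source

zfs snapshot destpool/data@daily-2016-10-12      # taken independently on the receiver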

What this means in practice is that if you have snapshots created on the receiving side, you'll have to explicitly roll the file system back to the snapshot you sent to previously, to avoid hitting name collisions.
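With the failing example above, that means rolling rpool/t2 back to the last snapshot that was actually sent (@s1) before receiving, which destroys the receiver's local @s2 and removes the collision:

zfs rollback -r rpool/t2@s1    # discards the local @s2 along with any other local changes

zfs send -i rpool/t1@s1 rpool/t1@s2 | zfs recv -F rpool/t2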

I think this behaviour is wrong, although I'm not quite confident enough to call it a bug. The point is that on the receiving side, any snapshots created after the one that was sent are irrelevant - it shouldn't matter what their names are, and I'm not at all sure why zfs even bothers checking the names of snapshots that ought to be deleted.
