ZFS: Add log, spare & dedup vdev types, support multiple special vdevs, fix bug with mixed vdev types #883
Merged
Conversation
The type keyword was included before every vdev:

```shell
zpool create -f <name> /dev/sda log mirror /dev/sdb /dev/sdc log mirror /dev/sdd /dev/sde
```

but this is incorrect and should instead be:

```shell
zpool create -f <name> /dev/sda log mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
```
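The fix above can be sketched as follows. This is a hypothetical Python sketch (the actual module is written in Nix, and the function and data shapes here are illustrative only): the `log` keyword is emitted once before the whole group of log vdevs, not once per vdev.

```python
# Sketch: build the log-vdev portion of a `zpool create` command line.
# Hypothetical data shape: a list of dicts like
#   {"mode": "mirror", "members": ["sdb", "sdc"]}
# The "log" type keyword must appear once for the whole group.
def log_vdev_args(log_vdevs):
    args = ["log"]  # emitted once, not before every vdev
    for vdev in log_vdevs:
        if vdev.get("mode"):
            args.append(vdev["mode"])
        args.extend(f"/dev/{m}" for m in vdev["members"])
    return args

print(" ".join(log_vdev_args([
    {"mode": "mirror", "members": ["sdb", "sdc"]},
    {"mode": "mirror", "members": ["sdd", "sde"]},
])))
# → log mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
```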
A config like

```nix
{
  vdev = [
    { mode = "mirror"; members = [ "data1" "data2" ]; }
    { members = [ "data3" ]; }
  ];
}
```

would result in the following command:

```shell
zpool create -f <name> mirror /dev/data1 /dev/data2 /dev/data3
```

which would result in a single vdev with a 3-way mirror, rather than a vdev with a 2-way mirror and a second vdev with a single disk. By reordering the vdevs to handle those with an empty mode first, we transform this into:

```shell
zpool create -f <name> /dev/data3 mirror /dev/data1 /dev/data2
```

which does have the desired outcome.
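The reordering described above can be sketched as follows. This is a hypothetical Python sketch, not the module's actual Nix code; a stable sort moves vdevs without a mode to the front, so a plain disk is never absorbed into a preceding `mirror` group.

```python
# Sketch of the fix: sort vdevs so those with an empty mode come first.
# (Hypothetical data shapes; the real implementation is Nix.)
def create_args(name, vdevs):
    # bool(None) is False, so mode-less vdevs sort before "mirror" etc.;
    # Python's sort is stable, preserving relative order within each group.
    ordered = sorted(vdevs, key=lambda v: bool(v.get("mode")))
    args = ["zpool", "create", "-f", name]
    for vdev in ordered:
        if vdev.get("mode"):
            args.append(vdev["mode"])
        args.extend(f"/dev/{m}" for m in vdev["members"])
    return args

print(" ".join(create_args("<name>", [
    {"mode": "mirror", "members": ["data1", "data2"]},
    {"members": ["data3"]},
])))
# → zpool create -f <name> /dev/data3 mirror /dev/data1 /dev/data2
```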
iFreilicht reviewed Nov 18, 2024
Looks really good! I have a few small notes, but in general, this looks very mergeable 👍
MaienM commented Nov 18, 2024
iFreilicht approved these changes Nov 20, 2024
@mergify queue

☑️ The pull request will automatically be requeued. Merge queue reset: an external action moved the base branch head to c3b83db
This adds support for the missing categories of devices: `log`, `spare`, and `dedup`.

`log` & `dedup` can each have multiple vdevs assigned (the workload will be load-balanced over these), so these accept a list of vdev configurations. This is actually also true for `special`, but the existing implementation only supported a single vdev. This has been updated to accept either a list of vdevs (like the other two), or a single vdev (for backwards compatibility). `spare` (like `cache`) just takes a list of device names; neither type supports more complicated vdevs like mirrors.

While testing this I ran into a bug that affected all types which support having multiple vdevs (which includes the regular data vdevs). The following topology config:

```nix
{
  vdev = [
    { mode = "mirror"; members = [ "data1" "data2" ]; }
    { members = [ "data3" ]; }
  ];
}
```

would result in the following command:

```shell
zpool create -f <name> mirror /dev/data1 /dev/data2 /dev/data3
```

which would result in a single vdev with a 3-way mirror, rather than a vdev with a 2-way mirror and a second vdev with a single disk. As far as I could find there is no keyword to indicate "mirror config ended, next is a vdev made up of a single disk" (which kinda makes sense given that ZFS would reject such a setup without the `-f` flag), so to fix this I added a sort so that the vdevs with an empty mode are moved to the start. This means the above config will now result in the following command:

```shell
zpool create -f <name> /dev/data3 mirror /dev/data1 /dev/data2
```

(An alternative would have been to instead generate a `zpool add` command for each vdev, but that would be a larger change, and we'd need to start handling partial successes, so I opted not to at this point. Doing that would also be a step towards handling changes to an existing pool, such as adding a vdev, so it might be worth considering at some point.)

I extended the `zfs-with-vdevs` example to have this 3-device setup for each of the types that support it so that this is tested, which results in the following pool (which is also validated in the tests):
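The backwards-compatible handling of `special` described above (accepting either a single vdev or a list of vdevs) can be sketched as a small normalization step. This is a hypothetical Python sketch with illustrative names; the actual module is written in Nix.

```python
# Sketch: normalize the `special` option so downstream code always
# sees a list of vdevs. (Hypothetical helper; the real code is Nix.)
def normalize_special(special):
    if special is None:
        return []          # option not set
    if isinstance(special, list):
        return special     # new-style: already a list of vdevs
    return [special]       # old-style: a single vdev, wrap it
```

With this in place, the rest of the command generation only ever deals with lists, while existing configs that set `special` to a single vdev keep working unchanged.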