Keep volume policies after migrating a volume to another primary storage #5067
Conversation
@Override
public boolean copyPoliciesBetweenVolumesAndDestroySourceVolumeAfterMigration(Event destinationEvent, Answer destinationEventAnswer, VolumeInfo sourceVolume,
        VolumeInfo destinationVolume, boolean retryExpungeVolumeAsync) {
@GutoVeronezi I think it is better to separate copying the policies and destroying the volume.
@sureshanaparti I extracted the code that destroys the source volume into a method and added tests for it. However, I kept this method, which calls both the policy copy and the volume destruction, to facilitate and guide the process.
Can anyone review this?
DaanHoogland left a comment
These are a lot of good improvements in log and exception messages, as well as a solid-looking refactor of the volume object and service. It will need extensive regression testing on anything related to that. I don't see hypervisor-specific risks, but these may be hidden in there.
CLGTM
 * @param destinationEvent
 * @param destinationEventAnswer
 * @param sourceVolume
 * @param destinationVolume
 * @param retryExpungeVolumeAsync
Maybe remove these if there is no sensible description to be given.
/**
 * Tests the following scenario:
 * If the volume gets deleted by another thread (cleanup) and the cleanup is attempted again, the volume isnt found in DB and hence NPE occurs
 * during transition
Why delete this javadoc? It seems to add value to the method name, which doesn't describe what it does. validateHandleProcessEventCopyCmdAnswerNotPrimaryStoreDoNotSetSize is a bit cryptic.
@DaanHoogland This whole test class, as well as those in the test package of this module, was deleted. The tests in this module had already been disabled for a long time; therefore, we considered them irrelevant, as they were a bunch of code that was not being used.
Ah right, it isn't a change but an addition and a deletion; sorry, and thanks.
Packaging result: ✔️ centos7 ✔️ centos8 ✔️ debian. SL-JID 311

Trillian Build Failed (tid-1033)

Can anyone review this?

@GutoVeronezi a trick for getting the right person is running
Thanks for the tip @DaanHoogland.

Packaging result: ✔️ el7 ✔️ el8 ✔️ debian. SL-JID 671

Trillian test result (tid-1395)
@sureshanaparti does this LGTY? cc @nvazquez
Looking good. I think it will need manual testing for migrations and also volume regression testing. @sureshanaparti, is this something you can check, please?
@nvazquez any update about this one?

@weizhouapache can you please review/verify this improvement?

@blueorangutan package

@weizhouapache a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.

Packaging result: ✔️ el7 ✖️ el8 ✔️ debian ✔️ suse15. SL-JID 1105

@blueorangutan package

@weizhouapache a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.

Packaging result: ✔️ el7 ✔️ el8 ✔️ debian ✔️ suse15. SL-JID 1109

@blueorangutan test

@weizhouapache a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests

Trillian test result (tid-1941)
weizhouapache left a comment
Tested OK on KVM:
(1) both root and data disk
(2) offline volume migration (migratevolume)
(3) online volume migration (migratevirtualmachinewithvolume)
@blueorangutan test centos7 vmware-67u3

@rhtyd a Trillian-Jenkins test job (centos7 mgmt + vmware-67u3) has been kicked to run smoke tests

Trillian test result (tid-1962)
Description
In ACS, there are two ways to migrate a volume between storages:
- migrating only the volume (migrateVolume);
- migrating the virtual machine with its volumes (migrateVirtualMachineWithVolume).

Both, although in different flows, work similarly: they create a new volume in the destination storage and then delete the old volume in the source storage. When we delete the old volume, we also delete the volume policies (snapshot scheduling).
Therefore, the new volume will not have the snapshot schedules and we will lose this information. This PR intends to copy the policies from the source volume to the destination volume before deleting the old volume.
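The ordering matters: policies must be copied before the source volume (and, with it, its policy rows) is destroyed. A minimal, self-contained sketch of that idea is below; all names here (PolicyMigrationSketch, copyPolicies, destroySourceVolume, the in-memory map) are hypothetical illustrations, not CloudStack's actual classes or DAOs.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only: copy a volume's snapshot policies to the
// destination volume BEFORE destroying the source volume, so the
// schedules survive the migration. Hypothetical names throughout.
public class PolicyMigrationSketch {

    record SnapshotPolicy(long volumeId, String schedule) {}

    // toy in-memory stand-in for the snapshot-policy table
    static final Map<Long, List<SnapshotPolicy>> POLICIES = new HashMap<>();

    static void copyPolicies(long sourceVolumeId, long destVolumeId) {
        List<SnapshotPolicy> copies = new ArrayList<>();
        for (SnapshotPolicy p : POLICIES.getOrDefault(sourceVolumeId, List.of())) {
            // re-point each policy at the destination volume
            copies.add(new SnapshotPolicy(destVolumeId, p.schedule()));
        }
        POLICIES.put(destVolumeId, copies);
    }

    static void destroySourceVolume(long sourceVolumeId) {
        // deleting the old volume also deletes its policies
        POLICIES.remove(sourceVolumeId);
    }

    public static void main(String[] args) {
        POLICIES.put(1L, List.of(
                new SnapshotPolicy(1L, "daily"),
                new SnapshotPolicy(1L, "weekly")));
        copyPolicies(1L, 2L);    // copy first...
        destroySourceVolume(1L); // ...then destroy the source
        System.out.println(POLICIES.get(2L).size()); // prints 2
    }
}
```

Reversing the two calls would lose both policies, which is the pre-patch behavior this PR fixes.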
Types of changes
Feature/Enhancement Scale or Bug Severity: Feature/Enhancement Scale
How Has This Been Tested?
It has been tested locally in a test lab.
- migrating only the volume (migrateVolume);
- migrating the virtual machine with its volumes (migrateVirtualMachineWithVolume).

After applying this patch, the policies were copied.
Also, I created Java unit tests for the methods I changed.