server: Do not resize volume of running vm on KVM host if host is not Up or not Enabled#4148

Merged
yadvr merged 2 commits into apache:4.13 from ustcweizhou:4.13-disallow-resize-volume-disable-kvm-host
Jun 25, 2020

Conversation

@ustcweizhou (Contributor) commented Jun 16, 2020

Description

If we resize a volume of a VM running on a host that is not Up or not Enabled, the job will be scheduled to another normal host. The volume will then be resized with "qemu-img resize" instead of "virsh blockresize", and the image might be corrupted after the resize.

Types of changes

  • Breaking change (fix or feature that would cause existing functionality to change)
  • New feature (non-breaking change which adds functionality)
  • Bug fix (non-breaking change which fixes an issue)
  • Enhancement (improves an existing feature and functionality)
  • Cleanup (Code refactoring and cleanup, that may add test cases)

Screenshots (if appropriate):

How Has This Been Tested?

@yadvr (Member) left a comment


LGTM, did not test it

@yadvr yadvr added this to the 4.13.2.0 milestone Jun 17, 2020
/* Do not resize volume of running vm on KVM host if host is not Up or not Enabled */
if (currentSize != newSize && userVm.getState() == State.Running && userVm.getHypervisorType() == HypervisorType.KVM) {
    HostVO host = _hostDao.findById(userVm.getHostId());
    if (host.getStatus() != Status.Up) {
@ustcweizhou should you do a null check, what if the host was removed?

@rhtyd added null check
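The guard under review, including the null check added in response to the comment above, can be sketched as a stand-alone predicate. This is a minimal illustration with hypothetical types (`Host`, `ResizeGuard`, and the simplified enums are stand-ins, not the actual CloudStack classes); in the real change the host row comes from `_hostDao.findById(userVm.getHostId())` and the states are CloudStack's `Status` and `ResourceState`:

```java
// Hypothetical, simplified stand-ins for CloudStack's host status types.
enum Status { Up, Down, Disconnected }
enum ResourceState { Enabled, Disabled, Maintenance }

class Host {
    final Status status;
    final ResourceState resourceState;
    Host(Status status, ResourceState resourceState) {
        this.status = status;
        this.resourceState = resourceState;
    }
}

class ResizeGuard {
    /**
     * True only when a live resize (virsh blockresize on the VM's own host)
     * is safe: the host record exists, is Up, and is Enabled. The null
     * check covers the reviewer's case where the host row was removed.
     */
    static boolean allowsLiveResize(Host host) {
        if (host == null) {
            return false;           // host record removed from the DB
        }
        if (host.status != Status.Up) {
            return false;           // agent not connected
        }
        if (host.resourceState != ResourceState.Enabled) {
            return false;           // host disabled or in maintenance
        }
        return true;
    }
}
```

When the predicate fails, the management server should refuse the resize of the running VM rather than schedule the job on another host, since that other host would fall back to "qemu-img resize" on an image that is still attached elsewhere.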

@yadvr (Member) commented Jun 20, 2020

@blueorangutan package

@blueorangutan

@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.

@blueorangutan

Packaging result: ✔centos7 ✔debian. JID-1422

@yadvr (Member) commented Jun 24, 2020

@blueorangutan package

@blueorangutan

@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.

@blueorangutan

Packaging result: ✔centos7 ✔debian. JID-1438

@yadvr (Member) commented Jun 24, 2020

@blueorangutan test

@blueorangutan

@rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests

@blueorangutan

Trillian test result (tid-1834)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 28154 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr4148-t1834-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Smoke tests completed. 76 look OK, 1 have error(s)
Only failed tests results shown below:

| Test | Result | Time (s) | Test File |
| --- | --- | --- | --- |
| test_02_vpc_privategw_static_routes | Failure | 174.28 | test_privategw_acl.py |
| test_03_vpc_privategw_restart_vpc_cleanup | Failure | 174.93 | test_privategw_acl.py |
| test_04_rvpc_privategw_static_routes | Failure | 233.75 | test_privategw_acl.py |

@yadvr (Member) commented Jun 25, 2020

LGTM

@yadvr yadvr merged commit 5526342 into apache:4.13 Jun 25, 2020
@shwstppr (Contributor) left a comment


LGTM, makes sense



7 participants