First look at cinder with devstack

This post is a first look at OpenStack Cinder as deployed by devstack. By default devstack configures cinder to use the LVM iSCSI driver, as set in /etc/cinder/cinder.conf:

volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

The driver used when deploying cinder via devstack can be changed with devstack's CINDER_DRIVER environment variable.
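
For instance, a minimal localrc sketch; only "default", which keeps the LVM/iSCSI driver, is shown here, and other accepted values depend on your devstack checkout:

# localrc
# keep devstack's default cinder driver (LVM + iSCSI)
CINDER_DRIVER=default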

By default the LVM driver looks for a volume group called "cinder-volumes"; devstack creates one for us, here named "stack-volumes", and points cinder at it. The backing storage is simply a file attached to a loop device, on which an LVM physical volume has been created. The pvs and vgs commands show the volume group:

$ sudo losetup -a
/dev/loop0: [0801]:535640 (/opt/stack/data/drives/images/swift.img)
/dev/loop1: [0801]:290956 (/opt/stack/data/stack-volumes-backing-file)
$ file /opt/stack/data/stack-volumes-backing-file  
/opt/stack/data/stack-volumes-backing-file: LVM2 PV (Linux Logical Volume Manager), UUID: JcdWpB-UGiC-9mI5-GpmM-jr0O-Th7l-t4Lqki, size: 10747904000
$ sudo pvs
  PV         VG            Fmt  Attr PSize  PFree 
  /dev/loop1 stack-volumes lvm2 a-   10.01g 10.01g
$ sudo vgs
  VG            #PV #LV #SN Attr   VSize  VFree 
  stack-volumes   1   0   0 wz--n- 10.01g 10.01g
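
For reference, building such a backing store boils down to a few standard commands; this is a hedged sketch, the size and the exact steps devstack actually runs may differ:

$ truncate -s 10G /opt/stack/data/stack-volumes-backing-file
$ sudo losetup -f --show /opt/stack/data/stack-volumes-backing-file
/dev/loop1
$ sudo pvcreate /dev/loop1
$ sudo vgcreate stack-volumes /dev/loop1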

As with every OpenStack component, actions and data are segmented by tenant; we'll use the admin tenant here. Let's create a block volume of 1G.

$ . openrc admin admin
$ cinder create --display-name myVolume 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                False                 |
|      created_at     |      2013-07-29T09:31:37.705400      |
| display_description |                 None                 |
|     display_name    |               myVolume               |
|          id         | e506f007-e8eb-43c1-968e-190a6bab8bce |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| e506f007-e8eb-43c1-968e-190a6bab8bce | available |   myVolume   |  1   |     None    |  False   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
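
The cinder CLI is only a thin client on top of cinder's REST API, so the same listing can be fetched directly with curl. A hedged sketch, assuming devstack's defaults (cinder-api on port 8776, v1 API, local host) and using the admin tenant id seen later in the logs:

$ TOKEN=$(keystone token-get | awk '/ id / {print $4}')
$ curl -s -H "X-Auth-Token: $TOKEN" http://127.0.0.1:8776/v1/1b90ffdc6961416ca5766eba5f753874/volumes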

The lvs command reports that cinder has created the requested 1G logical volume:

$ sudo lvs
  LV                                          VG            Attr   LSize Origin Snap%  Move Log Copy%  Convert
  volume-e506f007-e8eb-43c1-968e-190a6bab8bce stack-volumes -wi-ao 1.00g

Cinder is composed of at least three services:
- cinder-api
- cinder-scheduler
- cinder-volume

The API call that created our volume is handled by the cinder-api process, the main entry point; the cinder client simply uses the cinder REST API. The API process then asks cinder-scheduler, via the message queue, to find the cinder-volume process that should receive the request. The scheduler's main job is to apply filters in order to determine which volume manager is the most suitable. The following cinder configuration option determines which filters are evaluated:

scheduler_default_filters=AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter

We can run as many cinder-volume processes as we have storage backends. Each backend may offer different capabilities or be located in a different zone, which is why the scheduler must select the best host for the tenant's request. A multi-backend setup is sketched below.
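
As an illustration, here is a hedged cinder.conf sketch of two LVM backends served by two cinder-volume processes; the section names, backend names and the second volume group are made up for the example:

enabled_backends=lvm-1,lvm-2

[lvm-1]
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group=stack-volumes
volume_backend_name=LVM_iSCSI

[lvm-2]
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group=stack-volumes-2
volume_backend_name=LVM_iSCSI_2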

The following logs from cinder-scheduler and cinder-volume detail the volume creation step by step (filter evaluation, cinder-volume process selection, logical volume creation and finally iSCSI target configuration).

In the cinder-scheduler log:

2013-07-29 11:31:37.801 DEBUG cinder.openstack.common.rpc.amqp [-] received {u'_context_roles': [u'admin'], u'_context_request_id': u'req-51463834-28f3-4863-afdb-1ab138138b0f', u'_context_quota_class': None, u'_context_project_name': u'admin', u'_unique_id': u'0c6038c5817d4f6b8f75f15960294832', u'_context_tenant': u'1b90ffdc6961416ca5766eba5f753874', u'args': {u'request_spec': {u'volume_id': u'e506f007-e8eb-43c1-968e-190a6bab8bce', u'volume_properties': {u'status': u'creating', u'volume_type_id': None, u'display_name': u'myVolume', ...
2013-07-29 11:31:37.802 DEBUG cinder.scheduler.filter_scheduler [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] Filtered [host 'devstack-local': free_capacity_gb: 10.75] from (pid=9530) _get_weighted_candidates /opt/stack/cinder/cinder/scheduler/filter_scheduler.py:227
2013-07-29 11:31:37.803 DEBUG cinder.scheduler.filter_scheduler [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] Choosing WeighedHost [host: devstack-local, weight: 10.0] from (pid=9530) _schedule /opt/stack/cinder/cinder/scheduler/filter_scheduler.py:240
2013-07-29 11:31:37.901 DEBUG cinder.openstack.common.rpc.amqp [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] Making asynchronous cast on cinder-volume.devstack-local... from (pid=9530) cast /opt/stack/cinder/cinder/openstack/common/rpc/amqp.py:623

In the cinder-volume log:

2013-07-29 11:31:37.915 DEBUG cinder.openstack.common.rpc.amqp [-] received {u'_context_roles': [u'admin'], u'_context_request_id': u'req-51463834-28f3-4863-afdb-1ab138138b0f', u'_context_quota_class': None, u'_context_project_name': u'admin', u'_unique_id': u'8e2399ab7cc44d78ab78c3b1f8a7ffe5', u'_context_tenant': u'1b90ffdc6961416ca5766eba5f753874', u'args': {u'request_spec': {u'volume_id': u'e506f007-e8eb-43c1-968e-190a6bab8bce', u'volume_properties': {u'status': u'creating', u'volume_type_id': None, u'display_name': u'myVolume'
2013-07-29 11:31:37.970 DEBUG cinder.volume.manager [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] volume volume-e506f007-e8eb-43c1-968e-190a6bab8bce: creating lv of size 1G from (pid=28289) create_volume /opt/stack/cinder/cinder/volume/manager.py:248
2013-07-29 11:31:37.971 INFO cinder.volume.manager [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] volume volume-e506f007-e8eb-43c1-968e-190a6bab8bce: creating
2013-07-29 11:31:37.972 DEBUG cinder.openstack.common.processutils [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf lvcreate -L 1G -n volume-e506f007-e8eb-43c1-968e-190a6bab8bce stack-volumes from (pid=28289) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:142
2013-07-29 11:31:38.538 DEBUG cinder.volume.manager [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] volume volume-e506f007-e8eb-43c1-968e-190a6bab8bce: creating export from (pid=28289) create_volume /opt/stack/cinder/cinder/volume/manager.py:332
2013-07-29 11:31:38.543 INFO cinder.brick.iscsi.iscsi [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] Creating iscsi_target for: volume-e506f007-e8eb-43c1-968e-190a6bab8bce
2013-07-29 11:31:38.548 DEBUG cinder.openstack.common.processutils [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf tgt-admin --update iqn.2010-10.org.openstack:volume-e506f007-e8eb-43c1-968e-190a6bab8bce from (pid=28289) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:142
2013-07-29 11:31:39.485 DEBUG cinder.openstack.common.processutils [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf tgt-admin --show from (pid=28289) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:142
2013-07-29 11:31:39.916 INFO cinder.volume.manager [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] volume volume-e506f007-e8eb-43c1-968e-190a6bab8bce: created successfully
2013-07-29 11:31:39.917 INFO cinder.volume.manager [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] Clear capabilities

Note the request ID "req-51463834-28f3-4863-afdb-1ab138138b0f", which allows us to follow the volume creation through all the processes.
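
For example, a simple grep is enough to gather the whole story of one request; the log location below is illustrative and assumes devstack's screen logging to files is enabled:

$ grep req-51463834-28f3-4863-afdb-1ab138138b0f /opt/stack/logs/screen-*.log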

The last step of the volume creation is to make it available to a compute host via iSCSI, using the tgt tool suite. /etc/tgt/targets.conf includes the directory /opt/stack/data/cinder/volumes, and for each volume cinder creates a file there that describes the target.

$ cat  /opt/stack/data/cinder/volumes/volume-e506f007-e8eb-43c1-968e-190a6bab8bce 

   <target iqn.2010-10.org.openstack:volume-e506f007-e8eb-43c1-968e-190a6bab8bce>
          backing-store /dev/stack-volumes/volume-e506f007-e8eb-43c1-968e-190a6bab8bce
          IncomingUser biePDpWvAh3Z5iydEQew 4boM9MFSi3LZrYiWrRuR
   </target>

$ cat /etc/tgt/targets.conf 
# Empty targets configuration file -- please see the package
# documentation directory for an example.
#
# You can drop individual config snippets into /etc/tgt/conf.d
include /etc/tgt/conf.d/*.conf
include /etc/tgt/stack.d/*

$ ls -al /etc/tgt/stack.d
lrwxrwxrwx 1 root root 30 Jul 23 18:34 /etc/tgt/stack.d -> /opt/stack/data/cinder/volumes

With tgtadm we can list all local targets and LUNs available:

$ sudo tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2010-10.org.openstack:volume-e506f007-e8eb-43c1-968e-190a6bab8bce
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags: 
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 1074 MB, Block size: 512
            Online: Yes
            Removable media: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/stack-volumes/volume-e506f007-e8eb-43c1-968e-190a6bab8bce
            Backing store flags: 
    Account information:
    ACL information:
        ALL
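
On the compute side, nova-compute uses open-iscsi to discover and log in to this target before handing the device to the hypervisor. Roughly, and leaving out the CHAP credentials declared by the IncomingUser line above, that boils down to the following (the portal IP is illustrative):

$ sudo iscsiadm -m discovery -t sendtargets -p 192.168.122.10:3260
$ sudo iscsiadm -m node -T iqn.2010-10.org.openstack:volume-e506f007-e8eb-43c1-968e-190a6bab8bce -p 192.168.122.10:3260 --login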

Now we want to use our volume from a VM instance, so let's create a VM and attach the volume:

$ nova keypair-add --pub-key ~fabien/.ssh/id_rsa.pub fbo
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
$ nova boot --image 0223bf77-5200-4c20-af61-21bc4dd8b2c9 --flavor 1 --key_name fbo --security_groups default myVM
$ nova list
+--------------------------------------+-------+--------+------------+-------------+------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks         |
+--------------------------------------+-------+--------+------------+-------------+------------------+
| 35c1b926-8377-4b62-8df4-f91a2520aad2 | myVM2 | ACTIVE | None       | Running     | private=10.0.0.2 |
+--------------------------------------+-------+--------+------------+-------------+------------------+
$ nova  volume-list
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| c8de072b-fe47-4c02-a61d-3874b7e2d6ce | available | myVolume     | 1    | None        |             |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
$ nova volume-attach 35c1b926-8377-4b62-8df4-f91a2520aad2 c8de072b-fe47-4c02-a61d-3874b7e2d6ce auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| serverId | 35c1b926-8377-4b62-8df4-f91a2520aad2 |
| id       | c8de072b-fe47-4c02-a61d-3874b7e2d6ce |
| volumeId | c8de072b-fe47-4c02-a61d-3874b7e2d6ce |
+----------+--------------------------------------+

The block device should now be available as /dev/vdb inside the VM:

$ ssh -i /home/fabien/.ssh/id_rsa cirros@10.0.0.2
$ ls -al /dev/vdb
$ sudo -i
# mkfs.ext3 /dev/vdb
# mount /dev/vdb /mnt
# df -h | grep vdb
/dev/vdb               1007.9M     33.3M    923.4M   3% /mnt
# cd /mnt
# touch a b c
# ls
a           b           c           lost+found
# umount /mnt

Cinder and the LVM driver allow the user to create volume snapshots. First we detach the volume; a snapshot can also be taken while the volume stays attached (--force option), but the result may be inconsistent if some data has not yet been synchronized to the volume:

$ nova volume-detach 35c1b926-8377-4b62-8df4-f91a2520aad2 c8de072b-fe47-4c02-a61d-3874b7e2d6ce
$ cinder snapshot-create --display-name myVolume_snap300713 c8de072b-fe47-4c02-a61d-3874b7e2d6ce
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|      created_at     |      2013-07-30T15:17:58.324203      |
| display_description |                 None                 |
|     display_name    |         myVolume_snap300713          |
|          id         | 65c9394e-0ced-4ff5-a167-c84e52c9d1a9 |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|        status       |               creating               |
|      volume_id      | c8de072b-fe47-4c02-a61d-3874b7e2d6ce |
+---------------------+--------------------------------------+
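
Under the hood the LVM driver takes a copy-on-write snapshot of the logical volume, roughly equivalent to the following; a hedged sketch, the exact name cinder gives the snapshot LV may differ:

$ sudo lvcreate --size 1G --snapshot --name _snapshot-65c9394e-0ced-4ff5-a167-c84e52c9d1a9 /dev/stack-volumes/volume-c8de072b-fe47-4c02-a61d-3874b7e2d6ce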

Let's add some more data to our volume and then restore the snapshot:

$ nova volume-attach 35c1b926-8377-4b62-8df4-f91a2520aad2 c8de072b-fe47-4c02-a61d-3874b7e2d6ce auto             
$ ssh -i /home/fabien/.ssh/id_rsa cirros@10.0.0.2
$ sudo mount /dev/vdb /mnt
$ ls /mnt
a           b           c           lost+found
$ sudo touch /mnt/d && sync
$ ls /mnt
a           b           c           d           lost+found

To restore a cinder snapshot we must create a new volume from it:

$ cinder create --snapshot-id 65c9394e-0ced-4ff5-a167-c84e52c9d1a9 --display-name myVolume300713 1
$ cinder list
+--------------------------------------+-----------+----------------+------+-------------+----------+-------------+
|                  ID                  |   Status  |  Display Name  | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+----------------+------+-------------+----------+-------------+
| 7f62258a-1c83-4a51-85d4-44b6235b13b8 | available | myVolume300713 |  1   |     None    |  False   |             |
| c8de072b-fe47-4c02-a61d-3874b7e2d6ce | available |    myVolume    |  1   |     None    |  False   |             |
+--------------------------------------+-----------+----------------+------+-------------+----------+-------------+
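
With the LVM driver, creating a volume from a snapshot essentially means creating a fresh logical volume and copying the snapshot content into it; roughly, and only as a hedged sketch of what cinder does for us:

$ sudo lvcreate -L 1G -n volume-7f62258a-1c83-4a51-85d4-44b6235b13b8 stack-volumes
$ sudo dd if=/dev/stack-volumes/_snapshot-65c9394e-0ced-4ff5-a167-c84e52c9d1a9 of=/dev/stack-volumes/volume-7f62258a-1c83-4a51-85d4-44b6235b13b8 bs=1M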

The original data has been restored: the file 'd' no longer exists.

$ nova volume-attach 35c1b926-8377-4b62-8df4-f91a2520aad2 7f62258a-1c83-4a51-85d4-44b6235b13b8 auto
$ ssh -i /home/fabien/.ssh/id_rsa cirros@10.0.0.2
$ sudo mount /dev/vdb /mnt
$ ls /mnt
a           b           c           lost+found
