First look at cinder with devstack

This post is a first look at OpenStack cinder and its basic usage under devstack. By default, devstack configures cinder to use the LVM iSCSI driver. This is set in /etc/cinder/cinder.conf:

volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

The driver used when deploying cinder via devstack can be changed with devstack's CINDER_DRIVER environment variable.
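
For example, a devstack localrc along these lines selects the driver and sizes the backing file used for the LVM volume group (variable names as found in the devstack of that era; double-check against your stack.sh and lib/cinder):

# in devstack's localrc
CINDER_DRIVER=default            # pick another supported value to switch backends
VOLUME_GROUP=stack-volumes       # name of the LVM volume group devstack creates
VOLUME_BACKING_FILE_SIZE=10250M  # size of the loopback backing file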

By default the LVM driver looks for a volume group called “cinder-volumes”; devstack creates its own, named “stack-volumes”, and points the driver at it. It is just a file attached to a loop device on which an LVM physical volume has been created. The pvs and vgs commands show that volume group:

$ sudo losetup -a
/dev/loop0: [0801]:535640 (/opt/stack/data/drives/images/swift.img)
/dev/loop1: [0801]:290956 (/opt/stack/data/stack-volumes-backing-file)
$ file /opt/stack/data/stack-volumes-backing-file  
/opt/stack/data/stack-volumes-backing-file: LVM2 PV (Linux Logical Volume Manager), UUID: JcdWpB-UGiC-9mI5-GpmM-jr0O-Th7l-t4Lqki, size: 10747904000
$ sudo pvs
  PV         VG            Fmt  Attr PSize  PFree 
  /dev/loop1 stack-volumes lvm2 a-   10.01g 10.01g
$ sudo vgs
  VG            #PV #LV #SN Attr   VSize  VFree 
  stack-volumes   1   0   0 wz--n- 10.01g 10.01g
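
For reference, what devstack does to build this is roughly equivalent to the following (a sketch, not the exact stack.sh code; paths match the ones shown above):

$ sudo truncate -s 10G /opt/stack/data/stack-volumes-backing-file
$ sudo losetup -f --show /opt/stack/data/stack-volumes-backing-file   # prints the loop device, e.g. /dev/loop1
$ sudo pvcreate /dev/loop1
$ sudo vgcreate stack-volumes /dev/loop1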

As with all OpenStack components, actions and data are segmented by tenant; we'll use the admin tenant here. Let's create a 1G block volume.

$ . openrc admin admin
$ cinder create --display-name myVolume 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                False                 |
|      created_at     |      2013-07-29T09:31:37.705400      |
| display_description |                 None                 |
|     display_name    |               myVolume               |
|          id         | e506f007-e8eb-43c1-968e-190a6bab8bce |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| e506f007-e8eb-43c1-968e-190a6bab8bce | available |   myVolume   |  1   |     None    |  False   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

The lvs command reports that cinder has created the requested 1G logical volume.

$ sudo lvs
  LV                                          VG            Attr   LSize Origin Snap%  Move Log Copy%  Convert
  volume-e506f007-e8eb-43c1-968e-190a6bab8bce stack-volumes -wi-ao 1.00g

Cinder is composed of at least three components:
- cinder-api
- cinder-scheduler
- cinder-volume

The previous API call to create our volume is handled by the cinder-api process, which is the main entry point; the cinder client simply talks to the cinder REST API. The API process then asks cinder-scheduler, via the message queue, which cinder-volume process should handle the request. The scheduler's main job is to apply filters to determine which volume manager is the most suitable for the request. The following cinder configuration option determines which filters are evaluated:

scheduler_default_filters=AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter

We can have as many cinder-volume processes as there are storage backends. Each backend may have different capabilities or sit in a different availability zone, which is why the scheduler must select the best host for the tenant's request.
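
As an illustration (not part of the devstack setup shown here), cinder's multi-backend support declares one section per backend in cinder.conf, and cinder-volume then manages one backend per section; a sketch:

enabled_backends=lvm-1,lvm-2

[lvm-1]
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group=stack-volumes
volume_backend_name=LVM_iSCSI

[lvm-2]
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group=stack-volumes-2
volume_backend_name=LVM_iSCSI_2

A volume type carrying a matching volume_backend_name extra spec then lets the scheduler route a request to a specific backend.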

The following logs from cinder-scheduler and cinder-volume detail the volume creation step by step (filter evaluation, cinder-volume process selection, logical volume creation and then iSCSI target configuration):

In the cinder-scheduler log:

2013-07-29 11:31:37.801 DEBUG cinder.openstack.common.rpc.amqp [-] received {u'_context_roles': [u'admin'], u'_context_request_id': u'req-51463834-28f3-4863-afdb-1ab138138b0f', u'_context_quota_class': None, u'_context_project_name': u'admin', u'_unique_id': u'0c6038c5817d4f6b8f75f15960294832', u'_context_tenant': u'1b90ffdc6961416ca5766eba5f753874', u'args': {u'request_spec': {u'volume_id': u'e506f007-e8eb-43c1-968e-190a6bab8bce', u'volume_properties': {u'status': u'creating', u'volume_type_id': None, u'display_name': u'myVolume', ...
2013-07-29 11:31:37.802 DEBUG cinder.scheduler.filter_scheduler [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] Filtered [host 'devstack-local': free_capacity_gb: 10.75] from (pid=9530) _get_weighted_candidates /opt/stack/cinder/cinder/scheduler/filter_scheduler.py:227
2013-07-29 11:31:37.803 DEBUG cinder.scheduler.filter_scheduler [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] Choosing WeighedHost [host: devstack-local, weight: 10.0] from (pid=9530) _schedule /opt/stack/cinder/cinder/scheduler/filter_scheduler.py:240
2013-07-29 11:31:37.901 DEBUG cinder.openstack.common.rpc.amqp [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] Making asynchronous cast on cinder-volume.devstack-local... from (pid=9530) cast /opt/stack/cinder/cinder/openstack/common/rpc/amqp.py:623

In cinder-volume logs:

2013-07-29 11:31:37.915 DEBUG cinder.openstack.common.rpc.amqp [-] received {u'_context_roles': [u'admin'], u'_context_request_id': u'req-51463834-28f3-4863-afdb-1ab138138b0f', u'_context_quota_class': None, u'_context_project_name': u'admin', u'_unique_id': u'8e2399ab7cc44d78ab78c3b1f8a7ffe5', u'_context_tenant': u'1b90ffdc6961416ca5766eba5f753874', u'args': {u'request_spec': {u'volume_id': u'e506f007-e8eb-43c1-968e-190a6bab8bce', u'volume_properties': {u'status': u'creating', u'volume_type_id': None, u'display_name': u'myVolume'
2013-07-29 11:31:37.970 DEBUG cinder.volume.manager [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] volume volume-e506f007-e8eb-43c1-968e-190a6bab8bce: creating lv of size 1G from (pid=28289) create_volume /opt/stack/cinder/cinder/volume/manager.py:248
2013-07-29 11:31:37.971 INFO cinder.volume.manager [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] volume volume-e506f007-e8eb-43c1-968e-190a6bab8bce: creating
2013-07-29 11:31:37.972 DEBUG cinder.openstack.common.processutils [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf lvcreate -L 1G -n volume-e506f007-e8eb-43c1-968e-190a6bab8bce stack-volumes from (pid=28289) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:142
2013-07-29 11:31:38.538 DEBUG cinder.volume.manager [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] volume volume-e506f007-e8eb-43c1-968e-190a6bab8bce: creating export from (pid=28289) create_volume /opt/stack/cinder/cinder/volume/manager.py:332
2013-07-29 11:31:38.543 INFO cinder.brick.iscsi.iscsi [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] Creating iscsi_target for: volume-e506f007-e8eb-43c1-968e-190a6bab8bce
2013-07-29 11:31:38.548 DEBUG cinder.openstack.common.processutils [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf tgt-admin --update iqn.2010-10.org.openstack:volume-e506f007-e8eb-43c1-968e-190a6bab8bce from (pid=28289) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:142
2013-07-29 11:31:39.485 DEBUG cinder.openstack.common.processutils [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf tgt-admin --show from (pid=28289) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:142
2013-07-29 11:31:39.916 INFO cinder.volume.manager [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] volume volume-e506f007-e8eb-43c1-968e-190a6bab8bce: created successfully
2013-07-29 11:31:39.917 INFO cinder.volume.manager [req-51463834-28f3-4863-afdb-1ab138138b0f 261cb8fa46b547c991f2738c337a6e1c 1b90ffdc6961416ca5766eba5f753874] Clear capabilities

Note the request ID “req-51463834-28f3-4863-afdb-1ab138138b0f”, which allows us to follow the volume creation through all the processes.

The last step of the volume creation is to make it available to compute hosts via iSCSI. The tgt tool suite is used for that. /etc/tgt/targets.conf includes the /etc/tgt/stack.d directory, a symlink to /opt/stack/data/cinder/volumes, and for each volume cinder creates a file there that describes the target.

$ cat  /opt/stack/data/cinder/volumes/volume-e506f007-e8eb-43c1-968e-190a6bab8bce 

   <target iqn.2010-10.org.openstack:volume-e506f007-e8eb-43c1-968e-190a6bab8bce>
          backing-store /dev/stack-volumes/volume-e506f007-e8eb-43c1-968e-190a6bab8bce
          IncomingUser biePDpWvAh3Z5iydEQew 4boM9MFSi3LZrYiWrRuR
   </target>

$ cat /etc/tgt/targets.conf 
# Empty targets configuration file -- please see the package
# documentation directory for an example.
#
# You can drop individual config snippets into /etc/tgt/conf.d
include /etc/tgt/conf.d/*.conf
include /etc/tgt/stack.d/*

$ ls -al /etc/tgt/stack.d
lrwxrwxrwx 1 root root 30 Jul 23 18:34 /etc/tgt/stack.d -> /opt/stack/data/cinder/volumes

With tgtadm we can list all the local targets and LUNs available:

$ sudo tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2010-10.org.openstack:volume-e506f007-e8eb-43c1-968e-190a6bab8bce
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags: 
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 1074 MB, Block size: 512
            Online: Yes
            Removable media: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/stack-volumes/volume-e506f007-e8eb-43c1-968e-190a6bab8bce
            Backing store flags: 
    Account information:
    ACL information:
        ALL
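
On the compute side, attaching the volume boils down to an open-iscsi login against this target. Roughly what nova-compute does, as a manual equivalent (<cinder-host> is a placeholder for the host running tgtd; on an all-in-one devstack it is the same machine):

$ sudo iscsiadm -m discovery -t sendtargets -p <cinder-host>:3260
$ sudo iscsiadm -m node -T iqn.2010-10.org.openstack:volume-e506f007-e8eb-43c1-968e-190a6bab8bce -p <cinder-host>:3260 --login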

Now we want to use our volume in a VM instance, so let’s create a VM and attach our volume:

$ nova keypair-add --pub-key ~fabien/.ssh/id_rsa.pub fbo
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
$ nova boot --image 0223bf77-5200-4c20-af61-21bc4dd8b2c9 --flavor 1 --key_name fbo --security_groups default myVM
$ nova list
+--------------------------------------+-------+--------+------------+-------------+------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks         |
+--------------------------------------+-------+--------+------------+-------------+------------------+
| 35c1b926-8377-4b62-8df4-f91a2520aad2 | myVM2 | ACTIVE | None       | Running     | private=10.0.0.2 |
+--------------------------------------+-------+--------+------------+-------------+------------------+
$ nova  volume-list
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| c8de072b-fe47-4c02-a61d-3874b7e2d6ce | available | myVolume     | 1    | None        |             |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
$ nova volume-attach 35c1b926-8377-4b62-8df4-f91a2520aad2 c8de072b-fe47-4c02-a61d-3874b7e2d6ce auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| serverId | 35c1b926-8377-4b62-8df4-f91a2520aad2 |
| id       | c8de072b-fe47-4c02-a61d-3874b7e2d6ce |
| volumeId | c8de072b-fe47-4c02-a61d-3874b7e2d6ce |
+----------+--------------------------------------+

The block device should now be available as /dev/vdb in our VM:

$ ssh -i /home/fabien/.ssh/id_rsa cirros@10.0.0.2
$ ls -al /dev/vdb
$ sudo -i
# mkfs.ext3 /dev/vdb
# mount /dev/vdb /mnt
# df -h | grep vdb
/dev/vdb               1007.9M     33.3M    923.4M   3% /mnt
# cd /mnt
# touch a b c
# ls
a           b           c           lost+found
# umount /mnt

Cinder and the LVM driver let the user create volume snapshots. First we need to detach the volume. A snapshot can be taken even while the volume stays attached (--force option), but the result may be inconsistent if some data has not yet been synchronized to the volume:

$ nova volume-detach 35c1b926-8377-4b62-8df4-f91a2520aad2 c8de072b-fe47-4c02-a61d-3874b7e2d6ce
$ cinder snapshot-create --display-name myVolume_snap300713 c8de072b-fe47-4c02-a61d-3874b7e2d6ce
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|      created_at     |      2013-07-30T15:17:58.324203      |
| display_description |                 None                 |
|     display_name    |         myVolume_snap300713          |
|          id         | 65c9394e-0ced-4ff5-a167-c84e52c9d1a9 |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|        status       |               creating               |
|      volume_id      | c8de072b-fe47-4c02-a61d-3874b7e2d6ce |
+---------------------+--------------------------------------+
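
Under the hood the LVM driver takes an LVM snapshot of the volume's logical volume, so a new LV whose name contains the snapshot UUID should show up next to the original one (a check to run locally; exact naming may differ):

$ sudo lvs
# expect an additional snapshot LV in the stack-volumes VG,
# with volume-c8de072b-... listed as its origin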

Let's add more data to our volume and then restore the snapshot:

$ nova volume-attach 35c1b926-8377-4b62-8df4-f91a2520aad2 c8de072b-fe47-4c02-a61d-3874b7e2d6ce auto             
$ ssh -i /home/fabien/.ssh/id_rsa cirros@10.0.0.2
$ sudo mount /dev/vdb /mnt
$ ls /mnt
a           b           c           lost+found
$ sudo touch /mnt/d && sync
$ ls /mnt
a           b           c           d           lost+found

To restore a cinder snapshot we must create a new volume from it:

$ cinder create --snapshot-id 65c9394e-0ced-4ff5-a167-c84e52c9d1a9 --display-name myVolume300713 1
$ cinder list
+--------------------------------------+-----------+----------------+------+-------------+----------+-------------+
|                  ID                  |   Status  |  Display Name  | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+----------------+------+-------------+----------+-------------+
| 7f62258a-1c83-4a51-85d4-44b6235b13b8 | available | myVolume300713 |  1   |     None    |  False   |             |
| c8de072b-fe47-4c02-a61d-3874b7e2d6ce | available |    myVolume    |  1   |     None    |  False   |             |
+--------------------------------------+-----------+----------------+------+-------------+----------+-------------+

The original data has been restored: the file 'd' no longer exists:

$ nova volume-attach 35c1b926-8377-4b62-8df4-f91a2520aad2 7f62258a-1c83-4a51-85d4-44b6235b13b8 auto
$ ssh -i /home/fabien/.ssh/id_rsa cirros@10.0.0.2
$ sudo mount /dev/vdb /mnt
$ ls /mnt
a           b           c           lost+found

Swift ACL usage examples

This blog post is a reminder for swift ACL usage. ACLs in swift are set up by the acl.py middleware. The ACL let the account (or tenant) owner set R/W access rights for authenticated users or even unauthenticated access. The later is really cool to configure data access for anyone. Below we’ll mostly use python-swiftclient CLI to set the ALCs so it’s better to mention that the headers involved in ACL configuration are X-Container-Read and X-Container-Write.
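
For reference, setting these headers directly against the swift API (instead of via the swift CLI) is just a POST on the container; a sketch, where <demo-token> and the proxy URL are placeholders for the container owner's token and your swift endpoint:

$ curl -X POST -H 'X-Auth-Token: <demo-token>' \
       -H 'X-Container-Read: demo:demouser2' \
       http://<proxy>:8080/v1/AUTH_<account>/container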

The examples are performed against a devstack configuration with keystone. Here we prepare our test environment:

Create a new user called 'demouser2':

$ source openrc admin admin
$ keystone  user-role-list --user demouser2 --tenant demo
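
If your devstack does not already have this user, it can be created beforehand with the keystone CLI; something along these lines (the password and exact options are illustrative):

$ keystone user-create --name demouser2 --pass demouser2 \
    --tenant-id $(keystone tenant-list | awk '/ demo / {print $2}')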

For a non-privileged user (we don't give any role to demouser2), operations on the swift account are forbidden:

$ swift --os-tenant-name=demo --os-username=demouser2 --os-password=demouser2 --os-auth-url=http://localhost:5000/v2.0 stat
Account HEAD failed: http://192.168.56.102:8080/v1/AUTH_cbd3ac87d06b4f73b096ae2d4bcbb477 403 Forbidden
$ swift --os-tenant-name=demo --os-username=demouser2 --os-password=demouser2 --os-auth-url=http://localhost:5000/v2.0 list
Account GET failed: http://192.168.56.102:8080/v1/AUTH_cbd3ac87d06b4f73b096ae2d4bcbb477?format=json 403 Forbidden  [first 60 chars of response] Forbidden Access was denied to this resourc

Create a container in the demo account and push an object:

swift --os-tenant-name=demo --os-username=demo --os-password=wxcvbn --os-auth-url=http://localhost:5000/v2.0 post container
swift --os-tenant-name=demo --os-username=demo --os-password=wxcvbn --os-auth-url=http://localhost:5000/v2.0 upload container stackrc

To allow user demouser2 to access that container we must set the ACL to tenant:user, here 'demo:demouser2' (if only the tenant is specified, all users of that tenant are allowed):

swift --os-tenant-name=demo --os-username=demo --os-password=wxcvbn --os-auth-url=http://localhost:5000/v2.0 post container -r demo:demouser2
swift --os-tenant-name=demo --os-username=demo --os-password=wxcvbn --os-auth-url=http://localhost:5000/v2.0 stat container 
  Account: AUTH_cbd3ac87d06b4f73b096ae2d4bcbb477
Container: container
  Objects: 1
    Bytes: 9673
 Read ACL: demo:demouser2
Write ACL: 
  Sync To: 
 Sync Key: 
Accept-Ranges: bytes
X-Timestamp: 1374660280.52319
X-Trans-Id: tx410a244a56e5488da1ba12234e31eb3a
Content-Type: text/plain; charset=utf-8

demouser2 is now able to stat and list the container:

$ swift --os-tenant-name=demo --os-username=demouser2 --os-password=demouser2 --os-auth-url=http://localhost:5000/v2.0 list container
stackrc
$ swift --os-tenant-name=demo --os-username=demouser2 --os-password=demouser2 --os-auth-url=http://localhost:5000/v2.0 stat container
  Account: AUTH_cbd3ac87d06b4f73b096ae2d4bcbb477
Container: container
  Objects: 1
    Bytes: 9673
 Read ACL: 
Write ACL: 
  Sync To: 
 Sync Key: 
Accept-Ranges: bytes
X-Timestamp: 1374660280.52319
X-Trans-Id: tx3bc153f84cb84b4f864c704641077490
Content-Type: text/plain; charset=utf-8

Besides giving rights to an authenticated user, the account owner can open the container for public access:

$ swift --os-tenant-name=demo --os-username=demo --os-password=wxcvbn --os-auth-url=http://localhost:5000/v2.0 post container -r '.r:*'
$ swift --os-tenant-name=demo --os-username=demo --os-password=wxcvbn --os-auth-url=http://localhost:5000/v2.0 stat container
  Account: AUTH_cbd3ac87d06b4f73b096ae2d4bcbb477
Container: container
  Objects: 1
    Bytes: 9673
 Read ACL: .r:*
Write ACL: 
  Sync To: 
 Sync Key: 
Accept-Ranges: bytes
X-Timestamp: 1374660280.52319
X-Trans-Id: txb319121498b742fea15f493ddf67f43d
Content-Type: text/plain; charset=utf-8

Object access then succeeds with any HTTP client (no token provided):

$ curl http://192.168.56.102:8080/v1/AUTH_cbd3ac87d06b4f73b096ae2d4bcbb477/container/stackrc
$ curl -I http://192.168.56.102:8080/v1/AUTH_cbd3ac87d06b4f73b096ae2d4bcbb477/container/stackrc

If we want to allow container listing, i.e. being able to list the objects in a container or even retrieve container stats, we must add the .rlistings ACL:

$ swift --os-tenant-name=demo --os-username=demo --os-password=wxcvbn --os-auth-url=http://localhost:5000/v2.0 post container -r '.r:*,.rlistings'
$ curl -H'Accept: application/json' http://192.168.56.102:8080/v1/AUTH_cbd3ac87d06b4f73b096ae2d4bcbb477/container
[{"hash": "a6e2a0d0cfe5732321366269cb8aec11", "last_modified": "2013-07-24T12:41:20.780320", "bytes": 9673, "name": "stackrc", "content_type": "application/octet-stream"}]
$ curl -I  http://192.168.56.102:8080/v1/AUTH_cbd3ac87d06b4f73b096ae2d4bcbb477/container
HTTP/1.1 204 No Content
Content-Length: 0
X-Container-Object-Count: 1
Accept-Ranges: bytes
X-Timestamp: 1374660280.52319
X-Container-Bytes-Used: 9673
Content-Type: text/plain; charset=utf-8
X-Trans-Id: tx66f89632880c4410851e4d0f9db0cfe0
Date: Wed, 24 Jul 2013 13:10:19 GMT

The '.r' ACL stands for Referer: swift looks at the "Referer" header to decide whether to allow the request.

$ swift --os-tenant-name=demo --os-username=demo --os-password=wxcvbn --os-auth-url=http://localhost:5000/v2.0 post container -r '.r:.openstack.org,.rlistings'

Retrying one of our previous commands, the request is now forbidden:

$ curl http://192.168.56.102:8080/v1/AUTH_cbd3ac87d06b4f73b096ae2d4bcbb477/container
Unauthorized This server could not verify that you are authorized to access the document you requested.

The request will be allowed when the Referer header is set:

$ curl -I -H 'Referer: http://docs.openstack.org' http://192.168.56.102:8080/v1/AUTH_cbd3ac87d06b4f73b096ae2d4bcbb477/container
HTTP/1.1 204 No Content
Content-Length: 0
X-Container-Object-Count: 1
Accept-Ranges: bytes
X-Timestamp: 1374660280.52319
X-Container-Bytes-Used: 9673
Content-Type: text/plain; charset=utf-8
X-Trans-Id: tx6e028ebb44a341e28025b797572ffcea
Date: Wed, 24 Jul 2013 13:18:31 GMT

For write ACLs we can grant access to an authenticated user. Note that Referer rules are not allowed in write ACLs. For these examples we'll use curl instead of the swift client.
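
The write ACL shown below was presumably set with the swift client's -w/--write-acl option, while clearing the read ACL; something like:

$ swift --os-tenant-name=demo --os-username=demo --os-password=wxcvbn --os-auth-url=http://localhost:5000/v2.0 post container -r '' -w demo:demouser2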

$ swift --os-tenant-name=demo --os-username=demo --os-password=wxcvbn --os-auth-url=http://localhost:5000/v2.0 stat container
  Account: AUTH_cbd3ac87d06b4f73b096ae2d4bcbb477
Container: container
  Objects: 3
    Bytes: 10724
 Read ACL: 
Write ACL: demo:demouser2
  Sync To: 
 Sync Key: 
Accept-Ranges: bytes
X-Timestamp: 1374660280.52319
X-Trans-Id: tx09032c7d33824ec0a7d4133b7e9d6fc9
Content-Type: text/plain; charset=utf-8

Retrieve demouser2's token from keystone:

$ T=$(keystone --os-tenant-name=demo --os-username=demouser2 --os-password=demouser2 --os-auth-url=http://localhost:5000/v2.0 token-get | awk '/ id / {print $4}')

The object download is forbidden as there is no rule in the read ACL:

$ curl -i -XGET -H "X-Auth-Token: $T" http://localhost:8080/v1/AUTH_cbd3ac87d06b4f73b096ae2d4bcbb477/container/openrc
HTTP/1.1 403 Forbidden
Content-Length: 73
Content-Type: text/html; charset=UTF-8
X-Trans-Id: txcf044ca23149415787950de5a5d5c3d4
Date: Wed, 24 Jul 2013 15:07:02 GMT

Finally, the object PUT works as expected:

$ curl -i -XPUT -H "X-Auth-Token: $T" --data-binary "bouh" http://localhost:8080/v1/AUTH_cbd3ac87d06b4f73b096ae2d4bcbb477/container/openrc
HTTP/1.1 201 Created
Last-Modified: Wed, 24 Jul 2013 15:07:07 GMT
Content-Length: 0
Etag: c9c5384adec41a13eea91ed4d20d809e
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx9ad48e4b965a4e119d6d85a436a3896d
Date: Wed, 24 Jul 2013 15:07:07 GMT

Swift tempurl middleware

Swift has a middleware called tempurl that allows temporary public access to objects. This feature is useful when an account owner wants to give an external user access to some objects of his account. A common usage, when swift is the storage backend of a web application, is to build the temporary URLs server side and let the browser talk directly to swift for a limited time. Temporary URLs can only be used with the GET/PUT/HEAD HTTP verbs.
Here we'll see how to use temporary URLs. Our demo swift installation is a devstack.

First we push two objects into a container with python-swiftclient.

$ swift -A http://192.168.56.101:8080/auth/v1.0 -U admin:admin -K admin upload container openrc
openrc
$ swift -A http://192.168.56.101:8080/auth/v1.0 -U admin:admin -K admin upload container stackrc
stackrc

When an object is requested via a temporary URL, the tempurl middleware looks at the Temp-URL-Key and Temp-URL-Key-2 account metadata to allow or deny access. These keys are used to build the temporary signature. Being able to set two keys on an account makes key rotation safer: the previous key stays in place and lets the middleware keep validating the temporary URLs that were built with it.

$ swift -A http://192.168.56.101:8080/auth/v1.0 -U admin:admin -K admin post -m Temp-URL-Key:thekey
$ swift -A http://192.168.56.101:8080/auth/v1.0 -U admin:admin -K admin post -m Temp-URL-Key-2:thekey2

Now we want to allow access to the object called openrc in the container 'container', so we need to create the temporary URL. Swift comes with a tool called swift-temp-url that builds it for us: give it the allowed HTTP method, the validity duration in seconds, the object path and one of the keys set on the account, and it will create the temporary URL for you.

$ ./swift-temp-url GET 6000 /v1/AUTH_admin/container/openrc thekey
/v1/AUTH_admin/container/openrc?temp_url_sig=97d6512e86dbccdfba315172a3c7dcf7463253b9&temp_url_expires=1374241736

The temp_url_sig is the important part: it is an HMAC-SHA1 hash of the elements the account owner has allowed, namely the HTTP method, the expiration timestamp and the object path. Tampering with the URL, for instance by changing the object path, therefore results in a 401 HTTP error.
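
To make the signature scheme concrete, here is a minimal way to recompute it by hand with openssl; this should be equivalent to what swift-temp-url does (the key and object path are the ones used above):

$ METHOD=GET
$ EXPIRES=$(( $(date +%s) + 6000 ))
$ OBJ_PATH=/v1/AUTH_admin/container/openrc
$ printf '%s\n%s\n%s' "$METHOD" "$EXPIRES" "$OBJ_PATH" | openssl dgst -sha1 -hmac thekey

The hex digest printed is the temp_url_sig, and EXPIRES becomes the temp_url_expires query parameter.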

The object can then be retrieved with any HTTP client:

$ curl http://192.168.56.101:8080/v1/AUTH_admin/container/openrc?temp_url_sig=97d6512e86dbccdfba315172a3c7dcf7463253b9\&temp_url_expires=1374241736

To allow access to our second object 'stackrc', we just compute a new temporary URL with swift-temp-url:

$ ./swift-temp-url GET 6000 /v1/AUTH_admin/container/stackrc thekey2
/v1/AUTH_admin/container/stackrc?temp_url_sig=f3a665279ff0147a5c885e15cee08939f9eda235&temp_url_expires=1374242123

stackrc, and any other object in this account, can be exposed this way depending on the path given to the swift-temp-url command.

As said above, the PUT method can also be allowed, so a temporary URL can let a user push data to a specified path. Just create a new temporary URL with PUT as the method argument.

$ ./swift-temp-url PUT 6000 /v1/AUTH_admin/container/uploadedfile thekey2
/v1/AUTH_admin/container/uploadedfile?temp_url_sig=d13a57b34e8da4b29b90e682cfe5af0187aaeea9&temp_url_expires=1374242636

$ curl -i -XPUT -d 'myuploadeddata' http://192.168.56.101:8080/v1/AUTH_admin/container/uploadedfile?temp_url_sig=d13a57b34e8da4b29b90e682cfe5af0187aaeea9\&temp_url_expires=1374242636
HTTP/1.1 201 Created
Last-Modified: Fri, 19 Jul 2013 12:24:51 GMT
Content-Length: 0
Etag: 73da0271131e15f1f02d5c0b3b14ca76
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx5ebb3ad8cd664153861f0-0051e93013
Date: Fri, 19 Jul 2013 12:24:51 GMT

In addition to the GET and PUT verbs, once one of those is allowed we can perform a HEAD request to retrieve the object's headers.

$ curl -I -XHEAD http://192.168.56.101:8080/v1/AUTH_admin/container/openrc?temp_url_sig=12d78498a020a92b3a535b1ea7d99047ae84b3a4\&temp_url_expires=1374242076
HTTP/1.1 200 OK
Content-Length: 3083
Accept-Ranges: bytes
Last-Modified: Fri, 19 Jul 2013 11:46:55 GMT
Etag: fd310d3418adb42e9075a6192d6804c8
X-Timestamp: 1374234415.34971
Content-Type: application/octet-stream
X-Trans-Id: tx708b7df70eda4c10bc88d-0051e93a0f
Date: Fri, 19 Jul 2013 13:07:27 GMT

A service called Cloudwatt uses swift as its object storage, and as I have an account there I tried the tempurl functionality. In Cloudwatt's dashboard, in the account information, you'll find your tenant ID, username, password and the URL of the keystone identity service (the examples above were based on swift's built-in identity service).

First, when getting stats on my account, I saw that Cloudwatt had already set a Temp-Url-Key meta for me. Nice, I'll use it as is.

$ swift --os-auth-url=https://identity.fr0.cloudwatt.com:443/v2.0 --os-username=fabien.boucher@enovance.com --os-password=******* --os-tenant-name=******* stat
Account: AUTH_e31d1c545b1943509f535a3803
Containers: 2
Objects: 161
Bytes: 148371033
Meta Temp-Url-Key: 824810b0-9217-11e2-9c16
X-Timestamp: 1363864042.26270
X-Trans-Id: tx40f6c3b3edef4e1c99f6550afab579ed
Content-Type: text/plain; charset=utf-8
Accept-Ranges: bytes

So I push a file into a container called 'testcont':

$ swift --os-auth-url=https://identity.fr0.cloudwatt.com:443/v2.0 --os-username=fabien.boucher@enovance.com --os-password=******* --os-tenant-name=******* upload testcont openrc

With the swift-temp-url tool I can create the temporary URL:

$ ./swift-temp-url GET 6000 /v1/AUTH_e31d1c545b1943509f535a3803be6f7e/testcont/openrc 824810b0-9217-11e2-9c16

Now I'm able to retrieve the file via the temporary URL. Note that the hostname is not the same as above, since here we use the storage URL instead of the keystone URL:

$ curl https://storage.fr0.cloudwatt.com:443/v1/AUTH_e31d1c545b1943509f535a3803/testcont/openrc?temp_url_sig=65dd157a46c5a4ca5f606a1d3dc9851ccd5b3def\&temp_url_expires=1374341906