MetaCentrum Cloud Documentation

Advanced Features

The following guide will introduce you to advanced features available in MetaCentrum Cloud. For basic instructions on how to start a virtual machine instance, see Quick Start.

Orchestration

The OpenStack orchestration service can be used to deploy and manage complex virtual topologies as single entities, including basic auto-scaling and self-healing.

This feature is provided as-is, and its configuration is entirely the responsibility of the user.

For details, refer to the official documentation.

Image upload

We don’t support uploading personal images by default. MetaCentrum Cloud images are optimized for running in the cloud, and we recommend that users customize them instead of building their own images from scratch. If you need to upload a custom image, please contact user support for the appropriate permissions.

Instructions for uploading a custom image:

  1. Upload only images in RAW format (not qcow2, vmdk, etc.).

  2. Upload is supported only through the OpenStack CLI with Application Credentials.

  3. Each image needs to contain the following metadata:

hw_scsi_model=virtio-scsi
hw_disk_bus=scsi
hw_rng_model=virtio
hw_qemu_guest_agent=yes
os_require_quiesce=yes

The following metadata also needs to be set correctly (consult the official documentation), or instances won’t start:

os_type=linux # example
os_distro=ubuntu # example

  4. The image should contain the cloud-init, qemu-guest-agent, and growpart tools.

  5. OpenStack resizes the instance disk after start, so the image shouldn’t contain any empty partitions or free space.

For a more detailed explanation of working with images via the CLI, please refer to https://docs.openstack.org/python-openstackclient/pike/cli/command-objects/image.html.
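
A minimal sketch of the whole upload, assuming a qcow2 source image that first has to be converted to RAW (file and image names are illustrative):

# convert the source image to RAW, as required above
qemu-img convert -f qcow2 -O raw my-image.qcow2 my-image.raw

# upload it with the required metadata (the image stays private by default)
openstack image create \
  --file my-image.raw \
  --disk-format raw \
  --property hw_scsi_model=virtio-scsi \
  --property hw_disk_bus=scsi \
  --property hw_rng_model=virtio \
  --property hw_qemu_guest_agent=yes \
  --property os_require_quiesce=yes \
  --property os_type=linux \
  --property os_distro=ubuntu \
  my-image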

Image visibility

In OpenStack, an image can have one of four visibilities: public, private, shared, or community.

You can view these images via the CLI or in the dashboard.

In the dashboard, visit the Images section, where you can browse the listed images and/or set search criteria in the search bar. The Visibility parameter lets you specify the visibility of the image you are searching for. The individual visibility values are explained below.
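
Via the CLI, each visibility has a corresponding list filter:

openstack image list --public
openstack image list --private
openstack image list --shared
openstack image list --community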

1. Public images

A public image is visible and readable to everyone. Only OpenStack admins can modify it.

2. Private images

A private image is visible only to its owner. This is the default setting for all newly created images.

3. Shared images

A shared image is visible only to its owner and to the projects the owner has specified. For how to share an image between projects, please read the tutorial below. Image owners are responsible for managing shared images.

4. Community images

A community image is accessible to everyone. Image owners are responsible for managing community images. Community images are visible in the dashboard using the Visibility: Community query. These images can be listed via the CLI command:

openstack image list --community

This is especially beneficial when a great number of users should get access to an image, or when you own an old image that some users might still require. In that case, you can make the old image a community image and set the new one as the default.

WARNING

To create or upload such an image, you must have the image_uploader right.

Creating a new Community image can look like this:

openstack image create \
  --file test-cirros.raw \
  --property hw_scsi_model=virtio-scsi \
  --property hw_disk_bus=scsi \
  --property hw_rng_model=virtio \
  --property hw_qemu_guest_agent=yes \
  --property os_require_quiesce=yes \
  --property os_type=linux \
  --community \
  test-cirros

Note that references to existing community images should use <image-id> instead of <image-name>. See the upstream image visibility design document for more details.

Image sharing between projects

There are two ways of sharing an OpenStack Glance image among projects: using the shared or the community image visibility.

Shared image approach

Image sharing allows you to share an image between different projects, making it possible to launch instances from that image in those projects with other collaborators etc. As mentioned in the section about the CLI, you will need to use your OpenStack credentials from an openrc or clouds.yaml file.

Then to share an image you need to know its ID, which you can find with the command:

openstack image show <name_of_image>

where name_of_image is the name of the image you want to share.

After that, you will also need to know the ID of the project you want to share your image with. If you do not know it, the following command can help you find it:

openstack project list | grep <name_of_other_project>

where <name_of_other_project> is the name of the other project. Its ID will show up in the first column.

Now, with all the necessary IDs, you can share your image. First, set the image’s visibility attribute to shared with the following command:

openstack image set --shared <image_ID>

And now you can share it with the other project by typing this command:

openstack image add project <image_ID> <ID_of_other_project>

where ID_of_other_project is the ID of the project you want to share the image with.

Now you can check whether the user of the other project has accepted your image with the command:

openstack image member list <image_ID>

If the other user has not accepted your image yet, the status column will contain the value pending.

Accepting shared image

To accept a shared image, you need to know the <image_ID> of the image that the other person wants to share with you. To accept the shared image into your project, use the following command:

openstack image set --accept <image_ID>

You can then verify that by listing your images:

openstack image list | grep <image_ID>

Unshare shared image

As the owner of a shared image, you can check all projects that have access to it with the following command:

openstack image member list <image_ID>

When you find the <ID_project_to_unshare> of the project, you can revoke that project’s access to the shared image with the command:

openstack image remove project <image_ID> <ID_project_to_unshare>

Community image approach

This approach is very simple:

  1. Mark an image as community (openstack image set --community <image_ID>)
  2. Now everyone can use the community image, but there are two limitations:
    • to list community images you have to specify the visibility (in the UI: Visibility: Community; in the CLI: openstack image list --community)
    • to use a community image you have to reference it by <image_ID> (references via <image_name> result in NOT FOUND)

Add SWAP file to instance

By default, newly created VMs do not have a swap partition. If you need to add swap to your system, you can download and run a script that creates a swap file on your VM.
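
What such a script typically does, as a minimal sketch (the size and path are illustrative):

sudo fallocate -l 2G /swapfile                                # allocate a 2 GiB swap file
sudo chmod 600 /swapfile                                      # restrict permissions
sudo mkswap /swapfile                                         # format it as swap
sudo swapon /swapfile                                         # enable it immediately
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab    # persist across reboots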

Local SSDs

The default MetaCentrum Cloud storage is implemented via a CEPH storage cluster deployed on top of HDDs. This configuration should be sufficient for most cases. For instances that require high throughput and IOPS, it is possible to utilize the hypervisors’ local SSDs. Requirements for instances on hypervisor-local SSDs:

  • instances can be deployed only via the API (CLI, Ansible, Terraform, …); instances deployed via the web GUI (Horizon) will always use CEPH for their storage (see the example below)
  • supported only by flavors with the ssd-ephem suffix (e.g. hpc.4core-16ram-ssd-ephem)
  • instances can be rebooted without prior notice, or you may be required to delete them
  • you can request this feature when asking for a new project, or for an existing project, at cloud@metacentrum.cz
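
A hedged example of booting such an instance via the CLI (the image, network, and key names are illustrative):

openstack server create \
  --flavor hpc.4core-16ram-ssd-ephem \
  --image "Debian 10" \
  --network <internal_network> \
  --key-name <your_keypair> \
  my-ssd-instance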

Affinity policy

An affinity policy is a tool users can use to control whether the nodes of a cluster are deployed on the same physical machine or spread across different physical machines. Deploying on the same machine can be beneficial if you need fast communication between nodes; spreading them out helps with load balancing, high availability, etc. For more info please refer to https://docs.openstack.org/senlin/train/scenarios/affinity.html.
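
The linked document covers Senlin clusters; a lightweight alternative, sketched here under the assumption that plain Nova server groups are enabled in your project, uses scheduler hints:

# create a server group with the desired policy (affinity or anti-affinity)
openstack server group create --policy anti-affinity my-cluster-group

# boot instances into the group via a scheduler hint (other options omitted)
openstack server create --flavor standard.small --image "Debian 10" \
  --hint group=<server_group_ID> node1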

LBaaS - OpenStack Octavia

Load Balancer is a tool used for distributing a set of tasks over a particular set of resources. Its main goal is to find the optimal use of resources and make the processing of particular tasks more efficient.

In the following example, you can see how a basic HTTP load balancer is deployed via the CLI.

Requirements:

  • 2 instances connected to the same internal subnet and configured with an HTTP application on TCP port 80

First, create the load balancer:

openstack loadbalancer create --name my_lb --vip-subnet-id <external_subnet_ID>

where <external_subnet_ID> is the ID of an external shared subnet created by the cloud admins and reachable from the Internet.

You can check the newly created Load Balancer by running the following command:

openstack loadbalancer show my_lb

Now you must create a listener on port 80 to enable incoming traffic, using the following command:

openstack loadbalancer listener create --name listener_http --protocol HTTP --protocol-port 80 my_lb

Next, you must add a pool to the created listener to set up the backend configuration for the load balancer. You can do it with the following command:

openstack loadbalancer pool create --name pool_http --lb-algorithm ROUND_ROBIN --listener listener_http --protocol HTTP

Here you created a pool using the Round Robin algorithm for load balancing.

And now you must add both nodes as members of the load balancer:

openstack loadbalancer member create --subnet-id <internal_subnet_ID> --address 192.168.50.15 --protocol-port 80 pool_http
openstack loadbalancer member create --subnet-id <internal_subnet_ID> --address 192.168.50.16 --protocol-port 80 pool_http

where <internal_subnet_ID> is the ID of the internal subnet used by your instances and --address specifies the address of the concrete instance.
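
Optionally, you can attach a health monitor so the load balancer stops routing traffic to unresponsive members (the parameters here are illustrative; see the cookbook linked below):

openstack loadbalancer healthmonitor create \
  --delay 5 --max-retries 3 --timeout 4 \
  --type HTTP --url-path / pool_http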

For more info, please refer to https://docs.openstack.org/octavia/train/user/guides/basic-cookbook.html#basic-lb-with-hm-and-fip.

NOTICE:

It can happen that the load balancer is running but connections fail because the traffic is not allowed by security groups. To prevent this, don’t forget to apply a Neutron security group to the amphorae created on the LB network so that traffic can reach the configured load balancer. See the load balancer deployment walkthrough for more details.
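
A sketch of one way to do that (the security group name is illustrative; the VIP port ID is shown by openstack loadbalancer show my_lb):

openstack security group create lb-http-sg
openstack security group rule create --protocol tcp --dst-port 80 lb-http-sg
openstack port set --security-group lb-http-sg <vip_port_ID>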

LBaaS (Load Balancer as a Service) provides the user with a load balancing service that can be fully managed via the OpenStack API (some basic tasks are also supported by the GUI). Core benefits:

  • creation and management of load balancer resources can easily be automated via the API or existing tools like Ansible or Terraform
  • applications can easily be scaled by starting up more OpenStack instances and registering them into the load balancer
  • saving of public IPv4 addresses - you can deploy one load balancer with a single public IP and serve multiple services on multiple pools of instances, distinguished by TCP/UDP port or L7 policies

This feature is provided as-is, and its configuration is entirely the responsibility of the user.

Official documentation for LBaaS (Octavia) service - https://docs.openstack.org/octavia/latest/user/index.html

Cloud orchestration tools

Terraform

Terraform is one of the most popular orchestration tools for creating and managing cloud infrastructure. It is capable of greatly simplifying cloud operations, and gives you the option, if something goes wrong, to easily rebuild your cloud infrastructure.

It manages resources like virtual machines, DNS records, etc.

It is managed through configuration templates that describe its tasks and resources, saved as *.tf files. If the configuration changes, Terraform detects the difference and plans the additional operations needed to apply those changes.

Here is an example of what such a configuration file can look like:

variable "image" {
default = "Debian 10"
}

variable "flavor" {
default = "standard.small"
}

variable "ssh_key_file" {
default = "~/.ssh/id_rsa"
}

You can use the OpenStack Provider, a plugin for managing the resources OpenStack supports via Terraform. Terraform has an advantage over Heat in that it can also be used with other platforms, not only OpenStack.
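
A minimal sketch of how the variables above might be consumed, assuming the OpenStack Provider is configured (the resource and network names are illustrative):

# register an SSH keypair derived from the configured key file
resource "openstack_compute_keypair_v2" "key" {
  name       = "tf-example-key"
  public_key = file("${var.ssh_key_file}.pub")
}

# boot a single instance using the image and flavor variables
resource "openstack_compute_instance_v2" "vm" {
  name        = "tf-example"
  image_name  = var.image
  flavor_name = var.flavor
  key_pair    = openstack_compute_keypair_v2.key.name

  network {
    name = "<internal_network>"
  }
}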

For more detail please refer to https://registry.terraform.io/providers/terraform-provider-openstack/openstack/latest/docs and https://www.terraform.io/intro/index.html.

Heat

Heat is another orchestration tool used for managing cloud resources. This one is OpenStack-exclusive, so you can’t use it anywhere else. Just like Terraform, it is capable of simplifying orchestration operations in your cloud infrastructure.

It also uses configuration templates for the specification of information about resources and tasks. You can manage resources like servers, floating IPs, volumes, security groups, etc. via Heat.

Here is an example of a Heat configuration template in the form of a *.yaml file:

heat_template_version: 2021-04-06

description: Test template

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      key_name: id_rsa
      image: Debian10_image
      flavor: standard.small
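
Assuming the template is saved as test_template.yaml and the Heat CLI plugin (python-heatclient) is installed, you can launch it like this:

openstack stack create -t test_template.yaml my_stack
openstack stack show my_stack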

You can find more information here: https://wiki.openstack.org/wiki/Heat.

Object storage management

OpenStack supports object storage based on OpenStack Swift. An object storage container is created by clicking +Container on the Object storage containers page.

Every object typically contains data along with metadata and a unique global identifier to access it. OpenStack allows you to upload your files via the HTTPS protocol. There are two ways of managing a created object storage container:

  1. Use OpenStack component Swift

  2. Use S3 API

In both cases, you will need application credentials to be able to manage your data.

Swift credentials

The easiest way to generate Swift storage credentials is through the MetaCentrum Cloud dashboard. You can generate application credentials as described here. You must have the heat_stack_owner role.
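
With those credentials sourced, containers and objects can also be managed directly from the OpenStack CLI; a brief illustrative session (names are placeholders):

openstack container create my-container
openstack object create my-container my-file.txt
openstack object list my-container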

S3 credentials

If you want to use the S3 API, you will need to generate ec2 credentials for access. Note that to generate ec2 credentials you will also need credentials containing the heat_stack_owner role. Once you have sourced your credentials for the CLI, you can generate ec2 credentials with the following command:

$ openstack ec2 credentials create
+------------+------------------------------------------------------------------+
| Field      | Value                                                            |
+------------+------------------------------------------------------------------+
| access     | 896**************************651                                 |
| project_id | f0c**************************508                                 |
| secret     | 336**************************49c                                 |
...
| user_id    | e65***********************************************************6a |
+------------+------------------------------------------------------------------+

Then you may use one of the S3 clients (MinIO client mc, s3cmd, …). Running the MinIO client against the created object storage container is very easy:

$ mc config host add swift-s3 https://object-store.cloud.muni.cz 896**************************651 336**************************49c --api S3v2
Added `swift-s3` successfully.

$ mc ls swift-s3
[2021-04-19 15:13:45 CEST]     0B freznicek-test/

The s3cmd client requires a configuration file that looks like this:

[default]
access_key = 896**************************651
secret_key = 336**************************49c
host_base = object-store.cloud.muni.cz
host_bucket = object-store.cloud.muni.cz
use_https = True
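
With that configuration saved (s3cmd looks for ~/.s3cfg by default), basic usage looks like this (bucket and file names are illustrative):

s3cmd mb s3://my-bucket
s3cmd put my-file.txt s3://my-bucket
s3cmd ls s3://my-bucket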

For more info please refer to https://docs.openstack.org/swift/latest/s3_compat.html and https://docs.openstack.org/train/config-reference/object-storage/configure-s3.html.