Migrate your workloads to the cloud
Preparing to Move
How Good Is the Automation?
Once you have determined what you actually need in the cloud, the next step is to ask how to get the existing setup up there. Highly automated operations make this far easier than their manually run counterparts. After all, operating system images are available for all relevant Linux varieties in the popular clouds. If you can start a series of VMs and fire your preconfigured automation at them, all that remains is migrating your existing data; beyond that, your work is done.
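What this first step might look like on AWS is shown in the following minimal boto3 sketch. The AMI ID, the key pair, and the ansible-pull repository in the cloud-init user data are placeholders for your own environment, and the Ansible-based bootstrap is only one of several ways of handing a freshly booted VM over to existing automation.

    # Sketch: start a series of VMs and hand them over to existing automation.
    # Assumes configured AWS credentials; AMI ID, key pair, and the
    # automation repository are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-central-1")

    # cloud-init user data that pulls and applies the preconfigured automation
    user_data = """#cloud-config
    runcmd:
      - apt-get update && apt-get install -y ansible git
      - ansible-pull -U https://example.com/your-automation.git site.yml
    """

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder image ID
        InstanceType="t3.medium",
        MinCount=3, MaxCount=3,           # one call starts the whole series
        KeyName="admin-key",              # placeholder key pair
        UserData=user_data,
    )

    for instance in response["Instances"]:
        print(instance["InstanceId"])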
If the degree of automation is low, as is often the case in conventional environments, things get trickier. When time is of the essence, admins will hardly have the opportunity to retrofit automation into the existing setup; that task has to wait until after the migration, as part of cloud-specific optimization. With a setup that has very little automation, your first steps in the cloud will therefore still be classic manual work.
Whether manual or automated, in the next step of the move, you ideally want to create a virtual environment in your section of the cloud that matches your existing setup in terms of type, scope, and performance.
Cloud Differences
If you migrate existing setups, you should also keep in mind the many small differences between clouds and conventional environments, such as the network configuration already mentioned. In clouds, it is based on DHCP; static assignment of IPs inside a VM is feasible with some tinkering, but it keeps you from managing the networks with the cloud APIs, because the cloud knows nothing about an IP address that is configured only in the guest.
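If a service really does need a fixed additional address, the cleaner approach is to register it through the cloud API instead of configuring it only inside the guest, so that the cloud remains aware of it. A minimal boto3 sketch, in which the network interface ID and the address are placeholders:

    # Sketch: register an additional private IP with AWS rather than
    # configuring it only inside the VM, so the cloud API knows about it.
    # Network interface ID and address are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-central-1")

    ec2.assign_private_ip_addresses(
        NetworkInterfaceId="eni-0123456789abcdef0",
        PrivateIpAddresses=["10.0.1.50"],
        AllowReassignment=True,  # lets the address move to another interface later
    )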
High availability is a tricky topic in clouds. Outside of clouds, a cluster manager such as Pacemaker is often used. If you want redundant data storage, you typically turn to shared storage with RAID or software solutions like the Distributed Replicated Block Device (DRBD). The storage part is easy to implement in clouds, because most clouds offer a volume service that is inherently redundant.
By storing the relevant data on a volume that can be attached to VMs as required, you can effectively resolve this part of the problem.
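How you talk to the volume service differs from provider to provider. The following boto3 sketch, with placeholder IDs, shows the principle on AWS: create a volume, wait until it is available, and attach it to whichever VM currently needs the data.

    # Sketch: create a redundant cloud volume and attach it to a VM on demand.
    # Availability zone, size, and instance ID are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-central-1")

    volume = ec2.create_volume(AvailabilityZone="eu-central-1a",
                               Size=100, VolumeType="gp3")

    # Wait until the volume is ready before attaching it
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

    ec2.attach_volume(VolumeId=volume["VolumeId"],
                      InstanceId="i-0123456789abcdef0",
                      Device="/dev/xvdf")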
More difficult is the question of how typical services, such as a database, can be set up for high availability. The classic combination of Pacemaker and Corosync, which switches an IP address back and forth between hosts, cannot be used in AWS, because the additional IP is not visible to other VMs on the same subnet unless it is permanently assigned to a system through the AWS API.
However, that contradicts the idea of an IP address that moves back and forth, and letting Pacemaker talk directly to the AWS API is such a makeshift solution that nobody has taken it seriously or implemented it to date.
If you want to implement high availability in clouds, you are almost inevitably forced to rely on the cloud-specific tools of the respective environment. Using AWS as an example, an online tutorial [2] shows what this can look like: the author explains how to use built-in AWS resources to obtain a highly available MariaDB instance with automatic failover.
How Data Gets into the Cloud
Quite apart from the cloud environment's technology, you also need to answer the question of how the existing data finds its way in. What sounds like a trivial task can become a real pitfall; how complex the process actually is depends on the volume of data to be transferred.
For example, if the customer database is only a few gigabytes, copying it once from A to B within a maintenance window is not a big problem. If the volume of data is larger, however, a different approach is needed, because once the data has been synchronized from the old setup to the new one, it must no longer change on the old side.
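For the simple case, the one-off copy can be as unspectacular as a database dump piped straight to the new host during the maintenance window. The following sketch drives mysqldump and the MariaDB client from Python; the hostnames, credentials, and database name are placeholders.

    # Sketch: one-off copy of a small database within a maintenance window.
    # Hostnames, credentials, and database name are placeholders.
    import subprocess

    dump = subprocess.Popen(
        ["mysqldump", "--single-transaction", "-h", "old-db.example.com",
         "-u", "migrator", "-psecret", "customers"],
        stdout=subprocess.PIPE,
    )

    # Pipe the dump straight into the MariaDB instance running in the cloud
    subprocess.run(
        ["mysql", "-h", "new-db.cloud.example.com", "-u", "migrator",
         "-psecret", "customers"],
        stdin=dump.stdout, check=True,
    )
    dump.stdout.close()
    dump.wait()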
One possible way out of this dilemma is to operate a database in the target setup as a slave of what is still the production database. Most cloud providers now have a VPN-as-a-Service function that lets you connect over a VPN to the virtual network of your environment in the cloud; your own client then behaves as if it were just another VM. If you connect the old and the new network in this way, a MariaDB instance can run in slave mode on the new network and receive updates from the master.
This approach also solves the problem of the new setup not having the latest data at the moment the switchover takes place. If you copied the data manually from A to B in the traditional way, you would have to trigger another sync shortly before the go-live of site B to pick up whatever changed since the last sync.
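On the MariaDB side, the slave setup boils down to a single CHANGE MASTER TO statement that points the cloud instance at the production master across the VPN. The following sketch uses the pymysql module; the addresses, credentials, and binlog coordinates are placeholders, and replication must already be enabled on the master.

    # Sketch: point the MariaDB instance in the cloud at the production
    # master across the VPN. Addresses, credentials, and binlog coordinates
    # are placeholders; the master must have binary logging enabled.
    import pymysql

    conn = pymysql.connect(host="10.0.1.20", user="root", password="secret")
    with conn.cursor() as cur:
        # 192.168.10.5 is the production master in the old setup, reachable via VPN
        cur.execute(
            "CHANGE MASTER TO MASTER_HOST='192.168.10.5', "
            "MASTER_USER='repl', MASTER_PASSWORD='replsecret', "
            "MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=4"
        )
        cur.execute("START SLAVE")
    conn.close()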
The question of how classic assets like image files find their way into the cloud, and how you configure your setup so that every system that needs access to this data later actually has it, is not quite as easy to answer. Sure, if you have been using NFS, you can also set up an NFS server in the cloud and have it host your data assets, but the copy problem remains.
If you are setting up a VPN anyway, you can use tools like Rsync to create incremental copies of the data. If the source and target systems can reach each other on the network and at least one of them has a public IP, Rsync can usually also be used over SSH. However, a final sync is then necessary, as with the database.
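A sketch of such an incremental transfer, driven from Python and repeated as often as needed until the final sync before go-live; the paths and the hostname are placeholders:

    # Sketch: incremental transfer of image assets to the NFS server in the
    # cloud via rsync over SSH. Paths and hostname are placeholders; only
    # changed files are copied, so the final sync stays short.
    import subprocess

    subprocess.run(
        ["rsync", "-az", "--delete",
         "/srv/assets/",
         "migrator@nfs.cloud.example.com:/srv/assets/"],
        check=True,
    )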