Tuesday, May 16, 2017

What do I need to consider when integrating Public Cloud? The Hybrid Conundrum: Part 1

Having worked through these challenges with various customers, I thought it would be a good idea to share. When integrating Public Cloud there are a number of “shoulds and should nots” that warrant some consideration.

Perhaps first and foremost is: what goes where? Private, Public, or managed by a third party? Those of you who remember the early days of virtualization likely remember capacity assessments, which described what was needed to virtualize a given set of workloads. While the information required has changed, the process is very similar. Today’s assessment software from vendors like Cloudamize takes a similar workload-assessment approach but provides a different set of outputs that are important to Cloud.

Optimizing the workload is still very important, as even with virtualization we tend to over-assign resources. In Public Cloud every excess has a price point, so right-sizing performance has a direct impact on cost. In addition, when you build a virtual instance in Public Cloud it is a layering of components that each have a specific performance characteristic. For example, Public Cloud offers different storage performance tiers, on top of which you add virtual instance classes that are predefined with a set number of CPUs and a set memory configuration. Building without performance input leads to virtual instances that may underperform or cost too much.
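To make the cost of over-assignment concrete, here is a minimal sketch comparing the monthly bill of a right-sized instance against an over-assigned one. All class names and hourly prices are hypothetical placeholders, not actual Public Cloud rates:

```python
# Sketch: why right-sizing matters. Prices and class specs are invented
# for illustration only -- they are not real provider rates.

HOURS_PER_MONTH = 730

# Hypothetical instance classes: vCPUs, memory, and an hourly price.
INSTANCE_CLASSES = {
    "small":  {"vcpus": 2, "memory_gb": 8,  "hourly_usd": 0.10},
    "medium": {"vcpus": 4, "memory_gb": 16, "hourly_usd": 0.20},
    "large":  {"vcpus": 8, "memory_gb": 32, "hourly_usd": 0.40},
}

def monthly_cost(instance_class: str) -> float:
    """Monthly cost of running one instance of the given class 24x7."""
    return INSTANCE_CLASSES[instance_class]["hourly_usd"] * HOURS_PER_MONTH

# A VM whose assessed peak is 3 vCPUs / 10 GB fits in "medium"; picking
# "large" just in case doubles the monthly spend for unused headroom.
print(round(monthly_cost("medium"), 2))  # 146.0
print(round(monthly_cost("large"), 2))   # 292.0
```

Unlike an on-premises host, where idle headroom is a sunk cost, in Public Cloud that headroom is billed every hour, which is exactly why the assessment data matters.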

Because virtual instances in Public Cloud are sold in set T-shirt sizes or classes, having a tool like Cloudamize to translate from a VMware VM to an Azure virtual instance class can be a great starting point. Another characteristic of Public Cloud providers is that while they are very accommodating of ingress traffic (traffic coming in), they typically charge for most egress traffic (traffic going out).
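The T-shirt-size translation can be illustrated with a toy version of what an assessment tool does: pick the smallest class whose CPU and memory both cover the VM's observed peaks. The class names and specs below are made up for the example; they are not actual Azure sizes:

```python
# Toy "T-shirt size" translation: map a VM's observed peak demand to the
# smallest class that fits. Class names/specs are hypothetical.

# (name, vcpus, memory_gb), ordered smallest to largest.
SIZES = [
    ("XS", 1, 2),
    ("S",  2, 4),
    ("M",  4, 8),
    ("L",  8, 16),
    ("XL", 16, 32),
]

def right_size(peak_vcpus: int, peak_memory_gb: float) -> str:
    """Return the smallest class whose CPU and memory both cover the peaks."""
    for name, vcpus, mem in SIZES:
        if vcpus >= peak_vcpus and mem >= peak_memory_gb:
            return name
    raise ValueError("workload exceeds the largest available class")

# A VM provisioned with 8 vCPUs but peaking at only 3 vCPUs / 6 GB maps
# to "M", not "L" -- the over-assignment never reaches the Cloud bill.
print(right_size(3, 6))  # M
```

The key point is that the input is measured peak demand, not what was provisioned in vSphere; the two routinely differ.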

If we think of our business applications as a bunch of chatty VMs, it is important to know who is talking to whom. This allows us to ensure that all application interdependencies are migrated together, which cuts down on the cost and volume of egress traffic between the Public Cloud and the Enterprise datacenter. This is another capability of a good assessment tool: the ability to identify related application traffic flows between a group of VMs.
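The grouping step can be sketched as a connected-components pass over the observed traffic flows: any VMs linked by chatter belong in the same migration bundle, so their traffic never crosses the Cloud/datacenter boundary. The VM names and flows here are invented for illustration:

```python
# Sketch: group chatty VMs into migration bundles using the observed
# VM-to-VM flows as an undirected graph. Names/flows are hypothetical.

from collections import defaultdict

def migration_groups(flows):
    """Return sorted groups of VMs connected by observed traffic."""
    graph = defaultdict(set)
    for a, b in flows:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for vm in graph:
        if vm in seen:
            continue
        # Walk everything reachable from this VM (iterative DFS).
        group, stack = set(), [vm]
        while stack:
            node = stack.pop()
            if node in group:
                continue
            group.add(node)
            stack.extend(graph[node])
        seen |= group
        groups.append(sorted(group))
    return sorted(groups)

# web1 talks to app1, app1 to db1; backup1 talks only to db2.
flows = [("web1", "app1"), ("app1", "db1"), ("backup1", "db2")]
print(migration_groups(flows))
# [['app1', 'db1', 'web1'], ['backup1', 'db2']]
```

A real assessment tool also weighs the flows by volume, since a heavy flow left straddling the boundary costs far more in egress charges than a light one.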

Looking at the relationships between the VMs and applications allows us to consider whether a workload should run in the Enterprise or on Public Cloud. Often this requires a look at the empirical data along with some reasoning. For example, if I have a legacy application that I will continue to use until I cut over to a new Cloud-based application, should I migrate it to Public Cloud? If I have VMs providing backup services in the Enterprise, should these migrate? In what order, and what do I need in place for the actual migration? We will have a look at these more carefully in my next post @podoherty.

Saturday, September 17, 2016

VMware Integrated OpenStack 3.0 “VIO”: Pete Cruz

OpenStack is typically used to repatriate workloads from Public Cloud. It really does require the entire OpenStack framework along with the ability to manage and monitor it. You need visibility to manage all the layers; vRealize can plug into this stack and provide that management. VMware Integrated OpenStack “VIO” is an integrated product approach. VIO is truly OpenStack: VMware uses all the code from the upstream open-source project, and the whole stack is fully supported by VMware. Essentially we are combining OpenStack and the SDDC framework from VMware.

As mentioned, it fully integrates with the vRealize Suite. Version 3.0 is Mitaka based and comes with an extremely simplified deployment: Compact VIO. You can also import existing vSphere workloads. When VIO 2.0 was introduced it provided seamless automated upgrade and rollback. With VIO 3.0 this has been enhanced, with modules like Glance gaining the native ability to see VMware templates.

This allows you to quickly stand up OpenStack and import existing workloads. Mitaka improves manageability and scalability along with the overall user experience while reducing the number of steps. In addition, Nova was simplified, and identity integration is now a one-step process.

VIO 3.0 reduces the management footprint from 15 VMs to 7 to 9 VMs. In addition, VIO provides full HA support with zero downtime. Database replication is included to ensure no loss of data.

In compact mode the footprint is down to two (2) VMs, which is ideal for small deployments. The database is still backed up in real time, so there is no database loss when using compact mode.

You can now quickly import vSphere VMs into VIO. Once imported, you can start managing those VMs through the OpenStack APIs. VIO delivers AWS productivity with Private Cloud control: the AppDev teams get the flexibility they need while the Operations team maintains management and control. VMware is seeing great uptake in VIO, and the momentum is growing around this product.

The upgrade process is extremely simple; it is one of the core features of the design that deployment and upgrade are this straightforward. The management console enables you to stand up the 3.0 environment and migrate everything over.


Highly regulated industries are making the shift to repatriate workloads from the Public Cloud. Building a Private Cloud using VIO allows you to avoid the per-rate network charges of moving workloads back and forth.

vSphere Integrated Containers “VIC” : Karthik Narayan knarayan@vmware.com – Open Beta Announced

People want to build features and ship them to customers as quickly as they can using Linux Containers. Multiple Containers share a kernel, and Docker is perhaps the most popular vendor in this space. Containers enable you to move from DEV to TEST to PROD at an accelerated pace, and in addition to moving across environments you can deploy across clouds. For highly regulated industries, however, moving containers into production can be time-consuming. The challenge with containers is really around operations, security, and monitoring.

The old approach for Containers was to use a vSphere VM with a Container engine inside it. With this model, sizing the VM becomes a challenge. Also, visibility is at the VM level, not at the Container level, which makes it hard to identify noisy-neighbor issues between containers on the host. vSphere Integrated Containers is a direct integration between the Container engine and the vSphere host. This allows transparent visibility and management from vCenter and the command line. You can create the container host in vCenter and provide the endpoint directly to the developer, who can then spin up containers as they see fit.

The demo showed a few of the capabilities, including the vApp being created.

When you query the endpoint, the vSphere characteristics are passed through, versus the old way, which presented the Linux VM's information. From a developer's perspective visibility is to the container, but to the vSphere administrator it is a VM. This allows you to run containers alongside other VMs; you do not need to set up a new set of hosts.

The beta was announced at VMworld 2016 (note: vSphere 6.0 U2 is required to run “VIC”) along with a new management portal and a central registry based on Harbor, VMware's open-source registry for managing Docker images. The open-source version can be downloaded at https://github.com/vmware/vic .

You can of course do this with Docker using Swarm, but leveraging vSphere Integrated Containers gives you the native vCenter clustering provided by the VMware environment.