Saturday, September 17, 2016

VMware Integrated OpenStack 3.0 “VIO”: Pete Cruz

OpenStack is typically used to repatriate workloads from the Public Cloud. It really does require the entire OpenStack framework along with the ability to manage and monitor it. You need visibility into all the layers; vRealize can plug into this stack and provide that management. VMware Integrated OpenStack “VIO” is an integrated product approach. VIO is truly OpenStack: VMware uses the code from the upstream open-source project, and the whole stack is fully supported by VMware. Essentially it combines OpenStack with the SDDC framework from VMware.

As mentioned, it fully integrates with the vRealize Suite. Version 3.0 is Mitaka based and introduces an extremely simplified deployment: Compact VIO. You can also import existing vSphere workloads. When VIO 2.0 was introduced it provided seamless automated upgrade and rollback. With VIO 3.0 this has been enhanced, with modules like Glance gaining the native ability to see VMware templates.

This allows you to quickly stand up OpenStack and import existing workloads. Mitaka improves manageability and scalability along with the overall user experience, reducing the number of steps. In addition, Nova was simplified, with a one-step process for integrating identity.

VIO 3.0 reduces the footprint from 15 VMs to 7-9 VMs. In addition, VIO provides full HA support with zero downtime. Database replication is included to ensure no loss of data.

In compact mode the footprint is down to two (2) VMs. This is ideal for small deployments. The database is still backed up in real time, so there is no database loss when using compact mode.

You can now quickly import vSphere VMs into VIO. Once imported you can start managing VMs through the OpenStack APIs. VIO delivers AWS Productivity with Private Cloud Control. This ensures the AppDev teams can have the flexibility they need but the Operations team maintains management and control. VMware is seeing great uptake in VIO. The momentum is growing around this product.

The upgrade process is extremely simple; it is one of the core features of the design that deployment and upgrade are this straightforward. The management console enables you to stand up the 3.0 environment and migrate everything over.


Highly regulated industries are making this shift to repatriate public workloads from the cloud. Building a Private Cloud using VIO allows you to avoid the cost of moving workloads back and forth.

vSphere Integrated Containers “VIC” : Karthik Narayan – Open Beta Announced

People want to build features and ship them to customers as quickly as they can using Linux containers. Multiple containers share a kernel. Docker is perhaps the most popular vendor in this space. Containers enable you to go from DEV to TEST to PROD at an accelerated pace. Beyond moving across environments, you can deploy across clouds. For highly regulated industries, moving containers into production can be time-consuming. The challenge with containers really is around operations, security and monitoring.

The old approach for containers was to use a vSphere VM with a Container engine. With this model, sizing the VM becomes a challenge. Also, visibility is at the VM level, not at the Container level, which makes it hard to identify noisy-neighbor issues between containers on the host. vSphere Integrated Containers is direct integration between the Container engine and the vSphere host. This allows transparent visibility and management from vCenter and the command line. You can create the container host in vCenter and provide the endpoint directly to the developer. The developer can then spin up containers as they see fit.

The demo shows a few of the capabilities, including the vApp getting created.

When you query the endpoint, the vSphere characteristics are passed through vs. the old way, which presented the Linux VM information. Visibility is to the container from a developer perspective, but it is a VM to the vSphere administrator. This allows you to run containers alongside other VMs. You do not need to set up a new set of hosts.

The beta was announced at VMworld 2016 (Note: vSphere 6.0 U2 is required to run “VIC”) along with a new management portal and a central registry based on Harbor, VMware's open-source Docker registry project. The open-source version can be downloaded here at .

You can of course do this with Docker using Swarm, but leveraging vSphere Integrated Containers gives you native vCenter clustering provided by the VMware environment.

Thursday, September 15, 2016

How IBM is Accelerating Cloud #IBMCloudTour16 Robert Tercek author of Vaporized @SuperPlex

Welcome to the software-defined society; this is in many ways an invisible transformation. Often the only visible sign of this change is things going away; old business models are co-opted by new ones. Apple sells more music than the rest of the music industry combined. Robert talks about things being vaporized: replacing physical media with software. Your smartphone has adopted the personas of over 25 digital devices over a very short period of time.

Robert cites a retailer that shows pictures of products in subways. Consumers can scan them and the products are delivered at their next stop. Ironically this company, which started in Asia, went to this model because they could not afford the real estate to bring their stores to market, so they brought their stores to the consumers.

In addition, this is transforming information; information is becoming atmosphere. Information is being liberated and distributed freely. Universities in the States are all introducing free cloud-based streaming services of their curriculum. Why? Because they have no choice; new startups are doing this already.

Money is being vaporized. Bitcoin is referenced, but its success is not what matters; it has proven that the blockchain, or trust protocol, works at scale.

By 2020 we will have 5 million applications. These applications are replacing everything in your wallet. Robert also cites the simple keycard in hotels being replaced by software. On phones with Near Field Communication “NFC” you can book and walk directly past check-in to your room.

As goods become feature-rich they shed the characteristics of products and become services. For example, historically the automobile was a huge transformation; it shaped our cities and shifted our economies to a petroleum base.

Young people do not see the car as a symbol of freedom; they see it as an obligation. Today software is replacing the car through new models based on carshare such as Uber. Uber is now valued at $62.5 billion, which exceeds the valuations of GM and Ford. Every auto company now has a ridesharing or autonomous car initiative.

What companies like Uber represent is a big shift; they are imposing digital rules on the real world. When Uber responds to criticism that they go around regulations, they respond that they are pushing regulation down to the consumer; the evaluation and control is done on the spot by the end user. Uber launched self-driving cars in Pittsburgh, though they were not the first; several autonomous car startups are now deploying around the world.

What is exciting is that software models are coming out of the cloud and affecting the world we live in. Airbnb’s private valuation exceeds all but three of the top hotel chains without it owning a single piece of real estate. This is transforming us as well; the first thing we do when we want to engage is to look for an app, and if we don’t find one we move on.

All these services depend on the cloud. The invisible part is the massive amount of compute power that it takes to make this happen. The most important part of the cloud is that it will reshape your organization. Companies shift from making products to delivering digital services.

We are about to go through the next trillion dollars of change in wearable, implantable and ingestible computers: intelligent technology that is able to react to the environment around us. Once cities deploy LEDs they are able to integrate sensors, cell towers and other data collection points across the city, so that the city transforms into the computer.

In 2010 we entered the Zettabyte era (1.2 Zettabytes). A Zettabyte is a 1 followed by 21 zeros. In 2014 we generated 4 Zettabytes. Data is the new oil. The datasets we have now, however, are too big for us to handle. This is why AI (Artificial Intelligence) and cognitive computing have come into vogue; they are the only way we can deal with it.

The business plans of the next 10,000 startups are simple: take anything and add AI to it. The driver for this change is Cloud, but due to compliance these will be hybrid Cloud environments. There is no industry that will not be impacted. Jamie Dimon of JPMorgan Chase gave a warning to the banking industry that Silicon Valley is coming: new startups, backed by venture capital dollars, are looking to challenge the old banking models.

Therefore Robert states, “The forecast is Cloudy with a chance of reinvention. The future of work is entrepreneurial and the future of technology is Hybrid Cloud.”

Sunday, September 4, 2016

#vmworld Media Scrum with @ray_ofarrell and @ybhighheels with @podoherty

You know, when I attended the Day 1 keynote with the Cross Private to Public Cloud demonstration, there was one thing that I struggled with: the value of a hot-migration between Private and non-VMware HyperScale Public Clouds (such as Azure, Amazon and Google). While it had a wow factor, Public Clouds have very different frameworks from the VMware SDDC model. Because of this, customers tend to develop into them vs. migrate to them.

With the great access I get to the show I was able to ask my question directly to Ray O'Farrell @ray_ofarrell the CTO of VMware. Ray explained that while a migration could be of benefit to avoid vendor lock-in to a Public cloud, he did not see VMs migrating dynamically to and from different HyperScale Clouds in a DRS like fashion. The value of the VMware Cloud Services tech preview was more around the ability to deliver cross-cloud management from a security and compliance perspective, rather than as a migration tool.

Consider if you will a customer that has a Hybrid environment with multiple Public Clouds that they are using for building applications. Without a cross-cloud management framework where would they start if they were asked for a compliance audit? VMware's solution is designed to address this and with the integration of Arkin (vRealize Network Insight) and NSX components, visualize security problems and enable you to bring them into compliance.

I asked if the scenario would be different with a VMware Public Cloud target. Yanbing Li @ybhighheels, the Senior VP and GM of the Storage and Availability Business Unit at VMware, responded that when using VSAN stretched clusters this is indeed foreseeable to a VMware-centric Cloud. Storage may still be used in a HyperScale Cloud, but the inherent VM format translation would make the technical problem a little more challenging from an engineering perspective.

The foundation that VMware has laid out looks to be one of great value; it is going to be exciting to see how this early tech preview develops moving forward.

Wednesday, August 31, 2016

#VMworld VMware Validated Design for SDDC presented by Ryan Johnson @tenthirtyam

The VMware Validated Design for SDDC is a kit that you can download that steps you through the design decisions required to design and deploy the SDDC framework. Before getting started, however, you should understand the scenario or situation you are designing for: is it a greenfield environment? Will SDDC be a single-region deployment, and what is the scale of the solution? You also need to understand the service availability: will it be a core service? Is it critical to the organization or designed for Dev/Test?

In the VMware Validated Design kit for SDDC there are 220+ separate design decisions that allow you to consider the decision, the design justification and the design implications of each point. In addition, the SDDC framework lays out the underlying vSphere environment and then layers on the SDDC components. The kit also provides some guidance for hardware. From an environment perspective this covers rack space, power and cooling, as well as system services like AD, the Certificate Authority, DNS and NTP, SMTP relay and FTP or SFTP services.

From a hardware perspective the solution should be built in three (3) separate Pods or Clusters: Management, Compute and Edge. With the latest version of the validated design, the compute and edge services can be combined to reduce the getting-started cost to a two (2) Pod design. When deploying VSAN you should review the list of Virtual SAN Ready Hosts.

There are a few requirements for NFS storage in the SDDC framework. It is recommended that templates be stored on NFS. In addition, the Log Insight export function requires NFS storage. The recommended SDDC design is based on a Layer 3 leaf-and-spine network design. The design guides are hardware agnostic, so the underlying hardware tends to be whatever the customer prefers.

The guide lays out the suggested VLANs that you should use as part of your SDDC design. A standard management Pod typically consists of 4 ESXi hosts connected through a vSphere Distributed Switch “vDS” leveraging VXLAN. From a VXLAN perspective there is both a Universal Management Transport Zone and a Universal Compute Transport Zone. If the design spans more than one region, Universal wires configured across all the sites are recommended for the failover of vRA. In addition there is a Universal wire for vROps collection traffic, along with a common Single Sign-On "SSO" domain. For additional details, download the validated designs here.

#VMworld Exclusive Interview with @VirtualStef by @podoherty

Stephane Asselin, Sr. EUC Architect at VMware, is now part of the customer success team. As part of Sanjay's (@spoonen) strategic initiative, a team was created to help ensure customers can realize the vision of Workspace One. VMware wants cradle-to-grave support for customers deploying Workspace One. Working with strategic customers, Stephane and team will quarterback the migration to Workspace One.


Customers who have invested in Workspace One want that single pane for end user services. After customers complete their initial use cases, Stephane and team will help them to derive the full value of the suite. Being part of the EUC Business Unit, they also have direct access to product engineering to ensure any feature requests can be considered and potentially adopted.

According to Stephane, there is a big difference between adopting the vision and setting up the software. Interest has been strong, but they really want customers to make the solution strategic within their organization. The struggle with complete adoption is not the technology, which is both robust and mature; it is changing existing enterprise legacy environments and the disruptive nature of the technology when implemented properly. Getting users who are comfortable in a PC environment to adopt a hybrid mobile strategy can be difficult. Although it is a vast improvement to their workday, it is still change.

Every customer approaches Workspace One differently; some will implement AirWatch and then add Workspace One to it. For these folks the application lifecycle management and application catalog can be confusing. For traditional VDI users, it is a matter of getting comfortable with the idea of presenting an integrated catalog of View, enterprise apps, SaaS and Cloud applications.

"We reach two different worlds; AirWatch customers wanting to do more; or Horizon customers who are used to the virtual desktop presentation, understanding the need for catalogs."

The team is aggressive and wants to make sure customers are successful, so customers interested in the program should contact their VMware Account Rep or VMware Partners.

#VMworld How to Manage Personal Settings with App Volumes & ThinApp with @VirtualStef & @HilkoLantinga

There are many things to consider when managing applications. How do you manage the lifecycle of your apps? How will you deliver the application: offline, virtual or cloud? How many applications are not under your jurisdiction? How big is your shadow IT problem?

App Volumes enables you to simplify lifecycle management and delivery. With App Volumes you can build an AppStack, which can be a single application or multiple applications delivered according to a user’s preset permissions. You can make these AppStacks read-only, which allows you to deploy a many-to-one model: many users sharing a single AppStack.

Grouping is very important when developing your AppStacks. Performance is optimal when you limit yourself to a maximum of 7-10 applications within an AppStack, although more are supported. It is also important to have a clean environment for creating your initial AppStack so that testing can be done outside the deployed production environment. This is typically referred to as your Provisioning VM. You create a clean AppStack, attach it to your Provisioning VM and install your application. When provisioning, the AppStack is essentially in read/write mode. After installing the application you flip it to read-only mode and it is essentially ready for deployment.
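The provisioning workflow above can be modeled as a tiny state machine; this is a conceptual sketch only (the class and method names are illustrative, not real App Volumes APIs):

```python
# Conceptual sketch of the AppStack lifecycle described above.
# None of these names are App Volumes APIs; they just model the flow:
# create (read/write) -> attach to provisioning VM -> install apps ->
# seal (read-only) -> assign to many users (many-to-one).

class AppStack:
    def __init__(self, name):
        self.name = name
        self.read_only = False   # read/write while provisioning
        self.users = []

    def seal(self):
        """Flip to read-only once the applications are installed."""
        self.read_only = True

    def assign(self, user):
        if not self.read_only:
            raise RuntimeError("seal the AppStack before assigning users")
        self.users.append(user)

stack = AppStack("office-suite")
# ... attach to the clean Provisioning VM and install applications here ...
stack.seal()
for user in ("alice", "bob", "carol"):   # many users, one read-only stack
    stack.assign(user)
print(len(stack.users))
```

The key property the sketch enforces is the one from the text: an AppStack is only shared many-to-one after it has been flipped to read-only.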

While you can apply an AppStack dynamically, it is considered a best practice to do it at logon or on reboot so that new applications are introduced in a controlled manner. If you plan properly for what information shall reside on a Writable Volume, they can be very useful. By default the Writable Volume size may be large, but you can easily adjust it by following this blog post.

You should use ThinApp with App Volumes. While AppStacks are a delivery method, ThinApp is a packaging solution. A ThinApp package redirects any writes to an isolated container that runs in user mode. If multiple users launch the same ThinApp package, the memory blocks are shared.

VMware User Environment Manager "UEM" allows you to simplify the application of user settings and policies to your virtual desktops. Often these settings are stored in multiple locations such as login scripts, GPOs and custom application configuration files. UEM allows you to consolidate all these settings through a single management console.

UEM does not change the way that Windows folder redirection works, but it does give you a simpler way to manage redirection along with a multitude of other user configuration items. As it does use folder redirection, you have to either continue to use it or turn it off and have UEM manage the process.

When the user logs in, the base profile is loaded, then user-specific metadata, and then application launch information. This is done in a light-touch, just-in-time way to ensure the login is efficient. When an application is closed, its settings are exported (this requires DirectFlex to be enabled). It is a good idea to enable the backup option of UEM so that you can quickly recover from any profile problems.

To transition from Persona you can enable UEM to run the logoff script to collect information but not process it. Once the information is collected you can turn off Persona and configure UEM as you would like. This is covered in this KB article.

To deal with different application requirements, use a single application with different configurations that are applied according to a specific condition. For example, an application that requires different language support is deployed as a single application with multiple language configurations, with the right one applied according to a condition in UEM.
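The idea of one application with condition-driven settings can be sketched like this (a toy illustration of the pattern; this is not UEM's actual configuration format):

```python
# Toy illustration of condition-based settings for a single application,
# in the spirit of UEM conditions. Not UEM's real configuration format.

APP_CONFIGS = {
    "language": {
        "fr": {"ui_language": "fr-FR", "spellcheck": "fr"},
        "de": {"ui_language": "de-DE", "spellcheck": "de"},
    },
    "default": {"ui_language": "en-US", "spellcheck": "en"},
}

def settings_for(user_conditions: dict) -> dict:
    """Pick one application's settings based on a user's conditions."""
    lang = user_conditions.get("language")
    return APP_CONFIGS["language"].get(lang, APP_CONFIGS["default"])

print(settings_for({"language": "fr"}))  # French configuration applied
print(settings_for({"language": "ja"}))  # no match, falls back to default
```

The point is that the application itself is managed once; only the condition lookup decides which settings land on the desktop.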

Some helpful hints for common scenarios, like OST files in Outlook: you can redirect OST files to Writable Volumes using the UEM configuration. This is a two-step process: first creating and assigning a Writable Volume, then adjusting the location in UEM under the Outlook application.

You can solve many problems by combining App Volumes, ThinApp and UEM, but careful planning is required. Test and validate each individual solution before combining all the layers.

#VMworld Architecting VSAN the VCDX Way @simonlong_ & @rayheffer

Great Session presented by Simon Long and Ray Heffer. Before getting into VSAN you need to understand some basic terms:
  1. FTT - Failures to Tolerate - How many hosts needed to tolerate failures
  2. Flash Read Cache Reservation - SSD capacity reserved as read cache for the virtual machine object
  3. Object Space Reservation - the reserve specified as a percentage of the total object address space
  4. Failure Tolerance Method - can be set to either performance or capacity
  5. Witness - ESXi host used for tie breaking
  6. Sparse Swap - provisions VM without space reservation for VM swap
VSAN Objects are all the files that make up a VM, such as the VMDK and snapshot files. When an object is deployed on VSAN it will have related items distributed across hosts. These related items are referred to as Components and are the building blocks of all Objects.
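The FTT definition above implies a host-count formula; here is a quick sketch of the standard VSAN mirroring math (this assumes the RAID-1 mirroring failure tolerance method, which the session did not spell out):

```python
# Hedged sketch: standard VSAN RAID-1 (mirroring) math, not from the session.
# With Failures to Tolerate (FTT) = n, VSAN keeps n + 1 replicas of each
# object plus witness components, which requires 2n + 1 hosts.

def hosts_required(ftt: int) -> int:
    """Minimum hosts for a RAID-1 (mirroring) policy with the given FTT."""
    return 2 * ftt + 1

def capacity_multiplier(ftt: int) -> int:
    """Raw capacity consumed per unit of usable data (n + 1 replicas)."""
    return ftt + 1

for ftt in (1, 2, 3):
    print(f"FTT={ftt}: {hosts_required(ftt)} hosts, "
          f"{capacity_multiplier(ftt)}x capacity")
```

So the default FTT=1 policy needs 3 hosts and doubles raw capacity consumed; the erasure-coding ("capacity") failure tolerance method changes these numbers and is not covered by this sketch.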

In addition you have VSAN Disk Groups that are used to pool flash and magnetic disks. Disk groups are composed of 1 cache disk and at least 1 capacity disk.

VSAN supports View Storage Accelerator, which stores commonly used read blocks in DRAM on the ESXi hosts. The minimum number of hosts for VSAN is 2 while the maximum is 64; the 2-host minimum does not include the witness host. There is a branch-office architecture that can be done with a 2-host configuration with the witness located elsewhere. VSAN does not support SIOC, Storage DRS or SE Sparse disks.

When integrating Horizon it will automatically create different VSAN Storage policies based on the Desktop Pool type deployed. If you manually change the Storage policies then a Refresh, Recompose or Rebalance will switch them back to the defaults.

There are 6 default VSAN storage policies, such as Dedicated Linked Clone, Floating Full Clone, Replica and Persistent Disk, that are created by Horizon. It is a good idea to change the FTT setting for the Replica policy to 2. It is also a good idea to create a Golden Master VM and a default VSAN policy as well.

When you are building out a Horizon environment it is important to understand the business requirements as well as the constraints. In addition, you should look to remove all single points of failure "SPOF" in your design.

While you can deploy View on a Virtual SAN stretched cluster, you do have to be careful, as the Java-based communication between Connection Servers is not latency-tolerant. It may be better to have separate View Pods using Horizon 7 vs. a single Pod, depending on the latency between datacenters.

Tuesday, August 30, 2016

#VMworld How to Deploy VMware NSX with CISCO Infrastructure

This session is presented by Ron Fuller @ccie5851 and Paul Mancuso @pmancuso

NSX has 1700+ customers and growing. A few common use cases are micro-segmentation, remote access and IT automation. The session will focus on how to integrate with Cisco Nexus/ACI and UCS environments.

NSX provides a faithful reproduction of networking services and infrastructure in software. It is a distributed architecture, so as you scale out compute you scale out capacity. In addition there is a distributed firewall component and an integrated API for automation.

NSX Manager is the centralized management plane. Three NSX Controllers make up the control plane. In addition, the Distributed Logical Router "dLR" controls adjacency.

NSX requires three clusters: an infrastructure and management cluster, a compute cluster and an Edge cluster. These can be rack servers or integrated UCS blades. On the Edge cluster we would deploy Edge services like the dLR.

We suggest logical segmentation of traffic: Management, vMotion, VXLAN and storage networks for standard virtualization. NSX introduces two new VLANs: a transit network for VXLAN and one for software bridging between the virtual and physical networks. We recommend that the software bridging is done on the Edge cluster.

In a standard configuration you end up with 3 IP stacks: Management, VXLAN and the vMotion network. VMware's VXLAN is multicast-free; you can use either unicast or a hybrid mode. This is done through L2 frame encapsulation and VXLAN Tunnel Endpoints, or "VTEPs".

VXLAN can be segregated by creating a Transport Zone, which is a collection of VXLAN-prepared ESXi clusters. Only 1 vDS per ESXi cluster can be enabled for VXLAN. Note: if you are running NSX on those ESXi clusters you do not need vSphere Enterprise to create a vDS; you get that capability through NSX licensing. Only the VMware vDS is supported, so you cannot use the Cisco Nexus 1000V.

NSX creates dvUplink port groups for VXLAN-enabled hosts. This uplink carries the VXLAN traffic. NSX switching requires only two things: an MTU of 1600 and IP connectivity. NSX is truly agnostic of the underlying switch.

It is easy to say "configure MTU 1600 in your environment," but it does take some planning to ensure it is configured on your Cisco framework. VXLAN encapsulates traffic in UDP frames of up to 1600 bytes. All links belonging to the fabric must be enabled with jumbo MTU. The risk is that if it is not configured properly you could black-hole network traffic, so ensure you plan accordingly.
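The 1600-byte figure comes from the encapsulation overhead VXLAN adds to a standard 1500-byte frame; a quick sanity check using the standard header sizes (my arithmetic, not from the session):

```python
# VXLAN encapsulation overhead on top of a standard 1500-byte payload.
# Header sizes are the standard ones from RFC 7348; the headroom in the
# 1600-byte recommendation absorbs the overhead plus a little rounding.
INNER_MTU = 1500        # standard Ethernet payload
OUTER_ETHERNET = 14     # outer MAC header
DOT1Q_TAG = 4           # optional outer VLAN tag
OUTER_IP = 20           # outer IPv4 header
OUTER_UDP = 8           # outer UDP header
VXLAN_HEADER = 8        # VXLAN header

overhead = OUTER_ETHERNET + DOT1Q_TAG + OUTER_IP + OUTER_UDP + VXLAN_HEADER
print(f"overhead: {overhead} bytes")            # 54 bytes
print(f"required MTU: {INNER_MTU + overhead}")  # 1554, hence the 1600 guidance
```

That is why every link in the fabric needs jumbo MTU: a single hop left at 1500 will silently drop the 1554-byte encapsulated frames.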

When we look at common Cisco datacenter Pod topologies, NSX is agnostic. It is important, however, for VXLAN transport that the VLAN is common between Cisco Pods. For UCS, VMware has tuning guides for both the B-Series and C-Series to help you properly tune for NSX/VXLAN traffic. In addition there are NSX design guides for NSX on Cisco UCS and Nexus 9000 infrastructure.

#VMworld VMworld General Session Day 2

Sanjay Poonen @spoonen is introduced.

Sanjay wants to discuss the world going digital. For example Sanjay talks about his kid’s education being transformed through after-hours learning like @khanacademy. Three years ago the end user computing industry was a small part of VMware's business and now it is a huge part of their portfolio.

If we think about apps today, approximately 50% of the world's applications are still client-server. In addition, some are web apps and others are truly mobile apps. We have brought this world together in Workspace One, which allows you to marry consumer simplicity with enterprise security.

Sanjay will cover how apps and identity work together, as well as desktop and mobile and the underlying security principles. Sanjay transitions to a live demo of Workspace One. Research has shown that users pull their phones out of their pockets 90 times a day for approx. 100 seconds at a time. End user solutions have to provide value under those conditions.

The first thing that is shown is single sign-on "SSO" providing simple access to all your business applications. Client-server, web and mobile apps are shown in a single pane of glass, along with the integration of Boxer (the email application that was acquired by VMware). Integration is shown with AirWatch providing security across different storage repositories such as Google Drive. In addition, a swipe-approval component was built into Workspace One. The integration of Horizon is shown by accessing a desktop through Workspace One.

Sanjay introduces Stephanie Buscemi @sbuscemi, Executive Vice President at Salesforce. Stephanie talks about the partnership between VMware and Salesforce. Stephanie shows Salesforce Wave and the ability to see how sales are doing over the quarter, and to look at opportunities along with the opportunity data. In addition, all the deal dynamics are available to move the deal forward on your mobile device. The SSO for Salesforce One is all provided through Workspace One.

Sanjay mentions that through the VMworld app you get a free license for VMware Workstation or Fusion. The VMware Horizon team has been innovating like crazy. Sanjay mentions the IBM SoftLayer agreement enabling them to bring Desktop as a Service "DaaS" to more customers. According to IDC, Horizon leads the market. AirWatch also leads both the IDC rankings and the Gartner Magic Quadrant.

VMware is building out an entire IoT platform that is on display on the show floor. Sanjay transitions to Windows 10 and Workspace One integration. Sanjay shows a demo where a user tries to copy sensitive data from O365 and paste it to Twitter; through secure conditional access the cut and paste is prevented. Conditional access is then shown from the perspective of a user attempting to open data in a spreadsheet on a Horizon desktop. Through the application of NSX security policies the ability to access that data is removed, demonstrating micro-segmentation.

VMware Tanium TrustPoint is demo'd onstage, which enables human-like queries to see live data in the environment. For example, the demo looks for a specific MD5 hash running. The interface brings up every process in the environment that matches the hash. TrustPoint Trace is demo'd, which provides deep analytics for what is happening on the endpoint. You can then look for anything malicious and see if it is running across the environment. Most organizations would take weeks to provide this information, while VMware Tanium TrustPoint does it in seconds.
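The kind of query the demo ran — find everything running a binary with a given MD5 hash — can be illustrated in miniature with the standard library (purely illustrative; this says nothing about how Tanium TrustPoint actually implements it):

```python
# Miniature illustration of the demo's idea: match running binaries against
# a known MD5 indicator. Not how Tanium TrustPoint is implemented.
import hashlib

def md5_of(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

# Pretend these are binaries found running across the environment
# (hypothetical hosts and paths, invented for the example).
running_binaries = {
    "host-a:/bin/good": b"legitimate program",
    "host-b:/tmp/bad": b"malicious payload",
    "host-c:/tmp/bad": b"malicious payload",
}

indicator = md5_of(b"malicious payload")   # the hash the analyst searches for
matches = [path for path, data in running_binaries.items()
           if md5_of(data) == indicator]
print(matches)  # the two hosts running the suspect binary
```

The hard part at enterprise scale is not the hash comparison but collecting live process data from every endpoint in seconds, which is the capability the demo was showcasing.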

Ray O'Farrell @ray_ofarrell, the CTO of VMware, is introduced. Ray talks about Cloud-Native applications, which represent a fundamental shift in management frameworks. With a container strategy it is often confusing to understand who you are serving: developers, operations or the end users? The truth is that with Cloud-Native applications, it is all of them.

Kit Colbert @kitcolbert, the CTO of the Cloud Platform Business Unit, is introduced. Container usage is moving from early adopters to the enterprise. The speed of containers makes the value evident for developers; from an IT perspective it is much more difficult to manage. Since VMware already does this with VMs, it is easy to extend the approach to containers. This is done through VMware's enterprise container platforms.

vSphere Integrated Containers is designed for customers that are running a mix of containers and VMs. Within vSphere there is a Docker-compatible API. This was fine for the initial release, but a container registry and a developer-facing portal were required; these have now been integrated into vSphere Integrated Containers. This is demo'd live onstage along with the ability to enable certain developers and deploy a container through the new portal. From a VM admin perspective, all the containers are shown as individual VMs.

The demo moves to the integration of Service Composer, with NSX security groups being applied to containers. The management of these containers is fully integrated with vRealize Operations Manager. The demo works up the vRealize Suite, showing integration of container deployment through vRealize Automation. The container management portal in vSphere Integrated Containers is also built into vRealize Automation.

Kit switches gears to the Photon Platform, which is geared towards scaled-out enterprise container workloads. The demands for speed and elasticity in these types of platforms are very complex problems to solve. The Photon Platform is open source, like vSphere Integrated Containers; a commercial offering called the VMware-Pivotal Cloud Native Stack is also available. Kit reiterates that no matter where you are on the adoption of containers, VMware has a solution.

Rajiv Ramaswami, the EVP/GM of Networking and Security, is introduced. The average cost of a data breach is $4 million. With NSX micro-segmentation you can address this by applying a per-app firewall through policy. Using NSX, your security is always on.

Rajiv mentions that vRealize Network Insight can be used free of charge to run an assessment and understand the current security profile of your organization. After an assessment, you need to install NSX. To simplify the creation of policies, an early tech preview of the Micro-segmentation Planner is shown. The planner allows you to visualize and automatically create security policy rules; once they are created, you can push a button to apply them.
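Conceptually, a planner like this works from observed traffic toward a default-deny rule set: allow exactly the flows an application was seen using, deny everything else. As a toy sketch of that idea only (the data structures and rule format below are invented for illustration; this is not the NSX API):

```python
# Toy sketch of the idea behind a micro-segmentation planner:
# observe which flows the application actually uses, then emit
# allow rules for exactly those flows, ending with a default deny.
from collections import namedtuple

Flow = namedtuple("Flow", ["src_group", "dst_group", "port"])

def plan_rules(observed_flows):
    """Derive per-app allow rules from observed traffic."""
    rules = []
    # De-duplicate repeated observations and sort for stable output.
    for flow in sorted(set(observed_flows)):
        rules.append({
            "action": "allow",
            "source": flow.src_group,
            "destination": flow.dst_group,
            "port": flow.port,
        })
    # Everything not explicitly observed is denied.
    rules.append({"action": "deny", "source": "any",
                  "destination": "any", "port": "any"})
    return rules

flows = [
    Flow("web-tier", "app-tier", 8080),
    Flow("app-tier", "db-tier", 3306),
    Flow("web-tier", "app-tier", 8080),  # duplicate observation
]
rules = plan_rules(flows)
```

The interesting part is the last rule: the visualization step in the real planner exists precisely so you can trust that the allow list is complete before that default deny goes live.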

Yanbing Li @ybhighheels is introduced to talk about Virtual SAN. Virtual SAN is implemented directly as part of vSphere; it is Software Defined Storage. Since Virtual SAN launched it has grown to 5,000 customers, and VMware is adding 100 Virtual SAN customers a week. Virtual SAN is now becoming mainstream: 40% of the Fortune 1000 have deployed Virtual SAN today, and 64% of these customers are using it for business-critical workloads. Several service providers, such as IBM and others, are also looking at Virtual SAN.

An early tech preview of Virtual SAN capacity planning is shown, in which the analytics predict a coming performance problem; leveraging vRealize Automation, a policy is applied that moves the workloads to a 3rd-party cloud (no info on how this was done under the covers, maybe a Virtual SAN stretched cluster).

Ray finishes with VMware Cloud Foundation, which is a hyper-converged software platform for Private and Public Cloud. VMware's vision recognizes that you will have to deal with a multi-cloud environment, with many types of applications, from traditional to SaaS to cloud native, delivered to any device. Ray challenges the audience to learn the cross-cloud products, engage with VMware, and be a leader in this new era.

Monday, August 29, 2016

#VMworld Virtual Volumes Technical Deep Dive

Presented by Patrick Dirks and Pete Flecha @vPedrowArrow

Customers face several challenges with storage today: siloed management with a fragmentation of tools, and rigid infrastructure with static classes of service that is both complex and time-consuming. The key issues with traditional storage management are that visibility is difficult and capabilities are applied at the LUN level. In addition, vendor-specific requirements are usually needed to turn on advanced features.

The secret sauce of Virtual Volumes "VVOLs" is Storage Policy-Based Management. This allows capabilities to be applied at the virtual disk level and enables volumes to be consumed on demand. VVOLs let you move from a LUN-centric architecture to a simplified service/policy model, with virtual disks natively represented on the arrays. VVOLs leverage the vStorage APIs for Storage Awareness, or "VASA", Provider for management data outside the data path. The VASA provider is supplied by the storage vendor.

There are five recognized types of virtual volumes: config (configuration and logs), data (VMDKs), memory (snapshots), swap, and other VMware-specific files. To set up VVOLs, the storage admin creates a Storage Container with the capabilities the SAN provides. A Storage Container on the SAN is associated with a VVOL datastore on the host. A Storage Container is not a LUN, however; it is more like publishing the capabilities of the SAN. The vSphere admin can then apply those features at a VM level through policy.
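As a rough illustration of how a single VM's files decompose into these VVol object types, here is a simplified sketch that classifies by file extension (the mapping function is invented for this example; real type assignment is handled by the platform, not by filename):

```python
# Toy sketch: map a VM's files onto the five VVol object types
# (config, data, memory, swap, other) by file extension.
def vvol_type(filename):
    """Classify a VM file into a VVol object type (illustrative only)."""
    if filename.endswith((".vmx", ".log")):
        return "config"      # configuration and log files
    if filename.endswith(".vmdk"):
        return "data"        # virtual disks
    if filename.endswith(".vmem"):
        return "memory"      # snapshot memory state
    if filename.endswith(".vswp"):
        return "swap"        # VM swap file
    return "other"           # other VMware-specific files

vm_files = ["web01.vmx", "web01.vmdk", "web01.vswp",
            "web01-Snapshot1.vmem", "vmware.log"]
layout = {f: vvol_type(f) for f in vm_files}
```

The point is that each of these objects becomes an individually addressable volume on the array, which is what lets policy apply per virtual disk rather than per LUN.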

In addition, a Protocol Endpoint is defined to provide an I/O target for commands. A Protocol Endpoint has no storage capacity of its own; these are presented by the VASA provider.

The policy determines the placement of the VM on a virtual volume. There is a difference between Storage Capabilities and a VM Storage Policy, but these often get confused: Storage Capabilities are the features the SAN provides, while the VM admin defines a VM Storage Policy based on the requirements of the VM. When you provision, the automated policy engine shows you the datastores that are compliant with the VM's policy requirements.
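That compliance check can be pictured as a simple capability match: a container advertises capabilities, a policy states requirements, and a datastore is compliant when its capabilities satisfy every requirement. A minimal sketch, with capability names invented for illustration (this is not real SPBM rule syntax):

```python
# Toy sketch of storage policy-based placement: return the storage
# containers whose advertised capabilities satisfy every requirement
# in the VM storage policy.
def compliant_datastores(policy, datastores):
    """List datastore names whose capabilities satisfy the policy."""
    return [name for name, caps in datastores.items()
            if all(caps.get(key) == value for key, value in policy.items())]

# Capabilities published by two storage containers (invented names).
datastores = {
    "gold-container":   {"replication": True,  "flash": True},
    "bronze-container": {"replication": False, "flash": False},
}

# A VM storage policy expressing the VM's requirements.
policy = {"replication": True, "flash": True}

matches = compliant_datastores(policy, datastores)
```

At provisioning time the real engine does essentially this filtering for you, presenting only the compliant datastores.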

From a VM admin's perspective, datastores in vCenter look similar to the way they are presented without VVOLs, with the exception of the added storage policies. From a storage admin's perspective, they see the Storage Containers but also which VMs are associated with them.

If there is I/O demand from a VM, the array can request that the host perform a rebind, resetting the Protocol Endpoints to change the flow of I/O and providing better control over I/O demands. With VVOLs, snapshots are also enhanced: because the I/O is offloaded to the array, they are extremely quick, just a metadata change on the storage side.
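Why a metadata-only snapshot is so fast can be illustrated with a toy copy-on-write model: the snapshot copies only the block pointer table, never the data blocks themselves. All structures here are invented for illustration; real arrays are far more sophisticated:

```python
# Toy copy-on-write array: snapshots duplicate pointers, not data.
class ToyArray:
    def __init__(self):
        self.blocks = {}   # block_id -> bytes (the actual data)
        self.volumes = {}  # volume name -> {lba: block_id} pointer table

    def write(self, volume, lba, data):
        # New writes always allocate a fresh block (copy-on-write),
        # so older pointer tables keep referencing the old block.
        block_id = len(self.blocks)
        self.blocks[block_id] = data
        self.volumes.setdefault(volume, {})[lba] = block_id

    def snapshot(self, volume, snap_name):
        # The "snapshot" is just a copy of the pointer table --
        # a metadata change, independent of how much data exists.
        self.volumes[snap_name] = dict(self.volumes[volume])

array = ToyArray()
array.write("vm-disk", 0, b"hello")
array.snapshot("vm-disk", "vm-disk@snap1")
array.write("vm-disk", 0, b"world")  # overwrites live volume only
old = array.blocks[array.volumes["vm-disk@snap1"][0]]
new = array.blocks[array.volumes["vm-disk"][0]]
```

Because the host offloads the operation, the snapshot cost no longer scales with disk size or hypervisor I/O, which is the enhancement described above.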

This is the first release of VVOLs, and it requires vSphere 6.0 and above. Today's session will be repeated Wednesday at 1 PM.

#VMworld VMworld General Session: Day 1

The theme for this year's session is Be Tomorrow.

Pat Gelsinger is introduced as the CEO

Pat challenges the audience to determine which way they will go in the future. VMware is helping customers and partners face forward to deal with the challenges of a digital world. Digital Transformation is the buzzword of 2016, and the implication is that it needs different management frameworks. VMware says nonsense to this, as all business today is digital.

One of the important questions to ask about Digital Transformation is "Is it real?" Pat cites GE as the only company still part of the Dow industrial average after 12 decades, which is because they are constantly innovating. The next company Pat mentions is CVS and their mobile application, which helps track everything they do for the company and their customers.

IDC says that approximately 20% of businesses are leading in digital transformation; the other 8 in 10, however, don't get it. Are the companies that are not moving forward leaning on their traditional IT methods? What VMware is building is transformational.

In 2006 Amazon started their Cloud Platform. At that time, 98% of workloads were running in traditional IT and 2% were in Public Cloud, which was largely Salesforce. In 2011, 7% of workloads were in Public Cloud and 6% in Private Cloud. In 2016 it is 15% Public and 12% Private. By 2021, according to VMware, workloads will be split 50/50.

When does Public Cloud exceed Private Cloud? The estimate is 2030. There is also a shift in where customers are running their workloads: Managed Cloud Services are growing by 18%, and the dominant shift is customers' Private Clouds running in someone else's datacenter.

Device proliferation is making end-user devices a replacement market. But IoT is exploding, and in 2019 it is expected that IoT devices will exceed the number of human-operated devices connected to the web. VMware believes that in this new world, as Cloud takes root, IT expands rather than shrinks.

VMware and IDC looked at the industries that are embracing cloud, such as construction, professional services, securities, insurance, and transportation. One major increase in the use of Cloud comes through the business units in an organization: business usage is increasing through Shadow IT. In an average industry there is a mix of cloud, SaaS, and personal devices, and IT is typically in charge of security in this environment. Ironically, IT is responsible for everything but controls very little. This sets up a struggle of freedom vs. control.

VMware has been working on this through the SDDC. The datacenter will be software and programmable. NSX and Virtual SAN are in the rapid adoption phase, "the tornado phase" described in the book "Crossing the Chasm".

VMware announced today vCloud Air zero-downtime migration to their Cloud. But VMware is not stopping there: VMware introduces the VMware Cross-Cloud Architecture, delivered as VMware Cloud Foundation. This is not just new pricing and packaging; the foundation extends management and security to the Public Cloud. The first partner in this model is IBM.
Robert Leblanc, the Sr. VP of IBM, is introduced. Robert mentions that they have 500 clients in the cloud, with 50% growth month over month. In addition to providing software-defined datacenters, they also provide desktops as a service. IBM and Cloud Foundation provide the vRealize software as a service.

VMware Cloud Foundation provides visibility into both VMware and non-VMware Private and Public Clouds. Guido Appenzeller, the Chief Technology Strategy Officer at VMware, is introduced. Guido asks "is IT necessary with Public Cloud?" Even in an age of mega clouds IT is required, but the way we manage is different. Citi Bank takes the stage to describe their cloud strategy. The challenge for the bank is: can an automated self-service environment be hybrid, leverage commodity components, and be secure?

How will the bank deal with the complexity of bursting to Cloud? One of the bank's challenges is automating in a standard way across different Cloud APIs. In addition, security is absolute and has to function in a multi-jurisdictional and compliant manner.

Guido asks: if VMware can VMotion and manage across multiple hardware providers, why not across multiple Public Clouds? VMware Cloud Services is introduced; it will be delivered as a SaaS offering. The demo starts in the VMware Cloud Services management interface. Guido adds a developer user's Public Cloud credentials, and the tool pulls in all the resources the user has been developing. Now we can see all the instances the user is running, and we can look at this by application. In the demo the tiers are exposed, as well as the costs of the workloads. Guido opens up Security Services to look at the network traffic flow to see if it is secure. This looks to be an Arkin vRealize Network Insight integration.

Part of the suite is NSX; they switch back to VMware's Cloud Services interface and deploy the application with an NSX cloud gateway as part of the application in the Amazon Public Cloud. The demo switches back to vRealize Network Insight "Security Services" to show that the insecure network traffic flow is now closed.

Guido reiterates that they have applied micro-segmentation and secure policy in the Public Cloud. The demo then migrates workloads between different geographies and different Public Cloud environments (Azure and AWS in the demo). Guido reiterates that they cloned and migrated workloads across Public Clouds. This is an early tech preview at this time.

Pat returns to the stage with Michael Dell, who explains how Dell's Converged Infrastructure approach is complementary to VMware's Private Cloud strategy. Pat challenges everyone to take the time to learn these new technologies at the show.

#VMworld OnX blogs the show


VMworld brings together thought leaders, subject matter experts, and IT professionals to immerse themselves in the latest in virtualization and cloud technology. It is attended by over 20,000 people, from technical professionals to business decision-makers, from organizations of all sizes representing a wide range of industries.

This year, Paul O’Doherty, National Director of Cloud in the OnX CTO group, will be an official VMworld blogger for the 2016 event. Follow Paul’s updates on all the new announcements, exclusive interviews, and technology updates on OnX’s blog, Paul’s site, Twitter (@podoherty), and the RSS feed. #be_tomorrow at the show!

Monday, January 11, 2016

“vRealize Automation LiveLessons: A Cloud Management and Automation Primer" Project Announcement

After two successful book launches with Pearson on VMware Horizon, Pearson’s IT certification video imprint, LiveLessons, is working on a new project with Paul O’Doherty from our team.

LiveLessons has asked Paul O’Doherty to develop a video training series based on VMware vRealize Automation 7, which will also help those looking to certify as a VMware Certified Professional specializing in Cloud Management and Automation.

When asked why he got into video training, Paul mentioned: “After the success of our 2nd book release, we decided to look at video training. More and more training is done visually; video provides an exciting opportunity to package information in a very unique way.”

Paul is excited and knee-deep in production. The prerelease title is “vRealize Automation LiveLessons: A Cloud Management and Automation Primer". The series will also help in preparing students for the “VMware Certified Professional 6 - Cloud Management and Automation” exam. Paul is working with his long-suffering friend and editor, Ellie Bru, who has worked on all of Paul’s past book publications. In addition, Paul is working with Denise Lincoln, who has been an invaluable source of information on the video production process.

As the release date gets closer, details will be provided to ensure you know how to access the series. We continue to appreciate your feedback and support on all our initiatives; please keep them coming.