Our involvement in the OpenStack community started with the inclusion of the Nova Hyper-V driver in the Folsom release and continued over the following months with more Nova features, the Cinder Windows Storage driver, Cloud-Init for Windows and now Quantum, in time for the forthcoming Grizzly release!

 

Quantum Hyper-V Plugin

Quantum is a very broad and modular project, encompassing layer 2 and 3 networking features on a wide range of technologies.

The initial release of the Hyper-V Quantum plugin offers the networking options listed below. Besides those, support for Microsoft’s NVGRE virtual networking is planned for release very soon as well.

VLAN

VLANs are the traditional option for network isolation: a well-tested, widely supported and well-known solution that provides excellent interoperability.

There are of course drawbacks. In particular, the added configuration complexity and the large number of addresses that switches and routers need to learn are among the reasons for the adoption of software-defined networking solutions like Microsoft’s NVGRE or OpenVSwitch.

Flat networking

In this case the network consists of a single non-partitioned space. This is useful for testing and simple scenarios.

Local networking

Networking is limited to the Hyper-V host only. This is useful for testing and for simple scenarios where communication between VMs on a single host is enough.

 

Components

There are currently three main Hyper-V related components required for configuring networking in OpenStack.

Quantum Hyper-V plugin

This is the server-side component, to be executed within quantum-server, typically on a controller node on Linux or Windows. The plugin takes care of the centralized network configuration, including networks, subnets and ports. The plugin communicates via RPC APIs with a specific agent running as a service on every Hyper-V node.

Quantum Hyper-V Agent

The agent takes care of configuring Hyper-V networking and “plugs” the VM virtual network adapters (NICs) into the required virtual switches. Each VM can have multiple NICs connected to different networks managed by Quantum. For example, a given VM could have a NIC connected to a network with VLAN ID 1000, another on a network with VLAN ID 1001 and a third one connected to a local network.

Nova Compute Quantum Vif plugin

The Nova Hyper-V driver supports different networking infrastructures, currently Nova networking and Quantum, which means that an independent plugin is required to instruct Nova about how to handle networking. The Quantum Vif plugin for Hyper-V is part of the Nova project itself and can be selected in the configuration files, as shown in the following examples.

Routing and DHCP (layer 3 networking)

One of our requirements was to maximize networking interoperability with other compute nodes (KVM, Xen, etc.) and networking layers. For this reason layer 3 networking is handled on Linux, using the existing Quantum agents for DHCP lease management and routing. A typical deployment uses a dedicated server for networking, separate from the controller.

 

Interoperability

The Quantum Hyper-V Agent is designed to be fully compatible with the OpenVSwitch plugin for VLAN / Flat / Local networking.

This means that it’s possible to add a Hyper-V compute node configured with the Quantum Hyper-V Agent to an OpenStack infrastructure configured with the Quantum OVS plugin without any change to the Quantum server configuration.

At the same time, it’s also possible to add a Linux node configured with the Quantum OVS plugin to an OpenStack infrastructure running Quantum server with the Hyper-V plugin. This is especially useful for running the Quantum L3 and DHCP Agents on Linux.

 

Configuration

Example setup

This is a basic setup suitable for testing and simple scenarios, based on an Ubuntu Server 12.04 LTS x64 controller node running the majority of the OpenStack services and a separate Nova compute node running Hyper-V Server 2012.

Hyper-V Server is free and can be downloaded from here.

As an alternative you can install Windows Server 2012 and enable the Hyper-V role if you need GUI access.

Controller

Setting up a complete OpenStack environment goes beyond the scope of this document, but if you want to quickly set up a test environment (not a production one!) DevStack is a very good choice.

Here’s an example localrc file for DevStack; more info is available here:
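disable_service n-net
enable_service q-svc q-dhcp q-l3 q-meta quantum
HOST_IP=192.168.1.10
ADMIN_PASSWORD=Passw0rd
MYSQL_PASSWORD=Passw0rd
RABBIT_PASSWORD=Passw0rd
SERVICE_PASSWORD=Passw0rd
SERVICE_TOKEN=ADMIN

(A minimal sketch: the IP address, passwords and token are placeholders to adapt to your environment.)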

 

Hyper-V Plugin Configuration

Open the file /etc/quantum/quantum.conf, look for the [DEFAULT] section and select the Hyper-V plugin:
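[DEFAULT]
core_plugin = quantum.plugins.hyperv.hyperv_quantum_plugin.HyperVQuantumPlugin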

 

Open the file /etc/quantum/plugins/hyperv/hyperv_quantum_plugin.ini, look for the [DATABASE] section and set the proper database configuration, e.g.:
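[DATABASE]
sql_connection = mysql://quantum:Passw0rd@127.0.0.1/quantum_hyperv

(The database name, user and password above are placeholders; use your own values.)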

 

If the database doesn’t exist, it can be created with:
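mysql -u root -p -e "CREATE DATABASE quantum_hyperv;"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON quantum_hyperv.* TO 'quantum'@'%' IDENTIFIED BY 'Passw0rd';"

(Adjust the database name and credentials to match the sql_connection value above.)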

 

In the [HYPERV] section set:
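[HYPERV]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999

(The physical network name physnet1 and the VLAN range are examples; use values matching your environment.)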

 

You can now start the Quantum server from your Quantum repository with:
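For example, from the repository root:

python bin/quantum-server --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/hyperv/hyperv_quantum_plugin.ini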

 

Hyper-V Compute Node

On this server we need to install Nova compute and the Quantum Hyper-V agent. You can either install each component manually or use our free installer. The Grizzly beta version is built automatically every night and contains the latest Nova sources pulled from the repository. This is the preferred way to install OpenStack components on Windows.

The installer has been written to simplify the Nova Compute and Quantum Hyper-V Agent deployment process as much as possible. A detailed step-by-step guide is also available.

If you prefer to install each component manually, you can refer to the documentation available here.

 

Hyper-V Agent Configuration

Note: If you installed Quantum with the installer, the following configuration is generated automatically and doesn’t need to be created manually. In this case the Quantum agent runs as a Windows service called quantum-hyperv-agent, started automatically at system boot.

To manually generate the configuration, create the agent configuration file:
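For example C:\OpenStack\etc\quantum_hyperv_agent.conf (the path is just an example) with a [DEFAULT] section pointing to your AMQP server; hostnames and credentials below are placeholders:

[DEFAULT]
verbose = true
control_exchange = quantum
rpc_backend = quantum.openstack.common.rpc.impl_kombu
rabbit_host = controller_address
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = guest
logdir = C:\OpenStack\Log
logfile = quantum-hyperv-agent.log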

 

Add an [AGENT] section with:
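[AGENT]
polling_interval = 2
physical_network_vswitch_mappings = *:external
local_network_vswitch = private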

Where “external” and “private” are virtual switches.

 

If you don’t have an external virtual switch configured you can create one now in Powershell with:
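Assuming the physical network adapter is named "Ethernet0" (an example; see below for how to list your adapters):

New-VMSwitch -Name external -NetAdapterName "Ethernet0" -AllowManagementOS $true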

The AllowManagementOS parameter is not necessary if you don’t need to access the host for management purposes; just make sure you have at least one other network adapter for that!
Here’s how to get the adapter name from the list of network adapters on your host:
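Get-NetAdapter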

Quantum local networking requires a private virtual switch, where communication is allowed between VMs on the host only. A private virtual switch can be created with the following Powershell command:
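New-VMSwitch -Name private -SwitchType Private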

 

You can now start the agent with:
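For example (the configuration file path matches the example used above):

quantum-hyperv-agent --config-file C:\OpenStack\etc\quantum_hyperv_agent.conf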

 

You will also need to update the Nova compute configuration file to use Quantum networking. Locate and open your nova.conf file and add or edit the following lines, setting controller_address according to your configuration:
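network_api_class=nova.network.quantumv2.api.API
quantum_url=http://controller_address:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=service
quantum_admin_username=quantum
quantum_admin_password=Passw0rd
quantum_admin_auth_url=http://controller_address:35357/v2.0

(The tenant, username and password are examples; they must match the Quantum service credentials configured in Keystone.)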

 

Restart the Nova compute service afterwards.

 

Example

Our example consists of two networks, net1 and net2, with different VLAN IDs and one subnet per network.

On the controller node execute the following commands:
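For example (network names, VLAN IDs and subnets are placeholders, with the VLAN IDs falling within the network_vlan_ranges configured earlier):

quantum net-create net1 --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 1000
quantum subnet-create net1 10.0.1.0/24 --name subnet1
quantum net-create net2 --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 1001
quantum subnet-create net2 10.0.2.0/24 --name subnet2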

 

At this point we can deploy an instance with the following Nova command, including two NICs, one for each network. The IDs of the networks can be easily obtained with a simple shell script:
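NET1_ID=$(quantum net-show net1 | awk '/ id / { print $4 }')
NET2_ID=$(quantum net-show net2 | awk '/ id / { print $4 }')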

 

Ok, it’s time to finally boot a VM:
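For example (the flavor and instance name are placeholders):

nova boot --flavor m1.small --image "Ubuntu Server 12.04" --key-name key1 --nic net-id=$NET1_ID --nic net-id=$NET2_ID vm1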

 

Note: The above command expects to find a Glance image called "Ubuntu Server 12.04" and a Nova keypair called "key1".

Once the VM deployment ends (this can be verified with nova list) we will find a running VM with the expected networking configuration. We can also verify the Quantum port allocations with:
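quantum port-list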

Resulting in an output similar to:
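+--------------------------------------+------+-------------------+------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                      |
+--------------------------------------+------+-------------------+------------------------------------------------+
| 5f2c42b1-...                         |      | fa:16:3e:0f:10:b1 | {"subnet_id": "...", "ip_address": "10.0.1.3"} |
| 9d1a3c77-...                         |      | fa:16:3e:2a:77:c3 | {"subnet_id": "...", "ip_address": "10.0.2.3"} |
+--------------------------------------+------+-------------------+------------------------------------------------+

(All IDs, MAC addresses and IP addresses above are illustrative and will differ in your environment.)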

 

Here’s a screenshot showing how the VLAN settings are configured on the virtual machine in Hyper-V Manager:

[Screenshot: Quantum_port_1]

 

Some Windows NIC drivers disable VLAN access by default!

Check the following registry key:
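HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}

(This GUID identifies the network adapter device class.)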

 
Look in all the child keys xxxx (e.g. 0001, 0002) for a value named “VLanFiltering” and, if present, make sure that it is set to 0.
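A quick PowerShell sketch to list the adapters that define the value:

Get-ChildItem "HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}" -ErrorAction SilentlyContinue |
    ForEach-Object { Get-ItemProperty -Path $_.PSPath -Name VLanFiltering -ErrorAction SilentlyContinue } |
    Select-Object PSChildName, VLanFiltering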
In case of changes, reboot the server or restart the corresponding adapters.
 

10 comments:

Ray, October 15, 2013 at 09:32

Sorry to bother you guys again:
I saw a sentence here saying: “without any change to the Quantum server configuration.”

Currently I am trying to use flat networking instead of VLAN; my KVM host can get an IP from the DHCP server, but it seems my VM on Hyper-V still can’t. Could you please give me some guidance on how to debug this problem?

I can see the DHCP request on my controller, but it seems the tap device never gets it. I don’t know why.

Thanks.

Alessandro Pilotti, October 30, 2013 at 07:19

Hi,

we’ll have to do some checks on your configuration. Please follow up on the #Openstack-HyperV channel on IRC (FreeNode) and we’ll guide you through the troubleshooting.

Anant, November 28, 2013 at 20:00

Hi,

I have deployed an Ubuntu 12.04 server and Windows Server 2012. I have OpenStack Grizzly installed along with the Neutron plugin. I deployed OpenStack using the stack.sh script provided by DevStack.
Now when I try to boot the VM I get the following message: "block nbd13: receive control failed (result -32)"

On the other side (Windows server) the quantum-hyperv-agent.log has an exception:
2013-11-28 08:36:00 ERROR [quantum.openstack.common.rpc.amqp] Exception during message handling
Traceback (most recent call last):
File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\quantum-2013.1.3-py2.7.egg\quantum\openstack\common\rpc\amqp.py", line 430, in _process_data
rval = self.proxy.dispatch(ctxt, version, method, **args)
File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\quantum-2013.1.3-py2.7.egg\quantum\openstack\common\rpc\dispatcher.py", line 138, in dispatch
raise rpc_common.UnsupportedRpcVersion(version=version)
UnsupportedRpcVersion: Specified RPC version, 1.1, not supported by this endpoint.
(The same traceback repeats three more times, at 08:36:17, 08:36:18 and 08:36:19.)

Can you please tell me what this means?

On doing a "nova list" I can see a running VM, but I don’t think it is deployed on the Windows server since there was the above exception, and I can’t see anything in Hyper-V. I guess it is being deployed on the Ubuntu node itself.

Thanks

Alessandro Pilotti, November 29, 2013 at 00:23

Can you share your DevStack localrc script? The versions of the OpenStack components must match on all servers. For example you cannot have Grizzly on a Hyper-V compute node and the latest Icehouse bits deployed by DevStack on the controller.

The UnsupportedRpcVersion error is typically related to a version mismatch between the components’ APIs.

Evin Hernandez, January 22, 2014 at 01:17

Can we update this for Neutron? The config seems to be inaccurate. This is what I have in my hyperv-neutron-agent config:

[DEFAULT]
verbose=true
control_exchange=neutron
policy_file=C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\etc\policy.json
rpc_backend=neutron.openstack.common.rpc.impl_kombu
rabbit_host=10.240.10.118
rabbit_port=5672
rabbit_userid=guest
rabbit_password=Welcome1
admin_password=Welcome1
service_password=Welcome1
mysql_password=Welcome1
service_token=Welcome1
q_host=10.240.10.118
multi_host=1
database_type=mysql
service_host=10.240.10.118
mysql_host=10.240.10.118
glance_hostport=10.240.10.118:9292
enabled_services=n-cpu,rabbit,neutron,q-agt
logdir=C:\OpenStack\Log\
logfile=neutron-hyperv-agent.log
[AGENT]
polling_interval=2
physical_network_vswitch_mappings=*:CorpNet
enable_metrics_collection=false

Alessandro Pilotti, January 22, 2014 at 01:44

Hi, I suggest using our installer to install Nova compute and the Neutron agent on Hyper-V, as it takes care of generating the configuration files:

http://www.cloudbase.it/openstack/openstack-compute-installer/

Here’s an example of a hyperv-neutron-agent.conf taken from a working Havana deployment.
Note: this configuration uses Qpid; the corresponding rabbit_* options can be used as well, of course.

[DEFAULT]
verbose=true
control_exchange=neutron
policy_file=C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\etc\policy.json
rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname=192.168.209.130
qpid_port=5672
qpid_username=guest
qpid_password=guest
logdir=C:\OpenStack\Log
logfile=neutron-hyperv-agent.log
[AGENT]
polling_interval=2
physical_network_vswitch_mappings=*:external
enable_metrics_collection=true

Evin Hernandez, January 22, 2014 at 02:01

I’ve used the installer to install on Hyper-V; is there an updated version?

Alessandro Pilotti, January 22, 2014 at 02:43

The beta installer is updated every night with the latest development version (suitable for DevStack), while the stable version installers are updated any time a new official version of OpenStack is released.

For example the current Havana version has been updated for the latest 2013.2.1 release and it will be updated to 2013.2.2 as soon as it is released, a few days from now.

muthu, June 26, 2014 at 17:12

Great work on Hyper-V, making it easy to use as compute :)
I have a few questions:
1. Does the new Icehouse compute installer support ML2 with OVS with the VLAN type driver?
2. I see VLAN supported, but under which plugin?
3. Is ML2 supported with the Icehouse installer?

Alessandro Pilotti, June 27, 2014 at 13:50

Yes, ML2 is supported; the Hyper-V mechanism driver needs to be enabled in the ML2 configuration.

Comments closed
For technical questions & support please visit ask.cloudbase.it