Our involvement in the OpenStack community started with the inclusion of the Nova Hyper-V driver in the Folsom release and continued in just a few months with more Nova features, the Cinder Windows storage driver, Cloud-Init for Windows and now Neutron.

 

Neutron Hyper-V Agent

Neutron is a very broad and modular project, encompassing layer 2 and 3 networking features on a wide range of technologies.

The initial release of the Hyper-V Neutron plugin offers the networking options listed below. Besides these options, support for Microsoft’s NVGRE virtual networking is planned for release very soon as well.

VLAN

VLANs are the traditional option for network isolation: a well tested, widely supported and well-known solution that provides excellent interoperability.

There are of course drawbacks. In particular, the added configuration complexity and the large number of addresses that need to be learned by switches and routers are among the reasons for the adoption of software defined networking solutions like Microsoft’s NVGRE or OpenVSwitch.

Flat networking

In this case the network consists of a single non-partitioned space. This is useful for testing and simple scenarios.

Local networking

Networking is limited to the Hyper-V host only. Useful for testing and simple scenarios where communication between VMs constrained to a single host is enough.

 

Components

There are currently three main Hyper-V related components required for configuring networking in OpenStack.

Neutron ML2 (Modular Layer 2) Plugin

The Modular Layer 2 (ml2) plugin is a framework allowing OpenStack Networking to simultaneously utilize the variety of layer 2 networking technologies found in complex real-world data centers. It currently works with the existing OpenVSwitch, linuxbridge, and Hyper-V L2 agents. The plugin takes care of the centralized network configuration, including networks, subnets and ports. The plugin communicates via RPC APIs with a specific agent running as a service on every Hyper-V node.

Neutron Hyper-V plugin

Introduced in Grizzly, it has since been replaced by the Neutron ML2 plugin and is no longer available starting with the Kilo release.

Neutron Hyper-V Agent

The agent takes care of configuring Hyper-V networking and “plugs” the VM virtual network adapters (NICs) into the required virtual switches. Each VM can have multiple NICs connected to different networks managed by Neutron. For example a given VM could have a NIC connected to a network with VLAN ID 1000, another one on a network with VLAN ID 1001 and a third one connected to a local network.

Nova Compute Neutron Vif plugin

The Nova Hyper-V driver supports different networking infrastructures, currently Nova networking and Neutron, which means that an independent plugin is required to instruct Nova about how to handle networking. The Neutron Vif plugin for Hyper-V is part of the Nova project itself and can be selected in the configuration files, as shown in the following examples.

Routing and DHCP (layer 3 networking)

One of our requirements was to maximize networking interoperability with other compute nodes (KVM, Xen, etc.) and networking layers. For this reason layer 3 networking is handled on Linux, using the existing Neutron agents for DHCP lease management and networking. A typical deployment uses a dedicated server for networking, separated from the controller.

 

Interoperability

The Neutron Hyper-V Agent is designed to be fully compatible with the OpenVSwitch plugin for VLAN / Flat / Local networking.

This means that it’s possible to add a Hyper-V compute node configured with the Neutron Hyper-V Agent to an OpenStack infrastructure configured with the Neutron OVS plugin without any change to the Neutron server configuration.

 

Configuration

Example setup

This is a basic setup suitable for testing and simple scenarios, based on one Ubuntu Server 12.04 LTS x64 controller node running the majority of the OpenStack services and a separate Nova compute node running Hyper-V Server 2012.

Hyper-V Server is free and can be downloaded from here.

As an alternative you can install Windows Server 2012 and enable the Hyper-V role if you need GUI access.

Controller

Setting up a complete OpenStack environment goes beyond the scope of this document, but if you want to quickly set up a test environment (not a production one!) DevStack is a very good choice.

Here’s an example localrc file for DevStack; more info is available here:
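For example, something along these lines should work (a sketch: the host IP, passwords and VLAN range are placeholders, and variable names may differ slightly between DevStack releases):

# Illustrative localrc for a Neutron-enabled DevStack controller
HOST_IP=192.168.1.10
ADMIN_PASSWORD=Passw0rd
DATABASE_PASSWORD=Passw0rd
RABBIT_PASSWORD=Passw0rd
SERVICE_PASSWORD=Passw0rd
SERVICE_TOKEN=Passw0rd
# Replace nova-network with Neutron
disable_service n-net
enable_service q-svc q-agt q-dhcp q-l3 q-meta neutron
# ML2 with VLAN tenant networks
Q_PLUGIN=ml2
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:2999
PHYSICAL_NETWORK=physnet1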

 

ML2 Plugin Configuration

Open the file /etc/neutron/neutron.conf, look for the [DEFAULT] section and select the ML2 plugin:
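For instance (the full class path below applies to the Havana / Icehouse timeframe; newer releases also accept the short ml2 alias):

[DEFAULT]
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin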

 

Open the file /etc/neutron/plugins/ml2/ml2_conf.ini, look for the [ml2] section and set the proper configuration, e.g.:
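A possible configuration, assuming VLAN tenant networks (on Kilo or newer, the hyperv mechanism driver requires the networking_hyperv library described below):

[ml2]
type_drivers = vlan,flat,local
tenant_network_types = vlan
mechanism_drivers = openvswitch,hyperv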

 

In the [ml2_type_vlan] section set:
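For example, assuming a physical network named physnet1 and a VLAN range matching your switch configuration:

[ml2_type_vlan]
network_vlan_ranges = physnet1:1000:2999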

 

Note (Kilo release or newer): the Hyper-V mechanism driver now lives in the networking_hyperv third-party library. In order to use this mechanism driver, you must install the library:
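For example, with pip (pick the release matching your OpenStack version):

pip install networking-hyperv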

 

You can now start the Neutron server from your Neutron repository with:
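For example, from the repository root (a packaged installation would start the neutron-server service directly):

python bin/neutron-server --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini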

 

Hyper-V Compute Node

On this server we need to install Nova compute and the Neutron Hyper-V agent. You can either install each component manually or use our free installer. The Kilo beta version is built automatically every night and contains the latest Nova sources pulled from the repository. This is the preferred way to install OpenStack components on Windows.

The installer has been written to simplify the Nova Compute and Neutron Hyper-V Agent deployment process as much as possible. A detailed step by step guide is also available.

If you prefer to install each component manually, you can refer to the documentation available here.

Note (Kilo release or newer): The Hyper-V Neutron Agent has been decomposed from the main neutron repository and moved to the networking_hyperv library. In order to use the agent, the library must be installed:
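For example, from the Python environment used by the agent:

pip install networking-hyperv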

 

Hyper-V Agent Configuration

Note: If you installed Neutron with the installer, the following configuration is generated automatically and doesn’t need to be created manually. In this case the Neutron agent runs as a Windows service called neutron-hyperv-agent, started automatically at system boot.

To manually generate the configuration, create the agent configuration file:
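For example, create C:\OpenStack\etc\neutron_hyperv_agent.conf (a hypothetical path used throughout these examples) with a [DEFAULT] section along these lines; the AMQP host and credentials are placeholders and the option names match a Havana-era deployment using RabbitMQ:

[DEFAULT]
verbose=true
control_exchange=neutron
policy_file=C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\etc\policy.json
rpc_backend=neutron.openstack.common.rpc.impl_kombu
rabbit_host=192.168.1.10
rabbit_port=5672
rabbit_userid=guest
rabbit_password=guest
logdir=C:\OpenStack\Log
logfile=neutron-hyperv-agent.log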

 

Add an [AGENT] section, where “external” and “private” are the names of your virtual switches:
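A sketch (physical_network_vswitch_mappings maps Neutron physical networks to Hyper-V virtual switches; local_network_vswitch is assumed here as the option pointing to the private switch used for local networking):

[AGENT]
polling_interval=2
physical_network_vswitch_mappings=*:external
local_network_vswitch=private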

 

If you don’t have an external virtual switch configured, you can create one now in PowerShell with:
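For example (replace "Ethernet0" with the name of the physical adapter the switch should be bound to):

New-VMSwitch -Name external -NetAdapterName "Ethernet0" -AllowManagementOS $true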

The AllowManagementOS parameter is not necessary if you don’t need to access the host for management purposes; just make sure you have at least one other network adapter for that!
Here’s how to get the adapter name from the list of network adapters on your host:
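For example, using the built-in NetAdapter cmdlets:

Get-NetAdapter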

Neutron local networking requires a private virtual switch, where communication is allowed only between VMs on the host. A private virtual switch can be created with the following PowerShell command:
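For example:

New-VMSwitch -Name private -SwitchType Private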

 

You can now start the agent with:
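For example, assuming the neutron-hyperv-agent script is on the PATH and using the hypothetical configuration file path from above:

neutron-hyperv-agent --config-file C:\OpenStack\etc\neutron_hyperv_agent.conf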

 

You will also need to update the Nova compute configuration file to use Neutron networking. Locate and open your nova.conf file and add / edit the following lines, setting controller_address according to your configuration:
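A sketch using the Havana / Icehouse era option names (newer releases moved these settings to a dedicated [neutron] section); the password is a placeholder:

network_api_class=nova.network.neutronv2.api.API
neutron_url=http://controller_address:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=Passw0rd
neutron_admin_auth_url=http://controller_address:35357/v2.0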

 

Restart the Nova compute service afterwards.

 

Example

Our example consists of two networks, net1 and net2, with different VLAN IDs and one subnet per network.

On the controller node execute the following commands:
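For example (the VLAN IDs, physical network name and CIDRs are placeholders and must match the ranges configured in ml2_conf.ini):

neutron net-create net1 --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 1000
neutron subnet-create net1 10.0.1.0/24 --name subnet1
neutron net-create net2 --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 1001
neutron subnet-create net2 10.0.2.0/24 --name subnet2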

 

At this point we can already deploy an instance with the following Nova command, including two NICs, one for each network. The IDs of the networks can be easily obtained with a simple shell script:
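For example, a sketch that parses the CLI table output (the awk pattern matches the " id " row only):

NET1_ID=$(neutron net-show net1 | awk '/ id /{ print $4 }')
NET2_ID=$(neutron net-show net2 | awk '/ id /{ print $4 }')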

 

Ok, it’s time to finally boot a VM:
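For example (the flavor and instance name are arbitrary; the image and keypair names match the note below):

nova boot --flavor m1.small --image "Ubuntu Server 12.04" --key-name key1 \
    --nic net-id=$NET1_ID --nic net-id=$NET2_ID vm1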

 

Note: The above script expects to find a Glance image called “Ubuntu Server 12.04” and a Nova keypair called “key1”.

Once the VM deployment ends (this can be verified with nova list) we will find a running VM with the expected networking configuration. We can also verify the Neutron port allocations with:
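For example, by listing all ports (the two new ports, one per NIC, should appear):

neutron port-list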

Resulting in an output similar to:
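Illustrative output with placeholder values, one port per NIC:

+-------------------+------+-------------------+---------------------------------------------------------+
| id                | name | mac_address       | fixed_ips                                               |
+-------------------+------+-------------------+---------------------------------------------------------+
| <port id 1>       |      | fa:16:3e:xx:xx:xx | {"subnet_id": "<subnet1 id>", "ip_address": "10.0.1.2"} |
| <port id 2>       |      | fa:16:3e:yy:yy:yy | {"subnet_id": "<subnet2 id>", "ip_address": "10.0.2.2"} |
+-------------------+------+-------------------+---------------------------------------------------------+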

 

Here’s a screenshot showing how the VLAN settings are configured on the virtual machine in Hyper-V Manager:


Troubleshooting

If you are here, it means that the OpenStack instance you created does not have any network connectivity. Next, we will try to determine the cause of the issue.

Neutron Controller

Before the instance is able to receive an IP, the port associated with the instance must be bound. To check this, run:
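For example, where $PORT_ID is the ID of the port attached to the instance (it can be obtained with neutron port-list):

neutron port-show $PORT_ID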

The result must have the following fields:
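In particular, for a port successfully bound to a Hyper-V node you should see values along these lines:

| binding:host_id  | <hostname of the Hyper-V compute node> |
| binding:vif_type | hyperv                                 |
| status           | ACTIVE                                 |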

If the output is similar, then the port was properly created and bound. The issue can be either with the DHCP configuration or on the Hyper-V Neutron Agent side. If instead the field binding:vif_type has the value binding_failed, it means that the port was not properly bound and the following items must be verified.

Make sure that the Neutron agents are alive. At least the DHCP agent, the HyperV agent, the Open vSwitch agent and the Metadata agent should be alive:
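For example:

neutron agent-list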

If an agent is alive, the output should be:
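Illustrative output (the id column is omitted here), with a smiley in the alive column for each healthy agent:

+--------------------+--------------+-------+----------------+
| agent_type         | host         | alive | admin_state_up |
+--------------------+--------------+-------+----------------+
| HyperV agent       | hyperv-node  | :-)   | True           |
| DHCP agent         | controller   | :-)   | True           |
| Open vSwitch agent | controller   | :-)   | True           |
| Metadata agent     | controller   | :-)   | True           |
+--------------------+--------------+-------+----------------+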

If there is XXX instead of :-) it means that the agent is dead or is not properly reporting its state. If the HyperV agent is dead, check the logs on your Hyper-V compute node:
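With the logdir and logfile values from the configuration example above, the file to check is:

C:\OpenStack\Log\neutron-hyperv-agent.log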

 

Check the network in which the instance was created:
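For example, for the net1 network created earlier:

neutron net-show net1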

The field provider:network_type must be one of the following values: vlan, flat or local, as those are the ones supported by Hyper-V. If it is not, create a new instance on another network that is compatible with Hyper-V.

 

Check that the /etc/neutron/plugins/ml2/ml2_conf.ini file contains hyperv as a mechanism driver. See the ML2 Plugin Configuration section for more details.

 

Check if the subnet where the instance was created is DHCP enabled.
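For example, for the subnet1 created earlier:

neutron subnet-show subnet1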

The output should be similar to this:
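In particular, the enable_dhcp field should be set to True:

| enable_dhcp | True |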

 

Hyper-V Compute Node

If the port is bound according to Neutron, then the issue might be on the Hyper-V node’s side. First of all, you should check that the NIC has been connected correctly on Hyper-V. On the Hyper-V compute node, open a PowerShell prompt and execute:
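For example, to list the network adapters of all VMs together with the switch they are connected to (a sketch; you can also filter by VM with -VMName):

Get-VM | Get-VMNetworkAdapter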

 

Secondly, if the instance was created on a network with the network_type set to vlan, you should check that the VLAN was properly set:
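One way to do it (a sketch that relies on the Hyper-V driver naming the VM network adapters after the Neutron port ID):

Get-VM | Get-VMNetworkAdapter | Where-Object { $_.Name -eq $PORT_ID } | Get-VMNetworkAdapterVlan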

where $PORT_ID is the same port_id shown in neutron. It should display something like this:
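The operation mode should be Access, with the access VLAN ID matching the segmentation ID of the Neutron network, e.g.:

OperationMode : Access
AccessVlanId  : 1000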

If the output is different from what you expect, the Hyper-V Neutron Agent’s logs (see the log path mentioned above) will be useful to determine the issue.

 

If the results are correct, then there can be a few reasons why the instances do not have any connectivity:
– Check that the DHCP agent is alive.
– Check that the Neutron subnet is DHCP enabled.
– Check that the Hyper-V VSwitch is external and properly configured (see the Hyper-V Agent Configuration section).
– Make sure that the Hyper-V VSwitch is connected to the same network as the Neutron controller (in typical deployments, eth1).

 

Some Windows NIC drivers disable VLAN access by default!

Check the following registry key:
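That is, the network adapter device class key (the GUID below is the standard network adapter class identifier):

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}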

Look in all the child keys xxxx (e.g. 0001, 0002) for a value named "VLanFiltering" and, if present, make sure that it is set to 0.
If you make changes, reboot the server or restart the corresponding adapters.

10 comments:

Ray — October 15, 2013 at 09:32

Sorry to interrupt you guys again:
I saw there’s a sentence here saying: without any change on the Quantum server configuration.

Currently I am trying to use flat instead of vlan; my KVM host can get an IP from the DHCP server, but it seems my VM on Hyper-V still can’t. Could you please give me some guidance on how to debug this problem?

I can see the DHCP request on my controller, but it seems the tap device never gets the request. I don’t know why.

Thanks.

Alessandro Pilotti — October 30, 2013 at 07:19

Hi,

we’ll have to do some checks on your configuration. Please follow up on the #Openstack-HyperV channel on IRC (FreeNode) and we’ll guide you through the troubleshooting.

Anant — November 28, 2013 at 20:00

Hi,

I have deployed Ubuntu 12.04 Server and Windows Server 2012. I have OpenStack Grizzly installed along with the Neutron plugin. I deployed OpenStack using the stack.sh script provided by DevStack.
Now when I try to boot the VM I get the following message: “block nbd13: receive control failed (result -32)”

On the other side (Windows server) the quantum-hyperv-agent.log has an exception:
2013-11-28 08:36:00 ERROR [quantum.openstack.common.rpc.amqp] Exception during message handling
Traceback (most recent call last):
File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\quantum-2013.1.3-py2.7.egg\quantum\openstack\common\rpc\amqp.py", line 430, in _process_data
rval = self.proxy.dispatch(ctxt, version, method, **args)
File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\quantum-2013.1.3-py2.7.egg\quantum\openstack\common\rpc\dispatcher.py", line 138, in dispatch
raise rpc_common.UnsupportedRpcVersion(version=version)
UnsupportedRpcVersion: Specified RPC version, 1.1, not supported by this endpoint.

Can you please tell me what this means?

On doing a “nova list” I can see a running VM, but I don’t think it is deployed on the Windows server since there was the above exception, and I can’t see anything in Hyper-V. I guess it is being deployed on the Ubuntu node itself.

Thanks

Alessandro Pilotti — November 29, 2013 at 00:23

Can you share your DevStack localrc script? The versions of the OpenStack components must match on all servers. For example you cannot have Grizzly on a Hyper-V compute node and the latest Icehouse bits deployed by DevStack on the controller.

The UnsupportedRpcVersion error is typically related to a mismatch between the components’ APIs.

Evin Hernandez — January 22, 2014 at 01:17

Can we update this for Neutron? The config seems to be inaccurate. This is what I have in my hyperv-neutron-agent config:

[DEFAULT]
verbose=true
control_exchange=neutron
policy_file=C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\etc\policy.json
rpc_backend=neutron.openstack.common.rpc.impl_kombu
rabbit_host=10.240.10.118
rabbit_port=5672
rabbit_userid=guest
rabbit_password=Welcome1
admin_password=Welcome1
service_password=Welcome1
mysql_password=Welcome1
service_token=Welcome1
q_host=10.240.10.118
multi_host=1
database_type=mysql
service_host=10.240.10.118
mysql_host=10.240.10.118
glance_hostport=10.240.10.118:9292
enabled_services=n-cpu,rabbit,neutron,q-agt
logdir=C:\OpenStack\Log\
logfile=neutron-hyperv-agent.log
[AGENT]
polling_interval=2
physical_network_vswitch_mappings=*:CorpNet
enable_metrics_collection=false

Alessandro Pilotti — January 22, 2014 at 01:44

Hi, I suggest using our installer to install Nova compute and the Neutron agent on Hyper-V, as it takes care of generating the configuration files:

http://www.cloudbase.it/openstack/openstack-compute-installer/

Here’s an example of a hyperv-neutron-agent.conf taken from a working Havana deployment.
Note: this configuration uses Qpid; the corresponding rabbit_* options can be used as well, of course.

[DEFAULT]
verbose=true
control_exchange=neutron
policy_file=C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\etc\policy.json
rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname=192.168.209.130
qpid_port=5672
qpid_username=guest
qpid_password=guest
logdir=C:\OpenStack\Log
logfile=neutron-hyperv-agent.log
[AGENT]
polling_interval=2
physical_network_vswitch_mappings=*:external
enable_metrics_collection=true

Evin Hernandez — January 22, 2014 at 02:01

I’ve used the installer to install on Hyper-V; is there an updated version?

Alessandro Pilotti — January 22, 2014 at 02:43

The beta installer is updated every night with the latest development version (suitable for DevStack), while the stable version installers are updated any time a new official version of OpenStack is released.

For example the current Havana version has been updated for the latest 2013.2.1 release and it will be updated to 2013.2.2 as soon as it is released, a few days from now.

muthu — June 26, 2014 at 17:12

Great work on Hyper-V, making it easy to use as compute :)
I have a few questions:
1. Does the new Icehouse compute installer support ML2 with OVS and the VLAN type driver?
2. I see VLAN supported, but under which plugin?
3. Is ML2 supported with the Icehouse installer?

Alessandro Pilotti — June 27, 2014 at 13:50

Yes, ML2 is supported; the Hyper-V mechanism driver needs to be enabled in the ML2 configuration.

Comments closed
For technical questions & support please visit ask.cloudbase.it