NFS backend for OpenStack Glance/Cinder/instance-store

Kimi Zhang

In this post, let’s go through how to configure NFS as a unified storage backend for OpenStack Glance, Cinder, and the shared instance store, and then look at how it works under the hood.

Setup: 1 controller and 2 compute nodes. The controller acts as the NFS server as well.
OS + OpenStack: RHEL7 + Juno

Controller: 192.168.255.1 HPDL36
Compute:  192.168.255.2 HPDL37
Compute:  192.168.255.3 HPDL38

Setup NFS server on controller server

Create 3 folders as the shared sources for the instance store, Glance, and Cinder, and grant sufficient access rights:

mkdir /nfsshare; chmod 777 /nfsshare
mkdir /nfsshare_glance; chmod 777 /nfsshare_glance
mkdir /nfsshare_cinder; chmod 777 /nfsshare_cinder

Create /etc/exports

/nfsshare   *(rw,no_root_squash)
/nfsshare_cinder *(rw,no_root_squash)
/nfsshare_glance *(rw,no_root_squash)

Start the NFS server:

systemctl start rpcbind
systemctl start nfs-server
systemctl start nfs-lock

Setup NFS clients

Glance

Mount the NFS share on the controller node for Glance:

mount HPDL36:/nfsshare_glance /var/lib/glance/images
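To make the mount survive a reboot, an /etc/fstab entry along these lines can be added on the controller (a sketch; the mount options are my assumption, adjust to your environment):

```
# /etc/fstab on the controller node
HPDL36:/nfsshare_glance  /var/lib/glance/images  nfs  defaults,_netdev  0 0
```

The glance user must be able to write into the mounted directory; the 777 mode set on the export above covers that, though a `chown glance:glance` on the mount point is the tidier option.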

Nova instance-store

Mount the NFS share on both compute nodes for the shared instance store:

mount HPDL36:/nfsshare /var/lib/nova/instances
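Because both compute nodes mount the same share at /var/lib/nova/instances, instance disks are visible from either host, which is what makes live migration possible with this setup. A persistent variant for each compute node (again a sketch, with assumed mount options):

```
# /etc/fstab on each compute node (HPDL37 and HPDL38)
HPDL36:/nfsshare  /var/lib/nova/instances  nfs  defaults,_netdev  0 0
```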

Cinder

The cinder-volume service will handle…
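The original post is truncated here, but for reference, a typical Juno-era NFS backend configuration for cinder-volume looks roughly like this (a sketch, not the author’s exact configuration):

```
# /etc/cinder/cinder.conf (relevant options only)
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
nfs_mount_point_base = /var/lib/cinder/mnt
```

The file named by `nfs_shares_config` lists the shares to use, one per line, e.g. `HPDL36:/nfsshare_cinder`. The driver mounts each share under `nfs_mount_point_base` and creates volumes as files on it.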

View original post 1,470 more words


Building redundant and distributed L3 network in Juno

Kimi Zhang

Before Juno, deploying OpenStack in production always had a painful point around the L3 agent: high availability and a performance bottleneck. Juno now comes with new Neutron features that provide an HA L3 agent and the Distributed Virtual Router (DVR).

Specifications:

https://github.com/openstack/neutron-specs/blob/master/specs/juno/neutron-ovs-dvr.rst

https://github.com/openstack/neutron-specs/blob/master/specs/juno/l3-high-availability.rst

DVR distributes East-West traffic via virtual routers running on the compute nodes. Virtual routers on compute nodes also handle North-South floating-IP traffic locally for VMs running on the same node. However, if a floating IP is not in use, VM-originated external SNAT traffic is still handled centrally by the virtual router on the controller/network node.

The HA L3 agent provides virtual-router high availability via VRRP: the virtual gateway IP is always served from one of the controller/network nodes.
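In Juno, the HA behaviour is driven by a few neutron.conf options on the controller/network nodes; a minimal sketch (the agent counts shown are illustrative values, not from the original post):

```
# /etc/neutron/neutron.conf (L3 HA, Juno)
l3_ha = True
max_l3_agents_per_router = 3
min_l3_agents_per_router = 2
```

With `l3_ha = True`, newly created routers are scheduled onto multiple L3 agents, and keepalived/VRRP elects which one actively serves the gateway IP.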

Let’s take a look at how they work in detail.

DVR

Steps to enable DVR:

  1. Precondition
    DVR currently supports only tunnel overlays (VXLAN or GRE) with l2population enabled; VLAN as an overlay is not supported yet.
    So to…
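The excerpt ends here, but the Juno configuration knobs that enable DVR look roughly like this (a sketch assuming the ML2/OVS tunnel setup the precondition above describes):

```
# /etc/neutron/neutron.conf (controller)
router_distributed = True

# /etc/neutron/plugins/ml2/ml2_conf.ini
mechanism_drivers = openvswitch,l2population

# /etc/neutron/l3_agent.ini
agent_mode = dvr_snat    # on the controller/network node
# agent_mode = dvr       # on compute nodes

# OVS agent configuration ([agent] section)
enable_distributed_routing = True
l2_population = True
```

The `dvr_snat` mode marks the node that keeps handling centralized SNAT for VMs without floating IPs, matching the traffic split described above.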

View original post 2,167 more words

Understanding Hadoop Kerberos Authentication

A Little Bit Every Day

Hadoop supports authenticating its clients and users with Kerberos for security. Understanding the whole mechanism isn’t easy (otherwise I wouldn’t have written this post), not only because Kerberos itself is very complex, but also because it involves other complicated pieces such as SASL, GSSAPI, and JAAS. To start, here is a rough overall picture in which I’ve tried to put everything together simply. After the explanation that follows, I hope the picture will be more meaningful and clear when you come back to it.

In the Kerberos authentication mechanism, both the server side and the client side need to authenticate to the system. By the server side in Hadoop, I mean Hadoop services such as namenode/datanode and jobtracker/tasktracker; by the client side, I mean the HDFS client, job client, and other tools used by Hadoop users. Of course they are not limited to such user tools since there can…
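As a concrete anchor for the discussion, switching Hadoop from its default “simple” authentication to Kerberos starts in core-site.xml (a minimal sketch using the standard Hadoop property names; service principals and keytabs are configured separately per daemon):

```xml
<!-- core-site.xml: switch Hadoop from simple to Kerberos authentication -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
```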

View original post 871 more words