20 October 2015

Web transport communication

Web communications are like conversations between blind men; in fact, all computers are blind. They have to rely entirely on what they "hear" to identify each other.
Imagine George. Now, George is a popular guy. People are always screaming from the crowd to get answers from George, "Hey, George!"
Sometimes they only want a single word answer that is freely available information;
Hey, George! I'm Ralph!
Hi, Ralph! What's your question?
George, what's the capital of California?
Ralph, it's "Sacramento."
But what if Ralph wants personal information that only he and George know?
Hey George! It's Ralph!
Hi, Ralph. What can I do for you?
George, I need to know my bank balance.
Now here's a good time to take note of a few things. Notice how George and Ralph keep addressing each other by name? If Ralph just screamed out "What's my bank balance?", at the very least no one is going to know who the hell Ralph is talking to. It's also important to note that George just sort of trusts that Ralph is telling the truth, that he really is Ralph. What if Jacque wanted to tell George that he was Ralph and ask for Ralph's bank balance? Shouldn't George protect this information?
So now we arrive at George checking if Ralph is legit:
Ralph, I can tell you your balance, but I need your password
Now here's the thing about web sessions or "conversations": since computers are blind, the only way they can continue a multi-sentence conversation in such a noisy crowd is to address each other every time; Ralph, George, Ralph, George. Each sentence is a different REQUEST/RESPONSE. And when you have to start establishing identity, George is going to have to ask for the password every time Ralph wants something, but that is both tedious and dangerous; someone listening in on the conversation could learn the password. So George and Ralph agree on a shorter, temporary password that REPRESENTS the real one.

This temporary password actually references the STATE the conversation between the two is currently in: Ralph has already provided the password and George has authenticated it. The STATE of this conversation is that George now knows he's talking to Ralph, and every time the agreed phrase is provided to George by Ralph, George knows to continue where they left off instead of starting a new conversation (in fact, if Ralph forgets to provide the "conversation state" enough times and George is led to believe that it's a new conversation each time, George will eventually stop talking to Ralph). So after Ralph provides his password, George can respond with this temporary "ticket" that Ralph can give George every time he wants to ask George for a resource.
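In HTTP terms, that temporary "ticket" is a session cookie (or token). Here is a minimal sketch with curl against a made-up example.com API; the URLs and field names are hypothetical and only illustrate the idea:

# Log in once with the real password; the server answers with a session cookie
curl -c cookies.txt -d "user=ralph&password=secret" https://example.com/login

# Later requests present the cookie instead of the password,
# so the server can resume the authenticated "conversation"
curl -b cookies.txt https://example.com/account/balance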

03 September 2015

Migrate from VMware to OpenStack

Introduction

NOTE: Content in this article was copied from:
http://www.npit.nl/blog/2015/08/13/migrate-to-openstack/

This article describes how to migrate VMware virtual machines (Linux and Windows) from VMware ESXi to OpenStack. With the steps and commands below, it should be easy to create scripts that do the migration automatically.
Just to make it clear, these steps do not convert traditional (non-cloud) applications into cloud-ready applications. In this case we started to use OpenStack as a traditional hypervisor infrastructure.
Disclaimer: This information is provided as-is. I decline any responsibility for damage caused by these steps and/or commands. I suggest you don’t try and/or test these commands in a production environment. Some commands are very powerful and can destroy configurations and data in Ceph and OpenStack. So always use this information with care and great responsibility.

Global steps 

  1. Convert VMDK files to single-file VMDK files (VMware only)
  2. Convert the single-file virtual hard drive files to RAW disk files
  3. Expand partition sizes (optional)
  4. Inject VirtIO drivers
  5. Inject software and scripts
  6. Create Cinder volumes of all the virtual disks
  7. Ceph import
  8. Create Neutron port with predefined IP-address and/or MAC address
  9. Create and boot the instance in OpenStack

Specifications

Here are the specifications of the infrastructure I used for the migration:
  • Cloud platform: OpenStack Icehouse
  • Cloud storage: Ceph
  • Windows instances: Windows Server 2003 to 2012R2 (all versions, except Itanium)
  • Linux instances: RHEL5/6/7, SLES, Debian and Ubuntu
  • Only VMDK files from ESXi can be converted; I was not able to convert VMDK files from VMware Player with qemu-img
  • I have no migration experience with encrypted source disks
  • OpenStack provides VirtIO paravirtual hardware to instances

Requirements

A Linux ‘migration node’ with:
  • Operating system (successfully tested with the following):
    • RHEL6 (RHEL7 did not have the “libguestfs-winsupport” package -necessary for NTFS formatted disks- available at the time of writing)
    • Fedora 19, 20 and 21
    • Ubuntu 14.04 and 15.04
  • Network connections to a running OpenStack environment (duh). Preferably not over the internet, as we need ‘super admin’ permissions. Local network connections are usually faster than connections over the internet.
  • Enough hardware power to convert disks and run instances in KVM (sizing depends on the instances you want to migrate in a certain amount of time).
We used a server with 8x Intel Xeon E3-1230 @ 3.3GHz, 32GB RAM and 8x 1TB SSD, and we managed to migrate >300GB per hour. However, it really depends on how much of the instances' disk space is actually used. My old company laptop (Core i5, 4GB of RAM and an old 4500rpm HDD) also worked, but obviously the performance was very poor.
  • Local sudo (root) permissions on the Linux migration node
  • QEMU/KVM host
  • Permissions to OpenStack (via Keystone)
  • Permissions to Ceph
  • Unlimited network access to the OpenStack API and Ceph (I have not figured out the network ports that are necessary)
  • VirtIO drivers (downloadable from Red Hat, Fedora, and more)
  • Packages (all should be available in the default distribution repositories):
    • “python-cinderclient” (to control volumes)
    • “python-keystoneclient” (for authentication to OpenStack)
    • “python-novaclient” (to control instances)
    • “python-neutronclient” (to control networks)
    • “python-httplib2” (to be able to communicate with the web services)
    • “libguestfs-tools” (to access the disk files)
    • “libguestfs-winsupport” (should be installed separately, on RHEL-based systems only)
    • “libvirt-client” (to control KVM)
    • “qemu-img” (to convert disk files)
    • “ceph” (to import virtual disks into Ceph)
    • “vmware-vdiskmanager” (to convert disks, downloadable from VMware)

Steps

1. Convert VMDK to VMDK 

In the next step (convert VMDK to RAW) I was unable to directly convert non-single VMDK files (from ESXi 5.5) to RAW files. So I first converted all VMDK files to single-file VMDK files.
I used the vmware-vdiskmanager tool (argument -r defines the source disk, argument -t 0 specifies the output format ‘single growable virtual disk’) to complete this action:
vmware-vdiskmanager -r <source.vmdk> -t 0 <target.vmdk>
Do this for all disks of the virtual machine to be migrated.
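If the virtual machine has several disks, a small shell loop can handle them all at once. A minimal sketch, assuming hypothetical disk files named server01_0.vmdk, server01_1.vmdk, … and an existing output directory named single/:

# hypothetical file names; adjust the pattern and output directory to your VM
for disk in server01_*.vmdk; do
    vmware-vdiskmanager -r "$disk" -t 0 "single/$disk"
done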

2. Convert VMDK to RAW 

The second step is to convert the VMDK file to RAW format. With the libguestfs tools it is much easier to inject stuff (files and registry settings) into RAW disks than into VMDK files. And we need the RAW format anyway, to import the virtual disks into Ceph.
Use this command to convert the VMDK file to RAW format (use argument -p to see the progress, use -O raw to define the target format):
qemu-img convert -p <source.vmdk> -O raw <target.raw>

For virtual machines with VirtIO support (newer Linux kernels) you could also convert the VMDK files directly into Ceph by using the rbd output. In that case, go to step 6 and follow from there.
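For example, such a direct conversion could look like this (a sketch with placeholder pool name and volume id, assuming qemu-img was built with rbd support and the Cinder volume from step 6 already exists):

qemu-img convert -p -O raw <source.vmdk> rbd:<ceph_pool>/volume-<volume-id>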

3. Expand partitions (optional) 

Some Windows servers I migrated had limited free disk space on the Windows partition. There was not enough space to install new management applications. So, I mounted the RAW file on a loop device to check the free disk space.
losetup -f
(shows the first available loop device, like /dev/loop2)
losetup </dev/loop2> <disk.raw>
kpartx -av </dev/loop2>
(do for each partition):
mount /dev/mapper/loop2p* /mnt/loop2p*
df -h (to check free disk space)
umount /mnt/loop2p*
kpartx -d </dev/loop2>
losetup -d </dev/loop2>

If the disk space is insufficient, you may want to increase the disk size.
Check which partition you want to expand (the output shows the partitions, filesystems and sizes, which should be enough to determine the right one):
virt-filesystems --long -h --all -a <diskfile.raw>
Create a new disk (of 50G in this example):
truncate -s 50G <newdiskfile.raw>
Copy the original disk to the newly created disk while expanding the partition (in this example /dev/sda2):
virt-resize --expand /dev/sda2 <originaldisk.raw> <biggerdisk.raw>
Test the new and bigger disk before you remove the original disk!

4. Inject drivers

4.1 Windows Server 2012

Since Windows Server 2012 and Windows 8.0, the driver store is protected by Windows, and it is very hard to inject drivers into an offline Windows disk. Windows Server 2012 does not boot from VirtIO hardware by default. So, I took the following steps to install the VirtIO drivers into Windows. Note that these steps should work for all tested Windows versions (2003/2008/2012).
  1. Create a new KVM instance (see the sketch after this list). Make sure the Windows disk is created as an IDE disk! The network card should be a VirtIO device.
  2. Add an extra VirtIO disk, so Windows can install the VirtIO drivers.
  3. Of course you should add a VirtIO ISO or floppy drive which contains the drivers. You could also inject the driver files with virt-copy-in and inject the necessary registry settings (see paragraph 4.4) for automatic installation of the drivers.
  4. Start the virtual machine and give Windows about two minutes to find the new VirtIO hardware. Install the drivers for all newly found hardware. Verify that there are no devices that have no driver installed.
  5. Shutdown the system and remove the extra VirtIO disk.
  6. Redefine the Windows disk as VirtIO disk (this was IDE) and start the instance. It should boot without problems. Shut down the virtual machine.
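As an illustration of step 1, the temporary KVM instance could be defined with virt-install. This is only a sketch: every name, path and size below is hypothetical, and the exact options depend on your virt-install version:

# hypothetical names and paths; the Windows disk is on IDE, the dummy disk on VirtIO,
# and the VirtIO driver ISO is attached as an IDE CD-ROM
virt-install \
  --name win2012-migration \
  --ram 4096 --vcpus 2 \
  --import \
  --disk path=/data/windows2012.raw,format=raw,bus=ide \
  --disk path=/data/virtio-temp.qcow2,size=1,bus=virtio \
  --disk path=/data/virtio-win.iso,device=cdrom,bus=ide \
  --network network=default,model=virtio \
  --os-variant win2k12r2 \
  --graphics vnc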

4.2 Linux (kernel 2.6.25 and above)

Linux kernels 2.6.25 and above already have built-in support for VirtIO hardware, so there is no need to inject VirtIO drivers. Create and start a new KVM virtual machine with VirtIO hardware. If the LVM partitions do not mount automatically, run this to fix it:
(log in)
mount -o remount,rw /
pvscan
vgscan
reboot
(after the reboot all LVM partitions should be mounted and Linux should boot fine)
Shut down the virtual machine when done.

4.3 Linux (kernel older than 2.6.25)

Some Linux distributions provide VirtIO modules for older kernel versions. Some examples:
  • Red Hat provides VirtIO support for RHEL 3.9 and up
  • SuSe provides VirtIO support for SLES 10 SP3 and up
The steps for older kernels are:
  1. Create KVM instance:
  2. Linux (prior to kernel 2.6.25): Create and boot a KVM instance with IDE hardware (KVM allows only one IDE controller, which limits you to 4 IDE disks). I have not tried SCSI or SATA, as I only had old Linux machines with no more than 4 disks. Linux should start without issues.
  3. Load the virtio modules (this is distribution specific). For older RHEL versions see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/ch10s04.html and for SLES 10 SP3 systems see https://www.suse.com/documentation/opensuse121/book_kvm/data/app_kvm_virtio_install.htm
  4. Shutdown the instance.
  5. Change all disks to VirtIO disks and boot the instance. It should now boot without problems.
  6. Shut down the virtual machine when done.

4.4 Windows Server 2008 (and older versions); deprecated

For Windows versions prior to 2012 you could also use these steps to insert the drivers (the steps in 4.1 also work for Windows 2003/2008).
  1. Copy all VirtIO driver files (from the downloaded VirtIO drivers) for the corresponding Windows version and architecture to C:\Drivers\. You can use the tool virt-copy-in to copy files and folders into the virtual disk.
  2. Copy the *.sys files to %WINDIR%\system32\drivers\ (you may want to use virt-ls to look for the correct directory; note that Windows is not very consistent with lower- and uppercase characters). Again, virt-copy-in can copy the files into the virtual disk.
  3. The Windows registry should tie the hardware IDs to drivers, but there are no VirtIO drivers registered in Windows by default, so we need to do this ourselves. You can inject the registry file with virt-win-reg (see the sketch after the registry file below). If you copy the VirtIO drivers to a location other than C:\Drivers, you must change the “DevicePath” variable in the last line (the easiest way is to change it on some Windows machine, export the registry file, and use that line).
Registry file (I called the file mergeviostor.reg, as it holds the VirtIO storage information only):

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00000000]
 "ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
 "Service"="viostor"
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00020000]
 "ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
 "Service"="viostor"
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00021AF4]
 "ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
 "Service"="viostor"
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00021AF4&rev_00]
 "ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
 "Service"="viostor"
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1004&subsys_00081af&rev_00]
 "ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
 "Service"="viostor"
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\viostor]
 "ErrorControl"=dword:00000001
 "Group"="SCSI miniport"
 "Start"=dword:00000000
 "Tag"=dword:00000021
 "Type"=dword:00000001
 "ImagePath"="system32\\drivers\\viostor.sys"
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion]
 "DevicePath"=hex(2):25,00,53,00,79,00,73,00,74,00,65,00,6d,00,52,00,6f,00,6f,00,74,00,25,00,5c,00,69,00,6e,00,66,00,3b,00,63,00,3a,00,5c,00,44,00,72,00,69,00,76,00,65,00,72,00,73,00,00,00

When these steps have been executed, Windows should boot from VirtIO disks without a BSOD. All other drivers (network, balloon etc.) should install automatically when Windows boots.

See: https://support.microsoft.com/en-us/kb/314082  (written for Windows XP, but it is still usable for Windows 2003 and 2008).

5. Customize the virtual machine (optional) 

To prepare the operating system to run in OpenStack, you will probably want to uninstall some software (like VMware Tools and drivers), change passwords and install new management tooling, etc. You can automate this by writing a script that does it for you (those scripts are beyond the scope of this article). You should be able to inject the script and files into the virtual disk with the virt-copy-in command.

5.1 Automatically start scripts in Linux

I started the scripts within Linux manually as I only had a few Linux servers to migrate. I guess Linux engineers should be able to completely automate this.

5.2 Automatically start scripts in Windows

I chose the RunOnce method to start scripts at Windows boot, as it works on all versions of Windows that I had to migrate. You can put a script in RunOnce by injecting a registry file. RunOnce scripts are only run when a user has logged in, so you should also inject a Windows administrator UserName and Password and set AutoAdminLogon to ‘1’. When Windows starts, it will automatically log in as the defined user. Make sure to shut down the virtual machine when done.
Example registry file to automatically log in to Windows (with user ‘Administrator’ and password ‘Password’) and start C:\StartupWinScript.vbs:
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnce]
 "Script"="cscript C:\\StartupWinScript.vbs"
 "Parameters"=""
[HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon]
 "AutoAdminLogon"="1"
 "UserName"="Administrator"
 "Password"="Password"

6. Create Cinder volumes 

For every disk you want to import, you need to create a Cinder volume. The volume size in the Cinder command does not really matter, as we remove (and recreate with the import) the Ceph device in the next step; we create the Cinder volume only to create the link between Cinder and Ceph.
Nevertheless, you should keep the volume size the same as the disk you are planning to import, so the overview in the OpenStack dashboard (Horizon) stays accurate.
You create a Cinder volume with the following command (the size is in GB; you can check the available volume types with cinder type-list):
cinder create --display-name <name_of_disk> <size> --volume-type <volumetype>

Note the volume id, as we need the ids in the next step (you can also find the volume id with the following command):
cinder list | grep <name_of_disk>
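A hypothetical example for a 40GB boot disk (the volume name and volume type below are made up):

cinder create --display-name server01-disk0 40 --volume-type standard
cinder list | grep server01-disk0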

7. Ceph import 

As soon as the Cinder volumes are created, we can import the RAW image files. But first we need to remove the actual Ceph device that Cinder just created. Make sure you remove the correct Ceph block device!
First find out in which Ceph pool the disk resides. Then remove the volume from Ceph (the volume-id is the volume id that you noted in the previous step, ‘Create Cinder volumes’):
rbd -p <ceph_pool> rm volume-<volume-id>
The next step is to import the RAW file into the volume on Ceph (the ceph* arguments result in better performance; the raw_disk_file variable is the complete path to the RAW file; the volume-id is the id you noted before).
rbd -p <ceph_pool> --rbd_cache=true --rbd_cache_size=134217728 --rbd_cache_max_dirty=100663296 --rbd_default_format=2 import <raw_disk_file> volume-<volume-id>
Do this for all virtual disks of the virtual machine.
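Putting it together for a single disk, with a made-up pool name and placeholder volume id and file path, the two commands could look like this:

rbd -p volumes rm volume-<volume-id>
rbd -p volumes --rbd_cache=true --rbd_cache_size=134217728 --rbd_cache_max_dirty=100663296 --rbd_default_format=2 import /data/server01-disk0.raw volume-<volume-id>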
Note that you could also convert VMDK files directly into Ceph. In that case replace the rbd import command by the qemu-img convert command and use the rbd output.

Be careful! The rbd command is VERY powerful (you could destroy more data on Ceph than intended)!

8. Create Neutron port (optional) 

In some cases you might want to set a fixed IP address or MAC address. You can do that by creating a port with Neutron and using that port in the next step (create and boot the instance in OpenStack).
You should first find out the network_name (nova net-list; you need the ‘Label’ column). Only the network_name is mandatory. You can also add security groups by adding
 --security-group <security_group_name>
Add this parameter once for each security group; so if you want to add, say, 6 security groups, you should add this parameter 6 times.
neutron port-create --fixed-ip ip_address=<ip_address> --mac-address <mac_address> <network_name> --name <port_name>
Note the id of the neutron port, you will need it in the next step.
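For example, with made-up addresses and names:

neutron port-create --fixed-ip ip_address=192.168.10.25 --mac-address 00:50:56:aa:bb:cc internal-net --name server01-port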

9. Create and boot instance in OpenStack 

Now we have everything prepared to create an instance from the Cinder volumes and the optional Neutron port.
Note the volume-id of the boot disk.
You also need to know the id of the flavor you want to use. Run nova flavor-list to get the flavor-id of the desired flavor.
Now you can create and boot the new instance:

nova boot <instance_name> --flavor <flavor_id> --boot-volume <boot_volume_id> --nic port-id=<neutron_port_id>
Note the instance ID. Now attach each additional disk of the instance by executing this command (if there are other volumes you want to add):
nova volume-attach <instance_ID> <volume_id>
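Since the introduction mentions scripting the whole migration, here is a minimal sketch of how steps 6 to 9 could be chained for a single-disk VM. Every name, pool, network and flavor below is hypothetical, the output parsing is fragile (it depends on the client versions), and there is no error handling:

#!/bin/bash
# hypothetical single-disk migration wrapper – adjust all values to your environment
NAME=server01
RAW=/data/${NAME}.raw
POOL=volumes
SIZE_GB=40

# step 6: create the Cinder volume and grab its id from the client output
VOL_ID=$(cinder create --display-name ${NAME}-disk0 ${SIZE_GB} | awk '/ id /{print $4}')

# step 7: replace the empty Ceph device with the RAW image
rbd -p ${POOL} rm volume-${VOL_ID}
rbd -p ${POOL} --rbd_default_format=2 import ${RAW} volume-${VOL_ID}

# step 8: create a Neutron port on a hypothetical network
PORT_ID=$(neutron port-create internal-net --name ${NAME}-port | awk '/ id /{print $4}')

# step 9: boot the instance from the imported volume
nova boot ${NAME} --flavor m1.medium --boot-volume ${VOL_ID} --nic port-id=${PORT_ID}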

OpenStack use cases and tips

OpenStack is probably one of the fastest-growing projects in the history of Open Source, and as such it is easy to be blinded by its shininess. Understanding when, and even IF, to employ one solution or another is key to a successful business.

Why OpenStack?

If you somehow stumbled here without knowing what OpenStack is, I highly suggest you get a glance of OpenStack before proceeding with this article. That said, there are two ways of using OpenStack: through a provider or hosting it yourself. In this article I’ll discuss the use of OpenStack to manage infrastructure and purposely avoid application management (it is far too broad a topic to merge with this one). Before we dive in, let’s summarize the advantages and the disadvantages of OpenStack:
Pros:
  • It can easily scale across multiple nodes.
  • It is easily manageable and provides useful statistics.
  • It can include different hypervisors.
  • It is vendor-free hence no vendor lock-in.
  • Has multiple interfaces: web, CLI and REST.
  • It is modular: you can add or remove components you don’t need.
  • Components like Sahara, Ironic and Magnum can literally revolutionize your workflow.
  • It can be employed over an existing virtualized environment.
Cons:
  • Installation and upgrades are real barriers (if you’re not using tools like RDO).
  • It can break easily if not managed correctly.
  • Lack of skills in the staff is a major issue.
  • It can become overkill for small projects.

Hypothetical use cases

Case 1: University

The reality universities face is ever-changing, and providing a stable solution to host services can sometimes become a difficult task. In a small university OpenStack is almost never needed, and employing it is overkill, but in a large university the use of OpenStack can be justified by the enormous amount of hardware resources employed to provide services to its students. In this scenario OpenStack simplifies the management of internal services and also improves security by providing isolation (achieved through virtualization). If the university has multiple locations across a geographical region, centralizing hardware resources in the principal location reduces the number of machines needed and the cost of maintenance, allowing a more flexible approach. OpenStack helps in this case only if the university is large enough and/or has multiple locations.

Case 2: IT School

This case is probably a must if the IT team is skilled enough. Employing OpenStack in an IT school gives enormous flexibility both to its students and to its IT team. An OpenStack deployment can easily be configured to use LDAP and provide students with their own lab. Using a large central cluster could also replace the need for desktop machines (replaced with thin clients); this would make maintenance easier and could potentially improve security (many schools use outdated software because of the absence of update plans). If thin clients are not an option, OpenStack Ironic could provide an easy way to provision bare-metal machines. Of course the use of OpenStack is only indicated if the school is large enough and has enough machines, but IT schools usually tend to have lots of machines and older hardware.

Case 3: IT Small-Medium Business / Enterprise

Depending on the true nature of the SMB/enterprise, OpenStack can become a key to a successful business. Enterprises are OpenStack's primary field, but when should you deploy it in an SMB? The first answer is when you expect growth. If you know the workload will double within a year, you will surely need to address this problem. OpenStack's innate ability to scale out will enable the SMB to scale at will and avoid overbuying. A big-data company could benefit from Sahara, which combines the power of Hadoop/Spark with the flexibility of OpenStack. For the most adventurous there’s also containerization with the Magnum component, enabling the use of Docker and Kubernetes (not included in the current release, but it will be in the next one). OpenStack is surely a great way to start hosting your SMB/enterprise cloud, but it can also prove difficult to manage; skill availability can become a major issue. Also, migrating from an existing virtualized environment might be difficult, depending on the case.

Conclusions

As you can see, OpenStack can become your best friend if you know how to use it. However, the installation is the biggest barrier in the first place, and in enterprises migrating from an existing environment to OpenStack can be a challenge. The second problem is the lack of skills needed to install and manage the system. But if you’ve got both problems solved, you can experience a boost in flexibility and a reduction in costs that you could never have expected. If the project is big enough and you haven’t already started, OpenStack may look difficult from the outside but it will pay off in the long run.

17 July 2015

Raspberry PI Hadoop Cluster


hadoop pi boards

If you like Raspberry Pis and want to get into distributed computing and Big Data processing, what could be better than creating your own Raspberry Pi Hadoop cluster?

The tutorial does not assume that you have any previous knowledge of Hadoop. Hadoop is a framework for storage and processing of large amounts of data, or "Big Data", which is a pretty common buzzword these days. The performance of running Hadoop on a Raspberry Pi is probably terrible, but I hope to be able to build a small and fully functional little cluster to see how it works and performs.

In this tutorial we start with one Raspberry Pi and then add two more once we have a working single node. We will also do some simple performance tests to compare the impact of adding more nodes to the cluster. Lastly we try to improve and optimize Hadoop for a Raspberry Pi cluster.

Fundamentals of Hadoop

What is Hadoop?

"The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high-availability, the library itself is designed to detect and handle failures at the application layer, so delivering a highly-available service on top of a cluster of computers, each of which may be prone to failures."

http://hadoop.apache.org/

Components of Hadoop

Hadoop is built from a number of components and open source frameworks, which makes it quite flexible and modular. However, before diving deeper into Hadoop it is easier to view it as two main parts – data storage (HDFS) and data processing (MapReduce):

  • HDFS – Hadoop Distributed File System
    The Hadoop Distributed File System (HDFS) was designed to run on low-cost hardware and is highly fault tolerant. Files are split up into blocks that are replicated to the DataNodes. By default blocks have a size of 64MB and are replicated to 3 nodes in the cluster. However, those settings can be adjusted to specific needs.

    Overview of HDFS File System architecture:
    Hadoop HDFS

  • MapReduce
    MapReduce is a software framework written in Java that is used to create applications that can process large amounts of data. Although it is written in Java, there are other languages available to write MapReduce applications. As with HDFS, it is built to be fault tolerant and to work in large-scale cluster environments. The framework has the ability to split up input data into smaller tasks (map tasks) that can be executed in parallel. The output from the map tasks is then reduced (reduce task) and usually saved to the file system.

    Below you will see the MapReduce flow of the WordCount sample program that we will use later. WordCount takes a text file as input, divides it into smaller parts, counts each word and outputs a file with a count of all words within the file.

    MapReduce flow overview (WordCount example):
    Hadoop MapReduce WordCount

Daemons/services

Daemon/service – Description
NameNode – Runs on the master node. Manages the HDFS file system on the cluster.
Secondary NameNode – Very misleading name. It is NOT a backup for the NameNode. It makes periodic checkpoints so that, in case the NameNode fails, it can be restarted without the need to restart the data nodes. – http://wiki.apache.org/hadoop/FAQ#What_is_the_purpose_of_the_secondary_name-node.3F
JobTracker – Manages MapReduce jobs and distributes them to the nodes in the cluster.
DataNode – Runs on a slave node. Acts as HDFS file storage.
TaskTracker – Runs MapReduce jobs which are received from the JobTracker.

Master and Slaves

  • Master
    The node in the cluster that runs the NameNode and JobTracker. In this tutorial we will also configure our master node to act as both master and slave.
  • Slave
    A node in the cluster that acts as a DataNode and TaskTracker.

Note: When a node is running a job, the TaskTracker will try to use local data (in its "own" DataNode) if possible. Hence the benefit of having the DataNode and TaskTracker on the same node, since there is no overhead network traffic. This also implies that it is important to know how data is distributed and stored in HDFS.

Start/stop scripts

Script – Description
start-dfs.sh – Starts NameNode, Secondary NameNode and DataNode(s)
stop-dfs.sh – Stops NameNode, Secondary NameNode and DataNode(s)
start-mapred.sh – Starts JobTracker and TaskTracker(s)
stop-mapred.sh – Stops JobTracker and TaskTracker(s)

The above scripts should be executed from the NameNode. Through SSH connections, daemons will then be started on all the nodes in the cluster (all nodes defined in conf/slaves).

Configuration files

Configuration file – Description
conf/core-site.xml – General site settings such as location of NameNode and JobTracker
conf/hdfs-site.xml – Settings for the HDFS file system
conf/mapred-site.xml – Settings for MapReduce daemons and jobs
conf/hadoop-env.sh – Environment configuration settings: Java, SSH and others
conf/masters – Defines the master node
conf/slaves – Defines the computing nodes in the cluster (slaves). On a slave this file has the default value of localhost

Web Interface (default ports)

Status and information of the Hadoop daemons can be viewed in a web browser through each daemon's web interface:

Daemon/service – Port
NameNode – 50070
Secondary NameNode – 50090
JobTracker – 50030
DataNode(s) – 50075
TaskTracker(s) – 50060

hadoop cluster in a shoebox

The setup

  • Three Raspberry PI's model B
    (Or you could do with one if you only do first part of tutorial)
  • Three 8GB class 10 SD cards
  • An old PC Power Supply
  • An old 10/100 router used as network switch
  • Shoebox from my latest SPD bicycle shoes
  • Raspbian Wheezy 2014-09-09
  • Hadoop 1.2.1
Name – IP – Hadoop roles
node1 – 192.168.0.110 – NameNode, Secondary NameNode, JobTracker, DataNode, TaskTracker
node2 – 192.168.0.111 – DataNode, TaskTracker
node3 – 192.168.0.112 – DataNode, TaskTracker

Make sure to adjust the names and IP addresses to fit your environment.

Single Node Setup

Install Raspbian

Download Raspbian from:
http://downloads.raspberrypi.org/raspbian_latest

For instructions on how to write the image to an SD card and download SD card flashing program please see:
http://www.raspberrypi.org/documentation/installation/installing-images/README.md

For more detailed instructions on how to setup the Pi see:
http://elinux.org/RPi_Hub

Write 2014-09-09-wheezy-raspbian.img to your SD card. Insert the card to your Pi, connect keyboard, screen and network and power it up.

Go through the setup and ensure the following configuration or adjust it to your choice:

  • Expand SD card
  • Set password
  • Choose console login
  • Choose keyboard layout and locales
  • Overclocking, High, 900MHz CPU, 250MHz Core, 450MHz SDRAM (If you do any voltmodding ensure you have a good power supply for the PI)
  • Under advanced options:
    • Hostname: node1
    • Memory split: 16mb
    • Enable SSH Server

Restart the PI.

Configure Network

Install a text editor of your choice and edit as root or with sudo:
/etc/network/interfaces

iface eth0 inet static
    address 192.168.0.110
    netmask 255.255.255.0
    gateway 192.168.0.1

Edit /etc/resolv.conf and ensure your nameservers (DNS) are configured properly.

Restart the PI.

Configure Java Environment

With the image 2014-09-09-wheezy-raspbian.img Java comes pre-installed. Verify by typing:

java -version

java version "1.8.0"
Java(TM) SE Runtime Environment (build 1.8.0-b132)
Java HotSpot(TM) Client VM (build 25.0-b70, mixed mode)

Prepare Hadoop User Account and Group

sudo addgroup hadoop
sudo adduser --ingroup hadoop hduser
sudo adduser hduser sudo

Configure SSH

Create an SSH RSA key pair with a blank password so that the Hadoop nodes can talk to each other without prompting for a password.

su hduser
mkdir ~/.ssh
ssh-keygen -t rsa -P ""
cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys

Verify that hduser can log in with SSH:

su hduser
ssh localhost

Go back to previous shell (pi/root).

Install Hadoop

Download and install

cd ~/
wget http://apache.mirrors.spacedump.net/hadoop/core/hadoop-1.2.1/hadoop-1.2.1.tar.gz
sudo mkdir /opt
sudo tar -xvzf hadoop-1.2.1.tar.gz -C /opt/
cd /opt
sudo mv hadoop-1.2.1 hadoop
sudo chown -R hduser:hadoop hadoop

Configure Environment Variables

This configuration assumes that you are using the pre-installed version of Java in 2014-09-09-wheezy-raspbian.img.

Add Hadoop to the environment variables by adding the following lines to the end of /etc/bash.bashrc:

export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")
export HADOOP_INSTALL=/opt/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin

Alternatively, you can add the configuration above to ~/.bashrc in the home directory of hduser.

Exit and reopen the hduser shell to verify that the hadoop executable is accessible outside the /opt/hadoop/bin folder:

exit
su hduser
hadoop version

hduser@node1 /home/hduser $ hadoop version
Hadoop 1.2.1
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152
Compiled by mattf on Mon Jul 22 15:23:09 PDT 2013
From source with checksum 6923c86528809c4e7e6f493b6b413a9a
This command was run using /opt/hadoop/hadoop-core-1.2.1.jar

Configure Hadoop environment variables

As root/sudo edit /opt/hadoop/conf/hadoop-env.sh, uncomment and change the following lines:

# The java implementation to use. Required.
export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")

# The maximum amount of heap to use, in MB. Default is 1000.
export HADOOP_HEAPSIZE=250

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS -client"

Note 1: If you forget to add the -client option to HADOOP_DATANODE_OPTS you will get the following error message in hadoop-hduser-datanode-node1.out:

Error occurred during initialization of VM
Server VM is only supported on ARMv7+ VFP

Note 2: If you run SSH on a different port than 22 then you need to change the following parameter:

# Extra ssh options. Empty by default.
# export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"
export HADOOP_SSH_OPTS="-p <YOUR_PORT>"

Or you will get the error:

connect to host localhost port 22: Address family not supported by protocol 

Configure Hadoop

In /opt/hadoop/conf edit the following configuration files:

core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hdfs/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>

mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
  </property>
</configuration>

hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

Create HDFS file system

sudo mkdir -p /hdfs/tmp
sudo chown hduser:hadoop /hdfs/tmp
sudo chmod 750 /hdfs/tmp
hadoop namenode -format

Start services

Login as hduser. Run:

/opt/hadoop/bin/start-dfs.sh
/opt/hadoop/bin/start-mapred.sh

Run the jps command to check that all services started as they are supposed to:

jps

16640 JobTracker
16832 Jps
16307 NameNode
16550 SecondaryNameNode
16761 TaskTracker
16426 DataNode

If you cannot see all of the processes above review the log files in /opt/hadoop/logs to find the source of the problem.

Run sample test

Upload a sample file to HDFS (feel free to grab any other text file you like instead of LICENSE.txt):

hadoop dfs -copyFromLocal /opt/hadoop/LICENSE.txt /license.txt

Run wordcount example:

hadoop jar /opt/hadoop/hadoop-examples-1.2.1.jar wordcount /license.txt /license-out.txt

When completed, you will see some statistics about the job. If you would like to see the output file, grab the file from HDFS to the local file system:

hadoop dfs -copyToLocal /license-out.txt ~/

Open the ~/license-out.txt/part-r-00000 file in any text editor to see the result. (It should list all words in the license.txt file and their number of occurrences.)

Single node performance test

For the performance test I put together a few sample files by concatenating textbooks from Project Gutenberg and ran them in the same manner as the sample test above.

Result:

File – Size – Wordcount execution time (mm:ss)
smallfile.txt – 2MB – 2:17
mediumfile.txt – 35MB – 9:19

Download sample text files for performance test.

I also tried some larger files, but then the PI ran out of memory.

Hadoop Raspberry Pi Cluster Setup

Prepare Node1 for cloning

Since we will make a clone of node1 later, the settings made here will be the "base" for all new nodes.

Edit configuration files

/etc/hosts

192.168.0.110 node1
192.168.0.111 node2
192.168.0.112 node3

In a more serious setup you should use real DNS for name lookup; however, to make it easy we will just go with the hosts file.

/opt/hadoop/conf/masters

node1  

Note: the conf/masters file actually tells which node is the Secondary NameNode. Node1 will become the NameNode when we start the NameNode service on that machine.

In /opt/hadoop/conf edit the following configuration files and change from localhost to node1:

core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hdfs/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://node1:54310</value>
  </property>
</configuration>

mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>node1:54311</value>
  </property>
</configuration>

Wipe HDFS

Note: In the next step we will completely wipe out the current HDFS storage – all files and data that you have used in HDFS will be lost. When you format the NameNode there is also an issue causing the error message: Incompatible namespaceIDs in path/to/hdfs. This can happen when starting or doing file operations on the DataNode after the NameNode has been formatted. This issue is explained in more detail here.

rm -rf /hdfs/tmp/*

Later on we will format the NameNode, but we do this now to ensure the HDFS file system is clean on all the nodes.

Clone Node1 and setup slaves

Clone the SD card of node1 to the other SD cards you plan to use for the other nodes. There are various programs that can do this; I used Win32DiskImager.

For each cloned node make sure to:

  • Change the hostname in /etc/hostname
  • Change the IP address in /etc/network/interfaces
  • Restart the Pi.

Configure Node1

/opt/hadoop/conf/slaves

node1
node2
node3

Note: The masters and slaves configuration files are only read by the hadoop start/stop scripts such as: start-all.sh, start-dfs.sh and start-mapred.sh.

On node1, ensure you can reach node2 and node3 over SSH as hduser without needing to enter a password. If this does not work, copy /home/hduser/.ssh/id_rsa.pub from node1 to /home/hduser/.ssh/authorized_keys on the node that you are trying to connect to (see the sketch below).

su hduser
ssh node1
exit
ssh node2
exit
ssh node3
exit

Answer "yes" when you are asked to confirm the host key of each node.
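If the passwordless login does not work, one way to copy the public key over (a sketch, run as hduser on node1, assuming password logins still work on the target nodes):

cat ~/.ssh/id_rsa.pub | ssh hduser@node2 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
cat ~/.ssh/id_rsa.pub | ssh hduser@node3 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'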

Format hdfs and start services

On node1:

hadoop namenode -format
/opt/hadoop/bin/start-dfs.sh
/opt/hadoop/bin/start-mapred.sh

Verify that daemons are running correctly

On node1:

jps

3729 SecondaryNameNode
4003 Jps
3607 DataNode
3943 TaskTracker
3819 JobTracker
3487 NameNode

On the other nodes:

jps

2307 TaskTracker
2227 DataNode
2363 Jps

Note: If you have issues, you can examine the log files in /opt/hadoop/logs, or you can try to start each service manually on the node that is failing, for example:

On node1:
hadoop namenode
hadoop datanode

You can now also access Hadoop from the web interface to see which nodes are active and other statistics:

http://node1:50030
http://node1:50070

Hadoop Raspberry Pi performance tests and optimization

For these tests I used the same sample text files as for the single node setup.

Download sample files

These tests are meant to highlight some of the issues that can occur when you run Hadoop for the first time, especially in a Raspberry Pi cluster, since it is very limited. The tests deliberately do some things "very wrong" in order to point out the issues that can occur. If you just want to optimize for the Raspberry Pi you can check out the changes that are made in the last test. Also please note that these tests are done with the mediumfile.txt sample provided above and are not "general-purpose" optimizations. If you have used Hadoop before, these tests are probably of no use to you, since you have already figured out what to do :)

First run

Start three SSH terminal windows – one for each node. Then start a monitoring program in each of them. I used nmon, but you could just as well go with top or any other monitor of your choice. Now you will be able to watch the load put on your Pis by the WordCount MapReduce program.

Go back to your main terminal window (for node1) and upload files to HDFS and run the WordCount program:

hadoop dfs -copyFromLocal mediumfile.txt /mediumfile2.txt
hadoop jar /opt/hadoop/hadoop-examples-1.2.1.jar wordcount /mediumfile2.txt /mediumfile2-out.txt

Then watch the monitors of your nodes. Not much going on on node2 and node3, while node1 is running all of the job? The JobTracker is not distributing the tasks out to our other nodes. This is because by default HDFS is configured for really large files and the block size is set to 64MB. Our file is only 35MB (mediumfile.txt), so it will only be split into one block, and hence only one node can work on it.
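You can confirm this with the fsck command that is also used later in this tutorial; with the default 64MB block size it should report that the 35MB file consists of a single block, located on one node:

hadoop fsck /mediumfile2.txt -files -blocks -racks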

Second run

Optimize block size

In order to tackle the block size problem above, edit conf/hdfs-site.xml on all your nodes and add the following:

hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.block.size</name>
    <value>1048576</value>
  </property>
</configuration>

The above configuration sets the block size to 1MB. Let's make another run and see what happens:

hadoop jar /opt/hadoop/hadoop-examples-1.2.1.jar wordcount /mediumfile2.txt /mediumfile3-out.txt
File – Size – WordCount execution time (mm:ss)
mediumfile.txt – 35MB – 14:24

Hadoop terminal monitoring

Still not very impressive, right? It's even worse than the single node setup… This is because when you upload a file to HDFS locally, e.g. from a DataNode (which we are doing, since node1 is a DataNode), the data is copied locally. Hence all our blocks are now on node1. Hadoop also tries to run jobs as close as possible to where the data is stored, to avoid network overhead. However, some of the blocks might get copied over to node2 and node3 for processing, but node1 is most likely to get the most load. Node1 is also running as NameNode and JobTracker and has additional work to do. I also noticed that several of the jobs failed with an out of memory exception, as seen in the picture to the right. So a 1MB block size might be too small even on the Pis, depending on the file size. And now we have our file split into 31 blocks, where each block causes a bit of overhead (the fewer blocks we need the better – as long as we can still evenly spread the blocks across our nodes).

Third run

Optimize block size

Let's make another try. This time we change the block size to 10MB (conf/hdfs-site.xml):

hdfs-site.xml

<property>
  <name>dfs.block.size</name>
  <value>10485760</value>
</property>

Format NameNode

Node1 got a bit overloaded in the previous scenario, so we will now remove its role as TaskTracker and DataNode. Before we can remove node1 as a DataNode, format the NameNode (otherwise we would end up with data loss: with dfs.replication set to 1 our data is not redundant).

On all nodes:

rm -rf /hdfs/tmp/*

On node1:

hadoop namenode -format

Configure Node1 to only be master

Edit conf/slaves and remove node1. Then stop and start the cluster again: 

stop-mapred.sh
stop-dfs.sh
start-dfs.sh
start-mapred.sh

Then upload our sample data and start the job again:

hadoop dfs -copyFromLocal mediumfile.txt /mediumfile.txt
hadoop jar /opt/hadoop/hadoop-examples-1.2.1.jar wordcount /mediumfile.txt /mediumfile-out.txt
File – Size – WordCount execution time (mm:ss)
mediumfile.txt – 35MB – 6:26

So now we actually got a bit of an improvement compared to the single node setup. This is because when you upload a file to HDFS from a client, i.e. not locally on a DataNode, Hadoop will try to spread the blocks evenly among the nodes, unlike in our previous test. However, this is still not optimal, since now we are not using node1 to its full processing potential. What we would like is to have all nodes as DataNodes and TaskTrackers, with the file blocks spread nice and evenly across all of them.

Also, if you go to http://node1:50030 and click on the number 3 under "Nodes" in the table, you will see that our nodes are set up to handle 2 map tasks each (see the picture below). However, the Raspberry Pi has only one (pretty slow) processor core. It will most likely not perform well when running multiple tasks at once. So let's set things right in the last run.

hadoop web task trackers 2

Fourth run

Re-format NameNode (again)

On all nodes:

rm -rf /hdfs/tmp/*

On node1:

hadoop namenode -format

Optimize block size

Let's make the block size a bit smaller than before and lower it to 5MB:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.block.size</name>
    <value>5242880</value>
  </property>
</configuration>

Configure TaskTrackers max tasks

As mentioned at the end of the previous test, if you go to http://node1:50030 and look at your nodes, you will see that the max map and reduce tasks are set to 2. This is too much for the Raspberry Pis. We will change the max map and reduce tasks to the number of CPU cores each device has: 1.

On all your nodes:

mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>node1:54311</value>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>1</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>1</value>
  </property>
</configuration>

Configure Node1 back to act as both slave and master

Edit conf/slaves and add node1. Then stop and start the cluster again: 

stop-mapred.sh
stop-dfs.sh
start-dfs.sh
start-mapred.sh

Verify Max Map Tasks and Max Reduce Tasks

Go to http://node1:50030, click your nodes in the cluster summary table and ensure max map and max reduce tasks are set to 1:

hadoop web task trackers

Upload Sample file (again)

hadoop dfs -copyFromLocal mediumfile.txt /mediumfile.txt

Balance HDFS file system

Of course it is possible to upload data on one node and then distribute it evenly across all nodes. Run the following to see how our mediumfile.txt is currently stored:

hadoop fsck /mediumfile.txt -files -blocks -racks

As you will most likely see, all the blocks are stored on node1. In order to spread the blocks evenly across all nodes, run the following:

hadoop balancer -threshold 0.1

The threshold parameter is a float value from 0 to 100 (a percentage). The lower it is, the more balanced your blocks will be. Since we only have one file, and that file is a very small percentage of our total storage, we need to set the threshold really small to put the balancer to work. After the balancing is complete, verify the file blocks again with:

hadoop fsck /mediumfile.txt -files -blocks -racks

Last run

hadoop jar /opt/hadoop/hadoop-examples-1.2.1.jar wordcount /mediumfile.txt /mediumfile-out.txt
File – Size – WordCount execution time (mm:ss)
mediumfile.txt – 35MB – 5:26

Finally we got a bit better performance! There are probably lots of other things we could fine-tune further, but for this tutorial we are happy with this. If you want to go further, there is plenty of material to find on Google and elsewhere. Hope you enjoyed it! Now go code some cool MapReduce jobs and put your cluster to work! :)

Besmir ZANAJ

Creating a new LDAP server with FreeIPA and configure to allow vSphere authentication

Was setting up a new FreeIPA sever for my homelab and found out that the default configuration in FreeIPA does not allow you to use VMware v...