27 October 2016

Build open source clouds with 4 OpenStack guides and tutorials

Every time you turn around, it seems like there’s a new open source project which might be of value to a cloud administrator. A huge number of these projects fall under the umbrella of OpenStack, the open source cloud toolkit.
Fortunately, there are plenty of tools out there to help with growing your OpenStack knowledge base, from meetups and in-person training, to mailing lists and IRC channels, to books, websites, and the official documentation.
Adding to that list are many individual members of the OpenStack community who share their own tutorials, guides, and other helpful information on their own blogs and community sites. To help you keep up with these, every month Opensource.com takes a look at the latest community-created educational content for OpenStackers and brings it to you here.

  • One of the more interesting aspects of OpenStack is that it really is a composable toolkit of different projects which are designed to be used in conjunction with one another but which can also provide value to projects outside of OpenStack itself. A great example of that is OpenStack’s storage projects, which can be used independently of OpenStack or swapped out within an OpenStack cloud. Recently, John Griffith provided a great tutorial on how OpenStack’s Cinder block storage project can be used with Docker and Linux container systems; a short sketch of driving Cinder from Python follows this list.
  • One of the challenges that comes up in having so many different interchangeable parts, particularly with storage components, is knowing how to choose the right one for your needs and the needs of your cloud’s users. Learn all about the various factors that are important to consider in this guide to selecting a storage backend for OpenStack.
  • Mistral provides a workflow service within OpenStack, which the TripleO project adopted in its most recent release cycle. As with any cloud project, the team encountered a few unexpected hiccups along the way, and documented them in this look at debugging Mistral in TripleO.
  • One challenge of a large project like OpenStack, with a diverse set of contributors often working on semi-independent projects, is that the code base can reflect many different coding styles, which makes it harder to read and maintain. Various automated tools can help rein this in; one such tool, ESLint, is specifically oriented toward JavaScript code. Learn how to implement ESLint for the JavaScript-based parts of your OpenStack project.
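To make the storage item above a bit more concrete, here is a minimal, hedged sketch of creating a Cinder volume from Python using python-cinderclient and keystoneauth1. The Keystone endpoint, credentials, and volume name are placeholders of my own, not values taken from the linked tutorial.

    # Minimal sketch: create a Cinder block volume from Python.
    # Endpoint, credentials, and names below are placeholders.
    from keystoneauth1 import loading, session
    from cinderclient import client

    loader = loading.get_plugin_loader("password")
    auth = loader.load_from_options(
        auth_url="http://controller:5000/v3",   # placeholder Keystone URL
        username="demo",
        password="secret",
        project_name="demo",
        user_domain_name="Default",
        project_domain_name="Default",
    )
    sess = session.Session(auth=auth)
    cinder = client.Client("3", session=sess)

    # Create a 1 GB volume that a container host could later attach and mount.
    volume = cinder.volumes.create(size=1, name="docker-data")
    print(volume.id, volume.status)

The same volume could just as easily be attached to a virtual machine, which is exactly the swappable, reusable quality the tutorial highlights.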



18 January 2016

An introduction to OpenStack clouds for beginners

What is OpenStack? Who might use it?

OpenStack is an open source cloud operating system written in Python to manage pools of compute, storage, and networking resources via a command-line interface (CLI) or a web-based dashboard. It is designed to run on commodity hardware and is sometimes referred to as Infrastructure as a Service (IaaS). OpenStack runs on common Linux platforms such as RHEL, SUSE, or Ubuntu.
OpenStack is an infrastructure (or, in simpler terms, a cloud). It can create an environment in which resource allocation is increased or decreased on demand, and in which the resources are not limited to a single location. Big data, web services, and Network Function Virtualization (NFV) for service providers are all good applications for OpenStack.
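As a rough illustration (not from the original article), those same pools of compute, storage, and networking resources can also be reached programmatically. The sketch below uses the openstacksdk library and assumes a cloud named "mycloud" is defined in a local clouds.yaml file.

    # Minimal sketch of talking to an OpenStack cloud with openstacksdk.
    # The cloud name "mycloud" is an assumed entry in clouds.yaml.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    # Each pool of resources mentioned above is exposed as a service proxy.
    for server in conn.compute.servers():        # compute (Nova)
        print("server:", server.name)
    for volume in conn.block_storage.volumes():  # block storage (Cinder)
        print("volume:", volume.name)
    for network in conn.network.networks():      # networking (Neutron)
        print("network:", network.name)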

What are the key services and components of OpenStack? What do they do?

OpenStack follows a six-month release cycle, with each release identified by a name rather than a number: the first release was Austin, the current release is Mitaka, and the two releases before it were Liberty and Kilo. Since the Kilo release, OpenStack has been shifting from the incubation/integrated model to the Big Tent model, in which projects are tagged with specific attributes.
The major components of a cloud infrastructure are compute, storage, and networking. These used to be called the core services of OpenStack, while all others were called the shared services.
Compute:
  • Nova: Provides virtual machines (VMs) on demand.
Storage:
  • Swift: Provides a scalable storage system that supports object storage.
  • Cinder: Provides persistent block storage to guest VMs.
Networking:
  • Neutron: Provides network connectivity as a service between interface devices managed by OpenStack services.
Shared services:
  • Keystone: Provides authentication and authorization for all the OpenStack services.
  • Glance: Provides a catalog and repository for virtual disk images.
  • Horizon: Provides a modular, web-based user interface for OpenStack services.
  • Ceilometer: Collects measurement (telemetry) data across OpenStack and provides a single point of contact for billing systems.
  • Heat: Provides orchestration services for multiple composite cloud applications.
  • Trove: Provides database-as-a-service (DBaaS) provisioning for relational and non-relational database engines.
  • Sahara: Provides a service to provision data intensive application clusters.
  • Magnum: Offers container orchestration engines for deploying and managing containers.
I have listed only the most common projects. New projects are added in each release.
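To give a feel for how several of these services cooperate, here is a hedged sketch (not part of the original article) that boots a virtual machine with openstacksdk: Keystone authenticates the connection, Glance supplies the image, Neutron the network, and Nova the VM itself. The image, flavor, and network names are assumptions and must already exist in the target cloud.

    # Hedged sketch: several OpenStack services working together to boot a VM.
    # "cirros", "m1.small", and "private" are assumed names, not givens.
    import openstack

    conn = openstack.connect(cloud="mycloud")        # Keystone handles auth

    image = conn.compute.find_image("cirros")        # image from Glance
    flavor = conn.compute.find_flavor("m1.small")    # flavor from Nova
    network = conn.network.find_network("private")   # network from Neutron

    server = conn.compute.create_server(
        name="demo-vm",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)    # wait until ACTIVE
    print("booted:", server.name, server.status)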
Since switching to the Big Tent approach, more and more projects are now considered part of OpenStack. There is a committee working on OpenStack DefCore, a minimum set of required capabilities that products must provide in order to use the OpenStack name.

Why use OpenStack and not just a traditional virtualization tool? What value does it provide over a hypervisor?

Virtualization tools abstract the resource from the physical hardware and allow for automation.
OpenStack pushes this one step further by providing an elastic, self-service, and measurable infrastructure for managing a pool of compute, storage, and networking resources. The resources that OpenStack manages can be either physical or virtual.
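As a small, hedged illustration of the self-service point, a tenant can carve out its own network through the API without waiting on an administrator. The names and CIDR below are placeholder values of my own.

    # Hedged sketch: self-service networking with openstacksdk.
    # Network name, subnet name, and CIDR are placeholder values.
    import openstack

    conn = openstack.connect(cloud="mycloud")
    net = conn.network.create_network(name="selfservice-net")
    subnet = conn.network.create_subnet(
        network_id=net.id,
        name="selfservice-subnet",
        ip_version=4,
        cidr="192.168.10.0/24",
    )
    print(net.id, subnet.cidr)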

How can OpenStack work with containers? Why might an enterprise wish to do this?

Project Magnum uses OpenStack as the infrastructure on which to deploy Docker containers. Before Magnum, Docker was listed as a hypervisor type in Nova (the OpenStack compute service).
Magnum introduces the concepts of pods, bays, and services, which work together as if they were a single application to which an access policy can be applied.
The container orchestration engine (COE) allows multiple Docker containers to be deployed as a unit. At this time, the COEs supported in Magnum are Kubernetes, Docker Swarm, and Apache Mesos.
One popular use of containers in the enterprise space is microservices, wherein a big, monolithic application is divided into small "micro-services," each implemented as a container. This trend in application deployment provides agility, scalability, and high availability.
The Liberty release introduced project Kuryr, which is built on top of Neutron and addresses networking issues specific to containers in an OpenStack infrastructure.

What does a typical OpenStack deployment look like?

I don't think there's such a thing as a typical OpenStack deployment, and that's the beauty of it. While it is not a one-size-fits-all product, OpenStack offers a very flexible and rich infrastructure. What it can offer is limited only by what the architect can come up with. OpenStack is just like a LEGO set; we can pick and choose pieces to fit a particular deployment requirement. Not only are the resources in OpenStack elastic, but the feature set is also elastic in the sense that we can add and remove features.
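One hedged way to see which "bricks" a particular deployment has chosen is to read its Keystone service catalog. The short sketch below assumes an account with sufficient privileges in a cloud named "mycloud"; it is an illustration, not a prescribed method.

    # Hedged sketch: list the services registered in a deployment's catalog.
    # Listing services typically requires admin-level credentials.
    import openstack

    conn = openstack.connect(cloud="mycloud")
    for service in conn.identity.services():
        print(service.type, "->", service.name)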

Creating a new LDAP server with FreeIPA and configuring it to allow vSphere authentication

I was setting up a new FreeIPA server for my homelab and found out that the default configuration in FreeIPA does not allow you to use VMware v...