Myths, truths and our take on IBM Cloud Paks

With IBM’s purchase of Red Hat, the entire portfolio of software solutions for cybersecurity, applications, databases, automation and hybrid cloud management has been ported to OpenShift under the brand name IBM Cloud Paks. This means that many of these applications have been redesigned and adapted to run on containers (although some, such as QRadar, have been doing so for years) and to be orchestrated by Kubernetes, the container orchestrator on which OpenShift is based.

How are IBM Cloud Paks deployed?

IBM Cloud Paks are installed on a PaaS environment with OpenShift, both in your own data centers on IBM Power Systems, VMware or KVM (RHEV / LinuxONE), and in the public clouds of Microsoft (Azure), IBM, Amazon (AWS) and Google (GCP). Thanks to IBM Cloud Satellite, they can be deployed across a combination of on-premises and cloud resources in a flexible hybrid architecture. Our professional services department can help you.

How are IBM Cloud Paks licensed? How much do they cost?

This is perhaps one of the least known and, in our opinion, most controversial parts. IBM has always sold perpetual licenses for all its software solutions. These licenses come with basic technical support for HW incidents and a more advanced one for SW (SWMA), typically renewed every 3 years. By moving to a cloud environment we are moving towards pay-per-use systems, which are very scalable and sometimes, to be honest, very expensive and complicated to estimate. Cloud Pak for Data, for example, is priced per “virtual core”. That is, from a few hundred dollars to a few hundred thousand… :)

This has obvious advantages for solutions where it makes sense to keep renewing support, as is the case with open and extremely complex solutions such as those based on micro-services and containers. Customers who are not comfortable with this model can continue to purchase appliances or licenses to deploy many of these solutions in their own infrastructure, with a one-time payment and optional support renewed every few years. For other solutions, pay-per-use is the only model, as they are native to Kubernetes and cloud-based environments.

Do I need to have OpenShift to install an IBM Cloud Pak?

Short answer: yes. However, if you don’t have it, you can deploy it without too many problems thanks to the installers included in the latest versions, both on your own infrastructure and on an external one (IaaS) from your favorite cloud provider.
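If you need to deploy OpenShift first, the openshift-install CLI automates most of the work. A minimal sketch of the installer-provisioned (IPI) flow follows; the cluster directory name is illustrative, and the interactive wizard will ask for your cloud credentials and domain:

    # Generate the install configuration interactively
    openshift-install create install-config --dir ./mycluster
    # Provision the infrastructure and install the cluster
    openshift-install create cluster --dir ./mycluster
    # Credentials for the new cluster are left under ./mycluster/auth/
    export KUBECONFIG=./mycluster/auth/kubeconfig
    oc get nodes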

Are IBM Cloud Paks worth it?

As the free spirits of the systems integrator sector that we are, we think that some are, and others not so much (at least for now). It depends on their intended use, the dependencies we have on other applications, and our organization’s maturity in adopting containers and Kubernetes. If we are just starting out with Docker, OpenShift and cloud environments, perhaps it is better to stick to a good digital transformation and modernization plan rather than “putting the cart before the horse”.

Are there IBM Cloud Paks courses or training?

To take advantage of this technology you need to master both the infrastructure (OpenShift), for which there is official training offered by Red Hat as well as an intensive hands-on workshop developed by ourselves, and, once the infrastructure is under control, the different IBM products and solutions you are interested in, as they are collections of software grouped by category and licensed together. Cloud Pak for Security, for example, is primarily the IBM QRadar SOAR Platform, while Cloud Pak for Applications includes the entire WebSphere suite.

That said, if you want, let’s talk.

 

Upgrade your IBM Power9 or your LPARs may not start up.

We often overlook the need to perform preventive updates not only of the operating system (AIX, Linux, IBM i) but also of the FW of IBM Power servers. IBM publishes fixes for problems that other customers have already experienced, so there is no need for you to suffer them as well. By applying them, we keep our systems safe from all kinds of external and internal threats and vulnerabilities.

The problem we talk about in this short article is a bug that prevented LPARs from booting on Power9 servers if they had been running for more than 814 days. It sounds a bit like the printers of a few years ago that failed after printing several hundred thousand pages; we will never know whether on purpose or by mistake. In IBM’s case it is a recognized firmware bug, fixed by update VH950_045_045 / FW950.00, available since November 23, 2020. So if your aging Power9 systems have not been updated in the last two years, you are likely to run into this problem before the year is out.

We give you a hint: the error is CA000040, which prevents the LPAR from booting, and whose temporary workaround is to use Power8 compatibility mode from the HMC or ASMI while you install the pending updates.

At Sixe Ingeniería we have been preventively maintaining our clients’ IBM Power systems for more than 15 years. We can help you monitor, update and preventively maintain your entire infrastructure of IBM and Lenovo server and storage systems. We also help you minimize licensing costs and offer the best prices on upgrades to new generations of IBM Power servers. Contact us for more information.

We installed and tested the new IBM AIX 7.3

After joining IBM’s OpenBeta program, we have been able to download and test the new version of AIX 7.3, which comes on its 35th anniversary.

Among its novelties, the following stand out:

  • Python and Bash now ship with AIX out of the box; no more installing them manually!
  • Support for the dnf command (standard in Red Hat) for installing open source packages from the AIX Toolbox (see the example after this list). AIX has been speaking Linux for a long time, but since version 7.3 it is more and more integrated, providing developers and system administrators with all the features needed to modernize UNIX environments.
  • Reduced time to dynamically add processors and memory to a running LPAR, helpful for LPARs running databases with hundreds of GB or several TB of RAM. This comes together with reduced IPL times for this type of partition.
  • The pigz and zlibNX commands now transparently use NX GZIP acceleration on Power9 and Power10.
  • Enhanced logical volume (LVM) encryption support, now covering rootvg and the dump device.
  • The TCP stack now supports CUBIC, a TCP congestion avoidance algorithm that sustains high-bandwidth connections across networks faster and more reliably.
  • Additional IP security (IPsec) enhancements.
  • The ability to create an OVA file from a mksysb with the create_ova command, speeding up cloud (PowerVS) and hybrid deployments.
  • Creation of an ISO image from a mksysb with the new mksysb_iso command.
  • Integration with the new IBM Open XL C/C++ and Fortran compilers.
  • Increased maximum file size and file system size.
  • Improved Ansible and Ansible Tower support.
  • PowerVC 2.X integration.
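As a quick illustration of the new dnf support, package management on AIX 7.3 looks just like it does on a Red Hat system. This is a sketch that assumes the AIX Toolbox repositories are already configured; the package names are examples:

    # List the configured AIX Toolbox repositories
    dnf repolist
    # Install open source packages exactly as on Linux
    dnf install -y git bash
    # Bring all Toolbox packages up to date
    dnf update -y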

 

Meet the new IBM Power10 servers and AIX 7.3

The Power of 10

September 8th is the date of the official announcement of the new IBM servers with Power10 processors, which will be followed by the announcement of AIX 7.3; AIX turns 35 in 2021. From the technical features available, we know that they incorporate DDR5 memory and a PCIe 5.0 interface, and that they are manufactured on Samsung’s 7nm process. The Power10 processors will once again come in two flavors: one with 15 cores in SMT-8 mode (ideal for AIX and IBM i) and another with 30 cores and SMT-4 for Linux-only workloads. The Power10 chips also incorporate major enhancements for Artificial Intelligence (AI), allowing Machine Learning workloads to run up to 20 times faster than on POWER9.

One million SAPS. Infrastructure matters a lot.

As usual, the first systems to be announced are scale-up systems designed for highly virtualized environments with resource-intensive applications such as SAP HANA. Published benchmarks indicate that 1 million SAPS are achieved with 120 cores, half the cores needed in the previous Power9 generation on E980 servers. Among comparable third-party servers available this year, HPE achieved about 670,000 SAPS (roughly 120,000 concurrent users) using 224 cores in its Superdome Flex 280 based on Intel’s most powerful processors (the Xeon Platinum). If those figures don’t mean much to you, the other reading is that per-core performance has kept improving substantially, while the rest of the manufacturers let it stagnate and compensate with complementary hardware (flash memory, more cores, etc.).

All the memory you need

The arrival of “Memory Inception” technology allows you to create clusters of systems that share memory with each other, reaching several petabytes of RAM available to a single environment spread across several physical servers. This positions IBM as a leader in the development of hardware technologies for application clusters on Red Hat OpenShift. The “medium” two- and four-socket servers, where we will be able to keep deploying mixed IBM i, AIX and Linux environments, are expected to be announced soon.

End-to-end encryption

We cannot end this article without mentioning one of the key features of the IBM Power platform: data security. The new processors incorporate four times more AES encryption components, anticipating the cryptographic standards coming in 2022, such as quantum-safe cryptography and fully homomorphic encryption. All of this applies to new container-based workloads, where security has become the primary concern of the organizations that use them.

AIX 7.3, UNIX beyond 2030

Although it deserves an article of its own, the new version of AIX, 7.3, will be announced alongside Power10; a new version number is something that has not happened since 2015. Numbering is a matter of marketing: if IBM had chosen to call this version 8.1, it would perhaps have raised doubts about whether the new features affected stability for existing applications, but like any new version it incorporates many interesting features. Today we continue to deploy new environments on AIX, as well as migrating others from Solaris, HP-UX and even Linux.

In all of our large and medium-sized clients there is a part of their production environments where the information that keeps their business and internal processes alive is processed. Where do you install Oracle, DB2, SAP, SAS, etc.? On AIX. No other UNIX-like operating system offers the same maturity, stability, performance and scalability. It is a modern UNIX, highly compatible with modern tools such as Chef, Puppet and Ansible, that coexists wonderfully with other environments based on Linux, IBM i or z/OS, and it has a lot of life ahead of it; the new version 7.3 is good proof of this. It also has three big advantages for IT departments and system administrators: everything works (versus that beta-tester feeling so ingrained in Linux), it runs on the most stable and robust servers out there (except for the mainframe), and you learn it only once, instead of every time a new version is released: we all remember the moment when “ifconfig -a” stopped working in Red Hat :)

Time to renew equipment, licenses… and to upgrade

With the arrival of a new processor technology, the “sales” begin at IBM. If you have Power7 or Power8 equipment whose maintenance contracts are about to expire (or are already out of support) and you are considering whether or not to renew them, count on our help. We can advise you on how to save a lot of money with our license audit and renewal services, help you get 100% out of the IBM Power equipment you already have, and offer you new Power9 (and soon Power10) servers at cost price.

Need technical support?

At Sixe Ingeniería we offer technical support and preventive maintenance of AIX and Linux on Power Systems directly and without intermediaries. We will be happy to help you.

Seize the True Power of CI/CD with Tekton and Kubernetes Pipelines

The introduction of Kubernetes (Tekton) Pipelines has revolutionized the way we handle CI/CD workflows in software development. Tekton, a Kubernetes-native framework, gives us more power and flexibility in creating and managing pipelines. This article focuses on the importance of Kubernetes Pipelines and Tekton on Red Hat OpenShift, and how these tools can make your development process truly continuous.

What is a Pipeline?

A pipeline is an automated process that drives software through the building, testing, and deploying stages of the software development lifecycle. In other words, a pipeline executes the Continuous Integration and Continuous Delivery (CI/CD) workflow. It automatically handles the tasks of running the test suite, analyzing code, creating binaries, containerization, and deploying changes to the cloud or on-premise solutions.

Why Should You Build Pipelines with Kubernetes?

As the development world embraces microservices-based applications over monolithic ones, the CI/CD process has become truly continuous, with incremental updates to the codebase that are independently deployable.

In such a setting, Kubernetes simplifies the process of creating and maintaining CI/CD pipelines. It deploys each microservice to a single Kubernetes cluster and maintains several copies of each microservice to serve as dev, test, and prod versions.

With Kubernetes pipelines, you no longer have to rebuild the entire application during each build. Instead, Kubernetes updates the container of the microservice and deploys it through the defined pipeline. There’s no need to write build scripts anymore, as Kubernetes handles the process automatically with only a few configuration options we provide. This reduces the chance of human errors in the CI/CD workflow.
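For example, rolling out a new version of a single microservice is a one-line operation against the cluster. In this hedged sketch the deployment, container and image names are hypothetical:

    # Replace the image of one microservice without rebuilding the rest
    kubectl set image deployment/payments payments=registry.example.com/payments:1.4.2
    # Watch the rolling update until it completes
    kubectl rollout status deployment/payments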

What is Tekton?

Tekton allows you to take Kubernetes pipelines to the next level. It’s an open-source, Kubernetes-native framework for developing CI/CD pipelines. Tekton provides extensions to Custom Resource Definitions (CRDs) in Kubernetes to make it easier to create and standardize pipelines. It has in-built support for coupling with existing CI/CD tools in the industry such as Jenkins, Jenkins X, Skaffold, Knative, and OpenShift.

The OpenShift integration of Tekton, named OpenShift Pipelines, introduces even more power and flexibility to this system through Red Hat and OpenShift developer tools.

Why Should You Use Tekton?

Tekton pipelines use Kubernetes clusters as a first-class type and containers as their primary building blocks. The decoupled nature of Tekton ensures that you can use a single pipeline to deploy to separate Kubernetes clusters. This makes it easier to deploy services across multiple cloud solutions supplied by different vendors or even across on-premise systems.

Tekton allows you to run the automated tasks in isolation, without being affected by other processes running in the same system. Another specialty of Tekton is the flexibility it provides to switch resources, such as GitHub repositories, between pipeline runs.

It also facilitates switching pipeline implementation depending on the type of resource. For example, you can set up a unique implementation to handle images.

Tekton coupled with OpenShift ensures high availability of the system by allowing each unit to scale independently on demand, and you get improved logging/monitoring tools and fast self-recovery features built into Kubernetes.

How Does Tekton Work?

Tekton provides Kubernetes-style CRDs for declaring CI/CD pipelines. The resources are declared in a YAML file that is usually stored with the code repository. Below are the basic CRDs that are essential when creating pipelines.

Task

A Task is the smallest configurable unit in a Tekton pipeline. It’s similar to a function that accepts a set of inputs and outputs certain results. Each task can either run individually and independently or as a part of the pipeline. A command executed by a Task is called a Step. Each task consists of one or more Steps. Tekton executes each Task in its own Kubernetes pod.
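A minimal sketch of a Task with a single Step, applied with a YAML here-document; the Task name, image and test command are illustrative:

    oc apply -f - <<'EOF'
    apiVersion: tekton.dev/v1beta1
    kind: Task
    metadata:
      name: run-tests
    spec:
      steps:
        - name: unit-tests        # one Step = one command in its own container
          image: golang:1.16
          script: |
            go test ./...
    EOF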

Pipeline

A Pipeline consists of a number of Tasks that form the final automated CI/CD workflow. In addition to Tasks, it also contains PipelineResources. They are provided as inputs and outputs to Pipeline Tasks.

PipelineResource

A PipelineResource is an object that is used as an input or an output to a Task. For example, if the Task accepts a GitHub repository as input and builds and outputs the related Docker image, both of them are declared as PipelineResource objects.

PipelineRun

A PipelineRun is an instance of a Pipeline that is being executed. It initiates the execution of the Pipeline and manages the PipelineResources passed to Tasks as inputs and outputs.
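Continuing the sketch above, a Pipeline referencing the run-tests Task and a PipelineRun that executes it could look like this (all names illustrative):

    oc apply -f - <<'EOF'
    apiVersion: tekton.dev/v1beta1
    kind: Pipeline
    metadata:
      name: build-and-test
    spec:
      tasks:
        - name: test
          taskRef:
            name: run-tests      # the Task defined earlier
    ---
    apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      name: build-and-test-run-1
    spec:
      pipelineRef:
        name: build-and-test
    EOF
    # Follow the run with the Tekton CLI
    tkn pipelinerun logs build-and-test-run-1 -f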

TaskRun

A TaskRun is a running instance of a Task. PipelineRun creates TaskRun objects for each Task in the Pipeline to initiate the execution.

Trigger

A Trigger is an external event that triggers the CI/CD workflow. For example, a Git pull request could act as a Trigger. The information passed with the event payload is then used to trigger the Tasks in the Pipeline.

Condition

Conditions are similar to if statements in regular programming. They perform a validation check against provided conditions and return a True or False value. The Pipeline checks these Conditions before running a Task. If the Condition returns True, the Task is run, and if it returns False, the Task is skipped.

With these components, you can create complex, fully automated pipelines to build, test, and deploy your applications to the cloud or on-premise solutions.

Who Should Use Tekton?

Platform engineers who build CI/CD workflows for developers in an organization would find Tekton an ideal framework to make this process simpler. Developers too can build CI/CD workflows with Tekton for software and application development projects. This gives them the ability to easily manage different stages of the development process, such as dev, test, prod versions of the product, with minimal human interference.

What’s Next?

Refer to the official Tekton and OpenShift Pipelines documentation to learn more about how to set up CI/CD pipelines that fulfill your organization’s needs with ease.

Need help?

We offer Kubernetes and OpenShift training, and we can help you buy, deploy and manage your OpenShift environment on IBM Power Systems.

Everything you need to know about Rancher – enterprise Kubernetes management

One of the most valuable innovations that have happened in cloud computing is the use of containers to run cloud-based applications and services. Platforms like Kubernetes have made it much easier to manage containerized workloads and services on cloud platforms. For those who may not know, Kubernetes is an open-source platform for deploying, managing and automating containerized workloads and services.

Being open-source, Kubernetes has several distributions that you can choose from if you intend to deploy workloads in the cloud. One of the distributions you can choose is Rancher. If you are keen to learn more about Rancher and how it compares with other Kubernetes distributions, this article is for you. We shall discuss what Rancher is, its key features, why you should use it, and how it compares with alternatives. Let’s dive in!


What is Rancher?

Rancher is a software stack used to manage Kubernetes clusters. It is basically software that DevOps teams can use while adopting the use of containers. Rancher includes a full distribution of Kubernetes, Docker Swarm, and Apache Mesos, making it simple to manage container clusters on any cloud platform. Some of the popular companies that use Rancher include: Alibaba Travels, Abeja, Trivago, UseInsider, Starbucks, Oxylabs, Yousign, and many more.

Rancher was recently acquired by SUSE, and this acquisition will significantly change its direction. SUSE already had its own container management solution, but after acquiring Rancher it will most likely pivot away from its initial solution and focus on making Rancher even better.

One of Rancher’s significant benefits is the ability to manage multiple Kubernetes clusters in a simplified manner. Those clusters can be created manually with Rancher’s own Kubernetes distribution, RKE (Rancher Kubernetes Engine), or imported into the Cluster Manager panel.

Besides Rancher Kubernetes Engine (RKE), Rancher has initiated several other innovative projects, one of which is K3s, a lightweight Kubernetes distribution mainly used in edge computing. Now that SUSE has taken over Rancher, we hope it will improve even further into a complete Kubernetes platform.

Features in Rancher

Some of the main features in Rancher include the following:

  • Docker Catalog App
  • Included Kubernetes Distribution
  • Included Docker Swarm Distribution
  • Included Mesos Distribution
  • Infrastructure Management
  • Manage Hosts, Deploy Containers, Monitor Resources
  • User Management & Collaboration
  • Native Docker APIs & Tools
  • Monitoring and Logging
  • Connect Containers, Manage Disks, Deploy Load Balancers

Why use Rancher?

With several other distributions of Kubernetes on the market, why choose Rancher? Let’s look at some of the key advantages Rancher offers.

  • It is easy to use: One of the reasons one would choose Rancher over any other Kubernetes platform is the simplified web UI that makes it easy to do whatever you need. It is a platform that even developers who are not so experienced with Kubernetes can easily get started with.
  • It can easily be deployed on any cloud infrastructure: Another critical advantage that Rancher has over other Kubernetes platforms is its compatibility with different cloud platforms; so, you can quickly deploy it on any cloud infrastructure.
  • Simplifies managing clusters: Rancher is probably the best choice to manage multiple Kubernetes clusters from one interface. Its ability to manage multiple clusters is one of the significant strengths that were built at the core of Rancher.
  • It includes load balancing and health checks: this is one of the major features included in Rancher, which is very handy if you intend to deploy a system that is likely to get huge traffic.
  • It is open-source and totally free: RKE, K3s, and all other Rancher products are open source and free for anyone to use. If you don’t have a budget to spend on container management software, then Rancher is the best choice for you. However, getting support from Rancher Labs requires paying some money.

When not to use Rancher

Despite having lots of advantages, there are certain scenarios where it is advisable not to use Rancher. Below are some of the situations where you should avoid using Rancher.

  • If you are interested in more mature products: When compared to other Kubernetes platforms like OpenShift, Rancher is pretty new and is still evolving. If you are the kind of person that loves using already mature products that won’t experience any radical changes, you might be disappointed with Rancher.
  • If you don’t intend to use multiple clusters: One of the major strengths that Rancher has over other Kubernetes distributions is its ability to manage multiple container clusters from one interface. For those managing single clusters, you will likely not put Rancher to good use, so you are better off choosing another platform.

How Rancher compares with other alternatives like OpenShift

One of the key strengths that OpenShift has over Rancher is that it is a mature platform and has full support from Red Hat. If you are already into the Red Hat ecosystem, your obvious choice for managing containers should be OpenShift. Rancher also has support from Rancher Labs, but it is not as reliable as Red Hat’s. Using Rancher is more logical if you intend to manage multiple container clusters.

Conclusion

Rancher is an excellent tool for managing and automating Kubernetes clusters. It also has lots of handy features that you can take advantage of, especially if you are managing multiple Kubernetes clusters.

The ability to manage all your clusters from one place is one of the reasons you should choose Rancher over any other platform if you intend to manage multiple clusters. Rancher is also very easy to learn and use, so new Kubernetes users can quickly get started with Rancher.

Need training, consulting or architecting?

We are SUSE and Red Hat Business Partners. We can help you deploy both Rancher and OpenShift PoCs so you can evaluate and try both solutions. We have also developed Docker/Kubernetes and OpenShift 4 hands-on trainings that could be of interest to you.


Red Hat OpenShift Platform Plus: What’s new?

OpenShift Platform Plus is the latest member of the OpenShift family, Red Hat’s PaaS solution for Kubernetes-based applications. It was announced in conjunction with Red Hat Enterprise Linux 8.4. Thanks to Red Hat OpenShift Platform Plus, organizations can, for a slight additional cost (discussed later), manage not only applications but also their security policies and configurations, regardless of where those applications run. It includes new built-in support for the application lifecycle in multi-cluster environments, as well as the ability to create clusters of just 3 nodes, or even with remote “worker” nodes, expanding Kubernetes clusters to almost any location, including facilities with little available power.


What is Included in OpenShift Platform Plus?

  • Kubernetes Engine (base layer of OpenShift on Linux OS)
  • Orchestration/management control (Red Hat Advanced Cluster Management for Kubernetes, ACM)
  • Security protocols (Red Hat Advanced Cluster Security for Kubernetes, ACS)
  • Registry software (Red Hat Quay)

OpenShift Platform Plus provides all the base, management, and security features in one package; these were previously available separately, each at its own price. Advanced Cluster Security (ACS) was added to OpenShift after the acquisition of StackRox, a Kubernetes-native security provider.

Multi-cloud and CI/CD ready

OpenShift provides a complete solution to build an environment or run a container-based application on hybrid cloud or on-premises server infrastructure. OpenShift offers two types of container application infrastructure: managed and self-managed. Managed platforms are fully featured cloud-based services on Azure, IBM Cloud, AWS, and Google Cloud, while self-managed platforms are highly customizable but require a highly skilled team for every part of the deployment.

OpenShift has evolved over time, adding critical features for enhanced functionality. First came OpenShift Kubernetes Engine, which includes the enterprise Kubernetes runtime, the Red Hat Enterprise Linux CoreOS immutable container operating system, the administrator console, and Red Hat OpenShift Virtualization. Then came OpenShift Container Platform, augmented with the developer console, log/cost management, OpenShift Serverless (Knative), OpenShift Service Mesh (Istio), OpenShift Pipelines (Tekton) and OpenShift GitOps (Argo CD). Comprising all of the features previously available, OpenShift Platform Plus arrives with additional features.

OpenShift Platform Plus Features

Red Hat Advanced Cluster Management for Kubernetes

This is the advanced management layer of OpenShift. It gives customers unified multi-cluster management (editing clusters on public or private clouds, allocating resources for clusters, and troubleshooting issues across the entire domain from a single view) through the OpenShift web console. Furthermore, it provides policy-based management, standardizing policy parameters for security, applications, and infrastructure.

Applications can be deployed across the network using advanced application lifecycle management. It also helps control application flow over the nodes and Day-2 configuration management using Ansible. Advanced Cluster Management (ACM) aims to provide cluster health solutions related to storage, optimization, and troubleshooting. OpenShift monitoring tools are well designed to operate with ease and efficiency.

Red Hat Advanced Cluster Security for Kubernetes (ACS)

ACS was added to the OpenShift family after the acquisition of StackRox, and it powers the core security component of OpenShift Platform Plus. This security feature differs from previously deployed security measures: before, security protocols were applied after the application was developed, whereas ACS brings security in from the very beginning, i.e., into the codebase. Advanced security features augment every step of the application development lifecycle.

Advanced Cluster Security follows international container security standards such as the CIS and NIST benchmarks. The security protocols include protection against data breaches, network protection, elimination of blind spots, reduced time and cost through efficient security-policy-as-code, and the avoidance of operational conflicts, data overlap, and redundancy. ACS is a perfect execution of DevSecOps principles.

Red Hat Quay

A container registry is a storage platform used to store container images for Kubernetes and other container-based development such as DevOps workflows. When a container application is built, its image is created: a package, somewhat like an .exe file, that contains all the files and components needed for a successful installation. When a container image is placed on another platform, it is used to create the same container application; in other words, a container image is a template used to create more application instances.

Red Hat Quay is a private container registry used to create, distribute, and store container images. When container images are shared across network repositories, certain vulnerabilities show up. Red Hat Quay uses Clair security scanning to cope with such vulnerabilities, providing strict access controls, authentication protocols, and solutions to other distribution issues (see the push example after the feature list below).

Red Hat Quay Features

  • Each image is tagged with a timestamp; Red Hat Quay offers a “time machine” for image version tagging and rollback, such as downgrading or restoring previous states, with a configurable two-week history of image tags.
  • Geographic distribution ensures quick and flawless image delivery using Content Distribution Networks (CDNs), so that each access point has a nearby repository. Red Hat Quay can also use BitTorrent technology to reduce waiting times for content availability.
  • Runtime resource garbage collection helps identify unused or rarely used operations to optimize resource use and increase efficiency.
  • Red Hat Quay offers unlimited storage for multiple image collections.
  • Automated triggering of continuous integration/continuous delivery (CI/CD) pipelines.
  • Log-based auditing by scanning APIs and user interfaces.
  • Authentication protocols using LDAP, OAuth, OIDC, and Keystone ensure secure logins and hierarchical organizational access control.
  • Account automation: creation of the credentials required to run the program.
  • Multi-platform adaptability.
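As a hedged example, pushing an image to a Quay registry from a build machine takes three commands; the registry hostname, organization and tag below are placeholders:

    # Authenticate against the registry
    podman login quay.example.com
    # Tag the local image with the registry path
    podman tag myapp:latest quay.example.com/myorg/myapp:1.0
    # Push it; Clair scans the image for vulnerabilities on arrival
    podman push quay.example.com/myorg/myapp:1.0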

Red Hat OpenShift Platform Plus Pricing

The advanced features of OpenShift Platform Plus are expected to become available between April and June 2021; the final price has not been published yet. OpenShift.io is a free platform for cloud-based deployment. The cost of Platform Plus depends on sizing and subscription; according to Capterra, each Plus feature costs $20 per month on self-managed plans. The best way to understand and choose a subscription model is to contact our sales department. You can also request a demo of OpenShift Plus for free.

Red Hat OpenShift Platform training and professional services

We offer Kubernetes and OpenShift training, and we can help you buy, deploy and manage your OpenShift environment on IBM Power Systems. Contact us for more information.

Red Hat OpenShift 4.7 is here and the installer is included.

Installation wizard, Kubernetes 1.20 and more news in OpenShift 4.7

Kubernetes 1.20 and CRI-O 1.20

Its technology is based on Kubernetes 1.20 and the CRI-O 1.20 container engine, which completely replaces what little was left of Docker. OpenShift 4.7 continues its path of new features, which we will talk about later, but if we can say something relevant about this version it is that it is more stable, much more stable. Many changes have been made throughout the software stack that improve the availability, resiliency and robustness of the platform as a whole. More and better checks have been implemented, a new diagnostic system for the installer has been added, and the error codes have been enriched. It also includes improvements in the control panel to ease monitoring of Pipelines, operators (connected or disconnected), storage systems and networks.

An installer for bare-metal servers?

Yes, at last. This version comes with an installation wizard (Technology Preview): a breakthrough that simplifies the deployment of the whole set of packages, dependencies, services and configurations required to install OpenShift on your own servers.

But wasn’t it always installed in the cloud with an automatic installer?

Yes and no. The cloud installer is great and everything works like magic, BUT there are more and more workloads (AI, HPC, DL, Telco/5G, ML) that are unfeasible to deploy in the cloud because of costs (you have to upload and download many, many GBs) and performance. We have already discussed how to configure OpenShift for AI/DL environments on IBM Power Systems servers. One of the main objections to such deployments was the complexity of installing the environment manually; the installer will simplify it a lot.

Windows Containers

It sounds strange, but there are millions of Microsoft customers in the world, and if a platform like this wants to succeed, it needs to support them. That’s why Red Hat OpenShift 4.7 continues to expand support for Windows Containers, a feature first announced at the end of 2020. In addition to support for Windows Containers on AWS and Azure, OpenShift now includes support for vSphere (available in early April 2021) using the Installer Provisioned Infrastructure (IPI) for VMware. Red Hat customers can now migrate Windows containers from their VMware virtualized systems to their Red Hat OpenShift cluster in a simple and, most importantly, fully supported and secure manner.

IPSec support in OVN

In hybrid cloud environments, one of the big challenges is how to connect our remote nodes and clusters to each other or to our local data centers. OpenShift’s virtual network (OVN) support for the IPsec protocol makes this much easier.

Automatic pod scaling

The last feature we want to highlight is horizontal pod autoscaling (HPA). By measuring memory (or CPU) utilization and acting through a replication controller, it can expand the number of pods or change their characteristics automatically.
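As a sketch (the deployment name and thresholds are illustrative), CPU-based autoscaling takes a single oc command, while memory-based scaling uses an autoscaling/v2beta2 manifest:

    # CPU-based autoscaling from the CLI
    oc autoscale deployment/frontend --min=2 --max=10 --cpu-percent=80
    # Memory-based autoscaling via a manifest
    oc apply -f - <<'EOF'
    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: frontend-memory
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: frontend
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: memory
            target:
              type: Utilization
              averageUtilization: 80
    EOF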

If you want to try Red Hat OpenShift 4.7, there are a number of ways to do it, from online learning tutorials and demos on your laptop to deployments in the public cloud or your own data center.

You can check the rest of the news at https://www.openshift.com/blog/red-hat-openshift-4.7-is-now-available, set up your demo environment at home and start training now with our practical courses.

Machine & Deep Learning on-premises with Red Hat OpenShift 4.7

By providing vast amounts of data as training sets, computers have become capable of making decisions and learning autonomously. AI is usually undertaken in conjunction with machine learning, deep learning and big data analytics. Throughout the history of AI, the major limitations have been computational power, CPU-memory-GPU bandwidth and high-performance storage systems. Machine learning requires immense computational power, with intelligent execution of commands keeping processors and GPUs at extremely high utilization, sustained for hours, days or weeks. In this article we will discuss the different alternatives for these types of workloads: from cloud environments like Azure, AWS, IBM or Google to on-premises deployments with secure containers running on Red Hat OpenShift and IBM Power Systems.

Before executing an AI model, there are a few things to keep in mind. The choice of hardware and software tools is just as essential as the algorithms for solving a particular problem. Before evaluating the best options, we must first understand the prerequisites for establishing an AI running environment.

What Are the Hardware Prerequisites to Run an AI Application?

  • High computing power (GPUs can accelerate deep learning by up to 100 times compared with standard CPUs)
  • Storage capacity and disk performance
  • Networking infrastructure (from 10 Gb)

What Are the Software Prerequisites to Run an AI Application?

  • Operating system
  • Computing environment and its variables
  • Libraries and other binaries
  • Configuration files

Now that we know the prerequisites for establishing an AI setup, let’s dive into all the components and the best possible combinations. There are two choices for an AI deployment: cloud and on-premises. As we have already said, neither is inherently better; it depends on each situation.

Cloud infrastructure

Some of the best-known cloud platforms are:

  1. Amazon Web Services (AWS)
  2. Microsoft Azure
  3. Google Cloud Platform (GCP)
  4. IBM Cloud

In addition to these, there are clouds specialized in machine learning. These ML-specific clouds provide GPUs rather than CPUs for better computation, along with specialized software environments. They are ideal for small workloads with no confidential or sensitive data. When we have to upload and download many TB of data or run intensive models for weeks at a time, being able to reduce those times to days or hours and run the models on our own servers saves a lot of money. We will talk about this next.

On-premises servers and Platform-as-a-Service (PaaS) deployments

These are specialized servers located on an AI company’s own premises. Many companies provide highly customized, even built-from-scratch, on-prem AI servers. For example, IBM’s AC922 and IC922 are perfect for an on-premises AI setup.

Companies must choose between the two options above considering future growth and the tradeoff between current needs and expenses. If your company is just a startup, cloud AI servers are best, because this choice eliminates the worry of installations at somewhat affordable rates. But if your company grows and more data scientists join, cloud computing will no longer ease your burdens. In that case, technology experts recommend on-prem AI infrastructure for better security, customization, and room to expand.

ALSO READ: Deploy Your Hybrid Cloud for ML and DL

Choosing the best HW architecture

Almost all cloud service platforms now offer GPU-backed computation, as a GPU is nearly 100 times more potent than an average CPU, especially when machine learning involves computer vision. But the real problem is the data flow rate between the node and the cloud server, no matter how many GPUs are connected. That is why the on-prem AI setup gets more votes: data flow is no longer a big problem.

The second point to consider is the bandwidth between GPUs and CPUs. On traditional architectures such as Intel, this traffic travels over PCIe channels. IBM developed a connector called NVLink so that NVIDIA GPUs and Power9 cores can communicate directly without intermediate layers. This multiplies the bandwidth, which is already more than twice as high between processor and memory. Result: no more bottlenecks!

Having pointed out the software prerequisites for running AI applications above, we now have to consider the best software environment for optimal AI performance.

What data-center architecture is best for AI/DL/ML?

When talking about servers for AI, the traditional design was virtualization: simply the distribution of computing resources under separate operating systems. We call each independent operating system environment a “virtual machine.” Running AI applications on virtual machines brings multiple constraints. One is the resources needed to run the entire system, including both OS operations and AI operations: each virtual machine requires extra computational and storage resources. Moreover, it is not easy to move a running program from one virtual machine to another without resetting the environment variables.

What is a container?

To solve this virtualization problem, the concept of the “container” jumps in. A container is an independent software environment running under a shared operating system, with a complete runtime environment: the AI application and its dependencies, libraries, binaries, and configuration files brought together as a single entity. Containerization brings extra advantages, as AI operations are executed directly in the container and the OS does not have to send each command every time (saving massive data flow instances). Second but not least, it is relatively easy to move a container from one platform to another, as the transfer does not require changing environment variables. This approach lets data scientists focus on the application rather than the environment.

Red Hat OpenShift Container Platform

The best containerization software built on Linux is Red Hat’s OpenShift Container Platform (an on-prem PaaS) based on Kubernetes. Its foundations are CRI-O containers, with Kubernetes handling container orchestration. The latest version of OpenShift is 4.7, whose major updates are its relative independence from Docker and better security.

NVIDIA GPU Operator for OpenShift containers

NVIDIA and Red Hat OpenShift have come together to assist in running AI applications. When using GPUs as high-compute processors, the biggest problem is virtualizing or distributing the GPUs’ power across containers. The NVIDIA GPU Operator for Red Hat OpenShift is a Kubernetes Operator that mediates the scheduling and distribution of GPU resources. Since the GPU is a special resource in the cluster, a few components must be installed before application workloads can be deployed onto the GPU (see the sketch after this list). These components include:

  • NVIDIA drivers
  • a specific runtime for Kubernetes
  • the container device plugin
  • automatic node labelling rules
  • monitoring components
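One hedged way to install those components is the NVIDIA GPU Operator Helm chart; on OpenShift it can also be installed from OperatorHub. The namespace below is illustrative, so check the NVIDIA documentation for the exact steps for your cluster versions:

    # Add the NVIDIA Helm repository and deploy the GPU Operator
    helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
    helm repo update
    helm install gpu-operator nvidia/gpu-operator -n gpu-operator --create-namespace
    # Driver, runtime, device plugin, labelling and monitoring pods appear here
    oc get pods -n gpu-operator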

The most common use cases that leverage GPUs for acceleration are image processing, computer audition, conversational AI using NLP, and computer vision using artificial neural networks.

Computing Environment for Machine Learning

There are several AI computing environments for testing and running AI applications; the best known are TensorFlow, Microsoft Azure, Apache Spark, and PyTorch. Among these, TensorFlow (created by Google) is chosen most often. TensorFlow is a production-grade, end-to-end open-source platform with libraries for machine learning. The primary data unit in both TensorFlow and PyTorch is the tensor. The best thing about TensorFlow is that it uses data-flow graphs for operations: much like a flowchart, programs keep track of the success or failure of each data flow. This approach saves a lot of time, because when a data flow fails there is no need to go back to the baseline of an operation before testing other sub-models.

So far, we have discussed all the choices for establishing an AI infrastructure, including hardware and software. It is not easy to select a single product that harbors all the desired components of an AI system. IBM offers both hardware and software for efficient and cost-effective AI research and development: IBM Power Systems and IBM PowerAI, respectively.

IBM Power Systems

IBM Power Systems has flexible and need-based components for running an AI system. IBM Power Systems offers accelerated servers like IBM AC922 and IBM IC922 for ML training and ML inference, respectively.

IBM PowerAI

IBM PowerAI is an AI execution platform that facilitates efficient deep learning, machine learning, and AI applications by utilizing the full power of IBM Power Systems and NVIDIA GPUs. It provides many optimizations that accelerate performance, improve resource utilization, ease installation and customization, and prevent management problems. It also provides ready-to-use deep learning frameworks such as Theano, Bazel, OpenBLAS, TensorFlow, Caffe-BVLC and IBM Caffe.

Which is the best server for on-premises deployments? Let’s compare the IBM AC922 and IBM IC922

If you need servers that can withstand machine learning workloads for months or years without interruption, running at a high load percentage, Intel systems are not an option. NVIDIA DGX systems are also available, but since they cannot virtualize GPUs, when you want to run several different learning models you have to buy more graphics cards, which makes them much more expensive. The choice of the right server will also depend on the budget. IC922s (designed for AI inference and high-performance Linux workloads) are about half the price of AC922s (designed for training AI datasets), so for small projects they can be perfectly suitable.

 

If you are interested in these technologies, request a demonstration without obligation. We have programs through which we can assign you a complete server for a month so you can test the advantages of this hardware architecture first-hand.

Red Hat is now free for small environments. We teach you how to migrate from CentOS

CentOS Stream isn’t that terrible

With the announcement of CentOS Stream, many users were left more than worried. Moving from a stable, free distribution, RHEL’s twin sister, to a test environment prone to errors and in need of constant patches did not seem like good news. In addition, its lifecycle (that is, the time during which each release receives updates) is too short, making it inappropriate for most production servers that in their day installed CentOS instead of RHEL because they did not require high-level technical support. This is the case for small businesses and academic institutions.

Fedora already existed as the place where all the new technical advances that later arrive in Red Hat (and CentOS) are tested. Fedora is an ideal Linux distribution for desktop environments, but few companies run it in production.

Finally, the scenario has cleared up. Fedora will continue to have the same function within Red Hat’s Linux ecosystem; nothing changes here. CentOS Stream will be a platform that, using “continuous delivery” techniques inherited from the DevOps philosophy, becomes the next minor version of Red Hat Enterprise Linux (RHEL), with a fairly short lifecycle.

Red Hat Linux is now free for small environments

However, the big news is this: for environments with up to 16 production servers, RHEL can now be installed at no cost using Developer subscriptions. On second thought, it makes all the sense in the world. With the exception of large super-computing (HPC) environments, you can migrate from CentOS to RHEL seamlessly and at no additional cost. Not only are the features maintained, but you get access to the latest updates immediately.
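As a sketch, once you have a free Red Hat Developer account, attaching the no-cost subscription to a system takes three commands (the username is a placeholder):

    # Register the system against Red Hat Subscription Management
    subscription-manager register --username your-redhat-login
    # Attach the no-cost Developer subscription automatically
    subscription-manager attach --auto
    # Verify which repositories are now enabled
    subscription-manager repos --list-enabled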

This will be possible from February 1, 2021. Red Hat warns that it will keep the subscription system, even for these free licenses, while trying to simplify it as much as possible. It argues legal reasons: as new laws such as the GDPR come into force, the terms and conditions of the software need to be updated. In other words, do not expect the perpetual licenses that IBM, for example, still maintains.

From our point of view this is a success in expanding the user base, and also the base of potential future clients, not only of Red Hat Linux but of all its products: Satellite, Ansible, OpenShift, OpenStack and CloudForms, among many others.

How do we migrate from CentOS to Red Hat?

There is a utility that performs the migration: convert2rhel.

• convert2rhel --disable-submgr --enablerepo <RHEL_RepoID1> --enablerepo <RHEL_RepoID2> --debug

Change RHEL_RepoID to the repositories you choose in /etc/yum.repos.d/, for example rhel-7-server-rpms, or rhel-8-baseos and rhel-8-appstream.

You can look at the options with:

convert2rhel -h

And when you’re ready, just start the process:

convert2rhel

You can find all the details at this link.

SiXe Ingeniería