Seize the True Power of CI/CD with Tekton and Kubernetes Pipelines

The introduction of Kubernetes Pipelines, and in particular Tekton, has revolutionized the way we handle CI/CD workflows in software development. Tekton, a Kubernetes-native framework, gives us more power and flexibility when creating and managing pipelines. This article focuses on the importance of Kubernetes Pipelines and Tekton on Red Hat OpenShift, and on how these tools can help you make your development process truly continuous.

What is a Pipeline?

A pipeline is an automated process that drives software through the building, testing, and deployment stages of the software development lifecycle. In other words, a pipeline executes the Continuous Integration and Continuous Delivery (CI/CD) workflow. It automatically handles tasks such as running the test suite, analyzing code, building binaries, creating containers, and deploying changes to the cloud or to on-premises systems.

Why Should You Build Pipelines with Kubernetes?

As the development world moves to embrace microservices-based applications ahead of monolithic applications, the CI/CD process has become truly continuous with incremental updates to the codebase that are independently deployable.

In such a setting, Kubernetes simplifies the process of creating and maintaining CI/CD pipelines. It deploys each microservice to a single Kubernetes cluster and maintains several copies of each microservice to serve as dev, test, and prod versions.

With Kubernetes pipelines, you no longer have to rebuild the entire application during each build. Instead, Kubernetes updates the container of the affected microservice and deploys it through the defined pipeline. There is no need to write build scripts anymore, as Kubernetes handles the process automatically with only a few configuration options we provide. This reduces the chance of human error in the CI/CD workflow.

What is Tekton?

Tekton takes Kubernetes pipelines to the next level. It's an open-source, Kubernetes-native framework for developing CI/CD pipelines. Tekton extends Kubernetes with Custom Resource Definitions (CRDs) that make it easier to create and standardize pipelines. It has built-in support for integrating with existing CI/CD tools in the industry such as Jenkins, Jenkins X, Skaffold, Knative, and OpenShift.

The OpenShift integration of Tekton, named OpenShift Pipelines, introduces even more power and flexibility to this system through Red Hat and OpenShift developer tools.

Why Should You Use Tekton?

Tekton pipelines use Kubernetes clusters as a first-class type and containers as their primary building blocks. The decoupled nature of Tekton ensures that you can use a single pipeline to deploy to separate Kubernetes clusters. This makes it easier to deploy services across multiple cloud solutions supplied by different vendors or even across on-premise systems.

Tekton allows you to run the automated tasks in isolation, without being affected by other processes running in the same system. Another specialty of Tekton is the flexibility it provides to switch resources, such as GitHub repositories, in between pipeline runs.

It also facilitates switching pipeline implementation depending on the type of resource. For example, you can set up a unique implementation to handle images.

Tekton coupled with OpenShift ensures high availability of the system by allowing each unit to independently scale on demand. You also get improved logging and monitoring tools and the fast self-recovery features built into Kubernetes.

How Does Tekton Work?

Tekton provides Kubernetes-style CRDs for declaring CI/CD pipelines. The resources are declared in YAML files that are usually stored with the code repository. Below we cover the basic CRDs that are essential when creating pipelines.

Task

A Task is the smallest configurable unit in a Tekton pipeline. It’s similar to a function that accepts a set of inputs and outputs certain results. Each task can either run individually and independently or as a part of the pipeline. A command executed by a Task is called a Step. Each task consists of one or more Steps. Tekton executes each Task in its own Kubernetes pod.
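As a rough sketch, a Task that runs a test suite could be declared like this (the Task name, container image, and script are illustrative, not taken from this article):

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: run-tests              # illustrative name
spec:
  steps:
    - name: unit-tests         # a Step: one command run in its own container
      image: golang:1.16       # any suitable builder image would do
      script: |
        go test ./...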

Pipeline

A Pipeline consists of a number of Tasks that form the final automated CI/CD workflow. In addition to Tasks, it also contains PipelineResources. They are provided as inputs and outputs to Pipeline Tasks.
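A minimal Pipeline that chains two Tasks could look like the following sketch (it assumes Tasks named run-tests and build-image already exist in the cluster; both names are illustrative):

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-test-deploy
spec:
  tasks:
    - name: test
      taskRef:
        name: run-tests        # the Task sketched above
    - name: build
      taskRef:
        name: build-image      # hypothetical image-building Task
      runAfter:
        - test                 # run only after the test Task succeeds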

PipelineResource

A PipelineResource is an object that is used as an input or an output to a Task. For example, if the Task accepts a GitHub repository as input and builds and outputs the related Docker image, both of them are declared as PipelineResource objects.
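For instance, a Git repository input could be declared roughly as follows (PipelineResources use the alpha API, and the repository URL is illustrative):

apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: app-source
spec:
  type: git
  params:
    - name: url
      value: https://github.com/example/app.git   # illustrative repository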

PipelineRun

A PipelineRun is an instance of a Pipeline that is being executed. It initiates the execution of the Pipeline and manages the PipelineResources passed to Tasks as inputs and outputs.
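A PipelineRun that executes the Pipeline sketched earlier could be as simple as this (names are illustrative):

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: build-test-deploy-run-1
spec:
  pipelineRef:
    name: build-test-deploy    # the Pipeline to execute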

TaskRun

A TaskRun is a running instance of a Task. PipelineRun creates TaskRun objects for each Task in the Pipeline to initiate the execution.

Trigger

A Trigger is an external event that triggers the CI/CD workflow. For example, a Git pull request could act as a Trigger. The information passed with the event payload is then used to trigger the Tasks in the Pipeline.
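With the Tekton Triggers component installed, an EventListener ties such an event to a PipelineRun template. A minimal sketch might look like this (the service account, binding, and template names are illustrative, and the exact API version depends on the Triggers release installed):

apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: github-listener
spec:
  serviceAccountName: tekton-triggers-sa     # illustrative service account
  triggers:
    - name: on-push
      bindings:
        - ref: github-push-binding           # maps payload fields to parameters
      template:
        ref: build-test-deploy-template      # creates the PipelineRun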

Condition

Conditions are similar to if statements in regular programming. They perform a validation check against provided conditions and return a True or False value. The Pipeline checks these Conditions before running a Task. If the Condition returns True, the Task is run, and if it returns False, the Task is skipped.
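As a sketch of the (alpha) Condition API, a check that only lets a Task run on the main branch could look like this (the parameter name and image are illustrative):

apiVersion: tekton.dev/v1alpha1
kind: Condition
metadata:
  name: branch-is-main
spec:
  params:
    - name: branch
  check:                         # a Step whose exit code decides the result
    image: alpine
    script: |
      test "$(params.branch)" = "main"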

With these components, you can create complex, fully automated pipelines to build, test, and deploy your applications to the cloud or on-premise solutions.

Who Should Use Tekton?

Platform engineers who build CI/CD workflows for developers in an organization will find Tekton an ideal framework for simplifying this process. Developers, too, can build CI/CD workflows with Tekton for software and application development projects. This gives them the ability to easily manage the different stages of the development process, such as the dev, test, and prod versions of the product, with minimal human intervention.

What’s Next?

Refer to the official Tekton and OpenShift Pipelines documentation to learn more about how to set up CI/CD pipelines that fulfill your organization's needs with ease.

Need help?

We offer Kubernetes and OpenShift trainings and we can help you to buy, deploy and manage your OpenShift environment on IBM Power Systems.

Everything you need to know about Rancher – enterprise Kubernetes management

One of the most valuable innovations that have happened in cloud computing is the use of containers to run cloud-based applications and services. Platforms like Kubernetes have made it much easier to manage containerized workloads and services on cloud platforms. For those who may not know, Kubernetes is an open-source platform for deploying, managing and automating containerized workloads and services.

Being open-source, Kubernetes has several distributions that you can choose from if you intend to deploy workloads on the cloud. One of the distributions you can choose is Rancher. If you are keen to learn more about Rancher and how it compares with other Kubernetes distributions, this article is for you. We shall discuss what Rancher is, its key features, why you should use it, and how it compares with other alternatives. Let's dive in!


What is Rancher?

Rancher is a software stack that is used to manage Kubernetes clusters. It is basically software that DevOps teams can use while adopting the use of containers. Rancher includes a full distribution of Kubernetes, Docker Swarm, and Apache Mesos, making it simple to manage container clusters on any cloud platform. Some of the popular companies that use Rancher include: Alibaba travelers, Abeja, Trivago, UseInsider, Starbucks, Oxylabs, yousign, and many more.

Rancher was recently bought by SUSE, and this acquisition will significantly change its direction. SUSE already had its own container management solution, but after acquiring Rancher they will most likely pivot away from their initial solution and focus on making Rancher an even better product.

One of Rancher's significant benefits is the ability to manage multiple Kubernetes clusters in a simplified manner. It offers simplified management of multiple Kubernetes clusters, which can be created manually using Rancher's Kubernetes distribution, RKE (Rancher Kubernetes Engine), or imported into the cluster management panel.

Besides Rancher Kubernetes Engine (RKE), Rancher has initiated several other innovative projects, and one of these is K3s, a simpler, lightweight Kubernetes distribution mainly used in edge computing. Now that SUSE has acquired Rancher, we hope that they will improve it even further to make it a complete Kubernetes platform.

Features in Rancher

Some of the main features in Rancher include the following:

  • Docker Catalog App
  • Included Kubernetes Distribution
  • Included Docker Swarm Distribution
  • Included Mesos Distribution
  • Infrastructure Management
  • Manage Hosts, Deploy Containers, Monitor Resources
  • User Management & Collaboration
  • Native Docker APIs & Tools
  • Monitoring and Logging
  • Connect Containers, Manage Disks, Deploy Load Balancers

Why use Rancher?

With several other distributions of Kubernetes on the market, why choose Rancher? Let's look at some of the key advantages and benefits Rancher offers.

  • It is easy to use: One of the reasons one would choose Rancher over any other Kubernetes platform is the simplified web UI that makes it easy to do whatever you need. It is a platform that even developers who are not so experienced with Kubernetes can easily get started with.
  • It can easily be deployed on any cloud infrastructure: Another critical advantage that Rancher has over other Kubernetes platforms is its compatibility with different cloud platforms; so, you can quickly deploy it on any cloud infrastructure.
  • Simplifies managing clusters: Rancher is probably the best choice for managing multiple Kubernetes clusters from one interface. The ability to manage multiple clusters is one of the significant strengths built into the core of Rancher.
  • It includes load balancing and health checks: this is one of the major features included in Rancher, which is very handy if you intend to deploy a system that will likely receive heavy traffic.
  • It is open-source and totally free: RKE, K3s, and all other Rancher products are open source and free for anyone to use. If you don't have a budget to spend on container management software, then Rancher is the best choice for you. However, getting support from Rancher Labs will require you to pay.

When not to use Rancher

Despite having lots of advantages, there are certain scenarios where it is advisable not to use Rancher. Below are some of the situations where you should avoid using Rancher.

  • If you are interested in more mature products: When compared to other Kubernetes platforms like OpenShift, Rancher is pretty new and is still evolving. If you are the kind of person that loves using already mature products that won’t experience any radical changes, you might be disappointed with Rancher.
  • If you don’t intend to use multiple clusters: One of the major strengths that Rancher has over other Kubernetes distributions is its ability to manage multiple container clusters from one interface. For those managing single clusters, you will likely not put Rancher to good use, so you are better off choosing another platform.

How Rancher compares with other alternatives like OpenShift

One of the key strengths that OpenShift has over Rancher is that it is a mature platform and has full support from Red Hat. If you are already into the Red Hat ecosystem, your obvious choice for managing containers should be OpenShift. Rancher also has support from Rancher Labs, but it is not as reliable as Red Hat’s. Using Rancher is more logical if you intend to manage multiple container clusters.

Conclusion

Rancher is an excellent tool for managing and automating Kubernetes clusters. It also has lots of handy features that you can take advantage of, especially if you are managing multiple Kubernetes clusters.

The ability to manage all your clusters from one place is one of the reasons you should choose Rancher over any other platform if you intend to manage multiple clusters. Rancher is also very easy to learn and use, so new Kubernetes users can quickly get started with Rancher.

Need training, consulting or architecting?

We are SUSE and Red Hat Business Partners. We can help you deploy both Rancher and OpenShift PoCs so you can evaluate and try both solutions. We have also developed some Docker / Kubernetes and OpenShift 4 hands-on trainings that could be of interest to you.


Red Hat OpenShift Platform Plus: What’s new?

OpenShift Platform Plus is the latest member of the OpenShift family, Red Hat's PaaS solution for Kubernetes-based applications. It was announced in conjunction with Red Hat Enterprise Linux 8.4. Thanks to Red Hat OpenShift Platform Plus, organizations can, for a slight additional cost (discussed later), manage not only applications but also their security policies and configurations, regardless of where those applications are located. It includes new built-in support for the application lifecycle in multi-cluster environments, as well as the ability to create clusters of just 3 nodes, or even with remote worker nodes, allowing Kubernetes clusters to expand to almost any location, including facilities with low available power.


What is Included in OpenShift Platform Plus?

  • Kubernetes Engine (base layer of OpenShift on Linux OS)
  • Orchestration/management control (Red Hat Advanced Cluster Management for Kubernetes, ACM)
  • Security protocols (Red Hat Advanced Cluster Security for Kubernetes, ACS)
  • Registry software (Red Hat Quay)

OpenShift Platform Plus provides all the base, management, and security features in one package, which were previously available separately at their own prices. Advanced Cluster Security (ACS) was added to OpenShift after the acquisition of StackRox, a Kubernetes-native security provider.

Multi-cloud and CI/CD ready

OpenShift provides a complete solution to either build an environment or run a container-based application on hybrid cloud or on-premises server infrastructure. OpenShift offers two types of container application infrastructure: managed and self-managed. Managed platforms are fully featured cloud-based services on Azure, IBM, AWS, and Google Cloud, while self-managed platforms are highly customizable but require a highly skilled team for each part of the deployment.

OpenShift has evolved over time and added critical features for enhanced functionality. In the beginning, OpenShift Kubernetes Engine was introduced, which includes the enterprise Kubernetes runtime, the Red Hat Enterprise Linux CoreOS immutable container operating system, the administrator console, and Red Hat OpenShift Virtualization. Then came OpenShift Container Platform, augmented with the developer console, log/cost management, OpenShift Serverless (Knative), OpenShift Service Mesh (Istio), OpenShift Pipelines, and OpenShift GitOps (Tekton, Argo CD). Comprising all of the features previously available, OpenShift Platform Plus comes with additional features.

OpenShift Platform Plus Features

Red Hat Advanced Cluster Management for Kubernetes

It is the advanced management control option from OpenShift. It provides customers with full access to unified multi-cluster management (editing clusters on public or private clouds, allocating resources for clusters, and troubleshooting issues across the entire domain from a single layout) through the OpenShift web console. Further, it provides policy-based management that includes standardizing policy parameters for security, applications, and the infrastructure framework.

Applications can be deployed across the network using advanced application lifecycle management. It also helps control application flow over the nodes and Day-2 configuration management using Ansible. Advanced Cluster Management (ACM) aims to provide cluster health solutions related to storage, optimization, and troubleshooting. OpenShift monitoring tools are well designed to operate with ease and efficiency.

Red Hat Advanced Cluster Security for Kubernetes (ACS)

ACS was added to the OpenShift family after the acquisition of StackRox, which powers ACS as the core security component of OpenShift Platform Plus. This security feature differs from previously deployed security measures: previously, security protocols were applied after the application was developed. ACS brings security in from the very beginning, i.e., in the codebase. Advanced security features augment every step of the application development lifecycle.

Advanced Cluster Security follows international container security standards such as the CIS and NIST benchmarks. The security protocols include data breach protection, network protection, elimination of blind spots, reduced time and cost through efficiently implemented security policy code, and the avoidance of operational conflicts, data overlap, and redundancy. ACS is a practical execution of DevSecOps protocols.

Red Hat Quay

A container registry is a storage platform used to store containers for Kubernetes and other container-based application development approaches such as DevOps. When a container application is built, its image is created. An image is somewhat like an .exe installer that contains all the files and components needed for a successful installation. When a container image is placed on another platform, it is used to create the same container application; in other words, a container image is a template used to create more application instances.

Red Hat Quay is a private container registry used to create, distribute, and store container images. When a container image is shared across network repositories, specific vulnerabilities can show up. Red Hat Quay uses Clair security scanning to cope with such vulnerabilities and provides strict access controls, authentication protocols, and solutions to other distribution issues.

Red Hat Quay Features

  • Each image is tagged with a timestamp; Red Hat Quay offers a time machine for image version tagging and rollback capabilities such as downgrading or restoring earlier configurations. It provides a configurable two-week history for image tags.
  • Geographic distribution ensures quick and reliable image delivery using Content Distribution Networks (CDNs) so that each access point has a nearby repository. Red Hat Quay also uses BitTorrent technology to reduce waiting time for content availability.
  • Runtime resource garbage collection helps identify unused or rarely used operations to optimize resource use and increase efficiency.
  • Red Hat Quay offers unlimited storage for multiple image collections.
  • Automated triggering of continuous integration/continuous delivery (CI/CD) pipelines
  • Log-based auditing by scanning APIs and user interfaces
  • Authentication protocols using LDAP, OAuth, OIDC, and Keystone ensure secure logging and hierarchical organizational access control.
  • Account automation provides the creation of the credentials required to run the program.
  • Multi-platform adaptability

Red Hat OpenShift Platform Plus Pricing

The advanced features of OpenShift Platform Plus are expected to become available between April and June 2021. The final price is not yet available. OpenShift.io is a free platform for cloud-based deployment. The cost of Platform Plus depends on sizing and subscription. According to Capterra, each Plus feature costs $20 per month for self-managed plans. The best way to understand and choose a subscription model is to contact our sales department. You can also request a demo of OpenShift Platform Plus for free.

Red Hat OpenShift Platform training and professional services

We offer Kubernetes and OpenShift trainings and we can help you buy, deploy and manage your OpenShift environment on IBM Power Systems. Contact us for more information.

New IBM QRadar courses updated to version 7.4.2

Of all the IBM courses, perhaps the most demanded and valued by IBM clients and partners are the QRadar SIEM courses. Therefore, from June 2021 the new official courses will be available: QRadar SIEM Fundamentals (BQ104G) and QRadar SIEM Advanced Functionalities (BQ204G). We have also updated our pre-sales, architecture, deployment and initial configuration workshop to version 7.4. Since 2014 we have trained over 35 clients and 400 students from 20 different countries in this amazing technology. We have passed on all our practical experience from real projects and have helped students successfully pass the official certifications.

What is QRadar?

QRadar is the market-leading solution for the prevention, detection and remediation of security incidents. Hundreds of SOCs (Security Operations Centers) around the world rely on technology developed by Q1 Labs, acquired by IBM in 2011, to complement their cybersecurity capabilities. QRadar allows us to link events ranging from physical security (access controls), ID card readers and OT devices to service infrastructure deployed in the cloud, or even the daily activity logs of users. Its capabilities allow us to analyze thousands of events per second to ensure that our organization is not only secure, but also compliant with applicable industry regulations and legislation. QRadar also has strategic partnerships with Juniper Networks, Enterasys, Nortel, McAfee, Foundry Networks and 3Com, among other companies. The product is so powerful that many of these companies sell their own SIEMs based on QRadar technology.

What’s new?

In the last year and a half many things have changed, from a completely revamped user interface to new applications that allow you to analyze incidents in a fully automated way. For example, the QRadar Advisor with Watson application (IBM AI) automatically maps tactics and techniques from the MITRE ATT&CK database to internal QRadar rules. Through an innovative monitoring dashboard you can see the techniques used by attackers and their relationship to open security incidents.

The new versions allow you to use the hints and tips provided by IBM QRadar Use Case Manager (formerly QRadar Tuning app) to help you optimize the configuration and tuning of QRadar rules, keeping them always up to date and ready for when they are needed.

 

What about certifications?

We have been helping to prepare for the official IBM exams in the technologies we teach for a long time. That is why we have decided that from June 2021, we will include at no additional cost a preparation day for the certifications in all private courses that are contracted to us with at least 4 students enrolled.

I want to enroll in a course


Contact us and in less than 24h you will receive an offer.
All courses are taught both on-site and on-line. Our instructors speak English, French and Spanish.

Need more help?

At Sixe Ingeniería we are an IBM Security Business Partner. We sell, install and support IBM QRadar SIEM. We also conduct tailor-made training, seminars and technical talks, and we advise you on licensing and the definition of the architecture you need at no additional cost. Ask for a demonstration without obligation.

Red Hat OpenShift 4.7 is here and the installer is included.

Installation wizard, Kubernetes 1.20 and more new features in OpenShift 4.7

Kubernetes 1.20 and CRI-O 1.20

Its technology is based on Kubernetes 1.20 and the CRI-O 1.20 container engine, which completely replaces what little was left of Docker. OpenShift 4.7 has continued its path of new features that we will talk about later, but if there is one thing we can say about this release, it is that it is more stable, much more stable. Many changes have been made throughout the software stack that improve the availability, resiliency and robustness of the platform as a whole. More and better checks have been added, a new diagnostic system for the installer has been implemented, and the error codes have been enriched. It also includes improvements in the control panel to facilitate the monitoring of Pipelines, operators (connected or disconnected), storage systems and communication networks.

An installer for bare-metal servers?

Yes, at last. This version comes with the (Technology Preview) installation wizard. A breakthrough to simplify the deployment of the whole set of packages, dependencies, services and configurations required to install OpenShift on our own servers.

But wasn’t it always installed in the cloud with an automatic installer?

Yes and no. The cloud installer is great, everything works like magic, BUT there are more and more workloads like AI, HPC, DL, Telco/5G and ML that are unfeasible to deploy in the cloud because of costs (you have to upload and download many, many GBs) and performance. We have already discussed how to configure OpenShift for AI/DL environments on IBM Power Systems servers. One of the main objections to such deployments was the complexity of manually installing the environment. The installer will simplify this a lot.

Windows Containers

It sounds strange, but Microsoft customers number in the millions worldwide. If a system like this wants to succeed, it needs to support them. That's why Red Hat OpenShift 4.7 continues to expand support for Windows Containers, a feature announced as early as the end of 2020. In addition to support for Windows Containers on AWS and Azure, OpenShift will now include support for vSphere (available in early April 2021) using installer-provisioned infrastructure (IPI) for VMware. Red Hat customers can now migrate their Windows containers from their VMware virtualized systems to their Red Hat OpenShift cluster in a simple and, most importantly, fully supported and secure manner.

IPSec support in OVN

In hybrid cloud environments, one of the big challenges is how we connect our remote nodes and clusters to each other or to our local data centers. With OpenShift’s virtual network support for the IPSec protocol, this is greatly facilitated.

Automatic pod scaling

The last feature we wanted to highlight is the horizontal pod autoscaler (HPA). It allows the number of pods to be expanded, or their characteristics changed, automatically by measuring memory utilization and acting through a replication controller.
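As a minimal sketch of what that looks like in practice (the Deployment name and thresholds are illustrative), a memory-based HPA on Kubernetes 1.20 could be declared like this:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend               # illustrative workload to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80 # add pods above 80% average memory utilization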

If you want to try Red Hat OpenShift 4.7, there are a number of ways to do it, from online learning tutorials and demos on your laptop to how to do it in the public cloud or in your own data center.

You can check the rest of the news at https://www.openshift.com/blog/red-hat-openshift-4.7-is-now-available, set up your demo environment at home and start training now with our practical courses.

Machine & Deep Learning on-premises with Red Hat OpenShift 4.7

By providing vast amounts of data as training sets, computers have been made capable of making decisions and learning autonomously. AI is usually undertaken in conjunction with machine learning, deep learning and big data analytics. Throughout the history of AI, the major limitations have been computational power, CPU-memory-GPU bandwidth and high-performance storage systems. Machine learning requires immense computational power with intelligent execution of commands, keeping processors and GPUs at extremely high utilization, sustained for hours, days or weeks. In this article we will discuss the different alternatives for these types of workloads, from cloud environments like Azure, AWS, IBM or Google to on-premises deployments with secure containers running on Red Hat OpenShift and using IBM Power Systems.

Before executing an AI model, there are a few things to keep in mind. The choice of hardware and software tools is just as essential as the algorithms for solving a particular problem. Before evaluating the best options, we must first understand the prerequisites for establishing an AI running environment.

What Are the Hardware Prerequisites to Run an AI Application?

  • High computing power (GPUs can accelerate deep learning up to 100 times compared to standard CPUs)
  • Storage capacity and disk performance
  • Networking infrastructure (10 Gbit or faster)

What Are the Software Prerequisites to Run an AI Application?

  • Operating system
  • Computing environment and its variables
  • Libraries and other binaries
  • Configuration files

Now that we know the prerequisites for establishing an AI setup, let's dive into all the components and the best possible combinations. There are two choices for setting up an AI deployment: cloud and on-premises. As we have said before, neither is inherently better; it depends on each situation.

Cloud infrastructure

Some well-known traditional cloud providers are:

  1. Amazon Web Services (AWS)
  2. Microsoft Azure
  3. Google Cloud Platform (GCP)
  4. IBM Cloud

In addition to these, there are clouds specialized in machine learning. These ML-specific clouds provide GPU rather than CPU support for better computation, along with specialized software environments. They are ideal for small workloads where there is no confidential or sensitive data. When we have to upload and download many TB of data or run intensive models for weeks at a time, being able to reduce these times to days or hours and run them on our own servers saves a lot of cost. We will talk about this next.

On-premises servers and Platform-as-a-Service (PaaS) deployments

These are specialized servers present in the working premises of an AI company. Many companies provide highly customized as well as built-from-scratch on-prem AI servers. For example, IBM’s AC922 and IC922 are perfect for on-premises AI setup.

Companies must choose between the two options above, considering future growth and the tradeoff between current needs and expenses. If your company is just a startup, cloud AI servers are best, because this choice eliminates the worry of installations at somewhat affordable rates. But if your company grows and more data scientists join, cloud computing will not ease your burdens. In this case, technology experts recommend on-prem AI infrastructure for better security, customization, and expansion.

ALSO READ: Deploy Your Hybrid Cloud for ML and DL

Choosing the best HW architecture

Almost all cloud service platforms now offer GPU-supported computation, as a GPU can be nearly 100 times more potent than the average CPU, especially when machine learning involves computer vision. But the real problem is the data flow rate between the node and the cloud server, no matter how many GPUs are connected. That's why the on-prem AI setup gets more votes, as data flow is no longer a big problem.

The second point to consider is the bandwidth between GPUs and CPUs. On traditional architectures such as Intel, this traffic is transferred over PCI channels. IBM integrated NVIDIA's NVLink connector into POWER9, so that NVIDIA GPUs and POWER9 cores can communicate directly without intermediate layers. This multiplies the bandwidth, which is already more than 2 times higher between processor and memory. The result: no more bottlenecks!

As we have pointed out above the software prerequisites for running AI applications, now we have to consider the best software environment for optimal AI performance.

What data-center architecture is best for AI/DL/ML?

When talking about servers for AI software infrastructure, the traditional design was virtualization: a simple distribution of computing resources under separate operating systems. We call each independent operating system environment a "Virtual Machine." If we need to run AI applications on virtual machines, we face multiple constraints. One is the resources necessary for running the entire system, including OS operations and AI operations; each virtual machine requires extra computational and storage resources. Moreover, it's not easy to transfer a running program from one virtual machine to another without resetting the environment variables.

What is a container?

To solve this virtualization problem, the concept of a container comes in. A container is an independent software environment under a standard operating system, with a complete runtime environment, the AI application, and its dependencies, libraries, binaries, and configuration files brought together as a single entity. Containerization gives extra advantages, as AI operations are executed directly in the container and the OS does not have to send each command every time (saving massive data flow instances). Second but not least, it's relatively easy to transfer a container from one platform to another, as this transfer does not require changing environment variables. This approach enables data scientists to focus more on the application than on the environment.

Red Hat OpenShift Container Platform

The best containerization software built on Linux is Red Hat's OpenShift Container Platform (an on-prem PaaS) based on Kubernetes. Its fundamentals are built on CRI-O containers, while Kubernetes handles container orchestration and management. The latest version of OpenShift is 4.7. The major updates provided in OpenShift 4.7 are its relative independence from Docker and better security.

NVIDIA GPU Operator for OpenShift containers

NVIDIA and Red Hat OpenShift have come together to assist in running AI applications. When using GPUs as high-compute processors, the biggest problem is virtualizing or distributing the GPUs' power across containers. The NVIDIA GPU Operator for Red Hat OpenShift is a Kubernetes operator that mediates the scheduling and distribution of GPU resources. Since the GPU is a special resource in the cluster, a few components need to be installed before application workloads can be deployed onto the GPU (a minimal example of a pod consuming a GPU is sketched after the list); these components include:

  • NVIDIA drivers
  • a specific runtime for Kubernetes
  • a container device plugin
  • automatic node labelling rules
  • monitoring components
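Once these components are in place, a workload requests a GPU through the standard Kubernetes resource mechanism. A minimal sketch (the pod name and image are illustrative) could be:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda-check
      image: nvidia/cuda:11.0-base     # illustrative CUDA base image
      command: ["nvidia-smi"]          # prints the GPU visible inside the container
      resources:
        limits:
          nvidia.com/gpu: 1            # one GPU, scheduled via the operator's device plugin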

The most widely used use cases that utilize GPUs for acceleration are image processing, computer audition, conversational AI using NLP, and computer vision using artificial neural networks.

Computing Environment for Machine Learning

There are several AI computing environments in which to test and run AI applications. At the top of the list are TensorFlow, Microsoft Azure, Apache Spark, and PyTorch. Among these, TensorFlow (created by Google) is chosen most often. TensorFlow is a production-grade, end-to-end open-source platform with libraries for machine learning. The primary data unit used in both TensorFlow and PyTorch is the tensor. The best thing about TensorFlow is that it uses data flow graphs for operations. It is just like a flowchart, and the program keeps track of the success and failure of each data flow. This approach saves much time by not going back to the baseline of an operation, and by testing other sub-models if a data flow fails.

So far, we have discussed all the choices to establish an AI infrastructure, including hardware and software. It is not easy to select a specific product that harbors all the desired AI system components. IBM offers both hardware and software components for efficient and cost-effective AI research and development like IBM Power Systems and IBM PowerAI, respectively.

IBM Power Systems

IBM Power Systems has flexible and need-based components for running an AI system. IBM Power Systems offers accelerated servers like IBM AC922 and IBM IC922 for ML training and ML inference, respectively.

IBM PowerAI

IBM PowerAI is an AI environment execution platform that facilitates efficient deep learning, machine learning, and AI applications by utilizing the full power of IBM Power Systems and NVIDIA GPUs. It provides many optimizations that accelerate performance, improve resource utilization, facilitate installation and customization, and prevent management problems. It also provides ready-to-use deep learning frameworks and libraries such as Theano, Bazel, OpenBLAS, TensorFlow, Caffe-BVLC and IBM Caffe.

Which is the best server for on-premises deployments? Let's compare the IBM AC922 and IBM IC922

If you need servers that can withstand machine learning loads for months or years without interruption, running at a high load percentage, Intel systems are not an option. NVIDIA DGX systems are also available, but as they cannot virtualize GPUs, when you want to run several different learning models you have to buy more graphics cards, which makes them much more expensive. The choice of the right server will also depend on the budget. IC922s (designed for AI inference and high-performance Linux workloads) are about half the price of AC922s (designed for training AI datasets), so for small projects they can be perfectly suitable.

 

If you are interested in these technologies, request a demonstration without obligation. We have programs where we can assign you a complete server for a month so that you can test directly the advantages of this hardware architecture.

The importance of cybersecurity in the healthcare sector

Hospitals, health centres and all the elements that make up the healthcare sector depend to a large extent on the proper functioning of computerized systems. In fact, these are indispensable for performing clinical and administrative tasks at any time, every day of the year. Therefore, and taking into account the high sensitivity of patients' clinical data, cybersecurity prevention is essential. Theft or misuse of this data can have devastating consequences.

Cyberattacks on hospitals and health centers: not a new practice

It is curious, but traditionally the organizations that make up the healthcare sector have taken little or no care of their cybersecurity processes. In fact, it has been regarded as a sector of little interest to cybercriminals, when the opposite could really be said.

It is true that, with the advent of the COVID-19 pandemic, cyberattacks have multiplied and become more relevant at the media level. However, they are not the first. For example, the different organizations that make up the health sector in the United States put the losses from this criminal activity in 2019 at more than $4 billion.

The risks of not taking care of cybersecurity in the health sector

But what are the main reasons why cybercriminals focus on attacking hospitals and health centers? Basically, we can cite the following:

  • Theft of clinical patient information.
  • Theft of the identity of medical specialists.
  • Access to sensitive patient data.
  • Purchase and sale of clinical information on the black market.

This highlights the importance of hiring an experienced professional with a cybersecurity background. But there's more. For example, in recent years the number of medical devices connected to the Internet has grown exponentially and, with them, the risk of cyberattack. In fact, this trend is expected to continue upwards for quite some time.

These devices use the technology of the so-called Internet of Things (IoT) and, despite their undoubted usefulness in the healthcare sector, most cyberattacks are directed towards them. The lack of protection and the vulnerability they present to hackers mean that, in too many cases, end-user security is compromised through them.

Cybercriminals’ preferred formula for attacking healthcare IoT devices

There is no doubt that ransomware files and malware are the most commonly used by cybercriminals when attacking health centers, hospitals and other particularly vulnerable places within the healthcare sector.

Ransomware is a program that downloads, installs and runs on a computer via the Internet. It then 'hijacks' the whole device, or some of the information it stores, and demands a ransom in exchange for its release (hence the name).

The removal of these files and malware is not excessively complex for computer security specialists, but the consequences they can have on hospitals and medical centers are considerable. For example, they involve:

  • Disruption of the center's operational processes, at least on the affected IoT computers.
  • Inability to access patient information and diagnostic tests.
  • Need to restore systems and backups.
  • Damage to the corporate reputation of the center or company after suffering the attack.

All of this comes at a very significant economic cost from a business point of view. In fact, it can be so high that the investment required to implement the best cybersecurity solutions looks small by comparison. Just restoring systems is a task that can stop a medical center's activity for almost a day.

How to prevent cyberattacks in the health sector?

Interestingly, the best way to prevent cyberattacks on IoT equipment is by strategically investing in those devices. That is, making greater and better use of them. More and more technologies are in place to control access, block attacks by malicious files, and ultimately safeguard critical information and processes with as little human intervention as possible.

The reality is that acquiring an infrastructure of equipment, programs and specialized personnel within a hospital or medical center can be an unaffordable investment. However, there are alternatives. The most interesting of these is the implementation of cloud solutions. The reduction in costs is very noticeable and the solutions offered are very effective.

SaaS (Software as a Service) solutions are currently the most widely used in medical centers that rely on cloud platforms for their systems. But for them to work, it is necessary to define a cybersecurity strategy for the data before it is uploaded to the servers. Encryption mechanisms are essential at this point. This is a fairly simple and fully automated task that can deliver a really high return on investment.

In short, the health sector, both hospitals and health centres, is particularly sensitive in terms of cybersecurity, especially since most of its processes depend on IoT devices that are highly exposed to the action of hackers. However, the advantages these devices provide in terms of efficiency and productivity make their use indispensable. With this being clear, it is obvious that investment in protecting these systems, which must always be made from a strategic perspective, is essential.

Red Hat is now free for small environments. We teach you how to migrate from CentOS

CentOS Stream isn’t that terrible

With the announcement of CentOS Stream, many users were left more than worried. Moving from a stable and free distribution, RHEL's twin sister, to a test environment, prone to errors and needing constant patches, did not seem like good news. In addition, its lifecycle (that is, the time during which updates are provided for each release) is too short, which makes it inappropriate for most production servers whose owners originally chose CentOS instead of RHEL because they did not require high-level technical support. This is the case for small businesses or academic institutions.

Fedora already existed as the place where all the new technical advances that will later arrive in Red Hat (and CentOS) are tested. Fedora is an ideal Linux distribution for desktop environments, but few companies run it in production.

Finally the scenario has cleared up. Fedora will continue to have the same function within Red Hat’s Linux ecosystem, nothing changes here. CentOS Stream will be a platform that, using “continuous delivery” techniques, inherited from the devops philosophy, will become the next minor version of Red Hat Enterprise Linux (RHEL) with a fairly short lifecycle.

Red Hat Linux is now free for small environments

However, the big news is this: for environments with up to 16 production servers, RHEL can be installed at no cost using Developer licenses. On second thought, it makes all the sense in the world. With the exception of large super-computing (HPC) environments, you can migrate from CentOS to RHEL seamlessly and at no additional cost. Not only will the features be maintained, but you get immediate access to the latest updates.

This will be possible from February 1, 2021. Red Hat warns that it will keep the subscription system, even for the free subscriptions, while trying to simplify it as much as possible. They cite legal reasons: as new laws such as the GDPR come into force, the terms and conditions of the software need to be updated. In other words, these are by no means the perpetual licenses that IBM, for example, still maintains.

From our point of view this is a success in expanding the user base, and also the base of potential future clients, not only of Red Hat Linux but of all its products: Satellite, Ansible, OpenShift, OpenStack and CloudForms, among many others.

How do we migrate from CentOS to Red Hat?

There is a utility that performs the migration: convert2rhel.

• convert2rhel --disable-submgr --enablerepo <RHEL_RepoID1> --enablerepo <RHEL_RepoID2> --debug

Change RHEL_RepoID to the repositories you choose in /etc/yum.repos.d/, for example rhel-7-server-rpms, or rhel-8-baseos and rhel-8-appstream.

You can look at the options with:

convert2rhel -h

And when you’re ready, just start the process:

convert2rhel

In this link you have all the details

Get certified in IBM InfoSphere DataStage, Optim, Information Server and Governance Catalog

IBM InfoSphere is an ecosystem that has a comprehensive set of tools to efficiently manage a data integration project. If your company is currently using such products in the management of its IT assets or is in the process of implementing them, you need to learn how they work. In this article we will talk about 4 widely used tools in the context of business intelligence and tell you how to get the official IBM certifications. Let’s start.

IBM DataStage

This solution enables you to extract, modify, and export all kinds of data sources, including enterprise applications, indexed and sequential files, mainframe and relational databases, and external sources.

It has two operating modes:

ETL. This is the acronym for Extract, Transform and Load. The software is installed on a server or on different terminals, from where it extracts and processes data from various sources.

Design and supervision. Through its graphical interface, it proceeds to the creation and monitoring of ETL processes and the management of the corresponding metadata.

IBM DataStage is ideal for companies that need to deploy and manage a Data Mart or Data Warehouse datastore. Its main functions are related to the handling of large amounts of data. With this tool, you can apply validation rules, adopt parallel and scalable processing, perform multiple integrations and complex transformations, and use metadata for analysis and maintenance tasks.

If you’re a project administrator or ETL developer, our IBM InfoSphere DataStage Essentials course is just what you need to acquire the necessary skills. Datastage certification will teach you how to generate parallel jobs in order to access relational and sequential data at the same time, and to master the functions that allow you to modify and integrate them, as needed.

Infosphere Information Server

It is a scalable platform that supports data from a variety of sources and is designed to handle any volume of data. This is made possible by its MPP (Massively Parallel Processing) capabilities.

Information Server is the most effective solution for companies looking for flexibility in integrating their most critical data. It is an effective business intelligence technology using data and point-of-impact analytics, big data, and master data management techniques.

With DataStage training, you will learn to locate and integrate your data across multiple systems, both on-premises and in the cloud, and to establish a single business language to manage information. In addition, it will help you gain a greater understanding of your IT assets by analyzing rules and improving integration with other specialized data management products.

Our IBM InfoSphere Information Server Administrative Tasks course is a good introduction for administrators of this platform who need to assume the administrative roles that are required to support users and developers.

The DataStage course, taught by SiXe Engineering, begins with an essential overview of Information Server and its related products. The DataStage training then goes on to detail the activities you'll need to take on, including reporting and user management.

IBM InfoSphere Information Governance Catalog

With this interactive web-based tool, you’ll increase your organization’s ability to create and manage a powerful control plan for the most important data. To achieve this goal, you’ll need to define mandatory policies that need to be followed to store, structure, move, and transform information. In this way, you will be able to obtain reliable results that can be used in different integration projects.

IBM InfoSphere Information Governance Catalog supports the latest business intelligence techniques, including lifecycle management, privacy and security initiatives, big data integration, and master data management.

Its broad possibilities will allow you to define a common language within your company when it comes to handling data from different sources and correctly managing information architectures. At the same time, this tool will help you gain a full understanding of how data connects to each other and track how information flows and changes.

If you have some experience with Information Server or IBM InfoSphere MDM and want to learn how to use this tool, the New Features in IBM InfoSphere Data Integration and Governance course is your best choice.

IBM InfoSphere training lets you learn about the latest features in data integration and governance within the IBM InfoSphere ecosystem.

InfoSphere Optim

This application is designed to archive transaction history and decommissioned application data, and it lets you query that data in a way that fully complies with information retention regulations.

With this solution, your data will be present in all applications, databases, operating systems and computers where it is required. This way, your test environments will be safe and your release cycles will be shorter and cheaper.

This tool will also help you manage the growth of your data and the reduction of your total cost of ownership (TCO). This will help increase the business value of your information stores.

The courses taught by SiXe Engineering related to the products and components of the IBM InfoSphere suite will help administrators and developers learn to master these tools. This is the case with the DataStage certification and the InfoSphere Data Architect course. We also teach the rest of IBM's official course catalog.

Contact us without obligation.

Reasons to learn Linux in 2021

Eight compelling reasons to be certified on Linux this 2021

If you've come this far, you probably have more or less advanced Linux knowledge or intend to improve it. This operating system, created in Finland in 1991, is one of Windows' staunchest competitors, and it has several advantages over Microsoft's software. Would you like to know the 8 compelling reasons to expand your training with Linux certifications this 2021? Read on.

3 Generic Reasons to Increase Your Linux Knowledge

For those who don't have much experience using this operating system, we'll start by talking about three important reasons for more advanced training that allows you to fully understand the features and possibilities it offers.

1. Being versatile makes you a better professional

The first reason for training on Linux has little to do with operating systems or their features, but with the market itself. In the technology sector, it is important to be versatile in order to offer better solutions to users. Therefore, receiving training on Linux is not only useful, but necessary.

2. It is a safer operating system than its competitors

Compared to Windows, Linux has an armored architecture, making it a highly attack-resistant operating system. There are several reasons that make GNU/Linux systems such a secure option. Being open source, it has an army of hundreds of users constantly updating it. Anyone with sufficient knowledge can improve the system to prevent violations of file integrity and sensitive data privacy. In addition, it is based on UNIX, an operating system that stands out for its powerful and effective privilege management for multiple users.

In a world where business data is constantly at risk, this advantage augurs a promising future for Linux over other systems. If you offer IT services to companies, knowing more about Linux's security capabilities will allow you to provide better support.

3. It’s a free, stable and easy-to-use operating system

Unlike Windows and macOS, you don't need to pay for a license or purchase a specific computer before you can install Linux on your machine. It's completely free; the business is in enterprise support. We ourselves are partners with Red Hat, IBM and SUSE.

On the other hand, the main drawback, the difficulty an average user used to face with this operating system, has been left in the past. Currently, its different distributions have a very friendly interface. As if that weren't enough, it's very stable software: it rarely suffers interruptions, and you don't need to reinstall it every time you perform an update.

5 reasons to train on Linux in 2021 with SiXe

Now, if you're one of those who were timidly introduced to Linux at university, or you've only covered it superficially through one-off courses, we give you five reasons to learn more about this free software with SiXe.

1. We offer realistic practices tailored to the corporate segment

Linux is a clear example that free software is on the rise, and not only in the domestic sphere, but also in the professional one. Not for nothing, Red Hat's 2020 State of Enterprise Open Source report makes it clear that a growing number of businesses see a viable solution in open source.

According to this research, 95% of the organizations surveyed noted that such programs (including Linux) are an important part of their infrastructure. This study is just one more proof of the value this operating system has for software developers, users, and system administrators.

Supply and demand control the market, and in the technology sector it is no exception. Therefore, the growing interest of companies in open source solutions is a valuable job opportunity for professionals who specialize in Linux.

SiXe helps you complete your Linux training. Particularly interesting is the fact that all our courses focus on providing adequate solutions to the problems that often arise in the real world.

2. Courses have useful content for any context

The trainings we design and teach for software developers and system administrators include:

Special mention goes to the following courses, oriented towards the most in-demand certifications on the market:

Linux intensive course for administrators.

You will learn how to successfully manage your Linux systems. This course is characterized by providing the essential training needed to pass exams such as RHCSA (Red Hat Certified System Administrator) or SUSE Certified Administrator.

Intensive Linux course for engineers.

Red Hat courses and SUSE courses are in increasing demand. This training will help you certify as a Red Hat Certified Engineer, SUSE Certified Engineer, or Linux Foundation Certified Engineer, enabling you to respond to the needs of the market. The Linux intensive course for engineers is ideal for system administrators who want to learn how to use commands and configure kernel execution parameters. It is a 30-hour training with a practical focus and a workshop designed to prepare for all the official tests and exams.


Red Hat OpenShift 4 Workshop: our flagship course in 2020, with more than 30 editions worldwide.

All our courses are characterized by providing quality content and imparting really useful knowledge based on real projects that SiXe Ingeniería has implemented with global clients. That’s what sets us apart from others.

Add to this the fact that our courses are not tied to the distribution of your choice. It doesn't matter if you use openSUSE, Fedora, Ubuntu, CentOS or others. You can choose the one you prefer, and our courses will remain just as useful. This is because at SiXe we adapt our courses to cover all common versions and practices.

In addition to the aforementioned Linux trainings, it is also possible to obtain specific IBM certifications through its official Linux courses for IBM Z (mainframe) systems, SAP HANA deployment, basic Linux administration, and Linux for UNIX administrators.

3. You’ll learn how to master Red Hat and SUSE

At SiXe Ingeniería we work in partnership with the main players in the sector. One of the most relevant is Red Hat. This builder and distributor of open source solutions has been serving international organizations for nearly three decades.

Most importantly, the Linux certifications issued by Red Hat are increasingly valued. This is precisely due to the business trend of implementing open source solutions, among which Red Hat Enterprise Linux stands out.

For its part, SUSE is another company of which we are a Business Partner, and a leading distributor in the world of open source. Proof of this is that in 2020, even with the pandemic, it grew its revenue by 14%. One of its most popular systems, SUSE Linux Enterprise, stands out for its functionality, intuitive desktops and a host of built-in applications.

And as mentioned earlier, with our intensive Linux course for engineers it is possible to prepare for Red Hat Certified Engineer (RHCE), SUSE Certified Engineer and Linux Foundation Certified Engineer certificates.

4. Courses are taught by experienced instructors

All our trainings are taught by our own professionals. The absence of intermediaries helps us ensure a consistent approach, with no loss of quality or inconsistencies in the material.

Everything we teach draws on our own work and experience. Such is the case with our Red Hat and SUSE courses, which synthesize hundreds of hours of experience in consulting and providing systems-related services.

5. You can benefit from our additional services

SiXe is not only dedicated to providing training; it also offers consulting and maintenance services for all types of systems. Therefore, studying with us opens the door to other services.

In this way, your company will benefit not only from the training we offer, but also from our additional services. This includes data center and cybersecurity solutions. On the latter point, it should be noted that we are associated with the National Cybersecurity Institute (INCIBE), which guarantees the quality of our contributions.

Get your Linux certification with us

In short, we can conclude that there are compelling reasons to strengthen your Linux knowledge. It is a fact that the projections of this operating system are quite optimistic, especially in the corporate area.

The interest of companies in the distributions of this operating system, the great security they offer when it comes to preventing computer attacks and their versatility make receiving certifications on Linux more necessary than ever.

Of course, for teaching to be effective, it is important that these trainings are reliable, that they offer quality material and that they are taught by experienced instructors. And that's exactly what you'll find at SiXe. If you are interested in our courses, please contact us.

SiXe Ingeniería