Unofficial IBM Power11 logo

What do we expect from IBM Power11?

The evolution of IBM’s Power architecture has been the subject of intense debate in the technology community. Over the past few years, the architecture has undergone significant strategic changes that have generated criticism and expectations alike. As with KVM, we guessed almost everything IBM was going to announce; let’s take a second shot, this time at Power11. In this case we don’t have the kernel.org lists to clue us in, but we do have ten years of trajectory since Power8 and a market with very clear demands for alternatives to x86, all the more so when Intel is going through one of the worst moments in its history.

Background and a little history

With Power8 came the Power OEM/LC systems, NVIDIA GPUs, the NVLink connector and a first version of KVM on Power (not to be confused with the 2024 announcement). In practice, however, the challenges outweighed the opportunities… and we’ll leave it at that 🙂. Some felt that IBM was ahead of the market; others felt there was a lack of supported, proven solutions on these servers to achieve the anticipated impact; there was even talk of mass adoption by Google or Rackspace. Power9 represented a milestone in IBM’s strategy by offering a more open and accessible architecture for the community. Through the OpenPOWER Foundation, IBM released a significant portion of the specifications and technologies associated with Power9, allowing third parties to design and manufacture their own systems based on this architecture, similar to what is done with ARM or x86. Companies such as Raptor Computing Systems developed Power9-based systems using open-source firmware and software, offering highly auditable and user-controllable platforms.

In the next generation, however, development delays (perhaps exacerbated by the COVID-19 pandemic) led IBM, upon launching Power10, to license blocks of intellectual property from Synopsys for components like the DDR4/5 PHY and PCIe 5.0. This decision introduced proprietary firmware into the system, breaking with the openness established with Power9 and limiting community involvement in the development of these technologies. Additionally, NVIDIA’s strategic shift after Power9 toward its own ARM-based platforms complicated the reintegration of GPUs into the Power platform. IBM’s strategic response in Power10 was to focus on inference within the processor core, enabling artificial intelligence processing directly on the chip without relying on GPUs.

With the anticipated release of Power11, there is an expectation that IBM will address these past challenges and realign its strategy with current market demands. This includes reintegrating GPUs and other accelerators, enhancing support for open-source workloads and Linux applications, and continuing to advance AIX and IBM i as key components of the Power ecosystem.

Evolution of IBM Power from 2010 to October 16, 2024, with the main features of each generation.

Anticipating Power11: Key Expectations and Strategic Imperatives

The decisions made around Power10 have had a significant impact on both the community and the market. Moving away from an open architecture raised concerns among developers and companies that prioritize transparency and collaborative development. Competitors with open frameworks, such as RISC-V, have gained traction by offering the flexibility and freedom that Power10 lacked. This underscores the competitive value of openness in today’s technology landscape, where open-source solutions increasingly dominate the market for new workloads. Looking forward to Power11, there is a strong anticipation that IBM will address these concerns. At SIXE, we advocate for a return to open development practices, providing access to firmware source code and specifications to foster greater collaboration and innovation.

We believe Power11 should correct the limitations seen in Power10, especially by regaining control over critical components like DDR PHY and PCIe interfaces. Avoiding reliance on third-party intellectual property is essential for achieving a truly open architecture. In doing so, IBM can realign with community demands and tap into the expertise of developers and organizations committed to open-source principles. Furthermore, reintegrating GPUs and other accelerators is crucial to meet the growing need for heterogeneous computing. By supporting a wide range of accelerators—including GPUs, FPGAs, and specialized AI processors—IBM can offer flexible, powerful solutions tailored to diverse workloads.

This strategy aligns with industry trends toward modular, scalable architectures that can handle increasingly complex and dynamic computational requirements. Strengthening support for open-source workloads and enhancing compatibility with Linux applications will be vital for the broader adoption of Power11. Seamless integration with open-source tools and frameworks will attract a larger developer community, making it easier to migrate existing applications to the Power platform. This approach not only encourages innovation but also addresses market demands for flexible, cost-effective solutions. Additionally, we are keen to see how these hardware advancements can be fully exploited by AIX and IBM i, reinforcing IBM’s commitment to its longstanding customer base. It is essential that businesses relying on these operating systems can benefit from Power11’s innovations without compromising on stability, performance, compatibility, or availability for their critical systems.

Conclusion

If there is one thing we know for sure, it is that no single operating system or architecture fits all workloads. What is most valuable for Power customers is the possibility of integrating, on the same machines, the databases their business depends on (on AIX or IBM i), private clouds with KVM, front ends with Kubernetes on Linux and, hopefully soon, AI, ML, HPC and similar workloads. At SIXE we think that, just as there is no perfect music for every moment, there is no universal operating system, database or programming language. On Power we can have them all, and that’s why we love it.

For us, Power11 represents an opportunity for IBM to realign its strategy: integrating GPUs and accelerators to meet high-performance computing needs, enhancing support for open source workloads and Linux applications, and continuing to develop its leading-edge operating systems for mission-critical environments, such as AIX and IBM i. In doing so, IBM can deliver a versatile and powerful platform that appeals to a broad spectrum of users. The success of Power11 will depend on IBM’s ability to balance proprietary innovation with openness and collaboration with third parties.

Need help with IBM Power?

Get in touch with SIXE; we are not only experts in everything that runs on Power servers, but also active promoters and part of the IBM Champions community. We have extensive knowledge in virtualization, security, critical environments on AIX, application modernization with RPG and IBM i, as well as emerging use cases with Linux on Power.

 

FreeRTOS logo with Tux in the background

Real-time Linux (RTOS) – Now part of your kernel

Did you know that while you opened the browser to read this, your computer decided to prioritize that process, leaving many others in the queue? 🤯 Do you want to know how it does it? What does it mean for Linux to become an RTOS? Read on and I’ll show you. And watch out: if you are interested in the world of the penguin OS, we are going to tell you more than one fact you may not know… 💥

How does the Linux Kernel scheduler work?

The Linux scheduler works just like in that browser example: basically, it decides which state to put processes in (running, interruptible, uninterruptible, zombie or stopped) and in what order to run them to improve your experience. To decide the execution order, each process has a priority level. Say you have a background process running and you open the browser: the scheduler will interrupt the background process and focus resources on opening the browser, ensuring it runs quickly and efficiently.

The concept of preemption

Preemption on Linux🐧? It’s not what you’re thinking of… Preemption is a fundamental feature: it allows processes to be interrupted if a higher-priority one breaks in. In Linux version 2.6, the ability to preempt processes was added to the kernel; that is, the kernel itself can interrupt running tasks. Systems that are not preemptible must finish the running task before moving on to the next one.

In the case of Linux, since version 2.6.23 the Completely Fair Scheduler (CFS) has been used as the scheduler. It is guided by the goal of giving every process “fair” access to the CPU.

Completely Fair Scheduler: how does it decide which process runs when, so that access to the CPU stays fair?

There are two types of priorities: static and dynamic.

  • Static (niceness): Can be adjusted by the user. The lower the value, the higher the program’s priority and the more CPU time it gets (see the example after this list).
  • Dynamic: Set by the kernel according to the program’s behavior. I/O-bound programs spend most of their time waiting (for disk, network or user input), so the scheduler boosts their priority to keep them responsive; CPU-bound programs run intensive computations, so they are penalized slightly to stop them from crowding out everything else.
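To see static priorities in action, here is a minimal sketch using standard Linux tools (the PID and file names are illustrative):

# Start a CPU-hungry job with the lowest priority (niceness 19)
nice -n 19 tar -czf backup.tar.gz /home &
# Lower the priority of an already-running process (PID 1234 is an example)
renice 10 -p 1234
# Inspect the niceness of a process (NI column)
ps -o pid,ni,comm -p 1234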

How does the scheduler prioritize?

The operating system maintains two lists of processes:

  • List 1: Processes that still have time left to use.
  • List 2: Processes that have used up their time.

When a process uses up its time slice, the system calculates how much time it should get next time and moves it to the second list. When the first list becomes empty, the two lists are swapped. This keeps the system working efficiently. Linux 2.6, with the fully preemptible kernel, greatly improved the responsiveness of the system: the kernel can now be interrupted during low-priority tasks to respond to higher-priority events.

 

PREEMPT_RT inside the Linux Kernel

With this kernel update, Linux can be controlled with pinpoint accuracy. An RTOS implies that the system will behave deterministically for critical tasks, such as in medical settings. However, since Linux was not originally designed for that, the inclusion of PREEMPT_RT in the mainline kernel brings real-time features even if they do not quite make Linux a full RTOS.

Straightforward integration and simplified maintenance
  • Less dependence on external patching: Direct access to upgrades without managing patches.
  • Easier maintenance: Easier upgrades and fewer compatibility issues.
Improved stability and performance
  • Testing and validation: Increased stability and performance through rigorous testing.
  • Continuous development: Continuous improvements in functionality and performance.
Accessibility for developers
  • Ease of use: Enabling more accessible real-time functionalities.
  • Documentation and support: Increased documentation and support in the community.
Competition with dedicated systems
  • Increased competitiveness: Positioning Linux as an alternative to dedicated RTOS.
Extended use cases
  • Critical applications: Adoption of Linux in critical systems where accuracy is essential.

Why has PREEMPT_RT taken so long to become part of the kernel?

In addition to financial problems and the community’s lack of interest in a real-time approach to Linux, a technical problem arose: printk. Printk is a function that prints messages to the kernel’s log buffer. The trouble with it was that it introduced delays every time it was called, and that delay interrupted the normal flow of the system. Once this problem was out of the way, PREEMPT_RT could be merged into the kernel.

How does Linux becoming a Real-Time Operating System affect you?

For the average user: nothing. However, if you are a developer, this change in the Linux core is a breakthrough to take into account. Until now, developers who needed real-time precision opted for other operating systems designed for it. With the new PREEMPT_RT feature integrated into the Linux kernel, this will no longer be necessary. The feature allows Linux to stop any task to prioritize a real-time request, which is essential for applications that demand low latency.
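As a quick, hedged illustration (the priority value and task name are made up), this is how you would check for a real-time kernel and run a task under a real-time scheduling policy:

# Check whether the running kernel was built with real-time preemption
uname -v                      # look for "PREEMPT_RT" in the version string
# Run a task under the SCHED_FIFO real-time policy with priority 80
sudo chrt -f 80 ./critical_task
# Inspect the scheduling policy and priority of a running process
chrt -p 1234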

Use case: home security

Imagine you are using a voice assistant at home that controls both the lighting and the security system. If it detects an intrusion while you are at home, it should prioritize the activation of alarms and notify you immediately. In this case, the lights or music can wait; what really matters is your safety. This ability to respond immediately in critical situations can make all the difference.

Why is Real Time necessary?

As the use case shows, an RTOS can handle unforeseen events within specific, predictable time bounds. In workloads that require precision, RTOSs play a critical role, which is why they are so often found in IoT applications:

  • Vehicles: Cars like Tesla’s must brake immediately if they detect an obstacle.
  • Critical systems: In aviation or medicine, systems must operate within tight deadlines.
  • Industry: In industrial processes, a slight delay can cause failures.

The role of AI and machine learning

AI and machine learning also play a key role in RTOS and IoT. They could predict events and support fast and effective decision making.

Conclusion

In short, real-time Linux is finally becoming a reality. The integration of real-time capabilities into Linux marks a turning point and opens up new opportunities for critical tasks in sectors such as healthcare, robotics and IoT. With PREEMPT_RT integrated into the kernel, Linux guarantees greater timing accuracy. However, we should keep in mind that the penguin🐧 operating system is still not 100% an RTOS; it was not designed for that. So we will see whether companies adapt Canonical’s solution to their real-time needs or continue to opt for other solutions such as FreeRTOS or Zephyr. Do you want to continue learning about Linux? We offer official certifications. And if that is not enough… we adapt to you with tailor-made training 👇

Intensive training on Linux systems

Linux is the order of the day… if you don’t want to be left out of the latest technological demands, we recommend our Canonical Ubuntu courses 👇

Official SIXE training at Canonical, creators of Ubuntu

Installing Windows on IBM Power: Windows XP and IBM logos over the SIXE logo

Installing Windows on IBM Power (for fun)

In a recent conversation with what I call the Wizards of Power, i.e. the technical management of this fantastic platform (inventors, architects, distinguished engineers and the great teams of people behind it), I was asked: “Hugo, why your interest in emulation? Who would want to emulate other architectures on Power? What’s the point?”

My answer is that in the open-source world, many of the things we do, we do out of curiosity or even just for fun. It resonates in my head that if one day I can have as much fun on Linux on ppc64le as I do on x86 (or, slowly, on ARM: Mac, Raspberry Pi), it will mean that Power can be “the third” architecture for Linux, far beyond the current use cases and mission-critical workloads. In other words, if I can do the same on ppc64le as on other architectures, I can use Power for any use case.

Why have a thousand x86 servers wasting energy and taking up space in the data center when a few Power servers can do the same job more securely and efficiently? For compatibility, customers will say, or for using standard tools. But multi-architecture can be a new standard, if it isn’t already.

I don’t want to go too deep into this topic today, there are several ideas published on the IBM portal and I think the IBM, Debian, Canonical and Red Hat teams are doing an excellent job which I will cover in future posts.

There was news on the kernel.org lists, which we covered on the SIXE blog over recent months, about the hard work being done on this, and with the arrival of the new FW1060 firmware level we finally have full KVM support on PowerVM. It is roughly the equivalent of what already exists on IBM Z and LinuxONE. Great!

As always, I wanted to push the technology to its limits, including an old dream: running Windows (the “enemy” for AIX and Linux folks), and for extra fun Windows XP, on a Power10, using KVM and QEMU.

Preparation

We need to configure the LPAR as a KVM host. This changes the way it uses PowerVM so that there is no overhead, and at least one dedicated processor must also be assigned to it (not in “donating” mode, mind you). This gives us 8 dedicated threads on which to run our virtual processors in KVM. Yes, it is a lot simpler and less capable than PowerVM with its micro-partitioning, but it is still an industry standard, and not everyone needs to commute to work on a plane. Don’t you think?

Choosing the distribution

In my experience, the best support for ppc64le experiments is usually in Debian or Fedora. In this case I installed Fedora 40 and upgraded to the latest levels. Then you have to install all the virtualization packages and QEMU support for other architectures. Following my idea of creating interactive articles, I will use virt-manager to avoid complex QEMU settings. In my environment I installed all the qemu-system-* packages.
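For reference, a minimal sketch of what that installation looks like on Fedora 40 (group and package names may vary slightly between releases):

# Virtualization stack: libvirt, qemu-kvm, virt-manager and friends
sudo dnf install @virtualization
# QEMU emulators for foreign architectures, x86 included
sudo dnf install qemu-system-x86
sudo systemctl enable --now libvirtd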

To get Windows to detect our virtual SATA disks as usable, you will need to set this up first.

Once you are done, you can install what your disks will need:

# dnf install virtio-win-stable

You will also need a Windows XP .iso and its license keys. I recommend placing it in /var/lib/libvirt/images so that it is automatically detected by virt-manager.

Creating the VM (just follow the wizard)

Make sure you select x86 as the architecture (QEMU will take care of the emulation).
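If you prefer the command line over the wizard, a rough virt-install equivalent would look like this (disk size, paths and OS variant are illustrative; on ppc64le the x86 guest runs under TCG emulation, so expect it to be slow):

virt-install \
  --name winxp --arch i686 \
  --vcpus 1 --memory 1024 \
  --cdrom /var/lib/libvirt/images/winxp.iso \
  --disk size=10,bus=sata \
  --os-variant winxp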

 

Just like when we ran AIX on x86, don’t expect it to be very fast; it took about an hour to install… actually, about as long as it took on a PC of the era. The things I do to see MS Messenger again! Enjoy the video and stay updated by following us!

Video: Installing Windows on IBM Power (for fun)

Further tests

What about running MS PowerShell for ARM64 in Docker? I can now run “dir” on Power. Fantastic! :P

Conclusion

The work done to support KVM is, for me, the biggest news in recent years because of the endless possibilities it opens up for the Power platform. As far as I have been able to test, everything works, and works very well. Congratulations to all the people who made it possible.

SUSE logo on a SIXE background

Understanding high availability (HA) on SUSE Linux

High availability and business continuity are crucial to keep applications and services always operational.
High availability clusters allow critical services to keep running, even if servers or hardware components fail.
SUSE Linux offers a robust set of tools for creating and managing these clusters.
In this article, we explore the current state of clustering in SUSE Linux, with a focus on key technologies such as Pacemaker, Corosync, DRBD and others.
These, with minor differences, are available on both x86 and ppc64le.

Pacemaker: the brain of the cluster

Pacemaker is the engine that manages high availability clusters in SUSE Linux.
Its main function is to manage cluster resources, ensuring that critical services are operational and recover quickly in case of failure. Pacemaker continuously monitors resources (databases, web services, file systems, etc.) and, if it detects a problem, migrates those resources to other nodes in the cluster to keep them up and running.
Pacemaker stands out for its flexibility and ability to manage a wide variety of resources.
From simple services to more complex distributed systems, it is capable of handling most high-availability scenarios that a company may need.

Corosync: the cluster’s nervous system

Corosync is responsible for communication between cluster nodes.
It ensures that all nodes have the same view of the cluster status at all times, which is essential for coordinated decision making.
It also manages quorum, which determines whether there are enough active nodes for the cluster to operate safely.
If quorum is lost, measures can be taken to prevent data loss or even service downtime.
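A quick way to see quorum in practice (output fields vary by version; the two_node setting shown in the comments is the typical choice for two-node clusters):

# Show current quorum state, votes and membership
corosync-quorumtool -s
# For two-node clusters, /etc/corosync/corosync.conf usually enables:
#   quorum {
#     provider: corosync_votequorum
#     two_node: 1
#   }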

DRBD: the backbone of the data

DRBD (Distributed Replicated Block Device) is a block-level storage replication solution that replicates data between nodes in real time.
With DRBD, data from one server is replicated to another server almost instantaneously, creating an exact copy.
This is especially useful in scenarios where it is crucial that critical data is always available, even if a node fails.
Combined with Pacemaker, DRBD allows services to continue operating with access to the same data, even if they are on different nodes.
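As a minimal sketch (node names, devices and addresses are examples), a DRBD resource is defined in /etc/drbd.d/ and brought up with drbdadm:

# /etc/drbd.d/r0.res, shown here as comments:
#   resource r0 {
#     device    /dev/drbd0;
#     disk      /dev/sdb1;
#     meta-disk internal;
#     on node1 { address 10.0.0.1:7789; }
#     on node2 { address 10.0.0.2:7789; }
#   }
drbdadm create-md r0           # initialize metadata (run on both nodes)
drbdadm up r0                  # bring the resource up (both nodes)
drbdadm primary --force r0     # promote one node on first use
drbdadm status r0              # watch the synchronization state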

Other key technologies in SUSE Linux clusters

In addition to Pacemaker, Corosync and DRBD, there are other essential technologies for building robust clusters on SUSE Linux:

  • SBD (Storage-Based Death): SBD is a fencing tool that prevents a misbehaving node from causing problems in the cluster by isolating it.
    This is achieved by using a shared storage device through which the nodes communicate their state.
  • OCF (Open Cluster Framework): OCF scripts are the basis of the resources managed by Pacemaker.
    They define how to start, stop and check the status of a resource, providing the flexibility needed to integrate a wide range of services into the cluster.
  • Csync2: A tool for synchronizing files between nodes in a cluster.
    It ensures that configuration files and other critical data are always up to date on all nodes.

Current status and future trends

Clusters in SUSE Linux have matured and are adapting to new business demands.
With the growing adoption of containerized environments, with parts running in different clouds, clusters in SUSE Linux are evolving to integrate better with them.
This includes improved support for container orchestration and for distributed applications that require high availability beyond replicating two disks with DRBD and keeping a virtual IP alive :) Still, the combination of Pacemaker, Corosync, DRBD and other tools provides a solid foundation today for high-availability clusters that can scale and adapt to the needs of SAP HANA and other solutions that require high, if not total, availability. If you need help, at SIXE we can help you.

Cheatsheet for creating and managing clusters with Pacemaker on SUSE Linux

Here is a modest cheatsheet to help you create and manage clusters with Pacemaker on SUSE Linux.
Sharing is caring!

Package installation

  • Install Pacemaker and Corosync: zypper install -y pacemaker corosync crmsh

Basic configuration

  • Configure Corosync: edit /etc/corosync/corosync.conf to define the transport, interfaces and network.
  • Start the services: systemctl start corosync && systemctl start pacemaker
  • Enable the services at boot: systemctl enable corosync && systemctl enable pacemaker

Cluster management

  • View cluster status: crm status
  • See node details: crm_node -l
  • Add a new node: crm node add <node_name>
  • Remove a node: crm node remove <node_name>
  • View cluster logs: crm_mon --logfile <log_path>

Resource configuration

  • Create a resource: crm configure primitive <resource_name> <agent_type> params <parameters>
  • Delete a resource: crm configure delete <resource_name>
  • Modify a resource: crm configure edit <resource_name>
  • Show the complete cluster configuration: crm configure show

Groups and sets

  • Create a resource group: crm configure group <group_name> <resource1> <resource2> ...
  • Create a colocation set: crm configure colocation <set_name> inf: <resource1> <resource2>
  • Create an execution order: crm configure order <order_name> <resource1> then <resource2>

Constraints and placement

  • Create a colocation constraint: crm configure colocation <constraint_name> inf: <resource1> <resource2>
  • Create a location constraint: crm configure location <location_name> <resource> <score> <node>

Failover and recovery

  • Force migration of a resource: crm resource migrate <resource_name> <node_name>
  • Clear the status of a resource: crm resource cleanup <resource_name>
  • Temporarily unmanage a resource: crm resource unmanage <resource_name>
  • Manage a resource again: crm resource manage <resource_name>

Advanced configuration

  • Configure the quorum policy: crm configure property no-quorum-policy=<freeze|stop|ignore>
  • Configure fencing: crm configure primitive stonith-sbd stonith:external/sbd params pcmk_delay_max=<time>
  • Configure resource timeouts: crm configure primitive <resource_name> <agent_type> op start timeout=<time> interval=<interval>

Validation and testing

  • Validate the cluster configuration: crm_verify --live-check
  • Simulate a failure: crm_simulate --run

Policy management

  • Configure the recovery policy (resource stickiness): crm configure rsc_defaults resource-stickiness=<value>

Stopping and starting the cluster

  • Stop the entire cluster: crm cluster stop --all
  • Start the entire cluster: crm cluster start --all
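To tie several of these commands together, here is a worked example (the IP address and resource names are illustrative) that creates a floating IP and an nginx service that fail over together:

crm configure primitive vip ocf:heartbeat:IPaddr2 params ip=192.168.1.100 cidr_netmask=24 op monitor interval=10s
crm configure primitive web systemd:nginx op monitor interval=30s
crm configure group web_group vip web    # same node, started in order
crm status                               # verify both resources are running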

 

SIXE news logo

SIXE: your trusted IBM partner

In this fast-changing and complex technological era, choosing the right suppliers is crucial.
When it comes to solutions like IBM’s, the real difference is not the size of the company, but its technical capacity, human capital, commitment and level of specialization.
SIXE Ingeniería is your ideal IBM partner, and here we explain why.

Technical expertise: Who do you want to design and manage your project?

At SIXE, we are not a company that resells any product or service looking for a margin, passing the technical challenge to someone else and “bye-bye”.
We specialize in key areas such as cybersecurity and critical IT infrastructure.
Unlike IBM’s large partners, who usually outsource most of their projects, at SIXE every task is executed by our in-house experts. Do you prefer to rely on a company that subcontracts or on a team that is directly involved in every technical detail? Our engineering company approach allows us to design solutions tailored to the specific needs of each client.
We do not offer generic configurations or deployments, but solutions tailored to exactly what your organization (and your team) needs.
We have experts in IBM Power infrastructure, storage, operating systems (AIX, Red Hat, IBM i, Debian, z/OS), Informix and DB2 databases, application servers, etc.

Personalized engagement: what care do you expect to receive?

In large consulting firms, projects often become just another number on your client list.
Do you want to be just one more or do you prefer exclusive treatment?
At SIXE, we offer a personalized service, ensuring that each project receives the attention it needs to go well and that you trust us for many years to come.
Our agile structure allows us to adapt quickly and work side by side with the systems managers, ensuring that the proposed solutions meet your expectations and needs.

Innovation and flexibility

Large companies are often trapped in bureaucratic processes that prevent them from innovating or reacting quickly to market changes.
How many times have you come across solutions that are outdated or slow to implement?
At SIXE, we can adapt quickly and offer solutions that not only follow the latest trends, but anticipate them.
This is essential for projects that require quick and effective responses in a changing environment.
Also, when something involves risks, no matter how trendy it is or how spectacular it sounds in a PowerPoint, we will raise our voice and let you know.

Transparency and control

When projects are outsourced, transparency and control are diluted.
At SIXE, you have the security of knowing exactly who is working on your project and how resources are being managed.
Large consulting firms, because of their size, tend to lose this transparency, delegating tasks to third parties without the client having any real control over the process.
Would you rather risk losing visibility on your project or have a partner that keeps you informed and in control of each milestone?

Long-term relationships: are you looking for an additional supplier or a strategic partner?

We are not looking to close short-term contracts; our goal is to build long-lasting relationships based on ethics.
This means that, once the technology is implemented, we remain committed to the project, offering technical support, training and consulting whenever necessary.
Large companies, on the other hand, tend to focus on the initial implementation, leaving everything else aside.
Outsourcing everything, of course, just like Homer Simpson would do.

Return on investment: where does your money go?

In many large consulting firms, much of the budget goes to cover overhead, with little direct impact on project quality.
They do not have good engineers on staff because their managers think that outsourcing technical talent reduces risks and improves margins.
At SIXE, every euro invested translates into real value; we do not have a pool of managers and executives spending hours in meetings and meals with clients.
What we do have is an internationally recognized engineering team, committed to the company and its clients for more than 15 years.
We are also part of a network of experts recognized internationally by IBM.

The difference is in the execution

Although it may be said otherwise, the real difference in a technology project is not in the size of the company, but in how and by whom each project is executed.
At SIXE, we combine technical expertise, commitment and transparency, offering a precise and results-oriented execution.
In a market saturated with options, why not choose a partner that ensures quality, innovation and a relationship based on collaboration?
Choosing SIXE as your IBM partner means opting for an approach based on technical excellence and total commitment to results.
Don’t leave the success of your project in the hands of chance; we are a partner who will care as much as you do (for our own sake) about the final result and about the relationship between our companies in the medium and long term.

Not only IBM

Although 50% of our business is related to IBM training, consulting and infrastructure projects, we are also a strategic partner of Canonical (Ubuntu), Red Hat and SUSE.

What about your competitors?

The truth is that we do not have any, because there is no other company of our size with our level of specialization in the solutions we offer.
There are other small and medium-sized companies with incredible human capital that complement the technologies we work on and with which we always collaborate, but we never compete.
When we don’t know how to do something, we always ask for help and let our clients know. It is part of our DNA.

IBM i and SIXE logos

Learn IBM i and RPG with SIXE

IBM RPG training: SIXE❤️IBM i and RPG

SIXE is a reference in official IBM training.
For years, we have offered specialized courses in IBM i and RPG, key technologies for many CRMs and ERPs used by large companies around the world.
Among our most outstanding courses is the advanced programming workshop in RPG IV.
If you are new to RPG, you can start learning IBM i and RPG with SIXE in our RPG IV basics workshop.
These courses will allow you to cover from the basics to the most advanced techniques of this robust programming language.

Personalization and teaching quality

One of SIXE’s biggest differentiators is our customized approach.
Each course can be tailored to your team’s specific needs, ensuring practical and relevant training.
Did you know that many courses are taught by IBM Champions?
These internationally recognized experts ensure that students receive the highest quality, most up-to-date training.
Plus, we are a company run by IBM instructors themselves.

History and relevance of IBM i today

IBM i is the evolution of the AS/400 system launched in 1988, designed to be robust, scalable and secure.
Over more than three decades, it has maintained its mission to provide a stable and reliable platform for enterprise data management.
The latest release, IBM i 7.5, includes key enhancements in security, performance and cloud integration, reinforcing its relevance in today’s IT environment.

Current RPG use cases: can I have the receipt?

RPG (Report Program Generator) continues to be fundamental to many organizations using IBM i, especially in industries such as banking, manufacturing and retail.
RPG has been updated with modern programming techniques, making it as relevant today as it was in its early days.
For example, when you pay at a supermarket, the receipt and the associated processes (inventory, ordering, invoicing) are managed by an RPG program on an IBM Power system running IBM i.

Don’t call me AS/400

An interesting anecdote about IBM i is that its predecessor, the AS/400, was introduced in 1988 as a system “as easy to use as a refrigerator”.
At a time when computer systems were complicated, this promise highlighted IBM i as a revolutionary system in terms of accessibility and simplicity.
Although the name has changed, if you need an AS/400 course, we can arrange that too.

Why choose SIXE?

With over 15 years of experience, SIXE offers not only training, but a comprehensive educational experience that is tailored to each client’s needs.
Our focus on quality and customization, coupled with the expertise of highly qualified instructors, makes SIXE the best choice for those seeking effective and personalized IBM official training.
To explore more about these courses and to register, visit our website.

SIXE logos with its partners SUSE, Canonical, Red Hat and IBM

SIXE, Podman, Docker and Ubuntu LXD logos

Discover Ubuntu LXD: The alternative to Docker or Podman

Do you still use only Docker or Podman? Find out why you should try Ubuntu LXD

INTRODUCTION

Ubuntu LXD is Ubuntu’s container manager, based on LXC (Linux Containers), and it remains highly relevant despite the rise of technologies such as Docker in the Kubernetes ecosystem. This article explores the reasons behind LXD’s persistence, its distinctive use cases and the products that employ it in the real world. Ready to find out why you should pay attention?

WHAT IS UBUNTU LXD?

LXD is a container management tool that acts as an enhancement to LXC, offering a more complete containerization experience geared towards lightweight virtual machines. While Docker and all other containers based on the OCI standard are ephemeral by design, LXD focuses on providing full system containers, allowing multiple processes and services to run in a virtual-machine-like fashion. You can even deploy a complete Kubernetes environment, containers included, inside an LXD container. In that respect it looks much more like its close relatives: BSD jails, Solaris zones and AIX WPARs. Still think Docker or Podman are your only options?
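If you want to feel the difference yourself, a minimal quickstart looks like this (the container names are illustrative):

sudo snap install lxd
sudo lxd init --auto                  # accept sensible defaults
lxc launch ubuntu:22.04 mysystem      # a full system container, not a single app
lxc exec mysystem -- bash             # log in as if it were a small VM
lxc launch ubuntu:22.04 myvm --vm     # LXD can also manage real virtual machines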

LXD interface screenshot

The evolution of containers

Remember when Docker was the one containerization tool everyone loved? Since its release in 2013, Docker has revolutionized application development and deployment by making containers accessible and easy to use. Docker allowed developers to package their applications together with all their dependencies, ensuring they would work consistently in any environment. This innovation led to massive adoption of containers in the industry, with Docker and Podman becoming de facto standards, along with orchestrators such as Kubernetes. But is Docker the only star of the show?

While Docker was getting all the attention, LXD was quietly working to offer something different: full OS containers. As organizations adopt containers for more use cases, the need for more sophisticated and efficient management has arisen. This is where LXD comes in. Can you imagine having the flexibility of virtual machines but with the efficiency of containers, without having to go crazy and totally change use cases?

Comparison between Ubuntu LXD, Podman and Docker

Docker and Podman are designed to package and deploy individual applications; their architecture focuses on containerizing microservices, cloud applications and continuous deployment. Ubuntu LXD, by contrast, offers a more complete experience.

Docker and Podman are also tightly integrated with Kubernetes, the most popular container orchestration tool on the market. LXD, on the other hand, lets you run a complete system inside a container. This capability makes it ideal for use cases where a complete environment is required, similar to a virtual machine but with the efficiency of containers. See the difference?

LXD and Docker logos

Ubuntu LXD Use Cases

LXD excels in several specific scenarios. For example, in Infrastructure as a Service (IaaS), LXD enables the creation and management of complete operating-system containers. This is ideal for cloud service providers who need to offer complete environments without the overhead of traditional virtual machines. Have you ever had trouble replicating a development environment identical to production? With LXD, developers can create isolated, replicable development environments, minimizing configuration and dependency issues.

LXD: virtual machines and Linux containers

In the field of network simulation and testing, LXD allows you to simulate complex networks and test services at the network level. This capability is crucial for replicating entire network infrastructures within a single host. For system administration and DevOps tasks, LXD offers flexibility beyond application containerization: it allows the creation of complete environments that can be managed, updated and monitored as if they were physical machines, but with the efficiency of containers. Still think Docker is your only alternative?

Solutions using Ubuntu LXD

Canonical, the company behind Ubuntu and a SIXE partner, has developed several solutions based on Ubuntu LXD that offer exceptional performance and flexibility. Among them is MAAS (Metal as a Service), which uses LXD to provide highly configurable development and test environments. It allows users to deploy complete operating systems in containers, facilitating the management of large and complex infrastructures.

Canonical’s MicroCloud GitHub statistics

MicroCloud benefits from LXD by integrating it to offer full operating-system containers as an additional (or alternative) option to traditional virtual machines, improving flexibility and efficiency in resource management. In addition, Travis CI, a continuous integration platform, uses LXD to run its test environments, giving it fast and reproducible test environments and improving developer efficiency. Surprised? There is more.

For those of you looking to implement these solutions in your environment, SIXE Engineering is the Canonical and Ubuntu reference partner you are looking for. With extensive experience implementing LXD and other virtualization technologies, SIXE can help you maximize the potential of your technology infrastructure. Whether you need support for MAAS, OpenStack or any other LXD-based solution, SIXE has the knowledge and experience to guide you every step of the way. When the path forks, we can recommend, advise and accompany you along the one that suits you best, without compromises or ties to any manufacturer, because with Canonical we do not offer closed products but open technologies, made with and for the community, taking the philosophy of free software to its ultimate consequences.

Conclusion

Despite the dominance of lightweight containerization technologies such as Docker and Podman in Kubernetes, LXD remains relevant in many use cases because of its ability to provide full OS containers. Its use in infrastructure as a service, development environments, network simulation and system administration, as well as its adoption in products such as MAAS, OpenStack and Travis CI, is proof of this.

In our view, the strength of LXD lies in its unique ability to combine the efficiency of containers with the simplicity of virtual machines, offering a hybrid solution that remains essential for multiple applications. Still think Docker is the only option? Surely not. We hope you enjoyed this article, and remember that for any implementation of these technologies you can count on SIXE’s expert support. We will always be at your side with the best free solutions.

MicroStack logo on a blue background

Exploring MicroStack: A Lightweight Private Cloud Solution

LEVERAGING MICROSTACK AS A LIGHTWEIGHT PRIVATE CLOUD SOLUTION

As organizations continue to embrace cloud computing, choosing the right cloud infrastructure becomes a critical decision. MicroStack, a lightweight, easy-to-install, open-source tool based on the OpenStack platform, has emerged as a compelling choice for many businesses. This post explores the advantages of using MicroStack, highlights the growing market share of the OpenStack platform and the rising prices of public cloud competitors, and takes a hands-on look at the intuitive OpenStack dashboard to explore its capabilities and ease of use.

 

Illustration of a woman using a computer in the cloud

WHY CHOOSE MICROSTACK?

MicroStack provides the flexibility of open-source software over traditional public cloud deployments, in a more lightweight and easier-to-deploy version of the OpenStack platform. This flavor of OpenStack is well suited both to startups and to small cloud deployments within larger organizations.

Open source flexibility🌐

MicroStack provides the flexibility of open-source software without the burden of licensing fees or vendor lock-in. This allows organizations to implement cloud infrastructure at a lower cost and with the freedom to modify and extend the platform according to their specific needs. The community-driven development model ensures continuous improvements and innovations, fostering a robust ecosystem around MicroStack.


Customizability🛠️

Additionally, with MicroStack, organizations have full access to the source code and can tailor the platform to fit their unique requirements. This includes integrating a wide range of plug-ins and extensions, enabling businesses to build a cloud environment that aligns precisely with their operational goals. This flexibility is crucial for adapting to evolving business demands and optimizing resource utilization.


Simplified deployment 🚀

MicroStack is designed for ease of deployment, offering a streamlined installation process that minimizes complexity and setup time: you can bootstrap a cloud deployment onto a compute node in fewer than six commands, with an average deployment time of 30 minutes. This makes it particularly suitable for organizations looking to quickly establish or expand their cloud footprint without extensive technical expertise. The straightforward deployment also lowers the initial barriers to adoption, enabling faster time-to-value for cloud initiatives.
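As a hedged sketch of that bootstrap (snap channel names change over time, so check the current documentation):

sudo snap install microstack --beta --devmode
sudo microstack init --auto --control    # networking, storage and core services
microstack launch cirros --name test     # a quick smoke-test instance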


Vendor neutrality🛡️

Unlike proprietary cloud solutions that lock users into specific vendors, MicroStack supports a diverse range of hardware and software configurations. Canonical’s firm belief in open source and vendor neutrality reduces dependency risks and allows organizations to select the best components for their infrastructure. It also aligns with industry trends towards open standards and interoperability, enhancing long-term scalability and operational efficiency.


Lightweight footprint🌱

Unlike full-scale OpenStack deployments that require substantial hardware resources, MicroStack operates efficiently on smaller-scale environments. This makes it an ideal choice for edge computing scenarios or organizations with limited infrastructure budgets. By optimizing resource usage and minimizing overhead, MicroStack enhances operational efficiency while reducing total cost of ownership.

TECHNICAL AND PERFORMANCE BENEFITS

Furthermore, MicroStack provides robust technical capabilities that support diverse workload requirements such as:

Scalability📈

MicroStack is designed to scale horizontally, accommodating growing workloads and evolving business needs. Whether deploying a few nodes or scaling up to thousands, MicroStack ensures seamless expansion without compromising performance or stability. This scalability is essential for organizations experiencing rapid growth or fluctuating demand patterns in their cloud operations.


Advanced networking🛰️

The networking capabilities of MicroStack, powered by components like Neutron, offer advanced features such as Software-Defined Networking (SDN) and Network Functions Virtualization (NFV). These capabilities enable organizations to create complex network topologies, optimize traffic management, and enhance overall network performance. MicroStack’s focus on modern networking paradigms supports emerging technologies like containers and edge computing, aligning with industry trends towards agile and adaptive IT infrastructures.


Efficient storage solutions📦

MicroStack supports a variety of storage backends through components like Cinder (block storage) and Swift (object storage). This versatility allows organizations to implement highly performant and scalable storage solutions tailored to specific application requirements.


Cost efficiency💰

MicroStack’s efficient resource management tools optimize resource utilization, minimize waste, and enhance operational efficiency. By maximizing the use of existing infrastructure resources and reducing the need for costly proprietary solutions, MicroStack enables organizations to allocate resources more strategically and focus on innovation rather than infrastructure management.

Ilustración con tonos verdes de una pasarela de pago

COST ADVANTAGES

MicroStack offers compelling cost advantages compared to traditional cloud solutions:

  • Lower total cost of ownership (TCO)

By eliminating licensing fees and leveraging commodity hardware, MicroStack significantly reduces both upfront CapEx and ongoing OpEx as the organization and the cloud deployment scale.

Organizations can achieve substantial cost savings while maintaining the flexibility and scalability of an open-source cloud platform. This cost-effectiveness makes the Openstack Platform accessible to organizations of all sizes, from startups to large enterprises, seeking to optimize their IT investments and maximize their return on investment.


Woman reviewing her income

MARKET TRENDS AND PRICE INCREASES IN PUBLIC CLOUD SERVICES

The Rise of OpenStack: A Growing Market

OpenStack is projected to experience significant market growth, from $5.46 billion in 2024 to a staggering $29.5 billion in 2031. This growth underscores the increasing adoption and recognition of OpenStack’s benefits among organizations worldwide. Its flexibility, cost-effectiveness and robust community support make it a preferred choice for businesses aiming to deploy scalable and efficient cloud infrastructures.

Cost Challenges in Public Cloud Services

In contrast, the cost of public cloud services has been on the rise. While these platforms offer extensive features and global reach, their escalating prices present challenges for organizations seeking to manage cloud costs effectively. MicroStack offers a viable alternative by providing cost-effective cloud solutions without compromising performance or scalability.

The Shift from Serverless to Monolithic Deployments

Paradoxically, even public cloud giants like Amazon are stepping back from using their own public cloud, AWS, as a microservices/serverless platform: moving away from serverless in favor of a monolithic deployment reportedly decreased their OPEX by 90%. If this type of architecture suits you, it can be quickly and seamlessly adopted with MicroStack, fully leveraging the OpenStack platform in a few simple steps and keeping all the relevant architecture under a single private network, with simple and intuitive management of the network topology in case of a future upscale. For smaller enterprises, MicroStack simplifies the migration or deployment of such an infrastructure even further.

OpenStack’s Adoption Among Leading Enterprises

For instance, over 50% of the Fortune 100 companies have embraced OpenStack, highlighting their trust in these technologies to support mission-critical operations and strategic initiatives. Businesses like Comcast, Allstate, Bosch, and Capital One are leveraging OpenStack to drive innovation and achieve competitive advantages.

OpenStack’s Global Impact

Furthermore, in regions like APAC, organizations such as UnionPay, China Mobile, and China Railway are leveraging OpenStack to scale and transform their IT operations, further driving the adoption and growth of open-source cloud solutions globally.

Chart showing OpenStack’s position in the market

OUR EXPERIENCE WITH MICROSTACK AT SIXE

Overall, our experience with MicroStack at SIXE from an operational perspective can be described as the pinnacle of practicality and efficiency. Installing, working with, and deploying MicroStack was straightforward and intuitive, allowing us to fully bootstrap a private cloud in under 30 minutes.

To summarize, navigating the complexities of cloud infrastructure management is a crucial aspect of modern IT operations. In this final section, we delve into our user experience with the MicroStack dashboard.

The MicroStack dashboard exemplifies our partner Canonical’s commitment to ease of use and accessibility. Using the dashboard, users can easily deploy and manage virtual machines, configure networking and monitor resource utilization, all from a centralized hub, flattening the learning curve required to deploy and operate critical cloud-based infrastructure.

✨How to launch and configure a virtual instance?

It only takes a few clicks to launch and configure a virtual instance via the dashboard.

We launch an instance from the button at the top right corner; a pop-up menu appears where we can define the server configuration.

We provide the name and project of our instance.

Next, we choose the image for our VM, we can use a standard OS-ISO image, or import our custom-built snapshots from a previous set-up VM for a quick, yet customized deployment of our specific enterprise needs.

Next, we select the flavor of the instance. Flavors are OpenStack’s way of defining virtual hardware specifications; you can use one of the preset flavors or create one to suit your specific infrastructure and application needs.

We will use the medium flavor; OpenStack even warns us preemptively of the hardware constraints that every snapshot or image is subject to.

Assuming your network is already configured, the final (and optional) step is to add a security group so we can access the instance via SSH and operate within it.

Now our customized instance is set up and running! :)

Under the Actions menu on the right, we can associate a floating IP in order to SSH directly into the instance from within our internal network.

Now we can use that IP to access the instance directly via SSH!
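For completeness, the same flow can be scripted with the OpenStack CLI (the image, flavor, network and key names below are illustrative):

openstack server create --image ubuntu-22.04 --flavor m1.medium \
    --network internal --key-name mykey my-instance
openstack floating ip create external
openstack server add floating ip my-instance 203.0.113.10
ssh ubuntu@203.0.113.10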

New IBM Power systems automation course with Ansible!

We are pleased to announce the launch of the official IBM and SIXE course on automating IBM Power systems with Ansible. This training program is designed to provide advanced, hands-on skills in the automation of the various IBM Power platforms, including AIX, Linux and IBM i, as well as VIOS and PowerHA servers.

🏆 Things you will learn during the course:

  • AIX Automation: Master the automation of repetitive and complex tasks in AIX (see the sketch after this list).
  • Linux Automation in Power: Learn how to manage and automate operations on Linux servers in Power Systems environments and how to deploy complex environments such as SAP HANA.
  • IBM i Automation: Discover how to simplify IBM i systems administration using Ansible.
  • VIOS Management: Improve Virtual I/O Server (VIOS) efficiency with advanced automation techniques.
  • PowerHA Implementation: Learn best practices for automating high availability in Power Systems using PowerHA.
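As a small taste of what the course covers, this hedged sketch installs the IBM Power collections from Ansible Galaxy and runs an ad-hoc fact-gathering task (the inventory group name is illustrative):

# Install the IBM Power collections from Ansible Galaxy
ansible-galaxy collection install ibm.power_aix ibm.power_ibmi ibm.power_hmc
# Ad-hoc smoke test against an AIX inventory group
ansible aix_servers -m setup -a "filter=ansible_distribution*"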

🎓 Who is it intended for?

This course is intended for system administrators, IT engineers, solution architects and any professional interested in improving their automation skills in IBM Power Systems environments. No previous Ansible experience is required, although basic knowledge of system administration helps. If you wish, you can first take our Ansible and AWX course.

💼 Course benefits:

  • Official certification: Obtain an internationally recognized certification by IBM and SIXE.
  • Practical skills: Participate in practical exercises and real-world projects that will prepare you for real-world challenges.
  • Exclusive materials: Access exclusive and updated training resources.

📍 Modality:

The course will be offered in a hybrid format, with both classroom and online options to suit your needs.

📝 Registration:

Don’t miss this opportunity to advance your career and transform the way you work with IBM Power Systems! Register today and secure your place in the next edition.

Join us and take your critical environment automation skills to the next level. We look forward to seeing you at the official IBM and SIXE course on automating IBM Power systems with Ansible!

SiXe Ingeniería