Installing Windows on IBM Power (for fun)

In a recent conversation with what I call the Wizards of Power, that is, the technical management of this fantastic platform (its inventors, architects, distinguished engineers and the great teams of people behind it), I was asked: “Hugo, why your interest in emulation? Who would want to emulate other architectures on Power? What’s the point?”

My answer is that in the open source world, many of the things we do, we do out of curiosity or even just for fun. The idea that keeps resonating in my head is that if one day I can have as much fun on Linux on ppc64le as I do on x86, or slowly on ARM (Mac, Raspberry Pi), it will mean that Power can be “the third” architecture for Linux, far beyond its current use cases and mission-critical workloads. In other words, if I can do the same on ppc64le as on other architectures, I can use Power for any use case.

Why have a thousand x86 servers wasting energy and taking up space in the data center when a few Power servers can do the same job more securely and efficiently? Customers will say: for compatibility, for using standard tools. But multi-architecture can be a new standard, if it isn’t already.

I don’t want to go too deep into this topic today; there are several ideas published on the IBM portal, and I think the IBM, Debian, Canonical and Red Hat teams are doing an excellent job, which I will cover in future posts.

There has been news on the kernel.org lists, which we have been covering on the SIXE blog over the last few months, about the hard work being done in this area. With the arrival of the new FW1060 firmware level we finally have full KVM support on PowerVM, something equivalent to what already exists on IBM Z and LinuxONE. Great!

As always, I wanted to push the technology to its limits, including an old dream: running Windows (the “enemy” for AIX and Linux folks), and in this case, for extra fun, Windows XP, on a Power10 using KVM and QEMU.

Preparation

We need to configure the LPAR as a KVM host. This changes how it uses PowerVM so that there is no overhead, and at least one dedicated processor must also be assigned to it (not in “donating” mode, mind you). This gives us 8 dedicated threads to run our virtual processors in KVM. Yes, it’s much simpler and less capable than PowerVM with its micro-partitioning, but it’s still an industry standard, and not everyone needs to commute to work by plane. Don’t you think?
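Before going any further, it’s worth sanity-checking that the LPAR can really act as a KVM host. A quick sketch (it runs on any Linux box; on PowerVM the /dev/kvm device only appears once the FW1060+ firmware and the kvm_hv module are in place):

```shell
#!/bin/sh
# Check for the KVM device node exposed by the kernel
if [ -e /dev/kvm ]; then
  echo "KVM device present"
else
  echo "KVM device missing"
fi
# With one dedicated Power10 core (SMT8) this should report 8
echo "Hardware threads: $(nproc)"
```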

Choosing the distribution

In my experience the best support for ppc64le experiments usually comes from Debian or Fedora. In this case I installed Fedora 40 and upgraded to the latest levels. Then you have to install all the virtualization packages and QEMU support for other architectures. Following my idea of creating interactive articles, I will use virt-manager to avoid complex QEMU settings. In my environment I installed all the qemu-system-* packages.

To get Windows to detect our virtual SATA disks as usable, you’ll need to set this up in the VM’s disk configuration.
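A sketch of the relevant libvirt disk XML (the image path and filename are assumptions; virt-manager can do the same from the disk’s details view):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/winxp.qcow2'/>
  <target dev='sda' bus='sata'/>
</disk>
```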

Once that is done, you can install the drivers your disks will need:

# dnf install virtio-win-stable

You will also need a Windows XP .iso and its license keys. I recommend placing it in /var/lib/libvirt/images so that it is automatically detected by virt-manager.

Creating the VM (just follow the wizard)

Make sure you select x86 as the architecture (QEMU will take care of the emulation).

 

Just like when we ran AIX on x86, don’t expect it to be very fast. It took about an hour to install, actually about as long as it took on a PC of the era. The things I do to see MS Messenger again! Enjoy the video and stay updated by following us!


Further tests

What about running MS PowerShell for ARM64 on Docker? I can now run “dir” on Power, fantastic! :P
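For reference, the experiment looks something like this (the image is Microsoft’s multi-arch PowerShell container; it assumes qemu-user-static binfmt handlers are registered so the ppc64le host can run arm64 binaries):

```
$ docker run --rm -it --platform linux/arm64 mcr.microsoft.com/powershell
PS /> dir
```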

Conclusion

The work done to support KVM is, for me, the biggest news in recent years because of the endless possibilities it opens up for the Power platform. As far as I have been able to test, everything works, and works very well. Congratulations to all the people who have made it possible.


Understanding high availability (HA) on SUSE Linux

High availability and business continuity are crucial to keep applications and services always operational.
High availability clusters allow critical services to keep running, even if servers or hardware components fail.
SUSE Linux offers a robust set of tools for creating and managing these clusters.
In this article, we explore the current state of clustering in SUSE Linux, with a focus on key technologies such as Pacemaker, Corosync, DRBD and others.
These, with minor differences, are available on both x86 and ppc64le.

Pacemaker: the brain of the cluster

Pacemaker is the engine that manages high availability clusters in SUSE Linux.
Its main function is to manage cluster resources, ensuring that critical services are operational and recover quickly in case of failure. Pacemaker continuously monitors resources (databases, web services, file systems, etc.) and, if it detects a problem, migrates those resources to other nodes in the cluster to keep them up and running.
Pacemaker stands out for its flexibility and ability to manage a wide variety of resources.
From simple services to more complex distributed systems, it is capable of handling most high-availability scenarios that a company may need.

Corosync: the cluster’s nervous system

Corosync is responsible for communication between cluster nodes.
It ensures that all nodes have the same view of the cluster status at all times, which is essential for coordinated decision making.
It also manages quorum, which determines whether there are enough active nodes for the cluster to operate safely.
If quorum is lost, measures can be taken to prevent data loss or even service downtime.
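As a sketch of what this looks like in practice, here is a minimal two-node corosync.conf (cluster name, addresses and the two_node flag are illustrative assumptions):

```
totem {
  version: 2
  cluster_name: demo-cluster
  transport: udpu
  interface {
    ringnumber: 0
    bindnetaddr: 10.0.0.0
  }
}
nodelist {
  node {
    ring0_addr: 10.0.0.1
    nodeid: 1
  }
  node {
    ring0_addr: 10.0.0.2
    nodeid: 2
  }
}
quorum {
  provider: corosync_votequorum
  two_node: 1
}
```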

DRBD: the backbone of the data

DRBD (Distributed Replicated Block Device) is a block-level storage replication solution that replicates data between nodes in real time.
With DRBD, data from one server is replicated to another server almost instantaneously, creating an exact copy.
This is especially useful in scenarios where it is crucial that critical data is always available, even if a node fails.
Combined with Pacemaker, DRBD allows services to continue operating with access to the same data, even if they are on different nodes.
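As an illustration, a minimal DRBD resource definition that Pacemaker could then manage (hostnames, backing disks and IPs are assumptions):

```
# /etc/drbd.d/r0.res -- illustrative values only
resource r0 {
  device    /dev/drbd0;
  disk      /dev/sdb1;
  meta-disk internal;
  on node1 {
    address 10.0.0.1:7789;
  }
  on node2 {
    address 10.0.0.2:7789;
  }
}
```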

Other key technologies in SUSE Linux clusters

In addition to Pacemaker, Corosync and DRBD, there are other essential technologies for building robust clusters on SUSE Linux:

  • SBD (STONITH Block Device, also known as Storage-Based Death): SBD is a fencing tool that isolates a misbehaving node so that it cannot cause problems in the cluster.
    This is achieved through a shared storage device that the nodes use to communicate their state.
  • OCF (Open Cluster Framework): OCF scripts are the basis of the resources managed by Pacemaker.
    They define how to start, stop and check the status of a resource, providing the flexibility needed to integrate a wide range of services into the cluster.
  • Csync2: A tool for synchronizing files between nodes in a cluster.
    It ensures that configuration files and other critical data are always up to date on all nodes.

Current status and future trends

Clusters in SUSE Linux have matured and are adapting to new business demands.
With the growing adoption of containerized environments, with parts running in different clouds, clusters in SUSE Linux are evolving to integrate better with them.
This includes improved support for container orchestration and for distributed applications that need high availability beyond replicating two disks with DRBD and keeping a virtual IP alive :) Still, today, the combination of Pacemaker, Corosync, DRBD and other tools provides a solid foundation for high availability clusters that can scale and adapt to the needs of SAP HANA and other solutions that require high, if not total, availability. If you need help, at SIXE we can help you.

Cheatsheet for creating and managing clusters with Pacemaker on SUSE Linux

Here is a modest cheatsheet to help you create and manage clusters with Pacemaker on SUSE Linux.
Sharing is caring!

Package installation
  • Install Pacemaker, Corosync and the crm shell: zypper install -y pacemaker corosync crmsh

Basic configuration
  • Configure Corosync: edit /etc/corosync/corosync.conf to define the transport, interfaces and network.
  • Start the services: systemctl start corosync && systemctl start pacemaker
  • Enable the services at boot: systemctl enable corosync && systemctl enable pacemaker

Cluster management
  • View cluster status: crm status
  • List the nodes: crm_node -l
  • Add a new node (run on the node that joins): crm cluster join -c <existing_node>
  • Remove a node: crm node delete <node_name>
  • Monitor the cluster in real time: crm_mon

Resource configuration
  • Create a resource: crm configure primitive <resource_name> <agent_type> params <parameters>
  • Delete a resource: crm configure delete <resource_name>
  • Edit a resource: crm configure edit <resource_name>
  • Show the complete cluster configuration: crm configure show

Groups, sets and ordering
  • Create a resource group: crm configure group <group_name> <resource1> <resource2> ...
  • Create a colocation constraint (keep resources together): crm configure colocation <constraint_name> inf: <resource1> <resource2>
  • Create an ordering constraint (start order): crm configure order <order_name> Mandatory: <resource1> <resource2>
  • Create a location constraint (pin a resource to a node): crm configure location <location_name> <resource> <score>: <node>

Failover and recovery
  • Force migration of a resource: crm resource migrate <resource_name> <node_name>
  • Clean up the status of a resource: crm resource cleanup <resource_name>
  • Temporarily unmanage a resource: crm resource unmanage <resource_name>
  • Manage it again afterwards: crm resource manage <resource_name>

Advanced configuration
  • Set the quorum policy: crm configure property no-quorum-policy=<freeze|stop|ignore>
  • Configure fencing with SBD: crm configure primitive stonith-sbd stonith:external/sbd params pcmk_delay_max=<time>
  • Set operation timeouts on a resource: crm configure primitive <resource_name> <agent_type> op start timeout=<time> interval=<interval>

Validation and testing
  • Validate the cluster configuration: crm_verify --live-check
  • Simulate the cluster’s reaction: crm_simulate --run

Policy management
  • Set default resource stickiness: crm configure rsc_defaults resource-stickiness=<value>
  • Set default resource priority: crm configure rsc_defaults priority=<value>

Stopping and starting the cluster
  • Stop the entire cluster: crm cluster stop --all
  • Start the entire cluster: crm cluster start --all
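To show how the pieces of the cheatsheet fit together, here is a minimal crm configuration sketch (resource names, the IP address and the netmask are assumptions) that could be loaded with crm configure load update ha-demo.cfg:

```
# ha-demo.cfg -- illustrative only; adjust IP, netmask and agents to your environment
primitive vip IPaddr2 \
    params ip=192.168.1.100 cidr_netmask=24 \
    op monitor interval=30s
primitive web apache \
    op monitor interval=60s
# Keep the virtual IP and the web server together, started in this order
group web-group vip web
# Mild stickiness so resources do not bounce back after a failover
rsc_defaults resource-stickiness=100
```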

 


SIXE: your trusted IBM partner

In this fast-changing and complex technological era, choosing the right suppliers is crucial.
When it comes to solutions like IBM’s, the real difference is not the size of the company, but its technical capacity, human capital, commitment and level of specialization.
SIXE Ingeniería is your ideal IBM partner, and here we explain why.

Technical expertise: Who do you want to design and manage your project?

At SIXE, we are not a company that resells any product or service chasing a margin, passing the technical challenge on to someone else with a quick “bye-bye”.
We specialize in key areas such as cybersecurity and critical IT infrastructure.
Unlike IBM’s large partners, who usually outsource most of their projects, at SIXE every task is executed by our in-house experts. Do you prefer to rely on a company that subcontracts or on a team that is directly involved in every technical detail? Our engineering company approach allows us to design solutions tailored to the specific needs of each client.
We do not offer generic configurations or deployments, but solutions tailored to exactly what your organization (and your team) needs.
We have experts in IBM Power infrastructure, storage, operating systems (AIX, Red Hat, IBM i, Debian, zOS), Informix and DB2 databases, application servers, etc.

Personalized engagement: what care do you expect to receive?

In large consulting firms, projects often become just another number on the client list.
Do you want to be just one more or do you prefer exclusive treatment?
At SIXE, we offer a personalized service, ensuring that each project receives the attention it needs to go well and that you trust us for many years to come.
Our agile structure allows us to adapt quickly and work side by side with the systems managers, ensuring that the proposed solutions meet your expectations and needs.

Innovation and flexibility

Large companies are often trapped in bureaucratic processes that prevent them from innovating or reacting quickly to market changes.
How many times have you come across solutions that are outdated or slow to implement?
At SIXE, we can adapt quickly and offer solutions that not only follow the latest trends, but anticipate them.
This is essential for projects that require quick and effective responses in a changing environment.
Also, when something involves risks, no matter how trendy it is or how spectacular it sounds in a PowerPoint, we will raise our voice and let you know.

Transparency and control

When projects are outsourced, transparency and control are diluted.
At SIXE, you have the security of knowing exactly who is working on your project and how resources are being managed.
Large consulting firms, because of their size, tend to lose this transparency, delegating tasks to third parties without the client having any real control over the process.
Would you rather risk losing visibility on your project or have a partner that keeps you informed and in control of each milestone?

Long-term relationships: are you looking for an additional supplier or a strategic partner?

We are not looking to close short-term contracts; our goal is to build long-lasting relationships based on ethics and trust.
This means that, once the technology is implemented, we remain committed to the project, offering technical support, training and consulting whenever necessary.
Large companies, on the other hand, tend to focus on the initial implementation, leaving everything else aside.
Outsourcing everything, of course, just like Homer Simpson would do.

Return on investment: where does your money go?

In many large consulting firms, much of the budget goes to cover overhead, with little direct impact on project quality.
They do not have good engineers on staff because their managers think that outsourcing technical talent reduces risks and improves margins.
At SIXE, every euro invested translates into real value; we do not have a pool of managers and executives spending hours in meetings and meals with clients.
What we do have is an internationally recognized engineering team, committed to the company and its clients for more than 15 years.
We are also part of a network of experts recognized internationally by IBM.

The difference is in the execution

Although it may be said otherwise, the real difference in a technology project is not in the size of the company, but in how and by whom each project is executed.
At SIXE, we combine technical expertise, commitment and transparency, offering a precise and results-oriented execution.
In a market saturated with options, why not choose a partner that ensures quality, innovation and a relationship based on collaboration?
Choosing SIXE as your IBM partner means opting for an approach based on technical excellence and total commitment to results.
Don’t leave the success of your project in the hands of chance, we are a partner who will care as much as you do (for our sake) about the final result and the relationship between our companies in the medium and long term.

Not only IBM

Although 50% of our business is related to IBM training, consulting and infrastructure projects, we are also a strategic partner of Canonical (Ubuntu), Red Hat and SUSE.

What about your competitors?

The truth is that we do not have any, because there is no other company of our size with our level of specialization in the solutions we offer.
There are other small and medium-sized companies with incredible human capital that complement the technologies we work on and with which we always collaborate, but we never compete.
When we don’t know how to do something, we always ask for help and let our clients know. It is part of our DNA.


Learn IBM i and RPG with SIXE

IBM RPG training: SIXE ❤️ IBM i and RPG

SIXE is a reference in official IBM training.
For years, we have offered specialized courses in IBM i and RPG, key technologies for many CRMs and ERPs used by large companies around the world.
Among our most outstanding courses is the advanced programming workshop in RPG IV.
If you are new to RPG, you can start learning IBM i and RPG with SIXE in our RPG IV basics workshop.
These courses will allow you to cover from the basics to the most advanced techniques of this robust programming language.

Personalization and teaching quality

One of SIXE’s biggest differentiators is our customized approach.
Each course can be tailored to your team’s specific needs, ensuring practical and relevant training.
Did you know that many courses are taught by IBM Champions?
These internationally recognized experts ensure that students receive the highest quality, most up-to-date training.
Plus, we are an integrated company led by IBM instructors.

History and relevance of IBM i today

IBM i, whose lineage dates back to 1988, is the evolution of the AS/400 system, designed to be robust, scalable and secure.
Over more than three decades, it has maintained its mission to provide a stable and reliable platform for enterprise data management.
The latest release, IBM i 7.5, includes key enhancements in security, performance and cloud integration, reinforcing its relevance in today’s IT environment.

Current RPG use cases: Can I have the receipt?

RPG (Report Program Generator) continues to be fundamental to many organizations using IBM i, especially in industries such as banking, manufacturing and retail.
RPG has been updated with modern programming techniques, making it as relevant today as it was in its early days.
For example, when you pay at a supermarket, the receipt and the associated processes (inventory, ordering, invoicing) are managed by an RPG program on an IBM Power system running IBM i.

Don’t call me AS/400

An interesting anecdote about IBM i is that its predecessor, the AS/400, was introduced in 1988 as a system “as easy to use as a refrigerator”.
At a time when computer systems were complicated, this promise made the AS/400 stand out as a revolutionary system in terms of accessibility and simplicity.
Although the name has changed, if you need an AS/400 course, we can arrange that too.

Why choose SIXE?

With over 15 years of experience, SIXE offers not only training, but a comprehensive educational experience that is tailored to each client’s needs.
Our focus on quality and customization, coupled with the expertise of highly qualified instructors, makes SIXE the best choice for those seeking effective and personalized IBM official training.
To explore more about these courses and to register, please visit the following links on our website:



Discover Ubuntu LXD: The alternative to Docker or Podman

Do you still use only Docker or Podman? Find out why you should try Ubuntu LXD

INTRODUCTION

Ubuntu LXD is Ubuntu’s container manager, based on LXC (Linux Containers), and despite the rise of technologies such as Docker in the Kubernetes ecosystem, it remains highly relevant. This article explores the reasons behind LXD’s persistence, its distinctive use cases and the products that use it in the real world. Ready to find out why you should pay attention?

WHAT IS UBUNTU LXD?

LXD is a container management tool that builds on LXC, offering a more complete containerization experience geared towards lightweight virtual machines. While Docker and the other containers based on the OCI standard are ephemeral by design, LXD focuses on full system containers, allowing multiple processes and services to run in a virtual-machine-like fashion. You can even deploy a complete Kubernetes environment, containers and all, inside an LXD container. In that respect it looks much more like its close relatives: BSD jails, Solaris zones and AIX WPARs. Still think Docker or Podman are your only options?


The evolution of containers

Remember when Docker was the one containerization tool everyone loved? Since its release in 2013, Docker revolutionized application development and deployment by making containers accessible and easy to use. Docker allowed developers to package their applications together with all their dependencies, ensuring that they would work consistently in any environment. This innovation led to a massive adoption of containers in the industry, with Docker and Podman becoming de facto standards, if not directly their orchestrators such as Kubernetes. But is Docker the only star of the show?

While Docker was getting all the attention, LXD was quietly working to offer something different: full OS containers. As organizations adopt containers for more use cases, the need for more sophisticated and efficient management has arisen. This is where LXD comes in. Can you imagine having the flexibility of virtual machines but with the efficiency of containers, without having to go crazy and totally change use cases?

Comparison between Ubuntu LXD, Podman and Docker

Docker and Podman are designed to package and deploy individual applications, while Ubuntu LXD offers a more complete experience. Their architecture focuses on the containerization of microservices, cloud applications and continuous deployment.

In addition, they are tightly integrated with Kubernetes, the most popular container orchestration tool on the market. LXD, on the other hand, allows you to run a complete system inside a container. This capability makes it ideal for use cases where a complete environment is required, similar to a virtual machine but with the efficiency of containers. See the difference?

Ubuntu LXD Use Cases

LXD excels in several specific scenarios. In Infrastructure as a Service (IaaS), for example, LXD enables the creation and management of complete operating-system containers. This is ideal for cloud service providers who need to offer complete environments without the overhead of traditional virtual machines. Have you ever had trouble replicating a development environment identical to production? With LXD, developers can create isolated, replicable development environments, minimizing configuration and dependency issues.
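A sketch of what that workflow can look like with the lxc client (the image alias and container names are assumptions):

```
$ lxc launch ubuntu:24.04 dev1     # full system container, boots an init system
$ lxc exec dev1 -- bash            # shell into it as if it were a VM
$ lxc snapshot dev1 clean          # checkpoint before risky changes
$ lxc restore dev1 clean           # roll back in seconds
$ lxc delete --force dev1          # throw it away when done
```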


In the field of network simulation and testing, LXD allows you to simulate complex networks and test services at the network level. This capability is crucial for replicating entire network infrastructures within a single host. For system administration and DevOps tasks, LXD offers flexibility beyond application containerization: it allows the creation of complete environments that can be managed, updated and monitored as if they were physical machines, but with the efficiency of containers. Still think Docker is your only alternative?

Solutions using Ubuntu LXD

Canonical, the company behind Ubuntu and a SIXE partner, has developed several solutions based on Ubuntu LXD to offer exceptional performance and flexibility. Among these solutions is MAAS (Metal as a Service), which uses LXD to provide highly configurable development and test environments. It allows users to deploy complete operating systems in containers, facilitating the management of large and complex infrastructures.

canonical's microcloud github statistics

MicroCloud benefits from LXD by integrating it to offer full operating-system containers as an additional (or alternative) option to traditional virtual machines, improving flexibility and efficiency in resource management. In addition, Travis CI, a continuous integration platform, uses LXD to run its test environments, delivering fast, reproducible test environments and improving developer efficiency. Surprised? There is more.

For those looking to implement these solutions in your environment, SIXE Engineering is the Canonical and Ubuntu reference partner you are looking for. With extensive experience implementing LXD and other virtualization technologies, SIXE can help you maximize the potential of your technology infrastructure. Whether you need support for MAAS, OpenStack or any other LXD-based solution, SIXE has the knowledge and experience to guide you every step of the way. When the paths fork, we can recommend, advise and accompany you along the one that suits you best, without compromises or lock-in to any manufacturer, because with Canonical we do not offer closed products but open technologies, made with and for the community, taking the philosophy of free software to its ultimate consequences.

Conclusion

Despite the dominance of lightweight containerization technologies such as Docker and Podman in Kubernetes, LXD remains relevant in many use cases because of its ability to provide full OS containers. Its use in infrastructure as a service, development environments, network simulation and system administration, as well as its adoption in products such as MAAS, OpenStack and Travis CI, are proof of this.

In our view, the benefits of LXD lie in its unique ability to combine the efficiency of containers with the simplicity of virtual machines, offering a hybrid solution that remains essential for multiple applications. Still think Docker is the only option? Surely not. We hope you enjoyed this article, and remember that for any implementation of these technologies you can count on SIXE’s expert support. We will always be at your side with the best free solutions.


Exploring MicroStack: A Lightweight Private Cloud Solution

LEVERAGING MICROSTACK AS A LIGHTWEIGHT PRIVATE CLOUD SOLUTION

As organizations continue to embrace cloud computing, choosing the right cloud infrastructure becomes a critical decision. MicroStack, a lightweight, easy-to-install, open-source tool based on the OpenStack platform, has emerged as a compelling choice for many businesses. This post explores the advantages of using MicroStack, highlights the growing market share of the OpenStack platform, and discusses the rising prices of public cloud competitors, along with a hands-on look at the intuitive OpenStack dashboard to explore its capabilities and ease of use.

 


WHY CHOOSE MICROSTACK?

MicroStack provides the flexibility of open-source software over traditional public cloud deployments, as well as a more lightweight, easy-to-deploy version of the OpenStack platform. This flavor of OpenStack is well suited both to startups and to small cloud deployments within larger organizations.

Open source flexibility🌐

MicroStack provides the flexibility of open-source software without the burden of licensing fees or vendor lock-in. This allows organizations to implement cloud infrastructure at a lower cost and with the freedom to modify and extend the platform according to their specific needs. The community-driven development model ensures continuous improvements and innovations, fostering a robust ecosystem around MicroStack.


Customizability🛠️

Additionally, with MicroStack, organizations have full access to the source code and can tailor the platform to fit their unique requirements. This includes integrating a wide range of plug-ins and extensions, enabling businesses to build a cloud environment that aligns precisely with their operational goals. This flexibility is crucial for adapting to evolving business demands and optimizing resource utilization.


Simplified deployment 🚀

MicroStack is designed for ease of deployment, offering a streamlined installation process that minimizes complexity and setup time: you can bootstrap a cloud deployment onto a compute node in fewer than six commands, with an average deployment time of 30 minutes. This makes it particularly suitable for organizations looking to quickly establish or expand their cloud footprint without extensive technical expertise. The straightforward deployment also lowers the initial barriers to adoption, enabling faster time-to-value for cloud initiatives.
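As a sketch, a single-node bootstrap can look like this (the snap channel and flags may vary between MicroStack releases):

```
$ sudo snap install microstack --beta
$ sudo microstack init --auto --control
$ microstack launch cirros --name test   # first test instance
```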


Vendor neutrality🛡️

Unlike proprietary cloud solutions that lock users into specific vendors, MicroStack supports a diverse range of hardware and software configurations. Canonical’s firm belief in open source and vendor neutrality reduces dependency risks and allows organizations to select the best components for their infrastructure. It also aligns with industry trends towards open standards and interoperability, enhancing long-term scalability and operational efficiency.


Lightweight footprint🌱

Unlike full-scale OpenStack deployments that require substantial hardware resources, MicroStack operates efficiently on smaller-scale environments. This makes it an ideal choice for edge computing scenarios or organizations with limited infrastructure budgets. By optimizing resource usage and minimizing overhead, MicroStack enhances operational efficiency while reducing total cost of ownership.

TECHNICAL AND PERFORMANCE BENEFITS

Furthermore, MicroStack provides robust technical capabilities that support diverse workload requirements such as:

Scalability📈

MicroStack is designed to scale horizontally, accommodating growing workloads and evolving business needs. Whether deploying a few nodes or scaling up to thousands, MicroStack ensures seamless expansion without compromising performance or stability. This scalability is essential for organizations experiencing rapid growth or fluctuating demand patterns in their cloud operations.


Advanced networking🛰️

The networking capabilities of MicroStack, powered by components like Neutron, offer advanced features such as Software-Defined Networking (SDN) and Network Functions Virtualization (NFV). These capabilities enable organizations to create complex network topologies, optimize traffic management, and enhance overall network performance. MicroStack’s focus on modern networking paradigms supports emerging technologies like containers and edge computing, aligning with industry trends towards agile and adaptive IT infrastructures.


Efficient storage solutions📦

MicroStack supports a variety of storage backends through components like Cinder (block storage) and Swift (object storage). This versatility allows organizations to implement highly performant and scalable storage solutions tailored to specific application requirements.


Cost efficiency💰

MicroStack’s efficient resource management tools optimize resource utilization, minimize waste, and enhance operational efficiency. By maximizing the use of existing infrastructure resources and reducing the need for costly proprietary solutions, MicroStack enables organizations to allocate resources more strategically and focus on innovation rather than infrastructure management.


COST ADVANTAGES

MicroStack offers several cost advantages over traditional cloud solutions:

  • Lower total cost of ownership (TCO)

By eliminating licensing fees and leveraging commodity hardware, MicroStack significantly reduces both upfront CapEx and ongoing OpEx as the organization and the cloud deployment scale.

Organizations can achieve substantial cost savings while maintaining the flexibility and scalability of an open-source cloud platform. This cost-effectiveness makes the OpenStack platform accessible to organizations of all sizes, from startups to large enterprises, that seek to optimize their IT investments and maximize their return on investment.



MARKET TRENDS AND PRICE INCREASES IN PUBLIC CLOUD SERVICES

The Rise of OpenStack: A Growing Market

The OpenStack market is projected to grow significantly, from $5.46 billion in 2024 to a staggering $29.5 billion in 2031. This growth underscores the increasing adoption and recognition of OpenStack’s benefits among organizations worldwide. Its flexibility, cost-effectiveness, and robust community support make it a preferred choice for businesses aiming to deploy scalable and efficient cloud infrastructures.

Cost Challenges in Public Cloud Services

In contrast, the cost of public cloud services has been on the rise. While these platforms offer extensive features and global reach, their escalating prices present challenges for organizations seeking to manage cloud costs effectively. MicroStack offers a viable alternative by providing cost-effective cloud solutions without compromising performance or scalability.

The Shift from Serverless to Monolithic Deployments

Paradoxically, even public cloud giants like Amazon are moving away from running some of their own services as microservices/serverless on AWS, opting instead for monolithic deployments, a change that reportedly decreased their OPEX by 90%. If this type of architecture is beneficial for you, it can be integrated quickly and seamlessly into your environment with MicroStack, fully leveraging the OpenStack platform in a few simple steps: all your pertinent architecture sits under a single private network, with simple and intuitive management of the network topology should you need to scale up later. For smaller enterprises, MicroStack simplifies the migration or deployment of such an infrastructure even further.

OpenStack’s Adoption Among Leading Enterprises

For instance, over 50% of the Fortune 100 companies have embraced OpenStack, highlighting the trust and reliance placed on these technologies to support mission-critical operations and strategic initiatives.
Businesses like Comcast, Allstate, Bosch, and Capital One are leveraging OpenStack to drive innovation and achieve competitive advantages.

OpenStack’s Global Impact

Furthermore, in regions like APAC, organizations such as UnionPay, China Mobile, and China Railway are leveraging OpenStack to scale and transform their IT operations, further driving the adoption and growth of open-source cloud solutions globally.


OUR EXPERIENCE WITH MICROSTACK AT SIXE

Overall, our experience with MicroStack at SIXE from an operational perspective can be described as the pinnacle of practicality and efficiency. Installing, working with, and deploying MicroStack was straightforward and intuitive, allowing us to fully bootstrap a private cloud in under 30 minutes.

To summarize, navigating the complexities of cloud infrastructure management is a crucial aspect of modern IT operations. In this final section, we delve into our user experience with the MicroStack dashboard.

The MicroStack dashboard exemplifies our partner Canonical’s commitment to ease of use and accessibility. Leveraging the dashboard, users can easily deploy and manage virtual machines, configure networking, and monitor resource utilization, all from a centralized hub, thus flattening the learning curve required to deploy and operate critical cloud-based infrastructures.

✨How to launch and configure a virtual instance?

It only takes a few clicks to launch and configure a virtual instance via the dashboard.

We launch an instance from the button found at the top right corner; a pop-up menu appears where we can define the server configuration.

We provide the name and project of our instance.

Next, we choose the image for our VM. We can use a standard OS ISO image, or import a custom-built snapshot from a previously set-up VM for a quick yet customized deployment tailored to specific enterprise needs.

Next, we select the flavor of the instance. Flavors are OpenStack’s way of defining virtual hardware specifications; you may use one of the presets or create your own to suit specific infrastructure and application needs.

We will be using the medium flavor specification; OpenStack even preemptively warns us of the hardware constraints that every snapshot or image is subject to.

Assuming your network is already configured, the final (and optional) step is to add a security group so we can access the instance via SSH and operate within it.

Now our customized instance is set up and running! :)

Under the actions menu found at the right, we can associate a floating IP in order to SSH directly into the instance from within our internal network.

Now we can use that IP to access the instance directly via SSH!
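For repeatable deployments, the same workflow the dashboard walks us through can be scripted with the standard OpenStack CLI. A minimal sketch; the image, flavor, network, key-pair names and the floating IP below are hypothetical placeholders for your own resources:

```shell
# 1. Pick an image and a flavor (the same choices as in the dashboard)
openstack image list
openstack flavor list

# 2. Allow SSH in the default security group, then launch the instance
openstack security group rule create --protocol tcp --dst-port 22 default
openstack server create --image ubuntu-22.04 --flavor m1.medium \
  --network private --key-name mykey --security-group default demo-vm

# 3. Allocate a floating IP from the external pool and attach it
openstack floating ip create external
openstack server add floating ip demo-vm 10.20.20.50

# 4. SSH into the instance using the associated floating IP
ssh ubuntu@10.20.20.50
```

Everything the pop-up menu offers (name, project, image, flavor, network, security group) maps onto a flag of `openstack server create`, which makes this easy to drop into automation later.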

New IBM Power systems automation course with Ansible!

We are pleased to announce the launch of the official IBM and SIXE course on automating IBM Power systems with Ansible. This training program is designed to provide advanced, hands-on skills in the automation of various IBM Power Systems platforms, including AIX, Linux and IBM i, as well as VIOS and PowerHA servers.

🏆 Things you will learn during the course:

  • AIX Automation: Master the automation of repetitive and complex tasks in AIX.
  • Linux Automation in Power: Learn how to manage and automate operations on Linux servers in Power Systems environments and how to deploy complex environments such as SAP HANA.
  • IBM i Automation: Discover how to simplify IBM i systems administration using Ansible.
  • VIOS Management: Improve Virtual I/O Server (VIOS) efficiency with advanced automation techniques.
  • PowerHA Implementation: Learn best practices for automating high availability in Power Systems using PowerHA.
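As a small taste of what the course covers, here is a sketch of driving AIX hosts with Ansible from the command line, using the IBM-maintained ibm.power_aix collection; the inventory and playbook names are hypothetical placeholders:

```shell
# Install the IBM collection for AIX automation from Ansible Galaxy
ansible-galaxy collection install ibm.power_aix

# Ad-hoc: gather facts from all hosts in the "aix" inventory group
ansible aix -i inventory.ini -m setup

# Dry-run a playbook against the same hosts before applying changes
ansible-playbook -i inventory.ini update-aix.yml --check
```

Equivalent collections exist for IBM i and for Linux on Power, which is exactly the multi-platform ground the course walks through.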

🎓 Who is it intended for?

This course is intended for system administrators, IT engineers, solution architects and any professional interested in improving their automation skills in IBM Power Systems environments. No previous Ansible experience is required, although basic knowledge of system administration is beneficial. If you wish, you can take our Ansible and AWX course beforehand.

💼 Course benefits:

  • Official certification: Obtain an internationally recognized certification by IBM and SIXE.
  • Practical skills: Participate in practical exercises and real-world projects that will prepare you for real-world challenges.
  • Exclusive materials: Access exclusive and updated training resources.

📍 Modality:

The course will be offered in a hybrid format, with both classroom and online options to suit your needs.

📝 Registration:

Don’t miss this opportunity to advance your career and transform the way you work with IBM Power Systems! Register today and secure your place in the next edition.

Join us and take your critical environment automation skills to the next level. We look forward to seeing you at the official IBM and SIXE course on automating IBM Power systems with Ansible!

Can we run (nested) KVM VMs on the top of IBM PowerVM Linux LPARs?

Updated! No longer a rumor but officially supported as of July 19, 2024 (see the announcement)

A brief history of nested virtualization on IBM Hardware

Nested virtualization enables a virtual machine (VM) to host other VMs, creating a layered virtualization environment. This capability is particularly beneficial in enterprise scenarios where flexibility, scalability, and efficient resource management (saving CPU also saves on $$$ licenses) are critical.

While it can be used for testing purposes with KVM on x86 or VMware, the performance is often suboptimal due to multiple translations and modifications of hardware instructions before they reach the CPU or I/O subsystem. This issue is not unique to these platforms and can affect other virtualization technologies as well.

On platforms like Z, although the performance impact of nested virtualization exists, improvements and optimisations in the hypervisor can mitigate these effects, making it 100% viable for enterprise use.

Virtualization layers on IBM Mainframe

Before delving into nested KVM on PowerVM, it’s essential to understand similar technologies. If the mainframe is the grandfather of current server technology, then logical partitioning (LPARs) and virtualization technologies (zVM) are the grandmothers of hypervisor solutions.

Figure: z/VM, LinuxONE KVM and PowerVM hypervisor layers

In this picture (taken from this great article) you can see up to four layers:

Level 1 Virtualization: Shows an LPAR running Linux natively

Level 2 Virtualization: Shows VMs running on z/VM or KVM Hypervisor

Level 3 Virtualization: Shows nesting of z/VM Virtual Machines

Level 4 Virtualization: Shows Linux containers that can either run as stand-alone containers or can be orchestrated with kubernetes

Now have a look at this old (2010) image of the IBM Power platform architecture. Can you see anything similar? :) Let’s move on!

Figure: PowerVM virtualization architecture (2010)

Deploying VMs on the top of a PowerVM Linux LPAR

If we have LPARs on Power where we can run AIX, Linux, and IBM i, and in Linux, we can install KVM, can we run VMs inside an LPAR?

Not quite; it will fail at some point. Why? Because KVM is not zVM (for now), and we need some tweaks in the Linux kernel code to support nested virtualization not just with IBM Power9 or Power10 processors, but also with the Power memory subsystem and I/O.

By examining the kernel.org mailing lists, we can see promising developments. Successfully running multiple VMs with KVM on a PowerVM LPAR means porting some fantastic mainframe virtualization technology to IBM Power, allowing us to run VMs and Kubernetes/OpenShift Virtualization on ppc64le for production purposes. This would make a significant difference if the performance penalty is minimal.

CPU virtualization on Power and Mainframe systems simply allocates processor time without mapping a full thread as KVM or VMware do. Therefore, it is technically possible to add a hypervisor on top without significantly affecting performance, as IBM does with LinuxONE.
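In practice, a quick way to see whether a Linux LPAR is ready to act as a KVM host is to check for the kvm_hv module and the /dev/kvm device before launching a guest with QEMU. A minimal sketch, assuming a ppc64le LPAR with KVM support enabled (the guest image path is a hypothetical placeholder):

```shell
# Confirm we are on a PowerVM ppc64le partition
grep -E 'platform|model' /proc/cpuinfo

# The hardware-virtualization KVM module for Power (Book3S HV)
lsmod | grep kvm_hv

# /dev/kvm is only exposed when KVM acceleration is actually usable
ls -l /dev/kvm

# Boot a minimal pseries guest with KVM acceleration
qemu-system-ppc64 -machine pseries,accel=kvm -cpu host \
  -smp 2 -m 4096 -nographic \
  -drive file=guest.qcow2,format=qcow2
```

If /dev/kvm is missing, the LPAR is most likely not configured as a KVM host (or the firmware level does not support it yet), and QEMU will fall back to slow TCG emulation if `accel=kvm` is dropped.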

Latest news for KVM on IBM PowerVM LPARs  (May 2024)

At Sixe, we have been closely monitoring developments in ppc64 and ppc64le for years. Recently, we’ve found some intriguing messages on the Linux kernel mailing lists. These messages provide insights into the immediate roadmap for this highly anticipated and demanded technology.

1) Add a VM capability to enable nested virtualization
Summary: This message discusses the implementation of nested virtualization capabilities in KVM for PowerPC, including module configurations and support on POWER9 CPUs.

2) Nested PAPR API (KVM on PowerVM)
Summary: It details the extension of register state for the nested PAPR API, the management of multiple VCPUs, and the implementation of specific hypercalls.

3) KVM: PPC: Book3S HV: Nested HV virtualization
Summary: A series of patches improving nested virtualization in KVM for PowerPC, including the handling of hypercalls, page faults, and mapping tables in debugfs.

For more detailed information, you can consult the corresponding patch threads on the kernel.org mailing lists.

Will we be able to install Windows on Power Systems (for fun)?

CAKE – Perhaps, Perhaps, Perhaps (Official Audio)

Stay tuned!!

SIXE’s alliance with Canonical

SIXE announces a Strategic Alliance with Canonical / Ubuntu as part of its commitment to free software

Madrid, Spain – May 8, 2024 – SIXE, a leader in IT infrastructure solutions for critical environments with more than 15 years of experience, has announced its strategic partnership with Canonical, a world leader in open source software development and the company behind Ubuntu, the world’s most popular Linux distribution.

SIXE brings extensive experience in implementing and supporting solutions for large customers in Europe and the Americas. Throughout its history, the company has worked with leading firms in sectors such as banking, telecommunications, energy, public administration and manufacturing. “We are very excited to join Canonical as a strategic partner. We share a passion for open source, an open DNA in code and also in business, putting our customers at the center. This partnership will allow us to offer even more innovative and efficient solutions to our customers in Europe and America.”

Proven experience and shared values

SIXE has a highly qualified team, certified in the main free and open source software technologies such as Ubuntu, Red Hat and SUSE. In addition, the company has agreements with other technology leaders, enabling it to offer comprehensive solutions that include hardware, software, implementation services and advanced technical support.

Commitment to freedom and transparency

Free software is based on the freedom of users to run, modify and distribute the software without restrictions. Canonical stands out for its commitment to licenses such as GPLv3, guaranteeing these fundamental freedoms. In addition, its commitment to accessibility and transparency is reflected in the open development of Ubuntu, whose source code is publicly available for review and modification, fostering collaboration and trust in the developer community.

Long-term support

Canonical differentiates itself by its long-term support with Ubuntu LTS (Long Term Support), which offers security and maintenance updates for 5 years, extendable to 10 years for small environments. This provides stability and predictability for business users, which SIXE appreciates. Canonical’s commitment to customer freedom to decide when and how to upgrade without losing access to security patches is one of the many reasons we chose to partner.

Solutions for companies and organizations of all sizes and industries

Canonical has developed novel ways to implement not only Linux environments with Ubuntu, but complete public and private cloud solutions with OpenStack, container platforms with Kubernetes and virtualization environments with KVM, that work in both small and large data centers, across desktops and edge computing environments.

A strategic alliance for an open future

The strategic alliance between SIXE and Canonical will allow us to offer an even wider range of free and open source software solutions, along with enhanced support. With this collaboration, the companies will help their customers leverage the potential of open source software to transform their businesses, bringing value and efficiency.
