IBM Power webinars at Dame Power. IBM i, AIX, Linux and Kubernetes courses

IBM Power 2025 Webinars: Learn for free with experts

Can you imagine finding solutions for Linux, AIX, IBM i and more, all in one place?🙀 Well, now it is possible thanks to Dame Power, the Spanish-speaking IBM Power community.

At SIXE, we’re excited to be part of the exclusive Dame Power webinars: a series of free sessions designed to help you dive deeper into the IBM Power ecosystem.

If you work with IBM i, AIX, Linux, PowerVM or Kubernetes, this is your opportunity to learn directly from experts and apply the knowledge in your projects. Discover the most innovative trends from experts, one-on-one.

📅 IBM Power 2025 webinars: free sessions on AIX, IBM i, Linux, Kubernetes and more

Throughout the year, Dame Power will offer a series of webinars focused on key topics for IBM Power professionals:

Linux on Power: Truths, myths and tips to maximize performance.
AIX 7.3: The evolution of modern UNIX and its impact on the enterprise.
KVM in PowerVM: Exploring new possibilities in virtualization.
Kubernetes on Power: Efficient container deployment and management.
IBM Power Security: Beyond marketing, real strategies to protect your systems.

Why join these webinars?

By attending these sessions, you will be able to:

✔️ Get practical troubleshooting tips for IBM i, AIX, Linux and more.
✔️ Discover trends in security, cloud, AI and edge computing.
✔️ Learn from IBM Champions working with Power on real-world cases.
✔️ Follow step-by-step advanced configurations and server optimization.

How to register for Dame Power webinars?

It’s easy! All you have to do is:

1️⃣ Click here to subscribe to the Dame Power Substack.
2️⃣ Check the welcome email, where you will find the registration form.
3️⃣ Once you fill it out, you will receive the date, time and link to access the webinar.
4️⃣ Join, ask questions and boost your knowledge.

🎁 Additional benefits for attendees

If you register for these webinars, you will also gain access to:

🎓 Exclusive discounts on SIXE courses.
📄 Premium content: Offline access to webinars.
🤝 Community: Be part of the largest group of IBM Power experts in Spanish.

Get ready to learn #FullOfPower.

These webinars are more than just lectures: they are a real opportunity to improve your skills, connect with experts and apply new knowledge in your day-to-day work.

📢 Share this event so that other IT teams can benefit from this knowledge.

ML at SIXE

How to implement an ML architecture without failing in the attempt

📌 Are you interested in automation, AI and the like? You are in the right place. At SIXE we are going to tell you how to set up an ML architecture while avoiding the most common mistakes.

Machine Learning (ML) is no longer the future, it is the present. Companies from all sectors are betting on artificial intelligence to improve processes, automate tasks and make smarter decisions.

But here comes a reality check that you may not want to hear.

Most ML projects fail

🔴 80% of ML models never make it to production.
🔴 Only 6% of companies are investing in training their teams in AI.
🔴 Many infrastructures are not ready to scale ML projects.

And therein lies the problem. It’s not enough to have powerful AI models if the infrastructure they run on is a shambles. If your architecture is not scalable, secure and efficient, your ML project is doomed to failure.

Here’s how to avoid these mistakes and design a Machine Learning infrastructure that really works.


Stop reinventing the wheel: use what you already have

One of the most common mistakes is to think that you need a completely new infrastructure to implement ML. False.

Many companies already have underutilized resources that they can leverage for Machine Learning:

GPUs with spare capacity (often only used for graphics tasks).
Underutilized servers that can be assigned to ML workloads.
Access to public clouds that could be better optimized.

📌 Exclusive advice from SIXE: Big vendors will tell you that you need to buy and buy. Before spending on more hardware or hiring more people, analyze what you can optimize within what you already have. If you don’t know how, we can do it for you. We perform audits to make your infrastructure greener and get the most out of your resources. Spend less, produce more.


GPUs: Are you taking advantage of them?

Here’s a bombshell: More than 50% of GPUs in enterprises are underutilized.

Yes, they bought powerful hardware, but they are not using it efficiently. Why?

❌ They do not have GPU management tools.
❌ GPUs are assigned to projects that don’t even need them.
❌ Capacity is wasted due to lack of planning.

📌 Solutions you can apply TODAY:

✅ Implement a job manager and GPU scheduler.
✅ Use Kubernetes to orchestrate ML models efficiently.
✅ Adopt a cluster-wide workload scheduler.

If you are thinking of buying more GPUs because “there is not enough capacity”, do an audit first. In many cases you can free up resources and delay purchases by optimizing the existing infrastructure: systems such as AIX, Linux, IBM i, RHEL or SUSE may have untapped capacity that can be reallocated with technical adjustments. At SIXE we audit all these systems to identify opportunities for improvement without changing hardware, prioritizing efficiency over investment.
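A first-pass GPU audit can be as simple as flagging accelerators whose average utilization sits below a target. Here is a toy Python sketch; in practice the utilization figures would come from your monitoring stack (e.g. nvidia-smi or DCGM exports), and the node names below are made up:

```python
def underutilized(gpus: dict, threshold_pct: float = 30.0) -> list:
    """Return the GPUs whose average utilization is below the threshold,
    i.e. capacity that could be reassigned before buying more hardware."""
    return sorted(name for name, util in gpus.items() if util < threshold_pct)

# Hypothetical fleet: average utilization (%) per GPU over the last month
fleet = {"node1/gpu0": 85.0, "node1/gpu1": 12.0, "node2/gpu0": 4.0}
print(underutilized(fleet))  # candidates for reallocation
```

Even a trivial report like this often reveals that the “capacity problem” is really an allocation problem.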


If you do not automate, you are living in the past.

The lack of standardization in ML is a serious problem. Each team uses different tools, processes are not replicable and everything becomes chaotic.

This is where MLOps comes in.

MLOps is not just a term bandied about lately, it is a necessity for ML models to move from the experimentation phase to production without headaches.

📌 Benefits of MLOps:

Automates repetitive tasks (validation, deployment, security).
Reduces human errors in model configuration and execution.
Improves reproducibility of experiments.
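The reproducibility point can be made concrete in a few lines of Python. This is a minimal sketch, not taken from any particular MLOps tool (the helper names are ours): pin the random seed and fingerprint the exact configuration so a run can be replayed and compared.

```python
import hashlib
import json
import random

def fingerprint_config(config: dict) -> str:
    """Return a stable hash of a training configuration, so two runs
    can be compared and reproduced exactly."""
    canonical = json.dumps(config, sort_keys=True)  # stable key ordering
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

def reproducible_run(config: dict) -> list:
    """Seed the RNG from the config so the 'experiment' is replayable."""
    random.seed(config["seed"])
    return [random.randint(0, 100) for _ in range(config["n_samples"])]

config = {"seed": 42, "n_samples": 3, "model": "demo"}
print(fingerprint_config(config))  # same config -> same fingerprint
print(reproducible_run(config) == reproducible_run(config))  # True
```

Real pipelines add data versioning and environment pinning on top, but the principle is the same: if you can hash it and seed it, you can reproduce it.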

If you don’t have a clear MLOps strategy, your team will end up doing the same work over and over again. We recommend training your team in MLOps to stop wasting time on repetitive tasks. At SIXE, we understand the challenges of ML and offer an MLOps course with Ubuntu Linux designed to help you implement efficient and scalable workflows.


Hybrid cloud: The perfect balance between cost and flexibility

The eternal debate between public and private cloud has generated more than one headache in companies. Should you opt for the agility of the public cloud or prioritize the control and security of a private cloud? The good news is that you don’t have to choose. There is an in-between solution that combines the best of both worlds: the hybrid cloud.

Public cloud only: Can be costly and raises security concerns.
Private cloud only: Requires investment in hardware and maintenance.

🔹Use the public cloud for quick experiments and initial testing.
🔹Migrate models to private cloud when you need more control and security.
🔹Make sure your infrastructure is portable to move between clouds, avoiding environment incompatibility.

Thanks to the ability to seamlessly interconnect between environments, the hybrid cloud eliminates vendor lock-in and optimizes operational costs. A hybrid architecture gives you the best of both worlds: agility to innovate and stability to scale.


ML Security: Don’t wait until it’s too late

Many people think about security when it is already too late. An attack on your ML models or a data breach can have disastrous consequences.

Best practices to protect your ML infrastructure:

Perform at least one annual security audit of your infrastructure.
Implement strong authentication and identity management.
Encrypt data before using it in ML models.
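On the data side, a complementary practice is pseudonymizing direct identifiers before they ever reach a training set. Below is a minimal Python sketch using a keyed hash; note this is pseudonymization, not encryption (for encryption proper you would use a dedicated library), and the key shown is a placeholder:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-in-a-vault"  # placeholder key

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token.
    The same input always maps to the same token, so joins still work,
    but the raw identifier never enters the training pipeline."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"customer_id": "ES-000123", "spend": 420.0}
record["customer_id"] = pseudonymize(record["customer_id"])
print(record)
```

Using a keyed HMAC rather than a plain hash prevents anyone without the key from brute-forcing identifiers back out of the tokens.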

Remember: Security is never enough. The more “layers” of security you have, the less likely you are to be in the news for a data breach ;)


Training: Without a trained team, how will you manage your infrastructure?

AI and ML are constantly evolving. If your team doesn’t keep its skills up to date, it will be left behind.

🔹 External training: MLOps workshops and courses.
🔹 Internal learning. Foster a culture of continuous learning within your organization through mentoring, collaborative documentation and practical sessions.

💡 At SIXE we offer MLOps training to help companies build scalable and efficient architectures. If your team needs to get up to speed, we can adapt to your company’s specific needs.


Don’t waste hours chasing an error

If your ML infrastructure fails and you don’t have monitoring, you’re going to spend hours (or days) trying to figure out what happened.

📊 Essential tools for observability in ML:

Real-time dashboards for model and hardware monitoring.
Automatic alerts to detect problems before they become critical.
Detailed logs for process traceability and error resolution.
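The alerting idea can be sketched in a few lines: compare live metrics against thresholds and surface the breaches. The metric names and limits below are illustrative, not from any specific monitoring tool:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    metric: str
    value: float
    threshold: float

def check_metrics(metrics: dict, thresholds: dict) -> list:
    """Compare live metrics against thresholds and return the breaches,
    so a problem surfaces before anyone has to dig through logs."""
    return [Alert(name, metrics[name], limit)
            for name, limit in thresholds.items()
            if metrics.get(name, 0.0) > limit]

live = {"gpu_util_pct": 97.0, "inference_p99_ms": 180.0, "error_rate": 0.001}
rules = {"inference_p99_ms": 150.0, "error_rate": 0.01}
for alert in check_metrics(live, rules):
    print(f"ALERT: {alert.metric}={alert.value} (threshold {alert.threshold})")
```

Production stacks (Prometheus, Grafana and the like) do exactly this at scale, plus routing and deduplication of the alerts.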

If you don’t have full visibility over your infrastructure, sooner or later you will have problems.


Conclusion

Building a scalable and efficient architecture for Machine Learning is not just a technical challenge, but a change of mindset. Leverage your current resources, optimize the use of GPUs and adopt MLOps to automate key processes.

Do you want to design an ML architecture that really works? We can help you.

👉 Contact us and we will help you create a scalable, secure and AI-optimized infrastructure.

IBM Power11: everything we know so far

Constantly updated post (based exclusively on SIXE’s opinions and expectations)

The evolution of Power architecture has always sparked curiosity and debate in our community. While IBM strives to balance innovation with market demands in each new generation, results haven’t always met expectations. Now, with Power11 on the horizon, we explore its potential and the lessons IBM might have learned. Plus, discover our “wishlist” for Power11 (if dreams came true…🙄).

Which models might be released?

We expect, as with Power10, models with 1, 2, 4 and up to 16 sockets, equivalent to the S1012, S1022, S1024, E1050 or E1080. These future models will be unveiled soon, along with rPerf metrics to understand their equivalence to the current models they will replace.

Since 2010, each Power generation has adapted to shifting markets. Today, with competitors like DeepSeek and optimized AI chips, new inference options are emerging—reducing reliance on NVIDIA GPUs. A prime example? The confirmed integration of IBM’s Spyre Accelerator.

Why is the IBM Spyre accelerator important?

Image: the IBM Spyre coprocessor

IBM Spyre Accelerator

Designed for AI workloads, this component could revolutionize generative AI and complex model processing. From modernizing RPG code to enhancing DB2 with AI, and supporting Open Source on ppc64le (Linux on Power) and HPC, its versatility stands out. Notably, Power’s bandwidth between processors, memory, and accelerators could outperform x86/ARM systems at a fraction of the cost of high-end NVIDIA GPUs. Its final impact will depend on IBM’s implementation and real-world benchmarks.

Power11 processor innovations

Power11 delivers several key upgrades:

  • Higher clock speeds and 25% more cores per chip than Power10.
  • Enhanced reliability, power efficiency, and quantum-safe security (building on Power10’s foundation).

Innovation in processor manufacturing technologies

Power11 leverages Integrated Stacked Capacitor (ISC) technology and improved cooling systems (heat sinks, fans). Together, these boost core density and computational power while optimizing energy use.

DDR5 memory: Better performance

Thanks to DDR5 support, Power11 gains higher bandwidth and efficiency. Importantly, this is not just a Power11 feature: DDR5 also works with Power10 (and possibly DDR4), allowing memory to be reused from older systems. Looking ahead, DDR6 integration in future Power servers could push performance even further.

What role does KVM play in Power11?

Virtualization is critical, and Power11’s KVM integration strengthens its Linux compatibility. Since Power10, KVM has operated within PowerVM, enabling hybrid environments (e.g., mixing Power nodes in OpenStack). While KVM doesn’t replace PowerVM (IBM’s free, feature-rich hypervisor), it offers flexibility for Linux-native tools like Canonical’s LXD. We’ve covered this in depth before.

Conclusion

Power11 isn’t just hardware—it’s IBM’s chance to reconnect with its community. By blending cutting-edge tech with openness, IBM could deliver a versatile platform for today’s flexible, innovation-driven market. If IBM successfully balances these innovations with market demands, Power11 could be a major turning point.

Want to transform your infrastructure with Power? At SIXE, we specialize in Power systems—whether you’re migrating or optimizing existing setups.

Image: sustainable business, from the SIXE blog

IBM Power9: Upgrade or maintain? What to do after the end of official support

Is my Power9 obsolete? Should we upgrade to Power10 or Power11?

Stop for a moment, don’t rush. Here are the 4 keys to why maintaining your Power9 systems could be the best thing for your company and the environment.

IBM has announced the end of support for Power9 systems as of January 31, 2026. This comes with a clear message: upgrade to Power10 models or wait for the new Power11. But do you really need to upgrade your systems now? At SIXE, we believe that Power9s can continue to function perfectly well if managed properly. Making hasty decisions without evaluating the options can be costly, both for your company and for the environment.

The Power9 dilemma: Renew or maintain?

It is true that newer systems offer significant improvements in performance and efficiency. However, manufacturing new servers generates a large amount of CO2 emissions and increases the demand for rare materials, and those emissions are NOT offset by the new hardware’s efficiency gains. Instead, maintaining and optimizing your Power9s can be a much more sustainable and cost-effective option. Here’s why:

1. Why is the manufacture of new servers not sustainable?

Upgrading to new servers involves a significant environmental cost. Although Power11 systems will be more efficient, hardware manufacturing generates tons of CO2 emissions. Your Power9, with proper maintenance, can remain functional and less harmful to the environment. We prove it with the example of Infomaniak:


Chart: comparative environmental impact of replacing IT hardware

As the graph shows, extending the lifetime of servers by upgrading key components, such as the processor or memory, can drastically reduce the environmental impact. It also contributes to the circular economy by avoiding waste of resources.

Solution: Opt for preventive maintenance strategies and component upgrades to maximize the life of your Power9.


2. Benefits of virtualization in Power9

Lack of server virtualization and consolidation increases energy consumption and waste generation. With virtualization tools, your Power9s can run much more efficiently, reducing the need for new equipment and the associated environmental impact.

Solution: Implement virtualization solutions to optimize the use of your resources. At SIXE we offer virtualization training in Linux and VMware to help you maximize the performance of your infrastructure.


3. Measurement and management of environmental impact

Without measuring the impact of your activities, it is impossible to optimize your resources and reduce your carbon footprint. Power9 systems can be evaluated to identify opportunities for efficiency and sustainability improvements.

  • Energy audits: Identify areas of high consumption and optimization opportunities.
  • Life cycle assessment: Analyze where your Power server stands today, so you can assess its environmental impact from manufacture to replacement.

Solution: Conduct regular audits and use measurement tools such as IBM Cloud Carbon Calculator or IBM Systems Energy Estimator to manage the impact of your IT infrastructure.

 


4. Economic impact of a hasty decision

Rushing to renew equipment may not be cost-effective if IBM Power9 servers still meet your company’s current needs. Before investing in new systems, it’s critical to analyze your company’s Return on Investment (ROI) to determine if the upgrade is financially justifiable.

  • Acquisition and maintenance costs: Buying new servers (Power10, Power11) implies a high initial cost. Older servers, with proper maintenance, can continue to operate efficiently, avoiding this expense.
  • Long life cycle: With a life cycle assessment, the useful life of your servers can be properly extended (e.g. by upgrading components). With optimization, they can be a viable long-term solution.
  • Current capacity vs. future needs: If the Power9 still handles current workloads efficiently, an immediate upgrade may be unnecessary. Performing a performance analysis can be key.

Solution: Evaluate the ROI of renewing your servers versus replacing them. In many cases, maintaining and optimizing your Power9 can be the most cost-effective and sustainable option for your business.


Conclusion

The end of support for IBM Power9 systems doesn’t mean you should rush to replace them with the next Power11. With the right strategies, your Power9s can remain a sustainable and efficient solution. Our recommendation is to evaluate the specific case of your infrastructure. Before making a decision, consider the environmental and economic impact of renewing your infrastructure. At SIXE we help you optimize your systems and take the first step towards a more sustainable technology. Contact us for an audit.

TUX performing a healthcheck on IBM AIX

Why is it crucial to perform an AIX healthcheck?

Did you know that many AIX systems are “working fine” until they suddenly… stop working?😱

The funny thing is that problems almost always give warnings first, but… are you listening to them? 🤔 If you want to know how a simple healthcheck can help you detect those early warnings and prevent critical failures before your AIX “implodes”, read on.👇

Health-what?

A healthcheck is a quick, preliminary examination of the state of a system. Its main purpose is to provide an overview of the system’s performance, stability and security, to identify which areas require immediate attention. Unlike a full audit, which is much more detailed and in-depth, the healthcheck is an initial step that lets you determine the “state of health” of the system in an agile and efficient manner. And what’s more… psst! At SIXE we perform healthchecks. Request one here


What is the purpose of performing an AIX Healthcheck?

The AIX healthcheck is a technical assessment focused on reviewing key aspects of the system, such as resource usage, hardware health, error logs and basic security. This process allows you to identify potential problems and intervention priorities without going into the level of detail of a full audit. Some key points covered by an AIX healthcheck:

  • Overall performance: Evaluation of CPU, memory and storage utilization to identify bottlenecks and areas for improvement.
  • Hardware status: Detection of faults or degraded components that may affect system stability.
  • Recurring errors: Review of system logs (errpt, syslog) to identify patterns of errors and anomalies that may indicate underlying problems.
  • Basic security compliance: Verification of key settings such as access, user permissions and password policies to ensure that the system is protected against unauthorized access.

This preliminary analysis is particularly useful for companies that need an initial diagnosis to determine what aspects to address later, either through a specific optimization or a complete audit.


Why perform an AIX Healthcheck?

1. Identify critical problems quickly

The healthcheck acts as an early warning to detect faults or weaknesses that could lead to major disruptions. For example:

  • Processes that consume too many resources.
  • Unsafe or inadequate configurations.
  • Status of hard disks, memory and other critical system components.

2. Optimize resources

It allows finding configurations that limit system performance, such as excessive CPU usage or poorly distributed storage. This helps to make quick adjustments that improve operability without the need for more complex measures.

3. Establish priorities

The result of the healthcheck provides a clear starting point for planning future actions: from implementing patches to performing a more detailed audit.

Useful tools for AIX Healthcheck

Some tools and commands that can simplify the process are:

  • nmon: For detailed performance analysis.
  • errpt: To identify hardware and software errors.
  • topas: To monitor resources in real time.
  • PowerSC: To review security settings.
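For Linux environments, or simply as a sketch of the idea behind these tools, a minimal first-pass check can even be written with the Python standard library. The thresholds below are illustrative, not AIX defaults; on AIX itself you would lean on nmon, errpt and topas:

```python
import os
import shutil

def health_snapshot(disk_threshold_pct: float = 85.0) -> dict:
    """A toy first-pass check: CPU load relative to core count and
    root-filesystem usage. Unix-only (os.getloadavg)."""
    load1, _, _ = os.getloadavg()            # 1-minute load average
    cores = os.cpu_count() or 1
    usage = shutil.disk_usage("/")
    disk_pct = 100.0 * usage.used / usage.total
    return {
        "load_per_core": round(load1 / cores, 2),
        "disk_used_pct": round(disk_pct, 1),
        "disk_warning": disk_pct > disk_threshold_pct,
    }

print(health_snapshot())
```

A real healthcheck goes much deeper (error logs, firmware levels, security settings), but even a snapshot like this gives you a baseline to compare against over time.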

NMON Tutorial to monitor Linux and AIX

What to expect as an AIX healthcheck customer?

How does an AIX healthcheck work? What should you ask for, and how will it help you? As a customer, you will receive a detailed report on the state of your environment in terms of security, performance and availability, along with suggestions for improvement. The process includes clear, practical recommendations to optimize your system, improve security and ensure that your AIX is running efficiently. This report will not only help you prevent future problems but will also give you a concrete action plan to improve performance and keep your infrastructure protected and operational.


Conclusion

A healthcheck is the first step in ensuring that a system is operating correctly and efficiently. It acts as a quick check-up that identifies problems and priorities, providing a solid foundation for more complex decisions, such as a full audit or resource optimization. In short, this quick and easy review can save time, prevent major problems and ensure that the system is in optimal condition to support the organization’s operational needs. If you want us to perform a healthcheck of your AIX system, you can request it here: https://sixe.es/sistemas/consultoria/healthcheck-de-sistemas-aix

 

What do we expect from IBM Power11?

The evolution of IBM’s Power architecture has been the subject of intense debate in the technology community. Over the past few years, this architecture has undergone significant strategic changes that have generated criticism and expectations alike. As with KVM, we almost guessed everything IBM was going to announce; let’s take a second shot at Power11. In this case, we don’t have the kernel.org lists to clue us in, but we do have 10 years of trajectory since Power8 and a market with very clear demands for alternatives to x86, even more so now that Intel is going through one of the worst moments in its history.

Background and a little history

Power8: Ahead of the market

With Power8 came the Power OEM/LC systems, NVIDIA GPUs, the NVLink connector and a first version of KVM on Power (not to be confused with the 2024 announcement). However, in practice, the challenges outweighed the opportunities… and we’ll leave it at that 🙂. Some felt that IBM was ahead of the market, while others felt that there was a lack of supported, proven solutions on these servers to achieve the anticipated impact; there was even talk of mass adoption by Google or Rackspace.

Power9: Openness and Community Collaboration

Power9 represented a milestone in IBM’s strategy by offering a more open and accessible architecture for the community. Through the OpenPOWER Foundation, IBM released a significant portion of the specifications and technologies associated with Power9, allowing third parties to design and manufacture their own systems based on this architecture, similar to what is done with ARM or x86. Companies such as Raptor Computing Systems developed Power9-based systems using open source firmware and software, offering highly auditable and user-controllable platforms.

Power10: The shift towards proprietary solutions

However, in the next generation, development delays (perhaps exacerbated by the COVID-19 pandemic) led IBM, upon launching Power10, to license blocks of intellectual property from Synopsys for components like the DDR4/5 PHY and PCIe 5.0. This decision introduced proprietary firmware into the system, breaking with the openness established with Power9 and limiting community involvement in the development of these technologies. Additionally, NVIDIA’s strategic shift after Power9, opting for alternative architectures such as ARM-based platforms, complicated the reintegration of GPUs into the Power platform. In Power10, IBM’s strategic response was to focus on inference within the processor core, enabling artificial intelligence processing directly on the chip, without relying on GPUs.

Power11: what could it offer us?

With the anticipated release of Power11, there is an expectation that IBM will address these past challenges and realign its strategy with current market demands. This includes reintegrating GPUs and other accelerators, enhancing support for open-source workloads and Linux applications, and continuing to advance AIX and IBM i as key components of the Power ecosystem.

Image: IBM Power from 2010 onward, showing the features of each IBM Power generation

Anticipating Power11: Key expectations and strategic imperatives

The decisions made around Power10 have had a significant impact on both the community and the market. Moving away from an open architecture raised concerns among developers and companies that prioritize transparency and collaborative development. Competitors with open frameworks, such as RISC-V, have gained traction by offering the flexibility and freedom that Power10 lacked. This underscores the competitive value of openness in today’s technology landscape, where open-source solutions increasingly dominate the market for new workloads. Looking forward to Power11, there is a strong anticipation that IBM will address these concerns. At SIXE, we advocate for a return to open development practices, providing access to firmware source code and specifications to foster greater collaboration and innovation.

Reintegrating GPUs and Accelerators

We believe Power11 should correct the limitations seen in Power10, especially by regaining control over critical components like DDR PHY and PCIe interfaces. Avoiding reliance on third-party intellectual property is essential for achieving a truly open architecture. In doing so, IBM can realign with community demands and tap into the expertise of developers and organizations committed to open-source principles. Furthermore, reintegrating GPUs and other accelerators is crucial to meet the growing need for heterogeneous computing. By supporting a wide range of accelerators—including GPUs, FPGAs, and specialized AI processors—IBM can offer flexible, powerful solutions tailored to diverse workloads.

Strengthening the Open-Source ecosystem and Linux support

This strategy aligns with industry trends toward modular and scalable architectures that can handle increasingly complex and dynamic computational requirements. Strengthening support for open-source workloads and enhancing compatibility with Linux applications will be vital for the broader adoption of Power11. Seamless integration with open-source tools and frameworks will attract a larger developer community, making it easier to migrate existing applications to the Power platform.

The role of AIX and IBM i in IBM Power’s strategy

This approach not only encourages innovation but also addresses market demands for flexible, cost-effective solutions. Additionally, we are keen to see how these hardware advancements can be fully utilized by AIX and IBM i, reinforcing IBM’s commitment to its longstanding customer base. It is essential that businesses relying on these operating systems can benefit from Power11’s innovations without compromising on stability, performance, compatibility, or availability for their critical systems.

Conclusion

If there is one thing we know for sure, it is that no single operating system or architecture fits all workloads. What is most valuable for Power customers is the ability to integrate, on the same machines, the AIX or IBM i databases their business depends on, private clouds with KVM, front ends with Kubernetes on Linux and, hopefully soon, AI, ML and HPC workloads as well. At SIXE we think that, just as there is no perfect music for every moment, there is no universal operating system, database or programming language. On Power we can have them all, and that’s why we love it.

For us, Power11 represents an opportunity for IBM to realign its strategy: integrating GPUs and accelerators to meet high-performance computing needs, enhancing support for open source workloads and Linux applications, and continuing to develop its leading-edge operating systems for mission-critical environments, such as AIX and IBM i. In doing so, IBM can deliver a versatile and powerful platform that appeals to a broad spectrum of users. The success of Power11 will depend on IBM’s ability to balance proprietary innovation with openness and collaboration with third parties.

Need help with IBM Power?

Get in touch with SIXE; we are not only experts in everything that runs on Power servers, but also active promoters and part of the IBM Champions community. We have extensive knowledge in virtualization, security, critical environments on AIX, application modernization with RPG and IBM i, as well as emerging use cases with Linux on Power.

 

FreeRTOS logo with TUX in the background

Real-time Linux (RTOS) – Now part of your kernel

Did you know that while you were opening the browser to read this, your computer decided to prioritize that process, leaving many others in the queue?🤯 Do you want to know how it does it, and what it means for Linux to become an RTOS? Read on and we’ll show you. And watch out: if you are interested in the world of the penguin OS, we are going to tell you more than one fact you may not know…💥

How does the Linux Kernel scheduler work?

The Linux scheduler works much like that browser example: it decides which state to put processes in (running, interruptible, non-interruptible, zombie or stopped) and their execution order, to improve your experience. To order execution, each process has a priority level. Say you have a background process running and you open the browser: the scheduler will interrupt that background process and focus resources on opening the browser, ensuring it runs quickly and efficiently.

The concept of expropriation (preemption)

“Expropriation” on Linux🐧? It’s not what you’re thinking of… Expropriation, better known as preemption, is a fundamental feature: it allows processes to be interrupted when a higher-priority one breaks in. In Linux 2.6, the ability to preempt processes was added to the kernel, that is, the kernel itself can interrupt running tasks. Systems that are not preemptible must wait for the running task to finish before moving on to the next one.

In the case of Linux, since version 2.6.24 the Completely Fair Scheduler (CFS) has been used as the scheduler. Its governing principle is to ensure “fair” access to the CPU.

Completely Fair Scheduler: how do you decide which process should run at which time in order to have fair access to the CPU?

There are two types of priorities: static and dynamic.

  • Static (niceness): Can be adjusted by the user. The lower the value, the higher the process’s priority and the more CPU time it receives.
  • Dynamic: Set according to the behavior of the program. Processes can be I/O-bound (they spend most of their time waiting for input/output, so the scheduler boosts them to keep the system responsive) or CPU-bound (they perform intensive computation and would starve other processes if not throttled).
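Niceness reaches CFS through a per-task weight: each nice step changes a task’s CPU share by roughly 25%. The kernel uses a precomputed table (`sched_prio_to_weight`), which is approximately `1024 / 1.25**nice`. A rough Python sketch of how weight speeds up or slows down a task’s virtual runtime (CFS always picks the task with the smallest virtual runtime next):

```python
NICE_0_WEIGHT = 1024  # weight of a task with nice 0

def weight_for_nice(nice: int) -> float:
    """Approximation of the kernel's sched_prio_to_weight table:
    each nice level changes the weight by about 25%."""
    return NICE_0_WEIGHT / (1.25 ** nice)

def vruntime_delta(exec_time_ms: float, nice: int) -> float:
    """CFS advances a task's virtual runtime more slowly the heavier
    its weight, so high-priority tasks get picked more often."""
    return exec_time_ms * NICE_0_WEIGHT / weight_for_nice(nice)

for nice in (-5, 0, 5):
    print(nice, round(weight_for_nice(nice)), round(vruntime_delta(10, nice), 2))
```

A task at nice -5 accumulates virtual runtime about three times more slowly than one at nice 0, so it gets roughly three times the CPU share.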

How does the planner prioritize?

The operating system maintains two lists of programs:

  • List 1: Programs that still have time left to use.
  • List 2: Programs that have used up their time.

When a program uses up its time slice, the system calculates how much time it should get next time and moves it to the second list. When the first list becomes empty, the two lists are swapped. This helps the system work efficiently. Linux 2.6, with the fully preemptible kernel, greatly improved the responsiveness of the system: the kernel itself can now be interrupted during low-priority tasks to respond to higher-priority events.

 

PREEMPT_RT inside the Linux Kernel

With PREEMPT_RT merged into the mainline kernel, Linux can now be tuned for pinpoint timing. An RTOS guarantees that the system responds within strict deadlines for critical tasks, such as in medical equipment. Linux, however, was not originally designed for that, so having PREEMPT_RT in the kernel brings real-time features even if they do not make it a full RTOS.

Straightforward integration and simplified maintenance
  • Less dependence on external patching: Direct access to upgrades without managing patches.
  • Easier maintenance: Easier upgrades and fewer compatibility issues.
Improved stability and performance
  • Testing and validation: Increased stability and performance through rigorous testing.
  • Continuous development: Continuous improvements in functionality and performance.
Accessibility for developers
  • Ease of use: Enabling more accessible real-time functionalities.
  • Documentation and support: Increased documentation and support in the community.
Competition with dedicated systems
  • Increased competitiveness: Positioning Linux as an alternative to dedicated RTOS.
Extended use cases
  • Critical applications: Adoption of Linux in critical systems where accuracy is essential.
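You can check whether your running kernel already includes real-time preemption. A quick sketch (on non-RT kernels the /sys file simply does not exist):

```shell
# The kernel version string includes "PREEMPT_RT" on real-time builds.
uname -v
uname -v | grep -q PREEMPT_RT && echo "real-time kernel" || echo "standard kernel"
# RT kernels also expose this flag (contains 1 when active):
cat /sys/kernel/realtime 2>/dev/null || true
```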

Why has PREEMPT_RT taken so long to become part of the kernel?

In addition to funding problems and the community’s limited interest in taking Linux in a real-time direction, a technical problem stood in the way: printk. Printk is the function that prints messages to the kernel log buffer. The problem was that it introduced delays every time it was called, and this delay interrupted the normal flow of the system. Once this problem was resolved, PREEMPT_RT could finally be merged into the kernel.

How does Linux becoming a Real-Time Operating System affect you?

For the average user: nothing. However, if you are a developer, this innovation in the Linux core will be a breakthrough to take into account. Until now, developers who need real-time precision opted for other operating systems designed for it. With the new PREEMPT_RT function integrated into the Linux kernel, this will no longer be necessary. The feature allows Linux to stop any task to prioritize a request in real time, which is essential for applications that demand low latency.
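For developers, this surfaces through the POSIX real-time scheduling classes. A sketch using chrt from util-linux (actually running a task under SCHED_FIFO generally requires root, so the last line is only illustrative):

```shell
# Show the scheduling policy of the current shell (normally SCHED_OTHER).
chrt -p $$
# List the priority ranges each policy supports on this kernel.
chrt -m
# Hypothetical: run a latency-sensitive task under SCHED_FIFO, priority 80.
# sudo chrt -f 80 ./my_realtime_task
```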

Use case: home security

Imagine you are using a voice assistant at home that controls both the lighting and the security system. If it detects an intrusion while you are at home, it should prioritize the activation of alarms and notify you immediately. In this case, the lights or music can wait; what really matters is your safety. This ability to respond immediately in critical situations can make all the difference.

Why is Real Time necessary?

As we saw in the use case, an RTOS must complete tasks, even unforeseen ones, within specific and predictable times. In workloads that require this precision, RTOSs play a critical role, and they are often found in IoT applications:

  • Vehicles: Pioneering cars like Tesla can brake immediately if they detect an obstacle.
  • Critical systems: In aircraft or medicine, systems must operate on tight schedules.
  • Industry: In industrial processes, a slight delay can cause failures.

The role of AI and machine learning

AI and machine learning also play a key role in RTOS and IoT. They could predict events and support fast and effective decision making.

Conclusion

In short, real-time Linux is finally becoming a reality. The integration of real-time support into Linux marks a turning point and opens up new opportunities for critical tasks in sectors such as healthcare, robotics and IoT. With the PREEMPT_RT feature integrated into the kernel, Linux can guarantee much tighter timing. Still, we should keep in mind that the penguin🐧 operating system is not 100% an RTOS; it was not designed to be one. We will see whether companies adapt Canonical’s real-time solution to their needs, or continue to opt for alternatives such as FreeRTOS or Zephyr. Do you want to continue learning about Linux? We offer official certifications. And if that is not enough… we adapt to you with tailor-made training 👇

Intensive training on Linux systems

Linux is the order of the day… if you don’t want to be left out of the latest technological demands, we recommend our Canonical Ubuntu courses 👇

Official SIXE training at Canonical, creators of Ubuntu


Installing Windows XP on IBM Power (for fun)

Why not emulate other architectures on Power?

In a recent conversation with what I like to call the Wizards of Power – the technical leadership behind this amazing platform, including inventors, architects, distinguished engineers, and incredible teams – they asked me:

“Why are you interested in emulation? Who would want to emulate other architectures on Power, and what’s the point?”

My response was that, in the open-source world, many things we do are driven by curiosity or even just for fun. So… why not install Windows on IBM Power?

Curiosity as our engine

It resonates in my head that if one day I can have as much fun with Linux on ppc64le as I do on x86 or, increasingly, on ARM (Mac, Raspberry Pi), it will mean Power can be “the third” architecture for Linux, far beyond real use cases and mission-critical workloads.

In other words, if I can do the same on ppc64le as on other architectures, I can use Power for any use case.

Why have thousands of x86 servers wasting energy and taking up space in the data center when we can have a few Power servers doing the same work more securely and efficiently?

Clients might say it’s for compatibility, for using standard tools. But multi-architecture could be the new standard, if it’s not already.

I don’t want to dive too deep into this today. Several ideas have been published on the IBM portal, and I believe the teams at IBM, Debian, Canonical, and Red Hat are doing an excellent job, which I will cover in future posts.

There have been news updates on the SIXE blog over the past months covering the hard work being done in this area, and with the release of the new FW1060 firmware level we finally have full support for KVM on PowerVM. This is equivalent to what exists on IBM Z / LinuxONE. Great!

As always, I wanted to push technology to its limits, including an old dream: running Windows (the “enemy” for AIX and Linux folks), and in this case, running Windows XP on an IBM Power10, using KVM and QEMU.

Preparation

Setting up an LPAR to run Windows on IBM Power requires specific steps, such as assigning a dedicated processor. We need to configure the LPAR to be a KVM host, which will change how it uses PowerVM to avoid overhead. We also need to assign at least one dedicated processor (not in “donor” mode, mind you). This will give us 8 dedicated threads to run our virtual processors in KVM. Yes, it’s simpler and less capable than PowerVM with its micro-partitions, but it’s still an industry standard, and not everyone needs to fly to work. Don’t you think?


Choosing the Distribution

From my experience, the best support for experiments with ppc64le tends to be Debian or Fedora. In this case, I’ve installed Fedora 40 and updated it to the latest levels. Then, you need to install all the virtualization packages and the QEMU support for other architectures. Following my idea of creating interactive articles, I will use virt-manager to avoid complex QEMU configurations. In my environment, I’ve installed all the qemu-system-* packages.
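For reference, the package set on Fedora would look roughly like this (a sketch; exact package names can vary between releases):

```shell
# Virtualization stack: KVM, libvirt and the graphical manager.
sudo dnf install -y qemu-kvm libvirt virt-manager virt-install
# QEMU emulator for foreign architectures (x86/x86_64 on a ppc64le host).
sudo dnf install -y qemu-system-x86
# Start libvirt and enable it at boot.
sudo systemctl enable --now libvirtd
```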


For Windows to detect our SATA virtual disks as usable, you’ll need the Windows guest drivers. You can install them with:

dnf install virtio-win-stable

You’ll also need a Windows XP ISO and its license keys. I recommend placing it in /var/lib/libvirt/images so it can be automatically detected by virt-manager.

Creating the Virtual Machine (just follow the wizard)

Make sure to select x86 as the architecture (QEMU will handle this).
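If you prefer the command line over the wizard, the equivalent virt-install call would look roughly like this (a sketch; the VM name, ISO path, memory and disk sizes are assumptions):

```shell
# Create an emulated x86_64 guest on the ppc64le KVM host.
virt-install --connect qemu:///system \
  --name winxp --memory 2048 --vcpus 2 \
  --arch x86_64 \
  --cdrom /var/lib/libvirt/images/winxp.iso \
  --disk size=10,bus=sata \
  --os-variant winxp
```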


Just like when running AIX on x86, don’t expect it to be very fast, although it took me about an hour to install… pretty much the same time it would take on a PC back then.

I can’t wait to see MS Messenger again! Enjoy the video and stay updated by following us!

Other Tests

What do you think about running MS PowerShell for ARM64 in Docker? Now I can “dir” in Power, how cool! :P


Conclusion

The work done to support KVM is, for me, the biggest breakthrough of recent years because of the endless possibilities it opens up for the Power platform: not only for Linux, but also for new ways to experiment, such as running Windows on IBM Power, a powerful and innovative combination.

From what I’ve been able to test, everything works and works great. Congratulations to everyone who made this possible.

Understanding high availability (HA) on SUSE Linux

High availability and business continuity are crucial to keep applications and services always operational.
High availability clusters allow critical services to keep running, even if servers or hardware components fail.
SUSE Linux offers a robust set of tools for creating and managing these clusters.
In this article, we explore the current state of clustering in SUSE Linux, with a focus on key technologies such as Pacemaker, Corosync, DRBD and others.
These tools, with minor differences, are available on both x86 and ppc64le.

Pacemaker: the brain of the cluster

Pacemaker is the engine that manages high availability clusters in SUSE Linux.
Its main function is to manage cluster resources, ensuring that critical services are operational and recover quickly in case of failure. Pacemaker continuously monitors resources (databases, web services, file systems, etc.) and, if it detects a problem, migrates those resources to other nodes in the cluster to keep them up and running.
Pacemaker stands out for its flexibility and ability to manage a wide variety of resources.
From simple services to more complex distributed systems, it is capable of handling most high-availability scenarios that a company may need.

Corosync: the cluster’s nervous system

Corosync is responsible for communication between cluster nodes.
It ensures that all nodes have the same view of the cluster status at all times, which is essential for coordinated decision making.
It also manages quorum, which determines whether there are enough active nodes for the cluster to operate safely.
If quorum is lost, measures can be taken to prevent data loss or even service downtime.

DRBD: the backbone of the data

DRBD (Distributed Replicated Block Device) is a block-level storage replication solution that replicates data between nodes in real time.
With DRBD, data from one server is replicated to another server almost instantaneously, creating an exact copy.
This is especially useful in scenarios where it is crucial that critical data is always available, even if a node fails.
Combined with Pacemaker, DRBD allows services to continue operating with access to the same data, even if they are on different nodes.
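As a sketch of that combination, a DRBD resource managed by Pacemaker via the crm shell might look like this (resource and attribute values are hypothetical; it assumes a DRBD resource named r0 is already configured on both nodes):

```shell
# Define the DRBD resource agent and monitor it every 15 seconds.
crm configure primitive drbd_r0 ocf:linbit:drbd \
  params drbd_resource=r0 \
  op monitor interval=15s
# Wrap it in a promotable (master/slave) clone: one Primary, one Secondary.
crm configure ms ms_drbd_r0 drbd_r0 \
  meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
```

Pacemaker then decides which node holds the Primary role, and colocation/order constraints tie the filesystem and services to it.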

Other key technologies in SUSE Linux clusters

In addition to Pacemaker, Corosync and DRBD, there are other essential technologies for building robust clusters on SUSE Linux:

  • SBD (Storage-Based Death): SBD is a fencing tool that prevents a misbehaving node from causing problems in the cluster.
    It achieves this through a shared storage device that nodes use to communicate their state.
  • OCF (Open Cluster Framework): OCF scripts are the basis of the resources managed by Pacemaker.
    They define how to start, stop and check the status of a resource, providing the flexibility needed to integrate a wide range of services into the cluster.
  • Csync2: A tool for synchronizing files between nodes in a cluster.
    It ensures that configuration files and other critical data are always up to date on all nodes.

Current status and future trends

Clusters in SUSE Linux have matured and are adapting to new business demands.
With the growing adoption of containerized environments and of workloads spread across different clouds, clusters in SUSE Linux are evolving to integrate better with them.
This includes improved support for container orchestration and for distributed applications that need high availability beyond replicating two disks with DRBD and keeping a virtual IP alive :)
Still, today the combination of Pacemaker, Corosync, DRBD and other tools provides a solid foundation for high availability clusters that can scale and adapt to the needs of SAP HANA and other solutions that require high, if not total, availability.
If you need help, at SIXE we can help you.

Cheatsheet for creating and managing clusters with Pacemaker on SUSE Linux

Here is a modest cheatsheet to help you create and manage clusters with Pacemaker on SUSE Linux.
Sharing is caring!

Package installation
  • Install Pacemaker and Corosync: zypper install -y pacemaker corosync crmsh

Basic configuration
  • Configure Corosync: edit /etc/corosync/corosync.conf to define the transport, interfaces and network.
  • Start services: systemctl start corosync && systemctl start pacemaker
  • Enable services at boot: systemctl enable corosync && systemctl enable pacemaker

Cluster management
  • View cluster status: crm status
  • See node details: crm_node -l
  • Add a new node: crm node add <node_name>
  • Remove a node: crm node remove <node_name>
  • View cluster logs: crm_mon --logfile <log_path>

Resource configuration
  • Create a resource: crm configure primitive <resource_name> <agent_type> params <parameters>
  • Delete a resource: crm configure delete <resource_name>
  • Modify a resource: crm configure edit <resource_name>
  • Show the complete cluster configuration: crm configure show

Groups and sets
  • Create a resource group: crm configure group <group_name> <resource1> <resource2> ...
  • Create a colocation set: crm configure colocation <set_name> inf: <resource1> <resource2>
  • Create an execution order: crm configure order <order_name> <resource1> then <resource2>

Constraints and placement
  • Create a colocation constraint: crm configure colocation <constraint_name> inf: <resource1> <resource2>
  • Create a location constraint: crm configure location <location_name> <resource> <score> <node>

Failover and recovery
  • Force migration of a resource: crm resource migrate <resource_name> <node_name>
  • Clear the status of a resource: crm resource cleanup <resource_name>
  • Temporarily unmanage a resource: crm resource unmanage <resource_name>
  • Manage a resource again: crm resource manage <resource_name>

Advanced configuration
  • Configure the quorum policy: crm configure property no-quorum-policy=<freeze|stop|ignore>
  • Configure fencing: crm configure primitive stonith-sbd stonith:external/sbd params pcmk_delay_max=<time>
  • Configure resource timeouts: crm configure primitive <resource_name> <agent_type> op start timeout=<time> interval=<interval>

Validation and testing
  • Validate the cluster configuration: crm_verify --live-check
  • Simulate a failure: crm_simulate --run

Policy management
  • Configure resource stickiness: crm configure rsc_defaults resource-stickiness=<value>
  • Configure default resource priority: crm configure rsc_defaults priority=<value>

Stopping and starting the cluster
  • Stop the entire cluster: crm cluster stop --all
  • Start the entire cluster: crm cluster start --all

 


SIXE: your trusted IBM partner

In this fast-changing and complex technological era, choosing the right suppliers is crucial.
When it comes to solutions like IBM’s, the real difference is not the size of the company, but its technical capacity, human capital, commitment and level of specialization.
SIXE Ingeniería is your ideal IBM partner, and here we explain why.

Technical expertise: Who do you want to design and manage your project?

At SIXE, we are not a company that resells any product or service looking for a margin, passing the technical challenge to someone else and “bye-bye”.
We specialize in key areas such as cybersecurity and critical IT infrastructure.
Unlike IBM’s large partners, who usually outsource most of their projects, at SIXE every task is executed by our in-house experts. Do you prefer to rely on a company that subcontracts or on a team that is directly involved in every technical detail? Our engineering company approach allows us to design solutions tailored to the specific needs of each client.
We do not offer generic configurations or deployments, but solutions tailored to exactly what your organization (and your team) needs.
We have experts in IBM Power infrastructure, storage, operating systems (AIX, Red Hat, IBM i, Debian, zOS), Informix and DB2 databases, application servers, etc.

Personalized engagement: what care do you expect to receive?

In large consulting firms, projects often become just another number on the client list.
Do you want to be just one more or do you prefer exclusive treatment?
At SIXE, we offer a personalized service, ensuring that each project receives the attention it needs to go well and that you trust us for many years to come.
Our agile structure allows us to adapt quickly and work side by side with the systems managers, ensuring that the proposed solutions meet your expectations and needs.

Innovation and flexibility

Large companies are often trapped in bureaucratic processes that prevent them from innovating or reacting quickly to market changes.
How many times have you come across solutions that are outdated or slow to implement?
At SIXE, we can adapt quickly and offer solutions that not only follow the latest trends, but anticipate them.
This is essential for projects that require quick and effective responses in a changing environment.
Also, when something involves risks, no matter how trendy it is or how spectacular it sounds in a PowerPoint, we will raise our voice and let you know.

Transparency and control

When projects are outsourced, transparency and control are diluted.
At SIXE, you have the security of knowing exactly who is working on your project and how resources are being managed.
Large consulting firms, because of their size, tend to lose this transparency, delegating tasks to third parties without the client having any real control over the process.
Would you rather risk losing visibility on your project or have a partner that keeps you informed and in control of each milestone?

Long-term relationships: are you looking for an additional supplier or a strategic partner?

We are not looking to close short-term contracts; our goal is to build long-lasting relationships based on ethics and trust.
This means that, once the technology is implemented, we remain committed to the project, offering technical support, training and consulting whenever necessary.
Large companies, on the other hand, tend to focus on the initial implementation, leaving everything else aside.
Outsourcing everything, of course, just like Homer Simpson would do.

Return on investment: where does your money go?

In many large consulting firms, much of the budget goes to cover overhead, with little direct impact on project quality.
They do not have good engineers on staff because their managers think that outsourcing technical talent reduces risks and improves margins.
At SIXE, every euro invested translates into real value; we do not have a pool of managers and executives spending hours in meetings and meals with clients.
What we do have is an internationally recognized engineering team, committed to the company and its clients for more than 15 years.
We are also part of a network of experts recognized internationally by IBM.

The difference is in the execution

Although it may be said otherwise, the real difference in a technology project is not in the size of the company, but in how and by whom each project is executed.
At SIXE, we combine technical expertise, commitment and transparency, offering a precise and results-oriented execution.
In a market saturated with options, why not choose a partner that ensures quality, innovation and a relationship based on collaboration?
Choosing SIXE as your IBM partner means opting for an approach based on technical excellence and total commitment to results.
Don’t leave the success of your project in the hands of chance, we are a partner who will care as much as you do (for our sake) about the final result and the relationship between our companies in the medium and long term.

Not only IBM

Although 50% of our business is related to IBM training, consulting and infrastructure projects, we are also a strategic partner of Canonical (Ubuntu), Red Hat and SUSE.

What about your competitors?

The truth is that we do not have any, because there is no other company of our size with our level of specialization in the solutions we offer.
There are other small and medium-sized companies with incredible human capital that complement the technologies we work on and with which we always collaborate, but we never compete.
When we don’t know how to do something, we always ask for help and let our clients know. It is part of our DNA.
