Providing High Availability to NFS in Ceph using Ganesha

Introduction to Ceph and Ceph-Ganesha

Ceph-Ganesha is an NFS gateway embedded within Ceph, with powerful orchestration features that enable high availability and dynamic management on a multi-node Ceph cluster. We will focus on the declarative simplicity of its deployment and show off its HA capabilities.

 

Ceph is an open-source, software-defined storage platform that delivers highly scalable object, block, and file storage from a unified cluster. At its core, Ceph’s architecture is built on a distributed network of independent nodes. Data is stored across OSDs (Object Storage Daemons), managed by Monitors, and orchestrated by Managers.

 

Ceph architecture explained

The Ceph File System (CephFS) is a POSIX-compliant file system that sits atop this infrastructure, providing a distributed and fault-tolerant namespace. For a system administrator, Ceph offers a great alternative to traditional storage arrays by providing a single, resilient platform that can grow linearly with the addition of commodity hardware.

 

Its self-healing and self-managing capabilities are key benefits, reducing the operational overhead typically associated with petabyte-scale storage.

 

What is NFS Ganesha in Ceph?

NFS Ganesha is an open-source NFS server that acts as a user-space gateway, a key distinction from conventional NFS servers that reside within the operating system’s kernel. This fundamental design choice provides a more robust and stable service environment. A bug in a user-space daemon is far less likely to cause a catastrophic system failure, a crucial advantage for a critical service endpoint. Ganesha’s architecture is also designed for maximum compatibility, supporting a full range of NFS protocols from NFSv3 to NFSv4.2, ensuring it can serve a diverse client base.

 

The true genius of Ganesha lies in its File System Abstraction Layer, or FSAL. This modular architecture decouples the NFS protocol logic from the underlying storage. For a Ceph environment, the FSAL_CEPH module is the key, enabling Ganesha to act as a sophisticated Ceph client. This means administrators can provide a consistent NFS interface to clients while benefiting from the full power and scalability of the Ceph cluster, all without exposing the underlying Ceph infrastructure directly. If you would like to learn more about Ceph, we offer a practical course on Ceph.


Cephadm integration: Declarative deployment of Ceph-Ganesha

The integration of Ganesha with the Ceph orchestrator (cephadm) elevates its deployment from a manual, host-specific task to an elegant, cluster-wide operation. This partnership allows for a declarative approach to service management, where a single command can manage the entire lifecycle of the Ganesha service.

 

For any mission-critical service, a system administrator’s primary concern is ensuring business continuity. Unplanned downtime can lead to significant data loss, loss of productivity, and damaged reputation. High Availability (HA) is the architectural principle that addresses this concern by eliminating single points of failure. For an NFS service, this means that if one server node goes offline, another node can seamlessly take over its duties. This provides administrators with peace of mind and allows for planned maintenance without impacting the end-user. For Ceph, its inherent distributed nature is the perfect complement to an HA NFS service, as the underlying storage is already resilient to node failures.

 

Preparing CephFS Storage for Ganesha

A successful Ganesha deployment begins with preparing the underlying CephFS storage. A seasoned administrator will provision the necessary pools to host the filesystem data and metadata, setting the stage for the service to be deployed.

 

Create a dedicated pool for NFS Ganesha data with autoscaling enabled

# sudo ceph osd pool create ganeshapool 32 32

# sudo ceph osd pool set ganeshapool pg_autoscale_mode on

 

Create a metadata pool. Note that the bulk flag tells the autoscaler to expect a large pool, so it belongs on the data pool; the metadata pool should stay small

# sudo ceph osd pool create ganeshapool_metadata 16 16

# sudo ceph osd pool set ganeshapool bulk true

 

Tie the pools to a new CephFS filesystem

# sudo ceph osd pool application enable ganeshapool cephfs

# sudo ceph osd pool application enable ganeshapool_metadata cephfs

# sudo ceph fs new ganeshafs ganeshapool_metadata ganeshapool

# sudo ceph fs set ganeshafs max_mds 3

# sudo ceph orch apply mds ganeshafs --placement="3 ceph-node1 ceph-node2 ceph-node3"
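
Before moving on, it is worth confirming that the new filesystem and its MDS daemons are healthy. A quick check, using the names created above:

# sudo ceph fs status ganeshafs

# sudo ceph orch ps --daemon-type mds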

Deploying the Ceph NFS Ganesha Service

With the storage foundation laid, Ganesha itself can be deployed either with YAML service specifications or with simple orchestration CLI commands. The ceph orch apply command is a powerful instruction to the orchestrator, telling it to ensure the desired state of the NFS service. By specifying a placement count and listing the cluster's hosts, the administrator ensures that a Ganesha daemon will run on every designated node, a critical step for a resilient and highly available service.

 

Deploy the Ganesha NFS service across all three specified hosts

 

# sudo ceph orch apply nfs myganeshanfs --placement="3 ceph-node1 ceph-node2 ceph-node3"

 

This single command initiates a complex, multi-faceted deployment. The orchestrator pulls the necessary container images, configures the daemons, and distributes them across the specified hosts. This contrasts sharply with manual, host-by-host installations, showcasing the power of centralized orchestration. These scenarios are covered in detail in our advanced Ceph course, where we cover step-by-step orchestration with cephadm and HA configurations.
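
To confirm that the orchestrator has actually placed a daemon on every designated host, you can query the service and its daemons. A quick verification sketch (daemon names are generated by cephadm and will differ on your cluster):

# sudo ceph orch ls --service-type nfs

# sudo ceph orch ps --daemon-type nfs

# sudo ceph nfs cluster info myganeshanfs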

 

Advanced capabilities: Dynamic exports and service resilience

Once the Ganesha service is running, its power is further revealed through its dynamic export management capabilities. Instead of editing static configuration files, an expert can create, modify, and delete NFS exports on the fly using a series of simple commands. This is invaluable in dynamic environments where storage needs change rapidly.

 

Create a new export to make the CephFS filesystem accessible

 

# sudo ceph nfs export create cephfs myganeshanfs /ganesha ganeshafs --path=/
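
With the export in place, any NFS-capable client can consume it from any of the Ganesha nodes. A minimal client-side sketch, assuming a Linux client with NFS utilities installed and network access to ceph-node1:

# sudo ceph nfs export ls myganeshanfs

# sudo mkdir -p /mnt/ganesha

# sudo mount -t nfs -o nfsvers=4.1 ceph-node1:/ganesha /mnt/ganesha

# df -h /mnt/ganesha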

The true value of this distributed deployment lies in its service resilience. The Ceph orchestrator is constantly monitoring the health of the Ganesha daemons. Should a host fail, the orchestrator will automatically detect the loss and take action to ensure the service remains available. This automated failover process provides a high degree of transparency to clients, moving Ganesha from a simple gateway to a genuinely high-availability service. Its architecture is built to withstand disruption, making it an indispensable part of a robust storage strategy.

Real-World example

Say we have a cluster with three Ganesha-ready nodes. A client can mount the underlying CephFS export from any of them, and if node 1 goes down, its clients can be served from node 2 or node 3, whichever way we want!
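
You can watch this resilience in action in a lab by taking one Ganesha node down and observing the orchestrator's view of the service. A rough sketch (not for production; the daemon IDs shown by ceph orch ps are generated by cephadm and will differ on your cluster):

# Note which daemon runs where

# sudo ceph orch ps --daemon-type nfs

# Reboot or isolate ceph-node1, then watch the service state converge again

# sudo ceph orch ps --daemon-type nfs --refresh

# sudo ceph orch ls --service-type nfs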

Conclusion: Why Ceph-Ganesha is essential for modern storage

NFS Ganesha is more than just a gateway; it is a critical component for integrating traditional file services with modern, scalable storage. By leveraging the command-line orchestration of cephadm, administrators can deploy a highly available, resilient, and dynamically manageable service. The process is a testament to the power of declarative infrastructure management, simplifying what would otherwise be a complex task. The architectural design of Ganesha, combined with the power of the Ceph orchestrator, makes it a perfect solution for the demanding storage requirements of today's hybrid environments. Precisely for this reason, at SIXE we offer not only Ceph training but also specialized support to ensure that companies can keep their production infrastructures stable.

👉 Ceph Technical Support

👉 Intensive Ceph Course

👉 Advanced Ceph Course

Terraform + AWS: From giant states to 3-minute deployments

“We haven’t touched our AWS infrastructure in three months out of fear of breaking something.” Sound familiar? The solution isn’t to change tools—it’s to change your methodology.

The lie we’ve believed

We all start the same: “Let’s do Infrastructure as Code—it’ll be amazing.” And indeed, the first few days are magical. You create your first VPC, security groups, a few instances… Everything works. You feel like a wizard.

Then reality hits.

Six months later, you have gigantic state files, tightly coupled modules, and every change feels like a game of Russian roulette. Does this sound familiar?

  1. terraform plan → 20 minutes of waiting
  2. A 400-line plan that no one understands
  3. “Are you sure you want to apply this?”
  4. Three hours debugging because something failed on line 247

But there’s one factor most teams overlook…

What actually works (and why no one tells you)

After rescuing dozens of Terraform projects, the formula is simpler than you think:

Small states + smart modules + GitOps that doesn’t scare you.

Layered states (not per project)

Forget “one state to rule them all.” Break it down like this:

terraform/
├── network/     # VPC, subnets, NAT gateways
├── data/        # RDS, ElastiCache  
├── compute/     # EKS, ECS, ASGs
└── apps/        # ALBs, Route53

Each layer evolves independently. The data team can update RDS without touching the network. This could be your game changer.
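
Day to day, each layer is initialized and planned on its own, which is what keeps plans small and fast. A typical workflow sketch, using the layout above:

cd terraform/network
terraform init
terraform plan -out=network.tfplan   # small, reviewable plan
terraform apply network.tfplan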

The remote state trick

The magic is in connecting layers without coupling them:

data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "company-terraform-states"
    key    = "network/terraform.tfstate"
  }
}

# Use outputs from another layer
subnet_id = data.terraform_remote_state.network.outputs.private_subnet_id

Modules that don’t give you a headache

Create specific modules for each type of workload:

  • secure-webapp/ – ALB + WAF + instances
  • microservice/ – EKS service + ingress + monitoring
  • data-pipeline/ – Lambda + SQS + RDS with backups

No more “universal” modules requiring 47 parameters.

Multi-cloud is already here

Now it gets interesting. Many teams are adopting hybrid strategies: AWS for critical applications, OpenStack for development and testing.

Why? Cost and control.

# Same module, different cloud
module "webapp" {
  source = "./modules/webapp"

  # On OpenStack for dev (providers are passed as a map,
  # and the module must declare the matching configuration_aliases)
  providers = { openstack = openstack.dev }
  instance_type = "m1.medium"

  # On AWS for prod
  # providers = { aws = aws.prod }
  # instance_type = "t3.medium"
}

The future isn’t “AWS or nothing.” It’s architectural flexibility. The power to choose the solution you want, when you want, adapted to your budget.

OpenTofu changes the game

With HashiCorp's switch of Terraform to the BSL license, OpenTofu is becoming the smart choice. Same syntax, open-source governance, zero vendor lock-in.

The advantage is huge: you can migrate gradually without changing a single line of code. Perfect for teams that want control without drama.
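
A gradual migration can be as simple as pointing the OpenTofu binary at an existing layer. A minimal sketch, assuming the S3 backend from earlier (tofu reads the same HCL and state format):

cd terraform/network
tofu init    # reuses the existing backend configuration and state
tofu plan    # a clean migration shows no changes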

The question you should ask yourself

Did your last terraform apply take years off your life?

If yes, the problem isn’t technical—it’s methodological.

Do you recognize these symptoms in your team? The difference between success and chaos lies in applying the right techniques from the start.


If you want to dive deeper into these methodologies, our Terraform/OpenTofu courses cover everything from fundamentals to advanced GitOps with real multi-cloud use cases.

Does your server need replacing? The right to repair says no

The new European Right to Repair Directive is putting an end to one of the most expensive myths in the IT sector: that switching to “more efficient” hardware is always more sustainable. Right to Repair makes products easier and faster to refurbish. And in the IT world, this means completely rethinking our relationship with hardware.

The myth of “new hardware is always better”.

For years we have heard the same speech: “this server is already 5 years old, it has to be replaced”. But is that really the case? Have you actually run the numbers on whether it is worth replacing that IBM Power9 just because it is no longer supported? You may be in for a surprise. The reality is much more complex and, above all, more expensive than it looks.

When you buy a new server, you don’t just pay the sticker price. You pay:

  • The carbon footprint of its manufacture
  • Transportation from the factory
  • Waste management of previous equipment
  • Migration and configuration costs
  • Lost productivity time during the transition

By contrast, when you refurbish your existing hardware, you make the most of an already amortized investment and drastically reduce the environmental impact.

Let’s do green math: numbers don’t lie

The new right-to-repair regulations can extend the useful life of products by up to 10 years, a dramatic shift in IT terms.

Why are these numbers so favorable? Because the manufacturing phase accounts for 70-85% of the total carbon footprint of any IT equipment. Keeping a server running for 8-10 years instead of 3-5 is literally doubling its environmental efficiency.

Right to repair in IT and sustainable technology

Beyond hardware: software also counts

The right to repair in IT is not limited to hardware. It includes:

  • Extended support for operating systems outside the official cycle. At SIXE we are committed to support outside the imposed life cycle and we can extend the useful life of Linux, AIX, Ceph, and other systems.
  • Independent maintenance of databases such as DB2, Oracle or Informix.
  • Security upgrades without the need to migrate the entire platform
  • Continuous performance optimization instead of mass replacements

The right to repair: more than a law, a philosophy

“My supplier says it’s insecure.”

Manufacturers have obvious business incentives to sell new hardware. However, a properly maintained 2018 server can be more secure than a poorly configured new one.

“No spare parts available.”

With independent maintenance providers, the availability of spare parts extends years beyond what is offered by the original manufacturers.

“Performance will be lower.”

A 5-year-old optimized system can outperform a new one without proper configuration.

Our sustainable commitment at SIXE: to make it last as long as the hardware itself allows

At SIXE, we have been advocating this philosophy for years. Not because it’s a trend, but because the numbers prove it: an approach based on preventive maintenance, continuous optimization and intelligent reuse of resources generates better ROI than the traditional buy-use-discard cycle.

Our commitment to “make it last forever” is not marketing. It is engineering applied with economic and environmental criteria.

Conclusion: the future is circular, not linear.

The right to repair in IT is not a regulatory imposition. It is an opportunity to rethink how we manage enterprise technology. An approach where maintaining, optimizing and extending the life of equipment is not only greener, but also more cost-effective.

The question is not whether your company will adapt to this reality. The question is whether it will do so before or after your competition.

Ready to make the leap to more sustainable and efficient IT? Discover our sustainable technology services and start optimizing your infrastructure today.

And if your system is giving you problems, we can assess its efficiency before replacing it.

👉Our consulting / service portfolio

How to fix the most common error in Ceph

Ceph is a powerful and flexible solution for distributed storage, but like any complex tool, it is not free of errors that are hard to diagnose. If you get the message “could not connect to ceph cluster despite configured monitors”, you know that something is wrong with your cluster. And no, it’s not that the monitors are asleep. This error is more common than it seems, especially after network changes, reboots, or when someone has touched the configuration “just a little bit”.

In this article we get to the point: we tell you the real causes behind this problem and most importantly, how to fix it without losing your data or your sanity in the process.

What does the error “could not connect to ceph cluster despite configured monitors” really mean?

When Ceph tells you that it cannot connect to the cluster “despite configured monitors”, what is really happening is that the client or daemon can see the configuration of the monitors but cannot establish communication with any of them. It’s like being ghosted: no matter how many times you call, nobody picks up.

Ceph monitors are the brains of the cluster: they maintain the topology map, manage authentication, and coordinate global state. Without connection to the monitors, your Ceph cluster is basically a bunch of expensive disks with no functionality.

Troubleshoot Ceph errors
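
Before hunting for specific causes, it helps to confirm whether any monitor responds at all. A first-pass triage sketch:

# Overall status with a short timeout, so it fails fast instead of hanging
ceph -s --connect-timeout 5

# Ping one specific monitor by its ID (replace "a" with yours)
ceph ping mon.a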

The 5 most common causes (and their solutions)

1. Network and connectivity problems

The number one cause is usually the network. Either because of misconfigured firewalls, IP changes or routing problems.

Rapid diagnosis:

# Check basic connectivity to a monitor
telnet [IP_MONITOR] 6789
# or with netcat
nc -zv [IP_MONITOR] 6789
# msgr2 uses port 3300, check it too
nc -zv [IP_MONITOR] 3300

# Check the routes
ip route show

Solution:

  • Make sure that ports 6789 (monitor) and 3300 (msgr2) are open.
  • Verify that there are no iptables rules blocking communication.
  • If you use firewalld, open the corresponding services:
firewall-cmd --permanent --add-service=ceph-mon
firewall-cmd --reload

2. Monmap out of date after IP change

If you have changed node IPs or modified the network configuration, it is likely that the monmap (monitor map) is obsolete.

Diagnosis:

# Review the current monmap
ceph mon dump

# Compare it with the configuration
grep mon_host /etc/ceph/ceph.conf

Solution:

# Extract an up-to-date monmap from a working monitor
ceph mon getmap -o monmap_actual

# Stop the problematic monitor before injecting
systemctl stop ceph-mon@[MON_ID]

# Inject the corrected monmap into the problematic monitor
ceph-mon -i [MON_ID] --inject-monmap monmap_actual

3. Time synchronization problems

Ceph monitors are very strict with time synchronization. An offset of more than 50ms can cause this error.

Diagnosis:

# Check the NTP/chrony status
chronyc sources -v
# or with ntpq
ntpq -p

# Check the clock skew between nodes
ceph status

Solution:

# Configure chrony correctly
systemctl enable chronyd
systemctl restart chronyd

# If you have local NTP servers, use them
echo "server your.ntp.server.local iburst" >> /etc/chrony.conf

4. Critical or corrupted monitors

If the monitors have suffered data corruption or are in an inconsistent state, they may not respond correctly.

Diagnosis:

# Review the monitor logs
journalctl -u ceph-mon@[MON_ID] -f

# Check the size of the monitor store
du -sh /var/lib/ceph/mon/ceph-[MON_ID]/

Solution:

# For a specific monitor, rebuild the mon store from the OSDs
systemctl stop ceph-mon@[MON_ID]
rm -rf /var/lib/ceph/mon/ceph-[MON_ID]/*
# Gather cluster map data from each OSD into a temporary mon store
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op update-mon-db --mon-store-path /tmp/mon-store
# Rebuild the monitor (requires a monmap and keyring extracted beforehand)
ceph-mon --mkfs -i [MON_ID] --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

5. Incorrect client configuration

Sometimes the problem is on the client side: outdated configuration, incorrect keys or poorly defined parameters.

Diagnosis:

# Review client-related settings in the cluster configuration
ceph config dump | grep client

# Check the authentication keys
ceph auth ls | grep client

Solution:

# Back up the existing key before touching anything
ceph auth export client.admin -o /root/client.admin.keyring.bak

# Regenerate the client keys if necessary
ceph auth del client.admin
ceph auth get-or-create client.admin mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'

# Regenerate a minimal client configuration
ceph config generate-minimal-conf > /etc/ceph/ceph.conf

When to ask for help (before it’s too late)

This error can escalate quickly if not handled correctly. If you find yourself in any of these situations, it’s time to stop and seek professional help:

  • All monitors are down simultaneously
  • You have lost quorum and cannot regain it.
  • Data appears corrupted or inaccessible
  • The cluster is in production and you can’t afford to experiment.

Ceph clusters in production are not trial and error territory. One false move can turn a connectivity problem into a data loss.

The best solution to the error “could not connect to ceph cluster despite configured monitors”: prevention

To avoid encountering this error in the future:

Proactive monitoring:

  • Configure alerts for monitor status
  • Monitor network latency between nodes
  • Monitor time synchronization

Best practices:

  • Always deploy at least 3 monitors (5 is better in production).
  • Keep regular backups of the monmap and keys (see the sketch after this list).
  • Document any network configuration change.
  • Use automation (Ansible, for example, is perfect for configuration changes).
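
A minimal backup sketch for the monmap and keys mentioned above (the /backup path is just an example location):

ceph mon getmap -o /backup/monmap.$(date +%F)
ceph auth export client.admin -o /backup/client.admin.$(date +%F).keyring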

Regular testing:

  • Periodically test connectivity between nodes
  • Simulate monitor failures in a development environment
  • Verify that your recovery procedures actually work

Need help with your Ceph cluster?

Distributed storage clusters such as Ceph require specific expertise to function optimally. If you have encountered this error and the above solutions do not solve your problem, or if you simply want to ensure that your Ceph infrastructure is properly configured and optimized, we can help.

Our team has experience solving complex Ceph problems in production environments, from urgent troubleshooting to performance optimization and high availability planning.

Don’t let a connectivity problem become a major headache. The right expertise can save you time, money and, above all, stress.

IBM Power11: Discover all the news

🆕 IBM Power11 is here

The wait is over: today IBM Power11 is officially unveiled, the new generation of servers that seeks to consolidate Power as a benchmark in performance, efficiency and openness.

What’s new with the new Power servers?

IBM is committed to a full-stack design, with integration from the processor to the cloud, designed to simplify management, reduce costs and enable AI without the need for GPUs. Power11 offers us:

  • IBM Spyre Accelerator for Generative AI and Business Processes

  • Up to 25% more cores per chip compared to Power10

  • DDR5 memory with improved bandwidth and efficiency

  • Concurrent maintenance, quantum-secure cryptography, and automated energy-efficient mode

  • Full support for AIX, IBM i, Linux, and hybrid deployments (Power Virtual Server)

See the Power11 models available today:

  • 🔹 IBM Power S1122

    Compact 2U server, ideal for space-constrained environments. Up to 60 Power11 cores, 4TB of DDR5 RAM and advanced cyber resiliency and energy efficiency capabilities. Perfect for Linux, AIX or IBM i loads in mixed production environments.

    🔹 IBM Power S1124

    Designed to consolidate critical loads in 4U form factor with up to 60 cores, 8 TB of memory and dual socket. Ideal for medium to large enterprises that want cloud flexibility, without sacrificing performance or security.

    🔹 IBM Power E1150

    Intermediate model with high scalability, designed for demanding loads and SAP deployments, databases or intensive virtualization.

    🔹 IBM Power E1180

    The most powerful of the Power11 family. Up to 256 cores, 64 TB of memory and improved energy efficiency up to 28%. Designed for AI, advanced analytics and massive consolidation in mission-critical environments with 99.9999% availability.

More open and hybrid-ready power

All Power11 models can also be deployed on Power Virtual Server, integrating AIX, IBM i and Linux loads in hybrid environments, without the need to rewrite applications. In addition, KVM and PowerVM support allows you to choose the hypervisor that best fits your environment.

Availability: IBM Power11 will be available globally starting July 25, 2025. The IBM Spyre accelerator will be available in the fourth quarter of 2025.

What about the future?

Power11 ushers in a new era where AI, quantum security and energy efficiency are no longer promises, but native features.

If you like the new Power11 models, we have good news for you, because at SIXE we sell and migrate Power11 (and Power10, 9…). At SIXE we have been helping our customers to make the most of the power of Power for years.

Learn how to build and deploy AI agents with LangGraph using watsonx.ai

Artificial intelligence no longer just responds; it also makes decisions. With frameworks like LangGraph and platforms like watsonx.ai, you can build agents that reason and act autonomously 🤯.

In this article, we will explain how to implement a ReAct (Reasoning + Action) agent locally and deploy it on IBM Cloud, all this with a practical example that includes a weather query tool 🌤️.

A practical guide to using your agents with LangGraph and Watsonx.ai

Project architecture

  • Machine with local project
    • Here you develop and test the agent with Python, LangGraph and dependencies.
  • ZIP (pip-zip)
    • Package with your code and additional tools.
  • Software Specification
    • Environment with libraries necessary to execute the agent.
  • watsonx.ai
    • Platform where you deploy the service as a REST API.
  • IBM Cloud Object Storage
    • Stores deployment assets.

Let’s prepare the environment for our agent

We need:

  • Python 3.12 installed
  • Access to IBM Cloud and watsonx.ai
  • Poetry for dependency management

Have you got everything? Well, first things first, clone the repository that we will use as an example. It is based on the official IBM examples.

git clone https://github.com/thomassuedbroecker/watsonx-agent-langgraph-deployment-example.git 
cd ./agents/langgraph-arxiv-research

First of all, let’s understand the example project.

[Developer Workstation] → [CI/Build Process] → [Deployment] → [IBM Cloud / watsonx.ai]

The main files of the agent are:

  • ai_service.py – Main file that starts the agent service in production.
  • agent.py – Core logic of the AI agent based on LangGraph. Defines the workflow.
  • tools.py – Tools connected to the agent (weather API).

Diagram of the Langgraph and watson.ai example repo

Let’s configure the environment

python3.12 -m venv .venv
source ./.venv/bin/activate
python3 -m pip install --upgrade pip
python3 -m pip install poetry

We also recommend using Anaconda or Miniconda. It lets us manage virtual environments and Python packages in a simple way and is widely used in ML.

For Python to find our custom modules (such as agents and tools), we need to include the current directory in the PYTHONPATH environment variable:

 

export PYTHONPATH=$(pwd):${PYTHONPATH}

echo ${PYTHONPATH}

 

Once the environment is ready, it is time for the variables. Create a config.toml file if you don’t already have one and fill it in with your IBM Cloud credentials:

[deployment]
watsonx_apikey = "YOUR_APIKEY"
watsonx_url = "" # Must follow this format: `https://{REGION}.ml.cloud.ibm.com`
space_id = "SPACE_ID"
deployment_id = "YOUR_DEPLOYMENT_ID"
[deployment.custom]
model_id = "mistralai/mistral-large" # underlying model of WatsonxChat
thread_id = "thread-1" # More info: https://langchain-ai.github.io/langgraph/how-tos/persistence/
sw_runtime_spec = "runtime-24.1-py3.11"

You will find your variables here:

https://dataplatform.cloud.ibm.com/developer-access

Once there, select your deployment space and copy the necessary data (API Key, Space ID, etc.).

Running the agent locally

It is time to test the agent:

source ./.venv/bin/activate
poetry run python examples/execute_ai_service_locally.py

Since it’s a weather agent, why don’t you try it with something like…?

“What is the current weather in Madrid?”

The console should give you the weather in Madrid. Congratulations! Now all that’s left is to deploy it on watsonx.ai.

Agent deployment in watsonx.ai

source ./.venv/bin/activate
poetry run python scripts/deploy.py

This script deploys the agent in watsonx.ai. deploy.py does the following:

  1. Read the configuration (config.toml) with your credentials and deployment space.
  2. Package your code in a ZIP file for uploading to IBM Cloud.
  3. Creates a custom software specification based on a base environment (such as runtime-24.1-py3.11).
  4. Deploy the agent as a REST service in watsonx.ai.
  5. Save the deployment_id , needed to interact with the agent later.

In short: it takes your local agent, prepares it, and turns it into a cloud-accessible service.

We check that everything is correct in watsonx.ai and go to the “Test” section. There we paste the following JSON (it is just one question):

{
  "messages": [
    {
      "content": "What is the weather in Malaga?",
      "data": { "endog": [0], "exog": [0] },
      "role": "User"
    }
  ]
}

Click on Predict and the agent will use the weather_service. In the response JSON you will see the agent’s flow: process → call tool → collect city → process and return temperature.

Your agent is up and running on watsonx.ai!

If you want to test it from the terminal to make sure that it works, just use:
source ./.venv/bin/activate
poetry run python examples/query_existing_deployment.py
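
If you prefer raw HTTP, you can also call the deployed endpoint directly. A hedged sketch (assumes jq is installed): the IAM token request is the standard IBM Cloud flow, but the ai_service path and version parameter are assumptions based on watsonx.ai's REST conventions, so copy the exact scoring URL from your deployment's details page:

# Exchange your API key for an IAM bearer token
TOKEN=$(curl -s -X POST "https://iam.cloud.ibm.com/identity/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=urn:ibm:params:oauth:grant-type:apikey&apikey=$WATSONX_APIKEY" | jq -r .access_token)

# Query the agent (region, deployment ID and version date are placeholders)
curl -s -X POST "https://us-south.ml.cloud.ibm.com/ml/v4/deployments/$DEPLOYMENT_ID/ai_service?version=2021-05-01" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"content":"What is the weather in Malaga?","role":"User"}]}'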
Conclusions

If you have any doubts, we recommend the following video tutorial, where you can follow the development connected with watsonx.ai step by step.

If you want to continue exploring these types of implementations or learn more about cloud development and artificial intelligence, we invite you to explore our AI courses.👇

SIXE: Your partner specialized in LinuxONE 5

Can you imagine what it would be like to have a powerful infrastructure without paying proprietary licenses? Well… you can🥳 with LinuxONE 5

In an ever-evolving technology environment, choosing a critical infrastructure based on Linux and AI requires not only advanced technology, but also a partner that masters every technical and strategic layer. IBM LinuxONE Emperor 5, powered by the IBM Telum II processor and its integrated AI accelerators, is a milestone in security, performance and scalability. At SIXE, we are experts in designing, implementing and supporting IBM LinuxONE 5 solutions, combining our expertise in IBM technologies with our role as a strategic partner of Canonical, Red Hat and SUSE.

What is IBM LinuxONE 5?

IBM LinuxONE Emperor 5 is a next-generation platform designed for companies that need maximum levels of security, energy efficiency… as well as the ability to manage AI and hybrid cloud workloads. It includes new features such as:

  • IBM Telum II processor : With multiple on-chip AI accelerators, ideal for inference on co-located data.
  • Confidential Containers : Advanced protection for applications and data in multi-tenant environments.
  • Quantum-safe encryption : Preparing for future threats from quantum computing.
  • 99.999999% availability: Architecture designed to minimize critical outages.
Features of IBM LinuxOne5 | SIXE Partner

This system is not just hardware: it is a comprehensive solution that integrates software, security and sustainability, positioning itself as an ally for complex digital transformations. And it comes with the advantage of open source: no dependence on proprietary licenses.

Open source and IBM experts

At SIXE, we are not intermediaries: we are certified engineers in IBM Power and open source technologies. Unlike large partners that outsource complex projects, at SIXE we lead each LinuxONE 5 implementation with an internal team specialized in:

  • IBM Power Hardware : Configuration and optimization of IBM LinuxONE Emperor 5 and Power10 (and future Power11) systems.
  • Operating systems: Support for Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES) and Ubuntu.
  • AI and hybrid infrastructure : Integration of containers, Kubernetes and AI tools (such as IBM Watsonx) with LinuxONE 5.
  • Security and compliance : We are experts in IBM security audits and licensing.

What do we offer at SIXE that makes clients stay?

Large enterprises often treat LinuxONE 5 implementation projects as one-off transactions. At SIXE, we work closely with your technical teams to ensure that the solution is tailored to your specific needs. What sets us apart:

  • Respond quickly to changes in requirements or architectures.
  • Maintain direct communication with systems managers.
  • Design migration plans from legacy (IBM i, AIX) to LinuxONE 5. You can see more details here.

Long-term relationships: Are you looking for a supplier or a strategic partner?

We don’t just work to close deals: we build lasting relationships. Once you’ve implemented LinuxONE Emperor 5, we’re still there for you. We offer technical support, training and upgrades, ensuring that your investment continues to generate long-term value.

Return on investment: SIXE is a safe investment.

At SIXE, every euro invested translates into real value. Without unnecessary layers of management or empty meetings, we focus resources on engineers and experts with more than 15 years of experience in IBM, Canonical and open source technologies. We are part of a global network of specialists recognized by leaders such as IBM and Canonical, which reinforces our ability to deliver exceptional results.

Not only LinuxONE 5

Although much of our business is focused on IBM solutions, we are also strategic partners of Canonical (Ubuntu), Red Hat and SUSE, and we work with technologies such as QRadar XDR. This diversity allows us to offer comprehensive solutions for your infrastructure.

Choosing SIXE as a partner for IBM LinuxONE 5 means betting on a human, technical and strategic approach. Don’t leave your critical infrastructure in the hands of intermediaries: trust a team that is as committed as you are to the success of your project.

👉 Find out more about our IBM LinuxONE 5 offering here.

Ready to transform your infrastructure with IBM LinuxONE 5 and SIXE? Contact us and let’s get started together.

IBM i 7.6 vs Ubuntu: analysis to choose (or combine) wisely

When it comes to IBM Power servers, many decisions seem like a battle between two worlds: the almost legendary robustness of IBM i 7.6 and the freedom of Ubuntu Linux. But what if the best choice is not one or the other, but both? In this article we cut to the chase: we tell you what no one else explains clearly, without selling you smoke and mirrors and without wedding ourselves to a single approach. Just the technical truth, well told.


IBM i: a closed (but very efficient) fortress

If you’ve worked with IBM i, you know what we’re talking about: stability, performance and a database that won’t crash even if you throw a nasty core dump on it.

IBM i is not just an operating system, but an integrated platform: OS, database (Db2 for i), security, backups, virtualization and native HA (PowerHA, Db2 Mirror) in a single environment optimized for Power10. The integration of these layers avoids intermediate layers or dependencies between external tools.

Technical matters: IBM i runs on a microkernel that manages persistent objects on disk with a native object-oriented, non-file-based model. Its journaling system guarantees consistency even in the face of power outages, and allows remote journaling for DR replication without the need for snapshots.

IBM i 7.6 improves native SQL performance, strengthens security (with integrated multi-factor authentication and more object-level encryption), and enables more modern APIs (REST, OpenAPI, JSON), which allow traditional business logic (RPG, COBOL) to be exposed as microservices. At SIXE we already have an analysis of all the new features of IBM i. If you want to take a look , click here.


Ubuntu in Power: freedom, but with responsibilities

On the other hand, if you are from the Linux team, you already know what Ubuntu brings: DevOps ecosystem, containers, microservices and official Canonical support for Power for years, with optimized images for the ppc64le architecture.

Ubuntu is not plug-and-play like IBM i, but it doesn’t pretend to be either. You can deploy PostgreSQL, MongoDB, Redis, Apache Kafka, Ceph… you name it. And with KVM (now available within PowerVM), you can use LXD, OpenStack or orchestrators like MAAS or Juju to manage the environment at scale.

Ubuntu or IBM i

But yes: there is no magic. You’ll have to build the stack yourself: HA with Pacemaker, backups, security with SELinux or AppArmor… And that implies having good Ansible playbooks or well-defined CI/CD pipelines. Nothing is done for you.

In HPC and AI, Ubuntu on Power is taking off strong: Power10 (and soon Power11) has brutal bandwidth, and with the upcoming IBM Spyre accelerator on the horizon, you can train models without relying on NVIDIA GPUs.


What about security?

This is where IBM i shines by design: the entire system is security-oriented. Each object has its own authority, with highly granular user profiles and an audit journal that logs everything that happens, without having to install and configure syslog-ng or ELK stack.

Ubuntu, on the other hand, has everything you need: ufw, auditd, encryption with LUKS, application-level protection with AppArmor or SELinux… but you have to integrate it manually and maintain it. A poorly patched Ubuntu environment is an easy target.

On IBM i, security patches are infrequent and tightly controlled; on Ubuntu there are updates almost daily. That’s not a bad thing, but it requires well-automated patch management, as in the sketch below.
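
One common way to automate that cadence on Ubuntu is the unattended-upgrades package, which applies security updates automatically. A minimal sketch:

# Install and enable automatic security updates
sudo apt-get install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# Preview what would be patched, without applying anything
sudo unattended-upgrade --dry-run --debug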


Costs: beware of what looks cheap

Many people see Ubuntu and say “free!”. But not all that glitters is gold. On Power servers, the hardware is the same, and the operating cost can skyrocket if you don’t automate well or if you need to replicate services that IBM i comes ready to use.

IBM i has a more expensive license, yes. But you can do more with less staff. If your load is critical, stable and doesn’t vary every week, in the medium term TCO can play in your favor.


Modernization: Do I stay with RPG or switch to microservices?

If you have code in RPG, IBM i 7.6 lets you continue to use it… and even modernize it with REST APIs, Node.js or Python (via PASE). VS Code is also getting into the game and gaining traction, so you can modernize and write code more easily.

Does your team prefer to work in containers, use CI/CD and deploy? Ubuntu. Nothing more to say.


Conclusion: one, the other… or both?

Sometimes it’s not a matter of choosing black or white. In many Power environments, what really works is combining the best of each world. Here are some recommendations:

Scenario → Recommendation

  • You already have IBM i with RPG → 💡 Keep it and modernize from the inside
  • New apps on Power → 🐧 Ubuntu with containers
  • Minimal downtime, no hassle → 🛡️ IBM i + Db2
  • Total freedom → 🧩 Ubuntu on Power
  • Reduce dependence on IBM over time → 🔄 Ubuntu with progressive migration
  • Mixed Linux + IBM i teams → 🐧🛢️ Hybrid approach: IBM i back end, Ubuntu front end


What if I don’t want to choose?

Good question. In fact, many companies don’t. They use IBM i for critical and stable loads (billing, ERP, etc.), and Ubuntu for everything new: APIs, frontends, microservices, AI.

This hybrid approach gives you the best of both worlds: the reliability of IBM i, with the agility and ecosystem of Ubuntu.


What’s next?

If you are in doubt, in the middle of planning, or sitting with the licensing spreadsheet open… at SIXE we help you analyze your environment and design the most realistic path: maintain, migrate or combine. Fill out the form here and we will contact you.

No smoke. No impossible promises. Only solutions that work (really), face to face with our engineers.

IBM i 7.6 | Everything you need to know about the latest version of IBM’s operating system

IBM i 7.6 will be available on April 18, 2025. It is the latest evolution of IBM’s enterprise operating system, designed exclusively to run on IBM Power servers (which, by the way, we support). This version maintains the integrated-platform philosophy that characterizes IBM i, but adds new features relevant to modern environments in security, availability and support for open source technologies.

In this article, we will explore all the new features, requirements and benefits of the latest IBM i 7.6 update.

This post is being updated continuously. On 10/04/2025, during IBM’s webcast, we will get all the details of IBM i 7.6 and update this article with the news.

Click here to access the webcast on 10/04/2025

What’s new in IBM i 7.6

Index

  1. News
  2. Requirements
  3. Licenses
  4. Life Cycle
  5. Linux or IBM i 7.6?
  6. IBM i 7.6 in 2025 and beyond?
An all-in-one system

IBM i 7.6 is much more than just an operating system: it integrates middleware, the Db2 for i database engine and a wide range of native tools for management, development and high availability. This means that, unlike models that require the integration of various components, with IBM i you have the advantage of a turnkey solution optimized for transactional workloads and legacy applications written in RPG or COBOL.

Main new features of IBMi 7.6

🛡️ Security as a fundamental pillar

  • Integrated multi-factor authentication (MFA):
    • Support for time-based one-time passwords (TOTP), such as Google Authenticator.
    • MFA available even without internet connection.
    • Extended protection for critical profiles such as QSECOFR.
    • Independent implementation for System Service Tools (SST) and Dedicated Service Tools (DST).
  • Protection of credentials and sensitive data:
    • New cryptographic APIs, such as PBKDF-2 and ECC/SHA3 algorithms.
    • Support for ASP 1 (Auxiliary Storage Pool) encryption.
    • Improvements in Digital Certificate Manager (DCM) to facilitate TLS certificate management.
  • Simplified regulatory compliance:
    • Advanced tools to comply with strict regulations, such as GDPR or HIPAA.

🧠 Improvement of Db2 for i

    • New SQL functionality such as data-change-table-reference in UPDATE and DELETE statements
    • New table SQLSTATE_INFO for debugging SQL errors
    • Enhancement of DUMP_PLAN_CACHE with optional filters
    • Direct view of BLOB columns from ACS
    • SQL services for new auditing and security functions

💻 Development with Code for IBM i (VS Code)

    • Support for free and fixed RPG.
    • DDS compilation from Git.
    • Batch Debug and Service Exit Point support.
    • Integrated Db2 extension: SQL validation, editable results, hover help.
    • Integration with AI for code analysis and natural language queries.

It appears that Code for IBM i (VS Code) is displacing RDi as the tool of choice for the community.


☁️ High availability and disaster recovery ( HA/DR )

  • PowerHA expands its integration with IBM Cloud, offering new capabilities to replicate and protect data in the cloud:
    • Cloud replication:
      • Support for IASP Volume (LUN-Level) Switching and FlashCopy.
      • Full automation to minimize downtime.
    • Hybrid scalability:
      • Resilient design for hybrid environments (on-premises + cloud).
      • Ideal for companies seeking business continuity without investing in additional hardware.

🧰 Navigator for i: new interface and more control

    • Full support for MFA from Navigator.
    • License expiration view from the dashboard + expiration alerts.
    • Commands such as CFGHOSTSVR to manage unsecured connections.
    • More secure connection and system monitoring tools.

IBM i 7.6 requirements

Only compatible with IBM Power10 servers with firmware level FW1060 or higher.

    • IBM recommends using HMC v10 or higher.
    • Power9 is not officially supported for this release.
    • Requires updated VIOS and PowerVM.

💰 IBM i 7.6 licenses

IBM keeps the per-core licensing model in different tiers (P05, P10…). The base license includes:

  • IBM i + Db2 + Navigator + PASE
  • Access to open source tools and integrated middleware
Licenses with additional costs:
    • PowerHA
    • Db2 Mirror
    • BRMS
    • RDi (if still in use)

Life Cycle

  • IBM i 7.6 will be supported until the middle of the next decade.

  • IBM i 7.4 enters the “fixes only” phase, with no new features.

  • IBM is committed to a release cycle every 3 years, and has been since 7.2


Ubuntu or IBM i 7.6?

It depends on what you are looking for:

  • Linux (Ubuntu, RHEL): modular, open, low initial cost, a wide pool of specialized staff, but you need to integrate the whole stack yourself (OS, database, HA, security…).

  • IBM i 7.6: all integrated, excellent performance per core, embedded security, legendary stability, fewer technical staff required.

We will soon publish an in-depth analysis of both operating systems, so stay tuned to our blog: in the coming days we will bring you updated information on how to choose the best option for your company.


IBM i 7.6 in 2025 and beyond? Conclusions

Is IBM i 7.6 for you? Well…

  • ✔️ You use Power10
  • ✔️ You depend on RPG or Db2 for i
  • ✔️ You want maximum security without complications

➡️ Then IBM i 7.6 is for you.

At SIXE, we can help you every step of the way.

Want to know if IBM i 7.6 is right for your company? Contact us and we will help you.

And if you are passionate about the IBM Power world: to celebrate the launch of our site Dame Power, the Spanish-speaking IBM Power community, we are giving away free webinars.

The next one is on 04/24/2025. We will cover news, tips and tricks for AIX 7.3.

We have limited places, so don’t think too much about it ;) Click here for more information about the webinars.


IBM Power 2025 Webinars: Learn for free with experts

Can you imagine finding solutions for your Linux, AIX, IBM i and more, all in one place?! 🙀 Well, now it is possible thanks to Dame Power, the Spanish-speaking IBM Power community.

At SIXE, we’re excited to be a part of the exclusive Dame Power webinars. A series of free sessions designed to help you dive deeper into the IBM Power ecosystem.

If you work with IBM i, AIX, Linux, PowerVM or Kubernetes, this is your opportunity to learn directly from experts and apply the knowledge in your projects. Discover the most innovative trends from experts, one-on-one.

📅 IBM Power 2025 Webinars

Free IBM Power webinars | AIX, IBM i, Linux, Kubernetes and more

Throughout the year, Dame Power will offer a series of webinars focused on key topics for IBM Power professionals:

Linux in Power: Truths, myths and tips to maximize your performance.
AIX 7.3: The evolution of modern UNIX and its impact on the enterprise.
KVM in PowerVM: Exploring new possibilities in virtualization.
Kubernetes on Power: Efficient container deployment and management.
IBM Power Security: Beyond marketing, real strategies to protect your systems.

Why join these webinars?

By attending these sessions, you will be able to:

✔️ Get practical troubleshooting tips for IBM i, AIX, Linux and more.
✔️ Discover trends in security, cloud, AI and edge computing.
✔️ Learn from IBM Champions working with Power on real-world cases.
✔️ Follow step-by-step advanced configurations and server optimization.

How to register for Dame Power webinars?

It’s easy! All you have to do is:

1️⃣ Click here to subscribe to the Dame Power Substack.
2️⃣ Check the welcome email, where you will find the registration form.
3️⃣ Once you fill it out, you will receive the date, time and link to access the webinar. Access the webinars at this link.
4️⃣ Join, ask questions and boost your knowledge.

🎁 Additional benefits for attendees

If you register for these webinars, you will also gain access to:

🎓 Exclusive discounts on SIXE courses.
📄 Premium content: Offline access to webinars.
🤝 Community: Be part of the largest group of IBM Power experts in Spanish.

Get ready to learn #FullOfPower.

These webinars are more than just lectures: they are a real opportunity to improve your skills, connect with experts and apply new knowledge in your day-to-day work.

📢 Share this event so that other IT teams can benefit from this knowledge.

SIXE