Open source storage for AI and HPC: when Ceph is no longer an alternative but the only viable way forward

When CERN needs to store and process data from the Large Hadron Collider (LHC, the world’s largest and most powerful particle accelerator), scale is everything. At this level, technology and economics converge in a clear conclusion: open source technologies such as Ceph, EOS and Lustre are not an “alternative” to traditional enterprise solutions; in many scenarios, they are the only viable way forward.

With more than 1 exabyte of disk storage, 7 billion files and 45 petabytes per week processed during data collection campaigns, the world’s largest particle physics laboratory is moving into a field where classical capacity-based licensing models no longer make economic sense.

This reality, documented in the paper presented at CHEP 2025, “Ceph at CERN in the multi-datacentre era”, reflects what more and more universities and research centers are realizing: there are use cases where open source does not compete with enterprise solutions; it defines its own category, one for which traditional architectures were simply not designed.


CERN: numbers that change the rules

The CERN figures are not only impressive; they explain why certain technologies are chosen:

  • >1 exabyte of disk storage, distributed over ~2,000 servers with 60,000 disks.

  • >4 exabytes of annual transfers.

  • Up to 45 PB/week and sustained throughput of more than 10 GB/s during data collection periods.

Architecture is heterogeneous by necessity:

  • EOS for physics files (more than 1 EB).

  • CTA (CERN Tape Archive) for long-term archiving.

  • Ceph (more than 60 PB) for block, S3 object and CephFS storage, underpinning OpenStack.

It is not only the volume that is relevant, but also the trajectory. In a decade, they have gone from a few petabytes to exabytes without disruptive architectural leaps, simply by adding commodity nodes horizontally. This elasticity does not exist in proprietary arrays with capacity-based licenses.

The economics of the exabyte: where capacity models fail

Current licensing models in the enterprise market are reasonable for typical environments (tens or hundreds of terabytes, predictable growth, balanced CapEx and OpEx). They provide integration, 24×7 support, certifications and a partner ecosystem. But at petabyte or exabyte scale with rapid growth, the equation changes.

  • At SIXE we are an IBM Premier Partner, and we have watched licensing evolve towards capacity-based models.

    • IBM Spectrum Virtualize uses Storage Capacity Units (SCUs), roughly 1 TB per SCU. The annual cost per SCU can range between €445 and €2,000, depending on volume, customer profile and the conditions of each environment.

    • IBM Storage Defender uses Resource Units (RUs). For example, IBM Storage Protect consumes 17 RUs/TB for the first 100 TB and 15 RUs/TB for the next 250 TB, allowing resiliency capabilities to be combined under a unified license.

  • Similar models exist at NetApp (term-capacity licensing), Pure Storage, Dell Technologies and others: you pay for managed or provisioned capacity.

All of this works in conventional enterprise environments. However, managing 60 PB under per-capacity licensing, even with high volume discounts, can translate into millions of euros per year in software alone, without counting hardware, support or services. At that point, the question is no longer whether open source is “viable”, but whether there is any realistic alternative to it at these scales.
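To put an order of magnitude on that, here is a purely illustrative calculation using the SCU list-price range quoted above, before any volume discount:

60 PB ≈ 60,000 TB ≈ 60,000 SCUs
60,000 SCUs × €445 to €2,000 per SCU per year ≈ €27 to €120 million per year

Real contracts at this scale are negotiated well below list price, but even aggressive discounts leave a recurring software bill that an open source stack simply does not carry.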

Technical capabilities: an already mature open source

The economic advantage would not matter if the technology were inferior. It is not. For certain AI and HPC workloads, the capabilities are equivalent or superior:

  • Ceph offers unified storage with thin provisioning, compression in BlueStore, snapshots and COW clones without significant penalty, multisite replication (RGW and RBD) and tiering between media (a minimal sketch of these features follows this list). And if you want your team to learn how to take full advantage of Ceph, we cover exactly that in our training (more on this below).

  • CERN documents multi-datacentre strategies for business continuity and disaster recovery using stretch clusters and multisite replication, with RPO/RTO comparable to enterprise solutions.
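As a minimal, hedged sketch of those efficiency features in practice (the pool and image names here are made up for illustration, and pool-level compression assumes BlueStore OSDs):

# Thin-provisioned 10 TB block image: space is consumed only as data is written
rbd create datasets/scratch01 --size 10T

# Transparent compression on the backing pool
ceph osd pool set datasets compression_mode aggressive
ceph osd pool set datasets compression_algorithm zstd

# Instant snapshot, then a copy-on-write clone for experiments
rbd snap create datasets/scratch01@baseline
rbd snap protect datasets/scratch01@baseline
rbd clone datasets/scratch01@baseline datasets/scratch01-experiment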

IBM recognizes this maturity with IBM Storage Ceph (a derivative of Red Hat Ceph Storage), which combines open source technology with enterprise-level support, certifications and SLAs. At SIXE, as an IBM Premier Partner, we implement IBM Storage Ceph when business support is required, and Ceph upstream when flexibility and independence are the priority.

Key architectural difference:

  • IBM Spectrum Virtualize is an enterprise layer that manages heterogeneous block storage, with dedicated nodes or instances and advanced mobility, replication and automation features.

  • Ceph is a natively distributed system that serves blocks, objects and files from the same horizontal infrastructure, eliminating silos. Objects for dataset pipelines, blocks for metadata, file shares for collaboration: this unification brings clear operational advantages (see the sketch below).
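A minimal sketch of what that unification looks like in practice: one cluster, three access methods. The names are illustrative, and the commands assume the orchestrator, MDS and object gateway services are already available:

# Block: a volume for a database or VM
rbd create vms/postgres01 --size 2T

# File: a shared CephFS volume for collaboration
ceph fs volume create shared

# Object: an S3-style user for a dataset pipeline
radosgw-admin user create --uid=ai-pipeline --display-name="AI dataset pipeline"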

[Illustration: three data flows in different colors converging into a single glowing structure, symbolizing the integration and scalability of mature open source storage.]

Large-scale AI and HPC: where the distributed shines

Training foundational models means reading petabytes in parallel, with aggregate bandwidths of 100 GB/s or more. Inference requires sub-10 ms latencies with thousands of concurrent requests.

Traditional architectures with SAN controllers suffer bottlenecks when hundreds of GPUs (A100, H100…) access data at the same time. It is estimated that about 33% of GPUs in corporate AI environments operate at less than 15% utilization due to storage saturation, with the consequent cost in underutilized assets.

Distributed architectures (Ceph, Lustre, BeeGFS) were born for these patterns:

  • Lustre powers 7 of the top 10 supercomputers in the Top500, with >1 TB/s aggregate throughput in large installations. Frontier (ORNL) uses ~700 PB of Lustre and sustains writes of over 35 TB/s.

  • BeeGFS scales storage and metadata independently, exceeding 50 GB/s sustained with tens of thousands of clients in production.

  • MinIO, optimized for object storage in AI, has demonstrated >2.2 TiB/s read performance in training, difficult to match with centralized architectures.

Integration with GPUs has also matured: GPUDirect Storage allows GPUs to read from NVMe-oF without passing through the CPU, reducing latency and freeing up cycles. Modern open source systems support these protocols natively; proprietary solutions often depend on firmware and certifications that take quarters to arrive.

SIXE: sustainable open source, with or without commercial support

Migrating to large-scale open source storage is not trivial. Distributed systems require specific experience.

At SIXE we have been working with Linux and open source for more than 20 years. As an IBM Premier Partner, we offer the best of both worlds:

  • IBM Storage Ceph and IBM Storage Scale (formerly Spectrum Scale/GPFS) for those who need guaranteed SLAs, certifications and 24×7 global support.

  • Ceph upstream (and related technologies) for organizations that prefer maximum flexibility and control.

It is not a contradictory position, but a strategic one: different profiles, different needs. A multinational bank values certifications and enterprise support. A research center with a strong technical team can operate upstream directly.

Our intensive Ceph training consists of three-day hands-on workshops: real clusters are deployed and design decisions are worked through. Knowledge transfer reduces dependence on consultants and empowers the internal team. If your team still has little experience with Ceph, click here to see our introductory course; if, on the other hand, you want to get the most out of Ceph, here is the advanced Ceph course, where your team will learn to combine two crucial factors right now: storage + AI.

 

Our philosophy: we do not sell technology, we transfer capability. We deploy IBM Storage Ceph with full support, Ceph upstream with our specialized support, or hybrid approaches, on a case-by-case basis.

The opportunity for massive data and science

Several factors align:

  • Data is growing exponentially: a NovaSeq X Plus can generate 16 TB per run; the SKA telescope will produce exabytes per year; AI models demand ever-larger datasets.

  • Budgets do not grow at the same pace. Capacity-based licensing models make it unfeasible to scale proprietary systems at the required rate.

Open source solutions, whether upstream or commercially supported (e.g., IBM Storage Ceph), eliminate this dichotomy: growth is planned around hardware cost and operational capacity, with software whose costs do not scale linearly per terabyte.

Centers such as Fermilab, DESY, CERN itself and the Barcelona Supercomputing Center have demonstrated that this approach is technically feasible and operationally superior for their cases. In its recent paper, CERN details multi-datacentre strategies for DR with Ceph (stretch and multisite), achieving availability comparable to enterprise solutions, with flexibility and total control.

A maturing ecosystem: planning now

The open source storage ecosystem for HPC and AI is evolving fast:

  • The Ceph Foundation (Linux Foundation) coordinates contributions from CERN, Bloomberg, DigitalOcean, OVH and IBM, among others, aligned with real production needs.

  • IBM maintains IBM Storage Ceph as a supported product and actively contributes upstream.

It is the ideal confluence of open source innovation and enterprise support. For organizations with a horizon of decades, the question is no longer whether to adopt open source, but when and how to do so in a structured way.

The technology is mature, the success stories are documented, and both community and commercial support exist. What is often missing is the expertise to draw up the roadmap: model (upstream, commercial or hybrid), sizing, training and sustainable operation.

SIXE: your partner towards a storage that grows with you

At SIXE we work at that intersection. As an IBM Premier Partner we have access to world-class support, roadmaps and certifications. At the same time, we maintain deep expertise in upstream Ceph and other ecosystem technologies, because there is no one-size-fits-all solution.

When a center contacts us, we don’t start with the catalog, but with the key questions:

  • What are your access patterns?

  • What growth do you project?

  • What capabilities does your team have?

  • What risks can you assume?

  • What is the budget (CapEx/OpEx)?

The answers guide the recommendation: IBM Storage Ceph with enterprise support, upstream with our support, a hybrid, or even an assessment of whether a traditional solution still makes sense in your case. We design solutions to work for 5 or 10 years; what matters to us is creating durable, sustainable solutions over time ;)

Our commitment is to sustainable technologies, not subject to commercial fluctuations, that provide control over infrastructure and scale both technically and economically.

The case of CERN is not an academic curiosity: it shows where storage for data-intensive workloads is heading. The question is not whether your organization will get there, but how it will arrive: prepared, or in a rush. The window of opportunity to plan calmly is open. The success stories exist. The technology is ready. So is the ecosystem. What remains is the strategic decision to invest in infrastructure that will accompany your organization through decades of data growth.

Contact us!

Does your organization generate massive volumes of data for AI or research? At SIXE we help research centers, universities and innovative organizations design, implement and operate scalable storage with Ceph, Storage Scale and other leading technologies, both upstream and with IBM enterprise support, according to your needs. Contact us for a no-obligation strategic consultation.


See you at Common Iberia 2025!

SIXE’s team will attend as part of Common Iberia 2025. We will be back in Madrid on November 13th and 14th for the reference event of the IBM i, AIX and Power ecosystem. Two days dedicated to the latest developments in Power technology, from the announcement of Power 11 to real AI use cases, with international experts, IBM Champions and community leaders.

Click on the image to access the event registration form.

Our sessions at Common Iberia Madrid:

Document Intelligence on IBM Power with Docling and Granite
Discover how to implement advanced document intelligence directly into your Power infrastructure.


AIX 7.3 news and best practices: performance, availability and security
Everything you need to know about the latest AIX 7.3 capabilities to optimize your critical systems.


Ubuntu on Power: containers, AI, DB and other 100% open source wonders
Explore the possibilities of the open source ecosystem on Power architectures.


ILE RPG – Using IBM i Services (SQL) and QSYS2 SQL Functions
Learn how to take full advantage of native IBM i SQL services in your RPG applications.


In addition to the presentation of Project BOB (IBM’s integrated development assistant), the event includes sessions on AI, high availability, PowerVS, modern development with VS Code, and an open discussion on AI use cases in IBM i.


✅ Reserve your place now

Connect with the IBM Power community, share success stories and discover the latest innovations in critical systems – we look forward to seeing you in Madrid!

How to make your first N8N AI agent for free

Automations are the order of the day. You have surely read thousands of news stories and used ChatGPT. However, there is a way to get MUCH more out of it. Today we are going to teach you how to take your first steps: we’ll show you how to create your first intelligent AI agent with n8n from scratch, completely free, and without complicating your life. If your business receives repetitive questions by email, forms or chat, this tutorial is for you.

We will set up a chatbot that will answer questions about your company, collect customer data when necessary, and also check if they already exist in your database so as not to duplicate them. All this using n8n with Ollama, local AI models, and Docker.

What is n8n? What does it have to do with AI and why can it be useful for my company?

n8n is an open source automation platform (like Zapier or Make) that allows you to connect applications, databases, APIs and, most importantly for us today, artificial intelligence models.

What makes n8n special is that you can create visual workflows by dragging nodes, without needing to be an expert programmer. And because it is open source, you have full control over your data.

The 3 ways to use n8n (and which one is the best)

Before we get into the nitty-gritty, let’s explain the options you have for working with n8n:

1. n8n Cloud (the fast option)

The cloud version of n8n. You sign up, pay a monthly subscription and that’s it. Zero installation, zero maintenance. Perfect if you want to get started right away, but it has limitations in the free plan and your data sits on third-party servers. The problem? Perhaps the price (see n8n’s 2025 pricing).

2. Locally on your computer (what we will do today)

You install n8n on your local machine with Docker. It is 100% free, ideal for learning and testing. The problem is that it only works when your computer is on. If you turn it off, no more workflows.

3. VPS with n8n (the most practical and efficient option)

You hire a VPS (virtual private server) and set up n8n there. Your AI agent will be available 24/7, 365 days a year. This is the professional option if you want your business to run without interruptions. It is the option we recommend and we have good news: With the code SIXE you can get a discount on your VPS to have n8n always available. Contact us for more information on how to set it up.

Today we are going to use option 2 (local with Docker) so you can learn without spending a euro. Then, when you see the potential, you can easily migrate to a VPS.

Prerequisites for using n8n locally

Before you start, make sure you have:

  • Windows 10/11 (64-bit)
  • We recommend at least 8GB of RAM (required to run AI models locally).
  • Eager to learn (this is free)

Step 1: Install Docker

Docker Desktop is the easiest way to use Docker on Windows. Download it by clicking here. To verify that it is installed, run docker --version.

If you see the version, perfect! Docker is ready.

Step 2: Install n8n with Docker Desktop

Now comes the easy part. With a single command you will have n8n up and running.

Steps:

  1. Open PowerShell
  2. Execute this command:
docker run -it --rm --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n n8nio/n8n

What does this command do?

  • docker run: runs a container
  • -it: interactive mode (you will see the logs in real time)
  • --rm: deletes the container when you close it (don’t worry, the data is saved in the volume)
  • --name n8n: names the container “n8n”
  • -p 5678:5678: maps port 5678 (you will access it through http://localhost:5678)
  • -v n8n_data:/home/node/.n8n: creates a volume to store your workflows
  3. Wait 30-60 seconds while the n8n image downloads.
  4. When you see something like Editor is now accessible via: http://localhost:5678/, you’ve got it.
  5. Open your browser and go to: http://localhost:5678
  6. Create your local account:
    • Email: the one you want (it’s local, it’s not sent anywhere)
    • Password: the one you want
    • Name: your name or your business name

We already have everything we need to use n8n! For any doubts, we recommend following the official n8n tutorial (click here to see it). If you are going to use Ollama, as in our case, we recommend the official pack that includes Ollama, so you have everything in the same environment. Click here for n8n + Ollama.

Step 3: Install Ollama (your local AI model)

To use AI in n8n we need a language model. We are going to use Ollama, which runs AI models directly on your computer, for free and without limits. In other words, you can use your own computer’s resources to run AI models.

Install Ollama on Windows:

  1. Download Ollama by clicking here (official)
  2. Install Ollama
  3. Verify the installation:
    • Open PowerShell
    • Write: ollama --version
    • You should see the installed version

Download the recommended template:

We are going to use Qwen2.5:1.5b, a small but powerful model, perfect for enterprise chatbots. It is fast and does not need a supercomputer. You can also find thousands of other models to use here.

In the shell, run:

ollama pull qwen2.5:1.5b

Verify that it works:

ollama run qwen2.5:1.5b

If you get an interactive prompt where you can type, it works. Type /bye to exit.

You must take one very important thing into account: depending on the AI model, an agent may or may not be able to use tools. You can also try other models.

The larger the model, the better the responses, but the more RAM you need, and that translates into longer response times.
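If you are not sure what will fit in your RAM, Ollama itself can tell you; a quick check (assuming Ollama is already installed):

ollama list   # models downloaded locally and their size on disk
ollama ps     # models currently loaded in memory and how much RAM they use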

 

Step 4: Configure Ollama for n8n

Here comes a critical step. Docker Desktop on Windows has a little network “problem” that we need to work around: Ollama runs on your Windows host, but n8n runs inside a Docker container, and they need to talk to each other.

The solution:

  1. Stop n8n if you have it running (Ctrl + C in PowerShell)
  2. Restart n8n with this command:
docker run -it --rm --name n8n -p 5678:5678 --add-host=host.docker.internal:host-gateway -v n8n_data:/home/node/.n8n n8nio/n8n

The parameter --add-host=host.docker.internal:host-gateway allows n8n to access Ollama.

  3. Make a note of this URL because you will need it: http://host.docker.internal:11434

This is the address you will use to connect n8n to Ollama.
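Before wiring anything up in n8n, it is worth checking that Ollama actually answers at that address. A quick, hedged check; the second command assumes the n8n container is running and that its image includes BusyBox wget, which the official image normally does:

# From Windows (PowerShell): Ollama listening on localhost
curl http://localhost:11434/api/tags

# From inside the n8n container: the same API via host.docker.internal
docker exec n8n wget -qO- http://host.docker.internal:11434/api/tags

If both commands return a JSON list of models, the connection between n8n and Ollama will work.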

Step 5: Create your first agent in n8n

How’s it going? Now the fun begins. Let’s create a chatbot that:

  • Answers questions about your business using information from a document
  • Collects data from interested customers
  • Checks whether the client already exists before saving it
  • Alerts your team when someone needs human attention

Import the base workflow:

At SIXE we like to make things easy for you. Instead of creating everything from scratch, we will give you a template that you can then customize.

  1. In n8n, click on “Workflows” (top menu).
  2. Click on the “+ Add workflow” button and select “Import from File”.
  3. Download the SIXE n8n AI tutorial template here.
  4. Import the file into n8n (in the workflow there is an options icon; click it and choose import from file).

Configure the AI model (Ollama):

  1. In the workflow, drag a new node from the left pane
  2. Search for “Ollama Chat Model” and add it to the canvas
  3. Click on the Ollama node to configure it:
    • Base URL: http://host.docker.internal:11434
    • Model: qwen2.5:1.5b
    • Temperature: 0.7 (controls creativity: 0 = very precise, 1 = very creative)
  4. Connect the Ollama node to the “AI Agent” node:
    • Drag from the (bottom) connection point of the Ollama node
    • to the connection point on the AI Agent node
    • This tells the agent to use Ollama as its brain.

Customize the agent prompt:

The prompt is the personality of your chatbot. Here you tell it what it does, how it talks and what information it has.

  1. Click on the node “AI Agent”.
  2. In “System Message”, copy this prompt and customize it:
You are the official virtual assistant of "[YOUR COMPANY]".

Your role is to help users who write to the website chatbot, offering clear, useful and truthful answers.

Main rules:
- Never make up information
- If you don't know something, admit it and offer to hand over to the human team
- Be professional but approachable

Information about [YOUR COMPANY]:
[Describe your business here: what you do, services, prices, opening hours, etc.]

Example:
"We are an automation agency that helps companies save time using tools like n8n, Airtable and Make. Our services include:
- Initial consultancy (free)
- Implementation of automations (from €500)
- Training for teams (€200/person)
Opening hours: Mon-Fri, 9:00 to 18:00"

If you detect interest in hiring:
Politely ask for:
- Name
- Email
- Phone (optional)
- What they specifically need

Once you have this data, it will be saved automatically.

If you cannot help:
Offer to hand over to the human team and contact via WhatsApp.

Add memory to the chatbot:

For the chatbot to remember the conversation (not repeat questions), it needs memory.

  1. Click on the node “AI Agent”.
  2. Look for the “Memory” section in the agent options.
  3. Add a memory node (a simple memory that remembers the last messages; the default of 5 messages should be enough).

You already have a functional chatbot. But let’s add more spice to it.

Step 6: Add knowledge and tools to the agent

The tools are like apps that the agent can use when it needs them. Let’s add the essential ones. To start, click on “add tool” on the agent.

Tool 1: Google Docs (knowledge base)

Instead of putting all the info in the prompt (which has a limit), we will use a Google Doc as a knowledge base.

  1. Create a Google Doc with as much information as possible about your business:
Question: How much do your services cost?
Answer: Our basic service costs €X, the premium one €Y...

Question: How long does a project take?
Answer: Between 2 and 4 weeks, depending on complexity...

[Add all your frequently asked questions, opening hours, contact details, etc.]
  2. In n8n, drag in the “Google Docs Tool” node.
  3. Connect your Google account. This step is a bit tedious if you are inexperienced, but we promise to guide you in the easiest way possible.
    1. Sign in to Google Cloud Console
    2. Create a project if you do not have one
    3. Go to APIs and services and, under credentials, configure the consent screen.
    4. Once that is done, add the Google service you need. In our case, Google Docs (enable the Google Docs API).
    5. Go back to APIs/credentials and create an OAuth client. The most important things are to grant permissions and, under “Authorized redirect URLs”, paste the URL that n8n gives you.
    6. Copy the client ID and the secret and paste them into n8n.
  4. Select the document you created
  5. Connect the node to the AI Agent

Now the chatbot can consult that document when someone asks a question. Remember that it is important to tell the agent when to use each tool in the “System Message” of the agent configuration.

Tool 2: Search contact (Airtable)

Before saving a customer, you have to see if it already exists. We will use Airtable as a simple CRM. Create a new tool and attach it to the agent.

Preparation in Airtable:

  1. Create a free account on airtable.com
  2. Create a new Base called “CRM”.
  3. Create a “Contacts” table with these columns:
    • Name (text)
    • Email (email)
    • Telephone (phone)
    • Date of contact (date)

In n8n:

  1. Drag the node “Airtable Tool”.
  2. Configure:
    • Operation: “Search”.
    • Base: [your CRM base].
    • Table: Contacts
    • Activate “From AI” in Filter Formula

This allows the agent to search if an email already exists.

Tool 3: Save/Update contact (Airtable)

If the contact does not exist, we save it. If it does exist, we update it only if there is new data. Again, we add a new tool.

  1. Drag another node “Airtable Tool”.
  2. Configure:
    • Operation: “Create or Update” (Upsert)
    • Base: [your CRM base].
    • Table: Contacts
    • Matching Column: Email
    • Activate “From AI” in all the fields.

The agent can now automatically save contacts without duplicating them.

Tool 4: Notify the team via Slack or Telegram.

When someone needs human attention, the chatbot can alert via Slack/Telegram.

  1. Drag the “Slack Tool” node (or Telegram if you prefer).
  2. Connect your Slack account
  3. Select the channel where you want the notifications
  4. In the agent’s prompt, add:
If the user asks to speak to a person or their case is complex, use the Slack tool to notify the team with:
- The user's name
- Email or phone
- A brief summary of the problem

Step 7: Activate and test the chatbot

Everything is ready! It’s time to test.

  1. Try the chatbot:
    • Click on “Test workflow”.
    • Write: “Hello, what services do you offer?”
    • Check how it responds using the Google Doc info.
  2. Try saving a contact:
    • Write: “I’m interested, I’m Juan Perez, my email is juan@ejemplo.com”.
    • The chatbot should save the contact in Airtable
    • Check it on your Airtable table
  3. Try a duplicate:
    • Write: “I’m Juan Perez again, my phone is 666777888”.
    • The chatbot should update the existing record, not create a new one.

Step 8: Integrate into your website

The chatbot is already working, but it is in localhost. To use it on your website you need two things:

Option A: Migrate to VPS (recommended)

As you know, we work with Krystal, which offers custom VPSs and is committed to the environment :) You can also take advantage of a discount with the code “SIXE”. With a VPS your chatbot will be available 24/7. n8n offers the option to embed it on your website, so it’s perfect.

If you want to learn how to do it with instructors, setting it up correctly with HTTPS, a custom domain and everything ready for production, we offer a basic course and an advanced course on n8n.

Option B: Use ngrok (temporary, for testing only)

If you want to test it on your website right away without a VPS, use ngrok:

ngrok http 5678
  • Copy the URL it gives you (something like https://xyz.ngrok.io)
  • Use that URL instead of localhost on your website

Important: The ngrok URL changes every time you restart it. It is not for production.

n8n stands for freedom with AI

And that’s it. You have your first n8n AI agent running locally with Ollama. In production we recommend OpenAI, especially gpt-4o-mini, thanks to its price and good performance. Now it’s your turn to experiment: try other models, adjust prompts, add more tools.

Doubts? Want us to train you or your team to set up n8n and AI agents in production? Write to us.

And if you enjoyed the article, share it with others who are looking to automate with AI without spending a lot of money.

 

Additional resources:

How to implement high-availability NFS in Ceph using Ganesha-NFS

Introduction to Ceph and Ceph-Ganesha

Ceph-Ganesha is an NFS server embedded within Ceph, with powerful orchestration features that enable high availability and dynamic management on a multi-node Ceph cluster. We will focus on the declarative simplicity of its deployment and on showing off its HA capabilities.

 

Ceph is an open-source, software-defined storage platform that delivers highly scalable object, block, and file storage from a unified cluster. At its core, Ceph’s architecture is built on a distributed network of independent nodes. Data is stored across OSDs (Object Storage Daemons), managed by Monitors, and orchestrated by Managers.

 

Ceph architecture explained

The Ceph File System (CephFS) is a POSIX-compliant file system that sits atop this infrastructure, providing a distributed and fault-tolerant namespace. For a system administrator, Ceph offers a great alternative to traditional storage arrays by providing a single, resilient platform that can grow linearly with the addition of commodity hardware.

 

Its self-healing and self-managing capabilities are key benefits, reducing the operational overhead typically associated with petabyte-scale storage.

 

What is NFS Ganesha in Ceph?

NFS Ganesha is an open-source NFS server that acts as a user-space gateway, a key distinction from conventional NFS servers that reside within the operating system’s kernel. This fundamental design choice provides a more robust and stable service environment. A bug in a user-space daemon is far less likely to cause a catastrophic system failure, a crucial advantage for a critical service endpoint. Ganesha’s architecture is also designed for maximum compatibility, supporting a full range of NFS protocols from NFSv3 to NFSv4.2, ensuring it can serve a diverse client base.

 

The true genius of Ganesha lies in its File System Abstraction Layer, or FSAL. This modular architecture decouples the NFS protocol logic from the underlying storage. For a Ceph environment, the FSAL_CEPH module is the key, enabling Ganesha to act as a sophisticated Ceph client. This means administrators can provide a consistent NFS interface to clients while benefiting from the full power and scalability of the Ceph cluster, all without exposing the underlying Ceph infrastructure directly. If you would like to learn more about Ceph, we offer a practical course on Ceph.

[Illustration: a data center of glowing Ceph storage nodes, with a cartoon Ganesha figure using its many arms to manage NFS exports, cables and servers, a nod to high availability and orchestration.]

Cephadm integration: Declarative deployment of Ceph-Ganesha

The integration of Ganesha with the Ceph orchestrator (cephadm) elevates its deployment from a manual, host-specific task to an elegant, cluster-wide operation. This partnership allows for a declarative approach to service management, where a single command can manage the entire lifecycle of the Ganesha service.

 

For any mission-critical service, a system administrator’s primary concern is ensuring business continuity. Unplanned downtime can lead to significant data loss, loss of productivity, and damaged reputation. High Availability (HA) is the architectural principle that addresses this concern by eliminating single points of failure. For an NFS service, this means that if one server node goes offline, another node can seamlessly take over its duties. This provides administrators with peace of mind and allows for planned maintenance without impacting the end-user. For Ceph, its inherent distributed nature is the perfect complement to an HA NFS service, as the underlying storage is already resilient to node failures.

 

Preparing CephFS Storage for Ganesha

A successful Ganesha deployment begins with preparing the underlying CephFS storage. A seasoned administrator will provision the necessary pools to host the filesystem data and metadata, setting the stage for the service to be deployed.

 

Create a dedicated pool for NFS Ganesha data with autoscaling enabled

# sudo ceph osd pool create ganeshapool 32 32

# sudo ceph osd pool set ganeshapool pg_autoscale_mode on

 

Create a metadata pool, marked as bulk for optimized behavior

# sudo ceph osd pool create ganeshapool_metadata 16 16

# sudo ceph osd pool set ganeshapool_metadata bulk true

 

Tie the pools to a new CephFS filesystem

# sudo ceph osd pool application enable ganeshapool cephfs

# sudo ceph osd pool application enable ganeshapool_metadata cephfs

# sudo ceph fs new ganeshafs ganeshapool_metadata ganeshapool

# ceph fs set ganeshafs max_mds 3

# ceph orch apply mds ganeshafs --placement="3 ceph-node1 ceph-node2"

Deploying the Ceph NFS Ganesha Service

With the storage foundation laid, Ganesha itself can be deployed either with YAML service specifications or with simple orchestration CLI commands. The ceph orch apply command is a powerful instruction to the orchestrator, telling it to ensure the desired state of the NFS service. By specifying a placement count and listing the cluster’s hosts, the administrator ensures that a Ganesha daemon will run on every designated node, a critical step for a resilient and highly available service.

 

Deploy the Ganesha NFS service across all three specified hosts

 

# sudo ceph orch apply nfs myganeshanfs ganeshafs --placement="3 ceph-node1 ceph-node2 ceph-node3"

 

This single command initiates a complex, multi-faceted deployment. The orchestrator pulls the necessary container images, configures the daemons, and distributes them across the specified hosts. This contrasts sharply with manual, host-by-host installations, showcasing the power of centralized orchestration. These scenarios are covered in detail in our advanced Ceph course, where we cover step-by-step orchestration with cephadm and HA configurations.
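For teams that prefer the declarative route mentioned earlier, the same service can be described in a YAML service specification and handed to the orchestrator. A hedged sketch; exact spec fields can vary slightly between Ceph releases:

# cat nfs-ganesha.yaml
service_type: nfs
service_id: myganeshanfs
placement:
  hosts:
    - ceph-node1
    - ceph-node2
    - ceph-node3

# sudo ceph orch apply -i nfs-ganesha.yaml

Keeping this file in version control gives you the same benefits as any other piece of infrastructure-as-code: reviewable changes and reproducible deployments.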

 

Advanced capabilities: Dynamic exports and service resilience

Once the Ganesha service is running, its power is further revealed through its dynamic export management capabilities. Instead of editing static configuration files, an expert can create, modify, and delete NFS exports on the fly using a series of simple commands. This is invaluable in dynamic environments where storage needs change rapidly.

 

Create a new export to make the CephFS filesystem accessible

 

# sudo ceph nfs export create cephfs myganeshanfs /ganesha ganeshafs --path=/
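Exports can then be listed and inspected on the fly, using the cluster name and pseudo-path from this example:

# sudo ceph nfs export ls myganeshanfs

# sudo ceph nfs export info myganeshanfs /ganesha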

The true value of this distributed deployment lies in its service resilience. The Ceph orchestrator is constantly monitoring the health of the Ganesha daemons. Should a host fail, the orchestrator will automatically detect the loss and take action to ensure the service remains available. This automated failover process provides a high degree of transparency to clients, moving Ganesha from a simple gateway to a genuinely high-availability service. Its architecture is built to withstand disruption, making it an indispensable part of a robust storage strategy.

Real-World example

Let’s say we have a cluster with three Ganesha-ready nodes. That means the underlying CephFS export can be served from node 1, node 2 or node 3, and the service can move between them, whichever way we want!
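From the client side, the export behaves like any other NFS share; a minimal mount sketch, with the hostname and mount point as illustrative placeholders:

# sudo mkdir -p /mnt/ganesha

# sudo mount -t nfs -o nfsvers=4.1 ceph-node1:/ganesha /mnt/ganesha

If ceph-node1 later goes down, the same export remains reachable through the surviving Ganesha nodes.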

Conclusion: Why Ceph-Ganesha is essential for modern storage

NFS Ganesha is more than just a gateway; it is a critical component for integrating traditional file services with modern, scalable storage. By leveraging the command-line orchestration of cephadm, administrators can deploy a highly available, resilient, and dynamically manageable service. The process is a testament to the power of declarative infrastructure management, simplifying what would otherwise be a complex task. The architectural design of Ganesha, combined with the power of the Ceph orchestrator, makes it a perfect solution for the demanding storage requirements of today’s hybrid environments. Precisely for this reason, at SIXE we not only offer Ceph training but also specialized support, so that companies can keep their production infrastructures stable.

Ceph Technical Support

Intensive Ceph Course

Advanced Ceph Course

Terraform + AWS: From giant states to 3-minute deployments

“We haven’t touched our AWS infrastructure in three months out of fear of breaking something.” Sound familiar? The solution isn’t to change tools—it’s to change your methodology.

The lie we’ve believed

We all start the same: “Let’s do Infrastructure as Code—it’ll be amazing.” And indeed, the first few days are magical. You create your first VPC, security groups, a few instances… Everything works. You feel like a wizard.

Then reality hits.

Six months later, you have gigantic state files, tightly coupled modules, and every change feels like a game of Russian roulette. Does this sound familiar?

  1. terraform plan → 20 minutes of waiting
  2. A 400-line plan that no one understands
  3. “Are you sure you want to apply this?”
  4. Three hours debugging because something failed on line 247

But there’s one factor most teams overlook…

What actually works (and why no one tells you)

After rescuing dozens of Terraform projects, the formula is simpler than you think:

Small states + smart modules + GitOps that doesn’t scare you.

Layered states (not per project)

Forget “one state to rule them all.” Break it down like this:

terraform/
├── network/     # VPC, subnets, NAT gateways
├── data/        # RDS, ElastiCache  
├── compute/     # EKS, ECS, ASGs
└── apps/        # ALBs, Route53

Each layer evolves independently. The data team can update RDS without touching the network. This could be your game changer.
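In day-to-day use, each layer then gets its own init/plan/apply cycle. A minimal sketch using Terraform's -chdir flag, with directory names following the layout above:

terraform -chdir=terraform/network init
terraform -chdir=terraform/network plan -out=network.tfplan
terraform -chdir=terraform/network apply network.tfplan

# The data team only ever touches its own layer
terraform -chdir=terraform/data plan

A two-minute plan for one layer beats a twenty-minute plan for everything.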

The remote state trick

The magic is in connecting layers without coupling them:

data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "company-terraform-states"
    key    = "network/terraform.tfstate"
  }
}

# Use outputs from another layer
subnet_id = data.terraform_remote_state.network.outputs.private_subnet_id

Modules that don’t give you a headache

Create specific modules for each type of workload:

  • secure-webapp/ – ALB + WAF + instances
  • microservice/ – EKS service + ingress + monitoring
  • data-pipeline/ – Lambda + SQS + RDS with backups

No more “universal” modules requiring 47 parameters.

Multi-cloud is already here

Now it gets interesting. Many teams are adopting hybrid strategies: AWS for critical applications, OpenStack for development and testing.

Why? Cost and control.

# Same module, different cloud
module "webapp" {
  source = "./modules/webapp"

  # On OpenStack for dev (modules take a providers map, not a bare provider argument)
  providers     = { openstack = openstack.dev }
  instance_type = "m1.medium"

  # On AWS for prod
  # providers     = { aws = aws.prod }
  # instance_type = "t3.medium"
}

The future isn’t “AWS or nothing.” It’s architectural flexibility. The power to choose the solution you want, when you want, adapted to your budget.

OpenTofu changes the game

With the recent changes in Terraform, OpenTofu is becoming the smart choice. Same syntax, open-source governance, zero vendor lock-in.

The advantage is huge: you can migrate gradually without changing a single line of code. Perfect for teams that want control without drama.
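Trying it is low-risk precisely because the workflow is identical. A hedged sketch, assuming the tofu CLI is installed and pointed at one of the layers above:

tofu -chdir=terraform/network init   # re-initializes the existing backend and state
tofu -chdir=terraform/network plan   # should report no changes if nothing else moved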

The question you should ask yourself

Did your last terraform apply take years off your life?

If yes, the problem isn’t technical—it’s methodological.

Do you recognize these symptoms in your team? The difference between success and chaos lies in applying the right techniques from the start.


If you want to dive deeper into these methodologies, our Terraform/OpenTofu courses cover everything from fundamentals to advanced GitOps with real multi-cloud use cases.

Does your server need replacing? The right to repair says no

The new European Right to Repair Directive is putting an end to one of the most expensive myths in the IT sector: that switching to “more efficient” hardware is always more sustainable. Right to Repair makes products easier and faster to refurbish. And in the IT world, this means completely rethinking our relationship with hardware.

The myth of “new hardware is always better”.

For years we have heard the same speech: “this server is already 5 years old, it has to be replaced”. But is that really the case? Have you actually worked out on paper whether it is worth replacing that IBM Power9 just because it is no longer supported? You may be in for a surprise. The reality is much more complex and, above all, more expensive than it looks.

When you buy a new server, you don’t just pay the sticker price. You pay:

  • The carbon footprint of its manufacture
  • Transportation from the factory
  • Waste management of previous equipment
  • Migration and configuration costs
  • Lost productivity time during the transition

On the contrary, when you keep and refurbish your existing hardware, you make the most of an already amortized investment and drastically reduce the environmental impact.

Let’s do green math: numbers don’t lie

The new right-to-repair regulations can extend the useful life of products by up to 10 years, and in IT terms the numbers are hard to argue with.

Why are these numbers so favorable? Because the manufacturing phase accounts for 70-85% of the total carbon footprint of any IT equipment. Keeping a server running for 8-10 years instead of 3-5 is literally doubling its environmental efficiency.

Right to repair in IT and sustainable technology

Beyond hardware: software also counts

The right to repair in IT is not limited to hardware. It includes:

  • Extended support for operating systems beyond the official cycle. At SIXE we are committed to supporting systems beyond the vendor-imposed life cycle and can extend the useful life of Linux, AIX, Ceph and other systems.
  • Independent maintenance of databases such as DB2, Oracle or Informix.
  • Security upgrades without the need to migrate the entire platform
  • Continuous performance optimization instead of mass replacements

The right to repair: more than a law, a philosophy

“My supplier says it’s insecure.”

Manufacturers have obvious business incentives to sell new hardware. However, a properly maintained 2018 server can be more secure than a poorly configured new one.

“No spare parts available.”

With independent maintenance providers, the availability of spare parts extends years beyond what is offered by the original manufacturers.

“Performance will be lower.”

A 5-year-old optimized system can outperform a new one without proper configuration.

Our sustainable commitment at SIXE: to make it last as long as the hardware itself allows

At SIXE, we have been advocating this philosophy for years. Not because it’s a trend, but because the numbers prove it: an approach based on preventive maintenance, continuous optimization and intelligent reuse of resources generates better ROI than the traditional buy-use-throw-away cycle.

Our commitment to “make it last forever” is not marketing. It is engineering applied with economic and environmental criteria.

Conclusion: the future is circular, not linear.

The right to repair in IT is not a regulatory imposition. It is an opportunity to rethink how we manage enterprise technology. An approach where maintaining, optimizing and extending the life of equipment is not only greener, but also more cost-effective.

The question is not whether your company will adapt to this reality. The question is whether it will do so before or after your competition.

Ready to make the leap to more sustainable and efficient IT? Discover our sustainable technology services and start optimizing your infrastructure today.

And if your system is giving you problems, we can assess its efficiency before replacing it.

Our consulting / service portfolio

How to fix the most common error in Ceph

Ceph is a powerful and flexible solution for distributed storage, but like any complex tool, it is not exempt from errors that are difficult to diagnose. If you get the message “could not connect to ceph cluster despite configured monitors”, you know that something is wrong with your cluster. And no, it’s not that the monitors are asleep. This error is more common than it seems, especially after network changes, reboots or when someone has touched the configuration “just a little bit”.

In this article we get to the point: we tell you the real causes behind this problem and most importantly, how to fix it without losing your data or your sanity in the process.

What does the error “could not connect to ceph cluster despite configured monitors” really mean?

When Ceph tells you that it cannot connect to the cluster “despite configured monitors”, what is really happening is that the client or daemon can see the monitor configuration but cannot establish communication with any of them. It’s like being ghosted: no matter how much you call, nobody picks up.

Ceph monitors are the brains of the cluster: they maintain the topology map, manage authentication, and coordinate global state. Without connection to the monitors, your Ceph cluster is basically a bunch of expensive disks with no functionality.

Troubleshoot Ceph errors

The 5 most common causes (and their solutions)

1. Network and connectivity problems

The number one cause is usually the network. Either because of misconfigured firewalls, IP changes or routing problems.

Rapid diagnosis:

# Check basic connectivity
telnet [IP_MONITOR] 6789
# or with netcat
nc -zv [IP_MONITOR] 6789

# Check the routes
ip route show

Solution:

  • Make sure that ports 6789 (monitor) and 3300 (msgr2) are open.
  • Verify that there are no iptables rules blocking communication.
  • If you use firewalld, open the corresponding services:
firewall-cmd --permanent --add-service=ceph-mon
firewall-cmd --reload

2. Monmap out of date after IP change

If you have changed node IPs or modified the network configuration, it is likely that the monmap (monitor map) is obsolete.

Diagnosis:

# Check the current monmap
ceph mon dump

# Compare it with the configuration
cat /etc/ceph/ceph.conf | grep mon_host

Solution:

# Extract an up-to-date monmap from a working monitor
ceph mon getmap -o monmap_actual

# Inject the corrected monmap into the problematic monitor
ceph-mon -i [MON_ID] --inject-monmap monmap_actual

3. Time synchronization problems

Ceph monitors are very strict with time synchronization. An offset of more than 50ms can cause this error.

Diagnosis:

# Check the NTP/chrony status
chronyc sources -v
# or with ntpq
ntpq -p

# Check the clock skew between nodes
ceph status

Solution:

# Configure chrony correctly
systemctl enable chronyd
systemctl restart chronyd

# If you have local NTP servers, use them
echo "server your.local.ntp.server iburst" >> /etc/chrony.conf

4. Down or corrupted monitors

If the monitors have suffered data corruption or are in an inconsistent state, they may not respond correctly.

Diagnosis:

# Check the monitor logs
journalctl -u ceph-mon@[MON_ID] -f

# Check the state of the monitor store
du -sh /var/lib/ceph/mon/ceph-[MON_ID]/

Solution:

# For a specific monitor, rebuild from the OSDs
systemctl stop ceph-mon@[MON_ID]
rm -rf /var/lib/ceph/mon/ceph-[MON_ID]/*
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --journal-path /var/lib/ceph/osd/ceph-0/journal --type bluestore --op update-mon-db --mon-store-path /tmp/mon-store
ceph-mon --mkfs -i [MON_ID] --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

5. Incorrect client configuration

Sometimes the problem is on the client side: outdated configuration, incorrect keys or poorly defined parameters.

Diagnosis:

# Check the client configuration
ceph config show client

# Check the authentication keys
ceph auth list | grep client

Solution:

# Regenerate the client keys if necessary
ceph auth del client.admin
ceph auth get-or-create client.admin mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'

# Update the configuration
ceph config dump > /etc/ceph/ceph.conf
When to ask for help (before it’s too late)

This error can escalate quickly if not handled correctly. If you find yourself in any of these situations, it’s time to stop and seek professional help:

  • All monitors are down simultaneously
  • You have lost quorum and cannot regain it.
  • Data appears corrupted or inaccessible
  • The cluster is in production and you can’t afford to experiment.

Ceph clusters in production are not trial-and-error territory. One false move can turn a connectivity problem into data loss.

The best solution to the “could not connect to ceph cluster despite configured monitors” error: prevention

To avoid encountering this error in the future:

Proactive monitoring:

  • Configure alerts for monitor status
  • Monitor network latency between nodes
  • Monitor time synchronization

Best practices:

  • Always deploy at least 3 monitors (better 5 in production).
  • Keep regular backups of the monmap and keys.
  • Document any network configuration changes
  • Use automation (Ansible, for example, is perfect for configuration changes).

Regular testing:

  • Periodically test connectivity between nodes
  • Simulate monitor failures in a development environment
  • Verify that your recovery procedures work (a minimal check sketch follows this list)
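As a starting point, a minimal health check along these lines (assuming admin access to the cluster and chrony on the nodes) can be run from cron or your monitoring agent:

# Monitor quorum and overall health
ceph quorum_status -f json-pretty | grep -A5 quorum_names
ceph health detail

# Clock skew on this node
chronyc tracking | grep -E 'System time|Leap status'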

Need help with your Ceph cluster?

Distributed storage clusters such as Ceph require specific expertise to function optimally. If you have encountered this error and the above solutions do not solve your problem, or if you simply want to ensure that your Ceph infrastructure is properly configured and optimized, we can help.

Our team has experience solving complex Ceph problems in production environments, from urgent troubleshooting to performance optimization and high availability planning.

We offer help with

Don’t let a connectivity problem become a major headache. The right expertise can save you time, money and, above all, stress.

IBM Power11 : Discover all the news

IBM Power11 is here

The wait is over: today IBM Power11 is officially presented, the new generation of servers that seeks to consolidate Power as a benchmark in performance, efficiency and openness.

What’s new with the new Power servers?

IBM is committed to a full-stack design, with integration from the processor to the cloud, designed to simplify management, reduce costs and enable AI without the need for GPUs. Power11 offers:

  • IBM Spyre Accelerator for Generative AI and Business Processes

  • Up to 25% more cores per chip compared to Power10

  • DDR5 memory with improved bandwidth and efficiency

  • Concurrent maintenance, quantum-secure cryptography, and automated energy-efficient mode

  • Full support for AIX, IBM i, Linux, and hybrid deployments (Power Virtual Server)

See the Power11 models available today:

  • IBM Power S1122

    Compact 2U server, ideal for space-constrained environments. Up to 60 Power11 cores, 4TB of DDR5 RAM and advanced cyber resiliency and energy efficiency capabilities. Perfect for Linux, AIX or IBM i loads in mixed production environments.

    IBM Power S1124

    Designed to consolidate critical loads in 4U form factor with up to 60 cores, 8 TB of memory and dual socket. Ideal for medium to large enterprises that want cloud flexibility, without sacrificing performance or security.

    IBM Power E1150

    Intermediate model with high scalability, designed for demanding loads and SAP deployments, databases or intensive virtualization.

    IBM Power E1180

    The most powerful of the Power11 family. Up to 256 cores, 64 TB of memory and improved energy efficiency up to 28%. Designed for AI, advanced analytics and massive consolidation in mission-critical environments with 99.9999% availability.

More open and hybrid-ready power

All Power11 models can also be deployed on Power Virtual Server, integrating AIX, IBM i and Linux loads in hybrid environments, without the need to rewrite applications. In addition, KVM and PowerVM support allows you to choose the hypervisor that best fits your environment.

Availability: IBM Power11 will be available globally starting July 25, 2025. The IBM Spyre accelerator will be available in the fourth quarter of 2025.

What about the future?

Power11 ushers in a new era where AI, quantum security and energy efficiency are no longer promises, but native features.

If you like the new Power11 models, we have good news: at SIXE we sell and migrate to Power11 (and Power10, 9…). We have been helping our customers get the most out of Power for years.

Learn how to build and deploy AI agents with LangGraph using watsonx.ai

Artificial intelligence no longer just responds; it also makes decisions. With frameworks like LangGraph and platforms like watsonx.ai, you can build agents that reason and act autonomously.

In this article, we will explain how to implement a ReAct (Reasoning + Action) agent locally and deploy it on IBM Cloud, with a practical example that includes a weather query tool.

A practical guide to using your agents with LangGraph and Watsonx.ai

Project architecture

  • Machine with local project
    • Here you develop and test the agent with Python, LangGraph and dependencies.
  • ZIP (pip-zip)
    • Package with your code and additional tools.
  • Software Specification
    • Environment with libraries necessary to execute the agent.
  • watsonx.ai
    • Platform where you deploy the service as a REST API.
  • IBM Cloud Object Storage
    • Stores deployment assets.

Let’s prepare the environment for our agent

We need:

  • Python 3.12 installed
  • Access to IBM Cloud and watsonx.ai
  • Poetry for dependency management

Have you got everything? Well, first things first, clone the repository that we will use as an example. It is based on the official IBM examples.

git clone https://github.com/thomassuedbroecker/watsonx-agent-langgraph-deployment-example.git 
cd ./agents/langgraph-arxiv-research

First of all, let’s understand the example project.

[Developer Workstation] → [CI/Build Process] → [Deployment] ↓
[IBM Cloud / watsonx.ai]

The main files of the agent are:

  • ai_service.py: main file that starts the agent service in production.
  • agent.py: core logic of the LangGraph-based AI agent; it defines the workflow.
  • tools.py: tools connected to the agent (weather API).

Diagram of the LangGraph and watsonx.ai example repo

Let’s configure the environment

python3.12 -m venv .venv
source ./.venv/bin/activate
python3 -m pip install --upgrade pip
python3 -m pip install poetry

We also recommend using Anaconda or Miniconda. They make it easy to manage virtual environments and Python packages and are widely used in ML.

In order for Python to find our custom modules (such as the agents and tools), we need to include the current directory in the PYTHONPATH environment variable:

 

export PYTHONPATH=$(pwd):${PYTHONPATH}

echo ${PYTHONPATH}

 

Once the environment is ready, it is time for the variables. Create a config.toml file if you don’t already have one and fill it in with your IBM Cloud credentials:

[deployment]
watsonx_apikey = "YOUR_APIKEY"
watsonx_url = "" # Must follow this format: `https://{REGION}.ml.cloud.ibm.com`
space_id = "SPACE_ID"
deployment_id = "YOUR_DEPLOYMENT_ID"
[deployment.custom]
model_id = "mistralai/mistral-large" # underlying model of WatsonxChat
thread_id = "thread-1" # More information: https://langchain-ai.github.io/langgraph/how-tos/persistence/
sw_runtime_spec = "runtime-24.1-py3.11"

You will find your variables here:

https://dataplatform.cloud.ibm.com/developer-access

Once there, select your deployment space and copy the necessary data (API Key, Space ID, etc.).

Running the agent locally

It is time to test the agent:

source ./.venv/bin/activate
poetry run python examples/execute_ai_service_locally.py

Since it’s a weather agent, why don’t you try it with something like…?

“What is the current weather in Madrid?”

The console should give you the weather in Madrid. Congratulations! Now we only need to deploy it to watsonx.ai.

Agent deployment in watsonx.ai

source ./.venv/bin/activate
poetry run python scripts/deploy.py
This will deploy the agent to watsonx.ai. deploy.py does the following:
  1. Read the configuration (config.toml) with your credentials and deployment space.
  2. Package your code in a ZIP file for uploading to IBM Cloud.
  3. Creates a custom software specification based on a base environment (such as runtime-24.1-py3.11).
  4. Deploy the agent as a REST service in watsonx.ai.
  5. Save the deployment_id , needed to interact with the agent later.

In short: it takes your local agent, prepares it and turns it into a cloud-accessible service.

Check that everything is correct in watsonx.ai and go to the “Test” section. There, paste the following JSON (it contains just one question):
{
  "messages": [
    {
      "content": "What is the weather in Malaga?",
      "data": {
        "endog": [0],
        "exog": [0]
      },
      "role": "User"
    }
  ]
}
Click on predict and the agent will use the weather_service tool.
In the response JSON you will see the agent’s process: call the tool -> extract the city -> process and return the temperature.
Your agent is up and running on watsonx.ai!
If you want to test it from the terminal to make sure it works, just run:
source ./.venv/bin/activate
poetry run python examples/query_existing_deployment.py
Conclusions

If you have any doubts, we recommend the following video tutorial, where you can follow the development connected with watsonx.ai step by step.

If you want to continue exploring these types of implementations or learn more about cloud development and artificial intelligence, we invite you to explore our AI courses.

SIXE: Your partner specialized in LinuxONE 5

Can you imagine having a powerful infrastructure without paying proprietary licenses? Well… you can, with LinuxONE 5.

In an ever-evolving technology environment, choosing a critical infrastructure based on Linux and AI requires not only advanced technology, but also a partner that masters every technical and strategic layer. IBM LinuxONE Emperor 5, powered by the IBM Telum II processor and its integrated AI accelerators, is a milestone in security, performance and scalability. At SIXE, we are experts in designing, implementing and supporting IBM LinuxONE 5 solutions, combining our expertise in IBM technologies with our role as a strategic partner of Canonical, Red Hat and SUSE.

What is IBM LinuxONE 5?

IBM LinuxONE Emperor 5 is a next-generation platform designed for companies that need the highest levels of security and energy efficiency, as well as the ability to manage AI and hybrid cloud workloads. It includes new features such as:

  • IBM Telum II processor : With multiple on-chip AI accelerators, ideal for inference on co-located data.
  • Confidential Containers : Advanced protection for applications and data in multi-tenant environments.
  • Quantum-safe encryption : Preparing for future threats from quantum computing.
  • 99.999999% availability : Architecture designed to minimize critical outages.

This system is not just hardware: it is a comprehensive solution that integrates software, security and sustainability, positioning itself as an ally for complex digital transformations. And it comes with the advantage of open source: no dependence on proprietary licenses.

OpenSource and IBM experts

At SIXE, we are not intermediaries: we are certified engineers in IBM Power and open source technologies . Unlike large partners that outsource complex projects, at SIXE we lead each LinuxONE 5 implementation with an internal team specialized in:

  • IBM Power Hardware : Configuration and optimization of IBM LinuxONE Emperor 5 and Power10 (and future Power11) systems.
  • Operating Systems : Support for Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES) and Ubuntu.
  • AI and hybrid infrastructure : Integration of containers, Kubernetes and AI tools (such as IBM Watsonx) with LinuxONE 5.
  • Security and compliance : We are experts in IBM security audits and licensing.

What do we offer at SIXE to make you stay with us?

Large enterprises often treat LinuxONE 5 implementation projects as one-off transactions. At SIXE, we work closely with your technical teams to ensure that the solution is tailored to your specific needs. What sets us apart:

  • Respond quickly to changes in requirements or architectures.
  • Maintain direct communication with systems managers.
  • Design migration plans from legacy (IBM i, AIX) to LinuxONE 5. You can see more details here.

Long-term relationships: Are you looking for a supplier or a strategic partner?

We don’t just work to close deals: we build lasting relationships. Once you’ve implemented LinuxONE Emperor 5, we’re still there for you. We offer technical support, training and upgrades , ensuring that your investment continues to generate long-term value.

Return on investment: SIXE is a safe investment.

At SIXE, every euro invested translates into real value. Without unnecessary layers of management or empty meetings, we focus resources on engineers and experts with more than 15 years of experience in IBM, Canonical and open source technologies. We are part of a global network of specialists recognized by leaders such as IBM and Canonical, which reinforces our ability to deliver exceptional results.

Not only LinuxONE 5

Although much of our business focuses on IBM solutions, we are also strategic partners of Canonical (Ubuntu), Red Hat and SUSE, and we work with technologies such as QRadar XDR. This diversity allows us to offer comprehensive solutions for your infrastructure.

Choosing SIXE as a partner for IBM LinuxONE 5 means betting on a human, technical and strategic approach. Don’t leave your critical infrastructure in the hands of intermediaries: trust a team that is as committed as you are to the success of your project.

Find out more about our IBM LinuxONE 5 offering here.

Ready to transform your infrastructure with IBM LinuxONE 5 and SIXE? Contact us and let’s get started together.

SIXE