New redbook! Red Hat Ansible for AIX, IBM i and Linux on IBM Power

Today we are in luck: IBM has just published the draft of the redbook we have been waiting months for. In it we will see how the integration of Ansible into IBM Power environments has opened a new world of possibilities for the automated administration of these systems, thanks to the growing support from Red Hat: not only for the different Linux distributions, but also for AIX, IBM i, HMC consoles and VIOS servers, fundamental components of the IBM Power platform.

Ansible, the leading-edge automation technology, has found fertile ground in the robust and powerful IBM Power systems. With its sophisticated architecture and support for a variety of operating systems, IBM Power is positioned as an ideal choice for companies seeking exceptional performance and reliable security. Incorporating Ansible into this environment not only improves operational efficiency but also opens up new avenues for application and infrastructure management as code.

The heart of the Ansible revolution in IBM Power lies in its ability to efficiently orchestrate and automate complex tasks. From application deployment to patch management and security configuration, Ansible simplifies traditionally time-consuming processes. Its declarative language, based on YAML and Jinja2, allows users to describe their infrastructures in simple terms, making automation accessible even to those with no programming experience (and perhaps no desire or need to learn).
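As a small illustration of that declarative style, a playbook is just a YAML description of the desired state. The sketch below is not taken from the redbook; the inventory group name is a placeholder, and it sticks to ansible.builtin modules that work across Linux and AIX (for Power-specific tasks you would typically reach for the ibm.power_aix, ibm.power_ibmi or ibm.power_hmc collections):

```yaml
---
- name: Baseline configuration for Linux and AIX on Power
  hosts: power_servers          # placeholder inventory group
  tasks:
    - name: Ensure the monitoring user exists
      ansible.builtin.user:
        name: monitor
        state: present

    - name: Collect the AIX OS level for reporting
      ansible.builtin.command: oslevel -s
      register: os_level
      when: ansible_facts['distribution'] == 'AIX'
      changed_when: false

    - name: Show the collected level
      ansible.builtin.debug:
        var: os_level.stdout
      when: os_level is not skipped
```

Note how nothing in the playbook says *how* to create the user or run the command on each platform; the modules resolve that per operating system, which is exactly what makes the approach attractive on a mixed Power estate.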

In addition, Ansible facilitates automated application deployment on Power servers, deftly handling everything from Node.js deployments to the orchestration of multi-tier platforms such as OpenShift or OpenStack. This versatility makes it an indispensable tool for day-to-day operations in Linux, AIX and IBM i environments, covering critical aspects such as storage, security, and configuration settings.

A draft of this redbook is available at https://www.redbooks.ibm.com/redpieces/pdfs/sg248551.pdf.

How ACME saved its business with the FS7300 and Safeguarded copies

Today, ransomware attacks have become a constant threat to businesses of all sizes. An effective answer to this challenge is the use of safeguarded copies on advanced storage systems such as IBM’s FS7300. This article explores a case in which a customer, which we will call ACME, was able to recover its critical systems in minutes after a ransomware attack, thanks to the capabilities of the FS7300 array.

A key technology: IBM Safeguarded Copy

Safeguarded copies on IBM FS7300 systems are replicas of data that are stored securely and isolated within the same system. These copies cannot be modified or deleted through normal access, making them immune to malware attacks such as ransomware.

Our client

ACME is a leading financial services provider in a North African country that faced a sophisticated ransomware attack in November 2023. The attack encrypted a significant amount of their critical data, affecting essential operations. Fortunately, they had recently implemented an IBM FS7300 storage array, which included the Safeguarded Copy feature, and SIXE had scheduled the copies to run on a regular basis. An alert from IBM Storage Protect warned that more files than normal had been modified during planned backups.

Response to Attack

When ACME became aware of the attack, its IT team acted quickly. Using the safeguarded copies stored in their FS7300 array, they were able to restore the affected data in a matter of minutes. This rapid recovery was made possible by the efficient data management and instantaneous recovery capability of the FS7300 system.

Key Benefits

The ability to recover quickly from a ransomware attack is crucial to maintaining business continuity. In the case of ACME, IBM’s FS7300 array provided:

  1. Fast Recovery: Data restoration was almost instantaneous, minimizing downtime.
  2. Data Integrity: Protected copies ensured that the restored data was free from corruption or tampering.
  3. Uninterrupted Operations: Rapid recovery allowed critical business operations to continue without significant interruption.

This case demonstrates how an advanced storage solution such as IBM’s FS7300 array, equipped with Safeguarded Copy technology, can be a lifesaver in crisis situations such as ransomware attacks. It provides not only an additional layer of security, but also the confidence that business data can be recovered quickly and efficiently, ensuring business continuity in times of uncertainty and constant threats.

IBM Storage CEPH vs. Storage Scale (GPFS), GFS2, NFS and SMB

IBM Storage CEPH is a software-defined storage solution based on the open-source Ceph project that is gaining more and more followers. It offers a scalable, resilient and high-performance storage system, and is especially suited for environments that require massive, distributed storage, such as data centers, cloud applications and big data environments.

What are the main Use Cases?

  1. Object Storage: Ideal for storing massive amounts of unstructured data, such as images, videos and backup files.
  2. Block Storage: Used for file systems, databases and virtual machines, offering high availability and performance.
  3. Distributed File Systems: Supports applications that require concurrent access to files from multiple nodes.

Technical Fundamentals

  • Scalable Structure: It is based on a distributed architecture that allows scaling horizontally, adding more nodes as needed.
  • High Availability: Designed to be resilient to failures, with redundancy and automatic data recovery.
  • Data Consistency: Ensures data integrity and consistency even in high concurrency environments.

Comparison with other storage solutions

  1. Versus GPFS (IBM Spectrum Scale):
    • CEPH is best suited for environments where massive scalability and a highly flexible storage infrastructure are needed.
    • GPFS offers superior performance in environments where high I/O throughput and efficient management of large numbers of small files is required.
  2. Versus NFS and SMB:
    • NFS and SMB are shared storage protocols that work well for sharing files on local networks. CEPH offers a more robust and scalable solution for large-scale and distributed environments.
    • CEPH provides greater fault tolerance and more efficient data management for large data volumes.
  3. Versus GFS2:
    • GFS2 is suitable for cluster environments with shared data access, but CEPH offers superior scalability and flexibility.
    • CEPH excels in object and block storage scenarios, while GFS2 focuses more on file storage.

When is GPFS (Storage Scale) a better solution than CEPH?

When very high I/O performance is required

  • GPFS is designed to provide very high I/O performance, especially in environments requiring high input/output (I/O) throughput and low latency. It is particularly effective in applications that handle large numbers of small files or in environments with heavy I/O workloads.

If we have to manage small files in a very efficient manner

  • GPFS excels at efficiently managing large numbers of small files, a common scenario in high-performance computing and analysis environments.

In HPC environments

  • In high performance computing (HPC) environments, where consistency and reliability are crucial along with high performance, GPFS provides a more robust and optimized platform.

When we need advanced functions such as ILM

  • For applications that require advanced handling of unstructured data, with features such as deduplication, compression and information lifecycle management (ILM), GPFS offers more specialized functions.

Conclusions

In summary, GPFS is preferable to CEPH in scenarios that demand high I/O throughput and efficient small-file management, and in HPC environments where consistency and reliability are as important as performance. In addition, in environments that are already deeply integrated with IBM solutions, GPFS can offer better synergy and optimized performance.

However, in our opinion, IBM CEPH is best suited in scenarios where a highly scalable storage solution is required, with object, block and file storage capabilities, and where data integrity and availability are critical. It stands out compared to NFS, SMB and GFS2 in terms of scalability, flexibility and ability to handle large volumes of distributed data.

In other words, it is not a matter of one or the other: it all depends on the workloads and use cases. Contact us!

Migrating from Lustre FS to IBM Storage Scale (GPFS)

In this article we tell you how, at SIXE, we have migrated HPC environments from Lustre to GPFS, now called IBM Storage Scale (and until recently Spectrum Scale). As you know, High-Performance Computing (HPC) environments play a critical role in scientific research, engineering and innovation in a wide variety of fields. To take full advantage of the potential of these infrastructures, an efficient, high-performance storage system is essential. One of the most widely used parallel file systems in HPC environments is Lustre FS, but sometimes migrating to more advanced and versatile solutions becomes a necessity. In this article, we will explore the migration process from Lustre FS to IBM Storage Scale (formerly known as GPFS) in an HPC infrastructure composed of hundreds of compute nodes with internal or external storage, connected to a high-performance network such as InfiniBand or 10G Ethernet.

Why migrate to IBM Storage Scale (GPFS)?

IBM Storage Scale, formerly known as GPFS (General Parallel File System), is a highly scalable and robust parallel file system designed for high-performance applications, including HPC environments. As storage and performance needs continue to grow in HPC environments, migrating to a solution like IBM Storage Scale can offer significant benefits:

  1. Scalability: IBM Storage Scale can scale horizontally to seamlessly accommodate growth in both data and compute nodes. This is essential in HPC environments, where workloads can be extremely demanding on storage.
  2. High performance: IBM Storage Scale is designed for high read and write performance, making it ideal for HPC applications that require fast and efficient access to large data sets.
  3. Stability and security: IBM Storage Scale is known for its reliability and security. It offers fault-tolerance features that ensure the availability of critical data at all times, and data can be protected with encryption when necessary.
  4. Integration with HPC environments: IBM Storage Scale integrates well with high-performance networks used in HPC environments, such as InfiniBand or 10G Ethernet, simplifying the transition.
  5. Support: SIXE provides ongoing support and maintenance for Storage Scale, ensuring that your storage system is backed by a company with over 20 years of experience in this technology. We do this through IBM, of which we are a value-added business partner.
  6. An architecture that is easier to deploy, scale and maintain. For us, this is the key point that makes us recommend undertaking this migration. Beyond a certain scale, Lustre FS becomes complex to manage, monitor and update, while GPFS works perfectly in 90% of scenarios with few additional adjustments.

Planning the migration from Lustre FS

Migrating a parallel file system in an HPC infrastructure is a complex and critical task that requires careful planning. Here are some key steps to consider:

  1. Requirements assessment: Before beginning the migration, it is essential to understand the storage and performance requirements of your HPC workload. This will help determine the optimal IBM Storage Scale configuration. We need to understand the use cases and the specific needs of the environment. Also those points where Lustre FS worked particularly well or poorly :)
  2. Architecture design: We design the best possible architecture for IBM Storage Scale, taking into account the topology of your high-performance network and the distribution of storage and compute nodes. This should be done in a way that minimizes or eliminates downtime during migration. It is in this phase that we decide whether to use IBM COS (Cloud Object Storage), ESS (Elastic Storage Server) or Storage Scale (GPFS) deployed directly on the storage servers, the compute nodes, or both.
  3. Data preparation: We make sure your data is organized and ready for migration. This may involve cleaning up unwanted data or reorganizing existing data.
  4. Development environment testing: Before migration to production, we perform extensive testing in a development environment to identify potential issues and adjust the configuration as needed.
  5. Hot migration planning: We determine the best time to perform the live migration, minimizing the impact on HPC operations. This may require scheduling the migration during periods of low activity. Storage Scale has several functionalities that enable a non-stop migration of environments. This is essential as data movement can take days to complete.
  6. Migration execution: We carry out the migration following the elaborated plan. This may include data transfer and IBM Storage Scale configuration.
  7. Testing and validation: We perform extensive testing to ensure that all data has been successfully migrated and that the new storage system meets performance requirements.
  8. Training: We provide training to users and IT staff to enable them to adapt to the new file system.
  9. Ongoing maintenance and support: Develop an ongoing maintenance plan to ensure that your storage system performs optimally over time.
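To make step 6 a bit more concrete: the bulk data movement is often just a parallel copy of directory trees from the old filesystem to the new one. The sketch below is purely illustrative and is not our actual migration tooling; the /lustre and /gpfs mount points and the worker count are hypothetical, and real migrations use Storage Scale’s own facilities on top of this kind of copy.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor


def rsync_cmd(src: str, dst: str) -> list:
    """Build an rsync command that preserves hard links, ACLs and xattrs."""
    return ["rsync", "-aHAX", "--numeric-ids", src.rstrip("/") + "/", dst]


def rsync_tree(src: str, dst: str) -> int:
    """Copy one directory tree; returns rsync's exit code (0 on success)."""
    return subprocess.call(rsync_cmd(src, dst))


def parallel_copy(pairs, workers=8):
    """Copy several (src, dst) trees concurrently; returns the pairs that failed."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda p: (p, rsync_tree(*p)), pairs))
    return [pair for pair, rc in results if rc != 0]


if __name__ == "__main__":
    # Hypothetical mount points: /lustre is the old filesystem, /gpfs the new one
    print(rsync_cmd("/lustre/projects", "/gpfs/projects"))
```

A driver like this is typically run repeatedly: a first full pass while the cluster is live, then a short final pass during the cutover window so only the delta needs to move.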

Conclusions

Migrating from Lustre FS to IBM Storage Scale (formerly GPFS) in an HPC infrastructure can be a challenging but rewarding task. In doing so, research centers and organizations can take advantage of a highly scalable, reliable and high-performance parallel file system. However, thorough planning, testing and proper training are critical to ensure a successful migration and minimize any disruption to HPC operations.

If you are considering a migration to IBM Storage Scale, we suggest doing it in close collaboration with SIXE. We are storage experts and specialist HPC consultants, and we will ensure that the transition is as smooth and effective as possible. With the right approach and the right investment of time and resources, you can significantly improve the ability of your HPC infrastructure to support high-performance research and applications in the future.

AlmaLinux and OpenSUSE Leap on IBM Power / ppc64le (emulated with QEMU from a x86 box)

(Disclaimer: this article has been written  for our blog at IBM)

Getting started

If you want to explore the Linux distributions that run on IBM Power (ppc64le) but lack the hardware, you can emulate it thanks to QEMU. You can check the Linux compatibility matrices on Power at the following link, and decide which distribution and version you want to try :)

Like any other hardware architecture emulation, it has its challenges. What inspired me to write this article was this inspiring tutorial, Run a full-system Linux on Power environment from Microsoft Windows, by Emma Erickson and Paul Clarke. I wanted to suggest a more “user-friendly” approach, with the network running by default and a GUI to explore all the available options or to modify existing deployments.

For this demo, I will be using a standard (and cheap) x86 box running the latest Ubuntu (23.04) and the packages included in the distribution itself. No need to compile anything.

System preparation

This is my system, but it should work on any x86 machine with virtualization capabilities.

ubuntu@sixe-dev:~$ cat /proc/cpuinfo | grep model

model name : Intel(R) Xeon(R) CPU E5-1410 v2 @ 2.80GHz

ubuntu@sixe-dev:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 23.04
Release:        23.04
Codename:       lunar

We make sure all updates are applied and reboot.

ubuntu@sixe-dev:~$ sudo apt update && sudo apt upgrade

ubuntu@sixe-dev:~$ sudo reboot

I will use virt-manager as a GUI for QEMU, which helps me (or any other “QEMU newbie”). It’s just what people are used to from VirtualBox or VMware Player, and that’s why I like it :)

ubuntu@sixe-dev:~$ sudo apt install -y qemu-system-ppc qemu-kvm virt-manager virtinst libvirt-clients bridge-utils

In my case, I ssh from Windows WSL with X11 forwarding. Another option would be to install a minimal graphical environment and connect via RDP or VNC.

~$ ssh -X ubuntu@sixe-dev

Warning: No xauth data; using fake authentication data for X11 forwarding.

Welcome to Ubuntu 23.04 (GNU/Linux 6.2.0-35-generic x86_64)

Download .iso files

I will be downloading two free and open Linux distros with great support on Power. The download folder will be /var/lib/libvirt/images, which is used by virt-manager by default. 

ubuntu@sixe-dev:~$ cd /var/lib/libvirt/images/

ubuntu@sixe-dev:~$ sudo wget https://repo.almalinux.org/almalinux/8/isos/ppc64le/AlmaLinux-8-latest-ppc64le-minimal.iso

ubuntu@sixe-dev:~$ sudo wget https://download.opensuse.org/distribution/leap/15.5/iso/openSUSE-Leap-15.5-DVD-ppc64le-Media.iso

Launch the Virtual Machine Manager

Although it’s a little-known tool (unless you’re a Linux geek), it’s as simple and powerful as VirtualBox or VMware Player. It also integrates with QEMU to test operating systems on any other architecture.

ubuntu@sixe-dev:~$ virt-manager

 

Creating and installing a new VM from .iso

To install from the .iso, create a new VM. Choose ppc64le as the architecture, tune CPU and memory, and add a new virtual disk. I recorded a video to show the whole process; you can skip the last part, as in our case the installation GUI took almost 9 minutes to finish :)

Deploying Alma & OpenSUSE Linux on IBM Power (ppc64le) using QEMU on x86

All the installation settings work. For your information, I used a default LVM configuration for storage and automatic DHCP on my NAT network device.

Here you can see the emulated CPU being detected correctly.

Once the system is installed, I recommend checking the virtual IP address

… and make sure the sshd daemon is running (starting it if necessary)

hugo@almapower:~$ systemctl start sshd

So I connect from my local host using ssh

ubuntu@sixe-dev:~$ ssh root@192.168.122.28

root@192.168.122.28's password:

Last login: Fri Oct 27 03:58:07 2023 from 192.168.122.1

From now on I will ssh into the VM from my host machine. This way I can copy, paste and resize the console without any problems.

Try other distros like OpenSUSE Leap

You can do the same with other distributions. In my case, the second distribution that works well is OpenSUSE Leap. 

I even installed the graphical environment.

.. as well as Firefox, and started the web browser to visit our website. You’ll need a little patience, though, as it won’t be lightning fast.

What to do now?

Your Linux is nothing special, except that it runs on a much more secure, powerful, and stable architecture. Operation is the same as on x86. Apple has changed its architecture several times, and more and more manufacturers are betting on alternatives to x86 (see ARM).

For example, another popular Red Hat-derived distribution, Rocky Linux, includes on its download page not only x86 and ppc64le, but also ARM and s390x (LinuxONE / mainframe environments).

You can add other repositories or consult the database of packages available for Linux on IBM Power – https://www.ibm.com/it-infrastructure/resources/power-open-source/

As a disclaimer, although we have them running in production on LPARs with PowerVM, we have not been able to find the combination of configurations and OS versions that would allow us to run Rocky 9.2 and Ubuntu 22.10/23.04 on QEMU. So my recommendation is to try AlmaLinux or OpenSUSE. Of course, their enterprise-supported “sisters” RHEL and SUSE work just as well.

 In future articles we will discuss use cases like AWX or Kubernetes on Linux (ppc64le), emulated or real :)

I hope this article leaves you with no excuses for not trying Linux on Power.

Discover the History of Common Europe. Don’t miss the Prague Congress 2023!

Do you work with AIX, Linux and IBM i? Are you an IBM Power user?

Common Europe, a federation of IBM technology user associations in Europe, has been at the forefront of fostering growth and knowledge in this field for several decades. As we prepare for the Prague 2023 Congress, it is an ideal time to review the history of this magnificent institution and highlight the importance of attending this event.

History of Common Europe

Common Europe began more than six decades ago, in 1955, as a federation of IBM system user associations, and over the years it has grown into an international community. Common Europe’s goal has always been to provide a platform for learning, networking and collaboration, resulting in steady, progressive growth in the technical skills and knowledge of its members.

Throughout its history, Common Europe has demonstrated a tireless commitment to carrying out its mission. They have worked closely with IBM and other industry leaders to provide their members with the training and support they need to take full advantage of emerging technologies and industry best practices.

Why attend the Prague 2023 Congress?

If you work with AIX, Linux and IBM i on Power, the relevance and value of the Prague 2023 Congress cannot be overemphasized. At this event, you will have the opportunity to learn from the best in the field, expand your skills and knowledge, and connect with other like-minded professionals.

The Prague 2023 Congress will feature a wide variety of workshops, training sessions and presentations addressing all aspects of these technologies, from implementation and management to the latest innovations.

We are excited to announce that several members of Common Iberia, an association of IBM users in Spain and Portugal of which SIXE is a member, will participate in the congress as speakers. Their presence guarantees a valuable perspective, sharing innovative ideas and experiences in the use of AIX, Linux and IBM i on Power with many other experts from Europe and America.

In addition to us, there will also be a multitude of other experts and industry leaders present, making the Prague 2023 Congress a real opportunity for any professional looking to improve their skills and knowledge in these technologies… and above all, to strengthen and expand our large community of users.

FW update required due to vulnerability in IBM PowerVM (Power9 and Power10)

We would like to inform all our customers (and readers) that a bug has been identified in PowerVM that could lead to a security problem in Power9 and Power10 systems. The main risk is that a malicious actor with user privileges in a logical partition can compromise the isolation between logical partitions without being detected. This could result in data loss or unauthorized code execution on other logical partitions (LPARs) on the same physical server. Technical details can be found at https://www.ibm.com/support/pages/node/6993021

Are all Power servers at risk?

No. Only some IBM Power9 and Power10 models are at risk, and always depending on their firmware versions. Servers prior to Power9 and those running OP9xx firmware are not exposed to this vulnerability. There is no evidence that this vulnerability has been exploited to gain unauthorized access in any IBM client environment, but it is always better to be safe than sorry :)

When and by whom was this vulnerability found?

The vulnerability was identified internally by IBM. A fix has already been created, thoroughly tested, and released on May 17 on IBM Fix Central.

What is recommended to customers?

Customers should follow the instructions in Fix Central to download and install the updated firmware.

What would be the impact for productive environments?

The main concern is the possibility of data leakage or unauthorized code running on other logical partitions of the same physical server. We have found no evidence that this vulnerability has been exploited to gain unauthorized access.

Are certain environments more vulnerable than others?

IBM cannot specify which client environments might be most at risk since access to partitions is controlled by the client. However, any environment in which privileged user access has been granted to one or more partitions should be considered potentially vulnerable. In other words, environments with a high density of LPARs, where production and test systems are mixed, are more likely to suffer from this vulnerability.

Can the patch be applied without shutting down the equipment?

The firmware containing the fix can be installed concurrently and will remove this vulnerability on all systems, with the exception of Power10 systems running firmware prior to FW1010.10. In that case, the fix must be applied disruptively, requiring the server to be shut down to install the update and eliminate the vulnerability.

What types of partitions may be affected?

Any IBM Power9 or Power10 server mentioned in the security bulletin that has multiple partitions could be affected. It does not matter how these partitions were created or managed.

Is IBM’s Power Virtual Server (Power VS) environment at risk?

The vulnerability also affected the Power Systems Virtual Server offering on IBM Cloud (Power VS), but the patch has already been applied to remediate it.

Need help with preventive maintenance of your IBM Power systems?

Contact us and find out about our preventive maintenance service and 24/7 support.

First steps with the QRadar XDR API using Python and AlienVault OTX

IBM QRadar XDR is a security information and event management (SIEM) platform used to monitor an organization’s network security and respond to security incidents as quickly and comprehensively as possible. While QRadar is already incredibly powerful and customizable on its own, there are several reasons why we might want to enhance it with Python scripting using its comprehensive API.

Getting started with the QRadar API

Let’s see an example of how you could use the QRadar API to get different information from its database (ArielDB) using Python. The first thing we need is a token, which is created from Admin -> Authorized Services

Generating the Python code for the QRadar API

Let’s start with something very simple, connect and retrieve the last 100 events detected by the platform.

import requests
import json

# Configure the credentials and the URL of the QRadar server
qradar_host = 'https://<your_qradar_host>'
api_token = '<your_api_token>'

# API endpoint used to create event searches
url = f'{qradar_host}/api/ariel/searches'

# Request headers
headers = {
    'SEC': api_token,
    'Content-Type': 'application/json',
    'Accept': 'application/json'
}

# AQL (Ariel Query Language) query to fetch the last 100 events;
# query_expression is passed as a query parameter
query_data = {
    'query_expression': 'SELECT * FROM events LAST 100'
}

# Send the request to the QRadar API
response = requests.post(url, headers=headers, params=query_data)

# Check that the request was successful
if response.status_code == 201:
    print("Search request submitted successfully.")
    search_id = response.json()['search_id']
else:
    print("Error submitting the search request:", response.content)

In this example, replace <your_qradar_host> with the host address of your QRadar server and <your_api_token> with the API token you obtained from your QRadar instance.

This code will prompt QRadar to run a search of the last 100 events. The response to this search request will include a ‘search_id’ which you can then use to retrieve the search results once they become available. You can change this query to any of those described in the AQL guide provided by IBM to get the most out of QRadar’s Ariel Query Language.
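To complete the flow, once we have the search_id we can poll the search until it finishes and then download the results. The sketch below reuses the qradar_host and headers variables from the snippet above; the COMPLETED/ERROR status strings and the Range header come from IBM’s public API documentation, but double-check them against your QRadar version.

```python
import time
import requests


def search_status_url(qradar_host, search_id):
    """URL for polling the status of an Ariel search."""
    return f"{qradar_host}/api/ariel/searches/{search_id}"


def search_results_url(qradar_host, search_id):
    """URL for fetching the results of a completed search."""
    return search_status_url(qradar_host, search_id) + "/results"


def wait_for_search(qradar_host, headers, search_id, poll_seconds=5, timeout=300):
    """Poll the search until it reaches COMPLETED, fails, or times out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = requests.get(search_status_url(qradar_host, search_id),
                              headers=headers).json().get("status")
        if status == "COMPLETED":
            return True
        if status in ("ERROR", "CANCELED"):
            return False
        time.sleep(poll_seconds)
    return False


def fetch_results(qradar_host, headers, search_id, start=0, end=99):
    """Fetch one page of results; the Range header caps how many rows come back."""
    page_headers = dict(headers, Range=f"items={start}-{end}")
    return requests.get(search_results_url(qradar_host, search_id),
                        headers=page_headers).json()
```

Paging with the Range header matters in practice: an unbounded `SELECT * FROM events` result can be very large, and fetching it in slices keeps both the script and the QRadar console responsive.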

Detecting malicious IPs in QRadar using AlienVault OTX open sources

While in QRadar we have X-Force as a pre-defined module to perform malicious IP lookups and integrate them into our rules, for a multitude of reasons (including the end of the support / SWMA payment to IBM) we may want to use open sources to perform these types of functions. A fairly common example that we talk about in our courses and workshops is maintaining a series of data structures updated with “malicious” IPs obtained through open cybersecurity data sources.

Using the QRadar API, we can create python code to create a rule that constantly updates a reference_set that we will later use in different rules.

The task breaks down into two steps.

  1. First, you would need an open source security intelligence source that provides a list of malicious IPs. A commonly used example is the AlienVault Open Threat Exchange (OTX) malicious IP list just mentioned.
  2. Then, we will use the QRadar API to update a reference set with that list of IPs.

Programming it in Python is very simple:

First, download the malicious IPs from the open source security intelligence source (in this case, AlienVault OTX):

import requests
import json

otx_api_key = '<your_otx_api_key>'
otx_url = 'https://otx.alienvault.com:443/api/v1/indicators/export'

headers = {
    'X-OTX-API-KEY': otx_api_key,
}

response = requests.get(otx_url, headers=headers)

if response.status_code == 200:
    # Depending on the export format, you may need to extract the IP field
    # from each indicator instead of using the raw response
    malicious_ips = response.json()
else:
    print("Error fetching the malicious IPs:", response.content)

We then use the QRadar API to update a reference set with those IPs:

qradar_host = 'https://<your_qradar_host>'
api_token = '<your_api_token>'
reference_set_name = '<your_reference_set_name>'

url = f'{qradar_host}/api/reference_data/sets/{reference_set_name}'

headers = {
    'SEC': api_token,
    'Content-Type': 'application/json',
    'Accept': 'application/json'
}

for ip in malicious_ips:
    # The value to add is passed as a query parameter
    response = requests.post(url, headers=headers, params={'value': ip})

    if response.status_code not in (200, 201):
        print(f"Error adding IP {ip} to the reference set:", response.content)

The next and last step is to use this reference set in the rules we need.
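A side note on performance: when the IP list is long, one POST per value gets slow. QRadar also exposes a bulk_load endpoint for reference data that accepts the whole list in a single request; the sketch below assumes that endpoint is available on your version, so verify it in the interactive API docs before relying on it.

```python
import json
import requests


def bulk_load_url(qradar_host, reference_set_name):
    """URL of the bulk_load endpoint, which accepts a JSON array of values."""
    return f"{qradar_host}/api/reference_data/sets/bulk_load/{reference_set_name}"


def bulk_load_ips(qradar_host, api_token, reference_set_name, ips):
    """Push all the IPs in a single request instead of one POST per value."""
    headers = {
        "SEC": api_token,
        "Content-Type": "application/json",
        "Accept": "application/json",
    }
    response = requests.post(
        bulk_load_url(qradar_host, reference_set_name),
        headers=headers,
        data=json.dumps(list(ips)),
    )
    return response.status_code == 200
```

For feeds of thousands of indicators this cuts the update from minutes of round trips to a single call, which matters if the script runs on a frequent schedule.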

Want to know more about IBM QRadar XDR?

Check out our sales, deployment, consulting and official training services.

IBM QRadar SIEM/XDR courses updated to version 7.5.2! Including SOAR, NDR and EDR features of QRadar Suite

We are pleased to announce that all of our IBM QRadar SIEM / XDR courses have been upgraded to version 7.5.2. In this new release, powerful SOAR, NDR and EDR features have been incorporated into the QRadar Suite, providing our students with an even more comprehensive and up-to-date learning experience, with a medium-term view of the technology through Cloud Pak for Security and the new disruptive IBM cybersecurity products that are on the way.

IBM QRadar XDR is the market-leading information security solution that enables real-time security event management and analysis. With its ability to collect, correlate and analyze data from multiple sources, QRadar SIEM provides organizations with a comprehensive view of their security posture and helps them effectively detect and respond to threats.

In QRadar SIEM / XDR version 7.5.2, three key features have been introduced that further extend the capabilities of the platform:

  1. SOAR (Security Orchestration, Automation and Response): This feature enables the automation of security tasks and response orchestration, which streamlines and optimizes incident detection and response processes. With SOAR, organizations can automate workflows, investigate incidents more efficiently and take quick and accurate action to contain and mitigate threats.
  2. NDR (Network Detection and Response): With the NDR feature, QRadar SIEM / XDR expands its ability to detect network threats. This feature uses advanced network traffic analysis algorithms to identify suspicious behavior and malicious activity. By combining network threat detection with event correlation and security logs, QRadar SIEM / XDR provides comprehensive visibility into threat activity across the entire infrastructure.
  3. EDR (Endpoint Detection and Response): The EDR feature enables threat detection and response on endpoint devices, such as desktops, laptops and servers. With EDR, QRadar SIEM / XDR continuously monitors endpoints for indicators of compromise, malicious activity and anomalous behavior. This helps to quickly identify and contain threats that might go undetected by traditional security solutions.

At Sixe, we are committed to providing our students with the most up-to-date and relevant knowledge in the field of cybersecurity. The upgrade of our IBM QRadar SIEM / XDR courses to version 7.5.2, along with the addition of SOAR, NDR and EDR features from QRadar Suite, allows us to provide a comprehensive learning experience that reflects the latest trends and developments in the field of information security.

If you are interested in learning about QRadar SIEM / XDR and taking advantage of all these new features, we invite you to explore our updated courses.

You can also ask us for customized training or consulting, as well as technical support and support with your QRadar projects.

Sealpath IRM: we discuss its native integrations and its on-premises and SaaS (cloud) options

Over the past few years, Sealpath has worked hard to provide native integrations with several popular, widely used enterprise tools, in order to ease adoption and improve efficiency in protecting data and intellectual property. Below is an updated list of the products that are 100% compatible with Sealpath IRM, which we at Sixe have tested and which many customers around the world already rely on.

Sealpath main integrations through optional modules

All these modules are available in on-premises (local installation) or cloud (SaaS) format.

  1. Sealpath for RDS: This module allows working in remote desktop or Citrix environments that require a single installer per terminal server.
  2. Sealpath for File Servers and SharePoint: Allows the automatic protection of folders in file servers, SharePoint, OneDrive, Alfresco and other documentation repositories.
  3. Automatic Protection for Exchange: Provides automatic protection of message bodies and attachments in Microsoft Exchange according to specific rules.
  4. AD/LDAP Connector: Enables integration with Active Directory or LDAP in a SaaS system.
  5. SealPath for Mobile Devices: Provides access to protected documentation through the SealPath Document Viewer app or Microsoft Office Mobile on iOS, Android or macOS devices.
  6. Platform customization: Includes the ability to customize the appearance of email invitations, user and administrator portals.
  7. Multi-organization: Offers the possibility of having more than one “host” or sub-organization linked to the same company. Ideal for large corporate groups or government offices with different types of hierarchies or very complex organizational charts.
  8. DLP Connectors: Enables automatic protection of information based on rules configured in Symantec, McAfee and ForcePoint DLP, which are the solutions we like the most in this sector.
  9. SealPath Sync Connector: Facilitates offline access to a large number of files stored in certain folders on a user’s device.
  10. Protection Based Classification Connector: Enables automatic protection of documents classified by an information classification solution that includes tags in the file metadata.
  11. SealPath Secure Browser: Enables viewing and editing of protected documents in the web browser.
  12. SealPath SDK (.Net, REST, command-line): Allows the use of SealPath SDK in REST, .Net or command-line format for the integration of protection with certain corporate applications.

As you can see, there is no shortage of modules and add-ons that extend the capabilities of the main Sealpath IRM solution, allowing organizations to adapt protection and access control to their specific needs… and above all, without having to change the way they work or the products they already use.

Should we deploy it on-premises, or use SaaS mode?

This is the second major question customers ask. Sealpath IRM offers two deployment modes: Software as a Service (SaaS) and On-Premises. Both options offer the same functionality and data protection, but differ in how they are hosted and managed. The main differences between the two are presented below:

  1. Hosting and infrastructure management:

  • Sealpath SaaS: In the SaaS mode, the infrastructure and servers are hosted and managed by Sealpath in the cloud. This means that customers do not need to worry about server maintenance, upgrades and security, as these aspects are Sealpath’s responsibility.
  • Sealpath On-Premises: In the On-Premises option, the infrastructure and servers are deployed and managed within the client’s facilities or in its own private cloud environment. This gives customers greater control over the location and access to their data, but also means they must manage and maintain the servers themselves.
  2. Integration with Active Directory and LDAP:

  • Sealpath SaaS: In the SaaS version, customers can integrate Sealpath with their Active Directory or LDAP systems using the AD/LDAP connector. This connector allows synchronizing users and groups with the Sealpath system and facilitates the administration of permissions and access policies.
  • Sealpath On-Premises: In the On-Premises version, integration with Active Directory or LDAP is included by default, and no additional connector needs to be purchased.
  3. Licensing of additional modules:

  • Sealpath SaaS: Some modules, such as SealPath for Mobile Devices, are included in the SaaS version at no additional cost.
  • Sealpath On-Premises: In the On-Premises option, these modules must be purchased separately according to the organization’s needs.
  4. Platform customization:

  • Sealpath SaaS: Customization of the look and feel of the platform (colors, logos, etc.) may be limited compared to the On-Premises option, since it is based on a shared environment in the cloud.
  • Sealpath On-Premises: The On-Premises option allows for greater customization of the platform, since it is hosted in a dedicated environment controlled by the client.

The choice between Sealpath SaaS and On-Premises depends on the organization’s needs and preferences regarding control over infrastructure, costs and ease of administration. Both options provide robust protection and identical functionality for controlling access to confidential information and intellectual property. Unlike Microsoft and other competitors, Sealpath does not force customers into one model or the other, because only they know what is best for them.

Interested in Sealpath IRM?

Request a no-obligation demonstration

SiXe Ingeniería