
One of the OCP deployment use cases getting more attention lately is GitOps ZTP. This article describes how to use DCI to deploy an OCP cluster following this methodology.
This blog post aims to guide DCI users in locating the resources provided by our DCI agents (dci-openshift-agent, dci-openshift-app-agent, and dci-pipeline) on the server used to launch these agents. It also explains each piece of installed code. Having this information clearly described is crucial for troubleshooting, as it helps users know where to look in the DCI agents' code.
The DCI OpenShift agents bring transparent support for the newly added Image Digest Mirror Sets (IDMS) in OpenShift. IDMS is mainly used in disconnected clusters, where it provides registry mirroring for images that are not reachable due to network constraints.
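To make the resource concrete, here is a minimal sketch that builds an ImageDigestMirrorSet manifest with Python and PyYAML; the registry hostnames are hypothetical placeholders, not values taken from the post.

```python
# Minimal sketch of an ImageDigestMirrorSet manifest, generated with
# Python/PyYAML. The registry hostnames below are hypothetical.
import yaml

idms = {
    "apiVersion": "config.openshift.io/v1",
    "kind": "ImageDigestMirrorSet",
    "metadata": {"name": "example-idms"},
    "spec": {
        "imageDigestMirrors": [
            {
                # Images normally pulled from this source...
                "source": "registry.redhat.io/openshift4",
                # ...are pulled from this reachable internal mirror instead.
                "mirrors": ["mirror.internal.example.com/openshift4"],
            }
        ]
    },
}

# Print the manifest; on a real cluster it would be applied with
# `oc apply -f -`.
print(yaml.safe_dump(idms, sort_keys=False))
```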
This blog post presents practical examples of dci-pipeline usage that can be useful when testing and troubleshooting OCP clusters and the workloads deployed on top of them with DCI.
This blog post continues the overview of dci-pipeline and related testing tools, focusing on some useful features that can help you tackle testing and troubleshooting with DCI jobs.
We are addressing DevOps and QA engineers who run and validate heavy test suites with thousands of tests in every JUnit file. These engineers often grow tired of manually compiling a list of failed tests to send back to the developers. Automated validation can help, but it is challenging to implement for real-world needs, such as making sure that all tests pass or that every test beginning with 'network' passes. In this post, we showcase an Ansible role that automates this type of validation.
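As a rough illustration of the kind of check such a role automates (written here in plain Python rather than Ansible, with a hypothetical file name), the snippet below scans a JUnit XML file for failing tests whose names start with "network":

```python
# Illustrative sketch (not the Ansible role itself): fail if any test
# whose name starts with "network" did not pass in a JUnit XML file.
import sys
import xml.etree.ElementTree as ET

def failed_network_tests(junit_path):
    tree = ET.parse(junit_path)
    failed = []
    # <testcase> elements may sit under <testsuite> or <testsuites>.
    for case in tree.getroot().iter("testcase"):
        name = case.get("name", "")
        if not name.startswith("network"):
            continue
        # A failed or errored testcase carries a <failure>/<error> child.
        if case.find("failure") is not None or case.find("error") is not None:
            failed.append(name)
    return failed

if __name__ == "__main__":
    failures = failed_network_tests(sys.argv[1])  # e.g. junit_e2e.xml
    for name in failures:
        print(f"FAILED: {name}")
    sys.exit(1 if failures else 0)
```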
In previous blog posts, we learned about the OCP on Libvirt project and the benefits it brings with regard to the flexible deployment of OCP clusters whose nodes are virtual machines running on KVM. However, we have only covered the case of a single physical server. Can we use multiple physical servers to set up a lab with multiple, distributed libvirt-based OCP clusters? The answer is yes! In this blog post, we will explain the main requirements and challenges in terms of networking.
One of the main challenges when migrating workloads to OCP 4.12 is the massive deprecation of beta APIs. In this post, we will discuss how to identify the APIs that are going to be deprecated and how DCI can simplify this task for you.
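For context, OpenShift tracks per-API usage in APIRequestCount objects, which is one common way to find deprecated APIs that are still being called. Below is a minimal sketch using the kubernetes Python client; it assumes a kubeconfig with sufficient read access, and the status field names come from the APIRequestCount API (verify them against your cluster version):

```python
# Sketch: list APIs that are still receiving requests but are scheduled
# for removal, using OpenShift's cluster-scoped APIRequestCount objects.
from kubernetes import client, config

config.load_kube_config()  # assumes a configured kubeconfig
api = client.CustomObjectsApi()

counts = api.list_cluster_custom_object(
    group="apiserver.openshift.io", version="v1", plural="apirequestcounts"
)

for item in counts["items"]:
    status = item.get("status", {})
    removed = status.get("removedInRelease")
    if removed and status.get("requestCount", 0) > 0:
        # This API is still in use but disappears in release `removed`.
        print(f"{item['metadata']['name']} is removed in {removed}")
```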
The storage tester role, shipped with the DCI OpenShift Agent, allows the user to test the behavior of a storage class during an upgrade. It can be seen as an example of the resilience testing that can be done on a cluster with DCI.
This post describes what a pipeline is in the DCI world and how to use it, step by step, to create workflows in our testing environment.
This post is a tutorial on how to install a standard three-master, two-worker OpenShift cluster on a single powerful baremetal server. The idea is to first create all the required virtual machines and networks using the libvirt KVM driver, and then install OpenShift on that virtual cluster using DCI.
ACM is a tool that allows the deployment and management of OCP clusters and of workloads on top of them. In this post, we will go through an example of how to use ACM to deploy an SNO instance the DCI way.
ACM is a tool that allows deploying and managing OCP clusters and workloads on top of them. The DCI agent now supports automating the creation of SNO instances through the integration of the ACM roles.
dci-queue: a simple resource management system in DCI
Prefixes allow you to control the inventory and settings of different DCI environments from a single central directory. We hope this article will convince you of the convenience of using prefixes in your DCI labs and will serve as a solid foundation for you to start leveraging their potential.
Good practices in naming for DCI jobs
Developing for the OCP ecosystem can be a daunting task, and developing for the ecosystem that tests it can be even more so. Fear not: we have tried to make it as painless as possible.
Every day, the Red Hat DCI platform runs hundreds of jobs from different teams and partners, covering all our products and serving different purposes: debugging, certification, testing, daily activities, etc. You may need to build a dedicated dashboard to follow your own specific activity, display graphical results, study specific data, or identify particular job behaviors. For such requirements, Google Sheets gives you the facilities to implement your ideas in a few minutes. In this blog post, you will learn how to quickly build a dashboard with Google Sheets by requesting DCI data, retrieving it dynamically, and sending a PDF report by email periodically.
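The heart of such a dashboard is a periodic HTTP request for job data. Here is a minimal sketch of that request in Python with the requests library; the endpoint, query parameters, and basic-auth scheme are assumptions based on DCI's public REST API at https://api.distributed-ci.io, and the credentials are placeholders.

```python
# Sketch: fetch recent DCI jobs over plain HTTP, the same kind of
# request a spreadsheet dashboard would issue on a schedule.
import requests

DCI_CS_URL = "https://api.distributed-ci.io"
AUTH = ("<dci_login>", "<dci_password>")  # placeholder credentials

resp = requests.get(
    f"{DCI_CS_URL}/api/v1/jobs",
    auth=AUTH,
    # Hypothetical query parameters: newest jobs first, 20 at a time.
    params={"limit": 20, "sort": "-created_at"},
)
resp.raise_for_status()

for job in resp.json()["jobs"]:
    print(job["id"], job.get("status"), job.get("name"))
```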
Components are the artifacts used in a DCI job; they are the elements that distinguish jobs and the elements under test in each job. This post discusses their use and gives an example of how to automate them so they are continuously tested.
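A sketch of the core of that automation follows: check whether a topic has published a component newer than the last one tested, which is the trigger for scheduling a fresh job. The /topics/&lt;id&gt;/components endpoint, query parameters, and field names are assumptions based on the public DCI REST API; the IDs are placeholders.

```python
# Sketch: detect a new component for a topic since the last tested one.
import requests

DCI_CS_URL = "https://api.distributed-ci.io"
AUTH = ("<dci_login>", "<dci_password>")  # placeholder credentials
TOPIC_ID = "<topic-uuid>"                 # hypothetical topic ID
LAST_TESTED = "<component-uuid>"          # recorded from the previous job

resp = requests.get(
    f"{DCI_CS_URL}/api/v1/topics/{TOPIC_ID}/components",
    auth=AUTH,
    params={"limit": 1, "sort": "-created_at"},  # newest component only
)
resp.raise_for_status()

latest = resp.json()["components"][0]
if latest["id"] != LAST_TESTED:
    print(f"New component {latest['name']}: time to schedule a job")
```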
In a previous post, we introduced the Red Hat Distributed CI (DCI) infrastructure and how it enables Red Hat partners to integrate into the Red Hat CI workflow. Now, we will focus on how to interact with DCI through the Python API.
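As a taste of what that looks like, here is a short sketch using the python-dciclient library. The helper names below (build_dci_context, job.list) follow the patterns shown in the dciclient documentation, so verify them against your installed version; the credentials are placeholders.

```python
# Sketch: list recent jobs through python-dciclient. Helper names are
# taken from the dciclient docs; double-check against your version.
from dciclient.v1.api import context as dci_context
from dciclient.v1.api import job as dci_job

ctx = dci_context.build_dci_context(
    dci_cs_url="https://api.distributed-ci.io",
    dci_login="<dci_login>",        # placeholder credentials
    dci_password="<dci_password>",
)

# API helpers return a `requests` Response object.
resp = dci_job.list(ctx, limit=10)
for job in resp.json()["jobs"]:
    print(job["id"], job["status"])
```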
OpenShift Container Platform is meant to be the standard for modern Telco infrastructure. One of the goals of the Telco Partner CI Lab team is to test the installation of OpenShift with all the requirements needed for Telco workloads. For one of our main partners, this means installing or upgrading an OpenShift platform, along with extra operators to handle specific hardware like SR-IOV cards and the external products for storage or load balancing needed to run CNFs, in order to reduce any risk.
Red Hat mainly provides infrastructure software such as RHEL, OpenShift, and OpenStack. These are established technologies for our customers as well as our partners. To keep the software as stable as possible, Red Hat runs various Quality Assurance and Continuous Integration processes. In this article, we are going to focus on one specifically: the CI workflow from the point of view of a Red Hat partner.