From 1bd00c6a700aeab82c0cdae0ecebca9513cb6932 Mon Sep 17 00:00:00 2001
From: Piotr Konopka
Date: Wed, 8 Apr 2026 17:21:18 +0200
Subject: [PATCH] Minor documentation inconsistency fixes

Out of curiosity, I asked an LLM agent to review the documentation
against the code and point out inconsistencies. That's what it came up
with.
---
 README.md                  | 2 +-
 core/integration/README.md | 2 +-
 docs/building.md           | 2 +-
 docs/running.md            | 4 ++--
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index 4889d30cb..eb4d30be3 100644
--- a/README.md
+++ b/README.md
@@ -34,7 +34,7 @@ There are two ways of interacting with AliECS:
 
 ### I want to ensure AliECS can **run and control my process**
 
-* **My software is based on FairMQ and/or O² DPL (Data Processing Later)**
+* **My software is based on FairMQ and/or O² DPL (Data Processing Layer)**
 
   AliECS natively supports FairMQ (and DPL) devices. Head to [ControlWorkflows](https://github.com/AliceO2Group/ControlWorkflows) for instructions on how to configure your software to be controlled by AliECS.
 
diff --git a/core/integration/README.md b/core/integration/README.md
index eda781e82..41996b2b9 100644
--- a/core/integration/README.md
+++ b/core/integration/README.md
@@ -177,7 +177,7 @@ DD scheduler plugin informs the Data Distribution software about the pool of FLP
 
 See [Legacy events: Kafka plugin](/docs/kafka.md#legacy-events-kafka-plugin)
 
-# LHC plugin
+## LHC
 
 This plugin listens to Kafka messages coming from the LHC DIP Client and pushes any relevant internal notifications to the AliECS core. Its main purpose is to provide basic information about ongoing LHC activity (e.g. fill information) to affected parties and allow AliECS to react upon them (e.g. by automatically stopping a physics run when stable beams are over).
 
diff --git a/docs/building.md b/docs/building.md
index 0a4243ec9..a692b7f91 100644
--- a/docs/building.md
+++ b/docs/building.md
@@ -80,7 +80,7 @@ Running `make` will take a while as all dependencies are gathered, built and ins
 $ make all
 ```
 
-You should find several executables including `o2control-core`, `o2control-executor` and `coconut` in `bin`.
+You should find several executables including `o2-aliecs-core`, `o2-aliecs-executor` and `coconut` in `bin`.
 
 For subsequent builds (after the first one), plain `make` (instead of `make all`) is sufficient. See the [Makefile reference](makefile_reference.md) for more information.
 
diff --git a/docs/running.md b/docs/running.md
index 53d80afab..d3fd958fa 100644
--- a/docs/running.md
+++ b/docs/running.md
@@ -10,7 +10,7 @@ The recommended way to set up a Mesos cluster is by performing a complete deploy
 
 The AliECS core on the head node should be stopped (`systemctl stop o2-aliecs-core`) and your own AliECS core should be made to point to the head node. Typically, it can be done by replacing the AliECS core binary on the head node with your own and restarting the `o2-aliecs-core` systemd service.
 
-The following example flags assume a remote head node `centosvmtest`, the use of the default `settings.yaml` file, very verbose output, verbose workflow dumps on every workflow deployment, and the executor having been copied (`scp`) to `/opt/o2control-executor` on all controlled nodes:
+The following example flags assume a remote head node `centosvmtest`, the use of the default `settings.yaml` file, very verbose output, verbose workflow dumps on every workflow deployment, and the executor having been copied (`scp`) to `/opt/o2-aliecs-executor` on all controlled nodes:
 
 ```bash
 --coreConfigurationUri
@@ -22,7 +22,7 @@ http://centosvmtest:5050/api/v1/scheduler
 --verbose
 --veryVerbose
 --executor
-/opt/o2control-executor
+/opt/o2-aliecs-executor
 --dumpWorkflows
 ```