2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -536,7 +536,7 @@ All notable changes to this project will be documented in this file.
### Added

- Support for 1.15.0 ([#125])
-- Sensitive property key is setable via a secret ([#125])
+- Sensitive property key is settable via a secret ([#125])

### Changed

2 changes: 1 addition & 1 deletion docs/modules/nifi/pages/usage_guide/custom-components.adoc
@@ -153,7 +153,7 @@ binaryData:
custom-nifi-processor-1.0.0.nar: ...
----

-The Python script is taken from {nifi-docs-flowfile-source}[the offical NiFi Python developer guide].
+The Python script is taken from {nifi-docs-flowfile-source}[the official NiFi Python developer guide].

Afterwards, we need to mount the ConfigMap as described in xref:nifi:usage_guide/extra-volumes.adoc[] and extend the `nifi.properties` file:

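The mounted ConfigMap then has to be wired into `nifi.properties`. A hedged sketch of how that extension could look via `configOverrides` (the property key suffix `custom` and the `/stackable/userdata/...` mount path are assumptions for illustration and must match the extra volume actually configured):

```yaml
spec:
  nodes:
    configOverrides:
      nifi.properties:
        # Assumed key and path: point an additional NiFi Python extension
        # source directory at the mounted ConfigMap contents.
        nifi.python.extensions.source.directory.custom: /stackable/userdata/nifi-python-processors
```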
2 changes: 1 addition & 1 deletion docs/modules/nifi/pages/usage_guide/security.adoc
@@ -364,7 +364,7 @@ allow := {
----
<1> The default rule should return `"resourceNotFound": true`. If this is not set, NiFi's access policy inheritance will not work. Any values for the `allowed` field in the response will be ignored.
<2> A rule that grants all users access to the root process group and thus to all components in the NiFi instance.
-<3> A rule that denies access to a specific process group for the user "alice". For this process group the default rego rule will not be applied and NiFi's component inhertiance will not be used. All child components of this process group will also be authorized based on this rule unless a more granular rule overrides it.
+<3> A rule that denies access to a specific process group for the user "alice". For this process group the default rego rule will not be applied and NiFi's component inheritance will not be used. All child components of this process group will also be authorized based on this rule unless a more granular rule overrides it.

[#communication-between-nifi-nodes]
==== Communication between NiFi nodes
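The three callouts above can be sketched as a Rego policy. This is a minimal sketch only: the package name, the `input` field names (`input.resource.id`, `input.identity.name`), and the process group ids are assumptions for illustration, not the exact input contract of the NiFi OPA authorizer.

```rego
package nifi

import rego.v1

# <1> Default rule: signal "resourceNotFound" so NiFi's own access
#     policy inheritance applies when no specific rule matches.
default allow := {"resourceNotFound": true}

# <2> Grant all users access to the root process group
#     (resource id is an assumption).
allow := {"allowed": true} if {
	input.resource.id == "/process-groups/root"
}

# <3> Deny user "alice" access to one specific process group; child
#     components inherit this decision unless a more granular rule applies.
allow := {"allowed": false} if {
	input.identity.name == "alice"
	input.resource.id == "/process-groups/example-group-id"
}
```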
@@ -116,7 +116,7 @@ and `Hadoop Configuration Resources` to `/stackable/userdata/nifi-hive-s3-config
Afterwards you can create the `PutIceberg` processor and configure the `HiveCatalogService`.
Also set `Catalog Namespace` to your schema name and the `Table Name`.

-For the `File Format` it is recommened to use `PARQUET` or `ORC` rather than `AVRO` for performance reasons, but you can leave it empty or choose your desired format.
+For the `File Format` it is recommended to use `PARQUET` or `ORC` rather than `AVRO` for performance reasons, but you can leave it empty or choose your desired format.

You should end up with the following `PutIceberg` processor:

2 changes: 1 addition & 1 deletion extra/crds.yaml
@@ -150,7 +150,7 @@ spec:
enabled:
default: true
description: |-
-Wether the Kubernetes Job should be created, defaults to true. It can be helpful to disable
+Whether the Kubernetes Job should be created, defaults to true. It can be helpful to disable
the Job, e.g. when you configOverride an authentication mechanism, which the Job currently
can't use to authenticate against NiFi.
type: boolean
2 changes: 1 addition & 1 deletion rust/operator-binary/src/config/mod.rs
@@ -442,7 +442,7 @@ pub fn build_nifi_properties(
"2".to_string(),
);

-// Volatile Provenance Respository Properties
+// Volatile Provenance Repository Properties
properties.insert(
"nifi.provenance.repository.buffer.size".to_string(),
"100000".to_string(),
2 changes: 1 addition & 1 deletion rust/operator-binary/src/controller.rs
@@ -964,7 +964,7 @@ async fn build_node_rolegroup_statefulset(
// uses RC2. Thus, the keytool usage here LGTM (no alias trickery) and has my nod of approval.
prepare_args.extend(vec![
// The source directory is a secret-op mount and we do not want to write / add anything in there
-// Therefore we import all the contents to a truststore in "writeable" empty dirs.
+// Therefore we import all the contents to a truststore in "writable" empty dirs.
// Keytool is only barking if a password is not set for the destination truststore (which we set)
// and do provide an empty password for the source truststore coming from the secret-operator.
// Using no password will result in a warning.
2 changes: 1 addition & 1 deletion rust/operator-binary/src/crd/mod.rs
@@ -322,7 +322,7 @@ pub fn default_allow_all() -> bool {
#[derive(Clone, Debug, Deserialize, JsonSchema, PartialEq, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct CreateReportingTaskJob {
-/// Wether the Kubernetes Job should be created, defaults to true. It can be helpful to disable
+/// Whether the Kubernetes Job should be created, defaults to true. It can be helpful to disable
/// the Job, e.g. when you configOverride an authentication mechanism, which the Job currently
/// can't use to authenticate against NiFi.
#[serde(default = "CreateReportingTaskJob::default_enabled")]
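As a usage sketch of the `enabled` flag documented above: disabling the reporting task Job from a cluster resource might look like the following (the `apiVersion`, the cluster name, and the nesting under `spec.clusterConfig` are assumptions for illustration):

```yaml
apiVersion: nifi.stackable.tech/v1alpha1
kind: NifiCluster
metadata:
  name: simple-nifi
spec:
  clusterConfig:
    # Skip creating the reporting task Job, e.g. when a configOverride
    # switches to an authentication mechanism the Job cannot use.
    createReportingTaskJob:
      enabled: false
```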
2 changes: 1 addition & 1 deletion rust/operator-binary/src/operations/upgrade.rs
@@ -88,7 +88,7 @@ pub async fn cluster_version_update_state(
// we requeue to wait until a full stop has been performed.
if target_replicas == 0 && current_replicas > 0 {
tracing::info!(
-"Cluster is performing a full restart at the moment and still shutting down, remaining replicas: [{}] - requeueing to wait for shutdown to finish",
+"Cluster is performing a full restart at the moment and still shutting down, remaining replicas: [{}] - requeuing to wait for shutdown to finish",
current_replicas
);
return Ok(ClusterVersionUpdateState::UpdateInProgress);
4 changes: 2 additions & 2 deletions rust/operator-binary/src/product_logging.rs
@@ -39,7 +39,7 @@ pub const NIFI_LOG_FILE: &str = "nifi.log4j.xml";
const CONSOLE_CONVERSION_PATTERN: &str = "%date %level [%thread] %logger{40} %msg%n";
// This is required to remove double entries in the nifi.log4j.xml as well as nested
// console output like: "<timestamp> <loglevel> ... <timestamp> <loglevel> ...
-const ADDITONAL_LOGBACK_CONFIG: &str = r#"    <appender name="PASSTHROUGH" class="ch.qos.logback.core.ConsoleAppender">
+const ADDITIONAL_LOGBACK_CONFIG: &str = r#"    <appender name="PASSTHROUGH" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%msg%n</pattern>
</encoder>
@@ -78,7 +78,7 @@ pub fn extend_role_group_config_map(
.value as u32,
CONSOLE_CONVERSION_PATTERN,
log_config,
-Some(ADDITONAL_LOGBACK_CONFIG),
+Some(ADDITIONAL_LOGBACK_CONFIG),
),
);
}
2 changes: 1 addition & 1 deletion rust/operator-binary/src/security/mod.rs
@@ -19,7 +19,7 @@ pub enum Error {
#[snafu(display("tls failure"))]
Tls { source: tls::Error },

-#[snafu(display("sensistive key failure"))]
+#[snafu(display("sensitive key failure"))]
SensitiveKey { source: sensitive_key::Error },

#[snafu(display("failed to ensure OIDC admin password exists"))]
@@ -20,7 +20,7 @@ kind: Secret
metadata:
name: secret-operator-keytab
data:
-# To create keytab. When promted enter password asdf
+# To create keytab. When prompted enter password asdf
# cat | ktutil << 'EOF'
# list
# add_entry -password -p stackable-secret-operator@CLUSTER.LOCAL -k 1 -e aes256-cts-hmac-sha384-192