Forcepoint/fp-ngfw-smc-toolbox
SMC Toolbox

Overview

The SMC Toolbox is a comprehensive Ansible automation solution designed to simplify and streamline the management of Security Management Center (SMC) instances. Whether you're deploying new instances, upgrading existing systems, collecting diagnostics, or performing routine maintenance, this toolbox provides production-ready playbooks and roles that handle complex operational tasks with consistency and reliability.

What This Tool Does

This automation framework provides:

  • Remote Upgrades: Seamlessly upgrade SMC Manager instances across your infrastructure with automatic rollback capabilities
  • Backup & Restore: Create, manage, and restore SMC Manager backups with local storage support
  • Service Management: Start, stop, and restart SMC services with health checks and port verification
  • Diagnostics Collection: Gather sginfo and system traces for troubleshooting with local storage
  • Configuration Management: Add, update, or remove configuration options across your SMC instances
  • Log Management: Enable/disable debug traces for detailed troubleshooting, log server certification
  • Dependency Verification: Check and install required packages on target systems

Key Features

  • Organized Structure: Playbooks categorized by function (upgrade, backup, diagnostics, service, configuration)
  • Reusable Roles: Common operations encapsulated as professional Ansible roles for consistency
  • Flexible Inventory: Support for single-instance and multi-instance deployments, SSH and AWS SSM connections
  • Local Storage: Backup and diagnostics stored locally for easy management and retrieval
  • Production Ready: Tested against multiple SMC versions with comprehensive error handling
  • Easy Extension: Clear structure makes it simple to add new playbooks and customize for your needs

Supported Operations

Operation       | Playbook                                               | Use Case
--------------- | ------------------------------------------------------ | ---------------------------------------------------------
Upgrade         | playbooks/upgrade/remote_upgrade.yml                   | Deploy new SMC versions
Revert Upgrade  | playbooks/upgrade/revert_installation.yml              | Roll back failed upgrades
Service Control | playbooks/service/*.yml                                | Start/stop/restart services
Backup          | playbooks/backup/create_management_server_backup.yml   | Create management server backup
Restore         | playbooks/backup/restore_management_server_backup.yml  | Restore management server backup
Diagnostics     | playbooks/diagnostics/*.yml                            | Collect logs and system info
Configuration   | playbooks/configuration/*.yml                          | Manage SMC settings like SGConfiguration or debug traces
Jar Deployment  | playbooks/upgrade/deploy_restore_jars.yml              | Debug an environment with specific code

This tool helps manage SMC instances using Ansible.

Project Structure

smc-toolbox/
├── playbooks/              # Organized playbooks by function
│   ├── upgrade/           # SMC upgrade operations
│   ├── backup/            # Backup and restore operations
│   ├── diagnostics/       # Diagnostics collection and trace checking
│   ├── configuration/     # Configuration management
│   ├── service/           # Service management (start/stop/restart)
│   └── dependencies/      # Dependency checking and installation
├── roles/                 # Reusable Ansible roles
│   ├── welcome/
│   ├── agreement/
│   └── system_info/
├── inventory/             # Inventory and group variables
│   ├── group_vars/        # Group-specific variables
│   └── host_vars/         # Host-specific variables
├── ansible.cfg           # Ansible configuration
├── requirements.yml      # Collection dependencies
└── docs/                 # Documentation

Prerequisites

Before using this project, make sure all requirements are met. The current implementation expects one service per server.

Ansible version >= 2.13.6 is required. If the aws_ssm connection is needed, ansible-core 2.15.9 or later is required.
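A quick way to check the installed ansible-core version against these minimums is a small shell sketch (the sed pattern assumes the modern `ansible [core X.Y.Z]` version banner):

```shell
# Return success (0) when version $1 is greater than or equal to $2.
# Relies on GNU sort -V for version-aware ordering.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | tail -n1)" = "$1" ]
}

required="2.13.6"   # use 2.15.9 when the aws_ssm connection is needed
installed="$(ansible --version 2>/dev/null | sed -n 's/^ansible \[core \([0-9.]*\)\].*/\1/p')"

if version_ge "$installed" "$required"; then
    echo "ansible-core $installed meets the minimum ($required)"
else
    echo "ansible-core '$installed' is older than $required" >&2
fi
```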

Clone this project

Clone this repository onto the control node.
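A minimal setup sketch, assuming the repository URL follows from the GitHub project name Forcepoint/fp-ngfw-smc-toolbox:

```shell
# Clone the toolbox (URL assumed from the project name)
repo_url="https://github.com/Forcepoint/fp-ngfw-smc-toolbox.git"
git clone "$repo_url"
cd "$(basename "$repo_url" .git)"

# Install the Ansible collection dependencies declared in requirements.yml
ansible-galaxy collection install -r requirements.yml
```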

Install Ansible Bundle

Ansible must be run from a landing machine (control node). This machine should fit the topology and have connection rights (SSH) to the Management/Log servers to upgrade.

┌──────────────────────────────┐
│                              │
│     Ansible Control Node     │
│                              │
└──────────────┬───────────────┘
               │
       ┌───────┴───────┐
       │               │
       ▼               ▼
┌─────────────┐  ┌─────────────┐
│    SSH      │  │    SSM      │
│ Connection  │  │ Connection  │
└──────┬──────┘  └──────┬──────┘
       │               │
       │               ▼
       │        ┌─────────────┐
       │        │   AWS SSM   │
       │        │   Service   │
       │        └──────┬──────┘
       │               │
       ▼               ▼
┌──────────────────────────────┐
│                              │
│     Target SMC Servers       │
│  (Management/Log Servers)    │
│                              │
└──────────────────────────────┘

To install Ansible on the control machine:

  • sudo apt-get install -y python3 python3-pip
  • pip3 install ansible

Hosts File

A topology (hosts) file must be created for each OS type (Linux/Windows). It should contain all Log Servers and Management Servers that need to be upgraded. (It is possible to create several topology files to handle the passive/active server case.)

Hosts files are placed in the inventory/ directory and must follow the topology structure shown in the following examples:

!!! WARNING !!!
Currently only management and log servers are supported

When management and log servers are running on different servers:

[mgt]

[log]

[local]
localhost

Or

When management and log servers are running on the same server:

[mgt_log]

[local]
localhost

Example with all servers in one environment:

[mgt]
3.75.208.12   ansible_user=ubuntu   ansible_ssh_private_key_file=/home/user/.ssh/id_rsa
[log]
18.159.196.1   ansible_user=ubuntu   ansible_ssh_private_key_file=/home/user/.ssh/id_rsa
[local]
localhost

Example with groups of log servers:

[mgt]
1.1.2.3 ansible_user=ubuntu   ansible_ssh_private_key_file=/home/user/.ssh/id_rsa

[log:children]
Log_server_group_1
Log_server_group_2

[Log_server_group_1]
1.12.2.13 ansible_user=ubuntu   ansible_ssh_private_key_file=/home/user/.ssh/id_rsa
1.12.2.15 ansible_user=ubuntu   ansible_ssh_private_key_file=/home/user/.ssh/id_rsa
1.12.2.17 ansible_user=ubuntu   ansible_ssh_private_key_file=/home/user/.ssh/id_rsa

[Log_server_group_2]
1.12.2.20 ansible_user=ubuntu   ansible_ssh_private_key_file=/home/user/.ssh/id_rsa
1.12.2.21 ansible_user=ubuntu   ansible_ssh_private_key_file=/home/user/.ssh/id_rsa
1.12.2.22 ansible_user=ubuntu   ansible_ssh_private_key_file=/home/user/.ssh/id_rsa

[local]
localhost

Additional configuration

display_banner (true/false): controls whether the banner is displayed when playbooks run.
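Such options can be set through group variables; a minimal sketch (the file path is illustrative):

```yaml
# inventory/group_vars/all.yml
display_banner: true
```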

SMC toolbox playbooks

Check dependencies

Before using the playbooks, you can check that the remote destinations have all needed packages.

ansible-playbook playbooks/dependencies/check_and_install_dependancies.yml -i inventory/preprod_hosts -l mgt

Remote upgrade

The SMC golden master installation zip file needed for the upgrade is expected in the working directory, /data by default. It can be downloaded from https://support.forcepoint.com/s/ (or any NAS for internal usage), fetched by passing version and os as extra vars, or retrieved with the dedicated playbook playbooks/upgrade/get_last_official_build_zip.yml.

!!! WARNING !!!
Only Linux is supported for remote upgrade

Example:

ansible-playbook playbooks/upgrade/get_last_official_build_zip.yml -i inventory/preprod_hosts --extra-vars "version=7.0.2 os=linux"

By default, the build is searched for in the .data folder, but this can be changed with:

--extra-vars "local_data_folder=MY_CUSTOM_PATH"

Also, by default, ansible_user is assumed to be in sudoers. If that is not the case, or a password is required, the playbook can be run with -K or --ask-become-pass; the password will be requested at a prompt.

Then, to proceed with the upgrade (for example):

ansible-playbook playbooks/upgrade/remote_upgrade.yml -i inventory/preprod_hosts

or

ansible-playbook playbooks/upgrade/remote_upgrade.yml -i inventory/preprod_hosts -l log_server_group_1

or, with a privilege escalation password needed:

ansible-playbook playbooks/upgrade/remote_upgrade.yml -i inventory/preprod_hosts -K -l log_server_group_1

or, getting the upgrade zip from the official Forcepoint website:

ansible-playbook playbooks/upgrade/remote_upgrade.yml -i inventory/preprod_hosts --extra-vars "version=7.0.2 os=linux"

Revert installation

In case of error, you might need to revert the SMC installation.

ansible-playbook playbooks/upgrade/revert_installation.yml -i inventory/preprod_hosts

or

ansible-playbook playbooks/upgrade/revert_installation.yml -i inventory/preprod_hosts -l log_server_group_1

Start/Stop/Restart server

Select the corresponding playbook:

  • playbooks/service/management_server_service.yml
  • playbooks/service/log_server_service.yml

Change the server service state: start/stop/restart.

For the management server, it is possible to do an additional check with the extra var smc_api_port. Once set, the playbook ensures the port is up and running before proceeding.

ansible-playbook playbooks/service/management_server_service.yml -i inventory/preprod_hosts --extra-vars "action=start"

or

ansible-playbook playbooks/service/log_server_service.yml -i inventory/preprod_hosts --extra-vars "action=restart" -l log_server_group_1

or

ansible-playbook playbooks/service/management_server_service.yml -i inventory/preprod_hosts --extra-vars "action=start smc_api_port=8082"

Collect backup

Collect the SMC management server backup. By default, the backup is stored in ./diagnostics/[ansible_host]/backup. It can be overridden with:

--extra-var "local_diagnostics_dir=/home/user/diags"

ansible-playbook playbooks/backup/collect_backup.yml -i inventory/preprod_hosts

or

ansible-playbook playbooks/backup/collect_backup.yml -i inventory/preprod_hosts -l log_server_group_1

Collect sginfo

Collect SMC server(s) sginfo. By default, it is stored in ./diagnostics/[ansible_host]/sginfo. It can be overridden with:

--extra-var "local_diagnostics_dir=/home/user/diags"

ansible-playbook playbooks/diagnostics/collect_sginfo.yml -i inventory/preprod_hosts

or

ansible-playbook playbooks/diagnostics/collect_sginfo.yml -i inventory/preprod_hosts -l log_server_group_1

Collect diagnostics

Collect SMC server(s) sginfo and the SMC management server backup (a combination of the two previous playbooks). By default, diagnostics are stored in ./diagnostics. It can be overridden with:

--extra-var "local_diagnostics_dir=/home/user/diags"

ansible-playbook playbooks/diagnostics/collect_diagnostics.yml -i inventory/preprod_hosts

or

ansible-playbook playbooks/diagnostics/collect_diagnostics.yml -i inventory/preprod_hosts -l log_server_group_1

Check server traces

Check server traces against a known pattern list. The check is only done on the current SMC build, so older traces are ignored.

ansible-playbook playbooks/diagnostics/check_server_traces.yml -i inventory/preprod_hosts

or

ansible-playbook playbooks/diagnostics/check_server_traces.yml -i inventory/preprod_hosts -l log_server_group_1

Add/Remove debug traces

Adding debug traces is sometimes helpful to get more information about a failure.

Add debug traces

ansible-playbook playbooks/configuration/add_remove_debug_traces.yml -i inventory/preprod_hosts --extra-vars "state=present debug=stonesoft.dataserver.protocol debug_level=debug" -l log

Remove debug traces

ansible-playbook playbooks/configuration/add_remove_debug_traces.yml -i inventory/preprod_hosts --extra-vars "state=absent debug=stonesoft.dataserver.protocol debug_level=debug" -l log

Add/Remove option in configuration file

Add, update, or remove a parameter in one of the SD-WAN manager configuration files (SGConfiguration, LogServerConfiguration, ...). The configuration file must be located in $SG_HOME/data.

Add option in SGConfiguration.txt

ansible-playbook playbooks/configuration/add_remove_option_to_server.yml -i inventory/preprod_hosts --extra-vars "sg_option_name=NETFLOW_TEMPLATE_INTERVAL_SECONDS sg_option_value=120 state=present configuration_file=SGConfiguration.txt" -l mgt

Update option in SGConfiguration.txt

ansible-playbook playbooks/configuration/add_remove_option_to_server.yml -i inventory/preprod_hosts --extra-vars "sg_option_name=NETFLOW_TEMPLATE_INTERVAL_SECONDS sg_option_value=60 state=present configuration_file=SGConfiguration.txt" -l mgt

Remove option in SGConfiguration.txt

ansible-playbook playbooks/configuration/add_remove_option_to_server.yml -i inventory/preprod_hosts --extra-vars "sg_option_name=NETFLOW_TEMPLATE_INTERVAL_SECONDS sg_option_value=60 state=absent configuration_file=SGConfiguration.txt" -l mgt

Certify log server

Whenever a management server backup is restored on a brand-new SMC installation, the log server needs to be certified.

!!! WARNING !!!
The log server is stopped before certification. After certification has succeeded, it is started again.

ansible-playbook playbooks/backup/certify_log_server.yml -i inventory/preprod_hosts --extra-vars "smc_login=smc_user smc_pwd=smc_password active_server_ip=127.0.0.1 log_server_name='Log Server'" -l log

Create Management server backup

To create a new management server backup, run the following command:

ansible-playbook playbooks/backup/create_management_server_backup.yml -i inventory/preprod_hosts -l mgt_log

Restore management server backup

To restore a management server backup, run the following playbook:

!!! INFO !!!
The playbook stops the SMC server and starts it again after the backup has been restored.

ansible-playbook playbooks/backup/restore_managment_server_backup.yml -i inventory/preprod_hosts -l mgt_log

Deploy or restore jars

Sometimes, instead of getting a full SMC installation for an upgrade, it can be faster to upgrade just the jars. In this situation, jars can be deployed with the playbooks/upgrade/deploy_restore_jars.yml playbook.

Make sure to define action:

  • action=install: install new jars (existing jars are saved first so the environment can be reverted)
  • action=restore: restore the previously saved jars

By default, we do expect to have jars in ./jars, but it can be overridden with jars_path=MY_CUSTOM_PATH.

Before starting the playbook, make sure the servers are stopped. Once the jars have been installed, start them again.

ansible-playbook playbooks/upgrade/deploy_restore_jars.yml -i inventory/preprod_hosts --extra-vars 'action=install' -l mgt

Validate script

To verify that the scripts are still valid, run the linter: make check

Troubleshooting

To troubleshoot errors, adding -vvv to the command line might help.
