CLI
The Steampunk Spotter CLI enables use from the console with the ability to scan Ansible content such as playbooks, roles, collections, or task files.
Installation
The CLI requires Python 3 and is available as the steampunk-spotter Python package.
We suggest installing the package into a clean Python virtual environment.
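A minimal installation sketch, assuming the package is published on PyPI (the virtual environment path is illustrative):

python3 -m venv .venv
source .venv/bin/activate
pip install steampunk-spotter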
Usage
After the CLI is installed, you can explore its commands and options by running spotter --help.
The --help/-h optional argument is also available for every command.
Limitations
The current release version of Steampunk Spotter contains the following limitations that also apply to the CLI:
- with the FREE subscription plan, you can perform up to 5 scans/month,
- with the PRO or ENTERPRISE subscription plan, you can perform an unlimited number of scans,
- for the ENTERPRISE subscription plan, contact us at steampunk@xlab.si to discuss your needs.
Authentication
To use the CLI, you have to supply your Steampunk Spotter user account
credentials.
If you don't have an account, use the spotter register
command, which will
direct you to the page where you can create one.
Steampunk Spotter supports two kinds of credentials:
- Your username and password.
- Your API token. To create and manage tokens, visit the Steampunk Spotter App's web application, open the top-right account menu, click My Settings and switch to the API tokens tab. See this document for more details.
After that, you can start scanning right away by supplying the credentials through CLI options or environment variables.
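A sketch of possible invocations; the placement of the global options relative to the scan subcommand is an assumption (check spotter --help), and SPOTTER_API_TOKEN is the environment variable also used in the CI/CD examples later in this document:

spotter --token <api-token> scan playbook.yml
SPOTTER_API_TOKEN=<api-token> spotter scan playbook.yml
spotter --username <username> --password <password> scan playbook.yml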
For convenience, the Steampunk Spotter CLI also offers the spotter login command, which saves the chosen credentials (in plaintext) in your user profile.
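A sketch of how the login command might be invoked with either kind of credentials (the placement of the options relative to the login subcommand is an assumption):

spotter --token <api-token> login
spotter --username <username> --password <password> login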
After a successful CLI login, the credentials no longer need to be supplied explicitly to the CLI commands.
You can use the spotter logout
command to log out from the Spotter user
account directly from the CLI.
This removes the authentication tokens for the Spotter API endpoint you are
currently using from the local storage folder (~/.config/steampunk-spotter
by default).
Note
LDAP login isn't possible with the --username
and --password
options.
Instead, you'll need to use an API token.
Scanning
The CLI spotter scan
command is used for scanning Ansible
content (playbooks, roles, collections, or task files) and returning the
scan results.
Let us assume we have the following Ansible playbook playbook.yml
file:
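A plausible playbook that is consistent with the check results shown below; the task names and parameter values are illustrative, and the reported line and column numbers may differ slightly:

---
- name: Example playbook
  hosts: localhost
  tasks:
    - name: Create a Sensu Go user
      sensu.sensu_go.user:
        state: present

    - name: Get the payload from the API
      ansible.builtin.uri:
        url: "https://example.com/some-url"
        method: GET
        user: "username1"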
In this case, the CLI tool will report:
spotter scan playbook.yml
Scanning...success. ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00
Check results:
playbook.yml:5:7: ERROR: [E902] (rewritable) If you are using Ansible 2.9 or lower, fully
qualified module names will not work. Use a non-fully-qualified name, such as user instead of
sensu.sensu_go.user. View docs at
http://staging.spotter.xlab-internal.si/plugin-docs/sensu/sensu_go/user/1.14.0/module/ for more info.
playbook.yml:5:7: ERROR: [E005] name is a required parameter in module sensu.sensu_go.user.
View docs at http://staging.spotter.xlab-internal.si/plugin-docs/sensu/sensu_go/user/None/module/
for more info.
playbook.yml:9:7: ERROR: [E902] (rewritable) If you are using Ansible 2.9 or lower, fully
qualified module names will not work. Use a non-fully-qualified name, such as uri instead of
ansible.builtin.uri.
playbook.yml:9:7: HINT: [H004] Parameter user is an alias for parameter url_username in module
ansible.builtin.uri.
Scan summary:
Spotter took 0.370 s to scan your input.
It resulted in 3 error(s), 0 warning(s) and 1 hint(s).
Can rewrite 1 file(s) with 2 change(s).
Overall status: ERROR
Visit http://spotter.steampunk.si/organization/.../ to view this scan result.
Ansible content
The scan
command accepts a positional argument that can be one or many paths
to files or directories.
The CLI will automatically detect the type of your Ansible content and scan it.
The following types of Ansible content files are currently supported:
- blocks,
- tasks,
- playbooks,
- roles (applies to tasks, handlers and roles folders),
- collections (applies to roles, playbooks and tests/integration/targets/ folders and any playbooks at the root of the collection),
- plugins and
- module defaults.
For example, you can:
- scan a task file, which contains the tasks section of a playbook,
- scan two playbooks,
- scan multiple playbooks using a glob,
- scan two roles (scans the tasks and handlers folders),
- scan a collection (scans Ansible content within the roles and playbooks folders and playbooks at the root of the collection directory),
- scan multiple files at once, or
- scan any folder that contains Ansible content,
as illustrated by the sketch after this list.
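Illustrative invocations for each of the cases above; all paths are hypothetical:

spotter scan tasks/main.yml
spotter scan playbook1.yml playbook2.yml
spotter scan playbooks/*.yml
spotter scan roles/role1 roles/role2
spotter scan my_collection/
spotter scan playbook.yml tasks/main.yml roles/role1
spotter scan ansible-content/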
Using one or more --exclude-paths PATH switches, the files and directories
in (or, for directories, under) PATH are omitted from the scan.
The following command scans all files and directories under deployment/, but
omits the deployment/skip_me directory and all its subdirectories, and it also
omits deployment/skip_playbook.yml:
spotter scan --exclude-paths deployment/skip_me --exclude-paths deployment/skip_playbook.yml deployment
In the Scan summary section of the console output, the number of paths skipped
indicates how many of the --exclude-paths entries were actually found by
the scan and, in turn, skipped.
Selecting the target project
This part is only relevant for users with a PRO plan or higher.
By default, the scan results are stored in the first project of the user's first organization (in the app).
Users that have multiple organizations or projects in the app can use the
--project-id optional argument to specify the UUID of an existing target
project, where the scan result will be stored.
You can learn your project id by logging into the app, selecting the appropriate organization, and navigating to the project's dashboard.
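For example (the UUID and the playbook path are placeholders):

spotter scan --project-id 123e4567-e89b-12d3-a456-426614174000 playbook.yml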
Excluding values
By default, CLI parses full Ansible YAML content with all values from
playbooks (e.g., parameter values from Ansible modules,
variables from Ansible plays, etc.).
With values, we can discover additional tips for improvements.
CLI will try to detect and omit any secrets (e.g., passwords, SSH keys,
cloud credentials, etc.) from being transmitted.
If you want to omit parsing and sending the values, you can use the
--exclude-values optional argument.
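For example (the playbook path is illustrative):

spotter scan --exclude-values playbook.yml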
Allowing secrets to be included in scans
Warning
This option is insecure and should only be used in a closed and private on-premises context.
Under particular circumstances, we might want to let Spotter scan all the variable values, including the potentially sensitive ones. The benefits of this approach are that the scans are faster and that the checks return better results in the case of inline shells and commands that also contain sensitive data.
To skip detection and obfuscation of secrets, use the --skip-detect-secrets
optional argument.
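For example (the playbook path is illustrative):

spotter scan --skip-detect-secrets playbook.yml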
Including Ansible variables
Ansible automation encourages reuse of playbooks by allowing variables to be
moved out of playbooks and stored in separate variable files.
Use the --include-vars
CLI option to include these files in scans of your
projects and to send the variables and their values to the Steampunk Spotter
backend server.
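For example (the playbook path is illustrative):

spotter scan --include-vars playbook.yml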
Excluding metadata
By default, CLI collects metadata (i.e., file names, line and column numbers,
YAML markers) from Ansible content.
This is needed for enriched user experience in the Spotter App and to get
additional tips for improvements.
If you want to use metadata just for displaying the scan output, which means
that no data about your Ansible content structure is sent to the backend
server, you can use the --exclude-metadata
option.
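For example (the playbook path is illustrative):

spotter scan --exclude-metadata playbook.yml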
Excluding environment information
By default, CLI collects the information on the environment it has been executed from, including:
- List of packages installed in the Python environment.
- List of installed Ansible collections.
- List of installed Ansible roles.
- Contents of the Ansible configuration.
To prevent this information from being included in the payload, use the
--exclude-environment
optional argument.
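For example (the playbook path is illustrative):

spotter scan --exclude-environment playbook.yml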
Automated application of suggestions to your code
There is also a --rewrite
option that rewrites your files with
fixes after scanning.
This action will modify your files.
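For example (the playbook path is illustrative):

spotter scan --rewrite playbook.yml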
Suppressing check result levels
You can use the --display-level
optional argument to suppress the check result
levels.
For example, to show only errors (suppress warnings and hints):
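A sketch, assuming the level is named error (check spotter scan --help for the accepted values):

spotter scan --display-level error playbook.yml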
Applying scan profiles
When we run scans, we might have a particular goal in mind. For example, in one project, we might be interested in upgrading our Ansible environment to a newer version of Ansible. In another one, we want to improve the playbooks for the Ansible version that we are currently in. This means that some check results that Steampunk Spotter produces may be relevant in one of the projects but not in the other one.
Using the --profile
option, we can specify a scan profile containing a
selected set of checks for scanning.
Spotter comes with the following profiles that are always available:
- default - this profile is suitable for day-to-day testing and improving Ansible Playbooks.
- full - displays the full range of check results, which is helpful when updating Ansible playbooks to work with a newer version of Ansible.
- security - this profile includes checks for potential security issues.
Other profiles may be used if they are defined at your organization.
For example, to run all checks (apply full
profile):
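For example (the playbook path is illustrative):

spotter scan --profile full playbook.yml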
You can create scan profiles in the Web App. See how.
Skipping and enforcing checks
Skipping and enforcing checks is possible on three different levels.
The rules here are the following:
- checks skipped on the organization level will override checks enforced on the organization level;
- checks enforced on the organization level cannot be skipped on the scan level;
- checks skipped on the scan level will override checks enforced on the scan level;
- checks enforced on the scan level cannot be skipped on the task level;
- on the task level, checks cannot be enforced, only skipped.
Note
You can opt to use the Spotter app to edit organization-level configuration rules. See how.
When skipping/enforcing checks, we can supply three parameters:
- event: the event code of the check result (e.g., W2600);
- subevent_code: the check subcode (e.g., B324);
- fqcn: the fully-qualified collection name (e.g., amazon.aws.cloudformation).
With these three parameters, we can indicate that we want to skip/enforce a particular check or subcheck for a particular FQCN.
Organization level
Organization admins can upload a JSON/YAML configuration file for a particular organization (the default organization will be used if not specified). The configuration format is the same as the other CLI configuration formats. Users can select which checks will be skipped and enforced on the organization level in the config file. This will then take effect when doing scans within that organization.
For example, we have the following org-config.json
config file:
{
  "skip_checks": [
    {
      "event": "W003",
      "fqcn": "ansible.builtin.uri"
    }
  ],
  "enforce_checks": [
    {
      "event": "E005",
      "fqcn": "community.crypto.x509_certificate"
    }
  ]
}
We can then use the config set command to upload it.
There is also a config get command that will display the current configuration for a particular organization.
For clearing the current organization configuration, you can use the config clear command.
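A sketch of the corresponding invocations; the configuration file name matches the example above, and the exact argument forms may differ (see spotter config --help):

spotter config set org-config.json
spotter config get
spotter config clear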
Scan level
On the scan level, checks can be skipped via a configuration file.
This can be either a project-level configuration file (called .spotter.json
,
.spotter.yml
or .spotter.yaml
) or configuration file provided as
--config
CLI option.
If we have the following .spotter.yml config file in our CWD:
skip_checks:
  - event: W003
    fqcn: ansible.builtin.uri
  - event: E601
    fqcn: community.crypto.x509_certificate

enforce_checks:
  - event: E005
  - event: E903
Then, run the scan on our playbook with the spotter scan
command.
This will skip and enforce the listed checks.
We can also skip checks with CLI options.
For example, we might want to skip all checks that are related to Ansible
module deprecations and redirections.
We can do that by using the --skip-checks
option, where we list
the checks we want to skip by their IDs (E1300, E1301, and H1302).
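For example (the playbook path is illustrative):

spotter scan --skip-checks E1300,E1301,H1302 playbook.yml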
On the other hand, we might want to enforce some checks that have been
skipped on the organization level.
For example, we might want to enforce all checks that are related to the use
of with_items.
We can do that by using the --enforce-checks
optional argument, where we
list the checks we want to enforce by their IDs (W1100 and E1101).
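For example (the playbook path is illustrative):

spotter scan --enforce-checks W1100,E1101 playbook.yml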
Advanced usage also allows you to skip or enforce checks for a particular FQCN or check subcode.
The valid pattern is: event[fqcn=<fqcn>, subevent_code=<subevent_code>]
.
For example:
# skip H1900 only for sensu.sensu_go.user module and W003 for all modules
spotter scan --skip-checks H1900[fqcn=sensu.sensu_go.user],W003 playbook.yml
You can also use the --skip-checks
and --enforce-checks
optional arguments
multiple times, such as:
spotter scan --skip-checks W2600[subevent_code=B324] --skip-checks H1900[fqcn=community.aws.data_pipeline] playbook.yml
Task level
On this level, skipping is done inside the Ansible content.
This can be achieved using the noqa
(NO Quality Assurance) YAML comments placed at the end of any line inside the Ansible task.
The syntax here is # noqa: event[fqcn=<fqcn>, subevent_code=<subevent_code>]
,
where params in the square brackets are optional.
For example:
---
- name: Sample playbook with comments
  hosts: localhost
  tasks:
    - name: Get the payload from the API # noqa: W003, E903
      uri:
        url: "/some-url"
        method: GET
        user: "username1"

    - name: Ensure that the server certificate belongs to the specified private key
      community.crypto.x509_certificate: # noqa: E601[fqcn=community.crypto.x509_certificate]
        path: "{{ config_path }}/certificates/server.crt"
        privatekey_path: "{{ config_path }}/certificates/server.key"
        provider: assertonly
Exporting and importing scan payload
To see what data is collected from your Ansible content and sent to the
backend server, you can use the --export-payload optional argument.
spotter scan --export-payload payload.json playbook.yml
Scan data saved to payload.json.
Note: this operation is fully offline. No actual scan was executed.
After that, you can also import (with the --import-payload
optional argument)
the exported payload and scan it:
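For example, reusing the payload file exported above:

spotter scan --import-payload payload.json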
Setting target Ansible version
The Steampunk Spotter CLI detects the Ansible version from the environment that it is being run from.
To scan against a different version, we can use --ansible-version
or -a
.
For instance, if we want to scan for potential issues related to running our
playbooks with Ansible 2.9, we can use the following command:
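For example (the playbook path is illustrative):

spotter scan --ansible-version 2.9 playbook.yml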
We can also completely omit the target Ansible version in our scans. To do this,
we can use the --no-ansible-version
switch.
Scan configuration
Before scanning, it is possible to configure the scan via the configuration file or optional CLI variables.
We support multiple scan configuration sources. The order in which we read the configuration is the following (in each step, we overwrite what the previous one has, configuration files should be in JSON/YAML format):
- local discovery of the user's environment (e.g., ansible --version);
- project-level configuration file (called .spotter.json, .spotter.yml or .spotter.yaml);
- configuration file provided via the --config CLI optional argument;
- optional CLI arguments (e.g., --ansible-version).
All supported configuration options are shown in the configuration file below.
ansible_version: "2.9"
skip_checks: ["E1300", "E1301", "H1302"]
enforce_checks: ["E005", "W200", "H500"]
For instance, if we want to set the target Ansible version, we can use the following JSON configuration file:
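A minimal sketch of such a file, here saved as config.json (the file name is illustrative):

{
  "ansible_version": "2.9"
}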
After that, we can run the scan command:
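For example, passing the file via the --config option (the playbook path is illustrative):

spotter scan --config config.json playbook.yml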
Formatting scan result
By default, the CLI will output the scan result in plain text format.
The --format
option allows you to specify the alternative output
format of the scan result, such as JSON or YAML.
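A sketch, assuming the format values are named json and yaml (see spotter scan --help for the accepted values):

spotter scan --format json playbook.yml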
Omitting documentation URLs from the output
In the scan result, the CLI will display a URL to the relevant Ansible content
documentation whenever possible.
To omit these documentation URLs from all the output, use the --no-docs-url
option.
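For example (the playbook path is illustrative):

spotter scan --no-docs-url playbook.yml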
Managing custom policies
It is possible to create and use policies for custom Spotter checks written in Rego Language for Open Policy Agent (OPA). The use of custom policies is only available in Spotter's ENTERPRISE plan.
Use the policies set
command to set custom OPA policies.
This will override all current policies.
You can set a single policy file, a whole directory with policies, a policy for a specific project, or a policy for the whole organization. After that, run a scan to see the check results you included.
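A sketch of possible invocations; the paths are illustrative, and the option names for targeting a project or organization are assumptions (see spotter policies set --help):

spotter policies set policy.rego
spotter policies set policies/
spotter policies set --project-id <project-uuid> policy.rego          # --project-id is an assumed option name here
spotter policies set --organization-id <organization-uuid> policy.rego  # --organization-id is an assumed option name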
Use the policies clear
command to clear custom policies.
You can clear all policies, clear policies for a specific project, or clear them for the whole organization.
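A sketch of possible invocations (the project/organization option names are assumptions, as above):

spotter policies clear
spotter policies clear --project-id <project-uuid>
spotter policies clear --organization-id <organization-uuid>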
You may encounter an API error due to a timeout, like the following:
API error: HTTPConnectionPool(host='api.spotter.steampunk.si', port=443): Read timed out. (read timeout=10)
This indicates that the operation timed out before completion, potentially due to network latency or
server response delays. To mitigate this issue, use the --timeout TIMEOUT switch:
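A sketch, assuming --timeout is accepted as a global option before the subcommand (the value is in seconds and illustrative):

spotter --timeout 60 policies set policies/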
Setting storage folder
The CLI uses local storage for caching access tokens for the Steampunk
Spotter API.
The default location is ~/.config/steampunk-spotter
, but if you want to
change it, you can use the --storage-path
option.
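A sketch, assuming --storage-path is given as a global option (the path is illustrative):

spotter --storage-path /tmp/spotter-storage scan playbook.yml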
Disabling colorized output
The CLI will colorize the scan result by default. We use the'- no-colors' option to make the output non-colorized.
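For example (the playbook path is illustrative):

spotter scan --no-colors playbook.yml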
Setting API endpoint
The CLI connects to the Steampunk Spotter API (backend server) to perform scanning. While a default API endpoint is provided, you may need or want to set a custom endpoint. If you have an on-prem deployment, a custom endpoint is mandatory.
The precedence of the API endpoint configuration is the following, where the first one specified takes effect:
- Set the --endpoint global option.
- Set the SPOTTER_ENDPOINT environment variable.
- Create (if it doesn't exist) and open ~/.config/steampunk-spotter/spotter.json for editing. The JSON-formatted contents need to have the root JSON entry "endpoint": "<spotter-api-url>". This approach allows <spotter-api-url> to persist as the default custom endpoint to be used by the Spotter CLI.
- The default Spotter SaaS API endpoint is used if no other configuration is specified: https://api.spotter.steampunk.si/api.
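Sketches of the first three approaches, where <spotter-api-url> is a placeholder for your API URL:

spotter --endpoint "<spotter-api-url>" scan playbook.yml
SPOTTER_ENDPOINT="<spotter-api-url>" spotter scan playbook.yml
# ~/.config/steampunk-spotter/spotter.json contents:
# {"endpoint": "<spotter-api-url>"}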
Verifying SSL/TLS server certificate
Note
This functionality is new in CLI version 5.0.0.
The CLI verifies that the endpoint can be trusted for communication, with the trust established by your system-level Certificate Authority (CA) store.
If your organization mandates a security solution using a private TLS
certificate, or you are using an on-premises endpoint secured with a TLS
certificate signed with a private CA, then using spotter scan might result
in an error that is similar to the following:
SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)'))
The recommended approach is to make sure that the CA certificate that signed the TLS certificate is in your Operating System's trusted CA store.
If the file containing the trusted CA store is on your local file system, but
it cannot be installed globally, then you can use the --cacert CACERT
switch
to use the CA store for the individual scan:
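For example (the CA bundle path and playbook path are illustrative):

spotter scan --cacert /path/to/ca-bundle.crt playbook.yml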
If the CA store is not available, you can also use the --insecure
or -k
switch to bypass the verification:
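For example (the playbook path is illustrative):

spotter scan --insecure playbook.yml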
Setting the SPOTTER_INSECURE environment variable to one of 1, true, True, or TRUE also disables the trusted CA certificate verification:
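For example (the playbook path is illustrative):

SPOTTER_INSECURE=true spotter scan playbook.yml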
Warning
Using the --insecure or -k switch or the SPOTTER_INSECURE environment variable exposes you to the risk of an Adversary-in-the-Middle attack, where a malicious party might intercept the contents of your scans.
CI/CD integrations
The CLI can be used in CI/CD pipelines to set up quality scanning of your Ansible content.
When using the CLI in CI/CD workflows, it is essential that you provide Steampunk Spotter credentials as secrets (i.e., pipeline-protected and masked variables).
GitLab
The CLI can be integrated with GitLab CI/CD to display scan results as GitLab’s unit test reports. This means that you will be quickly able to see which checks have failed within your CI/CD pipeline.
This is done by using the Spotter CLI tool directly in the CI/CD configuration
and configuring it to output your scan result in JUnit XML format, which
allows GitLab to display check results as green check marks for successful
checks and red crosses for unsuccessful checks.
To do so, you should use the spotter scan
CLI command along with the
--junit-xml <path-junit-xml>
option that will create a JUnit XML
report at the specified location.
Below is a .gitlab-ci.yml example containing a CI job for the test stage,
where you call the CLI command mentioned above and then upload the created
JUnit XML report file as an artifact to GitLab, which will then display
it within your pipeline details page or merge request widget.
stages:
  - test

spotter-scan:
  stage: test
  image:
    name: registry.gitlab.com/xlab-steampunk/steampunk-spotter-client/spotter-cli:<version>
    entrypoint: [""]
  variables:
    SPOTTER_API_TOKEN: $SPOTTER_API_TOKEN
  script:
    - spotter scan --junit-xml report.xml .
  artifacts:
    when: always
    reports:
      junit: report.xml
GitHub
In your CI/CD pipeline, you can specify the name of the
Steampunk Spotter GitHub Action (xlab-steampunk/spotter-action@master
)
with a tag number as a step within your YAML workflow file.
For example, inside your .github/workflows/ci.yml
file, you can write:
name: test
on: push

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Run Ansible content scan
        uses: xlab-steampunk/spotter-action@<version>
        env:
          SPOTTER_API_TOKEN: ${{ secrets.SPOTTER_API_TOKEN }}
For comprehensive usage and more examples, refer to Steampunk Spotter Action on GitHub Marketplace and Steampunk Spotter GitHub Action repository.
To enhance compatibility and integration capabilities, Spotter supports SARIF format for scan results.
Note
To use SARIF in GitHub's code scanning feature, enable CodeQL. This grants GitHub permission to perform read-only analysis on your repository. See how.
To use this feature, execute the following command:
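A sketch, assuming a --sarif-file option analogous to --junit-xml; the exact option name is an assumption, so check spotter scan --help:

spotter scan --sarif-file example.sarif .   # --sarif-file is an assumed option name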
This command facilitates integration with GitHub code scanning and other platforms supporting the SARIF format.
When incorporating it into a GitHub Actions workflow, follow the example YAML configuration below:
name: test
on: push

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Run Ansible content scan with SARIF output
        uses: xlab-steampunk/spotter-action@master
        env:
          SPOTTER_API_TOKEN: ${{ secrets.SPOTTER_API_TOKEN }}
        with:
          sarif_file: example.sarif
        continue-on-error: true
      - name: Check Sarif
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: example.sarif
This setup allows for seamless integration of static analysis results into the GitHub environment, enhancing code security and quality checks within your repository and displaying detailed information about the code checks, including issues, vulnerabilities, and recommendations.
Others
For other CI/CDs, we currently only support using the steampunk-spotter
Python package and setting it up as a regular shell command.
You can also use the spotter-cli Docker image that is available in our GitLab
Registry (use registry.gitlab.com/xlab-steampunk/steampunk-spotter-client/spotter-cli:latest
image path and select the appropriate tag).
You can use the spotter scan CLI command along with the
--junit-xml <path-junit-xml> option to export the scan result in
JUnit XML format, which is consumed by CI tools such as Jenkins or Bamboo.
Integration with AAP
Use Steampunk Spotter to check your playbooks in the context of your Ansible Automation Platform (AAP). Obtain assurance about playbook quality and compliance, knowing that only the valid and compliant playbooks will be run in your AAP.
Building an Execution Environment with Steampunk Spotter
To enable Steampunk Spotter in the runtime of your AAP, build the execution environment with the Steampunk Spotter CLI integrated.
Usually, to build an execution environment, you run a command similar to:
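For instance, a typical ansible-builder invocation (the definition file and tag are illustrative):

ansible-builder build -f execution-environment.yml -t my-ee:latest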
To enable Steampunk Spotter in the execution environment, use the following command instead:
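A sketch of the equivalent spotter build invocation, reusing the same ansible-builder flags (the tag is illustrative):

spotter build -f execution-environment.yml -t my-ee-with-spotter:latest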
This command runs ansible-builder
on your behalf, while also ensuring that
the Steampunk Spotter CLI is properly installed and configured to be included
in the AAP runtime.
Any additional ansible-builder build
switches and flags can be used with
spotter build
, as they will be passed on to the ansible-builder build
execution.
Please refer to the official documentation
for further information about the available switches.
Using Steampunk Spotter in AAP
Steampunk Spotter in the AAP is capable of performing the following tasks:
- Pre-flight checks: runs a scan of the playbooks in the execution environment before they are executed by Ansible. In effect, this is the AAP running spotter scan in the target execution environment.
- Runtime scans: performs scans during the playbooks' execution. In this mode, Steampunk Spotter has insight into all the available and evaluated variables and expressions.
To control the Steampunk Spotter in your execution environment, set the following variables (a sketch with example values follows the list):
- SPOTTER_ENDPOINT: the API endpoint for the Steampunk Spotter backend.
- SPOTTER_TOKEN: the API token used to authenticate with the Steampunk Spotter backend. To obtain the token, log into the Steampunk Spotter web app as the user that the AAP scans will be performed on behalf of, then generate an API token.
- SPOTTER_ON_ERROR_EXIT: set to a value other than 0 to have Steampunk Spotter stop the AAP execution if it reaches a failure condition.
- SPOTTER_PREFLIGHT_ENABLED: set to a value other than 0 to have Steampunk Spotter perform the pre-flight check. The variable SPOTTER_DEBUG also needs to be set. The variables SPOTTER_ORGANIZATION and SPOTTER_PROJECT are optional.
- SPOTTER_RUNTIME_ENABLED: set to a value other than 0 to have Steampunk Spotter perform the runtime check. The variables SPOTTER_ORGANIZATION and SPOTTER_PROJECT also need to be set.
- SPOTTER_ORGANIZATION: ID of the organization for which the scan will be performed. Mandatory with SPOTTER_RUNTIME_ENABLED.
- SPOTTER_PROJECT: ID of the project in which the scan check results will be saved. Mandatory with SPOTTER_RUNTIME_ENABLED.
- SPOTTER_DEBUG: set to a value other than 0 to have Steampunk Spotter print out debug information. Mandatory with SPOTTER_PREFLIGHT_ENABLED.
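A sketch of example values; all of them are placeholders:

SPOTTER_ENDPOINT=https://<your-spotter-host>/api
SPOTTER_TOKEN=<api-token>
SPOTTER_PREFLIGHT_ENABLED=1
SPOTTER_DEBUG=1
SPOTTER_ON_ERROR_EXIT=1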
Development
You will first need to clone this repository.
Running from source
If you want to run directly from the source, run the following commands:
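A typical sketch, assuming the package can be installed in editable mode from the repository root (the exact steps may differ; consult the repository README):

python3 -m venv .venv
source .venv/bin/activate
pip install -e .
spotter --help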
Testing
There is a dev.sh
script that enables local testing.
For example, you can run linters, unit, or integration tests.
It is recommended to run all tests locally before pushing new code anywhere.
Building a Docker image
Before building the Docker image, copy the wheel file distribution to the
root of this repository.
You can build the wheel using the python3 -m build --wheel --outdir . . command.
Make sure that you have only one *.whl file.
You can build the wheel yourself or download one from the latest CI/CD
pipeline.
After that, proceed with the following commands:
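A sketch of the build command, assuming the Dockerfile sits in the repository root and using the spotter-cli image name that the run example below expects:

docker build -t spotter-cli .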
After that, you can run scanning in a Docker container.
Mount your Ansible content to the default /scan
working directory.
The credentials can be specified as environment variables for the Docker
container or later via --token
or --username
and --password
CLI
options.
For example:
docker run --rm -it -e SPOTTER_API_TOKEN=<api-token> -v "/path/to/your/playbooks/:/scan" spotter-cli scan .
or the credentials can be passed as CLI options instead of environment variables.
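A sketch of such an invocation (assuming the global --token option is accepted before the scan subcommand):

docker run --rm -it -v "/path/to/your/playbooks/:/scan" spotter-cli --token <api-token> scan .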