


The Steampunk Spotter CLI enables use from the console with the ability to scan Ansible content such as playbooks, roles, collections, or task files.


steampunk-spotter requires Python 3 and is available as a steampunk-spotter Python package.

pip install steampunk-spotter

We suggest installing the package into a clean Python virtual environment.


After the CLI is installed, you can explore its commands and options by running spotter --help. The --help/-h optional argument is also available for every command.


The current release version of Steampunk Spotter contains the following limitations that also apply to the CLI:

  • with the FREE subscription plan, you can perform up to 5 scans/month,
  • with the PRO or ENTERPRISE subscription plan, you can perform an unlimited number of scans,
  • for the ENTERPRISE subscription plan, contact us to discuss your needs.


To use the CLI, you have to supply your Steampunk Spotter user account credentials. If you don't have an account, use the spotter register command, which will direct you to the page where you can create one.

Steampunk Spotter supports two kinds of credentials:

  • Your username and password.
  • Your API token. To create and manage tokens, visit the Steampunk Spotter web application, open the top-right account menu, click My Settings, and switch to the API tokens tab. See this document for more details.

After that, you can start scanning right away.

spotter --token <api-token> scan playbook.yml


export SPOTTER_API_TOKEN=<api-token>
spotter scan playbook.yml


spotter --username <username> --password <password> scan playbook.yml


export SPOTTER_USERNAME=<username>
export SPOTTER_PASSWORD=<password>
spotter scan playbook.yml

For convenience, the Steampunk Spotter CLI also offers the spotter login command, which saves the chosen credentials (in plaintext) in your user profile:

spotter --token <api-token> login


export SPOTTER_API_TOKEN=<api-token>
spotter login


spotter --username <username> --password <password> login


export SPOTTER_USERNAME=<username>
export SPOTTER_PASSWORD=<password>
spotter login

After a successful CLI login, the credentials no longer need to be supplied explicitly to the CLI commands.

You can use the spotter logout command to log out from the Spotter user account directly from the CLI. This removes the authentication tokens for the Spotter API endpoint you are currently using from the local storage folder (~/.config/steampunk-spotter by default).


LDAP login isn't possible with the --username and --password options. Instead, you'll need to use an API token.


The CLI spotter scan command is used for scanning Ansible content (playbooks, roles, collections, or task files) and returning the scan results.

Ansible content

The scan command accepts a positional argument that can be one or many paths to files or directories. The CLI will automatically detect the type of your Ansible content and scan it.

The following types of Ansible content files are currently supported:

  • blocks,
  • tasks,
  • playbooks,
  • roles (applies to tasks, handlers and roles folders),
  • collections (applies to roles, playbooks and tests/integration/targets/ folders and any playbooks at the root of the collection),
  • plugins and
  • module defaults.

Scan a task file, which contains the tasks section of a playbook:

spotter scan path/to/taskfile1.yml

Scan two playbooks:

spotter scan path/to/playbook1.yml path/to/playbook2.yml

Scan multiple playbooks using glob:

spotter scan path/to/playbook/folder/play_*.yml

Scan two roles (scans tasks and handlers folders):

spotter scan path/to/role1 path/to/role2

Scan collection (scans Ansible content within roles and playbooks folders and playbooks at the root of the collection directory):

spotter scan path/to/collection

Scan multiple files at once:

spotter scan path/to/taskfile.yml path/to/playbook.yml \
          path/to/role path/to/collection

Scan any folder that contains Ansible content:

spotter scan path/to/folder

Use one or more --exclude-paths PATH switches to omit the files and directories at (or, for directories, under) PATH from the scan. The following command scans all files and directories under deployment/, but omits the deployment/skip_me directory and all its subdirectories, as well as deployment/skip_playbook.yml:

spotter scan --exclude-paths deployment/skip_me --exclude-paths deployment/skip_playbook.yml deployment
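The exclusion rule can be illustrated with a short Python sketch (a hypothetical helper, not part of the CLI): a path is omitted when it equals an excluded path or lies under an excluded directory.

```python
from pathlib import Path

def is_excluded(path: str, exclude_paths: list) -> bool:
    """Return True when path equals an excluded path or lies under an excluded directory."""
    p = Path(path)
    for excluded in exclude_paths:
        e = Path(excluded)
        if p == e or e in p.parents:
            return True
    return False

excludes = ["deployment/skip_me", "deployment/skip_playbook.yml"]
print(is_excluded("deployment/skip_me/inner/play.yml", excludes))  # True
print(is_excluded("deployment/skip_playbook.yml", excludes))       # True
print(is_excluded("deployment/site.yml", excludes))                # False
```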

Let us assume we have the following Ansible playbook playbook.yml file:

- name: Sample playbook
  hosts: localhost
  tasks:
    - name: Create a new Sensu Go user
      sensu.sensu_go.user:
        password: "{{ lookup('env', 'SENSU_USER_PASSWORD') }}"

    - name: Get the payload from the API
      ansible.builtin.uri:
        url: "/some-url"
        method: GET
        user: "username1"

In this case, the CLI tool will report:

spotter scan playbook.yml
Scanning...success. ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00

Check results:
playbook.yml:5:7: ERROR: [E005] name is a required parameter in module sensu.sensu_go.user.
  View docs at for
  more info.
playbook.yml:5:7: WARNING: [W905] If you support Ansible 2.9 or lower, fully-qualified
  module names will not work. Use a non-fully-qualified name, such as user instead of
  sensu.sensu_go.user. View docs at
  for more info.
playbook.yml:5:7: HINT: [H1900] (rewritable) Required collection sensu.sensu_go is
   missing from requirements.yml or requirements.yml is missing.
playbook.yml:9:7: WARNING: [W905] If you support Ansible 2.9 or lower, fully-qualified
  module names will not work. Use a non-fully-qualified name, such as uri instead of
  ansible.builtin.uri.
playbook.yml:9:7: HINT: [H004] Parameter user is an alias for parameter url_username in
  module ansible.builtin.uri.

Scan summary:
Spotter took 1.693 s to scan your input.
It resulted in 1 error(s), 2 warning(s) and 2 hint(s).
Can rewrite 1 file(s) with 1 change(s).
Overall status: ERROR

Selecting the target project

This part is only relevant for users with a PRO plan or higher.

By default, the scan results are stored in the first project of the user's first organization (in the app).

Users who have multiple organizations or projects in the app can use the --project-id optional argument to specify the UUID of an existing target project where the scan result will be stored.

spotter scan --project-id <project-id> .

You can find your project ID by logging into the app, selecting the appropriate organization, and navigating to the project's dashboard.

Excluding values

By default, the CLI parses the full Ansible YAML content, including all values from playbooks (e.g., parameter values from Ansible modules, variables from Ansible plays, etc.). With values, additional tips for improvement can be discovered. The CLI will try to detect any secrets (e.g., passwords, SSH keys, cloud credentials, etc.) and omit them from transmission. If you want to skip parsing and sending the values altogether, use the --exclude-values optional argument.

spotter scan --exclude-values playbook.yml

Excluding metadata

By default, the CLI collects metadata (i.e., file names, line and column numbers, YAML markers) from Ansible content. This is needed for an enriched user experience in the Spotter App and for additional tips for improvement. If you want metadata to be used only for displaying the scan output, meaning that no data about your Ansible content structure is sent to the backend server, use the --exclude-metadata option.

spotter scan --exclude-metadata playbook.yml

Excluding environment information

By default, the CLI collects information about the environment it is executed in, including:

  • List of packages installed in the Python environment.
  • List of installed Ansible collections.
  • List of installed Ansible roles.
  • Contents of the Ansible configuration.

To prevent this information from being included in the payload, use the --exclude-environment optional argument.

spotter scan --exclude-environment playbook.yml

Automated application of suggestions to your code

There is also a --rewrite option that rewrites your files with fixes after scanning. Note that this action modifies your files in place.

spotter scan --rewrite playbook.yml

Suppressing check result levels

You can use the --display-level optional argument to suppress the check result levels. For example, to show only errors (suppress warnings and hints):

spotter scan --display-level error playbook.yml 

Applying scan profiles

When we run scans, we might have a particular goal in mind. For example, in one project, we might be interested in upgrading our Ansible environment to a newer version of Ansible. In another one, we want to improve the playbooks for the Ansible version that we are currently in. This means that some check results that Steampunk Spotter produces may be relevant in one of the projects but not in the other one.

Using the --profile option, we can specify a scan profile containing a selected set of checks for scanning. Spotter currently supports the following profiles:

  • default - this profile is suitable for day-to-day testing and improving Ansible Playbooks.
  • full - displays the full range of check results, which is helpful when updating Ansible playbooks to work with a newer version of Ansible.
  • security - this profile includes checks for potential security issues.

For example, to run all checks (apply full profile):

spotter scan --profile full playbook.yml 

Skipping and enforcing checks

Skipping and enforcing checks is possible on three different levels.

The rules here are the following:

  • checks skipped on the organization level override checks enforced on the organization level;
  • checks enforced on the organization level cannot be skipped on the scan level;
  • checks skipped on the scan level override checks enforced on the scan level;
  • checks enforced on the scan level cannot be skipped on the task level;
  • on the task level, checks can only be skipped, not enforced.


You can also use the Spotter App to edit organization-level configuration rules; see how.
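One way to read the precedence rules above is as a cascade from the organization level down to the task level. The following Python sketch is an illustration of that reading, not the CLI's actual implementation:

```python
def check_enabled(check: str,
                  org_skip: set, org_enforce: set,
                  scan_skip: set, scan_enforce: set,
                  task_noqa: set) -> bool:
    """Decide whether a check runs, consulting levels from organization down to task."""
    if check in org_skip:        # org-level skip wins over org-level enforce
        return False
    if check in org_enforce:     # org-level enforce cannot be undone at scan or task level
        return True
    if check in scan_skip:       # scan-level skip wins over scan-level enforce
        return False
    if check in scan_enforce:    # scan-level enforce cannot be undone at task level
        return True
    if check in task_noqa:       # the task level can only skip
        return False
    return True                  # checks are enabled by default

# E005 enforced by the organization stays on even if a scan config tries to skip it
print(check_enabled("E005", set(), {"E005"}, {"E005"}, set(), set()))  # True
```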

When skipping/enforcing checks, we can supply three parameters:

  • event: the event code of the check result (e.g., W2600);
  • subevent_code: the check subcode (e.g., B324);
  • fqcn: the fully-qualified collection name (FQCN) of the module.

With these three parameters, we can indicate that we want to skip/enforce a particular check or subcheck for a particular FQCN.

Organization level

Organization admins can upload a JSON/YAML configuration file for a particular organization (the default organization is used if none is specified). The configuration format is the same as for the other CLI configuration files. Users can select which checks will be skipped and enforced on the organization level in the config file. This then takes effect for scans within that organization.

For example, we have the following org-config.json config file:

  "skip_checks": [
      "event": "W003",
      "fqcn": "ansible.builtin.uri"
  "enforce_checks": [
      "event": "E005",
      "fqcn": "community.crypto.x509_certificate"

We can then use the config set command to upload it.

spotter config set org-config.json

There is also a config get command that will display the current configuration for a particular organization.

spotter config get

To clear the current organization configuration, use the config clear command.

spotter config clear

Scan level

On the scan level, checks can be skipped or enforced via a configuration file. This can be either a project-level configuration file (called .spotter.json, .spotter.yml or .spotter.yaml) or a configuration file provided via the --config CLI option.

If we have the following .spotter.yml config file in our CWD:

skip_checks:
  - event: W003
    fqcn: ansible.builtin.uri
  - event: E601
    fqcn: community.crypto.x509_certificate
enforce_checks:
  - event: E005
  - event: E903

Then, run the scan on our playbook with the spotter scan command. This will skip and enforce the listed checks.

We can also skip checks with CLI options. For example, we might want to skip all checks related to Ansible module deprecations and redirections. We can do that using the --skip-checks option, listing the checks we want to skip by their IDs (E1300, E1301, and H1302).

spotter scan --skip-checks E1300,E1301,H1302 playbook.yml

On the other hand, we might want to enforce some checks that have been skipped on the organization level. For example, we might want to enforce all checks that are related to the use of with_items. We can do that by using the --enforce-checks optional argument, where we list the checks we want to enforce by their IDs (W1100 and E1101).

spotter scan --enforce-checks W1100,E1101 playbook.yml

Advanced usage also allows you to skip or enforce checks for a particular FQCN or check subcode. The valid pattern is: event[fqcn=<fqcn>, subevent_code=<subevent_code>].

For example:

# skip H1900 only for sensu.sensu_go.user module and W003 for all modules
spotter scan --skip-checks H1900[fqcn=sensu.sensu_go.user],W003 playbook.yml

You can also use the --skip-checks and --enforce-checks optional arguments multiple times, such as:

spotter scan --skip-checks W2600[subevent_code=B324] --skip-checks H1900[] playbook.yml
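To make the pattern concrete, here is a small Python sketch (a hypothetical parser, not taken from the CLI source) that splits one --skip-checks/--enforce-checks item into its parts:

```python
import re

# event code optionally followed by [key=value, ...], e.g. W2600[subevent_code=B324]
CHECK_RE = re.compile(r"^(?P<event>[EWH]\d+)(?:\[(?P<params>[^\]]*)\])?$")

def parse_check(spec: str) -> dict:
    """Parse one --skip-checks / --enforce-checks item into its parts."""
    match = CHECK_RE.match(spec.strip())
    if match is None:
        raise ValueError(f"invalid check spec: {spec!r}")
    parsed = {"event": match.group("event")}
    params = match.group("params")
    if params:
        for pair in params.split(","):
            key, _, value = pair.strip().partition("=")
            parsed[key] = value
    return parsed

print(parse_check("H1900[fqcn=sensu.sensu_go.user]"))
# {'event': 'H1900', 'fqcn': 'sensu.sensu_go.user'}
print(parse_check("W2600[subevent_code=B324]"))
# {'event': 'W2600', 'subevent_code': 'B324'}
```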

Task level

On this level, skipping is done inside the Ansible content. This can be achieved using the noqa (NO Quality Assurance) YAML comments placed at the end of any line inside the Ansible task. The syntax here is # noqa: event[fqcn=<fqcn>, subevent_code=<subevent_code>], where params in the square brackets are optional.

For example:

- name: Sample playbook with comments
  hosts: localhost
  tasks:
    - name: Get the payload from the API  # noqa: W003, E903
      ansible.builtin.uri:
        url: "/some-url"
        method: GET
        user: "username1"

    - name: Ensure that the server certificate belongs to the specified private key
      community.crypto.x509_certificate:  # noqa: E601[fqcn=community.crypto.x509_certificate]
        path: "{{ config_path }}/certificates/server.crt"
        privatekey_path: "{{ config_path }}/certificates/server.key"
        provider: assertonly
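The noqa comment syntax can be sketched as a Python snippet (an illustrative parser, not part of the CLI) that extracts the check specs from the end of a line:

```python
import re

NOQA_RE = re.compile(r"#\s*noqa:\s*(?P<specs>.+)$")

def parse_noqa(line: str) -> list:
    """Extract check specs from a '# noqa: ...' comment at the end of a line."""
    match = NOQA_RE.search(line)
    if match is None:
        return []
    # split on commas that are not inside a [...] parameter list
    return [s.strip() for s in re.split(r",\s*(?![^\[]*\])", match.group("specs"))]

print(parse_noqa('- name: Get the payload from the API  # noqa: W003, E903'))
# ['W003', 'E903']
print(parse_noqa('community.crypto.x509_certificate:  # noqa: E601[fqcn=community.crypto.x509_certificate]'))
# ['E601[fqcn=community.crypto.x509_certificate]']
```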

Exporting and importing scan payload

To see what data is collected from your Ansible content and sent to the backend server, you can use the --export-payload optional argument.

spotter scan --export-payload payload.json playbook.yml 
Scan data saved to payload.json.
Note: this operation is fully offline. No actual scan was executed.

After that, you can also import (with the --import-payload optional argument) the exported payload and scan it:

spotter scan --import-payload payload.json

Setting target Ansible version

The Steampunk Spotter CLI detects the Ansible version from the environment that it is being run from.

To scan against a different version, we can use --ansible-version or -a. For instance, if we want to scan for potential issues related to running our playbooks with Ansible 2.9, we can use the following command:

spotter scan --ansible-version 2.9 playbook.yml
or, using the short option:

spotter scan -a 2.14 playbook.yml

We can also completely omit the target Ansible version in our scans. To do this, we can use the --no-ansible-version switch.

Scan configuration

Before scanning, it is possible to configure the scan via the configuration file or optional CLI variables.

We support multiple scan configuration sources. The configuration is read in the following order, where each step overrides the previous one (configuration files should be in JSON or YAML format):

  • local discovery of the user's environment (e.g., ansible --version);
  • project-level configuration file (called .spotter.json, .spotter.yml or .spotter.yaml);
  • configuration file provided as --config CLI optional argument;
  • optional CLI arguments (e.g., --ansible-version).
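The layering above amounts to a key-by-key merge where later sources win. A minimal Python sketch of that idea (not the CLI's actual code, and the example values are hypothetical):

```python
def merge_config(*layers: dict) -> dict:
    """Combine configuration layers; later layers override earlier ones key by key."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

local_discovery = {"ansible_version": "2.15"}   # e.g. detected via `ansible --version`
project_file = {"skip_checks": ["E1300"]}       # .spotter.yml in the project
config_option = {"ansible_version": "2.9"}      # file passed via --config
cli_arguments = {}                              # e.g. --ansible-version

print(merge_config(local_discovery, project_file, config_option, cli_arguments))
# {'ansible_version': '2.9', 'skip_checks': ['E1300']}
```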

All supported configuration options are shown in the configuration file below.

ansible_version: "2.9"
skip_checks: ["E1300", "E1301", "H1302"]
enforce_checks: ["E005", "W200", "H500"]

For instance, if we want to set the target Ansible version, we can use the following JSON configuration file:

  "ansible_version": "2.14"

After that, we can run the scan command:

spotter scan --config config.json playbook.yml

Formatting scan result

By default, the CLI will output the scan result in plain text format. The --format option allows you to specify the alternative output format of the scan result, such as JSON or YAML.

# output the scan result in JSON format
spotter scan --format json playbook.yml

Omitting documentation URLs from the output

In the scan result, the CLI will display a URL to the relevant Ansible content documentation whenever possible. To omit these documentation URLs from all the output, use the --no-docs-url option.

spotter scan --no-docs-url playbook.yml

Managing custom policies

It is possible to create and use policies for custom Spotter checks written in the Rego language for Open Policy Agent (OPA). The use of custom policies is only available in Spotter's ENTERPRISE plan.

Use the policies set command to set custom OPA policies. This will override all current policies.

Set one policy:

spotter policies set policy.rego

Set a whole directory with policies:

spotter policies set policies/

Set policy for a specific project:

spotter policies set --project-id <project-id> policy.rego

Set policy for the whole organization:

spotter policies set --organization-id <organization-id> policy.rego

After that, run a scan to see the check results you included.

spotter scan playbook.yml

Use the policies clear command to clear custom policies.

Clear policies:

spotter policies clear

Clear policies for a specific project:

spotter policies clear --project-id <project-id>

Clear policies for the whole organization:

spotter policies clear --organization-id <organization-id>

You may encounter an API error due to a timeout, like the following:

API error: HTTPConnectionPool(host='', port=443): Read timed out. (read timeout=10)

This indicates that the operation timed out before completion, potentially due to network latency or server response delays. To mitigate this issue, use the --timeout TIMEOUT switch:

spotter --timeout 60 policies set <policies>

Setting storage folder

The CLI uses local storage for caching access tokens for the Steampunk Spotter API. The default location is ~/.config/steampunk-spotter, but if you want to change it, you can use the --storage-path option.

spotter --storage-path /my/project/.storage scan playbook.yml

Disabling colorized output

The CLI colorizes the scan result by default. Use the --no-color option to make the output non-colorized.

spotter --no-color scan playbook.yml

Setting API endpoint

The CLI connects to the Steampunk Spotter API (backend server) to perform scanning. While a default API endpoint is provided, you may need or want to set a custom endpoint. If you have an on-prem deployment, a custom endpoint is mandatory.

The precedence of the API endpoint configuration is the following, where the first one specified takes effect:

  1. Set the --endpoint global option:

    spotter --endpoint "<spotter-api-url>" scan playbook.yml

  2. Set the SPOTTER_ENDPOINT environment variable:

    export SPOTTER_ENDPOINT=<spotter-api-url>

  3. Create (if it doesn't exist) and open ~/.config/steampunk-spotter/spotter.json for editing. The JSON-formatted contents need to have the root JSON entry "endpoint": "<spotter-api-url>":

    {
      "endpoint": "<spotter-api-url>"
    }

    This approach allows <spotter-api-url> to persist as the default custom endpoint used by the Spotter CLI.

  4. The default Spotter SaaS API endpoint is used if no other configuration is specified.
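The first-match resolution described above can be sketched in Python (an illustrative helper, not the CLI's implementation; the default endpoint below is a placeholder):

```python
import json
import os
from pathlib import Path
from typing import Optional

DEFAULT_ENDPOINT = "<default-saas-endpoint>"  # stand-in for the built-in SaaS URL

def resolve_endpoint(cli_endpoint: Optional[str],
                     storage_path: Path = Path.home() / ".config/steampunk-spotter") -> str:
    """Return the first endpoint found: --endpoint, then env var, then spotter.json, then default."""
    if cli_endpoint:                                 # 1. --endpoint global option
        return cli_endpoint
    from_env = os.environ.get("SPOTTER_ENDPOINT")    # 2. environment variable
    if from_env:
        return from_env
    config_file = storage_path / "spotter.json"      # 3. persisted configuration file
    if config_file.is_file():
        from_file = json.loads(config_file.read_text()).get("endpoint")
        if from_file:
            return from_file
    return DEFAULT_ENDPOINT                          # 4. built-in SaaS default

print(resolve_endpoint("https://spotter.example.com/api"))  # the --endpoint value wins
```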

CI/CD integrations

The CLI can be used in CI/CD pipelines to set up quality scanning of your Ansible content.

When using the CLI in CI/CD workflows, it is essential that you provide Steampunk Spotter credentials as secrets (i.e., pipeline-protected and masked variables).


The CLI can be integrated with GitLab CI/CD to display scan results as GitLab’s unit test reports. This means that you will be quickly able to see which checks have failed within your CI/CD pipeline.

This is done by using the Spotter CLI tool directly in the CI/CD configuration and configuring it to output your scan result in JUnit XML format, which allows GitLab to display check results as green check marks for successful checks and red crosses for unsuccessful checks. To do so, you should use the spotter scan CLI command along with the --junit-xml <path-junit-xml> option that will create a JUnit XML report at the specified location.

Below is a .gitlab-ci.yml example containing a CI job for the test stage, where you call the CLI command mentioned above and then upload the created JUnit XML report file as an artifact to GitLab, which will display it within your pipeline details page or merge request widget.

stages:
  - test

spotter-scan:
  stage: test
  image:
    name: <spotter-cli-image>
    entrypoint: [""]
  script:
    - spotter scan --junit-xml report.xml .
  artifacts:
    when: always
    reports:
      junit: report.xml


In your CI/CD pipeline, you can specify the name of the Steampunk Spotter GitHub Action (xlab-steampunk/spotter-action@master) with a tag number as a step within your YAML workflow file.

For example, inside your .github/workflows/ci.yml file, you can write:

name: test
on: push
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Run Ansible content scan
        uses: xlab-steampunk/spotter-action@<version>
        env:
          SPOTTER_API_TOKEN: ${{ secrets.SPOTTER_API_TOKEN }}

For comprehensive usage and more examples, refer to Steampunk Spotter Action on GitHub Marketplace and Steampunk Spotter GitHub Action repository.

To enhance compatibility and integration capabilities, Spotter supports SARIF format for scan results.


To use SARIF in GitHub's code scanning feature, enable CodeQL. This grants GitHub permission to perform read-only analysis on your repository. See how.

To use this feature, execute the following command:

spotter scan --sarif report.sarif playbook.yml

This command facilitates integration with GitHub code scanning and other platforms supporting the SARIF format.

When incorporating it into a GitHub Actions workflow, follow the example YAML configuration below:

name: test
on: push
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Run Ansible content scan with SARIF output
        uses: xlab-steampunk/spotter-action@master
        env:
          SPOTTER_API_TOKEN: ${{ secrets.SPOTTER_API_TOKEN }}
        with:
          sarif_file: example.sarif
        continue-on-error: true

      - name: Check Sarif
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: example.sarif

This setup allows for seamless integration of static analysis results into the GitHub environment, enhancing code security and quality checks within your repository. The results display detailed information about the code checks, including issues, vulnerabilities, and recommendations.


For other CI/CD systems, we currently only support using the steampunk-spotter Python package and setting it up as a regular shell command. You can also use the spotter-cli Docker image that is available in our GitLab Registry (use the image path and select the appropriate tag). You can use the spotter scan CLI command along with the --junit-xml <path-junit-xml> option to export the scan result in JUnit XML format, which is consumed by CI tools such as Jenkins or Bamboo.

Integration with AAP

Use Steampunk Spotter to check your playbooks in the context of your Ansible Automation Platform (AAP). Obtain assurance about playbook quality and compliance, knowing that only the valid and compliant playbooks will be run in your AAP.

Building an Execution Environment with Steampunk Spotter

To enable Steampunk Spotter in the runtime of your AAP, build the execution environment with the Steampunk Spotter CLI integrated.

Usually, to build an execution environment, you run a command similar to:

ansible-builder build --file execution_environment.yml

To enable Steampunk Spotter in the execution environment, use the following command instead:

spotter build --file execution_environment.yml

This command runs ansible-builder on your behalf, while also ensuring that the Steampunk Spotter CLI is properly installed and configured to be included in the AAP runtime. Any additional ansible-builder build switches and flags can be used with spotter build, as they will be passed on to the ansible-builder build execution. Please refer to the official documentation for further information about the available switches.

Using Steampunk Spotter in AAP

Steampunk Spotter in the AAP is capable of performing the following tasks:

  • Pre-flight checks: runs a scan of the playbooks in the execution environment before they are executed by Ansible. In effect, this is the AAP running spotter scan in the target execution environment.
  • Runtime scans: performs scans during the playbooks' execution. In this mode, Steampunk Spotter has insight into all the available and evaluated variables and expressions.

To control the Steampunk Spotter in your execution environment, set the following variables:

  • SPOTTER_ENDPOINT: the API endpoint for the Steampunk Spotter backend.
  • SPOTTER_TOKEN: the API token used to authenticate with the Steampunk Spotter backend. To obtain the token, log into the Steampunk Spotter web app as the user that the AAP scans will be performed on behalf of, then generate an API token.
  • SPOTTER_ON_ERROR_EXIT: set to a value other than 0 to have Steampunk Spotter stop the AAP execution if it reaches a failure condition.
  • SPOTTER_PREFLIGHT_ENABLED: set to a value other than 0 to have Steampunk Spotter perform the pre-flight check. The variable SPOTTER_DEBUG also needs to be set. The variables SPOTTER_ORGANIZATION and SPOTTER_PROJECT are optional.
  • SPOTTER_RUNTIME_ENABLED: set to a value other than 0 to have Steampunk Spotter perform the runtime check. The variables SPOTTER_ORGANIZATION and SPOTTER_PROJECT also need to be set.
  • SPOTTER_ORGANIZATION: the ID of the organization for which the scan will be performed. Mandatory with SPOTTER_RUNTIME_ENABLED.
  • SPOTTER_PROJECT: the ID of the project in which the scan check results will be saved. Mandatory with SPOTTER_RUNTIME_ENABLED.
  • SPOTTER_DEBUG: set to a value other than 0 to have Steampunk Spotter print out debug information. Mandatory with SPOTTER_PREFLIGHT_ENABLED.
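The variable dependencies above can be summarized with a short Python sketch (a hypothetical validation helper, not shipped with Spotter):

```python
def validate_spotter_env(env: dict) -> list:
    """Check the variable combinations described above; return a list of problems."""
    def enabled(name: str) -> bool:
        # "set to a value other than 0"
        return env.get(name, "0") != "0"

    problems = []
    if enabled("SPOTTER_PREFLIGHT_ENABLED") and not enabled("SPOTTER_DEBUG"):
        problems.append("SPOTTER_PREFLIGHT_ENABLED requires SPOTTER_DEBUG")
    if enabled("SPOTTER_RUNTIME_ENABLED"):
        for required in ("SPOTTER_ORGANIZATION", "SPOTTER_PROJECT"):
            if not env.get(required):
                problems.append(f"SPOTTER_RUNTIME_ENABLED requires {required}")
    return problems

print(validate_spotter_env({"SPOTTER_RUNTIME_ENABLED": "1"}))
# ['SPOTTER_RUNTIME_ENABLED requires SPOTTER_ORGANIZATION', 'SPOTTER_RUNTIME_ENABLED requires SPOTTER_PROJECT']
```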


You will first need to clone this repository.

git clone
cd spotter-cli

Running from source

If you want to run directly from the source, run the following commands:

python3 -m venv .venv && . .venv/bin/activate
pip install -e .


There is a script that enables local testing. For example, you can run linters, unit tests, or integration tests. It is recommended to run all tests locally before pushing new code.

# get help
./ help
# run linters
./ lint

Building a Docker image

Before building the Docker image, copy the wheel file distribution to the root of this repository. You can build the wheel yourself using the python3 -m build --wheel --outdir . . command, or download one from the latest CI/CD pipeline. Make sure that you have only one *.whl file. After that, proceed with the following command:

docker build -t spotter-cli .

After that, you can run scanning in a Docker container. Mount your Ansible content to the default /scan working directory. The credentials can be specified as environment variables for the Docker container or via the --token or --username and --password CLI options.

For example:

docker run --rm -it -e SPOTTER_API_TOKEN=<api-token> -v "/path/to/your/playbooks/:/scan" spotter-cli scan .


docker run --rm -it -e SPOTTER_USERNAME=<username> -e SPOTTER_PASSWORD=<password> -v "/path/to/your/playbooks/:/scan" spotter-cli scan .