Ansible interpreter
Customers operating in environments with both Chef and Ansible previously lacked a seamless integration path within Chef Courier. This limitation forced customers to use multiple orchestration tools or manual interventions, resulting in increased operational complexity and slower time-to-value.
Ansible Playbook execution is now natively supported, which unifies automation strategies, reduces tool fragmentation, and improves operational efficiency. This feature benefits DevOps engineers, system administrators, and automation teams working in hybrid-tool environments.
The Ansible interpreter executes Ansible playbooks as actions in a Chef Courier job. It supports playbooks from various sources, including local files, HTTP/HTTPS URLs, and S3 buckets.
Prerequisites
Before you can run playbooks with the Ansible interpreter, the following prerequisites must be met:
- Ansible or the Chef Ansible Habitat skill must be installed.
- The ansible-playbook executable must be installed on the target node.
Supported playbook sources
The Ansible interpreter supports fetching playbooks from the following sources:
- Local Filesystem: Specify the path to a playbook that exists on the target node’s local filesystem.
- HTTPS/HTTP URLs: Provide a public HTTPS or HTTP URL to download the playbook from a remote server. Only public URLs are supported; authentication isn’t supported.
- S3 Buckets: Use an S3 URI to fetch the playbook from an AWS S3 bucket. The target node must be an EC2 instance with appropriate IAM permissions and instance metadata enabled.
These options allow you to flexibly source playbooks for execution in a variety of environments and workflows.
Limitations
The Ansible interpreter has the following limitations regarding supported playbook sources:
Local playbook source
- The specified playbook file must exist on the target node and be accessible by the user running the interpreter.
- No additional validation is performed on the file path beyond existence and readability.
HTTPS/HTTP playbook source
- Only public HTTPS and HTTP URLs are supported.
- Authentication methods such as HTTP Basic Auth, OAuth, or custom headers aren’t supported.
- The interpreter can’t fetch playbooks from endpoints that require authentication.
S3 playbook source
- Requires the target node to be an AWS EC2 instance with instance metadata enabled.
- The EC2 instance must have an IAM role attached that grants access to the specified S3 bucket.
- Access to S3 buckets using static credentials or other authentication methods isn’t supported.
Ensure your environment meets these requirements when specifying playbook sources.
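For the S3 playbook source, the IAM role attached to the EC2 instance must allow read access to the playbook object. The following policy is an illustrative sketch only, not a definitive requirement: the bucket name and key prefix are placeholders, and the exact permissions your environment needs may differ (for example, some setups also require s3:ListBucket on the bucket).
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPlaybookDownload",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::bucket_name/folder/*"
    }
  ]
}
Attach a policy like this to the IAM role on the instance profile used by the target node, and keep instance metadata enabled as noted above.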
Node management skills definition
In most cases, the ansible-interpreter does not require a complex node management skill definition because it's a stateless, agentless tool that runs playbooks directly on the target node using the existing Ansible installation. The skill definition primarily serves as a registration mechanism within the Chef platform, enabling you to assign and manage the interpreter as part of your node management workflows. This lightweight approach ensures minimal configuration overhead while still allowing for integration, versioning, and optional overrides as needed.
The following is the node management skills definition for the ansible-interpreter:
{
  "item": {
    "canister": {
      "name": "ansible-interpreter",
      "origin": "chef-platform",
      "service": false
    },
    "configurationTemplates": [],
    "dependencies": [],
    "name": "ansible-interpreter",
    "native": null
  }
}
How to add the ansible-interpreter skill
To add a new skill such as ansible-interpreter to your node management environment, follow these steps:
Define the skill: Create a skill definition JSON as shown above. This definition specifies the skill’s name, origin, dependencies, and other metadata. Refer to Defining skills for further details.
Assemble the skill: Use the skill assembly process to package your skill definition, any configuration templates, and dependencies. This step ensures your skill is ready for deployment and use within the Chef platform. Refer to Skill assembly for guidance on assembling your skill.
Override skill settings (optional): If needed, you can override default skill settings for specific nodes or environments. This allows for customization of skill behavior without modifying the original skill definition. Refer to Skill settings for more information.
Deploy the skill using a node cohort: When a skill assembly is updated, all associated node cohorts receive the updated skills. You can also create a new node cohort with a newly defined skill assembly for ansible-interpreter.
Assign the cohort to nodes: Existing nodes receive updated skills from their node cohort on a predefined periodic sync. Assign the newly created node cohort with the ansible-interpreter skill to the appropriate nodes or node groups. This enables those nodes to use the Ansible interpreter to execute playbooks.
Arguments for Ansible interpreter
When defining a command for the Ansible interpreter in your Courier job, the following argument is required:
playbook_source (string, required): The path or URI to the Ansible playbook to execute. This can be a local file path, a public HTTPS/HTTP URL, or an S3 URI, depending on your environment and requirements.
Other arguments may be supported for advanced use cases (such as extra_params for passing variables), but playbook_source is always required. Ensure that the specified playbook source is accessible and valid according to the supported playbook sources and limitations described above.
If you need to pass additional variables to your playbook, use the extra_params.extra_vars field as shown in the Examples section.
Examples
Local playbook example
"command": {
"playbook_source": "/path/to/playbook.yml"
}
HTTPS playbook example
"command": {
"playbook_source": "https://example.com/playbooks/playbook.yml"
}
S3 bucket playbook example
"command": {
"playbook_source": "s3://bucket_name/folder/playbook.yml"
}
Using extra_vars in the command example
To pass extra variables as parameters to your Ansible playbook, include the extra_params.extra_vars field in the command section. For example:
"command": {
"playbook_source": "/path/to/playbook.yml",
"extra_params": {
"extra_vars": {
"dir_name": "/path/to/directory"
}
}
}
This sets the dir_name variable in your playbook execution context.
Full job example: Local playbook
{
  "description": "Run a local ansible playbook job",
  "name": "ansible local playbook job",
  "scheduleRule": "immediate",
  "target": {
    "executionType": "sequential",
    "groups": [
      {
        "batchSize": {
          "type": "percent",
          "value": 100
        },
        "distributionMethod": "batching",
        "nodeIdentifiers": ["--NODE1--"],
        "nodeListType": "nodes",
        "successCriteria": [
          {
            "numRuns": {
              "type": "percent",
              "value": 100
            },
            "status": "success"
          }
        ],
        "timeoutSeconds": 240
      }
    ]
  },
  "actions": {
    "accessMode": "agent",
    "steps": [
      {
        "command": {
          "playbook_source": "/path/to/playbooks/playbook.yml"
        },
        "conditions": [],
        "description": "",
        "expectedInputs": {},
        "failureBehavior": {
          "action": "retryThenFail",
          "retryBackoffStrategy": {
            "arguments": [],
            "delaySeconds": 0,
            "type": "linear"
          }
        },
        "inputs": {},
        "interpreter": {
          "name": "chef-platform/ansible-interpreter",
          "skill": {
            "maxVersion": "0.1.0",
            "minVersion": "0.1.0"
          }
        },
        "limits": {
          "cores": 0,
          "cpu": 1,
          "timeoutSeconds": 0
        },
        "name": "Execute Playbook",
        "outputFieldRules": {},
        "retryCount": 1
      }
    ]
  }
}