## Inventory

The inventory command will list all the servers in a given cluster and cache the results for further operations on them (for instance, SSHing to a given node or running an ansible playbook). You can always filter which nodes you want to display, or want to run an ansible playbook on, by using the `--limit` argument.

The way inventory works is by doing a describe command in AWS/Azure. The describe command matches all the nodes that have the tag "cluster" equal to the cluster name you have defined. The extra filter is applied on the instance tags, which include the instance name.

In order to configure it, you need to add the inventory section in your cluster configuration file (example here).

AWS example:

```yaml
boto_profile: aam-npe # make sure you have this profile in your ~/.aws/credentials file
names: # this assumes the EC2 nodes have the Tag Name "cluster" with Value "mycluster1"
```

Inventory usage:

```
usage: ops cluster_config_path inventory

  --refresh-cache  Refresh the cache for the inventory
  --limit LIMIT    Limit run to a specific server subgroup
  --facts          Show inventory facts for the given hosts
```

## Terraform

Terraform usage:

```
usage: ops cluster_config_path terraform subcommand

  subcommand           apply | console | destroy | import | output | plan |
                       refresh | show | taint | template | untaint
  module MODULE        for use with "taint", "untaint" and "import"
  resource RESOURCE    for use with "taint", "untaint" and "import"
  plan                 for use with "show": show the plan instead of the resource
  path-name PATH_NAME  in case multiple terraform paths are defined, this allows
                       specifying which one to use when running
```

There is also an option to show the raw plan output without piping it through terraform landscape (if terraform landscape is not enabled in opsconfig.yaml, this will have no impact).

Examples:

```sh
# Get rid of a cluster and all of its components
ops clusters/qe1.yaml terraform destroy

# Retrieve all output from a previously created Terraform cluster
ops clusters/qe1.yaml terraform output

# Retrieve a specific output from a previously created Terraform cluster
ops clusters/qe1.yaml terraform output -var nat_public_ip
```
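The tag matching and `--limit` filtering that the inventory section describes can be sketched as follows. This is a hypothetical illustration, not ops-cli's actual implementation; the instance dictionaries merely mimic the shape of an EC2 describe response, and `matching_instances` is a name invented for this example.

```python
# Hypothetical sketch of the inventory tag matching described above.
# ops-cli's real code differs; instance dicts mimic an EC2 describe response.

def matching_instances(instances, cluster_name, limit=None):
    """Keep instances whose tag "cluster" equals cluster_name; optionally
    apply an extra --limit-style substring filter over all tag values
    (which include the instance Name tag)."""
    result = []
    for inst in instances:
        tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
        if tags.get("cluster") != cluster_name:
            continue
        if limit and not any(limit in value for value in tags.values()):
            continue
        result.append(inst)
    return result

instances = [
    {"InstanceId": "i-1", "Tags": [{"Key": "cluster", "Value": "mycluster1"},
                                   {"Key": "Name", "Value": "mycluster1-web-01"}]},
    {"InstanceId": "i-2", "Tags": [{"Key": "cluster", "Value": "mycluster1"},
                                   {"Key": "Name", "Value": "mycluster1-db-01"}]},
    {"InstanceId": "i-3", "Tags": [{"Key": "cluster", "Value": "other"},
                                   {"Key": "Name", "Value": "other-web-01"}]},
]

# All nodes in the cluster:
print([i["InstanceId"] for i in matching_instances(instances, "mycluster1")])
# Only nodes whose tags match the extra "web" filter:
print([i["InstanceId"] for i in matching_instances(instances, "mycluster1", limit="web")])
```

The first call returns both `mycluster1` nodes; the second narrows the result to the node whose Name tag contains "web", mirroring how `--limit` narrows an ansible run to a server subgroup.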
## Installation

From version 2.0 onward, ops-cli requires Python 3. If you're still using Python 2, use an earlier ops-cli version.

```sh
# uninstall previous `ops` version (if you have it)
# ...
echo 'source /usr/local/bin/virtualenvwrapper.sh' >> ~/.bash_profile
```

Optionally, install terraform to be able to access the terraform plugin. Also, for pretty formatting of terraform plan output, you can install terraform landscape (use `gem install` for MacOS).

### Using docker image

You can try out ops-cli by using docker. The docker image has all required prerequisites (python, terraform, helm, git, ops-cli etc).

To start a container running the latest ops-cli docker image, run:

```sh
docker run -it ghcr.io/adobe/ops-cli:2.1.7 bash
```

After the container has started, you can start using ops-cli:

```
ops --help
usage: ops cluster_config_path ...

sub-commands:
  terraform   Wrap common terraform tasks with full templated configuration support
  packer      Wrap common packer tasks and inject variables from a cluster file
  ssh         SSH or create an SSH tunnel to a server in the cluster
  run         Runs a command against hosts in the cluster
  noop        Used to initialize the full container for api usage

optional arguments:
  -h, --help           show this help message and exit
  --root-dir ROOT_DIR  The root of the resource tree - it can be an absolute path
  --verbose, -v        Get more verbose output from commands
  -e                   Extra variables to use. Eg: -e ssh_user=ssh_user
```

Each sub-command includes additional help information that you can get by running:

```sh
ops examples/inventory/aam.yaml sync --help
```

### Tool configuration

The `.opsconfig.yaml` file is looked up in `/etc/opswrapper/.opsconfig.yaml`, then in `~/.opsconfig.yaml`, and then in the project folder, starting from the current dir and going up to the root dir. All the files found this way are merged together, so that you can set some global defaults, then project defaults in the root dir of the project, and then more specific settings in each cluster dir. Eg: `~/.opsconfig.yaml`, `/project/.opsconfig.yaml`, `/project/clusters/dev/.opsconfig.yaml`.
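The layered lookup and merge of `.opsconfig.yaml` files described above can be sketched as a recursive dictionary merge, where later (more specific) files override earlier ones per key. This is a simplified illustration, not ops-cli's actual merge code, and the config keys shown are invented for the example:

```python
# Simplified sketch of the .opsconfig.yaml layering described above.
# The real ops-cli merge logic may differ; keys here are illustrative only.

def deep_merge(base, override):
    """Recursively merge override into base; override's values win per key."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Files in lookup order: global defaults first, most specific last,
# e.g. ~/.opsconfig.yaml -> /project/.opsconfig.yaml -> /project/clusters/dev/.opsconfig.yaml
global_cfg  = {"terraform": {"landscape": True}, "ssh_user": "ops"}
project_cfg = {"ssh_user": "deployer"}
cluster_cfg = {"terraform": {"landscape": False}}

config = {}
for layer in (global_cfg, project_cfg, cluster_cfg):
    config = deep_merge(config, layer)

print(config)
# {'terraform': {'landscape': False}, 'ssh_user': 'deployer'}
```

The cluster-level file wins for `terraform.landscape`, while the project-level `ssh_user` survives untouched, matching the "global defaults, then project defaults, then per-cluster settings" behaviour described above.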