You will need the following to be able to perform this lab:
When you create an Amazon Web Services (AWS) account, you begin with a single sign-in identity that has complete access to all AWS services and resources in the account. This identity is called the AWS account root user. It is accessed by signing in with the email address and password that you used to create the account.
We strongly recommend that you do not use the root user for your everyday tasks, even the administrative ones. Instead, adhere to the best practice of using the root user only to create your first IAM user. Securely store the root user credentials and use them to perform only a few account and service management tasks. To view the tasks that require you to sign in as the root user, see AWS Tasks That Require Root User.
As a best practice, do not use the AWS account root user for any task where it’s not required. Instead, create a new IAM user for each person that requires administrator access. Then grant administrator access by placing the users into an “Administrators” group to which the AdministratorAccess managed policy is attached.
Use members of the Administrators group to manage permissions and policies for the AWS account. Limit use of the root user to only those actions that require it.
If you’re currently using Event Engine, you do not need to complete this section or section 1.2. Skip the following two sections and go to section 1.3 Create an EC2 Key Pair.
To create an administrator user for yourself and add the user to an administrators group:
You can use this same process to create more groups and users and to give your users access to your AWS account resources. To learn about using policies that restrict user permissions to specific AWS resources, see Access Management and Example Policies. To add additional users to the group after it’s created, see Adding and Removing Users in an IAM Group.
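If you prefer to script this setup, the following is a minimal sketch using boto3 rather than the console. It assumes your current credentials are allowed to manage IAM; the group name Administrators and user name Administrator follow the text above and are illustrative choices, not values the lab prescribes.

```python
# A minimal sketch with boto3, assuming your current credentials are allowed
# to manage IAM. The names "Administrators" and "Administrator" follow the
# text above; a console password for the user can be set separately.
import boto3

iam = boto3.client("iam")

# Create the group and attach the AWS managed AdministratorAccess policy.
iam.create_group(GroupName="Administrators")
iam.attach_group_policy(
    GroupName="Administrators",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)

# Create the administrator user and place it in the group so that its
# permissions come entirely from the group's attached policy.
iam.create_user(UserName="Administrator")
iam.add_user_to_group(GroupName="Administrators", UserName="Administrator")
```

Either path ends with an administrator user whose permissions come entirely from the group's AdministratorAccess policy, so later permission changes are a matter of group membership rather than per-user edits.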
Amazon EC2 uses public-key cryptography to encrypt and decrypt login information. Public-key cryptography uses a public key to encrypt a piece of data, such as a password, and the recipient then uses the private key to decrypt the data. The public and private keys are known as a key pair. To log in to the Amazon Linux instances we will create in this lab, you must create a key pair, specify the name of the key pair when you launch the instance, and provide the private key when you connect to the instance.
For the key pair name, enter OELabIPM, choose pem, and then choose Create key pair. Save the downloaded keyPairName.pem file for optional later use accessing the EC2 instances created in this lab.
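The console download is the lab's intended path; as an alternative, the sketch below shows the equivalent call with boto3, assuming credentials and a default region are already configured. The local file name used here is an illustrative stand-in for the console's download.

```python
# A minimal sketch with boto3, assuming credentials and a default region
# are configured. The key pair name matches the lab's OELabIPM.
import os
import boto3

ec2 = boto3.client("ec2")

# Create the key pair and capture the private key material it returns.
response = ec2.create_key_pair(KeyName="OELabIPM")

# Save the private key locally; the file name is an illustrative choice.
pem_path = "OELabIPM.pem"
with open(pem_path, "w") as pem_file:
    pem_file.write(response["KeyMaterial"])
os.chmod(pem_path, 0o400)  # restrict permissions as SSH clients expect
```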
Tagging
We will make extensive use of tagging throughout the lab. The CloudFormation template for the lab includes the definition of multiple tags against a variety of resources.
AWS enables you to assign metadata to your AWS resources in the form of tags. Each tag is a simple label consisting of a customer-defined key and an optional value that can make it easier to manage, search for, and filter resources. Although there are no inherent types of tags, commonly adopted categories of tags include technical tags (e.g. Environment, Workload, InstanceRole, and Name), tags for automation (e.g. Patch Group, and SSMManaged), business tags (e.g. Owner), and security tags (e.g. Confidentiality).
Apply the following best practices when using tags:
Note
It is easy to modify tags to accommodate changing business requirements; however, consider the consequences of future changes, especially in relation to tag-based access control, automation, or upstream billing reports.
Important
Patch Group is a reserved tag key used by Systems Manager Patch Manager. The key is case sensitive and contains a space between the two words.
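For reference, this is a hedged sketch of applying such tags programmatically with boto3. The instance ID and the Patch Group value shown are placeholders; in this lab, the CloudFormation template applies the tags for you.

```python
# A minimal sketch of applying tags with boto3. The instance ID and the
# "Patch Group" value are placeholders; the lab's CloudFormation template
# applies its own tags for you.
import boto3

ec2 = boto3.client("ec2")

ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # placeholder instance ID
    Tags=[
        {"Key": "Environment", "Value": "Test"},
        {"Key": "Owner", "Value": "Administrator"},
        # Reserved key: exact case, single space between the two words.
        {"Key": "Patch Group", "Value": "Critical"},  # placeholder value
    ],
)
```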
AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances) and AWS CloudFormation provisions and configures those resources for you. AWS CloudFormation enables you to use a template file to create and delete a collection of resources as a single unit (a stack).
There is no additional charge for AWS CloudFormation. You pay for AWS resources (such as Amazon EC2 instances, Elastic Load Balancing load balancers, etc.) created using AWS CloudFormation in the same manner as if you created the resources manually. You only pay for what you use as you use it. There are no minimum fees and no required upfront commitments.
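To make the template idea concrete, the skeleton below is a generic, minimal illustration of template structure, not the lab's OE_Inventory_and_Patch_Mgmt.json. Only the Resources section is required; the other sections shown are optional, and the AMI ID is a placeholder.

```python
# A generic, minimal CloudFormation template skeleton (not the lab's
# template), built as a Python dictionary and printed as JSON.
import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal example: a single EC2 instance.",
    "Parameters": {
        "InstanceTypeParam": {"Type": "String", "Default": "t3.micro"}
    },
    "Resources": {  # the only required section
        "ExampleInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",  # placeholder AMI ID
                "InstanceType": {"Ref": "InstanceTypeParam"},
            },
        }
    },
    "Outputs": {
        "InstanceId": {"Value": {"Ref": "ExampleInstance"}}
    },
}

print(json.dumps(template, indent=2))
```

Printing the dictionary as JSON yields a file in the same format CloudFormation accepts as a template body.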
To deploy the lab infrastructure:
Choose the OE_Inventory_and_Patch_Mgmt.json template file you downloaded in step 1 of this section.
AWS CloudFormation Designer
AWS CloudFormation Designer is a graphic tool for creating, viewing, and modifying AWS CloudFormation templates. With Designer you can diagram your template resources using a drag-and-drop interface. You can edit their details using the integrated JSON and YAML editor. AWS CloudFormation Designer can help you see the relationship between template resources.
A CloudFormation template is a JSON- or YAML-formatted text file that describes your AWS infrastructure and contains both optional and required sections. In the next steps, we will provide a name for our stack and parameters that will be passed into the template to help define the resources that will be implemented.
Specify the stack name OELabStack1.
Specify the Environment value Test.
Specify a tag with its Key set to Owner, with Value set to the username you chose for your administrator. You may define additional keys as needed. The CloudFormation template creates all the example tags given in the discussion on tagging above.
When the Status of your stack displays CREATE_COMPLETE in the filter list, you have just created a representation of a typical lift-and-shift 2-tier application migrated to the cloud.
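The same deployment can be scripted. The sketch below is a boto3 equivalent of the console steps above, assuming the template file is in the working directory and that its environment parameter is literally named Environment; check the template for the exact parameter keys before relying on it.

```python
# A minimal sketch of creating the lab stack with boto3. The parameter name
# "Environment" and the Owner tag value are assumptions; verify them against
# the template.
import boto3

cfn = boto3.client("cloudformation")

with open("OE_Inventory_and_Patch_Mgmt.json") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="OELabStack1",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "Test"}],
    Tags=[{"Key": "Owner", "Value": "Administrator"}],  # your administrator user name
    Capabilities=["CAPABILITY_IAM"],  # only needed if the template creates IAM resources
)

# Block until the stack reaches CREATE_COMPLETE.
cfn.get_waiter("stack_create_complete").wait(StackName="OELabStack1")
```

Repeating the call with a different stack name and an Environment value of Prod is all it would take to produce the production copy described below.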
With infrastructure as code, if you can deploy one environment, you can deploy any number of copies of that environment. In this example, we have created a Test environment. Later, we will repeat these steps to deploy a Prod environment.
The ability to dynamically deploy temporary environments on-demand enables parallel experimentation, development, and testing efforts. It allows duplication of environments to recreate and analyze errors, as well as cut-over deployment of production systems using blue-green methodologies. These practices contribute to reduced risk, increased operations effectiveness, and efficiency.