troposphere

About

troposphere - library to create AWS CloudFormation descriptions

The troposphere library allows for easier creation of the AWS CloudFormation JSON by writing Python code to describe the AWS resources. troposphere also includes some basic support for OpenStack resources via Heat.

To facilitate catching CloudFormation or JSON errors early, the library has property and type checking built into the classes.

Installation

troposphere can be installed using the pip distribution system for Python by issuing:

$ pip install troposphere

To install troposphere with awacs (recommended soft dependency):

$ pip install troposphere[policy]

Alternatively, you can use setup.py to install by cloning this repository and issuing:

$ python setup.py install # you may need sudo depending on your python installation

Examples

A simple example to create an instance would look like this:

>>> from troposphere import Ref, Template
>>> import troposphere.ec2 as ec2
>>> t = Template()
>>> instance = ec2.Instance("myinstance")
>>> instance.ImageId = "ami-951945d0"
>>> instance.InstanceType = "t1.micro"
>>> t.add_resource(instance)
<troposphere.ec2.Instance object at 0x101bf3390>
>>> print(t.to_json())
{
    "Resources": {
        "myinstance": {
            "Properties": {
                "ImageId": "ami-951945d0",
                "InstanceType": "t1.micro"
            },
            "Type": "AWS::EC2::Instance"
        }
    }
}
>>> print(t.to_yaml())
Resources:
    myinstance:
        Properties:
            ImageId: ami-951945d0
            InstanceType: t1.micro
        Type: AWS::EC2::Instance

Alternatively, parameters can be used instead of properties:

>>> instance = ec2.Instance("myinstance", ImageId="ami-951945d0", InstanceType="t1.micro")
>>> t.add_resource(instance)
<troposphere.ec2.Instance object at 0x101bf3550>

And add_resource returns the object to make it easy to use with Ref():

>>> instance = t.add_resource(ec2.Instance("myinstance", ImageId="ami-951945d0", InstanceType="t1.micro"))
>>> Ref(instance)
<troposphere.Ref object at 0x101bf3490>

Examples of the error checking (full tracebacks removed for clarity):

Incorrect property being set on AWS resource:

>>> import troposphere.ec2 as ec2
>>> ec2.Instance("ec2instance", image="i-XXXX")
Traceback (most recent call last):
...
AttributeError: AWS::EC2::Instance object does not support attribute image

Incorrect type for AWS resource property:

>>> ec2.Instance("ec2instance", ImageId=1)
Traceback (most recent call last):
...
TypeError: ImageId is <type 'int'>, expected <type 'basestring'>

Missing required property for the AWS resource:

>>> from troposphere import Template
>>> import troposphere.ec2 as ec2
>>> t = Template()
>>> t.add_resource(ec2.Subnet("ec2subnet", VpcId="vpcid"))
<troposphere.ec2.Subnet object at 0x100830ed0>
>>> print(t.to_json())
Traceback (most recent call last):
...
ValueError: Resource CidrBlock required in type AWS::EC2::Subnet (title: ec2subnet)

Duplicating a single instance sample would look like this:

# Converted from EC2InstanceSample.template located at:
# http://aws.amazon.com/cloudformation/aws-cloudformation-templates/

from troposphere import Base64, FindInMap, GetAtt
from troposphere import Parameter, Output, Ref, Template
import troposphere.ec2 as ec2

template = Template()

keyname_param = template.add_parameter(Parameter(
    "KeyName",
    Description="Name of an existing EC2 KeyPair to enable SSH "
                "access to the instance",
    Type="String",
))

template.add_mapping('RegionMap', {
    "us-east-1": {"AMI": "ami-7f418316"},
    "us-west-1": {"AMI": "ami-951945d0"},
    "us-west-2": {"AMI": "ami-16fd7026"},
    "eu-west-1": {"AMI": "ami-24506250"},
    "sa-east-1": {"AMI": "ami-3e3be423"},
    "ap-southeast-1": {"AMI": "ami-74dda626"},
    "ap-northeast-1": {"AMI": "ami-dcfa4edd"}
})

ec2_instance = template.add_resource(ec2.Instance(
    "Ec2Instance",
    ImageId=FindInMap("RegionMap", Ref("AWS::Region"), "AMI"),
    InstanceType="t1.micro",
    KeyName=Ref(keyname_param),
    SecurityGroups=["default"],
    UserData=Base64("80")
))

template.add_output([
    Output(
        "InstanceId",
        Description="InstanceId of the newly created EC2 instance",
        Value=Ref(ec2_instance),
    ),
    Output(
        "AZ",
        Description="Availability Zone of the newly created EC2 instance",
        Value=GetAtt(ec2_instance, "AvailabilityZone"),
    ),
    Output(
        "PublicIP",
        Description="Public IP address of the newly created EC2 instance",
        Value=GetAtt(ec2_instance, "PublicIp"),
    ),
    Output(
        "PrivateIP",
        Description="Private IP address of the newly created EC2 instance",
        Value=GetAtt(ec2_instance, "PrivateIp"),
    ),
    Output(
        "PublicDNS",
        Description="Public DNSName of the newly created EC2 instance",
        Value=GetAtt(ec2_instance, "PublicDnsName"),
    ),
    Output(
        "PrivateDNS",
        Description="Private DNSName of the newly created EC2 instance",
        Value=GetAtt(ec2_instance, "PrivateDnsName"),
    ),
])

print(template.to_json())

Community

We have a Google Group, cloudtools-dev, where you can ask questions and engage with the troposphere community. Issues and pull requests are always welcome!

Licensing

troposphere is licensed under the BSD 2-Clause license. See LICENSE for the full license text.

Source: https://github.com/cloudtools/troposphere
Troposphere: a better way to build, manage and maintain a CloudFormation-based infrastructure on AWS

Nowadays most modern SaaS applications are developed and deployed on Cloud providers. In particular, Amazon Web Services, the first real Cloud provider, took and has held the lead in this market thanks to the quality and flexibility of its services. AWS-hosted Cloud infrastructures keep getting larger and more complex over time in order to take full advantage of new services released by AWS. In fact, the number of services offered directly by Amazon is gargantuan and keeps growing every year.

Using AWS services whenever possible, instead of custom solutions deployed on EC2 virtual machines, results in a huge decrease in infrastructure setup and maintenance costs, since Amazon is responsible for the deployment, Cloud optimization, security, and maintenance of each service. Furthermore, most AWS services are designed to be highly available without any additional configuration, saving another significant configuration burden for DevOps teams.

Using AWS services as building blocks allows developers to create almost every type of application. For example, a typical serverless web application leverages Amazon Cognito for authentication, AWS Lambda/API Gateway for the backend, DynamoDB for the database, SNS/SES for push and mail notifications to users, S3/CloudFront for the frontend, and SQS for internal queuing. However, most applications are much more complicated than that (they often need machine learning, data lakes, VPN connections to other services, different databases, batch processing, and so on), and the number of different services and resources needed quickly escalates, resulting in infrastructures so big and complicated that they cannot be safely managed "by hand" anymore. In fact, sometimes modifications to just one component (e.g. a security group or a routing table) can result in unexpected side effects impacting several services, with the potential to take the whole application offline.

In these cases, IaC (Infrastructure as Code) comes to the rescue. Through IaC it is possible to describe the whole AWS infrastructure by writing regular code, so you can version it using Git just like any other code project. When the IaC code is executed, it creates or updates the infrastructure so that it is exactly as you wrote it in your code. If you need to change the infrastructure, you update the IaC code, commit your change, and rerun it.

If all this sounds too good to be true, you are probably right! Every abstraction level we add to our software development flow comes with its own problems, and IaC is no exception. The first problem we had when we decided to go with the IaC paradigm was the choice of the right tool; there are two main IaC frameworks for AWS out there: Terraform and CloudFormation. We tried Terraform but found several issues which were a no-go for us:
  • Terraform uses its own language, which is also very limited: no loops and cycles are possible
  • Sometimes Terraform fails to wait for resource creation, resulting in difficult-to-debug errors
  • It is possible for two developers to unknowingly run Terraform at the same time, resulting in infrastructure inconsistencies; if you want to use Terraform, a pipeline flow needs to be enforced for all projects at all times
  • Rollbacks are often not carried out correctly.
  • Changes often break at runtime because Terraform sometimes does not update resources in the right order.
  • The resources are created using the AWS APIs and there is no centralized place describing the actual state of the infrastructure
  • Terraform runs locally (or on a VM/container on AWS), so it can be affected by network/hardware errors
CloudFormation, on the other hand, is a managed service by AWS: the user must simply write a YAML or JSON file describing the whole infrastructure, upload it to S3 or directly to CloudFormation, and the service will take care of running it safely and statefully. Rollbacks are natively supported and it is also possible to execute "dry runs" of the template by creating a Change Set (analogous to terraform plan). In general, the execution of the template is much less error-prone than with Terraform, thanks to the service being AWS native. The only compelling Terraform use case is that of a multi-cloud infrastructure.

However, CloudFormation has its own drawbacks: YAML files are often very verbose and difficult to write and debug, and like Terraform they do not support advanced logic and loops. Furthermore, splitting a project into multiple files requires nested stacks, which are difficult to integrate with Change Sets. So the next step is to generate the CloudFormation YAML templates using a more advanced language like Python!

Here we have two alternatives: AWS CDK and Troposphere. AWS CDK is new and extremely powerful and allows you to declare complex infrastructure with very few lines of code. However, being high level is also its biggest fault: some very low-level associations between resources are difficult to create, and furthermore the output YAML template is difficult to read because all logical IDs of the resources are managed by CDK. On the contrary, troposphere is really simple: it is just a Python DSL which maps CloudFormation entities (all of them!) to Python classes and the other way round.

This gives us a very simple way to create a template that looks exactly like we want but is generated through a high-level, easily maintainable language. Furthermore, Python IDEs will help us fix problems without even running the YAML template, and the compilation step to YAML will break if we create inconsistent references.
To demonstrate the power of this workflow, we show here how to create a simple VPC with subnets, one for each Availability Zone. First of all, let's look at the raw CloudFormation template:

Description: AWS CloudFormation Template to create a VPC
Parameters:
  SftpCidr:
    Description: SftpCidr
    Type: String
Resources:
  SftpVpc:
    Properties:
      CidrBlock: !Ref 'SftpCidr'
      EnableDnsHostnames: 'true'
      EnableDnsSupport: 'true'
    Type: AWS::EC2::VPC
  RouteTablePrivate:
    Properties:
      VpcId: !Ref 'SftpVpc'
    Type: AWS::EC2::RouteTable
  PrivateSubnet1:
    Properties:
      AvailabilityZone: !Select
        - 0
        - !GetAZs
          Ref: AWS::Region
      CidrBlock: !Select
        - 4
        - !Cidr
          - !GetAtt 'SftpVpc.CidrBlock'
          - 16
          - 8
      MapPublicIpOnLaunch: 'false'
      VpcId: !Ref 'SftpVpc'
    Type: AWS::EC2::Subnet
  PrivateSubnet2:
    Properties:
      AvailabilityZone: !Select
        - 1
        - !GetAZs
          Ref: AWS::Region
      CidrBlock: !Select
        - 5
        - !Cidr
          - !GetAtt 'SftpVpc.CidrBlock'
          - 16
          - 8
      MapPublicIpOnLaunch: 'false'
      VpcId: !Ref 'SftpVpc'
    Type: AWS::EC2::Subnet
  PrivateSubnet3:
    Properties:
      AvailabilityZone: !Select
        - 2
        - !GetAZs
          Ref: AWS::Region
      CidrBlock: !Select
        - 6
        - !Cidr
          - !GetAtt 'SftpVpc.CidrBlock'
          - 16
          - 8
      MapPublicIpOnLaunch: 'false'
      VpcId: !Ref 'SftpVpc'
    Type: AWS::EC2::Subnet
  SubnetPrivateToRouteTableAttachment1:
    Properties:
      RouteTableId: !Ref 'RouteTablePrivate'
      SubnetId: !Ref 'PrivateSubnet1'
    Type: AWS::EC2::SubnetRouteTableAssociation
  SubnetPrivateToRouteTableAttachment2:
    Properties:
      RouteTableId: !Ref 'RouteTablePrivate'
      SubnetId: !Ref 'PrivateSubnet2'
    Type: AWS::EC2::SubnetRouteTableAssociation
  SubnetPrivateToRouteTableAttachment3:
    Properties:
      RouteTableId: !Ref 'RouteTablePrivate'
      SubnetId: !Ref 'PrivateSubnet3'
    Type: AWS::EC2::SubnetRouteTableAssociation

We immediately notice that the code is readily readable and understandable even if it was automatically generated by a troposphere-based script. As can immediately be seen, most of the code is duplicated, since we created three subnets with their attachments to a routing table.

The Python troposphere script which generated the template is the following:

import troposphere.ec2 as vpc
from troposphere import (
    AWS_REGION, Cidr, GetAtt, GetAZs, Parameter, Ref, Select, Template
)

template = Template()
template.set_description("AWS CloudFormation Template to create a VPC")
sftp_cidr = template.add_parameter(
    Parameter('SftpCidr', Type='String', Description='SftpCidr')
)
vpc_sftp = template.add_resource(vpc.VPC(
    'SftpVpc',
    CidrBlock=Ref(sftp_cidr),
    EnableDnsSupport=True,
    EnableDnsHostnames=True,
))
private_subnet_route_table = template.add_resource(vpc.RouteTable(
    'RouteTablePrivate',
    VpcId=Ref(vpc_sftp)
))
for ii in range(3):
    private_subnet = template.add_resource(vpc.Subnet(
        'PrivateSubnet' + str(ii + 1),
        VpcId=Ref(vpc_sftp),
        MapPublicIpOnLaunch=False,
        AvailabilityZone=Select(ii, GetAZs(Ref(AWS_REGION))),
        CidrBlock=Select(ii + 4, Cidr(GetAtt(vpc_sftp, 'CidrBlock'), 16, 8))
    ))
    private_subnet_attachment = template.add_resource(vpc.SubnetRouteTableAssociation(
        'SubnetPrivateToRouteTableAttachment' + str(ii + 1),
        SubnetId=Ref(private_subnet),
        RouteTableId=Ref(private_subnet_route_table)
    ))

print(template.to_yaml())

Running this script after installing Troposphere (pip install troposphere) will print the CloudFormation YAML shown above. As you can see, the Python code is much more compact and easy to understand. Furthermore, since Troposphere maps all the native CloudFormation YAML functions (e.g. Ref, Join, GetAtt, etc.),
we don't even need to learn anything new: every existing CloudFormation template can easily be converted into a Troposphere template.

Differently from plain CloudFormation, with troposphere we can assign the various entities to Python variables and use those variables in the Ref and GetAtt functions in place of the logical CloudFormation names of the resources: in the example above we referenced the private route table with Ref(private_subnet_route_table), not Ref('RouteTablePrivate'). This is a huge advantage because we don't need to remember the logical name while coding; the IDE will do that for us and warn us if the resource is not defined or has a different name.

Troposphere is also able to flawlessly manage nested stacks and other complex multi-stack architectures through the Sceptre (https://github.com/Sceptre/sceptre) automation tool. However, instead of using Sceptre you can also write a custom deployment script, as we did at beSharp, to fully manage your deployment pipeline, run automatic CloudFormation drift checks, and evaluate the Change Sets for all the nested templates before executing the template.

As a final remark, troposphere is also able to manage the reverse flow, from a YAML template to Python classes:

from cfn_tools import load_yaml
from troposphere import TemplateGenerator

template = TemplateGenerator(load_yaml(
    app_config.cloudformation.meta.client.get_template(
        StackName='MyStack')['TemplateBody']
))

This is very useful in situations where you need to dynamically update the infrastructure.

To conclude, using Troposphere is a very simple way to reap all the advantages of CloudFormation together with the abstraction level provided by a modern programming language, and it greatly simplifies CloudFormation code development and deployments. If you are interested in this topic, do not hesitate to comment or reach us for further info!
Matteo Moroni

DevOps and Solution Architect at beSharp, I deal with developing SaaS, Data Analysis, and HPC solutions, and with the design of unconventional architectures of varying complexity. Passionate about computer science and physics, I have always worked in the first and I have a PhD in the second. Talking about anything technical and nerdy makes me happy!

Source: https://www.proud2becloud.com/troposphere-a-better-way-to-build-manage-and-maintain-a-cloudformation-based-infrastructure-on-aws/

Using AWS Cloud9, AWS CodeCommit, and Troposphere to author AWS CloudFormation templates

by Luis Colón | in AWS CloudFormation, Developer Tools, DevOps, Management Tools

AWS Cloud9 was announced at AWS re:Invent in November 2017. It’s a browser-based IDE suitable for many cloud development use cases, including serverless applications. AWS CloudFormation now supports quickly spinning up AWS Cloud9 development environments, with integration with AWS CodeCommit. In this blog post, I’ll explore how to quickly spin up AWS Cloud9 environments with CloudFormation, and also how to code in AWS Cloud9 with Python and Boto3, and generate CloudFormation template code in YAML using Troposphere.

Note: As of the time of this writing, AWS Cloud9 is available in the following AWS Regions: Northern Virginia, Ohio, Oregon, Ireland, and Singapore.

Spinning up Cloud9 Environments with CloudFormation

Using the AWS::Cloud9::EnvironmentEC2 resource type, you can quickly deploy AWS Cloud9 development environments, which you can use to set up development environments for a class or workshop at your company. Each AWS Cloud9 environment also creates an Amazon EC2 instance where you can use a command line to set up additional development tools, much like modern IDEs integrate with your local machine's terminal sessions. You can also use this automation opportunity to set up CodeCommit repositories for each student using an environment. When you create these environments using CloudFormation, you can also create a Git-compatible repository and integrate it with the Amazon EC2 instance that gets deployed with the AWS Cloud9 environment.

The template code that follows shows how simple it is to set up a single environment using CloudFormation with a CodeCommit repository. There are two prerequisites that you must complete before you deploy this template into a stack:

  • Your IAM user must belong to a group that includes the required AWS Cloud9 and AWS CodeCommit managed policies.
  • You need an appropriate subnet to deploy the EC2 instance. I used an account that had a VPC already deployed. (For more information, read Getting Started with Amazon VPC.) I used an intrinsic function to connect the instance to the CodeCommit repository being created.

After it’s deployed, this template creates a separate stack that includes the EC2 instance as an output. This additional stack gets deleted after you delete the resources, so you don’t have to worry about administering this additional stack for the purposes of this example. After the stack deployment is complete, you can verify that it created both the AWS Cloud9 environment and CodeCommit repository by visiting their respective browser-based consoles.
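The template itself appeared only as an image (Figure 1 below). A minimal sketch of such a template, with placeholder logical names, instance type, and subnet ID (not the original figure's code), might look like this:

AWSTemplateFormatVersion: "2010-09-09"
Description: Cloud9 environment with an attached CodeCommit repository
Resources:
  MyRepo:
    Type: AWS::CodeCommit::Repository
    Properties:
      RepositoryName: MyRepo
      RepositoryDescription: Repository for the Cloud9 workshop
  MyCloud9Env:
    Type: AWS::Cloud9::EnvironmentEC2
    Properties:
      Description: Cloud9 development environment
      InstanceType: t2.micro
      SubnetId: subnet-abcdefgh  # placeholder: a subnet in your VPC
      Repositories:
        - PathComponent: /MyRepo
          RepositoryUrl: !GetAtt MyRepo.CloneUrlHttp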

Figure 1. A CloudFormation template to create a Cloud9 environment and a CodeCommit repository.

More on AWS Cloud9

AWS Cloud9 is a versatile browser-based IDE. After you create your environment, you can operate your environment from typical MacOS, Linux and Windows machines, as well as from Chromebooks and tablets. This modern IDE provides multiple coding panes, as well as a command line pane, so you can switch from the editor to command line functions. As shown in Figure 2, I did some of the coding for this article on an iPad Pro, changing a few keyboard settings like splitting the keyboard so I can see both the editor and command line. I could also disable automatic capitalization, automatic word suggestions, and smart punctuation. As I do more iPad-based coding in the future, I’ll probably choose a more programmer-centric keyboard, like SmoothMobile’s DevKey.

Figure 2. The Cloud9 experience on an iPad.

Basic file editing

When you open your AWS Cloud9 environment, your EC2 instance has already been configured with the AWS CLI and a handful of other utilities, like Python versions 2 and 3, as well as Node 6. Also, the AWS CLI has been preconfigured with the required permissions to execute AWS commands. Finally, your local Git client automatically recognizes the repository you’ve just created, and clones it. Note that because it is a new repository, it will be empty.

Assuming you've never used AWS Cloud9 or CodeCommit, now is a good time to create a simple file and add it to the repository. I'll create a default markdown text file for the repository. After you navigate to your repository and open it in CodeCommit, it will automatically look for a README.md file and render it, behaving in a similar way to Git repositories hosted on GitHub.

The first time you open your AWS Cloud9 environment, it will prompt you to navigate to the (still empty) directory it cloned for your newly created repository. It will also advise you to set up your display name and email for your commits using the following commands:
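A sketch of the standard Git identity commands it refers to (name and email are placeholders):

$ git config --global user.name "Your Name"
$ git config --global user.email "you@example.com"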

I’ll execute those two commands first.

For this example, I'll create a bare-bones markdown file for my sample project, as shown in Figure 3. First, I navigated to the AWS Cloud9 console, found the environment I recently created, and opened it. Then, I leveraged a pre-packaged file template by choosing the File menu, then the New From Template submenu, then the Markdown template option from the resulting submenu. Notice that I also changed the default color theme for the IDE, and I changed the local Git configuration for my commits. In addition, I checked the versions of some of the pre-installed packages on the EC2 instance that AWS Cloud9 presents. I can also preview my markdown file in a separate pane with the Preview button. After adding a few lines, I saved the file inside my repository folder.

Figure 3. Editing a markdown file in Cloud9, previewing it, and checking versions on the instance.

Now that I have a new local file in my repository folder, it's time to commit it to my remote CodeCommit repository. After changing directories to my local repository folder, I enter a sequence of four Git commands, which do the following (the matching commands are sketched after the list):

  • Confirms any new files saved to my local repository
  • Adds new files to my next commit
  • Adds a helpful message context to my commit action
  • Sends the committed files to the (remote) repository
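A sketch of the standard Git sequence matching these descriptions (the commit message is illustrative):

$ git status
$ git add .
$ git commit -m "Add README"
$ git push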

To verify that everything works as I expected, go to the CodeCommit console and inspect your repository, which we named MyRepo in our template. The file we just created should be there, and it should be automatically rendered.

Using Python and AWS Boto3

In preparation for using Troposphere to generate CloudFormation code, I'll make sure that I can create and run a Python program, and that I can use the AWS-provided Boto3 library to execute AWS API calls. You can find a fairly easy Python example in the Cloud9 documentation. You'll find that both Python 2 and 3 are already installed, so you can grab the sample code, save a new file on your instance and in your repository, and add a new Run Configuration to execute your code. The Run Configuration is handy for injecting environment variables as you test your code, as well as for watching the standard output of a long-running server process.

In Figure 4, I’ve validated that I’m able to compose and run basic Python code in my environment. I created the file by choosing the File menu, then the New From Template submenu, then the Python File template option from the resulting submenu. When I paste the sample code into my editing pane, I get smart syntax highlighting as well. Note the command I entered in the Command: field on the Run pane on the bottom of the screen, and that it is running Python 2.

Now that I’ve verified that I can edit and run Python programs in my environment, I’ll repeat the sequence of Git commands that I used for my earlier markdown file above, and move on to my next sample file.

Figure 4. Running a basic Python program in AWS Cloud9.

Next, let’s ensure we can call AWS APIs with the latest version of Boto3. From a terminal tab, enter the following command:
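Presumably the standard pip install for Boto3 (the literal command shown in the original post):

$ pip install boto3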

After it is installed, I'll use the sample AWS SDK code from the previously mentioned documentation page for AWS Cloud9 and copy it into a new file. When creating the new file, I'll use the File, New From Template, and Python File menu sequence as I did before, to ensure that I get proper syntax highlighting. Do this for all your language-specific code files and, in some cases, you'll be able to leverage automatic suggestions as you type. Save your file and execute it in the existing Run Configuration tab. When you pass a unique bucket name as the first runtime parameter and a target Region as the second, the program uses Boto3 to list the current S3 buckets for the current account, adds the new bucket, lists the buckets again, deletes the newly added bucket, and lists the buckets once more to show that it was successfully deleted. See Figure 5.
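Based on that description, a minimal Boto3 sketch of such a sample (not the original code; the bucket name and Region come from the two runtime parameters) would be:

import sys
import boto3

bucket_name, region = sys.argv[1], sys.argv[2]
s3 = boto3.resource('s3', region_name=region)

def list_buckets():
    # Print the names of all buckets owned by the current account
    for bucket in s3.buckets.all():
        print(bucket.name)

list_buckets()
# Note: us-east-1 does not accept a LocationConstraint
s3.create_bucket(Bucket=bucket_name,
                 CreateBucketConfiguration={'LocationConstraint': region})
list_buckets()
s3.Bucket(bucket_name).delete()
list_buckets()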

Figure 5. Using AWS APIs in Python with Boto3.

Generating CloudFormation template code with Troposphere

Troposphere is an open-source project that enables Python developers to generate CloudFormation code in either JSON or YAML. Many AWS customers use it. Troposphere is a mature project, its repository provides many examples, and it allows you to use imperative language constructs like looping to generate template code. To add it to our environment, run the following command, as you did when installing Boto3 before:
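As shown in the troposphere README quoted earlier, the install command with the policy option is:

$ pip install troposphere[policy]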

Installing it with the policy option brings in an additional project, awacs, which makes it easier to generate JSON for the AWS Access Policy Language, although it is not required for this example. To set up our example, first write the header for our template into a file:
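A plausible header (the original file name was elided; header.yaml is assumed here and in the loop sketch below):

AWSTemplateFormatVersion: "2010-09-09"
Description: EC2 instances generated with Troposphere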

With this file in hand, we’ll create a short Python program to loop through the generation of an EC2 instance three times. The code, which uses the installed Troposphere library, is listed in Figure 6.

Figure 6. A sample Python file using Troposphere to generate a CloudFormation YAML code snippet.

As it loops, each EC2 instance's logical name is generated from a list of three items, producing one instance each for the dev, test, and prod environments. Then we append the generated YAML code to the header file we manually created earlier, although the header could be generated in Python code as well. The sample code, as well as the resulting YAML file, are shown in Figure 7.
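The program itself was shown only as a figure; a minimal troposphere sketch of the described loop (the AMI ID, logical names, and file name are assumptions) is:

from troposphere import Template
import troposphere.ec2 as ec2

template = Template()

# One EC2 instance per environment, named from a list of three items
for env in ["Dev", "Test", "Prod"]:
    template.add_resource(ec2.Instance(
        "Ec2Instance" + env,
        ImageId="ami-0abcd1234abcd1234",  # placeholder AMI
        InstanceType="t2.micro",
    ))

# Append the generated Resources section to the manually written header
with open("header.yaml", "a") as f:
    f.write(template.to_yaml())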

Figure 7. A sample Python file using Troposphere to generate a CloudFormation YAML code snippet.

The resulting YAML file is ready for us to use with CloudFormation. You can either list the file and copy and paste its contents to a local YAML file to upload to CloudFormation, or push it back to your CodeCommit repo and pick up the file from there.

Cleaning up

Now that the sample exercise is completed, I’ll clean up all the resources I’ve created. But first, I’ll probably want to keep the files in my CodeCommit repository for future reference. Since removing the initial stack will remove my AWS Cloud9 environment and CodeCommit repository, I’ll clone my repository locally so I can keep the files I’ve created. To do so, I’ll go to the AWS IAM console and create HTTPS Git credentials for my user account. You can find this option after you choose your user, then choose the Security Credentials tab. Scroll down, and choose the Generate button, as shown in Figure 8. Grab the resulting username and password, and clone your repository locally. To find the URL, open the CodeCommit console, select your repository, and choose the Connect button. This will give you the URL and the Git clone command to enter locally.

Figure 8. Generating Git credentials so you can copy the sample files from your repository to your local machine.

After the files are copied down, go back to the CloudFormation console and delete the sample stack you created from the Troposphere-generated YAML code, as well as the stack that created your AWS Cloud9 environment and CodeCommit repository. You can delete both stacks in parallel since they don’t depend on each other. Also, note that the additional stack that was created to generate your Amazon EC2 instance will be removed as well.

Bonus Tip: Increasing your AWS Cloud9 storage space

While coding on your EC2-backed AWS Cloud9 instance, you may run out of the default 8GB storage space in your connected EBS volume. You can expand the space by following these steps:

  1. On the AWS Cloud9 IDE, switch to a bash terminal session, and enter the df -h command. You should see the default 8GB disk size under the Size column. This is the value you will change in this procedure.
  2. Close your Cloud9 session, then go to the EC2 console and stop the EC2 instance used by your environment. Its name should have the prefix “aws-cloud9”.
  3. Wait until the instance state is displayed as stopped, and then choose Volumes from the Elastic Block Store submenu on the left side of the page.
  4. Choose your volume, and then choose Modify Volume from the Actions menu above the instance list.
  5. Enter a number larger than 8 in the Size field (for example, 16 corresponds to 16GB, which will double your storage).
  6. Choose the Modify button and accept the changes.
  7. Refresh the page a few times, and you will see the state of the volume changing to in-use – optimizing. Continue to refresh the page until the state of the volume changes to in-use – completed (100%).
  8. Return to the AWS Cloud9 console, and choose Open IDE to start your environment. It will recognize that the instance is stopped, and it will restart the instance and connect to it.
  9. Run df -h again to verify that the space is allocated. In the earlier example, you should see the new 16GB size of your disk under the Size column, and the available space should be updated to reflect the larger size.
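If you prefer the AWS CLI to the console for the volume modification steps above, an equivalent sketch (with a hypothetical volume ID) is:

$ aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 16
$ aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0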

Summary

Setting up a robust browser-based development environment with CloudFormation, AWS Cloud9, and CodeCommit is fast and easy. Using tools like Boto3 and Troposphere, you can use AWS SDK APIs and generate CloudFormation code in YAML from Python just as quickly. There are other ways to use high-level languages like Python to generate CloudFormation code. Other public projects target other languages like Ruby, JavaScript, and Go.

Finally, I should note that we only covered a few of the features of AWS Cloud9 in this blog post. Using AWS Cloud9 you can edit remote files, enable live pair programming, and perform debugging with breakpoints, showing variable values as the program runs. You can clone additional repositories and work on multiple projects from a single AWS Cloud9 environment. There are many additional configuration options, accessible by using the Settings menu or by choosing the gear icon on the right edge of the top menu bar. For more information on AWS Cloud9 features, see the AWS Cloud9 documentation.

I encourage you to experiment with other AWS Cloud9 features, as well as other projects that allow you to use high-level languages to author CloudFormation templates. It is likely that you’ll discover new ideas to help you improve your infrastructure code and your automation options.

About the Author

 Luis Colon is a Senior Developer Advocate for the AWS CloudFormation team. He works with customers and internal development teams to focus on and improve the developer experience for CloudFormation users. In his spare time, he mixes progressive trance music. You can reach him via @luiscolon1 on Twitter.

 


Source: https://aws.amazon.com/blogs/mt/using-aws-cloud9-aws-codecommit-and-troposphere-to-author-aws-cloudformation-templates/

Infrastructure as Code for Python Developers - Part 1 - Troposphere

In the AWS world, Infrastructure as Code is not a new concept, but it remains a hot topic, as a lot of improvement has happened in this area.
After working with CloudFormation templates for a while, one notices several shortcomings that make templates long, clunky, and nigh unreadable. So what are the alternatives through a Python developer's lens? Troposphere and AWS CDK.

The Troposphere Python library allows for easier creation of Amazon CloudFormation JSON by writing Python code to describe the AWS resources. This effectively allows you to programmatically define your infrastructure without being as limited as with plain CloudFormation.

Let's install it:
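$ pip install troposphere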

The Troposphere team has some great examples in their GitHub repository. I suggest taking a look there and working from their examples.

CloudFormation

CloudFormation is a managed service from AWS: the user simply writes a YAML or JSON file describing all the infrastructure, uploads it to S3 or directly to CloudFormation, and the service takes care of running it safely and statefully.
However, CloudFormation has its own drawbacks: YAML files are often very verbose, difficult to write and debug, and do not support advanced logic and loops.
Let's look at the raw CloudFormation template to create a VPC:
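(Abbreviated here to a single subnet; the full template appears in the beSharp article above.)

Description: AWS CloudFormation Template to create a VPC
Parameters:
  SftpCidr:
    Description: SftpCidr
    Type: String
Resources:
  SftpVpc:
    Properties:
      CidrBlock: !Ref 'SftpCidr'
      EnableDnsHostnames: 'true'
      EnableDnsSupport: 'true'
    Type: AWS::EC2::VPC
  RouteTablePrivate:
    Properties:
      VpcId: !Ref 'SftpVpc'
    Type: AWS::EC2::RouteTable
  PrivateSubnet1:
    Properties:
      AvailabilityZone: !Select
        - 0
        - !GetAZs
          Ref: AWS::Region
      CidrBlock: !Select
        - 4
        - !Cidr
          - !GetAtt 'SftpVpc.CidrBlock'
          - 16
          - 8
      MapPublicIpOnLaunch: 'false'
      VpcId: !Ref 'SftpVpc'
    Type: AWS::EC2::Subnet
  SubnetPrivateToRouteTableAttachment1:
    Properties:
      RouteTableId: !Ref 'RouteTablePrivate'
      SubnetId: !Ref 'PrivateSubnet1'
    Type: AWS::EC2::SubnetRouteTableAssociation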

Troposphere

Troposphere is really simple: it is just a Python DSL which maps CloudFormation entities (all of them!) to Python classes and the other way round. This gives us a very simple way to create a template that looks exactly like we want but is generated through a high-level, easily maintainable language. Furthermore, Python IDEs will help us fix problems without even running the YAML template, and the compilation step to YAML will break if we create inconsistent references. The Python troposphere script which generated the template (the same script shown in the beSharp article above) is the following:
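import troposphere.ec2 as vpc
from troposphere import (
    AWS_REGION, Cidr, GetAtt, GetAZs, Parameter, Ref, Select, Template
)

template = Template()
template.set_description("AWS CloudFormation Template to create a VPC")
sftp_cidr = template.add_parameter(
    Parameter('SftpCidr', Type='String', Description='SftpCidr')
)
vpc_sftp = template.add_resource(vpc.VPC(
    'SftpVpc',
    CidrBlock=Ref(sftp_cidr),
    EnableDnsSupport=True,
    EnableDnsHostnames=True,
))
private_subnet_route_table = template.add_resource(vpc.RouteTable(
    'RouteTablePrivate',
    VpcId=Ref(vpc_sftp)
))
for ii in range(3):
    private_subnet = template.add_resource(vpc.Subnet(
        'PrivateSubnet' + str(ii + 1),
        VpcId=Ref(vpc_sftp),
        MapPublicIpOnLaunch=False,
        AvailabilityZone=Select(ii, GetAZs(Ref(AWS_REGION))),
        CidrBlock=Select(ii + 4, Cidr(GetAtt(vpc_sftp, 'CidrBlock'), 16, 8))
    ))
    private_subnet_attachment = template.add_resource(vpc.SubnetRouteTableAssociation(
        'SubnetPrivateToRouteTableAttachment' + str(ii + 1),
        SubnetId=Ref(private_subnet),
        RouteTableId=Ref(private_subnet_route_table)
    ))

print(template.to_yaml())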

The generated YAML code is readily readable and understandable even though it was produced automatically by a troposphere-based script. As can immediately be seen, most of the YAML is duplicated, since we created three subnets with their attachments to a routing table.

Running this script after installing Troposphere (pip install troposphere) will print the CloudFormation YAML shown above. As you can see, the Python code is much more compact and easy to understand. Furthermore, since Troposphere maps all the native CloudFormation YAML functions (e.g. Ref, Join, GetAtt, etc.), we don't even need to learn anything new: every existing CloudFormation template can easily be converted into a Troposphere template.

Conclusion

Troposphere is comparable to AWS CloudFormation templates, but in Python, offering the features of a real programming language. It is a very simple way to reap all the advantages of CloudFormation together with the abstraction level provided by a modern programming language, and it greatly simplifies CloudFormation code development and deployment.
In the next part, I'll cover AWS CDK.

Source: https://dev.to/priyanka_bisht_567bb3341b/infrastructure-as-code-for-python-developer-part-1-troposphere-51lp


Quick Start

Troposphere closely follows CloudFormation, so there isn't much documentation specific to Troposphere. This documentation contains various examples, but for the most part the CloudFormation docs should be used.

CloudFormation Basics

  • Template Anatomy - structure of a CloudFormation template.
  • Resources are the basic blocks and required in any template.
  • Outputs are optional but can be used to create cross-stack references. Having everything in one stack will make it very hard to manage the infrastructure. Instead, values from one stack (for example, network setup) can be exported in this section and imported by another stack (for example, EC2 setup). This way a stack used to set up a certain application can be managed or deleted without affecting other applications that might be present on the same network.
  • Intrinsic Functions should be used to manipulate values that are only available at runtime. For example, assume a template that creates a subnet and attaches a routing table and network ACL to that subnet. The subnet doesn't exist when the template is created, so its ID can't be known. Instead, the route and network ACL resources are going to get the ID at runtime, by using the Ref function against the subnet.

Basic Usage

The following two pieces of code demonstrate basic usage of Troposphere and CloudFormation templates. The first template creates two subnets and exports their IDs. The second creates an EC2 instance in one of those subnets. The comments explain how it works and where to find documentation on using CloudFormation and Troposphere.

#!/usr/bin/env python3
#
# learncf_subnet.py
#
# Generate a CloudFormation template that will create two subnets. This
# template exports the subnet IDs to be used by a second template which
# will create an EC2 instance in one of those subnets.
#

from __future__ import print_function

from troposphere import ec2
from troposphere import Tags, GetAtt, Ref, Sub, Export
from troposphere import Template, Output

# Create the object that will generate our template
t = Template()

# Define resources that CloudFormation will generate for us
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resources-section-structure.html

# Define the first subnet. We know that 'Subnet()' is in the ec2 module
# because in CloudFormation the Subnet resource is defined under EC2:
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-subnet.html
net_learncf_1a = ec2.Subnet("netLearnCf1a")

# Information about the possible properties of Subnet() can be found
# in CloudFormation docs:
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-subnet.html#aws-resource-ec2-subnet-properties
net_learncf_1a.AvailabilityZone = "eu-west-1a"
net_learncf_1a.CidrBlock = "172.30.126.80/28"  # ADJUST THIS VALUE
net_learncf_1a.VpcId = "vpc-abcdefgh"  # ADJUST THIS VALUE

# Tags can be declared in two ways. One way is
# (1) in AWS/boto format, as a list of dictionaries where each item in the
# list has (at least) two elements. The "Key" key will be the tag key and
# the "Value" key will be the tag's Value. Confusing, but it allows for
# additional settings to be specified for each tag. For example, if a tag
# attached to an autoscaling group should be inherited by the EC2 instances
# the group launches or not.
net_learncf_1a.Tags = [
    {"Key": "Name", "Value": "learncf-1a"},
    {"Key": "Comment", "Value": "CloudFormation+Troposphere test"}
]

# The subnet resource defined above must be added to the template
t.add_resource(net_learncf_1a)

# The same thing can be achieved by setting parameters to the Subnet()
# function instead of properties of the object created by Subnet().
# Shown below.
#
# For the second subnet we use the other method of defining tags,
# (2) by using the Tags helper function, which is defined in Troposphere
# and doesn't have an equivalent in CloudFormation.
#
# Also, we use GetAtt to read the value of an attribute from a previously
# created resource, i.e. VPC ID from the first subnet. For demo purposes.
#
# The attributes returned by each resource can be found in the CloudFormation
# documentation, in the Returns section for that resource:
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-subnet.html#aws-resource-ec2-subnet-getatt
#
# GetAtt documentation:
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-getatt.html
net_learncf_1b = ec2.Subnet(
    "netLearnCf1b",
    AvailabilityZone="eu-west-1b",
    CidrBlock="172.30.126.96/28",  # ADJUST THIS VALUE
    VpcId=GetAtt(net_learncf_1a, "VpcId"),
    Tags=Tags(
        Name="learncf-1b",
        Comment="CloudFormation+Troposphere test"
    )
)
t.add_resource(net_learncf_1b)

# The Outputs section will export the subnet IDs to be used by other stacks
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/outputs-section-structure.html
out_net_learncf_1a = Output("outNetLearnCf1a")

# Ref is another CloudFormation intrinsic function:
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-ref.html
# If pointed to a subnet, Ref will return the subnet ID:
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-subnet.html#aws-resource-ec2-subnet-ref
out_net_learncf_1a.Value = Ref(net_learncf_1a)

# Append the subnet title (Logical ID) to the stack name and set that as the
# exported property. Importing it in another stack will return the Value
# we set above to that stack.
#
# Sub stands for 'substitute', another CloudFormation intrinsic function.
out_net_learncf_1a.Export = Export(Sub("${AWS::StackName}-" + net_learncf_1a.title))

# Similar output for the second subnet
out_net_learncf_1b = Output("outNetLearnCf1b")
out_net_learncf_1b.Value = Ref(net_learncf_1b)
out_net_learncf_1b.Export = Export(Sub("${AWS::StackName}-" + net_learncf_1b.title))

# Add outputs to template
t.add_output(out_net_learncf_1a)
t.add_output(out_net_learncf_1b)

# Finally, write the template to a file
with open('learncf-subnet.yaml', 'w') as f:
    f.write(t.to_yaml())

And the EC2 instance template:

#!/usr/bin/env python3
#
# learncf_ec2.py
#
# Generate a CloudFormation template that creates an EC2 instance in a
# subnet which was created previously by another template (learncf-subnet)
#

from __future__ import print_function

from troposphere import ec2
from troposphere import Tags, ImportValue
from troposphere import Template

# create the object that will generate our template
t = Template()

ec2_learncf_1a = ec2.Instance("ec2LearnCf1a")
ec2_learncf_1a.ImageId = "ami-e487179d"  # ADJUST IF NEEDED
ec2_learncf_1a.InstanceType = "t2.micro"

# We set the subnet to start this instance in by importing the subnet ID
# from the other CloudFormation stack, which previously created it.
# An example of a cross-stack reference used to split stacks into
# manageable pieces. Each export must have a unique name in its account
# and region, so the template name was prepended to the resource name.
ec2_learncf_1a.SubnetId = ImportValue("learncf-subnet-netLearnCf1a")

ec2_learncf_1a.Tags = Tags(
    Name="learncf",
    Comment="Learning CloudFormation and Troposphere"
)

t.add_resource(ec2_learncf_1a)

# Finally, write the template to a file
with open('learncf-ec2.yaml', 'w') as f:
    f.write(t.to_yaml())

After the .yaml files are generated using the code above, stacks can be created from the command line like this:

aws cloudformation create-stack --stack-name learncf-subnet --template-body file://learncf-subnet.yaml
aws cloudformation create-stack --stack-name learncf-ec2 --template-body file://learncf-ec2.yaml

Source: https://troposphere.readthedocs.io/en/latest/quick_start.html

Pulumi vs. AWS CDK and Troposphere

Because of the challenges of writing raw YAML/JSON by hand, two notable projects exist to compile higher-level languages into AWS CloudFormation YAML/JSON templates:

  • Troposphere: a community-led open source project created in 2013
  • AWS Cloud Development Kit (CDK): an AWS Labs project created in 2018

Similar to Pulumi, these projects let you author infrastructure as code using general-purpose languages like TypeScript, JavaScript, and Python. Unlike Pulumi, however, whose open source engine understands these languages directly, a transpiler (a.k.a. source-to-source compiler) translates these programs into AWS CloudFormation YAML/JSON. The resulting markup file is then submitted to the closed source AWS CloudFormation servers to provision infrastructure on AWS in the usual ways.

Pulumi Supports Many Clouds

AWS CDK and Troposphere support AWS only. Pulumi supports the entire capabilities of Azure, Google Cloud Platform, and cloud native technologies such as Kubernetes, in addition to AWS. Projects exist to bridge these gaps, but they provide disjoint experiences across target clouds. There are several other points outlined below, but these are the top-level key differences.

Summary of Major Differences

The transpiler approach gives you some of the benefits of Pulumi, with the following caveats:

  • Troposphere and the AWS CDK only support the AWS platform. Pulumi supports many clouds, including major cloud platforms (such as Microsoft Azure, Google Cloud Platform, Kubernetes, and DigitalOcean), on-premises and hybrid technologies (such as VMWare vSphere and OpenStack), and online SaaS offerings (like Cloudflare, Datadog, New Relic, and more). Furthermore, Pulumi is extensible, supports custom providers, and can bridge with any existing Terraform-based provider.

  • Pulumi supports Cloud Native technologies, including Kubernetes, Helm Charts, Istio service meshes, and hosted Kubernetes clusters in any cloud (AWS EKS, Azure AKS, Google GKE, etc).

  • Troposphere and CDK compile down to YAML and are therefore limited in what they can express. The Pulumi engine understands general-purpose language patterns, dependencies between objects, and therefore delivers a better overall experience. Pulumi also supports going beyond what you can express in YAML, such as building and publishing a Docker container image, authoring serverless functions in code, automating packaging and versioning of code, and so on.

  • The Pulumi CLI and Console are co-designed to make team collaboration simple, especially with organization-wide sharing of projects and stacks. This is closer to “GitHub for DevOps” and delivers a rich experience including diffs and previews of updates before they are made. Troposphere and CDK rely on CloudFormation which is known to be more challenging in these areas.

  • Pulumi has a built-in configuration system that is super easy to use. Related, encrypted secrets give you an easy way to integrate secrets management best practices for database passwords, tokens, and other secrets. In contrast, AWS offers building block services like AWS KMS, AWS Secrets Manager, and AWS Systems Manager Parameter Store. However, using them in combination with one another in just the right way can be challenging. Pulumi leverages underlying building block services in your target cloud, or even HashiCorp Vault, to deliver an easy experience with secrets management automatic best practices built-in.

  • Pulumi integrates with a number of CI/CD providers and source control systems (SCMs) out of the box, for easy continuous delivery with systems you might already be using. Although CloudFormation can be used in this manner, it requires manual configuration, and is designed to work best with AWS’s own CodeBuild/Pipeline products.

  • Pulumi integrates with your identity provider—including GitHub, GitLab, Atlassian, or any SAML/SSO 2.0 provider (such as Azure Active Directory, Google G Suite, or Okta)—for auditing and access controls using your existing enterprise systems of record. AWS CloudFormation can be manually integrated with those systems with greater effort.

  • Pulumi can use custom state management and offers a self-hosting option for greater control, including “behind the firewall” on-premises and hybrid options. Troposphere and CDK exclusively rely on the server-side AWS CloudFormation runtime. Pulumi offers a free hosted backend as its default offering but gives you more flexibility and control.

Although Pulumi and the Troposphere and AWS CDK projects share a vision for the future of infrastructure as code using general-purpose languages, Pulumi’s many-cloud nature, embrace of modern Cloud Native technologies, and its open source engine and modern SaaS that deeply understand language semantics and advanced orchestration significantly differentiate the offerings.

Source: https://www.pulumi.com/docs/intro/vs/cloud-template-transpilers/
