This is how you could structure your AWS CDK project

In this post we will walk through how you can structure your AWS CDK projects so that they make sense to others. In short, we want to limit the WTFs-per-minute of your AWS CDK code.

At the end of this post

You will have a better understanding of the possibilities of structuring your AWS CDK project to make it future-proof.

Requirements — Level 100

This post requires very limited knowledge of AWS CDK. You should generate a sample project using the CLI, or you can check out the source code of any of the other CDK posts.

  • You have a sample AWS CDK project
  • You have a basic understanding of AWS CDK

Use cases

We want to ship this project to our client, but we also want to make it future-proof so that they can continue developing and deploying the application.

If you have built an awesome new project that you’re proud of, you want to start sharing it with others. However, there’s nothing as demotivating as showing it to others and finding that they just don’t get it. Translate that example to business and it could cost you a sale, or at least lose you your customer’s trust. This post walks through the basics of making your CDK code understandable for outsiders.


Tutorial

In this chapter we walk through the different steps to restructure your project. We are basing our tutorial on the source code of this CDK post, which deploys a couple of basic resources. It is a good example because it mixes infrastructure with business logic.

1. Split per intention: infrastructure versus business logic

We are going to rework the example repository above into a well-structured project.

The first thing you’d like to do is split up that business logic and the infrastructure. They’re separate things so let’s treat them that way.

We currently have a structure like the following:

/project
        /lib
        /bin
        /python-app
        readme.md
        ...

What we’d like to do in our example is to get that python-app folder out of the way so it’s no longer in the same place as the infrastructure.

Example representation of what we’d like it to be:

/project
        /infrastructure
                /lib
                /bin
                readme.md
        /source
                Dockerfile
                app.py
                requirements.txt
                readme.md
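The move itself is just a couple of shell commands. Here is a runnable sketch that recreates the old layout in a scratch directory and then restructures it — all paths and file names are illustrative, so adjust them to your own project:

```shell
set -e

# Recreate the old layout in a scratch directory for the demo.
demo=/tmp/cdk-restructure-demo
rm -rf "$demo"
mkdir -p "$demo/project/lib" "$demo/project/bin" "$demo/project/python-app"
cd "$demo/project"
touch readme.md python-app/app.py python-app/Dockerfile python-app/requirements.txt

# The actual restructure: infrastructure and application source side by side.
mkdir -p infrastructure source
mv lib bin readme.md infrastructure/
mv python-app/app.py python-app/Dockerfile python-app/requirements.txt source/
rmdir python-app

ls infrastructure source
```

After this, everything the CDK CLI cares about lives under `infrastructure/`, and everything the container build cares about lives under `source/`.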

To do this for our current example, we don’t need to change much. We just need to adjust the line in our Fargate task definition’s container so that it grabs the code from the new source folder:

    // Create container from local `Dockerfile`
    const appContainer = fargateTaskDefinition.addContainer("Container", {
      image: ecs.ContainerImage.fromAsset("./../source/", {}),
      logging,
    });

2. Split per logical resources

The second thing we want to do is split up the logical resources. We currently have them all defined in one file, the main stack of the CDK project. What you’d like to do is split them into multiple separate files, where each file represents a common goal or resource.

We want to keep resources that logically belong together in the same place. Think of it like your kitchen supplies: you don’t want your plates and cutlery in completely different places, because most of the time you need them together.
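In CDK, such a group of related resources naturally maps to its own `Construct` subclass in its own file. As a sketch of what one of these files could look like — the file name, class name, and property choices here are illustrative, not taken from the example repository:

```typescript
// lib/ecs-cluster.ts -- hypothetical file: everything that is cluster-level.
import { Construct } from "constructs";
import * as ecs from "aws-cdk-lib/aws-ecs";
import * as iam from "aws-cdk-lib/aws-iam";

export class EcsClusterConstruct extends Construct {
  public readonly cluster: ecs.Cluster;
  public readonly taskRole: iam.Role;
  public readonly logging: ecs.LogDriver;

  constructor(scope: Construct, id: string) {
    super(scope, id);

    // The cluster itself (CDK creates a VPC when none is supplied).
    this.cluster = new ecs.Cluster(this, "Cluster");

    // A role that tasks on this cluster can assume.
    this.taskRole = new iam.Role(this, "TaskRole", {
      assumedBy: new iam.ServicePrincipal("ecs-tasks.amazonaws.com"),
    });

    // A shared CloudWatch log driver for containers.
    this.logging = ecs.LogDrivers.awsLogs({ streamPrefix: "app" });
  }
}
```

The public properties are the construct’s “contract”: anything another file needs (the cluster, the role, the log driver) is exposed here, and everything else stays an implementation detail of this file.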

  • We have an ECS cluster

The ECS cluster has configuration that is ECS specific. Think of resources like the CloudWatch log driver or the task role that you’d like tasks to use.

  • We have a custom ECS task

This task will be deployed to the ECS cluster defined above, but it is its own entity; we could, for example, run multiple tasks on this one cluster. So it deserves its own file.

  • We have an API Gateway and SQS queue

This is a tricky one and very specific to this example, because you could argue that the API Gateway and the SQS queue belong in separate files. However, in our example repository they fulfill one very specific need and are tightly coupled: the API Gateway only pushes to the SQS queue, and we don’t expect that use case to change. Because of this, I am putting the two together.
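Putting the split together, the main stack then does little more than instantiate one construct per file and pass the shared pieces between them. A sketch of that wiring — all file names, class names, and props here are hypothetical:

```typescript
// lib/main-stack.ts -- hypothetical: the stack only wires the pieces together.
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import { EcsClusterConstruct } from "./ecs-cluster";
import { AppTask } from "./app-task";
import { QueueApi } from "./queue-api";

export class MainStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // One file per logical group, instantiated here and wired together.
    const ecsResources = new EcsClusterConstruct(this, "Ecs");
    const queueApi = new QueueApi(this, "QueueApi"); // API Gateway + SQS together
    new AppTask(this, "AppTask", {
      cluster: ecsResources.cluster,
      taskRole: ecsResources.taskRole,
      queue: queueApi.queue,
    });
  }
}
```

An outsider reading this stack sees the whole architecture in a dozen lines and can drill into the file for whichever piece they care about.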


Conclusion

I hope you’ve been able to follow along with this post and that you’ve now successfully deployed your resources. You can give that API Gateway URL to anyone for them to be able to trigger new ECS tasks on your cluster.

Next steps

If you find this interesting and want to build out this or other projects, you can always find more solutions and builds on our not built to last blog.

By Mart

Tutorials at nbtl.blog
