Quickly Building Modern Web Applications on Amazon AWS

1. Overview
This article is based on the official AWS tutorial of the same name, with additional explanations and refinements. My main work here includes practicing with and translating the English documentation (the official tutorial is only available in English except for the first page) and filling in more of the details encountered along the way. By following this article, you can build your first modern application on AWS.
A “modern application” is contrasted with a traditional one. According to Amazon, a modern web application should have the following characteristics:
- It can isolate business logic
- It is easy to optimize, reuse, and iterate
- It minimizes computing overhead as much as possible
- It is built by combining multiple services
- It allows developers to focus on writing code
- It automates infrastructure maintenance tasks
To build a modern application and optimize the architecture as much as possible, this article uses AWS ECS, S3 for static storage, DynamoDB as a NoSQL database, AWS Cloud9 as the cloud IDE, Fargate for deployment, Amazon Cognito integrated with API Gateway for authentication, and AWS Lambda plus Amazon Kinesis Firehose for cloud-side event processing.
If your goal is purely learning, this full setup can be fairly expensive. However, AWS provides a free tier for new users that covers most of the resources required by this project. To use it, you need to register an account on the international AWS site and verify it with an international credit card. That gives you access to a 12-month free allowance for some resources such as ECS, along with some permanently free but usage-limited resources. You do, however, need to tolerate the painfully slow network access to the international AWS site, as well as the occasional automatic redirection to Amazon China. If you want to use AWS China instead, you need a company account approved by AWS’s domestic service providers. After verification, Amazon also offers some limited trial resources there. Since most people do not have the authority to create a new enterprise account, this article is still based on the international AWS site.
By following this article, you can achieve the following goals:
- Create a static website using Amazon Simple Storage Service (S3) to host static content such as images and text
- Build a dynamic website using an API backend microservice deployed as containers via AWS Fargate, with application logic hosted on the web server
- Store Mysfit data by externalizing all Mysfit data and persisting it in DynamoDB, a managed NoSQL database
- Add user registration using AWS API Gateway integrated with Amazon Cognito, enabling users to sign up, authenticate, and be authorized so visitors can like and interact with the site
- Capture user clicks using AWS Lambda and Amazon Kinesis Firehose to record and analyze on-site clicks and understand user behavior
2. AWS Free Tier and Important Notes
On the AWS homepage, it is easy to find the free tier for new users, and filters make it quick to locate the resources you need. Useful offerings include 750 hours per month of Amazon EC2, RDS databases, ElastiCache, and limited DynamoDB usage. One thing to keep in mind is that when creating resources, you must make sure the selected features are actually covered by the free tier. In addition, when using those resources, pay close attention to performance limits and requirements so that you do not accidentally end up with an enormous bill.
EC2: The free tier only includes the t2.micro instance type, so you need to choose that when launching an instance. Although the free tier claims to include 750 hours per month of Linux and Windows micro instances for one year, each month has at most 744 hours. In practice, this means you can only continuously run one free instance. If you want to switch between Linux and Windows, make sure to destroy the instance you are not using.
RDS: Although many resource types are available for trial, just like EC2, the free tier is limited to the db.t2.micro type. If you are only learning, then after creating one database type and experimenting with it, destroy it promptly before trying another one. Otherwise, you can easily exceed the time allowance, because as long as the resource exists, the clock keeps running.
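Cleanup itself is only a couple of CLI calls. Here is a minimal sketch (the instance id and database identifier are hypothetical placeholders; the first line turns `aws` into a dry run that only prints the command, so delete it to actually execute):

```shell
# Dry-run for illustration: this makes `aws` echo the call instead of executing it.
# Delete this line to run the commands for real with the actual CLI.
aws() { printf 'DRY RUN: aws %s\n' "$*"; }

# Terminate an EC2 instance you no longer need (replace the hypothetical id):
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0

# Delete a test RDS instance; skipping the final snapshot avoids snapshot storage costs:
aws rds delete-db-instance --db-instance-identifier my-test-db --skip-final-snapshot
```

Remember that a stopped RDS instance still counts against the free-tier clock while it exists, so deletion is the only way to truly stop it.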
S3: Static storage is highly efficient for hosting static assets, but the free tier limits you not only to 5 GB of storage, but also to 20,000 GET requests and 2,000 PUT requests per month. Therefore, when using static assets, hotlink protection is very important. Otherwise, if your resources become popular and are downloaded globally, you could very easily receive a shocking bill.
Most other resources are similar. The basic advice is that the free tier is not always as generous as it sounds. If you are using AWS only for learning, destroy unnecessary resources as soon as possible after finishing your experiments. That saves money for everyone. AWS officially recommends the following when using free-tier resources:
- Understand which services and resources are covered by the AWS Free Tier
- Use AWS Budgets to monitor free-tier usage
- Monitor costs in the Billing and Cost Management console
- Make sure your planned configuration falls within the free-tier offering
- Clean up test resources after use
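The AWS Budgets recommendation above can also be scripted. Below is a minimal sketch that registers a 1 USD monthly cost budget, so that any real spending becomes visible immediately (the 12-digit account id is a placeholder, and the dry-run line makes `aws` echo instead of execute; remove it to run against your account):

```shell
# Dry-run for illustration; delete this line to call the real CLI.
aws() { printf 'DRY RUN: aws %s\n' "$*"; }

# A minimal monthly cost budget of 1 USD:
cat > budget.json <<'EOF'
{
  "BudgetName": "free-tier-guard",
  "BudgetLimit": { "Amount": "1.0", "Unit": "USD" },
  "TimeUnit": "MONTHLY",
  "BudgetType": "COST"
}
EOF

# Register the budget (replace the placeholder account id with your own):
aws budgets create-budget --account-id 111111111111 --budget file://budget.json
```

In practice you would also attach a notification so the budget emails you, which can be done in the Budgets console after creation.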
3. Connecting Cloud9 to an EC2 Server
Most of the work in this tutorial is completed in the Amazon Cloud9 online IDE, and Cloud9 itself needs an EC2 server as its host. In other words, if you have already used up your free EC2 quota with another instance, then when deploying Cloud9 you will need to destroy the previous EC2 server and create a new one for Cloud9. Of course, Cloud9 can also be deployed onto an existing EC2 server by choosing the option to use an existing EC2 instance.
You need to fill in the existing EC2 server address and username, and Cloud9 will generate a new key pair. The public key from that pair must be added to the EC2 server by appending it to ~/.ssh/authorized_keys.
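That last step can be sketched as follows, run on the EC2 host itself (the `PUBKEY` value is a placeholder for the public key text that Cloud9 shows you during setup):

```shell
# Paste the public key text from the Cloud9 dialog in place of the placeholder:
PUBKEY='ssh-rsa AAAA...REPLACE_WITH_CLOUD9_PUBLIC_KEY'

mkdir -p ~/.ssh && chmod 700 ~/.ssh
printf '%s\n' "$PUBKEY" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys   # sshd ignores the file if its permissions are too open
```

The permission bits matter: OpenSSH silently refuses to use an authorized_keys file that is group- or world-writable.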
By default, the EC2 server does not have Node.js installed. I chose Amazon Linux as the distribution, which according to Amazon is optimized for AWS’s own servers and is relatively resource-efficient. Therefore, before proceeding, Node.js needs to be installed on the server.
Of course, the prerequisite for the later installation steps is that you can log in to the EC2 server normally via SSH. You must also configure the security group to allow SSH (port 22), HTTP (port 80), and HTTPS (port 443) connections. If you cannot even get into the server, there is no point discussing anything else.
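Opening those three ports can be scripted as well. A sketch assuming a hypothetical security-group id (the first line turns `aws` into a dry run that only prints the commands; remove it to actually apply the rules):

```shell
# Dry-run for illustration; delete this line to call the real CLI.
aws() { printf 'DRY RUN: aws %s\n' "$*"; }

# The group id below is a hypothetical placeholder; find yours in the EC2 console
# or via `aws ec2 describe-security-groups`.
SG_ID=sg-0123456789abcdef0
for port in 22 80 443; do
  aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port "$port" --cidr 0.0.0.0/0
done
```

For a learning setup, consider restricting the SSH rule to your own IP rather than 0.0.0.0/0.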
The commands for installing Node.js are as follows:
# First install nvm:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
# Then activate nvm:
. ~/.nvm/nvm.sh
# Use nvm to install the latest version of Node.js (or pass a specific version, e.g. nvm install 10)
nvm install node
# Test whether Node.js has been installed and is running correctly
node -e "console.log('Running Node.js ' + process.version)"
# The system should display something like this, which means Node.js is installed correctly
Running Node.js v4.4.5
After installing Node.js, Cloud9 can be deployed normally on the EC2 server. During deployment, a prompt will appear asking whether to install packages; enter y to continue. The process also includes instructions on how to remove Cloud9. Once installation is complete, you can access the created Cloud9 environment through the AWS console.
4. Building a Static Website and Hosting It on S3
4.1 Clone the Project from GitHub
As an example, this article directly retrieves AWS’s official sample static website via Git. In the bash command line at the bottom of Cloud9, enter the following commands to clone the source code from the Git repository. After cloning, use cd to enter the project directory, and you will be able to see the full project code in the Cloud9 project manager.
git clone -b python https://github.com/aws-samples/aws-modern-application-workshop.git
cd aws-modern-application-workshop/
4.2 Create a User with Administrative Permissions and Configure Credentials
Next, you need to create an S3 bucket. However, because no AWS credentials have been configured in the Cloud9 environment yet, any AWS CLI command will fail with an Unable to locate credentials error. To solve this, you first need to create a user with permission to manage S3 resources, which is done through AWS Identity and Access Management (IAM).
When creating access keys in IAM, AWS reminds you to follow best practice: create an IAM user with only the required permissions rather than generating keys for the root administrative account. Accordingly, you can create a user with permissions to manage S3 resources. Since this user is only for programmatic access, it does not need a console password—only an access key pair.
For each set of credentials you create, AWS gives you exactly one chance to download them. The downloaded CSV file contains the key pair (the Access Key ID and the Secret Access Key).
Add those credentials in Cloud9 so you can operate on the S3 bucket. The commands are as follows:
# View the existing credentials list; if nothing is configured yet, each value shows as None
aws configure list
# Start updating credentials; replace the sample values with your own
aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json
4.3 Deploy the Project to S3 and Publish It
Command:
# Create an S3 bucket. Be sure to follow AWS naming rules and use a globally unique name with no special characters.
# Replace REPLACE_ME_BUCKET_NAME with your own bucket name.
aws s3 mb s3://REPLACE_ME_BUCKET_NAME
After the bucket is created, some additional configuration is needed so AWS can generate a publicly accessible DNS website endpoint and map static assets to URL paths. Besides entering commands in the terminal, you will also need to edit a file directly in the Cloud9 editor.
Note that because each person’s environment variables may differ, the exact file path may not be identical. If you are unsure, you can run pwd in the terminal to get your current path and replace the path in the commands below accordingly. Also replace REPLACE_ME_BUCKET_NAME with your own bucket name.
Open the file module-1/aws-cli/website-bucket-policy.json by double-clicking it in the folder tree, and replace the bucket name in it with your own bucket name. This updates the permissions so the files can be publicly accessed. Otherwise, if only you can access the site, it defeats the purpose of publishing it.
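For reference, a public-read policy for a website bucket has roughly the following shape (the file in the repository is similar; it is written here via a heredoc so you can compare it with what you edit in Cloud9):

```shell
# A minimal public-read bucket policy: anyone may GET objects in the bucket.
# REPLACE_ME_BUCKET_NAME must match your bucket name exactly.
cat > website-bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::REPLACE_ME_BUCKET_NAME/*"
    }
  ]
}
EOF
```

The trailing `/*` in the Resource ARN is what scopes the statement to the objects inside the bucket rather than the bucket itself.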
# Use the AWS CLI to configure the website directory
aws s3 website s3://REPLACE_ME_BUCKET_NAME --index-document index.html
# Use the AWS CLI API to update the S3 bucket policy with a JSON file
aws s3api put-bucket-policy --bucket REPLACE_ME_BUCKET_NAME --policy file://~/environment/aws-modern-application-workshop/module-1/aws-cli/website-bucket-policy.json
# Copy the website into the S3 bucket
# To save resources, this article only uploads the index.html file.
# Images and other assets used by the site are loaded directly from the sample site,
# which avoids generating too many S3 storage and request operations.
aws s3 cp ~/environment/aws-modern-application-workshop/module-1/web/index.html s3://REPLACE_ME_BUCKET_NAME/index.html
At this point, the first static website has been published. Depending on the default AWS region, the generated website URL will be slightly different. In a real project, if you have your own domain name, you can point it to the corresponding address via CNAME to make the site accessible from your own domain.
# For buckets in regions such as us-west-2, the endpoint uses a hyphen before the region
http://REPLACE_ME_BUCKET_NAME.s3-website-REPLACE_ME_YOUR_REGION.amazonaws.com
# For buckets in regions such as us-east-2, the endpoint uses a dot before the region
http://REPLACE_ME_BUCKET_NAME.s3-website.REPLACE_ME_YOUR_REGION.amazonaws.com
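If you do map your own domain, the CNAME record mentioned above can be managed with Route 53. A sketch with entirely hypothetical identifiers (hosted-zone id, domain name, and region are all placeholders; the dry-run line makes `aws` echo instead of execute):

```shell
# Dry-run for illustration; delete this line to call the real CLI.
aws() { printf 'DRY RUN: aws %s\n' "$*"; }

# Hypothetical domain and hosted-zone id; replace both with your own values.
cat > cname.json <<'EOF'
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "REPLACE_ME_BUCKET_NAME.s3-website-us-west-2.amazonaws.com" }
        ]
      }
    }
  ]
}
EOF
aws route53 change-resource-record-sets --hosted-zone-id Z0123456789ABCDEFGHIJ --change-batch file://cname.json
```

Note that when serving an S3 website endpoint through a CNAME, the bucket name must match the hostname (for example, a bucket literally named www.example.com), otherwise S3 cannot route the request.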
(to be continued...)


