10 Best Practices to get AWS Serverless implementation right
Working on a new AWS serverless microservice project can be exciting and challenging at the same time. It is important to make informed architecture decisions and adopt best practices from the start to ensure the success of the project. These decisions include prioritizing security, choosing the right tooling, optimizing request/response payload size, addressing cold starts, handling exceptions, tuning performance, and automating deployment. I have been part of many projects in the past where small decisions made a big impact and contributed greatly to the project's success.
Here are a few key considerations to adopt while developing AWS serverless microservice applications.
1. IDE and Tooling
The choice of the right IDE and tools can greatly impact the development and testing experience of your application, since you need support for application creation, code quality, building, and testing on your local machine.
There are several Integrated Development Environments (IDEs) and tools that you can use to develop AWS serverless microservices. Some popular ones include:
a. Visual Studio Code: It is a popular and open-source code editor that provides a seamless development experience for AWS serverless applications. It comes with built-in support for AWS CloudFormation, AWS Serverless Application Model (SAM), and the AWS CLI.
b. AWS Cloud9: This is a cloud-based IDE offered by AWS. It provides a fully featured development environment that you can use to develop, run, and debug serverless applications. It supports a wide range of programming languages, including Node.js, Python, and Java.
c. AWS SAM Local: This is a command-line tool that you can use to test your serverless applications locally. It lets you run Lambda functions and API Gateway endpoints in a Lambda-like container on your local machine, making it easier to develop and debug your applications.
d. AWS Toolkit for Visual Studio: This is an extension for Visual Studio that makes it easier to develop, deploy, and debug serverless applications using Visual Studio. It provides a visual interface for working with AWS services and enables you to take advantage of the familiar Visual Studio development environment.
e. Serverless framework: The Serverless Framework is a popular, open-source framework for building, deploying, and managing serverless applications on various cloud platforms, including AWS. The Serverless Framework provides a unified experience for building serverless applications, and it abstracts away many of the complexities of cloud infrastructure management.
Although the choice of IDE or tool depends on client preferences and the requirements of the specific project, choosing the right IDE and tools becomes critical when working with a large team or a complex application, since they directly affect code quality and developer productivity.
2. Branching Strategy
A branching strategy is an important aspect of version control and helps you manage multiple versions of your code. Here are a few common branching strategies that you can use for AWS serverless microservices:
a. Gitflow: Gitflow is a popular branching strategy that involves creating several different branches for different stages of development, such as Dev, QA, and production. The main branches in Gitflow are the “develop” branch for ongoing development, the “master” branch for production-ready code, and “feature” branches for new features. This strategy is well suited for larger, more complex projects.
b. Trunk-based development: Trunk-based development is a simpler branching strategy that involves only two branches: the “trunk” or “main” branch, which contains the latest version of the code, and feature branches for individual features. The feature branches are merged into the main branch as soon as they are ready, ensuring that the main branch always contains the latest version of the code.
c. Release branching: Release branching is a strategy that involves creating a separate branch for each release of your code. This allows you to continue development on the main branch while still maintaining a stable version of the code for each release.
Regardless of the branching strategy you choose, it is important to have a well-defined process for merging code changes, testing, and deploying your code. This helps ensure that your code is always in a stable, production-ready state.
3. Address Cold Starts and Mitigation Strategies
Cold start is a phenomenon that occurs in AWS Lambda when a function is invoked for the first time or after a period of inactivity. During a cold start, AWS must allocate resources for the function, which can result in increased latency.
The AWS Lambda invocation model handles one request per function instance, so when a new request arrives and no warm instance is available, Lambda must start the initialization process from scratch. Cold starts can be an issue for latency-sensitive functions, as the increased latency can result in a poor user experience and slow API response times.
Cold start can impact the performance of serverless applications. By understanding the root cause of cold starts and taking steps to mitigate them, you can ensure that your AWS Lambda functions perform optimally and provide a great user experience. There are several techniques you can use to mitigate cold start:
a. Use provisioned concurrency: Provisioned concurrency is a feature in AWS Lambda that allows you to pre-warm function instances to reduce cold start latencies. You can set the desired number of function instances to be kept warm, and AWS will automatically manage the warm instances for you.
b. Memory sizing : One important consideration when configuring your Lambda functions is the memory allocation. AWS Lambda functions can be configured with a specific amount of memory, which determines the amount of CPU and other resources allocated to the function.
When you create a new Lambda function, you can choose the amount of memory to allocate to it, with options ranging from 128 MB to 10,240 MB (10 GB). The more memory you allocate, the more CPU and other resources are available to your function, so it runs faster and cold start duration is reduced.
c. Avoid heavy initialization logic: Avoid placing heavy initialization logic in your Lambda functions, as this can increase cold start latencies. Instead, consider using environment variables or external data stores for configuration information that your functions can load lazily when they start up.
d. Language Runtime and Framework: This is one of the important decisions an architect must make wisely at the beginning of the project. Runtimes such as Node.js or Python have better cold start performance than JVM-based stacks such as Java with Spring Boot.
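As a minimal sketch of option c, configuration can be read lazily from environment variables and cached at module scope, so the work is done at most once per container. The variable names here (CONFIG_TABLE, LOG_LEVEL) are illustrative assumptions, not part of any AWS API:

```python
import os

_config_cache = None  # populated lazily; survives across warm invocations


def get_config():
    """Load configuration lazily from environment variables on first use."""
    global _config_cache
    if _config_cache is None:
        _config_cache = {
            "table_name": os.environ.get("CONFIG_TABLE", "app-config"),
            "log_level": os.environ.get("LOG_LEVEL", "INFO"),
        }
    return _config_cache


def handler(event, context):
    config = get_config()  # cheap after the first (cold) invocation
    return {"statusCode": 200, "body": config["table_name"]}
```

The first invocation pays the lookup cost; every warm invocation reuses the cached dictionary.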
4. Dependency Management and Application Packaging
These are important aspects of developing and deploying AWS Lambda functions, as they help ensure that the necessary libraries and dependencies are available at runtime. AWS Lambda comes with several hard limits, such as a 50 MB zipped deployment package limit and a 250 MB limit on the unzipped package, including Lambda layers. These hard limits become unmanageable as the number of functions grows and new features are added to the microservice. We have seen application deployments fail many times because the application package grew too large. Here are a few best practices for managing dependencies in AWS Lambda:
a. Application Package Size: The simplest way to manage dependencies in AWS Lambda is to include only the dependencies each function requires, or to share common dependencies among a group of Lambda functions. This can be achieved by grouping Lambda functions by common functionality, such as a user, master data, or booking module, so that only the required dependencies are included, reducing the size of the overall deployable package.
b. Consider using layers: AWS Lambda Layers allow you to package common libraries and dependencies and share them across multiple functions. This can help reduce the size of your deployment packages and simplify the process of managing dependencies.
c. Use a container image: AWS Lambda also allows you to run your functions in a container. When using container images, you can include all the dependencies required by your function in the image. This approach can be especially useful if you have complex dependencies or if you need to use an operating system or language runtime that is not provided by AWS Lambda.
d. Third-party dependencies: Third-party dependencies are often needed when working with integrations or consuming SaaS services. The common ways to handle them are to use a REST-based integration, to deploy the libraries separately in a Lambda Layer as explained in option b, or to build a container image as explained in option c.
5. Authentication/Authorization
When it comes to building AWS serverless microservices, authentication and authorization are critical components to ensure the security and integrity of your application. Here are some approaches to consider:
a. Amazon Cognito: Amazon Cognito is a managed authentication and authorization service that lets you easily add user sign-up, sign-in, and access control to your web and mobile apps. You can use Cognito to authenticate users and authorize access to AWS resources and APIs.
b. Amazon API Gateway: Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. You can use API Gateway to control access to your APIs by configuring authentication and authorization mechanisms such as OAuth2, Amazon Cognito, and AWS Identity and Access Management (IAM).
c. AWS Lambda authorizers: AWS Lambda authorizers are custom authentication and authorization functions that you can use with Amazon API Gateway. You can write a Lambda function that authenticates the user and returns an IAM policy that authorizes access to your API.
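A token-based Lambda authorizer can be sketched as below. The token check is a placeholder assumption (real code would validate a JWT or call an identity provider), but the returned policy document follows the shape API Gateway expects from an authorizer:

```python
def build_policy(principal_id, effect, method_arn):
    """Return an IAM policy document in the shape API Gateway expects."""
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": method_arn,
            }],
        },
    }


def authorizer_handler(event, context):
    # Token validation is stubbed out here; a real authorizer would
    # verify a signed token against your identity provider.
    token = event.get("authorizationToken", "")
    effect = "Allow" if token == "valid-token" else "Deny"
    return build_policy("user|demo", effect, event["methodArn"])
```

API Gateway caches the returned policy for a configurable TTL, so the authorizer does not have to run on every request.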
d. AWS IAM: AWS Identity and Access Management (IAM) lets you manage access to AWS services and resources securely. You can use IAM to create and manage AWS users and groups and define permissions to access AWS resources.
It’s important to note that the best approach for authentication and authorization in your AWS serverless microservices may depend on your specific use case and application requirements.
6. Exception Handling and Timeouts
Exception handling and monitoring are important aspects of developing and deploying AWS Lambda microservices, as they help you identify and respond to errors and performance issues in a timely manner. Here are a few best practices for exception handling and monitoring AWS Lambda microservices:
a. Use global exception handling in your code: It's important to use global error handling to catch exceptions and prevent your functions from failing without responding gracefully to the caller. You can use try-catch blocks or other error-handling techniques to catch exceptions and return meaningful error messages to the caller.
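As a sketch of this pattern in Python, the handler wraps all work in a catch-all so the caller always gets a well-formed response. Here `process` is a hypothetical stand-in for your business logic:

```python
import json


def process(event):
    """Hypothetical business logic; raises KeyError on malformed input."""
    return {"orderId": event["orderId"]}


def handler(event, context):
    try:
        result = process(event)
        return {"statusCode": 200, "body": json.dumps(result)}
    except KeyError as exc:
        # Known, client-caused failure: report it as a 400
        return {"statusCode": 400,
                "body": json.dumps({"error": f"missing field: {exc}"})}
    except Exception:
        # Catch-all so the function never surfaces a raw stack trace to the caller
        return {"statusCode": 500,
                "body": json.dumps({"error": "internal error"})}
```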
b. Configure API Gateway HTTP status codes: AWS API Gateway supports customizing HTTP status codes that are returned to the client. For example, you can configure a 404-status code to be returned when a requested resource is not found. You can also define custom status codes to represent specific errors in your API.
c. HTTP Response Codes: When using API Gateway's Lambda proxy integration, the Lambda function itself must return an appropriate HTTP status code to the caller; in non-proxy mode, status codes are mapped through API Gateway integration responses. Either way, a dedicated function such as a response builder helps construct the response with the proper status code before sending it to the caller.
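A response builder for a Lambda proxy integration might look like this minimal sketch; the function name and default headers are illustrative assumptions:

```python
import json


def build_response(status_code, payload, headers=None):
    """Central place to shape every HTTP response the function returns."""
    base_headers = {"Content-Type": "application/json"}
    if headers:
        base_headers.update(headers)
    return {
        "statusCode": status_code,
        "headers": base_headers,
        "body": json.dumps(payload),  # proxy integration requires a string body
    }
```

Routing every return through one builder keeps status codes, headers, and body serialization consistent across all handlers in the service.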
d. API Gateway Timeout: The API Gateway timeout is configurable and can be lowered to enhance the user experience and save cost. However, some functionality, such as advanced search, may require a longer timeout, which is not possible due to the hard limit of 29 seconds. To handle this scenario, set proper error mappings at API Gateway so that the caller receives an appropriate response code, or move the long-running work to an asynchronous pattern.
e. Lambda Timeout: A Lambda function can run for up to 15 minutes to process business functionality. This is fine when the event source is something like EventBridge, but when Lambda sits behind API Gateway, it must send its response within the 29-second limit, or within whatever lower timeout value is set at API Gateway.
f. Payload size limits: Both API Gateway (10 MB) and Lambda (6 MB for synchronous invocations) come with hard limits on request and response payload sizes, and failing to handle them causes error responses. To prevent this, adopt design patterns such as pagination, S3 pre-signed URLs, or compression.
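The pagination pattern can be sketched as below; `fetch_items` is an assumed stand-in for your real data source, and the query-parameter names are illustrative:

```python
import json


def fetch_items():
    """Hypothetical data source returning a large result set."""
    return list(range(250))


def paginated_handler(event, context):
    """Return results one page at a time so responses stay under the 6 MB limit."""
    params = event.get("queryStringParameters") or {}
    page = int(params.get("page", "1"))
    page_size = int(params.get("pageSize", "100"))
    items = fetch_items()
    start = (page - 1) * page_size
    chunk = items[start:start + page_size]
    return {
        "statusCode": 200,
        "body": json.dumps({
            "items": chunk,
            "page": page,
            "hasMore": start + page_size < len(items),  # client uses this to fetch next page
        }),
    }
```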
7. Logging and Monitoring
a. CloudWatch for logging: Lambda function code can use the built-in CloudWatch integration to send all logging output to CloudWatch Logs. You can use CloudWatch Logs to monitor your functions for errors and performance issues, and set up CloudWatch Alarms to notify you when certain error conditions occur.
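Since anything a Lambda function writes to stdout lands in CloudWatch Logs, emitting one JSON object per line makes the logs easy to filter and query. This helper is a small illustrative sketch, not an AWS API:

```python
import json
import time


def log_event(level, message, **fields):
    """Emit one structured JSON log line.

    CloudWatch stores each printed line as a log event, and queries
    can then filter on the JSON fields (level, requestId, etc.).
    """
    entry = {"level": level, "message": message,
             "timestamp": int(time.time() * 1000)}
    entry.update(fields)
    line = json.dumps(entry)
    print(line)  # stdout from a Lambda function is shipped to CloudWatch Logs
    return line
```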
b. Enable CloudWatch Logs for API Gateway: API Gateway can log requests and responses to CloudWatch Logs. You can enable logging for your API by creating a CloudWatch Logs log group and specifying the log group ARN in the API Gateway settings. API Gateway logs can include information such as request and response payloads, IP addresses, user agents, and more.
c. Enable AWS X-Ray: AWS X-Ray is a service that provides end-to-end tracing for distributed applications. You can use X-Ray to trace requests through your API Gateway and Lambda function. X-Ray can help you identify errors and performance bottlenecks in your API.
8. Performance Tuning
Optimize application performance by reducing response time and improving resource utilization. This can be achieved by optimizing code, balancing memory allocation, tuning start-up time, and leveraging AWS services such as the Amazon API Gateway cache.
a. Startup Time: Ensure that startup time is optimized for faster processing and reduced latency. This can be achieved by minimizing the complexity of your dependencies, balancing memory allocation with a tool such as AWS Lambda Power Tuning, and choosing interpreted languages like Node.js or Python over languages like Java and C#.
b. Code Optimization: Store and reference external configurations and dependencies locally after the first execution. Avoid memory-intensive operations and iterating over large data sets.
c. Re-Use Lambda Container: Cache reusable resources. Limit the re-initialization of variables/objects on every invocation. Instead use static initialization/constructor, global/static variables, and singletons.
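The re-use pattern can be sketched as follows; `ExpensiveClient` is a hypothetical stand-in for a database connection or SDK client. Because module scope runs only once per cold start, the object is constructed once and then reused across warm invocations:

```python
import time


class ExpensiveClient:
    """Stand-in for something costly, e.g. a database connection or SDK client."""
    init_count = 0  # tracks how many times construction actually happened

    def __init__(self):
        ExpensiveClient.init_count += 1
        time.sleep(0.01)  # simulate slow construction


_client = ExpensiveClient()  # created once, outside the handler


def handler(event, context):
    # Reuses the module-level _client instead of constructing one per request
    return {"statusCode": 200, "inits": ExpensiveClient.init_count}
```

No matter how many times the warm handler runs, the construction cost is paid only during the cold start.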
d. API Gateway Caching: API Gateway caching is another way to return API responses faster for frequently accessed data. Caching responses, especially for an edge-optimized endpoint, can significantly reduce API response time and also cut the number of calls reaching the Lambda function.
9. Security
Security should be a top priority when building any application. Make sure to follow AWS security best practices and implement appropriate security measures such as encryption, access control, and data protection. Here are a few best practices to keep in mind:
a. Adopt IAM least privilege: AWS Identity and Access Management (IAM) is a central part of AWS security. When developing AWS Lambda microservices, it's important to use least-privilege IAM roles and policies that grant each function access only to the resources it needs, and to ensure that only authorized users have access to your functions and data.
b. Secure Storage: When storing sensitive application data, such as PII or the credentials used to connect to application services, it's important to use encryption to protect the data from unauthorized access. AWS provides several options for this, including AWS Key Management Service (KMS), Systems Manager Parameter Store, and Secrets Manager.
c. Use VPCs: Virtual Private Clouds (VPCs) can help you isolate your functions and data from the public internet and protect them from unauthorized access. You can configure your functions to run inside a VPC, and use security groups and network access control lists to control access to resources.
d. Monitor and log function activity: Monitoring and logging function activity can help you detect and respond to security incidents. AWS CloudTrail and AWS CloudWatch are two services that can help you monitor and log function activity and alert you if suspicious activity is detected.
e. Keep your functions up to date: It’s important to keep your functions up to date with the latest security patches and updates. AWS Lambda provides automatic security patching for the underlying infrastructure, but it’s important to also keep your code and dependencies up to date.
10. Deployment and Continuous Integration/Continuous Deployment (CI/CD)
A key aspect of AWS Lambda applications is the deployment process, and it should be automated as much as possible. You can use tools such as AWS CodeDeploy, AWS CodePipeline and Serverless Framework or AWS SAM to automate the deployment process and ensure that new changes are quickly and easily deployed to the production environment. Additionally, implementing a CI/CD pipeline can help ensure that your application is always up-to-date and that new changes are quickly and easily deployed.
Summary
By following these best practices, you can build an AWS serverless application that meets your complex business requirements and can handle the demands of a rapidly evolving landscape. As the adoption of serverless technology continues to grow, it is important to stay informed and up-to-date with the latest development trends and best practices.