Well Architected Framework Sustainability Pillar and news from re:Invent – Club Cloud Stories #5

23 Dec, 2021

In this Christmas edition of Club Cloud Stories we talk about a few highlights from AWS re:Invent 2021:

  • Well Architected Framework Sustainability Pillar
  • re:Post
  • Graviton 3 processor
  • Amplify Studio
  • Rust SDK
  • CDK v2

    During re:Invent 2021 a lot of new features were announced. For the Christmas edition of Club Cloud Stories I wanted to collect some "ingredients" for a demo to give some of the newly announced features a spin.


    The new Sustainability pillar of the Well Architected Framework challenges us to use the least wasteful resources where possible. In the end, all cloud computation is done by some processor. So if we care about the environment, we had better start using the AWS-designed Graviton ARM processors:

  • Graviton 1: cheaper, but lower performance than Intel/AMD
  • Graviton 2: up to 40% faster than Intel/AMD, at a 20% lower cost
  • Graviton 3: up to 50% faster than Intel/AMD, at a 20% lower cost, consuming up to 60% less energy

    Ingredient 1: Graviton for Sustainability
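As a quick back-of-the-envelope check (a sketch using the rough figures above, not official AWS numbers): being 40–50% faster at a 20% lower price compounds into roughly half the cost per unit of work.

```rust
fn main() {
    // relative cost per unit of work = relative price / relative speed
    let graviton2 = 0.80_f64 / 1.40; // 20% cheaper, 40% faster
    let graviton3 = 0.80_f64 / 1.50; // 20% cheaper, 50% faster
    println!("Graviton 2: {:.0}% of the x86 cost per unit of work", graviton2 * 100.0);
    println!("Graviton 3: {:.0}% of the x86 cost per unit of work", graviton3 * 100.0);
}
```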
    Now, to use such a processor to the fullest, you have to use a compiled language; otherwise you’d waste clock cycles on interpretation. Since I want to use a Lambda for the demo, only Go and .NET Core seem to be available, but some research revealed that you can run Rust in a so-called ‘Custom Runtime’. Since the Rust SDK was also announced during re:Invent, we have our second ingredient:
    Ingredient 2: Rust SDK (in a Lambda)
    CDK v2 was also announced, so why not use that to set up the sample.
    Ingredient 3: CDK v2

    0. Prerequisites

    Quite a number of prerequisites, I’m afraid:

  • AWS account
  • AWS command line tool
  • nodejs
  • typescript – npm -g install typescript
  • CDK – npm install -g aws-cdk
  • Rust
  • Rust cross compiling to ARM – rustup target add aarch64-unknown-linux-gnu
  • Docker

    1. Create a Rust echo lambda ARM binary

    Just 4 files are needed for the echo Lambda to be ready to be compiled.
    Create Cargo.toml

    [package]
    name = "clubclouddemo"
    version = "0.1.0"
    edition = "2021"

    [dependencies]
    lambda_runtime = "0.4.1"
    tokio = "1.14.0"
    log = "0.4.14"
    simple_logger = "1.15.0"
    serde_json = "1.0.72"

    Create src/main.rs

    use lambda_runtime::{handler_fn, Error};
    use serde_json::Value;
    use simple_logger::SimpleLogger;

    #[tokio::main]
    async fn main() -> Result<(), Error> {
        SimpleLogger::new().init()?;
        let func = handler_fn(my_handler);
        lambda_runtime::run(func).await?;
        Ok(())
    }

    // echo: simply return the incoming event as the response
    pub(crate) async fn my_handler(event: Value, _ctx: lambda_runtime::Context) -> Result<Value, Error> {
        Ok(event)
    }

    Create Dockerfile

    # base image assumed; pin a specific Rust version if you want reproducible builds
    FROM rust:latest
    RUN rustup target add aarch64-unknown-linux-gnu
    WORKDIR /var/app
    # dummy main.rs so the dependency layer can be built and cached separately
    RUN mkdir -p src/ && echo "fn main() {}" > src/main.rs && uname -a
    COPY Cargo.toml Cargo.lock ./
    RUN cargo build --release --target aarch64-unknown-linux-gnu
    COPY src src
    RUN cargo build --release --target aarch64-unknown-linux-gnu
    CMD cat target/aarch64-unknown-linux-gnu/release/clubclouddemo


    LAMBDA_ARCH="linux/arm64" # set this to either linux/arm64 for ARM functions, or linux/amd64 for x86 functions.
    docker build . -t localhost/clubclouddemo --platform ${LAMBDA_ARCH}
    mkdir -p lambda
    docker run --platform ${LAMBDA_ARCH} --rm localhost/clubclouddemo > lambda/bootstrap

    (you need to run chmod +x lambda/bootstrap to make the binary executable)
    Elvin Luff helped me set up the Dockerfile that utilizes layer caching here. Cross-compiling to ARM can take a long time on every build if the dependencies are not cached. The Dockerfile used here only recompiles the dependencies when the toml file changes. This saves a lot of time!

    2. Deploy the binary to AWS using CDK v2

    Once you’ve set up an initial CDK TypeScript project using CDK v2, the following file should go in lib/yourname-stack.ts. A nice new thing in CDK v2 is that all standard AWS constructs are included in one library, which means you do not have to install a separate package for every single API you use. Apart from that, I have not seen significant changes to CDK.

    import { Stack, StackProps } from 'aws-cdk-lib';
    import { Construct } from 'constructs';
    import * as lambda from 'aws-cdk-lib/aws-lambda';

    export class BackendStack extends Stack {
      constructor(scope: Construct, id: string, props?: StackProps) {
        super(scope, id, props);

        const server = new lambda.Function(this, "ClubCloudDemo", {
          functionName: 'EchoLambda',
          runtime: lambda.Runtime.PROVIDED_AL2,
          code: lambda.Code.fromAsset("../binaries/clubclouddemo/lambda"),
          handler: 'not.required',
          architecture: lambda.Architecture.ARM_64,
        });
      }
    }

    The directory referred to, ../binaries/clubclouddemo/lambda, should contain the binary created in the previous step, and the binary should be named bootstrap.

    3. Echo echo echo echo

    In the AWS console I tested the performance of the created Lambda. It gave sub-millisecond response times; I have never seen response times like that when using Python or Node.js. For $1 you can call this lambda about 500 million times (excluding the cost of network traffic).
    Of course this doesn’t do anything useful so in the next section I will start using the Rust SDK to do some stuff with DynamoDB.

    4. Do something more useful with the Rust SDK

    The following 2 files are needed to create a new binary. The toml file contains a few extra dependencies. You can use the same Dockerfile as with the previous binary.
    Also, a DynamoDB table named Blog is needed, with PK as partition key (string), SK as sort key (string), and a Global Secondary Index on SK and SRT (also strings).
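To make the single-table layout concrete, here is a sketch of what items in the Blog table could look like. The key values (USER#1, POST#1, the SRT contents) are hypothetical and only illustrate the pattern.

```rust
use std::collections::BTreeMap;

// hypothetical sample user item: SK marks the item type, SRT holds the sort value
fn sample_user() -> BTreeMap<&'static str, &'static str> {
    BTreeMap::from([("PK", "USER#1"), ("SK", "USER"), ("SRT", "Jacco")])
}

// hypothetical sample post item, sorted by timestamp via SRT
fn sample_post() -> BTreeMap<&'static str, &'static str> {
    BTreeMap::from([("PK", "POST#1"), ("SK", "POST"), ("SRT", "2021-12-23T10:00")])
}

fn main() {
    // the GSI on (SK, SRT) lets you fetch all items of one type,
    // e.g. SK = "USER", sorted by SRT
    for item in [sample_user(), sample_post()] {
        println!("{} -> {}", item["SK"], item["SRT"]);
    }
}
```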

    [package]
    name = "clubclouddemo"
    version = "0.1.0"
    edition = "2021"

    [dependencies]
    lambda_runtime = "0.4.1"
    tokio = "1.14.0"
    log = "0.4.14"
    simple_logger = "1.15.0"
    serde = { version = "1.0.131", features = ["derive"] }
    serde_json = "1.0.72"
    aws-config = "0.2.0"
    aws-types = "0.2.0"
    aws-sdk-dynamodb = "0.2.0"

    The following file is quite lengthy, so I only included the interesting parts in this article (the full source code can be found in my git repository). Essentially, I use the Rust SDK to do two different queries on a DynamoDB table (index). The routines that iterate over the returned results support paging. Because the Rust SDK is so new, there are hardly any examples of this to be found with a Google search; maybe this is even a world-first implementation. Finally, I test running these queries both serially and in parallel (Rust supports threading and async/await via tokio).

    use std::collections::HashMap;
    use aws_sdk_dynamodb::{client::fluent_builders::Query, model::AttributeValue, Client};
    use serde::{Deserialize, Serialize};

    fn type_query(client: &Client, key: &String) -> Query {
        client
            .query()
            .table_name("Blog".to_string())
            .index_name("SK-SRT-index".to_string()) // assumed name of the GSI on SK/SRT
            .key_condition_expression("#key = :value".to_string())
            .expression_attribute_names("#key".to_string(), "SK".to_string())
            .expression_attribute_values(":value".to_string(), AttributeValue::S(key.to_string()))
    }

    fn author_query(client: &Client) -> Query {
        type_query(client, &"USER".to_string())
    }

    #[derive(Clone, Debug, Serialize, Deserialize)]
    struct Author {
        id: String,
        name: String,
    }

    async fn get_authors(client: &Client) -> HashMap<String, Author> {
        let mut last: Option<HashMap<String, AttributeValue>> = None;
        let mut result = HashMap::<String, Author>::new();
        loop {
            match author_query(&client).set_exclusive_start_key(last.clone()).send().await {
                Ok(resp) => {
                    if let Some(recs) = &resp.items {
                        for item in recs {
                            let auth = Author {
                                id: item["PK"].as_s().ok().unwrap().to_string(),
                                name: item["SRT"].as_s().ok().unwrap().to_string(),
                            };
                            result.insert(auth.id.clone(), auth);
                        }
                    }
                    // keep paging until no last evaluated key is returned
                    if let Some(lev) = resp.last_evaluated_key() {
                        last = Some(lev.to_owned());
                    } else {
                        break;
                    }
                }
                Err(e) => {
                    println!("error {}", e);
                    break;
                }
            }
        }
        result
    }

    The serial execution takes 12 milliseconds to complete, while the parallel version takes about 5 milliseconds (with 3 USER records and 10 POST records in the DynamoDB table). That means for $1 you can still call this lambda about 100 million times (excluding the cost of network traffic)!
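The speed-up from running independent queries in parallel can be sketched with plain OS threads (the actual code uses tokio tasks and async/await, but the principle is the same): two 50 ms stand-in queries take roughly 100 ms serially and roughly 50 ms in parallel.

```rust
use std::thread;
use std::time::{Duration, Instant};

// stand-in for a DynamoDB query that takes roughly 50 ms
fn query_stub() {
    thread::sleep(Duration::from_millis(50));
}

fn main() {
    // serial: one query after the other -> ~100 ms
    let t = Instant::now();
    query_stub();
    query_stub();
    let serial = t.elapsed();

    // parallel: both queries at once -> ~50 ms
    // (the real code uses tokio::join! instead of threads)
    let t = Instant::now();
    let h1 = thread::spawn(query_stub);
    let h2 = thread::spawn(query_stub);
    h1.join().unwrap();
    h2.join().unwrap();
    let parallel = t.elapsed();

    assert!(parallel < serial);
    println!("serial: {:?}, parallel: {:?}", serial, parallel);
}
```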
    If you look at the code in the GitHub repository you will see that the directory for the binary source is called ‘GraphQL-server’. I was a little optimistic about what I could accomplish in the time available.

    Previous episodes

    Cloud Club Stories #4
    Cloud Club Stories #3
    Cloud Club Stories #2
    Cloud Club Stories #1
    Image by PublicDomainPictures from Pixabay

Jacco Kulman
Jacco is a Cloud Consultant at Xebia. As an experienced development team lead he coded for the banking, hospitality, and media industries. He is a big fan of serverless architectures. In his free time he reads science fiction, contributes to open source projects and enjoys being a life-long learner.
