Testing cloud-native applications can prove difficult, mostly because they depend on proprietary cloud services whose behaviour is hard to replicate in a test environment. Since my background is in cloud development with (mostly) Python, my main issue when building (and especially testing) Rust applications on AWS was the lack of a mocking library like moto.
Fortunately, moto can be run as a local server, and the AWS SDK for Rust allows you to specify the endpoint URL for each service that you are using. You can, therefore, create a setup that allows you to test your code against moto.
Why?
The best place to test your cloud-native application is, of course, in the cloud. Even if AWS themselves provided a way to run their services locally, you would probably still experience different behaviour in production. The best practices for setting up your AWS environments even suggest a sandbox account for each developer, where they can test and experiment. Additionally, AWS already provides guidelines on how to mock AWS SDK calls. So why would you use services like moto or localstack?
Although the mocked services are, obviously, not 1:1 copies of the real services, they are close enough to be used for testing purposes. This allows you to cover the functionality of structs or functions that access AWS - something that would be overlooked if you are using a library like mockall.
Moreover, I simply prefer testing against a server specifically designed to mock AWS, rather than having to create expected responses for each testing scenario. It feels more productive and makes me actually want to write tests.
Setting up moto
You can look into moto's documentation on running it in server mode for more information about the different ways to set it up. Since this is a Rust project, I'd rather not add pip as a project dependency, so I'm relying on the Docker image instead. This also makes it easy to run moto in the background and stop it after the tests have run. To run the Docker container in the background, you can use:
docker run --rm -p 5000:5000 -d --name moto motoserver/moto:latest
And you can then stop it by running:
docker stop moto
Creating some basic functionality
To see moto in action, let's look at a basic example that makes some simple S3 calls: creating a bucket and listing the existing buckets in a region. We can create a function for each and have them take an aws_sdk_s3::Client reference as a parameter; this way, we can inject either a real client or a mocked one.
Note: if you want to copy the code below, you should also run the following command to install the dependencies (or clone the GitHub repository):
cargo add aws-config aws-sdk-s3 tokio --features tokio/full
For starters, here are the two functions:
use aws_config::{BehaviorVersion, Region};
use aws_sdk_s3::types::{BucketLocationConstraint, CreateBucketConfiguration};

const REGION: &str = "eu-west-1";

async fn list_buckets(client: &aws_sdk_s3::Client) -> Vec<String> {
    client
        .list_buckets()
        .send()
        .await
        .unwrap()
        .buckets
        .unwrap()
        .into_iter()
        .map(|b| b.name.unwrap())
        .collect()
}
async fn create_bucket(client: &aws_sdk_s3::Client, bucket_name: &str) {
    let constraint = BucketLocationConstraint::from(REGION);
    let cfg = CreateBucketConfiguration::builder()
        .location_constraint(constraint)
        .build();
    let _ = client
        .create_bucket()
        .create_bucket_configuration(cfg)
        .bucket(bucket_name)
        .send()
        .await
        .unwrap();
}
Next, we need a way of creating the client. You can go into detail about configuring your real and mock clients differently, but the important thing is specifying the endpoint URL for the service. By default, this is set to something like s3.eu-west-1.amazonaws.com, and there is no need to mention it when working with the real client. For the mock client, we need to set it to the URL where moto is running (in this example, http://localhost:5000). The function for building the client can take an Option<&str> parameter to achieve this:
async fn build_client(endpoint_url: Option<&str>) -> aws_sdk_s3::Client {
    let config = aws_config::load_defaults(BehaviorVersion::latest()).await;
    let s3_config = aws_sdk_s3::config::Builder::from(&config);
    let s3_config = match endpoint_url {
        Some(endpoint_url) => s3_config.endpoint_url(endpoint_url),
        None => s3_config,
    };
    let s3_config = s3_config.region(Region::new(REGION)).build();
    aws_sdk_s3::Client::from_conf(s3_config)
}
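If you want to switch between the real service and moto without touching the code, one option is to read the endpoint from an environment variable. Here is a minimal sketch; the variable name MOTO_ENDPOINT is my own choice, not something the SDK looks for:

```rust
// Hypothetical helper: read the mock endpoint from the MOTO_ENDPOINT
// environment variable (the name is an assumption for this example).
// Returns None when the variable is unset or empty, so build_client
// falls back to the real AWS endpoints.
fn endpoint_from_env() -> Option<String> {
    std::env::var("MOTO_ENDPOINT").ok().filter(|s| !s.is_empty())
}
```

You can then call build_client(endpoint_from_env().as_deref()).await and point the same binary at moto simply by exporting MOTO_ENDPOINT=http://127.0.0.1:5000 before running it.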
You can then make some simple calls to these in the main function. If you have AWS configured on your machine, you can then run this with cargo run:
#[tokio::main]
async fn main() {
    let client = build_client(None).await;
    create_bucket(&client, "test-bucket").await;
    let list_result = list_buckets(&client).await;
    println!("Buckets: ");
    for bucket in list_result {
        println!("\t Bucket: {}", bucket);
    }
}
Testing using moto
Now that we have some basic functionality, we can write a simple test and run our code against the moto instance. The test will create a mock client and then perform some calls to the functions we have created:
#[cfg(test)]
mod tests {
    use super::*;

    #[tokio::test]
    async fn test_list_buckets() {
        let moto_client = build_client(Some("http://127.0.0.1:5000")).await;
        let result = list_buckets(&moto_client).await;
        assert!(result.is_empty());

        let expected_buckets = 5;
        for i in 0..expected_buckets {
            create_bucket(&moto_client, &format!("test-bucket-{}", i)).await;
        }
        let result = list_buckets(&moto_client).await;
        assert_eq!(result.len(), expected_buckets);
    }
}
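One practical detail when running several tests against the same moto server is that they all share one backend state, so buckets created by one test leak into the next. Moto's server mode exposes a reset endpoint, POST /moto-api/reset, that wipes all mocked state, and you can call it between tests. Below is a minimal sketch using only the standard library (raw HTTP over a TcpStream); in a real project you would probably use an HTTP client crate instead:

```rust
use std::io::{Read, Write};
use std::net::TcpStream;

// Build the raw HTTP request that clears all of moto's in-memory state.
// /moto-api/reset is part of moto's server-mode API.
fn reset_request(host: &str) -> String {
    format!(
        "POST /moto-api/reset HTTP/1.1\r\nHost: {}\r\nContent-Length: 0\r\nConnection: close\r\n\r\n",
        host
    )
}

// Send the reset to a running moto server, e.g. reset_moto("127.0.0.1:5000").
fn reset_moto(addr: &str) -> std::io::Result<()> {
    let mut stream = TcpStream::connect(addr)?;
    stream.write_all(reset_request(addr).as_bytes())?;
    // Read the response to completion; the server closes the connection.
    let mut response = String::new();
    stream.read_to_string(&mut response)?;
    Ok(())
}
```

Calling reset_moto at the start of each test gives every test a clean slate, at the cost of serializing tests that share the server.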
For me, this shows exactly why this approach is more powerful than simply mocking the struct/function that makes an AWS call: we are testing the code against a working piece of software, as opposed to some static data that we decide upon. The buckets we create are returned by the list_buckets call.
You can see the test results by running:
docker run --rm -p 5000:5000 -d --name moto motoserver/moto:latest
cargo test
Conclusion
I can understand that this kind of setup doesn't work for everyone. You can even see it as a drawback, as you are testing your application against a service that is close to, but definitely not the same as the service that you will be using in the cloud. However, I have personally found that this approach makes me more confident in the correctness and reliability of my code. It might do the same for you.