Everything I know about tests in rust
I see testing questions pop up quite a bit on /r/rust. Rust’s testing is rather different from how people do testing in other languages, mostly due to the language being low-level and highly static. I’m going to cover a lot of things you can find in the official rust documentation — sometimes just having a different description of the same thing makes it easier to understand. I also have some experience writing web services in rust, so that’s a perspective whose specific problems I’ll have some solutions for. I’m also going to go beyond how to test, and talk about higher-level program structuring. I’ve set up a table of contents so you can skip to whatever part you’re interested in:
# This code block gets replaced with the TOC
The basics
Writing tests in rust is easy:
fn add(a: i32, b: i32) -> i32 {
a + b
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn adding_works() {
assert_eq!(add(2, 2), 4)
}
}
Even if you don’t know rust you can probably figure out what this is doing at a high level, but let’s break down some of the boilerplate and why it exists:
The module declaration
#[cfg(test)]
mod tests {
// ...
}
This is creating an inline module — it’s the same as having mod tests and a file named tests.rs (or tests/mod.rs). You don’t need to use a module at all — you can have your tests in the same module you define your code in — but it’s convenient to define a module like this for several reasons. You don’t have to name it tests, either, and you can have several test modules if you want.
The #[cfg(test)] is conditional compilation — it’s really just there to suppress warnings about unused code when building. Rust will warn you about any unused code, and nothing in the tests module is used by your runtime code. This module isn’t some special pattern rust recognizes, so to prevent rust from warning you, we simply don’t compile it unless we’re testing. I’m not going to go into the details, but #[cfg(some condition)] is how you do conditional compilation in rust. For now all we need to know is that #[cfg(test)] means compile only when running tests. You can use this on other code too — I’ll often define testing helper functions that are #[cfg(test)] (see the sketch after the next snippet). One last little thing I might as well mention: you can also write this as
mod tests {
#![cfg(test)] // <- notice the #! -- that means apply this attribute not to the next item, but to the parent: mod tests in this case
// ...
}
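For instance, a test-only helper might look like this (a made-up fixture function, just to illustrate):
#[cfg(test)]
fn fixture_numbers() -> Vec<i32> {
    // only compiled during `cargo test`, so it never triggers
    // dead-code warnings in a normal build
    vec![1, 2, 3]
}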
We also have use super::*; — this isn’t required, it’s pulling everything from the parent module into the tests module, so we can refer to just add instead of super::add. You can bring in individual items too (e.g. use super::add;), but I usually do use super::* to bring in everything. This is pretty much my only use of a glob import, other than a specifically designed prelude module for some library (I’ll occasionally do a use diesel::prelude::*). I believe you can access non-pub items this way as well, but I never do that — if there’s something you want to test, it should probably be in a module and export a public API (at least pub(crate)); reaching into the private innards of some data is a bad practice.
The test itself
OK, now on to the test proper. As a reminder, here’s what it looks like:
#[test]
fn adding_works() {
assert_eq!(add(2, 2), 4)
}
We’ve got the #[test] attribute that tells cargo test that this function should be run when testing. The function should take no arguments and return no value. If the function panics the test fails; otherwise it passes. Rust comes with a few assert! macros that will panic, depending on your defined conditions:
- assert!(boolean, optional message) will panic if the boolean is false, and will print the message if you gave one. Technically this is all you need.
- assert_eq!(param1, param2, optional message) will panic if param1 != param2 using PartialEq (so you can use it with e.g. floats, though that’s probably not what you want). It’ll print out both values, but won’t do any fancy diffing. I’ve started using the pretty_assertions crate to get better error messages.
- assert_ne!(param1, param2, optional message) works the same as assert_eq!, but panics if the two params are equal.
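Here’s a quick illustration of all three (my own example):
#[test]
fn assert_flavors() {
    let v = vec![1, 2, 3];
    assert!(v.contains(&2), "expected {:?} to contain 2", v);
    assert_eq!(v.len(), 3, "length should be 3");
    assert_ne!(v[0], v[1]);
}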
Failing the test on panic is good because it means you can use .unwrap() as an assertion. A test like
let unit_under_test = create();
unit_under_test.do_stuff().unwrap();
is a totally valid test. In fact, tests are the only time I wish .unwrap() were a little more convenient. They’re also the only time I wish Strings were a bit more convenient to create.
You can also have a test that should panic. Just add a #[should_panic] attribute to it.
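For example (the expected string only has to be a substring of the panic message):
#[test]
#[should_panic(expected = "out of bounds")]
fn indexing_past_the_end_panics() {
    let v = vec![1, 2, 3];
    let _ = v[10]; // index out of bounds -> panic -> test passes
}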
Integration tests
Compilation of integration tests
So far we’ve just been dealing with what rust calls unit tests (though they don’t need to be true unit tests). These are all compiled into one big crate together with the non-test code:
Unit Test binary crate
┌────────────────┐
│ │
│ Non-test code │
│ │
├────────────────┤
│ │
│ Tests │
│ │
└────────────────┘
For integration tests, cargo compiles your crates not as tests (#[cfg(test)] code will not be compiled in), and each file in the tests/ folder is compiled as a separate binary test crate. It’s common to share some code between the integration test files with a common.rs file that gets included in each test binary via mod common;.
┌────────────┐
│ │
│ tests/a.rs │ link
│ ├────────────────────┐
├────────────┤ │
│ common.rs │ │
└────────────┘ │
│
▼
┌────────────┐ ┌───────────┐
│ │ │ │
│ tests/b.rs │ link │ Production│
│ ├──────────────►│ code │
├────────────┤ │ │
│ common.rs │ └───────────┘
└────────────┘ ▲
│
┌────────────┐ │
│ │ link │
│ tests/c.rs ├────────────────────┘
│ │
├────────────┤
│ common.rs │
└────────────┘
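Concretely, the sharing looks something like this (test_db_url is a hypothetical helper):
// tests/common.rs
pub fn test_db_url() -> String {
    std::env::var("TEST_DATABASE_URL").unwrap()
}

// tests/a.rs
mod common; // compiles tests/common.rs into this test crate

#[test]
fn uses_the_shared_helper() {
    let url = common::test_db_url();
    assert!(url.starts_with("postgres://"));
}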
Compiling this many crates can be slow if you have a lot of tests. For a small, single project it probably doesn’t matter too much, but when it gets big, there are a few things you can do to speed up compilation:
- Any generic code exposed via your library crate will be instantiated once per type (which is normal) and once per test crate (which you might not be aware of), so anything that reduces the amount of code instantiated per generic usage helps. For example, you can have a non-generic function do most of the heavy lifting, with a generic wrapper function that just does the type conversions. This way most of your code is generated once for the lib crate, and only the small wrapper is generated for each test crate (see the sketch after this list).
- Keep your common module small, since it goes into each test crate.
- As a last resort, have just a single tests crate: one file directly under tests/, with all the other files in a subdirectory, included in that test crate via mod.
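Here’s a minimal sketch of that wrapper pattern, using a made-up load_config function:
use std::path::Path;

pub struct Config { pub contents: String }

// The generic wrapper is instantiated once per caller type (and once per
// test crate), but it's tiny -- it only does the type conversion.
pub fn load_config<P: AsRef<Path>>(path: P) -> Config {
    load_config_inner(path.as_ref())
}

// The non-generic worker is code-generated just once, in the library crate.
fn load_config_inner(path: &Path) -> Config {
    Config { contents: std::fs::read_to_string(path).unwrap_or_default() }
}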
How to integration test
In these integration tests you can either
- use your library crate just as your library consumers would
- run your binary crates.
Option 1 is almost always preferable — calling library functions is way faster and easier than starting a new process for every test. In fact, I almost never test a binary directly. I’ll define all functionality in a library crate, and the binary will just pass its arguments into that library crate.
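That keeps the binary thin enough that there’s little left to test — something like this sketch, where run is a hypothetical library entry point:
// src/main.rs -- just a shim over the library
fn main() {
    let args: Vec<String> = std::env::args().collect();
    std::process::exit(mylib::run(&args));
}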
An example
I’ve written a few actix-web servers, and a pattern I’ve settled on for doing integration tests is to have my lib crate define a function that sets up all of the routes via configure. This way I can have minimal boilerplate in my binary crate for the server, and my tests can pass in parameters for e.g. the database. You can also return an http server instead, but I’ve found getting the types right for that to be tricky and somewhat fragile, so I use configure, which has no return value, so you don’t have to worry about the types :-).
// in lib.rs
pub struct StaticConfig {
    pub database_url: String
}
pub fn config_api(cfg: &mut web::ServiceConfig, static_config: StaticConfig) {
let db_pool = get_pool_from_url(static_config.database_url);
cfg.service(
web::scope("/")
.data(db_pool)
.service(
web::scope("/users")
.route("", web::get().to(users_index))
.route("", web::post().to(users_create))
.route("/{user_id}", web::get().to(users_get_by_id))
.route("/{user_id}", web::get().to(users_get_by_id))
)
)
}
// in main.rs
#[actix_rt::main]
async fn main() {
HttpServer::new(move || {
App::new()
.wrap(Logger::default())
.configure(|cfg| {
let database_url = std::env::var("DATABASE_URL").expect("DATABASE_URL");
config_api(cfg, StaticConfig {
database_url
})
})
}).bind("0.0.0.0:8000")
.unwrap().run().await.unwrap()
}
// in tests/users_test.rs
mod common; // pulls in tests/common.rs
#[actix_rt::test]
async fn test_can_create_user() {
    let db = common::test_db();
    let mut app = test::init_service(
        App::new().configure(|cfg|
            config_api(cfg, StaticConfig {
                database_url: db.database_url.clone()
            }))
    ).await;
let req = test::TestRequest::post()
.uri("/users")
.set_json(&json!({
"username": "MaxPolun"
}))
.to_request();
let user = test::read_response_json::<User>(&mut app, req).await;
assert_eq!(user.username, "MaxPolun");
}
It might be possible to have a setup like this using the #[get(route)]/#[post(route)] macros, but I haven’t tried. The explicit registration adds slightly annoying boilerplate, but I’d rather have the boilerplate and be able to set up integration tests the way I want. I haven’t looked at every rust web framework, but you should be able to do an equivalent setup in most of them, and something similar for CLIs or GUIs.
Doctests
I’ll keep the section on doctests short. The basic idea is that any code example in your documentation is compiled and executed as a test, to make sure the examples stay valid. It’s a good idea, but nontrivial doctests are annoying and awkward to write. They serve their purpose of making sure examples keep working, but I don’t think they’re useful as a testing strategy, except for the smallest libraries.
Here’s an example:
pub struct Example {}
impl Example {
    /// Make a new Example
    ///```
    ///assert_eq!(Example::new(), Example {})
    ///```
    pub fn new() -> Self { Self {} }
    /// Here's when they get annoying
    ///```
    /// # #[whatever_runtime::async_main]
    /// # async fn main() -> Result<(), Box<dyn Error>> {
    /// # let example = Example::new();
    /// assert_eq!(example.more_complex().await?, 5);
    /// # Ok(())
    /// # }
    ///```
    pub async fn more_complex(&self) -> Result<usize, SomeError> { todo!() }
}
Notice the triple layer of quoting (///, ```, and #) in the more complex example. It’s not a problem per se, but it adds too much overhead to really be useful for tests.
Consider doctests as a way to ensure your docs have working examples, not as part of your tests.
Organizing tests
Setup and teardown
One thing rust tests do not have is setup and teardown hooks. You see these in most testing frameworks, but there’s no feature for this in rust. There are tools that will add this feature to rust tests, but you don’t really need it — all you need is regular rust functions, structs, and traits.
Setup is the easy part. Just define a function:
fn setup() {}
#[test]
fn test_with_setup() {
setup();
assert!(true)
}
You can write some reusable setup helpers. They’re just regular functions, so you can pass in params, return values, etc. Pattern matching is helpful here. I’ll often have a module named test_util for test functions that are used all over my project (e.g. getting a test db connection).
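The wiring for such a module might look like this (test_db_url is a hypothetical helper):
// in lib.rs
#[cfg(test)]
pub mod test_util;

// in src/test_util.rs
pub fn test_db_url() -> String {
    // fall back to a local scratch database when the env var isn't set
    std::env::var("TEST_DATABASE_URL")
        .unwrap_or_else(|_| "postgres://localhost/myapp_test".to_string())
}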
You can’t just use functions for teardown, though: if the test fails, the panic means your teardown code never runs. Most of the time this is actually fine — since you’re not mutating shared variables in your setup, and you’re generally making new values for each test, teardown is something you don’t usually need. Sometimes, however, you need to manage an external resource (e.g. a database, a tempfile), and you don’t want to use up that resource across test runs or accidentally couple your tests together. The solution is standard rust idiom again, but here it’s RAII: a struct with a Drop impl.
struct TestDb {
database_url: String
}
impl Drop for TestDb {
fn drop(&mut self) {
Database::connect(&self.database_url).truncate_all_tables() // or whatever your db cleaning method is
}
}
fn get_db() -> TestDb {
    TestDb { database_url: get_the_url_somehow() }
}
#[test]
fn integration_test_using_db() {
let db = get_db();
let server = start_server(&db.database_url);
request_to_server().unwrap()
// database will be truncated here, making it ready for the next test
}
Drop will be called even if there’s a panic. If you panic during the unwind from another panic, the process aborts and cleanup is skipped, but that’s generally a possibility in testing frameworks — if you abort early, cleanup won’t run. You should still try to avoid this if you reasonably can — it’s always better to not need a teardown step. For unit tests hitting a database, you can often just create a new connection and start a test transaction instead; however, this doesn’t work when you’re writing an integration test that might make multiple requests.
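With diesel (which I mention using later), the test-transaction approach looks roughly like this:
use diesel::prelude::*;
use diesel::PgConnection;

fn test_connection(database_url: &str) -> PgConnection {
    let conn = PgConnection::establish(database_url).unwrap();
    // everything done on this connection is rolled back when it's dropped,
    // so there's no teardown step at all
    conn.begin_test_transaction().unwrap();
    conn
}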
Dealing with big test modules
When you’ve got a large module and a lot of tests, it can be pretty annoying to switch back and forth between the top and the bottom of a file. IDE features can help, but I still find that it can get overwhelming. There are several options for splitting it up:
- Multiple test modules
Here you have multiple things you are testing in the parent module, and you put a test module after each one. Even if you just have one struct in the file, if it has impls for multiple traits you can split it up this way, which keeps your tests and non-test code relatively close:
struct User {
username: String
}
impl User {
fn new() -> Self {...}
}
#[cfg(test)]
mod tests_inherent {
use super::*;
#[test]
fn new_returns_a_valid_user() {
...
}
}
impl Display for User {
...
}
#[cfg(test)]
mod tests_display {
use super::*;
#[test]
fn display_stringifies_a_user_just_right() {
...
}
}
This works, but it doesn’t scale up much better than just putting one big mod tests {} at the bottom of the file. I only use it when there’s a bunch of traits I’m implementing for a single struct, each trait is mostly independent, and each trait has a non-trivial implementation.
- Multiple non-test modules
You can also split your code up into multiple modules on the production-code side of things, and have a mod tests {} for each one of those. This works best when the things you’re testing are logically, but not physically, related (e.g. each route handler for the same resource makes more sense than multiple methods on a single struct).
If you have a struct with several methods, and you’d like to have separate files to test each method, you can have multiple impl blocks like so:
blocks like so:
mod method_a;
mod method_b;
struct User {
username: String
}
// in method_a.rs
impl User {
pub fn a(&self) -> u32 {
12345
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_a() {
assert_eq!(User::new().a(), 12345)
}
}
// in method_b.rs
impl User {
pub fn b(&mut self) {
self.username += "method b"
}
}
#[cfg(test)]
mod tests {
    use super::*;
    #[test]
    fn test_b() {
        let mut u = User::new();
        u.b();
        assert_eq!(u.username, "testusermethod b");
    }
}
If you are using a trait as a testing seam, you may end up with traits that have too many methods, most of which you ignore in most tests. In this case, you can split up your trait by method, and further into separate files:
mod method_a;
mod method_b;
struct UserImpl {
username: String
}
// in method_a.rs
pub trait MethodA {
    fn a(&self) -> u32;
}
impl MethodA for UserImpl {
fn a(&self) -> u32 {
12345
}
}
// in method_b.rs
pub trait MethodB {
    fn b(&mut self);
}
impl MethodB for UserImpl {
fn b(&mut self) {
self.username += "method b"
}
}
This adds a lot of boilerplate though — a trait definition for each method, and each consumer of these traits must say which methods it wants to use. Something to keep in mind if you really need it, but not needed most of the time.
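As a sketch of what a consumer looks like under this scheme (do_a_thing is made up):
// a consumer declares exactly the capabilities it needs
fn do_a_thing<R: MethodA + MethodB>(repo: &mut R) -> u32 {
    repo.b();
    repo.a()
}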
- Separate test file(s)
Nothing forces you to have your test modules inline; you can define a test module in a separate file via
#[cfg(test)]
mod tests;
I like the default of tests in the same file as the production code, but having separate files may sometimes be simpler. The flexibility of rust’s test system is one of the nice things about it — you can use all of the tools you have for structuring normal rust code in your tests. I’ve never done this, but I could see it if keeping everything in one implementation file makes sense (shared private functions, etc.) but there are several disparate tests that are mostly unconnected. I could also see doing it for property-based testing (aka quickcheck).
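The layout for that looks something like this (parse is just a placeholder function):
// src/parser.rs
pub fn parse(input: &str) -> Vec<String> {
    input.split(',').map(str::to_string).collect()
}

#[cfg(test)]
mod tests; // lives in src/parser/tests.rs

// src/parser/tests.rs
use super::*;

#[test]
fn parse_splits_on_commas() {
    assert_eq!(parse("a,b"), vec!["a", "b"]);
}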
Writing testable code
The goals of writing testable code are to write your code such that your unit tests:
- Are isolated to the single unit under test
- Are fast
- Are as close to reality as possible without compromising on the first two items
Why are these the goals? So you can write many unit tests and get fast feedback when you change code.
- Isolation means you can quickly determine the cause of a failing test
- Fast tests mean you can run all of your tests often
- Reality (as much as possible) is to make sure your tests are useful for finding regressions.
Not all code (and not all projects) need to be testable in this sense. Sometimes it’s fine to have to test the whole system, but that is less true the larger your system is. If you’ve got a microservice with 2 routes, having to do all testing through the HTTP interface isn’t too much of a problem, but in a larger service with 8, 10, 12 routes, you’re going to start to hate firing up the test suite if everything goes through HTTP and hits the database. Better to have a minimal set of full-system integration tests, and do the detail testing of edge-cases in isolated unit tests. Of course the problem is that small systems often evolve into large systems without updating the test strategy.
Seams
In the book Working Effectively with Legacy Code, Michael Feathers defines a seam as a place where you can change the behavior of your code without editing it. This is important for writing tests because you want to have different behavior in tests than your production code in order to isolate your unit.
Parameter seams
The simplest seam is a parameter — you can send different parameters into a function or method, and get different results. This is why pure code is easy to test: you can pass in whatever parameters you like and assert based on the results.
fn add(a: i32, b: i32) -> i32 {
    a + b
}
#[test]
fn test_add() {
assert_eq!(add(2, 2), 4)
}
However:
- Sometimes your dependency is not a parameter, but is accessed globally (whether a global variable, or a global function)
- Just because you take a parameter doesn’t mean it can be swapped out.
So a plain parameter is fine for mostly-pure data: numbers, strings, simple enums, and Vecs/arrays/HashMaps/structs of such simple data with no methods, not nested too deeply (you can test deeply nested pure data, it just becomes unwieldy). One testing strategy is to move most of the business logic that needs careful testing so it acts on pure data like this, and mostly not unit test your impure code — just integration tests for the impure part. This works OK if your overall program mostly reads data at the start, processes it, and then returns output. However, many programs require a lot of back-and-forth interaction with state (either internal — things like caches or in-memory datastores — or external, like a filesystem, database, network, or UI), and for these types of programs, only testing your stateful interactions via integration tests starts to look like you’re not doing unit testing at all.
I’d recommend this pure-data approach mainly for things like compilers — each step in the process takes something as input and produces an output, mostly purely (sometimes multiple steps take and output the same format, sometimes they output a different format). In a case like that you can test all of your difficult logic with pure data, and just test that your I/O works with a much smaller, simpler set of integration tests.
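A tiny made-up example of such a pipeline step:
// A pure pipeline step: both input and output are plain data, so no
// seams or test doubles are needed.
#[derive(Debug, PartialEq)]
enum Token { Num(i64), Plus }

fn tokenize(src: &str) -> Vec<Token> {
    src.split_whitespace()
        .map(|word| match word {
            "+" => Token::Plus,
            n => Token::Num(n.parse().expect("expected a number")),
        })
        .collect()
}

#[test]
fn tokenize_splits_numbers_and_operators() {
    assert_eq!(tokenize("1 + 2"), vec![Token::Num(1), Token::Plus, Token::Num(2)]);
}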
Trait seams
So given we’re taking parameters, what’s the best way to swap out a production dependency for a test double? As usual with rust, it’s Traits.
Sometimes we can use traits that already exist. For example, let’s say you’ve got your adder from before, and you want to test that you can print what you’ve added. Your first code looks like:
fn add_and_print(a: i32, b: i32) {
println!("{}", add(a, b))
}
This works OK, but you can’t really test it easily. println! takes an implicit dependency on stdout, so let’s make that explicit:
fn add_and_print(mut stdout: Stdout, a: i32, b: i32) {
    write!(stdout, "{}\n", add(a, b)).unwrap()
}
Now our dependency is explicit, but we still can’t test it very well. How do we send a test double in place of stdout? Well, the write! macro works in terms of the Write trait, so we can use any type that impls Write:
fn add_and_print<Out: Write>(out: &mut Out, a: i32, b:i32) {
write!(out, "{}\n", add(a, b))
}
#[test]
fn test_add_and_print() {
let mut buf = String::new(); // String impls `Write`
add_and_print(&mut buf, 2, 2);
assert_eq!(buf, "4");
}
Now we can easily test this code doing IO by swapping out stdout for a different implementation — we have our seam. This is a compile-time trait seam, but we can do it at runtime too:
fn add_and_print(out: &mut dyn Write, a: i32, b:i32) {
write!(out, "{}\n", add(a, b))
}
#[test]
fn test_add_and_print() {
let mut buf = String::new();
add_and_print(&mut buf, 2, 2);
assert_eq!(buf, "4");
}
The code even looks the same, outside of the type signature.
For your own traits, you can write structs yourself to implement different scenarios you’re testing, or use a mocking library:
trait UserRepository {
    fn create(&self, username: String) -> Result<User, UserCreateError>;
    fn get_all(&self) -> Result<Vec<User>, GetUserError>;
}

// for use in cases where you're testing the success result:
struct FakeUserRepository {
    users: Rc<RefCell<Vec<User>>>
}

impl UserRepository for FakeUserRepository {
    fn create(&self, username: String) -> Result<User, UserCreateError> {
        let user = User::new(username);
        self.users.borrow_mut().push(user.clone());
        Ok(user)
    }
    fn get_all(&self) -> Result<Vec<User>, GetUserError> {
        Ok(self.users.borrow().clone())
    }
}
// when you're testing errors (this assumes your error types impl Clone)
struct ErrorUserRepository {
    create_err: UserCreateError,
    get_all_err: GetUserError,
}

impl UserRepository for ErrorUserRepository {
    fn create(&self, _username: String) -> Result<User, UserCreateError> {
        Err(self.create_err.clone())
    }
    fn get_all(&self) -> Result<Vec<User>, GetUserError> {
        Err(self.get_all_err.clone())
    }
}
// For mixed cases you'll need some sort of custom implementation that tracks calls
// or use a library -- here's an example using https://github.com/DavidDeSimone/mock_derive
#[test]
fn test_mixed() {
let mock = MockUserRepository::new();
let method = mock.method_get_all()
.first_call()
.set_result(Ok(vec![User::new(), User::new()]))
.second_call()
.set_result(Err(DatabaseError));
mock.set_get_all(method);
assert_eq!(do_something_with_users(mock), Err(DatabaseError))
}
Traits are the go-to seam for testing in rust — they are explicitly designed to let you swap out implementations. Generally in rust, compile-time polymorphism (generics/impl Trait) is preferred over runtime polymorphism (dyn Trait), though both have their place. There are a few things to keep in mind though:
- For now, traits can’t have async methods (or return impl Trait)
You can still return a Future though, and there’s the async_trait crate that will translate async methods in a trait to ones returning a Pin<Box<dyn Future + Send + 'async_trait>>. This mostly works like you want, but it has a bit of runtime cost. The cost is pretty reasonable for the type of things I typically use rust for, but it might be costly in performance-critical or highly constrained environments. If you want to avoid the overhead of async_trait you can always use an associated type like so:
trait MyAsyncTrait {
    type Future: Future<Output = i32>;
    fn async_method(&mut self, param: i32) -> Self::Future;
}
This adds some boilerplate, and it’s a bit of a PITA every time you need to implement it, but no runtime overhead. If a specific implementation (e.g. your test double) doesn’t care about the small performance cost you can even use async/await syntax with this:
struct Factorial;
impl MyAsyncTrait for Factorial {
    type Future = Pin<Box<dyn Future<Output = i32>>>;
    fn async_method(&mut self, param: i32) -> Self::Future {
        Box::pin(async move { // async block syntax; `move` captures param
            async_function(param).await
        })
    }
}
One thing to keep in mind for these sorts of traits, though, is that the implementor can do work before returning the future, e.g.
struct Factorial;
impl MyAsyncTrait for Factorial {
    type Future = Ready<i32>; // Ready is a future that immediately resolves to its value
    fn async_method(&mut self, param: i32) -> Self::Future {
        ready(factorial(param)) // <- computes the factorial before returning the future
    }
}
This is something to be aware of, but it’s mostly fine. I’ve only seen issues when interacting with tracing.
- You still need to wire up your real impl somehow
If you’re using traits as testing seams pervasively, you’ll be passing around a lot of formerly implicit dependencies as explicit dependencies, and all of your code will be generic. Getting this all set up correctly in production can result in a lot of boilerplate, and can sometimes be difficult to do correctly.
Dependency injection frameworks can help with this, though I haven’t used any in rust. For a typical web framework, you can set the concrete types at the point where you register a route handler, e.g.
.route("", web::get().to(users_index::<RealUserRepository>))
However, this can break down if the type needs to be constructed (if you can just call Default::default(), then this might be fine). Dependency injection is a huge topic and this post is long enough as is, so I’ll leave it there.
Conditional Compilation seam
We’ve seen conditional compilation before — #[cfg(test)]. We can swap in an alternate type for tests like so:
pub struct RealUserRepository {
    conn: DbConnection
}

pub struct FakeUserRepository {
    users: Rc<RefCell<Vec<User>>>
}

#[cfg(test)]
pub type UserRepository = FakeUserRepository;
#[cfg(not(test))]
pub type UserRepository = RealUserRepository;
Then you can write tests against RealUserRepository directly, but all of the code that depends on a UserRepository will always get the fake in unit tests (though notably not in integration tests).
This has some upsides and downsides. You don’t have to define a trait (or deal with the current limitations of traits), and it’s simple to wire up — just wire it up as normal. You can use functions instead of structs, and don’t have to change your code in any way. On the downside, you can now only have one test double for all of your tests, which means it has to be somewhat complex and full-featured so you can test all scenarios. There are libraries to help with this — faux is one, though I’ve never used it.
I think this method shines if you have a natural uniform interface in your code. What do I mean by uniform interface? One example would be sending messages to actix actors — you can send many types of messages via the addr.send(msg) interface, and get different results based on what message you sent. This means if you use actix actors to do a lot of the heavy lifting in your app, and you have a conditional-compilation seam for the actors, you can mock any communication with them using the same test double. Actix even provides this test double: Mocker. So you could have
#[cfg(not(test))]
type DbActor = RealDbActor;
#[cfg(test)]
type DbActor = Mocker<RealDbActor>;
Then you can write a test like
let mock_actor: Addr<DbActor> = Mocker::mock(Box::new(|msg, _ctx| {
let msg = msg.downcast_ref::<CreateUser>().unwrap();
let result: Result<User, UserCreateError> = Ok(User::new());
Box::new(Some(result))
})).start();
// create_user_route is the route handler and is the unit of the test
let response = create_user_route(web::Data::new(mock_actor), web::Json(UserCreateRequestBody {username: "test"})).await.unwrap();
assert_eq!(response.status(), http::StatusCode::Ok());
assert_eq!(parse_http_response::<User>(&response).username, "test");
How does this work? Internally, the Mocker uses the Any type to let you do dynamic type checking with the downcast family of methods. If your types don’t match up you’ll get a panic and a failed test. This works great for tests — it’s very flexible and lets you easily mock anything you send to an actor without messing with your types for non-test code. It’s a bit boilerplate-y, but that can be fixed with a small helper library. With a hypothetical helper like simple_mock_actor, the same test above would be:
// set up the actor, assert that the right message type was sent, and return the value you need for your test
let mock_actor = simple_mock_actor::<RealDbActor, CreateUser, _>(|_msg| { Ok(User::new()) });
let response = create_user_route(web::Data::new(mock_actor), web::Json(UserCreateRequestBody {username: "test".to_string()})).await.unwrap();
assert_eq!(response.status(), http::StatusCode::OK);
assert_eq!(parse_http_response::<User>(&response).username, "test");
What do I personally do?
I try to structure as much of my code as possible around pure data, so I don’t need mocking — I just have simple functions from input to output. However, web services are like 90% IO, so I do usually still have to mock quite a bit. I tend to favor actix actors plus conditional compilation to swap out the real actor for the Mocker. This lets my database interaction code be fairly simple: I usually use diesel, which is synchronous, and actix lets me make the database access async, as well as not having to worry about making my database layer itself mockable. My request handlers communicate with the mock actor for the database in tests. And actix isn’t only good for database access — you can use it for any service you want to mock out and/or make async from synchronous. I’m sure other actor systems work well too, and threads plus a channel can do the same thing, but since I usually use actix_web, actix is a nice choice.