Generic Cowboy handlers to work with Sumo DB
We, at Inaka, build our RESTful servers on top of cowboy. We use sumo_db to manage our persistence, and trails together with cowboy-swagger for documentation.
Soon enough, we realized that we were duplicating code everywhere. Not every endpoint in our APIs is just a CRUD for some entity, but there are definitely lots of them in every server. As an example, most of our servers provide something like the following list of endpoints:
- `GET /users` - Returns the list of users
- `POST /users` - Creates a new user
- `PUT /users/:id` or `PATCH /users/:id` - Updates a user
- `DELETE /users/:id` - Deletes a user
- `GET /users/:id` - Retrieves an individual user
To avoid (or at least reduce) such duplication, we started using mixer. That way, we can have a base_handler in each application where all the common handler logic lives.
Eventually, all applications shared that same base_handler, so we decided to abstract it even further, into its own app: sumo_rest.
The project's dependency tree is a great way to show the architecture behind it.
As you'll see below, Sumo Rest gives you base handlers that you can use on your Cowboy server to manage your Sumo DB entities easily. You just need to define your routes using Trails and provide proper metadata for each of them. In particular, you need to provide the same basic metadata Swagger requires. You can manually use the base handlers and call each of their functions when you need them, but you can also use Mixer to just bring their functions to your own handlers easily.
In a nutshell, Sumo Rest provides 2 Cowboy REST handlers:

- `sr_entities_handler`, which provides an implementation for:
  - `POST /entities` - to create a new entity
  - `GET /entities` - to retrieve the list of all entities
- `sr_single_entity_handler`, which provides an implementation for:
  - `GET /entities/:id` - to retrieve an entity
  - `PUT /entities/:id` - to update (or create) an entity
  - `PATCH /entities/:id` - to update an entity
  - `DELETE /entities/:id` - to delete an entity
(Of course, the URIs for those endpoints will not be exactly those; you have to define which entities you want to manage.)
To use them you first have to define your models, by implementing the behaviours `sumo_doc` (from Sumo DB) and `sumo_rest_doc`.
Then you have to create a module that implements the `trails_handler` behaviour (from Trails) and mix into that module all the functions that you need from the provided handlers.
You can find a very basic example of the usage of this app in the tests.
The app used for the tests (`sr_test`) makes no sense at all. Don't worry about that. It's just there to provide examples of usage (and of course to run the tests). It basically manages 2 totally independent entities:
- elements: members of an extremely naïve key/value store
- sessions: poorly-designed user sessions
Let me walk you through the process of creating such a simple app.
In the `sr_test.app` file you'll find the usual stuff. The only particular pieces are:
- The list of `applications`, which includes `cowboy`, `katana`, `cowboy_swagger` and `sumo_db`.
- The list of `start_phases`. This is not a requirement, but we've found this is a nice way of getting Sumo DB up and running before Cowboy starts listening:

```erlang
{ start_phases
, [ {create_schema, []}
  , {start_cowboy_listeners, []}
  ]
}
```
In `test.config` we added the configuration required for the different apps to work. For `cowboy_swagger` we just defined the minimum required properties:

```erlang
, { cowboy_swagger
  , [ { global_spec
      , #{ swagger => "2.0"
         , info => #{title => "SumoRest Test API"}
         , basePath => ""
         }
      }
    ]
  }
```
We've chosen Mnesia as our backend, so we just enabled debug on it (not a requirement, but a nice thing to have in development environments):

```erlang
, { mnesia
  , [{debug, true}]
  }
```
Sumo DB's Mnesia backend/store is really easy to set up. We will just have 2 models: elements and sessions. We will store them both on Mnesia:
```erlang
, { sumo_db
  , [ {wpool_opts, [{overrun_warning, 100}]}
    , {log_queries, true}
    , {query_timeout, 30000}
    , {storage_backends, []}
    , {stores, [{sr_store_mnesia, sumo_store_mnesia, [{workers, 10}]}]}
    , { docs
      , [ {elements, sr_store_mnesia, #{module => sr_elements}}
        , {sessions, sr_store_mnesia, #{module => sr_sessions}}
        ]
      }
    , {events, []}
    ]
  }
```
Finally we add some extremely naïve configuration to our own app. In our case, just a list of users we'll use for authentication purposes (:warning: Do NOT do this at home, kids :warning:):
```erlang
, { sr_test
  , [ {users, [{<<"user1">>, <<"pwd1">>}, {<<"user2">>, <<"pwd2">>}]}
    ]
  }
```
The next step is to come up with the main application module: `sr_test`. The interesting bits are all in the start phases. For Sumo DB to work, we just need to make sure we create the schema. We need to do a little trick to set up Mnesia though, because for `create_schema` to work properly, Mnesia has to be stopped:
```erlang
start_phase(create_schema, _StartType, []) ->
  %% Mnesia must be stopped for create_schema to work.
  _ = application:stop(mnesia),
  Node = node(),
  case mnesia:create_schema([Node]) of
    ok -> ok;
    {error, {Node, {already_exists, Node}}} -> ok
  end,
  {ok, _} = application:ensure_all_started(mnesia),
  sumo:create_schema();
```
Since we're using Trails, we can let each module define its own trails. And, since we're using a single host, we can use the fancy helper that comes with Trails:
```erlang
Handlers =
  [ sr_elements_handler
  , sr_single_element_handler
  , sr_sessions_handler
  , sr_single_session_handler
  , cowboy_swagger_handler
  ],
Routes = trails:trails(Handlers),
trails:store(Routes),
Dispatch = trails:single_host_compile(Routes),
```
It's crucial that we store the trails. Otherwise, Sumo Rest will not be able to find them later.
Then, we start our Cowboy server:
```erlang
TransOpts = [{port, 4891}],
ProtoOpts = %% cowboy_protocol:opts()
  [{compress, true}, {env, [{dispatch, Dispatch}]}],
case cowboy:start_http(sr_test_server, 1, TransOpts, ProtoOpts) of
  {ok, _} -> ok;
  {error, {already_started, _}} -> ok
end.
```
The next step is to define our models (i.e. the entities our system will manage). We use a module for each model and all of them implement the required behaviours.
Elements are simple key/value pairs.
```erlang
-type key() :: integer().
-type value() :: binary() | iodata().
-opaque element() ::
  #{ key => key()
   , value => value()
   , created_at => calendar:datetime()
   , updated_at => calendar:datetime()
   }.
```
`sumo_doc` requires us to add the schema, sleep and wakeup functions. Since we'll use maps for our internal representation (just like Sumo DB does), they're trivial:
```erlang
-spec sumo_schema() -> sumo:schema().
sumo_schema() ->
  sumo:new_schema(elements,
    [ sumo:new_field(key, string, [id, not_null])
    , sumo:new_field(value, string, [not_null])
    , sumo:new_field(created_at, datetime, [not_null])
    , sumo:new_field(updated_at, datetime, [not_null])
    ]).

-spec sumo_sleep(element()) -> sumo:doc().
sumo_sleep(Element) -> Element.

-spec sumo_wakeup(sumo:doc()) -> element().
sumo_wakeup(Element) -> Element.
```
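They're only this trivial because our internal representation is already a map, just like `sumo:doc()`. As a purely hypothetical sketch (not part of `sr_test`), if we had used a record instead, these two functions would do the actual conversion:

```erlang
%% Hypothetical: element() as a record instead of a map.
-record(element, {key, value, created_at, updated_at}).

%% Convert our internal record into the map Sumo DB stores.
sumo_sleep(#element{key = K, value = V, created_at = C, updated_at = U}) ->
  #{key => K, value => V, created_at => C, updated_at => U}.

%% Convert the stored map back into our internal record.
sumo_wakeup(Doc) ->
  #element{ key        = maps:get(key, Doc)
          , value      = maps:get(value, Doc)
          , created_at = maps:get(created_at, Doc)
          , updated_at = maps:get(updated_at, Doc)
          }.
```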
`sumo_rest_doc`, on the other hand, requires functions to convert to and from JSON (which should also validate user input):
```erlang
-spec to_json(element()) -> sumo_rest_doc:json().
to_json(Element) ->
  #{ key => maps:get(key, Element)
   , value => maps:get(value, Element)
   , created_at => sr_json:encode_date(maps:get(created_at, Element))
   , updated_at => sr_json:encode_date(maps:get(updated_at, Element))
   }.
```
In order to convert from JSON we have two options: `from_json` or `from_ctx`. The difference is that `from_json` accepts only a JSON body as a parameter, while `from_ctx` receives a `context` structure containing the entire request and the handler state besides the JSON body. We will see a `from_ctx` example in the sessions section.
```erlang
-spec from_json(sumo_rest_doc:json()) -> {ok, element()} | {error, iodata()}.
from_json(Json) ->
  Now = sr_json:encode_date(calendar:universal_time()),
  try
    { ok
    , #{ key => maps:get(<<"key">>, Json)
       , value => maps:get(<<"value">>, Json)
       , created_at =>
           sr_json:decode_date(maps:get(<<"created_at">>, Json, Now))
       , updated_at =>
           sr_json:decode_date(maps:get(<<"updated_at">>, Json, Now))
       }
    }
  catch
    _:{badkey, Key} ->
      {error, <<"missing field: ", Key/binary>>}
  end.
```
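For instance, this is roughly how `from_json` behaves in a shell (an illustration derived from the code above; the returned map also carries both timestamp fields):

```erlang
%% Both timestamps default to "now" when the JSON doesn't carry them.
{ok, #{key := 1, value := <<"v1">>}} =
  sr_elements:from_json(#{<<"key">> => 1, <<"value">> => <<"v1">>}),
%% A missing mandatory field is reported as an error.
{error, <<"missing field: value">>} =
  sr_elements:from_json(#{<<"key">> => 1}).
```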
We also need to provide an `update` function for `PUT` and `PATCH`:
```erlang
-spec update(element(), sumo_rest_doc:json()) ->
  {ok, element()} | {error, iodata()}.
update(Element, Json) ->
  try
    NewValue = maps:get(<<"value">>, Json),
    UpdatedElement =
      Element#{value := NewValue, updated_at := calendar:universal_time()},
    {ok, UpdatedElement}
  catch
    _:{badkey, Key} ->
      {error, <<"missing field: ", Key/binary>>}
  end.
```
For Sumo Rest to provide URLs to the callers, we need to specify the location URL:
```erlang
-spec location(element(), sumo_rest_doc:path()) -> binary().
location(Element, Path) -> iolist_to_binary([Path, "/", key(Element)]).
```
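To illustrate the construction (assuming `key(Element)` renders the key as a binary, e.g. `<<"42">>`):

```erlang
%% The location is just the collection path plus the entity's key.
<<"/elements/42">> = iolist_to_binary([<<"/elements">>, "/", <<"42">>]).
```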
To let Sumo Rest avoid duplicate keys (and return `422 Conflict` in that case), we provide the optional callback `duplication_conditions/1`:
```erlang
-spec duplication_conditions(element()) -> sumo_rest_doc:duplication_conditions().
duplication_conditions(Element) -> [{key, '==', key(Element)}].
```
If your model has an `id` type different from integer, string or binary, you have to implement `id_from_binding/1`. That function is needed in order to convert the `id` from `binary()` to your type. There is an example in the `sr_elements` module for our test coverage; it only converts to `integer()`, but that's the general idea behind that function.
```erlang
-spec id_from_binding(binary()) -> key().
id_from_binding(BinaryId) ->
  try binary_to_integer(BinaryId) of
    Id -> Id
  catch
    error:badarg -> -1
  end.
```
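For example:

```erlang
%% Numeric bindings convert cleanly; anything else maps to -1,
%% an id no element will ever have.
42 = sr_elements:id_from_binding(<<"42">>),
-1 = sr_elements:id_from_binding(<<"not-a-number">>).
```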
The rest of the functions in the module are just helpers, particularly useful for our tests.
Sessions are very similar to elements. One difference is that session ids (unlike element keys) are auto-generated by the Mnesia store; therefore they're initially `undefined`. We don't need to provide a `duplication_conditions/1` function in this case since we don't need to avoid duplicates.
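To make that concrete, here is a sketch of what the session type could look like (the field names here are assumptions; check `sr_sessions` for the real definition):

```erlang
%% Sketch only: the id stays undefined until the Mnesia store generates one.
-type id() :: binary() | undefined.
-opaque session() ::
  #{ id         => id()
   , user       => binary()
   , created_at => calendar:datetime()
   }.
```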
The most important difference from elements is that sessions don't implement the `from_json` callback. Remember, `from_json` only accepts the request body in JSON format; for sessions we also need the logged-in user in order to build our session. In this case we implement `from_ctx` instead of `from_json`, since it accepts the entire request and the handler's state. That information is encapsulated in a `context` structure.
This is what the `context` spec looks like. It is composed of an `sr_request:req()` and an `sr_state:state()` structure; the `sr_request` and `sr_state` modules are available in order to manipulate them.
```erlang
-type context() :: #{req := sr_request:req(), state := sr_state:state()}.

%% ... In sr_request.erl ...
-opaque req() ::
  #{ body => sr_json:json()
   , headers := [{binary(), iodata()}]
   , path := binary()
   , bindings := #{atom() => any()}
   }.

%% ... In sr_state.erl ...
-opaque state() ::
  #{ opts := sr_state:options()
   , id => binary()
   , entity => sumo:user_doc()
   , module := module()
   , user_opts := map()
   }.
```
And this is the `from_ctx` implementation:
```erlang
-spec from_ctx(sumo_rest_doc:context()) -> {ok, session()} | {error, iodata()}.
from_ctx(#{req := SrRequest, state := State}) ->
  Json = sr_request:body(SrRequest),
  %% The user stored in the state is a {Name, Password} pair;
  %% we only keep the name.
  {User, _} = sr_state:retrieve(user, State, undefined),
  case from_json_internal(Json) of
    {ok, Session} -> {ok, user(Session, User)};
    MissingField -> MissingField
  end.
```
Now, the juicy part: the Cowboy handlers. We have 4, two of them built on top of `sr_entities_handler` and the other two built on `sr_single_entity_handler`.
`sr_elements_handler` is built on `sr_entities_handler` and handles the path `"/elements"`. As you can see, the code is really simple.
First we mix in the functions from `sr_entities_handler`:
```erlang
-include_lib("mixer/include/mixer.hrl").
-mixin([{ sr_entities_handler
        , [ init/3
          , rest_init/2
          , allowed_methods/2
          , resource_exists/2
          , content_types_accepted/2
          , content_types_provided/2
          , handle_get/2
          , handle_post/2
          ]
        }]).
```
Then we only need to write the documentation for this module and provide the proper `Opts`, and that's all:
```erlang
-spec trails() -> trails:trails().
trails() ->
  RequestBody =
    #{ name => <<"request body">>
     , in => body
     , description => <<"request body (as json)">>
     , required => true
     },
  Metadata =
    #{ get =>
       #{ tags => ["elements"]
        , description => "Returns the list of elements"
        , produces => ["application/json"]
        }
     , post =>
       #{ tags => ["elements"]
        , description => "Creates a new element"
        , consumes => ["application/json"]
        , produces => ["application/json"]
        , parameters => [RequestBody]
        }
     },
  Path = "/elements",
  Opts = #{ path => Path
          , model => elements
          , verbose => true
          },
  [trails:trail(Path, ?MODULE, Opts, Metadata)].
```
The `Opts` here include the trails path (so it can be found later) and the model behind it.
And there you go, no more code!
`sr_single_element_handler` is analogous, but it's based on `sr_single_entity_handler`.
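Its mixin would therefore pull in the single-entity callbacks instead. A sketch (the exact function list is an assumption; check the module in the repo):

```erlang
%% Sketch: mixing in sr_single_entity_handler's callbacks.
-include_lib("mixer/include/mixer.hrl").
-mixin([{ sr_single_entity_handler
        , [ init/3
          , rest_init/2
          , allowed_methods/2
          , resource_exists/2
          , content_types_accepted/2
          , content_types_provided/2
          , handle_get/2
          , handle_put/2
          , handle_patch/2
          , delete_resource/2
          ]
        }]).
```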
`sr_sessions_handler` shows you what happens when you need to steer away from the default implementations in Sumo Rest. It's as easy as defining your own functions instead of mixing them in from the base handlers.
In this case we needed authentication, so we added an implementation for `is_authorized`:
```erlang
-spec is_authorized(cowboy_req:req(), state()) ->
  {boolean(), cowboy_req:req(), state()}.
is_authorized(Req, State) ->
  case get_authorization(Req) of
    {not_authenticated, Req1} ->
      {{false, auth_header()}, Req1, State};
    {User, Req1} ->
      Users = application:get_env(sr_test, users, []),
      case lists:member(User, Users) of
        true -> {true, Req1, State#{user => User}};
        false ->
          ct:pal("Invalid user ~p not in ~p", [User, Users]),
          {{false, auth_header()}, Req1, State}
      end
  end.
```
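The `get_authorization/1` and `auth_header/0` helpers aren't shown above. A minimal sketch of what they could look like, using Cowboy 1.x's basic-auth header parsing (an assumed implementation, not necessarily the one in `sr_test`):

```erlang
%% Sketch: extract basic-auth credentials from the request, if present.
get_authorization(Req) ->
  case cowboy_req:parse_header(<<"authorization">>, Req) of
    {ok, {<<"basic">>, {User, Password}}, Req1} -> {{User, Password}, Req1};
    {ok, _, Req1} -> {not_authenticated, Req1};
    {undefined, _, Req1} -> {not_authenticated, Req1}
  end.

%% Sketch: the WWW-Authenticate challenge sent back on 401 responses.
auth_header() -> <<"Basic realm=\"sr_test\"">>.
```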
Finally, we did something similar in `sr_single_session_handler`. We needed the same authentication mechanism, so we just mix it in:
```erlang
-mixin([{ sr_sessions_handler
        , [ is_authorized/2
          ]
        }]).
```
But we needed to prevent users from accessing other users' sessions, so we implemented `forbidden/2`:
```erlang
-spec forbidden(cowboy_req:req(), state()) ->
  {boolean(), cowboy_req:req(), state()}.
forbidden(Req, State) ->
  #{user := {User, _}, id := Id} = State,
  case sumo:fetch(sessions, Id) of
    notfound -> {false, Req, State};
    Session -> {User =/= sr_sessions:user(Session), Req, State}
  end.
```
And, since sessions cannot be created with `PUT` (because their keys are auto-generated), we also override `is_conflict/2`:
```erlang
-spec is_conflict(cowboy_req:req(), state()) ->
  {boolean(), cowboy_req:req(), state()}.
is_conflict(Req, State) ->
  {not maps:is_key(entity, State), Req, State}.
```

Since the base handler only puts an `entity` in the state when one already exists for the given id, a `PUT` to an unknown id (i.e. an attempted creation) is reported as a conflict.
For a more elaborate example of how to use this library, please check lsl.
If you find any bugs or have a problem while using this library, please open an issue in this repo (or a pull request :)).
And you can check all of our open-source projects at inaka.github.io.