How Do You Structure Your Code When Moving Your API from Express to Serverless Functions?

There are a lot of articles showing how to use serverless functions for a variety of purposes. Many of them cover how to get started, and they are very useful. But what do you do when you want to organize your functions a bit more, as you do for your Node.js Express APIs?

There's a lot to talk about on this topic, but in this post, I want to focus specifically on one way you can organize your code. Add a comment to let me know what other areas you are interested in, and I'll consider covering those in the future.

Here are some getting started resources that I recommend:

Why Should You Structure Your Code?

You can put all of your function logic in a single file. But do you want to do that? What about shared logic? Testing? Debugging? Readability? This is where having a pattern or structure can help. There are many ways to do this. Beyond the concerns I just mentioned, consistency is the primary additional quality I aim for.

Here is a pretty standard representation of a function app:

FunctionApp
 | - host.json
 | - myfirstfunction
 | | - function.json
 | | - index.js
 | | - ...
 | - mysecondfunction
 | | - function.json
 | | - index.js
 | | - ...
 | - sharedCode

Here is what my structure looks like for just the heroes API, based on the files referenced throughout this post:
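
FunctionApp
 | - host.json
 | - heroes-get
 | | - function.json
 | | - index.js
 | - shared
 | | - hero.service.js
 | | - index.js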

Your Entry Point

The entry point to your function is in a file called index.js in a folder with the same name as your function.

The function itself is pretty self-explanatory. When this function is called, the context is passed to it. The context has access to the request and response objects, which is super convenient. Then I call the asynchronous operation to get my data and set the response.

// heroes-get/index.js
const { getHeroes } = require('../shared/hero.service');

module.exports = async function(context) {
  context.log(
    'getHeroes: JavaScript HTTP trigger function processed a request.'
  );
  await getHeroes(context);
};

An alternative is to pass context.res and/or context.req into the service. Which approach you choose is up to you. I prefer to pass req and res in, as that style is more familiar to me. But passing context also allows access to other features, such as context.log. There is no right or wrong here; choose your adventure and be consistent.
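
For example, here is a minimal sketch of that alternative, assuming the service function is rewritten to accept the two objects directly:

// heroes-get/index.js (alternative sketch: hand the service req and res instead of context)
const { getHeroes } = require('../shared/hero.service');

module.exports = async function(context) {
  // The service now only sees the HTTP objects, not the whole context,
  // so it loses access to extras such as context.log.
  await getHeroes(context.req, context.res);
};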

Data Access Service

When you create your first Azure function, the "hello world" function usually returns a static string message. In most APIs, you're going to want to talk to a database or another web service to get or manipulate data before returning a response.
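
For reference, that generated "hello world" HTTP trigger looks something like this (a condensed version of the typical JavaScript template; the exact scaffolding varies by tooling version):

// hello-world/index.js (roughly what the template generates)
module.exports = async function(context, req) {
  // Read the name from the query string or the request body, if present
  const name = req.query.name || (req.body && req.body.name);
  context.res = name
    ? { body: 'Hello ' + name } // status defaults to 200
    : { status: 400, body: 'Please pass a name on the query string or in the request body' };
};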

In my case, I am getting a list of heroes. So I defer most of my data access logic to a Node.js module I named hero.service.js. Why do this? Simply put, organizing my code (in this case the data access service) keeps it DRY (don't repeat yourself), isolates the responsibility, and makes it easier to scale, extend, debug, and test.

The hero.service.js module begins by getting a reference to the container (the storage unit that contains my data for my database). Why abstract that? Good question ... I abstract it to a shared module so I can reuse that logic. I'll need to get containers of all types, and getting a container requires accessing the database with some database-specific connectivity APIs. We'll look closer at that in a moment.

The getHeroes service accepts the context and uses destructuring to pull the response object out into a variable res. Then it tries to get the heroes, and when successful, it adds them to the response. When it fails, it responds with an error.

// shared/hero.service.js
const { heroes: container } = require('./index').containers;

async function getHeroes(context) {
  // Pull the Express-style response object off the context
  const { res } = context;
  try {
    // readAll returns a QueryIterator; toArray resolves to { result }
    const { result: heroes } = await container.items.readAll().toArray();
    res.status(200).json(heroes);
  } catch (error) {
    res.status(500).send(error);
  }
}

module.exports = { getHeroes };

Shared Database Module

The data access service module hero.service.js imports from a shared database module. This module is where the magic happens for connecting to our database. In this case, I am using Azure Cosmos DB via its Node.js SDK on npm (@azure/cosmos).

Notice that the code reads the secrets in via Node.js environment variables. Then it merely exports the containers from the appropriate database. I can point at different environments by changing the environment variables, without changing the code.

// shared/index.js
const cosmos = require('@azure/cosmos');

const endpoint = process.env.CORE_API_URL;
const masterKey = process.env.CORE_API_KEY;
const databaseDefName = 'vikings-db';
const heroContainerName = 'heroes';
const villainContainerName = 'villains';
const { CosmosClient } = cosmos;

const client = new CosmosClient({ endpoint, auth: { masterKey } });

const containers = {
  heroes: client.database(databaseDefName).container(heroContainerName),
  villains: client.database(databaseDefName).container(villainContainerName)
};

module.exports = { containers };
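
As an aside, when running locally the Azure Functions runtime reads these variables from the Values section of a local.settings.json file at the root of the function app. Here is a minimal sketch with placeholder values (this file holds secrets, so keep it out of source control):

// local.settings.json (placeholder values)
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "CORE_API_URL": "https://your-account.documents.azure.com:443/",
    "CORE_API_KEY": "<your-cosmos-db-key>"
  }
}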

What's Your Route?

I didn't want my API to be /api/heroes-get; I prefer /api/heroes when executing the GET action, so I changed it. My function is in the path /heroes-get/index.js, and inside that same folder there is a function.json file. This file is where you configure the function's behavior. The key setting I wanted to change was the route alias, which I set with "route": "heroes" in the code block below.

Now my endpoint is /api/heroes.

// function.json
{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get"],
      "route": "heroes"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
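
If you run the function app locally with the Azure Functions Core Tools (which serve HTTP functions on port 7071 by default), you can verify the alias:

func start
# then, in another terminal:
curl http://localhost:7071/api/heroes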

What's the Point?

Organizing your code and isolating logic only makes your life easier if it has some tangible positive effect, so let's explore that. When writing your next function for updating heroes, the function could look like the following code.

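// heroes-put/index.js (hypothetical folder name, following the heroes-get pattern)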
const { putHero } = require('../shared/hero.service');

module.exports = async function(context) {
  context.log('putHero: JavaScript HTTP trigger function processed a request.');
  await putHero(context);
};

Do you notice that it looks very similar to the function for getting heroes? There is a pattern forming, and that's a good thing. The big difference here is that the code is calling putHero in the hero.service.js module. Let's take a closer look at that.

The logic for updating the heroes is isolated in hero.service.js, right alongside the logic for getting the heroes. Isolating responsibilities like this is one of the module's main jobs.

Thinking forward, the logic for delete, insert, and any other operations could also go in this module and be exported for use in other functions. This makes it relatively simple to extend this structure to other actions and models.

// shared/hero.service.js
const { heroes: container } = require('./index').containers;

async function getHeroes(context) {
  // ...
}

async function putHero(context) {
  const { req, res } = context;
  const hero = {
    id: req.params.id,
    name: req.body.name,
    description: req.body.description
  };

  try {
    const { body } = await container.items.upsert(hero);
    res.status(200).json(body);
    context.log('Hero updated successfully!');
  } catch (error) {
    res.status(500).send(error);
  }
}

module.exports = { getHeroes, putHero };
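
For instance, a delete operation could look something like the sketch below. This is hypothetical code following the same pattern, and it assumes the function's route declares an {id} parameter so that req.params.id is populated:

// shared/hero.service.js (hypothetical deleteHero, extending the same pattern)
async function deleteHero(context) {
  const { req, res } = context;
  try {
    // Look up the item by its id and delete it
    await container.item(req.params.id).delete();
    res.status(200).json({ id: req.params.id });
  } catch (error) {
    res.status(500).send(error);
  }
}

// ... and add deleteHero to module.exports alongside getHeroes and putHero.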

More Serverless

Share your interests, and as I write more posts on serverless, I'll keep them in mind! In the meantime, here are those resources again, in case you want some getting started materials:

Credit and Thanks

Thanks to Marie Hoeger for reviewing the content of this post and taking my feedback. You should definitely follow Marie on Twitter!