GraphQL API Gateway Patterns
BFF Derived Client Generation

The "BFF derived Client generation" pattern for API Gateways and BFFs

Before founding WunderGraph, I worked for a Full Lifecycle API Management vendor. One of our core components was an API Gateway. After a while I realized that the classic API Gateway pattern creates unnecessary friction and overhead for developers.

Problem

The classic API Gateway pattern is implemented as follows:

We add APIs to the API Gateway, e.g. by importing an OpenAPI Specification. Next, we configure the Gateway to route requests to the origin, add authentication middlewares, rate limiting, policies and more.

At this point, we've got all sorts of valuable metadata about our APIs: which endpoints are available, how to call them, how to authenticate, what the rate limits are, what the input and response schemas look like, and which endpoints require authentication or special permissions and which are public.

What do we do with this information? We throw it away! WHAT?!

Once you integrate this API into a frontend application, you have to re-implement all of this information in your frontend application. Even worse, you have to manually keep the frontend application in sync with the API Gateway configuration. That is insane! It's error-prone and simply a waste of precious developer time. On top of that, every line of code is a liability: it has to be maintained, tested and documented.

Ideally, we could take all the information we have about our APIs and generate a client for it.

Solution

That's exactly how we came up with the idea for the pattern we call "BFF derived Client generation". Instead of manually implementing the client, we generate it from the API Gateway configuration.

You might be thinking that an OpenAPI Specification is enough to generate a client. For server-to-server communication, that's mostly true. However, when building frontend applications, we need way more information than just the endpoints.

What auth providers are available? What is the auth strategy? Which endpoints require authentication or special permissions? What do the input and response schemas look like?

If you've read a bit about our approach, you might have noticed how much we embrace JSON-RPC over REST. That's for a reason. When using the BFF Pattern, we're building specific endpoints for specific use cases or pages. REST is not a good fit for this pattern, because we're not building a generic API around resources, but rather a specific API that offloads the overhead of API integration from the frontend to the backend.

REST is way too flexible for this pattern. If we can use a simpler protocol, like JSON-RPC, it's a lot easier to generate code for the frontend.

We've made the decision to divide our Endpoints into three categories:

  • Queries (read operations, HTTP GET)
  • Mutations (write operations, HTTP POST)
  • Subscriptions (streaming operations, HTTP GET)

In addition, we've standardized how clients authenticate with our BFF / API Gateway, how the client sends input data to the backend, and how the backend responds to the client.

When sending input data to the backend, we URL encode the input JSON object and send it as a query parameter in case of GET requests. For POST requests (mutations), we send the input JSON object as the request body.
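
To make this concrete, here's a minimal sketch of what such a generic client could look like. The class, method names and the URL layout are illustrative, not the actual WunderGraph client API.

// A minimal sketch of a generic client following the conventions above:
// queries via HTTP GET with the input JSON URL encoded as a query parameter,
// mutations via HTTP POST with the input JSON as the request body.
// All names and the URL layout are illustrative.
class GenericClient {
	constructor(private baseUrl: string) {}

	// Read operations (queries) are sent as GET requests.
	async query<Input, Response>(operationName: string, input: Input): Promise<Response> {
		const params = new URLSearchParams({ variables: JSON.stringify(input) });
		const res = await fetch(`${this.baseUrl}/operations/${operationName}?${params}`);
		return (await res.json()) as Response;
	}

	// Write operations (mutations) are sent as POST requests.
	async mutate<Input, Response>(operationName: string, input: Input): Promise<Response> {
		const res = await fetch(`${this.baseUrl}/operations/${operationName}`, {
			method: 'POST',
			headers: { 'Content-Type': 'application/json' },
			body: JSON.stringify(input),
		});
		return (await res.json()) as Response;
	}
}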

This doesn't just allow us to generate a generic client for the frontend; we can also generate a JSON Schema for the input and response objects. In addition, we generate TypeScript models.

When applying the generated models to the generic client, it becomes a strongly typed client. That's very important for three reasons. First, we can test the generic client within our SDK without having to rely on code generation. Second, the client is quite lightweight and keeps the client footprint small, independent of the number of endpoints we have, because the client code never changes, only the models do, and the models get stripped out during the build process. Third, we can layer any sort of framework-specific code on top of the generic client. E.g. we're integrating with React, Vue, Svelte, Relay, SWR, Tanstack Query, React Native, Expo and more, all by just adding a thin layer on top of the generic client.
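
To illustrate how the generated models layer on top of the generic client, here's a rough sketch building on the GenericClient above; the operation name and types are hypothetical, not actual generated output.

// Hypothetical generated models for a single operation.
interface WeatherInput {
	city: string;
}
interface WeatherResponse {
	data: { temperature: number };
}

// A thin, strongly typed wrapper around the generic client. Only the types
// differ per project; the underlying client code never changes.
class TypedClient {
	constructor(private client: GenericClient) {}

	weather(input: WeatherInput): Promise<WeatherResponse> {
		return this.client.query<WeatherInput, WeatherResponse>('Weather', input);
	}
}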

Furthermore, we can leverage the generated JSON Schema descriptions to integrate with form libraries. There are a lot of form libraries that can create forms based on a JSON Schema. All you need to do is to import the generated JSON Schema and pass it to the form library. That's amazing if you want to automate the process of building forms.
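
As a rough sketch of that idea (the import path, schema name and renderForm function are placeholders, not a specific library's API):

// Hypothetical: assume the code generator emitted a file that exports the
// JSON Schema of each operation's input. File name and shape depend on the
// template you use.
import { CreateUserInputSchema } from './generated/jsonschema';

// Placeholder for whatever JSON-Schema-driven form library you prefer.
declare function renderForm(schema: object, onSubmit: (data: unknown) => void): void;

// Build a form straight from the generated schema and send the result to the BFF.
renderForm(CreateUserInputSchema, (data) => {
	console.log('submitting to the CreateUser mutation', data);
});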

Example

Let's take a look at how this works in practice. When running wunderctl up or wunderctl generate, we introspect all the API dependencies and combine them with the rest of your BFF configuration. From that, we generate what we call the "ResolvedWunderGraphConfig". This configuration is then used to configure the API Gateway that's powering your BFF. At the same time, we're creating another object, the "CodeGenerationConfig", which is then passed to the code generator.

The code generator lets you implement an interface called "Template", so you can extend the existing templates or create your own if you want to generate code for a specific use case or integration.
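
Roughly speaking, the shapes involved look like this; this is reconstructed from the example below rather than copied verbatim from the SDK:

// Reconstructed from the usage below; the real SDK definitions may contain
// additional fields.
export interface TemplateOutputFile {
	path: string;
	content: string;
	header?: string;
}

export interface Template {
	generate(generationConfig: CodeGenerationConfig): Promise<TemplateOutputFile[]>;
}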

Here's an example of what such a template could look like:

export class JsonSchema implements Template {
	generate(generationConfig: CodeGenerationConfig): Promise<TemplateOutputFile[]> {
		// Map every operation of the application to its name plus the JSON
		// Schemas of its input variables and its response.
		const model: Model = {
			operations: generationConfig.config.application.Operations.map((op) => ({
				name: op.Name,
				inputSchema: JSON.stringify(op.VariablesSchema),
				outputSchema: JSON.stringify(op.ResponseSchema),
			})),
		};
		// `template` is the Handlebars source for the generated file (a sketch is
		// shown after the Model interface below); formatTypeScript and
		// doNotEditHeader are helpers from the surrounding codebase.
		const tmpl = Handlebars.compile(template);
		const content = tmpl(model);
		return Promise.resolve([
			{
				path: 'jsonschema.ts',
				content: formatTypeScript(content),
				header: doNotEditHeader,
			},
		]);
	}
}
 
interface Model {
	operations: {
		name: string;
		inputSchema: string;
		outputSchema: string;
	}[];
}
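
The template constant compiled above isn't shown in the example. A minimal Handlebars template for this Model could look something like this (illustrative only):

// An illustrative Handlebars template for the Model above: it emits one entry
// per operation, embedding the raw JSON Schemas as object literals.
const template = `
export const operations = [
{{#each operations}}
	{
		name: '{{name}}',
		inputSchema: {{{inputSchema}}},
		outputSchema: {{{outputSchema}}},
	},
{{/each}}
];
`;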

Here are the definitions of CodeGenerationConfig and ResolvedWunderGraphConfig:

export interface CodeGenerationConfig {
	config: ResolvedWunderGraphConfig;
	outPath: string;
	wunderGraphDir: string;
}
 
export interface ResolvedWunderGraphConfig {
	application: ResolvedApplication;
	deployment: ResolvedDeployment;
	sdkVersion: string;
	authentication: {
		roles: string[];
		cookieBased: AuthProvider[];
		tokenBased: TokenAuthProvider[];
		customClaims: Record<string, CustomClaim>;
		publicClaims: string[];
		authorizedRedirectUris: ConfigurationVariable[];
		authorizedRedirectUriRegexes: ConfigurationVariable[];
		hooks: {
			postAuthentication: boolean;
			mutatingPostAuthentication: boolean;
			revalidateAuthentication: boolean;
			postLogout: boolean;
		};
		cookieSecurity: {
			secureCookieHashKey: ConfigurationVariable;
			secureCookieBlockKey: ConfigurationVariable;
			csrfTokenSecret: ConfigurationVariable;
		};
	};
	enableGraphQLEndpoint: boolean;
	security: {
		allowedHostNames: ConfigurationVariable[];
	};
	interpolateVariableDefinitionAsJSON: string[];
	webhooks: WebhookConfiguration[];
	nodeOptions: ResolvedNodeOptions;
	serverOptions?: ResolvedServerOptions;
	experimental: {
		orm: boolean;
	};
}

As you can see, you've got access to every aspect of the BFF configuration, like authentication, security, webhooks, etc. All of this info can be leveraged to generate code for the frontend or other integrations.
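
For example, a custom template could use the authentication section to turn the configured roles into a TypeScript union type. This is a hypothetical template, not one that ships with the SDK:

// A hypothetical custom template that generates e.g.
// export type Role = 'admin' | 'user';
export class Roles implements Template {
	generate(generationConfig: CodeGenerationConfig): Promise<TemplateOutputFile[]> {
		const roles = generationConfig.config.authentication.roles;
		const union = roles.length > 0 ? roles.map((r) => `'${r}'`).join(' | ') : 'never';
		return Promise.resolve([
			{
				path: 'roles.ts',
				content: `export type Role = ${union};\n`,
				// Same header constant as in the JsonSchema example above.
				header: doNotEditHeader,
			},
		]);
	}
}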

Conclusion

We've been using this approach for a while now, and developers absolutely love it. Add your API dependencies, define Operations and configure your BFF, and you get a production-grade BFF and a strongly typed client for every frontend framework for free.

There's nothing else like this pattern that tries to solve the problem in such a holistic way. Coming back to the initial question, it really doesn't make sense to throw away all the information about your BFF.

The "BFF derived Client generation" pattern is a paradigm shift in how we think about BFFs. It really doesn't make sense to think about BFFs and the frontend as two separate things. It's important to seperate them in terms of responsibilities and from an architecture perspective, but managing them separately is a huge mistake.