Application Programming Interface First

Application Programming Interface First (API-First) is said to be documentation-centric. It may or may not be; it depends on what you make out of the principle. The principle itself does not demand heavyweight, document-centric work. Let us have a look at what Werner Vogels, the Amazon Web Services CTO, presented in his re:Invent 2021 keynote (check around 1:11):

Six rules for APIs according to Werner Vogels:

  1. APIs are forever
  2. Never break backward compatibility
  3. Work backwards from customer use cases
  4. Create APIs with explicit and well-documented failure modes
  5. Create APIs that are self-describing and have a clear, specific purpose
  6. Avoid leaking implementation details at all costs

According to Werner Vogels, these are lessons learned over 15+ years at AWS. He talks about cloud-facing software development. I have more of a systems background, so let me look at the six rules and translate them into systems speak as I understand them. I am not interpreting; Werner can speak for himself. I just take the punch lines and transfer them into my universe.

  1. APIs are forever. Whatever you are exposing as an interface, people will use it. They will build their functions on your API. DO NOT CHANGE IT. It does not need to be a software interface. Any interface will do. 
    O.K., this is a bit scary. I am a big advocate of iterative development. How does that go together with interfaces that are forever? It does, quite well. Just think of working in a sandbox. In a sandbox you have a lot of freedom in what you are doing and how you are doing it. Remember to start your activities with some architecture. Architecture is the founding principle. Later I will discuss function centricity vs. data centricity and data orientation. Right now we are agnostic to what we are focusing on. 
    If and when we start something entirely new, we will have an idea of the solution architecture of our function. One thing is a given fact: the first idea will not be the best one we will have. Very likely we are going to learn something, a statement you can already find in Melvin Conway's paper. We need to adapt the architecture as we go. Interfaces between the architectural elements are defined via APIs, so fixing the interfaces will fix the architecture. Simply do it as late as possible. I am not advocating avoiding decisions or delaying them. No. Just commit as late as possible, but if you commit, commit. You need to be brave. That is all. And there is one more thing I learned from some digital lawyers when writing terms and conditions for a product: once you publish an API, you have NO CONTROL over how people are using it. You may have had the best intentions. When it is published, it is published, and what follows is damage control. 
  2. Never break backwards compatibility. If you break backwards compatibility, you are going to hurt people, so you need a good reason to do it. (When I was working as a function developer, I needed approval from the department head to create a branch. You could get a branch if you really needed it. Well, there always was a better solution. You can do many things on the trunk if you are forced to think. Yes, you are not fast in the beginning, but imagine all the time saved in the future. – Kudos to Gerd! What a valuable lesson.) The same goes for backwards compatibility. If you break it, you are going to upset a lot of people. BUT: very few people oppose progress. You will find a solution. If it is in mechanics, a solution might be a physical adapter, like with SIM cards: they come in different sizes, with adapters to make them fit physically. (A small software sketch of this adapter idea follows after this list.)
  3. Work backwards from customer use cases. Each of the units in the value stream or boundary diagram needs to produce something. If a block does not deliver a tangible result, what is the justification for its existence? Why is it there if it does not do anything? The entire value stream needs to have a purpose, just like the product or artifact I am delivering. Once I have that, I can throw the entire Wardley Mapping machine at it, get make-or-buy decisions, and focus on what is most important. Again, I am simplifying a lot. I believe Haier is driving the idea of architecture centricity and working backwards in an extreme and very successful direction. Again, I am only interested in some aspects of the story; see the full HBR article on the Haier concept. What I have in consequence is a succession of sub-systems or process steps with a defined input and output and a price tag in terms of duration and costs. I can assess whether I want to do the job on my own (for financial or strategic reasons) or whether I am going to buy it. This is pure Taylor scientific management at its best – to my understanding. And Wardley mapping. Taylor focuses on money alone; Wardley adds the strategic component. Get your strategy right and put the effort into what separates you from the rest and is not worth buying. Or buy what is not worth building yourself. How do you decide when to do what? My suggestion is to look at what Simon Wardley has to say. Again, do not copy. Understand and apply the concept to your problem.
  4. Create APIs with explicit and well-documented failure modes. This sounds familiar to systems engineers. In our context the APIs represent the architecture of our system. All inputs and outputs are documented. If I attach the malfunctions and failures to this documentation, I can see the propagation of failures and can define countermeasures for them. That is an FMEA (Failure Mode and Effects Analysis) built into the design of the function. In the modern times of “anything as code” I can have this all linked and even automatically generate a Fault Tree Analysis. I could have this live, directly from the code. Right from the beginning of the design of the function I can control all consequences of failures. This is essential for safety-relevant functions or otherwise regulated functions. This feature generates enormous transparency and will lead to new solutions to known problems. This is not science fiction. The products you need to get a concept like this up and running do exist and are even open source. (A sketch of failure modes made explicit in an API contract follows after this list.)
  5. Create APIs that are self-describing and have a clear, specific purpose. O.K., this is a bit redundant to my interpretation of point 3, but that does not matter. The purpose was mentioned in 3 already. What was not mentioned was the self-describing nature of the API. Who likes to read manuals? Well, I was known in the laboratories I worked in for actually reading the manuals of the devices we were using. Not everybody likes to read manuals. If and when the API is self-describing, nobody needs to read a manual. I acknowledge the benefit and think it is a good idea. If the function is self-describing, who then needs a manual or instructions? It is not about reducing work or documentation. The point is simply that, no matter how hard you try, there is always the risk that the two pieces of documentation (code and specification) will drift apart at some point. Solution: see above. Document in the code, a.k.a. documentation as code. I understand this is not exactly the same thing, but you get the picture. 
    There is another thing: if the API describes a sub-unit of a functionality with a specific purpose, then each of the sub-units can be seen as a process. This is a 1:1 correspondence to a production line, if you like. There is a definition of a process as “having an input and an output and consuming time, i.e. modifying the input into output (adding value).” 
  6. Avoid leaking implementation details at all costs. This is the pixie dust. It is super tough for a technology person like me not to present all the cool stuff I am doing to the customer/user of the function we are delivering. Just shut up and commit to the deliverables you promise. This is a big challenge in some business contexts that are dominated by a lack of trust, where the engineers on the receiving side often want to be convinced by technical detail. This spoils the game, because then you cannot improve what you have developed, at least not without disclosing all of it to the customer. Challenge accepted. At the very least, try as hard as you can not to disclose any details. The fewer details of your function are known to the outside, the more flexibility you keep for improvement and rationalization (ratio) measures. I am not talking about keeping secrets from the customer or betraying the customer in any way. I am talking about keeping autonomy in the development of your function. As long as you deliver the promised result, why should anybody care about what is inside? You are the expert. You know best. I know perfectly well that this may be a big challenge. However, in a world of mutual respect, co-development and partnership, this should not be an issue any longer. In the end, I could say: “This is what you are paid for.” (A sketch of hiding the implementation behind the interface follows after this list.)
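To make the adapter idea in rule 2 concrete, here is a minimal TypeScript sketch. All names (PricingV1, PricingV2, PricingV1Adapter) are hypothetical, made up for illustration: the old contract stays frozen, and an adapter lets existing consumers keep working while the implementation behind it moves on.

```typescript
// Hypothetical v1 contract that existing consumers already build on.
interface PricingV1 {
  // Gross price in euros for a given part number.
  grossPrice(partNumber: string): number;
}

// Newer, richer contract: currency-aware and structured.
interface PricingV2 {
  quote(partNumber: string, currency: "EUR" | "USD"): { amount: number; currency: string };
}

// The "SIM-card adapter": v1 consumers keep calling grossPrice(),
// while the actual work is done by a v2 implementation behind it.
class PricingV1Adapter implements PricingV1 {
  constructor(private readonly v2: PricingV2) {}

  grossPrice(partNumber: string): number {
    return this.v2.quote(partNumber, "EUR").amount;
  }
}
```

The old interface never changes; the adapter is the small, local price you pay for not breaking everyone who built on it.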
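For rule 4, a sketch of what explicit, well-documented failure modes can look like in a contract. The sensor example and its failure list are assumptions made up for illustration; the point is that every way the call can fail is part of the interface itself and therefore visible to an FMEA or a fault tree.

```typescript
// Every failure mode of the call is enumerated in the contract itself.
type SensorFailure =
  | { kind: "timeout"; afterMs: number }      // no reading within the deadline
  | { kind: "out-of-range"; value: number }   // physically implausible value
  | { kind: "not-calibrated" };               // device needs calibration first

type SensorResult =
  | { ok: true; value: number }
  | { ok: false; failure: SensorFailure };

// A consumer is pushed by the type system to handle each documented failure.
function handle(result: SensorResult): void {
  if (result.ok) {
    console.log(`reading: ${result.value}`);
    return;
  }
  switch (result.failure.kind) {
    case "timeout":
      console.warn(`retry, timed out after ${result.failure.afterMs} ms`);
      break;
    case "out-of-range":
      console.warn(`discard implausible value ${result.failure.value}`);
      break;
    case "not-calibrated":
      console.warn("trigger calibration routine");
      break;
  }
}
```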
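And for rule 6, a small sketch of keeping the implementation behind the interface. The route-finder names are hypothetical; what matters is that consumers only ever see the contract, so the internals can be improved, replaced or even bought in without anyone outside noticing.

```typescript
// The only thing the consumer is allowed to know.
export interface RouteFinder {
  // Returns the IDs of the waypoints on the cheapest route.
  cheapestRoute(from: string, to: string): string[];
}

// Internal: not exported, free to be rewritten or swapped out at any time.
class DijkstraRouteFinder implements RouteFinder {
  cheapestRoute(from: string, to: string): string[] {
    // ... the "cool stuff" lives here and stays here ...
    return [from, to];
  }
}

// Factory: consumers get a RouteFinder, never the concrete class.
export function createRouteFinder(): RouteFinder {
  return new DijkstraRouteFinder();
}
```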

In consequence, this leads to something like a service-oriented architecture, with the constraint that the services have defined interfaces. There is nothing wrong with starting with a code-first, function- and requirements-based strategy if we know nothing yet. The more we know, however, the more we should modularize the solution with the help of the API-first concept. It is fine to “freeze” the proven APIs and keep flexibility for the others. Rule number 6 should always be in place.

Is this all anarchy behind the APIs? No, for sure not. Remember that YOU can define rules yourself after understanding the principles. There could, and probably should, be governance guarding the methodologies or standards across all the teams. There could be overarching standards. Very likely, as a consumer of the interface, you will have some expectations with respect to the quality of the solution behind the interface. How that quality is achieved depends. Sometimes the business is regulated and there are expectations from external bodies with respect to regulations or so-called “best practices”. This can all be part of the governance topics. It is important not to over-regulate the system, in order to maintain the adaptability and flexibility of the solution. In the end, the idea is that the sub-system or process step in the value map delivers according to expectation and agreement.

Still there is the idea that all the APIs need to be externalizable, the Bezos thing from the article before. Cool. Make-or-buy decisions are then possible at the level of boxes. And the entire thing can scale. We had this earlier: if a subsystem becomes too big and slow to be managed, make it smaller, either by splitting it at the same level or by introducing a sub-level. The API being externalizable, however, also means I can buy the function somewhere else. If I do that, I need to trust the supplier of the function to deliver the agreed quality and functionality. Rule six simply means that I need to trust the API. 

A little story to loosen the discussion a bit. When I worked with AWS as a partner in some project, we had a management call. Somebody asked: “What would Jeff say now?” I was close to panicking. Whom did I forget to invite? Jeff who? There was no Jeff in the entire project. Then it struck me. O.K. THIS Jeff. They were checking guardrails. 

An externalizable API also means we are following Simon Wardley's mapping idea: this is the make-or-buy decision in Wardley mapping. I understand that Wardley mapping is more than this single idea; I just want to give credit where it belongs. And the idea of externalizing APIs is reversible. You can sell the functions to others, like AWS did with the cloud service or Amazon does with its accounting services. Or you can simply buy commodity services. It works at full scale.

One big remark: sometimes modularity makes no sense. In that case, just dump it. You will save a hell of a lot of administrative work, but you will lose modularity and serviceability. There is a lot of polemics being exchanged: this or that company is now going back to monolithic solutions, so dump your modularity approach, and so on. Remember: your choice. This is what you are paid for. Make up your mind and decide. Nobody will do it for you, unless that person takes your job. Whatever gets the job done in your very situation is right. Nothing else.

When should modularity be dropped? There are cases when your adaptive cash-cow product turns into a commodity product. You will have a lot of competition; others can do the same trick. Now you have a choice between two rabbit holes. One is to drop the product, let others take care of the commodity product, and maybe buy it yourself. Just make sure your supplier then has some competition. The other is to make your product a commodity product, maybe even re-design it to make it a lot cheaper. Whatever it takes. These are two very valid paths. Make a data-supported decision on what to do and commit. Never forget: data does not make the decision for you. You decide, informed by data. (We could go on here about which decisions should be taken by AIs and so on… I will not do that here. Maybe later.)

Is this now the solution for all the problems? Go API-first instead of code-centric and remember Melvin Conway to generate a homomorphism between team structure and product architecture? No. Something additional is needed. Experience simply tells us that without overarching governance in the form of an architecture and a clearly specified target functionality, we are going to generate something that is not well structured and not easy to modify.

And another question needs to be answered: when am I going to use agile methods, and when is a classical set-up more promising? Closely related is the more general question of how not to lose focus on the important stuff and how to find out what is important. The concept that helps us decide is called Situation Awareness and was formulated by Mica Endsley. Situation Awareness helps us make the decision. The article “Agile or Not Agile” gives us the parameters.
