Reaction: The Importance of Open APIs
Over at CIMI, Tom Nolle considers whether the open API is a revolution or a cynical trap. His line of argument primarily relates to accessing functions in a Virtual Network Function (VNF), and by extension to Network Function Virtualization (NFV). The broader point is made in this line:
One important truth about an API is that it effectively imposes a structure on the software on either side of it. If you design APIs to join two functional blocks in a diagram of an application, it’s likely that the API will impose those blocks on the designer.
This is true: if you design the API first, it will necessarily impose information flow between the different system components, and even determine, at least to some degree, the structure of the software modules on either side of the API. For instance, if you deploy network appliances from a single vendor, the workflow for building packet filters will be similar across all devices. Add a second vendor into the mix, however, and you might find the way packet filters are described and deployed is completely different, requiring a per-device module that translates intent into implementation.
While this problem will always exist, there is another useful way of looking at it. Rather than seeing the API as a set of calls and a set of data structures, you can break things up into the grammar and the dictionary. The grammar is the context in which words are placed so they relate to other words, and the dictionary is the meaning of the words. We often think of an API as almost purely dictionary: push this data structure to the device, and something happens. In grammatical terms, the verb is implied in the subject and object pair you feed to the device.
Breaking things up this way lets you see the problem from a different angle. There is no particular reason the dictionary or the grammar must be implied; rather than being built into the API, they can be carried through the API. An example will be helpful here.
In the original OSPF, all fields were fixed length. There was no information in the packet about what any particular field was; the meaning of the data was implied by its location in the packet. In this case, the grammar determined the dictionary, and both had to be coded into each OSPF implementation. The grammar and dictionary are essentially carried from one implementation to another through the OSPF standards and specifications. IS-IS, on the other hand, carries all its information in TLVs (type-length-value triples), which means at least some of the information required to interpret the data is carried alongside the data itself. This additional information is called metadata.
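The contrast can be sketched in a few lines of Python. The field layout, TLV type codes, and field names below are invented for illustration; they are not the actual OSPF or IS-IS encodings:

```python
import struct

# Fixed-position encoding (OSPF-style): meaning comes entirely from byte
# offsets defined in the spec; nothing in the packet says what a field is.
def encode_fixed(router_id: int, area_id: int) -> bytes:
    # Hypothetical 8-byte layout: router ID at offset 0, area ID at offset 4.
    return struct.pack("!II", router_id, area_id)

def decode_fixed(data: bytes) -> dict:
    router_id, area_id = struct.unpack("!II", data)
    return {"router_id": router_id, "area_id": area_id}

# TLV encoding (IS-IS-style): every value travels with metadata, a type
# code and a length, that says what it is and how big it is.
ROUTER_ID, AREA_ID = 1, 2  # invented type codes

def encode_tlv(fields: dict) -> bytes:
    out = b""
    for tlv_type, value in fields.items():
        out += struct.pack("!BB", tlv_type, len(value)) + value
    return out
```

Encoding the same two values both ways makes the wire-overhead tradeoff visible: the TLV stream is larger, because it carries the metadata as well as the data.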
There are a couple of tradeoffs here (if you haven’t found the tradeoffs, you haven’t looked hard enough). First, the software on both ends of the connection that reads and interprets the information is going to be much more complex. Second, the amount of data carried on the wire increases, because you are carrying not only the data but also the metadata. The upside is that IS-IS is easier to extend: implementations can ignore TLVs they don’t understand, new TLVs with new metadata can be added, and so on.
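The extensibility upside can also be sketched. This decoder (using the same invented type codes as above, not the real IS-IS codepoints) skips any TLV type it does not recognize, which is exactly why an older implementation keeps working when a newer sender adds new TLVs:

```python
import struct

KNOWN = {1: "router_id", 2: "area_id"}  # invented type codes

def decode_tlvs(data: bytes) -> dict:
    """Walk a type-length-value stream, keeping known types and skipping
    the rest; the skip is what makes the format forward-extensible."""
    fields, offset = {}, 0
    while offset + 2 <= len(data):
        tlv_type, length = struct.unpack_from("!BB", data, offset)
        value = data[offset + 2 : offset + 2 + length]
        offset += 2 + length
        if tlv_type in KNOWN:
            fields[KNOWN[tlv_type]] = value
        # else: silently ignore the unknown TLV; the length field tells us
        # how far to jump even though we cannot interpret the value.
    return fields
```

A fixed-position decoder has no equivalent escape hatch: an unknown field shifts every offset after it and breaks the whole parse.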
If you want to understand this topic more deeply, this is the kind of thing Computer Networking Problems and Solutions discusses in detail.
In the world of network configuration and management, an example of this kind of system is YANG, which intentionally carries metadata alongside the data itself. In this way, the dictionary is, in a sense, carried with the data. There will always be different words for different objects, of course, but translators can be built to allow one device to talk to another.
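As a hypothetical illustration of how a YANG model names and types the data it describes, consider a minimal module for packet filters. The module name, namespace, and leaf names here are invented for this sketch:

```yang
module example-acl {
  namespace "urn:example:acl";
  prefix acl;

  container filters {
    list filter {
      key "name";
      leaf name { type string; }
      leaf match-port { type uint16; }
      leaf action {
        type enumeration {
          enum permit;
          enum deny;
        }
      }
    }
  }
}
```

Instance data encoded against this model (in JSON or XML) carries the leaf names along with the values, so a receiver can tell what each value means rather than inferring it from position.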
There is still the problem of flow, or grammar, which can make it difficult to configure two devices from one set of primitives. “I want to filter packets from this application” may still map to a completely different process on two different vendor implementations. However, the ability to translate the dictionary part of the problem between devices is a major step forward in building software that works across multiple devices.
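The dictionary-translation idea can be sketched as two renderers over one vendor-neutral intent. Both vendor output formats below are invented for illustration, not any real device's syntax:

```python
# One vendor-neutral filter intent, rendered into two invented vendor
# "dictionaries": a flat CLI-style string and a structured document.
def to_vendor_a(intent: dict) -> str:
    # "Vendor A" speaks a one-line, CLI-style rule.
    return f"filter {intent['action']} tcp any any eq {intent['match_port']}"

def to_vendor_b(intent: dict) -> dict:
    # "Vendor B" speaks structured, JSON-like rules.
    return {"rule": {"port": intent["match_port"],
                     "verdict": intent["action"].upper()}}
```

The translators solve the dictionary problem; what they cannot solve is the grammar problem, i.e., the order and workflow in which each device expects these rules to be created and applied.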
This is why YANG, JSON, and their associated ecosystems really matter.