## History

In the beginning, there was the database. It stored vast quantities of information. It allowed retrieval and searching of the information at scales never before seen. What was once stored in ledgers and paper was now electronic. This pleased the Business immensely – never before were so many questions answerable! Yet, there were clouds on the horizon. As the software industry matured, people invented new types of applications. And with them, new databases. Different and separate databases. Over time, the need to combine the data became clear. But chasms existed everywhere. All these applications stored their own data, in their own locations. Business had unintentionally imprisoned the data!
To free the data, systems needed to communicate with one another. In the late 1970s, Remote Procedure Call (RPC) systems began to arise. RPC provided a common interface and data format, which allowed computers to communicate with each other. By today’s standards, this architecture required a great deal of expertise to create. It involved complicated coding, was brittle, and was expensive. But it worked. The imprisoned data could begin to stretch its legs.
Over time, RPC implementations became more robust and standardized. From the late 1980s to the early 1990s, standards gained prominence. CORBA and Microsoft’s DCOM were two examples. Standardization made it easier to write code that could communicate. Different teams could work independently, then combine their systems. Better transport mechanisms and message reliability improved matters further. Still, by today’s standards, these technologies were difficult and expensive. A famous paper highlighted the pain points developers were feeling. The industry slogged on, continuing to search for a solution.
In the 1990s, the World Wide Web (WWW) began gaining traction outside of academia. In the late 90s, developers began using Hypertext Transfer Protocol (HTTP) to communicate. This widespread standard solved the problem of “how do we send a message from point A to point B?” Popular operating systems such as Windows and Linux supported it natively. HTTP used network port 80, and because of all the web browsing, that port was open by default on firewalls. However, disagreements about what form the data should take still rumbled.
At first, the software industry embraced two frameworks. Web Services Description Language (WSDL) defined the format of the data and services. Simple Object Access Protocol (SOAP) described how to send it. The combination of these two specifications laid out the “rules” of communication over HTTP. While a good idea in theory, they quickly became overwhelming. They were difficult to maintain, and painful to implement. Too much of a good thing, you might say.
Finally, in 2000 Roy Fielding published a groundbreaking paper. In it, he described the Representational State Transfer (REST) architecture. This approach provides a simple mechanism for exposing:
- Resources that represent something in your system (orders, customers, accounts, etc.)
- Actions against them, represented using existing HTTP verbs (GET, POST, PUT, DELETE)
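The resource/verb pairing above can be sketched as a tiny in-memory service. The `orders` store and `handle` dispatcher below are purely illustrative – a real REST service would sit behind a web framework and speak actual HTTP – but they show how the four verbs map onto one resource.

```python
# A minimal sketch of REST semantics: one resource collection ("orders")
# and the four HTTP verbs mapped onto create/read/update/delete actions.

orders = {}   # resource store, keyed by order id
next_id = 1

def handle(verb, order_id=None, body=None):
    """Dispatch an HTTP-style verb against the 'orders' resource."""
    global next_id
    if verb == "GET":                       # read one order, or list all
        return orders[order_id] if order_id else list(orders.values())
    if verb == "POST":                      # create a new order
        order = dict(body, id=next_id)
        orders[next_id] = order
        next_id += 1
        return order
    if verb == "PUT":                       # replace an existing order
        orders[order_id] = dict(body, id=order_id)
        return orders[order_id]
    if verb == "DELETE":                    # remove an order
        return orders.pop(order_id)
    raise ValueError(f"unsupported verb: {verb}")

created = handle("POST", body={"item": "widget", "qty": 2})
print(handle("GET", created["id"]))   # the order we just created
```

In a real API, each call would be an HTTP request like `GET /orders/1` or `POST /orders`; the dispatcher above just makes the verb-to-action mapping concrete.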
## Integration

With APIs so prevalent in the industry, API integration has become critically important. There is an ocean of data, and thousands of places to pull it from. An effective application leverages different APIs to maximize its power. This can include:
- Internal data belonging to the business, both current and historical.
  - The data can “live” in modern systems that already feature APIs.
  - The data can sit buried in legacy databases; exposing it makes important historical trends and details available.
- Real-time data about financial markets.
- Information about the weather in various geo-locations.
- Traffic data from cities, highways, and rail lines.
- Births and deaths. Marriages and divorces.
Oftentimes, there is no need to re-invent the wheel – simply pull the data from an existing API. API directories can assist with this discovery. Like a phone book, they allow browsing for the right service. They provide thorough documentation and tutorials.
Client libraries allow rapid consumption of APIs. Discussion boards offer a community of help to aid in getting started. With these available tools, developers stand on the shoulders of giants.

On the flip side, a business can choose to expose its own valuable information in a public API. By hosting an API, a business can provide its data to others. This can drive traffic to its site, and help build its reputation. If the information is valuable enough, the company can charge for access to the API. Suddenly, what was once a cost can become a revenue stream! Plugging the API into an API directory advertises the business to other developers. These actions can have a multiplier effect on the value of the data. Consumers can feed back into the system, improving the data.
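In practice, consuming one of these APIs often amounts to an HTTP GET plus a little JSON parsing. The weather endpoint and payload shape below are hypothetical – they stand in for whatever provider you discover in an API directory – but the fetch/parse split is typical.

```python
import json
import urllib.request

# Hypothetical weather API -- the URL and payload shape are assumptions,
# not any real provider's contract.
WEATHER_URL = "https://api.example.com/v1/weather?city={city}"

def fetch_weather(city):
    """GET the raw JSON document for a city (requires network access)."""
    with urllib.request.urlopen(WEATHER_URL.format(city=city)) as resp:
        return resp.read().decode("utf-8")

def parse_weather(raw_json):
    """Pull just the fields we care about out of the provider's payload."""
    doc = json.loads(raw_json)
    return {"city": doc["city"], "temp_c": doc["current"]["temp_c"]}

# Offline demonstration with a canned payload shaped like the assumed API:
sample = '{"city": "Oslo", "current": {"temp_c": 4.5, "wind_kph": 12}}'
print(parse_weather(sample))   # {'city': 'Oslo', 'temp_c': 4.5}
```

Keeping the parsing separate from the network call makes the integration easy to test, and easy to adapt when the provider changes its payload.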
## DreamFactory

If you’re sold on the idea of using APIs to power your business, what’s next? If your business sits upon a large number of separate data stores, the chore can seem daunting. Developing an API layer for each system could consume thousands of developer hours. Time and effort better spent elsewhere. Money better spent elsewhere. Wouldn’t it be nice to generate these API layers? Some sort of code generation mechanism that made all that data available in a modern REST API? Friends, that day is upon us!
The good folks at DreamFactory have built a system that does exactly that. Start with some simple configuration. Then, with a few clicks of a mouse, DreamFactory generates powerful APIs. Feed it your database connections and legacy SOAP APIs, and out comes a robust REST API, complete with sparkly documentation. The result looks like something a team of developers took months to create, but it is available to you in minutes. To learn more, check out a few beginner-level videos at DreamFactory Academy, or read the guide at https://guide.dreamfactory.com. Slash your budget estimates. DreamFactory has lifted a large cost of software development off your plate!