API Integration – What is it? The Definitive Guide


Modern business runs on software. This involves storing business data and moving that data from place to place. In the old days, software stored data in silos. It was only available in one place, cut off from the rest of the world. Companies struggled to pull data from different locations, and combining it in meaningful ways was difficult. This isolated data provided only a fraction of its potential value. In modern applications, the data wants to be free. With the onset of the web, data is available everywhere. Sharing data has had an exponential effect on the power of software. Applications share data using some form of API, or Application Programming Interface. With the explosion of available APIs, managing them all has become more difficult. API integration allows you to combine, track, and add new APIs.


In the beginning, there was the database. It stored vast quantities of information, and allowed retrieval and searching of that information at scales never before seen. What was once kept in ledgers and paper was now electronic. This pleased the Business immensely – never before were so many questions answerable! Yet there were clouds on the horizon. As the software industry matured, people invented new types of applications, and with them, new databases. Different and separate databases. Over time, the need to combine the data became clear. But chasms existed everywhere. All these applications stored their own data, in their own locations. Business had unintentionally imprisoned the data!

To free the data, systems needed to communicate with one another. In the late 1970s, Remote Procedure Call (RPC) systems began to arise. RPC provided a common interface and data format that allowed computers to communicate with each other. By today’s standards, this architecture required a great deal of expertise to create. It involved complicated coding, was brittle, and was expensive. But it worked. The imprisoned data could begin to stretch its legs.

Over time, RPC implementations became more robust and standardized. From the late 1980s to the early 1990s, standards such as CORBA and Microsoft’s DCOM gained prominence. Standardization made it easier to write code that could communicate, so different teams could work independently and combine systems. Better transport mechanisms and message reliability made things work better. Still, by today’s standards, these technologies were difficult and expensive. A famous paper highlighted the pain points developers were feeling. The industry slogged on, continuing to search for a solution.

In the 1990s, the World Wide Web (WWW) began gaining traction outside of academia, and in the late 90s developers began using Hypertext Transfer Protocol (HTTP) to communicate.
This widespread standard solved the problem of “how do we send messages from point A to point B.” Popular operating systems such as Windows and Linux supported it natively. HTTP used networking port 80, which, because of all the web browsing, was open by default on firewalls. However, disagreements about what form the data should take still rumbled.

At first, the software industry embraced two frameworks. Web Services Description Language (WSDL) defined the format of the data and services, and Simple Object Access Protocol (SOAP) described how to send it. The combination of these two specifications laid out the “rules” of communication over HTTP. While a good idea in theory, they quickly became overwhelming. They were difficult to maintain, and painful to implement. Too much of a good thing, you might say.

Finally, in 2000 Roy Fielding published a groundbreaking paper describing the Representational State Transfer (REST) architecture. This approach provides a simple mechanism of exposing:
  1. Resources that represent something in your system (orders, customers, accounts, etc)
  2. Actions against them, represented using existing HTTP verbs (GET, POST, PUT, DELETE).
This simple representation of business entities and operations works very well. With it, REST eliminates much of the complexity and overhead of earlier approaches. No more struggling with definitions, schemas, and standards: a consumer simply takes actions against resources. Very straightforward.

With these advancements, API creation is finally within reach of almost anyone. In the years since, we have seen an explosion in the number of available APIs. APIs serving almost any functional area exist for consumption. According to ProgrammableWeb, there are over 20,000 publicly available APIs. This rapid growth has created an ecosystem of API directories and management solutions.
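The resource-plus-verb model is easy to demonstrate. Below is a minimal, hypothetical sketch in Python of how the four verbs map onto actions against a resource collection; it models REST semantics with an in-memory store rather than a real HTTP server:

```python
# Minimal sketch of REST semantics: resources are addressed by URL path,
# and actions against them are expressed with HTTP verbs.
# Hypothetical in-memory model for illustration only.
store = {}
next_id = 0

def handle(verb, path, body=None):
    global next_id
    parts = path.strip("/").split("/")       # "/customers/1" -> ["customers", "1"]
    if verb == "POST" and len(parts) == 1:   # create a new resource
        next_id += 1
        store[str(next_id)] = body
        return 201, str(next_id)
    rid = parts[1]
    if verb == "GET":                        # read an existing resource
        return (200, store[rid]) if rid in store else (404, None)
    if verb == "PUT":                        # replace a resource
        store[rid] = body
        return 200, rid
    if verb == "DELETE":                     # remove a resource
        return (204, store.pop(rid)) if rid in store else (404, None)

status, cid = handle("POST", "/customers", {"name": "Ada"})
print(status, handle("GET", f"/customers/{cid}"))  # → 201 (200, {'name': 'Ada'})
```

Notice that the consumer never needs a service contract or schema: the path names the resource, and the verb names the action.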


With APIs so prevalent in the industry, API integration has become critically important. There is an ocean of data, and thousands of places to pull it from. An effective application leverages different APIs to maximize its power. This can include:
  • Internal data belonging to the business, both current and historical. This data can “live” in modern systems that already feature APIs, or sit buried and hidden in legacy databases, where an API makes important historical trends and details available again.
  • External data sources, including:
  • Real-time data about financial markets.
  • Information about the weather in various geo-locations.
  • Traffic data from cities, highways, and rail lines.
  • Births and deaths. Marriages and divorces.
Modern applications can search, filter, and combine this data. They can access multiple data sources at a time, and the application’s utility and power grows. No longer is a firm required to build out massive data sets itself. Once someone collects and publishes the data, the cost is never incurred again. For example, look at the Google Maps API. The cost of mapping every road, location, and border in the world is astronomical. But now that it’s done, we don’t need to do it again.

With API integration, knowing what already exists can accelerate solving a business problem. Oftentimes there is no need to re-invent the wheel: simply pull the data from an existing API. API directories can assist with this discovery. Like a phone book, they allow browsing for the right service. They provide thorough documentation and tutorials. Client libraries allow rapid consumption of APIs. Discussion boards offer a community of help to aid in getting started. With these tools available, developers stand on the shoulders of giants.

On the flip side, a business can choose to expose its own valuable information in a public API. By hosting an API, a business can provide its data to others. This can drive traffic to its site and help build its reputation. If the information is valuable enough, the company can charge for access to the API. Suddenly, what was once a cost can become a revenue stream! Plugging the API into an API directory advertises the business to other developers. These actions can have a multiplier effect on the value of the data. Consumers can feed back into the system, improving the data.


If you’re sold on the idea of using APIs to power your business, what’s next? If your business sits upon a large number of separate data stores, the chore can seem daunting. Developing an API layer for each system could consume thousands of developer hours. Time and effort better spent elsewhere. Money better spent elsewhere. Wouldn’t it be nice to generate these API layers? Some sort of code generation mechanism that made all that data available in a modern REST API?

Friends, that day is upon us! The good folks at DreamFactory have built a system that does exactly that. Start with some simple configuration. Then, with a few clicks of a mouse, DreamFactory generates powerful APIs. Feed it your database connections and legacy SOAP APIs, and out comes a robust REST API, complete with sparkling documentation. It looks like a team of developers took months to create, and it is available to you in minutes.

Watch this video, and prepare to breathe a sigh of relief. Slash your budget estimates. DreamFactory has lifted a large cost of software development off your plate!

MS SQL Server vs MySQL – Which Reigns Supreme?

RDBMS databases form the backbone of most business software solutions. When people discuss SQL (Structured Query Language), it’s in reference to an RDBMS. Applications store all their important data there, and the databases (usually) power all the searches. A good database can bring a system to a higher level. A bad database can bring a business to its knees. For any developer or enterprise embarking on a new software venture, one big question is "which database vendor should I use?". In the early days of computing, database vendors such as IBM and Oracle reigned supreme. That has changed in recent years. MySQL (an open source solution now owned by Oracle) and Microsoft’s SQL Server have gained market share. According to a 2018 StackOverflow survey, they hold the top two rankings in SQL database usage. But which one is best for YOUR business? MySQL vs SQL Server presents a tough and complicated decision!

Which to Choose?

To select the best database solution for the task at hand, one must weigh several factors, including:
  • Operating System
  • Cost
  • Cloud Support
  • Performance
  • Tool Support
This article will compare and contrast these decision points for MySQL and SQL Server. Armed with these details, hopefully you will be better positioned to make this important decision.

Operating System

Most companies have already invested time, money, and expertise in their computing infrastructure. This includes their choice of Operating System (OS). Usually that choice is "Windows vs Linux" (although cloud computing is beginning to change that). When selecting a database to power your business, the OS your company is already using is a big deciding factor. Here’s how that looks for MySQL vs SQL Server:


MySQL

MySQL runs on virtually all major operating systems, including Linux, MacOS, and Windows. While traditionally associated with Linux (as part of the famed LAMP stack), it will run on Windows as well.

SQL Server

SQL Server was originally written for the Microsoft Windows operating system. In recent years, Microsoft has made strides in embracing the open source community, and providing support for both Linux and Mac OS. The most recent versions of SQL Server run on Linux natively, and will run on Mac OS within a Docker container.

Advantage – It Depends

Honestly, this one depends on what OS your company is already using. While both platforms support the two major operating systems, there are "home court advantages" to each. If you’re already a Windows and .Net shop, it probably makes sense to use SQL Server. If you’re a Linux and Python/Java/PHP shop, MySQL might be the better choice.


Cost

Cost is always a factor when making decisions about software, and an enterprise-grade database can be one of the biggest expenses. Both solutions offer a "free" tier. From there, the price depends on how powerful a database you need, and what sort of support you’re looking for. It may be tempting to try to save money and go for the free tier. But if the database is mission critical, paying for advanced monitoring, backup, and support is probably worth the cost. Here’s the breakdown:


MySQL

MySQL’s free offering is the MySQL Community Edition. It boasts a decent number of the standard features. This would work fine for a developer learning the platform, and it should also meet a smaller system’s needs. For a more complete feature set (as well as Oracle support), you need to shell out some bucks. According to recent pricing, this can run you anywhere from $2k to $10k per server, annually. There are three tiers (Standard Edition, Enterprise Edition, and Cluster CGE); choosing between them largely depends on the complexity and scale of your data needs.

SQL Server

SQL Server’s free offering comes in two flavors – here’s how Microsoft describes them:
  • Developer – "Full-featured version of SQL Server software that allows developers to cost-effectively build, test, and demonstrate applications based on SQL Server software."
  • Express – "Free entry-level database that’s ideal for learning, as well as building desktop and small server data-driven applications of up to 10 GB."
In a nutshell, Developer edition gives you everything you need, as long as you’re not using it in production. Express has a smaller feature set, but its license allows for production use. Like MySQL, if your business needs and scale are smaller, Express may do the trick. If you need a more robust feature set, you’re going to have to pay for it. According to Microsoft’s pricing page, you can pay anywhere from $931 to $14,256 per core. That is a wide range, and your business needs will dictate how much power you need.

Advantage – It Depends

Once again, the best choice here depends on the needs of your business. Both solutions offer a free tier. Both have complicated pricing schemes beyond that. Consult with the sales department of each to get a final determination of what you need, and what you would end up paying.

Cloud Support

In recent years the computing landscape has undergone a dramatic transformation. Cloud computing is all the rage. The "Big 3" providers are currently Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. Each offers robust services, such as storage, computing, and yes, SQL databases. This revolution has impacted the first two bullet points of this article (OS and Cost). The cloud provider manages the OS and server complications, and offers "pay as you go" plans to avoid major up-front costs. In a way, this shift has diminished the importance of OS and cost. Instead, other considerations such as performance, tool support, and feature set are bigger factors. Here’s how the offerings stack up:


MySQL

All three of the "Big 3" cloud providers support MySQL: AWS with Amazon RDS for MySQL, Azure with Azure Database for MySQL, and Google with Cloud SQL for MySQL. Each service claims easy administration, high scalability, robust security, and pay-as-you-go pricing. This article offers an in-depth comparison of MySQL offerings across cloud providers. It does not attempt to compare pricing due to differences between the providers. It seems cloud pricing also falls into the "it depends" category: there is no "one size fits all" answer. The best approach might be to first create MySQL environments in several clouds, load test typical usage for your business operations, and determine how the different costs shake out.

It is worth noting that Oracle (owner of MySQL) ALSO features a cloud offering for MySQL. This might be worth exploring due to Oracle’s "native" support of MySQL. However, a SQL database is only one piece of a software architecture landscape. A system still needs storage, computing, and security services, and Oracle is not currently a market leader in providing these. For that reason, Oracle’s cloud may be a risky choice for hosting MySQL.

It is ALSO worth noting that all the cloud providers offer Virtual Machine services, upon which you can run your own MySQL instances. This is an option for customers that want more control over their databases, though it requires more expertise (and is more expensive).

SQL Server

Similar to MySQL, each major cloud provider has a SQL Server offering:
  • AWS offers SQL Server on their Relational Database Service.
  • Azure offers SQL Server on their SQL Database service. While SQL Server runs under the covers, the SQL Database offering abstracts much of the server administration away from the end user.
  • Google offers SQL Server on their Google Cloud Platform offering.
An interesting twist here is that one of the major cloud providers (Microsoft) is also the creator of SQL Server. While all 3 providers offer strong choices, there’s a sense of a "home-court advantage" with Microsoft. Like MySQL, you could also pay to host Windows VMs in the cloud, and self-host SQL Server. This also comes with the same expertise requirements and additional cost concerns.

Advantage – SQL Server (SQL Database)

While either solution works as a cloud offering, the combination of Microsoft Azure and SQL Database is hard to beat. If you are ALREADY using another provider, or have ALREADY invested in MySQL, then that would still probably be your choice. However, coming into a green-field decision, the Azure/SQL Database choice is pretty compelling.


Performance

Database performance is crucial to any software application. If the database doesn’t respond in an expedient fashion, the entire system bogs down. This leads to issues like poor user experience, delays in operations, and lost money. Database performance depends on an IMMENSE number of variables. Slight differences in workloads can skew advantages one way or another. Minor tweaks can improve results. A well-designed database is worth its weight in gold. MySQL and SQL Server both tout extensive performance and scaling capabilities. After scouring the web for comparisons between the two, SQL Server seems to have the advantage. An additional consideration is that MySQL is Oracle’s "entry level" database. For high performance needs, Oracle would steer you towards their flagship database offering. On the other hand, SQL Server IS Microsoft’s flagship offering.

Advantage – SQL Server

While not a slam dunk, SQL Server’s slightly better numbers, and "flagship" status give it a slight advantage here.

Tool Support

In order to work with a database, one needs a good toolset. The database itself is a background process without a GUI. However, in order to develop and support the database, you need to interact with it. Both MySQL and SQL Server provide front end clients for this purpose.


MySQL

MySQL’s client application is MySQL Workbench. Workbench has offerings that run on Windows, Linux, and MacOS. It offers several important database management tools, including:
  • Database connection and management
  • SQL editor and execution
  • Database and Schema modeling GUI
  • Performance monitoring and query statistics

SQL Server

SQL Server’s client application is SQL Server Management Studio (SSMS). While SQL Server runs on Windows, Linux and MacOS (via Docker), SSMS is ONLY available on Windows machines. Note that Microsoft provides a Visual Studio Code extension to execute SQL from a Linux-based machine. SSMS has a more robust feature set than MySQL Workbench. This includes:
  • More extensive Database management tools. Includes a robust set of security, reporting, analysis, and mail services.
  • A powerful execution plan visualizer. This allows easy and fast identification of performance bottlenecks.
  • Integrated Source control.
  • Real-time activity monitor with filtering and automatic refresh.

Advantage – SQL Server

Both offerings provide "the basics" (ability to execute SQL and view/manage databases), but the SSMS experience is far superior. Seasoned Database Administrators (DBAs) may wish to manage their databases with scripts and SQL. But many users want a simple GUI to perform these tasks. This is an area where SSMS shines. Also, the execution plan visualizer makes performance bottlenecks easy to fix. That can pay for itself time and time again.

Language Support

Both platforms utilize SQL to interact with their schema and data (with some minor differences). However, they differ when it comes to runtime languages interfacing WITH the database. For example, in a typical server architecture, you might have:
  • Database – SQL reads/writes data
  • App Server – C++/PHP/Perl/Python/.Net/Java provide business logic, and interface with database
Here’s some of the differences to consider between the two systems:
  • SQL Server supports T-SQL, a proprietary extension to SQL. This enables concepts such as Procedural Programming, local variables, string/data processing functions, and FROM clauses in UPDATE/DELETE statements. Basically, you can do more with your SQL.
  • Runtime languages – both systems support connecting using the major programming languages (C#, Java, PHP, C++, Python, Ruby, Visual Basic, Delphi, Go, R). There are some articles on the web claiming that less-popular languages such as Eiffel are only supported on MySQL, but as long as you can connect using ODBC, both databases are available.
  • If using a .Net language (C#, F#, Visual Basic, etc), once again Microsoft provides a "home-court advantage". Microsoft wrote the ADO.Net library for SQL Server first. ADO.Net works with MySQL, but it really shines with SQL Server.
  • SQL Server also provides the additional (and possibly controversial) mechanism of invoking .Net code FROM a stored procedure. This can be a powerful mechanism for injecting all sorts of functionality within your database. It also allows you to shoot yourself in the foot. Proceed with caution here.

So Which To Choose?

Obviously there is a great deal of information to unpack here. The "it depends" caveat still looms large over the entire decision process. A general rule of thumb for approaching this decision might be:
  • If you are a Linux shop, already using pieces of the LAMP stack, MySQL would fit in nicely.
  • If you are a Microsoft shop, already invested in .Net and the Windows ecosystem, SQL Server seems like the obvious choice.
  • If you are completely green field, or looking to make a clean start, the evidence above leans towards SQL Server. Microsoft is building momentum in the cloud arena with Azure’s SQL Database. They are continuing to embrace other ecosystems (e.g., Linux) and open source. And SQL Server features a better toolset, the more robust T-SQL, and arguably better performance.


With your database decision made, what’s next? Most business applications are built in rough layers: a database at the bottom, an application/API layer in the middle, and a GUI on top. Wouldn’t it be nice to knock out the work in that middle layer automatically? Some sort of code generation mechanism that made all the database information instantly available for a GUI to consume?

Friends, that day is upon us! The good folks at DreamFactory have built a system that does just that. With some configuration, and a few clicks of a mouse, DreamFactory will turn your database objects into a REST API. They support all sorts of databases (MySQL, SQL Server, and a long list of others). They even auto-generate the documentation. Watch this video, and prepare to breathe a sigh of relief. DreamFactory just removed a big piece of heavy lifting from your plate!

SOAP vs. REST APIs: Understand the Key Differences


These days, it’s more true than ever that “no company is an island.” From social logins to selling their data, many businesses rely on each other by exchanging information over the Internet, and much of that exchange is done via an API. When delving deeper into the question of developing APIs, you’ll undoubtedly encounter the question: SOAP or REST? Although REST APIs have become the most popular choice for today’s businesses, the decision isn’t always an easy one. In this article, we’ll go over everything you need to know about SOAP and REST APIs, so that you can come to the conclusion that’s ultimately right for your situation.

What is an API?

There are a lot of definitions of the term “API” out there, and many can leave you feeling more confused than before you read them. At its core, an API (application programming interface) is a way for you to get the information you need from a website in a consistent format. You can think of an API as an interaction between a business and a customer, such as placing an order at a restaurant or getting cash from an ATM.
  • Customers first read the menu or the ATM screen. Then, they decide what food they would like to order, or what transaction they would like to select.
  • The waiter or ATM serves as the “middleman” between the customer and the business. They take the request from the customer and present it to the business in the way that’s most comprehensible and efficient.
  • The business reviews the request and sends back a response to the customer, such as a plate of food or the customer’s account balance.
It’s important to note that in both of these examples, the interaction is entirely predictable. When customers go to a restaurant, they can assume that they’ll be presented with a menu, use that menu to place an order, and receive the food that they ordered. Meanwhile, most ATMs have a similar user interface that customers can easily navigate in order to withdraw money and check their balance. In the same way, APIs offer consistency and regularity to users who want to query a website for its data. By establishing a common set of rules for exchanging information, APIs make it easier for two parties to communicate.

Suppose that you want to download 100 different articles from Wikipedia. You’d also like to know the date that each page was created, and which other Wikipedia pages link to that page. The good news is that you don’t have to visit each page individually and compile this information yourself. Wikipedia offers an API through which it can deliver this data (and more) to the user. You include the name of the article in your API request, and then you can parse the content of the API response to get what you’re looking for: the article text, the creation date, and the list of other pages.

Some organizations offer their APIs as a product that other businesses can purchase, such as the commercial weather service Weather Underground. The company sells access to its complete weather data and forecasts in the form of an API. Its customers can use that data for their own business purposes and in their own products.
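To make the Wikipedia example concrete, here is a sketch of assembling such a request in Python. The endpoint and the `action`/`prop`/`titles`/`format` parameters come from the public MediaWiki API; the function name is ours, and actually fetching and parsing the response is omitted so the sketch stays self-contained:

```python
from urllib.parse import urlencode

# Base endpoint of the MediaWiki API (the API behind Wikipedia).
BASE = "https://en.wikipedia.org/w/api.php"

def article_info_url(title):
    """Build a request URL asking for metadata about one article."""
    params = {
        "action": "query",   # query page data
        "prop": "info",      # page metadata
        "titles": title,     # the article name goes straight into the request
        "format": "json",    # ask for a JSON response
    }
    return BASE + "?" + urlencode(params)

print(article_info_url("Python (programming language)"))
```

Fetching that URL with any HTTP client returns a JSON document you can parse for the fields you care about.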

What is a SOAP API?

SOAP (which stands for Simple Object Access Protocol) is an API protocol that uses the XML Information Set specification in order to exchange information. A standard SOAP message consists of the following XML elements:
  • An Envelope element that identifies the document as a valid SOAP message.
  • An optional Header element that specifies additional requirements for the message, such as authentication.
  • A Body element that contains the details of the request or response.
  • An optional Fault element that contains information about any errors encountered during the API request and response.
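The Envelope/Body nesting described above can be sketched in a few lines of Python using the standard library’s ElementTree. The `GetWeather` payload below is purely hypothetical; only the envelope structure and namespace are from the SOAP specification:

```python
import xml.etree.ElementTree as ET

# Build a skeletal SOAP message: an Envelope wrapping a mandatory Body.
# The payload element (GetWeather) is hypothetical, for illustration only.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("SOAP-ENV", SOAP_NS)

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
request = ET.SubElement(body, "GetWeather")      # hypothetical operation
ET.SubElement(request, "city").text = "Cleveland"

xml_bytes = ET.tostring(envelope)
print(xml_bytes.decode())
```

Every SOAP request and response follows this same wrapping pattern; only the contents of the Body (and optional Header) change.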
An example SOAP request for the weather.gov API might look like this:
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
   xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <SOAP-ENV:Body>
    <ns8023:NDFDgenLatLonList xmlns:ns8023="uri:DWMLgen">
      <listLatLon xsi:type="xsd:string">39.965506,-77.997048 39.916268,-77.947228</listLatLon>
      <product xsi:type="xsd:string">time-series</product>
      <startTime xsi:type="xsd:string">2004-01-01T00:00:00</startTime>
      <endTime xsi:type="xsd:string">2012-02-12T00:00:00</endTime>
      <Unit xsi:type="xsd:string">e</Unit>
      <weatherParameters>
        <maxt xsi:type="xsd:boolean">1</maxt>
        <mint xsi:type="xsd:boolean">1</mint>
      </weatherParameters>
    </ns8023:NDFDgenLatLonList>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
This simple SOAP API request asks for the minimum and maximum temperatures for two locations in Pennsylvania between 2004 and 2012. As you can see, it contains a base Envelope element, which itself contains a Body element with the details of the request:
  • The “ns8023:NDFDgenLatLonList” element, which contains the latitudes and longitudes of the two locations.
  • The “startTime” and “endTime” elements, which denote the time boundaries of the request.
  • The “weatherParameters” element, which denotes the information that we are interested in seeing (here, the maximum and minimum temperatures).

What is a REST API?

REST (which stands for Representational State Transfer) is an architectural style for APIs that relies on the HTTP protocol and JSON data format to send and receive messages. Let’s use the example of the API for the copywriting marketplace Scripted.com. If you want to get all of the writing jobs that a particular business has ordered, for example, then you would make the following REST API request:
GET https://api.scripted.com/abcd1234/v1/jobs
where “abcd1234” is replaced with a key that is unique to the organization. With REST APIs, the details of the request, such as the resource type (jobs) and the organization (abcd1234), are explicitly embedded in the URL itself, rather than being wrapped in an XML document like we saw with SOAP. REST APIs typically send back data in JSON format rather than XML. The corresponding JSON response would look something like:
HTTP/1.1 200 OK
{
  "id": "5654ec06a6e02a37e7000318",
  "topic": "Where to Buy an Orangutan",
  "state": "copyediting",
  "quantity": 1,
  "delivery": "standard",
  "deadline_at": "2015-12-04T01:30:00Z",
  "created_at": "2015-11-24T23:00:22Z",
  "content_format": {
    "id": "5654ec02a6e02a37e70000d5",
    "name": "Standard Blog Post",
    "pitchable": true,
    "length_metric": "350-450 words"
  },
  "pricing": {
    "total": 9900
  },
  "writer": {
    "id": "5654ec01a6e02a37e700003b",
    "nickname": "Bob L."
  },
  "document": {
    "id": "5654ec06a6e02a37e700031a",
    "type": "Document"
  }
}
Here, the HTTP response body contains details about the requested job: for example, the topic of the article, the length of the article, and the writer assigned to the article.
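Consuming such a response is straightforward in most languages. A small Python sketch, with the body abridged to a few of the fields from the example above:

```python
import json

# A trimmed version of the example JSON response body shown above.
body = """
{
  "id": "5654ec06a6e02a37e7000318",
  "topic": "Where to Buy an Orangutan",
  "state": "copyediting",
  "content_format": {"name": "Standard Blog Post", "length_metric": "350-450 words"},
  "writer": {"nickname": "Bob L."}
}
"""

# json.loads turns the text into plain dicts and lists; no schema needed.
job = json.loads(body)
print(job["topic"], "by", job["writer"]["nickname"])  # → Where to Buy an Orangutan by Bob L.
```

This is a large part of REST’s appeal: the response maps directly onto the native data structures of the consuming language.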

The Pros and Cons of SOAP and REST

When comparing REST and SOAP, people often use the analogy of a postcard and an envelope. REST is like a postcard in that it’s lightweight and consumes less bandwidth (paper). Meanwhile, SOAP is like an envelope: there’s extra overhead required on both ends to package and unpackage it. Note that the analogy isn’t perfect: unlike a postcard, the content of REST requests and responses isn’t (necessarily) insecure. Instead, REST uses the security of the underlying transport mechanism, which is usually HTTPS. SOAP, on the other hand, implements its own security measure, known as WS-Security.

Some people believe that REST is largely a “replacement” for SOAP, due to its lower overhead and improved ease of use. According to Cloud Elements’ 2017 State of API Integration report, 83 percent of APIs now use REST, while only 15 percent continue to use SOAP. Some of these businesses primarily use REST, but continue to integrate SOAP APIs into their projects using tools such as DreamFactory’s SOAP connector. However, this conception of SOAP as outmoded isn’t quite accurate. Even as REST becomes the API style of choice for most businesses, SOAP remains better suited for certain use cases, mainly in large enterprises that need the additional extensibility and logic features native to the protocol. The advantages of REST include:
  • Flexibility: Although REST is most commonly implemented with HTTP and JSON, developers are by no means obligated to use them. Websites can send back responses using data formats including JSON, XML, HTML, or even plain text, whatever best suits their needs.
  • Speed: Because they tend to carry much less overhead, REST APIs are typically significantly faster than SOAP. While the difference might be imperceptible for a single request, the disparity grows as you place more and more requests.
  • Popularity: REST has reached critical mass on the Internet. Major websites such as Google, Twitter, and YouTube all use REST APIs for users to send and receive messages. Due to this familiarity, it’s typically easier for developers to get up and running with REST.
  • Scalability: Thanks to their speed and simplicity, REST APIs usually perform very well at scale.
Despite the major benefits of using REST, SOAP remains the preferred protocol in certain use cases. Some organizations find that SOAP offers the transactional reliability that they’re looking for, while others simply continue to use SOAP because they need legacy system support. The advantages of SOAP include:
  • Formality: SOAP can use WSDL (Web Services Description Language) to enforce the use of formal contracts between the user and the website. SOAP is also inherently compliant with ACID database standards, which ensures that the transactions it performs will be valid even in the event of errors or hardware issues.
  • Logic: If a REST API request is unsuccessful, the client can only address it by retrying until the request goes through. SOAP, on the other hand, includes built-in success/retry logic, so the requesting system knows how to behave.
  • Security: SOAP comes with its own security mechanism, WS-Security, built into the protocol. If you want to ensure that your messages are secure, rather than relying on the underlying transport mechanism as does REST, then SOAP may be the right choice.
  • Extensibility: In addition to WS-Security, SOAP includes support for other protocols such as WS-Addressing and WS-ReliableMessaging that can define other standards of communication and information exchange.
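For contrast, even a trivial SOAP request must be wrapped in the protocol’s mandatory XML envelope. A minimal sketch using Python’s standard library; the GetCustomer operation and its namespace are hypothetical, though the envelope namespace is the standard SOAP 1.1 one:

```python
import xml.etree.ElementTree as ET

# Standard SOAP 1.1 envelope namespace.
SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

# Every SOAP message carries an Envelope/Body wrapper; the
# GetCustomer operation below is a made-up example service.
envelope = ET.Element(f"{{{SOAP_ENV}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_ENV}}}Body")
op = ET.SubElement(body, "{http://example.com/crm}GetCustomer")
ET.SubElement(op, "{http://example.com/crm}id").text = "42"

message = ET.tostring(envelope, encoding="unicode")
print(message)
```

The packaging overhead is visible even in this toy case: the actual payload (a single id) is dwarfed by the envelope around it, which is the “postcard versus envelope” trade-off in miniature.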

Final Thoughts

For most cases, REST should be considered the “default” option as adoption continues to grow across the web. Most public-facing APIs now use REST, because it consumes less bandwidth and its compatibility with HTTP makes it easier for web browsers to use. However, you may find that the additional features and security offered by SOAP are enough to sway your decision. In the end, the “right” choice between SOAP and REST will depend heavily on your own situation. Better still, the choice doesn’t have to be one or the other. If you want to communicate with REST but still need access to legacy SOAP services, DreamFactory offers the ability to add a REST API onto any database or SOAP API. Reach out to us today to get a free demo from our team of API experts.

AWS Redshift – SQL Functionality on Planet-Scale Hardware

The Problem

Your manager’s peers have been bragging a lot lately about their data warehouses, analytics, and charts, and now a steady stream of data-related questions are being sent your way.  Your department maintains several databases, and the data they contain has the potential to answer everything management is asking for. But the databases are needed for day-to-day operations, and can’t scale to answer these often highly specific questions such as, “How many asparaguses were consumed by men named Fonzie in Cleveland on Tuesdays in 2013?”. How to unlock the potential of this data?

You’ve probably heard of data warehouses, which are tailor-made for this sort of witchcraft. They make it possible to unlock every bit of value from data, and find answers wickedly fast. In the past, creating and maintaining data warehouses meant large, ongoing investments in hardware, software, and people to run them. This would be a hard sell – isn’t the company already spending enough?! Good news, however! In this day of cloud computing, it’s incredibly simple to create, load, and query data warehouses. They typically charge on a usage basis, meaning you don’t need the initial upfront capital investment to get off the ground. And they are super fast – far more powerful than anything you could run in-house.

This post will focus on Amazon Web Services (AWS) Redshift. And as a bonus, I’ll demonstrate the incredible DreamFactory, which automatically builds a slick REST API interface over the top. From there, you’re a GUI away from giving management everything they could ask for, and wowing them with extras they hadn’t even thought of. They can now stand tall amongst their fellow executives, knowing you have their back.

AWS Redshift

AWS Redshift is built upon PostgreSQL, but has been dramatically enhanced to run at “cloud scale” within AWS. There are a few ingredients to this secret sauce:

Column-oriented storage

While you don’t need a deep understanding of what’s happening under the hood to use it, Redshift employs a fascinating approach to achieve its mind-boggling performance. Let’s say you have data that looks like the following:

ID  NAME    DATE        DESCRIPTION  AMOUNT
1   Harold  2018/01/01  Membership   10.00
2   Susan   2017/11/15  Penalty      5.00
3   Thomas  2016/10/01  Membership   8.00
Most SQL databases you’ve probably used in the past are row-based, which means they store their data something like this:


This is an efficient way to maximize storage, and it works well for retrieving data in the “traditional” fashion (a row at a time). But when you want to slice and dice this data, it doesn’t scale very well. If you’ve got large (business-scale) volumes of data, and a variety of ways you want to query it, you can really start to strain your database. Column-based databases, on the other hand, flip this idea on its head, and store the information in a column-based format, with the data serving as the key. So the above might look something like this:



This drastically improves query performance. For example, when searching for “DESCRIPTION == ‘Membership'”, the query only needs to make one database call (“give me the items with a ‘DESCRIPTION’ of ‘Membership'”), instead of inspecting each row individually (as it would have to do in a traditional, row-based database). Very cool, very fast!
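The row-versus-column trade-off is easy to see in code. In this toy model (not Redshift’s actual storage engine), row storage is a list of records while column storage keeps one array per field; a filter on DESCRIPTION then only ever touches the DESCRIPTION array:

```python
# Toy model of the two layouts -- not Redshift's real storage engine.
row_store = [
    (1, "Harold", "2018/01/01", "Membership", 10.00),
    (2, "Susan",  "2017/11/15", "Penalty",     5.00),
    (3, "Thomas", "2016/10/01", "Membership",  8.00),
]

# Column store: one array per field, aligned by position.
col_store = {
    "ID":          [1, 2, 3],
    "NAME":        ["Harold", "Susan", "Thomas"],
    "DATE":        ["2018/01/01", "2017/11/15", "2016/10/01"],
    "DESCRIPTION": ["Membership", "Penalty", "Membership"],
    "AMOUNT":      [10.00, 5.00, 8.00],
}

# Row store must walk every row, touching every field along the way...
row_hits = [r[0] for r in row_store if r[3] == "Membership"]

# ...while the column store scans a single, compact array.
col_hits = [col_store["ID"][i]
            for i, d in enumerate(col_store["DESCRIPTION"])
            if d == "Membership"]

assert row_hits == col_hits == [1, 3]
```

At three rows the difference is invisible, but at billions of rows, scanning one tightly packed column instead of dragging every field off disk is where the speedup comes from.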

Massive Parallelization

When I picture what the AWS cloud must look like, I usually conjure something up from the Matrix (except it’s full of regular computers, rather than, well, humans). Or maybe Star Trek’s “Borg”, a ridiculous planet-cube flying through space, sucking up other civilizations. I guess both of those images are a little disturbing. A safer mental image is this: data centers spanning the globe, loaded with racks and racks of computers, all connected and working together.

For most computing tasks, throwing more hardware at the problem doesn’t automatically increase performance. There are bottlenecks that remain in place no matter how many processors are churning away. In our “traditional database” example, this bottleneck is typically disk I/O: the processors are all trying to grab data from the same place. To overcome this, the architecture and storage have to be arranged in a way that can benefit from parallelization. Which is exactly the case with AWS Redshift. Thanks to the column-based design described above, Redshift is able to take full advantage of adding processors, and it’s almost linearly scalable. This means if you double the number of computers (“nodes”, in Redshift-speak), the performance doubles. And so on. Combine this scalability with the ridiculous number of computers AWS has at its disposal (specifically, several Borgs-worth), and it’s like staring out at a starry night. It goes on forever in all directions.
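The near-linear scaling described above boils down to partitioning: each node aggregates its own slice of a column independently, and a coordinator combines the partial results at the end. A single-machine sketch of that pattern, using a thread pool purely to illustrate the shape of the computation:

```python
from concurrent.futures import ThreadPoolExecutor

# One column of data, split across four "nodes" (partitions).
amounts = list(range(1, 1001))
nodes = 4
partitions = [amounts[i::nodes] for i in range(nodes)]

def node_sum(partition):
    # Each node works only on its own slice -- no shared bottleneck.
    return sum(partition)

# Run the per-node aggregations concurrently.
with ThreadPoolExecutor(max_workers=nodes) as pool:
    partials = list(pool.map(node_sum, partitions))

# The coordinator combines the per-node partial results.
total = sum(partials)
assert total == sum(amounts)  # 500500
```

Because no partition ever needs another partition’s data, doubling the node count halves each node’s workload, which is exactly why adding Redshift nodes scales so cleanly.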

How this works for you

If you’re sold on the power of AWS Redshift, then you’ll be pleased to learn that setup is incredibly simple. AWS documentation is top notch, a crucial thing in this brave new world. When writing this post, I followed their tutorial, and it all went smoothly. It probably took me 15 minutes, and I had the example up and running. If you already have SQL expertise, you won’t have any problem picking up Redshift syntax. There are some differences and nuances, but the standard things (joins, where clauses, etc.) all work as expected. I typically use Microsoft’s SQL Server Management Studio (SSMS), and was able to connect to Redshift with no problem (after setting it up as a linked server). Your favorite SQL client will presumably work here as well (anything that supports JDBC or ODBC drivers).

Once you get your feet wet, there are myriad tools that will load your business data into Redshift. If you’ve got SQL chops in-house, I’d start with the AWS documentation and go from there. If you need a little (or a lot of) help, a whole ecosystem of companies and tools has sprung up around Redshift. A quick Google search will introduce you to them.

When you’re up and running, and growing more comfortable demanding more from the system, AWS makes it incredibly simple to add capacity. Thanks to the brilliant Redshift architecture, you just add nodes, and AWS takes care of the rest. Their billing dashboard will show you what it’s costing in real time, with no hidden or creeping costs of data centers, hardware upgrades, things going bump in the night, etc. So much magic happening under the covers, and you get the credit. The joys of cloud computing!

My Humble Example

When writing this, I used the example AWS provides (it consists of a few tables containing some fake Sales data). With everything in place, I can query from SSMS (with a little bit of “linked server” glue syntax):
exec ('-- Find total sales on a given calendar date.
SELECT sum(qtysold)
FROM sales, date
WHERE sales.dateid = date.dateid
AND caldate = ''2008-01-05'';') at redshift

(1 row affected)
I get a thrill when a chain of systems, architectures, and networks all flow together nicely. Somewhere in a behemoth of a data center, a processor heard my cry, and spun out this result in response. Amazing.


Now that the company has access to the data, and can gleefully ask any question, they are going to want the dashboards and pretty graphs. Typically you’d use a REST API to feed the data to some sort of UI, but how to do this with Redshift? While management is tickled with their new toy, they will cloud over with suspicion if you now propose a months-long project to make it shinier.

In keeping with the theme of “easy, automatic, and powerful”, I’d propose using DreamFactory. In a matter of minutes (literally), it will connect to a data store (SQL or NoSQL), intelligently parse all the schema, and spin up a REST API layer for doing all the things (complete with attractive documentation). What used to take a team of developers months can now happen in an afternoon!

Here are some screenshots of my REST API, completely auto-generated from the Redshift example above. It took me about 15 minutes (12 of those spent poking around the documentation) to get this done. For my simple example, I followed their Docker instructions, and in no time was playing with the REST API depicted below:
Let’s get our REST on!
What pretty documentation you have!
Powerful stuff!
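Once DreamFactory has generated the API, querying Redshift becomes an ordinary REST call from any client. Here’s a sketch using Python’s standard library; the host, service name, table, and key below are placeholders you’d replace with your own instance’s values:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Placeholders -- substitute your own instance, service, and key.
HOST = "https://df.example.com"
SERVICE = "redshift"
API_KEY = "YOUR_API_KEY"

# DreamFactory exposes database tables under
# /api/v2/<service>/_table/<table>, with optional filters.
params = urlencode({"filter": "description='Membership'"})
req = Request(
    f"{HOST}/api/v2/{SERVICE}/_table/sales?{params}",
    headers={"X-DreamFactory-API-Key": API_KEY,
             "Accept": "application/json"},
)
print(req.full_url)
# The request is only built here; pass it to
# urllib.request.urlopen(req) against a live instance.
```

That one URL-plus-header pattern is all a dashboard or charting tool needs, which is what makes feeding the pretty graphs so quick.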

To Infinity and Beyond!

Now that you’ve witnessed how easily you can warehouse all your data, and bootstrap it into a REST API, it’s time to bring this to your organization. Play with it a little, get comfortable with the tools, then turn up the dials. Want to learn more about how DreamFactory and Redshift can work together (or how to put a REST API on any database)? Schedule a demo with us. The next time management comes calling for data, you can give it to them with a fire hose!