{ DreamFactory: 'Blog' }

The importance of loose coupling in REST API design

Posted by Bill Appleton

Fri, Jun 24, 2016

One of the most important ideas in the world of software engineering is the concept of loose coupling. In a loosely coupled design, components are independent, and changes in one will not affect the operation of others. This approach offers optimal flexibility and reusability when components are added, replaced, or modified. Conversely, a tightly coupled design means that components tend to be interdependent. Changes in a single component can have a system wide impact, with unanticipated and undesirable effects.
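In code terms, loose coupling is what interface-based design gives you: the consumer depends on a contract, not on any particular implementation. Here is a minimal TypeScript sketch of the idea (the names are illustrative only, not from any DreamFactory API):

```typescript
// A loosely coupled design: the client depends only on an interface,
// never on a concrete service implementation.
interface RecordStore {
  getRecords(table: string): string[];
}

// One concrete implementation; it can be swapped out (for a NoSQL store,
// a mock, etc.) without touching any client code.
class SqlRecordStore implements RecordStore {
  getRecords(table: string): string[] {
    return ['record from ' + table];
  }
}

// The client knows nothing about SqlRecordStore -- only the interface.
class ReportBuilder {
  constructor(private store: RecordStore) {}
  build(table: string): string {
    return this.store.getRecords(table).join(', ');
  }
}

const report = new ReportBuilder(new SqlRecordStore()).build('contacts');
console.log(report); // record from contacts
```

Replacing SqlRecordStore with another RecordStore implementation changes nothing in ReportBuilder, which is exactly the property a REST API platform gives you at the service boundary.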

The value of loosely coupled systems is widely recognized in the software world, but unfortunately most mobile applications end up being tightly coupled to the REST API services that they use. Each server-side API is often developed for a specific mobile application project. Each new custom application then requires another special-purpose REST API. In other words, the application and the service end up being tightly coupled to one another.

Developing a new REST API for every new project leads to backend complexity. Over time, a company can end up with infrastructure that is not portable, scalable, reliable, or secure. I have written about the problem of developing new REST APIs for every new project elsewhere, but now I think this warning should be even more strongly worded: companies should never develop a REST API for any specific application. Please read that again, it’s a game changer. You should never develop a REST API for any specific application. This practice almost always results in an application that is tightly coupled to a custom built service.

The best approach is to build a REST API platform that can be used and reused in a flexible manner for general-purpose application development. The advantages are enormous. For example, developers don’t need to learn a new API to develop a new application. The same APIs can be reused for many different purposes. The total number of services and endpoints is consolidated, improving security. Documentation, user roles, and API services become standardized, enhancing corporate governance and compliance.


When a mobile application is developed, there is usually a server-side team that builds the REST API and a client-side team that builds the application. The interaction between these two groups takes lots of time and money while they converge on an interface. In fact, Gartner estimates that 75% of the cost of a mobile project is related to backend integration. And for this reason, the biggest benefit of a loosely coupled REST API architecture is that the interaction between these two teams is minimized.

This is where the concept of a loosely coupled REST API platform really generates business value. Components that need to “know things” about each other are tightly coupled. Components that can operate independently and have a well-defined communication channel are loosely coupled. In the same manner, if your server-side team is deeply engaged with your client-side team, then they are tightly coupled as well. These two teams can end up spending lots of time playing an expensive game of API Ping-Pong instead of shipping new applications.

As a veteran software engineer, I find one aspect of this situation rather fascinating. Usually, loose coupling is just a best practice for object-oriented software design. If you leave some tightly coupled interfaces in the code somewhere, then the worst-case scenario is probably a few snarky comments from one of the other engineers over lunch. But in this situation, there are two distinct development teams and their interaction is defined by the REST API interface they are building. Bad software design infects their working relationship, and this has real world consequences in terms of time and money.

A platform approach to RESTful services changes all of this. The server-side team focuses on mobilizing data sources, connecting legacy services, and administering role based security for the platform. The front-end team then builds anything they want on their platform of choice. Problems are minimized because the developers automatically receive the services that they need. But what type of software can actually implement a system like this?

Imagine that a modern developer could log into a portal, select the type of application they want to build, and instantly get a comprehensive palette of REST API services designed for that purpose and vetted by their IT department. This is a tangible roadmap for the modern enterprise: embrace loosely coupled design, then take the vision further by combining secure administration with agile, platform-oriented application development.

The new DreamFactory Gold package provides this functionality. A company or service provider can host and manage hundreds or thousands of individual DreamFactory instances. Each one is a complete REST API development platform. Next, the administrators can define any number of pre-configured REST API packages for various purposes. Examples might include services for IoT, telephony, mobile applications, messaging, etc. These packages can include third party services like Stripe or Twilio, legacy SOAP services, and role-based access to any number of SQL or NoSQL databases. All a modern developer has to do is sign up, select a package, and start building the client application.

This is where DreamFactory is headed. For us, API automation means instantly providing a comprehensive, service-based environment for modern developers on demand. Use cases include exposing custom services to partners in ready-made development environments and jump-starting enterprise developers with pre-loaded, pre-approved palettes of API services. This exciting new technology makes the benefits of loosely coupled REST API platforms a practical reality for the modern enterprise.

 

Tags: REST API, DreamFactory API, Enterprise applications

Data wrangling with Angular 2 and DreamFactory

Posted by Andy Rai

Thu, Jun 23, 2016


A few days ago I posted about using angular2-auth-component for managing logins with Angular 2 and DreamFactory. Now we're going to focus on the new angular2-data-component, which handles data.

Background

The data service in DreamFactory allows you to manage your SQL and NoSQL data services. You can add, edit, update, or delete records using the data service. Angular2-data-component is a widget that lets you do all of that through a friendly UI. This component can also be integrated into your other projects with an npm install and a few lines of code.

Using the component

In order to communicate with the DreamFactory console, you need to embed the following configuration values in your index.html in a script tag.

window.instanceUrl = '';
window.appKey = '';
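In practice, the script tag in index.html might look like the sketch below. Both values here are placeholders, not real credentials; use your own instance URL and app key:

```html
<!-- index.html: configuration consumed by angular2-data-component -->
<script>
  window.instanceUrl = 'https://your-instance.example.com'; // hypothetical URL
  window.appKey = 'your-app-key-here';                      // hypothetical key
</script>
```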

These config values will be used by angular2-data-component to communicate with the DreamFactory console. Then a simple npm install will download the module into your node_modules folder, after which you can include and use all the components inside angular2-data-component.

npm install angular2-data-component --save

Import the DreamFactory data component and the necessary services. Let's call your root component AppCmp.

import {DfDataCmp, BaseHttpService} from 'angular2-data-component/index';

@Component({
  selector: 'app',
  templateUrl: './components/app/app.html',
  styleUrls: ['./components/app/app.css'],
  encapsulation: ViewEncapsulation.None,
  directives: [ROUTER_DIRECTIVES, DfDataCmp],
  providers: [BaseHttpService]
})

Now, in your HTML, you should be able to use the component.

 <df-data></df-data>

The data component will then be rendered in your application page.

Breakdown of the component

The component is internally composed of several smaller components:

filterbar: applies filters to the selected table
record: the form view for creating or updating a record
df-table: the table view for any resource
df-toolbar: toolbar with options like previous, next, filter, pagination, etc.

There is also a separate service class, DfService, which handles HTTP operations.

Each of the components mentioned above uses the new Angular 2 event emitters: a better way to notify the parent component about changes. This uni-directional flow of data can be clearly seen between the filterbar and df-table components. The use case is to refresh the table every time the filter changes. Since filterbar is a completely separate component, there has to be a way for it to communicate the filter form data to the table component, which will then update the results in the table. This is how you define event emitters in a component, in this case filterbar.

@Output() apply: EventEmitter = new EventEmitter();
@Output() remove: EventEmitter = new EventEmitter();

The apply emitter sends the filter data set by the user in the UI to the parent component, df-table. The actual emitting happens in the applyFilter function.

applyFilter() {
  if (!this.filterType || !this.filterValue) return;
  var filter = this.filterType + ' like "%' + this.filterValue + '%"';
  this.apply.next(filter);
}

The parent component will be set up with these tags, binding the child's apply event to a handler method:

<df-table>
  <filterbar [dataModel]="schema" (apply)="applyEvent($event)">
  </filterbar>
</df-table>

class DfTable {
  applyEvent(args) {
    // apply the filter and refresh the table
  }
}

Just as we have @Output for event emitters, we also have @Input, which is used to pass data from a parent component to a child component, somewhat like directives and scope in Angular 1.x. Consider the following example with filterbar and df-table.

The use case is to set the schema when it is fetched after the user selects a table in the parent component. The schema is therefore dynamic and is passed down to the child component, filterbar. In filterbar we have to define the @Input variables that will accept values from the parent component.

@Input() dataModel;

The parent component will pass the data like this:

<df-table>
  <filterbar [dataModel]="schema">
  </filterbar>
</df-table>

The value of schema is set by the df-table component. Since the data flow is uni-directional, changes in the filterbar component won't change the value of schema in the parent component, df-table.

There is also a sample repository that uses these components via npm install.

 

Tags: AngularJS, sample app, Tutorial, Angular 2

Using the new auth component for Angular 2

Posted by Andy Rai

Tue, Jun 21, 2016


I’ve been working on DreamFactory's Angular SDK, sample app, and documentation. It's now available on GitHub with some good examples and details. There are some important things to consider when using DreamFactory authentication with a custom Angular 2 component.

Background

Angular2-auth-component handles login, registration, and logged-in user profile management. It can be installed in any Angular 2 project using npm, and with a few additional lines of code you can have a working DreamFactory auth system in your app. The component is written in TypeScript, but this should not affect projects written in ES6; angular2-auth-component can still be included and used in any other project.

To get started with Angular 2, check out the getting started guide from angular.io. It has simple and useful examples for organizing code and creating components in Angular 2.

Using the component

In order to communicate with the DreamFactory console, you need to embed the following configuration values in your index.html in a script tag.

window.instanceUrl = '';
window.appKey = '';

These config values will be used by angular2-auth-component to communicate with the DreamFactory console. Then a simple npm install will download the module into your node_modules folder, after which you can include and use all the components inside angular2-auth-component.

npm install angular2-auth-component --save

Services and components available for use:

LoginCmp - DreamFactory login widget
RegisterCmp - DreamFactory register widget
ProfileCmp - DreamFactory logged-in user profile widget
ProfileService - HTTP service with methods to get and set the profile
BaseHttpService - a wrapper around the Angular 2 HTTP service; the application should use it to make any API call

Let's say, for example, that you want to use the Login component. The first thing to do is import the Login component and the necessary services. Let's call your root component AppCmp.

import {LoginCmp, BaseHttpService} from 'angular2-auth-component/index';

Then, in your main component, inject the services and use the component on the `/login` route.

@Component({
  selector: 'app',
  templateUrl: './components/app/app.html',
  styleUrls: ['./components/app/app.css'],
  encapsulation: ViewEncapsulation.None,
  directives: [ROUTER_DIRECTIVES],
  providers: [BaseHttpService]
})

@RouteConfig([
  { path: '/login', component: LoginCmp, as: 'Login' },
])

Alternatively, you can have your own login route and use the widget in your HTML.

 <df-login></df-login>

Make sure you mention LoginCmp in the directives list, like this:

@Component({
  selector: 'app',
  templateUrl: './components/app/app.html',
  styleUrls: ['./components/app/app.css'],
  encapsulation: ViewEncapsulation.None,
  directives: [ROUTER_DIRECTIVES, LoginCmp],
  providers: [BaseHttpService]
})

In a similar way, you can also use RegisterCmp, ProfileCmp, etc.

There is a separate service, ProfileService, which handles all the HTTP operations for the currently logged-in user's profile with the following methods:

get: Get the profile. Returns an object of the Profile class.
save: Save the profile. Requires an object of the Profile class.
resetPassword: Reset the password.

You can either build your own profile widget using the ProfileService methods, or use ProfileCmp in a route or as an HTML widget as shown in the LoginCmp example above.

There is also a sample repository that uses these components via npm install.

 

Tags: AngularJS, sample app, Tutorial, Angular 2

DreamFactory 2.2 released, includes important API changes

Posted by Ben Busse

Thu, Jun 2, 2016


We're excited to announce DreamFactory Version 2.2. There are a number of important design improvements to services, scripting, and system resources. But the biggest change is that APIs for commercial databases will no longer be open source.

New features in Version 2.2:
  • The following services are now under a commercial license and have been removed from the default open source installation:
    • Oracle
    • Microsoft SQL Server
    • IBM DB2
    • SAP SQL Anywhere
    • Salesforce.com
    • SOAP 
    • LDAP
    • Active Directory
  • Redesigned services and system resources management to be more flexible and dynamic. 
    • Now using ServiceProviders for all service type onboarding
    • New service type migration command for pre-2.2 database upgrade (php artisan dreamfactory:service-type-migrate), run after migration and seeding
    • SQL database driver types now available as their own service types ("sql_db" type retired)
    • Script languages now available as their own service types ("script" type retired)
    • Service Definition system now adds service name to all defined paths and tags automatically
    • Old service types converted to new format during import in packaging
    • Support for service definition (Swagger doc) on service import/export in packaging
  • Improved script engine features.
    • Added platform.api support for Node.js and Python scripting
    • Node.js scripts now allow returning output from async callback functions
    • Python script improvements: allow empty script, correcting script output
  • API Docs now support OpenAPI (Swagger) YAML format, as well as JSON.
  • Usability improvement: first admin user of the Admin Console automatically logged in on creation
  • Include predis/predis package by default for using Redis for caching
  • Include df-azure using microsoftazure/storage by default (prior releases used an SDK that required PEAR)
  • Now using Guzzle 6
  • Added laravel/homestead support for PHP 5.6 and PHP 7 for dev installs

 

Tags: NoSQL, DreamFactory API, SQL

Scaling DreamFactory with Docker

Posted by Arif Islam

Wed, Jun 1, 2016


Docker containers are great when it comes to deploying your application for production, testing, and scaling up for performance. DreamFactory instances can take advantage of Docker containers as well. In fact, it’s even easier to horizontally scale DreamFactory instances (with or without Docker containers) because DreamFactory uses JSON Web Tokens (JWT).

DreamFactory uses JWT to manage user sessions. JWT is completely stateless, which is important for scaling apps that rely on HTTP requests under load. DreamFactory web servers do not maintain session state at all. The JWT itself (token) that is passed around in every request is sufficient to hold the minimum data required to maintain the session. Therefore, when horizontally scaling across multiple web servers (or Docker web containers) running DreamFactory, the load balancer doesn’t need to maintain session across the instances. The only requirement is to share the same APP_KEY (located in the .env file of a DreamFactory instance) across all web instances of DreamFactory.
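The statelessness is easiest to see if you consider what a token actually carries: the session claims plus a signature computed with the shared key, so any instance holding that key can verify a request with no session-store lookup. Below is a deliberately toy TypeScript sketch of that idea — it is not DreamFactory's implementation, and real JWTs use base64url encoding and HMAC-SHA256 rather than this simplistic hash:

```typescript
// Toy "signature" standing in for HMAC-SHA256 over the payload with a
// shared key (DreamFactory's APP_KEY plays this role in production).
function sign(payload: string, key: string): string {
  const data = payload + key;
  let hash = 0;
  for (let i = 0; i < data.length; i++) {
    hash = (hash * 31 + data.charCodeAt(i)) % 1000003;
  }
  return hash.toString(16);
}

// Issuing instance: packs the session claims and a signature into the token.
function issueToken(user: string, key: string): string {
  const payload = JSON.stringify({ sub: user, exp: 1466812800 });
  return payload + '.' + sign(payload, key);
}

// Any other instance sharing the key can verify the token with no session
// database lookup -- this is what makes horizontal scaling stateless.
function verifyToken(token: string, key: string): { sub: string } | null {
  const dot = token.lastIndexOf('.');
  const payload = token.slice(0, dot);
  const sig = token.slice(dot + 1);
  return sig === sign(payload, key) ? JSON.parse(payload) : null;
}

const token = issueToken('user@example.com', 'SharedAppKey');
// A different web container, holding the same key, validates independently:
const session = verifyToken(token, 'SharedAppKey');
console.log(session && session.sub); // user@example.com
```

Because verification depends only on the token and the shared key, the load balancer never needs sticky sessions, which is exactly the property the rest of this post relies on.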

To make it easier to deploy DreamFactory using Docker containers, we’ve created a dreamfactorysoftware/df-docker image on Docker Hub. This image uses the MySQL and Redis images for the DreamFactory system database and cache storage, respectively. Simply follow the instructions on the dreamfactorysoftware/df-docker image to deploy a single instance of the DreamFactory Docker container connecting to the MySQL and Redis containers.

Here we'll show you how to deploy multiple DreamFactory web containers, all sharing a single MySQL and Redis container. Then we’ll put the web containers behind a load balancer. Before starting, make sure that you have Docker installed. Just follow the Docker installation docs.

Start by pulling the dreamfactorysoftware/df-docker image.

docker pull dreamfactorysoftware/df-docker 

Start the MySQL database container under the name df-mysql.

docker run -d --name df-mysql -e "MYSQL_ROOT_PASSWORD=root" -e "MYSQL_DATABASE=dreamfactory" -e "MYSQL_USER=df_admin" -e "MYSQL_PASSWORD=df_admin" mysql

Then start the Redis container under the name df-redis.

docker run -d --name df-redis redis

Now we’ll start three web containers, df-web1, df-web2, and df-web3, each running a DreamFactory instance from the dreamfactorysoftware/df-docker image.

docker run -d --name df-web1  -e "APP_KEY=UseAny32CharactersLongStringHere" --link df-mysql:db --link df-redis:rd dreamfactorysoftware/df-docker
docker run -d --name df-web2  -e "APP_KEY=UseAny32CharactersLongStringHere" --link df-mysql:db --link df-redis:rd dreamfactorysoftware/df-docker
docker run -d --name df-web3  -e "APP_KEY=UseAny32CharactersLongStringHere" --link df-mysql:db --link df-redis:rd dreamfactorysoftware/df-docker

As mentioned before, all DreamFactory containers must use the same APP_KEY. Just replace UseAny32CharactersLongStringHere with any 32-character alphanumeric string.
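One quick way to generate such a key is sketched below; this is just one possible approach — any method that produces 32 random alphanumeric characters works:

```shell
# Generate a random 32-character alphanumeric string suitable for APP_KEY.
APP_KEY=$(head -c 1024 /dev/urandom | LC_ALL=C tr -dc 'A-Za-z0-9' | head -c 32)
echo "$APP_KEY"
```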

Now start the load balancer under the name df-lb using the tutum/haproxy image from Docker Hub and link all three DreamFactory containers to it.

docker run -d -p 80:80 --name df-lb --link df-web1 --link df-web2 --link df-web3 tutum/haproxy

Now open a web browser and point it to 127.0.0.1 (on Mac OS X, use the IP address of your Docker machine instead). You should see the “Hello and Welcome…” greeting page followed by a form to create the first DreamFactory admin user.

Congratulations! You have successfully deployed multiple instances of DreamFactory under a load balancer. All these instances share the same MySQL database and Redis cache.

To make sure the load balancer is working and using all three instances, go ahead and create the first admin user and log in to the DreamFactory admin console. Then click on the ‘Config’ tab and notice the ‘Host’ field under the ‘Server’ section. It should show the Docker container ID of the web container serving the request. Refresh the admin console in your browser and notice that the ‘Host’ changes to a different container ID. Refresh again and it should change again. Every time you refresh the admin console, the load balancer routes you to a different DreamFactory container.

I hope this blog post explains how easy it is to deploy and scale DreamFactory with Docker. Let us know what you think in the comments or head on over to the community forum if you have any questions or feedback.

 

Tags: Docker, Packages, Deployment
