{ DreamFactory: 'Blog' }

Supporting open source and making friends at OSCON

Posted by Alex Bowen

Tue, May 24, 2016

Last week some of the DreamFactory team traveled to Austin, Texas to attend OSCON, one of the biggest events for the open source community. We wouldn't be where we are without the open source community, so we were excited not only to sponsor the event, but also to connect with folks and give talks. Jessica Rose, our Head of Developer Relations, gave a few talks throughout the week on fandoms and imposter syndrome, and led a workshop on internal communications for better leadership. Alexandra Bowen, Community and Developer Relations, helped lead a discussion on engaging lurkers in an online community at the Community Leadership Summit a few days before OSCON. The rest of the team (Tracy, Matt, and Joshua) helped out by staffing our booth, meeting tons of awesome people, and passing out SWAG. OSCON has some traditions that we love, such as the sticker table and the chalkboard message wall! It's great to see the open source culture grow.

Great Community Building!

We were excited to share our vision and messaging about how DreamFactory uniquely delivers REST API solutions, and to spotlight some of the challenges developers and project managers face in building and maintaining APIs. We help you get to market that much quicker. It's an exciting time in the open source and API fields, and we are glad to be a growing part of them. It was great connecting one-on-one with attendees and listening to their challenges, ideas, and pressures. We call this a community approach, and it helps us navigate these issues in a way that brings value to our users. It is the best way, and maybe the only way, to keep what DreamFactory does in sync with developers and the market. Plus, we gave away arguably some of the coolest SWAG!

Want some SWAG? Tell us your DreamFactory Story!

We are pretty proud of Jessica, who rocked it during her multiple talks! At the Community Leadership Summit a few days before OSCON, she gave a talk on fandoms in the community that drew over 200 attendees. She also led a workshop on internal communication for better leadership and how effective communication is more than just speaking clearly. Finally, her talk on cognitive bias and imposter syndrome had a great showing of over 120 folks. We're proud, to say the least. Links to the videos and slides will be shared on our social media channels as they become available.

Ciw5p9AVAAAnI7L.jpg
Head of Developer Relations Jessica Rose giving her talk on Imposter Syndrome.

We all went out for a friendly dinner in Austin and invited people we met there, as well as new and old friends - it was a huge success. We connected with a ton of new folks and had an awesome time nerding out. We hope you join us next time.

 collage.jpg
Our new conference dinner tradition - thanks, Tracy!

We had some great giveaways at OSCON, including a BB-8, a Raspberry Pi, t-shirts, and an Arduino starter kit. Be sure to stop by, say hi, and get yourself some SWAG next time. As always, post a pic of you with our SWAG and tag #DreamFactory for a chance to get even more and be featured on our social media channels.

Ci2XTWTUoAA4xDv.jpg
Matt from the DreamFactory team with one of our giveaway winners, Catherine, who got an Arduino Starter Kit.

These are some of our favorite types of events, because they give us a chance to speak with the developer and open source communities we serve through DreamFactory, as well as meet and greet influencers. A list of our upcoming events can be found on our events page. If there's someplace you would love to see us, let us know.

We had a lot of fun and appreciate everyone who came by to say hi. Didn't stop by? Have your own fun with APIs! Visit our Live API Documentation and see how to apply #DreamFactory to your projects.


Events OSCON DevRel

Join the DreamFactory team in Austin at OSCON

Posted by Tracy Osborn

Mon, May 16, 2016

OSCON.jpg

We're pleased to attend and sponsor OSCON, one of the biggest events for the open source community!

Come meet the DreamFactory team (Jess, Tracy, Alex, Josh, and Matt) and learn about DreamFactory at booth #826. Bring your business cards; we'll be raffling off a BB-8 Sphero and a whole pile of exciting hardware (yes, loads of Raspberry Pis!).

Our Head of Developer Relations, Jess Rose, will be giving a workshop at the Cultivate leadership event embedded within OSCON on how leaders can keep their teams working smoothly by focusing on internal communications. She'll be getting right back on stage the afternoon of the 18th at the main OSCON conference to talk about how cognitive biases like impostor syndrome impact our ability to judge our own skill sets.

We'll be participating in other events throughout OSCON — see our full schedule on our events page. Come say hi, we'd love to meet you!

Events Community Sponsorships

Why we didn't choose Node.js for the DreamFactory REST API Backend

Posted by Bill Appleton

Fri, May 13, 2016

Our engineering team considered using Node.js to build the DreamFactory REST API backend. There are some great things about Node that we really like. Developers can write JavaScript on both the client and the server, and the Node package manager is great. But after a careful look, we decided that Node was not the best choice. Instead, we chose the Laravel framework, the V8 Engine, and PHP to write DreamFactory. This architecture offers some real advantages when it comes to building a REST API backend. Read on; I think you will come to the same conclusion that we did.

Node is great for some things...

Node has an event-driven architecture capable of asynchronous input and output. This design optimizes throughput and scalability in Web applications with many input and output operations. Node has been used successfully for chat programs, browser games, and other applications that need real-time communications.

Node is single-threaded and has a single event loop. All of the JavaScript you write executes in this loop, and if a blocking operation happens in that code, it will block the entire loop and nothing else will happen until it finishes. When you do something that takes time, like reading from a database, conducting an HTTP transaction, or using the file system, Node makes an asynchronous call to that driver and continues on immediately.

When one of these long-running operations is finished, Node receives a callback. The callbacks are handled by a lightweight and highly efficient thread pool. In this manner, Node is very good at waiting for things to happen, and it does so with a bare minimum of resources. Other languages might have to spawn a thread or even start a new process to accomplish the same type of asynchronous operation.

One great application of Node is the new Lambda service from AWS. The purpose of Lambda is to simplify building smaller, on-demand applications that respond quickly to events and new information. Lambda spends most of its time waiting for an event to happen, after which your snippet of JavaScript runs as a microservice. Because of all the waiting involved, Node is a good choice for this type of product.

Another smart use of Node is for an MQTT message broker. This software is used for Internet of Things (IoT) applications where many devices need to communicate through a hub and spoke network. Each device on the network might publish certain messages and subscribe to others. As you can imagine, most of the time this network is just waiting for a message to be published, so Node also works well for an MQTT message broker.

...but not everything

If Node is so nifty, then why don’t people use it to build websites? The answer is that websites follow a particular data access pattern. A request comes in, there are some database transactions, and a response goes out. The advantages of Node start to fade away when there is a database transaction for every request and response. The problem is that the database driver needs to run in a separate process. This allows many people to use it at the same time. And so if you have to create a new process anyway, the fact that Node can efficiently wait for an asynchronous database transaction to finish doesn’t help conserve resources or improve speed very much.

node.png

As it turns out, a REST API platform follows the same basic data access pattern that a simple website does. A request comes in, there is a database transaction, and the response goes out. On a website the transaction happens with HTML pages, and on a REST API backend the transaction happens with JSON documents, but the workflow is the same. So the advantages of Node are mitigated when you are trying to build a REST API backend. In short, a REST API backend should be working all the time, not waiting for something to happen.

There is another problem with Node. If there is lots of heavy lifting to be done, either before or after the database transaction, then all of that work must be completed by the single thread running JavaScript. That might be fine for some applications, but if massive scalability is the goal, this can become a bottleneck. Two of the main use cases for REST API services are widely distributed mobile applications and IoT deployments. Both of these scenarios could overwhelm a single CPU running Node at scale.

A better way

So what is the best language for building a simple request-and-response website? The world's most popular content management systems, Drupal and WordPress, are both written in PHP. In fact, over 80% of websites whose server-side language is known are written in PHP, including Facebook and Yahoo. There is a long history of solving scalability problems with LAMP stacks running PHP. Web servers, load balancers, operating systems, databases, and the PHP language itself have been optimized for this basic request and response model.

Some people might think that PHP is old hat. But take a look at the cutting edge Laravel framework and the new performance enhancements in PHP 7.0. We have written elsewhere about the advantages of using modular Laravel components for the DreamFactory platform. The popularity of PHP is another advantage, because third party database drivers are usually up to date and more widely tested than other languages. This was a key need for our REST API platform.

The diagram below shows the architecture for the DreamFactory backend. A process is assigned to the incoming request, the database or external service is called synchronously, and the response goes out on the same thread. If there is a lot of work to be done customizing, integrating, transforming or formatting the JSON request or response then this can happen in parallel without the single-threaded bottleneck. You need a separate process, but that was going to happen anyway because of the database transaction. For enhanced scalability we recommend running on NGINX to reduce the overhead of handling multiple processes.

lamp.png

One thing we like about Node is the ability to run JavaScript on the server. This capability appeals to JavaScript developers using client frameworks like AngularJS and React. Node runs on top of Google's V8 Engine. So we also incorporated V8, but rather than running it in a single process like Node, we run it in parallel for server-side scripting and customization. Compare the blue boxes in the two architecture diagrams above to see the difference. We also support PHP, Python, and Node itself for scripting and customization, if you prefer.

There are many debates on the Internet about how to scale Node applications. Vertical scaling seems possible, but horizontal scaling appears to be more difficult. In our case, we wanted to enable any system administrator to scale DreamFactory using techniques and technologies that they were already familiar with. Because the platform is configured as a LAMP stack, DreamFactory can be scaled vertically with additional processors or horizontally with a load balancer just like any website.

We adopted JSON Web Token (JWT) to ensure that the DreamFactory platform is completely stateless. Need to double performance? Double the number of instances. The only requirement is that all of your instances share the same default database and cache. DreamFactory does not need persistent local storage, and this enables our stack to run on PaaS systems like Bluemix, OpenShift, Heroku, and OpenStack, as well as Docker management solutions like Swarm, Fleet, Kubernetes, and Mesos.
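
As a rough sketch of what this looks like in practice (the hostname, API key, credentials, and table name below are placeholders), a client logs in once to obtain a JWT and then presents it on every request, so any instance behind the load balancer can serve the call:

# Log in and receive a JWT (the session_token field of the response).
# df.com, the API key, and the credentials are placeholder values.
curl -X POST "http://df.com/api/v2/user/session" \
  -H "Content-Type: application/json" \
  -H "X-DreamFactory-Api-Key: YOUR_APP_API_KEY" \
  -d '{"email":"user@example.com","password":"secret"}'

# Present the token on subsequent calls. Because the token itself carries
# the session state, any instance sharing the same default database and
# cache can validate it - no sticky sessions or local storage required.
curl -X GET "http://df.com/api/v2/db/_table/contact" \
  -H "X-DreamFactory-Api-Key: YOUR_APP_API_KEY" \
  -H "X-DreamFactory-Session-Token: <JWT from the login response>"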

I have written this blog post to explore some of the technology and design decisions that we made along the way. Since DreamFactory is a REST API backend, the platform could always be rewritten in another language if there were a compelling reason to do so. But as things stand we are happy with the current architecture and scalability characteristics. Reach out and let me know what you think about it.

DreamFactory Laravel, PHP Node.js

Deploying apps with DreamFactory packages

Posted by Arif Islam

Wed, May 11, 2016

Developers often find themselves creating the same services and resources across all their DreamFactory instances multiple times. For example, in a simple three-stage software development lifecycle (dev, test, production), developers typically need to copy a set of apps, roles, users, services, lookups, and other resources across all or some of their environments. This becomes a tedious and counterproductive task as the number of resources and instances grows.

To address this challenge and to make it even easier to provision a DreamFactory instance with pre-configured services and resources, DreamFactory introduces the "Packages" feature in version 2.1.2. Packages allow you to export system-wide resources, including your storage files (apps), into a single package file. This package file is a zip archive of the exported resources. You can take this package file and import it into an existing instance, or provision a new instance with it. You can find the new package API in the API Docs under the 'system' service.

Package File

Before we get into the details of how to export/import packages across multiple instances, let’s go over the structure and content of a package file. As mentioned earlier, the package file itself is a zip file of all the exported resources organized in related folders. Resources are exported in JSON format. Storage folders are exported in the zip file and stored inside folders that correspond to their path and storage service name.

The package always contains a 'package.json' file. This file is called the manifest file, and it details all the contents of the package. Here is what the package contents look like after extracting the zip file.

package-zip.png

You can see that all exported resources are in a *.json file under their corresponding API path. For example, the API for roles is …system/role, so the exported roles are stored in the role.json file under the system folder. Similarly, _schema.json is stored inside a folder named after its service - mysql. We also exported the 'work' and 'my_images' directories from the 's3' storage service. The contents of these directories are zipped into a single file and stored inside folders named with their corresponding path (work, my_images) and service name (s3).
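
As a rough sketch, the extracted layout for a package like this might look as follows (the inner zip file names are illustrative, not a fixed convention):

package.json                  // The manifest file
system/
    role.json                 // Exported roles
    service.json              // Exported services
    app.json                  // Exported apps
mysql/
    _schema.json              // Exported table schemas for the mysql service
s3/
    work/
        work.zip              // Zipped contents of the 'work' directory
    my_images/
        my_images.zip         // Zipped contents of the 'my_images' directory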

Now let’s take a look at the content of the manifest ‘package.json’ file.

{
  "version": "0.1",         // Package version
  "df_version": "2.1.1",    // DF instance version
  "secured": false,         // Secured flag (coming up)
  "description": "",        // Optional package description
  "created_date": "2016-04-12 14:18:08",    // Date this package was created
  "service": {
    "system": {
      "role": ["role1", "role2"],
      "service": ["DB", "mailgun", "script", "s3", "mysql", "math-py"],
      "app": ["add_angular2", "my-test-app"],
      "user": ["arif+github@dreamfactory.com", "arif@foo.com"],
      "admin": ["arif@dreamfactory.com"],
      "custom": ["adminPreferences"],
      "cors": [6, 7],
      "email_template": ["test_email_template", "test_template"],
      "event": [
        "mydb._table.contact.{id}.patch.pre_process",
        "user.register.post.post_process"
      ],
      "lookup": ["host", "user"]
    },
    "DB": {
      "_schema": ["contact_group", "contact_info"]
    },
    "mydb": {
      "_schema": ["contact", "todo"]
    },
    "mysql": {
      "_schema": ["contact", "todo"]
    },
    "files": [
      "applications/",
      "projects/",
      "testfiles.zip"
    ],
    "s3": [
      "my_images/",
      "work/"
    ]
  }
}

As you can see, the manifest file simply lays out the resources and their locations in the package. The service/resource structure of the manifest file closely mirrors the corresponding API paths.

Export

Exporting a manifest only

Before exporting a package it’s good to know which resources you can export. To see all the exportable resources, simply request the package manifest with the following API call.

curl -X GET "http://df.com/api/v2/system/package"

You can use this manifest to export your desired resources. To get just the system resources, use the parameter 'system_only=true'.

curl -X GET "http://df.com/api/v2/system/package?system_only=true"

To download this manifest as a file, use the parameter 'as_file=true'.

curl -X GET "http://df.com/api/v2/system/package?as_file=true"

The manifest file shows all key resources in your instance. Periodically downloading and keeping an archive of the package manifest file will allow you to maintain an audit trail of your instance.
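
For example, a nightly cron job along these lines (using the same placeholder hostname as the other examples in this post, and omitting authentication headers for brevity) would keep dated snapshots of the manifest:

# Download the manifest as a file and archive it under today's date.
curl -X GET "http://df.com/api/v2/system/package?as_file=true" \
  -o "package-manifest-$(date +%Y-%m-%d).json"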

Exporting a package

Exporting a package requires a POST call to the system/package API with a payload similar to the package manifest. For example, the following call will export a package containing roles with name r1, r2, and r3.

curl -X POST "http://df.com/api/v2/system/package" -d '{"service":{"system":{"role":["r1","r2","r3"]}}}'

You can also export multiple resources, storage files, and database schemas all in one package using a manifest like the one below.

{
  "service": {
    "system": {
      "role": ["role1", "role2"],
      "service": [
        "DB",
        "mailgun",
        "script",
        "s3",
        "mysql",
        "math-py"
      ],
      "app": ["add_angular2", "my-test-app"],
      "user": ["arif+github@dreamfactory.com", "arif@foo.com"],
      "admin": ["arif@dreamfactory.com"],
      "custom": ["adminPreferences"],
      "cors": [6, 7],
      "email_template": ["test_email_template", "test_template"],
      "event": [
        "mydb._table.contact.{id}.patch.pre_process",
        "user.register.post.post_process"
      ],
      "lookup": ["host", "user"]
    },
    "DB": {
      "_schema": ["contact_group", "contact_info"]
    },
    "mydb": {
      "_schema": ["contact", "todo"]
    },
    "mysql": {
      "_schema": ["contact", "todo"]
    },
    "files": [
      "applications/",
      "projects/",
      "testfiles.zip"
    ],
    "s3": [
      "my_images/",
      "work/"
    ]
  }
}

There are a few other ways you can export package resources using the export manifest. Here are some examples:

Exporting package resources by id:

{
  "service": {
    "system": {
      "role": [1, 2, 3]   // Exporting roles with id 1,2,3
      …
    }
  }
}

Exporting package resources using a filter:

You can use filters to export resources that are filterable. For example, here is how to export all services whose names begin with the prefix 'test_'.

{
  "service": {
    "system": {
      "service": {
        "filter": "name like 'test_%'"
      }
      …
    }
  }
}

Exporting package resources with related data:

To export related data, specify the relation name in the export manifest. Example:

{
  "service": {
    "system": {
      "role": {
        "ids": [1, 2, 3],   // Exporting roles with id 1,2,3
        "related": [
          "app_by_role_id",
          "role_service_access_by_role_id"
        ]
      }
      …
    }
  }
}

Exporting package resources with related data using a filter:

{
  "service": {
    "system": {
      …
      "role": {
        "filter": "name like 'test_%'",
        "related": [
          "app_by_role_id",
          "role_service_access_by_role_id"
        ]
      }
      …
    }
  }
}

Exporting storage resources in a package:

{
  "service": {
    "files": [               // Name of your storage service
      "css/",                // Storage folders to export
      "assets/",
      "images/buttons/",
      "app.js",              // Storage files to export
      "index.html"
    ]
  }
}

Exporting schemas from a database service:

{
  "service": {
    "db": {              // Name of your database service
      "_schema": [       // Export the _schema resource
        "table1",        // Names of your tables
        "table2",
        "table3"
      ]
    }
  }
}

Exported package storage

By default, all exported packages are stored using the local file service ('files') inside a folder named '__EXPORTS'. The response from the package export call shows the full URL of the exported package and a flag indicating whether the package URL is publicly accessible. You can download the package using this URL.

{
  "success": true,
  "path": "http://df.com/files/__EXPORTS/df_2016-04-13_03:23:34.zip",
  "is_public": true
}

You can also use a different storage service and folder for your exported package by specifying the service and folder in your export manifest.

{
  "storage": {
    "name": "s3",            // Name of the storage service
    "folder": "my-exports"   // Folder to store your package in
  },
  "service": {
    "system": {
      "role": [1, 2, 3]   // Exporting roles with id 1,2,3
      …
    }
  }
}

If you provide just the storage service name and no folder, your package will be stored in the default '__EXPORTS' folder. The default name used for the package file follows the scheme hostname_y-m-d_H:s:i.zip. You can use a different name for your package if desired.

{
  "storage": {
    "name": "s3",              // Name of the storage service
    "folder": "my-exports",    // Folder to store your package in
    "filename": "my-package.zip"
  },
  "service": {
    "system": {
      "role": [1, 2, 3]   // Exporting roles with id 1,2,3
      …
    }
  }
}

Exporting a package securely

Normally, when you export system services in your package, all your service configs, which include sensitive information such as your database host, username, and password, are in plain text. You can export your package securely using a password, which will encrypt all your service config information.

When you import this package into another instance, you will need to provide this password. If you export your users in a secure package, you can export your users with their (encrypted) passwords. This way, all your users will keep the same password when imported into a different instance. To export a package securely, set the 'secured' flag to true and provide a 'password' in the export manifest.

{
  "secured": true,        // Set this to true for a secure package
  "password": "secret",   // Must provide a password for a secure package
  "service": {
    "system": {
      "service": ["db", "s3"]   // Service configs are encrypted
    }
  }
}

Import

Importing a package file:

Make a multipart/form-data POST request to the system/package API.

curl --form files=@myfile.zip http://df.com/api/v2/system/package

Importing a package file from a URL:

You can import a package from a URL using the 'import_url' parameter.

curl -X POST "http://df.com/api/v2/system/package?import_url=http://me.com/packages/my-package.zip"

Importing a package from the command line:

You can import a package from the command line using the Artisan console command - dreamfactory:import-pkg.

php artisan dreamfactory:import-pkg path

The path to your package file can be a single file, a folder, or a URL. When a folder is provided, all packages in that folder will be imported. To import a secured package, just use the '--password' option to provide the password.

php artisan dreamfactory:import-pkg path --password=secret

NOTE: When importing a service or resource that already exists in the target instance, the import process will skip it and leave the existing record unchanged. The import process logs all skipped or problematic items in the system log, and also returns the log messages in the response of the import call.

Import/Export using the Admin app

The admin app provides an easy way to export and import your packages using a UI. You can find this UI under the 'Packages' tab inside the admin app.

packages-admin.png

From the 'Type' drop-down menu, select your resource type, then select your resource from the 'Name' menu. Click 'Add to Package' to add it to your 'Selected Package Content' list. Select as many resources as you need to export. After that, select a storage service where you want to store your package using the 'Export to' menu and provide a folder in the text field next to it. Click the 'Export' button to export your package.

I hope this blog post explains how easy it is to deploy your apps from development all the way to production. Let us know what you think in the comments or head on over to the community forum if you have questions! Also check out this screencast to see a quick example.

Packages Deployment

Reducing complexity with serverless API architecture

Posted by Alex Bowen

Fri, May 6, 2016

DreamFactory was recently featured in a Medium post from APIdays, penned by Mark Boyd and Mehdi Medjaoui. First in a series, it does a great job of detailing the challenges enterprise companies face when building APIs.

apidays.png

We know that two of the key factors driving cloud and API environments today are the need for agility and the desire to reduce complexity. Serverless is igniting the interest of many because it can respond to both of those factors. To get there, businesses must first understand what value they are opening up by creating an API; serverless can then help them open and refine that value faster and cheaper. In an economy that can scale, like APIs, learning faster than the market is the key to success, and serverless architecture can let API providers quickly enter the market and iterate toward creating real value.

Mapping the API Serverless Market Landscape

Our CEO Bill Appleton contributed this to the piece:

It used to be that all the attention was on the server. Now in the enterprise, there are two teams: client and backend, but a back-and-forth negotiation between the two can take months to play out and mobile projects can fail if they don’t get their act together.

But it is almost a bigger problem if they do get their act together: we have seen this again and again. In an enterprise, they start with one API, then they don’t want to rebuild that first API, so they build a second API. That development is outsourced, and over time, they dig this hole of complexity that can’t scale, that can’t port anywhere.

DreamFactory's solution is one that meets the market's values of reducing complexity and empowering the developer. You gain dexterity by not being bogged down by new application architecture or backend servers.

DreamFactory Enterprise

Weekly Digest