
Your API needs a plan (a.k.a. API Management)


You drank the API Economy Kool-Aid and created some neat HTTPS-addressable calls using Restify or JAX-RS. Digging deeper into the concept of microservices, you realize that an HTTPS-callable endpoint doesn't make an API. There are a few more steps involved.
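The starting point usually looks something like this - a minimal sketch of a Restify endpoint (route, port and payload are made up for illustration):

    // hello.js - minimal Restify endpoint (illustrative only)
    var restify = require('restify');

    var server = restify.createServer();

    // GET /hello/:name returns a small JSON payload
    server.get('/hello/:name', function (req, res, next) {
      res.send({ greeting: 'hello ' + req.params.name });
      return next();
    });

    server.listen(8080, function () {
      console.log('%s listening at %s', server.name, server.url);
    });

Easy enough - but that's an endpoint, not yet an API.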
O'Reilly provides a nice summary in the book Building Microservices, so you might want to add that to your reading list. In a nutshell:
  • You need to document your APIs. The most popular tools here seem to be Swagger and WSDL 2.0 (I also like Apiary) - a minimal Swagger sketch follows this list
  • You need to manage who is calling your API. The established mechanism is to use API keys. Those need to be issued, managed and monitored (see the middleware sketch below the list)
  • You need to manage when your API is called. Depending on the ability of your infrastructure (or your ability to pay for scale-out) you need to limit the rate at which your API is called - per second, hour or billing period
  • You need to manage how your API is called: in which sequence, whether the call is clean, where it comes from
  • You need to manage versions of your API, so innovations and improvements don't break existing code
  • You need to manage grouping of your endpoints into "packages" like: free API, freemium API, partner API, pro API etc. Since the calls will overlap, building separate code for each bundle would lead to duplicates
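For the documentation part, a Swagger (2.0) description is just a JSON (or YAML) document that documentation tools can render and client generators can consume. A purely illustrative sketch for the endpoint above might look like this:

    {
      "swagger": "2.0",
      "info": { "title": "Greeting API", "version": "1.0.0" },
      "basePath": "/v1",
      "paths": {
        "/hello/{name}": {
          "get": {
            "summary": "Returns a greeting for the given name",
            "parameters": [
              { "name": "name", "in": "path", "required": true, "type": "string" }
            ],
            "responses": {
              "200": { "description": "A greeting object" }
            }
          }
        }
      }
    }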
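Key checking, rate limiting and versioning can be bolted onto a Restify server by hand - which is exactly the overhead an API management service takes off your plate. A rough sketch of that self-built approach (in-memory key store, made-up keys, limits and header names):

    // Illustrative only: API key check, crude hourly rate limit and a
    // versioned route on a Restify server. Keys and limits are made up.
    var restify = require('restify');

    var API_KEYS = { 'demo-key-123': { plan: 'free', limitPerHour: 100 } };
    var usage = {};

    var server = restify.createServer();

    // Who is calling? Check the API key (header name is an assumption)
    server.use(function checkApiKey(req, res, next) {
      var key = req.headers['x-api-key'];
      if (!key || !API_KEYS[key]) {
        res.send(401, { error: 'missing or unknown API key' });
        return next(false);
      }
      req.apiKey = key;
      req.apiPlan = API_KEYS[key];
      return next();
    });

    // When is it called? Count calls per key and hour, reject above the limit
    server.use(function rateLimit(req, res, next) {
      var bucket = req.apiKey + ':' + new Date().toISOString().slice(0, 13);
      usage[bucket] = (usage[bucket] || 0) + 1;
      if (usage[bucket] > req.apiPlan.limitPerHour) {
        res.send(429, { error: 'rate limit exceeded' });
        return next(false);
      }
      return next();
    });

    // Versioning: Restify routes on the Accept-Version header
    server.get({ path: '/hello/:name', version: '1.0.0' }, function (req, res, next) {
      res.send({ greeting: 'hello ' + req.params.name });
      return next();
    });

    server.listen(8080);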
And of course, all of this needs statistics and monitoring. Adding that to your code will create quite some overhead, so I would suggest: use a service for that.
In IBM Bluemix there is the API Management service. This service isn't a new invention, but the existing IBM Cloud API Management made available in a consumption-based pricing model.
Your first 5,000 calls are free, as is your first developer account. After that it is less than 6 USD (pricing as of May 2015) for 100,000 calls. This provides a low-investment way to evaluate the power of IBM API Management.
[Diagram: API Management IBM Style]
The diagram shows the general structure. Your APIs only need to talk to the IBM cloud, removing the headache of security, packet monitoring etc.
Once you have built your API, you then expose it back to Bluemix as a custom service. It will appear like any other service in your catalogue. The purpose of this is to make it simple to use those APIs from Bluemix - you just read your VCAP_SERVICES.
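In a Node.js app on Bluemix that boils down to a few lines - a sketch, assuming a service label and credential field names that will differ in your own catalogue:

    // Sketch: pull the custom service's credentials out of VCAP_SERVICES.
    // The label 'API Management' and the credential fields are assumptions.
    var services = JSON.parse(process.env.VCAP_SERVICES || '{}');
    var apiService = (services['API Management'] || [])[0];

    if (apiService) {
      var creds = apiService.credentials;
      console.log('API endpoint:', creds.url);       // field name assumed
      console.log('Client id   :', creds.clientID);  // field name assumed
    }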
But you are not limited to using these APIs from Bluemix. You can call the IBM API Management directly (your API partners/customers will like that) from whatever has access to the Intertubes.
There are excellent resources published to get you started. Now that you know why, check out the how. If you are not sure about that whole microservices thing, check out Chris' example code.
As usual YMMV

Posted by on 20 May 2015 | Comments (0) | categories: Bluemix

The Rise of JavaScript and Docker


I loosely used JavaScript in this headline to refer to a set of technologies: node.js, Meteor, Angular.js (or React.js). They share a commonality with Docker that explains their (pun intended) meteoric rise.
Let's take a step back:
JavaScript on the server isn't exactly new. The first server-side JavaScript was implemented in 1998, and the union mount that made Docker possible is from 1990. Client-side JavaScript frameworks are plentiful too. So what made the mentioned ones so successful?
I make the claim that it is machine readable community. This is where these tools differ. node.js is inseparable from its package manager npm. Docker is unimaginable without its registry, and Angular/React (as well as jQuery) live on a cushion of myriad plug-ins and extensions. While the registries/repositories are native to Docker and node.js, the front-ends take advantage of tools like Bower and Yeoman that make all the packages feel native.
These registries aren't read-only, which is a huge point. By providing the means of direct contribution and/or branching on GitHub, the process of contribution and consumption became two-way. The mere possibility to "give back" created a stronger sense of belonging (even if that sense might not be fully conscious).
Machine readable community is a natural evolution born out of the open source spirit. For decades developers have collaborated using chat (IRC, anyone?), discussion boards, Q&A sites and code sharing places. With the emergence of Git and GitHub as the de facto standard for code sharing, the community was ready.
The direct access from scripts and configurations to the source repository replaced the flow of "human vetting, human download, human unpack and copy to the right location" with "specify what you need and the machine will know where to get it". Even this idea wasn't new. In the Java world the Maven plug-in has provided that functionality since 2002.
The big difference now: Maven wasn't native to Java, so it required a change of habit. Things are done differently with it than without. npm, on the other hand, is "how you do things in node.js". Configuring a Docker container is done using the registry (and you have to put in extra effort if you want to avoid that).
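"Specify what you need and the machine will know where to get it" is, in node.js terms, just a dependencies block in package.json - a sketch with illustrative package names and versions:

    {
      "name": "my-api",
      "version": "1.0.0",
      "dependencies": {
        "restify": "^3.0.0",
        "lodash": "^3.8.0"
      }
    }

A single npm install reads this, talks to the registry and pulls the packages (and their dependencies) into node_modules - no human download, unpack or copy involved.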
So all the new tooling uses repositories as "this is how it works" and complements human readable community with machine readable community. Of course, there is technical merit too - but that has been discussed elsewhere at great length.

Posted by on 09 May 2015 | Comments (0) | categories: Container Docker Software