Usability - Productivity - Business - The web - Singapore & Twins

From Blogsphere to a Static Site (Part 2) - Cleaning up the HTML

Blogsphere allows creating both RichText and plain-HTML entries. To export them I need to grab the HTML, either the manually entered markup or the markup generated from RichText, clean it up (especially my hand-written HTML) and then replace image sources and internal links using the new URL syntax. To make this happen I created two functions that save images and attachments and build a lookup list, so the HTML cleanup has a mapping table to work with.

private void saveImage(Document doc) {
    String sourceDirectory = this.config.sourceDirectory + this.config.imageDirectory;
    try {
        String subject = doc.getItemValueString("ImageName");
        Date created = doc.getCreated().toJavaDate();
        Vector attNames = this.s.evaluate("@AttachmentNames", doc);
        String description = doc.getItemValueString("ImageName");
        String oldURL = this.config.oldImageLocation + doc.getItemValueString("ImageUNID") + "/$File/";
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy");
        String year = sdf.format(created);
        // Record the old URL, so the HTML cleanup has a mapping entry to work with
        FileEntry fe = this.imgEntries.add(subject, oldURL, description, created);

        for (Object attObj : attNames) {
            try {
                String attName = attObj.toString();
                String newURL = this.config.webBlogLocation + this.config.imageDirectory + year + "/" + attName;
                fe.add(attName, newURL, description, created);
                String outDir = sourceDirectory + year + "/";
                // Extract the attachment into the year-based directory structure
                EmbeddedObject att = doc.getAttachment(attName);
                att.extractFile(outDir + attName);
            } catch (NotesException e) {
                e.printStackTrace();
            } catch (Exception e2) {
                e2.printStackTrace();
            }
        }
    } catch (NotesException e) {
        e.printStackTrace();
    }
}

private void saveImageFromURL(String href, String targetName) {
    String fetchFromWhere = "https://" + this.config.bloghost + href;
    try {
        // Fetch the image via HTTP and only write it to disk if it changed
        byte[] curImg = Request.Get(fetchFromWhere).execute().returnContent().asBytes();
        this.saveIfChanged(curImg, targetName);
    } catch (ClientProtocolException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

With the images saved, the HTML cleanup can proceed. As mentioned before, I'm using JSoup to process the crappy HTML. It allows for easy extraction of elements and attributes, so processing links and images takes just a few lines.
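The cleanup itself uses JSoup's element and attribute selectors. The underlying idea, looking up every src and href in the mapping table built by the save functions and swapping in the new URL, can be sketched like this (the mapping entries and the regex shortcut are illustrative, not the real JSoup implementation):

```javascript
// Rewrite src/href attributes based on an old-URL -> new-URL lookup table.
// Illustrative sketch only: the real code uses JSoup in Java.
function rewriteUrls(html, urlMap) {
  return html.replace(/(src|href)="([^"]+)"/g, function(full, attr, oldUrl) {
    var newUrl = urlMap[oldUrl];
    // Leave URLs that are not in the mapping table untouched
    return newUrl ? attr + '="' + newUrl + '"' : full;
  });
}

// Hypothetical mapping entry from a Blogsphere permalink to the new URL scheme
var urlMap = {
  '/blog/d6plinks/SHWL-ABC123': '/2017/04/from-blogsphere-to-a-static-site.html'
};
var html = '<a href="/blog/d6plinks/SHWL-ABC123">Part 1</a>';
console.log(rewriteUrls(html, urlMap));
```

In JSoup terms the same thing is a loop over doc.select("a[href]") and doc.select("img[src]"), which is more robust against attribute order and quoting than a regex.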

Read more

Posted by on 2017-04-17 09:29 | Comments (0) | categories: Blog

From Blogsphere to a Static Site (Part 1) - Objects

The migration tooling revolves around data, so getting the data classes right is important. They need to be designed so they can be populated either from the Blogsphere NSF or from a collection of JSON files (so the blog generation can continue once the NSF is gone). For the blog we need three objects:
  • BlogEntry: The main data containing a blog entry including its meta data
  • BlogComment: An entry with a comment for a Blog in a 1:n relation
  • FileEntry: information about downloadable files (needed for export)

There will be auxiliary data classes like Config, RenderInstructions and Blogindex. Their content is derived from the data stored in the main objects or, in the case of Config, read from disk.
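The tool itself is written in Java, but the shape of these objects is easy to sketch. Here is an illustrative node.js version (only the three class names come from the list above; all field names are assumptions):

```javascript
// Illustrative shapes of the three data objects (field names are assumptions)
class BlogComment {
  constructor(author, created, body) {
    this.author = author;
    this.created = created;
    this.body = body;
  }
}

class BlogEntry {
  constructor(title, created, categories, html) {
    this.title = title;
    this.created = created;
    this.categories = categories;
    this.html = html;
    this.comments = []; // 1:n relation to BlogComment
  }
  addComment(comment) {
    this.comments.push(comment);
  }
  // Populated either from the NSF export or from a JSON file on disk
  static fromJSON(json) {
    var entry = new BlogEntry(json.title, json.created, json.categories, json.html);
    (json.comments || []).forEach(function(c) {
      entry.addComment(new BlogComment(c.author, c.created, c.body));
    });
    return entry;
  }
}

var entry = BlogEntry.fromJSON({
  title: 'Part 1 - Objects',
  created: '2017-04-15',
  categories: ['Blog'],
  html: '<p>...</p>',
  comments: [{ author: 'A Reader', created: '2017-04-16', body: 'Nice!' }]
});
console.log(entry.comments.length); // 1
```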

Data classes in the Blog

Read more

Posted by on 2017-04-15 04:06 | Comments (0) | categories: Blog

From Blogsphere to a Static Site (Part 0) - Requirements

Readers of this blog might have noticed that the blog layout and blog URLs changed a while ago. This blog now serves static HTML pages using an nginx web server (get used to nginx, it's coming in Connections Pink too). I will document the steps and code I used to get there. Step 0 is: define the requirements and evaluate the resulting constraints:
  • Export of all Blogsphere content to HTML, including the conversion of entries written in RichText
  • No export of configuration or layout required
  • New site structure that shows articles in year and month folders
  • Modular templating system with includes for repeated pieces (e.g. header, footer, sidebar)
  • Summary pages for year, month and categories
  • Summary page for article series
  • Existing comments become part of the HTML page
  • Repeatability: the export must be repeatable, but must not overwrite a page that hasn't actually changed
  • Storage of exported pages in a file structure as JSON files
  • Rendering of static site from Notes or from JSON directory
  • Redirection file, so old links get a proper redirection to the new URL
  • Have a comment database for new comments
  • No pagination for any of the summary pages (I might change my mind on that one)
  • Cleanup of messy HTML: automatically fix syntax and URLs to posts and images
  • Optimized HTML, CSS and JS for speedy delivery
I had a look at Jekyll, the templating engine GitHub uses. It would have allowed me to just commit a new file and GitHub would render it for me. Unfortunately, Jekyll fell short on the article-series and category overview pages.

Read more

Posted by on 2017-04-12 11:00 | Comments (0) | categories: Blog

Project Deep Purple - IBM Notes Native for iOS

We have all heard the announcements around Project Pink, headed by Jason R. Gary: the future of IBM Connections. Attending the conference, we could admire him all clad in pink.

However there is something more going on (and I'm not talking about his "don style" haircut). Besides the pink suit, Jason has been spotted in a deep purple jacket on several occasions, like his FossASIA talk. Digging deeper, it seems IBM collaboration projects are now colour coded. Purple is chosen quite deliberately: purple is a rarely occurring colour in nature and as a result is often seen as having sacred meaning. Purple combines the calm stability of blue and the fierce energy of red. The colour purple is often associated with royalty, nobility, luxury, power, and ambition. Purple also represents wealth, extravagance, creativity, wisdom, dignity, grandeur, devotion, peace, pride, mystery, independence, and magic.

You might have guessed it: it is the next generation of IBM Notes! Not one of the fix or feature packs, but an entirely new generation. Under the influence, Jason admitted: "We took the Notes Client source code and compiled it with XCode for iOS. Guess what: it worked. Cocoa needs some work, but it isn't rocket science."

There you have it! After banning Notes clients from your desktop, instead of relying on IBM Client Application Access, you can just launch IBM Notes Native for iOS™ on your iPad and continue working. Any improvements and updates are rolled out automatically through the Apple App Store. Availability will be April 1st, 2018.

We live in interesting times!

Posted by on 2017-04-01 10:23 | Comments (1) | categories: IBM Notes

Goodbye IBM, hello Salesforce!

The Ministry of Manpower in Singapore is running a campaign "A new career at 55". Intrigued by it, I decided to give it a shot.

I will be joining Salesforce in Singapore as Cloud Solution Architect this Monday.

My 11-year tenure at IBM thus came to an end. With the new co-location policy sweeping through IBM, I realised that staying in Singapore would not get me any closer to Notes than the December delivery of Verse on premises. Moving, with my offspring in JC, wasn't an option.

Working with the "Yellow bubble" was always fun and I intend to continue participating there. Over the years the community propelled me to one of the top XPages experts on Stack Overflow, adopted my word creation "XAgents" and always made me feel welcome.

I had the opportunity to contribute code back to the community via OpenNTF on GitHub. Check them out:

  • DominoDAV
    A WebDAV implementation for Domino attachments. It allows you to fully round-trip edit office documents from a browser. It is extensible, so you could make views look like spreadsheets etc.
  • Swiftfile Java for Notes
    We had to pick a different name (AFSfNC) to add to the confusion. The project is a Java plugin implementation of SwiftFile, the little tool that predicts which folder you would file a message to. In today's lingo one would call it: cognitive tag prediction (in Notes, folders and tags can be used interchangeably)
  • Out of Office
    A REST API that allows checking the OOO status of a given user
  • DominoRED
    Linking Domino and Node-RED. Very much a work in progress

So let the adventure "From sensei to n00b" begin. See you on the other side!

Posted by on 2017-04-01 01:02 | Comments (e) | categories: IBM Salesforce

@Formula on JSON

When you look at "modern" programming styles, you will find novel concepts like isomorphic code (runs on client or server), idempotency (same call, same result), immutability (functions never mess with their parameters or global state) or map operations (working on a set of data without looping).

I put "modern" deliberately in quotes, since these ideas have been around since Lisp (or, for the younger of you: since you sorted all blocks by colour and size in kindergarten). In the Lotus world we got our share of this with the venerable @Formula language (the functions, not the commands), which IBM Notes inherited from Lotus 1-2-3. While it has served us well, so far it has been confined to the realm of the NSF.

Not any more! Thanks to Connections Pink and the ever ingenious Maureen Leland, @Formula will soon come to a JSON structure near you. As far as I understood the plan: each @Function will serve as the endpoint of a (serverless) microservice that executes on the values provided, returning a new value object that can be chained to the next call, stream style. I'm very excited about this new development. Watch out for news about Connections Livegrid™.

Time for Maureen to dust off her blog!

Posted by on 2017-04-01 12:19 | Comments (1) | categories: IBM Notes

Agile Outsourcing

The problem

Outsourcing is a "special" animal. Typically the idea is to save cost by letting a service provider execute the work; the saving happens because the service provider is supposed to be able to perform these activities at scale. Increasingly, outsourcing deals are motivated by a skill squeeze: instead of maintaining in-house expertise, companies rely on vendors to keep the lights on.
This is where the trouble starts. Negotiations on outsourcing contracts revolve around price (drive it down) and the SLA (add as many 9s behind the decimal point as possible). The single outcome of such contracts is extreme risk aversion. For illustration, here is the impact of SLA levels:
SLA       Total annual downtime
98%       7 days, 7h, 12min
99%       3 days, 15h, 36min
99.9%     8h, 45min, 36sec
99.99%    52min, 34sec
99.999%   5min, 16sec
99.9999%  32sec
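The table values follow from simple arithmetic: downtime = (1 - SLA) x one year. A quick sketch to verify them (assuming a 365-day year):

```javascript
// Annual downtime for a given SLA percentage, assuming a 365-day year
function annualDowntime(slaPercent) {
  const yearSeconds = 365 * 24 * 60 * 60;
  let remaining = Math.round(yearSeconds * (1 - slaPercent / 100));
  const days = Math.floor(remaining / 86400);
  remaining -= days * 86400;
  const hours = Math.floor(remaining / 3600);
  remaining -= hours * 3600;
  const minutes = Math.floor(remaining / 60);
  const seconds = remaining - minutes * 60;
  return { days, hours, minutes, seconds };
}

console.log(annualDowntime(99.9)); // 8h, 45min, 36sec
```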
The fixation on SLAs has a clinical term: OCD. Any change is considered as dangerous as someone holding a knife to your throat and asking you to dance.
Looking at some of the figures (which I can't share), I would claim that, short of highly parallel (and expensive) transaction systems, anything above 99.9% is wishful thinking. That doesn't deter negotiators from aiming for a "look how many 9s I got" trophy. (The Buddha reminds us: one cause of suffering is closing your eyes to reality.) Expensive SLA violation clauses let outsourcers freeze all systems, since any change (read: patches, upgrades, enhancements) is rightly identified as a grave risk (to the profits).
So all sorts of processes and checks get implemented to vet any change request and, in practice, avoid it.
This usually leads to a lot of bureaucracy and glacial progress. As a result discontent grows, especially around non-transactional systems: outdated eMail clients, lack of mobile support, etc.
The relation between outsourcer and outsourcee inevitably grows challenging over time. Does it have to be that way?

Some fresh thinking

Just moving to the cloud might not be the answer (or everybody would already be there, it's such a nice place). So what could be done? Here are some thoughts:
  • Kiss the wholesale SLA agreement goodbye. Classify systems based on business impact. A booking system for an airline surely deserves three nines (I doubt that four would make sense), while a website can live with one nine (as long as it is distributed over the year)
  • Take a page from the PaaS offerings: each element of the environment has a measurement and a price, so the outsourcing provider can offer à la carte services instead of freezing the environment. A catalogue entry could be "Running a current and patched DB/2", another entry could be "Running a legacy IIS, version xx"
  • Customer and provider would agree on an annual catalogue value, based on the starting environment and any known plans at the time
  • The catalogue would allow decommissioning unneeded systems and replacing them with successors without much hassle (out with PHP, in with node.js)
  • Automate, automate, automate - an outsourcer without DevOps (Puppet, Chef and tight monitoring) didn't get the 2017 message
  • Transparency: running systems over processes, customer satisfaction over unrealistic SLAs, automation over documentation (I hear the howling), repeatable procedures over locked-down environments
What do you think?

Posted by on 2017-02-08 07:00 | Comments (3) | categories: Software

SAML and the Command Line

One of the best kept secrets of Connections Cloud S1 is the Traveler API. The API allows interactions that are missing from the Admin UI, like deleting a specific device or implementing an approval workflow.
Unfortunately the API only offers authentication via SAML; OAuth and BasicAuth are missing. So any application interacting with the API needs to do The SAML Dance. That's annoying when you have a UI to work with, and a formidable challenge when you have a command-line application, like a cron job running unsupervised at intervals.
One lovely step in the process: the IBM IdP returns an HTML page with a hidden form containing the SAML assertion result, to be posted back to the application provider. Quite interesting when your application provider is a command-line app. Let's get to work.
The script is written in node.js and uses the request and fast-html-parser npm packages. The first step is to load the login form (which comes with a first set of cookies):
var requestOptionsTemplate = {
    headers: {
        'Origin': 'https://api.notes.ap.collabserv.com/api/traveler/',
        'User-Agent': 'Fancy CommandLine Script',
        'Connection': 'keep-alive',
        'Cache-Control': 'max-age=0',
        'Upgrade-Insecure-Requests': 1
    },
    method: 'GET'
};

function scLoginPart1() {
    console.log('Authenticating to SmartCloud ...');
    var requestOptions = Object.assign({}, requestOptionsTemplate);
    requestOptions.url = 'https://apps.na.collabserv.com/manage/account/dashboardHandler/input';
    request(requestOptions, scLoginPart2);
}
The function calls the URL where the login form can be found. The result gets delivered to the function scLoginPart2. That function makes use of a global configuration variable config that was created through const config = require("./config.json") and contains all the credentials we need. Step 2 submits the form and hands over to Step 3.
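A matching config.json could look like this (the structure is inferred from the properties the script accesses; the values are placeholders):

```json
{
    "smartcloud": {
        "user": "jane.doe@example.com",
        "password": "secret"
    }
}
```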
function scLoginPart2(err, httpResponse, body) {
    if (err) {
        return console.error(err);
    }
    // Capture cookies
    var outgoingCookies = captureCookies(httpResponse);
    var requestOptions = Object.assign({}, requestOptionsTemplate);
    requestOptions.headers.Cookie = outgoingCookies.join('; ');
    requestOptions.headers['Content-Type'] = 'application/x-www-form-urlencoded';
    requestOptions.method = 'POST';
    requestOptions.url = 'https://apps.ap.collabserv.com/pkmslogin.form';
    requestOptions.form = {
        'login-form-type': 'pwd',
        'error-code': '',
        'username': config.smartcloud.user,
        'password': config.smartcloud.password,
        'show_login': 'showLoginAgain'
    };
    request(requestOptions, scLoginPart3);
}

function captureCookies(response) {
    var incomingCookies = response.headers['set-cookie'];
    // Array, allows for duplicate cookie names
    var outgoingCookies = [];
    if (incomingCookies) {
        incomingCookies.forEach(function(cookie) {
            // Keep only the name=value part, drop path/expiry attributes
            outgoingCookies.push(cookie.split(';')[0]);
        });
    }
    return outgoingCookies;
}

Parts 3 and 4 finally collect all the cookies we need, so we can turn our attention to getting the API token in step 5.
function scLoginPart3(err, httpResponse, body) {
    if (err) {
        console.error('Login failed miserably');
        return console.error(err);
    }
    // Login returns not 200 but 302
    // see https://developer.ibm.com/social/2015/06/23/slight-changes-to-the-form-based-login/
    if (httpResponse.statusCode !== 302) {
        return console.error('Wrong status code received: ' + httpResponse.statusCode);
    }

    var outgoingCookies = captureCookies(httpResponse);
    var redirect = httpResponse.headers.location;

    // This is the 3rd request we need to make to finally get all cookies for app.na
    var requestOptions = Object.assign({}, requestOptionsTemplate);
    requestOptions.headers.Cookie = outgoingCookies.join('; ');
    requestOptions.url = redirect;
    request(requestOptions, scLoginPart4);
}

function scLoginPart4(err, httpResponse, body) {
    if (err) {
        console.error('Login redirect failed miserably');
        return console.error(err);
    }
    var cookieHarvest = captureCookies(httpResponse);
    // Now we have some cookies in app, we need the SAML dance for api.notes
}

In Part 5 we first request the URL with the actual data (devices in our case), but get another SAML dance step, since we have apps.na vs. api.notes in the URL.
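The hidden form mentioned at the beginning is the part that trips up command-line clients. A minimal sketch of pulling its fields out of the IdP's HTML page (using a plain regex here instead of fast-html-parser; the sample page, field names and values are illustrative):

```javascript
// Extract hidden input fields (e.g. the SAML assertion) from the IdP's HTML page.
// The real script uses fast-html-parser; a regex is enough for this sketch.
function extractHiddenFields(html) {
  var fields = {};
  var re = /<input[^>]*type="hidden"[^>]*name="([^"]+)"[^>]*value="([^"]*)"[^>]*>/g;
  var match;
  while ((match = re.exec(html)) !== null) {
    fields[match[1]] = match[2];
  }
  return fields;
}

// Illustrative sample of the page the IdP posts back to the service provider
var samplePage = '<form method="POST" action="https://api.notes.ap.collabserv.com/sps/acs">' +
    '<input type="hidden" name="SAMLResponse" value="PHNhbWw+...">' +
    '<input type="hidden" name="RelayState" value="traveler"></form>';
console.log(extractHiddenFields(samplePage).RelayState); // traveler
```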

Read more

Posted by on 2017-01-30 09:41 | Comments (1) | categories: NodeJS JavaScript

GIT deploy your static sites - Part 1

When you, in principle, like the idea of serving SPAs from the http server, you will encounter the pressing question: how do you get your application deployed onto the http server? This applies to nodeJS applications too, but that is a story for another time.
On Bluemix that's easy: just use a Pipeline.
For mere mortal environments there are several options:
  • Just FTP them - insecure unless you use sftp/scp. Big pain here: deleting obsolete files
  • Set up rsync. When done with an ssh certificate, it can be reasonably automated. The same pain applies: deleting obsolete files
  • Use a GIT based deployment. This is what I will discuss further
I like a repository-based deployment since it fits nicely into a development-based workflow. The various git GUI tools provide insight into what has changed between releases, and if things go wrong, you can roll back to a previous version or wipe the data and re-establish it from the repository. Designing the flow, I considered the following constraints:
  • The repositories would sit on the web server
  • Typically a repository would sit in .git inside the site directory. While you could protect that with access control, I decided against it and keep the repositories in separate directories
  • When pushing to the master branch, the site should get updated, not on any other branch. You can extend my approach to push other branches to other sites - so you get a test/demo/staging capability
  • Setting up a new site should be fast and reliable (including https - but that's part 2)
The "secret" ingredient here is git hooks, specifically the post-receive hook. Hooks, in a nutshell, are shell scripts that are triggered by events happening in a git environment. I got inspired by this entry, but wanted to automate the setup.
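A post-receive hook receives one "oldrev newrev refname" line per pushed ref on stdin, and only a push to master should trigger the site checkout. The filtering logic can be sketched in node.js (the actual hook is a few lines of shell; the sample revisions are illustrative):

```javascript
// Decide which pushed refs should trigger a site deployment.
// A post-receive hook gets lines of "oldrev newrev refname" on stdin.
function refsToDeploy(stdinLines) {
  return stdinLines
    .split('\n')
    .filter(function(line) { return line.trim().length > 0; })
    .map(function(line) { return line.split(' ')[2]; })
    .filter(function(refname) { return refname === 'refs/heads/master'; });
}

// A push that updates a feature branch and master: only master deploys
var push = 'abc123 def456 refs/heads/feature-x\n' +
           'abc123 def456 refs/heads/master\n';
console.log(refsToDeploy(push)); // [ 'refs/heads/master' ]
```

The same filter extends naturally to the test/demo/staging idea above: map other refs/heads/* branches to other target sites instead of discarding them.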

Read more

Posted by on 2017-01-12 12:26 | Comments (0) | categories: nginx WebDevelopment

Serving Single Page Applications with Domino

Single Page Applications (SPA) are all the rage. They get developed with AngularJS, ReactJS or {insert-your-framework-of-choice}. Those share a few commonalities:
  • the application is served by a static web server
  • data is provided via an API, typically reading/writing JSON via REST or graph
  • authentication is often long lasting (remember me...) based on JWT
  • authentication is highly flexible: login with {facebook|google|linkedin|twitter} or a corporate account. Increasingly, two-factor authentication is used (especially in Europe)
How does Domino fit into this picture with its integrated http stack, authentication and database? The answer isn't very straightforward. Bundling components creates ease of administration, but carries the risk that new technologies get implemented late (or not at all). For anything internet-facing that's quite a risk. So here is what I would do:
Red/Green Zone layout for Domino
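A red/green zone layout typically means a reverse proxy in front of Domino: the static SPA lives in the green zone, the data API stays in the red zone. A minimal nginx sketch of the idea (hostnames and paths are illustrative):

```nginx
server {
    listen 443 ssl;
    server_name blog.example.com;

    # Green zone: static SPA files served directly by nginx
    root /var/www/spa;

    # Red zone: API calls proxied to the internal Domino server
    location /api/ {
        proxy_pass https://domino.internal.example.com/api/;
    }
}
```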

Read more

Posted by on 2017-01-11 04:14 | Comments (3) | categories: IBM Notes XPages