wissel.net

Usability - Productivity - Business - The web - Singapore & Twins

By Date: March 2012

Story Telling


Antoine de Saint-Exupéry is credited with the quote "If you want to build a ship, don't drum up the men to gather wood, divide the work and give orders. Instead, teach them to yearn for the vast and endless sea."
Story telling is a powerful medium to get "your message" across. It reaches far beyond the confines of one's early years and the collection lovingly preserved by The Brothers Grimm.
All of business runs on stories; when unleashed on customers they are often compressed into 30 seconds of high color (Super Bowl, anybody?).
We love to hear the latest stories about colleagues and celebrities (a.k.a. gossip) and spin our yarns about how wonderful work will be when implementing [Insert the current fancy here].
In IT our stories are called use cases, and they convey the advantages of the new system or process to users who shouldn't have to care about the technical and implementation details. Since they tell a story about systems not yet in existence, could we call them "science fiction"? Following our proposals will, in many cases, alter the way people work and perceive their work, so we should stick to good basic journalism to present it:
Story Telling using the 5W and one H
It doesn't hurt when our stories are entertaining. Applying quis, quid, ubi, quando, cur and quomodo to others' stories helps to find the weaknesses in the plot. More often than not the better story, not the better product, wins. Despite the case for evidence-based management, stories beat facts by a long shot. So get your story right!

Posted by on 31 March 2012 | Comments (1) | categories: After hours

Webservices in XPages - AXIS vs. CXF


I wrote about using Apache AXIS to connect to a web service before. It worked like a charm once you imported the AXIS library into the NSF. In the past few days I tried to move this code into an extension library, and initially I had very little success. Here is what I tried and what finally worked:
  1. The AXIS libraries are provided as a plug-in on the Domino server, so my first take was to declare a dependency in my plug-in on that plug-in. To successfully create an Extension Library I had to declare dependencies on com.ibm.commons and com.ibm.xsp.core to be able to extend com.ibm.commons.Extension. Unfortunately the plug-ins expose Java commons logging, so the classloader complained and didn't load the AXIS classes
  2. The second attempt was to split the code into two plug-ins: one that depended on the AXIS plug-in and another that depended on the former as well as com.ibm.commons and com.ibm.xsp.core. This didn't yield any better results
  3. The third test was to import the AXIS jars into a plug-in and only depend on com.ibm.commons and com.ibm.xsp.core, but not the AXIS plug-in. That didn't work either
  4. Finally I tried to switch from AXIS to the newer and more flexible Apache CXF. CXF is able to provide transport over many protocols, not only SOAP and REST but JMS and others too. Initially I felt it was too complex for my little requirement, especially since the list of jar dependencies is quite long.
    Turns out CXF is the optimal solution. I used CXF 2.5.2 and the provided wsdl2java utility. The command line I used is [path-to-cxf-install]/bin/wsdl2java -frontend jaxws21 -client your.wsdl. The -client parameter is optional, but generates a nice sample client that shows the call parameters for all web service methods.
    I found I either need to include the WSDL file in the jar or point to the URL where the WSDL file is available online to get CXF to run. Now for the best part: all dependencies including CXF are available in Domino (I tested on 8.5.3), so other than the generated Java code I didn't have to import or depend on anything (it actually might work even in a Java agent). So CXF is the way to go.
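A sketch of the two options for locating the WSDL, using the generated service class from the example below (the classpath resource name is my assumption about how you packaged the WSDL into the jar):
import java.net.URL;
import javax.xml.namespace.QName;
import com.aonaware.services.webservices.DictService;

public class ServiceLocator {
    static final QName SERVICE_NAME = new QName("http://services.aonaware.com/webservices/", "DictService");

    public static DictService locate() throws Exception {
        // Option 1: the WSDL packaged inside the jar (resource name is an assumption)
        URL wsdl = ServiceLocator.class.getResource("/DictService.wsdl");
        if (wsdl == null) {
            // Option 2: point to the URL where the WSDL is available online
            wsdl = new URL("http://services.aonaware.com/DictService/DictService.asmx?WSDL");
        }
        return new DictService(wsdl, SERVICE_NAME);
    }
}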
Update: As Bernd pointed out in the comments, it is important to update the permissions ([Notes program directory]/jvm/lib/security/java.policy):
grant {
    permission java.lang.RuntimePermission "setContextClassLoader";
    permission java.lang.reflect.ReflectPermission "suppressAccessChecks";
};

For testing I used a dictionary service that returns the definitions found for a given word. My class (the part I had to write myself) is rather short:
package demo;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.net.URL;
import java.util.List;

import javax.xml.namespace.QName;

import com.aonaware.services.webservices.Definition;
import com.aonaware.services.webservices.DictService;
import com.aonaware.services.webservices.DictServiceSoap;
import com.aonaware.services.webservices.WordDefinition;

public class Dict2Test {
    public final static String WORD_TO_LOOK_FOR = "Trust";
    public final static String WSDL_URL = "http://services.aonaware.com/DictService/DictService.asmx?WSDL";
    public static final QName SERVICE_NAME = new QName("http://services.aonaware.com/webservices/", "DictService");

    public static void main(String[] args) throws IOException {
        Dict2Test dt = new Dict2Test();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        dt.getWordInfo(out, WORD_TO_LOOK_FOR);
        System.out.println(out.toString());
    }

    public void getWordInfo(OutputStream out, String theWord) {
        OutputStreamWriter w = new OutputStreamWriter(out);
        try {
            DictService ds = new DictService(new URL(WSDL_URL), SERVICE_NAME);
            DictServiceSoap dss = ds.getDictServiceSoap();
            WordDefinition wd = dss.define(theWord);
            List<Definition> allDef = wd.getDefinitions().getDefinition();
            if (allDef.isEmpty()) {
                w.append("<h1>No definition found for ");
                w.append(theWord);
                w.append("</h1>\n");
            } else {
                w.append("<h1>You were looking for: ");
                w.append(theWord);
                w.append("</h1>\n<ul>");
                for (Definition oneD : allDef) {
                    w.append("\n<li>");
                    w.append(oneD.getWordDefinition());
                    w.append("</li>");
                }
                w.append("\n</ul>");
            }
        } catch (Exception e) {
            try {
                w.append("<h1>" + e.getMessage() + "</h1>");
            } catch (IOException e1) {
                e1.printStackTrace();
            }
        } finally {
            // Close (and thus flush) the writer in all cases, not only on errors
            try {
                w.close();
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        }
    }
}
To test the class I defined it as a managed bean in faces-config.xml:
<managed-bean>
    <managed-bean-name>dictionary</managed-bean-name>
    <managed-bean-class>demo.Dict2Test</managed-bean-class>
    <managed-bean-scope>request</managed-bean-scope>
</managed-bean>
and created an XAgent with the following code

Read more

Posted by on 29 March 2012 | Comments (e) | categories: XPages

Connecting to Wireless@SGx using Ubuntu Linux


Singapore's island-wide wireless network Wireless@SG provides encrypted and unencrypted WiFi access. Unless you are a fan of being a Firesheep target, you want to use Wireless@SGx with encryption. SingTel provides detailed instructions for many platforms, with the unsurprising absence of instructions for Linux. So here you go:
  1. You will need the certificate they use from GoDaddy. So go to their certificate site and download the "Go Daddy Class 2 Certification Authority" in DER format. Note down the SHA1 checksum for the file: 27 96 BA E6 3F 18 01 E2 77 26 1B A0 D7 77 70 02 8F 20 EE E4. Optional (but highly recommended): open a terminal and verify the checksum of your download: sha1sum gd-class2-root.cer
  2. In your network manager applet connect to Wireless@SGx. You will be prompted with "Wireless Network Authentication Required"
    Wireless SGx settings
  3. Fill in the form:
    • Wireless security: WPA & WPA2 Enterprise
    • Authentication: Protected EAP (PEAP)
    • Anonymous identity: leave empty
    • CA certificate: gd-class2-root.cer (The one you downloaded in step 1)
    • PEAP version: Automatic
    • Inner authentication: MSCHAPv2
    • Username/Password: Your Wireless@SG credentials
A little embarrassment for SingTel: the registration page (as of today, 23 March 2012) uses an outdated https certificate.
As usual YMMV

Posted by on 23 March 2012 | Comments (2) | categories: Buying Broadband Linux Singapore

Make Java code for XAgents easy to test


One of the popular entries in this blog is my introduction to XAgents together with related considerations and practical applications. There are a few patterns that make XAgents easier to develop and debug.
Despite outstanding tool contributions, debugging in XPages is still wanting, so any code that you can incorporate pre-debugged is a bonus. Here is what I do:
  • I try to write my logic in Java. I use the SSJS only to fetch the values I need to call my Java function
  • The functions of the Java class always take the OutputStream (or the ResponseWriter) as a parameter. This is where the output goes
  • For rendering XML or HTML output I use SAX with a little helper - see the sketch after this list (making one of these for JSON would be an interesting exercise)
  • I try to avoid using XPages specifics (e.g. the variable resolver) in my classes containing the logic, but rather rely on dependency injection to ease testing
  • I debug and test the Java parts outside of XPages
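To give you an idea what such a little SAX helper could look like, here is a minimal sketch using only the JDK (the class and method names are made up for illustration):
import java.io.OutputStream;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXTransformerFactory;
import javax.xml.transform.sax.TransformerHandler;
import javax.xml.transform.stream.StreamResult;
import org.xml.sax.helpers.AttributesImpl;

public class SaxHelper {
    // Wires a SAX TransformerHandler to an arbitrary OutputStream
    public static TransformerHandler startDocument(OutputStream out) throws Exception {
        SAXTransformerFactory factory = (SAXTransformerFactory) TransformerFactory.newInstance();
        TransformerHandler handler = factory.newTransformerHandler();
        handler.getTransformer().setOutputProperty(OutputKeys.INDENT, "yes");
        handler.setResult(new StreamResult(out));
        handler.startDocument();
        return handler;
    }

    public static void main(String[] args) throws Exception {
        TransformerHandler h = startDocument(System.out);
        h.startElement("", "greeting", "greeting", new AttributesImpl());
        String text = "Hello SAX";
        h.characters(text.toCharArray(), 0, text.length());
        h.endElement("", "greeting", "greeting");
        h.endDocument();
    }
}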
A little sample. Imagine I have a class that renders a Notes document into a PDF (with a fixed style). In my XPages code (beforeRenderResponse) I would write:
// The external context and the response object
var exCon = facesContext.getExternalContext();
var response = exCon.getResponse();

// Deliver uncached PDF
response.setContentType("application/pdf");
response.setHeader("Cache-Control", "no-cache");
// Force file name and save dialog
response.setHeader("Content-Disposition", "attachment; filename=invoice.pdf");

// The output stream for binary output
var out = response.getOutputStream();
var pdfProcessor = new com.notessensei.tools.PDFFormatter();

// Here the call, where "document" is a Notes document
pdfProcessor.renderInvoice(out, document);

// Done
facesContext.responseComplete();
out.close();
Not really rocket science. The beauty of this approach: I can have a standalone Java application to test my PDFFormatter class. I use Eclipse for that. In Eclipse I have configured the Notes client's JVM as the runtime for the project. This way all Notes Java classes are found. Most likely you need the Notes program directory on your system path (but you know that). A simple testing class can look like this:
import java.io.File;
import java.io.FileOutputStream;

import lotus.domino.Document;
import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.Session;

import com.notessensei.tools.PDFFormatter;

public class Tester {

    public static String theURL = "Notes:///8525744100531C7D/F4B82FBB75E942A6852566AC0037F284/034DAEE1CEEE2FB58525744000719185";

    public static void main(String[] args) throws Exception {
        NotesThread.sinitThread();
        Session s = NotesFactory.createSession();
        Document document = (Document) s.resolve(theURL);
        PDFFormatter pdfProcessor = new PDFFormatter();
        FileOutputStream out = new FileOutputStream(new File("invoice.pdf"));
        pdfProcessor.renderInvoice(out, document);
        out.close();
        NotesThread.stermThread();
    }
}
Of course this approach also works when you plan to use JUnit or a similar framework. Finally everything can be glued together with ANT for a fully automated environment, but that's a story for another time.
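For illustration, a minimal JUnit 4 sketch along the same lines (it reuses the hypothetical PDFFormatter and the Tester URL from above and assumes the Notes JVM is configured as described):
import static org.junit.Assert.assertTrue;

import java.io.ByteArrayOutputStream;

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

import com.notessensei.tools.PDFFormatter;

import lotus.domino.Document;
import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.Session;

public class PDFFormatterTest {

    private static Session session;

    @BeforeClass
    public static void startNotes() throws Exception {
        NotesThread.sinitThread();
        session = NotesFactory.createSession();
    }

    @AfterClass
    public static void stopNotes() {
        NotesThread.stermThread();
    }

    @Test
    public void rendersAPdf() throws Exception {
        Document document = (Document) session.resolve(Tester.theURL);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        new PDFFormatter().renderInvoice(out, document);
        byte[] result = out.toByteArray();
        // Every PDF file starts with the magic bytes "%PDF"
        assertTrue(result.length > 4);
        assertTrue("%PDF".equals(new String(result, 0, 4)));
    }
}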
As usual: YMMV

Posted by on 21 March 2012 | Comments (6) | categories: XPages

Reuse web agents that PRINT to the browser in XPages


When upgrading classic Domino applications to XPages one particular problem arises constantly: "what to do with the PRINT statements in existing agents that write back directly to the browser?" Currently there is no automatic way to capture this output.
However, with a little refactoring of the agent the output can be recycled. You can use a computed field for the result, showing it on a page that maintains the overall layout of your new application, or use the XAgent approach to replace the whole screen (I'm not discussing the merits of that here). These are the steps:
  1. Make sure your agent is set to "Run as web user" in the agent properties
  2. Add the AgentSupport LotusScript library to your agent
  3. Initialize the ResultHandler class that will take in the print statements:
    Dim result As ResultHandler
    Set result = New ResultHandler
  4. Use Search & Replace in your agent and replace Print with result.prt
  5. At the end add Call result.save()
  6. In your XPage or CustomControl add the AgentSupportX SSJS library
  7. Get the result with a call to agentResult("[name-of-your-agent]"). You can process it further or display it in a computed field etc.
The sample code is for illustration only. You want to add proper error handling to it. My test XPage looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core">
    <xp:this.resources>
        <xp:script src="/AgentSupportX.jss" clientSide="false"></xp:script>
    </xp:this.resources>
    This comes from an agent:
    <xp:text escape="false" id="computedField1">
        <xp:this.value><![CDATA[#{javascript:agentResult("SampleAgent");}]]></xp:this.value>
    </xp:text>
</xp:view>
The sample agent looks like this:
Option Public
Option Declare

Use "AgentSupport"

Dim result As ResultHandler

Sub Initialize
    Set result = New ResultHandler

    result.prt "Some message"
    result.prt "<h1> a header </h1>"

    Call result.save()

End Sub
Of course the interesting parts are the two script libraries.
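The actual libraries are in the full article; as a rough illustration of the underlying idea - an agent writing its output into an in-memory document that the caller reads back - a hypothetical Java rendition could look like this (the item name and the helper's class and method names are my assumptions, not the real AgentSupportX API):
import lotus.domino.Agent;
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.NotesException;

public class AgentResultSketch {
    // Hypothetical: run an agent and read back what it "printed",
    // assuming its ResultHandler.save() stores the captured output
    // in an item called "AgentOutput" on the context document
    public static String agentResult(Database db, String agentName) throws NotesException {
        Document context = db.createDocument(); // in-memory, never saved
        Agent agent = db.getAgent(agentName);
        agent.runWithDocumentContext(context); // the agent sees this as Session.DocumentContext
        String result = context.getItemValueString("AgentOutput");
        context.remove(true);
        return result;
    }
}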

Read more

Posted by on 16 March 2012 | Comments (4) | categories: XPages

How much abstraction is healthy for a schema/data model? - Part 2


In Part 1 I discussed Elements vs. Attributes and the document nature of business vs. the table nature of RDBMS. In this installment I'd like to shed some light on abstraction levels.
I'll be using a more interesting example than CRM: a court/case management system. When I was at law school one of my professors asked me to look out of the window and tell him what I saw. So I replied: "Cars and roads, a park with trees, buildings and people entering and leaving them and so on". "Wrong!" he replied, "you see subjects and objects".
From a legal view you can classify everything like that: subjects are actors on rights, while objects are attached to rights.
Interestingly, in object oriented languages like Java or C# you find a similar "final" abstraction where everything is an object that can be acted upon by calling its methods.
In data modeling the challenge is to find the right level of abstraction: too low and you duplicate information, too high and a system becomes hard to grasp and maintain.
Let's look at some examples. In a court you might be able to file a civil, criminal, administrative or inheritance case. Each filing consists of a number of documents. So when collecting the paperwork during your contextual enquiry you end up with draft 1:
<caselist>
    <civilcase id="ci-123"> ... </civilcase>
    <criminalcase id="cr-123"> ... </criminalcase>
    <admincase id="ad-123"> ... </admincase>
    <inheritcase id="in-123"> ... </inheritcase>
</caselist>
(I'll talk about the inner elements later.) The content will most likely be very similar, with plaintiff and defendant and the representing lawyers etc. So you end up writing a lot of duplicate definitions. And you need to add a completely new definition (and update your software) when the court adds "trade disputes" and, after the V landed, "alien matters" to the jurisdiction.
Of course keeping the definitions separate has the advantage that you can be much more prescriptive. E.g. in a criminal case you could have an element "maximum-penalty" while in a civil case you would use "damages-sought". This makes data modeling as much a science as an art.
To confuse matters more for the beginner: you can mix schemata, so you can mix the specialised information into a more generalised base schema. IBM uses that approach for IBM Connections, where the general base schema is ATOM and missing elements and attributes are mixed in via a Connections-specific schema.
You find a similar approach in MS Sharepoint, where a Sharepoint payload is wrapped into 2 layers of open standards: ATOM and OData (to become proprietary at the very end).
When we abstract the case schema we would probably use something like:
<caselist>
    <case id="ci-123" type="civil"> ... </case>
    <case id="cr-123" type="criminal"> ... </case>
    <case id="ad-123" type="admin"> ... </case>
    <case id="in-123" type="inherit"> ... </case>
</caselist>
A little "fallacy" here: in the id field the case type is duplicated. While this not in conformance with "the pure teachings" is is a practical compromise. In real live the case ID will be used as an isolated identifier "outside" of IT. Typically we find encoded information like year, type, running number, chamber etc.
One could argue, a case just being a specific document and push for further abstraction. Also any information inside could be expressed as an abstract item:
<document type="case" subtype="civil" id="ci-123">
    <content name="plaintiff" type="person">Peter Pan </content>
    <content name="defendant" type="person">Captain Hook </content>
</document>
Looks familiar? Presuming you could have more than one plaintiff you could write:
<document form="civilcase">
    <noteinfo unid="AA12469B4BFC2099852567AE0055123F">
        <created>
            <datetime>20120313T143000,00+08 </datetime>
        </created>
    </noteinfo>
    <item name="plaintiff">
        <text>Peter Pan </text>
        <text>Tinkerbell </text>
    </item>
    <item name="defendant">
        <text>Captain Hook </text>
    </item>
</document>
Yep - good ol' DXL! While this is a good format for a generalised information management system, it is IMHO too abstract for your use case. When you create forms and views, you actually demonstrate the intent to specialise. The beauty here: the general format of your persistence layer won't get in the way when you modify your application layer.
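If you want to peek at this format for your own data, DXL export is a few lines with the standard lotus.domino API (a sketch; the database path and UNID are placeholders you'd point at one of your documents):
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.DxlExporter;
import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.Session;

public class DxlPeek {
    public static void main(String[] args) throws Exception {
        NotesThread.sinitThread();
        try {
            Session session = NotesFactory.createSession();
            // Placeholders: point these at a database and document of yours
            Database db = session.getDatabase("", "cases.nsf");
            Document doc = db.getDocumentByUNID("AA12469B4BFC2099852567AE0055123F");
            DxlExporter exporter = session.createDxlExporter();
            System.out.println(exporter.exportDxl(doc)); // the document rendered as DXL
        } finally {
            NotesThread.stermThread();
        }
    }
}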
Of course this flexibility requires a little more care to make your application easy to understand for the next developer. Back to our example, time to peek inside. How should the content be structured there?

Read more

Posted by on 13 March 2012 | Comments (0) | categories: Software

How much abstraction is healthy for a schema/data model? - Part 1


When engaged in the art of data modelling everybody faces the challenge of finding the right level of abstraction. I find that challenge quite intriguing. Nobody would create attributes "Baker", "Lawyer", "Farmer" in a CRM system today, but one "profession" that can hold any of these professions as its value. A higher level of abstraction would be to have attribute-value pairs. So instead of "Profession" - "Baker" it would be "Attribute: Name=Profession, Value=Baker". Such constructs have the advantage of being very flexible: without changing the schema all sorts of different attributes can be captured. However, they make validation more difficult: are all mandatory attributes present, are only allowed attributes present and do all attributes have values in the prescribed range?
Very often data models are designed around the planned storage in an RDBMS. This conveniently overlooks that data modelling knows more approaches than just a physical data model and ER-diagrams. Tabular data in real life are confined to accountants' ledgers, while most of the rest are objects, documents and subjects (people, legal entities, automated systems - data actors so to speak) with attributes, components (sub-entries) and linear or hierarchical relations. Also, exchanging data requires complete and intact data sets, which lends itself more to the document than to the table approach (putting on my flame-proof underwear now).
In an RDBMS the attribute table would be a child table to the (people) master table, with the special challenge of finding a unique key that survives an export/import operation.
This is the reason why an XML Schema seems to be the reasonable starting point to model the master data model for your application. Thus a worthwhile skill is to master XML Schema. (It also helps to have a good schema editor; I have been using oXygen XML for many years.)
This won't stop you from still using an RDBMS to persist (and normalize) data, but the ER-schema wouldn't take centre stage anymore. Of course modern or fancy or venerable databases can deal with the document tree nature of XML quite well; I fancy DB2's pureXML capabilities quite a bit. But back to XML Schema (similar considerations apply to JSON, which lacks a schema language yet - work is ongoing).
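To make the validation promise concrete: once a schema exists, checking an instance document takes only a few lines with the JDK's built-in validator (a sketch; the file names are placeholders):
import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class SchemaCheck {
    public static void main(String[] args) throws Exception {
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        // Placeholders: your master data model and an instance document
        Schema schema = factory.newSchema(new File("customer.xsd"));
        Validator validator = schema.newValidator();
        // Throws a SAXException with details when mandatory attributes are
        // missing, extra ones are present or values fall outside the range
        validator.validate(new StreamSource(new File("customer.xml")));
        System.out.println("customer.xml conforms to customer.xsd");
    }
}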
Since XML knows elements (the stuff in opening and closing brackets, that can be nested into each other) and attributes (which are name/value pairs living inside the brackets) there are many variations to model a logical data entry. A few rules (fully documented in the official XML specifications) need to be kept in mind:
  • Element names can't contain fancy characters or spaces
  • Elements can, but don't need to have content
  • Elements can have other elements, text or CDATA (for fancy content) as children
  • Elements can, but don't need to have attributes
  • Element names can't start with "xml"
  • Attribute names can't contain fancy characters or spaces
  • Attribute values can't contain fancy characters; if present, they need to be encoded
  • Attributes should only exist once in an element
  • Attributes must have a value (it can be empty), but can't have children
There are different "design camps" out there. Some dislike the use of attributes at all, others try to minimize the use of elements, but as usual the practicality lies in the middle. So we have 2 dimensions to look at: Element/Attribute use in a tree structure and secondly the level of abstraction. Lets look at some samples (and their query expressions):
<customer>
    <id>1234</id>
    <name>John Doe</name>
    <status>active</status>
    <remarks>John loves roses and usually speaks to Jenny</remarks>
</customer>

Read more

Posted by on 10 March 2012 | Comments (0) | categories: Software

Who designed that process (a visit to the SingTel shop)?


First the good news: The sales rep at the SingTel shop, Brandon, was knowledgeable, patient and tried to help as much as he could, making that part of the process pleasant.
Now the breakdown of processes. I visited the SingTel shop with 2 objectives: to subscribe to Fibre Broadband (after all, on the 19th OpenNet will install the fibre end point) and to switch my mobile number back from Starhub to SingTel. I failed on both accounts.
Brandon explained: the installation of the end point doesn't coincide with the activation of the line (I wonder how they can test it then) and I have to wait for another notification letter from OpenNet stating the availability of the line, so that I can thereafter pick an ISP of my choosing. I asked: but I have chosen, can't I do the paperwork and let OpenNet and SingTel sort it out once they are ready? Nope, the process doesn't allow this level of customer service and prohibits me from taking up any of the IT Show promotions. Furthermore SingTel wants to charge me SGD 107.00 since I'm an existing SingTel ADSL customer with a contract younger than a year - a charge the ADSL salesman conveniently failed to mention, even when I clearly explained that the ADSL was only meant as the interim solution between Starhub cable (which was pathetically slow in the evening) and Fibre Broadband. Aaaaaargl.
My second item on the list was switching the mobile phone line back to SingTel. I had switched from SingTel to Starhub 2 years ago, but my changed usage pattern requires better and faster 3G coverage than Starhub offers. Besides, their customer service .... Since the number originally belonged to SingTel, Brandon explained, it could not be transferred back (what logic). The way to go would be to terminate my contract with Starhub, which would return the number to the SingTel pool (after a while), and then to apply for a new contract and ask to reuse that number from the SingTel pool. Besides leaving me with all the work, it would have the nasty side effect that I not only would be without a phone for a few days, but every caller would hear: "The number you have dialled is not in service". The irony: if I had a Starhub or M1 number, SingTel could transfer it without disruption. So I suggested: come up with a fancy form that I sign, your ops sends it to Starhub ops and you do the cancel/reconnect in one go. After all you want my business. Unfortunately the process is not designed that way. Nota bene: there is no change to any technical system necessary, just better coordination between the telcos (guess the regulator needs to have a word with them). Brandon promised to sort this out for me. Stay tuned.

Posted by on 10 March 2012 | Comments (0) | categories: Buying Broadband

Preparing for a boring flight - XPages.tv offline (Extract media from a feed)


David Leedy provides us with the incredibly useful Notes in 9 (a.k.a. XPages.tv) tutorials and insights about XPages and Notes. The feed with all the videos is hosted by FeedBurner. To enjoy them while off the grid you can subscribe to them using iTunes, but that's for Warmduscher.
I'll show you how to use curl and a shell script (easy to translate to a cmd file):
  1. First you download the feed: curl -G -L -o notesin9.xml http://feeds.feedburner.com/notesin9/iTunes
  2. Run the transformation: xslt notesin9.xml feedburner2curl.xslt getXPagesTV.sh (on Windows you would use .cmd)
  3. Make the script executable: chmod +x getXPagesTV.sh
  4. Fetch the movies: ./getXPagesTV.sh
This technique works for any media RSS feed (ATOM would need a different XSLT), so it is worth adding to the toolbox. There are a few moving parts (which you should have anyway): you need curl and an XSLT shell script (that uses a jar file) as well as the stylesheet to convert the feed into a command file. The XSLT command file looks like this:
#!/bin/bash
notify-send -t 500 -u low -i gtk-dialog-info "Transforming $1 with $2 into $3 ..."
java -cp /home/stw/bin/saxon9he.jar net.sf.saxon.Transform -t -s:$1 -xsl:$2 -o:$3
notify-send -t 1000 -u low -i gtk-dialog-info "Extraction into $3 done!"
(where only the line with "java..." is relevant, the rest is eye candy). The XSLT stylesheet isn't much more complicated (the statements are in one line each, so check the download version to get them right):
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns:media="http://search.yahoo.com/mrss/"
   version="2.0">
   
    <xsl:output indent="no" method="text"/>
   
    <xsl:template match="/">#!/bin/bash <xsl:apply-templates select="//media:content" /></xsl:template>
   
    <xsl:template match="media:content">
        curl -C - -G <xsl:value-of select="@url"/> -L -o <xsl:value-of select="reverse(tokenize(@url,'/'))[1]"/>
    </xsl:template>
   
</xsl:stylesheet>
The only interesting part is reverse(tokenize(@url,'/'))[1] which I use to get the file name - basically the string after the last /. "tokenize" and "reverse" need an XSLT 2.0 processor.
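If you'd rather stay in Java than XSLT 2.0, the same extraction can be sketched with the JDK's built-in XPath engine - the file name trick then becomes a plain lastIndexOf (the class name and the namespace-blind local-name() match are my shortcuts):
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class Feed2Curl {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document feed = dbf.newDocumentBuilder().parse(new File("notesin9.xml"));
        // All url attributes of media:content elements (matched by local name)
        NodeList urls = (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate("//*[local-name()='content']/@url", feed, XPathConstants.NODESET);
        System.out.println("#!/bin/bash");
        for (int i = 0; i < urls.getLength(); i++) {
            String url = urls.item(i).getNodeValue();
            String fileName = url.substring(url.lastIndexOf('/') + 1); // after the last /
            System.out.println("curl -C - -G " + url + " -L -o " + fileName);
        }
    }
}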
Update: Got a little carried away and used the wrong script, it only did XSLT 1.0, now corrected using Saxon and XSLT 2.0. Thx Ulrich for noticing.
As usual YMMV

Posted by on 02 March 2012 | Comments (3) | categories: XML XPages