Lessons from Project OrangeBox
Project OrangeBox, the Solr-free search component, was launched after the experiments with Java 8, Vert.x and RxJava in TPTSNBN had concluded. With delivery promised, we were working against a tight deadline and burned way more midnight oil than I would have wished for.
Anyway, I had the opportunity to work with great engineers, and we shipped as promised. There are quite a few lessons to be learned; here we go (in no specific order):
- Co-locate
The Verse team is spread across the globe: the USA, Ireland, Belarus, China, Singapore and the Philippines. While this allows for 24x7 development, it also creates substantial communications overhead. We made the largest jumps in both features and quality during and after co-location periods. So any sizable project needs to start with, and be punctuated by, co-location time. Team velocity will greatly benefit.
- No holy cows
For VoP we slaughtered the "Verse is Solr" cow. That saved the Domino installed base a lot of investment in time and resources. Each project has its holy cows: interfaces, tool sets, "invaluable, immutable code", development patterns, processes. You have to be ready to challenge them by keeping a razor-sharp focus on customer success. Watch out for prima donnas (see the next item).
- No Prima Donnas
As software engineers we are very prone to perceiving our view of the world as the (only) correct one; after all, we create some of it. In a team setting that's deadly. Self-reflection and empathy are as critical to success as technical skills and perseverance.
Robert Sutton, one of my favourite Stanford authors, expresses that a little more boldly.
In short: a team can only be greater than the sum of its members when the individuals see themselves as members and don't hover above it.
- Unit tests are overrated
I hear howling; read on. Just as "a journey of a thousand miles begins with a single step", you can say: "great software starts with a unit test". Starts with, not: "great software consists of unit tests". A great journey that consists only of steps ends tragically in death by starvation, thirst or evil events.
The same applies to your test regime: you start with unit tests, write code, and pass it on to the next level of tests (module, integration, UI) etc. So unit tests are a conditio sine qua non for your test regime, but in no way sufficient.
- Test pyramid and good test data
Starting with unit tests (we used JUnit and EasyMock), you move up to module tests. There, still written in JUnit, you check the correctness of larger combinations. Then you have API tests for your REST API; here we used Postman and its node.js command-line runner Newman.
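To illustrate the unit-test level, here is a minimal JUnit 4 sketch using EasyMock to isolate a class from its collaborator. The MailStore interface and InboxSummary class are made up for this example; the record/replay/verify flow is standard EasyMock usage.

```java
import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class InboxSummaryTest {

    // Hypothetical collaborator we want to mock away
    interface MailStore {
        int countUnread(String userId);
    }

    // Hypothetical unit under test
    static class InboxSummary {
        private final MailStore store;
        InboxSummary(MailStore store) { this.store = store; }
        String describe(String userId) {
            int unread = store.countUnread(userId);
            return unread == 0 ? "all caught up" : unread + " unread";
        }
    }

    @Test
    public void reportsUnreadCount() {
        MailStore store = createMock(MailStore.class);
        expect(store.countUnread("jane")).andReturn(3);  // record the expected call
        replay(store);                                   // switch the mock to replay mode

        assertEquals("3 unread", new InboxSummary(store).describe("jane"));
        verify(store);                                   // assert the collaborator was used as expected
    }
}
```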
Finally you need to test end-to-end, including the UI. For that, Selenium rules supreme. Why not, e.g., PhantomJS? Selenium drives real browsers, so you can run automated tests against all rendering engines, which, unsurprisingly, behave differently.
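For flavour, a minimal Selenium WebDriver sketch in Java. The URL and element locators are placeholders, not the real Verse ones, and in a real suite this would live in a JUnit test rather than a main method:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class SearchSmokeTest {

    public static void main(String[] args) {
        // Swap in ChromeDriver, InternetExplorerDriver, etc. to cover the other rendering engines
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("https://mail.example.com/verse");   // placeholder URL

            // Wait for the (hypothetical) search box, type a query and submit it
            WebElement searchBox = new WebDriverWait(driver, 10)
                    .until(ExpectedConditions.visibilityOfElementLocated(By.id("searchBox")));
            searchBox.sendKeys("from:jane quarterly report");
            searchBox.submit();

            // Pass once at least one result shows up
            new WebDriverWait(driver, 10)
                    .until(ExpectedConditions.presenceOfElementLocated(By.cssSelector(".search-result")));
        } finally {
            driver.quit();
        }
    }
}
```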
One super-critical insight: you need a good set of diverse test data, both expected and unexpected inputs, in conjunction with the expected outputs. A good set of fringe data makes sure you catch challenges and border conditions early.
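One cheap way to keep such fringe data close to the code is a JUnit parameterized test that pairs inputs, expected and unexpected, with the outcomes you expect. Everything below is illustrative: the QueryParser stub stands in for whatever validation your component actually exposes.

```java
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class QueryInputTest {

    // Stand-in for the real parser; replace with your component's validation entry point
    static final class QueryParser {
        static boolean isValid(String input) {
            if (input == null || input.trim().isEmpty()) {
                return false;
            }
            long opens = input.chars().filter(c -> c == '(').count();
            long closes = input.chars().filter(c -> c == ')').count();
            return opens == closes;
        }
    }

    // Each row pairs a raw input with the outcome we expect - including the unexpected inputs
    @Parameters(name = "{index}: \"{0}\" -> {1}")
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
                { "from:jane@acme.com report", true },  // well-formed, expected input
                { "", false },                          // empty query
                { "   ", false },                       // whitespace only
                { "größe:über résumé", true },          // non-ASCII terms
                { "((((unbalanced", false },            // border condition: broken syntax
        });
    }

    private final String input;
    private final boolean shouldParse;

    public QueryInputTest(String input, boolean shouldParse) {
        this.input = input;
        this.shouldParse = shouldParse;
    }

    @Test
    public void parserHandlesFringeInput() {
        assertEquals(shouldParse, QueryParser.isValid(input));
    }
}
```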
Last but not least: have performance tests from the very beginning. We used both Rational Performance Tester (RPT) and Apache JMeter. RPT gives you a head start in creating tests, while JMeter's XML-file-based test cases were easier to share and manipulate. When you are short of test infrastructure (quite often the client running the tests is the limiting factor), you can offload JMeter tests to BlazeMeter or Flood.io.
- Measure, measure, measure
You need to know where your code is spending its time. We employed a number of tools to get good metrics. You want to look at the average, min, max and standard deviation of your calls. David even wrote a specific plugin to see the native calls (note open, design note open) our Java code would produce (this will result in future Java API improvements). The two main tools (besides watching the network tab in the browser) were New Relic, with deep instrumentation into our Domino server's JVM, and JAMon, collecting live statistics (which you can query on the Domino console using show stats vop).
Making measurement a default practice during code development makes your life much easier later on.
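For illustration, a minimal JAMon sketch of the wrap-the-call pattern. The class name, label and the work being timed are made up; MonitorFactory.start()/stop() is the standard JAMon API, and each label then shows up with hits, average, min, max and standard deviation in the collected statistics.

```java
import com.jamonapi.Monitor;
import com.jamonapi.MonitorFactory;

public class TimedBackendCall {

    public String fetchDocument(String unid) {
        // The label is the key under which JAMon aggregates hits, avg, min, max and std dev
        Monitor mon = MonitorFactory.start("vop.fetchDocument");
        try {
            return loadFromBackend(unid);
        } finally {
            mon.stop();   // always stop, even when the call throws
        }
    }

    private String loadFromBackend(String unid) {
        // hypothetical stand-in for the Domino call being measured
        return "document-" + unid;
    }
}
```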
- No work without a ticket
That might be the hardest part to implement. Any code item needs to be related to a ticket. For the search component we used GitHub Enterprise, pimped up with ZenHub.
A very typical flow is: someone (analyst, scrum master, offering manager, project architect, etc.) "owns" the ticket system and tickets flow down. Sounds awfully like waterfall (and it is). Breaking free from this and turning to "tickets are created by the developers and are the actual standup" greatly improves team velocity. This doesn't preclude the creation of tickets by others, to fill a backlog or to create and extend user stories. Look for the middle ground.
We managed to get GitHub tickets to work with Eclipse, which made it easy to create tickets on the fly. Once you are there, you can visualize progress using burn charts.
- Agile
"Standup meeting every morning 9:30, no exception" - isn't agile. That's process strangling velocity. Spend some time to rediscover the heart of Agile and implement that.
Typical traps to avoid:
- Use ticket closings as the (sole) metric. It only discourages the use of the ticket system as ongoing documentation
- Insist on process over collaboration. A "standup meeting" could be just a Slack channel most of the time. There is no need to spend time every day in a call or meeting, especially when the team is large
- Code is final - it's not. Refactoring is part of the package - including refactoring the various tests
- Isolate teams. If there isn't a lively exchange of progress, you end up with silo code. Requires mutual team respect
- Track "percent complete". This lives on the fallacy of 100% being a fixed value. Track units of work left to do (and expect that to eventually rise during the project)
- One-way flow. If the people actually writing code can't participate in shaping user stories or creating tickets, you have waterfall in disguise
- Narrow user definitions and stories: I always cringe at the Scrum template for user stories: "As a ... I want ... because/in order to ...". There are two fallacies: first, it presumes a linear, single-actor flow; secondly, it only describes what happens if it works. While it's a good start, adopting more complete use cases (the big brother of user stories) helps to keep the stories consistent. Go learn about Writing Effective Use Cases. The agile twist: a use case doesn't have to be complete to get started. Adjust and complete it as it evolves. Another little trap: the "users" in the user stories need to include infrastructure managers, db admins, code maintainers, software testers, etc. Basically anybody touching the app, not just the final (business) users
- No code reviews: looking at each other's code increases coherence in code style and accelerates bug squashing. Don't fall for the fallacy that productivity drops by 50% when two people stare at one screen - just the opposite happens
- Big screens
While co-located, we squatted in booked conference rooms with whiteboards, post-it walls and projectors. Some of the most efficient working hours were two or three pairs of eyes walking through code, both in source and debug mode. During quiet time (developers need ample amounts of that; the Bose solution isn't enough), 27 or more inches of screen real estate boost productivity. At my home office I run a dual-screen setup with more than one machine running (however, I have to admit: some of the code was written perched in a cattle-class seat travelling between Singapore and the US).
- Automate
We used both Jenkins and Travis as our automation platforms. The project used Maven to hold everything together. While Maven is a harsh mistress, spending the time to provide all automation targets proved invaluable.
You have to configure your test regime carefully. Unit tests should not only run in the CI environment, but also on a developer's workstation - for the code (s)he touches. A full integration test for VoP, on the other hand, runs for a couple of hours; that's a task better left to the CI environment. Our Maven tasks included generating the (internal) website and the JavaDoc.
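One way to implement such a split - a sketch, not necessarily how our build was wired - is a JUnit 4 category marker: tag the long-running tests and let the build exclude that category on the workstation while the CI job runs everything.

```java
import org.junit.Test;
import org.junit.experimental.categories.Category;

public class SearchModuleTest {

    // Marker interface used only to tag tests that are too slow for a workstation run
    public interface SlowIntegrationTest { }

    @Category(SlowIntegrationTest.class)
    @Test
    public void fullIndexRoundTrip() {
        // hours-long end-to-end check - excluded locally, executed on the CI server
    }

    @Test
    public void queryBuilderProducesValidSyntax() {
        // fast unit-level check - runs everywhere
    }
}
```

On the Maven side, the Surefire plugin's groups/excludedGroups settings (pointing at the marker interface's fully qualified name) decide which set runs in which environment.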
Lesson learned: setting up a full CI environment is quite a task. Getting repeatable datasets in place (especially when you have time-sensitive tests like "provide emails from the last hour") can be tricky. Lesson two: you will need more compute than expected, so plan for parallel testing.
- Ownership
David owned performance, Michael the build process, Raj the Query Parser, Christopher the test automation, and I the query strategy and core classes. Ownership didn't mean being the (l)only coder, but feeling responsible and taking the lead in that specific module. With this sense of ownership at the code level, we saw a number of refactoring exercises, to the benefit of the result, that would never have happened had we followed, Code Monkey style, an analyst's or architect's blueprint.
Posted by Stephan H Wissel on 03 January 2017 | Comments (3) | categories: Continuous integration, IBM Notes, Maven, Software