FlowCAP is the Flow Cytometry Critical Assessment of Population identification methods challenge.
FlowCAP I occurred last summer, with 17 algorithms entered to classify five data sets. The competition was structured as four challenges, with more information provided to the competitors in each successive challenge.
Challenge 1 gave the competitors the data and a pat on the back for good luck.
Challenge 2 allowed for some tuning of the algorithms.
Challenge 3 participants were told the target number of clusters.
Challenge 4 participants were provided with a set of training data.
The results were collected, announced, and then discussed at the FlowCAP summit held at the NIH on September 21-22. The meeting provided the competitors an opportunity to become collaborators. Each participant explained their method, and the group swapped preprocessing, sampling, and training approaches. They discussed hardware and software means of making the process more high throughput, and in general improved their knowledge of the state of the practice. One of the final presentations featured the revelation that all of the algorithms, regardless of individual success, had been combined into an ensemble to classify the data, and this ensemble produced the best outcome of all.
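The summit talks did not spell out how the ensemble was built, but the simplest version of the idea is per-event majority voting across the algorithms' labels. Here is a minimal sketch of that scheme; the three "algorithm" outputs are invented for illustration:

```python
# Hypothetical sketch: combine per-event population labels from several
# independent algorithms by majority vote. The label lists below are
# made-up stand-ins for real classifier outputs.
from collections import Counter

def majority_vote(label_lists):
    """Return the consensus label for each event across all algorithms."""
    return [Counter(labels).most_common(1)[0][0]
            for labels in zip(*label_lists)]

algo_1 = [0, 1, 1, 2, 0]
algo_2 = [0, 1, 2, 2, 0]
algo_3 = [1, 1, 1, 2, 0]
print(majority_vote([algo_1, algo_2, algo_3]))  # [0, 1, 1, 2, 0]
```

Even this naive vote tends to wash out each individual algorithm's idiosyncratic mistakes, which is one plausible reason the ensemble beat every single entrant.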
The take home message from FlowCAP I was that algorithms can match expert classification. A paper will soon be submitted to Nature Biotechnology on the details of FlowCAP I.
I entered challenge 4 and used a radial basis support vector machine. The tool performed well, achieving the top rank score in the challenge. I've included my challenge entry in this post for anyone interested.
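For anyone curious what a radial basis SVM looks like in practice, here is a minimal sketch using scikit-learn's `SVC`. The synthetic two-population data stands in for real FCS events and is not from the challenge; scaling, `C`, and `gamma` choices here are illustrative assumptions, not my actual entry's settings:

```python
# Minimal sketch: train an RBF-kernel SVM on labeled "events" and
# classify new ones. Synthetic data only; real flow cytometry data
# would come from FCS files with many more events and markers.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Two synthetic populations in a 4-marker space
pop_a = rng.normal(loc=0.0, scale=1.0, size=(200, 4))
pop_b = rng.normal(loc=3.0, scale=1.0, size=(200, 4))
X = np.vstack([pop_a, pop_b])
y = np.array([0] * 200 + [1] * 200)

# Scaling matters for RBF kernels, since gamma is distance-based
scaler = StandardScaler().fit(X)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(scaler.transform(X), y)

# Classify a couple of new events
print(clf.predict(scaler.transform([[2.9, 3.1, 3.0, 2.8],
                                    [0.1, -0.2, 0.0, 0.1]])))
```

The training-data requirement is exactly why this approach fit challenge 4 and not the earlier ones: an SVM is supervised, so it needs labeled examples to learn the population boundaries.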
FlowCAP II is slated to happen this summer, with the goal of demonstrating that the algorithms can add value to the classification process, not just match a human expert.