Algorithms: you can't see them, touch them, or, if you cared to, taste them, but you know they are everywhere. Algorithms are the O2 of the technology bloodstream. Without them, networks die, information ceases to flow, and the pace of innovation slows to a crawl. Algorithms are what make your favorite apps so personalized and what deliver ever-larger sets of Big Data to clouds to speed up computational decisions that have legitimate impacts on humanity. Even companies that traditionally wouldn't consider themselves a "data company" are increasingly spending time and resources to prep data, mine data, understand data, and pair otherwise unrelated data sets in the hope of mashing up a solution that pays dividends for their enterprise. All of this data work is driven by sophisticated algorithms. So the question that matters to your enterprise is: how do you approach algorithmic innovation? Here are three compelling reasons to consider an Open Innovation approach to your biggest algorithmic challenges.
Not Just Success – Extreme, Repeatable Successes
At TopCoder, algorithm challenges often take the shape of what we call Marathon Matches: long-form challenges dealing with highly complex and sophisticated work, typically spanning two weeks of competition time. Our Marathon Matches attract hundreds of registrants and competitors, and when you have that many strong algorithmists focused on a problem, each bringing unique experience and perspective to their submissions, the results are staggering.
Remember that in a competitive Open Innovation community, you, the end user of the output, are after one thing: a truly valuable outcome.
Repeatable, rigorous, standards-based algorithm challenges, hosted on a mature Open Innovation platform and powered by a global community of solvers, are the most predictable way to attain extreme, repeatable successes.
Need for Speed, Two Ways
The first "speed" benefit is the obvious one. At TopCoder, as mentioned above, we host Marathon Matches that create stellar algorithmic solutions, and they typically take place in a two-week time frame or less. This is revolutionary in the arena of sophisticated algorithms. As an example, in a Harvard Medical School competition focused on DNA sequence alignment, the winning TopCoder solutions were delivered in two weeks' time. The existing gold standard then in use had been developed by a full-time resource working on the problem for a full year. Measured by delivery time alone, the TopCoder solution arrived roughly 96.2% faster than the traditional avenue could produce (2 weeks versus 52). But how did the algorithm perform? That is the second "speed" we are about to discuss.
Whether we point to the growth and fragmentation of mobile and tablet or to the shift caused by all things "social," we talk about a hyper-paced technology environment quite often. When we discuss "pace," there is no greater factor than the creation of new data. In fact, we just canvassed this very topic in a recent blog post, "Why this infographic on data is astounding, terrifying, and revolutionary," and the simple truth is that data creation is greatly outpacing the ability to handle that data effectively, and the pace is accelerating!
To return to the Harvard Medical School example above: where the existing gold-standard solution was able to calculate an accurate edit distance between a query DNA string and the original DNA string in 400 seconds, TopCoder's solution was able to do the same, at an improved level of accuracy, in only 12 seconds, a roughly 33x speedup.
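If the term is unfamiliar: edit distance is the minimum number of single-character insertions, deletions, and substitutions needed to turn one string into another. As a minimal sketch of the concept (the classic Wagner-Fischer dynamic program, not the TopCoder winning solution or the gold-standard tool, neither of which is reproduced here), consider:

```python
def edit_distance(query: str, reference: str) -> int:
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn `query` into `reference`
    (classic Wagner-Fischer dynamic programming, row by row)."""
    m, n = len(query), len(reference)
    prev = list(range(n + 1))  # distances for the empty query prefix
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if query[i - 1] == reference[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # delete from query
                          curr[j - 1] + 1,     # insert into query
                          prev[j - 1] + cost)  # substitute (or match)
        prev = curr
    return prev[n]

# Toy example with short DNA strings:
print(edit_distance("GATTACA", "GCATGCA"))  # 3
```

This textbook version runs in time proportional to the product of the two string lengths; the hard part in the competition was producing accurate results on genome-scale inputs, where naive quadratic approaches become impractical.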
Experiment Like Mad (Scientists)
James Kobielus, Senior Program Director, Product Marketing, Big Data Analytics at IBM, recently penned a fantastic article canvassing the need to continuously and incrementally innovate in the era of Big Data. Granted, the examples we just showcased are not incremental innovations; they are big, staggering improvements over existing gold-standard solutions. But James' article makes many stellar points, this being one of them: "If we can make our business model a tad smarter – in other words, more speedy, responsive, efficient, or flexible - we can differentiate where it counts. And if we can keep fresh innovations coming - week after week, quarter after quarter - we can make our competitive advantage durable over the long term. In so doing, we can shift the competitive playing field in our favor through process innovations that competitors can't easily match."
James goes on to say that the key to these advancements is experimentation in data science, and again, his point is spot on. But how does your organization hope to continuously innovate in Big Data? To experiment like mad scientists in a traditional business environment, you need incredible resources, not to mention the sheer manpower to handle this highly specialized work. The fact is, most organizations have neither, and very few have both. So what do you do?
You access platforms, you tap into global communities, and you run your data experiments, just as James suggests, but you do it through Open Innovation, powered by competitions. We discussed the why above (extreme-value outcomes and speed, two ways); now it's about how you execute.
To create value from advanced algorithms, you need faster, smarter solutions, delivered quickly on a scalable platform that allows you to experiment. Enterprise Open Innovation (EOI) is proving to be a remarkable way to do just that.