The week’s three top articles on crowdsourcing. Here we go.
Sharon Gaudin over at Computerworld shares this terrific piece about NASA’s “massive migration” to the cloud. She writes:
NASA migrated 65 software applications, including its flagship NASA.gov website, to the cloud in 22 weeks, and the space agency is still in the midst of a massive deployment to the cloud.
After completing that initial migration at what analysts called a breakneck pace, the head of NASA’s web services said the fun has just begun.
“The important thing is we’ve learned a lot in the last 18 months,” said Roopangi Kadakia, NASA’s web services executive. “Going through that, we were able to see how you optimize legacy applications. It can’t be business as usual. There’s a whole set of different ways to think about the cloud. If we get folks to that point, we can really start creating a strategy so you have better access to information anytime and anywhere.”
This piece reminds us of a few important points:
- Only 2-3 years ago, everyone was excited when Federal CIO Vivek Kundra migrated the federal government to Google Apps after having first done so with the DC government. The excitement was justified, but think how far we’ve come.
- Remember that only a small percentage (about 10%) of companies have actually migrated to the cloud. That means that huge opportunities remain for the implementation of cloud-based applications and systems.
- This was about collaboration. According to Kadakia, “I want to give people the ability to collaborate. I want to give them a repository on the cloud where we can be doing code sharing and code reuse within NASA. And we’re looking at disaster recovery as a service.” It’s nice to see NASA developing an asset library.
- Kadakia has some interesting things to say about data security in the cloud. Definitely worth a look, and keep in mind that NASA can’t afford to have its security compromised. Neither can you.
Mithun Sridharan at Smart Data Collective makes the case for Big Data as a service. Let’s cut straight to his main points and then address them.
- Many organizations lack the time, resources or analytical expertise (Data Scientists) to solve Big Data challenges in-house.
- Companies are slammed with internal data and operate in established structures that make innovating on existing frameworks challenging.
- Internal Big Data projects could experience schedule slippages, cost overruns, etc. due to the lack of prior experience in Big Data delivery.
- Lack of prior experience in Big Data analytics makes the problem seem too difficult to solve internally, or the steps needed to arrive at a pragmatic solution are considered beyond the organization’s capabilities or overly complicated.
OK, so what do all these points evoke? Crowdsourcing, of course. There isn’t one of the four points listed above that open innovation and an expert crowd of more than 660,000 can’t solve. Batter up.
- The first is simple: in-house supply is depleted or nonexistent. That’s a perfect opportunity to crowdsource and tap into global expertise to solve your challenges. Crowdsourcing is not outsourcing; it’s a way to augment staff on demand to work on particular challenges, with results you may never have conceived of. You pay only the prizes you choose to offer to contest winners, and you can start and stop at any time without committing to huge engagements.
- Data structures are large and unwieldy. In that case, let our data scientists run complex algorithms against those data sets and simplify (abstract) them from complex domains into math problems. That’s at the heart of data science.
- Cost overruns and lack of experience? Hundreds of thousands of experts will meet your challenge better and for far less than internal initiatives.
- The problem is too difficult? Think again: we solve the hardest problems in the world for some of the most demanding clients, including NASA, DARPA, HP, Comcast, ESPN, and GEICO.
Our friends at NASA write about Topcoder and harnessing an ocean of energy.
“It takes a lot of money to build something, deploy it in the water and test it,” said Noël Bakhtian of the DOE’s Wind and Water Power Technologies Office. “It would be a lot easier to have computational tools, where you can study a whole range of inputs and say, ‘What if I made the device twice as big? What if the wavelength of the waves was a little bit different? What if I pushed it out into the ocean a little bit deeper?’”
The DOE wants to be able to offer modeling software to everyone with a potentially great idea for extracting energy from ocean waves. And it’s counting on crowdsourcing to help it do so.
The article continues:
Through its contract with Harvard, [NASA’s Center of Excellence for Collaborative Innovation (CoECI)] uses an organization called [Appirio] to administer DOE’s OpenWARP Challenge and many of its other software competitions. [Appirio] breaks a problem down into small pieces and then offers them as challenges to its community of more than 650,000 members worldwide.
NASA’s desire to granularize (or atomize) its data goes straight to the heart of data science. According to Harvard Business School Professor Karim Lakhani, data science means solving problems that have both a data component and a computational component. As for the importance of the crowdsourcing community, Tom Davenport argues in Harvard Business Review that “they are the most important resource for capitalizing on big data.”
I didn’t pick this piece to highlight our work on this particular project. Its title goes right to the heart of the matter: We’re helping to harvest energy from the ocean.
Rather, I chose it because we have other projects whose essence we still have to translate from “saving humanity” into something customers can relate to. Our work on the Harvard Medical School immunogenomics project and the NASA International Space Station is a prime example. It can be difficult to relate to Harvard, genetic edit distances, the ISS, and longeron tethers. At its core, however, the work for Harvard was a Biology / Life Sciences project, an area ripe with customers holding huge data sets. And the Space Station challenge was all about harvesting energy, just as with this OpenWARP Challenge, though perhaps less obviously because it takes place in space. Appirio’s same core competencies can be applied to a variety of industries, from Bio Sciences and Energy to others such as Finance, Insurance, Manufacturing, Pharmaceuticals, and Retail, as we shall see in the coming months.