At the DataContent 2013 conference, I will be a member of a panel discussing the tools being used in the information business to make our products richer, drive our costs lower, and keep our products updated in real time. When our firms can accomplish these three goals, our businesses are firing on all cylinders and we make our customers’ lives “better, cheaper, and faster” as well.
The types of data processing tools out there now include:
- Enterprise tools: These can process extremely high volumes of data, and cloud storage has made them even cheaper to deploy, although up-front license fees and initial deployment costs are not for the shallow of wallet. After the initial investment, however, the long-term cost savings can be astounding.
- Open source tools: There is a wealth of open source libraries and APIs that help developers deploy services quickly and cheaply. Infochimps (now part of CSC) famously gathered these tools in one place so developers didn’t have to keep reinventing the wheel.
- Inexpensive self-service tools: Harvesting data has never been easier, thanks to a raft of easy-to-use self-service tools aimed at the DIY crowd with plenty of time on their hands.
- APIs with embedded transactional components: BriteVerify, for instance, offers an API that lets you build a data entry form that validates email addresses on the fly at a penny a pop, and Twilio lets you seamlessly embed calling and SMS components into complex processes at a similarly low cost (see the code sketch after this list).
- Government and NGO datasets: data.gov, still in its infancy, and open data initiatives worldwide are taking public data and actually making it useful to those who want to build new services on top of it (a second sketch below shows the idea). Look at the move to XBRL if you want to see what the future holds in this critical area.
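To give a sense of how little glue code these transactional APIs demand, here is a minimal sketch that sends an SMS from inside a larger workflow using Twilio's Python helper library. The credentials and phone numbers are placeholders, and the exact client interface can vary by library version.

```python
from twilio.rest import Client

# Placeholder credentials; in practice these come from your Twilio console
# and should be loaded from configuration, not hard-coded.
ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
AUTH_TOKEN = "your_auth_token"

client = Client(ACCOUNT_SID, AUTH_TOKEN)

# Fire off a notification step in a longer process, e.g. after an order ships.
message = client.messages.create(
    to="+15555550100",     # recipient (placeholder number)
    from_="+15555550101",  # your Twilio-provisioned number (placeholder)
    body="Your order has shipped and is on its way.",
)

print("Queued SMS with id", message.sid)
```

A comparable one-line call to an email-validation service can sit behind a web form's submit handler, which is exactly the "embedded transactional component" pattern described above.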
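In the same spirit, here is a small sketch of pulling dataset listings from data.gov's catalog. It assumes the standard CKAN package_search endpoint that catalog.data.gov exposes; the endpoint path, parameters, and response shape may change as the platform matures.

```python
import requests

# Search the federal data catalog for datasets matching a keyword.
# catalog.data.gov is assumed to expose CKAN's package_search action.
resp = requests.get(
    "https://catalog.data.gov/api/3/action/package_search",
    params={"q": "corporate filings", "rows": 5},
    timeout=30,
)
resp.raise_for_status()

# Print the titles of the first few matching datasets.
for dataset in resp.json()["result"]["results"]:
    print(dataset["title"])
```

The point is less the specific endpoint than the pattern: public data published behind a queryable API is data someone can build a product on.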