
News & Insights

The Value of Incidental Data

Last week we saw two great cases at IEI where “data exhaust” — data created incidentally as part of a particular data management process — turned out to hold great value for our customers. In both cases we re-engineered simple telephone verification processes designed only to confirm that an executive’s contact information was still correct (in one case an email address, in the other a direct dial telephone number). After doing so, we were able to identify substantial amounts of new contact information, dead URLs, changed job titles, defunct businesses, new business names, name changes, and deceased or changed contacts. And the beautiful part? The gold hidden in this data exhaust was virtually free. We delivered the same core information to the customer at the same cost, and our customers got a wealth of additional valuable information to boot.
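
To make the mechanics concrete, here is a minimal sketch in Python of a verification pass that keeps its incidental findings instead of discarding them. Every field and helper name is hypothetical (the post doesn't describe IEI's actual process), so treat this as an illustration of the pattern rather than the implementation:

```python
from dataclasses import dataclass, field

@dataclass
class VerificationResult:
    """The primary yes/no answer plus whatever 'exhaust' the call turned up."""
    contact_confirmed: bool
    exhaust: dict = field(default_factory=dict)  # incidental findings

def verify_contact(record: dict, call_notes: dict) -> VerificationResult:
    """Confirm an executive's direct dial, but keep every side finding.

    `record` is the contact on file; `call_notes` is what the caller
    learned. Both structures are hypothetical illustrations.
    """
    confirmed = call_notes.get("direct_dial") == record.get("direct_dial")
    exhaust = {}
    # Anything the caller learned beyond the yes/no answer is captured
    # rather than thrown away: new titles, renamed firms, departed contacts.
    for key in ("new_title", "new_company_name", "new_email",
                "contact_departed", "business_defunct"):
        if call_notes.get(key):
            exhaust[key] = call_notes[key]
    return VerificationResult(contact_confirmed=confirmed, exhaust=exhaust)

# Example: the verification itself fails, but the call still yields value.
result = verify_contact(
    record={"direct_dial": "+1-512-555-0100"},
    call_notes={"direct_dial": "+1-512-555-0199",
                "new_title": "VP, Data Operations"},
)
print(result.contact_confirmed)  # False
print(result.exhaust)            # {'new_title': 'VP, Data Operations'}
```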

In speaking with folks at Infochimps, EDR, and Databridge about these experiences, I realized that we aren’t the only ones sifting through data “slag” to find gold. Metadata generated by various data processing routines, and even by simple usage patterns, is being used by data analysts everywhere to make information services more valuable and to drive the creation of entirely new sets of valuable information.

Of course, turning data analysis into improved functionality has actually gone on for quite some time. A few of the more prominent examples are:

  • NewsEdge’s customer-generated list of misspelled company name variants: This was used to direct users to the correct information even when they couldn’t spell a company’s name correctly. A lot better than a “404 – No Info Found” error, and all for the cost of a little elbow grease (see the sketch after this list).
  • Google’s “popularity engine”: Request frequency data was just ancillary info until it was used to prioritize search results.
  • Hoover’s popularity-driven update schedule: Search popularity data drove the update frequency for the profiles of popular companies, and “not found” results drove the creation of new profiles.
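
The NewsEdge case lends itself to a quick sketch. Assuming a hypothetical log that pairs each failed query with the entry the user eventually reached (nothing here reflects NewsEdge's actual implementation), recurring misspellings can be promoted into a redirect table:

```python
from collections import Counter

# Hypothetical failed-query log: (what the user typed, the entry they
# reached after a manual correction or suggestion).
failed_query_log = [
    ("Proctor & Gamble", "Procter & Gamble"),
    ("Proctor and Gamble", "Procter & Gamble"),
    ("Goldman Sacks", "Goldman Sachs"),
    ("Proctor & Gamble", "Procter & Gamble"),
]

def build_variant_index(log, min_count=2):
    """Promote misspellings seen at least `min_count` times to redirects."""
    counts = Counter(log)
    return {variant: canonical
            for (variant, canonical), n in counts.items()
            if n >= min_count}

variants = build_variant_index(failed_query_log)
print(variants)  # {'Proctor & Gamble': 'Procter & Gamble'}

def lookup(query, index):
    """Redirect a known misspelling instead of returning 'no info found'."""
    return index.get(query, query)

print(lookup("Proctor & Gamble", variants))  # 'Procter & Gamble'
```

The same promote-on-frequency idea underlies the Hoover’s case: query counts that were once ancillary become the schedule for profile updates.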

If your firm isn’t wading through its data effluvium looking for value, maybe it should be! One process’s garbage can easily be another’s titanium.
