Deduplication used to be an exclusive tool of the large enterprise, carrying an imposing cost, a daunting learning curve, and – with file-only deduplication – a limited ability to use deduplicated data to restore a failed machine. It has simply been too expensive to implement in all but the largest organizations. Moreover, it could be applied only in support of servers, even though enormous data stores reside at the workstation level within most IT infrastructures.
Most deduplication products have been designed and sold as combined software/hardware solutions, and in most cases the hardware alone has been difficult to justify because of its high cost. Consider that one well-known vendor cut the price of a high-end data deduplication appliance by more than a third in March 2009 – yet at a reported $130,000 for 12 TB of storage capacity, it remains an expensive proposition. Roadblocks like these have limited the promise of deduplication to the largest of organizations.
However, such limitations are finally being swept aside, and deduplication can be specified more broadly:
• not only by enterprises, but also by many smaller organizations that face very significant data storage challenges
• not only on servers, but on workstations as well
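To make the concept concrete, the following is a minimal sketch of how content-hash deduplication works at its core: data is split into fixed-size chunks, each chunk is identified by a cryptographic hash, and only one copy of each unique chunk is stored. The function and variable names here are illustrative assumptions, not any vendor's API; commercial products typically add variable-size chunking, compression, and indexing on top of this basic idea.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks for simplicity; products often use variable-size chunking


def dedup_store(data: bytes, store: dict) -> list:
    """Split data into chunks, keep one copy of each unique chunk in `store`,
    and return the list of chunk hashes (the 'recipe' needed to rebuild the data)."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # store the chunk only if it is unseen
        recipe.append(digest)
    return recipe


def restore(recipe: list, store: dict) -> bytes:
    """Rebuild the original data from its chunk-hash recipe."""
    return b"".join(store[d] for d in recipe)
```

Two files that share most of their content then consume storage only for the chunks that differ, which is why deduplication pays off so dramatically across large fleets of similar servers and workstations.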