In 2002, IBM scientists managed to produce a magnetic data tape capable of a storage density of 1 billion bits per square inch. This week the IBM Almaden Research Center boffins have done it again, in conjunction with Fuji, to the tune of 6.67 billion bits per square inch. That figure is confidently expected to rise to 8 billion by the time the tape becomes commercially available in 2011. If you are not a large corporate, this news will probably hold little more than passing geek interest. For the enterprise that requires large volumes of static data storage to comply with the Sarbanes-Oxley Act of 2002, however, it's truly exciting stuff. Why so? Simply that it means something the same size as a current industry-standard Linear Tape Open (LTO) cartridge will be able to hold 8 terabytes of data. That's the equivalent of the text from 8 million books, which would take up 57 miles of shelf space in printed and bound format.
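As a quick back-of-envelope sanity check (my own arithmetic with an assumed tape width, not figures from IBM's announcement), the claimed capacity is consistent with the length of tape an LTO cartridge actually holds:

```python
# How much half-inch tape would 8 TB need at 8 billion bits per square inch?
# (Assumes decimal terabytes and ignores format overhead such as servo tracks.)
density_bits_per_sq_in = 8e9            # projected areal density
capacity_bits = 8e12 * 8                # 8 TB in bits
area_sq_in = capacity_bits / density_bits_per_sq_in
length_in = area_sq_in / 0.5            # LTO tape is half an inch wide
length_m = length_in * 0.0254
print(f"{length_m:.0f} metres of tape")  # ~406 m
```

Real LTO cartridges hold several hundred metres of tape, so an 8 TB cartridge at that density is entirely plausible.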
The high-density recording was enabled by Fujifilm's Nanocubic technology, which uses a barium-ferrite (BaFe) particle coating. This is coupled with the sensitive giant-magnetoresistive (GMR) head materials used to sense small magnetic fields in disk drives, applied here for the first time to magnetic tape. IBM developed the signal processing algorithms for the advanced read data channel, which employ 'noise-predictive, maximum-likelihood' (NPML) detection to process captured data far more efficiently than ever before.
All very impressive-sounding stuff, but why is tape still being developed at all when there are surely better storage solutions out there these days? The reason tape remains so popular within the enterprise market is that it provides pretty much the most affordable large-volume storage solution on the market, and a 15-fold increase in data density can only improve that value proposition in the long term. There are hurdles, of course: an investment in new tape drive hardware will be required, and at present there is no indication of how much it will cost. Then there's the not inconsiderable task of transferring all existing data from older tapes to the new standard in the cause of future-proofing.
All of this assumes that IBM isn't beaten to the punch by a truly new storage technology, rather than the wheel reinvented with bigger tires. I'm thinking of something like the Holographic Data Storage (HDS) technology announced by InPhase Technologies. InPhase has already demonstrated a working data density of 515 gigabits per square inch and claims to be on the verge of the 1.6 terabit per square inch milestone. HDS stores data in a three-dimensional medium, intersecting signal and reference laser beams within 3D holographic images. The real breakthrough is the ability to write and read a million bits of data in parallel with a single flash of light, instead of recording one data bit at a time; it doesn't take a genius to work out that transfer rates become significantly higher than those of current optical storage devices. InPhase claims at least 50 years of media archive life coupled with the lowest cost per gigabyte in the commercial removable storage market. All of which has yet to be proven, of course, although your chance will come later this year when the first product is released: a disc capable of storing 300 gigabytes and sustaining a 20 megabyte per second transfer rate, at a price yet to be announced by InPhase and partner Maxell.
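To put those launch figures in perspective, a quick calculation (the capacity and transfer rate are from the announcement; the arithmetic is mine):

```python
# How long to fill InPhase's first-generation 300 GB disc at 20 MB/s?
capacity_bytes = 300e9          # 300 GB per disc
rate_bytes_s = 20e6             # 20 MB/s sustained transfer
fill_hours = capacity_bytes / rate_bytes_s / 3600
print(f"{fill_hours:.1f} hours to fill one disc")  # ~4.2 hours

# A million bits per flash of light implies this many flashes per second
# at the rated transfer speed:
flashes_per_second = rate_bytes_s * 8 / 1e6
print(f"{flashes_per_second:.0f} flashes per second")  # 160
```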
Me? I'll be sticking with my RAID 5-configured 1 TB network attached storage solution for now, coupled with an offsite, online backup service for my most valuable data. But I promise that if I were a large corporate I would be getting as excited as I possibly can when talking about data storage.