Computer Related > 80,000TB disk drive?
Thread Author: Focusless Replies: 47

 80,000TB disk drive? - Focusless
www.bbc.co.uk/news/technology-16543497

"Researchers have successfully stored a single data bit in only 12 atoms.

Currently it takes about a million atoms to store a bit on a modern hard-disk, the researchers from IBM say."

So that's a reduction by a factor of about 80000. What do you fancy - a whopping capacity drive of the current size, or a 1TB drive the size of a pinhead? :)
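
For anyone who wants to check my arithmetic, a quick back-of-envelope sketch in Python (the "million atoms" figure is just the rough number from the article):

    # Rough reduction factor implied by the IBM result (figures from the article above)
    atoms_per_bit_today = 1_000_000   # approximate atoms per bit on a current hard disk
    atoms_per_bit_new = 12
    factor = atoms_per_bit_today / atoms_per_bit_new
    print(f"Reduction factor: ~{factor:,.0f}")               # ~83,333 - roughly 80,000
    print(f"A 1TB drive scaled up: ~{factor / 1000:,.0f}PB")  # hence the '80,000TB' in the title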
 80,000TB disk drive? - Zero

>> So that's a reduction by a factor of about 80000. What do you fancy -
>> a whopping capacity drive of the current size, or a 1TB drive the size of
>> a pinhead? :)

do you have any idea how long it takes to defrag an 80 petabyte drive?
 80,000TB disk drive? - Kevin
>do you have any idea how long it takes to defrag an 80 petabyte drive?

Defrag? Defrag is so Microsoft Zero.
 80,000TB disk drive? - rtj70
Over time all filesystems become fragmented - admittedly some more than others. Zero is correct. Especially a problem for very large files.

I defrag my Mac every now and then. Nowhere near as often as a Windows based PC.

Now how best to defrag a NAS....
 80,000TB disk drive? - TeeCee
Yup anything addressing disks directly by conventional means will benefit from a defrag.

In order for this to make no odds, you need a controller that "knows" the disk geometry and where everything on the disk is. Then it can sort the read/write requests and satisfy them all with a single pass of the heads across the platters.

I think IBM did this first with the System/38, which also deliberately fragged data across its storage to reap the benefits of parallel access.
Novell's hashing, caching and disk elevators served the same purpose.
SATA's Native Command Queuing feature goes some way in the same direction.
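
A toy sketch of the elevator idea, purely illustrative - real controllers work on logical block addresses and queue depths, and the track numbers below are invented - but it shows the principle of sorting outstanding requests so a single sweep of the head services the lot:

    # Toy disk "elevator" (SCAN): serve pending requests in one sweep of the head.
    # Track numbers are made up purely for illustration.
    def elevator_order(pending_tracks, head_position):
        ahead = sorted(t for t in pending_tracks if t >= head_position)                  # on the way out
        behind = sorted((t for t in pending_tracks if t < head_position), reverse=True)  # on the way back
        return ahead + behind

    print(elevator_order([98, 183, 37, 122, 14, 124, 65, 67], head_position=53))
    # -> [65, 67, 98, 122, 124, 183, 37, 14]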

Shouldn't be an issue with NAS as disk seek overhead due to fragmentation on random I/O is pitiful when compared to the Network latencies involved in the multiple random I/O requests.

The best compromise for a desktop or laptop available right now is a Seagate Momentus XT hybrid drive. These serve frequently used data from a few GB of flash (where fragmentation makes no odds at all) and return I/O performance comparable to SSDs in general use, falling back to that of a normal 7200rpm 2.5" disk when accessing something less often used. I have one; it's like magic. These *can* be defragged, but it takes forever and also ruins performance, as it banjaxes the flash cache weightings on data access.
 80,000TB disk drive? - rtj70
After I posted I ran a defrag on the Mac and it wasn't too bad.
 80,000TB disk drive? - NortonES2
Is there a specific tool for this? My iMac is only a G4 so due for replacement soon - but I'm waiting to see what's next in the pipeline!
 80,000TB disk drive? - Iffy
...Is there a specific tool for this?...

Modern Macs don't need defragging, according to Apple:

answers.yahoo.com/question/index?qid=20071213120745AAV9MwX

 80,000TB disk drive? - rtj70
Any filesystem can become defragmented - Apple is in denial. I would say the Mac OS Extended filesystem is better than NTFS, but it does get fragmented. If a file starts small but is going to grow very large as it is written, it is impossible to place it somewhere that won't lead to fragmentation. Two examples: VM virtual disks (unless the space is all preallocated) and creating MPEG4 files.
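
For what it's worth, here's a rough sketch of the pre-allocation idea. It uses posix_fallocate, which is a Linux call (the Mac equivalent is fcntl's F_PREALLOCATE, not shown here), and the filename and size are made up:

    import os

    # Reserve 4GB up front so a file that will grow large (VM disk, MPEG4 capture...)
    # is less likely to end up scattered across whatever free space is left.
    # posix_fallocate is Linux-only; this is a sketch of the idea, not Mac advice.
    size = 4 * 1024 ** 3
    fd = os.open("big_video.mp4", os.O_CREAT | os.O_WRONLY, 0o644)
    try:
        os.posix_fallocate(fd, 0, size)
    finally:
        os.close(fd)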

I don't think there are any free defrag programs for the Mac.
 80,000TB disk drive? - rtj70
I reran my defrag app and noticed there were more files still fragmented than I thought, but to really optimise the disk it needed a reboot into single-user mode.

Over 4 hours later it is still running :-) And a fair amount of 'red' fragmented files still to be optimised.

As I say, a lot are VMware virtual disks and MPEG4 files. The 8GB sleep file is even fragmented, though! Not badly, but still fragmented.
 80,000TB disk drive? - Kevin
>Any filesystem can become defragmented - Apple is in denial.

I think you mean fragmented, but what Apple probably meant to say is that "Fragmentation is no longer the problem it once was and you are unlikely to see any benefit from defragmentation."

Test it yourself and let me know if you see any significant difference between the read times of the same file fragmented and unfragmented. A MacBook is not an HPC system where every nanosecond counts.

I'd go one step further than Apple and say that defragging is a waste of time and potentially dangerous. One bug in your unnecessary defrag utility could trash your whole filesystem.

>Over 4 hours later it is still running :-) And a fair amount of 'red' fragmented files still to be optimised.

I hope you have a full, validated backup.
 80,000TB disk drive? - rtj70
I did mean fragmented - thanks.

It took over 6 hours to optimise the drive (not a full defrag). And the system starts up quicker for sure. Back to its old self, so to speak. Just goes to show the Apple MacOS Extended file system not only becomes fragmented but a defrag makes a difference. But then it would on any file system if you think about it from a filesystem perspective/level. You cannot avoid fragmentation when multi-gigabyte files are being created - unless you tell the filesystem to pre-allocate space.

I wouldn't have done a defrag without a full backup. The Mac backs up to a Time Machine drive all the time and I sync to a NAS. The NAS gets synced periodically to a USB drive.

I am aware that if a file got corrupt then the next sync could result in copying corrupt files.

 80,000TB disk drive? - Kevin
>but a defrag makes a difference. But then it would on any file system if you think about it
>from a filesystem perspective/level.

I'd like to see before/after timings - I'm not convinced.
 80,000TB disk drive? - rtj70
>> I'd like to see before/after timings - I'm not convinced.

What timings are you after? Benchmark tests? I can roll it all back to how it was before and run your suggested benchmarks if you like. I took a low-level, disk-level backup before the defrag.
 80,000TB disk drive? - Kevin
>Yup anything addressing disks directly by conventional means will benefit from a defrag.

Unfortunately, it's a bit more complicated than that. Large sequential reads will benefit from a defrag if the data is fragmented, but that depends upon how the applications wrote the data in the first place. Large write buffers help significantly because the file system will write the data in contiguous sectors if there is sufficient space available. Random read/write operations will not benefit because they are, by definition, random.
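
To put some illustrative numbers on that - made-up but typical figures for a 7200rpm disk, not measurements:

    # Back-of-envelope only: extra time fragmentation adds to a big sequential read.
    seek_ms = 9.0            # assumed average seek + rotational latency
    throughput_mb_s = 120.0  # assumed sustained sequential transfer rate
    file_gb = 4              # e.g. a VM disk image

    transfer_s = file_gb * 1024 / throughput_mb_s
    for fragments in (1, 100, 10_000):
        extra_s = (fragments - 1) * seek_ms / 1000
        print(f"{fragments:>6} fragments: {transfer_s + extra_s:6.1f}s total, "
              f"{extra_s:5.1f}s of it spent seeking")

    # Random I/O already pays a seek on nearly every request, so a defrag buys it little.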

Keeping separate file systems for "small", "medium" and "large" files helps and the right choice of elevator can reduce problems caused by multiple processes accessing the same file system.

>In order for this to make no odds, you need a controller that "knows" the disk geometry
>and where everything on the disk is.

All data on disk is now addressed by a relative sector number, not by cylinder, head and sector as it used to be, so there's not much we can do. The I/O scheduler and driver can only tell the controller to "Read ten sectors starting at sector 599 and put the data into the buffer at address 0xFF800". The file system structures are mostly in-memory.
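
In userland terms, that request is nothing more than a seek to a byte offset on the block device. A minimal sketch - the device path is hypothetical, you'd need root, and I'm assuming 512-byte logical sectors:

    SECTOR = 512                      # assumed logical sector size
    start_sector, count = 599, 10

    with open("/dev/sda", "rb") as disk:   # hypothetical device node; needs root
        disk.seek(start_sector * SECTOR)
        data = disk.read(count * SECTOR)
    print(f"Read {len(data)} bytes starting at relative sector {start_sector}")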

As you say, command queueing, scatter-gather DMA, read-ahead and hot-spot caching are all being used to improve performance and work remarkably well.

I think the biggest potential from this research will be a huge decrease in power consumption, weight and access time with multi-terabyte drives the size of a micro-SD card, platters spinning at 100k RPM with a small number of tracks and fixed heads. Big, big, big potential for tape drives too.

The biggest losers will be everyone who doesn't have a tested, consistent and reliable backup or hierarchical storage regime.
 80,000TB disk drive? - Zero
>
>> spinning at 100k RPM with a small number of tracks and fixed heads. Big, big,
>> big potential for tape drives too.
>

Unlikely to percolate to the tape medium. The mechanics and physics of getting molecule-size data read on flexible media are a real engineering challenge.
 80,000TB disk drive? - Kevin
>Unlikely to percolate to the tape medium. The mechanics and physics of getting molecule-size
>data read on flexible media are a real engineering challenge.

Oh, it'll filter down, Zero. We're already using fast-access 5TB tape drives (non-LTO) and I've seen the product plans - Tape is a long way from dead.
 80,000TB disk drive? - rtj70
Kevin, this is a motoring forum. So shut up. This thread started from what is still theoretical physics = pie in the sky for a long time.

And LTO is the only long term tape format under development.
 80,000TB disk drive? - VxFan
>> Kevin, this is a motoring forum. So shut up.

But part of it has a section for discussing computer related stuff, and from what I can tell, he is doing so.

ps, no need to be so rude to fellow members.
 80,000TB disk drive? - rtj70
VxFan, from a technical perspective Kevin is mostly talking nonsense. But this discussion should be in another thread. This thread was about the new discovery and all that IBM have found.

What I meant to say to Kevin was that this is the computer thread.
 80,000TB disk drive? - Focusless
As the OP I was enjoying the drift - surprised at your little outburst rtj.
Last edited by: Focus on Thu 19 Jan 12 at 06:54
 80,000TB disk drive? - Zero
>> Kevin, this is a motoring forum. So shut up....

WTF! you been on the juice RTj?



As far as bits and bytes at the molecular level and tape go, I'd like to bet Kevin a fiver that this technology kills off tape. IF it ever comes to fruition at all. If nothing else comes along then it might; IBM have a good track record of moving real bleeding-edge stuff like this into the real world, but there is always other stuff going on in parallel that might emerge to kill it. He is right, however, that the real significance of this is the prospect of ultra-small, ultra ultra power-efficient chips. SO power efficient they might even be able to be powered by the body's natural electrical sources.

I wonder if the guys at IBM Research working on DNA and human cells in computing are exchanging Sametime messages. Even sharing the same ETMs.
 80,000TB disk drive? - rtj70
Just fed up, for starters, with people believing Apple products are so brilliant that they never go wrong, never get viruses (and cannot), blah blah.

Secondly, Kevin was asking for benchmarks of my system before and after I did a defrag. Of course I didn't do any formal testing/timing before but subjectively I know it has speeded up*. My shut up remark about this being a motoring forum is because it's a motoring forum - if he wants to read up on benchmarking a Mac performance before and after a defrag then I don't think that is a topic for here.

* On a Mac, login performance is often 'measured' in how many bounces the icons in the dock perform whilst logging in. It is widely known in the Mac technical community that moving to an SSD speeds up login time significantly (especially if you set applications to launch at login). Mine had progressively got worse... after the defrag it is a lot quicker. I went for the 'optimise' option in the defrag, so applications were also moved to the start of the drive.

When we're all finally on solid-state memory of some sort for storage, the issue of fragmentation will of course go away. It is the latency involved in seeking to different parts of the disk that makes hard disk drives slow... and tape slower still.
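
As a rough illustration of that latency point (the figures are typical assumptions, not measurements from my Mac):

    # Time spent just positioning the heads vs. an SSD's near-instant random access.
    files_touched = 2_000     # small files/fragments read during a login - a guess
    hdd_access_ms = 12.0      # assumed seek + rotational latency per access
    ssd_access_ms = 0.1

    print(f"HDD: ~{files_touched * hdd_access_ms / 1000:.0f}s spent just seeking")
    print(f"SSD: ~{files_touched * ssd_access_ms / 1000:.1f}s")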
 80,000TB disk drive? - Focusless
>> My shut up remark about this being a motoring forum is because
>> it's a motoring forum - if he wants to read up on benchmarking a Mac
>> performance before and after a defrag then I don't think that is a topic for
>> here.

1. Looked like he was asking if you happened to have any before/after figures for your particular Mac defrag; seemed perfectly reasonable.

2. Nothing wrong with the topic here. You could always ignore it if you don't feel like contributing.
 80,000TB disk drive? - Kevin
>Kevin, this is a motoring forum. So shut up.

Excuse me?

>And LTO is the only long term tape format under development.

It appears that your knowledge of tape technology is as lacking as your manners. And why do you always assume that you are the fount of all knowledge in computing matters?

As far as tape formats go, IBM are actively developing their TS Series drives for 3592 media and Oracle/Sun/Storage-Tek are working hard on their T10K drives and cartridges.

>from a technical perspective Kevin is mostly talking nonsense.

There you go again, but if you mean that in the sense that you don't understand what I'm talking about then I'd have to agree.

>Of course I didn't do any formal testing/timing before but subjectively I know it has
>speeded up.

Well done. So we now know that defrag has a subjective effect.

>My shut up remark about this being a motoring forum is because it's a motoring forum
> - if he wants to read up on benchmarking a Mac performance before and after a defrag
>then I don't think that is a topic for here.

And who made you the arbiter of what can and can't be discussed in here just because you don't like what is being said?!

>It is the latency involved in seeking to different parts of the disk that makes hard disk
>drives slow... and tape slower still.

Tape is really not that slow if it is used in the right situations. The drives we have here (~170 of them) have a native throughput of 240MB/s compared to around 150MB/s for a 10k RPM SATA drive. Horses for Courses.
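
And to tie it back to the thread title, a purely hypothetical streaming calculation at those native rates, ignoring load times, robot moves and whether you could actually keep the drive fed:

    # Pure arithmetic: how long to stream 80PB at the rates quoted above.
    data_mb = 80 * 1024 ** 3                 # 80PB expressed in MB
    for name, rate_mb_s in (("tape drive", 240), ("10k RPM SATA", 150)):
        days = data_mb / rate_mb_s / 86_400
        print(f"{name}: ~{days:,.0f} days to stream 80PB")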
 80,000TB disk drive? - rtj70
I have thought long and hard about how to respond. I should have added a smiley after the 'shut up' and didn't. It was more the sort of reply you might expect down the pub.

I know Kevin works in IT, so I was trying to point out that, despite the crap Apple say, the performance of a Mac can degrade over time, especially if you're using hard drives and not SSDs. So when I did the defrag it made a difference - but it makes a difference on most systems, as Kevin will know.

Maybe for real test purposes I could have done some read/write tests, but that was not my aim. Knowing which files were fragmented (the sleep file and many VMs' VMDK files), I knew it would improve some things. It also improved boot and login times.

The reason I added that this is a motoring forum is that benchmarking system performance (like comparing an SSD to a hard disk) is the realm of a computing site, not a motoring one. Most of you couldn't give a damn if the latest graphics card or hard disk/SSD got you an extra frame per second in an FPS-type game. Well, I assume not.

If people on here genuinely do want discussion and details of benchmarking then take a look at say www.anandtech.com or www.tomshardware.co.uk to begin with.

I take on board Kevin's comment on Oracle and IBM tape research, but most formats are falling behind LTO. Gone are the times when you would back up straight to tape; you're more likely to back up to disk and then to tape. But tape is here to stay. You want high-capacity offline (and off-site) storage. Sometimes you also need write-once technology.

I will have offended Kevin - sorry. Bad week at work. I won't go into that here. But I work with some..... add your own word.
 80,000TB disk drive? - Pat
Good reply rtj.

Pat
 80,000TB disk drive? - Kevin
>I will have offended Kevin - sorry.

No offense taken so no apology necessary rtj.

Well maybe just itsy-bitsy offense at your accusation of talking nonsense. I work in the HPC Division of a major manufacturer, specialising in data storage and retrieval. File systems, disk, tape and other storage technologies are my bread and butter.

Enjoy your weekend.
 80,000TB disk drive? - Zero

>> tape and other storage technologies are my bread and butter.

Bread and butter has a poor failure rate and the throughput is terrible. Try something magnetic.
 80,000TB disk drive? - Kevin
We're using sliced bread nicked out of the bins behind Tesco coated with Barium Ferrite butter.

Unbreakable, although they do tend to slip out of the grippers in the library. Always land butter side down as well ;-(
 80,000TB disk drive? - rtj70
Thanks Kevin. I knew you worked in the HPC field which is partly why I was surprised you wanted evidence of a defrag making a difference. You would know it did :-)

I did find an item on Apple's website that said Mac OS X tries to minimise fragmentation but it can still happen - of course it does. My disk had little contiguous free space left, which didn't surprise me because many files I create are several GB. They suggest you either back up all your files, wipe the drive and restore the files, or buy a defrag program - but they don't suggest which one to use.

I am sure my NAS needs a defrag too, and that doesn't come with one either, although it does run Linux. I'll leave that for another day.
 80,000TB disk drive? - Zero
I saw a library eject the tape cart through the viewing window at the end once. Sliced bread would alleviate that problem.
 80,000TB disk drive? - Kevin
>I saw a library eject the tape cart through the viewing window at the end once.

It can be a bit scary if you're not used to seeing the robots accelerate towards you. They move PDQ.

Looking through the observation panel of an SL8500

www.youtube.com/watch?v=d-eWDuEo-3Q

And a 3584

www.youtube.com/watch?v=Bde8wJtzRx8
 80,000TB disk drive? - smokie
Back in the early days of this technology (early 90s?) I worked for Wang, and we had optical media jukeboxes. The story was that a couple shot the optical disk straight through the casing... Impressive stuff though.
 80,000TB disk drive? - Kevin
Can you remember the name of the head of customer support at Wang around that time? John something or other?

He moved to Siemens Nixdorf around 1990'ish.
 80,000TB disk drive? - smokie
'Fraid not, my memory isn't that good, but I do remember something of an exodus to Siemens and then a later one to Dell. Coincidentally met a few very old colleagues unexpectedly a few months back when I was last job hunting. It's a very small world, good advice is to not pee people off as you never know when you might run into them again.
Last edited by: smokie on Sun 22 Jan 12 at 23:25
 80,000TB disk drive? - Zero
>> 'Fraid not, my memory isn't that good, but I do remember something of an exodus
>> to Siemens and then a later one to Dell. Coincidentally met a few very old
>> colleagues unexpectedly a few months back when I was last job hunting. It's a very
>> small world, good advice is to not pee people off as you never know when
>> you might run into them again.

The IT game is exceptionally incestuous.
 80,000TB disk drive? - Kevin
>I do remember something of an exodus to Siemens

Yeah, John Whatsisname created quite a few management non-jobs and filled them with buddies.
 80,000TB disk drive? - smokie
Wasn't John Lord was it?
 80,000TB disk drive? - Kevin
>Wasn't John Lord was it?

That's it!

His sidekick was a guy called Chris Wright who had an immaculate VW Karmann Ghia Cabrio that he used on sunny days.
 80,000TB disk drive? - Crankcase
>> Back in the early days of this technology (early 90s?) I worked for Wang, and
>> we had optical media jukeboxes. The story was that a couple shot the optical disk
>> straight through the casing... Impressive stuff though.
>>

Friend of mine, a graphic designer, produced a beautiful A0 poster, and had it printed at huge expense. He put it on his desk proudly, showed us all, and then ejected a disc from his mac, which opened the disc tray and knocked his coffee all over his poster...

Oh how we laughed.
 80,000TB disk drive? - rtj70
A story at work about tape libraries was that someone got into one (they shouldn't have) when it was still on. The robot arm beat them black and blue until someone stopped it. This was a long time ago, so maybe an urban myth. This was when some tape libraries had a central arm and tapes in slots around the circumference of the enclosure.
 80,000TB disk drive? - Zero

>> some tape libraries had a central arm and tapes in slots around the circumference of
>> the enclosure.

Ah the old Storagetek silo!
 80,000TB disk drive? - rtj70
That's probably what it was. A bit of a myth sort of thing, but the person telling me was likely to be truthful.

We still had one at the site at the time. Looked a bit like this if I recall:

info.instockinc.com/Portals/15701/images/StorageTek%209310%20a.jpg

So you're probably right again. Sigh.
Last edited by: rtj70 on Sun 22 Jan 12 at 22:03
 80,000TB disk drive? - Kevin
>Ah the old Storagetek silo!

The customer site I work at replaced their old STK Powderhorn silos last year with SL8500s. The only reason for doing so was withdrawal of support.

The Powderhorns were a fantastic, supremely reliable piece of kit. The only thing that ever went wrong with them was a dropped tape now and again or a faulty gripper.

>The robot arm beat them black and blue until someone stopped it.

Urban myth I'm afraid. You'd be lucky to survive a hit from one of those. They are about 12' in diameter and rotate at one helluva rate.

The silo will not operate if the door interlock is open and there are Emergency Stop buttons inside the silos that cannot be reset without opening the door again.

If anyone is interested I can post a photo of a Powderhorn robot arm removed from its silo.
 80,000TB disk drive? - rtj70
>> Urban myth I'm afraid. You'd be lucky to survive a hit from one of those.

I did wonder about the authenticity of the story. Thought I'd ask for an opinion. The library in question was still in situ until we moved out of that old building.
 80,000TB disk drive? - Kevin
>I'd like to bet Kevin a fiver that this technology kills off tape. IF it ever comes to fruition at all.

It could well kill off tape Zero but they've been saying that for years and it's still with us. It will all depend on the relative total cost per GB, as it does now.

Change the bet to a pint and I'll accept - a fiver will be worthless when/if this is commercialised.
 80,000TB disk drive? - movilogo
>> how long it takes to defrag an 80 petabyte drive

Depends on what type of data is stored on those disks.

I guess such gigantic drives won't be used to store personal files. They will be used by corporations to act as storage for their data warehouse.

For this type of data, deletion is a very rare event, so they may not need defrag at all.

As hard disk sizes increase, the reading mechanisms are also getting more advanced, with several parallel read heads working simultaneously, so defrag won't be a big issue.