Back on July 11th 2006, I posted an article on Thin Provisioning. Today a reader left a timely and appropriate comment about application support for thin provisioning, and about alerting and monitoring:
"I guess eventually OS and Apps have to start supporting thin provisioning, in terms of how they access the disk, and also in terms of instrumentation for monitoring and alerting"
In that article I wrote that I would not deploy thin provisioning for new applications for which I had no usage metrics, or for applications that write, delete, and quickly re-write data in a LUN. Here is why, up until now, I would have avoided the latter scenario.
The example below attempts to illustrate this point.
Let's assume I have thinly provisioned a 100GB LUN to a Windows server.
I now fill 50% of the LUN with data. At this point, capacity utilization is 50% from a filesystem standpoint, and 50% from the array's perspective as well.
I then proceed to completely fill the LUN. Now, the filesystem and array capacity utilization are both at 100%.
Then I decide to delete 50% of the data in the LUN. What's the filesystem and array capacity utilization now? Folks are quick to reply that it's 50%, but that is only partially correct. The filesystem utilization is indeed at 50%, but array utilization is still at 100%. The reason is that even though NTFS has freed blocks upon deleting half of the data in the LUN, from the array's perspective those blocks are still allocated, because the array has no way of knowing that the data is no longer needed.
So now, if more data is written to the LUN, there is no guarantee that the filesystem will reuse the exact blocks it freed previously. In a thin provisioning scenario, this behavior may trigger a storage allocation on the array when no allocation is actually needed. We're back to square one, facing the exact same storage over-allocation challenge we set out to solve.
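The walkthrough above can be sketched as a toy model. The class and field names here are purely illustrative (not any vendor's API), and the model assumes the worst case for thin provisioning: the filesystem prefers never-before-used blocks, so the array's allocation tracks the filesystem's high-water mark and never shrinks on delete.

```python
class ThinLun:
    """Toy model of a thinly provisioned LUN (illustrative only)."""

    def __init__(self, size_gb):
        self.size_gb = size_gb
        self.fs_used = 0      # GB the filesystem considers in use
        self.array_used = 0   # GB the array has actually allocated

    def write(self, gb):
        # Worst case for thin provisioning: the filesystem prefers
        # never-before-used blocks, so the array must back every
        # high-water mark the filesystem reaches.
        self.fs_used = min(self.size_gb, self.fs_used + gb)
        self.array_used = max(self.array_used, self.fs_used)

    def delete(self, gb):
        # The filesystem marks blocks free, but the array is never
        # told, so array_used does not shrink.
        self.fs_used = max(0, self.fs_used - gb)


lun = ThinLun(100)
lun.write(50)    # filesystem 50%, array 50%
lun.write(50)    # filesystem 100%, array 100%
lun.delete(50)   # filesystem 50%, array still 100%
print(lun.fs_used, lun.array_used)  # -> 50 100
```

The final line is the whole point: the two utilization numbers diverge the moment data is deleted, and only the array-side number drives how much physical storage is consumed.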
SnapDrive for Windows 5.0
With the introduction of SnapDrive for Windows 5.0, Network Appliance introduced a feature called Space Reclamation.
The idea is to provide integration between NTFS and WAFL via a mechanism that will notify WAFL when NTFS has freed blocks so that WAFL, in turn, can reclaim these blocks and mark them as free.
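At a high level, the handshake works like this: the host enumerates the block ranges the filesystem has freed and notifies the array, which then deallocates the backing storage. The sketch below is a hypothetical illustration of that flow; the function and class names are mine, not SnapDrive's or Data ONTAP's actual interfaces.

```python
def reclaim_space(freed_ranges, array):
    """Notify the array of filesystem-freed block ranges.

    freed_ranges: list of (start_block, length) tuples reported by
    the host filesystem. Returns the number of blocks handed back.
    """
    reclaimed = 0
    for start, length in freed_ranges:
        # The array marks the backing blocks for this range as free.
        array.deallocate(start, length)
        reclaimed += length
    return reclaimed


class MockArray:
    """Stand-in for the array side, tracking allocated block numbers."""

    def __init__(self, allocated_blocks):
        self.allocated = set(allocated_blocks)

    def deallocate(self, start, length):
        for block in range(start, start + length):
            self.allocated.discard(block)


# A fully allocated 100-block LUN; the host reports two freed ranges.
array = MockArray(range(100))
freed = [(10, 20), (60, 5)]
print(reclaim_space(freed, array))  # -> 25 blocks reclaimed
print(len(array.allocated))         # -> 75 blocks still allocated
```

The key design point is that the host initiates the conversation: without this notification, the array has no way to distinguish freed blocks from live data.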
Within SnapDrive 5.0, the space reclamation process can be initiated either via the GUI or the CLI. Upon initiating it, a pop-up window for the given LUN informs the administrator whether a space reclamation operation is needed and, if so, how much space will be reclaimed.
Additionally, the space reclamation process can be given a time window in which to execute, specified in minutes (1 to 10,080 minutes, i.e. up to 7 days). There is no licensing requirement to use the Space Reclamation feature, as it is bundled with the recently released SnapDrive 5.0; it does, however, require Data ONTAP 7.2.1 or later.
Performance is strictly dependent upon the number and the size of LUNs that are under the space reclamation process.
As a general rule, we recommend that the process be run during periods of low I/O activity, and not while Snapshot operations such as snapshot create and snap restore are in use.
While competitors also offer thin provisioning, Network Appliance has once again been the first to provide an innovative and important tool that helps our customers not only deploy thin provisioning safely, but also realize the benefits that derive from it.
6 comments:
The readers comment doesn't make sense to me. The filesystem will not attempt to use different blocks after files have been deleted from a full (and therefore fully allocated on the array) LUN. It will use the same blocks that have been freed up, which are the ones already allocated.
It's true that once filled up, the LUN is no longer 'thin', but it will not require more blocks than the maximum allocation. Am I missing something?
-billy bathgates
OK, so here's a post from Searchstorage.com from a user experiencing what I was talking about:
But other users haven't had such idyllic experiences. Nick Poolos, systems/network specialist at The Ohio State University's Fisher College of Business, saw the NTFS file systems mounted on a Compellent Storage Center SAN grow abnormally large when using the array's Dynamic Capacity feature.
"At issue, according to Bob Fine, Compellent's product marketing manager, is Microsoft's "undelete" feature, which just marks blocks to be released without actually erasing them. Rather than reusing released blocks, NTFS prefers new, unused blocks, which caused the thin-provisioned volume to swell to its maximum allocated size. Fine says doing a periodic disk defragmentation may be a workaround. He adds that most users will never witness this problem because most environments grow gradually and don't delete much data."
whole article can be found here:
http://searchstorage.techtarget.com/originalContent/0,289142,sid5_gci1188117,00.html
One more thing... It looks like the tool you are referring to will allow the array to reclaim blocks once they have been freed on the host, allowing the LUN to again revert to being 'thin'.
Will this tool be available for other host platforms? Tired of vendors always emphasizing/promoting.

-billy
Correct. The array does reclaim blocks, and it will revert the LUN back to "thin".
We're working on other platforms/filesystems, but I can't really provide any specifics, given that it's NDA info and somebody is sure to slap my hand for providing it in a public forum. I hope you understand.
wow that was fast...
I see what you are saying: blocks allocated in the filesystem will be touched more often in this scenario because NTFS tends to use previously untouched blocks. But it would never use more than the maximum capacity, of course.
It also sounds like you are saying that if a block is erased (filled with zeros, or a recognized erase pattern), these thin provisioning arrays have some facility to reclaim the block? That would be the only way the array could shrink a thin-provisioned LUN that had been previously filled, without host filesystem assistance (i.e. the host telling the array it's no longer using the block; apparently Windows Server 2008 has a protocol to do something like this, for what it's worth).
Most filesystems don't erase freed blocks by default, it's pretty expensive to do this.
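The zero-detect idea the commenter raises can be sketched simply: without host assistance, the only signal available to the array is the block contents themselves, so it can reclaim a block only if it matches a recognized erase pattern (all zeros). This is an illustrative sketch, not any array's actual implementation.

```python
BLOCK_SIZE = 4096
ZERO_BLOCK = bytes(BLOCK_SIZE)  # the "erase pattern": all zeros


def reclaimable_blocks(blocks):
    """Return the indices of blocks whose contents are all zeros.

    Without the host filesystem telling the array which blocks are
    free, a content scan like this is the array's only clue.
    """
    return [i for i, data in enumerate(blocks) if data == ZERO_BLOCK]


# Two zeroed blocks and one block holding live data.
blocks = [ZERO_BLOCK, b"\x01" + bytes(BLOCK_SIZE - 1), ZERO_BLOCK]
print(reclaimable_blocks(blocks))  # -> [0, 2]
```

This also illustrates the cost the comment points out: the array (or host) has to read and compare every block, which is why most filesystems don't zero freed blocks by default.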
Correct on all counts.
Indeed, Longhorn is supposed to provide the ability to shrink. I believe HP recently announced the ability to shrink LUNs, but that won't be available until the MS Longhorn release. Given the cautious nature of enterprises and how long the upgrade cycle takes, I suspect there'll still be a whole bunch of 2003 servers around for the next 2-3 years.
I think the space reclaim ability is a nice feature to have, and it may provide a benefit, depending on the I/O characteristics of the application.