Comments on "Storage: VMware over NFS"

Nick Triantos (2008-10-11 10:16):

Hi Parker,

Officially, Storage VMotion is not supported on NFS, but that does not mean it doesn't work. Per VMware, the reason has been a lack of QA cycles.

I cannot comment on VMware's roadmap.

Parker (2008-10-11 08:02):

Storage VMotion isn't supported on NFS? That's news to me, since I've done it quite a bit with NFS.

VMware has told me they are targeting support for NFS in Site Recovery Manager for Q1 2009. I have also heard that it can be done now, even though it isn't officially supported.

Nick Triantos (2008-10-10 23:09):

It is true that block protocols have gotten priority over NFS as far as supported features go. The main reason has really been adoption rates: FC has the largest share of the market and was what VMware supported first.

Having said that, VMware says that things will change and there will be balance.

We'll see.

Anonymous (2008-06-24 20:45):

Pity Storage VMotion and Site Recovery Manager are only supported on block and not NFS...

Guess that blows a hole in the NFS theory.

Anonymous (2008-05-08 20:03):

Ding ding ding... the key to NFS on ESX is NetApp NFS. The WAFL file system is what makes performance similar to FC or iSCSI, and scale better.

MB (2008-03-31 18:08):

Anybody know of a white paper that shows this in detail, or comparisons of Fibre Channel vs. NFS (http://universitytechnology.blogspot.com/2008/03/vmware-over-nfs-vs-fiberchannel-fc.html)?

Since I have a NetApp and access to both protocols, when should I use FC and when should I use NFS? Would a database work better on NFS?

Nick Triantos (2007-12-04 11:48):

Hi,

I can't speak about any other NFS implementations regarding performance. However, some of the other benefits, like provisioning and ease of use, are ubiquitous regardless of the implementation.

The other thing you need to keep in mind is that, regardless of the protocol you choose to implement, the solution must be on VMware's support matrix...
Anonymous (2007-12-04 09:37):

I do have a question: does the NFS recommendation apply more to NetApp's implementation of NFS with WAFL underneath, or is it a more generic recommendation?

I understand WAFL has a lot of enhancements to make NFS perform well, and the snapshots are great, but if someone wants to deploy ESX "on the cheap" (say, by using a Linux box configured as an NFS server) or another vendor's NAS, would that not work as well?

Thx,
D

Nick Triantos (2007-11-21 11:00):

Marcus,

Just replied to your email. Check it out and let me know if you need anything else.

Nick

Anonymous (2007-11-19 06:03):

Hello Nick,

Thanks for your good post. I'm moving 12 of our 24 active VMs from iSCSI to NFS now. We'll move the rest on Wednesday.

Our I/O requirements are not really high, so I chose the flexibility.

Since our team is responsible for storage, ESX and everything else, we don't care where we make changes, as long as they are easy. :-)

Most of our VMs are Linux systems, and since I'm using LVM we have no problem with thin or thick disk files in the first place, but I guess it could make my life a bit easier.

So thanks again for your post, since it made my decision a bit easier, too. :-)

Markus

Phani (2007-11-18 22:04):

Hi Nick,

Thanks for your response. So far I had been debating with my colleagues that MSCS should work on NFS datastores as well (within an ESX host and across ESX hosts). But now your reply gave me a hint as to why MSCS is not possible on NFS datastores when the MSCS VMs are spread across ESX Server 3.x hosts.

On ESX 2.5.x (or earlier), to configure MSCS across ESX boxes, we needed to change the VMFS volume access mode to 'shared'. In ESX 3.x there is no concept of 'shared' mode at all, hence the only option is RDM. As RDM creation is not possible on NFS datastores, there is no support for MSCS on NFS datastores across ESX 3.x servers.

However, MSCS should work on an NFS datastore within a single ESX 3.x server.

Have a good day!

Thanks & Regards,
Phani

Nick Triantos (2007-11-16 12:29):

For cluster-in-a-box that ought to work, since you have two options: a) RDM in virtual compatibility mode, or b) a virtual disk. I have no idea why (b) is not supported. There's no support for iSCSI in either case.

For clustering across boxes there's only one option, and that's RDM in physical or virtual compatibility mode, so NFS is not an option here and neither is iSCSI. The latter is a little surprising; it may be that they haven't tested it thoroughly.
Phani (2007-11-16 09:44):

Hi Nick,

Is there any specific reason for not using MSCS on NFS datastores on ESX Server 3.x?

I see not much of a difference between a VMFS filesystem and an NFS-mounted filesystem, as long as the MSCS VMs use virtual disk files (not RDMs).

Phani

Parker (2007-11-14 13:25):

I know about the ProCurve dynamic LACP. I got bitten by it when our regular network switches were ProCurve. Apparently HP made dynamic the default setting on their switches at some point; they later realized it was a bad idea. They can be changed to static. You're right, I don't have a problem right now; I'll have to keep an eye on things as we grow. I also realize that trunking across switches isn't supported by VMware. With the HP switches I am dealing with, you can actually define trunks between two switches. I don't know if they are then considered to be stacked, though. Thanks for the recommendations.

Nick Triantos (2007-11-14 13:10):

Parker,

First of all, I don't recommend anyone change anything unless there's a problem looking for a solution.

Having said that, a couple of things to keep in mind if you decide to reconfigure:

1) ESX Server supports IEEE 802.3ad static link aggregation.

2) ESX Server supports link aggregation on a single switch or on stackable switches, but not across disparate trunked switches.

http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1001938&sliceId=1&docTypeID=DT_KB_1_1&dialogID=31483476&stateId=0%200%2031481367

With respect to (1), I've seen some folks having a hard time configuring 802.3ad static on some ProCurve switches. I believe the default trunk type on the ProCurve is LACP dynamic, which VMware does not support. I would imagine you'd be able to create a static trunk, but I don't know. I also know ProCurve supports FEC (Fast EtherChannel), which uses Cisco's proprietary PAgP implementation; ESX does not support that either.

But assuming you resolve the above, you still have to deal with ESX's lack of support for link aggregation across disparate switches, assuming that's what you have now.

So, if that's true, and if it were me, I'd stay as I am right now, unless I was about to upgrade the switches. Then I'd look at the stackable ones and I would turn on link aggregation.

As far as switch recommendations go, I'd look at the Cisco 3000 series. I'd also look at the Extreme Summit X series.
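[Editor's note: one reason link aggregation alone doesn't multiply the throughput of a single NFS mount is that, with static 802.3ad teaming, each source/destination IP pair is pinned to one physical uplink by a hash, so one datastore connection still tops out at a single 1 Gb link. The sketch below is only a rough illustration of that idea, not VMware's actual teaming algorithm; the hash function and IP addresses are made up.]

```python
# Rough illustration (NOT VMware's actual hash): with IP-hash style teaming,
# each (source IP, destination IP) pair maps to exactly one physical uplink,
# so a single NFS datastore connection never spans more than one 1 Gb link.

def pick_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Toy hash: XOR the last octets and take the result modulo the uplink count."""
    src_last = int(src_ip.split(".")[-1])
    dst_last = int(dst_ip.split(".")[-1])
    return (src_last ^ dst_last) % num_uplinks

vmkernel_ip = "192.168.1.10"                     # hypothetical ESX VMkernel port
filer_ips = ["192.168.1.20", "192.168.1.21"]     # hypothetical filer VIF aliases

for dst in filer_ips:
    print(f"{vmkernel_ip} -> {dst}: uplink {pick_uplink(vmkernel_ip, dst, 2)}")

# Mounting different datastores against different filer aliases is what lets
# traffic land on different uplinks; a single IP pair always hashes to the
# same link, no matter how many uplinks are in the team.
```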
Parker (2007-11-14 06:41):

Thanks for the quick response.

One other question. The network configuration on my filers for iSCSI and NFS is two single-mode vifs on each filer head. We're not using IP trunking. The hosts are connected via two HP switches that are using HP meshing. This consists of two mesh ports on each switch connected to the mesh ports on its partner switch. This provides failover and, I believe, load balancing if a switch is overloaded. The vif ports are split between the two switches, and this works for failover in testing: if I pull one of the lines at the filer or the switch, it fails over to the other switch very quickly.

I have two 1-gigabit connections on each filer that can be targeted by an ESX host. Most of the VMware storage is located on one filer, however. Should I consider reconfiguring these switches for IP trunking?

Also, I am considering replacing the HP switches. We have two separate sets of switches, one for VMware and one for Oracle on NFS on another set of filers. I was thinking of replacing these with larger switches and separating the traffic with VLANs. Any recommendations on switches?

Thanks again for your assistance. We could use your input on the VMware discussion boards!

Nick Triantos (2007-11-13 16:51):

Parker,

No. One VM per volume is not right at all. In fact, that number is not even right with FC or iSCSI.

What we've seen internally and from customer deployments is that the fan-in ratio of VMs to an NFS volume is larger than VMs to a LUN.

With NFS volumes we're recommending in the range of 35-50, even though we believe the limit is much higher. In fact, someone told me they were using 80; however, I cannot confirm the latter, so take that with a grain of salt.

Parker (2007-11-13 10:11):

Are there any guidelines for volume size and number of VMs per volume when using NFS? I understand some of the answer will depend on my environment.

I asked our NetApp TAM about this and he was suggesting 1 VM per volume. With the maximum number of mounted exports per host being 32, this isn't practical. Maybe he's still thinking in FC/iSCSI LUN terms?

Nick Triantos (2007-11-09 18:16):

"Create a two-level VIF on the NetApp side, with one VIF in single mode built on N VIFs in multi mode, so I get redundancy and N gigabits of throughput.
- Create N-1 IP aliases for the top-level VIF and N DNS entries for the NetApp.
- Mount each NFS datastore with one of the DNS entries."

Do you really need my help on this one? You already have it down. :-)

Number of VMs on a 1 Gb link... I'm going to give the answer people hate the most, myself included: it depends on the workload. From what I've seen, the I/O profile is small-block random, where small block = 4k/8k block size. In this case what you care about is not bandwidth but rather IOPS and latency. So if all VMs were to collectively push 8k IOPS (very doubtful), your bandwidth requirement would be 8k IOPS x 8k block size = 64 MB/s. 8k IOPS on a single physical server is a lot of IOPS, and you'd either have to be running some extremely heavy-duty stuff to get there or have a ton of VMs.

What you want to do is run perfmon, characterize the workload in terms of reads/writes, and start logging for at least 24 hours. Disk Transfers/sec is the sum of reads and writes (IOPS); you also want Disk Reads/sec and Disk Writes/sec.
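[Editor's note: to make the arithmetic above concrete, here is a small sketch that turns the perfmon counters Nick mentions into an aggregate IOPS figure and the implied bandwidth. The 8 KB block size and the sample counter values are assumptions for illustration only.]

```python
# Back-of-the-envelope sizing from perfmon-style counters, as described above.
# The 8 KB block size and the sample values are assumptions for illustration.

def required_bandwidth_mbps(reads_per_sec: float, writes_per_sec: float,
                            block_size_kb: float = 8.0) -> float:
    """Disk Transfers/sec = Disk Reads/sec + Disk Writes/sec (total IOPS);
    bandwidth = IOPS * block size."""
    iops = reads_per_sec + writes_per_sec
    return iops * block_size_kb / 1024.0  # MB/s

# Nick's example: 8,000 IOPS of 8 KB I/O across all VMs on the link.
print(required_bandwidth_mbps(6000, 2000))   # -> 62.5 MB/s, roughly the 64 MB/s cited

# A more typical single-host profile (assumed numbers) barely dents a 1 Gb link,
# which carries on the order of ~100 MB/s.
print(required_bandwidth_mbps(800, 400))     # -> ~9.4 MB/s
```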
ML (2007-11-09 16:21):

For the question about the number of VMs per gigabit link, I forgot to say that I plan to store the application data of my VMs mostly in NFS or CIFS shares on the storage. I mean, the share will be mounted directly from the VM, to store data more efficiently and avoid a virtualization layer for the data.

So I imagine the vmdk access will occur mostly at boot time. Once the OS and the app are loaded, there is not so much disk access.

Thanks,
Regards

ML (2007-11-09 15:29):

Nick,

Exactly! If NFS fits, why try to use iSCSI? Besides, in general, the IT guys know FC SAN and NFS; iSCSI is new for them, and new stuff scares people...

If you could answer the questions I asked on Tuesday, October 30, 2007 in this thread, it would be great (two-level VIF, number of DNS entries, number of VMs per gigabit link on average...).

Eric,

I think the RAM of the ESX server should not be overcommitted. These swap files are used only if ESX is out of RAM.

I don't know, even if ESX doesn't use them, whether they change or not (reinitialization, for instance). If not, no disk space would be used because of snapshots.

As ESX can share RAM blocks between VMs, it could be useful to specialize some ESX servers for Windows 2003 VMs and others for Linux VMs. Some of our VMs use less than 100 MB with this feature.

Best regards,
ML

Nick Triantos (2007-11-08 19:17):

Hi Eric,

Your instructor is correct. VMware has recommended (since VMworld 2006) placing the pagefiles/.vswp files on a LUN in NFS implementations. That would mean that you could create an iSCSI LUN (if you're implementing NFS) and put that stuff in there.

Separating this stuff is also advantageous if you are taking snapshots, so you don't snap pagefile junk nor replicate it.

Frankly, if you're paging, you've got bigger problems to solve than the location of the pagefile. But in any case, his recommendation is similar to what we've heard as well.
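[Editor's note: a quick way to see what separating the swap files buys you: each VM's .vswp file is roughly its configured memory minus its memory reservation, and that is all data you would otherwise capture in every snapshot and replica. The VM counts, memory sizes, and reservations in the sketch below are made-up example numbers.]

```python
# Rough estimate of the swap data kept out of snapshots/replication by placing
# .vswp files (and guest pagefiles) on a separate datastore, per the advice above.
# VM counts, memory sizes, and reservations are made-up example numbers.

def vswp_gb(configured_mem_gb: float, reservation_gb: float = 0.0) -> float:
    """A VM's .vswp file is roughly configured memory minus its memory reservation."""
    return max(configured_mem_gb - reservation_gb, 0.0)

# Hypothetical environment: 24 VMs with 2 GB each and no memory reservations.
vms = [(2.0, 0.0)] * 24
excluded_gb = sum(vswp_gb(mem, res) for mem, res in vms)
print(f"~{excluded_gb:.0f} GB of swap excluded from every snapshot/replica")  # ~48 GB
```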
Nick Triantos (2007-11-08 19:03):

Great comment, ML. So here's my question.

Assuming VMFS over LUNs provides the same performance as NFS, but NFS provides more flexibility, easier management and provisioning versus FC or iSCSI, and considerably less complexity to recover at the individual vmdk level, why should I implement FC or iSCSI?

The issue with LUNs, at least in my mind, from a performance perspective is SCSI reservations and VMFS metadata updates. There is a section in one of the VMware documents that describes what happens to a LUN when VMFS needs to update its metadata. You may want to do a search on VMTN for SCSI reservations and see what happens when the same LUN is shared across large clusters.

Sorry, I don't recall your previous question, but if you can remind me I'll try to answer it, if I can.

ML (2007-11-08 01:57):

Hi,

I found this, "the voice of EMC":
http://france.emc.com/techlib/pdf/H2756_using_emc_celerra_ip_stor_vmware_infra_wp_ldv.pdf

In particular:
Page 21: VMFS is an excellent choice, along with NFS, where the various virtual machine LUNs don't require specific I/O performance.
Page 33: VMware has executed a test suite that achieved similar performance between the two protocols.

And, if I could get an answer to my previous question, it would be great...

Best regards,
ML

Anonymous (2007-11-06 14:10):

Hi. I just went to the VMware config class, and the instructor advised against VMDK over NFS; he believes the VM swap activity will not be efficient with NFS, resulting in slow VM performance. However, my NetApp reps keep pushing for NFS. Even one of my application vendors (Perforce) does not advise using NFS for the Perforce DB and journal files. I'm not sure which is the best route to go at this point. Thanks for your advice.

My setup: 2970 (to be dual quad) with a NetApp 3070 backend. The question is: VMDK over NFS, or iSCSI? If I go with iSCSI, are SNAP drivers needed on individual VMs?