iSCSI or NFS + ASM of SAN

Hello

My question may look stupid, but it really is a simple one.
Do I need iSCSI or a SAN to install Clusterware (Grid Infrastructure) 11.2 on OEL 5.4 x86_64?

Regarding:

1. Create the backing file

dd if=/dev/zero of=/disk1 bs=1024k count=1000

2. Set up the loop device

losetup /dev/loop1 /disk1

3. Mark the disk for ASM

oracleasm createdisk LOOP1 /dev/loop1

4. Use it with ASM
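
For reference, a minimal sketch of the whole loopback-disk setup with a verification step at the end (same paths and sizes as above; note that the losetup mapping is not persistent, so it has to be redone after a reboot before ASM starts):

dd if=/dev/zero of=/disk1 bs=1024k count=1000    # 1 GB backing file
losetup /dev/loop1 /disk1                        # expose the file as a block device
oracleasm createdisk LOOP1 /dev/loop1            # label the device for ASM
oracleasm scandisks                              # rescan so the new label is picked up
oracleasm listdisks                              # LOOP1 should now be listed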

Tags: Database

Similar Questions

  • iSCSI or NFS

    Hello everyone

    I am deploying ESXi for a client and they are quite short on money. They need at least 2 ESXi servers in order to deploy Server 2003 machines. They have an Exchange 2003 server, several domain controllers, and a database server. The database is a SQL database, but its current use is still quite small; maximum memory usage is about 4 GB. I want to order some ESXi servers with 16 GB to 24 GB of RAM apiece.

    My problem is that in the long run I need shared storage for my phase 2 plan. At that point I will configure vCenter and apply the Enterprise licenses so I can use VMotion and some other features. I will also add more ESXi servers around the same time for Exchange 2010 and some other applications. Exchange usually generates a lot of I/O, though I have seen that Exchange 2010 has reduced its I/O by moving much of its work into memory. I was wondering if I can get away with a NAS and NFS. I would use 4 x 1 Gbps NICs to a switch. If I bond them, can I get a decent aggregated link with ESX4i? If not, is 1 Gbit/s good enough for NFS? Any experience using such a slow link instead of 4 Gb/s FC? (see the configuration sketch below)

    If I can't get away with NFS, then I would have to order local storage large enough to hold the current servers I want to convert and create for this first phase, and then get an iSCSI SAN for them next year.

    Any advice would be really appreciated. Thank you
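
    For illustration, a rough sketch of the multi-NIC NFS setup being discussed, using ESX 4.x service-console (or equivalent vCLI) commands; the vSwitch name, vmnic numbers, addresses and export path are assumptions, and note that NFS traffic to a single NAS IP will still ride one uplink at a time even with IP-hash teaming:

    esxcfg-vswitch -a vSwitch1                         # dedicated vSwitch for storage traffic
    esxcfg-vswitch -L vmnic2 vSwitch1                  # add the teamed uplinks
    esxcfg-vswitch -L vmnic3 vSwitch1
    esxcfg-vswitch -A NFS vSwitch1                     # port group for the VMkernel interface
    esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 NFS      # VMkernel port used for NFS
    esxcfg-nas -a -o 10.0.0.20 -s /vol/vmstore nfs_store    # mount the NAS export as a datastore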

    Hello

    Would you recommend using iSCSI or NFS at these "low" speeds for the next 5 years?

    Mike Laspina wrote a good post the other day on some of the differences between NFS and iSCSI.

    Have a read of his article on running ZFS via NFS as a VMware store.

    It is a little bit technical and focused on the OpenSolaris ZFS side, but the information you're looking for is mostly in there.

    --

    Wil

    _____________________________________________________

    VI-box tools & scripts wiki at http://www.vi-toolkit.com

  • ASM vs. SAN RAID: which to adopt

    Hello

    We have a system using ASM and SAN storage. The SAN has hardware RAID, while ASM also does the same kind of thing, since as I understand it its function is to stripe and mirror. Do you think we have several RAID layers in the system and should disable the hardware RAID? Also, I think the hardware RAID is RAID 4. Please correct me if I'm wrong.

    Regards,
    Nick

    RAID 1+0 or 0+1 is an implementation of striping and mirroring.
    RAID 5 is an implementation of striping with parity (where the parity stripes are scattered across the disks; with RAID 4 the parity stripes are on a dedicated disk).
    See: http://www.baarf.com

    Please mind that the baarf website focuses on the (write) performance penalty of using RAID levels with parity. Even with a SAN this is still quite true (it is inherent in the way parity is implemented).

    ASM normal redundancy is essentially a mirroring implementation. The way ASM normal redundancy is implemented works subtly differently from mirrored RAID.

    This means that having RAID 1+0 / 0+1 at the SAN level is one way of mirroring, and having ASM normal redundancy is another way of keeping mirrored copies of blocks, so combining RAID mirroring at the SAN level with normal redundancy at the ASM level means you have 4 copies on the same storage box.

    So I would say that there is little benefit in having ASM normal redundancy on top of a mirrored stripeset.

    As advice on what to do: most ASM implementations use external redundancy, which means the SAN's redundancy is used. I think that makes sense.
    Using normal redundancy makes sense when you use local (non-RAID) drives, or when you have multiple SANs.
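
    As a rough illustration of that advice, this is how the two redundancy choices might look when creating disk groups (a sketch only; the disk paths and failure-group names are placeholders, not taken from this thread):

    sqlplus / as sysasm <<'EOF'
    -- External redundancy: let the SAN's RAID 1+0 do the mirroring
    CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
        DISK '/dev/oracleasm/disks/DATA1', '/dev/oracleasm/disks/DATA2';
    -- Normal redundancy only pays off on non-RAID local disks or across two SANs,
    -- with each SAN placed in its own failure group
    CREATE DISKGROUP DATA2 NORMAL REDUNDANCY
        FAILGROUP san1 DISK '/dev/oracleasm/disks/SAN1_DISK1'
        FAILGROUP san2 DISK '/dev/oracleasm/disks/SAN2_DISK1';
    EOF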

  • NFS software and SAN

    Hi everyone, what would you recommend for virtualization: a Fibre Channel SAN, iSCSI, or NFS?

    Obviously it needs to be compatible with VMware version 3.5 and later.

    Hi Procopio,

    for iSCSI & NFS you can try Openfiler (2.99)... for FC you need to have a SAN...

  • ASM and SAN HQ issues

    Hello

    I have a few problems with ASM 4.6 and SAN HQ 3.0.

    SAN HQ 3.0:
    I get email alerts saying there is no network connectivity, even though there are no network connectivity issues.  The message is below:

    • [ID: 14.2] A connection could not be made.
    • The SAN HQ server issued SNMP requests to the group and some or all of them failed.  This condition is probably due to network congestion or a loss of network connectivity.  The server will try again in 60 seconds.  Check your network connectivity and that the storage group is online.  If the problem persists, contact your Dell support provider.

    ASM 4.6:
    For some reason, when ASM refreshes the hosts it never finishes with one of the hosts (in a cluster).  It gets stuck on discovering cluster connectivity and I don't understand why.  I also get e-mail alerts saying that Smart snapshots failed.

    Can someone please help?

    Thank you.

    There is SAN HQ 3.0.1 available from eqlsupport.dell.com/.../download.aspx

    From the release notes:

    Issues fixed in SAN HQ Version 3.0.1

    The following issue has been corrected in the SAN HQ 3.0.1 maintenance release:

    Previously, after upgrading to SAN HQ 3.0, the SAN HQ server could not connect to groups running firmware version 7 or later, and SAN HQ issued an alert (SAN HQ event ID: 14.2) indicating that some or all SNMP requests to the group failed.

    This condition is resolved.
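
    For what it's worth, SNMP reachability from the SAN HQ server to the group can also be checked by hand with the net-snmp tools; a minimal sketch (the group IP address and community string below are placeholders):

    # A timeout here points at a network or SNMP problem rather than at SAN HQ itself
    snmpwalk -v 2c -c public 10.10.5.10 system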

  • iSCSI performance - 4 paths to SAN

    Hello.

    I have a test environment using a new HP 1U server and a Huawei SATA SAN (12 disks in a RAID 6 array) on which I created a 500 GB LUN.

    I'm trying to upload a 60 GB template through vCenter Server from my laptop to the new datastore, and it is expected to take 2,000 minutes.

    On my Huawei I have 2 controllers connected to 2 network switches; I have connected 2 Ethernet ports from each controller to each switch, so there are 8 paths (multipath) to the SAN/datastore.

    controller A port 1 - 10.10.1.1 - switch x
    controller A port 2 - 10.10.2.1 - switch z
    controller B port 1 - 10.10.1.2 - switch x
    controller B port 2 - 10.10.2.2 - switch z

    Each subnet is on its own vSwitch.

    I don't know if the performance I'm getting is to be expected, or if something is wrong.

    I have tried changing the multipath policy from Most Recently Used to Fixed, and also removing all but one of the dynamic iSCSI IPs, neither of which had any impact.

    In terms of performance I am getting between 50 and 139 ms of latency on reads and writes.

    AFAIK everything is configured correctly.

    Thanks for the help

    Then you have a good amount of caching.

    What total workload is going to your array?  This definitely feels like an overloaded array.
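
    If the array itself is not the bottleneck, one more thing worth trying is switching the path selection policy to round robin so that all of the paths carry traffic. A hedged sketch using ESXi 5.x esxcli syntax (the device identifier is a placeholder; ESX/ESXi 4.x uses "esxcli nmp device setpolicy" instead):

    esxcli storage nmp device list                                    # note the naa.* identifier of the iSCSI LUN
    esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR    # switch that LUN to round robin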

  • CPU overhead with software iSCSI vs. NFS?

    If you use NFS to store your VMs' VMDK files, can anything be said about the CPU overhead for this compared to software iSCSI?

    If you use FC or hardware iSCSI it seems most of the work can be offloaded to the HBA, but what about NFS, which has to be done in software? Will it be lighter than iSCSI, since the VMkernel doesn't have to manage the lower block-level access?

    Hello.

    The difference is very small.  Check out the storage protocol performance comparison white paper for VMware vSphere 4.

    Good luck!

  • Convert iSCSI RDM to NFS VMDK

    OK, a little bit of background.  I have two ESX environments: a cluster of older hardware running ESX 3.0 and VirtualCenter 2.0, and a cluster of newer hardware running ESX 3.5 and VirtualCenter 2.5.  On the older hardware there are virtual machines with iSCSI RDMs.  I want to convert them to VMDKs on NFS and export/import them into the new cluster of ESX servers.

    Now for the question: is this even possible?  If so, how should I go about it?  I read up on vCenter Converter, but I have not really found a definitive answer to my question.

    You can clone the virtual machine. When you do this, the RDM will be converted to a VMDK.  If you choose an NFS datastore as the destination, your work is done.

    -KjB

    VMware vExpert
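
    If cloning through VirtualCenter is not convenient, the same conversion can also be sketched from the ESX console with vmkfstools (the paths and datastore names are made up for illustration; the VM should be powered off first):

    # -i clones the disk contents, so the RDM mapping becomes an ordinary flat VMDK on the NFS datastore
    vmkfstools -i /vmfs/volumes/old_vmfs/myvm/myvm_rdm.vmdk /vmfs/volumes/nfs_store/myvm/myvm.vmdk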

  • SAN iSCSI and ESX3 Clustering

    Reading the documentation, it indicates that if we deploy an iSCSI SAN with software initiators, the system will not allow clustering.  Is that the correct reading?  No clustering at all?  If so, why?

    http://www.VMware.com/PDF/vSphere4/R40/vsp_40_mscs.PDF

    page 11

    > The following environments and features are not supported for MSCS setups in this release of vSphere:

    > - Clustering via NFS or iSCSI.

    http://www.VMware.com/PDF/vi3_35/esx_3/vi3_35_25_u1_mscs.PDF

    page 16

    Clustering is not supported on iSCSI or NFS

    ---

    VMware vExpert 2009

    http://blog.vadmin.ru

  • iSCSI and FC SAN in VI 3.5

    Hello gurus!

    I have a VI 3.5 environment with an FC SAN, but now we need more disk space. I am thinking of using an iSCSI SAN, but some people say that it is not right (unstable) to use iSCSI and FC SANs in the same environment.

    What do you think of this?

    Thank you!

    You can always have FC, iSCSI and NFS presented to the same ESX server; they will not conflict with each other as long as you do not present the same logical unit number (LUN) over both FC and iSCSI.

    Craig

    vExpert 2009

    Malaysia, VMware communities - http://www.malaysiavm.com

  • 12c Grid Infrastructure on RHEL 6 with iSCSI ASM storage

    I'm working on building my 12c Grid Infrastructure database using the following:

    • RHEL 6.6
      • node rac1
      • node rac2
    • I use a Synology NAS for the Oracle Clusterware RAC shared storage.  The following iSCSI targets were discovered by nodes rac1 and rac2 (/dev/sda, /dev/sdb, /dev/sdc):
      • shared iSCSI LUN for CRS, 10 GB
      • shared iSCSI LUN for DATA, 400 GB
      • shared iSCSI LUN for FRA, 400 GB

    My question is:

    How do I prepare these iSCSI disks for ASM 12c?

    Should I format each drive on each node and then run oracleasm createdisk?

    Please advise.

    RAC: two-node Oracle 12c Grid Infrastructure installed on RHEL 6

    Shared storage for the lab: a Synology NAS on which I created three iSCSI LUNs/devices:

    1 for the CRS

    1 for DATA

    1 for FRA

    Each node (initiator) discovered and connected to the targets (CRS, DATA and FRA).

    I then used fdisk on /dev/sda, /dev/sdb and /dev/sdc, which created:

    /dev/sda1

    /dev/sdb1

    /dev/sdc1

    Initialized oracleasm on both RAC nodes and created the CRS disk using oracleasm createdisk CRS /dev/sda1

    Then oracleasm scandisks

    Then oracleasm listdisks

    And the CRS, DATA and FRA disks appear to be working now.
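
    Pulled together, the disk preparation described above looks roughly like this (device names are the ones from this thread; partitioning and labelling are done from one node only, the other node just rescans):

    # On node rac1: create one primary partition per LUN (fdisk is interactive)
    fdisk /dev/sda              # -> /dev/sda1 ; repeat for /dev/sdb and /dev/sdc

    # On both nodes: configure and start the oracleasm driver
    oracleasm configure -i      # set the owner/group (e.g. grid:asmadmin) and enable on boot
    oracleasm init

    # On node rac1: label the partitions for ASM
    oracleasm createdisk CRS  /dev/sda1
    oracleasm createdisk DATA /dev/sdb1
    oracleasm createdisk FRA  /dev/sdc1

    # On node rac2: pick up the labels created on rac1
    oracleasm scandisks
    oracleasm listdisks         # should list CRS, DATA and FRA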

  • How to compare local, iSCSI and NFS storage for VMware ESXi 5 with IO Analyzer?

    Hello

    I have 2 ESXi 5.0 hosts with a few virtual machines running. I would like to compare the I/O throughput for my local, iSCSI and NFS VMware datastores. Can someone point me in the right direction? I have read the documents and tutorials, but I'm not sure how to run comparison tests across these 3 options.

    I need to compare the following storage options:

    Local disk on the ESX host

    iSCSI on QNAP NAS device

    iSCSI on EMC e 3100

    NFS on QNAP NAS device

    NFS on EMC e 3100

    IOmeter seems to be a good tool based on my reading, but it is still not clear where to make changes to move the tests from one storage option to another.

    Thanks in advance,

    Sam

    If you use IO Analyzer, then you simply Storage vMotion the VM to the datastore on the device you want to test, then run the test (and record the results).  Then Storage vMotion it to the next datastore and repeat.

  • NFS and iSCSI

    Hi all

    I won't go into another iSCSI vs NFS performance debate because that isn't what interests me; I would like to gather your thoughts on why you chose NFS or iSCSI, and also the differences between the two, because I'm not sure about them.

    The reason behind this is a clean rebuild of our virtual datacenter that is coming up soon.

    At the moment we have 5 virtual machines running on ESXi, with two virtual machines connecting to external iSCSI (RAID 6) storage. This has worked well for over a year. (So I have no real experience with NFS.)

    The ESXi host and all five VMs are on RAID 1 10k SAS drives, but now I'm going to follow best practices, since we have bought VMware Essentials.

    I'll keep the host on that machine and put the 5 VMs on a separate NAS (ReadyNAS NVX) datastore. I will use one of the 4 NICs to connect directly to the NAS using a straight-through cable, and the other three will go to a switch.

    Now, this is why I ask the question: should I use NFS or iSCSI? From what I've seen there are a lot more technical documents and videos based on iSCSI, but in my view that's because it is aimed at the enterprise market and huge numbers of VMs.

    The future may hold another 2-4 VMs, but no more than that.

    Our external network manager has recommended NFS and I trust his opinion, but I feel I have no experience with NFS.

    Tips are welcome.

    Server specification

    Model: Dell R710

    2 x Xeon processor sockets (can't remember what model; I'm typing this at home)

    24 GB Ram

    2 x RAID1 SAS drives

    4 x Broadcom NICs with iSCSI offload

    Since this is just IP over the cable, a crossover cable will do.

    AWo

    VCP 3 & 4

    \[:o]===\[o:]

    = You want to have this ad as a ringtone on your mobile phone? =

    = Send 'Assignment' to 911 for only $999999,99! =

  • FAS2050 performance (iSCSI vs. FC vs. NFS)

    We have a NetApp FAS2050.  It will use Fibre Channel to connect to other hosts (not VMware).

    To connect it to our VMware environment, we can use fibre, iSCSI, or NFS.

    It seems that NFS would be easier in terms of deduplication (and thin provisioning).  Are there any performance reports on iSCSI vs. FC vs. NFS?  So far I've read the VMware docs 'Performance Best Practices for VMware vSphere 4.0' and 'Scalable Storage Performance', but they don't focus on the NFS performance question.

    For background, the hosts are Dell 2950s running ESX 4, with about 10 virtual machines per host.

    Thank you

    Assuming that our throughput would be less than 1 Gb and we can keep a handle on the CPU, it seems that NFS will be sufficient.  Am I reading that correctly?

    Yes, but that conclusion is based solely on that assumption.  It is always better to know the workloads and their needs before these decisions are made.  You don't want to find out after the fact that your processing needs were (or turned out to be) too high.  The chances are pretty good that you would be fine with any of the protocols and NetApp storage, but it's always better to know for sure.

    The other thing to keep in mind is that, while NFS provides operational simplifications, it is also an extra-cost option on the NetApp.  If you already have a Fibre Channel environment, strong support (or staff) in place, and that infrastructure can handle the growth, then it might be worth looking into using it as well.

  • Home environment: direct-attached eSATA or single 1 GbE iSCSI/NFS?

    Greetings...

    Today I am running ESXi 3.5u4 on my server at home, with plans to move to ESXi 4.0 in a few months.  Right now I'm flying without a net - no disk redundancy.  The server doesn't have the slot space to add a good RAID card (it's a Shuttle XPC), so I'm looking to go with an external RAID device.  I am considering a device that has both eSATA and 1GbE onboard (QNAP TS-439 Pro).  If I go with it as a NAS, I'll plug one port of the NAS directly into a dedicated 1GbE port on the server, run jumbo frames, etc., leaving the 2nd port on the NAS connected to the management network.

    I run 5 VMs full-time, none particularly disk I/O intensive, other than the file server that my wife and I use, and that only a little of the time.  If I go NAS instead of direct attached, will I see a noticeable loss of performance?  If not, should I be looking at iSCSI or NFS?

    Provided I don't see a big performance hit, I'd rather go NAS, since it would give me a little extra flexibility to connect other systems for Time Machine backups, etc.  I'm leaning towards NFS, since iSCSI, if I remember correctly, involves stuffing SCSI inside TCP frames, creating overhead, no?

    The differences in performance between NFS and iSCSI would be minimal on a smaller device. Jumbo frames may also be of minimal value, but let your own tests prove that. NFS is much simpler to set up and use. If there is a lot of contention for disk access then you will start having performance problems. There is not a lot of processor or controller power in these devices, and no protected write cache (battery-backed disk cache).
