Best practices for creating datastores

I have 10 iSCSI LUNs (all on the same device), each 1.8 TB in size, that I want to present to ESXi to create datastores.  Are there any recommendations for how I should divide these LUNs up, or should I just make one giant datastore?  Maybe there are performance factors to consider here?

If I make 10 separate 1.8 TB datastores, I can see a problem down the road when I need to expand a VMDK but can't because there isn't enough free space on its datastore; that would be less of a problem if I had one giant datastore to start with.

Thank you.

First of all, it's one of those "how long is a piece of string?" type questions.

It depends, of course, on the number of VMDKs you're going to be running, the available storage, the type of storage, the I/O profile, the type of virtual machines, etc. etc. etc.

Things to consider include, for example, whether your storage does deduplication and whether storage cost is a major factor (and so on).
Of course, almost always, a cost reduction comes with a drop in performance.

In any case, a very loose rule I follow (in most cases) is to size LUNs somewhere between 400 and 750 GB and rarely (if ever) put more than 30 VMDKs per LUN.

I almost always redirect this kind of question to the following resources:

first of all, the configuration maximums:
http://www.VMware.com/PDF/vSphere4/R40/vsp_40_config_max.PDF

http://www.gabesvirtualworld.com/?p=68
http://SearchVMware.TechTarget.com/Tip/0,289483,sid179_gci1350469,00.html
http://communities.VMware.com/thread/104211
http://communities.VMware.com/thread/238199
http://www.yellow-bricks.com/2009/06/23/vmfslun-size/

(although André's post above covers most of them)

Tags: VMware

Similar Questions

  • What are the best practices for creating time-only data types, and not the date

    Hi gurus,

    We use a 12c DB and we have a requirement to create a column with a time-only datatype. Could someone please describe the best practices for creating this?

    I would really appreciate any ideas and suggestions.

    Kind regards
    Ranjan

    Hello

    How do you intend to use the time?

    If you are going to combine it with DATEs or TIMESTAMPs from another source, then an INTERVAL DAY TO SECOND or a NUMBER may be better.

    Will you need to perform arithmetic on the time, for example increase it by 20%, or take an average?   If so, NUMBER would be preferable.

    Are you just going to display it?  In that case, INTERVAL DAY TO SECOND, DATE or VARCHAR2 would work.

    As Blushadow said, it depends.
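
    For illustration, here is a minimal sketch of the three options described above (the table and column names are made up for the example): an INTERVAL DAY TO SECOND column for display and date arithmetic, a NUMBER of seconds past midnight for calculations such as averages or a 20% increase, and a plain string for display only.

     create table shift_times (
         start_time_ids interval day to second,  -- displays nicely, can be added to a DATE
         start_time_sec number,                  -- seconds past midnight: easy arithmetic
         start_time_txt varchar2(8)              -- e.g. '14:35:00', display only
     );

     insert into shift_times values (
         interval '0 14:35:00' day to second,
         14 * 3600 + 35 * 60,
         '14:35:00'
     );

     -- arithmetic is simplest on the NUMBER column, e.g. increase the time by 20%:
     select start_time_sec * 1.2 from shift_times;

    Which one fits depends, as noted above, on how the value will be used.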

  • Best practices for creating a new project from an existing WebHelp

    I am currently working in WebHelp Pro (the RH version is 9.0.2.271).

    I have a WebHelp project that currently supports the 2012 version of one of our projects. What I needed to do was create a separate project for 2013, using the 2012 files as a starting point. I couldn't find a way in RH to create a new project by importing an existing WebHelp project, so I copied the 2012 files into a new directory, opened the project and renamed it.

    What prompted this question is that, after this exercise, all seemed well at first. However, I recently had to create new topics in the 2012 version. When I imported these topics into the 2013 project and compiled it, they disappeared - even though the .htm files still appear in the appropriate folder of the 2013 project (when viewed with Windows Explorer).

    After reading a few posts on the forum, I thought I might have corrupted my database by creating the new project incorrectly - but if what I've done is the wrong way to go about this, I don't know what the right way is. I would be grateful for any suggestions.

    The easy way to do this is to create a copy using Windows Explorer.

    Then open the project and click File > Rename.

    Then you have your 2013 project ready.

    See www.grainge.org for RoboHelp and authoring tips

    @petergrainge

  • Best practices for migrating data tables - please comment.

    I have 5 new tables populated with data that must be promoted through a release to a production environment.
    Instead of the DBAs just using a data migration tool, they insist that I record and provide scripts for each commit, in the proper order, needed both to build the tables and to insert the data from scratch.

    I'm not very used to such an environment, and it looks much riskier to me to try to reconstruct the objects from scratch when I already have a perfect model, tested and ready.

    They require a lot of documentation, where each step is recorded in a document and used for deployment.
    I think their goal is that they do not want to rely on backups, but would rather rely on a document that specifies every step needed to recreate everything.

    Please comment on your view of this practice. Thank you!

    I'm not a DBA. I can't even hold a candle to forum regulars like Srini/Justin/sb.
    That said, I'm going to give a slightly different opinion.

    It is great to have (and I paraphrase from various posts):
    deployment documents,
    sign-off from the groups involved,
    rollback steps,
    source code control,
    repositories,
    "The production environment is sacred. All risks must be reduced to a minimum at any cost. In my opinion a DBA should NEVER move anything from a development environment directly into a production environment. NEVER."
    etc., etc.

    But we cannot generalize that every production system must have these. Each customer is different; each has a different level of fault tolerance.

    You wouldn't expect a design change at a cabinetmaker's shop to go through rigor "as strict as at NASA." Why should it be different for software changes?

    How much rigor you apply is somewhat subjective - it depends on company policies, the company's experience with migration disasters (if any) and the corporate culture.

    The OP may be coming from a customer with lax controls, and may be questioning whether the level of rigor at the new customer is worth it.

    At one client (and I don't kid), the prod password is apps/apps (after 12 years of being live!). I was appalled at first. But it's a very small business. They got saddled with EBS during the .com boom heyday. They use just 2 modules in EBS. If I talked about creating (and charging for) these documents/processes, I would lose the customer.
    My point is that not all places must have, or want, these controls. By trial and error, you find out what is best for the company.

    OP:
    You said that you're not used to this type of environment. I recommend that you go with the flow first. Spend time understanding the value/use/history of these processes. Then ask whether they still seem like too much to you. Keep in mind: this is subjective. And if it comes down to your opinion v/s the existing ones, you need either authority OR some means of persuasion (money / sox (intentional typo here) / severed horse heads... whatever works!)

    Sandeep Gandhi

    Edit: fixed a typo

    Edited by: Sandeep Gandhi, Independent Consultant, on 25 June 2012 23:37

  • Best practices for the Data Pump export/import process?

    We are trying to copy an existing schema to another, newly created schema. The Data Pump export of the source schema succeeded.

    However, we ran into errors when importing the dump file into the new schema, remapping the schema and tablespaces, etc.
    Most of the errors occur in PL/SQL... For example, we have views like the one below in the original schema:
    "
    CREATE the VIEW * oldschema.myview * AS
    SELECT col1, col2, col3
    OF * oldschema.mytable *.
    WHERE coll1 = 10
    .....
    "
    Quite a few functions, procedures, packages and triggers contain "oldschema.mytable" in their DML (insert, select, update), for example.

    We get the following errors in the import log:
    ORA-39082: Object type ALTER_FUNCTION: "TEST"."MYFUNC" created with compilation warnings
    ORA-39082: Object type ALTER_PROCEDURE: "TEST"."MYPROCEDURE" created with compilation warnings
    ORA-39082: Object type VIEW: "TEST"."BIRD" created with compilation warnings
    ORA-39082: Object type PACKAGE_BODY: "TEST"."MYPACKAGE" created with compilation warnings
    ORA-39082: Object type TRIGGER: "TEST"."MON_TRIGGER" created with compilation warnings

    Many of the actual errors/invalid objects in the new schema are due to:
    ORA-00942: table or view does not exist

    My questions are:
    1. What can we do to correct these errors?
    2. Is there a better way to do the import under these conditions?
    3. Should we update the PL/SQL and recompile in the new schema? Or update it in the original schema first and then export?

    Your help will be greatly appreciated!

    Thank you!

    @?/rdbms/admin/utlrp.sql

    will compile the invalid objects in the database across all schemas. In your case, you are remapping from one schema to another, and utlrp will not be able to compile those objects.

    The impdp SQLFILE option lets you generate the DDL from the export dump, change the schema name globally, and run the script in sqlplus. This should resolve most of your errors. If you still see errors after that, then run utlrp.sql.
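
    As a rough illustration of that workflow (the directory, dump file and script names below are just placeholders, not taken from the original post):

     -- 1. Generate DDL only (no data is loaded), remapping the schema:
     --      impdp system directory=DATA_PUMP_DIR dumpfile=expdp_oldschema.dmp
     --            sqlfile=oldschema_ddl.sql remap_schema=oldschema:test
     -- 2. Edit oldschema_ddl.sql and globally replace OLDSCHEMA. with TEST.
     -- 3. Run the edited script in sqlplus, then recompile and check what is left:
     @oldschema_ddl.sql
     @?/rdbms/admin/utlrp.sql
     select object_name, object_type
     from   dba_objects
     where  owner = 'TEST' and status = 'INVALID';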

    -André

  • What is the best practice for creating Volumes

    Hello.

    I have a Dell EqualLogic PS 6000 SAN array, configured with RAID 50 and 4 TB of capacity.

    1. I want to present two 2 TB volumes to the 2 host servers. Is it good to set it up this way, or

    2. Can I present each volume separately to each virtual machine...???

    My friends said that method 1 is good practice.

    3. And is using the IP address of the host to present the volumes correct?

    4. If I use a host NIC to present the volumes, can I use the same network card as the virtual switch or not?

    Please reply urgently.

    Thank you.

    Hello

    Re: 1

    If you have a 4 TB array, do not use all the available space on the array; that will reduce performance in the long term.  The array needs unallocated free space, especially if you use replication or snapshots.

    Re: 2

    If you are not using a cluster-capable file system, do not connect multiple servers to the same volumes; each server should access its own volume.  You can connect either via the hypervisor (a.k.a. a Raw Device Mapped LUN) or directly from the virtual machine, which is known as "Storage Direct".  If your server is running Windows, with that configuration you can use the Host Integration Toolkit / Microsoft Edition (HIT/ME) to provide improved MPIO and better integration with SharePoint, MS SQL and MS Exchange.

    Using IP addresses to control access is fine; it's a personal preference.  It is possible to make a mistake and allow multiple servers access to a volume when using just IP addresses, or an address can be spoofed as well.  A safer option is to use a CHAP username/password combination.

    About your last two questions, I'm not sure that I completely understand.

    The iSCSI network should be on its own network.  Ideally, each virtual machine or physical host connecting to it would have its own network interface cards, whether virtual network adapters or physical NICs.  With virtual switches, there should be hardware NICs dedicated to iSCSI traffic, but those can be shared by multiple virtual machines.

    Kind regards

    Don

  • Best practices for an NFS datastore

    I need to create a datastore on a NAS and connect it to some ESXi 5.0 servers as an NFS datastore.

    It will be used to host less frequently used virtual machines.

    What are the best practices for creating and connecting an NFS datastore, from a networking and storage point of view, in order to get the best possible performance and not degrade the overall performance of the network?

    Regards

    Marius

    Create a new layer 2 subnet for your NFS datastores and set it up on its own vSwitch with two uplinks in an active/standby teaming configuration. The uplinks should be patched to two separate physical switches, and the subnet should be non-routed so that NFS traffic cannot reach other parts of your network. The NFS export can be restricted to the IP address of the storage host (the address of the VMkernel port you created for NFS in the first step), or to any address on that subnet. This configuration isolates NFS traffic for performance and provides security and redundancy. You should also consult your storage vendor's whitepapers for any vendor-specific recommendations.

    The datastores can then be made available to the hosts you wish, and you can use Iometer to measure IOPS and throughput to see whether they meet your expectations and requirements.

  • Best practices for Master Data Management (MDM) integration

    I am working on integrating MDM with Eloqua and am looking for the best approach to sync lead/contact data changes from Eloqua into our internal MDM hub (outbound only). Ideally, we would like that integration to be practically real-time, but my findings to date suggest there is no such option; any integration will involve some kind of schedule.

    Here are the options we have come up with:

    1. "Exotic" CRM integration: using internal events to capture and queue in the queue changes internal (QIP) and allows access to the queue from outside Eloqua SOAP/REST API
    2. Data export: set up a Data Export that is "expected" to run on request and exteernally annex survey via the API SOAP/REST/in bulk
    3. API in bulk: changes in voting that has happened since the previous survey through the API in bulk from Eloqua outside (not sure how this is different from the previous option)

    Two other options which may not work at all and are potentially anti-patterns:

    • Cloud connector: create a campaign that polls for changes on a schedule and configure a cloud connector (if that is possible at all) to notify an MDM endpoint to query the contact/lead record from Eloqua.
    • "Native" CRM integration (crazy): fake a native CRM endpoint (for example, Salesforce) and use internal events and Eloqua's external calls to push data into our MDM.

    Questions:

    1. What is the best practice for this integration?
    2. Is there an option that would give us close to real-time integration (technically asynchronous but still event/callback based)? (something like outbound messaging in Salesforce)
    3. What limits should we consider for these options? (for example, daily API call limits, SOAP/REST response size)

    If you can, I would try to talk to Informatica...

    To mimic the native-type integrations, you would use the QIP and control which activities get posted to it via internal events, just as you would with a native integration.

    You would also use the cloud connector API to allow you to set up a CRM (or MDM) integration program.

    You would have identifier fields added to the contact and account objects in Eloqua for their respective IDs in the MDM system, and keep track of the last MDM update with a date field.

    A task scheduled outside of Eloqua would run at a certain interval, extract the changes from the QIP, send them to MDM, and pull the contacts waiting to be sent, in place of the cloud connector.

    There isn't really anything like outbound messaging, unfortunately.  You can have form data submitted to a server immediately on submit (it would be a bit like integration rule collections running from the form processing steps).

    See you soon,

    Ben

  • Best practices for placing the master image

    I'm doing some performance/load test analysis for View and I'm curious about best practices for placing the master image VM. The question is specifically about disk I/O and throughput.

    My understanding is that each linked clone still reads from the master image. If that is correct, then it seems you would want the master image to reside on a datastore that is located on the same array as the rest of the datastores that house the linked clones (and not on some lower-performing array). The reason I ask is that my performance testing is based on some upcoming SSD products. Obviously, the amount of available space on the SSD is limited, but it provides immense amounts of I/O (100k+ and higher). But I want to make sure that by putting the master image on a datastore that is not on the SSD, I am not invalidating the high-end SSD IO performance I am after.

    This leads to another question: if all the linked clones read from the master image, what is the general practice for the number of linked clones to deploy per master image before you start having IO contention problems against that single master image?

    Thank you!

    -


    Omar Torres, VCP

    This isn't really necessary. Linked clones are not directly linked to the parent image. When a desktop pool is created and uses one or more datastores, the parent is copied into each datastore as a so-called replica. From there, each linked clone is attached to the replica of the parent in its local datastore. It is an unmanaged replica and it offers the best performance, because there is a copy in every datastore that contains linked clones.

    WP

  • [ADF, JDev 12.1.3] Best practices for handling form validation

    Hello,

    In my application, I need to create a registration form which contains fields that must be validated (for example, they should follow a format like e-mail, phone number, tax code, ...).

    If the data entered by the user is OK, a new record will be created in my custom Users db table.

    I would like to know the best practices for handling this validation, meaning where the checks should be done and how to show a message to the user filling out the form when something goes wrong.

    Is the VO or the EO or a managed bean best? Or should some checks be put in the EO, others in the VO and others in the managed bean?

    I would be happy if you could give me some examples.

    Thank you

    Federico

    Assuming you want the validation on the field's value to apply to any screen the data can be entered on (and possibly to web services that rely on the same ADF BC), then put the validation on the attribute definition in the EO.

    If you want to add a little more user-friendliness and eliminate some of the network traffic to the server, you can also implement client-side validation in your page - for example by using the regular expression validator.

    https://blogs.Oracle.com/Shay/entry/regular_expression_validation

  • Best practices for SQL

    I started a discussion a few weeks ago about moving .vmdks from one storage vendor to another and got some great answers.  What I didn't ask was whether, if the VMDKs contain SQL databases, there are special considerations when carving up the EMC VNX storage (pools versus RAID groups) or when the datastore is created in vCenter.  I know this may be more of an EMC forum question, but maybe someone in the VMware world has a recommendation or can provide a link to best practices for the VNX.  Thank you.

    Post edited by vmroyale to change the case of SQL

    Have you seen the document "using EMC VNX with VMware vSphere storage solutions"?

  • BEST PRACTICES FOR PL/SQL

    Hi all,
    I'm looking for advice on best practices for creating C# applications that have Oracle as the backend.
    I noticed that the insert, update and delete procedures are not so dynamic, because they are static:

    Insert procedure:
     create or replace procedure insertion (
         param1          in varchar2 default '',
         param2          in number default 0,
         param3          in float default 0,
         table_to_insert in varchar2,
         some_error      out varchar2) is
     begin
         if (table_to_insert = 'table1_spec') then
             insert into table1 (val1, val2, val3) values (param1, param2, param3);
         elsif (etc.) then
         .
         .
         .
     end;
     /
    The update and delete procedures behave the same way. I wonder whether using a cursor to return values more dynamically, and using native dynamic SQL, could help the developers and me have something more dynamic.
    Kind regards

    Edited by: user650358 on June 9, 2011 08:37

    user650358 wrote:

    I'm looking for advice on best practices for creating C# applications that have Oracle as the backend.

    What John said (although I would have been more blunt and used the expression "silly approach" several times) ;-)

    The most flexible approach is to abstract the complexities of the SQL, the relational database and its physical implementation away from the C# developer's code.

    There is no need to know the database design, the joins, the table structures, the SQL and so on. Instead, the abstraction layer takes care of that - and this layer is a suite of PL/SQL packages of procedures and functions.

    Just as the C# developer would use the Win32 API for creating threads, or WinSock for sockets, they now use the PL/SQL abstraction layer in a similar way.

    You want to add an invoice? No need to know the table name, the table structure or the SQL - the C# code simply calls the CreateInvoice() PL/SQL procedure.

    That procedure does the validation. The business rules and logic. The SQL.

    Moreover, the underlying database model can change, new business rules can be introduced and so on - and the C# call to CreateInvoice() will remain the same, with the C# code unaffected by these backend changes.

    Oh yes - this does not mean using PL/SQL code to fetch row data into PL/SQL variables and then push that data in turn to the C# client code.

    It means that a procedure or function like GetCustomerInvoices() returns a ref cursor for the caller to consume. (The SQL that creates the cursor is abstracted - not the cursor itself, because a cursor is the only proper way to access SQL row data.)
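
    As a very rough sketch of what such an abstraction layer could look like (the table, sequence and column names here are invented for the example, not taken from the post above):

     create or replace package invoice_api as
         procedure create_invoice(
             p_customer_id in  number,
             p_amount      in  number,
             p_invoice_id  out number);

         function get_customer_invoices(
             p_customer_id in number) return sys_refcursor;
     end invoice_api;
     /

     create or replace package body invoice_api as

         procedure create_invoice(
             p_customer_id in  number,
             p_amount      in  number,
             p_invoice_id  out number) is
         begin
             -- validation and business rules would go here
             insert into invoices (invoice_id, customer_id, amount, created_on)
             values (invoice_seq.nextval, p_customer_id, p_amount, sysdate)
             returning invoice_id into p_invoice_id;
         end create_invoice;

         function get_customer_invoices(
             p_customer_id in number) return sys_refcursor is
             rc sys_refcursor;
         begin
             open rc for
                 select invoice_id, amount, created_on
                 from   invoices
                 where  customer_id = p_customer_id;
             return rc;
         end get_customer_invoices;

     end invoice_api;
     /

    The C# code only ever calls invoice_api; the tables, joins and SQL behind it can change without the client code being touched.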

  • Best practices for lazy-loading a collection once but ensuring it's there?

    I'm confused about best practices for managing the "setup" of a form, where I need a remote call to occur only once for the form, but I also need to make use of that collection for a combobox that will change when different rows in the DataGrid are clicked. It's easier if I just explain...

    1. You click on a row in a datagrid control to modify an object (for this example we will say it is an "employee")
    2. The form you go to must have a collection of 'Department' objects loaded by a remote call. This load of departments should only happen once, since it is not common for them to change. The departments collection is used to fill a combobox on the form.
    3. You need to figure out which department the comboBox's selectedIndex should point to, by iterating over the departments and finding the one that matches employee.department.id

    Individually, I know how to do all of the above, but because of the asynchronous nature of Flex, I'm having a hard time setting things up. Here are a few questions...

    My first thought was to just put the loading of the departments in an init() method on the employeeForm that would run on the form's creationComplete() event. On the grid component page, when the row-click event handler fires, I then call the setUp() method on my employeeForm, which figures out which selectedIndex to set on the combobox by looking at the departments.

    The problem is that the resultHandler for the departments load might not have returned yet (so the departments might not be there when setUp() is called), but I can't put my business logic for determining the correct combobox selection in the departmentResultHandler, because that would mean the remote object call would fire every time, even when I don't want it to.

    Am I missing a simple best practice? Suggestions welcome.

    Hi there rickcr

    This is pretty rough and you'll need to tidy it up a bit, but take a look below.


     <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">

         <mx:Script>
             <![CDATA[
                 import mx.controls.Alert;
                 import mx.collections.ArrayCollection;

                 private var comboData:ArrayCollection;

                 private function setUp():void {
                     if (comboData) {
                         Alert.show("data is present");
                         populateForm();
                     } else {
                         Alert.show("data is not present");
                         getData();
                     }
                 }

                 private function getData():void {
                     comboData = new ArrayCollection();
                     // on the result of this call, call setUp() again
                 }

                 private function populateForm():void {
                     // fill out your form
                 }
             ]]>
         </mx:Script>

         <!-- UI markup (the TabNavigator with the form and combo box) goes here -->

     </mx:Application>

    I think this example shows the kind of thing you want.  When you first click on tab 2 there is no data.  When you click on tab 2 again, there is. The data for your combo will be stored in comboData.  When the component first gets created, comboData is not instantiated, just declared.  This allows you to say

    if (comboData)

    This means that if the variable contains your data, you can fill out the form.  Initially it does not, so in the else condition you can call for your data, and in the result handler for your data you can then say

    comboData = new ArrayCollection(), put the data in it and call setUp() again.  This time comboData is populated, so it will run the populateForm() method and you can decide which selected item to set.

    If this is on a larger scale, you will want to look into creating a proper handler class to deal with it, but this simple demo shows how you can test whether the data is there.

    Hope it helps and gives you some ideas.

    Andrew

  • Just upgraded - tips on best practices for sharing files on a Server 2008 Std

    The domain contains about 15 machines with two domain controllers; one handles data, app files, print, etc.  I just upgraded from 2003 to 2008 and want advice on best practices for setting up a group of file shares. Basically I want each user to have their own folders, but also a shared staff folder. Since I am most accustomed to using Windows Explorer, I would like to know whether these shares can be set up in a better way. Also, I noticed 2008 has a contacts feature. How can it be used? I would like to message or e-mail users their file locations. Also, I want to set up a lower-level admin to manage the shares without letting them go too deep into the server - not sure.

    I have read a certain amount, but I don't like testing directly anymore because it can cause problems. So basically I want a short and neat way to manage shares using the MMC, as well as a way to e-mail users the locations of their shares from the server. Maybe also what kind of access control or permissions are suitable for documents. Also, how can I have them use Office templates without changing the format of the template?

    THX

    g

    Hello 996vtwin,

    Thank you for visiting the Microsoft Answers site. The question you have posted is related to Windows Server and would be better suited to the Windows Server TechNet community. Please visit the link below to find a community that will support what you are asking:

    http://social.technet.Microsoft.com/forums/en-us/category/WindowsServer

    Hope this helps :)

    Adam
    Microsoft Answers Support Engineer
    Visit our Microsoft answers feedback Forum and let us know what you think

  • Best practices for vSphere 5.1

    Where can I find the most up-to-date doc about EQL array configuration / best practices with VMware vSphere 5.1?

    Hello

    Here is a link to a PDF file that covers best practices for ESXi and EQL.

    EqualLogic best practices for ESX

    en.Community.Dell.com/.../20434601.aspx

    This doc specifically mentions that the storage heartbeat VMkernel port is no longer necessary with ESXi v5.1.  VMware has corrected the problem that made it necessary.

    If you add it on a 5.1 system it will not hurt; it will just take up an IP address for each node.

    If you upgrade from 5.0 to 5.1, you can delete it afterwards.

    Here is a link to a VMware KB article that addresses this issue and links to other Dell documents which also confirm that it is fixed in 5.1.

    KB.VMware.com/.../Search.do

    Kind regards
