Best way to store data on local RAID

Hi all

Simple question for you: I'm new to VMware and I want to be sure about the best way to set up my server.

I have a ProLiant DL380 G5 with five 300 GB SAS drives. I don't know whether I should add another external disk to pair with this server hardware.

What is the best way to configure VMware ESXi 3.5 and my RAID arrays for 800 GB of file storage and 300 GB for the operating system images?

I read about raw LUNs (RDM) but I can't select that option in my ESXi (is it because I have no SAN, or because it's ESXi?).

Thx for the help.

AlX

With 5 x 300 GB drives, I'd say you're going to struggle to fit your VMs on here. If you use RAID-5, which tolerates the failure of one disk, you get 1200 GB of raw storage (a little less once formatted - call it 1100 GB), but you then need to allow for general overhead and VMware files such as vswp swap files and snapshots. I would look at adding another 300 GB disk to that RAID-5 array if possible.

If the virtual machines are disk-intensive, you should also consider the IOPS requirement. If these are 15k disks, the array should give you around 600 IOPS - is that enough?
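
As a rough sanity check on that figure, using the common planning assumption of roughly 120-180 IOPS per 15k RPM spindle: five spindles give about 600-900 random-read IOPS in aggregate, and random writes cost more on RAID-5 (four back-end I/Os each), so the usable number for a mixed workload will sit at or below the lower end of that range.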

Just format the local array with VMFS and use VMDKs for all the machines.

Tags: VMware

Similar Questions

  • best way to store, present and enter the date and time

    Hello!

    I hope that I can make this clear. I have an APEX application that has a couple of date-type fields. This application will be translated into two different languages, which have different date/time representations - for example DD-MON-YYYY HH12:MIAM and YYYY-MM-DD HH24:MI. I am now trying to find the best way to store the date/time fields, how I can present them in a form/report, and how the user can enter them in a field.

    This is what I see:
    - I can set an application date format / application timestamp format. It must be in the format of the main language - in this case, English.
    - I can also make translated versions of the application. QUESTION: can I enter a different date format for the translated application?

    Should I make my fields of type DATE or of type TIMESTAMP?

    I hope someone can answer. If I'm not clear, please tell me!

    You can set the application date format using an application item

    &DATE_FORMAT_MASK.

    and set that item's value dynamically.

    Home > Application Builder > Application # > Shared Components > Edit Globalization Attributes
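
    For illustration only, a minimal SQL sketch (ORDERS and ORDER_DATE are hypothetical names) showing that a single stored DATE value can be rendered with either of the masks mentioned in the question - the column can stay DATE and only the display mask needs to change per language:

    select to_char(order_date, 'DD-MON-YYYY HH12:MIAM') as english_format,
           to_char(order_date, 'YYYY-MM-DD HH24:MI')    as iso_format
      from orders;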

    Denes Kubicek
    -------------------------------------------------------------------
    http://deneskubicek.blogspot.com/
    http://www.Opal-consulting.de/training
    http://Apex.Oracle.com/pls/OTN/f?p=31517:1
    http://www.Amazon.de/Oracle-Apex-XE-Praxis/DP/3826655494
    -------------------------------------------------------------------

  • What is the best way to store the SCN for an insert/update in that record

    Oracle 12.1.0.2, non-container, on Windows 64-bit

    When a record in a table is inserted or updated, what would be the best way to store the SCN for that record in the record itself?

    I thought of an AFTER ROW trigger, but I don't know whether a trigger that stores the current_scn would cause the trigger to fire again (a recursive trigger).

    Does anyone have a good idea of the best way to do this? The devs don't want to store the PK and the SCN in yet another table...

    Yes, ROWDEPENDENCIES would be the best way to go. But management won't recreate all the tables for this.

    Third-party applications retrieve data from the tables (all of the data). We are looking for a way for them to pull only what is new or updated since their last pull.

    I suggest that you start again and give ALL OF THE REQUIREMENTS.

    You have rejected EVERY answer given, and justified it with 'hidden' knowledge about what management or the devs want or don't want. Stop making us guess what the requirements and constraints are. If you want a real answer then tell us ALL of the information.

    When a record in a table is inserted or updated, what would be the best way to store the SCN for that record in the record itself?

    Solomon has answered that repeatedly. If you want to add a column to a table to store the SCN, then the 'best' way is to let Oracle do it for you automatically by using ROWDEPENDENCIES.

    As he also says, re-creating the table to add that clause will be MUCH MORE EFFICIENT than anything you can do manually. It will also be more accurate, because Oracle fills ORA_ROWSCN with the SCN at the time the row was committed. You, as a user, cannot populate a column based on when a row is committed, since the actual COMMIT belongs to the transaction, not to the row or to any trigger that you use.

    Yes - there are two drawbacks to this method:

    1. You need to re-create the table (a sketch of that step follows below).

    2. You cannot add an index to this "hidden" column.
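
    A minimal sketch of the re-create step, assuming a hypothetical table named MY_TABLE (indexes, constraints, triggers and grants would also have to be re-created, which is exactly the objection raised above):

    -- hypothetical names; CTAS with ROWDEPENDENCIES, then swap the tables
    create table my_table_new rowdependencies as select * from my_table;
    -- re-create indexes, constraints, triggers and grants on my_table_new here
    rename my_table to my_table_old;
    rename my_table_new to my_table;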

    The devs don't want to store the PK and the SCN in yet another table...

    So? Who cares what the devs want to do? You want the BEST solution? Then you need to put personal preferences aside and determine what the 'best' solution actually is. Why does it matter whether some dev wants to do it or not?

    OK, the business problem is this: third-party external users pull a large number of tables from the database via an API that we wrote. That obviously disrupts OLTP during the day. To minimise that, we want them to extract only the data that has been inserted/updated since their last pull.

    That is the definition of a "replica" DB. So why don't you consider a real replicated DB? You can use Data Guard and have a read-only replicated DB that can be used for reporting. Oracle does ALL the work of keeping ALL the tables in sync. You and your developers do NOTHING!

    We thought that storing the SCN would let the API extract only the data whose SCN is higher than the SCN of their last data pull.

    OK - except that you keep rejecting the solutions that actually do that. You ask about storing the SCN in the same table, but then you reject the solution that does exactly that. And then you add that your "devs" don't want to store the info in a new table either.

    So the only solutions left to you are replication or LogMiner. The REDO logs contain all of the changes, if you want to extract them yourself. Replication (e.g., Data Guard) will use those logs for you to maintain a replicated database.
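
    For reference, a minimal LogMiner sketch (the log file path and table name are placeholders, and this assumes the online dictionary is available to LogMiner):

    -- register a redo/archive log and start LogMiner with the online catalog
    execute dbms_logmnr.add_logfile(logfilename => '/path/to/redo01.log', options => dbms_logmnr.new);
    execute dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);

    -- changed rows for the table of interest, with the SCN of each change
    select scn, operation, sql_redo
      from v$logmnr_contents
     where table_name = 'MY_TABLE';

    execute dbms_logmnr.end_logmnr;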

    We thought about that, but re-creating all of the production tables with ROWDEPENDENCIES, along with dealing with the FKs and other dependencies, is why that idea was shot down.

    Well, you NEVER mentioned that you had "thought about that" and rejected it. And you NEVER mentioned anything about FKs and other dependencies. What is it about the FKs and other dependencies that prevents that solution from working? Tell us! Give us ALL of the information.

    Wouldn't an AFTER ROW trigger capture the commit SCN? Or is "after" really not after the commit?

    No - a trigger does NOT have a commit. A trigger runs as a step within a transaction. A commit applies to the entire transaction. Until you, or Oracle, issue a commit, there is NO "committed SCN" to be stored as ORA_ROWSCN.

    You can easily see that for yourself. Create a simple table with row dependencies and then do an update using two different sessions.

    create table emp_scn rowdependencies as select * from emp where rownum <= 3;

    select empno, ora_rowscn from emp_scn;

    update emp_scn set job = 'b' where empno = 7499;

    commit;

    The first SELECT statement will show you that each row has the same SCN.

    EMPNO, ORA_ROWSCN

    7369,70622201

    7499,70622201

    7521,70622201

    Now, do the update (but no commit), then SELECT it

    EMPNO, ORA_ROWSCN

    7369,70622201

    7499,

    7521,70622201

    Where is the value for 7499? This session will NOT see a value for rows changed in the current transaction. Other sessions will still see the old value.

    Now do the commit, then SELECT again

    EMPNO, ORA_ROWSCN

    7369,70622201

    7499,70622301

    7521,70622201

    7499 now has a new value, different from the other rows. It does not get this new value until the commit occurs.

    Yes, ROWDEPENDENCIES would be the best way to go. But management won't recreate all the tables for this.

    Well, you got the answer you wanted. You asked for the best way. Now you say that you were told the best way, but you don't like the answer.

    How is that our fault? Your question has been answered, hasn't it?

    Here are the facts:

    1. Oracle creates a history of changes - the REDO log files.

    2. You can use LogMiner to extract those changes.

    3. You can create your own change log by adding a materialized view (MV) log to your table.

    4. You can then write custom code that uses this MV log to determine which rows to "replicate".

    So far you have rejected ALL of the possible solutions.

    Accept one of them, or change the requirements so that one of the proposed solutions can be used.

    Personally, if I HAD to use a custom solution, I would use an MV log to record the ROWIDs of the rows that have changed (for tables whose ROWIDs cannot change). I would then extract the appropriate rows by pulling the rows that correspond to those ROWIDs.

    Even that has problems, since a row can be changed several times and child rows can also be changed several times - the FK issues you mentioned.
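
    A minimal sketch of that MV-log approach, assuming a hypothetical table MY_TABLE (the MLOG$ table and column names below are the usual Oracle defaults, so verify them in your own database before relying on them):

    -- record the ROWIDs of changed rows in a materialized view log
    create materialized view log on my_table with rowid;

    -- inspect the generated log table; M_ROW$$ holds the ROWID of the changed
    -- row and DMLTYPE$$ the kind of change (I/U/D)
    select m_row$$, dmltype$$ from mlog$_my_table;

    The extraction job would then join those ROWIDs back to MY_TABLE and purge the log rows it has already processed.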

    I suggest you read this entire AskTom thread from a dozen years ago. It addresses ALL of these issues.

    https://asktom.Oracle.com/pls/Apex/f?p=100:11:0:P11_QUESTION_ID:16998677475837

    Then, in your next reply in this thread, give us a summary of where things stand with your question and what further help you expect.

  • The best way to migrate data... opinions please...

    Guys,

    I'm working on a solution for a customer who needs to move data from their existing SAN to a new SAN.

    The existing SAN is attached to the ESX server farm and is very close to the location of the new SAN. In your opinion, what is the best way to move the data without interrupting the VMs in the ESX farm?

    I know we can do SAN replication, but right now I don't know whether the new SAN is even from the same vendor.

    I know we can do Storage vMotion. In that case, the new SAN would somehow have to be presented to the existing ESX servers.

    I know that we can probably use Converter to migrate the virtual machines. That could take some time, because we would be going over the WAN.

    What are your thoughts?

    Thanks in advance for your comments!

    With 4 TB of data, I wouldn't dismiss SAN replication or a WAN-optimisation solution out of hand; it can move quickly even with 4 TB. Whether you use Storage vMotion basically depends on how good your WAN links are - if they are fast, go for it. There is no other free solution beyond that, but if you can somehow manage to take system-state backups of all the virtual machines, you can ship that backup data to the remote location and import/restore it back into the clusters. You have to do the actual calculation: how fast can you transfer data from site A to site B, in Mbps? That will give you a clearer picture of how long it will take, and you can plan from there. I would do it during maintenance or weekend windows that have the least impact on users and systems.
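
    As a rough worked example (the 100 Mbit/s link speed here is only an assumption for illustration): 4 TB is about 32,000,000 Mbit, so 32,000,000 / 100 ≈ 320,000 seconds, or roughly 89 hours of continuous transfer before any protocol overhead. Plugging your real site A -> site B bandwidth into the same calculation gives the planning figure described above.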

    If you found this information useful, please consider awarding points for "Correct" or "Useful". Thank you!!!

    Kind regards

    Stefan Nguyen

    VMware vExpert 2009

    iGeek Systems Inc.

    VMware, Citrix, Microsoft Consultant

  • The best way to pass data when navigating

    Take the case where an action method is called and it chooses to navigate to another page.

    What is the best way to pass data from the old backing bean to the new one (probably during execution of the action method)? I am currently using session-scoped Spring beans as the backing beans.

    Take a look at:

    It goes inside your commandLink or commandButton tag:



  • Best way to store the values of a Map

    Hi all.

    I have a Map<> where I store values (in this case, I store the IDs of rows in a table). I have an application with a tab panel, and I have a Map for each tab. When the user exits the application, I go through the Map and process each ID.

    If I use a normal Map in my bean, then as soon as the user changes tab (or opens a popup or something like that), the Map is instantiated again and all the data is lost. To work around this I used a static Map. Everything is OK, but if two users use the application at the same time they will share the Map's data, and I don't want that.

    So I have a question: what is the best way to keep my Map? I have read some people saying to use a sessionScope bean, others a pageFlow-scoped bean, others page parameters, etc...

    Can you help me?

    P.s: jdev version 11.1.2.4.0

    Well, you can create a managed bean in that scope, or you can put your Map directly into the scope.

    Here you will find several techniques for retrieving a managed bean in Java code: http://www.lkakarla.com/2013/06/retrieving-managed-beans.html

    Dario

  • What is the best way to store my Canon MX922? I'm not going to use it for several months (6 +).

    I won't be using my printer for many months. What is the best way to store it? Leave the ink cartridges in the printer? Take them out? Store the nearly-new cartridges in a plastic bag?

    Hi mmosher,

    If your PIXMA MX922 is not going to be used for an extended period of time, please turn your PIXMA MX922 off using the power button on the unit. This will "cap" the ink tanks, so they will be ready to use when you print with your PIXMA MX922 again. No further action is required.

  • Best way to send data

    Hello

    This is more of a server-side question, but I wanted to see what the BB "best practice" is and how the experts deal with this scenario.

    I intend to use servlets to serve data over HTTP to my BB application. I need to retrieve a list of images plus associated names, descriptions, dates, etc. For sending a picture from a servlet, I know I can just send it as a byte array and the BB can read it from the HTTP input stream. But what about sending the image and its data together? How do we separate the image bytes from the data bytes? Can I send a collection of images and their associated data?

    What is the best way to address the issue?

    Thank you

    T

    I would answer the question by asking "what does your server-side infrastructure look like?"

    If it's .NET, then you will probably want to take the path of least resistance and use JSR 172 or kSOAP.

    If you have more flexibility on the server, I recommend XML or JSON (as Peter says). JSON is going to be more compact, but the BB JSON tools are a bit sparse.

    On the other hand, the SAX parser libraries on the BB are good enough for XML.

    I would say that in every project I've ever worked on, I have worked around the weaknesses of the server infrastructure rather than the reverse.

  • Best way to store Windows backups

    I'm terrified of losing my files in a hard disk crash. I use Mozy online backup. Sometimes it is desperately slow, hanging on a single file for 30 minutes, and it occasionally reports "backup failed."

    Windows Backup looks like a good idea, but of course I understand that if I keep the backup on an external drive in my house, it would be lost along with my computer in the event of fire or theft.

    Do the experts on this forum recommend backing up with Windows Backup, storing it on a USB drive, and just protecting that drive?

    Is storing the backup files in the cloud a realistic option?

    Or are online services such as Mozy and Carbonite the best and simplest solution?

    Thanks for any advice.

    Here's what I do.

    I use the EaseUS Todo freeware to make a system backup to an external drive. A better program than Windows Backup, IMO.

    I also use Clickfree with their external drive.

    You could just drag files and folders to an external drive, but the other options are much better. I don't know which one would be right for you.

    Keep whichever drives you want. I don't worry much about fire or theft, but I do keep the drives in my gun safe.

    I hope that gives an idea.

  • Best way to store all the pixels in the layer as RGBA?

    I am trying to create an image filter plugin for AE. The template provided with the SDK (the "Skeleton" sample project) proposes:

    suites.Iterate8Suite1()->iterate(in_data,
        0,                                // progress base
        linesL,                           // progress final
        &params[SKELETON_INPUT]->u.ld,    // src
        NULL,                             // area - NULL for all pixels
        (void*)&tInfo,                    // refcon - your custom data pointer
        drawTriangles8,                   // pixel function pointer
        output);

    But I want to run my image-filtering algorithm on an int array argument (int[] tableau_donnees) that holds all of the RGBA (0-255) values for each pixel.

    I'm doing this because I have already written the filter in Java and would prefer not to completely change my implementation.

    So, first of all: is this already part of the API, or is there a common way to do this?

    Here is my current solution in the render function:

    static PF_Err Render(...) {

        //...

        for (int i = 0; i < tInfo->width; i++) {
            for (int j = 0; j < tInfo->height; j++) {
                PF_Pixel currentPixel = *getXY(*tInfo->input, i, j);
                int alpha = currentPixel.alpha;
                int red   = currentPixel.red;
                int green = currentPixel.green;
                int blue  = currentPixel.blue;
                // append this pixel's channel values to the flat int vector
                p.push_back(red);
                p.push_back(green);
                p.push_back(blue);
                p.push_back(alpha);
            }
        }
    }

    // address a pixel in a PF_EffectWorld by its x/y coordinates
    static PF_Pixel *getXY(PF_EffectWorld &def, int x, int y) {
        return (PF_Pixel*)((char*)def.data +
                           (y * def.rowbytes) +
                           (x * sizeof(PF_Pixel)));
    }

    Also, while I'm here, what is the best way to debug the plugin using Xcode? I set the path to the executable to point at AE, but I don't know how to debug it while AE is running. I appreciate any help anyone can give me.

    Hi timode! Welcome to the forum!

    If you use the iterate suite as above, you are given the PF_EffectWorld pixels for the input buffer and the output buffer, as well as their x, y coordinates. The iteration won't hand you anything else automatically. You can pass a pointer to any data you like as the refcon, retrieve that pointer inside the iteration callback, and then index into your int data using the x, y parameters.

    To create an int array of any size, use the MemorySuite, lock the memory handle you get onto an int*, and the resulting pointer then behaves like an array. (No push_back, of course - just direct indexing.)

    Neither of the proposed methods is the best-performing option, but I guess that by the time you care about performance you can convert your Java code to work directly on the input pixels...

    As for debugging in Xcode: if you have set the path to the executable, just hit Cmd+R and Xcode will build and launch a debugging session. Do make sure that the build location is the place AE loads the plug-in from... (you can create an alias in AE's plug-ins directory pointing to the build location; AE will follow the alias.)

  • Best way to move data from one page to another within the app

    Hi all - I started to write this out in full and it was way too complicated to explain without writing a novel.

    Brief description: I use a query string that starts in the parent window (from an href in the JSON) and then has to be carried into the iframe URL when moving to other pages, to avoid security problems where the page and the iframe conflict between http and https and become inaccessible... It seems like there must be a better way.

    So my question is: what is the best way, in an open platform, to move data from one page to another?

    The way you are doing it, via the menu.json, does work, but when you work within the iframe and load pages in apps using relative URLs, the variables do carry across.
    Just explore the framework with the browser's developer tools and you can see the construction and scope of all the URLs used.

    The apps are sandboxed - you cannot access anything in the parent frame. This is a deliberate feature, so it is something you cannot do.

    As I said, you have a few methods available for your own data storage.

  • Best way to store text or Word documents

    Hello everyone!

    I'm trying to decide on the best method for storing a bunch of Word documents in an Oracle XE APEX database, and I need advice on the best way to approach this task. I can store these documents in a CLOB or BLOB column, but I would like to be able to search the documents. Should I keep the actual Word version of each document in a BLOB column and a text-only version of the same document in a CLOB? Any other suggestions on how best to deal with this?

    Thank you in advance for your advice and assistance on this issue!

    Charles

    We have an application that uses a CLOB as the text column definition and uses Oracle Text to search the text. The Oracle Text index includes all of the words in the text except for a standard stop list of articles and the like. Text search is very fast and can be used with phrases or single words, and if the "$" (stem) operator is used, all forms of the word can be found. Currently, we use 9i.
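
    A minimal sketch of that kind of setup (the table and column names here are hypothetical):

    -- plain-text version of each document stored in a CLOB
    create table docs (
        doc_id   number primary key,
        doc_text clob
    );

    -- Oracle Text index on the CLOB column
    create index docs_text_idx on docs (doc_text)
        indextype is ctxsys.context;

    -- find any form of the word "store" using the $ (stem) operator
    select doc_id
      from docs
     where contains(doc_text, '$store') > 0;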

  • Best way to migrate data from a datastore to an RDM

    Hi all

    I have a possibly simple question.

    I have some customer SQL clusters that are not configured according to VMware best practices.

    They are running ESX 4.0 U2 with vCenter 5.0 U2.

    Some of them are running both nodes on the same ESX host, some of them have a mixture of RDMs and datastores, and the LUNs are connected with a mixture of multipathing policies.

    I can make the changes to the multipathing policies and separate the nodes without any problem, but I was wondering if anyone has recommendations for moving data that currently lives on a datastore onto an RDM?

    Obviously, if it were possible, I would just disconnect the datastore and re-add it as an RDM - but is that OK to do?

    Are there any hidden catches here?

    If that isn't possible, would I need to create a new disk as an RDM and Storage vMotion the data between the two?

    I don't need to migrate the operating system data, just the data disks, so can I use Storage vMotion for that, or would it be better to run VMware Converter?

    Does anyone have any advice or recommendations?

    Thanks in advance

    Mark


    You can't add the datastore to a server as an RDM, because the datastore is in VMFS format.

    You could use VMware Converter, or restore from backups, to place the data on the RDM.

  • Best way to store class instances (or at least their private data) in a file for later use?

    I've been messing around with this concept for a few hours and have not yet found anything conclusive. I have a system where I want to store instances of a particular class (the private data holds a 2D array, a 1D array, some Boolean values, numbers, and a set of strings) for later use. The goal is to store an array of these classes for customization purposes. I've experimented with the OpenG configuration file palette, especially the "read/write section cluster" VIs, but I get some strange errors when I change the private data around and try to save the same instance. I thought about using the Flatten To XML VI, wiring in the array of instances and then reading it back into an array, but I anticipate similar problems. What other options are there? TDMS? A SQLite database? Should I split everything into basic types and reassemble them at run time after reading them back in? All suggestions/stories are welcome.

    LV classes are compatible with the Flatten To XML and Unflatten From XML functions. Have you tried that approach yet?

    Norbert

  • Adding space to a datastore after DAS RAID expansion

    Hello all,

    I was browsing and found some posts, but never a direct answer to my problem, so I hope someone can at least point my research in the right direction.

    I have a server (a Dell 2950) with a 4-drive RAID 5 array of direct-attached storage. I installed ESXi 4 on it (as stock/default as possible) and have used it for a while. I decided that I needed more space, so I added a 5th disk to the RAID array (via OpenManage). I am now trying to use that space, but I keep hitting dead ends. In the datastore properties, when I click Increase, it doesn't give me an extent to increase with. (See the attached screenshot; ignore the NAS1 datastore, it is not related to this.)

    From my reading of the posts, I think my problem is that the new space in my array is not part of any of my partitions. And since I already have 4 partitions (the stock ESXi config as far as I can tell, because I didn't create any manually), I can't add another partition.

    Here is the complication: I'm not a Linux guy, so I am trying to understand both a) exactly what I need to do and b) what the Linux commands are to do it. I can use SSH to get to a console.

    I think what I have to do is either remove that extended partition #4 (what is that thing anyway?) and then add all of my new space as a 4th partition, or expand that 4th partition to take in all of my new space. And then I think I need to format that partition as VMFS. So, question #1: am I right about what I need to do here? Question #2: can someone point me in the right direction to find the exact commands/syntax to do it?

    The other bad news is that this is a production box. The good news is that I have very good backups from before I start, and working out of hours is possible. I could just back it all up, wipe it out, start over and restore, but that would take my production box down for a few hours (in the middle of the night, but I'd still like to avoid it if possible). I would prefer an approach with much less downtime, and I think there must be a way - surely I can't be the first person ever to want to increase the DAS storage on an ESXi box... Wipe-and-restore is my fallback plan in case I mess up the process...

    Any help is greatly appreciated...

    Jim

    You won't destroy your datastore with this approach. All you are doing is deleting and re-creating the (larger) VMFS partition using fdisk. It won't hurt the data on the VMFS partition (unless you change the partition's starting block).

    André
