Converter Standalone best practices advice needed

I'm new to VMware, and I tried to run Converter on a single PowerEdge 860.

The machine has two 500 GB drives.

60 GB is used on the C: drive.

100 GB is used on the D: drive.

My problem seems to come when I resize the drives. Either the conversion fails, or I get "Error loading operating system" when the converted virtual machine boots.

So I am confused about how best to convert it without ending up with a 1 TB VM on my hands.

Hello

Please check the block size of the datastore (LUN). By default it is formatted with 1 MB blocks, which allows a maximum virtual disk of 256 GB; try another LUN formatted with 2 MB blocks, which raises the limit to 512 GB. (On VMFS3, 4 MB blocks allow 1 TB and 8 MB blocks allow 2 TB.)
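If you want to confirm a datastore's block size from the service console before converting, vmkfstools can report it. A minimal sketch, assuming a datastore named datastore1 (substitute your own):

# Print the VMFS volume properties, including capacity and file block size
vmkfstools -Ph /vmfs/volumes/datastore1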

Tags: VMware

Similar Questions

  • Server 2003 Best Practice

    Looking for best practice advice, or links to some, with regard to CPU and RAM settings.

    I'm creating several OS images from scratch. I had gotten to about my DC when I started to second-guess my config.

    I have a Dell 2900 with a Xeon E5405 (2.0 GHz quad-core) processor and 16 GB of RAM, using 8 x 74 GB SAS disks in a RAID 5 configuration for DataStore1. Datastore2 is earmarked for snapshot storage.

    I'm only going to run 3 or 4 virtual machines on the server to begin with (1 x Server 2003 domain controller, 1 x Server 2003 Terminal Services, and 1 or 2 x XP), with possibly 2 or 3 additional VMs implemented thereafter; those would be Linux-based. It's possible (but unlikely) that I could experiment with Server 2008 later down the road.

    How many virtual processors should I assign to the 2003 servers? 1? 2? 4?

    Should I configure processor affinity so the DC uses CPUs 1/3 and the TS server uses 2/4?

    I was counting on 2 GB of RAM for the domain controller and 3 GB of RAM for the TS and XP boxes. Is this overkill?

    I figure the 20 GB drive for my domain controller will be adequate. It will house NO data, IIS, or printers; it's essentially just an AD box for 50-ish users. Probably 30 GB for the TS and XP VMs.

    Any help appreciated. I'm looking for someone with the experience to tell me "don't do this, because this could happen later."

    -Chris

    Start simple, then tweak as necessary. For the DC, 2 GB of memory is fine. In fact, given the load it will see from 50 users, it could probably do okay on 512 MB, but memory is inexpensive, so there's no reason not to give it the 2 GB. Single processor.

    The TS server is where your load will be, so start with 4 GB of RAM and increase as needed. Create two VMDK files for it: one 20 GB disk for the operating system, and one disk for apps and TS profiles (start the latter at 20 GB; you can grow it later). Start with only one CPU and add processors only if the CPU becomes a bottleneck.

    2 GB of RAM for the XP systems is plenty. 1 CPU each. 30 GB for the disks is good.

    Multiple processors on a virtual machine can hurt performance as often as help it, so do not use them unless you need them.

    Don't bother with processor affinity; it's an unnecessary complication on an ESX implementation that has more than enough power to manage the virtual machines you've specified.

    What not to do: place an Exchange or SQL VM on this ESX host's local datastore. You are running one large RAID 5 LUN, which gives a lot of space but poor performance. That's no problem for the virtual machines you've listed, but Exchange and SQL need high I/O, so those VMs are best placed on RAID 1+0 LUNs, preferably on SAN storage.

  • VMware Converter standalone OR... I'M CONFUSED!  Help, please

    Hello

    I have the VMware Converter plugin installed. What is it for? I can't find any documentation; the only documentation I can find is for the standalone version. Do they work together, or what? The documentation also says to install Converter on the source machine, and I don't understand which Converter they're talking about. I want to do cold cloning of my SQL servers. There's the standalone "VMware-converter - 4.0.1 - 161434.exe" and there's the VMware Converter "VMware - Converter.exe". I used the standalone version for a test hot clone of my Vista machine, and that worked. But I don't understand the cold cloning process. Can someone point me to the documentation?

    My environment is a PE2950, ESXi 4.0 Embedded, and vCenter Server vSphere 4.0. I am cloning all standard W2K8 servers: some SQL, some domain controllers, and the rest file servers, plus one application server.

    Help!

    Thank you

    GEOBrasil

    GEOBrasil,

    I understand the confusion around the different Converter products, and I hope I can help you better understand which version is right for you. The reason for the different versions is the type of conversion you want to do, which depends on your physical operating systems (source) and on what your virtual environment (target) is going to be.

    A couple of warnings and best practices:

    - VMware recommends NOT doing P2V conversions of DCs. It is better to build NEW virtual machines for those.

    - Stop your applications (SQL, Web, etc.) before making a hot clone (Enterprise plug-in or standalone).

    First, for each platform (VI3.x, VI4.x) there are TWO versions: an Enterprise (plugin) version and a standalone version (Windows & Linux).

    -VMware comparison chart gives an overview of the differences (http://www.vmware.com/products/converter/get.html)

    Non-exhaustive summary below:

    Enterprise Edition:

    Pros: vCenter integration and automation; support is included.

    Cons: you have to pay for it (if you already have it, that's not a factor in your assessment). No "multi-step" conversion (i.e. one snapshot per conversion; deltas on a running machine are lost or must be migrated manually). No Linux support. Supports only VI as a target platform. It does, however, include the "cold-clone" CD.

    Stand-alone Edition:

    Pros: Linux support. Supports the VMware desktop products (Workstation, Fusion, etc.) as target platforms. "Multi-step" cloning: an initial snapshot conversion plus subsequent deltas. Service and power control of the source and target for Windows.

    Cons: support is billable per incident. No automation or scheduling.

    Cold conversion:

    Pros: no changes can occur on the source server. Preserves logical partitions. No application services to conflict with on the source machine. No installation on the source machine.

    Cons: requires downtime for the source server. Requires driver support for all devices on the physical server to be present in the boot CD (WinPE).

    Hope this helps

  • Storage best practices or advice

    I'm trying to develop a BlackBerry Java application and am currently looking at local storage. The app will be for public use.

    Does anyone have advice on when it is best to use the SD card vs. the persistent store? Is there a good best practices document or other advice out there somewhere?

    This application will have two types of data: preferences and what would be the data files on a desktop system.

    I've read up on using the persistent store, and it seems to be a good option because of the level of control over the data for synchronization and such. But I've noticed that some OSS BB applications use the SD card, not the persistent store.

    If I'm going to deploy the application to the general public, I know I'll be dealing with many configurations, as well as with limits set by company policy (assuming those users can even install the app). So any advice on navigating these issues around storage would be greatly appreciated.

    Thank you!

    The persistent store is fine for most cases.

    If the transient data is very large, or must be copied to the device via the USB cable, then maybe the SD card should be considered.

    However, many if not most people do not have an SD card.

  • Looking for best practices for upgrading Site Recovery Manager (SRM); can someone summarize the preparatory tasks?

    Hello

    Please check the content below; you may find it useful.

    Please refer to the VMware Site Recovery Manager documentation for more detailed instructions.

    Important

    Check that there are no cleanup operations pending on recovery plans and no configuration problems for the virtual machines that Site Recovery Manager protects.

    1. All recovery plans are in the Ready state.

    2. The protection status of all protection groups is OK.

    3. The protection status of all individual virtual machines in the protection groups is OK.

    4. The recovery status of all protection groups is Ready.

    5. If you configured advanced settings in the existing installation, note those settings before the upgrade.

    6. The local and remote vCenter Server instances must be running when you upgrade Site Recovery Manager.

    7. Upgrade all vCenter Server and Site Recovery Manager components on one site before you upgrade vCenter Server and Site Recovery Manager on the other site.

    8. Download the Site Recovery Manager installer to a folder on the machines on which Site Recovery Manager is to be upgraded.

    9. Make sure no other installations or Windows updates requiring restarts are in progress.

    Procedure:

    1. Log in to the machine on the protected site on which Site Recovery Manager is installed.

    2. Back up the Site Recovery Manager database, using the tools the database software provides.

    3. (Optional) If you are upgrading from Site Recovery Manager 5.0.x, create a 64-bit DSN.

    4. Upgrade the Site Recovery Manager server instance that connects to vCenter Server 5.5.

    If you are upgrading vCenter Server and Site Recovery Manager 4.1.x, you must upgrade the vCenter Server and Site Recovery Manager server instances in the correct sequence before you can upgrade to Site Recovery Manager 5.5:

    a. Upgrade vCenter Server 4.1.x to 5.0.x.

    b. Upgrade Site Recovery Manager 4.1.x to 5.0.x.

    c. Upgrade vCenter Server 5.0.x to 5.5.

    Please let me know if it helped you or not.

    Thank you.

  • Request for advice: generally speaking, what is the best practice for managing a paid and a free application?

    Hi all

    I recently finished my first Cascades app, and now I aspire to build a more feature-rich application that I can sell for a reasonable price. My question, however, is how to manage the code base for both applications. If you have any best practices, I would like to hear your opinion.

    Do you use a revision control system? That should be a prerequisite...

    How different will the two versions of the application be?

    Generally, if you have two versions that differ only in having a handful of features disabled in the free version, you should use exactly the same code base. You could even make the packaging (build command) the only difference, for example by defining an environment variable in one of the builds that is checked at startup to turn on the paid options.

  • Help - best practices or guidelines for adding additional network cards

    Hello...

    I seem to have a problem adding additional network adapters to our ESX 3.5 servers.

    2 x ESX Server 3.5, up to date on patches, with HA enabled.

    Each server already contains 6 NIC ports (3 x dual-port NICs).

    I am adding 4 more NIC ports (2 x dual-port NICs).

    I had a bad experience adding network cards before: I lost the service console and overall connectivity (the NICs were renumbered).

    Somehow I fixed that by unlinking the old numbers and re-linking with the newly assigned ones.

    Are there best practices or procedures for this?

    Can anyone point me in the right direction: a URL, a blog, or even a simple command line to refresh my memory...?

    Thank you.

    It's easy.

    Remove the old NIC from the vSwitch:

    esxcfg-vswitch -U vmnic2 vSwitch0

    Add the new one:

    esxcfg-vswitch -L vmnic1 vSwitch0
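    To verify the change, you can list the vSwitch uplinks and the NICs the host sees; a quick sketch (vmnic and vSwitch numbering varies per host):

    # List all vSwitches with their uplinks and port groups
    esxcfg-vswitch -l

    # List the physical NICs the host has detected
    esxcfg-nics -l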

    Duncan

    VMware communities user moderator | VCP | VCDX

    -

  • Just upgraded; tips on best practices for sharing files on a Server 2008 Std

    The domain contains about 15 machines with two domain controllers, one of which handles data, app files, print, etc. I just upgraded from 2003 to 2008 and want advice on best practices for setting up file sharing. Basically I want each user to have their own folder, but also a shared staff folder. Since I'm usually accustomed to using Windows Explorer, I would like to know the best way to set these shares up. Also, I noticed 2008 has a Contacts feature; how can it be used? I would like to message or email users the locations of their files. Also, I want to set up a lower-level admin to handle the shares without letting them go too deep into the server; I'm not sure how.

    I've read a certain amount, but I don't like testing live anymore because it can cause problems. So basically: a short and neat way to manage shares using the MMC, plus how I should approach mailing users the locations of their shares from the server. Maybe also what kind of access control or permissions are suitable for documents, and how I can have users work from Office templates without changing the template's format.

    THX

    g

    Hello 996vtwin,

    Thank you for visiting the Microsoft Answers site. The question you have posted is related to Windows Server and would be better suited to the Windows Server TechNet community. Please visit the link below to find a community that can support what you are asking:

    http://social.technet.Microsoft.com/forums/en-us/category/WindowsServer

    Hope this helps.

    Adam
    Microsoft Answers Support Engineer
    Visit our Microsoft answers feedback Forum and let us know what you think

  • Dimension design best practices

    Hello

    I'm about to start a new project!

    Do you have ideas on best practices for defining dimensions? A presentation or conference paper would help.

    I ask this question because it's kind of a mix between an art and a science.

    And the current metadata provided seems to have redundancy in its GL segments.

    I don't want to go and map each segment to its own dimension; I think that would be counterproductive.

    Thank you for your comments

    Regards

    You may be able to get some advice from the technical point of view by searching online or via the database administrator's guide.

    If you want to get this from the functional point of view, you will need professional help. The design will depend entirely on what are the needs of your business.

    The only thing I can say is that Essbase and Planning are analytical-type applications, so you shouldn't try to bring in the level of detail that lives in the transactional system.

    Kind regards

    Sunil

  • Canvas campaign best practices

    Hello, my dear modern marketers. I wonder if you might be able to help me out. I'm hosting a webinar this week on campaign canvas best practices, and even though I have a decent amount of material for it, I'm sure there are a few other tricks, tips, and best practices you've discovered that I haven't taken into account. This webinar is for my marketing managers here at Intel Security, but I want to keep it generic enough that it applies to all B2B marketers who create campaigns in Eloqua. That way, I can share it on Topliners for all to view and take advantage of.

    So, what advice, tips, and best practices do you have to share?

    Tagging Eytan Abrahams, freejung, Michael Seto-Oracle, Kristin Farwell-Oracle, mcalnan, Mike McKinnon, schwartzrw, dliloia, hwhitehead, hdurante, and jennifer.igartua to get us started.

    P. S.

    PRIZES!  Everyone who responds with a good tip or best practice will win Prize #1 and be entered in a drawing for the Grand Prize!

    Notes:

    Prize #1 = a super hug

    Grand Prize = a super-cool Oracle Cloud Marketing Cross pen!

    Sterling Bailey-Oracle - huh, my answers were not useful? I got there first, so I am ready to schedule that hug.

  • VMware vCenter Converter Standalone 5.5.0 build-1362012 - unknown internal error trying to P2V Windows Server 2003 SP2

    I have a Windows Server 2003 SP2 Standard box that fails to convert using vCenter Converter Standalone 5.5.0 build-1362012. After I submit the task in the wizard's GUI, I get "A general error has occurred: unknown internal error". A virtual machine is created and then deleted a few moments later.

    Create the virtual machine (192.168.1.201) - Completed
    Search entity by UUID - Completed
    Search entity by UUID - Completed
    Remote disk open read-write - Completed
    Search entity by UUID - Completed
    Remote disk open read-write - Completed
    Search entity by UUID - Completed
    Delete the entity (SRV-cspnsn.CSPNSN.local) - Completed

    The destination is VMware ESXi 5.5.0 build-1623387, managed by vSphere Client 5.5.0 build-1618071. The destination datastore is a VMFS 5.60 volume with a 4 MB block size, on a local 923.50 GB SATA HDD.

    Both machines are on the same network, with no routing, firewall, or switch ACLs between them.

    In the log file, the only error is: [Ufa.HTTPService] Failed to read request; stream: <io_obj p:0x03403f24, h:-1, <pipe "\\.\pipe\vmware-converter-server-soap">, <pipe "\\.\pipe\vmware-converter-server-soap">>, error: class Vmacore::TimeoutException (Operation timed out).

    Any help would be appreciated.

    Thanks in advance.

    Hi all

    Since Converter VMware-converter-en-5.5.1-1682692 did not work either, on a friend's tip I uninstalled the 5.5 version of the Converter Standalone Client, found VMware Converter version 4.0.0-146302, and installed that version on a different machine: not the machine to be converted, but a Windows 7 Pro 64-bit box.

    After that, I ran the 4.0 Converter Standalone Client and everything worked fine.

    Now I have the P2V converted, and the machine is as good as the physical one was!

    I thank all of you.

    Best regards

    ADAJio

  • AWM - cubes: best practices

    All,

    I'm working on AWM cube development and need help with some best practices on the design of these cubes.

    Thank you

    40 dimensions sounds like too many; that is more than OLAP will handle well.

    Often, two things are mistaken for dimensions:
    (1) an attribute of a dimension
    (2) a hierarchy of a dimension

    Make sure that you understand when to add an attribute to a dimension AND when to add a hierarchy to a dimension, instead of doing these two things as separate dimensions.
    A dimension can have several attributes and hierarchies.

    Start with your reporting requirements and then determine what stored attributes, hierarchies, dimensions, cubes, and measures are needed to support those reports. Do NOT STORE what you can calculate using calculated measures; all OLAP engines are very efficient calculation engines.

    Another point to keep in mind: you can (and should) create several stored cubes, each with lower dimensionality. Then you can create a reporting cube that is dimensioned by all dimensions and "join" the data of all the stored cubes into that reporting cube using calculated measures. So the reporting cube holds NO stored measures, ONLY calculated measures.

    Partition your stored cubes on the time dimension. Start with MONTH-level partitioning; you can always change it later. NOTE that this has nothing to do with the relational database Partitioning option. OLAP cube partitioning takes one logical cube and creates several cubes "behind the scenes" (you will not see them in the AWM GUI) so that several CPUs can load data into the cube at the same time.

    Initially, create all your stored cubes as compressed cubes and set all dimensions to "sparse".

    While OLAP and OBIEE can handle parent-child hierarchies, I have found that OBIEE works best with level-based ones. So if there are any parent-child hierarchies, convert them to "ragged level-based" (NOT balanced level-based). Avoid skip-level hierarchies, as OBIEE generates too much SQL (behind the scenes) when you query skip-level hierarchies.

  • Best practices for managing path policies

    Hello

    I'm getting conflicting advice on best practices for path policies.

    We are on ESXi 4.0, connecting to an HP EVA8000. HP's best practices guide recommends setting the path selection policy to Round Robin.

    This seems to give two active paths to the optimized controller. See: http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA1-2185ENW.pdf

    We've engaged certain consultants, and they say the VMware best practice for this solution is to use the MRU policy, which results in a single path to the optimized controller.

    So, any idea which recommendation is really best practice? Does it make a difference?

    TIA

    Rob.

    Always go with the storage vendor's recommendation. VMware's recommendation is based on generic array characteristics (controller type, ALUA capability, failover methods, etc.), while the storage vendor's recommendation is based on their own performance and compatibility testing. You may want to review their recommendations carefully, however, to ensure that each point is what you want.

    With the 8000, I ran with Round Robin. It is the most robust multipathing option available to you from a failover and performance point of view, and it can provide more even performance across the ports on the storage controller.

    While I haven't done specific tests or validation, the last time I looked at the docs HP's recommended configuration was to switch ports on every IO. This adds load on the ESX host for the switching between ports, but HP claims their tests showed it to be the optimal configuration. It was the only parameter in their recommendation I wondered about.

    If you haven't done so already, be sure to download the HP doc on configuring ESX with EVA arrays. There are several parameters you must configure besides the path policy, as well as a few scripts to help make the changes.
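    For reference, on ESX/ESXi 4.x the path policy can also be set per device from the command line. A hedged sketch (the naa identifier is a placeholder for your EVA LUN, and the IOPS=1 setting is only what the HP guide suggests; verify both against the doc):

    # Set the path selection policy for one device to Round Robin
    esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR

    # Optionally switch paths after every I/O, per the HP EVA recommendation
    esxcli nmp roundrobin setconfig --device naa.xxxxxxxxxxxxxxxx --type iops --iops 1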

    Happy virtualizing!

    JP

    Please consider awarding points to useful or appropriate responses.

  • BEST PRACTICES FOR PL/SQL

    Hi all
    I'm looking for best practice advice for creating C# applications that have Oracle as the backend.
    I've noticed that the insert, update, and delete procedures are not dynamic at all; they are static:

    Insertion procedure:
     create or replace procedure insertion (param1 in varchar2 default '',
                                            param2 in number default 0,
                                            param3 in float default 0,
                                            table_to_insert in varchar2,
                                            some_error out varchar2) is
     begin
       -- one hard-coded branch per target table
       if table_to_insert = 'table1_spec' then
         insert into table1 (val1, val2, val3) values (param1, param2, param3);
       elsif (etc.) then
         .
         .
         .
       end if;
     end;
     /
    The update and delete procedures behave the same way. I wonder whether using a cursor to return values more dynamically, and using native dynamic SQL, could help; the developers and I need something more dynamic.
    Kind regards

    Published by: user650358 on June 9, 2011 08:37

    user650358 wrote:

    I'm looking for advice on best practices to create c# applications that have Oracle as backend.

    What John said. (Even if I would have been more blunt and used the expression "silly approach" several times.) ;-)

    The most flexible approach is to abstract the complexities of SQL, the relational database, and its physical implementation away from the C# developer's code.

    There's no need to know the database design, the joins, the table structures, the SQL, and so on. Instead, the abstraction layer takes care of that, where this layer is a suite of PL/SQL packages of procedures and functions.

    Just as the C# developer would use the Win32 API for creating threads, or WinSock for sockets, he now uses the PL/SQL abstraction layer in a similar way.

    You want to add an invoice? No need to know the table name, the table structure, or the SQL; the C# code simply calls the CreateInvoice() PL/SQL procedure.

    That procedure performs the validation, the business rules and logic, and the SQL.

    What's more, the underlying data model can change, new business rules can be introduced, and so on, while the C# call to CreateInvoice() remains the same and the C# code is unaffected by these backend changes.

    Oh yes: this does not mean using PL/SQL code to fetch row data into PL/SQL variables and then push that data in turn to the C# client code.

    It means that a procedure or function like GetCustomerInvoices() returns a ref cursor for the caller to consume. (The SQL that creates the cursor is abstracted, not the cursor itself, because a cursor is the only proper way to access SQL row data.)
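    As a minimal sketch of that idea (the customer_invoices table and its columns are made up for illustration), such a function could look like this:

     create or replace function GetCustomerInvoices (p_customer_id in number)
       return sys_refcursor
     is
       c_invoices sys_refcursor;
     begin
       -- the SQL lives here, behind the abstraction layer;
       -- the caller only consumes the ref cursor
       open c_invoices for
         select invoice_id, invoice_date, amount
         from   customer_invoices
         where  customer_id = p_customer_id;
       return c_invoices;
     end;
     /

    The C# side then simply executes the function and reads the returned cursor like any data reader, with no knowledge of the underlying tables.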

  • 1 RoboHelp project, 2 users - best practices

    Hello

    I have been the only writer at our company for 6+ years. I finally have someone to help me; however, this introduces a new challenge. The *two* of us are going to work on the same RoboHelp project, and we will use VSS for source control. I don't know how we "share" this single RoboHelp project.

    Does anyone have best practices for working in this kind of situation?

    I have questions such as:

    • When the other person creates the Index keywords, what happens if I removed files - how will this affect the addition of keywords?
    • When the other person creates index keywords, what happens if I have removed files? How will that affect the addition of keywords?
    • When the other person creates new snippets or new user-defined variables, should she check them in immediately and let me know, so that I can do a "get latest" and have the new snippets/variables in my project?
    • How do we manage the two of us working on the same project, given the need to check files out and in, create new topics, etc.? What should our "workflow" be?
    Thanks in advance for ANY help/advice any of you can provide!

    I like the careful author's rule: keep things simple and robust. This topic covers the three basic methods of sharing help authoring tasks. In order of complexity:

    1. Serial authoring. If you do not need both authors in the project at the same time, you can just take turns working on it, moving the files back and forth as necessary. This is the simplest and most robust approach.

    2. Merged projects. If you need simultaneous authoring, then yes, this is a simpler and more robust approach than source control. However, it works only if you can partition your material and your work assignments into two or more clearly demarcated parts. Merges can be a great solution, but they don't fit all cases.

    3. Source control. If several authors need simultaneous access to the same material, then source control is the simplest answer.

    Here are a few tips and observations, based on my experience with RoboSource Control, in no particular order:

    1. Source control works best on small to medium-sized projects. Larger ones may be unstable.

    2. Set things up to restrict file checkouts to one author at a time. Allowing two authors to work simultaneously on a single topic is bad.

    3. If possible, try to work in areas of the project that do not overlap. Remember that a single change to a topic can ripple through many related topics. (For example, if you change a topic's file name, all links to that topic must be changed.) If someone else has one of those topics checked out, you will not be able to complete your initial change.

    4. Back up your projects regularly, even if they are in source control.

    5. Create an administrator account to use just for that purpose. Do not use this account for ordinary authoring, and do not give everyone administrator privileges.

    6. Appoint one person as administrator, with at least one backup administrator. These are the people who will set up user accounts, override checkouts ("I need this file, and Joe's on vacation!"), resurrect old files, resolve source control conflicts, etc.

    7. Check files in as soon as you are finished with them. Don't leave them checked out any longer than necessary.

    8. If you have large projects, your virus scanner can really degrade performance during certain operations, such as the initial "get" of the project files. If this is the case, you may be able to configure your antivirus program to be more tolerant of these source control activities.

    9. The help authors must stay in close communication. Let the other know what you are doing, especially if you do something drastic like moving folders. Be prepared to check something in if someone else needs it.

    10. Give a lot of thought to the structure of your project. Examine the folder structure, naming conventions, etc.

    11. Some actions are more source-control intensive than others. (Moving, deleting, or renaming folders are biggies.) Your project is vulnerable while these changes are underway: if something goes wrong before the process completes, you can end up with a mess on your hands. For example, say there is a network problem while you move a folder, interrupting your connection to source control. You may find yourself with RoboHelp thinking the folder is in one place while source control has it in another. The result is broken links and missing files, and it's time for the administrator to step in and fix things. This is almost never a problem for small projects; it becomes a real one for large projects.

    12. If you are near a deadline, DO NOT choose that time to reorganize and rename files and folders.

    13. Follow the appropriate procedure for adding a project to source control. Doing it badly can really mess you up. It is easy to add a project to RoboSource Control; I can't speak for other source control solutions.

    14. You may need to rebuild your .cpd file more often than with non-source-controlled projects.

    15. Have I mentioned lately that you must back up your source files?

    HTH,

    G
