Cold conversion steps

Dear team,

I intend to do a P2V of a server, and I had planned to do a cold conversion of it. I need your help on how to perform the cold conversion successfully.

Regards

Mr. VMware

The term P2V means physical-to-virtual conversion, i.e. conversion of a physical machine (or of a powered-on machine) to a virtual one.

The term V2V means virtual-to-virtual conversion, i.e. conversion of a powered-off virtual machine from one place/platform to another.

The term cold conversion means a conversion done by booting the source machine from the Converter BootCD; unfortunately, that CD has been outdated for a long time.

So what is your actual question?

Tags: VMware

Similar Questions

  • JMS to B2B message conversion steps

    Hello

    In the logs:

    2008.12.21 at 22:59:10:170: Thread-16: (DEBUG) the number of messages to be processed by JMSMonitor = 1
    2008.12.21 at 22:59:10:171: Thread-16: JMSMonitor.convertJMSMessageToTransport() (DEBUG): message conversion...
    2008.12.21 at 22:59:11:919: Thread-16: JMSMonitor.run() (DEBUG): JMSMonitor goes to sleep
    for 0 seconds...
    2008.12.21 at 22:59:11:921: Thread-16: JMSMonitor.run() (DEBUG): wake up.


    Please can you confirm that the following steps occur during this stage, assuming a single JMS listener: 2008.12.21 at 22:59:10:171: Thread-16: JMSMonitor.convertJMSMessageToTransport() (DEBUG): message conversion...

    A message is taken from the queue and converted to the B2B format.
    Does the message conversion internally include the following steps?
    1. Convert the message from JMS format to B2B format
    2. B2B processes this request message
    3. B2B forwards the request to the trading partner and gets an ack
    4. The TP returns a response
    5. B2B sends back an ack
    Only after this last step is the next message processed.

    Therefore, if a TP response takes time to arrive, B2B cannot pick up the next message from the JMS queue (in the case of a single listener).

    Please can you confirm this?

    Thanking you in advance,
    Best regards,
    Suhas.

    No, the steps are the following:

    1. Convert the JMS message to B2B format
    2. Process and send the message to the remote trading partner
    3. Once the HTTP request/send is completed, go back and pick up the next message

    (A minimal sketch of this single-listener behaviour follows at the end of this reply.)

    HTH.

    Kind regards
    Sinkar
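
    For illustration, here is a rough sketch of that single-listener loop. The class and helper names are hypothetical, not Oracle B2B's actual internals; it simply shows why a slow trading-partner response delays the next dequeue:

    // Illustrative sketch only: a single synchronous JMS listener cannot pick up
    // the next message until the current convert/send cycle completes.
    import javax.jms.*;

    public class SingleListenerMonitor {

        private final QueueConnectionFactory factory;
        private final Queue inboundQueue;

        public SingleListenerMonitor(QueueConnectionFactory factory, Queue inboundQueue) {
            this.factory = factory;
            this.inboundQueue = inboundQueue;
        }

        public void run() throws JMSException {
            QueueConnection connection = factory.createQueueConnection();
            QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueReceiver receiver = session.createReceiver(inboundQueue);
            connection.start();

            while (true) {
                // 1. Take one message from the queue (blocks until one arrives).
                TextMessage jmsMessage = (TextMessage) receiver.receive();

                // 2. Convert the JMS message to the B2B wire format (hypothetical helper).
                String b2bPayload = convertJmsToB2B(jmsMessage.getText());

                // 3. Send to the remote trading partner and wait for the HTTP send to
                //    finish. Until this returns, the loop cannot go back to receive(),
                //    so a slow partner delays the next message with only one listener.
                sendToTradingPartner(b2bPayload);
            }
        }

        private String convertJmsToB2B(String text) { return text; /* placeholder */ }

        private void sendToTradingPartner(String payload) { /* blocking HTTP send, placeholder */ }
    }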

  • Cold conversion / cloning

    Is there a boot .iso for doing an offline/cold P2V for vSphere?  I installed vCenter Server and the standalone Converter, but I do not see an .iso to burn to a CD.  Please provide specific information on where to find this .iso.  I've seen many references to it for previous versions, but I could not find the .iso to download.

    check http://downloads.vmware.com/d/details/vc40/ZHcqYmRkJSVidGVw

    VMware vCenter Converter BootCD
    File size: 97 MB
    File type: .zip
    MD5SUM: f6320c6d9e4472e259b15c520f36f4bd
    SHA1SUM: ff895709f17b6cda6d10369a9794e3f837dfd164
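
    If you want to check the downloaded zip against the sums above before burning the CD, a small sketch like the following works (the file name is just an example for whatever you saved the download as):

    // Sketch: compute the MD5 and SHA-1 of the downloaded BootCD zip and compare
    // the output with the MD5SUM/SHA1SUM values published above.
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.MessageDigest;

    public class VerifyBootCd {
        public static void main(String[] args) throws Exception {
            String file = args.length > 0 ? args[0] : "converter-bootcd.zip"; // example name
            for (String algorithm : new String[] {"MD5", "SHA-1"}) {
                MessageDigest digest = MessageDigest.getInstance(algorithm);
                try (InputStream in = Files.newInputStream(Paths.get(file))) {
                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        digest.update(buffer, 0, read);
                    }
                }
                StringBuilder hex = new StringBuilder();
                for (byte b : digest.digest()) {
                    hex.append(String.format("%02x", b));
                }
                System.out.println(algorithm + "  " + hex); // must match the listed sums
            }
        }
    }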
    
  • Message from Network Access Protection missing conversion steps

    I have Network Access Protection working; however, the remediation window is empty.

    I can reach the HRA via ping and HTTP. I have the GPO configuration for my HRA server and the GPO configuration for the client side.

    Any help would be appreciated.

    Hello

    The question you posted would be better suited to the TechNet community. Please post the query using the link.
    Hope this information helps.

  • Question: With the new stand-alone converter how do you do a cold P2V? (booting from CD)

    With the new stand-alone converter how do you do a cold P2V? (booting from CD)

    Well, I see that cold conversions are no longer supported.

  • Conversion of Active Directory

    Hello

    We run 3 x VMware ESX 4 hosts, vCenter Server, and Converter.

    We also have 2 Active Directory domain controllers, currently physical servers outside the virtual machine environment.

    I'm looking to use the Converter tool to convert one of the DCs to a virtual machine - and then leave the other as a physical machine.

    Does anyone know if I will have trouble with the DCs going out of sync, or any other terrible things happening, if I try this?  Are there best-practice procedures for this kind of thing - or things to avoid?

    Any help would be appreciated.

    Thank you

    Steve

    Recommended method is to build a new domain controller instead of doing a P2V.

    But if you do a P2V of the existing one, be sure to use a cold conversion (using the Converter boot ISO).

    André

  • BITMAP CONVERSION TO ROWIDS and IFFS

    Hello

    A few days ago I observed the strange behavior of a query that counted the number of distinct values of a column in a single partition of a large table. There is a local bitmap index on the column in question, and I assumed that the query would not need to access the table, since the information is in the index. Indeed, the query performed an index fast full scan and no table access, but it took more time than I expected (more than 50 seconds in the given case). The SQL monitoring from dbms_sqltune showed me that most of the work was associated with a BITMAP CONVERSION TO ROWIDS step and a following HASH GROUP BY. Then I changed the query a little and got much better performance without the BITMAP CONVERSION TO ROWIDS and with a HASH UNIQUE operation instead - with the same cost (and the same amount of LIOs). Here's a small test case on 11.2.0.1 showing my observations (on my Windows PC; I've seen the effect in a Linux 11.1.0.7 data warehouse):
    drop table t_partitioned;
    
    create table t_partitioned (
        id  number
      , startdate date
      , col1 number
      , padding varchar2(50)
    )
    partition by range(startdate)
    (
      partition t_partitioned_20120522 values less than (to_date('23.05.2012', 'dd.mm.yyyy'))
    , partition t_partitioned_20120523 values less than (to_date('24.05.2012', 'dd.mm.yyyy'))
    , partition t_partitioned_maxvalue values less than (maxvalue)
    );
    
    insert into t_partitioned
    select rownum id
         , to_date('22.05.2012', 'dd.mm.yyyy') - 1 + trunc(rownum/500000) startdate
         , mod(rownum, 4) col1
         , lpad('*', 50, '*') padding
      from dual
    connect by level <= 1000000;
    
    -- executed several times until the table contained 64M rows, i.e. 32M for each day partition.
    insert into t_partitioned
    select * from t_partitioned;
    
    commit;
    
    create bitmap index t_partitioned_col1_start_bix on t_partitioned(col1, startdate) local;
    
    exec dbms_stats.gather_table_stats(user, 't_partitioned')
    And here are the queries and dbms_sqltune.report_sql_monitor information:
    -- slow query
    select /*+ monitor test 1 */ 
           count(distinct col1)
      from t_partitioned 
     where startdate = to_date('22.05.2012', 'dd.mm.yyyy');
    
    -- fast query
    select /*+ monitor test 2 */
           count(*)
      from (select distinct col1
              from t_partitioned
             where startdate = to_date('22.05.2012', 'dd.mm.yyyy'));
    
    -- Test 1
    SQL Monitoring Report
    
    SQL Text
    ------------------------------
    select /*+ monitor test 1 */ count(distinct col1) from t_partitioned where startdate = to_date('22.05.2012', 'dd.mm.yyyy')
    
    Global Information
    ------------------------------
     Status              :  DONE (ALL ROWS)
     Instance ID         :  1
     Session             :  DBADMIN (67:178)
     SQL ID              :  3mxmz941azrgx
     SQL Execution ID    :  16777216
     Execution Started   :  05/22/2012 21:48:43
     First Refresh Time  :  05/22/2012 21:48:43
     Last Refresh Time   :  05/22/2012 21:48:47
     Duration            :  4s
     Module/Action       :  SQL*Plus/-
     Service             :  testdb
     Program             :  sqlplus.exe
     Fetch Calls         :  1
    
    Global Stats
    ================================================================
    | Elapsed |   Cpu   |    IO    | Fetch | Buffer | Read | Read  |
    | Time(s) | Time(s) | Waits(s) | Calls |  Gets  | Reqs | Bytes |
    ================================================================
    |    4.09 |    3.54 |     0.55 |     1 |   5868 |   62 |  46MB |
    ================================================================
    
    SQL Plan Monitoring Details (Plan Hash Value=3761143426)
    ==================================================================================================================================================================================================
    | Id |             Operation              |             Name             |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Read | Read  |  Mem  | Activity |      Activity Detail       |
    |    |                                    |                              | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes | (Max) |   (%)    |        (# samples)         |
    ==================================================================================================================================================================================================
    |  0 | SELECT STATEMENT                   |                              |         |      |         1 |     +4 |     1 |        1 |      |       |       |          |                            |
    |  1 |   SORT AGGREGATE                   |                              |       1 |      |         1 |     +4 |     1 |        1 |      |       |       |          |                            |
    |  2 |    VIEW                            | VW_DAG_0                     |    2602 | 5361 |         1 |     +4 |     1 |        4 |      |       |       |          |                            |
    |  3 |     HASH GROUP BY                  |                              |    2602 | 5361 |         4 |     +1 |     1 |        4 |      |       |  840K |    50.00 | Cpu (2)                    |
    |  4 |      PARTITION RANGE SINGLE        |                              |    2602 | 5360 |         3 |     +2 |     1 |      32M |      |       |       |          |                            |
    |  5 |       BITMAP CONVERSION TO ROWIDS  |                              |    2602 | 5360 |         3 |     +2 |     1 |      32M |      |       |       |    25.00 | Cpu (1)                    |
    |  6 |        BITMAP INDEX FAST FULL SCAN | T_PARTITIONED_COL1_START_BIX |         |      |         3 |     +2 |     1 |     5836 |   62 |  46MB |       |    25.00 | db file scattered read (1) |
    ==================================================================================================================================================================================================
    
    -- Test 2
    SQL Monitoring Report
    
    SQL Text
    ------------------------------
    select /*+ monitor test 2 */ count(*) from (select distinct col1 from t_partitioned where startdate = to_date('22.05.2012', 'dd.mm.yyyy'))
    
    Global Information
    ------------------------------
     Status              :  DONE (ALL ROWS)
     Instance ID         :  1
     Session             :  DBADMIN (67:178)
     SQL ID              :  512z4htdq43cn
     SQL Execution ID    :  16777216
     Execution Started   :  05/22/2012 21:48:49
     First Refresh Time  :  05/22/2012 21:48:49
     Last Refresh Time   :  05/22/2012 21:48:49
     Duration            :  .019299s
     Module/Action       :  SQL*Plus/-
     Service             :  testdb
     Program             :  sqlplus.exe
     Fetch Calls         :  1
    
    Global Stats
    =================================================
    | Elapsed |   Cpu   |  Other   | Fetch | Buffer |
    | Time(s) | Time(s) | Waits(s) | Calls |  Gets  |
    =================================================
    |    0.02 |    0.02 |     0.00 |     1 |   5868 |
    =================================================
    
    SQL Plan Monitoring Details (Plan Hash Value=4286208786)
    =======================================================================================================================================================================
    | Id |             Operation             |             Name             |  Rows   | Cost |   Time    | Start  | Execs |   Rows   |  Mem  | Activity | Activity Detail |
    |    |                                   |                              | (Estim) |      | Active(s) | Active |       | (Actual) | (Max) |   (%)    |   (# samples)   |
    =======================================================================================================================================================================
    |  0 | SELECT STATEMENT                  |                              |         |      |         1 |     +0 |     1 |        1 |       |          |                 |
    |  1 |   SORT AGGREGATE                  |                              |       1 |      |         1 |     +0 |     1 |        1 |       |          |                 |
    |  2 |    VIEW                           |                              |    2602 | 5361 |         1 |     +0 |     1 |        4 |       |          |                 |
    |  3 |     HASH UNIQUE                   |                              |    2602 | 5361 |         1 |     +0 |     1 |        4 |  840K |          |                 |
    |  4 |      PARTITION RANGE SINGLE       |                              |    2602 | 5360 |         1 |     +0 |     1 |     5836 |       |          |                 |
    |  5 |       BITMAP INDEX FAST FULL SCAN | T_PARTITIONED_COL1_START_BIX |    2602 | 5360 |         1 |     +0 |     1 |     5836 |       |          |                 |
    =======================================================================================================================================================================
    The difference in this little test is 4.09 sec vs 0.02 sec. It seems the slow execution has to unpack the bitmaps in order to count the distinct values - but I do not understand why that is necessary, given that counting distinct values does not require the ROWIDs (and indeed the conversion seems to be avoidable, judging from the fast plan). And to my surprise it is the simpler query that is slower.

    I also did a CBO trace (10053) but I don't see a good reason for the difference in behavior there:
    -- fast execution
    **************************
    Query transformations (QT)
    **************************
    CBQT: Validity checks passed for 8kj3uttv3nt4j.
    CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
    
    ...
    
    Final query after transformations:******* UNPARSED QUERY IS *******
    SELECT COUNT(*) "COUNT(*)" FROM  (SELECT DISTINCT "T_PARTITIONED"."COL1" "COL1" FROM "DBADMIN"."T_PARTITIONED" "T_PARTITIONED" WHERE "T_PARTITIONED"."STARTDATE"=TO_DATE(' 2012-05-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) "from$_subquery$_001"
    
    
    -- slow execution
    **************************
    Query transformations (QT)
    **************************
    JF: Checking validity of join factorization for query block SEL$1 (#0)
    JF: Bypassed: not a UNION or UNION-ALL query block.
    ST: not valid since star transformation parameter is FALSE
    TE: Checking validity of table expansion for query block SEL$1 (#0)
    TE: Bypassed: No relevant table found.
    CBQT bypassed for query block SEL$1 (#0): no complex view, sub-queries or UNION (ALL) queries.
    CBQT: Validity checks failed for d0471pgnw27nc.
    CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
    
    ...
    
    Final query after transformations:******* UNPARSED QUERY IS *******
    SELECT COUNT("VW_DAG_0"."ITEM_1") "COUNT(DISTINCTCOL1)" FROM  (SELECT "T_PARTITIONED"."COL1" "ITEM_1" FROM "DBADMIN"."T_PARTITIONED" "T_PARTITIONED" WHERE "T_PARTITIONED"."STARTDATE"=TO_DATE(' 2012-05-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss') GROUP BY "T_PARTITIONED"."COL1") "VW_DAG_0"
    So the slow execution considers even more possible transformations - but obviously not the one that fits.

    The question is: am I missing a semantic difference between the two queries, and is there a good reason why the CBO does not do the transformation from the slow version to the fast one?

    Regards

    Martin Preiss

    mpreiss wrote:

     ==================================================================================================================================================================================================
    | Id |             Operation              |             Name             |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Read | Read  |  Mem  | Activity |      Activity Detail       |
    |    |                                    |                              | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes | (Max) |   (%)    |        (# samples)         |
    ==================================================================================================================================================================================================
    |  0 | SELECT STATEMENT                   |                              |         |      |         1 |     +4 |     1 |        1 |      |       |       |          |                            |
    |  1 |   SORT AGGREGATE                   |                              |       1 |      |         1 |     +4 |     1 |        1 |      |       |       |          |                            |
    |  2 |    VIEW                            | VW_DAG_0                     |    2602 | 5361 |         1 |     +4 |     1 |        4 |      |       |       |          |                            |
    |  3 |     HASH GROUP BY                  |                              |    2602 | 5361 |         4 |     +1 |     1 |        4 |      |       |  840K |    50.00 | Cpu (2)                    |
    |  4 |      PARTITION RANGE SINGLE        |                              |    2602 | 5360 |         3 |     +2 |     1 |      32M |      |       |       |          |                            |
    |  5 |       BITMAP CONVERSION TO ROWIDS  |                              |    2602 | 5360 |         3 |     +2 |     1 |      32M |      |       |       |    25.00 | Cpu (1)                    |
    |  6 |        BITMAP INDEX FAST FULL SCAN | T_PARTITIONED_COL1_START_BIX |         |      |         3 |     +2 |     1 |     5836 |   62 |  46MB |       |    25.00 | db file scattered read (1) |
    ==================================================================================================================================================================================================
    

    Martin,

    I think it's just one of those things where the optimizer code is not uniform across all cases. I first met this anomaly a few years ago with a query containing an IN subquery:

    select
         dim.*
    from     dim
    where
         id in (
              select
                   distinct status
              from
                   area_sales
         )
    ;
    

    The subquery is unnested into an inline view, but it is not converted to a semi-join, so you see the same effect. The query is a little odd (it comes from a customer's business model): it lists the rows in a dimension table (with a PK) that appear in the fact table (with a bitmap index).

    ---------------------------------------------------------------------------------------------------------------------------------
    
    | Id  | Operation                       | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
    ---------------------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                |          |      1 |        |      6 |00:00:00.01 |      29 |       |       |          |
    |*  1 |  HASH JOIN                      |          |      1 |      6 |      6 |00:00:00.01 |      29 |  1594K|  1594K|  865K (0)|
    |   2 |   VIEW                          | VW_NSO_1 |      1 |      6 |      6 |00:00:00.01 |      21 |       |       |          |
    |   3 |    HASH UNIQUE                  |          |      1 |      6 |      6 |00:00:00.01 |      21 |  1595K|  1595K|  715K (0)|
    |   4 |     BITMAP CONVERSION TO ROWIDS |          |      1 |    100K|    100K|00:00:00.01 |      21 |       |       |          |
    |   5 |      BITMAP INDEX FAST FULL SCAN| AS_BI    |      1 |        |     30 |00:00:00.01 |      21 |       |       |          |
    |   6 |   TABLE ACCESS FULL             | DIM      |      1 |      6 |      6 |00:00:00.01 |       8 |       |       |          |
    ---------------------------------------------------------------------------------------------------------------------------------
    

    The conversion does not occur if you rewrite the query with manual unnesting:

    select
         dim.*
    from     (
         select
              distinct status
         from
              area_sales
         )     sttv,
         dim
    where
         dim.id = sttv.status
    ;
    

    The strange thing is that the 10053 trace shows that this manual rewrite is (apparently) exactly the form that ends up being optimized well.
    In recent versions of Oracle the plans vary quite a lot - I have seen half a dozen different plans for the same query across three different versions of Oracle.

    (In fact you can see something similar with the costs of many queries involving subqueries - the cost of the automatically unnested query often seems to be slightly higher than the cost of the manually unnested one, even when the automatic and manual plans are otherwise the same.)

    Presumably this has nothing to do with NULL values, by the way, because bitmap indexes do contain entries for NULL values.

    Regards
    Jonathan Lewis

  • Windows 2003 fails to start after conversion of a virtual hard disk with vSphere Standalone Converter 4.0.1

    Hello

    I've recently upgraded my host to vSphere 4. I've also upgraded to vSphere Converter 4.0.1. However, when I convert a Microsoft Windows 2003 virtual hard drive to vSphere, the resulting virtual machine does not start. It gets just past the BIOS screen, then the screen turns black with only a white line in the upper left corner of the console.

    I tried both hot and cold conversions. I tried changing the various default conversion settings. I tried removing the source machine and recreating it.

    It worked well under ESX3.5, so I wonder what I'm doing wrong on vSphere.

    Any help appreciated.

    Thank you.

    In v4, disks are created in SCSI format only.

    Probably the source disks were in IDE format... so in the converted v7 virtual machine the SCSI disks end up not configured in the boot loader.

    André

  • VMware Converter standalone OR... I'M CONFUSED!  Help, please

    Hello

    I have the VMware Converter plugin installed.  What is it for?  I can't find any documentation for it; the only documentation I can find is for the stand-alone version.  Do they work together or what?  The documentation also says to install the converter on the source machine, and I don't understand which converter they're talking about.  I want to do a cold clone of my SQL servers.  There is the standalone "VMware-converter-4.0.1-161434.exe" and there is the VMware converter "VMware-Converter.exe".  I used the standalone version for a test hot clone of my Vista machine and it worked fine.  But I do not understand the cold cloning process.  Can someone point me to the documentation?

    My environment is PE2950, ESXi 4.0 Embedded, vCenter Server vSphere 4.0.  I am cloning all standard W2K8 servers: some SQL servers, domain controllers, and the rest file servers.  One application server.

    Help!

    Thank you

    GEOBrasil

    GEOBrasil,

    I understand the confusion around the different Converter products.  I hope I can help you better understand which version is right for you.  The reason for the different versions is the type of conversion you want to do, which depends on the physical operating systems (source) and what your virtual environment (target) is going to be.

    A couple of warnings / best practices:

    - VMware recommends NOT doing a P2V conversion of a DC.  It is better to build NEW virtual machines for this.

    - Stop your applications (SQL, Web, etc.) before making a hot clone (Enterprise plug-in or standalone).

    First, for each platform (VI3.x, VI4.x) there are TWO versions: an Enterprise (plugin) version and a standalone version (Windows & Linux applications).

    - The VMware comparison chart gives an overview of the differences (http://www.vmware.com/products/converter/get.html)

    Non-exhaustive summary below:

    Enterprise Edition:

    Pros: vCenter integration and automation; support is included.

    Cons: you have to pay for it (you already have it, so this is not a factor in your evaluation).  No "multi-step" conversion (i.e. it converts 1 snapshot; deltas on a running machine are lost or must be migrated manually).  No Linux support.  Only supports VI as a target platform.  Includes the "cold-clone" CD.

    Stand-alone Edition:

    Pros: Linux support.  Supports the VMware desktop products (Workstation, Fusion, etc.) as target platforms.  "Multi-step" cloning, i.e. an initial snapshot conversion plus deltas.  Service and power control of source and target for Windows.

    Cons: billable per-incident support.  No automation or scheduling.

    Cold conversion:

    Pros: no changes can occur on the source server.  Preserves logical partitions.  No application or service conflicts on the source machine.  No installation on the source machine.

    Cons: requires downtime for the source server.  Requires driver support for all of the physical server's devices to be present in the BootCD (WinPE).

    Hope this helps

  • How to import MPEG-4 files in Windows Movie Maker in Windows Vista?

    Windows Media Player 11 recognizes and plays MPEG-4 great but WMM does not recognize them.  I would like to import some MPEG-4 files in Movie Maker and work with them - is there a codec or something that allows this?

    I'm sure I can find a software to convert to MP4 format that mm will recognize, but if possible, I would just skip the conversion step and work with MP4 directly in MM.

    Thank you!

    Windows Media Player 11 recognizes and plays MPEG-4 great but WMM does not recognize them.  I would like to import some MPEG-4 files in Movie Maker and work with them - is there a codec or something that allows this?

    I'm sure I can find a software to convert to MP4 format that mm will recognize, but if possible, I would just skip the conversion step and work with MP4 directly in MM.

    Thank you!

    ====================================================
    Movie Maker is not compatible with MP4 files...
    the best would be to convert it to .wmv.

    The following free program may be worth a try:

    Format Factory
    http://www.pcfreetime.com/

    All to MP4 / 3GP / MPG / AVI / WMV / FLV / SWF.
    All to MP3 / WMA / AMR / OGG / AAC / WAV.
    All to JPG / BMP / PNG / TIF / ICO / GIF / TGA.

    John Inzer - MS - MVP - digital media experience

  • Convert catalog from PSE 4 to PSE 13

    I have PSE 4 on my PC and have now installed PSE 13 as a trial version. I wanted to convert the catalog using "Manage Catalogs". I do not get any catalogs in the displayed list. When I use "Search for more catalogs" and point to the path of "Mein Katalog.psa" from PSE 4, still no catalog appears in the list. Is PSE 4 no longer upward compatible with PSE 13? The operating system is W7 Professional.

    johannesk32270875 wrote:

    I have PSE 4 on my PC and have now installed PSE 13 as a trial. I wanted to convert the catalog with "Manage Catalogs". I do not receive any catalogs in the displayed list. If I give the path to "Mein Katalog.psa" from PSE 4 with "Search for more catalogs", still no catalog appears in the list. Is PSE 4 no longer upward compatible with PSE 13? The operating system is W7 Professional.

    (Translated with Microsoft® Translator)

    To convert catalogs from Elements 5 or earlier to Elements 13 on a 64-bit computer, you will need an additional conversion step:

    Convert Organizer catalogs for 64-bit versions | Photoshop Elements 13 or later

  • A lot of problems after June 2015 Muse updates

    I updated Muse today. I wish I hadn't. Now every time I try to publish a web site, Muse crashes. And whenever I restart Muse, it spends a long time working on "Updating user libraries". This library update takes an eternity, 25 minutes!

    I need help as soon as possible, please. I need to downgrade to the previous version so I can get back on schedule with a project I'm working on. How can I get the old version?

    Help please.

    To avoid having to convert libraries every time, the next time you start Muse, quit it once immediately after the library conversion finishes. You only need to do this once, and future launches will be quick because the libraries will not be updated again until the next version of Muse. The reason the libraries keep converting at launch is that there is a bug that causes Muse to convert the libraries again if you never exit Muse without crashing in the same session in which the libraries were converted. Sorry if that's confusing. Bottom line: if you see the message that libraries are being converted, quit Muse afterwards, and subsequent launches will no longer go through the conversion step (even if you later experience a crash).

  • Connecting the server (5.0.1) to the older agent (4.3/4.0.1)

    Hello

    To streamline a set of conversions, we deployed the 4.3 agent to several servers. As a result of an unforeseen problem with another server, I had to upgrade the conversion software to the latest version (5.0.1). Now whenever I convert a server it tells me that the older agent is not compatible and must be upgraded.

    The problem is that I have a few Windows 2000 servers to convert, and the target is a vSphere 4.1 environment. Apparently Converter 4.0.1, the last version to support Windows 2000, does not support the new vSphere 4.1 environment. Is there a way to trick the server into connecting to the older agent? Has anyone tried this before?

    Thank you

    Richard

    You cannot connect the Converter server to a different version of the agent; it is hardcoded to check product versions.

    Can you try to select a vSphere 4.1 destination in Converter 4.0.1? I have not tried it, but it should theoretically be possible.

    A clumsy way is to do a 2-step conversion: first convert with the old converter to vSphere 4.0 (or to a hosted format), and then, using the newest converter, convert to vSphere 4.1.

    Kind regards

    Plamen

  • Upgrade EBS 11.5.9.2 to R12.1.3

    Hi, we are planning the upgrade of the EBS/RDBMS for one of our customers. The current environment is:

    A single node, 11.5.9.2/9.2.0.8 on Solaris 8 (64-bit capable, although the RDBMS binaries are 32-bit)

    The plan is to upgrade first to R12.1.1 and then apply the R12.1.3 RUP. For the first iteration, the main steps would be:

    1. Clone the environment to a new server running Solaris 10 (64-bit).
    2. Run the preparatory steps.
    3. Upgrade the RDBMS to 10.2.0.4 (64-bit), including the OLAP conversion steps.
    4. Complete the R12.1.1 upgrade.
    5. Apply the R12.1.3 RUP.
    6. Run the post-upgrade steps (applying patches, etc.).

    We would like to upgrade the RDBMS to 11gR2, but we do not know if our schedule will allow us to do so.

    So, for now, the target environment would be: a single node 10.2.0.4/R12.1.3 in Solaris 10 (64-bit).

    Now, what worries me is that, from what I've seen on MOS, R12.1.3 is NOT certified with RDBMS 10.2.0.4 on Solaris SPARC 64-bit. Intriguing, since R12.1.1 IS certified with this same RDBMS/OS combination. I have not read anywhere that 10.2.0.4 is not supported for R12.1.3 (at least the README file does not say so).

    Questions (just to be sure):
    - Would it be possible, during the R12 downtime, to upgrade the RDBMS first to 10.2.0.4, then to 11g, and then perform the R12 upgrade all the way to R12.1.3?
    - I've read that 10.2.0.5 is NOT certified with 11.5.9.x, but since this is an R12 upgrade, could we move to that version instead of RDBMS 10.2.0.4? 10.2.0.5 IS certified with both R12.1.1 and R12.1.3 on Solaris SPARC 64-bit.

    If you have any suggestions/comments on our upgrade steps, please share them with me.

    Thank you!

    Hiroshi Komatsu

    Published by: user11971612 on January 25, 2013 08:49

    user11971612 wrote:
    Hi Hussein, any ideas on why 12.1.3/10.2.0.4/Solaris 10 is not a certified combination, even though 12.1.1/10.2.0.4/Solaris 10 is? I can't find any information on the subject.

    I have no idea why - it's something only Oracle Support can answer (or they may not know either and will just confirm that it is not certified).

    Another thing: this customer has no ATG family packs applied. Do you think applying 11i ATG RUP 7 would have a positive impact on the upgrade?

    It does not matter. I have gone from 11i (on ATG RUP 6) to R12 with no problems.

    Thank you
    Hussein

  • Migrate virtual machines from free ESXi 3.5 to vSphere 4.1 without vCenter

    Hi all

    I have VMs on 2 ESXi 3.5 hosts with direct-attached storage (no SAN) that I want to migrate to a new ESXi server licensed under vSphere 4.1 Essentials. I intend to use the stand-alone Converter to migrate virtual machines from 3.5 to 4.1, then upgrade the other two hosts to 4.1, and then move some virtual machines back using the Converter.

    I have web servers that I don't want to bring down for more than 10 to 20 minutes (not the 1 to 2 hours that a cold conversion with Converter would take), so I'm trying to use the P2V feature on the running web-server virtual machines.

    Has anyone done this with no SAN and no vCenter? If so, how did you do it?

    Is it possible to migrate without losing the network adapter configuration inside the virtual machine?

    Can I just do something like take a snapshot of the running system, copy the files to the new host, add the VM to the inventory, upgrade VMware Tools, and upgrade the virtual hardware?

    Thank you!

    Is it possible to migrate without losing the network adapter configuration inside the virtual machine?

    After you copy the files and re-add the new virtual machine to the inventory, you must take care with the question you are asked at first power-on (whether the VM was moved or copied).

    Choose "I moved it" to keep the same VM UUID and virtual MAC address.

    André
