List of jobs run in parallel

Hello
I use DBMS_FILE_TRANSFER.PUT_FILE as part of my transportable tablespace (TTS) shell script, copying from
my intermediate DWH instance to my DWH reporting instance.

The problem is that it copies the data files one after another.
There are about 20 data files, and each of them is 10 GB in size.
I would like to copy the files in parallel rather than one after another.

I thought of dynamically creating 20 scheduler jobs (based on the number of data files) and running them all together.
This way the copies would actually run in parallel.
Below is the relevant part of my TTS shell script.
The FOR loop generates the list of data files that I need to copy,
and DBMS_FILE_TRANSFER.PUT_FILE copies them one after another.

Can you suggest how I can change the PL/SQL block below to create
and submit a job for each data file in parallel?

Thank you.
sqlplus -s "sys/${SourceSysPass}@${SOURCE_ORACLE_SID}  as sysdba" << EOF 
whenever sqlerror exit 1
declare
   v_link varchar2(30);
begin
  select db_link
  into   v_link
  from   dba_db_links 
  where  db_link like '%TTS%';

  for x in (  select  fname, ltrim(rtrim(substr(fname,1,instr(fname,'.')-1)))||'_'||rownum nf_name
              from   (select substr(file_name,instr(file_name,'/',-1)+1) fname 
                      from   dba_data_files
                      where  tablespace_name in ('${TableSpacesList}') 
                      order by file_id)
           ) 
  loop
       DBMS_FILE_TRANSFER.PUT_FILE('source_tts',
                                   x.fname,
                                   'target_tts',
                                   x.nf_name,
                                   v_link);
  end loop;


exception
when others then
    raise;
end;
/
EOF
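The loop above can be turned into one DBMS_SCHEDULER job per file - a sketch only, not tested against any instance; the TTS_COPY_ job-name prefix is invented, and generated names must stay within Oracle's 30-character identifier limit:

```sql
declare
  v_link varchar2(30);
begin
  select db_link into v_link
  from   dba_db_links
  where  db_link like '%TTS%';

  for x in ( select fname,
                    ltrim(rtrim(substr(fname, 1, instr(fname, '.') - 1))) || '_' || rownum nf_name
             from   (select substr(file_name, instr(file_name, '/', -1) + 1) fname
                     from   dba_data_files
                     where  tablespace_name in ('${TableSpacesList}')
                     order by file_id) )
  loop
    -- One autonomous job per file; enabled jobs start immediately and run
    -- concurrently, up to the scheduler limits of the instance.
    dbms_scheduler.create_job(
      job_name   => 'TTS_COPY_' || x.nf_name,   -- hypothetical naming scheme
      job_type   => 'PLSQL_BLOCK',
      job_action => 'begin dbms_file_transfer.put_file(' ||
                    '''source_tts'', ''' || x.fname   || ''', ' ||
                    '''target_tts'', ''' || x.nf_name || ''', ' ||
                    '''' || v_link || '''); end;',
      enabled    => true,
      auto_drop  => true);
  end loop;
end;
/
```

The calling shell script can then poll DBA_SCHEDULER_JOBS until no TTS_COPY_% jobs remain before continuing with the plug-in step.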

Hello

I'm happy that it worked. Don't worry, in this case, this setting is not necessary

Kind regards
Christian Balz

Tags: Database

Similar Questions

  • Using DAC for running non-BIApps Informatica jobs and 2 EPs running in parallel

    Hello

    We already have a BI Apps prod environment configured using DAC, Informatica and OBIEE 11g for one of our customers.

    Now, we want to check the possibility of using DAC for the execution of non-BIApps Informatica jobs.
    (We have only a weekly DAC execution plan on weekends, and Informatica and DAC are inactive most of the time during the week.)

    The customer wants a separate new small datamart configured to meet the reporting requirements of different departments; it has no kinship or any link with the existing BI Applications Data Warehouse.

    I just wanted to check whether it would violate the license terms (if we use DAC for non-BI Apps workflows and run another EP)?

    In addition, is DAC Build 10.1.3.4.1 capable of running two parallel execution plans?

    We heard long back that a two-parallel-EP feature would be launched in the DAC 11g version. Any pointers or news in this space?

    Thanks in advance,

    From what I remember, you cannot load a 'distinct' DB instance that is NOT OBIA. If you create a small custom datamart INSIDE the existing OBIA schema, then it is acceptable. However, if you use DAC (no matter whether it is one plan or two plans) to load a non-OBIA target, this may violate the license agreement. You would need a self-contained separate license for Informatica and use the Informatica scheduler tool. If you want to use DAC, ensure that your target is inside the OBIA DW.

    Please mark correct...

  • How can I get a list of jobs that are scheduled for future days

    How can I get a list of jobs that are scheduled for future days?

    In a previous post, I found a query that lists the scheduled jobs that have already run. Can I get a similar query for jobs scheduled on future days?

    Hi sreedevir,

    SELECT jobmst.jobmst_prntname, jobmst.jobmst_name, jobrun.*
    FROM jobrun JOIN jobmst ON jobmst.jobmst_id = jobrun.jobmst_id -- join the jobmst and jobrun tables
    WHERE jobrun_proddt >= dateadd(dd, 1, datediff(dd, 0, getdate())) -- future dates
    AND jobmst.jobmst_type = 2 -- jobs only (not groups)
    ORDER BY 1, 2 -- sort by parent name, then by job name

    Feel free to make any changes for your reporting needs.

    Regards,

    Derrick
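    The WHERE clause above relies on the classic T-SQL midnight-floor idiom; a short sketch of how the pieces evaluate (assuming SQL Server's implicit int-to-datetime conversion, where 0 is 1900-01-01):

    ```sql
    -- datediff(dd, 0, getdate())  -> whole days from 1900-01-01 to now (an int)
    -- dateadd(dd, 1, <that int>)  -> the int converts back to datetime, plus
    --                                one day, i.e. tomorrow at 00:00:00
    select dateadd(dd, 1, datediff(dd, 0, getdate())) as tomorrow_midnight;
    -- so jobrun_proddt >= tomorrow_midnight keeps only future production dates
    ```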

  • Can't run Invoke-VMScript in parallel

    Hi all!

    I created a script that converts templates to VMs, then powers on the converted VMs, then runs Invoke-VMScript. After that my script restarts and stops these VMs, and converts the VMs back to templates.


    All of these operations I run in PowerShell's parallel mode (workflows).


    But when my script tries to run Invoke-VMScript in parallel mode, it freezes on this operation. I see only 'Inline Script running'.


    But if I open one of the virtual machines, I see my local script run on the VM and complete.

    In the VM events, I see the same: the Invoke-VMScript command completed.


    How to solve this problem? What I've done wrong?


    Thanks in advance!


    My script:


    function Load-PowerCLI
    {
        Add-PSSnapin VMware.VimAutomation.Core
        Add-PSSnapin VMware.VimAutomation.Vds
    }

    Load-PowerCLI

    # Connect to vCenter

    $vcenter = "vcenter.domain.local"

    function Connect-To-Vcenter
    {
        Connect-VIServer -Server $vcenter
    }

    Connect-To-Vcenter

    function Unload-PowerCLI
    {
        Remove-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue
        Remove-PSSnapin VMware.VimAutomation.Vds -ErrorAction SilentlyContinue
    }

    # Get the template list

    function Get-FolderFromPath
    {
        param(
            [String]$Path
        )
        $chunks = $Path.Split('\')
        $root = Get-View -VIObject (Get-Folder -Name $chunks[0])
        if (-not $?) { return }
        $chunks[1..$chunks.Count] | % {
            $chunk = $_
            $child = $root.ChildEntity | ? { $_.Type -eq "Folder" } | ? { (Get-Folder -Id ("{0}-{1}" -f $_.Type, $_.Value)).Name -eq $chunk }
            if ($child -eq $null) { throw "Folder '$chunk' not found" }
            $root = Get-View -VIObject (Get-Folder -Id ("{0}-{1}" -f $child.Type, $child.Value))
            if (-not $?) { return }
        }
        return (Get-Folder -Id ("{0}-{1}" -f $root.MoRef.Type, $root.MoRef.Value))
    }

    $Templateslist = (Get-FolderFromPath -Path 'DC\Templates\Windows' | Get-Template | ? { $_.Name -eq 'TEST' }).Name

    $Templateslist

    # Convert templates to virtual machines

    workflow Convert-Templates-To-VM
    {
        param(
            [string[]]$templates,
            [string]$vcenter,
            [string]$session,
            [string]$user,
            [string]$pass
        )
        foreach -parallel ($template in $templates)
        {
            InlineScript {
                Add-PSSnapin VMware.VimAutomation.Core
                Connect-VIServer -Server $Using:vcenter -Session $Using:session
                Set-Template -Template $Using:template -ToVM
                Remove-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue
            }
        }
    }

    Convert-Templates-To-VM -templates $Templateslist -vcenter $vcenter -session $global:DefaultVIServer.SessionSecret

    # Power on VMs

    workflow PowerOn-VMs
    {
        param(
            [string[]]$templates,
            [string]$vcenter,
            [string]$session,
            [string]$user,
            [string]$pass
        )
        foreach -parallel ($vm in $templates)
        {
            InlineScript {
                Add-PSSnapin VMware.VimAutomation.Core
                Connect-VIServer -Server $Using:vcenter -Session $Using:session
                Start-VM -VM $Using:vm | Wait-Tools
                Remove-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue
            }
        }
    }

    PowerOn-VMs -templates $Templateslist -vcenter $vcenter -session $global:DefaultVIServer.SessionSecret

    # Wait 1 minute

    sleep 60

    # Run the update script

    workflow Run-Update
    {
        param(
            [string[]]$templates,
            [string]$vcenter,
            [string]$session,
            [string]$script,
            [string]$guestuser,
            [string]$guestpass
        )
        foreach -parallel ($vm in $templates)
        {
            InlineScript {
                Add-PSSnapin VMware.VimAutomation.Core
                Connect-VIServer -Server $Using:vcenter -Session $Using:session
                Invoke-VMScript -ScriptText $Using:script -VM $Using:vm -Server $Using:vcenter -GuestUser $Using:guestuser -GuestPassword $Using:guestpass
                Remove-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue
            }
        }
    }

    $script = "c:\update.ps1"

    $guestuser = "administrator"

    $guestpass = "myPASS"

    Run-Update -templates $Templateslist -vcenter $vcenter -session $global:DefaultVIServer.SessionSecret -script $script -guestuser $guestuser -guestpass $guestpass

    # Restart virtual machines

    workflow Restart-VMs
    {
        param(
            [string[]]$templates,
            [string]$vcenter,
            [string]$session,
            [string]$user,
            [string]$pass
        )
        foreach -parallel ($vm in $templates)
        {
            InlineScript {
                Add-PSSnapin VMware.VimAutomation.Core
                Connect-VIServer -Server $Using:vcenter -Session $Using:session
                Restart-VMGuest -VM $Using:vm | Wait-Tools
                Remove-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue
            }
        }
    }

    Restart-VMs -templates $Templateslist -vcenter $vcenter -session $global:DefaultVIServer.SessionSecret

    # Stop VMs

    workflow Stop-VMs
    {
        param(
            [string[]]$templates,
            [string]$vcenter,
            [string]$session,
            [string]$user,
            [string]$pass
        )
        foreach -parallel ($vm in $templates)
        {
            InlineScript {
                Add-PSSnapin VMware.VimAutomation.Core
                Connect-VIServer -Server $Using:vcenter -Session $Using:session
                Stop-VMGuest -VM $Using:vm -Confirm:$false
                Remove-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue
            }
        }
    }

    Stop-VMs -templates $Templateslist -vcenter $vcenter -session $global:DefaultVIServer.SessionSecret

    sleep 120

    # Convert virtual machines back to templates

    workflow Convert-VM-To-Templates
    {
        param(
            [string[]]$templates,
            [string]$vcenter,
            [string]$session,
            [string]$user,
            [string]$pass
        )
        foreach -parallel ($template in $templates)
        {
            InlineScript {
                Add-PSSnapin VMware.VimAutomation.Core
                Connect-VIServer -Server $Using:vcenter -Session $Using:session
                Set-VM -VM $Using:template -ToTemplate -Confirm:$false
                Remove-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue
            }
        }
    }

    Convert-VM-To-Templates -templates $Templateslist -vcenter $vcenter -session $global:DefaultVIServer.SessionSecret

    Unload-PowerCLI

    The problem is in this part:

    workflow Run-Update
    {
        param(
            [string[]]$templates,
            [string]$vcenter,
            [string]$session,
            [string]$script,
            [string]$guestuser,
            [string]$guestpass
        )
        foreach -parallel ($vm in $templates)
        {
            InlineScript {
                Add-PSSnapin VMware.VimAutomation.Core
                Connect-VIServer -Server $Using:vcenter -Session $Using:session
                Invoke-VMScript -ScriptText $Using:script -VM $Using:vm -Server $Using:vcenter -GuestUser $Using:guestuser -GuestPassword $Using:guestpass
                Remove-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue
            }
        }
    }

    $script = "c:\update.ps1"

    $guestuser = "administrator"

    $guestpass = "myPASS"

    Run-Update -templates $Templateslist -vcenter $vcenter -session $global:DefaultVIServer.SessionSecret -script $script -guestuser $guestuser -guestpass $guestpass

    I found the solution!

    I just added this line - $WarningPreference = "SilentlyContinue" - to the InlineScript.

    Like this:

    workflow Run-Update
    {
        param(
            [string[]]$templates,
            [string]$vcenter,
            [string]$session,
            [string]$script,
            [string]$guestuser,
            [string]$guestpass
        )
        foreach -parallel ($vm in $templates)
        {
            InlineScript {
                $WarningPreference = "SilentlyContinue"
                Add-PSSnapin VMware.VimAutomation.Core
                Connect-VIServer -Server $Using:vcenter -Session $Using:session
                Invoke-VMScript -ScriptText $Using:script -VM $Using:vm -Server $Using:vcenter -GuestUser $Using:guestuser -GuestPassword $Using:guestpass
                Remove-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue
            }
        }
    }

    $script = "c:\update.ps1"

    $guestuser = "administrator"

    $guestpass = "myPASS"

    Run-Update -templates $Templateslist -vcenter $vcenter -session $global:DefaultVIServer.SessionSecret -script $script -guestuser $guestuser -guestpass $guestpass

  • Have Windows 7 running on Parallels Desktop on a Mac. Get "setup.exe is not a valid Win32 application" when trying to download a program with Windows Explorer. I can download from these sites with Vista and XP on other computers.

    I have Windows 7 running on Parallels Desktop on a Mac. I get "setup.exe is not a valid Win32 application" when trying to download a program with Windows Explorer. I can download from these sites with Vista and XP on other computers. Now I can't download the programs that are supposed to solve the problem, including FoxFire!

    Try to download from this site:

  • Need to pass arguments to a sub-VI which runs in parallel to the main VI

    Please help me.

    I can pass arguments to a subVI using the Call By Reference node, but then the sub-VI will not run in parallel.  I can also run my sub-VI in parallel using the Invoke node, but then I cannot pass arguments.  I can't work out how to merge these two concepts.

    When I open the VI reference, I specify the strict type, for use with the Call By Reference node.  When I use the Invoke node, I don't specify the type.  It seems that specifying the strict type keeps VIs from running in parallel, but it also seems necessary for passing arguments.

    I apologize for my absence of a deeper understanding of this and appreciate any help you can give me.

    Chris

    You need to use the 'Ctrl Val.Set' invoke node.

  • Prerequisites for reentrant SubVIs to run in parallel

    Hello!

    In my VI, I have two preallocated clones of a subVI, which I thought would each run in a separate thread. This sub-VI just contains a reference to a double and increments it. The two clones ran in parallel, but not in separate threads. For comparison, I made a subVI which does not hold a reference - those clones each run in their own thread.

    I noticed this watching the CPU usage: in the first case only a single core was used; in the second, two threads were used.

    (1) I wonder what the prerequisites are for a subVI to effectively run not only in parallel, but actually in separate threads?

    (2) Is there a way to discover at compile time which parts of a VI run in parallel, and which subVIs get their own thread of execution?

    For reference: the execution is slow. If I understand correctly, the GUI runs in its own thread, so it should not interfere with the subVIs that increment the references. I think there is also no locking on reference reads, and writes to a reference should be very fast. Is this correct? If so, why is the execution slow?

    Thanks for your replies

    Marco

    (The test computer has a Quad-Core with Hyperthreading, using LabVIEW 2012SP1)

    One thing I know of is if you have any manipulation of UI elements.

    In that case you are using a reference to a UI element and probably using a property node to insert data.
    This limits LabVIEW to running your sub-VI in the UI thread.

    If you instead transfer the values over a queue to a VI that manages all the user interface controls, then only that VI will be in the UI thread.

    Does that help?

  • Upgrade to ESXi 6 - running 5.5 Enterprise Plus versions in parallel

    Hello together,

    We want to upgrade our 5.5 Enterprise Plus environment to 6.0 Enterprise Plus.

    Can we run 5.5 Enterprise Plus ESXi hosts in parallel with 6.0 Enterprise Plus hosts,

    or

    can we add the Enterprise Plus license keys to the existing 5.5 hosts without any configuration problem, and then

    upgrade the ESXi hosts to 6.0?

    Thank you

    There is no problem, but I recommend keeping this scenario only for the duration of the migration.

  • Concurrent program is not running in parallel

    Hello

    There is a custom concurrent program that sometimes must run in sequence and sometimes should be able to run in parallel. The program was initially defined as incompatible with itself, and it used to run only in sequence, as planned; but when the incompatibility with itself was deleted or disabled, it still runs in sequence and does not run in parallel. What could be the reason?

    As a temporary solution, I tried removing the concurrent program and recreating it, defined without any incompatibility, and it runs in parallel. But that won't help - the incompatibility could be toggled on/off often enough, and recreating the concurrent program every time is not a good idea.

    Thanks in advance.

    Kind regards
    RAM

    There is a custom concurrent program that sometimes must run in sequence and sometimes should be able to run in parallel. The program was initially defined as incompatible with itself, and it used to run only in sequence, as planned; but when the incompatibility with itself was deleted or disabled, it still runs in sequence and does not run in parallel. What could be the reason?

    Was the CM (Concurrent Manager) bounced after doing the above?

    As a temporary solution, I tried removing the concurrent program and recreating it, defined without any incompatibility, and it runs in parallel. But that won't help - the incompatibility could be toggled on/off often enough, and recreating the concurrent program every time is not a good idea.

    Whenever you toggle incompatibilities, please make sure that you bounce the CM.

    Establishing Incompatibility Rules for Custom Reports [ID 107224.1]

    Thank you
    Hussein

  • Is the default 10g Auto Gather Stats Job NECESSARY or NOT?

    Hello
    our database is 10gR2
    the operating system is Windows


    In my 10g database, I already have a regular statistics job running which collects schema-level statistics. There is only 1 application schema.


    Is it necessary to keep the default Oracle 10g Auto Gather Stats Job permanently enabled?


    If two jobs are running (the default 10g auto gather stats job running in the 22:00-06:00 maintenance window, and our manual schema stats gathering job running every day from 7 to 11), then which job's statistics would be used by the CBO?

    Should I deactivate the default stats job?

    Thank you...

    Probably the Oracle-supplied job is "smarter" than whatever mechanism you provided.
    I would disable your own job.
    The Oracle-supplied job only collects stale statistics, outside office hours.
    Your job gathers (all? / stale?) statistics during office hours. Your job may fail due to ORA-00054.
    Collecting segment statistics during office hours is not desirable. Collecting system statistics during office hours may be desirable.

    ----------
    Sybrand Bakker
    Senior Oracle DBA
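    Following that advice would mean disabling the custom job and leaving the supplied one alone - a sketch, with MY_SCHEMA_STATS_JOB standing in for whatever the custom job is actually called:

    ```sql
    begin
      -- MY_SCHEMA_STATS_JOB is a placeholder for your own schema-stats job
      dbms_scheduler.disable('MY_SCHEMA_STATS_JOB');
    end;
    /

    -- Verify the 10g-supplied automatic job is still enabled:
    select job_name, enabled
    from   dba_scheduler_jobs
    where  job_name = 'GATHER_STATS_JOB';
    ```

    If the custom job was created with DBMS_JOB rather than DBMS_SCHEDULER, marking it broken with dbms_job.broken(job#, true) achieves the same effect.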

  • Running in parallel to the interfaces

    Hello. In an ODI package, I can place my interfaces and join each two of them with two lines: 'ok' (successful) and 'ko' (unsuccessful). That gives me a sequential execution of these interfaces. How can I make ODI run them in parallel?

    To do this, create a scenario for each of the interfaces (right-click on the interface, Generate Scenario) and drag the scenario onto the package, not the interface. This gives you a scenario-execution tool step, which you set to execute asynchronously. Run each of the interfaces and then use an OdiWaitForChildSession tool to wait for the completion of the child sessions. If only some of the tasks that you run asynchronously are on the critical path, you can use keywords when you start the executions and in the wait tool. For those on the critical path, give a keyword CP. The wait tool then waits on the keyword CP.

  • Unable to disable a running job

    Dear all,

    I created a job using the code below:

    begin
      dbms_scheduler.create_job (
        job_name   => 'create_dir_test2',
        job_type   => 'executable',
        job_action => 'c:\winnt\system32\cmd.exe /c mkdir c:\test\test1',
        enabled    => true,
        auto_drop  => true
      );
    end;
    /

    The job was created and its state showed that it is running, but now I want to disable this job and I am not able to.

    I tried running the following code, but it just hangs and doesn't work:

    begin
      dbms_scheduler.disable('CREATE_DIR_TEST2', true);
    end;
    /

    begin
      dbms_scheduler.stop_job('CREATE_DIR_TEST2', true);
    end;
    /

    any help will be appreciated

    Kind regards
    Kashif Ali

    Hello

    You may need to restart the database (or instance) on which the job runs. Restarting the Oracle Job Scheduler Windows service may also work.

    Once you have done this, see this post on rewriting your job (stdout/stderr redirection) and passing arguments as separate argument values.

    Guide to the external work on with dbms_scheduler 10g for example scripts, batch files

    Hope this helps,
    Ravi.
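    For completeness, the usual escalation when a scheduler job will not stop - sketched with the job name from the post above; note that force => true requires the MANAGE SCHEDULER privilege:

    ```sql
    begin
      dbms_scheduler.stop_job(job_name => 'CREATE_DIR_TEST2', force => true);
    end;
    /

    -- If the external slave process is truly wedged, drop the job;
    -- force => true stops it first if it is still marked as running.
    begin
      dbms_scheduler.drop_job(job_name => 'CREATE_DIR_TEST2', force => true);
    end;
    /
    ```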

  • Forced parallel DML not running in parallel

    Hello

    I work with Oracle RDBMS 12cR1.

    I am currently experimenting with how I can influence parallel DML and parallel DDL (for example ALTER statements).

    For example, I have the following settings:

    parallel_degree_level = 100

    parallel_degree_limit = CPU

    parallel_degree_policy = MANUAL

    These are the default values.

    At this point, that means I should be able to influence the optimizer through hints.

    There are no statistics on the table I want to insert data into.

    I started with:

    ALTER session force parallel dml parallel 4;

    Then I have an insert of the form:

    insert into t (column list)
    select (column list, generated using the dbms_random package)
    from dual
    connect by level <= 100000;

    When I monitored the execution using EM, I didn't see the expected parallel execution.

    The execution plan looks like this:

    INSERT STATEMENT
      LOAD AS SELECT
        OPTIMIZER STATISTICS GATHERING
          CONNECT BY WITHOUT FILTERING
            FAST DUAL

    This isn't a 'big' table, and the insert-select is not 'big'.

    Is it because the data set is not 'big' enough that Oracle chooses not to use parallel DML?

    How can I force parallel DML execution?

    Thanks and greetings

    Laury wrote:

    Yes, after some further tests, I discovered that CONNECT BY does not allow for parallel processing.

    Yet, I do not observe the same kind of results as you.

    Think about how a CONNECT BY works, and it should be clear that it would be very difficult to implement a parallel CONNECT BY - especially when the driving table has only one row. But you don't really want to use a simple CONNECT BY to generate a large amount of data anyway, because of the impact this can have on memory.

    Once you've worked out why you don't get the parallelism: have you confirmed your parallel features ARE enabled, and that your parallel_max_servers is not zero (you keep not answering the question on parallel settings)? You can introduce the strategy of joining a small "connect by" result set to itself, and that gives you the complete end-to-end parallelism you need:

    insert /*+ parallel(second_emp, 6) */ into second_emp
    (
        empno,
        ename,
        job,
        mgr,
        hiredate,
        sal,
        comm,
        deptno
    )
    with generator as (
        select
            rownum id
        from
            dual
        connect by
            level <= 1e4
    )
    select
        round(dbms_random.value(1, 500000)) as empno,
        dbms_random.string('U', 10) as ename,
        random_job as job,
        random_mgr as mgr,
        trunc((sysdate - 1000) + dbms_random.value(0, 366)) as hiredate,
        round(dbms_random.value(800, 3000)) as sal,
        -- decode(round(dbms_random.value(0, 1401)), 1401, null, round(dbms_random.value(0, 1401))) as comm
        (
            case
                when round(dbms_random.value(0, 1401)) between 1000 and 1401
                    then null
                else round(dbms_random.value(0, 1401))
            end
        ) as comm,
        to_number(substr(round(dbms_random.value(10, 30)), 1, 1) || '0') as deptno
    from
        (select 1 n1 from generator where id <= 100),
        (select 1 n2 from generator where id <= 1000)
    ;

    Note in particular the subquery WITH which will generate a small TWG internally; then the join between two copies of this table - and this join does not use the rownum or operator LEVEL. This gives you a merge join parallel that generates a large amount of data and allows the PX servers make all calls to dbms_random.

    -------------------------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                            | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    -------------------------------------------------------------------------------------------------------------------------------------------------
    |   0 | INSERT STATEMENT                     |                           |       |       |     7 (100)|          |        |      |            |
    |   1 |  TEMP TABLE TRANSFORMATION           |                           |       |       |            |          |        |      |            |
    |   2 |   LOAD AS SELECT                     |                           |       |       |            |          |        |      |            |
    |   3 |    COUNT                             |                           |       |       |            |          |        |      |            |
    |   4 |     CONNECT BY WITHOUT FILTERING     |                           |       |       |            |          |        |      |            |
    |   5 |      FAST DUAL                       |                           |     1 |       |     2   (0)| 00:00:01 |        |      |            |
    |   6 |   PX COORDINATOR                     |                           |       |       |            |          |        |      |            |
    |   7 |    PX SEND QC (RANDOM)               | :TQ10001                  |     1 |    26 |     5   (0)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
    |   8 |     LOAD AS SELECT (HYBRID TSM/HWMB) |                           |       |       |            |          |  Q1,01 | PCWP |            |
    |   9 |      OPTIMIZER STATISTICS GATHERING  |                           |     1 |    26 |     5   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    |  10 |       MERGE JOIN CARTESIAN           |                           |     1 |    26 |     5   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    |  11 |        PX RECEIVE                    |                           |     1 |    13 |     2   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    |  12 |         PX SEND BROADCAST            | :TQ10000                  |     1 |    13 |     2   (0)| 00:00:01 |  Q1,00 | P->P | BROADCAST  |
    |* 13 |          VIEW                        |                           |     1 |    13 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    |  14 |           PX BLOCK ITERATOR          |                           |     1 |    13 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |* 15 |            TABLE ACCESS FULL         | SYS_TEMP_0FD9D66BC_3BA6C3 |     1 |    13 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    |  16 |        BUFFER SORT                   |                           |     1 |    13 |     5   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    |* 17 |         VIEW                         |                           |     1 |    13 |            |          |  Q1,01 | PCWP |            |
    |  18 |          PX BLOCK ITERATOR           |                           |     1 |    13 |     2   (0)| 00:00:01 |  Q1,01 | PCWC |            |
    |* 19 |           TABLE ACCESS FULL          | SYS_TEMP_0FD9D66BC_3BA6C3 |     1 |    13 |     2   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    -------------------------------------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):

    ---------------------------------------------------

    13 - filter("ID"<=100)

    15 - access(:Z>=:Z AND :Z<=:Z)

    17 - filter("ID"<=1000)

    19 - access(:Z>=:Z AND :Z<=:Z)

    Regards

    Jonathan Lewis
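    One more check worth making before blaming the data volume - a sketch; the hint and table name come from the thread above, and v$pq_sesstat is per-session:

    ```sql
    -- Parallel DML must be enabled (or forced) in the session first:
    alter session enable parallel dml;

    -- ... run the insert /*+ parallel(second_emp, 6) */ statement here ...

    -- Then confirm the DML really was parallelized:
    select statistic, last_query, session_total
    from   v$pq_sesstat
    where  statistic in ('Queries Parallelized', 'DML Parallelized');
    ```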

  • Same scenario running in parallel for different values

    Hi all, I have a requirement which, at a high level, calls for a scenario to run "in parallel" for a set of values passed into it. To break it down: say you have a scenario that performs some processing (it could be anything), and to do this it needs a list of values (for example 1, 2 and 3). What we need is for the scenario to run for these three values at the same time, the reason being: it can run 10 minutes for 1, 20 minutes for 2, 10 minutes for 3, and we do not want to wait for it to finish 1 before passing to 2, and finish 2 before going to 3. To test that this can be done, ideally we would like to see in the Operator that when the scenario is started, it runs for all three values at the same time. Gurus, if there is any way this is possible, let me know.

    Thank you very much!

    Yes, they will be... Just put an extra step (an ODI procedure that fetches the current timestamp using Java BeanShell, Jython or Groovy etc.) in the scenario (put it inside the package and then generate the scenario, or as a step before the scenario), which can be printed to the Operator.
    I guess there may be a microsecond difference, which can be ignored (Oracle does not support it, but SQL Server does).

    Thank you.

  • Two disk partitions for both systems, or running under Parallels

    I have a 27" quad-core i7 with 8 GB and 1 hard drive. I partitioned the HD into 2 partitions: the boot partition has OS X Yosemite and works very well. However, there are occasions when I need to run OS X Snow Leopard, because some devices that I need to use are not compatible with Yosemite, as there is no update for the OEM driver. Frankly I do not have the budget to upgrade the devices, so I need a workaround.  Is partitioning the drive in 2, or running the 2 OSes under Parallels, the better solution? Please help!

    Thank you

    Joe A.

    If the system can run SL in native mode (outside of a virtual machine), there is nothing wrong with that.  I currently have a 480 GB SSD with Mavericks on 240 GB and my original SL on another 240 GB partition.  You have to keep 2 backups, one for each system, but that is expected.

    On the other hand, if you want both running at the same time, use a VM.

    A VM is usually a puzzle that I reserve for Linux and Windows, which need special 'hooks' to run on an Apple; they are easier to manage that way because I never needed both running simultaneously.
