Copying data forms from Prod to Test

Hello

We have lots of business data in data forms in production. What would be the ideal way to move these data forms to the Test environment along with the data? I would appreciate any advice.

Thank you

Published by: user12250743 on April 14, 2010 11:47

Migrating the form will move the form definition without the data; the data is stored in Essbase, not in Planning.
To extract the Essbase data, you can use the DATAEXPORT calc script command, MDX, the Excel add-in, Smart View, or a report script.
The quickest is to use the Excel add-in or Smart View: retrieve the data, point to the test environment and then submit.
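
For illustration, a minimal DATAEXPORT calc script along those lines might look like the sketch below; the export options are real calc commands, but the FIX members and file path are placeholders, not taken from this thread:

    SET DATAEXPORTOPTIONS
      {
      DataExportLevel "LEVEL0";
      DataExportColFormat ON;
      DataExportOverwriteFile ON;
      };
    FIX ("Actual", "Working", "FY10")
      DATAEXPORT "File" "," "/tmp/plan_data.txt";
    ENDFIX

Run it against the source application, then load the file into the test cube (for example with a load rule), or simply retrieve and submit through the Excel add-in / Smart View as described above.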

Cheers

John
http://John-Goodwin.blogspot.com/

Tags: Business Intelligence

Similar Questions

  • Cloning data from prod to test (with different owner IDs/schema names)

    Hello
    The owner ID / schema name of the production data is different from the one in the Test environment. So what happens is that after cloning the database, all the tables are still owned by the prod user and not the test user. If we just update the owner ID in PSDBOWNER, logging in is not possible because it starts looking for the tables under a different schema. If we leave the entry in PSDBOWNER as it is, then the connection works, but that is not what we want. We want the owner ID to be different in TEST.

    If we want all tables in test to have a different owner ID than in production, what steps should be taken?

    Thank you
    Vikas

    You must export from the production user and import into the test user.
    Use IMPDP on 10g and later, with the REMAP_SCHEMA option. However, there is no real reason to have a different username across environments; it's much easier to have the same username across all of them, especially if you want to copy the data files...
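
    For illustration, a minimal Data Pump sketch of that approach (the schema names, directory and dump file below are assumptions, not from this thread):

        expdp system@proddb schemas=PRODOWNER directory=DATA_PUMP_DIR dumpfile=prodowner.dmp logfile=prodowner_exp.log
        impdp system@testdb directory=DATA_PUMP_DIR dumpfile=prodowner.dmp remap_schema=PRODOWNER:TESTOWNER logfile=prodowner_imp.log

    The dump file must be visible in the target database's directory object (or use the NETWORK_LINK parameter to skip the intermediate file entirely).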

    Nicolas.

    Published by: Gasparotto N on May 31, 2010 16:46

  • Tracking test data to write to a CSV file

    Hi, I'm new to LabVIEW. I have a state machine on my front panel that runs a series of tests. Each time, I update the lights on the panel with the state. My question is: what is the best way to track the test data that my indicators are loaded with during the test, so that at the end of the test I can group the test data into a cluster and send it to another VI to write my CSV file? I already have a VI that writes the CSV file; the problem is tracking the data in my indicators. It would be nice if I could just read the data stored in the indicators, but I realize there is no output node =) Any ideas on the most painless approach to this?

    Thank you, Rob

    Yes, that's exactly what typedefs are for:

    Right-click on your control and select Make Type Def.

    A new window will open with only your control inside. You can save this control and then use it everywhere. When you modify the typedef, all controls of this type will change as well.

    Basically, you create your own type, like 'U8 numeric', 'boolean', or 'string', except yours can be of type 'cluster of all the data on my front panel', 'all the actions my state machine can do', etc.

  • Storage vMotion to a different datastore - vmdk file copy on a datastore

    Hi all

    Hoping that someone will be able to help with a problem I have right now.

    Configuration:

    NetApp SAN - VMFS datastores.

    The destination datastore for the migration is mirrored synchronously to a recovery site.

    I scheduled an overnight task to migrate a virtual machine's VMDK from a non-synchronous datastore to a synchronous datastore.  When I arrived yesterday morning, the virtual machine was powered on, but I couldn't access the console and the migration was still sitting at 89% complete.

    Checked the NetApp filer and found that the snapshot copy was larger than the snapshot space on the volume.

    Suspended and broke the SnapMirror.  Deleted the NetApp snapshot copy on the volume.  Once I had done this on the NetApp filer, the virtual machine began to respond to ping and I was able to access the console.

    I restarted the Windows VM to make sure everything was fine, as the date/time on the server was out of sync.  So again, I am happy that my virtual machine is now up and running.

    Problem: I migrated the virtual disks one at a time from the old datastore to the new one with no problems, until I went to move the last disk and found that there was not enough space on the datastore for it.  Further investigation found that the virtual disk I wanted to migrate was already sitting on the new datastore.

    Checking the settings on the virtual machine shows that the vmdk file in question is still sitting on the old datastore, but I also have a copy of the vmdk file (same size etc.) sitting on the new datastore.

    RVTools displays the disk on the new datastore as a zombie VMDK.

    Question: which vmdk file is the one I need to remove?  I think it is the one on the new datastore - it is shown as a zombie vmdk in RVTools and it does not appear in the virtual machine's settings.

    Above all, how do I find out which vmdk file the system is actually using?

    Thank you very much

    Open the .vmx file and check the path to your vmdk there.

    If you can, back up the VMDK at both locations before going any further.

    Since VC (and hopefully the .vmx), as well as RVTools, indicate that the destination copy is a zombie, I would lean towards that being the one to delete.

    In theory, if you use the datastore browser to try to delete the new one and it is in use, you will get an error message; so if it deletes OK, you should be good to continue from where you were before.

    Good luck.

  • PL/SQL code to generate simple test data

    Hello

    We need to create some test data for a customer, and I know a bit of PL/SQL code would do the trick here. I would like the community's help please, because I'm not a programmer or a PL/SQL guy. I know that with a few simple loops and an output statement this can be achieved.

    We have a PARTY table that has 21 rows of data:

     CREATE TABLE "PARTY"
      (    "PARTY_CODE" NUMBER(7,0),
           "PARTY_NAME" VARCHAR2(50)
      ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
     STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
     PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
     TABLESPACE "USERS"
    

    SELECT * FROM PARTY;
    
    PARTY_CODE PARTY_NAME
    ---------- ------------------
             2 PARTY 2 
             3 PARTY 3
             4 PARTY 4
             5 PARTY 5
             8 PARTY 8
             9 PARTY 9
            10 PARTY 10
            12 PARTY 12
            13 PARTY 13
            15 PARTY 15
            20 PARTY 20
            22 PARTY 22
            23 PARTY 23
            24 PARTY 24
            25 PARTY 25
            27 PARTY 27
            28 PARTY 28
            29 PARTY 29
            30 PARTY 30
            31 PARTY 31
            32 PARTY 32
    

    We have 107 events; for each event, we need to create dummy test data candidates (one candidate for each party code found in the table above).

    Here is an example of the test data:

    001,100000000000, TEST001CAND01, FNCAND01, 1112223333, 2

    001,100000000001, TEST001CAND02, FNCAND02, 1112223333, 3

    001,100000000002, TEST001CAND03, FNCAND03, 1112223333, 4

    001,100000000003, TEST001CAND04, FNCAND04, 1112223333, 5

    ...

    ...

    001,100000000021, TEST001CAND21, FNCAND21, 1112223333, 32

    002,100000000000, TEST002CAND01, FNCAND01, 1112223333, 2

    002,100000000001, TEST002CAND02, FNCAND02, 1112223333, 3

    002,100000000002, TEST002CAND03, FNCAND03, 1112223333, 4

    ...

    ...

    002,100000000021, TEST002CAND21, FNCAND21, 1112223333, 32

    and this goes all the way up to event 107.

    I know it's trivial and with a little time it can be done. I'm sorry for asking for such simple code, and I really appreciate all the help in advance.

    Regards

    I got it sorted with PL/SQL.

    DECLARE
        cntr NUMBER := 1;
    BEGIN
        FOR i IN 1 .. 107 LOOP
            FOR v_party_code IN (SELECT party_code FROM party ORDER BY party_code)
            LOOP
                DBMS_OUTPUT.put_line(LPAD(i, 3, 0) || ',' ||
                    '1000000000' || LPAD(cntr, 2, 0) || ', ' ||
                    'TEST' || LPAD(i, 3, 0) || 'CAND' || LPAD(cntr, 2, 0) || ', ' ||
                    'FNCAND' || LPAD(cntr, 2, 0) || ', ' ||
                    '1112223333, ' || v_party_code.party_code);
                cntr := cntr + 1;
            END LOOP;
            cntr := 1;
        END LOOP;
    END;
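
    For reference, the same rows could also be generated with a single set-based query instead of nested loops; a rough sketch that mirrors the logic above (not from the original thread):

        SELECT LPAD(e.evt, 3, 0) || ',' ||
               '1000000000' || LPAD(p.rn, 2, 0) || ', ' ||
               'TEST' || LPAD(e.evt, 3, 0) || 'CAND' || LPAD(p.rn, 2, 0) || ', ' ||
               'FNCAND' || LPAD(p.rn, 2, 0) || ', ' ||
               '1112223333, ' || p.party_code AS test_row
          FROM (SELECT LEVEL AS evt FROM dual CONNECT BY LEVEL <= 107) e
         CROSS JOIN (SELECT party_code,
                            ROW_NUMBER() OVER (ORDER BY party_code) AS rn
                       FROM party) p
         ORDER BY e.evt, p.rn;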

    Thanks to all those who helped.

  • Import of RCMP data from PROD to TEST - can I exclude attachments?

    Dear professionals,
    I have an RCMP Oracle Content Server 7.8 application.

    Need your expert advice on how to handle this situation. I need to update my TEST instance with data from PROD. To do this, I intend to import the content metadata and data from the PROD server into TEST.

    But I'm challenged by the fact that the prod attachments are security-sensitive and cannot be imported. Can I exclude attachments during the export/import process? Does the Archiver have the ability to exclude attachments when I create an archive of the data from PROD?

    Thank you very much in advance.

    As chance would have it...

    Attachments stored using the ZipRenditionManagement component are stored only in the weblayout and NOT in the vault.

    So if you want the attachments to disappear on the TEST server, when you create your Archiver export on PROD make sure you disable the 'Copy Web Content' option (it is enabled by default) on the export's General Options tab.

    This means that only the vault copy will move across in the archive batch file. The only fly in the ointment is that if you have PDF conversions etc., or anything funky going on with the weblayout copy, that processing will have to be redone in TEST - but this isn't normally a big headache.

    Import this into TEST and you will have no attachments.

    Good luck and hope that helps

    Tim

    PS: If this is what you need, please consider marking this as correct to help others with a similar problem in the future!

  • Copying data from a Mac to a Windows server

    Hi all

    My apologies in advance if this ended up in the wrong section of the forum, hoping someone could point me in the right direction.

    I work for a company that currently stores its files on an OS X journaled-formatted NAS device connected to a Mac mini via a Lightning cable, and the Mac connects to the network via Ethernet.

    We are planning a large data migration (~18 TB) from the Mac-formatted NAS device onto a Windows Server (likely using a NetApp storage solution / VM datastore).

    I am wondering what would be the best application to manage the data transfer? In Windows environments I have used Robocopy or FTP and love them, but I don't really know the Mac side of things when it comes to data migrations.

    We have a paid version of ChronoSync that we use to run our nightly backups to other NAS hardware - I see that this has come up in a few Google searches.

    Two other products that pop up in my research are arRsync and SuperDuper - can anyone comment on or recommend these products?

    Here is what I am looking for in an application to manage the transfer:

    - support for copying file attributes from the Mac world to Windows

    - support for recovery if the transfer fails / breaks

    - able to provide a meaningful peace-of-mind summary when the transfer is complete (sort of like ChronoSync's log)

    - GUI-based with a nice interface

    - reliable for transferring large data sets - currently about ~18 TB

    Looking forward to hearing your comments; by all means let me know things that I have to take into consideration during the planning phase, or any "gotchas".

    Matt

    So I'm going to bite and offer some advice.  Been through this a lot (sigh).

    First of all, some of the things to watch for:

    1: Naming.  Mac users on Mac storage can use any characters they want - and they do.  For example, my.file! @# $> That's so important # $123. .    can be a file name.  So can * REALLY IMPORTANT FILE!  Yes, there are spaces in it and at the end.  Yes, this will make Windows panic.  You need to do a review of file and folder names before you try to migrate.  Search for files that begin and end with the space character.  Beware of reserved characters in Windows.  Clean up your names prior to migration.  For more information, see here https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx

    2: Maximum path length.  Yes, even in this day of Unicode, several parts of Windows can only manage a file system path of at most 256 characters.  See the link above.  If you are coming from a system that supports longer paths (like OS X and *nix operating systems), you will want to review the length of your paths.  Test, test and test again with all of your tools, including your backup software.  Make sure it can both see paths that exceed the API limit and also restore data into paths that exceed that length.

    3: SMB is still a nightmare.  From file ID enumeration crawling, to slow directory listing, to a hung Finder.  Your version of Mac OS will have varying degrees of success.  10.8 was a mess with DFS.  10.9 was a train wreck with Windows cluster servers.  10.10 has huge issues with file ID enumeration and periodic deadlocks.  Once again, test in your environment.  Get comfortable with the nsmb.conf file.

    4: Do not let your Macs sleep.  AFP reconnects idle mounts: connect to a server, let the Mac sleep, and on wake it can (generally) renegotiate the connection with the server volume.  SMB does not.  If your Macs sleep with documents open on an SMB share, you are in a world of pain.

    5: Be prepared to not be able to find anything.  Searching on Windows shares has long been a frustrating situation.  You will have no Spotlight support from the server, so you can either use a directory-trawling search tool like Find Any File, or be ready to manage Spotlight indexes on each workstation and hope for the best.

    Regarding methods to get the data from one place to another, you must realize that there is a time constraint.  Using a 4 GB/min GigE transfer rate, assuming no problems and a run without interruptions, 18 TB of data will take 75 hours to copy.  Now, since it is a NAS pushing to a Windows system, you probably won't get 4 GB/min, so increase this number by 20%.  If the Windows server does on-write virus scanning, add another 10%.  If possible, the best advice I can give you is to do it in logical blocks.  Now, I don't know your data set, so this may not be possible.  However, if you have several shares, move one share per weekend to ensure you have enough time to perform the copy and also to correct problems that may occur.

    Regarding tools, I have always used rsync because it allows both detailed logging and also works incrementally.  Should it get interrupted for any reason, you can pick up where you left off.  Unfortunately, it isn't GUI-based.  Also, the rsync 2.6.3 included with OS X is not sufficient to support all the file system features.  I prefer to build a copy of the latest 3.x branch.  I also include some of the patches, such as file flags and HFS compression.  If it's just data, the patches may not be necessary.  If you need a GUI tool, ChronoSync is fine.

    And finally, if you find that SMB on Windows is just too frustrating, there's Acronis Access Connect.  It is an add-on that gives Windows Server native AFP and Spotlight support.  Over the past few years, I have run into SMB issues across many versions of Mac OS X.  Of course, several of those issues were in corporate environments where I have no visibility into the Windows Server configuration.  I have no idea what they are doing on those machines, but I do know that the OS X integration was rough.

    I hope this helps.  Good luck with your project.  Test, and test some more, before rolling this out to your end users.

    Reid

    Apple Consultants Network

    Author - "El Capitan Server - Foundation Services"

    Author - "El Capitan Server - Collaboration & Control"

    Author - "El Capitan Server - Advanced Services"

    iBooks exclusively available in the Apple Store

  • Test DB has no AMM and the query takes 5 seconds; prod DB has AMM and the query takes 5 minutes

    I created a test copy of production a long time ago; when I created the test I did not enable AMM.

    A developer created a new report and told me that it took seconds to run on the test DB, while it was taking up to 10 minutes on the production DB. I got him to extract the SQL for the report and ran it from the command line: on the test DB it took 5 seconds, and more than 5 minutes on the production DB.

    I then checked AMM with show parameter memory_target to see whether AMM was enabled: on the prod DB it is 1 GB, and on the test DB it is 0.

    The memory allocation queries show this for prod:

    NAME                               VALUE
    ------------------------------     ----------
    PGA maximum allowed                 578929664
    pga_aggregate_target                        0
    SGA_TARGET                                  0
    MEMORY_TARGET
                                       ----------
                                        578929664

    and for the test DB:

    NAME                               VALUE
    ------------------------------     ----------
    PGA maximum allowed                 214604800
    pga_aggregate_target                101711872
    SGA_TARGET                          624951296
    MEMORY_TARGET
                                       ----------
                                        839556096

    I don't currently have DB Control enabled on the test DB, so I could not look at the memory advisors, but it is enabled on the prod DB, and it tells me memory_target should stay at the 1 GB it is currently set to.

    Both of these DBs are on the same server.

    Now I wonder whether I should disable AMM on the prod server and allocate memory manually to match the test, or whether I should adjust AMM, if possible, so that it performs as well as the test DB?

    Thanks in advance.

    I tried Geert's earlier suggestion but forgot to mention that it made no difference.

    However, I managed to find out what it was by starting at the beginning and looking at all the init parameters. It was then that I noticed that prod's OPTIMIZER_FEATURES_ENABLE was set to 9.2.0.8, while on test it was 11.2.0.3. A quick alter system set optimizer_features_enable = '11.2.0.3' scope = both and the query ran in under 5 seconds, just like on test. It also got rid of the note that CPU costing was not being used.
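
    For reference, the check and the fix described above amount to something like this (the value comes from the post; SCOPE=BOTH changes both the running instance and the spfile):

        show parameter optimizer_features_enable
        alter system set optimizer_features_enable = '11.2.0.3' scope=both;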

    I thought slightly more accurate wording in the explain plan note might have helped - but in any case, it makes sense, I guess...

    However, this has been a useful exercise for me. I have a better idea of how they use the remote database for retrieving information and will make suggestions; it may be easier to create views on the remote (SQL Server) side rather than on the Oracle side.

    I still don't know how the test ended up with the right parameter while prod did not. I create the init.ora file for the test instance from a carbon copy of the production spfile, but I must have missed this setting this time.

    In any case, it's a good thing I had the right parameter on test - otherwise the developer and users might have accepted that it was going to be slow and left it at that...

    Thanks again for all your help

    PS. Here is the explain plan from production now:

    SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));
    SQL_ID  bqh1aq71rshpp, child number 1
    -------------------------------------
    SELECT /*+ GATHER_PLAN_STATISTICS */
    DOMCUST.CUSTOMER,DOMHEADER.WOODTYPE,DOMHEADER.ISSUEDATE ISSUEDATE,
    vw_testlots.TLOT_PREV PREV_LOT, VW_TESTLOTS.TLOT_NEXT NEXT_LOT,
    DOMLOTS.lotno ship_lot,       VW_PTMS_LOTS.AIRDRY
    AIRDRY,VW_PTMS_LOTS.BRIGHTNESS BRIGHT,VW_PTMS_LOTS.DIRT
    DIRT,VW_PTMS_LOTS.VISCOSITY VISCOSITY,
    (A.CCSF_0+B.CCSF_0)/2 CCSF_0,
    (A.TEAR1_0+B.TEAR1_0)/2 TEAR1_0,
    (A.BL_0+B.BL_0)/2 BL_0,
    (A.BURST_0+B.BURST_0)/2 BURST_0,
    (A.BULK_0+B.BULK_0)/2 BULK_0,
    (A.POROSITY_0+B.POROSITY_0)/2 POROSITY_0,
      (A.BOND_0+B.BOND_0)/2 BOND_0,
    (A.OPACITY_0+B.OPACITY_0)/2 OPACITY_0,
    (A.CCSF_1+B.CCSF_1)/2 CCSF_400,
    (A.TEAR1_1+B.TEAR1_1)/2 TEAR1_400,                                 (
    
    Plan hash value: 2207188603
    
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                              | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                       |              |      1 |        |    255 |00:00:05.90 |    1091K|     45 |     45 |       |       |          |         |
    |   1 |  SORT AGGREGATE                        |              |    255 |      1 |    255 |00:00:00.91 |     385K|      0 |      0 |       |       |          |         |
    |*  2 |   FILTER                               |              |    255 |        |  57880 |00:00:01.11 |     385K|      0 |      0 |       |       |          |         |
    |*  3 |    TABLE ACCESS BY INDEX ROWID         | LOTTEST      |    255 |      1 |  57880 |00:00:00.99 |     385K|      0 |      0 |       |       |          |         |
    |*  4 |     INDEX RANGE SCAN                   | UK_LOTNO     |    255 |     45 |    767K|00:00:00.85 |    5015 |      0 |      0 |       |       |          |         |
    |   5 |  SORT AGGREGATE                        |              |    255 |      1 |    255 |00:00:00.52 |     158K|      0 |      0 |       |       |          |         |
    |*  6 |   FILTER                               |              |    255 |        |  55792 |00:00:00.47 |     158K|      0 |      0 |       |       |          |         |
    |*  7 |    TABLE ACCESS BY INDEX ROWID         | LOTTEST      |    255 |      1 |  55792 |00:00:00.36 |     158K|      0 |      0 |       |       |          |         |
    |*  8 |     INDEX RANGE SCAN                   | UK_LOTNO     |    255 |     45 |    516K|00:00:00.55 |    1372 |      0 |      0 |       |       |          |         |
    |   9 |  SORT ORDER BY                         |              |      1 |      1 |    255 |00:00:05.90 |    1091K|     45 |     45 | 70656 | 70656 |63488  (0)|         |
    |  10 |   NESTED LOOPS                         |              |      1 |        |    255 |00:00:04.47 |     547K|     45 |     45 |       |       |          |         |
    |  11 |    NESTED LOOPS                        |              |      1 |      1 |    255 |00:00:04.47 |     546K|     45 |     45 |       |       |          |         |
    |  12 |     NESTED LOOPS                       |              |      1 |      1 |    255 |00:00:03.88 |     383K|     45 |     45 |       |       |          |         |
    |  13 |      NESTED LOOPS                      |              |      1 |      1 |    255 |00:00:03.02 |     979 |     45 |     45 |       |       |          |         |
    |  14 |       NESTED LOOPS                     |              |      1 |      1 |    255 |00:00:03.01 |     712 |     45 |     45 |       |       |          |         |
    |* 15 |        HASH JOIN                       |              |      1 |      1 |    255 |00:00:03.01 |     527 |     45 |     45 |   858K|   858K| 1283K (0)|         |
    |  16 |         NESTED LOOPS                   |              |      1 |     19 |    255 |00:00:00.01 |      28 |      0 |      0 |       |       |          |         |
    |  17 |          NESTED LOOPS                  |              |      1 |      1 |      9 |00:00:00.01 |      17 |      0 |      0 |       |       |          |         |
    |  18 |           TABLE ACCESS BY INDEX ROWID  | DOMCUST      |      1 |      1 |      1 |00:00:00.01 |       2 |      0 |      0 |       |       |          |         |
    |* 19 |            INDEX UNIQUE SCAN           | PK_DOMCUST   |      1 |      1 |      1 |00:00:00.01 |       1 |      0 |      0 |       |       |          |         |
    |* 20 |           TABLE ACCESS FULL            | DOMHEADER    |      1 |      1 |      9 |00:00:00.01 |      15 |      0 |      0 |       |       |          |         |
    |* 21 |          INDEX RANGE SCAN              | PK_DOMLOTS   |      9 |     21 |    255 |00:00:00.01 |      11 |      0 |      0 |       |       |          |         |
    |  22 |         VIEW                           | VW_PTMS_LOTS |      1 |    101 |  15101 |00:00:03.05 |     499 |     45 |     45 |       |       |          |         |
    |  23 |          SORT UNIQUE                   |              |      1 |    101 |  15101 |00:00:03.02 |     499 |     45 |     45 |  1328K|   587K| 1180K (0)|         |
    |  24 |           UNION-ALL                    |              |      1 |        |  15101 |00:00:03.06 |     499 |     45 |     45 |       |       |          |         |
    |  25 |            HASH GROUP BY               |              |      1 |    100 |  15101 |00:00:02.98 |       0 |     45 |     45 |  3004K|   982K| 3262K (1)|    1024 |
    |* 26 |             FILTER                     |              |      1 |    100 |  15842 |00:00:02.99 |       0 |      0 |      0 |       |       |          |         |
    |  27 |              REMOTE                    |              |      1 |        |  16563 |00:00:02.95 |       0 |      0 |      0 |       |       |          |         |
    |* 28 |            TABLE ACCESS FULL           | PTMSLOTS     |      1 |      1 |      0 |00:00:00.01 |     499 |      0 |      0 |       |       |          |         |
    |* 29 |        INDEX UNIQUE SCAN               | PK_DOMLOTS   |    255 |      1 |    255 |00:00:00.01 |     185 |      0 |      0 |       |       |          |         |
    |  30 |       TABLE ACCESS BY INDEX ROWID      | DOMHEADER    |    255 |      1 |    255 |00:00:00.01 |     267 |      0 |      0 |       |       |          |         |
    |* 31 |        INDEX UNIQUE SCAN               | PK_DOMHEADER |    255 |      1 |    255 |00:00:00.01 |      12 |      0 |      0 |       |       |          |         |
    |  32 |      TABLE ACCESS BY INDEX ROWID       | LOTTEST      |    255 |      1 |    255 |00:00:00.91 |     383K|      0 |      0 |       |       |          |         |
    |* 33 |       INDEX UNIQUE SCAN                | PK_LOTTEST   |    255 |      1 |    255 |00:00:00.91 |     382K|      0 |      0 |       |       |          |         |
    |  34 |        SORT AGGREGATE                  |              |    255 |      1 |    255 |00:00:00.91 |     382K|      0 |      0 |       |       |          |         |
    |  35 |         TABLE ACCESS BY INDEX ROWID    | LOTTEST      |    255 |      1 |    261 |00:00:00.90 |     382K|      0 |      0 |       |       |          |         |
    |* 36 |          INDEX RANGE SCAN              | UK_LOTNO     |    255 |      1 |    261 |00:00:00.90 |     382K|      0 |      0 |       |       |          |         |
    |  37 |           SORT AGGREGATE               |              |    255 |      1 |    255 |00:00:00.90 |     382K|      0 |      0 |       |       |          |         |
    |* 38 |            FILTER                      |              |    255 |        |  57880 |00:00:01.09 |     382K|      0 |      0 |       |       |          |         |
    |* 39 |             TABLE ACCESS BY INDEX ROWID| LOTTEST      |    255 |      1 |  57880 |00:00:00.97 |     382K|      0 |      0 |       |       |          |         |
    |* 40 |              INDEX RANGE SCAN          | UK_LOTNO     |    255 |     45 |    767K|00:00:00.85 |    1798 |      0 |      0 |       |       |          |         |
    |* 41 |     INDEX UNIQUE SCAN                  | PK_LOTTEST   |    255 |      1 |    255 |00:00:00.53 |     162K|      0 |      0 |       |       |          |         |
    |  42 |      SORT AGGREGATE                    |              |    255 |      1 |    255 |00:00:00.53 |     162K|      0 |      0 |       |       |          |         |
    |  43 |       TABLE ACCESS BY INDEX ROWID      | LOTTEST      |    255 |      1 |    264 |00:00:00.52 |     162K|      0 |      0 |       |       |          |         |
    |* 44 |        INDEX RANGE SCAN                | UK_LOTNO     |    255 |      1 |    264 |00:00:00.52 |     162K|      0 |      0 |       |       |          |         |
    |  45 |         SORT AGGREGATE                 |              |    255 |      1 |    255 |00:00:00.52 |     162K|      0 |      0 |       |       |          |         |
    |* 46 |          FILTER                        |              |    255 |        |  55792 |00:00:00.46 |     162K|      0 |      0 |       |       |          |         |
    |* 47 |           TABLE ACCESS BY INDEX ROWID  | LOTTEST      |    255 |      1 |  55792 |00:00:00.35 |     162K|      0 |      0 |       |       |          |         |
    |* 48 |            INDEX RANGE SCAN            | UK_LOTNO     |    255 |     45 |    516K|00:00:00.57 |    5282 |      0 |      0 |       |       |          |         |
    |  49 |    TABLE ACCESS BY INDEX ROWID         | LOTTEST      |    255 |      1 |    255 |00:00:00.01 |     255 |      0 |      0 |       |       |          |         |
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       2 - filter(:B1-365<=:B2+365)
       3 - filter(("WOODTYPE"=:B1 AND "PRODDATE">=:B2-365 AND "PRODDATE"<=:B3+365))
       4 - access("LOTNO"<:B1)
       6 - filter(:B1-365<=:B2+365)
       7 - filter(("WOODTYPE"=:B1 AND "PRODDATE">=:B2-365 AND "PRODDATE"<=:B3+365))
       8 - access("LOTNO">=:B1)
      15 - access("DOMLOTS"."LOTNO"="VW_PTMS_LOTS"."LOTID")
      19 - access("DOMCUST"."DOMCUSTID"=1)
      20 - filter(("DOMHEADER"."DOMCUSTID"=1 AND "DOMHEADER"."ISSUEDATE">=TO_DATE(' 2014-05-30 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "DOMHEADER"."WOODTYPE"='H'
                  AND TRUNC(INTERNAL_FUNCTION("DOMHEADER"."ISSUEDATE"))<=TO_DATE(' 2015-05-30 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
      21 - access("DOMHEADER"."DOMID"="DOMLOTS"."DOMID")
      26 - filter("A"."StartTime">GREATEST(SYSDATE@!-730,TO_DATE(' 2013-04-16 13:00:00', 'syyyy-mm-dd hh24:mi:ss')))
      28 - filter("PRODTIME">SYSDATE@!-730)
      29 - access("DOMHEADER"."DOMID"="DOMLOTS"."DOMID" AND "DOMLOTS"."LOTNO"="DOMLOTS"."LOTNO")
      31 - access("DOMLOTS"."DOMID"="DOMHEADER"."DOMID")
      33 - access("A"."LOTID"=)
      36 - access("LOTTEST"."LOTNO"=)
      38 - filter(:B1-365<=:B2+365)
      39 - filter(("WOODTYPE"=:B1 AND "PRODDATE">=:B2-365 AND "PRODDATE"<=:B3+365))
      40 - access("LOTNO"<:B1)
      41 - access("B"."LOTID"=)
      44 - access("LOTTEST"."LOTNO"=)
      46 - filter(:B1-365<=:B2+365)
      47 - filter(("WOODTYPE"=:B1 AND "PRODDATE">=:B2-365 AND "PRODDATE"<=:B3+365))
      48 - access("LOTNO">=:B1)
    
    105 rows selected.
    
    Elapsed: 00:00:00.12
    
  • Manually assign primary key and copy it to the detail form

    Hi experts,

    Oracle APEX 4.2, 11g database, using Windows 7.

    I created a form and generate an automatic voucher no. (not just a sequence) with logic and SQL. The SQL query produces the number correctly per fiscal year, and the fiscal year begins on 01 July and ends on 30 June each year. This means that starting 07/01/2015 it will create a new voucher number.

    The master table name is GL_PV and the columns are:

    PV_NO      NUMBER
    PV_DATE    DATE
    CC_CODE    NUMBER
    AMOUNT     NUMBER
    REMARKS    VARCHAR2(100)

    I created an On Submit - Before Computations and Validations process.

    The code is:

    SELECT NVL(MAX(TO_NUMBER(NVL(pv_no, 0))) + 1, 1)
      INTO :P15_PV_NO
      FROM gl_pv
     WHERE pv_date BETWEEN
             TO_DATE('01-07-' || (EXTRACT(YEAR FROM TO_DATE(:P15_PV_DATE, 'dd-mm-yyyy'))
                     + CASE WHEN EXTRACT(MONTH FROM TO_DATE(:P15_PV_DATE, 'dd-mm-yyyy')) <= 6 THEN -1 ELSE 0 END), 'dd-mm-yyyy')
         AND TO_DATE('30-06-' || (EXTRACT(YEAR FROM TO_DATE(:P15_PV_DATE, 'dd-mm-yyyy'))
                     + CASE WHEN EXTRACT(MONTH FROM TO_DATE(:P15_PV_DATE, 'dd-mm-yyyy')) <= 6 THEN 0 ELSE 1 END), 'dd-mm-yyyy')
       AND cc_code = :P15_CC_CODE;

    The process condition is: when button pressed = Generate_Button.

    In the master form I enter the data and click the Create button; it works well and generates the voucher no. correctly.

    Now I have created a detail form. My detail table is PV_DETAIL and the columns are:

    pv_voucher_no

    pv_date

    account_code

    Remarks

    amount

    I want to create the master/detail form relationship.

    I tried:

    • Primary key and foreign key, but it does not work. GL_PV table primary key columns: (PV_NO, PV_DATE); PV_DETAIL table foreign key columns: (PV_VOUCHER_NO, PV_DATE).
    • Created one form for the master and a second form for the detail; the master form generates fine but the detail does not.

    I want to assign the pv_no and pv_date values in both tables (master/detail); in other words, copy the pv_no and pv_date values from the master table into the detail table's pv_voucher_no and pv_date columns.

    Please advise how I can solve this problem.

    Thank you, Oracle forum, for solving my problems.


    Error report: ORA-01790: expression must have same datatype as corresponding expression

    Found the solution on this forum.

    Solution:

    Tabular form attributes:

    Change the Default Type to PL/SQL Expression or Function

    Default = to_date(:P15_PV_DATE,'DD-MON-YYYY')



  • Need some ideas on copying data from one schema to another.

    Dear all,

    I would like to know the best possible method for copying data from one schema to another. (I don't want to use EXP/IMP.)
    Example:


    I have a production schema in which I have hundreds of tables. I copy the data to the TEST environment; after comparing the data, I should copy it incrementally.
    This data copy may occur once a month, or on request.

    I want to have this done by a procedure.
    I was thinking of the logic below: using a procedure, I will compare the tables and structures between the two schemas.
    Then I will use the primary key column in both tables and compare the data first. If the data does not exist, then I insert only those records into the target.
    The above logic is similar to synchronizing data or records, but the tables do not have any columns to track which records were archived/copied from the source.
    So please suggest the best logic or solution.

    Thanks in advance...

    Regards
    Suresh


    I don't know why you don't want to opt for EXP/IMP.
    As an alternative, if you have a DB link between the Production and Test schemas (which I seriously doubt), you can use MERGE to do a delta copy of data from Production to Test.
    a. INSERT INTO test_schema.table_name SELECT sequence_columns, column_list FROM (SELECT column_list FROM prod_schema.table_name MINUS SELECT column_list FROM test_schema.table_name)
    column_list must not contain columns that hold sequence values, because they may differ and give an incorrect result; therefore, these are handled in the outer query rather than in the inline view.
    b. MERGE STATEMENT
    c. DROP test_schema.table_name; CREATE TABLE test_schema.table_name AS SELECT column_list FROM prod_schema.table_name; - a very neat way to copy all the data, provided you do not need to keep any changes made to the tables in the test schema.
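
    As a rough sketch of option (b), assuming a database link named PROD_LINK and a table T with primary key ID and two data columns (all hypothetical names):

        MERGE INTO test_schema.t tgt
        USING (SELECT id, col1, col2 FROM prod_schema.t@prod_link) src
           ON (tgt.id = src.id)
         WHEN MATCHED THEN
           UPDATE SET tgt.col1 = src.col1, tgt.col2 = src.col2
         WHEN NOT MATCHED THEN
           INSERT (id, col1, col2) VALUES (src.id, src.col1, src.col2);

    Unlike the INSERT ... MINUS in (a), this also refreshes rows that already exist in TEST but have changed in PROD.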

    However, you mentioned finding existing records based on the primary key; IMO, the primary key is normally a sequence (possibly alphanumeric) and its value in the Production and Test environments may differ, or the rows may even have different attributes; therefore, I find it incorrect to match only on the primary keys. I would suggest matching on the key attributes.

    If you want to track the last update/insert for a record, you can add a column that stores the time of the last modification. In this way, you can track changes to a column. Another alternative would be to use auditing on the table.

    Let us know your thoughts or concerns, so that help can be provided.

    I suggest that you read and follow {message:id=9360002}.
    It will be useful if you can provide all the required information up front, to help us give you a quick resolution.

  • Adding pages to a form via JS and templates - stopping text from copying to the new page

    Hello community,

    Over the last few days I have been doing a crash course in Adobe Acrobat and JavaScript. One of our clients asked us to make a PDF form with fillable fields and a button that will create a new copy of the form.

    I have the text fields, the template, and the JavaScript for the button to spawn the template.

    My problem is that the text in the fields on the first page is copied to the extra pages when they are created.

    I did a ton of research and found a lot of answers that talk about having the fields renamed. I've included the bRename part in my JS, and the text fields show that they have a different name, but it still pulls the data from the first page.

    Here is the script I use. I'm also running Acrobat X Standard.

    var a = this.getTemplate('new Page');
    a.spawn({bRename: true});

    Any suggestions would be a great help. I can provide necessary additional information.

    You can reset the fields on the newly created page. All fields on that page will have the prefix "Pn", where n is the number of the newly created page. Since your code adds the page at the end, the code to reset the fields, which you can place after the code that spawns the template, can be:

    resetForm(["P" + (numPages - 1)]);

  • How to change data objects and update the corresponding tasks and task forms?

    Hi all

    I have edited this thread because I found that I had several questions to ask.

    1.
    I'm quite new to OBPM and would like to know how to change data objects - for clarity, add a new attribute 'Client Dependents' to 'Customer Details' - and then update the task that uses 'Customer Details', either via data binding (I get an error message here) or via the Data tab of the task - whichever I choose does not appear to do what I want.

    2.
    Will the task form that I generated earlier in <1> be updated automatically? Is it possible to update it manually if the task form has been heavily customized?

    3.
    What are project data objects? They do not store values in my process. Are they only for arguments - like a reusable process?

    Thanks in advance,

    Kind regards

    Yanis

    Hi Yanius,

    (1) Assume that you start from scratch. First, you declare your data object structure. To do this, go to the BPM Project Navigator, right-click on 'Business Catalog' and create a new Module. Then you can right-click on the module you created and select New Business Object. Add all the attributes you need. It is the same as declaring a class in Java. Second, you must declare a process variable of the type you have created: select the process and go to the Structure view (if you don't see it, activate it in the JDev menu View -> Structure). Right-click on 'Process Data Objects' and create your variable. It's like declaring a variable in Java. In short, to answer your question, follow the same path in reverse: find the variable of type 'Customer Details' in your process, then go to your Business Catalog, right-click and change the definition to add what you need.

    (2) The human task will not update automatically (annoying). If you change the object itself, though, you don't need to change your mappings, because it is actually the same object that you pass through the task as the in/out argument. There are two things here: the human task and the form associated with it (where you probably changed the object, i.e. Customer Details). Go to the form (.jspx) and click on the Bindings tab (by default you are in Design). In the Bindings page, there is a link at the top: "Page Definition File" (something like proj/pageDef/...xml). Open the XML file and go to the source. There you can now manually add all the attributes you need that were not available before (i.e. Client Dependents). It's a little cumbersome, but at least you don't have to recreate the form. This is particularly useful if you have already implemented the form and subsequently need to add more things (the business is very good at saying, "oh, I would like to see something else in the form" ;)

    (3) Project data objects are visible to all processes that you have in the project, as opposed to process data objects, which are visible only to the process where you declared the variable. This means that you declare a project data object once and then it is available to all processes. Keep in mind that each process gets its own copy of it. In other words, it is not like a global variable that everyone sees: if you edit it in one process, other processes will not see the new value.

    I hope that I have answered your questions.
    Cheers,
    Felipe

  • How to copy all the data from a repeating subform to a second repeating subform

    I have a repeating subform on P1 which will have a variable number of rows created.  Then on P4 I have another repeating subform, with the same fields, that the P1 subform data must be copied to.  I have tried many things, but my skill level is not that great, so none of them have worked entirely.  I have a partially working script on a button named CopyAll, but it is not very elegant and throws an error message when you reach the end of the number of rows that were created in the P1 subform.

    Can anyone suggest a fix to this script, or a better scripting approach?  I think this must be tough, since everything similar that I found on the forums has not been resolved either.  I am attaching a copy of my form.

    In the exit event of the first subform's invoice field, you will find the code to assign the value to its "sibling" on page 4.

    In addition, in the Add button, I added code to add a record to the page 4 subform.

    To do this you will need to copy/paste the exit event code into all the fields that you want to copy, and in the xfa.resolveNode("XXXXXXX[].fieldName") command, replace fieldName with the actual name of the field you are in.

    I didn't do anything for your "delete line" button because I don't know how you want it to work, but if you want to control the P4 subform from the P1 subform, you should not allow rows to be deleted directly in the P4 subform.

    I hope this helps.

  • Does cloning from prod to test change/affect production?

    Hello
    Recently, I cloned from prod to test. And the customer says that after the cloning, reports are not running on production (they complete with error). Could the cloning be the cause, even though we only ran preclone and copied the tops over? I think it does not change anything on the production side. Has anyone had an issue like this?

    Regards
    Taher

    Taher,

    Does cloning from prod to test change/affect production?

    No, Rapid Clone does not change the source instance.

    Note: 216664.1 - FAQ: Cloning Oracle Applications Release 11i - *19. Does Rapid Clone modify the source system?*

    Kind regards
    Hussein

  • Quick way to copy all the data from an iPad Air 2 to an iPad Pro

    I would like to copy all the data on my iPad Air 2 to the new iPad Pro I just bought, to avoid the delay of re-downloading applications.

    Hello

    You'll be happy to learn that it is a very easy process!

    This should help:

    Transfer the contents of an iPhone, iPad or iPod touch to a new device - Apple Support

    Let me know how you go!

    All the best.
