BI Server generates bad SQL after upgrade

After we upgraded from Siebel Analytics 7.8.2 to OBIEE 10.1.3.3, our report no longer works. The report is based on Siebel Life Sciences OLTP tables. I checked the RPD; it is almost exactly the same as in 7.8.2. However, the SQL generated by the BI Server in 10.1.3.3 contains a join that returns no records.
Here are the two SQLs:

In 7.8.2:

select distinct D1.c1 as c1,
D1.c2 as c2,
D1.c3 as c3,
D1.c4 as c4,
D1.c5 as c5,
D1.c6 as c6,
D1.c7 as c7
from
(select distinct T1027.PTCL_NUM as c1,
T10664.SITE_NUM as c2,
T52895.NAME as c3,
T52895.TODO_CD as c4,
T52895.EVT_STAT_CD as c5,
T52895.DOC_RCVD_DT as c6,
T52895.COMMENTS as c7
from
S_PTCL_SITE_LS T10664 /* Protocol Site */,
S_CL_PTCL_LS T1027 /* Protocols */,
S_EVT_ACT T52895 /* Activity */
where (T10664.ROW_ID = T52895.CL_PTCL_ST_ID
and T1027.ROW_ID = T10664.CL_PTCL_ID
and T52895.SUBTYPE_CD = 'Document')
) D1
order by c1, c2, c3, c4, c5, c6, c7


In OBIEE 10.1.3.3, the generated SQL is as follows. The join 'T438245.ROW_ID = T443245.ROW_ID' is between S_EVT_ACT and S_CL_PTCL_LS, and their ROW_IDs would never match.

Where should I look for a clue to such a problem?


select distinct T438245.PTCL_NUM as c1,
T442683.SITE_NUM as c2,
T443245.TODO_CD as c3,
T443245.NAME as c4,
T443245.EVT_STAT_CD as c5,
T443245.DOC_RCVD_DT as c6,
T443245.COMMENTS as c7
from
S_PTCL_SITE_LS T442683 /* Protocol Site */,
S_CL_PTCL_LS T438245 /* Protocols */,
S_EVT_ACT T443245 /* Activity */
where (T438245.ROW_ID = T442683.CL_PTCL_ID
and T438245.ROW_ID = T443245.ROW_ID
and T442683.ROW_ID = T443245.CL_PTCL_ST_ID
and T443245.SUBTYPE_CD = 'Document')
order by c1, c2, c3, c4, c5, c6, c7

Thank you very much!

Check the join in the physical layer between S_CL_PTCL_LS and S_EVT_ACT.

As you say, their ROW_IDs do not match, so that join should never be used. Ensure that the RPD contains only the correct joins, so that the BI Server can join your tables properly.
I wonder if in the 7.8.2 RPD the join was there but the BI Server never used it - can you confirm that for us?
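
For reference, once the extra join is removed from the physical layer, the generated WHERE clause should come out in the same shape as the 7.8.2 query above (a sketch using the generated aliases, for comparison only - the fix itself belongs in the RPD, not in the SQL):

where (T438245.ROW_ID = T442683.CL_PTCL_ID
and T442683.ROW_ID = T443245.CL_PTCL_ST_ID
and T443245.SUBTYPE_CD = 'Document')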

Tags: Business Intelligence

Similar Questions

  • SQL Transaction Log Corruption since upgrade to vSphere 6.0

    Since we upgraded to vSphere 6.0, we have been getting transaction log corruption errors while taking transaction log backups of a SQL instance running on a virtual machine. The SQL error message is: Backup detected log corruption in database Department. Context is arglist. LogFile: 2 "F:\Sqldata\Department_log.ldf". I can work around this problem by setting the DB to the Simple recovery model, then back to Full, and taking a new full backup followed by a transaction log backup; that restarts the chain. This error happens every night and seems to be triggered by our nightly VM-level backups taken by CommVault, which takes a snapshot, backs up the VMDK, and then deletes the snapshot. However, I can reproduce the problem without our backup solution with a Storage vMotion followed by a SQL transaction log backup, which errors out with transaction log corruption. This was non-existent before the upgrade from vSphere 5.5 to 6.0. I have made sure the VM tools are up to date, as well as the virtual machine's hardware version. I hope someone has an idea of what could be happening here, because I am confused and would rather not go back to vSphere 5.5.
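
    A hedged T-SQL sketch of that workaround (the database name comes from the error message above; the backup paths are placeholders):

    -- Flip the recovery model to Simple and back to Full, then restart the chain.
    ALTER DATABASE Department SET RECOVERY SIMPLE;
    ALTER DATABASE Department SET RECOVERY FULL;
    -- New full backup, then a fresh log backup (paths assumed):
    BACKUP DATABASE Department TO DISK = 'F:\Backup\Department_full.bak';
    BACKUP LOG Department TO DISK = 'F:\Backup\Department_log.trn';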

    The VM and vSphere news

    VM

    Windows 2008 R2

    SQL Server 2008 R2 - SP3 (10.50.6220.0)

    VM tools 9536

    VM hardware v11

    vSphere

    Blade C7000 enclosure with 3 x BL460c Gen8

    VMware ESXi, 6.0.0 2494585 (HP Custom image)

    Get your vSphere 6.0 servers patched as soon as possible.

    The same thing happened to us - several of our SQL Server databases became corrupted.

    This is a known issue with vSphere 6.0 that has since been corrected (I think it was related to snapshots and CBT).  The latest patch/build for vSphere 6.0 is "ESXi, 6.0.0 2809209".

  • multi select sql problem - ORA-01722: invalid number

    Hello

    I have a multi select list and when I select only one item it works perfectly, but if I select several items, it gives an error. The sql statement is as below:

    SELECT * FROM VIEW WHERE INSTR(':'||:P1_ID||':', ':'||ID||':') > 0

    and I get the following error:

    ORA-01722: invalid number


    Here is some information on the view:
    Column name
    Data type (NUMBER)
    Nullable (No)

    Hello

    I just tried this in my own sandbox application...

    Create a report region with

    SELECT t_id, t_code
    FROM t_table
    WHERE INSTR(':'||TO_CHAR(:P1_ID)||':', ':'||TO_CHAR(t_id)||':') > 0
    

    as the source.

    Create a multi-select list with

    SELECT t_code, t_id
    FROM t_table
    

    as the source.

    Create a button with a request of GO, and make the report region conditional on REQUEST = GO.

    Now, I run the page, select a value in the list, click on go - region displays with the correct result.

    Now I run the page, select several values in the list, click on Go - the region displays the correct results.

    How does your scenario differ? Because as described above, it works for me...
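
    For what it's worth, the likely cause of the ORA-01722: with several items selected, :P1_ID holds a colon-delimited string such as '1:2:3', and anything that lets Oracle implicitly convert that string to a NUMBER fails. A minimal sketch, reusing the t_table names from the example above:

    -- :P1_ID = '1:2:3' when several items are selected.
    -- Implicit conversion of '1:2:3' to NUMBER raises ORA-01722:
    SELECT t_id, t_code FROM t_table WHERE t_id = :P1_ID;

    -- Explicit TO_CHAR keeps the whole comparison in VARCHAR2, so it works:
    SELECT t_id, t_code
    FROM t_table
    WHERE INSTR(':'||TO_CHAR(:P1_ID)||':', ':'||TO_CHAR(t_id)||':') > 0;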

    Cheers

    Ben
    http://www.munkyben.WordPress.com
    Don't forget to mark the answers useful or correct ;)

  • Changing a form to join multiple tables in SQL...

    I need to be able to join tables in the detail section of my master-detail form. Since I have more than three tables to join, I think APEX needs to use "ROWID" to track the primary key of the driving table. When I try to substitute my own join for the basic SQL generated for the report, I get an "UNABLE TO PARSE" error. If I remove "ROWID" from the statement, it is fine. Here's the statement APEX generated:

    Select
    "ROWID",
    "PO_ID",
    "PO_DET_ID",
    "CHANGE_ORDER",
    "START_DATE",
    "END_DATE",
    "PO_DET_AMT",
    "SCOPE",
    "PO_STATUS",
    "WBS"
    from "#OWNER#"."PO_DETAILS"
    where "PO_ID" = :P240_PO_ID

    However, I need the statement to be as follows:

    Select "ROWID",
    po_details.po_id,
    po_details.po_det_id,
    po_details.change_order,
    po_details.start_date,
    po_details.end_date,
    po_details.po_det_amt,
    po_details.scope,
    po_details.po_status,
    po_details.wbs
    from PO_DETAILS, PO, WBS, PROJECT
    where po_details.po_id = po.po_id
    and po.po_number = wbs.po_number_id
    and po_details.wbs = wbs.wbs_number_id
    and substr(wbs.wbs_number_id, 1, 6) = project.wbs_sequence
    and project.project_number = nvl(:F101_FPC_NUMBER, project.project_number)
    and po_details.po_id = :P240_PO_ID

    Then, I get the following ORACLE error:

    • Query cannot be parsed within the builder. If you believe your query is syntactically correct, check the generic "columns" checkbox below the region source to proceed without parsing. ORA-00918: column ambiguously defined

    I'm guessing that the "column ambiguously defined" is ROWID.

    The SELECT part of the query works fine if I remove ROWID, but then I cannot update the table when the user needs to change the data - I get a user-defined exception.

    I am new to APEX, so no doubt there are certain aspects that I'm missing.
    Any tips?

    So there is no way to use a join of multiple tables in APEX?

    Of course you can. Have you tried it again (using PO_DETAILS.ROWID in the select statement)?

    I tried with a form on EMP and then modified the query to:

    select
    e."ROWID",
    e."EMPNO",
    e."ENAME",
    e."JOB"
    from "#OWNER#"."EMP" e, dept d
    where e.deptno = d.deptno
    and d.deptno = :P14_DEPTNO
    

    IMHO, it's simply more transparent if you use a view as the source. You select from a view, and you update that same view.
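
    A minimal sketch of the view idea (names are illustrative; whether the view is directly updatable depends on Oracle's key-preserved table rules, otherwise an INSTEAD OF trigger is needed):

    -- Expose the driving table's ROWID through the join view,
    -- then point the APEX form at the view instead of the raw join.
    CREATE OR REPLACE VIEW po_details_v AS
    SELECT pd.ROWID AS po_det_rowid,
           pd.po_id,
           pd.po_det_id,
           pd.wbs
    FROM   po_details pd, po, wbs
    WHERE  pd.po_id = po.po_id
    AND    pd.wbs = wbs.wbs_number_id;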

    Published by: InoL on May 14, 2013 13:49

  • SQL+ - MULTI-TABLE QUERY PROBLEM

    HI ALL,

    ANY SUGGESTION PLEASE?

    SUB: SQL+ - MULTI-TABLE QUERY PROBLEM


    SQL + QUERY DATA:
    -----------
    SELECT PATIENT_NUM, PATIENT_NAME, HMTLY_TEST_NAME, HMTLY_RBC_VALUE,
    HMTLY_RBC_NORMAL_VALUE, DLC_TEST_NAME, DLC_POLYMORPHS_VALUE,
    DLC_POLYMORPHS_NORMAL_VALUE
    FROM PATIENTS_MASTER1, HAEMATOLOGY1, DIFFERENTIAL_LEUCOCYTE_COUNT1
    WHERE PATIENT_NUM = HMTLY_PATIENT_NUM AND PATIENT_NUM = DLC_PATIENT_NUM
    AND PATIENT_NUM = &PATIENT_NUM;
    -----------
    RESULT:

    & PATIENT_NUM = 1
    no rows selected
    ---------
    & PATIENT_NUM = 2
    no rows selected
    ------------
    & PATIENT_NUM = 3
    PATIENT_NUM 3

    PATIENT_NAME KKKK

    HMTLY_TEST_NAME HEMATOLOGY

    HMTLY_RBC_VALUE 4
    HMTLY_RBC_NORMAL_VALUE 4.6 - 6.0

    DLC_TEST_NAME DIFFERENTIAL LEUKOCYTE COUNT

    DLC_POLYMORPHS_VALUE 60

    DLC_POLYMORPHS_NORMAL_VALUE 40-65

    -------------
    -------------

    EXPECTED RESULT:

    & PATIENT_NUM = 1

    PATIENT_NUM 1

    PATIENT_NAME BBBB

    HMTLY_TEST_NAME HEMATOLOGY

    HMTLY_RBC_VALUE 5
    HMTLY_RBC_NORMAL_VALUE 4.6 - 6.0

    -----------

    & PATIENT_NUM = 2

    PATIENT_NUM 2

    PATIENT_NAME GEORGE

    DLC_TEST_NAME DIFFERENTIAL LEUKOCYTE COUNT

    DLC_POLYMORPHS_VALUE 42

    DLC_POLYMORPHS_NORMAL_VALUE 40-65
    ---------------
    & PATIENT_NUM = 3
    PATIENT_NUM 3

    PATIENT_NAME KKKK

    HMTLY_TEST_NAME HEMATOLOGY

    HMTLY_RBC_VALUE 4
    HMTLY_RBC_NORMAL_VALUE 4.6 - 6.0

    DLC_TEST_NAME DIFFERENTIAL LEUKOCYTE COUNT

    DLC_POLYMORPHS_VALUE 60

    DLC_POLYMORPHS_NORMAL_VALUE 40-65
    ----------------------------

    I HAVE 4 CLINICAL LABORATORY TABLES FOR DATA ENTRY, AND NEED A REPORT SHOWING ONLY THE TESTS CARRIED OUT FOR A PARTICULAR PATIENT.

    TABLE1:PATIENTS_MASTER1
    COLUMNS: PATIENT_NUM, PATIENT_NAME,

    VALUES:
    PATIENT_NUM
    1
    2
    3
    4
    PATIENT_NAME
    BENAMER
    GIROT
    KKKK
    PPPP
    ---------------
    TABLE2:TESTS_MASTER1
    COLUMNS: TEST_NUM, TEST_NAME

    VALUES:
    TEST_NUM
    1
    2
    TEST_NAME
    HEMATOLOGY
    DIFFERENTIAL LEUKOCYTE COUNT
    -------------

    TABLE3:HAEMATOLOGY1
    COLUMNS:
    HMTLY_NUM, HMTLY_PATIENT_NUM, HMTLY_TEST_NAME, HMTLY_RBC_VALUE, HMTLY_RBC_NORMAL_VALUE

    VALUES:
    HMTLY_NUM
    1
    2
    HMTLY_PATIENT_NUM
    1
    3
    HMTLY_TEST_NAME
    HEMATOLOGY
    HEMATOLOGY
    HMTLY_RBC_VALUE
    5
    4
    HMTLY_RBC_NORMAL_VALUE
    4.6 - 6.0
    4.6 - 6.0
    ------------

    TABLE4:DIFFERENTIAL_LEUCOCYTE_COUNT1
    COLUMNS: DLC_NUM, DLC_PATIENT_NUM, DLC_TEST_NAME, DLC_POLYMORPHS_VALUE, DLC_POLYMORPHS_

    NORMAL_VALUE,

    VALUES:
    DLC_NUM
    1
    2
    DLC_PATIENT_NUM
    2
    3
    DLC_TEST_NAME
    DIFFERENTIAL LEUKOCYTE COUNT
    DIFFERENTIAL LEUKOCYTE COUNT
    DLC_POLYMORPHS_VALUE
    42
    60
    DLC_POLYMORPHS_NORMAL_VALUE
    40-65
    40-65
    -----------------


    Thank you
    RCS
    E-mail:[email protected]
    --------

    I think you want an OUTER JOIN

    SELECT PATIENT_NUM, PATIENT_NAME, HMTLY_TEST_NAME, HMTLY_RBC_VALUE,
     HMTLY_RBC_NORMAL_VALUE, DLC_TEST_NAME, DLC_POLYMORPHS_VALUE,
     DLC_POLYMORPHS_NORMAL_VALUE
    FROM PATIENTS_MASTER1, HAEMATOLOGY1,  DIFFERENTIAL_LEUCOCYTE_COUNT1
    WHERE PATIENT_NUM = HMTLY_PATIENT_NUM (+)
    AND PATIENT_NUM = DLC_PATIENT_NUM (+)
    AND PATIENT_NUM = &PATIENT_NUM;
    
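    The same outer join in ANSI syntax, a sketch equivalent to the query above (it also sidesteps the (+) marks getting mangled into emoticons):

    SELECT PATIENT_NUM, PATIENT_NAME, HMTLY_TEST_NAME, HMTLY_RBC_VALUE,
     HMTLY_RBC_NORMAL_VALUE, DLC_TEST_NAME, DLC_POLYMORPHS_VALUE,
     DLC_POLYMORPHS_NORMAL_VALUE
    FROM PATIENTS_MASTER1
    LEFT OUTER JOIN HAEMATOLOGY1 ON PATIENT_NUM = HMTLY_PATIENT_NUM
    LEFT OUTER JOIN DIFFERENTIAL_LEUCOCYTE_COUNT1 ON PATIENT_NUM = DLC_PATIENT_NUM
    WHERE PATIENT_NUM = &PATIENT_NUM;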

    Published by: shoblock on November 5, 2008 12:17
    the outer join marks turned into stupid emoticons or something. trying again

  • Properties of the database transaction object?

    db.transaction(function(tx) {
        tx.executeSql("SELECT stuff FROM table1", [], dbSuccess, dbError);
    });
    
    function dbSuccess(tx, r) {
        // what can I do with tx here?
    }
    

    The SQLTransaction object, as passed to the executeSql success and error callbacks, seems to have no properties or methods other than executeSql.  How can I retrieve the SQL command that was executed (for example)? Something like:

    function dbSuccess(tx, r) {
        alert(tx.sqlStatement + " completed OK! - rows affected: " + r.rowsAffected);
    }
    

    The r.rowsAffected bit works fine.  But the tx argument seems to be useless.

    If something along the lines of the above is not possible, what is the point of the first callback argument?

    The tx argument lets you chain statements together into a single transaction, which is extremely useful, although in many cases you will probably not need it.

    However, because it is JavaScript, you could add properties as you go. For example, you could add a "lastSQL" property and set it to your SQL whenever you call the executeSql member function.
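
    A minimal sketch of that idea (the helper name and the lastSQL property are made up; only executeSql itself is part of the Web SQL API):

    // Record the statement on the transaction object, then delegate to executeSql.
    function executeSqlLogged(tx, sql, args, onSuccess, onError) {
        tx.lastSQL = sql;  // custom property tacked onto the transaction object
        tx.executeSql(sql, args, onSuccess, onError);
    }

    db.transaction(function(tx) {
        executeSqlLogged(tx, "SELECT stuff FROM table1", [], function(tx, r) {
            alert(tx.lastSQL + " completed OK! - rows affected: " + r.rowsAffected);
        }, dbError);
    });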

  • Hot SvMotion from large SQL databases

    Greetings,

    I had a few questions about Storage vMotion of a large SQL Server VM. The VMDK is around 2 TB, and the svMotion will take about 10 hours. I believe, according to the white paper on how vMotion works, that the changes made to the virtual machine while it is being moved are captured through a bitmap file.

    I just believe in the magic that is vMotion, but wanted to get a better understanding of what is happening under the hood.

    1. What is this bitmap file, and where are the changes stored?

    2. How are the vMotion and bitmap copy taken? Is it essentially a point-in-time backup copy of the VM/storage, with incremental backups of the changes once the vMotion starts? Are all the incrementals then applied to the full copy once the VM/storage arrives at its destination?

    3. Is there _any_ loss of any SQL transaction over this 10-hour period?

    4. If there are transaction drops, hash verification failures, etc., how does VMware detect and correct these errors?

    5. Is there anything that is regarded as too massive for live vMotion? An example might be a SQL Server with 8 GB of RAM and 4 vCPUs, with 2 TB of data doing 600 IOPS.

    6. Is vMotion possible over a WAN? If so, what are the speed and latency requirements?

    Thank you in advance for your help!

    Tony

    Welcome to the community-

    1. What is this bitmap file, and where are the changes stored?

    For vMotion there is no bitmap file - the machine is suspended, its memory is copied to the target host, and the virtual machine is started on the target host.

    With Storage vMotion, a snapshot is taken of the virtual disk, which means the live writes to the virtual disk are redirected to a snapshot file that records the delta changes to the virtual disk.

    2. How are the vMotion and bitmap copy taken? Is it essentially a point-in-time backup copy of the VM/storage, with incremental backups of the changes once the vMotion starts? Are all the incrementals then applied to the full copy once the VM/storage arrives at its destination?

    As I said, vMotion uses no bitmap - while Storage vMotion creates a snapshot of the virtual disk that records any changes made to the disk - it does not take a backup of the disk.

    3. Is there _any_ loss of any SQL transaction over this 10-hour period?

    NO.

    4. If there are transaction drops, hash verification failures, etc., how does VMware detect and correct these errors?

    VMware does not do any such checking, as far as I know.

    5. Is there anything that is regarded as too massive for live vMotion? An example might be a SQL Server with 8 GB of RAM and 4 vCPUs, with 2 TB of data doing 600 IOPS.

    Not for vMotion or SvMotion, as far as I know - the issue you would have with SvMotion and high IOPS is the growth of the snapshot file.

    6. Is vMotion possible over a WAN? If so, what are the speed and latency requirements?

    Yes, vMotion can be done over the WAN, but the latency requirements are rather tight - 10 ms round-trip latency, with Enterprise Plus licensing.

  • How to configure a VM for SQL 2008?

    I'm running ESXi 4 on a Dell R710, booting from SD card, with RAID 10. I have 12 GB of RAM (soon to go to 48, since this server was originally going to be something else) with dual quad-cores. I am running WSUS as a virtual machine and SQL 2008, all 64-bit, but SQL currently only uses one virtual processor. All of this is in the same datastore. I have another datastore on the same server whose 2 drives are RAID 1, but I have not used it yet. Everything is contained in one box and will stay that way.

    1. Can SQL and the virtual machines that use it be located in the same datastore, or am I better off putting them on different RAID sets?

    2. I want to add SharePoint, which also uses SQL 2008 - should that be another instance of SQL 2008?

    3. How much RAM (assuming I go to 48 gigs) and how many processors should I allocate?

    Thank you

    Gary

    Gary,

    without knowing the workload, it is not easy to answer your questions.

    However, we will try to find answers for you.

    Hardware:

    - Ensure that your hardware is listed on the HCL. (Server, RAID controller, ...)

    - For optimal performance, you should use SAS HDDs and a battery-backed write cache. A write cache - in my opinion - is essential!

    Licensing:

    - You can save or lose a lot of money through licensing.

    - With 1 Windows Server Enterprise license (2003 R2 and up), MS grants you the right to run up to 4 virtual machines on 1 physical server.

    - For SQL 2008, take a look at the MS licensing. Depending on the version, you will need to purchase a license per vCPU.

    Installation:

    - Make sure you configure your VMFS datastores with the desired block size (1-8 MB) for the maximum VMDK size you expect.

    - If you use Windows Server 2003, be sure to align the virtual machine's partitions!

    - Read the MS best practices for SQL installations. (Alignment, NTFS partition block size, ...)

    Now we will answer your questions:

    1.) Depending on the workload and the SQL recovery model (simple, bulk-logged, full), it may make sense to have the VMDKs on different logical drives (spindles). With the hardware you have, I recommend creating up to 3 virtual disks (OS, SQL data, transaction logs). Put the OS and data on the RAID 10 and the logs on the RAID 1.

    2.) Well, again it depends on the workload. If one SQL Server is enough, you can save the money for a second SQL license.

    3.) The recommendation is to assign only as many vCPUs and as much RAM as you actually need. You can always adjust the settings afterwards if necessary. Keep the SQL Server licensing in mind when adding additional vCPUs.

    André

    PS: I personally do not trust SD cards, except in my camera. I always install ESX(i) on the hard disk!

  • Foglight Transaction Log full rule

    Hello

    I have searched through Foglight and the documentation, but cannot find a rule that alerts on SQL transaction log fill.  We are looking for an alert on the SQL cartridge that acts much like the capacity rule on the Windows OS cartridge, so that it can raise Warning at 80%, Critical at 90%, and Fatal at 98%.  Is there a rule like this on the SQL cartridge, or would it have to be custom?

    P

    Could a custom one be added, similar to the Sybase TransLog rule:

    (#tl_space_used_pct # > = f4registry ("DatabaseSpaceTLUsedWarning")) & (scope.get ("temp_db") == "NO") & (scope.get ("auto_extend") == "NO") & (scope.get ("alert_on") == 'YES')

    Hi Paul,

    There are 2 rules for this -

    1 - Log File dynamic growths remaining (fires when the log file can no longer grow due to file system capacity)

    2 - Log File Used percent - behaves as you describe above; the defaults are 75 and 85% full (this is as of 5.6.2)

    Kind regards

    Darren

  • App crash: "All WebKit threads for this process have been shut down"

    Hi bbdevs,

    I am developing a WebWorks app that uses the

    watchPosition()
    

    function.

    While the watchPosition function is active, my app crashes.

    This happens on a Z10 where my app is installed in debug mode.

    I am using WebWorks SDK 1.0.4.11 and bbUI.js 0.9.6.131.

    The watchPosition() function calls a function where I do a small database insert into the device's WebSQL database.

    This also works well while the app is backgrounded.

    Before the application crashes, the last thing that happens is a successful insert of data into the database. That's what my console.log says.

    For better debugging, I installed the Momentics IDE (from the Cascades tools) and attached to the device via SSH.

    With the command 'slog2info w', I can see much more than in my Chrome browser (thanks to ekke for this great tip!).

    This is what happens in the slog before the app crashes:

    Apr 05 14:06:15.042      webkit_launcher.233201698               webkit      0  All WebKit threads for this process have been shut down.
    Apr 05 14:06:15.063      webkit_launcher.233201698               webkit      0  WebKit graphics for this process has been shut down.
    Apr 05 14:06:15.134      webkit_launcher.233201763               webkit      0  Received unexpected connection death 1073741825 from parent process! Exiting...
    

    A few minutes earlier, I found this in the log too:

    Apr 05 14:04:58.440      webkit_launcher.233201698               webkit      0  Thread 5: responding to low memory
    Apr 05 14:04:58.440      webkit_launcher.233201698               webkit      0  Thread 7: responding to low memory
    Apr 05 14:04:58.440      webkit_launcher.233201698               webkit      0  Thread 1: responding to low memory
    Apr 05 14:04:58.441      webkit_launcher.233201763               webkit      0  Thread 4: responding to low memory
    Apr 05 14:04:58.441      webkit_launcher.233201763               webkit      0  Thread 1: responding to low memory
    Apr 05 14:04:58.441      webkit_launcher.233201763               webkit      0  Thread 6: responding to low memory
    Apr 05 14:04:58.441      webkit_launcher.233201763               webkit      0  Thread 14: responding to low memory
    Apr 05 14:04:58.446      webkit_launcher.233201698               webkit      0  Thread 4: responding to low memory
    Apr 05 14:04:58.619      webkit_launcher.233201763               webkit      0  Thread 3: responding to low memory
    

    For more information:

    This is my SQL transaction, which is called by the watchPosition function:

    database = window.openDatabase('mydbname', '', 'my Database', 1 * 1024 * 1024);
                try {
                    database.transaction(
                        function (tx) {tx.executeSql('INSERT INTO tbl_gps (lon, lat, alt, acc, altacc, head, speed) VALUES (?, ?, ?, ?, ?, ?, ?)',
                            [posData.coords.longitude, posData.coords.latitude, posData.coords.altitude, posData.coords.accuracy, posData.coords.altitudeAccuracy, posData.coords.heading, posData.coords.speed],
                            function (tx, res) {
                                onInsertSuccess = true;
                                console.log('Data insert into Table GPS Data Successfully');
                            },
                            function (tx, err) {
                            onInsertSuccess = false;  // flag the failed insert
                                showToast("ERROR - DB INSERT Trail GPS Data - code: "+err.code+", message: "+err.message, "OK", 60000);
                            });
                        }
                    );
                 } catch (err) {
                     console.log('There was an Error during database transaction! '+err.message);
                 }
    

    Does anyone have an idea how to resolve this problem? I'd be happy about any comment that brings me a little bit forward in making this work.

    Thanks in advance to all those who have read this thread... and thanks a lot for any tip!

    Lars.

    Hi Adam,

    Thank you for your message. I tested a lot last weekend and after hours and hours, I figured out more details on this memory problem.

    First of all:
    It was not a database problem :-)
    The database transaction just happens to be the last asynchronous thing that runs. The reason for the app crash is something else.

    I also draw a line on the map with each location update.  I use OpenLayers for this feature. When the application is in the background, it seems there is a problem with OpenLayers.

    I found a solution for this problem:

    I store my line data in a table while the application is backgrounded - when it returns to the foreground, I take the data from the table and draw the line again.

    If I get the error again - hopefully not, of course - I'll open an issue in JIRA.

    Thanks also to luca for the brainstorming!

  • Cisco Unity hard drive full

    Hello

    Running Cisco Unity 4.x

    with Exchange 2000 on the same box.

    I have two drives, but everything is installed on the C:\ drive.

    Now the C:\ drive is getting full.

    I have drive d:\ with 10 GB of free space.

    How can I use drive D:\?

    Regards

    M

    Well, 4.x is not real specific - there are 4.0(1) through 4.0(4) - many differences between these versions.

    That said, there are a few places to start here. The first is the Unity maintenance guide here:

    http://www.Cisco.com/en/us/products/SW/voicesw/ps2237/products_maintenance_guide_book09186a0080228417.html

    You can move the log directories to the D drive (covered in the guide above) - also consider managing mailbox sizes for your users, as well as turning on circular logging if you take regular backups of the Exchange database (this may be what is chewing up your space).

    If Exchange is installed on C (I assume it is, since you say 'all' is on C), you can then follow this tech note to move the message store and transaction logs to another drive:

    http://www.Cisco.com/en/us/partner/products/SW/voicesw/ps2237/products_tech_note09186a00800c6f5f.shtml

    which will save a lot of space on C if the logging items above are not sufficient.

    Finally, you can move the SQL transaction logs and files to the D drive - you will find details on this in the installation guide:

    http://www.Cisco.com/en/us/products/SW/voicesw/ps2237/products_installation_guide_chapter09186a008022b8af.html#wp1664607

    My guess, however, is that if you go through the Unity maintenance guide and check your Exchange journaling situation, that will probably be enough to free up disk space on your system - if you do not use circular logging and you don't take regular backups of Exchange using BackupExec or something along those lines, the Exchange transaction log files will grow quite large and chew up your space. Either way, the procedures here should get you from A to B, freeing up space on C.

  • Accessing SQLite DB on BB OS 6

    Hello

    I had already written my WebWorks application using Google Gears for SQLite DB access. Recently, I decided to use HTML5 instead, so my application would run on OS 6. After several laborious hours having to rewrite the application (and having to rethink the flow around HTML5's asynchronous DB access), I finally got the application running on OS 5 by using the html5_init.js file, with no Google Gears in my javascript file. However, my application still refuses to work normally on OS 6. I managed to narrow one of the issues down to something like this:

    db.transaction(function(tx){
        tx.executeSql('some_sql', [], function(tx){
            var request2 = new XMLHttpRequest();
            request2.open("GET", 'some_url');
            request2.setRequestHeader("Content-type", "text/html");
            request2.onreadystatechange = function() {
                tx.executeSql('some_sql');
            };
            request2.send();
        });
    });
    

    Now, here's the problem: on the second tx.executeSql() call, the tx object is recognized as a valid SQL transaction object, but the method call fails with the following exception:

    INVALID_STATE_ERROR: DOM Exception 11

    What could I be doing wrong?

    Hello

    I was able to chain database transactions with the callback functions, as well as with the asynchronous function of an AJAX call. It took a little while to get my head around it, but I got there in the end.

    I tend to use the callback of the actual db.transaction for the next db call. (Not the callback function of the transaction itself... it's confusing!)

    for example

    db.transaction(function(tx) {
        tx.executeSql('some_sql', [], tx_transaction_ok, tx_error_handler);
    });

    function tx_transaction_ok(tx) {
        var request2 = new XMLHttpRequest();
        request2.open("GET", 'some_url');
        request2.setRequestHeader("Content-type", "text/html");
        request2.onreadystatechange = function() {
            db.transaction(function(tx) {
                tx.executeSql('some_sql', [], tx_transaction_ok);
            });
        };
        request2.send();
    }

    (I may have missed a few brackets etc.)

    Have you tried this method? This way the transaction is finished before the db object is used again.

    Cheers

    Andrew

  • What is the difference between VDP and VSC? Which should I use?

    Hello

    It turns out we have a multitude of backups running for our virtual machine environment. We run vCenter 6.0, VDP 6.1, VSC 6.2P1, SnapManager for SQL, and SnapMirror jobs set up in OnCommand 3.1.2.

    I am going crazy trying to work out why we need all these products running backups and SnapMirroring data. Bottom line, our SnapMirrors for one or two volumes take f o r e v e r... And that impacts my SRM tests (when using data replication).

    I'm trying to clean up, pare down, consolidate, and re-schedule all these solutions. And in the middle of my panic, I thought: "Don't these all do essentially the same thing? Take snapshots and SnapMirror data?"

    So, my question to the community is: what is the difference between these solutions, or do they all essentially do the same thing? Are they fighting each other, vying for bandwidth and tripping over each other to achieve the same thing? Right now my plan is to suspend all jobs for everything except VSC and SnapManager for SQL, and do it all from VSC. This means suspending the jobs on the NetApp side.

    I'm not the admin for VDP, my co-worker is, and he installed it independently of my work. Is it still necessary?

    What brought me to all of this, besides the failure of SRM to transfer data (SnapMirror on demand within a reasonable time), was noticing that the deltas for our snapshot jobs are always huge at certain times of the day. For example, our biggest data hog has a nightly snapshot at midnight, and it is constantly 71 GB. This leads me to believe it is caused by a job, because it is predictable. The net effect is that SnapMirror jobs take days, lag is several days, and transfers never really stop.

    We plan to upgrade the pipe between our protected site (HQ) and our DR site (branch) from 50 Mbps to 100 Mbps. We have 100 out of our headquarters and 50 at our satellite office right now, so we want both to be 100. Maybe that would solve the issue, but I would still like this to be reasonably efficient.

    If you need more information, let me know and I will fill you in, or at least make something up so I look like I know what I'm talking about.

    TIA!

    Steve

    The reason you do SnapMirror replication is DR. It is not a backup solution. If you stop all your SnapMirrors in OnCommand Unified Manager and your storage array, and production goes down, you no longer have DR.

    What VSC does is create snapshots of your NetApp volumes, which you can use for short-term backup/recovery. Snapshots are not a real backup solution, however. If someone deletes a volume on the NetApp, the snapshots of that volume will not save you, because they no longer exist. But they are still useful in many scenarios, such as Microsoft's Previous Versions option for CIFS.

    SnapManager is there to take application-consistent snapshots on demand for software such as MS SQL in your case. Plain snapshots of the database volumes do not give you this, as they do not trigger VSS quiescing. Plus, SnapManager for SQL gives you point-in-time recovery for your databases via SQL transaction logs, which none of the solutions above provide. However, since SnapManager creates snapshots, you must make sure you have a dedicated LUN for your SQL VM, otherwise you will be taking snapshots of the same volumes twice.

    And then VDP makes backups of your environment. It supports long-term retention, as snapshots in general are not kept for more than a week or a month. Make sure that the data is stored on a separate volume from where your virtual machines are, or ideally at a destination that is external to your storage array.

    Hope that clears things up a bit. The bottom line is, what you have is perfectly fine.

  • vCenter service keeps stopping after a while

    Hello

    We use vCenter Server 5.0. In recent days, we have observed that the vCenter service automatically stops after a while, causing the backup to fail.

    We investigated this in the Event Viewer and found an event saying the SQL transaction log is full.

    Can someone help me get this resolved?

    Thanks in advance.

    The steps below should help with the resolution. Please check.

    1. Connect to the Microsoft SQL 2005/2008 Server as an administrator.
    2. Open SQL Management Studio.

      Note: If you are using MS SQL 2005, check that you are using MS SQL Management Studio 2005. Alternatively, if you use MS SQL 2008, check that you are using MS SQL Management Studio 2008.

      Right click on the database used by VirtualCenter.

    3. Click on Properties.
    4. Click the Options link.
    5. Set the recovery model to Simple, then click OK.
    6. Once this operation is finished, right click on the database again.
    7. Click on Tasks > Shrink > Files.
    8. In the Shrink Database window, select the file type "Log". The file name appears in the filename drop-down as databasename_log. The window shows the space used compared to the allocated space; after setting the recovery model to Simple, the majority of the transaction log space can be released.
    9. Make sure that the radio button to release unused space is selected.
    10. Click OK in this window to reduce the transaction log.
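
    The same operation as a hedged T-SQL sketch (VCDB and VCDB_log are placeholders; the vCenter database and logical log file names vary per install - check sys.database_files for the logical name):

    ALTER DATABASE VCDB SET RECOVERY SIMPLE;
    DBCC SHRINKFILE (VCDB_log, 100);  -- shrink the log file down to ~100 MB
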
  • Calling SQLEXEC when a replicat finishes a transaction?

    I'm trying to find a way to call a SQLEXEC when a replicat finishes processing a SQL transaction. I looked through the documentation and found things that are close, but not exact.

    My situation is:

    An insert SQL makes 38,001 inserts on the source

    Do a commit on the source

    GoldenGate sends data through

    20,000 records arrive

    all 38,001 records are delivered

    <<<<< here I want to call a SQLEXEC package only once at the end >>>>>

    GoldenGate commits

    I want the replicat to keep running continuously (ONEXIT does not seem appropriate, for example, because it only runs on graceful shutdown), but to fire a SQLEXEC per commit in the replicat.

    All ideas are useful. Thank you!

    Here is how I could do this (tested).

    Replicat parameters -

    ALLOWDUPTARGETMAP

    MAP TEST.*, TARGET TEST.*;

    MAP TEST.TEST, TARGET TEST.TEST,

    SQLEXEC (ID WRITE_TO_LOG, SPNAME TEST.WRITE_TO_LOG, AFTERFILTER),

    FILTER (@VALONEOF (@GETENV ('GGHEADER', 'TRANSACTIONINDICATOR'), 'WHOLE', 'END')),

    EVENTACTIONS (IGNORE RECORD);

    So if you want this to fire for all the tables, you can map the schema with a wildcard twice, using the ALLOWDUPTARGETMAP parameter.  I tested it as follows.

    CREATE TABLE TEST (IDNO NUMBER, YCOL1 VARCHAR2(20), NCOL2 VARCHAR2(20), YCOL3 VARCHAR2(20), NCOL4 VARCHAR2(20), UPDTIME TIMESTAMP);

    ALTER TABLE TEST ADD PRIMARY KEY (IDNO);

    CREATE TABLE LOG_TABLE (COL1 VARCHAR2 (50));

    (Add the trandata too)

    create or replace PROCEDURE write_to_log IS

    BEGIN

    INSERT INTO LOG_TABLE (SELECT COUNT(*) FROM TEST);

    END;

    /

    The records themselves get inserted/updated etc. via the first MAP.  The second MAP then fires and executes the SQLEXEC only if the record's transaction indicator is 0x02 (end-of-transaction record of a multi-record transaction) or 0x03 (single-record transaction).  The record is ignored after the SQLEXEC runs.

    In my test case, the result is that if I run-

    insert into test values (1, 'Hello', 'Hello', 'Hello', 'Hello', SYSTIMESTAMP);

    insert into test values (2, 'Hello', 'Hello', 'Hello', 'Hello', SYSTIMESTAMP);

    insert into test values (3, 'Hello', 'Hello', 'Hello', 'Hello', SYSTIMESTAMP);

    On the target, I see the 3 rows in the TEST table, plus I see the value "3" inserted into LOG_TABLE, proving that the procedure ran at the *END* of the transaction.

    Does that help?
