Query performance optimization

Hi, I need to know if there is a better way to write this query.

The REVIEWS table has this structure:

EXAM_DATE  DATE
SUBJECT    VARCHAR2 (50)
GRADE      NUMBER

The idea is to get exam statistics.

SELECT exam_date,
       subject,
       (SELECT COUNT (1)
          FROM reviews
         WHERE grade IN (9, 10)
           AND subject = exa.subject
           AND exam_date = exa.exam_date) outstanding,
       (SELECT COUNT (1)
          FROM reviews
         WHERE grade IN (4, 5, 6, 7, 8)
           AND subject = exa.subject
           AND exam_date = exa.exam_date) approved,
       (SELECT COUNT (1)
          FROM reviews
         WHERE grade IN (0, 1, 2, 3)
           AND subject = exa.subject
           AND exam_date = exa.exam_date) failed
  FROM reviews exa
 GROUP BY exam_date, subject;

Thank you!!

Hello

If all the data are in the same table, you shouldn't need any subqueries.

Maybe something like this:

SELECT exam_date, subject
     , COUNT (CASE WHEN grade IN (9, 10)         THEN 1 END) AS outstanding
     , COUNT (CASE WHEN grade IN (4, 5, 6, 7, 8) THEN 1 END) AS approved
     , COUNT (CASE WHEN grade IN (0, 1, 2, 3)    THEN 1 END) AS failed
FROM reviews
GROUP BY exam_date, subject
;

If you would care to post a small example of data (CREATE TABLE and INSERT statements) and the results desired from these sample data, then I could test this.
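For example, a minimal sketch of that kind of test case (the table layout comes from the post above; the sample rows and dates are invented values):

CREATE TABLE reviews
  (exam_date  DATE,
   subject    VARCHAR2 (50),
   grade      NUMBER);

INSERT INTO reviews VALUES (DATE '2013-06-01', 'MATH',     9);  -- outstanding
INSERT INTO reviews VALUES (DATE '2013-06-01', 'MATH',     5);  -- approved
INSERT INTO reviews VALUES (DATE '2013-06-01', 'MATH',     2);  -- failed
INSERT INTO reviews VALUES (DATE '2013-06-01', 'HISTORY', 10);  -- outstanding
COMMIT;

With these rows, the query above should return one line per (exam_date, subject) pair: MATH with counts 1/1/1 and HISTORY with counts 1/0/0.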

See the FAQ Forum:

Re: 2. How can I ask a question in the forums?


Similar Questions

  • SQL query performance

    I have a table that logs user visits to pages on our website. The information is stored in a table with the following structure:

    VisitID (Index) | IDVisiteur (UniqueID) | VisitPage | VisitDate (Date/time)

    I need to get the IDVisiteur values for users who visited within a user-defined date range for a report that is to be written, and then get a count of the distinct visit days on which each user visited our website. I have a working query attached that gets me the result set I want, but it's _very_ slow. Query Analyzer shows that 84% of the cost is in table scans. I hope someone has a suggestion on how to optimize it. I am currently working on an MSSQL 8.0 server, so I have no access to the trunc() function I would prefer to use on the dates, but that's a minor inconvenience.

    Thank you
    -Daniel

    Quote:
    Posted by: Dan Bracuk
    Do you have an index on visitdate?

    Does visitdate contain real times, or are all the time parts 0:00? If they are all 00:00, you don't need the convert function. Otherwise, you might have better luck selecting all the data from your database and using Q of Q for the counts.

    Dan was right on this one. Looking at the table design, the index was absent. Once I added an index, my query performance improved dramatically; enough that I don't have a lot of worries any more. Thanks for the suggestion.

    -Daniel
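
    For reference, a minimal T-SQL sketch of the fix described above (the table, column, and variable names are assumptions, not the poster's actual schema):

    -- The index that was missing:
    CREATE INDEX IX_Visits_VisitDate ON Visits (VisitDate);

    -- Count distinct visit days per visitor; keeping VisitDate bare on the
    -- left-hand side of the predicate lets the index be used:
    SELECT IDVisiteur,
           COUNT(DISTINCT CONVERT(char(8), VisitDate, 112)) AS VisitDays
    FROM   Visits
    WHERE  VisitDate >= @StartDate
      AND  VisitDate <  DATEADD(day, 1, @EndDate)
    GROUP BY IDVisiteur;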

  • Does BLASTP_ALIGN query performance decrease as the reference table size increases?

    Newbie here.

    I'm using Oracle 11.2.0.3.

    I am currently running a cursor FOR loop that uses Oracle's BLASTP_ALIGN table function:

    FOR MyALIGN_TAB IN
    (
      SELECT a.query_string, h.aa_sequence target_string, t_seq_id, pct_identity,
             alignment_length, mismatches, positives, gap_openings, gap_list,
             q_seq_start, q_frame, q_seq_end, t_seq_start, t_seq_end, t_frame,
             score, expect
      FROM  (SELECT t_seq_id, pct_identity, alignment_length, mismatches, positives,
                    gap_openings, gap_list, q_seq_start, q_frame, q_seq_end,
                    t_seq_start, t_seq_end, t_frame, score, expect
               FROM TABLE (BLASTP_ALIGN (
                      (SELECT p_input_sequence query_string FROM DUAL),
                      CURSOR (SELECT gb_accession, aa_sequence
                                FROM human_db1.human_proteins),
                      1, -1, 0, 0, 'PAM30', .1, 10, 1, 2, 0, 0))
            ),
            (SELECT p_input_sequence query_string FROM DUAL) a,
            human_db1.human_proteins h
      WHERE UPPER (t_seq_id) = UPPER (h.gb_accession)
        AND gap_openings = 0
    )
    LOOP


    This initial query works relatively well (about 2 seconds) against a target table of approximately 20,000 records (referenced above as the HUMAN_DB1.HUMAN_PROTEINS table). However, if I point it at a target table that contains approximately 170,000 records, query performance degrades significantly, to about 45 seconds. The two tables have identical indexes.


    I was wondering if there are ways to improve the performance of BLASTP_ALIGN on large tables? There doesn't seem to be much documentation on BLASTP_ALIGN. I could find this (http://docs.oracle.com/cd/B19306_01/datamine.102/b14340/blast.htm), but it wasn't that useful.


    Any ideas would be greatly appreciated.



    In case anyone is interested... it turned out that the AA_SEQUENCE column in the following cursor: CURSOR (SELECT GB_ACCESSION, AA_SEQUENCE FROM HUMAN_DB1.HUMAN_PROTEINS) was a CLOB field. In my second target table, the corresponding column was VARCHAR2. One hypothesis is that BLASTP_ALIGN does a VARCHAR2 -> CLOB conversion internally. I changed the table to have a CLOB column and successfully ran BLASTP_ALIGN against the 170,000 records in about 8 seconds (not great, but better than 45).
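
    For anyone following along, a minimal sketch of that kind of column conversion (the table name is hypothetical; the thread's actual DDL was not posted):

    -- Replace the VARCHAR2 sequence column with a CLOB so BLASTP_ALIGN
    -- does not have to convert it internally on every row:
    ALTER TABLE my_target_proteins ADD (aa_sequence_clob CLOB);
    UPDATE my_target_proteins SET aa_sequence_clob = TO_CLOB(aa_sequence);
    COMMIT;
    ALTER TABLE my_target_proteins DROP COLUMN aa_sequence;
    ALTER TABLE my_target_proteins RENAME COLUMN aa_sequence_clob TO aa_sequence;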

    I will mark it as answered.

  • Query performance

    CURSOR c_exercise_list IS
           SELECT
                  DECODE(v_mfd_mask_id ,'Y',' ',o.opt_id) opt_id,
                  DECODE(v_mfd_mask_id ,'Y',' ',o.soc_sec) soc_sec,
                  P.plan_id plan_id, E.exer_id exer_id, E.exer_num,
                  DECODE(G.sar_flag, 0, DECODE(G.plan_type, 0, '1', 1, '2', 2, '3', 3, ' ', 4,'5', 5, '6', 6, '7', 7, '8', 8, '9', '0'), ' ') option_type,
                  TO_CHAR(G.grant_dt, 'YYYYMMDD') grant_dt, TO_CHAR(E.exer_dt, 'YYYYMMDD') exer_dt,
                  E.opts_exer opts_exer,
                  E.mkt_prc   mkt_prc,
                  E.swap_prc  swap_prc,
                  E.shrs_swap shrs_swap, decode(e.exer_type,2,decode(xe.cash_partial,'Y','A','2'),TO_CHAR(E.exer_type)) exer_type,
                  E.sar_shrs  sar_shrs,
                  NVL(ROUND(((xe.sar_shrs_withld_optcost - (e.opts_exer * g.opt_prc) / e.mkt_prc) * e.mkt_prc),2),0)+e.sar_cash sar_cash,
                  NVL(f.fixed_fee1,0) fixed_fee1,
                  NVL(f.fixed_fee2,0) fixed_fee2,
                  NVL(f.fixed_fee3,0) fixed_fee3,
                  NVL(f.commission,0) commission,
                  NVL(f.sec_fee,0)    sec_fee,
                  NVL(f.fees_paid,0)  fees_paid,
                  NVL(ct.amount,0)     cash_tend,
                  E.shrs_tend  shrs_tend, G.grant_id grant_id, NVL(G.grant_cd, ' ') grant_cd,
                  NVL(xg.child_symbol,' ') child_symbol,
                  NVL(xg.opt_gain_deferred_flag,'N') defer_flag,
                  o.opt_num opt_num,
                  --XO.new_ssn,
                  DECODE(v_mfd_mask_id ,'Y',' ',xo.new_ssn) new_ssn,
                          xo.use_new_ssn
                  ,xo.tax_verification_eligible tax_verification_eligible
                  ,(SELECT TO_CHAR(MIN(settle_dt),'YYYYMMDD') FROM tb_ml_exer_upload WHERE exer_num = E.exer_num AND user_id=E.user_id AND NVL(settle_dt,TO_DATE('19000101','YYYYMMDD'))>=E.exer_dt) AS settle_dt
                  ,xe.rsu_type  AS rsu_type
                  ,xe.trfbl_det_name AS trfbl_det_name
                  ,o.user_txt1,o.user_txt2,xo.user_txt3,xo.user_txt4,xo.user_txt5,xo.user_txt6,xo.user_txt7
                  ,xo.user_txt8,xo.user_txt9,xo.user_txt10,xo.user_txt11,
                  xo.user_txt12,
                  xo.user_txt13,
                  xo.user_txt14,
                  xo.user_txt15,
                  xo.user_txt16,
                  xo.user_txt17,
                  xo.user_txt18,
                  xo.user_txt19,
                  xo.user_txt20,
                  xo.user_txt21,
                  xo.user_txt22,
                  xo.user_txt23,
                  xo.user_dt2,
                  xo.adj_dt_hire_vt_svc,
                  xo.adj_dt_hire_vt_svc_or,
                  xo.adj_dt_hire_vt_svc_or_dt,
                  xo.severance_plan_code,
                  xo.severance_begin_dt,
                  xo.severance_end_dt,
                  xo.retirement_bridging_dt
                  ,NVL(xg.pu_var_price ,0) v_pu_var_price
                  ,NVL(xe.ficamed_override,'N') v_ficmd_ovrride
                  ,NVL(xe.vest_shrs,0) v_vest_shrs
                  ,NVL(xe.client_exer_id,' ') v_client_exer_id
                  ,(CASE WHEN xg.re_tax_flag = 'Y' THEN pk_xop_reg_outbound.Fn_GetRETaxesWithheld(g.grant_num, E.exer_num, g.plan_type)
                         ELSE 'N'
                     END) re_tax_indicator -- 1.5V
                  ,xe.je_bypass_flag
                  ,xe.sar_shrs_withld_taxes   --Added for SAR july 2010 release
                  ,xe.sar_shrs_withld_optcost --Added for SAR july 2010 release
            FROM
            (SELECT exer.* FROM exercise exer WHERE NOT EXISTS (SELECT s.exer_num FROM suspense s
                WHERE s.exer_num = exer.exer_num AND s.user_id = exer.user_id AND exer.mkt_prc = 0))E,
                grantz G,  xop_grantz xg, optionee o, xop_optionee xo, feeschgd f, cashtendered ct, planz P,xop_exercise xe
            WHERE
                  E.grant_num  = G.grant_num
            AND   E.user_id    = G.user_id
            AND   E.opt_num    = o.opt_num
            AND   E.user_id    = o.user_id
            AND   (G.grant_num = xg.grant_num(+) AND G.user_id=xg.user_id(+))
            AND   (o.opt_num   = xo.opt_num(+)   AND o.user_id=xo.user_id(+))
            AND   E.plan_num = P.plan_num
            AND   E.user_id = P.user_id
            AND   E.exer_num = f.exer_num(+)
            AND   E.user_id = ct.user_id(+)
            AND   E.exer_num = ct.exer_num(+)
            AND   E.user_id = ct.user_id(+)
            AND   E.exer_num=xe.exer_num(+)
            AND   E.user_id=xe.user_id(+)
            AND   G.user_id = USER
            AND NOT EXISTS (
                        SELECT tv.exer_num
                          FROM tb_xop_tax_verification tv--,exercise ex
                         WHERE tv.exer_num = e.exer_num
                           AND tv.user_id = e.user_id
                           AND tv.user_id = v_cms_user
                           AND tv.status_flag IN (0,1,3,4, 5)) -- Not Processed
            ;
    How can I tune this query's performance? Can anyone help me improve it? Thanks in advance.

    Edited by: BluShadow on February 21, 2013 08:14
    Corrected {noformat} tags. Please read {message:id=9360002} and learn how to post code correctly.

    956684 wrote:
    I get CPU cost: 458.50, time: 1542.90. Is there anything I can do to improve performance? There is no full table scan in the plan for the mentioned tables, and most of the column accesses are index unique scans. Can someone help me find a solution?

    That request reads like: "my car doesn't work, my car's color is gray. Can you solve this problem?"

    Please read the FAQ I already posted and follow the instructions.
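
    For what it's worth, a common way to capture an actual execution plan to post with a tuning question like this (generic DBMS_XPLAN usage, not taken from this thread):

    ALTER SESSION SET statistics_level = ALL;

    -- ... run the cursor's SELECT once here ...

    -- Display the last plan with actual row counts next to the estimates:
    SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY_CURSOR (NULL, NULL, 'ALLSTATS LAST'));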

  • Query performance tuning

    Hi gurus


    Perhaps this question is stupid, but I think this is the right forum for users to get information on such issues.

    I want to start learning performance optimization, but I do not know where to start. Can anyone suggest a path?

    Thanks in advance

    but it is big enough

    Performance optimization is also quite big.

    John

  • Does DIMINFO affect query performance?

    Hi all

    Can a well-defined DIMINFO in USER_SDO_GEOM_METADATA improve query performance?


    For all the tables in my system, the USER_SDO_GEOM_METADATA view looks like this:
    DIMINFO
    X; -2147483648; 2147483648; 5E-5
    Y; -2147483648; 2147483648; 5E-5
    Z; -2147483648; 2147483648; 5E-5




    Thanks to you all

    The simple answer is Yes - it provides an alternative and faster I/O path.

    The real question is whether it suits the data model and its use.

    So your question is similar to asking whether indexing a varchar2 column is good or not. The answer is "+it depends+".
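
    As an illustration, a hedged sketch of tightening DIMINFO to the data's real extent (the table name, column name, and bounds below are assumptions):

    UPDATE user_sdo_geom_metadata
    SET    diminfo = SDO_DIM_ARRAY (
             SDO_DIM_ELEMENT ('X', -180,   180, 5E-5),   -- real X extent, not the integer range
             SDO_DIM_ELEMENT ('Y',  -90,    90, 5E-5),
             SDO_DIM_ELEMENT ('Z', -1000, 5000, 5E-5))
    WHERE  table_name  = 'MY_GEOM_TABLE'
    AND    column_name = 'GEOMETRY';

    COMMIT;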

  • Poor query performance when joining CONTAINS to another table

    We just recently started evaluating Oracle Text for a search solution. We need to be able to search a table that can have over 20 million rows. Each user may have visibility to only a very small portion of those rows. The goal is to have a single Oracle Text index that covers all the searchable columns in the table (multi-column datastore) and provides a score for each search result, so that we can sort results in descending score order. What we see is that query performance from TOAD is extremely fast when we write a simple CONTAINS query against the Oracle Text indexed table. However, when we try to first reduce the rows that the CONTAINS query must search by using a join, we find that query performance degrades significantly.

    For example, we can find all the records that a user has access to in our base table with the following query:

    SELECT d.duns_loc
    FROM duns d
    JOIN primary_contact pc
    ON d.duns_loc = pc.duns_loc
    AND pc.emp_id = :employeeID;

    This query runs in < 100 ms and, in the example, returns close to 1,200 duns_loc primary key values.

    Our search query looks like this:

    SELECT SCORE(1), d.*
    FROM duns d
    WHERE CONTAINS (text_key, :search, 1) > 0
    ORDER BY SCORE(1) DESC;

    The :search value in this example will be 'highway'. The query returns 246K rows in about 2 seconds.

    2 seconds is good, but we should be able to get a much quicker response if the query did not have to search the entire table, right? Since each user can only 'view' records they are assigned to, if the search operation only had to scan a tiny, tiny percentage of the TEXT index, we should see faster (and more relevant) results. So we now write the following query:

    WITH subset
    AS
    (SELECT d.duns_loc
    FROM duns d
    JOIN primary_contact pc
    ON d.duns_loc = pc.duns_loc
    AND pc.emp_id = :employeeID
    )
    SELECT SCORE(1), d.*
    FROM duns d
    JOIN subset s
    ON d.duns_loc = s.duns_loc
    WHERE CONTAINS (TEXT_KEY, :search, 1) > 0
    ORDER BY SCORE(1) DESC;

    For reasons we have not been able to identify, this query actually takes longer to run than the sum of the times of its contributing parts: it takes more than 6 seconds. Neither we nor our DBA can understand why this query runs worse than the wide-open search. The open search is not ideal, because the query ends up returning records the user does not have access to view.

    Has anyone ever encountered something like this? Any suggestions on what to look at or where to go next? If anyone wants more information to help diagnose this, let me know and I'll be happy to post it here.

    Thank you!!

    Since you're using two tables, you will probably get better performance with an index that uses a section group and a user_datastore that uses a procedure. That way it can retrieve all the data with a simple query and hit a single index. Please see the demo below. Indexing may be slower, but searching should be faster. If you have your primary and foreign keys in place and current statistics before you create the index, it should speed up indexing.

    SCOTT@orcl_11gR2> -- tables:
    SCOTT@orcl_11gR2> CREATE TABLE duns
      2    (duns_loc       NUMBER,
      3       business_name  VARCHAR2 (15),
      4       business_name2 VARCHAR2 (15),
      5       address_line   VARCHAR2 (30),
      6       city            VARCHAR2 (15),
      7       state            VARCHAR2 (2),
      8       business_phone VARCHAR2 (15),
      9       contact_name   VARCHAR2 (15),
     10       contact_title  VARCHAR2 (15),
     11       text_key       VARCHAR2 (1),
     12       CONSTRAINT     duns_pk PRIMARY KEY (duns_loc))
     13  /
    
    Table created.
    
    SCOTT@orcl_11gR2> CREATE TABLE primary_contact
      2    (duns_loc       NUMBER,
      3       emp_id            NUMBER,
      4       CONSTRAINT     primary_contact_pk
      5                   PRIMARY KEY (emp_id, duns_loc),
      6       CONSTRAINT     primary_contact_fk FOREIGN KEY (duns_loc)
      7                   REFERENCES duns (duns_loc))
      8  /
    
    Table created.
    
    SCOTT@orcl_11gR2> -- data:
    SCOTT@orcl_11gR2> INSERT INTO duns (duns_loc, address_line) VALUES (1, 'highway')
      2  /
    
    1 row created.
    
    SCOTT@orcl_11gR2> INSERT INTO duns (duns_loc, address_line) VALUES (2, 'highway')
      2  /
    
    1 row created.
    
    SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (1, 1)
      2  /
    
    1 row created.
    
    SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (2, 2)
      2  /
    
    1 row created.
    
    SCOTT@orcl_11gR2> INSERT INTO duns (duns_loc, address_line)
      2  SELECT object_id, object_name
      3  FROM   all_objects
      4  WHERE  object_id > 2
      5  /
    
    76029 rows created.
    
    SCOTT@orcl_11gR2> INSERT INTO primary_contact
      2  SELECT object_id, namespace
      3  FROM   all_objects
      4  WHERE  object_id > 2
      5  /
    
    76029 rows created.
    
    SCOTT@orcl_11gR2> -- gather statistics:
    SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'DUNS')
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'PRIMARY_CONTACT')
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> -- procedure:
    SCOTT@orcl_11gR2> CREATE OR REPLACE PROCEDURE duns_proc
      2    (p_rowid IN ROWID,
      3       p_clob     IN OUT NOCOPY CLOB)
      4  AS
      5  BEGIN
      6    FOR d IN
      7        (SELECT duns_loc,
      8             '' ||
      9             business_name     || ' ' ||
     10             business_name2  || ' ' ||
     11             address_line  || ' ' ||
     12             city  || ' ' ||
     13             state     || ' ' ||
     14             business_phone  || ' ' ||
     15             contact_name  || ' ' ||
     16             contact_title ||
     17             ''
     18             AS duns_cols
     19         FROM      duns
     20         WHERE  ROWID = p_rowid)
     21    LOOP
     22        DBMS_LOB.WRITEAPPEND (p_clob, LENGTH (d.duns_cols), d.duns_cols);
     23        FOR pc IN
     24          (SELECT '<emp_id>' || emp_id || '</emp_id>' AS pc_col
     25           FROM   primary_contact
     26           WHERE  duns_loc = d.duns_loc)
     27        LOOP
     28          DBMS_LOB.WRITEAPPEND (p_clob, LENGTH (pc.pc_col), pc.pc_col);
     29        END LOOP;
     30    END LOOP;
     31  END duns_proc;
     32  /
    
    Procedure created.
    
    SCOTT@orcl_11gR2> SHOW ERRORS
    No errors.
    SCOTT@orcl_11gR2> -- user datastore, section group with field section:
    SCOTT@orcl_11gR2> begin
      2    ctx_ddl.create_preference ('duns_store', 'USER_DATASTORE');
      3    ctx_ddl.set_attribute ('duns_store', 'PROCEDURE', 'duns_proc');
      4    ctx_ddl.set_attribute ('duns_store', 'OUTPUT_TYPE', 'CLOB');
      5    ctx_ddl.create_section_group ('duns_sg', 'BASIC_SECTION_GROUP');
      6    ctx_ddl.add_field_section ('duns_sg', 'emp_id', 'emp_id', true);
      7  end;
      8  /
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> -- text index with user datastore and section group:
    SCOTT@orcl_11gR2> CREATE INDEX duns_context_index
      2  ON duns (text_key)
      3  INDEXTYPE IS CTXSYS.CONTEXT
      4  FILTER BY duns_loc
      5  PARAMETERS
      6    ('DATASTORE     duns_store
      7        SECTION GROUP     duns_sg
      8        SYNC          (ON COMMIT)')
      9  /
    
    Index created.
    
    SCOTT@orcl_11gR2> -- variables:
    SCOTT@orcl_11gR2> VARIABLE employeeid NUMBER
    SCOTT@orcl_11gR2> EXEC :employeeid := 1
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> VARIABLE search VARCHAR2(100)
    SCOTT@orcl_11gR2> EXEC :search := 'highway'
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> -- query:
    SCOTT@orcl_11gR2> SET AUTOTRACE ON EXPLAIN
    SCOTT@orcl_11gR2> SELECT SCORE(1), d.*
      2  FROM   duns d
      3  WHERE  CONTAINS
      4             (text_key,
      5              :search || ' AND ' ||
      6              :employeeid || ' WITHIN emp_id',
      7              1) > 0
      8  /
    
      SCORE(1)   DUNS_LOC BUSINESS_NAME   BUSINESS_NAME2  ADDRESS_LINE                   CITY            ST BUSINESS_PHONE
    ---------- ---------- --------------- --------------- ------------------------------ --------------- -- ---------------
    CONTACT_NAME    CONTACT_TITLE   T
    --------------- --------------- -
             3          1                                 highway
    
    1 row selected.
    
    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 2241294508
    
    --------------------------------------------------------------------------------------------------
    | Id  | Operation                   | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT            |                    |    38 |  1102 |    12   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| DUNS               |    38 |  1102 |    12   (0)| 00:00:01 |
    |*  2 |   DOMAIN INDEX              | DUNS_CONTEXT_INDEX |       |       |     4   (0)| 00:00:01 |
    --------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       2 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH||' AND '||:EMPLOYEEID||' WITHIN
                  emp_id',1)>0)
    
    SCOTT@orcl_11gR2>
    
  • Does nobody want my performance optimization experience?

    I worked as a DBA for 2 years at a major company where my work was limited to monitoring and performance optimization,
    and so I had no opportunity to get experience of backup & restore, RMAN, and other basic DBA responsibilities.

    I had to leave last July because of a few other career interests. Unfortunately, those did not work out.

    So on January 15 I decided to restart my career as a DBA.
    I have been to 3 interviews, but every time my lack of experience in backup & recovery and other core DBA responsibilities weakens my case.
    I'm losing confidence in my chances of getting a job. Most (about 95%) of IT companies want a minimum of 3 years' experience, and that too with backup & recovery.

    I have not yet done the OCA (I have passed only 1Z0-042). So I intend to take 1Z0-007 to complete the OCA.

    My questions to you all:
    1. How much value does the OCA hold in the market with 2 years of experience? (I am based in India, specifically Pune.)
    2. How can I get hands-on experience of backup and recovery at home?
    3. I do not have the financial means for OCP TRAINING, so how can I make my resume look strong?
    4. Would any company refuse a DBA who has no experience in backup & recovery? Is performance tuning and monitoring experience such a waste?

    Cheers,
    Malika

    That's the point!

  • Query performance problem

    I have two schemas in two databases.

    When I check the SQL plans, the two schemas behave differently (one is doing a full table scan, the other an index range scan).

    Both schemas have almost similar data, indexes, and loads.

    What is causing the difference in SQL performance?

    In the second plan, the optimizer expects the range scan on IDX_TSK_ID in step 5 to return only 14 rows and decides it's a good idea to join the second table, TB_TRANS_MSTR, with a nested loops join (loop over the 14 TASK_INSTANCE results and do a lookup on each iteration using the PK_TRANS_ID index).

    In the first plan, the optimizer decides to read TB_TRANS_MSTR (containing 978 rows), build an in-memory hash table from the results, and then probe it with the TASK_INSTANCE row set.

    The next question is: which plan is more suitable and translates into better performance? The chances are high that the better plan is the one with the more fitting cardinality estimates. These estimates are based on simple arithmetic (more or less), and they depend on the table and column statistics. So the dba_tab_columns entries Swen W. mentioned would be useful. In addition, the text of the query would probably shed some light on the question.
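
    As a first step before comparing the two plans, it is worth making sure both schemas have fresh optimizer statistics; a minimal sketch (the table name comes from the thread; the parameter choices are assumptions):

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS (
        ownname          => USER,
        tabname          => 'TB_TRANS_MSTR',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
        cascade          => TRUE);   -- gather the index statistics as well
    END;
    /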

  • Partitioning strategy for OBIEE query performance

    I am using partitioning for the first time and I'm having trouble determining whether I have partitioned my fact table in a way that will let partition pruning work with the queries OBIEE generates.  I've set up a simple example using a query I wrote to illustrate my problem.  In this example, I have a star schema with a fact table and I join in two dimensions.  My fact table is LIST partitioned on JOB_ID and RANGE subpartitioned on TIME_ID, and those are the keys that link to the two dimensions I use in this example.


    SELECT SUM (boxbase)
    FROM test_response_coe_job_qtr a
    JOIN dim_study c ON a.job_id = c.job_id
    JOIN dim_time b ON a.response_time_id = b.time_id
    WHERE c.job_name = 'FY14 CSAT'
    AND b.fiscal_quarter_name = 'Quarter 1';


    As far as I can tell, because the query actually filters on columns in the dimensions instead of the columns in the fact table itself, partition pruning isn't actually happening.  I actually see slightly better performance from a non-partitioned table, even though the partitioning strategy now in place was designed specifically for this application.


    If I run the next statement, it runs a lot faster, its explain plan is very simple, and it seems to me that it prunes down to a single subpartition, as I hoped.  This isn't how any OBIEE-generated query will look, though.


    SELECT SUM (boxbase)
    FROM test_response_coe_job_qtr
    WHERE job_id = 101123480
    AND response_time_id < 20000000;


    Any suggestions?  I get some benefit from partition exchange with this configuration, but if I'm going to sacrifice report performance then maybe it isn't worthwhile; at the very least, I would need to get rid of my subpartitions if they are not providing any benefit.


    Here are the explain plans I got for the two queries in my original post:

    Plan for the first query (joining the dimensions):

    Operation                         | Name of the object                           | Rows | Bytes | Cost  | PStart        | PStop
    ----------------------------------+----------------------------------------------+------+-------+-------+---------------+--------------
    SELECT STATEMENT (ALL_ROWS)       |                                              |    1 |       | 20960 |               |
     SORT AGGREGATE                   |                                              |    1 |    13 |       |               |
      VIEW                            | SYS.VW_ST_5BC3A99F                           | 101K |    1M | 20960 |               |
       NESTED LOOPS                   |                                              | 101K |    3M | 20950 |               |
        PARTITION LIST SUBQUERY       |                                              | 101K |    2M |  1281 | KEY(SUBQUERY) | KEY(SUBQUERY)
         PARTITION RANGE SUBQUERY     |                                              | 101K |    2M |  1281 | KEY(SUBQUERY) | KEY(SUBQUERY)
          BITMAP CONVERSION TO ROWIDS |                                              | 101K |    2M |  1281 |               |
           BITMAP AND                 |                                              |      |       |       |               |
            BITMAP MERGE              |                                              |      |       |       |               |
             BITMAP KEY ITERATION     |                                              |      |       |       |               |
              BUFFER SORT             |                                              |      |       |       |               |
               INDEX SKIP SCAN        | CISCO_SYSTEMS.DIM_STUDY_UK                   |    1 |    17 |     1 |               |
              BITMAP INDEX RANGE SCAN | CISCO_SYSTEMS.FACT_RESPONSE_JOB_ID_BMID_12   |      |       |       | KEY           | KEY
            BITMAP MERGE              |                                              |      |       |       |               |
             BITMAP KEY ITERATION     |                                              |      |       |       |               |
              BUFFER SORT             |                                              |      |       |       |               |
               VIEW                   | CISCO_SYSTEMS.index$_join$_052               |  546 |    8K |     9 |               |
                HASH JOIN             |                                              |      |       |       |               |
                 INDEX RANGE SCAN     | CISCO_SYSTEMS.DIM_TIME_QUARTER_IDX           |  546 |    8K |     2 |               |
                 INDEX FULL SCAN      | CISCO_SYSTEMS.TIME_ID_PK                     |  546 |    8K |     8 |               |
              BITMAP INDEX RANGE SCAN | CISCO_SYSTEMS.FACT_RESPONSE_TIME_ID_BMIDX_11 |      |       |       | KEY           | KEY
        TABLE ACCESS BY USER ROWID    | CISCO_SYSTEMS.TEST_RESPONSE_COE_JOB_QTR      |    1 |    15 | 19679 | ROWID         | ROWID

    Plan for the second query (explicit partition keys):

    Operation                         | Name of the object                           | Rows | Bytes | Cost  | PStart        | PStop
    ----------------------------------+----------------------------------------------+------+-------+-------+---------------+--------------
    SELECT STATEMENT (ALL_ROWS)       |                                              |    1 |       |  1641 |               |
     SORT AGGREGATE                   |                                              |    1 |    13 |       |               |
      PARTITION LIST SINGLE           |                                              | 198K |    2M |  1641 | KEY           | KEY
       PARTITION RANGE SINGLE         |                                              | 198K |    2M |  1641 | 1             | 1
        TABLE ACCESS FULL             | CISCO_SYSTEMS.TEST_RESPONSE_COE_JOB_QTR      | 198K |    2M |  1641 | 36            | 36


    Is it unreasonable to think that relying on our indexes on a non-partitioned table (or a table partitioned only in ways that help ETL) can actually work better than partitioning in a way that gets us some dynamic, but never static, pruning?

    Yes - standard tables with indexes can often outperform partitioned tables. It all depends on the types of queries and predicates that are typically used and the number of rows generally returned.

    Partition pruning eliminates partitions ENTIRELY - regardless of the number of rows in the partition or table. An index, on the other hand, is bypassed if the query predicate needs a significant number of rows, since Oracle can determine that it is cheaper to simply use multiblock reads and do a full scan.

    A table with 1 million rows and a query predicate that wants 100K of them probably will not use an index at all. But the same table with two partitions could easily have one of them pruned, so that the "effective number of rows" is only 500K or less.

    If you are partitioning for performance, you should test your critical queries to make sure partitioning/pruning is effective for them.
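
    A quick way to test that is to check the Pstart/Pstop columns in the plan; a minimal sketch using the second query from the original post:

    EXPLAIN PLAN FOR
    SELECT SUM (boxbase)
    FROM   test_response_coe_job_qtr
    WHERE  job_id = 101123480
    AND    response_time_id < 20000000;

    SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY);
    -- Literal Pstart/Pstop values (e.g. 36/36) mean static pruning;
    -- KEY or KEY(SUBQUERY) means pruning is resolved at run time.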

    SELECT SUM (boxbase)
    FROM test_response_coe_job_qtr a
    JOIN dim_study c ON a.job_id = c.job_id
    JOIN dim_time b ON a.response_time_id = b.time_id
    WHERE c.job_name = 'FY14 CSAT'
    AND b.fiscal_quarter_name = 'Quarter 1';

    So, what is a typical value for A.response_time_id? What does a B.time_id represent?

    Because one way of providing explicit partition keys could be to use a range of response_time_id from the FACT table rather than a value of fiscal_quarter_name from the DIMENSION table.

    For example, 'Quarter 1' could correspond to a range of dates from '01/01/yyyy' to '03/31/yyyy'.

    Also, you said you are partitioning on JOB_ID and TIME_ID.

    But if your queries relate mainly to DATES/TIMES, you might be better off using TIME_ID for the partitions and JOB_ID, if necessary, for subpartitioning; a sketch follows below.

    Date range partitioning is one of the most common around, and it serves both performance and ease of maintenance (deleting/archiving old data).
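
    To illustrate that suggestion, a hedged sketch of a time-led layout (all names and boundary values below are assumptions, not the poster's actual DDL):

    CREATE TABLE fact_response
      (job_id           NUMBER,
       response_time_id NUMBER,
       boxbase          NUMBER)
    PARTITION BY RANGE (response_time_id)       -- prune on the time key first
    SUBPARTITION BY LIST (job_id)               -- then on the job key if needed
    SUBPARTITION TEMPLATE
      (SUBPARTITION sp_csat  VALUES (101123480),
       SUBPARTITION sp_other VALUES (DEFAULT))
      (PARTITION p_q1  VALUES LESS THAN (20000000),
       PARTITION p_q2  VALUES LESS THAN (30000000),
       PARTITION p_max VALUES LESS THAN (MAXVALUE));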

  • Questions after first TimesTen trial: memory footprint and query performance

    Hello!

    I'm testing TimesTen In-Memory Database Cache to see if it could help with some ad hoc reporting queries that take too long to run in our Oracle database.

    Here is the configuration:

    1.) TimesTen server: 2 quad-core CPUs with 32 GB of RAM, running Windows 2003 x64.

    2.) Set up two read-only cache groups: a small one for a quick test, and the real one that maps to a database table as follows:


    Database table looks like:
      CREATE TABLE "TB_BD" 
       (   
       "VALUE" NUMBER NOT NULL ENABLE, 
       "TIME_UTC" TIMESTAMP (6) NOT NULL ENABLE, 
       "ASSIGNED_TO_ID" NUMBER NOT NULL ENABLE, 
       "EVENT_ID" NUMBER, 
       "ID" NUMBER NOT NULL ENABLE, 
       "ID_LABEL" NUMBER NOT NULL ENABLE, 
       "ID_ALARM" NUMBER, 
        CONSTRAINT "PK_TB_BD" PRIMARY KEY ("ID")  
       );
    The Oracle database table has 1,367,336,329 rows and the table segments are approximately 61 GB, so an average row takes about 46 bytes.

    Since I have 32 GB in the TimesTen machine, I created the cache group with a WHERE predicate on the ID column so that only the 98,191,284 most recent rows get into the cache group. In the Oracle database, that is around 4.2 GB of data.

    After loading the cache group, dssize returns:
    Command> dssize
    
      PERM_ALLOCATED_SIZE:      26624000
      PERM_IN_USE_SIZE:         19772852
      PERM_IN_USE_HIGH_WATER:   26622892
      TEMP_ALLOCATED_SIZE:      32768
      TEMP_IN_USE_SIZE:         10570
      TEMP_IN_USE_HIGH_WATER:   14192
    
      (Note: the high PERM_IN_USE_HIGH_WATER comes from a first test where I tried to cache too many rows)
    I then ran on the TimesTen machine:
    Command> select avg(value) from tb_bd;
    It is still running after 10 hours, so I can already tell that the query execution time does not really meet my expectations. :-)

    In the Windows Task Manager, I see that ttIsql constantly uses 13% CPU (= 100% / 8 cores), so it uses only one core; but even if it were using all the cores and the execution time were 1/8th, it would not meet my expectation. :-)

    I also see in the Windows Task Manager that the 'MemUsage' of my ttIsql process slowly grows higher and higher, currently around 14 GB. I believe this is the shared memory segment being mapped in, of which the TimesTen process already has approximately 24 GB mapped. The query is probably 53% through, and the total query time may be around 20 hours.


    My questions:

    1.) From what I tested, 1 GB of data in the Oracle table needs about 4-5 GB of memory in the TimesTen database. I read a forum post that explained this with "data is optimized for performance, not space, in TT", but I don't quite buy it. A factor of 4-5 means that the CPU must move 4-5 times the amount of data. The data is not compressed in the Oracle database; it is in its natural binary form. I would like to understand why the data takes so much more space in TT - when you have a NUMBER in Oracle, what does TT do with it to make it 4-5 times bigger, and why?

    2.) Regarding query performance: how long should it even take, at the basic level, to scan about 20 GB of data in memory, count the rows, and sum a NUMBER column with a division at the end to get AVG(<column>)? Is there something flawed in my setup?


    Thanks for the ideas!

    Kind regards
    Marcus

    Edited by: user11973438 on 06.09.2012 23:27

    I agree that using 4-5 times more memory than Oracle is far from optimal. Your schema is unfortunately a little pathological; normally we see more like 2-3 times (which is still really too much). There are many internal differences between Oracle and TimesTen in how data is stored internally. Some are historical, and some are due to optimizing for performance rather than storage efficiency.

    For example:

    1. Oracle rows are always variable length in storage, while TimesTen rows are always fixed length in storage.

    2. In Oracle, a column defined as NUMBER occupies only the space needed by the stored value. In TimesTen a NUMBER column always occupies the space needed to store the maximum possible precision and therefore takes 22 bytes. You can reduce this by restricting it explicitly using NUMBER(n) or NUMBER(n,p).

    3. TimesTen does not support any kind of parallel query within a single data store. All queries run on at most one CPU core; Oracle DB supports parallel query, and that can make a big difference for certain types of operation.

    4. NUMBER is implemented in software and is relatively inefficient. Calculating the average of almost 100M rows will take time... You could try changing this to a native binary type (TT_INTEGER, TT_BIGINT, or BINARY_DOUBLE depending on your data); this should give a good improvement (but see point 5 below, and the sketch after this list).

    5. With a database of this size, it is possible that Windows does a lot of paging while the query is running. I have also observed on Windows that there seems to be a penalty when a process touches/maps a page for the first time. You should monitor paging activity via the Task Manager while the query runs; any significant paging will really hurt performance. Also, try executing the query a second time without disconnecting ttIsql; this may also show a benefit. On Unix/Linux platforms we provide an option (MemoryLock=4) to lock the entire database in physical memory to prevent any paging, but it is not available on Windows.
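
    Following point 4, a minimal sketch of what that column change could look like (the table layout comes from the post; choosing BINARY_DOUBLE and TT_BIGINT is an assumption about the data):

    CREATE TABLE tb_bd2
      ("VALUE"          BINARY_DOUBLE NOT NULL,  -- native type: hardware arithmetic
       "TIME_UTC"       TIMESTAMP (6) NOT NULL,
       "ASSIGNED_TO_ID" TT_BIGINT NOT NULL,
       "EVENT_ID"       TT_BIGINT,
       "ID"             TT_BIGINT NOT NULL,
       "ID_LABEL"       TT_BIGINT NOT NULL,
       "ID_ALARM"       TT_BIGINT,
       CONSTRAINT pk_tb_bd2 PRIMARY KEY ("ID"));

    -- AVG over the native column then avoids software NUMBER arithmetic:
    SELECT AVG ("VALUE") FROM tb_bd2;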

    Chris

  • QUERY OPTIMIZER, LOGMINER

    Hi friends,
    I have very little knowledge about the following topics and I want to improve it. Please give me some good links, notes, and PDFs on them.

    1 QUERY OPTIMIZER
    2 STATSPACK
    3 SQLTRACE
    4 TKPROF
    I have no knowledge of these two topics:
    1 LOGMINER
    2 DATAGUARD
    Thank you

    You have already received the documentation links. I'll give you some book links that supplement the documentation.

    susdba wrote:
    Hi friends
    I have very little knowledge about the following topics and I want to improve it. Please give me some good links, notes, & PDFs on them.

    1 QUERY OPTIMIZER
    2 STATSPACK
    3 SQLTRACE
    4 TKPROF

    http://www.Amazon.com/cost-based-Oracle-fundamentals-experts-voice/DP/1590596366
    http://www.Amazon.com/Optimizing-Oracle-performance-Cary-Millsap/DP/059600527X
    http://www.Amazon.com/effective-Oracle-design-Osborne-Oracle/DP/0072230657

    I have no knowledge of these two topics:
    1 LOGMINER
    2 DATAGUARD
    Thank you

    There is no book about LogMiner; it's just a package, nothing more. You can read its documentation and play with it, and you should be fine. For Data Guard, buy this book:
    http://www.Amazon.com/Oracle-guard-Handbook-Osborne-Oracle/DP/0071621113

    HTH
    Aman...

  • DLL - vista antivirus 2008 - performance optimizer - spyhunter3

    Hi all

    New laptop, and within 2 days I'm infected with various nasties lying in wait.

    I downloaded SpyHunter3 in a moment of madness: following a search for info on 'vista antivirus 2008', it was the first result to appear. Since then, I have not found any reviews or comments that make me feel good about the £30 it cost!

    Scans indicate it cleaned up the system, but the DLL messages reappear on reboot, and Vista Antivirus 2008 / Performance Optimizer jump in and take over frequently. My free 90-day McAfee product does not remove any of them either. I'm browsing through Firefox, but still no result.

    Quick wins?

    Thank you.


  • ASO performance optimization.

    Can you please send me a link to up-to-date material on ASO performance optimization?

    Thank you

    Buy Cameron's book and read Dan's and Gary's chapters, and search the forum archives. I don't know if there is anything better available than these two sources.

  • Question about cube build / query performance (11.2.0.3)

    Hi, I have a stupid question about cube build performance. When choosing the precompute %, is build time linear (or nearly linear) in that value? For example, is selecting 10% going to be 3 times faster than selecting 30%? Also, is it fair to assume that if only 10% of the values are precalculated, an average end-user query has to hit 3 times more data and will therefore be about 3 times slower?

    Sorry, this is on a virtual machine on a laptop, so testing different build configs takes forever (I still don't have a really complete cube load). I guess I shouldn't be trying a 15-dimension cube on a laptop VM, but I'm trying to sell a DBA on the fact that it could improve the performance of our mini-DW.

    Thank you
    Scott

    Cost-based aggregation (aka "precompute percent") was introduced in 11.1 as a simpler alternative to level-based aggregation. Product management's dream was a linear parameter, but the complexity quickly became apparent. What would it be linear against? Build time? Query time? Total disk size? The result balances all of these factors, but is linear against none of them. Fortunately, precompute percent behavior has been fairly consistent across cubes and schemas in our experience, so I can give you a rough characterization. But keep in mind that this is a guide only - you have to experiment on your own schema and system to see what works for you. In particular, you must balance your own requirements on build time, query time, and disk size.

    * 0% * - this means no precomputation at all, so all data access will be dynamic. This is the recommended setting for the top partition of a cube. If, for any reason, you want to use it for the leaf partitions as well, then I advise you to switch to an uncompressed cube.

    * 1% * - this precomputes the smallest part of the cube the algorithm allows, and it certainly takes more than 1% of the time taken by a 100% build. For leaf partitions, it is usually best to increase the amount, because you get much better query response times for not much more cost in disk size and build time. It may be a good level for the top partition of a cube, but it should be used with caution because top partitions are often too big to precompute.

    * 2%-19% * - these levels do not seem to offer much benefit, since the build time and total disk size are almost identical to a 20% build, but queries are slower.

    * 20%-50% * - this range is probably the best compromise between build time and query time. The AWM default is 35%, which is a good starting point. Lower it toward 20% if you want a faster build, and raise it toward 50% if you want faster queries. The setting is closer to linear within this interval than outside it.

    * 51%-99% * - you should probably avoid these levels, although I have seen 60% used in practice. The reason is that while the cube size and build time increase rapidly, queries do not get proportionally faster. Indeed, you may find that queries get slower, because the code spends more time paging in disk pages.

    * 100% * - this precomputes all (non-NULL) cells in the cube. It may be a surprise after my advice about 51%-99%, but 100% is a reasonable level to choose. This is because the code is much simpler when it knows that everything is precomputed and stored in disk pages.
