Best Index on a column with low cardinality on a high DML activity table?
I wanted to ask a question about indexes. I have a table which is the main table in my system; it holds roughly 65,000 to 1,000,000 rows on an average day. Each row has a mandatory status; there are 6 possible statuses, held in a status lookup table. The distribution of status IDs in the main table is shown below. There is no index on this column: it has low cardinality, but there is a lot of DML activity on this table. However, status is the main predicate the report runs against, so it does a full table scan every time. Users work through an interactive report in APEX 4.0, and it runs slowly. There also seems to be a lock on WWV_FLOW_PREFERENCES$ caused by the slow execution of the interactive report query against this table. Is that locking associated with a slow interactive report query?
SELECT * FROM main_table WHERE batch = :b2

A full table scan is done in this case. Distribution of status IDs in the main table:
Database:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE 12.1.0.2.0 Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production
How about a LIST partition on batch column?
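A minimal sketch of what that could look like, with hypothetical table and column names (the real column name and status values would come from the poster's schema):

```sql
-- Hypothetical sketch: list-partition the main table by the
-- low-cardinality status column so the report scans only the
-- partition(s) holding the statuses it asks for.
CREATE TABLE main_table_part (
  id        NUMBER        NOT NULL,
  status_id NUMBER(1)     NOT NULL,   -- 6 distinct values
  payload   VARCHAR2(100)
)
PARTITION BY LIST (status_id) (
  PARTITION p_status_1 VALUES (1),
  PARTITION p_status_2 VALUES (2),
  PARTITION p_status_3 VALUES (3),
  PARTITION p_other    VALUES (DEFAULT)
);

-- A query filtering on status_id can now prune to one partition:
SELECT * FROM main_table_part WHERE status_id = 2;
```

Note the trade-off: partition pruning replaces the full table scan with a scan of one partition, but DML that changes a row's status becomes row movement between partitions (ENABLE ROW MOVEMENT would be required), which has its own cost on a high-DML table.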
Tags: Database
Similar Questions
-
Index on a column of low cardinality
Hello
Standard edition Oracle 10.2
I have a 'status' column in my table. The table contains about 100,000 records.
The possible values of status are 1 or 2 (numeric). This column will be populated for 5 to 10 percent of the rows; the rest are NULL.
A bitmap index seems to be the usual candidate for a low-cardinality column, but this table has a lot of DML activity, and from web research I learned that bitmap indexes cause blocking problems under heavy DML.
So I think a B-tree is the only option left. Can you suggest a good index to improve performance on such a column? Would a function-based index be needed, do you think?
You have a STATUS column where only a small percentage of the values is not NULL.
To access the rows whose STATUS is not NULL efficiently, simply create an index on the STATUS column.
This should be plenty effective.
The effectiveness of this index (or of an FBI, if one were forced) rests on the fact that a B-tree index contains no entry when the indexed value is NULL (or when all of the values are NULL, in the case of a composite index).
Some will say that 100,000 rows is small and that 5-10% is not particularly selective, and you don't tell us the width of the table, etc., so even with the index in place it might not be used; depending on the circumstances it might be more efficient to do a full table scan.
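A minimal sketch of the suggestion, with hypothetical names:

```sql
-- Hypothetical table: ~100,000 rows, STATUS is NULL for 90-95% of them.
CREATE TABLE orders_demo (
  id     NUMBER PRIMARY KEY,
  status NUMBER(1)          -- 1, 2, or NULL
);

-- B-tree indexes carry no entry for rows whose indexed columns are
-- all NULL, so this index holds only the 5-10% populated rows.
CREATE INDEX orders_demo_status_ix ON orders_demo (status);

-- Can be answered with an index range scan on the small index:
SELECT * FROM orders_demo WHERE status = 1;
```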
It depends on what you want to do.
See:
http://asktom.Oracle.com/pls/asktom/f?p=100:11:0:P11_QUESTION_ID:4433887271030
http://richardfoote.WordPress.com/2008/05/12/index-scan-or-full-table-scan-the-magic-number-magic-dance/
Published by: Dom Brooks on February 10, 2011 12:58
-
Indexes on columns with only "Y" and "N"
Our transaction table has about 100 million rows. Of these, about 60 million rows are not actively used.
We have composite indexes on multiple columns, including dates, and these are used by regular queries.
Will adding a column with values 'Y' or NULL, and creating an index on that column, be useful to increase query performance?
For example, will adding a condition like
WHERE ......... AND ACTIVE_FLAG = 'Y'
help increase the performance of queries?
Would creating such a column and adding an index help?
What type of index do you recommend?
The alternative advocated by some members of the staff is archiving the INACTIVE rows into a separate table. This would have a big impact on business processes, but we might be able to pull it off.
Is there another solution?
I am a complete beginner at performance work, although I am familiar with PL/SQL and queries.
I'm under very tight schedule constraints for the first pass at this. I'll have more time as soon as I know which direction to go in.
Forgive me if I've broken a forum convention; I'm new to the forum too.

Thanks for providing this information. This gives a much clearer picture of what you are facing.
I will try to answer your questions below.
You have indicated that your data volumes have steadily increased and that performance has declined.
You didn't say (I forgot to ask), but it may be that the number of users has increased as well, and that, as is typical, many users use the system at the same time. To me this is indicative of a systemic problem. In other words, the problem is not due to any one thing or any one part of the system.
There are two main components of a server: the instance and the database. This link sums up the difference between the two - http://www.adp-gmbh.ch/ora/misc/database_vs_instance.html - and here is a recent forum link for reference: Re: difference between Oracle Instance and Database. To paraphrase, an instance comprises the background processes and memory structures (SGA, PGA, etc.) that Oracle uses.
The symptoms you describe could mean that your instance is no longer sized or configured correctly for your current workload and is causing strain throughout the system. Maybe memory is too limited. Maybe your sorts are growing with the additional data.
I suggest that you start a new thread to ask for help evaluating and tuning your instance. Use this as a starting point:
>
Question/title - how to evaluate the health of an instance and tune it

Our installation has a problem of degrading performance. How can we collect and provide workload and configuration information for assessment, so you can help determine possible solutions such as: memory sizing, temp and redo segment sizing, sort issues?

Statement of the problem
1. Oracle version is ?.?.?.?
2. Performance has gradually been degrading as data grows. UI response suffers and batch processing also runs for hours. We expect volumes to keep growing.
3. Volumes have continued to increase over the years and performance issues have gradually accumulated over the past 2 years. Data volumes began to increase faster in the last year due to changes in the company.
4. The data in the OLTP system is a mix of 40% active and 60% relatively inactive (financial history).
5. Near-live replication of the complete system to several sites; using Streams, I believe.
6. The tech team's view is that a few tables have too many rows. We know that our queries can be suboptimal.
>
Now to your question:
>
1. I do not know whether the indexes on the existing columns are fully exploited. In the meantime, can we still get some benefit from adding a 'Y'/NULL column? What would improve?
>
It is unlikely to give the benefits you need. It's putting the cart before the horse. The first step is to identify a specific problem. Only then can you examine and evaluate solutions, such as adding a new column or index. There are several reasons this isn't the right solution, certainly not at the moment:
A. Any new column and index, BY DEFINITION, cannot even be used unless one or more current queries are changed. This is obvious, since no existing query could possibly refer to a column that does not exist.
B. To try to get Oracle to use the new column/index, at least one query must be changed.
C. In my opinion, you should never modify a production query without knowing what well-defined objective the change achieves. You must evaluate the current execution plan to identify which changes, IF ANY, can improve performance.
D. There may be, and given your systemic problem there likely is, some low-hanging fruit on the performance side. That is, there may be ways to tune queries to use existing indexes, or to add a new index on an existing column, that would improve things.
E. Assuming none of the above does the job, adding a new column and an index to identify a 40/60 split (40% active) is unlikely to be used by Oracle (see Centinul's response).
>
2. Note that this 'Y' calculation is non-trivial. I can reproduce it with a WHERE condition on existing columns, but we may have to run a batch job on the weekend to check groups of rows and mark them as 'Y'. So it is not only the benefit of indexing on 'Y', but also some benefit from pre-computation. Given this info, does it now make more sense to have a 'Y'/NULL column?
>
My answer is no - it makes no sense to have a Y/N or Y/NULL column.
A. Certainly not for performance reasons - as noted above, it is unlikely to help.
B. Certainly not for business reasons.
Your "non-trivial" calculation implements a business rule: it identifies groups of rows that have netted out. If you perform this calculation, the result should be saved meaningfully. One way is to create a new column called DATE_NETTED_OUT. This column name carries meaning, and the column can still be used as a DATE/NULL boolean. That would be much better than Y/NULL, which does not really capture the business meaning of the value.
>
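A sketch of that suggestion, with hypothetical table and column names:

```sql
-- Hypothetical transaction table; DATE_NETTED_OUT records WHEN the
-- business event happened and doubles as a boolean (NULL = not yet
-- netted out, i.e. still active).
ALTER TABLE txn ADD (date_netted_out DATE);

-- Weekend batch job: mark one group of rows as netted out.
-- :g stands for a group identified by the netting calculation.
UPDATE txn
   SET date_netted_out = SYSDATE
 WHERE group_id = :g;

-- "Active" rows are simply those still NULL:
SELECT COUNT(*) FROM txn WHERE date_netted_out IS NULL;
```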
3. Would partitioning speed up queries on the 40 million active rows and slow down queries on the full 100 million?
>
Probably neither one would affect the other. Partitioning decisions are usually made for easier data management and often have little, if any, performance impact. Existing queries are unlikely to run any differently just because the table is partitioned. There are exceptions, of course. If you partition on a DATE, and an existing query has a DATE filter but there is no index on the DATE column, that query currently does a full table scan of the whole table. If the table were partitioned on that DATE column, Oracle would probably do a full scan of just the partitions covering the 40% or 60% of the table matching the DATE value. That would be that much faster. But if there is already an index on the DATE column, I would not expect performance to change much. This is just speculation, since it depends on the data, the clustering factor of the data, and the existing queries.
>
1. I still need a particular column to partition on, don't I?
>
Yes, you do - unless you use HASH partitioning, which wouldn't really benefit your use case. You also said that "... staff advocate archiving the INACTIVE rows into a separate table."
Partitioning can be used as part of that strategy. Partition by MONTH or by QUARTER on your new DATE_NETTED_OUT column. Keep 1 or 2 years of data online, as you do now, and when a partition becomes more than 2 years old you can "transport" it to your archiving system. This is part of the partition management I mentioned earlier. You can simply detach the oldest partition and copy it to the archive.
This will not affect your applications, other than that data no longer being available.
>
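A hedged sketch of the rolling-archive idea (hypothetical names and boundary dates; the real partition scheme would follow the business calendar):

```sql
-- Hypothetical: range-partition by quarter on DATE_NETTED_OUT.
-- Rows not yet netted out (NULL) land in the MAXVALUE partition.
CREATE TABLE txn_part (
  txn_id          NUMBER        NOT NULL,
  amount          NUMBER(12,2),
  date_netted_out DATE
)
PARTITION BY RANGE (date_netted_out) (
  PARTITION p2011q1   VALUES LESS THAN (DATE '2011-04-01'),
  PARTITION p2011q2   VALUES LESS THAN (DATE '2011-07-01'),
  PARTITION p_current VALUES LESS THAN (MAXVALUE)
);

-- Archiving step: swap the oldest partition with an empty
-- standalone table, then move that table to the archive system.
CREATE TABLE txn_archive_2011q1 AS
  SELECT * FROM txn_part WHERE 1 = 0;

ALTER TABLE txn_part
  EXCHANGE PARTITION p2011q1 WITH TABLE txn_archive_2011q1;
```

EXCHANGE PARTITION is a data-dictionary operation, so detaching a quarter this way does not rewrite the rows.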
2. Is "add a column" or "cut-and-paste rows into a copy of the table" the worse idea?
>
I don't know what that means.

SUMMARY
It is premature to consider alternatives until you know what the problems are. Only then can you try to determine the applicable solutions. I would do it in this order:
1. Upgrade to a newer version of Oracle; you are on an old one (you said 9i, which is no longer supported).
2. Assess the health and configuration of your instance. You may be able to improve things significantly by adjusting instance and configuration parameters. Post a new question, as mentioned above. I don't have the expertise to advise you in detail on that, and the tuning gurus of the forum may not notice this thread (titled "indexes on columns with only 'Y' and 'N'") or even know you need help.
3. Identify the low-hanging fruit among the performance problems. Is it one of your batch processes? An individual query? You mentioned the UI being slow - that could be a front-end, middle-tier, or application problem, not a database one.
4. Don't make architecture changes (adding columns) until you have tried everything else first.
-
Does an update of an indexed column to the same value generate an index update?
Does an update of an indexed column with the same value generate an update of the index?
Thank you

In addition to my previous answer, see also
http://orainternals.WordPress.com/2010/11/04/does-an-update-statement-modify-the-row-if-the-update-modifies-the-column-to-same-value/
Riyaj Shamsudeen has this to say:
"+ We have an index on this column v1, and we update this indexed column too. Did Oracle update the index? No. If the values match at the indexed-column level, the RDBMS code does not update the index - an optimization feature, again. Only the table row is updated; the index is not updated. + "

Hemant K Chitale
-
High or low cardinality?
Hi all
11.2.0.3
I have a table TAB1 with columns COL1 and COL2. I want to check which of the two columns is the lower-cardinality candidate for index creation.
How can I check and compare the cardinality?
Thank you all,
pK
SQL> create table temp as select * from all_objects;

Table created.

SQL> exec dbms_stats.gather_table_stats(user, 'TEMP')

PL/SQL procedure successfully completed.

SQL> select c.column_name
  2       , c.num_distinct
  3       , t.num_rows
  4    from user_tables t
  5    join user_tab_columns c
  6      on t.table_name = c.table_name
  7   where t.table_name = 'TEMP'
  8   order
  9      by c.num_distinct desc;

COLUMN_NAME                    NUM_DISTINCT   NUM_ROWS
------------------------------ ------------ ----------
OBJECT_ID                            213396     213396
CREATED                                4855     213396
DATA_OBJECT_ID                         4684     213396
OBJECT_NAME                            2631     213396
TIMESTAMP                              1856     213396
LAST_DDL_TIME                          1589     213396
SUBOBJECT_NAME                          122     213396
OWNER                                   118     213396
OBJECT_TYPE                              13     213396
STATUS                                    2     213396
TEMPORARY                                 2     213396
GENERATED                                 2     213396
SECONDARY                                 1     213396

13 rows selected.

SQL>
-
Choice of index when 2 indexes have similar columns
Long post; in summary: I have a table with 2 indexes, one on (col1, col2) and one on (col1, col3, col2). If I run a query providing values for col1 and col2, I expect the 2-column index to be used, but it is not. I'm guessing that the cost-based optimizer based its decision on the value of avg_data_blocks_per_key, as the cost for the table access is different in each query. But it does not seem to be (completely?) taking into account how this changes when a middle index column is skipped:
It is possible to add an index hint in the code, but I don't like doing that as a matter of course, especially when I don't understand what the optimizer is doing.
Now follows the details of the table and the index.
USER_INDEXES output, showing the expected differences in AVG_DATA_BLOCKS_PER_KEY.
Note: statistics were recently gathered on CX02 to see if that makes a difference; I need more temp space to do the same for CX03. After that, I expect leaf_blocks and distinct_keys to be ~50% higher on CX03. The differences are pretty close to what I expected; however, CX03 is almost unique and yet the two clustering factors are similar.
Table and index columns:

INDEX_NAME              : REFTABLE_CX02        REFTABLE_CX03
INDEX_TYPE              : NORMAL               NORMAL
TABLE_OWNER             : ACTD00               ACTD00
TABLE_NAME              : REFTABLE             REFTABLE
TABLE_TYPE              : TABLE                TABLE
UNIQUENESS              : NONUNIQUE            NONUNIQUE
COMPRESSION             : DISABLED             DISABLED
PREFIX_LENGTH           :
TABLESPACE_NAME         : ACT_INDEXES_X128M    ACT_INDEXES_X128M
INI_TRANS               : 2                    2
MAX_TRANS               : 255                  255
INITIAL_EXTENT          : 134217728            134217728
NEXT_EXTENT             : 134217728            134217728
MIN_EXTENTS             : 1                    1
MAX_EXTENTS             : 2147483645           2147483645
PCT_INCREASE            : 0                    0
PCT_THRESHOLD           :
INCLUDE_COLUMN          :
FREELISTS               :
FREELIST_GROUPS         :
PCT_FREE                : 10                   10
LOGGING                 : YES                  YES
BLEVEL                  : 3                    3
LEAF_BLOCKS             : 6175698              6159664
DISTINCT_KEYS           : 76747169             926267839
AVG_LEAF_BLOCKS_PER_KEY : 1                    1
AVG_DATA_BLOCKS_PER_KEY : 10                   1
CLUSTERING_FACTOR       : 773025434            661852333
STATUS                  : VALID                VALID
NUM_ROWS                : 1508335135           1054996402
SAMPLE_SIZE             : 1508335135           1054996402
LAST_ANALYZED           : 27/03/2013 15:08:56  18/01/2012 02:01:11
DEGREE                  : 1                    1
INSTANCES               : 1                    1
PARTITIONED             : NO                   NO
TEMPORARY               : N                    N
GENERATED               : N                    N
SECONDARY               : N                    N
BUFFER_POOL             : DEFAULT              DEFAULT
FLASH_CACHE             : DEFAULT              DEFAULT
CELL_FLASH_CACHE        : DEFAULT              DEFAULT
USER_STATS              : NO                   NO
DURATION                :
PCT_DIRECT_ACCESS       :
ITYP_OWNER              :
ITYP_NAME               :
PARAMETERS              :
GLOBAL_STATS            : YES                  YES
DOMIDX_STATUS           :
DOMIDX_OPSTATUS         :
FUNCIDX_STATUS          :
JOIN_INDEX              : NO                   NO
IOT_REDUNDANT_PKEY_ELIM : NO                   NO
DROPPED                 : NO                   NO
VISIBILITY              : VISIBLE              VISIBLE
DOMIDX_MANAGEMENT       :
SEGMENT_CREATED         : YES                  YES
With no hints, Oracle uses the "wrong" index because its cost is "too low".

SQL> select * from user_ind_columns where index_name in('REFTABLE_CX02','REFTABLE_CX03') order by 1,4;

INDEX_NAME     TABLE_NAME  COLUMN_NAME   COLUMN_POSITION COLUMN_LENGTH CHAR_LENGTH DESC
______________ ___________ _____________ _______________ _____________ ___________ ____
REFTABLE_CX02  REFTABLE    REFNO                       1            22           0 ASC
REFTABLE_CX02  REFTABLE    REFTYPESEQNO                2            22           0 ASC
REFTABLE_CX03  REFTABLE    REFNO                       1            22           0 ASC
REFTABLE_CX03  REFTABLE    TMSTAMP                     2             7           0 ASC
REFTABLE_CX03  REFTABLE    REFTYPESEQNO                3            22           0 ASC

SQL> desc reftable
 Name           Null?    Type
 -------------- -------- ------------
 REFSEQNO       NOT NULL NUMBER(10)
 ACTIVITYSEQNO  NOT NULL NUMBER(10)
 REFTYPESEQNO   NOT NULL NUMBER(10)
 REFNO          NOT NULL NUMBER(10)
 HIDEIND        NOT NULL NUMBER(10)
 USID           NOT NULL VARCHAR2(16)
 TMSTAMP        NOT NULL DATE
With the index it chooses, it does not do a precise access on both of my columns; it has to filter on the second.
Plan without a hint:

SQL> explain plan for
  2  select *
  3  from RefTable
  4  where RefTypeSeqNo = :1
  5  and RefNo = :2;

Explained.

SQL> @?\rdbms\admin\utlxpls

---------------------------------------------------------------------------------------------
| Id  | Operation                   | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |               |     3 |   126 |     6   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| REFTABLE      |     3 |   126 |     6   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | REFTABLE_CX03 |     3 |       |     4   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("REFNO"=TO_NUMBER(:2) AND "REFTYPESEQNO"=TO_NUMBER(:1))
       filter("REFTYPESEQNO"=TO_NUMBER(:1))

15 rows selected.
Plan with the index hint, which should be better:

SQL> explain plan for
  2  select /*+INDEX (RefTable REFTABLE_CX02) */
  3  *
  4  from RefTable
  5  where RefTypeSeqNo = :1
  6  and RefNo = :2;

Explained.

SQL> @?\rdbms\admin\utlxpls

---------------------------------------------------------------------------------------------
| Id  | Operation                   | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |               |     3 |   126 |    15   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| REFTABLE      |     3 |   126 |    15   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | REFTABLE_CX02 |    14 |       |     4   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("REFNO"=TO_NUMBER(:2) AND "REFTYPESEQNO"=TO_NUMBER(:1))

14 rows selected.
First, the run with the bad index - higher logical reads:
SQL> set autotrace traceonly explain statistics
SQL> select *
  2  from RefTable
  3  where RefTypeSeqNo = 0
  4  and RefNo = 57748;

629 rows selected.

Statistics
__________________________________________________________
          8  recursive calls
          0  db block gets
       1156  consistent gets
        588  physical reads
      32828  redo size
      32709  bytes sent via SQL*Net to client
        811  bytes received via SQL*Net from client
         43  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        629  rows processed
Then, using the hint. The plans are the same as the explain plans above, so I have not repeated them here:

SQL> select /*+INDEX (RefTable REFTABLE_CX02) */
  2  *
  3  from RefTable
  4  where RefTypeSeqNo = 0
  5  and RefNo = 57748;

629 rows selected.

Statistics
__________________________________________________________
          0  recursive calls
          0  db block gets
        633  consistent gets
          0  physical reads
          0  redo size
      32709  bytes sent via SQL*Net to client
        811  bytes received via SQL*Net from client
         43  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        629  rows processed
Thank you for your time,
Ben

bencol wrote:
---------------------------------------------------------------------------------------------
| Id  | Operation                   | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |               |     3 |   126 |     6   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| REFTABLE      |     3 |   126 |     6   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | REFTABLE_CX03 |     3 |       |     4   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("REFNO"=TO_NUMBER(:2) AND "REFTYPESEQNO"=TO_NUMBER(:1))
       filter("REFTYPESEQNO"=TO_NUMBER(:1))

15 rows selected.

SQL> @?\rdbms\admin\utlxpls

---------------------------------------------------------------------------------------------
| Id  | Operation                   | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |               |     3 |   126 |    15   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| REFTABLE      |     3 |   126 |    15   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | REFTABLE_CX02 |    14 |       |     4   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("REFNO"=TO_NUMBER(:2) AND "REFTYPESEQNO"=TO_NUMBER(:1))

14 rows selected.
I don't think you've said which version of Oracle, but it looks like 10gR2.
Note that the second plan calculates the cost of the index and its rows, and the cost of the table, based on an estimate of 14 rows - which the optimizer has derived from the value of distinct_keys for the index. Oracle can do this because the predicates you have are an exact match for the index it is using. However, when calculating the table cardinality it "forgets" the index cardinality and applies the old "product of selectivities" algorithm to the two columns.
For the first plan, though, the columns are NOT an exact match for the index definition, so Oracle uses the product of the column selectivities to work out all the costs and cardinalities - in this case giving a much lower cardinality on the index range scan (and hence a lower cost, even though the index is fatter), and it also gives a much lower cost for the table access. In this case the difference in actual work done is not huge - it's basically the larger impact of the index range scan - although the different order in which you collect the rows from the table could have various side effects in a more complex query. (Of course, the difference in cost for this step could cause dramatic differences in the execution plan of a more complex query.)
In 11g (even release 1) the optimizer uses the exact distinct_keys values of the index in both sets of calculations, and it will derive the table cardinality using the same distinct_keys calculation. As a side effect of an upgrade you may therefore find a lot of execution plans change, because (following the example given) the table cardinality estimates can increase significantly - by a factor of almost 5 in the example.
Update: I had forgotten that I wrote up the underlying problem on my blog a few years ago: http://jonathanlewis.wordpress.com/2008/03/11/everything-changes/
Regards
Jonathan Lewis
Published by: Jonathan Lewis on March 30, 2013 09:02
-
Is it a good idea to create an index on a column used for partitioning?
I'm new to using partitions as well as indexes. Can someone help me with the situation below?
I have a table defined as below
CREATE TABLE FACT_MASTER
(
  PERIODCODE   NUMBER(8)  NOT NULL,
  PRODUCT_CD   NUMBER(10) NOT NULL,
  DPT_CD       NUMBER(3),
  FACT_VALUE1  NUMBER,
  FACT_VALUE2  NUMBER,
  FACT_VALUE3  NUMBER,
  .....
  FACT_VALUE50 NUMBER
)
PARTITION BY RANGE (PERIODCODE)
(
  PARTITION p1 VALUES LESS THAN (2002),
  PARTITION p2 VALUES LESS THAN (2003),
  PARTITION p3 VALUES LESS THAN (2004),
  ...
)
This table is partitioned on the period code.
A select query on the table uses both period code and product code. I intend to create an index to improve performance. Does it make sense to include the period code column in the index? I intend to create the index on the product code column only; I think the partitioning itself behaves like an index on PERIODCODE. Can someone tell me if this is correct? I am on 10g.
Published by: user6794035 on January 21, 2010 08:47
Published by: user6794035 on January 21, 2010 08:50

If the index is a global index, then yes, it is worth including. But check whether your SQL would actually benefit from a global index.
If your SQL always supplies the period code, then partition pruning will happen naturally, and the period code would therefore not need to be part of the index!
P;
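A sketch of the option discussed above, based on the FACT_MASTER DDL (the LOCAL keyword and query values are illustrative assumptions):

```sql
-- Index PRODUCT_CD only and let partition pruning handle
-- PERIODCODE. A LOCAL index keeps one index segment per
-- partition, so pruning applies to the index as well.
CREATE INDEX fact_master_prod_lix
  ON fact_master (product_cd) LOCAL;

-- A query that always supplies PERIODCODE prunes to one partition
-- and then probes that partition's index segment on PRODUCT_CD:
SELECT *
  FROM fact_master
 WHERE periodcode = 2003
   AND product_cd = 1234;
```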
-
Best graphics card / power supply match for a new XPS 8500
What is the best power-supply match for the AMD Radeon HD 7770 graphics card in my new XPS 8500 desktop? The original graphics card was an NVIDIA GeForce GT 640, but Dell told me the order was changed because that NVIDIA card is no longer available for a Windows 7 XPS 8500. So now they have changed my graphics card to this AMD 7770, which lists a 500-watt minimum power supply, but the 8500 has a 460-watt supply inside. I emailed Dell customer support about this; since I don't play games or run graphics-heavy applications, will the 460-watt supply work OK or not? Or should I get the other OEM AMD Radeon card available from Dell for this computer, the HD 7570, with its lower power-supply requirement of only 400 watts?
The 7770 works perfectly with the 460-watt supply, and it is much faster than the 7570. Although the 7770 recommends a 500-watt power supply, it draws about 140 watts on average, so there is no need to worry.
-
How to avoid duplicates in a column, with conditions
Hi all
I need some advice here. At work we have an Oracle APEX application that allows users to add new records, with a decision number auto-incremented based on the year and the group name.
Say they add the first record for group AA in 2013: they get decision number AA 1 2013, displayed on the record's report page.
The second AA record in 2013 will be AA 2 2013.
If we add about 20 records, the last will be AA 20 2013.
The first record for 2014 will be AA 1 2014.
Recently, however, we got a complaint from a user that two records with the same group name had the same decision number.
When I looked in the history table, I found that the time gap between the 2 records was about 0.1 seconds.
In addition, we have a lookup table that allows an admin user to update the starting sequence number, with the constraint that it must be greater than the current maximum number for the current year and group name.
This starting sequence number is stored together with the group name in a table.
And in some other cases, the user can deliberately add a duplicate decision number for a related record (this is a new feature).
The current logic of the procedure that adds a new record in the application is:
- get the max decision_number from the record table for the selected group name and the current year
- INSERT the new record into the record table with that decision number + 1
- UPDATE the stored sequence number to the decision number just added.
So instead of using APEX's built-in automatic row processing, I wrote a procedure that combines all three steps.
I ran some loops that execute this procedure continually, and it seems it can generate unique new decision numbers when the calls are about 0.1 seconds apart.
However, when I increased the number of entries to 200 and let two users run 100 each, with a time gap of about 0.01 seconds, duplicate decision numbers appeared.
What can I do to prevent duplicates?
I can't just apply a unique constraint on the three columns, because duplicates must be allowed under some special conditions. I don't know much about using locks and their impact.
This is the content of my procedure
create or replace
PROCEDURE add_new_case (
  -- ID is populated by a trigger
  p_case_title       IN VARCHAR2,
  p_year             IN VARCHAR2,
  p_group_name       IN VARCHAR2,
  -- decision number handled here
  p_case_file_number IN VARCHAR2,
  -- active
  p_user             IN VARCHAR2
)
AS
  default_value          NUMBER;
  caseCount              NUMBER;
  seqNumber              NUMBER;
  previousDecisionNumber NUMBER;
BEGIN
  -- execute immediate q'[alter session set nls_date_format = 'dd/mm/yyyy']';
  SELECT count(*)
    INTO caseCount
    FROM CASE_RECORD
   WHERE GROUP_ABBR = p_group_name
     AND to_number(to_char(create_date, 'yyyy')) = to_number(to_char(date_utils.get_current_date, 'yyyy'));

  SELECT max(decision_number)
    INTO previousDecisionNumber
    FROM CASE_RECORD
   WHERE GROUP_ABBR = p_group_name
     AND to_number(to_char(create_date, 'yyyy')) = to_number(to_char(date_utils.get_current_date, 'yyyy'));

  IF p_group_name IS NULL
  THEN
    seqNumber := 0;
  ELSE
    SELECT seq_number INTO seqNumber FROM GROUP_LOOKUP WHERE ABBREVIATION = p_group_name;
  END IF;

  IF caseCount > 0 THEN
    default_value := greatest(seqNumber, previousDecisionNumber) + 1;
  ELSE
    default_value := 1;
  END IF;

  INSERT INTO CASE_RECORD (case_title, decision_year, GROUP_ABBR, decision_number, case_file_number, active_yn, created_by, create_date)
  VALUES (p_case_title, p_year, p_group_name, default_value, p_case_file_number, 'Y', p_user, sysdate);

  -- Need to update the sequence here also
  UPDATE GROUP_LOOKUP
     SET SEQ_NUMBER = default_value
   WHERE ABBREVIATION = p_group_name;

  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    logger.error( p_message_text => SQLERRM
                , p_message_code => SQLCODE
                , p_stack_trace  => dbms_utility.format_error_backtrace
                );
    RAISE;
END;
Many thanks in advance,
Ann
It's easiest to solve for the case where p_group_name is not null. In that case you update a GROUP_LOOKUP row, so you can select that row FOR UPDATE at the start, to prevent two cases for the same group being added at the same time. To do this, change the select from GROUP_LOOKUP to:
SELECT seq_number INTO seqNumber FROM GROUP_LOOKUP WHERE ABBREVIATION = p_group_name FOR UPDATE OF seq_number;
and move this to be the first thing the procedure does - before it reads any CASE_RECORD rows.
In the case where p_group_name is null, you have no row to lock. I think the best you can do is lock the entire GROUP_LOOKUP table:
LOCK TABLE GROUP_LOOKUP IN EXCLUSIVE MODE WAIT 100;
The "WAIT 100" means it will wait up to 100 seconds before giving up and raising an error; in practice it should only have to wait a moment.
Exclusive mode allows others to read, but not to update, the table.
Both the SELECT ... FOR UPDATE and the LOCK TABLE will make updates from other sessions wait for this transaction to commit. Queries from other sessions are not affected.
The locks are released when you commit or roll back.
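A minimal sketch of the serialization pattern described above, using the poster's table names but with the business logic stripped down:

```sql
-- Sketch only: serialize decision-number generation per group by
-- locking that group's lookup row before reading any maximums.
CREATE OR REPLACE PROCEDURE add_case_sketch(p_group_name IN VARCHAR2) AS
  v_seq NUMBER;
BEGIN
  -- 1. Lock this group's row. A second session calling this
  --    procedure for the same group blocks here until we commit.
  SELECT seq_number
    INTO v_seq
    FROM group_lookup
   WHERE abbreviation = p_group_name
     FOR UPDATE OF seq_number;

  -- 2. It is now safe to compute and store the next number:
  --    no other session can read the old value concurrently.
  UPDATE group_lookup
     SET seq_number = v_seq + 1
   WHERE abbreviation = p_group_name;

  COMMIT;  -- releases the lock
END;
/
```

With the lock taken first, the max(decision_number) read in the real procedure happens while the lock is held, so two sessions can no longer both read the same maximum and insert the same number.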
-
Oracle Spatial index on a column with NULLs
I'm using Oracle 11g with Spatial.
I have an SDO_GEOM column in a table that has NULL values for some records. I need to run spatial queries based on this column, hence the need for a spatial index on it. Is it possible to create a spatial index on an SDO_GEOM column even if some of the values are NULL, and still get spatial query results?
Thank you
Alan
Published by: user3883362 on 29 April 2013 05:59

Alan,
Is it possible to create a Spatial Index on a column SDO_GEOM even if some of them are NULL
Yes.
SQL> CREATE TABLE test (ID NUMBER PRIMARY KEY, geom MDSYS.SDO_GEOMETRY);

Table created.

-- insert a row with non-NULL geometry
SQL> INSERT INTO test VALUES (1, SDO_GEOMETRY('POINT (6000000 2100000)', 40986));

1 row created.

-- insert a row with NULL geometry
SQL> INSERT INTO test VALUES (2, NULL);

1 row created.

SQL> CREATE INDEX test_spx ON test (geom) INDEXTYPE IS MDSYS.SPATIAL_INDEX;

Index created.
...and getting results from spatial queries?
Yes, assuming that your data is valid.
-- Create a 10-unit buffer around the point we previously inserted, then apply SDO_INSIDE
SQL> SELECT ID, SDO_INSIDE(geom, SDO_GEOM.SDO_BUFFER(geom, 10, 1)) FROM test;

1 TRUE   -- our point geometry
2 FALSE  -- our NULL

2 rows selected.
If you run into problems here, I think you can wrap the functions that choke on NULL values - for example, converting NULL geometries when going through WKT.
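A simple guard in the WHERE clause can also keep the operators away from NULL geometries. A sketch against the test table above (the 10-unit buffer mirrors the earlier example; this is an illustration, not from the original reply):

```sql
-- Only evaluate the spatial operator for rows with a non-NULL geometry.
SELECT id
FROM   test t
WHERE  t.geom IS NOT NULL
AND    SDO_INSIDE(t.geom, SDO_GEOM.SDO_BUFFER(t.geom, 10, 1)) = 'TRUE';
```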
How do you access data (sqlplus, application, custom, etc.)? What is the query, and we see a few examples of data?
Kind regards
Noel -
Searching a column with comma-separated values with Oracle Text
I am using Oracle 11gR2 XE and Oracle Text for a web search engine.
I have created and text-indexed a CLOB keywords column that contains words separated by spaces. This lets me broaden the search, as Oracle Text returns the rows that have one or more of the keywords stored in this column. The contents of the column are visible to the user and serve to 'broaden' the search. This works as expected.
But now I need to support multi-word keywords, or even phrases. With the current configuration, Oracle Text will only search for each keyword individually. How should I store the phrases and configure Oracle Text so that it will search for entire phrases (exact match is better, but partial match is fine too)?
Example column content for two rows (values separated by semicolons):
"hello, hi; is there anybody out there? nope;"
"the just; basic facts;"
I found a similar question: searching a column with comma-separated values, except that I need a solution for Oracle 11g with its full-text search.
Possible solutions:
1st solution: I thought of redesigning the DB as follows. I would make a new keywords table (pkID NUMBER, nonUniqueID NUMBER, singlePhrase VARCHAR2(100 BYTE)), and change the previous keyword column to KeywordNonUniqueID, holding the ID (instead of a list of values). At search time I would INNER JOIN with the new keywords table. The problem with this solution is that I will get several rows containing the same data except for the phrase. I guess this will ruin the ranking?
2nd solution: is it possible to store the phrases as XML in the original keyword column and somehow tell Oracle Text to search within the XML?
3rd solution: separate individual phrases with spaces, but replace the spaces inside phrases with underscores or something (making each phrase a single word). So a phrase "why hello there, Johnny!" is saved as "why_hello_there,_Johnny!"
4th solution?:
Note that generally there will not be many phrases (fewer than 100), nor will they be long (one phrase will be up to 5 words).
Also note that I am currently using CONTAINS, and need some of its operators, for my full-text searches.

When you talk about "phrase", do you mean "a list of words separated by a comma from other phrases"?
That's not the definition of "phrase" used by Oracle Text, where it simply means "a list of words in a defined order".
If I understand your requirement, you want to have data such as:
"aa bb cc dd"
"aa ee dd ff"
and give priority to the first over the second if someone searches for "dd".
First, to search within the comma-separated list, you need to search within a section. You can either explicitly define field sections, such as
aa bb cc dd
or you can use the special SENTENCE section and set the sentence delimiters appropriately. This is done with the PUNCTUATIONS attribute of the BASIC_LEXER.

Then you have the requirement that you want to find only words where they are the only words in the section. That's the same problem I address in the last post of this forum thread:
Contains: match exactly

Our solution will be substantially the same: surround the text with special markers, then give priority to a phrase search with those special markers on each side of the word.
We need to do some additional processing, though, as we need to surround each "phrase" (in your terminology) with special markers. I did that by surrounding the text with "XX1 ... XX2", then replacing every comma with "XX2, XX1", as part of a MULTI_COLUMN_DATASTORE:

drop table names;
create table names (id number primary key, text varchar2(50));

insert into names values( 1, 'just and kind, kind and loving' );
insert into names values( 2, 'just, kind' );

exec ctx_ddl.drop_preference ( 'mylex' )
exec ctx_ddl.create_preference( 'mylex', 'BASIC_LEXER' )
exec ctx_ddl.set_attribute ( 'mylex', 'PUNCTUATIONS', ',' )

exec ctx_ddl.drop_preference ( 'mcds' )
exec ctx_ddl.create_preference( 'mcds', 'MULTI_COLUMN_DATASTORE' )
exec ctx_ddl.set_attribute ( 'mcds', 'COLUMNS', '''XX1 ''||replace(text, '','',''XX2, XX1'')||'' XX2''' )

exec ctx_ddl.drop_preference ( 'mywl' )
exec ctx_ddl.create_preference( 'mywl', 'BASIC_WORDLIST' )
exec ctx_ddl.set_attribute ( 'mywl', 'SUBSTRING_INDEX', 'YES' )

create index namesindex on names(text)
indextype is ctxsys.context
parameters( 'datastore mcds wordlist mywl' )
/

select score(1), id, text from names
where contains( text, 'XX1 kind XX2 OR kind', 1 ) > 0
order by score(1) desc;

Output of this is:
  SCORE(1)         ID TEXT
---------- ---------- --------------------------------------------------
        52          2 just, kind
         2          1 just and kind, kind and loving
-
updating (incorrect) column values in a table with the correct values from a copy of the same table
Database is 10gR2 - we had a situation last night where someone inadvertently changed column values on a couple of hundred thousand records to an incorrect value first thing in the morning, and didn't let me know until later in the day. My undo retention was not large enough to create a copy of the table as it was 7 hours back with an "insert into table_2 select * from table_1 as of timestamp ..." query, so I restored the previous night's backup to another machine, recovered it up to 07:00 (just before the hour in which he made the change), created a dblink from the production database, and created a copy of the table from the restored database.
My first thought was to simply update the production table with the correct values from the copy, using something like this:
update mnt.workorders
set approvalstat = (select b.approvalstat
                    from mnt.workorders a, mnt.workorders_copy b
                    where a.workordersoi = b.workordersoi)
where exists (select *
              from mnt.workorders a, mnt.workorders_copy b
              where a.workordersoi = b.workordersoi);
That wasn't the exact syntax, but you get the idea: I wanted to replace the incorrect values in x columns of the production table with the correct values from the copy of the table from the restored backup. Anyway, it was (or seemed to be) working, but when I watched the process through OEM it estimated 100+ hours with full table scans, so I killed it. I ended up just inserting (copying) the rows added to production since 07:00 into the copy table with a select from the production table where <col_with_datestamp> >= 07:00, truncating the production table, then re-inserting the rows from the now-correct copy.
Doing a post-mortem today, I replayed the scenario on the copy I restored, trying to figure out a cleaner, quicker way to do it should the need arise again. I went and randomly changed some values in a number column (called "comappstat") in a copy of the production table, and then thought I would try the following to reset the values from the correct table:
update (select a.comappstat, b.comappstat
        from mnt.workorders a, mnt.workorders_copy b
        where a.workordersoi = b.workordersoi  -- this is a PK column
        and a.comappstat != b.comappstat)
set b.comappstat = a.comappstat;
Although I thought the syntax was correct, I get an "ORA-00904: "A"."COMAPPSTAT": invalid identifier" when running this. I was trying to work out where the syntax was wrong, then thought that perhaps having the subquery return a single row would be cleaner and faster anyway, so I gave up on that and instead tried this:
update mnt.workorders_copy
set comappstat = (select distinct a.comappstat
                  from mnt.workorders a, mnt.workorders_copy b
                  where a.workordersoi = b.workordersoi
                  and a.comappstat != b.comappstat)
where a.comappstat != b.comappstat
and a.workordersoi = b.workordersoi;
The subquery executed on its own returns the single value 9, which is the correct value of the column in the prod table, and I want it to replace the incorrect '12' (I had updated the copy to change the comappstat column value to 12 everywhere it was 9). However, when I run the whole query I get this error:
ERROR at line 8:
ORA-00904: "B"."WORKORDERSOI": invalid identifier
First, I don't see why the update statement does not work (it's probably obvious, but I don't see it).
Second, is this the best approach for updating a column (or columns) that are incorrect with the correct columns from a copy of the same table, or is there a better way?
I would sooner update the table than delete or truncate and re-insert, as there is an insert/update trigger I had to disable and re-enable, and truncating leaves the table unusable to the application while I re-insert.
Thank you

Hello,
First of all, after 79 posts you should know how to format your code.
Your last query reads as follows:
UPDATE mnt.workorders_copy
SET comappstat = ( SELECT DISTINCT a.comappstat
                   FROM mnt.workorders a, mnt.workorders_copy b
                   WHERE a.workordersoi = b.workordersoi
                   AND a.comappstat != b.comappstat )
WHERE a.comappstat != b.comappstat
AND a.workordersoi = b.workordersoi
This will not work, for several reasons:
The subquery defines a and b; outside the brackets you cannot refer to a or b.
There is no link between mnt.workorders_copy in the UPDATE and the subquery.

If you do it this way, you should have something like this:
UPDATE mnt.workorders a  -- this is the table you want to update
SET a.comappstat = ( SELECT b.comappstat
                     FROM mnt.workorders_copy b  -- this is the table with the correct (old) values
                     WHERE a.workordersoi = b.workordersoi  -- this must be the key
                     AND a.comappstat != b.comappstat )
WHERE EXISTS ( SELECT b.comappstat
               FROM mnt.workorders_copy b
               WHERE a.workordersoi = b.workordersoi  -- this must be the key
               AND a.comappstat != b.comappstat )
Performance is not so good, though, because the subquery runs for each row in mnt.workorders.
Note the EXISTS condition in the WHERE clause. You need it; otherwise you will update the unchanged rows to NULL.

I would do it like this:
UPDATE ( SELECT a.workordersoi,
                a.comappstat,
                b.comappstat comappstat_old
         FROM mnt.workorders a       -- this is the table you want to update
             ,mnt.workorders_copy b  -- this is the table with the correct (old) values
         WHERE a.workordersoi = b.workordersoi  -- this must be the key
         AND a.comappstat != b.comappstat ) c
SET c.comappstat = comappstat_old;
This way you can test the subquery first and know exactly what will be updated.
Since there is no subquery executed for each row, performance should be better.

Kind regards,
Peter
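As a footnote to the answer above, a MERGE expresses the same restore in a single pass. A minimal sketch, assuming workordersoi is the unique key in both tables (table and column names as in the thread):

```sql
-- Restore comappstat from the copy wherever it differs from production.
MERGE INTO mnt.workorders a
USING mnt.workorders_copy b
ON (a.workordersoi = b.workordersoi)
WHEN MATCHED THEN
   UPDATE SET a.comappstat = b.comappstat
   WHERE a.comappstat != b.comappstat;
```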
-
AdvancedDataGrid - add columns with ActionScript
I am trying to add columns to an AdvancedDataGrid via ActionScript.
I can't make it work.
I tried two approaches: one with an intermediary array to store the columns, then assigning that array to the adg's columns; and one where I assign columns directly to the adg's columns array.
Each fails in its own way. The columns don't "take", and the adg either uses the default columns from the dataProvider, or shows no columns at all.
"adg_test.mxml" has the AdvancedDataGrid code.
"adg_test_renderer.mxml" is an item renderer for one of the columns.
Would appreciate learning what I'm doing wrong.
Thanks for any help.
== adg_test_renderer.mxml START ==
<?xml version="1.0" encoding="utf-8"?>
<mx:VBox xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:Button id="btnTest" label="Working Renderer"/>
</mx:VBox>
== adg_test_renderer.mxml END ==
== adg_test.mxml START ==
<?xml version="1.0"?>
<mx:Application
    xmlns:mx="http://www.adobe.com/2006/mxml"
    initialize="init()">

<mx:Script>
    <![CDATA[
    import mx.collections.ArrayCollection;
    import mx.controls.advancedDataGridClasses.AdvancedDataGridColumn;

    [Bindable]
    private var dpADGExplicit:ArrayCollection = new ArrayCollection([
        {Artist: 'Pavement', Album: 'Slanted and enchanted', Price: 11.99},
        {Artist: 'Pavement', Album: 'Brighten the corners', Price: 11.99},
        {Artist: 'Saner', Album: 'A child once', Price: 11.99},
        {Artist: 'Saner', Album: 'Helium wings', Price: 12.99},
        {Artist: 'The doors', Album: 'The doors', Price: 10.99},
        {Artist: 'The doors', Album: 'Morrison hotel', Price: 12.99},
        {Artist: 'Grateful Dead', Album: 'American Beauty', Price: 11.99},
        {Artist: 'Grateful Dead', Album: 'In the Dark', Price: 11.99},
        {Artist: 'Grateful Dead', Album: 'Shakedown Street', Price: 11.99},
        {Artist: 'The doors', Album: 'Strange Days', Price: 12.99},
        {Artist: 'The doors', Album: 'The best of the doors', Price: 10.99}
    ]);

    [Bindable]
    private var dpADGActionScript:ArrayCollection = new ArrayCollection([
        {Artist: 'Pavement', Album: 'Slanted and enchanted', Price: 11.99},
        {Artist: 'Pavement', Album: 'Brighten the corners', Price: 11.99},
        {Artist: 'Saner', Album: 'A child once', Price: 11.99},
        {Artist: 'Saner', Album: 'Helium wings', Price: 12.99},
        {Artist: 'The doors', Album: 'The doors', Price: 10.99},
        {Artist: 'The doors', Album: 'Morrison hotel', Price: 12.99},
        {Artist: 'Grateful Dead', Album: 'American Beauty', Price: 11.99},
        {Artist: 'Grateful Dead', Album: 'In the Dark', Price: 11.99},
        {Artist: 'Grateful Dead', Album: 'Shakedown Street', Price: 11.99},
        {Artist: 'The doors', Album: 'Strange Days', Price: 12.99},
        {Artist: 'The doors', Album: 'The best of the doors', Price: 10.99}
    ]);

    private function init():void
    {
        var arr:Array = []; // intermediary array which becomes the AdvancedDataGrid columns array
        var col:AdvancedDataGridColumn = new AdvancedDataGridColumn();

        col.dataField = "Artist";
        arr.push(col);
        col.dataField = "Album";
        col.visible = false;
        arr.push(col);
        col.dataField = "Price";
        col.itemRenderer = new ClassFactory(adg_test_renderer);
        arr.push(col);
        adgActionScript.columns = arr;

        // UNSUCCESSFUL ALTERNATIVE APPROACH
        /*
        col.dataField = "Artist";
        adgActionScript.columns.push(col);
        col.dataField = "Album";
        col.visible = false;
        adgActionScript.columns.push(col);
        col.dataField = "Price";
        col.itemRenderer = new ClassFactory(adg_test_renderer);
        adgActionScript.columns.push(col);
        */
    }
    ]]>
</mx:Script>

<mx:Label text="Explicit columns"/>
<mx:AdvancedDataGrid
    id="adgExplicit"
    width="100%" height="100%"
    sortExpertMode="true"
    dataProvider="{dpADGExplicit}">
    <mx:columns>
        <mx:AdvancedDataGridColumn dataField="Artist"/>
        <mx:AdvancedDataGridColumn dataField="Album" visible="false"/>
        <mx:AdvancedDataGridColumn dataField="Price" itemRenderer="adg_test_renderer"/>
    </mx:columns>
</mx:AdvancedDataGrid>

<mx:Label text="ActionScript columns (if the ActionScript works: the Artist column should be hidden; should see Album columns with data and Price with the buttons)."/>
<mx:AdvancedDataGrid
    id="adgActionScript"
    width="100%" height="100%"
    sortExpertMode="true"
    dataProvider="{dpADGActionScript}">
</mx:AdvancedDataGrid>

</mx:Application>
== adg_test.mxml END ==
If you are looking to add columns with ActionScript, follow this:

var _advancedDataGrid:AdvancedDataGrid = new AdvancedDataGrid();
var columns:Array = _advancedDataGrid.columns;
columns.push(new AdvancedDataGridColumn("field1"));
columns.push(new AdvancedDataGridColumn("field2"));
columns.push(new AdvancedDataGridColumn("field3"));
_advancedDataGrid.columns = columns;
_advancedDataGrid.validateNow();
-
Index on two columns becomes a function-based index?
Hello, I created a unique index on two columns, a NUMBER(9) and a DATE.
It becomes a function-based index with the number column and a hidden SYS column (for the date).
When I run queries that use this index, autotrace tells me it does things like this:
sys_nc00001$ > SYS_OP_DESCEND (datevalue)
sys_nc00001$ IS NOT NULL
Why is it not a normal index?

Use of DESC does this.
From http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_5010.htm
Oracle Database treats descending indexes as if they were function-based indexes.
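A minimal sketch of that behavior (table, column, and index names are illustrative, not from the thread):

```sql
CREATE TABLE t (n NUMBER(9), d DATE);
CREATE UNIQUE INDEX t_idx ON t (n, d DESC);

-- The DESC key is stored as a hidden virtual column (e.g. SYS_NC00003$),
-- so the data dictionary reports the index as function-based:
SELECT index_type FROM user_indexes WHERE index_name = 'T_IDX';
-- expected: FUNCTION-BASED NORMAL
```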
-
Indexing a column based on content type
Hello
Is it possible to create a text index only for certain content? For example, I have a CONTENT table:
create table content
(
  id              number,
  content_type_id number(4),
  data            clob
)
with different content_type_ids, say 10, 20, 30, etc. I need to create a text index on the DATA column only for content_type_id = 10. Can anyone please help me with an example? Thanks in advance
Regards,
REDA

Another method would be to use the format column feature, setting it to IGNORE for the documents you do not want indexed:
SQL> alter table content add fmt varchar2(10);

Table altered.

SQL> update content set fmt = 'IGNORE' where content_type_id != 10;

2 rows updated.

SQL> commit;

Commit complete.

SQL> create index content_idx on content (data)
  2  indextype is ctxsys.context
  3  parameters ('format column fmt')
  4  /

Index created.
-Edwin