Question about table compression
My question is simple: if I execute this SQL:
ALTER TABLE some_table_name MOVE COMPRESS;
and the table is already compressed, does Oracle skip it or spend time re-compressing? I ask because I have a nice little PL/SQL routine that loops through all the tables in my schema (about 1800 tables) and compresses them, but it takes a long time, hours.
I would like to compress only the tables that are not already compressed, and skip the ones that are.
That's why I ask: I am trying to determine whether the PL/SQL needs a dry run. :-)
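To skip tables that are already compressed, the loop can consult the data dictionary first. A minimal sketch of the idea (an assumption-laden sketch, not tested here: it assumes non-partitioned tables in the current schema, since partitioned tables expose the flag in USER_TAB_PARTITIONS instead):

```sql
-- Sketch only: compress tables whose dictionary flag shows they are
-- not yet compressed (USER_TABLES.COMPRESSION = 'DISABLED').
BEGIN
  FOR t IN (SELECT table_name
              FROM user_tables
             WHERE compression = 'DISABLED') LOOP
    EXECUTE IMMEDIATE
      'ALTER TABLE ' || t.table_name || ' MOVE COMPRESS';
    -- MOVE marks the table's indexes UNUSABLE; rebuild them afterwards.
    FOR i IN (SELECT index_name
                FROM user_indexes
               WHERE table_name = t.table_name
                 AND status = 'UNUSABLE') LOOP
      EXECUTE IMMEDIATE
        'ALTER INDEX ' || i.index_name || ' REBUILD';
    END LOOP;
  END LOOP;
END;
/
```

With the WHERE filter, already-compressed tables are never visited, which avoids the re-compression cost discussed below.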
Kodiak_Seattle wrote:
Thanks for all the help. Space is always a big problem for us here; we always run out of it. Only 1/3 of all tables are compressed, 2/3 are not, so there is plenty to gain there. Thanks!
On 11g R2.
To follow up on the question, I found the following:
Having compression enabled for a table does not necessarily mean that the data is compressed. If compression is enabled with the statement:
ALTER TABLE tablename COMPRESS;
the existing data in the table remains uncompressed.
However, executing the statement
ALTER TABLE tablename MOVE COMPRESS;
compresses the existing data in the table, and that takes time.
Trying to compress an already compressed or partially compressed table also takes time: Oracle will analyze the data in the table again and try to compress it.
I did a test on an already compressed table with half a million records, and the time was almost the same as for the initial compression.
You can do a quick test in your environment to verify that.
Sorry for the misinformation I provided before.
Kind regards.
Al
Tags: Database
Similar Questions
-
CREATE TABLE AS and Hybrid Columnar Compression
I have a question about CREATE TABLE AS SELECT and Hybrid Columnar Compression. In testing, I found that the uncompressed data will be approximately 10 TB and the compressed data around 1 TB. I plan to compress the table with the CREATE TABLE AS statement, and I want to know in which order Oracle does the compression: will the table be created first and then compressed, or will the data be compressed as the table is created? The motivation behind the question is to see how much storage I need to complete the operation.
Thank you.
If you are using
create table xxx compress for query high ... as select ...
the data will be compressed before insertion, so in your case it will use about 1 TB of disk space.
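A sketch of such a statement (hypothetical table names; COMPRESS FOR QUERY HIGH is Hybrid Columnar Compression and requires supported storage such as Exadata):

```sql
-- Sketch: HCC is applied as rows are loaded by the direct-path CTAS,
-- so the target segment only ever holds compressed data.
CREATE TABLE sales_archive
  COMPRESS FOR QUERY HIGH
  AS SELECT * FROM sales;
```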
-
Hi guys,
I am trying to compress a table. I connect via Oracle SQL Developer > right click > Storage > Compress. All fine.
But the size of the table does not change.
Is there a command to say "start compressing my table1 now"?
There are more than 50,000 rows in my table.
Thank you!
Compressing the data in a table does NOT change the size of the table.
It just increases the available free space inside the table.
Oracle never voluntarily returns disk space once it has acquired it; it is designed to reuse free space.
What problem are you trying to solve?
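For reference, a sketch of doing this in SQL rather than through the GUI (hedged: hypothetical segment name; MOVE COMPRESS rewrites the rows in compressed form, and USER_SEGMENTS shows the segment size before and after):

```sql
-- Rewrite the table's rows in compressed form.
ALTER TABLE table1 MOVE COMPRESS;

-- Check the segment size; note that enabling compression alone does
-- not shrink the segment, it only affects how new blocks are filled.
SELECT segment_name, bytes/1024/1024 AS mb
  FROM user_segments
 WHERE segment_name = 'TABLE1';

-- A MOVE marks the table's indexes UNUSABLE; rebuild them afterwards.
```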
-
How to design the answer table for a question table
Hi all,
I am creating an application for online student exams.
There are two types of questions: single-choice answers and multiple-choice answers.
My question is below.
My question table (I am not satisfied with it) is this:
create table question_master (
  exam_id number references exam_master(exam_id),
  marks_of_each_question number,
  type_of_question char(1), -- single choice answer / multiple choice answers
  q1 varchar2(2000),
  q2 varchar2(2000),
  q3 varchar2(2000),
  q4 varchar2(2000)
)
Now I am puzzled about how to create the ANSWER table to contain the answers:
create table answers_of_questions (
  answer_id number primary key,
  question_id number references question_master(question_id),
  answer varchar2(4000) not null,
  is_answer_correct char(1), -- y/n
  student_selection char(1)  -- y/n, whether the student selected it
  ... ...
The single-choice case is fine, but for multiple checkbox choices, what should I do?
How should I design the answer table?
Do I have to create 2 tables to contain the answers?
Note: the QUESTIONS and ANSWERS will all be entered by the teacher. Students will make a choice and I will store that choice in another table,
maybe called STUDENT_SELECTED_ANSWERS or something like that.
If anyone has a reference to a detailed schema script for an online exam, kindly share it with me.
Kind regards.
If you need to have answers in another table:
Student (student_id, name, etc.)
Exam (exam_id, exam_name, etc.)
Question (question_id, exam_id, question_text)
Answer (question_id, a_text, b_text, c_text, d_text, a, b, c, d)  <-- one-to-one with the Question table; the a-d flags indicate which choices are correct
OR
Answer (question_id, answer_id, answer_text, correct)  <-- many-to-one with the Question table; the correct flag marks correctness of this single answer
Student_Answer (student_id, question_id, a, b, c, d)
OR
Student_Answer (question_id, student_id, answer_id)  <-- creating a question_id+answer_id row in this table implies the student checked/selected it as an answer
How far you want to normalize is up to you.
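A minimal DDL sketch of the normalized variant (hedged: hypothetical names and sizes, not taken from the original post):

```sql
-- Sketch: one row per possible answer; one row per answer a student ticks.
create table question (
  question_id   number primary key,
  exam_id       number not null,
  question_text varchar2(2000) not null
);

create table answer (
  question_id number references question(question_id),
  answer_id   number,
  answer_text varchar2(4000) not null,
  is_correct  char(1) check (is_correct in ('y','n')),
  primary key (question_id, answer_id)
);

-- A row here means the student selected that answer; multiple rows
-- per question cover the multiple-choice case naturally.
create table student_answer (
  student_id  number,
  question_id number,
  answer_id   number,
  primary key (student_id, question_id, answer_id),
  foreign key (question_id, answer_id)
    references answer (question_id, answer_id)
);
```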
-
Beginner question on merging records into a table
This question is about the MERGE exercise 10-b on page 3-41 of "Oracle Database 10g SQL Fund. II Vol.1". It says:
Merge the data from the EMP_DATA table, created in the last lab, into the data in the EMP_HIST table. Assume the data in the external EMP_DATA table matches the EMP_HIST table; update the email column of EMP_HIST to match the row in the EMP_DATA table. If a row in the EMP_DATA table does not match, it is to be inserted into the EMP_HIST table. Rows are considered matching whenever their first and last names are the same.
To me, this exercise is constructed wrongly. First, in the last lab we were not asked to create EMP_DATA. Second, EMP_DATA is empty.
Third, this question asks us to merge into the EMP_HIST table while EMP_DATA is empty.
The EMP_HIST table currently holds data copied from the employees table. EMP_HIST structure:
FIRST_NAME VARCHAR2(20)
LAST_NAME NOT NULL VARCHAR2(25)
EMAIL NOT NULL VARCHAR2(45)
The EMP_DATA table is empty. I created it as follows:
create or replace directory emp_dir
  as 'F:\emp_dir';
drop table emp_data;
CREATE TABLE emp_data
( first_name VARCHAR2(20)
, last_name  VARCHAR2(20)
, email      VARCHAR2(30)
)
ORGANIZATION EXTERNAL
(
  TYPE oracle_loader
  DEFAULT DIRECTORY emp_dir
  ACCESS PARAMETERS
  (
    RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
    NOBADFILE
    NOLOGFILE
    FIELDS
    ( first_name POSITION (01:20) CHAR
    , last_name  POSITION (22:41) CHAR
    , email      POSITION (43:72) CHAR
    )
  )
  LOCATION ('emp.dat')
);
Back to our question: I attempted the merge as follows:
merge into emp_hist e
using emp_data d
on (e.first_name = d.first_name)
when matched then
  update set
    e.last_name = d.last_name,
    e.email = d.email
when not matched then
  insert values (d.first_name, d.last_name, d.email);
I get this error:
Error report:
SQL Error: ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04040: file emp.dat in EMP_DIR not found
ORA-06512: at "SYS.ORACLE_LOADER", line 19
29913. 00000 - "error in executing %s callout"
*Cause: The execution of the specified callout caused an error.
*Action: Examine the error messages and take appropriate action.
On the other hand, I thought I would try this:
merge into emp_data d
using emp_hist e
on (d.first_name = e.first_name)
when matched then
  update set
    d.last_name = e.last_name,
    d.email = e.email
when not matched then
  insert values (e.first_name, e.last_name, e.email);
I get this error, because as far as I know an external table is read-only after its creation (no update, insert, or delete allowed):
Error report:
SQL Error: ORA-30657: operation not supported on external organized table
30657. 00000 - "operation not supported on external organized table"
*Cause: User attempted an operation on an external table which is not supported.
*Action: Don't do that!
**********************************
I don't know what to do. I did my best, please help. What if I want to import a large data set via the external table?
Hmm, an external table is just a definition of how to view a file as a table in the database. Think of it as a tool for importing data.
1. First, have a file ready (with data).
2. Create an external table matching the format of the file.
3. Now you should be able to see the data when you do: select * from my_external_table;
4. Now create an ordinary table where you want the data to be inserted permanently.
5. insert into permanent_table select * from my_external_table;
I hope this helps.
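The steps above can be sketched end-to-end as follows (hedged: hypothetical directory, file, and table names, with a simple comma-delimited format instead of the fixed positions used earlier):

```sql
-- Steps 1-5 in one script (sketch only).
create or replace directory data_dir as '/tmp/data';

create table ext_names
( first_name varchar2(20)
, last_name  varchar2(20)
)
organization external
( type oracle_loader
  default directory data_dir
  access parameters
  ( records delimited by newline
    fields terminated by ','
  )
  location ('names.csv')
);

-- Persist the rows in an ordinary heap table.
create table names_perm as
  select * from ext_names;
```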
-
RENAMECOLLECTIONTABLE - problems renaming the collection table
Hi all,
I am using Oracle database version 10.2.0.2, OS Solaris.
I registered the following schema successfully.
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:xdb="http://xmlns.oracle.com/xdb"
           elementFormDefault="qualified" attributeFormDefault="unqualified">
  <xs:element name="employee" type="employeeType" xdb:defaultTable="EMPLOYEEXMLTYPE_TABLE"/>
  <xs:complexType name="employeeType">
    <xs:sequence>
      <xs:element name="name" type="xs:string"/>
      <xs:element name="departments" type="departmentsType" nillable="true" minOccurs="0"/>
      <xs:element name="name" type="xs:string"/>
    </xs:sequence>
    <xs:attribute name="id" type="xs:integer" use="required"/>
  </xs:complexType>
  <xs:complexType name="departmentsType">
    <xs:sequence>
      <xs:element name="department" type="DepartementType" maxOccurs="unbounded"/>
    </xs:sequence>
  </xs:complexType>
  <xs:complexType name="DepartementType">
    <xs:sequence>
      <xs:element name="name" type="xs:string"/>
      <xs:element name="departmentWings" type="departmentWingsType"/>
    </xs:sequence>
  </xs:complexType>
  <xs:complexType name="departmentWingsType">
    <xs:sequence>
      <xs:element name="departmentWing" type="departmentWingType" maxOccurs="unbounded"/>
    </xs:sequence>
  </xs:complexType>
  <xs:complexType name="departmentWingType">
    <xs:sequence>
      <xs:element name="name" type="xs:string"/>
      <xs:element name="target" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>
My goal is to create an index on /employee/departments/department/departmentWings/departmentWing/name.
I followed the steps mentioned in "Create index for an XMLType column".
One of the steps mentioned in the link above is to rename the collection table (in my case departmentwing) using the procedure DBMS_XMLSTORAGE_MANAGE.RENAMECOLLECTIONTABLE.
So I tried the following:
SQL> BEGIN
  2    DBMS_XMLSTORAGE_MANAGE.RENAMECOLLECTIONTABLE(
  3      OWNER_NAME            => 'XDB_TEST',
  4      TABLE_NAME            => 'EMPLOYEEXMLTYPE_TABLE',
  5      COL_NAME              => NULL,
  6      XPATH                 => '/employee/departments/department/departmentWings/departmentWing/name',
  7      COLLECTION_TABLE_NAME => 'DEPTWING_TABLE'
  8    );
  9  END;
 10  /

PL/SQL procedure successfully completed.
Although the procedure said it completed successfully, I could not see the created DEPTWING_TABLE:
SQL> DESC DEPTWING_TABLE;
ERROR:
ORA-04043: object DEPTWING_TABLE does not exist
I am at a loss. Can someone please help?
Kind regards,
Simo.
Published by: user5805018 on October 12, 2011 04:40
Hello,
First of all, are you sure that the nested tables were generated during schema registration?
You must annotate the schema with xdb:storeVarrayAsTable="true" and use the option genTables => true for registration.
Then you should see something like this:
SQL> select table_name, table_type_name, parent_table_name, parent_table_column
  2  from user_nested_tables
  3  ;

TABLE_NAME                     TABLE_TYPE_NAME        PARENT_TABLE_NAME              PARENT_TABLE_COLUMN
------------------------------ ---------------------- ------------------------------ ------------------------------------
SYS_NTy8fd9LOzSmun4whyTbJC4g== department384_COLL     EMPLOYEEXMLTYPE_TABLE          "XMLDATA"."departments"."department"
SYS_NTeAiIyOu0Q9G3nvWMtcM6mQ== departmentWing381_COLL SYS_NTy8fd9LOzSmun4whyTbJC4g== "departmentWings"."departmentWing"
In addition, according to the manual (xdb_easeofuse_tools.pdf):
For Oracle Database 11g Release 2 (11.2) and above, this function accepts XPath notation as well as dotted notation. If the XPath notation is used, a namespace parameter may also be necessary.
So, on your version, you must use the "dot" notation:
SQL> call XDB.DBMS_XMLSTORAGE_MANAGE.renameCollectionTable(
  2    USER
  3  , 'EMPLOYEEXMLTYPE_TABLE'
  4  , NULL
  5  , '"XMLDATA"."departments"."department"'
  6  , 'DEPT_TABLE'
  7  );

Method called

SQL> call XDB.DBMS_XMLSTORAGE_MANAGE.renameCollectionTable(
  2    USER
  3  , 'DEPT_TABLE'
  4  , NULL
  5  , '"departmentWings"."departmentWing"'
  6  , 'DEPTWING_TABLE'
  7  );

Method called

SQL> select table_name, table_type_name, parent_table_name, parent_table_column
  2  from user_nested_tables;

TABLE_NAME     TABLE_TYPE_NAME        PARENT_TABLE_NAME     PARENT_TABLE_COLUMN
-------------- ---------------------- --------------------- ------------------------------------
DEPT_TABLE     department384_COLL     EMPLOYEEXMLTYPE_TABLE "XMLDATA"."departments"."department"
DEPTWING_TABLE departmentWing381_COLL DEPT_TABLE            "departmentWings"."departmentWing"
Hope that helps.
-
Adding a row at the bottom of an advanced table in OAF
Hi all,
I have a customized OAF page with an advanced table region, on which I added an "Add Row" event. When I click the Add Row button, I am able to insert a blank row in the table, but it always inserts the row at the top of the table. I need the empty row to be added at the end. Is it possible to implement this? Please help me.
I wrote a method like the one below in the AM to create a row.
public void insertRecord(String p_seq)
{
  XXXVOImpl vo = getXXXVO1();
  vo.setMaxFetchSize(0);
  XXXVORowImpl row = (XXXVORowImpl) vo.createRow();
  row.setAttribute("xxId", p_seq);
  vo.insertRow(row);
  row.setNewRowState(Row.STATUS_INITIALIZED);
}
Thank you
Hello
Try using the code below:
public void insertRecord(String p_seq)
{
  XXXVOImpl vo = getXXXVO1();
  vo.setMaxFetchSize(0);
  XXXVORowImpl row = (XXXVORowImpl) vo.createRow();
  row.setAttribute("xxId", p_seq);
  vo.last();   // position on the last row
  vo.next();   // move past it, so the insert lands at the end
  vo.insertRow(row);
  row.setNewRowState(Row.STATUS_INITIALIZED);
}
Sushant-
-
table size with compression and without compression
Hello,
I have a table that takes up 35.5 GB of space, with 120 million rows in it.
I thought that compressing it would save me a lot of space, so I did.
Here's how I compressed it:
I created a separate tablespace, and in the new tablespace I created several database files, each file being 2 GB.
Then I inserted all the records from the original table into the new one.
I ran the query below to see the size:
SQL> select ((blocks*8192)-(blocks*avg_space))/1024/1024 "size MB", empty_blocks,
  2  avg_space, num_freelist_blocks
  3  from user_tables
  4  where table_name = 'strategy';

   size MB EMPTY_BLOCKS  AVG_SPACE NUM_FREELIST_BLOCKS
---------- ------------ ---------- -------------------
2059.88281            0          0                   0
----------------------------------------
SQL> select ((blocks*8192)-(blocks*avg_space))/1024/1024 "size MB", empty_blocks,
  2  avg_space, num_freelist_blocks
  3  from user_tables
  4  where table_name = 'strategy';

   size MB EMPTY_BLOCKS  AVG_SPACE NUM_FREELIST_BLOCKS
---------- ------------ ---------- -------------------
35504.2422            0          0                   0
In the queries above, the first result, about 2 GB, is the total resulting space after compression, and the second is the original size with NO compression.
My question is: are these calculated sizes reliable?
I ask because the Oracle compression utility inserts data into blocks in the database files. What if those blocks are only half filled and left partly empty before it moves on to the next block, and so forth?
Do the sizes above account for empty blocks?
I don't know how to explain it better.
Please help.
JP
I don't see where you use compression, so I don't know what you mean by "compress". If you mean native compression, use something like:
SQL> create table tbl as select * from dba_source;

Table created.

SQL> select sum(bytes)
  2  from user_segments
  3  where segment_name = 'TBL'
  4  /

SUM(BYTES)
----------
 159383552

SQL> alter table tbl move compress
  2  /

Table altered.

SQL> select sum(bytes)
  2  from user_segments
  3  where segment_name = 'TBL'
  4  /

SUM(BYTES)
----------
 124780544

SQL>
SY.
-
Cannot play iTunes files from a USB drive. Windows Media Player cannot play the file. The player might not support the file type or does not support the codec used to compress the file?
Ask the question in the Apple Forums:
https://discussions.Apple.com/index.jspa
-
Hi Experts,
I have a DB table with more than 50 columns.
When I query this table, it should only return one row at a time, as in the SQL Developer image below.
Here I need to build a PL/SQL block that queries the table and emits each column name as a key, with the query result as the value.
E.g.: Key - Value
TASK_EVENT_ID - 1765
EVENT_TYPE - ASR_UPDATE
... etc., for all of the columns in my table.
Experts, please comment on this point; I appreciate your help on this.
Thank you,
-Vincent.
Here is an approach using DBMS_SQL to iterate over the columns and assign the key/value pairs... (a little code snipped for brevity)
create or replace procedure task_expired(
  v_store_id       in integer,
  v_task_action_id in integer,
  v_job_id         in integer
)
as
  -- [SNIP code...]
  v_sql      VARCHAR2(4000) := 'select * from my_table where pk = 123'; -- Your SQL here!
  v_v_val    VARCHAR2(4000);
  v_n_val    NUMBER;
  v_d_val    DATE;
  v_ret      NUMBER;
  c          NUMBER;
  d          NUMBER;
  col_cnt    INTEGER;
  f          BOOLEAN;
  rec_tab    DBMS_SQL.DESC_TAB;
  col_num    NUMBER;
  vAsString  VARCHAR2(4000);
BEGIN
  -- [SNIP code...]
  message_properties.correlation := 'EDF_EVENT';
  msg := SYS.AQ$_JMS_BYTES_MESSAGE.construct();
  msg.set_string_property('queueName', 'shipping/csi_cth');
  msg.set_string_property('MODE', 'CR8');
  c := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(c, v_sql, DBMS_SQL.NATIVE);
  d := DBMS_SQL.EXECUTE(c);
  DBMS_SQL.DESCRIBE_COLUMNS(c, col_cnt, rec_tab);
  FOR j IN 1 .. col_cnt
  LOOP
    CASE rec_tab(j).col_type
      WHEN 2 THEN
        DBMS_SQL.DEFINE_COLUMN(c, j, v_n_val);        -- Number
      WHEN 12 THEN
        DBMS_SQL.DEFINE_COLUMN(c, j, v_d_val);        -- Date
      ELSE
        DBMS_SQL.DEFINE_COLUMN(c, j, v_v_val, 2000);  -- Else treat as varchar2
    END CASE;
  END LOOP;
  LOOP
    v_ret := DBMS_SQL.FETCH_ROWS(c);
    EXIT WHEN v_ret = 0;
    FOR j IN 1 .. col_cnt
    LOOP
      -- Fetch each column into the correct data type based on col_type
      CASE rec_tab(j).col_type
        WHEN 2 THEN
          DBMS_SQL.COLUMN_VALUE(c, j, v_n_val);
          vAsString := to_char(v_n_val);
        WHEN 12 THEN
          DBMS_SQL.COLUMN_VALUE(c, j, v_d_val);
          vAsString := to_char(v_d_val, 'DD/MM/YYYY HH24:MI:SS');
        ELSE
          DBMS_SQL.COLUMN_VALUE(c, j, v_v_val);
          vAsString := v_v_val;
      END CASE;
      msg.set_string_property(rec_tab(j).col_name, vAsString);
    END LOOP;
  END LOOP;
  DBMS_SQL.CLOSE_CURSOR(c);
  DBMS_AQ.ENQUEUE(queue_name         => 'cbus.aqjms_common',
                  enqueue_options    => enqueue_options,
                  message_properties => message_properties,
                  payload            => msg,
                  msgid              => message_handle);
  dbms_output.put_line('00 Msgid = ' || message_handle);
  dbms_output.put_line('===Done=');
  -- [SNIP code...]
END;
/
-
Hello,
Oracle version: 12.1.0.1.0 - 64 bit
OS: Fedora Core 17 x86_64
My question is about the preservation of the time fields (hour, minute, second) of DATE columns in a table when NLS_DATE_FORMAT changes.
Take the following test case:
SQL> create table tmptab(dateval date);

Table created.

SQL> alter session set nls_date_format = 'yyyy-mm-dd hh24:mi:ss';

Session altered.

SQL> insert into tmptab(dateval) values('2014-01-01 00:00:00');

1 row created.

SQL> insert into tmptab(dateval) values('2014-01-01 10:00:00');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from tmptab;

DATEVAL
-------------------
2014-01-01 00:00:00
2014-01-01 10:00:00

SQL> alter session set nls_date_format = 'yyyy';

Session altered.

SQL> select * from tmptab where dateval > '2014';

no rows selected

SQL>
I don't understand why it returns nothing. The second insert statement in the test case above inserted a row with 10 as the hour value of the DATE column dateval.
Accordingly, when comparing this with the literal '2014' (which, based on the new value NLS_DATE_FORMAT = 'yyyy', is implicitly converted to DATE), shouldn't the above query return the row 2014-01-01 10:00:00?
I mean, I changed NLS_DATE_FORMAT, but the time fields of the data in the table are preserved, and that is why they should normally be taken into account in the date comparison.
What I'm trying to say is that, for me (please correct me if I'm wrong), no matter what the NLS_DATE_FORMAT setting is, the following test
SQL> select * from tmptab where dateval > '2014';
is the same thing as
SQL> select * from tmptab where dateval > to_date('2014-01-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss');
And the row 2014-01-01 10:00:00 is in the tmptab table. So the following test
2014-01-01 10:00:00 > to_date('2014-01-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
should normally evaluate to true (because of HOUR = 10 on the left side of the test), and therefore this row should be returned, which is not the case in the test above.
Could you kindly tell me what I have misunderstood?
Thanks in advance,
This is the price for using implicit conversions. Implicit DATE conversion rules are not as straightforward as you might assume. In your case, all you provide in the date format is the year. In this case, the implicit date conversion rules assume the current month as month, 1 as day, and 00:00:00 as time.
SQL> alter session set nls_date_format = 'yyyy';

Session altered.

SQL> select to_char(to_date('2014'), 'mm/dd/yyyy hh24:mi:ss') from dual;

TO_CHAR(TO_DATE('20
-------------------
08/01/2014 00:00:00

SQL>
So, when you run:
select * from tmptab where dateval > '2014'
Oracle implicitly converts '2014' to a date using 'YYYY', which results in August 1, 2014. That's why your query returns no rows.
SY.
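A rule of thumb that follows from this answer: compare DATE columns against an explicit TO_DATE (or an ANSI date literal), so the result never depends on the session's NLS_DATE_FORMAT. A sketch against the tmptab table above:

```sql
-- Explicit conversion: independent of NLS settings.
select *
  from tmptab
 where dateval > to_date('2014-01-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss');

-- Or an ANSI DATE literal (midnight implied), same idea:
select *
  from tmptab
 where dateval > DATE '2014-01-01';
```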
-
How to get the data in the table if the date is less than two months from the current date?
Hi all,
I have a requirement to retrieve table data based on a date column.
I have a date field (expiration date), and I need to display the data which expires within two months.
Please give me a solution.
Regards,
Shankar
Hi, Shankar,
Sorry, we don't know exactly what you want.
If you want something which, when executed on March 13, 2014, finds all the rows where the expiration date is between March 13, 2014 and May 13, 2014, inclusive:
SELECT *  -- or whatever columns you want
FROM table_x
WHERE expiration_date >= TRUNC (SYSDATE)
  AND expiration_date <  ADD_MONTHS (TRUNC (SYSDATE), 2) + 1
;
I hope that answers your question.
If not, post a small example of data (CREATE TABLE and INSERT statements, with only the relevant columns) for all of the tables involved, and also post the results you want from that data.
Explain, using specific examples, how you get those results from that data.
Always say what version of Oracle you are using (for example, 11.2.0.2.0).
See the forum FAQ: https://forums.oracle.com/message/9362002
-
missing parenthesis when inserting distinct rows selected from another table
Hello,
could you help me with the following question?
I have the following tables:
CREATE TABLE table1 (
  id    varchar(12),
  col2  varchar(10),
  col3  varchar(10),
  level varchar(10));

CREATE TABLE table2 (
  id2 varchar(12),
  a   varchar(10),
  b   number(1),
  CONSTRAINT pk PRIMARY KEY (id2, a));

INSERT INTO table2 (id2, a, b)
SELECT id, col2,
       MAX(CASE WHEN level = 'level1' THEN 1
                WHEN level = 'level2' THEN 2
                WHEN level = 'level3' THEN 3) AS colIN3
FROM table1 GROUP BY id2, a;
The first table has duplicates as follows:
Id2 COL2 COL3 level
A1 pepe football level1
A1 pepe football level2
A1 pepe football level1
A1 pepe basket level2
A1 pepe pingpong level3
The output should contain only rows with a unique key (id2, col3), and the level must be the greatest:
Id2 COL2 COL3 level
A1 pepe football level2
A1 pepe basket level2
A1 pepe pingpong level3
The output of the script gives me the following message:
- missing right parenthesis, referring to the MAX function.
Thanks in advance.
Kind regards,
Hello,
Remember the ABC's of GROUP BY:
When you use a GROUP BY clause or an aggregate function, then everything in the SELECT clause must be:
(A) an Aggregate function,
(B) one of the GROUP BY expressions,
(C) a Constant, or
(D) something that Depends on the foregoing. (For example, if you "GROUP BY TRUNC(dt)", you can SELECT "TO_CHAR(TRUNC(dt), 'Mon-DD')".)
In your query, there are 5 columns in the SELECT clause. The last one is a MAX(...) function; it is an aggregate, so it is fine.
The first 2 columns are also named in the GROUP BY clause, so they are fine as well.
The other 2 columns, country and internal_id, do not match any of the above categories. These 2 columns cause the error.
There are many ways to avoid this error, each producing different results. You could:
- remove these 2 columns from the SELECT clause
- add these 2 columns to the GROUP BY clause
- use aggregate functions such as MIN on the 2 columns
- remove country from the SELECT clause and add internal_id to the GROUP BY clause
- remove internal_id from the SELECT clause and add country to the GROUP BY clause
- ...
What are the results you want?
Whenever you have a question, please post a small example of data (CREATE TABLE and INSERT statements) for all the tables involved, so people who want to help you can recreate the problem and test their ideas. Also post the results you want from this data, as well as an explanation of how you get these results from these data.
Always say what version of Oracle you are using (for example, 11.2.0.2.0).
See the FAQ forum: https://forums.oracle.com/message/9362002
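For completeness, a sketch of an INSERT ... SELECT that parses and follows the GROUP BY rules above (hedged: it reuses the table and column names from the question; the CASE is closed with END, and the GROUP BY lists exactly the non-aggregate SELECT columns):

```sql
-- Sketch: highest level per (id, col2).
-- Note: LEVEL is a reserved word in Oracle, so a real table would need
-- a different column name or a quoted identifier.
INSERT INTO table2 (id2, a, b)
SELECT id, col2,
       MAX(CASE WHEN level = 'level1' THEN 1
                WHEN level = 'level2' THEN 2
                WHEN level = 'level3' THEN 3
           END) AS max_level
FROM table1
GROUP BY id, col2;
```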
-
(1) I received an email alert like this:
Event_alert
2013-09-17 22:00:16 ERROR OGG-01028 Oracle GoldenGate Capture for Oracle, ext_1.prm: object with object number 80673 is compressed. Table compression is not supported.
2013-09-17 22:00:16 ERROR OGG-01668 Oracle GoldenGate Capture for Oracle, ext_1.prm: PROCESS ABENDING.
(2) I could not find the object:
SQL> select object_id, object_name from dba_objects where object_id = 80673;
no rows selected
(3) I restarted the EXT process a few times to recover.
(4) My extract parameters:
cat ext_1.prm
EXTRACT ext_1
USERID ogg, PASSWORD ogg
TRANLOGOPTIONS EXCLUDEUSER ogg
SETENV (NLS_LANG = AMERICAN_AMERICA.ZHS16GBK)
--SETENV (NLS_LANG = AMERICAN_AMERICA.AL32UTF8)
EXTTRAIL ./dirdat/t1
DYNAMICRESOLUTION
TABLE YBK.*;
This is a bug that has been fixed:
Extract abending with "Table Compression is not supported", even if the database has no compressed tables. (Doc ID 1510691.1)
Event text: Oracle GoldenGate Capture for Oracle, ext_1.prm: object with object number 86173 is compressed. Table compression is not supported.
The workaround is to exclude the offending object:
TABLEEXCLUDE *.DBMS_TABCOMP_TEMP*;
-
Questions on materialized views and MV log tables
Hi all,
I have some questions about materialized views.
(1) Once the materialized view has read the records from the MLOG table, do the MLOG entries get purged? Correct? Or is that not the case? In some cases, I still see (old) records in the MLOG table even after refreshing the MV.
(2) How does the MLOG table distinguish between a read that comes from an MV refresh and a read that comes from a user? If I manually execute "select * from <MLOG table>", would the MLOG records get purged the same way as after an MV refresh?
(3) One of our MV refreshes hangs intermittently. Based on the wait events, I noticed it was doing 'db file sequential read' against the master table. Eventually I had to kill the refresh. I don't know why it was doing sequential reads on the master table when it should be reading the MLOG table. Any ideas?
(4) I have usually seen 'db file scattered read' (full table scan) on tables, but I was surprised to see 'db file sequential read' against a table. I thought sequential reads normally occur against indexes. Has anyone noticed this behavior?
Thanks for your time.
(1) Once all the registered materialized views have read a particular row from a materialized view log, it is removed, yes. If there are multiple materialized views based on the same log, they would all need to refresh before it would be safe to delete the log entry. If one of the materialized views never does an incremental (fast) refresh, there may be cases where the log is never purged automatically.
(2) No, your query does not cause anything to be purged (although you wouldn't see anything interesting unless you implemented a lot of code to analyze the change vectors stored in the log). I don't know that the exact mechanism Oracle uses has been published, but you could trace a session to get an idea of the moving parts. From a practical point of view, you just need to know that when you create a fast-refreshable materialized view, it registers itself as interested in particular MV logs.
(3) It depends on what is stored in the MV log. The refresh process may need to fetch specific table columns if your log stores just the fact that the data for a particular key changed. You can specify when you create a materialized view log that you want to store specific columns, or include the new values (with the INCLUDING NEW VALUES clause). That may be beneficial (or necessary) for the fast refresh process, but it tends to increase the storage space for the materialized view log and the cost of maintaining it.
(4) Sequential reads on a table are perfectly normal - they just mean that someone is looking up a single block of the table (i.e. fetching a row by ROWID, based on the ROWID in an index or a materialized view log).
Justin
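To illustrate point (3), a hedged sketch of a materialized view log definition that stores extra columns and the new values (hypothetical table and column names, not from the original thread):

```sql
-- Sketch: log changes to specific columns and keep the new values,
-- so a fast refresh can avoid revisiting the master table.
CREATE MATERIALIZED VIEW LOG ON sales
  WITH PRIMARY KEY, ROWID (amount, sale_date)
  INCLUDING NEW VALUES;
```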