Aggregation of data

Hello

I'm curious to know whether this problem can be solved with SQL alone.  I have a table of holds, and I need some sort of aggregation to display all the holds that apply to a line.  Holds can also be placed at the header level, in which case they apply to every line.

I need the data to display like this: one row for each header and line (holds at the header level are shown on each line and do not get a row of their own; they are the rows with a NULL line_id).

header_id  line_id  holds      billing_holds  picking_holds

1          1        'B C L W'  'B C'          'L'

1          2        'B C L'    'B C'          'L'

1          3        'C L'      'B'            'L'

Can this be done with SQL only?

Thank you

-Johnnie

CREATE TABLE HOLD_TYPES
(
  HOLD_ID                NUMBER NOT NULL,
  HOLD_NAME              VARCHAR2(80) NOT NULL,
  HOLD_CODE              VARCHAR2(1),
  HOLD_TYPE              VARCHAR2(20) NOT NULL
);


INSERT INTO HOLD_TYPES
   (HOLD_ID, HOLD_NAME, HOLD_CODE, HOLD_TYPE)
VALUES
   (1, 'Customized Billing', 'B', 'BILLING' );
INSERT INTO HOLD_TYPES
   (HOLD_ID, HOLD_NAME, HOLD_CODE, HOLD_TYPE)
VALUES
   (2, 'Delayed Billing', 'B', 'BILLING' );
INSERT INTO HOLD_TYPES
   (HOLD_ID, HOLD_NAME, HOLD_CODE, HOLD_TYPE)
VALUES
   (3, 'Combo Billing', 'C', 'BILLING' );
INSERT INTO HOLD_TYPES
   (HOLD_ID, HOLD_NAME, HOLD_CODE, HOLD_TYPE)
VALUES
   (4, 'Staging Block', 'W', 'SHIPPING' );
INSERT INTO HOLD_TYPES
   (HOLD_ID, HOLD_NAME, HOLD_CODE, HOLD_TYPE)
VALUES
   (5, 'Waiting Letter of Credit', 'L', 'PICKING' );


CREATE TABLE ORDER_HOLDS
(
  HEADER_ID                        NUMBER        NOT NULL,
  LINE_ID                          NUMBER,
  HOLD_ID                          VARCHAR2(40)  NOT NULL
);


INSERT INTO ORDER_HOLDS
  ( HEADER_ID, LINE_ID, HOLD_ID )
VALUES( 1, 1, 1 );
INSERT INTO ORDER_HOLDS
  ( HEADER_ID, LINE_ID, HOLD_ID )
VALUES( 1, 1, 4 );
INSERT INTO ORDER_HOLDS
  ( HEADER_ID, LINE_ID, HOLD_ID )
VALUES( 1, 2, 2 );
INSERT INTO ORDER_HOLDS
  ( HEADER_ID, LINE_ID, HOLD_ID )
VALUES( 1, 3, 3 );
INSERT INTO ORDER_HOLDS
  ( HEADER_ID, LINE_ID, HOLD_ID )
VALUES( 1, NULL, 5 );
INSERT INTO ORDER_HOLDS
  ( HEADER_ID, LINE_ID, HOLD_ID )
VALUES( 1, NULL, 3 );




Hi, Johnnie,

Assuming you're on Oracle 11.2 or later, you can use LISTAGG to get a delimited list of the aggregated codes.

Applying the holds that have a NULL line_id to all of the real line_ids is the more complicated part (at least for me).

Here's one way:

WITH effective_holds AS
(
    SELECT  header_id
    ,       line_id
    ,       hold_id
    FROM    order_holds
    WHERE   line_id IS NOT NULL
    UNION
    SELECT  l.header_id
    ,       l.line_id
    ,       w.hold_id
    FROM    order_holds  l
    JOIN    order_holds  w  ON   l.header_id = w.header_id
                            AND  l.line_id   IS NOT NULL
                            AND  w.line_id   IS NULL
)
SELECT    eh.header_id, eh.line_id
,         LISTAGG (ht.hold_code, ' ')
              WITHIN GROUP (ORDER BY ht.hold_code)    AS holds
,         LISTAGG (CASE WHEN ht.hold_type = 'BILLING' THEN ht.hold_code END, ' ')
              WITHIN GROUP (ORDER BY ht.hold_code)    AS billing_holds
,         LISTAGG (CASE WHEN ht.hold_type = 'PICKING' THEN ht.hold_code END, ' ')
              WITHIN GROUP (ORDER BY ht.hold_code)    AS picking_holds
FROM      effective_holds  eh
JOIN      hold_types       ht  ON  ht.hold_id = eh.hold_id
GROUP BY  eh.header_id, eh.line_id
ORDER BY  eh.header_id, eh.line_id
;

Once again, LISTAGG is new in version 11.2.  If you're on Oracle 9.1 or later, you can use SYS_CONNECT_BY_PATH instead of LISTAGG.
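
In case it helps, here is a rough sketch of that pre-11.2 style of string aggregation, reusing the posted tables and the effective_holds idea from above (it assumes at least 9.2 for the WITH clause; on earlier releases the same thing can be written with inline views). Only the overall holds list is built; the BILLING and PICKING variants would repeat the pattern with a CASE expression:

    -- Pre-11.2 style string aggregation with SYS_CONNECT_BY_PATH (sketch only).
    WITH effective_holds AS
    (
        SELECT  header_id, line_id, hold_id
        FROM    order_holds
        WHERE   line_id IS NOT NULL
        UNION
        SELECT  l.header_id, l.line_id, w.hold_id
        FROM    order_holds l
        JOIN    order_holds w  ON  l.header_id = w.header_id
        WHERE   l.line_id IS NOT NULL
        AND     w.line_id IS NULL
    )
    , numbered AS
    (
        SELECT  eh.header_id, eh.line_id, ht.hold_code
        ,       ROW_NUMBER () OVER ( PARTITION BY eh.header_id, eh.line_id
                                     ORDER BY     ht.hold_code )  AS rn
        FROM    effective_holds eh
        JOIN    hold_types      ht  ON  ht.hold_id = eh.hold_id
    )
    SELECT    header_id, line_id
    ,         LTRIM (SYS_CONNECT_BY_PATH (hold_code, ' '))        AS holds
    FROM      numbered
    WHERE     CONNECT_BY_ISLEAF = 1        -- keep only the complete path per group
    START WITH  rn = 1
    CONNECT BY  rn        = PRIOR rn + 1
            AND header_id = PRIOR header_id
            AND line_id   = PRIOR line_id
    ORDER BY  header_id, line_id
    ;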

If the same hold is applied two or more times to the same header_id and line_id, only one copy of the code appears in the delimited lists.  If that isn't what you want, then we can change effective_holds; maybe do a FULL OUTER JOIN instead of a UNION.
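
As one illustration of that point (my own variation, not necessarily the change the answer has in mind): switching the UNION to UNION ALL keeps both copies when a hold exists at the header and on the line, so LISTAGG then shows the code twice:

    -- Variation sketch: UNION ALL keeps a hold that appears both at the header
    -- and on the line, so its code shows up twice in the list.
    WITH effective_holds AS
    (
        SELECT  header_id, line_id, hold_id
        FROM    order_holds
        WHERE   line_id IS NOT NULL
        UNION ALL                           -- UNION would remove the duplicates
        SELECT  l.header_id, l.line_id, w.hold_id
        FROM    order_holds l
        JOIN    order_holds w  ON  l.header_id = w.header_id
        WHERE   l.line_id IS NOT NULL
        AND     w.line_id IS NULL
    )
    SELECT    eh.header_id, eh.line_id
    ,         LISTAGG (ht.hold_code, ' ')
                  WITHIN GROUP (ORDER BY ht.hold_code)  AS holds
    FROM      effective_holds eh
    JOIN      hold_types      ht  ON  ht.hold_id = eh.hold_id
    GROUP BY  eh.header_id, eh.line_id
    ORDER BY  eh.header_id, eh.line_id
    ;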

Thanks for posting the CREATE TABLE and INSERT statements; it's very helpful!

Tags: Database

Similar Questions

  • Aggregation of data from multiple sources with BSE

    Hello

    I want to aggregate data from multiple data sources with a BSE service and, after this, call a BPEL process that works with these data.
    1. Read data from data source A (dbadapter select call)
    2. Read data from data source B (dbadapter select call)
    3. Assemble the data with an XSL mapping
    4. Call BPEL

    Is this possible? How can I get the data from the first call and the second call into the transformation? If I receive data from the second call, the data from the first call seem to be lost.

    Any ideas?
    Gregor

    Gregor,

    It seems that this aggregation of data is not possible in the BSE. It can be done in BPEL, but only using assigns, not using transformations. I tried to use transformations by passing a third argument to the ora:processXSLT function, but could not get the desired result.

    For more information on passing a second variable (of another schema) as a parameter to the XSLT, please refer to the post

    http://blogs.Oracle.com/rammenon/2007/05/

    and the bug fix "Passing BPEL Variable content in XSLT as parameters".

    Hope this helps you.

    Thank you, Vincent.

  • Questions on the aggregation of data for parents

    Hi guys,

    I have two questions.

    One: once I enter data at the leaf level, the values aggregate up to their parents, except in the Period dimension. The aggregation method is set elsewhere, the storage type is Never Share, and the data type is Currency. Do I have to provide a member formula to calculate the values for upper-level members?

    Two: some non-leaf members need their own values. For example, a business unit in the Entity dimension has some departments. Apart from the consolidation of the departments, the BU needs a budget of its own. But non-leaf member cells are read-only. How do I deal with this situation?

    Thank you

    You can't, not in a bottom-up version. You can allocate bu1_ to its siblings. Or you can use a top-down version, input values at the parents, and allocate down to the non-leaf members.

    Cheers,
    Alp

  • Pivoting data with aggregation

    Hello

    I have the following output:

    node_id | object_name  | att_value | attribute_name
    469988  | Serum sample | Project   | GSKMDR - Status
    469988  | Serum sample | 1         | GSKMDR - Version

    Based on the above output, I need to get output in the form below:

    node_id | object_name  | status  | version
    469988  | Serum sample | Project | 1

    I tried to use PIVOT and WM_CONCAT but had no luck. Can you please share any ideas?

    My query is given below.

    WITH first_req AS
    (
        SELECT  node_id, object_name, att_value, attribute_name
        FROM   (SELECT  n.node_id, o.name object_name, o.object_id,
                        att.name_display_code att_ndc, att.value att_value,
                        atty.name attribute_name, n.deletion_date
                FROM    vp40.nodes            n,
                        vp40.objects          o,
                        vp40.attributes       att,
                        vp40.attribute_types  atty
                WHERE   n.object_id           = o.object_id
                AND     o.object_id           = att.object_id
                AND     att.attribute_type_id = atty.attribute_type_id) t
        WHERE   deletion_date  = '01-JAN-1900'
        AND     attribute_name IN ('GSKMDR - Version', 'GSKMDR - Status')
        AND     node_id        = :node_id   -- node id of the "GSKMDR - Concept Template" object
    )
    SELECT * FROM first_req

    ravt261 wrote:
    Hello

    I have the following output:

    node_id | object_name  | att_value | attribute_name
    469988  | Serum sample | Project   | GSKMDR - Status
    469988  | Serum sample | 1         | GSKMDR - Version

    Based on the above output, I need to get output in the form below:

    node_id | object_name  | status  | version
    469988  | Serum sample | Project | 1

    I tried to use PIVOT and WM_CONCAT but had no luck. Can you please share any ideas?

    Well, PIVOT should work, though you haven't shown what you tried in that regard.
    WM_CONCAT is an undocumented function, so you'd be unwise to rely on it, because its functionality may change or it may be removed in future versions.

    This will pivot the data based on what you have provided (and it will work in versions older than 11g):

    select node_id, object_name
          ,max(decode(attribute_name,'GSKMDR - Status',att_value))  as status
          ,max(decode(attribute_name,'GSKMDR - Version',att_value)) as version
    from first_req   -- the FROM clause was missing; use your first_req subquery (or the base tables) here
    group by node_id, object_name
    order by node_id
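
    For completeness, here is a rough sketch of the 11g PIVOT form; it assumes the first_req subquery from the posted query as the source (i.e. it would replace the final "Select * from first_req"):

    -- Sketch only: PIVOT over the first_req subquery defined in the WITH clause above.
    -- PIVOT implicitly groups by the remaining columns (node_id, object_name).
    SELECT  node_id, object_name, status, version
    FROM    first_req
    PIVOT  (MAX (att_value)
            FOR attribute_name IN ('GSKMDR - Status'  AS status,
                                   'GSKMDR - Version' AS version))
    ORDER BY node_id;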
    

    My query is given below.

    WITH first_req AS
    (
        SELECT  node_id, object_name, att_value, attribute_name
        FROM   (SELECT  n.node_id, o.name object_name, o.object_id,
                        att.name_display_code att_ndc, att.value att_value,
                        atty.name attribute_name, n.deletion_date
                FROM    vp40.nodes            n,
                        vp40.objects          o,
                        vp40.attributes       att,
                        vp40.attribute_types  atty
                WHERE   n.object_id           = o.object_id
                AND     o.object_id           = att.object_id
                AND     att.attribute_type_id = atty.attribute_type_id) t
        WHERE   deletion_date = '01-JAN-1900'

    Dates should be treated as dates, not strings.

                   WHERE deletion_date = to_date('01-JAN-1900','DD-MON-YYYY')
    

    or (since it is just a date, with no time of day)...

                   WHERE deletion_date = date '1900-01-01'
    
  • Aggregation of RF

    Hello

    Can an AP1100 function as a WDS capable of RF aggregation?

    Thank you.

    Hi Navid,

    The 1100 Series AP can be set up for WDS. Here is how the WDS AP is involved in radio management (RM):

    WDS must be active on one access point in each AP subnet; a backup WDS can also be defined in each AP subnet. The WDS provides:

    Fast, secure Layer 2 roaming for wireless clients: the WDS acts as the 802.1x authenticator for the Layer 2 wireless clients on the network.

    Radio management (RM) data aggregation: the WLSE provides intelligent processing of the aggregated radio data that the WDS access point collects from the other devices on the wireless network. The WLSE can manage multiple subnets, so it can receive radio data from many WDS APs.

    There is no RM data aggregation without a WDS.

    See this doc:

    http://www.Cisco.com/en/us/products/SW/cscowork/ps3915/products_user_guide_chapter09186a0080527f1f.html#wp1617750

    What is the role of the WDS device in the wireless LAN (WLAN)?

    The WDS device performs these tasks on your WLAN:

    Advertises its WDS capability and participates in electing the best WDS device for your WLAN. When you configure your WLAN for WDS, you set up one device as the main WDS candidate and one or more additional devices as backup WDS candidates. If the main WDS device goes offline, one of the backup WDS devices takes its place.

    Authenticates all the APs in the subnet and establishes a secure communication channel with each of them.

    Collects radio data from the APs in the subnet, aggregates it, and forwards it to the Wireless LAN Solution Engine (WLSE) device on your network.

    Registers all client devices in the subnet, establishes session keys for them, and caches their security credentials. When a client roams to another access point, the WDS device forwards the client's security credentials to the new access point.

    See this doc:

    Wireless Domain Services FAQ

    http://www.Cisco.com/en/us/Tech/tk722/tk809/technologies_q_and_a_item09186a00804d4421.shtml#QA6

    1100 Series AP - Configuring WDS, Fast Secure Roaming, and Radio Management

    http://www.Cisco.com/en/us/docs/wireless/access_point/12.3_2_JA/configuration/guide/s32roamg.html

    I hope this helps!

    Rob

  • How to use aggregate functions with dates

    Hi all

    I have a group of dates; is it possible to get the MAX and MIN of the dates?

    I tried it like this, but it errors out: <?MIN(current-group()/CREATION_DATE)?>

    I also tried this, but it does not work:
    <?xdoxslt:minimum(CREATION_DATE)?>

    Is it possible to use aggregation functions with date values?

    Thanks and Regards
    Srikkanth

    You can use it.
    Just ensure that the "date" is in canonical format.

  • ADF pivot table: totals on rows and columns in a different color, JDev 11.1.2.3

    Hi all

    I have data aggregation on my pivot table set to display totals for all rows and all columns, but they appear exactly like all the other cells. To distinguish them clearly, I want to change the color of the entire totals column and the totals row at the end to something different, say grey.

    Is it possible to do this, preferably without binding the pivot table to a bean?

    The dataIsTotal metadata allows you to set a color when dataIsTotal is true.

  • How to change the sheet name dynamically?

    I use an Excel template, and my requirement is to change the worksheet name (NOT the file name) of the Excel output at bursting time.

    Is this possible, and if so, how? The user wants the group number and the report name in it.

    For example, for group 1a and the report name "Audit Report", the worksheet name must be "1a - Audit Report".

    I understand this is done with the data constraints in the XDO_METADATA worksheet.

    You can add the two functions XDO_SHEET_? and XDO_SHEET_NAME_?. By aggregating my data correctly, I was able to produce several sheets with a dynamic name for each worksheet.

    Search for more information on these two functions and you will find your answer.

  • Planning metadata loading problem

    Hello.

    I use this syntax to load metadata:

    OutlineLoad /A:sample12 /U:admin /M /I:D:\1\custom.csv /D:custom /L:D:\1\custom.log /X:D:\1\custom_ex.exc

    When I load the metadata I get the problem below, even though I put a space between 'Type' and '(PL)'. Please advise.
    Thanks in advance
    Log and exceptions:

    [Tue May 22 08:14:12 IST 2012] Unrecognized column header value 'Plan Type (PL)'.
    [Tue May 22 08:14:12 IST 2012] Unable to obtain dimension information and/or perform a data load: unrecognized column header values, refer to previous messages. (Note: column header values are case sensitive.)
    [Tue May 22 08:14:12 IST 2012] Planning outline data store load process finished with exceptions: not all input records were read due to errors (or an empty input file). 0 records were read, 0 records were processed, 0 were loaded successfully, 0 were rejected.


    [Tue May 22 08:14:12 IST 2012] Input file located and opened successfully: "D:\1\custom.csv".
    [Tue May 22 08:14:12 IST 2012] Record header fields: Parent, custom, Alias: Default, Aggregation (PL), Data Storage, Plan Type (PL)
    [Tue May 22 08:14:12 IST 2012] Located and using "custom" dimension for loading the data into application "sample12".
    [Tue May 22 08:14:12 IST 2012] Unable to obtain dimension information and/or perform a data load: unrecognized column header values, refer to previous messages. (Note: column header values are case sensitive.)
    [Tue May 22 08:14:12 IST 2012] Examine the Essbase log files to see whether Essbase data was loaded.
    [Tue May 22 08:14:12 IST 2012] Planning outline data store load process finished with exceptions: not all input records were read due to errors (or an empty input file). 0 records were read, 0 records were processed, 0 were loaded successfully, 0 were rejected.

    Hello

    I see that you are loading metadata for a custom dimension. Is your custom dimension valid for the plan type "PL"?

    Thank you
    Sourabh

  • Planning changes to metadata

    Hi all

    It may be a dumb question, but I don't get the idea... Here is my requirement. I build the metadata in Planning with ODI; my source system is Oracle EBS. But in my source system the accounting structure changes often, so a child may become a parent and a parent may become a child. How do I handle this, given that Essbase will restructure, and how do I handle the data?

    (I would like to change my email id for the OTN forum. Can you guide me on how to change it?)


    Regards
    PrakashV

    If you have changes in the metadata, then when you load again the members will move within the hierarchy if the changes are valid; these changes must then be refreshed to Essbase.
    Then you have to take care of aggregating the data using a business rule or calc script.

    Email - go to http://www.oracle.com/technetwork/index.html, click on your account at the top, then click on change username.

    Cheers

    John
    http://John-Goodwin.blogspot.com/

  • Partial aggregate

    Hello

    The customer does not have actuals at the department level; instead, they load actual values directly to the head office (HO) node (which is the parent of the departments).

    I am developing a rule to aggregate the loaded data on the Entity dimension, but AGG (Entity) does not give what I need, because adding up the departments under HO puts null values into the HO node.

    I tried to write a calc script to get the desired output; I wrote something like the below:

    FIX ('No LOB', 'WorkingVersion', 'No RLM', 'Local', 'No Segment', &V_CURRENT_YEAR, 'Actual', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', '1000', '1091')
        @ICHILDREN("01");
    ENDFIX

    If I understand correctly, @ICHILDREN is supposed to add 01 = 010001 + 0101 + 0102, but the above is adding up the departments and filling a null value into the HO node.

    My Entity dimension hierarchy:

    All Branches
        01
            010001 (HO)
                0100001 (Dep1)
                0100002 (Dep2)
            0101 (Branch1)
            0102 (Branch2)

    I want 01 = 010001 + 0101 + 0102, ignoring the values of 0100001 and 0100002.

    In addition, all Entity members are set to Store data.

    Any idea what is happening? Any other way to reach my goal?

    Thanks in advance

    Hello
    Try
    SET AGGMISSG OFF;
    Fix()
    AGG (Entity);
    Endfix

    Cheers,
    Alp

  • Error in the SQL statement when executing ETL of DAC

    Hi all

    I have installed and configured BI Apps 7.9.6.3, and then I ran the full load of the CRM - Loyalty subject area in DAC. I got a "task failed" error, so I checked the session log files on the Informatica server.

    $ view SEBL_VERT_811.DATAWAREHOUSE.SDE_SBL_Vert_811_Adaptor.SDE_GeographyDimension_Business.log

    DIRECTOR > use VAR_27028 replace value [0] for the variable defined by the workflow/worklet user: [$passInStatus].
    DIRECTOR > VAR_27028 use override the value [DataWarehouse] session parameter: [$DBConnection_OLAP].
    DIRECTOR > VAR_27028 use override the value [SEBL_VERT_811] for the session parameter: [$DBConnection_OLTP].
    DIRECTOR > VAR_27028 use override the value [SEBL_VERT_811.DATAWAREHOUSE. SDE_SBL_Vert_811_Adaptor.SDE_GeographyDimension_Business.log] for the session parameter: [$PMSessionLogFile].
    DIRECTOR > VAR_27028 use override value [1] to parameter mapping: [MPLT_LOAD_W_GEO_DS. [DATASOURCE_NUM_ID$ $].
    DIRECTOR > VAR_27028 use override value for the parameter mapping]: [$$ Hint1].
    DIRECTOR > VAR_27028 use override value for the parameter mapping]: [$$ council2].
    DIRECTOR > session initialization of TM_6014 [SDE_GeographyDimension_Business] to [my Jul 25 17:29:47 2011].
    DIRECTOR > name of the repository TM_6683: [Oracle_BI_DW_Base]
    DIRECTOR > TM_6684 server name: [Oracle_BI_DW_Server]
    DIRECTOR > TM_6686 folder: [SDE_SBL_Vert_811_Adaptor]
    DIRECTOR > Workflow TM_6685: [SDE_GeographyDimension_Business] run the Instance name: Id series []: [260]
    DIRECTOR > mapping TM_6101 name: SDE_GeographyDimension_Business [version 1].
    DIRECTOR > TM_6963 pre 85 Timestamp compatibility is enabled
    DIRECTOR > the TM_6964 Date of the Session format is [HH24:MI:SS DD/MM/YYYY]
    DIRECTOR > TM_6827 [u01/app/oracle/biapps/dev/Informatica/9.0.1/server/infa_shared/Storage] will be used as the storage of session directory [SDE_GeographyDimension_Business].
    DIRECTOR > CMN_1802 Session recovery cache initialization is complete.
    DIRECTOR > configuration using [DisableDB2BulkMode, Yes] TM_6708 property
    DIRECTOR > configuration using TM_6708 [ServerPort, 6325] property
    DIRECTOR > configuration using [overrideMpltVarWithMapVar Yes] TM_6708 property
    DIRECTOR > configuration using TM_6708 [SiebelUnicodeDB,SIEBEL@ANSDEV dwhadmin@ANBDEV] property

    DIRECTOR > TM_6703 Session [SDE_GeographyDimension_Business] is headed by 64-bit integration Service [node01_hkhgc01dvapp01], [version9.0.1 HotFix2], build [1111].
    MANAGER > PETL_24058 Running score of the Group [1].
    MANAGER > initialization of engine PETL_24000 of parallel Pipeline.
    MANAGER > PETL_24001 parallel Pipeline engine running.
    MANAGER > session initialization PETL_24003 running.
    MAPPING > CMN_1569 Server Mode: [UNICODE]
    MAPPING > code page of the server CMN_1570: [Unicode UTF-8 encoding]
    MAPPING > TM_6151 the session to the sort order is [binary].
    MAPPING > TM_6185 warning. Code page validation is disabled in this session.
    MAPPING > treatment of low accuracy using TM_6156.
    MAPPING > retry TM_6180 blocking logic will not apply.
    MAPPING > TM_6187 Session focused on the target validation interval is [10000].
    MAPPING > TM_6307 DTM error log disabled.
    MAPPING > TE_7022 TShmWriter: initialized
    MAPPING > DBG_21075 connection to the database [ANBDEV], [dwhadmin] users
    MAPPING > Search CMN_1716 [MPLT_LOAD_W_GEO_DS. LKP_W_LST_OF_VAL_G] uses the connection to database [relational: DataWarehouse] in [UTF-8 encoding Unicode] code page
    MAPPING > Search CMN_1716 [MPLT_LOAD_W_GEO_DS. LKP_W_GEO_DS] uses the connection to database [relational: DataWarehouse] in [UTF-8 encoding Unicode] code page
    MAPPING > DBG_21694 AGG_COUNTRY_CITY_ZIPCODE [0] Partition: size = [1048576] Index cache, data cache size = [2097152]
    MAPPING > TE_7212 increasing [Cache of Index] size of transformation [AGG_COUNTRY_CITY_ZIPCODE] of [1048576] to [2402304].
    MAPPING > TE_7212 increasing [Cache data] size of transformation [AGG_COUNTRY_CITY_ZIPCODE] of [2097152] to [2097528].
    MAPPING > TE_7029 aggregate information: create the new Index and data files
    MAPPING > TE_7034 aggregate information: Index file is [u01/app/oracle/biapps/dev/Informatica/9.0.1/server/infa_shared/Cache/PMAGG14527_3_0_260.idx]
    MAPPING > information aggregated TE_7035: data file is [u01/app/oracle/biapps/dev/Informatica/9.0.1/server/infa_shared/Cache/PMAGG14527_3_0_260.dat]
    MAPPING > TM_6007 DTM initialized successfully for the session [SDE_GeographyDimension_Business]
    DIRECTOR > PETL_24033 all the DTM connection information: [< NO >].
    MANAGER > PETL_24004 from the tasks before the session. : (My Jul 25 17:29:47 2011)
    MANAGER > task PETL_24027 before the session completed successfully. : (My Jul 25 17:29:47 2011)
    DIRECTOR > PETL_24006 from data movement.
    MAPPING > Total TM_6660 Buffer Pool size is 36000000 bytes and block size is 128000 bytes.
    LKPDP_2 > DBG_21097 Lookup Transformation [MPLT_LOAD_W_GEO_DS. LKP_W_GEO_DS]: Default sql to create lookup cache: SELECT CITY, COUNTRY, ZIPCODE, STATE_PROV FROM W_GEO_DS ORDER BY CITY, COUNTRY, ZIPCODE, STATE_PROV

    LKPDP_1 > search for DBG_21312 Transformation [MPLT_LOAD_W_GEO_DS. LKP_W_LST_OF_VAL_G]: search replace sql to create the cache: SELECT W_LST_OF_VAL_G.VAL AS VAL, W_LST_OF_VAL_G.R_TYPE AS R_TYPE FROM W_LST_OF_VAL_G
    WHERE
    W_LST_OF_VAL_G.R_TYPE LIKE '% ETL' ORDER BY R_TYPE, VAL

    LKPDP_1 > TE_7212 increasing [Cache of Index] size of transformation [MPLT_LOAD_W_GEO_DS. LKP_W_LST_OF_VAL_G] of [1048576] to [1050000].
    LKPDP_2 > TE_7212 increasing [Cache of Index] size of transformation [MPLT_LOAD_W_GEO_DS. LKP_W_GEO_DS] of [20000000] to [20006400].
    LKPDP_2 > TE_7212 increasing [Cache data] size of transformation [MPLT_LOAD_W_GEO_DS. LKP_W_GEO_DS] of [20000000] to [20004864].
    READER_1_1_1 > DBG_21438 Reader: Source is [ANSDEV], [SIEBEL] users
    READER_1_1_1 > code page Source of BLKR_16051 database connection [SEBL_VERT_811]: [Unicode UTF-8 encoding]
    READER_1_1_1 > BLKR_16003 initialization completed successfully.
    WRITER_1_ * _1 > WRT_8146 author: target's database [ANBDEV], user [dwhadmin], loose [on] mode
    WRITER_1_ * _1 > WRT_8106 WARNING! Session Mode Bulk - recovery is not guaranteed.
    WRITER_1_ * _1 > code page target database connection [Data Warehouse] WRT_8221: [Unicode UTF-8 encoding]
    WRITER_1_ * _1 > target WRT_8124 W_GEO_DS of Table: SQL INSERT statement:
    INSERT INTO W_GEO_DS(CITY,CONTINENT,COUNTRY,COUNTY,STATE_PROV,ZIPCODE,DATASOURCE_NUM_ID,X_CUSTOM) VALUES (?,?,?,?,?,?,?,?)
    WRITER_1_ * _1 > WRT_8020 No. column that is marked as the primary key for the table [W_GEO_DS]. Updates not supported.
    WRITER_1_ * _1 > connection WRT_8270 #1 target group consists of target (s) [W_GEO_DS]
    WRITER_1_ * _1 > WRT_8003 writer initialization complete.
    READER_1_1_1 > BLKR_16007 player run began.
    WRITER_1_ * _1 > WRT_8005 writer run began.
    WRITER_1_ * _1 > WRT_8158

    START SUPPORT SESSION *.

    Startup load time: my Jul 25 17:29:47 2011

    Target table:

    W_GEO_DS


    READER_1_1_1 > RR_4029 SQ Instance [SQ_S_ADDR_ORG] User specified SQL Query [SELECT DISTINCT
    S_ADDR_ORG.CITY,
    S_ADDR_ORG.COUNTRY,
    S_ADDR_ORG.COUNTY,
    S_ADDR_ORG.PROVINCE,
    S_ADDR_ORG.STATE,
    S_ADDR_ORG.ZIPCODE,
    '0' AS X_CUSTOM
    FROM
    V_ADDR_ORG S_ADDR_ORG
    ]
    READER_1_1_1 > RR_4049 SQL Query issued to database: (Mon Jul 25 17:29:47 2011)
    READER_1_1_1 > CMN_1761 Timestamp Event: [Mon Jul 25 17:29:47 2011]
    READER_1_1_1 > RR_4035 SQL Error [
    ORA-00942: table or view does not exist

    Database driver error...
    Function Name : Execute
    Stmt SQL: SELECT DISTINCT
    S_ADDR_ORG.CITY,
    S_ADDR_ORG.COUNTRY,
    S_ADDR_ORG.COUNTY,
    S_ADDR_ORG.PROVINCE,
    S_ADDR_ORG.STATE,
    S_ADDR_ORG.ZIPCODE,
    '0' AS X_CUSTOM
    FROM
    V_ADDR_ORG S_ADDR_ORG

    Oracle Fatal Error
    Database driver error...
    Function Name : Execute
    Stmt SQL: SELECT DISTINCT
    S_ADDR_ORG.CITY,
    S_ADDR_ORG.COUNTRY,
    S_ADDR_ORG.COUNTY,
    S_ADDR_ORG.PROVINCE,
    S_ADDR_ORG.STATE,
    S_ADDR_ORG.ZIPCODE,
    '0' AS X_CUSTOM
    FROM
    V_ADDR_ORG S_ADDR_ORG

    Oracle Fatal Error].
    READER_1_1_1 > CMN_1761 Timestamp Event: [Mon Jul 25 17:29:47 2011]
    READER_1_1_1 > BLKR_16004 ERROR: prepare failed.
    WRITER_1_ * _1 > WRT_8333 roll back all the targets due to the fatal error of session.
    WRITER_1_ * _1 > rollback WRT_8325 Final, executed for the target [W_GEO_DS] at end of load
    WRITER_1_ * _1 > WRT_8035 of full load time: my Jul 25 17:29:47 2011

    SUMMARY OF THE LOAD
    ============

    WRT_8036 target: W_GEO_DS (Instance name: [W_GEO_DS])
    WRT_8044 responsible for this target data no.



    WRITER_1__1 > WRT_8043 * END LOAD SESSION *.
    MANAGER > PETL_24031
    PERFORMANCE INFORMATION FOR TGT SUPPORT ORDER [1] GROUP, SIMULTANEOUS GAME [1] *.
    Thread [READER_1_1_1] created [stage play] point score [SQ_S_ADDR_ORG] is complete. Running time total was enough for significant statistics.
    [TRANSF_1_1_1] thread created for [the scene of transformation] partition has made to the point [SQ_S_ADDR_ORG]. Running time total was enough for significant statistics.
    [TRANSF_1_2_1] thread created for [the scene of transformation] partition has made to the point [AGG_COUNTRY_CITY_ZIPCODE]. Running time total was enough for significant statistics.
    Thread [WRITER_1_ * _1] created for [the scene of writing] partition has made to the point [W_GEO_DS]. Running time total was enough for significant statistics.

    MAPPING > CMN_1791 size which would take [0] groups total lines of entry for [AGG_COUNTRY_CITY_ZIPCODE], in memory, it is [0] index cache bytes
    MAPPING > CMN_1790 cached data size that would [0] groups total lines of entry for [AGG_COUNTRY_CITY_ZIPCODE], in memory, bytes [0]
    MAPPING > CMN_1793 index cache size which would hold [0] lines in the table to search for [MPLT_LOAD_W_GEO_DS. LKP_W_LST_OF_VAL_G], in memory, is bytes [0]
    MAPPING > CMN_1792 cached data size that would [0] lines in the table to search for [MPLT_LOAD_W_GEO_DS. LKP_W_LST_OF_VAL_G], in memory, is bytes [0]
    MAPPING > CMN_1793 index cache size which would hold [0] lines in the table to search for [MPLT_LOAD_W_GEO_DS. LKP_W_GEO_DS], in memory, is bytes [0]
    MAPPING > CMN_1792 cached data size that would [0] lines in the table to search for [MPLT_LOAD_W_GEO_DS. LKP_W_GEO_DS], in memory, is bytes [0]
    MANAGER > PETL_24005 from tasks after the session. : (My Jul 25 17:29:47 2011)
    MANAGER > task of PETL_24029 after the session completed successfully. : (My Jul 25 17:29:47 2011)
    MAPPING > cache TE_7216 deleting files [PMLKUP14527_524289_0_260L64] for processing [MPLT_LOAD_W_GEO_DS. LKP_W_LST_OF_VAL_G].
    MAPPING > cache TE_7216 deleting files [PMLKUP14527_524293_0_260L64] for processing [MPLT_LOAD_W_GEO_DS. LKP_W_GEO_DS].
    MAPPING > TM_6018 the session completed with errors of processing row [0].
    MANAGER > TE_7216 deleting files cache [u01/app/oracle/biapps/dev/Informatica/9.0.1/server/infa_shared/Cache/PMAGG14527_3_0_260.idx] for [AGG_COUNTRY_CITY_ZIPCODE] transformation.
    MANAGER > TE_7216 deleting files cache [u01/app/oracle/biapps/dev/Informatica/9.0.1/server/infa_shared/Cache/PMAGG14527_3_0_260.dat] for [AGG_COUNTRY_CITY_ZIPCODE] transformation.
    MANAGER > parallel PETL_24002 engine Pipeline completed.
    DIRECTOR > Session PETL_24013 run duly filled with failure.
    DIRECTOR > TM_6022

    PLENARY OF THE LOAD
    ================================================

    DIRECTOR > TM_6252 Source load summary.
    DIRECTOR > Table CMN_1740: [SQ_S_ADDR_ORG] (name of the Instance: [SQ_S_ADDR_ORG])
    Output [0] lines, affected lines [0], applied [0] lines, rejected lines [0]
    DIRECTOR > TM_6253 Target Load summary.
    DIRECTOR > Table CMN_1740: [W_GEO_DS] (name of the Instance: [W_GEO_DS])
    Output [0] lines, affected lines [0], applied [0] lines, rejected lines [0]
    DIRECTOR > TM_6023
    ===================================================

    DIRECTOR > TM_6020 Session [SDE_GeographyDimension_Business] to [my Jul 25 17:29:48 2011].


    After reviewing the log, I found that the SELECT statement fails; the SQL below is the problem:

    SELECT DISTINCT
    S_ADDR_ORG.CITY,
    S_ADDR_ORG.COUNTRY,
    S_ADDR_ORG.COUNTY,
    S_ADDR_ORG.PROVINCE,
    S_ADDR_ORG.STATE,
    S_ADDR_ORG.ZIPCODE,
    '0' AS X_CUSTOM
    FROM
    V_ADDR_ORG S_ADDR_ORG

    There is no V_ADDR_ORG, only the table S_ADDR_ORG, in the Siebel transactional database. So I don't know why it builds this SQL when transferring data to the OBAW; this is out-of-the-box BI Apps.

    Experts, how can I solve this problem? Please help, thanks a lot!


    Best regards
    Ryan

    Ryan,

    Yes, you missed creating the views for the source tables. In DAC, go to Design > Tables tab, select any table, right-click on it, and then click Change Capture Scripts > Generate View Scripts. It will ask whether it should generate the script for all tables, and it will generate the view creation script for you. Now run the whole script in the source database; then your problem will be solved.
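
    For reference, the generated full-load change capture view is essentially a pass-through view over the Siebel base table, which is why the mapping selects FROM V_ADDR_ORG. A minimal hand-written sketch for the failing table could look like the statement below; the script DAC generates is the authoritative version and may differ:

    -- Illustration only: run the DAC-generated script rather than this sketch.
    -- For a full load, the change capture view simply exposes the base table.
    CREATE OR REPLACE VIEW V_ADDR_ORG AS
    SELECT *
    FROM   S_ADDR_ORG;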

    If this answers your question, please mark my answer as correct.

    Thank you
    Jay.

  • Committed and uncommitted access

    Hello. This question may be very old and boring for all the geeks here, but I need some information about it, as I could not follow it in the SER60.
    One of my databases takes a long time to run certain scripts. It was not like this in the past. Now, when I checked the transactions under the Database Properties tab, I see my cube is set to uncommitted access with Commit Blocks and Commit Rows as 0.
    If I understand correctly, with uncommitted access Essbase locks all the blocks until the end of the transaction and then commits, while committed access commits the blocks at the specified points.
    But my question is: if uncommitted access waits until the end of the transaction, what does the synchronization point do? Does it commit after reaching the Commit Blocks value mentioned? My database is at 0 - is that already optimal, or should I change these numbers to get something better? I am not worried about data consistency; I need better performance.

    Please advise.
    I see that my cube is checked as uncommitted access with Commit Blocks and Commit Rows as 0.

    This setting causes fragmentation in the cube.
    The default settings are:
    Commit Blocks: 3000
    Commit Rows: 0

    Go back to the default settings and restart the application.

    Then take action to eliminate the fragmentation.
    To remove fragmentation:
    Export the database at level 0.
    Clear all data in the database with CLEARDATA.
    Reload the exported file.
    Aggregate the data.

  • Enable query tracking

    I'm using ASO for reporting purposes. Some data is updated daily. We have report scripts, which run twice a day, that export part of the DB.
    My questions are:

    (1) If I turn query tracking ON and aggregate the data, will it have a positive effect on the report script extraction time?

    (2) For some reason the level-0 export I run using MaxL is taking a long time. Is it possible to reduce the time for the level-0 export? Any .CFG file changes?

    Thanks in advance!

    No, you don't want to create all possible aggregate views (if Essbase would even do it - the SER60 talks about a limit, but I don't know if that is per aggregation script or in total). It isn't necessary to create that many. I'm not sure what you mean by "aggregating the DB". :)

    Increasing the size of the retrieval buffer is worth a test, to see if the performance of your report script improves. The retrieval buffer is allocated per user, but unless you are close to memory limits or have thousands of concurrent users, it is unlikely to create problems.

  • Is a grouped set of results possible in Discoverer?

    Hi all

    I have to present results in the following format in Discoverer:

    Col1.Value1
    Col2   Col3   Col4   Col5
    val1   val2   val3   val4
    val5   val3   val2   val4
    Col1.Value2
    Col2   Col3   Col4   Col5
    val1   val2   val3   val4
    val5   val3   val2   val4

    I tried to put Col1 as a page item, but when I view the values for <All> in Col1, the result set appears, but not the Col1 values, and the results are not grouped by Col1.

    The only thing I could do was display the results as follows, and then do a group sort:

    Col1   Col2   Col3   Col4   Col5
    val1   val2   val3   val5   val4
    val1   val2   val3   val4   val5

    Is that all we can do?
    Please advise...

    Thank you
    VJ

    Hi VJ
    You cannot make Discoverer display results grouped into sets. The only things you can do are:

    1. Use page items
    2. Use group sorting
    3. Use a crosstab with the display either inline or in outline format

    You can make Discoverer change its aggregation method to use Grouping Sets, but this applies to all queries from then on. You do this by changing the Discoverer preferences on the middle tier, in the pref.txt file, setting the parameter called EnhancedAggregationStrategy to 1. Here is the note from the file:

    Controls use of the enhanced data aggregation features, i.e. Rollup, Cube, Grouping Sets
    0 == regular Group By use,
    1 == Grouping Sets strategy - SQL will contain only the Grouping Sets needed by the Discoverer server,
    2 == automatic determination - Grouping Sets of option 1 with additional levels of aggregation using rollups for preloading,
    3 == Cube strategy - generates a cube of all grouping elements,
    4 == allow the code to choose between options 1, 2 and 3

    You may find that your default value is already 1.
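
    Purely as an illustration of what that enhanced aggregation SQL does (a generic sketch with made-up table and column names, not SQL generated by Discoverer):

    -- Generic GROUPING SETS illustration (hypothetical sales_fact table).
    -- One pass over the data returns the rows grouped by (col1, col2)
    -- plus subtotal rows grouped by col1 alone.
    SELECT   col1,
             col2,
             SUM (amount)    AS total_amount,
             GROUPING (col2) AS is_subtotal_row   -- 1 on the col1 subtotal rows
    FROM     sales_fact
    GROUP BY GROUPING SETS ((col1, col2), (col1))
    ORDER BY col1, col2;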

    Best wishes
    Michael
