SES Table/Database Source

Hi all
I created a table data source and was able to see the data in the CONTENT column.
My question is: won't the displayed results be in the form of a URL only?
What is the difference between the database source and the table source?
If I want to search using several columns from different tables in different schemas, with AND/OR conditions, can I do this? If not, should I use the SES API and achieve it through custom code?

Thank you
Shyamala

- What settings are used for searching? All columns used in the query in the case of a database source, and CONTENT plus all of the key columns in the case of a table source?

***
When SES crawls the database data source, the crawler automatically takes each column returned in the query (except the CONTENT and URL columns, and possibly other reserved columns) and turns it into a document attribute. If SES detects that the document attribute does not already exist, it creates it. If one already exists (checked by column name = name of the SES search attribute), SES maps the document attribute to the existing SES search attribute.

On the data source, see the Attribute Mapping tab. It will tell you how SES mapped the columns internally. There is then a distinction between a full-text search and a search based on a specific SES search attribute. When you run a full-text search, SES searches across all the data: the CONTENT column plus the specific search attributes. With an attribute-based search, SES will only search that specific attribute, nothing else. Use the advanced search page to search on a search attribute.
***
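As a hedged illustration of the mapping described above (all table and column names here are hypothetical), a database-source query might look like the following: CONTENT and URL are the reserved columns, and the remaining columns (TITLE, AUTHOR) would each appear on the Attribute Mapping tab as document attributes. It also shows columns drawn from tables in different schemas combined with AND/OR conditions, as asked in the original question.

-- Hypothetical SES database-source query
SELECT 'http://myapp.example.com/doc?id=' || d.doc_id AS url,
       d.body                                         AS content,
       d.title                                        AS title,
       a.author_name                                  AS author
FROM   schema_one.documents d
       JOIN schema_two.authors a ON a.author_id = d.author_id
WHERE  d.published = 'Y' OR d.is_internal = 'N'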

- What is the need for the CONTENT column?

***
This is the data that SES takes, transforms into an HTML page and indexes. It becomes the cached version of the document if you use that feature.
***

- If the data type of the column is VARCHAR2, how can I use it as content? What CONTENTTYPE should I give? I tried CONTENTTYPE 'application/octet-stream' for types other than 'text/plain', but it does not work.

***
Per the documentation, the CONTENT column can be VARCHAR2 or CLOB. If your data is anything else and you want to use that column, you must convert it.

I don't follow your comment about application/octet-stream. Are you indexing files with this data source? The database data source can index files using the ATTACHMENT_LINK function, but I'm not sure it can index binary files. For the list of supported document types, see the Document Types tab on the data source.

Sorry, I don't have any experience using the CONTENTTYPE column, so I can't say whether it should correspond to the actual MIME type of the document.
***

- If I have multiple data sources, a search displays results from all of them, even if I disable the schedules of the other sources. How can I search only the desired source? Should I put it in a separate source group?

***
Disabling the schedule will only prevent the database from being re-indexed. If the data source belongs to a searchable source group, the data source will get searched. So yes, put it in a different source group so that your users can specify which data to search.
***

I hope this helps.

Tags: Oracle

Similar Questions

  • Streams setup for tables where the source and destination schemas are different

    How do I set up Streams for tables where the source and destination schemas are different, but the table structure is the same?

    Please paste a small example too.

    Thank you
    Bala

    Hello

    I have a concrete example of this scenario, which I tested in my environment. Please follow the steps below to set this up on your databases.

    My global_name for the source database is: REP102A.WORLD
    My global_name for the target database is: REP102B.WORLD

    Source table: ORDERS.ORDER_ENTRY
    Target table: SHIPPING.ORDER_ENTRY

    Replication will happen as follows:

    ORDERS.ORDER_ENTRY (REP102A) -> SHIPPING.ORDER_ENTRY (REP102B)

    Follow the steps below in the order given, at each site as indicated. You may need to make changes as required for your environment.

    SCRIPT:
    =======

    * 1. Remove any existing Streams configuration on the SOURCE and TARGET sites: *

    CONNECT / AS SYSDBA
    EXEC DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;
    DROP USER strmadmin CASCADE;

    * 2. Set up the STRMADMIN user and Streams queue on the source database: *

    CONNECT / AS SYSDBA
    CREATE USER strmadmin IDENTIFIED BY strmadmin;
    GRANT dba, connect, resource, aq_administrator_role TO strmadmin;
    ALTER SYSTEM SET aq_tm_processes = 1;
    BEGIN
      DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
        grantee          => 'strmadmin',
        grant_privileges => TRUE);
    END;
    /

    CONNECT strmadmin/strmadmin
    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'streams_queue_table',
        queue_name  => 'streams_capture_queue',
        queue_user  => 'strmadmin');
    END;
    /

    * 3. Set up the STRMADMIN user and Streams queue on the target database: *

    CONNECT / AS SYSDBA
    CREATE USER strmadmin IDENTIFIED BY strmadmin;
    GRANT dba, connect, resource, aq_administrator_role TO strmadmin;
    ALTER SYSTEM SET aq_tm_processes = 1;
    BEGIN
      DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
        grantee          => 'strmadmin',
        grant_privileges => TRUE);
    END;
    /

    CONNECT strmadmin/strmadmin
    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'streams_queue_table',
        queue_name  => 'streams_apply_queue',
        queue_user  => 'strmadmin');
    END;
    /

    * 4. Create the ORDERS.ORDER_ENTRY table on the source database: *

    CREATE USER orders IDENTIFIED BY orders;
    GRANT connect, resource TO orders;
    CONNECT orders/orders

    CREATE TABLE orders.order_entry
    (
      order_id   NUMBER(8) PRIMARY KEY,
      order_item VARCHAR2(30),
      ship_no    NUMBER(8)
    );

    * 5. Create the SHIPPING.ORDER_ENTRY table on the target database: *

    CREATE USER shipping IDENTIFIED BY shipping;
    GRANT connect, resource TO shipping;
    CONNECT shipping/shipping

    CREATE TABLE shipping.order_entry
    (
      order_id   NUMBER(8) PRIMARY KEY,
      order_item VARCHAR2(30),
      ship_no    NUMBER(8)
    );

    * 6. On the target, add the apply rules and create a dblink from destination to source: *

    CONNECT strmadmin/strmadmin
    SET SERVEROUTPUT ON
    DECLARE
      v_dml_rule VARCHAR2(80);
      v_ddl_rule VARCHAR2(80);
    BEGIN
      DBMS_STREAMS_ADM.ADD_TABLE_RULES(
        table_name         => 'SHIPPING.ORDER_ENTRY',
        streams_type       => 'apply',
        streams_name       => 'streams_apply',
        queue_name         => 'strmadmin.streams_apply_queue',
        include_dml        => TRUE,
        include_ddl        => FALSE,
        include_tagged_lcr => FALSE,
        source_database    => 'REP102A.WORLD',
        dml_rule_name      => v_dml_rule,
        ddl_rule_name      => v_ddl_rule,
        inclusion_rule     => TRUE);
      DBMS_OUTPUT.PUT_LINE('Apply DML rule for SHIPPING.ORDER_ENTRY => ' || v_dml_rule);
    END;
    /

    CREATE DATABASE LINK rep102a.world CONNECT TO strmadmin IDENTIFIED BY strmadmin USING 'rep102a';
    -- check the link works with:
    SELECT * FROM dual@rep102a.world;

    BEGIN
      DBMS_APPLY_ADM.ALTER_APPLY(
        apply_name => 'streams_apply',
        apply_user => 'strmadmin');
    END;
    /

    * 7. On the source, add the capture rules and transformation, create a database link to the target, add the propagation rules, then prepare the tables for instantiation: *

    CONNECT strmadmin/strmadmin
    SET SERVEROUTPUT ON
    DECLARE
      v_dml_rule VARCHAR2(80);
      v_ddl_rule VARCHAR2(80);
    BEGIN
      DBMS_STREAMS_ADM.ADD_TABLE_RULES(
        table_name         => 'ORDERS.ORDER_ENTRY',
        streams_type       => 'capture',
        streams_name       => 'streams_capture',
        queue_name         => 'strmadmin.streams_capture_queue',
        include_dml        => TRUE,
        include_ddl        => FALSE,
        include_tagged_lcr => FALSE,
        source_database    => 'REP102A.WORLD',
        dml_rule_name      => v_dml_rule,
        ddl_rule_name      => v_ddl_rule,
        inclusion_rule     => TRUE);
      DBMS_OUTPUT.PUT_LINE('Capture DML rule for ORDERS.ORDER_ENTRY => ' || v_dml_rule);
      -- Add the rename-schema transformation to change ORDERS to SHIPPING for this rule
      DBMS_STREAMS_ADM.RENAME_SCHEMA(
        rule_name        => v_dml_rule,
        from_schema_name => 'ORDERS',
        to_schema_name   => 'SHIPPING',
        operation        => 'ADD');
    END;
    /

    CREATE DATABASE LINK rep102b.world CONNECT TO strmadmin IDENTIFIED BY strmadmin USING 'rep102b';
    -- check the link works with:
    SELECT * FROM dual@rep102b.world;

    -- after capture, the enqueued LCRs will carry the SHIPPING schema via the rename-schema transformation,
    -- so add the propagation rule for SHIPPING.ORDER_ENTRY

    SET SERVEROUTPUT ON
    DECLARE
      v_dml_rule VARCHAR2(80);
      v_ddl_rule VARCHAR2(80);
    BEGIN
      DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
        table_name             => 'SHIPPING.ORDER_ENTRY',
        streams_name           => 'streams_prop',
        source_queue_name      => 'strmadmin.streams_capture_queue',
        destination_queue_name => 'strmadmin.streams_apply_queue@rep102b.world',
        include_dml            => TRUE,
        include_ddl            => FALSE,
        include_tagged_lcr     => FALSE,
        source_database        => 'REP102A.WORLD',
        dml_rule_name          => v_dml_rule,
        ddl_rule_name          => v_ddl_rule,
        inclusion_rule         => TRUE,
        queue_to_queue         => FALSE);
      DBMS_OUTPUT.PUT_LINE('Propagation DML rule for SHIPPING.ORDER_ENTRY => ' || v_dml_rule);
    END;
    /

    BEGIN
      DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
        table_name           => 'ORDERS.ORDER_ENTRY',
        supplemental_logging => 'keys');
    END;
    /

    * 8. Set the instantiation SCN for ORDERS.ORDER_ENTRY on the target site and start apply: *

    -- On the apply site
    CONNECT strmadmin/strmadmin
    SET SERVEROUTPUT ON
    DECLARE
      iSCN NUMBER;
    BEGIN
      iSCN := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@rep102a.world;
      DBMS_OUTPUT.PUT_LINE('Instantiation SCN is: ' || iSCN);
      DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
        source_object_name   => 'ORDERS.ORDER_ENTRY',
        source_database_name => 'REP102A.WORLD',
        instantiation_scn    => iSCN,
        apply_database_link  => NULL);
      COMMIT;
    END;
    /

    BEGIN
      DBMS_APPLY_ADM.START_APPLY('streams_apply');
    END;
    /

    -- Check whether apply is enabled
    SELECT apply_name, status FROM dba_apply;

    * 9. Start capture on the source: *

    CONNECT strmadmin/strmadmin
    BEGIN
      DBMS_CAPTURE_ADM.START_CAPTURE('streams_capture');
    END;
    /

    * 10. Wait for the capture state to change to 'CAPTURING CHANGES' and check that the propagation status is ENABLED: *

    CONNECT strmadmin/strmadmin
    SELECT capture_name, state FROM v$streams_capture;
    SELECT propagation_name, status FROM dba_propagation;

    * 11. Perform inserts into the ORDERS.ORDER_ENTRY table on the source site: *

    CONNECT orders/orders
    INSERT INTO orders.order_entry VALUES (23450, 'Johnny Walker', 98456);
    INSERT INTO orders.order_entry VALUES (23451, 'Chivas Regal', 98457);
    COMMIT;

    * 12. On the apply site, query SHIPPING.ORDER_ENTRY and check whether the data has been replicated: *

    CONNECT shipping/shipping
    SELECT * FROM shipping.order_entry;

    * 13. Check for apply errors in the apply error queue: *

    CONNECT strmadmin/strmadmin
    SELECT apply_name, local_transaction_id, error_number, error_message FROM dba_apply_error;

    Thank you
    Florent

  • 2 source tables in an LTS but only 1 query?

    Hello

    I use obiee 10g.

    I have a DIM_A table with some data. DIM_A is of type 'SELECT' in the physical layer, and the data in this table is filtered: I use VALUEOF(NQ_SESSION.USER) to filter the data in DIM_A, so each user can see different data in this table.

    FACT_A holds data for every user, so I always have to filter the data in FACT_A using the values in the DIM_A table.

    The two tables are joined in the physical layer.

    In the business model layer I have two logical tables, each with a single logical table source. I added DIM_A as a second, 'required' source to the existing FACT_A table source. So I see 2 tables on the General tab of the FACT_A source, and also an inner join between them.

    But when I query only FACT_A without the dimension, or with another dimension, DIM_A is not present in the generated SELECT, so the user can see data that is not meant for him.

    I need the join between FACT_A and DIM_A to be MANDATORY every time.

    Can you tell me how to simulate a mandatory join on these two tables?

    Hi cardel,

    If you have this scenario:

    Physical layer

    TABLE1:
    column A
    column B

    TABLE2
    column C
    column D

    Business model layer

    LTS1-2
    Table1 inner join TABLE2
    column A
    column B
    column C
    column D

    If you create a report and select column A only, OBIEE will query only TABLE1; if you select columns A and C, OBIEE will query both TABLE1 and TABLE2.

    If you want OBIEE to select from both tables every time, you have several choices:

    (1) use a left join instead of an inner join
    (2) create a logical column like columnA - columnC + columnC; if you choose that column in Answers, OBIEE is forced to join TABLE2 as well
    (3) use a dummy column in one of the physical tables, for example a columnDummy inside TABLE2 with the value 1 on every row, and put columnDummy = 1 in the WHERE clause of LTS1-2, so OBIEE does the join between TABLE1 and TABLE2 for every column you select in a report (see the sketch below)
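    As a rough illustration of option 3 (all table and column names are the hypothetical ones from above, and join_key stands in for your actual join column), the LTS filter forces every physical query against LTS1-2 into the join:

    -- Effective physical SQL once columnDummy = 1 is in the LTS WHERE clause
    SELECT t1.columnA
    FROM   TABLE1 t1
           INNER JOIN TABLE2 t2 ON t1.join_key = t2.join_key
    WHERE  t2.columnDummy = 1;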

    Kind regards
    Gianluca

  • Retrieving database source table attributes from the Java web service API

    Hello

    I have a data source with all the mandatory columns and some additional columns. I can search these columns, and they are listed on the data source's "Attribute Mapping" tab.

    When I try to get these columns/attributes from the web services Java API ResultElement, I apparently don't have access to them. I tried getCustomAttributes(), but it is empty. I also tried mapping these attributes to something else, but the "Attribute Mapping" tab always resets my selections.

    How do I read the values of these columns in my search results?

    Thank you
    Søren

    Hi Soren,

    When you use the Java API, you must tell SES which custom search attributes you want returned. This is done with a fetchAttributes array of Integer attribute IDs; see OracleSearchService.doOracleSearch in the Java API.

    Once you add your list to your search request, SES will return them in the resulting XML. I think the reason SES does not automatically return all custom attributes is to improve performance: there is no need for the query to return more than you need.

    I hope this helps.

    Best regards

    -Stephen

  • Business Model - Logical Table Source

    Can someone give me details on exactly when you would follow scenario 1 when modeling a logical table, exactly when you would follow scenario 2, and what the main difference in behavior is between the two? It would help if someone could illustrate with the equivalent SQL table joins.


    Scenario 1


    You drag a second physical table's column onto the logical table. This causes an additional field to appear in the logical table's column list, sourced from a table other than the original, and causes a second table to appear under the original table in the Sources folder.


    Scenario 2

    You drag a second physical table onto the existing logical table source of a logical table. On the surface the logical table source appears the same as before, but when you examine its properties you will see that the second table has been joined into it.


    Thanks for your comments,


    Robert.

    Scenario 1
    ---> This would be more economical: the BI Server is free to use its intelligence to pick sources based on the columns pulled into the Criteria tab, provided you let it know about the sources using the Content tab.
    In general, we do this for:
    Dimension extensions
    Fragmentation
    Aggregate tables

    Scenario 2
    ---> In this case we force the BI Server to go the way we said (which forces it to use the joins) rather than letting it use its own intelligence.
    In general, we do this for:
    Fact extensions
    Measures that are based on certain conditions on the dimensions, so we might have to add/map those dimension tables in and make the join mandatory.
    In the Siebel Analytics version we used to do aggregations based on the logical columns; that is no longer the case in 10g and 11g.

    Hope this helps

    Published by: Srini VIEREN on 21 February 2013 09:17

  • Database source in NOARCHIVELOG temporarily for maintenance

    Hi all

    We have a big maintenance job to run at the end of next week. It will mainly be index rebuilds, statistics gathering and so on. We use 10.2.0.3. The source database is a RAC one.

    We would like to put our database in NOARCHIVELOG mode temporarily during this maintenance window. This may seem like an easy task, but I regard it as rather risky - I need a 100% safe procedure!

    I cannot stress this enough: if there are notes in the Oracle or Metalink documentation that could help, please point me to them.

    I think I know several of the required steps:

    -Stop the capture

    -Put the database in NOARCHIVELOG mode

    -Re-instantiate the tables using dbms_capture_adm.prepare_table_instantiation on the source database

    -Re-instantiate the tables using dbms_apply_adm.set_table_instantiation_scn on the destination database

    - I think I have to advance the SCN on the capture, but I don't know if I need to change the FIRST_SCN or the START_SCN, or both

    - Put the database back in ARCHIVELOG mode

    -Start the capture

    I don't know in which order I should do these tasks.

    Thanks for any advice/links on this topic!
    Jocelyn

    Hello

    Streams capture will break if you disable ARCHIVELOG mode; capture will not be able to continue after that. If you change the database to NOARCHIVELOG mode and want to continue capturing once you re-enable archiving, you must re-create the capture process, resynchronize the data, and perform an instantiation (set the instantiation SCN).

    The redo stream that gets archived will have a discontinuity when you disable ARCHIVELOG mode.

    There is no way to set or move any SCN (FIRST_SCN, START_SCN or REQUIRED_CHECKPOINT_SCN) that would let you simply restart the capture.
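    A minimal sketch of what re-creating the capture implies, reusing the object names from the Streams script earlier on this page (it assumes your capture is called streams_capture and the replicated table is ORDERS.ORDER_ENTRY; adapt to your environment):

    -- Drop the broken capture process
    EXEC DBMS_CAPTURE_ADM.STOP_CAPTURE('streams_capture');
    EXEC DBMS_CAPTURE_ADM.DROP_CAPTURE('streams_capture');
    -- Re-create it with DBMS_STREAMS_ADM.ADD_TABLE_RULES as in a fresh setup,
    -- then re-prepare the table and re-instantiate:
    EXEC DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(table_name => 'ORDERS.ORDER_ENTRY', supplemental_logging => 'keys');
    -- finally, on the target, call DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN with a fresh SCN.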

    Thank you
    Florent

  • A single Logical Table Source vs. multiple Logical Table Sources

    When is it appropriate to use a single logical table source vs. multiple logical table sources?

    Single logical table source: a dimension or fact table extension, with related data in more than one table
    Multiple logical table sources: detail fact and aggregate tables holding similar data implemented at different granularity

    Please correct/confirm

  • Want to access one user, and from there another user & access all its tables

    Hi all

    First of all, I want access to the user "VIKR" in order to create views in the VIKR schema in the dev01 database.
    Then, I want to get READ access to all the tables in two more schemas (VIKRD, VIKRE).

    Can you help please? How do I grant read access on all tables of one schema to another?

    Your help will be appreciated.

    You can write a bit of PL/SQL

    DECLARE
      l_sql_stmt varchar2(1000);
    BEGIN
      FOR t IN (SELECT * FROM dba_tables WHERE owner IN ('VIKRD','VIKRE'))
      LOOP
        l_sql_stmt := 'GRANT SELECT ON ' || t.owner || '.' || t.table_name || ' TO vikr';
        dbms_output.put_line( l_sql_stmt );
        EXECUTE IMMEDIATE l_sql_stmt;
      END LOOP;
    END;
    

    Depending on why you want to do this, you may prefer to grant this access to a role and then grant that role to VIKR, so that you can later use the same role to give other users access to the same set of objects. And, of course, when new objects are created in those schemas in the future, you will need to grant access on them to VIKR (or to the role). A sketch of the role-based variant follows.
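    A minimal sketch of the role-based variant, assuming the hypothetical role name vikr_read_role:

    DECLARE
      l_sql_stmt varchar2(1000);
    BEGIN
      -- Create the role, grant it SELECT on every table, then grant it to the user
      EXECUTE IMMEDIATE 'CREATE ROLE vikr_read_role';
      FOR t IN (SELECT owner, table_name FROM dba_tables WHERE owner IN ('VIKRD','VIKRE'))
      LOOP
        l_sql_stmt := 'GRANT SELECT ON ' || t.owner || '.' || t.table_name || ' TO vikr_read_role';
        EXECUTE IMMEDIATE l_sql_stmt;
      END LOOP;
      EXECUTE IMMEDIATE 'GRANT vikr_read_role TO vikr';
    END;
    /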

    Justin

  • Why an index can be bigger than its table

    I have a table with 25 columns, but most of the columns are null except the primary key. From dba_tables and dba_indexes I worked out that the primary key index is 309 MB and the table is 280 MB.

    Suppose the index blocks contain the key value and the rowid, and the table blocks contain the column data and the rowid. If that were true, the primary key index should be the same size as the table, which is all null except the primary key. How is my index 10% larger than the table?

    An index is a B+ tree, where B stands for balanced.
    It consists of leaf and non-leaf blocks.
    Non-leaf blocks are a kind of pre-selection mechanism, to avoid having to look at all of the leaf blocks.
    Buy a book on data structures and/or read the Oracle documentation and you will see this explained in detail.
    If you issue

    ANALYZE INDEX <index_name> VALIDATE STRUCTURE;

    and query

    SELECT * FROM index_stats;

    right after that, you will see how many levels the index has.
    That accounts for the 10% overhead.
    Also, the table's data blocks do not contain any ROWID.

    ------------
    Sybrand Bakker
    Senior Oracle DBA

  • Hi John: How to identify the relevant source table?

    Hello

    I need your help to understand the right way to identify the relevant table. Suppose I need to load product members from the company's data source (take a simple SQL 2000 example):

    Option 1: go through the tables in the database, find the product-related table, open it, try to guess which one holds the product members, and once confirmed, load it using ODI.

    Option 2: read the ERP vendor's manuals and identify the table to draw the product member information from using ODI.



    Thanks and greetings
    N Kumar

    Hello

    Personally, I would try to talk to someone who understands the structure of the tables and the information they contain; there is probably someone you can discuss this with.

    Otherwise, if the tables belong to a standard ERP, then I would get hold of the documentation and read it.

    See you soon

    John
    http://John-Goodwin.blogspot.com/

  • How to reference a column from another column in a tabular form

    Hello

    I have a tabular form (created using the wizard) containing a select list, and the values in this list depend on another column in the tabular form.

    I tried 'Select list (query based LOV)' and referenced the other column using #COL_NAME# and directly as COL_NAME, but nothing works.

    I also tried using a dynamic query ('return "my query"'), but that does not work either.

    If someone has worked on a scenario like this, please post the solution.

    I get this error when I try the above:
    report error:
    ORA-20001: Error fetching column value: ORA-06550: line 1, column 56:
    PL/SQL: ORA-01744: inappropriate INTO
    ORA-06550: line 1, column 13:
    PL/SQL: SQL Statement ignored
    Apex 4.1
    Oracle 11gR2

    Kind regards
    Tauceef

    Edited by: Tauceef on 5 December 2012 10:42

    See +{thread:id=2250770}+.

  • Using database tables for authentication in ADF

    Hello

    I need to use my user table in the database for authentication in ADF (ADF 11.1.2).

    I have 3 categories: admin, agent and user. Each is unique and has its own page; on login, the application checks the type of user and directs them to their JSF task flow or JSF page.

    I have a user table with a type attribute.

    HOW CAN I DO THIS, PLEASE.

    Hello

    See links below.

    Whatever Fusion Middleware: database user tables to implement authentication in ADF

    Java / Oracle SOA blog: using database tables as an authentication provider in WebLogic

  • How to find the sequences and their associated tables in a particular schema

    Hello

    How can I get a list of all sequences and their related tables in a particular schema?

    Kind regards...

    Asit K. Mohanty

    AsitK.Mohanty wrote:

    Hello

    How can I get a list of all sequences and their related tables in a particular schema?

    Kind regards...

    Asit K. Mohanty

    What is the solution when the code that references the SEQUENCE does not exist in the database, for example when the code runs in an external application server?
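    For the case where the sequences are referenced from stored code in the database, a hedged sketch (the schema name SCOTT is hypothetical):

    -- List the sequences in the schema
    SELECT sequence_name FROM dba_sequences WHERE sequence_owner = 'SCOTT';

    -- Find stored code that references each sequence (and thereby, indirectly,
    -- the tables that code maintains)
    SELECT owner, name, type, referenced_name AS sequence_name
    FROM   dba_dependencies
    WHERE  referenced_type  = 'SEQUENCE'
    AND    referenced_owner = 'SCOTT';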

  • Problems with mappings when changing the type of a source table field?

    Hello everyone

    One question:
    Is it a problem for a mapping that uses my source table if I change the types of fields in that table from CHAR to, for example, VARCHAR2?
    In the mapping, the source table's field types are defined as CHAR. Do I have to change all the fields from CHAR to VARCHAR2 (via re-importing the source table and synchronizing it in the mapping)?

    Thanks in advance.

    Greetings

    Hello

    If your new data type in the database is compatible and big enough, then you probably don't have to change anything in your mapping; otherwise you will need to.

    It is good practice to import the latest table definition (metadata) into the OWB repository and then synchronize inbound.

    Thank you
    Fati
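    For the database side of the change described above, a minimal sketch (table and column names are hypothetical):

    -- Change the source column from CHAR to VARCHAR2 in the database, then
    -- re-import the table metadata into OWB and synchronize the mapping.
    ALTER TABLE my_schema.src_table MODIFY (code_col VARCHAR2(30));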

  • Data resynchronization after restoring the source or target database

    Hi DBAs,

    I've implemented unidirectional Streams replication at schema level (excluding some tables) on a transactional database. It works very well without any problems after tuning Streams with the help of the health check scripts. My question: if for some reason I have to do a database-level recovery on the source database, do I need to rebuild Streams from scratch? The database size is close to 2 TB, and after recovery, even using Data Pump with parallelism, it could take hours, and the source won't be available even after the recovery while the Data Pump job runs (if the source is online, I was getting ORA-01555 errors and the job would not finish).

    Given the above circumstances, please advise what the best way is to re-synchronize data between the source and the target.

    Thank you
    -Samar-

    I would export the data dictionary (a build) once again, just to avoid any multi-version data dictionary (MVDD) problems; do it directly after the restore of the DB, because I'm not sure how this restored DB is perceived in terms of DBID. Have you checked in RMAN whether the source db is considered to be the same incarnation?

    Then look at what messages are in the target error queues. I think you will have problems moving the FIRST_SCN
    due to differences in some SCNs in the source's system.logmnr_restart_ckpt$; the system will think it has holes.
    You may need to manually remove a bunch of rows to allow Streams to skip over the range of SCNs
    that were sent and are no longer present in the source's system.logmnr_restart_ckpt$.
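    A minimal sketch of the dictionary build mentioned above, run on the source as the Streams administrator (this is the standard DBMS_CAPTURE_ADM.BUILD call; whether its SCN can be wired into your existing capture after the restore is exactly the open question above):

    SET SERVEROUTPUT ON
    DECLARE
      build_scn NUMBER;
    BEGIN
      -- Extract the data dictionary to the redo log; the returned SCN
      -- can serve as a FIRST_SCN for a (re)created capture process.
      DBMS_CAPTURE_ADM.BUILD(first_scn => build_scn);
      DBMS_OUTPUT.PUT_LINE('Dictionary build SCN: ' || build_scn);
    END;
    /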
