using PIPELINED in VO

Hello

I have 2 types:
create or replace type e1e2 as object (
  E1 VARCHAR2(3),
  E2 VARCHAR2(3));

CREATE or REPLACE TYPE e1e2_tabtype AS TABLE of e1e2;

and function:

create or replace function manage_type (ee in varchar2) return e1e2_tabtype PIPELINED
as
  E1 VARCHAR2(3);
  E2 VARCHAR2(3);
  CURSOR e IS
    SELECT 'o' one, ee two FROM dual
    UNION
    SELECT ee one, 'o1' two FROM dual;
BEGIN
  OPEN e;
  LOOP
    FETCH e INTO e1, e2;
    EXIT WHEN e%NOTFOUND;
    PIPE ROW (e1e2(e1, e2));
  END LOOP;
  CLOSE e;
  RETURN;
END manage_type;

and query:
select e1, e2 from TABLE(manage_type(:P))

The query returns 2 rows in TOAD, but when I create a view object on the query, it throws an error:
SELECT * FROM (select e1, e2 from TABLE(manage_type(:P))) QRSLT WHERE 1 = 2 - ORA-22905: cannot access rows from a non-nested table item.
The same query works fine without parameters:
select * from table(manage_type('0')) - works!

I wonder why a legal query does not work in the view object. I have not seen any restrictions on the use of the TABLE function in ADF.

I need it to work against Oracle 8i.
SNikiforov

SNikiforov,

Try this query in your VO:

select e1, e2 from table(cast (manage_type(:P) as e1e2_tabtype))

ADF wraps your VO query (as you can see) with "select * from ( <your query> ) qrslt where 1 = 2". If your query works fine in SQL*Plus written that way, it should work with ADF.
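For reference, this is a sketch of what the wrapped VO query would look like with the CAST in place (QRSLT and the 1 = 2 predicate are ADF's validation wrapper quoted in the error message above; nothing else here is new):

SELECT * FROM (
  select e1, e2
  from table(cast(manage_type(:P) as e1e2_tabtype))
) QRSLT WHERE 1 = 2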

John

Tags: Java

Similar Questions

  • Dynamic LOV pipeline

    Oracle apex 5.0

    This is what I created to display the tree of nodes.

    SELECT LPAD(' > ', 2*(LEVEL-1), '. ') || product_group_name "Product category", product_group_id
    FROM oms_product_group
    START WITH parent_id = 1
    CONNECT BY PRIOR product_group_id = parent_id;

    Result would be:

    ALL

    > Mobile

    > > android

    > > > Kitkat

    etc...

    I want to use the full path in the dynamic LOV so the result will be:

    ALL | Mobile | Android | KitKat

    How do I get there?

    Thank you

    I came up with a solution that works for me; hope it works for others as well.

    select ltrim(sys_connect_by_path(product_group_name, ' | '), ' | ') l,
           product_group_id v
    from oms_product_group
    connect by nocycle prior product_group_id = parent_id
    start with parent_id = 0
    order siblings by product_group_name
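    A self-contained sketch of the same technique, using a few hypothetical rows in place of oms_product_group (the deepest path comes out as ALL | Mobile | Android | KitKat):

    with oms_product_group as (
      select 1 product_group_id, 0 parent_id, 'ALL' product_group_name     from dual union all
      select 2 product_group_id, 1 parent_id, 'Mobile' product_group_name  from dual union all
      select 3 product_group_id, 2 parent_id, 'Android' product_group_name from dual union all
      select 4 product_group_id, 3 parent_id, 'KitKat' product_group_name  from dual)
    select ltrim(sys_connect_by_path(product_group_name, ' | '), ' | ') l,
           product_group_id v
    from oms_product_group
    start with parent_id = 0
    connect by nocycle prior product_group_id = parent_id
    order siblings by product_group_name;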

  • parallel execution of a pipelined table function

    Hi all

    To "parallel enable" a pipelined table function, do I need to enable parallel query for the session first?

    I have read a few white papers and web pages on map/reduce implemented with Oracle table functions, and I see that it is based on table functions.

    Using a cursor in a loop to fetch one row at a time is out of the question! It replaces SQL with PL/SQL.

    What is the cost that must be paid for table functions?

    Finally, how can I confirm that Oracle has actually parallelized the table function?

    Best regards

    Leon
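    A minimal sketch of a parallel-enabled pipelined function (all names here are illustrative, not from this thread). The function has to take a ref cursor and declare PARALLEL_ENABLE with a PARTITION BY clause; you then pass it a parallel source cursor. To confirm slaves were used, check v$px_session while the query runs (or v$pq_tqstat just after it):

    -- assumed object and collection types for the sketch
    create or replace type t_row is object (id number);
    /
    create or replace type t_rows is table of t_row;
    /
    create or replace function f_transform (p_cur in sys_refcursor)
      return t_rows pipelined
      parallel_enable (partition p_cur by any)
    as
      v_id number;
    begin
      loop
        fetch p_cur into v_id;
        exit when p_cur%notfound;
        pipe row (t_row(v_id));  -- real transformation logic would go here
      end loop;
      return;
    end;
    /
    -- invoke with a parallel source cursor
    select * from table(f_transform(cursor(
      select /*+ parallel(t, 4) */ id from some_table t)));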

    user12064076 wrote:

    In my application, I wrote stored procedures that return collections of user-defined object types to Java.

    Flawed approach: using expensive private server memory (the PL/SQL PGA) to cache SQL data and then push that data to the client, when the client could instead use the more scalable shared buffer cache.

    With object types, we can remove most of the redundant data.

    This statement makes no sense, as it is the same for SQL and cursors. And removing redundant data is best done by the SQL itself - via partition pruning, predicates, selecting only the needed columns (SQL projection) and so on.

    This OO design reduces the load on the network and makes it easy for Java to parse.

    Incorrect. It does not reduce network load - it can actually increase it. As for Java "parsing" - the approach is wrong from the start if the client is required to parse the data it receives from the database server. Parsing takes CPU time, and burning general CPU muscle on parsing will degrade performance.

    Perhaps you are using "parsing" out of context here? I find it hard to believe that one would design an application and use a database server in this way. Parsing means, for example, receiving XML (text) data and then parsing it into an object-like structure for use.

    Data from the database should be received by the client in a structured binary format, not in a free format that requires the client to parse it into a structured format.

    But the problem is that we accumulate all the data in memory first and push it to the client as a whole. If it is too huge, ORA-22813 occurs.
    This is why I intend to use a pipelined table function.

    How will that solve the problem?

    As I follow your logic:
    (1) you do not use a cursor, for some obscure (and probably completely unjustified) reason.
    (2) you cannot return large PL/SQL collection variables without making a significant dent in the PGA (private process memory) on the server and running into errors (this approach is conceptually wrong anyway)
    (3) you are now using a pipelined table function that executes PL/SQL code in order to execute SQL code - and that in turn must be executed by the client as SQL, using a cursor

    So why put all these extra moving parts (the pipeline code) in between, when the client
    (a) must use SQL to access it?
    (b) must create a cursor?

    If, as you say, I returned a cursor, it would be very difficult for Java to organize the data.

    A pipelined table function must be used through a cursor. All DML statements from a client are parsed by Oracle as cursors.

    Read the Oracle® Database PL/SQL Language Reference section on "Chaining Pipelined Table Functions for Multiple Transformations".

    The format using a pipeline is (from the manual):

    SELECT * FROM TABLE(table_function_name(parameter_list))
    

    The "pipeline" is created by the SQL engine - it needs SQL to execute the PL/SQL code via the SQL TABLE() function.

    You said that the reason for using a pipeline is to transform a relational data structure into an object structure. You don't need a pipeline for that. Plain vanilla SQL can do the same thing, without the overhead of using PL/SQL to fetch the SQL data and the context switching between the PL/SQL and SQL engines inside the pipeline.

    You simply call the object type's constructor in the SQL projection, and the SQL cursor returns the instantiated objects. For example:

    create or replace type TEmployee is object(
     .. property definitions ...
    );
    
    create or replace type TEmployeeCollection is table of TEmployee;
    
    select
      TEmployee( col1, col2, .., coln ) as EMP_COLLECTION
    from emp
    where dept_id = :0
    and salary > :1
    and date_employed >= :2
    order by
      salary, date_employed
    

    No need for PL/SQL code. No need for a pipeline. The client opens the cursor and fetches objects into a collection - the same approach the client would have used when fetching from a cursor on a pipelined table function.

    Pipelines are best used as a data transformation process where SQL alone cannot perform the transformation. In many years of designing and writing Oracle applications I have never used a PL/SQL pipeline over a SQL table in production. Simply because SQL itself is capable and powerful enough to do the job - and to do it faster and better.

    Where I have used pipelines is to transform data from external sources into SQL data sets. For example, a pipeline over a web service: the PL/SQL code constructs the SOAP envelope, makes the HTTP call, parses the XML and returns the contents as rows and columns - which allows a SQL SELECT to be run against a web-service-turned-into-a-SQL-table.

    Forgive me for saying so, Leon - but it seems to me that yours is a typical Java approach with very little understanding of database concepts and Oracle fundamentals. You cannot treat Oracle as a simple persistence layer. You cannot treat SQL and PL/SQL as a simple I/O interface for extracting data from Oracle and crunching it in Java. Not if you expect your Java system to perform and scale.

    Rethink the Oracle layer, use it properly - and your application will perform and scale. Guaranteed.

    However, in my experience many J2EE developers choose to treat Oracle as a black box - nothing more than a kind of glorified file system for storing structured data - and try to do everything in the Java layer. And this fails. I have seen it fail - jaw-dropping epic failures of the kind that hit all the national newspapers and media, with an impact on ordinary people who have to deal with the Government.

    And it's a shame... SQL and PL/SQL are superior to Java in this regard and are far more capable layers for crunching data in the database. A real-world example: the largest table in our busiest database grows by between 350 and 450 million rows per day, and all our number-crunching on the data in this table happens inside the database layer - not in a Java layer. Oracle performs and scales beautifully... when used correctly.

  • Timing violation when adding parallel code blocks to an FPGA SCTL

    Hey guys!

    I have a problem with my code inside an SCTL running in an FPGA VI and hoped someone could help me.

    I work with a 120 MS/s ADC and write the data to FPGA block memory in a 120 MHz SCTL. Since the data has to be "mixed", there is also code to compute the correct address (for example, first data point at address 0, second at address 20, then 40 and so on up to 2000, then I start again with 1, 21...).

    I have a second block of code that later, when writing is complete, reads from the block memory to perform some calculations (a linear interpolation of the signal slope).

    Both parts are implemented using pipelining (I mention this because I think it would be the community's first response). If I compile only one of them, each needs about 7 to 8 ns to run, which means they meet the requirement to run in my 120 MHz SCTL (8.33 ns).

    But when both blocks of code are in the same clock domain (whether in one SCTL or in two SCTLs using the same clock) of the FPGA VI, I get a timing violation saying the code in the SCTL needs 17 ns (logic and routing delay) to run. It also doesn't show me the critical path, just the SCTL. Because the code runs in parallel, I don't see why the logic delay should increase (to 12 ns). The only ways the two code blocks communicate with each other are the block memory and some shift registers, so the combinatorial path is not lengthened.

    I have already checked whether the SubVIs (I use one per block of code to avoid clutter, and they are different) are the problem. If the code from the SubVIs is implemented directly in the main FPGA VI, the logic delay does not change.

    50% of the FPGA registers and LUTs are used, 5% of the DSP48s and 50% of the block RAM, so I don't think the problem is the compiler failing to find enough free slices to create fast paths.

    The main problem is not that the routing delay increases, but the logic delay.

    Does anyone have an idea what could cause this huge increase in delay time?

    I use LabVIEW 2011 and Xilinx 12.4.

    Try not to put the memory read node inside a case structure. Instead, you can always read from a fixed address, say 0, while the data is not yet ready. Reading from a memory will not corrupt your data, unlike popping from a FIFO. If your memory read node is inside a case structure, there will be a mux where data is passed out of that case structure, which will increase the logic delay.

    In addition, I also recommend not using the loop index "i" if you are running at a high clock frequency. That counter is 32 bits wide, and you probably don't have a memory that deep. LV FPGA has logic inside the memory read node that checks the address range, so an oversized address width can still matter. You can instead implement your own counter whose bit width is appropriate.

  • deleting a huge number of records issue

    Hello

    I have a job that deletes approximately 1 to 2 lakh records daily from a table based on a condition (where expire_date < sysdate-1).

    Currently, I wrote code like this.

    PROCEDURE ITEM_CLEAN
    AS
    BEGIN
      DELETE FROM ITEM_INFO WHERE EXPIRE_TIME < SYSDATE-1;
      COMMIT;
    EXCEPTION
      WHEN OTHERS THEN
        ROLLBACK;
    END ITEM_CLEAN;

    I plan to run this procedure every day at 01:00.

    My question is that I want to convert it into BULK COLLECT and FORALL statements.
    Can you please help me?

    910575 wrote:

    My question is that I want to convert it into BULK COLLECT and FORALL statements.
    Can you please help me?

    Loads of silly stuff being posted on the OTN forums tends to make me cranky. Lack of caffeine does not help. So the rant is not aimed at you. It is aimed at the idea that bulk collect is somehow magically faster and better.

    BULK COLLECT IS SLOW! BULK COLLECTING DATA TO PUSH INTO TABLES IS WRONG!

    So please: forget this silly idea that bulk collect is a magic wand for addressing scalability issues when crunching large amounts of data.

    It is not.

    It never has been.

    It never will be.

    It is ALWAYS slower than SQL. ALWAYS. It simply CANNOT be faster. EVER. Because it is MORE work, and SLOWER work, than just native SQL.

    The right way to address performance and scalability issues when processing large amounts of data with SQL is parallel processing.

    You have two basic approaches to choose.

    Approach 1. Use Oracle parallel query, where Oracle breaks the SQL execution into smaller workloads (ranges of rows to process) and then runs these workloads in parallel.

    Approach 2. Custom parallel processing. This can be done using the DBMS_PARALLEL_EXECUTE PL/SQL package interface. Another custom method uses pipelining, with a parallel query running your custom pipeline in parallel.

    For the workload you describe, DBMS_PARALLEL_EXECUTE is the best option. See the code example in {message:id=10571826}.
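    For what it's worth, here is a sketch of what that DBMS_PARALLEL_EXECUTE approach could look like for the ITEM_INFO delete (the task name, chunk size and parallel level are assumptions, not taken from the referenced post):

    begin
      dbms_parallel_execute.create_task('item_clean_task');

      -- split ITEM_INFO into rowid ranges of roughly 10000 rows each
      dbms_parallel_execute.create_chunks_by_rowid(
        task_name   => 'item_clean_task',
        table_owner => user,
        table_name  => 'ITEM_INFO',
        by_row      => true,
        chunk_size  => 10000);

      -- run the delete per chunk, 4 jobs in parallel
      dbms_parallel_execute.run_task(
        task_name      => 'item_clean_task',
        sql_stmt       => 'delete from item_info
                            where rowid between :start_id and :end_id
                              and expire_time < sysdate - 1',
        language_flag  => dbms_sql.native,
        parallel_level => 4);

      dbms_parallel_execute.drop_task('item_clean_task');
    end;
    /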

  • Connect by level - with several input rows

    Very simplified, I have this table - each row holds a "table-like" variable-length structure.

    I need x output rows for each input row (without using pipelining or a PL/SQL function).
    With a single row as input, the "connect by" works, of course:
    drop table test3;
    create table test3 (id number, tekst varchar2(20));
    
    insert into test3 values (1, 'acbdef');
    insert into test3 values (2, '123');
    insert into test3 values (3, 'HUUGHFTT');
    insert into test3 values (4, 'A');
    insert into test3 values (5, 'AKAJKSHKJASHKAJSHJKJ');
    commit;
    
    with tal as
    (
    select * from
    (select a.*, rownum rn
    from test3 a)
    where rn < 2)
    ------
    select tekst, level , 
    substr(tekst,(level-1)*1+1, 1) content
    from tal
    connect by level < length(tekst) 
    ;
    How to achieve the same thing for several input rows?
    I know I can do it in PL/SQL, using either plain PL/SQL or a pipelined function, but I would prefer clean SQL if possible.
    I tried a cross join of test3 with (a select of the distinct values from dual / the test3 table) and other variants, but all with syntax errors:
    with tal as
    (
    select * from
    (select a.*, rownum rn
    from test3 a)
    where rn < 3)
    ------
    select * from test3 cross join table 
    (
    select tekst, level , 
    substr(tekst,(level-1)*1+1, 1) content
    from dual
    connect by level < length(tekst)
    )
    ;
    Oracle version will be 10.2 and 11 +.

    I think this is the sort of thing you're looking for:

    with tal as
    ( select 1 id, 'acbdef' tekst         from dual union
      select 2   , '123'                  from dual union
      select 3   , 'HUUGHFTT'             from dual union
      select 4   , 'A'                    from dual )
    ------
    select  id, tekst, level, substr(tekst,(level-1)*1+1, 1) content
      from  tal
    connect by (    level <= length(tekst)
               and  prior id = id
               and  prior dbms_random.value is not null
               )
    ;
    
            ID TEKST         LEVEL CONTENT
    ---------- -------- ---------- -------
             1 acbdef            1 a
             1 acbdef            2 c
             1 acbdef            3 b
             1 acbdef            4 d
             1 acbdef            5 e
             1 acbdef            6 f
             2 123               1 1
             2 123               2 2
             2 123               3 3
             3 HUUGHFTT          1 H
             3 HUUGHFTT          2 U
             3 HUUGHFTT          3 U
             3 HUUGHFTT          4 G
             3 HUUGHFTT          5 H
             3 HUUGHFTT          6 F
             3 HUUGHFTT          7 T
             3 HUUGHFTT          8 T
             4 A                 1 A       
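    The key trick above is worth spelling out: prior id = id restarts LEVEL for each input row, and prior dbms_random.value is not null is a non-deterministic PRIOR expression that stops Oracle from flagging the self-referencing condition as a cycle. Applied to the actual test3 table from the question, a sketch would be:

    select id, tekst, level, substr(tekst, level, 1) content
      from test3
    connect by level <= length(tekst)
           and prior id = id
           and prior dbms_random.value is not null;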
    
  • Java programming on Linux

    Hi all.
    I got an interesting task - a CLI program for a UNIX-based system. It doesn't matter what it must do; the important point in this topic is that it should support Linux pipelining - for example you can run "ls | less" - but of course with program-specific commands; it must support this form of commands. And in the Java language.
    I'm new to Linux and Linux programming, but I try to read as much as possible; please excuse me if I say something wrongly.  :-)
    OK, I got this task and started thinking. The first idea was:
    1. I get the command line commands in args[].
    2. Then just cut the chain into pieces at the "|" symbol and analyze them one by one.
    The idea was not bad, but the man who gave me this task (this is a kind of training for me) said I was doing it the wrong way - the unix shell supports pipelining for my program, but the program needs to be written in a specific way.
    OK, I started to read about linux, linux IPC and the pipes API. I found examples in C++ where a program creates a child process (using pipe() and fork()) and the parent and child processes send information to each other using pipes. If I understand correctly, this is what was meant by "specific way".
    I need advice from experienced programmers. How do I program pipelining? As far as I understand, I have no need to parse the whole command line and split it into parts at "|"; I should just write a program that can run any possible single command, add support for linux pipelining, and the shell will send commands to my program without my splitting anything. Am I wrong? If so, I still do not follow what I should add to the program to allow the shell to use pipelining.

    sphinks wrote:
    So I should make the method work with both types of input.
    In this case

    java myProgram "filename"
    

    I would do something like this:

    FileInputStream fis = new FileInputStream(args[0]);
    ...
    

    In this case

    cat "filename"|java myProgram
    

    You're talking about your program taking its input from one of two different sources: either it reads a file, or it reads System.in. That's not uncommon; a lot of unix programs do that. For example, cat, grep and sed all do. If you provide a file name, they read from that file; otherwise they read from stdin. So you process the command line, and if it indicates reading from a file, you open that file and read it; otherwise you read from System.in.

    Completely separate from that is what others have already said: if your program reads from stdin (System.in), then it doesn't know or care whether that stream comes directly from the console, passing along what the user types, or whether it is piped from the output of another process, or redirected from a file or here-doc. Piping and redirection are handled by the shell, and your program is totally unaware of them. It simply reads System.in.

    In addition, make sure you understand what is the same and what is different about the following:

    grep foo x.txt
    vs.
    cat x.txt | grep foo
    vs.
    grep foo < x.txt
    
    or in the case of your program
    
    java -cp . MyClass x.txt
    vs.
    cat x.txt | java -cp . MyClass
    vs.
    java -cp . MyClass < x.txt
    

    Edited by: jverd April 28, 2011 10:35


  • Mail user

    Dear all,
    I have an Answers report. I would like to send emails on a schedule.
    I have set up the mail server.
    I have tried the approaches below:
    1. In Answers, page settings -> My Account - Devices tab (Mail)

    Device: Mail
    E-mail name: HTML device
    Address: [email protected]

    2. Delivery profile
    I set a high priority for the mail device.

    My question is how to send to several people.
    How do I set it up,
    like [email protected], [email protected], [email protected]?

    Could you please explain it to me?

    Hi reda,

    You can give the list of recipients in the Recipients tab,

    and check these:

    http://oraclebizint.WordPress.com/2007/12/28/Oracle-BI-EE-101332-report-bursting-using-delivers-Phase1-using-pipelined-functions/
    http://gerardnico.com/wiki/dat/OBIEE/sasystem
    http://oraclebizint.WordPress.com/2008/04/25/Oracle-BI-EE-101332-SA-system-subject-area-autoloading-profiles-and-bursting/
    http://OBIEE-tips.blogspot.com/2010/02/SA-system-subject-area.

    Hope this helps you...

  • Best structure for a master/detail relationship?

    Hello

    I would like to get some expert advice on the best Oracle 11g architecture for a fairly simple storage situation: I have a table in a data warehouse with millions of records added every day. Each record can be associated with one (99.9% of cases) or, more rarely, several sub-records (up to 3 or 4).

    (1) The natural relational approach would require a master/detail relationship, with a key shared between the two tables. Some questions in that case:
    - What would be the best structure here? Because its rows are going to be quite small, the child table should probably be an index-organized table. But should I consider a cluster?
    - Also, what key should be used in the relationship? Master records are only ever accessed via full table scans, so they have no primary key or index. Is it possible to avoid creating a surrogate key solely for the purpose of joining the master and detail tables, for example by using a rowid of some sort?

    (2) Another reasonable approach, it seems, would be to use a nested table structure to store the detail records.
    - Is that an interesting solution? It seems to avoid the need for a surrogate key and looks like the "natural way" of doing things. But people generally seem not very keen on this feature, perhaps for lack of familiarity?
    - In that case, would it still be possible to scan the detail records using pure SQL, as you would with a join in case 1?

    (3) Although I have not yet studied the problem in detail, it seems the right tools to populate both tables at the same time from external tables would be PIPELINED and PL/SQL FORALL constructs; can anyone confirm?

    Thanks for your help,
    Chris
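    On point (3), one hedged observation: if the external table exposes both master and detail rows, a single multi-table INSERT can often populate both tables in one pass, with no PIPELINED or FORALL code at all. A sketch with hypothetical table and column names:

    insert all
      when rec_type = 'M' then
        into master_t (master_id, payload) values (src_id, payload)
      when rec_type = 'D' then
        into detail_t (master_id, payload) values (src_id, payload)
    select rec_type, src_id, payload
    from ext_stage;  -- the external table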

    In a DW environment, things are generally modeled according to facts and dimensions. The fact table would hold everything "measurable" about a fact (in your case a call). That would be things like the date and time of the call, the duration, the total charges, possibly local/long distance/overseas etc. The dimension tables would provide additional information about the call and resolve lookups etc. For example, you might have a date dimension which holds, for every day, the year, month, day of the year, quarter, fiscal quarter (in my organization they differ and we report on both for different reasons) etc. You might also have a time dimension indicating whether the call was during the day, evening, night, etc., depending on how you define them.

    Of course, all of this really depends on the types of questions you want to ask questions about the data you have.

    I'd be inclined to model along the lines of (note I'm guessing anum and bnum are something like the originating and destination phone numbers):

    CREATE TABLE calls (
       Call_Id      INTEGER PRIMARY KEY, -- this comes from a sequence
       Anum_key     NUMBER REFERENCES call_numbers,
       Bnum_key     NUMBER REFERENCES call_numbers,
       charging_key NUMBER REFERENCES charging
       Duration     NUMBER,
       charge       NUMBER,
       call_count   NUMBER);
    
    CREATE TABLE call_numbers (
       number_key    INTEGER PRIMARY KEY,
       phone_number  VARCHAR2(50),
       area_code?    VARCHAR2(50),
       country_code? VARCHAR2(50));
    
    CREATE TABLE charging (
       charging_key         INTEGER NOT NULL CHECK (charging_key BETWEEN 1 AND 19 OR charging_key = -1),
       counter_description  VARCHAR2(50));
    

    with one row in calls for each meter combination that must be loaded. The ETL code would assign an unanswered call (i.e. your first scenario) a charging_key of -1 and, depending on how you want to count calls, 1 or 0 for call_count. For your second scenario, the call count would be 1 on the single record for that call. For the third scenario I would, more or less arbitrarily (e.g. max or min amount/meter), assign 1 to one of the records and 0 to the rest (which implies that even if a call is charged to several meters, it is still only 1 call).
    Thus, given your three specimens it would look like:

    calls
    ID   Anum   Bnum   chg   Dur   amount   count
     1      1      2    -1     0        0       1
     2      3      2     1     2      610       1
     3      4      5     1     3      240       1
     4      5      6     6     3      520       0
    
    call_number
    key   number
    1     123456
    2     234567
    3     987655
    4     545678
    5     435467
    6     986234
    
    charging
    key   descr
    -1    Unanswered
     1    Counter 1
     2    Counter 2
     ...
    19    counter 19
    

    So, how many calls were there?

    SELECT SUM(call_count) FROM calls WHERE ... 
    

    How much did we charge?

    SELECT SUM(charge) FROM calls WHERE ... 
    

    How long did they speak?

    SELECT SUM(duration) FROM calls WHERE call_count = 1 and ... 
    

    How much did we take in on counter 6?

    SELECT SUM(charge) FROM calls WHERE charging_key = 6 and... 
    

    How much time was spent on meter 6?

    SELECT SUM(duration) FROM calls WHERE charging_key = 6 and ... 
    

    As I said before, I'm not really a fan of nested tables in columns. :-)

    John

  • report bursting

    Hello

    What is report bursting? Can anyone please post anything about report bursting and its relationship with OBIEE, and how
    it relates to what Oracle provides via Delivers or the scheduler, etc.

    Please reply as soon as possible.
    Your efforts are highly appreciated.
    Thank you
    audray

    Published by: Ninon Sep 13, 2009 08:16

    Well, you said you looked at it, and that's good. It is always good to search Google, this forum, articles and documents first to become familiar with the concepts before posting a question. When you do post, let us know what you looked at - a link to the documentation if it was online, or a link to the blog, etc. That way, you don't get people simply telling you to do something you have already done. It wastes your time and ours.

    It is better to ask the question more precisely, too, in order to get the best answers. Here's a useful tip: read the first post, Justin Kesteley's, which has the "yellow star" at the front. Click on the FAQ link to learn more about etiquette, how to ask a question, reward points, etc. It's a very useful start.

    Now, to answer your question: you said you looked at Vidal's blog. I assume (because you did not say) that it included this one...

    http://oraclebizint.WordPress.com/2007/12/28/Oracle-BI-EE-101332-report-bursting-using-delivers-Phase1-using-pipelined-functions/

    If it didn't, read it; it's very informative. But in a few simple words:

    Report bursting sends a report or reports (or even dashboard pages) to multiple users at the same time. Think of lava "bursting" from a volcano, every hot rock a "report". (This analogy gives you a hint of where I come from...)

    Now, how do I "burst" these reports to all my users? Well, that's where Delivers comes in. Delivers allows you to set the parameters for how frequently these reports are delivered and what conditions must be fulfilled before the "delivery" is triggered. Once that is done, you have an iBot - the mechanism that will send your reports to all your users.

    But where is everything set up that lets Delivers know how to deliver these reports? This is where you start with setting up your Scheduler tables.

    Read this: (again, you did not say which of Vidal's articles you were looking at...)
    http://oraclebizint.WordPress.com/2007/09/13/Oracle-BI-EE-10133-configuring-delivers-ibots/

    And this: http://obiee101.blogspot.com/2008/08/obiee-configuring-configuring-scheduler.html

    Summary:

    You want to send reports to many different users. First you set up the Scheduler tables. Once that is done, you are ready to send reports. Say you want to send report 1 to 5 users. You are bursting this report to five users. You go to Delivers to set up your conditions of "when" etc. When you are finished, you have created an iBot. For report 2, you go through the same process in Delivers to set up the conditions for your second report burst, and when you're done, you have your second iBot.

    This should give you an idea of iBots, Delivers and report bursting.

    Good luck.

  • Can OBIEE distribute reports to non-OBIEE users' e-mail addresses?

    We are looking at using OBIEE, but one of our requirements is to be able to schedule and deliver basic (Excel or PDF maybe) reports by e-mail to a relatively large number of recipients.

    When I say "relatively large number", what I mean is: more recipients than we want to buy licenses for. These recipients would not need, or be allowed, to access anything other than the scheduled report we send them weekly or monthly, so it does not make sense for us to purchase licenses for them.

    But from what I can find so far, OBIEE delivers reports only to authorized users.

    Is there actually some way to schedule and e-mail reports to any (non-user) e-mail address in OBIEE?

    Or would I need a separate third-party solution to achieve this? (Any suggestions would be appreciated!)

    Thank you!

    If you store the people in a table (essentially any datasource you can attack with the RPD), you create a query pulling up all the targeted users, and then in the "Recipients" tab of the iBot you can choose

    "Determine Recipients from Conditional Request".

    Then you specify the "column containing recipients".

    MUS wrote about this subject some time ago as well: http://oraclebizint.wordpress.com/2007/12/28/oracle-bi-ee-101332-report-bursting-using-delivers-phase1-using-pipelined-functions/

  • APEX report from a function

    Hello everyone, I have a function that returns a ref cursor for a select statement.
    I want to use it in a report, but it does not work.
    The error indicated is: ORA-00932: inconsistent datatypes: expected NUMBER got CURSOR

    I'm calling the function this way: SELECT * FROM (select assessment.test01 from dual);


    Can you help me please.
    Thank you.

    Hello

    Following on from Salvatore's post: I wrote a blog entry a while ago about generic charting using pipelined functions, which might be useful (to illustrate the principles).

    http://Jes.blogs.shellprompt.NET/2006/05/25/generic-charting-in-application-express/
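    For illustration, this sketch shows the general shape of that pipelined-function approach applied to this question. All the type, function and column names below are assumptions; assessment.test01 is taken to be a function returning a two-column ref cursor:

    -- assumed object and collection types
    create or replace type chart_row is object (label varchar2(100), value number);
    /
    create or replace type chart_tab is table of chart_row;
    /
    -- wrap the ref cursor in a pipelined function the report can SELECT from
    create or replace function test01_rows return chart_tab pipelined as
      v_cur   sys_refcursor;
      v_label varchar2(100);
      v_value number;
    begin
      v_cur := assessment.test01;   -- the original cursor-returning function
      loop
        fetch v_cur into v_label, v_value;
        exit when v_cur%notfound;
        pipe row (chart_row(v_label, v_value));
      end loop;
      close v_cur;
      return;
    end;
    /
    -- report query:
    select label, value from table(test01_rows);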

    Hope this helps,

    John.
    --------------------------------------------
    Blog: http://jes.blogs.shellprompt.net
    Work: http://www.apex-evangelists.com
    Author of Pro Application Express: http://tinyurl.com/3gu7cd
    REWARDS: Don't forget to mark correct or helpful posts on the forum, not just for my answers but for everyone's!
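A common workaround for the ORA-00932 problem is to wrap the ref-cursor-returning function in a pipelined function, so the report region can select from it with the TABLE operator. This is only a sketch: the original function is not shown, so the type names, column list, and the `assessment.test01` call below are all assumptions:

```sql
-- SQL types visible to the report's SELECT (hypothetical shape)
CREATE OR REPLACE TYPE test_row AS OBJECT (id NUMBER, name VARCHAR2(100));
/
CREATE OR REPLACE TYPE test_tab AS TABLE OF test_row;
/
-- Pipelined wrapper: fetches from the ref cursor and pipes SQL rows
CREATE OR REPLACE FUNCTION test01_rows RETURN test_tab PIPELINED IS
  v_cur  SYS_REFCURSOR;
  v_id   NUMBER;
  v_name VARCHAR2(100);
BEGIN
  v_cur := assessment.test01;   -- the original ref-cursor function (assumed)
  LOOP
    FETCH v_cur INTO v_id, v_name;
    EXIT WHEN v_cur%NOTFOUND;
    PIPE ROW (test_row(v_id, v_name));
  END LOOP;
  CLOSE v_cur;
  RETURN;
END;
/
-- Report source:
SELECT * FROM TABLE(test01_rows);
```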

  • Correct use of a pipelined function?

    Hello

    I have a pipelined function; I am using 11g. The idea is that I can pass in the name of a table and it returns a set of rowcounts for that table.

    I obviously have something wrong, but what?

    -- Package spec

    create or replace PACKAGE GetMigSamples IS

      -- define types for a record and a table of records like this
      TYPE sample_record IS RECORD (
        person_id         NUMBER,
        cf_id             VARCHAR2(10),
        number_of_records NUMBER);

      TYPE sample_table IS TABLE OF sample_record;

      FUNCTION sp_mig_samples (p_table IN VARCHAR)
        RETURN sample_table
        PIPELINED;

    END;

    -- Package body

    create or replace PACKAGE BODY GetMigSamples IS

      FUNCTION sp_mig_samples (p_table IN VARCHAR) RETURN sample_table PIPELINED IS
        v_cur      sys_refcursor;
        v_rec      sample_record;
        v_migtable VARCHAR(64);
        v_stmt_str VARCHAR2(400);
        v_col      VARCHAR(64);
      BEGIN
        v_migtable := p_table;

        -- some tables have a different name for the foreign key column
        v_col := CASE
                   WHEN v_migtable = 'MYTABLE' THEN 'MAIN_ID'
                   ELSE 'PERSON_ID'
                 END;

        -- build a sql query for this table and foreign key column
        v_stmt_str := 'SELECT
                         mx.' || v_col || ' as person_id,
                         COALESCE(mx.reference, ''?'') as cf_id,
                         COUNT(*) as number_of_records
                       FROM ' || p_table || ' mx
                       GROUP BY mx.' || v_col;

        -- open the query and loop through it, piping each row
        OPEN v_cur FOR v_stmt_str;
        LOOP
          FETCH v_cur INTO v_rec;
          EXIT WHEN v_cur%NOTFOUND;
          PIPE ROW (v_rec);
        END LOOP;
        CLOSE v_cur;
        RETURN;
      END;

    END GetMigSamples;

    When I use it:

      select getmigsamples.sp_mig_samples('M_MY_TABLE') from dual;

    I get:

      FW.SYS_PLSQL_228255_29_1()

    Which I guess means that I have a reference to an object, rather than the actual values in the rows. I have tried to correct it for ages and have now reached the tear-my-hair-out point. Can anyone help please?

    Thank you

    When I use it:

      select getmigsamples.sp_mig_samples('M_MY_TABLE') from dual;

    I get:

      FW.SYS_PLSQL_228255_29_1()

    Which I guess means that I have a reference to an object, rather than the actual values in the rows. I have tried to correct it for ages and have now reached the tear-my-hair-out point.

    No - this "thing" you got is a hidden SQL type that Oracle automatically created to match the PL/SQL type you used. SQL can only work with SQL types defined at the schema level, but for PIPELINED functions Oracle allows you to specify PL/SQL types and creates hidden SQL types to make it work.

    A pipelined function should be treated as a table. You must use the TABLE operator:

      select * from table(myFunction);

    Try this simple example:

    -- type to match an emp record
    create or replace type emp_scalar_type as object
      (EMPNO    NUMBER(4),
       ENAME    VARCHAR2(10),
       JOB      VARCHAR2(9),
       MGR      NUMBER(4),
       HIREDATE DATE,
       SAL      NUMBER(7,2),
       COMM     NUMBER(7,2),
       DEPTNO   NUMBER(2)
      )
    /

    -- a table of emp records
    create or replace type emp_table_type as table of emp_scalar_type
    /

    -- pipelined function
    create or replace function get_emp (p_deptno number)
      return emp_table_type
      PIPELINED
    as
      TYPE EmpCurTyp IS REF CURSOR RETURN emp%ROWTYPE;
      emp_cv EmpCurTyp;
      l_rec  emp%rowtype;
    begin
      open emp_cv for select * from emp where deptno = p_deptno;
      loop
        fetch emp_cv into l_rec;
        exit when (emp_cv%notfound);
        pipe row (emp_scalar_type(l_rec.empno, LOWER(l_rec.ename),
          l_rec.job, l_rec.mgr, l_rec.hiredate, l_rec.sal, l_rec.comm, l_rec.deptno));
      end loop;
      return;
    end;
    /

    select * from table(get_emp(20))

    See the use of the TABLE operator?

  • Using pipelined table functions with other tables

    I'm on DB 11.2.0.2 and have used pipelined table functions sparingly, but plan to use them for a project with a fairly large table (many rows). In my tests, selecting from the pipelined table performs well enough (whether directly from the pipelined table or from the view I created over it). Where I start to see some degradation is when I try to join the pipelined table view to other tables and add WHERE conditions.

    Query:

      SELECT A.empno, A.empname, A.job, B.sal
        FROM EMP_VIEW A, EMP B
       WHERE A.empno = B.empno
         AND B.mgr = '7839'

    I've seen articles and blogs that mention this as a cardinality issue and offer some undocumented methods to try to combat it.

    Can someone please give me some tips or tricks on this? Thank you!

    I created a simple example using the emp table below to help illustrate what I'm doing.

    DROP TYPE EMP_TYPE;

    DROP TYPE EMP_SEQ;

    CREATE OR REPLACE TYPE EMP_SEQ AS OBJECT
      (EMPNO NUMBER(10),
       ENAME VARCHAR2(100),
       JOB   VARCHAR2(100));
    /

    CREATE OR REPLACE TYPE EMP_TYPE AS TABLE OF EMP_SEQ;
    /

    CREATE OR REPLACE FUNCTION get_emp RETURN EMP_TYPE PIPELINED AS
    BEGIN
      FOR cur IN (SELECT empno,
                         ename,
                         job
                    FROM emp)
      LOOP
        PIPE ROW (EMP_SEQ(cur.empno,
                          cur.ename,
                          cur.job));
      END LOOP;
      RETURN;
    END get_emp;
    /

    CREATE OR REPLACE VIEW EMP_VIEW AS SELECT * FROM TABLE(get_emp());
    /

    SELECT A.empno, A.empname, A.job, B.sal
      FROM EMP_VIEW A, EMP B
     WHERE A.empno = B.empno
       AND B.mgr = '7839'
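For reference, the workaround most often cited for the cardinality problem is the CARDINALITY hint. It is undocumented and unsupported, so treat this as a sketch only; the row estimate of 10000 is an arbitrary example, not a recommendation:

```sql
-- Tell the CBO roughly how many rows the pipelined view returns,
-- instead of letting it fall back on its default guess for table functions.
SELECT /*+ CARDINALITY(a 10000) */
       a.empno, a.empname, a.job, b.sal
  FROM emp_view a, emp b
 WHERE a.empno = b.empno
   AND b.mgr = '7839';
```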

    bobmagan wrote:

    The ability to join would give me the most flexibility.

    Pipelines can be joined. But behind them is PL/SQL code - not tables. And no indexes.

    Consider a view:

    create or replace view sales_personel as select * from emp where job_type = 'SALES'

    And you use the view to determine the salespeople in department 123:

    select * from sales_personel where dept_id = 123

    Logically, Oracle can push the predicate into the view and treat it as the following SQL statement:

    select * from emp where job_type = 'SALES' and dept_id = 123


    If the two columns in the filter are indexed, for example, it may well decide to use an index merge to determine which EMP rows are both SALES and in department 123.

    Now consider the exact same scenario with a pipeline. The internals of a pipeline are opaque to the SQL engine. It cannot tell the pipeline's internal code "Hey, only give me employees in department 123".

    It has to run the pipeline. It must evaluate each piped row and apply the "dept_id = 123" predicate to it. In essence, you must treat the complete pipeline as a full table scan. And a slow one at that, since it takes more than simply reading rows when you perform data transformation too.

    So yes - you can use predicates on pipelines, you can join them, use analytic SQL and so on - but expecting one to behave like a table in terms of SQL/CBO optimisation is not realistic, and points to a somewhat flawed understanding of what a pipeline is and how it should be designed and used.
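The usual design response is to push the filter into the pipeline yourself via a parameter, so the predicate is applied inside the pipeline's own cursor rather than to its output. A sketch only: the type and function names below are hypothetical, loosely mirroring the sales_personel example:

```sql
-- Assumed SQL types for the piped rows
CREATE OR REPLACE TYPE sales_row AS OBJECT (empno NUMBER, ename VARCHAR2(30));
/
CREATE OR REPLACE TYPE sales_tab AS TABLE OF sales_row;
/
-- The department filter runs inside the pipeline's own query,
-- so only matching rows are ever piped to the caller.
CREATE OR REPLACE FUNCTION sales_personel_fn (p_dept_id NUMBER)
  RETURN sales_tab PIPELINED AS
BEGIN
  FOR cur IN (SELECT empno, ename
                FROM emp
               WHERE job_type = 'SALES'
                 AND dept_id  = p_dept_id)
  LOOP
    PIPE ROW (sales_row(cur.empno, cur.ename));
  END LOOP;
  RETURN;
END;
/
SELECT * FROM TABLE(sales_personel_fn(123));
```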

  • using a pipelined function

    Hello

    I defined a VC_ARRAY_1 as
    create or replace TYPE  "VC_ARRAY_1" as table of varchar2(4000)
    and the test below works:
    select * from table(VC_ARRAY_1('qqqq', 'pppp'))
    
    COLUMN_VALUE
    qqqq
    pppp
    But when I use the function like this:
    select * from table(vc_array_1(select personkey from tbl_persons))
    I get an error ORA-00936: missing expression (the column personkey is a varchar2(4000)). What is wrong with the second usage?

    Thank you
    Rose

    rose wrote:
    What is the problem with the use of the second?

    Everything. You need to cast the query results into the collection type:

    select  *
      from  table(
                  cast(
                       multiset(
                                select  personkey
                                  from  tbl_persons
                               )
                       as vc_array_1
                      )
                 )
    /
    

    SY.
    P.S. and where is a function in pipeline in all this?
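Picking up on that P.S.: nothing in the CAST/MULTISET answer is actually pipelined. For contrast, a genuinely pipelined version of the same lookup might look like the following sketch (the function name is hypothetical):

```sql
-- Stream personkey values one at a time instead of
-- materialising the whole collection with CAST/MULTISET.
CREATE OR REPLACE FUNCTION personkeys RETURN vc_array_1 PIPELINED AS
BEGIN
  FOR cur IN (SELECT personkey FROM tbl_persons)
  LOOP
    PIPE ROW (cur.personkey);
  END LOOP;
  RETURN;
END;
/
SELECT * FROM TABLE(personkeys);
```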
