Query EQL via WS

Hello

It is a follow-up to a previous thread in the forum:
Query EQL in Conversation WS

As mentioned in that thread, I am able to call the WS using the structure defined in the WSDL to make an EQL query and return the results.
However, when I try to do it programmatically, I notice that the ContentElementConfig node does not have any child nodes, much less the LQLQueryString node defined in the WSDL file.

I know that Julia mentioned in that post that the Studio components use the LQL parser service first before using the Conversation Service, but could someone shed some light on this, and on the correct order and procedure for making an EQL query against the server via the Endeca web services?

Thank you!
Chris

Chris,

Which programming language are you using? I think the exact answer will depend on that, as LQLQueryString is defined (essentially) as an extension of the string type.

Using your example above, the following C# code would hit the Conversation Service correctly with your LQL statement (assuming a web service reference that defines the Endeca.Conversation23 namespace):

// Create the request object
Endeca.Conversation23.Request req = new Endeca.Conversation23.Request();

// Create your LQLConfig object. NOTE: because of the way the .NET proxy sets up its
// bindings, you use the Item property to set the LQL string.
Endeca.Conversation23.LQLConfig lqlConfig = new Endeca.Conversation23.LQLConfig();
lqlConfig.Id = "myConfig";
lqlConfig.HandlerFunction = "LQLHandler";
lqlConfig.HandlerNamespace = "http://www.endeca.com/MDEX/conversation/1/0";
lqlConfig.Item = @"RETURN Statement AS SELECT
SUM(FactSales_SalesAmount) WHERE (DimDate_FiscalYear = 2008) AS Sales2008,
SUM(FactSales_SalesAmount) WHERE (DimDate_FiscalYear = 2007) AS Sales2007,
((Sales2008 - Sales2007) / Sales2007 * 100) AS pctChange,
COUNTDISTINCT(FactSales_SalesOrderNumber) AS TransactionCount
GROUP";

// Add the LQLConfig to the list of ContentElementConfigs
Endeca.Conversation23.ContentElementConfig[] conf = new Endeca.Conversation23.ContentElementConfig[] { lqlConfig };
req.ContentElementConfig = conf;

// Set the PassThrough and the empty State
req.PassThrough = new Endeca.Conversation23.CatchAll();
req.State = new Endeca.Conversation23.State();

// Hit the quickstart Conversation Service on your local instance of Endeca Server
Endeca.Conversation23.ConversationPortClient cpc = new Endeca.Conversation23.ConversationPortClient("ConversationPort", "http://localhost:7770/ws/conversation/quickstart?WSDL");
Endeca.Conversation23.Results results = cpc.Request(req);

I hope that helps and is easy to read; the forum's formatting tools are not great.

Kind regards

Patrick Rafferty
http://branchbird.com

Edited by: Branchbird - Pat on October 18, 2012 11:00

Tags: Business Intelligence

Similar Questions

  • XenServer multipath.conf (blacklisting EQL via multipath.conf)

    Is it possible to effectively blacklist EQL connections via multipath.conf?

    This user has both EQL and EMC storage attached to this particular host and wants to kill the multiple paths to the PS6100 for a stable job and dev environment.

    Is there any documentation for this? Would the following lines in multipath.conf work for this? Thank you -

    blacklist {
        device {
            vendor "EQLOGIC"
            product "100E-00"    # Equallogic 100E-00
        }
    }

    Hello

    All that is required is:

    blacklist {
        device {
            vendor "EQLOGIC"
        }
    }

    Kind regards

  • Query EQL in Conversation WS

    This thread is related to the discussion in this thread: Re: code for Studio OOB Source components

    So far, I'm still not able to submit an EQL statement in the Conversation WS request in OEID 2.3.
    Any thoughts would be much appreciated.

    Thank you!

    I am trying to install the Quick Start and will try your request today. Meanwhile, have you checked the EQL topics in the Endeca Information Discovery Migration Guide? EQL has changed dramatically from the previous version and has been much improved. You may find the reason why your statement does not work. Or, better yet, the EQL experts there might have an answer for you.

    UPDATE: I was able to run your query. The only change I made was using a different handler function: the LQLHandler that is used in the example shown previously. This query runs and produces results. Thanks to the EQL developers who helped me solve the problems.
    Here's my query:


    <ns:Request xmlns:ns="http://www.endeca.com/MDEX/conversation/1/0"
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                xmlns:typ="http://www.endeca.com/MDEX/lql_parser/types">
      ...
      <ContentElementConfig xsi:type="LQLConfig"
          HandlerNamespace="http://www.endeca.com/MDEX/conversation/1/0"
          HandlerFunction="LQLHandler"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        RETURN Statement AS SELECT
        SUM(FactSales_SalesAmount) WHERE (DimDate_FiscalYear = 2008) AS Sales2008,
        SUM(FactSales_SalesAmount) WHERE (DimDate_FiscalYear = 2007) AS Sales2007,
        ((Sales2008 - Sales2007) / Sales2007 * 100) AS pctChange,
        COUNTDISTINCT(FactSales_SalesOrderNumber) AS TransactionCount
        GROUP
      </ContentElementConfig>
      ...
    </ns:Request>
    I hope this helps!

    Published by: JuliaM on May 7, 2012 09:31

  • Parsing the XML response of an EQL query

    Hello, I am new to Endeca, and I have upgraded from Endeca 2.4 to Endeca 3.0.
    When I run a query from the Integrator, I get a response like this:

    ------
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
      <soapenv:Header/>
      <soapenv:Body>
        <cs:Results xmlns:cs="http://www.endeca.com/MDEX/conversation/2/0"
                    xmlns:mdex="http://www.endeca.com/MDEX/XQuery/2009/09">
          <cs:ContentElement xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                             xsi:type="cs:LQLConfig" Id="LQLConfig">
            <cs:ResultRecords NumRecords="390" Name="Messages">
              <cs:DimensionHierarchy/>
              <cs:AttributeMetadata name="FactPost_PostId" type="mdex:string"/>
              <cs:AttributeMetadata name="FactPost_TimeId" type="mdex:string"/>
              <cs:AttributeMetadata name="FactPost_Type" type="mdex:string"/>
              <cs:Record>
                <cs:attribute name="FactPost_PostId" type="mdex:string">100002635219620_411535012277669</cs:attribute>
                <cs:attribute name="FactPost_TimeId" type="mdex:string">2013-05-01T01:49:35-0000</cs:attribute>
                <cs:attribute name="FactPost_Type" type="mdex:string">FB_POST_SEARCH_1</cs:attribute>
              </cs:Record>
              <cs:Record>
                <cs:attribute name="FactPost_PostId" type="mdex:string">420550781295893_592578997426403</cs:attribute>
                <cs:attribute name="FactPost_TimeId" type="mdex:string">2013-05-01T03:17:52-0000</cs:attribute>
                <cs:attribute name="FactPost_Type" type="mdex:string">FB_POST_SEARCH_1</cs:attribute>
              </cs:Record>
            </cs:ResultRecords>
          </cs:ContentElement>
        </cs:Results>
      </soapenv:Body>
    </soapenv:Envelope>
    -----

    How do I parse it? I tried with the XMLReader component, with this mapping:
    -----
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <context xpath="/soapenv:Envelope" outPort="0"
             namespacePaths='soapenv="http://schemas.xmlsoap.org/soap/envelope/";cs="http://www.endeca.com/MDEX/conversation/2/0"'>
        <context xpath="./soapenv:Body/cs:Results/cs:ContentElement/cs:ResultRecords">
            <mapping cloverField="FactPost_PostId" xpath="./cs:Record/cs:attribute[@name='FactPost_PostId']"/>
            <mapping cloverField="FactPost_TimeId" xpath="./cs:Record/cs:attribute[@name='FactPost_TimeId']"/>
            <mapping cloverField="FactPost_Type" xpath="./cs:Record/cs:attribute[@name='FactPost_Type']"/>
        </context>
    </context>
    -----

    but I get an error:
    Node XML_READER0 finished with status: ERROR caused by: XPath result to fill the field "FactPost_PostId" contains two or more values!

    What is wrong in my mapping? How can I parse this kind of XML?

    Try to collapse these contexts down to one, for example:


    <context
        xpath="/soapenv:Envelope/soapenv:Body/cs:Results/cs:ContentElement/cs:ResultRecords/cs:Record"
        outPort="0"
        namespacePaths='soapenv="http://schemas.xmlsoap.org/soap/envelope/";cs="http://www.endeca.com/MDEX/conversation/2/0"'>
        <mapping cloverField="FactPost_PostId" xpath="./cs:attribute[@name='FactPost_PostId']"/>
        <mapping cloverField="FactPost_TimeId" xpath="./cs:attribute[@name='FactPost_TimeId']"/>
        <mapping cloverField="FactPost_Type" xpath="./cs:attribute[@name='FactPost_Type']"/>
    </context>

  • count() function throws an error in an EQL query for a metric

    Hello

    I am trying to use the count function in an EQL query for a metrics bar to display the number of occurrences of a record.
    When the record type does not exist, I expect the function to return the number 0. Instead the function raises a runtime exception and the whole Metrics Bar portlet fails with the error "No results available", even though the other components of the metrics bar have valid results.

    2013-02-21 13:03:51,216 WARN  [MetricsBarQueryProcessor] The EQL query returned no results.
    2013-02-21 13:03:51,216 ERROR [MetricsBarQueryProcessor] The EQL query returned no results.
    2013-02-21 13:03:51,216 WARN  [MetricsBarResultsProcessor] No statement result to process.

    The EQL query I use is
    RETURN NUMBER_OCCURS AS SELECT COUNT(EMP_ID) AS NUMBER_OCCURS WHERE EMP_NAME IS NOT NULL GROUP;

    I tried to use COALESCE, but it does not help.

    Can someone please suggest a workaround?

    Varkashy,

    It should work...

    DEFINE foo AS SELECT COUNT(1) AS NUMBER_OCCURS WHERE EMP_NAME IS NOT NULL GROUP;

    RETURN NUMBER_OCCURS AS SELECT
    COALESCE(foo[].NUMBER_OCCURS, 0) AS NUMBER_OCCURS
    GROUP

    Patrick Rafferty
    http://branchbird.com

    Edited by: Branchbird - Pat on February 21, 2013 05:20

  • EQL query to group into 30-day batches

    Hello

    I would like to use an EQL query to give me the difference between date1 and sysdate in batches of 30 days.
    For example, if date1 is between now and 30 days from now, it should return 1; between thirty and 60 days from now it should return 2, etc.
    I would use this query for a view in EID Studio.

    Please help me write the EQL query. Also, is there any syntax documentation available with EQL examples? Please post links to the same.

    Thank you
    Leon.

    I think you should check out JULIAN_DAY_NUMBER. It will be something like
    FLOOR((EXTRACT(SYSDATE, JULIAN_DAY_NUMBER) - EXTRACT(date1, JULIAN_DAY_NUMBER)) / 30) + 1

    You can change the order of the subtraction, use ABS, use CEIL instead of FLOOR, or adjust the boundaries according to your needs.

    For the EQL syntax documentation see the "Endeca Server Query Language Reference" under http://docs.oracle.com/cd/E35822_01/index.htm

  • Missing column in CF9 query results

    I have a very simple table in mssql 2005 consisting of these columns:

    ID (pk int),

    app_id (int),

    name (varchar50),

    dir_key (varchar50),

    path (varchar100),

    size_limit (smallint),

    ext_limit (varchar50),

    virtual_dir (varchar50)

    My query, run via cfquery or as a query object in the script:

    SELECT * FROM dbo.tblAppUploadSettings WHERE app_id = 1

    Now, when the query runs, the column virtual_dir is missing from the result set. It is present when I run the same query through SQL Management Studio.

    If I explicitly add virtual_dir to the select list ( *, virtual_dir ) then I get the column returned twice.

    I took care to clear the query cache, etc., but to no avail.

    Any thoughts?

    vectorpj wrote:

    I took care to clear the query cache, etc., but to no avail.

    Which "query cache" did you clear?

    vectorpj wrote:

    Any thoughts?

    DO NOT USE

    SELECT *
    

    When you do, the database driver caches the structure of the table the first time the query is executed, and then it won't see changes made to the table in the database until this "cache" is cleared. And the only way, as far as I know, to do that is to restart the services and/or servers. This can lead to a multitude of difficult-to-debug problems, such as the one you are experiencing.
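
    For reference, a minimal sketch of the explicit column list for the table described above (column names taken from the original post), which avoids the driver-side structure cache problem entirely:

    SELECT id, app_id, name, dir_key, path, size_limit, ext_limit, virtual_dir
    FROM dbo.tblAppUploadSettings
    WHERE app_id = 1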

  • Query IDs in ORACLE_SEM_FS_NS: 32 or 64 bit?

    Hello

    I have a question about the query IDs used in the ORACLE_SEM_FS_NS prefix definition and in the Joseki / Oracle querymgt servlet.

    It seems that:
    * the ORACLE_ORARDF_QUERY_MGT_TAB table can apparently contain long values
    * the querymgt servlet accepts long values and returns "No valid query ID specified." for anything larger than 64 bits
    * running queries with long qid values (from a Joseki endpoint and Jena) shows bizarre behavior:
    * qid < 2^32: OK
    * qid < 2^64: returns an exception (see below) or an empty result set immediately (without exception)
    * qid > 2^64: the query works OK but the qid does not stop the query because it is more than 64 bits


    Here is the exception shown if you use a qid between 2^32 and 2^64:
    Exception in thread "AWT-EventQueue-0"
    java.lang.NumberFormatException: For input string: "8947349906"
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
    at java.lang.Integer.parseInt(Integer.java:459)
    at java.lang.Integer.parseInt(Integer.java:497)
    at oracle.spatial.rdf.client.jena.OracleQueryContextParameters.getQID(OracleQueryContextParameters.java:83)


    My question: because of the differences when using large values, I don't understand exactly whether the query ID is supposed to be 32 or 64 bits. Also, I have not checked whether the ID is signed or unsigned. Could you please specify exactly the range of possible values this qid may take?

    Thank you

    Kind regards
    Julien

    Hi Julien,

    My comments are in line.

    Cheers,
    Vlad

    ----
    Well, to be honest, I can't reproduce the problem I encountered yesterday. The WebLogic server hosting Joseki was overloaded for some other reason (out of memory) and was only answering the querymgt requests (the SPARQL queries themselves were blocked threads).

    Regarding point 4 of your explanation: when you say that "the Jena adapter" queries the DB regularly to check whether any query should be killed, do you mean that the following might work?
    Example: A user runs a SPARQL query via a load balancer across two instances of Joseki w/ querymgt. Then the same user wants to kill the same long-running query through the same load balancer and hits the other instance of Joseki w/ querymgt.
    ----

    V: The two instances of Joseki use the same datasource, right? In that case, the kill request will be sent to both instances of Joseki.

    --------
    If I understand what you're saying, the query management features are available for queries that go through Joseki even when they have been stopped on the client side. In that case, is there a way to stop long-running queries that are sent directly through JDBC (using the Jena adapter)?
    --------
    V: If you submit a SPARQL query programmatically via the Jena adapter against a GraphOracleSem backed by Oracle, and you specify a QID in the prefix, then the query is registered, and you will always be able to kill it using the querymgt servlet. Let me know if that answers your question.

    ------
    On the first issue (available qid values), do you intend to fix it so that more than just half of all the 32-bit values are available? I ask this because it is quite common to use 32-bit (or a multiple of 32, anyway) IDs for instances / sessions in client applications, and it seems to me that this would make it easier to use them as query IDs.
    ------
    V: We will consider allowing 64-bit query IDs in the next version of the Jena adapter.

  • Problem with CFMAIL in MX7

    I have used CFMAIL in an old template on previous versions of CF without problem, but it seems to trigger an error on MX7.

    Does anyone know what I'm doing wrong?

    My code:
    <CFMAIL QUERY="mailing"
        SUBJECT="#DateFormat(Now(), 'MMMM YYYY')# - What's New Wild Asia"
        FROM="#AdminEmail#"
        TO="#Trim(email)#"
        ...="#MailingList#"
    >

    This was my error:

    Attribute validation error for the CFMAIL tag.
    The value of the TO attribute, which is currently "[email protected] my", is not valid.

    The error occurred in D:\Reza\DATA\PROJECTS\Wild Asia web\WEB\Wild Asia\public_html\WWW\members\mass_emails_submit.cfm: line 332
    Called from D:\Reza\DATA\PROJECTS\Wild Asia web\WEB\Wild Asia\public_html\WWW\members\mass_emails_submit.cfm: line 329
    Called from D:\Reza\DATA\PROJECTS\Wild Asia web\WEB\Wild Asia\public_html\WWW\members\mass_emails_submit.cfm: line 326
    Called from D:\Reza\DATA\PROJECTS\Wild Asia web\WEB\Wild Asia\public_html\WWW\members\mass_emails_submit.cfm: line 1

    330: <!--- send the e-mail message, based on the entry form --->
    331: <CFMAIL QUERY="mailing"
    332:     SUBJECT="#DateFormat(Now(), 'MMMM YYYY')# - What's New Wild Asia"
    333:     FROM="#AdminEmail#"
    334:     TO="#Trim(eMail)#"

    Resources:

    * See the ColdFusion documentation to verify that you are using the correct syntax.
    * Search the Knowledge Base to find a solution to your problem.

    Browser Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; RV:1.8.1) Gecko/20061010 Firefox/2.0
    Remote address 127.0.0.1
    Reference http://127.0.0.1/wildasia/members/mass_emails.cfm
    DateTime December 6 06 14:23
    Stack trace
    at cfmass_emails_submit2ecfm1053470653._factor12(D:\Reza\DATA\PROJECTS\Wild Asia web\WEB\Wild Asia\public_html\WWW\members\mass_emails_submit.cfm:332) at cfmass_emails_submit2ecfm1053470653._factor13(D:\Reza\DATA\PROJECTS\Wild Asia web\WEB\Wild Asia\public_html\WWW\members\mass_emails_submit.cfm:329) at cfmass_emails_submit2ecfm1053470653._factor14(D:\Reza\DATA\PROJECTS\Wild Asia web\WEB\Wild Asia\public_html\WWW\members\mass_emails_submit.cfm:326) at cfmass_emails_submit2ecfm1053470653.runPage(D:\Reza\DATA\PROJECTS\Wild Asia web\WEB\Wild Asia\public_html\WWW\members\mass_emails_submit.cfm:1)

    coldfusion.tagext.InvalidTagAttributeException: Attribute validation error for tag CFMAIL, attribute TO.
    at coldfusion.tagext.net.MailTag.validate(MailTag.java:489)
    at coldfusion.tagext.net.MailTag.processAttributes(MailTag.java:549)
    at cfmass_emails_submit2ecfm1053470653._factor12(D:\Reza\DATA\PROJECTS\Wild Asia web\WEB\Wild Asia\public_html\WWW\members\mass_emails_submit.cfm:332)
    at cfmass_emails_submit2ecfm1053470653._factor13(D:\Reza\DATA\PROJECTS\Wild Asia web\WEB\Wild Asia\public_html\WWW\members\mass_emails_submit.cfm:329)
    at cfmass_emails_submit2ecfm1053470653._factor14(D:\Reza\DATA\PROJECTS\Wild Asia web\WEB\Wild Asia\public_html\WWW\members\mass_emails_submit.cfm:326)
    at cfmass_emails_submit2ecfm1053470653.runPage(D:\Reza\DATA\PROJECTS\Wild Asia web\WEB\Wild Asia\public_html\WWW\members\mass_emails_submit.cfm:1)
    at coldfusion.runtime.CfJspPage.invoke(CfJspPage.java:152)
    at coldfusion.tagext.lang.IncludeTag.doStartTag(IncludeTag.java:349)
    at coldfusion.filter.CfincludeFilter.invoke(CfincludeFilter.java:65)
    at coldfusion.filter.ApplicationFilter.invoke(ApplicationFilter.java:225)
    at coldfusion.filter.RequestMonitorFilter.invoke(RequestMonitorFilter.java:51)
    at coldfusion.filter.PathFilter.invoke(PathFilter.java:86)
    at coldfusion.filter.LicenseFilter.invoke(LicenseFilter.java:27)
    at coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:69)
    at coldfusion.filter.BrowserDebugFilter.invoke(BrowserDebugFilter.java:52)
    at coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:28)
    at coldfusion.filter.BrowserFilter.invoke(BrowserFilter.java:38)
    at coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:38)
    at coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22)
    at coldfusion.filter.RequestThrottleFilter.invoke(RequestThrottleFilter.java:115)
    at coldfusion.CfmServlet.service(CfmServlet.java:107)
    at coldfusion.bootstrap.BootstrapServlet.service(BootstrapServlet.java:78)
    at jrun.servlet.ServletInvoker.invoke(ServletInvoker.java:91)
    at jrun.servlet.JRunInvokerChain.invokeNext(JRunInvokerChain.java:42)
    at jrun.servlet.JRunRequestDispatcher.invoke(JRunRequestDispatcher.java:257)
    at jrun.servlet.ServletEngineService.dispatch(ServletEngineService.java:541)
    at jrun.servlet.jrpp.JRunProxyService.invokeRunnable(JRunProxyService.java:204)
    at jrunx.scheduler.ThreadPool$DownstreamMetrics.invokeRunnable(ThreadPool.java:318)
    at jrunx.scheduler.ThreadPool$ThreadThrottle.invokeRunnable(ThreadPool.java:426)
    at jrunx.scheduler.ThreadPool$UpstreamMetrics.invokeRunnable(ThreadPool.java:264)
    at jrunx.scheduler.WorkerThread.run(WorkerThread.java:66)

    Try:
    TO="#IIf(IsValid('email', Trim(email)), DE(Trim(email)), DE(MailingList))#"
    (note the adjusted quotes)

    I tested it and it works.

    But remember, it is just a stop gap. The DB needs to be cleaned.
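
    Since the real fix is cleaning the data, here is a hedged sketch of how the bad rows might be found first (the table and column names here are hypothetical, not from the original post):

    -- rough filter: addresses missing an obvious user@host.tld shape
    SELECT id, email
    FROM mailing_list          -- hypothetical table name
    WHERE email IS NULL
       OR email NOT LIKE '%_@_%._%';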

  • Writing a file with the UTL_FILE package

    I am writing PL/SQL code to extract data from a select query via a cursor and write the output to a CSV file.

    The PL/SQL code is as below.

    set serveroutput on size 1000000

    alter session set nls_date_format = 'YYYY-MM-DD';

    CREATE or REPLACE DIRECTORY MY_FILE_DIR AS '/usr/tmp2';

    DECLARE

      file_name varchar2(100) := 'Emp_Data';

      file utl_file.file_type;

      TYPE emp_rec_type IS RECORD (
        v_empno     varchar2(10),
        v_last_name varchar2(40)
      );
      emp_rec emp_rec_type;

      cursor PnJD_cur is
        select employee_number, last_name
        from apps.zshr_employee_v a
        where a.user_person_type = '01'
        and a.EFFECTIVE_END_DATE = '4712-12-31';

    BEGIN

      IF NOT PnJD_cur%ISOPEN THEN
        OPEN PnJD_cur;
      END IF;

      FETCH PnJD_cur INTO emp_rec;

      WHILE PnJD_cur%FOUND
      LOOP

        file := utl_file.fopen('MY_FILE_DIR', file_name || '.CSV', 'w');

        utl_file.put(file, emp_rec.v_empno || ',' || emp_rec.v_last_name);

        utl_file.fclose(file);

        -- dbms_output.put_line(file_name || '.CSV');

        FETCH PnJD_cur INTO emp_rec;

      END LOOP;

    END;

    I believed this code would write "beautifully" to the csv file, but it didn't. Rather, there was output like below. In the end, I had to press CTRL+C to stop the execution.

    In this regard, any HELP is APPRECIATED. Thank you in advance.

    SQL > @EMP_Personal_Job_Data_Extract1123

    Modified session.

    Created directory.

    37

    38

    39

    40

    41

    42

    43

    44

    45

    46

    47

    48

    49 50

    51

    52

    53

    54

    55 ^ C

    SQL >

    Hey,

    Instead of the TYPE & OPEN, FETCH, CLOSE cursor handling, try the simple cursor FOR LOOP mentioned below, unless you have a specific requirement to use those commands.

    set serveroutput on size 1000000

    alter session set nls_date_format = 'YYYY-MM-DD';

    CREATE or REPLACE DIRECTORY MY_FILE_DIR AS '/usr/tmp2';

    DECLARE

      file_name varchar2(100) := 'Emp_Data';

      file utl_file.file_type;

      cursor PnJD_cur is
        select employee_number v_empno, last_name v_last_name
        from apps.zshr_employee_v a
        where a.user_person_type = '01'
        and a.EFFECTIVE_END_DATE = '4712-12-31';

    BEGIN

      file := utl_file.fopen('MY_FILE_DIR', file_name || '.CSV', 'w');

      FOR v_pnjd IN PnJD_cur
      LOOP

        utl_file.put_line(file, v_pnjd.v_empno || ',' || v_pnjd.v_last_name);

      END LOOP;

      utl_file.fclose(file);

    END;

    The above code is based on the assumption that you want to fetch the data and put it all into the same file (Emp_Data.csv).

  • Identifying VMs on vCloud and passing an account/password to the VMs

    Hi all

    On vCloud, any virtual machine can be identified by querying adminVM via the vCloud REST API.

    However, we have an account with spectator (read-only) rights and its password on each virtual machine in order to use the vCloud API, which means I need to distribute this account information to every virtual machine.

    Anyone could easily get this account and password once I pass it to each virtual machine, and anyone could then use it to connect to the vCloud console with this account.

    Although I can reduce the security problem by using a "Console Access Only" account, I still wonder if there is another, unauthenticated way, like what EC2 provides (with EC2 instance metadata).

    Any comments/help is appreciated. Thank you!

    If you have a management server that can query VCD for the list of virtual machines, and the guests would also send their updates to the same management server, then why do the clients need access directly to VCD? If I understand your question, you are just trying to find a way to tie the data from the virtual machine to the list of VMs in VCD? If yes, one way is to simply assign a unique identifier, which may be internal to your company, to each virtual machine using custom OVF properties, which can also be accessed from within the guest OS of the virtual machine. Then, when the VM sends its updates, it can send the unique key and the management server can use this key to link the information to the virtual machine.

    Have a look here to learn more - http://pubs.vmware.com/vcloud-api-1-5/wwhelp/wwhimpl/js/html/wwhelp.htm#context=vCloudAPI&file=GUID-E13A5613-8A41-46E3-889B-8E1EAF10ABBE.html

  • List of resources provisioned to a user, with the associated status

    Hi all

    I need a way to get all of the resources, with the related status, for a given OIM user programmatically (via a SQL query or the OIM Java APIs).

    Could you please help me? Is it possible to have this information?

    Thank you
    Giuseppe.

    The following query works with 10g. Try to tweak the same for 11g. Please also check and confirm whether it works for 11gR2.

    select distinct oiu.oiu_key, oiu.req_key, oiu.oiu_offlined_date, oiu.oiu_offlined_action,
    obi.obi_key, obi.obi_status, obj.obj_key, obj.obj_name, orc.orc_key, orc.orc_create, orc.orc_update,
    orc.orc_status, orc.orc_tos_instance_key, ORC_TASKS_ARCHIVED, ost_status, obj.sdk_key as OBJECTFORMKEY,
    '0' as OBJECTFORMCOUNT, objsdk.sdk_name as OBJECTFORMNAME, tos.sdk_key as PROCESSFORMKEY, '0' as PROCESSFORMCOUNT,
    procsdk.sdk_name as PROCESSFORMNAME, oiu.oiu_serviceaccount
    from xladm.obj obj left outer join sdk objsdk on obj.sdk_key = objsdk.sdk_key,
    oiu oiu left outer join orc orc on oiu.orc_key = orc.orc_key left outer join tos tos on orc.tos_key = tos.tos_key
    left outer join sdk procsdk on tos.sdk_key = procsdk.sdk_key, obi obi, ost ost
    where oiu.obi_key = obi.obi_key and oiu.ost_key = ost.ost_key and obi.obj_key = obj.obj_key and oiu.usr_key =
    (select usr_key from usr where usr_login = '')

  • Essbase for reporting - is this best practice?

    Hello
    We have installed HFM v11 with DRM. We want to be able to plan and produce reports using Smartview. A consultant suggested that we build an Essbase cube in order to get drill capability into eliminations and other details. Is that considered "best practice"? Can we not use Smartview to report against and drill into the HFM SQL repository?
    Thanks in advance

    This looks like a lot of work/maintenance.

    (1) You can easily schedule reports using Hyperion Financial Reporting and the Batch Scheduler in Workspace.
    (2) You can use Extended Analytics for larger reporting requirements and query the data via an ODBC connection with Access, etc.
    (3) You can use SmartView's VBA functionality to auto-refresh reports on "open" and save a copy to a different location - and you can run the macros via the Windows Task Scheduler.

  • 11g - new: NOT IN has a different behavior with an ENABLED FOREIGN KEY!

    Has Oracle changed the way NOT IN works?
    The NOT IN operator is supposed to return no rows when there are NULL values, see the example below:
    Two tables: c1 (table 1) and p1 (the parent of table 1).

    create table c1 (c_col number, p_col number);
    create table p1 (p_col number);
    insert into p1 values (1);
    insert into p1 values (2);
    insert into c1 values (100, 1);
    insert into c1 values (200, 2);
    insert into c1 values (300, null);
    select * from c1
    where p_col not in (select p_col from p1);
    RETURNS NO ROWS! <- this is the expected behavior

    When a foreign key is added, the result changes:
    ALTER TABLE p1
    ADD CONSTRAINT p_PK PRIMARY KEY (p_col);
    ALTER TABLE c1
    ADD CONSTRAINT FK1 FOREIGN KEY
    (p_col) REFERENCES p1;

    THE RESULT AFTER THE CHANGE:
    select * from c1
    where p_col not in (select p_col from p1);
    RETURNS:
    C_COL                  P_COL
    ---------------------- ----------------------
    300

    1 row selected

    WHY?
    When the foreign key is disabled, the result changes back to the old behavior.
    -
    alter table c1 disable constraint fk1;
    select * from c1
    where p_col not in (select p_col from p1);
    RETURNS NO ROWS!

    Enabling the constraint again:
    alter table c1 enable constraint fk1;
    select * from c1
    where p_col not in (select p_col from p1);
    RETURNS:
    C_COL                  P_COL
    ---------------------- ----------------------
    300

    1 row selected
    What happened?

    This is a bug caused by a combination of two features: [join elimination | http://optimizermagic.blogspot.com/2008/06/why-are-some-of-tables-in-my-query.html] introduced in 10gR2 and [null-aware anti-joins | http://structureddata.org/2008/05/22/null-aware-anti-join/] introduced in 11gR1. The 11g CBO first transforms the query via subquery unnesting:

    *****************************
    Cost-Based Subquery Unnesting
    *****************************
    SU: Unnesting query blocks in query block SEL$1 (#1) that are valid to unnest.
    Subquery Unnesting on query block SEL$1 (#1)SU: Performing unnesting that does not require costing.
    SU: Considering subquery unnest on query block SEL$1 (#1).
    SU:   Checking validity of unnesting subquery SEL$2 (#2)
    SU:   Passed validity checks.
    SU: Transform ALL subquery to a single null-aware antijoin.
    

    and then eliminates the join:

    *************************
    Join Elimination (JE)
    *************************
    JE:   cfro: C1 objn:61493 col#:2 dfro:P1 dcol#:2
    JE:   cfro: C1 objn:61493 col#:2 dfro:P1 dcol#:2
    Query block (32724184) before join elimination:
    SQL:******* UNPARSED QUERY IS *******
    SELECT "C1"."C_COL" "C_COL","C1"."P_COL" "P_COL" FROM "TIM"."P1" "P1","TIM"."C1" "C1" WHERE "C1"."P_COL"="P1"."P_COL"
    JE:   eliminate table: P1
    Registered qb: SEL$F9980BE6 0x32724184 (JOIN REMOVED FROM QUERY BLOCK SEL$5DA710D3; SEL$5DA710D3; "P1"@"SEL$2")
    ---------------------
    QUERY BLOCK SIGNATURE
    ---------------------
      signature (): qb_name=SEL$F9980BE6 nbfros=1 flg=0
        fro(0): flg=0 objn=61492 hint_alias="C1"@"SEL$1"
    
    SQL:******* UNPARSED QUERY IS *******
    SELECT "C1"."C_COL" "C_COL","C1"."P_COL" "P_COL" FROM "TIM"."C1" "C1" WHERE "C1"."P_COL" IS NULL
    Query block SEL$F9980BE6 (#1) simplified
    

    which is obviously wrong.
    With optimizer_features_enable = '10.2.0.4' the CBO transforms it into a NOT EXISTS query, since it cannot transform it into a normal join:

    SELECT "SYS_ALIAS_1"."C_COL" "C_COL","SYS_ALIAS_1"."P_COL" "P_COL" FROM "TIM"."C1" "SYS_ALIAS_1"
    WHERE  NOT EXISTS (SELECT 0 FROM "TIM"."P1" "P1" WHERE LNNVL("P1"."P_COL"<>"SYS_ALIAS_1"."P_COL"));
    

    and join elimination is not even considered.
    So as a workaround, you can disable join elimination at either the session or the statement level via the OPT_PARAM hint (although of course only after approval from Oracle Support):

    SQL> explain plan for
      2  select /*+ opt_param('_optimizer_join_elimination_enabled' 'false') */ * from c1
      3  where p_col not in (select p_col from p1)
      4  /
    
    Explained
    
    SQL>
    SQL> select * from table(dbms_xplan.display)
      2  /
    
    PLAN_TABLE_OUTPUT
    --------------------------------------------------------------------------------
    Plan hash value: 1552686931
    ---------------------------------------------------------------------------
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------
    |   0 | SELECT STATEMENT   |      |     1 |    39 |     5  (20)| 00:00:01 |
    |*  1 |  HASH JOIN ANTI SNA|      |     1 |    39 |     5  (20)| 00:00:01 |
    |   2 |   TABLE ACCESS FULL| C1   |     3 |    78 |     3   (0)| 00:00:01 |
    |   3 |   INDEX FULL SCAN  | P_PK |     2 |    26 |     1   (0)| 00:00:01 |
    ---------------------------------------------------------------------------
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       1 - access("P_COL"="P_COL")
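
    A minimal sketch of the session-level alternative mentioned above, using the same underscore parameter that the opt_param hint sets (again, a hidden parameter, so only with Oracle Support's blessing):

    -- disable join elimination for the whole session instead of per statement
    alter session set "_optimizer_join_elimination_enabled" = false;

    select * from c1
    where p_col not in (select p_col from p1);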
    

    BTW, join elimination has two more major bugs: [wrong results | https://metalink2.oracle.com/metalink/plsql/f?p=130:14:9360445691478960109:p14_database_id, p14_docid, p14_show_header, p14_show_help, p14_black_frame, p14_font:NOT, 6894671.8, 1, 1, 1, helvetica] with outer joins and [suboptimal plan | https://metalink2.oracle.com/metalink/plsql/f?p=130:15:9360445691478960109:p15_database_id, p15_docid, p15_show_header, p15_show_help, p15_black_frame, p15_font:BUG, 7668888, 1, 1, 1, helvetica], making this feature dangerous in 10gR2 as well.

  • Timestamp query via database link

    We have a problem with the time synchronization of one of our database servers. I thought it might be possible to check the time difference between two servers by querying the time from one server via a database link.
    But the script:
    connect myuser/xxxxxxxx@db1
    
    select 'DB1: ' server, to_char(systimestamp,'mm/dd/yyyy hh24:mi:ss,ff3') time from dual;
    
    connect myuser/xxxxxxxx@db2
    
    select 'DB2: ' server, to_char(systimestamp,'mm/dd/yyyy hh24:mi:ss,ff3') time from dual;
    
    select 'local (DB2):  ' server,
           to_char(systimestamp,'mm/dd/yyyy hh24:mi:ss,ff3') time
    from dual
    union 
    select 'remote (DB1): ',
           to_char(systimestamp,'mm/dd/yyyy hh24:mi:ss,ff3')
    from dual@db_link_to_db1;
    gives me this result:
    Connected.
    
    SERVE TIME
    ----- -----------------------------
    DB1:  03/08/2013 14:08:51,333
    
    Connected.
    
    SERVE TIME
    ----- -----------------------------
    DB2:  03/08/2013 14:08:51,208
    
    
    SERVER         TIME
    -------------- -----------------------------
    local (DB2):   03/08/2013 14:08:51,298
    remote (DB1):  03/08/2013 14:08:51,298
    You can see that the server I connect to later reports a time that is earlier than the timestamp I got before, which could not happen if the two servers were synchronized.
    But when I try to query the timestamps of both servers in a single statement, I get the local server's time for the remote server as well. Is there a way to get the "real time" of DB1 in a SQL query that is run on DB2?

    Published by: UW (Germany) on 08.03.2013 14:56

    Hi UW,

    MOS note 165674.1 describes how.

    You need a remote function (or view) returning the remote sysdate.

    Regards
    Peter
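
    A minimal sketch of that approach (object and link names assumed, not taken from the note): wrapping SYSTIMESTAMP in a function on DB1 forces the expression to be evaluated on the remote side, unlike SYSTIMESTAMP written directly in a query against dual@db_link_to_db1.

    -- on DB1: a function that returns the remote server's time, already formatted
    create or replace function remote_time return varchar2 is
    begin
      return to_char(systimestamp, 'mm/dd/yyyy hh24:mi:ss,ff3');
    end;
    /

    -- on DB2: call it through the existing database link
    select 'local (DB2):  ' server,
           to_char(systimestamp, 'mm/dd/yyyy hh24:mi:ss,ff3') time
    from dual
    union all
    select 'remote (DB1): ',
           remote_time@db_link_to_db1
    from dual;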
