Pivot the aggregated data output

Hello

Here is the output I have:

node_id | object_name  | att_value | ATTRIBUTE_NAME
469988  | Serum sample | Project   | GSKMDR - Status
469988  | Serum sample | 1         | GSKMDR - Version

Based on the above output, I need to get the output below:

node_id | object_name  | Status  | Version
469988  | Serum sample | Project | 1

I tried to use PIVOT and WM_CONCAT but no luck. Can you please suggest any ideas?

My query is given below.

{noformat}
WITH first_req AS
 (SELECT node_id, object_name, att_value, attribute_name
    FROM (SELECT n.node_id, o.name object_name, o.object_id,
                 att.name_display_code att_ndc, att.value att_value,
                 atty.name attribute_name, n.deletion_date
            FROM vp40.nodes n,
                 vp40.objects o,
                 vp40.attributes att,
                 vp40.attribute_types atty
           WHERE n.object_id = o.object_id
             AND o.object_id = att.object_id
             AND att.attribute_type_id = atty.attribute_type_id) t
   WHERE deletion_date = '01-JAN-1900'
     AND attribute_name IN ('GSKMDR - Version', 'GSKMDR - Status')
     AND node_id = :node_id
     -- node_id of the object "GSKMDR - Concept Template"
 )
SELECT * FROM first_req
{noformat}

ravt261 wrote:
Hello

Here is the output I have:

node_id | object_name  | att_value | ATTRIBUTE_NAME
469988  | Serum sample | Project   | GSKMDR - Status
469988  | Serum sample | 1         | GSKMDR - Version

Based on the above output, I need to get the output below:

node_id | object_name  | Status  | Version
469988  | Serum sample | Project | 1

I tried to use PIVOT and WM_CONCAT but no luck. Can you please suggest any ideas?

Well, PIVOT should work, though you have not shown what you tried in that regard.
WM_CONCAT is an undocumented function, so you'd be a fool to use it, because its functionality may change or it may be removed in future versions.
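As an aside, if you ever do need string aggregation, the documented alternative to WM_CONCAT from 11gR2 onwards is LISTAGG. A minimal sketch against your first_req subquery (column names assumed from your post):

select node_id, object_name
      ,listagg(att_value, ', ') within group (order by attribute_name) as all_values
  from first_req
 group by node_id, object_name

That is only for concatenating values into one string, though; it is not what you need here.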

The query below will pivot the data based on what you have provided (and it will work in versions older than 11g):

select node_id, object_name
      ,max(decode(attribute_name, 'GSKMDR - Status', att_value)) as status
      ,max(decode(attribute_name, 'GSKMDR - Version', att_value)) as version
  from first_req   -- the WITH subquery from your post
 group by node_id, object_name
 order by node_id
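If you do want to use the 11g PIVOT clause instead, a minimal sketch against the same first_req subquery (column names assumed from your query):

select *
  from first_req
 pivot (max(att_value)
        for attribute_name in ('GSKMDR - Status'  as status,
                               'GSKMDR - Version' as version))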

My query is given below.

{noformat}
WITH first_req AS
 (SELECT node_id, object_name, att_value, attribute_name
    FROM (SELECT n.node_id, o.name object_name, o.object_id,
                 att.name_display_code att_ndc, att.value att_value,
                 atty.name attribute_name, n.deletion_date
            FROM vp40.nodes n,
                 vp40.objects o,
                 vp40.attributes att,
                 vp40.attribute_types atty
           WHERE n.object_id = o.object_id
             AND o.object_id = att.object_id
             AND att.attribute_type_id = atty.attribute_type_id) t
   WHERE deletion_date = '01-JAN-1900'

Dates should be treated as dates, not strings.

               WHERE deletion_date = to_date('01-JAN-1900','DD-MON-YYYY')

or (as it is just a date with no time portion)...

               WHERE deletion_date = date '1900-01-01'

Tags: Database

Similar Questions

  • Find the datastore with the most free space

    So I'm trying to build a script to provision virtual machines.

    I have it retrieve the node to place the newly built VM on based on the cluster I tell it to use. I'm just using a Random function to pick a node in the cluster.

    Where I have questions is: how can I find the least-used datastore, based on a particular datastore naming scheme?

    What I mean is:

    Hypothetically, say I have datastores named:

    datastore-prod-01  200 GB free

    datastore-prod-02  500 GB free

    datastore-prod-03  10 GB free

    datastore-qa-01  200 GB free

    datastore-qa-02  1000 GB free

    I want to throw in a piece of code to tell it to look at the "datastore-prod*" datastores and place the virtual machine on the datastore with the most free space (that's assuming the VM will fit on that datastore and leave some free space).

    I guess I want to know if this is possible?

    I would also be concerned about the scenario where the VM I'm building just won't fit on any of my datastores. I guess I need some logic to check whether it is still possible.

    This is more of a wish than a necessity. I'm thinking I could just read the info, or use a CSV file, after running the script. Any recommendations would be greatly appreciated.

    Hello, drivera01-

    You should be able to do this with a very small amount of code.  Get all datastores that match the name pattern, sort by free space (in descending order) and select the top one.  Like:

    ## get the datastore matching datastore-prod* that has the most freespace
    $oDatastoreWithMostFree = Get-Datastore datastore-prod* | Sort-Object -Property FreespaceGB -Descending:$true | Select-Object -First 1

    ## if the freespace plus a bit of buffer space is greater than the size needed for the new VM
    if (($oDatastoreWithMostFree.FreespaceGB + 20) -gt $intNewVMDiskSize) {<# do the provisioning to this datastore #>}
    else {"oh, no -- not enough freespace on datastore '$($oDatastoreWithMostFree.Name)' to provision new VM"}
    

    The second part, where it checks for sufficient free space on the datastore that has the most free space, can be updated to behave as you need, but that should be the basis.  How does this look?

  • How to get out of the safe mode set with msconfig

    I'm working on my computer and needed to start in safe mode. I ran msconfig and rebooted the laptop. I get to the login screen, but when I enter my username and password it tells me that the user name and password are not correct. How do I get out of the safe mode I set with msconfig?

    When you ran msconfig to get set to Safe Mode, did you just enable/check the /SAFEBOOT option on the BOOT.INI tab, or did you do something else?

    There is some malware where, if you use the /SAFEBOOT option, you will not be able to use your system again until you remove the /SAFEBOOT switch from boot.ini so you can boot normally.

    Maybe that's not your exact problem, but I will never suggest to anyone to use the /SAFEBOOT option ever again - too risky when troubleshooting, since you may never be able to boot or log in again until you remove the /SAFEBOOT switch.

    Anywho, if that's what you have done, you can boot into the XP Recovery Console and then either make a new boot.ini file without the /SAFEBOOT switch, or simply rename the boot.ini file you have to something like boot.ini.old so that you don't have a boot.ini file (I know it seems like a weird idea).

    In a single-partition configuration, XP does not even need a boot.ini file to boot.  XP will complain if there is no boot.ini file, but will still start just fine without one (non-believers - try it!).

    After you get booted up and logged in, you can rename boot.ini.old back to boot.ini and run msconfig to remove the /SAFEBOOT option, and never use it again.

  • Error: Data rows with unmapped dimensions exist for period "1 April 2014"

    Hi Experts,

    I get the error below when I click the Execute button to load data in the Data Load area of the 11.1.2.3 workspace. I have already set up Global Mapping (added records for 12 months), Application Mapping (added records for 12 months) and Source Mapping (added the month "1 April 2014" as the period name with Type = Explicit mapping) in Period Mapping. What else should I check to fix this? Thank you.

    2014-04-29 06:10:35,624 [AIF] INFO: Beginning of FDMEE process, process ID: 56
    2014-04-29 06:10:35,625 [AIF] INFO: FDMEE logging level: 4
    2014-04-29 06:10:35,625 [AIF] INFO: FDMEE log file: null\outbox\logs\AAES_56.log
    2014-04-29 06:10:35,625 [AIF] INFO: User: admin
    2014-04-29 06:10:35,625 [AIF] INFO: Location: AAESLocation (Partitionkey:2)
    2014-04-29 06:10:35,626 [AIF] INFO: Period Name: Apr 1, 2014 (Period Key: 4/1/14 12:00 AM)
    2014-04-29 06:10:35,627 [AIF] INFO: Category Name: AAESGCM (Category Key: 2)
    2014-04-29 06:10:35,627 [AIF] INFO: Rule Name: AAESDLR (Rule ID: 7)
    2014-04-29 06:10:37,504 [AIF] INFO: Jython Version: 2.5.1 (Release_2_5_1:6813, September 26 2009, 13:47:54)
    [JRockit (R) (Oracle Corporation)]
    2014-04-29 06:10:37,504 [AIF] INFO: Java Platform: java1.6.0_37
    2014-04-29 06:10:39,364 [AIF] INFO: - START IMPORT STEP -
    2014-04-29 06:10:45,727 [AIF] INFO:
    Importing Source data for period "1 April 2014".
    2014-04-29 06:10:45,742 [AIF] INFO:
    Importing Source data for ledger "ABC_LEDGER".
    2014-04-29 06:10:45,765 [AIF] INFO: Monetary data rows imported from Source: 12
    2014-04-29 06:10:45,783 [AIF] INFO: Total data rows imported from Source: 12
    2014-04-29 06:10:46,270 [AIF] INFO:
    Mapping data for period "1 April 2014".
    2014-04-29 06:10:46,277 [AIF] INFO:
    Processing mappings for column "ACCOUNT".
    2014-04-29 06:10:46,280 [AIF] INFO: Data rows updated by EXPLICIT mapping rule: 12
    2014-04-29 06:10:46,280 [AIF] INFO:
    Processing mappings for column "ENTITY".
    2014-04-29 06:10:46,281 [AIF] INFO: Data rows updated by EXPLICIT mapping rule: 12
    2014-04-29 06:10:46,281 [AIF] INFO:
    Processing mappings for column "UD1".
    2014-04-29 06:10:46,282 [AIF] INFO: Data rows updated by EXPLICIT mapping rule: 12
    2014-04-29 06:10:46,282 [AIF] INFO:
    Processing mappings for column "node2".
    2014-04-29 06:10:46,283 [AIF] INFO: Data rows updated by EXPLICIT mapping rule: 12
    2014-04-29 06:10:46,312 [AIF] INFO:
    Staging data for period "1 April 2014".
    2014-04-29 06:10:46,315 [AIF] INFO: Number of rows deleted from TDATAMAPSEG: 171
    2014-04-29 06:10:46,321 [AIF] INFO: Number of rows inserted into TDATAMAPSEG: 171
    2014-04-29 06:10:46,324 [AIF] INFO: Number of rows deleted from TDATAMAP_T: 171
    2014-04-29 06:10:46,325 [AIF] INFO: Number of rows deleted from TDATASEG: 12
    2014-04-29 06:10:46,331 [AIF] INFO: Number of rows inserted into TDATASEG: 12
    2014-04-29 06:10:46,332 [AIF] INFO: Number of rows deleted from TDATASEG_T: 12
    2014-04-29 06:10:46,366 [AIF] INFO: - END IMPORT STEP -
    2014-04-29 06:10:46,408 [AIF] INFO: - START VALIDATE STEP -
    2014-04-29 06:10:46,462 [AIF] INFO:
    Validating data maps for period "1 April 2014".
    2014-04-29 06:10:46,473 [AIF] INFO: Data rows marked as invalid: 12
    2014-04-29 06:10:46,473 [AIF] ERROR: Error: Data rows with unmapped dimensions exist for period "1 April 2014".
    2014-04-29 06:10:46,476 [AIF] INFO: Total data rows available for export to target: 0
    2014-04-29 06:10:46,478 [AIF] FATAL: Error in CommMap.validateData
    Traceback (most recent call last):
      File "<string>", line 2348, in validateData
    RuntimeError: [u'Error: Data rows with unmapped dimensions exist for period "1 April 2014".']

    2014-04-29 06:10:46,551 [AIF] FATAL: COMM error validating data
    2014-04-29 06:10:46,556 [AIF] INFO: End of FDMEE process, process ID: 56

    Thanks to all you guys

    This problem was solved after I mapped all dimensions in the data load mappings. I had mapped only Entity, Account, Custom1 and Custom2 at first, because there is no source mapping for Custom3, Custom4 and ICP. After adding mappings for Custom3, Custom4 and ICP, the problem was resolved. So all dimensions should be mapped here.

  • Questions on the aggregation of data for parents

    Hi guys,

    I have two questions.

    One, once I have entered data at the leaf level, values aggregate to their parents except in the period dimension. The aggregation operator is set, the storage type is never share, and the data type is currency. Should I provide a member formula to calculate the value for upper-level members?

    Two, some non-leaf members need their own values. For example, a business unit in the entity dimension has some departments. Apart from the consolidation of the departments, the BU needs a budget for itself. But non-leaf members' cells are read-only. How do I deal with this situation?

    Thank you

    You can't, not in a bottom-up version. You could allocate the BU's own amount to a sibling of bu1_. Or you can use a top-down version, input values at the parents and allocate down from the non-leaf members.

    Cheers,
    Alp

  • Foreign key constraint not recognized when synchronizing the data dictionary with the model

    Hello

    Data Modeler does not recognize a foreign key constraint when synchronizing the data dictionary with the model, although the foreign key is there (in the database whose data dictionary is being read). I can't find any criterion for when a foreign key is not recognized by Data Modeler. Are there limits on attribute length, or on the number of columns in a foreign key, or other limitations which may lead to this behavior of a FK not being recognized by Data Modeler? I have columns with more than 32 characters. I compared with a FK that is recognized by DM, but I can't find anything that indicates why this one is not recognized.

    I wonder if anyone else has foreign key constraints that are not recognized when comparing database and model?

    Thank you

    Robert

    Hi Robert,

    Thanks for the comments, I logged a bug.

    Philippe

  • Does "Synchronize Data Dictionary with Model" only work for imported models?

    When I import a model from the data dictionary (File -> Import -> Data Dictionary), then in the relational model the two buttons "Synchronize Data Dictionary with Model" and "Synchronize Model with Data Dictionary" work very well.

    But when I create a model from scratch and click the buttons "Synchronize Data Dictionary with Model" or "Synchronize Model with Data Dictionary", nothing happens.

    Does it work only for imported models?

    (Data Modeler EA 3.3)

    Hello

    Yes, Synchronize only works for objects that were originally imported (as Synchronize uses the information recorded during the import to determine which database connection and which database objects to compare against).

    If your model was not imported, you can achieve the same effect as follows:
    - Open the model in Data Modeler and also open the relevant physical model.
    - Do a Data Dictionary import, selecting the objects you want to compare with.

    After the import phase, this will display the Compare dialog box, showing the differences between the objects imported from the database and your model.

    Note that if you intend to generate DDL to update your database with the differences (as in "Synchronize Data Dictionary with Model"), you should select the "Swap Target Model" option in step 2 (Select Database Schema) of the Data Dictionary import wizard.

    David

  • Creating folders in the datastore view with PowerCLI

    Hello

    We're trying to automate some parts of a build script and we want to create a folder in the datastores view to move all the local disks into.  The only places I have been able to create a folder are the Cluster, Datacenter and VM views.  Is there a way to do this?

    Thank you

    Matt

    There is a hidden folder named "datastore".

    You can do

    $dsHome = Get-Folder -Name datastore
    New-Folder -Name MyFolder -Location $dsHome
    

    Note that there is one "datastore" folder per datacenter.

    If you have more than one datacenter in your vCenter, you need to indicate which datacenter's "datastore" folder you want.

    $dc = Get-Datacenter -Name DC
    $dsHome = Get-Folder -Name datastore -Location $dc
    New-Folder -Name MyFolder -Location $dsHome
    
  • WRT1900AC slows down data throughput with Brighthouse

    I recently bought a WRT1900AC.  It syncs right up with my Brighthouse modem, but I noticed that the maximum data rate I can get through wired connections is 7.2 Mbps.  When Brighthouse comes out to check the system, they read 22 Mbit/s on the modem.

    Any reason why the router would be reducing my throughput so much?

    Excellent advice.  It's the fix that I needed.

  • My organization wants to partner with Mozilla; I filled out the "Work with us" form several times but got no feedback

    I filled out this form https://www.mozilla.org/en-US/about/partnerships/ several times and I never received any response. Is there a phone number or email I can use to get in touch with Mozilla about a potential partnership?

    Hi Kombuta,

    Thanks for your post. I'm sorry that you haven't had a response to the form that you filled out to partner with us. Can you tell me when you completed the form, by chance? This will help me direct your form to the proper person.

    Otherwise, you can also send me a private message at [email protected] with the information and I will pass it along (I'm a Mozilla employee in customer management).

    Thank you and my apologies for the late reply, I appreciate your patience and your interest in Firefox OS!

    Kind regards
    Michelle Luna

  • Fill the format of a row based on the DATE in a cell

    Following on from this thread: fill format identical to another cell, I would like the test to check the date in my column 'A' to see if it is today, and if so, change the background of the whole row (or of selected cells in the same row). Below is the original script, which simply tests a cell for a numeric value; can anyone help adapt it to check for 'today'?

    set testCol to 4 -- column D

    set theKeys to {"1", "2", "3", "4"}

    -- have a key for each possible value in testCol, surrounded by quotation marks

    set theColors to {"yellow", "gray", "orange", "blue"}

    -- the order here is significant: yellow if the value is 1, gray if 2, etc. -- and have a color for each possible value in theKeys

    tell application "Numbers"

    tell document 1 to tell active sheet

    set t to the first table whose class of selection range is range

    repeat with i from 2 to the count of rows of t -- 2 to skip the header row

    tell its row i

    set v to the formatted value of its cell testCol

    if v is in theKeys then

    set the background color to item (v as integer) of theColors

    end if

    end tell

    end repeat

    end tell

    end tell

    Hi Rusto,

    SGIII's script should meet your needs with minor modifications (changing the values defined in the first three lines) and the addition of a separate test column which checks the date in your date column and produces a number saying which color to apply to each row.

    NOT tested. The script is written for Numbers 3 (the current version). It returns a syntax error on "... to tell active sheet" when I try it in Numbers 2.

    SG will probably be along soon with a version as well.

    Add a column to your table to serve as the condition column.

    1. Revise the first line of the script to match the number of the condition column.
    2. Revise the second line to read: set theKeys to {"1", "2"}
    3. Revise the next line of the script to read: set theColors to {"white", "red"}

    Put this formula in row 2 of the new test column: =IF(A2=TODAY(), 2, 1)

    Fill the formula down the condition column.

    Notes:

    - the script still checks for a numeric value. The numeric value (1 or 2) is provided by the formula: 1 if the row is not dated today, 2 if it is.

    - 'white' and a trigger value for 'white' are included so that the script will 'cancel' the 'red' change when it is run on a day after a previous 'today'.

    Kind regards

    Barry

  • Write data to an array (with a fixed size)

    Hi fellow users of LabVIEW,

    My problem is very basic, I'm sorry about that, I'm just a beginner

    I get continuous data from a function and I want to write it into an array with a fixed size. The fixed size is because I want to get the max/min of the latest x entries. I have tried this for an hour now with different approaches, but nothing seems to work, so I'm frustrated enough to post my problem here.

    I hope that the VI is understandable even with the labels in a foreign language. Basically, I want to make two LEDs light up if my data within a period is equal to or less than a reference value.

    I'm grateful for any helpful hints.

    Use min & max ptbypt with the length of the buffer desired story.

  • Wanted: simple example of a data class with a signal?

    Been looking for a tutorial on this in the examples and found nothing.  Maybe I missed it?

    I'm looking for an example of a simple data class that has a public method to set an observer, similar to this:

    class MyDataClass : QObject // I'm guessing I would subclass QObject
    {
      public:
      int getSomeValue();
    
      signals:
       void setObserver(const QObject* receiver, ...?);
    
      private:
      int m_someValue;
    };
    
    // ...and cpp file with setting the connection
    

    Is there a code example for this?  If not, could someone post an example class?

    I've seen samples in QML, but I'm looking for C++.

    Thank you!

    #include <QObject>
    
    class Settings : public QObject
    {
        Q_OBJECT
    public:
        explicit Settings(QObject *parent = 0);
        static Settings *sharedInstance();
    
        QString getTargetIPAddress();
        void setTargetIPAddress(const QString &ipAddress);
    signals:
        void targetIPAddressChanged();
    };
    
    ---
    
    #include <QSettings>
    
    #include "Settings.h"
    
    const char *targetIPAddressKey = "targetIPAddress";
    
    Settings::Settings(QObject *parent) :
        QObject(parent)
    {
    }
    
    Settings *Settings::sharedInstance()
    {
        static Settings settings;
        return &settings;
    }
    
    QString Settings::getTargetIPAddress()
    {
        QSettings settings;
        QVariant v = settings.value(targetIPAddressKey);
        if (v.isNull())
            return "192.168.0.1";
        return v.toString();
    }
    
    void Settings::setTargetIPAddress(const QString &ipAddress)
    {
        QSettings settings;
        settings.setValue(targetIPAddressKey, QVariant(ipAddress));
        settings.sync();
        emit targetIPAddressChanged();
    }
    
    ---
    
    Some other class:
    
    .h:
    protected slots:
        void targetAddressChanged();
    
    .cpp:
    [...constructor...]
        Settings *settings = Settings::sharedInstance();
        QObject::connect(settings, SIGNAL(targetIPAddressChanged()),
                     this, SLOT(targetAddressChanged()));
        targetAddressChanged(); // for initial setup
    [...]
    
    void UDPClient::targetAddressChanged()
    {
        Settings *settings = Settings::sharedInstance();
        // Do something with settings->getTargetIPAddress()
    }
    

    I hope this will help.

  • Developing with Transparent Data Encryption on Oracle 10g XE

    Currently I am developing an application that will require encrypted columns in some tables. I will recommend that the customer buy an Oracle database for the application, and I have installed Oracle 10g XE to begin development. I found that I can't create tables with TDE columns because I can't create a wallet. I searched the forums and found that a wallet manager is not available with Oracle XE.

    My plan was to develop the application and then provide creation scripts to the customer's DBA so that they can create the data tables in their Oracle database... Can I develop the application without transparent data encryption and then tell the DBA that it must be implemented in the production version of the application? Does the application need to know the wallet/TDE password to encrypt/decrypt the columns?

    Any ideas on how I could proceed with developing on Oracle XE for the customer without access to TDE?

    The T in TDE is transparent, so your application should not even need to be aware that any columns or storage are encrypted. Transparent data encryption is generally implemented in systems that were never designed to encrypt data, so in theory it should be 'perfectly safe' to develop without encryption and have the client encrypt the columns during installation.
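    For illustration only, a minimal sketch (table and column names are hypothetical) of the kind of statement the customer's DBA could run at installation time, once a wallet is configured on an edition that supports TDE:

    alter table customer_data modify (card_number encrypt using 'AES256');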

    Of course, when marketing people start talking about things that are 'perfectly safe', it is always a sign of coming danger. Although I have never heard of a case where encrypting a column caused a problem for an application, I would be very wary of developing in an environment different from production. This includes the exact version of the database (I assume the customer has installed the latest patchsets, so they run 10.2.0.4, for example) as well as the edition. If you decide to rely on everything going smoothly when you promote to a different version of a different edition of the database with a different schema definition, even though it normally would, you virtually guarantee that you will end up with a problem that will be difficult to solve.

    In your case, I would not use XE for development. It would be much safer to develop against Personal Edition. It's not free, but it is the Enterprise Edition database licensed to run on developer machines, and it costs much less than an Enterprise Edition license.

    Justin

  • How can I see the raw voltage values when acquiring data with the DAQ Assistant and a custom scale?

    I use a custom scale and the DAQ Assistant to acquire data from a USB DAQ device.  How can I display the raw voltage values at the same time?

    Thank you

    David

    Do not use the custom scale; instead, read the raw data using the DAQ Assistant and scale the data afterwards. Or scale the data back using the inverse of the custom scale.
