Extracting and merging data

Hello

Working on EBS version 11.5.10.2.


This query retrieves shipping data for 2007, but there are no delivery transactions available for August, September & October 2007. I now need to take the data for August, September & October 2008, replace the dates with 2007, and merge it with the rest of the data.

I need assistance to achieve this result.

SELECT products.SEGMENT1 AS item,
       price.OPERAND AS price,
       SUM(shipments.PRIMARY_QUANTITY * -1) AS volume,
       TRUNC(LAST_DAY(shipments.TRANSACTION_DATE)) AS month
FROM   inv.mtl_system_items_b products,
       inv.mtl_material_transactions shipments,
       qp_list_lines_v price
WHERE  shipments.INVENTORY_ITEM_ID = products.INVENTORY_ITEM_ID
AND    price.PRODUCT_ID = products.INVENTORY_ITEM_ID
AND    price.LIST_HEADER_ID = 7042
AND    products.ORGANIZATION_ID = 103
AND    products.ITEM_TYPE = 'FG'
AND    shipments.TRANSACTION_TYPE_ID = 33
AND    shipments.TRANSACTION_DATE BETWEEN ADD_MONTHS(SYSDATE, -36) AND SYSDATE
GROUP BY products.SEGMENT1, price.OPERAND, TRUNC(LAST_DAY(shipments.TRANSACTION_DATE))
ORDER BY TRUNC(LAST_DAY(shipments.TRANSACTION_DATE))


Thanks and regards

Try
SELECT products.SEGMENT1 AS item,
       price.OPERAND AS price,
       SUM(shipments.PRIMARY_QUANTITY * -1) AS volume,
       TRUNC(LAST_DAY(shipments.TRANSACTION_DATE)) AS month
FROM   inv.mtl_system_items_b products,
       inv.mtl_material_transactions shipments,
       qp_list_lines_v price
WHERE  shipments.INVENTORY_ITEM_ID = products.INVENTORY_ITEM_ID
AND    price.PRODUCT_ID = products.INVENTORY_ITEM_ID
AND    price.LIST_HEADER_ID = 7042
AND    products.ORGANIZATION_ID = 103
AND    products.ITEM_TYPE = 'FG'
AND    shipments.TRANSACTION_TYPE_ID = 33
AND    shipments.TRANSACTION_DATE BETWEEN ADD_MONTHS(SYSDATE, -36) AND SYSDATE
GROUP BY products.SEGMENT1, price.OPERAND, TRUNC(LAST_DAY(shipments.TRANSACTION_DATE))
UNION
SELECT products.SEGMENT1 AS item,
       price.OPERAND AS price,
       SUM(shipments.PRIMARY_QUANTITY * -1) AS volume,
       TRUNC(LAST_DAY(ADD_MONTHS(shipments.TRANSACTION_DATE, -24))) AS month
FROM   inv.mtl_system_items_b products,
       inv.mtl_material_transactions shipments,
       qp_list_lines_v price
WHERE  shipments.INVENTORY_ITEM_ID = products.INVENTORY_ITEM_ID
AND    price.PRODUCT_ID = products.INVENTORY_ITEM_ID
AND    price.LIST_HEADER_ID = 7042
AND    products.ORGANIZATION_ID = 103
AND    products.ITEM_TYPE = 'FG'
AND    shipments.TRANSACTION_TYPE_ID = 33
AND    shipments.TRANSACTION_DATE BETWEEN ADD_MONTHS(SYSDATE, -24) AND SYSDATE
GROUP BY products.SEGMENT1, price.OPERAND, TRUNC(LAST_DAY(ADD_MONTHS(shipments.TRANSACTION_DATE, -24)))
ORDER BY 4
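
If the goal is strictly to fill the three missing months (August, September and October 2007) with the corresponding 2008 shipments relabelled as 2007, the second branch of the UNION could also be limited to those months and shifted back 12 months instead of 24. A minimal sketch of just that branch, reusing the tables, aliases and filters above; the hard-coded date literals are illustrative only and assume an English NLS date language:

SELECT products.SEGMENT1 AS item,
       price.OPERAND AS price,
       SUM(shipments.PRIMARY_QUANTITY * -1) AS volume,
       TRUNC(LAST_DAY(ADD_MONTHS(shipments.TRANSACTION_DATE, -12))) AS month  -- relabel Aug-Oct 2008 as 2007
FROM   inv.mtl_system_items_b products,
       inv.mtl_material_transactions shipments,
       qp_list_lines_v price
WHERE  shipments.INVENTORY_ITEM_ID = products.INVENTORY_ITEM_ID
AND    price.PRODUCT_ID = products.INVENTORY_ITEM_ID
AND    price.LIST_HEADER_ID = 7042
AND    products.ORGANIZATION_ID = 103
AND    products.ITEM_TYPE = 'FG'
AND    shipments.TRANSACTION_TYPE_ID = 33
AND    shipments.TRANSACTION_DATE >= TO_DATE('01-AUG-2008', 'DD-MON-YYYY')  -- illustrative date window
AND    shipments.TRANSACTION_DATE <  TO_DATE('01-NOV-2008', 'DD-MON-YYYY')
GROUP BY products.SEGMENT1, price.OPERAND, TRUNC(LAST_DAY(ADD_MONTHS(shipments.TRANSACTION_DATE, -12)))

Appending this branch to the original 2007 query with UNION, as in the suggestion above, gives the merged result.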

Tags: Oracle Applications

Similar Questions

  • extract the date and create a new channel

    How can I take a date in a format like 12/01/2013 and create a new string with only the month and another with just the year?

    I want to be able to look at the month-on-month trend in the data.

    Hello Jcheese,

    In addition to Dia791's suggestion, you have other options in DIAdem 2012:

    (A) using the VBS commands YEAR and MONTH:

    Dim i
    
    Dim oCh_source : Set oCh_source = Data.Root.ChannelGroups(1).Channels("Time")
    Dim oCh_month  : Set oCh_month = Data.Root.ChannelGroups(1).Channels.Add("Month1",DataTypeFloat64)
    Dim oCh_year   : Set oCh_year = Data.Root.ChannelGroups(1).Channels.Add("Year1",DataTypeFloat64)
    
    For i = 1 To oCh_source.Size
      oCh_month(i) = month(oCh_source(i))
      oCh_year(i)  = year(oCh_source(i))
    Next
    

    (B) with the help of a DIAdem Calculate command:

    Call Calculate ("Ch(""[1]/Month"")= RTP(Ch(""[1]/Time""), ""M"")")
    Call Calculate ("Ch(""[1]/Year"")= RTP(Ch(""[1]/Time""), ""Y"")")
    

    Greetings

    Walter

  • extract table data to different locations

    Hello

    I collected data in the format:

    Temp: 25 Freq: 136 100 99.993 2998,581 0
    Temp: 25 Freq: 136 125,89 125.991 2997,196-0.004
    Temp: 25 Freq: 136 158,48 158.007 - 2995, 1 0.01
    Temp: 25 Freq: 136 199,52 200.002 2991, 905 - 0.019
    Temp: 25 Freq: 155 100 100.005 3000,866 0.003
    Temp: 25 Freq: 155 125,89 126.003 3000,086 0.000
    Temp: 25 Freq: 155 158,48 157.985 2996, 133 - 0.011
    Temp: 25 Freq: 155 199,52 200.018 2992,644-0,021
    Temp: 25 Freq: 174 100 100 3001,405 0.000
    Temp: 25 Freq: 174 125,89 126.016 2997, 996 - 0.010
    Temp: 25 Freq: 174 158,48 158.013 2996,371-0.015
    Temp: 25 Freq: 174 199,52 199.983 2992, 315 - 0,026
    Temp :-30 Freq: 136 100 99.993 2998,581 0
    Temp :-30 Freq: 136 125,89 125.991 2997,196-0.004
    Temp :-30 Freq: 136 158,48 158.007 - 2995, 1 0.01
    Temp :-30 Freq: 136 199,52 200.002 2991, 905 - 0.019
    Temp :-30 Freq: 155 100 100.005 3000,866 0.003
    Temp :-30 Freq: 155 125,89 126.003 3000,086 0.000
    Temp :-30 Freq: 155 158,48 157.985 2996, 133 - 0.011
    Temp :-30 Freq: 155 199,52 200.018 2992,644-0,021
    Temp :-30 Freq: 174 100 100 3001,405 0.000
    Temp :-30 Freq: 174 125,89 126.016 2997, 996 - 0.010
    Temp :-30 Freq: 174 158,48 158.013 2996,371-0.015
    Temp :-30 Freq: 174 199,52 199.983 2992, 315 - 0,026
    Temp: + 70 Freq: 136 100 99.993 2998,581 0
    Temp: + 70 Freq: 136 125,89 125.991 2997,196-0.004
    Temp: + 70 Freq: 136 158,48 158.007 - 2995, 1 0.01
    Temp: + 70 Freq: 136 199,52 200.002 2991, 905 - 0.019
    Temp: + 70 Freq: 155 100 100.005 3000,866 0.003
    Temp: + 70 Freq: 155 125,89 126.003 3000,086 0.000
    Temp: + 70 Freq: 155 158,48 157.985 2996, 133 - 0.011
    Temp: + 70 Freq: 155 199,52 200.018 2992,644-0,021
    Temp: + 70 Freq: 174 100 100 3001,405 0.000
    Temp: + 70 Freq: 174 125,89 126.016 2997, 996 - 0.010
    Temp: + 70 Freq: 174 158,48 158.013 2996,371-0.015
    Temp: + 70 Freq: 174 199,52 199.983 2992, 315 - 0,026

    I am able to extract specific data and plot a curve for the different Freqs (see the attached "extract file_for email.VI" pivot table).  I would now like help plotting with a fixed frequency but different temperatures (so there would be 3 graphs: 25, -30, +70 for the same freq value of 136).

    Thanks for your help and your time,

    hiNi.

    This rough code works for any number of temperatures and allows for different sizes for each table (say you measured 3 points at one temp and 6 points at another).  I strongly doubt this operation is necessary, but this is how the code works.

    Just drop it in and connect the tables loaded from your text file in place of the array controls I used (or you can wire it up and use it as a subVI).

  • XML data: extracting elements and attributes using XPath or another method?

    Hi experts,
    I am using Oracle 11g.
    I have a table that stores XML in a column (xml_col), with the structure below.
    Example:
    <measure id="abc">
      <data-elements>
        <data-element id="ab">
          <value>40</value>
        </data-element>
        <data-element id="cd">
          <value>8</value>
        </data-element>
        <data-element id="ef">
          <value>38</value>
        </data-element>
        <data-element id="gh">
          <value>32</value>
        </data-element>
      </data-elements>
    </measure>
    I have been trying to run XPath queries on this column to get the id attribute of the <data-element> nodes and the value of the <value> node inside each of them.

    My goal is to turn this into a table of:

    AB | 40
    CD | 8
    EF | 38
    GH | 32


    My mind is stuck on doing something like the query below, treating the extracted values as CSV strings and splitting them into rows with a hierarchical query. I can't get the XMLType converted to a string to make this work either.
    select str1,str2 from (
    select extract(xml_col, 'string-join(//@id, '','')') str1
    ,extract(xml_col, 'string-join(//value, '','')') str2
    from xml_temp_table)
    CONNECT BY LEVEL <= LENGTH (REGEXP_REPLACE (str1, '[^,]+')) + 1;
    But I get the following error:

    ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LPX-00607: Invalid Reference: "string-join".

    I am looking for a non-PL/SQL solution.

    Any suggestion is appreciated.
    Thank you

    Published by: chris001 on February 26, 2013 12:06

    Published by: chris001 on February 26, 2013 12:07

    chris001 wrote:
    My mind is stuck on doing something like the query below, treating the extracted values as CSV strings and splitting them into rows with a hierarchical query. I can't get the XMLType converted to a string to make this work either.

    Oh boy!

    This should relieve your pain ;)

    SQL> select x.*
      2  from xml_temp_table t
      3     , xmltable(
      4         '/measure/data-elements/data-element'
      5         passing t.xml_col
      6         columns element_id  varchar2(10) path '@id'
      7               , element_val number       path 'value'
      8       ) x ;
    
    ELEMENT_ID ELEMENT_VAL
    ---------- -----------
    ab                  40
    cd                   8
    ef                  38
    gh                  32
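
    For anyone who wants to reproduce the query above, a minimal, self-contained setup sketch; the table and column names simply follow this thread, and the XML is the sample from the question:

    -- Hypothetical setup matching the names used in this thread.
    CREATE TABLE xml_temp_table (xml_col XMLTYPE);

    INSERT INTO xml_temp_table (xml_col) VALUES (XMLTYPE(
      '<measure id="abc">
         <data-elements>
           <data-element id="ab"><value>40</value></data-element>
           <data-element id="cd"><value>8</value></data-element>
           <data-element id="ef"><value>38</value></data-element>
           <data-element id="gh"><value>32</value></data-element>
         </data-elements>
       </measure>'));
    COMMIT;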
     
    
  • Help merging data and creating mailing labels

    I have a CSV file with three fields ("Envelope Name", "Address", and "City ST Zip") and 245 records.

    I have labels that run seven to a page.

    I have the document designed. I am able to get the fields inserted, but... when I merge, I get the first page with seven labels for the first record, the second page with seven labels for the second record, page 3 with seven labels for the third record. You get the picture. I want seven different addresses on each page. So... 35 pages instead of 245.

    Is this possible, or should I find a way to do my layout in Word? *sigh*

    Everyone makes the same mistake; you do not set up seven (records) in your InDesign document.

    Create the first instance (upper-left)

    Go to the Data Merge panel and choose Multiple Records per page.

    It helps to have guides in place for the visual layout, and I try to keep the total area of my fields equal to a whole number of units: exactly 2 inches wide, for example.

    Data Merge with multiple records per page will need spacing values from record to record. Whole numbers and guides make it easier.

  • What is the difference between Extract Data and Extended Analytics

    Hello

    I'm new to HFM.
    What is the difference between Extract Data and Extended Analytics?

    Thank you
    Fri

    Extract Data will create a flat file of Entity Currency data only. To simplify, it is basically entry-level data, with the exception of the Entity and Account dimensions, where higher-level data can also be extracted.

    EA has the ability to output to a star schema or a flat file containing data at any level of any dimension.

  • As of version 31, why is there still no Thunderbird option to insert the date and time in the message you are writing?

    As of version 31, why is there still no Thunderbird option to QUICKLY insert the date and time in the message you are writing?

    Literally, I have seen this option badly needed, and repeatedly "promised", for three years now; even if there were only one or two formats that could be used, at least the option would be there.

    It seems the only way is to bind a macro and tool to Thunderbird and do it that way.

    Joe Rotello
    [email protected]
    Skype: joerotello

    If installing the TimeStamp add-on is unacceptable to you, see whether there is a related add-on you already use whose author you might manage to convince to add your function. One add-on, for example, has many variables that can be entered in the body of the message and that get automatically replaced with the appropriate data when you merge a message.

  • Firefox version 27 Actions menu error in Reporting Services. An error occurred while retrieving data.

    Hello, since I updated to Firefox 27.0.1 on Windows 7, I have a problem with Reporting Services on a SharePoint site. It is a SharePoint 2010 site with SQL Server Reporting Services 2012 in SharePoint integrated mode. I was previously on Firefox 26 and did not have this problem.

    When a report is open and I use the Actions link on the Reporting Services toolbar, I get the following error message.

    An error occurred while retrieving data. Please refresh the page and try again.

    I tried updating to the Firefox 28 beta, but the same error occurs. I see that someone else is having the same problem here: http://SharePoint-community.NET/forum/topics/reporting-service-and-Firefox-27

    Any help would be appreciated. Thank you!

    Ryan

    Firefox version 28.0 has corrected this problem. Thank you!

  • Year (date) and Month (date) "YYMM" in number.

    In column 'G' I use the formula =MONTH(date)

    In column 'H' I use the formula =YEAR(date)

    Is it possible to use the formulas above to create a "YYMM" value in column 'I'?

    Columns 'G' and 'H' will be deleted if column 'I' works.

    Hi LV.

    Here is an example using Numbers '09. The first method (in column C) is feasible in Numbers 3; the second may not be.

    Columns C and D are left in the default auto format (and alignment).

    Column C, automatically aligned left, extracts the last two digits of the formatted YEAR() number, then extracts and concatenates the last two digits of a string built from two zeros & the one- or two-digit MONTH() number. The result is a four-digit text string, as shown.

    C2, filled down: =RIGHT(YEAR(B),2)&RIGHT("00"&MONTH(B),2)

    Column D simply copies the Date and Time value from B and displays it using a custom format, showing the two-digit year & the two-digit month number.

    D2 and filled down: = B

    Custom format for the cells D2 - D8:

    Kind regards

    Barry

  • DV7-7333cl: how to extract usable data from a bad drive onto the new HD using a 22-pin USB-to-SATA adapter

    Greetings HP Forum,

    Recently, I replaced a bad hard drive in my laptop. I now need a step-by-step procedure to extract usable data from the bad drive onto the new HD using a 22-pin USB-to-SATA adapter.

    NOTE: I can't trust all of the data, especially since some of the software downloaded onto the replaced disk may have been altered. Should I first download Internet antivirus to protect my new hard drive? I'm not an expert, or even close to it, when working on the back of a laptop, so I'll need step-by-step instructions for how to download and partition the recoverable data and software, etc.

    Thanks in advance for your help... I greatly appreciate it!

    See you soon!

    Wes

    It is not complex. Attach the drive to the adapter and connect it to any other computer with good antivirus and antispyware installed. I use Malwarebytes and Avast. When you connect the adapter with the drive attached to the USB port, the drive will appear and be assigned a letter, maybe E:\ or F:\ or something else. Immediately scan it with the antivirus and antispyware. Quarantine or delete any virus or malware it finds. Then it's just a matter of navigating the disk and copying and pasting the contents into a directory on the host computer set up for this purpose. Obviously, you can copy Word documents, photos, and music files, but you cannot copy applications like Microsoft Word, iTunes, or Photoshop.  You may need to take ownership of the files on the old hard drive, but Windows will guide you through that. Not sure what else I can answer.

  • How to extract the gain and offset from an AnalogWaveform?

    Hello

    On a similar note to the posts at https://forums.ni.com/t5/High-Speed-Digitizers/How-are-offset-and-gain-in-the-niScope-fetch-function... and http://forums.ni.com/t5/High-Speed-Digitizers/Where-to-find-gain-and-offset-of-USB-5132-running-in-C... , I am trying to extract the gain and offset of our USB-5132 digitizer from its output data file.  The file contains the 'Fetch' event argument e.Result and is written to file in C# as follows:

    binaryFormatter.Serialize(fileStream, e.Result);

    I want to read in the data file and determine the gain and offset of the digitizer.  I don't think I can use 'Fetch' to read the file (is this correct?).  I'm reading the waveform data like this:

    AnalogWaveform[] records = binaryFormatter.Deserialize(fileStream) as AnalogWaveform[];

    I think there should be a way to get the gain and offset, because this routine can apparently do it:

    private AnalogWaveform[] ScaleRecords(AnalogWaveform[] records);

    Also, I tried to get the ScopeWaveformInfo like this:

    ScopeWaveformInfo[] info = binaryFormatter.Deserialize(fileStream) as ScopeWaveformInfo[];

    but I got a null result back.  Could this still be the right approach?

    Is there a way to get the gain and offset of the digitizer without using Fetch?  If not, is it possible to use Fetch to read the data from a file, and then extract the gain and offset the usual way (from Fetch)?

    Thanks for any thoughts on this.

    Kind regards

    Penny

    I found a solution for this: get the gain and offset from a fetch using a different set of data.  The calibration seems stable enough to do this.

    Thank you

    Penny

  • Extracting data from an MS SQL .bak file

    Hi all

    I received an MS SQL .bak file and need to extract the data from it. What should I do to achieve this?

    Also, what is a .bak file? Is it a backup of a database file? I remember reading somewhere that database backups come in several different methods (for example, differential backup and full backup); will that affect the restore?

    Thank you

    Lee

    Hi Lee

    Your best resource for information is the forum dedicated to supporting SQL Server.

    SQL Server category:

    http://social.msdn.Microsoft.com/forums/en-us/category/SQLServer

    Regards
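
    For what it's worth, a .bak file is a SQL Server database backup (full, differential, or transaction log; a differential can only be restored on top of its matching full backup). As a rough sketch of the usual approach, the backup can be restored as a new database and then queried; the paths, database name, and logical file names below are made up and should be replaced with the ones reported by RESTORE FILELISTONLY:

    -- Hypothetical file paths and names; adjust to your server.
    -- 1) Inspect the backup to find its logical data/log file names.
    RESTORE FILELISTONLY FROM DISK = N'C:\backups\received.bak';

    -- 2) Restore it as a new database, relocating the physical files.
    RESTORE DATABASE ReceivedDb
    FROM DISK = N'C:\backups\received.bak'
    WITH MOVE N'LogicalDataName' TO N'C:\data\ReceivedDb.mdf',
         MOVE N'LogicalLogName'  TO N'C:\data\ReceivedDb_log.ldf',
         RECOVERY;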

  • Import script and merging of multiple Excel worksheets

    I am using a script written by NI support to import and merge multiple Excel worksheets. It works when importing data in the format the script was designed for. The data format has now changed, with a new column added on the right. This new column is only on the first worksheet, but the script still imports the rest of the data correctly if I move the other column across.

    How can I import the data from Excel and merge it while keeping everything in sync?

    Please find the attached script. Thank you very much.


  • The most effective way to log and read data simultaneously (DAQmx, TDMS) at high data rates

    Hello
     
    I want to acquire data from several cDAQ modules using several chassis at high data rates (100 k samples per second, if possible). Let's say the measurement time is 10 minutes and we have a large number of channels (40, for example). The measured data is written to a TDMS file. I guess memory or HARD disk speed will be the limit. For the user, there must be a way to view a selection of channels in a graph during the measurement.

    My question: what is the best and most effective way to log and read data at the same time?

    First of all, I use a producer-consumer architecture, and I don't want to write and display the data in the same loop. I see two possibilities:

    [1] Use 'DAQmx Configure Logging.vi' with the 'Log and Read' operation to write the data to a TDMS file. To display the data in a second loop, I would create a DVR (data value reference) of the read samples and 'send' the DVR to the second loop, where the data will be displayed in a graph. This method has the disadvantage that the data of all channels is copied into memory. Correct me if I'm wrong.

    [2] Use 'DAQmx Configure Logging.vi', but only with the 'Log' operation, to write the data to a TDMS file. To view the selected data, I would read a number of samples from the TDMS file in the second loop (while I am still writing to the TDMS file). In this case, I only copy the data of the selected channels (not all of them), but more HARD drive accesses will be needed.

    What is the most effective and efficient solution in this case?

    Are there other ways to log and read data at high sampling rates?

    Thank you for your help.

    You say that the measurement time is 10 minutes. If you have 40 channels and you sample all of them at 100 kHz, that is quite a lot of values.

    In this case, I always try to approach it from the conditions of use. If a measurement is only 10 minutes, I would just log all the data to TDMS and create a graphing module that could sit in the same consumer loop where you log the data. You can always work on the big raw data files offline afterwards, extracting all the information you need (have a look at the product called NI DIAdem: http://www.ni.com/diadem/)

    The main question is what the user needs to see in the graph (or perhaps a chart can be useful too). Let's say the graph is 1024 pixels wide. It makes no sense to show more than 1024 data points, yes? Every second you will produce 100 k data points per channel. What is the useful information your user should see? It depends on the application. In similar cases, I usually use some kind of data-reduction method: a moving average (Point By Point Mean.vi, for example) with an interval size of 100. This way you get 1,000 data points out of the 100 k per channel every second. If you feed your graph every second with these averaged values, it will store 1024 data points (the default) per channel (plot), which is a little more than 10 minutes, so the user will see the entire measurement.

    So it depends on the frequency at which you send data to the consumer. For example, you collect 1024 values per producer iteration and send them to the consumer. There you can do a normal mean calculation or a rolling one (according to your needs) and plot it on a chart. This way your chart will display only the values of the last 10 seconds or so...

    Once I programmed a module where I used a chart instead of a graph, and the user could specify the absolute timestamp interval to be plotted. If the data size is larger than the chart size in pixels, the module performs an averaging calculation in order to reduce the number of data points. Of course, if you need to see the raw data, you can specify a small interval. It all depends on how you program zoom functions, etc... In my case I had a rate of 1 Hz, so I just kept all the data in RAM, limiting the arrays to hold 24 hours of data, so that technicians could monitor the system. In your case, given the enormous amount of data, only a file read/write approach can work if you really need access to all of the RAW data on the fly. But I hope the moving-average values will be enough?

  • XY graph, initialize multiple groups of data and update a single set of data

    Hello

    I have developed a VI to control the XY coordinates of a platform using two motors. More or less a quick-and-dirty XY plotter.

    The detail I could use help with is the following:

    1. I have 469 (or ~400-ish, there are two modes) discrete points to be initialized/loaded into an XY graph, such that a cursor on the XY graph can snap to the discrete points. That part works.

    2. I have to keep track of the current XY position. That also works.

    3. I have to keep track of the former XY positions, from which I was able to extract the data correctly. This is done via a switch.

    4. Almost the whole VI is one giant loop, although it is sequenced by passing from one case structure to another until reaching the final dummy global.

    Now, the signals are sent to the motors through a USB digital signal generator; I don't think the specs are necessary here. Essentially I needed to bit-mask each individual movement, so the 'motor movement' part is embedded in the giant while loop. This means that whenever I add a new feature to my VI, the motor movements slow down, and every time the operating system on the computer decides to run an automatic update, the drivers slow down. I don't have the luxury of multi-threading or a microcontroller right now.

    At this point, my motors are terribly slow, and I am trying to find a way to make them run faster. I am fairly sure the bottleneck is actually the processing for my XY graph data.

    My XY graph manages 3 sets of data:

    a. 469 discrete points (a 2 x 469 array), or b. ~400-ish discrete points (a 2 x ~400 array)

    c. the current (x, y) position

    d. the "saved/captured" positions (a 2 x N array, growing up to 469)

    a, b, and c are all inside the while loop. A switch determines whether 'a' or 'b' is used.

    The arrays are bundled and then sent to the XY graph inside the while loop. I need the graph to be interactive with the cursor and the data.

    Now, here is how I would like things to run:

    1. Either a or b is loaded into the XY graph (selected by a switch).

    2. c is updated constantly.

    3. d is updated whenever data is captured (via a switch).

    Basically, I need a and/or b to be plotted only ONCE at initialization (and when I decide to switch from one to the other) and then remain constant. In the same way, d updates only when a switch has been pressed.

    I have not been able to find resources on plotting data to the XY graph once and then selectively drawing live data on top of it.

    I have attached an outdated version of my front panel as a reference. I would welcome suggestions. Thank you in advance.

    -JLS

