Data loading performance issues

Hi all:
I have two load rules that load from a relational database through the SQL interface, about 57 M records each. The cube is reset every day and the data is reloaded from the source. It is a BSO cube, 11.1.2.2, on 64-bit Windows. The load time gets longer every day. I have tried the following:
1. Rearranging the columns of the SQL statement to match the outline order.
2. Sorting the data source.
3. Adding DLSINGLETHREADPERSTAGE FALSE, DLTHREADSWRITE 16 and DLTHREADSPREPARE 16 to the essbase.cfg file, since there are 16 CPUs on the server (sketched below).

None of these improved performance much. What else can I try? Would changing the data cache help? Also, increasing the number of parallel loads does not appear to improve performance; is that normal?
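For reference, a minimal sketch of what the essbase.cfg entries from item 3 can look like. This is only an illustration, not a recommendation of values; an application and database name can optionally be added to scope each setting, Essbase must be restarted for .cfg changes to take effect, and the data cache mentioned above is a per-database setting (set via MaxL or EAS) rather than an essbase.cfg entry:

; essbase.cfg - parallel data load settings described in item 3 above
; (optionally prefix each value with an application and database name to scope it)
DLSINGLETHREADPERSTAGE FALSE
DLTHREADSPREPARE 16
DLTHREADSWRITE 16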

Thank you

Are you sure the time is being spent on the Essbase side (i.e. how long does your query take to return results when run outside of Essbase)?

There is almost always a very noticeable performance gain from sorting the input data so that each block is touched only once. That means not just sorting your data, but sorting it by the sparse dimensions first. When you say you have 'sorted' the data, what exactly did you do?
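As an illustration, a minimal sketch of that sort in the source SQL; the table and column names are placeholders, where sparse_dim1 and sparse_dim2 stand for sparse dimensions and dense_dim for a dense one. The only point is that the ORDER BY lists the sparse dimensions first:

-- Sketch only: placeholder names; order the extract by the sparse dimensions
-- so Essbase touches each block once during the load.
SELECT sparse_dim1, sparse_dim2, dense_dim, data_value
  FROM source_fact_view
 ORDER BY sparse_dim1, sparse_dim2;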

Tags: Business Intelligence

Similar Questions

  • Date performance issue

    Hi gurus,

    I am using OBIEE 11.1.6.8. One of my reports has a performance issue; when I dug in, I found that the date filter is not applied in the SQL generated and sent to the database, which causes a table scan. The strange thing is that the report still displays data based on the date range filter. It only occurs with the date dimension; all other dimensions work properly. I'm not sure what is missing.

    Thanks in advance.

    Regards,

    Mohammed.

    I found the problem: it was in the DB features. I clicked Query DBMS to pull the DB feature values, and it works now.

  • Satellite P200-1K9 performance issue

    I hope I'm posting this in the right area...

    Hello

    I have a Satellite P200-1K9 with the following specifications:

    Intel Core 2 Duo T7700 CPU @ 2.40 GHz
    3 GB RAM
    32-bit
    Windows Experience Index: 4.8

    ATI Mobility Radeon HD 2600 graphics card and an
    HD sound card...

    My problem is that performance has slowed down since I bought my laptop about 7-8 months ago. Of course, since then I had filled my hard drive (partitions) with media, and I understand that this comes at the expense of system performance. However, since realising this I have freed up a lot of disk space. I have 40 GB free on my C: and 90 GB free on my D:. The total, shared capacity is 250 GB (125/125).

    Basically, while being largely ignorant of technical matters, laptops and technology altogether, the way I noticed the system's inability to "return" to its original factory performance (which, I might add, I was very impressed by) is in games. Football Manager 2009, for example, runs very slowly. I had a peek in Task Manager, because I play the game in a window, and CPU usage sits at 100%, although I'm sure the individual tasks and applications don't add up to that (maybe there is a hidden program hogging my CPU?). Call of Duty 4, which ran on medium-high graphics settings when the laptop was new, now struggles on the lowest graphics and sound settings, without there being any problem with internet connectivity. In addition (for reference, I'm not a cutting-edge gamer), Battlefield 2, which is about 3-4 years old, also runs very slowly. It too ran extremely fast, on the highest graphics settings, when the laptop was brand new.

    Basically, what I want to do is return the system to its original performance. It wasn't even this slow when the two drives were full of games and media; now, despite removing many of them, the system runs like a sloth.

    The things I have done to try to solve the performance problems are: disk defrag, disk clean-up, error checking, a virus scan (with Kaspersky), a spyware scan (with PC Tools Spyware Doctor), deleting old files with CCleaner and erasing undeletable files with a program called Eraser... as well as freeing up about 50% of the disk space.

    Added to that, I've noticed performance is much better when I first turn on the laptop, before it has had a chance to get hot. I understand some of this is normal, and it also did this when the laptop was new and running at optimum performance, but perhaps I have an overheating problem? I would tell you what the temperature is, but I just don't know it.

    I am reluctant to restore the system to its factory settings, and if necessary I'll take it to a computer store to see if there is something they can do. But in the meantime I'd like to hear any advice on the general issues I am facing, and possible solutions to my problem.

    Thank you very much.

    The problem with Windows is that it becomes slower and slower the longer you use it.

    When devices and programs are installed, they increase the size of the registry, activate services, add resident DLLs and drivers, and slow down the file system by adding thousands of files. The operating system can also get damaged over time due to power loss and poorly written software.
    Even if you use CCleaner and other utilities and uninstall several programs, it is never as fast as a fresh installation of Windows. Uninstalling a program rarely removes it 100%.

    The antivirus can have a huge effect on performance; that's why the system runs great when the factory image is installed (which usually has no antivirus).

    So basically, if you want the system to run like new again, you must back up your data and perform the recovery. I do this myself once a year to restore performance.

  • HP Elitebook 8570p - Windows XP USB very slow and the transfer rate performance issues.

    Hello. I was wondering if anyone else has seen performance issues with the HP EliteBook 8570p and Windows XP Pro SP3.

    I get a VERY slow boot and generally poor performance (long waits when opening files and programs).

    I have run burn-in tests, which show that everything is fine.

    USB transfer rates are VERY slow (I tried all the ports with a Seagate FreeAgent USB drive, which works fine on other laptops etc.); for example, a single 7 GB file took 30 minutes to transfer from the USB HD to the desktop. It takes only 3 minutes to do the same on another laptop.

    I think there are driver problems, as Windows 7 works very well and does boot from a bootable USB.

    USB transfer rates are also good when starting from a Windows boot environment.

    I installed the HP SoftPaq Download Manager, and all the drivers are up to date.

    Hello:

    I am thinking outside the box here... Select the Intel(R) 7 Series Chipset Family SATA AHCI Controller instead.

    If that doesn't work, I'll raise the white flag.

    Paul

  • Data Load Wizard

    In Oracle APEX, is it feasible to change the default functionality as follows?

    1. Can we convert the Data Load Wizard from insert-only to insert/update functionality based on the source table?

    2. Possibility of a validation: if "count of records < target table" is true, the user should get a choice to continue with the insert or cancel the data load process.

    I am using APEX 5.0.

    Please advise on these 2 points.

    Hi Sudhir,

    I'll answer your questions below:

    (1) Yes, the data load can insert and update.

    That is the default behavior; if you choose the right columns for detecting duplicate records, you will be able to see which records are new and which are updates.

    (2) That will be a little tricky, but you can get there by using the underlying collections. The Data Load Wizard uses several collections to perform its operations, and in the first step we load all of the user's records into the "CLOB_CONTENT" collection. By checking this against the number of records in the underlying table, you can easily add a new validation before moving from step 1 to step 2.
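    A minimal sketch of such a validation (e.g. a PL/SQL Function Body Returning Boolean on the wizard page), assuming the staged rows sit one per member in the CLOB_CONTENT collection as described above; MY_TARGET_TABLE is a placeholder name:

    declare
      l_staged   number;
      l_existing number;
    begin
      -- rows staged by the Data Load Wizard (one collection member per record, per the note above)
      select count(*)
        into l_staged
        from apex_collections
       where collection_name = 'CLOB_CONTENT';

      -- rows already in the target table (placeholder name)
      select count(*)
        into l_existing
        from my_target_table;

      -- fail the validation when fewer rows are staged than already exist,
      -- so the user has to confirm before the load continues
      return l_staged >= l_existing;
    end;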

    Kind regards

    Patrick

  • Data load error 10415 when exporting data to an EPMA Essbase app

    Hi Experts,

    Can someone help me solve this issue I am facing in FDM 11.1.2.3?

    I'm trying to export data from FDM to an EPMA Essbase application.

    Import and Validate worked fine, but when I click Export it fails.

    I am getting the error below:

    Failed to load data

    10415 - Data load errors

    Essbase API procedure: [EssImport] threw code: 1003029 - 1003029

    Encountered formatting error in spreadsheet file (C:\Oracle\Middleware\User_Projects\epmsystem1\EssbaseServer\essbaseserver1\app\Volv

    My dimension members are:

    1. Account
    2. Entity
    3. Scenario
    4. Year
    5. Period
    6. Regions
    7. Products
    8. Acquisitions
    9. Servicesline
    10. Functionalunit

    When I click the Export button, it fails.

    One more thing I checked: the .DAT file, but it is empty.

    Thanks in advance

    Hello

    Even I was facing a similar problem.

    In my case I am loading data to a classic Planning application. When all the dimension members are ignored in the mapping for the combination you are trying to load, and you click Export, you will get the same message and an empty .DAT file is created.

    You can check this

    Thank you

    Praveen

  • VCO performance issue?

    We have upgraded a standalone vCO server to version 5.5.  Expanding the data center and host folder objects when using the vCenter 5.5 plugin can take up to 10 minutes.

    Upgrade process:

    1. Exported the packages and configuration

    2. Stopped the services and uninstalled the application

    3. Took a backup of the MS SQL database and stood up a new database in its place

    4. Installed the new 5.5 version of the application

    5. Imported the configuration and the packages into the application

    Has anyone seen this behavior, or better yet, does anyone have a fix for it?  5.5 seems to be significantly slower than version 5.1, which we were on before.

    System specifications (running as a VM):

    Win 2008 R2

    2 x i7 CPUs

    12 GB of RAM

    The Java heap size has been increased, as has:

    maxHttpHeaderSize="163840"

    There is a new technical preview version of the vCenter plugin available here:  Technical preview version of the VMware vCenter Orchestrator plug-in for VMware vSphere 5.5.x

    This version fixes some performance issues related to the vCenter inventory and to the plugin accepting vCenter objects as workflow parameters...

    Hope it helps.

  • Accrual Reconciliation Load Run report performance problem

    We have significant performance problems when running the Accrual Reconciliation Load Run report. We had to cancel it after it had run for a day. Any ideas on how to solve this?

    We had a similar issue. How long this report runs depends on the input parameters. Remember, your first run of this report will take a lot of time, and subsequent executions will be much shorter.

    But we had to apply the fixes mentioned in the MOS article below to solve the performance issue.
    Accrual Reconciliation Load Run has slow performance [ID 1490578.1]

    Thank you
    MIA...

  • FDM event scripts fired twice during data loads

    Here's an interesting one. I added the following script to three different events (one at a time, making sure only one of them is in place at any time), to clear data before loading to Essbase:


    Event script content:
    ' Declare local variables
    Dim objShell
    Dim strCMD
    ' Call MaxL script to perform the data clear calculation
    Set objShell = CreateObject("WScript.Shell")
    strCMD = "D:\Oracle\Middleware\EPMSystem11R1\products\Essbase\EssbaseClient\bin\startMAXL.cmd D:\Test.mxl"
    API.DataWindow.Utilities.mShellAndWait strCMD, 0


    MaxL script:
    login * identified by * on *;
    execute calculation 'FIX("Member1","Member2") CLEARDATA "Member3"; ENDFIX' on *.***;
    exit;




    However, it seems the clear is performed twice, both before and after the data is loaded to Essbase. This has been verified at every step by checking the Essbase application log:

    With no event script:
    - No Essbase data clear in the application log

    After adding the script to the "BefExportToDat" event:
    - The script is executed once when you click Export in the FDM Web Client (before the "Target System Load" modal popup is displayed). Entries are visible in the Essbase application log.
    - The script is then run a second time when you click the OK button in the "Target System Load" modal popup. Entries are visible in the Essbase application log.

    After adding the script to the "AftExportToDat" event:
    - The script is executed once when you click Export in the FDM Web Client (before the "Target System Load" modal popup is displayed). Entries are visible in the Essbase application log.
    - The script is then run a second time when you click the OK button in the "Target System Load" modal popup. Entries are visible in the Essbase application log.

    After adding the script to the "BefLoad" event:
    - The script does NOT run when you click Export in the FDM Web Client (before the "Target System Load" modal popup is displayed).
    - The script is run AFTER the data is loaded to Essbase, when the OK button is clicked in the "Target System Load" modal popup. Entries are visible in the Essbase application log.

    Some notes on the above:
    1. "BefExportToDat" and "AftExportToDat" are both executed twice, before and after the "Target System Load" modal popup. :-(
    2. "BefLoad" is executed AFTER the data is loaded to Essbase. :-( :-(

    Can anyone suggest how we could run an Essbase clear before the data is loaded, and not after we have loaded the up-to-date data? And perhaps why the event scripts above appear to fire twice? There doesn't seem to be any logic to it!


    BefExportToDat - Essbase application log entries:
    [Wed May 16 16:19:51 2012]Local/Monthly/Monthly/admin@Native Directory/140095859451648/Info(1013091)
    Received Command [Calculate] from user [admin@Native Directory]

    [Wed May 16 16:19:51 2012]Local/Monthly/Monthly/admin@Native Directory/140095859451648/Info(1013162)
    Received Command [Calculate] from user [admin@Native Directory]

    [Wed May 16 16:19:51 2012]Local/Monthly/Monthly/admin@Native Directory/140095859451648/Info(1012555)
    Clearing data from [Member3] partition with fixed members [Period(Member1); Scenario(Member2)]
    ...

    [Wed May 16 16:20:12 2012]Local/Monthly/Monthly/admin@Native Directory/140095932016384/Info(1003037)
    Data Load Updated [98] cells

    [Wed May 16 16:20:12 2012]Local/Monthly/Monthly/admin@Native Directory/140095932016384/Info(1003024)
    Data Load Elapsed Time : [0.52] seconds
    ...

    [Wed May 16 16:20:12 2012]Local/Monthly/Monthly/admin@Native Directory/140095930963712/Info(1013091)
    Received Command [Calculate] from user [admin@Native Directory]

    [Wed May 16 16:20:12 2012]Local/Monthly/Monthly/admin@Native Directory/140095930963712/Info(1013162)
    Received Command [Calculate] from user [admin@Native Directory]

    [Wed May 16 16:20:12 2012]Local/Monthly/Monthly/admin@Native Directory/140095930963712/Info(1012555)
    Clearing data from [Member3] partition with fixed members [Period(Member1); Scenario(Member2)]


    AftExportToDat - Essbase application log entries:
    [Wed May 16 16:21:32 2012]Local/Monthly/Monthly/admin@Native Directory/140095933069056/Info(1013091)
    Received Command [Calculate] from user [admin@Native Directory]

    [Wed May 16 16:21:32 2012]Local/Monthly/Monthly/admin@Native Directory/140095933069056/Info(1013162)
    Received Command [Calculate] from user [admin@Native Directory]

    [Wed May 16 16:21:32 2012]Local/Monthly/Monthly/admin@Native Directory/140095933069056/Info(1012555)
    Clearing data from [Member3] partition with fixed members [Period(Member1); Scenario(Member2)]
    ...

    [Wed May 16 16:21:47 2012]Local/Monthly/Monthly/admin@Native Directory/140095930963712/Info(1003037)
    Data Load Updated [98] cells

    [Wed May 16 16:21:47 2012]Local/Monthly/Monthly/admin@Native Directory/140095930963712/Info(1003024)
    Data Load Elapsed Time : [0.52] seconds
    ...

    [Wed May 16 16:21:47 2012]Local/Monthly/Monthly/admin@Native Directory/140095928858368/Info(1013091)
    Received Command [Calculate] from user [admin@Native Directory]

    [Wed May 16 16:21:47 2012]Local/Monthly/Monthly/admin@Native Directory/140095928858368/Info(1013162)
    Received Command [Calculate] from user [admin@Native Directory]

    [Wed May 16 16:21:47 2012]Local/Monthly/Monthly/admin@Native Directory/140095928858368/Info(1012555)
    Clearing data from [Member3] partition with fixed members [Period(Member1); Scenario(Member2)]


    BefLoad - Essbase application log entries:
    [Wed May 16 16:23:43 2012]Local/Monthly/Monthly/admin@Native Directory/140095932016384/Info(1013091)
    Received Command [Calculate] from user [admin@Native Directory]

    [Wed May 16 16:23:43 2012]Local/Monthly/Monthly/admin@Native Directory/140095932016384/Info(1013162)
    Received Command [Calculate] from user [admin@Native Directory]

    [Wed May 16 16:23:43 2012]Local/Monthly/Monthly/admin@Native Directory/140095932016384/Info(1012555)
    Clearing data from [Member3] partition with fixed members [Period(Member1); Scenario(Member2)]
    ...

    [Wed May 16 16:23:44 2012]Local/Monthly/Monthly/admin@Native Directory/140095929911040/Info(1003037)
    Data Load Updated [98] cells

    [Wed May 16 16:23:44 2012]Local/Monthly/Monthly/admin@Native Directory/140095929911040/Info(1003024)
    Data Load Elapsed Time : [0.52] seconds
    ...

    [Wed May 16 16:23:45 2012]Local/Monthly/Monthly/admin@Native Directory/140095860504320/Info(1013091)
    Received Command [Calculate] from user [admin@Native Directory]

    [Wed May 16 16:23:45 2012]Local/Monthly/Monthly/admin@Native Directory/140095860504320/Info(1013162)
    Received Command [Calculate] from user [admin@Native Directory]

    [Wed May 16 16:23:45 2012]Local/Monthly/Monthly/admin@Native Directory/140095860504320/Info(1012555)
    Clearing data from [Member3] partition with fixed members [Period(Member1); Scenario(Member2)]

    James, the export and load event scripts will fire four times, once for each type of file: the .DAT file (the main TB file), -A.DAT (the log file), -B.DAT and -C.DAT.

    To work around this, so that it only runs during the load of the main TB file, add the following (or something similar) at the beginning of your event scripts. This assumes that strFile is in the subroutine's parameter list:

    Select Case LCase(Right(strFile,6))
         Case "-a.dat", "-b.dat", "-c.dat" Exit Sub
    End Select
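
    Putting that together with the shell call from the original post, a sketch of what the top of a guarded event script body could look like (the strFile parameter is assumed to be available, as noted above; the paths are the ones from the post):

    ' Exit for the auxiliary -A/-B/-C files so the MaxL clear runs only once,
    ' for the main TB .DAT file
    Select Case LCase(Right(strFile, 6))
        Case "-a.dat", "-b.dat", "-c.dat"
            Exit Sub
    End Select

    ' Shell out to the MaxL clear script (same call as in the original post)
    Dim objShell, strCMD
    Set objShell = CreateObject("WScript.Shell")
    strCMD = "D:\Oracle\Middleware\EPMSystem11R1\products\Essbase\EssbaseClient\bin\startMAXL.cmd D:\Test.mxl"
    API.DataWindow.Utilities.mShellAndWait strCMD, 0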
    
  • Performance issues with Photoshop CS6 64-Bit

    Hello-

    Issue at hand: in recent weeks I have noticed significant performance problems since the last PS CS6 update through the Adobe Application Manager, ranging from unexpected shutdowns to my computer slowing to a crawl (literally, my cursor seems to crawl across my screens). I'm curious whether anyone else has seen these issues, or if there is a solution I have not yet tried. Here is a list of actions that give rise to these performance issues - there are probably more that I have not noted, out of frustration or because they did not occur repeatedly:

    • Opening files - results in a hanging process, takes 3-10 seconds to resolve
    • Pasting from the clipboard - results in a hanging process, takes 3-10 seconds to resolve
    • Saving files - takes 3-10 seconds to open the dialog box, then another 3-10 seconds to return to the normal window (saving a compressed PNG image)
    • The eyedropper tool - crashes Photoshop to the desktop, or takes 5-15 seconds to load
    • Trying to navigate any menu - crashes Photoshop to the desktop, or takes 5-15 seconds to load

    Attempts I have made to solve this issue, all of which failed:

    • Uninstalled all fonts I have added since the last update (a pain in the ***, thanks to the Windows Explorer glitch)
    • Uninstalled and reinstalled the application
    • Used the 32-bit edition
    • Changed the process priority to Above Normal
    • Confirmed the process affinity covers all available processor cores
    • Changed Photoshop's performance configuration options
      • 61% of memory is available for Photoshop to use (8969 MB)
      • History States: 20; Cache Levels: 6; Cache tile size: 1024K
      • Scratch disks: on the production SSD, ~10 GB of available space
      • Dedicated graphics processor is selected (2 x nVidia cards in SLI)

    System information:

    • Intel i7 2600K @ 3.40 GHz
    • 16 GB of Dual Channel DDR3 RAM
    • 2 x nVidia GeForce GTS 450 cards, 1 GB each
    • Windows 7 Professional 64 bit
    • Adobe Creative Cloud

    This problem is costing me working time every day, and I'm about to start looking for alternatives and cancel my subscription if I can't get it resolved.

    Have you tried resetting your preferences, to set the performance parameters back to their defaults, and restarting Photoshop? http://blogs.adobe.com/crawlspace/2012/07/photoshop-basic-troubleshooting-steps-to-fix-most-issues.html#Preferences

  • OPA webservice performance issues

    Hi all

    We are experiencing some performance issues with a web service call to one of our modules. The service is called via a regular (XML) web service, and we measured (using a plain SOAP UI call) that the OPA response time is 1.5-2 seconds.

    As the module is not very big (project statistics are below), I am surprised by the amount of time it takes OPA to respond. Does anyone know what could be causing this?

    Best regards, Els



    Project statistics

    Build the model

    Number of attributes: 242
    Number of base-level attributes: 26
    Number of top level attributes: 39
    Number of entities: 7
    Number of rules: 234
    Number of screens: 8
    Number of witnesses: 9

    Project files

    Word documents: 16
    Excel documents: 0
    Screen files: 1
    Source files: 1
    Test script files: 11
    Visual Explorer files: 2
    Files excluded from the build: 0
    Other files: 0
    -----
    Project files total: 31

    Hi, Els,

    How much data are you passing in to OPA (i.e. how big is the XML)? I have seen performance issues with large web service calls (such as those involving thousands of entity instances), and I understand the bottleneck in those situations to be the time it takes to parse the XML. (The OPA engine itself is extremely fast.)

    Of course, there are many variables to consider in any environment. No doubt you and your team will have to do some testing to eliminate assumptions until you find the root cause.

    M

  • ASO performance issue

    Hi Experts,

    I have two versions, 9.3.1 and 9.2, and loading data into 9.3.1 takes three times longer than into 9.2.

    Why is that?

    Any ideas?

    R Khan

    Hello

    1. OK, I got it. In 9.2 there was an optimization whereby differential updates, if they were smaller than 20% of the size of the database/cube, generated a list of deltas that was then applied to the existing (aggregate) views, so the aggregate views were not always rebuilt from scratch in 9.2.

    2. So this improved the performance of incremental data loads into an ASO cube where aggregation views are already in place and the size of the incremental data is small in comparison to the already loaded data.

    3. This optimization is no longer there in 9.3.1; instead there is a feature called "data slice loading" for ASO, which offers better performance. With the new feature, the amount of time to do an incremental load depends directly on the amount of new data, not on the current size of the ASO database/cube.

    4. Use this data slice load and see the results.
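
    For reference, a minimal MaxL sketch of an incremental load through an ASO load buffer; the application/database name (Sample.Basic) and the file path are placeholders, and the exact syntax should be checked against the MaxL reference for your release:

    /* stage the incremental file in a load buffer, then commit it in one pass */
    alter database Sample.Basic initialize load_buffer with buffer_id 1;
    import database Sample.Basic data
        from data_file '/data/incremental.txt'
        to load_buffer with buffer_id 1
        on error abort;
    import database Sample.Basic data from load_buffer with buffer_id 1;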

    Sandeep Reddy, Enti
    HCC
    http://hyperionconsultancy.com/

  • SAP performance issue with Oracle 10 on a Sun SPARC T5240 server

    Dear friends,

    We have a performance issue after migrating our SAP ERP 6.0 system to the new servers we moved to a month ago.
    The SAP EarlyWatch Alert report says the CPU response time is too high, although CPU utilization is never more than 5%.

    The current system is:

    Database server: Sun SPARC Enterprise T5240 Server - 2 CPUs with 6 cores and 8 threads per core, 1.2 GHz, 32 GB RAM,
    and we use another identically configured server as an application server.

    The database is Oracle 10.2.0 and the operating system is Solaris 10.

    The problem is that the average CPU response time is 450 ms while the maximum CPU load is 5 percent.

    With the configuration prior to the migration, on the old servers, we got a CPU response time of 150 ms and a max CPU load of 50%.
    Old configuration: 2 x HP rp3440, 2 x PA-8800 (2 cores, 1.0 GHz) CPUs.

    Do you have experience with a similar situation - a setting that could be wrong, so that the server CPUs are not being fully used?
    Or do you know of any similar configuration to benchmark against?

    Thanks in advance

    Uzan

    Are the new server's processors Niagara? If so, you could be running into the known performance issue identified in MOS Doc 781763.1 (migration from fast single-threaded CPUs to CMT UltraSPARC T1 and T2 causes an increase in reported CPU).

    HTH
    Srini

  • Performance issues when building a large crosstab

    Hello

    I have a performance issue when building a large crosstab.

    So here's my situation:

    We use Discoverer Plus 10.1.2.45.46c
    Database Oracle 9.2.0.6 on Windows 2003
    OAS 10.1.0.3
    1.5 GB of RAM on my PC

    - I scheduled a workbook with a single worksheet that takes 2 minutes to run. The table generated in the database to store the results has 19 columns and about 225,000 rows (I know that's a lot, but it is what the customer needs).

    - When I open the workbook, it takes about 2 hours to retrieve the data. The data is retrieved in groups of 1000, and at the beginning the rows are read much faster than at the end. It then takes another 15 minutes to build the crosstab. So overall, it takes 2 hours and 15 minutes for the crosstab to display on my screen.

    Can someone explain to me:
    - Why does it take so long?
    - What can I do to improve the execution, other than changing the application or displaying the results in a regular tabular worksheet? Is there some setting I can change on the database or on Oracle Application Server?

    Note: I reproduced the worksheet as a simple table and ran it. The table generated in the DB to store the results is the same, and it takes only 2-3 minutes to open that worksheet and retrieve all the rows.


    Thank you!

    Mary

    Hi Mary
    If you have 225,000 rows in your base worksheet, then it will take a long time to produce a crosstab. The reason is that Discoverer cannot calculate how many buckets it needs until it has read all the data. I can almost guarantee that with 225,000 rows to read and manipulate you are running out of memory.

    You might be better off having someone create and populate a table with the results you need, rather than trying to get Discoverer to calculate the crosstab values on the fly. If the final result of the crosstab is a few rows of aggregated data, then that is what belongs in your table. The advantage of using a table and some SQL (or PL/SQL) is that your local machine is not doing the aggregation / sorting work. Don't forget that a crosstab also sorts on the values in the left-hand columns; the more columns you have, the more aggregations (buckets) you will have and the more sorting is needed. Using a table, you can index or even partition the results, which makes for much faster retrieval.

    As a tip, Oracle recommends - and I have confirmed this in exhaustive testing - not building large crosstabs, because of the performance hit you will take. Tables are much more efficient because you can pull back x rows at a time. You can't do that with a crosstab: all the values must be read before anything is displayed.
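
    As an illustration of that approach, a sketch with placeholder table and column names; the GROUP BY does the bucketing once in the database, and the worksheet then reads the small summary table:

    -- Sketch only: pre-aggregate the 225,000 detail rows into a summary table
    CREATE TABLE sales_summary AS
    SELECT region,
           product,
           period,
           SUM(amount) AS total_amount
      FROM sales_detail
     GROUP BY region, product, period;

    -- index (or even partition) the summary for fast retrieval
    CREATE INDEX sales_summary_ix ON sales_summary (region, product, period);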

    Does that help?
    Regards
    Michael

  • Using APEX_ITEM in SQL - performance issues?

    Hello

    I was wondering whether using APEX_ITEM.* in the SQL source for a report can cause performance issues? I expect the report to return a little over 3000 records. When I developed it, it was working fine, but we only had about 150 cases; now that we have migrated the application to our test system, the page takes about 2 minutes to load. Here is my SQL for the report:

    select distinct
           initcap(mpa.pa_name) || ' (' || sd.designation_code || ')' site,
           frc.report_description report_category,
           mf.feature_desc feature,
           decode(cmf.selected_for_qa, 'Y', 'X', 'N', ' ') qa,
           apex_item.select_list_from_query(
             21,
             cpf.assign_to,
             'select ss.firstname || '' '' || ss.surname d, ss.staff_number r
                from snh_staff ss,
                     snh_management_units smu,
                     m_pa_snh_area psa
               where ss.mu_unit_id = smu.unit_id
                 and smu.unit_id = psa.unit_id
                 and ss.currently_employed = ''Y''
                 and psa.scm_lead = ''Y''
                 and psa.main_area = ''P''
                 and psa.pa_code = ' || mpa.pa_code,
             NULL,
             'YES',
             NULL,
             ' ') assign_to,
           apex_item.select_list_from_query(
             22,
             decode(to_char(cpf.planned_fieldwork, 'DD/MM/'),
                    '30/06/', 'Q1 ' || to_char(planned_fieldwork, 'YYYY'),
                    '30/09/', 'Q2 ' || to_char(planned_fieldwork, 'YYYY'),
                    '31/12/', 'Q3 ' || to_char(planned_fieldwork, 'YYYY'),
                    '31/03/', 'Q4 ' || to_char(planned_fieldwork - 365, 'YYYY'),
                    to_char(cpf.planned_fieldwork, 'YYYY')),
             'select d, r from cm_cycle_q_years') planned_fieldwork,
           apex_item.select_list_from_query(
             23,
             decode(to_char(cpf.planned_cmf, 'DD/MM/'),
                    '30/06/', 'Q1 ' || to_char(planned_cmf, 'YYYY'),
                    '30/09/', 'Q2 ' || to_char(planned_cmf, 'YYYY'),
                    '31/12/', 'Q3 ' || to_char(planned_cmf, 'YYYY'),
                    '31/03/', 'Q4 ' || to_char(planned_cmf - 365, 'YYYY'),
                    to_char(cpf.planned_cmf, 'YYYY')),
             'select d, r from cm_cycle_q_years') planned_cmf,
           apex_item.select_list_from_query(
             24,
             cpf.monitoring_method_id,
             'select method, monitoring_method_id from cm_monitoring_methods where active_flag = ''Y''') monitoring_method,
           apex_item.text(
             25,
             cpf.pre_cycle_comments,
             15,
             255,
             'title="' || cpf.pre_cycle_comments || '"',
             'annualPlanningComments' || to_char(cpf.plan_mon_feature_id)) comments,
           apex_item.text(
             26,
             to_char(cpf.contract_let, 'MON-DD-YYYY'),
             11,
             11) contract_let,
           apex_item.text(
             27,
             to_char(cpf.contract_report_planned, 'MON-DD-YYYY'),
             11,
             11) contract_report,
           apex_item.text(
             28,
             cpf.advisor_data_entry,
             11,
             11) advisor_entry,
           cms.complete_percentage || ' ' || cms.description status,
           apex_item.text(
             29,
             to_char(cpf.result_sent_to_oo, 'MON-DD-YYYY'),
             11,
             11) result_to_oo,
           cpf.plan_mon_feature_id,
           cmf.monitored_feature_id,
           mpa.pa_code,
           mpf.site_feature_id
      from fm_report_category frc,
           m_feature mf,
           m_pa_features mpf,
           m_protected_area mpa,
           snh_designations sd,
           cm_monitored_features cmf,
           cm_plan_mon_features cpf,
           cm_monitoring_status cms,
           cm_cycles cc,
           m_pa_snh_area msa,
           snh_management_units smu,
           snh_sub_areas ssa
     where frc.report_category_id = mf.report_category_id
       and mf.feature_code = mpf.feature_code
       and mpa.pa_code = mpf.pa_code
       and mpa.designation_id = sd.designation_id
       and mpf.site_feature_id = cmf.site_feature_id
       and cmf.monitored_feature_id = cpf.monitored_feature_id
       and cms.monitoring_status_id = cmf.monitoring_status_id
       and cc.cycle# = cmf.cycle#
       and msa.pa_code = mpa.pa_code
       and msa.unit_id = smu.unit_id
       and msa.sub_area_id = ssa.sub_area_id
       and cc.current_cycle = 'Y'
       and msa.main_area = 'P'
       and msa.scm_lead = 'Y'
       and mpf.interest_code in (1, 2, 3, 9)
       and ((nvl(:P6_REPORTING_CATEGORY, 'ALL') = 'ALL'
             and to_char(frc.fca_feature_category_id) =
                 case nvl(:P6_BROAD_CATEGORY, 'ALL') when 'ALL' then to_char(frc.fca_feature_category_id) else :P6_BROAD_CATEGORY end)
            or (nvl(:P6_REPORTING_CATEGORY, 'ALL') != 'ALL'
             and to_char(mf.report_category_id) =
                 case nvl(:P6_REPORTING_CATEGORY, 'ALL') when 'ALL' then to_char(mf.report_category_id) else :P6_REPORTING_CATEGORY end))
       and ((nvl(:P6_SNH_SUB_AREA, 'ALL') = 'ALL'
             and to_char(msa.unit_id) =
                 case nvl(:P6_SNH_AREA, 'ALL') when 'ALL' then to_char(msa.unit_id) else :P6_SNH_AREA end)
            or (nvl(:P6_SNH_SUB_AREA, 'ALL') != 'ALL'
             and to_char(msa.sub_area_id) =
                 case nvl(:P6_SNH_SUB_AREA, 'ALL') when 'ALL' then to_char(msa.sub_area_id) else :P6_SNH_SUB_AREA end))
       and ((nvl(:P6_SITE, 'ALL') != 'ALL'
             and mpa.pa_code = :P6_SITE)
            or nvl(:P6_SITE, 'ALL') = 'ALL')
    As you can see, I have 9 calls to the APEX_ITEM API, and when I take them out the report performs as I expect.

    Has anyone else run into this problem?

    We're currently on APEX 3.0.1.00.08, and the database is Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production.

    Thanks in advance,
    Paul.

    Try removing all but one of the apex_item.select_list_from_query calls, then rewrite that one to use subquery factoring. Now compare the timings of the same query with and without the subquery factoring. Also, is 500 a realistic number of rows to expect someone to edit?
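
    As an illustration of the suggestion, a sketch of the staff LOV factored out with a WITH clause (the tables and the d/r aliases are taken from the report query above); timing this factored form against the inline per-row version shows where the cost is:

    with staff_lov as (
      select ss.firstname || ' ' || ss.surname d,
             ss.staff_number r,
             psa.pa_code
        from snh_staff ss,
             snh_management_units smu,
             m_pa_snh_area psa
       where ss.mu_unit_id = smu.unit_id
         and smu.unit_id = psa.unit_id
         and ss.currently_employed = 'Y'
         and psa.scm_lead = 'Y'
         and psa.main_area = 'P'
    )
    select d, r, pa_code
      from staff_lov;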

Maybe you are looking for

  • Line out Dock to USB Mac connection

    I have a dock station to connect my iPhone 4s. What happens if I connect a cable (the one used to charge an iPod Shuffle) to a USB port on my MacBook Pro and the other end of the cable (male 3.5 mm connector) to the line connector on the dock?

  • Annoying firmware icon

    Just updated the firmware on an older white MacBook 3.1. Now the icon launches every time at startup. Despite using 'hide' on the dock icon, it's there every time. How can I make it go away?

  • Cannot start Microsoft Office Outlook. Cannot open the Outlook window.

    I have Windows 7 and keep getting this message when trying to open my email: "Cannot start Microsoft Office Outlook. Cannot open the Outlook window."

  • Vista will not read an MPEG-2 disc

    Hello. I recorded a few TV programs with my DVD recorder, and it always records in the MPEG-2 format. But when I pop the disc into my laptop, Windows does not detect the file; it detects the drive but nothing on it, which means that it won't let me co

  • BlackBerry Smartphones: can't send a music file via WhatsApp

    I no longer have the option in WhatsApp to send a music file. Can anyone help? I checked the properties of the music file; everything looks OK.