JDBC - fetching a very big data set (35 GB table)

Hello

I'm trying to fetch a very large data set, and the desired behaviour is that it keeps running without an out-of-memory exception (that is, it's fine if it takes days, as long as it doesn't blow up with an out-of-memory error).
The task runs on a 64-bit machine with 16 processors.

For testing purposes, I used a table 35 GB in size (of course this would not be our typical fetch, but just to stress JDBC).

??? Java exception occurred:
java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOf(Unknown Source)
        at java.util.Vector.ensureCapacityHelper(Unknown Source)
        at java.util.Vector.addElement(Unknown Source)
        at com.mathworks.toolbox.database.fetchTheData.dataFetch(fetchTheData.java:737)

Error in ==> cursor.fetch at 114
dataFetched = dataFetch(fet, resultSetMetaData, p.NullStringRead, tmpNullNumberRead);

In light of this problem, I added JDBC-fetch-size = 999999999 to my JDBC connection string:
'com.microsoft.sqlserver.jdbc.SQLServerDriver', 'jdbc:sqlserver://SomeDatabaseServer:1433;databaseName=SomeDatabase;integratedSecurity=true;JDBC-fetch-size=999999999;'

A slightly different error is reported:

??? Java exception occurred:
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.lang.Integer.toString(Unknown Source)
        at java.sql.Timestamp.toString(Unknown Source)
        at com.mathworks.toolbox.database.fetchTheData.dataFetch(fetchTheData.java:721)

Error in ==> cursor.fetch at 114
dataFetched = dataFetch(fet, resultSetMetaData, p.NullStringRead, tmpNullNumberRead);

Any suggestions?

REF:
JDBC http://msdn.microsoft.com/en-us/library/ms378526.aspx
32 bit vs 64 bit: http://support.microsoft.com/kb/294418

Posted by: devvvy, May 6, 2011 01:10

There is such a thing as going too far. You are never going to push that much data into memory at once, which is what seems to be happening because of the Vector in between. The number of objects that would have to be created is already staggering.

Best is to use a result set and iterate over the results instead, processing the rows one at a time, or maybe in batches. That will use only as much memory as necessary to keep one row / one batch of rows in memory, unless the driver does some magic preloading.
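
For illustration, a minimal sketch of that row-at-a-time approach in plain JDBC (the table name SomeBigTable and the fetch size are made up; selectMethod=cursor is the Microsoft driver's connection property for using a server-side cursor instead of buffering the whole result set):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class StreamBigTable {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:sqlserver://SomeDatabaseServer:1433;"
                       + "databaseName=SomeDatabase;integratedSecurity=true;"
                       + "selectMethod=cursor";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement()) {
                stmt.setFetchSize(1000);   // rows per round trip: a batch size, not a total
                try (ResultSet rs = stmt.executeQuery("SELECT * FROM SomeBigTable")) {
                    while (rs.next()) {
                        handleRow(rs);     // process the current row, then let it go
                    }
                }
            }
        }

        private static void handleRow(ResultSet rs) {
            // application-specific work on the current row (one row at a time)
        }
    }

Only the current batch of rows is ever resident, so memory stays flat however big the table is; later versions of the Microsoft driver can achieve much the same with responseBuffering=adaptive.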

Tags: Java

Similar Questions

  • Procedure to take an incoming data set and return the result set

    Hi all

    I have a situation where there will be a 'standard' set of data (source_data below), and I need to get outcome information for certain groups of clients (client_data, for example). As straight SQL it would be very easy (see below); the real-world problem is a bit more complicated. What I would like to do is set up a procedure so that I can pass in my variable client data, and it will spit back a data set identical to the output of the given SQL.

    A pointer in the right direction would be appreciated. If I could pass the 'client data' as a string containing an SQL query, that would be even better, for example:

    GetResults ("select client_id, min (whatever_date) from some_client_data which...", MyOutputRefCursor?)
    create table source_data
    (client_id integer,
    tdate date,
    amount number(6,2));
    
    create table client_data
    (client_id integer,
    critical_date date);
    
    insert into source_data values(1,to_date('20090104','yyyymmdd'),1000);
    insert into source_data values(1,to_date('20100104','yyyymmdd'),2000);
    insert into source_data values(1,to_date('20110104','yyyymmdd'),3000);
    insert into source_data values(1,to_date('20120104','yyyymmdd'),4000);
    insert into source_data values(2,to_date('20090104','yyyymmdd'),5000);
    insert into source_data values(2,to_date('20090604','yyyymmdd'),1000);
    insert into source_data values(2,to_date('20100104','yyyymmdd'),2000);
    insert into source_data values(3,to_date('20091004','yyyymmdd'),3000);
    insert into source_data values(3,to_date('20091104','yyyymmdd'),4000);
    insert into source_data values(4,to_date('20090104','yyyymmdd'),5000);
    insert into source_data values(4,to_date('20090604','yyyymmdd'),2000);
    
    insert into client_data values(1,to_date('20110104','yyyymmdd'));
    insert into client_data values(2,to_date('20090604','yyyymmdd'));
    
    
    select c.client_id,
      sum(CASE WHEN tdate < critical_date then amount else null end) used_before,
      sum(CASE WHEN tdate >= critical_date then amount else null end) used_after
    from source_data s
    inner join client_data c on s.client_id = c.client_id
    GROUP BY c.client_id;
    Thank you

    Jon

    Hello

    You can do this with a view. Make the view's parameters a global temporary table:

    create GLOBAL TEMPORARY table client_data
    (       client_id       integer
    ,     critical_date      date
    )
    ON COMMIT PRESERVE ROWS
    ;
    

    Then you can create a view based on your real table and the parameter table:

    CREATE OR REPLACE VIEW     special_clients
    AS
    select c.client_id,
      sum(CASE WHEN tdate < critical_date then amount else null end) used_before,
      sum(CASE WHEN tdate >= critical_date then amount else null end) used_after
    from source_data s
    inner join client_data c on s.client_id = c.client_id
    GROUP BY c.client_id;
    

    Because the parameter table is a global temporary table, each session will have its own private copy of the parameter table, and therefore its own private version of the view.
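
    To make the session-private behaviour concrete, here is a hedged JDBC sketch (connection details are placeholders): the parameter inserts and the view query must run on the same connection, because each session only sees its own rows in the temporary table.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SpecialClientsDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger")) {
                // Load this session's parameters into the global temporary table.
                try (PreparedStatement ps = conn.prepareStatement(
                        "insert into client_data (client_id, critical_date) values (?, ?)")) {
                    ps.setInt(1, 1);
                    ps.setDate(2, java.sql.Date.valueOf("2011-01-04"));
                    ps.executeUpdate();
                }
                // The view now reflects only the rows this session inserted.
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(
                             "select client_id, used_before, used_after from special_clients")) {
                    while (rs.next()) {
                        System.out.printf("%d: before=%s, after=%s%n",
                                rs.getInt(1), rs.getBigDecimal(2), rs.getBigDecimal(3));
                    }
                }
            }
        }
    }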

  • How to determine the process flow, necessary data, tables used and table relationships

    I need to determine 4 things from an existing APEX application:

    1. process flow

    2. necessary data

    3. the tables used

    4. all the relationships that exist between the tables

    Can someone help me, please? I have been working on this for too long!

    For #3, you can use the database object dependency report in the utilities to determine which tables and views are used by your application...

    For #4, you can bring it into SQL Developer and produce a data model, which should show you the relationships between the tables (if they are enabled).

    The first two will involve actually evaluating the application itself...

    Thank you

    Tony Miller
    LuvMuffin Software
    Ruckersville, VA

  • Formatting date and amount fields in a table

    Hi all
    I'm working with JDev 11.1.2.1.0.
    When the page loads, I populate a table based on a VO query.
    In this table, I have a few date and amount fields (columns).
    I want to format these columns; how can I do that?

    Appreciate your help.

    Thank you and best regards,
    Madhav K.

    For the date field, you can try this:

    <af:convertDateTime dateStyle="medium" pattern="dd-MMM-yyyy"/>

    And for the amount field it should work similarly.

    I just tried; both work for me.

  • Scrolling a Spry data set?

    I have a large data set created with Spry in Dreamweaver CS6 and want to scroll through its rows.  Putting a div inside the data set and adding the overflow/scrollbars property adds a scroll bar that includes the data set's header and is offset.  Is it possible to create a scroll bar that scrolls all rows except the header and is not offset?

    Vertical scrolling table.

    Fixed table header - http://alt-web.com/

    Nancy O.

  • Can I make my 3D data set table a 2D data set table?

    Hello.  I'm putting my data in a 3D array.  I'm not really satisfied with the 3D data set, since it requires a large amount of array manipulation whenever I want to do something with the data; I do a lot of analysis, so it's a pain.  It is reliable, but it takes up space and is generally a pain.  I read here (posts #6 & #7, if the link doesn't take you directly to them) a few messages by some respected LabVIEW gurus saying they didn't see any reason to use 3D arrays.  I was hoping someone might be able to point me to a better solution for how to store and access my data logically.

    I'm acquiring n channels of data (just one for a while, but I will add to this later), which puts me in a 2D array with each row being a channel and the columns being the data points.

    I need to acquire multiple test runs of data.

    I also need to average the data in each test run, and to check that the test run averages are within a tolerance of the overall average.

    I also need to be able to rerun one of the test runs if its average does not fall within the tolerance of the overall average.

    I use the pages of the 3D array as the test runs.  I can then rerun any test I want by selecting the corresponding page.

    I wait to save all the test runs at once until the user chooses to save the data, which means the user has run the minimum number of test runs and all of them have averages that fall within the tolerance of the overall average.  I like this solely because this test will be carried out several times for different UUTs, and all the data for a UUT can be saved in a single file.

    I currently save the file as a binary file (for my data backup purposes), and the user can also choose to save it to Excel, with each test run on its own sheet.

    I have thought about saving after each run, but I don't know how to append/replace the data in the binary file. I would rather not have a separate binary file for each test run, but it's not a dealbreaker.

    The only way I see to avoid the 3D array is to save after each test run.  This would mean a lot more file handling (when files should be replaced, when the averages should be analyzed, etc.).

    Is the "save after each test cycle" approach that is generally used?  If so, it seems to me that file manipulation would be more prone to error than the manipulation of table 3D (maybe just for me I guess). Can someone tell me otherwise?

    Is there another approach that I haven't thought of or discovered here?

    Thank you

    Scott

    I think saving after each test run is a safer and more reliable way to accomplish this than trying to hold all the data and save it at the same time.  That way, if the program crashes, you don't lose all the data because it was still in memory; the data collected so far are safely in a file.  Why do you think the file manipulation would be more error-prone?

    If you want to append to an existing file, then after you open the file you just use Set File Position to set the file pointer to the end of the file.

    You can also look at the TDMS file format, because it has ways of sorting and organizing several sets of data.
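
    As a language-neutral illustration of the append idea (a sketch in Java rather than a VI, with a made-up record layout of a length header followed by the samples; opening the stream in append mode has the same effect as setting the file position to the end):

    import java.io.DataOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class AppendRun {
        // Append one test run to an existing binary file.  Passing true as the
        // second argument opens the file in append mode, so writes land at the
        // end of the file, the same effect as Set File Position to "end".
        static void appendRun(String path, double[] samples) throws IOException {
            try (DataOutputStream out =
                     new DataOutputStream(new FileOutputStream(path, true))) {
                out.writeInt(samples.length);   // length header so runs can be read back
                for (double s : samples) {
                    out.writeDouble(s);
                }
            }
        }
    }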

  • Filter for data within a defined range from a 2D data set

    I'm producing data from an ultrasonic sensor at 1 kHz, and there is a lot of data (data points range from 0 to 10). However, in some cases when I know the data should be about 7 (for example), I get outliers (about 9 and 10). Is it possible to define a filter for data within a defined range?

    I average the data to get a mean value, and the outliers are distorting it. In the worst case, my outliers are 30 to 40% of the data generated. I created a filter that sorts the data and takes values starting from the lowest, and I stop the loop when the data reaches a value greater than 9. But this seems to take a long time (because the loop checks every data point and there are thousands of them).

    Is there a better way to filter the data and define a preset range to collect?

    I attach my filter.vi and a sample set of my previous data. The data ranges from 8 to 10, and I would like only the range 7.5 to 8.5 to be considered. The sensor records voltage here, and the problem could be solved by installing a different type of sensor, but if a filter in LabVIEW can do it, the sensor we use now will do.

    This is kind of urgent; my design is unfinished because of this problem. If someone can find some time to share some suggestions, I will be grateful.

    Thanks in advance.

    See the attachment.  I incorporated the data you posted into the VI.  It doesn't seem like any data were less than 8.7 or so, so I modified the range so there would be a few points to average.  Some sets were completely out of range, so the average came back as NaN (not a number) due to a division by zero.
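
    The expensive part of the posted approach is the sort; keeping only in-range points and averaging them needs just one pass. A minimal sketch of that logic, in Java rather than a VI purely to show the algorithm (the 7.5 to 8.5 bounds come from the question):

    public class RangeFilter {
        // One pass: keep points inside [lo, hi] and average them.  O(n), no sort.
        static double rangeFilteredMean(double[] samples, double lo, double hi) {
            double sum = 0.0;
            int count = 0;
            for (double s : samples) {
                if (s >= lo && s <= hi) {
                    sum += s;
                    count++;
                }
            }
            // NaN when nothing is in range: the same symptom as the
            // divide-by-zero mentioned above.
            return count > 0 ? sum / count : Double.NaN;
        }

        public static void main(String[] args) {
            double[] data = {7.9, 8.1, 9.6, 8.0, 10.0, 7.6};       // made-up samples
            System.out.println(rangeFilteredMean(data, 7.5, 8.5)); // prints 7.9
        }
    }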

  • How to calculate daily and hourly averages for a very large data set?

    Some_Timestamp                           Parameter1   Parameter2
    1-JAN-2015 02:00:00.000000 AM -07:00     2341.4534    676341.4534
    1-JAN-2015 02:00:01.000000 AM -07:00     2341.4533    676341.3
    1-JAN-2015 03:04:01.000000 PM -07:00     5332.3533    676341.53
    1-JAN-2015 03:00:01.000046 PM -07:00     23.3436      434.4345
    2-JAN-2015 05:06:01.000236 AM -07:00     352.3343     543.4353
    2-JAN-2015 09:00:01.000026 AM -07:00     234.4534     53.54
    3-FEB-2015 10:00:01.000026 PM -07:00     3423.3534    634.45
    4-FEB-2015 11:08:01.000026 AM -07:00     324.3532     534534.53

    We have data as above in a table, and we want to calculate the daily and hourly averages for a large data set (almost 100 million rows).  Is it possible to use SQL analytic functions for better performance, instead of the ordinary average and GROUP BY?  Or is there any other better-performing way?

    Not sure if it performs better, but instead of using to_char you could use trunc (for the hourly average, the same idea works with trunc(some_timestamp,'HH24') in the select list and the group by). Try something like this:

    select
        trunc(some_timestamp,'DD'),
        avg(parameter1)
    from
       bigtable
    where
       some_timestamp between systimestamp-180 and systimestamp
    group by
       trunc(some_timestamp,'DD');
    

    Hope that helps,

    dhalek

  • How to interpret the data cache setting and the current data cache value?

    How should we interpret the data cache setting and the current data cache value? We found that even when we configure a larger data cache in Essbase, 2 GB for example, the current value of the data cache is always much lower. Does that indicate very low data retrieval activity, or something else?

    Thanks in advance!

    Hello

    When a block is requested, Essbase searches the data cache for the block. If Essbase finds the block in the cache, it is immediately accessible. If the block is not found in the cache, Essbase searches the index for the appropriate block number and then uses the block's index entry to retrieve it from the data file on disk. Retrieving a requested block from the data cache is faster and therefore improves performance.
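
    Schematically, the lookup order described above looks like this (an illustrative Java sketch only, not Essbase's actual code; the cache, index and disk read are hypothetical stand-ins):

    import java.util.HashMap;
    import java.util.Map;

    public class BlockLookupSketch {
        private final Map<Long, byte[]> dataCache = new HashMap<>();
        private final Map<Long, Long> index = new HashMap<>();   // block number -> file offset

        byte[] getBlock(long blockNumber) {
            byte[] block = dataCache.get(blockNumber);
            if (block != null) {
                return block;                      // fast path: found in the data cache
            }
            Long offset = index.get(blockNumber);  // otherwise look up its index entry
            block = readFromDisk(offset);          // and retrieve it from the data file
            dataCache.put(blockNumber, block);     // keep it cached for the next request
            return block;
        }

        private byte[] readFromDisk(Long offset) {
            return new byte[0];                    // placeholder for the actual disk read
        }
    }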

    So if, as you say, the current value is much lower, then the percentage of requested blocks found in the Essbase data cache is very low.

    Hope that answers your question.

    Atul K

  • How to set up my SD card as the default for data and file storage

    Can someone help me set up my SD card as the default for storing data, images, files, etc.?  Internal storage is getting low, and at the moment I have no data plan; even if I had one, I don't want to use my phone as a data storage device.  I take pictures and save them, but it still only gives me one option, to save them in My Drive; I would like to save them to the SD card, but that option does not come up.  What should I do?  I tried going to Settings > Storage; I see phone storage, internal storage and the SD card with the amount of space available, but no way to choose the SD card as the default.

    Thanks for all the help and information.

  • Convert the Period_Name in the GL_JE_Lines table to a date format and then return the year

    I'm working on a BI Publisher data model, and I'm trying to convert the Period_Name in the GL_JE_Lines table to a date format and then return the year.

    The sql below works in 11i, but I can't make it work in Fusion.

    to_char(to_date(l.period_name, 'MON-RR'), 'YYYY')

    Any ideas?

    Hi Jennifer,

    to_char(sysdate, 'DDMONYYYY') in BI Publisher does not return correct results due to the NLS_DATE_FORMAT/NLS_DATE_LANGUAGE settings.

    According to the I18N standards, NLS_DATE_LANGUAGE in the database is always hardcoded to NUMERIC_DATE_LANGUAGE. In NUMERIC_DATE_LANGUAGE, 'MON' in a date format mask is an integer, so you do not see the correct value.

    You're not supposed to issue direct SQL with fixed format masks (unless it's some sort of canonical format used in internal processing that the end user will never see); you should return the numeric date language to the mid tier and then do the formatting there.

    Workaround

    Try setting NLS_LANGUAGE in the data model SQL to override the formatting from the database and session values, for example:

    select to_char(sysdate, 'MON-DD-YYYY', 'NLS_DATE_LANGUAGE = AMERICAN') from dual;

    I got this from Oracle Support after raising an SR.

    Thank you

    Rahul.

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation; please increase the CalcLockBlock setting and then retry (a small data cache setting could also cause this problem; please check the data cache size setting)

    Hello

    Our environment is Essbase 11.1.2.2, with the Essbase, EAS and Shared Services components. One of our users tried to execute the calc script of an application and faced this error:

    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation; please increase the CalcLockBlock setting and then retry (a small data cache setting could also cause this problem; please check the data cache size setting).


    I did some googling and found that we need to add something to the Essbase.cfg file, as below.

    Error 1012704: Dynamic Calc processor cannot lock more than [number] ESM blocks during the calculation; please increase the CalcLockBlock setting and then retry (a small data cache setting could also cause this problem; please check the data cache size setting).

    Possible problems

    Analytic Services cannot lock enough blocks to perform the calculation.

    Possible solutions

    Increase the number of blocks that Analytic Services can allocate for a calculation:

    1. Set the maximum number of blocks that Analytic Services can allocate to at least 500.
      1. If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
      2. In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
      3. Stop and restart the Analytic Server.
    2. Add the command SET LOCKBLOCK HIGH at the beginning of the calculation script.
    3. Set the data cache large enough to hold all the blocks specified by the CALCLOCKBLOCKHIGH setting.

    In fact, in our server config file (essbase.cfg) we already have the settings below:

    CalcLockBlockHigh 2000

    CalcLockBlockDefault 200

    CalcLockBlockLow 50


    So my question is: if we edit the Essbase.cfg file, add the above settings and restart the services, will it work?  And if yes, why should we change the server configuration file if the problem concerns one application's calc script? Please guide me on how to do this.


    Kind regards

    Naveen

    Yes, it should.

    Make sure that you have migrated the database cache settings as well. If the cache is too small, you will have similar problems.

  • When and how subledger data is populated in the XLA_DISTRIBUTION_LINKS table

    Hello gurus

    I have been working on Subledger Accounting, and I was able to get a fair idea of how data is transferred from XLA_DISTRIBUTION_LINKS to XLA_AE_LINES and XLA_AE_HEADERS. According to my understanding, during 'Create Accounting' the XLA_DISTRIBUTION_LINKS data is driven by the JOURNAL_LINE_TYPES rules and conditions to load XLA_AE_LINES and XLA_AE_HEADERS. Then, during 'Transfer to GL', the XLA_AE_LINES and XLA_AE_HEADERS data is transferred to the GL tables. What I have not understood is when and how the data is loaded into XLA_DISTRIBUTION_LINKS. Under what rules and conditions, and during which stage, is data loaded into the XLA_DISTRIBUTION_LINKS table? Can someone please explain how the data is transferred from the subledger tables to the SLA distribution table?

    Thank you
    Sunny.

    The notes above are good, but I just wanted to add a point:
    xla_distribution_links and xla_ae_lines are populated at the same time.

    Both are driven by the JLTs and their conditions; one is detailed and the other is summarized/merged.
    xla_ae_lines will have additional balancing lines, and xla_distribution_links will get extra gain/loss lines, which end up with a zero amount.

    Also check the links below for the distribution identifiers setup.
    http://docs.Oracle.com/CD/E18727_01/doc.121/e13420/T193592sdextchap.htm#sdextacattg
    http://docs.Oracle.com/CD/E18727_01/doc.121/e13420/T193592sdextchap.htm#sdextdisidg

    Regards,
    VAMSi

  • Problems getting started using variables and data sets

    Hello

    I am new to this forum, and new to scripting in Illustrator and scripting in general.

    I have an XML file that contains a number of data sets, each composed of a number of text variables, that I want to use to create a trading card game.

    Each trading card consists of a text field for the title of the card and a number of icons, which are instances of different symbols.

    I wrote a script to create each card, and I can load the variables in my JavaScript using: newCard.importVariables(new File(xmlPath));

    Now I need to access the data in the data sets to populate the card.

    To provide some context, here is a picture of one of the cards with placeholder art:

    Screen Shot 2013-02-22 at 23.02.29.png

    Currently, for the icon in the upper right corner (the PHASE icon), I use the following code:

    phase = "night";

    phaseIcon = newCard.symbols.getByName (phase);

    phaseIcon1 = newCard.symbolItems.add (phaseIcon);

    phaseIcon1.top = 232;

    phaseIcon1.left = 140;

    What I want to be able to do is pull the value of the <phase></phase> text variable from my XML data set and insert that into the script, but I don't know how. This is where I'm stuck on how to proceed. Any help is greatly appreciated.

    Thank you

    Nick

    OK, so it takes data sets to be able to get the data as strings, right? This proved not to be as easy as it seemed.

    -----------------

    So, to get the string data, you can use the XML from the file instead. Your last sample seems fine; I did the same thing when you posted your sample, using the XML file (the resulting phaseValue can then be passed to symbols.getByName() in place of the hard-coded string):

    var xmlfile = File.openDialog("Select a valid XML file","XML:*.xml", false);
    if(xmlfile != null) {
        xmlfile.open("r");
        var xmlstring = xmlfile.read();
        xmlfile.close();
        xmlfile = null;
    } else {
    alert("Error opening XML file.");
    }
    var wolfCardsXML = new XML(xmlstring);
    var currentCard = wolfCardsXML.card[0];
    var phaseValue = currentCard.phase.toString();
    
    alert(phaseValue);
    
  • Shows only "Fetching Data..." and no data displayed

    Hello

    Can someone help me?

    When I view one particular page, "Fetching Data..." appears and then all the data is shown,
    but on the rest of the pages only the words "Fetching Data..." remain.

    Hello

    This can have many reasons. One of them is that you use a dynamic region on the page and the region's bean is configured with an unsupported bean scope. However, as said, it's a needle-in-a-haystack search you're asking us to do with so little information.

    Frank
