Linux kernel parameters and Oracle memory allocations.

I'm currently creating an Oracle 11gR2 database on a Linux (SLES 11) OS. Both the OS and the DB software are 64-bit.
I was wondering about the best way to allocate shared memory so that Automatic Memory Management can use 12G of it (leaving 4G for the operating system and everything else).

I have set the kernel parameters as follows:
SHMMAX: 8589934592 (the recommended physical memory / 2).
SHMALL: 3145728 (giving a total of 12 GB of shared memory, given that the page size is 4096).
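
For reference, this is roughly what that looks like in sysctl form (a minimal sketch; the assumption that the values live in /etc/sysctl.conf on SLES is mine, adjust the file to your setup):

# /etc/sysctl.conf -- the two values above
kernel.shmmax = 8589934592    # largest single shared memory segment (8 GB)
kernel.shmall = 3145728       # total shared memory in 4096-byte pages (12 GB)
# apply without a reboot:
sysctl -p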

The way I read the documentation, this should make enough shared memory available to allow setting memory_max_target and memory_target to 11G. I'm using 11G to make sure I'm not tripped up by a few bytes (if it all works, I'll grow it to 12G).
However, whenever I try a startup, I get the 'ORA-00845: MEMORY_TARGET not supported on this system' error.
It goes away if I create a large shmfs mount with a mount command, but I was under the impression that this was no longer necessary with a 64-bit system and Oracle 11g.

Could someone clarify a little what I'm doing wrong and where I should be looking for the answer?

Cheers,

Rich

From MOS notes ID 749851.1 and ID 460506.1:

With AMM, all SGA memory is allocated by creating files under /dev/shm. When the Oracle DB allocates the SGA this way, reserved HugePages are not used. The use of AMM is completely incompatible with HugePages.

Essentially, AMM requires /dev/shm.
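
In practice that means /dev/shm has to be at least as large as MEMORY_TARGET. A minimal sketch of the workaround (the 12g size is illustrative; match it to your memory_max_target):

# resize the existing tmpfs mount on the fly
mount -o remount,size=12g /dev/shm
# make it persistent across reboots with an /etc/fstab entry such as:
# tmpfs  /dev/shm  tmpfs  defaults,size=12g  0 0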

Tags: Database

Similar Questions

  • OEM 11g on Linux and Oracle 10g agent on AIX

    Has anyone set up an Oracle 11g Grid Control on Linux / 10g agent on AIX combination?

    The reason for this is that we use a mix of operating systems for our Oracle databases, but we want to run 11g Grid Control on Linux and be able to monitor all our DBs (9i to 11g) running on the different OSes.

    nigel86a wrote:
    Has anyone set up an Oracle 11g Grid Control on Linux / 10g agent on AIX combination?

    The reason for this is that we use a mix of operating systems for our Oracle databases, but we want to run 11g Grid Control on Linux and be able to monitor all our DBs (9i to 11g) running on the different OSes.

    Yes, it is possible, and this is done in most environments.

    I also suggest you use 10.2.0.5 agents with the 11g Grid Control.

    Also check

    Oracle Enterprise Manager Grid Control Certification Checker [ID 412431.1]

    Regards
    Rajesh

  • Minimizing buffer allocations and memory allocation in general

    Hello

    I am trying to minimize buffer allocations and memory activity in general. The code will run 'headless' on a cRIO, and our experience, and that of the industry as a whole, is to eliminate or minimize any dynamic memory allocation and deallocation.

    In our case we unfortunately deal with a lot of string manipulation, so eliminating all memory alloc/dealloc is a tall order (impossible?).

    Which leaves me with the 'minimize' strategy.

    I did some investigating and VI profiling, and played with the In-Place Element (IPE) structure to see if I can help things.

    For example, I have a few places where I transpose some 2D arrays. If I use the 'Show Buffer Allocations' tool, the attached screenshot seems to indicate that I am better off NOT using the IPE structure, both for the array transpose operation and for the element index operations? That seems counter-intuitive to me, so I must have some basic misunderstanding of either the 'Show Buffer Allocations' tool, the IPE, or both... The tool shows that a buffer is allocated going into the IPE and again coming out of it, and the 2D array transpose gets an allocation in and out even within the IPE, causing twice as many allocations as not using the IPE at all.

    As for indexing, using the IPE seems to result in 1.5 times more allocations (not to mention the fact that I have to wire the index numbers individually vs. letting LabVIEW auto-index from 0 in the non-IPE version).

    The example shows string conversions (not good from a memory alloc/dealloc point of view, because LabVIEW cannot easily determine the length of the string 'image'), but I have other sections of the code that do a lot of the same kind of thing while staying numeric throughout.

    I would be grateful if someone could help me understand why the IPE seems to increase rather than decrease memory activity.

    (PS: the 2D array is used in the 'incoming' orientation by the rest of the code, so building the array already transposed to avoid the conversion does not seem useful either.)

    QFang wrote:

    My reasoning (even if it was wrong) was to indicate to the compiler that 'I don't need an extra copy of these arrays, I'll just read certain values...' Because forking a wire is a fairly easy way to increase the chances of data duplication, I thought that the IPE index function, by eliminating the need to split or fork the array wire (it has an in and an out), would let me avoid duplicated work, or at least stand a better chance of avoiding it.

    It is important to realize that buffer allocations occur at nodes, not on wires. Although it may look like forking a wire makes a copy of the data, that is not the case; at most the fork increments a reference count. LabVIEW is copy-on-write - no copy of memory is made until the data is actually modified, and even then the copy only happens if the original needs to be kept. If you fork an array to several Index Array functions, there is still only one copy of the array. In addition, the LabVIEW compiler tries to schedule operations so as to avoid copies, so if several branches read from a wire but only one modifies it, the compiler tries to schedule the modifying operation to run after all the reads have completed.

    QFang wrote:

    After looking at several more cases (as I write this post), I can't find any array operation in my code where wrapping it in an IPE reduces the allocation dots... So I must STILL not understand the IPE properly, because my conclusion at the moment is that you should 'never' use them. Replace Array Subset? No need for it (in my code). Indexing elements? Same thing.

    An IPE is useful for replacing a subset of an array when you're operating on a subset of the original array: you take out the elements you want, do some calculations, and then put them back in the same place in the array. If the new array subset comes from somewhere other than the original array, then the IPE does not help. If the input and output sides of the IPE don't pair up, there is no advantage to the IPE.

    I am attaching a picture of code I wrote recently that uses IPEs, with the buffer allocations shown. You can see that there is only one set of buffer allocations, after the Split 1D Array. I could have worked around that, but the way I wrote it seemed easier, the arrays are small, and it is not time-critical code, so there was no need for any optimization. Since those arrays are always the same size, LabVIEW should be able to reuse the same allocation on each call of the VI rather than allocate new arrays.

    Another important point: buffers can be reused. You might see an allocation dot on a shift register, but that shift register only has to be allocated once, on the first call to the VI. Every following call to the VI reuses that same spot. Sometimes you don't see a buffer allocation even though one effectively happens: resizing an array may require copying the whole array to a new, larger location, and even though LabVIEW has to allocate more memory for that, you won't always see a buffer allocation dot. I think that's because it technically reallocates an existing array rather than allocating a new one, but it has always puzzled me a bit. Still on the subject of arrays, there are also times when you do see a buffer allocation dot but all that gets allocated is a 'subarray' - a pointer into a specific part of an existing array, not a new copy of the data. For example, Reverse 1D Array can create a subarray that points at the end of the original array with a 'stride' of -1, meaning it traverses the array backwards. Same thing with Array Subset. You can see these subarrays by turning on context help and hovering over a wire carrying one, as shown in this image.

    Unfortunately, there isn't much you can do about the string allocations. Fortunately, I have never seen that be a problem, and I've had systems run continuously for months that used strings on limited hardware (Compact FieldPoint controllers) for both logging to disk and TCP communication. I recommend moving the string operations out of the time-critical areas and into separate loops. For example, I put my TCP communication in a separate loop that also parses the incoming strings into specific data, which is then sent through a queue (or RT-FIFO) to the time-critical loops, so that those loops only deal with fixed-size data. Same idea with logging - do all the string conversion and formatting in a separate loop.

  • List of all virtual machines and their allocated memory and processors

    Is there a script that will list all the virtual machines in my environment and how many processors and how much memory are allocated to them? Thanks in advance for your help.

    This isn't really a PowerCLI thing, but you could have a look at RVTools.

  • Oracle 10g: how to determine whether the allocated memory is healthy or sufficient

    Hi guys,

    I have a 10.2.0.5 production database.
    Currently, my server has 8 GB of physical RAM.
    3 GB is allocated to the SGA and 1 GB to the PGA.
    Let's say that one day the application (for example, WebLogic) needs to increase its connection pool from 20 to 50.
    How can we tell whether the allocated memory is sufficient for the existing load as well as for the increased workload?
    CPU is more straightforward - we can simply look at the load we generate on the CPU. If the load is low, I assume it is quite safe to increase the connections.
    Please share your experiences of dealing with this situation.

    Thank you

    Chewy wrote:
    How can we tell whether the allocated memory is sufficient for the existing load as well as for the increased workload?

    There is a set of memory advisor views that will tell you whether your memory structures are appropriately sized - for example (see the query sketch after this list):
    v$db_cache_advice
    v$shared_pool_advice
    v$java_pool_advice
    v$sga_target_advice
    v$pga_target_advice
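
    A minimal sketch of querying two of them from the shell (run as a suitably privileged user; column names are those of the standard advisor views, so verify against your version's documentation):

    # feed the advisor queries to SQL*Plus (ORACLE_HOME / ORACLE_SID assumed to be set)
    printf '%s\n' \
      "select sga_size, sga_size_factor, estd_db_time, estd_physical_reads from v\$sga_target_advice order by sga_size;" \
      "select pga_target_for_estimate, pga_target_factor, estd_pga_cache_hit_percentage from v\$pga_target_advice order by pga_target_for_estimate;" \
      "exit" | sqlplus -s / as sysdba
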
    --
    John Watson
    Oracle Certified Master s/n
    http://skillbuilders.com

  • Kernel parameters for EBS 11.5.10 and DB 10.2.0.4

    Hello
    Is there any doc describing the kernel parameters (settings) needed to install the EBS 11.5.10 system on Enterprise Linux (v. 4.6)... something like Metalink note 265121.1 (requirements for installing Oracle9i R2 (9.2.0) on RHEL3)...

    Thank you
    SIM

    SIM,

    Please refer to:

    Note: 294932.1 - Recommendations to Install Oracle Applications 11i
    https://metalink2.Oracle.com/MetaLink/PLSQL/ml2_documents.showDocument?p_database_id=not&P_ID=294932.1

    Note: 316806.1 - Oracle Applications Installation update Notes, Release 11i (11.5.10.2)
    https://metalink2.Oracle.com/MetaLink/PLSQL/ml2_documents.showDocument?p_database_id=not&P_ID=316806.1

  • New installation woes - SQLDeveloper on Oracle 11g XE and Oracle Linux x64

    Note that, as an Oracle developer experienced in enterprise environments running the MS Windows operating system, I am familiar with SQLDeveloper for Windows. However, I don't have much experience with Linux. I don't find it intimidating; I just don't have a lot of accumulated knowledge.

    Recently, I tried to set up a development environment on my Windows 7 Professional x64 laptop.

    1. Installed Oracle VBox
    2. Installed Oracle Linux 5.2 inside VBox (because Oracle 11g XE is not certified on 64-bit Windows, the idea was to go with a Linux OS)
    3. Installed Oracle DB 11.2 XE
    4. Installed the JDK as described in the SQLDeveloper for Linux x64 release notes
    5. Installed SQLDeveloper 3.1.07.42-1

    No apparent problems with any of the installations.

    Problem: I can run SQL*Plus from the command line, but when I click the SQLDeveloper icon, nothing happens. No error message, nothing at all.

    Question #1: Not sure how to debug/fix this. I will read through the installation and release notes for SQLDeveloper, but any suggestions would be welcome.

    Question #2: Could this be due to a problem with the JDK installation? It appeared to be successful (no obvious problems), but I don't see anything in the Oracle Linux menu system either that tells me the JDK was installed successfully. How can I make sure that the JDK installation was successful?
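
    One way to check from a terminal (a hedged sketch; the sqldeveloper.sh location below is an assumption, use wherever the RPM put it on your system):

    # confirm which Java the shell finds and its version
    which java
    java -version
    # launch SQL Developer from a terminal so any startup errors are printed
    sh /opt/sqldeveloper/sqldeveloper.sh   # path is an assumption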

    Edited by: Peter Hartgerink, 19 June 2012 09:12

    >
    The Linux x64 version of SQLDeveloper is not shipped with the JDK. There is a recommended JDK version, the idea being to install it before installing SQLDeveloper, which I did with apparent success. I guess I'll have to reinstall the JDK and check my Java versions later. Thank you.
    >
    I don't know what JDK you downloaded since you didn't say. What version is it? Post the link.

    I can hardly believe that the recommended JDK version is 1.4.2, since that is an OLD version of Java that you have to hunt around for even to download, and, more importantly, the main SQL Developer download page announces clearly what version of Java is required.
    http://www.Oracle.com/technetwork/developer-tools/SQL-Developer/downloads/index.html
    >
    JDK support
    Oracle SQL Developer 3.1 comes with JDK 1.6.0_11. However, you can point it at and use any JDK 1.6.0_11 or higher. To use an existing JDK, download the 'with JDK already installed' zip files below.
    >
    As you can see you need JDK 1.6.0_11 or above.

    If the version that you say was recommended really was 1.4.2, that is a mistake; post the details so that the developers can get that error fixed.

    I don't know what is on your system and can only go by the information you provide
    >
    4. Installed the JDK as described in the SQLDeveloper for Linux x64 release notes
    . . .
    java version '1.4.2'
    >
    You have either downloaded the 1.4.2 version, which seems unlikely, or your Linux already included 1.4.2 and that is what gets used when you try to start SQL Developer.

    Maybe your system now has several versions of Java on it, but the version used by SQL Developer is the wrong one. If so, that is the problem you have to solve.

  • Need to change the root and oracle user passwords on Linux

    Dear all,

    We installed and configured an Oracle RAC server a year back. Now, for security reasons, we have to change the root and oracle user passwords.
    So before changing the passwords, we want to know whether there is any impact on the live server from changing the passwords of the root and oracle users in RAC.

    Also, since we use oracle user equivalence, will it still work fine if we change the password?

    Thank you
    Shir khan

    SHERKHAN wrote:
    Dear all,

    We installed and configured an Oracle RAC server a year back. Now, for security reasons, we have to change the root and oracle user passwords.
    So before changing the passwords, we want to know whether there is any impact on the live server from changing the passwords of the root and oracle users in RAC.

    Also, since we use oracle user equivalence, will it still work fine if we change the password?

    No, there is no impact from changing the oracle/root user passwords on user equivalence or on RAC.
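
    A minimal sketch of what that typically looks like (run on every node; the hostname is illustrative):

    # change the passwords on each node of the cluster
    passwd oracle
    passwd root
    # user equivalence is based on SSH keys rather than passwords, so it should
    # be unaffected -- verify that no password prompt appears:
    ssh node2 date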

    Also check similar posts in the past:

    Change of OS user password in an Oracle RAC environment

    Regards
    Rajesh

    Edited by: Rajesh on August 25, 2010 11:26

  • JNI memory allocation errors

    We run a 32-bit 10.1.2.3 B2B instance on Linux. We have dedicated as much memory as possible to the JVM, and for some messages we get memory allocation errors in the JNI native code (EDIFACT). Where does the native code allocate its memory?

    My real question is whether we should allocate less memory to the Java virtual machine in order to leave more of the 32-bit address space available for the JNI native allocations, or what we should do instead. There is plenty of physical memory in the machine (16 GB, and no more than 4 GB in use), so it must have something to do with the 32-bit limitation.

    My mistake. I'm not sure which API Oracle used, but for native code the memory must be allocated on the OS heap and not on the Java heap.

    When, at run time, a large chunk of memory is requested, the OS allocator will try to find it on the OS heap, and if it can't, it returns a bad_alloc exception (as far as I know, bad_alloc is a class from the C++ standard library). It takes a lot of debugging to find the exact cause and a solution, and for that, Oracle Support is certainly the right door to knock on.

    There is plenty of physical memory in the machine (16 GB, and no more than 4 GB in use), so it must have something to do with the 32-bit limitation.

    This statement is interesting because, as far as I know, Linux memory does not need to be defragmented, so if 4 GB of memory is available then the bad_alloc exception should not occur. However, I still can't comment, because there are other factors that should also be considered, such as the size of the payload, environment settings, the memory assigned to the user, the maximum memory allocation the process is allowed, etc.

    Please check the above factors and try to find any pattern or common factor between the exceptions. I'm not a Linux expert, but again, it looks like the problem is with the OS and its parameters.
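
    If the 32-bit address-space squeeze is the suspect, one simple experiment (heap values purely illustrative, and the jar name is a placeholder) is to start the JVM with a smaller heap so more of the ~4 GB process address space is left for the native JNI allocations:

    # shrink the Java heap and re-test the failing EDIFACT messages
    java -Xms512m -Xmx1024m -jar b2b-server.jar   # placeholder launch command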

    Kind regards
    Anuj

  • Oracle Tuxedo and Oracle Tuxedo Message Queue on a virtual machine

    Hi friends.

    How is it going?

    A small question.

    We are about to go live using Oracle Tuxedo 12.1.1.0 with a Distinguished Bulletin Board Liaison and Oracle Tuxedo Message Queue 12.1.1.0 on a virtual machine (VMware) running Oracle Linux 6.2. We would like to know whether there are any recommendations, or pitfalls we might run into, before running Oracle Tuxedo on a virtual machine.

    I mean, I wonder whether I have to worry about kernel parameters, virtual machine settings, or anything else that could ruin everything.

    Another question.

    Also, does Oracle provide certification of the VMs that Oracle Tuxedo would run on top of?

    Todd little-Oracle

    Maurice G-Oracle

    Hi Bruno.

    I'm not sure what you mean by Distinguished Bulletin Board Liaison.  I guess you mean a clustered or MP configuration?  And is it really clustered, or just a single-machine MP configuration?

    Regarding the configuration, are you using Tuxedo services as well, or just Tuxedo Message Queue?  The biggest issue with the OS configuration is the IPC resources.  If you do a tmloadcf -c on your UBBCONFIG file, it will help you determine the minimum required IPC resources.  In general, I suggest configuring far more resources than the minimum, to allow for future changes and because some of the parameters need to be larger for heavier loads.  In particular, the IPC message queue settings are strongly load dependent, so make sure the maximum message size and the queue size are big enough for your expected workload.  You can monitor queue usage with the ipcs command, as in the sketch below.
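
    For example (a sketch; the UBBCONFIG file name is illustrative):

    # compute the minimum IPC resources required by this UBBCONFIG
    tmloadcf -c ubbconfig
    # watch actual message queue usage while under load
    ipcs -q
    # and compare against the current kernel IPC limits
    ipcs -l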

    We don't certify virtual machine environments, but we support them as long as the VM vendor ensures compatibility, which obviously both VMware and Oracle VM do.

    Kind regards

    Todd little

    Chief Architect of Oracle Tuxedo

  • Kernel parameters

    Hello

    I have an R12 instance with all services on a single node, and the RAM size is 3 GB. The kernel.shmall and kernel.shmmax kernel parameters are not set as per note 402310.1; they are higher than the recommended values. For that reason I am getting a lot of memory-related issues, such as being unable to allocate PGA for new connections. I wanted to know whether these settings are the true culprit of the connectivity issues.
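
    This is how I am comparing the running values against the note (a minimal sketch; the parameter names are the ones from note 402310.1, the values are whatever the system currently has):

    # show the current shared memory and semaphore settings
    /sbin/sysctl kernel.shmmax kernel.shmall kernel.shmmni kernel.sem
    # after correcting /etc/sysctl.conf, reload with:
    /sbin/sysctl -p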

    Kind regards
    Satya.

    Hi Satya,

    Good to hear that the problem is resolved.

    You are welcome!

    Kind regards
    Hussein

  • MacBook 4.1 keeps glitching, kernel panics and 3 beeps

    Help me, my MacBook 4.1 running 10.6.8 was bought used off eBay... I've had it for a month and it glitches, kernel panics and gives 3 beeps... I reset the PRAM, ran Repair Disk Permissions and upgraded from 10.6.3 to 10.6.8... It STILL has the problems, and I tried using Linux instead of OS X... it does the same in Linux...

    I think it's the ram... but I'm not allowed to repair/replace it.

    Here's the hardware specs

    Processor name: Intel Core 2 Duo

    Processor speed: 2.4 GHz

    Number of processors: 1

    Total number of cores: 2

    L2 Cache: 3 MB

    Memory: 4 GB

    Bus speed: 800 MHz

    Boot ROM version: MB41.00C1.B00

    SMC version (system): 1.31f1

    Serial number (System): W8 * 0P1

    Hardware UUID: F5F35C41-4BB4-52B0-A068-0765985002FD

    Sudden Motion Sensor:

    Status: enabled

    Three beeps indicates RAM that has failed, is loose, or is incompatible. Try removing and reseating it first.

  • Getting a string from a DLL (memory allocated by the DLL)

    Hi, I'm aware there are a lot of discussions around this topic, but there are a lot of variations, I've never used LabVIEW before, and I seem to be having a hard time at a very basic level, so I hope someone can help me with the simple, specific test case below to put me on the right track before I pull out my remaining hair.

    I've created a DLL with a single function, "GenerateGreeting". When it is called, it allocates enough memory for the string "Hello World!\0" at the pGreeting pointer, copies that string to the pointer, and sets the GreetingLength parameter to the number of bytes allocated (in the DLL I ultimately want to use, there is a DLL function to free memory allocated this way).

    I created a header file to go with the DLL containing the following line.

    extern __declspec(dllimport) int __stdcall GenerateGreeting(char* &pGreeting, int &GreetingLength);
    

    I then imported the file into LabVIEW using the Import Shared Library wizard. That created a "Generate Greeting.vi", and everything looks more or less sensible to me (although that doesn't mean much right now). When I run the VI, 'GreetingLength out' correctly displays '13', the length of the string, but 'pGreeting out' shows only three or four junk characters (which vary on each run) instead of the expected string.

    The pGreeting parameter is set to the 'String' type with the 'C String Pointer' string format, and the Minimum Size is currently 4095. I think the problem is that the DLL wants to allocate the memory for pGreeting; the caller is supposed to pass an unallocated pointer and let the DLL allocate the right amount of memory for the string, but LabVIEW expects the DLL to write into its pre-allocated buffer. How do I handle this in LabVIEW? Most of the functions in the DLL I ultimately want to use work this way, so I hope it's possible. Or do I have to rewrite all my DLL functions to use caller-allocated buffers?

    The VI, header and DLL are attached; tips appreciated. Edit - I cannot attach the DLL or the headers.

    tony_si wrote:

    extern __declspec(dllimport) int __stdcall GenerateGreeting(char* &pGreeting, int &GreetingLength);
    

    char* &pGreeting is actually a C++ thing (no C compiler I know of would accept it), and it basically means that the char pointer is passed by reference. So, technically, it's a doubly referenced pointer; however, nothing in C++ specifies that reference parameters must be implemented as a pointer at the hardware level, so a compiler builder is free to use some other mechanism that the target CPU architecture supports. However, for the C++ compilers I know of, it's really just syntactic sugar and is implemented internally as a pointer.

    LabVIEW has no data type that lets you configure this directly. You will have to configure it as a pointer-sized integer passed by pointer, and then use a MoveBlock() call or the GetValuePtr() support VI to copy the data behind the pointer into a LabVIEW string.

    AND: you need to know how the DLL allocates the pointer so that you can deallocate it correctly after each call to this function. Otherwise you will probably create a memory leak; since you say the first few bytes of the returned buffer change on each run, this function appears to allocate a new buffer on every call, and that buffer needs to be deallocated properly. Unless the DLL uses a Windows API function such as HeapAlloc() for this, it should also export a matching function to deallocate the buffer. Functions like malloc() and free() from the C runtime are not necessarily mapped to the same runtime version in the caller and the callee, so calling free() in the caller on a buffer that was allocated with malloc() in the DLL may not operate on the same memory heap and can result in undefined behavior.

  • apex_css.add_3rd_party_library_file is blowing up Oracle memory usage

    I have found that a call to apex_css.add_3rd_party_library_file, using the example straight from the documentation, brings my Oracle environment to a standstill. I don't know if this problem has already been resolved, and I wasn't game to test it on apex.oracle.com. I tested on an XE installation as well as on the 11g Dev Days VM (although that VM's version of APEX is not the latest) - both with the same result.

    Environment 1:

    Oracle 11g XE

    Apex 4.2.6.00.03

    ADR

    Tomcat 6

    OS: CentOS 6.6

    SQL> select * from v$version;

    BANNER

    --------------------------------------------------------------------------------

    Oracle Database 11 g Express Edition Release 11.2.0.2.0 - 64 bit Production

    PL/SQL Release 11.2.0.2.0 - Production

    CORE 11.2.0.2.0 Production

    TNS for Linux: Version 11.2.0.2.0 - Production

    NLSRTL Version 11.2.0.2.0 - Production

    Environment 2 (11g Dev days VM):

    Oracle 11g

    Apex 4.2.0.00.07

    Oracle Enterprise Linux

    SQL> select * from v$version;

    BANNER

    --------------------------------------------------------------------------------

    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production

    PL/SQL Release 11.2.0.2.0 - Production

    CORE 11.2.0.2.0 Production

    TNS for Linux: Version 11.2.0.2.0 - Production

    NLSRTL Version 11.2.0.2.0 - Production

    To test this, I have an item plugin with the following code:

    function css_item (
        p_item                in apex_plugin.t_page_item,
        p_plugin              in apex_plugin.t_plugin,
        p_value               in varchar2,
        p_is_readonly         in boolean,
        p_is_printer_friendly in boolean )
        return apex_plugin.t_page_item_render_result
    as
        l_Return apex_plugin.t_page_item_render_result;
    begin
        /*apex_css.add_3rd_party_library_file (
            p_library   => apex_css.c_library_jquery_ui,
            p_file_name => 'jquery.ui.accordion' );*/
    
        sys.htp.p('<input type="text" id="' || p_item.name ||'" />');
    
        return l_return;    
    end css_item;

    Note that I have commented out the call to the library add procedure.

    On the OS, I watch the top memory offenders with 'ps':

    When running the page with the 'apex_css.add_3rd_party_library_file' call commented out:

    Every 2.0s: ps -A --sort -rss -o comm,pmem | head -...  Sat Dec  6 14:20:37 2014

    COMMAND %MEM

    Oracle 21.0

    Java 15.2

    Oracle 7.3

    Oracle 7.3

    Oracle 6.5

    Oracle 5.9

    Oracle 4.3

    Oracle 3.2

    Oracle 2.8

    Oracle 2.1

    Then I uncomment that block of code, run the page with the plugin installed, and oracle jumps to 70%+, making the system unusable:

    Every 2.0s: ps -A --sort -rss -o comm,pmem | head -...  Sat Dec  6 14:24:08 2014

    COMMAND %MEM

    Oracle 76.0

    Java 1.3

    Oracle 0.3

    Oracle 0.2

    Oracle 0.2

    Oracle 0.2

    Oracle 0.2

    Oracle 0.1

    Oracle 0.1

    Oracle 0.1

    Cheers,

    Trent

    Post edited by: trent. Also, I should mention that calling the equivalent function in apex_javascript does not produce the same problem.

    Hi guys,

    Thanks for letting us know. Indeed, it is a bug in the APEX_CSS package, which is called instead of the internal version of add_3rd_party_library_file. I have filed bug #20317978 and fixed it for 5.0.

    Regards

    Patrick

    Member of the APEX development team

    My Blog: http://www.inside-oracle-apex.com

    APEX Plug-Ins: http://apex.oracle.com/plugins

    Twitter: http://www.twitter.com/patrickwolf

  • Does SQLFire require a dedicated memory allocation?

    Hello

    I have been trying out SQLFire and trying to set things up. I had a question on how data is managed in memory, with regard to the memory available to SQLFire.

    First of all - does SQLFire have a default amount of memory allocated when we start a server? I didn't notice any settings during server startup saying it could use x amount of memory.

    Also, assume that I have 2 members in my cluster (running with 4 GB of memory available) and I have millions of rows that can't fit in the available memory. Would the data overflow to disk?

    Cheers

    -Imran

    You can explicitly set the initial heap and max heap for the server at startup; see the help for 'sqlf server'.

    By default, rows are kept only in memory.

    If you set the initial heap and max heap and there is no capacity left, you will get an exception. If you don't use these parameters, you run the risk of the server running out of memory.

    To overflow your tables to disk, you must explicitly create the table with the overflow configuration. Keys and indexes are still kept in memory even when you overflow.
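
    A rough sketch of what that looks like at startup (the option names are my recollection of the sqlf docs, so confirm them against the sqlf server help before relying on them):

    # start a server with an explicit initial and maximum heap (values illustrative)
    sqlf server start -dir=server1 -initial-heap=2g -max-heap=2g
    # without these options the server just uses the JVM defaults and can run
    # out of memory under load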
