Doubts about SCAN IPs

Hi all

I'm trying to set up an 11gR2 RAC cluster. I'm looking at a scenario where four databases run on a three-node Oracle 11gR2 RAC.

With VIPs, I know how to set up the tnsnames.ora file for the three databases. How will it work with SCAN IPs?

Do I have to resolve a single name for the three databases and run the three databases on three different ports? We would do the same on 10g RAC too. But in the case of 11g, does the SCAN IP have to be combined with different ports for different databases in the tnsnames entries? Is that right?

I hope I'm being clear with my question. My main question is: will the number of SCAN names increase if I want to install more than one database on a RAC cluster?
Thank you

Hello

No, you will have only one SCAN listener for your cluster, regardless of how many databases run there.

The mode of operation is as follows:
-in the tnsnames.ora on the client, the database alias (in other words: any database alias for DBs on this cluster) points to the SCAN name instead of a specific host, and to the port configured for the SCAN listener (default 1521)
-the DNS server resolves the SCAN name to one of its 3 IP addresses and the client connects to that address
-the SCAN listener redirects the connection request to the local listener on the node
-once the connection has been established, the client talks directly to the local listener on that node and no longer goes through the SCAN listener; the SCAN listener is only involved at connect time

So in fact you have even less overhead, because you no longer really need to address a VIP per database in the client configuration; the SCAN listener with its 3 IPs handles it.
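For illustration, a client-side tnsnames.ora alias using the SCAN could look like this (the SCAN name, service name and port are made-up examples, not values from your cluster):

```
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = mycluster-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = mydb.example.com)
    )
  )
```

Every database on the cluster can use the same HOST and PORT; only the SERVICE_NAME differs, so no per-database ports are needed.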

Does that answer your question?

Best regards
Robert
http://robertvsoracle.blogspot.com

Published by: Robert Hanuschke, August 15, 2011 19:01

Tags: Database

Similar Questions

  • Doubts about licenses

    Hi all

    I have a few doubts about the price of licenses.

    As I understand it, I can deploy an 11g XE server with APEX free of charge, but what happens if I want to install a non-XE edition?

    Imagine a billing application for 10 users, and let's assume that Standard Edition is sufficient. With the help of [this price list | http://www.oracle.com/us/corporate/pricing/technology-price-list-070617.pdf], how much exactly will it cost?

    I understand I can get a license per user or per server, or do I have to license both the users and the server?

    Kind regards.

    Hello
    The license metric is either Named User Plus or Processor (see the core factor table).

    For a quote, you can take a look at the Oracle Store or ask your Oracle reseller for an exact price.

    Regards
    Peter

  • I have a doubt about the file .folio and publications

    Hello, I'm new here.

    I want to start working with DPS, but I have a doubt about which version to buy.

    At the moment I have one customer who just wants to publish a magazine, but my intention is to have more customers and publish more magazines.

    If I buy the Single Edition of DPS, I read that I can publish a single .folio file. What does that mean? Does each folio file represent one publication?

    Please, I need help understanding this before I purchase the software.

    Thank you very much

    Paul

    Here's a quick blog I wrote comparing Single Edition and multi-folio apps:

    http://boblevine.us/Digital-Publishing-Suite-101-single-Edition-vs-multi-Folio-apps/

    Bob

  • Doubts about event handlers

    Hello

    I have some doubts about event handlers in OIM 11.1.1.5...

    (1) I want to use the same event handler for both the postprocess insert and the postprocess update task. Can I use the same handler for this? If yes, how?

    (2) Can I create a single plugin.xml, add all the jar files to a single lib folder and zip them all together? If yes, what changes do I need to make? Do I only need to add a plugin tag for each class in the plugin.xml file, or is something extra needed too?

    (3) If I need to change something in a handler class, do I need to unregister the plugin and register it again?
    If yes, do I also need to delete the event handler using the weblogicDeleteMetadata command?

    (4) We import the event handlers from a path such as eventhandler/db/... If we add all the EventHandlers.xml files to this folder, then weblogicImportMetadata recursively imports all files in this folder. Now, if I need to change anything in one of the event handler classes and we import from the same eventhandler/db folder again, what should we do? Create a copy of the event handlers? Or should I not add the EventHandlers.xml files for the class files I changed?

    (5) Given that I need to generate emails on user creation during recon, and update the email id when the first name or surname is updated: should I use entity-type='User' operation='CREATE' in the EventHandlers.xml, or something else?


    Help me clarify my doubts...

    Yes, in the postprocess update you first need to check whether the first name or last name actually changed before updating the email id, rather than always recomputing it. You can check whether those attributes were updated by comparing them with their previous values in your code.
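Regarding question (2): a single plugin.xml can register several handler classes under the same plugin point, one plugin tag per class. A sketch of what it might look like (the class and plugin names here are hypothetical; verify the exact format against the OIM 11g developer's guide):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<oimplugins>
  <plugins pluginpoint="oracle.iam.platform.kernel.spi.EventHandler">
    <!-- hypothetical handler classes; add one <plugin> element per class -->
    <plugin pluginclass="com.example.oim.UserNamePostProcessHandler"
            version="1.0" name="UserNamePostProcessHandler"/>
    <plugin pluginclass="com.example.oim.UserEmailPostProcessHandler"
            version="1.0" name="UserEmailPostProcessHandler"/>
  </plugins>
</oimplugins>
```

The jar files for all classes can then sit together in the plugin zip's lib folder.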

    -Marie

  • Doubt about appsutil.zip in R12

    Hi all
    I have doubts about applying Rapid Clone on 12.1.3. I have applied the latest patches using adpatch. After that, the appsutil directories
    in the RDBMS Oracle home must be synchronized. I created appsutil.zip on the application tier and copied it to the RDBMS Oracle home. If I move the old appsutil to appsutil.old and extract appsutil.zip, the new appsutil directory will not contain the context file (I think). So I have to run autoconfig based on the old context file. Below I have summarized the steps I follow. Please check and correct me if I'm wrong.

    Copy appsutil.zip from $INST_TOP/admin/out to the RDBMS Oracle home
    CP $CONTEXT_FILE /tmp/mytest_vis.xml
    MV appsutil appsutil.orig
    unzip appsutil.zip
    Run autoconfig based on /tmp/mytest_vis.xml.


    Thank you
    Jay

    Jay,

    Is there a reason for using the old context file? What is the difference between the context file that adbldxml.pl generates and the old context file?

    If there are updates in the application, they will be reflected in the new xml file generated by adbldxml.sh, but not in the old file.

    So it is always best to run adbldxml.sh and then autoconfig.
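The recommended sequence can be sketched as shell commands; the paths and the context name are examples, not your actual values:

```
## Apps tier (as the applmgr user): generate appsutil.zip
perl $AD_TOP/bin/admkappsutil.pl        # writes appsutil.zip to $INST_TOP/admin/out

## DB tier (as the oracle user): replace the old appsutil
cd $ORACLE_HOME
mv appsutil appsutil.orig
unzip -o /path/to/appsutil.zip

## Regenerate the context file instead of reusing the old one
cd $ORACLE_HOME/appsutil/bin
perl adbldxml.pl                        # prompts for SID, host, etc.

## Run autoconfig with the newly generated context file
$ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>/adautocfg.sh
```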

    Amulya

  • Doubts about RAC infrastructure with a disk array

    Hello everyone,

    I am writing because we have a doubt about the correct infrastructure for implementing RAC.

    Please let me first explain the current design we use for Oracle DB storage. Currently we run multiple instances on multiple servers, all connected to a SAN disk array. Since we know the array is a single point of failure, we keep redundant controlfiles, archived logs and redo logs both on the array and on the internal drives of each server; in case the array fails completely, we 'just' need to restore the nightly cold backup and apply the archived and redo logs, and everything is ok. This is possible because these are standalone databases and we can accept that downtime of about 1 hour.

    Now we want to use these servers and this array to implement a RAC solution, and since we know the array is our only single point of failure, we wonder if it is possible to have a multi-node RAC solution (not just a single node) with redundant controlfiles/archived logs/redo logs on internal drives. Is it possible to have each RAC node write a full set of controlfiles/archives/redo logs to its internal drives, and apply these files when the ASM filesystem used for RAC is restored (i.e. with a softlink on an internal drive and using a single node)? Or maybe the recommended solution is to have a second array to avoid this single point of failure?

    Thank you very much!

    CSSL wrote:

    Maybe the recommended solution is to have a second array to avoid this single point of failure?

    Correct. That is the right solution.

    In this case you can also decide to simply stripe across both arrays and mirror array1 onto array2 for the data, using the ASM redundancy options.
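Such a setup can be sketched as an ASM diskgroup with one failure group per array (the disk paths below are invented for illustration):

```sql
-- NORMAL redundancy keeps two copies of each extent,
-- one in each failure group, i.e. one on each array
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP array1 DISK '/dev/mapper/array1_lun1', '/dev/mapper/array1_lun2'
  FAILGROUP array2 DISK '/dev/mapper/array2_lun1', '/dev/mapper/array2_lun2';
```

With this layout, losing a whole array still leaves one complete copy of the data on the surviving array.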

    Keep in mind that redundancy is also needed for connectivity. You need at least 2 switches connecting to the two arrays and two HBA ports on each server, with 2 fibre runs, one to each switch. You will need multipathing driver s/w on the server to deal with the multiple I/O paths to the same LUN.

    Similarly, you will need to repeat this for your interconnect: 2 private switches and 2 NICs on each server, bonded. Then connect these 2 NICs to the 2 switches, one NIC per switch.

    Also, don't forget spares: spare switches (one for storage and one for the interconnect), spare cables, fibre, and everything else used for the interconnect.

    Bottom line: a solution with redundancy is not cheap. What one can do is combine the storage protocol/connection layer with the interconnect layer and run both on the same infrastructure, as in the Oracle Database Machine with Exadata storage servers. There you run the storage protocol and the interconnect protocol (TCP or RDS) on the same 40 Gb Infiniband fabric.

    Thus 2 Infiniband switches are needed for redundancy, plus 1 spare, with each server running a dual-port HCA and one cable to each of the 2 switches.

  • Some doubts about 11g R2 Policy Based/Scan

    Hello

    I went through several documents, including the Oracle documentation, to understand the new Policy-Managed/SCAN features; I also watched a few RACSIG webinars by Markus which were useful, but I am still not able to grasp the concept. I would be grateful if someone could help me understand it.

    The documents say the advantage of creating a (policy-managed) server pool is that it manages the workload automatically. I do not understand how it will manage the workload automatically. I do understand that, according to importance and the min/max settings, servers will be allocated to and deallocated from pools as a result.

    For example: one server pool high_priority with min 2 servers, primarily serving 2 payroll database instances;
    another server pool low_priority with min 2 servers and low priority, serving 2 HR database instances.

    In this case, if the high_priority pool loses a server, it should take one from the low_priority pool. (That's my impression so far after going through the documentation; please correct me if I'm mistaken.)

    1st question: does that mean it will stop an HR instance running in the low_priority pool, allocate that server to high_priority,
    and as a result a payroll instance comes up?

    2nd question: can I still achieve TAF functionality using server pools? I read that I can have a UNIFORM service that runs on all instances
    (I guess that is like 'preferred'), or a service that runs on a single server (SINGLETON).


    3rd question: with the SCAN feature, we set remote_listener to the SCAN, and clients connect first to the SCAN and are then redirected to a local listener.
    What is the advantage of having a VIP for each node? The SCAN listener knows which instances are up and which is least loaded, and can hand connections directly to the
    local listener. (Couldn't we start using the public IP for the local listener?)

    My doubt is: in 10g we used VIPs only to get fast notification and avoid TCP timeouts, and then we got redirected to some other listener/node to connect;
    now the SCAN provides the same features...

    Thank you in advance, and thanks for your time.

    Kind regards
    Lyxx

    Hello

    In this case, if the high_priority pool loses a server, it should take one from the low_priority pool. (That's my impression so far after going through the documentation; please correct me if I'm mistaken.)

    Yes, that is the correct assumption.

    1.) Yes. The HR database instance will be stopped and a third payroll instance will start.
    2.) TAF will work for UNIFORM and SINGLETON services. However, if you only have 1 server in the server pool (or have a singleton service), it makes no sense to work with PRECONNECT.
    Note: TAF timing varies between uniform and singleton services; if the service has to fail over, it will need time for an instance to start. Configure TAF for such services with higher retry counts.
    3.) The SCAN will redirect the client to the VIP for the final connect to the local listener. If it used the public IP address and the server failed at exactly that moment, the client would wait for the TCP/IP timeout (which may take some time).
    With the VIP, the client is notified immediately that its connect was unsuccessful, and it retries the SCAN to get a new connection.

    While the SCAN does the same redirection job as the VIP, it has 2 advantages:
    a.) no matter how big your cluster is, you always need just 3 SCANs.
    b.) and they stay the same no matter how the cluster changes. (The VIPs do change.)
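For reference, the pools from the example above could be sketched with srvctl like this (pool names are from the example; the max and importance values are arbitrary, so check `srvctl add srvpool -h` on your version):

```
srvctl add srvpool -g high_priority -l 2 -u 4 -i 10   # min 2, max 4, importance 10
srvctl add srvpool -g low_priority  -l 2 -u 4 -i 5    # lower importance
srvctl add database -d payroll -o $ORACLE_HOME -g high_priority
```

With importance 10 vs 5, Clusterware takes a server from low_priority to keep high_priority at its minimum of 2.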

    Regards
    Sebastian

  • doubt about the Index Skip Scan

    Hi all

    I am reading the Oracle Performance Tuning Guide (version 11.2, chapter 11). I just want to see an index skip scan with an example. I created a table called t and inserted test data. When I queried the table, the optimizer did not use the index skip scan path.

    Can you please let me know what mistake I am making here?

    Thanks a lot for your help in advance.

    SQL> create table t (empno number
           , ename varchar2(2000)
           , gender varchar2(1)
           , email_id varchar2(2000));

    Table created.

    SQL> -- test data
    SQL> insert into t
         select level, 'suri'||level, 'M', 'suri.king'||level||'@gmail.com'
         from dual
         connect by level <= 20000
         /

    20000 rows inserted.

    SQL> insert into t
         select level+20000, 'surya'||(level+20000), 'F', 'surya.princess'||(level+20000)||'@gmail.com'
         from dual
         connect by level <= 20000
         /

    20000 rows inserted.

    SQL> create index t_gender_email_idx on t (gender, email_id);

    Index created.

    SQL> explain plan for
         select *
         from t
         where email_id = '[email protected]';

    Explained.

    SQL> select *
         from table(dbms_xplan.display);

    PLAN_TABLE_OUTPUT
    ----------------------------------------------------------------------------------------------------------------
    Plan hash value: 1601196873

    --------------------------------------------------------------------------
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------
    |   0 | SELECT STATEMENT  |      |     4 |  8076 |   103   (1)| 00:00:02 |
    |*  1 |  TABLE ACCESS FULL| T    |     4 |  8076 |   103   (1)| 00:00:02 |
    --------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       1 - filter("EMAIL_ID"='[email protected]')

    Note
    -----
       - dynamic sampling used for this statement (level=2)

    17 rows selected.

    Cheers,

    Suri

    You have just demonstrated how your execution plan gets screwed up if you do not gather your statistics:

    SQL> create table t
         (
           empno number
         , ename varchar2(2000)
         , gender varchar2(1)
         , email_id varchar2(2000)
         );

    Table created.

    SQL> insert into t
         select level, 'suri'||level, 'M', 'suri.king'||level||'@gmail.com'
         from dual
         connect by level <= 20000
         /

    20000 rows created.

    SQL> insert into t
         select level+20000, 'surya'||(level+20000), 'F', 'surya.princess'||(level+20000)||'@gmail.com'
         from dual
         connect by level <= 20000
         /

    20000 rows created.

    SQL> create index t_gender_email_idx on t (gender, email_id);

    Index created.

    SQL> set autotrace traceonly explain
    SQL>
    SQL> select *
         from t
         where email_id = '[email protected]';

    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 2153619298

    --------------------------------------------------------------------------
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------
    |   0 | SELECT STATEMENT  |      |     3 |  6057 |    79   (4)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| T    |     3 |  6057 |    79   (4)| 00:00:01 |
    --------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       1 - filter("EMAIL_ID"='[email protected]')

    Note
    -----
       - dynamic sampling used for this statement

    SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade => true)

    PL/SQL procedure successfully completed.

    SQL> select *
         from t
         where email_id = '[email protected]';

    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 2655860347

    --------------------------------------------------------------------------------------------------
    | Id  | Operation                   | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT            |                    |     1 |    44 |     1   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| T                  |     1 |    44 |     1   (0)| 00:00:01 |
    |*  2 |   INDEX SKIP SCAN           | T_GENDER_EMAIL_IDX |     1 |       |     1   (0)| 00:00:01 |
    --------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       2 - access("EMAIL_ID"='[email protected]')
           filter("EMAIL_ID"='[email protected]')

    SQL>

  • Some doubts about Oracle DBA...

    Respected, professionals

    I am new to Oracle RAC DBA work. I know just the basics of RAC and have some doubts associated with Oracle RAC databases, plus a few other basic questions.

    (1) A client connects to one node of a three-node RAC database through the SCAN listener. The node it is connected to dies. My question is: does its connection automatically move to the next available node

    without using TAF? Or does it need to connect again? Do I have to configure TAF as in 10g RAC? Or is configuring TAF on top of SCAN completely misconceived?

    (2) A database connection request will go to one of the nodes of the three-node RAC system (load balancing). How can I connect remotely through PuTTY to a particular node?

    Do I need to enter a particular node (virtual hostname or IP) as a TNS entry in the tnsnames.ora on my Oracle client system?

    (3) We have a SCAN listener with three IP addresses for a 3-node RAC. If we add a new node, do I need to create another SCAN listener with a single IP address? Or do I need to add another IP to the existing SCAN listener? What should I do to connect the new node with the existing SCAN listener?

    My sincere apologies if the three questions above are completely misleading or wrong.

    (4) 11g RMAN incremental backups are faster compared with 10g RMAN. Why?

    (5) How do we connect using sqlplus (no high-level architectural decoration)? I mean, what happens when we connect to the database using SQL*Plus (sqlplus scott/tiger)?

    My sincere apologies for any inconvenience caused... I know this isn't a chat box... Please forgive me if my way of questioning is insulting or rude.

    Thank you

    Hello

    (1) A client connects to one node of a three-node RAC database through the SCAN listener. The node it is connected to dies. My question is: does its connection automatically move to the next available node

    without using TAF? Or does it need to connect again? Do I have to configure TAF as in 10g RAC? Or is configuring TAF on top of SCAN completely misconceived?

    If TAF is not configured, you will have to reconnect.

    TAF can be configured on the client side (tnsnames.ora) or on the server side (srvctl).
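A client-side sketch, with TAF added to the tnsnames.ora entry (the SCAN name, service name, retry and delay values are examples only):

```
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = mycluster-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = mydb)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5))
    )
  )
```

On the server side, the equivalent would be a service definition along the lines of `srvctl add service -d mydb -s mysrv -e SELECT -m BASIC -w 5 -z 30` (per the 11.2 srvctl reference: -e failover type, -m method, -w delay, -z retries).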

    (2) A database connection request will go to one of the nodes of the three-node RAC system (load balancing). How can I connect remotely through PuTTY to a particular node?

    Do I need to enter a particular node (virtual hostname or IP) as a TNS entry in the tnsnames.ora on my Oracle client system?

    You can connect to any of the nodes via PuTTY using the node's public IP.

    To connect to a particular instance through sqlplus, you must specify the node VIP in tnsnames.ora.

    (3) We have a SCAN listener with three IP addresses for a 3-node RAC. If we add a new node, do I need to create another SCAN listener with a single IP address? Or do I need to add another IP to the existing SCAN listener? What should I do to connect the new node with the existing SCAN listener?

    You don't need to create another SCAN listener.

    For your new node, REMOTE_LISTENER must point to the SCAN listener and LOCAL_LISTENER must be set to the node VIP.

    Ivica

    Post edited by: Ivica Arsov

  • Some doubts about the navigation in unifying

    Hi all

    I have a few questions about Unifier navigation.

    Is it possible to move a feature from admin-mode to user-mode access?

    I mean, if a particular feature such as the Shell Manager can only be accessed from admin mode, is it possible to provide access to it in user mode as well?

    If so, how?

    My 2nd question is: currently we can access company-level BPs like the 'Company Journal' or 'Resource Manager' under the 'Company Workspace' shell.

    Is it possible to move the 'Company Journal' or 'Resource Manager' into a folder? If yes, how?

    I tried, in 'user mode navigation', to move the company-level BPs to the home shell, but I can't do it.

    To answer your questions:

    (1) The user-mode navigator can only include user features. You cannot change the admin-mode view or move admin functions to user mode.

    (2) You cannot move these to the Home tab.

  • Doubt about the Index

    Hi all

    Oracle Database 11 g Enterprise Edition Release 11.2.0.2.0 - 64 bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    "CORE 11.2.0.2.0 Production."
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production

    I have a question about indexes. Is a "WHERE" clause required for an index to be used? I tried to find out myself but could not.
    In this example I haven't used a WHERE clause, only GROUP BY, but it does a full scan. Is it possible to get a range scan or something else using GROUP BY?
    SELECT tag_id FROM taggen.tag_master GROUP by tag_id 
    
    Explain Plan:
    Plan hash value: 1688408656
     
    ---------------------------------------------------------------------------------------
    | Id  | Operation             | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT      |               |  4045 | 20225 |     6  (17)| 00:00:01 |
    |   1 |  HASH GROUP BY        |               |  4045 | 20225 |     6  (17)| 00:00:01 |
    |   2 |   INDEX FAST FULL SCAN| TAG_MASTER_PK |  4045 | 20225 |     5   (0)| 00:00:01 |
    ---------------------------------------------------------------------------------------

    Hello

    SamFisher wrote:
    Since it does a full scan: is it possible to avoid the full scan without using a WHERE clause?
    I guess with a HAVING or similar clause, but I don't quite know.

    Why?
    If this query is producing the right results, then you need a full scan.
    If you somehow fool the optimizer into doing a range scan, it will be slower.

  • Question about full scan of the Table and Tablespaces

    Good evening (or morning).

    I am reading Oracle Concepts (I am new to Oracle) and it seems that, based on the way Oracle allocates and manages storage, the following is true:

    Principle: a table which is often accessed by full table scans (for whatever reason) would be better placed in its own dedicated tablespace.

    The main reason I came to this conclusion is that when performing a full table scan, Oracle does multiblock IO, probably reading one extent at a time. If the tablespace's datafile(s) contain data for a single table only, then a serial read will not have to skip over extents containing data from other tables (as would be the case if the tablespace were shared with other tables). The performance improvement is probably small, but it seems there is one all the same.

    I would appreciate the thoughts of experienced DBAs on the assumption above.

    Thank you for your contribution,

    John.

    A serial read does not "skip over" the segments that contain data from other tables.

    You are referring to a misleading picture of the operation.

    Let's say table 'A' is one among a number of tables in a tablespace that consists of one or more datafiles.
    Since it is one of many tables, it may not start at the beginning of a datafile. Its first extent may be somewhere in the middle of the file. The next extent may not be adjacent to the first extent. The 3rd extent may be in a different datafile.
    So that's the picture in which you see Oracle having to skip over the other tables when you do a FullTableScan.

    Now, view the same picture:
    Extent 1 is blocks 1000 to 1127 (i.e. 128 blocks) in datafile 6
    Extent 2 is blocks 2400 to 2527 in datafile 6
    Extent 3 is blocks 9245 to 9371 in datafile 12

    How does Oracle do a FullTableScan? It starts with the segment header to get a map of the extents (in a dictionary-managed tablespace it would read UET$). It now knows which extents it has to read.
    What does it do next?

    It makes a call to the operating system to read Oracle blocks 1000 to 1127 in datafile 6.
    The operating system translates this call to a file system file ID and block ranges.
    The operating system puts this request to the storage subsystem.
    The storage subsystem then translates the OS file ID and block range to disk, track and sector.
    Assume these are all adjacent on a single disk (i.e. the volume is not striped over multiple LUNs and disks).
    The storage reads the data and sends it to the operating system. This can still mean multiple reads, because the head (or several heads, since the disk is made up of several platters) does not read 1MB in one "call".
    The operating system then collects the 1MB and passes it to Oracle.
    Oracle then fills buffers in its buffer cache (tip: did you know these will not be adjacent memory locations?).
    Microseconds, even milliseconds, pass.
    The disk is still spinning (it doesn't stop). It has "moved on": a different set of blocks is now under the read head.

    Oracle now asks the OS for blocks 2400 to 2527 in datafile 6.
    Oracle did not have to "skip over the other tables". It just made a separate call to the OS with a new range of blocks.
    The operating system translates the block ID and file number with its own mapping.
    The storage then does its translation.
    The blocks now under the disk head are very likely (almost certainly) a different set of blocks; not even the ones corresponding to Oracle block 1128 of datafile 6, and not even the ones corresponding to Oracle block 2400 in the datafile.
    We now suffer a seek, and read the disk.

    and the story continues...

    And what happens if there is another user on the same system? Between the first call for blocks 1000-1127 and the second call for blocks 2400-2527, that other user requested block 2000 (i.e. made a single-block read call).
    And after the second extent is read by the FullTableScan user, a third user requested block 2048 in a single-block read call.

    Does it matter whether the table is in adjacent extents? Even if you did put the table in "adjacent extents", they are not likely to be contiguous on disk. Even if they are contiguous on disk, they are not likely to be readable without delay, because the disk has moved on.
    And we have not even talked about striping, where reads from multiple disks need to be coordinated.

    So why do we still prefer 1MB multiblock reads in Oracle? Because most OSes allow a single OS call of up to 1MB, so the overhead of Oracle making a read call for 1MB is incurred only once. If the operating system allowed a maximum of 256KB multiblock reads, even a read of a 1MB extent would have to be carried out as 4 separate calls to the OS (and, recursively, 4 separate calls from the OS to the storage subsystem). We try to reduce the number of software calls we make. We cannot reduce the seek and read time (but striping allows us to achieve more "parallelism" in reads).
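On a live system you can see this behaviour reflected in the multiblock read setting and the associated wait events; a quick sqlplus check using standard Oracle views and parameters:

```sql
-- maximum number of blocks Oracle requests in one multiblock read call
SHOW PARAMETER db_file_multiblock_read_count

-- full table scans show up as 'db file scattered read' waits,
-- single-block reads as 'db file sequential read'
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event IN ('db file scattered read', 'db file sequential read');
```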

    Hemant K Collette
    http://hemantoracledba.blogspot.com

  • Doubts about the speed

    Hello gentlemen;

    I have a few questions I would like to ask the more experienced people here. I have a program, written in LabVIEW, running on a computer with an i7 processor. Meanwhile, in another lab, we have another, somewhat older PC, a 2.3 GHz dual core, on which we run a test platform for a couple of modems; let's not get into the details.

    My problem is that I recently discovered that the program I wrote on the i7 computer runs much slower on the other machine, the dual core, so the timings are all wrong and the program does not run correctly. For example, there is an array with 166 values which on the i7 machine is filled quickly, with almost no delay; on the dual-core machine, however, it takes a few milliseconds to fill about 20 values in the array, and because of the timing it cannot fill more values, so the waveform I use is all wrong. This of course affects the whole program, and I can't use it for the test I need to integrate.

    I created a .exe of the program in LabVIEW and tried it on the different PCs; that's how I got to this question.

    Now, I want to know whether the difference in the computers' characteristics can really be such a big problem that the program is slow on one machine. I know that, to make the program efficient, I should use state machines, sub-VIs, the producer-consumer pattern and other things. However, I suspect this is not a speed problem generated by the program: if it were, the array would eventually fill completely, yet on the slow computer it never fills more than 20 values.

    Something else: does it help to hide unnecessary variables on the front panel? For the time being I keep track of lots of variables in the program, so when I create the .exe I still see them running to keep that follow-up. In the final version I won't need them, so I'll delete some and hide some from the front panel. Does that reduce the load?

    I would like to read your comments on this topic: if you have any ideas about state machines, sub-VIs, etc., or if there is a way to force the computer to use more resources for the LabVIEW program, etc.
    I'm not attaching any VI because, in its current state, I know you will say: state machines, sub-VIs and so on, and I think the main problem is the difference between the computers; I'm still working on the state machine/sub-VI things.

    Thank you once again.

    Kind regards

    IRAN.

    To start with, by using something suitable like a state machine for flow control, you can ensure that your large array is always filled completely before moving on, regardless of how long it takes. And believe it or not, adding a delay to your loops will make the whole program run faster and smoother, because while loops are greedy and can consume 100% of CPU time just looping while waiting for a button press, while all other processes fight for CPU time.

  • Doubts about ButtonPress

    When I press the menu button on a BlackBerry phone, I should get the menu options. I want to know how to achieve this: it has the default Close menu item, and I want to know how to add my own menu items, and what the key character is.

    Here's the image

    Regards

    Rakesh Shankar.P

    You can override makeMenu for the screen, or makeContextMenu for a particular field. You can read about menus in the development guide.

    When the user presses the menu button, a Keypad.KEY_MENU key event is sent to any KeyListener registered with the current screen. But in general, you're better off overriding the appropriate menu functions instead of listening for the key.

  • Doubt about the persistent object

    Hi friends,

    I stored data in a persistent object. After some time, my simulator was taking a long time to load the application, so I ran clean.bat to make the simulator fast again. But after I ran clean.bat, the values I had stored in the persistent object were gone. Can someone tell me whether the persistent object data was lost because of running clean.bat, or for some other reason? Please clarify my doubt, friends...

    Kind regards

    s.Kumaran.

    It is because of clean.bat. Clean.bat removes all applications, unnecessary files, etc...

Maybe you are looking for

  • Tecra M5: Recovery CD was not provided with the device

    I bought brand new Tecra M5. My tecra M5 is to have Windows XP Professional. After such an enormous investment, I wonder why Toshiba have no cd of Windows XP Professiona included with this system? If some how we need to reinstall Windows XP Professio

  • Constipated because of the HUGE email

    Stupidly, I tried to send an email with the way too many large pictures in the annex. He would not go. He keeps trying to go, and my poor phone just beeps at me all a few minutes to tell me that it has not been sent, but it won't let me open the e-ma

  • HP Envy M6-W105dx: Ram memory upgrade?

    Hello... I'm a big fan of commodity computers HP laptops and desktops... I work with a lot of graphics desinges and this laptop met my needs for the work I do... I just bought a HP Envy M6-w105dx computer windows laptop 10-64 bit with 8 GB of RAM...

  • Update "KB2416447" will download, but not install. Error code is 0x66A. How to fix this?

    Download has been a success, install gets 1/2 way & fails, with error code 0x66A. I can't find this error code in the lists.  Finally, subsequent updates were installed successfully.

  • IPSEc between PIX devices

    Hi guys I'm trying to create an IPSEC tunnel between a 515 and a 506. Of course, it does not work, otherwise I would not here :) The 515 has these entries to the tunnel: melbMap 22 ipsec-isakmp crypto map correspondence address card crypto 22 22 melb