Question about clustering configuration in LCDS 3.1

In the LCDS 3.1 documentation, it is said: "Using the shared-backend = "false" configuration attribute, you do not have a single, consistent view of the data in the cluster. When two clients are connected to different servers in the cluster, if they make conflicting updates to the same data value, each server applies these changes in a different order. The result is that clients connected to one server see one view of the data, while clients connected to other servers have a different view. The conflict detection mechanism does not detect these changes as conflicts. If you need a single view of the data when clients are updating the same data values at roughly the same time, use a database or some other mechanism to ensure that the proper locking is done when updating the data in the back-end system."

Does this mean that using shared-backend = "true" guarantees a single, consistent view of the data in the cluster? Or does it mean that I must use database-level locking in both cases?

Here is some info on this from Mete Atamel, an engineer on the LiveCycle Data Services team:

When shared-backend = true, a single consistent view of the data is guaranteed, but not because of shared-backend itself; rather, it is because once shared-backend = true, there is a single database used by the multiple LCDS instances.

Imagine a scenario where you have two LCDS nodes in a cluster, with two Data Management destinations, one on node1 and one on node2, and they are clustered (via JGroups). You could have these destinations each maintain their own DB, and when a change occurs in one node's destination, the other node's destination gets a JGroups message and updates its own DB. This works, but it is not ideal, because you are effectively trying to maintain the cluster state in two different DBs, and there are times when connected clients do not get a consistent view of the data.

Instead, you can use a single database and have the two destinations talk to the same DB, so that you have a single, consistent view of the data. When you do this, you enable the shared-backend property on the LCDS destinations; that way, when one destination updates the DB and sends a JGroups message to the other destination, the other destination does not have to update the DB again. Instead, it simply passes the update message on to its connected clients without sending it to the adapter.
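
For reference, here is a minimal sketch of how this is typically wired up in data-management-config.xml, based on my reading of the LCDS docs (the destination id, assembler class, and cluster name are placeholders, and element placement can differ slightly between LCDS versions):

<destination id="myDataDestination">
    <adapter ref="java-dao"/>
    <properties>
        <source>my.app.MyAssembler</source> <!-- hypothetical assembler class -->
        <network>
            <!-- "default-cluster" must match a cluster id defined in services-config.xml -->
            <cluster ref="default-cluster" shared-backend="true"/>
        </network>
    </properties>
</destination>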

Tags: Adobe LiveCycle

Similar Questions

  • A question about clusters

    Hello

    I have a question about single-table hash clusters. How are these tables able to provide better performance? Any block can contain data from only a single table, and a single-table cluster will also create blocks with data from one table. So how or why is it more effective? I am unclear on the concept and would be grateful for the info.

    Regards

    Published by: orausern on 22 Sep, 2010 07:57

    orausern wrote:
    I did that - but the concept is what I'm looking for. The test case proves that there are benefits, but my question is why the benefit arises. That's what I'm not very clear on.

    The benefit is provided when:
    (1) you exercise some control over the way in which the data is filled into the table
    (2) most of your queries are of the form 'SELECT col1, col2, ... WHERE col = <value>'
    (3) the number of unique values of 'col' is fairly constant
    The common approach in the situation described above is to create an index on 'col' so that the queries perform better.
    But even with the index in place, Oracle will need to access blocks from two segments, index and table, in order to process queries (of the type described in (2) above).
    When you create the table in a SINGLE TABLE hash cluster, you get the grouping of the data in the table itself, without the need for an index. Oracle has to access blocks from only one segment (the cluster) to process queries (of the type described in (2) above).
    The result is a much lower number of consistent gets for queries of the type described in (2) above, resulting in better use of the buffer cache, which improves performance.
    I hope this helps.
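
    A minimal sketch of what this looks like in SQL, with made-up table and column names (SIZE and HASHKEYS would need tuning to the real data):

    -- Hash cluster dedicated to one table, hashed on the lookup column.
    -- HASHKEYS should roughly match the number of distinct values of 'col'.
    CREATE CLUSTER t_cluster (col NUMBER)
        SIZE 2048 SINGLE TABLE HASHKEYS 100;

    -- Rows with the same 'col' value hash to the same block(s),
    -- so no separate index segment is needed.
    CREATE TABLE t (
        col  NUMBER,
        col1 VARCHAR2(30),
        col2 VARCHAR2(30)
    ) CLUSTER t_cluster (col);

    -- Queries of the type in (2) now touch only cluster blocks:
    SELECT col1, col2 FROM t WHERE col = 42;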

  • Questions about cache configuration for use with partitioned off-heap...

    Once more, I am giving it a try to see if we can make use of the new partitioned (split) off-heap storage, and I am having problems with the cache configuration (configuration file included below).

    The problem I am having is that it seems <high-units> should be specified for the entire cluster (or perhaps per node? not sure yet!) while <initial-size> and <maximum-size> are specified per partition. Is this correct? Is that the way it was intended (to me it would have seemed more logical to also specify <high-units> per partition, since I assume overflow checking and eviction is done per partition)? The way I read the documentation, it seems that all three should be per partition if <partitioned>true</partitioned> is specified.
    If I set <high-units> to 1 MB (as I believe I should if it were per partition), I get the problem that I posted in a previous question (a log message about some missing index data, then the cluster nodes crashing with an out-of-memory error).

    / Magnus
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    
    <cache-config>
        <caching-scheme-mapping>
            <cache-mapping>
                <cache-name>ObjCache</cache-name>
                <scheme-name>off-heap-near</scheme-name>
                <init-params>
                    <init-param>
                        <param-name>front-size</param-name>
                        <param-value>200000</param-value>
                    </init-param>
                </init-params>
            </cache-mapping>
        </caching-scheme-mapping>
    
        <caching-schemes>
            <near-scheme>
                <scheme-name>off-heap-near</scheme-name>
                <front-scheme>
                    <local-scheme>
                        <high-units>{front-size}</high-units>
                    </local-scheme>
                </front-scheme>
                <back-scheme>
                    <distributed-scheme>
                        <service-name>PartitionedOffHeap</service-name>
                        <backup-count>1</backup-count>
                        <thread-count>4</thread-count>
                        <partition-count>127</partition-count>
                        <backing-map-scheme>
                            <partitioned>true</partitioned>
                            <external-scheme>
                                <nio-memory-manager>
                                    <initial-size>1m</initial-size> <!-- PER PARTITION?! -->
                                    <maximum-size>1m</maximum-size> <!-- PER PARTITION?! -->
                                </nio-memory-manager>
                                <unit-calculator>BINARY</unit-calculator>
                                <high-units>127m</high-units> <!-- PER PARTITION/NODE/CLUSTER?????? -->
                            </external-scheme>
                        </backing-map-scheme>
                        <backup-storage>
                            <!-- PARTITIONED BY DEFAULT?! -->
                            <type>off-heap</type>
                            <initial-size>1m</initial-size> <!-- PER PARTITION?! -->
                            <maximum-size>1m</maximum-size> <!-- PER PARTITION?! -->
                        </backup-storage>
                        <autostart>true</autostart>
                    </distributed-scheme>
                </back-scheme>
                <autostart>true</autostart>
            </near-scheme>
        </caching-schemes>
    </cache-config>

    Sorry, my description was very confusing. High-units is per cache. What I was trying to say is that additional cache mappings can bring additional high-units into play, affecting the memory required by the node. Since multiple caches can map to the same scheme, especially if you use wildcards in the mapping, you must consider the total of high-units times the number of caches. That is true whether or not the caches use different services.

    You are also right about high-units applying to the partitioned backing maps. You could easily have evictions happening, as you describe. We must take another look at this part of the configuration, because it is too easy to make a mistake.

    As expected, the allocation of the partitioned backing map is lazy, to avoid the problem you described. The worst-case situation I was trying to explain can occur if you load enough data to cause all the buffers to be allocated before the other nodes could take ownership of some of the partitions.
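
    As a rough worked example (my own arithmetic, under the assumption suggested by the comments in the config above that <maximum-size> is per partition): with <partition-count>127</partition-count> and a 1m maximum per partition, a single cache could allocate up to 127 x 1 MB, i.e. about 127 MB of off-heap buffers on a node that temporarily owns all the partitions (for example, before the other nodes have joined), which is presumably why <high-units> is set to 127m. A second cache mapped to the same scheme would double that total.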

    Kind regards

    David

  • HP Officejet Pro 8500 A909a - questions about network configuration - USB3 port problem

    Fujitsu Windows 7 laptop (all USB2)

    Used the full software package and installed an HP Officejet Pro 8500 A909a Series as a network printer.

    The printer installed fine, with no USB connection.

    All the software and TWAIN scanning installed, so I was able to scan, copy, etc.

    I could browse and choose my PC from the front panel of the printer.

    However, on Tuesday I confirmed to a friend that this printer would work on a new laptop - so they bought the laptop:

    a Fujitsu - but a more recent model with USB3 ports.

    I then installed the same FULL driver package (V14),

    selected it as a network printer, and then, during the installation, I get a hardware error about USB and USB drivers.

    All the USB ports work with everything else I've used - BUT the printer is networked and no USB is used.

    Tried re-downloading the driver and installing it - exactly the same issue.

    SO I configured it using the Devices and Printers 'Add printer' (network) option and it installed.

    I can print OK,

    BUT there are no scanning facilities,

    and from the printer's front panel I cannot see the new PC to scan to.

    The only real difference is that this PC has USB3, while mine has only USB2.

    I DO not know why a network installation involves USB at all, even when searching.

    Is the problem that the driver package has not been updated for USB3 - is there a patch?

    Thanks for any help / advice

    Hi etaf,

    I guess the short answer is: yes, the problem is with the software and the 3.0 port.

    The long answer, with a possible workaround, is as follows: first of all, the software checks for minimum requirements. Because the software is older, it doesn't know what a 3.0 port is. You are right when you say that it should work since you are not using USB and are using wireless instead, but that is not the case.

    There is something we can try, but I can't guarantee it will work. My first thought was that we could connect a USB cable and run the software to force it to see the USB port, but the software wouldn't even get that far, because you can't plug in the cable until later in the installation.

    My second thought, and the most promising suggestion, is to connect the USB cable (although we normally do not do this until the software prompts us to) and use the HP Printer Install Wizard instead of the software from the CD or from the printer's driver download page. Again, I can't say with certainty that it will work, but I don't see why it would not be worth a try.

    If it works and we get the software installed, you can convert the USB connection to a wireless connection using the software. Fingers crossed! Download and run it from the following link: HP Printer Install Wizard for Windows.

    If this does not work, or you don't want to bother with it (though it is worth a try), I might consider getting a newer model of the printer. You can even call HP; they might be able to offer a discount on a newer model. If you are in Canada or the U.S., dial 800-474-6836, or you can Contact HP Worldwide.

  • Small Question about Advanced Configuration settings

    When you change an advanced setting of a VM, why is the new object created as

    New-Object VMware.Vim.VirtualMachineConfigSpec

    When we look at $vm.ExtensionData.Config, that object looks to me like a VMware.Vim.VirtualMachineConfigInfo, not a Spec.

    So my question... to rephrase and perhaps provide some clarification... why a Spec and not a ConfigInfo?

    Objects of the Info type are what you get when you retrieve a vSphere object; objects of the Spec type are used when you configure a vSphere object.

    They are the Get-ter and the Set-ter from the PowerCLI cmdlets' point of view.
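
    As a sketch of that Get-ter/Set-ter split in PowerCLI (the VM name and setting key below are made up for illustration): reading goes through the Info object, while changing goes through a Spec.

    # Read side: the retrieved object exposes VirtualMachineConfigInfo.
    $vm = Get-VM -Name "MyVM"                  # hypothetical VM name
    $vm.ExtensionData.Config.ExtraConfig       # current advanced settings (Info)

    # Write side: describe the desired change in a Spec and apply it.
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $opt  = New-Object VMware.Vim.OptionValue
    $opt.Key   = "my.advanced.setting"         # hypothetical key
    $opt.Value = "true"
    $spec.ExtraConfig += $opt
    $vm.ExtensionData.ReconfigVM($spec)        # pushes the Spec to vSphere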

  • Questions about replication configuration

    Hello

    We currently have the configuration of the following replication to a backup Timesten hot facility:

    CREATE REPLICATION REP1
    ELEMENT a DATASTORE
      MASTER ds ON "host1"
      SUBSCRIBER ds ON "host2"
      RETURN TWOSAFE
    ELEMENT b DATASTORE
      MASTER ds ON "host2"
      SUBSCRIBER ds ON "host1"
      RETURN TWOSAFE
    STORE ftappttprd ON "host1"
      LOCAL COMMIT ACTION COMMIT
      RETURN SERVICES OFF WHEN REPLICATION STOPPED
      DURABLE COMMIT ON
      RESUME RETURN 300
    STORE ftappttprd ON "host2"
      LOCAL COMMIT ACTION COMMIT
      RETURN SERVICES OFF WHEN REPLICATION STOPPED
      DURABLE COMMIT ON
      RESUME RETURN 300;

    1)
    (a) Does the 'LOCAL COMMIT ACTION COMMIT' clause imply that whenever replication to the subscriber fails, the update will be committed locally on the master? That is, it effectively becomes a durable commit.
    (b) Does this only apply to the replication timeout scenario? What about other cases where the commit at the subscriber cannot be done?
    (2) If we do not have the DISABLE RETURN clause as above, would 'DURABLE COMMIT ON' and 'RESUME RETURN' still apply?
    In other words, if we stop the replication agent after failing over to the standby:
    - the return service is off
    - txns on the master will be durably committed
    - the return service will resume automatically when the replication agent is started again (after the replication problem is fixed)

    Thank you

    Mike

    Here are the answers:

    1)
    (a) Does the 'LOCAL COMMIT ACTION COMMIT' clause imply that whenever replication to the subscriber fails, the update will be committed locally on the master? That is, it effectively becomes a durable commit.

    CJ > If there is a return service timeout for a TWOSAFE transaction, the default behavior is that the transaction state remains 'pending', and the application must include logic to keep retrying the commit until it succeeds, or decide to abandon it. By configuring LOCAL COMMIT ACTION COMMIT, when a timeout occurs the transaction is committed locally, and apart from handling (or ignoring) the return service timeout warning, the application does not need to do anything special.

    (b) Does this only apply to the replication timeout scenario? What about other cases where the commit at the subscriber cannot be done?

    CJ > It applies to timeouts. If you get a different kind of commit error, then the application must decide what to do (force a local commit, or roll back).

    (2) If we do not have the DISABLE RETURN clause as above, would 'DURABLE COMMIT ON' and 'RESUME RETURN' still apply?

    CJ > DURABLE COMMIT ON can be valid depending on other factors; RESUME RETURN is not valid (or useful).

    In other words, if we stop the replication agent after failing over to the standby:
    - the return service is off
    - txns on the master will be durably committed
    - the return service will resume automatically when the replication agent is started again (after the replication problem is fixed)

    CJ > I assume you mean failover to the standby. Yes, with this configuration, you should see the behavior you described above. The RESUME RETURN should not be necessary.

  • (Maybe stupid) Question about ASDM-configured PIX-to-PIX VPN

    I have two PIX 515s running v7.2(1) and ASDM 5.2(1).

    If I use the ASDM VPN Wizard to configure a site-to-site VPN, does this process take care of creating the split tunnel parameters, so that outgoing non-VPN traffic inside each PIX is handled properly?

    Hello

    By default, all VPN client traffic is encrypted and sent to the VPN server. Split tunneling is used for remote VPN clients to exempt particular traffic from being encrypted and tunneled to the VPN server, so that this traffic is sent in parallel to the internet or kept local.

    For a site-to-site configuration, the crypto ACL defines the remote networks on both sides that communicate through the IPsec tunnel, and all other traffic is routed to its destination without encryption.
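
    A minimal sketch of what that looks like in a PIX 7.x config (the addresses and names here are made up; the wizard generates the equivalent for you): traffic matching the crypto ACL is encrypted into the tunnel, and everything else is routed normally.

    ! Local LAN 192.168.1.0/24 to remote LAN 10.0.0.0/24 goes through the tunnel.
    access-list VPN_TRAFFIC extended permit ip 192.168.1.0 255.255.255.0 10.0.0.0 255.255.255.0

    crypto map MYMAP 10 match address VPN_TRAFFIC
    crypto map MYMAP 10 set peer 203.0.113.2
    crypto map MYMAP 10 set transform-set ESP-3DES-SHA
    crypto map MYMAP interface outside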

  • Question about the VM configuration file and VM working location

    I noticed one of my virtual machines had two datastores listed for it in the vSphere Client.

    After some searching around, I found that the virtual machine's disk files are in the correct datastore, but the 'VM configuration file' and 'VM working location' are in another datastore.

    Can I consolidate everything into the same datastore? I'd rather have the config file and the working location on my main datastore, which has a lot of space. For some reason, the configuration file and the working location were created on local storage on my host.

    Any suggestions?

    Thank you

    David Moore

    You can use Storage vMotion to migrate only the virtual machine configuration file to the desired location. Just start the Storage vMotion wizard, and at the 'Select Datastore' step, click Advanced; you will see the option to select a new destination datastore for the configuration file.
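
    If you prefer to script it, here is a hedged PowerCLI sketch of the same operation (VM and datastore names are placeholders): it pins every virtual disk to the datastore it is already on, so only the configuration/working files are relocated.

    $vm = Get-VM -Name "MyVM"                        # hypothetical VM name
    $targetDs = Get-Datastore -Name "MainDatastore"  # hypothetical datastore name

    $spec = New-Object VMware.Vim.VirtualMachineRelocateSpec
    $spec.Datastore = ($targetDs | Get-View).MoRef   # new home for the config file

    # Keep each existing disk where it is, so only the config file moves.
    foreach ($disk in ($vm.ExtensionData.Config.Hardware.Device |
                       Where-Object { $_ -is [VMware.Vim.VirtualDisk] })) {
        $locator = New-Object VMware.Vim.VirtualMachineRelocateSpecDiskLocator
        $locator.DiskId    = $disk.Key
        $locator.Datastore = $disk.Backing.Datastore
        $spec.Disk += $locator
    }

    $vm.ExtensionData.RelocateVM($spec, "defaultPriority")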

  • Question about the configuration of NetBackup

    DB version: 11.2
    Operating system platform: AIX 6.1
    NetBackup version: 7

    I need to configure RMAN backups to tape for my production DB, using NetBackup as the MML. The given instructions were:
    Shutdown all DBs running from this Oracle Home        # Missed this step
    cd $ORACLE_HOME/lib
    ls -lrt libobk.a                                                                                 
    ln -s /usr/openv/netbackup/bin/libobk.a64 libobk.a
    Start all DBs
    I did all the steps above except shutting down the DBs linked to this ORACLE_HOME. I even successfully tested a backup of the control file to tape after creating the symbolic link. Tonight, at off-peak hours, I will run the full DB backup. Will there be any problem because the DB was not bounced? I can't test it now that it's production.

    Hello

    did all the steps above except shutting down the DBs linked to this ORACLE_HOME...
    Will there be any problem because the DB was not bounced?

    Don't worry - since Oracle 9i, the MML installation and link creation can be done without stopping running instances.
    From:
    http://docs.Oracle.com/CD/B10500_01/server.920/a96566/rcmconfg.htm#453248

    You don't need to start or stop the instance during installation of the media management library.

    You seem to be on 11gR2, so everything should work properly.
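
    For reference, a minimal RMAN sketch that exercises the newly linked MML (the channel name is arbitrary; any NetBackup-specific SEND/PARMS options would come from your NetBackup policy):

    RUN {
      ALLOCATE CHANNEL t1 DEVICE TYPE sbt;   # tape channel through the NetBackup MML
      BACKUP CURRENT CONTROLFILE;            # cheap smoke test, like the one already done
      BACKUP DATABASE;                       # tonight's full backup
    }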

    I can't test it now that it's production

    Doing the backup now would also generate additional I/O...
    Kind regards
    Tycho

  • Questions about configuring the ASM libraries for Database 11gR2 under VM

    People,

    Hello. I am installing an Oracle 11gR2 RAC system with 2 virtual machines (rac1 and rac2 - Oracle Linux 5.6) on top of VMPlayer.

    I am configuring the ASM libraries on rac1 using the command: [root@rac1 dev]# oracleasm configure -i

    But this message appears: "Command not found."

    I can't find the directory where 'oracleasm' is located. Can anyone help solve the problem?

    Thanks in advance.

    Pl do not post duplicates - Oracle Database 11gR2 RAC - configuring the ASM libraries on VM
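
    For anyone hitting the same 'Command not found': a hedged sketch of the usual checks on Oracle Linux (the binary normally comes from the oracleasm-support package; paths may vary by release):

    # The configure tool ships in the oracleasm-support package.
    rpm -qa | grep oracleasm          # check which oracleasm packages are installed
    ls -l /usr/sbin/oracleasm         # usual location of the binary
    /usr/sbin/oracleasm configure -i  # use the full path if /usr/sbin is not in root's PATH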

  • Question about Clusters and host bus adapters

    I have worked with ESX in a non-clustered environment, but I'm looking to experiment with clusters.

    If I have two host servers in a VMware HA cluster (using VC 2.5)

    And I have these two servers hooked up to an FC SAN, each with an FC HBA.

    Now, if I create two virtual machines that will be part of a Windows / SQL cluster, do I need a second set of FC HBAs in each server?

    And if I add a second set of clustered virtual machines, this time a Windows / Exchange cluster, do I have to add yet another set of HBAs?

    I understand that if I had physical servers, I would need an HBA in each... but I am unsure how hardware sharing works in VMware. Is there a virtual FC HBA in ESX?

    Thank you!

    bilbus wrote:

    OK thanks, so just to be clear...

    if I have one HBA in each ESX cluster server (assuming I don't need redundant paths), that's all I need?

    OK - all you need is 1 HBA per server.

    Once a server has an FC HBA (even if it is used by the ESX cluster server itself), do I then have access to a virtual FC HBA in all of my virtual machines?

    No, that is not how ESX works. Each VM has a virtual SCSI adapter attached to a VMDK on your datastore, which is accessed through the FC HBA above.

    Is there a rule on how many servers can share an HBA (other than bandwidth)?

    There may be, but it's mainly driven by IOPS and bandwidth.

    -Matt

  • Question about the configuration of wireless networks

    I have managed to set up my laptop and desktop computer to access the internet through a Belkin WLAN modem router. So why can't they access each other, or the printer that is attached to the desktop? The computers independently access the router via a wireless connection.

    The printer is connected to the desktop via the parallel port. Both are running XP Home.

    Hello

    In order for them to access the printer, you must have the printer shared. Go to Start -> Settings -> Printers and Faxes, then go to your printer, right-click, and select Sharing; then share it. Now go to your laptop and open the Printers and Faxes page there. Click Add a Printer, select the option that it is connected to the network via another computer, and add it to your laptop. Now it should be usable over the network. For your computers to see each other, they have to be networked together in a literal sense. Go to network setup and create a LAN or WLAN, first on the desktop, then on the laptop. After you do both, they should be networked and able to see each other. A router is dandy, but for some reason some computers do not always behave well on a network until you force them to.

  • I have several questions about the configuration of Windows Movie Maker on XP

    original title: subtitle in WMM as subtitle

    Hi, I use Windows XP, and when using WMM under XP, I would like the following:

    • the caption placed as a subtitle on the picture, or just scrolling along the bottom from right to left.
    • Second thing, how can I change the background for title slides?
    • Third, the caption fonts.
    • Can I add some animation when going into a transition?

    Hello

    1. What version of Windows Movie Maker is installed on your computer?

    You may want to refer to these articles for help.

    Publish a movie in Windows Movie Maker
    Change Windows Movie Maker advanced settings

    Note: The articles above are for Vista, but they are also applicable to XP.

  • Basic question on the configuration of the OVD

    Hi gurus
    I have a basic question about the configuration of OVD. I have all the employees in Active Directory and 'employees + external users' in OID. The employee users' passwords are maintained in AD, while those of the external users are maintained in OID.

    Query:
    I need to configure authentication for an application for the employees present in AD and the external users present in OID. I have to join the profiles of the employees present in AD and OID. Even though the employees are present in OID, I don't want OAM to authenticate them against OID.

    How can I implement this using OAM and OVD?

    I understand your requirement.

    You said that OID contains external users as well as employees. Do you maintain a separate container for employees in OID? If so, your job would be easy. If there is no separate container for employees in OID, and all users (employees and external users) are stored in the same container (default: cn=Users,dc=xxx,dc=com), then you can get the list of AD users only with an ldapsearch command along the following lines:

    ldapsearch -h <host> -p <port> -D "cn=<bind user>" -w <password> -s sub -b "<base dn>" "objectclass=user" sAMAccountName

    Create three OVD adapters:

    1. Employees adapter (this should point to Active Directory).

    2. External users adapter (when creating this, you have to exclude the AD users, similar to the search above).

    3. A Join View adapter that joins the above two adapters.

    Now, OAM should talk to the Join View adapter.

    I hope this helps.

    Thank you
    GK

    Published by: GK Goalla on October 3, 2012 12:34

  • Questions about implementing datastore clusters and Storage DRS on an active cluster

    I have a few questions about implementing datastore clusters and Storage DRS (SDRS) in vSphere 5.1.

    We have a datacenter with about 15 HP blades and a few non-blade servers. The hosts are all either vSphere 5.0 or 5.1. Our back-end storage is an EMC VNX 5700 with about 20 datastores. The whole VMware environment is managed by vCenter 5.1 running on a dedicated physical machine.

    Currently, each datastore is used individually; no datastore clusters are set up. When a new virtual machine is created, the administrator usually chooses the datastore with the most free space. Periodically, we go back and manually Storage vMotion machines to balance the load.

    Recently, we expanded the VNX storage, so we now have LUNs in different pools, with different levels of performance.

    What I would like to do is set up datastore clusters, so that when a virtual machine is created the administrator does not have to know which datastore is in which pool, and also take advantage of Storage DRS so that the storage load will 'balance' itself a little.

    I know that setting up a datastore cluster in a 'clean' environment is quite simple, but my concern is creating/converting the existing datastores into the cluster while they are in active production use.

    If I go to the 'Datastores and Datastore Clusters' view, right-click the datacenter and create a cluster, and then move the various datastores into the new cluster, will there be an interruption to the running production systems?

    I also wonder about enabling Storage I/O Control. It is not currently enabled on the datastores. I know it's useful for Storage DRS, but will enabling it have any negative impact on the system? If I turn it on for the datastores, is there anything else I should do or configure beyond just enabling it?

    Finally, we are in the process of configuring Site Recovery Manager. Does SRM have an impact on the configuration of datastore clusters and Storage DRS?

    Thanks in advance for your comments.

    Mike O.

    Gregg Robertson wrote:

    Hello

    Using datastore clusters and Storage DRS doesn't impact SRM, even if you create one datastore cluster for replicated storage and another for everything else. That way, the virtual machines you want to replicate are not moved onto datastores that are not replicated, but they still have the ability to move if there is contention.

    Gregg

    With all due respect, SRM and Storage DRS don't go together. SRM does not support the use of Storage DRS at all. So if you are configuring SRM, forget Storage DRS for now.

    The reason for this is that SRM knows nothing about Storage DRS; VMs can be in flight when a failover needs to occur, and bad things can happen. Also, the protection breaks when a virtual machine is moved between datastores in a datastore cluster.

    In short: don't go there.
