Quorum failgroup

Hello.
I created a disk group with NORMAL redundancy without specifying any failgroups and successfully migrated the voting files into the disk group. All of its failgroups are regular ones. So what is a "quorum failgroup", and when should it be used?

Hello

1 - How are voting files stored in an ASM disk group?
Voting files are placed directly on individual disks of an ASM disk group, and their location is recorded in the disk headers. The number of voting files is based on the disk group redundancy:
External (no mirroring) = 1 voting file; Normal = 3 voting files; High redundancy (triple mirrored) = 5 voting files.
Oracle Clusterware stores all the voting files within a single disk group.

The OCR, however, is saved as a normal file inside ASM (like any tablespace data file) and therefore "inherits" the disk group redundancy.
So while you see only one file, its extents are mirrored accordingly (two-way for Normal, three-way for High).

Voting files can reside in only one disk group. You cannot create additional voting files in a separate disk group.
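
You can check where the voting files and the OCR currently live with the standard Clusterware tools; a minimal sketch (the grid home path is an assumption, adjust it to your installation, and run ocrcheck as root):

/u01/11.2.0/grid/bin/crsctl query css votedisk
/u01/11.2.0/grid/bin/ocrcheck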

2 - How many voting files must be configured?
You must configure an odd number of voting files because, as far as voting files are concerned, a node must be able to access more than half of them at any time (a simple majority). To be able to tolerate the failure of n voting files, the cluster needs at least 2n + 1 of them configured. For example, with 3 voting files you can tolerate the loss of 1, and with 5 the loss of 2.

If you lose half or more of your voting files, nodes get evicted from the cluster, or evict themselves from it.
For this reason, when using Oracle redundancy for your voting files, Oracle recommends that customers use 3 or more voting files.

3 - Why use a quorum voting disk?

If you use only one hardware storage array and it fails, the whole cluster goes down together with all the voting files, no matter how many are configured.

The problem in a stretched cluster configuration is that most installations use only two storage systems (one on each site), which means that the site hosting the majority of the voting files is a potential single point of failure for the entire cluster. If the storage or the site where the n + 1 voting files are configured fails, the entire cluster goes down, because Oracle Clusterware loses the majority of the voting files.
To avoid a complete cluster failure, Oracle supports a third voting file on a cheap, low-end, standard NFS-mounted device somewhere in the network. Oracle recommends placing the NFS voting file on a dedicated server that belongs to the production environment.

So you create a "cooked" file on NFS (presented as a disk) and hand it to ASM. From that point on, ASM does not know that this ASM disk sits across the network (WAN) and is really a "cooked" file.
Then you must mark this ASM disk as QUORUM, because Oracle will then use it only to store the voting file. This prevents it from causing performance problems or data loss by storing data (such as data files) on it.
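
As a minimal sketch, this is roughly how such a setup is declared; the device paths, failgroup names, disk group name and compatible.asm value below are hypothetical:

CREATE DISKGROUP DATA NORMAL REDUNDANCY
FAILGROUP site_a DISK '/dev/rhdisk1'
FAILGROUP site_b DISK '/dev/rhdisk2'
QUORUM FAILGROUP nfs_quorum DISK '/oracle/nfs/votedisk'
ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';

/u01/11.2.0/grid/bin/crsctl replace votedisk +DATA

The QUORUM keyword tells ASM that the failgroup may hold only a voting file, never database data, so the slow NFS path cannot hurt data performance.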

http://levipereira.wordpress.com/2012/01/11/explaining-how-to-store-ocr-voting-disks-and-asm-spfile-on-asm-diskgroup-rac-or-rac-extended/

Tags: Database

Similar Questions

  • Voting files location: quorum failgroup or regular failgroup?

    About the notion of quorum failgroup, Oracle writes:
    Quorum disks, or disks in quorum failure groups, cannot contain any database files,
    the Oracle Cluster Registry (OCR), or dynamic volumes. However, quorum disks
    can contain the voting files for Cluster Synchronization Services (CSS). Oracle ASM
    uses quorum disks or disks in quorum failure groups for voting files whenever
    possible.

    To sum up, I think the difference between a regular failgroup and a quorum failgroup is that a quorum failgroup can only contain voting files, while a regular one can contain several types of files.
    So I don't see any advantage in placing voting files in a quorum failgroup rather than in a regular one. Why has Oracle introduced the notion of quorum failgroup?
    Thanks in advance.

    Why has Oracle introduced the notion of quorum failgroup?

    You must configure an odd number of voting files because, as far as voting files are concerned, a node must be able to access more than half of them at any time (a simple majority). To be able to tolerate the failure of n voting files, the cluster needs at least 2n + 1 of them configured.

    If you lose half or more of your voting files, nodes get evicted from the cluster, or evict themselves from it.

    For this reason, when using Oracle redundancy for your voting files, Oracle recommends that customers use 3 or more voting files.

    If you use only one hardware storage array and it fails, the whole cluster goes down together with all the voting files, no matter how many are configured.

    The problem in an extended cluster configuration (Extended RAC) is that most installations use only two storage systems (one on each site), which means that the site hosting the majority of the voting files is a potential single point of failure for the entire cluster. If the storage or the site where the n + 1 voting files are configured fails, the entire cluster goes down, because Oracle Clusterware loses the majority of the voting files.

    To avoid a complete cluster failure, Oracle supports a third voting file on a cheap, low-end, standard NFS-mounted device somewhere in the network. Oracle recommends placing the NFS voting file on a dedicated server that belongs to the production environment.

    So you create a "cooked" file on NFS (presented as a disk) and hand it to ASM. From that point on, ASM does not know that this ASM disk sits across the network (WAN) and is really a "cooked" file.

    Then you must mark this ASM disk as QUORUM, because Oracle will then use it only to store the voting file. This prevents it from causing performance problems or data loss by storing data (such as data files) on it.

  • Quorum usage

    Environment

    2-node RAC cluster, each node running:

    ----------------------------------------

    Oracle Linux 6 Update 5 (x86-64)

    Oracle Grid Infrastructure 12cR1 (12.1.0.2.0)

    Oracle Database 12cR1 (12.1.0.2.0)

    I'm not understanding the role of the quorum failure group for voting files with Oracle ASM.

    Failure groups (FGs) ensure that there is a separation of whatever risk you are trying to mitigate. The separation is at extent scale for user data. For voting files, each file is placed on a separate disk, one per FG in the disk group (DG). If voting files are stored in a disk group with normal redundancy, a minimum of two FGs is required, and three FGs are recommended, so that the Partner Status Table (PST) has at least one other FG against which the PST can be compared in case an FG fails. Must the FGs that store voting files be QUORUM failgroups? What is the role of the quorum? When is it necessary?

    Hello

    I'll start with what "quorum" means:

    A quorum is the minimum number of members (a majority) of a set needed to avoid a failure (a computing concept).

    There are many kinds of quorum: votedisk quorum, OCR quorum, network quorum, PST quorum, etc.

    We must be clear about which quorum we are concerned with.

    The PST quorum is different from the votedisk quorum, though everything works together.

    PST quorum:

    A PST contains information about all the disks in an ASM disk group - disk count, disk status, partner disk numbers, heartbeat info and partner failgroup info.

    A disk group must be able to access a quorum of the Partner Status Tables (PST) to mount the disk group.

    When a disk group mount is requested, the instance reads all disks in the disk group to find and verify all available PSTs. Once it verifies that there are enough PSTs for a quorum, it mounts the disk group.
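
    From the ASM instance you can see how disks map to failgroups (and therefore where PSTs can live); a minimal sketch using the standard V$ASM_DISK view:

    SELECT group_number, disk_number, name, failgroup, mount_status
    FROM v$asm_disk
    ORDER BY group_number, failgroup;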

    There is a good post on this here: ASM Support Guy: Partnership and Status Table

    Votedisk quorum:

    This is the minimum number of voting files for the cluster to be operational. There must always be a quorum of voting files.

    When you configure voting files with normal redundancy you get 3 voting files, one in each failgroup. For the cluster to be operational you need a quorum of at least 2 online voting files.

    Quorum failgroup (clause):

    The quorum failgroup is an option of the disk group setup.

    Do not confuse this option with the voting quorum: the voting quorum and the quorum failgroup are different things.

    For example: in a normal redundancy disk group I can lose my quorum failgroup and the cluster will stay online with the 2 regular failgroups; the quorum failgroup is just a setup option.

    A "quorum failgroup" is a failgroup with a specific purpose: it stores only the votedisk, to suit the deployment of the Oracle infrastructure.

    There is no obligation to use a quorum failgroup in a disk group containing the votedisk.

    Now, back to your question:

    If your failure groups have only 1 ASM disk each, shouldn't the recommendation be to use high redundancy (5 failure groups), so that in the case of an ASM disk failure a quorum of PSTs (3 PSTs) would still be possible?

    About the PST quorum: you must be aware that if you have 5 PSTs, you will need a quorum of at least 3 PSTs to mount the disk group.

    If you have 5 failgroups and each failgroup has only one ASM disk, you will have one PST per ASM disk, which means you can lose at most 2 PSTs and still have a quorum of 3 PSTs to keep the disk group mounted or to fix it.

    The bold italic Oracle documentation above seems to say that if you allocate 3 disk devices, 2 will be used by normal redundancy failure groups and, in addition, a quorum failure group will exist that uses all the disk devices. What does that mean?

    I don't know why the documentation is so confusing. I'll try to contact some Oracle employee to check it.

    But I will attempt to clarify some things:

    Suppose you set up a disk group as follows (see the SQL sketch after this list):

    DATA diskgroup

    Failgroup data01
    * /dev/hdisk1 and /dev/hdisk2

    Failgroup data02
    * /dev/hdisk3 and /dev/hdisk4

    Failgroup data03
    * /dev/hdisk5 and /dev/hdisk6

    Quorum failgroup data_quorum
    * /nfs/votedisk
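
    A minimal SQL sketch of that layout (the paths come from the list above; the compatible.asm value is an assumption):

    CREATE DISKGROUP data NORMAL REDUNDANCY
    FAILGROUP data01 DISK '/dev/hdisk1', '/dev/hdisk2'
    FAILGROUP data02 DISK '/dev/hdisk3', '/dev/hdisk4'
    FAILGROUP data03 DISK '/dev/hdisk5', '/dev/hdisk6'
    QUORUM FAILGROUP data_quorum DISK '/nfs/votedisk'
    ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';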

    When you add the votedisk to this DATA disk group, CSSD will store it as follows:

    CSSD randomly chooses one ASM disk per failgroup to hold a voting file, but it always picks an ASM disk from the quorum failgroup (if one exists).

    Therefore, after you add the votedisk to the disk group above, you have voting files on:

    * Failgroup data01 (/dev/hdisk2)

    * Failgroup data03 (/dev/hdisk5)

    * Failgroup data_quorum (/nfs/votedisk)

    To mount the DATA disk group, failgroups data01, data03 and data_quorum must be available at mount time; otherwise the disk group does not come up.

    In the documentation (https://docs.oracle.com/database/121/CWADD/votocr.htm#CWADD91889) there is a bit of confusion:

    Normal redundancy

    The redundancy level that you choose for the Oracle ASM disk group determines how Oracle ASM mirrors files in the disk group, and determines the number of disks and amount of disk space that you require. If the voting files are in a disk group, the disk groups that contain Oracle Clusterware files (OCR and voting files) have a higher minimum number of failure groups than other disk groups because the voting files are stored in quorum failure groups.

    What it says is: when using a quorum failgroup, you will have a higher minimum number of failure groups than other disk groups...

    But remember that the quorum failgroup is optional; it is for those who use a single storage array or an odd number of H/W storage arrays.

    For Oracle Clusterware files, a normal redundancy disk group requires a minimum of three disk devices (two of the three disks are used by failure groups and all three disks are used by the quorum failure group) and provides three voting files and one OCR and mirror of the OCR. When using a normal redundancy disk group, the cluster can survive the loss of one failure group.

    Let me try to clarify:

    - Votedisk in a disk group with normal redundancy requires three disk devices. (When using a quorum failgroup: two of the three disks are used by regular failgroups and one of the three disks is used by the quorum failgroup, but all three disks (the regular failgroups and the quorum one that stores the votedisk) count when mounting the disk group.)

    - OCR and a mirror of the OCR:

    This was really confusing, because the mirror of the OCR must be placed in a different disk group: the OCR is stored similarly to how Oracle database files are stored, with its extents spread across all the disks in the disk group.

    I don't know what it is talking about - the extent mirroring that comes from the disk group redundancy, or a true mirror of the OCR.

    As per the note, and especially the documentation, it is not possible to store the OCR and the OCR mirror in the same disk group:

    RAC FAQ (Doc ID 220970.1)

    How is the Oracle Cluster Registry (OCR) protected when using ASM?

    And (https://docs.oracle.com/database/121/CWADD/votocr.htm#CWADD90964):

    * At least two OCR locations if the OCR is configured on an Oracle ASM disk group. You must configure the OCR in two independent disk groups. Typically this is the work area and the recovery area.
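
    Adding the second OCR location is a one-liner run as root; a minimal sketch (+FRA is a hypothetical second disk group):

    /u01/11.2.0/grid/bin/ocrconfig -add +FRA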

    High redundancy:

    For Oracle Clusterware files, a high redundancy disk group requires a minimum of five disk devices (three of the five disks are used by failure groups and all five disks are used by the quorum failure group) and provides five voting files and one OCR and two mirrors of the OCR. With high redundancy, the cluster can survive the loss of two failure groups.

    Three of the five disks are used? And two mirrors of the OCR? In a single disk group?

    Now things go bad.

    As far as I could test and see, with a quorum votedisk four (not three) of the five disks are used, and all five count.

  • Windows 2003 Enterprise with 2 storage arrays and 2 cluster nodes attached. Is it possible to have quorum failover?

    The scenario is that I have 2 sites, DC and DR, with 1 storage array connected to 1 node on each site. The thing I have to do is cluster the two nodes so that if 1 node goes down the other takes over; storage-level failover could be done by assigning a 1 GB LUN as quorum in the storage at both sites. I have already clustered both nodes with a local quorum. The first thing I tried was an MNS cluster, but it does not fit our scenario.

    Hello

    Your question is more complex than what is generally answered in the Microsoft Answers forums. It is better suited for the Windows Server forums on TechNet. Please post your question in the Windows Server forums.

    http://social.technet.microsoft.com/forums/en-us/category/windowsserver/

  • Is having RAID (1+0) as well as ASM failgroups for all ASM disks a well-known best practice?

    Dear Experts,

    Is having RAID (1+0) as well as ASM failgroups for all ASM disks a well-known best practice?

    Is having both the best practice:

    • RAID (1+0)
    • ASM failgroups for all ASM disks

    Thank you

    IVW

    You can create ASM disk groups with normal or high redundancy, or specify external redundancy. It depends on your reliability requirements and storage performance. Remember that ASM is not RAID: its redundancy is file-based, and ASM stripes across devices. Oracle generally recommends having redundancy in the storage. If you have RAID redundancy at the hardware controller or storage level, Oracle recommends configuring ASM disk groups with external redundancy. ASM redundancy is only between disk failure groups and, by default, each device is its own failure group. Of course, you always want to make sure that you do not rely on data redundancy between logical devices or partitions residing on a single physical unit. Compared to external RAID redundancy, using ASM redundancy gives the DBA more control and transparency over the underlying setup.
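
    A minimal sketch of the two options above (device paths are hypothetical; note that with no FAILGROUP clause each disk forms its own failure group):

    -- Hardware RAID already protects the LUNs: let ASM only stripe.
    CREATE DISKGROUP data_ext EXTERNAL REDUNDANCY
    DISK '/dev/raid_lun1', '/dev/raid_lun2';

    -- No hardware RAID: let ASM mirror between failure groups.
    CREATE DISKGROUP data_asm NORMAL REDUNDANCY
    FAILGROUP fg1 DISK '/dev/disk1'
    FAILGROUP fg2 DISK '/dev/disk2';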

  • New mechanism of quorum VSAN in vSphere 6

    Hi all

    I'm a little confused about the new VSAN quorum mechanism in vSphere 6.x.

    As far as I've seen, the new technique does not use the witness VMDK (though it still uses it for the namespace) and instead uses the vote component in order to get a quorum.

    However, I have a number of VSAN deployments where I still observe witnesses. What am I missing?

    Thank you

    pic1.PNG

    It is still possible to see witness components used in 6.0.

    For votes to be used instead, you generally need increased bandwidth.

    Here is an example of votes being used rather than a witness: VSAN 6.0 Part 1 - New quorum mechanism - CormacHogan.com

  • Quorum configuration - option 'Add an existing drive' and no Quorum disk appears

    Hi all..

    I'm following the vSphere 5.1 MSCS Failover Clusters management document and everything went well until I got to page 24... adding the quorum disk to the second physical host. It says I should add an existing hard disk, then select the quorum RDM I added to the first host.

    When I go to select the disk, it does not appear in the datastore list... is there a setting I'm missing? All is fine on the first host with the mapped RDM, but none appear for me to add on the second host. I'm sure it's a trivial config item I'm forgetting, but I cannot figure out where it might be.

    Any help would be appreciated...

    Thank you very much

    N

    When you add a LUN to a virtual machine as an RDM, ESXi creates a mapping file for the virtual hard disk in the virtual machine's folder (by default). For the second virtual machine, use "Add an existing virtual disk" and navigate to the folder of the first virtual machine, where you will find the mapping file.

    André

  • Question about ASM failgroups

    Hello everyone,

    I have an ASM disk group that is short on space. The disk group has NORMAL redundancy and three failure groups with two disks each. All the disks are the same size (50G).
    Now the disk group is almost full and I have to add space, and I have two raw devices of 36G each available.

    I read that Oracle recommends that all failure groups be of the same size. Does that mean I can't use these two free disks for ASM?
    Because if I create another failgroup with them, that group would have a different size than the others. And if I add them to an existing group (if that is possible at all), that group would also have a different size.

    Thanks for your thoughts.

    Mario

    The disk group has three failure groups: FG1, FG2, FG3.

    In this case you have 2 disks for 3 failure groups: you would need to add one disk to each failure group. They can be 36 GB, but all of them should be the same size.

    Seems clear, but on the other hand that means that once I start with 50G disks, I always have to use 50G disks in the future. What happens if they are out of stock, etc.?

    The same size applies to the failure groups, not to the individual disks. So the 3 disks may be 36 GB each.
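
    A minimal sketch of adding one new disk to each failure group (disk group, device paths and names are hypothetical; ASM rebalances automatically afterwards):

    ALTER DISKGROUP data ADD FAILGROUP fg1 DISK '/dev/raw/raw7' NAME data_0006;
    ALTER DISKGROUP data ADD FAILGROUP fg2 DISK '/dev/raw/raw8' NAME data_0007;
    ALTER DISKGROUP data ADD FAILGROUP fg3 DISK '/dev/raw/raw9' NAME data_0008;

    -- Watch the rebalance progress:
    SELECT * FROM v$asm_operation;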

  • How to create a windows cluster quorum

    Hi all

    I use VMware Workstation 7.

    I want to create a Windows cluster composed of 2 nodes with Windows Server 2003.

    Can someone help me create a quorum for use with the cluster?

    Thanks to all in advance

    Regards

    You may find the following link helpful. Even though it's for Workstation 6, it should still work for V7.

    The only part that I don't like is the cloning of the cluster VM; I recommend you do a clean install instead.

    Configure the Windows Cluster on VMware Workstation 6.x

    André

  • Virtual SAN storage as a quorum? How to configure?

    Hi all

    I installed the VMware storage appliance as my virtual SAN on VM Workstation, but I now wonder how to get the cluster configuration wizard to see that storage for the quorum. Anyone done this before in a test lab? I'm setting up Exchange 2003 Enterprise and just want the cluster to see the SAN storage, but I'm stuck.

    Well, maybe not a direct solution for your situation, but you can try a product called StarWind iSCSI SAN. It is free for building virtual SANs and there is a lot of documentation on it.

  • MSCS cluster on hosts cannot find Quorum RDM

    I have set up 2 x W2K3 nodes for an MS cluster, configured the network connections and assigned a 1 GB RDM formatted as an NTFS basic disk. The second node connects 100% to the same storage and network; the 2nd node was set offline.

    When configuring the cluster, it cannot locate a quorum disk even though there is a Q: RDM drive sitting there. It says "the quorum disk could not be located by the cluster service". I took care to configure the network elements before the storage, so I do not have a disk signature configuration issue.

    Does anyone have suggestions I can try?

    Andy

    I have recently installed 2 MSCS clusters, each with 2 nodes.  Two things I discovered during the process: the first was to add the RDM drives to only 1 node at first.  Get the first node installed and added to the cluster, then add the disks to the second node and add it to the cluster.  The second thing I discovered was to store the RDM mapping file on the shared storage.  I believe it asks whether to store it with the VM or on a datastore: select the datastore, then choose the datastore where you want to save the file.  To add the disk on the second node, make sure you select "Add existing disk" and point it to the location of the first node's *.vmdk for the quorum and data disks.

    You must also make sure that each node is configured to use a new SCSI controller (1:#) and that each node addresses the same disk using the same SCSI ID.  In addition, the SCSI mode must be physical, if I remember correctly from the documentation.

    I hope this helps.

  • ASM failgroup

    Hello

    In 11gR1, is it possible to change the failgroup that an ASM disk is assigned to after it is created, or do I have to remove the disk and add it back specifying the correct failgroup?

    Thank you
    Justin

    Justin,

    The best place to find this would be the Oracle docs:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/statements_1006.htm#BABGCDFE

    And looking at it, it's clear that you must re-create the group, or drop the disk and add it back into the desired failgroup, in order to change it (see the sketch below).
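
    A minimal sketch of the drop-and-re-add route (disk group, disk and device names are hypothetical; the drop triggers a rebalance, so let it finish before re-adding):

    ALTER DISKGROUP data DROP DISK data_0001;
    -- wait until V$ASM_OPERATION shows no rebalance running, then:
    ALTER DISKGROUP data ADD FAILGROUP fg2 DISK '/dev/rdisk5' NAME data_0001;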

    HTH
    Aman...

  • Re-mapping an MSCS quorum disk RDM

    I have a problem with an Exchange 2003 MSCS on VMware: VM1 (active), VM2 (passive).

    VM2 is powered off and does not power back on, with the error:

    Virtual disk 'X' is a direct-access mapped LUN which is not accessible

    I've referenced this KB: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1016210

    and contacted VMware support.

    We discovered that the vml identifier in the RDM mapping file is completely different from the vml for the RDM on all 4 hosts - in fact, the vml in the RDM mapping file does not exist on any of the hosts.

    VM2 is currently offline and has had its RDM mapping deleted, but they advised the following:

    1. Power off both nodes.

    2. On node 1, note the SCSI controller that is used for the RDM (disk 2).

    3. Remove the RDM from the virtual machine and delete the disk (this only removes the pointer file to the RDM; it will not erase the data).

    4. Add the RDM to node 01 using the same SCSI controller (which I think is SCSI 1:1) and physical bus sharing.

    5. Add the RDM to node 02, but this time add the LUN as an existing disk and use the pointer file created in step 4.

    6. Power on node 1.

    7. Power on node 2.

    I can see the logic, but I'm worried because this RDM is the quorum drive for the cluster, so if there are any issues/differences when VM1 starts up with the new mapping then the cluster will not start and we will lose Exchange. (The other resources in the cluster are NetApp iSCSI LUNs connected with SnapDrive inside the virtual machines.)

    Has anybody had to do something similar?

    Would taking a copy of the VM directory before doing any unmap or remove on VM1 provide a rollback option?

    any advice appreciated,

    GAV

    I did something similar, and the process they describe is what you need to do.

    I also did it without deleting the pointer in the first VM, but deleting the pointer won't matter:

    the disk signature is on the RDM LUN itself, not in the pointer.

  • ASM striping & failgroups

    Hi gurus,
    After Googling I still have only a blurry picture of ASM striping and failure groups.
    Could you please explain 1. the ASM striping concept and
    2. the ASM failure group concept? Or at least guide me to a URL.

    udayjampani wrote:
    Understand the following scenario and explain:
    I chose normal redundancy and provided no failure group specification.
    So I have two disk groups, DATA and FRA, which in turn have four disks: DATA_0000, DATA_0001, FRA_0000 and FRA_0001.
    1. Per the documentation, DATA_0000 and DATA_0001 will automatically be two failure groups - is that right?

    Yes, the two disks would be in two different failure groups, each failure group consisting of a single disk.

    2. But with the concept of striping, the file and its mirror are spread across the disks; for example, for a table some data is stored on one disk and the rest on the other. Won't this scenario lose data when one disk fails?

    You are mixing two things; do not combine striping and mirroring. With two disks you do get mirroring, but since you have only two disks, if one is lost there are no other disks available to maintain the disk group redundancy, and the loss of a disk would eventually lead to the dismount of the entire disk group. To confirm this, create a disk group with 3 disks and then lose one of the disks, as in the sketch below.
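
    A minimal sketch of that test (disk paths are hypothetical; FORCE simulates losing the disk):

    CREATE DISKGROUP test NORMAL REDUNDANCY
    DISK '/dev/disk1', '/dev/disk2', '/dev/disk3';

    -- Simulate the loss of one disk:
    ALTER DISKGROUP test DROP DISK test_0002 FORCE;

    With 3 disks the group stays mounted; repeat the exercise with only 2 disks to see the difference.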

    Aman...

  • Oh no: CRS-4258: Addition and deletion of voting files are not allowed because...

    Hi RAC folks...

    what to do now?


    CRS-4258: Addition and deletion of voting files are not allowed because the voting files are on ASM

    Oracle 11gR2 AIX cluster with ASM...

    Hello Hagggy

    Something still bothering you? Why do you always get these problems? Unlucky, or just good enough to get to ask them all...

    Sorry, don't be offended, we will do better... Attached is a solution that could work; you may need to change some parts, and I'm not sure you need to move the voting files too...

    First take a deep breath of fresh air, and also take a backup:

    /u01/11.2.0/grid/bin/ocrconfig -manualbackup

    Create a fake voting disk group on ASM volumes:

    CREATE DISKGROUP FAKE_VOT NORMAL REDUNDANCY
    FAILGROUP failure_group_rup DISK
    '/dev/rfake_vot' NAME vo1_rup1
    FAILGROUP failure_group_bry DISK
    '/dev/rfake_vot_2' NAME vo1_rup2
    ;

    /u01/11.2.0/grid/bin/crsctl replace votedisk +FAKE_VOT

    CREATE DISKGROUP PPR_VOT_OCR NORMAL REDUNDANCY
    FAILGROUP failure_group_rup DISK
    '/dev/rhdisk13' NAME VOT_OCR_rup1,
    '/dev/rhdisk15' NAME VOT_OCR_rup2
    FAILGROUP failure_group_bry DISK
    '/dev/rhdisk14' NAME VOT_OCR_bry1,
    '/dev/rhdisk16' NAME VOT_OCR_bry2
    QUORUM FAILGROUP fg3 DISK '/dev/rhdisk17' NAME QRM_VOT_OCR_BAL
    ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';

    /u01/11.2.0/grid/bin/crsctl replace votedisk +PPR_VOT_OCR

    /u01/11.2.0/grid/bin/ocrconfig -add +PPR_VOT_OCR

    /u01/11.2.0/grid/bin/ocrconfig -delete /dev/rfake_ocr

    Now the voting files and the OCR are on the new disk group.
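
    To verify the result, the standard checks are (a minimal sketch; run ocrcheck as root):

    /u01/11.2.0/grid/bin/crsctl query css votedisk
    /u01/11.2.0/grid/bin/ocrcheck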


    my yahoo pool game does not load and also I can't go into my bank online with this computer when I want 2 go into the playroom, it flickers then turns on Java then error is a mess last night I was able 2 get into the billiard room, but I could only p