Loading keys from the data store

Hello

I have a near cache, backed by a partitioned cache that uses a cache store. When I call cache.keySet(), I get only the keys that are currently in the partitioned cache, not those that have been evicted. Right?

If I would also like to get those, can I retrieve them from the database separately? Or is there another way?

Best regards
Jan

user10601659 wrote:
Hello

I have a near cache, backed by a partitioned cache that uses a cache store. When I call cache.keySet(), I get only the keys that are currently in the partitioned cache, not those that have been evicted. Right?

Correct. The cache does not remember anything about the evicted entries; it is as if they had never existed.

user10601659 wrote:
If I would also like to get those, can I retrieve them from the database separately?

If you know these keys, you can simply ask for them and the cache store will load them. If you do not know the keys, then you must go to the database to find them out, and then either put the data into the cache yourself (doing it in a way that the cache store won't write them back), or request the entries by their now-known keys.
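
One way the "request the entries by their now-known keys" option could look in code is sketched below. This is only an illustration: it assumes an Oracle Coherence NamedCache backed by a read-through cache store, and the cache name "my-cache" and the loadKeysFromDatabase() helper are hypothetical placeholders, not anything from this thread.

import java.util.Map;
import java.util.Set;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class ReloadEvictedEntries {

    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("my-cache");

        // The cache has no memory of evicted entries, so the keys must come
        // from the database itself (hypothetical helper, e.g. a JDBC query
        // against the key column).
        Set keysFromDb = loadKeysFromDatabase();

        // getAll() reads through the cache store for every key that is not
        // currently in the cache, so evicted entries are loaded back in
        // without the cache store writing anything to the database.
        Map entries = cache.getAll(keysFromDb);
        System.out.println("Loaded " + entries.size() + " entries");
    }

    private static Set loadKeysFromDatabase() {
        // Placeholder: implement against your own schema.
        throw new UnsupportedOperationException("query the key column via JDBC");
    }
}

Because getAll() only reads, the cache store's store/erase methods are not invoked for these entries.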

user10601659 wrote:
Or is there another way?

Best regards
Jan

You can also write another cache, backed by a cache store, that takes the query parameters as the cache key and, on a load, returns the result of the query for those parameters as the cached value. You should be aware, however, that this cache may contain stale data, so you probably want to configure it with a very short expiry.
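
As a rough sketch of that idea, assuming Coherence's CacheLoader interface and treating the cache key as a plain Map of query parameters (the runQuery() helper stands in for your own JDBC code):

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import com.tangosol.net.cache.CacheLoader;

// Illustrative loader for a cache whose key describes a query (here simply a
// Map of parameter names to values) and whose value is the query result.
public class QueryResultCacheLoader implements CacheLoader {

    public Object load(Object key) {
        Map queryParams = (Map) key;
        // Run the parameterized query against the database and return the
        // result (as a serializable object) to be cached under this key.
        return runQuery(queryParams);
    }

    public Map loadAll(Collection keys) {
        Map results = new HashMap();
        for (Object key : keys) {
            results.put(key, load(key));
        }
        return results;
    }

    private Object runQuery(Map queryParams) {
        // Placeholder: execute the query via JDBC and build the cached value.
        throw new UnsupportedOperationException("implement for your query");
    }
}

The cache that uses such a loader would be configured as read-through with a short expiry (for example a small expiry-delay in its scheme), so that the stale results mentioned above are dropped quickly.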

Best regards

Robert

Tags: Fusion Middleware

Similar Questions

  • Find files on the data store that have been removed from inventory, but not deleted from the disk

    I have ESXi 4.1 and use the vSphere Client to manage virtual machines.

    Some of my users continue to use the 'Remove from Inventory' option rather than 'Delete from Disk' in vSphere when they want to delete a virtual machine.

    This leaves the virtual machine on the data store, unused. I have since removed this privilege from the offending users, but I need to do a bit of cleanup.

    I have a lot of files on the data store where users have done this over recent years. Probably about 150 directories, but only 80 VMs listed in the inventory.

    Is there a way I can output a report showing the data store directory of each machine in the inventory, so that I can remove anything not on that list? I'd rather not manually check the settings of all 80 VMs in the inventory.

    Ideal output would be something like:

    MyVmNameHere1, \MyVmDirectoryHere1\ [DataStore1]

    MyVmNameHere2, \MyVmDirectoryHere2\ [DataStore1]

    A great tool to discover all this and much more is RVTools

    André

  • Local data store has disappeared from the data store window (urgent help needed)

    Dear team,

    I'm facing a very strange problem: all of a sudden one of the local ESX datastores disappeared. Below are the full details of what we have encountered/noticed.

    A local data store disappeared from the data store window. We are able to see this data store in the Add Storage wizard, which would only allow us to format it.

    * If we take a PuTTY session, from there we can see and browse this data store without any problem.

    * Virtual machines that are running on this data store work fine as well (all files are accessible / the VMs are accessible on the network).

    * Unable to take an image backup; we get the error "The object has already been deleted or has not been completely created."

    * Not able to take a clone; we get "cannot complete the file copy ... network".

    We are getting the following in the vmkernel logs:

    Dec 14 17:11:39 localhost vmkernel: 0:01:55:28.677 cpu1:4097) ScsiDeviceIO: 747: Command 0x28 to device 'mpx.vmhba1:C0:T1:L0' failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x4 0x44 0x0.

    Dec 14 17:11:39 localhost vmkernel: 0:01:55:28.677 cpu1:4097) ScsiDeviceToken: 293: Sync IO 0x28 to device 'mpx.vmhba1:C0:T1:L0' failed: I/O error H:0x0 D:0x2 P:0x0 Valid sense data: 0x4 0x44 0x0.

    Dec 14 17:11:39 localhost vmkernel: 0:01:55:28.677 cpu6:4110) capability3: 5354: Sync READ error ('.fbb.sf') (ioFlags: 8): I/O error

    Need your help urgently to solve the same.

    Regards

    Mr. VMware

    Dear all,

    We have raised a case with VMware; please find their findings below.

    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    After the WebEx session we just had, I traced the root cause of the reported problem to an underlying issue on the block device (the logical drive, or a problem on the controller card) presented to the host to accommodate the data store in question.

    In short, whenever we try to do a raw read from the disk (starting at sector 0), it always fails when we reach the 30932992-byte (31 MB) mark with an I/O error (which is consistent; it is always on this region of the disk that read operations fail, no more, no less). This result can be seen even if there is no partition on the disk (using if=/dev/sdb instead of /dev/sdb1 with dd) and even after zeroing all sectors (dd if=/dev/zero of=/dev/sdb). Strangely, write operations work fine (both writing zeros and random data) across the entire disk. Keep in mind that the tests I did used no VMware tools (I used almost only dd for these operations), which certainly rules out a VMware problem (in fact, if you were to boot the server with a Linux live CD and run the same tests that I did, you would see the same behavior).

    I know that there is no hardware report of any bad behavior on the array, but the data collected with our tests today completely invalidates that. The next step is for you to take this to the server vendor to check for problems on the array or disks, because they are there and they are the reason for the problem you reported initially.

    Please let me know if you have other questions about it.

    Thank you

    -

    David Meireles

    Technical Support Engineer

    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    We have now raised a case with the hardware vendor, to see what the next move will be.

    Regards

    Mr. VMware

  • Can snapshots be deleted from the data store directly?

    Hello

    I just have a doubt. Can snapshots be deleted directly from the data store?

    I checked the Snapshot Manager but was not able to locate any snapshots in it, yet at the same time there were three snapshots in that particular virtual machine's data store. What does that mean?

    And what does isNativeSnapshot = 'no' mean?

    I just have a doubt. Can snapshots be deleted directly from the data store?

    No. Snapshots work as a chain, and deleting one of the links in the chain breaks the chain. Please take a look at http://kb.vmware.com/kb/1015180 to see how snapshots work.

    I checked the Snapshot Manager but was not able to locate any snapshots in it, yet at the same time there were three snapshots in that particular virtual machine's data store. What does that mean?

    Information about snapshots is kept in the .vmsd file in the virtual machine's folder. If this file is corrupted - for whatever reason - you won't see the correct snapshot chain in the Snapshot Manager.

    Please create a list of all files in the virtual machine's folder (dir *.* /oen > filelist.txt for Windows, or ls -lisa > filelist.txt for Linux) and attach the filelist.txt, the VM's .vmx file as well as the .vmsd file to your next post.

    André

  • Error when displaying data from the data store

    Hello gurus,


    We are also facing a driver issue when we try to view data from a data store linked to the Hyperion Essbase technology.
    ODI version is 11.1.1.6.
    Here is the error we receive:

    java.lang.IllegalArgumentException: Driver name cannot be empty
         at org.springframework.util.Assert.hasText(Assert.java:161)
         at com.sunopsis.sql.SnpsConnection.setDriverName(SnpsConnection.java:302)
         at com.sunopsis.dwg.dbobj.DwgConnectConnection.setDefaultConnectDefinition(DwgConnectConnection.java:380)
         at com.sunopsis.dwg.dbobj.DwgConnectConnection.<init>(DwgConnectConnection.java:274)
         at com.sunopsis.dwg.dbobj.DwgConnectConnection.<init>(DwgConnectConnection.java:288)
         at oracle.odi.core.datasource.dwgobject.support.DwgConnectConnectionCreatorImpl.createDwgConnectConnection(DwgConnectConnectionCreatorImpl.java:53)
         at com.sunopsis.graphical.frame.edit.EditFrameTableData.snpsInitializeSnpsComponentsSpecificRules(EditFrameTableData.java:85)
         at com.sunopsis.graphical.frame.SnpsEditFrame.snpsInitialize(SnpsEditFrame.java:1413)
         at com.sunopsis.graphical.frame.edit.AbstractEditFrameGridBorland.initialize(AbstractEditFrameGridBorland.java:623)
         at com.sunopsis.graphical.frame.edit.AbstractEditFrameGridBorland.<init>(AbstractEditFrameGridBorland.java:868)
         at com.sunopsis.graphical.frame.edit.EditFrameTableData.<init>(EditFrameTableData.java:50)
         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
         at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
         at oracle.odi.ui.editor.AbstractOdiEditor$1.run(AbstractOdiEditor.java:176)
         at oracle.ide.dialogs.ProgressBar.run(ProgressBar.java:655)
         at java.lang.Thread.run(Thread.java:662)
    Is there a specific JAR file associated with Hyperion Essbase?
    And where do I find the default drivers provided with ODI?

    Help, please.

    Thank you
    Santy.

    You cannot view data from an Essbase data store, as it is not configured with a driver that supports this feature.

  • Help displaying information from the data store

    Hi, I'm trying to format a table with information from various commands, and I can't figure out how to do it.

    Basically I want the table to have the following information:

    ESX host - DatastoreName - CanonicalName - CapacityMB - MultipathPolicy

    The problem is that the Get-ScsiLun cmdlet only gives me the canonical name, capacity and multipathing, but I want to add the other Get-VMHost and Get-Datastore information.

    The command I'm using is:

    Get-VMHost | Get-ScsiLun | Where-Object {$_.Vendor -eq "EMC" -and $_.LunType -eq "disk"} | Format-Table -Property CanonicalName, CapacityMB, MultipathPolicy -AutoSize | Out-String -Width 120

    Any idea how to do that?

    Thanks in advance!

    Pablo.

    Try something like this

    $report =@()
    
    foreach($esx in get-vmhost){
        foreach($lun in (Get-ScsiLun -VmHost $esx -LunType disk | where-object {$_.Vendor -eq "EMC"})){
            foreach($ds in (Get-Datastore -VMHost $esx)){
                $ds.ExtensionData.Info.Vmfs.Extent | %{
                    if($_.diskName -eq $lun.CanonicalName){
                        $row = "" | Select Host,DS,CanonicalName,CapacityMB,MultiPathPolicy
                        $row.Host = $esx.Name
                        $row.DS = $ds.Name
                        $row.CanonicalName = $lun.CanonicalName
                        $row.CapacityMB = $lun.CapacityMB
                        $row.MultiPathPolicy = $lun.MultipathPolicy
                        $report += $row
                    }
                }
            }
        }
    }
    
    $report | ft -AutoSize
    

    The link between the data store and the LUN is made by comparing the CanonicalName of the LUN with the DiskName of each extent of the data stores.

  • Please help: extent is missing from the data store

    Hello

    I have a data store with a 20 TB disk and an extent of about 40 TB (local disks, Areca RAID controller).

    Long story short, I had to delete and re-create the 40 TB volume on the RAID controller (without initialization, of course).

    Now all the data is there, but the EUI has changed for the 40 TB volume and VMware is unable to mount it.

    log/hostd.log: 2016-05-10T07:17:16.107Z info hostd[3D380B70] [Originator@6876 sub=Vimsvc.ha-eventmgr] Event 118: A connected device eui.001b4d2051676528:1 may be offline. The file system [Backvol1, 55e980d4-386dfa7f-7cb2-0cc47a09ba36] is now in a degraded state. While the data store is still available, parts of the data residing on the extent that went offline may be inaccessible.

    [root@esxi2:/vmfs/volumes/55e980d4-386dfa7f-7cb2-0cc47a09ba36] esxcli storage vmfs snapshot list

    Volume Name:

    VMFS UUID:

    Can mount: false

    Reason for un-mountability: some extents missing

    Can resignature: false

    Reason for non-resignaturability: some extents missing

    Unresolved Extent Count: 1

    Can someone help me please?

    Thank you!

    Hello
    You asked for instructions on how to re-add a missing extent.
    Basically, it's pretty easy:
    in the volume header section of the VMFS base volume, you can see the assigned extents.
    Let's say that the base volume is mpx.vmhba1:C0:T3:L0:1
    the first extent is mpx.vmhba1:C0:T4:L0:1
    and the second extent is mpx.vmhba1:C0:T5:L0:1
    Then you will see this string in the VMFS header:
    vmhba1:3:0
    and a little later

    mpx.vmhba1:C0:T3:L0:1

    mpx.vmhba1:C0:T4:L0:1

    mpx.vmhba1:C0:T5:L0:1
    You just edit the list of extents.
    But: normally you can't change this section at all, and to add to the confusion, these values are also stored in RAM, so it is not trivial to change them correctly.
    I highly recommend that instead of trying it yourself and repeating all the mistakes I made while learning this - call me.
    I would rather help you personally than give dangerous advice that will probably make things worse.

  • Migrating virtual machines off a data store, and then deleting the data store?

    Hello

    I have a deployment coming up this month, and my director wants to build a new RAID 6 array, create a data store on the new array and then move all the VMs to the newly created data store.

    Then he asked me to take the old data store, remove it, and spread the freed space across the other data stores in our shared environment. The question is: once the new RAID is created and the VMs have been migrated to the new data store, what is the best way to remove the empty data store and make sure the space is available for the other data stores? Any help would be appreciated...

    Greg ~.

    If all checks are met, you can go ahead and unmount it without any problem.

    I reiterate the checks below:

    -No virtual machine is in the data store

    -The data store is not part of a datastore cluster

    -The data store is not managed by Storage DRS

    -SIOC (Storage I/O Control) is disabled for this data store

    -The data store is not used for vSphere HA heartbeat.

    Note especially the two highlighted checks. Please note that SIOC can be enabled on a data store even without Storage DRS.

  • Snapshot removal failure: snapshot disappears from Snapshot Manager but is still on the data store

    Hi guys

    I was contracted to sort out a vSphere 4.1 environment, and there are a few VMs, some with up to 4 snapshots on them that are more than a year old. I've already been remediating some of them with "Delete All" according to the vSphere documentation, and all went very well.

    I just tried to remediate one more; it only has a single VM disk and a snapshot that is only a few months old. The remediation process went wrong, and I got the following error: 'File <unspecified filename> is larger than the maximum size supported by datastore <unspecified datastore>' root 25/05/2012 16:20:07 25/05/2012 16:20:07 25/05/2012 16:20:09.

    The data store has a block size of 1 MB and should be able to support up to a 256 GB file size, and the virtual machine's disk is only 84 GB; the real size of the snapshot is 77 GB, with provisioned space of 84 GB, identical to the original size of the virtual machine.

    The server is a SQL 2008 R2 box, so it is quite important that it does not fail. In addition, the company used to have vCenter installed, but at some point in time they stopped using it and now only connect directly to the ESXi hosts themselves.

    Does anyone have an idea how to fix this? The server seems to be working well, but if you look in the Snapshot Manager the snapshot has disappeared, although the files still exist in the data store. I'll make sure that we do not turn off the server at this time.

    When comparing the .vmx files of this virtual machine and another virtual machine that still has old snapshots on it, I also just found the following differences.

    hostCPUID.0 = "0000000b756e65476c65746e49656e69"
    hostCPUID.1 = "000206c220200800029ee3ffbfebfbff"
    hostCPUID.80000001 = "0000000000000000000000012c100800"
    guestCPUID.0 = "0000000b756e65476c65746e49656e69"
    guestCPUID.1 = "000206c200010800829822030febfbff"
    guestCPUID.80000001 = "00000000000000000000000128100800"
    userCPUID.0 = "0000000b756e65476c65746e49656e69"
    userCPUID.1 = "000206c220200800029822030febfbff"
    userCPUID.80000001 = "00000000000000000000000128100800"

    The information above does not exist in the .vmx file of the virtual machine that has not had its snapshot removed.

    Kind regards

    Niels

    OK, let's put it all together:

    ... there are a few VMs, some with up to 4 snapshots on them that are more than a year old

    ... The data store has a block size of 1 MB and should be able to support up to a 256 GB file size

    ... "The file is larger than the maximum size supported by the datastore"

    --> virtual disks _1, _2 and _3 on different data stores (LUNs)

    Thinking again about what could be the cause of this error message, the only reason I can think of is that the other virtual disks were added after the snapshot was taken, and that at least one of those virtual disks has a size greater than 254 GB (see the section on calculating the overhead required by snapshot files at http://kb.vmware.com/kb/1012384).

    When you delete a snapshot with the virtual machine powered on, ESXi creates a "consolidate helper" snapshot. If this helper snapshot cannot be created, you will receive this error message. If this is the case, you should at least be able to remove the snapshot with the virtual machine powered off.

    The next question is how to resolve this situation in order to be able to create snapshots in the future. Well, the best way - where your hardware is supported - would be to upgrade to ESXi 5 and VMFS-5, which supports file sizes of ~2 TB with a unified 1 MB block size. In case an upgrade is not an option, you will need to either create smaller virtual disks and migrate/copy the data, or create new data stores with a larger block size and migrate the virtual disks.

    An alternative to the above - which however adds complexity to the configuration - is to redirect the snapshots to a data store with an appropriate block size. See http://kb.vmware.com/kb/1002929

    André

  • Technical question about the data store

    Hello everyone.

    I have a question that you can maybe answer:

    "Is it possible to have a WARNING message (check) before formatting a LUN (EMC)?"

    Let me explain:

    We currently have a datastore composed of 12 LUNs (of 236 GB each), and even though this isn't in line with VMware best practices, I made a mistake when creating a new data store.

    Indeed, I used a LUN that was attached directly to a virtual machine (as a Raw Device Mapping) to set up my new data store.

    Would you have a solution that performs a data check (on the LUN) and that could show us that the partition proposed to extend the data store contains data?

    Thanks in advance for your answer!

    J. Gaspoz

    Hi Julien

    Nice to see you on communities

    vCenter will list, in the "Add Storage" wizard, all LUNs that are not already data stores, and by definition an RDM is not formatted as VMFS, since the LUN belongs to the VM, which puts its own filesystem on it.

    There is therefore no direct way to do what you're asking (ESX knows nothing about the data on non-VMFS LUNs).

    Personally, I use a simple trick: the LUN ID. All the LUNs that will be used as RDMs are presented to my ESX hosts with a LUN ID between 50 and 70 (for example). That way I know whether or not a LUN is an RDM.

    Otherwise, maybe third-party products (RVTools perhaps) can produce a report listing the VMs that use RDMs.

    I hope that helps!

    Best regards.

    Stéphane Grimbuhler

    Senior consultant virtualization & storage (VCP / VCAP-DCA)

    VMware instructor (VCI)

    My Blog: www.virtualgeek.ch

  • Find a VM (or data store, folder, pool, etc.) ID by name

    Hello:

    I wonder if there is a way to find the ID of a virtual machine (or ESX host, data store, folder, pool, etc.) by name using the vSphere PowerCLI.

    The goal is to find the virtual machine ID when the name is known (and the same for ESX host, data store, folder, pool, etc.).

    Thank you

    Olegarr

    Hello

    As far as I understand, we can get the managed object ID of a virtual machine like this:

    $serv = Connect-VIServer -Server <vCenter server>

    $vm = Get-VM -Name <VM name>

    Write-Output $vm.Id

    The same works for the others: for example, run Get-VMHost first, store the result in a variable, and then the variable's .Id property gives you the ID.

    I hope this helps.

  • Virtual machine IOPS report: how to display the name of the data store?

    Hi guys

    I am new to the Foglight community; this is a great tool and I am learning a lot.

    Currently I am trying to create a simple table that will show me metrics of my VMware environment: virtual machine name, datastore IOPS and data store name.

    However, I can't figure out how to include the data store name in the table, because it is not a metric of the virtual machine. I think I need to expand the scope of my table to include the VMware Datastore object, but I don't know how to do this.

    -Mark

    Checking the available options, it seems that this can be done with WCF (the framework behind the Foglight dashboards).  We generally recommend that customers who plan to build WCF views take the appropriate training or engage our PSO people for it.

    In any case, I can help by showing a quick example of how it's done.

    Please try this on a local/test server.

    Go to Configuration > Definitions.

    Make sure that you are in "My Definitions" and click the icon to add a view, then choose, under Tables and Trees, a row-oriented table.

    Give the view a name, make it public, make it a portlet/reportlet, and apply.

    Switch to the Configuration tab, click the edit button for the rows, and choose a query.

    Under the query, expand the VMware node and scroll down

    until you can select the query for virtual machines,

    and press the Set button.

    Your view should look like this

    Now you must select the columns.

    Each column has a value you can edit, and there is a + button to add additional columns.

    Let's start with the name of the virtual machine - click on the edit button for your default column and choose the context.

    Click on the drop-down menu for the input key and choose the current row (VMware virtual machine).

    Click on the drop-down menu for the path and scroll down until you can select the name, then click on Set.

    You have created a table that lists the names of all virtual machines.

    You can click on Save, then click Test, choose a time range and click the result. A new window will open and show you the list of virtual machines.
    From here you can continue to add additional columns, each time choosing the input key from the current row and the path to the metric/string to display.

    For example, the name of the data store.
    I go back to editing the view.

    Click the configuration tab and click the icon to add a column

    For the column value, I again chose the defined context; the input key is the current row, and for the path I expand the node for the data store

    and scroll until I see the Name property.

    If you save and test you will see the result

    Keep adding columns for the data you want; notice that you have arrows that allow you to control the order of the columns.

    Note that you can click Show advanced configuration properties

    This will give you access to the extra table properties, such as the header - letting you give a more meaningful name (data store name, virtual machine name, etc.) to the column header.

    You can now drag and drop the table into a dashboard/report, and under My Views you will see your new view.

    Drop it in the main view

    I hope this has given you the starting point to build this table.

    As I said, I strongly recommend going through our WCF training if you plan to build more custom views, or engaging the Dell Software PSO organisation to help build views that match your needs.

    Best regards

    Golan

  • ESXi 5.0 iSCSI target: path active, devices connected and mounted, but not active in the data store view

    Hi,

    I am looking for help to point me in the right direction.  This problem occurred after a reboot of the system.

    I'm on VMware ESXi Essentials 5.0.0, build 821926.

    I use the StarWind software as my iSCSI target.

    I use iSCSI to connect my storage server to the ESXi hosts.

    I have 2 data stores showing inactive under Datastores and Datastore Clusters.  I have a third data store on the same server that is loading properly.

    I currently have the same behavior on three esxi hosts:

    Under Configuration - Storage Adapters, the iSCSI path is active. The data stores appear under Devices and are mounted. In the Datastores view, they are inactive. They are on the same storage server.

    On the StarWind server, I have:

    built a new target

    removed and re-added the devices to the new target

    Changed the IP address of the target.

    On the ESXi hosts, I removed the iSCSI server from both the dynamic and static discovery locations and added it back under the same or new discovery IPs.

    I can restart the ESXi hosts and storage servers and get back to that same point.

    Every time I end up in the same place - paths active, devices mounted, but inactive in the Datastores view.

    I don't know what else to share; let me know what you need to know to help me out.

    Thank you

    Dylan

    OK, in case someone comes across my ramblings, this may help.  My original question was about data stores that would not come up.  They were visible as devices in the storage area under Devices and as devices under the SCSI adapter.

    When I tried to force mount them (Add Storage, keep the existing signature, mount), they wouldn't mount.  I did some research, and my issue was that after a failed force mount attempt the data stores had become attached as snapshot data stores.  I then force mounted them as such, and I got my data stores back.

    I followed this KB to identify and resolve the problem

    "vSphere handling of LUNs detected as snapshot LUNs" (1011387)

    When that was over, I tried to add the VMs back and found a new issue.  Powering on the VMs would time out.  The vmkernel.log showed:

    2014-05-27T07:20:40.010Z [564D7B90 verbose 'Vmsvc'] Vmsvc: Filtering VMs: ignored not-ready VM for power state request vim.VirtualMachine:75

    AND

    2014-05-27T03:45:47.821Z cpu4:2052) NMP: nmp_PathDetermineFailure:2084: SCSI cmd RESERVE failed on path vmhba35:C0:T1:L0, reservation state on device eui.cff548d1952b1e4c is unknown.

    I had huge read/write latencies showing, upwards of 3K and more.

    After several searches, I got into the ESXi shell and found that there was no reservation conflict.

    On a whim, I removed from the inventory all the virtual machines that were now inaccessible.  I then added a DOS virtual machine.  Voila! The latency dropped to 1.2 to .7 ms for all data stores.

    Ultimately, the instructions said you may need to add virtual machines back to the inventory, but not that you should remove all virtual machines first.  I was adding VMs that were not in the inventory, so I had not removed the old virtual machines from the inventory.

    A rookie mistake, yes.

  • VMDK is not displayed when adding the disk, but shows when browsing the data store

    This may have a simple solution, but my forehead hurts from hitting my head against the wall and I need your expert help.

    Environment: running ESX 4.0 with 4 nodes in the cluster.  Each node in the cluster has only one HBA card with dual fiber connections that connect to both sides of the FC SAN.  There is no fiber switch between the nodes and the SAN, so it is direct connect.  Built two systems to support Exchange 2007 using CCS.  Using the FC SAN for shared disk space and iSCSI for non-shared disk space.

    Problem: I have added 5 drives of shared space on System #1 using RDM and they appear.  I tried to load the same drives on System #2.  I am able to load 4 of the drives on System #2, but the 5th disk (which is in fact the 3rd drive in the series) refuses to show up when I add the drive.  I go to Edit Settings -> Add Hard Disk -> Use existing -> open the data store and the folder where the virtual machine's system files and the LUN mappings lie, as they are stored with the system.  But on opening this folder, the LUN mapping in question is absent.  All the others appear.  However, when I right-click to browse the data store outside of the settings and go to the folder where the LUN mapping for the virtual drive is, it appears.

    Attempts to resolve:

    (1) Removed the drive from System #1 and deleted it from the data store, then re-added it to the system => nothing.

    (2) Removed the disk from System #1 without deleting it from the data store, then added it again (thus creating two virtual disks in the system's folder pointing to the same LUN... xxx_3.vmdk and xxx_7.vmdk) => without success.  (Neither xxx_3.vmdk nor xxx_7.vmdk appears when trying to add a drive.)

    (3) Tried to use a system not part of the CCS Exchange cluster, attempted to add xxx_3.vmdk or xxx_7.vmdk, and both appear as an option.

    I am at a total loss.  Do I need to trash System #2 and rebuild it?  Is there a possible System #2 configuration problem that does not allow it to see the two virtual drives?  Is the System #2 configuration corrupt?  Do I have to power System #2 off and on and then try to add the drives?  Can I power-cycle the vCenter virtual machine?  Or am I so much of a noob that the answer is right in front of me and I can't see it?  Any help would be welcome.

    When you want to use RDMs across hosts, please make sure the LUN has the same number on all hosts.

    For example, if you have presented the LUN as LUN 3 to host A from the storage, make sure that it is also LUN 3 on host B.

    Let me know if it does not work.

    Binoche, VMware VCP, Cisco CCNA

  • How to extend the size of the data store

    Hello

    Does anyone know how to perform this task?


    Thank you



    Lewis

    As long as you have enough memory and disk space to accommodate the larger data store, the procedure is as follows:

    1. Stop the data store and make sure that it is unloaded from memory.

    2. Increase the values of the PermSize and/or TempSize attributes for the DSN (in sys.odbc.ini).

    3. Load the data store into memory again.

    Done!

    If you are using TimesTen 11g, step (3) must be performed as the O/S user who is the instance administrator.
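
    As an illustration only, step (2) means editing the DSN definition in sys.odbc.ini; the DSN name, paths and sizes below are made-up examples, not values from this thread:

    [my_datastore]
    # Illustrative DSN entry - adjust the driver path and data store path to your installation.
    Driver=/opt/TimesTen/tt1122/lib/libtten.so
    DataStore=/data/timesten/my_datastore
    # Step (2): raise PermSize (and TempSize if needed); values are in MB.
    PermSize=4096
    TempSize=1024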

    Chris

  • FDMEE data import error: No periods have been identified for loading the data into the table "AIF_EBS_GL_BALANCES_STG"

    Hi experts,

    I tried to load data from EBS into HFM via FDMEE.

    While importing data in the data load rule, I ran into an error.

    2014-11-21 06:09:18,601 INFO [AIF]: FDMEE Process Start, Process ID: 268

    2014-11-21 06:09:18,601 INFO [AIF]: FDMEE Logging Level: 4

    2014-11-21 06:09:18,601 INFO [AIF]: FDMEE Log File: D:\fdmee\outbox\logs\TESTING_268.log

    2014-11-21 06:09:18,601 INFO [AIF]: User: admin

    2014-11-21 06:09:18,601 INFO [AIF]: Location: Testing_loc (Partitionkey:3)

    2014-11-21 06:09:18,601 INFO [AIF]: Period Name: OCT (Period Key: 31/10/14 12:00 AM)

    2014-11-21 06:09:18,601 INFO [AIF]: Category Name: Actual (Category Key: 1)

    2014-11-21 06:09:18,601 INFO [AIF]: Rule Name: Testing_dlr (Rule ID: 8)

    2014-11-21 06:09:19,877 INFO [AIF]: Jython Version: 2.5.1 (Release_2_5_1:6813, September 26 2009, 13:47:54)

    [JRockit(R) (Oracle Corporation)]

    2014-11-21 06:09:19,877 INFO [AIF]: Java Platform: java1.6.0_37

    2014-11-21 06:09:19,877 INFO [AIF]: Log File Encoding: UTF-8

    2014-11-21 06:09:21,368 INFO [AIF]: - START IMPORT STEP -

    2014-11-21 06:09:24,544 FATAL [AIF]: Error in CommData.insertImportProcessDetail
    Traceback (most recent call last): File "<string>", line 2672, in insertImportProcessDetail

    RuntimeError: No periods have been identified for loading the data into the table 'AIF_EBS_GL_BALANCES_STG'.

    2014-11-21 06:09:24,748 FATAL [AIF]: Error launching the GL balances data load

    2014-11-21 06:09:24,752 INFO [AIF]: FDMEE Process End, Process ID: 268

    I found a post related to this error, but it had no answer.

    I know I'm missing something; gurus, please help me overcome this error.

    ~ Thank you

    I managed to overcome this problem.

    It was caused by an error in the period mapping.

    In the source period mapping, the period name should be defined exactly as displayed in EBS.

    For example: {EBS --> OCT-14}  {FDMEE source mapping --> OCT-14}

    The period names must be identical.
