Adding a secondary index over existing stored data (JSON).

I want to store JSON using BDB. We use a property of the JSON object as the key, and the rest of the object (as bytes) as the data. Later, if we want to add a secondary index targeting a property of the JSON for objects already in the store, I cannot do that because the data is stored as bytes. Is there a recommended way to do this? I am very new to BDB.

In BDB, the key used for a secondary index is extracted from the primary record's data (a byte[]) by code that you write. You can convert the primary record's data bytes to JSON, pull out the property you want, and then convert that property back to bytes (since all BDB keys are bytes as well).
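A minimal sketch of that extraction step, using only the JDK. A real application would use a JSON library (e.g. Jackson) instead of the deliberately naive field extractor below; the class and field names are hypothetical, and the BDB wiring is shown only in a comment since it needs je.jar:

```java
import java.nio.charset.StandardCharsets;

public class JsonKeyExtractor {

    // Naive top-level string-field extractor. This stands in for a real JSON
    // parser so the sketch runs with only the JDK; it handles no nesting,
    // escaping, or whitespace.
    static byte[] extractField(byte[] jsonData, String field) {
        String json = new String(jsonData, StandardCharsets.UTF_8);
        String marker = "\"" + field + "\":\"";
        int start = json.indexOf(marker);
        if (start < 0) {
            return null;               // property absent: no secondary key
        }
        start += marker.length();
        int end = json.indexOf('"', start);
        return json.substring(start, end).getBytes(StandardCharsets.UTF_8);
    }

    /*
     * The BDB JE wiring would look roughly like this (hypothetical names,
     * requires je.jar on the classpath):
     *
     * class NameKeyCreator implements SecondaryKeyCreator {
     *     public boolean createSecondaryKey(SecondaryDatabase sdb,
     *                                       DatabaseEntry key,
     *                                       DatabaseEntry data,
     *                                       DatabaseEntry result) {
     *         byte[] prop = JsonKeyExtractor.extractField(data.getData(), "name");
     *         if (prop == null) return false; // skip records without the property
     *         result.setData(prop);
     *         return true;
     *     }
     * }
     */

    public static void main(String[] args) {
        byte[] record = "{\"name\":\"alice\",\"age\":30}"
                .getBytes(StandardCharsets.UTF_8);
        byte[] secKey = extractField(record, "name");
        System.out.println(new String(secKey, StandardCharsets.UTF_8)); // prints: alice
    }
}
```

Returning false from createSecondaryKey simply omits that record from the secondary index, which is how optional properties are usually handled.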

See:

SecondaryKeyCreator (Oracle - Berkeley DB Java Edition API)

And to make it easy to convert the property to bytes:

com.sleepycat.bind.tuple (Oracle - Berkeley DB Java Edition API)

The Collections API tutorial is a good way to learn how this works, even if you do not use the Collections API:

Berkeley DB Java Edition Collections tutorial

-mark

Tags: Database

Similar Questions

  • How to add a LUN with an existing datastore

    I have a LUN that I replicated from a SAN in one data center to a SAN in another data center. I want to map this LUN to an ESX 4 cluster and use the existing datastore on the LUN to recover the virtual machines it contains. What is the best way to do this? Will the hosts see the existing datastore when I rescan the HBAs, or is there a trick to adding an existing datastore to a cluster?

    It's a separate LUN on a different SAN, mapped to a different set of hosts. Once I rescan for the new LUNs and datastores, will the datastore on this LUN show up in the list of datastores, so that I can then browse the datastore and register the virtual machines on it? Or do I still have to click Add Storage first and select the existing datastore?

    Given that the hosts did not see the original LUN before, the datastore should appear right after the rescan.

    André

  • Cannot add NFS share to datastore cluster

    Hello

    I have this scenario:

    Server - FreeNAS with NFS share

    - Cluster with 2 hosts

    I added the NFS datastore successfully to each host in the cluster. The problem is that when I try to add the NFS datastore to the datastore cluster, I get the error: "A specified parameter was not correct. datastore.info.type".

    Any suggestions would be much appreciated.

    Thank you.

    Are datastore1 and datastore2 local datastores? If so, you must remove those datastores before adding the NFS datastore to the cluster: first, because a cluster cannot mix VMFS and NFS datastores, and second, because it makes no sense to have local datastores within a datastore cluster.

  • Datastore 'datastore1' conflicts with a datastore that exists in the data center with the same URL (.), but is backed by different physical storage

    Hello

    I am new to vCenter Server, so you can assume I'm missing something obvious.

    I installed vCenter Server 5.5, and one of my two re-used ESXi 5.0.0 hosts connected correctly. When I try to connect the second ESXi 5.0.0 host, I get the error message:

    Datastore 'datastore1' conflicts with a datastore that exists in the data center with the same URL (.), but is backed by different physical storage

    I Googled it and found what I think are the most relevant answers, but they all seem pretty specific to their own situations (they have a cluster, I do not) etc. Some solutions involve disconnecting the datastore and rebuilding things. I would rather not make things worse, and I can probably live with using the vSphere client (and not the web client) until I can start again with a fresh 5.5 installation on a new host, once I have backed up everything from the unconnectable host. I have shut down all the VMs on the second host, put it in maintenance mode, and renamed the datastore, all to no avail.

    Thanks in advance

    The problem was that the whole host could not be connected to vCenter because of this issue, so the solutions involving disconnecting the host from vCenter did not apply.

    Here is how I solved the problem:

    1. Using the thick vSphere client, connect to the host that cannot be connected to vCenter Server.

    2. Click the host, then click the Configuration tab.

    3. Click "Storage".

    4. Find the offending datastore on the right.

    5. Right-click the offending datastore and click "Remove".

    6. Click "Add Storage" in the top right.

    7. Follow the steps to find the unmounted datastore. I gave it a new name just to be safe, even though it's probably not necessary.

  • Second partition on the same datastore question

    I am currently trying to partition one of my datastores and running into problems. I have a 2 TB datastore; I first carved out 20 GB for my OS, and I want to use the remaining amount for storage. When I edit the VM and add a second partition as a hard drive, it does not appear as a disk on the server. Is there something I need to do that I'm missing?

    Once the datastore is created, you cannot repartition it. Have you already installed your operating system on the 20 GB drive? Have you changed the settings of the virtual machine to add the second hard drive? Once you have changed the virtual machine's settings and added the hard drive, you should be able to start the virtual machine and then partition and format the disk within the operating system, using the normal OS disk tools.

  • Adding ESXi 4.1 to the same datastore as ESX 4.0

    Dear community

    I need to connect two ESXi 4.1 hosts to the three datastores that are currently connected to four ESX 4.0 hosts with VMs in production, and then migrate all the infrastructure to ESXi without stopping the virtual machines.
    Will I have a problem? All of this is on an EMC Clariion CX3 SAN.

    Thank you

    Fabio

    You can have 4.1 and 4.0 connected to the same datastore without any problem.

  • Question on UniqueConstraintException and secondary indexes.

    Hello

    We use the BDB DPL package for loading and reading data.

    We have created a ONE_TO_MANY secondary index, for example ID -> Account_NOs.

    While loading data, using e.g. primaryIndex.put(id, account_nos), a UniqueConstraintException is thrown when duplicate account numbers exist for another id.

    But although UniqueConstraintException is thrown, the record with the duplicate secondary key is still loaded into BDB. I think the data should not get stored if an exception is thrown. Can you tell me if I am missing something here?

    for example.

    ID = 101 -> accounts = 12345, 34567 - loads successfully with key = 101
    ID = 201 -> accounts = 77788, 12345 - throws an exception, but the data is still stored for key = 201.

    Is your store transactional? If it isn't, that explains it, and I suggest you make your store transactional. A non-transactional store with secondary indexes is asking for trouble, and you are responsible for manually correcting integrity errors if the primary and secondary databases get out of sync.

    See 'Special Considerations for Using Secondary Databases with and without Transactions' here:
    http://download.oracle.com/docs/cd/E17277_02/html/java/com/sleepycat/je/SecondaryDatabase.html
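A minimal configuration sketch of what "making the store transactional" means with the DPL. This assumes je.jar on the classpath; the environment path and store name are hypothetical, so it is a sketch rather than a runnable recipe:

```java
// Sketch only: requires Berkeley DB JE (je.jar) on the classpath.
import java.io.File;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
import com.sleepycat.persist.EntityStore;
import com.sleepycat.persist.StoreConfig;

public class TxnStoreSetup {
    public static void main(String[] args) throws Exception {
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        envConfig.setTransactional(true);     // environment must be transactional
        Environment env = new Environment(new File("/path/to/env"), envConfig);

        StoreConfig storeConfig = new StoreConfig();
        storeConfig.setAllowCreate(true);
        storeConfig.setTransactional(true);   // and the DPL store must match
        EntityStore store = new EntityStore(env, "accountStore", storeConfig);

        // With a transactional store, a put() that fails with
        // UniqueConstraintException is rolled back, so the primary and
        // secondary databases stay in sync.
        store.close();
        env.close();
    }
}
```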

    -mark

  • One of the secondary indexes is not complete

    Hello

    I have an entity with 18,847 records. It has a primary key and several secondary keys. From the verification output below, we can see that all the indexes are complete except ProductId. What should I do to fix this error?

    Verification of database persist#gdlogs#test.TableWhProductStorageCard

    Tree check of persist#gdlogs#test.TableWhProductStorageCard

    BTree: Composition of the btree, types and number of nodes.
    binCount = 149
    binEntriesHistogram = [40-49%: 1; 80-89%: 1; 90-99%: 147]
    binsByLevel = [level 1: count = 149]
    deletedLNCount = 0
    inCount = 3
    insByLevel = [level 2: count = 2; level 3: count = 1]
    lnCount = 18,847
    mainTreeMaxDepth = 3
    BTree: Composition of the btree, types and number of nodes.

    Verification of database persist#gdlogs#test.TableWhProductStorageCard#BatchNo

    Tree check of persist#gdlogs#test.TableWhProductStorageCard#BatchNo

    BTree: Composition of the btree, types and number of nodes.
    binCount = 243
    binEntriesHistogram = [40-49%: 43; 50-59%: 121; 60-69%: 30; 70-79%: 23; 80-89%: 17; 90-99%: 9]
    binsByLevel = [level 1: count = 243]
    deletedLNCount = 0
    inCount = 4
    insByLevel = [level 2: count = 3; level 3: count = 1]
    lnCount = 18,847
    mainTreeMaxDepth = 3
    BTree: Composition of the btree, types and number of nodes.

    This secondary index is correct (its lnCount is the same as the primary index's).


    Verification of database persist#gdlogs#test.TableWhProductStorageCard#ProductId

    Tree check of persist#gdlogs#test.TableWhProductStorageCard#ProductId

    BTree: Composition of the btree, types and number of nodes.
    binCount = 168
    binEntriesHistogram = [40-49%: 16; 50-59%: 47; 60-69%: 39; 70-79%: 26; 80-89%: 26; 90-99%: 14]
    binsByLevel = [level 1: count = 168]
    deletedLNCount = 0
    inCount = 3
    insByLevel = [level 2: count = 2; level 3: count = 1]
    lnCount = 14,731
    mainTreeMaxDepth = 3
    BTree: Composition of the btree, types and number of nodes.

    This index is not complete (its lnCount is less than the primary index's). When this index is used to iterate through the rows, only the first 14,731 records are returned.


    Apparently your secondary index database has somehow become out of sync with your primary. Normally this is caused by not using a transactional store (EntityStore.setTransactional). But whatever the cause, I will describe how to correct the situation by rebuilding the index.

    (1) Take your application offline so that no other operations occur.

    (2) Make a backup in case a problem occurs during this procedure.

    (3) Do not open the EntityStore yet.

    (4) Delete the index database that is out of sync (persist#gdlogs#test.TableWhProductStorageCard#ProductId) by calling Environment.removeDatabase with that name.

    (5) Rebuild the index database simply by opening the EntityStore. It will take longer than usual, since the index is rebuilt before the EntityStore constructor returns.

    (6) Confirm that the index was rebuilt correctly.

    (7) Bring your application back online.
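The steps above might be sketched roughly like this (assuming je.jar on the classpath; the environment path is hypothetical, the store name is inferred from the database name in the question, and error handling is elided):

```java
// Sketch of the rebuild procedure: requires Berkeley DB JE (je.jar).
import java.io.File;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
import com.sleepycat.persist.EntityStore;
import com.sleepycat.persist.StoreConfig;

public class RebuildIndex {
    public static void main(String[] args) throws Exception {
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setTransactional(true);
        Environment env = new Environment(new File("/path/to/env"), envConfig);

        // Step 4: remove the out-of-sync secondary index database by name,
        // before opening the EntityStore.
        env.removeDatabase(null,
            "persist#gdlogs#test.TableWhProductStorageCard#ProductId");

        // Step 5: reopening the EntityStore rebuilds the missing index; the
        // constructor does not return until the rebuild completes. The store
        // name here ("gdlogs") is inferred from the database name and may
        // differ in your application.
        StoreConfig storeConfig = new StoreConfig();
        storeConfig.setTransactional(true);
        EntityStore store = new EntityStore(env, "gdlogs", storeConfig);

        // Step 6: verify, e.g. re-run your verification and compare lnCounts,
        // before bringing the application back online (step 7).
        store.close();
        env.close();
    }
}
```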

    -mark

  • Best way to migrate data from a datastore to an RDM

    Hi all

    I have a possibly simple question.

    I have some SQL clusters to customers that are not configured by the best practices of VMware.

    They are running ESX 4.0 U2 with vCenter 5.0 U2.

    Some of them are running both nodes on the same ESX host, some of them have mixtures of RDMs and datastores, and the LUNs are connected with a mixture of multipathing policies.

    I can make the changes to the MPP and separate the nodes without problem, but I was wondering if anyone has recommendations for transferring the data from an existing datastore to an RDM?

    Obviously, if possible, I would disconnect the datastore and add it again as an RDM, but is that OK to do?

    Are there hidden catches here?

    If that isn't possible, would I need to create a new disk as an RDM and Storage vMotion the data between the two?

    I don't need to migrate the operating system data, only the data disks, so can I use Storage vMotion for this, or would it be better to run VMware Converter?

    Does anyone have advice or recommendations?

    Thanks in advance

    Mark


    You can't add the datastore to a server as an RDM, because the datastore is in VMFS format.

    You could use VMware Converter, or restore backups, to place the data on the RDM.

  • How to move a virtual machine to another datastore on the same host

    Hello

    I have a single ESX4 host with 3 datastores, and I need to move a virtual machine to another datastore. How can I do that, given that I can't migrate the virtual machine when connected directly to the host with the vSphere client, and I don't have a vCenter Server either?

    Thanks, Julien

    Hello

    Yes, you can use # vmkfstools -i /vmfs/volumes/datastore/VM/VM.vmdk /vmfs/volumes/datastore/VM1/VM1.vmdk

    For the record, the datastore must be shared if you copy the VM between hosts, and the VM must be powered off. The above command copies VM.vmdk to VM1.vmdk; you then attach the newly created VMDK to an existing VM.

    If you have found this or other information useful, please consider awarding points for 'Correct' or 'Helpful'.

  • Expanding a RAID 5 datastore

    Hi, I have a datastore on a 4-disk RAID 5 array and I want to add 2 more drives to the RAID 5. Is it possible to expand the datastore / RAID 5 array while keeping the existing datastore / data (VMs) intact?

    Thanks in advance!

    You must enter your PERC BIOS.

    Follow the Dell manual.

    And (as written above), perform a full backup before you start.

    André

  • Shared vs. local datastores and DNS in 5.1

    I read page 56 of the vSphere, ESXi, vCenter Server 5.1 Upgrade Guide

    It addresses the issue of DNS load balancing and vCenter Server data store name.

    I think it is discussing the issue of shared storage with multiple hosts accessing the same datastore, where each host must give it the same name, because 5.1 no longer resolves DNS names to IP addresses but now uses the DNS name for the datastore.

    I have no shared storage.  All my datastores (3 now, on 3 hosts) are local.  I assume I therefore have no need to name all my local datastores the same.  Correct?

    Here is the text directly from the page 56:

    DNS Load Balancing Solutions and vCenter Server Datastore Naming

    vCenter Server 5.x uses different internal identifiers for datastores than previous versions of vCenter Server. This change affects the way you add shared NFS datastores to hosts and can affect upgrades to vCenter Server 5.x.

    vCenter Server versions before 5.0 convert datastore host names to IP addresses. For example, if you mount an NFS datastore by the name \\nfs-datastore\folder, pre-5.0 vCenter Server versions convert the name nfs-datastore to an IP address such as 10.23.121.25 before storing it. The original name nfs-datastore is lost.

    This conversion of host names to IP addresses causes a problem when DNS load balancing solutions are used with vCenter Server. DNS load balancing solutions replicate data and present it as a single logical datastore. The load balancing occurs during the datastore host-name-to-IP conversion, by resolving the host name to different IP addresses depending on the datastore load. This load balancing happens outside vCenter Server and is implemented by the DNS server. In versions prior to 5.0, vCenter Server features like vMotion do not work with these DNS load balancing solutions, because the load balancing causes a single logical datastore to appear to be multiple datastores. vCenter Server fails to perform vMotion because it cannot recognize that what it considers multiple datastores are actually a single logical datastore shared between two hosts.

    To fix this, vCenter Server versions 5.0 and later do not convert datastore names to IP addresses when you add datastores. This enables vCenter Server to recognize a shared datastore, but only if you add the datastore to each host by the same datastore name. For example, vCenter Server does not recognize a datastore as shared between hosts in the following cases:

    - The datastore is added by host name on host1 and by IP address on host2.

    - The datastore is added by host name to host1 and by hostname.vmware.com to host2.

    For vCenter Server to recognize a shared datastore, you must add the datastore by the same name on each host.

    You are right - the section relates to shared NFS storage. With load balancing, the same NFS server name resolves to multiple IP addresses. In older versions of vCenter, the NFS server name was converted to an IP address that could differ, which would cause problems with vMotion and other vCenter operations. With vCenter 5.1 the NFS host name is kept.

  • "Virtual Machine is on a datastore not managed by vCloud Director."

    When deploying new virtual machines in vCD 1.5, we see system alerts on all the deployed virtual machines that say: "Virtual machine is on a datastore not managed by vCloud Director.  The virtual machine has media or a disk mounted on a datastore not managed by vCloud Director.  Ensure that the virtual machine has all disks and media mounted on a datastore managed by vCloud Director".

    The virtual machines in the catalog were imported from vCenter and have a single disk and no mounted media.  The storage is the same as where the new virtual machines are deployed.  The virtual machines power on fine and seem to be OK, but they all show this system alert, across 3 catalogs from different sources.

    Indeed, that was the issue.

    I was using SSDs in the local disks for the VM vswap. I had this configured on the cluster (the cluster vswap policy for vSphere) and I got this error message.

    I configured the cluster vswap policy to use the same directory as the virtual machine, and I use the new "Host Cache" feature of ESXi 5 to use my local SSD drives.

    Fixed it.

    To fix it for you, just configure the cluster vswap policy to store the vswap with the virtual machine (on the same storage seen by vCD), or add NFS storage to vCD, and that will fix your problem.

  • ESXi and datastore design

    Hi all -

    I have searched the VMware site but can't find any info about my setup, and I hope someone can help me with a few questions about datastore configuration, or maybe someone has implemented a similar setup in their environment.  Any comments would certainly be appreciated.


    Questions: I have two ESXi 5.0 hosts that will connect to a VNX 5300 via iSCSI. Each ESXi host will run about 6 virtual machines - Windows 2008 R2 file servers, each with a 50 GB C: drive on iSCSI SAN disk. Should I create a single datastore, e.g. 500 GB, for all the VMs' C: drives?
    In addition to the C: partition, each file-server VM will host several data partitions larger than 2 TB, for example a 3 TB S: and a 2 TB T: partition. Do I create a datastore for each partition and then, under the VM's settings, select Edit Settings, add a hard disk, and browse to the datastore for each VM partition? This means I would create a datastore for each data partition, correct?

    Thank you


    Almost:

    ESXi 5.0 supports LUNs/datastores of up to 64 TB (without extents) = RDMs in physical compatibility (pass-through) mode

    AND

    2 TB minus 512 bytes for virtual disks on VMFS datastores = VMDKs, plus RDMs in virtual compatibility mode

    André

  • iSCSI datastore/LUN configuration question

    Hi all!  I am brand new to VMware and iSCSI, and I have a question about datastores mounted on an iSCSI target.

    Here's my setup.  I have a Dell PowerEdge R810 connected over a dedicated 10 Gb link to a Dell PowerEdge 2950 that I use as a SAN.  The R810 has no hard drives in it; it boots ESXi from a redundant SD card configuration built into the server.  My SAN software is Server 2008 Enterprise with Microsoft iSCSI Software Target 3.3 installed.  I created target 1, LUN 0, an 800 GB .vhd file sitting on an NTFS partition, and set up a datastore on it in vCenter.  The datastore is the size of the entire 800 GB iSCSI target.  I have my first virtual server up, taking 50 GB of space from the datastore.

    Here is my question: can I create several virtual machines using the datastore I set up?  I know it sounds like a stupid question, but my concern comes from some reading I did about multiple computers accessing a single LUN.  If I set up another virtual machine, will it blow up the file system on my SAN, since two virtual machines would be accessing the same big VHD file on the NTFS partition, or am I way off base?

    Bonus question:

    I have another R810 that I will bring online soon.  Assuming I am OK to add more virtual machines beyond my first machine, could I connect this box to the datastore as well and create virtual machines, or should I set up a separate datastore on another iSCSI target?

    Thanks for your patience with a new guy.  I have searched and read for hours, and I may have read too much, so I thought I would just ask the experts.

    Hello and welcome to the forums.

    Can I create several virtual machines using the datastore that I set up?

    Yes.

    If I set up another virtual machine, will it blow up the file system on my SAN, since two virtual machines would be accessing the same big VHD file on the NTFS partition, or am I way off base?

    No, you should be fine.  One thing to keep in mind is that the Windows/MS iSCSI target is not a supported storage solution.  There could be scalability problems with the Windows/MS iSCSI target, but you'll be fine from a VMware perspective.

    I have another R810 that I will bring online soon.  Assuming I am OK to add more virtual machines beyond my first machine, could I connect this box to the datastore as well and create virtual machines, or should I set up a separate datastore on another iSCSI target?

    You could, but I personally would set up a separate datastore on another iSCSI target.

    Good luck!
