Question on UniqueConstraintException and secondary indexes.

Hello

We use the BDB DPL package for loading and reading data.

We have created a ONE_TO_MANY secondary index, for example ID -> Account_NOs.

During data loading, using for example primaryIndex.put(id, account_nos); a UniqueConstraintException is thrown when an account number duplicates one already stored under an existing id.

But although UniqueConstraintException is thrown, the duplicate secondary key data is still loaded into BDB. I think the data should not get changed if an exception is thrown. Am I missing something here?

For example:

ID = 101 -> accounts = 12345, 34567 - key = 101 loads successfully.
ID = 201 -> accounts = 77788, 12345 - an exception is thrown, but the data for key = 201 is still stored.

Is your store transactional? If it isn't, that explains it, and I suggest you make your store transactional (a minimal sketch follows below). A non-transactional store with secondary indexes is asking for trouble, and you are responsible for manually correcting integrity errors if the primary and secondary databases get out of sync.

See 'Special considerations for using Secondary Databases with and without Transactions' here:
http://download.Oracle.com/docs/CD/E17277_02/HTML/Java/COM/Sleepycat/je/SecondaryDatabase.html
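
For illustration only (not part of the original reply), here is a minimal sketch of opening a transactional environment and store; the class name, directory path and store name are placeholders:

import java.io.File;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
import com.sleepycat.persist.EntityStore;
import com.sleepycat.persist.StoreConfig;

public class OpenTransactionalStore {
    public static void main(String[] args) throws Exception {
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        envConfig.setTransactional(true);          // the environment must be transactional
        Environment env = new Environment(new File("/path/to/envHome"), envConfig);

        StoreConfig storeConfig = new StoreConfig();
        storeConfig.setAllowCreate(true);
        storeConfig.setTransactional(true);        // and so must the store
        EntityStore store = new EntityStore(env, "MyStore", storeConfig);

        // ... primaryIndex.put(...) calls now run with auto-commit transactions,
        // so a UniqueConstraintException aborts the whole operation.

        store.close();
        env.close();
    }
}

With both flags set, a put that throws UniqueConstraintException is aborted as a unit, so the duplicate secondary data should not remain in the store.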

-mark

Tags: Database

Similar Questions

  • Difference between the foreign key index and secondary index

    Hello

    Suppose we have two primary databases,

    Employee (id, company_id)

    Company (id, name)

    (where company-to-employee is one-to-many, and I use the Collections API)

    We want to perform a join between Employee and Company based on the company ID. I want to know the difference between the following two options:

    1 - Build a secondary index on Employee(company_id); call this index index1. Then we use the following code:

    for each company c:
        index1.get(c.ID)

    2 - Build a foreign key index on Employee(company_id), where the foreign key database is Company; call this index index2. Then we use the following code:

    for each company c:
        index2.get(c.ID)

    I have two questions:

    1 - What is the difference between these two options in terms of performance?

    2. I know that one of the benefits of a foreign key is the enforcement of integrity constraints (CASCADE, ABORT, etc. on delete). Does declaring a foreign key give me any advantage when I want to do a join? (for example, a faster method, or a single call to do the join)

    Thank you
    Wait.

    Whatever the example, the only advantage of a foreign key index (over and above the benefits of a secondary index) is that it enforces foreign key constraints. There is no other advantage to a foreign key index.
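
    For illustration only (not part of the reply above), a rough sketch of option 2 using the JE base API; an open transactional Environment env, a key creator companyIdKeyCreator, and the database names are assumptions:

    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.ForeignKeyDeleteAction;
    import com.sleepycat.je.SecondaryConfig;
    import com.sleepycat.je.SecondaryDatabase;

    DatabaseConfig dbConfig = new DatabaseConfig();
    dbConfig.setAllowCreate(true);
    dbConfig.setTransactional(true);

    Database companyDb = env.openDatabase(null, "Company", dbConfig);
    Database employeeDb = env.openDatabase(null, "Employee", dbConfig);

    SecondaryConfig secConfig = new SecondaryConfig();
    secConfig.setAllowCreate(true);
    secConfig.setTransactional(true);
    secConfig.setSortedDuplicates(true);              // many employees per company
    secConfig.setKeyCreator(companyIdKeyCreator);     // hypothetical creator extracting company_id from an Employee record
    secConfig.setForeignKeyDatabase(companyDb);       // this is what makes index2 a "foreign key index"
    secConfig.setForeignKeyDeleteAction(ForeignKeyDeleteAction.ABORT);

    SecondaryDatabase index2 =
        env.openSecondaryDatabase(null, "EmployeeByCompanyId", employeeDb, secConfig);

    Option 1 (index1) would be identical except without the two setForeignKey* calls; the join loop itself is the same either way, which matches the point above that the only difference is constraint enforcement.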

    -mark

  • Question about the use of secondary indexes in application

    Hi, I'm a newbie to Berkeley DB. We use Berkeley DB for our application that has tables in the following structure.

    Key       value1     value2
    -------   ---------  ----------
    1) E_ID -> E_Attr, A_ID  - where A_ID is a String, e.g. A_ID = A1;A2;A3
       - where E_ID is unique, but e.g. A1 or A2 may be part of multiple E_IDs, say E1, E3, E5, etc.


    So my question is: is it possible to create secondary indexes on the individual items of value2 (e.g., A1, A2 or A3)? (See the sketch at the end of this thread.)


    Another question: let's say we have two tables

    Key       value1     value2
    -------   ---------  ----------
    2) X_ID -> X_Attr, E_ID

    E_ID -> E_Attr, A_ID  - where A_ID is a String, e.g. A_ID = A1;A2;A3

    In this case, can we create E_ID as a secondary index, but with the primary table being E_ID -> E_Attr, A_ID?

    So that, given an X_ID, we can get the corresponding record from the E_ID -> E_Attr, A_ID table?

    Don't know if it's possible.

    Thanks for reading.

    (1) When talking about data & index, I was referring to a READ-ONLY BDB with NO UPDATES, where you load entire files, for example on a weekly basis. In this case, I believe the data will be stored directly in the tree; it will not be stored in transaction logs as such. Is this assumption correct?

    The storage is nothing other than a transaction log. Read the white paper that I mentioned.

    (2) And about the garbage collection operation, I meant BDB's cache eviction algorithms. Sorry I did not communicate that clearly before.

    We use an LRU algorithm. What exactly do you need to know that you cannot get from the doc?

    -mark
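
    Editorial addendum, not part of the reply above: for the first question (indexing the individual items of value2), one possible approach with the JE base API is a SecondaryMultiKeyCreator that emits one secondary key per A_ID. A rough sketch, assuming a hypothetical record layout in which value2 is a ';'-separated string after a '|' delimiter:

    import java.util.Set;
    import com.sleepycat.bind.tuple.StringBinding;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.SecondaryDatabase;
    import com.sleepycat.je.SecondaryMultiKeyCreator;

    public class AIdKeyCreator implements SecondaryMultiKeyCreator {
        public void createSecondaryKeys(SecondaryDatabase secondary,
                                        DatabaseEntry key,
                                        DatabaseEntry data,
                                        Set<DatabaseEntry> results) {
            // Hypothetical layout: the record data is "E_Attr|A1;A2;A3" as a string.
            String value = new String(data.getData(), data.getOffset(), data.getSize());
            String aIds = value.substring(value.indexOf('|') + 1);
            for (String aId : aIds.split(";")) {
                DatabaseEntry result = new DatabaseEntry();
                StringBinding.stringToEntry(aId, result);   // one secondary key per A_ID
                results.add(result);
            }
        }
    }

    The creator would be registered via SecondaryConfig.setMultiKeyCreator before opening the secondary database; a lookup on the secondary by A1 then returns every E_ID record containing A1 (E1, E3, E5, etc.).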

  • How to move a FILE from one place to another, when "Indexing" has moved it to an unknown location so you can't discover its full file path? Windows instructions give wrong information on how to do this!

    I made a bunch of audio files and placed them in a folder on my desktop. The files were initially sent to RealPlayer to burn, but when I finished burning the CD and went to play it, the folder got seized by Media Player, 'Indexed', and disappeared from the desktop. I'm a relatively new computer user, and I need to learn more about file paths, how to view the FULL path of a file on my computer, and how to type (create) a full path when I need to. The "Indexing" feature seems to have forced this lesson on me, and after having spent four hours trying to find Vista instructions on "How to move a file from one place to another", I gave up! Windows 'Help and Support' on my computer gives wrong directions. It states that if I right-click on a folder > Properties, a dialog box opens with a tab by which I can move my folder. There is no Location tab there. I did find a Location tab when right-clicking on the "Roaming" folder, but still no option to "move file". I have no idea what the "Roaming" folder is or why it's on my computer. I want my audio files in the My Music folder, but that location is "access denied". I don't know how to get to the audio files in any case, but if anyone has any advice, I would be very happy! Thank you. PS - I had no problem moving folders in XP. I don't like the idea that the computer decides where to put my files. I want to control where I put my files. I don't like the way search works under Vista. I liked the XP Search Companion better because, for a computer fool like me, it was really easy to organize and find files and folders, and it had a specific option to find audio and video file TYPES.

    Here is an article on how to move your personal folders in Vista: http://www.howtogeek.com/howto/windows-vista/moving-your-personal-data-folders-in-windows-vista-the-easy-way/.  If you're talking about the special folders (such as Pictures, Documents, Desktop...), then here is an article on how to move those: http://www.winhelponline.com/articles/95/1/How-to-move-the-special-folders-in-Windows-Vista.html.

    If you have trouble with the search after you move the files, then rebuild the index: http://www.tech-recipes.com/rx/2103/vista_rebuilding_the_search_index/.  Here is an article on how to use Indexing Options in Vista that may be useful for you: http://www.vistax64.com/tutorials/69581-indexing-options.html.

    If the above does not help, your problem seems to relate to your picture files/folders and their interaction with Media Center (which operates on various other folders).  Please repost your question in the Pictures and Videos forum at: http://social.answers.microsoft.com/Forums/en-US/vistapictures/threads where the people who specialize in picture issues will be more than happy to help you with your questions.

    Good luck!

    Lorien - MCSA/MCSE/Network+/A+ - if this post solves your problem, please click the 'Mark as Answer' or 'Helpful' button at the top of this message. By marking a post as Answered or Helpful, you help others find the answer more quickly.

  • question about redundancy and failover for a site-to-site tunnel

    First, I create the crypto map below:

    crypto map IPSec_map 10 match address encrypt-acl
    crypto map IPSec_map 10 set peer 209.165.201.1
    crypto map IPSec_map 10 set transform-set RIGHT
    crypto map IPSec_map 10 set reverse-route

    Then I set up a 2nd crypto map entry, matching the same ACL:

    crypto map IPSec_map 20 match address encrypt-acl
    crypto map IPSec_map 20 set peer 23.10.10.10
    crypto map IPSec_map 20 set transform-set RIGHT
    crypto map IPSec_map 20 set reverse-route

    My first question is - since crypto maps are processed in order, does that mean the first VPN tunnel (map 10) will always be used if it is in place?

    If so, what happens when the remote peer 209.165.201.1 becomes unreachable? Does the tunnel to the 23.10.10.10 peer come up automatically?

    What is the best way to achieve a primary and secondary site-to-site VPN where 209.165.201.1 is the primary and 23.10.10.10 is the backup that comes up only when the primary is down?

    Thank you

    Hello

    As you mentioned, crypto maps are processed in order.

    If two crypto map entries match the same "interesting" traffic, then the second entry is never used (the first crypto map entry is used).

    The best way to get redundancy is to do the following:

    crypto map IPSec_map 10 match address encrypt-acl
    crypto map IPSec_map 10 set peer 209.165.201.1 23.10.10.10
    crypto map IPSec_map 10 set transform-set RIGHT
    crypto map IPSec_map 10 set reverse-route

    Note that in the example above you define a single crypto map entry with two peers. The first peer is tried first and, if it does not respond, the second peer is used as a backup.

    Hope this helps.

    Federico.

  • Add a secondary index for existing data in the store (json)

    I want to store json messages using BDB. We use a property of the json object as the key and the rest of the json object (as bytes) as the data. Later, if we want to add a secondary index targeting a property of the json for the objects already in the data store, I cannot do that because the data is stored as bytes. Is there any recommendation on how to do this? I am very new to BDB.

    In BDB, the field used for a secondary index is extracted from the primary record's data (byte[]) by code that you write.  You can convert the primary record's data bytes to JSON, pull out the property you want, and then convert that property to bytes (since all BDB keys are just bytes).
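
    As an illustration only (not from the reply above), a minimal SecondaryKeyCreator sketch; the property name "customerName" and the extractStringProperty helper are hypothetical placeholders for whatever JSON parser you use:

    import com.sleepycat.bind.tuple.StringBinding;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.SecondaryDatabase;
    import com.sleepycat.je.SecondaryKeyCreator;

    public class JsonPropertyKeyCreator implements SecondaryKeyCreator {
        public boolean createSecondaryKey(SecondaryDatabase secondary,
                                          DatabaseEntry key,
                                          DatabaseEntry data,
                                          DatabaseEntry result) {
            // Convert the primary record bytes back into a JSON string.
            String json = new String(data.getData(), data.getOffset(), data.getSize());
            // Pull out the property to index (placeholder helper).
            String property = extractStringProperty(json, "customerName");
            if (property == null) {
                return false;                               // no property -> no secondary key for this record
            }
            StringBinding.stringToEntry(property, result);  // property string -> key bytes
            return true;
        }

        private String extractStringProperty(String json, String name) {
            // Placeholder: plug in your JSON library of choice here.
            throw new UnsupportedOperationException("use a real JSON parser");
        }
    }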

    See:

    SecondaryKeyCreator (Oracle - Berkeley DB Java Edition API)

    And to make it easy to convert the property to bytes:

    com.sleepycat.bind.tuple (Oracle - Berkeley DB Java Edition API)

    The collections API tutorial is good to learn how it works, even if you do not use the collections API:

    Berkeley DB Java Edition Collections tutorial

    -mark

  • One of the secondary indexes is not complete

    Hello

    I have an entity with 18847 records.  It contains a primary key and several secondary keys.  From the verify output below, we can see that all indexes are complete except ProductId.  What should I do to fix this error?

    Verification of data for persist#gdlogs#test.TableWhProductStorageCard

    Tree check for persist#gdlogs#test.TableWhProductStorageCard

    BTree: Composition of the btree, types and number of nodes.

    binCount = 149

    binEntriesHistogram = [40-49%: 1; 80-89%: 1; 90-99%: 147]

    binsByLevel = [level 1: count = 149]

    deletedLNCount = 0

    inCount = 3

    insByLevel = [level 2: count = 2; level 3: count = 1]

    lnCount = 18,847

    mainTreeMaxDepth = 3

    BTree: Composition of the btree, types and number of nodes.

    Verification of data for persist#gdlogs#test.TableWhProductStorageCard#BatchNo

    Tree check for persist#gdlogs#test.TableWhProductStorageCard#BatchNo

    BTree: Composition of the btree, types and number of nodes.

    binCount = 243

    binEntriesHistogram = [40-49%: 43; 50-59%: 121; 60-69%: 30; 70-79%: 23; 80-89%: 17; 90-99%: 9]

    binsByLevel = [level 1: count = 243]

    deletedLNCount = 0

    inCount = 4

    insByLevel = [level 2: count = 3; level 3: count = 1]

    lnCount = 18,847

    mainTreeMaxDepth = 3

    BTree: Composition of the btree, types and number of nodes.

    This secondary index is correct. (the lnCount is the same as the primary index)


    Verification of data for persist#gdlogs#test.TableWhProductStorageCard#ProductId

    Tree check for persist#gdlogs#test.TableWhProductStorageCard#ProductId

    BTree: Composition of the btree, types and number of nodes.

    binCount = 168

    binEntriesHistogram = [40-49%: 16; 50-59%: 47; 60-69%: 39; 70-79%: 26; 80-89%: 26; 90-99%: 14]

    binsByLevel = [level 1: count = 168]

    deletedLNCount = 0

    inCount = 3

    insByLevel = [level 2: count = 2; level 3: count = 1]

    lnCount = 14,731

    mainTreeMaxDepth = 3

    BTree: Composition of the btree, types and number of nodes.

    This index is not complete (its lnCount is less than the primary index's).  When this index is used to iterate through the records, only the first 14731 records are returned.


    Apparently, somehow your secondary index DB became out of sync with your primary.  Normally this is caused by not using a transactional store (EntityStore.setTransactional).  But whatever the cause, I will describe how to correct the situation by rebuilding the index.

    (1) take your application offline so that no other operations occur.

    (2) make a backup in case a problem occurs during this procedure.

    (3) do not open the EntityStore yet.

    (4) delete the index database that is out of sync (persist#gdlogs#test.TableWhProductStorageCard#ProductId) by calling Environment.removeDatabase with this name (see the sketch after these steps).

    (5) rebuild the index database simply by opening the EntityStore.  It will take longer than usual, since the index will be rebuilt before the EntityStore constructor returns.

    (6) confirm that the index is rebuilt correctly.

    (7) bring your application back online.
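
    A rough sketch of steps 4 and 5, not from the original reply; the environment home, config objects and store name are assumptions that must match your application:

    // Open the environment only (do NOT open the EntityStore yet).
    Environment env = new Environment(envHome, envConfig);

    // Step 4: drop the out-of-sync secondary database by its internal name.
    env.removeDatabase(null, "persist#gdlogs#test.TableWhProductStorageCard#ProductId");

    // Step 5: reopen the EntityStore; the ProductId index is rebuilt during this call.
    EntityStore store = new EntityStore(env, "gdlogs", storeConfig);  // "gdlogs" is the store name implied by the db name above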

    -mark

  • spatial index and normal index

    Hi spatial experts,

    I have a question about spatial index performance and how Oracle handles queries that combine a spatial and a regular condition. We are on an Oracle 10g Enterprise Edition Release 10.2.0.4.0 database.

    Take a query like this one (not a real query but based on the type of query that is generated by ArcGIS):

    select objectid, attribute, geometry
    from table
    where mdsys.sdo_filter(geometry, mdsys.sdo_geometry(:gtype1, :srid1, null, :elem_info1, :ordinates1), 'querytype=window') = 'TRUE'
    and attribute = value

    In our scenario, we have a lot of rows in the table (2 million), but only a few rows (2000) where attribute = value.

    It seems that there is a scenario where the performance is really bad. It takes 5 seconds for the query to return the results, and that is just too slow for our needs. There is a spatial index on the geometry and a normal index on the attribute column. The explain plan shows that both the spatial index and the attribute index are used.

    Of course, we could break the table up into separate tables by object type. That would make the spatial index much more efficient, but at the expense of our current simple, abstract datamodel.

    Do you have suggestions on how we can improve performance without too much impact on our datamodel? Does Oracle, for example, have an option to create a spatial index on a subset of the data in a table?

    Thank you!

    Rob

    Not sure whether or not the following works for your case:

    use the attribute column as the range partition key to create a range-partitioned table, and then create a local partitioned spatial index.

  • PCT_DIRECT_ACCESS of secondary indexes of the IOT

    Facing a performance problem with a DELETE statement because the optimizer ignores a secondary index on an IOT.

    Version: 11.1.0.7.0

    We did the "update block references" for secondary indexes, here's what we tested.
    SQL> select owner,index_name,PCT_DIRECT_ACCESS  from dba_indexes where index_name in ('XYZ1');
    
    OWNER                          INDEX_NAME                     PCT_DIRECT_ACCESS
    ------------------------------ ------------------------------ -----------------
    DLF5                           XYZ1                                          91
    DLF7                           XYZ1                                          87
    DLF4                           XYZ1                                          90
    DLF0                           XYZ1                                          92
    DLF3                           XYZ1                                          85
    DLF1                           XYZ1                                          97
    DLF6                           XYZ1                                          93
    DLF2                           XYZ1                                          91
    
    SQL> delete FROM DLF0.LOCATE D WHERE GUID = 'Iwfegjie2jgigqwwuenbqw' AND C_ID = 30918 AND ((S_ID < 8672) OR (S_ID <= 8672 AND DELETE_FLAG = 'Y'));
    
    0 rows deleted.
    
    SQL> select PLAN_TABLE_OUTPUT  from table(dbms_xplan.display_cursor(null,null,'RUNSTATS_LAST'));
    
    PLAN_TABLE_OUTPUT
    ------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID  7qxyur0npcpvh, child number 0
    -------------------------------------
    delete FROM DLF0.LOCATE D WHERE GUID = :"SYS_B_0" AND C_ID
    = :"SYS_B_1" AND ((S_ID < :"SYS_B_2") OR (S_ID <= :"SYS_B_3"
    AND DELETE_FLAG = :"SYS_B_4"))
    
    Plan hash value: 310264634
    
    -------------------------------------------------------------------------------------------
    | Id  | Operation          | Name       | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    -------------------------------------------------------------------------------------------
    |   0 | DELETE STATEMENT   |            |      1 |        |      0 |00:00:00.01 |    1260 |
    |   1 |  DELETE            | LOCATE     |      1 |        |      0 |00:00:00.01 |    1260 |
    |   2 |   CONCATENATION    |            |      1 |        |      0 |00:00:00.01 |    1260 |
    |*  3 |    INDEX RANGE SCAN| DLF0PK     |      1 |      1 |      0 |00:00:00.01 |     630 |
    |*  4 |    INDEX RANGE SCAN| DLF0PK     |      1 |      1 |      0 |00:00:00.01 |     630 |
    -------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       3 - access("C_ID"=:SYS_B_1 AND "GUID"=:SYS_B_0 AND
                  "DELETE_FLAG"=:SYS_B_4 AND "S_ID"<=:SYS_B_3)
           filter(("GUID"=:SYS_B_0 AND "DELETE_FLAG"=:SYS_B_4))
       4 - access("C_ID"=:SYS_B_1 AND "GUID"=:SYS_B_0 AND "S_ID"<:SYS_B_2)
           filter(("GUID"=:SYS_B_0 AND (LNNVL("S_ID"<=:SYS_B_3) OR
                  LNNVL("DELETE_FLAG"=:SYS_B_4))))
    
    
    28 rows selected.
    
    SQL> rollback;
    
    Rollback complete.
    At this point, we have updated the block reference and gathered stats.
    SQL> select owner,index_name,PCT_DIRECT_ACCESS  from dba_indexes where index_name in ('XYZ1');
    
    OWNER                          INDEX_NAME                     PCT_DIRECT_ACCESS
    ------------------------------ ------------------------------ -----------------
    DLF0                           XYZ1                                         100
    DLF1                           XYZ1                                         100
    DLF2                           XYZ1                                         100
    DLF3                           XYZ1                                         100
    DLF4                           XYZ1                                         100
    DLF5                           XYZ1                                         100
    DLF6                           XYZ1                                         100
    DLF7                           XYZ1                                         100
    
    8 rows selected.
    
    SQL> delete  FROM DLF0.LOCATE D WHERE GUID = 'Iwfegjie2jgigqwwuenbqw' AND C_ID = 30918 AND ((S_ID < 8672) OR (S_ID <= 8672 AND DELETE_FLAG = 'Y'));
    
    0 rows deleted.
    
    Elapsed: 00:00:00.03
    SQL> select PLAN_TABLE_OUTPUT  from table(dbms_xplan.display_cursor(null,null,'RUNSTATS_LAST'));
    
    PLAN_TABLE_OUTPUT
    ------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID  fmrp501t70s66, child number 0
    -------------------------------------
    delete  FROM DLF0.LOCATE D WHERE GUID = :"SYS_B_0" AND
    C_ID = :"SYS_B_1" AND ((S_ID < :"SYS_B_2") OR (S_ID <=
    :"SYS_B_3" AND DELETE_FLAG = :"SYS_B_4"))
    
    Plan hash value: 310264634
    
    -------------------------------------------------------------------------------------------
    | Id  | Operation          | Name       | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    -------------------------------------------------------------------------------------------
    |   0 | DELETE STATEMENT   |            |      1 |        |      0 |00:00:00.02 |    1260 |
    |   1 |  DELETE            | LOCATE     |      1 |        |      0 |00:00:00.02 |    1260 |
    |   2 |   CONCATENATION    |            |      1 |        |      0 |00:00:00.02 |    1260 |
    |*  3 |    INDEX RANGE SCAN| DLF0PK     |      1 |      1 |      0 |00:00:00.01 |     630 |
    |*  4 |    INDEX RANGE SCAN| DLF0PK     |      1 |      1 |      0 |00:00:00.02 |     630 |
    -------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       3 - access("C_ID"=:SYS_B_1 AND "GUID"=:SYS_B_0 AND
                  "DELETE_FLAG"=:SYS_B_4 AND "S_ID"<=:SYS_B_3)
           filter(("GUID"=:SYS_B_0 AND "DELETE_FLAG"=:SYS_B_4))
       4 - access("C_ID"=:SYS_B_1 AND "GUID"=:SYS_B_0 AND "S_ID"<:SYS_B_2)
           filter(("GUID"=:SYS_B_0 AND (LNNVL("S_ID"<=:SYS_B_3) OR
                  LNNVL("DELETE_FLAG"=:SYS_B_4))))
    
    
    28 rows selected.
    
    Elapsed: 00:00:00.01
    SQL> rollback;
    
    Rollback complete.
    
    Elapsed: 00:00:00.00
    So it made no difference.
    With the help of a hint, it is much better.
    SQL> delete /*+ index(D XYZ1) */  FROM DLF0.LOCATE D WHERE GUID = 'Iwfegjie2jgigqwwuenbqw' AND C_ID = 30918 AND ((S_ID < 8672) OR (S_ID <= 8672 AND DELETE_FLAG = 'Y'));
    
    0 rows deleted.
    
    Elapsed: 00:00:00.00
    SQL> select PLAN_TABLE_OUTPUT  from table(dbms_xplan.display_cursor(null,null,'RUNSTATS_LAST'));
    
    PLAN_TABLE_OUTPUT
    ------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID  0cf13mwxuksht, child number 0
    -------------------------------------
    delete /*+ index(D XYZ1) */  FROM DLF0.LOCATE D WHERE GUID =
    :"SYS_B_0" AND C_ID = :"SYS_B_1" AND ((S_ID < :"SYS_B_2")
    OR (S_ID <= :"SYS_B_3" AND DELETE_FLAG = :"SYS_B_4"))
    
    Plan hash value: 2359760181
    
    ------------------------------------------------------------------------------------------
    | Id  | Operation         | Name       | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    ------------------------------------------------------------------------------------------
    |   0 | DELETE STATEMENT  |            |      1 |        |      0 |00:00:00.01 |       3 |
    |   1 |  DELETE           | LOCATE     |      1 |        |      0 |00:00:00.01 |       3 |
    |*  2 |   INDEX RANGE SCAN| XYZ1       |      1 |      1 |      0 |00:00:00.01 |       3 |
    ------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       2 - access("GUID"=:SYS_B_0)
           filter(("C_ID"=:SYS_B_1 AND ("S_ID"<:SYS_B_2 OR
                  ("S_ID"<=:SYS_B_3 AND "DELETE_FLAG"=:SYS_B_4))))
    
    
    23 rows selected.
    
    Elapsed: 00:00:00.01
    
    SQL> rollback;
    
    Rollback complete.
    Could someone please help us find the cause of the optimizer ignoring the secondary index, even when the secondary index has 100% PCT_DIRECT_ACCESS?

    It seems to be a problem with processing the OR predicate... but I could not pin it down, as I don't know much about IOTs.

    -Yasser

    Published by: Yasu on 23 April 2012 16:39

    It's no different from any other optimizer choice: why did it go one way rather than the other? Because it thought that way was less expensive.

    If you really want to prove it, I suggest a 10053 trace with and without the hint; read both and compare the costing. The wonderful Mr. Lewis has a note on exactly this issue of secondary indexes being ignored on IOTs, so it seems the work is already done for you:

    http://jonathanlewis.WordPress.com/2008/02/25/iots-and-10053/

  • Detailed questions about indexes

    We are trying to reduce the memory consumption of Coherence indexes in our application using a partitioned cache (we use a near cache, but it is the partitioned back layer that holds the indexes), and therefore want to understand some details better (we are currently on version 3.6 but will eventually move to 3.7, so if the answers differ between the two versions please indicate so):

    1. Are the indexes (or parts of them) maintained per partition (I would think so, since it is possible to apply a PartitionFilter efficiently), or are they maintained per node?
    2. Are parts of the index kept in binary form, or are the keys and values kept as Java objects?

    Best regards
    Magnus

    Hello.

    1. Are the indexes (or parts of them) maintained per partition (I would think so, since it is possible to apply a PartitionFilter efficiently), or are they maintained per node?

    Secondary indexes are one per node, but Coherence internally maintains a 'key index' which is mostly used for partition-based filtering, AFAIK. (I did a presentation at one of the Coherence SIGs; here is the slide deck: http://blog.ragozin.info/2011/11/coherence-sig-advanced-usage-of-indexes.html).

    2. Are parts of the index kept in binary form, or are the keys and values kept as Java objects?

    Indexed attributes are stored as Java objects on the heap, but the keys are binary and shared with the backing map (i.e. the index does not create an additional copy of the key).

    A few years ago I did an analysis of index heap footprint; you can find it here http://blog.griddynamics.com/2009/10/coherence-memory-usage-indexes.html but it is relevant to 3.5. As far as I know, there are some improvements in 3.6. Index overhead in general may also depend on selectivity.

    Kind regards
    Alexey

  • Questions about the size and speed of a stored Collection

    Hello everyone,

    While developing a text search engine, I was looking for a fast, low-level storage tool. I found Berkeley DB and it seemed to suit my needs. I was already using Java maps in my process, so it seemed logical to use the Collections API.

    Database (or map) structure:

    DOCUMENT MAP
    - docKey: a simple integer ID generated in order
    - docEntry: a small number of properties


    TERMMAP
    - termKey: a unique string value
    - termEntry: a small number of properties

    INDEXMAP
    - indexKey: two foreign key values: docKey, termKey
    - indexEntry: the number of occurrences of this term in the doc
    There is a secondary index on INDEXMAP for querying by termKey.

    I use tuples for the keys and serializable entities for the entries, and I removed the redundant key values from the serializable entities using the transient modifier.

    Here's the algorithm:
    The program retrieves the terms of a document and adds the doc to the DOCUMENT map.
    For each term, if it is not in the TERMMAP, it is inserted.
    If the (doc, term) pair is not in the INDEXMAP, it is inserted with occurrence count = 1; otherwise the occurrence count is incremented.
    The DOCUMENT map is quite small, while TERMMAP and especially INDEXMAP have a huge number of entries.

    When testing it, it worked very well and gave quick results. However, I noticed a few problems that could be critical:

    - The size of the log is very high.
    1.5 GB of total log files for indexing 100 MB of text, whereas the same index (without the secondary index) takes 100 MB as ASCII text.
    I tried to reduce the total log size by increasing CLEANER_MIN_UTILIZATION, but it did not reduce it sufficiently (see the sketch below).
    I thought that maybe I was not storing the right things, but when displaying the contents of the maps everything is fine, no redundancy.
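
    For reference (an illustrative sketch added here, not part of the original post), the cleaner utilization parameter can be raised like this; the value 75 and the environment path are only examples:

    import java.io.File;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    EnvironmentConfig envConfig = new EnvironmentConfig();
    envConfig.setAllowCreate(true);
    // je.cleaner.minUtilization: log utilization the cleaner tries to maintain (the default is typically 50).
    envConfig.setConfigParam(EnvironmentConfig.CLEANER_MIN_UTILIZATION, "75");
    Environment env = new Environment(new File("/path/to/envHome"), envConfig);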

    - The storage is too slow.
    With simple file storage it is very fast, so the bottleneck comes from the db storage.
    I noticed that it is faster to store temporary data in Java maps (TreeMap, to have the entries sorted) and then add them to the stored maps than to use the stored map directly.

    To conclude

    My final goal is to process a huge amount of data (collections of gigabytes or terabytes) and I will probably have to use parallel processing. However, if a simple test already gives results like these, I fear that I will have to find another way to store my data.

    This is the first time I have used Berkeley DB, so maybe I did something wrong. To avoid asking for help for nothing, I tried a lot of configuration changes: sizing, cleaner utilization, cache size, turning off transactional features; nothing gave significant results. I followed the Java Collections tutorial and read the Javadoc. Finally, I used the JConsole plugin to take a look at the stats. This is my last chance...

    If more details are needed, just ask,
    Thank you in advance,
    Nicolas.
  • putNoOverwrite taints a secondary index

    Context: BDB 4.7.25, DPL via the Java API, replication subsystem enabled (so with transactions, logging, etc.).

    I encounter strange behavior, using putNoOverwrite on an existing entity with secondary keys.
    One of the two secondary indexes declared in the entity gets broken.

    Let me explain with an example case.

    The entity:
    @Entity(version = 0)
    public class SomeEntity
    {
        @PrimaryKey
        private int pk;

        @SecondaryKey(relate = Relationship.MANY_TO_ONE, relatedEntity = AnotherEntity.class, onRelatedEntityDelete = DeleteAction.ABORT)
        private int fk;

        @SecondaryKey(relate = Relationship.MANY_TO_ONE)
        private String status = "UNKNOWN";
    }

    The first put or putNoOverwrite works perfectly:
    putNoOverwrite(pk = 1, fk = 10, status = "OK")

    My entity is indeed in the DB, and I can retrieve it through the secondary index 'status' with the value "OK" (method SecondaryIndex.subIndex).

    Then the defective putNoOverwrite:
    putNoOverwrite(pk = 1, fk = 10, status = "UNKNOWN")

    This call should have no effect. Indeed, my entity is still present in the DB, and when I retrieve it via its PK, I get it back intact.
    But when I retrieve it via the secondary index 'status' with the value "OK" (method SecondaryIndex.subIndex), there is no match. The only secondary key present in the secondary index is "UNKNOWN".
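
    For reference (an editorial sketch following the example entity above, not from the original post; an open EntityStore named store is assumed), the retrieval by status looks roughly like this:

    import com.sleepycat.persist.EntityCursor;
    import com.sleepycat.persist.PrimaryIndex;
    import com.sleepycat.persist.SecondaryIndex;

    PrimaryIndex<Integer, SomeEntity> byPk =
        store.getPrimaryIndex(Integer.class, SomeEntity.class);
    SecondaryIndex<String, Integer, SomeEntity> byStatus =
        store.getSecondaryIndex(byPk, String.class, "status");

    // After the second putNoOverwrite, this cursor is unexpectedly empty,
    // even though the entity with status "OK" is still reachable by primary key.
    EntityCursor<SomeEntity> okEntities = byStatus.subIndex("OK").entities();
    try {
        for (SomeEntity e : okEntities) {
            System.out.println(e);
        }
    } finally {
        okEntities.close();
    }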

    I encounter this problem repeatedly.
    Thanks for your help

    Hello. This bug has been fixed in the latest release of BDB (4.8.24). The changelog entry is #1 under the general access method changes:
    http://www.Oracle.com/technology/documentation/Berkeley-DB/DB/programmer_reference/changelog_4_8.html

    You will need to upgrade the BDB to solve this problem, because the fix has not been backported to version 4.7.

    Ben Schmeckpeper

  • Hi, I'm Mamadou Moustapha. I bought an Apple iPhone 6s a few days ago thinking it was the best phone in the world, but since the day I bought it I have had problems with the screen, and when I went to the service center they said they were going to update it; they updated…

    Hi. I bought an Apple iPhone 6s a few days back, believing I had the best phone in the world. Since the day I bought it I have had problems with the screen, and when I went to the service centre they said they were going to update it. They updated it, and after that I still have the same problem; now they say they are going to repair it. It is a very terrible situation for Apple that this new product has a manufacturing defect which they will only repair. I asked to exchange the device but they refuse, because they want to repair it. I alone have faced this problem since the first day; customers are powerless and they just drive us crazy.

    That's what warranties are for. If you bought the phone from Apple or an Apple authorized reseller, you have 14 days to return the iPhone for a refund. After that, they should replace the phone under warranty. If you did not buy it from Apple or an authorized shop, you must respect that store's terms. However, Apple will always replace the phone if you contact them directly.

  • About a month ago I posted a question about iMovie and not being able to "share". I solved the problem thanks, so no more emails!

    About a month ago I posted a question about iMovie and not being able to "share". I solved the problem thanks, so no more emails!

    Hi Michael,

    If you want to stop receiving email notifications for the thread that you created, then I suggest that you follow the steps below:

    Once signed in to Apple Support Communities, visit your mini profile and select Manage Subscriptions.

    Content

    To manage the content you are currently subscribed to and change your preferences, select Content.

    Select it to see what content you are currently following.  Note that any thread you reply to automatically subscribes you to that thread.

    You can select a thread to end your subscription to it.

    Learn how to manage your subscriptions

    Take care.

  • Question about Safari and Chrome. A message on the browsing screen says: "A Family Protection filter component is not working as expected. Restart your computer. If the problem persists, contact support.  Error: IPC pipe failure."

    Question about Safari and Chrome. A message on the browsing screen says: "A Family Protection filter component is not working as expected. Restart your computer. If the problem persists, contact support.  Error: IPC pipe failure."

    Quit Safari and quit Chrome.  Force-quit if necessary.

    Start Safari while holding the Shift key, select Safari menu > Clear History, and after that check that the homepage is the one you want.

    Do the same for Chrome.

    Close all browsers, restart the mac.
