Insert with a double constraint

Hello,

Version 5.0.97

Could you please clarify whether it is possible to distinguish an insert from an update in Berkeley DB JE (Collections API)?

I can provide code if you need it, but since I suspected I might be doing something silly, I wanted to ask the question first.

In the Berkeley DB JE Getting Started Guide (page 74), there is this statement:

  • DatabaseConfig.setSortedDuplicates()

    If true, duplicates are allowed in the database. If this value is false, then putting a duplicate record into the database results in an error return from the put call. Note that this property can only be set at database creation time. The default value is false.

I set this value in my initialization method:

private void initDBConfig(boolean readOnly) {

    dbCfg = new DatabaseConfig();

    dbCfg.setReadOnly(readOnly);       // we want to read/write

    dbCfg.setAllowCreate(true);        // create if it does not exist

    dbCfg.setSortedDuplicates(false);  // no duplicates in primaryDB

    dbCfg.setTemporary(false);         // must be a persistent database, not in-memory

    dbCfg.setDeferredWrite(false);     // no deferred writes; it should be transactional

    dbCfg.setTransactional(true);      // explicitly transactional DB  TODO: what about readOnly?
}

I can insert an item into the primaryDB, but if I try to insert a second record using the same key, the new record replaces the old one and no error is raised.

Have I misinterpreted the documentation, or am I missing a configuration setting somewhere?

I'm happy to provide an example if that would help, though I suspect I'm simply not setting things up correctly.

Thanks for any help.

Clive

Hi Clive,

Well, the BDB JE GSG documentation is a little misleading in this particular section, because it does not detail all the possible Database.put*() calls.

DatabaseConfig.setSortedDuplicates() configures whether or not the database accepts duplicates (records with the same key). Note that this database property is persistent and cannot be changed once set; the default value is false, in other words duplicates are not allowed.

If the database is configured to not support duplicates - setSortedDuplicates(false) - as in your case, then a call to Database.put() to insert a record whose key already exists in the database will overwrite the record with that key, replacing the data associated with the key (an update); it will not cause an error.  A call to Database.putNoOverwrite() to insert a record whose key already exists in the database will return an OperationStatus.KEYEXIST error, regardless of whether the database is configured to support duplicates (this is what you want when attempting a true insert).

There is also Database.putNoDupData(), which stores the key/data pair in the database only if that exact pair is not already present, but this method can be called only if the database supports sorted duplicates.
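The insert-versus-update distinction above maps cleanly onto java.util.Map semantics. As a minimal, hedged sketch (a plain HashMap stands in for the database here; the put()/putIfAbsent() return values mirror the Database.put()/putNoOverwrite() behaviour just described, but this is only an analogy, not the JE API):

```java
import java.util.HashMap;
import java.util.Map;

public class PutSemantics {
    public static void main(String[] args) {
        Map<String, String> db = new HashMap<>();

        // Database.put() analogue: always succeeds, overwriting on key collision (an update).
        db.put("k1", "v1");
        String previous = db.put("k1", "v2");        // returns "v1": key existed, record overwritten

        // Database.putNoOverwrite() analogue: refuses on key collision (like OperationStatus.KEYEXIST).
        String first  = db.putIfAbsent("k2", "v1");  // null: a true insert
        String second = db.putIfAbsent("k2", "v9");  // "v1": insert refused, stored value unchanged

        System.out.println(previous + " " + first + " " + second + " " + db.get("k2"));
        // prints "v1 null v1 v1"
    }
}
```

The real JE calls additionally take a transaction handle and DatabaseEntry objects; only the return-value contract is illustrated here.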

However, you mentioned that you use the Collections API.  Also, since this concerns your primary database, you have configured it correctly so that it does not allow duplicates.

In the JE Collections API, if the database is configured to not allow duplicates, then a StoredMap.put() call will insert a new record if the key does not already exist in the database, or update the data if the key already exists.  Note that the return value will be null if the key was not present, or the previous value associated with the key if it was.

So, if you have set up the database to not allow duplicates, and you want to prevent a StoredMap.put() call from replacing/overwriting existing data when the key is already present, you should first check whether the key is present using Map.containsKey(). See the "Adding Database Items" section in the Java Collections Tutorial documentation.
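Because StoredMap implements java.util.Map, the check-before-put guard suggested above can be sketched against the Map interface alone (a HashMap stands in for the StoredMap here; the names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class GuardedPut {
    // Insert only if the key is absent; report whether a true insert happened.
    // The same pattern works against a StoredMap, which implements java.util.Map.
    static boolean insertOnly(Map<String, String> map, String key, String value) {
        if (map.containsKey(key)) {
            return false;          // key present: refuse to overwrite (no update)
        }
        map.put(key, value);       // key absent: this is a true insert
        return true;
    }

    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        System.out.println(insertOnly(map, "clive", "rec1")); // true
        System.out.println(insertOnly(map, "clive", "rec2")); // false
        System.out.println(map.get("clive"));                 // rec1
    }
}
```

Note that with a transactional StoredMap, the containsKey()/put() pair should run inside a single transaction, or another writer could slip in between the two calls.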

Kind regards

Andrei

Tags: Database

Similar Questions

  • Prevent the Insert with Error Message

    Hello

I have a requirement that a user cannot insert two rows of the same kind.

To achieve this I wrote a process to compare the row being inserted with the existing rows.

I don't know where, or what, to include in the code so that once the Create button is pressed, an error message pops up and the row is not inserted.

How do I display an error message?
    How do I prevent this row from being inserted?

    Please suggest.
    Thank you.

If I add a row to a table using duplicate values in ALL columns except the id column (which should be unique [system-generated]), I would still have duplicate data in all the other columns, correct?

If the OP wants to ensure no duplicate data is entered, he would do better to compute a checksum over all the data entered in the row and determine whether any existing row produces a matching checksum; if so, an error message should/could be displayed indicating a possible duplicate row of data...

    Thank you

    Tony Miller
    Webster, TX
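The row-checksum idea above can be sketched in a few lines. This is an illustrative stand-alone sketch, not APEX code; the column values and the choice of SHA-256 are assumptions:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashSet;
import java.util.Set;

public class RowChecksum {
    // Join the non-id column values with a separator that cannot occur in the data,
    // then hash; two rows with equal checksums are probable duplicates.
    static String checksum(String... columns) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(
                String.join("\u0000", columns).getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);  // SHA-256 is available on every JVM
        }
    }

    public static void main(String[] args) {
        Set<String> seen = new HashSet<>();
        System.out.println(seen.add(checksum("Tony", "Webster", "TX")));  // true: new row
        System.out.println(seen.add(checksum("Tony", "Webster", "TX")));  // false: duplicate row
    }
}
```

In practice a unique constraint on the checksum column (or on the data columns themselves) would let the database enforce this instead of application code.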

  • My computer has keywords with a double line under them, how can I get rid of them

    my computer has keywords with a double line under them, how can I get rid of them

Hi Pete McDowell,

Glad to know that the problem is solved. Let us know if you face any problems with Windows in the future.

  • Parallel insert with noappend hint

    Hello

I have a question about parallel insert with the NOAPPEND hint. The Oracle documentation says:

Append mode is the default for a parallel insert operation: data is always inserted into new blocks allocated to the table. Therefore, the APPEND hint is optional. You should use append mode to increase the speed of INSERT operations, but not when space usage needs to be optimized. You can use NOAPPEND to override append mode.


When I delete (all rows) and insert (in parallel, using NOAPPEND), I see that the number of blocks used by my table always increases. Shouldn't the blocks be reused, given the delete before the insert?


Version: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0


    Here is the script I used:


    drop table all_objs;

    create table all_objs as select * from all_objects where rownum < 10000;

    begin
        dbms_stats.gather_table_stats(user, 'ALL_OBJS');
    end;
    /

    select blocks from all_tables where table_name = 'ALL_OBJS';

    alter session enable parallel dml;

    begin
        for i in 1 .. 5 loop
            delete from all_objs;
            insert /*+ NOAPPEND PARALLEL(o, 8) */
            into all_objs o
            select * from all_objects where rownum < 10000;
        end loop;
    end;
    /

    commit;

    begin
        dbms_stats.gather_table_stats(user, 'ALL_OBJS');
    end;
    /

    select blocks from all_tables
    where table_name = 'ALL_OBJS';

    Output:

Table dropped.

    Table created.

    PL/SQL procedure successfully completed.

    BLOCKS

    ----------

    142

1 row selected.

Session altered.

    PL/SQL procedure successfully completed.

Commit complete.

    PL/SQL procedure successfully completed.

    BLOCKS

    ----------

    634

1 row selected.



Why did the blocks increase from 142 to 634 even with the NOAPPEND hint?


    Thank you.

Well, this looks like the expected result:

Repeated PARALLEL (append) inserts - lots of direct path writes; the table size increases as rows are added above the HWM; final size after ten iterations = 10 x the original size.

Repeated NOAPPEND PARALLEL inserts - no direct path writes; the table size does not increase significantly; rows are inserted below the HWM.

The NOAPPEND PARALLEL insert is clearly re-using the space released by the delete. However, I wonder why there is an overhead of 3,000 blocks? It would be interesting to see how the overhead varies with the degree of parallelism.

  • BAD RESULTS WITH OUTER JOINS AND TABLES WITH A CHECK CONSTRAINT

Hi All,
    Could anyone tell me when we would encounter this bug? Please help me with a simple example so that I can search for it in my DB.


    Bug:-8447623

Bug Desc: BAD RESULTS WITH OUTER JOINS AND TABLES WITH A CHECK CONSTRAINT


I ran the outer join queries with check constraints on 11g 11.1.0.7.0 and 10gR2, but the result is the same. I need to learn, from the experts and from people who have already hit this bug, the scenario in which I would face it.


Version:
    SQL> select * from v$version;
    
    BANNER
    --------------------------------------------------------------------------------
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE    11.1.0.7.0      Production
    TNS for Solaris: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production

Why not use the test case from the bug description in Metalink (we obviously can't post it here because that would violate Metalink's copyright)? Your test case is not a candidate for join elimination, so it does not hit the bug.

Have you actually read the bug description in Metalink rather than just looking at the title? The bug description is quite clear that a query plan involving join elimination is a necessary condition. A bug title never tells the whole story.

Are you really going to work through the few tens of thousands of bugs in 11.1.0.7, many of them unpublished, trying to determine whether your application would be affected? Wouldn't it be an order of magnitude easier to upgrade the application to 11.1.0.7 in a test environment and test the application to see what, if anything, breaks? Understand that the vast majority of problems people experience during an upgrade are not the result of bugs; they are the result of documented behaviour changes, such as changes in query plans. And among those who do encounter bugs, a relatively large fraction hit brand new ones. Even if you completed the Herculean task of verifying every bug against your code base, it would not make the upgrade significantly easier. In addition, by the time you actually finished that analysis, Oracle would have released 3 or 4 new versions.

And at this stage, wouldn't it be worth considering an upgrade to 11.2 instead?

    Justin

  • Need help writing an update / insert with linked tables

I am new to ColdFusion. I am learning to write queries and am creating a small application to collect information about visitors to my web site. (It's also a good way for me to learn the language.) The problem I'm having is how to use an update/insert with related tables. I don't know if I'm even gathering the appropriate variables to compare against existing DB records before deciding whether to run the update or the insert query. Can someone show me how to update/insert related tables, and maybe tell me whether I'm creating the right variables for the comparison? Here is my code, with comments.

    <!--- create a variable to compare with the db table --->
    <cfset userIP = '#CGI.REMOTE_ADDR#'>

    <!--- run the query and compare remote_addr to the cfset variable --->
    <cfquery name="userTracking" datasource="#APPLICATION.dataSource#" dbtype="ODBC">
    SELECT REMOTE_ADDR
    FROM user_track
    WHERE REMOTE_ADDR = '#userIP#'
    </cfquery>

    <!--- if the record exists, then run this update --->
    <cfif userTracking EQ userIP>
    <cfquery datasource="#APPLICATION.dataSource#">
    UPDATE user_track, trackDetail
    SET user_track.REMOTE_ADDR = <cfqueryparam value="#Trim(CGI.REMOTE_ADDR)#" cfsqltype="CF_SQL_VARCHAR">,
    user_track.browser = <cfqueryparam value="#Trim(CGI.HTTP_USER_AGENT)#" cfsqltype="CF_SQL_VARCHAR">,
    user_track.visits = visits + 1,
    trackDetail.date = <cfqueryparam value="#Now()#" cfsqltype="CF_SQL_TIMESTAMP">,
    trackDetail.path = <cfqueryparam value="#Trim(PATH_INFO)#" cfsqltype="CF_SQL_LONGVARCHAR">
    WHERE REMOTE_ADDR = <cfqueryparam value="#Trim(CGI.REMOTE_ADDR)#" cfsqltype="CF_SQL_VARCHAR">
    </cfquery>
    <cfelse>

    <!--- if it doesn't, then insert a new record --->
    <cfquery datasource="#APPLICATION.dataSource#" dbtype="ODBC">
    INSERT INTO user_track, trackDetail
    (user_track.REMOTE_ADDR, user_track.browser, user_track.visits, trackDetail.userID, trackDetail.date, trackDetail.path)
    VALUES (
    <cfqueryparam value="#Trim(CGI.REMOTE_ADDR)#" cfsqltype="CF_SQL_VARCHAR">,
    <cfif Len(Trim(HTTP_USER_AGENT)) GT 1>
    <cfqueryparam value="#Trim(CGI.HTTP_USER_AGENT)#" cfsqltype="CF_SQL_VARCHAR">,
    </cfif>
    visits + 1,
    <cfqueryparam value="#Trim(CGI.HTTP_USER_AGENT)#" cfsqltype="CF_SQL_VARCHAR">,
    <cfqueryparam value="#user_track.userID#" cfsqltype="CF_SQL_VARCHAR">,
    <cfqueryparam value="#Now()#" cfsqltype="CF_SQL_TIMESTAMP">,
    <cfqueryparam value="#Trim(PATH_INFO)#" cfsqltype="CF_SQL_LONGVARCHAR">
    )
    </cfquery>
    </cfif>


Am I close on this? It doesn't throw any errors, but it doesn't work either, so it is obviously wrong somewhere. I get a cfdump at the end of my comparison query, but once it hits the cfif, I'm lost.

Thanks for your time, whoever answers.

    Newbie

You must define the variable before you can use it.  You are trying to use it on line 1 of your template.

  • LOBs and how to retrieve the record inserted with EMPTY_BLOb()

    Hi guys


How do I retrieve records inserted with EMPTY_BLOB() instead of a NULL value?


Best regards
    A.G.

    There must be a better way, but these work:

    SELECT... from tableX where length (lob_column) = 0;
    SELECT... from tableX where dbms_lob.getlength (lob_column) = 0;

Regards
    Jonathan Lewis
    http://jonathanlewis.WordPress.com
    http://www.jlcomp.demon.co.UK

"For every expert there is an equal and opposite expert."
    Clarke

  • Insert duplicate rows with different languages

    Hello everyone,
I need your help to solve this query in the most efficient way.
    I have a table:
    Create Table "POSITION" (
    id  number,
    TEXT varchar2(50),
    Language Varchar2(1)
    );
INSERT statements:
    INSERT INTO POSITION 
    VALUES (1, 'Secretary', 'E');
    INSERT INTO POSITION 
    VALUES (1, 'Secrétaire', 'F');
    INSERT INTO POSITION 
    VALUES (1, 'Segretario', 'I');
    INSERT INTO POSITION 
    VALUES (1, 'Secretario', 'S');
    INSERT INTO POSITION 
    VALUES (2, 'Assistance', 'E');
    INSERT INTO POSITION 
    VALUES (2, 'Ayuda', 'S');
As you can see, the text changes with the language for the same identifier. For some reason the data is not complete in all languages (some IDs are missing some of the French and a little of the Italian). I need a query to insert the missing languages. In my example above, ID number 2 is missing French and Italian, so the query must copy the English record but with language = 'F', and once more with language = 'I', and so on.
In addition, when more than one language is missing, the query should copy from the English if it exists, then the French, then the Italian.

I hope I have made myself clear. As I said, this query needs to perform well, as the data size is big.



    Thanks for your help

    Hello

Thanks for posting the CREATE TABLE and INSERT statements; that is very helpful.

    Here's a way to do what you want:

    MERGE INTO     position     dst
    USING     (
              WITH       all_language     AS
              (
                   SELECT  'E' AS language, 1 AS ord_num     FROM dual     UNION ALL
                   SELECT     'F',           2          FROM dual     UNION ALL
                   SELECT     'I',           3          FROM dual     UNION ALL
                   SELECT     'S',           4          FROM dual
              )
              SELECT     p.id
              ,     FIRST_VALUE (p.text IGNORE NULLS)
                        OVER ( PARTITION BY  p.id
                               ORDER BY      l.ord_num
                             )     AS top_text
              ,     l.language
              FROM           all_language     l
              LEFT OUTER JOIN  position     p     PARTITION BY (p.id)
                                       ON  l.language     = p.language
         )               src
    ON     (     src.id          = dst.id
         AND     src.language     = dst.language
         )
    WHEN NOT MATCHED THEN
    INSERT     (dst.id, dst.text,     dst.language)
    VALUES     (src.id, src.top_text, src.language)
    ;
    

It might be just as efficient to do an INSERT rather than a MERGE.

You probably already have a table like all_language that lists the valid languages, and the order in which they should be used to fill missing values. If so, use that table where I used the all_language subquery above. If you do not have such a table, consider creating one; it could be useful for a number of reasons, including validating the language when you INSERT or UPDATE position.
In this case, the alphabetical order of the codes 'E', 'F', 'I' and 'S' happens to be the order you want, but it is bad programming practice to rely on that. If the preferred order changes, or another language is added, the code would need much larger changes.

You might also consider adding a new column that indicates whether the word in TEXT really is a word in the given language, or has been copied from another language just to fill a gap. I suggest another VARCHAR2(1) column that is equal to LANGUAGE when the text is actually a word in that language.

  • Insert, based on condition (unique key constraint)

    Hello
I need help with checking the uniqueness of a value before inserting.

    for example
I have two tables,

Book and book_owner, and I need to insert 100 rows into the tables. But there are a few unique constraints on the tables, so I need to check whether the value exists or not: if the value exists, skip the insert; otherwise insert the value.

    Here is an example

    Insert into BOOK (BOOK_ID, CNT, ALT_CNT, ROW_INSERT_TMSTMP, ROW_LAST_UPDT_TMSTMP, BOOK_ID)
    Values (SEQ_BOOK_ID.nextval, 50, 500, sysdate, sysdate, '123456');
    commit;
    /
    Insert into BOOK_OWNER (BOOK_OWNER_ID, BOOK_ID, USER_ID) Values (SEQ_BOOK_OWNER_ID.nextval, SEQ_BOOK_ID.currval, '456');
    commit;
    /

    Insert into BOOK (BOOK_ID, CNT, ALT_CNT, ROW_INSERT_TMSTMP, ROW_LAST_UPDT_TMSTMP, BOOK_ID)
    Values (SEQ_BOOK_ID.nextval, 50, 500, sysdate, sysdate, '678901');
    commit;
    /
    Insert into BOOK_OWNER (BOOK_OWNER_ID, BOOK_ID, USER_ID) Values (SEQ_BOOK_OWNER_ID.nextval, SEQ_BOOK_ID.currval, '678');
    commit;
    /

    Insert into BOOK (BOOK_ID, CNT, ALT_CNT, ROW_INSERT_TMSTMP, ROW_LAST_UPDT_TMSTMP, BOOK_ID)
    Values (SEQ_BOOK_ID.nextval, 50, 500, sysdate, sysdate, '5123987');
    commit;
    /
    Insert into BOOK_OWNER (BOOK_OWNER_ID, BOOK_ID, USER_ID) Values (SEQ_BOOK_OWNER_ID.nextval, SEQ_BOOK_ID.currval, '896');
    commit;
    /

In the BOOK table, BOOK_ID has a unique constraint and its data type is varchar.
    In the BOOK_OWNER table, USER_ID has a unique constraint and is also a varchar.

    I use oracle 10g

    Double post!

  • Insert with log errors - interesting feature or bug?

So, I ran the statement below:

    insert all
    into e1 (
        employee_id,
        first_name,
        last_name,
        email,
        phone_number,
        hire_date,
        job_id,
        salary,
        commission_pct,
        manager_id,
        department_id
    ) values (
        case
            when mod(rn, 7) = 0 then cast(null as number)
            else employee_id
        end,
        first_name,
        last_name,
        email,
        phone_number,
        hire_date,
        job_id,
        salary,
        commission_pct,
        manager_id,
        department_id
    ) log errors into err$_e ('insert_e1: ' || to_char(sysdate, 'yyyy.mm.dd hh24:mi:ss')) reject limit unlimited
    into e2 (
        employee_id,
        first_name,
        last_name,
        email,
        phone_number,
        hire_date,
        job_id,
        salary,
        commission_pct,
        manager_id,
        department_id
    ) values (
        case
            when mod(rn, 6) = 0 then cast(null as number)
            else employee_id
        end,
        first_name,
        last_name,
        email,
        phone_number,
        hire_date,
        job_id,
        salary,
        commission_pct,
        manager_id,
        department_id
    ) log errors into err$_e ('insert_e2: ' || to_char(sysdate, 'yyyy.mm.dd hh24:mi:ss')) reject limit unlimited
    select
        rownum rn,
        employee_id,
        first_name,
        last_name,
        email,
        phone_number,
        hire_date,
        job_id,
        salary,
        commission_pct,
        manager_id,
        department_id
    from employees e;

    rollback;

The interesting thing is that after the rollback, the err$_e table still contains the rows that generated errors (the NOT NULL constraint violations I produced).

    Is this a feature or a bug?

It appears that the insertion into the error table is an autonomous transaction that gets committed automatically, independent of the transaction containing the insert statement.

Why wouldn't it be so?

The error table could even be a global temporary table with ON COMMIT PRESERVE ROWS. I think this would help, because I wouldn't need to worry about truncating the table before the insert, and there would be no problem if several sessions used the table.

  • Limit the records inserted with MERGE

I created a procedure to update a table with a termination date when the user enters a term date.  The username and ID (PK) come from t_users, and the terminations go in t_terms, with t_terms.user_id = t_users.id as the key.  There are about 2,500 users and currently nothing in the terms table.

This procedure fires as part of a dynamic action when the page is submitted in an APEX application:

    create or replace Procedure "TERM_UPDATE"
    ( p_user_id IN NUMBER,
      p_term_eff_date IN DATE )
    is
    begin
        merge into t_terms t
        using t_users u
        on (t.user_id = u.id)
        when matched then
            update set TERM_EFF_DATE = p_term_eff_date where user_id = p_user_id
        when not matched then
            insert (user_id, term_eff_date) values (p_user_id, p_term_eff_date);
    end;

I tried just the merge statement in SQL Workshop.  The result is an insert of all 2,500 user records into the terms table each time it fires.  If I run it again, I get 5,000 inserts.  If I understand this process, t.user_id = u.id must be returning no matches, triggering the not-matched insert.  But even when there is a match, I still get the insert.  What on earth is happening?

The p_user_id and p_term_eff_date are fed from bind variables on the page.  I set those as bind variables when I tested the query.

    I'm under apex 4.2 on 11g.

    Thank you!

    Hello

Thanks for posting the CREATE TABLE statements.

Don't forget to post the INSERT statements for your sample data, a few calls to the procedure (with specific values for the arguments), and the contents of t_terms after each call.

What is t_users' role in this problem?  You don't get any information from t_users; you already know the user_id by the time you call the procedure.  I guess the only reason you have t_users in this problem is to guard against inserting an invalid user_id into t_terms.  (A foreign key constraint would guard against that, too, but it would raise an error.  You might find it a little more elegant to just merge 0 rows when an invalid user_id is passed to the procedure.)

    Here's a way to do it:

    create or replace procedure "TERM_UPDATE"
    ( p_user_id IN NUMBER,
      p_term_eff_date IN DATE
    )
    is
    begin
        merge into t_terms dst
        using (
                select id as user_id
                from t_users
                where id = p_user_id
              )                src
        on (dst.user_id = src.user_id)
        when matched then
            update set TERM_EFF_DATE = p_term_eff_date
            where DECODE ( TERM_EFF_DATE          -- if wanted
                         , p_term_eff_date, 1
                         , 0
                         ) = 0
        when not matched then
            insert (dst.id, dst.user_id, dst.term_eff_date)
            values (term_id_seq.NEXTVAL, src.user_id, p_term_eff_date);
    end;
    /

    SHOW ERRORS

Because t_terms.id is constrained to be NOT NULL, you must supply a value for id when you insert a new row into t_terms.  In the procedure above, I assume you have a sequence called term_id_seq for that purpose.

  • <cfquery> insert generates a double entry

Does anybody know why an 'insert into' might generate a double entry in a SQL table?

I use it in other routines and programs successfully, but in this specific program it misbehaves, generating two entries with the same content.

I would like to know if anyone has had the same problem, or what I am doing wrong.

    I appreciate all help.

    Dan,

Thanks for your help. I am using something similar as a workaround, but I need to know why this happens. It is true that I'm a novice here, but this is the first time I have faced this kind of problem with an insert. Do you have any idea what causes this behavior?

    Once again thank you,

  • Most of the time Thunderbird opens incoming messages only with a double click, in a new window, which I find infuriating. Is there a workaround or a fix for this?

When Thunderbird starts, it opens incoming mail in the lower panel with one click. But after an hour or two that no longer works, and I have to double-click to open the message, which then opens in a new window.

Looks like a bad add-on to me. Restart with the SHIFT key held down and see if the issue arises again.

  • Every e-mail I receive is delivered with a duplicate. I have looked at the various features of the program, but I can't understand why this is happening. Help!

This has been happening for several years now. After some updates, I started getting duplicates. I lived with the problem for a long time because I could still get the emails, if more abundantly than I wanted. My volume of e-mail has gone up and up, however, and I now get 60 or 70 e-mails a day because of this problem. Can someone tell me how to solve it?

The popstate file is re-created automatically when you restart TB, so if the original had become corrupted, check whether you still receive duplicate messages. To track down the accounts involved, please post your info as described here:

    https://support.Mozilla.org/en-us/KB/ask-TB#w_how-to-ask-your-question

You can omit the printer and font details.

  • How to solve a major conflict between Firefox and the Dual Smart Solution software that controls a dual-screen LG monitor.

    PC: Dell Dimension E510 with MS windows XP Professional, Version 2002, Service Pack 3

         Intel(R), Pentium (R) D CPU 2.8 GHz, 1.0 GB Ram.
    

I have a new 23" LG monitor (23EN43T). LG's on-screen display management software is called Dual Smart Solution, and it works very well except with Firefox! The Firefox window immediately expands to more than 100% of the screen space. I can't even quit Firefox without using Ctrl + Alt + Delete. I have uninstalled and reinstalled Firefox, but that does not solve the problem.

     To get around this problem I have to use Internet Explorer.
    

Firefox may be restoring the last-used window dimensions based on a no-longer-relevant monitor configuration. You can try deleting a settings file to see whether Firefox detects your setup better at the next start.

With Firefox closed, open your current Firefox settings folder (AKA the Firefox profile). Paste this into the search box on the Start menu or into the address bar of Windows Explorer:

    %APPDATA%\Mozilla\Firefox\Profiles
    

If you find more than one folder, open the one that was most recently updated.

Rename localstore.rdf to something like localstore.old (in case you want to restore it later).

Restart Firefox. Is the window positioned more sensibly?

In addition, you should be able to exit Firefox "blind" using these key combinations:

• ALT + F4 (the general "close this window" combination on Windows)
    • ALT + F, then X (hotkey for File > Exit)
