transaction tables consistent reads - undo records applied

Hello

on 11.2.0.4 on Linux

In the documentation, it is said:

The ratio of the following V$SYSSTAT statistics should be close to 1:

ratio = transaction tables consistent reads - undo records applied / transaction tables consistent read rollbacks

And the recommended solution is to use automatic undo management...

We are already using automatic undo management, but the ratio is 38:

SQL> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- --------
UNDO_MANAGEMENT                      string      AUTO

SQL> select name, value from v$sysstat where name in ('transaction tables consistent reads - undo records applied',
'transaction tables consistent read rollbacks');

NAME                                                             VALUE
---------------------------------------------------------------- -----
transaction tables consistent reads - undo records applied          38
transaction tables consistent read rollbacks                         1

2 rows selected.

So, what would be the reasons and the solution?

Thank you.

This is a good demonstration of why it's a bad idea to blindly follow the ratios.

Once, since your instance started, you have had to apply 38 undo records to discover the commit time of a specific transaction. The performance impact of that check is virtually invisible - worst case, you read 38 undo blocks from disk to get the necessary information. All you have is an example where a query ran long enough that a few hundred to a few thousand transactions were in flight around the time it started, and it had to check one that committed at about the same moment - trying to 'solve the problem' is a waste of time.

Regards,

Jonathan Lewis

Tags: Database

Similar Questions

  • Commit SCN in the undo header (transaction table) slot

    Hello
    Whenever a transaction is committed, the commit SCN is saved with the change vector in the redo logs and in the slot reserved for this transaction in the rollback segment header.

    At some point later in time another session can read one of these blocks and discover that the ITL includes a transaction that has committed but not been cleaned out. (It can work this out by cross-checking the ITL entry with the slot for the transaction concerned in the undo segment header block.)


    My question is: when the next session visits this block, what happens if the undo slot has been overwritten (this can happen through lack of undo space and/or because the time since commit has exceeded undo retention)?


    Could you please clarify this for me?

    -Thank you
    Vijay

    If the undo tablespace is full and no undo slots are available (i.e., all are held by active, uncommitted transactions), you get an error. New transactions cannot proceed without undo space.

    Hemant K Chitale

  • transactions table

    In Oracle 11g R2 Concepts it says: "the database uses a transaction table, also called an interested transaction list (ITL), to determine if a transaction was uncommitted when the database began modifying the block. The block header of every segment block contains a transaction table." (http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/consist.htm#i13945) But the link provided for "transaction table" in that sentence points to the glossary box, and it says: "transaction table: the data structure within a rollback segment that holds the transaction identifiers of the transactions using the rollback segment."

    In the same documentation set, another chapter (here http://download.oracle.com/docs/cd/E14072_01/server.112/e10713/transact.htm) says "A transaction ID is not allocated until an undo segment and transaction table slot are allocated, which occurs during the first DML statement."

    So where exactly is the transaction table located within the rollback segment (in the header, I guess)? Does this transaction table hold information such as: the statements included in a transaction, pointers to the undo blocks for a transaction, commit state, commit SCN?

    And in the first sentence, where they talk about the ITL, does the ITL hold information about the commit state of the transaction?

    A1: No, it is in the fixed area of the SGA. (There are more components to the SGA than just the various pools.)

    A2: There is no Q2.

    A3: A slot is allocated in the rollback segment header. This identifies the transaction via the XID. Space to write the actual undo records comes from the body of the undo segment, and that piece of space is pointed at by information in the undo header slot.

    A4: Yes, all these writes initially arrive only in the redo log buffer. Before any block can be updated, no matter whether it is an undo block or a data block, the change must be protected by redo.

    A5: Yes, the ITL slot has the XID, so it literally names the transaction. This is how we keep track of which transactions have 'shown interest' in this data block.

    A6: Not sure what you're asking here. Yes, only the updated data is written to undo, and all the changes made by a specific transaction are written to the same rollback segment.

    Hope that answers all your questions,

    -Mark

  • Reading and recording of several channels simultaneously

    I use an NI PCIe-6363 card to acquire data from various sensors on an experimental engine.  I need to be able to display and record data on all channels simultaneously.  I'm relatively new to LabVIEW, so I think I'm doing things inefficiently.  I am also having a problem displaying multiple signals.  I have attached the VI.  I look forward to your advice.

    DAQmx is able to take multiple samples at the same time, so you need only one task wire going into your While loop. See this example VI that comes with LabVIEW. You can find others via the Help » Find Examples menu and digging from there:

    C:\Program Files (x86)\National Instruments\LabVIEW 2012\examples\DAQmx\Analog Input\Voltage - Continuous Input.vi

    The DAQmx initialization VI would need a different kind of input rather than "PXI1Slot2/ai0". I forget the exact syntax, but it would be something like "PXI1Slot2/ai0:15". In addition, the DAQmx Read VI is polymorphic, so it can read all these channels in array form. Then you have to build the array of those PXI objects to initialize the DAQmx VI with LabVIEW's Build Array function and index the array into 16 items with the Index Array function. You only need a single Index Array function; expand it downwards and it will automatically give you items 0 through 15 (or however far you expand it) without having to wire in all the indexes.

  • Table - skip validation rules on existing records

    Hi all

    JDeveloper version: 12c (hopefully the behavior should be the same in 11g as well)

    I created an af:table with editable records in it. There is a start Date with minValue set to the current date (the value is read from a backing bean).

    <af:inputDate value="#{bindings.StartDate.inputValue}" required="#{bindings.StartDate.hints.mandatory}"
                  columns="#{bindings.StartDate.hints.displayWidth}"
                  shortDesc="#{bindings.StartDate.hints.tooltip}" id="id1"
                  minValue="#{pageFlowScope.BackingBean.currentSysDate}">

    Example:

    Name         Value  Start date  End date
    Parameter 1  100    03/10/1999  06/10/2050
    Parameter 2  200    05/10/2014  06/10/2050

    Parameter 1 is an old record

    Parameter 2 is a new record

    Problem:

    When I submit the changes with the commit button, ADF tries to validate the Start Date for both Parameter 1 and Parameter 2.

    And it seems to happen in JavaScript on the client side.

    I was expecting it to validate only new or modified records.

    Question:

    Is there a way I can force ADF to run validation only on changed rows and new rows?

    Thank you

    SAPP

    It looks like the only option is to drop the minValue feature on the calendar component and implement the validation on the server side.

    I prefer to do this validation as EntityObject validation rules rather than coding it in doDML().

  • ADF table large number of records

    JDeveloper 12.1.2

    I have a search screen with a results table. For some searches the result set can be more than a million records. The problem is that when the user grabs the table scroll bar and quickly tries to jump to the end of the result set, weblogic crashes with an out-of-memory error. What is the best practice for handling this in the user interface?

    I can think of several ways to resolve this and would appreciate it if anyone has experience with one of the following:

    1. When the user performs the search, run a select count(*) first to determine how many records the search will return. If it is greater than, say, 1000, warn the user to refine the search criteria. This costs an extra trip to the DB.

    2. Implement table pagination instead of the scroll bar. Is this possible with the latest ADF table?

    3. Go ahead and return only, say, 1,000 records and warn the user that not all records are returned and that it would be best to refine the criteria. Similar to 1, except with no extra trip to the DB.

    Thanks in advance for comments and maybe a solution

    The out-of-memory condition is caused by the underlying ViewObject, which fetches and keeps in memory all the rows up to the position the user scrolls to in the table. That is the behavior of the VO when it is configured with the "Scrollable" access mode, which is the default. For example, if the user scrolls to row 100000, the VO will fetch and keep in memory all rows from 1 to 100000. This means that table pagination will NOT help you!

    Fortunately, you have two simple options out-of-the-box:

    1. The 1st option is to configure the VO not to fetch more than a fixed number of rows (for example 5000). You can do this on the "General" page of the VO definition dialog. Go to the "Tuning" section, select the option "Only up to row number" and enter the chosen number in the text field next to it. This way, whatever condition the user enters, the VO will fetch no more than the specified number of rows.
    2. The 2nd option is to configure the VO with the "Range Paging" access mode rather than the default "Scrollable" mode. With "Range Paging" the VO does not fetch and keep in memory all the rows up to the currently selected row; it keeps only a few ranges of rows in memory (for example only a few dozen rows). This way you will not hit an OutOfMemory error even if the user scrolls down to the millionth row. The downside is that, when configured with "Range Paging", the VO has to re-execute the query each time it needs to fetch a different range of rows. If the query executes slowly on the DB server, this will load the DB server and the user will experience delays when scrolling to a different range of rows.
      Take a look at the documentation for more details here: View object performance tuning (read section 9.1.5).

    Dimitar

    Post edited by: Dimitar Dimitrov

    If you decide to use "Range Paging", don't forget to set an appropriate range size, not less than the usual number of visible rows in your table on the page. If you set the range size lower, the VO will have to fetch more than one range of rows, which will require more than one query execution. The default size in the VO definition is 1, but that is not the important one. The important value is the range size defined on the iterator binding in the PageDef, which overrides the VO setting. The default range size on the iterator binding is 25.
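The trade-off Dimitar describes can be sketched in a few lines. Below is a hypothetical Python simulation of the "Range Paging" idea (none of these names are ADF APIs): only one range of rows is ever cached, and jumping to a distant row costs one extra query instead of fetching every intervening row.

```python
# Hypothetical sketch of "Range Paging": only the rows of the currently
# requested range are held in memory; each new range re-runs the fetch.

def make_source(total_rows):
    """Stand-in for the database: returns rows start..start+count-1 on demand."""
    def fetch(start, count):
        end = min(start + count, total_rows)
        return [{"row": i} for i in range(start, end)]
    return fetch

class RangePagingView:
    def __init__(self, fetch, range_size=25):
        self.fetch = fetch
        self.range_size = range_size
        self.cache = []          # never holds more than one range
        self.cache_start = None
        self.queries_run = 0     # each range change costs one query

    def row(self, index):
        start = (index // self.range_size) * self.range_size
        if start != self.cache_start:
            self.cache = self.fetch(start, self.range_size)
            self.cache_start = start
            self.queries_run += 1
        return self.cache[index - start]

view = RangePagingView(make_source(1_000_000), range_size=25)
first = view.row(0)          # fetches rows 0-24
far = view.row(999_999)      # jumps straight to the last range
print(len(view.cache), view.queries_run)  # -> 25 2
```

Memory stays bounded at one range regardless of how far the user scrolls, at the cost of one re-executed query per range change, which is exactly why a slow query makes scrolling feel sluggish in this mode.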

  • Update a table that has 1291444946 records

    Hi friends,

    We have a table with 1291444946 records, of which 13844852 rows are updated. The current elapsed time is 12:22:55.19 for this simple update statement
    update table_name set a=b where a=c;
    The top wait event was db file scattered read.
    DB_Version=11.2.0.2
    OS=Solaris 10 x86
    SGA=16GB
    PGA=6GB
    Cursor_sharing=EXACT
    Is there any way we can reduce this time to 6 hours or less?

    Regards
    NM

    What is the status here?

    Can you give us an update please, or close the thread if the problem has been resolved.

  • Spool output of a large table with millions of records

    Hello

    I'm simply trying to spool the records of a table to a text file using a query. I get the error "ORA-01841: (full) year must be between -4713 and +9999, and not be 0" after extracting very few records from the table.
    Here's the query I'm currently using:

    SET HEADING OFF

    SET VERIFY OFF

    SET SERVEROUTPUT OFF

    SET TERMOUT OFF

    SET FEEDBACK OFF

    SET TRIMSPOOL ON

    SET LINESIZE 32000

    SET PAGESIZE 0

    SET COLSEP '|'

    column MANDT format A5
    column BUKRS format A5
    column BELNR format A10
    column GJAHR format A5
    column BLART format A2
    column BLDAT format A11
    column BUDAT format A11
    column MONAT format A5
    column CPUDT format A10
    column CPUTM format A8
    column AEDAT format A10
    column UPDDT format A10
    column WWERT format A10
    column USNAM format A12
    column TCODE format A20
    column BVORG format A16
    column XBLNR format A16
    column DBBLG format A10
    column STBLG format A10
    column STJAH format A5
    column BKTXT format A25
    column WAERS format A5
    column KURSF format 999999999.99999
    column KZWRS format A5
    column KZKRS format 999999999.99999
    column BSTAT format A5
    column XNETB format A5
    column FRATH format 9999999999999.99
    column XRUEB format A5
    column GLVOR format A5
    column GRPID format A12
    column DOKID format A40
    column ARCID format A10
    column IBLAR format A5
    column AWTYP format A5
    column AWKEY format A20
    column FIKRS format A5
    column HWAER format A5
    column HWAE2 format A5
    column HWAE3 format A5
    column KURS2 format 999999999.99999
    column KURS3 format 999999999.99999
    column BASW2 format A5
    column BASW3 format A5
    column UMRD2 format A5
    column UMRD3 format A5
    column XSTOV format A5
    column STODT format A10
    column XMWST format A5
    column CURT2 format A5
    column CURT3 format A5
    column KUTY2 format A5
    column KUTY3 format A5
    column XSNET format A5
    column AUSBK format A5
    column XUSVR format A5
    column DUEFL format A5
    column AWSYS format A10
    column TXKRS format 999999999.99999
    column LOTKZ format A10
    column XWVOF format A5
    column STGRD format A5
    column PPNAM format A12
    column BRNCH format A5
    column NUMPG format A5
    column ADISC format A5
    column XREF1_HD format A20
    column XREF2_HD format A20
    column XREVERSAL format A10
    column REINDAT format A10
    column PSOTY format A5
    column PSOAK format A10
    column PSOKS format A10
    column PSOSG format A5
    column PSOFN format A30
    column INTFORM format A8
    column INTDATE format A10
    column PSOBT format A10
    column PSOZL format A5
    column PSODT format A10
    column PSOTM format A8
    column FM_UMART format A8
    column NIEC format A4
    column CCNUM format A25
    column SSBLK format A5
    column BATCH format A10
    column SNAME format A12
    column SAMPLED format A8
    column KNUMV format A10
    column XBLNR_ALT format A26
    column ZZREASON_CD format A12
    column ZZREASON_DSCRPT format A50

    spool bkpf.txt

    select MANDT, BUKRS, BELNR, GJAHR, BLART,
           decode(BLDAT, '00000000', '00.00.0000', to_char(to_date(substr(BLDAT, 1, 10), 'DD-MM-RRRR'), 'DD.MM.RRRR')) as BLDAT,
           decode(BUDAT, '00000000', '00.00.0000', to_char(to_date(substr(BUDAT, 1, 10), 'DD-MM-RRRR'), 'DD.MM.RRRR')) as BUDAT,
           MONAT,
           decode(CPUDT, '00000000', '00.00.0000', to_char(to_date(substr(CPUDT, 1, 10), 'DD-MM-RRRR'), 'DD.MM.RRRR')) as CPUDT,
           decode(CPUTM, '000000', '00:00:00', to_char(substr(CPUTM, 1, 2) || ':' || substr(CPUTM, 3, 2) || ':' || substr(CPUTM, 5, 2))) as CPUTM,
           decode(AEDAT, '00000000', '00.00.0000', to_char(to_date(substr(AEDAT, 1, 10), 'DD-MM-RRRR'), 'DD.MM.RRRR')) as AEDAT,
           decode(UPDDT, '00000000', '00.00.0000', to_char(to_date(substr(UPDDT, 1, 10), 'DD-MM-RRRR'), 'DD.MM.RRRR')) as UPDDT,
           decode(WWERT, '00000000', '00.00.0000', to_char(to_date(substr(WWERT, 1, 10), 'DD-MM-RRRR'), 'DD.MM.RRRR')) as WWERT,
           USNAM, TCODE, BVORG, XBLNR, DBBLG, STBLG, STJAH, BKTXT, WAERS, KURSF, KZWRS, KZKRS, BSTAT, XNETB, FRATH, XRUEB, GLVOR, GRPID, DOKID, ARCID, IBLAR, AWTYP,
           AWKEY, FIKRS, HWAER, HWAE2, HWAE3, KURS2, KURS3, BASW2, BASW3, UMRD2, UMRD3, XSTOV,
           decode(STODT, '00000000', '00.00.0000', to_char(to_date(substr(STODT, 1, 10), 'DD-MM-RRRR'), 'DD.MM.RRRR')) as STODT,
           XMWST, CURT2, CURT3, KUTY2,
           KUTY3, XSNET, AUSBK, XUSVR, DUEFL, AWSYS, TXKRS, LOTKZ, XWVOF, STGRD, PPNAM, BRNCH, NUMPG, ADISC, XREF1_HD, XREF2_HD, XREVERSAL,
           decode(REINDAT, '00000000', '00.00.0000', to_char(to_date(substr(REINDAT, 1, 10), 'DD-MM-RRRR'), 'DD.MM.RRRR')) as REINDAT,
           PSOTY, PSOAK, PSOKS, PSOSG, PSOFN, INTFORM,
           decode(INTDATE, '00000000', '00.00.0000', to_char(to_date(substr(INTDATE, 1, 10), 'DD-MM-RRRR'), 'DD.MM.RRRR')) as INTDATE,
           decode(PSOBT, '00000000', '00.00.0000', to_char(to_date(substr(PSOBT, 1, 10), 'DD-MM-RRRR'), 'DD.MM.RRRR')) as PSOBT,
           PSOZL,
           decode(PSODT, '00000000', '00.00.0000', to_char(to_date(substr(PSODT, 1, 10), 'DD-MM-RRRR'), 'DD.MM.RRRR')) as PSODT,
           decode(PSOTM, '000000', '00:00:00', to_char(substr(PSOTM, 1, 2) || ':' || substr(PSOTM, 3, 2) || ':' || substr(PSOTM, 5, 2))) as PSOTM,
           FM_UMART, NIEC,
           CCNUM, SSBLK, BATCH, SNAME, SAMPLED, KNUMV, XBLNR_ALT, ZZREASON_CD, ZZREASON_DSCRPT from sapsrp.bkpf where mandt = 200;

    spool off

    Please suggest what I am missing here, so that I can extract all records from this table.


    Thank you
    Manish

    >
    The issue is that all of these columns are in date format, but when I try to extract data from these columns it comes out like 20120606 (YYYYMMDD), while I need my output as DD.MM.YYYY.

    I used decode only to convert the values in those columns where no date was entered and which therefore contain 00000000.
    >
    That is simply not true. Did you read my response? You use "substr(BLDAT, 1, 10)" - so how can you say "the data in those columns comes out like 20120606", which is only 8 characters long? That does not match what you just said.

    Also, as I said before, if you have data in BLDAT that contains '00000000', then the "to_date(substr(BLDAT, 1, 10), ...)" will throw an exception, because you are trying to convert eight zeros to a date. This is the problem you are experiencing.

    Either BLDAT contains a string that is a valid date, or it doesn't.

    You perpetuate the same problem if you emit values such as "00.00.0000", as these will cause an exception whenever anyone attempts to convert them back to a date each time the data is used.

    So you have at least two problems with BLDAT, the two that I mentioned.

    1. The data in the column is either 8 or 10 characters long; it cannot be both - you use 8 in the decode and 10 in the substr.

    2. The data either is a string representing a valid date or it is not.

    If all values are either '00000000' or a string in the form 'YYYYMMDD', then you need to use a decode that produces '00.00.0000' if the value is '00000000', or else does a TO_DATE(BLDAT, 'YYYYMMDD'). Only one of the two values will be emitted. Right now your code treats BLDAT in two different ways.

    I suggest you experiment with a simple query that contains only BLDAT until you have worked out how to get the result without errors. Then expand the query to include the other columns.

    Keep it simple until it works.
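As a sanity check of the approach suggested above - emit '00.00.0000' for '00000000', otherwise TO_DATE(BLDAT, 'YYYYMMDD') and reformat - here is a hypothetical Python sketch of that DECODE logic (the real fix, of course, belongs in the SQL itself):

```python
from datetime import datetime

def format_bldat(value):
    """Mimic the suggested DECODE: '00000000' maps to the placeholder,
    anything else is treated as YYYYMMDD and reformatted as DD.MM.YYYY."""
    if value == "00000000":
        return "00.00.0000"
    # Equivalent of TO_DATE(BLDAT, 'YYYYMMDD'); raises if not a valid date
    return datetime.strptime(value, "%Y%m%d").strftime("%d.%m.%Y")

print(format_bldat("00000000"))  # -> 00.00.0000
print(format_bldat("20120606"))  # -> 06.06.2012
```

Note that exactly one of the two branches fires for any input, which is the point of the answer: the value is either the all-zeros placeholder or a parseable YYYYMMDD string, never both.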

  • Import miscellaneous transaction fails with a lot records error

    Hi all
    When I import transactions for an item that is lot and serial controlled, I get a 'Lot record' error with the explanation "lot and/or serial number quantity does not equal the transaction quantity", but the lot quantity and serial number quantity are equal to the transaction quantity. What is the problem? Can someone help me solve it? Thank you!
    Here is the code that inserts into the interface tables:

    DECLARE
    l_tran_rec inv.mtl_transactions_interface%ROWTYPE;
    l_lot_rec inv.mtl_transaction_lots_interface%ROWTYPE;
    l_serial_rec inv.mtl_serial_numbers_interface%ROWTYPE;
    BEGIN
    l_tran_rec.last_update_date := SYSDATE;
    l_tran_rec.last_updated_by := 1318;
    l_tran_rec.creation_date := SYSDATE;
    l_tran_rec.created_by := 1318;
    l_tran_rec.last_update_login := -1;
    l_tran_rec.transaction_interface_id := 18027007;
    l_tran_rec.transaction_header_id := 18027007;
    l_tran_rec.transaction_mode := 3;
    l_tran_rec.process_flag := 1;
    l_tran_rec.transaction_type_id := 40;
    l_tran_rec.transaction_source_id := 12831;
    l_tran_rec.transaction_action_id := 27;
    l_tran_rec.transaction_source_type_id := 3;
    l_tran_rec.distribution_account_id := 12831;
    l_tran_rec.organization_id := 204;
    l_tran_rec.inventory_item_id := 208955;
    l_tran_rec.subinventory_code := 'FS Stores';
    l_tran_rec.transaction_quantity := 10;
    l_tran_rec.transaction_uom := 'Ea';
    l_tran_rec.transaction_date := SYSDATE;
    l_tran_rec.source_code := 'ANNOUNCEMENTS';
    l_tran_rec.source_header_id := 976110541;
    l_tran_rec.source_line_id := 976110541;
    INSERT INTO inv.mtl_transactions_interface VALUES l_tran_rec;

    l_lot_rec.transaction_interface_id := 18027007;
    l_lot_rec.last_update_date := SYSDATE;
    l_lot_rec.last_updated_by := 1318;
    l_lot_rec.creation_date := SYSDATE;
    l_lot_rec.created_by := 1318;
    l_lot_rec.last_update_login := -1;
    l_lot_rec.lot_number := 'F10000';
    l_lot_rec.transaction_quantity := 10;
    l_lot_rec.source_code := 'ANNOUNCEMENTS';
    l_lot_rec.source_line_id := 976110541;
    INSERT INTO inv.mtl_transaction_lots_interface VALUES l_lot_rec;

    l_serial_rec.transaction_interface_id := 18027007;
    l_serial_rec.last_update_date := SYSDATE;
    l_serial_rec.last_updated_by := 1318;
    l_serial_rec.creation_date := SYSDATE;
    l_serial_rec.created_by := 1318;
    l_serial_rec.last_update_login := -1;
    l_serial_rec.fm_serial_number := 'S20040';
    l_serial_rec.to_serial_number := 'S20049';
    l_serial_rec.source_code := 'ANNOUNCEMENTS';
    l_serial_rec.source_line_id := 976110541;
    INSERT INTO inv.mtl_serial_numbers_interface VALUES l_serial_rec;
    commit;
    END;

    Kind regards
    Sheng

    Thank you, Hussein. I have found the problem now. I did not insert the serial_transaction_temp_id column in mtl_transaction_lots_interface, and its value must be equal to the transaction_interface_id.

  • Undo segment and the transaction table

    Hello

    I'm confused.

    Since a rollback segment can have about 34 transaction slots in its transaction table in 11g, and we start with 10 undo segments, surely that must mean we cannot have more than 340 concurrent transactions at a time, yes? But we can have more, and that is confusing me.

    I am reading JL's Oracle Core at the moment; I'm sure he explained this and the weakness is in my understanding and I have missed something, but if someone can help me out here it would be appreciated.

    Thank you.

    >
    Does this mean that Oracle will have to create a new rollback segment
    >
    You already know the answer to that.

    Work through the book slowly and carefully, and imagine examples like the one you are reading about. Then try to confirm your conclusions by finding the sections that discuss the issue. The one complaint I have about the book is that the index is not very useful. I would like to see more references, especially for key words.

    But two pages beyond where you were, you can find it in "Start and End of Transaction" on page 28:
    >
    When a session starts a transaction, it picks a rollback segment, picks an entry from the transaction table, increments the wrap#...
    >
    So if the transaction table is FULL (every slot in use by a current transaction) there are only two choices: wait until a slot frees up, or create a new rollback segment.

  • Camileo S20 - How to read all recorded videos?

    Hi all!

    I got the Camileo S20 not long ago, so I'm still trying to get my mind around it. I think I've managed the basics, but I'm having a problem with playback.

    When I record a video - say 6 minutes long, but I stop recording halfway through and then start recording again where I left off - on playback it plays the parts separately and I have to start the other half of the video manually.

    Is there any way I can just click play and have it play all my videos continuously? A 'play all' setting?

    Thank you

    Hi wayne_1,

    First of all, I recommend consulting the user manual. It contains a lot of information about your camera and how to use all the features.

    I checked the user manual, but unfortunately I don't think such a playback function is available. With the play button you can switch to playback mode and use the left and right buttons to go to the next picture/movie.

    But there is no 'play all' function for video playback.

  • I want to read and record something on a drive from my computer, how can I do?

    I want to be able to read something aloud and save it to a disc on my computer so I can listen to it later. Not music, but a reading. How can I do this?

    Hi Lorimorris,

    Thanks for posting your question in the Microsoft Community Forum.

    Based on the information, it seems you want to record your voice and copy it onto a disc to play it back later.

    What version of the Windows operating system do you use?

    If you use Windows 7, you can record your readings using Sound Recorder.

    After recording, you can burn a CD or DVD using the saved recording files.

    You can read these articles and check if they help:

    Record audio with Sound Recorder

    Sound recording - for Windows Vista
    Burn a CD or DVD in Windows Explorer

    Burn a CD or DVD in Windows Media Player

    For more information, see the article:

    Audio recording in sound recorder: frequently asked questions

    Hope this information helps.

    Let us know if you need help with Windows related issues. We will be happy to help you.

  • I have a laptop running XP Pro with Office 2003. All of a sudden all the folders, and the files in the folders, have the read-only attribute applied.

    This affects all folders on the computer and all the files stored inside them. Individual files outside folders are not affected. If I remove read-only from the folders and files and click OK, then reopen the folder, the attribute has been re-applied.
    Any solution to this problem?

    Dave

    It's not a problem... it is a feature.

    At some point in the past, Windows decided that the read-only attribute means nothing when applied to folders, so Microsoft 'repurposed' that bit to indicate that the folder has been customized.  There are other methods available if you really do want to write-protect a file.  The details are explained in the following Microsoft KB article.  I recommend reading the article in full to understand what Microsoft did...

    "You can not view or change the read-only or the attributes of system files in Windows Server 2003, Windows XP, Windows Vista or Windows 7"
        <http://support.microsoft.com/kb/326549>

    HTH,
    JW

  • I have a table column where I want to use a checkbox; what is the best approach?

    Mr President.

    My JDev version is 12c (12.0.3.0).

    I have a table column and I want a checkbox that is selected by default on every insert.

    "DED_WHT" VARCHAR2(1) DEFAULT 'Y', 

    How do I apply this?

    Regards

    Drag the DedWht attribute from the Data Control and select ADF Select Boolean Checkbox.

    When the following dialog box appears, set the selected state value to Y:

    Finally, go to the attribute and set its literal default value to Y:

  • How to identify records in a table for which no record exists in another table?

    Hello

    I have a table named FICHE:

    Name                         Null?    Type
    ---------------------------- -------- --------------
    FICHE_ID                     NOT NULL NUMBER
    AGENT_ID                              NUMBER

    FICHE_ID is the primary key of FICHE.

    There is a table ACTIVITE_FAITE:

    Name              Null?    Type
    ----------------- -------- -----------
    FICHE_ID          NOT NULL NUMBER
    ACTIVITES_ID      NOT NULL NUMBER
    SECTEURS_ID                NUMBER
    DUREE             NOT NULL NUMBER(8,2)
    ACTIVITE_FAITE_ID NOT NULL NUMBER
    TYPE_ACTIVITE_ID           NUMBER

    FICHE_ID is a foreign key in ACTIVITE_FAITE (it references FICHE_ID of the FICHE table).

    I just want to display the records of FICHE for which there is no reference in the ACTIVITE_FAITE table. In other words, all the FICHE records for which no record has been created in ACTIVITE_FAITE.

    I tried without success:

    select * from fiche f

    where f.fiche_id not in

    (select af.fiche_id from activite_faite af)

    Thank you very much for your help!

    select * from fiche f

    where fiche_id not in (select fiche_id from activite_faite)

    is one way (this works because fiche_id is not nullable);

    where not exists (select null from activite_faite a where a.fiche_id = f.fiche_id)

    is another way.
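The remark that NOT IN works here only "because fiche_id is not nullable" is worth illustrating. Here is a hypothetical Python sketch of SQL's three-valued logic (made-up data, not the poster's tables) showing why NOT IN returns no rows as soon as the subquery can produce a NULL, while NOT EXISTS is unaffected:

```python
# SQL three-valued logic sketch: x NOT IN (a, b, NULL) is never TRUE,
# because the comparison x <> NULL evaluates to UNKNOWN.

def sql_not_in(x, values):
    """Return True / False / None, where None plays the role of SQL UNKNOWN."""
    if x in [v for v in values if v is not None]:
        return False          # a definite match: NOT IN is FALSE
    if any(v is None for v in values):
        return None           # UNKNOWN: the NULL might have matched
    return True

fiche_ids = [1, 2, 3]
used = [2, None]              # imagine the subquery column allowed NULLs

# NOT IN keeps only rows where the predicate is strictly TRUE
not_in_result = [f for f in fiche_ids if sql_not_in(f, used) is True]

# NOT EXISTS is a plain membership test: NULLs simply never match
not_exists_result = [f for f in fiche_ids
                     if not any(u == f for u in used if u is not None)]

print(not_in_result)      # -> [] (no rows once a NULL appears in the list)
print(not_exists_result)  # -> [1, 3]
```

With the poster's schema the two queries behave identically because fiche_id is declared NOT NULL; the NOT EXISTS form is simply the safer habit.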
