Data extraction from Essbase to Oracle DB using a report script

I get an error saying that ODI cannot locate my report script. My Essbase server is different from my ODI server. Can I copy the report script onto the ODI server? Is that the way to solve this problem?

Documentation
"Column validation is not performed during data extraction using report scripts. Thus, the output columns of the report script are directly mapped to the corresponding columns of the connected source model."

This means that the columns in the report script must be in exactly the same order as the columns in the source Essbase model for the extraction to succeed.
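Since ODI performs no column validation here, a quick sanity check outside ODI can catch ordering mistakes before a load. This is a minimal sketch for illustration only; the column lists are assumptions, not read from a real ODI repository or report script:

```python
def check_column_order(script_columns, model_columns):
    """Report positional mismatches between a report script's output
    columns and the source model's columns (ODI maps them by position)."""
    mismatches = []
    for i, (script_col, model_col) in enumerate(zip(script_columns, model_columns)):
        if script_col.lower() != model_col.lower():
            mismatches.append((i, script_col, model_col))
    if len(script_columns) != len(model_columns):
        mismatches.append(("count", len(script_columns), len(model_columns)))
    return mismatches

# Hypothetical column lists for illustration
model = ["Account", "Period", "Resource", "Data"]
ok = check_column_order(["Account", "Period", "Resource", "Data"], model)
bad = check_column_order(["Period", "Account", "Resource", "Data"], model)
print(ok)   # no mismatches
print(bad)  # positions 0 and 1 are swapped
```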

Cheers

John
http://John-Goodwin.blogspot.com/

Tags: Business Intelligence

Similar Questions

  • How to get data from an existing Excel file using the Report Generation Toolkit

    I am trying to use the Excel Get Data VI from the Report Generation Toolkit, but I can't figure out how to wire a path of the right type to the VI.  I tried to use the New Report VI, but this does not work unless you use a template.  It will not open an existing Excel file and turn it into an open report to extract data from.

    Essentially, I have a bunch of Excel files that have data in them, and I want a VI that can analyze that data.  I am going to pull all the data directly from the Excel files so I don't have to reprocess them all into text in order to use the more standard spreadsheet VIs, but even to convert the Excel files programmatically in LabVIEW I still need to be able to open an Excel file and get the data, right?

    I found my problem.  It turns out I only had a problem with the toolkit's New Report VI.  I had accidentally wired a folder path control instead of a file path control to it.  Changing the path type took care of it, and I was able to access Excel files: I used the New Report VI to open the file, and Excel Get Data to extract the data.

  • Failed to load data to Essbase (IKM SQL to Hyperion Essbase (DATA))

    I am trying to load data to Hyperion Essbase. Unfortunately it is not going so well. I followed the instructions but I get a "BUFFER_SIZE" error. I have not changed the default BUFFER_SIZE (it is set to <default>: 80) and I have not changed any other settings in the KM.

    Appreciate any thoughts...



    com.hyperion.odi.common.ODIConstants has no attribute 'BUFFER_SIZE'

    org.apache.bsf.BSFException: exception of Jython:
    Traceback (innermost last):
    File "<string>", line 82, in ?
    AttributeError: class 'com.hyperion.odi.common.ODIConstants' has no attribute 'BUFFER_SIZE '.
    at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
    at com.sunopsis.dwg.codeinterpretor.k.a (k.java)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting (SnpSessTaskSql.java)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders (SnpSessTaskSql.java)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders (SnpSessTaskSql.java)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt (SnpSessTaskSql.java)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt (SnpSessTaskSqlI.java)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask (SnpSessTaskSql.java)
    at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep (SnpSessStep.java)
    at com.sunopsis.dwg.dbobj.SnpSession.treatSession (SnpSession.java)
    at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand (DwgCommandSession.java)
    at com.sunopsis.dwg.cmd.DwgCommandBase.execute (DwgCommandBase.java)
    at com.sunopsis.dwg.cmd.e.i (e.java)
    at com.sunopsis.dwg.cmd.h.y (h.java)
    at com.sunopsis.dwg.cmd.e.run (e.java)
    at java.lang.Thread.run (unknown Source)

    Published by: Chris Rothermel on April 13, 2010 15:44

    Hello

    What ODI version and patch are you running?
    It looks like you are using a KM that is newer than the Java files located in the oracledi\drivers directory; I would say you need to update the Essbase Java files to a newer version.
    In the last few patch releases a buffer size option was added to the Essbase data load KM for use with ASO, but this meant you also had to update the Java files to match the version.
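    The error above comes from the KM's Jython script referencing a constant that the older driver class simply doesn't define. The mismatch can be sketched in plain Python; `ODIConstants` here is a stand-in mock, not Hyperion's real class:

```python
class ODIConstants:
    """Stand-in for com.hyperion.odi.common.ODIConstants from an OLD
    driver jar: it predates the BUFFER_SIZE constant the newer KM expects."""
    SERVER = "SERVER"
    PORT = "PORT"

def missing_constants(cls, required):
    """Return the constants a KM references that the driver class lacks."""
    return [name for name in required if not hasattr(cls, name)]

missing = missing_constants(ODIConstants, ["SERVER", "PORT", "BUFFER_SIZE"])
print(missing)  # ['BUFFER_SIZE'] -> the driver jars are older than the KM
```

    In other words, an AttributeError of this shape is a version-skew symptom, not a configuration error: updating the Essbase Java files (or downgrading the KM) removes it.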

    Cheers

    John
    http://John-Goodwin.blogspot.com/

  • Export data from Essbase to Oracle DB using ODI

    I am trying to export data from Essbase to an Oracle DB using ODI. When I create the interface with the source and target datastores, my target shows an error indicating that the target column has no SQL capabilities. I don't know what that means. Can anyone enlighten us? Thank you.

    This means that you have not defined a staging area for your columns on the target datastore. As you are using Essbase in your interface, you need a staging area, since Essbase as a technology has no SQL capabilities.

    Cheers

    John
    http://John-Goodwin.blogspot.com/

  • Script of Insert statements to extract data from a table in Oracle 7i

    Hi all, I have an old Oracle legacy system that has been running for more than 15 years. Every now and then, we need to extract data from a table in Oracle 7i to import into Oracle 10g.

    My thought is to create a script of Insert statements in Oracle 7 that can then be deployed to Oracle 10g.

    I found these scripts on Google and don't know exactly how they work. Any explanation of these scripts would be greatly appreciated. I gather this approach can produce a set of insert statements from this table to be replayed against the 10g table.

    <pre>
    -- Step 1: create this function:
    create or replace function ExtractData(v_table_name varchar2) return varchar2 as
      b_found boolean := false;
      v_tempa varchar2(8000);
      v_tempb varchar2(8000);
      v_tempc varchar2(255);
    begin
      for tab_rec in (select table_name from user_tables
                      where table_name = upper(v_table_name))
      loop
        b_found := true;
        v_tempa := 'select ''insert into ' || tab_rec.table_name || ' (';
        for col_rec in (select *
                        from user_tab_columns
                        where table_name = tab_rec.table_name
                        order by column_id)
        loop
          if col_rec.column_id = 1 then
            v_tempa := v_tempa || '''||chr(10)||''';
          else
            v_tempa := v_tempa || ',''||chr(10)||''';
            v_tempb := v_tempb || ',''||chr(10)||''';
          end if;
          v_tempa := v_tempa || col_rec.column_name;
          if instr(col_rec.data_type, 'CHAR') > 0 then
            v_tempc := '''''''''||' || col_rec.column_name || '||''''''''';
          elsif instr(col_rec.data_type, 'DATE') > 0 then
            v_tempc := '''to_date(''''''||to_char(' || col_rec.column_name ||
                       ',''mm/dd/yyyy hh24:mi'')||'''''',''''mm/dd/yyyy hh24:mi'''')''';
          else
            v_tempc := col_rec.column_name;
          end if;
          v_tempb := v_tempb || '''||decode(' || col_rec.column_name ||
                     ',Null,''Null'',' || v_tempc || ')||''';
        end loop;
        v_tempa := v_tempa || ') values (' || v_tempb || ');'' from ' ||
                   tab_rec.table_name || ';';
      end loop;
      if not b_found then
        v_tempa := '-- Table ' || v_table_name || ' not found';
      else
        v_tempa := v_tempa || chr(10) || 'select ''-- commit;'' from dual;';
      end if;
      return v_tempa;
    end;
    /
    show errors

    -- Step 2: run the following code to extract the data.
    set head off
    set pages 0
    set trims on
    set lines 2000
    set feedback off
    set echo off
    var retline varchar2(4000)
    spool c:\t1.sql
    select 'set echo off' from dual;
    select 'spool c:\recreatedata.sql' from dual;
    select 'select ''-- This data was extracted on ''||to_char(sysdate,''mm/dd/yyyy hh24:mi'') from dual;' from dual;

    -- Repeat the following two lines for as many tables as you want to extract
    exec :retline := ExtractData('dept');
    print :retline;

    exec :retline := ExtractData('emp');
    print :retline;

    select 'spool off' from dual;
    spool off
    @c:\t1

    -- Step 3: run the generated c:\recreatedata.sql on the target to recreate the data.

    Source: http://www.idevelopment.info/data/Oracle/DBA_tips/PL_SQL/PLSQL_5.shtml
    </pre>

    Hello

    Well, what does this script do?
    You pass a table name as input to the function, which returns a varchar2 (a string: the SQL that generates the insert statements). Step 2 spools this output into a second script, t1.sql.

    The function first checks user_tables to verify that the input table name exists; if it does, it retrieves the column names from user_tab_columns and generates the SQL accordingly.
    Then t1.sql is run to generate the final script of properly formed insert statements, which you run against the target schema (make sure the table exists there).
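    The same statement-building idea can be sketched in a few lines of Python. The table metadata below is hard-coded for illustration (a real version would query user_tab_columns), and the quoting rules are simplified to CHAR-like vs everything else:

```python
def build_insert(table, columns, row):
    """Build one INSERT statement; columns is a list of (name, data_type).
    CHAR-like values are quoted, None becomes NULL, others pass through."""
    vals = []
    for (name, dtype), value in zip(columns, row):
        if value is None:
            vals.append("NULL")
        elif "CHAR" in dtype:
            # double any embedded single quotes, then wrap in quotes
            vals.append("'" + str(value).replace("'", "''") + "'")
        else:
            vals.append(str(value))
    col_list = ", ".join(name for name, _ in columns)
    return f"insert into {table} ({col_list}) values ({', '.join(vals)});"

# Hypothetical metadata and row for illustration
cols = [("OWNER", "VARCHAR2"), ("TOTAL", "NUMBER")]
stmt = build_insert("MY_OBJECT1", cols, ("MDSYS", 92800))
print(stmt)  # insert into MY_OBJECT1 (OWNER, TOTAL) values ('MDSYS', 92800);
```

    Unlike the PL/SQL version, this builds the INSERT directly from a row in hand rather than emitting a SELECT that builds it; the output shape matches the recreatedata.sql sample shown below.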

    *t1.sql*

    set echo off
    spool recreatedata.sql
    select '-- This data was extracted on '||to_char(sysdate,'mm/dd/yyyy hh24:mi') from dual;
    select 'insert into MY_OBJECT1 ('||chr(10)||'OWNER,'||chr(10)||'TOTAL) values ('||decode(OWNER,Null,'Null',''''||OWNER||'''')||','||chr(10)||''||decode(TOTAL,Null,'Null',TOTAL)||');' from MY_OBJECT1;
    select '-- commit;' from dual;
    spool off
    

    Then t1.sql is run, and it generates the insert statements for the input table:

    -- This data was extracted on 03/09/2009 23:39
    
    INSERT INTO MY_OBJECT1 (OWNER, TOTAL)
      VALUES   ('MDSYS', 92800);
    
    INSERT INTO MY_OBJECT1 (OWNER, TOTAL)
      VALUES   ('TSMSYS', 256);
    
    INSERT INTO MY_OBJECT1 (OWNER, TOTAL)
      VALUES   ('DMSYS', 15104);
    
    INSERT INTO MY_OBJECT1 (OWNER, TOTAL)
      VALUES   ('TESTME', 128);
    
    INSERT INTO MY_OBJECT1 (OWNER, TOTAL)
      VALUES   ('PUBLIC', 2571392);
    
    INSERT INTO MY_OBJECT1 (OWNER, TOTAL)
      VALUES   ('OUTLN', 768);
    
    INSERT INTO MY_OBJECT1 (OWNER, TOTAL)
      VALUES   ('CTXSYS', 21888);
    
    INSERT INTO MY_OBJECT1 (OWNER, TOTAL)
      VALUES   ('OLAPSYS', 78336);
    
    INSERT INTO MY_OBJECT1 (OWNER, TOTAL)
      VALUES   ('KLONDIKE', 2432);
    
    INSERT INTO MY_OBJECT1 (OWNER, TOTAL)
      VALUES   ('SYSTEM', 51328);
    
    INSERT INTO MY_OBJECT1 (OWNER, TOTAL)
      VALUES   ('EXFSYS', 21504);
    
    INSERT INTO MY_OBJECT1 (OWNER, TOTAL)
      VALUES   ('DBSNMP', 4096);
    
    INSERT INTO MY_OBJECT1 (OWNER, TOTAL)
      VALUES   ('ORDSYS', 216192);
    
    INSERT INTO MY_OBJECT1 (OWNER, TOTAL)
      VALUES   ('SYSMAN', 111744);
    
    -- commit;
    

    Hope this helps
    Regards

  • Help using MaxL to automate Essbase data loading

    Hello

    I am trying to automate loading data into an Essbase database (for a Planning application). My current script, using the MaxL shell, looks like this:

    MaxL >
    import database Application.Database data
    from server text data_file *'data.csv'*
    using server rules_file *'csv'*
    on error write to *'c:\\hyperion\\MaxL_logs\\data_csv.err'*;

    In the script, bold text indicates names or files specific to my case.

    Using this script, the data loads successfully into my Planning application as long as the data.csv file is located in the Planning application's database directory. What I want to be able to do is load data.csv from a directory that I specify, as the data file cannot be placed in the database directory and must have its own directory instead.

    Does anyone know the syntax I need to use in a MaxL script to load a data file that is not in the database directory? I can provide more information if needed. Thank you.

    Hello

    Have you tried...

    import database Application.Database data
    from text data_file 'C:\\temp\\data.csv'
    using server rules_file 'csv'
    on error write to 'c:\\hyperion\\MaxL_logs\\data_csv.err';

    or

    import database Application.Database data
    from local text data_file 'C:\\temp\\data.csv'
    using server rules_file 'csv'
    on error write to 'c:\\hyperion\\MaxL_logs\\data_csv.err';

    Cheers

    John
    http://John-Goodwin.blogspot.com/

  • Cannot load data into Essbase using ODI

    Hi guys,

    Please help. I have a problem loading data into Essbase using ODI. The error message is:
    java.sql.SQLException: unexpected token: ACCOUNT in statement [select C1_ACCOUNT ''Account'']

    I have a very simple flat file that are similar to the below:

    Account, Resource, Period, Data
    Active, Na_Resource, Jan, 10
    Active, Na_Resource, Feb, 12

    With the same flat file, I am able to load the data using a load rule.


    I am using Essbase 9.3.1.0 and ODI 10.1.3.4.0. I use ODI to load members and data into Planning without any problem.


    Thank you

    Hello

    It seems to be generating an extra set of quotation marks in the SQL; in my interface it generates:

    SQL = select C1_ACCOUNT ''Account'', C2_PERIOD ''Period'', C3_RESOURCE ''Resource'', C4_DATA ''Data'' from "C$_0TestApp_testData" where (1=1)

    Note the extra single quotes around Account.

    If you go to Topology Manager, on the physical architecture tab, right-click 'Hyperion Planning' > 'Edit'.
    Select the 'Language' tab, and on the 'JYTHON' line make sure the 'Object Delimiter' field has no quotes; if it does, remove them, then apply and save.

    Cheers

    John
    http://John-Goodwin.blogspot.com/

  • Loading data to Essbase using EAS versus back-end scripts

    Good afternoon

    We have noticed recently that loading our ASO cube with back-end scripts (esscmd etc.) seems to carry much more overhead than loading the data files individually through EAS.  When loading using scripts, the total size of the cube was 1.2 GB.  When loading the files individually through EAS, the size of the cube was 800 MB.  Why the difference?  Is there anything we can do in the scripts to reduce this overhead?

    Thank you

    Are you really using EssCmd to load ASO cubes?  You should use MaxL with load buffers. By default, EAS uses a load buffer when you load multiple files; EssCmd (and MaxL without the buffer commands) won't. That means longer loads and larger files. In an ASO load, Essbase takes the existing .dat file and adds the new data to it. When you are not using a load buffer, it takes the .dat file and the .tmp file and merges them together; then when you do the second file, it takes the .dat file (which now includes the first load's data) and repeats the process. Every time it does this it has to move the whole .dat file (twice), and the .dat file grows. If nothing else I'd call it fragmentation, but I don't think it's as simple as that; I think it's just the way the data is stored. When you use a buffer and no slices, it only has to do this once.
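    The cost difference described above can be sketched with a toy model: without a buffer, each incremental load rewrites the whole accumulated .dat file; with a buffer, all input is combined and merged once. The file sizes below are made up purely for illustration:

```python
def bytes_moved_unbuffered(file_sizes):
    """Each load merges the accumulated .dat with the new file,
    rewriting everything accumulated so far."""
    moved, dat = 0, 0
    for size in file_sizes:
        dat += size
        moved += dat  # the whole merged result is rewritten each time
    return moved

def bytes_moved_buffered(file_sizes):
    """All files go into one load buffer, merged into .dat once."""
    return sum(file_sizes)

loads = [100, 100, 100, 100]  # four equal data files (arbitrary units)
print(bytes_moved_unbuffered(loads))  # 100 + 200 + 300 + 400 = 1000
print(bytes_moved_buffered(loads))    # 400
```

    The gap widens with every additional file, which is consistent with file-by-file EssCmd loads ending up slower and larger than a single buffered MaxL load.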

  • How to load a multi-column data file into Essbase using a load rule

    Hello

    I need to load a data file into Essbase which includes data in several columns.

    Here is a sample file.


    Year, Department, Account, Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec
    FY10, ministere1, account1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
    FY10, ministere2, account1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
    FY10, ministere3, account1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12


    Thank you
    Sirot

    But this isn't an error as such; it means that no data values were changed, so possibly they already exist in the database.
    If there were any rejections, they would be in an error file.

    Cheers

    John
    http://John-Goodwin.blogspot.com/

  • Exporting Essbase data to an RDBMS

    Hello

    I am trying to export data to an RDBMS with an Essbase calc script, but there is a problem with the formatting. I get the output with one column per month (12 month columns) in the RDBMS. (I did the same thing in another application and there it comes out as expected; why does it happen in this application?)

    Account, Scenario, Version, Entity, Jan, Feb, ..., Dec


    What I expect:
    Account, Scenario, Version, Entity, Period
    1 xyz first 1 Jan
    VXA 2 first Feb 1

    Here's my calc script:

    SET DATAEXPORTOPTIONS
    {
    DataExportLevel "Level0";
    DataExportDimHeader ON;
    DataExportRelationalFile ON;
    };
    FIX (&CurYr, "Local", "Budget")
    DATAEXPORT "DSN" "ESS" "BUDEXP1" "hypstg" "hypstg";
    ENDFIX;


    Kind regards
    PrakashV

    Essbase picks a dense dimension to use as the data columns. I don't know exactly what the selection rules are, but it can definitely vary by database. In your case, it is picking up 'Period'.

    However, you can use the DATAEXPORTCOLHEADER option to specify another dense dimension. That still doesn't get you quite what you want - unless one of the dimensions in your FIX is dense and you can use it - but it gives you control. You could add a dense dimension with a single member and specify it with DATAEXPORTCOLHEADER, but it's not an ideal solution.
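    If the export does end up with months across the columns, it can also be unpivoted downstream into the one-row-per-period shape the poster wants. A sketch of that transformation; the member names and column layout here are assumptions for illustration, not taken from the actual export:

```python
MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def unpivot(rows):
    """Turn (account, scenario, version, entity, m1..m12) rows into
    (account, scenario, version, entity, period, value) rows,
    skipping missing values."""
    out = []
    for row in rows:
        keys, values = row[:4], row[4:]
        for month, value in zip(MONTHS, values):
            if value is not None:
                out.append(keys + (month, value))
    return out

# One hypothetical wide row: only Jan and Feb have data
wide = [("Acct1", "Budget", "First", "Ent1", 10, 12) + (None,) * 10]
long_rows = unpivot(wide)
print(long_rows)
# [('Acct1', 'Budget', 'First', 'Ent1', 'Jan', 10),
#  ('Acct1', 'Budget', 'First', 'Ent1', 'Feb', 12)]
```

    A staging table plus this kind of unpivot (or an equivalent SQL UNPIVOT) sidesteps the dense-dimension column selection entirely.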

    The documentation (http://docs.oracle.com/cd/E26232_01/doc.11122/esb_tech_ref/set_dataexportoptions.html) deprecates using DATAEXPORTCOLHEADER with relational export, but Glenn reported that it works fine with a test value - Re: DATAEXPORT to RDBMS - Accounts dim in time columns

    Furthermore, the documentation for DATAEXPORTCOLHEADER in general seems very confused. For example, here is what its example says about using DATAEXPORTCOLHEADER with 'Scenario' in Sample.Basic:

    "Specifies Scenario as the page header in the export file. The Scenario dimension contains three members: Scenario, Actual, and Budget. All Scenario data is shown first, followed by all Actual data, then all Budget data."

    First, it is not a "page header", at least not in the way that term is used in report scripts. Second, the claim that the output will contain all Scenario data first, followed by all Actual data, is dubious at best. Scenario data is shown in the first column of each row, followed by Actual data in the second column. Maybe it's just my cultural bias to read horizontally before vertically...

  • Essbase 11 data export

    When I try to export I get the error:

    com.hyperion.odi.essbase.ODIEssbaseException: Extraction using the calculation script is not supported for essbase versions 9.3


    at com.hyperion.odi.essbase.ODIEssbaseDataReader.validateExtractOptions (unknown Source)


    at com.hyperion.odi.essbase.AbstractEssbaseReader.beginExtract (unknown Source)


    at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)


    I have read john-goodwin.blogspot.com.

    So do I have to use a report script or MDX?
    And does anyone know when there will be a new version of the adapter that will support Essbase 11?

    Thank you

    ---
    Gelo

    Hello

    Yes, even though it is supposedly certified for version 11, calc script export still does not work.

    Your options are to update the code as in my blog or use MDX / report script.
    Another option would be to write the calc script to export the data, and then create an interface to load data but, in the IKM options, set RUN_CALC_SCRIPT_ONLY to yes and enter the calc script details in the CALCULATION_SCRIPT option; then you can use a different interface to do what you want with the exported data file.

    Cheers

    John
    http://John-Goodwin.blogspot.com/

  • Pipelined functions in Oracle 11g reports

    Does anyone know if you can use a pipelined function in an 11g report data model?  For example:

    SELECT sys,
           data_set,
           interface_seqno,
           subpost_key,
           gl_cmp_key
    FROM   TABLE(rfi.gl_apex_extract_pkg.create_erp_detail_func(p_user_id => :P_USER_ID,
                                                                p_id_fm   => 1,
                                                                p_id_to   => 2))

    I use it in an APEX application without problem, but in the Oracle report it returns no data.  No error, just no data.

    Yes, you can use a pipelined function in the data model.

    Try this simple (and silly) example:

    CREATE TYPE emp_t IS OBJECT (
      empno VARCHAR2(10),
      ename VARCHAR2(30)
    )
    /

    CREATE TYPE emp_nt IS TABLE OF emp_t
    /

    CREATE OR REPLACE FUNCTION emps_pipelined
      RETURN emp_nt PIPELINED
    IS
      l_row emp_t := emp_t(NULL, NULL);
    BEGIN
      l_row.empno := '10';
      l_row.ename := 'Emp 10';
      PIPE ROW (l_row);

      l_row.empno := '20';
      l_row.ename := 'Emp 20';
      PIPE ROW (l_row);

      RETURN;
    END;
    /

    SELECT empno, ename
    FROM   TABLE(emps_pipelined)
    /

    > I use it in an APEX application without problem, but in the Oracle report it returns no data.  No error, just no data.

    Check the input parameters.

    Kind regards

    Zlatko

  • Oracle database inventory report

    Hello

    We use Foglight 5.6.4 to monitor Oracle DBs.

    How do I generate an Oracle database inventory report that lists all the objects, the invalid objects, tables without PKs, etc.?

    Thank you.

    Hi,

    The 5.6.4 Oracle cartridge includes these out-of-the-box reports:

    • Balance report - displays the workload balance between the selected nodes of the RAC cluster
    • Disk capacity report - displays a breakdown of disk storage space usage at the host level
    • Health check report - displays various aspects of the specified instance's health, namely: availability, listener status, response time and connection time
    • I/O activity report - displays full details of I/O activity, including workload, wait events, and physical write versus logical read operations
    • Instance availability report - displays up/down time information at the instance level
    • Data storage report - displays a breakdown of data storage space usage at the host, instance, and tablespace level
    • Oracle executive summary report
    • PL/SQL blocks report
    • Storage summary report - provides an overview of disk space, ASM usage, archive redo log status, and the status and destinations of tablespaces and data files
    • Top DB Users report
    • Top SQLs report - provides detailed information about the SQL statements with the longest CPU consumption or total wait events
    • Workload summary report - displays global workload, using various performance metrics

    You can schedule or run these reports from the Reports dashboard.

    You can create additional custom reports using the drag-and-drop report builder, or WCF for more sophisticated reports. WCF allows you to copy and modify the out-of-the-box reports.

    If you have a requirement for additional out-of-the-box reports, please submit a detailed description to the Ideas section so it can be considered for a future version of the product. You can also post the same information here in Discussions as a "How-to" question if you would like advice on how to achieve this in the meantime.

    Kind regards

    Brian Wheeldon

  • How to load HFM data into Essbase

    Hello

    How can we bring HFM data into an Essbase cube without using EAL, since we have performance problems using EAL-DSS as a source for OBIEE reporting?

    Using Extended Analytics, I heard we can only get level 0 HFM data into Essbase, and we would need to write the currency conversion and intercompany (ICP) elimination calculations in Essbase to roll up to the parent members. Is this true?

    Also, how can we convert HFM security to Essbase security?

    Please suggest me on this.

    Thank you
    Vishal

    Security will be a bit tricky, as Essbase generally uses filters and HFM uses security classes. You can potentially use shared groups in Shared Services, but licensing issues can arise depending on provisioning. Your best bet is maybe to look at an LCM artifact export to handle it.

  • Displaying UDAs in an Essbase report script

    Hello everyone.

    I am creating a report script where I show members by their UDA instead of their member name or alias.

    I have a large number of level 0 members which each have a UDA tagged to them. I can query by the UDA in the report script, but it displays their member name, not the UDA, in the final output.


    The reason I need this is that I have a second Essbase cube that is to receive the data from this first one. The 2nd cube has one level less than the source cube, so I am 'connecting' them with a UDA on the level 0 members of the source cube that corresponds to the member names of the level 0 members of the target cube.

    I guess I could possibly use a linked partition?


    Thank you!

    As you found, UDAs return the members that are associated with them. Maybe create an attribute dimension that mirrors the UDA and retrieve that instead. If you want to go with the partition approach, it would be a replicated partition, not a linked partition, that would transfer data from one cube to another.
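    The UDA-based "connection" can also be done outside Essbase when massaging an export file: remap each source level 0 member to its UDA before loading the target cube. A sketch with made-up member names; obtaining the member-to-UDA mapping itself (via the Java API or an outline export) is assumed, not shown:

```python
# Hypothetical source-outline mapping: level 0 member -> its UDA,
# where each UDA equals a level 0 member name in the target cube.
UDA_BY_MEMBER = {
    "Cost_Center_101": "Dept_A",
    "Cost_Center_102": "Dept_A",
    "Cost_Center_201": "Dept_B",
}

def remap_rows(rows, member_col=0):
    """Replace the member-name column of each export row with its UDA."""
    out = []
    for row in rows:
        row = list(row)
        row[member_col] = UDA_BY_MEMBER[row[member_col]]
        out.append(tuple(row))
    return out

export = [("Cost_Center_101", "Jan", 100), ("Cost_Center_201", "Jan", 50)]
print(remap_rows(export))
# [('Dept_A', 'Jan', 100), ('Dept_B', 'Jan', 50)]
```

    Note that several source members can map to the same target member (Dept_A above), so the target load rule or staging step should be set to add, not overwrite, duplicate intersections.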
