Error in Activities > Export while exporting data (Data Admin)

Hello

I am facing a problem with Activities > Export.

The details are as below:

1. The user has all-access rights (Prodikaadmin).

2. The EnvironmentSetting.config entry is as below:

<DataExchangeService configChildKey="name" refscope="Application" factory="Class:Xeno.Prodika.Services.DataExchange.DataExchangeServiceFactory,PlatformExtensions">
  <varenv name="DexConfiguration" handler="Class:Xeno.Prodika.Services.DataExchange.Configuration.DexConfigSectionHandlerFactory,PlatformExtensions">
    <DataExchangeConfig system="staged" NotifierEmail="@@VAR:Prodika.DataExchangeService.Notifier.EmailAddress@@" EncryptionFilter="Class:Xeno.Prodika.Services.DataExchange.Serialization.RijndaelEncryptionStreamFilterFactory,PlatformExtensions">
      <TargetSystems>
        <system>Production</system>
      </TargetSystems>
      <SourceSystems></SourceSystems>
    </DataExchangeConfig>
  </varenv>
</DataExchangeService>

3. 'Generate Token' is not visible in the system under Activities.

4. But I am able to access 'Generate Token' via the '~/portal/DataAdmin/DataAdmin.aspx?ContentKey=GenerateToken' link.

5. I am able to generate and save tokens.

6. I browsed to the same token and clicked the 'Upload Token' button.

7. I got the error: "Import is not targeted to this system; expected 'real', received 'staged'."

Kindly let me know if I'm missing something.

Questions:

1. How do I enable 'Generate Token' in the user interface?

2. How do I fix the above error?

Thank you!!

With the configuration file above, the server can only export data. If you want a server configured to import, please see "How to set up an IMPORT environment?"
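As a rough sketch only (the element and attribute names are assumed to mirror the export configuration above, and the system names here are illustrative, not verified), an import-side DataExchangeConfig would identify the importing server's own system name and list the exporting system under SourceSystems:

```xml
<!-- Hypothetical import-side sketch, not a verified Prodika sample:
     'system' should match the name the exporter lists under <TargetSystems>,
     and the exporter's own system name goes under <SourceSystems>. -->
<DataExchangeConfig system="Production"
                    NotifierEmail="@@VAR:Prodika.DataExchangeService.Notifier.EmailAddress@@"
                    EncryptionFilter="Class:Xeno.Prodika.Services.DataExchange.Serialization.RijndaelEncryptionStreamFilterFactory,PlatformExtensions">
  <TargetSystems></TargetSystems>
  <SourceSystems>
    <system>staged</system>
  </SourceSystems>
</DataExchangeConfig>
```

The "Import is not targeted to this system" error reads like a mismatch between the exporter's TargetSystems entry and the importer's own system name, which is why the names must line up.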

Tags: Oracle Applications

Similar Questions

  • Import error (when exporting data from 10g to 9i)

    Hi friends,

    Recently we migrated from 9i to 10g. In the process we took an export (backup) of a few tables from 10g and tried to import it into 9i. We are using an HP-UX server.

    I am not able to import; I got the error below:

    IMP-00058: ORACLE error 6550
    ORA-06550: line 1, column 33:
    PLS-00302: component 'SET_NO_OUTLINES' must be declared.
    ORA-06550: line 1, column 7:
    PL/SQL: Statement ignored
    IMP-00000: Import terminated unsuccessfully

    I guess it is because of the Import version: Release 10.2.0.2.0 - Production on Thu Aug 27 07:11:07 2009.


    How can I fix this problem?

    Kindly help,
    S

    Just run the EXP utility from the 9i ORACLE_HOME environment you have, and connect (using username/password@instance) to the remote 10g database instance.

  • Data export: error when exporting data from the database

    Hello. I am getting an error when exporting data from the database. The error is:

    "Export of data for June 3, 2009 09:41:20 IST failed.
    Error: network error 1042013 [10054]: cannot receive data.
    Error: network error 1042012 [10054]: cannot send data."


    Any clue?

    Essbase system 9 (9.3.1)

    Thank you and best regards,
    Srini

    There are only two cases where I was not able to get an export to work:
    1. When all the dimensions of the database were set as sparse; a database apparently needs at least one dense dimension to export.
    2. If a .pag file was corrupted. You can run a database audit to check this and try to fix it if there are problems.

  • ELQ-00107 error when exporting activity data with the Bulk REST API (2.0)

    I'm following the flow described in the Bulk API v2.0 documentation.

    I POST to https://secure.Eloqua.com/API/bulk/2.0/activities/exports and get back (note: I work in Python, so this is all deserialized JSON):

    {u'createdAt': u'2014-08-14T07:05:17.6413979Z',
     u'createdBy': u'Pod',
     u'fields': {u'ActivityDate': u'{{Activity.CreatedAt}}',
                 u'ActivityId': u'{{Activity.Id}}'},
     u'filter': u"('{{Activity.CreatedAt}}' > '2014-07-31T23:43:02.080971Z' AND '{{Activity.Type}}' = 'EmailOpen')",
     u'name': u'blarg3',
     u'updatedAt': u'2014-08-14T07:05:17.6413979Z',
     u'updatedBy': u'Pod',
     u'uri': u'/activities/exports/275'}

    Then I POST to /syncs and get back:

    {u'createdAt': u'2014-08-14T07:05:31.6571126Z',
     u'createdBy': u'Pod',
     u'status': u'pending',
     u'syncedInstanceUri': u'/activities/exports/275',
     u'uri': u'/syncs/17790'}

    Now (unfortunately) I GET /syncs/17790 and /syncs/17790/logs:

    {u'createdAt': u'2014-08-14T07:05:31.9330000Z',
     u'createdBy': u'Pod',
     u'status': u'error',
     u'syncStartedAt': u'2014-08-14T07:05:32.6570000Z',
     u'syncedInstanceUri': u'/activities/exports/275',
     u'uri': u'/syncs/17790'}

    {u'count': 2,
     u'hasMore': False,
     u'items': [{u'count': 0,
                 u'createdAt': u'2014-08-14T07:05:33.3770000Z',
                 u'message': u'There was an error in processing of export.',
                 u'severity': u'error',
                 u'statusCode': u'ELQ-00107',
                 u'syncUri': u'/syncs/17790'},
                {u'count': 0,
                 u'createdAt': u'2014-08-14T07:05:33.3930000Z',
                 u'message': u'Sync 17790 ended, resulting in an error status.',
                 u'severity': u'information',
                 u'statusCode': u'ELQ-00101',
                 u'syncUri': u'/syncs/17790'}],
     u'limit': 1000,
     u'offset': 0,
     u'totalResults': 2}

    All I can find on ELQ-00107 is: "ELQ-00107: There was an error in processing the {type}."

    Any thoughts on what I might be doing wrong? Pointers on how I can debug further?

    Thank you!

    Joel Rothman-Oracle Allison.Moore Christopher Campbell-Oracle Ryan Wheler-Oracle

    Hi pod,

    Try removing the sub-second precision from the date elements in the filter: '{{Activity.CreatedAt}}' > '2014-07-31T23:43:02.080971Z' becomes '{{Activity.CreatedAt}}' > '2014-07-31T23:43:02Z'. Sub-second precision is not supported for activity exports.
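    The fix above can be automated before submitting the export definition; a minimal Python sketch (the helper names and sample filter are mine, not from the Eloqua SDK):

```python
def truncate_subseconds(ts):
    """Drop fractional seconds from a UTC ISO-8601 timestamp.

    '2014-07-31T23:43:02.080971Z' -> '2014-07-31T23:43:02Z'
    Timestamps without a fractional part are returned unchanged.
    """
    base, dot, _frac = ts.partition('.')
    return base + 'Z' if dot else ts


def build_activity_filter(since, activity_type='EmailOpen'):
    # Bulk API 2.0 filter string with sub-second precision stripped,
    # since the Bulk API rejects it for activity exports.
    return ("'{{Activity.CreatedAt}}' > '%s' AND '{{Activity.Type}}' = '%s'"
            % (truncate_subseconds(since), activity_type))


print(build_activity_filter('2014-07-31T23:43:02.080971Z'))
```

    Running the truncated filter through the same POST to /activities/exports should avoid the ELQ-00107 failure described above.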

  • exporting data to Excel using XSSFWorkbook

    Hi everyone. I am exporting data to Excel using XSSFWorkbook, and I am getting the error javax.el.ELException: java.lang.OutOfMemoryError: Java heap space. Now I need to change my code to follow BigGridDemo.java.

    http://www.Docjar.org/HTML/API/org/Apache/POI/xssf/userModel/examples/BigGridDemo.Java.html

    http://Apache-POI.1045710.N5.Nabble.com/HSSF-and-XSSF-memory-usage-some-numbers-td4312784.html

    How can I change my code to follow BigGridDemo.java?

    This is my code:

    import java.io.IOException;
    import java.io.OutputStream;

    import javax.faces.context.FacesContext;

    import oracle.adf.model.BindingContainer;
    import oracle.adf.model.BindingContext;
    import oracle.adf.model.binding.DCBindingContainer;
    import oracle.adf.model.binding.DCIteratorBinding;
    import oracle.adf.view.rich.component.rich.data.RichTable;

    import oracle.jbo.RowSetIterator;
    import oracle.jbo.ViewObject;

    import org.apache.poi.ss.usermodel.Cell;
    import org.apache.poi.ss.usermodel.Row;
    import org.apache.poi.ss.usermodel.Sheet;
    import org.apache.poi.ss.usermodel.Workbook;
    import org.apache.poi.xssf.usermodel.XSSFWorkbook;

    public class PoiBean {

        private RichTable customTable;

        public PoiBean() {
        }

        public static BindingContainer getBindingContainer() {
            // Alternatively: return (BindingContainer) JSFUtils.resolveExpression("#{bindings}");
            return (BindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
        }

        public static DCBindingContainer getDCBindingContainer() {
            return (DCBindingContainer) getBindingContainer();
        }

        public void generateExcel(FacesContext facesContext, OutputStream outputStream) throws IOException {
            try {
                Workbook workbook = new XSSFWorkbook(); // or new HSSFWorkbook();
                Sheet worksheet = workbook.createSheet("Fonts");

                // Get the iterator binding and its view object
                DCBindingContainer bindings =
                    (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
                DCIteratorBinding dcIteratorBindings =
                    bindings.findIteratorBinding("CustomClientView1Iterator");
                ViewObject yourVO = dcIteratorBindings.getViewObject();

                // Iterate over all rows of the ViewObject
                RowSetIterator iter = yourVO.createRowSetIterator("CustomClient");
                iter.reset();
                int rowCounter = 0;
                while (iter.hasNext()) {
                    oracle.jbo.Row row = iter.next();

                    // Print the header on the first row in Excel
                    if (rowCounter == 0) {
                        Row headerRow = worksheet.createRow(rowCounter);
                        int cellCounter = 0;
                        for (String colName : row.getAttributeNames()) {
                            Cell cell = headerRow.createCell(cellCounter);
                            cell.setCellValue(colName);
                            cellCounter++;
                        }
                    }

                    // Print the data from the second row onwards.
                    // (The original code special-cased the columns CcnCode, CcnName,
                    // CcnRegDate, CcnCancelDate, CcnUndertaking and CcnCode8, but every
                    // branch did the same null-checked setCellValue, so it is collapsed here.)
                    rowCounter++;
                    Row dataRow = worksheet.createRow(rowCounter);
                    int cellCounter = 0;
                    for (String colName : row.getAttributeNames()) {
                        Cell cell = dataRow.createCell(cellCounter);
                        Object value = row.getAttribute(colName);
                        if (value != null) {
                            cell.setCellValue(value.toString());
                        }
                        cellCounter++;
                    }
                }
                worksheet.createFreezePane(0, 1, 0, 1);
                workbook.write(outputStream);
                outputStream.flush();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    The "big grid" demo is obsolete and has been replaced by SXSSF, which is compatible with XSSF (see http://poi.apache.org/spreadsheet/how-to.html#sxssf, the API how-to in the new "Halloween Document").

    Theoretically, all you need to do is replace "new XSSFWorkbook()" with "new org.apache.poi.xssf.streaming.SXSSFWorkbook()" in your program.

    You had better post POI-specific questions on the Apache POI user forum (see the Apache POI mailing lists).

    Kind regards

    Alex

  • Export data from tables into XML files

    Hello

    This thread is to get your opinion on how to export table data into an XML file containing the data plus another file (XSD) containing the table structure.
    For example, I have a datamart with 3 dimensions and a fact table. The idea is to have an XML file with the fact table data, an XSD file with the fact table structure, an XML file containing the data of the 3 dimensions, and an XSD file containing the definitions of all 3 dimensions. So: one XML file for the fact table, a single XML file combining all the dimensions, one XSD file for the fact table, and one XSD file combining all the dimensions.

    I have an idea of how to do it, but I would like your advice on how you would approach it.

    Thank you in advance.

    You are more or less in the same situation as me, I guess, regarding the ORA-01426 numeric overflow. I tried to export, through UTL_FILE, the contents of a relational table with 998 columns. In that case you very quickly run into these ORA- errors, even if you work with CLOB-based solutions, while trying to concatenate the column data into a CSV string. Oracle has the nasty habit, in some of its packages/code, of "assuming" intelligent solutions and implicitly converting datatypes on the fly while trying to concatenate the column data into one string.

    The second part, in the realm of PL/SQL, is that it tries to put everything into a buffer, which has a maximum of 65K or 32K, so you have to break things up. In the end I solved it by treating everything as a BLOB and writing it to file as such. I'm guessing the ORA- error is related to these buffer / implicit datatype conversion problems in the official Oracle DBMS packages.

    The fun part is that this 998-column table came from an XML source (aka "how SOA can make things very complicated and non-performant"). I now have 2 different "write data to CSV" solutions in my packages; I use this one for the 998-column situation (though I have no idea if it will ever perform well; for example, using table collections in this scenario would explode the PGA). The only solution that would really work in my case is a better physical design of the environment, but I am currently not engaged as an architect, so I am not in a position to impose it.

    -- ---------------------------------------------------------------------------
    -- PROCEDURE CREATE_LARGE_CSV
    -- ---------------------------------------------------------------------------
    PROCEDURE create_large_csv(
        p_sql         IN VARCHAR2 ,
        p_dir         IN VARCHAR2 ,
        p_header_file IN VARCHAR2 ,
        p_gen_header  IN BOOLEAN := FALSE,
        p_prefix      IN VARCHAR2 := NULL,
        p_delimiter   IN VARCHAR2 DEFAULT '|',
        p_dateformat  IN VARCHAR2 DEFAULT 'YYYYMMDD',
        p_data_file   IN VARCHAR2 := NULL,
        p_utl_wra     IN VARCHAR2 := 'wb')
    IS
      v_finaltxt CLOB;
      v_v_val VARCHAR2(4000);
      v_n_val NUMBER;
      v_d_val DATE;
      v_ret   NUMBER;
      c       NUMBER;
      d       NUMBER;
      col_cnt INTEGER;
      f       BOOLEAN;
      rec_tab DBMS_SQL.DESC_TAB;
      col_num NUMBER;
      v_filehandle UTL_FILE.FILE_TYPE;
      v_samefile BOOLEAN      := (NVL(p_data_file,p_header_file) = p_header_file);
      v_CRLF raw(2)           := HEXTORAW('0D0A');
      v_chunksize pls_integer := 8191 - UTL_RAW.LENGTH( v_CRLF );
    BEGIN
      c := DBMS_SQL.OPEN_CURSOR;
      DBMS_SQL.PARSE(c, p_sql, DBMS_SQL.NATIVE);
      DBMS_SQL.DESCRIBE_COLUMNS(c, col_cnt, rec_tab);
      --
      FOR j IN 1..col_cnt
      LOOP
        CASE rec_tab(j).col_type
        WHEN 1 THEN
          DBMS_SQL.DEFINE_COLUMN(c,j,v_v_val,4000);
        WHEN 2 THEN
          DBMS_SQL.DEFINE_COLUMN(c,j,v_n_val);
        WHEN 12 THEN
          DBMS_SQL.DEFINE_COLUMN(c,j,v_d_val);
        ELSE
          DBMS_SQL.DEFINE_COLUMN(c,j,v_v_val,4000);
        END CASE;
      END LOOP;
      -- --------------------------------------
      -- This part outputs the HEADER if needed
      -- --------------------------------------
      v_filehandle := UTL_FILE.FOPEN(upper(p_dir),p_header_file,p_utl_wra,32767);
      --
      IF p_gen_header = TRUE THEN
        FOR j        IN 1..col_cnt
        LOOP
          v_finaltxt := ltrim(v_finaltxt||p_delimiter||lower(rec_tab(j).col_name),p_delimiter);
        END LOOP;
        --
        -- Adding prefix if needed
        IF p_prefix IS NULL THEN
          UTL_FILE.PUT_LINE(v_filehandle, v_finaltxt);
        ELSE
      v_finaltxt := p_prefix||p_delimiter||v_finaltxt;
          UTL_FILE.PUT_LINE(v_filehandle, v_finaltxt);
        END IF;
        --
    -- Create a separate header file if requested
        IF NOT v_samefile THEN
          UTL_FILE.FCLOSE(v_filehandle);
        END IF;
      END IF;
      -- --------------------------------------
      -- This part outputs the DATA to file
      -- --------------------------------------
      IF NOT v_samefile THEN
        v_filehandle := UTL_FILE.FOPEN(upper(p_dir),p_data_file,p_utl_wra,32767);
      END IF;
      --
      d := DBMS_SQL.EXECUTE(c);
      LOOP
        v_ret := DBMS_SQL.FETCH_ROWS(c);
        EXIT
      WHEN v_ret    = 0;
        v_finaltxt := NULL;
        FOR j      IN 1..col_cnt
        LOOP
          CASE rec_tab(j).col_type
          WHEN 1 THEN
            -- VARCHAR2
            DBMS_SQL.COLUMN_VALUE(c,j,v_v_val);
            v_finaltxt := v_finaltxt || p_delimiter || v_v_val;
          WHEN 2 THEN
            -- NUMBER
            DBMS_SQL.COLUMN_VALUE(c,j,v_n_val);
            v_finaltxt := v_finaltxt || p_delimiter || TO_CHAR(v_n_val);
          WHEN 12 THEN
            -- DATE
            DBMS_SQL.COLUMN_VALUE(c,j,v_d_val);
            v_finaltxt := v_finaltxt || p_delimiter || TO_CHAR(v_d_val,p_dateformat);
          ELSE
            v_finaltxt := v_finaltxt || p_delimiter || v_v_val;
          END CASE;
        END LOOP;
        --
        v_finaltxt               := p_prefix || v_finaltxt;
        IF SUBSTR(v_finaltxt,1,1) = p_delimiter THEN
          v_finaltxt             := SUBSTR(v_finaltxt,2);
        END IF;
        --
        FOR i IN 1 .. ceil( LENGTH( v_finaltxt ) / v_chunksize )
        LOOP
          UTL_FILE.PUT_RAW( v_filehandle, utl_raw.cast_to_raw( SUBSTR( v_finaltxt, ( i - 1 ) * v_chunksize + 1, v_chunksize ) ), TRUE );
        END LOOP;
        UTL_FILE.PUT_RAW( v_filehandle, v_CRLF );
        --
      END LOOP;
      UTL_FILE.FCLOSE(v_filehandle);
      DBMS_SQL.CLOSE_CURSOR(c);
    END create_large_csv;
    
  • Cannot export data when the WHERE clause has AND/OR

    I am able to export the results of a query if the WHERE clause has only one condition. But if there is an AND or an OR, I can right-click and choose Export Data, but nothing happens.

    For example, the following text exports very well:

    SELECT * FROM DUAL
    WHERE ROWNUM = 1;

    But throw in an 'AND' and it will not export:

    SELECT * FROM DUAL
    WHERE ROWNUM = 1 AND ROWNUM < 2;

    I'm running version 1.5.3 and have not applied the patches.

    Unfortunately, as part of trying to solve other problems with the export feature, 1.5.3 introduced problems where certain types of SQL statements would not export (either nothing happened, as you are seeing, or errors like ORA-936 were reported). While it is not yet perfect, 1.5.5 handles exporting result sets much better (it fixes your case, which fails in 1.5.3), so I would suggest that you upgrade to 1.5.5.

    theFurryOne

  • export data

    Hi, I was wondering if someone can help me write data to an Excel or text file. I get continuous data from a data acquisition device through a loop and want to export the data to a file. I tried writing to Excel, but an error about my number of samples keeps coming back. However, this error does not come up without this block. Can anyone help?

    Thank you

    VL

    Do you need to export data continuously, or export after the data collection has finished? Here is my example of exporting after the data has been collected. Maybe it will help and give you some ideas!

  • Data load error 10415 when exporting data to an Essbase EPMA app

    Hi Experts,

    Can someone help me solve this issue? I am facing it on FDM 11.1.2.3.

    I'm trying to export data from FDM to an Essbase EPMA application.

    Import and Validate worked fine, but when I click Export it fails.

    I am getting the error below:

    Failed to load data

    10415 - data loading errors

    Essbase API procedure: [EssImport] threw code 1003029 - 1003029

    Formatting error encountered in spreadsheet file (C:\Oracle\Middleware\User_Projects\epmsystem1\EssbaseServer\essbaseserver1\app\Volv

    I have these dimension members:

    1. Account

    2. Entity

    3. Scenario

    4. Year

    5. Period

    6. Regions

    7. Products

    8. Acquisitions

    9. Servicesline

    10. Functionalunit

    When I click the Export button it fails.

    One more thing I checked: the .DAT file, but this file is empty.

    Thanks in advance

    Hello

    I was facing a similar problem.

    In my case I was loading data to a classic Planning application. When all the dimension members are ignored in the mapping for the combination you try to load, then when you click Export you will get the same message and an empty .DAT file is created.

    You can check this

    Thank you

    Praveen

  • ATG export data using startSQLRepository in CommerceReferenceStore

    Today I tried to export data from the ProfileAdapterRepository using startSQLRepository for CommerceReferenceStore, with the following command:

    startSQLRepository -m CommerceReferenceStore.Store  -export all d:\profile_data.xml  -repository /atg/userprofiling/ProfileAdapterRepository
    

    The above command works fine.

    But when I tried the command below:

    startSQLRepository -m CommerceReferenceStore.Store.EStore  -export all d:\profile_data.xml  -repository /atg/userprofiling/ProfileAdapterRepository
    

    it throws this error:

    **** info       Sat Feb 22 11:40:51 IST 2014    1393049451776   /atg/commerce/pricing/Promotions        Resolving reference to /atg/commerce/catalog/Produc
    tCatalog
    **** Warning    Sat Feb 22 11:40:52 IST 2014    1393049452658   DistributorSender       No remote servers configured
    **** info       Sat Feb 22 11:40:53 IST 2014    1393049453259   /atg/commerce/pricing/priceLists/PriceLists     SQL Repository startup complete
    **** Error      Sat Feb 22 11:40:53 IST 2014    1393049453440   /atg/commerce/pricing/priceLists/PriceLists     Table 'dcs_price_list' in item-descriptor:
    'priceList' does not exist in a table space accessible by the data source.  DatabaseMetaData.getColumns returns no columns.  Catalog=null Schema=null
    **** Warning    Sat Feb 22 11:40:53 IST 2014    1393049453542   /atg/commerce/pricing/priceLists/PriceLists     atg.adapter.gsa.GSARepository->loadColumnIn
    fo : SQLException in Table.loadColumnInfo.  Trying again.       com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'production.dcs_price_list
    ' doesn't exist
    **** Warning    Sat Feb 22 11:40:53 IST 2014    1393049453542   /atg/commerce/pricing/priceLists/PriceLists             at sun.reflect.NativeConstructorAcc
    essorImpl.newInstance0(Native Method)
    **** Warning    Sat Feb 22 11:40:53 IST 2014    1393049453542   /atg/commerce/pricing/priceLists/PriceLists             at sun.reflect.NativeConstructorAcc
    essorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    **** Warning    Sat Feb 22 11:40:53 IST 2014    1393049453542   /atg/commerce/pricing/priceLists/PriceLists             at sun.reflect.DelegatingConstructo
    rAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    **** Warning    Sat Feb 22 11:40:53 IST 2014    1393049453542   /atg/commerce/pricing/priceLists/PriceLists             at java.lang.reflect.Constructor.ne
    wInstance(Constructor.java:513)
    

    For both of the above commands I set up FakeXADataSource and JTDataSource as below.

    FakeXADataSource.properties

    URL=jdbc:mysql://localhost:3306/production
    user=root
    password=root
    driver=com.mysql.jdbc.Driver
    

    and JTDataSource.properties

    $class=atg.service.jdbc.MonitoredDataSource
     dataSource=/atg/dynamo/service/jdbc/FakeXADataSource
     transactionManager=/atg/dynamo/transaction/TransactionManager
    


    So can you explain why I am not able to export the data with the EStore module?

    If you need further clarification from my side, please let me know.

    The error:

    Property $class not defined in the configuration for /atg/dynamo/service/jdbc/FakeXADataSource_switchinga

    means that you are missing the $class key in the FakeXADataSource_switchinga properties file:

    $class=atg.service.jdbc.FakeXADataSource

  • How to export data to Excel from 2 tables with the same number of columns and the same column names?

    Hi everyone, once again I have ended up with a problem.

    After trying many things myself, I finally decided to post here...

    I created a form in Forms Builder 6i in which, on clicking a button, the data gets exported to an Excel sheet.

    It works very well with a single table. The problem now is that I cannot do the same with 2 tables,

    because the tables have the same number of columns and the same column names.

    Here are the 2 tables with column names:

    Table-1 (MONTHLY_PART_1)    Table-2 (MONTHLY_PART_2)
    SL_NO                       SL_NO
    MODEL                       MODEL
    END_DATE                    END_DATE
    U-1                         U-1
    U-2                         U-2
    U-4                         U-4
    ...                         ...
    U-20                        U-20
    U-25                        U-25

    Given that the tables have the same column names, I get the following error:

    Error 402 at line 103, column 4:

    aliases are required in the SELECT list of the cursor to avoid duplicate column names.

    So how can I export data to Excel from 2 tables with the same number of columns and the same column names?

    Should I paste the code? Should I post this query in the 'SQL and PL/SQL' forum?

    Help me with this please.

    Thank you.

    Wait a second... is this some kind of home-grown partitioning? Shouldn't it be a union of the two tables instead of a join?

    Cheers

  • Problem with exporting data from one application and importing it into another application

    Hello

    I need to feed data from one application (source) to another application (target). So what I did is export the data needed by the target application into text files using the DATAEXPORT function, and I now want to import the text files into the target application using load rules files.

    I'm trying to create a separate load rules file for each exported text file, but I'm not able to do this successfully.
    I mapped all members from the source to the target application, but still something is going wrong that either prevents a successful import or does not let the data go to the right places.

    Are there any specifics to using this DATAEXPORT function, such as the format or the kind of DATAEXPORTOPTIONS to use, to make this work?

    Here is the first part of my calc script. For all the other data I wanted to export, I wrote similar statements, simply fixing on individual members.

    SET DATAEXPORTOPTIONS
    {
    DATAEXPORTLEVEL ALL;
    DATAEXPORTCOLFORMAT ON;
    DATAEXPORTOVERWRITEFILE ON;
    };

    FIX ("ST", "BV", "SSV", "ASV", "ESV", "W", "HSP_InputValue", "SÜSS", "TBSD", "ABU", "AC", "LB", "LT", "VAP", "Actual", "FY11", @REMOVE (@LEVMBRS (Period, 0), @LIST (BegBalance)));
    DATAEXPORT "File" "," "...\export Files\D3.txt";
    ENDFIX;


    Please let me know your opinion on this. I really need help with this.
    ~ Hervé

    So, you have 12 periods across the columns of the export data?

    If so, you must assign each data column to a period. The "data" column property is used only when you have completely defined the dimensions through the other fields or the header, and the only thing left over is the data itself. That does not work when you have several columns of data, so just assign the right period to each column; doing it this way means there is no generic data column.

    Kind regards

    Cameron Lackpour

  • Export Essbase data to Oracle DB using ODI

    I am trying to export Essbase data to an Oracle DB using ODI. When I create the interface with the source and target data stores, my target shows an error indicating that the target column has no SQL capabilities. I don't know what that means. Can anyone enlighten me? Thank you.

    This means that you have not defined a staging area for your columns on the target data store. As you are using Essbase in your interface, you need a staging area, because Essbase as a technology has no SQL capabilities.

    Cheers

    John
    http://John-Goodwin.blogspot.com/

  • export data to sql

    Hello

    I'm trying to export Essbase data and then load it into a SQL table. I right-click, choose Export, then choose "All data" and check "Export in column format", and I get the text file. However, when I try to import it into a SQL table, it comes in as just one big column.

    What is the best way to export data out of Essbase and then import it into a SQL table? The idea is that I want to export just, for example, the budget data, and then import it into SQL.

    Thank you

    If you read my blog, you will see that you cannot really control the export format except to designate one of the dense dimensions as the columns. The rest follows the outline. I will guess that your Periods dimension is what runs across the columns, so you cannot retrieve the columns in the desired order. If it is still a dev cube, you could add a dense dimension with a single member and use it as your column dimension. It would not affect the performance of the cube, but it would affect any load rules you already have.

    If you need the format you describe, another suggestion would be to export in the format as you see it, then put views on top of it that union the periods to get a single column of data, forcing a month name into it.

    If your table looks like: year, scenario, dept, acct, per1, per2, per3, etc.,
    then you could create a view like:
    SELECT year, scenario, dept, acct, 'Jan', per1 FROM table1 WHERE per1 IS NOT NULL
    UNION
    SELECT year, scenario, dept, acct, 'Feb', per2 FROM table1 WHERE per2 IS NOT NULL
    etc.
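    The union-of-views idea above can also be done in plain code after the export; here is a small Python sketch (the column layout and month names are illustrative, not from the thread):

```python
def unpivot(rows, months):
    """Turn wide rows (key columns followed by one value per month) into
    (key..., month, value) rows, skipping empty values; this is the code
    equivalent of the 'WHERE perN IS NOT NULL' branches of the view."""
    narrow = []
    for row in rows:
        keys, values = row[:-len(months)], row[-len(months):]
        for month, value in zip(months, values):
            if value is not None:
                narrow.append(keys + (month, value))
    return narrow


# One wide row: year, scenario, dept, acct, Jan, Feb, Mar
wide = [("FY10", "Budget", "D01", "A100", 10, None, 30)]
print(unpivot(wide, ["Jan", "Feb", "Mar"]))
```

    Each wide row becomes one narrow row per non-empty period, which loads cleanly into a single-value SQL table.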

  • Export data then reload

    Hello

    I have a Scenario dimension with Actual, Forecast, Budget, Forecast 2, FY10 Budget, etc. I want to export data only for Forecast 2 and FY10 Budget, so here is my script:

    ESS_LOCALE English_UnitedStates.Latin1@Binary
    SET DATAEXPORTOPTIONS
    {
    DataExportLevel ALL;
    DataExportColFormat ON;
    DataExportDynamicCalc ON;
    };
    FIX ("FY10 Budget", "Forecast 2");
    DATAEXPORT "File" "," "D:\testexport.txt";
    ENDFIX;

    The file is generated. Then I clear the FY10 Budget and Forecast 2 blocks and try to load the data back into the same cube with the same dimensions, and I get an error such as:

    Unknown Item ["Feb", "Mar", "CalQ1", "Apr"...

    What am I doing wrong here?

    Thank you

    Edited by: Alain on November 15, 2010 07:50

    Edited by: Alain on November 15, 2010 07:51
    I think it is my delimiter: I use a comma in my calc script, but it seems the file uses tab or space. How can I specify tab or space as the delimiter in my DATAEXPORT options?

    ^^^ I'm not sure the delimiter is the issue, but certainly try it. If you put " " you will get a space as the delimiter. If you put a real tab character (not easy to type; it must be pasted in from EAS or the editor of your choice, since I can't type a tab in this forum window) you get tab as the delimiter. That's all there is to it. There was a thread on this last week; I learn new things all the time on this forum.
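    If you are unsure which delimiter an export file actually ended up with, a quick check is easy to script; a tiny Python sketch (a heuristic of my own, not part of Essbase):

```python
def detect_delimiter(line, candidates=('\t', ',', ' ')):
    """Guess the delimiter of one export line: the candidate that splits
    the line into the most fields wins (ties go to the earliest candidate)."""
    return max(candidates, key=lambda d: len(line.split(d)))


print(repr(detect_delimiter('"Sales"\t10\t20\t30')))
```

    Run it on the first line of the .txt file before building the SQL load to confirm whether you got tab, comma, or space.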

    Kind regards

    Cameron Lackpour
