Data loading fails in IE

This problem is totally strange to me... I have a Flash application that talks to PHP scripts, exchanging XML data files via GET/POST...

I load XML data with XML.load, and load and send form data with LoadVars.sendAndLoad.

I load data from several PHP files and all of them work very well, except one - and only in IE. In Firefox it loads fine.
In the onLoad handler, I get success == false and no data arrives.

What weirds me out the most is that I call the PHP the same way. This one loads fine:
XML.load(baseURL+"language.php?language="+language);
as does the problematic one:
XML.load(baseURL+"data.php?u="+unique);
(I use u = the time in ms as a cache-buster, since IE tends to cache things it shouldn't.)

If I open the URL directly in IE it also returns fine.

The SWF is exported for Flash Player 6, and the SWF and the scripts are on the same server/domain.

I don't know where else to look, and this is a very important project...

Does anyone have any suggestions? Some IE quirk, maybe?

We have solved it!

It is related to this: http://www.blog.lessrain.com/?p=276

It seems there is a bug in IE where Flash does not receive the data if certain caching-related HTTP headers are present in the response...
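The linked post describes the workaround: strip the aggressive anti-caching headers from the PHP response before it reaches IE's Flash plugin. The original fix was done on the PHP side; the sketch below only illustrates the idea in Python, and the header list and function name are assumptions, not the exact fix used.

```python
# Response headers reported to trip the IE/Flash bug (illustrative list,
# based on the caching headers discussed in the linked lessrain.com post).
UNSAFE_CACHE_HEADERS = {"pragma", "cache-control"}

def sanitize_headers(headers):
    """Drop the anti-caching headers IE chokes on, and substitute an
    Expires date in the past so the response is still effectively uncached."""
    cleaned = {k: v for k, v in headers.items()
               if k.lower() not in UNSAFE_CACHE_HEADERS}
    cleaned["Expires"] = "Mon, 26 Jul 1997 05:00:00 GMT"  # already expired
    return cleaned

headers = {
    "Content-Type": "text/xml",
    "Pragma": "no-cache",
    "Cache-Control": "no-store, no-cache, must-revalidate",
}
print(sanitize_headers(headers))
```

In PHP the equivalent is simply not emitting those header() calls (or overwriting them), which is presumably why the fix took five minutes.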

Now that we finally found the cause, it took our PHP/JS guy 5 minutes to fix it and 5 minutes to curse M$...

Have a great weekend everyone! I know I will, now.

Tags: Adobe Animate

Similar Questions

  • Why does statistical data loading fail if there are periods without data?

    Hello

    EPM 11.1.2.3.500 on Exalytics

    I am trying to load statistics for several periods (from Jul_2012 to Aug_2014).

    Since some of them have no data, the load stops.

    Is this normal? Or should it at least load the rest of the data?

    Traceback (most recent call last):

    File "<string>", line 1982, in updateTDATASEG_T_TDATASEGW

    RuntimeError: [u"Error: No record exists for period 'Jul_2012'", u"Error: No record exists for period 'Aug_2012'", u"Error: No record exists for period 'Sep_2012'", u"Error: No record exists for period 'Oct_2012'", u"Error: No record exists for period 'Nov_2012'", u"Error: No record exists for period 'Dec_2012'", u"Error: No record exists for period 'Jan_2013'", u"Error: No record exists for period 'Feb_2013'", u"Error: No record exists for period 'Mar_2013'", u"Error: No record exists for period 'Apr_2013'", u"Error: No record exists for period 'May_2013'", u"Error: No record exists for period 'Jun_2013'"]

    2014-08-28 06:44:59,387 FATAL [AIF]: Error in data load plan

    2014-08-28 06:44:59,397 INFO [AIF]: End process FDMEE, process ID: 380

    Thank you

    I agree that the messages in the details of the process should be more informative about the cause of the failure.

    I suggest you raise an enhancement request

    Please, can you mark this question as answered so that others can see its resolution?

    Regards

  • Data load error 10415 when exporting data to an Essbase EPMA app

    Hi Experts,

    Can someone help me solve this issue I am facing in FDM 11.1.2.3?

    I'm trying to export data from FDM to an EPMA Essbase application.

    Import and validate worked fine, but when I click Export it fails.

    I am getting the error below:

    Failed to load data

    10415 - data loading errors

    Essbase API procedure: [EssImport] threw code: 1003029 - 1003029

    Formatting error encountered in the spreadsheet file (C:\Oracle\Middleware\User_Projects\epmsystem1\EssbaseServer\essbaseserver1\app\Volv

    I have these dimension members:

    1. Account
    2. Entity
    3. Scenario
    4. Year
    5. Period
    6. Regions
    7. Products
    8. Acquisitions
    9. ServicesLine
    10. FunctionalUnit

    When I click the Export button it fails.

    One more thing I checked: the .DAT file, but this file is empty.

    Thanks in advance

    Hello

    I was facing a similar problem.

    In my case I was loading data to a classic Planning application. When all the dimension members for a combination are ignored in the mapping and you try to load the data, clicking Export gives the same message and an empty .DAT file is created.

    You can check this

    Thank you

    Praveen

  • SQL Loader sometimes fails to load the same source data file

    I encounter different types of data loading errors when trying to load data. What makes it really annoying is that I'm not able to identify the real culprit, since the success of the load depends on the number of lines in the source data file but not on its content. I use Toad's dataset export feature to create a delimited data set. When I take just the first 50 lines, the data loads into the target table successfully. When I take the first 150 lines, the load fails with:
    Record 13: Rejected - Error on table ISIKUD, column SYNLAPSI.
    ORA-01722: invalid number
    I can see from the .bad file that the same line loaded successfully when the 50-row data file was used. The content of that column for this particular row is NULL (empty string).
    I suspect that Toad generates a faulty delimited text file when taking 150 lines. For confidentiality reasons I can't show the data file. What can I do? How can I investigate this further? The table to be loaded by SQL Loader is almost 600 MB. I use Windows XP.
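ORA-01722 on a numeric column usually means a delimited field contained something that cannot be parsed as a number: a stray quote, whitespace, or a locale-specific separator the export tool wrote out. One way to hunt the culprit without sharing the data is to scan the exported file yourself; the sketch below is a generic checker, not part of SQL*Loader - the column index and delimiter are assumptions to adjust for the real file.

```python
import csv

def find_bad_numbers(path, col, delimiter=","):
    """Return (line_number, raw_value) pairs for rows whose `col`-th field
    is neither empty (NULL) nor parseable as a number."""
    bad = []
    with open(path, newline="") as f:
        for lineno, row in enumerate(csv.reader(f, delimiter=delimiter), 1):
            value = row[col].strip()
            if value == "":        # empty field -> NULL, fine for SQL*Loader
                continue
            try:
                float(value)
            except ValueError:
                bad.append((lineno, value))
    return bad
```

Running this over the 150-row export should point straight at the value SQL*Loader rejects - e.g. "12,5" or "1 234" written with a locale-specific separator.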

    Published by: totalnewby on June 30, 2012 10:17

    I do not believe that the 11g sqlldr client allows you to load into a 10g database. Can you run sqlldr on the 10g database server itself? Please also post the rest of the information I asked for.

    HTH
    Srini

  • Essbase in MSCS Cluster (metadata and data load failures)

    Hello

    If there is a power failure on the active node of the Essbase cluster (call it node A) and the cube needs to be rebuilt on cluster node B, how will the cube be rebuilt on node B?

    What will orchestrate the activities required to rebuild the cube? Both Essbase nodes are on Microsoft Cluster Services.

    In essence, I want to know

    (A) How do we handle a metadata load that failed over from Essbase Node 1 to Node 2?

    (B) Does the running metadata/data load session continue on the second Essbase node when the first node fails?

    Thank you for your help in advance.

    Kind regards

    UB.

    If the failover occurs, then all connections on the active node will be lost as Essbase restarts on the second node. Treat it the same as if you had restarted the Essbase service while a metadata load was running: the load would fail at the point when Essbase goes down.

    Cheers

    John

    http://John-Goodwin.blogspot.com/

  • Apex data load configuration missing table when importing to a new workspace

    Hi - has anyone seen this issue before, and do you have a workaround?

    I export/import my application to a different workspace, and everything works fine except for one data load table.

    In my application I use data load tables, and it seems the import does not properly set up the table object used to load data. This causes my application to fail when the user tries to upload a text file.
    Does anyone have a workaround, short of recreating the data load table object?

    Breadcrumb: Shared Components -> Data Load Tables -> Add/Modify Data Load Table

    Before export, the app displays - Workspace: OOS
    Unique column 1 - CURRENCY_ID (Number)
    Unique column 2 - MONTH (Date)

    When I import the app into the new workspace (OOS_UAT), the data type is absent:
    Single column 1 - CURRENCY_ID
    Single column 2 - MONTH

    When I import the app into the same workspace (OOS) I do not see this problem.

    APEX version: Application Express 4.1.1.00.23

    Hi all

    If you are running 4.1.1, this was bug 13780604 (DATA LOAD WIZARD FAILS IF EXPORTED FROM ANOTHER WORKSPACE), which has been fixed. You can download the patch for 13780604 (support.us.oracle.com) for the associated 4.1.1 release.

    Kind regards
    Patrick

  • Newbie data-load question (sorry) - datafile / extent sizing

    Hi guys

    Sorry to disturb you - but I have done a lot of reading and am still confused.

    I was asked to create a new tablespace:

    create tablespace xyz datafile '/oradata/corpdata/xyz.dbf' size 2048M extent management local uniform size 1023M;

    alter tablespace xyz add datafile '/oradata/corpdata/xyz.dbf' size 2048M;

    Despite being worried at being given no information about the data to load or why the tablespace had to be sized that way - I was told to just 'do it'.

    Someone tried to load data - and there was a message in the alerts log.

    ORA-1652: unable to extend temp segment by 65472 in tablespace xyz

    We do not use autoextend on data files, even though the person loading the data would like us to (they are new to the environment).

    The database is on a nightly cold backup routine - we are between a rock and a hard place - we have no space on the server to use RMAN and only 10 GB left on the tape for the (Veritas) backup routine, and so we control space by not using autoextend.

    As far as I understand, the above error message means that the storage space is not large enough to hold the loaded data - but I was told by the person who imports the data that they have sized it correctly and that it is something I got wrong in the database create statement (although I cut and pasted from their instructions, adapting it to our environment - Windows 2003 SP2, but 32-bit).

    The person called to say I had messed up their data load and was about to report me to my manager for failing to do my job - and they did, and my line manager said that I had failed to correctly create the tablespace.

    When this person asked for the tablespace to be created, I asked why they thought the extents should be 1023M, and they said it was a large data load that had to fit into a single extent.

    That sounds good... but I'm confused.

    1023M is very large - it means you have only four extents in the tablespace before it reaches capacity.

    It is a GIS data load - I have not participated in the previous GIS data loads, other than monitoring and resizing tablespaces to support them - and previous people have sized it right, and I've never had any complaints. Guess I'm a bit lazy - I just did as they asked.

    However, they never used anything but 128K as an extent size before - never 1023M.

    Can I ask: is 1023M normal for large data loads - or am I right to question it? It seems excessive unless you really have just one table and one index of 1023M each.

    Thanks for any idea or other research.

    Assuming a block size of 8 KB, 65472 blocks would be 511 MB. However, as it is a GIS database, my guess is that the database block size itself has been set to 16 KB, so 65472 blocks is 1023 MB.
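The arithmetic behind that guess can be checked quickly; a small sketch using the block count from the ORA-1652 message above:

```python
def extend_request_mb(blocks, block_size_kb):
    """Size in whole MB of an extent request of `blocks` database blocks."""
    return blocks * block_size_kb // 1024

# ORA-1652 reported a failed request for 65472 blocks:
print(extend_request_mb(65472, 8))   # -> 511  (with 8 KB blocks)
print(extend_request_mb(65472, 16))  # -> 1023 (with 16 KB blocks, matching the 1023M uniform extent)
```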

    What data load is being done? An Oracle Export dump? Does it include a CREATE INDEX statement?
    Export-Import does a CREATE TABLE and INSERT, so you would not get an ORA-1652 temp-segment error when the table is created.
    However, you will get an ORA-1652 on a CREATE INDEX: the target segment (i.e. the index) for this operation is initially created as a 'temporary' segment until the index build is complete, when it switches from being a 'temporary' segment to being an 'index' segment.

    Also, if parallelism is used, each parallel slave would attempt to allocate extents of 1023 MB. Therefore, even if the final index should only have been, say, 512 MB, a CREATE INDEX with a DEGREE of 4 would begin with 4 extents of 1023 MB each - and would not shrink below that!

    An extent size of 1023 MB is, in my opinion, very bad. My guess is that they came up with an estimate of the size of the table and thought that the table should fit into 1 extent, and therefore specified 1023 MB in the script that was provided to you. And that is wrong.

    Even Oracle's AUTOALLOCATE goes only up to 64 MB extents once a segment reaches the 1 GB mark.

  • Windows Update Standalone Installer: error 0x80073afc the resource loader failed to find MUI file. Windows 7 64 bit

    I have a desktop PC running Windows 7 Professional 64-bit.  There is an update which is unable to complete, causing the PC to undo the changes from before the update.  This usually takes an extreme amount of time, and of course I keep my updates installed.  I tried the Microsoft Fixit tool, which came back with error 0x80070057.  I saw in a community forum that I could try the Microsoft SURT (System Update Readiness Tool), but after the long scan it came back with the following error:

    Windows Update Standalone Installer

    Setup has encountered an error: 0x80073afc

    The resource loader failed to find MUI file.

    Please help.

    Hello

    Thanks for posting your question on the Microsoft community.

    Thank you for details on the question and your efforts to resolve.

    This problem can occur because of corrupted Windows Update files and components.

    I suggest you to reset the Windows update components and check if that helps.
    Refer to this article:
    How to reset the Windows Update components?
    https://support.Microsoft.com/en-us/KB/971058

    Note: Serious problems can occur if you modify the registry incorrectly. Therefore, make sure that you proceed with caution. For added protection, back up the registry before you edit it. Then you can restore the registry if a problem occurs. For more information about how to back up and restore the registry, click on the number below to view the article in the Microsoft Knowledge Base:
    http://Windows.Microsoft.com/en-us/Windows/back-up-registry#1TC=Windows-7

    I hope this information helps.

    Please let us know if you need more help.

    Thank you

  • Forcing errors when loading non-leaf data into Essbase

    Hi all

    I was wondering if anyone has experience forcing data load errors when a load rule tries to push data into non-leaf members of an Essbase cube.

    I obviously have error handling at the ETL level to prevent non-leaf members from reaching the load, but I would like additional error management on top of that (so not just fixing the errors upstream).

    I'd much prefer such records to be rejected and shown in the log, rather than being silently crushed by aggregation in the background.

    Have you tried creating a security filter for the user performing the load that allows write access only at level 0 and nothing higher?

  • Data load: converting a timestamp

    I use APEX 5.0 and the data load wizard to transfer data from Excel.

    I defined Update_Date as timestamp(2), and the data in Excel looks like 2015/06/30 12:21:57.

    NLS_TIMESTAMP_FORMAT is DD/MM/RR HH12:MI:SSXFF AM

    I created the transformation rule for update_date as a PL/SQL function body, as below:

    declare
        l_input_from_csv varchar2(25) := to_char(:update_date, 'MM/DD/YYYY HH12:MI:SS AM');
        l_date timestamp;
    begin
        l_date := to_timestamp(l_input_from_csv, 'MM/DD/YYYY HH12:MI:SS AM');
        return l_date;
    end;

    I keep getting a transformation rule error.  If I don't create a transformation rule, it complains about an invalid month.
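One likely cause, given the sample value above: the Excel data is year-first (2015/06/30 12:21:57), but the rule parses it with a month-first MM/DD/YYYY mask, so the parser sees "2015" where a month should be - hence "invalid month". This is a guess, not confirmed by the thread; a quick illustration with Python's strptime, whose format codes mirror the Oracle masks:

```python
from datetime import datetime

raw = "2015/06/30 12:21:57"   # the value as it arrives from Excel

# The rule's assumption, equivalent to Oracle 'MM/DD/YYYY HH24:MI:SS' - fails:
try:
    datetime.strptime(raw, "%m/%d/%Y %H:%M:%S")
except ValueError as e:
    print("rejected:", e)     # analogous to Oracle's 'invalid month'

# A year-first mask, equivalent to Oracle 'YYYY/MM/DD HH24:MI:SS' - works:
print(datetime.strptime(raw, "%Y/%m/%d %H:%M:%S"))
```

If that is the cause, the PL/SQL fix is probably just to_timestamp on the raw value with a 'YYYY/MM/DD HH24:MI:SS' mask, rather than converting through to_char with a month-first mask.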

    Please help me to fix this.

    Thank you very much in advance!

    Hi DorothySG,

    DorothySG wrote:

    Please test the data loading on the demo application.

    I pasted a few examples into the page instructions; you can use the copy and paste function, and it will give the same result.

    Please change to a comma separator ',' - anything else will give the message "do not load".

    Check your application 21919. I changed the transformation rule:

    Data is now loading properly.

    Kind regards

    Kiran

  • Move rejected records to a table during a data load

    Hi all

    When I run my interfaces I sometimes get errors caused by invalid records. I mean, the contents of some fields are not valid.

    I would then like to move these invalid records to another table while the data load carries on, so as not to interrupt the loading of data.

    How can I do this? Are there any examples I could follow?

    Thanks in advance

    Regards

    Hi Alvaro,

    Here you can find different ways to achieve this goal and choose according to your requirement:

    https://community.Oracle.com/thread/3764279?SR=Inbox

  • Data Load Wizard

    In Oracle APEX, is there any feasible way to change the default functionality?

    1. Can we convert the Data Load Wizard from insert-only to insert/update functionality based on the source table?

    2. Possibility of a validation - if 'count of records < target table' is true, the user should get a choice to continue with the insert or cancel the data load process.

    I use APEX 5.0

    Please advise on these 2 points.

    Hi Sudhir,

    I'll answer your questions below:

    (1) Yes, the data load can insert/update records.

    That is the default behavior: if you choose the right columns for detecting duplicate records, you will be able to see which records are new and which are updates.

    (2) It will be a little tricky, but you can get there by using the underlying collections. The data load uses several collections to perform its operations; in the first step, all of the user's records are loaded into the collection "CLOB_CONTENT". By checking this against the number of records in the underlying table, you can easily add a new validation between step 1 and step 2.

    Kind regards

    Patrick

  • The data load has run in ODI but some interfaces still show as running in the Operator tab

    Hi Experts,

    I'm working on the olive TREE customization; we run the incremental load every day. The data load completes successfully, but the status icons in the Operator tab show some interfaces as still running.

    Could you please explain the reason why the interfaces still show as running in the Operator tab? Thanks in advance - your valuable suggestions are much appreciated.

    Kind regards

    REDA

    This is what we call a stale session, and it can be removed by restarting the agent.

    You can also manually clean up expired sessions in the Operator.

  • ASO - ignoring zeros and missing values in data loads

    Hello

    There is an option to ignore zero values & missing values in the dialog box when loading data into an ASO cube interactively via EAS.

    Is there an option to specify the same in the MaxL import data command? I couldn't find one in the technical reference.

    I have 12 months in the columns of the data feed. At least 1/4 of my data is zeros. Ignoring zeros keeps the cube small and fast.

    We are on 11.1.2.2.

    Appreciate your thoughts.

    Thank you

    Ethan.

    The thing is that it's hidden in the Alter Database (Aggregate Storage) command, used when you create the data load buffer.  If you are not sure what a data load buffer is, see Loading Data Using Buffers.
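For reference, a sketch of what that looks like in MaxL (untested here; the app/db name, buffer id, and file path are placeholders, and the exact clause order should be checked against the MaxL reference for your release):

```
alter database ASOsamp.Sample initialize load_buffer
    with buffer_id 1
    property ignore_zero_values, ignore_missing_values;

import database ASOsamp.Sample data
    from data_file '/data/monthly.txt'
    to load_buffer with buffer_id 1
    on error abort;

import database ASOsamp.Sample data from load_buffer with buffer_id 1;
```

The zero/missing handling is thus a property of the buffer, set at initialization, rather than an option of the import statement itself - which is why it does not appear under the import data command in the technical reference.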

  • Load plan execution failed

    Hi experts BI Apps,

    I ran a load plan on Oracle Procurement and Spend Analytics, and the load failed at the SDE_ORAR1212_ADAPTOR_SDE_ORA_UOMCONVERSIONGENERAL_INTERCLASS session:

    ODI-1519: Serial step "Start Load Plan (InternalID:2278500)" failed because child step "Global Variable Refresh (InternalID:2279500)" is in error.
    ODI-1519: Serial step "Global Variable Refresh (InternalID:2279500)" failed because child step "Source Extract Phase (InternalID:2282500)" is in error.
    ODI-1519: Serial step "Source Extract Phase (InternalID:2282500)" failed because child step "1 General (InternalID:2283500)" is in error.
    ODI-1519: Serial step "1 General (InternalID:2283500)" failed because child step "2 General SDE (InternalID:2326500)" is in error.
    ODI-1519: Serial step "2 General SDE (InternalID:2326500)" failed because child step "Parallel (InternalID:2412500)" is in error.
    ODI-1518: Parallel step "Parallel (InternalID:2412500)" failed; 1 child step(s) in error, which is more than the maximum number of allowed errors (0) defined for the parallel step. Failed child steps: 3 General SDE UOM (InternalID:2432500)
    ODI-1519: Serial step "3 General SDE UOM (InternalID:2432500)" failed because child step "Load Target Table (InternalID:2435500)" is in error.
    ODI-1519: Serial step "Load Target Table (InternalID:2435500)" failed because child step "EBS_12_1_2 - DSN 3 (InternalID:2436500)" is in error.
    ODI-1519: Serial step "EBS_12_1_2 - DSN 3 (InternalID:2436500)" failed because child step "UOM_DIM (InternalID:2437500)" is in error.
    ODI-1519: Serial step "UOM_DIM (InternalID:2437500)" failed because child step "SDE_ORA_UOMCONVERSIONGENERAL_INTERCLASS (InternalID:2438500)" is in error.
    ODI-1217: Session SDE_ORAR1212_ADAPTOR_SDE_ORA_UOMCONVERSIONGENERAL_INTERCLASS (667500) fails with return code 1031.
    ODI-1226: Step SDE_ORA_UOMConversionGeneral_InterClass.W_UOM_CONVERSION_GS fails after 1 attempt(s).
    ODI-1240: Flow SDE_ORA_UOMConversionGeneral_InterClass.W_UOM_CONVERSION_GS fails while performing a load operation. This flow loads target table W_UOM_CONVERSION_GS.
    ODI-1227: Task SrcSet0 (Loading) fails on the source ORACLE connection ON_EBS1212.
    Caused by: java.sql.SQLSyntaxErrorException: ORA-01031: insufficient privileges

    I would be grateful if someone could give me some clues about this.

    Thank you.

    You don't have enough privileges for the user you are using to run the load. Check with the DB team whether that user has been granted the right privileges or not.
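A couple of hedged starting points for that check, using standard Oracle dictionary views (the schema and table names in the GRANT are illustrative placeholders, not taken from the thread):

```sql
-- What system privileges does the current session actually have?
SELECT * FROM session_privs;

-- Which object grants does the load user hold?
SELECT owner, table_name, privilege
FROM   user_tab_privs;

-- Typical remedy, run as the object owner or a DBA
-- (schema/table names are hypothetical examples):
GRANT SELECT ON some_schema.some_source_table TO odi_load_user;
```

Run the first two queries as the connection user configured for ON_EBS1212; whatever object the failing SDE mapping reads but does not appear in the grants is the one to ask the DB team about.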
