Why does the size of my document double when I save it as a doc file?

Hi guys, I'm puzzled.

I have a Pages doc file which is 13.3 MB (the right size for the Smashwords upload).

Edit: 2 photos did not have "in line with text" set.

I re-saved this document into a new empty file.

Now it's 26 MB (too big to upload).

It is a mystery to me. What made the file bigger? There are no additional fonts or images.

Any help will make my day and give me hope for my title.

And heal my heart. This has caused me so much frustration.

Thanks in advance.

Open the Word document that you exported from Pages in LibreOffice Writer or MS Word, and then re-save it. The file size will be reduced by a wide margin.

The terribly complex and proprietary internal document format that Pages now uses causes bloated Word exports when it is translated on export.

I can create a new Pages v5.6.2 document from a blank template, type the letter 'A' in it, and then export it to Word (.docx). The file size will be approximately 494 KB. That's true bloat.

Tags: iWork

Similar Questions

  • Why does my InDesign pages palette show only the document pages and no master pages?

    Normally, when you open the pages palette in InDesign, it is divided into two. The upper part is where the master pages are (and a page named None), while the lower part is where the actual document pages are. Unfortunately, when I opened my palette (InDesign CS6 on a Mac), there is no upper and lower part - only the lower part is displayed. When I put my cursor on my pages, it tells me that "A-Master" is applied.

    Why doesn't my top section show (A-Master and None)? The pages also don't act like master pages - for example, I can't get page numbering to work.


    Any help is appreciated. Thank you!

    It looks like the upper part may be in a collapsed state - the divider line has been pulled all the way up.

    Just place the cursor on the line that separates the upper and lower sections, and drag it down.

  • Why does my document change color when I save it?

    Screen Shot 2015-06-10 at 11.06.18.png

    For some reason my document changes color when I save it as a .pdf or .ai file.

    I checked that the color mode is CMYK.

    I also went into the swatch options and changed all Pantone colors used in the document from 'Book' to 'CMYK', but I still get this problem.

    Any suggestions?

    Also, make sure that Profile Inclusion is set to "Include All Profiles", as indicated in Mike's screenshot. Not all presets include the profile. For example, saving a PDF with the Illustrator Default preset does not include profiles. It has always intrigued me as to why.

  • Why is the time display set to 19 when I run the attached VI?

    Why is the time display set to 19 when I run the attached VI?

    Thank you.

    The Seconds To Date/Time function returned 12/31/1903 because the UTC input is FALSE. Set that Boolean to TRUE (UTC).

  • I use the download helper (ver. 2.2); when I save a WWII video, during playback I get equally spaced dots across the screen and flashing

    I use the download helper (ver. 2.2).
    When I save a WWII video to my Windows 7 desktop, during playback of the video I get dots at equal distances across the screen and a flashing arrow as well. What is the problem with my download?

    [email protected]

    That may be caused by a problem with the codec used by this video, which is not handled correctly.

    Can you open this video in another video player to see if it works there?

    What kind of video is it?

  • Why does jpg export not keep the size of my document?

    This is a minor annoyance I noticed once I upgraded to CS6.

    It's the right size of my InDesign document...

    Indesign.jpg

    So why do the dimensions change by a fraction of an inch when I export my InDesign document as a jpg?

    Photoshop.jpg

    (from Photoshop)

    Well, you can't have a fraction of a pixel.

    Like 6685.2 pixels.

    So it has to work around that somehow.

    What if you start the InDesign document using the Web intent instead of Print in File > New Document,

    and then open it in Photoshop?

  • My InDesign document prints at a different size when I export it to PDF

    I made a document in InDesign which is 7" x 9.25", and it prints at the correct size when I print from InDesign. But when I export to PDF and print it from Acrobat DC, it prints larger - it prints at 8.5 x 11. When I cut along the outline of the document, the InDesign print is the exact size I want, but the PDF, even after I cut along the outline, is too big. Is there a setting that I need to use in InDesign before exporting to PDF so that it prints at the exact size that I created it at in InDesign?

    Thanks for your help!

    Angie

    In Acrobat, place your cursor in the lower left corner of the screen and it will show you what size your document is. If the document size is correct, then I guess that when you print from Acrobat you have 'Fit' selected under Page Sizing & Handling in the print dialog box. Make sure 'Actual Size' is chosen, and it should print at the right size.

  • Why did file sizes increase so much from CS5 to CC?

    I have created a lot of logo EPS files and noticed that the file sizes were much larger than usual. Comparing files I created in CS5 vs CC, the CC versions are much larger. I understand that file size 'creep' is normal when moving from one application generation to the next, but this increase seems a bit excessive. Is there a preference I'm missing that would get the file size down to something more normal?

    I did a test comparing a file made with Illustrator CS5 with a file made with Illustrator CC 2015 (v9.2.1). I created a blank letter-format document and did not even add any artwork, and this was the result:

    AI file

    CS5 = 57 KB

    CC = 711 KB (an increase of 12.5 x)

    EPS file

    CS5 = 241 KB

    CC = 1.7 MB (a 7 x increase)

    Because these files are empty, most of the options that appear when saving do not really affect the file size, even though I do NOT embed ICC profiles and ALWAYS use compression.

    The funny thing is, if I open the CS5 AI file and then re-save it as a CC AI file, the file stays small at 58 KB. If I open the CS5 EPS file and re-save it as a CC EPS file, the file size only goes up to 828 KB, which is still much smaller than the EPS created in CC.

    The problem I have is that I create logo suites that are composed of many different logo files. I never had a problem with file size before, but some of the files, even zipped, are too big to e-mail. Especially since some of my clients have a 10 MB limit on email attachments, and some of their firewalls prevent me from using services like Dropbox.

    Did you run the Delete Unused Panel Items action? In fact, I modified this action to delete all of the

    unused:

    brushes, graphic styles, swatches, symbols

    and of course to turn 'Create PDF Compatible File' on and off.

    If you want to send a file (use any file-sharing service such as Dropbox or Hightail), we could take a look at it.

  • Why does the size of the brush and other tools change with zoom?

    Why does the Brush tool change size when zooming in and out? It is so annoying. Is there a way to fix this? BTW, I use the CS4 version. Does it happen with the other tools too? From what I tried it seems not, but I can't tell if there is as big a difference in size as with the Brush tool. Thanks for any help!

    Found this thread while searching for an answer to this question and figured out how to fix it. Granted, this thread is 5 years old, so there is probably no reason to offer this, but just in case Flash is not dead and someone is still using it, here it is.

  • How do I reduce the size of an InDesign document's PDF export?

    I wonder if there is a way for InDesign to reduce the page size of a multi-page document?

    I have a 38-page book that has a page size of 9" x 8"... so the spread size is 9" x 16".

    I want to do this in order to reduce the size of the exported PDF file.

    If there is another way to reduce the size of the PDF, that would work too.

    I looked everywhere but could not find a function that lets you reformat and save this document at a 50 percent reduction, for example.

    Is there such a feature in InDesign CS1? (Or in one of the more recent CS versions?)

    Currently the smallest PDF I can export from this document is 8 MB. (It contains a lot of graphics.)

    I want to get that figure down to around 3 MB.

    I have already reduced all the images to 72 dpi... which helped bring the PDF down from the original 58 MB.

    But how is it possible to make it even smaller for emailing?

    Thanks for your suggestions.

    And in the process you'll end up with a flattened PDF. If you send it to a client for proofing, be prepared to explain that those ugly white lines running in all directions will not print. And when they print it on a desktop printer and some of the lines do print, you'll have a real credibility problem on your hands.

    Just a warning.

    Bob

  • Why is the size of the system tablespace limited to 16383 MB?

    Hello
    I am using 10.2.0.3.0 on Linux 64-bit.

    I want to know why the SYSAUX tablespace data file size is limited to 16383 MB, as stated in Enterprise Manager. I am using Linux 64-bit.

    SYSAUX tablespace

    File size (KB): 716800
    AutoExtend: Yes
    Increment: 10240 KB
    Maximum file size: 16383 MB

    A data file can have up to 4M (2^22) blocks (http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/limits002.htm#i287915), so do the math:

    4194304 * 4096 = 17179869184 bytes = 16384 MB (4096 = database block size); since the actual maximum is 2^22 - 1 = 4194303 blocks, this comes to just under 16384 MB, which Enterprise Manager shows as 16383 MB.

    HTH

    Enrique
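
    As a quick sanity check (a sketch only, not from the original answer), the same arithmetic can be run from SQL*Plus against the instance's own block size; the 2^22 - 1 block limit per smallfile data file is taken from the limits reference linked above:

    -- maximum smallfile data file size = (2^22 - 1) blocks * db_block_size, reported in MB
    SELECT (4194303 * TO_NUMBER(value)) / 1024 / 1024 AS max_datafile_mb
      FROM v$parameter
     WHERE name = 'db_block_size';

    With a 4 KB block size this returns just under 16384 MB (shown as 16383 MB), and with an 8 KB block size it works out to 32767 MB, so the limit scales with the block size rather than being a fixed number.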

  • Why has my minimized browser window become so small that it is unusable?

    I clicked on something in Google and all of a sudden received warnings about a virus attack. It seems to have been blocked, because everything looks OK, except that my browser window, when not maximized, is now tiny and completely unusable. Can I reset the size of the minimized browser window, and if so, how?

    See:

  • Why does the DR unit not fire the schema trigger when it is called remotely?

    Hi all

    I have a question about Oracle schema triggers, and I would be grateful if you could kindly give me a helping hand.

    Oracle version: 11gR2 (11g Enterprise Edition Release 11.2.0.1.0 - 64 bit)

    OS: Linux Fedora Core 17 (x86_64)

    I was reading the online documentation on schema triggers, where Oracle says:

    Assume that users user1 and user2 own schema triggers, and user1 invokes a DR unit owned by user2. Inside the DR unit, user2 is the current user. Therefore, if the DR unit fires the triggering event of a schema trigger that user2 owns, then that trigger fires.

    I wanted to see this behavior in practice, so I put together the following test case:

    There are two schemas:

    • testuser, where I create a procedure with AUTHID DEFINER (therefore a DR unit) named createTab. This procedure takes a table name as a parameter and, if no table with this name already exists in the testuser schema, it creates a new table with that name with a single column of type NUMBER (well, it's just an example for this question; in practice I never create my tables this way).

    • training, another schema, to which we grant the EXECUTE privilege on the above-mentioned createTab procedure so that it can create tables in the testuser schema by calling the procedure remotely.

    The idea behind the test is to create a schema trigger for testuser, so that whenever there is, for example, a table creation, a message is inserted into a log table (just as proof that the schema trigger fired on the table creation event). Now, given that I grant the EXECUTE privilege on the createTab procedure to the training schema, any remote table creation should fire the schema trigger, because according to the documentation, inside the DR unit the current user is not the invoking user (= training) but the owner (= testuser) who created the trigger and the procedure.

    The problem is that I cannot see this in my test. So I will write my test case here so that you can have a look at it and point out what I did wrong, and what I misunderstood in the documentation.

    So here is what I created in the testuser schema:

    Code

    SET SQLBLANKLINES ON

    ALTER SESSION SET PLSQL_WARNINGS = 'ENABLE:ALL';

    SET SERVEROUTPUT ON;

    -- A log table into which the schema trigger inserts messages
    -- indicating that the schema trigger was fired (as proof)

    CREATE TABLE tablog (logMsg VARCHAR2 (100));

    -- Here is the procedure that updates the log table defined above (tablog)

    -- This procedure (autonomous transaction) is called by the schema trigger

    CREATE OR REPLACE PROCEDURE updateLog (p_logMsg IN tablog.logMsg%TYPE)

    AUTHID DEFINER

    IS

    PRAGMA AUTONOMOUS_TRANSACTION;

    BEGIN

    INSERT INTO tablog (logMsg) VALUES (p_logMsg);

    COMMIT;

    END updateLog;

    /

    SHOW ERRORS;

    -- This is the procedure we use to create tables (it will be called
    -- remotely from another schema -> training)

    -- As stated above, the procedure takes a table name as a parameter
    -- and creates a table with a single column of type NUMBER,
    -- provided that no table with this name already exists

    CREATE OR REPLACE PROCEDURE createTab

    (

    p_tabName IN user_tables.table_name%TYPE

    )

    AUTHID DEFINER -- a DR unit, so we explicitly specify AUTHID DEFINER

    IS

    BEGIN

    <<bk>>

    DECLARE

    tabName user_tables.table_name%TYPE;

    BEGIN

    -- Check to see if a table with the name p_tabName
    -- already exists

    SELECT t1.table_name INTO bk.tabName

    FROM user_tables t1

    WHERE t1.table_name = upper (p_tabName);

    EXCEPTION

    -- No table with this name exists, so we create it now

    WHEN NO_DATA_FOUND THEN

    EXECUTE IMMEDIATE 'CREATE TABLE ' ||
    p_tabName || ' (n NUMBER)';

    END;

    END createTab;

    /

    SHOW ERRORS;

    -- And finally, here is the schema trigger for the 'testuser' schema.

    -- Any call of the above createTab procedure (if the procedure
    -- creates a new table) fires the following trigger

    CREATE OR REPLACE TRIGGER testuser_schema_tr

    BEFORE CREATE ON testuser.SCHEMA

    BEGIN

    -- Just insert a message into the log table as evidence
    -- that our schema trigger caught the CREATE TABLE
    -- statements

    updateLog

    (

    TO_CHAR (sysdate, 'MON-DD-YYYY HH24:MI:SS') ||
    ': testuser schema trigger fired.'

    );

    END testuser_schema_tr;

    /

    SHOW ERRORS;

    -- Grant the privileges required so that the training user/schema
    -- can also run my procedure remotely

    GRANT EXECUTE ON createTab to training;

    GRANT SELECT ON tablog to training;

    First, I tested the createTab procedure locally (that is, connected as testuser, in other words the owner of the procedure and the trigger). Everything worked well: tables were created, and the log table was updated by the trigger, which showed that after each CREATE TABLE statement the trigger had indeed fired.

    However, when I opened a new SQL*Plus session, this time connected as the training schema, I observed that, once again, it was possible to create tables in the testuser schema remotely, but the log table was no longer updated, which means that the trigger did not catch the CREATE TABLE statements issued remotely (via the remote call to the createTab procedure).

    Code

    SQL> EXECUTE testuser.createTab('tmptab');

    PL/SQL procedure successfully completed.

    SQL> SELECT * FROM testuser.tablog;

    no rows selected

    SQL> SHOW USER

    USER is "TRAINING"

    SQL>

    Any idea? Why does the DR unit (the createTab procedure) not fire the schema trigger when it is called remotely, contrary to what the documentation says?

    Thanks in advance,

    Dariyoosh
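
    A small diagnostic sketch (not part of the original test case) that could help narrow this down: log which user Oracle considers current inside the DR unit, for example by adding the following lines to createTab just before the EXECUTE IMMEDIATE, and running both sessions with SET SERVEROUTPUT ON:

    -- Hypothetical diagnostic: show who Oracle treats as the session / current user
    -- at the moment the DR unit runs (per the documentation quoted above, the
    -- expectation is current_user = TESTUSER even when called from TRAINING)
    DBMS_OUTPUT.PUT_LINE(
        'session_user='     || SYS_CONTEXT ('USERENV', 'SESSION_USER')   ||
        ', current_user='   || SYS_CONTEXT ('USERENV', 'CURRENT_USER')   ||
        ', current_schema=' || SYS_CONTEXT ('USERENV', 'CURRENT_SCHEMA'));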

    It works for me on Oracle 11.2.0.3

    August 21, 2013 18:10:12: testuser schema trigger fired

    But not on 11.2.0.1

    It looks like a bug.

  • Why does the horizontal scroll bar disappear when the window is maximized, but reappear when it is not maximized?

    I'm a fool with computers. I can fix my car, but I can't fix my computer. If there is no easy solution, I'll pay someone to do it for me. So, is there an easy way to get my horizontal scroll bar back? I've updated to the latest Firefox (35). I restored my computer to an earlier date when everything was OK. I downloaded another browser (Chrome) to see if that would fix it. Nothing has worked. The system I use is Windows 7 64-bit. I have tried not to do anything drastic, because that has led to big problems in the past. Suggestions, or do I need to take the machine to my fix-it person?

    Thanks for any assistance you might be able to provide.

    Bingo! You are right, cor-el. Thank you.

    I never noticed this before. Sometimes (when I'm too lazy to get up and get my glasses, or can't find them) I just increase the size of the window.

    To me it seemed that the scroll bar was there one moment and gone the next. Hours lost, but a lesson learned. I did mention that I am a fool with the computer, didn't I?

    Thank you!!!

  • Why does the optimizer ignore an Index Fast Full Scan when it has a much lower cost?

    Summary (full details below) - to improve the performance of a query over several tables, I created an index on one table that included all the columns referenced in the query. With the new index in place the optimizer is still choosing a Full Table Scan over an Index Fast Full Scan. However, by removing one of the query's tables I reach the point where the optimizer suddenly uses the Index Fast Full Scan on that table. And 'yes', it's a lot cheaper than the Full Table Scan it used before. By building a test case, I was able to get the query down to 4 tables with the optimizer still ignoring the index, and at 3 tables it will use the index.

    So why does the optimizer not choose the Index Fast Full Scan, when it is obviously so much cheaper than a Full Table Scan? And why does removing one table change what the optimizer does - I don't think there is a problem with the number of join permutations (see below). The query is as simple as I can make it while remaining true to the original SQL, and it still shows this reversal in the choice of access path. I can run the queries one after another, and it always uses a Full Table Scan for the original query and an Index Fast Full Scan for the query modified to have one table less.

    Looking at the 10053 trace output for the two queries, I can see that for the original 4-table query the SINGLE TABLE ACCESS PATH section costs only a Full Table Scan for this table. But for the modified query with one table less, the table now also gets a cost for an Index Fast Full Scan. And the join costing at the end of the 10053 trace does not end with a message about exceeding the maximum number of permutations. So why does the optimizer not cost the IFFS for the first query, when it does for the second, nearly identical query?
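
    For reference, a 10053 trace like the ones quoted below is normally captured per session with the standard event syntax (shown here as a sketch; the trace file appears under user_dump_dest):

    ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
    -- run (hard parse) the query of interest, then switch the trace off:
    ALTER SESSION SET EVENTS '10053 trace name context off';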

    This is potentially a problem to do with OUTER joins, but why? The joins between the tables do not change when the single extra table is removed.

    It's on 10.2.0.5 on Linux (Oracle Enterprise Linux). I have not changed any special settings that I know of. I see the same behavior on 10.2.0.4 32-bit on Windows (XP).

    Thank you
    John
    Database Performance blog

    DETAILS
    I've reproduced the entire scenario via SQL scripts that create and populate the tables, against which I can then run the queries. I've deliberately padded the tables so that the average row length of the generated data is similar to that of the actual data. In this way the statistics on the number of blocks and so forth should be similar.

    System - uname -a
    Linux mysystem.localdomain 2.6.32-300.25.1.el5uek #1 SMP Tue May 15 19:55:52 EDT 2012 i686 i686 i386 GNU/Linux
    Database - v$version
    Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - Prod
    PL/SQL Release 10.2.0.5.0 - Production
    CORE    10.2.0.5.0      Production
    TNS for Linux: Version 10.2.0.5.0 - Production
    NLSRTL Version 10.2.0.5.0 - Production
    Original query (full table details below):
    SELECT 
        episode.episode_id , episode.cross_ref_id , episode.date_required , 
        product.number_required , 
        request.site_id 
    FROM episode 
    LEFT JOIN REQUEST on episode.cross_ref_id = request.cross_ref_id 
         JOIN product ON episode.episode_id = product.episode_id 
    LEFT JOIN product_sub_type ON product.prod_sub_type_id = product_sub_type.prod_sub_type_id 
    WHERE (
            episode.department_id = 2
        and product.status = 'I'
          ) 
    ORDER BY episode.date_required
    ;
    Execution plan from display_cursor after execution:
    SQL_ID  5ckbvabcmqzw7, child number 0
    -------------------------------------
    SELECT     episode.episode_id , episode.cross_ref_id , episode.date_required ,
    product.number_required ,     request.site_id FROM episode LEFT JOIN REQUEST on
    episode.cross_ref_id = request.cross_ref_id      JOIN product ON episode.episode_id =
    product.episode_id LEFT JOIN product_sub_type ON product.prod_sub_type_id =
    product_sub_type.prod_sub_type_id WHERE (         episode.department_id = 2 and
    product.status = 'I'       ) ORDER BY episode.date_required
    
    Plan hash value: 3976293091
    
    -----------------------------------------------------------------------------------------------------
    | Id  | Operation             | Name                | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    -----------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT      |                     |       |       |       | 35357 (100)|          |
    |   1 |  SORT ORDER BY        |                     | 33333 |  1920K|  2232K| 35357   (1)| 00:07:05 |
    |   2 |   NESTED LOOPS OUTER  |                     | 33333 |  1920K|       | 34879   (1)| 00:06:59 |
    |*  3 |    HASH JOIN OUTER    |                     | 33333 |  1822K|  1728K| 34878   (1)| 00:06:59 |
    |*  4 |     HASH JOIN         |                     | 33333 |  1334K|       |   894   (1)| 00:00:11 |
    |*  5 |      TABLE ACCESS FULL| PRODUCT             | 33333 |   423K|       |   103   (1)| 00:00:02 |
    |*  6 |      TABLE ACCESS FULL| EPISODE             |   299K|  8198K|       |   788   (1)| 00:00:10 |
    |   7 |     TABLE ACCESS FULL | REQUEST             |  3989K|    57M|       | 28772   (1)| 00:05:46 |
    |*  8 |    INDEX UNIQUE SCAN  | PK_PRODUCT_SUB_TYPE |     1 |     3 |       |  0   (0)|          |
    -----------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       3 - access("EPISODE"."CROSS_REF_ID"="REQUEST"."CROSS_REF_ID")
       4 - access("EPISODE"."EPISODE_ID"="PRODUCT"."EPISODE_ID")
       5 - filter("PRODUCT"."STATUS"='I')
       6 - filter("EPISODE"."DEPARTMENT_ID"=2)
       8 - access("PRODUCT"."PROD_SUB_TYPE_ID"="PRODUCT_SUB_TYPE"."PROD_SUB_TYPE_ID")
    Modified query:
    SELECT 
        episode.episode_id , episode.cross_ref_id , episode.date_required , 
        product.number_required , 
        request.site_id 
    FROM episode 
    LEFT JOIN REQUEST on episode.cross_ref_id = request.cross_ref_id 
         JOIN product ON episode.episode_id = product.episode_id 
    WHERE (
            episode.department_id = 2
        and product.status = 'I'
          ) 
    ORDER BY episode.date_required
    ;
    Execution plan from display_cursor after execution:
    SQL_ID  gbs74rgupupxz, child number 0
    -------------------------------------
    SELECT     episode.episode_id , episode.cross_ref_id , episode.date_required ,
    product.number_required ,     request.site_id FROM episode LEFT JOIN REQUEST on
    episode.cross_ref_id = request.cross_ref_id      JOIN product ON episode.episode_id =
    product.episode_id WHERE (         episode.department_id = 2     and product.status =
    'I'       ) ORDER BY episode.date_required
    
    Plan hash value: 4250628916
    
    ----------------------------------------------------------------------------------------------
    | Id  | Operation              | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    ----------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT       |             |       |       |       | 10515 (100)|          |
    |   1 |  SORT ORDER BY         |             | 33333 |  1725K|  2112K| 10515   (1)| 00:02:07 |
    |*  2 |   HASH JOIN OUTER      |             | 33333 |  1725K|  1632K| 10077   (1)| 00:02:01 |
    |*  3 |    HASH JOIN           |             | 33333 |  1236K|       |   894   (1)| 00:00:11 |
    |*  4 |     TABLE ACCESS FULL  | PRODUCT     | 33333 |   325K|       |   103   (1)| 00:00:02 |
    |*  5 |     TABLE ACCESS FULL  | EPISODE     |   299K|  8198K|       |   788   (1)| 00:00:10 |
    |   6 |    INDEX FAST FULL SCAN| IX4_REQUEST |  3989K|    57M|       |  3976   (1)| 00:00:48 |
    ----------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       2 - access("EPISODE"."CROSS_REF_ID"="REQUEST"."CROSS_REF_ID")
       3 - access("EPISODE"."EPISODE_ID"="PRODUCT"."EPISODE_ID")
       4 - filter("PRODUCT"."STATUS"='I')
       5 - filter("EPISODE"."DEPARTMENT_ID"=2)
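
    For completeness, plans like the two shown above are typically pulled with DBMS_XPLAN.DISPLAY_CURSOR immediately after running each query (a sketch; the NULL, NULL arguments mean the last statement run in this session, which is only the query itself if SERVEROUTPUT is off):

    SET SERVEROUTPUT OFF
    -- run the query of interest, then:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'TYPICAL'));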
    Table creation and population:
    1. create tables
    2. load data
    3. create indexes
    4. gather statistics
    --
    -- Main table
    --
    create table episode (
    episode_id number (*,0),
    department_id number (*,0),
    date_required date,
    cross_ref_id varchar2 (11),
    padding varchar2 (80),
    constraint pk_episode primary key (episode_id)
    ) ;
    --
    -- Product tables
    --
    create table product_type (
    prod_type_id number (*,0),
    code varchar2 (10),
    binary_field number (*,0),
    padding varchar2 (80),
    constraint pk_product_type primary key (prod_type_id)
    ) ;
    --
    create table product_sub_type (
    prod_sub_type_id number (*,0),
    sub_type_name varchar2 (20),
    units varchar2 (20),
    padding varchar2 (80),
    constraint pk_product_sub_type primary key (prod_sub_type_id)
    ) ;
    --
    create table product (
    product_id number (*,0),
    prod_type_id number (*,0),
    prod_sub_type_id number (*,0),
    episode_id number (*,0),
    status varchar2 (1),
    number_required number (*,0),
    padding varchar2 (80),
    constraint pk_product primary key (product_id),
    constraint nn_product_episode check (episode_id is not null) 
    ) ;
    alter table product add constraint fk_product 
    foreign key (episode_id) references episode (episode_id) ;
    alter table product add constraint fk_product_type 
    foreign key (prod_type_id) references product_type (prod_type_id) ;
    alter table product add constraint fk_prod_sub_type
    foreign key (prod_sub_type_id) references product_sub_type (prod_sub_type_id) ;
    --
    -- Requests
    --
    create table request (
    request_id number (*,0),
    department_id number (*,0),
    site_id number (*,0),
    cross_ref_id varchar2 (11),
    padding varchar2 (80),
    padding2 varchar2 (80),
    constraint pk_request primary key (request_id),
    constraint nn_request_department check (department_id is not null),
    constraint nn_request_site_id check (site_id is not null)
    ) ;
    --
    -- Activity & Users
    --
    create table activity (
    activity_id number (*,0),
    user_id number (*,0),
    episode_id number (*,0),
    request_id number (*,0), -- always NULL!
    padding varchar2 (80),
    constraint pk_activity primary key (activity_id)
    ) ;
    alter table activity add constraint fk_activity_episode
    foreign key (episode_id) references episode (episode_id) ;
    alter table activity add constraint fk_activity_request
    foreign key (request_id) references request (request_id) ;
    --
    create table app_users (
    user_id number (*,0),
    user_name varchar2 (20),
    start_date date,
    padding varchar2 (80),
    constraint pk_users primary key (user_id)
    ) ;
    
    prompt Loading episode ...
    --
    insert into episode
    with generator as 
    (select rownum r
              from (select rownum r from dual connect by rownum <= 1000) a,
                   (select rownum r from dual connect by rownum <= 1000) b,
                   (select rownum r from dual connect by rownum <= 1000) c
             where rownum <= 1000000
           ) 
    select r, 2,
        sysdate + mod (r, 14),
        to_char (r, '0000000000'),
        'ABCDEFGHIJKLMNOPQRSTUVWXYZ' || to_char (r, '000000')
      from generator g
    where g.r <= 300000
    /
    commit ;
    --
    prompt Loading product_type ...
    --
    insert into product_type
    with generator as 
    (select rownum r
              from (select rownum r from dual connect by rownum <= 1000) a,
                   (select rownum r from dual connect by rownum <= 1000) b,
                   (select rownum r from dual connect by rownum <= 1000) c
             where rownum <= 1000000
           ) 
    select r, 
           to_char (r, '000000000'),
           mod (r, 2),
           'ABCDEFGHIJKLMNOPQRST' || to_char (r, '000000')
      from generator g
    where g.r <= 12
    /
    commit ;
    --
    prompt Loading product_sub_type ...
    --
    insert into product_sub_type
    with generator as 
    (select rownum r
              from (select rownum r from dual connect by rownum <= 1000) a,
                   (select rownum r from dual connect by rownum <= 1000) b,
                   (select rownum r from dual connect by rownum <= 1000) c
             where rownum <= 1000000
           ) 
    select r, 
           to_char (r, '000000'),
           to_char (mod (r, 3), '000000'),
           'ABCDE' || to_char (r, '000000')
      from generator g
    where g.r <= 15
    /
    commit ;
    --
    prompt Loading product ...
    --
    -- product_id prod_type_id prod_sub_type_id episode_id padding 
    insert into product
    with generator as 
    (select rownum r
              from (select rownum r from dual connect by rownum <= 1000) a,
                   (select rownum r from dual connect by rownum <= 1000) b,
                   (select rownum r from dual connect by rownum <= 1000) c
             where rownum <= 1000000
           ) 
    select r, mod (r, 12) + 1, mod (r, 15) + 1, mod (r, 300000) + 1,
           decode (mod (r, 3), 0, 'I', 1, 'C', 2, 'X', 'U'),
           dbms_random.value (1, 100), NULL
      from generator g
    where g.r <= 100000
    /
    commit ;
    --
    prompt Loading request ...
    --
    -- request_id department_id site_id cross_ref_id varchar2 (11) padding 
    insert into request
    with generator as 
    (select rownum r
              from (select rownum r from dual connect by rownum <= 1000) a,
                   (select rownum r from dual connect by rownum <= 1000) b,
                   (select rownum r from dual connect by rownum <= 1000) c
             where rownum <= 10000000
           ) 
    select r, mod (r, 4) + 1, 1, to_char (r, '0000000000'),
    'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01234567890123456789' || to_char (r, '000000'),
    'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789012345678' || to_char (r, '000000')
      from generator g
    where g.r <= 4000000
    /
    commit ;
    --
    prompt Loading activity ...
    --
    -- activity activity_id user_id episode_id request_id (NULL) padding 
    insert into activity
    with generator as 
    (select rownum r
              from (select rownum r from dual connect by rownum <= 1000) a,
                   (select rownum r from dual connect by rownum <= 1000) b,
                   (select rownum r from dual connect by rownum <= 1000) c
             where rownum <= 10000000
           ) 
    select r, mod (r, 50) + 1, mod (r, 300000) + 1, NULL, NULL
      from generator g
    where g.r <= 100000
    /
    commit ;
    --
    prompt Loading app_users ...
    --
    -- app_users user_id user_name start_date padding 
    insert into app_users
    with generator as 
    (select rownum r
              from (select rownum r from dual connect by rownum <= 1000) a,
                   (select rownum r from dual connect by rownum <= 1000) b,
                   (select rownum r from dual connect by rownum <= 1000) c
             where rownum <= 10000000
           ) 
    select r, 
           'User_' || to_char (r, '000000'),
           sysdate - mod (r, 30),
           'ABCDEFGHIJKLMNOPQRSTUVWXYZ' || to_char (r, '000000')
      from generator g
    where g.r <= 1000
    /
    commit ;
    --
    
    prompt Episode (1)
    create index ix1_episode_cross_ref on episode (cross_ref_id) ;
    --
    prompt Product (2)
    create index ix1_product_episode on product (episode_id) ;
    create index ix2_product_type on product (prod_type_id) ;
    --
    prompt Request (4)
    create index ix1_request_site on request (site_id) ;
    create index ix2_request_dept on request (department_id) ;
    create index ix3_request_cross_ref on request (cross_ref_id) ;
    -- The extra index on the referenced columns!!
    create index ix4_request on request (cross_ref_id, site_id) ;
    --
    prompt Activity (2)
    create index ix1_activity_episode on activity (episode_id) ;
    create index ix2_activity_request on activity (request_id) ;
    --
    prompt Users (1)
    create unique index ix1_users_name on app_users (user_name) ;
    --
    prompt Gather statistics on schema ...
    --
    exec dbms_stats.gather_schema_stats ('JB')
    10053 trace sections - original query
    ***************************************
    SINGLE TABLE ACCESS PATH
      -----------------------------------------
      BEGIN Single Table Cardinality Estimation
      -----------------------------------------
      Table: REQUEST  Alias: REQUEST
        Card: Original: 3994236  Rounded: 3994236  Computed: 3994236.00  Non Adjusted: 3994236.00
      -----------------------------------------
      END   Single Table Cardinality Estimation
      -----------------------------------------
      Access Path: TableScan
        Cost:  28806.24  Resp: 28806.24  Degree: 0
          Cost_io: 28738.00  Cost_cpu: 1594402830
          Resp_io: 28738.00  Resp_cpu: 1594402830
    ******** Begin index join costing ********
      ****** trying bitmap/domain indexes ******
      Access Path: index (FullScan)
        Index: PK_REQUEST
        resc_io: 7865.00  resc_cpu: 855378926
        ix_sel: 1  ix_sel_with_filters: 1
        Cost: 7901.61  Resp: 7901.61  Degree: 0
      Access Path: index (FullScan)
        Index: PK_REQUEST
        resc_io: 7865.00  resc_cpu: 855378926
        ix_sel: 1  ix_sel_with_filters: 1
        Cost: 7901.61  Resp: 7901.61  Degree: 0
      ****** finished trying bitmap/domain indexes ******
    ******** End index join costing ********
      Best:: AccessPath: TableScan
             Cost: 28806.24  Degree: 1  Resp: 28806.24  Card: 3994236.00  Bytes: 0
    ***************************************
    10053 trace sections - modified query
    ***************************************
    SINGLE TABLE ACCESS PATH
      -----------------------------------------
      BEGIN Single Table Cardinality Estimation
      -----------------------------------------
      Table: REQUEST  Alias: REQUEST
        Card: Original: 3994236  Rounded: 3994236  Computed: 3994236.00  Non Adjusted: 3994236.00
      -----------------------------------------
      END   Single Table Cardinality Estimation
      -----------------------------------------
      Access Path: TableScan
        Cost:  28806.24  Resp: 28806.24  Degree: 0
          Cost_io: 28738.00  Cost_cpu: 1594402830
          Resp_io: 28738.00  Resp_cpu: 1594402830
      Access Path: index (index (FFS))
        Index: IX4_REQUEST
        resc_io: 3927.00  resc_cpu: 583211030
        ix_sel: 0.0000e+00  ix_sel_with_filters: 1
      Access Path: index (FFS)
        Cost:  3951.96  Resp: 3951.96  Degree: 1
          Cost_io: 3927.00  Cost_cpu: 583211030
          Resp_io: 3927.00  Resp_cpu: 583211030
      Access Path: index (FullScan)
        Index: IX4_REQUEST
        resc_io: 14495.00  resc_cpu: 903225273
        ix_sel: 1  ix_sel_with_filters: 1
        Cost: 14533.66  Resp: 14533.66  Degree: 1
    ******** Begin index join costing ********
      ****** trying bitmap/domain indexes ******
      Access Path: index (FullScan)
        Index: IX4_REQUEST
        resc_io: 14495.00  resc_cpu: 903225273
        ix_sel: 1  ix_sel_with_filters: 1
        Cost: 14533.66  Resp: 14533.66  Degree: 0
      Access Path: index (FullScan)
        Index: IX4_REQUEST
        resc_io: 14495.00  resc_cpu: 903225273
        ix_sel: 1  ix_sel_with_filters: 1
        Cost: 14533.66  Resp: 14533.66  Degree: 0
      ****** finished trying bitmap/domain indexes ******
    ******** End index join costing ********
      Best:: AccessPath: IndexFFS  Index: IX4_REQUEST
             Cost: 3951.96  Degree: 1  Resp: 3951.96  Card: 3994236.00  Bytes: 0
    ***************************************

    As I mentioned, it is probably a bug related to the transformation of the ANSI SQL syntax.

    As suggested/asked in my first reply:
    1. If you use the NO_QUERY_TRANSFORMATION hint, then you should find that you get the use of the index (although not in the plan you would expect).
    2. If you use the traditional Oracle syntax, then you should not have the same problem.
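
    To illustrate those two suggestions, here is a sketch of both workarounds against the original query (NO_QUERY_TRANSFORMATION is a standard hint; the (+) rewrite below is an assumed equivalent of the ANSI version, so verify it returns the same rows before relying on it):

    -- 1. Keep the ANSI joins but disable query transformation:
    SELECT /*+ NO_QUERY_TRANSFORMATION */
           episode.episode_id, episode.cross_ref_id, episode.date_required,
           product.number_required, request.site_id
      FROM episode
      LEFT JOIN request ON episode.cross_ref_id = request.cross_ref_id
           JOIN product ON episode.episode_id = product.episode_id
      LEFT JOIN product_sub_type
             ON product.prod_sub_type_id = product_sub_type.prod_sub_type_id
     WHERE episode.department_id = 2
       AND product.status = 'I'
     ORDER BY episode.date_required;

    -- 2. The same query in traditional Oracle outer-join syntax:
    SELECT episode.episode_id, episode.cross_ref_id, episode.date_required,
           product.number_required, request.site_id
      FROM episode, request, product, product_sub_type
     WHERE episode.cross_ref_id = request.cross_ref_id (+)
       AND episode.episode_id = product.episode_id
       AND product.prod_sub_type_id = product_sub_type.prod_sub_type_id (+)
       AND episode.department_id = 2
       AND product.status = 'I'
     ORDER BY episode.date_required;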
