Best practices for a temporary table

Hello

I have a temporary table, and this table is emptied (the rows of table_name are deleted) several times a day by a procedure.

I just wanted to know the best practice for managing this scenario, taking into account read time / tablespace usage / any other advantages or disadvantages.

Truncate and load.

Or, if possible, use a MVIEW instead of the table...
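For reference, a minimal sketch of the two patterns being weighed. The table and procedure names (stage_data, reload_stage) are hypothetical, not from the original post:

    -- Truncate-and-load: TRUNCATE is DDL, resets the high-water mark and
    -- generates minimal undo/redo, unlike a DELETE of every row.
    CREATE TABLE stage_data (id NUMBER, payload VARCHAR2(100));

    CREATE OR REPLACE PROCEDURE reload_stage IS
    BEGIN
        EXECUTE IMMEDIATE 'TRUNCATE TABLE stage_data';  -- DDL needs dynamic SQL in PL/SQL
        INSERT INTO stage_data (id, payload)
            SELECT object_id, object_name FROM user_objects;  -- sample source
        COMMIT;
    END;
    /

    -- Alternative: a global temporary table cleans itself up per transaction
    -- (or per session with ON COMMIT PRESERVE ROWS), so no emptying step at all.
    CREATE GLOBAL TEMPORARY TABLE stage_data_gtt (
        id      NUMBER,
        payload VARCHAR2(100)
    ) ON COMMIT DELETE ROWS;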

Tags: Database

Similar Questions

  • What is best: a global temporary table or object-type tables?

    Dear all,

    I'm trying to refine some code and find that we have a large loop that iterates over 100,000 times and, for each record, in turn performs validation checks against individual tables.

    I intend to implement all of the check functionality using join conditions. I.e. dump all the data into a global temporary table or Oracle object-type tables and apply the validation conditions using join operations, so that I can avoid the unnecessary per-record checks.

    If I want to implement this, I want to know which is better: a global temporary table or Oracle nested tables.

    Appreciate your response.

    Thank you
    MK.

    If you mean a global temporary table vs a PL/SQL collection variable, then 100K+ rows is a lot to store in session memory, so a GTT would be the more scalable solution. It will also give you more options to manipulate the data using SQL.

    Note that a 'table of object type' can also mean a database table (not PL/SQL), for example "CREATE TABLE MyTable OF mytype". However, I don't think you mean that.
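    As an illustration of the join-based approach, a minimal sketch (the table names gtt_txn, txn_in and valid_codes are made up for the example):

        -- stage the whole batch once, then validate it with one set-based query
        CREATE GLOBAL TEMPORARY TABLE gtt_txn (
            txn_id NUMBER,
            code   VARCHAR2(10)
        ) ON COMMIT DELETE ROWS;

        INSERT INTO gtt_txn (txn_id, code)
            SELECT txn_id, code FROM txn_in;

        -- one outer join flags every invalid row, replacing 100,000 per-record lookups
        SELECT t.txn_id
        FROM   gtt_txn t
               LEFT JOIN valid_codes v ON v.code = t.code
        WHERE  v.code IS NULL;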

  • Best practices for retrieving a single value from an Oracle table

    I'm using Oracle Database 11g Release 11.2.0.3.0.

    I would like to know the best practice for doing something like this in a PL/SQL block:

    DECLARE
        v_student_id    student.student_id%TYPE;
    BEGIN
        SELECT  student_id
        INTO    v_student_id
        FROM    student
        WHERE   last_name = 'Smith'
        AND     ROWNUM = 1;
    END;

    Of course, the problem here is that when no row is found, the NO_DATA_FOUND exception is thrown, which interrupts execution.  So, what if I want to continue despite the exception?

    Yes, I could create a nested block with an EXCEPTION section, etc., but it seems awkward for what seems to be a very simple task.
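    For concreteness, the nested-block version being referred to would look something like this:

        DECLARE
            v_student_id    student.student_id%TYPE;
        BEGIN
            BEGIN
                SELECT  student_id
                INTO    v_student_id
                FROM    student
                WHERE   last_name = 'Smith'
                AND     ROWNUM = 1;
            EXCEPTION
                WHEN NO_DATA_FOUND THEN
                    v_student_id := NULL;   -- swallow the exception and carry on
            END;
            -- execution continues here whether or not a row was found
        END;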

    I've also seen this handled like this:

    DECLARE
        v_student_id    student.student_id%TYPE;
        CURSOR c_student_id IS
            SELECT  student_id
            FROM    student
            WHERE   last_name = 'Smith'
            AND     ROWNUM = 1;
    BEGIN
        OPEN c_student_id;
        FETCH c_student_id INTO v_student_id;
        IF c_student_id%NOTFOUND THEN
            DBMS_OUTPUT.PUT_LINE('not found');
        ELSE
            NULL;  -- (do stuff)
        END IF;
        CLOSE c_student_id;   
    END;

    But it still seems like killing an ant with a hammer.

    What is the best way?

    Thanks for any help you can give.

    Wayne

    201cbc0d-57b2-483a-89f5-cd8043d0c04b wrote:

    What happens if I want to continue despite the exception?

    It depends on what you want to do.

    You expect either 0 or 1 rows; SELECT INTO expects exactly 1 row. In this case, SELECT INTO may not be the best solution.

    What exactly do you want to do if 0 rows are returned?

    If you want to set a variable to NULL and continue processing, Frank's response looks good; or else use Billy's modular approach.

    If you want to "do stuff" when you get a row and do nothing when you don't, then you can consider a FOR loop:

    declare
      l_empno scott.emp.empno%type := 7789;
      l_ename scott.emp.ename%type;
    begin
      for rec in (
        select ename from scott.emp
        where empno = l_empno
        and rownum = 1
      ) loop
        l_ename := rec.ename;
        dbms_output.put_line('<' || l_ename || '>');
      end loop;
    end;
    /
    

    Note that when no row is found, there is no output at all.

    Post edited by: StewAshton - Oops! I forgot to put the result in l_ename...
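    Another idiom sometimes used for this (not shown in this thread, so treat it as an aside): wrapping the column in an aggregate guarantees exactly one row back, so SELECT INTO can never raise NO_DATA_FOUND:

        DECLARE
            v_student_id    student.student_id%TYPE;
        BEGIN
            -- MAX over zero rows returns a single NULL row instead of no rows;
            -- if several rows match, it arbitrarily picks the largest id.
            SELECT  MAX(student_id)
            INTO    v_student_id
            FROM    student
            WHERE   last_name = 'Smith';

            IF v_student_id IS NULL THEN
                DBMS_OUTPUT.PUT_LINE('not found');
            END IF;
        END;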

  • /var/log is full. Best practices?

    One of our host's log partitions is 100% full. I'm not the administrator for this host in practice, but I manage/deploy the virtual machines on it for others to use.

    I was wondering what's the best practice for dealing with a full log partition? I found an article that mentioned editing /etc/logrotate.d/vmkernel so that files are compressed more often and kept for a shorter time, but there were no really clear instructions on what to change and how.

    Is the only way to investigate via the console itself, or the /var/log directory via PuTTY? Is there no way to see this in VIC?

    Thank you

    Hello

    To solve the immediate problem, I would transfer any log in /var/log that has a number at the end (i.e. .1, .2, etc.) to a temporary storage location off the ESX host. You could run something similar to the following scp command to do this:

    scp /var/log/*.[0-9]* /var/log/*/*.[0-9]* host:TemporaryDir
    

    Or you can use WinSCP to transfer from the ESX host to a Windows box. Once you have the existing log files off the system for later review, use the following to clear the space:

    cd /var/log; rm *.[0-9]* */*.[0-9]*
    

    I would then configure log rotation as directed by the hardening guidance for VMware ESX.

    Best regards, Edward L. Haletky, VMware communities user moderator, VMware vExpert 2009
    Now available on Rough Cuts: "VMware vSphere(TM) and Virtual Infrastructure Security: Securing ESX and the Virtual Environment" - http://www.astroarch.com/wiki/index.php/VMware_Virtual_Infrastructure_Security
    Also available: "VMWare ESX Server in the Enterprise" - http://www.astroarch.com/wiki/index.php/VMWare_ESX_Server_in_the_Enterprise
    SearchVMware Pro: http://www.astroarch.com/wiki/index.php/Blog_Roll | Blue Gears: http://www.astroarch.com/blog | Top Virtualization Security Links: http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links | Virtualization Security Round Table Podcast: http://www.astroarch.com/wiki/index.php/Virtualization_Security_Round_Table_Podcast

  • TDMS & DIAdem best practices: what if my signal has breaks/gaps?

    I created a LV2011 datalogging application that stores a lot of data to TDMS files.  The basic architecture is like this:

    Each channel has these properties:

    T0 = start time

    dt = sampling interval

    Channel values:

    1D array of DBL values

    After datalogging starts, I just keep appending the channel values.  And if the TDMS file size goes beyond 1 GB, I create a new file and carry on.  The application runs continuously for days/weeks, so I get a lot of TDMS files.

    It works very well.  But now I need to change my system to allow pausing/resuming data acquisition.  In other words, there will be breaks in the signal (probably from 30 seconds to 10 minutes).  I had originally considered logging two values for each data point, as XY chart data (value & timestamp).  But I'm opposed to this in principle because, in my view, it fills the hard drive unnecessarily (twice as much disk footprint for the same data?).

    Also, I've never used DIAdem, but I want to ensure that my data can be easily opened and analyzed in DIAdem.

    My question: are there best practices for storing signals that break/pause like that?  Ideally I would just start a new record with a new start time (T0), and DIAdem would somehow "link" these signals... i.e. know that each one is a continuation of the same signal.

    Of course, I should install DIAdem and play with it.  But I thought I would ask the experts about best practices first, as I have no knowledge of DIAdem.

    Hi josborne;

    Do you plan to create a new TDMS file whenever the acquisition stops and starts, or would you rather store multiple power-up sections in the same TDMS file?  The best way to manage the date/time shift is to store one waveform per channel per power-up section and use the "wf_start_time" channel property, which comes along automatically with waveform TDMS data if you wire an orange floating-point array or a brown waveform into TDMS Write.vi.  DIAdem 2011 has the ability to easily access the time offset when it is stored in this channel property (assuming it is stored as a date/time and not as a DBL or a string).  If you have only one power-up section per TDMS file, I would certainly also add a 'DateTime' property at the file level.  If you want to store several power-up sections in a single TDMS file, I would recommend using a separate group for each power-up section.  Make sure you store the following channel properties in the TDMS file if you want the information to flow naturally into DIAdem:

    'wf_xname'
    'wf_xunit_string'
    'wf_start_time'
    'wf_start_offset'
    'wf_increment'

    Brad Turpin

    DIAdem Product Support Engineer

    National Instruments

  • (Best practices) How to store curve-fit values?

    I have two sets of data, Xreal and Xobserved, abbreviated Xr and Xo. Xreal is a data set containing values from a reliable (but high-maintenance: it's a pain to collect data from) sensor, and Xobserved is a data set containing values from a less reliable, but much lower-maintenance, sensor. I'll create a VI that receives the input from these two data sources, stores it in a database (text file or CSV), and runs some estimators over this database. The output of the VI will be the best linear-fit approximation (using regression against Xreal) of the Xobserved input value.

    What are best practices for storing Xreal and Xobserved? In addition, I'm not too familiar with best practices for VIs: should it take CSV files as input? How would I best format them?


    Keep things simple.  Convert the array to a CSV string and write it to a text file.  See the attached example.

  • Best practice? Storage of large data sets.

    I'm programming a client to access customer address information. The data is delivered from an MSSQL Server by a web service.

    What is the best practice for binding this data, namely to ListFields? String arrays? Or an XML file that is parsed?

    Any ideas?

    Thank you, hhessel

    These debates come up from time to time. They normally get going after someone asks why BB does not support databases. There is no magic here - it depends on what you do with the data. For the general considerations, see J2ME on sun.com, or JVM issues more generally. We should all get a BB hardware reference too LOL...

    If you really have a lot of data, there are zip libraries, and I often use my own "compression" schemes.

    I personally go with simple types in the persistent store and built my own B-tree indexing system, which by virtue of being J2SE is also persistable and even testable. For strings, I store repeated prefixes only once. So if I have hundreds of strings that start with "http://www.pinkcat-REC", I don't store that every time. Before you worry about the overhead of chaining these back together: that gets picked up by the indexes you use to find the string anyway (so of course you need time to concatenate the pieces back together, but the extra space needed by the index is low).

  • Displaying US phone number format - best practices

    JDeveloper 12.1.3

    - The Phones table has a VARCHAR2 field with phone numbers. All records are exactly 10 digits (characters) long.

    - In the user interface, I want to show the number as xxx-xxx-xxxx, e.g. with af:outputText or a similar component.

    - Note that I am looking at display only, not entry/editing of the phone number.

    - In search of best practices.

    What I did, but am not satisfied with:

    - In the VO I created 3 transient attributes with default SQL expressions. For example:

    Phone1To3: substr(Phones.Phone, 1, 3)

    Phone4To6: substr(Phones.Phone, 4, 3)

    Phone7To10: substr(Phones.Phone, 7, 4)

    Then in the jsff file (the table component):

    <af:outputText value="#{row.Phone1To3}-#{row.Phone4To6}-#{row.Phone7To10}" shortDesc="#{bindings.Phones.hints.Phone.tooltip}" id="ot2222"/>
    

    Another way (better, easier)?

    In my view, the best practice in this case is to write your own converter; then you will not need the transient attributes as you did. After creating the converter, you can add it inside an outputText or inputText... to display the required format.

    To find out how to create a converter, you can check this URL:

    Sameh Nassar: Create custom converter
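    As a side note, if the requirement really is display-only, a single transient attribute (or view column) can also do it with one regular expression; this is just a sketch reusing the Phone column from the question:

        -- formats a 10-digit string as xxx-xxx-xxxx in one expression
        REGEXP_REPLACE(Phone, '^([0-9]{3})([0-9]{3})([0-9]{4})$', '\1-\2-\3')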

  • [ADF, JDev12.1.3] Best practices for maintaining a form validation

    Hello,

    in my application, I need to create a registration form which contains fields that must be validated (for example they should follow a format like e-mail, phone number, tax code,...).

    If the data inserted by the user are ok, a new record in my custom db table Users will be created.

    I would like to know the best practices for handling the validation, meaning where the checks should be made and how a message is shown to the user filling out the form when something goes wrong.

    Is the VO or EO or a managed bean best? Or should some checks be put in the EO, others in the VO, and others in the managed bean?

    I would be happy if you could give me some examples.

    Thank you

    Federico

    Assuming you want the validation to apply to the field's value on any screen where the data can be entered (and possibly to web services that rely on the same ADF BC), then put the validation on the attribute definition in the EO.

    If you want to add a little more friendliness and eliminate some of the network traffic to the server, you can also implement client-side validation in your page - for example by using the regular expression validator.

    https://blogs.Oracle.com/Shay/entry/regular_expression_validation

  • Cursors - best practices

    Hi all

    This question is based on the thread: Re: best practices with cursors and loops

    Here I've created the same script with different methods.

    1. CURSOR

    ------------------

    DECLARE
        CURSOR table_count IS
            SELECT table_name
            FROM user_tables
            ORDER BY 1;
        sqlstr VARCHAR2(1000);
        numrow NUMBER;
    BEGIN
        dbms_output.put_line('Start time ' || to_char(sysdate, 'dd-mon-yyyy hh24:mi:ss'));
        FOR i IN table_count
        LOOP
            sqlstr := 'SELECT COUNT(*) FROM ' || i.table_name;
            EXECUTE IMMEDIATE sqlstr INTO numrow;
            IF numrow > 0 THEN
                dbms_output.put_line(RPAD(i.table_name, 30, '.') || ' = ' || numrow);
            END IF;
        END LOOP;
        dbms_output.put_line('End time ' || to_char(sysdate, 'dd-mon-yyyy hh24:mi:ss'));
    END;

    My understanding:

    It does row-by-row processing, which generally means slower performance.

    2. BULK COLLECT

    -----------------------------

    DECLARE
        sqlstr VARCHAR2(1000);
        numrow NUMBER;
        TYPE tname_type IS TABLE OF VARCHAR2(30);
        tname tname_type;
    BEGIN
        dbms_output.put_line('Start time ' || to_char(sysdate, 'dd-mon-yyyy hh24:mi:ss'));
        SELECT table_name
        BULK COLLECT INTO tname
        FROM user_tables
        ORDER BY 1;
        FOR i IN tname.FIRST .. tname.COUNT
        LOOP
            sqlstr := 'SELECT COUNT(*) FROM ' || tname(i);
            EXECUTE IMMEDIATE sqlstr INTO numrow;
            IF numrow > 0 THEN
                dbms_output.put_line(RPAD(tname(i), 30, '.') || ' = ' || numrow);
            END IF;
        END LOOP;
        dbms_output.put_line('End time ' || to_char(sysdate, 'dd-mon-yyyy hh24:mi:ss'));
    END;

    My understanding:

    1. Avoids context switching.

    2. Uses more PGA.

    3. CURSOR AND BULK COLLECT

    --------------------------------------------------

    DECLARE
        CURSOR table_count IS
            SELECT table_name
            FROM user_tables
            ORDER BY 1;
        sqlstr VARCHAR2(1000);
        numrow NUMBER;
        TYPE tname_type IS TABLE OF VARCHAR2(30);
        tname tname_type;
    BEGIN
        OPEN table_count;
        FETCH table_count BULK COLLECT INTO tname;
        CLOSE table_count;
        dbms_output.put_line('Start time ' || to_char(sysdate, 'dd-mon-yyyy hh24:mi:ss'));
        FOR i IN tname.FIRST .. tname.COUNT
        LOOP
            sqlstr := 'SELECT COUNT(*) FROM ' || tname(i);
            EXECUTE IMMEDIATE sqlstr INTO numrow;
            IF numrow > 0 THEN
                dbms_output.put_line(RPAD(tname(i), 30, '.') || ' = ' || numrow);
            END IF;
        END LOOP;
        dbms_output.put_line('End time ' || to_char(sysdate, 'dd-mon-yyyy hh24:mi:ss'));
    END;

    My understanding:

    I really don't understand why some people prefer this method, with both the CURSOR and the BULK COLLECT.

    4. IMPLICIT CURSOR

    ----------------------------------

    DECLARE
        sqlstr VARCHAR2(1000);
        numrow NUMBER;
    BEGIN
        dbms_output.put_line('Start time ' || to_char(sysdate, 'dd-mon-yyyy hh24:mi:ss'));
        FOR i IN (SELECT table_name
                  FROM user_tables
                  ORDER BY 1)
        LOOP
            sqlstr := 'SELECT COUNT(*) FROM ' || i.table_name;
            EXECUTE IMMEDIATE sqlstr INTO numrow;
            IF numrow > 0 THEN
                dbms_output.put_line(RPAD(i.table_name, 30, '.') || ' = ' || numrow);
            END IF;
        END LOOP;
        dbms_output.put_line('End time ' || to_char(sysdate, 'dd-mon-yyyy hh24:mi:ss'));
    END;

    My understanding:

    It should also give better performance compared to the CURSOR loop.

    Given that the 4 methods above do the same work, please explain how to choose the correct method for different scenarios, i.e. what we have to consider before choosing a method.

    I asked Tom Kyte this question on asktom a few years ago. He recommended the implicit cursor:

    • it fetches 100 rows at a time (not 500);
    • PL/SQL manages the opening and closing of the cursor automatically.

    He mentioned one important exception: if you need to change the data and not just read it, you need to "bulk up" the writes as much as the reads - and that means using FORALL.

    To use FORALL for the writes, you must use BULK COLLECT for the reads - and you should almost always use LIMIT with BULK COLLECT.

    So, for "bulk writes", use FORALL. For the "bulk reads" feeding the "bulk writes", use BULK COLLECT with LIMIT. For "bulk reads" where you don't change any data, implicit cursors are simpler.
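    For reference, a minimal sketch of that BULK COLLECT ... LIMIT plus FORALL pattern (the source and target tables, src_emp and tgt_emp, are made up and assumed to have identical columns):

        DECLARE
            CURSOR c IS SELECT empno, ename FROM src_emp;
            TYPE emp_tab_t IS TABLE OF c%ROWTYPE;
            l_rows emp_tab_t;
        BEGIN
            OPEN c;
            LOOP
                FETCH c BULK COLLECT INTO l_rows LIMIT 100;  -- bounds PGA usage
                EXIT WHEN l_rows.COUNT = 0;
                FORALL i IN 1 .. l_rows.COUNT                -- one context switch per batch
                    INSERT INTO tgt_emp VALUES l_rows(i);    -- whole-record bind
            END LOOP;
            CLOSE c;
            COMMIT;
        END;
        /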

    Best regards, Stew Ashton

  • Best practices: linking SFDC opportunity data to a contact

    Hi - I was hoping someone could give me a best practice for managing and automating the following use case.

    I have tried a number of different things in Eloqua but hit a roadblock.

    USE CASE/PROBLEM:

    I need to remove contacts from a nurturing campaign when they are associated with an SFDC opportunity designated as won.
    I accomplished my first step, which is to remove a contact from a campaign by referencing a custom object.

    However, I need updated opportunity data mapped to a contact to handle all of this.

    So my real problem is getting updated data (every 30 minutes or so) into the custom opportunity object table.

    What I've tried to map updated opportunity data to a contact:

    (1) Auto-synch the opportunity table to a custom object

    I was able to bring in opportunities, but the email address/contact data is stored on the Contact Role table, so no e-mail addresses were brought into Eloqua.

    (2) Auto-synch the opportunity Contact Role table to a custom object

    This works if I do a full auto-synch, but the auto-synch does not work with the "updated" filter and the last successful upload date.

    Is it possible to change the filter to pull the data when the opportunity is changed and not the Contact Role?
    And if so, can someone give me direction on how to implement that in Eloqua?

    If you know of another way to manage this whole flow, please let me know. I appreciate any assistance.

    Blake Holden

    Hi Kathleen,

    I understand. Below shows an auto-synch successfully pulling SFDC opportunity Contact Role data into Eloqua. Once a week I still run a full auto-synch to ensure that all the data comes across, but most of the time it is entirely automated.

    Blake

  • Global temporary table in PL/SQL with XML

    Hello

    I have the impression that something strange is going on, or maybe I'm missing something basic.

    Step 1: Create a global temporary table that is transaction-specific:

    CREATE GLOBAL TEMPORARY TABLE Temp01
    (
        TICKET_ID NUMBER
      , REGION VARCHAR2(10)
      , YEAR NUMBER
      , CO_ID VARCHAR2(10)
    ) ON COMMIT DELETE ROWS;

    Step 2:

    The XML that is passed as a parameter to the new function:

    <TICKET>
      <TICKET_ID>38498051</TICKET_ID>
      <REGION>USA</REGION>
      <YEAR>2014</YEAR>
      <CO_ID>XYZ123</CO_ID>
    </TICKET>

    Step 3: Create a standalone function:

    -- drop function aagarwal.wr_creation;
    create or replace FUNCTION XML_FUNC
    (
        ret_msg OUT varchar2,
        p_xmlval IN varchar2
    )
    RETURN varchar2
    is
        l_xmlval varchar2(4000) := p_xmlval;
        v_co_id VARCHAR2(10);
        v_code VARCHAR2(10);
    BEGIN
        BEGIN
            INSERT INTO Temp01
            (
                TICKET_ID,
                REGION,
                YEAR,
                CO_ID
            )
            SELECT
                EXTRACTVALUE(XMLType(p_xmlval), '/TICKET/TICKET_ID') ID,
                EXTRACTVALUE(XMLType(p_xmlval), '/TICKET/REGION') REGION,
                EXTRACTVALUE(XMLType(p_xmlval), '/TICKET/YEAR') YEAR,
                EXTRACTVALUE(XMLType(p_xmlval), '/TICKET/CO_ID') CO_ID
            FROM DUAL;

            ret_msg := 'SUCCESS';
            -- SELECT CO_ID INTO v_co_id FROM aagarwal.TEMP_STAGE_WR;
            -- return ret_msg;
        EXCEPTION
            WHEN OTHERS THEN
                ret_msg := sqlerrm;
                RETURN ret_msg;
        END;

        BEGIN
            SELECT CO_ID INTO v_co_id FROM Temp01;

            /* MERGE INTO site se
               USING aagarwal.TEMP01 T
               ON (T.co_id = se.code AND se.type_nm = 'TYPE' AND se.src_nm = T.region)
               WHEN NOT MATCHED THEN
                   INSERT (ID, SRCNM, CODE, TYPENM)
                   VALUES (SHARED_SEQ.NEXTVAL, T.region, T.co_id, 'TYPE');
               -- commit; */

            return ret_msg || ' AS ' || v_co_id;
        END;
    END;
    /

    Done - function created.

    NOTE: The MERGE statement is commented out, and as such the function compiled without issue.

    Step 4: Call the function

    declare
        l_out varchar2(50);
        l_outr varchar2(50);
        p_xml XMLTYPE;
    begin
        l_outr := XML_FUNC(l_out, '<TICKET>
            <TICKET_ID>38498051</TICKET_ID>
            <REGION>USA</REGION>
            <YEAR>2014</YEAR>
            <CO_ID>XYZ123</CO_ID>
        </TICKET>');

        dbms_output.put_line(l_outr);
    end;
    /


    Step 5: Check the values inserted into the temporary table:


    Select * from temp01;


    So far so good.

    THE PROBLEM:

    Now I want to tweak the XML_FUNC function above by uncommenting the MERGE statement, which gives me an error that I cannot make sense of:

    i.e. PL/SQL: ORA-00942: table or view does not exist, on the MERGE statement line, pointing at Temp01.

    NOTE: I tested this MERGE statement explicitly (both running it standalone and calling it via an anonymous PL/SQL block) and it works absolutely fine. And the SITE table does exist.

    PS: I would be grateful if someone could show a better way to write this code. I'm not a regular PL/SQL developer, so the code may not follow good practice.

    Kind regards

    AAG.

    Using 11.2.0.3:

    The owner of all three of these objects is DBA.

    Are you sure?

    Post the output of:

    select object_name, object_type
    from user_objects
    where object_name in ('TEMP01', 'SITE', 'XML_FUNC');

    You must grant select on the table explicitly to the owner of the function if the owners are different.
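    A minimal sketch of the fix (the grantee name func_owner is hypothetical; run the GRANTs as the table owner). Note that privileges acquired through roles don't count inside definer's-rights PL/SQL, so the grant must be direct:

        -- run as the owner of TEMP01 / SITE
        GRANT SELECT ON temp01 TO func_owner;
        GRANT SELECT, INSERT ON site TO func_owner;  -- MERGE needs both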

    This works as expected for me (user DEV owns all 3 objects):

    Connected to Oracle Database 11g Enterprise Edition Release 11.2.0.3.0

    Logged in as dev

    SQL> create global temporary table Temp01 (
             TICKET_ID NUMBER
           , REGION VARCHAR2(10)
           , YEAR NUMBER
           , CO_ID VARCHAR2(10)
         )
         ON COMMIT DELETE ROWS;

    Table created

    SQL> create table site (
             id NUMBER
           , srcnm VARCHAR2(10)
           , code VARCHAR2(10)
           , typenm VARCHAR2(10)
         );

    Table created

    SQL> create sequence shared_seq;

    Sequence created

    SQL> create or replace FUNCTION XML_FUNC (
             p_xmlval IN varchar2
         )
         RETURN varchar2
         is
             l_xmlval xmltype := xmltype(p_xmlval);
         BEGIN
             INSERT INTO Temp01
             (
                 TICKET_ID,
                 REGION,
                 YEAR,
                 CO_ID
             )
             SELECT EXTRACTVALUE(l_xmlval, '/TICKET/TICKET_ID') ID,
                    EXTRACTVALUE(l_xmlval, '/TICKET/REGION') REGION,
                    EXTRACTVALUE(l_xmlval, '/TICKET/YEAR') YEAR,
                    EXTRACTVALUE(l_xmlval, '/TICKET/CO_ID') CO_ID
             FROM DUAL;

             MERGE INTO site se
             USING TEMP01 T
             ON (
                 T.co_id = se.code
                 AND se.typenm = 'TYPE'
                 AND se.srcnm = T.region
             )
             WHEN NOT MATCHED THEN
                 INSERT (ID, SRCNM, CODE, TYPENM)
                 VALUES (SHARED_SEQ.NEXTVAL, T.region, T.co_id, 'TYPE');

             return 'SUCCESS';

         END;
         /

    Function created

    SQL> set serveroutput on

    SQL> declare
             l_outr varchar2(50);
         begin
             l_outr := XML_FUNC('<TICKET>
                 <TICKET_ID>38498051</TICKET_ID>
                 <REGION>USA</REGION>
                 <YEAR>2014</YEAR>
                 <CO_ID>XYZ123</CO_ID>
             </TICKET>');

             dbms_output.put_line(l_outr);
         end;
         /

    SUCCESS

    PL/SQL procedure successfully completed

    SQL> select * from site;

            ID SRCNM      CODE       TYPENM
    ---------- ---------- ---------- ----------
             1 USA        XYZ123     TYPE

  • Best practices for settings in RoboHelp to create a .chm?

    I have Tech Comm Suite 2015. I need to make a .chm file from a FrameMaker book. I tried to do it directly from FrameMaker, but was not happy with the results, so I will try a RoboHelp project instead.  Can someone help me with best practices to achieve this? I would like to keep my files linked in RoboHelp, so that I don't have to start over if they are updated. I tried to work with it. Can you fix things after you import? For example, if I don't fix cross-references (and delete page numbers, for example) in FrameMaker before the import/linking, do I have to do it all again?  I have worked with FrameMaker for quite a long time, but I'm less familiar with RoboHelp. Is there a video or webinar showing how to do this? Or can someone give some tips and things that I should know about this procedure? Thank you

    Hello

    1. The table of contents is all at the same level:

    To create TOC navigation levels in the output published from FM, we need to change either the first indent, the font size, or the font weight.

    We determine the level by setting these properties per tag:

    - First indent
    - Font size
    - Font weight

    For example, if you want Heading3 to appear nested inside Heading2, like this:

        Heading2
            Heading3

    In the Paragraph Designer, update these properties for the 2 tags (Heading2TOC, Heading3TOC):

    - First indent of Heading3TOC greater than that of Heading2TOC
    - Or font weight of Heading3TOC less than that of Heading2TOC
    - Or font size of Heading3TOC less than that of Heading2TOC

    2. The "Enable browse sequence" option enables the navigation arrows. Try activating "Enable Browse Sequence" (and apply the latest patch: Help > Updates).

    3. Once you create your table of contents, you will see the chapter title begin to appear in the breadcrumbs.

    The main effort you'll need to make is to create a leveled table of contents; once that is done, it should resolve the issues you face.

    Amit

  • NTFS file cluster deployment best practices for Windows Server on vSphere 5.1?

    What are the best practices, from VMware or Microsoft, that I need to know to make sure the 2 x Windows Server 2008 SP2 Enterprise MSCS NTFS file server (active/passive cluster) I created can operate reliably while being accessed by more than 1,000 people?

    The underlying data disk is RDM because I need to replicate the LUNs to the DR site using the array-based SANCopy tool.

    • SCSI controller changed from LSI Logic SAS to Paravirtual

    Not really an option... This is not supported by VMware in an MSCS configuration...

    / Rubeck

  • What is the best practice for block sizes across several layers: hardware, hypervisor, VM OS?

    The example below is not a real setup I work with, but it should get the message across. Here's my example layering for reference:

    (Layer 1) Hardware: The hardware RAID controller

    • 1 TB volume configured at a 4K block size (RAW?)


    (Layer 2) Hypervisor: ESXi datastore

    • 1 TB from the RAID controller, formatted with VMFS5 @ 1 MB block size.


    (Layer 3) VM OS: Server 2008 R2 w/SQL

    • 100 GB virtual HD using NTFS @ 4K block size for the OS.
    • 900 GB virtual HD set up using NTFS @ 64K block size to store the SQL database.

    It seems that VMFS5 is limited to a 1 MB block size. Would it be preferable for all or some of the block sizes to match across the different layers, and why or why not? How do the different block sizes on the other layers affect performance? Could you suggest a better alternative or best practices for the sample configuration above?

    If a SAN were involved instead of a hardware RAID controller on the host, would it be better to store the OS vmdk on the VMFS5 datastore and create a separate iSCSI LUN formatted at a 64K block size, then attach it with the iSCSI initiator in the operating system, also sized at 64K? Do matching block sizes across the layers increase performance, and is it advisable? Any help answering and/or explaining best practices is greatly appreciated.

    itsolution,

    Thanks for the helpful-response points.  I wrote a blog post about this which I hope will help:

    Partition alignment and block sizes in VMware 5 | blog.jgriffiths.org

    To answer your questions, here goes:

    I have (around) 1 TB of space and create two Virtual Drives.

    Virtual Drive 1 - 10 GB - used for the hypervisor OS files

    Virtual Drive 2 - 990 GB - used for the VM data / VMFS datastore

    The default allocation element size on the PERC 6/i is 64 KB, but it can be 8, 16, 32, 64, 128, 256, 512 or 1024 KB.

    What block size would you use for array 1, which is where the actual hypervisor will be installed?

    -> If you have two arrays, I would set the block size on the hypervisor array to 8 KB

    What block size would you use for array 2, which will be used as the VM datastore in ESXi?

    -> I'd go with 1024 KB, matching the VMFS 5 block size

    - Do you want 1024 KB to match the VMFS block size that will eventually be formatted on top of it?

    -> Yes

    * Consider that this datastore would eventually contain several virtual hard drives for each OS, SQL database, and SQL logs, formatted in NTFS at the recommended block sizes: 4K, 8K, 64K.

    -> The problem here is that VMFS will go with 1 MB no matter what you do, so carving smaller sizes lower down in the RAID will cause no problems, but it does not help either.  You have 4K sectors on the disk, 1 MB RAID, 1 MB VMFS, then 4K, 8K, 64K in the guest.   Really, the 64K gains are somewhat lost when the back-end storage is 1 MB.

    If the RAID stripe element size is set to 1024 KB so that it matches the VMFS 1 MB block size, would that be better practice or is it indifferent?

    -> Whether it's 1024 KB or 4 KB chunks, it doesn't really matter.

    What effect does this have on the OS/virtual HDs and their respective block sizes installed on top of the stripe and VMFS element size?

    -> The effect on performance is minimal, but it exists.   It would be a lie to say it didn't.

    I could be completely off in my overall thinking, but to me it seems there must be some kind of correlation between the three different "layers", as I call them, and a best practice in use.

    Hope that helps.  I'll tell you I ran SQL and Exchange virtualized for a long time without any problem and without changing the OS block size; I just stuck with the standard Microsoft sizes.  I'd be much more concerned about the performance of the RAID controller in your server.  They keep making these things cheaper and cheaper, with less and less cache.  If performance is the primary concern, then I would consider the RAID array design or a RAID5/6 solution, or at least look at the amount of cache on your RAID controller (reads are normally critical for databases).

    Just my two cents.

    Let me know if you have any additional questions.

    Thank you

    J
