Get the number of days based on a specific date

Dear Experts,

I need help with a fast formula.

Case:

We have an element input value leave_end_date, and based on this date I should be able to get the number of days.

For example, if leave_end_date is August 5, 2015, then it should return 5 days; if August 15, then 15 days. I need a fast formula without creating a custom function.

I tried the below, but the fast formula will not compile.

y = TRUNC(TO_CHAR(leave_end_date, 'DD'))  /* DEFAULT for y is 0 */

Your quick response would be much appreciated.

Thank you.

Hello 2889916

Try

y = TO_NUMBER(TO_CHAR(leave_end_date, 'DD'))

If your payroll is monthly... you can also do:

DAYS_BETWEEN (PAY_PROC_PERIOD_END_DATE, leave_end_date) + 1

There could be some easier ways to do it, based on what you want to achieve.

Regards,

Vignesh
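As a sanity check, both suggestions can be sketched outside of Fast Formula. Here is a minimal Python illustration (the dates, including the monthly period end, are made up for the example):

```python
from datetime import date

leave_end_date = date(2015, 8, 5)  # example from the question

# Approach 1: day of month, like TO_NUMBER(TO_CHAR(leave_end_date, 'DD'))
day_of_month = leave_end_date.day

# Approach 2: days from leave_end_date to the period end, inclusive, like
# DAYS_BETWEEN(PAY_PROC_PERIOD_END_DATE, leave_end_date) + 1
pay_proc_period_end_date = date(2015, 8, 31)  # hypothetical monthly period end
days_in_rest_of_period = (pay_proc_period_end_date - leave_end_date).days + 1

print(day_of_month, days_in_rest_of_period)
```

Which one you want depends on whether "number of days" means the day of the month or the days remaining in the pay period.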

Tags: Oracle Applications

Similar Questions

  • Recommended value for the BAM data expiration time

    Hello

    Can someone tell me what is the recommended value for the BAM data expiration time?

    The Enterprise Server default is 24 hours, but I would like to be able to look at average instance runtimes even after several months. Is it reasonable to set the expiration time to a high value, or will that have an impact on the performance of BPM/BAM?

    Thanks in advance.

    Best regards
    CA

    Normally we keep the BAM data expiration time somewhere between 24 and 72 hours. For the historical reports you are looking for, the Data Mart / Data Warehouse DB makes more sense. That database stores the data forever and takes snapshots at longer intervals, normally every 24 hours. The data there is normally not real-time, because a snapshot is taken only once a day, but it will give you the historical reports you are looking for. The structure of that database is almost identical to the BAM DB.

  • Need to create a structure for the target data store?

    Hi Experts,

    If I create a structure for the target data store and then load the data from source to target, it works fine. If I don't, I get errors.

    Is it necessary to create a structure for the target?

    Please help me...

    Thanks in advance.

    A.Kavya.

    I found the answer. There is no need to create the structure for a temporary target data store, but we do need to create the structure for a permanent target data store.

  • Need to retrieve the data for the current date.

    Hello

    I have a table from which I retrieve information using this command:

    select ta_acct, shift, created_on from track_alerts;

    Technicolor              A   24-MAR-14
    Technicolor              A   24-MAR-14
    Technicolor              A   24-MAR-14
    Technicolor              A   24-MAR-14
    Manitoba Telecom Systems A   24-MAR-14
    Technicolor              A   24-MAR-14

    I used this statement to retrieve the data for the given date.

    select ta_acct, shift, created_on from track_alerts where created_on = '24-MAR-14';

    It's not retrieving any data.

    Need help.

    Kind regards

    Prasad K T

    984002170

    Prasad K T wrote:

    Yes the created data type is date.

    CREATED_ON DATE

    Thanks Partha, it works now.

    select ta_acct, shift, created_on from track_alerts where shift = :shift and TRUNC(created_on) = TO_DATE('24-MAR-2014','DD-MON-YYYY');

    However, I made a small change to my query.

    select ta_acct, shift, created_on from track_alerts where shift = :shft and TRUNC(created_on) = TO_DATE((select sysdate from dual), 'MON-DD-YYYY');

    This statement does not work.

    of course not...

    first: sysdate already returns a date, so no conversion is needed here

    and

    second: SYSDATE includes the time, so your statement should look like this:

    select ta_acct, shift, created_on from track_alerts where shift = :shift and TRUNC(created_on) = trunc(sysdate)

    or

    select ta_acct, shift, created_on from track_alerts where shift = :shft and created_on >= trunc(sysdate) and created_on < trunc(sysdate) + 1

    HTH
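    The difference between the two predicates can be sketched in Python (the sample timestamps are made up for the illustration):

```python
from datetime import datetime, timedelta

rows = [
    datetime(2014, 3, 24, 0, 0, 0),   # midnight on the target day
    datetime(2014, 3, 24, 9, 30, 0),  # same day, nonzero time
    datetime(2014, 3, 25, 8, 0, 0),   # next day
]

day = datetime(2014, 3, 24)

# Plain equality only matches the midnight row,
# like "created_on = '24-MAR-14'" in SQL.
eq_count = sum(1 for r in rows if r == day)

# The half-open range keeps every time on that day, like
# "created_on >= trunc(sysdate) and created_on < trunc(sysdate) + 1".
range_count = sum(1 for r in rows if day <= r < day + timedelta(days=1))

print(eq_count, range_count)
```

    The range form has the added benefit of not wrapping the column in TRUNC, so an index on created_on can still be used.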

  • The number of heartbeat datastores for the host is 0, which is less than required: 2

    Hello

    I am having trouble creating my DRS cluster + storage DRS; I have 3 ESXi 5.1 hosts for the task.

    First I created the cluster, no problem with that, then the storage DRS was created, and now I see this in the Summary tab:

    "The number of heartbeat datastores for the host is 0, which is less than required: 2."

    I searched the web, and there are similar problems when people have only a single datastore (the one that came with ESXi) and need to add another, but in my case... vCenter doesn't detect any...

    In the storage views I see the datastore (VMFS), but for some strange reason the cluster does not.

    In order to reach the minimum number of datastores (2), can I create an NFS share and mount it on the 3 ESXi hosts? Would vCenter consider that a valid config?

    Thank you

    You probably only have local datastores, which are not what HA requires for this feature (datastore heartbeating) to work properly.

    You will need either 2 iSCSI, 2 FC or 2 NFS volumes, or a combination of any of them, for this feature to work. If you don't want to use this feature, you can also turn it off:

    http://www.yellow-bricks.com/2012/04/05/the-number-of-vSphere-HA-heartbeat-datastores-for-this-host-is-1-which-is-less-than-required-2/

  • Difference in the number of records for the same date - 11gR2

    Guys - 11gR2 on Windows 2005, 64-bit.

    BILLING_RECORD_KPN_ESP - a monthly partitioned table.
    BILLING_RECORD_IDX#DATE - a local index on "charge_date" in the table above.

    SQL> select /*+ index(BILLING_RECORD_KPN_ESP BILLING_RECORD_IDX#DATE) */
                trunc(CHARGE_DATE) CHARGE_DATE,
                count(1) Record_count
         from   "RATOR_CDR"."BILLING_RECORD_KPN_ESP"
         where  CHARGE_DATE = '20-JAN-2013'
         group by trunc(CHARGE_DATE);

    CHARGE_DATE        RECORD_COUNT
    ------------------ ------------
    20-JAN-13                  2401   ->> some records here

    ->> Here I can see only 2401 records for Jan/20. But the query below shows 192610 for the same date.

    Why is this difference in the number of records?

    SQL> select /*+ index(BILLING_RECORD_KPN_ESP BILLING_RECORD_IDX#DATE) */
                trunc(CHARGE_DATE) CHARGE_DATE,
                count(1) Record_count
         from   "RATOR_CDR"."BILLING_RECORD_KPN_ESP"
         where  CHARGE_DATE > '20-JAN-2013'
         group by trunc(CHARGE_DATE)
         order by trunc(CHARGE_DATE);

    CHARGE_DATE        RECORD_COUNT
    ------------------ ------------
    20-JAN-13                192610   ->> more records here
    21-JAN-13                463067
    22-JAN-13                520041
    23-JAN-13                451212
    24-JAN-13                463273
    25-JAN-13                403276
    26-JAN-13                112077
    27-JAN-13                 10478
    28-JAN-13                 39158

    Thank you!

    Because in the second example you also select rows that have a nonzero time component.

    The first example selects only the rows whose time is exactly 00:00:00.

    (By the way, you should ask questions like this in the SQL forum.)

  • ESXi fails to install - 'No place on disk for the dump data' - any ideas?

    It is an older server.

    Data sheet:

    VA Linux 2200 series

    Intel P3 700 MHz x 2

    768 MB of RAM

    9.1 GB SCSI x 3 (configured in RAID 5 now)

    I have attached a picture of the error message and typed most of it below.

    NOT_IMPLEMENTED /build/mts/release/bora-123629/bora/vmkernel/sched/sched.c:5075

    frame=0x1402ce0 ip=0x62b084 cr2=0x0 cr3=0x3000 cr4=0x20

    es=0xffffffff ds=0xffffffff fs=0xffffffff gs=0xffffffff

    eax=0xffffffff ebx=0xffffffff ecx=0xffffffff edx=0xffffffff

    ebp=0x1402e3c esi=0xffffffff edi=0xffffffff err=-1 eflags=0xffffffff

    *0:0/<NULL> 1:0/<NULL>

    0x1402e3c: stack: 0x830c3f, 0x1402e58, 0x1402e78

    VMK uptime: 0:00:00:00.026 TSC: 222483259709

    No place on disk for the dump data

    Waiting for debugger... (world 0)

    Debugger is listening on serial port...

    Press ESC to enter the local debugger

    This could be a simple problem or not, I'm not sure. I have already spent several hours trying to reconfigure the drives to get the installer to recognize them.

    Any help is greatly appreciated.

    I agree with Matt - the hardware may simply be too old.

    If you find this or any other answer useful, please consider awarding points by marking the answer correct or helpful.

  • Error R32762: #Error: Error looking for resource specification. Need 'resource' or 'type'.

    I was porting some of my CS5 developments and used the following procedure:

    -create new project using dollyX

    -Add the source code

    -fix errors

    But now I encountered an error that I can't understand.

    Error 1 error R32762: #Error: Error looking for resource specification. Need 'resource' or 'type'. C:\Program Files\Adobe\CS5\InDesign product SDK\source\public\interfaces\architecture\IPMUnknown.h 47

    Any suggestions?

    I'm running a Windows 7 machine with Visual Studio 2008.

    Hello Pectora:

    Have you checked that your .fr file is getting compiled with the #defines suitable for the appropriate build (i.e. debug defines for the debug build, etc.), and that all the resource file names agree (such as the pre- and post-build steps and the resources)?

  • Not able to start cache agent for the requested data store

    Hello

    This is my first attempt at TimesTen. I am running TimesTen on the same Linux host (RHES 5.2) that runs Oracle 11gR2. The TimesTen version is:

    TimesTen Release 11.2.1.4.0


    Trying to create a simple cache.

    The DSN entry for ttdemo1 in .odbc.ini is as follows:

    [ttdemo1]
    Driver=/home/oracle/TimesTen/TimesTen/lib/libtten.so
    DataStore=/work/oracle/TimesTen_store/ttdemo1
    PermSize=128
    TempSize=128
    UID=hr
    OracleId=MYDB
    DatabaseCharacterSet=WE8MSWIN1252
    ConnectionCharacterSet=WE8MSWIN1252

    Using ttIsql I connect:

    Command> connect "dsn=ttdemo1;pwd=oracle;oraclepwd=oracle";
    Connection successful: DSN=ttdemo1;UID=hr;DataStore=/work/oracle/TimesTen_store/ttdemo1;DatabaseCharacterSet=WE8MSWIN1252;ConnectionCharacterSet=WE8MSWIN1252;DRIVER=/home/oracle/TimesTen/timesten/lib/libtten.so;OracleId=MYDB;PermSize=128;TempSize=128;TypeMode=0;OracleNetServiceName=MYDB;
    (Default setting AutoCommit=1)
    Command> call ttcacheuidpwdset('ttsys','oracle');
    Command> call ttcachestart;
    10024: Could not start cache agent for the requested data store. Could not initialize Oracle Environment Handle.
    The command failed.

    The following text appears in tterrors.log:

    15:41:21.82 Err : ORA: 9143: ora-9143--1252549744-xxagent03356: Database: TTDEMO1 OCIEnvCreate failed. Return code -1
    15:41:21.82 Err : 7140: oraagent says it has failed to start: Could not initialize Oracle Environment Handle.
    15:41:22.36 Err : 7140: TT14004: TimesTen daemon creation failed: could not spawn oraagent for '/work/oracle/TimesTen_store/ttdemo1': Could not initialize Oracle Environment Handle

    For what reasons could the daemon fail to spawn the agent? FYI, the environment variables are set as follows:

    ORA_NLS33=/u01/app/oracle/product/11.2.0/db_1/ocommon/nls/admin/data
    ANT_HOME=/home/oracle/TimesTen/ttdemo1/3rdparty/ant
    CLASSPATH=/home/oracle/TimesTen/ttdemo1/lib/ttjdbc5.jar:/home/oracle/TimesTen/ttdemo1/lib/orai18n.jar:/home/oracle/TimesTen/ttdemo1/lib/timestenjmsxla.jar:/home/oracle/TimesTen/ttdemo1/3rdparty/jms1.1/lib/jms.jar:.
    oracle@rhes5:/home/oracle/TimesTen/ttdemo1/info% echo $LD_LIBRARY_PATH
    /home/oracle/TimesTen/ttdemo1/lib:/home/oracle/TimesTen/ttdemo1/ttoracle_home/instantclient_11_1:/u01/app/oracle/product/11.2.0/db_1/lib:/u01/app/oracle/product/11.2.0/db_1/network/lib:/lib:/usr/lib:/usr/ucblib:/usr/local/lib


    Cheers

    I see no problem there. The ENOENTs are harmless, because it does locate libtten here:

    23302 open("/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/lib/libtten.so", O_RDONLY) = 3

    Presumably it does the same thing while trying to find libttco.so?

    23302 open("/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/ttoracle_home/instantclient_11_1/libttco.so", O_RDONLY) = -1 ENOENT (No such file or directory)

    Thanks for taking the trace. I would really like to have a look at the complete file - can you send it to me?

  • Determine the pay period based on a specific date

    I'm developing an application in APEX and need to determine the pay period based on an entered date. Our pay period is 14 days, and pay period 1 begins after the last pay period of the previous year, which can start on any day in December (so, for this year, pay period 1 began January 11, because the last pay period of 2014 began Dec. 28 and ran through January 10). Some years we have 26 pay periods, and in a few years we have 27. Does anyone have ideas on how to derive the pay period number for something like this using PL/SQL?

    If you store the pay periods in a table with, say, four columns - startdate, enddate, year and periodnumber - you can easily query that table.

    Kind regards

    Mark
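    The period arithmetic itself is simple once you know the start of pay period 1 for the relevant year. A minimal sketch in Python (the function name is made up; period1_start would come from a lookup such as the table suggested above):

```python
from datetime import date

def pay_period(d: date, period1_start: date) -> int:
    """Return the 1-based 14-day pay period number for date d.

    period1_start is the first day of pay period 1 for the year
    containing d (looked up, e.g., from a payperiods table).
    """
    days = (d - period1_start).days
    if days < 0:
        raise ValueError("date falls before pay period 1")
    return days // 14 + 1

# With the question's dates: pay period 1 of 2015 began January 11.
p1 = date(2015, 1, 11)
print(pay_period(date(2015, 1, 11), p1))  # first day of period 1
print(pay_period(date(2015, 1, 24), p1))  # last day of period 1
print(pay_period(date(2015, 1, 25), p1))  # first day of period 2
```

    The same integer arithmetic (days since period-1 start, divided by 14) translates directly to PL/SQL with date subtraction and TRUNC.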

  • Issue updating provider-specific data in the config spec of a distributed virtual portgroup in vCenter 5.5

    Hi all

    We are trying to update provider-specific data in the config spec for a distributed virtual port group. We are able to reconfigure the dvportgroup the first time, and the data updates are visible in the MOB (QueryDvsByUuid). But the second time we try to update the provider-specific data, we receive the vCenter error below.

    Could not complete the operation due to concurrent modification by another operation.


    The environment we use is as follows:

    vSphere Client 5.5, ESXi 5.5 and vCenter 5.5.


    We use VI Java to update the provider-specific configuration of the dvportgroup (ReconfigureDVPortgroup_Task). When we looked up this error, we found the thread below:

    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2016329

    But we already reconfigure the dvportgroup as the root user in vCenter.

    Please let us know if I'm doing something wrong.

    Thank you and best regards,

    Aristides Maximilian


    Found a similar question in the communities,

    https://communities.VMware.com/message/2342966#2342966

    The issue was a bad configVersion.

    Using the latest configVersion (the real value of configVersion fetched from the portgroup) in ReconfigurePortgroup_Task worked fine.

  • Java code for the current date plus 30 days

    I searched this forum and Google to see if I could get something to work, with negative results. I created a form in LiveCycle 8.2, and I'm trying to set a due date 30 days from today's date. When the form is opened I display the date and time, read-only, so I would also like the due date to display automatically. I don't know anything about JavaScript, and what I have used so far I was able to copy and paste into my form. Any help would be greatly appreciated.

    You're overcomplicating it. All you need is:

    $ = Num2Date(Date() + 30, "MM/DD/YYYY")
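    For reference, the same "today plus 30 days" calculation sketched in Python (the example date is the one used elsewhere in this digest; the function name is just for illustration):

```python
from datetime import date, timedelta

def due_date(today: date) -> str:
    """Return the date 30 days after 'today', formatted MM/DD/YYYY."""
    return (today + timedelta(days=30)).strftime("%m/%d/%Y")

print(due_date(date(2015, 8, 5)))  # 09/04/2015
```

    Like the FormCalc Num2Date/Date() pair above, the work splits into two steps: add 30 days in date arithmetic, then format the result as a display string.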

  • Get all the VMs in a given cluster on a specific datastore

    Hello

    I need help filtering all the virtual machines in a given cluster that are located on specific datastores. I got this far:

    List of all the virtual machines in a given cluster:

    Get-Cluster CLUSTER-TEST | Get-VM

    List all the virtual machines on given datastores:

    Get-VM -Datastore 'TEST-DATASTORE'

    But when I try to combine these, I still get all the virtual machines (from other clusters too) on this datastore:

    Get-Cluster CLUSTER-TEST | Get-VM -Datastore 'TEST-DATASTORE'

    I also found this command:

    Get-Cluster CLUSTER-TEST | Get-VM | % {@{$_.Name = $_.DatastoreIdList | %{(Get-View -Property Name -Id $_).Name}}}

    It lists the datastores for all the virtual machines in the given cluster. Now I need to filter that down to a specific datastore.

    Any assistance appreciated =)

    With a nested Where clause and a not-so-nice PS trick, it seems to work:

    $ds = Get-Datastore Test* | %{$_.ExtensionData.MoRef}
    
    Get-Cluster Test-Cluster | Get-VM |
    where {$vm = $_; $ds | where {$vm.DatastoreIdList -contains $_}}
    
  • Query for the cumulative data

    HI friends,


    I need output like this.

    Frequency   Percent   Cumulative Frequency   Cumulative Percent
    4468        0.91      4468                   0.91
    21092       4.31      25560                  5.23
    57818       11.82     83378                  17.05
    6274        1.28      89652                  18.33

    I use Oracle 9i.
    My output data looks like that, and I need to write the query for 3 columns (percent, cumulative frequency and cumulative percent).

    1: The formula for the percent column is (frequency / total frequency) * 100.
    2: The cumulative frequency column is a running total of the frequency column.
    3: The cumulative percent column is a running total of the percent column.

    What should the analytic function be, and how do I write the query?

    Thank you
    Lony

    Hi, Lony,

    SUM is the function that adds up numbers from several different rows:

    SELECT  frequency
    ,     percent
    ,     SUM (frequency) OVER (ORDER BY  x)     AS cumulative_frequency
    ,     SUM (percent)      OVER (ORDER BY  x)     AS cumulative_percent
    FROM     table_x
    ;
    

    In your data, how do we know that the row with frequency = 4468 comes first, and that the row with frequency = 21092 comes after it? You must have another column (or expression) that determines what words like 'next', 'first' and 'after' mean. I called it x in the query above.

    I hope that answers your question.
    If not, post a small set of sample data (CREATE TABLE and INSERT statements, relevant columns only) for all of the tables involved, and also post the results you want from that data.
    Explain, using specific examples, how you get those results from that data. Be especially clear about what makes one row come before or after another.
    Always say which version of Oracle you are using.
    See the forum FAQ {message:id=9360002}

    It looks like you posted another copy of the same question: {message:id=10271688}. Maybe it's not your fault; this site can be flaky like that. No matter whose fault it is, mark the other thread as "Answered" immediately, so people only have to look in one place for answers, and others won't waste their time answering a question that is already answered elsewhere.
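    The running totals that the analytic SUM ... OVER (ORDER BY x) computes can be sketched in Python with itertools.accumulate (using the figures from the question; small cumulative-percent differences are rounding in the original table):

```python
from itertools import accumulate

# Frequency and percent values from the question, in row order
# (the "x" ordering column in the SQL answer).
freqs = [4468, 21092, 57818, 6274]
pcts = [0.91, 4.31, 11.82, 1.28]

cum_freq = list(accumulate(freqs))  # like SUM(frequency) OVER (ORDER BY x)
cum_pct = list(accumulate(pcts))    # like SUM(percent)   OVER (ORDER BY x)

for row in zip(freqs, pcts, cum_freq, cum_pct):
    print(row)
```

    Each output row carries the sum of everything up to and including itself, which is exactly what the windowed SUM produces per row.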

  • ORA-01111 name for data file 10 is unknown

    Hello
    I created two new tablespaces (11 and 12), with 1 data file in each of them, in my primary database.
    I forgot to set the parameter standby_file_management=AUTO on the standby server,
    so the standby server created the data file for tablespace 11 as D:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\UNNAMED00010,
    although there is no "UNNAMED00010" file in D:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\.
    In v$tablespace and v$datafile, I can see that the entry was created in the standby server's control file.
    Here is the list of data files on my standby server:

    TS#   NAME
    ----- --------------------------------------------------------
    0     G:\OFSDB\OFS4\ORADATA\OFS4\SYSTEM01.DBF
    1     G:\OFSDB\OFS4\ORADATA\OFS4\UNDOTBS01.DBF
    2     G:\OFSDB\OFS4\ORADATA\OFS4\SYSAUX01.DBF
    4     G:\OFSDB\OFS4\ORADATA\OFS4\USERS01.DBF
    6     G:\OFSDB\OFS4\ORADATA\OFS4\DATA_SE.ORA
    7     G:\OFSDB\OFS4\ORADATA\OFS4\INDEX_SE
    8     G:\OFSDB\OFS4\ORADATA\OFS4\FLOW_1.DBF
    9     G:\OFSDB\OFS4\ORADATA\OFS4\CJHIT01.DBF
    10    G:\OFSDB\OFS4\ORADATA\OFS4\DBMON_TS.DBF
    11    D:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\UNNAMED00010

    Does anyone know how to fix this problem?

    Thank you

    Another way - try this:

    1. Put the tablespace that holds the unnamed data file into backup mode.

    SQL> alter tablespace mytbs begin backup;

    2. Copy the missing data file (an OS copy) to the standby data file location.

    3. Run the end backup command.

    SQL> alter tablespace mytbs end backup;

    4. Create a control file for the standby database.

    SQL> alter database create standby controlfile as 'D:/oracle/stdcontrol.ctl';

    5. Ship the archived logs to the standby server.

    On the standby:

    1. Cancel managed recovery.
    2. Shut down and swap in the new control file.
    3. Start the standby db in mount stage.
    4. Recover the standby database.

    Thank you

  • Alert log entry for a missing data file

    I'm trying a simple recovery scenario: remove a data file belonging to a normal tablespace while the database is running, where the data file has already been backed up.

    At first, the checkpoint completes normally (I guess Oracle still holds the file's inode handle). When I select from dba_data_files, the error occurs:
    SQL> alter system checkpoint;
    
    System altered.
    
    SQL> select count(*) from dba_data_files;
    select count(*) from dba_data_files
                         *
    ERROR at line 1:
    ORA-01116: error in opening database file 10
    ORA-01110: data file 10: '/u01/oradata/data01.dbf'
    ORA-27041: unable to open file
    Linux Error: 2: No such file or directory
    Additional information: 3
    To my surprise, nothing was recorded in the alert log, and no trace files were generated in udump. And Oracle refused to restore the data file while it was running.

    The message appeared in the alert log when I tried to shut down the database. After a shutdown abort (Oracle refused to shutdown immediate) and restarting the instance, I was able to restore and recover the data file.

    Is this the expected behavior? If so, how would you detect missing files while the db is running?


    Oracle: 9.2.0.6, OS: RHEL3

    1. It was not detected by the checkpoint caused by a logfile switch because the file is not actually missing. As others - indeed, you! - have noted, on Unix/Linux you can remove a file at any time, but if the file is in use, its inodes stay allocated and the process using it continues to see it just fine - because, at the inode level, it absolutely IS still there. By the way, there is no need to do a logfile switch: as far as data files are concerned, it does nothing beyond the alter system checkpoint command you already tried.

    I used to do demos of how to recover from the complete loss of a controlfile in Oracle University classes: I would issue a burst of rm *.ctl commands and then dramatically issue the checkpoint command... and nothing happened. Very demoralizing the first time I did it! Issue a startup force, however - at that point the inodes are released, the files are actually deleted, and the startup falls over in nomount state. Bouncing the instance was the only way I could get the database to protest the removal of a data file, too.

    2. No, if the file had disappeared in a way Oracle could detect, it wouldn't matter whether the file was empty, because what Oracle cares about is the presence/absence of the datafile headers. They need to be updated even if nothing else is.

    3. It would be in the alert log if SMON had detected the lost file at startup (one of its jobs is, explicitly, to verify the existence of all the files mentioned in the control file). CKPT would not raise an alert because, as far as it is concerned, everything is fine - the inodes are still held, after all. Your attempts to create tables and so forth inside the lost file don't generate visible errors (I think) because at that point it is your server process trying to do things to the file, not CKPT or another pre-existing, constantly-running background process.

    4. It is specific to Unix and anything that, like Linux, is a variant of it.
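    The inode behaviour in point 1 is easy to reproduce outside Oracle. A minimal Python sketch (Unix/Linux only; on Windows the unlink of an open file would fail):

```python
import os
import tempfile

# Create a file, keep it open, then "rm" it while the handle is live.
fd, path = tempfile.mkstemp()
f = os.fdopen(fd, "w+")
f.write("still here")
f.flush()

os.unlink(path)                  # like rm on an in-use datafile
gone = not os.path.exists(path)  # the directory entry is gone...

f.seek(0)
content = f.read()               # ...but the inode, and the data, survive
f.close()                        # only now can the blocks be freed

print(gone, content)
```

    This is exactly why the running instance kept reading and checkpointing the "deleted" datafile: its open file descriptor pinned the inode until the process let go.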
