Timestamp of last

My table has this structure:

NAME      TYPE
IDENT_NR  VARCHAR2(40) NOT NULL
DATUM     DATE NOT NULL
ZEIT      NUMBER(4) NOT NULL
.
.
What I want is to select the last time on the last day:

SELECT   ident_nr,
         SUBSTR(LPAD(zeit, 4, '0'), 1, 2) || SUBSTR(LPAD(zeit, 4, '0'), 3, 2) zeit,
         TO_DATE(datum, 'dd.mm.yyyy') datum
FROM     v_scans_all
WHERE    datum BETWEEN TO_DATE('20.12.2010', 'dd.mm.yyyy')
                   AND LAST_DAY(TO_DATE('21.12.2010', 'dd.mm.yyyy'))
AND      ident_nr = '00340434121154000033'
GROUP BY ident_nr, datum, zeit;

The result is:

IDENT_NR                                 ZEIT DATUM
---------------------------------------- ---- --------
00340434121154000033                     0705 20.12.10
00340434121154000033                     0706 20.12.10
00340434121154000033                     0712 21.12.10
00340434121154000033                     1609 21.12.10

I only need the last row (21.12.10 / 1609), because that is really the last event. It must be something like a combination of date and time. Any idea?
Thank you very much. Joerg

Edited by: user5116754 on 21.12.2010 12:07

Edited by: user5116754 on 21.12.2010 12:11

Hello

user5116754 wrote:
Thank you for quick help.
Unfortunately, I have no possibility to change the version of the client. I work here for only a short project :(

It's ridiculous! The upgrade is something that needs to be done anyway; it would help your work now, it will have lasting benefits after this short project is completed, and it only takes a few minutes.

So my question is: is there an option to combine the fields "datum" and "zeit" in order to get the last scan event?

What is wrong with the in-line view solution I posted earlier?

such as:

MAX(Datum || Zeit)

Almost; you would need to put the date into a format that sorts correctly (for example, so that January comes before February, even though 'J' comes after 'F').
You also need to format the time so that it sorts correctly (for example, so that 959 comes before 1000, even though '9' comes after '1').

MAX (  TO_CHAR (datum, 'YYYYMMDD HH24MISS')     -- Assuming datum is always in the Common Era
    || TO_CHAR (zeit, '9999')                -- Assuming zeit >= 0
    )

But you still have the problem of how to get the ident_nr linked to that final value. I don't see how converting datum and zeit to a different data type will help you with that.

...or is it possible to format a date into a sequential number (...as Excel does...)?

Yes, you could convert it to a single NUMBER. Depending on exactly what you want, you could call TO_NUMBER on the result of "TO_CHAR ... || TO_CHAR ...", as shown above. It would just be more work for no purpose.
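To tie the ident_nr back to the latest datum/zeit combination, one option is an in-line view with an analytic ranking. This is only a sketch, untested, assuming the v_scans_all view and the column names from the question:

```sql
-- Rank all rows for one ident_nr by datum and zeit together,
-- then keep only the top-ranked (most recent) row.
SELECT ident_nr, datum, zeit
FROM  (SELECT ident_nr, datum, zeit,
              ROW_NUMBER() OVER (ORDER BY datum DESC, zeit DESC) AS rn
       FROM   v_scans_all
       WHERE  ident_nr = '00340434121154000033')
WHERE rn = 1;
```

Ordering by the two columns directly avoids having to build a sortable string at all, and the whole row (including ident_nr) comes back in one query.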

Tags: Database

Similar Questions

  • Changes in git when creating child classes

    Hi all

    An LVOOP question brought on by using git for source code control in a project.

    I have a repo that contains my base classes, classes that are used by several projects.  When I build a new system out of these base classes, I noticed that the act of creating methods in a child class records a change to the parent's methods in git.  Why is this? Of course, there is no functional change to the parent.  Is there a way to stop these kinds of changes from happening, or a way to better account for them in my source code control?

    Thank you.

    Hi Scott,

    What I understand about git is that it records that a change has occurred if the file has been modified.

    I am currently trying to determine whether this issue occurs because of LabVIEW or git. LabVIEW has a built-in Tools > Compare > Compare VIs... function. I tried to recreate your behavior by creating a simple parent and child.  I duplicated the parent method VI in Windows, then overrode the method in the child. Finally, I compared the parent method with the original duplicated method to see if LabVIEW registered any modifications.  I found no differences.  When I check the last-modified timestamp of the parent's methods in Windows, I get an earlier date.  To me, this indicates that the file does not change, and that git is doing something weird.

    Could you please try this to double-check?

    Kind regards

  • AVDF 12.1.2 integration with the DBMS_AUDIT_MGMT package to automate purging of audit records

    I have a question about this part of the Audit Vault and Database Firewall Administrator's Guide, Release 12.1.2:

    -Start quote-

    Scheduling an Automatic Purge Job

    Oracle AVDF is integrated with the DBMS_AUDIT_MGMT package on an Oracle database. This integration automates the purging of audit records from the AUD$ and FGA_LOG$ tables, and of the .aud and .xml operating system files, after they have been successfully collected into the Audit Vault Server repository.

    Once the purge completes, the Audit Vault Agent automatically sets a timestamp on the audit data that has been collected. Therefore, you must set the USE_LAST_ARCH_TIMESTAMP property to true to ensure that the right set of audit records is purged. You do not need to manually set a purge job interval.

    -End quote-

    According to the documentation above, how does AVDF's integration bring about this automation?

    Hello

    When you configure an audit trail in the AV Server, say an AUD$ table trail, once it has collected the audit data it automatically sets the last archive timestamp on the secured target database (you can check it in the DBA_AUDIT_MGMT_LAST_ARCH_TS view).

    However, the trail (or the AV Server itself) does not purge the audit data already collected.

    You have to clean this data with the DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL procedure; for example, for the AUD$ table only:

    BEGIN
      DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
        audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
        use_last_arch_timestamp => TRUE);
    END;
    /

    You can simply run this procedure via a job, depending on how often you want to clean up and up to what point in time. You don't need to worry about the last archive timestamp.
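    As a sketch only (the job name and schedule below are invented for illustration, and this is untested), the cleanup shown above could be wrapped in a DBMS_SCHEDULER job:

    ```sql
    -- Hypothetical daily job that runs the audit trail cleanup every night.
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'DAILY_AUD_CLEANUP',        -- made-up name
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN
                              DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
                                audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
                                use_last_arch_timestamp => TRUE);
                            END;',
        repeat_interval => 'FREQ=DAILY;BYHOUR=2',      -- every night at 02:00
        enabled         => TRUE);
    END;
    /
    ```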

  • How to get the status of InDesign Server

    I use InDesign Server.
    I want to detect whether the server is frozen or has crashed. Is there a command to get console output continuously? I came across the heartbeatupdateinterval command-line parameter, but I don't know how to use it, because it is not well documented.

    Or is there another way to tell InDesign Server "give me an update (console output or anything else)" to check the health of InDesign Server?

    heartbeatUpdateInterval is a server parameter that indicates the time interval (in seconds) at which the last-active timestamp of InDesign Server is updated on the console.

    Consider using the LBQ module with status-related commands like:

    - GetVersion

    - JobStatus

    - QueueStatus

    - IDSStatus

    and also some administrative commands:

    - Ping

    - Kill

    The ping command is what might interest you:

    It would give you the server's last-active timestamp. This command requires no parameters.

    So, you could have your client installation periodically hit the Ping command via LBQ for health monitoring.

    You could hit more of the commands shown above for more details on the state of the server.

  • How to update a custom scheduled task parameter?

    Hello

    I am developing a scheduled task that has a parameter called 'Last Run Timestamp'.  I want to use this field to limit my reconciliation events to those that happened after the last run timestamp.

    From my Java code, how can I update this field with sysdate / the timestamp of the last run?

    I'm running on OIM 11gR2PS2.

    Thank you

    Khanh

    Example code using SchedulerService:

    SimpleDateFormat time = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss zzz");

    Date start = new Date();

    LOGGER.log(Level.INFO, "Start time: " + time.format(start));

    // Update the scheduled task's Timestamp parameter
    JobDetails job = getSchedIntf().getJobDetail(getName());

    HashMap attributes = job.getAttributes();

    JobParameter jobparam = (JobParameter) attributes.get("Last Run Timestamp");

    String timestamp = time.format(start);

    jobparam.setValue(timestamp);

    attributes.put("Last Run Timestamp", jobparam);

    job.setAttributes(attributes);

    getSchedIntf().updateJob(job);

    -Kevin

  • Flash_recovery_area 11g Windows backup

    Hi all
    I apologize if this is a repeat, but my search of the forums did not turn up a completely explicit answer to this question.

    I'm running 11.2.0.3 64-bit on Windows Server 2008 R2. I have enabled Archivelog mode and scheduled a daily (incremental) database backup through Enterprise Manager Database Control.
    I have two locations configured for the archived Redo Log destination: the flash_recovery_area, and another on the C drive. The backups have worked perfectly for several weeks.

    In my flash_recovery_area, I have the following files:
    ARCHIVELOG
    AUTOBACKUP
    BACKUPSET
    DATAFILE

    All the AUTOBACKUP, BACKUPSET and DATAFILE backup files have a last-modified timestamp equal to the last backup time. However, the ARCHIVELOG files (as well as those in the secondary location on C) have timestamps that appear random.

    My question is: can I make file copies via BackupExec of all files in the flash_recovery_area, as well as the secondary archived Redo Log destination, without interrupting the live, running database - assuming I schedule the BackupExec file-copy job to run while the Oracle database backup is not running?

    Thanks for any help.
    -Ian

    Ok.

    Those files are cold then.

    Then you're OK.

    It's not uncommon to take an RMAN backup to disk and then make a bunch of extra copies of it.

    Best regards

    mseberg

  • @@dbts equivalent

    Hello

    I'm converting some Transact-SQL queries to PL/SQL.
    SQL Server has a variable named @@DBTS that returns the last-used timestamp for the whole database, and it is guaranteed to be unique.
    I was wondering if there is something similar that I could use with Oracle.

    Thank you!

    I don't know of anything that could be considered unique across the whole database (never tested that).

    If you want a unique identifier, use a sequence.
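    As a minimal sketch (the sequence, table and column names here are made up for illustration):

    ```sql
    -- A sequence hands out database-wide unique numbers,
    -- which covers the usual row-versioning use of @@DBTS.
    CREATE SEQUENCE rowversion_seq;

    -- Stamp each changed row with the next value:
    UPDATE my_table
    SET    version_nr = rowversion_seq.NEXTVAL
    WHERE  id = 42;
    ```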

  • 11gR2: periodic audit trail purge question

    Hello

    I have a question about automatically and periodically purging the audit trail in 11g.

    I went through the docs and went ahead and scheduled a periodic purge of my audit trail based on the archive timestamp.

    Here are the steps I did:

    (1) run dbms_audit_mgmt.init_cleanup
    (2) run dbms_audit_mgmt.set_last_archive_timestamp as follows:
    SQL>begin
     2  dbms_audit_mgmt.set_last_archive_timestamp(
     3  audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
     4  last_archive_time => SYSTIMESTAMP-365);
     5  end;
     6  /
    (3) created a purge job with dbms_audit_mgmt.create_purge_job that uses the timestamp set by set_last_archive_timestamp


    That went well, and the first purge job deleted all records older than 365 days. But I expected the timestamp to be incremented somehow, so that the recurring job always deletes records older than systimestamp - 365.
    But this does not happen, and the date initially set with SYSTIMESTAMP - 365 remains at the original value.
    Am I supposed to run dbms_audit_mgmt.set_last_archive_timestamp every time before the purge job runs?

    Thanks a lot for any advice / experience on this. :)

    Hi AJ,
    set_last_archive_timestamp is meant to be the last step of manually archiving the audit data. The doc states:

    This procedure sets a timestamp that indicates when the audit records were last archived. The audit administrator provides the timestamp to be attached to the audit records. The CLEAN_AUDIT_TRAIL procedure uses this timestamp to decide which audit records should be deleted.

    So you have to run set_last_archive_timestamp on a regular basis, thereby confirming that the audit data has been archived.

    CREATE_PURGE_JOB calls CLEAN_AUDIT_TRAIL, which checks the archive timestamp from the last call of set_last_archive_timestamp.

    In short: yes, call set_last_archive_timestamp each time before the purge job runs, or the purge job will not purge the more recent audit data.

    Regards
    Martin
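
    A sketch of that idea (untested; the job name and schedule are invented for illustration) using DBMS_SCHEDULER to slide the archive timestamp forward every day, so the existing purge job always keeps a rolling 365-day window:

    ```sql
    -- Hypothetical daily job that advances the last archive timestamp
    -- before the purge job runs.
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'ADVANCE_AUDIT_ARCH_TS',    -- made-up name
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN
                              DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
                                audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
                                last_archive_time => SYSTIMESTAMP - 365);
                            END;',
        repeat_interval => 'FREQ=DAILY;BYHOUR=1',      -- run before the purge job
        enabled         => TRUE);
    END;
    /
    ```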

  • Database dead XE?

    Hello. I'm not very knowledgeable when it comes to DBA work, but I hit a snag and would welcome a little advice. I have an Oracle XE installation on my home computer to track sales and other revenue. Recently, I could not connect to APEX and started looking into it. When I tried to connect with SQL*Plus, I would get the message:

    ERROR:
    ORA-01033: ORACLE initialization or shutdown in progress

    Then I stopped and started the DB, checked that the listener service was running, and still had the issue. I checked my alert_xe.log file, and it's huge. At the bottom I've included a fragment of my log file with the most recent timestamps.

    I have a script that runs on my computer that does an EXP of my schema every night, but if I try an IMP, I get the same message (which makes sense since my DB apparently isn't running).

    What would be your suggestions? I wanted some advice before I uninstall and reinstall, then import my dmp backups.

    Thanks to all the necromancers who can help me bring this back from the dead!

    Kerry.








    Dump of file E:\oraclexe\app\oracle\admin\xe\bdump\alert_xe.log
    Thu Oct 29 20:40:40 2009
    ORACLE V10.2.0.1.0 - Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Windows XP Version V5.1 Service Pack 2
    CPU                 : 1 - type 586
    Process Affinity    : 0x00000000
    Memory (Avail/Total): Ph:211M/511M, Ph+PgF:1012M/1247M, VA:1945M/2047M
    Thu Oct 29 20:40:40 2009
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 2
    Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =10
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    Thu Oct 29 20:40:52 2009
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.1.0.
    System parameters with non-default values:
      sessions                 = 49
      __shared_pool_size       = 104857600
      __large_pool_size        = 8388608
      __java_pool_size         = 4194304
      __streams_pool_size      = 0
      spfile                   = E:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\SERVER\DBS\SPFILEXE.ORA
      sga_target               = 146800640
      control_files            = E:\ORACLEXE\ORADATA\XE\CONTROL.DBF
      __db_cache_size          = 25165824
      compatible               = 10.2.0.1.0
      db_recovery_file_dest    = E:\oraclexe\app\oracle\flash_recovery_area
      db_recovery_file_dest_size= 10737418240
      undo_management          = AUTO
      undo_tablespace          = UNDO
      remote_login_passwordfile= EXCLUSIVE
      dispatchers              = (PROTOCOL=TCP) (SERVICE=XEXDB)
      shared_servers           = 4
      job_queue_processes      = 4
      audit_file_dest          = E:\ORACLEXE\APP\ORACLE\ADMIN\XE\ADUMP
      background_dump_dest     = E:\ORACLEXE\APP\ORACLE\ADMIN\XE\BDUMP
      user_dump_dest           = E:\ORACLEXE\APP\ORACLE\ADMIN\XE\UDUMP
      core_dump_dest           = E:\ORACLEXE\APP\ORACLE\ADMIN\XE\CDUMP
      db_name                  = XE
      open_cursors             = 300
      os_authent_prefix        =
      pga_aggregate_target     = 41943040
    PMON started with pid=2, OS id=2984
    PSP0 started with pid=3, OS id=2988
    MMAN started with pid=4, OS id=2992
    DBW0 started with pid=5, OS id=3004
    LGWR started with pid=6, OS id=3008
    CKPT started with pid=7, OS id=3012
    SMON started with pid=8, OS id=3016
    RECO started with pid=9, OS id=3020
    CJQ0 started with pid=10, OS id=3024
    MMON started with pid=11, OS id=3028
    MMNL started with pid=12, OS id=3032
    Thu Oct 29 20:40:55 2009
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    starting up 4 shared server(s) ...
    Oracle Data Guard is not available in this edition of Oracle.
    Thu Oct 29 20:41:00 2009
    alter database mount exclusive
    Thu Oct 29 20:41:05 2009
    Setting recovery target incarnation to 2
    Thu Oct 29 20:41:05 2009
    Successful mount of redo thread 1, with mount id 2582644764
    Thu Oct 29 20:41:05 2009
    Database mounted in Exclusive Mode
    Completed: alter database mount exclusive
    Thu Oct 29 20:41:05 2009
    alter database open
    Thu Oct 29 20:41:06 2009
    Beginning crash recovery of 1 threads
    Thu Oct 29 20:41:06 2009
    Started redo scan
    Thu Oct 29 20:41:07 2009
    Completed redo scan
    9180 redo blocks read, 514 data blocks need recovery
    Thu Oct 29 20:41:07 2009
    Started redo application at
    Thread 1: logseq 797, block 5520
    Thu Oct 29 20:41:09 2009
    Recovery of Online Redo Log: Thread 1 Group 2 Seq 797 Reading mem 0
      Mem# 0 errs 0: E:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_2MXYQN2G_.LOG
    RECOVERY OF THREAD 1 STUCK AT BLOCK 28 OF FILE 2
    Thu Oct 29 20:41:11 2009
    Aborting crash recovery due to error 1172
    Thu Oct 29 20:41:11 2009
    Errors in file e:\oraclexe\app\oracle\admin\xe\udump\xe_ora_3080.trc:
    ORA-01172: recovery of thread 1 stuck at block 28 of file 2
    ORA-01151: use media recovery to recover block, restore backup if needed

    ORA-1172 signalled during: alter database open...
    Thu Oct 29 20:55:05 2009
    db_recovery_file_dest_size of 10240 MB is 0.98% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    Thu Oct 29 22:50:17 2009
    WARNING: inbound connection timed out (ORA-3136)

    DomBrooks wrote:
    I would do a separate install and test the import. Don't burn your bridges. If there is something wrong with the export, and the data is valuable to you, as a last resort there are tools that could help get something out of the old files.

    I would usually agree with that without reservation, but I think XE only lets you install one instance per server. I have not tried to see whether that is a license restriction or a technical restriction enforced by the installer, but I'm not sure you can get a second XE database running on the server.

    And while there are tools that can recover data from old data files, because the free Oracle version is in use, those tools are probably not an option - I don't think Oracle DUL is available unless you have a support contract, which is not available on XE, and the competitors are pretty darn expensive. If the data is not important enough to justify a commercial version of the database with support, patches, etc., it is probably not important enough to license one of those tools.

    Justin

  • Get Trace List - Last Timestamp returns a time one hour behind

    Hello

    I use the DSC module to create traces.  I need the last timestamp of a trace.  When I look at the trace via MAX (export to text file), I see all the timestamps in the trace.  When I use "Get Trace List.vi", the timestamp it returns is an hour behind.  I would just compensate by an hour, but I thought I saw that it corrects itself after writing again.  I have complex code, and after having thought about it, I think "Get Trace List.vi" should work.  I tried changing the time zone but this makes no difference.  One workaround I can think of is to read the trace itself, but over a more recent time interval.  In my case I do not know the latest time, but if I got the data from the last month and used max/min on the incoming timestamp array, it would work.  OK, so I have thought of a workaround, but maybe someone knows why this last timestamp is an hour off?

    Thank you

    Matt

    You have not included all the essential subVIs, but I suspect a problem with the "Daylight Saving Time" setting.

  • Does renaming a file change the last accessed timestamp?

    I was in a heated debate with one of my instructors today, related to a test question about the access timestamp when you rename a file. Through my research and testing renaming a file via the command prompt, it does not appear that renaming a file changes the last access date of that file. Access and modify times will change for the directory that contains the file, but not for the file itself.  So my understanding is that it does not; however, my instructor insists that it does. Can someone shed some light on this topic please?

    Some more reading material - it's interesting.

    More information under:

    Windows NT keeps track of three time stamps associated with files and directories. These three timestamps are Created, Last Accessed, and Last Modified. When a file or directory is created, accessed, or changed, Windows NT updates the appropriate time stamp.

    http://support.Microsoft.com/kb/148126/a

    NTFS uses the change journal to track information about added, deleted, and changed files for each volume.

    http://TechNet.Microsoft.com/en-us/library/cc938919.aspx

    Working with file systems

    http://TechNet.Microsoft.com/en-us/library/bb457112.aspx

  • View last logon timestamp

    Hello

    We have recently begun setting up our new vROps 6.1 environment and brought in several groups, user accounts, and identity sources from our previous environment.  It's very confusing to try to determine which account a user is signing on with. I myself have five accounts appearing as:

    User name                Source type

    domain\dbutch1976 Virtual Center - VC

    domain\dbutch1976 Virtual Center - VC

    domain\dbutch1976 Virtual Center - VC

    domain\dbutch1976 group of Virtual Center

    dbutch1976 Active Directory

    How can I determine which account I am using when I log in?   Is it possible to display the last logon timestamp and determine when (and if) an account has ever been used?

    Thank you.

    If you log in to the vROps product UI, the account used when you connect is determined by the "Authentication Source" drop-down list box on the login page.  There may be multiple vCenters listed there in addition to your Active Directory domain.  However, if you use the vSphere Web Client, focus on the health of a specific object and use the "View details in vCenter Operations Manager" link, it uses the VC as the authentication source.  You can use any of the accounts listed, depending on how many vCenters you have and how many you have access to.  I agree that it's very confusing to manage.

    You may be able to eliminate some accounts by limiting the authentication sources that can be used.  For example, if you do not use the vSphere Web Client to access vROps and don't want vCenter registered as an authentication source on the vROps login page, you can turn off those options in the global settings under Administration.  Deactivating all three vCenter options in the global settings should prevent the VC accounts in your list from being created, but will require everyone to use AD authentication.

    There is no last login timestamp that I know of, but you can probably find the last connection under Administration > Audit > User Activity Audit.  You can filter by user, Auth Source ID and a few other properties.  You may have to play with it a bit to find the right account and the last time it was used.  I filtered by Auth Source "VC" to see just the vCenter logins.

  • Script to pull the last log timestamp from all hosts in a cluster?

    Hi all

    Lately I have seen an issue where logging on my 5.1 hosts abruptly stops, and without remote access to every host I have no way of knowing what happened.

    Is there a way to report the last timestamp of, for example, hostd.log for each host in a cluster?

    Thank you

    Tony

    Try something like this

    foreach ($esx in Get-Cluster -Name MyCluster | Get-VMHost) {
        $log = Get-Log -Key hostd -VMHost $esx
        $esx | Select Name, @{N="Last entry"; E={[datetime]($log.Entries[-1].Split(' ')[0])}}
    }
    

    But be aware that fetching a log from an ESXi host can take some time.

    The script needs to get the full log to be able to extract the last line.

  • Find the record with the latest timestamp

    Hello

    I have a table that has four fields.


    CREATE TABLE MATCH_L
    (
      SUBSCRIPTION_INFO  VARCHAR2(512 BYTE),
      SUBSCRIBERID       NUMBER             NOT NULL,
      USERID             VARCHAR2(12 BYTE)  NOT NULL,
      TIME_LAST_USED     TIMESTAMP(6)       NOT NULL
    )
    TABLESPACE TEST1
    PCTUSED    0
    PCTFREE    10
    INITRANS   1
    MAXTRANS   255
    STORAGE    (
                INITIAL     64K
                NEXT        1M
                MINEXTENTS  1
                MAXEXTENTS  UNLIMITED
                PCTINCREASE 0
                BUFFER_POOL DEFAULT
               )
    LOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING;



    For each set of records like the ones below:

    FSB0004609  42  192902070016  2012-05-03 10:30:33.011000
    FSB0004609  42  192902070016  2012-04-19 14:30:33.627000
    FSB0004609  42  192902070016  2012-04-03 16:28:16.019000
    FSB0004609  42  192902070016  2012-03-21 17:11:04.398000
    FSB0004609  42  192902070016  2012-04-30 09:12:40.366000


    SUBSCRIPTION_INFO, SUBSCRIBERID and USERID are the same, and the records differ only in the timestamp.

    So, out of each such set of records, I want to get the one having the latest TIME_LAST_USED.

    I hope you understood what I want (put another way, I want distinct SUBSCRIPTION_INFO, SUBSCRIBERID and USERID, but without using DISTINCT in the query).

    Please let me know if my explanation is not enough.

    Regards

    / Louis

    Edited by: Louis on March 19, 2013 15:01

    Simply use GROUP BY:

    SELECT  SUBSCRIPTION_INFO,
            SUBSCRIBERID,
            USERID,
            MAX(TIME_LAST_USED) TIME_LAST_USED
      FROM  MATCH_L
      GROUP BY SUBSCRIPTION_INFO,
               SUBSCRIBERID,
               USERID
    /
    

    SY.

  • How to find the timestamp of a package/package body compiled in the last 15 days

    Hi all

    I would like to find the timestamp of packages/package bodies compiled in the last 15 days.

    Is there any feature that records the compilation date of a particular package or package body?

    If so, please provide the query.

    version: Oracle Database 11 g Enterprise Edition Release 11.1.0.7.0 - Production

    Thank you
    Rambeau

    Hi Raghu,

    As far as I know, you can't.

    Kind regards.
    Al
