daily batch

If I have a daily batch job that needs to pick up the records modified in the last 24 hours and process them, there is a timing issue.

If I run:

insert into TARGET_TABLE select * from SOURCE_TABLE where MODIFIED_DATE >= to_date(to_char(sysdate - 1, 'YYYYMMDD') || '070000', 'YYYYMMDDHHMISS')
and MODIFIED_DATE < to_date(to_char(sysdate, 'YYYYMMDD') || '070000', 'YYYYMMDDHHMISS');
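The window arithmetic above can also be written without the character round-trip; a small sketch using the same tables and 07:00 boundaries as the question, which should be equivalent since trunc(sysdate) is today at midnight:

```sql
-- Yesterday 07:00 (inclusive) up to today 07:00 (exclusive).
insert into TARGET_TABLE
select *
  from SOURCE_TABLE
 where MODIFIED_DATE >= trunc(sysdate - 1) + 7/24
   and MODIFIED_DATE <  trunc(sysdate) + 7/24;
```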

If I have a column MODIFIED_DATE which is updated by a trigger whenever there is a change, how can I make sure I pick up all the records? Because of read consistency, a transaction may take a few minutes to commit, so a row can get a MODIFIED_DATE inside the range above but only become visible after the statement above has already executed, so it won't be picked up. How do I solve this read-consistency issue?

Can you expand on the two options above?

Of course.

1. Accept the risk

Many orgs run extracts in the early morning hours (for example 01:00 to 03:00) when you would NOT normally expect uncommitted transactions. The extract may also run AFTER other maintenance operations that have already dealt with uncommitted transactions by killing the sessions holding them.

In these situations, there is little risk that uncommitted DML could cause a problem for the extract.

For an example of a situation where there is a problem using a MODIFIED_DATE value to retrieve data, see my last reply in this thread from April:

https://forums.Oracle.com/thread/2527467

In Oracle, is it possible to check which transactions are uncommitted, per table?

Query the DBA_DML_LOCKS view (and/or the DDL view)

http://docs.Oracle.com/CD/E11882_01/server.112/e25513/statviews_3157.htm

Select * from DBA_DML_LOCKS;

SESSION_ID  OWNER  NAME           MODE_HELD   MODE_REQUESTED  LAST_CONVERT  BLOCKING_OTHERS
        10  SCOTT  EMP_READ_ONLY  Row-X (SX)  None                    1163  Not Blocking
        69  SCOTT  EMP_READ_ONLY  Row-X (SX)  None                    1648  Not Blocking

This shows that two sessions hold a row-exclusive lock on the EMP_READ_ONLY table. As a test, I inserted a row from two different sessions but did NOT commit yet.

So when you do not want to accept the risk, you can put the table in READ ONLY mode, perform the extract, and then put the table back in READ WRITE mode.

ALTER TABLE EMP_READ_ONLY READ ONLY;

You will get an exception if there are outstanding DML operations. You can then find them and deal with them.
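Put together, the read-only option might look like the following sketch (table names are placeholders, not from the thread; ALTER TABLE ... READ ONLY requires 11g or later and fails with a "resource busy" style error while uncommitted DML remains):

```sql
-- Step 1: freeze the source table; retry (or hunt down the sessions via
-- DBA_DML_LOCKS) until no uncommitted DML remains and this succeeds.
ALTER TABLE SOURCE_TABLE READ ONLY;

-- Step 2: run the extract; no MODIFIED_DATE can change underneath you now.
insert into TARGET_TABLE
select * from SOURCE_TABLE
 where MODIFIED_DATE >= to_date(to_char(sysdate - 1, 'YYYYMMDD') || '070000', 'YYYYMMDDHHMISS')
   and MODIFIED_DATE <  to_date(to_char(sysdate, 'YYYYMMDD') || '070000', 'YYYYMMDDHHMISS');
commit;

-- Step 3: reopen the table for normal DML.
ALTER TABLE SOURCE_TABLE READ WRITE;
```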

How would using a materialized view solve the timing issue? How would it solve the case of a transaction that begins, let's say, at 06:58 and gets committed at 07:04? If I refresh the materialized view at 07:00 today, the record would not be captured, because at 07:00 the transaction is not committed. Don't I have the same problem? Maybe I'm missing something; please provide details.

I said to use an MV log. I did not say to use a materialized view.

You can create an MV log on the table, so that Oracle will record all changes to the table in the LOG.
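As a rough sketch of that approach (the table name is a placeholder; the exact log options depend on how you plan to consume the log):

```sql
-- Have Oracle record every change to SOURCE_TABLE in its MV log.
CREATE MATERIALIZED VIEW LOG ON SOURCE_TABLE
  WITH PRIMARY KEY, SEQUENCE
  INCLUDING NEW VALUES;

-- The captured changes land in MLOG$_SOURCE_TABLE; the batch can read them
-- from there instead of filtering on MODIFIED_DATE.
SELECT * FROM MLOG$_SOURCE_TABLE;
```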

See my detailed response, dated May 23, 2013 13:37, in this thread

https://forums.Oracle.com/forums/thread.jspa?MessageID=11034366

My answer includes code examples and results that show the information captured in an MV log and how you can use it.

Possibly, you could also use LogMiner to simply capture the changes from the REDO log files.

Tags: Database

Similar Questions

  • SOA (OSB/BPEL) call to ETM to import daily batch files

    Hi all

    I want to upload a daily batch (< 5000 records per day) into ETM from Oracle SOA.

    Which of the options below is best?

    1. Just use SOA to call the XAI inbound service and load the batch data in.
    2. Push the XML data into a queue, and then set up a JMS input for MPL.

    But since I have Oracle SOA, including OSB, should I use option 2? It would give us some flexibility for secondary processing of the XML later if necessary. But I have read the best practice that OSB can be a replacement for MPL. What do I have to do to replace MPL with OSB?

    And about option 1, what is the procedure for it?
    For example:
    The XAI inbound service adds a payment record.
    The batch XML contains 1,000 records. Can I simply invoke OSB to call the XAI inbound service to add these 1,000 records?

    Thank you
    Kerr

    (Don't know why my history posts have been removed; I'm a newbie with 0 posts)

    By batch, I meant PSSA - the payment upload process.
    The goal is to populate the entire set of payment upload staging tables involved (in order of priority): the deposit control staging, tender control staging, payment tender staging and remittance advice staging tables (remittance advice - only if you use open-item accounting).
    It is the safe method to post payments received from other external systems/parties, as you get to maintain deposit controls, tender controls, balancing, and other aspects related to a payment transaction, and close them at the end of the upload or the end of the day. Another option is Autopay (direct debit or credit card), which is not your business case for the moment.

    Also, you cannot do this from a BO. A BO only references one Maintenance Object, and in the case of a staged upload, several objects/entities are involved. That's why you need to use a Service Script instead, exposed as an XAI Inbound Service and consumed by OSB.

    Your payload definition should match the schema of the SS, which in turn will populate the various staging objects (above). If conforming to the SS's XSD is out of your reach, you need to define an XSLT mapping (you can use XML Spy or JDeveloper here) from the source to the target SS schema.

    The above is a synchronous approach from a message-exchange point of view, but erroneous transaction records (resulting from the PSSA processing) can be viewed in their respective staging tables, which may require business users to intervene.

  • Performance of the Oracle

    Hi friends,

    10.2.0.4.0 on Linux

    Background:
    We have the database running on an OS cluster.
    Before the issue, CPU on Server A spiked to 100% utilization. Using top we found some queries (generally small queries that run in daily batches) using 100% CPU, and the same queries showed up in AWR. For this reason, the database was switched to Server B, where the queries executed correctly without problems.
    Server A and B are OS-clustered and use the same disk storage.

    After the server restart/failover, the daily batch runs successfully without any problems for a few days (about 6 to 10 days). Then, around the 11th day, the batch runs very slowly.

    We are trying to find the cause.
    We now suspect the issue is not on the Oracle side. The problem may be on the OS/hardware side.

    All suggestions/help will be appreciated.

    Thank you
    KSG

    Hello

    KSG wrote:

    Thanks for your help Nikolay.

    If necessary, I'll attach AWR report #2, where the batch executed properly after the failover to Server B.

    Thank you

    KSG

    No, it is not necessary.

    The AWR report you posted was for a period of 9 hours, right? It shows DB CPU = 55,000 seconds. 9 hours = 32,400 seconds. So the database was using (on average) just 1.5 CPUs out of 16. I.e. the average CPU usage by the database was only 10% (and total CPU usage on the server, as I said earlier, was about 40%, i.e. the database is not even the main consumer of CPU on the box).

    So based on that, I'm leaning toward the view that the CPU problem you mentioned was not real. Most likely, you simply misread some of the CPU numbers in the reports (like DB CPU = 91%, which can be confusing). To avoid such confusion, I recommend that you go through my blog post on interpreting AWR CPU figures: AWR reports: interpreting CPU usage | Oracle Diagnostician

    If we leave out that part of your account of events, we have an intermittent performance problem with a batch. As Jonathan said, SQL plan regression is the most likely explanation. If you have the Diagnostics and Tuning Pack licenses, then you can easily diagnose such problems using the ASH view (not the ASH report), especially if you instrument your batch code properly:

    (1) in your batch job, call dbms_session.set_identifier()
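    For example (the identifier string is made up; anything that uniquely tags the run will do):

    ```sql
    BEGIN
      -- Tags every subsequent ASH sample from this session with this client id.
      dbms_session.set_identifier('nightly_batch_run');
    END;
    /
    ```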

    (2) once the job is completed, find the SQL that took longest:

    select sql_id, count(*)*10 elapsed_seconds  -- DBA_HIST ASH samples are 10 seconds apart

    from dba_hist_active_sess_history

    where client_id = :client_id_you_set_above

    and sample_time between :job_start_time and :job_finish_time

    group by sql_id

    order by count(*) desc;

    (3) check the plan hash value for this SQL query:

    select sn.begin_interval_time time, st.plan_hash_value

    from dba_hist_snapshot sn,

    dba_hist_sqlstat st

    where sn.snap_id = st.snap_id

    and st.sql_id = :sql_id_found_above

    order by sn.begin_interval_time desc;

    Best regards

    Nikolai

  • Strange Essbase issue - calculation does not return correct results in MaxL

    Hello

    All veterans of Essbase, need your help for this strange issue.

    We run Essbase on UNIX and we have a BSO application where a daily batch process runs to aggregate the Actual and Budget scenario databases. The strange thing is that the nightly aggregation process does not aggregate the data correctly, as we see variances between stored and shared hierarchies. But if we manually execute the same aggregation through the Administration Services console, the aggregation gives correct results with zero variance between stored and shared hierarchies.

    In the aggregation script we clear upper-level blocks (with UPDATECALC set OFF), and then use CALC DIM on Accounts and Entity to aggregate the results.

    This issue has been very strange in my limited experience - the same calc behaves differently when called from MaxL.

    Any suggestions to help nail this down are welcome.

    Thanks in advance!

    >

    Isn't a dense dimension meant to aggregate properly (as it is within a block) in a single pass? I am confused because I know this conflicts with the theory...

    In addition, the parents are marked TWO PASS (though I see no strong reason to make them two-pass).

    Edited by: user10725029 on May 2, 2012 20:30

    If the dimension is tagged as the Accounts dimension and you have two-pass members that are not dynamic calc, running the default calc will do the calculation in a single pass. If they are stored members, then you must run a separate two-pass calc. If you have two-pass members in a dimension other than the Accounts dimension, they must be dynamic calc or they will not calculate in two passes.

  • rollback segments are not getting freed after completion of transactions

    Hi all


    I do 8i production support. During data archiving activities I am facing a problem with rollback segments. As we know, after a commit the space should be freed once all transactions are over, but here the space has not been freed even after a long time.

    Please suggest what I should do.

    Don't we have any tool with which I can check and free up the space? Or another, manual way?

    With manual undo management - that is to say, using Rollback Segments - Oracle will NOT resize or shrink rollback segments automatically (unless you set an OPTIMAL size). That is the architecture.

    If "I do data archiving activities, so a lot of deletion is going on in the database", then surely it would not be too much effort to write two or three more scripts to re-create the Rollback Segments (and the RBS tablespace) - you could even manage archiving 'windows' to do so if you don't really have the space in the RBS tablespace. I guess the "check database" step is not automatic and requires the DBA to run scripts. How much more effort is it to add a few scripts and steps for managing Rollback Segments?

    In normal operation there is no reason to shrink/drop/recreate Rollback Segments - even where a daily batch can increase RBS requirements by 2 to 5 times. Leaving the RBS at maximum size is the easiest and safest.
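    For reference, with manual undo management the shrink target is set per segment via the OPTIMAL storage parameter; a sketch with made-up names:

    ```sql
    -- Rollback segment that Oracle will shrink back toward 20M after a large
    -- transaction extends it (segment and tablespace names are placeholders).
    CREATE ROLLBACK SEGMENT rbs_batch
      TABLESPACE rbs
      STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 20 OPTIMAL 20M);

    -- An existing segment can also be shrunk manually:
    ALTER ROLLBACK SEGMENT rbs_batch SHRINK TO 20M;
    ```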

  • '&' interpreted as a substitution variable

    SQL> BEGIN nts_mail_out_utils.insert_row (00002934, 'SENT', 'To: c&[email protected]', 'Subject: RASCAL Daily Batch Updates'); END;

    Enter the value for rsupportteam:


    How can I make the '&' be interpreted as part of the string, without using the SQLPLUS parameter?

    Here's a way to do

    With the sqlplus parameter:

    set define OFF
    

    Or

    select chr(38) from dual;
    
    CHR(38)
    &
    

    Or you can do

    BEGIN
       nts_mail_out_utils.insert_row (00002934,
                                      'SENT',
                                         'To: c'
                                      || CHR (38)
                                      || '[email protected]',
                                      'Subject: RASCAL Daily Batch Updates'
                                     );
    END;
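    Another SQL*Plus-side variation (not from the thread): keep substitution on but define an escape character, so a single escaped & passes through as a literal:

    ```sql
    SET ESCAPE '\'
    SELECT 'c\&[email protected]' AS addr FROM dual;
    ```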
    
  • Cannot create daily backups from a batch file

    xcopy - too many parameters.

    I'm trying to create a simple daily backup from a batch file.

    The command line I am using is:
    xcopy c:\Program Files\Soredex\DfW2.1s\Image g:\Image/s/d
    I get an error too many parameters. Can someone tell me what I am doing wrong?

    Put quotation marks around the file name with spaces included, so:

    xcopy "c:\Program Files\Soredex\DfW2.1s\Image" g:\Image/s/d

    I would also put a space before the switches, but I don't know if it's absolutely necessary: /s /d

  • "Lost access to volume" messages on a daily basis

    Hi all

    Hoping for a few tips on where to look with the issues we are seeing.

    The environment is:

    ESX 5.5 (last version update of this week)

    HP ProLiant BL460c Gen8 blade

    Brocade FC switches

    VNX5400 storage

    Everything is at the current patch level (HP Feb SSP, latest drivers etc. from the latest edition of the VMware/HP Cookbook)

    On a daily basis, we see messages like the following:

    Lost access to volume 52e00703-7702b882-d845-0017a4779402 (datastore) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly.

    05/07/2014-07:50:16

    Successfully restored access to volume 52e0072b-0adb69f0-f97b-0017a4779402 (datastore) following connectivity issues.

    From the vmkernel.log, I see these messages:

    2014-05-07T02:44:14.860Z cpu0:33743)lpfc: lpfc_scsi_cmd_iocb_cmpl:2157: 0:(0):3271: FCP cmd x2a failed <2/4> sid x0d0505, did x0d0100, oxid xd4 Abort requested Host Abort Req

    2014-05-07T02:44:14.860Z cpu0:32813)NMP: nmp_ThrottleLogForDevice:2321: Cmd 0x2a (0x412e80450900, 32805) to dev "naa.60060160fd503600ea66bd8acb82e311" on path "vmhba0:C0:T2:L4" Failed: H:0x5 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0. Act:EVAL

    2014-05-07T02:44:14.860Z cpu0:32813)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "naa.60060160fd503600ea66bd8acb82e311" state in doubt; requested fast path state update...

    2014-05-07T02:44:14.860Z cpu0:32813)ScsiDeviceIO: 2337: Cmd(0x412e80450900) 0x2a, CmdSN 0x299 from world 32805 to dev "naa.60060160fd503600ea66bd8acb82e311" failed H:0x5 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.

    2014-05-07T02:44:15.859Z cpu0:32881)lpfc: lpfc_scsi_cmd_iocb_cmpl:2157: 0:(0):3271: FCP cmd x2a failed <2/4> sid x0d0505, did x0d0100, oxid xe7 Abort requested Host Abort Req

    2014-05-07T02:44:15.860Z cpu0:32813)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "naa.60060160fd503600ea66bd8acb82e311" state in doubt; requested fast path state update...

    2014-05-07T02:44:15.860Z cpu0:32813)ScsiDeviceIO: 2337: Cmd(0x413682f6e580) 0x2a, CmdSN 0x29a from world 32805 to dev "naa.60060160fd503600ea66bd8acb82e311" failed H:0x5 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.

    2014-05-07T02:44:20.025Z cpu4:36428)World: 14296: VC opID 4CAEF736-0000009D-0-fb maps to vmkernel opID 4f50e1ac

    2014-05-07T02:44:20.397Z cpu1:34426)Fil3: 15408: Max (10) retries exceeded for caller Fil3_FileIO (status "IO was aborted by VMFS via a virt-reset on the device")

    2014-05-07T02:44:20.397Z cpu1:34426)BC: 2288: Failed to write (uncommitted) object '.iormstats.sf': Maximum kernel-level retries exceeded

    2014-05-07T02:44:23.013Z cpu8:34236)HBX: 2692: Waiting for timed out [HB state abcdef02 offset 4161536 gen 179 stampUS 1629072438 uuid 536997d8-04a1ea9d-6c9a-0017a4779402 jrnl <FB 2656233> drv 14.60] on vol 'uk1-san01:Production Datastore 3'

    2014-05-07T02:44:29.645Z cpu4:33106)lpfc: lpfc_scsi_cmd_iocb_cmpl:2157: 0:(0):3271: FCP cmd xa3 failed <3/4> sid x0d0505, did x0d0000, oxid xeb Abort requested Host Abort Req

    2014-05-07T02:44:29.645Z cpu20:33044)VMW_SATP_ALUA: satp_alua_issueCommandOnPath:651: Path "vmhba0:C0:T3:L4" (UP) command 0xa3 failed with status Timeout. H:0x5 D:0x0 P:0x0 Possible sense data: 0x5 0x20 0x0.

    2014-05-07T02:44:29.755Z cpu7:32857)HBX: 255: Reclaimed heartbeat for volume 52e0072b-0adb69f0-f97b-0017a4779402 (uk1-san01:Production Datastore 3): [Timeout] Offset 4161536

    2014-05-07T02:44:29.755Z cpu7:32857)[HB state abcdef02 offset 4161536 gen 179 stampUS 1632947371 uuid 536997d8-04a1ea9d-6c9a-0017a4779402 jrnl <FB 2656233> drv 14.60]

    2014-05-07T02:44:31.808Z cpu0:32807)lpfc: lpfc_scsi_cmd_iocb_cmpl:2157: 0:(0):3271: FCP cmd x2a failed <2/4> sid x0d0505, did x0d0100, oxid x112 Abort requested Host Abort Req

    2014-05-07T02:44:31.808Z cpu0:32813)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "naa.60060160fd503600ea66bd8acb82e311" state in doubt; requested fast path state update...

    2014-05-07T02:44:31.808Z cpu0:32813)ScsiDeviceIO: 2337: Cmd(0x412e8349b000) 0x2a, CmdSN 0x2ab from world 32805 to dev "naa.60060160fd503600ea66bd8acb82e311" failed H:0x5 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.

    2014-05-07T02:44:32.407Z cpu1:34426)Fil3: 15408: Max (10) retries exceeded for caller Fil3_FileIO (status "IO was aborted by VMFS via a virt-reset on the device")

    2014-05-07T02:44:32.407Z cpu1:34426)BC: 2288: Failed to write (uncommitted) object '.iormstats.sf': Maximum kernel-level retries exceeded

    2014-05-07T02:44:33.541Z cpu16:32855)HBX: 255: Reclaimed heartbeat for volume 52e0072b-0adb69f0-f97b-0017a4779402 (uk1-san01:Production Datastore 3): [Timeout] Offset 4161536

    2014-05-07T02:44:33.541Z cpu16:32855)[HB state abcdef02 offset 4161536 gen 179 stampUS 1646119268 uuid 536997d8-04a1ea9d-6c9a-0017a4779402 jrnl <FB 2656233> drv 14.60]

    Any help much appreciated.

    Thank you

    The problem was a combination of factors.  Specifically, THIN LUNs on an EMC VNX with Windows 2012 R2.  It has to do with how 2012 R2 issues TRIM and UNMAP commands to the array.  Basically, the array chokes on them, and it has a knock-on effect on all LUNs.

    The thing in our case is that we have a chassis full of HP blades, mostly running ESX, plus some Windows 2012 R2... and we essentially had a single physical 2012 R2 machine taking out the array in its entirety...

    Defrag on 2012 R2 exacerbated this issue and was a way to reproduce it - it generates a massive number of logical-unit operations (light IO), on the page-file LUN for example, during the defragmentation operation

  • Batch file to download Microsoft Security Essentials daily

    Hello

    I am a software employee. I need to download Microsoft Security Essentials updates every day in order to connect with my VPN client.

    It takes about an hour to get the download (about 80 MB) and have it installed.
    I want to know if there is a batch file/program available that runs daily to achieve this, so that it is installed by the time I arrive at the office.
    Awaiting your answer as soon as possible.
    Thanks in advance
    Jennifer

    Hi Jennifer,

     

    The question you posted would be better suited to the TechNet forums. I would recommend posting your query in the TechNet forums.

    TechNet Forum

    http://social.technet.Microsoft.com/forums/en-us/category/windowsxpitpro

     

    Hope this information helps.

  • Batch file to copy a file with the date appended - does not work in Windows 7

    Under XP, I had a batch file to copy files daily and rename them with the date, like:

    Project_Tracker_Copy.mdb

    TO

    Project_Tracker_Copy_2012_01_17.mdb - it would append the current date in YYYY_MM_DD format

    The next day, through Scheduler, it would run the batch and I'd end up with

    Project_Tracker_Copy_2012_01_18

    Here is the key part of the DOS command:

    Ren S:\archive_2012\Project_Tracker_Copy.mdb Project_Tracker_Copy_%date:~6,4%_%date:~0,2%_%date:~3,2%.mdb

    A batch file can run hidden in Task Scheduler: Task Scheduler starts the vbs, & the vbs starts the batch file.

    This example shows how to run the batch file BackupCoreFiles.bat - the only point of the vbs wrapper is so it can run hidden; from Task Scheduler it would otherwise run in the default command window, & that can be a distraction if you are working on something else at the time.

    Set WshShell = CreateObject("WScript.Shell")
    WshShell.Run Chr(34) & "C:\Program Files (x86)\Backup" & Chr(34), 0
    Set WshShell = Nothing

    The well-informed person who first posted this code was Ramesh Srinivasan: how to run .BAT files invisibly

  • Create a batch file to remove log files in Windows

    We are running Oracle 11g under Windows 2008 R2. To free up space, we run rman commands manually every day. I would like to create a batch file to do it automatically. This is the batch file that I created.

    echo Yes | Del Y:\Orace11g\temp\*.*

    Robocopy D:\Relius\Admin\RADB\ArchiveLogFiles\ Y:\Orace11G\temp\ /b /min:3

    RMAN

    connect target sys@radb;

    password

    crosscheck archivelog all;

    delete expired archivelog all;

    Yes

    but it stops at rman

    C:\ > rman

    Recovery Manager: Release 11.2.0.1.0 - Production on Tue Oct 21 10:49:27 2014

    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

    RMAN >

    What is the correct way to create the batch file to delete the expired archivelogs?

    Hello

    It is not an RMAN problem; the problem is in how the batch file is created/configured under Windows.

    See this URL on how to configure a scheduled task under Windows 2008 R2.

    - Next, create a .bat file with your instructions:

    set ORACLE_SID=radb

    set ORACLE_HOME=c:\dbhome

    rman target=sys/PASSWD@radb cmdfile=myrmancommands.csv msglog=c:\log.log

    - Then put your RMAN commands in the file myrmancommands.csv

    run {

    allocate channel c1 device type disk;

    crosscheck archivelog all;

    delete expired archivelog all;

    release channel c1;

    }

    Best regards

  • PowerCLI daily report script automation

    Hope everything is going well. I just started using PowerCLI to automate some admin tasks in our virtual environment. I have Alan Renouf's daily report .ps1 script, which I'm sure you've seen or used, but I'm trying to make it automatic, and no matter what I do, I always get prompted to connect to the server. Do you know a smart way to use a batch file or something that will run the script without user intervention?  Basically, I want to create an admin dashboard with the ability to click on an executable or batch file and have it kick off the analysis without having to provide credentials...  Any input would be greatly appreciated.

    Chris

    Chris

    Yes you can do it.

    Create a command batch script.

    Calling:

    %Path%\powershell.exe -PSConsoleFile "Path to vim.psc1" (usually in Program Files\VMware\Infrastructure\vSphere PowerCLI) "& 'Path to Script.ps1' vCenterServerName.Domain.Name"

    I hope this helps

    Nathan

  • PeopleSoft HRMS 9.1 <> OPA 10.4 batch processing using db tables - experience/advice?

    Forum members - I'm looking for anyone who has experience integrating PeopleSoft Enterprise HRMS 9.1 with OPA 10.4 batch using PeopleSoft database tables that can be easily accessed from an AE/SQR program. Due to very high throughput requirements at a large corporate client... integration through WSDL/IB/web services is out of the question.

    In addition, this integration must be very reliable (IB tends to error intermittently).

    Has someone on this forum done this? This client will have high daily OPA processing volumes... so setting it up on a recurring basis - an AE program writing input to a table... waiting for the OPA 10.4 batch to trigger/complete... and then reading the results into PeopleSoft HRMS 9.1 application pages from an output table - seems the way to go...

    I would be grateful to talk to someone who has followed this path. Thank you

    IB = Integration Broker.
    If you can fix or improve the separate issue of intermittent IB errors, then the web service call (determination server) should give you the performance you need, based on the volumes you mentioned.
    Otherwise, you could look at having your page PeopleCode use Java to make a direct call to the determination engine (bypassing the IB).

  • MaxL Script and batch file

    I have a MaxL script that exports Level0 data from different cubes. I have a batch file that calls this script at daily intervals.

    I want to name the export something different every day (preferably in a NAMEmmdd type format). I know I can add a rename or move to the batch, but I wonder if there is a MaxL method that will allow me to export to another name?

    The current MaxL is simply
    export database APP.DB level0 data to data_file '$Arborpath\am1119.txt';

    Ideas, or a direction to go to learn the answer?

    T,
    J

    Hello

    You can pass variables into MaxL, so the calling batch file can pass in a variable that contains a file name with the current date...

    For example, you have a batch file
    essmsh c:\temp\export.mxl "C:\temp\output%DATE:~3,2%%DATE:~0,2%.txt"

    If that was run today it would pass in 'C:\temp\output1118.txt'

    Now in your maxl you just refer to the variable with

    export database sample.basic level0 data to data_file $1;

    $1 refers to the variable that was passed in, the 1 being the position in which it was passed.

    See you soon

    John
    http://John-Goodwin.blogspot.com/

  • How to batch change photos in Version 2.0 of Photos... cannot find the function.

    How do you batch change photos in Version 2.0 of Photos? I cannot find the function.  Has it been removed to "improve" the Photos experience?

    Batch change what, exactly?

    Titles, Descriptions and capture date - Yes.

    Select the images whose title or description you want to change and bring up the Info pane (command-i).  Enter the title or description in the appropriate field and it will be assigned to all selected photos.

    If you want to batch change titles with sequential numbering, use the AppleScripts provided by users in the Photos for Mac user tips section.

    Batch changing/correcting dates is provided by the Image ➙ Adjust Date and Time menu option:
