Data archival and reuse

Hello

I would like to see some examples of data archiving and of how to reuse the archived data later. I've heard of fragmentation, shrink and other methods, but I just want to know how to use them.
An example is appreciated.


Thank you
Also, in addition to that, you can use partitioning to exchange an archived table with a partition of the working table, so that the archived rows are logically added to the rows of the working table.

Can you please give a simple example of the above statement?

Here is an example for a single table. What you can use as a partition key may depend on your existing data structure and on the number of rows in the table. If the data allow it, the move in step 3 can be optimized by exchanging an empty partition with the existing table (which is very fast); that optimization is not shown here.

1. create tablespace ARCHIVE1900 datafile '/replaceable_disk/data/archive1900.dbf'

2. create table WORKING_PARTITIONED (YEAR number /* this will be the partition key */, OTHER_COL number, YETANOTHER_COL varchar...) partition by range (YEAR)
(partition ARCHIVE1900 values less than (2000) tablespace ARCHIVE1900,
partition DATA2000 values less than (2010) tablespace DATA2000,
partition DATA2010 values less than (2020) tablespace DATA2020)

3. Move the data from WORK into WORKING_PARTITIONED. Drop WORK. Rename WORKING_PARTITIONED to WORK (see the sketch after step 5).

4. Archive
4.1 create table WORKING_ARCHIVE1900 tablespace ARCHIVE1900 as select * from WORK where 1 = 0;
4.2 alter table WORK exchange partition ARCHIVE1900 with table WORKING_ARCHIVE1900 update global indexes;
All data from partition ARCHIVE1900 of WORK is now in the WORKING_ARCHIVE1900 table.
Partition ARCHIVE1900 of WORK is empty.
4.3 alter tablespace ARCHIVE1900 read only;
4.4 alter tablespace ARCHIVE1900 offline;
4.5 At the OS level, unmount or eject /replaceable_disk

5. Restore from archive
5.1 Mount /replaceable_disk so that '/replaceable_disk/data/archive1900.dbf' is available
5.2 alter tablespace ARCHIVE1900 online;
5.3 alter table WORK exchange partition ARCHIVE1900 with table WORKING_ARCHIVE1900 update global indexes;
All the data in table WORKING_ARCHIVE1900 is now back in partition ARCHIVE1900 of WORK. Table WORKING_ARCHIVE1900 is empty.
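
To make step 3 and the check after the exchange in step 4.2 concrete, here is a rough Oracle SQL sketch. Like the steps above, it is approximate and untested, and it assumes the column layout of WORK matches WORKING_PARTITIONED exactly:

    -- Step 3 (sketch): move the data, drop the old table, rename the new one
    insert /*+ append */ into WORKING_PARTITIONED select * from WORK;
    commit;
    drop table WORK;
    rename WORKING_PARTITIONED to WORK;

    -- Optional sanity check after the exchange in step 4.2
    select count(*) from WORKING_ARCHIVE1900;            -- the archived rows
    select count(*) from WORK partition (ARCHIVE1900);   -- should now be 0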

Enjoy.

The example is approximate and may contain syntax errors, so test the approach carefully before implementing it.

Tags: Database

Similar Questions

  • How can I archive and back up my calendars? Data disappears after one year

    I find it problematic that all my calendar information disappears after a year. I don't want to have to re-enter everything every year because that would be confusing in many cases, but I don't want to lose the dates, places, etc. Is there a way to archive and save them for later use? Thank you!

    Calendar > File > Export.

  • Differences between archiving and inactivating a qualitative search within the Data Admin Toolkit


    Hello

    Can you please let me know what the difference is between archiving and inactivating a qualitative search in the Data Admin Toolkit.

    Thank you

    Rohini M

    When you inactivate or archive anything, it is no longer available for selection. The difference between inactive and archived is that inactive items still appear when searching, while archived items will not.

    Let's say that you have the following:

    List A

    - Item 1

    - Item 2

    List items

    If you were to inactivate Item 1, end users would no longer see it as selectable when using the extended qualitative attribute. However, when they search for specifications based on the extended attribute, they would still be able to select Item 1, so they could find objects where this value was used. If you archive Item 1, end users would no longer see it as available for selection anywhere, including search.

    Lists

    If you were to inactivate or archive the whole list, it would no longer be available for selection when setting up a qualitative search of extended attributes. I don't think there is anywhere you can search extended attributes by list out of the box currently, so the two would act similarly. If there were a place to search extended attributes by list, it would follow the same rules as above.

  • Flashback Data Archive and Data Guard

    Hello

    I have an 11.2.0.1 DB with a standby database.
    If I put a few tables into a flashback data archive, will the FDA support tables be copied to the standby DB as well?

    Regards

    Hello;

    I hesitate to answer because you don't seem to close your old questions or give points to those who help. But I also believe in giving the benefit of the doubt.

    If the FRA is configured on your standby, then the answer is yes.

    While it is not a requirement, I would always use the FRA with Data Guard.

    db_recovery_file_dest = '/u01/app/oracle/flash_recovery_area'

    The following are some sample parameter values that might be used to configure a physical standby database to archive its standby redo log to the fast recovery area:
    
    LOG_ARCHIVE_DEST_2 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(STANDBY_LOGFILE,STANDBY_ROLE)' LOG_ARCHIVE_DEST_STATE_2=ENABLE
    

    Source

    Data Guard Concepts and Administration 11g Release 2 (11.2) E10700-02

    Using RMAN Effectively in a Data Guard Environment [ID 848716.1]
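
    If it helps, here is a minimal sketch of setting those parameters with ALTER SYSTEM on the standby; the FRA size and location below are placeholders, not values from this thread:

    -- Hypothetical values: adjust the size and location for your environment
    alter system set db_recovery_file_dest_size = 20G scope=both;
    alter system set db_recovery_file_dest = '/u01/app/oracle/flash_recovery_area' scope=both;
    alter system set log_archive_dest_2 =
      'LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(STANDBY_LOGFILE,STANDBY_ROLE)' scope=both;
    alter system set log_archive_dest_state_2 = ENABLE scope=both;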

    Best regards

    mseberg

    Edited by: mseberg on December 1, 2012 13:53

  • Flashback Data Archive issue

    Hi all

    There is this question on the web:

    Q: Identify the statement that is true about Flashback Data Archive:

    A. You can use multiple tablespaces for an archive, and each archive can have its own retention period.

    B. You can have an archive, and for each tablespace that is part of the archive, you can specify a different retention period.

    C. You can use multiple tablespaces for an archive, and you can have more than one default archive per retention period.

    D. If you specify a default archive, it must exist in one tablespace only.

    Everyone says that the correct answer is B. Isn't it supposed to be A?

    The retention period is specified at the flashback archive level, not at the tablespace level, so B is incorrect.
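
    For reference, a small sketch of why A fits: an archive can span multiple tablespaces, while the retention period belongs to the archive itself (the archive and tablespace names below are made up):

    -- Retention is declared once, on the archive itself
    create flashback archive fla1 tablespace fda_ts1 quota 10g retention 1 year;

    -- More tablespaces can be added to the same archive, but they share its retention
    alter flashback archive fla1 add tablespace fda_ts2;

    -- Enable the archive for a table
    alter table sales flashback archive fla1;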

  • Query Archive for Data Archive Manager


    Hello

    Recently I created a PeopleSoft query (Type = Archive, Public) to use with Data Archive Manager. However, after I saved and closed it, when I tried to search for it again, I couldn't find it to change it. I know that the query was created, because I can see it when I set up the archive, and when I try to create the same query again, the system asks me to overwrite it.

    Is that how this type of query works? How can I change the SQL of the query I just created?

    Appreciate anyone's help.

    The basic search in Query Manager defaults to Query Type = User, so the query won't appear unless you explicitly change the search type to Archive. If you want to search by type and name, use the advanced search.

    Kind regards

    Bob

  • Task Audit, Data Audit and Process Flow history analysis

    Hello

    Our Internal Audit department has asked for a bunch of information that we need to compile from the Task Audit log, Data Audit and Process Flow history. We have all the information available, but not in a format that allows proper "reporting" of the log information. What is the best way to manage the HFM logs so that we can quickly filter and export the required audit information?

    We have housekeeping in place; the logs are partly 'live' DB tables and partly purged tables that have been exported to Excel to archive historical log information.

    Thank you very much.

    I thought I posted this Friday, but I just noticed that I never hit the "Post Message" button, ha ha.

    The info below will help you translate some of the information in the tables, etc. You can report on the audit tables directly or move the data to another table better suited for later analysis. The consensus, even if I disagree, is that you will suffer performance issues if your audit tables become too big, so you may want to move the data out periodically. You can do that with a manual process, a scheduled task, etc.

    I personally just throw the data into another table and report on it there. As mentioned above, you will need to translate some of the information, as it is not 'readable' in the database.
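
    As a rough illustration of moving the rows out periodically (the archive table name and cutoff date here are hypothetical, assuming a SQL Server back end):

    -- One-time: create an empty archive copy of the task audit table
    select * into YourAppNameHere_task_audit_archive
    from YourAppNameHere_task_audit where 1 = 0;

    -- Periodically: copy rows older than a cutoff, then purge them from the live table
    insert into YourAppNameHere_task_audit_archive
    select * from YourAppNameHere_task_audit
    where cast(StartTime - 2 as smalldatetime) < '2012-01-01';

    delete from YourAppNameHere_task_audit
    where cast(StartTime - 2 as smalldatetime) < '2012-01-01';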

    For example, if I wanted to pull the rules loads, metadata loads and member list loads, you could run a query like this. (NOTE: strAppName must be the name of your application...)

    The main tricks to know, at least for the task audit table, are how to convert the times and how to determine which activity code matches which friendly name.

    -- Declare working variables --
    declare @dtStartDate as nvarchar(20)
    declare @dtEndDate as nvarchar(20)
    declare @strAppName as nvarchar(20)
    declare @strSQL as nvarchar(4000)
    -- Initialize working variables --
    set @dtStartDate = '1/1/2012'
    set @dtEndDate = '8/31/2012'
    set @strAppName = 'YourAppNameHere'
    
    --Get Rules Load, Metadata, Member List
    set @strSQL = '
    select sUserName as "User", ''Rules Load'' as Activity, cast(StartTime-2 as smalldatetime) as "Time Start",
          cast(EndTime-2 as smalldatetime) as ''Time End'', ServerName, strDescription, strModuleName
       from ' + @strAppName + '_task_audit ta, hsv_activity_users au
       where au.lUserID = ta.ActivityUserID and activitycode in (1)
            and cast(StartTime-2 as smalldatetime) between ''' + @dtStartDate + ''' and ''' + @dtEndDate + '''
    union all
    select sUserName as "User", ''Metadata Load'' as Activity, cast(StartTime-2 as smalldatetime) as "Time Start",
          cast(EndTime-2 as smalldatetime) as ''Time End'', ServerName, strDescription, strModuleName
       from ' + @strAppName + '_task_audit ta, hsv_activity_users au
       where au.lUserID = ta.ActivityUserID and activitycode in (21)
            and cast(StartTime-2 as smalldatetime) between ''' + @dtStartDate + ''' and ''' + @dtEndDate + '''
    union all
    select sUserName as "User", ''Memberlist Load'' as Activity, cast(StartTime-2 as smalldatetime) as "Time Start",
          cast(EndTime-2 as smalldatetime) as ''Time End'', ServerName, strDescription, strModuleName
       from ' + @strAppName + '_task_audit ta, hsv_activity_users au
       where au.lUserID = ta.ActivityUserID and activitycode in (23)
            and cast(StartTime-2 as smalldatetime) between ''' + @dtStartDate + ''' and ''' + @dtEndDate + ''''
    
    exec sp_executesql @strSQL
    

    With regard to the activity codes, here's a quick breakdown of those...

    ActivityID     ActivityName
    0     Idle
    1     Rules Load
    2     Rules Scan
    3     Rules Extract
    4     Consolidation
    5     Chart Logic
    6     Translation
    7     Custom Logic
    8     Allocate
    9     Data Load
    10     Data Extract
    11     Data Extract via HAL
    12     Data Entry
    13     Data Retrieval
    14     Data Clear
    15     Data Copy
    16     Journal Entry
    17     Journal Retrieval
    18     Journal Posting
    19     Journal Unposting
    20     Journal Template Entry
    21     Metadata Load
    22     Metadata Extract
    23     Member List Load
    24     Member List Scan
    25     Member List Extract
    26     Security Load
    27     Security Scan
    28     Security Extract
    29     Logon
    30     Logon Failure
    31     Logoff
    32     External
    33     Metadata Scan
    34     Data Scan
    35     Extended Analytics Export
    36     Extended Analytics Schema Delete
    37     Transactions Load
    38     Transactions Extract
    39     Document Attachments
    40     Document Detachments
    41     Create Transactions
    42     Edit Transactions
    43     Delete Transactions
    44     Post Transactions
    45     Unpost Transactions
    46     Delete Invalid Records
    47     Data Audit Purged
    48     Task Audit Purged
    49     Post All Transactions
    50     Unpost All Transactions
    51     Delete All Transactions
    52     Unmatch All Transactions
    53     Auto Match by ID
    54     Auto Match by Account
    55     Intercompany Matching Report by ID
    56     Intercompany Matching Report by Acct
    57     Intercompany Transaction Report
    58     Manual Match
    59     Unmatch Selected
    60     Manage IC Periods
    61     Lock/Unlock IC Entities
    62     Manage IC Reason Codes
    63     Null
    
  • Indexing & Data Archive Manager

    Hello forums.

    I am working with Data Archive Manager on PeopleTools 8.52, using the PSACCESSLOG table as indicated on the PeopleSoft wiki here:
    http://PeopleSoft.wikidot.com/data-archive-manager

    Basically, the problem I encounter is during the creation of a template: when I select the archive object and check it as a base object, I get an error warning me that a unique index does not exist on the record. Here's the message:
    http://d.PR/i/llxl

    However, the only way I know of that PeopleSoft manages indexes is by specifying key fields on the record. Here are the table definitions; both have the same key fields:
    http://d.PR/i/G0E6
    http://d.PR/i/h2Fz

    The indexes for both records/tables have been created from Application Designer; here is the generated build script:
    DROP INDEX PS_PSACCESSLOG_HST
    /
    CREATE INDEX PS_PSACCESSLOG_HST ON PS_PSACCESSLOG_HST (PSARCH_ID,
       PSARCH_BATCHNUM,
       OPRID,
       LOGIPADDRESS,
       LOGINDTTM) TABLESPACE PSINDEX STORAGE (INITIAL 40000 NEXT 100000
     MAXEXTENTS UNLIMITED PCTINCREASE 0) PCTFREE 10 PARALLEL NOLOGGING
    /
    ALTER INDEX PS_PSACCESSLOG_HST NOPARALLEL LOGGING
    /
    Is there a specific way to create keys/indexes so that the record can be used as an archive base object? I tried to create it as a unique index by issuing the create index SQL manually, specifying "create unique index...".
    There must be something that I am missing. As always, any input is much appreciated, thank you.

    Best regards.

    PeopleBooks
    Nonunique Indexes
    The SQL generated by Data Archive Manager assumes that the index keys identify unique rows. Therefore, the base table of the base object must have a unique index.

    Check whether the table has a unique index, as pictured below. You can change this by clicking Edit Index and checking Unique Index:
    http://docs.Oracle.com/CD/E28394_01/pt852pbh1/Eng/psbooks/tapd/IMG/sm_ChangeRecordIndexesDialog_tapd.PNG

    Having said what you can do to create a unique index, I advise against defining a unique index with these keys on a table like PSACCESSLOG.
    Current keys:
    -OPRID
    -LOGIPADDRESS
    -LOGINDTTM

    Meaning: a person logs in from a workstation at a certain point in time.

    When you place a unique index on this table, it means that theoretically you cannot log in to PeopleSoft at the same time from the same workstation with the same user, using, say, different browsers such as IE and Firefox.
    This will cause unique constraint errors that will be presented to the user at logon, which is why there is not a unique index on that table.
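
    To make the trade-off concrete, this is roughly what such a unique index would look like (illustrative only, with a made-up index name; as explained above, you probably should not actually do this on PSACCESSLOG):

    -- Hypothetical: a unique index on the current keys would reject a second
    -- logon row for the same user, same workstation, in the same second
    create unique index PS_PSACCESSLOG_U on PSACCESSLOG (OPRID, LOGIPADDRESS, LOGINDTTM);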

    Halin

  • Flashback data archive commit time performance - a bug?

    Hi all

    I am using Oracle 11g R2 on a 64-bit Windows environment. I just want to do some tests with flashback data archive. I created one and added it to a table. About 1.8 million records exist in this table. Furthermore, this table is one of the Oracle example tables, SH.SALES. I created another table from that table and inserted the same data twice.
    -- in a session that is not SH
    
    Create Table Sales as select * from sh.sales;
    
    insert into sales select * from sh.sales;
    Commit;
    The insert operation takes a few seconds. Sometimes, in this code, the commit command takes more than *20* minutes, sometimes 0 seconds. If the commit time is brief after the insert, I then update the table and commit again:
    update sales set prod_id = prod_id; -- update with same data
    commit;
    The update takes a few seconds more. If the first commit (after the insert) took very little time, the second commit, after the update, takes more than 20 minutes. During that time, while the commit is working, my CPU becomes overloaded, at 100% load.

    The system that Oracle runs on is quite good for personal use: an i7 with 4 real cores, 8 GB RAM, an SSD disk, etc.

    When I looked at Enterprise Manager - Performance monitoring, I saw this SQL in my SQL list:
    insert /*+ append */ into SYS_MFBA_NHIST_74847  select /*+ leading(r) 
             use_nl(v)  PARALLEL(r,DEFAULT) PARALLEL(v,DEFAULT)  */ v.ROWID "RID", 
             v.VERSIONS_STARTSCN "STARTSCN",  v.VERSIONS_ENDSCN "ENDSCN", 
             v.VERSIONS_XID "XID" ,v.VERSIONS_OPERATION "OPERATION",  v.PROD_ID 
             "PROD_ID",  v.CUST_ID "CUST_ID",  v.TIME_ID "TIME_ID",  v.CHANNEL_ID 
             "CHANNEL_ID",  v.PROMO_ID "PROMO_ID",  v.QUANTITY_SOLD 
             "QUANTITY_SOLD",  v.AMOUNT_SOLD "AMOUNT_SOLD"  from SYS_MFBA_NROW r, 
             SYS.SALES versions between SCN :1 and MAXVALUE v where v.ROWID = 
             r.rid
    This consumes my resources for more than 20 minutes. What I am doing is just an update of 1.8 million records (the update itself really takes little time) and a commit (which kills my system).

    What is the reason for this?

    Info:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE    11.2.0.1.0      Production
    TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production

    I see that in Guy Harrison's example the SYS_MFBA_NROW table contains a very large number of rows - and the query is forced into a nested loop join against this table (as is your query). In your case the nested loop is into a view (and there is no sign of a 'pushed' predicate).

    If you have a very large number of rows in that table, the resulting code is bound to be slow. To check, I suggest you run the test again (from scratch) with sql_trace enabled or statistics_level set to ALL so that you can get the rowsource execution statistics for the query and see where the time is being spent and where the volume appears. You will have to raise this one with Oracle - if this observation is correct, then FBDA is only suitable for OLTP systems, not DSS or DW.
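
    A minimal sketch of re-running the test with extended rowsource statistics, as suggested above (the table name matches the earlier test; the sql_id lookup is an assumption about how you would locate the recursive FBDA insert):

    -- Gather rowsource execution statistics in this session
    alter session set statistics_level = all;     -- or: alter session set sql_trace = true;

    update sales set prod_id = prod_id;
    commit;

    -- Locate the recursive FBDA insert and display its actual rowsource statistics
    select sql_id, child_number, elapsed_time
    from   v$sql
    where  sql_text like 'insert /*+ append */ into SYS_MFBA_NHIST%';

    -- substitute the sql_id and child number found above
    select * from table(dbms_xplan.display_cursor('&sql_id', 0, 'ALLSTATS LAST'));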

    Regards
    Jonathan Lewis

  • CAN I RECOVER DELETED DATA FILE AND ITS TABLESPACE BY USING FLASHBACK DATABASE

    Hello!

    I CREATED THE TABLESPACE WITH ITS DATA FILE.

    SQL> create smallfile tablespace usmantbs datafile 'E:\oracle\product\10.2.0\oradata\orcl\usman.dbf' size 10M logging extent management local segment space management auto;

    THEN, I CREATED A USER AND ASSIGNED THIS TABLESPACE TO HIM.

    SQL> create user Leal identified by Leal profile default account unlock default tablespace usmantbs;
    SQL> grant connect, resource to Leal;

    I CONNECTED AS USER Leal AND CREATED A TABLE.

    SQL > conn Leal/Leal
    SQL > create table baseball (id number (9));

    SQL> select current_scn from v$database;

    CURRENT_SCN
    ---------------------
    545863

    Then I dropped the tablespace, including contents and datafiles...

    SQL> drop tablespace usmantbs including contents and datafiles;

    I have no backup of this datafile, but my database is in archivelog mode...

    So can I flashback the database to SCN 545863, as it was before the drop, to get my datafile back along with its tablespace?
    Will I get my datafile back or not? Help, please...

    You can test it yourself easily :) You will not be able to open your database.
    After getting the error, just rename this datafile and flashback again. Then open your database:

    C:\Documents and Settings\Administrator>sqlplus "/as sysdba"
    
    SQL*Plus: Release 10.2.0.1.0 - Production on Sat Aug 1 14:20:34 2009
    
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    
    SQL> shutdown immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup mount
    ORACLE instance started.
    
    Total System Global Area  293601280 bytes
    Fixed Size                  1248624 bytes
    Variable Size              96469648 bytes
    Database Buffers          192937984 bytes
    Redo Buffers                2945024 bytes
    Database mounted.
    
    SQL> alter database archivelog;
    
    Database altered.
    
    SQL> alter database flashback on;
    
    Database altered.
    
    SQL> alter database open;
    
    Database altered.
    
    SQL> create tablespace tb datafile 'c:\tb.df' size 1m;
    
    Tablespace created.
    
    SQL> create user tb identified by tb;
    
    User created.
    
    SQL> grant dba to tb;
    
    Grant succeeded.
    
    SQL> alter user tb default tablespace tb;
    
    User altered.
    
    SQL> create table tb (id number);
    
    Table created.
    
    SQL> select current_scn from v$database;
    
    CURRENT_SCN
    -----------
         547292
    
    SQL> drop tablespace tb including contents and datafiles;
    
    Tablespace dropped.
    
    SQL> shutdown immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup mount
    ORACLE instance started.
    
    Total System Global Area  293601280 bytes
    Fixed Size                  1248624 bytes
    Variable Size              96469648 bytes
    Database Buffers          192937984 bytes
    Redo Buffers                2945024 bytes
    Database mounted.
    
    SQL> flashback database to scn 547292;
    flashback database to scn 547292
    *
    ERROR at line 1:
    ORA-38795: warning: FLASHBACK succeeded but OPEN RESETLOGS would get error
    below
    ORA-01245: offline file 5 will be lost if RESETLOGS is done
    ORA-01111: name for data file 5 is unknown - rename to correct file
    ORA-01110: data file 5: 'C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\UNNAMED00005'
    
    SQL> alter database open resetlogs;
    alter database open resetlogs
    *
    ERROR at line 1:
    ORA-01245: offline file 5 will be lost if RESETLOGS is done
    ORA-01111: name for data file 5 is unknown - rename to correct file
    ORA-01110: data file 5: 'C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\UNNAMED00005'
    
    SQL> select name from v$datafile;
    
    NAME
    --------------------------------------------------------------------------------
    
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST1\SYSTEM01.DBF
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST1\UNDOTBS01.DBF
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST1\SYSAUX01.DBF
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST1\USERS01.DBF
    C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\UNNAMED00005
    
    SQL> alter database create datafile 'C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\UNNAMED00005' as 'c:\tb.dbf';
    
    Database altered.
    
    SQL> flashback database to scn 547292;
    
    Flashback complete.
    
    SQL> alter database open resetlogs;
    
    Database altered.
    
    SQL>
    
    SQL> select * from tb;
    
    no rows selected
    
    SQL> select name from v$datafile;
    
    NAME
    --------------------------------------------------------------------------------
    
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST1\SYSTEM01.DBF
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST1\UNDOTBS01.DBF
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST1\SYSAUX01.DBF
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST1\USERS01.DBF
    C:\TB.DBF
    
    SQL> select name from v$tablespace;
    
    NAME
    ------------------------------
    SYSTEM
    UNDOTBS1
    SYSAUX
    USERS
    TEMP
    TB
    
    6 rows selected.
    
    SQL>
    

    - - - - - - - - - - - - - - - - - - - - -
    Kamran Agayev a. (10g OCP)
    http://kamranagayev.WordPress.com
    [Step by step installation Oracle Linux and automate the installation by using Shell Script | http://kamranagayev.wordpress.com/2009/05/01/step-by-step-installing-oracle-database-10g-release-2-on-linux-centos-and-automate-the-installation-using-linux-shell-script/]

    Edited by: Kamran Agayev A. on July 27, 2009 14:38

  • How to transfer pictures from iphone to PC and keep the date, time and place

    How to transfer pictures from iphone to PC and keep the date, time and place

    Hello Grumpelfuerer,

    Thank you for using Apple Support Communities.

    If I understand your message, you want to import the photos from your iPhone 6s to your Windows PC. I would want to save my photos on my computer as well. There are two ways to do this: you can use iCloud Photo Library to sync photos between your devices, or import your photos to your PC using Windows Photo Gallery. This article will provide you with the steps you can use for both options:

    Import photos and videos from your iPhone, iPad or iPod touch

    Best regards

  • Thunderbird 38.0.1: when I create a new e-mail account, it does not respect the archive preferences and archives everything into a single folder, although the old accounts work fine.

    I installed Thunderbird on a different system and migrated the profile successfully, but earlier today Gmail wasn't picking up messages correctly, so I ended up using Thunderbird's new account feature. (I also switched from POP to IMAP for Gmail.) The newly created account has no trouble picking up messages from the server, but when I try to archive, it pours everything into one folder. My settings call for folders by year and month. Every other account archives correctly. The newly created one does not fall in line.

    While poking around, I may have answered my own question. Thunderbird was not ignoring the month and year archiving settings completely. It was pouring newly archived messages from the Gmail account into a single folder. However, it was also archiving each of them into the old Gmail account's archives folder and putting them in the correct subfolders there. It's strange, but it is workable. I guess Thunderbird recognized the same email address used to set up the old account (which wasn't downloading mail any more for some reason) and the new one. I changed the archive folder in the new account's settings to point to the old archives folder, and now Thunderbird archives only one copy of each message, as it should.

  • Mail folder changed to the archive icon and relocated

    When I upgraded to the current Thunderbird last week, my main e-mail folder (PurepianoInbox), which is not "Inbox" (where most of my e-mail arrives), got converted into an archive folder, or so it seems. Specifically, the folder icon has been replaced with the archive icon, and it is now located above my Junk folder next to my Archive folder. Ironically, it continues to receive mail correctly, but I would like to return it to the mail folder list because this is very confusing. (See attached image.) I tried to get an answer, but so far no one has given a useful one. Thank you!

    Take a look at your account settings for that email account / Copies & Folders.
    See if Message Archives points to this local folder. If so, change it.

  • How to export pictures/albums from iPhoto to an external disk and keep the changed time/date/year and location information?

    How to export pictures/albums from iPhoto to an external disk and keep the changed time/date/year and location information?

    Menu: File ==> Export - check the boxes to include metadata and location, then export out of iPhoto.

    LN

  • Script for Date Added and Last Played Date

    Looking for a script to change the "Date added" and "Last played" in iTunes on Windows.

    Doug's AppleScripts has a "New Last Played Date" script, but unfortunately those scripts work only on OS X. I checked http://samsoft.org.uk/iTunes/ , but it doesn't have the scripts I need.

    Is anyone able to make these? Thank you.

    Date Added is not directly editable. The only way to change the value would be to remove the item, change the system date, and then add it back. Various properties that are only stored in the iTunes database would be lost, although these can be saved and restored. The SortDateAdded script has all the code for this apart from changing the system date, which I've always been wary of doing in case it causes unintended side effects. LastPlayedNever makes it look like the selected tracks have never been played. Another script called SetLastPlayedByAlbum can be used to manipulate played dates so that it appears that each album was played sequentially starting at a point in time. It should be easy enough to add an option to fill in the playback start time if that's what you're after.

    TT2
