Best practices for storage and backups on internal and external drives

Hello

I have a Mac laptop and I would like your advice on how to organize file storage across internal and external drives, and how to optimize the backup plan.

Currently, my file storage is organized as follows:

I keep my most important files on the encrypted internal SSD and less critical files on an encrypted external drive (ExFAT format). That external hard drive was showing errors, so I bought a new LaCie 2 TB drive as a replacement, which I formatted as Mac OS Extended (Journaled) with encryption instead of ExFAT.

For backups, I do not use Time Machine; instead I use an application that synchronizes files between two disks. With this application, I first sync the essential documents from the internal SSD to the external drive, and then I sync the entire external drive to another external drive.
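For what it's worth, that two-stage scheme can be sketched with rsync (a hypothetical stand-in for whatever sync application is actually in use; the paths are made up):

```shell
#!/bin/sh
# Hypothetical paths -- substitute your own volumes and folders.
SRC="$HOME/Documents/Essential"          # critical files on the internal SSD
PRIMARY="/Volumes/ExternalA/Essential"   # first external drive
MIRROR="/Volumes/ExternalB"              # second external drive

# Stage 1: sync the essential internal files to the first external drive.
rsync -a --delete "$SRC/" "$PRIMARY/"

# Stage 2: mirror the entire first external drive onto the second.
rsync -a --delete "/Volumes/ExternalA/" "$MIRROR/ExternalA/"
```

Note that `--delete` makes each destination an exact mirror, so an accidental deletion at the source propagates on the next run; a versioned backup (Time Machine, or an `rsync --backup-dir` scheme) guards against that in a way plain mirroring does not.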

My question is: should I keep this file storage organization, or should I, for example, just move all my files, critical and non-critical, onto the internal SSD (I could make enough room for that) and then back up the whole Mac with Time Machine? Or is there a better way to organize files in this scenario?

The second question is: if it is better to keep the organization as split storage (critical on internal, less critical on external), what would be the best way to back up everything?

Thanks in advance for all contributions.

First of all, as to whether ExFAT was the problem: I suggest the drive was just physically failing.  ExFAT is perfectly acceptable as a format where both Mac and PC need to read/write.  You may have overreacted a little in leaving ExFAT for HFS+, but you have made your choice, so adapt to it.

Here is how I operate my backups:

I use CCC (Carbon Copy Cloner, $40, bombich.com).  It copies the boot sectors, drive mapping, and recovery partition (these three are mostly 'invisible' to Disk Utility) as well as the system and data partitions, onto 'other' disks.  Those other disks may be internal (for cloning to test a new OS X on multi-disk systems) or external (for backup purposes).  The beauty is that you can boot from an external clone after a system crash, and then CCC will re-copy the active external clone onto a new/repaired internal drive.

Encryption isn't something I know well, but... if only the contents of the system and data partitions are encrypted, it *could* be OK to clone back onto partitions encrypted the same way.  If you clone internal to external while the internal partition is active, the clone copies the 'encrypted image' while small changes are still being made, so the copy could turn out bad.  File copies onto my unencrypted partitions survive changes in progress because they are individual files.

As an alternative, you could have a third system drive that runs CCC, leaving both the encrypted internal and the encrypted clone idle, and just copy the entire disk with that 'third system' active.

Tags: Notebooks

Similar Questions

  • best practices for OBIEE and BI Publisher?

    Hello

    1. What is the best way to implement BI Publisher with OBIEE?
    2. Is it better to install BI Publisher on the clustered OBIEE servers, or to keep it on a separate machine and integrate it with the clustered servers?
    3. The intended use of OBIEE is subject areas from the RPD as well as our other databases. Are there licensing issues with installing BI Publisher on a separate machine?

    Thank you.

    BI Publisher requires a license no matter where you use it outside of EBS, either standalone or bolted onto OBIEE. A benefit of standalone is that you can use it against any data source/database.

    http://www.Oracle.com/corporate/pricing/technology-price-list.PDF

  • Best Practices for ControlDelegates and ComponentDefinitions

    I'm struggling to use ControlDelegates and ComponentDefinitions.  My problem with ControlDelegates is that all actions must be attached to the page.  The ControlDelegate is also attached to the page, so any function names or ids in the loaded qml are lost to you because of scoping.

    So I thought I would just use a ComponentDefinition to load it, but I haven't had any luck.  Here is the code I am using:

    NavigationPane {
        id: navPaneid
        Page {
            id: pageid
            Container {
                id: rootContainer
                attachedObjects: [
                    ImagePaintDefinition {
                        id: backgroundPaint
                        imageSource: "asset:///images/Tile_nistri_16x16.amd"
                        repeatPattern: RepeatPattern.XY
                    }
                ]
    
                Label {
                    id: labelID
                    text: "test"
    
                }
            }
            onCreationCompleted: {
                cd_testshow.delegateActive = true
                cd_slidetip.delegateActive = true
                var createdControl = compDef.createObject();
                pageid.add(createdControl);//  navPaneid doesn't have any effect either
            }
            actions: [
                ActionItem {
                    id: playid
                    title: "Play"
                    ActionBar.placement: ActionBarPlacement.OnBar
                    onTriggered: {
                        navPaneid.push(testshow)
                    }
                    imageSource: "images/playimage.png"
                    objectName: "Play"
                },
                ActionItem {
                    title: "Quit"
                    ActionBar.placement: ActionBarPlacement.OnBar
                    onTriggered: {
                        navPaneid.push(slidetip)
                    }
                    imageSource: "images/xdarkimage.png"
                },
                ActionItem {
                    title: "CompDef"
                    ActionBar.placement: ActionBarPlacement.OnBar
                    onTriggered: {
                        navPaneid.push(compDef)  // Nothing happens
                    }
                    imageSource: "images/xdarkimage.png"
                }
            ]
        }
        attachedObjects: [
            Page {
                id: testshow
                objectName: "testshow"
                ControlDelegate {
                    id: cd_testshow
                    source: "testshowPages/testshowCont.qml"
                    delegateActive: false
                }
                actions: [
                    ActionItem {
                        title: "Hyperout"
                        ActionBar.placement: ActionBarPlacement.OnBar
                        onTriggered: {
                            animslide.hyperout()
                        }
                        imageSource: "images/stamps/Columbus_Thumb.png"
                    },
                    ActionItem {
                        title: "Change"
                        ActionBar.placement: ActionBarPlacement.OnBar
                        onTriggered: {
                        }
                        objectName: "Change"
                    }
                ]
            },
            testshowPage {
                id: slidetip
                objectName: "slidetip"
            },
            ComponentDefinition {
                id: compDef
                source: "testshowPage.qml"
            }
        ]
    }
    

    I have 3 buttons on the page to try all 3 methods.  The testshowPage method works, but I lose the benefit of preloading everything.    The ControlDelegate method does not work because it does not recognize the animslide variable, which is located in testshowPage.qml.  I can't get the ComponentDefinition to work; I'm not sure I'm wiring it up properly.

    Here is a piece of code that works - it dynamically pushes pages and loads both a Menu and an action bar.

    import bb.cascades 1.0
    
    //-- introduce custom C++ object registered in app.cpp
    
    NavigationPane {
        id: navPaneid
        Menu.definition: MenuDefinition {
            // Add a Help action
            helpAction: HelpActionItem {
                // do something there
                onTriggered: {
                }
            }
            actions: [
                ActionItem {
                    id: play1id
                    title: "Play"
                    onTriggered: {
                        var page = getSecondPage();
                        console.debug("pushing detail " + page)
                        navPaneid.push(page);
                    }
                    imageSource: "images/playimage.png"
                    objectName: "Play"
                }
            ]
        }
        Page {
            id: pageid
            Container {
                Label {
                    text: "testing"
                }
            }
            actions: [
                ActionItem {
                    id: playid
                    title: "Play"
                    ActionBar.placement: ActionBarPlacement.OnBar
                    imageSource: "images/playimage.png"
                    objectName: "Play"
                    onTriggered: {
                        var page = getSecondPage();
                        console.debug("pushing detail " + page)
                        navPaneid.push(page);
                    }
                }
            ]
        }
        property Page secondPage
        function getSecondPage() {
            if (! secondPage) {
                secondPage = secondPageDefinition.createObject();
            }
            return secondPage;
        }
        attachedObjects: [
            ComponentDefinition {
                id: secondPageDefinition
                source: "DetailsPage.qml"
            }
        ]
    }
    
  • HTML and CSS best practices for Eloqua?

    Hello Topliners community,

    My name is Ben and I am a web designer. I am currently looking for guidance on best practices in HTML and CSS when working with Eloqua. I am interested in best practices for both email and landing pages.

    Thank you

    Ben

    For landing pages, you can use HTML, CSS, and Javascript as on any other page. For example, we use Bootstrap on a couple of our Eloqua landing pages.

    For email, support for HTML and CSS is much more limited. This is one of the best resources I've seen for CSS support:

    http://www.campaignmonitor.com/CSS/

  • Best practices for vSphere 5.1

    Where can I find the most up-to-date doc about EqualLogic array configuration / best practices with VMware vSphere 5.1?

    Hello

    Here is a link to a PDF file that covers best practices for ESXi and EQL.

    EqualLogic Best Practices for ESX

    en.Community.Dell.com/.../20434601.aspx

    This doc specifically mentions that the storage heartbeat VMkernel port is no longer necessary with ESXi v5.1.  VMware has corrected the problem that made it necessary.

    If you add it to a 5.1 system it will not hurt; it will just take an IP address for each node.

    If you upgraded from 5.0 to 5.1, you can delete it afterwards.

    Here is a link to a VMware KB which addresses this issue and has links to other Dell documents that also confirm it is fixed in 5.1.

    KB.VMware.com/.../Search.do

    Kind regards

  • best practices for storage of VMs and VHDs

    No doubt this question has been answered more than once... sorry.

    I would like to know the best practice for storing a VM and its virtual hard disk on a SAN.

    Is there any advantage to / does it make sense to keep them on separate LUNs?

    Thank you.

    It will really depend on the application in the virtual machine - but for most applications there is no problem with storing everything on the same datastore.

  • Best practices for backup of the vCenter appliance and ESXi hosts?

    Hello

    I have a new VMware environment based on 5.5 ESXi and vCenter Server Appliance. A few questions:

    • What is the best practice for performing backups of the vCenter Server Appliance? I set up the appliance with the embedded SQL Express database.
    • Is it recommended to back up the ESXi hosts, or is it easier to just re-install after a failure?

    Best regards,

    Thor-Egil

    Hello

    You can use VMware VDP to back up your vCenter appliance. It is included in the Essentials, Standard, Enterprise, and Enterprise Plus licenses.

    I wouldn't bother with backups of the ESXi servers; restoring a backup would take more time than deploying a new ESXi. The only thing you may want to consider is enabling remote logging, so you can see why a host failed.

    Regards

    Tim

  • best practices for BI backup

    Hello

    Can anyone suggest best practices for backup and restore of entire BI dashboards, reports, permissions, etc.?

    Ed,

    Hello

    If you want to move entire dashboards, reports, and permissions,
    zip the *Web/catalog folder and move it to the new environment. In the new environment, unzip this catalog and set the path to it in instanceconfig.xml.
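    A minimal sketch of that catalog move from a shell (the paths and catalog name are illustrative; the real location depends on where your OracleBIData directory lives, and tar works as well as zip here):

```shell
#!/bin/sh
# Illustrative paths -- substitute your actual web catalog locations.
OLD_PARENT="/u01/app/OracleBIData/web/catalog"   # old environment
NEW_PARENT="/u01/app/OracleBIData/web/catalog"   # new environment
CATALOG="samplesales"                            # catalog folder name

# Archive the whole catalog folder on the old environment...
tar czf /tmp/catalog.tar.gz -C "$OLD_PARENT" "$CATALOG"

# ...copy the archive to the new host, then unpack it there:
tar xzf /tmp/catalog.tar.gz -C "$NEW_PARENT"

# Finally, point the catalog path in instanceconfig.xml at
# "$NEW_PARENT/$CATALOG" and restart Presentation Services.
```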

    If you want to move only a few reports or dashboards, do so via Catalog Manager.

    Thank you.

  • I would like to know the "best practices" for permanently disconnecting my computer from the internet and further updates.

    Thank you for taking the time to read this. I would like to know the "best practices" for permanently disconnecting my computer from the internet and further updates. I thought I would do a clean install of Windows XP, install my Microsoft Works again, and nothing else. I would like to effectively transform my computer into a word processor. It keeps getting slower and slower, and I am getting blue screen errors again. I received excellent Microsoft support when this happened before, but since my computer is around 13 years old, I don't think it is worth the headache to try to fix it. I ran the Windows 7 Upgrade Advisor, and my computer would not be able to upgrade. Please, can someone tell me how to make it just a word processor, without updates or an internet connection? (I already have a new computer with Microsoft Windows 7 Home Premium; that's the computer I use. The old computer is just sitting there, and once a week or so I run updates.) I appreciate your time, thank you!

    original title: old computer unstable

    http://Windows.Microsoft.com/en-us/Windows-XP/help/Setup/install-Windows-XP

    http://www.WindowsXPHome.WindowsReinstall.com/sp2installxpcdoldhdd/indexfullpage.htm

    http://aumha.NET/viewtopic.php?f=62&t=44636

    Clean-install XP sites:
    You can choose whichever site to use to reinstall XP.

    Once it is installed, you do not need to connect it to anything; however, some updates may be required for Works to run. Test this by installing Works and seeing whether you get an error message. Other than that, you should be fine.

  • Best practices for managing exceptions and success messages.

    Hey people,

    Lately I've been reworking packages to clean up my application, and a question came to mind: am I treating my exceptions the right way?


    So I want to ask you guys: what is the best practice for this? (I want to learn it before it's too late.)

    Currently I have a function that returns "OK" if all goes well.


    return('OK');  

    And I handle my exceptions like this:

      EXCEPTION
        WHEN OTHERS THEN
          ROLLBACK;
          RETURN (SQLERRM);
    
    

    In APEX, I have a process that calls the function and then checks whether the function returned "OK":

         IF cRet not LIKE 'OK%' THEN
          RAISE_APPLICATION_ERROR(-20000,cRet);
         END IF;
    
    

    And in the process 'Error Message' I put "#SQLERRM_TEXT#" so that I can see what error occurred.

    As a side question, how do you manage your success messages?

    Currently in the process 'Success Message' I put something along the lines of "Action completed successfully", and I do that for all processes.

    Would you do it differently?

    I really want to hear what you have to say since I'm tired of feeling like this is a terrible way to manage these things.

    Hi Para,

    Para wrote:

    I don't know of situations where my function would throw exceptions like NO_DATA_FOUND,

    and I need to know that the process failed so I can get to see my #SQLERRM_TEXT#.

    I got this by raising the error in the application (which I think is a bad way to go) if the return is anything other than 'OK'. I get my application error in the APEX process, and not in my function.
    And I want to show the error inline in the notification (which I currently do with my approach).

    You can use APEX_ERROR.ADD_ERROR in your PL/SQL process to raise errors in Oracle APEX.

    Reference: Re: Re: error in the processing of the page management

    Kind regards

    Kiran

  • What are the best practices for creating time-only data types, without the date?

    Hi gurus,

    We use a 12c DB, and we have a requirement to create a column with a time-only datatype; could someone please describe the best practices for creating this.

    I would strongly appreciate ideas and suggestions.

    Kind regards
    Ranjan

    Hello

    How do you intend to use the time?

    If you are going to combine it with DATEs or TIMESTAMPs from another source, then an INTERVAL DAY TO SECOND or a NUMBER may be better.

    Will you need to perform arithmetic on the time, for example increase the time by 20%, or take an average?   If so, NUMBER would be preferable.

    Are you just going to display it?  In that case, INTERVAL DAY TO SECOND, DATE, or VARCHAR2 would work.

    As Blushadow said, it depends.

  • What is the best practice for a 'regular' VMware Server and VDI environment?

    What is the best practice for a "regular" VMware Server and VDI environment?   Can a single environment (ESXi and SAN) accommodate both if it is a whole new configuration?  Or is it better to keep them separate?

    Appreciate your inputs.

    The quick and dirty answer is "it depends."

    Seriously, it really depends on two things: budget and IO.  If you have the money for two, then buy two and use one to host your server environment and the other for your VDI desktops; their IO profiles are completely different.

    If that is not an option, try to keep each type of use on its own dedicated LUN.

  • Best practices for dealing with Exceptions on storage members

    We recently encountered a problem where one of our DistributedCaches was shutting itself down and restarting due to a RuntimeException being thrown from our code (see below). Granted, it is our own code, and we have updated it to not throw a RuntimeException under any circumstances.

    I would like to know if there are any best practices for exception handling, other than catching exceptions and logging them. Should we always catch exceptions and ensure that they do not propagate back into code running from the Coherence jar? Is it possible to configure Coherence so that our DistributedCaches are not terminated even when custom filters and the like throw RuntimeExceptions?


    Thank you, Aidan


    Exception below:

    2010-02-09 12:40:39.222/88477.977 Oracle coherence GE < error > 3.4.2/411 (thread = DistributedCache:StyleCache, Member = 48): a (java.lang.RuntimeException) exception occurred reading Message AggregateFilterRequest Type = 31 for Service = DistributedCache {Name = StyleCache, State = (SERVICE_STARTED), LocalStorage = active, PartitionCount = 1021, BackupCount = 1, AssignedPartitions = 201, 204 = BackupPartitions}
    2010-02-09 12:40:39.222/88477.977 Oracle coherence GE < error > 3.4.2/411 (thread = DistributedCache:StyleCache, Member = 48): DistributedCache ending because of an exception not handled: java.lang.RuntimeException

    We have reproduced your problem in-house, and it looks like the aggregate filter does not do
    the correct thing (i.e., have a caught exception handled) when hitting a runtime exception.
    In general, runtime exceptions should simply be caught and returned to the application
    without compromising the cache server, so we will be fixing it.

  • Best practices for Master Data Management (MDM) integration

    I am working on integrating MDM with Eloqua and am looking for the best approach to sync Lead/Contact data changes from Eloqua into our internal MDM hub (outbound only). Ideally we would like that integration to be practically real-time, but my findings to date suggest there is no such option; any integration will involve some kind of schedule.

    Here are the options that we had:

    1. "Exotic" CRM integration: use internal events to capture changes and queue them in the internal queue (QIP), and allow access to the queue from outside Eloqua via the SOAP/REST API
    2. Data export: set up a Data Export that is scheduled to run on demand, and poll it externally via the SOAP/REST/Bulk API
    3. Bulk API: poll for changes that have happened since the previous poll through the Bulk API from outside Eloqua (not sure how this differs from the previous option)

    Two other options which may not work at all and are potentially anti-patterns:

    • Cloud connector: create a campaign that polls for changes on a schedule, and configure a cloud connector (if possible at all) to notify an MDM endpoint to query the contact/lead record from Eloqua.
    • "Native" CRM integration (crazy): fake a native CRM endpoint (for example, Salesforce) and use internal events and external calls to Eloqua to push data into our MDM.

    Questions:

    1. What is the best practice for this integration?
    2. Is there an option that would give us close to real-time integration (technically asynchronous, but callback/event-based)? (something like outbound messaging in Salesforce)
    3. What limits should we consider for these options? (for example, daily API call limits, SOAP/REST response size)

    If you can, I would try to talk to Informatica...

    To imitate the native-type integrations, you would use the QIP and control which activities get validated into it by internal events, as you would with a native integration.

    You would also use the cloud connector API to allow you to set up a CRM (or MDM) integration program.

    You would have identification fields added to the Contact and Account objects in Eloqua for their respective IDs in the MDM system, and you would keep track of the last MDM update with a date field.

    A task scheduled outside of Eloqua would run at a set interval, extract the QIP changes and send them to MDM, and pull the contacts waiting to be sent in place of the cloud connector.

    It isn't really anything like outbound messaging, unfortunately.  You can have form submits send data to your server immediately (it would work a bit like running integration rule collections from form processing steps).

    See you soon,

    Ben

  • Best practices for network configuration of vSphere with two subnets?

    Okay, so I'll be setting up 3 ESXi hosts connected to shared storage with two different subnets. I configured the iSCSI initiator and the iSCSI targets with their own default gateway (192.168.1.1) through a Cisco router, and did the same with the hosts configured with their own default gateway (192.168.2.2). I don't know whether I should have a router in the middle to route traffic between the two subnets, since I use iSCSI port binding and NIC teaming. If I shouldn't use a physical router, how do I route the traffic between different subnets and use iSCSI port binding at the same time? What are the best practices for implementing a vSphere network with two subnets (ESX host network / iSCSI network)? Thank you in advance.

    The most common iSCSI setup would be for traffic between the hosts and the storage not to be routed, because a router could reduce performance.

    If you have VLAN 10 (192.168.1.0/24) for iSCSI, VLAN 20 (192.168.2.0/24) for ESX MGMT, VLAN 30 (192.168.3.0/24) for guest VMs, and VLAN 40 (192.168.4.0/24) for vMotion, a deployment scenario might be something like:

    • NIC1 - vSwitch 0 - MGMT VMK (192.168.2.10) active, vMotion VMK (192.168.4.10) standby
    • NIC2 - vSwitch 1 - guest VM port group (VLAN 30) active
    • NIC3 - vSwitch 2 - iSCSI VMK1 (192.168.1.10) active
    • NIC4 - vSwitch 2 - iSCSI VMK2 (192.168.1.11) active
    • NIC5 - vSwitch 1 - guest VM port group (VLAN 30) active
    • NIC6 - vSwitch 0 - MGMT VMK (192.168.2.10) standby, vMotion VMK (192.168.4.10) active

    You would place your storage target on VLAN 10 with an IP address of something like 192.168.1.8, and iSCSI traffic would remain on that VLAN. The default gateway configured in ESXi would be the VLAN 20 router, with an IP address of something like 192.168.2.1. I hope that scenario helps lay out some options.

