Data loss after reverting to a snapshot?

What is the best approach to resolve problems of this kind? Let's say I have a VM mail server.

1. I intend to upgrade the mail server, so I take a snapshot to have something to come back to in case of disaster.

2. I perform the upgrade on the mail server.

3. I let it run for a while to make sure there is no problem with the upgrade... meanwhile the mail server keeps receiving e-mails.

4. If everything works fine, I remove the snapshot.

What happens if I want to go back to the snapshot? In the meantime the mail server has received e-mails, so if I revert to the snapshot I lose them. Of course, I can't lose e-mails.

Thank you for your suggestions

What happens if I want to go back to the snapshot? In the meantime the mail server has received e-mails, so if I revert to the snapshot I lose them. Of course, I can't lose e-mails.

Yes, you lose your mail.

If you store the mail on a separate virtual disk, you can consider setting that disk to independent-persistent mode so that the snapshot does not apply to it.

Or you can make a backup of the mailboxes before reverting to the snapshot, and then restore that data afterwards.
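If the mail store lives on the guest's file system, the "back up, revert, restore" step can be sketched as below. This is only a minimal illustration: the paths and the copy-by-modification-time idea are assumptions, not the forum's recipe; a real mail store (Maildir, mbox, a database) needs its own tooling, and the copy should be taken while the mail service is stopped.

```python
import shutil
from pathlib import Path

# Hypothetical locations - adjust to your mail store layout.
MAILDIR = Path("/var/mail/store")
BACKUP = Path("/root/mail-backup")

def backup_new_mail(snapshot_time: float, maildir: Path = MAILDIR,
                    backup: Path = BACKUP) -> int:
    """Copy every mail file modified after snapshot_time into the backup
    directory, preserving the relative folder layout.
    Returns the number of files copied."""
    copied = 0
    for src in maildir.rglob("*"):
        if src.is_file() and src.stat().st_mtime > snapshot_time:
            dst = backup / src.relative_to(maildir)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy2 preserves timestamps
            copied += 1
    return copied
```

After reverting the VM, copying the backed-up files back into the mail store (again with the service stopped) restores the messages received since the snapshot.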

André

Tags: VMware

Similar Questions

  • Data loss after snapshot removal (and errors)

    This title does not exactly cover all this... but the biggest part is covered.

    I'm fairly new to VMware ESXi, but yesterday we had a problem. We had exactly this error on ONE of our virtual machines.

    We have 2 VMs on a 250 GB SATA drive set up as a mirror.

    Drives: 1 x 40 GB / 1 x 100 GB / 1 x 70 GB

    As you can see, there was little room left. When the system tried to make snapshots (don't ask me why, we certainly didn't want it to; we make backups on a SAN from within Windows), it could not, and it left us with the above error (at least that is what I think). After a few minutes on Google I found this page: http://virtrix.blogspot.com/2007/06/vmware-dreadful-sticky-snapshot.html.

    Then it occurred to me that there simply was not enough room to make a snapshot, so an extra 160 GB hard drive was added to the (physical) server. I copied the virtual disk, including the -000001-delta.vmdk and -000001.vmdk snapshot files, to the new hard drive (which took forever, 103 GB in at least 6 hours; any thoughts on this? /offtopic).

    I removed the hard drive from the virtual machine (on the physical disk, the file had been moved to the new drive) and added the copied virtual disk to the virtual machine (not the delta, but the actual -flat file).

    Then I did exactly what the last line of the page linked above says, the line ending with "great huh?". But things are not that great, because now we have data loss. I think the changes in the snapshot files still need to be merged into the -flat file, but because the situation has changed... is that still possible to do? Otherwise a few weeks of work have disappeared into thin air... (haven't looked at the backups yet...)

    If the virtual machine was never powered on after the creation of hartserver_1-000002.vmdk, we can ignore it.

    I started copying the files after the snapshots were made, but I did not copy the snapshot files. And as I said in my first post, I removed the disk from the virtual machine and added the copied disk (the -flat file) to the VM. That is when the data loss occurred, of course, because the disk knows nothing about the snapshot files.

    In this case, copy the snapshot files to the new folder AND rename the old folder.

    You can also edit the VMX file and remove all the absolute paths.

    Example:

    scsi0:1.filename = hartserver_1-000001.vmdk

    Instead of

    scsi0:1.filename = /vmfs/volumes/49acf302-aff5ac21-ddde-000423c854bc/hartserver_1-000001.vmdk
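    Stripping the absolute datastore prefixes can also be scripted rather than done by hand. A minimal sketch (assuming simple `key = value` lines as in the example above; this is an illustration, so back up the .vmx before touching it):

```python
def strip_absolute_vmdk_paths(vmx_text: str) -> str:
    """Rewrite every *.filename entry in a VMX so it refers to just the
    bare .vmdk name instead of an absolute /vmfs/volumes/... path."""
    out = []
    for line in vmx_text.splitlines():
        key, sep, value = line.partition("=")
        if sep and key.strip().lower().endswith(".filename"):
            # Keep only the last path component of the disk reference.
            v = value.strip().strip('"')
            base = v.rsplit("/", 1)[-1]
            quoted = value.strip().startswith('"')
            line = key.rstrip() + " = " + (f'"{base}"' if quoted else base)
        out.append(line)
    return "\n".join(out)
```

    This only makes sense when the .vmdk files actually sit in the same folder as the .vmx, as in the copy described above.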

  • Oracle GoldenGate Java adapter: data loss after a failure in transactionCommit()?

    I am developing a custom handler to deliver Oracle change logs.

    When an error occurs, normally I can throw a RuntimeException or return Status.ABEND. OGG then logs the error and stops the process.

    The following code works fine when operationAdded() fails (the extract process abends, and when the extract restarts after the error, the operations of the failed transaction are re-sent to the handler).

    @Override
    public Status operationAdded(DsEvent e, DsTransaction tx, DsOperation dsOperation) {
        Status status = super.operationAdded(e, tx, dsOperation);
        ...
        //throw new RuntimeException("op add runtime error");
        return status;
    }

    However, when the error occurs in transactionCommit(), OGG does not work as expected. Neither throwing a RuntimeException nor returning Status.ABEND stops the extract; OGG just keeps working as if nothing happened. (Code below)

    @Override
    public Status transactionCommit(DsEvent e, DsTransaction tx) {
        super.transactionCommit(e, tx);
        Status status = sendEvents();
        handlerProperties.totalTxns++;
        //throw new RuntimeException("tx ci runtime error");
        return Status.ABEND;
    }

    I tried killing and restarting the extract process. The failed transactions were not re-sent to the handler. It seems all the data of the failed transactions was lost!

    Here are the logs from returning Status.ABEND in transactionCommit():

    ...
    DEBUG [main] (AbstractHandler.java:509) - Event: handler=ggdatahub, transactionCommit ( Commit transaction ) DsTransaction [ops=1, buffered=1, state=BEGIN, start=2015-08-21 20:04:25.842275, end=2015-08-21 20:04:25.842275]
    WARN [main] (DsEventManager.java:231) - Error sending event to handler: status=ABEND, event=Commit transaction, handler=ggdatahub
    Exception in thread "main" com.goldengate.atg.util.GGException: Unable to commit transaction, STATUS=ABEND
      at com.goldengate.atg.datasource.UserExitDataSource.commitActiveTransaction(UserExitDataSource.java:1392)
      at com.goldengate.atg.datasource.UserExitDataSource.commitTx(UserExitDataSource.java:1326)
    Error occured in javawriter.c[752]:
    ***********************************************************************
    Exception received committing transaction: com.goldengate.atg.util.GGException: Unable to commit transaction, STATUS=ABEND

    DEBUG [main] (UserExitDataSource.java:504) - (JNI) C-user-exit checkpoint event
    DEBUG [main] (UserExitDataSource.java:1364) - UserExitDataSource.CommitActiveTransaction: Same transaction committed more than once (possibly due to commit-on-checkpoint).
    DEBUG [main] (UserExitDataSource.java:516) - UserExitDataSource.userExitCheckpoint: incrementing the flush counter
    DEBUG [main] (PendingOpGroup.java:315) - now ready to checkpoint? false (was ready? false): {pendingOps=1, groupSize=0, timer=0:00:00.000 [total = 0 ms ]}
    DEBUG [main] (UserExitDataSource.java:504) - (JNI) C-user-exit checkpoint event
    DEBUG [main] (UserExitDataSource.java:1364) - UserExitDataSource.CommitActiveTransaction: Same transaction committed more than once (possibly due to commit-on-checkpoint).
    DEBUG [main] (UserExitDataSource.java:516) - UserExitDataSource.userExitCheckpoint: incrementing the flush counter
    DEBUG [pool-1-thread-1] (AbstractDataSource.java:737) - [2] getStatusReport: Mon Aug 24 10:51:14 CST 2015
    DEBUG [Thread-1] (UserExitDataSource.java:1601) - UserExitDataSource closing, #1 of class="UserExitDataSource"
    DEBUG [main] (PendingOpGroup.java:315) - now ready to checkpoint? false (was ready? false): {pendingOps=3, groupSize=0, timer=0:00:00.000 [total = 0 ms ]}
    DEBUG [Thread-1] (UserExitDataSource.java:1608) - Shutting down data source; attempting a final checkpoint.
    INFO [pool-1-thread-1] (AbstractDataSource.java:730) - Memory at Status : Max: 455.00 MB, Total: 60.50 MB, Free: 27.54 MB, Used: 32.96 MB
    DEBUG [pool-1-thread-1] (UserExitDataSource.java:1637) - time spent checkpointing: 0:00:00.000 [total = 0 ms ]
    DEBUG [Thread-1] (UserExitDataSource.java:1668) - doCheckpoint() called
    INFO [pool-1-thread-1] (AbstractDataSource.java:980) - Status report: Mon Aug 24 10:51:14 CST 2015
    *************************************************
    Status Report for UserExit
    *************************************************
    Total elapsed time:   2 days 14:47:06.139 [total = 226026 sec = 3767 min = 62 hr ]   => Total time since first event
    Event processing time:  0:00:12.692 [total = 12 sec ]   => Time spent sending msgs (max: 4795 ms)
    Metadata process time:  0:00:02.159 [total = 2 sec ]   => Time spent receiving metadata (1 tables, 3 columns)
    Operations Received/Sent:  3 / 3
    Rate (overall):   0 op/s (peak: 0 op/s)
      (per event):   0 op/s
    Transactions Received/Sent: 2 / 0
    Rate (overall):   0 tx/s (peak: 0 tx/s)
      (per event):   0 tx/s
    3 records processed as of Mon Aug 24 10:51:14 CST 2015 (rate 0/sec, delta 3)
    *************************************************

    Does anyone know how to fix this? Thanks in advance!


    For others who may encounter this problem:

    It turns out to be a bug...

    I switched from Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230 to Version 12.1.2.1.4 OGGCORE_12.1.2.1.0OGGBP_PLATFORMS_150303.1209 with patch 20470586. Everything works fine now.

  • Data loss after restoring system (non-destructive) F11 on Presario

    Hello:

    I have a Compaq Presario V3019 with Windows XP. I did a "non-destructive" system recovery using the F11 key at power-on. The restoration of the partition went well, and then I completed the Windows setup/configuration sequence (language, time zone, user account names, etc.). But now I don't see any of my files under My Documents (I had several subfolders and files there)!

    Moreover, in the last step of Windows setup, when you are prompted to create user accounts, (not knowing any better) I used the same name as before for my account (except the first letter was lower case, not upper). Would using the same name as before (is it case-sensitive?) cause the original My Documents folder to be overwritten? Is there a way to get the old folder (with all subfolders and files) back? Help, please.

    Alex

    OMG! I FOUND MY MISSING FILES!  Over 30 GB, collected over 10 years (pictures, tax returns, e-mail archives, everything...)!  THANK YOU, Adam! I followed your suggestion above and voila! another folder "Alex" appeared which was previously hidden.  But I couldn't get inside this folder.  I kept thinking: it's got to be the missing folder; I have to get inside.  I then found some instructions for accessing this folder at this link:

    http://support.HP.com/us-en/document/nph00268.

    I followed the steps there, restarted into normal Windows, and there it was: "Alex Documents"! All of them!

    Half a million thanks to Adam for leading me there!

    Alex

    PS: How do I mark this thread "solved" and click on the Kudos stars?

  • Data loss after using the recovery disc - how to recover data

    I know this may sound stupid and I know that I was...

    But it was the first time I had to run the recovery disc. I moved all my important documents to the second partition (E:) and then ran the recovery disc, thinking it would only format the first partition (C:)...
    Well, I was wrong.

    Can anyone please help me with a way to get the data back from my second partition (E:)?

    Hi mate

    I have bad news for you
    The HARD drive has been formatted by the recovery DVDs, which means that all available partitions have been deleted!

    I'm sorry but I don't see the options to retrieve data :(

    Of course, there are many companies that can recover data from hard drives, depending on how the drive was formatted, but in my opinion the costs would be immense.

  • Intermittent data loss - PL/SQL function returning data using a ref cursor

    Database Version: 10.2.0.4.0 (2-node RAC)

    The high-level process flow is as below:
    (1) insert records in a few tables & commit the same
    (2) call a PL/SQL function to extract data (with certain conditions and joins to other tables) from the tables populated in step 1.
    -> It uses an ORDER BY clause in an inline query and ROWNUM to return 5000 records per call.
    Meaning: if the inline query is supposed to return 100,000 records, the function is called 20 times, because the application cannot hold more records than that at once.
    (3) the data returned by the ref cursor is then processed by the application (Tibco BW) to generate a flat file.

    We are facing data loss in the generated file, with no fixed pattern; it happens roughly once every 200-300 runs of the process.
    Workaround: when the problem occurs, re-running the process almost always produces the required data.

    Guidance on what could be the reason?

    * Sample code for the function:
    CREATE OR REPLACE FUNCTION FUNC_GET_HRCH_TOTAL_DATA (
        outinstrid     IN NUMBER,
        outinstrkey    IN NUMBER,
        rownumberstart IN NUMBER,
        rownumbereend  IN NUMBER,
        err_code       OUT VARCHAR2,
        err_msg        OUT VARCHAR2)
    RETURN PACK_TYPES.HRCH_TOTAL_CURSOR
    IS
        REF_HRCH_TOTAL_CURSOR PACK_TYPES.HRCH_TOTAL_CURSOR;
    BEGIN
        OPEN REF_HRCH_TOTAL_CURSOR FOR
            SELECT *
              FROM (SELECT A.HIERARCHY_KEY, B.KEY, B.VAL_KEY, A.KEY_NEW, C.ITEMID, B.VAL_TAG, B.sort_order, ROWNUM ROWNUMBER
                      FROM AOD_HRCH_ITEM A, AOD_HRCH_ATTR B, AOD_HRCH_ITEMS C
                     WHERE A.outputid = B.outputid
                       AND A.outputid = C.outputid AND A.outputkey = B.outputkey
                       AND A.outputkey = C.outputkey AND A.outputid = outinstrid
                       AND A.outputkey = outinstrkey AND A.ITEM_SEQ = B.ITEM_SEQ
                       AND A.ITEM_SEQ = C.ITEM_SEQ AND A.HIERARCHY_LEVEL_ORDER = B.SORT_ORDER
                     ORDER BY A.HIERARCHY_LEVEL_ORDER DESC)
             WHERE ROWNUMBER < rownumbereend
               AND ROWNUMBER >= rownumberstart;

        RETURN REF_HRCH_TOTAL_CURSOR;
    EXCEPTION
        WHEN OTHERS THEN
            err_code := x_progress || ' - ' || SQLCODE;
            err_msg  := SUBSTR(SQLERRM, 1, 500);
    END FUNC_GET_HRCH_TOTAL_DATA;
    /

    Published by: meet_sanc on February 16, 2013 10:42

    Your SELECT statement is almost certainly incorrect

    SELECT *
      FROM ( SELECT A.HIERARCHY_KEY, B.KEY, B.VAL_KEY, A.KEY_NEW, C.ITEMID, B.VAL_TAG, B.sort_order,ROWNUM ROWNUMBER
               FROM AOD_HRCH_ITEM A, AOD_HRCH_ATTR B, AOD_HRCH_ITEMS C
              WHERE A.outputid = B.outputid
                AND A.outputid = C.outputid AND A.outputkey = B.outputkey
                AND A.outputkey = C.outputkey AND A.outputid = outinstrid
                AND A.outputkey = outinstrkey AND A.ITEM_SEQ = B.ITEM_SEQ
                AND A.ITEM_SEQ = C.ITEM_SEQ AND A.HIERARCHY_LEVEL_ORDER = B.SORT_ORDER
              ORDER BY A.HIERARCHY_LEVEL_ORDER DESC)
     WHERE ROWNUMBER < rownumbereend
       AND ROWNUMBER >= rownumberstart;
    

    Since ROWNUM is assigned before the ORDER BY is applied in this case, your query is asking for an arbitrary set of 5000 rows. It would be perfectly valid for the same row to be returned by each of your 20 different calls, or for a row to be returned by none of them.
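    The effect is easy to simulate outside the database. Here is a toy Python model of the two orderings (this mimics only the numbering logic, not Oracle's actual execution):

```python
def page_rownum_before_sort(scan_order, start, end):
    """Mimics the posted query: ROWNUM is assigned in whatever order the
    rows come off the table scan, THEN the ORDER BY is applied, THEN the
    outer filter keeps row numbers in [start, end)."""
    numbered = list(enumerate(scan_order, 1))       # (rownum, value)
    ordered = sorted(numbered, key=lambda t: t[1])  # ORDER BY value
    return [v for n, v in ordered if start <= n < end]

def page_sort_before_rownum(scan_order, start, end):
    """The canonical pattern: sort first, then number, then filter."""
    ordered = sorted(scan_order)
    return [v for n, v in enumerate(ordered, 1) if start <= n < end]
```

    With two equally legal scan orders of the same four rows, the first function returns a different "page 1" each time, while the second is stable regardless of scan order.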

    You definitely want to do something along the lines of the canonical askTom thread:

    select *
      from ( select a.*, rownum rnum
               from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
              where rownum <= MAX_ROWS )
     where rnum >= MIN_ROWS
    

    That said, it seems inconceivable that Tibco is unable to manage a cursor that returns more than a certain number of rows. You are doing a ton of work to page the data that is almost certainly not necessary. Unless you're saying that you somehow crippled your Tibco installation by giving it a ridiculously small amount of memory to work with, something doesn't look right. A cursor is just a pointer - it holds no data itself - so the number of rows you can fetch from a cursor should have no impact on the amount of memory the client application needs.

    As others have already pointed out, your exception handler is almost certainly doing more harm than good. Returning error codes and error messages via OUT parameters, instead of simply allowing the exception to propagate, throws away a ton of useful information (such as the error stack) and makes your process much less robust.

    Justin

  • My iPad is disabled. How do I activate the iPad without data loss? I know the password

    My iPad is disabled. How do I activate the iPad without data loss? I know the password.

    You can not. If you know the password, how did the iPad become disabled? If the device is disabled, the content is no longer accessible. The only way to recover the device is to restore it; you can restore your last backup after that. Follow the instructions in this support document: If you have forgotten the passcode for your iPad, iPhone, or iPod touch, or your device is disabled - Apple Support

  • Need help to avoid data loss on a corrupt and/or failure to hard drive

    I don't know what happened. One minute I'm surfing the web and listening to music on iTunes. Then, out of nowhere, a sudden failure. It didn't take long at all, but now when I try to turn on my computer, it starts normally, but shortly after the chime and the Apple logo appear there is a progress bar I had never seen before today. When the progress bar is still only the size of my little fingernail, the computer just turns off. I booted into recovery mode and ran Disk Utility's verify and repair. Verifying the disk did not get far before it said something like: the disk cannot be verified completely; the disk needs to be repaired. When I click Repair Disk, it runs for maybe a minute before it comes up saying: Disk Utility can't repair this disk. Back up as many of your files as possible, reformat the disk, and restore your backed-up files. So obviously that's not good. What are my options, and what can I do to avoid losing data that is not already backed up? I have a lot of things that have not been backed up, and losing some of them would be pretty damning. That's my main goal: as little data loss as possible. My computer is a late 2012 iMac running Mavericks 10.9.5.

    Thank you.

    You do not have a current backup? If not, your only real option may be to pay a data recovery company and see if they can help you. That can be very expensive.

  • Disable window maximize AFTER returning to Windows 7 from Windows 10

    I recently went back to Windows 7 (after some months on Windows 10) and since then I cannot turn off window auto-maximize. I disabled the "prevent windows from being automatically arranged" option under Ease of Access, and tried the mouse and keyboard combinations for toggling it on and off.

    Can you please help me turn this feature off? No offense to anyone, but I really hate it.

    Thank you

    Deb.

    Hello Deb,

    Thanks for posting your query on the Microsoft Community.

    From your description, you would like to deactivate the Windows window-arranging feature on the desktop.

    Since the issue arose after going back to Windows 7, there is a possibility that the display driver got corrupted. You have already done basic troubleshooting to solve the problem. I suggest updating the graphics driver and checking again.

    Make sure that your display driver is up to date. To update it, follow the steps below.

    1. Press the Windows key + R to open Run.
    2. Type devmgmt.msc in Run and press Enter.
    3. Expand Display adapters.
    4. Right-click the device and select Update Driver Software.

    You can also try:

    Update a driver for hardware that isn't working properly:

    http://windows.microsoft.com/en-us/windows7/update-a-driver-for-hardware-that-isnt-working-properly

    Hope this information is useful. Please let us know if you need help with Windows.

  • Applying a Boolean build expression causes data loss for condition tags excluded with "NOT"

    Hi all

    I am running FM12.0.2.389 on Win7-SP1 (64-bit) laptop with i7 & 16 GB of RAM.

    I reported this problem as a bug in the Adobe bug base, but they were not able to reproduce the bug with the files I sent. This leads me to believe that the conflict is perhaps the result of something gone wobbly or conflicting in my system. Here's what happens:

    Using "Set Expression" causes FM to crash on the next File > Save, displays an error message about an imported graphic, and loses the data that was tagged with conditions excluded via "NOT" in the Boolean build expression.

    Steps to reproduce using my files (zipped):

    1. Open Ch1_Introduction-4FC.fm.

    2. Create a build expression named 3DpartnerKitConditions using this definition:

    not "LIVEonly" and not "InstallOnly" and not ("Commentary" or "Deleted" or "Future" or "international" or "PostInstall" or "Question" or "SDinstallOnly")

    3. File > Save.

    4. Apply the build expression.

    5. File > Save... (crash)

    Result: all data tagged with conditions excluded via Boolean NOT is no longer in the file, as seen in Ch1_Introduction-droppedText.fm.

    What I THOUGHT would happen is that data tagged with conditions excluded via Boolean NOT would stay in the file, just hidden, and could easily be shown via Show/Hide Conditional Text > Show All with condition indicators on. Sometimes FM does not crash and does behave this way, but 90% of the time the result is the crash and data loss.

    Other things I am looking at: I have a few scripts and plugins running that were written for previous versions of FM. Yes, I'm addicted to the goodies from CudSpan, Bruce Foster, Silicon Prairie, FrameExpert, Sundorne and Miaramo. I noticed that SafetyMIF (admittedly written for FM11) causes FM to crash on exit, but since we use Git repositories for incremental saves, regularly producing MIF is quite complicated anyway, and saving the binary file from FM bloats the repository far too much to be happy at all.

    I don't know what triggers the Boolean expression issue, but I really, really need to solve it. I'm a solo writer (for now) with a very heavy doc load and deadlines that can only be managed with sound single-sourcing methods. Thus, these files depend heavily on variables, condition tags, files shared between different books, shared graphics and text insets.

    Can someone in the community help me find a way to identify the root cause? I plan to approach troubleshooting by first uninstalling FM, cleaning out the typical Windows uninstall leftovers, reinstalling FM fresh, then adding the plugins one at a time and attempting the Boolean tag setting after each plugin is added, to see if I can tell which one triggers the issue. If it is a corrupted FM installation, that would solve the problem, too.  What do you think, guys?

    Thank you

    René

    Rene,

    It is a sound approach and probably the only one, because there may be interactions between them (even though I probably have most of the same plugins and do not see the odd-ball crash during saving).

    The Bruce Foster ones are not compatible with FM12 (or even 11) - too old. Some of Chris's plugins also had problems with FM11 and 12 if you don't have the correct MS 32-bit runtimes installed (even on a 64-bit operating system).

  • Data loss and recovery

    Hello Experts,

    I took a full backup on July 21, 2011 at 09:00, and a "backup archivelog all" on July 21 at 17:00. The next day I took another full backup, on July 22, 2011 at 09:00. Now I am confused: is the "backup archivelog all" still necessary after the full backup taken on July 22, 2011 at 09:00?

    We can tolerate some data loss, and I want to restore the database up to the last full backup of July 22, 2011 09:00. Or, assuming I lost my archivelog backups and archived log files too, can we restore the database without the archivelog backups and files?

    Thanks in advance

    Kind regards
    Asif

    Published by: Asif Hussain on July 21, 2011 23:29

    If my database has created over 1000 archivelog files in the 6 months since its creation date, do I then need all the archived log files up to today along with the full backup?

    Answer: No, you only need the archived logs generated after your last full backup.

    Created DB: January 1, 2011

    Last full RMAN backup: July 20, 10:00

    Then, to recover the database without any data loss, you need the full backup of July 20 10:00 + all archived logs generated after that full backup.

    # Time-based recovery

    startup mount;
    restore database until time "TO_DATE('10/29/10 19:00:00','MM/DD/YY HH24:MI:SS')";
    recover database until time "TO_DATE('10/29/10 19:00:00','MM/DD/YY HH24:MI:SS')";
    alter database open resetlogs;

    # SCN-based recovery

    startup mount;
    restore database until scn 10000;
    recover database until scn 10000;
    alter database open resetlogs;

    # Log-sequence-based recovery

    startup mount;
    restore database until sequence 100 thread 1;
    recover database until sequence 100 thread 1;
    alter database open resetlogs;

    Regards
    Asif Kabir

  • What are the causes of data loss during an update?

    Just what really causes data loss or corruption when you update? Most sites tell you to back up, which I do, but none of them actually name a cause to back up against.

    -Thanks in advance

    Minor updates within an OS X version *may* break the file system.  Full upgrades to another OS X version replace almost the entire file system.

    Some people even back up before installing certain software packages, because they know how the package changes OS X directories.

    Backups are never a waste of time.

  • Palm E2 data loss

    Help! I lost my Palm E2 and tried to sync my desktop with a new E2 I had, and it somehow wiped the desktop data.  Is it possible to recover what was in the calendar before my mishap?  Thank you!

    millershrink wrote:

    Thanks, WyreNut.  I was not able to recover the earlier data from the desktop, as a tech friend who spent 2 hours at my house last night explained to me, because I had inadvertently set my sync so the handheld replaces the desktop.  He was able to recover a large part of the lost data because I have Carbonite on this PC.  So I'm pretty much OK.  Once again, thank you for your time and expertise.

    Exceptional!

    You can never have too many 'tech' friends, huh?   Props to him for properly diagnosing our ancient, yet superior PDA system!

    WyreNut

  • Producer/Consumer vs Master/Slave data loss

    Hello LabVIEW users,

    I have been experimenting with P/C vs M/S lately in the context of data acquisition. Many posts seem to say (correct me if I'm wrong) that producer/consumer is the correct architecture if data retention is desired, so that's what I tried to implement.

    In the producer loop, I've included 3 DAQ Assistant VIs (2 AO, 1 AI), which actively generate data that is put on the queue. I have 3 other loops, two of which are consumer loops: 1) one writes the data to TDMS, 2) one outputs the data to a chart, and 3) not a consumer: an event structure that monitors user-interface changes.

    When I run the data acquisition, the generated file always contains only half of the samples I specified, regardless of the sampling rate. I also get a strange-looking graph (see image).

    However, if I replace it with a master/slave architecture, the problem seems to go away.

    So the question is: has anyone run into this kind of problem? Are there disadvantages to sticking with M/S instead of P/C in terms of data loss?

    Thank you

    johnji wrote:

    I think I'm doing it right

    You are not.  Once an item is dequeued, it is gone.  Since you have three loops reading from the same queue, each loop will get ~1/3 of the data.  If all the loops are supposed to process everything, then you want a separate queue for each loop and have the producer write to every queue.  You could also use a user event.
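    The dequeue behaviour is the same in any queue implementation, not just LabVIEW's. A small Python sketch of the two designs (an illustration of the principle, not LabVIEW code):

```python
import queue
import threading

def consume(q, out):
    # Drain the queue until the sentinel None arrives.
    while True:
        item = q.get()
        if item is None:
            break
        out.append(item)

def one_shared_queue(data, n_consumers=2):
    """All consumers dequeue from the SAME queue: each item is seen by
    exactly one consumer, so the data is split between them."""
    q = queue.Queue()
    results = [[] for _ in range(n_consumers)]
    threads = [threading.Thread(target=consume, args=(q, results[i]))
               for i in range(n_consumers)]
    for t in threads:
        t.start()
    for item in data:
        q.put(item)
    for _ in threads:
        q.put(None)  # one sentinel per consumer
    for t in threads:
        t.join()
    return results

def one_queue_per_consumer(data, n_consumers=2):
    """The producer writes every item to EVERY queue: each consumer sees
    the full stream."""
    qs = [queue.Queue() for _ in range(n_consumers)]
    results = [[] for _ in range(n_consumers)]
    threads = [threading.Thread(target=consume, args=(qs[i], results[i]))
               for i in range(n_consumers)]
    for t in threads:
        t.start()
    for item in data:
        for q in qs:
            q.put(item)
    for q in qs:
        q.put(None)
    for t in threads:
        t.join()
    return results
```

    With one shared queue the items are divided among the consumers; with one queue per consumer, every consumer receives every item, which is what the TDMS-writer and chart loops each need.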

  • How to convert NTFS to FAT32 without data loss?

    Hello
    I want to format a computer with 4 logical partitions, where C: is FAT-partitioned (Windows NT Server) and the others (D:, E: and F:) are NTFS.

    Now I want to install Win98 on it, which requires the hard drive to be FAT32-partitioned. How do I do that without losing the valuable data on all 3 NTFS partitions?

    Hello

    I think you are very lucky.

    Aomei Technology is giving away NTFS to FAT32 Converter Pro Edition 1.5, which can help you convert NTFS to FAT32 without data loss. The offer runs from 18/05/2011 to 23/05/2011.
    Giveaway page: http://www.aomeitech.com/giveaway/ntfs2fat32.html

Maybe you are looking for

  • Why do my bookmarks keep disappearing since updating to Firefox 39?

  • Windows Update changes home page

  • Files & documents within a folder moved to another folder

  • ACS 5.7.0.15 backup fails

  • Cisco VMfex port-profile manager Hyper-V 2.2 UCS 2012R2