Best practices for automating ghettoVCBg2 from cron

Hello world!

I set up a vMA instance to schedule backups with ghettoVCBg2 to a SAN datastore. Everything works like a charm from the command line: I use vi-fastpass for authentication, and the backups complete just fine.

However, I would like to drive this from a cron script, and got stuck. Since vifp is designed to be run interactively from the command line and, as I read, is not supposed to work from a script, it seems the only option would be to create a dedicated backup user with administrator privileges and store the username and password in the shell script. I'm not happy doing that. I searched the forums but couldn't find any simple solution.

Any ideas for best practices?

Thank you

eliott100

Actually, that's not quite right. The script relies on the ESX or ESXi hosts being managed by vi-fastpass, but when the script runs it does not use the vifpinit command to connect. It accesses credentials via the vi-fastpass library modules, which do not require vifpinit; as you noticed, it is only that interactive utility that cannot be run from a script. The script itself can therefore be scheduled via cron: you do not need to run it interactively, just set it up in your crontab. Please take a look at the documentation for more information.
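As a minimal sketch (all paths here are assumptions, and the `--vmlist` flag should be checked against the ghettoVCBg2 documentation rather than taken from this thread), a wrapper script that cron can call might look like:

```shell
#!/bin/sh
# Sketch of a cron wrapper for ghettoVCBg2 on the vMA.
# Paths and the --vmlist option are illustrative -- check your own install
# and the ghettoVCBg2 docs for the real locations and flags.
set -u

GHETTOVCB="/home/vi-admin/ghettoVCBg2/ghettoVCBg2.pl"   # assumed install path
VMLIST="/home/vi-admin/backup_vms.list"                 # assumed VM list file
LOGDIR="${TMPDIR:-/tmp}/ghettovcb-logs"

mkdir -p "$LOGDIR"
LOGFILE="$LOGDIR/ghettoVCBg2-$(date +%Y%m%d-%H%M%S).log"

# ghettoVCBg2 obtains credentials through the vi-fastpass library itself,
# so no interactive vifpinit call is needed before launching it.
if [ -x "$GHETTOVCB" ]; then
    "$GHETTOVCB" --vmlist "$VMLIST" > "$LOGFILE" 2>&1
else
    echo "ghettoVCBg2 not found at $GHETTOVCB (dry run)" > "$LOGFILE"
fi

# Example crontab entry (edit with "crontab -e" as vi-admin): run at 01:30.
#   30 1 * * * /home/vi-admin/bin/ghettovcb_cron.sh
echo "log written to $LOGFILE"
```

With a wrapper like this in place, the commented crontab line is the only scheduling needed; vi-fastpass supplies the credentials, so nothing sensitive ends up in the script.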

=========================================================================

William Lam

VMware vExpert 2009

Scripts for VMware ESX/ESXi and resources at: http://engineering.ucsb.edu/~duonglt/vmware/

Twitter: @lamw

vGhetto scripts repository

Introduction to the vMA (tips/tricks)

Getting started with vSphere SDK for Perl

VMware Code Central - Scripts/code samples for developers and administrators

VMware developer community

If you find this information useful, please award points for "correct" or "helpful".

Tags: VMware

Similar Questions

  • Best practices for retrieving a single value from the Oracle Table

    I'm using Oracle Database 11g Release 11.2.0.3.0.

    I would like to know the best practice for doing something like this in a PL/SQL block:

    DECLARE
        v_student_id    student.student_id%TYPE;
    BEGIN
        SELECT  student_id
        INTO    v_student_id
        FROM    student
        WHERE   last_name = 'Smith'
        AND     ROWNUM = 1;
    END;

    Of course, the problem here is that when no row is found, the NO_DATA_FOUND exception is raised, which interrupts execution.  So what if I want to continue despite the exception?

    Yes, I could create a nested block with an EXCEPTION section, etc., but that seems awkward for what looks like a very simple task.

    I've also seen this handled like this:

    DECLARE
        v_student_id    student.student_id%TYPE;
        CURSOR c_student_id IS
            SELECT  student_id
            FROM    student
            WHERE   last_name = 'Smith'
            AND     ROWNUM = 1;
    BEGIN
        OPEN c_student_id;
        FETCH c_student_id INTO v_student_id;
        IF c_student_id%NOTFOUND THEN
            DBMS_OUTPUT.PUT_LINE('not found');
        ELSE
            NULL;  -- do stuff here
        END IF;
        CLOSE c_student_id;   
    END;

    But it still seems like killing an ant with a hammer.

    What is the best way?

    Thanks for any help you can give.

    Wayne

    201cbc0d-57b2-483a-89f5-cd8043d0c04b wrote:

    What happens if I want to continue despite the exception?

    It depends on what you want to do.

    You expect only 0 or 1 rows. SELECT INTO expects exactly 1 row. In this case, SELECT INTO may not be the best solution.

    What exactly do you want to do if 0 rows are returned?

    If you want to set the variable to NULL and continue processing, Frank's response looks good; otherwise use Billy's modular approach.

    If you want to "do stuff" when you get a row and do nothing when you don't, then you can consider a FOR loop:

    declare
      l_empno scott.emp.empno%type := 7789;
      l_ename scott.emp.ename%type;
    begin
      for rec in (
        select ename from scott.emp
        where empno = l_empno
        and rownum = 1
      ) loop
        l_ename := rec.ename;
        dbms_output.put_line('<' || l_ename || '>');
      end loop;
    end;
    /
    

    Note that when no row is found, there is no output at all.

    Post edited by: StewAshton - Oops! I forgot to put the result in l_ename...

  • Best practices for connecting to a database from a Web service

    I am developing Web services that need to connect to an Oracle database. Currently, I connect to the database using JDBC like this:

    con = DriverManager.getConnection(connectURL, "HR_DATA_READERS", "xxxxx");

    I do not like this approach because it hardcodes the connection string and account information. One of our colleagues suggested using a properties file, but that leaves the username and password in plain view. Given that we can define JDBC data sources on the SOA server, and the server offers all the nice features like connection pooling etc., is there a way to connect to the database using a data source defined on the SOA server rather than defining a JDBC connection in the Web service itself? Is that the best way to do it, or is there a better way?

    Thank you

    Hi Alix,

    If you are in a JEE-compatible environment, i.e. WLS 12c, you can use the resource injection approach instead of manually coding the DS lookup proposed above.

    You will find more information on injected resources here: http://docs.oracle.com/cd/E19226-01/820-7627/6nisfjmbl/index.html

    For example, you can use this tutorial: "Servlet JDBC DataSource Resource Injection" from Open Tutorials.

    Take a look at sections 5.3 and 5.4 only, which show what the code should look like and how to configure the resource in the web.xml file.

    And always use try/finally to close the DB connection; otherwise, in the case of an exception, the connection is not returned to the pool, and at some point the pool will be exhausted.

    Hope this helps,

    A.

  • Best practices for creating a new project from an existing WebHelp

    I am currently working in WebHelp Pro (RH version 9.0.2.271).

    I have a WebHelp project that currently supports the 2012 version of one of our products. What I needed to do was create a separate project for 2013, using the 2012 files as a starting point. I couldn't find a way in RH to create a new project by importing an existing WebHelp project, so I copied the 2012 files into a new directory, opened the project, and renamed it.

    What prompted this question is that, after this exercise, all seemed well at first. However, I recently had to create new topics in the 2012 version, and when I imported those topics into the 2013 project and compiled, they disappeared, even though the .htm files still appear in the appropriate folder of the 2013 project (when viewed with Windows Explorer).

    After reading a few posts on the forum, I suspect I may have corrupted my database by creating the new project incorrectly; but if what I did is the wrong way to go about this, I don't know what the right way is. I would be grateful for any suggestions.

    The easy way to do this is to create a copy using Windows Explorer.

    Open the copy and click File > Rename.

    Then you have your 2013 project ready.

    See www.grainge.org for RoboHelp and authoring tips

    @petergrainge

  • Best practices for the storage of information from lead

    Hi all

    We use a qualification process based on a lead assessment script. Via this script our reps ask the potential customer 8 questions and a score is calculated.

    The reps will then begin the process of converting leads into opportunities, in order of priority according to the assessment script score.

    The information entered in the assessment script is stored in 8 fields on the lead record and is relevant to only one department in a hospital (e.g. anesthesia). This information is very valuable to us going forward, because it tells us about the prospect's potential for purchasing other products.

    Now, I want to make sure this information is carried over in the most appropriate way when leads are converted into accounts, contacts, and opportunities. My first thought was to create 8 new fields on the opportunity record, tie them to the record type, and map the lead conversion to those fields. On reflection, though, this would leave me with a large amount of redundant data. Also, the data will be updated regularly, which will cause problems with this solution.

    Another option is to display the 8 lead fields and related information on the opportunity and contact record detail pages. Or I could pass the data to the contact record and tell the reps that this is where it should be updated in the future.

    I'm pretty new to OnDemand, so I may not be on the right track here! Any help will be much appreciated :-)

    Kind regards
    Allan

    Allan, once the lead is converted, the lead record (with your 8 fields) is available as a related record under the opportunity and contact records. This allows you to make updates to those fields in one place, without having to do it several times as you would if you mapped them during lead conversion.

  • Best practices for Smartview when upgrading from Excel 2003 to Excel 2007?

    Does anyone know the best practice for Smartview when upgrading from Excel 2003 to Excel 2007?


    Current users have Microsoft Excel 2003 with Smartview 9.3.1.2.1.003.

    Computers are upgraded to Microsoft Excel 2007.


    What is the best practice for Smartview in this situation?

    1. Do nothing with Smartview and just install Excel 2007.

    2. Install Excel 2007, then uninstall and reinstall Smartview.

    3. Uninstall Smartview, install Excel 2007, then reinstall Smartview.

    4. Something else?


    Thank you!

    We went with option 1 and it worked very well. Be aware that Smartview performs substantially slower in Excel 2007 than in 2003; many users were unhappy about the switch. We have not tested SV v11 yet, so I don't know whether it improves performance with Excel 2007 or not (hopefully it does).

  • Best practices for SQL

    I started a discussion a few weeks ago about moving .vmdks from one storage vendor to another and got some great answers.  What I didn't ask was: if the virtual disks contain SQL databases, are there special considerations when carving up storage on an EMC VNX (pools versus RAID groups) or when the datastore is created in vCenter?  I know this may be more of an EMC forum question, but maybe someone in the VMware world has a recommendation or can provide a link to best practices for the VNX.  Thank you.

    Post edited by: vmroyale to change the case of SQL

    Have you seen the document "Using EMC VNX Storage with VMware vSphere"?

  • best practices for placing the master image

    I'm doing some performance/load-test analysis for View, and I'm curious about best practices for placing the master image VM. The question is specifically about disk I/O and throughput.

    My understanding is that each linked clone still reads from the master image. If that is correct, then it seems you would want the master image to reside on a datastore located on the same array as the rest of the datastores that house the linked clones (and not on some lower-performing array). The reason I ask is that my performance testing is based on some upcoming SSD products. Obviously, the amount of available space on the SSD is limited, but it provides immense amounts of I/O (100k+ IOPS and higher). I want to be sure that by putting the master image on a datastore that is not on the SSD, I am not invalidating the high-end IO performance I want from the SSD.

    This leads to another question: if all the linked clones read from the master image, what is the general practice for the number of linked clones to deploy per master image before you start to see IO contention against that single master image?

    Thank you!

    -


    Omar Torres, VCP

    This isn't really necessary. Linked clones are not directly linked to the parent image. When a desktop pool is created and uses one or more datastores, the parent is copied into each datastore as a so-called replica. From there, each linked clone is attached to the replica of the parent in its local datastore. It is an unmanaged replica and offers the best performance because there is a copy in every datastore containing linked clones.

    WP

  • Just upgraded - tips on best practices for sharing files on a Server 2008 Std.

    The domain contains about 15 machines with two domain controllers; one holds the data for app files / print etc.  I just upgraded from 2003 to 2008 and want advice on best practices for setting up file sharing. Basically I want each user to have their own folder, but also a shared staff folder. Since I am accustomed to using Windows Explorer, I would like to know whether these shares can be set up in a better way. Also, I noticed 2008 has a Contacts feature. How can it be used? I would like to message or email users their file locations. Also, I want to set up a lower-level admin to manage the shares without letting them too far into the server; not sure how.

    I have read a bit, but I don't like testing directly in production any more because it can cause problems. So basically: a short and neat way to manage shares using the MMC, as well as a way to email users the locations of their shares from the server. Also, what kind of access control or permissions are suitable for documents? And how can I have users work from Office templates without changing the format of the template?

    THX

    g

    Hello 996vtwin,

    Thank you for visiting the Microsoft Answers site. The question you have posted is related to Windows Server and would be better suited to the Windows Server TechNet community. Please visit the link below to find a community that will support what you are asking:

    http://social.technet.Microsoft.com/forums/en-us/category/WindowsServer

    Hope this helps!

    Adam
    Microsoft Answers Support Engineer
    Visit our Microsoft answers feedback Forum and let us know what you think

  • What are the best practices for creating time-only data types, without the date?

    Hi gurus,

    We use a 12c DB, and we have a requirement to create a column with a time-only datatype. Could someone please describe the best practices for creating this?

    I would greatly appreciate any ideas and suggestions.

    Kind regards
    Ranjan

    Hello

    How do you intend to use the time?

    If you are going to combine it with DATEs or timestamps from another source, then an INTERVAL DAY TO SECOND or NUMBER may be better.

    Will you need to perform arithmetic operations on the time, for example increase it by 20%, or take an average?   If so, NUMBER would be preferable.

    Are you just going to display it?  In that case, INTERVAL DAY TO SECOND, DATE or VARCHAR2 would work.

    As Blushadow said, it depends.

  • Best practices for building an infrastructure of APEX for 12 c

    Hi all

    Do we have any docs on best practices for building an APEX infrastructure?

    Meaning: for production, is it acceptable to use the embedded PL/SQL gateway as the listener, or should we stick with the APEX Listener deployed on WebLogic?

    Thank you

    Hi JCGO,

    JCGO wrote:

    Hi all

    Do we have any docs on best practices for building an APEX infrastructure?

    Meaning: for production, is it acceptable to use the embedded PL/SQL gateway as the listener, or should we stick with the APEX Listener deployed on WebLogic?

    Thank you

    I agree with Scott's response: "it depends." It starts with the appropriate choice of a web listener.

    You should avoid the EPG-based setup in production environments, in accordance with Oracle's recommendation.

    Reference: see the "Security Considerations When Using the Embedded PL/SQL Gateway" section.

    ORDS (APEX Listener) + Oracle WebLogic Server sounds good, if you have already tried it and have the appropriate expertise to manage it.

    Also, you might consider other ORDS-based setups, e.g. ORDS + Apache Tomcat with an Apache HTTP Server reverse proxy, as described here:

    Dimitri Gielis Blog (Oracle Application Express - APEX): Prepare the architecture for the APEX 5.0 upgrade

    But that depends on the Apache skills you have within your organization.

    I hope this helps!

    Kind regards

    Kiran

  • Best practices for the integration of the Master Data Management (MDM)

    I am working on integrating MDM with Eloqua and am looking for the best approach to sync lead/contact data changes from Eloqua into our internal MDM hub (outbound only). Ideally, we would like that integration to be practically real-time, but my findings to date suggest there is no such option; any integration will involve some kind of schedule.

    Here are the options that we had:

    1. "Exotic" CRM integration: use internal events to capture changes and queue them in the internal queue (QIP), and access that queue from outside Eloqua via the SOAP/REST API
    2. Data export: set up a Data Export that is scheduled to run on demand, and poll it externally via the SOAP/REST/Bulk API
    3. Bulk API: poll for changes that have happened since the previous poll through the Bulk API from outside Eloqua (not sure how this is different from the previous option)

    Two other options which may not work at all and are potentially anti-patterns:

    • Cloud connector: create a campaign that polls for changes on a schedule and configure a cloud connector (if that is possible at all) to notify an MDM endpoint, which then queries the contact/lead record from Eloqua.
    • "Native" CRM integration (crazy): fake a native CRM endpoint (for example, Salesforce) and use internal events and Eloqua's external calls to push data into our MDM

    Questions:

    1. What is the best practice for this integration?
    2. Is there an option that would give us close to real-time integration (technically asynchronous, but event/callback-based)? (something like outbound messaging in Salesforce)
    3. What limits should we consider for these options? (for example daily API calls, SOAP/REST response sizes)

    If you can, I would try to talk to Informatica...

    To imitate the native-type integrations, you would use the QIP and control which activities get posted to it via internal events, as you would with a native integration.

    You would also use the cloud connector API to allow you to set up a CRM (or MDM) integration program.

    You would have identification fields added to the contact and account objects in Eloqua for their respective IDs in the MDM system, and keep track of the last MDM update with a date field.

    A task scheduled outside of Eloqua would run at a certain interval, extract the QIP changes and send them to MDM, and pull the contacts waiting to be sent, in place of the cloud connector.

    There isn't really anything like outbound messaging, unfortunately.  You can have form submits immediately send data to a server (a bit like integration rule collections running from the form processing steps).

    Cheers,

    Ben
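The scheduled, watermark-based polling Ben describes can be sketched roughly like this (every name, path and interval here is hypothetical, and the Eloqua API call is reduced to a stub; the real step would go through the SOAP/REST/Bulk API):

```shell
#!/bin/sh
# Hypothetical watermark-based sync job, run from cron outside Eloqua.
# fetch_changes_since is a stub standing in for a real Eloqua API call.
set -u

STATE_FILE="${TMPDIR:-/tmp}/eloqua_mdm_last_sync"

fetch_changes_since() {
    # Stub: a real implementation would query Eloqua for contacts modified
    # since "$1" and forward them to the MDM hub.
    echo "fetching changes since $1"
}

# Read the last watermark (epoch seconds); default to 0 on the first run.
LAST_SYNC=0
[ -f "$STATE_FILE" ] && LAST_SYNC=$(cat "$STATE_FILE")

NOW=$(date +%s)
fetch_changes_since "$LAST_SYNC"

# Advance the watermark only after the fetch step has run, so each poll
# picks up exactly the changes since the previous successful run.
echo "$NOW" > "$STATE_FILE"

# Example crontab entry: poll every 15 minutes.
# */15 * * * * /usr/local/bin/eloqua_mdm_sync.sh
```

Advancing the watermark only after a successful fetch means a failed run is simply retried from the same point on the next interval.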

  • Best practices for HP DL385p G8

    I'm trying to find documentation on setting up an HP DL385p G8 as a VMware server. Is there a best-practices guide for this server? I've been looking but haven't found anything. I would like to know things like whether hyperthreading should be turned on and so on. I use the HP ESXi image for the drivers; that all works, but I just want to find the right settings.

    Thank you

    Hello

    On this blog you will find a detailed explanation of all the BIOS settings and why you should enable or disable each one: "BIOS settings in a VMware ESX(i) / vSphere environment" (VirtualKenneth's Blog - hqVirtual | quality hosting).

    hope that helps.

  • vSphere 5 networking best practices for using 4 x 1 Gb NICs?

    Hello

    I'm looking for networking best practices for using 4 x 1 Gb NICs with vSphere 5. I know there is a lot of good practice documented for 10 Gb, but our current config only supports 1 Gb. I need to include management, vMotion, virtual machine (VM) and iSCSI traffic. If there are others you would recommend, please let me know.

    I found a diagram that resembles what I need, but it's for 10 Gb. I think it would still work...

    vSphere 5 - 10GbE SegmentedNetworks Ent Design v0_4.jpg (I got this diagram HERE - credit goes to Paul Kelly)

    My next question is how much traffic load each type of traffic puts on the network, percentage-wise.

    For example, 'Management' traffic is very small, and the only time it is really in use is during agent installation, when it spikes to around 70%.

    I need the percentage of bandwidth, if possible.

    If anyone out there can help me, that would be so awesome.

    Thank you!

    -Erich

    Without knowing your environment, it would be impossible to give you an idea of the bandwidth usage.

    That said, if you had about 10-15 virtual machines per host with this configuration, you should be fine.

    Sent from my iPhone
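Without the environment details a precise split is impossible, but as one hedged sketch (the vmnic assignments, names, and absence of VLAN tagging are assumptions, not from this thread): keep vmnic0/vmnic1 on vSwitch0 for management and vMotion, dedicate vmnic2 to VM traffic and vmnic3 to iSCSI. A small script can print the matching esxcli 5.x commands for review before anything touches the host:

```shell
#!/bin/sh
# Prints (does not run) a candidate esxcli 5.x command sequence for a
# 4 x 1 Gb NIC layout: vmnic0/vmnic1 stay on vSwitch0 for management +
# vMotion; vmnic2 carries VM traffic; vmnic3 carries iSCSI.
# Names, layout and (missing) VLAN config are assumptions -- adapt first.
CMDFILE="${TMPDIR:-/tmp}/vswitch_cmds.txt"
cat > "$CMDFILE" <<'EOF'
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name="VM Network"
esxcli network vswitch standard add --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic3
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=iSCSI
EOF
cat "$CMDFILE"
```

Printing the commands first keeps the change reviewable; run them on the host only once the layout matches your design.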

  • Best practices for the compression of the image in dps

    Hello! I've been reading up on best practices for image compression in DPS, and I read that the source assets for panoramas, image sequences, pan-and-zoom images, and audio skins are not resampled on upload. You need to resize and compress them before dropping them into your article, because DPS does not do it for you. Fine, can do!

    But I also read that the source assets for slideshows, scrolling images, and buttons ARE resampled as PNG images. Does this mean that DPS will compress them for you when you build the article? Does this mean it isn't worth bothering to resize those images at all? Can I just drop in the 15 MB, 300 DPI files used in the print magazine, and DPS will compress them on article build, with no effect on the file size?

    And is this also the case with static background images?


    Thanks for your help!

    All images are automatically resampled based on the size of the folio you create. You can put in any image resolution you want; it doesn't matter.

    Neil
