Best practices for independent video endpoints

Hello

I am looking for any documentation that covers best practices for independent video endpoints. I found the Cisco Preferred Architecture for video, but I still have questions about Expressway-C and Expressway-E.

I want to send a client some information: what is better, to put the endpoint behind the firewall or, for example, in a DMZ?

Regards

Leonardo Santana

There are no guides that I know of, or even remember seeing anywhere, on this matter.  It is always better to put the endpoint behind something, usually a firewall or a border element such as an Expressway; it is never recommended to deploy it where it is directly accessible.

Tags: Cisco Support

Similar Questions

  • NetApp best practices and independent disks

    Hi. NetApp's best practices for VMware recommend that transient and temporary data, such as the guest operating system pagefile, temp files, and swap files, be moved to a separate virtual disk on a different datastore, because snapshots of this type of data can consume a large amount of storage in a very short time due to its high rate of change (that is, create a datastore dedicated to transient and temporary data for all VMs, with no other types of data or VMDKs residing on it).

    NetApp also recommends configuring the VMDKs residing in these datastores as "Independent Persistent" disks in vCenter. Once configured, the transient and temporary data VMDKs will be excluded from VMware vCenter snapshots and from the NetApp snapshot copies initiated by SnapManager for Virtual Infrastructure.

    I would like to understand the impact of this best practice - can anyone advise on the following:

    • If the above is implemented:

      • Will snapshots still work via vCenter?

      • Will snapshots still work via the NetApp SnapManager tool?

      • Will the snapshot include all of the VM's disks? If not, what is the consequence of not having a complete picture of the VM?

      • Will the vCenter snapshot restore OK?

      • Will the NetApp snapshot restore OK?

    • What impact does the above have on the restore process when using a backup product that relies on snapshot technology?

    Thank you





    Hi Joe

    These recommendations are purely to save storage space during replication or backup.

    For example, you can move your *.vswap file (the VM swap file) to a different datastore. NetBackup can take snapshots of the VMs in the datastores, and with this configuration you can exclude that particular datastore.

    The same applies if you create a datastore dedicated to OS swap files: mark those disks independent so that vCenter does not snapshot these VMDKs.

    I did a project with NetApp on SAP production boxes.

    We moved all the *.vswap files to dedicated datastores we created, and used RDMs for the OS swap locations.

    We actually used SnapDrive, a NetApp technology, to quiesce the SQL database on the LUN before the LUN snapshot was taken, but I won't go into too much detail.

    To answer your questions (see the comments in the quote)

    joeflint wrote:

    • If the above is implemented:
      • Will snapshots still work via vCenter? - Yes, they will; the independent disk just gets ignored.
      • Will snapshots still work via the NetApp SnapManager tool? - Yes, they will; it snaps the entire datastore/LUN.
      • Will the snapshot include all of the VM's disks? If not, what is the consequence of not having a complete picture of the VM? - No. The *.vswap file is created when the VM starts (no need to back it up).

    - The OS swap-location VMDK must be re-created in the case of a restore. Windows will still boot if the swap disk is missing, and you then specify the new swap location.

    • What impact does the above have on the restore process when using a backup product that relies on snapshot technology? - These backup products use vCenter snapshots, and because vCenter snapshots work 100%, it shouldn't be a problem.
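
    For reference, here is a rough pyVmomi sketch (Python) of flipping an existing data VMDK to independent-persistent so that vCenter snapshots skip it. The vCenter address, VM name and disk label are placeholders, and this is only an illustration of the idea, not a procedure taken from this thread:

      import ssl
      from pyVim.connect import SmartConnect, Disconnect
      from pyVmomi import vim

      # Connect to vCenter (placeholder credentials; validate certificates properly in production)
      si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                        pwd="secret", sslContext=ssl._create_unverified_context())
      content = si.RetrieveContent()

      # Find the VM by name (placeholder name)
      view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
      vm = next(v for v in view.view if v.name == "sap-prod-01")

      # Locate the swap/temp data disk and mark it independent-persistent
      for dev in vm.config.hardware.device:
          if isinstance(dev, vim.vm.device.VirtualDisk) and dev.deviceInfo.label == "Hard disk 2":
              dev.backing.diskMode = "independent_persistent"
              change = vim.vm.device.VirtualDeviceSpec()
              change.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
              change.device = dev
              vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))  # snapshots now skip this disk

      Disconnect(si)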

    Hope this is useful.

    Please award points if this helps.

  • Best Practice Guide for stacked N3024 switches

    Is there a best-practice guide for configuring two stacked N3024s for the server connections, or is it the same as the EQL iSCSI configuration guide?

    I'm trying to:

    1) reduce single points of failure in the rack;

    2) make good use of LACP for 2- and 4-NIC server connections;

    3) use a 5224 with a single LACP uplink to the N3024s for single-connection-point devices (i.e., the internet router).

    TIA

    Jim...

    Barrett pointed out many of the common practices suggested for stacking. The best practices, such as using a loop (ring) for stacking and distributing your LAGs across multiple switches in the stack, are not specific to any brand or model of switch. The steps described in the user guides or the white papers are generally the recommended configuration.
    http://Dell.to/20sLnnc

    Many best-practice scenarios change from network to network based on what is currently plugged into the switch and on each network's independent needs and business requirements. This has led to a situation where the default settings on a switch are pre-programmed to be optimal for a fresh switch, and the recommendations are then described in detail in white papers for specific scenarios rather than centralized in a single best-practices document that attempts to cover every case.

    For example, on the N-series switches:
    -RSTP is enabled by default.
    -EEE green mode is disabled by default.
    -Flow control is enabled by default.
    -Storm control is disabled by default.

    These things can then change based on the attached equipment and the needs/wishes of the company as a whole.

    For example, EqualLogic has several guides that detail configuration recommendations for different switches.
    http://Dell.to/1ICQhFX

    Then, on the server side, you would want to look more at the OS/server role. For example, there is a VMware white paper that proposes some network settings when running VMware in an iSCSI environment.
    http://bit.LY/2ach2I7

    I suggest making a list of the technology/hardware/software used on the network, then using this list to gather white papers for the specific areas, and then applying the best practices from those white papers to ensure the switch configuration is optimal for the tasks the network requires.

  • Best practices for color use in Adobe CC?

    Hi all

    Is there an article that describes the best practices for use of color in Adobe CC?

    I produce a mixture of online viewing (PDF, for the most part) and real-world print projects, often with a requirement for both. I recently updated my PANTONE+ Bridge books for the first time in ages and I am suddenly confused by the use of Lab colors in the Adobe suite (Illustrator and InDesign).

    Everything I found online makes it look like the Lab color mode is preferred because it is device independent, and perceptually (on screen) it looks much closer to the color it is trying to represent. But when I create a spot-color rectangle in Illustrator using Lab coordinates, next to a rectangle using PANTONE+ Bridge CP, and then export to PDF, the CP version's CMYK mix corresponds exactly to my Pantone book, while the Lab version (after conversion to CMYK using the Ink Manager) is far off.

    I have this fantasy of managing only a single Illustrator or InDesign file for both worlds, print and online (PDF). Is that not possible in practice?

    Any info describing the basic definitions of the color modes, or even a book covering how to use them in the real world, would be much appreciated!

    Thank you

    Bob

    Here are a few best practices you can apply already.

    1. Make sure that your color settings are synchronized across all applications.

    2. Use a CMYK profile appropriate for your print output. Lab spot colors convert to CMYK values based on the CMYK ICC profile.

    3. Include ICC profiles when saving or exporting PDF files.

    In theory, your fantasy is possible today. It requires color management and the use of ICC profiles. You can place RGB images in InDesign and use Pantone colors in your objects. The problem lies with the printers. If a printer uses a RIP with a built-in Pantone library, the colors will match when printing. Unfortunately, that kind of RIP is more expensive and not enough printers use them. Most of them still manually approximate CMYK build values for given Pantone colors.

  • Best practices for Master Data Management (MDM) integration

    I am working on integrating MDM with Eloqua and am looking for the best approach to sync lead/contact data changes from Eloqua into our internal MDM hub (outbound only). Ideally, we would like that integration to be practically real time, but my findings to date suggest there is no such option; any integration will involve some kind of schedule.

    Here are the options that we had:

    1. "Exotic" CRM integration: use internal events to capture changes and queue them in the internal queue (QIP), and access the queue from outside Eloqua via the SOAP/REST API
    2. Data Export: set up a Data Export that is scheduled to run on request and poll it externally via the SOAP/REST/Bulk API
    3. Bulk API: poll for changes that have happened since the previous poll through the Bulk API from outside Eloqua (not sure how this is different from the previous option)

    Two other options which may not work at all and which are potentially anti-patterns:

    • Cloud connector: create a campaign that polls for changes on a schedule and configure a cloud connector (if at all possible) to notify an MDM endpoint to query the contact/lead "record" from Eloqua.
    • "Native" CRM integration (crazy): fake a native CRM endpoint (for example, Salesforce) and use internal events and external calls to Eloqua to push data into our MDM.

    Related questions:

    1. What is the best practice for this integration?
    2. Is there an option that would give us close to real-time integration (technically asynchronous, but event-based callbacks)? (Something like outbound messaging in Salesforce.)
    3. What limits should we consider for these options? (For example, daily API call limits, SOAP/REST response sizes.)

    If you can, I would try to talk to Informatica...

    To imitate the native-type integrations, you would use the QIP and control which activities get validated by internal events, as you would with a native integration.

    You would also use the Cloud Connector API to allow you to set up a CRM (or MDM) integration program.

    You would have identification fields added to the contact and account objects in Eloqua for their respective IDs in the MDM system, and keep track of the last MDM update with a date field.

    A task scheduled outside of Eloqua would run at a certain interval, extract the QIP changes to send to MDM, and pull the contacts waiting to be sent, in place of the cloud connector. A rough sketch of such a polling job is below.
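
    As a very rough sketch of that scheduled task (Python, against the Bulk API; the pod URL, filter expression and custom field names below are assumptions and should be checked against your instance and the Bulk API documentation):

      import time
      import requests

      BASE = "https://secure.p01.eloqua.com/api/bulk/2.0"   # pod URL varies per instance
      AUTH = ("SiteName\\integration.user", "password")     # illustrative credentials only

      # 1. Define an export of contacts updated since the last run
      export_def = {
          "name": "MDM delta export",
          "fields": {
              "EmailAddress": "{{Contact.Field(C_EmailAddress)}}",
              "MdmId": "{{Contact.Field(C_MDM_Id1)}}",      # custom field name is an assumption
          },
          "filter": "'{{Contact.UpdatedAt}}' > '2024-01-01 00:00:00'",
      }
      export = requests.post(f"{BASE}/contacts/exports", json=export_def, auth=AUTH).json()

      # 2. Trigger a sync and poll until Eloqua has staged the data
      sync = requests.post(f"{BASE}/syncs", json={"syncedInstanceUri": export["uri"]}, auth=AUTH).json()
      while True:
          status = requests.get(f"{BASE}{sync['uri']}", auth=AUTH).json()
          if status["status"] in ("success", "warning", "error"):
              break
          time.sleep(10)

      # 3. Page through the staged rows and push each one to the MDM hub
      page = requests.get(f"{BASE}{sync['uri']}/data", params={"limit": 1000}, auth=AUTH).json()
      for row in page.get("items", []):
          pass  # POST the row to the MDM endpoint here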

    There isn't really anything like outbound messaging, unfortunately.  You can have form submits send data to a server immediately (it would be a bit like integration rule collections running from the form processing steps).

    Cheers,

    Ben

  • Looking for a best-practices guide: complete cluster replacement

    Hello

    I have been put in charge of the complete replacement of our current ESXi 5.0 U2 hardware environment with a new cluster of servers running 5.1.

    Here are the basics:

    Currently: an HP blade server chassis with 6 hypervisors running ESXi 5.0 U2, Enterprise licensing, about 100 or so VMs running different operating systems (mainly MS 2003 R2 to 2008 R2), with data stored on a SAN connected through 1 Gb Ethernet connections.

    Planned: 7 independent servers to run as a cluster with ESXi 5.1, Enterprise licensing, SAN connections upgraded to 10 Gb Ethernet or fibre.  The virtual machines range in importance from 'can be restarted after hours' to 'should not be restarted, or service interruptions will cost us money'.  (Looking for live-migration options if possible, although I have my doubts it will be an option given the cluster plans.)

    I'm looking for a best-practices guide (or a combination of guides) that will help me determine how best to plan the VM migration, especially in light of the fact that the new cluster will not be part of the existing one, and given also that we are unable to upgrade to 5.1 beforehand (due to problems with the chassis firmware)...

    Any pointers in the right direction would be great - not looking for a handout, just signposts.

    Cheers.

    Welcome to the community - since vCenter 5.1 can manage ESXi 5.0 hosts, just add the 5.0 hosts to the 5.1 vCenter one at a time and vMotion the VMs to the new hosts - and as long as both environments see the same SAN, storage vMotion will not be necessary.
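
    If you end up scripting the evacuation, a rough pyVmomi sketch (Python) of vMotioning one VM onto a host in the new cluster could look like the following; the connection details and names are placeholders, and this is just an illustration of the API call, not a recommended runbook:

      import ssl
      from pyVim.connect import SmartConnect, Disconnect
      from pyVmomi import vim

      si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                        pwd="secret", sslContext=ssl._create_unverified_context())
      content = si.RetrieveContent()

      def find(vimtype, name):
          """Look up a managed object by name (placeholder helper)."""
          view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
          return next(obj for obj in view.view if obj.name == name)

      vm = find(vim.VirtualMachine, "app-server-01")
      new_host = find(vim.HostSystem, "esxi51-node1.example.com")

      # Compute-only relocation: both clusters see the same SAN, so no datastore change is needed
      spec = vim.vm.RelocateSpec()
      spec.host = new_host
      spec.pool = new_host.parent.resourcePool
      vm.RelocateVM_Task(spec)

      Disconnect(si)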

  • Best practices: multiple partitions on a single VMDK, or one partition per VMDK

    Hello all-

    I would like to get your opinions on the best practices for a file server's VMDK setup.

    The C: partition would be allotted to the operating system, while E:, F:, ... would store the data.

    Setup 1:

    vmdk1 = thick provisioned disk with a single C: partition.

    vmdk2 = thick provisioned disk hosting partitions E:, F:, ...

    Setup 2:

    vmdk1 = thick provisioned disk with a C: partition.

    vmdk2 = thick provisioned disk with an E: partition.

    vmdk3 = thick provisioned disk with an F: partition.

    .......

    Also, the multiple data partitions would be configured as independent + persistent virtual disks because of snapshots. My logic is that the OS (C: drive) is snapshotted when testing new software, for example, while the data partitions act as storage disks that need to keep the most recent files regardless of a rollback to an older snapshot. BTW, the data partitions mostly hold Word, Excel, photos and so on.

    Also, I realize that I could have, for example, a single E: data partition with several shared folders, but given that each folder is for a different department, that could cause more trouble when expanding space in the future. A large VMDK could take more time to grow. Not sure yet.

    Thank you

    Hello

    In general, virtualization does not change much regarding disk I/O.

    You can use the same rules that you would use to size a physical server.

    Multiple VMDKs mean multiple targets for your I/O load.

    The best solution, if high I/O load/throughput or low response times must be achieved, is to create multiple VMDKs and spread them over several datastores.

    HtH

  • Win2008R2 DC disk types best practices...

    Hello

    I am trying to upgrade my AD environment to Win 2008R2, and I am asking for feedback on best practices for selecting the disk type and the virtual machine parameters.  I intend to install fresh from scratch and transfer AD DS and much more afterwards.

    I intend to create a 50 GB volume for the OS itself and a 200 GB volume for user home folders and other user data.

    In a virtual environment, is it advisable to locate the SYSVOL and the AD database in a separate location other than the default system drive?

    For my 50 GB OS disk (domain controller), is it normal/good practice to store it with its virtual machine?

    My 200 GB user data drive should ideally be able to be reassigned to another VM intact, to keep the data in case of a failure.

    Which disk 'provisioning' would it be normal to use with these types of disks?

    Which disk 'mode' (independent / persistent) would it be normal to specify in this scenario?

    When such a virtual disk is created, is it possible to increase / decrease its size?

    Thank you very much for the comments on these issues.

    Best regards

    Tor

    Hello.

    In a virtual environment, is it advisable to locate the SYSVOL and the AD database in a separate location other than the default system drive?

    Yes, follow the same best practices as you would in the physical environment here.

    For my 50 GB OS disk (domain controller), is it normal/good practice to store it with its virtual machine?

    Probably yes.  Why would you consider storing it elsewhere?  Are you doing something with SAN replication, or is there another reason?

    Which disk 'provisioning' would it be normal to use with these types of disks?

    I tend to use thin provisioning for OS volumes.  For the absolute best performance, you can go with eager-zeroed thick disks.  If you're talking about standard file-sharing data for users, then any disk type would probably be fine.  Check out the "Performance Study of VMware vStorage Thin Provisioning" (http://www.VMware.com/PDF/vsp_4_thinprov_perf.pdf) for more information.

    Which disk 'mode' (independent / persistent) would it be normal to specify in this scenario?

    This can depend on how you back up the server, but I am guessing you would not use independent disks.  If you do, make sure you use persistent; you do not want to lose user data.

    When such a virtual disk is created, is it possible to increase / decrease its size?

    Increasing the size is very easy with vSphere and 2008, and it can be done without interruption.  Shrinking is a more convoluted process and will most likely require manual intervention and downtime on your part.
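
    If you ever need to script the grow instead of clicking through the vSphere Client, a rough pyVmomi sketch (Python) of the reconfigure call is below; it assumes a vm object from an already connected session, the disk label and size are placeholders, and the partition inside the guest still has to be extended afterwards with Disk Management or diskpart:

      from pyVmomi import vim

      def grow_data_disk(vm, label="Hard disk 2", new_size_gb=300):
          """Extend an existing virtual disk; the partition inside the guest must be extended separately."""
          for dev in vm.config.hardware.device:
              if isinstance(dev, vim.vm.device.VirtualDisk) and dev.deviceInfo.label == label:
                  dev.capacityInKB = new_size_gb * 1024 * 1024
                  change = vim.vm.device.VirtualDeviceSpec()
                  change.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
                  change.device = dev
                  return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))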

    Good luck!

  • Best practices Apple ID

    I help family members and others with their Apple products. Probably the number one problem revolves around Apple IDs. I have seen users do the following:

    (1) Share IDs among family members, but then wonder why messages/contacts/calendar entries etc. are all shared.

    (2) Have several Apple IDs willy-nilly associated with seemingly random devices; the Apple IDs are not really used for anything.

    (3) Forget passwords. They always forget passwords.

    (4) This one I don't really understand: they use an e-mail address from another system (gmail.com, hotmail.com, etc.) as their Apple ID. Invariably, they will use a different password for their Apple ID than the one they use for that email account, so they are constantly confused about which account to sign in to.

    I have looked around for an article on best practices for creating and using Apple IDs, but could not find such a post. So I thought I would throw out a few suggestions. If anyone knows of a list or wants to suggest changes/additions, please feel free. Here are the best practices for normal circumstances, i.e. not corporate accounts etc.

    1. Every person has exactly 1 Apple ID.

    2. Do not share Apple IDs - share content.

    3. Do not use an email address from another account as your Apple ID.

    4. When you create a new Apple ID, don't forget to complete the secondary information at https://appleid.apple.com/account/manage. The rescue email and the security questions are EXTREMELY important.

    5. The last step is to collect the information that you entered into a document, save it to your computer, AND print it and store it somewhere safe.

    Suggestions?

    I don't agree with No. 3 - there is no problem with using a non-iCloud address as the primary ID; indeed, depending on where you set up your ID, you may have no choice but to do so.

  • Best practices Upgrade Path - Server 3 to 5?

    Hello

    I am attempting a migration and upgrade of a Profile Manager server. I currently run an older Mac mini server on 10.9.5 and Server 3 with an extensive Profile Manager installation. I recently and successfully migrated the server itself off the old Mac mini onto a late-2009 Xserve by cloning the drive. Still double-checking everything, but it seems the transition between the mini and the Xserve was successful and everything works as it should (just with improved performance).

    My main question now is that I want to get this up to date software-wise and move to Server 5 and 10.11. I see a lot of documentation (even officially from Apple) on best practices for upgrading from Server 3 to 4 and Yosemite, but I can't find much on Server 5 and El Capitan, let alone going from 3 to 5. I understand that I'll probably have to buy the .app again and that's fine... but should I stage this, going 10.9 to 10.10 and Server 4... make sure that all is well... and then jump to 10.11 and Server 5... Or is it 'safe' (or OK) to jump from Server 3 to 5 (and 10.9.5 to 10.11.x)? Obviously, the App Store is happy to make the jump from 10.9 to 10.11, but once again, I'm looking for best practices here.

    I will of course ensure that all backups are up to date and make another clone just before whichever path I take... but I was wondering if someone has made the leap from 3 to 5... and had things (like Profile Manager) still work correctly on the other side?

    Thanks for any info and/or guidance.

    In your position I would keep the Mini running Server 3, install El Capitan and Server 5 on the Xserve, and walk through setting up Server 5 by hand. Things that need to be 'migrated', such as Open Directory, should be handled by exporting from the mini and reimporting on the Xserve.

    In my experience, OS X Server installations that were "migrated" always seem to end up with esoteric problems that are difficult to correct, and it's easier to adopt the procedure above than to lose a day trying.

    YMMV

    C.

  • What is the best practice to move an image from one library to another library

    What is the best practice to move an image from one Photos library to another Photos library?

    Right now, I just export an image to the desktop, then remove the image from Photos. Then I open the other library and import those images from the desktop into Photos.

    Is there a better way?

    Yes - PowerPhotos is a better way to move images.

    LN

  • TestStand code/sequence sharing best practices?

    I am the architect for a project that uses TestStand, Switch Executive and LabVIEW code modules to control automated testing on a number of UUTs that we make.

    It's my first time using TestStand and I want to adopt software best practices that allow code sharing between my other software engineers, each of whom will be responsible for creating TestStand sequences for one of the many DUTs.  I've identified some 'functions' which will be common across all UUTs, like connecting two points on our switching matrix and then taking a voltage measurement with our EMS to check whether it meets the limits.

    The gist of my question is: what is the TestStand equivalent of a LabVIEW library for sequence calls?

    Right now, what I did is create these common/generic sequences with parameters and place them in their own sequence file, "Common Functions.seq", as a pseudo-library.  This "Common Functions.seq" file is never intended to be run as a sequence itself; rather, the sequences inside it are called by another top-level sequence that is unique to one of our DUTs.

    Is this a good practice, or is there a better way to compartmentalize the common sequence calls?

    It seems that you are doing it correctly.  I always remove MainSequence from that file too; it will throw an error if someone tries to run it with a process model.  You can also go into the sequence file properties and disassociate it from any process model.

    I always equate a sequence to a VI and a sequence file to an lvlib.  In that analogy, a step is a node on the block diagram and local variables are wires.

    They just need to include this sequence file library in their build (and all of its dependencies).

    Hope this helps,

  • TDMS & DIAdem best practices: what happens if my signal has breaks/gaps?

    I created a LV2011 datalogging application that stores a lot of data to TDMS files.  The basic architecture is like this:

    Each channel has these properties:

    To = start time

    DT = sampling interval

    Channel values:

    1D array of DBL values

    After datalogging starts, I just keep appending values to the channels.  If the size of the TDMS file goes beyond 1 GB, I create a new file and continue there.  The application runs continuously for days/weeks, so I get a lot of TDMS files.

    It works very well.  But now I need to change my system to allow the data acquisition to pause/resume.  In other words, there will be breaks in the signal (probably from 30 seconds to 10 minutes).  I had originally considered logging two values for each data point, as XY chart data (value & timestamp).  But I am opposed to this in principle because, in my view, it fills the hard drive unnecessarily (twice as much disk footprint for the same data?).

    Also, I've never used DIAdem, but I want to ensure that my data can be easily opened and analyzed using DIAdem.

    My question: are there best practices for storing signals that pause/break like that?  Would I just start a new record with a new start time (To), and have DIAdem somehow "link" these signals... so that, for example, it knows one is a continuation of the same signal?

    Of course, I could install DIAdem and play with it.  But I thought I would ask the experts about best practices first, since I have no knowledge of DIAdem.

    Hi josborne;

    Do you plan to create a new TDMS file each time the acquisition stops and starts, or were you wanting to store multiple power-up sections in the same TDMS file?  The best way to manage the date/time shift is to store one waveform per channel per power-up section and use the "wf_start_time" channel property, which comes along with waveform TDMS data if you wire an orange floating-point array or a brown waveform into the TDMS Write.vi.  DIAdem 2011 can easily access the time offset when it is stored in this channel property (provided it is stored as a date/time and not as a DBL or a string).  If you have only one power-up section per TDMS file, I would certainly also add a 'DateTime' property at the file level.  If you want to store several power-up sections in a single TDMS file, I would recommend using a separate group for each power-up section.  Make sure that you store the following channel properties in the TDMS file if you want the information to flow naturally into DIAdem (a short sketch follows the list below):

    'wf_xname'
    'wf_xunit_string'
    'wf_start_time'
    'wf_start_offset'
    'wf_increment'
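
    For what it's worth, here is a small Python sketch of one power-up section per group carrying those properties, using the third-party npTDMS package rather than LabVIEW; the group/channel names and timestamps are placeholders, and it assumes npTDMS accepts np.datetime64 for the start-time property:

      import numpy as np
      from nptdms import TdmsWriter, ChannelObject

      data = np.random.rand(1000)                      # one power-up section of samples
      props = {
          "wf_xname": "Time",
          "wf_xunit_string": "s",
          "wf_start_time": np.datetime64("2024-01-01T12:00:00"),  # new start time after each pause
          "wf_start_offset": 0.0,
          "wf_increment": 0.001,                       # dt in seconds
      }

      # One group per power-up section, so each resume gets its own start time
      channel = ChannelObject("PowerSection_001", "Voltage", data, properties=props)
      with TdmsWriter("datalog.tdms") as writer:
          writer.write_segment([channel])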

    Brad Turpin

    DIAdem Product Support Engineer

    National Instruments

  • Best practices to increase image processing speed

    Are there best practices for efficient image processing that will improve overall performance?  I need to do near real-time image processing (threshold, filtering, particle analysis/cleanup and measurements) at 10 frames per second.  So far I am not satisfied with my cycle time, so I wonder whether there are documented ways to speed up performance.

    Hello

    IMAQdx is only the driver; it is not directly related to the image processing.  IMAQ is the vision library.  There is a function that lets you enable multi-core processing for IMAQ functions to decrease processing time, since image processing is the longest task for your computer.
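
    For comparison only, here is a rough Python/OpenCV sketch of the same threshold -> filter -> particle-analysis cycle; this is not the IMAQ API, and the camera source and parameters are illustrative:

      import time
      import cv2
      import numpy as np

      def process(frame):
          """Threshold, clean up, and measure particles in one frame."""
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
          cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
          count, labels, stats, centroids = cv2.connectedComponentsWithStats(cleaned)
          return stats                                  # per-particle areas and bounding boxes

      cap = cv2.VideoCapture(0)                         # placeholder camera source
      while cap.isOpened():
          ok, frame = cap.read()
          if not ok:
              break
          t0 = time.perf_counter()
          stats = process(frame)
          # At 10 frames per second the whole cycle has a budget of roughly 100 ms
          print(f"cycle time: {(time.perf_counter() - t0) * 1000:.1f} ms, particles: {len(stats) - 1}")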

    Regards

  • Best practices for .ini file reading

    Hello LabVIEWers

    I have a pretty big application that does a lot of hardware communication with various devices. I created an executable, because the software runs at multiple sites. Some settings are currently hardcoded; others, such as the camera focus, I put in a .ini file. The thinking was that these kinds of parameters may vary from site to site and can be defined by a user in the .ini file.

    I would now like to extend the application with the possibility of using two different versions of the key piece of device hardware (an atomic force microscope). I think it makes sense to do so using two versions of the .ini file. I intend to create two different .ini files, and a trained user could still adjust settings, such as the camera focus, if necessary; the other settings they cannot touch. I also intend to force the user to select an .ini file when starting the executable, using a file dialog box, unlike now where the (only) ini file is read in automatically. If no .ini file is specified, the application would stop. Does this use of the .ini file make sense?

    My real question now revolves around how to manage the reading of the .ini file. My estimate is that between 20 and 30 settings will be stored in the .ini file. I see two possibilities, but I don't know which is the better choice, or whether I'm missing a third:

    (1) (current solution) I create a reader VI in which I write all the .ini values to the project's global variables. All other VIs only read the global variables (no other writers), so as to avoid race conditions.

    (2) I pass the path of the .ini file to the subVIs and read the values from the .ini file when needed. I can open it read-only.

    What is the best practice? What is more scalable? Advantages/disadvantages?

    Thank you very much

    1. I recommend just using a single configuration file.  Just have a key that says which type of device is actually used.  This will make things easier on the user, because they will not have to keep selecting the right file.

    2. I would use the globals.  There is no need to constantly open, read, and close a file when the values are the same everywhere.  And since it's just a single read at startup, globals are perfect for this (see the sketch below).
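
    As an illustration of option 1 plus globals, sketched in Python rather than LabVIEW (the section and key names are made up), the pattern is one read at startup into a module-level dictionary that everything else only reads:

      import configparser

      CONFIG = {}  # read once at startup, read-only everywhere else (no race conditions)

      def load_config(path="instrument.ini"):
          parser = configparser.ConfigParser()
          if not parser.read(path):
              raise FileNotFoundError(f"Required configuration file not found: {path}")
          # One key selects which hardware variant is in use
          CONFIG["device_type"] = parser.get("hardware", "device_type")
          # User-adjustable settings such as the camera focus
          CONFIG["camera_focus"] = parser.getfloat("camera", "focus")
          CONFIG["com_port"] = parser.get("hardware", "com_port")

      if __name__ == "__main__":
          load_config()
          print(CONFIG)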
