Questions about migrating Lync 2013 to Skype for Business 2015 in parallel on VCS-C (not clustered)

Hello everyone!

From what I have seen in the Cisco documentation, the "B2BUA/Microsoft Interoperability" application on the VCS can only communicate with a single Microsoft Lync pool. However, we need to run the Lync servers in parallel with the Skype for Business servers during the migration, and we only have a few maintenance windows in which to migrate all the users!

Can the VCS keep communication up to both pools (Lync and Skype for Business) until the end of the migration? Can the legacy Lync 2013 servers (shared resources) that communicate with the VCS today still reach users already migrated to Skype for Business 2015 over the existing Lync TLS trunk?

I assume we would generate another certificate for TLS and add the Skype for Business servers to the "trusted hosts" option. Is that right, or have I forgotten something? Or is there another way for a single VCS-C running the "B2BUA/Microsoft Interoperability" application to communicate with two Microsoft server pools?

Thanks for your help!

To see some possible examples of deployment options, refer to Appendix 3 of the Microsoft Infrastructure (X8.8) Deployment Guide. In addition, I suggest you also read the guide in full, as it might answer some of your questions about what is supported.

Tags: Cisco Support

Similar Questions

  • a few questions about web site and content migration

    Hi all


    I have a few questions about migrating a web site and its content (from development to test) which are not covered in the Oracle documentation.

    -> When we perform site replication, will the content (data files) be migrated?

    -> When we migrate, will all the EDs and RDs etc. be moved (every type of web site object)?

    -> Are both steps necessary when we migrate a web site?


    Thank you

    -Yves

    --> When we perform site replication, will the content (data files) be migrated?

    It depends on how you do it. The 'Site Studio Replicator' tool won't move any content, only the structure of the site. The 'Manage Site Replication' page can be used to migrate content along with the site structure, but I don't recommend it for large sites; I use separate archive tasks to move the content. The 'Backup and Restore' page stores the entire site in a single ZIP file, which is also not advisable for large sites.

    --> When we migrate, will all the EDs and RDs etc. be moved (every type of web site object)?

    Yes. It uses the xWebsites metadata field to identify the items that belong to the site.
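
    As a rough illustration, site membership can be checked directly against the Content Server schema. This is only a sketch: it assumes the standard Revisions/DocMeta tables, and 'MySite' is a made-up site name; xWebsites is assumed to hold a comma-separated list of site names.

        -- Sketch: list content items tagged as belonging to one Site Studio site.
        -- 'MySite' is hypothetical; xWebsites is assumed to be a comma-separated
        -- list of site names stored as a custom DocMeta column.
        SELECT r.dDocName, r.dDocTitle
        FROM   Revisions r
        JOIN   DocMeta   d ON d.dID = r.dID
        WHERE  d.xWebsites LIKE '%MySite%';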

    --> Are both steps necessary when we migrate a web site?

    It depends on how you do it, but yes, all the pieces are needed.

  • WebCenter Portal - need information about the migration or the DB tables for roles and users/groups

    Hello

    We are upgrading WebCenter Portal for a client from 11.1.1.3.0 to 11.1.1.8.0.

    Can anyone tell me the migration procedure, or the DB tables involved that store the roles and user groups under security administration?

    Manually recreating all the roles, users, and groups one by one is my last option.

    Thank you

    Jean Claude

    Hello.

    Do not recreate them manually.

    The documentation has a migration guide for PS2 to PS7 that explains step by step what to do regarding security and policies.

    Read it slowly and carefully.

    Use the WLST scripts to back up/export/import your policy store and credentials.

    The following links can help you understand the WLST scripts for security migration:

    http://docs.Oracle.com/CD/E29542_01/core.1111/e10043/addlsecfea.htm#JISEC3639

    Infrastructure Security Custom WLST Commands - 11g Release 1 (10.3.6)

    We have migrated from 11.1.1.4/5 to 11.1.1.8 many times, always starting from the PS3 (11.1.1.4) version.

    11.1.1.3 to 11.1.1.4 was the biggest change from my point of view. I never had the opportunity to go from PS2 to PSx.

    For migration tasks, my recommendation is to ask Oracle Support about any doubts or anything that is not clear in the documentation.

    Kind regards.

  • A question about .folio files and publications

    Hello, I'm new here.

    I want to start working with DPS, but I'm not sure which version to buy.

    At the moment I have one customer who just wants to publish a magazine, but my intention is to have more customers and publish more magazines.

    If I buy the Single Edition of DPS, I read that I can publish only a single .folio file. What does that mean? Does each folio file represent one publication?

    Please, I need help understanding this before I purchase the software.

    Thank you very much

    Paul

    Here's a quick blog post I wrote comparing Single Edition and multi-folio apps:

    http://boblevine.us/Digital-Publishing-Suite-101-single-Edition-vs-multi-Folio-apps/

    Bob

  • A question about indexes

    Hi all

    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE 11.2.0.2.0 Production
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production

    I have a question about indexes. Is a WHERE clause required for an index to be used? I tried to find the answer myself but could not.
    In this example I haven't used a WHERE clause, only GROUP BY, but the plan shows a full scan. Is it possible to get a range scan or something else when using GROUP BY?
    SELECT tag_id FROM taggen.tag_master GROUP by tag_id 
    
    Explain Plan:
    Plan hash value: 1688408656
     
    ---------------------------------------------------------------------------------------
    | Id  | Operation             | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT      |               |  4045 | 20225 |     6  (17)| 00:00:01 |
    |   1 |  HASH GROUP BY        |               |  4045 | 20225 |     6  (17)| 00:00:01 |
    |   2 |   INDEX FAST FULL SCAN| TAG_MASTER_PK |  4045 | 20225 |     5   (0)| 00:00:01 |
    ---------------------------------------------------------------------------------------

    Hello

    SamFisher wrote:
    Since it was doing a full scan: is it possible to avoid the full scan without using a WHERE clause? I guess a limit clause might do it, but I don't quite know.

    Why?
    If this query is producing correct results, then you need a full scan.
    If you somehow fool the optimizer into doing a range scan instead, it will be slower.
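
    For comparison, here is a sketch of the kind of query that can avoid the full scan: with a selective predicate on the indexed column, the optimizer can consider an INDEX RANGE SCAN instead. The bounds below are made up for illustration.

        -- Hypothetical selective predicate on the indexed column tag_id:
        -- the optimizer can now consider an INDEX RANGE SCAN on TAG_MASTER_PK.
        SELECT tag_id
        FROM   taggen.tag_master
        WHERE  tag_id BETWEEN 100 AND 200
        GROUP BY tag_id;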

  • Some questions about topology, interfaces, and Knowledge Modules (ODI)

    Hello

    Here are some questions about ODI:


    1. Does an ODI LKM always require two different data servers (one for the SOURCE and another for the TARGET)?

    2. What would be the best way to create a new IKM with GROUP BY clauses?

    3. What is the minimum PROFILE required for developer users to be able to import projects created in other ODI environments?

    4. If a particular WORK_REP is lost, is it possible to retrieve projects from the version control information stored in the MASTER_REP?

    1.) Yes. An LKM always loads data from one data server to another.
    More than once I have seen that, even when there is only a single physical server, several data servers are configured in Topology Manager. This forces the use of an LKM, because ODI considers them to be 2 different servers.
    If the physical server is defined only once, an LKM won't be necessary.

    2.) The IKM automatically adds a GROUP BY clause if it detects an aggregation function in the interface mappings.

    3.) Try using the NG DESIGNER profile.

    4.) This is not an easy task, but all the versioned objects are compressed and stored in a BLOB field in the master repository.
    You will need to know the names and versions of the objects you need to recover.
    SNP_VERSION and SNP_DATA hold this information. Retrieve the BLOB field from SNP_DATA and unpack it with a zip utility. This will give you the XML definition of the object that was versioned.
    Now you can import this XML file and recover the object.

    You will need to loop through all the records in order of I_DATA, extract each one to an .xml file, and then import them to rebuild the work repository.
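
    A minimal sketch of that lookup, assuming SNP_VERSION references SNP_DATA through the I_DATA key as described above (other column names vary between ODI releases, so check them against your own master repository):

        -- Sketch: list the versioned objects and their BLOB keys in I_DATA order.
        -- Each BLOB in SNP_DATA is a zip archive containing the object's XML.
        SELECT v.*, d.I_DATA
        FROM   SNP_VERSION v
        JOIN   SNP_DATA    d ON d.I_DATA = v.I_DATA
        ORDER BY d.I_DATA;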

  • questions about the Results Table in Studio 3.1

    Hi all

    I created a Results Table in Studio 2.4 using the query box with EQL, but I do not see the option to enter EQL in Studio 3.1. Has something changed, or have I missed something?

    Also, to allow drill-down in Studio 2.4, I added an Action column to retrieve the event ID available on a particular row, by clicking the Add Action Column button on the Configuration tab. But I cannot see the same thing in Studio 3.1.

    Can someone please let me know how the above things are possible in Studio 3.1?

    Thanks in advance.

    Unfortunately, you cannot rename it. The standard is for the label to be the attribute name followed by the date/time subset used.

  • a question about UPDATE statements

    Hi all

    I have three UPDATE statements that I need to run against a single table.
    Is there a way I can execute all three updates in a single statement?
        UPDATE xxops_forecast_extract b SET territory_id = (SELECT a.territory_id
             FROM fdev_hier_node_mv a
             WHERE a.shr_node_id = b.shr_node_id
              AND NVL(end_dt,SYSDATE) > SYSDATE) ;
        COMMIT;
    
        UPDATE xxops_forecast_extract b SET position_id = (SELECT a.row_id
            FROM s_postn a
            WHERE a.name = 'TD-'||UPPER(b.am_id))
            WHERE position_level = 7
            AND b.am_id IS NOT NULL;
        COMMIT;
      
        UPDATE xxops_forecast_extract b SET position_id = (SELECT a.row_id
            FROM s_postn a
            WHERE UPPER(a.desc_text) = UPPER(TRIM(B.POSITION_NAME)))
            WHERE position_level = 7
            AND b.am_id IS NULL;
     Below are the sample data for the tables. 
    
     xxops_forecast_extract 
     shr_node_id am_id position_name  position_id  territory_id
     2231211     Dave     (null)        (null)       (null)
     2231211     Michele  (null)        (null)       (null)
     2231211     (null)   COMM WEST 230 (null)       (null)
     2231211     (null)   COMM ISAM 110 (null)       (null)
    
     fdev_hier_node_mv
     shr_node_id territory_id 
      2231211      5694
    
    
     s_postn
     row_id    name       desc_text
     12122   TD-Dave     (null)
     12123   TD-Michele  (null)
     89381   (null)          COMM WEST 230
     89382   (null)          COMM ISAM 110
    
     Resulting table after update
    
     xxops_forecast_extract 
     shr_node_id am_id position_name  position_id  territory_id
     2231211     Dave     (null)        12122       5694
     2231211     Michele  (null)        12123       5694
     2231211     (null)   COMM WEST 230 89381       5694
     2231211     (null)   COMM ISAM 110 89382       5694
    Thank you all.

    Hello

    You can combine the statements by merging their subqueries.
    Any logic that does not apply to all of the original updates must be taken out of the WHERE clause and put into a CASE expression.
    The CASE expression should "update" the column to itself if none of the conditions apply.

    For example, your final two UPDATE statements, which both have subqueries on s_postn, can be combined like this:

    UPDATE  xxops_forecast_extract b
    SET     position_id =
            (
            SELECT  CASE
                        WHEN  (     b.am_id IS NOT NULL
                              AND   UPPER (a.name) = 'TD-' || UPPER (b.am_id)
                              )
                        OR    (     b.am_id IS NULL
                              AND   UPPER (a.desc_text) = UPPER (TRIM (b.position_name))
                              )
                        THEN  a.row_id
                        ELSE  b.position_id
                    END
            FROM    s_postn a
            WHERE   UPPER (a.name) = 'TD-' || UPPER (b.am_id)
               OR   UPPER (a.desc_text) = UPPER (TRIM (b.position_name))
            );
    

    There seem to be some mistakes in the UPDATE statements you posted. For example, the last two refer to a column called position_level that does not exist in the table you described.
    The statement above produces the results you want with the data you posted.

    As you can see, the code is much harder to understand, debug, and maintain.
    Whether the performance gain (if any) justifies the additional complexity is debatable in this case.
    I'm not sure that combining all three queries would be worthwhile.

    Consider using MERGE: it is sometimes easier to use, even when, as in this case, you never insert.
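
    For illustration, here is a sketch of a MERGE equivalent to the combined UPDATE above, using the same s_postn matching rules (untested; note that each target row must match at most one s_postn row, otherwise Oracle raises ORA-30926):

        MERGE INTO xxops_forecast_extract b
        USING s_postn a
        ON (   UPPER (a.name)      = 'TD-' || UPPER (b.am_id)
            OR UPPER (a.desc_text) = UPPER (TRIM (b.position_name)) )
        WHEN MATCHED THEN
            UPDATE SET b.position_id = a.row_id;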

  • Questions about cloning/migrating powered-off virtual machines over the network

    Hello

    I'm having some connection problems while cloning/migrating powered-off VMs between different ESXi 5 hosts. I don't know whether moving a powered-off VM uses the vMotion network or just the normal management network. I'm seeing low speeds when copying VMs between several different hosts, so I wonder what I could do to make sure the speeds are as high as possible? All the hosts are connected to a gigabit switch.

    Thank you!

    Yes, two links are used, even when you vMotion a single virtual machine. Here is an official KB article:

    http://KB.VMware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalID=2007467

    A quick test (vMotion of a single VM) confirms this, as shown by the performance charts;

    Source (transmitting - both vmnics are in use);

    Destination (receiving - both vmnics in use);

    Cheers,

    Jon

  • Some questions about navigation in Unifier

    Hi all

    I have a few questions about Unifier navigation.

    Is it possible to move an admin-mode feature to user-mode access?

    I mean, if a particular feature such as the shell manager is only accessible from admin mode, is it possible to provide access to it in user mode as well?

    If so, how?

    My 2nd question is: currently we can access company-level BPs such as the "Company Journal" or "Resource Manager" under the 'Company Workspace' shell.

    Is it possible to move the "Company Journal" or "Resource Manager" into a folder? If yes, how?

    I tried, in "user mode navigation", to move the company-level BPs to the Home shell, but I can't do it.

    To answer your questions:

    (1) User-mode navigation can include user features. You cannot change the admin-mode view or move admin functions into user mode.

    (2) You cannot move these to the Home tab.

  • questions about metadata

    Hello

    (1) Is metadata stored in the SYSTEM tablespace only? If metadata is also stored in tablespaces other than SYSTEM, then what is the difference between metadata in the SYSTEM tablespace and in a non-SYSTEM tablespace?

    (2) How do I export the metadata for the whole database?

    (3) In what scenarios do we export the metadata structure? And how can we copy or upgrade the metadata structure from database A to database B?

    Regards

    hungry_dba wrote:
    Thanks Laura,

    Does this mean that transportable tablespaces will bring the table data, while exp/imp with rows=no will bring the metadata (I mean the structure), and that when we plug in the data files and import the metadata, the target database will be the same as the source database?

    Transportable tablespaces import the tablespace and you copy the data files as well, so all of the information in the tables is imported too.
    exp/imp with rows=no will import only metadata (only structure, no data).
    As for "plugging in the datafiles", I did not understand what you mean.

    >

    one more doubt, please

    Database users are stored in the default tablespace that we assign when creating the user. With the above procedure, when we import the metadata and plug in the data files using the transportable tablespace option, will the users come over to the target database too? And what happens if a user's default tablespace is SYSTEM, given that with the above procedure we cannot transport the SYSTEM tablespace?

    First of all, a user's default tablespace must never be SYSTEM. Never - it will cause you problems sooner or later.

    >

    Thanks for the suggestion, Laura, I really appreciate it. There may be other procedures to clone the database, and what you describe with RMAN is really cool, but I wanted to clarify my doubts about this procedure... because I'm looking to upgrade 9i and 10g databases with a maximum of 20 minutes of downtime.

    OK, but why are you not simply migrating the database? Why do you need exp/imp? Are you moving to a different server/platform as well?
    What exact 9i version do you have? And to which 10g version are you planning to upgrade?

    I have one more doubt: a shell of a 10g database... that is 10g software plus a database created in it, right?

    Hmm, where did you read this "shell of 10g"?

  • Questions about speed

    Hello gentlemen;

    I have a few questions I would like to ask the more experienced people here. I have a program running on a computer with an i7 processor; that is the computer on which I wrote it in LabVIEW. Meanwhile, in another lab, we have another PC, a little older, a 2.3 GHz dual core; on that PC we run a test platform for a couple of modems. Let's not get into the details.

    My problem is that I recently discovered that the program I wrote on the i7 machine runs much more slowly on the other machine, the dual core, so the timings are all wrong and the program does not run correctly. For example, there is a table with 166 values which, on the i7 machine, is filled quickly, almost without delay; on the dual-core machine, however, it takes a few milliseconds to fill about 20 values, and because of the timing it cannot fill any more, so the waveform that I use is all wrong. This, of course, throws off the whole program, and I can't use it for the test I need to integrate.

    I created an .exe of the program in LabVIEW and tried it on the different PCs; that's how I arrived at this question.

    Now, I want to know whether there really can be a big problem due to the characteristics of the computer, making the program slow on one machine. I know that, to make the program efficient, I should use state machines, subVIs, the producer-consumer pattern, and other things. However, I have found that this is not a speed problem generated by the program itself, because if that were the case the table would eventually fill up completely; on the slow computer, though, it never fills more than 20 values.

    Also, does it help to hide unnecessary variables on the front panel? For the time being I keep track of lots of variables in the program, so when I create the .exe I still see them running in order to keep up this monitoring. In the final version I won't need them, so I'll delete some and hide others from the front panel. Does that make the program less demanding?

    I would like to read your comments on this topic: whether you have any ideas about state machines, subVIs, etc., or whether there is a way to force the computer to use more resources for the LabVIEW program.
    I'm not attaching any VI because, in its current state, I know you will say "state machines, subVIs" and so on, and I think the main problem is the difference between the computers; I'm still working on the state machine/subVI side of things.

    Thank you once again.

    Kind regards

    IRAN.

    To start with, by using a suitable design such as a state machine for flow control, you can ensure that your large table is always filled completely before moving on, regardless of how long it takes. And believe it or not, adding a delay to your loops will make the whole program run faster and smoother, because while loops are greedy and can consume 100% of CPU time just looping while waiting for a button press, while every other process fights for CPU time.

  • A question about persistent objects

    Hi friends,

    I had stored data in a persistent object. After some time my simulator was taking a long time to load the application, so I ran clean.bat to make the simulator fast again. But after I ran clean.bat, the values I had stored in the persistent object were gone. Can someone tell me whether the persistent object data was lost because of the simulator, because of clean.bat, or for some other reason? Please clarify my doubt, friends...

    Kind regards

    s.Kumaran.

    It is because of clean.bat. Clean.bat removes all applications and unnecessary files, etc...

  • a question about CSS classes...

    I tried to load a background image into the APEX 5 Universal Theme on the login page.

    I used the code found in the following link and got it to work:

    Apex 5.0: Theme Roller and background image

    But I have a question that may be very simple for the CSS professionals.

    .t-PageBody--login .t-Body
    {
        background: url("Sports.jpg") repeat top center white scroll;
        color: #000000;
        font-family: Arial, Helvetica, sans-serif;
        font-size: 12px;
        line-height: 17px;
    }

    .t-PageBody--login .t-Body

    How did you know .t-PageBody--login .t-Body was the main class to change?

    Let me know if my interpretation is correct:

    .t-PageBody--login is the main class

    and .t-Body is the sub class?


    pauljohny100 wrote:

    I tried to load a background image into the APEX 5 Universal Theme on the login page. [...] How did you know .t-PageBody--login .t-Body was the main class to change? Let me know if my interpretation is correct: .t-PageBody--login is the main class and .t-Body is the sub class?

    .t-PageBody--login .t-Body is a descendant selector. It matches any element whose class attribute contains the value t-Body and that has an ancestor element whose class attribute contains the value t-PageBody--login. There is no concept of a "main" class or "sub" class in CSS. The required selector was most likely determined by examining the login page with a web inspector.

    It is advisable to take some tutorials to gain at least a basic understanding of web technologies when working with APEX.

  • A question about LDAP synchronization

    Hi all

    I have LDAP sync enabled on my OIM server, and I have installed the OID connector. I installed it because I want a user to be able to see the OID User resource provisioned to him in the "Resources" tab. Now, whenever I create a new user, the user is created successfully. I also have an access policy that grants the user the OID User resource based on his role. Once the user is created, I can see him in OID. Of course, he is provisioned into the default cn=Users container, but I have read here that this is configurable from the LDAP container rules XML file. However, this provisioning into OID happens through LDAP synchronization, so I do not see any resource under the "Resources" tab. Then, when I grant the user the OID resource by attaching the role to him, he gets provisioned into OID again. This time I can see that, based on the prepopulate mappings I set up, the user gets provisioned into the correct container in OID. But now I end up with two users with the same name and details in the OID directory, which I don't want. Is there some way I can cut OID LDAP synchronization out of the create-user operation, so that provisioning happens only when I apply the role, and therefore into the correct container?

    Thank you
    $id

    This is where a solid knowledge of OIM is required. The connector should be re-evaluated. For example, if the user already exists, you know that you cannot use the default Create User task. You will need just an autocomplete task, since you know that each user will already exist. You must also remove from your form all the variables that are managed from the OIM user profile.

    I suggest the following:

    Change your form to include only the user ID, the common name, the orclGUID, and the organization name. You can use a prepopulate adapter for all of those that come from the user profile, because they already exist. If you need to move users to a different OU, then after execution of the autocomplete task that sets the Provisioned status, you could trigger an update task on the organization name field, which would then move the user to the appropriate organizational unit.

    You really need to think about all the tasks, what is involved, and change the connector accordingly. When you implement two methods that accomplish the same thing, you need to remove some pieces from one of them. Look at all the tasks that will be required and the actions they carry out. Some of them will have to be auto-completed so that you always see the correct resource status.

    -Kevin
