Best practice question

Hello

I set up a lab to evaluate vSphere 5.

I have a switch and 2 servers.

Problem:

vCenter is currently in a VM on server #2. That server's resources are insufficient, and I may have to reinstall it.

Question:

What is the simplest / most secure / quickest way to transfer this VM to server #1?

Good evening

Without using any special features, you can:

-clone the virtual machine

-migrate the clone

-stop your vCenter server

-start the cloned vCenter server

and:

-from the clone, migrate the original, then stop the clone / delete it and restart the original.

or:

-check that the vCenter clone is OK.

-delete the original and keep the clone.

I don't know if this is best practice, but in any case, by proceeding this way you have almost no service interruption and you have a backup available in two clicks.

You just have to make sure that the two VMs (original and clone) are never running at the same time, otherwise you will create an IP address conflict.
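
For what it's worth, the clone step can also be scripted. Below is a minimal sketch using the open-source vijava library (my assumption; the vSphere Client works just as well). The URL, credentials and VM names are placeholders, and note that CloneVM_Task goes through vCenter, so it must run while the original vCenter is still up:

    import java.net.URL;
    import com.vmware.vim25.VirtualMachineCloneSpec;
    import com.vmware.vim25.VirtualMachineRelocateSpec;
    import com.vmware.vim25.mo.*;

    public class CloneVCenter {
        public static void main(String[] args) throws Exception {
            // Connect to the still-running vCenter (placeholder URL/credentials).
            ServiceInstance si = new ServiceInstance(
                    new URL("https://vcenter.lab.local/sdk"), "administrator", "password", true);
            try {
                VirtualMachine vm = (VirtualMachine) new InventoryNavigator(si.getRootFolder())
                        .searchManagedEntity("VirtualMachine", "vcenter");
                if (vm == null) throw new IllegalStateException("VM not found");
                // Empty relocate spec: clone next to the original, powered off.
                VirtualMachineCloneSpec spec = new VirtualMachineCloneSpec();
                spec.setLocation(new VirtualMachineRelocateSpec());
                spec.setPowerOn(false);
                spec.setTemplate(false);
                Task task = vm.cloneVM_Task((Folder) vm.getParent(), "vcenter-clone", spec);
                System.out.println("Clone finished: " + task.waitForTask());
            } finally {
                si.getServerConnection().logout();
            }
        }
    }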

Good luck.

Tags: VMware

Similar Questions

  • Windows Server 2008 R2 Std - Best Practice Analyzer issue

    Windows Server 2008 R2 Std - BPA question
    I built a new Windows 2008 R2 Standard server and configured it as a domain controller. When I run the BPA I get the following error, which I cannot resolve:

    Issue:
    The Default Domain Controllers Policy is not in force on the OU OU=Domain Controllers,DC=WRIGHT.
    Impact:
    If the Group Policy settings that are defined in the default domain controllers policy are not applied to domain controllers, Active Directory operations may fail.
    Resolution:
    Link the Default Domain Controllers Policy to the organizational unit OU=Domain Controllers,DC=WRIGHT.

    The Default Domain Controllers Policy is linked to the Domain Controllers OU. I ran dcdiag, and the only failure is NCSecDesc, but I don't plan to add an RODC to the forest. When I run Resultant Set of Policy, I can see that the Default Domain Controllers Policy has been applied.
    Is there something I can do to clear this error message?

    Hello Lynn Furlong,

    Microsoft Communities is for consumer issues on Windows 8, Windows 7, Windows Vista and Windows XP. Your question concerns Windows Server 2008 R2, so it would be better to post it in the TechNet forums for IT pros.
    Click the link here to move your question to the Windows Server 2008 Group Policy forum. They will be better able to address your Group Policy not applying to domain controllers.

    Sincerely,

    Marilyn

  • Coding "best practices" issue

    I'm about to add several more controls to my project, and the source code will grow accordingly. I would be interested in advice on good ways to break the code into multiple files.

    I thought I would have one source file (and one header file) for each control. Does this cause problems when editing and saving the .uir file? When I run the Code > Set Target File... command, it seems to change the target file for all objects, not only the one I am currently working on.

    At a minimum, I would like to have all my callback routines in a file other than the one that contains main(). Is that a good/bad idea, or does it not matter? Is there anything special I need to know about this?

    I guess what I'm asking is: how much freedom do I have to put code in locations other than those the .uir editor seems to impose? Before I go down that road, I want to be sure I'm not opening a can of worms.

    Thank you.

    I'm not entirely comfortable speaking to "best practices", maybe because I am partly a self-taught programmer.

    Nevertheless, some concepts are clear to me: you are not limited in any way in how you divide your code into separate files. Personally, I am in the habit of grouping panels that serve a consistent set of functions (e.g. all the panels for test setup, or all the panels for test execution/monitoring) in a single UIR file, with the related callbacks in a single source file, but this is not a rigid rule.

    I have a few common callback functions that live in a separate source file; some of them, used in virtually all of my programs, are included in my own instrument driver and attached to controls either in code or in the UIR editor.

    When you use the UIR editor, you can use the Code > Set Target File... menu command to set the source file where generated code will go. This option can be changed at any time during development, so you could, for example, place a button on a panel, assign it a callback routine, set the target file, and then generate the code for that control only (Ctrl+G, or Code > Generate > Control Callbacks). Until you change the target file again, all generated code will go to the current target file, but you can move it to another source file afterwards.

  • ORACLE_HOME best practice question for a standby database

    Hello
    The following question is about best practices.
    I would like to know whether the ORACLE_HOME on the physical standby database should have the same name
    as on the primary database.
    For example, if the primary database ORACLE_HOME is /u01/app/oracle/product/1120/PROD,
    should it, per best practices, also be /u01/app/oracle/product/1120/PROD, or should it be
    /u01/app/oracle/product/1120/STDY?
    Thank you

    Yes.

    Given what you just said, I agree that is the way to do it. That way, when a patch arrives, you are less likely to make a mistake.

    I would probably create a 'decoder sheet' of the homes and hang it near my desk.

    Best regards

    mseberg

  • Best practices for vsphere 5.1

    Where can I find the most up-to-date doc about EQL array configuration / best practices with VMware vSphere 5.1?

    Hello

    Here is a link to a PDF file that covers best practices for ESXi and EQL.

    EqualLogic best practices for ESX

    en.Community.Dell.com/.../20434601.aspx

    This doc specifically mentions that the storage heartbeat VMkernel port is no longer necessary with ESXi v5.1.  VMware has corrected the problem that made it necessary.

    If you add it to a 5.1 system it will not hurt.  It will take an IP address for each node.

    If you upgrade 5.0 to 5.1, you can delete it later.

    Here is a link to VMware that addresses this issue and has links to other Dell documents which also confirm that it is fixed in 5.1.

    KB.VMware.com/.../Search.do

    Kind regards

  • Storage best practices or advice

    I am trying to develop a BB Java application and am currently looking at local storage. The app will be for public use.

    Does anyone have advice on when it is best to use the SD card vs. the persistent store? Is there a good best practices or advice document out there somewhere?

    This application will have two types of data: preferences and what would be the data files on a desktop system.

    I have read about using the persistent store, and it seems to be a good option because of the level of control over the data for synchronization and such. But I noticed that some OSS BB applications use the SD card, not the persistent store.

    Since I'm going to deploy the application to the general public, I know that I'm dealing with many configurations as well as with limits set by corporate IT policies (assuming those users can even install the app). So any advice on navigating these storage issues would be greatly appreciated.

    Thank you!

    The persistent store is fine for most cases.

    If the transient data is very large, or must be copied to the device via the USB cable, then maybe the SD card should be considered.

    However, many / most of the people do not have an SD card.
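
    As a sketch of the persistent-store approach for the preferences part (the store key and class are hypothetical; any persistable object such as a Hashtable can be the contents):

        import java.util.Hashtable;
        import net.rim.device.api.system.PersistentObject;
        import net.rim.device.api.system.PersistentStore;

        public final class Preferences {
            // Unique 64-bit key; by convention derived from a string unique to your app.
            private static final long KEY = 0x2f9c1a7b3d5e8640L; // hypothetical value

            private static final PersistentObject store = PersistentStore.getPersistentObject(KEY);

            public static Hashtable load() {
                synchronized (store) {
                    Hashtable prefs = (Hashtable) store.getContents();
                    if (prefs == null) {          // first run: initialise and persist
                        prefs = new Hashtable();
                        store.setContents(prefs);
                        store.commit();
                    }
                    return prefs;
                }
            }

            public static void set(String name, String value) {
                synchronized (store) {
                    load().put(name, value);
                    store.commit();               // write the change to flash
                }
            }
        }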

  • Best practice? Storage of large data sets.

    I'm programming a client to access customer address information. The data is delivered from a MSSQL Server by a web service.

    What is the best practice for binding this data, e.g. to ListFields? String tables? Or an XML file that is parsed?

    Any ideas?

    Thank you, hhessel

    These debates come up from time to time. Usually the big question is how the data gets onto the phone in the first place, after someone asks why BB does not support databases. There is no magic here - it depends on what you do with the data. For general considerations, see the J2ME material on sun.com, or JVM issues more generally. We should all get a BB hardware reference too, LOL...

    If you really have a lot of data, there are zip libraries, and I often use my own "compression" schemes.

    I personally go with simple types in the persistent store, and I built my own b-tree indexing system, which is persistable and, being J2SE-compatible, even testable. For strings, I store repeated prefixes only once. So if I have hundreds of strings that start with "http://www.pinkcat-REC", I don't store that prefix each time. Before you worry about the overhead of concatenating the pieces back together, note that the prefix gets picked up by the indexes you use to find the string anyway (so of course you need a little time to concatenate the pieces, but the extra space the index needs is low).

  • Best practices for Master Data Management (MDM) integration

    I am working on integrating MDM with Eloqua and am looking for the best approach to sync lead/contact data changes from Eloqua into our internal MDM hub (outbound only). Ideally, we would like that integration to be practically real-time, but my findings to date suggest that no such option exists. Any integration will involve some kind of schedule.

    Here are the options that we had:

    1. "Exotic" CRM integration: using internal events to capture and queue in the queue changes internal (QIP) and allows access to the queue from outside Eloqua SOAP/REST API
    2. Data export: set up a Data Export that is "expected" to run on request and exteernally annex survey via the API SOAP/REST/in bulk
    3. API in bulk: changes in voting that has happened since the previous survey through the API in bulk from Eloqua outside (not sure how this is different from the previous option)

    Two other options, which may not work at all and are potentially anti-patterns:

    • Cloud connector: create a scheduled campaign that queries for changes, and configure a cloud connector (if possible at all) to notify an MDM endpoint to query the contact/lead "record" from Eloqua.
    • "Native" CRM integration (crazy): fake a native CRM endpoint (for example, Salesforce) and use internal events and external calls from Eloqua to push data into our MDM

    Questions:

    1. What is the best practice for this integration?
    2. Is there an option that would give us close to real-time integration (technically asynchronous, but event-driven / callback-based)? (something like outbound messaging in Salesforce)
    3. What limits should we consider with these options? (for example daily API call limits, SOAP/REST response size)

    If you can, I would try to talk to Informatica...

    To imitate the native-type integrations, you would use the QIP and control which activities get posted to it via internal events, as you would with a native integration.

    You would also use the cloud connector API to allow you to set up a CRM (or MDM) integration program.

    You would have identification fields added to the contact and account objects in Eloqua for their respective IDs in the MDM system, and keep track of the last MDM update with a date field.

    A task scheduled outside of Eloqua would run at a certain interval, extract the QIP changes and send them to MDM, and pull the contacts waiting to be sent, in place of the cloud connector.
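
    As a rough Java sketch of that external scheduled task (the endpoint, auth header and interval are placeholders, not the real Eloqua API; the real call would hit whatever QIP/Bulk export you configure):

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class ChangePoller {
            public static void main(String[] args) {
                HttpClient http = HttpClient.newHttpClient();
                ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
                // Poll every 15 minutes; this is the "kind of schedule" mentioned above.
                scheduler.scheduleAtFixedRate(() -> {
                    try {
                        HttpRequest req = HttpRequest.newBuilder()
                                .uri(URI.create("https://eloqua.example.com/api/queue/changes")) // placeholder
                                .header("Authorization", "Basic <credentials>")                  // placeholder
                                .GET().build();
                        HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());
                        // Hand resp.body() (the changed contacts/leads) to the MDM hub here.
                        System.out.println("Polled Eloqua, HTTP " + resp.statusCode());
                    } catch (Exception e) {
                        e.printStackTrace(); // log and continue so one failure doesn't kill the schedule
                    }
                }, 0, 15, TimeUnit.MINUTES);
            }
        }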

    Unfortunately there isn't really anything like outbound messaging. You can have form submissions send data to a server immediately (it would be a bit like integration rules running from form processing steps).

    See you soon,

    Ben

  • Best practices for setting up RoboHelp to create a .chm?

    I have Tech Comm Suite 2015. I need to produce a .chm file from a FrameMaker book. I tried to do it directly from FrameMaker but was not happy with the results, so I will try a RoboHelp project instead. Can someone help me with best practices to achieve this? I would like to keep my files linked to RoboHelp, so that I don't have to start over when they are updated. I have experimented with it a little. Can you fix things after you import? For example, if I have not fixed cross-references (and deleted page numbers, for example) in FrameMaker before the import/linking, do I have to do it all over again? I have worked with FrameMaker for quite a long time, but I'm less familiar with RoboHelp. Is there a video or webinar showing how to do this? Or can someone give me some tips and things I should know about this procedure? Thank you

    Hello

    1. Table of contents entries all at the same level:

    To create TOC navigation levels in the published FM output, we need to vary either the first indent, the font size, or the font weight.

    We determine the level by setting these properties per tag:

    -First indent,

    -Font Size,

    -Font

    For example, if you want Heading3 to appear nested inside Heading2, like this:

    Heading2

        Heading3

    In the Paragraph Designer, update the properties of these 2 tags (Heading2TOC, Heading3TOC):

    -First indent of Heading3TOC greater than that of Heading2TOC

    - Or font weight of Heading3TOC lighter than that of Heading2TOC

    - Or font size of Heading3TOC smaller than that of Heading2TOC

    2. The 'Enable browse sequence' option enables the navigation arrows. Try activating the 'Enable Browse Sequence' option. (Apply the latest patch: Help > Updates.)

    3. Once you create your table of contents, you will see the chapter title begin to appear in the breadcrumbs.

    The main effort you'll need to make is to create a leveled table of contents; once that is done, it should solve the issues you face.

    Amit

  • Recommendations or best practices around editing audio data on a network share?

    I have several users editing audio located on the same network share. They are always complaining about the performance.  Is it a best practice to edit audio located on the network?  I see so many issues (latency, possible corruption, etc.) with that from the IT point of view, but I would like the opinion of those more familiar with the application and its best practices.  Thanks in advance.

    It's crazy! For any audio to be edited with any degree of speed and safety, it must be downloaded to a local computer, edited there, and then the final result saved back to the network.

    You might as well do this anyway - the moment you make a change, you are storing a local temp version of the file on the editing machine, and it is only a real Save, or Save As, that goes back to the network drive. Also, you would be working on a copy, so the original is still available in the case of a screw-up; that would not be the case if you edited the original files directly on the network, so it is intrinsically safer.

  • vCO best practices - combining workflows

    I have taken a modular approach to a task I am trying to accomplish in vCO. I want to take action against ANY routed NAT Org network in a specified Org.  Here is where I am now; I would like comments on what is possible or recommended before I start down the wrong path. I have two separate workflows that individually do what I need, and now I need to combine them.  Both workflows work well separately, and I am now ready to combine them into a single workflow.

    1 Both Workflows.jpg

    (1) With help from Joerg on this forum, I was able to return all Org networks in a particular Org, then filter according to the fence mode to return an array of ONLY routed NAT Org networks.  The name of the workflow is "RETURN Routed ORG Nets". The input parameter is an organization, and the output parameter is an array of routed NAT Org networks.

    1 RETURN ORG Net.jpg

    (2) There is an existing workflow (it comes with the vCD 1.5 plugin) that configures a routed NAT Org network. The input parameter is a routed NAT Org network. The name of the workflow is 'SECURE Routed ORG Net', and I would like to use this (slightly modified) workflow as it is perfect for a task I need to fulfill.

    1 Secure ORG Nets.jpg

    I think there are two options.

    (1) Include the JavaScript code and logic of the 'SECURE Routed ORG Net' workflow, which has several workflow elements, inside the search loop of the 'RETURN Routed ORG Nets' workflow.

    (2) The second option is to add the 'SECURE Routed ORG Net' workflow as a workflow element of the existing 'RETURN Routed ORG Nets' workflow.  I like this approach better because it allows me to keep them separate and reuse the workflows or their elements individually.

    Questions:

    What is recommended?

    Are there restrictions on what I can pass as an INPUT parameter to the second (embedded) workflow? Can I pass only JavaScript object references, or can I pass an array of the '.name' property of the routed NAT Org networks in the array?

    I assume that the input parameters for the second workflow can be prompted for in the first, and passed as OUTPUT to the second (embedded) workflow, where they will be mapped as INPUT?

    I read through the developer's guide, but I wanted to get your comments here also.

    Hello!

    A good principle (from software engineering) is DRY: don't repeat yourself!

    So calling a library workflow indeed seems to be the best approach for the majority of use cases. (That way you don't have to worry about maintaining the logic if something changes in the next version...) That's exactly what the workflow library is for.

    For passing objects to a workflow element, you can use all the inventory types, the basic data types (such as boolean, string, number), and the generic ones (Any, Properties). Each of them can also be an array.

    However, the type must match the input parameter of the called workflow (or be "Any").

    So in your case, I guess the called SECURE... workflow expects just a single network (perhaps by its name, probably an OrgNetwork as well).

    You must create the loop logic yourself (in your "outer" workflow) to go through all the networks you want to reconfigure.

    For an example: see http://www.vcoteam.info/learn-vco/creating-workflow-loops.html

    And as a tip: the Developer's Guide is unfortunately not really helpful for these "methodology"-related questions. See the examples on http://www.vcoteam.info , the nice videos on YouTube ( http://www.vcoportal.de/2011/11/getting-started-with-workflow-development/ ) and watch the recording of our session at VMworld: http://www.vcoportal.de/2011/10/workflow-development-best-practices/ (the audio recording is in the post in mp3 format)

    See you soon,

    Joerg

  • Best practices for tags

    Hello

    Tags are used in most of the bundled applications. For example, in the Customer Tracker app, we can add tags to a customer, and these tags are stored in a varchar2 column in the Customers table.
    In my case, I have pre-defined real estate tags in a lookup table called TAGS. For example: Full floor, Furnished, Equipped, Duplex, Attached... What is the best practice for tagging properties:
    1 - Store these tags in a varchar column in the PROPERTIES table, using a Shuttle item.
    OR
    2 - Store them in a third table, e.g. PROPERTIES_TAGS (ID PK, PROPERTY_ID FK, TAG_ID FK), then use the LISTAGG function to show the tags on one line in the properties report.
    OR
    Or do you have a better option?

    Kind regards
    Fateh

    Fateh says:
    Hello

    Tags are used in most of the bundled applications. For example, in the Customer Tracker app, we can add tags to a customer, and these tags are stored in a varchar2 column in the Customers table.
    In my case, I have pre-defined real estate tags in a lookup table called TAGS. For example: Full floor, Furnished, Equipped, Duplex, Attached...

    These seem to be two different use cases. In the bundled applications, tags allow end users to attach free-form metadata to the data for their own needs (these are sometimes called "folksonomies"). Users can use tags for different purposes, or different tags for the same purpose. For example, I could tag customers 'Wednesday', 'Thursday' or 'Friday' because those are the days they receive their deliveries. For the same purpose, you could mark the same customers '1', '8' and '15' after the route numbers of the trucks making the deliveries. You could use 'Monday' to indicate that a client is closed on Mondays...

    In your application you assign predefined attributes to known properties. That is a standard 1:M attribute model; presenting it using the tag metaphor is not equivalent to users' free-form tags.

    What is the best practice for tagging properties:
    1 - Store these tags in a varchar column in the PROPERTIES table, using a Shuttle item.

    If you do this, how can you:

  • Search for furnished duplex properties effectively?
  • Globally change "furnished" to "equipped"?
  • Report the number of properties, broken down by full floor, duplex, equipped...?

    OR
    2 - Store them in a third table, e.g. PROPERTIES_TAGS (ID PK, PROPERTY_ID FK, TAG_ID FK), then use the LISTAGG function to show the tags on one line in the properties report.

    This, as Why use a Lookup Table shows, is the correct way to proceed. It allows the data to be indexed for efficient retrieval, and questions such as those above can be answered simply by using joins and grouping.

    You might also want to examine the possibility of eliminating the ID PK and using an index-organized table for this.
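
    For illustration, the LISTAGG report over such a join table could be queried like this from Java (connection details and the NAME columns are assumptions; only PROPERTIES_TAGS and its keys come from your post):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class PropertyTagReport {
            public static void main(String[] args) throws Exception {
                // One line per property, tags aggregated into a single column.
                String sql =
                    "SELECT p.id, p.name, " +
                    "       LISTAGG(t.name, ', ') WITHIN GROUP (ORDER BY t.name) AS tags " +
                    "  FROM properties p " +
                    "  JOIN properties_tags pt ON pt.property_id = p.id " +
                    "  JOIN tags t ON t.id = pt.tag_id " +
                    " GROUP BY p.id, p.name";
                try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/ORCL", "app", "secret"); // placeholders
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(sql)) {
                    while (rs.next()) {
                        System.out.println(rs.getLong("id") + " | " + rs.getString("tags"));
                    }
                }
            }
        }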

    OR
    Or do you have a better option?

    I'd also look carefully at your data model. Make sure that you're not flirting with the EAV anti-pattern. Aren't some or all of these values simply attributes of the property?

  • Win2008R2 DC disk types best practices...

    Hello

    I am trying to upgrade my AD environment to Win 2008R2, and I am asking for feedback on best practices for selecting the disk types and virtual machine parameters.  I intend to install fresh from scratch and transfer AD DS and much more afterwards.

    I intend to create a 50 GB volume for the OS itself and a 200 GB volume for user home folders and other user data.

    In a virtual environment, is it advisable to locate the SYSVOL and AD database somewhere other than the default system drive?

    For my 50 GB OS disk (domain controller), is it normal / good practice to store it with its virtual machine?

    My 200 GB user data drive should ideally be reassignable, intact, to another VM to preserve the data in case of failure.

    Which disk provisioning type would it be normal to use with these kinds of disks?

    Which disk mode (independent / persistent) would it be normal to specify in this scenario?

    When such a virtual disk is created, is it possible to increase / decrease its size?

    Thank you very much for the comments on these issues.

    Best regards

    Tor

    Hello.

    In a virtual environment, is it advisable to locate the SYSVOL and AD database somewhere other than the default system drive?

    Yes, follow the same best practices as you would in the physical environment here.

    For my 50 GB OS disk (domain controller), is it normal / good practice to store it with its virtual machine?

    Probably yes.  Why would you consider storing it elsewhere?  Are you doing something with SAN replication, or is there a reason?

    Which disk provisioning type would it be normal to use with these kinds of disks?

    I tend to use thin provisioning for OS volumes.  For the absolute best performance, you can go with eager-zeroed thick disks.  If you're talking about standard file shares for user data, then any disk type would probably be fine.  Check out "Performance Study of VMware vStorage Thin Provisioning" ( http://www.VMware.com/PDF/vsp_4_thinprov_perf.pdf ) for more information.

    Which disk mode (independent / persistent) would it be normal to specify in this scenario?

    This can depend on how you back up the server, but I would guess you wouldn't use independent mode.  If you do, make sure you use persistent; you do not want to lose user data.

    When such a virtual disk is created, is it possible to increase / decrease its size?

    Growing a disk is very easy with vSphere and 2008, and it can be done without interruption.  Shrinking is a more convoluted process and will most likely require manual intervention and downtime on your part.

    Good luck!

  • Where to put the Java code - best practices

    Hello. I work with JDeveloper 11.2.2. I'm trying to understand the best practices for where to put code. After reviewing http://docs.oracle.com/cd/E26098_01/web.1112/e16182.pdf, it seemed that the application module was the preferred location (although many examples in the PDF reside in main methods). After some time coding, though, I noticed that a certain number of libraries were imported, and I wondered whether this would impact performance.

    I looked at the articles published on the forum, in particular "Re: programmatically access the service method (client interface)". That link mentions accessing the code from a backing bean, and the bulk of the recommendations seem to be to drag the data control onto the JSF page, or to use the bindings to access code.

    My interest lies in where to put the Java code in the first place: in the view object, entity object, application module, backing bean... or somewhere else?

    I can describe my best guesses at where to put the code, with their advantages and disadvantages:

    1. In the application module
    Benefits: a central location for code makes development and support easier, as there are not multiple access points. Rather like a data control centralizes the services, the application module can act as a conduit for the different parts of the code in your model objects.
    Cons: everything in one place means that the application module becomes bloated. I don't know how memory works in Java - if the app module imports tons of different libraries, are they all loaded even when a method that just re-runs a simple query is called? Memory hog?

    2. Write the code in the objects it affects. If you write code that accesses a view object, write it in the view object. Then expose it to the client.
    Benefits: the code is reached through fewer conduits (for example, I expect that if you call the application module from a JSF backing bean, and the application module then calls the view object, you have three different pieces of code).
    Cons: the code gets spread out, and is more difficult to locate, etc.

    I would greatly appreciate your thoughts on the issue.


    Kind regards
    Stuart

    Edited by: Stuart Fleming on May 20, 2012 05:25

    Edited by: Stuart Fleming on May 20, 2012 05:27

    The first point here is that when you say 'where to put the Java code' and you're referring to ADF BC, the answer is that you put 'business logic Java code' in the ADF Business Components. Of course it is perfectly fine to have Java code in the ViewController layer that serves the user interface layer. Just don't put business logic in the user interface layer, and don't put user interface logic in the model layer. In your 2 examples you seem to consider the ADF BC layer only, so I'll assume you're only asking about business logic Java code.

    Meanwhile, I'm not keen on the term 'best practices', in that people follow best practices without thinking; best practices usually come with conditions that people forget to apply. Fortunately you haven't done that here, as you have thought through the pros and cons of each (nice work).

    Anyway, back on topic and stepping off my soapbox; regarding where to put your code, my thoughts:

    (1) If you have only 1 or 2 methods, put them in the AppModuleImpl

    (2) If you have hundreds of methods, or there is a chance that #1 above will turn into #2, divide the code between the AppModuleImpl, the ViewImpls and the ViewRowImpls. Why? Because your AM will become overloaded with hundreds of methods, making it unreadable. Put the code where it logically belongs instead: methods that operate on a specific row of a VO go into the associated ViewRowImpl, methods that work across rows in a VO go into the ViewImpl, and methods that work across your VOs go into the associated AppModuleImpl.
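
    A skeleton of that split, for illustration only (class and attribute names are invented; it compiles only against the ADF BC libraries):

        import java.math.BigDecimal;
        import oracle.jbo.RowSetIterator;
        import oracle.jbo.server.ApplicationModuleImpl;
        import oracle.jbo.server.ViewObjectImpl;
        import oracle.jbo.server.ViewRowImpl;

        // Row-level logic: operates on the attributes of one VO row.
        class OrderLineRowImpl extends ViewRowImpl {
            public BigDecimal lineTotal() {
                BigDecimal price = (BigDecimal) getAttribute("Price");
                BigDecimal qty = (BigDecimal) getAttribute("Quantity");
                return price.multiply(qty);
            }
        }

        // Cross-row logic: works across the rows of one view object.
        class OrderLinesViewImpl extends ViewObjectImpl {
            public BigDecimal orderTotal() {
                BigDecimal total = BigDecimal.ZERO;
                RowSetIterator it = createRowSetIterator(null);
                while (it.hasNext()) {
                    total = total.add(((OrderLineRowImpl) it.next()).lineTotal());
                }
                it.closeRowSetIterator();
                return total;
            }
        }

        // Cross-VO logic: coordinates several VOs and the shared transaction.
        class OrdersModuleImpl extends ApplicationModuleImpl {
            public void postAllOrders() {
                // ...touch several view objects here, then commit once.
                getDBTransaction().commit();
            }
        }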

    To be honest, whatever option you choose, the one thing I recommend as a best practice is to be consistent and document the standard, so your other programmers know it too.

    BTW, there is no issue with having a lot of libraries/imports in a class; it has no performance cost. However, if your methods require a lot of class variables, then yes, there will be a memory cost.

    On a side note, if you are interested in more ideas on how to build ADF applications properly, think about joining the "ADF EMG", a free online forum which deals with ADF architecture, best practices (cough), deployment architectures and more.

    Kind regards

    CM.

  • Best practices for post-installation steps after an 11.2.0.2.4 ORACLE RAC installation

    I finished the 11.2.0.2 RAC installation and patched it to 11.2.0.2.4. The database is also created.

    The nodes are Red Hat Linux with ASM storage.

    Are there any good articles or links regarding post-installation best practices?

    Thanks in advance.

    Hello

    I also want to know what kind of monitoring scripts I can set up as cron tasks to monitor or detect any failures or problems.

    To monitor the cluster (OS level):
    I suggest you use "CHM", a powerful tool that already ships with the Grid Infrastructure product.

    How do you set it up? No setup needed... just use it.

    Cluster Health Monitor (CHM) FAQ [ID 1328466.1]

    See this example:
    http://levipereira.WordPress.com/2011/07/19/monitoring-the-cluster-in-real-time-with-chm-cluster-health-monitor/

    To monitor the database:
    USING THE PERFORMANCE TUNING ADVISORS AND MANAGEABILITY FEATURES: AWR, ASH, ADDM and SQL Tuning Advisor [ID 276103.1]

    The purpose of this article is to illustrate how to use the new 10g manageability features to diagnose
    and resolve performance issues in the Oracle database.
    Oracle 10g provides powerful tools to help the DBA identify and resolve performance issues
    without the hassle of complex statistical data analysis and extensive reports.

    Hope this helps,
    Levi Pereira

    Edited by: Levi Pereira on November 3, 2011 23:40
