VSAN network

I would like to set up a distributed switch with two DV port groups.

One will be for the vSAN traffic and the other for production, which uses VLAN 40.

They will use uplinks nic0 and nic1.

Should the ports on my physical switch that connect to my vSAN hosts be in trunk mode, or access mode on VLAN 40 only, for this setup?

If they are on separate VLANs, then yes, it should be a trunk on the switch port and the traffic should then be tagged.  Thank you, Zach.
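
For illustration only (not from Zach's reply above): on a Cisco IOS-style switch, a trunk port carrying the production VLAN 40 plus an assumed vSAN VLAN (50 is just a placeholder) might look roughly like this; adjust interface names and VLAN IDs to your environment:

interface GigabitEthernet1/0/10
 description ESXi host uplink (nic0 / nic1)
 switchport mode trunk
 switchport trunk allowed vlan 40,50
 spanning-tree portfast trunk

The two DV port groups would then tag their traffic with VLAN 40 and VLAN 50 respectively.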

Tags: VMware

Similar Questions

  • Measuring the impact of the network stack / components on latency

    Hello

    If I want to measure how much the vSAN network contributes to overall latency, what is the best way to do that?

    Example: a write takes place. It is of course sent over the vSAN network to another host, since a component of the object is located there as well. Transporting the data over the network and committing it to the physical disk layer both take time.

    Using vSAN Observer, it is easy to see the latency of the physical disks and how much they contribute to the overall latency. Looking at latency at the 'client' level, however, shows a different picture, which is logical because it sits higher in the stack.

    Since I cannot afford 10 Gbit Ethernet, my lab network is 1 Gbps, where each node has two NICs in an LACP LAG, and I can clearly see a host talking to one node over one NIC and to another node over the other NIC - LACP splits the traffic nicely across the two cards.

    During rebuilds I noticed nice, full use of those two links on the servers, so from a bandwidth perspective LACP seems to be worthwhile. But does it increase latency to an undesirable extent? I have also noticed that during normal operations the 1 Gig links never even come close to saturation, which makes the presumption that I need 10 Gigabit Ethernet seem unjustified for my small 4-node environment (about 80 VMs on it).

    What I want to be able to do is determine how much latency is added by the network layer.

    Hi John,

    I threw out the Samsung 850 Pro SATA drives and replaced them with HCL-listed SAS SSDs. The difference is simply huge. But I still sometimes get weird 100 ms+ 'moments of despair'.

    I use a vDS with a single LACP vmkernel interface on a pair of chassis-stacked Cisco SG500 Gigabit switches, where the LACP trunk is spread over the two chassis. Looking at vSAN Observer during these 'unhappy moments', when latency is very high from the virtual machine's point of view, I see that at the 'disk' level the latency is extremely low (it used to be very high with the Samsungs), but at the 'owner' and 'client' levels the latency can be very high. I suspect it is due to network latency.

    I will dissolve the LACP, configure a second vmkernel interface on a different subnet, and keep traffic from crossing between the chassis. That way a lot of overhead should be gone. I'll post the results.

    My vSAN Observer troubleshooting logic is:

    if
        the VMs feel sluggish
        and the "virtual disk" metric in vCenter is high (which explains why the VM feels slow when it takes part in the disk activity)
        while memory and CPU use on the host are low
        and latency at the 'disk' level is very low
        but latency at the 'owner' and 'client' levels is high
    then
        blame the network
    endif

    Does this make any sense?

    Finding out which network component is the culprit is another question...
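
    For reference (a sketch, not from the thread itself): vSAN Observer is launched from RVC on the vCenter Server and breaks latency out per layer (client, owner, physical disk), so the network's share is roughly the gap between the owner/client view and the disk view. Credentials and the cluster path below are placeholders:

    rvc administrator@vsphere.local@localhost
    > vsan.observer ~/computers/<your-cluster> --run-webserver --force
    (then browse to https://<vcenter>:8010 while the workload runs)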

  • Separate vSAN / Mgmt / vMotion traffic on different switches

    Hello

    I want to build a 4-node hybrid vSAN cluster (6.x).

    I have 2 x 10 GbE switches dedicated to vSAN traffic,

    and also 2 x 1 GbE switches for management / VM / vMotion traffic.

    Each host has 2 x 10 GbE NICs, one uplink to each of the 10 GbE switches.

    Each host also has 4 x 1 GbE uplinks, 2 to each of the 1 GbE switches.

    My idea is to use the 10 GbE only for vSAN traffic (separate dvSwitch, one uplink active, one standby) on each 10 GbE switch,

    to use 2 ports for management traffic (separate dvSwitch, one active, one standby) on each 1 GbE switch,

    and to use 2 ports for VM traffic (separate dvSwitch, one active, one standby) on each 1 GbE switch.

    vMotion traffic would be on the same uplinks as management, but with active / standby reversed.

    Question: is it possible / useful to connect vMotion to the 10 GbE uplink ports (active / standby reversed relative to the vSAN traffic)?

    I appreciate any useful comments on my planned install (I am not a networking guy).

    A couple of thoughts.  You can run several vmkernels for vSAN and vMotion (allowing one on each switch).  For vSAN, use separate VLANs / subnets (one on each switch gives you a nice A / B separation), while vMotion must have all of its vmkernels on the same subnet, ideally the same layer 2.  This allows for maximum reliability, in theory faster failover, and access to the full throughput.  It also has the advantage that, if your cluster is not big (bigger than, say, 48 ports), you can avoid having multicast leave the switch (which usually means fewer arguments with the network team), as each ToR switch will have its own vSAN network.    Then use NIOC to throttle and protect the traffic classes from each other.  NIOC is not as effective as SR-IOV at 100% isolation (but it is good enough), and you will maximize your 10 Gbps investment without swamping things.

    • Management network VMkernel interface = explicit fail-over order = P1 active / P2 standby, 10.1.1.1/24, VLAN 100
    • vMotion VMkernel-A interface = explicit fail-over order = P1 active / P2 standby, 10.1.2.1/24, VLAN 101
    • vMotion VMkernel-B interface = explicit fail-over order = P2 active / P1 standby, 10.1.2.2/24, VLAN 101
    • Virtual machine portgroup = explicit fail-over order = P1 active / P2 standby, 10.1.2.1/22, VLAN 102
    • Virtual SAN VMkernel-A interface = explicit fail-over order = P1 active / P2 standby (or unused) *, 10.1.3.1/24, VLAN 103
    • Virtual SAN VMkernel-B interface = explicit fail-over order = P2 active / P1 standby (or unused) *, 10.1.4.1/24, VLAN 104

    In this case, VLANs 100, 101 and 102 will be on all switches.

    * 103 and 104 could be configured to exist on only one switch each (A / B isolation, no failover - set the standby adapter to unused on both), or to exist on both (and use standby) as described.  This design focuses on keeping host-to-host communication on the same switch (it decreases complications with multicast and reduces latency, since vSAN traffic does not have to hop to another switch unless you run out of switch ports; although these days, with dense 40 Gbps switches using 10 Gbps breakouts, in theory you could fit the 64-node limit on a single switch).

    I'm curious about anyone's thoughts on simply disabling the failover and forcing each vmkernel to stick to its switch (accepting loss of communication on that vmkernel in the event of a switch failure).  I want to run some lab tests with both and test switch/path failover between these two configurations (compared to a single-vmkernel configuration).

    However, some people prefer a 'simpler' configuration, though (and I'm not opposed to that).

    Duncan has drawn up an active/passive failover configuration with a single vmkernel for each host.

    In theory, not being as dependent on NIOC for storage isolation should help latency during the short bursts it takes NIOC to kick in, compared with the active/active design.

    Virtual SAN and Network IO Control:

    • Management network VMkernel interface = explicit fail-over order = P1 active / P2 standby
    • vMotion VMkernel interface = explicit fail-over order = P1 active / P2 standby
    • Virtual machine portgroup = explicit fail-over order = P1 active / P2 standby
    • Virtual SAN VMkernel interface = explicit fail-over order = P2 active / P1 standby
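
    Whichever layout is chosen, the vSAN-tagged vmkernel interfaces can be added and verified from each host's shell; a minimal sketch, with the vmk numbers as placeholders for whatever interfaces back the vSAN VLANs:

    # esxcli vsan network ipv4 add -i vmk3     (vSAN-A vmkernel)
    # esxcli vsan network ipv4 add -i vmk4     (vSAN-B vmkernel)
    # esxcli vsan network list                 (confirm both are tagged for vSAN traffic)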
  • HA isolation response with vSAN

    Hi everyone, I hope that someone can offer some advice on that.

    I am new to vSAN, but I am trying to pull together a few HA cluster design decisions for a vSAN environment. Our environment (in short) looks like this:

    • 8-node cluster
    • All nodes have storage and participate in the vSAN
    • n + 1 resilience required
    • HA/DRS required
    • Dual 10 GbE NICs will be used for all traffic (with NIOC shares configured for QoS)
    • VMFS datastore (shared between all hosts) will be used for templates, ISO etc.


    The thing is, I'm a little unsure about some aspects of the isolation response. There are a few good articles out there, and I would say I understand 80-90% of it. In our scenario, if a host became isolated, then the HA heartbeats (via the vSAN network) would fail and the isolation response would be triggered; that part is fine (in our scenario I guess 'power off / shut down' would be the best option, since the VMs would have lost all network access too).

    The question is, how does having a VMFS datastore available to all cluster hosts (which HA will use for datastore heartbeating) change the decision about which isolation response to use?

    In addition, if there are, say, two hosts that become partitioned from the other hosts in the cluster, the isolation response would not be triggered on those two hosts, because they simply elect a new master and continue to operate (as do the virtual machines running on them). However, the other hosts (say 6 of them), which are now in their own partition, cannot see the other two hosts, so they start the HA response (restarting the virtual machines of the other two hosts). What strategy needs to be in place to deal with this?

    Thanks in advance.

    Andy

    Hi there, good question. Let's go through it.

    The question is, how does having a VMFS datastore available to all cluster hosts (which HA will use for datastore heartbeating) change the decision about which isolation response to use?

    It will not affect the decision about which isolation response to set. Look at it differently: when the vSAN network is down, the host can no longer access the components of the affected objects. This means that the virtual machines running on the isolated host have just lost the connection to their storage. If the connection to the storage is lost, the virtual machines running on it are useless. Even if you add heartbeat datastores, that does not change the fact that these virtual machines cannot reach their storage. Either way, I would always go for "power off". That way, when the isolation is lifted, the 'old' VM is already gone.
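
    As a side note (not part of the answer above, but a commonly recommended vSAN practice, worth checking against current VMware guidance): point the HA isolation check at an address that lives on the vSAN network, so isolation is detected on the network that actually matters, via the cluster's advanced options:

    das.usedefaultisolationaddress = false
    das.isolationaddress0 = <an IP pingable on the vSAN subnet, e.g. the vSAN VLAN gateway>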

    For a partition it is different. There is no "partition response" that you can set. If there is a partition, the side that owns > 50% of the components gets ownership of the object, and the other side loses ownership. The virtual machine can then be restarted... but it will not be powered off automatically, as can be done for an isolation event. In the case of a partition, when the partition is lifted, the host running the virtual machine that lost access to its storage will recognize that it has lost ownership and then kill the virtual machine process.

    Does that help?

  • Error "host cannot communicate with all other nodes in the cluster of enabeld VSAN.

    Hello community,

    We have a problem (?).

    We have a vSAN enabled cluster with four hosts. Everything seems perfect:

    - the configuration is good,

    - the vSAN status page shows "Network status: (green arrow) Normal",

    - the disk management page shows "Status: healthy" for all of our disk groups,

    - even "esxcli vsan cluster get" on each host returns "HEALTHY".

    But we have a little yellow exclamation mark on each host: "Host cannot communicate with all other nodes in the vSAN enabled cluster".

    Anyone with the same problem? Anyone with an idea or a hint?

    Thank you!

    We updated vCenter to the latest version and the error disappeared. Problem solved! The 'old' version of vCenter had been running since September 2014 - strange.

    Thank you very much for your help!

  • Help! vSAN is empty after a blackout!

    We had a power failure and all the ESX servers restarted. After they came back, /vmfs/volumes/vsanDatastore was empty. All our virtual machines were on vSAN, including vCenter.

    How can I recover my VM? My entire infrastructure broke down.

    brugh2,

    It certainly looks like a network partition occurred. We see a single node in the cluster instead of the three you mentioned should be there. If this is the case on all three nodes, they cannot form a quorum and bring production back online.

    Do you have a VMware Support request open? If you do, please PM me the SR number.

    In addition, we can try a few things.

    (1) On each host, make sure that the vSAN network tagging is intact and that it is associated with a vmknic:
    # esxcli vsan network list *

    (2) If the network is still tagged properly (it should be), try to ping each vSAN node from each vSAN node (i.e., ping your peer machines).

    (3) If the ping works, determine whether jumbo frames are in use. If they are, make sure jumbo frames are configured end to end (vmknic, vSwitch, physical NIC, physical switch). **

    --> If jumbo frames are in use, send a large-frame ping without allowing fragmentation:

    # vmkping -d -s 8500 <vSAN IP of a peer node>

    (4) If jumbo frames (where applicable) do NOT work, fix the MTU value on the physical switch or drop your vmknics down to MTU 1500.

    (5) If everything checks out at the transport level, we very likely have a problem with multicast. Validate your IGMP groups, snooping, queriers, etc. on the physical switch to ensure that multicast is handled correctly.
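
    If you want to see whether the multicast heartbeats are actually arriving on the vSAN vmkernel, a quick capture on the host can also help; a rough sketch, using the addresses/port from the example output below (substitute your own):

    # tcpdump-uw -i vmk1 udp and host 224.2.3.4
    # tcpdump-uw -i vmk1 udp port 23451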

    Please let me know how things are going!

    * The output should look like this (from my infrastructure):

    Interface

    VmkNic name: vmk1

    IP Protocol: IPv4

    Interface UUID: 9ebf0854-3a78-734f-b15e-90b11c2b6604

    Agent group multicast address: 224.2.3.4

    Agent Group Multicast Port: 23451

    Master Group Multicast Address: 224.1.2.3

    Master Group Multicast Port: 12345

    Multicast TTL: 5

    ** You can use the following commands to check the jumbo frame configuration (I don't use jumbo frames, so my MTUs are all 1500):

    ~ # esxcfg-vmknic -l | grep vmk1    <== examining vmk1 because that is the interface we got from the vSAN network list

    vmk1 VSAN IPv4 172.200.200.207 255.255.255.0 172.200.200.255 00:50:56:68:00:fb 1500 65535 true STATIC

    ~ # esxcfg-vswitch -l

    [ ... ]

    Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
    vSwitch1         2352        6           128               1500    vmnic2, vmnic3

      PortGroup Name     VLAN ID   Used Ports   Uplinks
      VSAN               0         1            vmnic2, vmnic3

    ^^ The vmknic's portgroup is called "VSAN", as that is the name of the port group. If you use a distributed vSwitch, you will get a port number instead of a portgroup name (the number will still be in the esxcfg-vmknic -l output).

    ~ # esxcfg-nics -l

    Name PCI Driver link speed Duplex MAC address MTU Description

    [ ... ]

    vmnic2 0000:41:00.00 bnx2x Up 10000Mbps Full 00:10:18:f1:b8:40 1500 Broadcom Corporation NetXtreme II 10 Gigabit Ethernet BCM57810

    vmnic3 0000:41:00.01 bnx2x Up 10000Mbps Full 00:10:18:f1:b8:42 1500 Broadcom Corporation NetXtreme II 10 Gigabit Ethernet BCM57810

    ^^ These are the two physical NICs used as uplinks for the vmknic's port group.

    All of the above MTUs are 1500 bytes. If you use jumbo frames, they should all be 9000 bytes.
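
    For reference (not part of the original reply): on a standard vSwitch the MTU can be raised from the host shell roughly as follows - the physical switch ports must allow jumbo frames as well, and the names here are placeholders:

    # esxcli network vswitch standard set -v vSwitch1 -m 9000     (vSwitch MTU)
    # esxcli network ip interface set -i vmk1 -m 9000             (vmkernel interface MTU)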

  • vSAN not recognizing SSD, even after I tag it manually as SSD...

    I am building a lab POC using a C7000 chassis full of blades with local storage.

    I want to use the local storage through the vSAN product.

    I installed USB sticks in the servers, successfully deployed ESXi 5.5u1 on each server, and carved the drives into RAID 0 volumes (pass-through is not available on the installed RAID controller).

    Then I created two RAID 0 volumes on the media:

    one disk is 22 GB,

    one disk is 222 GB, the rest of the space.

    I have tagged the 22 GB drive as SSD, and the vCenter clients all show it as an SSD.

    I have implemented all the requirements and the vSAN networking, so I go to set up the first disk group, and that is where it stops.

    When I go to set up a disk group on a host, it asks me to select 1 SSD and 1 non-SSD drive; the screen displays the non-SSD disk, but the SSD list is empty.

    I have rebooted and even rebuilt the hosts' RAID volumes from the ground up.

    I also made sure no VMFS partition was placed on it.

    What's wrong?

    I noticed an anomaly...

    When I go to "detach" the SSD so that I can release the property path (for diagnostic reasons) I noticed that I can not.

    He is unable to say that there is a Diagnostic partition on this disk, while she refuses.

    I think that this could be the problem, then, how to prevent this volume to be touched?

    Well then that abnormality?

    That's all.

    Sneaky Sneaky ESXi with its partitioning for coredumps auto was the culprit.

    I followed this advice using a remote coredump collector and then amazed the marked as SSD disk partition scheme, and it appeared.

    http://www.virten.NET/2014/02/permanently-disable-ESXi-5-5-coredump-file/

    Do that and you're golden.
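
    For anyone hitting the same thing, a rough way to check from the host shell whether a diagnostic/coredump partition is sitting on the disk (the device ID is a placeholder; the article above covers relocating the coredump):

    # esxcli system coredump partition list
    # partedUtil getptbl /vmfs/devices/disks/<naa.xxxxxxxx>     (look for a vmkDiagnostic partition)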

  • vSAN/HA cluster network partitions

    I wonder what will happen in a 4-host vSAN/HA cluster when there is a network partition and no external heartbeat datastore.

    Let's do an example:

    host1 contains the running virtual machine

    host2 contains the vmdk copy1

    host3 contains the vmdk copy2

    host4 contains the witness

    If there is a network partition between 2 groups (host1 sees just host2, and host3 sees just host4):

    The first partition (host1 and host2) will not power off the virtual machine, because it is not a network isolation, but HA in the second partition will restart the virtual machine (host3 and host4 hold 50% + 1 of the votes).

    Do we end up with a split brain, with 2 copies of the virtual machine running?

    Hello, in your scenario: hosts 1 and 2 have the running virtual machine and 1 copy of the virtual machine's data, which is not enough for quorum, so the VM would be killed.  Hosts 3 and 4 have 1 copy of the virtual machine's data plus the witness, so the VM could be powered on there.  Thank you, Zach

  • Newly added ESXi host to vSAN is put into a new network partition group

    I am adding a new Dell R730xd to my active vSAN cluster (also all Dell R730xds, same specs), but when doing so, the new host is put into a different network partition group.  The switch configuration is the same, and the new host is plugged in just like the others.  I can verify connectivity between the hosts (in addition, I have confirmed that I can ping the vSAN interface between each of the four hosts), yet it still says there is a communication error.  Any ideas?

    Almost certainly a multicast problem; I strongly recommend reading these:

    Virtual SAN troubleshooting: Multicast. VMware vSphere Blog - VMware Blogs

    http://www.yellow-bricks.com/2014/03/31/VSAN-misconfiguration-detected-2/
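
    A quick way to see the partitioning from the hosts themselves (a suggestion, not from the linked posts): run the command below on the new host and on an existing member and compare the Sub-Cluster UUID and member list; if the new host lists only itself as a member, multicast between it and the rest is the usual suspect:

    # esxcli vsan cluster get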

  • vSAN in a nested environment: network issue

    Hi all

    I am building a test vSAN environment in a nested setup, and I am stuck with an error that seems to be nothing new with vSAN implementations.

    I checked various blogs and websites, and the solution seems to be:

    • Option 1 - Disable IGMP snooping. This will allow all multicast traffic through, but if the only multicast traffic is vSAN's, it should not be a significant amount of traffic and should be safe to do.
    • Option 2 - Configure an IGMP snooping querier. If there is other multicast traffic and you fear that disabling IGMP snooping could open the network up to a flood of multicast traffic, then this is the preferred option. Cisco details how to do this here.

    But these nested virtual ESX hosts are connected via a vSwitch, and there is no physical NIC connected.

    Any help is very appreciated.

    Thank you

    Kapil

    If you are running nested, have you configured promiscuous mode and forged transmits?

    http://www.virtuallyghetto.com/2013/11/Why-is-promiscuous-mode-forged.html
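
    On the outer (physical) ESXi host that carries the nested ESXi VMs, those security settings can be enabled per standard vSwitch from the shell; a minimal sketch assuming a standard vSwitch named vSwitch1 (a distributed switch is configured in the UI instead):

    # esxcli network vswitch standard policy security set -v vSwitch1 --allow-promiscuous true --allow-forged-transmits true
    # esxcli network vswitch standard policy security get -v vSwitch1     (verify)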

  • VMkernel for vSAN

    Hello

    I have a question about vSAN:

    1. Can one vCenter host 2 vSAN clusters? Each vSAN cluster would have its own separate cluster object in VMware vCenter.

    2. The vSAN clusters will use the same physical switch uplinks. How can we separate these clusters? Is it correct to use the same IP address range with the same multicast address, or do we have to separate the IP address ranges and the multicast addresses?

    I use vSAN 6.2.

    Thank you.

    Hi, to comment further on your second question: if you have multiple vSAN clusters, the best practice is to put them on separate network segments (say, different VLANs). I am not sure what you mean by "same multicast address"; each vSAN node performs multicast, so all vSAN vmkernels send to the multicast addresses.

    For more help with these kinds of considerations, please consult the Virtual SAN 6.2 Networking Design Guide.

  • Battling with vSAN (2-node cluster with witness node): cannot write to vSAN

    I am having problems with my active vSAN cluster and cannot seem to write to the datastore, migrate a virtual machine to it, or upload anything to it. When I try to vMotion a VM onto the vSAN DS I get an error: "Cannot complete file creation operation, an unknown error occurred."

    Here is my layout,

    Two clusters with witness nodes, this one with 2 nodes.

    Each host has 2 disks, a flash device running as cache and a hard disk as capacity (hybrid type). For fault domains, I have a witness, a preferred, and a secondary domain, and the hosts are divided between them.

    Here is the best part - I have a workaround: if I change the host in the preferred fault domain, then while the fault domain is updating I can, for that brief moment, write to the datastore. That is how I am migrating VMs for the moment.

    Any suggestions?

    Thank you

    Does the vSAN health check show that everything is working well? My guess is that your 2 hosts have problems communicating with the witness node, or vice versa.

    As a test, try modifying the vSAN storage policy to enable Force Provisioning; if that works, the problem is probably a network configuration/routing problem in communicating with the witness.
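
    A quick connectivity check from each data node toward the witness (and back) can also narrow it down; a sketch, with the vSAN vmkernel name and the witness IP as placeholders:

    # vmkping -I vmk1 <witness vSAN IP>
    # esxcli vsan cluster get     (all three nodes should report the same Sub-Cluster UUID)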

  • 6-host vSAN cluster - I want to change the vSAN vmkernel IPs

    Hello

    as the title says, I have a 6-host vSAN 6.2 cluster (with some VMs on the vSAN datastore, powered off right now). What is the best method to change the IP addresses of the vSAN vmkernels without data loss?

    Has anyone done such a thing? Only the last octet will change, to slightly lower numbers... no VLAN / subnet etc. changes... I just want to go to each vSAN vmkernel and change the last octet...

    See you soon

    Paul.

    I have re-IPed hosts and their corresponding vSAN vmk IPs in maintenance mode as you describe. With everything in maintenance mode, you can just go in and change it. I don't think there is any danger of data loss. If you make a mistake and power everything back on, it would detect split network partitions or other network problems through the health check, and you would have a non-working datastore until you fix the network problems.
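
    For what it's worth, with the host in maintenance mode the change itself is a one-liner per host; a sketch with placeholder values for the vmk number, IP and netmask:

    # esxcli network ip interface ipv4 set -i vmk2 -I 10.10.10.21 -N 255.255.255.0 -t static
    # esxcli vsan cluster get     (afterwards, confirm all hosts report the same cluster and no partition)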

  • vSAN performance tests: how does your cluster behave under real stress?

    Our vSAN node config:

    Dell R730xd

    768 GB of RAM

    PERC H730

    capacity tier: 21 x 1.2 TB SAS disks

    caching tier: 3 x SanDisk SX350-3200

    dedicated 10 GbE NIC for vSAN

    We encounter problems when running a large number of virtual machines on our cluster which themselves generate a large number of IOPS (IOmeter-like load).

    We are seeing the following issues:

    - hosts get kicked out of vCenter

    - vmx files get corrupted

    This does not happen if we only use one virtual machine per host to generate the load. In that case, these are our observations:

    Read %  Write %  Block KB  OIO  File GB  Workers  Rand %  Seq %  FTT  Comps  Read cache res. %  Total IOPS  Total MB/s  Avg lat ms
    45      55       4         5    10       4        95      5     1    6      0                  8,092       33.15       0.60
    45      55       4         5    10       4        95      5     1    12     0                  10,821      44.33       0.46
    45      55       4         5    10       4        95      5     1    6      100                12,611      51.65       0.39
    45      55       4         5    10       4        95      5     1    12     100                11,374      46.59       0.43
    45      55       4         64   10       4        95      5     1    12     100                29,746      121.8       2.15
    100     0        4         64   10       4        100     0     1    12     0                  50,576      207.1       1.26
    100     0        4         64   10       4        100     0     1    12     100                50,571      207.1       1.26
    100     0        0.512     64   10       4        0       100   0    1      100                67,330      34.47       3.80

    (OIO = outstanding IO, Comps = number of components)

    These numbers look very good, don't they?

    From our point of view, it would be OK if latency increased as we increase the load with multiple virtual machines. But as described above, our cluster breaks down once the number of virtual machines goes beyond a certain point.

    We are currently testing with a subset of four vSAN nodes with the above config. In this group we were able to power on 187 load-generating machines before three of the four hosts went into the "not responding" state.

    We wonder whether anyone else has run large performance tests. If so, we would be very interested in your comments. Maybe that will help us find the flaw in our build.

    Kind regards

    Daniel

    With the help of VMware we discovered that the problems we face are caused by the load-generating virtual machines... each VM generates 2048 outstanding IOs... so with 187 VMs on the cluster we get an unreal number of outstanding IOs (382,976... as I recall from the observer graphs of our cluster, it levelled off at up to 13,000 per host), which we will never see in real-world scenarios. With this much outstanding IO, vSAN 6.1 has problems, and VMware is working on a solution... maybe with 6.2 and QoS this is resolved?

    A few other changes:

    - a newer driver for our Intel X710 network cards

    - a new driver and firmware (beta version from Dell) for our PERC H730 controller

    and some advanced settings:

    esxcfg-advcfg -s 100000 /LSOM/diskIoTimeout

    esxcfg-advcfg -s 4 /LSOM/diskIoRetryFactor

    esxcfg-advcfg -s 2047 /LSOM/heapSize (with these settings, we were able to create 3 disk groups - each with 7 magnetic disks and 1 flash device - per host)

  • Will older, slower, diskless / compute-only nodes slow down a vSAN system?

    Hello

    We have a cluster of 6 identical vSAN nodes contributing storage.
    We also have two old machines (think ProLiant G7 generation) that are simply lying around, and we thought of using these two to host the reduced-performance, non-critical part of our virtual machines, of which we have many.

    Question: will these two older boxes, which are much slower than today's hardware, drag down the performance of the entire vSAN solution? Will the 6 contributing nodes 'feel' the presence of these aging devices and everything slow down?

    There is a best practice not to build an asymmetric cluster, but does that refer to contributing nodes only?  I understand that 'consumer-only' nodes basically just sit there, but once they are part of the vSAN cluster, do they have a negative impact? If so, why?

    The network connections will be 10 Gig and identical to the 6 contributing nodes. It is just the processor performance that is slower.

    We will use affinity / anti-affinity rules to keep those 'low performance' VMs on the older machines.

    Kind regards

    Steven

    Hello, this is my understanding of your configuration:  6 new nodes in a vSAN cluster contributing storage, and 2 old nodes in the vSAN cluster not contributing storage.  I would not expect the 2 old nodes to affect performance.  Just my opinion - I have no way to provide proof.  Thank you, Zach.
