We are mainly dealing with 10GbE cards from Emulex in this thread. I noticed this in the VC log after bringing it up to VC 4. However, after the upgrade I am facing several problems, and the one mentioned above is one of them. Hi Will, we have run Hyper-V from the beginning and up till now it has been rock-solid stable. With enhanced configuration flexibility, unmatched performance, and a leading energy-efficient design, the DL380p Gen8 offers the perfect solution for the dynamic compute requirements of today's demanding datacenters. What server hardware do you use?
|Date Added:||18 December 2011|
|File Size:||18.93 MB|
|Operating Systems:||Windows NT/2000/XP/2003/7/8/10, MacOS 10/X|
|Price:||Free* [*Free Registration Required]|
We have found out since that most of the machines that are losing connectivity are sending lots of data. All pre-configured servers ship with a high-voltage server-to-PDU power cord.
After half a day of troubleshooting I found out the following: I contacted HP and they said that this was an MS problem and I should contact them. The cluster has been really stable since VMQ was disabled. Ball-bearing rail kits contain telescoping rails which allow for in-rack serviceability.
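For reference, disabling VMQ as described in this thread is typically done per physical adapter with the in-box NetAdapter cmdlets on Windows Server 2012 and later. This is only a sketch; the adapter name "Ethernet 1" is a placeholder, not something from the thread:

```powershell
# List adapters with their current VMQ state (check the Enabled column)
Get-NetAdapterVmq

# Disable VMQ on one physical NIC; repeat for each adapter in the team.
# "Ethernet 1" is a placeholder adapter name.
Disable-NetAdapterVmq -Name "Ethernet 1"

# Equivalent alternative using Set-NetAdapterVmq:
Set-NetAdapterVmq -Name "Ethernet 1" -Enabled $false
```

Note that changing VMQ settings resets the adapter, so, as mentioned later in the thread, evacuate or live-migrate the VMs off the host first.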
vNICs and VMs lose connectivity at random on Windows Server R2
The suggestion made to me by MS tier 3 support was to disable task offloading on all the NICs on the cluster nodes. Next came a lot of disabling. After the reboot I am not able to see the content of the ClusterStorage folder and failover of the virtual machines is impossible.
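The tier-3 suggestion to disable task offloading can be applied with the NetAdapter cmdlets or netsh. A hedged sketch, assuming Windows Server 2012 R2; the wildcard hits every adapter, which is a broad brush suitable only for troubleshooting:

```powershell
# Disable checksum and large-send offload on all adapters
# (troubleshooting only; re-enable once the culprit is found)
Disable-NetAdapterChecksumOffload -Name "*"
Disable-NetAdapterLso -Name "*"

# Global switch for IP task offload:
netsh int ip set global taskoffload=disabled
```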
So evacuate all VMs first before you do this. Up till this morning I had high hopes the issue was fixed by applying the latest supplement from HP, and in particular the Broadcom update.
This is the quickest way to make the issue occur.
Will Moore, please let us know if the Emulex driver upgrade helped fix this issue. If so, I will update the rest of the nodes and keep you posted.
HP ProLiant DL380p Specification PDF Download
For additional details see the Networking Section of this document. We have decided to bail on R2 for the moment. And without a way to reproduce the issue on demand it was complex to troubleshoot.
So we did this: I finally got the rebuilt Dell with the QP card into the cluster today.
When packets are indicated up, all the packet data in the queue is delivered directly to the virtual network adapter. We have been running this 2-node Hyper-V R2 cluster for a few months, and this is the first time this has occurred.
Meanwhile we are quite used to disabling VMQ and fully agree with the resolution mentioned. The 331FLR nodes seem to run fine up till now. That's why it makes sense to team up with the people who know HP infrastructure hardware best: the experienced professionals at HP Services.
Prior to making a power supply selection it is highly recommended that the HP Power Advisor is run to determine the right-size power supply for your server configuration. If you could keep us updated about your observations, please report back!
Power Specification and Technical Content for supported power supplies can be found at: For me this seems pretty conclusive.
From the same KB: And yes, I have already tried that. My repro is fairly simple. Any additional options purchased will be shipped separately. It has been over a month with tickets escalated with HP and MS. The Dell with the older DP card has been running now for 24 hours without any issues.
At this moment I am still facing two problems and have fixed one.
HP ProLiant DL380p Specification
Thanks mk, let us know if it helped performance. A live migration to another host brings the network back up again. I hope the issue will be fixed in the new update pack from MS that comes on April 10th.
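The live-migration workaround mentioned above can also be scripted from the Failover Clustering module. A sketch, with "VM01" and "NODE2" as placeholder names for the VM and the target cluster node:

```powershell
Import-Module FailoverClusters

# Live-migrate the VM to another node; in this thread that brought
# the vNIC's network connectivity back without VM downtime.
Move-ClusterVirtualMachineRole -Name "VM01" -Node "NODE2" -MigrationType Live
```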
Btw, the servers are X X5. Even after replacing the faulty FlexFabric module, the network disconnects kept happening.