Hi,
I believe this is what you're looking for in order to learn about QP context characteristics:
https://www.mindshare.com/files/ebooks/InfiniBand%20Network%20Architecture.pdf
Are you sure you used the '-a' flag on both client and server?
Can you please try putting the -a flag as the last flag in the command line on the server?
For the client, it should be the last flag before the IP of the server.
Thank you. That fixed it. Can you explain why?
Hi,
I have a pair of SX1410s configured with MLAG. When a 40G port sends traffic to a 10G port located on the partner switch, the performance is terrible - around 50-60 Mbps in this case. When receiving, the 40G-attached host receives from the 10G host at nearly line speed (9.9 Gb/s).
When the same 40G port sends traffic to a 10G port located on the same switch, the performance is very good, nearly line speed (9.9 Gb/s).
There is no other traffic on the switches at the moment, and I am using iperf to test, but I can reproduce the problem with other applications. Flow control is enabled pretty much everywhere, so maybe that has a bearing. I cannot reproduce the problem when both hosts are 40G, in any combination of tests - that works absolutely fine.
Any ideas?
Regards,
Barry
Software: CentOS 7.2 with their repo provided drivers, libs, and utils.
Hardware: ConnectX-2 HCAs.
The basic network connections appear good: IPoIB works and the infiniband-diags utilities do not fail.
But when trying to connect RDMA I get this error:
rdma_create_event_channel: No such device
The same error results when testing with the rping utility.
Are there compatibility issues between the CentOS 7-provided software and ConnectX-2 hardware?
Gary
CentOS does install the kernel drivers by default: mlx4_ib.ko, mlx4_en.ko, mlx4_core.ko.
But it does not install the user-space library libmlx4-rdmav2.so, which you can add with:
$ yum install libmlx4
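To verify the fix, something like the following should work (assuming the standard CentOS 7 package names; <server_ip> is a placeholder):
$ yum install libmlx4 libibverbs-utils librdmacm-utils   # userspace provider plus verbs/rdmacm test tools
$ ibv_devices                                            # should now list the mlx4 device
$ rping -s                                               # on the server node
$ rping -c -a <server_ip> -v                             # on the client node; should print ping data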
I have a system in which performance depends critically on the NIC's ability to separate protocol headers and payload into separate buffers.
This is a proprietary protocol on top of Ethernet, not using TCP/IP.
The payload must be stored in system memory on a page (4K) boundary.
I am attempting to discern whether the ConnectX-3 NIC has the ability to support this behavior.
Our software system is built on top of FreeBSD and we have made suitable driver modifications for a few NICs from another vendor.
If we can do this with the CX3 NIC then it will open some new options for us and our customers in terms of hardware that will be available for use with our software.
I understand we probably need to modify the existing FreeBSD driver to operate the way we want; what I need to know is whether the NIC has the capabilities, and if so, how to program this behavior.
Can we close this post?
Sorry, I am too busy to work on this issue right now.
I still cannot solve it, but please close this post.
Does the MCX3141 only support Ubuntu 12.04, or does it also support 14.04, 15.04, 15.10, and other newer versions? A further question: in which Ubuntu versions does the MCX3141 support SR-IOV for RoCEv2?
The --all / -a flag runs traffic over all message sizes from 2^1 to 2^23 bytes.
When not using this flag, the default message size is 64KB.
This means that if the server side doesn't have the '-a' flag set, it will prepare its resources for 64KB messages. When the client then tries to send 128KB messages, it fails.
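As an illustration (assuming one of the perftest tools such as ib_send_bw; the device name and IP are placeholders):
$ ib_send_bw -d mlx4_0 -a                 # server: -a as the last flag
$ ib_send_bw -d mlx4_0 -a <server_ip>     # client: -a as the last flag before the server IP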
I tested the MCX3141 (ConnectX-3 Pro) on Ubuntu 15.04 and found that basic network card functionality does not work: a simple ping test fails. I see the same issue on Ubuntu 14.04.
Hi
We tried to connect our servers, which have ConnectX-3 cards, to a Cisco Nexus 9372 using a Mellanox MC2206130-002-A3 cable.
But both the Cisco switch and Windows show the cable as unplugged.
Before the Nexus, these adapters worked fine in IB mode with Mellanox unmanaged switches.
We also tried connecting two Cisco Nexus 9372 switches to each other with the MC2206130-002-A3 cable - the link came up.
So I think the problem is on the server/driver side.
OS is Windows Server 2016 TP5 with the latest driver version, 5.19.11822.0.
Port protocol is Eth, full duplex on both sides.
What could be wrong?
Hi, Can you attach a basic topology diagram, showing the ports with the performance problem, and how they are connected?
Are the inter-switch ports showing any discards or errors?
Are the inter-switch ports part of a port channel? Is there mlag configured on these inter-switch ports?
You wrote:
"When the same 40G port sends traffic to a 10G port located on the same switch, the performance is very good, nearly line speed (9.9Gb/s)."
Did you do this same test on "both" switches?
clock_gettime has its own overhead that needs to be taken into account. What are your expectations?
What are the results of ib_read_lat/ib_write_lat/ib_send_lat with the systems connected back-to-back? What are the details of your setup?
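For reference, a back-to-back latency check with the perftest suite would look something like this (device name and IP are placeholders):
$ ib_send_lat -d mlx4_0               # on the server
$ ib_send_lat -d mlx4_0 <server_ip>   # on the client; reports min/typical/max latency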
Hi,
The issue has been resolved via support. The config was missing the following:
interface port-channel 1 vlan map-priority 0 traffic-class 1
dcb priority-flow-control priority 0 enable
It may be worth updating the community MLAG how-to's with these lines as I don't believe this is in there.
With that in, everything is now working well.
Great - thanks for the update.
These Mellanox ConnectX-2 Quad Data Rate IB Mezz cards have been working in our Dell HPC cluster for the past 5 years without any problems (it was set up by Dell).
Apparently one of the cluster nodes stopped detecting the IB card last week. I have tried running ibstat, lspci -v | grep Mellanox, and ibv_devices (the cluster nodes are running Red Hat Enterprise Linux 6.1), but all came up with empty results. I even physically removed and re-seated the card, but that did not help. I am now thinking of temporarily swapping in a similar card from another cluster node to see whether it is a card-related issue. However, I would like to know whether I need to change or update any configuration parameters (say, GUID or MAC) after replacing the card, or would the cluster node pick up the new card automatically? If yes, could you please let me know how to do that on RHEL 6.1?
I am sorry, I am new to IB and have been relying heavily on the Mellanox/Red Hat articles so far, but I am stuck on this question now - could someone help, please?
Have you changed the cable connectivity from Ethernet to InfiniBand, or vice versa? If you did, the cable link may disappear (a known issue).
Otherwise, it looks as though you may have a port type configuration problem. Check the "port protocol" in the Windows system devices to ensure it has the correct type configured.
Check the switch profile configuration as well to ensure the "port-profile" is set to the right type (VPI, ETH or IB).
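If it helps, one way to query or set the port type from the host side is with mlxconfig from the Mellanox Firmware Tools (MFT); the device name below is only an example and will differ on your system:
$ mlxconfig -d mt4099_pci_cr0 query                 # look at LINK_TYPE_P1 / LINK_TYPE_P2
$ mlxconfig -d mt4099_pci_cr0 set LINK_TYPE_P1=2    # 2 = ETH, 1 = IB, 3 = VPI (auto)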