Channel: Mellanox Interconnect Community: Message List

Re: ConnectX-3 RoCE fails send WRs when running multiple QPs


I'm afraid it is tricky to understand without debugging and the source code, but most likely there is a problem there. Try printing the values of lkey/rkey, the QP numbers, the protection domains, and other related data for every packet you send/receive. Check that the receiver is running first, before the sender posts its send request. If you run only one sender and one receiver, does it work?

 

Regarding performance, you might check the Mellanox Tuning Guide. Which test are you using, is it part of 'perftest'?

 

And probably the most important question: are you using Mellanox OFED or the inbox driver? Is it Linux?
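If the problem reproduces with standard tools, one way to isolate it (a sketch, assuming the perftest package bundled with MLNX_OFED; the server IP is a placeholder) is to compare a single-QP run against a multi-QP run:

# server side, single QP over rdma_cm
ib_send_bw -R -q 1
# client side
ib_send_bw -R -q 1 <server-ip>

# then repeat with several QPs to see whether only the multi-QP case fails
ib_send_bw -R -q 4                # server
ib_send_bw -R -q 4 <server-ip>    # client

If the single-QP run is clean and only the multi-QP run fails, that points at per-QP resources (PDs, keys, CQ sizing) rather than the link itself.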


Re: 'State: Initializing' but works


Thank you eddie.notz for your response.

 

Yes.

Both port 1 and port 2 status had been 'Active' until then. (I have not applied any updates recently, and all updates are always installed manually to avoid any potential issues.)

The second port's status is still 'Initializing', but the port is working properly.

Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?


7 days later and no reply from Mellanox... Seriously?

Re: VMware ESXi 6.0 virtual ib_ipoib interfaces


Hi Sopie

 

We don't want to use it as an Ethernet card but for IP over InfiniBand (IPoIB). The problem is that I can't get it to work as a 40Gb IPoIB card in ESXi 6 anymore, even though it worked perfectly with ESXi 5.5 and the 1.8.2.4 drivers (which are no longer supported).

How to change speed from FDR to QDR using ibportstate command in centos 6.2


We have Mellanox InfiniBand FDR cards and cables that support FDR. We want to reduce the speed of the adapters from 56 Gbps to 40 Gbps. We tried using ibportstate <base lid> <port id> on the node; however, it does not help. Please let me know the exact sequence of commands to do so.

 

Card :

Mellanox Tech. MT27500 Family [ConnectX-3]

 

Switch : Mellanox 6536 Switch

Re: How to change speed from FDR to QDR using ibportstate command in centos 6.2


ibportstate <lid > <port> fdr10 0 espeed 30
ibportstate <lid> <port> reset
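A concrete sequence with a follow-up query to verify the result might look like this (a sketch; substitute your own LID and port number):

# as suggested above, then re-query the port to confirm the change
ibportstate <lid> <port> fdr10 0 espeed 30
ibportstate <lid> <port> reset
ibportstate <lid> <port>     # default operation is a query; check LinkSpeedActive / LinkSpeedExtActive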

Is it possible to connect 2 ethernet switches (sx1036) located up to 10km apart without losing RDMA?


Hi guys,

 

I have a question that is really important for me: how can I connect my two sites so that I am able to use RDMA between them?

 

I have 2 racks, each located in a different datacenter. Each rack has its own SX1036 Ethernet switch, and all servers are connected using ConnectX-3 EN or VPI PRO adapters. The server-to-switch cabling is all within the same rack, so those cables are passive. The time has come to connect the sites together, so I will need active fiber between the sites.

 

I have the following options to choose from. Please note that I am on a fairly low budget, so buying a MetroX long-haul switch is out of the question.

 

I can rent a fiber ring, giving me 2 dark fibers, or go with a 10GbE L2 or L3 ring.

 

Let's say I take the fiber ring: can I just buy 4 QSFP+ LR 10km transceivers, install them in my SX1036s, connect both dark fibers to both switches, and I'm good to go?

 

I really hope someone can explain how to tackle this problem without losing RDMA capability between the servers.

 

Thanks,

 

CloudBuilder

Re: How to setup IPoIB correctly?


I have kept digging, and here is what I have observed:

 

I just did a

 

[root@sc2u0n0 ~]# lsinitrd /boot/initramfs-3.10.0-327.28.2.el7.x86_64.img|less

[...]

drwxr-xr-x  5 root    root            0 Aug  7 15:31 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband

drwxr-xr-x  2 root    root            0 Aug  7 15:31 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/core

-rw-r--r--  1 root    root        21149 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/core/ib_addr.ko

-rw-r--r--  1 root    root        82669 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/core/ib_cm.ko

-rw-r--r--  1 root    root      159989 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/core/ib_core.ko

-rw-r--r--  1 root    root        77565 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/core/ib_mad.ko

-rw-r--r--  1 root    root        51765 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/core/ib_sa.ko

-rw-r--r--  1 root    root        33549 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/core/ib_ucm.ko

-rw-r--r--  1 root    root        35829 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/core/ib_umad.ko

-rw-r--r--  1 root    root        87141 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/core/ib_uverbs.ko

-rw-r--r--  1 root    root        68989 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/core/iw_cm.ko

-rw-r--r--  1 root    root        75765 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/core/rdma_cm.ko

-rw-r--r--  1 root    root        37637 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/core/rdma_ucm.ko

drwxr-xr-x  12 root    root            0 Aug  7 15:31 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw

drwxr-xr-x  2 root    root            0 Aug  7 15:31 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw/cxgb3

-rw-r--r--  1 root    root      233861 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw/cxgb3/iw_cxgb3.ko

drwxr-xr-x  2 root    root            0 Aug  7 15:31 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw/cxgb4

-rw-r--r--  1 root    root      286621 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw/cxgb4/iw_cxgb4.ko

drwxr-xr-x  2 root    root            0 Aug  7 15:31 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw/ipath

-rw-r--r--  1 root    root      439149 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw/ipath/ib_ipath.ko

drwxr-xr-x  2 root    root            0 Aug  7 15:31 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw/mlx4

-rw-r--r--  1 root    root      250733 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw/mlx4/mlx4_ib.ko

drwxr-xr-x  2 root    root            0 Aug  7 15:31 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw/mlx5

-rw-r--r--  1 root    root      192053 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw/mlx5/mlx5_ib.ko

drwxr-xr-x  2 root    root            0 Aug  7 15:31 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw/mthca

-rw-r--r--  1 root    root      221773 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw/mthca/ib_mthca.ko

drwxr-xr-x  2 root    root            0 Aug  7 15:31 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw/nes

-rw-r--r--  1 root    root      274037 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw/nes/iw_nes.ko

drwxr-xr-x  2 root    root            0 Aug  7 15:31 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw/ocrdma

-rw-r--r--  1 root    root      131165 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw/ocrdma/ocrdma.ko

drwxr-xr-x  2 root    root            0 Aug  7 15:31 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw/qib

-rw-r--r--  1 root    root      600661 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw/qib/ib_qib.ko

drwxr-xr-x  2 root    root            0 Aug  7 15:31 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw/usnic

-rw-r--r--  1 root    root      135229 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/hw/usnic/usnic_verbs.ko

drwxr-xr-x  7 root    root            0 Aug  7 15:31 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/ulp

drwxr-xr-x  2 root    root            0 Aug  7 15:31 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/ulp/ipoib

-rw-r--r--  1 root    root      161509 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/ulp/ipoib/ib_ipoib.ko

drwxr-xr-x  2 root    root            0 Aug  7 15:31 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/ulp/iser

-rw-r--r--  1 root    root        85917 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/ulp/iser/ib_iser.ko

drwxr-xr-x  2 root    root            0 Aug  7 15:31 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/ulp/isert

-rw-r--r--  1 root    root        91245 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/ulp/isert/ib_isert.ko

drwxr-xr-x  2 root    root            0 Aug  7 15:31 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/ulp/srp

-rw-r--r--  1 root    root        85757 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/ulp/srp/ib_srp.ko

drwxr-xr-x  2 root    root            0 Aug  7 15:31 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/ulp/srpt

-rw-r--r--  1 root    root        92813 Aug  3 04:52 usr/lib/modules/3.10.0-327.28.2.el7.x86_64/kernel/drivers/infiniband/ulp/srpt/ib_srpt.ko

[...]

 

Yes. I am new to IB, but I am quite sure the above is from the distro.  The mlnxofedinstall perl script doesn't run dracut -f, as shown by the following:

 

 

[root@sc2u0n0 mlnx_ofed]# grep dracut mlnxofedinstall

[root@sc2u0n0 mlnx_ofed]#

 

Thus, even though the Mellanox drivers/kernel modules are loaded with its openibd service running, I suspect that both the inbox and the Mellanox modules are present.  I don't have much experience with the outcome in such a situation, since I am new to IB, but my experience with other drivers tells me that it's not going to be good.  If this is indeed the case, then there is a bug in the mlnxofedinstall perl script (a missing critical step!) that should be fixed ASAP.

 

I have started wondering whether I should use the uninstall.sh that comes with MLNX_OFED to uninstall it, and then just use the inbox IB modules/drivers.
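One way to check which copy of the driver actually wins, and to pull the MLNX_OFED modules into the initramfs, might be the following (a sketch; the exact install paths vary by MLNX_OFED release, so treat them as assumptions):

# where does modprobe resolve the module from?  MLNX_OFED typically installs under
# /lib/modules/$(uname -r)/extra/..., while the inbox driver lives under .../kernel/drivers/infiniband/
modinfo -n mlx4_ib
modinfo mlx4_ib | grep -E '^(filename|version)'

# which version is actually loaded right now?
cat /sys/module/mlx4_ib/version 2>/dev/null

# regenerate the initramfs so it is rebuilt against the currently installed modules
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)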


Re: VMware ESXi 6.0 virtual ib_ipoib interfaces


Driver 2.4.0 includes the IPoIB driver and works!

Absolutely! Driver 2.4.0 is located in the Ethernet driver section, but why does this 2.4.0 driver include the IPoIB driver and work?

Re: Is it possible to connect 2 ethernet switches (sx1036) located up to 10km apart without losing RDMA?


You can do that using MetroX and an InfiniBand connection for 10 km; we don't support RoCE over such a distance.

This is mainly due to packet loss and the buffer size of the switch on each side (it also depends on the speed).

 

Ophir.
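For a rough sense of why the switch buffer matters at this distance, a back-of-the-envelope sketch (the ~5 µs/km fiber propagation delay and the 40GbE rate are assumptions, not vendor figures):

# one-way fiber latency is roughly 5 us/km, so 10 km gives ~50 us; pause/credit loops see ~100 us round trip
# a lossless link needs to buffer roughly one bandwidth-delay product per long-haul port:
awk 'BEGIN { bw = 40e9/8; rtt = 100e-6; printf "%.0f KiB per port\n", bw*rtt/1024 }'
# prints ~488 KiB at 40GbE, which is likely more headroom than a ToR switch reserves per port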

Re: How to setup IPoIB correctly?


I resolved the problem by moving all IPoIB IPv4 addresses to a different subnet.
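For anyone hitting the same thing, a minimal CentOS 7 style config with the IPoIB interface on its own subnet might look like this (a sketch; the device name, address, and prefix are placeholders, not values from this thread):

# /etc/sysconfig/network-scripts/ifcfg-ib0
DEVICE=ib0
TYPE=InfiniBand
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.200.10   # placeholder; deliberately on a different subnet than the Ethernet interfaces
PREFIX=24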


Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?


Looking at the current driver selection across different OSes/cards/media (IB vs. ETH), it looks like the only space consistently supported by Mellanox is Linux. Indeed, for Linux you have everything:

  • every card is supported (all the way from ConnectX-2 to ConnectX-5)
  • IB and ETH, with the possibility to switch from one to the other on cards that support it (VPI); you can even use IB on one port and ETH on the other
  • the iSER initiator is supported across the board, and an iSER target is supported with both LIO and SCST
  • the SRP initiator is supported across the board, and an SRP target is supported with SCST

 

So, if you use KVM as your hypervisor, there is no problem.

 

However, if you want to use Mellanox IB technology in conjunction with currently the most popular hypervisor (VMware ESXi), you're in trouble:

  • there is no official support for ESXi 5.5 and up for any card older than ConnectX-3
  • the only VPI cards supported in IB mode are ConnectX-3/Pro
  • Connect-IB cards are not supported at all
  • ConnectX-4 cards are supported only in ETH mode
  • dual-port VPI cards support only the same protocol (IB or ETH) on both ports, not a mix
  • the SRP initiator is no longer available
  • the iSER initiator is available only with the 1.9.x.x drivers, only over ETH, and only for ConnectX-3/Pro cards
  • the current IB driver 2.3.x.x is compatible only with ESXi 5.5 (not 6.0!), works only with ConnectX-3/Pro cards, and includes neither an SRP nor an iSER initiator

 

My question is very simple: what is Mellanox's long-term strategy with regard to hypervisor support? Are they suggesting that everyone considering Mellanox products should switch to KVM as their hypervisor of choice? Or that they should abandon RDMA and use Mellanox adapters/switches only as 56/100GbE network infrastructure?

 

I would REALLY appreciate some reaction from Mellanox staff, who have no doubt already seen this thread but, for some reason, have chosen not to react to it...

Re: I have 2 colocation sites located 6-7km apart. I can rent a 10gb wave between both sites, which I want to connect to my switches; (SX1036 at each site). Will i be able to use RDMA from one site to the other? If not, what is the max distance that I can

Re: Can multiple versions of mlnx-ofed exist in the same IB fabric?


Hi Greg,

 

It is recommended to perform the upgrade during a maintenance window and not in production, because there are a lot of differences between the 2.x and 3.x versions, and because part of the driver upgrade is also a firmware upgrade.

 

Best Regards,

Viki
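Before scheduling the window, it may help to record the currently installed driver and firmware versions so the jump from 2.x to 3.x (and the bundled firmware update) can be verified afterwards; a sketch:

ofed_info -s                  # installed MLNX_OFED version
ibstat | grep -i 'firmware'   # firmware version reported per HCA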


Is this the best our FDR adapters can do?


We have a small test setup, illustrated below. I have done some ib_write_bw tests and got "decent" numbers, but not as fast as I anticipated.  First, some background on the setup:

 

[Attached image: ipoib_for_the_network_layout_after.png]

 

The two 1U storage servers each have an EDR HCA (MCX455A-ECAT). The other four servers each have a ConnectX-3 VPI FDR 40/56Gb/s mezzanine HCA, OEMed by Mellanox for Dell.  The firmware version is 2.33.5040. This is not the latest (2.36.5000, according to hca_self_test.ofed), but I am new to IB and am still getting up to speed with Mellanox's firmware tools. The EDR HCA firmware was updated when MLNX_OFED was installed.

 

All servers:

CPU: 2 x Intel E5-2620v3 2.4Ghz 6 core/12 HT

RAM: 8 x 16GiB DDR4 1866Mhz DIMMs

OS: CentOS 7.2 Linux ... 3.10.0-327.28.2.el7.x86_64 #1 SMP Wed Aug 3 11:11:39 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

OFED: MLNX_OFED_LINUX-3.3-1.0.4.0 (OFED-3.3-1.0.4)

 

A typical ib_write_bw test:

 

Server:

[root@fs00 ~]# ib_write_bw -R

 

 

************************************

* Waiting for client to connect... *

************************************

---------------------------------------------------------------------------------------

                    RDMA_Write BW Test

Dual-port       : OFF Device         : mlx5_0

Number of qps   : 1 Transport type : IB

Connection type : RC Using SRQ      : OFF

CQ Moderation   : 100

Mtu             : 2048[B]

Link type       : IB

Max inline data : 0[B]

rdma_cm QPs : ON

Data ex. method : rdma_cm

---------------------------------------------------------------------------------------

Waiting for client rdma_cm QP to connect

Please run the same command with the IB/RoCE interface IP

---------------------------------------------------------------------------------------

local address: LID 0x03 QPN 0x01aa PSN 0x23156

remote address: LID 0x05 QPN 0x4024a PSN 0x28cd2e

---------------------------------------------------------------------------------------

#bytes     #iterations    BW peak[MB/sec]    BW average[MB/sec]   MsgRate[Mpps]

65536      5000             6082.15            6081.07   0.097297

---------------------------------------------------------------------------------------

 

Client:

[root@sc2u0n0 ~]# ib_write_bw -d mlx4_0 -R 192.168.111.150

---------------------------------------------------------------------------------------

                    RDMA_Write BW Test

Dual-port       : OFF Device         : mlx4_0

Number of qps   : 1 Transport type : IB

Connection type : RC Using SRQ      : OFF

TX depth        : 128

CQ Moderation   : 100

Mtu             : 2048[B]

Link type       : IB

Max inline data : 0[B]

rdma_cm QPs : ON

Data ex. method : rdma_cm

---------------------------------------------------------------------------------------

local address: LID 0x05 QPN 0x4024a PSN 0x28cd2e

remote address: LID 0x03 QPN 0x01aa PSN 0x23156

---------------------------------------------------------------------------------------

#bytes     #iterations    BW peak[MB/sec]    BW average[MB/sec]   MsgRate[Mpps]

65536      5000             6082.15            6081.07   0.097297

---------------------------------------------------------------------------------------

 

Now, 6082 MB/s ≈ 48.65 Gbps.  Even taking the 64/66 encoding overhead into account, something over 50 Gbps should be achievable. Or is this the best this setup can do?  Is there anything I can do to push the speed up further?

 

I look forward to hearing experiences and observations from the more experienced camp!  Thanks!
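A few perftest knobs that often influence the reported number (a sketch; the flags below are assumed to match the perftest bundled with MLNX_OFED 3.3, and how much each helps on this particular setup is an open question):

# report in Gb/s and sweep all message sizes
ib_write_bw -R -a --report_gbits <server-ip>

# larger single message size and a deeper TX queue
ib_write_bw -R -s 1048576 -t 256 --report_gbits <server-ip>

# several QPs, which can help when a single QP cannot keep the link busy
ib_write_bw -R -q 4 --report_gbits <server-ip>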

Re: Is this the best our FDR adapters can do?


One thing to keep in mind is that you'll hit the bandwidth of the PCIe bus.

I've not used the ib_write test myself, but I'm fairly sure it's not actually handling the data, just accepting it and tossing it away, so it will give a theoretical maximum.

In real-life situations that bus is going to be handling all data in and out of the CPU, and on my oldest motherboards that maxes out at 25Gb/s, which is what I hit with fio tests on QDR links.  I've heard that with PCIe gen 3 you'll get up to 35Gb/s.

Generally, whenever newer networking tech rolls out, there is nothing a single computer can do to saturate the link unless it's pushing junk data; the only way to really max it out is switch-to-switch (hardware-to-hardware) traffic.

Of course, using IPoIB or anything other than native IB traffic is going to cost you performance.  In my case, NFS over IPoIB (with or without RDMA) quickly slams into the bandwidth of my SSDs.  The only exception is the Oracle DB, where the low latency is what I'm after, as the database is small enough to fit in RAM.
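To see which PCIe width and speed the HCA actually negotiated (a sketch; the bus address will differ per machine):

lspci | grep -i mellanox                    # find the HCA's bus address, e.g. 02:00.0
sudo lspci -vv -s 02:00.0 | grep -E 'LnkCap|LnkSta'
# LnkSta shows the negotiated link; 8GT/s x8 tops out around 63 Gb/s of raw PCIe bandwidth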

Re: How to change speed from FDR to QDR using ibportstate command in centos 6.2


Hi,

 

Thanks for the help; unfortunately, it didn't work.

 

Before running the commands:

 

CA PortInfo:

# Port info: Lid 144 port 1

LinkState:.......................Active

PhysLinkState:...................LinkUp

Lid:.............................144

SMLid:...........................442

LMC:.............................0

LinkWidthSupported:..............1X or 4X

LinkWidthEnabled:................1X or 4X

LinkWidthActive:.................4X

LinkSpeedSupported:..............2.5 Gbps or 5.0 Gbps or 10.0 Gbps

LinkSpeedEnabled:................2.5 Gbps or 5.0 Gbps or 10.0 Gbps

LinkSpeedActive:.................10.0 Gbps

LinkSpeedExtSupported:...........14.0625 Gbps

LinkSpeedExtEnabled:.............14.0625 Gbps

LinkSpeedExtActive:..............14.0625 Gbps

# Extended Port info: Lid 144 port 1

StateChangeEnable:...............0x00

LinkSpeedSupported:..............0x01

LinkSpeedEnabled:................0x01

LinkSpeedActive:.................0x00

 

 

After running the above-mentioned commands, i.e.

ibportstate 144 1 fdr10 0 espeed 30

ibportstate 144 1 reset

 

we get the following output:

 

CA PortInfo:

# Port info: Lid 144 port 1

LinkState:.......................Active

PhysLinkState:...................LinkUp

Lid:.............................144

SMLid:...........................442

LMC:.............................0

LinkWidthSupported:..............1X or 4X

LinkWidthEnabled:................1X or 4X

LinkWidthActive:.................4X

LinkSpeedSupported:..............2.5 Gbps or 5.0 Gbps or 10.0 Gbps

LinkSpeedEnabled:................2.5 Gbps or 5.0 Gbps or 10.0 Gbps

LinkSpeedActive:.................10.0 Gbps

LinkSpeedExtSupported:...........14.0625 Gbps

LinkSpeedExtEnabled:.............14.0625 Gbps

LinkSpeedExtActive:..............14.0625 Gbps

# Extended Port info: Lid 144 port 1

StateChangeEnable:...............0x00

LinkSpeedSupported:..............0x01

LinkSpeedEnabled:................0x00

LinkSpeedActive:.................0x00

 

Can you please help?

 

CA 'mlx4_0'

        CA type: MT4099

        Number of ports: 1

        Firmware version: 2.30.8000

        Hardware version: 0

        Node GUID: 0x0002c903001f69e0

        System image GUID: 0x0002c903001f69e3

        Port 1:

                State: Active

                Physical state: LinkUp

                Rate: 56

                Base lid: 144

                LMC: 0

                SM lid: 442

                Capability mask: 0x02514868

                Port GUID: 0x0002c903001f69e1

                Link layer: InfiniBand

Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?


KVM?

No, KVM also has many limitations. For example EoIB, etc.


InfiniBand communication relies on the SM

- Subnet Manager


The SM consists of several components and APIs.

But the SM architecture was not designed for the hypervisor world.


Historically, many problems have existed in vSphere environments.


1st. ESXi

In the vSphere 4.x era, VMware gave us two choices:


ESX and ESXi

ESX consisted of the hypervisor plus an OEMed Red Hat console.

ESXi consists of the hypervisor only.


In my experience, some IB tools did not work on an ESXi host, while the same tools worked nicely on an ESX host.


ESXi is not a general-purpose kernel.

I think that causes major IB driver porting problems.


2nd. InfiniBand design itself!!!

The hypervisor controls all communication between guest VMs and the host network, while RDMA has a kernel-bypass feature called zero copy, or RDMA read/write.


This feature is controlled by the SM; adding a hypervisor to such a network requires many complex modifications to the SM and the IB APIs.


There is no IBTA standard for this yet.

It will be standardized in the near future.


3rd. RDMA storage protocols.

The InfiniBand specification covers all of RDMA and the ULP protocols.


Linux OFED improves very quickly.

No one knows which OFED version will be ported to the latest ESXi version.


Many complexities exist, and many issues must be resolved in an ESXi environment.


iSER is also a good candidate for an ESXi RDMA protocol, but some critical problems remain.


I think we should check the latest Linux OFED release notes, which list many bugs and limitations.


Linux is a good platform, but it also suffers from IB's own limitations.


Conclusion.

I think IB is the fastest and most efficient high-speed protocol on the planet, but it is not ready for enterprise network environments yet.


Mellanox says in its product brochures that it supports the major OS environments.


But in many cases that has not been true:

beta-level drivers, manuals, bugs, limitations, etc.


Absolutely, in time all of these problems will be overcome with new standards and products.


But not now...






Re: Is this the best our FDR adapters can do?


 

Thanks for sharing your experience.  I did the following:

 

[root@sc2u0n0 ~]# dmidecode |grep PCI

  Designation: PCIe Slot 1

  Type: x8 PCI Express 3 x16

  Designation: PCIe Slot 3

  Type: x8 PCI Express 3

 

lspci -vv

[...]

02:00.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3]

[...]

                LnkCap: Port #8, Speed 8GT/s, Width x8, ASPM L0s, Exit Latency L0s unlimited, L1 unlimited

                        ClockPM- Surprise- LLActRep- BwNot-

 

So, the theoretical speed should be 8 Gbps/lane × 8 lanes × 128b/130b ≈ 63 Gbps.  In fact, we just did an fio sweep using fio-2.12.  The read results are quite reasonable; we are now investigating why the write is so low.

 

A. Read test results

 

  • Chunk size = 2 MiB
  • Num. Jobs = 32
  • IO Depth = 128
  • File size = 500 GiB
  • Test time = 360 seconds
Mode               Speed (Gbps)   IOPS
psync, direct      47.77          2986
psync, buffered    24.49          1530
libaio, direct     49.17          3073

 

 

B. Write test results

 

  • Chunk size = 2 MiB
  • Num. Jobs = 32
  • IO Depth = 128
  • File size = 500 GiB
  • Test time = 360 seconds
Mode               Speed (Gbps)   IOPS
psync, direct      24.14          1509
psync, buffered     9.32           583
libaio, direct     22.51          1407
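For reference, a job roughly matching the sweep parameters above might look like this (a sketch; the target path and job name are placeholders, and the psync runs would swap the ioengine and drop the iodepth setting):

fio --name=seqwrite --filename=/mnt/test/fio.dat \
    --rw=write --bs=2M --size=500G \
    --numjobs=32 --iodepth=128 \
    --ioengine=libaio --direct=1 \
    --runtime=360 --time_based --group_reporting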

 


 

 
