Mellanox Interconnect Community: Message List

Re: Debian 9 OFED driver


We will support Debian 9 (kernel 4.9) in the MLNX_OFED 4.2 release.

 

Thanks


Re: mlx5 with inbox driver 100G is not detecting


What OS and kernel are you using?

Re: Debian 9 OFED driver


Is that release expected next month?

Re: Debian 9 OFED driver


Yes, it is scheduled for next month.

 

Thanks

40G SR4 NIC Support


Hi, I am using a ConnectX-4 with a fibre 40GBase-SR4 QSFP. The QSFP is directly connected via a fibre breakout to 4 independent 10G streams.

It appears the standard driver included with Debian connects at 10G and only uses one of the lanes.

What are the other options to receive data from all 4 lanes? Is this only possible using VMA?
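
For reference, this is how the negotiated speed can be checked with the inbox tooling (the interface name eth2 is an assumption; substitute your own):

# Show the negotiated speed and the link modes the driver reports
ethtool eth2

# With only one lane in use you would expect output such as:
#   Speed: 10000Mb/s
# while the supported link modes should still include 40000baseSR4/Full.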

 

Thanks

Re: 40G SR4 NIC Support


Do you mean you connect the QSFP to a ConnectX-4 card?

Re: 40G SR4 NIC Support


No, the 40GBase-SR4 QSFP has an MPO-12 fibre connector which is split into 4 independent LC fibre streams coming from different devices that use 10GBase-R. I would like to receive data from all streams (transmit is not important).

Re: Mellanox grid director 4036e won't boot.


No, unfortunately that does not work.

Running flash_self_safe forces the switch to boot from the secondary image, and we get the exact same error output.

 

From U-Boot I attempted to download an image via TFTP; the file transfer begins, but the switch outputs an error and boots into the same failure.
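
For reference, the U-Boot TFTP attempt was along these lines (the IP addresses, load address, and image filename here are illustrative assumptions, not the exact values used):

# Point U-Boot at the TFTP server and pull the image into RAM
=> setenv ipaddr 192.168.1.10
=> setenv serverip 192.168.1.1
=> tftpboot 0x2000000 4036e-image.img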

 

I opened the unit and there are 4 red LEDs, so I suspect a hardware failure.

The LEDs are as follows:

Top row of LEDs (next to the RAM module, below the chassis fans):

D104 + D105 on RED

LEDs in the bottom right of the chassis:

CPLD 2 R643 - D87 RED

CPLD 4 R645 - D89 RED

 

The LEDs turn red shortly after power is applied.

 

Do you know what may have failed? Will the failed components be replaceable? This is a legacy unit, and another 7 switches may have a similar problem.

 

Kind regards.

Rav.


Re: Mellanox grid director 4036e won't boot.


In this case, I think you need to RMA the switch if it is under warranty.

NFS over RDMA on OEL 7.4


Hello, my configuration is simple: OEL 7.4, two Mellanox ConnectX-3 VPI cards, an SX1036 switch, and two very fast NVMe drives.

My problem is that I configured NFS over RDMA using the InfiniBand Support packages from OEL, because Mellanox OFED has not supported NFS over RDMA since version 3.4+.

Everything is working: I can connect to the server over RDMA and I can read/write on the NFS server, etc., but I have a problem with performance.

I ran a test on my striped LV and fio shows me 900k IOPS and around 3.8 GB/s using 4k blocks, but when I run the same test on the NFS client I can't get more than 190k IOPS. The problem is not bandwidth, because when I increase the block size I can get even over 4 GB/s; the problem seems to be the number of IOPS delivered from server to client.

Does anybody have an idea? I already changed rsize and wsize to 1M, but without any performance benefit.

My next step will be to configure link aggregation (LACP) to see if it changes anything; right now I am using only one port.
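
For reference, a minimal sketch of the client-side mount and the 4k random-read test (the server address 192.168.1.1, export path /export, and mount point /mnt/nfs are assumptions; adjust to your environment):

# Load the NFS/RDMA client transport and mount the export over RDMA
# (20049 is the conventional NFS-over-RDMA port)
modprobe xprtrdma
mount -o rdma,port=20049 192.168.1.1:/export /mnt/nfs

# 4k random-read IOPS test with fio against the mounted export
fio --name=randread --directory=/mnt/nfs --rw=randread --bs=4k \
    --ioengine=libaio --direct=1 --iodepth=32 --numjobs=8 \
    --size=4G --group_reporting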

 

Adam

Re: Mellanox grid director 4036e won't boot.


Legacy equipment out of warranty.

Are we able to purchase a service contract, or is this equipment unsupported?

Many thanks.

Re: Mellanox grid director 4036e won't boot.


Thank you for the assistance.

This call can now be closed.

Re: Mellanox grid director 4036e won't boot.


Hello Rav,

 

Mellanox does not sell service contracts for 4036E switches anymore; the product is at the EOL stage.

For more information, please refer to our EOL info page at: http://www.mellanox.com/page/eol

 

Sorry we couldn't assist you.

 

 

Thanks


Re: Running ASAP2


Hi Lenny

 

I can’t access this document.

Can you please send me the PDF version?

There seems to be something wrong with my Mellanox account.

 

Regards

Francois Kleynhans

Re: Running ASAP2


I also cannot access the documents listed...

Re: How to configure MCX354A-FCBT Mellanox InfiniBand speed at 56Gbps ?


From the output you have presented, it looks like your SX6036 switch is in good shape and supports 56Gb (FDR):

- the cables and NICs are also fine and capable of FDR

- the cables present: InfiniBand speeds: SDR, DDR, QDR, FDR

- the switch presents:

Supported LLR speeds: FDR10, FDR - which indicates you can set it to 56Gb

Supported speeds: sdr, ddr, qdr, fdr10 - here you see that FDR is missing

So, in my view, all you have to do is run the following commands on the switch CLI, which will add FDR to the "Supported speeds" list:

(config) # interface ib <interface#> speed sdr ddr qdr fdr10 fdr force

(config) # configure write (to save the changes)

You should now see: Supported speeds: sdr, ddr, qdr, fdr10, fdr

This should enable you to use 56Gb on the switch and on the NICs as well.
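
Once the change is saved, you can verify from a connected host that the link renegotiated to FDR (the HCA name mlx4_0 and port number 1 are assumptions; substitute your own):

# Query the local HCA port; on an active FDR link the rate reads 56
ibstat mlx4_0 1

# Expected output should include lines such as:
#   State: Active
#   Rate: 56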
