Channel: Mellanox Interconnect Community: Message List

Asking about InfiniBand cards and related items for proper setup


Hi all,

Our company has 3 HP DL380p G8 and 3 DL380p G7 servers. We plan to upgrade their I/O connectivity to reach a 10Gbps network or even higher. The main reason is that we are building a Hyper-Converged Infrastructure, specifically SDS (VMware vSAN or Microsoft S2D). We already have a pair of 10Gb switches (JG219A).

We did a deep search and found some useful information:

  • We see there are two options: a 10Gb Ethernet NIC, or a 40Gb InfiniBand (IB) card. We prefer InfiniBand as it has more hardware offload features that we could benefit from, such as RDMA (RoCE).
  • A FlexLOM (FLR) card is not an option, as the only slot is already occupied.
  • According to the HP DL380p G8 and DL380 G7 QuickSpecs, the only suitable IB card is the HP InfiniBand FDR/Ethernet 10Gb/40Gb 2-port 544QSFP Adapter (649281-B21). I dug deeper and found that this IB card is OEMed from Mellanox, and according to the HP-Mellanox Reference Guide (August 2012) the corresponding Mellanox part number is MCX354A-FCBT.
  • I went one step further and found the ConnectX®-3 VPI Single and Dual QSFP+ Port Adapter Card User Manual, Rev 2.5. In Section 7.6, MCX354A-FCB[T/S] Specifications, I found the following:
    • Protocol support (Ethernet): 10GBASE-CX4, 10GBASE-R, 1000BASE-R, 40GBASE-R4
    • Data rate: up to 56Gb/s (FDR) for InfiniBand; 1/10/40/56Gb/s for Ethernet

I know that we also need a QSFP-to-SFP+ adapter, P/N 655874-B21 (MAM1Q00A-QSA in the Mellanox world), in order to use 10Gb Ethernet.

However, I am not so familiar with InfiniBand and am not confident about what else we need. Do we need special cables to work with our existing 10Gb switches, or do we need to buy separate InfiniBand switches?
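For reference, since the 544QSFP (MCX354A-FCBT) is a VPI card, each port can be set to either InfiniBand or Ethernet mode. A rough sketch of how that is usually done with the Mellanox Firmware Tools is below; the device path /dev/mst/mt4099_pci_cr0 is only an example and will differ per host.

# Start the Mellanox software tools service and list the detected devices
mst start
mst status

# Query the current port configuration (LINK_TYPE_Px: 1 = IB, 2 = ETH, 3 = VPI/auto-sense)
mlxconfig -d /dev/mst/mt4099_pci_cr0 query

# Example: force both ports to Ethernet mode, then reboot for the change to take effect
mlxconfig -d /dev/mst/mt4099_pci_cr0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2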

I hope I explained our case clearly. Long story short, we need to verify whether it makes sense to invest in InfiniBand (6-10 dual-port cards) and related items.

Actually, I tried to reach both the HP Sales and Presales teams in my country, but haven't received any feedback from them for days. I have no option left but to post a question here and hope someone from the community can help me out.

Thank you very much in advance!


Re: ceph + rdma error: ibv_open_device failed


Luminous does not support the latest Ceph RDMA code. What version of Ceph are you using? Also, if you can, please provide the ceph.conf configuration. Lastly, are you able to run other tools on this node, such as ib_write_bw?
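For context, the RDMA-related pieces of ceph.conf typically look like the sketch below (the device name mlx5_0 is a placeholder; check yours with ibstat or ibv_devices), and ib_write_bw is a quick way to confirm basic RDMA connectivity between two nodes:

[global]
# use the async messenger over RDMA instead of plain TCP
ms_type = async+rdma
# RDMA device to bind to (placeholder; list devices with ibv_devices)
ms_async_rdma_device_name = mlx5_0

# basic RDMA sanity check between two nodes
ib_write_bw -d mlx5_0                 # on the server
ib_write_bw -d mlx5_0 <server_ip>     # on the client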

Re: solution for design small HPC


These are the basic steps:

1. Hardware needs to be the same; this is a basic HPC requirement, so using two different servers is not recommended. You will probably also need some kind of job scheduler (SLURM, Torque, LSF, etc.). If you are not using identical hardware, you will most likely run into performance issues.

2. Use the same network adapter (onboard Ethernet or a high-speed adapter, aka HCA). See item 1.

3. Install the same OS and drivers.

4. If using IB, be sure to run OpenSM (see the sketch after this list).

5. If you are using Mellanox hardware, use the HPC-X toolkit - http://www.mellanox.com/page/products_dyn?product_family=189&mtag=hpc-x

6. Run jobs.
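A minimal sketch of items 4 and 6 on an IB fabric, assuming a SLURM cluster with the standard InfiniBand diagnostics installed (the job script name is a placeholder):

# 4. Start the subnet manager on one node (or rely on a managed switch that runs an SM)
systemctl start opensm

# Verify the HCA ports came up (State: Active, Physical state: LinkUp)
ibstat | grep -E "State|Rate"

# 6. Submit a job through the scheduler (SLURM shown here)
sbatch --nodes=2 --ntasks-per-node=1 job.sh
squeue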

 

Your simple question is actually a complicated subject that includes almost everything: fabric design, networking, and performance tuning covering BIOS, OS, and drivers (for reference you may check the Mellanox Tuning Guide). It should definitely be split into separate topics.

 

Take, for example, this article - http://hpcugent.github.io/vsc_user_docs/pdf/intro-HPC-windows-gent.pdf (135 pages)

 

I would suggest starting by building and running jobs over the onboard Ethernet adapter (for HPC you can use any communication channel that exists on the host). When this phase is over, add the Mellanox adapter and you'll get much better performance.

Re: Asking about InfiniBand cards and related items for proper setup


Dear Tu Nguyen Anh,

 

Thank you for posting your question on the Mellanox Community.

 

The quickest way to get all the information about the products you need for your setup is to fill in the form on the following link: https://store.mellanox.com/customer-service/contact-us/

Then a Sales Representative will contact you as soon as possible regarding your inquiry.

 

Thanks and regards,

~Mellanox Technical Support

NVMeOF SLES 12 SP3: Initiator with 36 cores unable to discover/connect to target


Hi,

I am trying NVMeOF with RoCE on SLES 12 SP3 using the document

HowTo Configure NVMe over Fabrics

 

I am noticing that whenever the initiator has more than 32 cores, it is unable to discover/connect to the target. The same procedure works fine if the number of cores is <= 32.

The dmesg shows:

 

kernel: [  373.418811] nvme_fabrics: unknown parameter or missing value 'hostid=a61ecf3f-2925-49a7-9304-cea147f61ae' in ctrl creation request

 

for a successful connection:

 

[51354.292021] nvme nvme0: creating 32 I/O queues.

[51354.879684] nvme nvme0: new ctrl: NQN "mcx", addr 192.168.0.1:4420

 

Is there any parameter that can limit the number of cores (and thus I/O queues) the mlx5_core/nvme_rdma/nvmet_rdma drivers use, so that I/O queue creation is restricted and the discovery/connection succeeds? I won't be able to disable cores/hyperthreading in the BIOS/UEFI, since there are other applications running on the host.
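For what it's worth, nvme-cli can cap the number of I/O queues at connect time, which may work around the core-count dependence; a sketch using the address and NQN from the log above:

# Discover the subsystem over RDMA
nvme discover -t rdma -a 192.168.0.1 -s 4420

# Connect, but limit the initiator to 32 I/O queues
nvme connect -t rdma -a 192.168.0.1 -s 4420 -n mcx --nr-io-queues=32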

 

Appreciate any pointers/help!

Mellanox Grid Director 4036E won't boot.


I have been asked to look at the aforementioned 4036E. This is my first time working with Mellanox switches.

 

No warning LEDs; all green. Power supply and fans are OK.

 

Boots then crashes at different places in the boot sequence.

I am seeing a 'Warning - Bad CRC' before the switch decides to boot from the secondary flash.

The boot sequence creates 9 partitions.

 

When we get to the NAND device line it scans for bad blocks.

Then it creates 1 MTD partition.

 

Later it reports a kernel access of a bad area.

Then we just get a call trace and an instruction dump, and the boot process halts.

The line connection no longer responds.

 

I suspect a bad/faulty NAND flash chip.

 

Does anyone have any suggestions? Is this replaceable? Should I try reflashing the firmware?

 

I am not currently at that site; I will visit on Sunday, copy the full configuration, and then post it back here.

I would appreciate any suggestions or ideas.

 

Many thanks.

Switchman.

Re: Mellanox Grid Director 4036E won't boot.


I had saved a portion of the end of the output; an attempt to boot a second time follows at the bottom:

============================================================================

Intel/Sharp Extended Query Table at 0x010A
Intel/Sharp Extended Query Table at 0x010A
Intel/Sharp Extended Query Table at 0x010A
Intel/Sharp Extended Query Table at 0x010A
Using buffer write method
Using auto-unlock on power-up/resume
cfi_cmdset_0001: Erase suspend on write enabled
cmdlinepart partition parsing not available
RedBoot partition parsing not available
Creating 9 MTD partitions on "4cc000000.nor_flash":
0x00000000-0x001e0000 : "kernel"
0x001e0000-0x00200000 : "dtb"
0x00200000-0x01dc0000 : "ramdisk"
0x01dc0000-0x01fa0000 : "safe-kernel"
0x01fa0000-0x01fc0000 : "safe-dtb"
0x01fc0000-0x03b80000 : "safe-ramdisk"
0x03b80000-0x03f60000 : "config"
0x03f60000-0x03fa0000 : "u-boot env"
0x03fa0000-0x04000000 : "u-boot"
NAND device: Manufacturer ID: 0x20, Chip ID: 0xda (ST Micro NAND 256MiB 3,3V 8-bit)
Scanning device for bad blocks
Creating 1 MTD partitions on "4e0000000.ndfc.nand":
0x00000000-0x10000000 : "log"
i2c /dev entries driver
IBM IIC driver v2.1
ibm-iic(/plb/opb/i2c@ef600700): using standard (100 kHz) mode
ibm-iic(/plb/opb/i2c@ef600800): using standard (100 kHz) mode
i2c-2: Virtual I2C bus (Physical bus i2c-0, multiplexer 0x70 port 0)
i2c-3: Virtual I2C bus (Physical bus i2c-0, multiplexer 0x70 port 1)
i2c-4: Virtual I2C bus (Physical bus i2c-0, multiplexer 0x70 port 2)
i2c-5: Virtual I2C bus (Physical bus i2c-0, multiplexer 0x70 port 3)
rtc-ds1307 6-0068: rtc core: registered ds1338 as rtc0
rtc-ds1307 6-0068: 56 bytes nvram
i2c-6: Virtual I2C bus (Physical bus i2c-0, multiplexer 0x70 port 4)
i2c-7: Virtual I2C bus (Physical bus i2c-0, multiplexer 0x70 port 5)
i2c-8: Virtual I2C bus (Physical bus i2c-0, multiplexer 0x70 port 6)
i2c-9: Virtual I2C bus (Physical bus i2c-0, multiplexer 0x70 port 7)
pca954x 0-0070: registered 8 virtual busses for I2C switch pca9548
TCP cubic registered
NET: Registered protocol family 10
lo: Disabled Privacy Extensions
IPv6 over IPv4 tunneling driver
sit0: Disabled Privacy Extensions
ip6tnl0: Disabled Privacy Extensions
NET: Registered protocol family 17
RPC: Registered udp transport module.
RPC: Registered tcp transport module.
rtc-ds1307 6-0068: setting system clock to 2000-01-18 01:06:09 UTC (948157569)
RAMDISK: Compressed image found at block 0
VFS: Mounted root (ext2 filesystem) readonly.
Freeing unused kernel memory: 172k init
init started: BusyBox v1.12.2 (2011-01-03 14:13:22 IST)
starting pid 15, tty '': '/etc/rc.d/rcS'
mount: no /proc/mounts
Mounting /proc and /sys
Mounting filesystems
Loading module Voltaire
Empty flash at 0x0cdcf08c ends at 0x0cdcf800
Starting crond:
Starting telnetd:
ibsw-init.sh start...
Tue Jan 18 01:06:42 UTC 2000
INSTALL FLAG  0x0
starting syslogd & klogd ...
Starting ISR:                   Unable to handle kernel paging request for data at address 0x0000001e
Faulting instruction address: 0xc00ec934
Oops: Kernel access of bad area, sig: 11 [#1]
Voltaire
Modules linked in: ib_is4(+) ib_umad ib_sa ib_mad ib_core memtrack Voltaire
NIP: c00ec934 LR: c00ec930 CTR: 00000000
REGS: d7bdfd10 TRAP: 0300   Not tainted  (2.6.26)
MSR: 00029000 <EE,ME>  CR: 24000042  XER: 20000000
DEAR: 0000001e, ESR: 00000000
TASK = d7b9c800[49] 'jffs2_gcd_mtd9' THREAD: d7bde000
GPR00: 00000001 d7bdfdc0 d7b9c800 00000000 000000d0 00000003 df823040 0000007f
GPR08: 22396d59 d9743920 c022de58 00000000 24000024 102004bc c026b9a0 c026b910
GPR16: c026b954 c026b630 c026b694 c022b790 d8938150 d8301000 c022b758 d7bdfe30
GPR24: 00000000 0000037c d8301400 00000abf d9743d80 00000000 d8938158 df823000
NIP [c00ec934] jffs2_get_inode_nodes+0xb6c/0x1020
LR [c00ec930] jffs2_get_inode_nodes+0xb68/0x1020
Call Trace:
[d7bdfdc0] [c00ec758] jffs2_get_inode_nodes+0x990/0x1020 (unreliable)
[d7bdfe20] [c00ece28] jffs2_do_read_inode_internal+0x40/0x9e8
[d7bdfe90] [c00ed838] jffs2_do_crccheck_inode+0x68/0xa4
[d7bdff00] [c00f1ed8] jffs2_garbage_collect_pass+0x160/0x664
[d7bdff50] [c00f36c8] jffs2_garbage_collect_thread+0xf0/0x118
[d7bdfff0] [c000bdb8] kernel_thread+0x44/0x60
Instruction dump:
7f805840 409c000c 801d0004 48000008 801d0008 2f800000 409effdc 2f9d0000
40be0010 48000180 4802ba05 7c7d1b78 <a01d001e> 7fa3eb78 2f800000 409effec
---[ end trace b57e19dd3d61c6af ]---
ib_is4 0000:81:00.0: ep0_dev_name 0000:81:00.0
Unable to handle kernel paging request for data at address 0x00000034
Faulting instruction address: 0xc002f3b0
Oops: Kernel access of bad area, sig: 11 [#2]
Voltaire
Modules linked in: is4_cmd_driver ib_is4 ib_umad ib_sa ib_mad ib_core memtrack Voltaire
NIP: c002f3b0 LR: c002fb00 CTR: c00f3a10
REGS: df8a3de0 TRAP: 0300   Tainted: G      D    (2.6.26)
MSR: 00021000 <ME>  CR: 24544e88  XER: 20000000
DEAR: 00000034, ESR: 00000000
TASK = df88e800[8] 'pdflush' THREAD: df8a2000
GPR00: c002fb00 df8a3e90 df88e800 00000001 d7b9c800 d7b9c800 00000000 00000001
GPR08: 00000001 00000000 24544e22 00000002 00004b1a 67cfb19f 1ffef400 00000000
GPR16: 1ffe42d8 00000000 1ffebfa4 00000000 00000000 00000004 c0038778 c0261ac4
GPR24: 00000001 c02f0000 00000000 d7b9c800 00000001 d7b9c800 00000000 d8301400
NIP [c002f3b0] prepare_signal+0x1c/0x1a4
LR [c002fb00] send_signal+0x28/0x214
Call Trace:
[df8a3e90] [c0021bb8] check_preempt_wakeup+0xd8/0x110 (unreliable)
[df8a3eb0] [c002fb00] send_signal+0x28/0x214
[df8a3ed0] [c002fe40] send_sig_info+0x28/0x48
[df8a3ef0] [c00f35c4] jffs2_garbage_collect_trigger+0x3c/0x50
[df8a3f00] [c00f3a40] jffs2_write_super+0x30/0x5c
[df8a3f10] [c007340c] sync_supers+0x80/0xd0
[df8a3f30] [c0054dc8] wb_kupdate+0x48/0x150
[df8a3f90] [c0055434] pdflush+0x104/0x1a4
[df8a3fe0] [c00387c4] kthread+0x4c/0x88
[df8a3ff0] [c000bdb8] kernel_thread+0x44/0x60
Instruction dump:
80010034 bb810020 7c0803a6 38210030 4e800020 9421ffe0 7c0802a6 bf810010
90010024 7c9d2378 83c4034c 7c7c1b78 <801e0034> 70090008 40820100 2f83001f
---[ end trace b57e19dd3d61c6af ]---
------------[ cut here ]------------
Badness at kernel/exit.c:965
NIP: c00273f0 LR: c000a03c CTR: c013b2b4
REGS: df8a3cb0 TRAP: 0700   Tainted: G      D    (2.6.26)
MSR: 00021000 <ME>  CR: 24544e22  XER: 20000000
TASK = df88e800[8] 'pdflush' THREAD: df8a2000
GPR00: 00000001 df8a3d60 df88e800 0000000b 00002d73 ffffffff c013e13c c02eb620
GPR08: 00000001 00000001 00002d73 00000000 24544e84 67cfb19f 1ffef400 00000000
GPR16: 1ffe42d8 00000000 1ffebfa4 00000000 00000000 00000004 c0038778 c0261ac4
GPR24: 00000001 c02f0000 00000000 d7b9c800 df8a3de0 0000000b df88e800 0000000b
NIP [c00273f0] do_exit+0x24/0x5ac
LR [c000a03c] kernel_bad_stack+0x0/0x4c
Call Trace:
[df8a3d60] [00002d41] 0x2d41 (unreliable)
[df8a3da0] [c000a03c] kernel_bad_stack+0x0/0x4c
[df8a3dc0] [c000ef90] bad_page_fault+0xb8/0xcc
[df8a3dd0] [c000c4c8] handle_page_fault+0x7c/0x80
[df8a3e90] [c0021bb8] check_preempt_wakeup+0xd8/0x110
[df8a3eb0] [c002fb00] send_signal+0x28/0x214
[df8a3ed0] [c002fe40] send_sig_info+0x28/0x48
[df8a3ef0] [c00f35c4] jffs2_garbage_collect_trigger+0x3c/0x50
[df8a3f00] [c00f3a40] jffs2_write_super+0x30/0x5c
[df8a3f10] [c007340c] sync_supers+0x80/0xd0
[df8a3f30] [c0054dc8] wb_kupdate+0x48/0x150
[df8a3f90] [c0055434] pdflush+0x104/0x1a4
[df8a3fe0] [c00387c4] kthread+0x4c/0x88
[df8a3ff0] [c000bdb8] kernel_thread+0x44/0x60
Instruction dump:
bb61000c 38210020 4e800020 9421ffc0 7c0802a6 bf010020 90010044 7c7f1b78
7c5e1378 800203e0 3160ffff 7d2b0110 <0f090000> 54290024 8009000c 5409012f

U-Boot 1.3.4.32 (Feb  6 2011 - 10:18:30)

CPU:   AMCC PowerPC 460EX Rev. B at 666.666 MHz (PLB=166, OPB=83, EBC=83 MHz)
       Security/Kasumi support
       Bootstrap Option E - Boot ROM Location EBC (16 bits)
       Internal PCI arbiter disabled
       32 kB I-Cache 32 kB D-Cache
Board: 4036QDR - Voltaire 4036 QDR Switch Board
I2C:   ready
DRAM:  512 MB (ECC enabled, 333 MHz, CL3)
FLASH: 64 MB
NAND:  256 MiB
*** Warning - bad CRC, using default environment

MAC Address: 00:08:F1:20:52:E8
PCIE1: successfully set as root-complex
PCIE:   Bus Dev VenId DevId Class Int
        01  00  15b3  bd34  0c06  00
Net:   ppc_4xx_eth0

Type run flash_nfs to mount root filesystem over NFS

Hit any key to stop autoboot:  0
=> run flash_nfs
## Booting kernel from Legacy Image at fc000000 ...
   Image Name:   Linux-2.6.26
   Image Type:   PowerPC Linux Kernel Image (gzip compressed)
   Data Size:    1406000 Bytes =  1.3 MB
   Load Address: 00000000
   Entry Point:  00000000
   Verifying Checksum ... OK
   Uncompressing Kernel Image ... OK

Re: IB Switch IS5035 MTU Setting?


Thanks.

 

Can you tell me if this will take effect immediately, or whether it requires a restart of the servers?

 

Will it cause an interruption of traffic?

 

Our IB fabric carries traffic from the cluster nodes to SAN storage. I just want to know what the impact will be.

 

Thanks again!

 

Todd


no iser adapter listed


I installed the iSER driver successfully, but I cannot add an iSER adapter. I did the following:

[root@esxi-1:~] esxcfg-module -g iser
iser enabled = 1 options = ''

[root@esxi-1:~] vmkload_mod iser
vmkload_mod: Can not load module iser: module is already loaded

[root@esxi-1:~] esxcli rdma iser add
Failed to add device: com.vmware.iser

2017-09-21T15:54:05.956Z cpu20:68115 opID=8596fd32)WARNING: Device: 1316: Failed to register device 0x43055fde3050 logical#vmkernel#com.vmware.iser0 com.vmware.iser (parent=0x130c43055fde3244): Already exists

[root@esxi-1:~]

 

There is no iSCSI or iSER adapter listed. I did have an iSCSI setup originally on this host, but I removed it and the VMkernel port. Any suggestions?
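A few checks that might help narrow this down (a sketch; adapter and device names will differ per host):

# Confirm the RDMA-capable uplinks are registered with the RDMA stack
esxcli rdma device list

# List the iSCSI/storage adapters currently present (an iSER adapter shows up as a vmhba)
esxcli iscsi adapter list
esxcli storage core adapter list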

Re: IB Switch IS5035 MTU Setting?


The error should clear once the SM's partition configuration (the IPoIB MTU setting) is adjusted to eliminate the inconsistency.

For example, to enable a 4096-byte IPoIB MTU on the Subnet Manager's default partition (assuming the SM is running on a switch), perform the steps below. If more than one switch is running an SM, this change should be made on each switch running an SM.

  

In MLNX-OS, the path to this setting in the GUI is:

IB SM Mgt tab -> Partitions tab -> the existing Default Partition, where the IPoIB MTU can be changed from 2K to 4K.

No other settings need to be changed.

 

Apply the changes, but be aware that this is an intrusive configuration change and will disrupt the cluster while the SM process is restarted and the MTU change is applied.
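If the SM runs on a host (OpenSM) instead of on a switch, the equivalent change goes in the partition configuration file; a sketch, assuming the common /etc/opensm/partitions.conf location (mtu=5 is the IB encoding for a 4096-byte MTU):

# /etc/opensm/partitions.conf
# default partition with a 4K IPoIB MTU; all ports get full membership
Default=0x7fff, ipoib, mtu=5 : ALL=full;

# restart opensm so the new partition settings are distributed
systemctl restart opensm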

40Gbps on 4x QDR ConnectX-2 VPI cards / Win10


I recently bought a pair of used Mellanox InfiniBand 4x QDR ConnectX-2 VPI cards. One card is single-port, the other is dual-port.

I will use them to connect my workstation to a server for HPC applications.

I am running Windows 10 on both systems. If it helps, both systems are a somewhat old platform: 2x Xeon X5670 CPUs per node.

The seller of the Mellanox cards told me that it will be difficult to reach 40G rates on Win10 unless:

1) I use a 12K 12-port switch, or

2) I use a Linux-to-Linux configuration.

I don't want to spend time learning and configuring a Linux OS, and I don't want to buy an expensive switch either!

Do you think it's possible to reach the maximum rate (40G) of the cards by any means?

Trouble making Infiniband running udaddy


Hello, I am currently using a Mellanox ConnectX-3 adapter for testing.

The pingpong test included in the Mellanox install package (ibv_rc_pingpong) works.

However, the tests such as rping and udaddy that were mentioned in the post HowTo Enable, Verify and Troubleshoot RDMA

https://community.mellanox.com/docs/DOC-2086#jive_content_id_4_rping

will not run.

Here are the error results:

Client (c1n15):

sungho@c1n15:~$ udaddy -s 172.23.10.30
udaddy: starting client
udaddy: connecting
udaddy: event: RDMA_CM_EVENT_ADDR_ERROR, error: -19
test complete
return status -19

Server (c1n14):

sungho@c1n14:~$ udaddy
udaddy: starting server

  

 

 

I have two servers connected through a switch. The InfiniBand and Ethernet interfaces are all pingable from each other, and all of the interfaces are installed and running.

However, I have doubts about the ARP table, because it doesn't look like it is populated properly (listed below).

Here is the information for the two servers. Do you think I need to statically add ARP entries, or is there something fundamentally wrong?

 

server (A)

sungho@c1n14:/usr/bin$ ibstat

CA 'mlx4_0'

        CA type: MT4099

        Number of ports: 1

        Firmware version: 2.42.5000

        Hardware version: 1

        Node GUID: 0x7cfe9003009a7c30

        System image GUID: 0x7cfe9003009a7c33

        Port 1:

                State: Active

                Physical state: LinkUp

                Rate: 56

                Base lid: 3

                LMC: 0

                SM lid: 3

                Capability mask: 0x0251486a

                Port GUID: 0x7cfe9003009a7c31

                Link layer: InfiniBand

Kernel IP routing table

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface

0.0.0.0         172.23.1.1      0.0.0.0         UG    0      0        0 enp1s0f0

172.23.0.0      0.0.0.0         255.255.0.0     U     0      0        0 enp1s0f0

172.23.0.0      0.0.0.0         255.255.0.0     U     0      0        0 ib0

sungho@c1n14:/usr/bin$ arp -n

Address                  HWtype  HWaddress           Flags Mask            Iface

172.23.10.1              ether   0c:c4:7a:3a:35:88   C                     enp1s0f0

172.23.10.15             ether   0c:c4:7a:3a:35:72   C                     enp1s0f0

172.23.1.1               ether   00:1b:21:5b:6a:a8   C                     enp1s0f0

enp1s0f0  Link encap:Ethernet  HWaddr 0c:c4:7a:3a:35:70

          inet addr:172.23.10.14  Bcast:172.23.255.255  Mask:255.255.0.0

          inet6 addr: fe80::ec4:7aff:fe3a:3570/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:12438 errors:0 dropped:5886 overruns:0 frame:0

          TX packets:5861 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:2356740 (2.3 MB)  TX bytes:836306 (836.3 KB)

 

ib0       Link encap:UNSPEC  HWaddr A0-00-02-20-FE-80-00-00-00-00-00-00-00-00-00-00

          inet addr:172.23.10.30  Bcast:172.23.255.255  Mask:255.255.0.0

          inet6 addr: fe80::7efe:9003:9a:7c31/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:2044  Metric:1

          RX packets:0 errors:0 dropped:0 overruns:0 frame:0

          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:256

          RX bytes:0 (0.0 B)  TX bytes:616 (616.0 B)

 

lo        Link encap:Local Loopback

          inet addr:127.0.0.1  Mask:255.0.0.0

          inet6 addr: ::1/128 Scope:Host

          UP LOOPBACK RUNNING  MTU:65536  Metric:1

          RX packets:189 errors:0 dropped:0 overruns:0 frame:0

          TX packets:189 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1

          RX bytes:13912 (13.9 KB)  TX bytes:13912 (13.9 KB)

 

server (B) 

sungho@c1n15:~$ ibstat

CA 'mlx4_0'

        CA type: MT4099

        Number of ports: 1

        Firmware version: 2.42.5000

        Hardware version: 1

        Node GUID: 0x7cfe9003009a6360

        System image GUID: 0x7cfe9003009a6363

        Port 1:

                State: Active

                Physical state: LinkUp

                Rate: 56

                Base lid: 1

                LMC: 0

                SM lid: 3

                Capability mask: 0x02514868

                Port GUID: 0x7cfe9003009a6361

                Link layer: InfiniBand

sungho@c1n15:~$ route -n

Kernel IP routing table

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface

0.0.0.0         172.23.1.1      0.0.0.0         UG    0      0        0 enp1s0f0

172.23.0.0      0.0.0.0         255.255.0.0     U     0      0        0 enp1s0f0

172.23.0.0      0.0.0.0         255.255.0.0     U     0      0        0 ib0

sungho@c1n15:~$ arp -n

Address                  HWtype  HWaddress           Flags Mask            Iface

172.23.10.14             ether   0c:c4:7a:3a:35:70   C                     enp1s0f0

172.23.10.1              ether   0c:c4:7a:3a:35:88   C                     enp1s0f0

172.23.10.30             ether   0c:c4:7a:3a:35:70   C                     enp1s0f0

172.23.1.1               ether   00:1b:21:5b:6a:a8   C                     enp1s0f0

 

enp1s0f0  Link encap:Ethernet  HWaddr 0c:c4:7a:3a:35:72

          inet addr:172.23.10.15  Bcast:172.23.255.255  Mask:255.255.0.0

          inet6 addr: fe80::ec4:7aff:fe3a:3572/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:19432 errors:0 dropped:5938 overruns:0 frame:0

          TX packets:8783 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:8246898 (8.2 MB)  TX bytes:1050793 (1.0 MB)

 

ib0       Link encap:UNSPEC  HWaddr A0-00-02-20-FE-80-00-00-00-00-00-00-00-00-00-00

          inet addr:172.23.10.31  Bcast:172.23.255.255  Mask:255.255.0.0

          inet6 addr: fe80::7efe:9003:9a:6361/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:2044  Metric:1

          RX packets:0 errors:0 dropped:0 overruns:0 frame:0

          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:256

          RX bytes:0 (0.0 B)  TX bytes:1232 (1.2 KB)

 

lo        Link encap:Local Loopback

          inet addr:127.0.0.1  Mask:255.0.0.0

          inet6 addr: ::1/128 Scope:Host

          UP LOOPBACK RUNNING  MTU:65536  Metric:1

          RX packets:109 errors:0 dropped:0 overruns:0 frame:0

          TX packets:109 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1

          RX bytes:7992 (7.9 KB)  TX bytes:7992 (7.9 KB)
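One thing that may be worth trying, given the two overlapping routes to 172.23.0.0/16 (one via enp1s0f0 and one via ib0): bind udaddy explicitly to the IPoIB addresses so the RDMA CM resolves them over ib0 rather than over the Ethernet interface. A sketch using the addresses above (udaddy's -b option binds to a local address):

# on the server (c1n14), listen on its IPoIB address
udaddy -b 172.23.10.30

# on the client (c1n15), bind to the local IPoIB address and connect to the server's
udaddy -b 172.23.10.31 -s 172.23.10.30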

iSER for ESXi 6.5: no target detected, no traffic sent.


Hello,

 

I have seen other reports of no targets being detected. I have an ESXi 6.5 host running the new iSER driver, and the adapter firmware is up to date (dual-port ConnectX-4 VPI card in Ethernet mode).

 

I can ping from the Mellanox card to the ESXi host. When I configure a target, no devices are detected.

 

I loaded the newest Mellanox driver on my Linux target and followed this guide: How-To Dump RDMA traffic Using the Inbox tcpdump tool (ConnectX-4)

 

When I run a packet capture and rescan the iSER adapter, I see no traffic generated and sent to the Linux target server. I logged just two CDP packets unrelated to the adapter rescan.

 

I have the ESXi server directly connected to the Linux server, so there is currently no switch and thus not a lot of other traffic on this interface, but surely I should see the ESXi server attempt to discover the targets?

 

The only thing I noticed is that the path is listed as not used, but I believe this is because there is no target detected:

 

 

Should the iSER target port be something other than 3260?
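For reference, discovery can also be triggered and checked by hand from the ESXi shell, roughly as sketched below (vmhba65 and the target address are placeholders; 3260 is the usual portal port for iSER targets as well):

# identify the iSER adapter (it appears as a vmhba once the driver registers it)
esxcli iscsi adapter list

# add the target portal and rescan the adapter
esxcli iscsi adapter discovery sendtarget add -A vmhba65 -a 192.168.1.10:3260
esxcli storage core adapter rescan -A vmhba65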

Re: New iSER Driver installation on ESXi 6.5-U1


Any luck seeing the iSER target? I am pretty much where you guys are; I have the adapter but cannot see the target.


Re: New iSER Driver installation on ESXi 6.5-U1


Unfortunately, the answer is no!

 

But a friend of mine shared some information with me about tests he had already run.

 

He tested with a StarWind vSAN iSER target, and the ESXi 6.5 iSER initiator (1.0.0.1) connected to the iSER target.

 

He tested with RoCE v1.0, 1.2x, and 2.0.

 

In the near future, I'll test which RoCE version is supported with the ESXi 6.5 iSER 1.0.0.1 driver.

 

BR,

Jae-Hoon Choi

Re: How do I disable FEC for MCX416A-CCAT on windows


Regarding the switch side:

  1. Disable FEC from the switch side; if it's a Mellanox switch then run: "interface ib 1/1 fec-override no-fec" in console terminal mode.

It should be:

  1. Disable FEC from the switch side; if it's a Mellanox switch then run: "interface ethernet 1/1 fec-override no-fec" in console terminal mode.
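In context, the corrected command is entered from the switch CLI roughly as follows (a sketch; 1/1 is whichever port the adapter is cabled to):

switch > enable
switch # configure terminal
switch (config) # interface ethernet 1/1 fec-override no-fec
switch (config) # show interfaces ethernet 1/1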

Re: How do I disable FEC for MCX416A-CCAT on windows


Hello,

Here are the ways to disable FEC on the card:

  1. Connect it to a switch that does not support FEC, because our cards are set to auto-negotiate by default, meaning that if the other side also supports FEC, FEC is enabled; if not, FEC is disabled.
  2. Disable FEC from the switch side; if it's a Mellanox switch then run: "interface ib 1/1 fec-override no-fec" in console terminal mode.
  3. The latest MFT 4.7 package has the mlxlink tool, which can disable FEC as follows:

a. Verify with mlxconfig that the KEEP_ETH_LINK_UP_P1 value is 0 (for example: mlxconfig -d /dev/mst/mt4117_pciconf0 q)

b. Disable RS FEC: "mlxlink -d /dev/mst/mt4117_pciconf0 --fec NF"

c. Toggle the link: "mlxlink -d /dev/mst/mt4117_pciconf0 -a TG"

d. Verify FEC is disabled: "mlxlink -d /dev/mst/mt4117_pciconf0 --show_fec | grep -i fec"

FEC : No FEC

The MFT (Mellanox Firmware Tools) package can be downloaded from the Mellanox website.

 

Regards,

Viki

How to configure MCX354A-FCBT Mellanox InfiniBand speed at 56Gbps?


Can someone help me configure my Mellanox MCX354A-FCBT InfiniBand speed to 56Gb/s? I have an MCX354A-FCBT configured for InfiniBand, but the speed remains at 40Gb/s, even though all the components (card/switch/cable) support 56Gb/s. Thanks a lot for your help. Here is my configuration:

 

Operating System

Fedora release 24 (Twenty Four)

kernel 4.11.12-100.fc24.x86_64

 

Mellanox card

MCX354A-FCBT

 

[root@aigle ~]# mlxconfig -d /dev/mst/mt4099_pci_cr0 q

 

Device #1:

----------

Device type:    ConnectX3      

PCI device:     /dev/mst/mt4099_pci_cr0

 

Configurations:                              Next Boot

         SRIOV_EN                            True(1)        

         NUM_OF_VFS                          8              

         LINK_TYPE_P1                        VPI(3)         

         LINK_TYPE_P2                        VPI(3)         

         LOG_BAR_SIZE                        3              

         BOOT_PKEY_P1                        0              

         BOOT_PKEY_P2                        0              

         BOOT_OPTION_ROM_EN_P1               True(1)        

         BOOT_VLAN_EN_P1                     False(0)       

         BOOT_RETRY_CNT_P1                   0              

         LEGACY_BOOT_PROTOCOL_P1             PXE(1)         

         BOOT_VLAN_P1                        1              

         BOOT_OPTION_ROM_EN_P2               True(1)        

         BOOT_VLAN_EN_P2                     False(0)       

         BOOT_RETRY_CNT_P2                   0              

         LEGACY_BOOT_PROTOCOL_P2             PXE(1)         

         BOOT_VLAN_P2                        1              

         IP_VER_P1                           IPv4(0)        

         IP_VER_P2                           IPv4(0)        

 

[root@aigle ~]# mlxfwmanager --query

Querying Mellanox devices firmware ...

 

Device #1:

----------

  Device Type:      ConnectX3

  Part Number:      MCX354A-FCB_A2-A5

  Description:      ConnectX-3 VPI adapter card; dual-port QSFP; FDR IB (56Gb/s) and 40GigE; PCIe3.0 x8 8GT/s; RoHS R6

  PSID:             MT_1090120019

  PCI Device Name:  /dev/mst/mt4099_pci_cr1

  Port1 GUID:       f45214030027f751

  Port2 GUID:       f45214030027f752

  Versions:         Current        Available    

     FW             2.42.5000      N/A          

     PXE            3.4.0752       N/A          

  Status:           No matching image found

 

Device #2:

----------

  Device Type:      ConnectX3

  Part Number:      MCX354A-FCB_A2-A5

  Description:      ConnectX-3 VPI adapter card; dual-port QSFP; FDR IB (56Gb/s) and 40GigE; PCIe3.0 x8 8GT/s; RoHS R6

  PSID:             MT_1090120019

  PCI Device Name:  /dev/mst/mt4099_pci_cr0

  Port1 GUID:       0002c9030032e311

  Port2 GUID:       0002c9030032e312

  Versions:         Current        Available    

     FW             2.42.5000      N/A          

     PXE            3.4.0752       N/A          

  Status:           No matching image found

 

[root@aigle ~]# ibstat

CA 'mlx4_0'

CA type: MT4099

Number of ports: 2

Firmware version: 2.42.5000

Hardware version: 1

Node GUID: 0x0002c9030032e310

System image GUID: 0x0002c9030032e313

Port 1:

State: Active

Physical state: LinkUp

Rate: 40 (FDR10)

Base lid: 6

LMC: 0

SM lid: 1

Capability mask: 0x02514868

Port GUID: 0x0002c9030032e311

Link layer: InfiniBand

Port 2:

State: Active

Physical state: LinkUp

Rate: 40 (FDR10)

Base lid: 7

LMC: 0

SM lid: 1

Capability mask: 0x02514868

Port GUID: 0x0002c9030032e312

Link layer: InfiniBand

CA 'mlx4_1'

CA type: MT4099

Number of ports: 2

Firmware version: 2.42.5000

Hardware version: 1

Node GUID: 0xf45214030027f750

System image GUID: 0xf45214030027f753

Port 1:

State: Active

Physical state: LinkUp

Rate: 40 (FDR10)

Base lid: 8

LMC: 0

SM lid: 1

Capability mask: 0x02514868

Port GUID: 0xf45214030027f751

Link layer: InfiniBand

Port 2:

State: Active

Physical state: LinkUp

Rate: 40 (FDR10)

Base lid: 9

LMC: 0

SM lid: 1

Capability mask: 0x02514868

Port GUID: 0xf45214030027f752

Link layer: InfiniBand

 

[root@aigle ~]# ibv_devinfo -v

hca_id: mlx4_0

transport: InfiniBand (0)

fw_ver: 2.42.5000

node_guid: 0002:c903:0032:e310

sys_image_guid: 0002:c903:0032:e313

vendor_id: 0x02c9

vendor_part_id: 4099

hw_ver: 0x1

board_id: MT_1090120019

phys_port_cnt: 2

max_mr_size: 0xffffffffffffffff

page_size_cap: 0xfffffe00

max_qp: 393144

max_qp_wr: 16351

device_cap_flags: 0x057e9c76

BAD_PKEY_CNTR

BAD_QKEY_CNTR

AUTO_PATH_MIG

CHANGE_PHY_PORT

UD_AV_PORT_ENFORCE

PORT_ACTIVE_EVENT

SYS_IMAGE_GUID

RC_RNR_NAK_GEN

XRC

Unknown flags: 0x056e8000

device_cap_exp_flags: 0x5000401600000000

EXP_DEVICE_QPG

EXP_UD_RSS

EXP_CROSS_CHANNEL

EXP_MR_ALLOCATE

EXT_ATOMICS

EXP_MASKED_ATOMICS

max_sge: 32

max_sge_rd: 30

max_cq: 65408

max_cqe: 4194303

max_mr: 524032

max_pd: 32764

max_qp_rd_atom: 16

max_ee_rd_atom: 0

max_res_rd_atom: 6290304

max_qp_init_rd_atom: 128

max_ee_init_rd_atom: 0

atomic_cap: ATOMIC_HCA (1)

log atomic arg sizes (mask) 0x8

masked_log_atomic_arg_sizes (mask) 0x8

masked_log_atomic_arg_sizes_network_endianness (mask) 0x0

max fetch and add bit boundary 64

log max atomic inline 3

max_ee: 0

max_rdd: 0

max_mw: 0

max_raw_ipv6_qp: 0

max_raw_ethy_qp: 0

max_mcast_grp: 131072

max_mcast_qp_attach: 244

max_total_mcast_qp_attach: 31981568

max_ah: 2147483647

max_fmr: 0

max_srq: 65472

max_srq_wr: 16383

max_srq_sge: 31

max_pkeys: 128

local_ca_ack_delay: 15

hca_core_clock: 427000

max_klm_list_size: 0

max_send_wqe_inline_klms: 0

max_umr_recursion_depth: 0

max_umr_stride_dimension: 0

general_odp_caps:

max_size: 0x0

rc_odp_caps:

NO SUPPORT

uc_odp_caps:

NO SUPPORT

ud_odp_caps:

NO SUPPORT

dc_odp_caps:

NO SUPPORT

xrc_odp_caps:

NO SUPPORT

raw_eth_odp_caps:

NO SUPPORT

max_dct: 0

max_device_ctx: 1016

Multi-Packet RQ is not supported

rx_pad_end_addr_align: 0

tso_caps:

max_tso: 0

packet_pacing_caps:

qp_rate_limit_min: 0kbps

qp_rate_limit_max: 0kbps

ooo_caps:

ooo_rc_caps  = 0x0

ooo_xrc_caps = 0x0

ooo_dc_caps  = 0x0

ooo_ud_caps  = 0x0

sw_parsing_caps:

supported_qp:

tag matching not supported

Device ports:

port: 1

state: PORT_ACTIVE (4)

max_mtu: 4096 (5)

active_mtu: 4096 (5)

sm_lid: 1

port_lid: 6

port_lmc: 0x00

link_layer: InfiniBand

max_msg_sz: 0x40000000

port_cap_flags: 0x02514868

max_vl_num: 8 (4)

bad_pkey_cntr: 0x0

qkey_viol_cntr: 0x0

sm_sl: 0

pkey_tbl_len: 128

gid_tbl_len: 128

subnet_timeout: 18

init_type_reply: 0

active_width: 4X (2)

active_speed: 10.0 Gbps (8)

phys_state: LINK_UP (5)

GID[  0]: fe80:0000:0000:0000:0002:c903:0032:e311

port: 2

state: PORT_ACTIVE (4)

max_mtu: 4096 (5)

active_mtu: 4096 (5)

sm_lid: 1

port_lid: 7

port_lmc: 0x00

link_layer: InfiniBand

max_msg_sz: 0x40000000

port_cap_flags: 0x02514868

max_vl_num: 8 (4)

bad_pkey_cntr: 0x0

qkey_viol_cntr: 0x0

sm_sl: 0

pkey_tbl_len: 128

gid_tbl_len: 128

subnet_timeout: 18

init_type_reply: 0

active_width: 4X (2)

active_speed: 10.0 Gbps (8)

phys_state: LINK_UP (5)

GID[  0]: fe80:0000:0000:0000:0002:c903:0032:e312

 

 

Switch

Part Info

--------

Type: SX6036

S/N: IL23190198

P/N: 712498-B21

Chassis system GUID: 00:02:C9:03:00:AC:6C:20

Asic FW version: 9.4.3580

LID: 1

Node GUID: 00:02:C9:03:00:AC:6C

 

Installed MLNX-OS Images

------------------------

Partition 1 - Active Image (partition of next boot)

PPC_M460EX 3.6.4006 2017-07-03 16:17:35 ppc

 

Partition 2

PPC_M460EX 3.6.3004 2017-02-05 17:31:50 ppc

 

Port Info

----------

Port number : 1

Port type : IB

IB Subnet : infiniband-default

Port description :

Logical port state :    Active

Physical port state : LinkUp

Current line rate : 40.0 Gbps

Supported speeds : sdr, ddr, qdr, fdr10

Speed :         fdr10

Supported widths : 1X, 4X

Width :         4X

Max supported MTUs : 4096

MTU :         4096

VL capabilities : VL0 - VL7

Operational VLs : VL0 - VL7

Supported LLR speeds : FDR10, FDR

LLR Status : Active

 

Transceiver Information 

Identifier :                 QSFP+        

Cable/ Module type : Passive copper, unequalized

Infiniband speeds : SDR , DDR , QDR , FDR

Vendor :         Mellanox

Cable length :         2 m

Part number :         MC2207130-002

Revision :         A3

Serial number :         MT1710VS05863

 

Subnet Manager (SM) Status

--------------------------

SM Status

Local SM running 1 hour 26 minutes 52 seconds

SM Priority 7 State running

Failures 0 Autostart true

Routing Engine Used minhop

SM version OpenSM4.7.0.MLNX20170511.3016205
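As a general check, the negotiated width/speed of every link, together with what each end claims to support, can be inspected from any host with the standard InfiniBand diagnostics; a short sketch (LID 1 and port 1 correspond to the switch port shown above):

# show every link in the fabric with its active width/speed (e.g. 4X FDR10 vs 4X FDR)
iblinkinfo

# query the switch port's enabled/supported link speeds (switch LID 1, port 1)
ibportstate 1 1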
