Hi Jorge,
Can you please provide more details?
Do you mean to do virtualization on the mellanox switch?
Hi Bill,
Here is the link to the Release Notes for the Ubuntu 18.04 Inbox Driver; Section 2, Changes and New Features, lists support for Enhanced IPoIB on ConnectX-4 cards. I am also attaching the link to the User Manual for your reference. Which card are you using? You also mentioned that you are receiving the following error: [ 57.573664] ib_ipoib: unknown parameter 'ipoib_enhanced' ignored. Is this after testing with MLNX_OFED? If yes, which OFED version did you test it with? For any questions related to the Inbox driver itself, it would be best to reach out to the OS vendor. (A quick check for this parameter is sketched after the links below.)
http://www.mellanox.com/pdf/prod_software/Ubuntu_18.04_Inbox_Driver_Release_Notes.pdf
http://www.mellanox.com/pdf/prod_software/Ubuntu_18_04_Inbox_Driver_User_Manual.pdf
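Not part of the links above, but a quick way to confirm whether the driver you are running actually exposes the 'ipoib_enhanced' parameter before trying to set it; a minimal sketch, assuming a Linux host with the ib_ipoib module available:
modinfo ib_ipoib | grep -i ipoib_enhanced            # listed only if this driver build supports the parameter
ls /sys/module/ib_ipoib/parameters/                  # parameters the currently loaded module accepts
cat /sys/module/ib_ipoib/parameters/ipoib_enhanced   # current value; this file exists only with a supporting driver (e.g. MLNX_OFED)
If the parameter does not show up in either place, the "unknown parameter ignored" message is expected for that driver build.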
Disappointment also on my side :-(
But thank you so much for your help.
Hi Andreas,
The post How to Enable PFC on Mellanox Switches (Spectrum) is outdated.
Please refer to the Recommended Network Configuration Examples for RoCE Deployment article.
The configuration in that article enables PFC on the correct lossless priority on all ports; a rough sketch of the switch-side commands follows, but please follow the article for the full configuration.
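For orientation only, on MLNX-OS/Onyx switches the PFC portion of that configuration typically looks along these lines, assuming RoCE traffic is mapped to priority 3 (an assumption here, not something stated above), entered from configuration mode:
switch (config) # dcb priority-flow-control enable force
switch (config) # dcb priority-flow-control priority 3 enable
switch (config) # interface ethernet 1/1 dcb priority-flow-control mode on force
switch (config) # show dcb priority-flow-control
The first two enable PFC globally and mark priority 3 lossless, the third turns PFC on per port, and the last verifies the resulting per-priority state.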
Is anyone working on, or is there any roadmap for, getting the mlx5 driver working under FreeBSD? FreeBSD 12 has added a lot of the needed features to the default kernel build, but it still looks to be missing quite a bit.
Thanks in advance!
Mit
Hi Fred,
We noticed that you had a Mellanox Support Case opened with us regarding the same question in which we provided you with the information requested. If needed, we can continue communicating through the Mellanox Support Case.
Thanks & Regards,
Namrata.
Hi,
Please refer to the following community links:
Understanding mlx4 Linux Counters
Understanding mlx5 Linux Counters and Status Parameters
Thanks,
Samer
Samer-
I understand that it is not currently supported. That is why I asked if anyone was working on it... before I tackle the port myself.
We started using DPDK to get out of the driver business, which worked until now!
Thanks!
Mit
Hi Mit,
Currently there is no plan to support FreeBSD in any roadmap.
Thanks,
Samer
Thank you sir.
Mit
Hi,
Currently, Mellanox supports the following OS list in the latest DPDK 18.05:
OS List:
Currently there is no plan to support FreeBSD in any roadmap.
Thanks,
Samer
Hi Samer.
Thanks for your reply, but I'm looking for threshold information. As I said in my previous message, you can have 50 symbol errors in an hour or in a week; I want to know when those 50 errors become significant. The same goes for all the port counters of an InfiniBand link. I need to debug the situation and I don't know where to look for information on what counts as dangerous numbers for InfiniBand counters.
Thanks again.
Bye...
Hi Wayne,
Did you try running the SM on the server to check whether the issue is still reproducible? Can you please let me know the following:
1. PSID of the card and FW version
#mst start
#mst status (get the mst device name)
#flint -d <mst device name> q
2. Driver version
#ofed_info -s
Hi Dan,
Many thanks for posting your question on the Mellanox Community.
As of ESXi 6.7, iSER support is built into ESXi, and it also supports the ConnectX-5 adapter.
Please follow the instructions in the following link on how to configure iSER in ESXi 6.7 -> https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.storage.doc/GUID-679D1419-BF9D-4E10-8598-F23205E8AB0F.html
In case you run into a new issue, please contact VMware for additional support.
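Not taken from the VMware article itself, but the flow it describes essentially reduces to enabling the iSER initiator and then verifying that a new storage adapter appears; a minimal sketch on the ESXi host, assuming the ConnectX-5 ports are already set up as RDMA-capable uplinks:
esxcli rdma device list          # the ConnectX-5 ports should be listed as RDMA devices
esxcli rdma iser add             # enable the VMware iSER initiator module
esxcli storage core adapter list # after a rescan (or reboot), a new iSER vmhba should appear here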
Many thanks.
~Mellanox Technical Support
This is kinda frustrating. I have done this exercise multiple times. As the docs say, I did:
esxcli rdma iser add
and the log shows:
2018-11-07T02:10:53.287Z cpu4:2099839 opID=28b042a0)Device: 1466: Registered device: 0x43060f7cd070 logical#swdevroot#com.vmware.iser0 com.vmware.iser (parent=0x6daa43060f7cd35b)
yet no iSER storage adapter is visible in the web client. I'm puzzled why y'all just cut-and-pasted a reference to the VMware docs when I explicitly stated this didn't work for me.
Hi Manuel,
To monitor such counters, I suggest using the ibdiagnet tool.
ibdiagnet performs quality and health checks: it scans the fabric and extracts connectivity and device information.
An ibdiagnet run performs the following:
• Fabric discovery
• Duplicated GUIDs detection
• Duplicate Node Description detection
• Alias GUIDs check
• Lids check
• Links in INIT state and unresponsive links detection
• Counters fetch
• Error counters check
• Counter increments during run detection
• BER test
• Routing checks
• Link width and speed checks
• Topology matching
• Partition checks
Example:
./ibdiagnet -P all=1 --pc --pm_pause_time 1200 --get_cable_info -r -o /tmp/$(date +%Y%m%d)
The output logs will be written to the directory given with -o (by default under /var/tmp/ibdiagnet2/).
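As a rough sketch of what to do with the results (the exact file names can vary between ibdiagnet versions), assuming the output directory passed with -o in the example above:
grep -i error /tmp/$(date +%Y%m%d)/ibdiagnet2.log   # checks and counters that ibdiagnet flagged during the run
ibqueryerrors                                       # infiniband-diags tool that reports only port counters exceeding its built-in thresholds
ibqueryerrors is worth a look for the threshold question specifically, since by default it prints only the counters it considers above threshold.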
Thanks,
Samer
Hiya, I hope you can help me.
I have purchased the MHGA28-XTC card (Revision 5) and I am wanting to install it on Windows 10 (ideally).
I have followed the instructions on the Mellanox website, but try as I might I cannot get the card installed. I have tried it on different machines and on different OSes, from Windows 7 to Windows 10, and also on Windows Server 2016.
Device Manager sees the card as an InfiniBand controller, but when I follow the instructions to download and install the driver it doesn't want to know. I have tried forcing the driver onto the card, and the best I have got is that it will take the driver and the device will appear in Network Devices, but it is disabled and I cannot enable it.
I have used the MST Status command and it always states that there are no MST devices detected, even though Windows sees the PCI card (I've used multiple PCI slots as well).
If someone could basically take me through this step by step I would be eternally grateful.
Hello,
Thank you for posting your question on the Mellanox Community.
Based on the information provided, we need some additional information to debug the issue (a quick way to collect it is sketched after the list):
MLNX_OFED version used
OS release and kernel version used
How-to reproduce
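If it helps, on a Linux host these can usually be collected with (a minimal sketch, assuming MLNX_OFED is installed):
ofed_info -s         # installed MLNX_OFED version
cat /etc/os-release  # OS release
uname -r             # running kernel version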
Many thanks.
~Mellanox Technical Support
Hi Graham,
Thank you for posting your question on the Mellanox Community.
Unfortunately, the Mellanox InfiniHost III adapter MHGA28-XTC has been EOL for a while, and our latest drivers do not provide any support for this HCA.
I have found the following post online regarding the bring-up of these HCAs under Windows 7.
The link to the post -> http://andy-malakov.blogspot.com/2015/03/connecting-two-windows-7-computers-with.html
Hopefully the post's instructions will resolve your issue.
Thanks and regards,
~Mellanox Technical Support
hello,
After I changed the driver from 4.1 to 4.4.2 on CentOS 7.1, the system sometimes hangs when I mount and umount glusterfs a number of times.
Below is a screenshot of the console output and the dmesg output taken during the mount.
Everything is fine with driver 4.1; I don't know how to debug this problem and it has blocked me for a long time.
I need some advice on how to work around this problem, thanks.
[Fri Nov 2 15:07:38 2018] WARNING: at /var/tmp/OFED_topdir/BUILD/mlnx-ofa_kernel-4.4/obj/default/drivers/infiniband/core/cma.c:666 cma_acquire_dev+0x268/0x280 [rdma_cm]()
[Fri Nov 2 15:07:38 2018] Modules linked in: fuse ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 xt_addrtype iptable_filter xt_conntrack nf_nat nf_conntrack br_netfilter bridge stp llc dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio loop bonding rdma_ucm(OE) ib_ucm(OE) rdma_cm(OE) iw_cm(OE) ib_ipoib(OE) ib_cm(OE) ib_uverbs(OE) ib_umad(OE) mlx5_fpga_tools(OE) mlx5_ib(OE) mlx5_core(OE) mlxfw(OE) iTCO_wdt dcdbas iTCO_vendor_support mxm_wmi intel_powerclamp coretemp intel_rapl iosf_mbi kvm_intel kvm irqbypass crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd ses ipmi_devintf enclosure pcspkr ipmi_si ipmi_msghandler wmi acpi_power_meter shpchp lpc_ich mei_me sb_edac edac_core mei ip_tables xfs libcrc32c mlx4_ib(OE) mlx4_en(OE)
[Fri Nov 2 15:07:38 2018] ib_core(OE) sd_mod crc_t10dif crct10dif_generic mgag200 crct10dif_pclmul drm_kms_helper crct10dif_common crc32c_intel syscopyarea sysfillrect sysimgblt fb_sys_fops ttm mpt3sas raid_class drm scsi_transport_sas nvme ahci ixgbe libahci igb mdio libata ptp i2c_algo_bit mlx4_core(OE) i2c_core pps_core megaraid_sas devlink dca mlx_compat(OE) fjes dm_mirror dm_region_hash dm_log dm_mod zfs(POE) zunicode(POE) zavl(POE) zcommon(POE) znvpair(POE) spl(OE) zlib_deflate sg
[Fri Nov 2 15:07:38 2018] CPU: 10 PID: 18958 Comm: glusterfs Tainted: P W OE ------------ 3.10.0-514.26.2.el7.x86_64 #1
[Fri Nov 2 15:07:38 2018] Hardware name: Dell Inc. PowerEdge R730xd/0WCJNT, BIOS 2.4.3 01/17/2017
[Fri Nov 2 15:07:38 2018] 0000000000000000 000000009310e9fa ffff8807f176bcf0 ffffffff81687133
[Fri Nov 2 15:07:38 2018] ffff8807f176bd28 ffffffff81085cb0 ffff8810498cec00 0000000000000000
[Fri Nov 2 15:07:38 2018] 0000000000000001 ffff88104f5a71e0 ffff8807f176bd60 ffff8807f176bd38
[Fri Nov 2 15:07:38 2018] Call Trace:
[Fri Nov 2 15:07:38 2018] [<ffffffff81687133>] dump_stack+0x19/0x1b
[Fri Nov 2 15:07:38 2018] [<ffffffff81085cb0>] warn_slowpath_common+0x70/0xb0
[Fri Nov 2 15:07:38 2018] [<ffffffff81085dfa>] warn_slowpath_null+0x1a/0x20
[Fri Nov 2 15:07:38 2018] [<ffffffffa0aed1c8>] cma_acquire_dev+0x268/0x280 [rdma_cm]
[Fri Nov 2 15:07:38 2018] [<ffffffffa0af214a>] rdma_bind_addr+0x85a/0x910 [rdma_cm]
[Fri Nov 2 15:07:38 2018] [<ffffffff8120e5e6>] ? path_openat+0x166/0x490
[Fri Nov 2 15:07:38 2018] [<ffffffff8168a982>] ? mutex_lock+0x12/0x2f
[Fri Nov 2 15:07:38 2018] [<ffffffffa082c104>] ucma_bind+0x84/0xd0 [rdma_ucm]
[Fri Nov 2 15:07:38 2018] [<ffffffffa082b71b>] ucma_write+0xcb/0x150 [rdma_ucm]
[Fri Nov 2 15:07:38 2018] [<ffffffff811fe9fd>] vfs_write+0xbd/0x1e0
[Fri Nov 2 15:07:38 2018] [<ffffffff810ad1ec>] ? task_work_run+0xac/0xe0
[Fri Nov 2 15:07:38 2018] [<ffffffff811ff51f>] SyS_write+0x7f/0xe0
[Fri Nov 2 15:07:38 2018] [<ffffffff81697809>] system_call_fastpath+0x16/0x1b
[Fri Nov 2 15:07:38 2018] ---[ end trace c97345452e609a78 ]---