See some examples here for mlx5; I'm not sure if this is the same for mlx4.
Hi Kaijun,
Did you bind the devices to UIO/VFIO? If so, you should bind them back to the kernel driver. Unlike Intel NICs, the Mellanox PMD doesn't use UIO/VFIO; its control path goes through the regular kernel driver.
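For reference, a quick sketch of how to check and fix the binding with dpdk-devbind.py (the PCI address below is just a placeholder for your setup):
$ dpdk-devbind.py --status                    # check which driver each NIC is currently bound to
# dpdk-devbind.py -b mlx5_core 0000:08:00.0   # rebind to the kernel driver (mlx4_core for ConnectX-3)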
Thanks,
Yongseok
hi all,
I have a very basic setup, two boxes connected directly via two MHEH28-XTC cards, and I cannot activate them.
One peculiar thing is that I get (randomly, and not often):
[85947.090496] AMD-Vi: Event logged [
[85947.090539] IO_PAGE_FAULT device=09:00.7 domain=0x0000 address=0x00000000f6ffb000 flags=0x0050]
[85947.298509] AMD-Vi: Event logged [
[85947.298550] IO_PAGE_FAULT device=09:00.7 domain=0x0000 address=0x00000000f6ffb000 flags=0x0050]
which is the card itself, judging by the device ID.
Would you have some thoughts to share, please?
$ ./flint/mstflint -d 09:00.0 q # for both cards
-W- Running quick query - Skipping full image integrity checks.
Image type: FS2
FW Version: 2.9.1000
Device ID: 25408
Description: Node Port1 Port2 Sys image
GUIDs: 0008f104039a62a0 0008f104039a62a1 0008f104039a62a2 0008f104039a62a3
MACs: 000000000000 000000000001
VSD:
PSID: MT_04A0110001
$ ibstat
CA 'mlx4_0'
CA type: MT25408
Number of ports: 2
Firmware version: 2.9.1000
Hardware version: a0
Node GUID: 0x0008f104039a08dc
System image GUID: 0x0008f104039a08df
Port 1:
State: Initializing
Physical state: LinkUp
Rate: 10
Base lid: 1
LMC: 0
SM lid: 1
Capability mask: 0x0259086a
Port GUID: 0x0008f104039a08dd
Link layer: InfiniBand
Port 2:
State: Down
Physical state: Polling
Rate: 10
Base lid: 0
LMC: 0
SM lid: 0
Capability mask: 0x0259086a
Port GUID: 0x0008f104039a08de
Link layer: InfiniBand
in opensm log:
Jan 06 17:00:28 817185 [F6D5A700] 0x01 -> sm_mad_ctrl_send_err_cb: ERR 3113: MAD completed in error (IB_TIMEOUT): SubnGet(NodeInfo), attr_mod 0x0, TID 0x1cd1
Jan 06 17:00:28 817200 [F6D5A700] 0x01 -> sm_mad_ctrl_send_err_cb: ERR 3120 Timeout while getting attribute 0x11 (NodeInfo); Possible mis-set mkey?
many thanks
See the below commands:
[standalone: master] > enable
[standalone: master] # configure terminal
[standalone: master] (config) # vlan 3
[standalone: master] (config vlan 3) # exit
[standalone: master] (config) # interface ethernet 1/1
[standalone: master] (config interface ethernet 1/1) # switchport mode hybrid
[standalone: master] (config interface ethernet 1/1) # switchport access vlan 3
Verify with the following command:
[standalone: master] (config) # show interface switchport
Interface    Mode      Access vlan    Allowed vlans
---------------------------------------------------------------------------------
Eth1/1       hybrid    3
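If you also need tagged VLANs to pass on that hybrid port, they can be added to the allowed list as well (syntax from memory, VLAN 10 is just an example; please check the MLNX-OS command reference):
[standalone: master] (config interface ethernet 1/1) # switchport hybrid allowed-vlan add 10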
Marlon
In fact, there were two computers that could execute the command correctly, so I think there is no problem with the BIOS. The error happened after I burned the firmware; after that, none of the computers can execute the command.
What is the firmware version? Is this the latest one?
Hello everyone!
I'm trying to add InfiniBand cards to our working OpenStack Liberty installation (to create a backup network). We are already using Ethernet in a provider flat network, but want a second InfiniBand interface where the compute nodes can talk to the backup server.
I have SR-IOV working and the PCI passthrough configured. I followed this document: https://wiki.openstack.org/wiki/Mellanox-Neutron-Liberty-Redhat-InfiniBand
When I try to start the neutron-mlnx-agent on the compute node, I get an error and it stops with
Traceback (most recent call last):
File "/usr/bin/neutron-mlnx-agent", line 6, in <module>
from neutron.cmd.eventlet.plugins.mlnx_neutron_agent import main
File "/usr/lib/python2.7/site-packages/neutron/cmd/eventlet/plugins/mlnx_neutron_agent.py", line 15, in <module>
from neutron.plugins.ml2.drivers.mlnx.agent import eswitch_neutron_agent
File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/mlnx/agent/eswitch_neutron_agent.py", line 19, in <module>
from networking_mlnx.plugins.ml2.drivers.mlnx.agent import (
ImportError: No module named networking_mlnx.plugins.ml2.drivers.mlnx.agent
When I look for /usr/lib/python2.7/site-packages/networking_mlnx, the directory isn't there.
After some research, I realized it comes from python-networking-mlnx, which isn't in the CentOS 7 OpenStack Liberty repo (http://mirror.centos.org/centos/7/cloud/x86_64/openstack-liberty/).
One thing to note: at the beginning of the document specified above, it says to add a repo that no longer exists, i.e. http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
Is there an updated document for the Liberty release with Mellanox? Which repo is python-networking-mlnx in?
I did add the Mellanox repo, but it is not in there either (Index of /repository/solutions/openstack/liberty/redhat/7).
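For what it's worth, this is how I confirm the module is missing, and the package does exist on PyPI, though I haven't verified which release line matches Liberty:
$ python -c "import networking_mlnx"   # currently fails with ImportError
$ pip install networking-mlnx          # untested here; a Liberty-compatible version would still need to be pinned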
Any ideas?
Try to reset the driver configuration and then try again
# mlxconfig -d <dev> reset
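Afterwards you can confirm the current settings with a query:
# mlxconfig -d <dev> query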
Hi Oskar,
As discussed by mail, there is no support for Real Time kernel in Mellanox OFED right now.
Thanks and regards,
~Martijn
I have tried this command, but it doesn't solve the problem.
Firmware version: 2.35.5100
Hi Community.
I have a tender requirement specification with 3 features that I can't find information about for either MLNX-OS or Cumulus OS.
Is anyone able to help? I would like to bring in Mellanox switches for this tender project, but I have to comply with these features first.
Thank you.
1) Pause frames (priority flow control [PFC] and IEEE 802.3x)
2) Private VLANs and promiscuous only on uplinks
3) 8000 entries of local multicast replication
Ranith Japa,
SET is the new alternative NIC Teaming solution in Windows Server 2016. The linked article's section "SET Compatibility with Windows Server Networking Technologies" lists which networking technologies SET is and is not compatible with in Windows Server 2016. You can find more information about it here - Remote Direct Memory Access (RDMA) and Switch Embedded Teaming (SET)
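For completeness, a minimal sketch of creating a SET team in PowerShell (the switch and adapter names are placeholders):
PS> New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true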
What is the PSID of the card?
What is the version of mlxconfig you are using?
Can he run:
flint -d <device> dc
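For reference, those details can be collected with (device is a placeholder):
# flint -d <device> q      # the PSID appears in the query output
# mlxconfig --version      # prints the mlxconfig/mstflint version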
Thank you for mentioning this. I had no idea and struggled with this for a long time. I forgot one time to bind the Mellanox card to the UIO driver, and suddenly it worked! I wish that had been in the documentation somewhere.
Thanks all yes that worked.
Thank you for the suggestion. We will put more effort into improving the documentation. Meanwhile, please refer to the following:
1) DPDK.org - DPDK doc
2) Mellanox.com - the LTS (Long-Term Support) version is released here. We will move to MLNX_DPDK_16.11_1.0 this month.
The Quick Start Guide is still applicable to 16.11 and the coming 17.02.
Thanks,
Yongseok
Hi,
I am curious if any Spectrum switch offerings support Network Address Translation (NAT) of any kind (source, static, destination, etc., for IPv4)?
Also, does it support Ethernet VPN (EVPN) of any kind (VXLAN, MBGP, etc.)?
A link to a list of supported RFCs for this chip's implementation in MLNX-OS would suffice as well.
I am presently looking at Cumulus's documentation on my own.
Thanks!
Trying to use dpdk-pdump to capture packets:
# dpdk-pdump -- --pdump 'port=0,queue=*,rx-dev=/tmp/a'
EAL: Detected 12 lcore(s)
EAL: Probing VFIO support...
EAL: PCI device 0000:08:00.0 on NUMA socket 0
EAL: probe driver: 15b3:1015 net_mlx5
PMD: mlx5.c:419: mlx5_pci_probe(): PCI information matches, using device "mlx5_0" (SR-IOV: false, MPS: true)
PMD: mlx5.c:442: mlx5_pci_probe(): 1 port(s) detected
PMD: mlx5.c:590: mlx5_pci_probe(): port 1 MAC address is 24:8a:07:8b:25:30
PMD: mlx5.c:638: mlx5_pci_probe(): no private data for port 0
EAL: Error - exiting with code: 1
Cause: Requested device 0000:08:00.0 cannot be used
0000:08:00.0 is one of the Mellanox 10G NICs [the first two are the ports of a 10G dual-port card]:
Network devices using kernel driver
===================================
0000:08:00.0 'MT27710 Family [ConnectX-4 Lx]' if=enp8s0f0 drv=mlx5_core unused=igb_uio
0000:08:00.1 'MT27710 Family [ConnectX-4 Lx]' if=enp8s0f1 drv=mlx5_core unused=igb_uio
0000:0a:00.0 'MT27710 Family [ConnectX-4 Lx]' if=enp10s0 drv=mlx5_core unused=igb_uio
testpmd also causes a segmentation fault.
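For reference, my understanding from the pdump docs is that dpdk-pdump runs as a secondary process and attaches to an already-running primary application, so the intended flow is roughly this (untested here, since testpmd itself crashes for me):
# terminal 1: primary DPDK process owning the port
$ testpmd -w 0000:08:00.0 -- -i
# terminal 2: secondary process capturing from port 0 of the primary
$ dpdk-pdump -- --pdump 'port=0,queue=*,rx-dev=/tmp/a.pcap'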
RoCE/RDMA is a network communication protocol, so it can be used in a B2B (back-to-back) configuration or over a switch connection.
As DCB relates to Ethernet networking in data center environments, you'd probably need to use a switch to share more than one cluster node on the network... but basically it is feasible in a B2B topology as well.
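A simple way to sanity-check RDMA over a back-to-back link is a perftest run between the two nodes (device name and address below are placeholders):
$ ib_send_bw -d mlx5_0 -R               # on the server node
$ ib_send_bw -d mlx5_0 -R <server_ip>   # on the client node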
Have any of you tried the "esxcli rdma iser add" command? I tried it on one host; it didn't generate any output, but this was logged in the dmesg log:
2017-01-13T08:05:35.347Z cpu17:67870 opID=3f4390b7)World: 12230: VC opID esxcli-f2-cc96 maps to vmkernel opID 3f4390b7
2017-01-13T08:05:35.347Z cpu17:67870 opID=3f4390b7)Device: 1320: Registered device: 0x43044f391040 logical#vmkernel#com.vmware.iser0 com.vmware.iser (parent=0x1f4943044f3912b0)
I'm using native drivers on a clean ESXi 6.5 install, with ConnectX-3 10GbE NICs (MT27520). My hope is that there are plans to add native iSER support, but that might just be wishful thinking?
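In case it helps anyone else, the follow-up checks I ran were whether the RDMA device and an extra iSER vmhba actually show up after the add command:
# esxcli rdma device list       # lists RDMA-capable uplinks seen by the vmkernel
# esxcli iscsi adapter list     # an iSER adapter should appear as an additional vmhba if the add succeeded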