Channel: Mellanox Interconnect Community: Message List

Re: nvidia_peer_memory-1.0 build fails on CentOS 6.8


We are successfully using nvidia_peer_memory version 1.0 on a system running Scientific Linux 6.8 with the patch pasted below. The build problems you experienced stem from the fact that the compilation picks up the printk header from the Mellanox OFED compat layer without the compat/config.h macros being defined.

 

Best regards,

Dorian

 

 

From f653387ae914271e9fd639a00af1c02daf8560e3 Mon Sep 17 00:00:00 2001

From: []

Date: Tue, 23 Aug 2016 13:58:56 +0200

Subject: [PATCH 1/2] Update Makefile

 

Take the defines from compat/config.h and define them on the

command line. This fixes a build problem with the latest RHEL6

kernels.

---

Makefile | 5 +++--

1 file changed, 3 insertions(+), 2 deletions(-)

 

diff --git a/Makefile b/Makefile

index c2d6a29..929a296 100644

--- a/Makefile

+++ b/Makefile

@@ -1,8 +1,9 @@

obj-m += nv_peer_mem.o

 

-OFA_KERNEL=$(shell (test -d /usr/src/ofa_kernel/default && echo /usr/src/ofa_kernel/default) || (test -d /var/lib/dkms/mlnx-ofed-kernel/ && ls -d /var/lib/dkms/mlnx-ofed-kernel/*/build))

+OFA_KERNEL = /usr/src/ofa_kernel/default

+DEFINES    = $(shell /bin/cat $(OFA_KERNEL)/compat/config.h | grep '\#define' | sed 's/\#define /-D/g' | sed 's/ /=/g' | tr '\n' ' ')

 

-EXTRA_CFLAGS +=-I$(OFA_KERNEL)/include/ -I$(OFA_KERNEL)/include/rdma

+EXTRA_CFLAGS +=-I$(OFA_KERNEL)/include/ -I$(OFA_KERNEL)/include/rdma $(DEFINES)

PWD  := $(shell pwd)

KVER := $(shell uname -r)

MODULES_DIR := /lib/modules/$(KVER)

--

2.7.4
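
For reference, here is a quick way to check what the new DEFINES variable expands to on a given system before rebuilding. This is only a sketch and assumes MLNX_OFED installed its compat layer under /usr/src/ofa_kernel/default, as the patch does:

# print the -D flags that the patched Makefile will add to EXTRA_CFLAGS
grep '#define' /usr/src/ofa_kernel/default/compat/config.h | sed 's/#define /-D/g' | sed 's/ /=/g' | tr '\n' ' '; echo

# then rebuild and reinstall the module as usual
make && sudo make install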


Re: mlnx_tune does not detect the BIOS I/O non-posted prefetch settings?


Hi Tal,

 

My team and I were discussing which profile to use for our usage scenarios. A team member asked, "Why don't we just try different profiles and test them out?" I replied, "Well, based on my review of the code, I didn't see a way to restore the pre-tune system configuration, so we may not want to do that."

 

Can an "restore" action be added to mlnx_tune?

 

Best,

 

Chin

Mellanox eSwitchd issue on Openstack Kilo?


I have an SR-IOV-enabled installation of OpenStack Kilo with RHEL 7.1 compute nodes.

I followed this document (Mellanox-Neutron-Kilo-Redhat-InfiniBand - OpenStack), and it seemed to work partially.

I can see ib0 attached to the VM (I log in from the console and run the commands "lspci" and "ip link").

But ib0 doesn't link up.

I can make it work by using libvirt directly (not Nova), so I think this is a problem with Nova, eswitchd, or neutron-mlnx-agent.

 

Error logs are here.

On the compute node

1. dmesg of VM

The message "ib0: multicast join failed for ff12:401b:8000:0000:0000:0000:ffff:ffff, status -22" appears many times after this.

I find it strange that "Bringing up interface ib0:  [  OK  ]" is displayed without "ADDRCONF(NETDEV_CHANGE): ib0: link becomes ready".

 

mlx4_core: Mellanox ConnectX core driver v3.1-1.0.3 (29 Sep 2015)

mlx4_core: Initializing 0000:00:05.0

mlx4_core 0000:00:05.0: Detected virtual function - running in slave mode

mlx4_core 0000:00:05.0: Sending reset

mlx4_core 0000:00:05.0: Sending vhcr0

mlx4_core 0000:00:05.0: Requested number of MACs is too much for port 1, reducing to 64

mlx4_core 0000:00:05.0: HCA minimum page size:512

mlx4_core 0000:00:05.0: Timestamping is not supported in slave mode

mlx4_core: device is working in RoCE mode: Roce V1

mlx4_core: gid_type 1 for UD QPs is not supported by the devicegid_type 0 was chosen instead

mlx4_core: UD QP Gid type is: V1

NET: Registered protocol family 10

lo: Disabled Privacy Extensions

<mlx4_ib> mlx4_ib_add: mlx4_ib: Mellanox ConnectX InfiniBand driver v3.1-1.0.3 (29 Sep 2015)

<mlx4_ib> check_flow_steering_support: Device managed flow steering is unavailable for IB port in multifunction env.

mlx4_core 0000:00:05.0: mlx4_ib_add: allocated counter index 6 for port 1

mlx4_core 0000:00:05.0: mlx4_ib_add: allocated counter index 7 for port 2

microcode: CPU0 sig=0x206a1, pf=0x1, revision=0x1

platform microcode: firmware: requesting intel-ucode/06-2a-01

microcode: CPU1 sig=0x206a1, pf=0x1, revision=0x1

platform microcode: firmware: requesting intel-ucode/06-2a-01

Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba

[  OK  ]

mlx4_core 0000:00:05.0: mlx4_ib: multi-function enabled

mlx4_core 0000:00:05.0: mlx4_ib: operating in qp1 tunnel mode

knem 1.1.2.90mlnx: initialized

Setting hostname cbv-lsf4.novalocal:  [  OK  ]

Setting up Logical Volume Management:   7 logical volume(s) in volume group "rootvg" now active

[  OK  ]

Checking filesystems

Checking all file systems.

[/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/mapper/rootvg-lv_root

/dev/mapper/rootvg-lv_root: clean, 134671/4915200 files, 2194674/19660800 blocks

Entering non-interactive startup

Calling the system activity data collector (sadc)...

Starting monitoring for VG rootvg:   7 logical volume(s) in volume group "rootvg" monitored

[  OK  ]

pps_core: LinuxPPS API ver. 1 registered

pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>

PTP clock support registered

mlx4_en: Mellanox ConnectX HCA Ethernet driver v3.1-1.0.3 (29 Sep 2015)

card: mlx4_0, QP: 0xa78, inline size: 120

Default coalesing params for mtu:4092 - rx_frames:88 rx_usecs:16

card: mlx4_0, QP: 0xa80, inline size: 120

Default coalesing params for mtu:4092 - rx_frames:88 rx_usecs:16

Loading HCA driver and Access Layer:[  OK  ]

NOHZ: local_softirq_pending 08

ADDRCONF(NETDEV_UP): ib0: link is not ready

ib0: multicast join failed for ff12:401b:8000:0000:0000:0000:ffff:ffff, status -22

ip6tables: No config file.[WARNING]

Bringing up loopback interface:  [  OK  ]

Bringing up interface eth0:

Determining IP information for eth0...ib0: multicast join failed for ff12:401b:8000:0000:0000:0000:ffff:ffff, status -22

done.

[  OK  ]

Bringing up interface ib0:  [  OK  ]

 

2. /var/log/neutron/eswitchd

2016-08-25 13:21:35,989 DEBUG eswitchd [-] Handling message - {u'action': u'get_vnics', u'fabric': u'*'}

2016-08-25 13:21:35,989 DEBUG eswitchd [-] fabrics =['default']

2016-08-25 13:21:35,989 DEBUG eswitchd [-] vnics are {u'fa:16:3e:7d:d7:87': {'mac': u'fa:16:3e:7d:d7:87', 'device_id': u'afab526e-da36-44ee-8f5e-8743451bc8a4'}, 'fa:16:3e:d8:dd:a3': {'mac': 'fa:16:3e:d8:dd:a3', 'device_id': '7c5f4c1a-1492-4087-8eee-c54b91cc733b'}, '1a:5c:90:77:4f:88': {'mac': '1a:5c:90:77:4f:88', 'device_id': '0e3e7d62-b88f-4e9b-b685-116280c87f5a'}, u'fa:16:3e:4f:46:de': {'mac': u'fa:16:3e:4f:46:de', 'device_id': u'7b7e8f69-438c-4ec7-95fe-0d59f939fd19'}, 'fe:66:d7:3e:cb:ca': {'mac': 'fe:66:d7:3e:cb:ca', 'device_id': '7d4a002a-cfab-4189-9bff-b656c863592a'}}

2016-08-25 13:21:37,989 DEBUG eswitchd [-] Handling message - {u'action': u'get_vnics', u'fabric': u'*'}

2016-08-25 13:21:37,989 DEBUG eswitchd [-] fabrics =['default']

2016-08-25 13:21:37,990 DEBUG eswitchd [-] vnics are {u'fa:16:3e:7d:d7:87': {'mac': u'fa:16:3e:7d:d7:87', 'device_id': u'afab526e-da36-44ee-8f5e-8743451bc8a4'}, 'fa:16:3e:d8:dd:a3': {'mac': 'fa:16:3e:d8:dd:a3', 'device_id': '7c5f4c1a-1492-4087-8eee-c54b91cc733b'}, '1a:5c:90:77:4f:88': {'mac': '1a:5c:90:77:4f:88', 'device_id': '0e3e7d62-b88f-4e9b-b685-116280c87f5a'}, u'fa:16:3e:4f:46:de': {'mac': u'fa:16:3e:4f:46:de', 'device_id': u'7b7e8f69-438c-4ec7-95fe-0d59f939fd19'}, 'fe:66:d7:3e:cb:ca': {'mac': 'fe:66:d7:3e:cb:ca', 'device_id': '7d4a002a-cfab-4189-9bff-b656c863592a'}}

 

3. /var/log/neutron/mlnx-agent.log

2016-08-25 13:20:16.230 8881 DEBUG oslo_messaging._drivers.amqp [-] UNIQUE_ID is b3adf08a1ac24b8d83eee8f48f0e47aa. _add_unique_id /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:264

2016-08-25 13:20:17.973 8881 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-e22856b5-392a-4794-ac1e-b34fdf0eb9e1 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-08-25 13:20:19.974 8881 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-e22856b5-392a-4794-ac1e-b34fdf0eb9e1 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-08-25 13:20:21.974 8881 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-e22856b5-392a-4794-ac1e-b34fdf0eb9e1 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

 

 

I think this is a similar issue to this post, but I can't figure out what to do next.

Mellanox eSwitchd issue on Openstack Havana nova-compute

Could you share any ideas?

Re: nvidia_peer_memory-1.0 build fails on CentOS 6.8


Thank you! This worked for me as well.

Additionally, I had to define the DEPMOD variable with 'DEPMOD=$(shell which depmod)' for the install target.
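
For anyone else hitting this: the same effect can also be achieved without editing the Makefile by overriding the variable on the make command line (a sketch, assuming the install target expands $(DEPMOD)):

make install DEPMOD="$(which depmod)"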

Re: 40 GbE External Metallic Loopback Plug?


Hi David,

 

We don't manufacture 40GbE loopback cables, and we don't have a recommendation for any of these cables.

 

Thanks

Khwaja

Re: Mellanox eSwitchd issue on Openstack Kilo?


Hi Muneyoshi,

 

I see that you are using an EOL version of OpenStack. Is it possible for you to reproduce this with the "Liberty" or "Mitaka" versions?

 

Thanks and regards,

~Martijn

 

Re: Mellanox eSwitchd issue on Openstack Kilo?


Hi Martijn,

 

Unfortunately, I can't upgrade OpenStack because of restrictions from middleware associated with it.

Is there any way to update eswitchd for the Kilo release? I see that eswitchd has been updated in the Liberty (and later) releases, but not for Kilo.

Re: Mellanox eSwitchd issue on Openstack Kilo?


Hi Muneyoshi,

 

When you followed the installation wiki document, did you also configure the Subnet Manager as described in the "Manual OpenSM Configuration" section? Can you also share which HCA you are using?

 

Thanks and regards,

~Martijn


Re: WinOF v5.22 and Platform MPI problem on ConnectX-3 cards


Correct. Since v2.1, Mellanox WinOF supports only MS-MPI and not Platform MPI; the reason stems from Microsoft compatibility requirements.

Re: Mellanox eSwitchd issue on Openstack Kilo?


Hi Martijn,

 

>When you followed the installation wiki document, did you also configure the Subnet Manager as described in the "Manual OpenSM Configuration" section?

Yes, I copied the settings from the document as shown below, and I restarted opensmd.

[root@xxxx neutron]# cat /etc/opensm/partitions.conf

management=0x7fff,ipoib, sl=0, defmember=full : ALL, ALL_SWITCHES=full,SELF=full;

vlan1=0x1, ipoib, sl=0, defmember=full : ALL;

vlan2=0x2, ipoib, sl=0, defmember=full : ALL;

vlan3=0x3, ipoib, sl=0, defmember=full : ALL;

vlan4=0x4, ipoib, sl=0, defmember=full : ALL;

vlan5=0x5, ipoib, sl=0, defmember=full : ALL;

vlan6=0x6, ipoib, sl=0, defmember=full : ALL;

[root@xxxx neutron]# cat /etc/opensm/opensm.conf

allow_both_pkeys TRUE

------------------------------------------------------------------

The number of VLANs is the same as the number of SR-IOV virtual functions.

 

>Can you also share which HCA you are using?

Sure. Here is the output of ibstat and other hardware information commands.

----------------------------------------------

[root@xxxx neutron]# ibstat

CA 'mlx4_0'

        CA type: MT4103

        Number of ports: 2

        Firmware version: 2.34.5000

        Hardware version: 0

        Node GUID: 0x7cfe9003009b8ae0

        System image GUID: 0x7cfe9003009b8ae3

        Port 1:

                State: Active

                Physical state: LinkUp

                Rate: 56

                Base lid: 63

                LMC: 0

                SM lid: 24

                Capability mask: 0x02514868

                Port GUID: 0x7cfe9003009b8ae1

                Link layer: InfiniBand

        Port 2:

                State: Down

                Physical state: Polling

                Rate: 10

                Base lid: 0

                LMC: 0

                SM lid: 0

                Capability mask: 0x02514868

                Port GUID: 0x7cfe9003009b8ae2

                Link layer: InfiniBand

[root@xxxx neutron]# mst status

MST modules:

------------

    MST PCI module loaded

    MST PCI configuration module loaded

 

MST devices:

------------

/dev/mst/mt4103_pciconf0         - PCI configuration cycles access.

                                   domain:bus:dev.fn=0000:06:00.0 addr.reg=88 data.reg=92

                                   Chip revision is: 00

/dev/mst/mt4103_pci_cr0          - PCI direct access.

                                   domain:bus:dev.fn=0000:06:00.0 bar=0x90700000 size=0x100000

                                   Chip revision is: 00

[root@xxxx neutron]# flint -d /dev/mst/mt4103_pci_cr0 query

Image type:          FS2

FW Version:          2.34.5000

FW Release Date:     28.7.2015

Product Version:     02.34.50.00

Rom Info:            type=PXE version=3.4.521 devid=4103

                     type=UEFI version=14.7.24

Device ID:           4103

Description:         Node             Port1            Port2            Sys image

GUIDs:               7cfe9003009b8ae0 7cfe9003009b8ae1 7cfe9003009b8ae2 7cfe9003009b8ae3

MACs:                                     7cfe909b8ae1     7cfe909b8ae2

VSD:

PSID:                IBM2000110021

[root@xxxx neutron]# lspci |grep Mell

06:00.0 Network controller: Mellanox Technologies MT27520 Family [ConnectX-3 Pro]

06:00.1 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]

06:00.2 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]

06:00.3 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]

06:00.4 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]

06:00.5 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]

06:00.6 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]

--------------------------------

 

It might be a firmware problem (the firmware is not fully up to date), but I need a convincing reason to update it because the firmware version is fixed by a rule of our project. I can make an exception if there is an understandable reason, though.

 

Thank you and best regards,

Muneyoshi

Re: Is srp supported in RHEL7.2 PPC64 ?

Re: vSphere 6.0 PFC configuration for Ethernet iSER with Ethernet Driver 1.9.10.5


If you have configured the mlx4_en.conf file properly as suggested below, then you should be able to use PFC properly:

+++++++++++++++++++++++++++++++

/etc/modprobe.d/mlx4_en.conf:

options mlx4_en pfctx=0x08 pfcrx=0x08

++++++++++++++++++++++++++++++++++++

Also try restarting the driver after the configuration change by running /etc/init.d/openibd restart.

Beyond that, if logging into the target LUN still fails, re-check that the iSER parameters on both the initiator and the target are configured as described in the Mellanox community posts.

 

Re: Mellanox eSwitchd issue on Openstack Kilo?


Hi Muneyoshi,

 

Can you provide us with the full eswitchd.log?

 

Thanks and regards,

~Martijn

Re: vSphere 6.0 PFC configuration for Ethernet iSER with Ethernet Driver 1.9.10.5


This environment is VMware ESXi, not a Linux one.

The mlx4_en parameters must be given integer values, not 0x08!
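
In case it helps, this is roughly how the module parameters could be set on the ESXi host itself. A sketch only; the module name and the decimal value 8 (the same bitmask as 0x08) are assumptions to verify against the ESXi driver documentation:

# check the actual module name first
esxcli system module list | grep mlx
# set PFC for priority 3 using decimal values
esxcli system module parameters set -m mlx4_en -p "pfctx=8 pfcrx=8"
# verify, then reboot the host for the change to take effect
esxcli system module parameters list -m mlx4_en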

 

Testing MCX455A bandwidth between Dell servers


I am testing MCX455A bandwidth between Dell servers using the commands:

Server: ib_write_bw -d mlx5_0 -i 1 -a -F
and

Client: ib_write_bw -d mlx5_0 -i 1 -a -F <server address>

The results look fine until # bytes = 65536.

Then I get the message:

mlx5: usb1 : got completion with errors

00000000 00000000 00000000 00000000

00000000 00000000 00000000 00000000

00000000 00000000 00000000 00000000

00000000 00008813 08000029 40807dd3

Problems with warm up

 

This test used to run to completion, and I don't think I've changed the configuration. I have the same problem with multiple cards (I have 13).


ibv_post_send is slow in ping-pong


I tried to measure how much time it takes for each ibv_post_send (IB_WR_RDMA_WRITE) in the default rping program.

I used clock_gettime to measure it, and the results show that each ibv_post_send call takes around 170~180 nanoseconds. I expected it to be faster. Does anyone have ideas on how to tune this? What could the contributing factors be? Many thanks in advance.

Re: vSphere 6.0 PFC configuration for Ethernet iSER with Ethernet Driver 1.9.10.5

Re: Mellanox eSwitchd issue on Openstack Kilo?


Hi Martijn,

Sorry for the late reply; here is the eswitchd.log from the compute node.

I think mlnx-agent.log will be helpful as well, so I have added it too.

----------------------------

1.eswitchd.log

----------------------------

2016-09-01 16:24:37,419 DEBUG eswitchd [-] vnics are {u'fa:16:3e:a2:5b:b4': {'mac': u'fa:16:3e:a2:5b:b4', 'device_id': u'c510b038-ef87-4030-a4f0-4f996b181855'}}

2016-09-01 16:24:39,419 DEBUG eswitchd [-] Handling message - {u'action': u'get_vnics', u'fabric': u'*'}

2016-09-01 16:24:39,420 DEBUG eswitchd [-] fabrics =['default']

2016-09-01 16:24:39,420 DEBUG eswitchd [-] vnics are {u'fa:16:3e:a2:5b:b4': {'mac': u'fa:16:3e:a2:5b:b4', 'device_id': u'c510b038-ef87-4030-a4f0-4f996b181855'}}

2016-09-01 16:24:41,420 DEBUG eswitchd [-] Handling message - {u'action': u'get_vnics', u'fabric': u'*'}

2016-09-01 16:24:41,420 DEBUG eswitchd [-] fabrics =['default']

2016-09-01 16:24:41,420 DEBUG eswitchd [-] vnics are {u'fa:16:3e:a2:5b:b4': {'mac': u'fa:16:3e:a2:5b:b4', 'device_id': u'c510b038-ef87-4030-a4f0-4f996b181855'}}

2016-09-01 16:24:43,421 DEBUG eswitchd [-] Handling message - {u'action': u'get_vnics', u'fabric': u'*'}

2016-09-01 16:24:43,421 DEBUG eswitchd [-] fabrics =['default']

2016-09-01 16:24:43,421 DEBUG eswitchd [-] vnics are {u'fa:16:3e:a2:5b:b4': {'mac': u'fa:16:3e:a2:5b:b4', 'device_id': u'c510b038-ef87-4030-a4f0-4f996b181855'}}

2016-09-01 16:24:43,421 DEBUG eswitchd [-] Resync devices

2016-09-01 16:24:45,421 DEBUG eswitchd [-] Handling message - {u'action': u'get_vnics', u'fabric': u'*'}

2016-09-01 16:24:45,422 DEBUG eswitchd [-] fabrics =['default']

2016-09-01 16:24:45,422 DEBUG eswitchd [-] vnics are {u'fa:16:3e:a2:5b:b4': {'mac': u'fa:16:3e:a2:5b:b4', 'device_id': u'c510b038-ef87-4030-a4f0-4f996b181855'}}

2016-09-01 16:24:47,422 DEBUG eswitchd [-] Handling message - {u'action': u'get_vnics', u'fabric': u'*'}

2016-09-01 16:24:47,422 DEBUG eswitchd [-] fabrics =['default']

2016-09-01 16:24:47,422 DEBUG eswitchd [-] vnics are {u'fa:16:3e:a2:5b:b4': {'mac': u'fa:16:3e:a2:5b:b4', 'device_id': u'c510b038-ef87-4030-a4f0-4f996b181855'}}

2016-09-01 16:24:49,423 DEBUG eswitchd [-] Handling message - {u'action': u'get_vnics', u'fabric': u'*'}

2016-09-01 16:24:49,423 DEBUG eswitchd [-] fabrics =['default']

2016-09-01 16:24:49,423 DEBUG eswitchd [-] vnics are {u'fa:16:3e:a2:5b:b4': {'mac': u'fa:16:3e:a2:5b:b4', 'device_id': u'c510b038-ef87-4030-a4f0-4f996b181855'}}

--------------------------------------------

2.mlnx-agent.log

--------------------------------------------

2016-09-01 16:29:04.885 19895 DEBUG oslo_messaging._drivers.amqp [-] unpacked context: {u'read_deleted': u'no', u'project_name': u'service', u'user_id': u'a76a50c916be47d5bc42aa900a3d2f52', u'roles': [u'_member_', u'admin'], u'tenant_id': u'fb93c4aa4484455eac338a3989feedca', u'auth_token': u'***', u'timestamp': u'2016-09-01 07:29:04.829415', u'is_admin': True, u'user': u'a76a50c916be47d5bc42aa900a3d2f52', u'request_id': u'req-0a3f1dc5-5bd7-405a-abe5-822bef58fd0a', u'tenant_name': u'service', u'project_id': u'fb93c4aa4484455eac338a3989feedca', u'user_name': u'neutron', u'tenant': u'fb93c4aa4484455eac338a3989feedca'} unpack_context /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:209

2016-09-01 16:29:04.887 19895 DEBUG neutron.agent.securitygroups_rpc [req-0a3f1dc5-5bd7-405a-abe5-822bef58fd0a ] Security group member updated on remote: [u'988dc170-b1de-4614-895b-1a423ec8faf4'] security_groups_member_updated /usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py:150

2016-09-01 16:29:04.888 19895 INFO neutron.agent.securitygroups_rpc [req-0a3f1dc5-5bd7-405a-abe5-822bef58fd0a ] Security group member updated [u'988dc170-b1de-4614-895b-1a423ec8faf4']

2016-09-01 16:29:05.011 19895 DEBUG oslo_messaging._drivers.amqp [-] unpacked context: {u'read_deleted': u'no', u'project_name': u'service', u'user_id': u'a76a50c916be47d5bc42aa900a3d2f52', u'roles': [u'_member_', u'admin'], u'tenant_id': u'fb93c4aa4484455eac338a3989feedca', u'auth_token': u'***', u'timestamp': u'2016-09-01 07:29:04.960239', u'is_admin': True, u'user': u'a76a50c916be47d5bc42aa900a3d2f52', u'request_id': u'req-19755025-f52a-4a5d-bdad-9b09a93759ef', u'tenant_name': u'service', u'project_id': u'fb93c4aa4484455eac338a3989feedca', u'user_name': u'neutron', u'tenant': u'fb93c4aa4484455eac338a3989feedca'} unpack_context /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:209

2016-09-01 16:29:05.012 19895 DEBUG neutron.agent.securitygroups_rpc [req-19755025-f52a-4a5d-bdad-9b09a93759ef ] Security group member updated on remote: [u'988dc170-b1de-4614-895b-1a423ec8faf4'] security_groups_member_updated /usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py:150

2016-09-01 16:29:05.013 19895 INFO neutron.agent.securitygroups_rpc [req-19755025-f52a-4a5d-bdad-9b09a93759ef ] Security group member updated [u'988dc170-b1de-4614-895b-1a423ec8faf4']

2016-09-01 16:29:05.479 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:06.430 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [-] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:06.433 19895 DEBUG oslo_messaging._drivers.amqp [-] UNIQUE_ID is 369d8286fe75477d83d700f4d991b7b9. _add_unique_id /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:264

2016-09-01 16:29:07.480 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:07.482 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.mlnx_eswitch_neutron_agent [req-63368208-c269-4901-839e-d4a9697faa33 ] Starting to process devices in:{'current': set([u'fa:16:3e:15:72:31']), 'removed': set([]), 'added': set([u'fa:16:3e:15:72:31']), 'updated': set([])} run /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/mlnx_eswitch_neutron_agent.py:374

2016-09-01 16:29:07.483 19895 DEBUG oslo_messaging._drivers.amqpdriver [req-63368208-c269-4901-839e-d4a9697faa33 ] MSG_ID is cd37e22ce92d43d6995ca428f4eefe0e _send /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:311

2016-09-01 16:29:07.483 19895 DEBUG oslo_messaging._drivers.amqp [req-63368208-c269-4901-839e-d4a9697faa33 ] UNIQUE_ID is 4a6c7c1ce95f48dbb91e47fc945503cb. _add_unique_id /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:264

2016-09-01 16:29:07.557 19895 INFO networking_mlnx.plugins.ml2.drivers.mlnx.agent.mlnx_eswitch_neutron_agent [req-63368208-c269-4901-839e-d4a9697faa33 ] Adding or updating port with mac fa:16:3e:15:72:31

2016-09-01 16:29:07.558 19895 INFO networking_mlnx.plugins.ml2.drivers.mlnx.agent.mlnx_eswitch_neutron_agent [req-63368208-c269-4901-839e-d4a9697faa33 ] Port fa:16:3e:15:72:31 updated

2016-09-01 16:29:07.558 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.mlnx_eswitch_neutron_agent [req-63368208-c269-4901-839e-d4a9697faa33 ] Device details {u'profile': {}, u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': u'c00fd123-c176-492c-a7f4-97d41db325ce', u'segmentation_id': 2, u'device_owner': u'compute:nova', u'physical_network': u'default', u'mac_address': u'fa:16:3e:15:72:31', u'device': u'fa:16:3e:15:72:31', u'port_security_enabled': True, u'port_id': u'f8cad055-09ea-4d72-99be-2a992af3843c', u'fixed_ips': [{u'subnet_id': u'eaec5013-b11d-48e8-9bc9-9bc6c78d8286', u'ip_address': u'10.35.6.32'}], u'network_type': u'vlan'} treat_devices_added_or_updated /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/mlnx_eswitch_neutron_agent.py:307

2016-09-01 16:29:07.558 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:07.560 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.mlnx_eswitch_neutron_agent [req-63368208-c269-4901-839e-d4a9697faa33 ] Connecting port f8cad055-09ea-4d72-99be-2a992af3843c port_up /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/mlnx_eswitch_neutron_agent.py:93

2016-09-01 16:29:07.560 19895 INFO networking_mlnx.plugins.ml2.drivers.mlnx.agent.mlnx_eswitch_neutron_agent [req-63368208-c269-4901-839e-d4a9697faa33 ] Binding Segmentation ID 2 to eSwitch for vNIC mac_address fa:16:3e:15:72:31

2016-09-01 16:29:07.561 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] Set Vlan  2 on Port fa:16:3e:15:72:31 on Fabric default set_port_vlan_id /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:93

2016-09-01 16:29:07.610 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] Port Up for fa:16:3e:15:72:31 on fabric default port_up /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:112

2016-09-01 16:29:07.611 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.mlnx_eswitch_neutron_agent [req-63368208-c269-4901-839e-d4a9697faa33 ] Setting status for fa:16:3e:15:72:31 to UP treat_devices_added_or_updated /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/mlnx_eswitch_neutron_agent.py:316

2016-09-01 16:29:07.612 19895 DEBUG oslo_messaging._drivers.amqpdriver [req-63368208-c269-4901-839e-d4a9697faa33 ] MSG_ID is 6411955127c94755a19ba2b04d04f278 _send /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:311

2016-09-01 16:29:07.612 19895 DEBUG oslo_messaging._drivers.amqp [req-63368208-c269-4901-839e-d4a9697faa33 ] UNIQUE_ID is 7738464db5c2477284cacd25000320e4. _add_unique_id /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:264

2016-09-01 16:29:09.480 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:11.480 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:13.480 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:15.481 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:17.481 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:19.481 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:21.481 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:23.481 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:25.481 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:27.481 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:29.481 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:31.482 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:33.482 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:35.482 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:36.430 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [-] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:36.433 19895 DEBUG oslo_messaging._drivers.amqp [-] UNIQUE_ID is d5c58c1faa8c400cb9c731da1034906e. _add_unique_id /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:264

2016-09-01 16:29:37.482 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:39.482 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:41.482 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

 

Thank you and best regards,

Muneyoshi

Re: Qos options and Vlarb table


Hi all,

 

I think I now understand from this topic that IB QoS is really very different from its Ethernet analogue. IB uses two tables together with a limit on the number of entries processed, which significantly increases the flexibility of configuring traffic prioritization for the specified goals.

For me, the best explanation was given in this link - InfiniBand QoS with Lustre ko2iblnd.
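
To make the "two tables" point concrete, this is roughly what the relevant knobs look like in opensm.conf. The values are only illustrative, not a recommendation; see the linked article for how to choose them:

# enable QoS and map SLs to VLs (16 entries)
qos TRUE
qos_sl2vl 0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3
# high-priority VL arbitration table (VL:weight pairs)
qos_vlarb_high 0:64,3:192
# low-priority VL arbitration table
qos_vlarb_low 0:64,1:64,2:64
# limit on high-priority transmission before the low-priority table is serviced
qos_high_limit 4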

QP Context size


I have read a document saying that the ICM (InfiniHost Context Memory) is required by the HCA to store Queue Pair (QP) context, Completion Queue (CQ), and Address Translation Table entries.

Is the QP context size the same for every queue pair?

Or does it vary, for example depending on how many WQEs (allowed outstanding requests) the QP can hold?

Many thanks for your time.


