Channel: Mellanox Interconnect Community: Message List

Please help


Hello, I'm just a newbie. My company has bought an SX6025 and I want to know how to manage it.

A network switch usually has a console port, but this one only has an I2C port. Please tell me what I should do.


Re: Does ubuntu 14.04 inbox-driver support connectx-4 ?


Hi Taiyoung,

Regarding the Ubuntu 14.04 inbox driver and ConnectX-4 EN, I suggest contacting the OS vendor to receive the supported compatibility matrix.
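In the meantime, a quick local check is possible (a sketch only; the 15b3:1013 PCI device ID used below for ConnectX-4 is my assumption, so please verify it against your own lspci output):

# exact vendor:device ID of the installed adapter
lspci -nn | grep -i mellanox

# is the inbox mlx5_core driver present at all?
modinfo mlx5_core | head

# does the inbox driver claim the (assumed) ConnectX-4 device ID 15b3:1013?
grep -i "v000015b3d00001013" /lib/modules/$(uname -r)/modules.alias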

 

Thank you, Karen.

Re: Fiber optic cable?


Thank you for the quick response! Would a fiber optic cable be better than the copper cable in either transfer speed or latency, or are they the same in those characteristics?

Re: Usage of CX4 to QSFP cable for the Application


hi,

 

1. It is possible to change the configuration of the CX3 NIC to accept 10G over 4 lanes.

2. Can you please share the OEM's .ini (using the aforementioned 'flint dc' command)? We will need to change the CX3 NIC's .ini in accordance with the OEM's .ini settings for CX4 communication (3.125 Gbps over copper).
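For reference, a minimal sketch of dumping the current .ini from the device with flint (the /dev/mst/mt4099_pci_cr0 device name below is only an example; 'mst status' will show the real device name on your system):

mst start
mst status
flint -d /dev/mst/mt4099_pci_cr0 dc > cx3_current.ini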

 

Thanks,

Dan

Re: nvidia_peer_memory for Cuda8 ?

Re: Driverdisk for Xenserver 7?

40Gb/s IPoIB only gives 5Gb/s real throughput?!


I really need some expertise here:

 

I have two Windows 10 machines with two MHQH19B-XTR 40 Gbit adapters and a QSFP cable in between. The subnet manager is opensm.

 

The connection should deliver about 32 Gbit/s. In reality I only get about 5 Gbit/s, so clearly something is very wrong.

C:\Program Files\Mellanox\MLNX_VPI\IB\Tools>iblinkinfo
CA: E8400:
      0x0002c903004cdfb1      2    1[  ] ==( 4X          10.0 Gbps Active/  LinkUp)==>       1    1[  ] "IP35" ( )
CA: IP35:
      0x0002c903004ef325      1    1[  ] ==( 4X          10.0 Gbps Active/  LinkUp)==>       2    1[  ] "E8400" ( )
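(Side note on where my ~32 Gbit/s expectation comes from; rough math for the 4X QDR link shown above:)

4 lanes x 10 Gbit/s        = 40 Gbit/s signalling rate
40 Gbit/s x 8/10 (8b/10b)  = 32 Gbit/s of usable data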

 

I tested my IPoIB link with a program called lanbench and with nd_read_bw:

nd_read_bw -a -n 100 -C 169.254.195.189

#qp  #bytes  #iterations  MR [Mmps]   Gb/s   CPU Util.
 0      512       100        0.843    3.45     0.00
 0     1024       100        0.629    5.15     0.00
 0     2048       100        0.313    5.13     0.00
 0     4096       100        0.165    5.39     0.00
 0     8192       100        0.083    5.44     0.00
 0    16384       100        0.042    5.47     0.00
 0    32768       100        0.021    5.47   100.00

...and it stays at 5.47 Gb/s after that, with CPU utilization at 100%.

The processor is an Intel Core i7-4790K, so it should not be at 100%. According to Task Manager, only one core is actively used.

Firmware, drivers, and Windows 10 are all up to date.

 

My goal is to get the fastest possible file sharing between two Windows 10 machines.

What could be the problem here and how do I fix it?

 

Update: I tested with Windows Server 2012 clients to verify, and I still get about 5.5 Gbit/s max.

Maybe someone here has other 40 Gbit adapters; what speeds do you get?

Re: MPI startup():ofa fabric is not available and fallback fabric is not exist


Re: 40Gb/s IPoIB only gives 5Gb/s real throughput?!


Hi Danie,

Could you check what type of PCIe slot your adapters are installed in?
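On Windows, something like this PowerShell one-liner should show the negotiated PCIe link for each adapter (a sketch only; output may differ between driver versions):

Get-NetAdapterHardwareInfo | Format-Table Name, PcieLinkSpeed, PcieLinkWidth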

 

Thanks

Re: Fiber optic cable?


Performance for copper and AOC cables should be the same; the main difference is that copper is cheaper, but it cannot support lengths of more than 7 m.

 

Thanks

“Invalid module format” error while loading nv_peer_mem in CentOS 6.6


Hello everybody,

I have two twin servers with the same hardware (InfiniBand and NVIDIA Tesla) and the same OS (CentOS 6.6, with the 2.6.32-504.el6.x86_64 kernel and drivers).

On host1 everything is working fine as usual, while on host2 I can no longer start this service, because I get this error:

[root@vega2 nvidia_peer_memory-1.0-0]# service nv_peer_mem start
starting... FATAL: Error inserting nv_peer_mem (/lib/modules/2.6.32-504.el6.x86_64/extra/nv_peer_mem.ko): Invalid module format
Failed to load nv_peer_mem

and dmesg says:

nv_p2p_dummy: exports duplicate symbol nvidia_p2p_free_page_table (owned by nvidia)

Note that host2 had been working fine for 2 months, until I rebooted it after the summer holidays. What can be the cause of this error? The main software components didn't change (kernel, NVIDIA drivers, Mellanox drivers) and the hardware is OK. I also tried to repeat the installation procedure, but I get stuck at the module loading step:

[root@vega2 nvidia_peer_memory-1.0-0]# rpm -ivh /root/rpmbuild/RPMS/x86_64/nvidia_peer_memory-1.0-0.x86_64.rpm
Preparing... ########################################### [100%]
1:nvidia_peer_memory ########################################### [100%]
FATAL: Error inserting nv_peer_mem (/lib/modules/2.6.32-504.el6.x86_64/extra/nv_peer_mem.ko): Invalid module format
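(One check that seems relevant here, since "Invalid module format" usually indicates a module built against a different kernel: comparing the module's vermagic with the running kernel, using the paths from above:)

uname -r
modinfo -F vermagic /lib/modules/2.6.32-504.el6.x86_64/extra/nv_peer_mem.ko
modinfo -F vermagic /lib/modules/2.6.32-504.el6.x86_64/kernel/drivers/video/nvidia.ko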

I found this post (http://stackoverflow.com/questions/3454740/what-will-happen-if-two-kernel-module-export-same-symbol) about two kernel modules exporting the same symbol, but why does this second module interfere with nv_peer_mem on host2, while on host1 it does not? Here is the output of the nm commands, exactly the same on both hosts.

[root@vega2 nvidia_peer_memory-1.0-0]# nm /lib/modules/2.6.32-504.el6.x86_64/kernel/drivers/video/nvidia.ko |grep nvidia_p2p_free_page_table
0000000088765bb5 A __crc_nvidia_p2p_free_page_table
0000000000000028 r __kcrctab_nvidia_p2p_free_page_table
000000000000007e r __kstrtab_nvidia_p2p_free_page_table
0000000000000050 r __ksymtab_nvidia_p2p_free_page_table
00000000004bcb10 T nvidia_p2p_free_page_table

[root@vega2 nvidia_peer_memory-1.0-0]# nm /lib/modules/2.6.32-504.el6.x86_64/extra/nv_peer_mem.ko |grep nvidia_p2p_free_page_table 
  U nvidia_p2p_free_page_table

To conclude: the NVIDIA drivers are version 7.5, the nvidia_peer_memory 1.0-0 package was downloaded from the Mellanox site (http://www.mellanox.com/page/products_dyn?product_family=116), and I also tried the new 1.0.1 version with the same result. Moreover, the only packages present on host2 and NOT on host1 are:

lapack-3.2.1-4.el6.x86_64

lapack-devel-3.2.1-4.el6.x86_64

libX11-1.6.0-6.el6.x86_64

libX11-common-1.6.0-6.el6.noarch

libpng-1.2.49-2.el6_7.x86_64

libxcb-1.9.1-3.el6.x86_64

libxml2-2.7.6-21.el6_8.1.x86_64

libxml2-python-2.7.6-21.el6_8.1.x86_64

Could they interfere with the nv_peer_mem service?

 

Thanks in advance for any help.

  Stefano

Re: 40Gb/s IPoIB only gives 5Gb/s real throughput?!


Well, one machine has a PCIe 3.0 x16 slot, and the other one, I think, has a PCIe 1.0 x16 slot.

If it helps, the first one is a Z97 Extreme6 and the second mainboard is an abit IP35 Pro.

PCIe 1.0 x16 should support up to 4 GByte/s according to Wikipedia.
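Rough math on that (assuming the MHQH19B-XTR is a PCIe 2.0 x8 card, which is my understanding):

PCIe 1.0: 2.5 GT/s per lane with 8b/10b   -> ~250 MByte/s per lane
x8 card in that slot: 8 x 250 MByte/s     -> ~2 GByte/s, i.e. roughly a 16 Gbit/s ceiling

So the old slot would cap the link well below 32 Gbit/s, though still above the ~5.5 Gbit/s I am measuring.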

Re: Driverdisk for Xenserver 7?

Re: 40Gb/s IPoIB only gives 5Gb/s real throughput?!


I suggest you use a PCIe 3.0 slot.

 

thanks

Re: nvidia_peer_memory for Cuda8 ?


I took the driver from GitHub. The distro is CentOS 6.8, although the kernel is still 2.6.32-573.26.1.el6.x86_64.



Re: How do you post a question?


You must register in order to post.

I want to register but I don't have a team/partner, can you help me find a team/partner?


I want to register but I don't have a team/partner. Can you help me find a team/partner?

Re: I want to register but I don't have a team/partner, can you help me find a team/partner?


Our hackathon registration is for complete teams. You can go ahead and submit a question with your area of interest and contact details, and people can use it to contact you and invite you to join their team.

How can I change my group members?


Hi,

One of the people from my team has left. How can we change the team members?

