Contents

Overview
Dhrystone (single-core performance benchmark): introduction, source download, building the source, usage and output
CoreMark (multi-core performance benchmark): introduction, source download, building the source, usage and output
STREAM (DDR memory-bandwidth benchmark): introduction, source download, building the source, usage and output

Overview
This article describes chip performance benchmarking tools for Linux systems.
Dhrystone (single-core performance benchmark)
Introduction
Dhrystone is one of the most common benchmarks for measuring processor compute capability, and is typically used to gauge integer performance. Its unit is Dhrystones per second, i.e. how many times the Dhrystone program can run per second. The result measured on the VAX-11/780 (1757 Dhrystones/s) was later defined as 1 Dhrystone MIPS (MIPS: Million Instructions Per Second). Reference results for various chips: http://www.roylongbottom.org.uk/dhrystone results.htm
Source download
Source archive: http://www.roylongbottom.org.uk/classic_benchmarks.tar.gz
Building the source
After downloading, unpack the archive:

tar -vxf classic_benchmarks.tar.gz

Then enter the extracted folder. Its contents are:

bin32                  prebuilt 32-bit binaries
bin64                  prebuilt 64-bit binaries
source_code            source directory
  |-- common_32bit     shared 32-bit test code
  |-- common_64bit     shared 64-bit test code
  |-- dhrystone1       Dhrystone version 1 sources
  |-- dhrystone2       Dhrystone version 2 sources
  |-- linpack          floating-point benchmark
  |-- livermore_loops  additional tools
  |-- whetstone        floating-point benchmark
README                 documentation

Create a folder for the build output in the extracted root and enter it:
mkdir ./arm_build
cd ./arm_build

Copy the relevant source files (cpuidc64.c, cpuidh.h, dhry.h, dhry_1.c, dhry_2.c) into the arm_build folder:
cp -rf ../source_code/common_64bit/cpuidc64.c ./
cp -rf ../source_code/common_64bit/cpuidh.h ./
cp -rf ../source_code/dhrystone2/dhry.h ./
cp -rf ../source_code/dhrystone2/dhry_1.c ./
cp -rf ../source_code/dhrystone2/dhry_2.c ./

With the files in place, create a makefile. This makefile supports two build modes, native gcc and an arm64 cross-compiler; comment out whichever build_cc line you do not need (recipe lines must start with a tab):

# build_cc=gcc
build_cc=/opt/rv3399/gcc-linaro-7.5.0-2019.12-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-gcc

main: *.o
	${build_cc} -o dhry2_64 *.o
	${build_cc} -O2 -o dhry22_64 *.o
	${build_cc} -O3 -o dhry23_64 *.o

*.o: *.c
	${build_cc} -g -c *.c

clean:
	rm -f *.o dhry2_64 dhry22_64 dhry23_64

Running make straight away will not succeed yet; the sources need a few modifications first.
Modify cpuidh.h: add declarations for the following functions just before the #endif at the end of the file:
int getDetails();
void start_time();
void end_time();

Modify cpuidc64.c: comment out three lines. _cpuida() and _calculateMHz() are assembly routines implemented in /source_code/common_64bit/cpuida64.asm, but they fail to build, so the calls are disabled and pagesize is set to a constant instead:

// _cpuida();
// _calculateMHz();
// pagesize = getpagesize();
pagesize = 0;

_cpuida() and _calculateMHz() are assembly functions; the cross-compiler cannot assemble their source, which is why the corresponding calls have to be disabled.
If you do want to build the assembly source when compiling with gcc:

nasm -f elf64 cpuida64.asm

Or with the cross-toolchain's assembler:

/opt/rv3399/gcc-linaro-7.5.0-2019.12-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-as -f cpuida64.asm

Either command produces a .o object file.
After these modifications, build:
make clean
make

The build emits some warnings but still produces three executables: dhry2_64 (no optimization), dhry22_64 (optimization level 2), and dhry23_64 (optimization level 3).
These three binaries correspond to different optimization levels. Dhrystone contains a large number of string-copy statements, so a good optimizing compiler can replace the byte-by-byte copy loops with a sequence of word moves; as a result, higher optimization levels yield noticeably better scores.
Usage and output
Run the executable dhry2_64 directly. If it was cross-compiled, simply copy dhry2_64 to the target board and run it there.
The output looks like this:
root@ubuntu:~/classic_benchmarks/arm_build$ ./dhry2_64
####################################################
getDetails and MHz
Assembler CPUID and RDTSC
CPU , Features Code 00000000, Model Code 00000000
Measured - Minimum -2147483648 MHz, Maximum 0 MHz
Linux Functions
get_nprocs() - CPUs 2, Configured CPUs 2
get_phys_pages() and size - RAM Size 0.00 GB, Page Size 0 Bytes
uname() - Linux, ubuntu, 4.15.0-142-generic
#146~16.04.1-Ubuntu SMP Tue Apr 13 09:27:15 UTC 2021, x86_64

##########################################
Dhrystone Benchmark, Version 2.1 (Language: C or C++)
Optimisation Opt 3 64 Bit
Register option not selected

    10000 runs   0.00 seconds
   100000 runs   0.01 seconds
  1000000 runs   0.06 seconds
  2000000 runs   0.15 seconds
  4000000 runs   0.28 seconds
  8000000 runs   0.53 seconds
 16000000 runs   1.05 seconds
 32000000 runs   2.10 seconds

Final values (* implementation-dependent):

Int_Glob:      O.K.  5   Bool_Glob:     O.K.  1
Ch_1_Glob:     O.K.  A   Ch_2_Glob:     O.K.  B
Arr_1_Glob[8]: O.K.  7   Arr_2_Glob[8][7]: O.K.  32000010
Ptr_Glob->
  Ptr_Comp:    *  25137728
  Discr:       O.K.  0   Enum_Comp:     O.K.  2
  Int_Comp:    O.K.  17  Str_Comp:      O.K.  DHRYSTONE PROGRAM, SOME STRING
Next_Ptr_Glob->
  Ptr_Comp:    *  25137728 same as above
  Discr:       O.K.  0   Enum_Comp:     O.K.  1
  Int_Comp:    O.K.  18  Str_Comp:      O.K.  DHRYSTONE PROGRAM, SOME STRING
Int_1_Loc:     O.K.  5   Int_2_Loc:     O.K.  13
Int_3_Loc:     O.K.  7   Enum_Loc:      O.K.  1
Str_1_Loc:     O.K.  DHRYSTONE PROGRAM, 1'ST STRING
Str_2_Loc:     O.K.  DHRYSTONE PROGRAM, 2'ND STRING

Microseconds for one run through Dhrystone: 0.07
Dhrystones per Second:                      15248279
VAX MIPS rating =                           8678.59

Press Enter
After the results are printed, press Enter to exit. The key figure is the single-core score:

Dhrystones per Second: 15248279

CoreMark (multi-core performance benchmark)
Introduction
CoreMark is a simple yet sophisticated benchmark designed specifically to test the capability of a processor core. Running CoreMark yields a single score, the number of iterations executed per second (CoreMarks); the higher the iteration rate, the better the performance, which lets users compare processors quickly.
Official site: https://www.eembc.org/coremark/index.php
Reference score list for various chips: https://www.eembc.org/coremark/scores.php
Source download
Source repository: https://github.com/eembc/coremark
Building the source
Enter the source root directory and run the build.
Building with gcc:

# clean previous build artifacts
make clean
# rebuild
make

This produces an executable named coremark.exe, which can be run directly on Linux. Check that the binary matches the build platform:

root@ubuntu:~/app/coremark-main$ file ./coremark.exe
./coremark.exe: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=dd83f1651bf5fa4b513c4bc077146c2d1add2287, not stripped

The binary targets x86-64, so coremark.exe can be executed directly on the build machine.
Cross-compiling:

# clean previous build artifacts
make clean
# rebuild
make CC=aarch64-linux-gnu-gcc CXX=aarch64-linux-gnu-g++

This again produces coremark.exe, runnable on the target's Linux. Check that the binary matches the cross target:

root@ubuntu:~/app/coremark-main$ file ./coremark.exe
./coremark.exe: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, for GNU/Linux 3.7.0, BuildID[sha1]=e60d55afe73eb5ddb2158e1a6a77b33130719b6b, not stripped

The binary targets "ARM aarch64"; copy coremark.exe to the board and run it there.
Usage and output
Run the executable directly; it takes a while before the results appear:

./coremark.exe

Sample run:
root@ubuntu:~/app/coremark-main$ ./coremark.exe
2K performance run parameters for coremark.
CoreMark Size : 666
Total ticks : 16466
Total time (secs): 16.466000
Iterations/Sec : 18219.361108
Iterations : 300000
Compiler version : GCC5.4.0 20160609
Compiler flags : -O2 -DPERFORMANCE_RUN=1 -lrt
Memory location : Please put data memory location here(e.g. code in flash, data on heap etc)
seedcrc : 0xe9f5
[0]crclist : 0xe714
[0]crcmatrix : 0x1fd7
[0]crcstate : 0x8e3a
[0]crcfinal : 0xcc42
Correct operation validated. See README.md for run and reporting rules.
CoreMark 1.0 : 18219.361108 / GCC5.4.0 20160609 -O2 -DPERFORMANCE_RUN=1 -lrt / Heap

In total 300000 iterations ran in 16.466 s, i.e. 18219.361108 iterations per second; this iteration rate is the figure used to gauge the CPU's processing capability.
STREAM (DDR memory-bandwidth benchmark)
Introduction
STREAM (Sustainable Memory Bandwidth in High Performance Computers) is a memory-bandwidth benchmark. It reports results for four kernels:

Copy  (2 memory accesses: 1 read, 1 write)  - reads a value from one memory location and copies it to another
Scale (2 memory accesses: 1 read, 1 write)  - reads a value, multiplies it by a constant, and stores the product in another location
Add   (3 memory accesses: 2 reads, 1 write) - reads two values from two locations, adds them, and writes the sum to a third location
Triad (3 memory accesses: 2 reads, 1 write) - reads a value, multiplies it by a constant, adds a value read from another location, and writes the result back to memory

As a rule of thumb, the results usually order as Add > Triad > Copy > Scale.

A single-core STREAM run is influenced not only by the memory controller but also by the core's ROB and load/store resources, so it is not a pure memory-bandwidth test.
A multi-core STREAM run issues large numbers of memory requests from all cores simultaneously, saturating memory access and thereby measuring peak memory bandwidth.
Official site: https://www.cs.virginia.edu/stream/
Source download
There are C and Fortran versions; either must be downloaded and compiled into an executable, which then prints the test results when run.
Official C source: https://www.cs.virginia.edu/stream/FTP/Code/stream.c
Official Fortran source: https://www.cs.virginia.edu/stream/FTP/Code/stream.f
The C version of the source follows:
/*-----------------------------------------------------------------------*/
/* Program: STREAM */
/* Revision: $Id: stream.c,v 5.10 2013/01/17 16:01:06 mccalpin Exp mccalpin $ */
/* Original code developed by John D. McCalpin */
/* Programmers: John D. McCalpin */
/* Joe R. Zagar */
/* */
/* This program measures memory transfer rates in MB/s for simple */
/* computational kernels coded in C. */
/*-----------------------------------------------------------------------*/
/* Copyright 1991-2013: John D. McCalpin */
/*-----------------------------------------------------------------------*/
/* License: */
/* 1. You are free to use this program and/or to redistribute */
/* this program. */
/* 2. You are free to modify this program for your own use, */
/* including commercial use, subject to the publication */
/* restrictions in item 3. */
/* 3. You are free to publish results obtained from running this */
/* program, or from works that you derive from this program, */
/* with the following limitations: */
/* 3a. In order to be referred to as STREAM benchmark results, */
/* published results must be in conformance to the STREAM */
/* Run Rules, (briefly reviewed below) published at */
/* http://www.cs.virginia.edu/stream/ref.html */
/* and incorporated herein by reference. */
/* As the copyright holder, John McCalpin retains the */
/* right to determine conformity with the Run Rules. */
/* 3b. Results based on modified source code or on runs not in */
/* accordance with the STREAM Run Rules must be clearly */
/* labelled whenever they are published. Examples of */
/* proper labelling include: */
/* tuned STREAM benchmark results */
/* based on a variant of the STREAM benchmark code */
/* Other comparable, clear, and reasonable labelling is */
/* acceptable. */
/* 3c. Submission of results to the STREAM benchmark web site */
/* is encouraged, but not required. */
/* 4. Use of this program or creation of derived works based on this */
/* program constitutes acceptance of these licensing restrictions. */
/* 5. Absolutely no warranty is expressed or implied. */
/*-----------------------------------------------------------------------*/
# include <stdio.h>
# include <unistd.h>
# include <math.h>
# include <float.h>
# include <limits.h>
# include <sys/time.h>

/*-----------------------------------------------------------------------
 * INSTRUCTIONS:
 *
 * 1) STREAM requires different amounts of memory to run on different
 *    systems, depending on both the system cache size(s) and the
 *    granularity of the system timer.
 *    You should adjust the value of STREAM_ARRAY_SIZE (below)
 *    to meet *both* of the following criteria:
 *      (a) Each array must be at least 4 times the size of the
 *          available cache memory. I don't worry about the difference
 *          between 10^6 and 2^20, so in practice the minimum array size
 *          is about 3.8 times the cache size.
 *          Example 1: One Xeon E3 with 8 MB L3 cache
 *              STREAM_ARRAY_SIZE should be >= 4 million, giving
 *              an array size of 30.5 MB and a total memory requirement
 *              of 91.5 MB.
 *          Example 2: Two Xeon E5's with 20 MB L3 cache each (using OpenMP)
 *              STREAM_ARRAY_SIZE should be >= 20 million, giving
 *              an array size of 153 MB and a total memory requirement
 *              of 458 MB.
 *      (b) The size should be large enough so that the "timing calibration"
 *          output by the program is at least 20 clock-ticks.
 *          Example: most versions of Windows have a 10 millisecond timer
 *              granularity. 20 "ticks" at 10 ms/tic is 200 milliseconds.
 *              If the chip is capable of 10 GB/s, it moves 2 GB in 200 msec.
 *              This means the each array must be at least 1 GB, or 128M elements.
 *
 *      Version 5.10 increases the default array size from 2 million
 *          elements to 10 million elements in response to the increasing
 *          size of L3 caches. The new default size is large enough for caches
 *          up to 20 MB.
 *      Version 5.10 changes the loop index variables from "register int"
 *          to "ssize_t", which allows array indices > 2^32 (4 billion)
 *          on properly configured 64-bit systems. Additional compiler options
 *          (such as "-mcmodel=medium") may be required for large memory runs.
 *
 *      Array size can be set at compile time without modifying the source
 *          code for the (many) compilers that support preprocessor definitions
 *          on the compile line. E.g.,
 *                gcc -O -DSTREAM_ARRAY_SIZE=100000000 stream.c -o stream.100M
 *          will override the default size of 10M with a new size of 100M elements
 *          per array.
 */
#ifndef STREAM_ARRAY_SIZE
#   define STREAM_ARRAY_SIZE 10000000
#endif

/*  2) STREAM runs each kernel "NTIMES" times and reports the *best* result
 *         for any iteration after the first, therefore the minimum value
 *         for NTIMES is 2.
 *      There are no rules on maximum allowable values for NTIMES, but
 *         values larger than the default are unlikely to noticeably
 *         increase the reported performance.
 *      NTIMES can also be set on the compile line without changing the source
 *         code using, for example, "-DNTIMES=7".
 */
#ifdef NTIMES
#if NTIMES<=1
#   define NTIMES 10
#endif
#endif
#ifndef NTIMES
#   define NTIMES 10
#endif

/*  Users are allowed to modify the "OFFSET" variable, which *may* change the
 *         relative alignment of the arrays (though compilers may change the
 *         effective offset by making the arrays non-contiguous on some systems).
 *      Use of non-zero values for OFFSET can be especially helpful if the
 *         STREAM_ARRAY_SIZE is set to a value close to a large power of 2.
 *      OFFSET can also be set on the compile line without changing the source
 *         code using, for example, "-DOFFSET=56".
 */
#ifndef OFFSET
#   define OFFSET 0
#endif

/*
 *  3) Compile the code with optimization. Many compilers generate
 *       unreasonably bad code before the optimizer tightens things up.
 *     If the results are unreasonably good, on the other hand, the
 *       optimizer might be too smart for me!
 *
 *     For a simple single-core version, try compiling with:
 *            cc -O stream.c -o stream
 *     This is known to work on many, many systems....
 *
 *     To use multiple cores, you need to tell the compiler to obey the OpenMP
 *       directives in the code. This varies by compiler, but a common example is
 *            gcc -O -fopenmp stream.c -o stream_omp
 *     The environment variable OMP_NUM_THREADS allows runtime control of the
 *       number of threads/cores used when the resulting "stream_omp" program
 *       is executed.
 *
 *     To run with single-precision variables and arithmetic, simply add
 *         -DSTREAM_TYPE=float
 *     to the compile line.
 *     Note that this changes the minimum array sizes required --- see (1) above.
 *
 *     The preprocessor directive "TUNED" does not do much -- it simply causes the
 *       code to call separate functions to execute each kernel. Trivial versions
 *       of these functions are provided, but they are *not* tuned -- they just
 *       provide predefined interfaces to be replaced with tuned code.
 *
 *  4) Optional: Mail the results to mccalpin@cs.virginia.edu
 *     Be sure to include info that will help me understand:
 *       a) the computer hardware configuration (e.g., processor model, memory type)
 *       b) the compiler name/version and compilation flags
 *       c) any run-time information (such as OMP_NUM_THREADS)
 *       d) all of the output from the test case.
 *
 *  Thanks!
 *-----------------------------------------------------------------------*/

# define HLINE "-------------------------------------------------------------\n"

# ifndef MIN
# define MIN(x,y) ((x)<(y)?(x):(y))
# endif
# ifndef MAX
# define MAX(x,y) ((x)>(y)?(x):(y))
# endif

#ifndef STREAM_TYPE
#define STREAM_TYPE double
#endif

static STREAM_TYPE a[STREAM_ARRAY_SIZE+OFFSET],
                   b[STREAM_ARRAY_SIZE+OFFSET],
                   c[STREAM_ARRAY_SIZE+OFFSET];

static double avgtime[4] = {0}, maxtime[4] = {0},
              mintime[4] = {FLT_MAX,FLT_MAX,FLT_MAX,FLT_MAX};

static char *label[4] = {"Copy:      ", "Scale:     ",
                         "Add:       ", "Triad:     "};

static double bytes[4] = {
    2 * sizeof(STREAM_TYPE) * STREAM_ARRAY_SIZE,
    2 * sizeof(STREAM_TYPE) * STREAM_ARRAY_SIZE,
    3 * sizeof(STREAM_TYPE) * STREAM_ARRAY_SIZE,
    3 * sizeof(STREAM_TYPE) * STREAM_ARRAY_SIZE
    };

extern double mysecond();
extern void checkSTREAMresults();
#ifdef TUNED
extern void tuned_STREAM_Copy();
extern void tuned_STREAM_Scale(STREAM_TYPE scalar);
extern void tuned_STREAM_Add();
extern void tuned_STREAM_Triad(STREAM_TYPE scalar);
#endif
#ifdef _OPENMP
extern int omp_get_num_threads();
#endif
int
main()
{
    int     quantum, checktick();
    int     BytesPerWord;
    int     k;
    ssize_t j;
    STREAM_TYPE scalar;
    double  t, times[4][NTIMES];

    /* --- SETUP --- determine precision and check timing --- */

    printf(HLINE);
    printf("STREAM version $Revision: 5.10 $\n");
    printf(HLINE);
    BytesPerWord = sizeof(STREAM_TYPE);
    printf("This system uses %d bytes per array element.\n", BytesPerWord);

    printf(HLINE);
#ifdef N
    printf("*****  WARNING: ******\n");
    printf("      It appears that you set the preprocessor variable N when compiling this code.\n");
    printf("      This version of the code uses the preprocesor variable STREAM_ARRAY_SIZE to control the array size\n");
    printf("      Reverting to default value of STREAM_ARRAY_SIZE=%llu\n",(unsigned long long) STREAM_ARRAY_SIZE);
    printf("*****  WARNING: ******\n");
#endif

    printf("Array size = %llu (elements), Offset = %d (elements)\n" , (unsigned long long) STREAM_ARRAY_SIZE, OFFSET);
    printf("Memory per array = %.1f MiB (= %.1f GiB).\n",
        BytesPerWord * ( (double) STREAM_ARRAY_SIZE / 1024.0/1024.0),
        BytesPerWord * ( (double) STREAM_ARRAY_SIZE / 1024.0/1024.0/1024.0));
    printf("Total memory required = %.1f MiB (= %.1f GiB).\n",
        (3.0 * BytesPerWord) * ( (double) STREAM_ARRAY_SIZE / 1024.0/1024.),
        (3.0 * BytesPerWord) * ( (double) STREAM_ARRAY_SIZE / 1024.0/1024./1024.));
    printf("Each kernel will be executed %d times.\n", NTIMES);
    printf(" The *best* time for each kernel (excluding the first iteration)\n");
    printf(" will be used to compute the reported bandwidth.\n");

#ifdef _OPENMP
    printf(HLINE);
#pragma omp parallel
    {
#pragma omp master
        {
            k = omp_get_num_threads();
            printf ("Number of Threads requested = %i\n",k);
        }
    }
#endif

#ifdef _OPENMP
    k = 0;
#pragma omp parallel
#pragma omp atomic
        k++;
    printf ("Number of Threads counted = %i\n",k);
#endif

    /* Get initial value for system clock. */
#pragma omp parallel for
    for (j=0; j<STREAM_ARRAY_SIZE; j++) {
        a[j] = 1.0;
        b[j] = 2.0;
        c[j] = 0.0;
    }

    printf(HLINE);

    if  ( (quantum = checktick()) >= 1)
        printf("Your clock granularity/precision appears to be %d microseconds.\n", quantum);
    else {
        printf("Your clock granularity appears to be less than one microsecond.\n");
        quantum = 1;
    }

    t = mysecond();
#pragma omp parallel for
    for (j = 0; j < STREAM_ARRAY_SIZE; j++)
        a[j] = 2.0E0 * a[j];
    t = 1.0E6 * (mysecond() - t);

    printf("Each test below will take on the order of %d microseconds.\n", (int) t);
    printf("   (= %d clock ticks)\n", (int) (t/quantum) );
    printf("Increase the size of the arrays if this shows that\n");
    printf("you are not getting at least 20 clock ticks per test.\n");

    printf(HLINE);

    printf("WARNING -- The above is only a rough guideline.\n");
    printf("For best results, please be sure you know the\n");
    printf("precision of your system timer.\n");
    printf(HLINE);

    /* --- MAIN LOOP --- repeat test cases NTIMES times --- */

    scalar = 3.0;
    for (k=0; k<NTIMES; k++)
    {
        times[0][k] = mysecond();
#ifdef TUNED
        tuned_STREAM_Copy();
#else
#pragma omp parallel for
        for (j=0; j<STREAM_ARRAY_SIZE; j++)
            c[j] = a[j];
#endif
        times[0][k] = mysecond() - times[0][k];

        times[1][k] = mysecond();
#ifdef TUNED
        tuned_STREAM_Scale(scalar);
#else
#pragma omp parallel for
        for (j=0; j<STREAM_ARRAY_SIZE; j++)
            b[j] = scalar*c[j];
#endif
        times[1][k] = mysecond() - times[1][k];

        times[2][k] = mysecond();
#ifdef TUNED
        tuned_STREAM_Add();
#else
#pragma omp parallel for
        for (j=0; j<STREAM_ARRAY_SIZE; j++)
            c[j] = a[j]+b[j];
#endif
        times[2][k] = mysecond() - times[2][k];

        times[3][k] = mysecond();
#ifdef TUNED
        tuned_STREAM_Triad(scalar);
#else
#pragma omp parallel for
        for (j=0; j<STREAM_ARRAY_SIZE; j++)
            a[j] = b[j]+scalar*c[j];
#endif
        times[3][k] = mysecond() - times[3][k];
    }

    /* --- SUMMARY --- */

    for (k=1; k<NTIMES; k++) /* note -- skip first iteration */
    {
        for (j=0; j<4; j++)
        {
            avgtime[j] = avgtime[j] + times[j][k];
            mintime[j] = MIN(mintime[j], times[j][k]);
            maxtime[j] = MAX(maxtime[j], times[j][k]);
        }
    }

    printf("Function    Best Rate MB/s  Avg time     Min time     Max time\n");
    for (j=0; j<4; j++) {
        avgtime[j] = avgtime[j]/(double)(NTIMES-1);

        printf("%s%12.1f  %11.6f  %11.6f  %11.6f\n", label[j],
               1.0E-06 * bytes[j]/mintime[j],
               avgtime[j],
               mintime[j],
               maxtime[j]);
    }
    printf(HLINE);

    /* --- Check Results --- */
    checkSTREAMresults();
    printf(HLINE);

    return 0;
}

# define M 20

int
checktick()
{
    int    i, minDelta, Delta;
    double t1, t2, timesfound[M];

    /* Collect a sequence of M unique time values from the system. */

    for (i = 0; i < M; i++) {
        t1 = mysecond();
        while( ((t2=mysecond()) - t1) < 1.0E-6 )
            ;
        timesfound[i] = t1 = t2;
    }

    /*
     * Determine the minimum difference between these M values.
     * This result will be our estimate (in microseconds) for the
     * clock granularity.
     */

    minDelta = 1000000;
    for (i = 1; i < M; i++) {
        Delta = (int)( 1.0E6 * (timesfound[i]-timesfound[i-1]));
        minDelta = MIN(minDelta, MAX(Delta,0));
    }

    return(minDelta);
}

/* A gettimeofday routine to give access to the wall
   clock timer on most UNIX-like systems.  */

#include <sys/time.h>

double mysecond()
{
    struct timeval tp;
    struct timezone tzp;
    int i;

    i = gettimeofday(&tp,&tzp);
    return ( (double) tp.tv_sec + (double) tp.tv_usec * 1.e-6 );
}

#ifndef abs
#define abs(a) ((a) >= 0 ? (a) : -(a))
#endif
void checkSTREAMresults ()
{
    STREAM_TYPE aj,bj,cj,scalar;
    STREAM_TYPE aSumErr,bSumErr,cSumErr;
    STREAM_TYPE aAvgErr,bAvgErr,cAvgErr;
    double epsilon;
    ssize_t j;
    int    k,ierr,err;

    /* reproduce initialization */
    aj = 1.0;
    bj = 2.0;
    cj = 0.0;
    /* a[] is modified during timing check */
    aj = 2.0E0 * aj;
    /* now execute timing loop */
    scalar = 3.0;
    for (k=0; k<NTIMES; k++)
    {
        cj = aj;
        bj = scalar*cj;
        cj = aj+bj;
        aj = bj+scalar*cj;
    }

    /* accumulate deltas between observed and expected results */
    aSumErr = 0.0;
    bSumErr = 0.0;
    cSumErr = 0.0;
    for (j=0; j<STREAM_ARRAY_SIZE; j++) {
        aSumErr += abs(a[j] - aj);
        bSumErr += abs(b[j] - bj);
        cSumErr += abs(c[j] - cj);
        // if (j == 417) printf("Index 417: c[j]: %f, cj: %f\n",c[j],cj); // MCCALPIN
    }
    aAvgErr = aSumErr / (STREAM_TYPE) STREAM_ARRAY_SIZE;
    bAvgErr = bSumErr / (STREAM_TYPE) STREAM_ARRAY_SIZE;
    cAvgErr = cSumErr / (STREAM_TYPE) STREAM_ARRAY_SIZE;

    if (sizeof(STREAM_TYPE) == 4) {
        epsilon = 1.e-6;
    }
    else if (sizeof(STREAM_TYPE) == 8) {
        epsilon = 1.e-13;
    }
    else {
        printf("WEIRD: sizeof(STREAM_TYPE) = %lu\n",sizeof(STREAM_TYPE));
        epsilon = 1.e-6;
    }

    err = 0;
    if (abs(aAvgErr/aj) > epsilon) {
        err++;
        printf ("Failed Validation on array a[], AvgRelAbsErr > epsilon (%e)\n",epsilon);
        printf ("     Expected Value: %e, AvgAbsErr: %e, AvgRelAbsErr: %e\n",aj,aAvgErr,abs(aAvgErr)/aj);
        ierr = 0;
        for (j=0; j<STREAM_ARRAY_SIZE; j++) {
            if (abs(a[j]/aj-1.0) > epsilon) {
                ierr++;
#ifdef VERBOSE
                if (ierr < 10) {
                    printf("         array a: index: %ld, expected: %e, observed: %e, relative error: %e\n",
                        j,aj,a[j],abs((aj-a[j])/aAvgErr));
                }
#endif
            }
        }
        printf("     For array a[], %d errors were found.\n",ierr);
    }
    if (abs(bAvgErr/bj) > epsilon) {
        err++;
        printf ("Failed Validation on array b[], AvgRelAbsErr > epsilon (%e)\n",epsilon);
        printf ("     Expected Value: %e, AvgAbsErr: %e, AvgRelAbsErr: %e\n",bj,bAvgErr,abs(bAvgErr)/bj);
        printf ("     AvgRelAbsErr > Epsilon (%e)\n",epsilon);
        ierr = 0;
        for (j=0; j<STREAM_ARRAY_SIZE; j++) {
            if (abs(b[j]/bj-1.0) > epsilon) {
                ierr++;
#ifdef VERBOSE
                if (ierr < 10) {
                    printf("         array b: index: %ld, expected: %e, observed: %e, relative error: %e\n",
                        j,bj,b[j],abs((bj-b[j])/bAvgErr));
                }
#endif
            }
        }
        printf("     For array b[], %d errors were found.\n",ierr);
    }
    if (abs(cAvgErr/cj) > epsilon) {
        err++;
        printf ("Failed Validation on array c[], AvgRelAbsErr > epsilon (%e)\n",epsilon);
        printf ("     Expected Value: %e, AvgAbsErr: %e, AvgRelAbsErr: %e\n",cj,cAvgErr,abs(cAvgErr)/cj);
        printf ("     AvgRelAbsErr > Epsilon (%e)\n",epsilon);
        ierr = 0;
        for (j=0; j<STREAM_ARRAY_SIZE; j++) {
            if (abs(c[j]/cj-1.0) > epsilon) {
                ierr++;
#ifdef VERBOSE
                if (ierr < 10) {
                    printf("         array c: index: %ld, expected: %e, observed: %e, relative error: %e\n",
                        j,cj,c[j],abs((cj-c[j])/cAvgErr));
                }
#endif
            }
        }
        printf("     For array c[], %d errors were found.\n",ierr);
    }
    if (err == 0) {
        printf ("Solution Validates: avg error less than %e on all three arrays\n",epsilon);
    }
#ifdef VERBOSE
    printf ("Results Validation Verbose Results: \n");
    printf ("    Expected a(1), b(1), c(1): %f %f %f \n",aj,bj,cj);
    printf ("    Observed a(1), b(1), c(1): %f %f %f \n",a[1],b[1],c[1]);
    printf ("    Rel Errors on a, b, c:     %e %e %e \n",abs(aAvgErr/aj),abs(bAvgErr/bj),abs(cAvgErr/cj));
#endif
}

#ifdef TUNED
/* stubs for "tuned" versions of the kernels */

void tuned_STREAM_Copy()
{
    ssize_t j;
#pragma omp parallel for
    for (j=0; j<STREAM_ARRAY_SIZE; j++)
        c[j] = a[j];
}

void tuned_STREAM_Scale(STREAM_TYPE scalar)
{
    ssize_t j;
#pragma omp parallel for
    for (j=0; j<STREAM_ARRAY_SIZE; j++)
        b[j] = scalar*c[j];
}

void tuned_STREAM_Add()
{
    ssize_t j;
#pragma omp parallel for
    for (j=0; j<STREAM_ARRAY_SIZE; j++)
        c[j] = a[j]+b[j];
}

void tuned_STREAM_Triad(STREAM_TYPE scalar)
{
    ssize_t j;
#pragma omp parallel for
    for (j=0; j<STREAM_ARRAY_SIZE; j++)
        a[j] = b[j]+scalar*c[j];
}
/* end of stubs for the "tuned" versions of the kernels */
#endif

Building the source
You can compile directly with gcc, or cross-compile if the binary needs to run on another platform.
gcc with tuning options:

gcc -O3 -mtune=native -march=native -fopenmp -DSTREAM_ARRAY_SIZE=200000000 -DNTIMES=100 stream.c -o stream

gcc with default options:

gcc stream.c -o stream

Cross-compiler with default options:

aarch64-linux-gnu-gcc stream.c -o stream

Compile option notes:
-O3: highest optimization level (the levels are -O0, -O1, -O2, -O3)
-fopenmp: enables OpenMP so the run can use multiple processors, which gets closer to the real peak memory bandwidth; with it enabled, the program by default runs as many threads as the CPU has
-DN=2000000 (in some versions -DSTREAM_ARRAY_SIZE=200000000): sets the size of the test arrays a[], b[], c[] (Array size).
    This value strongly affects the result. Version 5.9 defaults to 2000000; in stream.c version 5.10 the parameter is named -DSTREAM_ARRAY_SIZE and defaults to 10000000.
    Note: the array size must be much larger than the CPU's last-level cache (usually L3); otherwise you are measuring CPU cache throughput rather than memory throughput.
    Suggested formula: {last-level cache in MB} x 1024 x 1024 x 4.1 x {CPU sockets} / 8, truncated to an integer.
    Rationale: the stream.c source recommends arrays at least 4x the last-level cache, and STREAM_TYPE is double (8 bytes), hence cache size (bytes) x 4.1 x sockets / 8.
    e.g. on a dual-socket machine with a 32 MB last-level cache: 32 x 1024 x 1024 x 4.1 x 2 / 8 ≈ 34393292
-DNTIMES=10: number of runs; the best result among them is reported
-mtune=native -march=native: optimize for the host CPU's instruction set
-mcmodel=medium: needed when a single memory array exceeds 2 GB.
    Some toolchains (e.g. aarch64 gcc) do not accept -mcmodel=medium; -mcmodel=large, -mcmodel=small, or -mcmodel=tiny are available instead
-DOFFSET=4096: array offset; usually fine to leave undefined

Usage and output
Optionally set the number of threads for the run:

# export OMP_NUM_THREADS=x   (x = number of processors to use)
export OMP_NUM_THREADS=8

Then run the compiled stream binary directly:

export OMP_NUM_THREADS=8
./stream

Build and run results:
root@ubuntu:~/app/stream_ddr$ gcc stream.c -o stream
root@ubuntu:~/app/stream_ddr$ export OMP_NUM_THREADS=8
root@ubuntu:~/app/stream_ddr$ ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 10000000 (elements), Offset = 0 (elements)
Memory per array = 76.3 MiB (= 0.1 GiB).
Total memory required = 228.9 MiB (= 0.2 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 1 microseconds.
Each test below will take on the order of 20046 microseconds.
   (= 20046 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function Best Rate MB/s Avg time Min time Max time
Copy: 7940.8 0.020932 0.020149 0.023032
Scale: 9247.6 0.018226 0.017302 0.019239
Add: 10917.1 0.022417 0.021984 0.023661
Triad: 10522.6 0.023232 0.022808 0.023928
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

The Copy, Scale, Add, and Triad lines above give each kernel's data transfer rate and timing.