Verifies the baseline hardware performance of the specified hosts.
hawq checkperf -d <test_directory> [-d <test_directory> ...] (-f <hostfile_checkperf> | -h <hostname> [-h <hostname> ...]) [-r ds] [-B <block_size>] [-S <file_size>] [-D] [-v|-V]

hawq checkperf -d <temp_directory> (-f <hostfile_checknet> | -h <hostname> [-h <hostname> ...]) [-r n|N|M [--duration <time>] [--netperf]] [-D] [-v|-V]

hawq checkperf --version

hawq checkperf -?
The hawq checkperf utility starts a session on the specified hosts and runs the following performance tests:
- Disk I/O Test (dd test) — To test the sequential throughput performance of a logical disk or file system, the utility uses the dd command, which is a standard UNIX utility. It times how long it takes to write and read a large file to and from disk and calculates your disk I/O performance in megabytes (MB) per second. By default, the file size that is used for the test is calculated at two times the total random access memory (RAM) on the host. This ensures that the test is truly testing disk I/O and not using the memory cache.
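As a rough sketch of the measurement behind the disk I/O test (this is not hawq checkperf's actual implementation), you can time a dd write yourself. The 8 MB file size, directory, and variable names below are illustrative; the real utility sizes the file at two times RAM to defeat the memory cache:

```shell
# Illustrative timed dd write; hawq checkperf performs a much larger,
# cache-defeating version of this on every host under test.
test_dir=$(mktemp -d)       # stands in for a -d test directory
block_size=32768            # bytes per write, analogous to -B <block_size>
count=256                   # 256 x 32 KB = 8 MB test file

start=$(date +%s)
dd if=/dev/zero of="$test_dir/ddfile" bs="$block_size" count="$count" 2>/dev/null
end=$(date +%s)

elapsed=$(( end - start ))
[ "$elapsed" -eq 0 ] && elapsed=1   # coarse 1-second floor for tiny files
mb_per_sec=$(( 8 / elapsed ))
echo "wrote 8 MB in ~${elapsed}s (~${mb_per_sec} MB/s)"
```

A file this small mostly measures the page cache, which is exactly why the real test uses a file twice the size of RAM.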
- Memory Bandwidth Test (stream) — To test memory bandwidth, the utility uses the STREAM benchmark program to measure sustainable memory bandwidth (in MB/s). This tests that your system is not limited in performance by the memory bandwidth of the system in relation to the computational performance of the CPU. In applications where the data set is large (as in HAWQ), low memory bandwidth is a major performance issue. If memory bandwidth is significantly lower than the theoretical bandwidth of the CPU, then it can cause the CPU to spend significant amounts of time waiting for data to arrive from system memory.
- Network Performance Test (gpnetbench*) — To test network performance (and thereby the performance of the HAWQ interconnect), the utility runs a network benchmark program that transfers a 5-second stream of data from the current host to each remote host included in the test. The data is transferred in parallel to each remote host, and the minimum, maximum, average, and median network transfer rates are reported in megabytes (MB) per second. If the summary transfer rate is slower than expected (less than 100 MB/s), you can run the network test serially using the -r n option to obtain per-host results. To run a full-matrix bandwidth test, you can specify -r M, which causes every host to send and receive data from every other host specified. This test is best used to validate whether the switch fabric can tolerate a full-matrix workload.
To specify the hosts to test, use the -f option to specify a file containing a list of host names, or use the -h option to name single host names on the command line. If running the network performance test, all entries in the host file must be for network interfaces within the same subnet. If your segment hosts have multiple network interfaces configured on different subnets, run the network test once for each subnet.
You must also specify at least one test directory (with -d). The user who runs hawq checkperf must have write access to the specified test directories on all remote hosts. For the disk I/O test, the test directories should correspond to your segment data directories. For the memory bandwidth and network tests, a temporary directory is required for the test program files.
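Since write access is required everywhere, a quick pre-flight check (a sketch, not part of hawq checkperf) can save a failed run. The directory list here is illustrative; in practice you would run this on each host, for example via hawq ssh:

```shell
# Probe each intended test directory for write access as the current user.
status=ok
for dir in /tmp /var/tmp; do            # substitute your -d directories
    if touch "$dir/.checkperf_probe" 2>/dev/null; then
        rm -f "$dir/.checkperf_probe"   # clean up the probe file
    else
        echo "no write access: $dir"
        status=fail
    fi
done
echo "write-access check: $status"
```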
Before running hawq checkperf, you must have a trusted host setup between the hosts involved in the performance test. You can use the utility hawq ssh-exkeys to update the known host files and exchange public keys between hosts if you have not done so already. Note that hawq checkperf calls hawq ssh and hawq scp, so these HAWQ utilities must also be in your $PATH.
You can use the -d option multiple times to specify multiple test directories (for example, to test disk I/O of your data directories).
For example, a host file lists one host name per line:
sdw1-1
sdw2-1
sdw3-1
You can use the -h option multiple times to specify multiple host names.
- Disk I/O test (d)
- Stream test (s)
- Network performance test in sequential (n), parallel (N), or full-matrix (M) mode. The optional --duration option specifies how long (in seconds) to run the network test. To use the parallel (N) mode, you must run the test on an even number of hosts.
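Because parallel (N) mode requires an even number of hosts, it can be worth counting the entries in your host file first. This sketch uses a temporary file with hypothetical host names:

```shell
# Count non-empty lines in a host file and check the parity that -r N needs.
hostfile=$(mktemp)
printf 'sdw1-1\nsdw2-1\nsdw3-1\nsdw4-1\n' > "$hostfile"   # sample contents
n=$(grep -c . "$hostfile")                                 # hosts listed
if [ $(( n % 2 )) -eq 0 ]; then
    echo "$n hosts: even count, parallel (N) mode is allowed"
else
    echo "$n hosts: odd count, use sequential (n) or full-matrix (M) mode"
fi
```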
If you would rather use netperf (http://www.netperf.org) instead of the HAWQ network test, you can download it and install it into $GPHOME/bin/lib on all HAWQ hosts (master and segments). You would then specify the optional --netperf option to use the netperf binary instead of the default gpnetbench* network test program.
Specifies the total file size to be used for the disk I/O test for all directories specified with -d. <file_size> should equal two times the total RAM on the host. If not specified, the default is calculated at two times the total RAM on the host where hawq checkperf is executed. This ensures that the test is truly testing disk I/O and not using the memory cache. You can specify sizing in KB, MB, or GB.
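On Linux, the default sizing rule (two times total RAM) can be previewed with a short calculation. This is a sketch of the rule stated above, not code from the utility, and it assumes /proc/meminfo is available:

```shell
# Compute two times total RAM in KB, the default disk I/O test file size.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
file_size_kb=$(( mem_kb * 2 ))
echo "default disk I/O test file size: ${file_size_kb}KB"
```

For example, a host with 64 GB of RAM yields a 128 GB test file, which is why the disk I/O test needs substantial free space in each test directory.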
Specifies that the netperf binary should be used to perform the network test instead of the HAWQ network test. To use this option, you must download netperf from http://www.netperf.org and install it into $GPHOME/bin/lib on all HAWQ hosts (master and segments).
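Before passing --netperf, you can confirm the binary is where the utility expects it. The $GPHOME fallback value below is purely illustrative; use your actual HAWQ installation path:

```shell
# Check for the netperf binary under $GPHOME/bin/lib.
GPHOME=${GPHOME:-/usr/local/hawq}       # illustrative default path
netperf_found=no
[ -x "$GPHOME/bin/lib/netperf" ] && netperf_found=yes
echo "netperf under \$GPHOME/bin/lib: $netperf_found"
```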
Run the disk I/O and memory bandwidth tests on all the hosts in the file hostfile_checkperf using the test directories /data1 and /data2:
$ hawq checkperf -f hostfile_checkperf -d /data1 -d /data2 -r ds
Run only the disk I/O test on the hosts named sdw1 and sdw2 using the test directory of /data1. Show individual host results and run in verbose mode:
$ hawq checkperf -h sdw1 -h sdw2 -d /data1 -r d -D -v
Run the parallel network test using the test directory of /tmp, where hostfile_check_ic* specifies all network interface host address names within the same interconnect subnet:
$ hawq checkperf -f hostfile_checknet_ic1 -r N -d /tmp
$ hawq checkperf -f hostfile_checknet_ic2 -r N -d /tmp
Run the same test as above, but use netperf instead of the HAWQ network test (note that netperf must be installed in $GPHOME/bin/lib on all HAWQ hosts):
$ hawq checkperf -f hostfile_checknet_ic1 -r N --netperf -d /tmp
$ hawq checkperf -f hostfile_checknet_ic2 -r N --netperf -d /tmp