Using Kerberos Authentication

Kerberos is an encrypted network authentication protocol for client/server applications. Kerberos is a complex subsystem; detailing how to install and configure Kerberos itself is beyond the scope of this document. You should familiarize yourself with Kerberos concepts before configuring Kerberos for your HAWQ cluster. For more information about Kerberos, see http://web.mit.edu/kerberos/.

HAWQ supports Kerberos at both the HDFS and user authentication levels. You will perform distinct configuration procedures for each.

Kerberos provides a secure, encrypted authentication service. It does not encrypt data exchanged between the client and database and provides no authorization services. To encrypt data exchanged over the network, you must use an SSL connection. To manage authorization for access to HAWQ databases and objects such as schemas and tables, you assign privileges to HAWQ users and roles. For information about managing authorization privileges, see Overview of HAWQ Authorization.
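
For example, once a HAWQ role exists, you manage its access to database objects with SQL privilege commands; the role and table names below are illustrative:

gpadmin@master$ psql -d testdb -c 'GRANT SELECT ON TABLE sales TO bill;'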

Prerequisites

Before configuring Kerberos authentication for HAWQ, ensure that:

  • System time on the Kerberos server and HAWQ hosts is synchronized. (For example, install the ntp package on each host.)
  • Network connectivity exists between the Kerberos server and all nodes in the HAWQ cluster.
  • Java 1.7.0_17 or later is installed on all nodes in your cluster. Java 1.7.0_17 is required to use Kerberos-authenticated JDBC on Red Hat Enterprise Linux 6.x or 7.x.
  • You can identify the Key Distribution Center (KDC) server you use for Kerberos authentication. See Example: Install and Configure a Kerberos KDC Server if you have not yet set up your KDC.
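
For example, you can spot-check the time-synchronization and Java prerequisites on a node as follows (a minimal check; ntpstat assumes the ntp package is installed):

$ ntpstat
$ java -version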

Configuring HAWQ/PXF for Secure HDFS

When Kerberos is enabled for your HDFS filesystem, HAWQ, as an HDFS client, requires a principal and keytab file to authenticate access to HDFS (filesystem) and YARN (resource management). If you have enabled Kerberos at the HDFS filesystem level, you will create and deploy principals for your HDFS cluster, and ensure that Kerberos authentication is enabled and functioning for all HDFS client services, including HAWQ and PXF.

Procedure for Ambari-Managed Clusters

If you manage your cluster with Ambari, you will enable Kerberos authentication for your cluster as described in Enabling Kerberos Authentication Using Ambari in the Hortonworks documentation. The Ambari Kerberos Security Wizard guides you through the kerberization process, including installing Kerberos client packages on cluster nodes, syncing Kerberos configuration files, updating the cluster configuration, and creating and distributing the Kerberos principals and keytab files for your Hadoop cluster services, including HAWQ and PXF.

Procedure for Command-Line-Managed Clusters

If you manage your cluster from the command line, ensure that you have completed the following tasks before you configure HAWQ and PXF for access to a secure HDFS filesystem:

  • Enabled Kerberos for your Hadoop cluster per the instructions for your specific distribution and verified the configuration.

  • Verified that the HDFS configuration parameter dfs.block.access.token.enable is set to true. You can find this setting in the hdfs-site.xml configuration file.
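
    For example, you can spot-check this setting as follows (a minimal check, assuming hdfs-site.xml resides in /etc/hadoop/conf, a common but not universal location):

    root@hawq-node$ grep -A 1 dfs.block.access.token.enable /etc/hadoop/conf/hdfs-site.xml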

  • Noted the host name or IP address of your HAWQ <master> and Kerberos Key Distribution Center (KDC) <kdc-server> nodes.

  • Noted the name of the Kerberos <realm> in which your cluster resides.

  • Distributed the /etc/krb5.conf Kerberos configuration file from the KDC server node to each HAWQ and PXF cluster node, if it is not already present. For example:

    $ ssh root@<hawq-node>
    root@hawq-node$ cp /etc/krb5.conf /save/krb5.conf.save
    root@hawq-node$ scp <kdc-server>:/etc/krb5.conf /etc/krb5.conf
    
  • Verified that the Kerberos client packages are installed on each HAWQ and PXF node, installing them if they are missing:

    root@hawq-node$ rpm -qa | grep krb
    root@hawq-node$ yum install krb5-libs krb5-workstation
    

Procedure

Perform the following steps to configure HAWQ and PXF for a secure HDFS. You will perform operations on both the HAWQ <master> and the <kdc-server> nodes.

  1. Log in to the Kerberos KDC server as the root user.

    $ ssh root@<kdc-server>
    root@kdc-server$ 
    
  2. Use the kadmin.local command to create a Kerberos principal for the postgres user. Substitute your <realm>. For example:

    root@kdc-server$ kadmin.local -q "addprinc -randkey postgres@REALM.DOMAIN"
    
  3. Use kadmin.local to create a Kerberos service principal for each host on which a PXF agent is configured and running. The service principal should be of the form pxf/<host>@<realm>, where <host> is the DNS-resolvable, fully-qualified hostname of the PXF host system (the output of the hostname -f command).

    For example, these commands add service principals for three PXF nodes on the hosts host1.example.com, host2.example.com, and host3.example.com:

    root@kdc-server$ kadmin.local -q "addprinc -randkey pxf/host1.example.com@REALM.DOMAIN"
    root@kdc-server$ kadmin.local -q "addprinc -randkey pxf/host2.example.com@REALM.DOMAIN"
    root@kdc-server$ kadmin.local -q "addprinc -randkey pxf/host3.example.com@REALM.DOMAIN"
    

    Note: As an alternative, if you have a hosts file that lists the fully-qualified domain name of each PXF host (one host per line), then you can generate principals using the command:

    root@kdc-server$ for HOST in $(cat hosts); do kadmin.local -q "addprinc -randkey pxf/$HOST@REALM.DOMAIN"; done
    
  4. Generate a keytab file for each principal that you created in the previous steps (that is, postgres and each pxf/<host>). Save the keytab files in any convenient location (this example uses the directory /etc/security/keytabs). You will deploy the service principal keytab files to their respective HAWQ and PXF host machines in a later step. For example:

    root@kdc-server$ kadmin.local -q "xst -k /etc/security/keytabs/hawq.service.keytab postgres@REALM.DOMAIN"
    root@kdc-server$ kadmin.local -q "xst -k /etc/security/keytabs/pxf-host1.service.keytab pxf/host1.example.com@REALM.DOMAIN"
    root@kdc-server$ kadmin.local -q "xst -k /etc/security/keytabs/pxf-host2.service.keytab pxf/host2.example.com@REALM.DOMAIN"
    root@kdc-server$ kadmin.local -q "xst -k /etc/security/keytabs/pxf-host3.service.keytab pxf/host3.example.com@REALM.DOMAIN"
    root@kdc-server$ kadmin.local -q "listprincs"
    

    Repeat the xst command as necessary to generate a keytab for each HAWQ and PXF service principal that you created in the previous steps.

  5. The HAWQ master server requires a /etc/security/keytabs/hdfs.headless.keytab keytab file for the HDFS principal. If this file does not already exist on the HAWQ master node, create the principal and generate the keytab. For example:

    root@kdc-server$ kadmin.local -q "addprinc -randkey hdfs@REALM.DOMAIN"
    root@kdc-server$ kadmin.local -q "xst -k /etc/security/keytabs/hdfs.headless.keytab hdfs@REALM.DOMAIN"
    
  6. Copy the HAWQ service keytab file (and the HDFS headless keytab file if you created one) to the HAWQ master node. For example:

    root@kdc-server$ scp /etc/security/keytabs/hawq.service.keytab <master>:/etc/security/keytabs/hawq.service.keytab
    root@kdc-server$ scp /etc/security/keytabs/hdfs.headless.keytab <master>:/etc/security/keytabs/hdfs.headless.keytab
    
  7. Change the ownership and permissions on hawq.service.keytab (and hdfs.headless.keytab) as follows:

    root@kdc-server$ ssh <master> chown gpadmin:gpadmin /etc/security/keytabs/hawq.service.keytab
    root@kdc-server$ ssh <master> chmod 400 /etc/security/keytabs/hawq.service.keytab
    root@kdc-server$ ssh <master> chown hdfs:hdfs /etc/security/keytabs/hdfs.headless.keytab
    root@kdc-server$ ssh <master> chmod 400 /etc/security/keytabs/hdfs.headless.keytab
    
  8. Copy the keytab file for each PXF service principal to its respective host. For example:

    root@kdc-server$ scp /etc/security/keytabs/pxf-host1.service.keytab host1.example.com:/etc/security/keytabs/pxf.service.keytab
    root@kdc-server$ scp /etc/security/keytabs/pxf-host2.service.keytab host2.example.com:/etc/security/keytabs/pxf.service.keytab
    root@kdc-server$ scp /etc/security/keytabs/pxf-host3.service.keytab host3.example.com:/etc/security/keytabs/pxf.service.keytab
    

    Note the keytab file location on each PXF host; you will need this information for a later configuration step.

  9. Change the ownership and permissions on the pxf.service.keytab files. For example:

    root@kdc-server$ ssh host1.example.com chown pxf:pxf /etc/security/keytabs/pxf.service.keytab
    root@kdc-server$ ssh host1.example.com chmod 400 /etc/security/keytabs/pxf.service.keytab
    root@kdc-server$ ssh host2.example.com chown pxf:pxf /etc/security/keytabs/pxf.service.keytab
    root@kdc-server$ ssh host2.example.com chmod 400 /etc/security/keytabs/pxf.service.keytab
    root@kdc-server$ ssh host3.example.com chown pxf:pxf /etc/security/keytabs/pxf.service.keytab
    root@kdc-server$ ssh host3.example.com chmod 400 /etc/security/keytabs/pxf.service.keytab
    
  10. On each PXF node, edit the /etc/pxf/conf/pxf-site.xml configuration file to identify the local keytab file and security principal name. Add or uncomment the properties, substituting your <realm>. For example:

    <property>
        <name>pxf.service.kerberos.keytab</name>
        <value>/etc/security/keytabs/pxf.service.keytab</value>
        <description>path to keytab file owned by pxf service
        with permissions 0400</description>
    </property>
    
    <property>
        <name>pxf.service.kerberos.principal</name>
        <value>pxf/_HOST@REALM.DOMAIN</value>
        <description>Kerberos principal the pxf service should use.
        _HOST is replaced automatically with the host's
        FQDN</description>
    </property>
    
  11. Perform the remaining steps on the HAWQ master node as the gpadmin user:

    1. Log in to the HAWQ master node and set up the HAWQ runtime environment:

      $ ssh gpadmin@<master>
      gpadmin@master$ . /usr/local/hawq/greenplum_path.sh
      
    2. Run the following commands to configure Kerberos HDFS security for HAWQ and identify the keytab file:

      gpadmin@master$ hawq config -c enable_secure_filesystem -v ON
      gpadmin@master$ hawq config -c krb_server_keyfile -v /etc/security/keytabs/hawq.service.keytab
      
    3. Start the HAWQ service:

      gpadmin@master$ hawq start cluster -a
      
    4. Obtain an HDFS Kerberos ticket and change the ownership and permissions of the HAWQ HDFS data directory, substituting the HDFS data directory path for your HAWQ cluster. For example:

      gpadmin@master$ sudo -u hdfs kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs
      gpadmin@master$ sudo -u hdfs hdfs dfs -chown -R postgres:gpadmin /<hawq_data_hdfs_path>
      
    5. On the HAWQ master node and each segment node, edit the /usr/local/hawq/etc/hdfs-client.xml file to enable Kerberos security. Add or uncomment the following property in each file:

      <property>
        <name>hadoop.security.authentication</name>
        <value>kerberos</value>
      </property>
      
    6. If you are using YARN for resource management, edit the /usr/local/hawq/etc/yarn-client.xml file to enable Kerberos security. Add or uncomment the following property in the file on the HAWQ master and each HAWQ segment node:

      <property>
        <name>hadoop.security.authentication</name>
        <value>kerberos</value>
      </property>
      
    7. Restart your HAWQ cluster:

      gpadmin@master$ hawq restart cluster -a -M fast
      

Configuring Kerberos User Authentication for HAWQ

When Kerberos authentication is enabled at the user level, HAWQ uses the Generic Security Service Application Program Interface (GSSAPI) to provide automatic authentication (single sign-on). When HAWQ uses Kerberos user authentication, both HAWQ itself and the HAWQ users (roles) that require Kerberos authentication must have a principal and keytab. When a user attempts to log in to HAWQ, HAWQ uses its Kerberos principal to connect to the Kerberos server and presents the user's principal for Kerberos validation. If the user principal is valid, the login succeeds and the user can access HAWQ; if it is not, the login fails and HAWQ denies access to the user.

When HAWQ uses Kerberos for user authentication, it uses a standard principal to connect to the Kerberos KDC. The format of this principal is postgres/<FQDN_of_master>@<realm>, where <FQDN_of_master> refers to the fully qualified domain name of the HAWQ master node.

You may choose to configure HAWQ user principals before you enable Kerberos user authentication for HAWQ. See Configuring Kerberos-Authenticated HAWQ Users.

The procedure to configure Kerberos user authentication for HAWQ includes:

  • Creating a Kerberos principal and generating and distributing a keytab entry for the postgres process on the HAWQ master node
  • Creating a Kerberos principal for the gpadmin or another administrative HAWQ user
  • Updating the HAWQ pg_hba.conf configuration file to specify Kerberos authentication
  • Restarting the HAWQ cluster

Perform the following steps to configure Kerberos user authentication for HAWQ. You will perform operations on both the HAWQ <master> and the <kdc-server> nodes.

Note: Some operations may differ based on whether or not you have configured secure HDFS. These operations are called out below.

  1. Log in to the Kerberos KDC server system:

    $ ssh root@<kdc-server>
    root@kdc-server$ 
    
  2. Create the HAWQ postgres/<master> principal using the kadmin.local command. Substitute the fully qualified domain name of the HAWQ master node and your Kerberos realm. For example:

    root@kdc-server$ kadmin.local -q "addprinc -randkey postgres/<master>@REALM.DOMAIN"
    

    The addprinc command adds the principal postgres/<master> to the KDC managing your <realm>.

  3. Generate a keytab file for the HAWQ postgres/<master> principal. Provide the same name you used to create the principal.

    If you have configured Kerberos for your HDFS filesystem, add the keytab to the HAWQ client HDFS keytab file:

    root@kdc-server$ kadmin.local -q "xst -norandkey -k /etc/security/keytabs/hawq.service.keytab postgres/<master>@REALM.DOMAIN"
    

    Otherwise, generate a new file for the keytab:

    root@kdc-server$ kadmin.local -q "xst -norandkey -k hawq-krb5.keytab postgres/<master>@REALM.DOMAIN"
    
  4. Use the klist command to view the key you just generated:

    root@kdc-server$ klist -ket ./hawq-krb5.keytab
    

    Or:

    root@kdc-server$ klist -ket /etc/security/keytabs/hawq.service.keytab
    

    The -k, -e, and -t options together list the keys in the identified keytab file, along with their timestamps and encryption types.

  5. When you enable Kerberos user authentication for HAWQ, you must create a Kerberos principal for gpadmin or another HAWQ administrative user. Create a Kerberos principal for the HAWQ gpadmin administrative role, substituting your Kerberos realm. For example:

    root@kdc-server$ kadmin.local -q "addprinc -pw changeme gpadmin@REALM.DOMAIN"
    

    This addprinc command adds the principal gpadmin to the Kerberos KDC managing your <realm>. Because the example creates the principal with the -pw option, gpadmin must provide that password when authenticating. Alternatively, you can create a keytab file for the gpadmin principal and distribute the file to HAWQ client nodes, as sketched below.
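
    A minimal sketch of the keytab alternative, using the same xst and scp pattern shown in this procedure (the keytab file name gpadmin-krb5.keytab is illustrative):

    root@kdc-server$ kadmin.local -q "xst -norandkey -k gpadmin-krb5.keytab gpadmin@REALM.DOMAIN"
    root@kdc-server$ scp ./gpadmin-krb5.keytab gpadmin@<master>:/home/gpadmin/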

  6. Copy the file in which you added the postgres/<master>@<realm> keytab to the HAWQ master node:

    root@kdc-server$ scp ./hawq-krb5.keytab gpadmin@<master>:/home/gpadmin/
    

    Or:

    root@kdc-server$ scp /etc/security/keytabs/hawq.service.keytab gpadmin@<master>:/etc/security/keytabs/hawq.service.keytab
    
  7. Log in to the HAWQ master node as the gpadmin user and set up the HAWQ environment:

    $ ssh gpadmin@<master>
    gpadmin@master$ . /usr/local/hawq/greenplum_path.sh
    
  8. If you copied the hawq-krb5.keytab file, set the ownership and mode of the file as follows:

    gpadmin@master$ chown gpadmin:gpadmin /home/gpadmin/hawq-krb5.keytab
    gpadmin@master$ chmod 400 /home/gpadmin/hawq-krb5.keytab
    

    The HAWQ server keytab file must be readable (and preferably only readable) by the HAWQ gpadmin administrative account.

  9. Add a pg_hba.conf entry that mandates Kerberos authentication for HAWQ. The pg_hba.conf file resides in the directory specified by the hawq_master_directory server configuration parameter value. For example, add:

    host all all 0.0.0.0/0 gss include_realm=0 krb_realm=REALM.DOMAIN
    

    This pg_hba.conf entry specifies that any remote access (that is, from any user on any remote host) to HAWQ must be authenticated through the Kerberos realm named REALM.DOMAIN.

    Note: Place the Kerberos entry in the appropriate location in the pg_hba.conf file. For example, you may choose to retain pg_hba.conf entries for the gpadmin user that grant trust or ident authentication for local connections; place the Kerberos entry after these lines. Refer to Configuring Client Authentication for additional information about the pg_hba.conf file.
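
    For example, a pg_hba.conf fragment with this ordering might look like the following (the local and loopback entries are illustrative; yours may differ):

    local all gpadmin ident
    host  all gpadmin 127.0.0.1/28 trust
    host  all all 0.0.0.0/0 gss include_realm=0 krb_realm=REALM.DOMAIN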

  10. Update HAWQ configuration and restart your cluster. You will perform different procedures if you manage your cluster from the command line or use Ambari to manage your cluster.

    Note: After you restart your HAWQ cluster, Kerberos user authentication is enabled for HAWQ, and all users, including gpadmin, must authenticate before performing any HAWQ operations.

    1. If you manage your cluster using Ambari:

      1. Log in to the Ambari UI from a supported web browser.
      2. Navigate to the HAWQ service, Configs > Advanced tab, and expand the Custom hawq-site drop-down.
      3. Set the krb_server_keyfile path value to the new keytab file location, /home/gpadmin/hawq-krb5.keytab.
      4. Save this configuration change, then select the (now orange) Restart > Restart All Affected button to restart your HAWQ cluster.
      5. Exit the Ambari UI.
    2. If you manage your cluster from the command line:

      1. Update the krb_server_keyfile configuration parameter:

        gpadmin@master$ hawq config -c krb_server_keyfile -v '/home/gpadmin/hawq-krb5.keytab'
        GUC krb_server_keyfile already exist in hawq-site.xml
        Update it with value: /home/gpadmin/hawq-krb5.keytab
        GUC      : krb_server_keyfile
        Value    : /home/gpadmin/hawq-krb5.keytab
        
      2. Restart your HAWQ cluster:

        gpadmin@master$ hawq restart cluster
        
  11. When Kerberos user authentication is enabled for HAWQ, all users, including the gpadmin administrative user, must request a ticket to authenticate before performing HAWQ operations. Generate a ticket for gpadmin on the HAWQ master node, entering the password you specified when you created the principal:

    gpadmin@master$ kinit gpadmin@<realm>
    Password for gpadmin@REALM.DOMAIN:
    

    See Authenticating User Access to HAWQ for more information about requesting and generating Kerberos tickets.

Configuring Kerberos-Authenticated HAWQ Users

You must configure HAWQ user principals for Kerberos. The first component of a HAWQ user principal must be the HAWQ user/role name:

<hawq-user>@<realm>

This procedure includes:

  • Identifying an existing HAWQ role or creating a new HAWQ role for each user you want to authenticate with Kerberos
  • Creating a Kerberos principal for each role
  • Optionally generating and distributing a keytab file to all HAWQ clients from which you will access HAWQ as the new role

Procedure

Perform the following steps to configure Kerberos authentication for specific HAWQ users. You will perform operations on both the HAWQ <master> and the <kdc-server> nodes.

  1. Log in to the HAWQ master node as the gpadmin user and set up your HAWQ environment:

    $ ssh gpadmin@<master>
    gpadmin@master$ . /usr/local/hawq/greenplum_path.sh
    
  2. Identify the name of an existing HAWQ user/role or create a new HAWQ user/role. For example:

    gpadmin@master$ psql -d template1 -c 'CREATE ROLE "bill_kerberos" with LOGIN;'
    

    This step creates a HAWQ operational role. Create an administrative HAWQ role by adding the SUPERUSER clause to the CREATE ROLE command.
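
    For example, a sketch of creating an administrative role instead (the role name jill_kerberos is hypothetical):

    gpadmin@master$ psql -d template1 -c 'CREATE ROLE "jill_kerberos" with LOGIN SUPERUSER;'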

  3. Create a principal for the HAWQ role. Substitute the Kerberos realm you noted earlier. For example:

    $ ssh root@<kdc-server>
    root@kdc-server$ kadmin.local -q "addprinc -pw changeme bill_kerberos@REALM.DOMAIN"
    

    This addprinc command adds the principal bill_kerberos to the Kerberos KDC managing your <realm>.

  4. You may choose to authenticate the HAWQ role with a password or a keytab file.

    1. If you choose password authentication, no further configuration is required. bill_kerberos will provide the password identified by the -pw option when authenticating. Skip the rest of this step.
    2. If you choose authentication via a keytab file:

      1. Generate a keytab file for the HAWQ principal you created, again substituting your Kerberos realm. For example:

        root@kdc-server$ kadmin.local -q "xst -k bill-krb5.keytab bill_kerberos@REALM.DOMAIN"
        

        The keytab entry is saved to the ./bill-krb5.keytab file.

      2. View the key you just added to bill-krb5.keytab:

        root@kdc-server$ klist -ket ./bill-krb5.keytab
        
      3. Distribute the keytab file to each HAWQ node from which you will access the HAWQ master as the user/role. For example:

        root@kdc-server$ scp ./bill-krb5.keytab bill@<hawq-node>:/home/bill/
        
  5. Log in to the HAWQ node as the user for whom you created the principal and set up your HAWQ environment:

    $ ssh bill@<hawq-node>
    bill@hawq-node$ . /usr/local/hawq/greenplum_path.sh
    
  6. If you are using keytab file authentication, set the ownership and mode of the keytab file as follows:

    bill@hawq-node$ chown bill:bill /home/bill/bill-krb5.keytab
    bill@hawq-node$ chmod 400 /home/bill/bill-krb5.keytab
    
  7. Access HAWQ as the new bill_kerberos user:

    bill@hawq-node$ psql -d testdb -h <master> -U bill_kerberos
    psql: GSSAPI continuation error: Unspecified GSS failure.  Minor code may provide more information
    GSSAPI continuation error: Credentials cache file '/tmp/krb5cc_502' not found
    

    The operation fails because the bill_kerberos user has not yet authenticated with the Kerberos server. The next section, Authenticating User Access to HAWQ, describes the procedure HAWQ users follow to authenticate with Kerberos.

Authenticating User Access to HAWQ

When Kerberos user authentication is enabled for HAWQ, users must request a ticket from the Kerberos KDC server before connecting to HAWQ. You must request the ticket for a principal matching the requested database user name. When granted, the ticket expires after a set period of time, after which you will need to request another ticket.

To generate a Kerberos ticket, run the kinit command, specifying the Kerberos principal for which you are requesting the ticket. You may optionally provide the path to a keytab file.

For example, to request a ticket for the bill_kerberos user principal you created above using the keytab file for authentication:

bill@hawq-node$ kinit -k -t /home/bill/bill-krb5.keytab bill_kerberos@REALM.DOMAIN

To request a ticket for the bill_kerberos user principal using password authentication:

bill@hawq-node$ kinit bill_kerberos@REALM.DOMAIN
Password for bill_kerberos@REALM.DOMAIN:

kinit prompts you for the password; enter the password at the prompt.

To view information about your tickets, use the klist command. Invoked without arguments, klist lists the currently held Kerberos principal and tickets; the output also provides each ticket's expiration time.

Example output from the klist command:

bill@hawq-node$ klist
Ticket cache: FILE:/tmp/krb5cc_502
Default principal: bill_kerberos@REALM.DOMAIN

Valid starting     Expires            Service principal
06/07/17 23:16:04  06/08/17 23:16:04  krbtgt/REALM.DOMAIN@REALM.DOMAIN
    renew until 06/07/17 23:16:04
06/07/17 23:16:07  06/08/17 23:16:04  postgres/master@
    renew until 06/07/17 23:16:04
06/07/17 23:16:07  06/08/17 23:16:04  postgres/master@REALM.DOMAIN
    renew until 06/07/17 23:16:04

After generating a ticket, you can connect to a HAWQ database as a Kerberos-authenticated user using psql or other client programs.
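
For example, after generating the ticket for bill_kerberos as shown above, the psql connection that previously failed now succeeds:

bill@hawq-node$ psql -d testdb -h <master> -U bill_kerberos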

Name Mapping

To simplify Kerberos-authenticated HAWQ user login, you can define a mapping between a user's Kerberos principal name and a HAWQ database user name. You define the mapping in the pg_ident.conf file and apply it by adding the map=<map-name> option to a specific entry in the pg_hba.conf file.

The pg_ident.conf and pg_hba.conf files reside on the HAWQ master node in the directory identified by the hawq_master_directory server configuration parameter.

You use the pg_ident.conf file to define user name maps. You can create entries in this file that define a mapping name, a Kerberos principal name, and a HAWQ database user name. For example:

# MAPNAME   SYSTEM-USERNAME      HAWQ-USERNAME
kerbmap     /^([a-z]+)_kerberos      \1

This entry extracts the component preceding _kerberos in the Kerberos principal name and maps it to a HAWQ user/role; for example, the principal bill_kerberos maps to the HAWQ user bill.

You identify the map name in the pg_hba.conf entry that enables Kerberos support by adding the map=<map-name> option. For example:

host all all 0.0.0.0/0 gss include_realm=0 krb_realm=REALM.DOMAIN map=kerbmap

Suppose that you are logged in as Linux user bsmith, your Kerberos principal is bill_kerberos@REALM.DOMAIN, and you want to log in to HAWQ as user bill. With the kerbmap mapping configured in pg_ident.conf and pg_hba.conf as described above and a ticket for Kerberos principal bill_kerberos, you log in to HAWQ with the user name bill as follows:

bsmith@hawq-node$ klist
Ticket cache: FILE:/tmp/krb5cc_500
Default principal: bill_kerberos@REALM.DOMAIN
bsmith@hawq-node$ psql -d testdb -h <master> -U bill
psql (8.2.15)
Type "help" for help.

testdb=> SELECT current_user;
 current_user
--------------
 bill
(1 row)

For more information about specifying username maps, see Username maps in the PostgreSQL documentation.

Kerberos Considerations for Non-HAWQ Clients

If you access HAWQ databases from any clients outside of your HAWQ cluster, and Kerberos user authentication for HAWQ is enabled, you must specifically configure Kerberos access on each client system. Ensure that:

  • You have created the appropriate Kerberos principal for the HAWQ user and optionally generated and distributed the keytab file.
  • The krb5-libs and krb5-workstation Kerberos client packages are installed on each client.
  • You have copied the /etc/krb5.conf Kerberos configuration file from the KDC or HAWQ master node to each client system.
  • The HAWQ user requests a ticket before connecting to HAWQ.
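
For example, a minimal sketch of preparing a client system (the host and user names are illustrative):

$ ssh bill@<client-host>
bill@client-host$ sudo yum install krb5-libs krb5-workstation
bill@client-host$ sudo scp root@<kdc-server>:/etc/krb5.conf /etc/krb5.conf
bill@client-host$ kinit bill_kerberos@REALM.DOMAIN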

Configuring JDBC for Kerberos-Enabled HAWQ

JDBC applications that you run must use a secure connection when Kerberos is configured for HAWQ user authentication.

The following example database connection URL uses a PostgreSQL JDBC driver and specifies parameters for Kerberos authentication:

jdbc:postgresql://master:5432/testdb?kerberosServerName=postgres&jaasApplicationName=pgjdbc&user=bill_kerberos

The connection URL parameter names and values specified will depend upon how the Java application performs Kerberos authentication.

Before configuring JDBC access to a kerberized HAWQ cluster, verify that you have satisfied the client requirements described in Kerberos Considerations for Non-HAWQ Clients, and that Java 1.7.0_17 or later is installed on the client system (see Prerequisites).

Procedure

Perform the following procedure to enable Kerberos-authenticated JDBC access to HAWQ from a client system.

  1. Create the .java.login.config file in the $HOME directory of the user account under which the application will run, or add the following to the existing file:

    pgjdbc {
      com.sun.security.auth.module.Krb5LoginModule required
      doNotPrompt=true
      useTicketCache=true
      debug=true
      client=true;
    };
    
  2. Generate a Kerberos ticket.

  3. Run the JDBC-based HAWQ application.
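
    For example, a sketch of steps 2 and 3 from the client command line (the application class and driver jar file names are hypothetical):

    bill@client-host$ kinit bill_kerberos@REALM.DOMAIN
    bill@client-host$ java -cp .:postgresql-jdbc.jar MyHawqJdbcApp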

Example: Install and Configure a Kerberos KDC Server

Note: If your installation already has a Kerberos Key Distribution Center (KDC) server, you do not need to perform this procedure. Note the KDC server host name or IP address and the name of the realm in which your cluster resides. You will need this information for other procedures.

Follow these steps to install and configure a Kerberos KDC server on a Red Hat Enterprise Linux host. The KDC server resides on the host named <kdc-server>.

  1. Log in to the Kerberos KDC Server system as a superuser:

    $ ssh root@<kdc-server>
    root@kdc-server$ 
    
  2. Install the Kerberos server packages:

    root@kdc-server$ yum install krb5-libs krb5-server krb5-workstation
    
  3. Define the Kerberos realm for your cluster by editing the /etc/krb5.conf configuration file. The following example configures a Kerberos server with a realm named REALM.DOMAIN residing on a host named hawq-kdc.

    [logging]
     default = FILE:/var/log/krb5libs.log
     kdc = FILE:/var/log/krb5kdc.log
     admin_server = FILE:/var/log/kadmind.log
    
    [libdefaults]
     default_realm = REALM.DOMAIN
     dns_lookup_realm = false
     dns_lookup_kdc = false
     ticket_lifetime = 24h
     renew_lifetime = 7d
     forwardable = true
     default_tgs_enctypes = aes128-cts des3-hmac-sha1 des-cbc-crc des-cbc-md5
     default_tkt_enctypes = aes128-cts des3-hmac-sha1 des-cbc-crc des-cbc-md5
     permitted_enctypes = aes128-cts des3-hmac-sha1 des-cbc-crc des-cbc-md5
    
    [realms]
     REALM.DOMAIN = {
      kdc = hawq-kdc:88
      admin_server = hawq-kdc:749
      default_domain = hawq-kdc
     }
    
    [domain_realm]
     .hawq-kdc = REALM.DOMAIN
     hawq-kdc = REALM.DOMAIN
    
    [appdefaults]
     pam = {
        debug = false
        ticket_lifetime = 36000
        renew_lifetime = 36000
        forwardable = true
        krb4_convert = false
       }
    

    The kdc and admin_server keys in the [realms] section specify the host (hawq-kdc) and port on which the Kerberos server is running. You can use an IP address in place of a host name.

    If your Kerberos server manages authentication for other realms, you would instead add the REALM.DOMAIN realm to the existing [realms] and [domain_realm] sections of the krb5.conf file. See the Kerberos documentation for detailed information about the krb5.conf configuration file.

  4. Note the Kerberos KDC server host name or IP address and the name of the realm in which your cluster resides. You will need this information in later procedures.

  5. Create a Kerberos KDC database by running the kdb5_util command:

    root@kdc-server$ kdb5_util create -s
    

    The kdb5_util create command creates the database in which the keys for the Kerberos realms managed by this KDC server are stored. The -s option instructs the command to create a stash file. Without the stash file, the KDC server will request a password every time it starts.

  6. Add an administrative user to the Kerberos KDC database with the kadmin.local utility. Because it does not itself depend on Kerberos authentication, the kadmin.local utility allows you to add an initial administrative user to the local Kerberos server. To add the user admin as an administrative user to the KDC database, run the following command:

    root@kdc-server$ kadmin.local -q "addprinc admin/admin"
    

    Most users do not need administrative access to the Kerberos server. They can use kadmin to manage their own principals (for example, to change their own password). For information about kadmin, see the Kerberos documentation.
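
    For example, a non-administrative user might change their own password with the kpasswd utility (assuming the krb5-workstation package is installed on the user's host):

    bill@hawq-node$ kpasswd bill_kerberos@REALM.DOMAIN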

  7. If required, edit the /var/kerberos/krb5kdc/kadm5.acl file to grant the appropriate permissions to admin.
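
    For example, the following kadm5.acl entry grants all permissions to any principal with an /admin instance in your realm (a common default; adjust it to your security requirements):

    */admin@REALM.DOMAIN *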

  8. Start the Kerberos daemons:

    root@kdc-server$ /sbin/service krb5kdc start
    root@kdc-server$ /sbin/service kadmin start
    
  9. To start Kerberos automatically upon system restart:

    root@kdc-server$ /sbin/chkconfig krb5kdc on
    root@kdc-server$ /sbin/chkconfig kadmin on