Drill JDBC Drivers

Download the Drill JDBC driver and use it on all platforms to connect BI tools, such as SQuirreL and Spotfire, to Drill. Drill also includes an embedded, open-source JDBC driver.

The downloadable Drill JDBC driver provides read-only access to Drill data sources and supports the security features described in Securing Drill.

Alternatively, you can use the open-source JDBC driver embedded in Drill; however, the open-source driver is not tested on the HPE Ezmeral Data Fabric. The open-source driver supports the Kerberos and Plain authentication mechanisms, but does not support the MapR-SASL authentication mechanism. After you install Drill from the mapr-drill package, you can find the open-source JDBC driver files in the following locations:
  • $DRILL_HOME/jars/jdbc-driver/drill-jdbc-all-<drill-version>.jar
  • $DRILL_HOME/jars/drill-jdbc-<drill-version>.jar

Drill JDBC Driver Download

Use the driver version that matches the version of the installed Drill server. Although an older driver version may connect to an upgraded Drill server, older drivers do not include all the server features available in the newer drivers.

Install and retain all files associated with the Drill JDBC driver as downloaded. Dependencies exist among the driver files; retaining all of them ensures that the driver functions correctly and avoids failures.
The following table provides links to the download locations for the Drill JDBC drivers that correlate with each of the Drill versions listed:
To access the Data Fabric internet repository, you must specify the email and token of an HPE Passport account. For more information, see Using the HPE Ezmeral Token-Authenticated Internet Repository.
Drill Version: 1.16.1.[200 or later]
JDBC Driver: supports JRE 8 only and includes updated driver classes. See Driver Class.

Driver Class

The Drill JDBC Driver installation and configuration PDF document does not include the information provided in the following sections:

The Registering the Driver Class section of the Drill JDBC Driver documentation incorrectly lists the driver classes as com.simba.drill.jdbc41.Driver and com.simba.drill.jdbc41.DataSource.
  • For driver version and earlier, the correct driver classes are:
    • com.mapr.drill.jdbc41.Driver
    • com.mapr.drill.jdbc41.DataSource
  • For driver version, the correct driver classes are:
    • com.mapr.drill.jdbc.Driver
    • com.mapr.drill.jdbc.DataSource

JDBC Connection String

You can specify the schema parameter in the connection string, as shown in the following example:
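A connection string that sets the default schema might look like the following sketch (the host, port, and schema name are placeholders; substitute values from your cluster):

```
jdbc:drill:drillbit=node1.example.com:31010;schema=dfs.tmp
```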
You can also include the authentication mechanism in the connection string using the AuthMech or auth parameter. For data-fabric-SASL, use auth=MAPRSASL.
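As a sketch, a connection string that selects the data-fabric-SASL mechanism might look like this (hypothetical host and port):

```
jdbc:drill:drillbit=node1.example.com:31010;auth=MAPRSASL
```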
  • If using the data-fabric-SASL or Plain authentication mechanism, you must add the Drill JDBC JAR files and /opt/mapr/lib/* to the classpath of the third-party client tool, as shown in the following example for SQuirreL when the path to the driver is C:\driver\MapRDrillJDBC41-
    -cp "%SQUIRREL_CP%;C:\driver\MapRDrillJDBC41-\*;C:\opt\mapr\lib\*"

    The driver JAR files should appear before /opt/mapr/lib/* in the classpath.

Using Data Fabric-SASL for Authentication on Windows

Drill is automatically configured with Data Fabric security when you install Drill on a cluster configured with default security. To connect to Drill from a Windows JDBC client, a user ticket must exist on the Windows client. The JDBC driver locates the ticket for the current Windows user in the default ticket location, %TEMP%, or in the location specified by the MAPR_TICKETFILE_LOCATION environment variable. See Tickets and Generating a Data Fabric User Ticket for more information.

You can either copy a user ticket that was generated on the cluster into the default location (%TEMP%), or you can install the data-fabric client on the Windows client and then run the maprlogin command to generate the ticket on the Windows client.
The JDBC user must be the same as the Windows user that created the ticket.


If you want to connect to Drill as the mapr user, you must create a ticket for the mapr user, as shown:
$ maprlogin password -user mapr
[Password for user 'mapr' at cluster 'Cluster1':]
The credentials for the mapr user in Cluster1 are written to /tmp/maprticket_1000.
Next, place the ticket in the %TEMP% directory on the Windows client. For example, the default location for a Windows 10 user named Tabetha Stephens is shown:
'C:\Users\TABETH~1\AppData\Local\Temp/maprticket_Tabetha Stephens'

To override this location, set the MAPR_TICKETFILE_LOCATION environment variable for the Windows user.

Using MAPR_TICKETFILE_LOCATION is recommended because the %TEMP% directory differs between Windows versions. You can also set MAPR_TICKETFILE_LOCATION per user on the operating system to prevent all users from sharing the same user ticket on the client.
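For example, you could set the variable for the current Windows user from a command prompt; the ticket path shown here is hypothetical:

```
setx MAPR_TICKETFILE_LOCATION "C:\mapr\tickets\maprticket_user1"
```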

Avoiding Driver Conflicts

If you download and use the Drill JDBC driver, rename the embedded JDBC driver files to avoid any conflict between the downloaded driver and the open-source driver. After you install Drill, the embedded JDBC driver files are in the following locations:
  • $DRILL_HOME/jars/jdbc-driver/drill-jdbc-all-<drill-version>.jar
  • $DRILL_HOME/jars/drill-jdbc-<drill-version>.jar
Changing the file extension to rename these files, as shown in the following example, prevents Drill or any other application, such as SQLLine, from picking up the embedded driver:
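A sketch of the rename, using a mock directory layout and a hypothetical driver version 1.16.1 (on a real node, point DRILL_HOME at your Drill installation instead of the mock directory):

```shell
# Mock layout standing in for a real Drill install; on a real node,
# set DRILL_HOME to your actual Drill installation directory.
DRILL_HOME=$(mktemp -d)
mkdir -p "$DRILL_HOME/jars/jdbc-driver"
touch "$DRILL_HOME/jars/jdbc-driver/drill-jdbc-all-1.16.1.jar" \
      "$DRILL_HOME/jars/drill-jdbc-1.16.1.jar"

# Change the extension so Drill, SQLLine, and other tools no longer
# pick up the embedded driver JARs.
for jar in "$DRILL_HOME"/jars/jdbc-driver/drill-jdbc-all-*.jar \
           "$DRILL_HOME"/jars/drill-jdbc-*.jar; do
  mv "$jar" "$jar.original"
done

ls "$DRILL_HOME/jars/jdbc-driver"   # prints drill-jdbc-all-1.16.1.jar.original
```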

Connecting to Drill via the Drill Shell (SQLLine)

See Connecting to Drill via the Drill Shell (SQLLine).

Driver Limitations

When using data-fabric-SASL with JDBC or ODBC drivers, there is no way to specify the target cluster name as part of the connection parameters. Data Fabric-SASL reads the first entry in the /opt/mapr/conf/mapr-clusters.conf file and assumes it is the target cluster name.

For example, if the mapr-clusters.conf file has an entry for 'cluster1' followed by an entry for 'cluster2' and you want to connect to a node in 'cluster2', authentication fails. As a workaround, manually switch the order of entries in the mapr-clusters.conf file.
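For instance, if /opt/mapr/conf/mapr-clusters.conf contains the following (hypothetical) entries, authentication targets cluster1; to reach cluster2, move its line to the top of the file:

```
cluster1 secure=true node1.example.com:7222
cluster2 secure=true node4.example.com:7222
```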