Streaming Data JDBC Examples
This section describes common scenarios for streaming data between databases and HPE Ezmeral Data Fabric Streams using the JDBC connector.
Streaming Data from HPE Ezmeral Data Fabric Streams to a MySQL Database
The following is example code for streaming data from HPE Ezmeral Data Fabric Streams stream topics to a MySQL database.
POST /connectors HTTP/1.1
Host: connect.example.com
Content-Type: application/json
Accept: application/json
{"name": "mysql-sink-connector",
"config": {
"connector.class":"io.confluent.connect.jdbc.JdbcSinkConnector",
"connection.url":"jdbc:mysql://hostname:3306/mysql_db?user=<user>&password=<password>",
"auto.create":"true",
"topics":"/kafka-connect:topic1",
"tasks.max":"2",
"insert.mode":"insert"
}}
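After submitting the request, you can confirm that the connector and its tasks are running by querying the standard Kafka Connect REST API status endpoint. The host name below is the same placeholder used in the example above:
GET /connectors/mysql-sink-connector/status HTTP/1.1
Host: connect.example.com
Accept: application/json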
Streaming Data from a MySQL Database to HPE Ezmeral Data Fabric Streams
The following is example code for streaming data from a MySQL database to HPE Ezmeral Data Fabric Streams stream topics.
POST /connectors HTTP/1.1
Host: connect.example.com
Content-Type: application/json
Accept: application/json
{"name": "mysql-source-connector",
"config": { "connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector",
"connection.url":"jdbc:mysql://hostname:3306/newdb?user=<user>&password=<password>"
"mode":"incrementing",
"incrementing.column.name":"id",
"topic.prefix":"/kafka-connect:mysql-",
"tasks.max":"1"
}}
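The source connector writes each polled table to a topic whose name is the configured topic.prefix followed by the table name (for example, /kafka-connect:mysql-<table_name>). When the connector is no longer needed, it can be removed through the standard Kafka Connect REST API, for example:
DELETE /connectors/mysql-source-connector HTTP/1.1
Host: connect.example.com
Accept: application/json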
Streaming Data from a Hive Database to HPE Ezmeral Data Fabric Streams
IMPORTANT
Using the Hive-JDBC driver with Connect-JDBC requires different jars in the share/java/kafka-connect-jdbc/ directory, depending on whether Connect-HDFS is installed on the same node:
- If only Connect-JDBC is installed, you must keep the hive-jdbc-<version>-standalone jar.
- If both Connect-JDBC and Connect-HDFS are installed on the same node, you must keep the hive-jdbc (not standalone) and curator-client jars.
POST /connectors HTTP/1.1
Host: connect.example.com
Content-Type: application/json
Accept: application/json
{"name": "hive-source-connector",
"config": {
"connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector",
"connection.url":"jdbc:hive2://hostname:10000/database_name;user=<user>;password=<pa
ssword>",
"mode":"bulk",
"topic.prefix":"/kafka-connect:hive-",
"tasks.max":"1"
}}
NOTE
For a secure HPE Ezmeral Data Fabric cluster, use the following connection.url:
jdbc:hive2://hostname:10000/database_name;auth=maprsasl
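On a secure cluster, the full connector registration might look like the following sketch; all values other than the auth=maprsasl connection string are carried over unchanged from the example above.
POST /connectors HTTP/1.1
Host: connect.example.com
Content-Type: application/json
Accept: application/json
{
  "name": "hive-source-connector",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:hive2://hostname:10000/database_name;auth=maprsasl",
    "mode": "bulk",
    "topic.prefix": "/kafka-connect:hive-",
    "tasks.max": "1"
  }
}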