Migrating Kafka C Applications to HPE Data Fabric Streams
With some modification, you can use existing Kafka C applications to consume and produce topics in HPE Data Fabric Streams. The HPE Data Fabric Streams C Client is a distribution of librdkafka that is compatible with HPE Data Fabric Streams.
 - Install and configure the HPE Data Fabric Streams C Client.
 - When you refer to a topic in the application code, include the path and name of the stream in which the topic is located:

       /<path and name of stream>:<name of topic>

   For example, you might have a stream in an HPE Data Fabric cluster that is named stream_A, and the stream might be in a volume named IoT and in a directory named automobile_sensors. To redirect a producer application to a topic in that stream, the path to the topic might look like this:

       /mapr/IoT/automobile_sensors/stream_A:<name of topic>

   NOTE: Optionally, use the streams.consumer.default.stream and streams.producer.default.stream configuration parameters. When you configure these parameters, applications can specify just the topic name to read from or write to the default stream. To use these HPE Data Fabric-specific parameters in your application, compile your application with the rdkafka.h file (/opt/mapr/include/librdkafka/rdkafka.h) that was installed with the HPE Data Fabric Streams C Client. See the Compile the Apps section of Developing a HPE Data Fabric Streams C Application. A minimal producer sketch follows.
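   The following is a minimal sketch of how a producer might pass such a full-path topic name to the standard librdkafka calls. The stream path and the topic name sensor_readings are placeholders for illustration, not names from this documentation:

       #include <stdio.h>
       #include <string.h>
       #include <errno.h>
       #include <librdkafka/rdkafka.h>

       int main(void) {
           char errstr[512];

           /* Default configuration; the client locates the stream by its
              path, so no broker list is required. */
           rd_kafka_conf_t *conf = rd_kafka_conf_new();

           rd_kafka_t *producer =
               rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
           if (!producer) {
               fprintf(stderr, "Failed to create producer: %s\n", errstr);
               return 1;
           }

           /* The topic name carries the full path to the stream; both the
              stream path and topic name here are placeholders. */
           rd_kafka_topic_t *topic = rd_kafka_topic_new(
               producer, "/mapr/IoT/automobile_sensors/stream_A:sensor_readings",
               NULL);

           const char *payload = "temperature=72";
           if (rd_kafka_produce(topic, RD_KAFKA_PARTITION_UA, RD_KAFKA_MSG_F_COPY,
                                (void *)payload, strlen(payload),
                                NULL, 0, NULL) == -1)
               fprintf(stderr, "Produce failed: %s\n",
                       rd_kafka_err2str(rd_kafka_errno2err(errno)));

           /* Wait for outstanding deliveries before shutting down. */
           while (rd_kafka_outq_len(producer) > 0)
               rd_kafka_poll(producer, 100);

           rd_kafka_topic_destroy(topic);
           rd_kafka_destroy(producer);
           return 0;
       }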
 - See Configuration Properties for HPE Data Fabric Streams C Client for the list of supported configuration parameters, including a few parameters that are HPE Data Fabric-specific. Make changes to your application, as needed; a configuration sketch follows this step.

   NOTE: SSL-related configuration parameters are ignored. When you set these parameters, the HPE Data Fabric Streams Client issues a warning indicating that the parameters are not supported.
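   As one illustration of an HPE Data Fabric-specific parameter, the sketch below sets streams.producer.default.stream on the configuration object before creating the producer, so that later topic references can use a bare topic name. The stream path is again a placeholder:

       #include <stdio.h>
       #include <librdkafka/rdkafka.h>

       int main(void) {
           char errstr[512];
           rd_kafka_conf_t *conf = rd_kafka_conf_new();

           /* HPE Data Fabric-specific parameter; the stream path is a
              placeholder. With a default stream set, rd_kafka_topic_new()
              can take a bare topic name. */
           if (rd_kafka_conf_set(conf, "streams.producer.default.stream",
                                 "/mapr/IoT/automobile_sensors/stream_A",
                                 errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
               fprintf(stderr, "Config error: %s\n", errstr);
               return 1;
           }

           rd_kafka_t *producer =
               rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
           if (!producer) {
               fprintf(stderr, "Failed to create producer: %s\n", errstr);
               return 1;
           }

           /* The bare topic name resolves against the default stream. */
           rd_kafka_topic_t *topic =
               rd_kafka_topic_new(producer, "sensor_readings", NULL);

           rd_kafka_topic_destroy(topic);
           rd_kafka_destroy(producer);
           return 0;
       }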
 - Review the list of librdkafka APIs that are not supported by the HPE Data Fabric Streams C Client and make changes to your application, as needed.
 - Simple/low-level consumer APIs that are not supported (a sketch using the supported high-level consumer interface follows this list):
- rd_kafka_queue_new
 - rd_kafka_queue_destroy
 - rd_kafka_consume_start
 - rd_kafka_consume_start_queue
 - rd_kafka_consume_stop
 - rd_kafka_consume
 - rd_kafka_consume_batch
 - rd_kafka_consume_callback
 - rd_kafka_consume_queue
 - rd_kafka_consume_batch_queue
 - rd_kafka_consume_callback_queue
 - rd_kafka_offset_store
 - rd_kafka_pause_partitions
 - rd_kafka_resume_partitions
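   Because the low-level consume calls above are unavailable, a consumer built on rd_kafka_consume_start and rd_kafka_consume can usually be moved to the high-level KafkaConsumer interface (rd_kafka_subscribe and rd_kafka_consumer_poll), which does not appear on these unsupported lists. This is a minimal sketch under that assumption; the group id, stream path, and topic name are placeholders:

       #include <stdio.h>
       #include <librdkafka/rdkafka.h>

       int main(void) {
           char errstr[512];
           rd_kafka_conf_t *conf = rd_kafka_conf_new();

           /* High-level consumers require a group id; this one is a
              placeholder. */
           rd_kafka_conf_set(conf, "group.id", "sensor_readers",
                             errstr, sizeof(errstr));

           rd_kafka_t *consumer =
               rd_kafka_new(RD_KAFKA_CONSUMER, conf, errstr, sizeof(errstr));
           if (!consumer) {
               fprintf(stderr, "Failed to create consumer: %s\n", errstr);
               return 1;
           }

           /* Route all polls through the consumer queue. */
           rd_kafka_poll_set_consumer(consumer);

           /* Subscribe using the full stream path; names are placeholders. */
           rd_kafka_topic_partition_list_t *topics =
               rd_kafka_topic_partition_list_new(1);
           rd_kafka_topic_partition_list_add(
               topics, "/mapr/IoT/automobile_sensors/stream_A:sensor_readings",
               RD_KAFKA_PARTITION_UA);
           rd_kafka_subscribe(consumer, topics);
           rd_kafka_topic_partition_list_destroy(topics);

           for (int i = 0; i < 100; i++) {    /* bounded loop for the sketch */
               rd_kafka_message_t *msg = rd_kafka_consumer_poll(consumer, 1000);
               if (!msg)
                   continue;                  /* poll timed out */
               if (!msg->err)
                   printf("Read: %.*s\n", (int)msg->len,
                          (const char *)msg->payload);
               rd_kafka_message_destroy(msg);
           }

           rd_kafka_consumer_close(consumer);
           rd_kafka_destroy(consumer);
           return 0;
       }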
 
 - Producer/Consumer common APIs that are not supported:
- rd_kafka_conf_set_dr_cb
 - rd_kafka_conf_set_throttle_cb
 - rd_kafka_conf_set_stats_cb
 - rd_kafka_conf_set_socket_cb
 - rd_kafka_conf_set_open_cb
 - rd_kafka_conf_dump
 - rd_kafka_conf_dump_free
 - rd_kafka_name
 - rd_kafka_set_log_level
 - rd_kafka_mem_free
 
 - Topic APIs that are not supported (a watermark-offset sketch follows this list):
 - rd_kafka_query_watermark_offsets (NOTE: As of HPE Data Fabric 6.0.1, this API is supported.)
 - rd_kafka_get_watermark_offsets (NOTE: As of HPE Data Fabric 6.0.1, this API is supported.)
 
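   On clusters running HPE Data Fabric 6.0.1 or later, where rd_kafka_query_watermark_offsets is supported, a call might look like the following sketch; the handle, stream path, topic name, and partition are assumptions for illustration:

       #include <stdio.h>
       #include <inttypes.h>
       #include <librdkafka/rdkafka.h>

       /* Print the low and high watermark offsets for partition 0 of a
          topic. Assumes rk is an existing producer or consumer handle and
          that the cluster runs HPE Data Fabric 6.0.1 or later. */
       static void print_watermarks(rd_kafka_t *rk) {
           int64_t low = 0, high = 0;
           rd_kafka_resp_err_t err = rd_kafka_query_watermark_offsets(
               rk, "/mapr/IoT/automobile_sensors/stream_A:sensor_readings",
               0 /* partition */, &low, &high, 5000 /* timeout ms */);
           if (err)
               fprintf(stderr, "Watermark query failed: %s\n",
                       rd_kafka_err2str(err));
           else
               printf("low=%" PRId64 " high=%" PRId64 "\n", low, high);
       }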
 - Cluster APIs that are not supported:
- rd_kafka_memberid
 - rd_kafka_metadata
 - rd_kafka_metadata_destroy
 
 - Miscellaneous APIs that are not supported:
- rd_kafka_version
 - rd_kafka_version_str
 - rd_kafka_get_debug_contexts
 - rd_kafka_dump
 - rd_kafka_thread_cnt
 - rd_kafka_message_timestamp