hdfs - How to write data to HA Hadoop QJM using Apache Flume?
How does Flume identify the active NameNode so that data is written to HDFS? Without high availability, Hadoop has a single NameNode whose IP is configured in flume.conf, and data is directed to HDFS through it. In our case, however, Flume needs to distinguish the active NameNode from the standby one, so that data is always directed to the active NameNode.
AFAIK this is not possible in a direct way: the HDFS sink configuration has room for only one NameNode.

Nevertheless, I think you can configure two HDFS sinks (and two channels), each one pointing to a different NameNode. The source puts a copy of each event into both channels thanks to the default replicating channel selector. Each sink then tries to persist the data on its own; the sink pointing to the standby NameNode will not persist anything until the active one falls down and the standby becomes active.
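A minimal sketch of what such a flume.conf could look like, assuming a netcat source and memory channels for illustration; the host names (nn1.example.com, nn2.example.com) and the HDFS path are placeholders you would replace with your own:

```properties
# Hypothetical agent with two channels and two HDFS sinks, one per NameNode.
agent.sources = src
agent.channels = ch1 ch2
agent.sinks = sink1 sink2

# The default channel selector is "replicating", so every event from the
# source is copied into both channels.
agent.sources.src.type = netcat
agent.sources.src.bind = localhost
agent.sources.src.port = 44444
agent.sources.src.channels = ch1 ch2

agent.channels.ch1.type = memory
agent.channels.ch2.type = memory

# One HDFS sink per NameNode; only the sink pointing at the currently
# active NameNode will succeed in persisting events.
agent.sinks.sink1.type = hdfs
agent.sinks.sink1.hdfs.path = hdfs://nn1.example.com:8020/flume/events
agent.sinks.sink1.channel = ch1

agent.sinks.sink2.type = hdfs
agent.sinks.sink2.hdfs.path = hdfs://nn2.example.com:8020/flume/events
agent.sinks.sink2.channel = ch2
```

Note that with memory channels the events replicated toward the standby-side sink will pile up until that channel fills; sizing the channels (or using file channels) accordingly is something to keep in mind with this workaround.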
HTH!