Flink Parquet sink: missing Avro-related Parquet classes
I am trying to do stream processing with Apache Flink, following the StreamingFileSink tutorial.
The docs say I can specify my Avro POJO class and save my stream as Parquet files (with an Avro schema) on the local filesystem.
看起来是这样的:
val sink: StreamingFileSink[AvroPojo] = StreamingFileSink
  .forBulkFormat(
    new Path("/base/path"),
    ParquetAvroWriters.forSpecificRecord(classOf[AvroPojo]))
  .build()
After I added the following (tested with both 1.9 and 1.8; with 1.9 I used flink-parquet_2.11):
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-parquet</artifactId>
<version>${flink.version}</version>
</dependency>
the code compiles and I can reference ParquetAvroWriters, but at runtime it fails because ParquetAvroWriters is missing:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/flink/formats/parquet/avro/ParquetAvroWriters
So I had to add the parquet-avro dependency manually.
That then gives me:
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.conf.Configuration
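For context, `org.apache.hadoop.conf.Configuration` lives in the Hadoop client libraries (hadoop-common, which hadoop-client pulls in transitively), not in any Flink or Parquet artifact. A minimal sketch of a dependency that supplies it, assuming a Hadoop 2.x line compatible with Flink 1.8/1.9 (the exact version is an assumption, not something from the original post):

```xml
<!-- Assumed fix: supplies org.apache.hadoop.conf.Configuration,
     which parquet-avro needs at runtime. Version is illustrative. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.6.5</version>
</dependency>
```

If the job is later submitted to a cluster whose Flink distribution already ships Hadoop classes, marking this dependency as `<scope>provided</scope>` avoids bundling a second copy into the fat jar.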
Update:
pom.xml
<dependencies>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-java_${scala.compat}</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-scala_${scala.compat}</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-runtime-web_${scala.compat}</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch</artifactId>
<version>6.7.0</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>transport</artifactId>
<version>6.7.0</version>
</dependency>
<dependency>
<groupId>joda-time</groupId>
<artifactId>joda-time</artifactId>
<version>2.7</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-avro</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-parquet_${scala.compat}</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.parquet</groupId>
<artifactId>parquet-avro</artifactId>
<version>1.10.0</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>2.6.5</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.pulsar/pulsar-common -->
<dependency>
<groupId>org.apache.pulsar</groupId>
<artifactId>pulsar-common</artifactId>
<version>2.4.1</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-api</artifactId>
<version>2.6.1</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
<version>2.6.1</version>
</dependency>
</dependencies>
Questions: 1. How can I build and run this code without configuring Hadoop? Or is there any other way to save data to the local filesystem as Parquet with an Avro schema?
0 Answers