Integrating Iceberg with Flink

Versions used:

Flink: 1.11.0
Iceberg: 0.11.1
Hive: 2.3.8
Hadoop: 3.2.2
Java: 1.8
Scala: 2.11

I. Download or build the iceberg-flink-runtime jar

Download it:

wget https://repo.maven.apache.org/maven2/org/apache/iceberg/iceberg-flink-runtime/0.11.1/iceberg-flink-runtime-0.11.1.jar

Or build it from source:

git clone https://github.com/apache/iceberg.git
./gradlew build -x test

 

II. Start Hadoop and Flink

export HADOOP_CLASSPATH=`$HADOOP_HOME/bin/hadoop classpath`
${HADOOP_HOME}/sbin/start-all.sh
${FLINK_HOME}/bin/start-cluster.sh

III. Flink SQL operations

1. Start the SQL client

${FLINK_HOME}/bin/sql-client.sh embedded -j iceberg-flink-runtime-0.11.1.jar -j flink-sql-connector-hive-2.3.6_2.11-1.11.0.jar shell

2. Create a catalog

Temporary (lives only for the current session):

create catalog iceberg with('type'='iceberg',
  'catalog-type'='hive',
  'uri'='thrift://rick-82lb:9083',
  'clients'='5',
  'property-version'='1',
  'warehouse'='hdfs:///user/hive/warehouse');
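To confirm the catalog is registered and switch into it, the standard Flink SQL client statements can be used (these are not part of the original post):

```sql
show catalogs;
use catalog iceberg;
```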

Permanent (add to ${FLINK_HOME}/conf/sql-client-defaults.yaml, which the SQL client reads on startup):

catalogs:
  - name: iceberg
    type: iceberg
    warehouse: hdfs:///user/hive2/warehouse
    uri: thrift://rick-82lb:9083
    catalog-type: hive

3. Create a database and a table

create database iceberg.test;
create table iceberg.test.t20(id bigint);
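Iceberg's Flink DDL also supports partitioned tables. A hypothetical variant of the table above, partitioned by a date column (`t20_partitioned` and `dt` are illustrative names, not from the original post):

```sql
create table iceberg.test.t20_partitioned (
  id bigint,
  dt string
) partitioned by (dt);
```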

4. Insert data

insert into iceberg.test.t20 values (10);
insert into iceberg.test.t20 values (20);
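The rows can then be read back from the same client (how the query executes, batch or streaming, depends on the client's execution settings):

```sql
select * from iceberg.test.t20;
```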


After these two inserts, the t20 directory looks like this; the individual files are described in more detail later:

t20
├── data
│   ├── 00000-0-9c7ff22e-a767-4b85-91ec-a2771e54c209-00001.parquet
│   └── 00000-0-ecd3f21c-1bc0-4cdc-8917-d9a1afe7ce55-00001.parquet
└── metadata
    ├── 00000-d864e750-e5e2-4afd-bddb-2fab1e627a21.metadata.json
    ├── 00001-aabfd9a8-7dcd-4aa0-99aa-f6695f39bf6b.metadata.json
    ├── 00002-b5b7725f-7e86-454b-8d16-0e142bc84266.metadata.json
    ├── 0254b8b6-4d76-473c-86c2-97acda68d587-m0.avro
    ├── f787e035-8f7c-43a3-b264-42057bad2710-m0.avro
    ├── snap-6190364701448945732-1-0254b8b6-4d76-473c-86c2-97acda68d587.avro
    └── snap-6460256963744122971-1-f787e035-8f7c-43a3-b264-42057bad2710.avro
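Each insert above committed one snapshot: a new metadata.json, a manifest list (the snap-*.avro files), a manifest (*-m0.avro), and a Parquet data file. A minimal sketch of how a reader resolves the current snapshot from the metadata.json, using a hand-trimmed dict in place of the real file (the field names follow the Iceberg v1 table spec; the snapshot IDs and file names are taken from the listing above, the rest is illustrative):

```python
# Trimmed-down stand-in for the latest metadata.json of table t20.
# Each INSERT commits one snapshot; each snapshot points at a manifest list.
metadata = {
    "format-version": 1,
    "location": "hdfs:///user/hive/warehouse/test.db/t20",
    "current-snapshot-id": 6460256963744122971,
    "snapshots": [
        {"snapshot-id": 6190364701448945732,
         "manifest-list": "metadata/snap-6190364701448945732-1-0254b8b6-4d76-473c-86c2-97acda68d587.avro"},
        {"snapshot-id": 6460256963744122971,
         "manifest-list": "metadata/snap-6460256963744122971-1-f787e035-8f7c-43a3-b264-42057bad2710.avro"},
    ],
}

# Resolve the current snapshot the way a reader would, then follow its
# manifest list to find the manifests and, from there, the data files.
current = next(s for s in metadata["snapshots"]
               if s["snapshot-id"] == metadata["current-snapshot-id"])
print(current["manifest-list"])
```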

Original post (Chinese): https://www.cnblogs.com/codetouse/p/14758942.html