The VMware Tanzu Greenplum Connector for Apache Spark provides high-speed, parallel data transfer between Greenplum Database and an Apache Spark cluster, using Spark's Scala API for programmatic access.
Refer to the VMware Tanzu Greenplum documentation for detailed information about Greenplum Database.
See the Apache Spark documentation for information about Apache Spark version 2.4.
The following table identifies the supported component versions for the VMware Tanzu Greenplum Connector for Apache Spark 2.x:
| Connector Version | Greenplum Version | Spark Version | Scala Version | PostgreSQL JDBC Driver Version |
|-------------------|-------------------|---------------|---------------|--------------------------------|
| 2.1.1             | 5.x, 6.x          | 2.3.x, 2.4.x  |               |                                |
| 2.1.0, 2.0        | 5.x, 6.x          | 2.3.x, 2.4.x  |               | 42.2.14                        |
The Connector is certified against the Greenplum, Spark, and Scala versions listed above. The Connector is bundled with, and certified against, the listed PostgreSQL JDBC driver version.
Released: May 4, 2022
VMware Tanzu Greenplum Connector for Apache Spark 2.1.1 includes a change and bug fixes.
VMware Tanzu Greenplum Connector for Apache Spark 2.1.1 includes this change:
The following issues were resolved in version 2.1.1:
| Bug ID | Description |
|--------|-------------|
| 32201  | Resolves an issue where the Connector, when reading from Greenplum Database, dropped a data row when the first column started with the |
| 32186  | Resolves an issue where the Connector returned a |
Released: November 24, 2020
VMware Tanzu Greenplum Connector for Apache Spark 2.1.0 includes new and changed features and bug fixes.
VMware Tanzu Greenplum Connector for Apache Spark 2.1.0 includes this new and changed feature:
The Connector now uses external temporary tables when it loads data between Greenplum and Spark. Benefits include the following:
- The Greenplum user reading a table is no longer required to have `CREATE` privileges on the schema in which the accessed Greenplum table resides.
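As a sketch of a write that goes through this temporary-external-table path, the snippet below builds the connection options for the Connector's `greenplum` data source. All values (host, credentials, schema, and table names) are placeholders, and the option names reflect the Connector's documented options; the actual write call requires a live SparkSession and is shown as a comment.

```scala
// Illustrative connection options for the Connector's "greenplum" data source.
// Every value below is a placeholder, not a real endpoint or credential.
val writeOptions = Map(
  "url"      -> "jdbc:postgresql://gpmaster:5432/testdb", // Greenplum master (placeholder)
  "user"     -> "gpuser",
  "password" -> "changeme",
  "dbschema" -> "public",       // schema of the target table
  "dbtable"  -> "orders_copy"   // target Greenplum table
)

// With a SparkSession and a DataFrame `ordersDf` in scope, the write itself
// would look like (commented out because it needs a running cluster):
// ordersDf.write.format("greenplum").options(writeOptions).mode("append").save()
```

Because version 2.1.0 stages the transfer through external *temporary* tables, a write like this no longer leaves behind external tables that need manual cleanup.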
The following issues were resolved in VMware Tanzu Greenplum Connector for Apache Spark version 2.1.0:
| Bug ID | Description |
|--------|-------------|
| 31083  | Resolves an issue where the Connector failed to read data from Greenplum Database when the |
| 31075  | The developer had no way to specify the schema in which the Connector created its external tables; the Connector always created external tables in the same schema as the Greenplum table. An undesirable side effect of this behavior was that the Greenplum user reading a table was required to have `CREATE` privileges on that schema. |
Released: September 30, 2020
VMware Tanzu Greenplum Connector for Apache Spark 2.0.0 includes new and changed features and bug fixes.
VMware Tanzu Greenplum Connector for Apache Spark 2.0.0 includes these new and changed features:
- The Connector is certified against the Scala, Spark, and JDBC driver versions identified in Supported Platforms above.
- The Connector is now bundled with the PostgreSQL JDBC driver version 42.2.14.
- The Connector package that you download from Tanzu Network is now a `.tar.gz` file that includes the product open source license and the Connector JAR file. The naming format of the file is
- The default `gpfdist` server connection activity timeout changes from 30 seconds to 5 minutes.
- A new `server.timeout` option is provided that a developer can use to specify the `gpfdist` server connection activity timeout.
- The Connector improves read performance from Greenplum Database by using the internal Greenplum table column named `gp_segment_id` as the default `partitionColumn` when the developer does not specify this option.
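A minimal read sketch tying these two options together. The `gp_segment_id` partition column is the 2.x default and is set here only to make that default explicit; the `server.timeout` value and unit are assumptions for illustration, as are the host, credentials, and table names. The read call itself needs a SparkSession and is shown as a comment.

```scala
// Illustrative read options for the Connector's "greenplum" data source.
// gp_segment_id is the default partitionColumn in 2.x, so listing it is
// optional; server.timeout overrides the new 5-minute gpfdist default
// (value and unit here are illustrative assumptions).
val readOptions = Map(
  "url"             -> "jdbc:postgresql://gpmaster:5432/testdb", // placeholder
  "user"            -> "gpuser",
  "password"        -> "changeme",
  "dbschema"        -> "public",
  "dbtable"         -> "orders",
  "partitionColumn" -> "gp_segment_id", // the 2.x default when unset
  "server.timeout"  -> "300000"         // gpfdist activity timeout (assumed milliseconds)
)

// With a SparkSession in scope (commented out; needs a running cluster):
// val ordersDf = spark.read.format("greenplum").options(readOptions).load()
```

Letting `partitionColumn` default to `gp_segment_id` spreads the read across Greenplum segments without requiring a suitable integer column in the user's table.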
The following issues were resolved in VMware Tanzu Greenplum Connector for Apache Spark version 2.0.0:
| Bug ID | Description |
|--------|-------------|
| 30731  | Resolved an issue where the Connector timed out with a serialization exception when writing aggregated results to Greenplum Database. The Connector now exposes the |
| 174495848 | Resolved an issue where predicate pushdown was not working correctly because the Connector did not use parentheses to join the predicates together when it constructed the filter string. |
The VMware Tanzu Greenplum Connector for Apache Spark version 2.x removes:
- The `connector.port` option (deprecated in 1.6).
- The `partitionsPerSegment` option (deprecated in 1.5).
Known issues and limitations related to the 2.x release of the VMware Tanzu Greenplum Connector for Apache Spark include the following:
`partitionColumn` (the default) when reading data from Greenplum Database and mirroring is enabled in the Greenplum cluster.