This recipe shows how Spark DataFrames can be read from or written to relational database tables with Java Database Connectivity (JDBC).
The DataFrames API provides a tabular view of data that allows you to use common relational database patterns at a higher abstraction than the low-level Spark Core API. Since a DataFrame is already a columnar, table-like abstraction, it maps naturally onto a real relational database table, and Spark provides built-in methods to read and write DataFrames over a JDBC connection.
```shell
# Download the using-jdbc source code to the home directory.
cd ~
wget https://sparkour.urizone.net/files/using-jdbc.zip

# Unzip, creating /opt/sparkour/using-jdbc
sudo unzip using-jdbc.zip -d /opt

# Update permissions
sudo chown -R ec2-user:ec2-user /opt/sparkour
```
```sql
-- This SQL script should be run as a database user with permission to
-- create new databases, tables, and users.

-- Create test database
create database sparkour;

-- Create a user representing your Spark cluster
-- Use % as a wildcard when specifying an IP subnet, such as '123.456.78.%'
create user 'sparkour'@'<subnetOfSparkCluster>' identified by '<password>';

-- Add privileges for the Spark cluster
grant create, delete, drop, insert, select, update on sparkour.* to 'sparkour'@'<subnetOfSparkCluster>';
flush privileges;

-- Create a test table of physical characteristics.
use sparkour;
create table people (
    id int(10) not null auto_increment,
    name char(50) not null,
    is_male tinyint(1) not null,
    height_in int(4) not null,
    weight_lb int(4) not null,
    primary key (id),
    key (id)
);

-- Create sample data to load into a DataFrame
insert into people values (null, 'Alice', 0, 60, 125);
insert into people values (null, 'Brian', 1, 64, 131);
insert into people values (null, 'Charlie', 1, 74, 183);
insert into people values (null, 'Doris', 0, 58, 102);
insert into people values (null, 'Ellen', 0, 66, 140);
insert into people values (null, 'Frank', 1, 66, 151);
insert into people values (null, 'Gerard', 1, 68, 190);
insert into people values (null, 'Harold', 1, 61, 128);
```
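If you don't have a MySQL server available, a lightweight way to follow along is to build an equivalent table with SQLite, which Spark can also reach over a `jdbc:sqlite:` URL. The sketch below uses Python's standard sqlite3 module; note that the schema is only an approximation, since SQLite's `INTEGER PRIMARY KEY` stands in for MySQL's `auto_increment` and its type affinities collapse `tinyint`/`int(N)` into `INTEGER`.

```python
# Build an approximate "people" test table with Python's standard sqlite3
# module, for readers without a MySQL server. SQLite approximates the MySQL
# schema: INTEGER PRIMARY KEY auto-increments, and tinyint/int(N) columns
# all take INTEGER affinity.
import sqlite3

conn = sqlite3.connect("sparkour.db")  # matches a jdbc:sqlite:sparkour.db URL
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS people (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        is_male INTEGER NOT NULL,
        height_in INTEGER NOT NULL,
        weight_lb INTEGER NOT NULL
    )
""")
people = [
    ("Alice", 0, 60, 125), ("Brian", 1, 64, 131),
    ("Charlie", 1, 74, 183), ("Doris", 0, 58, 102),
    ("Ellen", 0, 66, 140), ("Frank", 1, 66, 151),
    ("Gerard", 1, 68, 190), ("Harold", 1, 61, 128),
]
cur.executemany(
    "INSERT INTO people (name, is_male, height_in, weight_lb) VALUES (?, ?, ?, ?)",
    people)
conn.commit()
```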
```
jdbcUrl=jdbc:mysql://<hostname>:3306/sparkour
user=sparkour
password=<password>
```
```json
{
    "jdbcUrl": "jdbc:mysql://<hostname>:3306/sparkour",
    "user": "sparkour",
    "password": "<password>"
}
```
```
jdbc:mysql://<hostname>:3306/sparkour
jdbc:postgresql://<hostname>:5432/sparkour
jdbc:sqlite:sparkour.db
jdbc:oracle:thin:@<hostname>:1521:sparkour
```
```shell
wget http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.38/mysql-connector-java-5.1.38.jar
wget http://central.maven.org/maven2/postgresql/postgresql/9.1-901-1.jdbc4/postgresql-9.1-901-1.jdbc4.jar

# Oracle now requires you to log in to their portal to download the thin client JAR file.
```
```
spark.driver.extraClassPath /someDirectoryOnClusterNode/mysql-connector-java-5.1.38.jar
spark.executor.extraClassPath /someDirectoryOnClusterNode/mysql-connector-java-5.1.38.jar
```
```shell
cd /opt/sparkour/using-jdbc

# Run in Local Mode
./sparkour.sh java --driver-class-path lib/mysql-connector-java-5.1.38.jar

# Run against a Spark cluster
./sparkour.sh java --master spark://ip-172-31-24-101:7077
```
```shell
cd /opt/sparkour/using-jdbc

# Run in Local Mode
./sparkour.sh python --driver-class-path lib/mysql-connector-java-5.1.38.jar

# Run against a Spark cluster
./sparkour.sh python --master spark://ip-172-31-24-101:7077
```
```shell
cd /opt/sparkour/using-jdbc

# Run in Local Mode
./sparkour.sh r --driver-class-path lib/mysql-connector-java-5.1.38.jar

# Run against a Spark cluster
./sparkour.sh r --master spark://ip-172-31-24-101:7077
```
```shell
cd /opt/sparkour/using-jdbc

# Run in Local Mode
./sparkour.sh scala --driver-class-path lib/mysql-connector-java-5.1.38.jar

# Run against a Spark cluster
./sparkour.sh scala --master spark://ip-172-31-24-101:7077
```
If you encounter this error while running your application, the node running the application cannot find your JDBC library. If you're running in Local mode, make sure that you have used the --driver-class-path parameter. If a Spark cluster is involved, make sure that each cluster member has a copy of the library, and that each node has been restarted since you modified the spark-defaults.conf file.
```java
// Load properties from file
Properties dbProperties = new Properties();
dbProperties.load(new FileInputStream(new File("db-properties.flat")));
String jdbcUrl = dbProperties.getProperty("jdbcUrl");
```
```python
# Load properties from file
with open('db-properties.json') as propertyFile:
    properties = json.load(propertyFile)
jdbcUrl = properties["jdbcUrl"]
dbProperties = {
    "user" : properties["user"],
    "password" : properties["password"]
}
```
```r
# Load properties from file (fromJSON comes from the rjson package)
properties <- fromJSON(file="db-properties.json")
jdbcUrl <- paste(properties["jdbcUrl"], "?user=", properties["user"],
    "&password=", properties["password"], sep="")
```
```scala
// Load properties from file
val dbProperties = new Properties
dbProperties.load(new FileInputStream(new File("db-properties.flat")))
val jdbcUrl = dbProperties.getProperty("jdbcUrl")
```
```java
System.out.println("A DataFrame loaded from the entire contents of a table over JDBC.");
String where = "sparkour.people";
Dataset<Row> entireDF = spark.read().jdbc(jdbcUrl, where, dbProperties);
entireDF.printSchema();
entireDF.show();
```
```python
print("A DataFrame loaded from the entire contents of a table over JDBC.")
where = "sparkour.people"
entireDF = spark.read.jdbc(jdbcUrl, where, properties=dbProperties)
entireDF.printSchema()
entireDF.show()
```
```r
print("A DataFrame loaded from the entire contents of a table over JDBC.")
where <- "sparkour.people"
entireDF <- read.jdbc(url=jdbcUrl, tableName=where)
printSchema(entireDF)
print(collect(entireDF))
```
```scala
println("A DataFrame loaded from the entire contents of a table over JDBC.")
var where = "sparkour.people"
val entireDF = spark.read.jdbc(jdbcUrl, where, dbProperties)
entireDF.printSchema()
entireDF.show()
```
```
A DataFrame loaded from the entire contents of a table over JDBC.
root
 |-- id: integer (nullable = false)
 |-- name: string (nullable = false)
 |-- is_male: boolean (nullable = false)
 |-- height_in: integer (nullable = false)
 |-- weight_lb: integer (nullable = false)

+---+-------+-------+---------+---------+
| id|   name|is_male|height_in|weight_lb|
+---+-------+-------+---------+---------+
|  1|  Alice|  false|       60|      125|
|  2|  Brian|   true|       64|      131|
|  3|Charlie|   true|       74|      183|
|  4|  Doris|  false|       58|      102|
|  5|  Ellen|  false|       66|      140|
|  6|  Frank|   true|       66|      151|
|  7| Gerard|   true|       68|      190|
|  8| Harold|   true|       61|      128|
+---+-------+-------+---------+---------+
```
```java
System.out.println("Filtering the table to just show the males.");
entireDF.filter("is_male = 1").show();
```
```python
print("Filtering the table to just show the males.")
entireDF.filter("is_male = 1").show()
```
```r
print("Filtering the table to just show the males.")
print(collect(filter(entireDF, "is_male = 1")))
```
```scala
println("Filtering the table to just show the males.")
entireDF.filter("is_male = 1").show()
```
```
Filtering the table to just show the males.
+---+-------+-------+---------+---------+
| id|   name|is_male|height_in|weight_lb|
+---+-------+-------+---------+---------+
|  2|  Brian|   true|       64|      131|
|  3|Charlie|   true|       74|      183|
|  6|  Frank|   true|       66|      151|
|  7| Gerard|   true|       68|      190|
|  8| Harold|   true|       61|      128|
+---+-------+-------+---------+---------+
```
```java
System.out.println("Alternately, pre-filter the table for males before loading over JDBC.");
where = "(select * from sparkour.people where is_male = 1) as subset";
Dataset<Row> malesDF = spark.read().jdbc(jdbcUrl, where, dbProperties);
malesDF.show();
```
```python
print("Alternately, pre-filter the table for males before loading over JDBC.")
where = "(select * from sparkour.people where is_male = 1) as subset"
malesDF = spark.read.jdbc(jdbcUrl, where, properties=dbProperties)
malesDF.show()
```
```r
print("Alternately, pre-filter the table for males before loading over JDBC.")
where <- "(select * from sparkour.people where is_male = 1) as subset"
malesDF <- read.jdbc(url=jdbcUrl, tableName=where)
print(collect(malesDF))
```
```scala
println("Alternately, pre-filter the table for males before loading over JDBC.")
where = "(select * from sparkour.people where is_male = 1) as subset"
val malesDF = spark.read.jdbc(jdbcUrl, where, dbProperties)
malesDF.show()
```
```
Alternately, pre-filter the table for males before loading over JDBC.
+---+-------+-------+---------+---------+
| id|   name|is_male|height_in|weight_lb|
+---+-------+-------+---------+---------+
|  2|  Brian|   true|       64|      131|
|  3|Charlie|   true|       74|      183|
|  6|  Frank|   true|       66|      151|
|  7| Gerard|   true|       68|      190|
|  8| Harold|   true|       61|      128|
+---+-------+-------+---------+---------+
```
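The pre-filtering trick works because the table parameter is interpolated into the FROM clause of the query Spark generates, so a parenthesized subquery with an alias is acceptable anywhere a table name is. The same mechanism can be demonstrated with plain SQL; the sketch below uses Python's standard sqlite3 module with a small stand-in for the people table.

```python
# Demonstrate that a parenthesized, aliased subquery is legal wherever a
# table name is expected in a FROM clause -- the mechanism behind passing
# "(select ...) as subset" as the table parameter. sqlite3 stands in for
# the MySQL database here.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT, is_male INTEGER);
    INSERT INTO people (name, is_male) VALUES
        ('Alice', 0), ('Brian', 1), ('Charlie', 1);
""")

# The same shape of string you would hand to spark.read.jdbc
where = "(select * from people where is_male = 1) as subset"
males = conn.execute("SELECT name FROM " + where).fetchall()
# males is [('Brian',), ('Charlie',)]
```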
The DataFrame class exposes a DataFrameWriter named write which can be used to save a DataFrame as a database table (even if the DataFrame didn't originate from that database). There are four available write modes, with error being the default:

- error: throw an exception if the target table already exists
- append: insert the DataFrame's rows into the existing table
- overwrite: drop the existing table and recreate it with the DataFrame's contents
- ignore: silently skip the write if the target table already exists
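As an illustration of these semantics (not Spark's actual implementation), the decision each mode makes when the target table already exists can be emulated with Python's sqlite3 module; the save helper below is hypothetical.

```python
# Hypothetical helper emulating DataFrameWriter save-mode semantics against
# a sqlite3 connection: "error" raises, "ignore" is a no-op, "overwrite"
# drops and recreates the table, "append" adds rows. Illustration only.
import sqlite3

def save(conn, table, rows, mode="error"):
    exists = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type='table' AND name=?", (table,)
    ).fetchone() is not None
    if exists and mode == "error":
        raise RuntimeError("table %s already exists" % table)
    if exists and mode == "ignore":
        return
    if exists and mode == "overwrite":
        conn.execute("DROP TABLE %s" % table)
        exists = False
    if not exists:
        conn.execute("CREATE TABLE %s (name TEXT, weight_lb INTEGER)" % table)
    # Every mode ultimately INSERTs new rows -- there is no UPDATE path.
    conn.executemany("INSERT INTO %s VALUES (?, ?)" % table, rows)

conn = sqlite3.connect(":memory:")
save(conn, "people_copy", [("Alice", 125)])               # creates the table
save(conn, "people_copy", [("Brian", 131)], "append")     # now 2 rows
save(conn, "people_copy", [("Doris", 102)], "overwrite")  # back to 1 row
```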
The underlying behavior of the write modes is a bit finicky. You should consider using only error mode and creating new copies of existing tables until you are very confident about the expected behavior of the other modes. In particular, it's important to note that all write operations involve the INSERT SQL statement, so there is no way to use the DataFrameWriter to UPDATE existing rows.
To demonstrate writing to a table with JDBC, let's start with our people table. It turns out that the source data was improperly measured, and everyone in the table is actually 2 pounds heavier than the data suggests. We load the data into a DataFrame, add 2 pounds to every weight value, and then save the new data into a new database table. Remember that Spark RDDs (the low-level data structure underneath the DataFrame) are immutable, so these operations involve making new DataFrames rather than updating the existing one.
```java
System.out.println("Update weights by 2 pounds (results in a new DataFrame with same column names)");
Dataset<Row> heavyDF = entireDF.withColumn("updated_weight_lb", entireDF.col("weight_lb").plus(2));
Dataset<Row> updatedDF = heavyDF.select("id", "name", "is_male", "height_in", "updated_weight_lb")
    .withColumnRenamed("updated_weight_lb", "weight_lb");
updatedDF.show();
```
```python
print("Update weights by 2 pounds (results in a new DataFrame with same column names)")
heavyDF = entireDF.withColumn("updated_weight_lb", entireDF["weight_lb"] + 2)
updatedDF = heavyDF.select("id", "name", "is_male", "height_in", "updated_weight_lb") \
    .withColumnRenamed("updated_weight_lb", "weight_lb")
updatedDF.show()
```
```r
print("Update weights by 2 pounds (results in a new DataFrame with same column names)")
heavyDF <- withColumn(entireDF, "updated_weight_lb", entireDF$weight_lb + 2)
selectDF <- select(heavyDF, "id", "name", "is_male", "height_in", "updated_weight_lb")
updatedDF <- withColumnRenamed(selectDF, "updated_weight_lb", "weight_lb")
print(collect(updatedDF))
```
```scala
println("Update weights by 2 pounds (results in a new DataFrame with same column names)")
val heavyDF = entireDF.withColumn("updated_weight_lb", entireDF("weight_lb") + 2)
val updatedDF = heavyDF.select("id", "name", "is_male", "height_in", "updated_weight_lb")
    .withColumnRenamed("updated_weight_lb", "weight_lb")
updatedDF.show()
```
```
Update weights by 2 pounds (results in a new DataFrame with same column names)
+---+-------+-------+---------+---------+
| id|   name|is_male|height_in|weight_lb|
+---+-------+-------+---------+---------+
|  1|  Alice|  false|       60|      127|
|  2|  Brian|   true|       64|      133|
|  3|Charlie|   true|       74|      185|
|  4|  Doris|  false|       58|      104|
|  5|  Ellen|  false|       66|      142|
|  6|  Frank|   true|       66|      153|
|  7| Gerard|   true|       68|      192|
|  8| Harold|   true|       61|      130|
+---+-------+-------+---------+---------+
```
```java
System.out.println("Save the updated data to a new table with JDBC");
where = "sparkour.updated_people";
updatedDF.write().mode("error").jdbc(jdbcUrl, where, dbProperties);

System.out.println("Load the new table into a new DataFrame to confirm that it was saved successfully.");
Dataset<Row> retrievedDF = spark.read().jdbc(jdbcUrl, where, dbProperties);
retrievedDF.show();
```
```python
print("Save the updated data to a new table with JDBC")
where = "sparkour.updated_people"
updatedDF.write.jdbc(jdbcUrl, where, properties=dbProperties, mode="error")

print("Load the new table into a new DataFrame to confirm that it was saved successfully.")
retrievedDF = spark.read.jdbc(jdbcUrl, where, properties=dbProperties)
retrievedDF.show()
```
```r
print("Save the updated data to a new table with JDBC")
where <- "sparkour.updated_people"
write.jdbc(updatedDF, jdbcUrl, tableName=where)

print("Load the new table into a new DataFrame to confirm that it was saved successfully.")
retrievedDF <- read.jdbc(url=jdbcUrl, tableName=where)
print(collect(retrievedDF))
```
```scala
println("Save the updated data to a new table with JDBC")
where = "sparkour.updated_people"
updatedDF.write.mode("error").jdbc(jdbcUrl, where, dbProperties)

println("Load the new table into a new DataFrame to confirm that it was saved successfully.")
val retrievedDF = spark.read.jdbc(jdbcUrl, where, dbProperties)
retrievedDF.show()
```
```
Save the updated data to a new table with JDBC
Load the new table into a new DataFrame to confirm that it was saved successfully.
+---+-------+-------+---------+---------+
| id|   name|is_male|height_in|weight_lb|
+---+-------+-------+---------+---------+
|  1|  Alice|  false|       60|      127|
|  2|  Brian|   true|       64|      133|
|  3|Charlie|   true|       74|      185|
|  4|  Doris|  false|       58|      104|
|  5|  Ellen|  false|       66|      142|
|  6|  Frank|   true|       66|      153|
|  7| Gerard|   true|       68|      192|
|  8| Harold|   true|       61|      130|
+---+-------+-------+---------+---------+
```
```sql
DROP TABLE sparkour.updated_people;
```
You may encounter this error when trying to write to a JDBC table with R's write.df() function in Spark 1.6 or lower. You need to upgrade to Spark 2.x to write to tables in R.
If you compare the schemas of the two tables, you'll notice slight differences.
```
mysql> desc people;
+-----------+------------+------+-----+---------+----------------+
| Field     | Type       | Null | Key | Default | Extra          |
+-----------+------------+------+-----+---------+----------------+
| id        | int(10)    | NO   | PRI | NULL    | auto_increment |
| name      | char(50)   | NO   |     | NULL    |                |
| is_male   | tinyint(1) | NO   |     | NULL    |                |
| height_in | int(4)     | NO   |     | NULL    |                |
| weight_lb | int(4)     | NO   |     | NULL    |                |
+-----------+------------+------+-----+---------+----------------+
5 rows in set (0.00 sec)

mysql> desc updated_people;
+-----------+---------+------+-----+---------+-------+
| Field     | Type    | Null | Key | Default | Extra |
+-----------+---------+------+-----+---------+-------+
| id        | int(11) | NO   |     | NULL    |       |
| name      | text    | NO   |     | NULL    |       |
| is_male   | bit(1)  | NO   |     | NULL    |       |
| height_in | int(11) | NO   |     | NULL    |       |
| weight_lb | int(11) | NO   |     | NULL    |       |
+-----------+---------+------+-----+---------+-------+
5 rows in set (0.00 sec)
```
When a DataFrame is loaded from a table, its schema is inferred from the table's schema, which may result in an imperfect match when the DataFrame is written back to the database. Most noticeable in our example are the loss of the auto_increment sequence and the primary key, and the changes to each column's datatype.
These differences may cause some write modes to fail in unexpected ways. It's best to consider JDBC read/write operations to be one-way operations that should not use the same database table as both the source and the target, unless the table was originally generated by Spark from the same DataFrame.
Spot any inconsistencies or errors? See things that could be explained better or code that could be written more idiomatically? If so, please help me improve Sparkour by opening a ticket on the Issues page. You can also discuss this recipe with others in the Sparkour community on the Discussion page.