How to find the number of months between dates in PySpark Azure Databricks?

Are you looking to find the number of months between two dates in a PySpark DataFrame on Azure Databricks, or perhaps for a way to get the difference in months between two date columns in PySpark on Databricks using the months_between() function? If you are looking for a solution to any of these problems, you have landed on the correct page. I will also show you how to use the PySpark months_between() function with multiple examples in Azure Databricks, and I will explain it with practical examples. So don’t waste any time; let’s start with a step-by-step guide to understanding how to use the months_between() function in PySpark.

In this blog, I will teach you the following with practical examples:

  • Syntax of months_between()
  • Month difference using DataFrame
  • Month difference using SQL expression

The PySpark months_between() function is used to get the number of months between a start date and an end date.

Syntax:

months_between()

PySpark’s DateTime functions support both DataFrame and SQL usage, very similar to traditional SQL. If you work with data extraction, transformation, and loading, you should have a good understanding of SQL Date functions.

What is the syntax of the months_between() function in PySpark Azure Databricks?

The syntax is as follows:

months_between(end_date, start_date, roundOff=True)

Parameter Name              Required    Description
end_date (str, Column)      Yes         It represents the ending date.
start_date (str, Column)    Yes         It represents the starting date.
roundOff (bool)             Optional    It represents whether the result should be rounded off to 8 digits (True by default).

Table 1: months_between() Method in PySpark Databricks Parameter list with Details

Apache Spark Official Documentation Link: months_between()
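
Before creating any data, here is a minimal sketch of the roundOff parameter in action, run against two literal dates of my own choosing (these dates are not part of the dataset used below, and the sketch assumes a spark session is already available):

from pyspark.sql.functions import months_between, lit

# Compare the default 8-digit rounding with the full-precision result.
spark.range(1).select(
    months_between(lit("2021-04-12"), lit("2019-01-11")).alias("rounded"),
    months_between(lit("2021-04-12"), lit("2019-01-11"), roundOff=False).alias("raw")
).show(truncate=False)

# The rounded value should be approximately 27.03225806 (27 full months plus
# 1 day out of an assumed 31-day month); the raw value keeps the unrounded
# decimal places.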

Create a simple DataFrame

Let’s understand the use of the months_between() function with a variety of examples. Let’s start by creating a DataFrame.

Gentle reminder:

In Databricks,

  • the SparkSession is made available as spark
  • the SparkContext is made available as sc

In case you want to create it manually, use the below code.

from pyspark.sql.session import SparkSession

spark = SparkSession.builder \
    .master("local[*]") \
    .appName("azurelib.com") \
    .getOrCreate()

sc = spark.sparkContext

a) Create manual PySpark DataFrame

data = [
    ("2019-01-11","2021-04-12","2019-09-17 12:02:21","2021-07-12 18:29:29"),
    ("2019-08-04","2021-04-15","2018-11-11 14:17:05","2021-08-03 16:21:40"),
    ("2019-03-24","2021-02-08","2019-02-07 04:26:49","2020-11-28 05:20:33"),
    ("2019-04-13","2021-06-05","2019-07-08 20:04:09","2021-05-18 08:21:12"),
    ("2019-02-22","2021-10-01","2018-11-28 05:46:54","2021-06-17 21:39:42")
]

columns = ["from_date","to_date","from_datetime","to_datetime"]
df = spark.createDataFrame(data, schema=columns)
df.printSchema()
df.show(truncate=False)

"""
root
 |-- from_date: string (nullable = true)
 |-- to_date: string (nullable = true)
 |-- from_datetime: string (nullable = true)
 |-- to_datetime: string (nullable = true)

+----------+----------+-------------------+-------------------+
|from_date |to_date   |from_datetime      |to_datetime        |
+----------+----------+-------------------+-------------------+
|2019-01-11|2021-04-12|2019-09-17 12:02:21|2021-07-12 18:29:29|
|2019-08-04|2021-04-15|2018-11-11 14:17:05|2021-08-03 16:21:40|
|2019-03-24|2021-02-08|2019-02-07 04:26:49|2020-11-28 05:20:33|
|2019-04-13|2021-06-05|2019-07-08 20:04:09|2021-05-18 08:21:12|
|2019-02-22|2021-10-01|2018-11-28 05:46:54|2021-06-17 21:39:42|
+----------+----------+-------------------+-------------------+
"""

b) Creating a DataFrame by reading files

Download and use the below source file.

# replace the file_path with the source file location which you have downloaded.

df_2 = spark.read.format("csv").option("header", True).load(file_path)
df_2.printSchema()

"""
root
 |-- from_date: string (nullable = true)
 |-- to_date: string (nullable = true)
 |-- from_datetime: string (nullable = true)
 |-- to_datetime: string (nullable = true)
"""

Note: Here, I will be using the manually created DataFrame.
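
Also note that all four columns in this DataFrame are strings. months_between() implicitly casts strings in the standard yyyy-MM-dd (or timestamp) format, so the examples below work as-is. If you prefer explicit types, a minimal sketch of the cast (assuming the column names used above) would be:

from pyspark.sql.functions import to_date, to_timestamp

# Optional: convert the string columns to proper Date/Timestamp types.
df_typed = df \
    .withColumn("from_date", to_date("from_date")) \
    .withColumn("to_date", to_date("to_date")) \
    .withColumn("from_datetime", to_timestamp("from_datetime")) \
    .withColumn("to_datetime", to_timestamp("to_datetime"))

df_typed.printSchema()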

How to find the month difference between dates in PySpark Azure Databricks?

Let’s see how to find the month difference between two dates in PySpark using Azure Databricks.

Example 1:

# using select()

from pyspark.sql.functions import months_between, floor

df.select("from_date",
          floor(months_between("to_date", "from_date")).alias("months_between"),
          "to_date").show()
"""
Output:

+----------+--------------+----------+
| from_date|months_between|   to_date|
+----------+--------------+----------+
|2019-01-11|            27|2021-04-12|
|2019-08-04|            20|2021-04-15|
|2019-03-24|            22|2021-02-08|
|2019-04-13|            25|2021-06-05|
|2019-02-22|            31|2021-10-01|
+----------+--------------+----------+

"""

Example 2:

# using withColumn()

from pyspark.sql.functions import months_between, floor

df.withColumn("months_between", floor(months_between("to_datetime", "from_datetime"))) \
.select("to_datetime", "months_between", "from_datetime").show()

"""
Output:

+-------------------+--------------+-------------------+
|        to_datetime|months_between|      from_datetime|
+-------------------+--------------+-------------------+
|2021-07-12 18:29:29|            21|2019-09-17 12:02:21|
|2021-08-03 16:21:40|            32|2018-11-11 14:17:05|
|2020-11-28 05:20:33|            21|2019-02-07 04:26:49|
|2021-05-18 08:21:12|            22|2019-07-08 20:04:09|
|2021-06-17 21:39:42|            30|2018-11-28 05:46:54|
+-------------------+--------------+-------------------+

"""

How to find the month difference between dates in PySpark Azure Databricks using SQL expression?

Let’s see how to find the month difference between two dates using SQL expressions in PySpark Azure Databricks.

Example:

In order to use a raw SQL expression, we have to register our DataFrame as a temporary SQL view.

df.createOrReplaceTempView("days")

spark.sql("""
SELECT
    from_date,
    floor(months_between(to_date, from_date)) AS months_between,
    to_date
FROM days
""").show()

"""
Output:

+----------+--------------+----------+
| from_date|months_between|   to_date|
+----------+--------------+----------+
|2019-01-11|            27|2021-04-12|
|2019-08-04|            20|2021-04-15|
|2019-03-24|            22|2021-02-08|
|2019-04-13|            25|2021-06-05|
|2019-02-22|            31|2021-10-01|
+----------+--------------+----------+

"""

Note: In the above examples, I have used the floor() function to truncate the result to whole months. This is needed because the roundOff parameter does not remove the fraction; it only rounds the difference from its full-precision decimal value down to 8 digits, and it is True by default.

I have attached the complete code used in this blog in notebook format to this GitHub link. You can download and import this notebook into Databricks, Jupyter Notebook, etc.

When should you use the PySpark months_between() in Azure Databricks?

These could be the possible reasons:

  1. To find the month difference between dates in Date format
  2. To find the month difference between dates in DateTime format

Real World Use Case Scenarios for PySpark DataFrame months_between() in Azure Databricks

Assume that you have an employee dataset. The dataset has the employee’s ID, name, starting_date, and ending_date. You have been given a requirement to find out the number of months employees have been working in your organization. You can use the PySpark months_between() inbuilt function to find out the number of months in-between dates.
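
Here is a minimal sketch of that scenario; the employee records and column names below are hypothetical, invented purely for illustration:

from pyspark.sql.functions import months_between, floor

# Hypothetical employee data: id, name, starting_date, ending_date.
employees = spark.createDataFrame(
    [(1, "Arun", "2019-06-01", "2021-09-15"),
     (2, "Meena", "2020-01-20", "2021-03-10")],
    schema=["id", "name", "starting_date", "ending_date"]
)

# Whole months each employee has worked, truncated with floor().
employees.withColumn(
    "months_worked",
    floor(months_between("ending_date", "starting_date"))
).show()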

What are the alternatives to the months_between() function in PySpark Azure Databricks?

PySpark’s months_between() is the dedicated built-in function for finding the number of months between two dates, and it is explained in detail in the above section with multiple examples.

Final Thoughts

In this article, we have learned about the PySpark months_between() method of DataFrame in Azure Databricks, along with clearly explained examples. I have also covered the different scenarios that could come up, with practical examples. I hope the information provided helped you gain knowledge.

Please share your comments and suggestions in the comment section below and I will try to answer all your queries as time permits.

Arud Seka Berne S

As a big data engineer, I design and build scalable data processing systems and integrate them with various data sources and databases. I have a strong background in Python and am proficient in big data technologies such as Hadoop, Hive, Spark, Databricks, and Azure. My interest lies in working with large datasets and deriving actionable insights to support informed business decisions.