Pandas is a popular open-source Python library for data manipulation and analysis. One of its most powerful features is the groupby() method, which lets you split data into groups, apply a function to each group independently, and then combine the results into a new data structure.
This process of splitting data into groups, applying a function to each group, and combining the output is commonly known as split-apply-combine, and the groupby() method encapsulates the pattern nicely in Pandas. Using groupby() lets you answer questions about your data in terms of groups and aggregates, making your analysis code elegant, flexible, and expressive.
In this comprehensive guide, you will learn:
- The split-apply-combine concept and how groupby() implements it
- Ways to split data into groups
- How to apply aggregations and transformations to groups
- Techniques for combining group results
- Handling hierarchical indices and missing data
- Resetting the index and using pd.Grouper()
- Common groupby operations and methods
- Best practices and examples for using groupby() effectively
By the end, you will have mastered the nuances of Pandas’ powerful groupby() functionality and be able to conduct fast, efficient data analysis and manipulation.
Splitting Data into Groups
The first step when using groupby() is to split the data into groups based on one or more keys. The DataFrame or Series is split according to the distinct values of the specified key(s).
There are several ways to group data with Pandas:
Group by Single Column
To group by a single column, pass the column name as a string to groupby().
import pandas as pd
df = pd.DataFrame({'Department': ['Sales', 'Marketing', 'Sales', 'Marketing'],
                   'Revenue': [256, 210, 363, 298]})
df.groupby('Department')
This groups df by the ‘Department’ column, creating one group for ‘Sales’ and one for ‘Marketing’.
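Note that calling groupby() by itself performs no computation; it returns a lazy GroupBy object, and the actual work happens only when you apply a function to it. A quick sketch using the frame above:

```python
import pandas as pd

df = pd.DataFrame({'Department': ['Sales', 'Marketing', 'Sales', 'Marketing'],
                   'Revenue': [256, 210, 363, 298]})

# groupby() alone returns a lazy GroupBy object; nothing is computed yet
grouped = df.groupby('Department')
print(type(grouped))

# Applying an aggregation triggers the split-apply-combine computation
print(grouped['Revenue'].sum())
# Sales: 256 + 363 = 619, Marketing: 210 + 298 = 508
```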
Group by Multiple Columns
To group by multiple columns, pass a list of column names.
df.groupby(['Department', 'Revenue'])
This first groups by ‘Department’, then within each department group, creates subgroups based on ‘Revenue’.
Group by Index Level
For DataFrames with a MultiIndex, pass the index level(s) to group by.
df = df.set_index(['Department', 'Revenue'])
df.groupby(level='Department')
This groups df by the outer index level ‘Department’.
Group by Function
You can also group by a function of the index or by a mapping: pass a callable that is applied to each index value, or a dictionary mapping index values to group names.
# Group by bin of 'Revenue'
df.groupby(pd.cut(df.index.get_level_values('Revenue'), bins=[150, 300, 400]))
# Group by a custom mapping of index labels to group names
# (the dictionary keys must match the index labels, so this assumes
# 'Department' is a plain index rather than part of a MultiIndex)
df.groupby({'Sales': 'Department A',
            'Marketing': 'Department B'})
This provides complete flexibility to group by any custom criteria.
Group by DataFrame, Series, Array
Pass a Series, NumPy array, or list to group by the values it contains; the grouper is aligned with the rows of the DataFrame.
categories = pd.Series(['A', 'B', 'A', 'B'])
df.groupby(categories)
This groups df by the categories defined in the passed Series.
Applying Functions to Groups
After splitting the data into groups, functions can be applied to operate on each group independently.
Aggregate Groups
Aggregation functions like sum(), mean(), count(), min(), and max() can be applied to get aggregate statistics for each group.
df.groupby('Department').Revenue.sum()
This returns the sum of the ‘Revenue’ column for each ‘Department’ group.
Multiple aggregations can be applied at once using agg(), passing a list of functions or a dict mapping columns to functions.
df.groupby('Department').agg(['sum', 'mean', 'max'])
df.groupby('Department').agg({'Revenue': 'sum',
                              'SalesCount': 'size'})  # assumes df also has a 'SalesCount' column
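Since pandas 0.25 there is also named aggregation, which lets you name the output columns directly (the output column names below are arbitrary, chosen just for this sketch):

```python
import pandas as pd

df = pd.DataFrame({'Department': ['Sales', 'Marketing', 'Sales', 'Marketing'],
                   'Revenue': [256, 210, 363, 298]})

# Named aggregation: keyword = (source column, aggregation function)
summary = df.groupby('Department').agg(
    total_revenue=('Revenue', 'sum'),
    avg_revenue=('Revenue', 'mean'),
    num_sales=('Revenue', 'size'),
)
print(summary)
```

This avoids the MultiIndex columns produced by passing a plain list of functions to agg().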
Transform Groups
Transformation functions like cumsum(), diff(), rank(), and more can be applied to transform the data within each group.
df.groupby('Department').Revenue.cumsum()
This cumulatively sums the ‘Revenue’ column group-wise.
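The related transform() method broadcasts a group-level statistic back to the original rows, which is handy for adding per-group columns. A minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({'Department': ['Sales', 'Marketing', 'Sales', 'Marketing'],
                   'Revenue': [256, 210, 363, 298]})

# transform() returns a result aligned with the original rows, so each row
# receives its own group's statistic
df['DeptMean'] = df.groupby('Department')['Revenue'].transform('mean')
print(df)
# Sales rows get 309.5 ((256 + 363) / 2); Marketing rows get 254.0
```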
Apply Custom Functions
Custom functions can be applied to manipulate each group. The function should take a DataFrame or Series and return a Pandas object or scalar.
def percent_of_total(x):
    return x / x.sum() * 100

df.groupby('Department').Revenue.apply(percent_of_total)
This applies percent_of_total to the ‘Revenue’ column of each group.
Lambda functions can also be used for simple operations.
df.groupby('Department').Revenue.apply(lambda x: x.nlargest(2))
This gets the 2 highest ‘Revenue’ values for each group.
Combining Grouped Results
After applying functions to groups, the results can be combined into a new Series or DataFrame.
Concatenation
By default, the group results are concatenated into a new DataFrame or Series indexed by the group keys (a MultiIndex when grouping by multiple keys).
df_grouped = df.groupby('Department').sum()
df_grouped will contain the per-group sums, indexed by 'Department'.
Reset Index
The group-key index can be moved back into a regular column:
df_grouped.reset_index()
This drops the group index, adds ‘Department’ as a column, and restores a default integer index.
As DataFrame, Series, etc.
The type of the combined output depends on what the applied function returns: a function that returns a scalar per group yields a Series, while one that returns a Series or DataFrame per group yields a concatenated result with a hierarchical index.
df.groupby('Department')['Revenue'].apply(lambda g: g.sum())   # scalar per group -> Series
df.groupby('Department').apply(lambda g: g.describe())         # DataFrame per group -> DataFrame
(Passing pd.DataFrame or pd.Series directly to apply() merely reconstructs each group unchanged; it does not control the output format.)
Handling Indices, Missing Data, and Multilevel Groups
When grouping data, there are some important considerations around indices, missing data, and hierarchical groups.
Keep Group Keys as Columns
By default, the group keys become the index of the result, replacing the original row index. To keep the keys as ordinary columns instead:
df.groupby('Department', as_index=False)
This returns results with a default integer index and ‘Department’ as a regular column.
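A runnable sketch of the difference, reusing the small Department/Revenue frame from earlier:

```python
import pandas as pd

df = pd.DataFrame({'Department': ['Sales', 'Marketing', 'Sales', 'Marketing'],
                   'Revenue': [256, 210, 363, 298]})

# Default: the group keys become the index of the result
by_index = df.groupby('Department')['Revenue'].sum()

# as_index=False: the group keys stay as an ordinary column
as_cols = df.groupby('Department', as_index=False)['Revenue'].sum()
print(by_index)
print(as_cols)
```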
Include Missing Keys
Rows whose group key is missing (NaN) are excluded from the groups by default. To include them (pandas 1.1 and later):
df.groupby('Department', dropna=False)
This collects rows with missing ‘Department’ values into a separate NaN group.
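For example, with one missing ‘Department’ value (a small sketch):

```python
import pandas as pd

df = pd.DataFrame({'Department': ['Sales', None, 'Sales', 'Marketing'],
                   'Revenue': [256, 210, 363, 298]})

# Default: the row with a missing key is silently dropped (2 groups)
kept = df.groupby('Department')['Revenue'].sum()

# dropna=False (pandas >= 1.1): missing keys form their own NaN group (3 groups)
with_nan = df.groupby('Department', dropna=False)['Revenue'].sum()
print(kept)
print(with_nan)
```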
Group by Index Level
On MultiIndex DataFrames, groups are created for each unique combination of index values by default. To group by the outer level only:
df.groupby(level=0)
Group MultiIndex Columns
To group by the outer level of MultiIndex columns:
df.groupby(level=0, axis=1)
This groups df by the outer column index level. Note that column-wise grouping with axis=1 is deprecated as of pandas 2.1; transposing first (df.T.groupby(level=0)) is the forward-compatible alternative.
Resetting the Index and Using pd.Grouper()
When working with time series data, it can be useful to reset the index after grouping and to use pd.Grouper() to group by time periods.
Reset Index After Grouping
To move the group keys back into ordinary columns after aggregating:
df.groupby('Department').sum().reset_index()
This resets a simple index after grouping.
Group by Time Periods
To group a time series DataFrame into periods like days, months, or years, use pd.Grouper():
df = df.set_index('Timestamp')
df.groupby(pd.Grouper(freq='M')).sum()
This groups df by month based on the datetime index. Frequency strings like ‘M’ (month end), ‘Y’ (year end), and ‘D’ (day) can be used; pandas 2.2 and later prefer the ‘ME’ and ‘YE’ aliases.
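A self-contained sketch, using ‘MS’ (month start), an alias that works across pandas versions, and a hypothetical three-row sales frame:

```python
import pandas as pd

sales = pd.DataFrame({
    'Timestamp': pd.to_datetime(['2022-01-05', '2022-01-20', '2022-02-10']),
    'Revenue': [100, 150, 200],
})

# Group by calendar month via the datetime index ('MS' labels bins by month start)
monthly = (sales.set_index('Timestamp')
                .groupby(pd.Grouper(freq='MS'))['Revenue']
                .sum())
print(monthly)
# The two January rows combine to 250; February stays at 200
```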
Resample and Aggregate
An alternative to groupby() for time series is to resample and aggregate:
df.resample('D').mean()
This resamples df to daily means, which behaves much like grouping by day and taking the mean.
Common Groupby Operations and Methods
Some common patterns and methods when using Pandas groupby() include:
- Aggregate statistics: sum(), mean(), median(), min(), max(), count(), etc.
- Multiple functions: agg() with a list or dict of functions.
- Transformations: cumsum(), cumprod(), pct_change(), rank(), diff(), etc.
- Filtering: filter() to filter groups based on a condition.
- Apply function: apply a custom function with apply().
- Column selection: select the columns to aggregate after calling groupby().
- Grouped joins: perform inner, outer, and right joins between grouped results.
- Pivot tables: create pivot tables and crosstabs using pivot_table().
- Iteration: iterate through groups using the GroupBy object or GroupBy.apply().
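As a quick illustration of filter(), which keeps or drops entire groups based on a condition (the threshold here is arbitrary, chosen just for the sketch):

```python
import pandas as pd

df = pd.DataFrame({'Department': ['Sales', 'Marketing', 'Sales', 'IT'],
                   'Revenue': [256, 210, 363, 90]})

# filter() keeps the rows of every group whose total revenue exceeds 300
big = df.groupby('Department').filter(lambda g: g['Revenue'].sum() > 300)
print(big)
# Only 'Sales' (256 + 363 = 619) passes; Marketing (210) and IT (90) are dropped
```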
Refer to the Pandas documentation for more details on these methods.
Best Practices for Using groupby()
Here are some best practices for using Pandas groupby() effectively:

Split data into logical groups that can reveal insights when aggregated or transformed. Balance group size with number of distinct groups.

Apply functions that answer specific questions about each group. Stick to aggregation, transformation, and filtering of groups.

Combine groups into an output format suited for analysis like DataFrame, pivot table, or indexed Series.

Set group keys properly and handle MultiIndexes and missing data appropriately.

Label groups meaningfully and reset indices if needed.

Iterate through groups when performing complex calculations or transformations.

Use groupby() instead of less efficient loops or repeated data queries; it provides optimized split-apply-combine operations.

Avoid unnecessarily setting or resetting indices around grouped operations, since the extra data copies this triggers reduce performance.

Pre-filter data before performing grouped operations to reduce memory usage and improve performance.
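The iteration pattern mentioned above looks like this in practice (computing an arbitrary per-group statistic as an example):

```python
import pandas as pd

df = pd.DataFrame({'Department': ['Sales', 'Marketing', 'Sales', 'Marketing'],
                   'Revenue': [256, 210, 363, 298]})

# Iterating over a GroupBy yields (group key, sub-DataFrame) pairs,
# useful when the per-group logic doesn't fit a built-in aggregation
results = {}
for dept, group in df.groupby('Department'):
    results[dept] = group['Revenue'].max() - group['Revenue'].min()

print(results)
# Revenue spread per department: Sales 363 - 256 = 107, Marketing 298 - 210 = 88
```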
Following these best practices will ensure you fully leverage the power of Pandas groupby() for fast, efficient data analysis.
Example Usage
Here are some examples demonstrating real-world usage of Pandas groupby() for different analysis tasks:
Analyze Sales by Product Category
import pandas as pd
sales = pd.DataFrame({'Product': ['Electronics', 'Clothing', 'Cleaning', 'Electronics', 'Clothing'],
                      'Revenue': [1243, 854, 634, 1254, 983]})
# Group by product category and find statistics
product_sales = sales.groupby('Product')[['Revenue']].agg(['sum', 'mean', 'count'])
print(product_sales)
Output:
            Revenue
                sum    mean count
Product
Cleaning        634   634.0     1
Clothing       1837   918.5     2
Electronics    2497  1248.5     2
This provides aggregate sales statistics for analysis by product category.
Time Series Analysis by Month
import pandas as pd
sales = pd.DataFrame({'Date': ['2022-01-01', '2022-02-01', '2022-03-01',
                               '2022-04-01', '2022-05-01'],
                      'Revenue': [856, 734, 906, 876, 698]})
sales['Date'] = pd.to_datetime(sales['Date'])
# Resample to groups by month and aggregate
monthly = sales.set_index('Date').resample('M').sum()
print(monthly)
Output:
            Revenue
Date
2022-01-31      856
2022-02-28      734
2022-03-31      906
2022-04-30      876
2022-05-31      698
Resampling by month provides a time-series view of the data.
Percent Contribution by Department
import pandas as pd
sales = pd.DataFrame({'Department': ['Sales', 'Marketing', 'IT', 'Operations'],
                      'Revenue': [256834, 105283, 90438, 125499]})
def pct_of_total(x):
    return x / x.sum() * 100

# Aggregate revenue per department, then express the totals as percentages
dept_totals = sales.groupby('Department')['Revenue'].sum()
pct_by_dept = pct_of_total(dept_totals).round(2)
print(pct_by_dept)
Output:
Department
IT            15.65
Marketing     18.21
Operations    21.71
Sales         44.43
Name: Revenue, dtype: float64
(Note: applying pct_of_total inside each group, as in sales.groupby('Department').Revenue.apply(pct_of_total), would return 100 for every department here, because each department has only one row; the percentages must be computed against the grand total.)
This shows each department’s percentage contribution to total revenue.
Conclusion
Pandas groupby() provides a compact, expressive way to analyze and transform data based on groups. With the split-apply-combine approach, complex operations can be performed on grouped data with minimal code, and groupby() relies on optimized, vectorized implementations for common aggregations.
By mastering Pandas groupby(), you can write fast, efficient data manipulation and analysis code. The examples and best practices discussed in this guide demonstrate how to effectively apply the split-apply-combine pattern to answer key questions about your data at the group level.