
Using Pandas for Web Scraping in Python


Web scraping is the process of extracting data from websites automatically. It allows you to collect large volumes of data from the web for analysis. Pandas is a popular Python library for data manipulation and analysis that can be leveraged for web scraping tasks. This comprehensive guide will teach you how to use Pandas web scraping tools to efficiently obtain data from websites.


Overview of Web Scraping

Web scraping involves programmatically fetching data from websites and extracting the relevant information into a structured format like a Pandas dataframe. The key steps are:

  1. Identifying the target site and URLs to scrape.

  2. Fetching the HTML content of the target pages.

  3. Parsing the HTML to extract the required data using libraries like Beautiful Soup.

  4. Structuring and storing the scraped data.
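The four steps above can be condensed into a short sketch. To keep the example self-contained, the fetch step is stubbed with an inline HTML snippet (made up for illustration); a real scraper would download the page with `requests` first:

```python
from io import StringIO

import pandas as pd

# Steps 1-2 stubbed: normally this would be requests.get(url).text
html_content = """
<table>
  <tr><th>City</th><th>Population</th></tr>
  <tr><td>Oslo</td><td>709000</td></tr>
  <tr><td>Bergen</td><td>286000</td></tr>
</table>
"""

# Step 3: parse the HTML and extract the table into a dataframe
df = pd.read_html(StringIO(html_content))[0]

# Step 4: structure and store the scraped data
df.to_csv('cities.csv', index=False)
```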

Why Use Pandas for Web Scraping

Pandas provides several features that facilitate web scraping workflows:

  - pd.read_html() parses HTML tables directly into dataframes.

  - Vectorized string methods like Series.str.extract() help parse scraped text.

  - Cleaning tools such as dropna() and astype() tidy up raw scraped data.

  - Export methods like to_csv(), to_json() and to_excel() store the results.

Overall, Pandas simplifies the post-scraping data processing and analysis tasks.

Web Scraping Tools in Pandas

Pandas provides the following tools to aid web scraping:


pd.read_html()

The pd.read_html() function parses HTML tables from web pages into Pandas dataframes.

import pandas as pd

# Placeholder URL; substitute the page you want to scrape
url = 'https://example.com/page-with-tables'

dfs = pd.read_html(url)

print(dfs[0])  # prints first dataframe

read_html() extracts tables based on <table> tags. You can pass it a URL to scrape or the raw HTML content, and it copes with badly formatted HTML and nested tables.
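Passing raw HTML is handy for testing. The sketch below (with made-up HTML) also uses the `match` parameter, which keeps only tables whose text matches a regex, useful for skipping navigation or layout tables on real pages:

```python
from io import StringIO

import pandas as pd

html = """
<table><tr><th>Nav</th></tr><tr><td>ignore me</td></tr></table>
<table>
  <tr><th>Product</th><th>Price</th></tr>
  <tr><td>Widget</td><td>9.99</td></tr>
  <tr><td>Gadget</td><td>24.50</td></tr>
</table>
"""

# match= filters out the navigation table; only the price table remains
dfs = pd.read_html(StringIO(html), match='Price')
df = dfs[0]
```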


DataFrame.to_html()

The to_html() method converts Pandas dataframes to HTML tables:

df = pd.DataFrame(data)  # `data` is whatever structure you scraped

html = df.to_html()

# write html to file
with open('table.html', 'w') as f:
    f.write(html)

This facilitates saving scraped data tables as HTML.
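As a quick sanity check (with a made-up two-row dataframe), the HTML produced by to_html() can be parsed straight back with read_html(), round-tripping the data:

```python
from io import StringIO

import pandas as pd

df = pd.DataFrame({'Name': ['Alice', 'Bob'], 'Score': [90, 85]})
html = df.to_html(index=False)

# Parsing the generated HTML recovers an equal dataframe
df2 = pd.read_html(StringIO(html))[0]
```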


Series.str.extract()

The Series.str.extract() method extracts data from strings using regular expressions.

For example, to extract numbers from strings:

import pandas as pd

data = pd.Series(['Text 123', 'More 456 text'])

nums = data.str.extract(r'(\d+)', expand=False)
print(nums)

# Output:
# 0    123
# 1    456
# dtype: object

This can parse out specific data like phone numbers, prices etc. from scraped text.
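For example, prices scraped as free text (made-up listing strings below) can be captured with a regex, cleaned, and cast to numbers in one vectorized chain:

```python
import pandas as pd

# Made-up listing strings for illustration
listings = pd.Series(['2 bed flat - $1,250/mo', 'Studio - $900/mo'])

# Capture digits and commas after the dollar sign, strip commas, cast to float
prices = (listings.str.extract(r'\$([\d,]+)', expand=False)
          .str.replace(',', '', regex=False)
          .astype(float))
```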

DataFrame.to_json(), to_csv(), to_excel()

Pandas enables saving scraped dataframes to various file formats:

  - to_csv() writes the dataframe to a CSV file.

  - to_json() serializes it to JSON.

  - to_excel() saves it as an Excel spreadsheet (requires openpyxl).

These methods are useful for exporting scraped datasets.
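A short sketch (with a made-up two-row dataframe) showing the CSV and JSON exports, and reading the JSON back to confirm the round trip:

```python
import json
from pathlib import Path

import pandas as pd

# Made-up scraped data for illustration
df = pd.DataFrame({'Year': [2021, 2022], 'Sales': [150, 200]})

df.to_csv('sales.csv', index=False)
df.to_json('sales.json', orient='records')
# df.to_excel('sales.xlsx') also works, but requires openpyxl

# Read the JSON back to confirm the round trip
records = json.loads(Path('sales.json').read_text())
```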

Web Scraping Process with Pandas

Here is an overview of a typical web scraping workflow using Pandas:

1. Get HTML Content

Use the Python requests module to download the HTML content:

import requests

url = 'https://example.com/page'  # placeholder; substitute the target page

response = requests.get(url)
html_content = response.text

2. Parse HTML

Use Beautiful Soup to parse and extract data from HTML:

from bs4 import BeautifulSoup

soup = BeautifulSoup(html_content, 'html.parser')

# Extract table
table = soup.find('table', {'class':'data-table'})

# Find all rows
rows = table.find_all('tr')

3. Store in DataFrame

Put the scraped data into a Pandas dataframe:

import pandas as pd

# Collect the cell text from each row, then build the dataframe in one go
# (DataFrame.append was removed in pandas 2.0)
records = []
for row in rows:
    cols = row.find_all('td')
    records.append([col.text.strip() for col in cols])

df = pd.DataFrame(records)


4. Clean Data

Clean the scraped dataframe if required:

# Remove rows with missing values
df = df.dropna()

# Change column datatypes
df['Price'] = df['Price'].astype(float)
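Cleaning typically matters because scraped cell text arrives as strings, sometimes with gaps. A self-contained sketch with made-up rows:

```python
import pandas as pd

# Made-up scraped rows: prices arrive as strings, one is missing
df = pd.DataFrame({'Item': ['A', 'B', 'C'],
                   'Price': ['9.99', None, '24.50']})

df = df.dropna(subset=['Price'])         # drop rows with missing prices
df['Price'] = df['Price'].astype(float)  # strings -> floats
```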

5. Analyze & Visualize

Use Pandas tools like groupby, pivot tables, plotting etc. for analysis:

# Groupby analysis: total sales per year
df.groupby('Year')['Sales'].sum()

# Plot total sales per year
df.pivot_table(values='Sales', index='Year', aggfunc='sum').plot()
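With made-up sales data, the groupby and pivot_table forms of this aggregation can be seen side by side (the plotting call is left as a comment so the sketch runs without a display backend):

```python
import pandas as pd

# Made-up sales data for illustration
df = pd.DataFrame({
    'Year':  [2021, 2021, 2022, 2022],
    'Sales': [100, 50, 120, 80],
})

# Equivalent aggregations: total sales per year
same = df.groupby('Year')['Sales'].sum()
totals = df.pivot_table(values='Sales', index='Year', aggfunc='sum')
# totals.plot() would chart this, given a matplotlib backend
```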

6. Export DataFrame

Finally, export the dataframe to file formats like CSV or JSON:

df.to_csv('scraped_data.csv', index=False)

This scrapes the table data from the web page into a Pandas dataframe for further processing.

Practical Examples of Web Scraping using Pandas

Let’s go through some real-world examples of web scraping using Pandas tools:

Scraping Wikipedia Tables

We can extract tables from Wikipedia pages into Pandas using read_html():

url = 'https://en.wikipedia.org/wiki/...'  # fill in the article containing the target table

dfs = pd.read_html(url)

df = dfs[0]


This scrapes the revenue data table from the Wikipedia page into a dataframe.

Scraping Financial Data

We can scrape live stock data from Yahoo Finance with Pandas:

import pandas as pd
import requests

url = 'https://finance.yahoo.com/quote/AAPL/history'  # AAPL price history page

# Yahoo may block the default requests user agent; a browser-like
# User-Agent header usually helps
response = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})

dfs = pd.read_html(response.text)
quotes = dfs[0]


This extracts the Apple Inc. (AAPL) stock price history into a dataframe for analysis.

Scraping Real Estate Listings

We can use str.extract() to parse key details from real estate listing pages:

import pandas as pd
from bs4 import BeautifulSoup

# Scrape listings page into soup (`page_html` was fetched earlier)
soup = BeautifulSoup(page_html, 'html.parser')

# Collect the text of each listing detail into Series
# (find_all returns a list of elements, so take .text per element)
prices = pd.Series([el.text for el in soup.find_all(class_='listing-price')])
beds = pd.Series([el.text for el in soup.find_all(class_='listing-beds')])
areas = pd.Series([el.text for el in soup.find_all(class_='listing-area')])

# Extract numbers using regex
df = pd.DataFrame({
    'Price': prices.str.extract(r'([\d,]+)', expand=False),
    'Beds': beds.str.extract(r'(\d+)', expand=False),
    'Area': areas.str.extract(r'(\d+)', expand=False),
})

This scrapes key listing attributes like prices, beds, area into a clean dataframe for analysis.


In summary, Pandas provides a versatile set of tools that can simplify and streamline many aspects of web scraping workflows. read_html() allows effortless extraction of data tables into dataframes. Vectorized string methods like str.extract() facilitate parsing textual data. And Pandas' integrated data analysis capabilities make it easy to explore and visualize scraped datasets. By leveraging Pandas for web scraping tasks, you can focus on collecting relevant web data without getting mired in the complexities of post-processing and analysis.