In the previous tutorial, we learned about type hints. Now let’s learn about testing — how to write tests that catch bugs and give you confidence to change your code.
Testing is one of the most valuable skills you can learn as a developer. It saves time in the long run, makes refactoring safe, and serves as living documentation. When you have tests, you can change code confidently — if the tests pass, you know nothing is broken. By the end of this tutorial, you will know how to write tests with pytest, use fixtures, parametrize tests, and mock external dependencies.
Why pytest?
Python has a built-in unittest module, but most Python developers prefer pytest because:
- Simpler syntax — just use `assert` instead of `self.assertEqual()`, `self.assertTrue()`, etc.
- Better error messages — shows exactly what failed and what the values were
- Powerful fixtures — reusable setup/teardown with dependency injection
- Rich plugin ecosystem — coverage, parallel execution, Django integration, etc.
- Less boilerplate — no need for classes, just write functions
Install it:
```bash
pip install pytest
```
Your First Test
Create a function to test:
```python
# src/py17_testing.py
def add(a: int, b: int) -> int:
    return a + b


def divide(a: float, b: float) -> float:
    if b == 0:
        raise ValueError("Cannot divide by zero.")
    return a / b


def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)


def truncate(text: str, max_len: int) -> str:
    if len(text) <= max_len:
        return text
    return text[: max_len - 3] + "..."


class ShoppingCart:
    def __init__(self) -> None:
        self.items: list[dict] = []

    def add_item(self, name: str, price: float, quantity: int = 1) -> None:
        self.items.append({"name": name, "price": price, "quantity": quantity})

    def remove_item(self, name: str) -> None:
        self.items = [i for i in self.items if i["name"] != name]

    def item_count(self) -> int:
        return sum(i["quantity"] for i in self.items)

    def total(self) -> float:
        return sum(i["price"] * i["quantity"] for i in self.items)

    def apply_discount(self, percent: float) -> float:
        return self.total() * (1 - percent / 100)

    def clear(self) -> None:
        self.items.clear()


class WeatherService:
    def get_temperature(self, city: str) -> float:
        raise NotImplementedError("Requires network access")

    def get_recommendation(self, city: str) -> str:
        temp = self.get_temperature(city)
        if temp < 10:
            return "Wear a warm coat"
        if temp < 20:
            return "Wear a jacket"
        return "Enjoy the weather"

    def is_hot(self, city: str) -> bool:
        return self.get_temperature(city) > 30


def format_weather_report(service: WeatherService, city: str) -> str:
    temp = service.get_temperature(city)
    rec = service.get_recommendation(city)
    return f"{city}: {temp}°C — {rec}"


def is_palindrome(text: str) -> bool:
    cleaned = text.lower().replace(" ", "")
    return cleaned == cleaned[::-1]
```
Write tests:
```python
# tests/test_py17.py
from src.py17_testing import add, divide


def test_add():
    assert add(3, 4) == 7
    assert add(-1, 1) == 0
    assert add(0, 0) == 0
```
Run the tests:
```bash
python -m pytest tests/test_py17.py -v
```
pytest discovers functions starting with test_ in files starting with test_. The assert statement checks that the condition is true. If it fails, pytest shows a detailed error message:
```text
FAILED tests/test_py17.py::test_add
    assert add(3, 4) == 8
E   assert 7 == 8
E    +  where 7 = add(3, 4)
```
pytest shows you the actual value (7), the expected value (8), and the expression that failed. This is much more helpful than unittest’s generic “AssertionError”.
Testing Exceptions
Use pytest.raises to check that a function raises the expected exception:
```python
import pytest


def test_divide():
    assert divide(10, 2) == 5.0
    assert divide(7, 2) == 3.5


def test_divide_by_zero():
    with pytest.raises(ValueError, match="Cannot divide by zero"):
        divide(10, 0)
```
The match parameter checks the error message using a regular expression. You can also inspect the exception object:
```python
def test_divide_error_message():
    with pytest.raises(ValueError) as exc_info:
        divide(10, 0)
    assert "Cannot divide by zero" in str(exc_info.value)
```
The exc_info object contains the exception type, value, and traceback.
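For example, here is a small sketch that also checks the exception class via `exc_info.type`:

```python
def test_divide_error_type():
    with pytest.raises(ValueError) as exc_info:
        divide(10, 0)
    # exc_info.type is the exception class, exc_info.value the raised instance
    assert exc_info.type is ValueError
    assert "zero" in str(exc_info.value)
```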
Fixtures: Reusable Setup
Fixtures provide test data or resources. They run before each test that uses them. Use the @pytest.fixture decorator:
```python
import pytest

from src.py17_testing import ShoppingCart, WeatherService


@pytest.fixture
def empty_cart():
    """Create an empty shopping cart."""
    return ShoppingCart()


@pytest.fixture
def cart_with_items():
    """Create a shopping cart with some items."""
    cart = ShoppingCart()
    cart.add_item("Laptop", 999.99)
    cart.add_item("Mouse", 29.99, quantity=2)
    cart.add_item("Keyboard", 79.99)
    return cart
```
Use fixtures as function parameters — pytest injects them automatically:
```python
def test_empty_cart(empty_cart):
    """Test that empty_cart fixture works."""
    assert empty_cart.item_count() == 0
    assert empty_cart.total() == 0.0


def test_cart_total(cart_with_items):
    """Test that cart_with_items has correct total."""
    assert cart_with_items.total() == pytest.approx(1139.96, abs=0.01)
```
pytest calls the fixture function before each test that requests it. With the default function scope, every test gets a fresh fixture instance, so tests stay isolated and one test cannot leak state into another.
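Fixtures can also handle teardown: if you `yield` instead of `return`, everything after the `yield` runs once the test finishes. A minimal sketch, reusing the cart from above:

```python
@pytest.fixture
def cart_with_cleanup():
    """Set up before the test, clean up after."""
    cart = ShoppingCart()
    cart.add_item("Laptop", 999.99)
    yield cart    # the test body runs at this point
    cart.clear()  # teardown: runs after the test, even if it failed
```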
pytest.approx: Floating-Point Comparisons
Use pytest.approx for floating-point comparisons:
```python
assert 0.1 + 0.2 == pytest.approx(0.3)  # True!
assert cart.total() == pytest.approx(1139.96, abs=0.01)
```
Without pytest.approx, this comparison fails:
```python
assert 0.1 + 0.2 == 0.3  # FAILS! 0.30000000000000004 != 0.3
```
Floating-point math is not exact. pytest.approx handles this by allowing a small tolerance.
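By default the tolerance is relative (roughly one part in a million); you can tune it with the `rel` and `abs` arguments. A quick sketch:

```python
# Relative tolerance: allow a difference of 0.001% of the expected value
assert 1_000_000.0 == pytest.approx(1_000_001.0, rel=1e-5)

# Absolute tolerance: better near zero, where relative checks break down
assert 0.0001 == pytest.approx(0.0, abs=1e-3)
```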
conftest.py: Shared Fixtures
Put fixtures in conftest.py to share them across all test files in the directory:
```python
# tests/conftest.py
import pytest

from src.py17_testing import ShoppingCart, WeatherService


@pytest.fixture
def empty_cart():
    return ShoppingCart()


@pytest.fixture
def cart_with_items():
    cart = ShoppingCart()
    cart.add_item("Laptop", 999.99)
    cart.add_item("Mouse", 29.99, quantity=2)
    cart.add_item("Keyboard", 79.99)
    return cart
```
Any test file in the tests/ directory can use these fixtures without importing them. pytest discovers conftest.py files automatically. You can have multiple conftest.py files in different directories for different scopes.
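By default a fixture runs once per test (function scope). When setup is expensive, you can widen the scope so pytest builds it once and reuses it. A sketch, assuming a read-only config that is safe to share:

```python
import pytest


@pytest.fixture(scope="session")
def shared_config():
    # Built once for the whole test run, reused by every test that requests it
    return {"currency": "EUR", "tax_rate": 0.19}
```

Sharing mutable state across tests breaks isolation, so reserve wider scopes for read-only resources.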
Parametrize: Test Many Inputs at Once
Use @pytest.mark.parametrize to run the same test with different inputs:
```python
from src.py17_testing import fizzbuzz, truncate


@pytest.mark.parametrize(
    "n, expected",
    [
        (1, "1"),
        (3, "Fizz"),
        (5, "Buzz"),
        (15, "FizzBuzz"),
        (6, "Fizz"),
        (10, "Buzz"),
        (30, "FizzBuzz"),
        (7, "7"),
    ],
)
def test_fizzbuzz(n, expected):
    assert fizzbuzz(n) == expected
```
This creates 8 separate tests from one test function. pytest runs each one independently. If one input fails, the others still run. You see exactly which input caused the failure.
You can parametrize with multiple parameters:
```python
@pytest.mark.parametrize(
    "text, max_len, expected",
    [
        ("Hello", 10, "Hello"),
        ("Hello World", 8, "Hello..."),
        ("Hi", 2, "Hi"),
        ("Testing", 5, "Te..."),
    ],
)
def test_truncate(text, max_len, expected):
    assert truncate(text, max_len) == expected
```
Parametrize is much better than putting all assertions in one test. With separate tests, you get a clear report of which specific inputs pass and which fail.
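Parametrize also combines nicely with `pytest.raises`. Here is a sketch that checks several invalid inputs against `divide` from earlier:

```python
@pytest.mark.parametrize(
    "a, b",
    [
        (10, 0),
        (-5, 0),
        (0, 0),
    ],
)
def test_divide_by_zero_inputs(a, b):
    with pytest.raises(ValueError):
        divide(a, b)
```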
Mocking: Replace External Dependencies
When your code calls an external API, database, or file system, you do not want your tests to depend on those systems. Tests should be fast, reliable, and run without network access. Use mocking to replace external dependencies with fake objects.
MagicMock: The Swiss Army Knife
```python
from unittest.mock import MagicMock

from src.py17_testing import WeatherService, format_weather_report


def test_weather_report():
    # Create a mock that looks like WeatherService
    mock_service = MagicMock(spec=WeatherService)
    mock_service.get_temperature.return_value = 22.0
    mock_service.get_recommendation.return_value = "Wear a jacket"

    # Test the function with the mock
    result = format_weather_report(mock_service, "Berlin")

    # Verify the result
    assert result == "Berlin: 22.0°C — Wear a jacket"

    # Verify the mock was called correctly
    mock_service.get_temperature.assert_called_once_with("Berlin")
    mock_service.get_recommendation.assert_called_once_with("Berlin")
```
MagicMock(spec=WeatherService) creates a fake object with the same methods as WeatherService. You control what the methods return with return_value. You can also verify how the mock was called.
Mock as Fixture
Create a mock fixture for reuse across tests:
```python
@pytest.fixture
def mock_weather():
    """Create a mocked weather service."""
    service = MagicMock(spec=WeatherService)
    service.get_temperature.return_value = 25.0
    return service


def test_format_report(mock_weather):
    """Test format_weather_report with mocked service."""
    mock_weather.get_recommendation.return_value = "Wear a jacket"
    result = format_weather_report(mock_weather, "Berlin")
    assert result == "Berlin: 25.0°C — Wear a jacket"
    mock_weather.get_temperature.assert_called_once_with("Berlin")


def test_override_temperature(mock_weather):
    """Override return value for this test."""
    mock_weather.get_temperature.return_value = 35.0
    assert mock_weather.get_temperature("Berlin") == 35.0


def test_cold_temperature(mock_weather):
    """Test with cold temperature."""
    mock_weather.get_temperature.return_value = 5.0
    assert mock_weather.get_temperature("Berlin") == 5.0
```
Each test can override the mock’s return value. The fixture creates a fresh mock for each test.
Verifying Mock Calls
Mocks track every call they receive:
```python
mock_fn = MagicMock(return_value=42)
mock_fn("a")
mock_fn("b")
mock_fn("c")

assert mock_fn.call_count == 3
mock_fn.assert_any_call("b")
mock_fn.assert_called_with("c")  # Last call was with "c"
```
This is useful for verifying that your code calls dependencies correctly.
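`return_value` always answers with the same thing. When a mock should raise an exception or return a sequence of values, say to test error handling, use `side_effect`. A sketch:

```python
# Raise an exception when the mock is called
flaky = MagicMock(side_effect=ConnectionError("network down"))
with pytest.raises(ConnectionError):
    flaky()

# Return a different value on each successive call
readings = MagicMock(side_effect=[18.0, 19.5, 21.0])
assert readings() == 18.0
assert readings() == 19.5
assert readings() == 21.0
```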
Markers: Categorize Tests
Markers let you tag tests with metadata:
```python
@pytest.mark.slow
def test_large_cart():
    """This test is slow — only run it when needed."""
    cart = ShoppingCart()
    for i in range(1000):
        cart.add_item(f"item_{i}", 9.99)
    assert cart.item_count() == 1000
```
Run or skip tests by marker:
```bash
# Run only slow tests
python -m pytest -m slow

# Skip slow tests
python -m pytest -m "not slow"
```
Register custom markers in pyproject.toml to avoid warnings:
```toml
[tool.pytest.ini_options]
markers = [
    "slow: marks tests as slow (deselect with '-m \"not slow\"')",
    "integration: marks tests that need external services",
]
```
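pytest also ships built-in markers such as `skip`, `skipif`, and `xfail` (expected failure). A sketch:

```python
import sys


@pytest.mark.skip(reason="feature not implemented yet")
def test_future_feature():
    ...


@pytest.mark.skipif(sys.platform == "win32", reason="requires POSIX paths")
def test_posix_only():
    ...


@pytest.mark.xfail(reason="truncate returns a too-long string when max_len < 3")
def test_truncate_tiny_limit():
    assert len(truncate("Hello", 2)) <= 2
```

The `xfail` example points at a real quirk of the `truncate` function from earlier: when `max_len < 3`, the slice index `max_len - 3` goes negative, so the result comes out longer than the limit.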
Test Organization
A good project structure:
```text
project/
    src/
        __init__.py
        shopping_cart.py
        weather.py
    tests/
        __init__.py
        conftest.py            # Shared fixtures
        test_shopping_cart.py  # Tests for shopping_cart.py
        test_weather.py        # Tests for weather.py
```
Rules for organizing tests:
- One test file per source file — `shopping_cart.py` gets `test_shopping_cart.py`
- Test file names start with `test_` — pytest discovers them automatically
- Test function names start with `test_` — pytest runs them automatically
- Put shared fixtures in `conftest.py` — available to all test files
- Keep tests close to the code they test — same directory structure
Running Tests
Here are the most useful pytest commands:
```bash
# Run all tests
python -m pytest

# Run with verbose output (shows each test name)
python -m pytest -v

# Run a specific file
python -m pytest tests/test_py17.py

# Run a specific test function
python -m pytest tests/test_py17.py::test_add

# Run tests matching a keyword
python -m pytest -k "cart"

# Show print() output (normally captured)
python -m pytest -s

# Stop on first failure
python -m pytest -x

# Show the 5 slowest tests
python -m pytest --durations=5

# Run with coverage report
python -m pytest --cov=src
```
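If you find yourself retyping the same flags, you can set defaults in pyproject.toml. A sketch, assuming you always want verbose output and that your tests live in tests/:

```toml
[tool.pytest.ini_options]
addopts = "-v --durations=5"
testpaths = ["tests"]
```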
TDD: Test-Driven Development
TDD is a workflow where you write the test first, then the code:
- Red — write a failing test that describes what you want
- Green — write the minimum code to make it pass
- Refactor — clean up the code while keeping tests green
Example:
```python
# Step 1: RED — write the test (fails because is_palindrome doesn't exist yet)
def test_is_palindrome():
    assert is_palindrome("racecar") is True
    assert is_palindrome("hello") is False
    assert is_palindrome("A man a plan a canal Panama") is True


# Step 2: GREEN — write the simplest code that passes
def is_palindrome(text: str) -> bool:
    cleaned = text.lower().replace(" ", "")
    return cleaned == cleaned[::-1]


# Step 3: REFACTOR — improve if needed (already clean in this case)
```
You do not need to follow TDD strictly for everything. But writing tests early — even before the code is complete — helps you think about the interface and catch edge cases.
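Writing the test first also surfaces edge cases before the implementation hardens. A sketch of extra cases you might add in the red step:

```python
def test_is_palindrome_edge_cases():
    assert is_palindrome("") is True      # empty string reads the same both ways
    assert is_palindrome("a") is True     # single character
    assert is_palindrome("Ab a") is True  # case and spaces are ignored
```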
Integration Test Example
Integration tests test multiple components working together. Here is a complete ShoppingCart test:
```python
def test_cart_full_workflow(empty_cart):
    """Test the complete shopping cart lifecycle."""
    # Add items
    empty_cart.add_item("Book", 19.99)
    empty_cart.add_item("Pen", 4.99, quantity=3)
    assert empty_cart.item_count() == 4

    # Check total
    total = empty_cart.total()
    assert total == pytest.approx(34.96)

    # Apply discount
    discounted = empty_cart.apply_discount(10)
    assert discounted == pytest.approx(total * 0.9)

    # Remove item
    empty_cart.remove_item("Pen")
    assert empty_cart.item_count() == 1

    # Clear
    empty_cart.clear()
    assert empty_cart.item_count() == 0
    assert empty_cart.total() == 0.0
```
This tests the full lifecycle of a shopping cart in one test. It verifies that all operations work together correctly.
What Makes a Good Test?
Good tests follow these principles:
- Fast — tests should run in milliseconds, not seconds
- Isolated — each test is independent, no shared state between tests
- Repeatable — same result every time, no randomness or timing dependence
- Self-checking — tests verify their own results with assertions
- Timely — written close to when the code is written
Name your tests descriptively:
```python
# BAD — what does this test?
def test_1():
    ...


# GOOD — describes what is being tested
def test_cart_remove_nonexistent_item_does_nothing():
    cart = ShoppingCart()
    cart.remove_item("Nonexistent")  # Should not raise
    assert cart.item_count() == 0
```
Test Coverage
Use pytest-cov to measure how much of your code is covered by tests:
```bash
pip install pytest-cov
python -m pytest --cov=src --cov-report=term-missing
```
The output shows which lines are not covered:
```text
Name                  Stmts   Miss  Cover   Missing
----------------------------------------------------
src/py17_testing.py      45      3    93%   42-44
----------------------------------------------------
TOTAL                    45      3    93%
```
Aim for 80-90% coverage on important code. 100% coverage is not always worth the effort — some code (like if __name__ == "__main__":) does not need testing.
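To keep coverage from silently eroding, you can make the run fail below a threshold with pytest-cov's `--cov-fail-under` flag:

```bash
# Fail the test run if total coverage drops below 80%
python -m pytest --cov=src --cov-fail-under=80
```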
Common Testing Mistakes
Testing Implementation Instead of Behavior
```python
# BAD — tests how the code works internally
def test_cart_uses_list():
    cart = ShoppingCart()
    assert isinstance(cart.items, list)  # Implementation detail!


# GOOD — tests what the code does
def test_cart_starts_empty():
    cart = ShoppingCart()
    assert cart.item_count() == 0
```
Tests That Depend on Each Other
```python
# BAD — test_b depends on test_a running first
cart = ShoppingCart()

def test_a():
    cart.add_item("Book", 10)

def test_b():
    assert cart.item_count() == 1  # Fails if test_a doesn't run first!


# GOOD — each test is independent
def test_a():
    cart = ShoppingCart()
    cart.add_item("Book", 10)
    assert cart.item_count() == 1
```
Source Code
You can find the code for this tutorial on GitHub:
kemalcodes/python-tutorial — tutorial-17-testing
Run the tests:
```bash
python -m pytest tests/test_py17.py -v
```
What’s Next?
In the next tutorial, we will learn about async/await — how to write concurrent code that handles multiple tasks at once.
Related Articles
- Python Tutorial #16: Type Hints — writing safer Python code
- Python Tutorial #11: Error Handling — try/except and custom exceptions
- Python Cheat Sheet — quick reference for Python syntax