This document provides a comprehensive analysis of the OpenStack Placement project's Gabbi-based functional test infrastructure. It is designed to enable replication of this pattern in other OpenStack projects, with particular focus on fixture structure, database setup, API application initialization, and test execution.
Key Technologies:
- Gabbi: YAML-based declarative HTTP testing framework
- oslo.test/oslotest: OpenStack test framework base
- oslo.db: Database abstraction and fixtures
- wsgi-intercept: In-process WSGI application testing
- SQLite: In-memory database for tests
- stestr: Parallel test runner
Contents:
- Architecture Overview
- Test Directory Structure
- Fixture Architecture
- Database Setup
- API Application Initialization
- Gabbi Test Structure
- Test Execution Configuration
- Tox Configuration
- Zuul CI/CD Integration
- Complete Working Examples
- Implementation Checklist
The Placement functional test infrastructure consists of several layers that work together:
┌─────────────────────────────────────────────────────┐
│ Gabbi YAML Test Files (gabbits/*.yaml) │
│ - Declarative HTTP test definitions │
│ - Environment variable substitution │
│ - Sequential test ordering within files │
└──────────────────────────┬──────────────────────────┘
│
┌──────────────────────────▼──────────────────────────┐
│ Test Loader (test_api.py) │
│ - gabbi.driver.build_tests() │
│ - Discovers YAML files in gabbits/ │
│ - Creates Python test cases │
└──────────────────────────┬──────────────────────────┘
│
┌──────────────────────────▼──────────────────────────┐
│ Fixture Layer (fixtures/gabbits.py) │
│ - APIFixture (base) │
│ - AllocationFixture (with pre-created data) │
│ - SharedStorageFixture (complex topologies) │
│ - etc. │
└──────────────────────────┬──────────────────────────┘
│
┌────────────────────┼────────────────────┐
│ │ │
┌─────▼──────┐ ┌─────────▼────────┐ ┌───────▼──────┐
│ Config │ │ Database │ │ WSGI App │
│ Fixture │ │ Fixture │ │ (via │
│ │ │ (SQLite in-mem) │ │ deploy.py) │
└────────────┘ └──────────────────┘ └──────────────┘
- Isolation: Each test file (YAML) runs its tests in sequence, but different files run in parallel
- In-Memory Database: SQLite :memory: for speed
- No External Dependencies: No real Keystone, no network services
- WSGI Intercept: HTTP requests intercepted in-process, with no network involved (see the sketch after this list)
- Fixture Reuse: Multiple test files can share fixture classes
- Environment Variables: Share state between tests in a file
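To make the wsgi-intercept principle concrete, here is a minimal, hedged sketch that is separate from the placement tree: it wires a trivial WSGI app into the requests library so HTTP calls never touch the network. The host name and the app are illustrative assumptions, not placement code.
import requests
import wsgi_intercept
from wsgi_intercept import requests_intercept


def app(environ, start_response):
    """A trivial WSGI application used only for this illustration."""
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello from in-process']


# Route requests for this host/port to the app instead of the network.
requests_intercept.install()
wsgi_intercept.add_wsgi_intercept('placement.test', 80, lambda: app)

print(requests.get('http://placement.test/').text)  # hello from in-process

wsgi_intercept.remove_wsgi_intercept('placement.test', 80)
requests_intercept.uninstall()
Gabbi performs this same interception automatically when build_tests() is given an intercept callable, which is why the fixtures described below never open a socket.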
placement/tests/
├── __init__.py
├── fixtures.py # Database fixture
├── README.rst # Testing philosophy
├── unit/ # Unit tests
│ └── policy_fixture.py # Policy fixture for tests
└── functional/ # Functional tests
├── __init__.py
├── base.py # Base TestCase for DB tests
├── test_api.py # Gabbi test loader
├── test_direct.py # Direct Python API tests
├── db/ # Database-focused tests
│ ├── test_allocation.py
│ ├── test_resource_provider.py
│ └── test_base.py # Helper functions
├── fixtures/ # Gabbi-specific fixtures
│ ├── __init__.py
│ ├── gabbits.py # APIFixture and variants
│ ├── placement.py # PlacementFixture (for Nova)
│ └── capture.py # Logging/warnings fixtures
└── gabbits/ # YAML test files
├── basic-http.yaml
├── resource-provider.yaml
├── allocations.yaml
└── ... (79 total files)
Key Files:
- fixtures.py: Database fixture using oslo.db
- functional/base.py: Base test case with minimal setup
- functional/fixtures/gabbits.py: Gabbi-specific fixtures with API setup
- functional/fixtures/capture.py: Logging/warning capture
- functional/test_api.py: Gabbi test discovery and loading
- functional/db/test_base.py: Helper functions for creating test data
File: placement/tests/fixtures.py
"""Fixtures for Placement tests."""
from oslo_config import cfg
from oslo_db.sqlalchemy import test_fixtures
from placement.db.sqlalchemy import migration
from placement import db_api as placement_db
from placement.objects import resource_class
from placement.objects import trait
class Database(test_fixtures.GeneratesSchema, test_fixtures.AdHocDbFixture):
"""Database fixture for placement tests.
Inherits from oslo_db test fixtures to provide:
- In-memory SQLite database
- Automatic schema creation
- Transaction management
- Cleanup between tests
"""
def __init__(self, conf_fixture, set_config=False):
"""Create a database fixture.
:param conf_fixture: oslo_config Config fixture
:param set_config: If True, register and set connection config
"""
super(Database, self).__init__()
if set_config:
try:
conf_fixture.register_opt(
cfg.StrOpt('connection'), group='placement_database')
except cfg.DuplicateOptError:
# already registered
pass
conf_fixture.config(connection='sqlite://',
group='placement_database')
self.conf_fixture = conf_fixture
self.get_engine = placement_db.get_placement_engine
placement_db.configure(self.conf_fixture.conf)
def get_enginefacade(self):
"""Return the enginefacade for this database."""
return placement_db.placement_context_manager
def generate_schema_create_all(self, engine):
"""Create database schema.
Called by oslo_db fixtures during setup.
Uses Alembic to create schema from models.
"""
# Create schema using Alembic
migration.create_schema(engine)
# Patch engine into enginefacade early
# (oslo_db will patch it later, but we need it now)
_reset_facade = placement_db.placement_context_manager.patch_engine(
engine)
self.addCleanup(_reset_facade)
# Reset sync flags
self.addCleanup(self.cleanup)
self.cleanup()
# Sync traits and resource classes
# This populates standard resource classes and traits
from placement import deploy
deploy.update_database(self.conf_fixture.conf)
def cleanup(self):
"""Reset sync flags for standard traits/resource classes."""
trait._TRAITS_SYNCED = False
        resource_class._RESOURCE_CLASSES_SYNCED = False
Key Points:
- Inherits from oslo_db.sqlalchemy.test_fixtures
- Uses sqlite:// (in-memory) for speed
- Creates schema via Alembic migration
- Syncs standard resource classes/traits
- Cleans up between tests
File: placement/tests/functional/base.py
"""Base test case for placement functional tests."""
from oslo_config import cfg
from oslo_config import fixture as config_fixture
from oslo_log.fixture import logging_error
from oslotest import output
import testtools
from placement import conf
from placement import context
from placement.tests import fixtures
from placement.tests.functional.fixtures import capture
from placement.tests.unit import policy_fixture
class TestCase(testtools.TestCase):
"""A base test case for placement functional tests.
Sets up minimum configuration for database and policy handling
and establishes the placement database.
"""
USES_DB = True
def setUp(self):
super(TestCase, self).setUp()
# Manage required configuration
self.conf_fixture = self.useFixture(
config_fixture.Config(cfg.ConfigOpts()))
conf.register_opts(self.conf_fixture.conf)
if self.USES_DB:
self.placement_db = self.useFixture(fixtures.Database(
self.conf_fixture, set_config=True))
else:
self.conf_fixture.config(
connection='sqlite://',
group='placement_database',
)
self.conf_fixture.conf([], default_config_files=[])
# Set up policy
self.useFixture(policy_fixture.PolicyFixture(self.conf_fixture))
# Set up logging and output capture
self.useFixture(capture.Logging())
self.useFixture(output.CaptureOutput())
self.useFixture(capture.WarningsFixture())
self.useFixture(logging_error.get_logging_handle_error_fixture())
# Create a request context
self.context = context.RequestContext()
self.context.config = self.conf_fixture.conf
class NoDBTestCase(TestCase):
"""Test case without database."""
    USES_DB = False
Key Points:
- Uses testtools.TestCase as base (OpenStack standard)
- Sets up an oslo.config fixture with isolated config
- Creates the database fixture, gated by the USES_DB flag
- Sets up logging, output capture, warnings
- Creates a request context for database operations (a usage sketch follows this list)
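As a quick illustration, here is a hedged sketch of a database-backed functional test built on this base class. The test module and assertions are hypothetical; create_provider comes from the db helpers shown later in this document.
from placement.tests.functional import base
from placement.tests.functional.db import test_base as tb


class ResourceProviderSmokeTest(base.TestCase):
    """Hypothetical smoke test relying on the fixtures set up in setUp()."""

    def test_create_provider(self):
        # self.context was created by TestCase.setUp() and is bound to the
        # in-memory database established by the Database fixture.
        rp = tb.create_provider(self.context, 'rp-smoke')
        self.assertEqual('rp-smoke', rp.name)
        self.assertEqual(0, rp.generation)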
File: placement/tests/functional/fixtures/capture.py
"""Fixtures for capturing logs and filtering warnings."""
import logging
import warnings
import fixtures
from oslotest import log
from sqlalchemy import exc as sqla_exc
class NullHandler(logging.Handler):
"""Custom NullHandler that formats records.
Used to detect formatting errors in debug logs even when
logs aren't captured.
"""
def handle(self, record):
self.format(record)
def emit(self, record):
pass
def createLock(self):
self.lock = None
class Logging(log.ConfigureLogging):
"""Logging fixture for tests.
- Captures logs for later inspection
- Ensures DEBUG logs are formatted even if not captured
"""
def __init__(self):
super(Logging, self).__init__()
# Default to INFO if not otherwise set
if self.level is None:
self.level = logging.INFO
def setUp(self):
super(Logging, self).setUp()
if self.level > logging.DEBUG:
handler = NullHandler()
self.useFixture(fixtures.LogHandler(handler, nuke_handlers=False))
handler.setLevel(logging.DEBUG)
class WarningsFixture(fixtures.Fixture):
"""Filter or escalate certain warnings during test runs.
Add additional entries as required. Remove when obsolete.
"""
def setUp(self):
super(WarningsFixture, self).setUp()
self._original_warning_filters = warnings.filters[:]
warnings.simplefilter("once", DeprecationWarning)
# Ignore policy scope warnings
warnings.filterwarnings(
'ignore',
message="Policy .* failed scope check",
category=UserWarning)
# Escalate invalid UUID warnings to errors
warnings.filterwarnings('error', message=".*invalid UUID.*")
# Prevent introducing unmapped columns
warnings.filterwarnings(
'error',
category=sqla_exc.SAWarning)
# Configure SQLAlchemy warnings
warnings.filterwarnings(
'ignore',
category=sqla_exc.SADeprecationWarning)
warnings.filterwarnings(
'error',
module='placement',
category=sqla_exc.SADeprecationWarning)
self.addCleanup(self._reset_warning_filters)
def _reset_warning_filters(self):
        warnings.filters[:] = self._original_warning_filters
Key Points:
- Logging: Captures logs and ensures DEBUG logs are formatted
- WarningsFixture: Filters/escalates warnings to catch issues early (illustrated below)
- Both clean up after themselves
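A small standalone illustration (plain Python, not placement code) of why the escalation matters: once a matching warning becomes an error, a sloppy code path fails the test loudly instead of passing silently.
import warnings

# Mirror one of the filters installed by WarningsFixture above.
warnings.filterwarnings('error', message=".*invalid UUID.*")

try:
    # Any code path emitting a matching warning now raises immediately.
    warnings.warn("Object received an invalid UUID 'not-a-uuid'")
except UserWarning as exc:
    print('escalated to error:', exc)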
File: placement/tests/functional/fixtures/gabbits.py
"""Gabbi fixtures for placement API testing."""
import os
from gabbi import fixture
import os_resource_classes as orc
import os_traits as ot
from oslo_config import cfg
from oslo_config import fixture as config_fixture
from oslo_log.fixture import logging_error
from oslo_policy import opts as policy_opts
from oslo_utils.fixture import uuidsentinel as uuids
from oslo_utils import uuidutils
from oslotest import output
from placement import conf
from placement import context
from placement import deploy
from placement.tests import fixtures
from placement.tests.functional.fixtures import capture
from placement.tests.unit import policy_fixture
# Global CONF for loadapp workaround
# (gabbi limitations require this)
CONF = None
def setup_app():
"""App factory for gabbi.
Called by gabbi to get the WSGI application under test.
"""
global CONF
return deploy.loadapp(CONF)
class APIFixture(fixture.GabbiFixture):
"""Setup the required backend fixtures for placement service.
This is the base fixture for Gabbi tests. It:
- Sets up configuration
- Creates database
- Initializes policy
- Sets environment variables for test data
- Creates the WSGI application
"""
# For secure RBAC testing (optional)
_secure_rbac = False
def start_fixture(self):
"""Called once before any tests in a YAML file run."""
global CONF
# Set up logging and output capture
self.standard_logging_fixture = capture.Logging()
self.standard_logging_fixture.setUp()
self.output_stream_fixture = output.CaptureOutput()
self.output_stream_fixture.setUp()
self.logging_error_fixture = (
logging_error.get_logging_handle_error_fixture())
self.logging_error_fixture.setUp()
self.warnings_fixture = capture.WarningsFixture()
self.warnings_fixture.setUp()
# Create isolated config (don't use global CONF)
self.conf_fixture = config_fixture.Config(cfg.ConfigOpts())
self.conf_fixture.setUp()
conf.register_opts(self.conf_fixture.conf)
# Configure API with no auth
self.conf_fixture.config(group='api', auth_strategy='noauth2')
self.conf_fixture.config(
group='oslo_policy',
enforce_scope=self._secure_rbac,
enforce_new_defaults=self._secure_rbac,
)
# Set up database
self.placement_db_fixture = fixtures.Database(
self.conf_fixture, set_config=True)
self.placement_db_fixture.setUp()
# Create context for fixture data creation
self.context = context.RequestContext()
self.context.config = self.conf_fixture.conf
# Set default policy opts
policy_opts.set_defaults(self.conf_fixture.conf)
# Empty config files list (don't read /etc/placement/placement.conf)
self.conf_fixture.conf([], default_config_files=[])
# Turn on policy fixture
self.policy_fixture = policy_fixture.PolicyFixture(
self.conf_fixture)
self.policy_fixture.setUp()
# Set up environment variables for use in YAML tests
# These are substituted into test data via $ENVIRON['VAR_NAME']
os.environ['RP_UUID'] = uuidutils.generate_uuid()
os.environ['RP_NAME'] = uuidutils.generate_uuid()
os.environ['RP_UUID1'] = uuidutils.generate_uuid()
os.environ['RP_NAME1'] = uuidutils.generate_uuid()
os.environ['RP_UUID2'] = uuidutils.generate_uuid()
os.environ['RP_NAME2'] = uuidutils.generate_uuid()
os.environ['CUSTOM_RES_CLASS'] = 'CUSTOM_IRON_NFV'
os.environ['PROJECT_ID'] = uuidutils.generate_uuid()
os.environ['USER_ID'] = uuidutils.generate_uuid()
os.environ['CONSUMER_UUID'] = uuidutils.generate_uuid()
# ... etc.
# Store config globally for setup_app()
CONF = self.conf_fixture.conf
def stop_fixture(self):
"""Called after all tests in a YAML file complete."""
global CONF
# Clean up all fixtures in reverse order
self.placement_db_fixture.cleanUp()
self.warnings_fixture.cleanUp()
self.output_stream_fixture.cleanUp()
self.standard_logging_fixture.cleanUp()
self.logging_error_fixture.cleanUp()
self.policy_fixture.cleanUp()
self.conf_fixture.cleanUp()
        CONF = None
Key Points:
- Extends gabbi.fixture.GabbiFixture
- start_fixture(): Called once before the tests in a YAML file
- stop_fixture(): Called once after the tests in a YAML file
- Sets up config, database, policy, logging
- Populates os.environ with UUIDs for test data
- Uses global CONF to work around a gabbi limitation
Example: AllocationFixture (pre-creates test data):
class AllocationFixture(APIFixture):
"""An APIFixture that has some pre-made Allocations.
Pre-creates:
- Resource provider with VCPU and DISK_GB inventory
- Two consumers with allocations
- Users and projects
"""
def start_fixture(self):
# Call parent to set up base infrastructure
super(AllocationFixture, self).start_fixture()
# Create additional environment variables
os.environ['ALT_USER_ID'] = uuidutils.generate_uuid()
project_id = os.environ['PROJECT_ID']
user_id = os.environ['USER_ID']
alt_user_id = os.environ['ALT_USER_ID']
# Create user and project objects in database
from placement.objects import user as user_obj
from placement.objects import project as project_obj
user = user_obj.User(self.context, external_id=user_id)
user.create()
alt_user = user_obj.User(self.context, external_id=alt_user_id)
alt_user.create()
project = project_obj.Project(self.context, external_id=project_id)
project.create()
# Create resource provider with inventory
from placement.tests.functional.db import test_base as tb
rp_name = os.environ['RP_NAME']
rp_uuid = os.environ['RP_UUID']
rp = tb.create_provider(self.context, rp_name, uuid=rp_uuid)
tb.add_inventory(rp, 'DISK_GB', 2048,
step_size=10, min_unit=10, max_unit=1000)
tb.add_inventory(rp, 'VCPU', 10, max_unit=10)
# Create consumers with allocations
consumer1 = tb.ensure_consumer(self.context, user, project)
tb.set_allocation(self.context, rp, consumer1, {'DISK_GB': 1000})
os.environ['CONSUMER_0'] = consumer1.uuid
consumer2 = tb.ensure_consumer(self.context, user, project)
tb.set_allocation(self.context, rp, consumer2, {'VCPU': 6})
        os.environ['CONSUMER_ID'] = consumer2.uuid
Key Points:
- Extends APIFixture for base setup
- Uses helper functions from test_base.py
- Pre-creates complex test scenarios
- Stores IDs in environment variables for YAML tests
Key Files:
- Configuration Registration: placement/conf/database.py
- Database API: placement/db_api.py (enginefacade setup)
- Migration: placement/db/sqlalchemy/migration.py (Alembic)
- Models: placement/db/sqlalchemy/models.py (SQLAlchemy)
File: placement/db_api.py
"""Database context manager for placement database connection."""
from oslo_db.sqlalchemy import enginefacade
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
# Global enginefacade transaction context manager
placement_context_manager = enginefacade.transaction_context()
def _get_db_conf(conf_group):
"""Extract db config from oslo.config group."""
conf_dict = dict(conf_group.items())
# Remove non-enginefacade settings
conf_dict.pop('sync_on_startup', None)
return conf_dict
def configure(conf):
"""Configure the database connection.
Called once at application startup (or fixture setup).
"""
placement_context_manager.configure(
**_get_db_conf(conf.placement_database))
def get_placement_engine():
"""Get the database engine."""
return placement_context_manager.writer.get_engine()
@enginefacade.transaction_context_provider
class DbContext(object):
"""Stub class for db session handling outside of web requests."""Configuration Options (automatically registered):
Configuration Options (automatically registered):
[placement_database]
connection = sqlite:// # In-memory for tests
# connection = mysql+pymysql://user:pass@host/dbname # Production
sync_on_startup = True  # Run migrations on startup
File: placement/db/sqlalchemy/migration.py
"""Database migration using Alembic."""
import os
import alembic
from alembic import config as alembic_config
from placement.db.sqlalchemy import models
def create_schema(engine=None):
"""Create schema from models, without a migration.
Used in tests to quickly create schema.
"""
base = models.BASE
if engine is None:
from placement import db_api as placement_db
engine = placement_db.get_placement_engine()
base.metadata.create_all(engine)
def upgrade(revision, config=None):
"""Run Alembic migrations.
:param revision: Target revision ('head' for latest)
"""
revision = revision or "head"
config = config or _alembic_config()
alembic.command.upgrade(config, revision)
def _alembic_config():
"""Get Alembic configuration."""
path = os.path.join(os.path.dirname(__file__), "alembic.ini")
config = alembic_config.Config(path)
    return config
Alembic Directory Structure:
placement/db/sqlalchemy/
├── alembic.ini # Alembic configuration
├── alembic/
│ ├── env.py # Alembic environment
│ ├── script.py.mako # Migration template
│ └── versions/ # Migration scripts
│ ├── b4ed3a175331_initial.py
│ └── ...
├── migration.py # Migration helpers
└── models.py # SQLAlchemy models
File: placement/tests/functional/db/test_base.py
"""Helper functions for creating test data in functional tests."""
from oslo_utils import uuidutils
from oslo_utils.fixture import uuidsentinel as uuids
from placement import exception
from placement.objects import allocation as alloc_obj
from placement.objects import consumer as consumer_obj
from placement.objects import inventory as inv_obj
from placement.objects import project as project_obj
from placement.objects import resource_class as rc_obj
from placement.objects import resource_provider as rp_obj
from placement.objects import trait as trait_obj
from placement.objects import user as user_obj
def create_provider(context, name, *aggs, **kwargs):
"""Create a resource provider.
:param context: RequestContext
:param name: Provider name
:param aggs: Aggregate UUIDs to associate
:param kwargs:
- parent: Parent provider UUID
- uuid: Provider UUID (generated if not provided)
:returns: ResourceProvider object
"""
parent = kwargs.get('parent')
uuid = kwargs.get('uuid', getattr(uuids, name))
rp = rp_obj.ResourceProvider(context, name=name, uuid=uuid)
if parent:
rp.parent_provider_uuid = parent
rp.create()
if aggs:
rp.set_aggregates(aggs)
return rp
def add_inventory(rp, rc, total, **kwargs):
"""Add inventory to a resource provider.
:param rp: ResourceProvider object
:param rc: Resource class name (e.g., 'VCPU')
:param total: Total amount of resource
:param kwargs: Optional inventory attributes
- max_unit, min_unit, step_size, reserved, allocation_ratio
"""
ensure_rc(rp._context, rc)
kwargs.setdefault('max_unit', total)
inv = inv_obj.Inventory(
rp._context, resource_provider=rp,
resource_class=rc, total=total, **kwargs)
rp.add_inventory(inv)
return inv
def set_traits(rp, *traits):
"""Set traits on a resource provider.
:param rp: ResourceProvider object
:param traits: Trait names (created if don't exist)
"""
tlist = []
for tname in traits:
try:
trait = trait_obj.Trait.get_by_name(rp._context, tname)
except exception.TraitNotFound:
trait = trait_obj.Trait(rp._context, name=tname)
trait.create()
tlist.append(trait)
rp.set_traits(tlist)
return tlist
def ensure_consumer(ctx, user, project, consumer_id=None):
"""Get or create a consumer.
:param ctx: RequestContext
:param user: User object
:param project: Project object
:param consumer_id: Consumer UUID (generated if None)
:returns: Consumer object
"""
consumer_id = consumer_id or uuidutils.generate_uuid()
try:
consumer = consumer_obj.Consumer.get_by_uuid(ctx, consumer_id)
except exception.NotFound:
consumer = consumer_obj.Consumer(
ctx, uuid=consumer_id, user=user, project=project)
consumer.create()
return consumer
def set_allocation(ctx, rp, consumer, rc_used_dict):
"""Set allocations for a consumer.
:param ctx: RequestContext
:param rp: ResourceProvider object
:param consumer: Consumer object
:param rc_used_dict: Dict of {resource_class: amount}
:returns: List of Allocation objects
"""
alloc = [
alloc_obj.Allocation(
resource_provider=rp, resource_class=rc,
consumer=consumer, used=used)
for rc, used in rc_used_dict.items()
]
alloc_obj.replace_all(ctx, alloc)
return alloc
def create_user_and_project(ctx, prefix='fake'):
"""Create user and project objects.
:param ctx: RequestContext
:param prefix: Prefix for external IDs
:returns: (User, Project) tuple
"""
user = user_obj.User(ctx, external_id='%s-user' % prefix)
user.create()
proj = project_obj.Project(ctx, external_id='%s-project' % prefix)
proj.create()
return user, proj
def ensure_rc(context, name):
"""Ensure resource class exists.
:param context: RequestContext
:param name: Resource class name
"""
try:
rc_obj.ResourceClass.get_by_name(context, name)
except exception.NotFound:
        rc_obj.ResourceClass(context, name=name).create()
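To show how these helpers compose, here is a hedged sketch of building a small provider topology, similar in spirit to what fixtures such as SharedStorageFixture do. The function name and values are illustrative, not taken from the placement tree.
from oslo_utils import uuidutils

from placement.tests.functional.db import test_base as tb


def build_compute_with_shared_disk(context):
    """Create a compute provider and a shared-storage provider in one aggregate."""
    agg = uuidutils.generate_uuid()

    cn = tb.create_provider(context, 'cn1', agg)
    tb.add_inventory(cn, 'VCPU', 16)
    tb.add_inventory(cn, 'MEMORY_MB', 32768)

    shared = tb.create_provider(context, 'shared_disk', agg)
    tb.add_inventory(shared, 'DISK_GB', 2000, min_unit=10)
    tb.set_traits(shared, 'MISC_SHARES_VIA_AGGREGATE')

    return cn, shared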
"""Deployment handling for Placement API."""
from microversion_parse import middleware as mp_middleware
import oslo_middleware
from placement import auth
from placement import fault_wrap
from placement import handler
from placement import microversion
from placement import policy
from placement import requestlog
from placement import util
def deploy(conf):
"""Assemble the middleware pipeline leading to the placement app.
Pipeline (inside to outside):
1. PlacementHandler (core application)
2. MicroversionMiddleware (API versioning)
3. FaultWrapper (exception handling)
4. PlacementKeystoneContext (auth context)
5. Auth middleware (noauth2 or keystonemiddleware)
6. CORS middleware (optional)
7. RequestLog (logging)
8. HTTPProxyToWSGI (proxy header handling)
"""
# Auth strategy
if conf.api.auth_strategy == 'noauth2':
auth_middleware = auth.NoAuthMiddleware
else:
auth_middleware = auth.filter_factory(
{}, oslo_config_config=conf)
# CORS (optional)
if conf.cors.allowed_origin:
cors_middleware = oslo_middleware.CORS.factory(
{}, **conf.cors)
else:
cors_middleware = None
# Core application
application = handler.PlacementHandler(config=conf)
# Microversion middleware
application = mp_middleware.MicroversionMiddleware(
application, microversion.SERVICE_TYPE, microversion.VERSIONS,
json_error_formatter=util.json_error_formatter)
# Middleware stack (inside to outside)
for middleware in (fault_wrap.FaultWrapper,
auth.PlacementKeystoneContext,
auth_middleware,
cors_middleware,
requestlog.RequestLog,
oslo_middleware.HTTPProxyToWSGI):
if middleware:
application = middleware(application)
return application
def update_database(conf):
"""Run database migrations and sync standard data."""
from placement.db.sqlalchemy import migration
from placement import db_api
from placement.objects import trait
from placement.objects import resource_class
if conf.placement_database.sync_on_startup:
migration.upgrade('head')
ctx = db_api.DbContext()
trait.ensure_sync(ctx)
resource_class.ensure_sync(ctx)
def loadapp(config, project_name=None):
"""WSGI application creator for placement.
:param config: An oslo_config.cfg.ConfigOpts
:param project_name: Ignored (backwards compatibility)
:returns: WSGI application
"""
application = deploy(config)
policy.init(config)
update_database(config)
    return application
Key Points:
- loadapp() is the main entry point
- deploy() assembles the middleware stack
- update_database() syncs standard traits/resource classes
- Middleware wraps from inside out (see the sketch below)
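The "inside out" wrapping is easiest to see with a tiny self-contained sketch (plain Python, not placement code): the last middleware applied in the loop becomes the outermost layer and therefore sees the request first.
def application(environ, start_response):
    """Innermost WSGI app, standing in for PlacementHandler."""
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok']


def make_tracer(name, app):
    """Return a middleware that logs its name before delegating inward."""
    def middleware(environ, start_response):
        print('enter', name)
        return app(environ, start_response)
    return middleware


app = application
for name in ('fault_wrap', 'auth', 'request_log'):  # inside -> outside
    app = make_tracer(name, app)

# A request now prints: enter request_log, enter auth, enter fault_wrap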
File: placement/tests/functional/test_api.py
"""Gabbi test loader for placement API tests."""
import os
from oslotest import output
import wsgi_intercept
from gabbi import driver
from placement.tests.functional.fixtures import capture
from placement.tests.functional.fixtures import gabbits as fixtures
# Enforce native str for response headers
wsgi_intercept.STRICT_RESPONSE_HEADERS = True
TESTS_DIR = 'gabbits'
def load_tests(loader, tests, pattern):
"""Provide a TestSuite to the discovery process.
This is a standard Python unittest load_tests protocol.
Called by test runners (stestr, unittest discover).
:param loader: unittest.TestLoader
:param tests: Existing TestSuite (ignored)
:param pattern: Pattern for test discovery (ignored)
:returns: TestSuite containing Gabbi tests
"""
test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
# Per-test fixtures (for clean output/logging)
inner_fixtures = [
output.CaptureOutput,
capture.Logging,
]
# Build test suite from YAML files
return driver.build_tests(
test_dir, # Directory with YAML files
loader, # unittest.TestLoader
host=None, # No real host (wsgi-intercept)
test_loader_name=__name__, # Module name for test naming
intercept=fixtures.setup_app, # App factory function
inner_fixtures=inner_fixtures, # Per-test fixtures
fixture_module=fixtures # Module with GabbiFixture classes
    )
Key Parameters:
- test_dir: Directory containing YAML files
- intercept: Function returning the WSGI app (used by wsgi-intercept)
- fixture_module: Module containing GabbiFixture subclasses
- inner_fixtures: Applied to each individual test
- test_loader_name: Used to name generated test methods
File: placement/tests/functional/gabbits/basic-http.yaml
# Basic HTTP behavior tests for placement API
fixtures:
- APIFixture # Uses APIFixture from fixtures/gabbits.py
defaults:
request_headers:
x-auth-token: admin # NoAuth token
accept: application/json
tests:
- name: 404 at no service
GET: /barnabas
status: 404
response_json_paths:
$.errors[0].title: Not Found
- name: error message has request id
GET: /barnabas
status: 404
response_json_paths:
$.errors[0].request_id: /req-[a-fA-F0-9-]+/
- name: 200 at home
GET: /
status: 200
- name: post good resource provider
POST: /resource_providers
request_headers:
content-type: application/json
data:
name: $ENVIRON['RP_NAME'] # Environment variable substitution
uuid: $ENVIRON['RP_UUID']
status: 201
response_headers:
location: //resource_providers/[a-f0-9-]+/
- name: get resource provider
GET: $LOCATION # Uses Location header from previous test
response_json_paths:
$.uuid: $ENVIRON['RP_UUID']
$.name: $ENVIRON['RP_NAME']
      $.generation: 0
YAML Structure:
- fixtures: List of GabbiFixture class names
- defaults: Default headers/settings for all tests
- tests: List of test cases (sequential)
Test Case Structure:
- name: descriptive name
DESC: optional long description
GET: /path # Or POST, PUT, DELETE, PATCH
request_headers:
header-name: value
data: # Request body (JSON or string)
key: value
status: 200 # Expected status code
response_headers: # Expected headers (regex allowed)
content-type: application/json
response_forbidden_headers: # Headers that must NOT be present
- x-not-here
response_json_paths: # JSONPath assertions
$.key: value
$.list[0]: /regex/
response_strings: # String in response body
- "expected string"
poll: # Optional: retry assertion
count: 10
    delay: 0.1
Advanced Features:
- Environment variables: $ENVIRON['VAR']
- Prior test data: $LOCATION, $HISTORY['test-name'].<attr>
- JSONPath: Complex JSON assertions
- Regex: /pattern/ in assertions
- URL templates: $ENVIRON['UUID'] in URLs
Example: Complex Test with Pre-Created Data
fixtures:
- AllocationFixture # Has pre-created allocations
tests:
- name: get usages
GET: /resource_providers/$ENVIRON['RP_UUID']/usages
response_json_paths:
$.usages.VCPU: 6
$.usages.DISK_GB: 1000
$.resource_provider_generation: 2
- name: delete consumer
DELETE: /allocations/$ENVIRON['CONSUMER_ID']
status: 204
- name: verify usages updated
GET: /resource_providers/$ENVIRON['RP_UUID']/usages
response_json_paths:
$.usages.VCPU: 0
      $.usages.DISK_GB: 1000
File: .stestr.conf
[DEFAULT]
test_path=./placement/tests/unit
top_dir=./
# Gabbi test grouping
# Ensures tests from the same YAML file run in the same process
# (maintains test ordering within a file)
group_regex=placement\.tests\.functional\.test_api(?:\.|_)([^_]+)
Key Points:
- group_regex: Groups tests by YAML file
- The pattern captures the YAML-derived class name from the test name
- Tests in the same group run serially in the same process
- Different groups run in parallel
How it Works:
Test name format: placement.tests.functional.test_api.BasicHttpGabbits.test_000_404_at_no_service
The regex keys on the generated class name, BasicHttpGabbits (derived from basic-http.yaml).
All tests sharing that match run in the same worker, preserving their in-file order.
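A quick way to convince yourself of the grouping (plain Python, using the test-id format shown above; the second id is hypothetical but follows the same naming scheme):
import re

GROUP_REGEX = r"placement\.tests\.functional\.test_api(?:\.|_)([^_]+)"

ids = [
    "placement.tests.functional.test_api."
    "BasicHttpGabbits.test_000_404_at_no_service",
    "placement.tests.functional.test_api."
    "BasicHttpGabbits.test_002_200_at_home",
]

# Both ids produce the same matched prefix, so stestr schedules them in the
# same worker; an id from another YAML file would produce a different one.
print({re.match(GROUP_REGEX, test_id).group(0) for test_id in ids})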
File: placement/tests/unit/policy_fixture.py
"""Policy fixture for tests."""
import copy
import fixtures
from oslo_policy import policy as oslo_policy
from placement.conf import paths
from placement import policies
from placement import policy as placement_policy
class PolicyFixture(fixtures.Fixture):
"""Load the default placement policy for tests."""
def __init__(self, conf_fixture):
self.conf_fixture = conf_fixture
super(PolicyFixture, self).__init__()
def setUp(self):
super(PolicyFixture, self).setUp()
# Set policy file location
policy_file = paths.state_path_def('etc/placement/policy.yaml')
self.conf_fixture.config(
group='oslo_policy', policy_file=policy_file)
# Reset policy
placement_policy.reset()
# Initialize with default rules
# (copy to avoid oslo.policy modifying originals)
placement_policy.init(
self.conf_fixture.conf,
suppress_deprecation_warnings=True,
rules=copy.deepcopy(policies.list_rules()))
self.addCleanup(placement_policy.reset)
@staticmethod
def set_rules(rules, overwrite=True):
"""Override policy rules for testing.
:param rules: dict of action=rule mappings
:param overwrite: Whether to replace all rules
"""
enforcer = placement_policy.get_enforcer()
enforcer.set_rules(
oslo_policy.Rules.from_dict(rules),
            overwrite=overwrite)
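A hedged usage sketch: once PolicyFixture is active in a test, set_rules() can override individual policies. The rule name below is one of placement's policy actions, but this particular override is illustrative, not taken from the placement tree.
from placement.tests.unit import policy_fixture

# Inside a test that already did:
#     self.useFixture(policy_fixture.PolicyFixture(self.conf_fixture))
# deny listing resource providers while keeping all other default rules.
policy_fixture.PolicyFixture.set_rules(
    {'placement:resource_providers:list': '!'},
    overwrite=False)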
File: tox.ini
[tox]
minversion = 4.6.0
envlist = py3,functional,pep8
[testenv]
usedevelop = True
allowlist_externals =
bash
rm
env
install_command = python -I -m pip install -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master} {opts} {packages}
setenv =
VIRTUAL_ENV={envdir}
LANGUAGE=en_US
LC_ALL=en_US.utf-8
OS_STDOUT_CAPTURE=1
OS_STDERR_CAPTURE=1
OS_LOG_CAPTURE = {env:OS_LOG_CAPTURE:1}
OS_TEST_TIMEOUT=160
PYTHONDONTWRITEBYTECODE=1
deps = -r{toxinidir}/test-requirements.txt
commands =
stestr run {posargs}
passenv =
OS_LOG_CAPTURE
OS_DEBUG
GENERATE_HASHES
# Functional tests environment
[testenv:functional{,-py39,-py310,-py311,-py312}]
commands =
stestr --test-path=./placement/tests/functional run {posargs}
[testenv:pep8]
description =
Run style checks.
skip_install = true
deps =
pre-commit
commands =
pre-commit run --all-files --show-diff-on-failure
[testenv:cover]
setenv =
{[testenv]setenv}
PYTHON=coverage run --source placement --parallel-mode
commands =
coverage erase
stestr --test-path=./placement/tests run {posargs}
coverage combine
coverage html -d cover
coverage xml -o cover/coverage.xml
  coverage report
Key Points:
- functional testenv: Sets --test-path for functional tests
- stestr: Parallel test runner
- Environment variables control output capture
- Python version variants supported
Running Tests:
# All functional tests
tox -e functional
# Specific test file
tox -e functional -- test_api.BasicHttpGabbits
# Specific test
tox -e functional -- test_api.BasicHttpGabbits.test_000_404_at_no_service
# With debugging
OS_LOG_CAPTURE=0 tox -e functional -- test_api
# Parallel workers
tox -e functional -- --concurrency 4
File: .zuul.yaml
- project:
templates:
- check-requirements
- integrated-gate-placement
- openstack-cover-jobs
- openstack-python3-jobs
- periodic-stable-jobs
- publish-openstack-docs-pti
- release-notes-jobs-python3
check:
jobs:
- openstack-tox-functional-py310
- openstack-tox-functional-py312
- openstack-tox-pep8
- placement-nova-tox-functional-py312
- tempest-integrated-placement
gate:
jobs:
- openstack-tox-functional-py310
- openstack-tox-functional-py312
- openstack-tox-pep8
- placement-nova-tox-functional-py312
- tempest-integrated-placement
- job:
name: placement-nova-tox-functional-py312
parent: nova-tox-functional-py312
description: |
Run the nova functional tests to confirm that we aren't breaking
the PlacementFixture.
vars:
    tox_envlist: functional-without-sample-db-tests
Key Points:
- openstack-tox-functional-*: Runs tox -e functional
- placement-nova-tox-functional-*: Tests the PlacementFixture used by Nova
- tempest-integrated-placement: Integration tests
- Runs in check (on every proposed change) and gate (final verification before merge)
Directory Structure:
myproject/
├── myproject/
│ ├── __init__.py
│ ├── api.py # WSGI application
│ ├── conf.py # Configuration
│ ├── db_api.py # Database setup
│ └── tests/
│ ├── __init__.py
│ ├── fixtures.py # Database fixture
│ └── functional/
│ ├── __init__.py
│ ├── base.py
│ ├── test_api.py
│ ├── fixtures/
│ │ ├── __init__.py
│ │ ├── capture.py
│ │ └── gabbits.py
│ └── gabbits/
│ └── basic.yaml
├── setup.py
├── setup.cfg
├── tox.ini
├── .stestr.conf
└── test-requirements.txt
1. test-requirements.txt
fixtures>=3.0.0
gabbi>=1.35.0
oslotest>=3.5.0
stestr>=1.0.0
testtools>=2.2.0
wsgi-intercept>=1.7.0
oslo.config>=6.7.0
oslo.db>=8.6.0
oslo.log>=4.3.0
SQLAlchemy>=1.4.0
2. myproject/db_api.py
"""Database API setup."""
from oslo_db.sqlalchemy import enginefacade
context_manager = enginefacade.transaction_context()
def configure(conf):
"""Configure database connection."""
context_manager.configure(
connection=conf.database.connection,
sqlite_fk=True,
)
def get_engine():
"""Get database engine."""
return context_manager.writer.get_engine()
@enginefacade.transaction_context_provider
class DbContext(object):
"""Database context for operations outside web requests."""3. myproject/api.py
"""WSGI API application."""
import webob
import webob.dec
import webob.exc
class MyAPI(object):
"""Simple WSGI application."""
def __init__(self, config):
self.config = config
    @webob.dec.wsgify
    def __call__(self, req):
        """Handle WSGI request.

        webob's wsgify decorator expects a webob.Response (or a body
        string) as the return value, so JSON payloads are wrapped in
        Response objects with an explicit JSON content type.
        """
        if req.path_info == '/':
            return webob.Response(json={'version': '1.0'},
                                  content_type='application/json')
        elif req.path_info == '/items':
            if req.method == 'GET':
                return webob.Response(json={'items': []},
                                      content_type='application/json')
            elif req.method == 'POST':
                # Create item: echo the submitted data back
                data = req.json
                return webob.Response(
                    status=201,
                    json={'item': data},
                    content_type='application/json',
                )
        raise webob.exc.HTTPNotFound()
def loadapp(config):
"""Create WSGI application."""
    return MyAPI(config)
4. myproject/tests/fixtures.py
"""Test fixtures."""
from oslo_config import cfg
from oslo_db.sqlalchemy import test_fixtures
from myproject import db_api
class Database(test_fixtures.GeneratesSchema,
test_fixtures.AdHocDbFixture):
"""Database fixture."""
def __init__(self, conf_fixture, set_config=False):
super(Database, self).__init__()
if set_config:
conf_fixture.register_opt(
cfg.StrOpt('connection'), group='database')
conf_fixture.config(
connection='sqlite://',
group='database')
self.conf_fixture = conf_fixture
self.get_engine = db_api.get_engine
db_api.configure(self.conf_fixture.conf)
def get_enginefacade(self):
return db_api.context_manager
def generate_schema_create_all(self, engine):
# Create schema
from myproject.db.sqlalchemy import models
models.BASE.metadata.create_all(engine)
# Patch engine into facade
_reset = db_api.context_manager.patch_engine(engine)
        self.addCleanup(_reset)
5. myproject/tests/functional/base.py
"""Base test case."""
from oslo_config import cfg
from oslo_config import fixture as config_fixture
from oslotest import output
import testtools
from myproject import conf
from myproject.tests import fixtures
from myproject.tests.functional.fixtures import capture
class TestCase(testtools.TestCase):
"""Base functional test case."""
def setUp(self):
super(TestCase, self).setUp()
# Config
self.conf_fixture = self.useFixture(
config_fixture.Config(cfg.ConfigOpts()))
conf.register_opts(self.conf_fixture.conf)
# Database
self.db = self.useFixture(
fixtures.Database(self.conf_fixture, set_config=True))
# Logging/output
self.useFixture(capture.Logging())
self.useFixture(output.CaptureOutput())
        self.useFixture(capture.WarningsFixture())
6. myproject/tests/functional/fixtures/capture.py
"""Logging and warning fixtures."""
import logging
import warnings
import fixtures
from oslotest import log
class Logging(log.ConfigureLogging):
"""Logging fixture."""
def __init__(self):
super(Logging, self).__init__()
if self.level is None:
self.level = logging.INFO
class WarningsFixture(fixtures.Fixture):
"""Warning filter fixture."""
def setUp(self):
super(WarningsFixture, self).setUp()
self._original = warnings.filters[:]
warnings.simplefilter("once", DeprecationWarning)
self.addCleanup(self._reset)
def _reset(self):
        warnings.filters[:] = self._original
7. myproject/tests/functional/fixtures/gabbits.py
"""Gabbi fixtures."""
import os
from gabbi import fixture
from oslo_config import cfg
from oslo_config import fixture as config_fixture
from oslotest import output
from oslo_utils import uuidutils
from myproject import conf
from myproject import api
from myproject.tests import fixtures
from myproject.tests.functional.fixtures import capture
CONF = None
def setup_app():
"""Application factory for gabbi."""
global CONF
return api.loadapp(CONF)
class APIFixture(fixture.GabbiFixture):
"""Base API fixture for gabbi tests."""
def start_fixture(self):
global CONF
# Logging
self.logging = capture.Logging()
self.logging.setUp()
self.output = output.CaptureOutput()
self.output.setUp()
self.warnings = capture.WarningsFixture()
self.warnings.setUp()
# Config
self.conf_fixture = config_fixture.Config(cfg.ConfigOpts())
self.conf_fixture.setUp()
conf.register_opts(self.conf_fixture.conf)
self.conf_fixture.conf([], default_config_files=[])
# Database
self.db = fixtures.Database(self.conf_fixture, set_config=True)
self.db.setUp()
# Environment variables
os.environ['ITEM_ID'] = uuidutils.generate_uuid()
os.environ['ITEM_NAME'] = 'test-item'
CONF = self.conf_fixture.conf
def stop_fixture(self):
global CONF
self.db.cleanUp()
self.warnings.cleanUp()
self.output.cleanUp()
self.logging.cleanUp()
self.conf_fixture.cleanUp()
        CONF = None
8. myproject/tests/functional/test_api.py
"""Gabbi test loader."""
import os
from oslotest import output
import wsgi_intercept
from gabbi import driver
from myproject.tests.functional.fixtures import capture
from myproject.tests.functional.fixtures import gabbits as fixtures
wsgi_intercept.STRICT_RESPONSE_HEADERS = True
TESTS_DIR = 'gabbits'
def load_tests(loader, tests, pattern):
"""Load gabbi tests."""
test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
inner_fixtures = [
output.CaptureOutput,
capture.Logging,
]
return driver.build_tests(
test_dir, loader, host=None,
test_loader_name=__name__,
intercept=fixtures.setup_app,
inner_fixtures=inner_fixtures,
        fixture_module=fixtures)
9. myproject/tests/functional/gabbits/basic.yaml
fixtures:
- APIFixture
defaults:
request_headers:
accept: application/json
tests:
- name: get version
GET: /
status: 200
response_json_paths:
$.version: "1.0"
- name: get empty items
GET: /items
status: 200
response_json_paths:
$.items: []
- name: create item
POST: /items
request_headers:
content-type: application/json
data:
id: $ENVIRON['ITEM_ID']
name: $ENVIRON['ITEM_NAME']
status: 201
response_json_paths:
$.item.id: $ENVIRON['ITEM_ID']
      $.item.name: $ENVIRON['ITEM_NAME']
10. .stestr.conf
[DEFAULT]
test_path=./myproject/tests/unit
top_dir=./
group_regex=myproject\.tests\.functional\.test_api(?:\.|_)([^_]+)
11. tox.ini
[tox]
minversion = 4.6.0
envlist = py3,functional,pep8
[testenv]
usedevelop = True
deps = -r{toxinidir}/test-requirements.txt
commands = stestr run {posargs}
[testenv:functional]
commands = stestr --test-path=./myproject/tests/functional run {posargs}
Running:
tox -e functional
myproject/tests/functional/fixtures/gabbits.py (addition):
class ItemFixture(APIFixture):
"""Fixture with pre-created items."""
def start_fixture(self):
super(ItemFixture, self).start_fixture()
# Create items in database using objects/db layer
# (example assumes you have item creation logic)
from myproject.db import models
engine = self.db.get_engine()
with engine.begin() as conn:
conn.execute(
models.items_table.insert(),
[
{'id': os.environ['ITEM_ID'],
'name': os.environ['ITEM_NAME']},
]
            )
myproject/tests/functional/gabbits/with-items.yaml:
fixtures:
- ItemFixture # Uses ItemFixture instead of APIFixture
tests:
- name: get items
GET: /items
response_json_paths:
$.items[0].id: $ENVIRON['ITEM_ID']
      $.items[0].name: $ENVIRON['ITEM_NAME']
Implementation Checklist:
- Create tests/fixtures.py with Database fixture
  - Inherit from oslo_db.sqlalchemy.test_fixtures
  - Implement generate_schema_create_all()
  - Configure SQLite connection
- Create tests/functional/base.py with TestCase
  - Set up config fixture
  - Set up database fixture
  - Set up logging/output capture
- Create tests/functional/fixtures/capture.py
  - Logging fixture
  - WarningsFixture
- Install dependencies in test-requirements.txt
  - gabbi>=1.35.0
  - wsgi-intercept>=1.7.0
  - oslotest>=3.5.0
  - stestr>=1.0.0
- Create tests/functional/fixtures/gabbits.py
  - setup_app() function
  - APIFixture class (extends gabbi.fixture.GabbiFixture)
  - start_fixture() method
  - stop_fixture() method
- Create tests/functional/test_api.py
  - load_tests() function
  - Call gabbi.driver.build_tests()
- Create .stestr.conf
  - Set test_path
  - Set group_regex for Gabbi grouping
- Update tox.ini
  - Add [testenv:functional] section
  - Set --test-path for functional tests
- Create tests/functional/gabbits/ directory
  - Create first YAML test file
  - Declare fixtures
  - Set defaults
  - Write basic tests
- Run tests: tox -e functional
- Iterate on test coverage
- Create specialized fixtures (e.g., with pre-created data)
- Add helper functions (like test_base.py)
- Add policy testing fixtures
- Configure Zuul CI jobs
- Add coverage reporting
test-requirements.txt:
# Core testing
fixtures>=3.0.0
testtools>=2.2.0
stestr>=1.0.0
# Oslo libraries
oslo.config>=6.7.0
oslo.db>=8.6.0
oslo.log>=4.3.0
oslotest>=3.5.0
# Gabbi and WSGI testing
gabbi>=1.35.0
wsgi-intercept>=1.7.0
# Database drivers
SQLAlchemy>=1.4.0
# Optional (for MySQL/PostgreSQL testing)
PyMySQL>=0.8.0
psycopg2-binary>=2.8
requirements.txt (production dependencies):
oslo.config>=6.7.0
oslo.db>=8.6.0
oslo.log>=4.3.0
oslo.utils>=4.5.0
SQLAlchemy>=1.4.0
WebOb>=1.8.2
In fixture:
os.environ['RESOURCE_UUID'] = uuidutils.generate_uuid()
In YAML:
data:
  uuid: $ENVIRON['RESOURCE_UUID']
Using the Location header from a prior test:
- name: create resource
POST: /resources
status: 201
response_headers:
location: /resources/[a-f0-9-]+
- name: get resource
GET: $LOCATION # Uses Location header from previous test
  status: 200
JSONPath assertions:
response_json_paths:
$.items[0].id: abc123
$.items[?name='foo'].id: bar456
  $.metadata.count: /\d+/  # Regex
Fixture inheritance:
class BaseFixture(fixture.GabbiFixture):
def start_fixture(self):
# Base setup
pass
class SpecializedFixture(BaseFixture):
def start_fixture(self):
super(SpecializedFixture, self).start_fixture()
# Additional setup
        pass
Combining fixtures in a YAML file:
fixtures:
- APIFixture
  - CORSFixture  # Adds CORS configuration
Symptom: stestr run finds no tests
Solution:
- Check test_path in .stestr.conf
- Verify a load_tests() function exists in test_api.py
- Check YAML files are in the correct directory
Symptom: Tests within a YAML file don't run sequentially
Solution:
- Check group_regex in .stestr.conf
- Ensure the regex captures the YAML filename portion of the test name
- Verify test names match the pattern
Symptom: Tests fail with database errors
Solution:
- Verify generate_schema_create_all() is called
- Check db_api.configure() is called
- Ensure models are imported before schema creation
Symptom: NameError: name 'MyFixture' is not defined
Solution:
- Check the fixture class is in fixtures/gabbits.py
- Verify fixture_module=fixtures in build_tests()
- Check the fixture name in YAML matches the class name
Symptom: Connection errors or 500 responses
Solution:
- Verify setup_app() returns a WSGI application
- Check intercept=fixtures.setup_app in build_tests()
- Ensure wsgi_intercept.STRICT_RESPONSE_HEADERS = True
- Main test directory: placement/tests/functional/
- Fixtures: placement/tests/functional/fixtures/gabbits.py
- Test loader: placement/tests/functional/test_api.py
- Database fixture: placement/tests/fixtures.py
- Helper functions: placement/tests/functional/db/test_base.py
- Gabbi: https://gabbi.readthedocs.io/
- oslo.db: https://docs.openstack.org/oslo.db/
- oslo.config: https://docs.openstack.org/oslo.config/
- stestr: https://stestr.readthedocs.io/
- wsgi-intercept: https://github.com/cdent/wsgi-intercept
- Testing guidelines: https://docs.openstack.org/hacking/latest/
- Python style guide: https://docs.openstack.org/hacking/latest/
- Commit messages: https://wiki.openstack.org/wiki/GitCommitMessages
The Placement Gabbi functional test infrastructure provides a robust, maintainable approach to HTTP API testing:
Benefits:
- Declarative: Tests written in YAML
- Fast: In-memory SQLite, in-process WSGI
- Isolated: Each test file independent
- Maintainable: Clear separation of concerns
- Reusable: Fixtures shared across tests
Key Takeaways:
- Use oslo_db fixtures for database setup
- Use gabbi.fixture.GabbiFixture for API setup
- Use wsgi-intercept for in-process testing
- Group tests by YAML file with the stestr group_regex
- Use environment variables for shared state
This pattern is production-tested in OpenStack Placement and used by Nova's PlacementFixture, demonstrating its robustness and flexibility.
Document Version: 1.0
Generated from: OpenStack Placement (review/balazs_gibizer/bug/2126751)
Date: 2025-10-07