OpenStack Placement Gabbi Functional Test Infrastructure

Executive Summary

This document provides a comprehensive analysis of the OpenStack Placement project's Gabbi-based functional test infrastructure. It is designed to enable replication of this pattern in other OpenStack projects, with particular focus on fixture structure, database setup, API application initialization, and test execution.

Key Technologies:

  • Gabbi: YAML-based declarative HTTP testing framework
  • oslo.test/oslotest: OpenStack test framework base
  • oslo.db: Database abstraction and fixtures
  • wsgi-intercept: In-process WSGI application testing
  • SQLite: In-memory database for tests
  • stestr: Parallel test runner

Table of Contents

  1. Architecture Overview
  2. Test Directory Structure
  3. Fixture Architecture
  4. Database Setup
  5. API Application Initialization
  6. Gabbi Test Structure
  7. Test Execution Configuration
  8. Tox Configuration
  9. Zuul CI/CD Integration
  10. Complete Working Examples
  11. Implementation Checklist

Architecture Overview

The Placement functional test infrastructure consists of several layers that work together:

┌─────────────────────────────────────────────────────┐
│  Gabbi YAML Test Files (gabbits/*.yaml)             │
│  - Declarative HTTP test definitions                │
│  - Environment variable substitution                │
│  - Sequential test ordering within files            │
└──────────────────────────┬──────────────────────────┘
                           │
┌──────────────────────────▼──────────────────────────┐
│  Test Loader (test_api.py)                          │
│  - gabbi.driver.build_tests()                       │
│  - Discovers YAML files in gabbits/                 │
│  - Creates Python test cases                        │
└──────────────────────────┬──────────────────────────┘
                           │
┌──────────────────────────▼──────────────────────────┐
│  Fixture Layer (fixtures/gabbits.py)                │
│  - APIFixture (base)                                │
│  - AllocationFixture (with pre-created data)        │
│  - SharedStorageFixture (complex topologies)        │
│  - etc.                                             │
└──────────────────────────┬──────────────────────────┘
                           │
      ┌────────────────────┼────────────────────┐
      │                    │                    │
┌─────▼──────┐  ┌─────────▼────────┐  ┌───────▼──────┐
│ Config     │  │ Database         │  │ WSGI App     │
│ Fixture    │  │ Fixture          │  │ (via         │
│            │  │ (SQLite in-mem)  │  │ deploy.py)   │
└────────────┘  └──────────────────┘  └──────────────┘

Key Design Principles

  1. Isolation: Tests within a YAML file run in sequence; separate files run in parallel
  2. In-Memory Database: SQLite :memory: for speed
  3. No External Dependencies: No real Keystone, no network services
  4. WSGI Intercept: HTTP requests intercepted in-process (no network)
  5. Fixture Reuse: Multiple test files can share fixture classes
  6. Environment Variables: Share state between tests in a file

Test Directory Structure

placement/tests/
├── __init__.py
├── fixtures.py                      # Database fixture
├── README.rst                       # Testing philosophy
├── unit/                           # Unit tests
│   └── policy_fixture.py           # Policy fixture for tests
└── functional/                     # Functional tests
    ├── __init__.py
    ├── base.py                     # Base TestCase for DB tests
    ├── test_api.py                 # Gabbi test loader
    ├── test_direct.py              # Direct Python API tests
    ├── db/                         # Database-focused tests
    │   ├── test_allocation.py
    │   ├── test_resource_provider.py
    │   └── test_base.py            # Helper functions
    ├── fixtures/                   # Gabbi-specific fixtures
    │   ├── __init__.py
    │   ├── gabbits.py              # APIFixture and variants
    │   ├── placement.py            # PlacementFixture (for Nova)
    │   └── capture.py              # Logging/warnings fixtures
    └── gabbits/                    # YAML test files
        ├── basic-http.yaml
        ├── resource-provider.yaml
        ├── allocations.yaml
        └── ... (79 total files)

Key Files:

  • fixtures.py: Database fixture using oslo.db
  • functional/base.py: Base test case with minimal setup
  • functional/fixtures/gabbits.py: Gabbi-specific fixtures with API setup
  • functional/fixtures/capture.py: Logging/warning capture
  • functional/test_api.py: Gabbi test discovery and loading
  • functional/db/test_base.py: Helper functions for creating test data

Fixture Architecture

1. Base Database Fixture

File: placement/tests/fixtures.py

"""Fixtures for Placement tests."""

from oslo_config import cfg
from oslo_db.sqlalchemy import test_fixtures

from placement.db.sqlalchemy import migration
from placement import db_api as placement_db
from placement.objects import resource_class
from placement.objects import trait


class Database(test_fixtures.GeneratesSchema, test_fixtures.AdHocDbFixture):
    """Database fixture for placement tests.
    
    Inherits from oslo_db test fixtures to provide:
    - In-memory SQLite database
    - Automatic schema creation
    - Transaction management
    - Cleanup between tests
    """
    
    def __init__(self, conf_fixture, set_config=False):
        """Create a database fixture.
        
        :param conf_fixture: oslo_config Config fixture
        :param set_config: If True, register and set connection config
        """
        super(Database, self).__init__()
        if set_config:
            try:
                conf_fixture.register_opt(
                    cfg.StrOpt('connection'), group='placement_database')
            except cfg.DuplicateOptError:
                # already registered
                pass
            conf_fixture.config(connection='sqlite://',
                                group='placement_database')
        self.conf_fixture = conf_fixture
        self.get_engine = placement_db.get_placement_engine
        placement_db.configure(self.conf_fixture.conf)

    def get_enginefacade(self):
        """Return the enginefacade for this database."""
        return placement_db.placement_context_manager

    def generate_schema_create_all(self, engine):
        """Create database schema.
        
        Called by oslo_db fixtures during setup.
        Uses Alembic to create schema from models.
        """
        # Create schema using Alembic
        migration.create_schema(engine)

        # Patch engine into enginefacade early
        # (oslo_db will patch it later, but we need it now)
        _reset_facade = placement_db.placement_context_manager.patch_engine(
            engine)
        self.addCleanup(_reset_facade)

        # Reset sync flags
        self.addCleanup(self.cleanup)
        self.cleanup()

        # Sync traits and resource classes
        # This populates standard resource classes and traits
        from placement import deploy
        deploy.update_database(self.conf_fixture.conf)

    def cleanup(self):
        """Reset sync flags for standard traits/resource classes."""
        trait._TRAITS_SYNCED = False
        resource_class._RESOURCE_CLASSES_SYNCED = False

Key Points:

  • Inherits from oslo_db.sqlalchemy.test_fixtures
  • Uses sqlite:// (in-memory) for speed
  • Creates schema via Alembic migration
  • Syncs standard resource classes/traits
  • Cleans up between tests

2. Functional Test Base Class

File: placement/tests/functional/base.py

"""Base test case for placement functional tests."""

from oslo_config import cfg
from oslo_config import fixture as config_fixture
from oslo_log.fixture import logging_error
from oslotest import output
import testtools

from placement import conf
from placement import context
from placement.tests import fixtures
from placement.tests.functional.fixtures import capture
from placement.tests.unit import policy_fixture


class TestCase(testtools.TestCase):
    """A base test case for placement functional tests.

    Sets up minimum configuration for database and policy handling
    and establishes the placement database.
    """

    USES_DB = True

    def setUp(self):
        super(TestCase, self).setUp()

        # Manage required configuration
        self.conf_fixture = self.useFixture(
            config_fixture.Config(cfg.ConfigOpts()))
        conf.register_opts(self.conf_fixture.conf)
        
        if self.USES_DB:
            self.placement_db = self.useFixture(fixtures.Database(
                self.conf_fixture, set_config=True))
        else:
            self.conf_fixture.config(
                connection='sqlite://',
                group='placement_database',
            )
        self.conf_fixture.conf([], default_config_files=[])

        # Set up policy
        self.useFixture(policy_fixture.PolicyFixture(self.conf_fixture))

        # Set up logging and output capture
        self.useFixture(capture.Logging())
        self.useFixture(output.CaptureOutput())
        self.useFixture(capture.WarningsFixture())
        self.useFixture(logging_error.get_logging_handle_error_fixture())

        # Create a request context
        self.context = context.RequestContext()
        self.context.config = self.conf_fixture.conf


class NoDBTestCase(TestCase):
    """Test case without database."""
    USES_DB = False

Key Points:

  • Uses testtools.TestCase as base (OpenStack standard)
  • Sets up oslo.config fixture with isolated config
  • Creates database fixture with USES_DB flag
  • Sets up logging, output capture, warnings
  • Creates request context for database operations
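
As a usage sketch, a database-level functional test only needs to subclass this TestCase and use self.context. The test below is hypothetical, but modeled on the tests under functional/db/:

from oslo_utils.fixture import uuidsentinel as uuids

from placement.objects import resource_provider as rp_obj
from placement.tests.functional import base


class ResourceProviderTestCase(base.TestCase):

    def test_create_and_get(self):
        # self.context carries the config and database handles set up
        # by the base class fixtures
        rp = rp_obj.ResourceProvider(
            self.context, name='an-rp', uuid=uuids.an_rp)
        rp.create()
        found = rp_obj.ResourceProvider.get_by_uuid(
            self.context, uuids.an_rp)
        self.assertEqual('an-rp', found.name)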

3. Capture Fixtures (Logging and Warnings)

File: placement/tests/functional/fixtures/capture.py

"""Fixtures for capturing logs and filtering warnings."""

import logging
import warnings

import fixtures
from oslotest import log
from sqlalchemy import exc as sqla_exc


class NullHandler(logging.Handler):
    """Custom NullHandler that formats records.
    
    Used to detect formatting errors in debug logs even when
    logs aren't captured.
    """

    def handle(self, record):
        self.format(record)

    def emit(self, record):
        pass

    def createLock(self):
        self.lock = None


class Logging(log.ConfigureLogging):
    """Logging fixture for tests.
    
    - Captures logs for later inspection
    - Ensures DEBUG logs are formatted even if not captured
    """

    def __init__(self):
        super(Logging, self).__init__()
        # Default to INFO if not otherwise set
        if self.level is None:
            self.level = logging.INFO

    def setUp(self):
        super(Logging, self).setUp()
        if self.level > logging.DEBUG:
            handler = NullHandler()
            self.useFixture(fixtures.LogHandler(handler, nuke_handlers=False))
            handler.setLevel(logging.DEBUG)


class WarningsFixture(fixtures.Fixture):
    """Filter or escalate certain warnings during test runs.
    
    Add additional entries as required. Remove when obsolete.
    """

    def setUp(self):
        super(WarningsFixture, self).setUp()

        self._original_warning_filters = warnings.filters[:]

        warnings.simplefilter("once", DeprecationWarning)

        # Ignore policy scope warnings
        warnings.filterwarnings(
            'ignore',
            message="Policy .* failed scope check",
            category=UserWarning)

        # Escalate invalid UUID warnings to errors
        warnings.filterwarnings('error', message=".*invalid UUID.*")

        # Prevent introducing unmapped columns
        warnings.filterwarnings(
            'error',
            category=sqla_exc.SAWarning)

        # Configure SQLAlchemy warnings
        warnings.filterwarnings(
            'ignore',
            category=sqla_exc.SADeprecationWarning)

        warnings.filterwarnings(
            'error',
            module='placement',
            category=sqla_exc.SADeprecationWarning)

        self.addCleanup(self._reset_warning_filters)

    def _reset_warning_filters(self):
        warnings.filters[:] = self._original_warning_filters

Key Points:

  • Logging: Captures logs and ensures DEBUG logs are formatted
  • WarningsFixture: Filters/escalates warnings to catch issues early
  • Both clean up after themselves

4. Gabbi Fixtures (API Setup)

File: placement/tests/functional/fixtures/gabbits.py

"""Gabbi fixtures for placement API testing."""

import os

from gabbi import fixture
import os_resource_classes as orc
import os_traits as ot
from oslo_config import cfg
from oslo_config import fixture as config_fixture
from oslo_log.fixture import logging_error
from oslo_policy import opts as policy_opts
from oslo_utils.fixture import uuidsentinel as uuids
from oslo_utils import uuidutils
from oslotest import output

from placement import conf
from placement import context
from placement import deploy
from placement.tests import fixtures
from placement.tests.functional.fixtures import capture
from placement.tests.unit import policy_fixture


# Global CONF for loadapp workaround
# (gabbi limitations require this)
CONF = None


def setup_app():
    """App factory for gabbi.
    
    Called by gabbi to get the WSGI application under test.
    """
    global CONF
    return deploy.loadapp(CONF)


class APIFixture(fixture.GabbiFixture):
    """Setup the required backend fixtures for placement service.
    
    This is the base fixture for Gabbi tests. It:
    - Sets up configuration
    - Creates database
    - Initializes policy
    - Sets environment variables for test data
    - Creates the WSGI application
    """

    # For secure RBAC testing (optional)
    _secure_rbac = False

    def start_fixture(self):
        """Called once before any tests in a YAML file run."""
        global CONF
        
        # Set up logging and output capture
        self.standard_logging_fixture = capture.Logging()
        self.standard_logging_fixture.setUp()
        self.output_stream_fixture = output.CaptureOutput()
        self.output_stream_fixture.setUp()
        self.logging_error_fixture = (
            logging_error.get_logging_handle_error_fixture())
        self.logging_error_fixture.setUp()
        self.warnings_fixture = capture.WarningsFixture()
        self.warnings_fixture.setUp()

        # Create isolated config (don't use global CONF)
        self.conf_fixture = config_fixture.Config(cfg.ConfigOpts())
        self.conf_fixture.setUp()
        conf.register_opts(self.conf_fixture.conf)
        
        # Configure API with no auth
        self.conf_fixture.config(group='api', auth_strategy='noauth2')
        self.conf_fixture.config(
            group='oslo_policy',
            enforce_scope=self._secure_rbac,
            enforce_new_defaults=self._secure_rbac,
        )

        # Set up database
        self.placement_db_fixture = fixtures.Database(
            self.conf_fixture, set_config=True)
        self.placement_db_fixture.setUp()

        # Create context for fixture data creation
        self.context = context.RequestContext()
        self.context.config = self.conf_fixture.conf

        # Set default policy opts
        policy_opts.set_defaults(self.conf_fixture.conf)

        # Empty config files list (don't read /etc/placement/placement.conf)
        self.conf_fixture.conf([], default_config_files=[])

        # Turn on policy fixture
        self.policy_fixture = policy_fixture.PolicyFixture(
            self.conf_fixture)
        self.policy_fixture.setUp()

        # Set up environment variables for use in YAML tests
        # These are substituted into test data via $ENVIRON['VAR_NAME']
        os.environ['RP_UUID'] = uuidutils.generate_uuid()
        os.environ['RP_NAME'] = uuidutils.generate_uuid()
        os.environ['RP_UUID1'] = uuidutils.generate_uuid()
        os.environ['RP_NAME1'] = uuidutils.generate_uuid()
        os.environ['RP_UUID2'] = uuidutils.generate_uuid()
        os.environ['RP_NAME2'] = uuidutils.generate_uuid()
        os.environ['CUSTOM_RES_CLASS'] = 'CUSTOM_IRON_NFV'
        os.environ['PROJECT_ID'] = uuidutils.generate_uuid()
        os.environ['USER_ID'] = uuidutils.generate_uuid()
        os.environ['CONSUMER_UUID'] = uuidutils.generate_uuid()
        # ... etc.
        
        # Store config globally for setup_app()
        CONF = self.conf_fixture.conf

    def stop_fixture(self):
        """Called after all tests in a YAML file complete."""
        global CONF
        # Clean up all fixtures in reverse order
        self.placement_db_fixture.cleanUp()
        self.warnings_fixture.cleanUp()
        self.output_stream_fixture.cleanUp()
        self.standard_logging_fixture.cleanUp()
        self.logging_error_fixture.cleanUp()
        self.policy_fixture.cleanUp()
        self.conf_fixture.cleanUp()
        CONF = None

Key Points:

  • Extends gabbi.fixture.GabbiFixture
  • start_fixture(): Called once before tests in a YAML file
  • stop_fixture(): Called once after tests in a YAML file
  • Sets up config, database, policy, logging
  • Populates os.environ with UUIDs for test data
  • Uses global CONF to work around gabbi limitation

5. Specialized Fixtures

Example: AllocationFixture (pre-creates test data):

class AllocationFixture(APIFixture):
    """An APIFixture that has some pre-made Allocations.
    
    Pre-creates:
    - Resource provider with VCPU and DISK_GB inventory
    - Two consumers with allocations
    - Users and projects
    """

    def start_fixture(self):
        # Call parent to set up base infrastructure
        super(AllocationFixture, self).start_fixture()

        # Create additional environment variables
        os.environ['ALT_USER_ID'] = uuidutils.generate_uuid()
        project_id = os.environ['PROJECT_ID']
        user_id = os.environ['USER_ID']
        alt_user_id = os.environ['ALT_USER_ID']

        # Create user and project objects in database
        from placement.objects import user as user_obj
        from placement.objects import project as project_obj
        
        user = user_obj.User(self.context, external_id=user_id)
        user.create()
        alt_user = user_obj.User(self.context, external_id=alt_user_id)
        alt_user.create()
        project = project_obj.Project(self.context, external_id=project_id)
        project.create()

        # Create resource provider with inventory
        from placement.tests.functional.db import test_base as tb
        
        rp_name = os.environ['RP_NAME']
        rp_uuid = os.environ['RP_UUID']
        rp = tb.create_provider(self.context, rp_name, uuid=rp_uuid)
        tb.add_inventory(rp, 'DISK_GB', 2048,
                         step_size=10, min_unit=10, max_unit=1000)
        tb.add_inventory(rp, 'VCPU', 10, max_unit=10)

        # Create consumers with allocations
        consumer1 = tb.ensure_consumer(self.context, user, project)
        tb.set_allocation(self.context, rp, consumer1, {'DISK_GB': 1000})
        os.environ['CONSUMER_0'] = consumer1.uuid

        consumer2 = tb.ensure_consumer(self.context, user, project)
        tb.set_allocation(self.context, rp, consumer2, {'VCPU': 6})
        os.environ['CONSUMER_ID'] = consumer2.uuid

Key Points:

  • Extends APIFixture for base setup
  • Uses helper functions from test_base.py
  • Pre-creates complex test scenarios
  • Stores IDs in environment variables for YAML tests

Database Setup

Database Configuration Flow

  1. Configuration Registration: placement/conf/database.py (sketched below)
  2. Database API: placement/db_api.py (enginefacade setup)
  3. Migration: placement/db/sqlalchemy/migration.py (Alembic)
  4. Models: placement/db/sqlalchemy/models.py (SQLAlchemy)
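
Step 1 amounts to registering an option group with oslo.config. A simplified sketch (not the verbatim placement module):

from oslo_config import cfg

placement_db_group = cfg.OptGroup(
    'placement_database', title='Placement database options')

placement_db_opts = [
    cfg.StrOpt('connection', secret=True,
               help='SQLAlchemy connection string for the placement database.'),
    cfg.BoolOpt('sync_on_startup', default=False,
                help='Run database migrations automatically at startup.'),
]


def register_opts(conf):
    conf.register_group(placement_db_group)
    conf.register_opts(placement_db_opts, group=placement_db_group)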

Key File: db_api.py

"""Database context manager for placement database connection."""

from oslo_db.sqlalchemy import enginefacade
from oslo_log import log as logging

LOG = logging.getLogger(__name__)

# Global enginefacade transaction context manager
placement_context_manager = enginefacade.transaction_context()


def _get_db_conf(conf_group):
    """Extract db config from oslo.config group."""
    conf_dict = dict(conf_group.items())
    # Remove non-enginefacade settings
    conf_dict.pop('sync_on_startup', None)
    return conf_dict


def configure(conf):
    """Configure the database connection.
    
    Called once at application startup (or fixture setup).
    """
    placement_context_manager.configure(
        **_get_db_conf(conf.placement_database))


def get_placement_engine():
    """Get the database engine."""
    return placement_context_manager.writer.get_engine()


@enginefacade.transaction_context_provider
class DbContext(object):
    """Stub class for db session handling outside of web requests."""

Configuration Options (automatically registered):

[placement_database]
connection = sqlite://  # In-memory for tests
# connection = mysql+pymysql://user:pass@host/dbname  # Production
sync_on_startup = True  # Run migrations on startup

Schema Creation

File: placement/db/sqlalchemy/migration.py

"""Database migration using Alembic."""

import os
import alembic
from alembic import config as alembic_config

from placement.db.sqlalchemy import models


def create_schema(engine=None):
    """Create schema from models, without a migration.
    
    Used in tests to quickly create schema.
    """
    base = models.BASE
    if engine is None:
        from placement import db_api as placement_db
        engine = placement_db.get_placement_engine()
    base.metadata.create_all(engine)


def upgrade(revision, config=None):
    """Run Alembic migrations.
    
    :param revision: Target revision ('head' for latest)
    """
    revision = revision or "head"
    config = config or _alembic_config()
    alembic.command.upgrade(config, revision)


def _alembic_config():
    """Get Alembic configuration."""
    path = os.path.join(os.path.dirname(__file__), "alembic.ini")
    config = alembic_config.Config(path)
    return config

Alembic Directory Structure:

placement/db/sqlalchemy/
├── alembic.ini          # Alembic configuration
├── alembic/
│   ├── env.py           # Alembic environment
│   ├── script.py.mako   # Migration template
│   └── versions/        # Migration scripts
│       ├── b4ed3a175331_initial.py
│       └── ...
├── migration.py         # Migration helpers
└── models.py           # SQLAlchemy models

Helper Functions

File: placement/tests/functional/db/test_base.py

"""Helper functions for creating test data in functional tests."""

from oslo_utils import uuidutils
from oslo_utils.fixture import uuidsentinel as uuids

from placement import exception
from placement.objects import allocation as alloc_obj
from placement.objects import consumer as consumer_obj
from placement.objects import inventory as inv_obj
from placement.objects import project as project_obj
from placement.objects import resource_class as rc_obj
from placement.objects import resource_provider as rp_obj
from placement.objects import trait as trait_obj
from placement.objects import user as user_obj


def create_provider(context, name, *aggs, **kwargs):
    """Create a resource provider.
    
    :param context: RequestContext
    :param name: Provider name
    :param aggs: Aggregate UUIDs to associate
    :param kwargs: 
        - parent: Parent provider UUID
        - uuid: Provider UUID (generated if not provided)
    :returns: ResourceProvider object
    """
    parent = kwargs.get('parent')
    uuid = kwargs.get('uuid', getattr(uuids, name))
    rp = rp_obj.ResourceProvider(context, name=name, uuid=uuid)
    if parent:
        rp.parent_provider_uuid = parent
    rp.create()
    if aggs:
        rp.set_aggregates(aggs)
    return rp


def add_inventory(rp, rc, total, **kwargs):
    """Add inventory to a resource provider.
    
    :param rp: ResourceProvider object
    :param rc: Resource class name (e.g., 'VCPU')
    :param total: Total amount of resource
    :param kwargs: Optional inventory attributes
        - max_unit, min_unit, step_size, reserved, allocation_ratio
    """
    ensure_rc(rp._context, rc)
    kwargs.setdefault('max_unit', total)
    inv = inv_obj.Inventory(
        rp._context, resource_provider=rp,
        resource_class=rc, total=total, **kwargs)
    rp.add_inventory(inv)
    return inv


def set_traits(rp, *traits):
    """Set traits on a resource provider.
    
    :param rp: ResourceProvider object
    :param traits: Trait names (created if don't exist)
    """
    tlist = []
    for tname in traits:
        try:
            trait = trait_obj.Trait.get_by_name(rp._context, tname)
        except exception.TraitNotFound:
            trait = trait_obj.Trait(rp._context, name=tname)
            trait.create()
        tlist.append(trait)
    rp.set_traits(tlist)
    return tlist


def ensure_consumer(ctx, user, project, consumer_id=None):
    """Get or create a consumer.
    
    :param ctx: RequestContext
    :param user: User object
    :param project: Project object
    :param consumer_id: Consumer UUID (generated if None)
    :returns: Consumer object
    """
    consumer_id = consumer_id or uuidutils.generate_uuid()
    try:
        consumer = consumer_obj.Consumer.get_by_uuid(ctx, consumer_id)
    except exception.NotFound:
        consumer = consumer_obj.Consumer(
            ctx, uuid=consumer_id, user=user, project=project)
        consumer.create()
    return consumer


def set_allocation(ctx, rp, consumer, rc_used_dict):
    """Set allocations for a consumer.
    
    :param ctx: RequestContext
    :param rp: ResourceProvider object
    :param consumer: Consumer object
    :param rc_used_dict: Dict of {resource_class: amount}
    :returns: List of Allocation objects
    """
    alloc = [
        alloc_obj.Allocation(
            resource_provider=rp, resource_class=rc,
            consumer=consumer, used=used)
        for rc, used in rc_used_dict.items()
    ]
    alloc_obj.replace_all(ctx, alloc)
    return alloc


def create_user_and_project(ctx, prefix='fake'):
    """Create user and project objects.
    
    :param ctx: RequestContext
    :param prefix: Prefix for external IDs
    :returns: (User, Project) tuple
    """
    user = user_obj.User(ctx, external_id='%s-user' % prefix)
    user.create()
    proj = project_obj.Project(ctx, external_id='%s-project' % prefix)
    proj.create()
    return user, proj


def ensure_rc(context, name):
    """Ensure resource class exists.
    
    :param context: RequestContext
    :param name: Resource class name
    """
    try:
        rc_obj.ResourceClass.get_by_name(context, name)
    except exception.NotFound:
        rc_obj.ResourceClass(context, name=name).create()

API Application Initialization

WSGI Application Stack

File: placement/deploy.py

"""Deployment handling for Placement API."""

from microversion_parse import middleware as mp_middleware
import oslo_middleware

from placement import auth
from placement import fault_wrap
from placement import handler
from placement import microversion
from placement import policy
from placement import requestlog
from placement import util


def deploy(conf):
    """Assemble the middleware pipeline leading to the placement app.
    
    Pipeline (inside to outside):
    1. PlacementHandler (core application)
    2. MicroversionMiddleware (API versioning)
    3. FaultWrapper (exception handling)
    4. PlacementKeystoneContext (auth context)
    5. Auth middleware (noauth2 or keystonemiddleware)
    6. CORS middleware (optional)
    7. RequestLog (logging)
    8. HTTPProxyToWSGI (proxy header handling)
    """
    # Auth strategy
    if conf.api.auth_strategy == 'noauth2':
        auth_middleware = auth.NoAuthMiddleware
    else:
        auth_middleware = auth.filter_factory(
            {}, oslo_config_config=conf)

    # CORS (optional)
    if conf.cors.allowed_origin:
        cors_middleware = oslo_middleware.CORS.factory(
            {}, **conf.cors)
    else:
        cors_middleware = None

    # Core application
    application = handler.PlacementHandler(config=conf)

    # Microversion middleware
    application = mp_middleware.MicroversionMiddleware(
        application, microversion.SERVICE_TYPE, microversion.VERSIONS,
        json_error_formatter=util.json_error_formatter)

    # Middleware stack (inside to outside)
    for middleware in (fault_wrap.FaultWrapper,
                       auth.PlacementKeystoneContext,
                       auth_middleware,
                       cors_middleware,
                       requestlog.RequestLog,
                       oslo_middleware.HTTPProxyToWSGI):
        if middleware:
            application = middleware(application)

    return application


def update_database(conf):
    """Run database migrations and sync standard data."""
    from placement.db.sqlalchemy import migration
    from placement import db_api
    from placement.objects import trait
    from placement.objects import resource_class
    
    if conf.placement_database.sync_on_startup:
        migration.upgrade('head')
    
    ctx = db_api.DbContext()
    trait.ensure_sync(ctx)
    resource_class.ensure_sync(ctx)


def loadapp(config, project_name=None):
    """WSGI application creator for placement.
    
    :param config: An oslo_config.cfg.ConfigOpts
    :param project_name: Ignored (backwards compatibility)
    :returns: WSGI application
    """
    application = deploy(config)
    policy.init(config)
    update_database(config)
    return application

Key Points:

  • loadapp() is the main entry point
  • deploy() assembles middleware stack
  • update_database() syncs standard traits/resource classes
  • Middleware wraps from inside out

Gabbi Test Structure

Gabbi Test Loader

File: placement/tests/functional/test_api.py

"""Gabbi test loader for placement API tests."""

import os
from oslotest import output
import wsgi_intercept

from gabbi import driver

from placement.tests.functional.fixtures import capture
from placement.tests.functional.fixtures import gabbits as fixtures

# Enforce native str for response headers
wsgi_intercept.STRICT_RESPONSE_HEADERS = True
TESTS_DIR = 'gabbits'


def load_tests(loader, tests, pattern):
    """Provide a TestSuite to the discovery process.
    
    This is a standard Python unittest load_tests protocol.
    Called by test runners (stestr, unittest discover).
    
    :param loader: unittest.TestLoader
    :param tests: Existing TestSuite (ignored)
    :param pattern: Pattern for test discovery (ignored)
    :returns: TestSuite containing Gabbi tests
    """
    test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
    
    # Per-test fixtures (for clean output/logging)
    inner_fixtures = [
        output.CaptureOutput,
        capture.Logging,
    ]
    
    # Build test suite from YAML files
    return driver.build_tests(
        test_dir,                          # Directory with YAML files
        loader,                            # unittest.TestLoader
        host=None,                         # No real host (wsgi-intercept)
        test_loader_name=__name__,         # Module name for test naming
        intercept=fixtures.setup_app,      # App factory function
        inner_fixtures=inner_fixtures,     # Per-test fixtures
        fixture_module=fixtures            # Module with GabbiFixture classes
    )

Key Parameters:

  • test_dir: Directory containing YAML files
  • intercept: Function returning WSGI app (uses wsgi-intercept)
  • fixture_module: Module containing GabbiFixture subclasses
  • inner_fixtures: Applied to each individual test
  • test_loader_name: Used to name generated test methods

Gabbi YAML Structure

File: placement/tests/functional/gabbits/basic-http.yaml

# Basic HTTP behavior tests for placement API

fixtures:
    - APIFixture  # Uses APIFixture from fixtures/gabbits.py

defaults:
    request_headers:
        x-auth-token: admin  # NoAuth token
        accept: application/json

tests:
- name: 404 at no service
  GET: /barnabas
  status: 404
  response_json_paths:
      $.errors[0].title: Not Found

- name: error message has request id
  GET: /barnabas
  status: 404
  response_json_paths:
      $.errors[0].request_id: /req-[a-fA-F0-9-]+/

- name: 200 at home
  GET: /
  status: 200

- name: post good resource provider
  POST: /resource_providers
  request_headers:
    content-type: application/json
  data:
      name: $ENVIRON['RP_NAME']  # Environment variable substitution
      uuid: $ENVIRON['RP_UUID']
  status: 201
  response_headers:
      location: //resource_providers/[a-f0-9-]+/

- name: get resource provider
  GET: $LOCATION  # Uses Location header from previous test
  response_json_paths:
      $.uuid: $ENVIRON['RP_UUID']
      $.name: $ENVIRON['RP_NAME']
      $.generation: 0

YAML Structure:

  1. fixtures: List of GabbiFixture class names
  2. defaults: Default headers/settings for all tests
  3. tests: List of test cases (sequential)

Test Case Structure:

- name: descriptive name
  DESC: optional long description
  GET: /path  # Or POST, PUT, DELETE, PATCH
  request_headers:
      header-name: value
  data:  # Request body (JSON or string)
      key: value
  status: 200  # Expected status code
  response_headers:  # Expected headers (regex allowed)
      content-type: application/json
  response_forbidden_headers:  # Headers that must NOT be present
      - x-not-here
  response_json_paths:  # JSONPath assertions
      $.key: value
      $.list[0]: /regex/
  response_strings:  # String in response body
      - "expected string"
  poll:  # Optional: retry assertion
      count: 10
      delay: 0.1

Advanced Features:

  • Environment variables: $ENVIRON['VAR']
  • Prior test data: $LOCATION, $HISTORY['test name'].$LOCATION or .$RESPONSE['<json path>'] (sketch below)
  • JSONPath: Complex JSON assertions
  • Regex: /pattern/ in assertions
  • URL templates: $ENVIRON['UUID'] in URLs
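
For example, $HISTORY lets a later test refer back to any earlier test in the same file, not only the immediately preceding one. A sketch reusing the resource-provider requests shown above:

fixtures:
    - APIFixture

defaults:
    request_headers:
        x-auth-token: admin
        accept: application/json

tests:
- name: create provider
  POST: /resource_providers
  request_headers:
      content-type: application/json
  data:
      name: $ENVIRON['RP_NAME']
      uuid: $ENVIRON['RP_UUID']
  status: 201

- name: unrelated request in between
  GET: /
  status: 200

- name: revisit the provider created two tests ago
  GET: $HISTORY['create provider'].$LOCATION
  status: 200
  response_json_paths:
      $.uuid: $ENVIRON['RP_UUID']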

Example: Complex Test with Pre-Created Data

fixtures:
    - AllocationFixture  # Has pre-created allocations

tests:
- name: get usages
  GET: /resource_providers/$ENVIRON['RP_UUID']/usages
  response_json_paths:
      $.usages.VCPU: 6
      $.usages.DISK_GB: 1000
      $.resource_provider_generation: 2

- name: delete consumer
  DELETE: /allocations/$ENVIRON['CONSUMER_ID']
  status: 204

- name: verify usages updated
  GET: /resource_providers/$ENVIRON['RP_UUID']/usages
  response_json_paths:
      $.usages.VCPU: 0
      $.usages.DISK_GB: 1000

Test Execution Configuration

stestr Configuration

File: .stestr.conf

[DEFAULT]
test_path=./placement/tests/unit
top_dir=./

# Gabbi test grouping
# Ensures tests from the same YAML file run in the same process
# (maintains test ordering within a file)
group_regex=placement\.tests\.functional\.test_api(?:\.|_)([^_]+)

Key Points:

  • group_regex: Groups tests by YAML file
  • Pattern captures YAML filename from test name
  • Tests in same group run serially in same process
  • Different groups run in parallel

How it Works:

Gabbi-generated test IDs look like: placement.tests.functional.test_api.basic-http_404_at_no_service.test_request

The regex captures basic-http, the YAML file name (file names use dashes, so the capture stops at the underscore joining the file name to the test name).

All tests with the same capture group run together, in file order, in a single worker.
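
A quick way to sanity-check the regex against a test ID (the ID below is representative, not copied from a real run):

import re

# group_regex from .stestr.conf
GROUP_RE = re.compile(
    r"placement\.tests\.functional\.test_api(?:\.|_)([^_]+)")

# representative gabbi-generated test ID for a test in basic-http.yaml
test_id = ("placement.tests.functional.test_api."
           "basic-http_404_at_no_service.test_request")

# every test generated from basic-http.yaml yields the same group key
print(GROUP_RE.search(test_id).group(1))  # basic-http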

Policy Fixture

File: placement/tests/unit/policy_fixture.py

"""Policy fixture for tests."""

import copy
import fixtures
from oslo_policy import policy as oslo_policy

from placement.conf import paths
from placement import policies
from placement import policy as placement_policy


class PolicyFixture(fixtures.Fixture):
    """Load the default placement policy for tests."""

    def __init__(self, conf_fixture):
        self.conf_fixture = conf_fixture
        super(PolicyFixture, self).__init__()

    def setUp(self):
        super(PolicyFixture, self).setUp()
        
        # Set policy file location
        policy_file = paths.state_path_def('etc/placement/policy.yaml')
        self.conf_fixture.config(
            group='oslo_policy', policy_file=policy_file)
        
        # Reset policy
        placement_policy.reset()
        
        # Initialize with default rules
        # (copy to avoid oslo.policy modifying originals)
        placement_policy.init(
            self.conf_fixture.conf,
            suppress_deprecation_warnings=True,
            rules=copy.deepcopy(policies.list_rules()))
        
        self.addCleanup(placement_policy.reset)

    @staticmethod
    def set_rules(rules, overwrite=True):
        """Override policy rules for testing.
        
        :param rules: dict of action=rule mappings
        :param overwrite: Whether to replace all rules
        """
        enforcer = placement_policy.get_enforcer()
        enforcer.set_rules(
            oslo_policy.Rules.from_dict(rules),
            overwrite=overwrite)
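
A brief, hypothetical sketch of overriding a rule inside a test that already uses this fixture (the policy action name is illustrative):

from placement import policy as placement_policy
from placement.tests.functional import base
from placement.tests.unit import policy_fixture


class PolicyOverrideTest(base.TestCase):

    def test_rule_can_be_denied(self):
        # deny the (illustrative) action for every caller
        policy_fixture.PolicyFixture.set_rules(
            {'placement:resource_providers:list': '!'},
            overwrite=False)
        enforcer = placement_policy.get_enforcer()
        self.assertFalse(enforcer.enforce(
            'placement:resource_providers:list',
            {}, self.context.to_policy_values()))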

Tox Configuration

File: tox.ini

[tox]
minversion = 4.6.0
envlist = py3,functional,pep8

[testenv]
usedevelop = True
allowlist_externals =
  bash
  rm
  env
install_command = python -I -m pip install -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master} {opts} {packages}
setenv =
  VIRTUAL_ENV={envdir}
  LANGUAGE=en_US
  LC_ALL=en_US.utf-8
  OS_STDOUT_CAPTURE=1
  OS_STDERR_CAPTURE=1
  OS_LOG_CAPTURE = {env:OS_LOG_CAPTURE:1}
  OS_TEST_TIMEOUT=160
  PYTHONDONTWRITEBYTECODE=1
deps = -r{toxinidir}/test-requirements.txt
commands =
  stestr run {posargs}
passenv =
  OS_LOG_CAPTURE
  OS_DEBUG
  GENERATE_HASHES

# Functional tests environment
[testenv:functional{,-py39,-py310,-py311,-py312}]
commands =
  stestr --test-path=./placement/tests/functional run {posargs}

[testenv:pep8]
description =
  Run style checks.
skip_install = true
deps =
    pre-commit
commands =
    pre-commit run --all-files --show-diff-on-failure

[testenv:cover]
setenv =
  {[testenv]setenv}
  PYTHON=coverage run --source placement --parallel-mode
commands =
  coverage erase
  stestr --test-path=./placement/tests run {posargs}
  coverage combine
  coverage html -d cover
  coverage xml -o cover/coverage.xml
  coverage report

Key Points:

  • functional testenv: Sets --test-path for functional tests
  • stestr: Parallel test runner
  • Environment variables control output capture
  • Python version variants supported

Running Tests:

# All functional tests
tox -e functional

# Tests from a single YAML file (the argument is a regex filter on test IDs)
tox -e functional -- basic-http

# A single test
tox -e functional -- basic-http_404_at_no_service

# With debugging
OS_LOG_CAPTURE=0 tox -e functional -- test_api

# Parallel workers
tox -e functional -- --concurrency 4

Zuul CI/CD Integration

File: .zuul.yaml

- project:
    templates:
      - check-requirements
      - integrated-gate-placement
      - openstack-cover-jobs
      - openstack-python3-jobs
      - periodic-stable-jobs
      - publish-openstack-docs-pti
      - release-notes-jobs-python3
    check:
      jobs:
        - openstack-tox-functional-py310
        - openstack-tox-functional-py312
        - openstack-tox-pep8
        - placement-nova-tox-functional-py312
        - tempest-integrated-placement
    gate:
      jobs:
        - openstack-tox-functional-py310
        - openstack-tox-functional-py312
        - openstack-tox-pep8
        - placement-nova-tox-functional-py312
        - tempest-integrated-placement

- job:
    name: placement-nova-tox-functional-py312
    parent: nova-tox-functional-py312
    description: |
        Run the nova functional tests to confirm that we aren't breaking
        the PlacementFixture.
    vars:
        tox_envlist: functional-without-sample-db-tests

Key Points:

  • openstack-tox-functional-*: Runs tox -e functional
  • placement-nova-tox-functional-*: Tests PlacementFixture used by Nova
  • tempest-integrated-placement: Integration tests
  • Runs on check (pre-merge) and gate (post-merge)

Complete Working Examples

Example 1: Minimal Setup

Directory Structure:

myproject/
├── myproject/
│   ├── __init__.py
│   ├── api.py          # WSGI application
│   ├── conf.py         # Configuration
│   ├── db_api.py       # Database setup
│   ├── db/
│   │   └── sqlalchemy/
│   │       └── models.py  # SQLAlchemy models
│   └── tests/
│       ├── __init__.py
│       ├── fixtures.py  # Database fixture
│       └── functional/
│           ├── __init__.py
│           ├── base.py
│           ├── test_api.py
│           ├── fixtures/
│           │   ├── __init__.py
│           │   ├── capture.py
│           │   └── gabbits.py
│           └── gabbits/
│               └── basic.yaml
├── setup.py
├── setup.cfg
├── tox.ini
├── .stestr.conf
└── test-requirements.txt

1. test-requirements.txt

fixtures>=3.0.0
gabbi>=1.35.0
oslotest>=3.5.0
stestr>=1.0.0
testtools>=2.2.0
wsgi-intercept>=1.7.0
oslo.config>=6.7.0
oslo.db>=8.6.0
oslo.log>=4.3.0
SQLAlchemy>=1.4.0

2. myproject/db_api.py

"""Database API setup."""

from oslo_db.sqlalchemy import enginefacade

context_manager = enginefacade.transaction_context()


def configure(conf):
    """Configure database connection."""
    context_manager.configure(
        connection=conf.database.connection,
        sqlite_fk=True,
    )


def get_engine():
    """Get database engine."""
    return context_manager.writer.get_engine()


@enginefacade.transaction_context_provider
class DbContext(object):
    """Database context for operations outside web requests."""

3. myproject/api.py

"""WSGI API application."""

import webob
import webob.dec
import webob.exc


class MyAPI(object):
    """Simple WSGI application."""

    def __init__(self, config):
        self.config = config

    @webob.dec.wsgify
    def __call__(self, req):
        """Handle WSGI request."""
        if req.path_info == '/':
            return webob.Response(
                json_body={'version': '1.0'},
                content_type='application/json')
        elif req.path_info == '/items':
            if req.method == 'GET':
                return webob.Response(
                    json_body={'items': []},
                    content_type='application/json')
            elif req.method == 'POST':
                # Create item (echo the request body back)
                data = req.json
                return webob.Response(
                    status=201,
                    json_body={'item': data},
                    content_type='application/json')
        raise webob.exc.HTTPNotFound()


def loadapp(config):
    """Create WSGI application."""
    return MyAPI(config)

4. myproject/tests/fixtures.py

"""Test fixtures."""

from oslo_config import cfg
from oslo_db.sqlalchemy import test_fixtures

from myproject import db_api


class Database(test_fixtures.GeneratesSchema,
               test_fixtures.AdHocDbFixture):
    """Database fixture."""

    def __init__(self, conf_fixture, set_config=False):
        super(Database, self).__init__()
        if set_config:
            try:
                conf_fixture.register_opt(
                    cfg.StrOpt('connection'), group='database')
            except cfg.DuplicateOptError:
                # already registered by the project's conf module
                pass
            conf_fixture.config(
                connection='sqlite://',
                group='database')
        self.conf_fixture = conf_fixture
        self.get_engine = db_api.get_engine
        db_api.configure(self.conf_fixture.conf)

    def get_enginefacade(self):
        return db_api.context_manager

    def generate_schema_create_all(self, engine):
        # Create schema
        from myproject.db.sqlalchemy import models
        models.BASE.metadata.create_all(engine)
        
        # Patch engine into facade
        _reset = db_api.context_manager.patch_engine(engine)
        self.addCleanup(_reset)

5. myproject/tests/functional/base.py

"""Base test case."""

from oslo_config import cfg
from oslo_config import fixture as config_fixture
from oslotest import output
import testtools

from myproject import conf
from myproject.tests import fixtures
from myproject.tests.functional.fixtures import capture


class TestCase(testtools.TestCase):
    """Base functional test case."""

    def setUp(self):
        super(TestCase, self).setUp()

        # Config
        self.conf_fixture = self.useFixture(
            config_fixture.Config(cfg.ConfigOpts()))
        conf.register_opts(self.conf_fixture.conf)

        # Database
        self.db = self.useFixture(
            fixtures.Database(self.conf_fixture, set_config=True))

        # Logging/output
        self.useFixture(capture.Logging())
        self.useFixture(output.CaptureOutput())
        self.useFixture(capture.WarningsFixture())

6. myproject/tests/functional/fixtures/capture.py

"""Logging and warning fixtures."""

import logging
import warnings
import fixtures
from oslotest import log


class Logging(log.ConfigureLogging):
    """Logging fixture."""

    def __init__(self):
        super(Logging, self).__init__()
        if self.level is None:
            self.level = logging.INFO


class WarningsFixture(fixtures.Fixture):
    """Warning filter fixture."""

    def setUp(self):
        super(WarningsFixture, self).setUp()
        self._original = warnings.filters[:]
        warnings.simplefilter("once", DeprecationWarning)
        self.addCleanup(self._reset)

    def _reset(self):
        warnings.filters[:] = self._original

7. myproject/tests/functional/fixtures/gabbits.py

"""Gabbi fixtures."""

import os
from gabbi import fixture
from oslo_config import cfg
from oslo_config import fixture as config_fixture
from oslotest import output
from oslo_utils import uuidutils

from myproject import conf
from myproject import api
from myproject.tests import fixtures
from myproject.tests.functional.fixtures import capture

CONF = None


def setup_app():
    """Application factory for gabbi."""
    global CONF
    return api.loadapp(CONF)


class APIFixture(fixture.GabbiFixture):
    """Base API fixture for gabbi tests."""

    def start_fixture(self):
        global CONF

        # Logging
        self.logging = capture.Logging()
        self.logging.setUp()
        self.output = output.CaptureOutput()
        self.output.setUp()
        self.warnings = capture.WarningsFixture()
        self.warnings.setUp()

        # Config
        self.conf_fixture = config_fixture.Config(cfg.ConfigOpts())
        self.conf_fixture.setUp()
        conf.register_opts(self.conf_fixture.conf)
        self.conf_fixture.conf([], default_config_files=[])

        # Database
        self.db = fixtures.Database(self.conf_fixture, set_config=True)
        self.db.setUp()

        # Environment variables
        os.environ['ITEM_ID'] = uuidutils.generate_uuid()
        os.environ['ITEM_NAME'] = 'test-item'

        CONF = self.conf_fixture.conf

    def stop_fixture(self):
        global CONF
        self.db.cleanUp()
        self.warnings.cleanUp()
        self.output.cleanUp()
        self.logging.cleanUp()
        self.conf_fixture.cleanUp()
        CONF = None

8. myproject/tests/functional/test_api.py

"""Gabbi test loader."""

import os
from oslotest import output
import wsgi_intercept
from gabbi import driver

from myproject.tests.functional.fixtures import capture
from myproject.tests.functional.fixtures import gabbits as fixtures

wsgi_intercept.STRICT_RESPONSE_HEADERS = True
TESTS_DIR = 'gabbits'


def load_tests(loader, tests, pattern):
    """Load gabbi tests."""
    test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
    inner_fixtures = [
        output.CaptureOutput,
        capture.Logging,
    ]
    return driver.build_tests(
        test_dir, loader, host=None,
        test_loader_name=__name__,
        intercept=fixtures.setup_app,
        inner_fixtures=inner_fixtures,
        fixture_module=fixtures)

9. myproject/tests/functional/gabbits/basic.yaml

fixtures:
    - APIFixture

defaults:
    request_headers:
        accept: application/json

tests:
- name: get version
  GET: /
  status: 200
  response_json_paths:
      $.version: "1.0"

- name: get empty items
  GET: /items
  status: 200
  response_json_paths:
      $.items: []

- name: create item
  POST: /items
  request_headers:
      content-type: application/json
  data:
      id: $ENVIRON['ITEM_ID']
      name: $ENVIRON['ITEM_NAME']
  status: 201
  response_json_paths:
      $.item.id: $ENVIRON['ITEM_ID']
      $.item.name: $ENVIRON['ITEM_NAME']

10. .stestr.conf

[DEFAULT]
test_path=./myproject/tests/unit
top_dir=./
group_regex=myproject\.tests\.functional\.test_api(?:\.|_)([^_]+)

11. tox.ini

[tox]
minversion = 4.6.0
envlist = py3,functional,pep8

[testenv]
usedevelop = True
deps = -r{toxinidir}/test-requirements.txt
commands = stestr run {posargs}

[testenv:functional]
commands = stestr --test-path=./myproject/tests/functional run {posargs}

Running:

tox -e functional

Example 2: With Pre-Created Data

myproject/tests/functional/fixtures/gabbits.py (addition):

class ItemFixture(APIFixture):
    """Fixture with pre-created items."""

    def start_fixture(self):
        super(ItemFixture, self).start_fixture()

        # Create items in database using objects/db layer
        # (example assumes you have item creation logic)
        from myproject.db.sqlalchemy import models
        engine = self.db.get_engine()
        with engine.begin() as conn:
            conn.execute(
                models.items_table.insert(),
                [
                    {'id': os.environ['ITEM_ID'],
                     'name': os.environ['ITEM_NAME']},
                ]
            )

myproject/tests/functional/gabbits/with-items.yaml:

fixtures:
    - ItemFixture  # Uses ItemFixture instead of APIFixture

tests:
- name: get items
  GET: /items
  response_json_paths:
      $.items[0].id: $ENVIRON['ITEM_ID']
      $.items[0].name: $ENVIRON['ITEM_NAME']

Implementation Checklist

Phase 1: Project Setup

  • Create tests/fixtures.py with Database fixture
    • Inherit from oslo_db.sqlalchemy.test_fixtures
    • Implement generate_schema_create_all()
    • Configure SQLite connection
  • Create tests/functional/base.py with TestCase
    • Set up config fixture
    • Set up database fixture
    • Set up logging/output capture
  • Create tests/functional/fixtures/capture.py
    • Logging fixture
    • WarningsFixture

Phase 2: Gabbi Infrastructure

  • Install dependencies in test-requirements.txt
    • gabbi>=1.35.0
    • wsgi-intercept>=1.7.0
    • oslotest>=3.5.0
    • stestr>=1.0.0
  • Create tests/functional/fixtures/gabbits.py
    • setup_app() function
    • APIFixture class (extends gabbi.fixture.GabbiFixture)
    • start_fixture() method
    • stop_fixture() method
  • Create tests/functional/test_api.py
    • load_tests() function
    • Call gabbi.driver.build_tests()

Phase 3: Test Configuration

  • Create .stestr.conf
    • Set test_path
    • Set group_regex for Gabbi grouping
  • Update tox.ini
    • Add [testenv:functional] section
    • Set --test-path for functional tests
  • Create tests/functional/gabbits/ directory

Phase 4: Write Tests

  • Create first YAML test file
    • Declare fixtures
    • Set defaults
    • Write basic tests
  • Run tests: tox -e functional
  • Iterate on test coverage

Phase 5: Advanced Features (Optional)

  • Create specialized fixtures (e.g., with pre-created data)
  • Add helper functions (like test_base.py)
  • Add policy testing fixtures
  • Configure Zuul CI jobs
  • Add coverage reporting

Key Dependencies

test-requirements.txt:

# Core testing
fixtures>=3.0.0
testtools>=2.2.0
stestr>=1.0.0

# Oslo libraries
oslo.config>=6.7.0
oslo.db>=8.6.0
oslo.log>=4.3.0
oslotest>=3.5.0

# Gabbi and WSGI testing
gabbi>=1.35.0
wsgi-intercept>=1.7.0

# Database drivers
SQLAlchemy>=1.4.0

# Optional (for MySQL/PostgreSQL testing)
PyMySQL>=0.8.0
psycopg2-binary>=2.8

requirements.txt (production dependencies):

oslo.config>=6.7.0
oslo.db>=8.6.0
oslo.log>=4.3.0
oslo.utils>=4.5.0
SQLAlchemy>=1.4.0
WebOb>=1.8.2

Common Patterns

Pattern 1: Environment Variable Substitution

In fixture:

os.environ['RESOURCE_UUID'] = uuidutils.generate_uuid()

In YAML:

data:
    uuid: $ENVIRON['RESOURCE_UUID']

Pattern 2: Using Previous Test Results

- name: create resource
  POST: /resources
  status: 201
  response_headers:
      location: /resources/[a-f0-9-]+

- name: get resource
  GET: $LOCATION  # Uses Location header from previous test
  status: 200

Pattern 3: JSONPath Assertions

response_json_paths:
    $.items[0].id: abc123
    $.items[?name='foo'].id: bar456
    $.metadata.count: /\d+/  # Regex

Pattern 4: Fixture Inheritance

class BaseFixture(fixture.GabbiFixture):
    def start_fixture(self):
        # Base setup
        pass

class SpecializedFixture(BaseFixture):
    def start_fixture(self):
        super(SpecializedFixture, self).start_fixture()
        # Additional setup
        pass

Pattern 5: Multiple Fixtures Per YAML

fixtures:
    - APIFixture
    - CORSFixture  # Adds CORS configuration
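
Placement implements this by subclassing APIFixture; a rough sketch (the allowed origin value is illustrative):

class CORSFixture(APIFixture):
    """An APIFixture that also enables CORS handling."""

    def start_fixture(self):
        super(CORSFixture, self).start_fixture()
        # setting allowed_origin causes deploy() to add the
        # oslo_middleware CORS middleware to the stack
        self.conf_fixture.config(
            group='cors',
            allowed_origin='http://valid.example.com')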

Troubleshooting

Issue: Tests Not Discovered

Symptom: stestr run finds no tests

Solution:

  1. Check test_path in .stestr.conf
  2. Verify load_tests() function exists in test_api.py
  3. Check YAML files are in correct directory

Issue: Tests Run Out of Order

Symptom: Tests within a YAML file don't run sequentially

Solution:

  1. Check group_regex in .stestr.conf
  2. Ensure regex captures YAML filename
  3. Verify test names match pattern

Issue: Database Not Initialized

Symptom: Tests fail with database errors

Solution:

  1. Verify generate_schema_create_all() is called
  2. Check db_api.configure() is called
  3. Ensure models are imported before schema creation

Issue: Fixture Not Found

Symptom: NameError: name 'MyFixture' is not defined

Solution:

  1. Check fixture class is in fixtures/gabbits.py
  2. Verify fixture_module=fixtures in build_tests()
  3. Check fixture name in YAML matches class name

Issue: WSGI App Not Working

Symptom: Connection errors or 500 responses

Solution:

  1. Verify setup_app() returns WSGI application
  2. Check intercept=fixtures.setup_app in build_tests()
  3. Ensure wsgi_intercept.STRICT_RESPONSE_HEADERS = True

References

Placement Codebase

  • Main test directory: placement/tests/functional/
  • Fixtures: placement/tests/functional/fixtures/gabbits.py
  • Test loader: placement/tests/functional/test_api.py
  • Database fixture: placement/tests/fixtures.py
  • Helper functions: placement/tests/functional/db/test_base.py

Conclusion

The Placement Gabbi functional test infrastructure provides a robust, maintainable approach to HTTP API testing:

Benefits:

  • Declarative: Tests written in YAML
  • Fast: In-memory SQLite, in-process WSGI
  • Isolated: Each test file independent
  • Maintainable: Clear separation of concerns
  • Reusable: Fixtures shared across tests

Key Takeaways:

  1. Use oslo_db fixtures for database setup
  2. Use gabbi.fixture.GabbiFixture for API setup
  3. Use wsgi-intercept for in-process testing
  4. Group tests by YAML file with stestr regex
  5. Use environment variables for shared state

This pattern is production-tested in OpenStack Placement and consumed by Nova's functional tests through placement's PlacementFixture, demonstrating its robustness and flexibility.


Document Version: 1.0
Generated from: OpenStack Placement (review/balazs_gibizer/bug/2126751)
Date: 2025-10-07
