
Conversation


@codeflash-ai codeflash-ai bot commented Jan 22, 2026

📄 76% (0.76x) speedup for LocalInteraction._play in quantecon/game_theory/localint.py

⏱️ Runtime: 15.1 milliseconds → 8.54 milliseconds (best of 138 runs)

📝 Explanation and details

The optimized code achieves a 76% speedup (15.1ms → 8.54ms) by introducing a fast path for the most common case in the LocalInteraction._play method.

Key Optimization

Vectorized Best Response Computation: When tie_breaking='smallest' (the default and most common case), the optimization replaces individual Player.best_response() calls in a loop with a single vectorized matrix operation:

```python
# Original: Loop calling best_response for each player
for k, i in enumerate(player_ind):
    actions[i] = self.players[i].best_response(
        opponent_act_dict[k, :], tie_breaking=tie_breaking, ...
    )

# Optimized: Single vectorized computation
actions_onehot = np.eye(self.num_actions, dtype=int)[np.asarray(actions)]
opponent_act_dict = self.adj_matrix[player_ind].dot(actions_onehot)
payoffs = payoff_matrix @ opponent_act_dict.T
best_indices = (payoffs >= (max_vals - tol)).argmax(axis=0)
```
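
The excerpt above references `max_vals` without showing where it comes from; it is simply the per-player maximum of `payoffs`. A minimal, self-contained sketch of the same selection logic (with made-up inputs and local variable names, not the exact code merged in this PR) looks like this:

```python
import numpy as np

# Hypothetical stand-in inputs: payoff matrix, adjacency matrix, current
# actions, and the indices of the players being updated this round.
payoff_matrix = np.array([[2.0, 0.0],
                          [0.0, 1.0]])
adj_matrix = np.array([[0.0, 1.0, 0.0],
                       [1.0, 0.0, 1.0],
                       [0.0, 1.0, 0.0]])
actions = np.array([0, 1, 0])
player_ind = np.array([0, 1, 2])
num_actions = payoff_matrix.shape[0]
tol = 1e-8

# One-hot encode current actions, then aggregate each updated player's
# neighbors into a weighted count of opponent actions.
actions_onehot = np.eye(num_actions, dtype=int)[actions]
opponent_act_dist = adj_matrix[player_ind].dot(actions_onehot)

# Payoff of every own action against every player's opponent distribution,
# computed in one matrix product: shape (num_actions, len(player_ind)).
payoffs = payoff_matrix @ opponent_act_dist.T
max_vals = payoffs.max(axis=0)

# 'smallest' tie-breaking: argmax over the boolean mask returns the first
# (i.e. lowest-index) action whose payoff is within tol of the maximum.
best_actions = (payoffs >= (max_vals - tol)).argmax(axis=0)
print(best_actions)  # [1 0 1]: each player matches what its neighbors play
```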

Why This Is Faster

  1. Batch Processing: Computes payoffs for all players simultaneously using NumPy's efficient matrix operations instead of Python-level loops
  2. Reduced Function Call Overhead: Eliminates repeated calls to best_response(), payoff_vector(), and sparse matrix operations inside the loop
  3. Memory Access Patterns: Better cache locality from contiguous array operations versus scattered method calls
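
To make the batch-processing and call-overhead points concrete, the sketch below checks on randomly generated (hypothetical) inputs that one matrix product plus one argmax reproduces a per-player loop of `Player.best_response(..., tie_breaking='smallest')`; the tolerance value is an assumption meant to mirror the library's default behavior.

```python
import numpy as np
from quantecon.game_theory.normal_form_game import Player

# Differential check on made-up data: the vectorized selection agrees with a
# per-player loop of best_response calls (assuming tie-free random payoffs).
rng = np.random.RandomState(1)
num_actions, n_players = 4, 50
payoff_matrix = rng.randn(num_actions, num_actions)
opponent_dists = rng.rand(n_players, num_actions)   # one row per updated player
tol = 1e-8                                          # assumed default-like tolerance

# Looped path: one Python-level method call per player.
player = Player(payoff_matrix)
looped = np.array([player.best_response(opponent_dists[k],
                                        tie_breaking='smallest',
                                        tol=None, random_state=None)
                   for k in range(n_players)])

# Vectorized path: one matmul and one argmax cover all players at once.
payoffs = payoff_matrix @ opponent_dists.T
max_vals = payoffs.max(axis=0)
vectorized = (payoffs >= (max_vals - tol)).argmax(axis=0)

assert np.array_equal(looped, vectorized)
```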

Test Results Analysis

The optimization shows dramatic improvements (54-485% faster) across nearly all test cases:

  • Small networks (2-3 players): 55-79% faster
  • Medium networks (30-50 players): 125-189% faster
  • Large networks (100+ players): 221-485% faster

The speedup scales with network size because the vectorization benefit compounds with more players. Tests with tie_breaking='random' show no regression (~1ms, unchanged) since they use the original fallback path.
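
How the per-call cost of `_play` grows with network size can be probed with a rough timing sketch like the one below. It is illustrative only: absolute numbers depend on the machine and on which quantecon version is installed, and it drives the private `_play` hook directly, as the generated tests do.

```python
import numpy as np
from timeit import timeit
from quantecon.game_theory.localint import LocalInteraction

payoff_matrix = np.array([[2.0, 0.0],
                          [0.0, 1.0]])
rng = np.random.RandomState(0)

for n in (10, 100, 1000):
    # Random directed network with ~10% edge density and no self-loops.
    adj = (rng.rand(n, n) < 0.1).astype(float)
    np.fill_diagonal(adj, 0.0)
    li = LocalInteraction(payoff_matrix, adj)
    actions = rng.randint(0, 2, size=n)
    player_ind = np.arange(n)

    t = timeit(lambda: li._play(actions.copy(), player_ind,
                                tie_breaking='smallest',
                                tol=None, random_state=None),
               number=100)
    print(f"n={n:5d}: {1e3 * t / 100:.3f} ms per call")
```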

Impact Considerations

The fast path only activates when tie_breaking='smallest', which is the default parameter in the LocalInteraction class definition. This means most existing workloads automatically benefit without code changes. Workloads involving simulations with many iterations (common in game theory research) will see substantial cumulative time savings.
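
As a rough sketch of the kind of iterated workload that benefits, the loop below runs repeated simultaneous best-response rounds with the default `'smallest'` tie-breaking. All inputs are made up, and it calls the private `_play` hook directly, as the generated tests do; real code would normally go through the class's public simulation interface.

```python
import numpy as np
from quantecon.game_theory.localint import LocalInteraction

rng = np.random.RandomState(0)
n_players, n_rounds = 200, 500

# A 2x2 coordination game on a sparse random network (values are arbitrary).
payoff_matrix = np.array([[4.0, 0.0],
                          [3.0, 2.0]])
adj_matrix = (rng.rand(n_players, n_players) < 0.05).astype(float)
np.fill_diagonal(adj_matrix, 0.0)

li = LocalInteraction(payoff_matrix, adj_matrix)
actions = rng.randint(0, 2, size=n_players)
player_ind = np.arange(n_players)

# Every round hits the vectorized fast path, so the savings accumulate
# over the whole simulation.
for _ in range(n_rounds):
    actions = li._play(actions, player_ind,
                       tie_breaking='smallest', tol=None, random_state=None)

print(np.bincount(actions, minlength=2))  # final action counts
```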

Correctness verification report:

| Test | Status |
| --- | --- |
| ⚙️ Existing Unit Tests | 🔘 None Found |
| 🌀 Generated Regression Tests | 937 Passed |
| ⏪ Replay Tests | 🔘 None Found |
| 🔎 Concolic Coverage Tests | 🔘 None Found |
| 📊 Tests Coverage | 100.0% |

🌀 Generated Regression Tests

```python
import numpy as np  # used to construct arrays for tests
# imports
import pytest  # used for our unit tests
from quantecon.game_theory.localint import LocalInteraction
from quantecon.game_theory.normal_form_game import \
    Player  # used to construct expected best responses
from scipy import sparse  # used to construct adjacency matrices

def test_play_basic_smallest_tiebreaking():
    # Simple 3-player, 2-action game where payoff favors matching opponent weight
    # Payoff matrix: rows = own action, columns = opponent mixed action support
    A = np.array([[2.0, 0.0],  # action 0 gets payoff 2 if opponent plays action 0, 0 if action 1
                  [0.0, 1.0]]) # action 1 gets payoff 1 if opponent plays action 1, 0 if action 0
    # Adjacency: 3 players in a line 0-1-2 (undirected weights)
    adj = np.array([[0, 1, 0],
                    [1, 0, 1],
                    [0, 1, 0]], dtype=float)
    LI = LocalInteraction(A, adj)  # construct LocalInteraction (converts adj to csr internally)

    # initial actions: players 0..2 choose actions [0, 1, 0]
    actions = np.array([0, 1, 0], dtype=int)

    # we will update all players
    player_ind = [0, 1, 2]

    # Use tie_breaking='smallest' which picks smallest index in ties
    codeflash_output = LI._play(actions.copy(), player_ind, tie_breaking='smallest', tol=None, random_state=None); returned = codeflash_output # 227μs -> 140μs (62.1% faster)

    # Manually compute expected:
    # Construct one-hot action indicator matrix (rows: players, cols: actions)
    N = LI.N
    num_actions = LI.num_actions
    one_hot = np.zeros((N, num_actions), dtype=float)
    for idx in range(N):
        one_hot[idx, actions[idx]] = 1.0

    # Compute opponent action distributions for each player in player_ind via dense multiplication
    expected_opponent = adj[player_ind] @ one_hot  # shape (len(player_ind), num_actions)

    # For each targeted player compute expected best response using Player.best_response with 'smallest'
    expected_actions = actions.copy()
    for k, i in enumerate(player_ind):
        p = Player(A)  # same payoff structure as LI.players[i]
        # Player.best_response expects the opponents_actions as a 1-D array (mixed action)
        br = p.best_response(expected_opponent[k, :], tie_breaking='smallest', tol=None, random_state=None)
        expected_actions[i] = int(br)  # ensure integer type
    # check elementwise equality
    for a, b in zip(returned, expected_actions):
        assert a == b

def test_play_random_tiebreaking_reproducible_with_seed():
    # Payoff matrix where all payoffs are equal => every action is a best response (tie)
    # Use 4 actions so ties are non-trivial
    A = np.ones((4, 4), dtype=float)
    # Small symmetric adjacency among 5 players (complete graph with weight 1)
    N = 5
    adj = np.ones((N, N), dtype=float) - np.eye(N)  # no self-loops, all others weight 1

    LI = LocalInteraction(A, adj)

    # Random initial actions but within range 0..3
    rng = np.random.RandomState(12345)
    actions = rng.randint(0, 4, size=N).astype(int)

    # Update all players using random tie breaking with a fixed seed
    player_ind = list(range(N))
    seed = 42  # deterministic seed
    codeflash_output = LI._play(actions.copy(), player_ind, tie_breaking='random', tol=None, random_state=seed); updated = codeflash_output # 1.13ms -> 1.13ms (0.106% slower)

    # Because all payoffs are equal, best_responses for each player should be all actions [0,1,2,3]
    # Player.best_response with tie_breaking='random' and the same integer seed will produce
    # a deterministic choice for each call (since the seed is re-initialized per call).
    expected = actions.copy()
    p = Player(A)
    for k, i in enumerate(player_ind):
        expected_choice = p.best_response(np.array([0.0, 0.0, 0.0, 0.0]), tie_breaking='random', tol=None, random_state=seed)
        expected[i] = int(expected_choice)

    # The updated array must match the expected deterministically-chosen actions
    for a, b in zip(updated, expected):
        assert a == b

    # Re-run to ensure reproducibility: calling again with same inputs should yield same result
    codeflash_output = LI._play(actions.copy(), player_ind, tie_breaking='random', tol=None, random_state=seed); updated2 = codeflash_output # 1.12ms -> 1.12ms (0.371% faster)
    for a, b in zip(updated, updated2):
        assert a == b

def test_play_invalid_tiebreaking_raises_value_error():
    # Create a trivial 2-action payoff and adjacency for two players
    A = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
    adj = np.array([[0.0, 1.0],
                    [1.0, 0.0]])
    LI = LocalInteraction(A, adj)

    actions = np.array([0, 1], dtype=int)
    player_ind = [0, 1]

    # Using an invalid tie_breaking string should cause Player.best_response to raise ValueError
    with pytest.raises(ValueError):
        LI._play(actions.copy(), player_ind, tie_breaking='invalid_option', tol=None, random_state=None) # 211μs -> 212μs (0.434% slower)

def test_play_single_action_network_no_change_and_inplace_return():
    # Payoff matrix 1x1 (only one action available)
    A = np.array([[10.0]])
    # Four players with arbitrary adjacency (it must be square)
    adj = np.array([[0.0, 1.0, 0.0, 0.0],
                    [1.0, 0.0, 1.0, 0.0],
                    [0.0, 1.0, 0.0, 1.0],
                    [0.0, 0.0, 1.0, 0.0]])
    LI = LocalInteraction(A, adj)

    # All players can only choose action 0
    actions = np.zeros(4, dtype=int)
    player_ind = [0, 1, 2, 3]

    # Call _play and ensure no change occurs and the returned object is the same object (in-place modification)
    codeflash_output = LI._play(actions, player_ind, tie_breaking='smallest', tol=None, random_state=None); returned = codeflash_output # 232μs -> 137μs (69.4% faster)
    # All entries must remain 0 because only action 0 exists
    for val in actions:
        assert val == 0

def test_play_large_scale_many_players_and_actions():
    # Large but within limits: 200 players and 5 actions each
    N = 200
    num_actions = 5

    # Random payoff matrix (dense) but fixed seed for reproducibility
    rng = np.random.RandomState(2026)
    A = rng.randn(num_actions, num_actions).astype(float)

    # Construct a sparse random adjacency matrix with average degree ~5; ensure nonnegative weights
    # Use a reproducible generator
    rng2 = np.random.RandomState(2027)
    rows = []
    cols = []
    data = []
    avg_deg = 5
    for i in range(N):
        # choose 'avg_deg' neighbors (without self-loops)
        neighbors = rng2.choice([j for j in range(N) if j != i], size=avg_deg, replace=False)
        for j in neighbors:
            rows.append(i)
            cols.append(j)
            # random positive weight
            data.append(float(rng2.rand()))
    adj_sparse = sparse.csr_matrix((data, (rows, cols)), shape=(N, N))
    # ensure the adjacency is dense-like for our expected calculation by converting to dense array
    adj_dense = adj_sparse.toarray()

    LI = LocalInteraction(A, adj_sparse)

    # Random initial actions in [0, num_actions-1]
    actions = rng.randint(0, num_actions, size=N).astype(int)
    player_ind = list(range(N))  # update all players

    # Compute expected opponent distributions via dense multiplication of adj_dense and one-hot actions
    one_hot = np.zeros((N, num_actions), dtype=float)
    for idx in range(N):
        one_hot[idx, actions[idx]] = 1.0
    expected_opponent = adj_dense[player_ind] @ one_hot  # shape (N, num_actions)

    # Compute expected best responses per player using Player.best_response
    expected_actions = actions.copy()
    for k, i in enumerate(player_ind):
        p = Player(A)
        br = p.best_response(expected_opponent[k, :], tie_breaking='smallest', tol=None, random_state=None)
        expected_actions[i] = int(br)

    # Run the method under test on a copy of actions and compare
    codeflash_output = LI._play(actions.copy(), player_ind, tie_breaking='smallest', tol=None, random_state=None); returned = codeflash_output # 1.44ms -> 246μs (485% faster)
    for val in returned:
        assert 0 <= val < num_actions

    # Validate that returned actions equal our expected actions
    for a, b in zip(returned, expected_actions):
        assert a == b
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.
import numpy as np
import pytest
from quantecon.game_theory.localint import LocalInteraction
from quantecon.game_theory.normal_form_game import Player
from scipy import sparse

class TestLocalInteractionPlay:
    """Test suite for LocalInteraction._play method."""

    # ==================== Basic Test Cases ====================
    # These tests verify fundamental functionality under normal conditions

    def test_play_single_player_single_action(self):
        """Test _play with a single player and single action."""
        # Create a trivial 1x1 payoff matrix
        payoff_matrix = np.array([[1.0]])
        # Create a 1x1 identity adjacency matrix (player connected to itself)
        adj_matrix = np.array([[1.0]])
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        # Initial actions: player 0 chooses action 0
        actions = np.array([0], dtype=int)
        player_ind = np.array([0], dtype=int)
        
        # Call _play with smallest tie-breaking
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 209μs -> 117μs (78.8% faster)

    def test_play_two_player_symmetric_game(self):
        """Test _play in a simple 2-player symmetric game."""
        # Payoff matrix for a simple coordination game
        payoff_matrix = np.array([[2.0, 0.0],
                                  [0.0, 1.0]])
        # Adjacency matrix: two players connected to each other
        adj_matrix = np.array([[0.0, 1.0],
                               [1.0, 0.0]])
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        # Both players initially play action 0
        actions = np.array([0, 0], dtype=int)
        player_ind = np.array([0, 1], dtype=int)
        
        # Call _play to update both players
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 218μs -> 138μs (57.7% faster)

    def test_play_single_player_update(self):
        """Test _play updating only one player in a multi-player network."""
        # Simple 2x2 payoff matrix
        payoff_matrix = np.array([[3.0, 1.0],
                                  [2.0, 4.0]])
        # 3-player network where each player connects to others
        adj_matrix = np.array([[0.0, 1.0, 1.0],
                               [1.0, 0.0, 1.0],
                               [1.0, 1.0, 0.0]])
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        # Initial actions: players 0, 1, 2 all play action 0
        actions = np.array([0, 0, 0], dtype=int)
        # Update only player 1
        player_ind = np.array([1], dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 209μs -> 134μs (55.4% faster)

    def test_play_multiple_players_update(self):
        """Test _play updating multiple (but not all) players."""
        payoff_matrix = np.array([[5.0, 0.0],
                                  [0.0, 5.0]])
        adj_matrix = np.array([[1.0, 1.0, 1.0],
                               [1.0, 1.0, 1.0],
                               [1.0, 1.0, 1.0]])
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.array([0, 1, 0], dtype=int)
        # Update players 0 and 2
        player_ind = np.array([0, 2], dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 218μs -> 138μs (58.0% faster)

    def test_play_preserves_array_structure(self):
        """Test that _play returns a properly structured numpy array."""
        payoff_matrix = np.array([[1.0, 0.0],
                                  [0.0, 1.0]])
        adj_matrix = np.array([[1.0, 1.0],
                               [1.0, 1.0]])
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.array([0, 1], dtype=int)
        player_ind = np.array([0, 1], dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 217μs -> 136μs (58.9% faster)

    def test_play_with_sparse_adjacency(self):
        """Test _play correctly handles sparse adjacency matrices."""
        payoff_matrix = np.array([[2.0, 0.0],
                                  [0.0, 1.0]])
        # Sparse adjacency: only player 0 and 1 are connected
        adj_matrix = np.array([[0.0, 1.0, 0.0],
                               [1.0, 0.0, 0.0],
                               [0.0, 0.0, 0.0]])
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.array([0, 0, 0], dtype=int)
        player_ind = np.array([0, 1, 2], dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 226μs -> 138μs (63.0% faster)

    def test_play_best_response_selection(self):
        """Test that _play correctly selects best response actions."""
        # Payoff matrix where action 1 dominates action 0
        payoff_matrix = np.array([[1.0, 5.0],
                                  [0.0, 3.0]])
        adj_matrix = np.array([[0.0, 1.0],
                               [1.0, 0.0]])
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        # Player 0 initially plays action 0, player 1 plays action 1
        actions = np.array([0, 1], dtype=int)
        player_ind = np.array([0], dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 209μs -> 135μs (54.6% faster)

    # ==================== Edge Cases ====================
    # These tests evaluate behavior under extreme or unusual conditions

    def test_play_zero_payoffs(self):
        """Test _play when all payoffs are zero."""
        payoff_matrix = np.array([[0.0, 0.0],
                                  [0.0, 0.0]])
        adj_matrix = np.array([[1.0, 1.0],
                               [1.0, 1.0]])
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.array([1, 1], dtype=int)
        player_ind = np.array([0, 1], dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 217μs -> 137μs (57.9% faster)

    def test_play_negative_payoffs(self):
        """Test _play with negative payoff values."""
        payoff_matrix = np.array([[-5.0, -1.0],
                                  [-2.0, -3.0]])
        adj_matrix = np.array([[1.0, 1.0],
                               [1.0, 1.0]])
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.array([0, 0], dtype=int)
        player_ind = np.array([0], dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 209μs -> 134μs (55.9% faster)

    def test_play_large_payoff_differences(self):
        """Test _play with very large differences in payoff values."""
        payoff_matrix = np.array([[1.0, 1e10],
                                  [1e-10, 1.0]])
        adj_matrix = np.array([[0.0, 1.0],
                               [1.0, 0.0]])
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.array([0, 1], dtype=int)
        player_ind = np.array([0], dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 208μs -> 134μs (55.4% faster)

    def test_play_with_tolerance_parameter(self):
        """Test _play respects the tolerance parameter for tie-breaking."""
        # Create payoffs that are very close (within tolerance)
        payoff_matrix = np.array([[1.0, 1.0 + 1e-9],
                                  [0.0, 0.0]])
        adj_matrix = np.array([[0.0, 1.0],
                               [1.0, 0.0]])
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.array([0, 0], dtype=int)
        player_ind = np.array([0], dtype=int)
        
        # With tight tolerance (1e-10), payoffs should be treated as distinct
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-10, random_state=None
        ); result_tight = codeflash_output # 208μs -> 133μs (55.7% faster)
        
        # With loose tolerance (1e-8), payoffs should be treated as tied
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_loose = codeflash_output # 175μs -> 110μs (58.9% faster)

    def test_play_disconnected_players(self):
        """Test _play when some players are disconnected from the network."""
        payoff_matrix = np.array([[1.0, 0.0],
                                  [0.0, 1.0]])
        # Player 2 is not connected to anyone (all zeros in row/column)
        adj_matrix = np.array([[0.0, 1.0, 0.0],
                               [1.0, 0.0, 0.0],
                               [0.0, 0.0, 0.0]])
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.array([0, 1, 0], dtype=int)
        player_ind = np.array([2], dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 207μs -> 134μs (54.5% faster)

    def test_play_self_loop_network(self):
        """Test _play when adjacency matrix has self-loops."""
        payoff_matrix = np.array([[2.0, 0.0],
                                  [0.0, 1.0]])
        # Each player connected to all others including itself
        adj_matrix = np.array([[1.0, 1.0],
                               [1.0, 1.0]])
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.array([0, 1], dtype=int)
        player_ind = np.array([0], dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 207μs -> 133μs (55.6% faster)

    def test_play_asymmetric_adjacency(self):
        """Test _play with asymmetric adjacency matrix."""
        payoff_matrix = np.array([[3.0, 1.0],
                                  [1.0, 2.0]])
        # Asymmetric adjacency: player 0 knows about 1, but not vice versa
        adj_matrix = np.array([[0.0, 1.0],
                               [0.0, 0.0]])
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.array([0, 1], dtype=int)
        player_ind = np.array([0, 1], dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 218μs -> 139μs (56.4% faster)

    def test_play_weighted_adjacency(self):
        """Test _play with weighted (non-binary) adjacency matrix."""
        payoff_matrix = np.array([[2.0, 1.0],
                                  [1.0, 3.0]])
        # Weighted adjacency: different strength connections
        adj_matrix = np.array([[0.0, 0.5, 0.5],
                               [1.0, 0.0, 0.0],
                               [0.0, 1.0, 0.0]])
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.array([0, 1, 1], dtype=int)
        player_ind = np.array([0], dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 208μs -> 134μs (55.1% faster)

    def test_play_empty_player_index(self):
        """Test _play with empty player_ind array."""
        payoff_matrix = np.array([[1.0, 0.0],
                                  [0.0, 1.0]])
        adj_matrix = np.array([[1.0, 1.0],
                               [1.0, 1.0]])
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.array([0, 1], dtype=int)
        player_ind = np.array([], dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 157μs -> 108μs (45.3% faster)

    def test_play_tie_breaking_smallest(self):
        """Test _play with 'smallest' tie-breaking strategy."""
        # Payoff matrix with tied best responses
        payoff_matrix = np.array([[2.0, 2.0, 1.0],
                                  [2.0, 2.0, 1.0],
                                  [1.0, 1.0, 1.0]])
        adj_matrix = np.array([[0.0, 1.0],
                               [1.0, 0.0]])
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.array([2, 2], dtype=int)
        player_ind = np.array([0], dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 208μs -> 133μs (55.5% faster)

    def test_play_with_many_actions(self):
        """Test _play with a large number of available actions."""
        # Create a 10x10 payoff matrix
        payoff_matrix = np.eye(10)  # Identity matrix
        adj_matrix = np.array([[0.0, 1.0],
                               [1.0, 0.0]])
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.array([0, 5], dtype=int)
        player_ind = np.array([0], dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 209μs -> 133μs (56.9% faster)

    # ==================== Large Scale Test Cases ====================
    # These tests assess performance and scalability with large data

    def test_play_large_network(self):
        """Test _play with a moderately large network of 100 players."""
        # Small payoff matrix but large network
        payoff_matrix = np.array([[2.0, 0.0],
                                  [0.0, 1.0]])
        # Create a random adjacency matrix (sparse for realism)
        np.random.seed(42)
        adj_dense = np.random.choice([0, 1], size=(100, 100), p=[0.9, 0.1])
        adj_matrix = adj_dense.astype(float)
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.random.choice([0, 1], size=100).astype(int)
        player_ind = np.arange(100, dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 833μs -> 180μs (360% faster)

    def test_play_many_actions_many_players(self):
        """Test _play with both many players and many actions."""
        # 50 players, 5 actions each
        payoff_matrix = np.random.rand(5, 5)
        np.random.seed(42)
        adj_matrix = np.random.choice([0, 1], size=(50, 50), p=[0.8, 0.2]).astype(float)
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.random.choice(np.arange(5), size=50).astype(int)
        player_ind = np.arange(50, dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 528μs -> 164μs (221% faster)

    def test_play_dense_network(self):
        """Test _play with a dense (nearly complete) network."""
        payoff_matrix = np.array([[1.0, 2.0],
                                  [2.0, 1.0]])
        # Dense network: 50 players, high connectivity
        np.random.seed(42)
        adj_matrix = np.random.choice([0, 1], size=(50, 50), p=[0.1, 0.9]).astype(float)
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.random.choice([0, 1], size=50).astype(int)
        # Update a subset of players
        player_ind = np.arange(0, 50, 2, dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 375μs -> 156μs (140% faster)

    def test_play_large_payoff_matrix(self):
        """Test _play with a relatively large payoff matrix (15x15)."""
        payoff_matrix = np.random.rand(15, 15)
        np.random.seed(42)
        # 30 players
        adj_matrix = np.random.choice([0, 1], size=(30, 30), p=[0.85, 0.15]).astype(float)
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.random.choice(np.arange(15), size=30).astype(int)
        player_ind = np.arange(30, dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 399μs -> 158μs (152% faster)

    def test_play_sequential_updates_large_network(self):
        """Test _play with sequential player updates on a large network."""
        payoff_matrix = np.array([[3.0, 1.0],
                                  [1.0, 2.0]])
        np.random.seed(42)
        adj_matrix = np.random.choice([0, 1], size=(75, 75), p=[0.85, 0.15]).astype(float)
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.random.choice([0, 1], size=75).astype(int)
        
        # Simulate sequential updates
        for player_idx in range(0, 75, 5):
            player_ind = np.array([player_idx], dtype=int)
            codeflash_output = local_interaction._play(
                actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
            ); actions = codeflash_output # 2.53ms -> 1.54ms (64.5% faster)

    def test_play_mixed_action_weights(self):
        """Test _play with heavily weighted asymmetric adjacency matrix."""
        payoff_matrix = np.array([[2.0, 0.0],
                                  [0.0, 1.0]])
        np.random.seed(42)
        # Create weighted adjacency with varied weights
        adj_matrix = np.random.uniform(0, 1, size=(40, 40))
        # Set diagonal to zero
        np.fill_diagonal(adj_matrix, 0)
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.random.choice([0, 1], size=40).astype(int)
        player_ind = np.arange(40, dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 464μs -> 162μs (185% faster)

    def test_play_random_state_consistency(self):
        """Test that _play produces consistent results with fixed random seed."""
        payoff_matrix = np.array([[2.0, 1.0],
                                  [1.0, 2.0]])
        np.random.seed(42)
        adj_matrix = np.random.choice([0, 1], size=(30, 30), p=[0.85, 0.15]).astype(float)
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions1 = np.random.choice([0, 1], size=30).astype(int)
        actions2 = actions1.copy()
        player_ind = np.arange(30, dtype=int)
        
        # Both calls should use same random state (None)
        codeflash_output = local_interaction._play(
            actions1, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result1 = codeflash_output # 398μs -> 152μs (161% faster)
        codeflash_output = local_interaction._play(
            actions2, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result2 = codeflash_output # 366μs -> 126μs (189% faster)

    def test_play_extreme_payoff_scales(self):
        """Test _play with payoffs spanning extreme scales."""
        payoff_matrix = np.array([[1e-15, 1e15],
                                  [1e-15, 1e15]])
        adj_matrix = np.array([[0.0, 1.0],
                               [1.0, 0.0]])
        
        local_interaction = LocalInteraction(payoff_matrix, adj_matrix)
        actions = np.array([0, 0], dtype=int)
        player_ind = np.array([0, 1], dtype=int)
        
        codeflash_output = local_interaction._play(
            actions, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result_actions = codeflash_output # 217μs -> 138μs (57.2% faster)

    def test_play_sparse_vs_dense_consistency(self):
        """Test _play produces consistent results for sparse and dense representations."""
        payoff_matrix = np.array([[2.0, 0.0],
                                  [0.0, 1.0]])
        # Create an adjacency matrix
        np.random.seed(42)
        adj_array = np.random.choice([0, 1], size=(20, 20), p=[0.8, 0.2]).astype(float)
        
        local_interaction1 = LocalInteraction(payoff_matrix, adj_array)
        actions1 = np.random.choice([0, 1], size=20).astype(int)
        player_ind = np.arange(20, dtype=int)
        
        # Make copy for second run
        actions2 = actions1.copy()
        
        codeflash_output = local_interaction1._play(
            actions1, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result1 = codeflash_output # 330μs -> 146μs (125% faster)
        codeflash_output = local_interaction1._play(
            actions2, player_ind, tie_breaking='smallest', tol=1e-8, random_state=None
        ); result2 = codeflash_output # 302μs -> 122μs (148% faster)
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.
```

To edit these changes, run `git checkout codeflash/optimize-LocalInteraction._play-mkp7brrn` and push.

@codeflash-ai codeflash-ai bot requested a review from aseembits93 January 22, 2026 08:40
@codeflash-ai codeflash-ai bot added labels: ⚡️ codeflash (Optimization PR opened by Codeflash AI), 🎯 Quality: High (Optimization Quality according to Codeflash) Jan 22, 2026