- Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for the minimax algorithm.
- As we have seen in the minimax search algorithm, the number of game states it has to examine grows exponentially with the depth of the tree. We cannot eliminate the exponent, but we can cut it in half. Hence there is a technique by which we can compute the correct minimax decision without checking every node of the game tree, and this technique is called pruning. Because it involves two threshold parameters, alpha and beta, for future expansion, it is called alpha-beta pruning. It is also known as the alpha-beta algorithm.
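- For illustration, suppose the branching factor is b = 10 and the search depth is m = 4: plain minimax examines on the order of 10^4 = 10,000 leaf nodes, while alpha-beta pruning with good move ordering examines on the order of 10^(4/2) = 100. This is what cutting the exponent in half means in practice.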
- Alpha-beta pruning can be applied at any depth of a tree, and it sometimes prunes not only the leaves but entire sub-trees.
- The two parameters can be defined as follows:
Alpha: the best (highest-value) choice we have found so far at any point along the path of the Maximizer. The initial value of alpha is -∞.
Beta: the best (lowest-value) choice we have found so far at any point along the path of the Minimizer. The initial value of beta is +∞.
- Alpha-beta pruning applied to a standard minimax algorithm returns the same move as the standard algorithm, but it removes all the nodes that do not really affect the final decision and only slow the algorithm down. Pruning these nodes therefore makes the algorithm faster.
Note: To better understand this topic, it helps to study the minimax algorithm first.
Condition for alpha-beta pruning:
The main condition required for alpha-beta pruning is:
α>=β
Key points about alpha-beta pruning:
- The Max player only updates the value of alpha.
- The Min player only updates the value of beta.
- While backtracking the tree, node values are passed up to parent nodes, not the alpha and beta values.
- Only the alpha and beta values are passed down to child nodes.
Pseudo-code for alpha-beta pruning:
```
function minimax(node, depth, alpha, beta, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node

    if maximizingPlayer then            // for Maximizer player
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth-1, alpha, beta, false)
            maxEva = max(maxEva, eva)
            alpha = max(alpha, maxEva)
            if beta <= alpha then
                break
        return maxEva

    else                                // for Minimizer player
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth-1, alpha, beta, true)
            minEva = min(minEva, eva)
            beta = min(beta, minEva)
            if beta <= alpha then
                break
        return minEva
```

Working of Alpha-Beta Pruning:

Let's take the example of a two-player search tree to understand the working of alpha-beta pruning.

Step 1: The Max player starts with the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values on to its child D.

Step 2: At node D, the value of α is calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3, so max(2, 3) = 3 becomes the value of α at node D, and the node value is also 3.

Step 3: The algorithm now backtracks to node B, where the value of β changes, as it is Min's turn. β = +∞ is compared with the available successor node value, i.e. min(∞, 3) = 3, so at node B we now have α = -∞ and β = 3.

In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed down as well.

Step 4: At node E, Max takes its turn and the value of alpha changes. The current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since α >= β, the right successor of E is pruned, the algorithm does not traverse it, and the value at node E becomes 5.

Step 5: The algorithm again backtracks the tree, from node B to node A. At node A the value of alpha changes: the maximum available value is 3, as max(-∞, 3) = 3, and β = +∞. These two values are now passed to the right successor of A, which is node C.

At node C, α = 3 and β = +∞, and the same values are passed on to node F.

Step 6: At node F, the value of α is again compared, first with the left child, which is 0, giving max(3, 0) = 3, and then with the right child, which is 1, giving max(3, 1) = 3. α remains 3, but the node value of F becomes 1.

Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of beta changes, since it is compared with 1 and min(∞, 1) = 1. Now at C, α = 3 and β = 1, which again satisfies the condition α >= β, so the next child of C, which is G, is pruned and the algorithm does not compute the entire sub-tree G.

Step 8: C now returns the value 1 to A, and the best value for A is max(3, 1) = 3. The following is the final game tree, showing the nodes that were computed and the nodes that were never computed. Hence, the optimal value for the maximizer is 3 in this example.
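For readers who want to run the algorithm, below is a minimal Python sketch of the pseudo-code above, applied to a tree shaped like the worked example. The leaf values that end up pruned (the second child of E and the children of G) are not fixed by the walkthrough, so placeholder values are used for them; they do not affect the result.

```python
def alphabeta(node, depth, alpha, beta, maximizing_player):
    # A leaf (or the depth limit) returns its static evaluation.
    if depth == 0 or isinstance(node, int):
        return node

    if maximizing_player:            # Maximizer: raises alpha
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:        # cut-off: the Min player above will never allow this branch
                break
        return best
    else:                            # Minimizer: lowers beta
        best = float("+inf")
        for child in node:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:        # cut-off: the Max player above already has something better
                break
        return best


# Tree of the example: A -> (B, C), B -> (D, E), C -> (F, G).
# D = [2, 3], E = [5, ?], F = [0, 1], G = [?, ?]; pruned leaves are placeholders.
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]

print(alphabeta(tree, 3, float("-inf"), float("+inf"), True))   # -> 3
```

Running this prints 3, matching Step 8, and the placeholder leaves in E's right branch and in G are never evaluated.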
Move Ordering in Alpha-Beta Pruning:

The effectiveness of alpha-beta pruning depends heavily on the order in which the nodes are examined. Move ordering is therefore an important aspect of alpha-beta pruning.

It can be of two types:

- Worst ordering: In some cases, the alpha-beta pruning algorithm does not prune any of the leaves of the tree and works exactly like the minimax algorithm. It then also consumes more time because of the alpha and beta bookkeeping; such an ordering is called the worst ordering. In this case, the best move occurs on the right side of the tree. The time complexity for such an ordering is O(b^m).
- Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a lot of pruning happens in the tree and the best moves occur on the left side of the tree. Since we apply DFS, the algorithm searches the left of the tree first and can go twice as deep as the minimax algorithm in the same amount of time. The complexity with ideal ordering is O(b^(m/2)).

Rules to find good ordering:

The following are some rules for finding a good ordering in alpha-beta pruning (a sketch of heuristic move ordering follows after this list):

- Try the best move from the shallowest node first.
- Order the nodes in the tree so that the best nodes are checked first.
- Use domain knowledge when choosing the best move. For example, in chess try the order: captures first, then threats, then forward moves, then backward moves.
- Keep bookkeeping of the states, as there is a possibility that states may repeat.
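As an illustration of the move-ordering idea, the sketch below mirrors the alphabeta() routine above but sorts children with a cheap, hypothetical heuristic (shallow_estimate is an invented placeholder for whatever domain knowledge is available) before the deep search, so that the most promising moves are examined first and cut-offs occur earlier.

```python
def ordered_alphabeta(node, depth, alpha, beta, maximizing_player, heuristic):
    if depth == 0 or isinstance(node, int):
        return node

    # Search the most promising children first:
    # descending estimates for Max, ascending for Min.
    children = sorted(node, key=heuristic, reverse=maximizing_player)

    if maximizing_player:
        best = float("-inf")
        for child in children:
            best = max(best, ordered_alphabeta(child, depth - 1, alpha, beta, False, heuristic))
            alpha = max(alpha, best)
            if beta <= alpha:
                break
        return best
    else:
        best = float("+inf")
        for child in children:
            best = min(best, ordered_alphabeta(child, depth - 1, alpha, beta, True, heuristic))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best


def shallow_estimate(node):
    # Hypothetical cheap estimate of a subtree's value: average of its leaves.
    return node if isinstance(node, int) else sum(map(shallow_estimate, node)) / len(node)


tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
print(ordered_alphabeta(tree, 3, float("-inf"), float("+inf"), True, shallow_estimate))  # -> 3
```

With good ordering of this kind, the number of examined leaves moves from the worst-case O(b^m) toward the ideal O(b^(m/2)) described above, while the returned value stays the same as plain minimax.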