2024-03-11
Goal
To continue writing Mini Martial Artists - Part 2.
Notes
I have a general outline of what I want to write for this Blog Post, which has made it easy to get started with writing. Depending on how far I get, I'd also like to get a handle on which graphics I want to generate for this blog post and leave ample space for them.
As I was writing I thought that there may be a smarter way to develop the Markov Chain that drives the Fight Simulator. A single value could be generated to represent the probability of any given state change by sampling from the Beta Distributions for the aggressor and defender. The difference between the aggressor's and defender's distributions would then indicate the probability of a success or failure for any given action. ChatGPT offered an interesting approach that evaluates the equation directly rather than having to draw thousands of samples, which is seen below:
from scipy.stats import beta
import numpy as np

def calculate_probability(alpha_a, beta_a, alpha_b, beta_b):
    """
    Calculate the probability that a value from Beta(alpha_a, beta_a) is greater
    than a value from Beta(alpha_b, beta_b).

    Parameters:
    alpha_a, beta_a : Parameters of the first Beta distribution.
    alpha_b, beta_b : Parameters of the second Beta distribution.

    Returns:
    Probability that a value from Beta(alpha_a, beta_a) > Beta(alpha_b, beta_b).
    """
    # Define the integrand function for the probability calculation
    def integrand(x):
        return beta.cdf(x, alpha_a, beta_a) * beta.pdf(x, alpha_b, beta_b)

    # Numerical integration over the interval [0, 1] using the trapezoidal rule
    x_values = np.linspace(0, 1, 1000)
    integrand_values = [integrand(x) for x in x_values]
    result = np.trapz(integrand_values, x_values)
    return result

# Parameters for the beta distributions
alpha_a, beta_a = 10, 10
alpha_b, beta_b = 7, 13

# Calculate the probability
probability = calculate_probability(alpha_a, beta_a, alpha_b, beta_b)
print(f"The probability that a value from Beta({alpha_a}, {beta_a}) is greater than one from Beta({alpha_b}, {beta_b}) is approximately {probability:.3f}.")
which I adapted slightly to:

from scipy.stats import beta
import numpy as np

# Integrand for the probability calculation: the PDF of distribution A
# times the CDF of distribution B, evaluated at the same point.
# (Parameters are passed explicitly rather than read from globals.)
def integrand(x, alpha_a, beta_a, alpha_b, beta_b):
    return beta.pdf(x, alpha_a, beta_a) * beta.cdf(x, alpha_b, beta_b)

def calculate_probability(alpha_a, beta_a, alpha_b, beta_b):
    # Numerical integration over the interval [0, 1] using the trapezoidal rule
    x_values = np.linspace(0, 1, 1000)
    integrand_values = integrand(x_values, alpha_a, beta_a, alpha_b, beta_b)
    result = np.trapz(integrand_values, x_values)
    return result

# Parameters for the beta distributions
alpha_a, beta_a = 10, 10
alpha_b, beta_b = 7, 13

# Calculate the probability
probability = calculate_probability(alpha_a, beta_a, alpha_b, beta_b)
print(f"The probability that a value from Beta({alpha_a}, {beta_a}) is greater than one from Beta({alpha_b}, {beta_b}) is approximately {probability:.3f}.")
ChatGPT's version was initially backward: to get the probability that Distribution A > Distribution B, the integrand should be the product of the Probability Density Function of Distribution A with the Cumulative Distribution Function of Distribution B. Now I wonder if I could use this explicit probability to turn this into a Dynamic Programming problem and essentially solve the Markov Chain.
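The closed-form integral can be sanity-checked against the brute-force sampling approach I originally had in mind. A minimal sketch, using the same example parameters as above and `scipy.integrate.quad` in place of the hand-rolled trapezoidal rule:

```python
import numpy as np
from scipy import integrate
from scipy.stats import beta

alpha_a, beta_a = 10, 10
alpha_b, beta_b = 7, 13

# P(A > B) as the integral of pdf_A(x) * cdf_B(x) over [0, 1]
integral, _ = integrate.quad(
    lambda x: beta.pdf(x, alpha_a, beta_a) * beta.cdf(x, alpha_b, beta_b), 0, 1
)

# The same probability via "thousands of samples": draw paired values
# from each distribution and count how often A beats B
rng = np.random.default_rng(0)
n = 200_000
wins = np.mean(rng.beta(alpha_a, beta_a, n) > rng.beta(alpha_b, beta_b, n))

print(f"integral: {integral:.3f}, sampling: {wins:.3f}")
```

The two estimates should agree to within Monte Carlo noise, which is a quick way to confirm the integrand is oriented the right way round before wiring it into the Fight Simulator.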
Results
- Continued writing Mini Martial Artists - Part 2
- Found how to calculate the probability that one Beta Distribution exceeds another without needing to apply Bootstrapping techniques to them.
  - Involves integrating the product of the Probability Density Function of one Beta Distribution with the Cumulative Distribution Function of the other
- Unrelated to the above but today I also found out about smarter notetaking in Obsidian via Dann Berg. This could be worth looking into.
  - Could have implications for the BigBrain App (maybe making a BigBrain Blog that posts raw notes as I work on them?)
Next Time
- Continue writing Mini Martial Artists - Part 2