2024-03-11-Monday
---
created: 2024-03-11 19:59
tags:
  - daily-notes
---
# Monday, March 11, 2024
<< [[Timestamps/2024/03-March/2024-03-10-Sunday|Yesterday]] | [[Timestamps/2024/03-March/2024-03-12-Tuesday|Tomorrow]] >>
## 🎯 Goal
- To continue writing Mini Martial Artists - Part 2.
accomplished:: true
## 🌟 Results
- Continued writing Mini Martial Artists - Part 2
- Found out how to calculate the difference between two Beta Distributions without needing to apply Bootstrapping techniques to them.
	- Involves integrating the product of the Cumulative Distribution Function of one Beta Distribution with the Probability Density Function of the other
- Unrelated to the above, but today I also found out about smarter note-taking in Obsidian via Dann Berg. This could be worth looking into.
- Could have implications for the BigBrain App (maybe making a BigBrain Blog that posts raw notes as I work on them?)
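In symbols, my reading of the Beta-Distribution result above (taking $A$ as the aggressor's distribution and $B$ as the defender's):

$$P(A > B) = \int_0^1 f_A(x)\, F_B(x)\, dx$$

where $f_A$ is the Probability Density Function of $A$ and $F_B$ is the Cumulative Distribution Function of $B$.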
## 🌱 Next Time
- next-goal:: Continue writing Mini Martial Artists - Part 2 OR Learn more about Personal Knowledge Management (specifically Map of Content)
## 📝 Notes
I have a general outline of what I want to write for this Blog Post, which has made it easy to get started with writing. Depending on how far I get, I'd like to get a handle on which graphics I want to generate for this blog post and leave ample space to do so.
As I was writing, I thought there may be a smarter way to develop the Markov Chain that drives the Fight Simulator. A single value could be generated to represent the probability of any given state change by sampling from the Beta Distributions for the aggressor and the defender. The difference between the aggressor's and defender's distributions would then indicate the probability of success or failure for any given action. ChatGPT offered an interesting approach that works with the equation directly rather than drawing thousands of samples, seen below:
```python
from scipy.stats import beta
import numpy as np

# Shape parameters for the two Beta Distributions (placeholders -- these
# would come from the aggressor's and defender's fitted distributions)
alpha_a, beta_a = 2.0, 5.0
alpha_b, beta_b = 3.0, 4.0

def integrand(x):
    ### KEY LINE HERE ###
    return beta.cdf(x, alpha_a, beta_a) * beta.pdf(x, alpha_b, beta_b)

# Numerical integration over the interval [0, 1] using the trapezoidal rule
x_values = np.linspace(0, 1, 1000)
integrand_values = [integrand(x) for x in x_values]
result = np.trapz(integrand_values, x_values)
```
which I adapted slightly to:
```python
from scipy.stats import beta
import numpy as np

# Same placeholder shape parameters as above
alpha_a, beta_a = 2.0, 5.0
alpha_b, beta_b = 3.0, 4.0

def integrand(x):
    ### KEY LINE HERE -- pdf of A times cdf of B ###
    return beta.pdf(x, alpha_a, beta_a) * beta.cdf(x, alpha_b, beta_b)

# Numerical integration over the interval [0, 1] using the trapezoidal rule
x_values = np.linspace(0, 1, 1000)
integrand_values = [integrand(x) for x in x_values]
result = np.trapz(integrand_values, x_values)
```
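To convince myself the orientation is right, the closed-form integral can be checked against brute-force sampling (a sketch using made-up shape parameters and `scipy.integrate.quad`, neither of which is from the original note):

```python
from scipy.stats import beta
from scipy.integrate import quad
import numpy as np

# Placeholder shape parameters for the aggressor (A) and defender (B)
alpha_a, beta_a = 2.0, 5.0
alpha_b, beta_b = 3.0, 4.0

# Closed form: P(A > B) = integral over [0, 1] of pdf_A(x) * cdf_B(x)
closed_form, _ = quad(
    lambda x: beta.pdf(x, alpha_a, beta_a) * beta.cdf(x, alpha_b, beta_b), 0, 1
)

# Brute force: draw paired samples and count how often A exceeds B
rng = np.random.default_rng(0)
samples_a = rng.beta(alpha_a, beta_a, size=100_000)
samples_b = rng.beta(alpha_b, beta_b, size=100_000)
monte_carlo = (samples_a > samples_b).mean()

print(closed_form, monte_carlo)  # the two estimates should agree closely
```

The two numbers converging on the same value is exactly the "no Bootstrapping needed" point from the Results section: the integral replaces the sampling loop entirely.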
It was initially backward: to get the probability that Distribution A > Distribution B, it should be the product of the Probability Density Function of Distribution A with the Cumulative Distribution Function of Distribution B. Now I wonder if I could use this explicit probability to turn this into a Dynamic Programming problem and essentially solve the Markov Chain.
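One way that idea might look (purely a sketch: the three-state chain, the 0.4 success probability, and the 0.3 end-of-exchange probability are all made up for illustration): with p = P(aggressor beats defender) known exactly, an absorbing Markov Chain can be solved with linear algebra instead of simulating fights.

```python
import numpy as np

# Hypothetical chain: state 0 = fight ongoing, state 1 = aggressor wins,
# state 2 = defender wins (states 1 and 2 are absorbing).
p = 0.4         # placeholder for P(aggressor > defender) from the integral
end_prob = 0.3  # placeholder chance that any given exchange ends the fight

P = np.array([
    [1 - end_prob, end_prob * p, end_prob * (1 - p)],
    [0.0,          1.0,          0.0],
    [0.0,          0.0,          1.0],
])

# Absorption probabilities via B = (I - Q)^-1 R, where Q is the
# transient-to-transient block and R the transient-to-absorbing block
Q = P[:1, :1]
R = P[:1, 1:]
absorb = np.linalg.solve(np.eye(1) - Q, R)
print(absorb)  # [[0.4, 0.6]] -- the aggressor wins with probability p overall
```

In this toy chain the answer is just p itself, but the same linear solve would give exact win probabilities for a larger chain with many intermediate fight states.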
## Notes created today
```dataview
LIST FROM "" WHERE file.cday = date("2024-03-11") SORT file.ctime asc
```
## Notes last touched today
```dataview
LIST FROM "" WHERE file.mday = date("2024-03-11") SORT file.mtime asc
```