Will Murphy's personal home page

Rust Monte Carlo Pi 2

Last time, we wrote a simple Rust program to estimate Pi by simulating throwing darts at a quarter of the unit circle inscribed in a square with side length 1. Since the quarter circle has area π/4, the fraction of darts that land inside it approaches π/4; equivalently, if b is the fraction of misses, then π ≈ 4 − 4b. When I run that program, it gives me answers like: PI might be 3.1420408. These answers are fine, but I’d really like to do better. I think the first step to doing better is changing the program a bit so that it prints out its results in a machine-readable form. Then I can run it repeatedly and see how long it takes and how well it does.

First, let’s change the code so that it times itself, and reports more completely what it did:

use clap::Parser;
use rand::prelude::*;
use std::time::Instant;

struct Point {
    x: f64,
    y: f64,
}

#[derive(Parser, Debug)]
#[command(author, version, about, long_about = None)]
struct Args {
    #[arg(short, long)]
    iterations: u64,
}

fn main() {
    let args = Args::parse();

    let mut rng = rand::thread_rng();

    let iterations = args.iterations;
    let now = Instant::now();

    let b_count = (0..iterations)
        .map(|_| random_point(&mut rng))
        .filter(|point| !is_in_circle(&point))
        .count() as f64;

    let b = b_count / iterations as f64;

    let pi = 4.0 - (4.0 * b);

    let elapsed = now.elapsed();

    let difference = (std::f64::consts::PI - pi).abs();

    // Duration, iterations, estimate of pi, difference from actual Pi.
    println!(
        "{},{},{},{}",
        elapsed.as_secs_f64(),
        iterations,
        pi,
        difference
    );
}

fn random_point(rng: &mut rand::rngs::ThreadRng) -> Point {
    let x = rng.gen::<f64>();
    let y = rng.gen::<f64>();

    Point { x, y }
}

fn is_in_circle(point: &Point) -> bool {
    (point.x * point.x) + (point.y * point.y) <= 1.0
}

Now I can invoke it like this:

$ ./target/release/monte-carlo-pi --iterations 10000000
$ for i in {1000..1000000..1000}; do
    ./target/release/monte-carlo-pi --iterations $i >> results.csv
  done

Now let’s graph iterations against accuracy:

[Chart showing that more iterations means more accuracy]

The code to generate the chart is (using Python and matplotlib):

import matplotlib.pyplot as plt
import csv

with open('results.csv') as f:
    reader = csv.reader(f)
    iterations = []
    diffs = []
    for row in reader:
        # Columns are: duration, iterations, estimate of pi, difference.
        iterations.append(float(row[1]) / 1_000_000)
        diffs.append(float(row[3]))

fig, ax = plt.subplots()
ax.plot(iterations, diffs)
ax.set(xlabel='millions of iterations',
       ylabel='Abs value of error vs standard constant',
       title='More iterations generally means more accurate.')
plt.show()

I hypothesize that if I run enough iterations, eventually we’ll stop seeing improvement, either because the random number generator is biased in some way, or I have some floating point rounding error I’m not thinking of. In a future post, I’ll try to take a look at running a lot more iterations and seeing how much accuracy I can get, but my laptop is starting to get slow, so I might have to do some benchmarking and try to speed things up on the way.
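Before going further, it’s worth sketching what accuracy we should even expect in theory. Assuming ideal, independent uniform samples (so ignoring any RNG bias or rounding error), each dart misses the quarter circle with probability p = 1 − π/4, the miss count is binomial, and the standard error of the estimate 4 − 4b works out to 4·sqrt(p(1 − p)/n). A quick back-of-the-envelope sketch, not a measurement from my runs:

```rust
// Sketch: expected standard error of the Monte Carlo Pi estimate,
// assuming ideal independent uniform samples.
fn main() {
    // Probability that a dart misses the quarter circle.
    let p = 1.0 - std::f64::consts::PI / 4.0;
    for n in [1_000u64, 1_000_000, 1_000_000_000] {
        // Standard error of pi_hat = 4 - 4*b, where b is the miss fraction.
        let se = 4.0 * (p * (1.0 - p) / n as f64).sqrt();
        println!("n = {:>13}: expected error ~ {:.6}", n, se);
    }
}
```

That 1/sqrt(n) falloff means each extra digit of accuracy needs roughly 100× more iterations, which matches the slow flattening in the chart above.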

Till next time, happy learning!
– Will

