
Benchmarks

Brahe is benchmarked against OreKit 12.2 (Java), the most widely used open-source astrodynamics library, across 24 tasks spanning 7 modules. All three implementations — Java (OreKit), Python (Brahe), and Rust (Brahe) — are given identical inputs (seed=42, 100 iterations) and their outputs are compared for both performance and numerical accuracy.

Tip

These benchmarks are meant to highlight the consistency and agreement of Brahe with other astrodynamics software libraries and to enable users to make informed selection trades. The purpose is NOT to claim that one offering is superior to another. These benchmarks, and Brahe itself, would not exist without the excellent work of other projects trail-blazing an open astrodynamics software ecosystem. As programming languages and technology evolve, it is helpful to have multiple viable solutions so that users can select a tool that works well for their system and problem.

Methodology

Languages and Libraries:

  • Java: OreKit 12.2 on OpenJDK 21
  • Python: Brahe Python bindings (PyO3)
  • Rust: Brahe native Rust library

Test Environment: 2021 MacBook Pro, Apple M1 Max, 64 GB RAM

Protocol: Each task is run for 100 iterations with a fixed random seed (42). Mean execution time is reported. Accuracy is measured by comparing outputs element-wise against the Java (OreKit) reference implementation.
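The per-task protocol can be sketched as a simple timing harness. This is illustrative only; the function and task names below are placeholders, not the actual benchmark code:

```python
import statistics
import time

def benchmark(task, iterations=100):
    """Time a zero-argument callable and return the mean wall-clock time
    in seconds. A minimal sketch of the protocol described above; the
    real harness and task definitions live in the Brahe repository."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        task()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples)

# Example: time a trivial placeholder task
mean_s = benchmark(lambda: sum(range(1000)), iterations=100)
```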

Performance Overview

The table below summarizes average speedup relative to Java (OreKit) for each module. Values greater than 1.0× indicate Brahe is faster; values less than 1.0× indicate OreKit is faster.

| Module | Tasks | Avg Python Speedup | Avg Rust Speedup |
| --- | --- | --- | --- |
| Time | 5 | 44.1× | 190.1× |
| Coordinates | 5 | 2.76× | 108.8× |
| Attitude | 4 | 7.75× | 225.4× |
| Frames | 2 | 3.45× | 3.03× |
| Orbits | 2 | 6.19× | 67.1× |
| Propagation | 5 | 0.75× | 0.81× |
| Access | 1 | 4.54× | 4.53× |

Brahe's Python bindings are 2–44× faster than OreKit across most modules, and Rust native is 3–225× faster for pure computational tasks. Propagation benchmarks are the exception — OreKit's mature SGP4 and numerical integrator implementations are highly optimized for trajectory generation workloads.


Time

Five tasks covering epoch creation and time system conversions (UTC → TAI, TT, GPS, UT1).

| Task | Java | Python | Rust | Python Speedup | Rust Speedup |
| --- | --- | --- | --- | --- | --- |
| Epoch Creation | 438.5 µs | 30.5 µs | 14.0 µs | 14.4× | 31.4× |
| UTC To GPS | 40.0 µs | 3.3 µs | 193.7 ns | 12.1× | 206.5× |
| UTC To TAI | 55.2 µs | 3.3 µs | 192.1 ns | 16.8× | 287.4× |
| UTC To TT | 33.4 µs | 3.3 µs | 192.5 ns | 9.98× | 173.3× |
| UTC To UT1 | 2.77 ms | 16.6 µs | 11.0 µs | 167.4× | 251.9× |

Accuracy: TAI, TT, and GPS conversions show zero error — all three implementations use identical offset constants. UT1 shows a maximum absolute error of ~1.0 µs, attributable to different Earth Orientation Parameter (EOP) sources and interpolation methods between OreKit and Brahe.
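The zero-error TAI, TT, and GPS results follow from those conversions being fixed offsets. A minimal sketch, using seconds-since-epoch stand-ins rather than full epoch objects (a real library looks the leap-second count up from a table rather than hard-coding it):

```python
# Fixed offsets between time scales (seconds). The UTC->TAI leap-second
# count of 37 s has been valid since 2017-01-01 and is assumed here.
TAI_MINUS_UTC = 37.0      # leap seconds (table lookup in practice)
TT_MINUS_TAI = 32.184     # defined constant
GPS_MINUS_TAI = -19.0     # defined constant

def utc_to_tai(utc_s):
    return utc_s + TAI_MINUS_UTC

def utc_to_tt(utc_s):
    return utc_to_tai(utc_s) + TT_MINUS_TAI

def utc_to_gps(utc_s):
    return utc_to_tai(utc_s) + GPS_MINUS_TAI
```

Because every implementation adds the same constants, the only possible disagreement is floating-point rounding, which is why UT1 (driven by measured EOP data) is the lone conversion with nonzero error.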


Coordinates

Five tasks covering coordinate system transformations: geodetic/geocentric to/from ECEF, and ECEF to azimuth-elevation.

| Task | Java | Python | Rust | Python Speedup | Rust Speedup |
| --- | --- | --- | --- | --- | --- |
| ECEF To AzEl | 373.8 µs | 127.2 µs | 11.2 µs | 2.94× | 33.5× |
| ECEF To Geocentric | 137.3 µs | 52.3 µs | 1.0 µs | 2.62× | 137.3× |
| ECEF To Geodetic | 140.2 µs | 59.3 µs | 5.4 µs | 2.37× | 25.8× |
| Geocentric To ECEF | 172.6 µs | 51.7 µs | 812.5 ns | 3.34× | 212.4× |
| Geodetic To ECEF | 132.9 µs | 52.5 µs | 985.4 ns | 2.53× | 134.9× |

Accuracy: All coordinate transformations agree to within 2 nm. Maximum absolute errors are on the order of \(10^{-9}\) m, reflecting only floating-point representation differences.

| Task | Comparison | Max Abs Error | RMS Error |
| --- | --- | --- | --- |
| ECEF To AzEl | Java vs Python | 0.70 nm | 0.13 nm |
| ECEF To AzEl | Java vs Rust | 0.70 nm | 0.14 nm |
| ECEF To Geocentric | Java vs Python | 0.00 nm | 0.00 nm |
| ECEF To Geocentric | Java vs Rust | 0.93 nm | 0.13 nm |
| ECEF To Geodetic | Java vs Python | 1.98 nm | 0.50 nm |
| ECEF To Geodetic | Java vs Rust | 1.98 nm | 0.53 nm |
| Geocentric To ECEF | Java vs Python | 0.93 nm | 0.20 nm |
| Geocentric To ECEF | Java vs Rust | 1.51 nm | 0.31 nm |
| Geodetic To ECEF | Java vs Python | 0.93 nm | 0.09 nm |
| Geodetic To ECEF | Java vs Rust | 1.86 nm | 0.27 nm |
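For reference, the geodetic-to-ECEF direction follows the standard closed-form WGS84 relations. The sketch below is a textbook implementation of the transform being benchmarked, not Brahe's actual code:

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0               # semi-major axis [m]
F = 1.0 / 298.257223563     # flattening
E2 = F * (2.0 - F)          # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Geodetic (latitude, longitude, altitude) -> ECEF (x, y, z) [m]."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Prime-vertical radius of curvature at this latitude
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
    return x, y, z

# On the equator at the prime meridian, x equals the semi-major axis
x, y, z = geodetic_to_ecef(0.0, 0.0, 0.0)
```

Because every term is a handful of trigonometric operations, implementations differ only in rounding, which is consistent with the nanometer-level errors in the table above.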

Attitude

Four tasks covering conversions between quaternions, rotation matrices, and Euler angles.

| Task | Java | Python | Rust | Python Speedup | Rust Speedup |
| --- | --- | --- | --- | --- | --- |
| Euler Angle To Quaternion | 216.9 µs | 22.4 µs | 1.7 µs | 9.67× | 127.7× |
| Quaternion To Euler Angle | 266.5 µs | 15.6 µs | 2.0 µs | 17.1× | 130.5× |
| Quaternion To Rotation Matrix | 183.9 µs | 86.1 µs | 508.3 ns | 2.14× | 361.7× |
| Rotation Matrix To Quaternion | 361.9 µs | 171.5 µs | 1.3 µs | 2.11× | 281.6× |

Accuracy: Both quaternion ↔ rotation matrix and quaternion ↔ Euler angle conversions agree to machine epsilon (\(< 10^{-15}\)).

The euler_angle_to_quaternion task shows a large apparent max absolute error of 0.67 in the raw comparison data. This is a quaternion sign convention artifact, not a real error — quaternions \(q\) and \(-q\) represent the same rotation, so implementations may validly return either sign. The actual rotations are equivalent.
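The sign ambiguity is easy to verify directly: negating all four quaternion components leaves every pairwise product of components, and therefore the rotation matrix, unchanged. A minimal check (not Brahe's implementation):

```python
import math

def quat_to_matrix(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

# 90-degree rotation about the z-axis
q = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
q_neg = tuple(-c for c in q)

# Element-wise comparison of q and -q reports a large "error", yet the
# rotation matrices are identical because every matrix entry is built
# from products of two quaternion components.
same = all(
    abs(a - b) < 1e-12
    for ra, rb in zip(quat_to_matrix(q), quat_to_matrix(q_neg))
    for a, b in zip(ra, rb)
)
```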


Frames

Two tasks covering full 6-DOF state vector transformations between ECEF and ECI reference frames using the IAU 2006/2000A precession-nutation model.

| Task | Java | Python | Rust | Python Speedup | Rust Speedup |
| --- | --- | --- | --- | --- | --- |
| State ECEF To ECI | 8.36 ms | 2.47 ms | 3.49 ms | 3.38× | 2.39× |
| State ECI To ECEF | 8.12 ms | 2.32 ms | 2.22 ms | 3.51× | 3.67× |

Accuracy: The raw comparison data shows large apparent errors (\(\sim 2 \times 10^5\) m) between OreKit and Brahe frame transformations. This is a known comparison methodology issue — the benchmark compares full 6-element state vectors [x, y, z, vx, vy, vz] element-wise, where position components (meters) and velocity components (m/s) have vastly different magnitudes. The large max absolute error reflects a small fractional difference in position that appears large in absolute terms due to the ~7000 km orbital radius.

Both implementations use the IAU 2006/2000A model but differ in EOP interpolation and nutation series truncation. Investigation of position-only and velocity-only errors separately is planned to quantify the true agreement.
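Splitting the comparison by units avoids the mixed-magnitude issue. The sketch below, using hypothetical sample states, reports position and velocity errors separately:

```python
import math

def split_state_errors(state_a, state_b):
    """Compare two 6-element states [x, y, z, vx, vy, vz] by reporting
    position and velocity errors separately instead of one element-wise
    maximum over mixed units."""
    dp = [a - b for a, b in zip(state_a[:3], state_b[:3])]
    dv = [a - b for a, b in zip(state_a[3:], state_b[3:])]
    pos_err = math.sqrt(sum(d * d for d in dp))  # metres
    vel_err = math.sqrt(sum(d * d for d in dv))  # metres/second
    return pos_err, vel_err

# Hypothetical LEO states differing slightly in position and velocity
a = [7000e3, 0.0, 0.0, 0.0, 7.5e3, 0.0]
b = [7000e3 + 0.5, 0.0, 0.0, 0.0, 7.5e3 + 1e-4, 0.0]
pos_err, vel_err = split_state_errors(a, b)
```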


Orbits

Two tasks covering conversions between Keplerian orbital elements and Cartesian state vectors.

| Task | Java | Python | Rust | Python Speedup | Rust Speedup |
| --- | --- | --- | --- | --- | --- |
| Cartesian To Keplerian | 433.2 µs | 66.1 µs | 5.4 µs | 6.56× | 80.7× |
| Keplerian To Cartesian | 406.6 µs | 69.9 µs | 7.6 µs | 5.82× | 53.5× |

Accuracy: Both conversion directions agree to within ~25 nm. Maximum absolute errors are on the order of \(10^{-8}\) m, with relative errors below \(10^{-11}\).

| Task | Comparison | Max Abs Error | RMS Error |
| --- | --- | --- | --- |
| Cartesian To Keplerian | Java vs Python | 22.35 nm | 2.35 nm |
| Cartesian To Keplerian | Java vs Rust | 22.35 nm | 2.75 nm |
| Keplerian To Cartesian | Java vs Python | 14.90 nm | 2.38 nm |
| Keplerian To Cartesian | Java vs Rust | 24.21 nm | 4.05 nm |
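For reference, the element-to-Cartesian direction follows the textbook perifocal-frame construction. The sketch below illustrates the transform being benchmarked; it is not Brahe's or OreKit's implementation:

```python
import math

MU = 3.986004418e14  # Earth gravitational parameter [m^3/s^2]

def keplerian_to_cartesian(a, e, i, raan, argp, nu):
    """Osculating elements (angles in radians) -> inertial position and
    velocity via the perifocal (PQW) frame."""
    p = a * (1.0 - e * e)                    # semi-latus rectum
    r = p / (1.0 + e * math.cos(nu))         # orbital radius
    r_pf = (r * math.cos(nu), r * math.sin(nu), 0.0)
    k = math.sqrt(MU / p)
    v_pf = (-k * math.sin(nu), k * (e + math.cos(nu)), 0.0)

    def rot(v):
        # PQW -> inertial: Rz(raan) * Rx(i) * Rz(argp)
        cO, sO = math.cos(raan), math.sin(raan)
        ci, si = math.cos(i), math.sin(i)
        cw, sw = math.cos(argp), math.sin(argp)
        px, py, pz = v
        x1, y1, z1 = cw * px - sw * py, sw * px + cw * py, pz      # Rz(argp)
        x2, y2, z2 = x1, ci * y1 - si * z1, si * y1 + ci * z1      # Rx(i)
        return (cO * x2 - sO * y2, sO * x2 + cO * y2, z2)          # Rz(raan)

    return rot(r_pf), rot(v_pf)

# Circular equatorial orbit at 7000 km radius: position lies on the
# x-axis and the speed is the circular velocity sqrt(MU/a)
r_vec, v_vec = keplerian_to_cartesian(7000e3, 0.0, 0.0, 0.0, 0.0, 0.0)
```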

Propagation

Five tasks covering Keplerian (two-body analytical), numerical (RK4/RK78 two-body), and SGP4/SDP4 propagation.

| Task | Java | Python | Rust | Python Speedup | Rust Speedup |
| --- | --- | --- | --- | --- | --- |
| Keplerian Single | 522.3 µs | 937.4 µs | 902.1 µs | 0.56× | 0.58× |
| Keplerian Trajectory | 448.8 µs | 322.3 µs | 288.4 µs | 1.39× | 1.56× |
| Numerical Two-Body | 3.55 ms | 2.48 ms | 2.35 ms | 1.43× | 1.51× |
| SGP4 Single | 649.3 µs | 2.13 ms | 1.92 ms | 0.31× | 0.34× |
| SGP4 Trajectory | 3.35 ms | 62.12 ms | 66.16 ms | 0.05× | 0.05× |

Propagation is the one area where OreKit outperforms Brahe, particularly for SGP4 trajectory generation. OreKit's SGP4 implementation benefits from decades of optimization and a mature numerical integration framework. The SGP4 trajectory benchmark shows OreKit ~20× faster — this reflects architectural differences in how each library handles batch propagation, not fundamental algorithmic limitations.

Accuracy:

| Task | Max Abs Error | RMS Error | Notes |
| --- | --- | --- | --- |
| Keplerian Single | 103 nm | 17 nm | Sub-millimeter agreement |
| Keplerian Trajectory | 14 nm | 2.8 nm | Sub-millimeter agreement |
| Numerical Two-Body | 0.066 m | 0.018 m | Different integrators (RK4 vs RK78) |
| SGP4 Single | 49.3 m | 12.7 m | Different SGP4 implementations |
| SGP4 Trajectory | 58.5 m | 13.4 m | Different SGP4 implementations |

Keplerian propagation shows nanometer-level agreement. Numerical propagation diverges by ~7 cm due to different integrator implementations and step-size strategies. SGP4 divergence of ~50 m is expected and well-documented across different SGP4 implementations — the original Fortran, OreKit Java, and Brahe Rust implementations all make slightly different numerical choices in the deep-space and near-Earth branch logic.
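Analytical Keplerian propagation reduces to advancing the mean anomaly at the two-body mean motion and solving Kepler's equation. The following sketch shows that step (not Brahe's implementation):

```python
import math

MU = 3.986004418e14  # Earth gravitational parameter [m^3/s^2]

def propagate_mean_anomaly(a, m0, dt):
    """Advance mean anomaly by the two-body mean motion: M(t) = M0 + n*dt."""
    n = math.sqrt(MU / a**3)
    return (m0 + n * dt) % (2.0 * math.pi)

def solve_kepler(m, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric
    anomaly E by Newton iteration."""
    E = m if e < 0.8 else math.pi  # standard starting guess
    for _ in range(50):
        d = (E - e * math.sin(E) - m) / (1.0 - e * math.cos(E))
        E -= d
        if abs(d) < tol:
            break
    return E

# One quarter period of a 7000 km circular orbit sweeps 90 degrees
a = 7000e3
period = 2.0 * math.pi * math.sqrt(a**3 / MU)
m = propagate_mean_anomaly(a, 0.0, period / 4.0)
E = solve_kepler(m, 0.001)
```

Because this path is pure closed-form arithmetic, implementations agree to the nanometer level; the numerical and SGP4 paths diverge because they make different algorithmic choices, not because of rounding alone.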


Access (Comparative)

One task: computing all satellite-to-ground-station access windows over a 48-hour period using SGP4 propagation.

| Task | Java | Python | Rust | Python Speedup | Rust Speedup |
| --- | --- | --- | --- | --- | --- |
| SGP4 Access | 679.42 ms | 149.64 ms | 150.04 ms | 4.54× | 4.53× |

Accuracy: Access window start/end times differ by a maximum of ~69 seconds between OreKit and Brahe. This is consistent with the ~50 m SGP4 position divergence: window boundaries occur where the satellite crosses the elevation mask at shallow geometry, and the divergence accumulates along-track over the 48-hour span, so small position differences shift the predicted crossing times by tens of seconds. Both implementations agree on the total number of access windows.
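Window boundaries are found where the elevation time series crosses the station's mask angle. Below is a coarse sampled-crossing sketch, with linear interpolation standing in for the root-finding a production implementation would use:

```python
def access_windows(times, elevations, min_elev_deg=10.0):
    """Find (rise, set) windows where elevation exceeds the mask angle.

    Scans consecutive samples for threshold crossings and linearly
    interpolates the crossing time. Illustrative only."""
    windows, rise = [], None
    for (t0, e0), (t1, e1) in zip(zip(times, elevations),
                                  zip(times[1:], elevations[1:])):
        if e0 < min_elev_deg <= e1:  # rising crossing
            rise = t0 + (t1 - t0) * (min_elev_deg - e0) / (e1 - e0)
        elif e1 < min_elev_deg <= e0 and rise is not None:  # setting crossing
            fall = t0 + (t1 - t0) * (e0 - min_elev_deg) / (e0 - e1)
            windows.append((rise, fall))
            rise = None
    return windows

# Synthetic pass: elevation ramps up through a 10-degree mask and back down
times = [0, 60, 120, 180, 240]          # seconds
elevs = [0.0, 5.0, 30.0, 5.0, 0.0]      # degrees
wins = access_windows(times, elevs)
```

With this structure it is clear how a ~50 m propagation difference surfaces as a timing difference: the elevation curve shifts slightly, so the interpolated crossings move while the window count stays the same.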


Access Computation (Brahe vs Skyfield)

In addition to the OreKit comparison above, Brahe is also benchmarked against Skyfield, a popular Python astronomy library, for access computation. This benchmark focuses on Brahe's serial vs parallel execution modes and Python bindings vs native Rust performance.

The benchmark randomly samples 100 ground station locations and computes all satellite accesses over a 48-hour window. Access start and end times agree to within one second between Brahe and Skyfield.

| Implementation | Avg Time | vs Skyfield | vs Brahe-Py-Serial |
| --- | --- | --- | --- |
| Brahe-Rust (parallel) | 1.37 ms | 3.2× faster | 23.0× faster |
| Brahe-Python (parallel) | 2.40 ms | 1.8× faster | 13.1× faster |
| Brahe-Rust (serial) | 2.79 ms | 1.6× faster | 11.2× faster |
| Skyfield | 4.44 ms | baseline | 7.1× faster |
| Brahe-Python (serial) | 31.41 ms | 7.1× slower | baseline |

The parallel implementations leverage multiple CPU cores to handle multiple ground stations simultaneously. Skyfield's performance is impressive: it is only marginally slower than Brahe's serial Rust implementation despite being implemented in Python (with NumPy vectorization).


Reproducing These Results

Comparative benchmarks (Java/Python/Rust):

```bash
# One-time setup: build Java/Rust implementations, download OreKit data
just bench-compare-setup

# Run benchmarks, generate figures + CSV tables, and stage artifacts for commit
just bench-compare-publish --iterations 100 --seed 42

# Review staged changes and commit
git status
git commit -m "Update benchmark data"
```

Individual steps can also be run separately:

```bash
# Run benchmarks only (results saved to benchmarks/comparative/results/)
just bench-compare --iterations 100 --seed 42

# Regenerate figures and tables from existing results (without re-running benchmarks)
python plots/fig_comparative_benchmarks.py
```

Access benchmark (Brahe vs Skyfield):

```bash
uv run scripts/benchmark_access_three_way.py --n-locations 100 --seed 42 --output chart.html --plot-style scatter --csv accesses.csv
```