Development Guide¶
Development Workflow¶
For all development we recommend using uv to manage your environment. The guidelines for contributing, developing, and extending brahe assume you are using uv.
Setting up your environment¶
This section describes how to set up the development environment, including installing the necessary development dependencies.
First, you need to install Rust from rustup.rs.
After this, you can set up your Python environment with:
Finally, you can install the pre-commit hooks with:
Testing¶
The package includes Rust tests, Python tests, and documentation example tests.
Development Workflow: Implementing a New Feature¶
When adding new functionality to Brahe, follow this sequence:
1. Rust Implementation
   - Implement the functionality in the appropriate module under `src/`
   - Use SI base units (meters, seconds) in all public APIs
   - Follow existing patterns and naming conventions
2. Rust Tests
   - Write comprehensive unit tests in the same file (in a `#[cfg(test)] mod tests {}` module)
   - Test edge cases and typical use cases
   - Run: `cargo test`
   - Ensure all tests pass before proceeding
3. Python Bindings
   - Create 1:1 Python bindings in `src/pymodule/`
   - Use the same function and parameter names as in Rust
   - Add complete Google-style docstrings with Args, Returns, and Examples
   - Export new classes in `src/pymodule/mod.rs`
   - Export them in the Python package (`brahe/*.py` files)
   - Reinstall: `uv pip install -e .`
4. Python Tests
   - Write Python tests in `tests/` that mirror the Rust tests
   - Follow the same test structure and assertions
   - Run: `uv run pytest tests/ -v`
5. Documentation Examples
   - Create standalone example files in `examples/<module>/`
   - Create both Python and Rust versions (see templates below)
   - Test: `just test-examples`
6. Documentation
   - Update or create documentation in `docs/`
   - Reference examples using snippet includes (see template below)
   - Build locally: `uv run mkdocs serve`
7. Final Checks
Rust Standards and Guidelines¶
Rust Testing Conventions¶
New functions implemented in Rust are expected to have both unit tests and documentation tests. Unit tests should cover all edge cases and typical use cases for the function; documentation tests should provide examples of how to use it.
Unit tests should be placed in the same file as the function they test, in a module named `tests`. Test names should follow the general convention `test_<struct>_<trait>_<method>_<case>` or `test_<function>_<case>`.
Rust Docstring Template¶
New functions implemented in Rust are expected to use the following docstring template, which standardizes information on functions so users can more easily navigate and learn the library.
Python Standards and Guidelines¶
Python Testing Conventions¶
Python tests should be placed in the tests directory. The test structure and names should mirror the structure of the brahe package. For example, tests for brahe.orbits.keplerian should be placed in tests/orbits/test_keplerian.py.
All Python tests should be exact mirrors of the Rust tests, ensuring that both implementations are equivalent and consistent. There are a few exceptions to this rule, such as tests that check for Python-specific functionality or behavior, or capabilities that are not possible to reproduce in Python due to language limitations.
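The path-mirroring convention above can be sketched as a small helper (the function name is hypothetical, not part of Brahe):

```python
from pathlib import Path


def mirrored_test_path(module: str) -> Path:
    """Map a brahe module path to its mirrored pytest file.

    e.g. "brahe.orbits.keplerian" -> tests/orbits/test_keplerian.py
    """
    parts = module.split(".")
    if parts[0] != "brahe" or len(parts) < 2:
        raise ValueError(f"expected a brahe module path, got {module!r}")
    # Intermediate packages become directories; the leaf gets a test_ prefix.
    *packages, leaf = parts[1:]
    return Path("tests", *packages, f"test_{leaf}.py")
```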
Documentation Examples¶
Documentation examples are standalone executable files that demonstrate library functionality. Every example must exist in both Python and Rust versions to ensure API parity.
Example File Structure¶
Examples are organized by module in examples/:
Naming Convention¶
Example files should follow this pattern:
Examples:

- `time_epoch_creation.py` / `time_epoch_creation.rs`
- `orbits_keplerian_conversion.py` / `orbits_keplerian_conversion.rs`
- `coordinates_geodetic_transform.py` / `coordinates_geodetic_transform.rs`
Python Example Template¶
See examples/TEMPLATE.py:
Note: The # /// script header makes this a uv script, allowing it to be run standalone with uv run example.py.
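As a sketch, such a header looks like the following (the dependency list is illustrative). The `# /// script` block is PEP 723 inline script metadata, which `uv run` reads to build an ephemeral environment before executing the file:

```python
# /// script
# requires-python = ">=3.9"
# dependencies = ["brahe"]
# ///
# The comment block above is PEP 723 inline script metadata: `uv run example.py`
# reads it, resolves the listed dependencies into an ephemeral environment,
# and then executes the file as ordinary Python.

message = "running as a standalone uv script"
print(message)
```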
Rust Example Template¶
See examples/TEMPLATE.rs:
Testing Examples¶
Test examples locally:
The build system will:

1. Execute all `.rs` files via `rust-script`
2. Execute all `.py` files via `uv run python`
3. Verify every `.rs` file has a matching `.py` file (and vice versa)
4. Report pass/fail for each example
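The parity check in step 3 can be sketched as follows (the function name is hypothetical; the real check lives in the project's build tooling):

```python
from pathlib import Path


def check_example_pairs(examples_dir: Path) -> list[str]:
    """Report example files that exist in one language but not the other."""
    stems_py = {p.relative_to(examples_dir).with_suffix("") for p in examples_dir.rglob("*.py")}
    stems_rs = {p.relative_to(examples_dir).with_suffix("") for p in examples_dir.rglob("*.rs")}
    # A healthy examples tree has identical stem sets for both languages.
    problems = [f"{s}.rs has no matching .py" for s in sorted(map(str, stems_rs - stems_py))]
    problems += [f"{s}.py has no matching .rs" for s in sorted(map(str, stems_py - stems_rs))]
    return problems
```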
Including Examples in Documentation¶
Use the pymdownx.snippets directive to include examples in markdown files. See the snippets plugin documentation for additional details on usage.
This will:

- Create a tabbed interface with Python shown first
- Include the actual file contents (always in sync)
- Automatically update when examples change
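As an illustration (the file paths are assumed, following the naming convention above), a tabbed include using pymdownx.snippets might look like:

````markdown
=== "Python"

    ```python
    --8<-- "examples/time/time_epoch_creation.py"
    ```

=== "Rust"

    ```rust
    --8<-- "examples/time/time_epoch_creation.rs"
    ```
````

The `--8<--` marker is the snippets include directive; the `===` lines come from the pymdownx tabbed extension.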
Documentation Plots¶
Interactive plots are generated from Python scripts in plots/ and embedded in documentation.
Plot Naming Convention¶
Plot files should follow this pattern:
Examples:

- `fig_time_system_offsets.py`
- `fig_orbital_period.py`
- `fig_anomaly_conversions.py`
Plot Template¶
See plots/TEMPLATE_plot.py:
Note: The # /// script header allows standalone execution with uv run fig_plot.py.
Generating Plots¶
Generate all plots:
Plots are written to docs/figures/ as partial HTML files for embedding.
Including Plots in Documentation¶
This will:

- Embed the interactive Plotly plot
- Add a collapsible section showing the source code
Pull Request Changelog¶
Automatic Changelog Generation¶
When you create a pull request, you must fill in the changelog section in the PR description. The changelog uses Keep a Changelog format with four categories:
- Added - New features
- Changed - Changes to existing functionality
- Fixed - Bug fixes
- Removed - Removed features or functionality
How It Works¶
1. Fill in the PR description: When opening a PR, add entries under the appropriate changelog section(s)
2. Validation: A GitHub Action checks that at least one changelog section has entries
   - The PR will fail validation if all sections are empty
   - You'll receive a comment with instructions if validation fails
3. Automatic fragment creation: When the PR is merged:
   - A GitHub Action parses your changelog entries
   - Creates fragment files in the `news/` directory (e.g., `123.added.md`, `123.fixed.md`)
   - Commits the fragments to the main branch
4. Release compilation: During release:
   - Towncrier collects all fragments from `news/`
   - Generates formatted release notes
   - Updates `CHANGELOG.md` with the new version section
   - Deletes the fragment files
Example PR Changelog¶
This will automatically create:

- `news/123.added.md` with both Added items
- `news/123.fixed.md` with both Fixed items
Manual Fragment Creation (Rare)¶
In rare cases where you need to create fragments manually, see news/README.md for instructions. Fragment files use the format <PR#>.<type>.md where type is one of: added, changed, fixed, removed.
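That filename format can be sketched as a validator (the helper name is hypothetical):

```python
import re

# Fragment filenames have the form <PR#>.<type>.md, where type is one of
# added, changed, fixed, or removed.
FRAGMENT_RE = re.compile(r"^(?P<pr>\d+)\.(?P<type>added|changed|fixed|removed)\.md$")


def parse_fragment_name(name: str):
    """Return (pr_number, change_type) for a valid fragment filename."""
    m = FRAGMENT_RE.match(name)
    if m is None:
        raise ValueError(f"invalid fragment filename: {name!r}")
    return int(m.group("pr")), m.group("type")
```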
Previewing the Changelog¶
To see what changelog fragments are currently queued:
To see what the next release changelog would look like without making changes:
This shows the formatted output without modifying CHANGELOG.md or deleting fragments.
Releases Without Changelog Fragments¶
If you create a release when there are no changelog fragments in news/:
- The release workflow will succeed
- A minimal release will be created with "No significant changes"
- This is useful for releases that only contain dependency updates or internal changes
Release Process¶
Initiating a Release¶
Before creating a release:
- Update the version in `Cargo.toml`:
- Run quality checks:
- Push the version tag:
Automated Workflow¶
Note: Changelog fragments are automatically created from PR descriptions. You don't need to manually create fragment files.
Once the tag is pushed, GitHub Actions automatically:
- Validates that the tag version matches `Cargo.toml`
- Runs all tests (Rust, Python, examples)
- Generates release notes with towncrier (commits `CHANGELOG.md`)
- Builds documentation and deploys it to GitHub Pages
- Builds Python wheels and a source distribution
- Publishes to PyPI and crates.io
- Creates a draft GitHub Release with artifacts and release notes
- Updates the "latest" tag and release
Completing the Release¶
After automation completes:
- Review the draft release at https://github.com/duncaneddy/brahe/releases
- Edit the release notes (optional):
  - Add highlights or breaking changes
  - Include migration notes if needed
- Publish the release by clicking "Publish release"
Verification¶
After publishing, verify:
- PyPI: https://pypi.org/project/brahe/
- Crates.io: https://crates.io/crates/brahe
- Docs: https://duncaneddy.github.io/brahe/latest/
- GitHub: https://github.com/duncaneddy/brahe/releases
Benchmarks¶
Brahe has two benchmark layers: Criterion micro-benchmarks for internal Rust performance regression testing, and a comparative benchmark framework that measures both runtime performance and numerical accuracy across Python (Brahe), Rust (Brahe), and Java (OreKit).
Criterion Micro-Benchmarks¶
These are standard Rust benchmarks using the Criterion harness, located in benchmarks/:
Criterion generates HTML reports in target/criterion/ with statistical analysis, regression detection, and timing distributions.
Comparative Benchmark Framework¶
The comparative framework lives in benchmarks/comparative/ and compares equivalent implementations across languages using a standardized JSON stdin/stdout protocol. Each language implementation is a standalone process that receives task parameters as JSON and returns timing data and numerical results.
Setup¶
Before running comparative benchmarks, install all dependencies with a single command:
This builds the Rust benchmark binary, builds the Java/Gradle project (generating a Gradle wrapper if needed), and downloads OreKit data to ~/.orekit/orekit-data.
Prerequisites:
- Rust: Install from rustup.rs (used for the Rust benchmark binary)
- JDK 17+: Install via `brew install openjdk` (macOS) or your system package manager (used for Java/OreKit benchmarks)
- Gradle: Install via `brew install gradle` (macOS) or `sdk install gradle` via SDKMAN (Linux). Only needed if the Gradle wrapper doesn't exist yet; after first setup, `gradlew` is committed and Gradle is no longer required.
You can override the OreKit data location with the OREKIT_DATA environment variable.
Running Benchmarks¶
Output¶
Each run prints two Rich tables to the console:
- Performance Comparison — mean, median, std, min, max per task per language, with speedup ratios relative to the OreKit (Java) baseline.
- Numerical Accuracy (vs OreKit baseline) — max absolute error, max relative error, and RMS error for each implementation compared against OreKit.
Results are saved as JSON to benchmarks/comparative/results/ (gitignored). Plots are generated as themed Plotly HTML to docs/figures/.
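The accuracy metrics in the second table can be computed along these lines (a sketch, not the framework's actual code):

```python
import math


def accuracy_metrics(values, baseline):
    """Max absolute, max relative, and RMS error of values vs a baseline."""
    abs_errs = [abs(v - b) for v, b in zip(values, baseline)]
    # Relative error is undefined where the baseline is exactly zero.
    rel_errs = [e / abs(b) for e, b in zip(abs_errs, baseline) if b != 0.0]
    return {
        "max_abs": max(abs_errs),
        "max_rel": max(rel_errs) if rel_errs else 0.0,
        "rms": math.sqrt(sum(e * e for e in abs_errs) / len(abs_errs)),
    }
```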
Architecture¶
The orchestrator dispatches each task to each language. Python implementations run in-process. Rust and Java implementations are invoked as subprocesses with JSON piped to stdin and results read from stdout.
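A minimal sketch of that dispatch pattern, using a stand-in Python worker in place of the real Rust/Java binaries (field names are illustrative, not Brahe's actual schema):

```python
import json
import subprocess
import sys

# Stand-in worker mimicking the protocol: read task parameters as JSON from
# stdin, write timing data plus numerical results as JSON to stdout.
WORKER = """
import json, sys, time
params = json.load(sys.stdin)
t0 = time.perf_counter()
results = [x * 2.0 for x in params["values"]]   # stand-in for the real task
elapsed = time.perf_counter() - t0
json.dump({"timings": [elapsed], "results": results}, sys.stdout)
"""


def dispatch(params: dict) -> dict:
    """Invoke a language implementation as a subprocess over the JSON protocol."""
    proc = subprocess.run(
        [sys.executable, "-c", WORKER],
        input=json.dumps(params).encode(),
        capture_output=True,
        check=True,
    )
    return json.loads(proc.stdout)
```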
Adding a New Benchmark Task¶
To add a new benchmark task (e.g., a frame transformation benchmark):
1. Define the task specification in benchmarks/comparative/tasks/:
2. Register the task in benchmarks/comparative/tasks/__init__.py:
3. Add the Python implementation in benchmarks/comparative/implementations/python/:
Register the function in implementations/python/__init__.py by adding it to _DISPATCH_TABLE.
4. Add the Rust implementation in benchmarks/comparative/implementations/rust/src/:
Create the module file (e.g., frames.rs) with functions that deserialize JSON params, run the benchmark loop with std::time::Instant, and return (Vec<f64>, serde_json::Value). Add the module and dispatch arm in main.rs.
5. (Optional) Add the Java/OreKit implementation following the same pattern in the Gradle project.
Key Design Decisions¶
- OreKit as baseline: Java/OreKit is the reference implementation for both performance speedup ratios and numerical accuracy comparisons. OreKit runs first for each task, and all other implementations are compared against it.
- Deterministic parameters: `generate_params(seed)` ensures reproducible benchmarks across runs. Always use the seed to initialize your RNG.
- JSON protocol: Language implementations are decoupled from the orchestrator. Any language that can read JSON from stdin and write JSON to stdout can participate.
- First-iteration results: Only the first iteration's numerical results are stored for accuracy comparison. All iterations contribute timing data.
- Angle normalization: Orbital element comparisons normalize angular differences modulo 360 degrees to handle different library conventions for angle ranges.
- EOP initialization: Benchmarks use `StaticEOPProvider.from_zero()` (zero EOP values) to avoid file I/O overhead and ensure reproducibility. This is sufficient for coordinate and orbital element conversions that don't depend on Earth orientation.
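Two of these decisions can be sketched concretely (the function bodies are illustrative, not the framework's actual implementations):

```python
import random


def generate_params(seed: int, n: int = 4) -> list[float]:
    """Deterministic parameters: the same seed always yields the same values."""
    rng = random.Random(seed)  # seed a local RNG; never rely on global state
    return [rng.uniform(0.0, 360.0) for _ in range(n)]


def angle_error_deg(a: float, b: float) -> float:
    """Smallest angular difference in degrees, normalized modulo 360."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)
```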