Jacobian matrices are fundamental to many advanced astrodynamics computations. This guide explains how to compute and use Jacobians in Brahe for both analytical and numerical approaches.
A Jacobian matrix describes how a function's outputs change with respect to changes in its inputs. For a vector function \(\mathbf{f}: \mathbb{R}^n \rightarrow \mathbb{R}^n\) that maps state \(\mathbf{x}\) to derivative \(\dot{\mathbf{x}}\):
\[\dot{\mathbf{x}} = \mathbf{f}(t, \mathbf{x})\]
The Jacobian matrix \(\mathbf{J}\) is:
\[J_{ij} = \frac{\partial f_i}{\partial x_j}\]
\(\mathbf{J}\) is a real-valued \(n \times n\) matrix (\(J \in \mathbb{R}^{n \times n}\)). In astrodynamics, this describes how the rate of change of each state component depends on every component of the state.
In astrodynamics, Jacobians are crucial for:

- **State Transition Matrices**: Describing how small changes in initial conditions affect future states
- **Orbit Determination**: Propagating covariance matrices and computing measurement sensitivities
- **Trajectory Optimization**: Computing gradients for optimization algorithms
- **Uncertainty Propagation**: Tracking how uncertainties (covariances) evolve over time
Brahe supports both analytical and numerical Jacobian computation. Analytical Jacobians, represented by the `AnalyticJacobian` class, require you to provide closed-form derivative expressions. Numerical Jacobians, provided by `NumericalJacobian`, use finite-difference methods to approximate the derivatives automatically from the dynamics function alone.
When you know the closed-form derivatives \(\frac{\partial f_i}{\partial x_j}\), analytical Jacobians provide the most accurate and efficient computation.
For a simple example, let's consider a 2D harmonic oscillator with state vector \(\mathbf{x} = \begin{bmatrix} x \\ v \end{bmatrix}\), where \(x\) is position and \(v\) is velocity. The dynamics are:

\[ \dot{x} = v, \qquad \dot{v} = -x \]

so the Jacobian is constant: \(\mathbf{J} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\).
```python
import brahe as bh
import numpy as np

# Define dynamics: Simple harmonic oscillator
# State: [position, velocity]
# Dynamics: dx/dt = v, dv/dt = -x
def dynamics(t, state):
    return np.array([state[1], -state[0]])

# Define analytical Jacobian
# J = [[ 0, 1],
#      [-1, 0]]
def jacobian_func(t, state):
    return np.array([[0.0, 1.0], [-1.0, 0.0]])

# Create analytical Jacobian provider
jacobian = bh.AnalyticJacobian(jacobian_func)

# Compute Jacobian at a specific state
t = 0.0
state = np.array([1.0, 0.0])
J = jacobian.compute(t, state)

print("Analytical Jacobian:")
print(J)
# Expected output:
# [[ 0.  1.]
#  [-1.  0.]]

# Verify it's time-invariant for this system
t2 = 10.0
state2 = np.array([0.5, 0.866])
J2 = jacobian.compute(t2, state2)

print("\nJacobian at different time and state:")
print(J2)

# Should be identical for linear system
print("\nJacobians are equal:", np.allclose(J, J2))
# Output: True
```
```rust
use brahe::math::jacobian::{DAnalyticJacobian, DJacobianProvider};
use nalgebra::{DMatrix, DVector};

fn main() {
    // Define analytical Jacobian function
    // For simple harmonic oscillator: J = [[0, 1], [-1, 0]]
    let jacobian_fn = |_t: f64, _state: &DVector<f64>, _params: Option<&DVector<f64>>| -> DMatrix<f64> {
        DMatrix::from_row_slice(2, 2, &[0.0, 1.0, -1.0, 0.0])
    };

    // Create analytical Jacobian provider
    let jacobian = DAnalyticJacobian::new(Box::new(jacobian_fn));

    // Compute Jacobian at a specific state
    let t = 0.0;
    let state = DVector::from_vec(vec![1.0, 0.0]);
    let jac = jacobian.compute(t, &state, None);

    println!("Analytical Jacobian:");
    println!("{}", jac);
    // Expected output:
    // [[ 0.  1.]
    //  [-1.  0.]]

    // Verify it's time-invariant for this system
    let t2 = 10.0;
    let state2 = DVector::from_vec(vec![0.5, 0.866]);
    let jac2 = jacobian.compute(t2, &state2, None);

    println!("\nJacobian at different time and state:");
    println!("{}", jac2);

    // Check if matrices are equal
    let are_equal = (jac - jac2).norm() < 1e-10;
    println!("\nJacobians are equal: {}", are_equal);
    // Output: true
}
```
Numerical Jacobians use finite differences to approximate derivatives automatically. This is essential when analytical derivatives are complex or unknown. Brahe supports three difference methods: forward, central, and backward differences.
The forward difference method approximates the derivative by perturbing the input state positively along each basis direction \(\mathbf{e}_j\) and measuring the change in output:

\[ J_{ij} \approx \frac{f_i(\mathbf{x} + h \cdot \mathbf{e}_j) - f_i(\mathbf{x})}{h} \]
Forward differences have first-order accuracy with an error on the order of \(O(h)\), where \(h\) is the perturbation size. This method requires \(n + 1\) function evaluations for an \(n\)-dimensional state vector.
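For intuition, here is the forward-difference scheme written out directly. This is a minimal plain-NumPy sketch of the formula above, not Brahe's implementation; the function name is illustrative only:

```python
import numpy as np

def forward_difference_jacobian(f, t, x, h=1e-6):
    """Approximate J_ij = df_i/dx_j using forward differences.

    Costs n + 1 evaluations of f: one at the base point,
    plus one per perturbed state component.
    """
    f0 = f(t, x)                      # base evaluation
    n = len(x)
    J = np.zeros((len(f0), n))
    for j in range(n):
        x_pert = x.copy()
        x_pert[j] += h                # perturb component j by +h
        J[:, j] = (f(t, x_pert) - f0) / h
    return J

# Harmonic oscillator from the example above
dynamics = lambda t, s: np.array([s[1], -s[0]])
print(forward_difference_jacobian(dynamics, 0.0, np.array([1.0, 0.0])))
# ~[[ 0.  1.]
#   [-1.  0.]]
```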
The central difference method improves accuracy by perturbing the input state both positively and negatively along each basis direction \(\mathbf{e}_j\):
\[ J_{ij} \approx \frac{f_i(\mathbf{x} + h \cdot \mathbf{e}_j) - f_i(\mathbf{x} - h \cdot \mathbf{e}_j)}{2h} \]
Central differences have second-order accuracy with an error on the order of \(O(h^2)\). This method requires \(2n\) function evaluations for an \(n\)-dimensional state vector.
Recommendation
Use central differences unless computational cost is prohibitive. The ~2x increase in function evaluations is usually worth the improved accuracy.
Backward differences mirror the forward scheme, perturbing negatively along each direction:

\[ J_{ij} \approx \frac{f_i(\mathbf{x}) - f_i(\mathbf{x} - h \cdot \mathbf{e}_j)}{h} \]

Like forward differences, they have first-order accuracy with an error on the order of \(O(h)\) and require \(n + 1\) function evaluations. They are less commonly used but implemented for completeness.
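The accuracy difference between the schemes is easy to see on a nonlinear test function. The following plain-NumPy sketch (not Brahe code) compares all three against the exact derivative of \(\sin(x)\):

```python
import numpy as np

def diff_errors(x=1.0, h=1e-5):
    f, exact = np.sin, np.cos(x)
    forward  = (f(x + h) - f(x)) / h
    backward = (f(x) - f(x - h)) / h
    central  = (f(x + h) - f(x - h)) / (2 * h)
    for name, approx in [("forward", forward), ("backward", backward), ("central", central)]:
        print(f"{name:>8}: error = {abs(approx - exact):.2e}")

diff_errors()
# forward / backward errors ~ 1e-6  (O(h))
# central error            ~ 1e-11  (O(h^2))
```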
```python
import brahe as bh
import numpy as np

# Define dynamics: Simple harmonic oscillator
def dynamics(t, state):
    return np.array([state[1], -state[0]])

# Create numerical Jacobian with default settings (central differences)
jacobian = bh.NumericalJacobian(dynamics)

# Compute Jacobian at a specific state
t = 0.0
state = np.array([1.0, 0.0])
J_numerical = jacobian.compute(t, state)

print("Numerical Jacobian (central differences):")
print(J_numerical)
# Expected output (should be very close to analytical):
# [[ 0.  1.]
#  [-1.  0.]]

# Compare with analytical solution
J_analytical = np.array([[0.0, 1.0], [-1.0, 0.0]])
error = np.linalg.norm(J_numerical - J_analytical)
print(f"\nError vs analytical: {error:.2e}")
# Output: ~1e-8 (near machine precision)

# Verify accuracy at different state
state2 = np.array([0.5, 0.866])
J_numerical2 = jacobian.compute(t, state2)
print("\nNumerical Jacobian at different state:")
print(J_numerical2)
error2 = np.linalg.norm(J_numerical2 - J_analytical)
print(f"Error vs analytical: {error2:.2e}")
# Output: ~1e-8 (consistent accuracy)
```
```rust
use brahe::math::jacobian::{DNumericalJacobian, DJacobianProvider};
use nalgebra::{DMatrix, DVector};

fn main() {
    // Define dynamics function
    let dynamics = |_t: f64, state: &DVector<f64>, _params: Option<&DVector<f64>>| -> DVector<f64> {
        DVector::from_vec(vec![state[1], -state[0]])
    };

    // Create numerical Jacobian with default settings (central differences)
    let jacobian = DNumericalJacobian::central(Box::new(dynamics));

    // Compute Jacobian at a specific state
    let t = 0.0;
    let state = DVector::from_vec(vec![1.0, 0.0]);
    let jac_numerical = jacobian.compute(t, &state, None);

    println!("Numerical Jacobian (central differences):");
    println!("{}", jac_numerical);
    // Expected output (should be very close to analytical):
    // [[ 0.  1.]
    //  [-1.  0.]]

    // Compare with analytical solution
    let jac_analytical = DMatrix::from_row_slice(2, 2, &[0.0, 1.0, -1.0, 0.0]);
    let error = (jac_numerical.clone() - jac_analytical.clone()).norm();
    println!("\nError vs analytical: {:.2e}", error);
    // Output: ~1e-8 (near machine precision)

    // Verify accuracy at different state
    let state2 = DVector::from_vec(vec![0.5, 0.866]);
    let jac_numerical2 = jacobian.compute(t, &state2, None);
    println!("\nNumerical Jacobian at different state:");
    println!("{}", jac_numerical2);
    let error2 = (jac_numerical2 - jac_analytical).norm();
    println!("Error vs analytical: {:.2e}", error2);
    // Output: ~1e-8 (consistent accuracy)
}
```
The choice of perturbation size \(h\) significantly affects numerical Jacobian accuracy. If \(h\) is too large, the finite difference is a poor approximation of the local derivative (truncation error); if it is too small, floating-point roundoff dominates. Brahe provides several perturbation strategies for choosing \(h\), but it is ultimately up to you to select the best strategy for your application.
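This tradeoff is easy to demonstrate by sweeping \(h\) over several orders of magnitude. A plain-NumPy sketch (not Brahe code), using the central difference of \(\sin(x)\):

```python
import numpy as np

x, exact = 1.0, np.cos(1.0)
for h in [1e-2, 1e-4, 1e-6, 1e-8, 1e-10, 1e-12]:
    approx = (np.sin(x + h) - np.sin(x - h)) / (2 * h)
    print(f"h = {h:.0e}: error = {abs(approx - exact):.2e}")

# The error shrinks as h decreases while truncation error dominates,
# bottoms out around h ~ 1e-5, then grows again as floating-point
# roundoff in the numerator takes over.
```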
The simplest approach is to use a fixed perturbation size for all state components, which generally works well when all components have similar magnitudes. Alternatively, a percentage-based perturbation scales \(h\) relative to each component's magnitude, which handles mixed scales but degrades when a component is near zero.
The adaptive perturbation strategy combines absolute and relative scaling to choose an appropriate perturbation size for each state component. It multiplies the component scale factor \(s\) by \(\sqrt{\epsilon}\), where \(\epsilon\) is machine epsilon for double precision (\(\approx 2.22 \times 10^{-16}\)), and enforces a minimum value \(h_{\min}\) to avoid excessively small perturbations:
\[ h_j = s \cdot \sqrt{\epsilon} \cdot \max(|x_j|, h_{\min}) \]
Adaptive perturbation is generally the best choice, as it balances accuracy and robustness across a wide range of state magnitudes, though percentage-based perturbations can also work well without much tuning.
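To make the adaptive rule concrete, here is a minimal plain-NumPy sketch of the formula above (the function name is hypothetical, not Brahe's API):

```python
import numpy as np

def adaptive_steps(x, scale_factor=1.0, min_value=1.0):
    """Per-component perturbation h_j = s * sqrt(eps) * max(|x_j|, h_min)."""
    sqrt_eps = np.sqrt(np.finfo(float).eps)  # ~1.49e-8
    return scale_factor * sqrt_eps * np.maximum(np.abs(x), min_value)

state = np.array([7000.0, 7.5])  # mixed-scale state from the example below
print(adaptive_steps(state))
# [~1.0e-4  ~1.1e-7] -- each component gets a perturbation proportional
# to its own magnitude, with a floor of s * sqrt(eps) * h_min near zero
```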
```python
import brahe as bh
import numpy as np

# Define dynamics with mixed-scale state
# State: [large_position (km), small_velocity (km/s)]
def dynamics(t, state):
    # Simple dynamics with different scales
    x, v = state
    return np.array([v, -x * 1e-6])  # Different scales

# Analytical Jacobian
def analytical_jacobian(t, state):
    return np.array([[0.0, 1.0], [-1e-6, 0.0]])

# Test state with very different magnitudes
state = np.array([7000.0, 7.5])  # Position in km, velocity in km/s
t = 0.0
J_analytical = analytical_jacobian(t, state)

print("Testing perturbation strategies on mixed-scale state:")
print(f"State: position={state[0]} km, velocity={state[1]} km/s\n")

# Strategy 1: Fixed perturbation
print("1. Fixed Perturbation (h = 1e-6)")
jacobian_fixed = bh.NumericalJacobian.central(dynamics).with_fixed_offset(1e-6)
J_fixed = jacobian_fixed.compute(t, state)
error_fixed = np.linalg.norm(J_fixed - J_analytical)
print(f"   Error: {error_fixed:.2e}\n")

# Strategy 2: Percentage perturbation
print("2. Percentage Perturbation (0.001%)")
jacobian_pct = bh.NumericalJacobian.central(dynamics).with_percentage(1e-5)
J_pct = jacobian_pct.compute(t, state)
error_pct = np.linalg.norm(J_pct - J_analytical)
print(f"   Error: {error_pct:.2e}\n")

# Strategy 3: Adaptive perturbation (recommended)
print("3. Adaptive Perturbation (scale=1.0, min=1.0)")
jacobian_adaptive = bh.NumericalJacobian.central(dynamics).with_adaptive(scale_factor=1.0, min_value=1.0)
J_adaptive = jacobian_adaptive.compute(t, state)
error_adaptive = np.linalg.norm(J_adaptive - J_analytical)
print(f"   Error: {error_adaptive:.2e}\n")

# Summary
print("Strategy Comparison:")
print(f"   Fixed:      {error_fixed:.2e}")
print(f"   Percentage: {error_pct:.2e}")
print(f"   Adaptive:   {error_adaptive:.2e}")
print("\nBest strategy: Adaptive (handles mixed scales robustly)")

# Test with state component near zero
print("\n" + "=" * 60)
print("Testing with component near zero:")
state_zero = np.array([7000.0, 1e-9])  # Very small velocity
print(f"State: position={state_zero[0]} km, velocity={state_zero[1]} km/s\n")

J_analytical_zero = analytical_jacobian(t, state_zero)

# Percentage fails when component is near zero
try:
    J_pct_zero = jacobian_pct.compute(t, state_zero)
    error_pct_zero = np.linalg.norm(J_pct_zero - J_analytical_zero)
    print(f"Percentage: Error = {error_pct_zero:.2e}")
except ZeroDivisionError:
    print("Percentage: FAILED (division by near-zero)")

# Adaptive handles it gracefully
J_adaptive_zero = jacobian_adaptive.compute(t, state_zero)
error_adaptive_zero = np.linalg.norm(J_adaptive_zero - J_analytical_zero)
print(f"Adaptive:   Error = {error_adaptive_zero:.2e}")

print("\nConclusion: Adaptive perturbation is most robust")
```
```rust
use brahe::math::jacobian::{DNumericalJacobian, DJacobianProvider};
use nalgebra::{DMatrix, DVector};

fn main() {
    // Define dynamics with mixed-scale state
    let dynamics = |_t: f64, state: &DVector<f64>, _params: Option<&DVector<f64>>| -> DVector<f64> {
        let x = state[0];
        let v = state[1];
        DVector::from_vec(vec![v, -x * 1e-6])
    };

    // Analytical Jacobian
    let analytical_jacobian = || -> DMatrix<f64> {
        DMatrix::from_row_slice(2, 2, &[0.0, 1.0, -1e-6, 0.0])
    };

    // Test state with very different magnitudes
    let state = DVector::from_vec(vec![7000.0, 7.5]); // Position in km, velocity in km/s
    let t = 0.0;
    let j_analytical = analytical_jacobian();

    println!("Testing perturbation strategies on mixed-scale state:");
    println!("State: position={} km, velocity={} km/s\n", state[0], state[1]);

    // Strategy 1: Fixed perturbation
    println!("1. Fixed Perturbation (h = 1e-6)");
    let jacobian_fixed = DNumericalJacobian::central(Box::new(dynamics)).with_fixed_offset(1e-6);
    let j_fixed = jacobian_fixed.compute(t, &state, None);
    let error_fixed = (j_fixed - j_analytical.clone()).norm();
    println!("   Error: {:.2e}\n", error_fixed);

    // Strategy 2: Percentage perturbation
    println!("2. Percentage Perturbation (0.001%)");
    let jacobian_pct = DNumericalJacobian::central(Box::new(dynamics)).with_percentage(1e-5);
    let j_pct = jacobian_pct.compute(t, &state, None);
    let error_pct = (j_pct - j_analytical.clone()).norm();
    println!("   Error: {:.2e}\n", error_pct);

    // Strategy 3: Adaptive perturbation (recommended)
    println!("3. Adaptive Perturbation (scale=1.0, min=1.0)");
    let jacobian_adaptive = DNumericalJacobian::central(Box::new(dynamics)).with_adaptive(1.0, 1.0);
    let j_adaptive = jacobian_adaptive.compute(t, &state, None);
    let error_adaptive = (j_adaptive - j_analytical.clone()).norm();
    println!("   Error: {:.2e}\n", error_adaptive);

    // Summary
    println!("Strategy Comparison:");
    println!("   Fixed:      {:.2e}", error_fixed);
    println!("   Percentage: {:.2e}", error_pct);
    println!("   Adaptive:   {:.2e}", error_adaptive);
    println!("\nBest strategy: Adaptive (handles mixed scales robustly)");

    // Test with state component near zero
    println!("\n{}", "=".repeat(60));
    println!("Testing with component near zero:");
    let state_zero = DVector::from_vec(vec![7000.0, 1e-9]); // Very small velocity
    println!("State: position={} km, velocity={} km/s\n", state_zero[0], state_zero[1]);

    let j_analytical_zero = analytical_jacobian();

    // Percentage can be problematic when component is near zero
    let j_pct_zero = jacobian_pct.compute(t, &state_zero, None);
    let error_pct_zero = (j_pct_zero - j_analytical_zero.clone()).norm();
    println!("Percentage: Error = {:.2e}", error_pct_zero);

    // Adaptive handles it gracefully
    let j_adaptive_zero = jacobian_adaptive.compute(t, &state_zero, None);
    let error_adaptive_zero = (j_adaptive_zero - j_analytical_zero).norm();
    println!("Adaptive:   Error = {:.2e}", error_adaptive_zero);

    println!("\nConclusion: Adaptive perturbation is most robust");
}
```