Solver configuration#

class jaxls.TrustRegionConfig[source]#

TrustRegionConfig(lambda_initial: 'float | jax.Array' = 0.0005, lambda_factor: 'float | jax.Array' = 2.0, lambda_min: 'float | jax.Array' = 1e-05, lambda_max: 'float | jax.Array' = 1000000.0, step_quality_min: 'float | jax.Array' = 0.001)

lambda_initial: float | Array = 0.0005#

Initial damping factor. Only used for Levenberg-Marquardt.

lambda_factor: float | Array = 2.0#

Factor to increase or decrease damping. Only used for Levenberg-Marquardt.

lambda_min: float | Array = 1e-05#

Minimum damping factor. Only used for Levenberg-Marquardt.

lambda_max: float | Array = 1000000.0#

Maximum damping factor. Only used for Levenberg-Marquardt.

step_quality_min: float | Array = 0.001#

Minimum step quality. Only used for Levenberg-Marquardt.
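
A minimal construction sketch using only the fields documented above; how the config is handed to the nonlinear solve depends on your jaxls version and is an assumption here:

import jaxls

# More conservative damping schedule than the defaults.
trust_region = jaxls.TrustRegionConfig(
    lambda_initial=1e-2,   # larger initial damping -> smaller, safer first steps
    lambda_factor=10.0,    # factor used to increase/decrease damping
    lambda_min=1e-8,
    lambda_max=1e8,
    step_quality_min=1e-3, # minimum step quality (see field above)
)
# Typically passed to the solver, e.g. problem.solve(trust_region=trust_region)
# (argument and method names assumed).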

class jaxls.TerminationConfig[source]#

TerminationConfig(max_iterations: 'jdc.Static[int]' = 100, early_termination: 'jdc.Static[bool]' = True, cost_tolerance: 'float | jax.Array' = 1e-05, gradient_tolerance: 'float | jax.Array' = 0.0001, gradient_tolerance_start_step: 'int | jax.Array' = 10, parameter_tolerance: 'float | jax.Array' = 1e-06)

max_iterations: Annotated[int, '__jax_dataclasses_static_field__'] = 100#

Maximum number of optimization steps. For constrained problems, this is the maximum iterations per inner solve (not total iterations).

early_termination: Annotated[bool, '__jax_dataclasses_static_field__'] = True#

If set to True, terminate when any of the tolerances are met. If False, always run max_iterations steps.

cost_tolerance: float | Array = 1e-05#

We terminate if |cost change| / cost < cost_tolerance. For constrained problems, this acts as a floor for the adaptive inner solver tolerance.

gradient_tolerance: float | Array = 0.0001#

We terminate if norm_inf(x - rplus(x, linear delta)) < gradient_tolerance. For constrained problems, this acts as a floor for the adaptive inner solver tolerance.

gradient_tolerance_start_step: int | Array = 10#

Iteration at which to start checking the gradient tolerance condition. Delaying this check helps avoid precision issues caused by inexact Newton steps.

parameter_tolerance: float | Array = 1e-06#

We terminate if norm_2(linear delta) < (norm_2(x) + parameter_tolerance) * parameter_tolerance.
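
A usage sketch with the fields above; only the construction is taken from this page, the rest is assumed:

import jaxls

# Stop early once any tolerance is met, but never exceed 50 iterations.
termination = jaxls.TerminationConfig(
    max_iterations=50,
    early_termination=True,
    cost_tolerance=1e-6,
    gradient_tolerance=1e-5,
    gradient_tolerance_start_step=10,
    parameter_tolerance=1e-7,
)

# With early_termination=False the solver always runs exactly max_iterations
# steps, which can be useful for fixed-length benchmarking runs.
fixed_length = jaxls.TerminationConfig(max_iterations=30, early_termination=False)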

class jaxls.ConjugateGradientConfig[source]#

Iterative solver for sparse linear systems. Can run on CPU or GPU.

For inexact steps, we use the Eisenstat-Walker criterion. For reference, see "Choosing the Forcing Terms in an Inexact Newton Method", Eisenstat & Walker, 1996.

tolerance_min: float | Array = 1e-07#

Lower bound on the CG convergence tolerance chosen by the Eisenstat-Walker criterion.

tolerance_max: float | Array = 0.01#

Upper bound on the CG convergence tolerance chosen by the Eisenstat-Walker criterion.

eisenstat_walker_gamma: float | Array = 0.9#

Eisenstat-Walker criterion gamma term. Controls how quickly the tolerance decreases. Typical values range from 0.5 to 0.9. Higher values lead to more aggressive tolerance reduction.

eisenstat_walker_alpha: float | Array = 2.0#

Eisenstat-Walker criterion alpha term. Determines rate at which the tolerance changes based on residual reduction. Typical values are 1.5 or 2.0. Higher values make the tolerance more sensitive to residual changes.
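
As a rough illustration of how gamma and alpha interact, the classic Eisenstat-Walker "choice 2" forcing term is sketched below, clipped to [tolerance_min, tolerance_max]; this is a generic rendition of the criterion, not a transcription of the jaxls internals:

import jax.numpy as jnp

def forcing_term(residual_norm, prev_residual_norm,
                 gamma=0.9, alpha=2.0, tol_min=1e-7, tol_max=1e-2):
    """Relative tolerance for the next inexact CG solve (Eisenstat-Walker, choice 2)."""
    eta = gamma * (residual_norm / prev_residual_norm) ** alpha
    return jnp.clip(eta, tol_min, tol_max)

forcing_term(1.0, 10.0)  # fast residual reduction -> 0.9 * 0.1**2 = 0.009
forcing_term(9.5, 10.0)  # slow residual reduction -> clipped to tol_max = 0.01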

preconditioner: Annotated[Literal['block_jacobi', 'point_jacobi'] | None, '__jax_dataclasses_static_field__'] = 'block_jacobi'#

Preconditioner to use for linear solves.
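
Construction follows directly from the fields above; only the way the config reaches the linear solver is assumed:

import jaxls

cg_config = jaxls.ConjugateGradientConfig(
    tolerance_min=1e-7,
    tolerance_max=1e-2,
    eisenstat_walker_gamma=0.9,
    eisenstat_walker_alpha=2.0,
    preconditioner="point_jacobi",  # or "block_jacobi" (default), or None
)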

class jaxls.AugmentedLagrangianConfig[source]#

Configuration for the augmented Lagrangian solver (ALGENCAN-style).

penalty_factor: float | Array = 4.0#

Penalty multiplier when constraint progress stagnates.

penalty_max: float | Array = 10000000.0#

Maximum penalty parameter.

penalty_min: float | Array = 1e-06#

Minimum penalty parameter.

penalty_initial: float | Array | None = None#

Initial penalty parameter. If None, uses ALGENCAN-style heuristic: rho = 10 * max(1, |f|) / max(1, 0.5 * c^2). Set to a fixed value (e.g., 1.0) to override the automatic initialization.
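
The heuristic can be written out explicitly; the formula is copied from the docstring above, with f0 read as the initial objective value and c0 as the initial constraint-violation vector (that reading, and ||c||^2 for c^2, are interpretations):

import jax.numpy as jnp

def algencan_initial_penalty(f0, c0):
    # rho = 10 * max(1, |f|) / max(1, 0.5 * c^2)
    return 10.0 * jnp.maximum(1.0, jnp.abs(f0)) / jnp.maximum(1.0, 0.5 * jnp.sum(c0 ** 2))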

tolerance_absolute: float | Array = 1e-05#

Absolute convergence tolerance: max(snorm, csupn) < tol.

tolerance_relative: float | Array = 0.0001#

Relative convergence tolerance: snorm / snorm_initial < tol.

violation_reduction_threshold: float | Array = 0.5#

Increase penalty if violation > threshold * previous_violation. E.g., 0.9 requires ~10% reduction per update to avoid penalty growth. Use higher values (e.g., 0.99) for more lenient penalty updates.

lambda_min: float | Array = -10000000.0#

Minimum Lagrange multiplier (safeguard).

lambda_max: float | Array = 10000000.0#

Maximum Lagrange multiplier (safeguard).

inner_solve_tolerance: float | Array = 0.01#

Only update AL parameters when the inner problem has converged: we update when ||gradient|| < tolerance, meaning the LM solver has approximately solved the current augmented subproblem.
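
Putting the pieces together, a sketch of configuring a constrained solve; the config constructors follow the signatures on this page, while the final solver call is an assumption about the jaxls API:

import jaxls

al_config = jaxls.AugmentedLagrangianConfig(
    penalty_initial=1.0,                # fixed value, overrides the ALGENCAN heuristic
    penalty_factor=4.0,
    violation_reduction_threshold=0.9,  # require only ~10% violation reduction per update
    inner_solve_tolerance=1e-2,
)
termination = jaxls.TerminationConfig(max_iterations=100)  # per inner solve for constrained problems
# Both configs would then be passed to the constrained solve entry point
# (exact argument names depend on the jaxls version).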