Getting Started
Prerequisites
Before using HPRLP, ensure you have:
- Julia (version 1.10 or higher recommended)
- CUDA (optional, for GPU acceleration); CUDA Toolkit 12.4 or higher is recommended for best compatibility
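A quick way to confirm the Julia requirement from the REPL (plain Base Julia, nothing HPRLP-specific):

```julia
# Check that the running Julia is new enough (1.10 or higher recommended).
@assert VERSION >= v"1.10" "Julia $(VERSION) detected; 1.10 or higher is recommended."
println("Julia ", VERSION, " looks good")
```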
Installation
From GitHub
```julia
using Pkg
Pkg.add(url="https://github.com/PolyU-IOR/HPR-LP")
```

Development Installation
To install for development:
```bash
git clone https://github.com/PolyU-IOR/HPR-LP.git
cd HPR-LP
julia --project -e 'using Pkg; Pkg.instantiate()'
```

Verifying CUDA Installation
If you plan to use GPU acceleration, verify CUDA is working:
```julia
using CUDA
CUDA.versioninfo()
```

If CUDA is not available, HPRLP will automatically fall back to CPU mode.
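To pick the mode explicitly rather than rely on the automatic fallback, one option is to key the use_gpu parameter (used in the examples below) off CUDA.functional(). This is a sketch, not required setup:

```julia
using CUDA
using HPRLP

# Sketch: enable GPU mode only when a working CUDA setup is detected.
params = HPRLP_parameters()
params.use_gpu = CUDA.functional()
```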
First Example: Solving a Simple LP
Let's solve a basic linear programming problem:
\[\begin{array}{ll} \min \quad & -3x_1 - 5x_2 \\ \text{s.t.} \quad & x_1 + 2x_2 \leq 10 \\ & 3x_1 + x_2 \leq 12 \\ & x_1, x_2 \geq 0 \end{array}\]
Using the Direct API
```julia
using HPRLP
using SparseArrays
# Convert to standard form: AL ≤ Ax ≤ AU
# A "≤" row a₁x₁ + a₂x₂ ≤ b is flipped to -a₁x₁ - a₂x₂ ≥ -b, so AL = -b and AU = Inf
A = sparse([-1.0 -2.0; -3.0 -1.0])
c = [-3.0, -5.0]
AL = [-10.0, -12.0]
AU = [Inf, Inf]
l = [0.0, 0.0]
u = [Inf, Inf]
# Step 1: Build the model
model = build_from_Abc(A, c, AL, AU, l, u)
# Step 2: Set up parameters
params = HPRLP_parameters()
params.use_gpu = false # Use CPU for this small problem
params.stoptol = 1e-4 # Convergence tolerance
params.time_limit = 60 # Maximum 60 seconds
params.verbose = true # Print solver output
# Step 3: Solve
result = optimize(model, params)
# Check results
println("Status: ", result.status)
println("Optimal value: ", result.primal_obj)
println("Solution: x₁ = ", result.x[1], ", x₂ = ", result.x[2])
println("Iterations: ", result.iter)
println("Solve time: ", result.time, " seconds")Expected output:
- Optimal value: -26.4
- Solution: x₁ ≈ 2.8, x₂ ≈ 3.6
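As a quick sanity check, you can evaluate the objective and constraint activities at the returned point yourself. The sketch below reuses A, c, AL, and result from the example above:

```julia
using LinearAlgebra

# Objective value at the returned point; should be close to -26.4.
println("objective c'x = ", dot(c, result.x))

# Constraint activities in the AL ≤ Ax ≤ AU form used above; each entry should be ≥ AL.
println("Ax = ", A * result.x, ", AL = ", AL)
```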
Using JuMP
The same problem using the JuMP modeling interface:
```julia
using JuMP
using HPRLP
model = Model(HPRLP.Optimizer)
# Set optimizer attributes
set_optimizer_attribute(model, "stoptol", 1e-4)
set_optimizer_attribute(model, "use_gpu", false)
set_optimizer_attribute(model, "verbose", true)
# Define variables and constraints
@variable(model, x1 >= 0)
@variable(model, x2 >= 0)
@objective(model, Min, -3x1 - 5x2)
@constraint(model, x1 + 2x2 <= 10)
@constraint(model, 3x1 + x2 <= 12)
# Solve
optimize!(model)
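# Optional check (not part of the original example): make sure an optimal
# solution was reported before querying values. MOI is re-exported by JuMP.
if termination_status(model) != MOI.OPTIMAL
    @warn "Solver did not report an optimal solution" termination_status(model)
end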
# Get results
println("Status: ", termination_status(model))
println("Optimal value: ", objective_value(model))
println("Solution: x₁ = ", value(x1), ", x₂ = ", value(x2))
println("Solve time: ", solve_time(model), " seconds")Solve the instance from MPS Files
```julia
using HPRLP
# Build model from file
model = build_from_mps("model.mps")
# Set parameters
params = HPRLP_parameters()
# Solve
result = optimize(model, params)
```

Next Steps
- Learn about solving MPS files
- Explore the Direct API in detail
- See more JuMP integration examples
- Check out the full API Reference
- Browse additional Examples
Performance Tips
JIT Compilation: The first run will be slow due to Julia's JIT compilation. For benchmarking, run the solver twice or use a warm-up phase.
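For example, a simple warm-up pattern (a sketch that reuses model and params from the Direct API example above) is to time two consecutive solves:

```julia
# Assumes `model` and `params` from the Direct API example are in scope.
# The first call includes JIT compilation; the second reflects solve time only.
t_first  = @elapsed optimize(model, params)
t_second = @elapsed optimize(model, params)
println("first run: ", t_first, " s, warmed-up run: ", t_second, " s")
```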
GPU Usage: For large problems (typically > 10,000 variables/constraints), GPU acceleration can provide significant speedups.
Scaling: The default scaling methods (Ruiz and Pock-Chambolle) improve numerical stability. Keep them enabled unless you have specific reasons to disable them.
Tolerance: The default stoptol = 1e-4 is relatively loose. Decrease it (use a smaller value) for more accurate solutions in critical applications.
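For example (a sketch using only the stoptol parameter introduced above; 1e-8 is just an illustrative value):

```julia
using HPRLP

params = HPRLP_parameters()
# Tighter tolerance than the 1e-4 default; expect more iterations and longer solve times.
params.stoptol = 1e-8

# JuMP equivalent: set_optimizer_attribute(model, "stoptol", 1e-8)
```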