Newton's method




Newton’s Method for Nonlinear Equations

Newton’s method is a powerful iterative algorithm used to find the roots (or solutions) of differentiable functions. It relies on local linearization to approximate the roots. While it’s commonly applied to single-variable functions, it can also be extended to systems of n differentiable functions in n variables.

Here’s how it works:

  1. Evaluation of Jacobian Matrix and Function Vector:

  • Given a system of nonlinear equations F(X) = 0, where F(X) is a vector function and X is a vector of variables, we start by evaluating the Jacobian matrix J(X) and the function vector F(X).

  • The Jacobian matrix contains partial derivatives of each component of F(X) with respect to the variables in X.

  2. Update Step:

  • We solve the linear system of equations: J(X) · ΔX = -F(X), where ΔX represents the update to the variables.

  • The goal is to find the correction ΔX that brings us closer to the root.

  3. Update Variables:

  • Update the variables: X = X + ΔX.

  • Repeat steps 1 through 3 until convergence (when F(X) approaches zero or falls below a specified tolerance); a runnable sketch follows this list.

  4. Convergence and Basins of Attraction:

  • Viewed as a dynamical system, the basins of the roots found by iterating Newton’s method correspond to distinct components of the Fatou set of the iteration map.

  • We can determine the basins of attraction, which are sets of starting points that iteratively converge to specific roots.

  • For a system of two nonlinear equations, we can visualize these basins of attraction by running Newton’s method from a grid of starting points and coloring each point by the root it converges to, as in the second sketch below.
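
Below is a minimal sketch of the iteration in Python with NumPy. The example system (a circle and a hyperbola) and the starting point are arbitrary choices for illustration, not part of the original discussion.

import numpy as np

def F(X):
    # The system F(X) = 0: a circle of radius 2 and a hyperbola
    x, y = X
    return np.array([x**2 + y**2 - 4.0,
                     x * y - 1.0])

def J(X):
    # Jacobian of F: partial derivatives of each component of F
    x, y = X
    return np.array([[2 * x, 2 * y],
                     [y, x]])

def newton(F, J, X0, tol=1e-10, max_iter=50):
    X = np.asarray(X0, dtype=float)
    for _ in range(max_iter):
        FX = F(X)
        if np.linalg.norm(FX) < tol:
            break
        dX = np.linalg.solve(J(X), -FX)  # solve J(X) · ΔX = -F(X)
        X = X + dX                       # update the variables
    return X

root = newton(F, J, X0=[2.0, 0.5])
print(root, F(root))  # F(root) should be near zero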


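To illustrate the basins, the sketch below runs the same Newton iteration from a grid of starting points for the system equivalent to z^3 = 1 with z = x + iy, and colors each point by the root it converges to. The grid resolution and iteration count are arbitrary choices.

import numpy as np
import matplotlib.pyplot as plt

# Real form of z^3 - 1 = 0: F(x, y) = (x^3 - 3xy^2 - 1, 3x^2 y - y^3)
roots = [np.array([1.0, 0.0]),
         np.array([-0.5, np.sqrt(3) / 2]),
         np.array([-0.5, -np.sqrt(3) / 2])]

def newton_step(X):
    x, y = X
    FX = np.array([x**3 - 3 * x * y**2 - 1, 3 * x**2 * y - y**3])
    JX = np.array([[3 * x**2 - 3 * y**2, -6 * x * y],
                   [6 * x * y, 3 * x**2 - 3 * y**2]])
    return X + np.linalg.solve(JX, -FX)

n = 200
xs = np.linspace(-2, 2, n)
basin = np.zeros((n, n))
for i, y0 in enumerate(xs):
    for j, x0 in enumerate(xs):
        X = np.array([x0, y0])
        try:
            for _ in range(30):
                X = newton_step(X)
        except np.linalg.LinAlgError:
            continue  # singular Jacobian: leave this point unassigned
        # color the starting point by the nearest root
        basin[i, j] = np.argmin([np.linalg.norm(X - r) for r in roots])

plt.imshow(basin, extent=[-2, 2, -2, 2], origin="lower")
plt.show()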

Quasi-Newton Methods


Quasi-Newton methods are a class of methods used to solve nonlinear optimization problems. They are based on Newton's method but serve as an alternative when the objective function is not twice differentiable, meaning the Hessian matrix is unavailable, or when the Hessian and its inverse are too expensive to compute.


Numerous variants have been proposed, including Broyden's method (1965), the SR1 formula (Davidon 1959; Broyden 1967), the DFP method (Davidon 1959; Fletcher and Powell 1963), and the BFGS method (Broyden, Fletcher, Goldfarb, and Shanno, 1970).



Broyden’s Method for Nonlinear Equations

  1. Introduction:

  • Broyden’s method is an iterative algorithm used to find the roots (solutions) of systems of nonlinear equations.

  • Unlike Newton’s method, which requires evaluating the exact Jacobian matrix at every iteration, Broyden’s method maintains an approximation to the Jacobian built from successive function evaluations (a secant update).

  • It is particularly useful when computing the exact Jacobian is computationally expensive or impractical.

  2. Basic Idea:

  • Start with an initial guess for the solution vector X.

  • Update the Jacobian approximation iteratively based on the function evaluations and the changes in the function vector.

  • The method achieves local Q-superlinear convergence: slower than Newton’s quadratic rate, but obtained without computing exact Jacobians.

  3. Algorithm Steps:

  • Given a system of nonlinear equations F(X) = 0, where F(X) is a vector function and X is a vector of variables:

  • Initialize the Jacobian approximation J(X) (usually an identity matrix or an estimate).

  • Compute the function vector F(X).

  • Solve the linear system: J(X) · ΔX = -F(X) to find the update ΔX.

  • Update the variables: X = X + ΔX.

  • Update the Jacobian approximation J(X) using the rank-one secant formula described below.

  • Repeat until convergence (a runnable sketch follows this list).

  4. Advantages:

  • Broyden’s method avoids the costly computation of the full Jacobian matrix.

  • It converges efficiently for many problems.

  • If some equations are linear, it can locate a zero of those equations in fewer iterations.

  5. Limitations:

  • Broyden’s method may not perform well if the system has singularities or ill-conditioned regions.

  • It doesn’t guarantee global convergence.
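
A minimal sketch of these steps in Python follows, reusing the circle-and-hyperbola example from the Newton sketch above and assuming the rank-one ("good Broyden") update detailed in the next list; the identity matrix serves as the initial Jacobian approximation.

import numpy as np

def F(X):
    x, y = X
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

def broyden(F, X0, tol=1e-10, max_iter=100):
    X = np.asarray(X0, dtype=float)
    B = np.eye(len(X))               # initial Jacobian approximation
    FX = F(X)
    for _ in range(max_iter):
        if np.linalg.norm(FX) < tol:
            break
        s = np.linalg.solve(B, -FX)  # solve B · ΔX = -F(X)
        X = X + s                    # update the variables
        FX_new = F(X)
        y = FX_new - FX              # change in function values
        # rank-one secant update: B += (y - B s) s^T / (s^T s)
        B = B + np.outer(y - B @ s, s) / (s @ s)
        FX = FX_new
    return X

root = broyden(F, X0=[2.0, 0.5])
print(root, F(root))  # F(root) should be near zero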



To make the update step concrete, here is how the Jacobian approximation is maintained:

  1. Jacobian Matrix: In Newton’s method, the Jacobian matrix J is required at every iteration. However, computing this Jacobian can be difficult and costly.

  2. Secant Methods: Broyden’s method is a type of secant method. Unlike Newton’s method, it doesn’t explicitly compute the Jacobian. Instead, it constructs an approximation to the Jacobian, which is updated iteratively.

  3. Update Formula: Let B_k be the Jacobian approximation at iteration k, and let s_k = x_{k+1} - x_k be the step between successive points. The updated approximation B_{k+1} satisfies the secant equation B_{k+1} s_k = f(x_{k+1}) - f(x_k), where f is the vector function representing the system of nonlinear equations.

  4. Matrix Update: Broyden’s method generates subsequent matrices using the rank-one update B_{k+1} = B_k + (y_k - B_k s_k) s_k^T / ||s_k||^2. Here:

  • y_k = f(x_{k+1}) - f(x_k) represents the difference in function values.

  • s_k^T denotes the transpose of s_k.

  • ||s_k||^2 is the squared Euclidean norm of s_k.
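
With arbitrary made-up values, one can check numerically that this update satisfies the secant equation B_{k+1} s_k = y_k from item 3:

import numpy as np

B_k = np.eye(2)
s_k = np.array([0.3, -0.1])
y_k = np.array([0.5, 0.2])

# rank-one update: B_{k+1} = B_k + (y_k - B_k s_k) s_k^T / ||s_k||^2
B_k1 = B_k + np.outer(y_k - B_k @ s_k, s_k) / (s_k @ s_k)
print(B_k1 @ s_k, y_k)  # the two printed vectors agree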



