
Adaptive Filter Theory, 5th Edition (Haykin): Solutions Manual

Chapter 2

Problem 2.1

a) Let

$$w_k = x + jy, \qquad p(-k) = a + jb$$

We may then write

$$f = w_k p^*(-k) = (x + jy)(a - jb) = (ax + by) + j(ay - bx)$$

Letting $f = u + jv$, where

$$u = ax + by, \qquad v = ay - bx$$

we have

$$\frac{\partial u}{\partial x} = a, \qquad \frac{\partial v}{\partial y} = a, \qquad \frac{\partial u}{\partial y} = b, \qquad \frac{\partial v}{\partial x} = -b$$

From these results we can immediately see that

$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}$$

In other words, the product term $w_k p^*(-k)$ satisfies the Cauchy-Riemann equations, and so this term is analytic.
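As a quick symbolic cross-check of part a) (not part of the printed solution), the sketch below uses SymPy to verify that the real and imaginary parts of $w_k p^*(-k)$ satisfy the Cauchy-Riemann conditions; the variable names mirror the problem statement.

```python
# Sketch: verify the Cauchy-Riemann conditions for f = w_k p*(-k) symbolically.
import sympy as sp

x, y, a, b = sp.symbols('x y a b', real=True)
w = x + sp.I * y                          # w_k = x + jy
p = a + sp.I * b                          # p(-k) = a + jb

f = sp.expand(w * sp.conjugate(p))        # w_k p*(-k)
u, v = sp.re(f), sp.im(f)                 # u = ax + by, v = ay - bx

print(sp.simplify(sp.diff(u, x) - sp.diff(v, y)))   # 0  ->  du/dx =  dv/dy
print(sp.simplify(sp.diff(v, x) + sp.diff(u, y)))   # 0  ->  dv/dx = -du/dy
```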


b) Let

$$f = w_k^* p(-k) = (x - jy)(a + jb) = (ax + by) + j(bx - ay)$$

Let $f = u + jv$, with

$$u = ax + by, \qquad v = bx - ay$$

Hence,

$$\frac{\partial u}{\partial x} = a, \qquad \frac{\partial v}{\partial x} = b, \qquad \frac{\partial u}{\partial y} = b, \qquad \frac{\partial v}{\partial y} = -a$$

From these results we immediately see that

$$\frac{\partial u}{\partial x} \neq \frac{\partial v}{\partial y}, \qquad \frac{\partial v}{\partial x} \neq -\frac{\partial u}{\partial y}$$

In other words, the product term $w_k^* p(-k)$ does not satisfy the Cauchy-Riemann equations, and so this term is not analytic.

Problem 2.2

a) From the Wiener-Hopf equation, we have

$$\mathbf{w}_o = \mathbf{R}^{-1}\mathbf{p} \tag{1}$$

We are given that

$$\mathbf{R} = \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}, \qquad \mathbf{p} = \begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix}$$

Hence the inverse of $\mathbf{R}$ is

$$\mathbf{R}^{-1} = \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}^{-1} = \frac{1}{0.75}\begin{bmatrix} 1 & -0.5 \\ -0.5 & 1 \end{bmatrix}$$

Using Equation (1), we therefore get

$$\mathbf{w}_o = \frac{1}{0.75}\begin{bmatrix} 1 & -0.5 \\ -0.5 & 1 \end{bmatrix}\begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix} = \frac{1}{0.75}\begin{bmatrix} 0.375 \\ 0 \end{bmatrix} = \begin{bmatrix} 0.5 \\ 0 \end{bmatrix}$$

b) The minimum mean-square error is

$$J_{\min} = \sigma_d^2 - \mathbf{p}^H\mathbf{w}_o = \sigma_d^2 - \begin{bmatrix} 0.5 & 0.25 \end{bmatrix}\begin{bmatrix} 0.5 \\ 0 \end{bmatrix} = \sigma_d^2 - 0.25$$

c) The eigenvalues of the matrix $\mathbf{R}$ are roots of the characteristic equation

$$(1 - \lambda)^2 - (0.5)^2 = 0$$

That is, the two roots are $\lambda_1 = 0.5$ and $\lambda_2 = 1.5$.

The associated eigenvectors are defined by

$$\mathbf{R}\mathbf{q} = \lambda\mathbf{q}$$

For $\lambda_1 = 0.5$, we have

$$\begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}\begin{bmatrix} q_{11} \\ q_{12} \end{bmatrix} = 0.5\begin{bmatrix} q_{11} \\ q_{12} \end{bmatrix}$$

Expanded, this becomes

$$q_{11} + 0.5q_{12} = 0.5q_{11}$$
$$0.5q_{11} + q_{12} = 0.5q_{12}$$

Therefore, $q_{11} = -q_{12}$. Normalizing the eigenvector $\mathbf{q}_1$ to unit length, we therefore have

$$\mathbf{q}_1 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ -1 \end{bmatrix}$$

Similarly, for the eigenvalue $\lambda_2 = 1.5$, we may show that

$$\mathbf{q}_2 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix}$$
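As a numerical cross-check of Problem 2.2 (not part of the printed solution), the minimal NumPy sketch below reproduces the Wiener solution, the $\mathbf{p}^H\mathbf{w}_o$ term of $J_{\min}$, and the eigenstructure of $\mathbf{R}$; $\sigma_d^2$ is left symbolic since it is not specified in the problem.

```python
# Sketch: numerical check of Problem 2.2 with NumPy.
import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.5, 0.25])

# a) Wiener-Hopf solution w_o = R^{-1} p
w_o = np.linalg.solve(R, p)
print(w_o)              # [0.5, 0.0]

# b) data-dependent part of J_min = sigma_d^2 - p^H w_o
print(p @ w_o)          # 0.25, hence J_min = sigma_d^2 - 0.25

# c) eigenvalues and unit-length eigenvectors of R
lam, Q = np.linalg.eigh(R)
print(lam)              # [0.5, 1.5]
print(Q)                # columns proportional to [1, -1]/sqrt(2) and [1, 1]/sqrt(2)
```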

Accordingly, we may express the Wiener filter in terms of its eigenvalues and eigenvectors as follows:

$$\mathbf{w}_o = \left(\sum_{i=1}^{2}\frac{1}{\lambda_i}\mathbf{q}_i\mathbf{q}_i^H\right)\mathbf{p} = \left(\frac{1}{\lambda_1}\mathbf{q}_1\mathbf{q}_1^H + \frac{1}{\lambda_2}\mathbf{q}_2\mathbf{q}_2^H\right)\mathbf{p}$$

$$= \left(\begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} + \frac{1}{3}\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}\right)\begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix} = \begin{bmatrix} \frac{4}{3} & -\frac{2}{3} \\ -\frac{2}{3} & \frac{4}{3} \end{bmatrix}\begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix} = \begin{bmatrix} \frac{4}{6} - \frac{1}{6} \\ -\frac{1}{3} + \frac{1}{3} \end{bmatrix} = \begin{bmatrix} 0.5 \\ 0 \end{bmatrix}$$

Problem 2.3

a) From the Wiener-Hopf equation we have

$$\mathbf{w}_o = \mathbf{R}^{-1}\mathbf{p} \tag{1}$$

We are given

$$\mathbf{R} = \begin{bmatrix} 1 & 0.5 & 0.25 \\ 0.5 & 1 & 0.5 \\ 0.25 & 0.5 & 1 \end{bmatrix}, \qquad \mathbf{p} = \begin{bmatrix} 0.5 & 0.25 & 0.125 \end{bmatrix}^T$$

Hence, the use of these values in Equation (1) yields

$$\mathbf{w}_o = \mathbf{R}^{-1}\mathbf{p} = \begin{bmatrix} 1 & 0.5 & 0.25 \\ 0.5 & 1 & 0.5 \\ 0.25 & 0.5 & 1 \end{bmatrix}^{-1}\begin{bmatrix} 0.5 \\ 0.25 \\ 0.125 \end{bmatrix} = \begin{bmatrix} 1.33 & -0.67 & 0 \\ -0.67 & 1.67 & -0.67 \\ 0 & -0.67 & 1.33 \end{bmatrix}\begin{bmatrix} 0.5 \\ 0.25 \\ 0.125 \end{bmatrix} = \begin{bmatrix} 0.5 & 0 & 0 \end{bmatrix}^T$$

b) The minimum mean-square error is

$$J_{\min} = \sigma_d^2 - \mathbf{p}^H\mathbf{w}_o = \sigma_d^2 - \begin{bmatrix} 0.5 & 0.25 & 0.125 \end{bmatrix}\begin{bmatrix} 0.5 \\ 0 \\ 0 \end{bmatrix} = \sigma_d^2 - 0.25$$

c) The eigenvalues of the matrix $\mathbf{R}$ are

$$\begin{bmatrix} \lambda_1 & \lambda_2 & \lambda_3 \end{bmatrix} = \begin{bmatrix} 0.4069 & 0.75 & 1.8431 \end{bmatrix}$$

The corresponding eigenvectors constitute the orthogonal matrix:

$$\mathbf{Q} = \begin{bmatrix} -0.4544 & -0.7071 & 0.5418 \\ 0.7662 & 0 & 0.6426 \\ -0.4544 & 0.7071 & 0.5418 \end{bmatrix}$$

Accordingly, we may express the Wiener filter in terms of its eigenvalues and eigenvectors as follows:

$$\mathbf{w}_o = \left(\sum_{i=1}^{3}\frac{1}{\lambda_i}\mathbf{q}_i\mathbf{q}_i^H\right)\mathbf{p}$$

$$= \left(\frac{1}{0.4069}\begin{bmatrix} 0.2065 & -0.3482 & 0.2065 \\ -0.3482 & 0.5871 & -0.3482 \\ 0.2065 & -0.3482 & 0.2065 \end{bmatrix} + \frac{1}{0.75}\begin{bmatrix} 0.5 & 0 & -0.5 \\ 0 & 0 & 0 \\ -0.5 & 0 & 0.5 \end{bmatrix} + \frac{1}{1.8431}\begin{bmatrix} 0.2935 & 0.3482 & 0.2935 \\ 0.3482 & 0.4129 & 0.3482 \\ 0.2935 & 0.3482 & 0.2935 \end{bmatrix}\right)\begin{bmatrix} 0.5 \\ 0.25 \\ 0.125 \end{bmatrix}$$

$$= \begin{bmatrix} 0.5 & 0 & 0 \end{bmatrix}^T$$

Problem 2.4

By definition, the correlation matrix is

$$\mathbf{R} = E[\mathbf{u}(n)\mathbf{u}^H(n)]$$

where $\mathbf{u}(n) = \begin{bmatrix} u(n) & u(n-1) & \cdots \end{bmatrix}^T$ is the tap-input vector. Invoking the ergodicity theorem,

$$\mathbf{R}(N) = \frac{1}{N+1}\sum_{n=0}^{N}\mathbf{u}(n)\mathbf{u}^H(n)$$
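Both results above lend themselves to a quick numerical cross-check. The NumPy sketch below (not part of the printed solution) recomputes the eigen-expansion of Problem 2.3 and illustrates the time-average estimate of $\mathbf{R}$ from Problem 2.4; the unit-variance white-noise input used for the average is an assumed example signal, not data given in the problem.

```python
# Sketch: eigen-expansion of the Wiener filter (Problem 2.3) and a time-average
# estimate of the correlation matrix (Problem 2.4).
import numpy as np

R = np.array([[1.0, 0.5, 0.25],
              [0.5, 1.0, 0.5],
              [0.25, 0.5, 1.0]])
p = np.array([0.5, 0.25, 0.125])

# Problem 2.3: w_o = (sum_i (1/lambda_i) q_i q_i^H) p
lam, Q = np.linalg.eigh(R)
w_o = sum((1.0 / lam[i]) * np.outer(Q[:, i], Q[:, i]) for i in range(3)) @ p
print(w_o)                            # [0.5, 0.0, 0.0], matching R^{-1} p

# Problem 2.4: ergodic (time-average) estimate of R from tap-input vectors u(n).
# Assumed example: unit-variance white noise, so E[u(n) u^H(n)] is the identity.
rng = np.random.default_rng(0)
x = rng.standard_normal(20_000)
M = 3                                 # number of taps in u(n)
R_hat = np.zeros((M, M))
count = 0
for n in range(M - 1, len(x)):
    u = x[n - M + 1:n + 1][::-1]      # u(n) = [u(n), u(n-1), u(n-2)]^T
    R_hat += np.outer(u, u)
    count += 1
R_hat /= count                        # sample average approximating E[u(n) u^H(n)]
print(R_hat)                          # close to the 3x3 identity matrix
```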
