Summer Research Fellowship Programme of India's Science Academies

Tests for Detection of Granger Causality in Bond Yield Curve Data

Pramit Das

Indian Statistical Institute, Kolkata

Guided by:

Rituparna Sen

Indian Statistical Institute, Chennai


Suppose we have two univariate time series $X_t$ and $Y_t$, where $t=1,2,3,\ldots$. First, we consider a standard AR($p$) model for $X_t$, given by

$X_t = a_0 + a_1 X_{t-1} + a_2 X_{t-2} + \cdots + a_p X_{t-p} + \epsilon_t$ ...... (1)

Further, assume $Y_t$ also follows an AR($p$) model. (This might require a few zero coefficients in one of the AR models.)

Consider another model, $X_t = d_0 + \sum_{j=1}^{p} b_j X_{t-j} + \sum_{j=1}^{p} c_j Y_{t-j} + \epsilon_t$ ...... (2)

Assume the errors follow a $MVN(\tilde{0}, \Sigma)$ distribution.

Our problem is to check whether the model in (2) is "better" than that in (1), where "better" means a smaller residual sum of squares. In other words, we want to see whether including past values of $Y_t$ improves the prediction of the present $X_t$. When $X_t$ and $Y_t$ are univariate, this can be done with a standard F-test. If the test finds model (2) better, we say that $Y_t$ Granger causes $X_t$.
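For concreteness, the univariate F-test amounts to fitting (1) and (2) by least squares and comparing their residual sums of squares. The report's computations were done in R; the following is only an illustrative Python sketch, with function names of our own:

```python
import numpy as np

def lagmat(x, p):
    """Build a lag matrix whose columns are x_{t-1}, ..., x_{t-p}."""
    n = len(x)
    return np.column_stack([x[p - j - 1:n - j - 1] for j in range(p)])

def granger_f_stat(x, y, p):
    """F-statistic comparing model (2) (lags of x and y) with model (1) (lags of x only)."""
    n = len(x)
    target = x[p:]
    X1 = np.column_stack([np.ones(n - p), lagmat(x, p)])   # restricted: model (1)
    X2 = np.column_stack([X1, lagmat(y, p)])               # unrestricted: model (2)
    rss1 = np.sum((target - X1 @ np.linalg.lstsq(X1, target, rcond=None)[0]) ** 2)
    rss2 = np.sum((target - X2 @ np.linalg.lstsq(X2, target, rcond=None)[0]) ** 2)
    df2 = n - 3 * p - 1                                    # residual d.f. of model (2)
    return ((rss1 - rss2) / p) / (rss2 / df2)
```

Large values of the statistic, relative to the $F(p,\,n-3p-1)$ distribution, indicate that past values of $Y_t$ improve the prediction of $X_t$.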

Our current problem is the multivariate extension. It is known that in the multivariate time-series case the standard F-test breaks down (as $\det(\hat{\Sigma})$ becomes 0 or very close to 0). So we use tools from functional data analysis, such as functional PCA (FPCA), and run the Granger causality test on the obtained coefficients. We implement these methods on the bond yield curve data of the USA and India. Further, we look at various test statistics and carry out high-dimensional sign tests, spatial signed-rank tests, etc. as non-parametric Granger causality tests for high-dimensional (e.g. functional) data.

Further, we use the Nelson-Siegel (NS) equations, which are specifically designed for modelling yield curves, to predict future returns using a method suggested by Diebold and Li (referred to as the DL method). We then check how good the predictions are using various measures, such as standardised RMSE and standardised absolute error.

Keywords: Finance, High dimensional inference, Functional data analysis, Yield Curve modelling


AR model: Autoregressive model
DL forecasting: Diebold and Li's method of forecasting
VARMA model: Vector Autoregressive Moving Average model



The current project deals with detecting Granger causality between the US yield curve and the Indian yield curve.

  • We take data from January 2, 2019, to May 10, 2019, for both countries. We use an FPCA-based Granger causality test, followed by a permutation test, to reach a conclusion.
  • Also, using this data, we predict the returns for May 13, 2019, to June 13, 2019, and check how accurate these predictions are (using various metrics).
  • We compare the power of various tests, such as the permutation test, the FPCA-based test, and spatial tests, using simulated datasets.
  • We use Nelson-Siegel curves to model the Indian bond yield curve.

Statement of the Problems

  • Functional datasets are prevalent in almost all areas of science, e.g. finance, population studies, disease modelling, behavioural sciences, handwriting recognition, etc. Our present problem concerns two financial datasets: the US bond yield dataset and the Indian bond yield dataset. We approximate the (infinite-dimensional) functions by a linear combination of a few orthogonal basis functions and apply various testing (inference) and prediction models to them. This analysis helps us understand whether US bond yields Granger cause Indian bond yields and vice versa, i.e. whether there is statistically significant evidence of causality. Further, we can use the coefficients from FPCA to predict the bond yields at a future date using Nelson-Siegel curves. This can be used for academic purposes, stock market trading and policy research. One of our main objectives is a simulation study comparing the powers of various tests, such as the FPCA-based test, the spatial sign test, the permutation test, etc.
  • However, for the spatial tests, we stick to the multidimensional setup. The results could be further improved using an infinite-dimensional setup, but those tests require random variables on Banach spaces. So, for ease of exposition, we stick to the multidimensional setup, which, as we later see, has quite decent power.

Objectives of the Research

Overall objective

First, we see whether Indian bond returns Granger cause US bond returns (and vice versa). We use various tests for this, viz. spatial tests, the permutation test, the FPCA-based test, etc. From an inferential perspective, these are all hypothesis testing problems. We also discuss the theoretical details of these tests.

Further, we use the available data to estimate the parameters of the Nelson-Siegel curve, fit AR(1) models to forecast those parameters, and use them to predict future Indian and US bond yields. We then compare the forecasts with the real data to see how accurate the estimates are.

We also use simulation studies to compare the powers of the various tests.


The current study investigates Granger causality between the US and Indian yield curves, but the techniques used here are general and can be applied to any country, or to functional data in general. The prediction methods (Nelson-Siegel curves and the DL method) are likewise applicable for modelling any yield curve.



In the paper "Time series of functional data with application to yield curves" by Sen & Klüppelberg (2019), the results regarding FPCA, parameter estimation, the DL method, and consistency are proved. The paper also discusses a functional-regression technique for predicting long-term interest rates from short-term interest rates.

The paper "Determining the order of the functional autoregressive model" by Kokoszka & Reimherr enables us to decide the order of a VAR model by sequential hypothesis testing. The techniques laid out there are essential for determining the order of the VAR model when modelling the vector of FPCA coefficients for Granger causality detection.

Further, NS curves and their application to yield curve modelling are discussed in the paper "Estimating the Yield Curve Using the Nelson-Siegel Model" by Annaert et al. (https://pdfs.semanticscholar.org/70c4/20a1dc336394866dff5f206df6cc8274d991.pdf). Sen & Klüppelberg's paper also discusses how to use the parameters obtained from the NS model to predict returns at a future date.

The NS model proposes the following equation for bond yields at various maturities:

$X_t(\tau) = \beta_{t0} + \beta_{t1}\left(\frac{1 - e^{-\lambda\tau}}{\lambda\tau}\right) + \beta_{t2}\left(\frac{1 - e^{-\lambda\tau}}{\lambda\tau} - e^{-\lambda\tau}\right)$ ...... (#)

where $\tau$ refers to the maturity in months.

The coefficients are computed by least-squares fitting. We now use this equation to model the Indian and US bond yields and report the error.
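Since the $\beta$'s enter the NS equation linearly once $\lambda$ is fixed, the least-squares fit can be organised as a profile search over $\lambda$. A hypothetical Python sketch (the $\lambda$ grid and all names are our own choices, not taken from the report, which used R):

```python
import numpy as np

def ns_basis(tau, lam):
    """Nelson-Siegel regressors at maturities tau, for decay parameter lam."""
    z = lam * tau
    f1 = (1 - np.exp(-z)) / z
    return np.column_stack([np.ones_like(tau), f1, f1 - np.exp(-z)])

def fit_ns(tau, yields, lams=None):
    """Profile least squares: the betas are linear given lam, so scan a lam
    grid and solve a linear least-squares problem for each candidate."""
    if lams is None:
        lams = np.linspace(0.05, 2.0, 40)  # ad-hoc grid, not from the report
    best = None
    for lam in lams:
        B = ns_basis(tau, lam)
        beta, *_ = np.linalg.lstsq(B, yields, rcond=None)
        rss = np.sum((yields - B @ beta) ** 2)
        if best is None or rss < best[0]:
            best = (rss, lam, beta)
    _, lam_hat, beta_hat = best
    return lam_hat, beta_hat
```

On data generated exactly from (#) the true parameters are recovered; on real yields the procedure simply minimises the residual sum of squares over the grid.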

In the DL method, we obtain estimates $\hat\psi_{t0}, \hat\psi_{t1}, \hat\psi_{t2}, \hat\lambda_t$ for $t=1,2,\ldots,85$ and use an AR(1) model to forecast $\hat\psi_{t0}, \hat\psi_{t1}, \hat\psi_{t2}, \hat\lambda_t$ for $t=86,87,\ldots,108$ (May 13 to June 13). We then plug these forecasts into the NS model (#) to forecast bond yields at various maturities at future dates.


We use the tools and techniques discussed in these papers on the 2019 data of the Indian and US yield curves. We use the available data to predict the Indian bond return rates one month ahead (May 13 to June 13) using the NS model coupled with autoregression, and assess the accuracy by comparing with the real data. Further, we use 4 different tests for Granger causality detection, compare the results, and investigate their powers through simulation studies.


Concepts of Functional PCA

The data for US and Indian bond returns have been collected from in.investing.com (https://in.investing.com/rates-bonds/india-30-year-bond-yield-historical-data). The data was downloaded in CSV format and processed using R (https://www.r-project.org), an open-source statistical computing framework.

We use bond yield data from two countries, India and the USA: daily returns obtained from in.investing.com for 85 days, from January 2, 2019 to May 10, 2019 (ignoring weekends and holidays). For both countries, we look at returns from 3-month, 6-month, 1-year, 2-year, 3-year, 5-year, 7-year, 10-year and 30-year bonds. We treat maturity as a continuous variable taking values in [0, 30] years.

So, in the language of functional data, we have two functional datasets $X_t(u)$ and $Y_t(u)$, where $t=1,2,3,\ldots,85$ and $u\in[0,30]$. The first refers to the Indian bond returns and the latter to the US bond returns. For example, $X_{64}(0.3)$ refers to the return on a 0.3-year-maturity Indian bond on the 64th day.

FPCA represents a function in the eigenbasis, where the basis functions form a set of orthonormal functions in the Hilbert space with the $L^2$ norm. We consider a square-integrable, continuous-time stochastic process $X_t$, $t\in\mathbb{R}$.

We expand $Cov(X(u),X(v)) = \sum_{k=1}^\infty \lambda_k \phi_k(u)\phi_k(v)$, where $\phi_k(\cdot)$, $k=1,2,3,\ldots$ are orthonormal eigenfunctions and $\lambda_1 \geq \lambda_2 \geq \cdots$ are the corresponding eigenvalues.

By the Karhunen-Loève theorem, we can write $X(u) = \mu(u) + \sum_{k\geq 1} \psi_k \phi_k(u)$.

Our main goal is to represent the variance using a linear combination of a small number of basis functions (very similar to the idea of usual PCA; here we are just dealing with an infinite-dimensional case). We use the eigenfunctions $\phi_1(\cdot),\ldots,\phi_q(\cdot)$ corresponding to the largest eigenvalues $\lambda_1,\lambda_2,\ldots,\lambda_q$, so that 80-90% of the variance is explained.

We then reconstruct $\widetilde X(u) = \mu(u) + \sum_{k=1}^{q} \psi_k \phi_k(u)$ ...... (3) using only these few basis functions. In our examples on bond yield data, we mostly have $q \leq 8$.
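On a discrete grid, the FPCA above reduces to an eigendecomposition of the sample covariance matrix. The report used R's FPCA routines; the following minimal Python sketch (our own simplification, which ignores smoothing and quadrature weights) illustrates the idea and the reconstruction (3):

```python
import numpy as np

def fpca(curves, var_explained=0.9):
    """curves: (n_days, n_gridpoints) array of discretised functions.
    Returns the mean curve, the top-q eigenfunctions (as columns)
    and the scores psi_{tk}, with q chosen to reach var_explained."""
    mu = curves.mean(axis=0)
    C = np.cov(curves, rowvar=False)          # discretised Cov(X(u), X(v))
    vals, vecs = np.linalg.eigh(C)
    vals, vecs = vals[::-1], vecs[:, ::-1]    # sort eigenvalues descending
    q = int(np.searchsorted(np.cumsum(vals) / vals.sum(), var_explained)) + 1
    phi = vecs[:, :q]                         # orthonormal basis on the grid
    psi = (curves - mu) @ phi                 # scores psi_{tk}
    return mu, phi, psi

# reconstruction as in (3): X_tilde = mu + psi @ phi.T
```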

We now perform FPCA on $X_t(\cdot)$ and $Y_t(\cdot)$. Using R for these computations, the optimal number of components for the Indian data (Fig. 1) is 8, with first three eigenvalues 0.174, 0.111 and 0.008. For the US data, the optimal number of components (Fig. 2) is 7, with first three eigenvalues 0.241, 0.007 and 0.001.

Methods for testing Granger causality

Granger causality detection using FPCA based test

We take the coefficients estimated in the FPCA model (for convenience, we consider the first 4 PCs for both the Indian and US data).

For each $t\in[85]$, we have two 4-dimensional vectors $(\psi^{Ind}_{t1},\psi^{Ind}_{t2},\psi^{Ind}_{t3},\psi^{Ind}_{t4})$ and $(\psi^{US}_{t1},\psi^{US}_{t2},\psi^{US}_{t3},\psi^{US}_{t4})$ of FPCA coefficients, obtained from the Indian and US FPCA respectively. These are multivariate (specifically, 4-variate) time series, and we perform Granger causality tests on them via the usual VAR (vector autoregressive) models.

Now, consider the 8-dimensional vector $(\psi^{Ind}_{t1},\psi^{Ind}_{t2},\psi^{Ind}_{t3},\psi^{Ind}_{t4},\psi^{USA}_{t1},\psi^{USA}_{t2},\psi^{USA}_{t3},\psi^{USA}_{t4})$ and run a VAR model on it to get fitted vectors, say $\hat\psi^{IND}_t$. The chosen model is VAR(1) for the joint dataset involving both the $\psi^{Ind}$'s and the $\psi^{USA}$'s. Under $H_0$, i.e. non-causality (US returns do not Granger cause Indian returns), we run a VAR model using the values of $(\psi^{Ind}_{t1},\psi^{Ind}_{t2},\psi^{Ind}_{t3},\psi^{Ind}_{t4})$ for $t=1,2,\ldots,85$ and calculate the fitted vectors, say $\hat\psi^{IND-NULL}_t$ for $t\in[85]$. Then we use built-in functions in R to test for Granger causality using the $\hat\psi^{IND}_t$ and $\hat\psi^{IND-NULL}_t$ values.
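To illustrate the idea (this is a likelihood-ratio-style sketch of our own, not the exact R routine used in the report), one can fit the restricted VAR(1) on the Indian block alone and the full joint VAR(1) on all 8 coordinates, then compare the residual covariance determinants of the Indian block:

```python
import numpy as np

def var1_fitted(Y):
    """Least-squares VAR(1) fit Y_t = c + A Y_{t-1} + e_t; returns fitted Y_t, t >= 2."""
    X = np.column_stack([np.ones(len(Y) - 1), Y[:-1]])
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
    return X @ B

def granger_lr_stat(joint, block):
    """n * log of the determinant ratio of the first-block residual covariances:
    restricted VAR(1) (first `block` coordinates only) vs full joint VAR(1).
    Asymptotically chi^2 with block * (dim - block) d.f. under non-causality."""
    n = len(joint) - 1
    res_full = joint[1:, :block] - var1_fitted(joint)[:, :block]
    res_restr = joint[1:, :block] - var1_fitted(joint[:, :block])
    det_full = np.linalg.det(res_full.T @ res_full / n)
    det_restr = np.linalg.det(res_restr.T @ res_restr / n)
    return n * np.log(det_restr / det_full)
```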

Permutation test for detecting Granger Causality

As already explained above, under $H_0$ we run a VAR model using the values of $(\psi^{Ind}_{t1},\psi^{Ind}_{t2},\psi^{Ind}_{t3},\psi^{Ind}_{t4})$ for $t=1,2,\ldots,85$ and calculate the fitted vectors $\hat\psi^{IND-NULL}_t$ for $t\in[85]$. (This model turned out to be VAR(1) by AIC minimisation.) From these, we calculate the fitted values $\hat X_t(\cdot)$ via equation (3).

Again, consider the 8-dimensional vector $(\psi^{Ind}_{t1},\ldots,\psi^{Ind}_{t4},\psi^{USA}_{t1},\ldots,\psi^{USA}_{t4})$ and run a VAR model on it to get fitted vectors $\hat\psi^{IND}_t$. Using these estimates of $\psi_t$, we calculate the fitted values $\hat{\hat X}_t(\cdot)$ via equation (3).

Our idea is to calculate two integrals, which give us measures of error, viz.

$P_t = \int_{0.25}^{30} (X_t(u) - \hat X_t(u))^2\, du$ and $Q_t = \int_{0.25}^{30} (X_t(u) - \hat{\hat X}_t(u))^2\, du$ for $t=2,3,\ldots$. The first measures the error under the null hypothesis of non-causality, and the second does the job under the general alternative. We ignore $t=1$ because the fitted value does not make sense for the starting point of a VAR model.

R returns the values of the basis functions only on a workgrid (in this case, a discrete set of 51 points). So the values of $\hat X_t(\cdot)$ and $\hat{\hat X}_t(\cdot)$ can be obtained from (3) only on the workgrid, but we need values at other points too in order to compute the integrals. To approximate $\hat X_t(\cdot)$ and $\hat{\hat X}_t(\cdot)$ at every point of $[0.25, 30]$ by a piecewise-smooth function, we use cubic spline interpolation. The values of $X_t(\cdot)$ itself are likewise available only at a few maturities (0.25, 0.5, 1, 2, 3, 5, 7, 10, 30), so we use cubic splines to approximate $X_t(\cdot)$ at the other points as well.
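A sketch of this computation in Python (we use SciPy's cubic splines and a simple trapezoidal rule on a fine grid; the grid size and function names are our own, and the report's R code may differ in details):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# maturities (in years) at which the yields are observed
MATURITIES = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0, 30.0])

def squared_error_integral(obs, fit, a=0.25, b=30.0, n_grid=2000):
    """P_t / Q_t style error: integral over [a, b] of (X_t(u) - Xhat_t(u))^2,
    with each curve extended off its observation grid by a cubic spline and
    the integral approximated by the trapezoidal rule on a fine grid."""
    u = np.linspace(a, b, n_grid)
    diff = CubicSpline(MATURITIES, obs)(u) - CubicSpline(MATURITIES, fit)(u)
    sq = diff ** 2
    du = u[1] - u[0]
    return float(np.sum((sq[:-1] + sq[1:]) / 2) * du)
```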

Now, consider the paired data $(P_2,Q_2),(P_3,Q_3),\ldots,(P_{85},Q_{85})$. We run a paired permutation test with a one-sided alternative to test whether the means of the $P_t$'s and the $Q_t$'s are the same.
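A paired permutation test of this kind can be sketched by randomly flipping the signs of the differences $D_t = P_t - Q_t$ (the number of permutations and the names below are our own choices, not from the report):

```python
import numpy as np

def paired_permutation_test(P, Q, n_perm=5000, seed=0):
    """One-sided paired permutation test of H0: mean(P - Q) = 0 against
    H1: mean(P - Q) > 0, by randomly flipping the sign of each pair."""
    d = np.asarray(P) - np.asarray(Q)
    rng = np.random.default_rng(seed)
    obs = d.mean()
    signs = rng.choice([-1.0, 1.0], size=(n_perm, d.size))
    perm_means = (signs * d).mean(axis=1)
    return float(np.mean(perm_means >= obs))   # permutation p-value
```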

Spatial Sign Test

Definition: For a non-zero vector $x\in\mathbb{R}^p$, the spatial sign is $U(x) := \frac{x}{\|x\|}$.

Observe that this definition is analogous to the usual sign (positive/negative) of a real number, i.e. the case $p=1$.

Given the workgrid $(u_1,u_2,\ldots,u_{51})$, consider the error vectors from the null model,

$e_{1t} = (X_t(u_1)-\hat X_t(u_1),\, X_t(u_2)-\hat X_t(u_2),\, \ldots,\, X_t(u_{51})-\hat X_t(u_{51}))$ for each $t\in[85]$, $t>1$. Further, consider the error vectors from the general model, $e_{2t} = (X_t(u_1)-\hat{\hat X}_t(u_1),\, X_t(u_2)-\hat{\hat X}_t(u_2),\, \ldots,\, X_t(u_{51})-\hat{\hat X}_t(u_{51}))$ for each $t\in[85]$, $t>1$.

Consider the vectors $(e_{1t}-e_{2t})^T$ as the data (observations) for $t\in[85]$, $t>1$.

Consider the setup described in "Multivariate nonparametric methods based on spatial signs and ranks: The R package SpatialNP" (2007) by Sirkiä, Taskinen, Nevalainen and Oja.

We perform the spatial sign test of

$H_0: \tilde\mu = \tilde 0$ vs. $H_1: H_0^c$.

(Here $\tilde\mu$ refers to the multivariate mean of the data $(e_{1t}-e_{2t})^T$, $t\in[85]$, $t>1$.)

For spatial tests, given any score function $T(\cdot)$, under the null hypothesis,

$n\,\big\|\hat B^{-1/2}\,\mathrm{ave}[T(z_t)]\big\|^2 \xrightarrow[H_0]{d} \chi^2_p$ ...... (4)

where $z_t = (e_{1t}-e_{2t})^T$ and $\hat B = \mathrm{ave}[T(z_t)T(z_t)']$. Here $p=51$ and $n=84$. For this particular test, the score function used is the spatial sign (later we use the signed rank), i.e. $T(\cdot)=U(\cdot)$.
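A minimal sketch of statistic (4) with the spatial-sign score, using the identity $\|\hat B^{-1/2} a\|^2 = a'\hat B^{-1}a$ (our own simplified version, not the SpatialNP implementation; in particular, we skip any inner standardisation of the data):

```python
import numpy as np

def spatial_sign(X):
    """Row-wise spatial signs U(x) = x / ||x||."""
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def spatial_sign_stat(Z):
    """Statistic (4) with T = U:  n * ave[U]' Bhat^{-1} ave[U],
    asymptotically chi^2_p under H0: mu = 0."""
    n, p = Z.shape
    U = spatial_sign(Z)
    ubar = U.mean(axis=0)
    B = (U.T @ U) / n                 # Bhat = ave[U(z_t) U(z_t)']
    return n * float(ubar @ np.linalg.solve(B, ubar))
```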

Spatial Signed Rank Test

We use the same test statistic and null asymptotic distribution as in the last section, now with the score function

$T(z_i) = Q(z_i, Z)$, where $Q(z_i,Z) = \mathrm{ave}_j[U(z_i-z_j) + U(z_i+z_j)]$ and $Z$ refers to the full data $z_t=(e_{1t}-e_{2t})^T$ for $t\in[85]$, $t>1$.

Here $\mathrm{ave}_j$ refers to the average over all possible values of $j$. Details are available in "Multivariate nonparametric methods based on spatial signs and ranks: The R package SpatialNP" (2007) by Sirkiä, Taskinen, Nevalainen and Oja.
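The signed-rank score can be sketched directly from its definition (again an illustrative version of our own, with $U(0)$ set to $0$ since the spatial sign is defined only for non-zero vectors; SpatialNP's implementation differs in details):

```python
import numpy as np

def signed_rank_scores(Z):
    """Spatial signed-rank scores Q(z_i, Z) = ave_j [U(z_i - z_j) + U(z_i + z_j)]."""
    def U(x):
        nrm = np.linalg.norm(x, axis=-1, keepdims=True)
        # convention: U(0) = 0, to handle the j = i term in z_i - z_j
        return np.divide(x, nrm, out=np.zeros_like(x), where=nrm > 0)
    diffs = U(Z[:, None, :] - Z[None, :, :])   # U(z_i - z_j) for all pairs
    sums = U(Z[:, None, :] + Z[None, :, :])    # U(z_i + z_j) for all pairs
    return (diffs + sums).mean(axis=1)
```

These scores are then plugged into statistic (4) in place of $U(z_t)$.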

Non-Parametric Graph based test

Consider the setup described in Section 3 of http://www.stat.cmu.edu/~ryantibs/statml/lectures/TwoSample.pdf.

Here $n=m=84$. Take the error vectors $e_{1t}$, $t=2,\ldots,85$ as $Z_1,Z_2,\ldots,Z_{84}$ and $e_{2t}$, $t=2,\ldots,85$ as $Z_{85},\ldots,Z_{168}$. So we have two groups of error vectors.

Under non-causality, both the groups should represent IID realisations from the same distribution.

For each $j$ and $r=1,\ldots,k$, define $B_j(r)=1$ if the $r$-th nearest neighbour of $Z_j$ has the same label as $Z_j$; otherwise let $B_j(r)=0$.

Define $T_n = \frac{1}{84k}\sum_{j=1}^{n}\sum_{r=1}^{k} B_j(r)$ and, under $H_0$, $\mu = \pi^2 + (1-\pi)^2 = 0.5$.

Then, under the null, $\frac{\sqrt{nk}\,(T_n-\mu)}{\sigma} \rightarrow N(0,1)$. This test is computationally very intensive for a simulation-based study because $\sigma$ has a very messy theoretical expression, and in practice a bootstrap-based approach is needed to estimate $\sigma$.
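The concordance statistic itself is straightforward to compute; only the calibration via $\sigma$ is expensive. An illustrative sketch of the count of label-matching nearest neighbours (our own names; ties broken by index order):

```python
import numpy as np

def knn_concordance(Z, labels, k=3):
    """Fraction, over all points and their k nearest neighbours, of
    neighbours sharing the point's group label; near 0.5 under H0
    for balanced groups, near 1 for well-separated groups."""
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)              # a point is not its own neighbour
    nn = np.argsort(D, axis=1)[:, :k]        # indices of the k nearest neighbours
    same = labels[nn] == labels[:, None]
    return float(same.mean())
```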

Simulation Studies to compare the power of tests

We use simulated data to compare the powers of the four tests mentioned above.

For the analysis, keep the US data $Y_t(\cdot)$ fixed and consider the 8-dimensional vectors

$V_t = (\psi^{Ind}_{t1},\psi^{Ind}_{t2},\psi^{Ind}_{t3},\psi^{Ind}_{t4},\psi^{USA}_{t1},\psi^{USA}_{t2},\psi^{USA}_{t3},\psi^{USA}_{t4})^T$, and run a VAR model on them.

It turns out to be a VAR(1) model of the form $V_t = M V_{t-1} + \zeta_t$ ...... (**), where $\zeta_t$ is the noise. Clearly, in this case the US bond yields "affect" the Indian bond yields, as $\psi^{IND}_t$ depends on $\psi^{USA}_1, \psi^{USA}_2, \ldots, \psi^{USA}_{t-1}$.

We simulate $\psi^{IND}_{ts}$ from a VAR(1) model with the obtained matrix $M$ as the premultiplication matrix and noise drawn from a $MVN(\tilde 0, \hat\Sigma_{res})$ distribution, where $\hat\Sigma_{res}$ is the variance-covariance matrix of the residuals obtained in model (**).

Then we use the reconstruction $\widetilde X^s_t(u) = \mu_X(u) + \sum_{k=1}^{q} \psi^{IND}_{ks}\phi_k(u)$ and check in how many cases the tests detect that $Y_t(\cdot)$ Granger causes $\widetilde X^s_t(\cdot)$, i.e. reject the null hypothesis. In this way, we can empirically estimate the powers. Due to limited computational power, we stick to 50-500 simulations in each case.
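The outer Monte-Carlo loop for estimating power is generic and can be sketched independently of the particular test (all names below are our own; the p-value function and the simulator are placeholders for the tests and the VAR(1) simulator described above):

```python
import numpy as np

def empirical_power(pvalue_fn, simulate_fn, n_sims=200, alpha=0.05, seed=0):
    """Monte-Carlo estimate of power: the fraction of simulated datasets
    on which the test rejects at level alpha."""
    rng = np.random.default_rng(seed)
    rejections = sum(pvalue_fn(simulate_fn(rng)) < alpha for _ in range(n_sims))
    return rejections / n_sims
```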

Another idea would be to simulate using variants of the Nelson-Siegel function, but for large $\tau$ the decaying second and third terms of the NS model mean the simulation could not properly reproduce the variation at long maturities. So this was of no practical use.


FPCA based test

The plots of the functional principal components for India and the USA are given below:

[Figure: Indian FPC, top 4 $\phi$'s]
[Figure: USA FPC's]

The fitted variance-covariance matrices for both countries are shown as 3D plots below.

[Figure: Indian data fitted variance-covariance matrix]
[Figure: USA data fitted variance-covariance matrix]

Using the FPCA-based test, we test whether US returns Granger cause Indian returns. The p-value is 0.0005043, indicating rejection of the null hypothesis. Testing whether Indian returns Granger cause US returns, we get a p-value of 0.01131, i.e. we again reject $H_0$.

Using the spatial sign test, the p-value comes out to be around 1, and using the signed-rank test, the p-value comes out to be around $1.5\times10^{-31}$. So, all of these tests imply rejection of the null hypothesis (non-causality) when testing whether US returns Granger cause Indian returns.

Spatial Test & Permutation Test

In the simulations, the power of the spatial tests comes out to be around 1 using 100-200 simulations, while that of the FPCA-based test comes to around 0.40-0.48 using 50-500 simulations. The permutation test yields a very low power (around 0.2) and is not efficient.

The large power of the spatial tests is attributable to their high-dimensional nature. Intuitively, in higher dimensions it is much easier to detect whether a cloud of points is centred around the origin than in lower dimensions (which is roughly what the permutation test does here, by comparing the difference of integrals).

The FPCA-based test has a decent power, but falls well short of the spatial tests due to the curse of dimensionality: we considered only the top 4 functional principal components from each country (while FPCA returns 7 and 8 components to explain most of the variance).

Empirical powers

Test                     | # of simulations | Empirical power
FPCA-based test          | 100              | 0.37
FPCA-based test          | 300              | 0.393
FPCA-based test          | 500              | 0.408
FPCA-based test          | 600              | 0.373
Spatial sign test        | 100, 200, 300    | ~1 in all cases
Spatial signed-rank test | 100, 200, 300    | ~1 in all cases

NS Model fitting

We estimate the parameters of the NS model and compute the fitted bond returns for the Indian data from January 2 to May 10. The average relative error in fitting each $X_t(\tau)$, $t\in[85]$, $\tau = 0.25, 0.5, \ldots, 30$, is just about 0.02%. This shows that the NS curve models the bond yields almost perfectly, as it is custom-designed for yield curves.

In fact, we find that the values fitted by the NS model explain most of the total variance observed in the Indian data $X_t(\tau)$, $t\in[85]$, $\tau = 0.25, 0.5, \ldots, 30$.

Here we show the plots of the actual bond yields and the bond yields predicted by the NS equation, for different maturities, on 3 randomly selected days.

[Figure: Indian bond yields, actual (red) and predicted by NS curve fitting (black), for 3 randomly chosen days (the 47th, 65th and 80th) at different maturities]

DL method of Forecasting

As mentioned in the methods, we use AR(1) models to forecast the values of the NS coefficients and the parameter $\lambda$ at future dates (May 13 to June 13), and use those to forecast Indian bond yields on those dates. The average relative error in the prediction of the future Indian bond yields $X_t(\tau)$, $t=86,87,\ldots,108$, $\tau=0.25,0.5,\ldots,30$, turns out to be just 3.7%, showing that the DL method produces good results on the Indian data.

[Figure: Actual (blue) and DL-forecasted (black) interest rates at different maturities on 3 randomly selected days in the period May 13 to June 13, Indian market]

Modified DL Forecasting technique

As mentioned earlier, in DL forecasting we use AR(1) models to forecast the values of the NS coefficients and the parameter $\lambda$ at future dates (May 13 to June 13) and use those to forecast Indian bond yields on those dates.

Now, instead of AR models, we use ARIMA models of appropriate order to forecast the parameters of the NS model at future dates. The order of each ARIMA model is chosen by AIC (Akaike Information Criterion) minimisation.

Here we report the orders of the fitted ARIMA models for the 4 parameters:

Order of the fitted ARIMA model for the NS parameters

Parameter     | Order of ARIMA model
$\psi_{t1}$   | ARIMA(0,1,0)
$\psi_{t2}$   | ARIMA(0,1,1)
$\psi_{t3}$   | ARIMA(0,1,0)
$\lambda$     | ARIMA(2,1,1)

As expected, this method generates only 2.75% average relative error in forecasting the bond yields from May 13 to June 13, which is less than that of the method above. This shows that using finer models to forecast the parameters of the NS curve leads to better overall forecasts.


• The spatial tests have the highest empirical power among those discussed.
• The permutation test is NOT efficient in this context.
• Indian returns Granger cause US bond returns and vice versa.
• NS curves fit the Indian bond yield curve very accurately and explain most of the variance observed in the values.
• The DL forecasting method works nicely for short-term prediction of bond yields in the Indian market. It can be improved by using ARIMA models (of appropriate order) instead of a plain AR(1) model for forecasting the coefficients.


✪ The use of infinite-dimensional spatial tests in this setup is possible. Such tests have been discussed in the paper "Tests for high-dimensional data based on means, spatial signs and spatial ranks" by Chakraborty and Chaudhuri (https://projecteuclid.org/euclid.aos/1494921957).

✪ In the future, we can take up an extensive study involving many more countries to explore the Granger causality network and its economic and statistical implications.

✪ Use the Nelson-Siegel-Svensson curve instead of the NS model.


✪ All of the FPCA techniques can be applied to audio waves, using the Fourier basis $\sin(kx), \cos(kx)$, $k\in\mathbb{N}_0$. This can give a lower-dimensional representation of sound waves, i.e. audio compression. Further, the obtained coefficients can be used for speech recognition (using nearest neighbours or more advanced ML techniques) or for understanding how a person's voice changes with age (a functional dataset).

✪ Further, these techniques can be used to study the growth of tumours from MRI data, or changes in brain waves in brain-computer interfaces.


1. Time series of functional data with application to yield curves - Sen & Klüppelberg (2019)

2. High-Dimensional Two-Sample Testing - CMU Stat ML lectures, http://www.stat.cmu.edu/~ryantibs/statml/lectures/TwoSample.pdf

3. Tests for high-dimensional data based on means, spatial signs and spatial ranks - Chakraborty & Chaudhuri, https://projecteuclid.org/euclid.aos/1494921957

4. Determining the order of the functional autoregressive model - Kokoszka & Reimherr

5. R programming language for statistical computations - https://www.r-project.org

6. Functional Autoregression for Sparsely Sampled Data - Daniel R. Kowal, David S. Matteson & David Ruppert (2016)

7. Likelihood ratio tests for covariance matrices of high-dimensional normal distributions - Dandan Jiang, Tiefeng Jiang & Fan Yang (2012)

8. Measurement of Linear Dependence and Feedback Between Multiple Time Series - John Geweke

9. Granger Causality Testing in High-Dimensional VARs: a Post-Double-Selection Procedure - Hecq, A., Margaritella, L. & Smeekes, S.

10. Wikipedia - Granger causality, https://en.wikipedia.org/wiki/Granger_causality

11. Wikipedia - Functional principal component analysis, https://en.wikipedia.org/wiki/Functional_principal_component_analysis

12. Multivariate nonparametric methods based on spatial signs and ranks: The R package SpatialNP (2007) - Sirkiä, Taskinen, Nevalainen & Oja


I am eternally grateful to the Indian Academy of Sciences and the National Academy of Sciences for providing me the summer research fellowship to carry out this research.

I am also grateful to Dr. Rituparna Sen, my guide, for guiding me through this exciting project in mathematical finance. Her guidance on the mathematical as well as computational aspects of this project has been immensely valuable.

Further, I would like to thank ISI Chennai, the PhD students at the institute, and my fellow interns for providing a helpful and comfortable environment for carrying out summer research. I would also like to thank Authorcafe for providing such a nice platform for writing research reports.
